<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[VideoSDK - Updates about video APIs]]></title><description><![CDATA[Get all the updates about SDKs and APIs, the latest releases, news, and more.]]></description><link>https://www.videosdk.live/</link><image><url>https://www.videosdk.live/favicon.png</url><title>VideoSDK - Updates about video APIs</title><link>https://www.videosdk.live/</link></image><generator>VideoSDK Blog</generator><lastBuildDate>Sun, 19 Apr 2026 05:06:23 GMT</lastBuildDate><atom:link href="https://www.videosdk.live/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Introducing Prism - VideoSDK Agents V1.0.0]]></title><description><![CDATA[Learn more about Prism - VideoSDK Agents V1.0.0, a complete rethink of the VideoSDK AI voice pipeline, built for full control, flexibility, and production-scale reliability.]]></description><link>https://www.videosdk.live/blog/introducing-videosdk-ai-voice-agents-v1</link><guid isPermaLink="false">69d3404055831517a5a8a61a</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 07 Apr 2026 13:07:03 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/04/Agent-V1---6.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/04/Agent-V1---6.png" alt="Introducing Prism - VideoSDK Agents V1.0.0"/><p><em>A complete rearchitecture of the VideoSDK AI voice pipeline.</em></p><p>We've been building AI voice agents for a while now. And the more we built, the more we ran into the same wall: the pipeline was in the way.</p><p>You couldn't swap a voice. You couldn't intercept what the LLM sees. 
You couldn't mix a custom STT with a realtime model. And when something broke in production, there was nothing to look at - no traces, no metrics, no logs.</p><p>So we rebuilt everything.</p><p>Today we're releasing <strong>Prism: Agents V1.0.0</strong>, a stable, production-ready rearchitecture of the VideoSDK Agents framework. It is not backward compatible with v0.x.</p><blockquote><a href="https://github.com/videosdk-live/videosdk-agents/releases/tag/v1.0.0">Full release notes and migration guide →</a></blockquote><h2 id="what-was-broken-in-v0x">What was broken in v0.x</h2><p>The old framework had two pipeline classes: <code>CascadingPipeline</code> for STT → LLM → TTS chains, and <code>RealtimePipeline</code> for speech-to-speech models. Every new capability we wanted to add was fighting the architecture.</p><p>Hybrid mode, running a custom STT into a realtime LLM, was impossible. The two pipelines had no way to talk to each other. If you didn't like the voice of your realtime model, you were stuck with it. If you wanted to clean or normalize a transcript before inference, there was no hook to do it. And observability was completely absent.</p><p>Adding features meant breaking things. So instead of patching it, we redesigned from scratch.</p><h2 id="agent-v1-architecture">Agent V1 architecture</h2><p>The <a href="https://docs.videosdk.live/ai_agents/core-components/overview" rel="noreferrer">Agent Session</a> orchestrates the entire workflow, combining the Agent with a Pipeline for real-time communication. 
The unified Pipeline automatically detects the best mode based on the components you provide, whether that's a full cascade STT-LLM-TTS setup, a realtime speech-to-speech model, or a hybrid of both.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/04/agent_v1_architecture.jpg" class="kg-image" alt="Introducing Prism - VideoSDK Agents V1.0.0" loading="lazy" width="2208" height="1234"/></figure><ol><li><strong>Agent</strong>&nbsp;- This is the base class for defining your agent's identity and behavior. Here, you can configure custom instructions, manage its state, and register function tools.</li><li><a href="https://docs.videosdk.live/ai_agents/core-components/pipeline" rel="noreferrer"><strong>Pipeline</strong>&nbsp;</a>- This unified component manages the real-time flow of audio and data between the user and the AI models. It auto-detects the optimal mode based on the components you provide:<ul><li><strong>Cascade Mode</strong>&nbsp;- Provide STT, LLM, TTS, VAD, and Turn Detector for maximum flexibility and control over each processing stage.</li><li><strong>Realtime Mode</strong>&nbsp;- Provide a realtime model (e.g., OpenAI Realtime, Google Gemini Live, AWS Nova Sonic) for lowest-latency speech-to-speech processing.</li><li><strong>Hybrid Mode</strong>&nbsp;- Combine a realtime model with an external STT (for knowledge base support) or external TTS (for custom voice support).</li></ul></li><li><a href="https://docs.videosdk.live/ai_agents/core-components/agent-session" rel="noreferrer"><strong>Agent Session</strong></a>&nbsp;- This component brings together the agent and pipeline to manage the agent's lifecycle within a VideoSDK meeting.</li><li><a href="https://docs.videosdk.live/ai_agents/core-components/pipeline-hooks" rel="noreferrer"><strong>Pipeline Hooks</strong></a>&nbsp;- A middleware system for intercepting and processing data at any stage of the pipeline. 
Use hooks for custom STT/TTS processing, observing or modifying LLM output, lifecycle events, and more.</li></ol><h2 id="one-pipeline-class">One Pipeline Class</h2><p>The core change in V1 is the replacement of <code>CascadingPipeline</code> and <code>RealtimePipeline</code> with a single <code>Pipeline</code> class.</p><p><strong>Before</strong></p><pre><code class="language-python">from videosdk.agents import CascadingPipeline, RealtimePipeline

pipeline = CascadingPipeline(stt=..., llm=..., tts=..., vad=..., turn_detector=...)
pipeline = RealtimePipeline(llm=OpenAIRealtime(...))
</code></pre><p><strong>After</strong></p><pre><code class="language-python">from videosdk.agents import Pipeline

pipeline = Pipeline(stt=..., llm=..., tts=..., vad=..., turn_detector=...)
pipeline = Pipeline(llm=OpenAIRealtime(...))</code></pre><p>Pass any combination of components. The <code>PipelineOrchestrator</code> analyzes what you've given it and automatically selects the correct execution mode: cascade, realtime, or hybrid. You never configure it directly. You just pass components.</p><h2 id="three-modes-one-interface">Three Modes, One Interface</h2><p><strong>Cascade</strong> Mode - full control over every stage.</p><pre><code class="language-python">pipeline = Pipeline(
    stt=DeepgramSTT(),
    llm=GoogleLLM(),
    tts=CartesiaTTS(),
    vad=SileroVAD(),
    turn_detector=TurnDetector(),
)
</code></pre><p><strong>Realtime</strong> Mode - lowest latency, single model for the full voice pipeline.</p><pre><code class="language-python">pipeline = Pipeline(
    llm=GeminiRealtime(
        model="gemini-3.1-flash-live-preview",
        config=GeminiLiveConfig(voice="Leda", response_modalities=["AUDIO"]),
    )
)
</code></pre><p>Supported realtime models: <code>OpenAIRealtime</code>, <code>GeminiRealtime</code>, <code>AWSNovaSonic</code>, <code>AzureVoiceLive</code></p><p><strong>Hybrid</strong> Mode - this is what was impossible before.</p><pre><code class="language-python"># Bring your own STT, use a realtime LLM
pipeline = Pipeline(stt=DeepgramSTT(), llm=OpenAIRealtime(...))

# Use a realtime LLM, bring your own voice
pipeline = Pipeline(llm=OpenAIRealtime(...), tts=ElevenLabsTTS(...))
</code></pre><p>You are no longer bound by what the model provider gives you. Don't like the default voice? Swap it. Want your own transcription layer feeding a realtime model? Done.</p><h2 id="flexible-agent-composition">Flexible Agent Composition</h2><p>V1 also unlocks the ability to run partial pipelines for specific use cases.</p><pre><code class="language-python">Pipeline(stt=...)                     # Transcription agent
Pipeline(llm=...)                     # Text chatbot
Pipeline(stt=..., llm=..., tts=...)   # Voice + chat
Pipeline(stt=..., llm=..., tts=..., vad=..., turn_detector=...)      # Full voice agent
Pipeline(llm=OpenAIRealtime(...))     # Realtime voice agent
</code></pre><p>Same class, same structure, different capabilities depending on what you pass in.</p><h2 id="pipeline-hooks">Pipeline Hooks</h2><p><code>ConversationalFlow</code> is removed. In its place is <code>@pipeline.on(...)</code>: a decorator-based hooks system that lets you intercept and transform data at any stage, without subclassing anything.</p><pre><code class="language-python">@pipeline.on("stt")
async def on_transcript(text: str) -&gt; str:
    return text.strip()                        # normalize before LLM

@pipeline.on("tts")
async def on_tts(text: str) -&gt; str:
    return text.replace("SDK", "S D K")       # fix pronunciation

@pipeline.on("llm")
async def on_llm(messages):
    yield "Transferring you now."              # bypass LLM entirely
</code></pre><p>Hooks are available at every stage: <code>stt</code>, <code>tts</code>, <code>llm</code>, <code>vision_frame</code>, <code>user_turn_start</code>, <code>user_turn_end</code>, <code>agent_turn_start</code>, <code>agent_turn_end</code>. You can intercept raw audio streams at the <code>stt</code> and <code>tts</code> hooks — this is audio-level control, not just text.</p><blockquote><a href="https://github.com/videosdk-live/videosdk-agents/blob/main/examples/voice_pipeline_hooks.py">Hooks walkthrough →</a></blockquote><h2 id="observability-built-in">Observability, Built In</h2><p>Every V1 pipeline ships with per-component metrics, structured logging, and OpenTelemetry tracing across cascade, realtime, and hybrid modes. No extra setup. Configure custom endpoints via <code>RoomOptions</code>.</p><p>This was the biggest missing piece in v0.x. When something breaks in production, you now have something to look at.</p><h2 id="docs-mcp-server">Docs MCP Server</h2><p>Query VideoSDK documentation directly from your AI agent. The MCP server gives instant access to SDK references, implementation guides, and even source-level details inside your workflow. No manual searching, no context switching. Built for MCP-compatible agents like Claude, Cursor, or your own.</p><pre><code class="language-json">{
    "mcpServers": {
        "videosdkAgentDocs": {
            "serverUrl": "https://mcp.videosdk.live/mcp"
        }
    }
}</code></pre><blockquote>Explore the <a href="https://docs.videosdk.live/ai_agents/docs-mcp-server" rel="noreferrer">mcp-server docs →</a></blockquote><h2 id="agent-skills">Agent Skills</h2><p>Extend your agent with reusable capabilities. Define tools, actions, and behaviors once, and plug them into any pipeline. From API calls to complex workflows, Agent Skills let your agents do more than just respond - they can act, integrate, and automate.</p><blockquote>Explore the <a href="https://github.com/videosdk-live/agents/blob/main/examples/SKILLS.md" rel="noreferrer">Agent Skills examples →</a></blockquote><h2 id="latency">Latency</h2><p>The pipeline stages in V1 run concurrently. STT, LLM, and TTS never block each other. TTS audio streams to the room as soon as the first chunk is generated, not after full synthesis. In realtime mode, raw audio is routed directly to the model with zero transcription overhead.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/04/01_e2e_latency--1-.png" class="kg-image" alt="Introducing Prism - VideoSDK Agents V1.0.0" loading="lazy" width="3138" height="1590"/></figure><p>Interruptions are detected at the earliest possible point in the pipeline. In-flight LLM and TTS generation is cancelled immediately on user speech. Avatar audio flushes cleanly on interrupt with no residual artifacts.</p><p>We benchmarked V1 against the fastest pipelines in the industry. The numbers are in the release notes.</p><h2 id="22-production-ready-templates">22 Production-Ready Templates</h2><p>We've shipped 22 production-ready agent templates covering the most common real-world use cases. Your logic. Your pipeline. 
These are the starting points.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/04/Screenshot-2026-04-08-at-4.29.50-PM.png" class="kg-image" alt="Introducing Prism - VideoSDK Agents V1.0.0" loading="lazy" width="3400" height="1682"/></figure><blockquote><a href="https://github.com/videosdk-live/videosdk-agents/tree/main/use_case_examples">Browse all examples →</a></blockquote><h2 id="breaking-changes">Breaking Changes</h2>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>v0.x</th>
<th>V1.0.0</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>CascadingPipeline</code></td>
<td><code>Pipeline</code></td>
</tr>
<tr>
<td><code>RealtimePipeline</code></td>
<td><code>Pipeline</code></td>
</tr>
<tr>
<td><code>ConversationalFlow</code></td>
<td><code>@pipeline.on(...)</code> hooks</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
<p>Function tools, agent lifecycle, <code>AgentSession</code>, <code>WorkerJob</code>, fallback providers, MCP tools, knowledge base, VAD, and turn detection all continue to work as before.</p><blockquote><a href="https://github.com/videosdk-live/videosdk-agents/releases/tag/v1.0.0#migration-guide">Full migration guide →</a></blockquote><h2 id="resources">Resources</h2><ul><li>Learn how to <a href="https://docs.videosdk.live/ai_agents/deployments/agent-cloud/cli/deploy" rel="noreferrer">deploy your agents</a>.</li><li>Follow the docs to start building your <a href="https://docs.videosdk.live/ai_agents/introduction" rel="noreferrer">AI Voice Agents</a> today.</li><li>Contact our <a href="https://www.videosdk.live/contact" rel="noreferrer">sales team</a> to explore solutions tailored to your needs.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6">Discord community ↗</a>. We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[Product Updates - March 2026: Agents SDK v1.0.0, Unified Pipeline & Agent Participants Across All SDKs]]></title><description><![CDATA[One pipeline. Three modes. Five SDKs updated. March brings the stable release of Agents SDK v1.0.0 - a ground-up rebuild with a unified Pipeline class, real-time agent state tracking, and native Agent Participant support across JS, React, React Native, Flutter, and iOS.]]></description><link>https://www.videosdk.live/blog/product-updates-march-2026</link><guid isPermaLink="false">69ce013255831517a5a8a5cb</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 03 Apr 2026 06:45:47 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/04/March-2026.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <style>
    :root {
      --primary: #A497D9;
      --bg: #050608;
      --card-bg: #111217;
      --text-main: #E0E0E0;
      --text-muted: #A0A0A0;
      --border: #22242C;
    }
    body {
      font-family: 'Inter', -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif;
      line-height: 1.7;
      color: var(--text-main);
      background-color: var(--bg);
      margin: 0;
      padding: 0;
    }
    .container { max-width: 850px; margin: 0 auto; padding: 60px 20px; }
    header { text-align: left; margin-bottom: 60px; }
    h2 { font-size: 28px; color: #FFFFFF; margin-top: 50px; padding-bottom: 10px; border-bottom: 1px solid var(--border); }
    h3 { font-size: 22px; color: #FFFFFF; margin-top: 35px; }
    p { margin-bottom: 20px; }
    strong { color: #FFFFFF; }
    ul, ol { margin-bottom: 20px; padding-left: 20px; }
    li { margin-bottom: 10px; }
    .highlight { color: var(--primary); font-weight: 600; }
    a { color: var(--primary); text-decoration: none; }
    a:hover { text-decoration: underline; }
    .ic { font-family: 'Fira Code','Cascadia Code','Consolas',monospace; font-size: 13.5px; color: #A497D9; font-weight: 700; }
    .card { background-color: var(--card-bg); border-radius: 12px; padding: 30px; margin: 30px 0; }
    .img-container { margin: 40px 0; text-align: center; }
    .img-container img { width: 100%; height: auto; border-radius: 8px; }
    .img-caption { font-size: 13px; color: var(--text-muted); margin-top: 10px; display: block; }
    .cta-button { background-color: var(--primary); color: #000000; padding: 6px 20px; border-radius: 6px; font-size: 15px; font-weight: bold; display: inline-block; text-decoration: none; }
    .cta-box { text-align: left; background: linear-gradient(135deg, #111217 0%, #1a1b23 100%); padding: 40px; border-radius: 12px; border: 1px solid var(--primary); margin: 60px 0; }
    @media (max-width: 600px) { .container { padding: 40px 15px; } }
  </style>
</head>
<body>
  <div class="container">
    <header>
      <img src="https://assets.videosdk.live/static-assets/ghost/2026/04/March-2026.jpg" alt="Product Updates - March 2026 : Agents SDK v1.0.0, Unified Pipeline & Agent Participants Across All SDKs"/><p style="font-size:18px;color:#A0A0A0;">A complete recap of everything we shipped in March 2026: Agents SDK v1.0.0, unified pipeline, and Agent Participant support across all RTC SDKs.</p>
    </header>
    <p>Welcome to the March edition of the VideoSDK Monthly Updates! This month marks one of the biggest milestones for our AI stack — the official launch of <span class="highlight">Agents v1.0.0</span>.</p>
    <p>We have reimagined how developers build AI agents with a unified pipeline architecture, introduced hooks for deep customization, expanded our plugin ecosystem, and rolled out agent support across every SDK. This is a huge one, so let's get started!</p>

    <h2>Agents v1.0.0: A New Foundation for AI Voice Agents</h2>
    <p>This release is more than just an upgrade. It is a complete architectural shift. With Agents v1.0.0, we have unified multiple execution models into a single, flexible system that adapts to your use case automatically.</p>
    <div class="img-container">
      <img src="https://assets.videosdk.live/images/image%20%2860%29.png" alt="Product Updates - March 2026 : Agents SDK v1.0.0, Unified Pipeline & Agent Participants Across All SDKs"/>
    </div>
    <p><a href="https://github.com/videosdk-live/agents/releases/tag/v1.0.0" target="_blank">Read full release notes on GitHub</a></p>

    <h3>Unified Pipeline Architecture</h3>
    <p><span class="ic">CascadingPipeline</span> and <span class="ic">RealtimePipeline</span> have been replaced with a single <strong>Pipeline</strong> class. Configure one pipeline and the SDK automatically determines the execution mode based on the components you provide.</p>

    <h3>Cascade Mode: STT to LLM to TTS</h3>
    <p>Compose any provider chain for complete control over each stage.</p>
    <div style="background:#0d0e13 !important;border-radius:10px;overflow:hidden;margin:20px 0 28px;border:none !important;box-shadow:none !important;outline:none !important;">
      <div style="background:#16171e !important;padding:8px 18px;font-size:11px;font-weight:600;color:#6e7685;letter-spacing:0.08em;text-transform:uppercase;font-family:'Fira Code',monospace;">python</div>
      <pre style="padding:16px 18px;overflow-x:auto;font-family:'Fira Code','Cascadia Code','Consolas',monospace;font-size:13px;line-height:1.8;color:#c9d1d9;margin:0;background:#0d0e13 !important;border:none !important;box-shadow:none !important;outline:none !important;">pipeline = <span style="color:#A497D9;">Pipeline</span>(
    stt=DeepgramSTT(),
    llm=GoogleLLM(),
    tts=CartesiaTTS(),
    vad=SileroVAD(),
    turn_detector=TurnDetector(),
)
session = AgentSession(agent=MyAgent(), pipeline=pipeline)
await session.start(wait_for_participant=True, run_until_shutdown=True)</pre>
    </div>

    <h3>Realtime Mode: Lowest Latency with Unified Models</h3>
    <p>Use a single realtime model for the full voice pipeline.</p>
    <div style="background:#0d0e13 !important;border-radius:10px;overflow:hidden;margin:20px 0 28px;border:none !important;box-shadow:none !important;outline:none !important;">
      <div style="background:#16171e !important;padding:8px 18px;font-size:11px;font-weight:600;color:#6e7685;letter-spacing:0.08em;text-transform:uppercase;font-family:'Fira Code',monospace;">python</div>
      <pre style="padding:16px 18px;overflow-x:auto;font-family:'Fira Code','Cascadia Code','Consolas',monospace;font-size:13px;line-height:1.8;color:#c9d1d9;margin:0;background:#0d0e13 !important;border:none !important;box-shadow:none !important;outline:none !important;">pipeline = <span style="color:#A497D9;">Pipeline</span>(
    llm=GeminiRealtime(
        model=<span style="color:#98c379;">"gemini-3.1-flash-live-preview"</span>,
        config=GeminiLiveConfig(voice=<span style="color:#98c379;">"Leda"</span>, response_modalities=[<span style="color:#98c379;">"AUDIO"</span>]),
    )
)
session = AgentSession(agent=MyAgent(), pipeline=pipeline)
await session.start(wait_for_participant=True, run_until_shutdown=True)</pre>
    </div>
    <p>Other supported realtime models: <a href="https://docs.videosdk.live/ai_agents/plugins/realtime/openai" target="_blank"><span class="ic">OpenAIRealtime</span></a>, <a href="https://docs.videosdk.live/ai_agents/plugins/realtime/aws-nova-sonic" target="_blank"><span class="ic">AWSNovaSonic</span></a>, <a href="https://docs.videosdk.live/ai_agents/plugins/realtime/azure-voice-live" target="_blank"><span class="ic">AzureVoiceLive</span></a>, <a href="https://docs.videosdk.live/ai_agents/plugins/realtime/xai-grok" target="_blank"><span class="ic">xAI Grok</span></a>, <a href="https://docs.videosdk.live/ai_agents/plugins/realtime/ultravox" target="_blank"><span class="ic">Ultravox</span></a>.</p>

    <h3>Hybrid Mode: Mix Cascade and Realtime Components</h3>
    <p>Read the full <a href="https://docs.videosdk.live/ai_agents/core-components/pipeline#hybrid-mode" target="_blank">Hybrid Mode docs</a>.</p>
    <div style="background:#0d0e13 !important;border-radius:10px;overflow:hidden;margin:20px 0 28px;border:none !important;box-shadow:none !important;outline:none !important;">
      <div style="background:#16171e !important;padding:8px 18px;font-size:11px;font-weight:600;color:#6e7685;letter-spacing:0.08em;text-transform:uppercase;font-family:'Fira Code',monospace;">python</div>
      <pre style="padding:16px 18px;overflow-x:auto;font-family:'Fira Code','Cascadia Code','Consolas',monospace;font-size:13px;line-height:1.8;color:#c9d1d9;margin:0;background:#0d0e13 !important;border:none !important;box-shadow:none !important;outline:none !important;"><span style="color:#5c6370;"># Custom STT + Realtime LLM (bring your own transcription)</span>
pipeline = <span style="color:#A497D9;">Pipeline</span>(stt=DeepgramSTT(), llm=OpenAIRealtime(...))
<span style="color:#5c6370;"># Realtime LLM + Custom TTS (bring your own voice)</span>
pipeline = <span style="color:#A497D9;">Pipeline</span>(llm=OpenAIRealtime(...), tts=ElevenLabsTTS(...))</pre>
    </div>

    <h3>Flexible Agent Composition</h3>
    <p>Just pass the components you need. The pipeline handles the rest.</p>
    <div style="background:#0d0e13 !important;border-radius:10px;overflow:hidden;margin:20px 0 28px;border:none !important;box-shadow:none !important;outline:none !important;">
      <div style="background:#16171e !important;padding:8px 18px;font-size:11px;font-weight:600;color:#6e7685;letter-spacing:0.08em;text-transform:uppercase;font-family:'Fira Code',monospace;">python</div>
      <pre style="padding:16px 18px;overflow-x:auto;font-family:'Fira Code','Cascadia Code','Consolas',monospace;font-size:13px;line-height:1.8;color:#c9d1d9;margin:0;background:#0d0e13 !important;border:none !important;box-shadow:none !important;outline:none !important;"><span style="color:#A497D9;">Pipeline</span>(stt=...)                                         <span style="color:#5c6370;"># Transcription only</span>
<span style="color:#A497D9;">Pipeline</span>(llm=...)                                         <span style="color:#5c6370;"># Text chatbot</span>
<span style="color:#A497D9;">Pipeline</span>(stt=..., llm=..., tts=...)                       <span style="color:#5c6370;"># Voice + Chat</span>
<span style="color:#A497D9;">Pipeline</span>(stt=..., llm=..., tts=..., vad=..., turn_detector=...)  <span style="color:#5c6370;"># Full voice agent</span>
<span style="color:#A497D9;">Pipeline</span>(llm=OpenAIRealtime(...))                         <span style="color:#5c6370;"># Realtime voice agent</span></pre>
    </div>
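    <p>To make the auto-detection concrete, here is a toy sketch of the selection rule implied by the combinations above. This is purely illustrative: the real detection lives inside the SDK, and the function name and flags here are invented for the example.</p>

```python
# Toy sketch of the mode-selection rule implied by the component
# combinations above. Illustrative only: the real logic is internal
# to the SDK, and detect_mode / llm_is_realtime are invented names.
def detect_mode(stt=None, tts=None, llm_is_realtime=False) -> str:
    if llm_is_realtime and (stt is not None or tts is not None):
        return "hybrid"    # realtime model plus external STT and/or TTS
    if llm_is_realtime:
        return "realtime"  # single speech-to-speech model
    return "cascade"       # explicit STT -> LLM -> TTS chain

print(detect_mode(llm_is_realtime=True))              # realtime
print(detect_mode(stt="deepgram", llm_is_realtime=True))  # hybrid
```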

    <h3>Pipeline Hooks System</h3>
    <p><span class="ic">ConversationalFlow</span> has been removed in favour of <span class="ic">@pipeline.on(...)</span> — a lightweight hooks engine that lets you intercept and transform data at any stage without subclassing. See the full <a href="https://docs.videosdk.live/ai_agents/core-components/pipeline-hooks" target="_blank">Pipeline Hooks docs</a>.</p>

    <!-- Hooks table — fully inline styled so Ghost can't override -->
    <table style="width:100%;border-collapse:collapse;margin:20px 0 28px;font-size:14px;background:transparent !important;">
      <thead>
        <tr>
          <th style="background:#16171e !important;color:#A497D9;padding:12px 16px;text-align:left;font-weight:700;border-bottom:1px solid #22242C;font-family:'Inter',sans-serif;">HOOK</th>
          <th style="background:#16171e !important;color:#A497D9;padding:12px 16px;text-align:left;font-weight:700;border-bottom:1px solid #22242C;font-family:'Inter',sans-serif;">WHAT YOU CAN INTERCEPT / MODIFY</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:1px solid #22242C;color:#E0E0E0;vertical-align:top;"><span style="font-family:'Fira Code',monospace;color:#A497D9;font-weight:700;font-size:13px;background:none !important;padding:0;border:none !important;box-shadow:none !important;">stt</span></td>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:1px solid #22242C;color:#E0E0E0;vertical-align:top;">Incoming audio stream and transcript text. Clean, redact, or replace before LLM.</td>
        </tr>
        <tr>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:1px solid #22242C;color:#E0E0E0;vertical-align:top;"><span style="font-family:'Fira Code',monospace;color:#A497D9;font-weight:700;font-size:13px;background:none !important;padding:0;border:none !important;box-shadow:none !important;">tts</span></td>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:1px solid #22242C;color:#E0E0E0;vertical-align:top;">Outgoing text and synthesized audio stream. Adjust pronunciation, filter, or re-route.</td>
        </tr>
        <tr>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:1px solid #22242C;color:#E0E0E0;vertical-align:top;"><span style="font-family:'Fira Code',monospace;color:#A497D9;font-weight:700;font-size:13px;background:none !important;padding:0;border:none !important;box-shadow:none !important;">llm</span></td>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:1px solid #22242C;color:#E0E0E0;vertical-align:top;">Message list. Bypass the model entirely with a yield, or modify before inference.</td>
        </tr>
        <tr>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:1px solid #22242C;color:#E0E0E0;vertical-align:top;"><span style="font-family:'Fira Code',monospace;color:#A497D9;font-weight:700;font-size:13px;background:none !important;padding:0;border:none !important;box-shadow:none !important;">vision_frame</span></td>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:1px solid #22242C;color:#E0E0E0;vertical-align:top;">Raw video frames from participants</td>
        </tr>
        <tr>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:1px solid #22242C;color:#E0E0E0;vertical-align:top;"><span style="font-family:'Fira Code',monospace;color:#A497D9;font-weight:700;font-size:13px;background:none !important;padding:0;border:none !important;box-shadow:none !important;">user_turn_start</span> / <span style="font-family:'Fira Code',monospace;color:#A497D9;font-weight:700;font-size:13px;background:none !important;padding:0;border:none !important;box-shadow:none !important;">user_turn_end</span></td>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:1px solid #22242C;color:#E0E0E0;vertical-align:top;">User speaking lifecycle</td>
        </tr>
        <tr>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:none;color:#E0E0E0;vertical-align:top;"><span style="font-family:'Fira Code',monospace;color:#A497D9;font-weight:700;font-size:13px;background:none !important;padding:0;border:none !important;box-shadow:none !important;">agent_turn_start</span> / <span style="font-family:'Fira Code',monospace;color:#A497D9;font-weight:700;font-size:13px;background:none !important;padding:0;border:none !important;box-shadow:none !important;">agent_turn_end</span></td>
          <td style="background:#111217 !important;padding:12px 16px;border-bottom:none;color:#E0E0E0;vertical-align:top;">Agent response lifecycle</td>
        </tr>
      </tbody>
    </table>

    <div style="background:#0d0e13 !important;border-radius:10px;overflow:hidden;margin:20px 0 28px;border:none !important;box-shadow:none !important;outline:none !important;">
      <div style="background:#16171e !important;padding:8px 18px;font-size:11px;font-weight:600;color:#6e7685;letter-spacing:0.08em;text-transform:uppercase;font-family:'Fira Code',monospace;">python</div>
      <pre style="padding:16px 18px;overflow-x:auto;font-family:'Fira Code','Cascadia Code','Consolas',monospace;font-size:13px;line-height:1.8;color:#c9d1d9;margin:0;background:#0d0e13 !important;border:none !important;box-shadow:none !important;outline:none !important;"><span style="color:#A497D9;">@pipeline.on("stt")</span>
async def on_transcript(text: str) -> str:
    return text.strip()          <span style="color:#5c6370;"># normalize before LLM</span>
<span style="color:#A497D9;">@pipeline.on("tts")</span>
async def on_tts(text: str) -> str:
    return text.replace(<span style="color:#98c379;">"SDK"</span>, <span style="color:#98c379;">"S D K"</span>)   <span style="color:#5c6370;"># fix pronunciation</span>
<span style="color:#A497D9;">@pipeline.on("llm")</span>
async def on_llm(messages):
    yield <span style="color:#98c379;">"Transferring you now."</span>  <span style="color:#5c6370;"># bypass LLM entirely</span></pre>
    </div>
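    <p>As a concrete example of the <span class="ic">stt</span> hook's "clean, redact, or replace" role from the table above, here is a standalone sketch of a transcript-redaction hook body. It is plain Python so the transformation can be read (and run) in isolation; in a real agent it would be registered with <span class="ic">@pipeline.on("stt")</span> as shown earlier. The redaction policy, masking runs of four or more digits, is an assumption for illustration.</p>

```python
import asyncio
import re

# Standalone sketch of an stt-hook body. In a real agent this
# coroutine would be registered with @pipeline.on("stt"); the
# digit-masking policy below is illustrative, not prescribed.
async def redact_transcript(text: str) -> str:
    # Mask runs of 4+ digits (card/account numbers) so they never
    # reach the LLM, and trim surrounding whitespace.
    return re.sub(r"\d{4,}", "[REDACTED]", text.strip())

if __name__ == "__main__":
    print(asyncio.run(redact_transcript("  my card is 4242424242424242 ")))
    # -> my card is [REDACTED]
```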

    <h3>Observability</h3>
    <p>Per-component metrics, structured logging, and OpenTelemetry tracing are built in across all pipeline modes. Custom endpoints configurable via <span class="ic">RoomOptions</span>.</p>

    <h3>Anam AI Avatar Plugin</h3>
    <p>Bring your agents to life with the <strong>Anam AI</strong> avatar plugin. Attach it to your pipeline with <span class="ic">avatar=AnamAI(...)</span> and the framework handles WebRTC data channel audio routing, interrupts, and teardown automatically. Read the <a href="https://docs.videosdk.live/ai_agents/plugins/avatar/anam-ai" target="_blank">Anam AI plugin docs</a> or <a href="https://www.videosdk.live/blog/how-to-build-ai-virtual-avatars-using-anam-ai-and-videosdk-ai-voice-agents" target="_blank">read this blog for full setup</a>.</p>
    <div class="img-container">
      <img src="https://assets.videosdk.live/images/image%20%2859%29.png" alt="Product Updates - March 2026 : Agents SDK v1.0.0, Unified Pipeline & Agent Participants Across All SDKs"/>
    </div>

    <h3>LangChain and LangGraph Support</h3>
    <p>Drop in any LangChain <span class="ic">BaseChatModel</span> or LangGraph <span class="ic">StateGraph</span> as your pipeline LLM. See the full <a href="https://docs.videosdk.live/ai_agents/plugins/llm/langchain-llm" target="_blank">LangChain plugin docs</a>.</p>
    <div style="background:#0d0e13 !important;border-radius:10px;overflow:hidden;margin:20px 0 28px;border:none !important;box-shadow:none !important;outline:none !important;">
      <div style="background:#16171e !important;padding:8px 18px;font-size:11px;font-weight:600;color:#6e7685;letter-spacing:0.08em;text-transform:uppercase;font-family:'Fira Code',monospace;">python</div>
      <pre style="padding:16px 18px;overflow-x:auto;font-family:'Fira Code','Cascadia Code','Consolas',monospace;font-size:13px;line-height:1.8;color:#c9d1d9;margin:0;background:#0d0e13 !important;border:none !important;box-shadow:none !important;outline:none !important;">from videosdk.plugins.langchain import <span style="color:#A497D9;">LangChainLLM</span>, <span style="color:#A497D9;">LangGraphLLM</span>

<span style="color:#5c6370;"># Use any LangChain BaseChatModel</span>
pipeline = <span style="color:#A497D9;">Pipeline</span>(llm=LangChainLLM(model=ChatOpenAI(...)))

<span style="color:#5c6370;"># Use a LangGraph StateGraph</span>
pipeline = <span style="color:#A497D9;">Pipeline</span>(llm=LangGraphLLM(graph=my_graph))</pre>
    </div>

    <h3>Structured Recording</h3>
    <p>Recording is now configured declaratively through <span class="ic">RoomOptions</span>: audio-only by default, with camera and screen share capture as explicit opt-ins.</p>
    <div style="background:#0d0e13 !important;border-radius:10px;overflow:hidden;margin:20px 0 28px;border:none !important;box-shadow:none !important;outline:none !important;">
      <div style="background:#16171e !important;padding:8px 18px;font-size:11px;font-weight:600;color:#6e7685;letter-spacing:0.08em;text-transform:uppercase;font-family:'Fira Code',monospace;">python</div>
      <pre style="padding:16px 18px;overflow-x:auto;font-family:'Fira Code','Cascadia Code','Consolas',monospace;font-size:13px;line-height:1.8;color:#c9d1d9;margin:0;background:#0d0e13 !important;border:none !important;box-shadow:none !important;outline:none !important;">from videosdk.agents import <span style="color:#A497D9;">RoomOptions</span>, <span style="color:#A497D9;">RecordingOptions</span>

room_options = <span style="color:#A497D9;">RoomOptions</span>(
    recording=True,  <span style="color:#5c6370;"># audio-only by default</span>
    recording_options=RecordingOptions(
        video=True,         <span style="color:#5c6370;"># opt-in to camera recording</span>
        screen_share=True,  <span style="color:#5c6370;"># opt-in to screen share recording</span>
    )
)</pre>
    </div>

    <div class="card" style="background-color:#111217;border-radius:12px;padding:30px;margin:30px 0;">
      <h3 style="margin-top:0;">Migrating from v0.x</h3>
      <p>Replace <span class="ic">CascadingPipeline</span> and <span class="ic">RealtimePipeline</span> with <span class="ic">Pipeline</span>. Replace any <span class="ic">ConversationalFlow</span> subclass with <span class="ic">@pipeline.on(...)</span> hooks. Constructor arguments stay the same. Everything else (AgentSession, WorkerJob, VAD, fallback providers, MCP tools) works as before.</p>
      <a href="https://github.com/videosdk-live/agents/releases/tag/v1.0.0" style="background-color:#A497D9;color:#000000;padding:8px 22px;border-radius:6px;font-size:14px;font-weight:700;display:inline-block;text-decoration:none;">Full Release Notes on GitHub</a>
    </div>

    <h2>Agent Participant Support Across All RTC SDKs</h2>
    <p>AI agents are now first-class citizens across every VideoSDK platform. All five RTC SDKs shipped native <strong>Agent Participant</strong> support this month. Your app can now detect, track, and react to AI agents in a room, including live state changes and real-time transcription.</p>
    <ul>
      <li><strong>JS SDK</strong> v0.7.0</li>
      <li><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/release-notes#v080" target="_blank"><strong>React SDK</strong> v0.8.0</a></li>
      <li><a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/release-notes#v090" target="_blank"><strong>React Native SDK</strong> v0.9.0</a></li>
      <li><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/release-notes#380" target="_blank"><strong>Flutter SDK</strong> v3.8.0</a></li>
      <li><a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/release-notes#v270" target="_blank"><strong>iOS SDK</strong> v2.7.0</a></li>
    </ul>

    <h3>What Every SDK Now Supports</h3>
    <ul>
      <li><strong>AgentParticipant / isAgent:</strong> When an agent joins a room, it is automatically identified as a distinct participant type. No manual detection needed.</li>
      <li><strong>AgentState enum:</strong> Track lifecycle with <span class="ic">IDLE</span>, <span class="ic">LISTENING</span>, <span class="ic">THINKING</span>, and <span class="ic">SPEAKING</span>. Know exactly what your agent is doing at any moment.</li>
      <li><strong>State change events:</strong> React and React Native expose <span class="ic">onAgentStateChange</span>; JS fires <span class="ic">agent-state-change</span>; iOS uses <span class="ic">onAgentStateChanged</span>; Flutter uses <span class="ic">Events.agentStateChanged</span>.</li>
      <li><strong>Live transcription:</strong> Receive real-time agent transcription with the speaking participant and a timestamped text segment, available across all five SDKs.</li>
    </ul>
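The state model is identical across platforms, so SDK-agnostic app logic (for example, a dashboard backend mirroring room state) can represent it as a plain enum. A minimal Python sketch — illustrative only, not an SDK type:

```python
from enum import Enum

class AgentState(Enum):
    """Mirror of the agent lifecycle states exposed by the RTC SDKs."""
    IDLE = "idle"
    LISTENING = "listening"
    THINKING = "thinking"
    SPEAKING = "speaking"

def status_label(state):
    """Map a lifecycle state to a short UI label."""
    labels = {
        AgentState.IDLE: "Waiting",
        AgentState.LISTENING: "Listening...",
        AgentState.THINKING: "Thinking...",
        AgentState.SPEAKING: "Speaking",
    }
    return labels[state]

print(status_label(AgentState.THINKING))  # prints Thinking...
```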

    <h2>Google Gemini 3.1 Support</h2>
    <p>VideoSDK now supports <strong>Google Gemini 3.1</strong> in the realtime pipeline. Drop in <span class="ic">GeminiRealtime</span> with the latest model and get ultra-low-latency voice responses powered by Gemini's newest architecture.</p>
    <div style="background:#0d0e13 !important;border-radius:10px;overflow:hidden;margin:20px 0 28px;border:none !important;box-shadow:none !important;outline:none !important;">
      <div style="background:#16171e !important;padding:8px 18px;font-size:11px;font-weight:600;color:#6e7685;letter-spacing:0.08em;text-transform:uppercase;font-family:'Fira Code',monospace;">python</div>
      <pre style="padding:16px 18px;overflow-x:auto;font-family:'Fira Code','Cascadia Code','Consolas',monospace;font-size:13px;line-height:1.8;color:#c9d1d9;margin:0;background:#0d0e13 !important;border:none !important;box-shadow:none !important;outline:none !important;">pipeline = <span style="color:#A497D9;">Pipeline</span>(
    llm=GeminiRealtime(
        model=<span style="color:#98c379;">"gemini-3.1-flash-live-preview"</span>,
        config=GeminiLiveConfig(voice=<span style="color:#98c379;">"Leda"</span>, response_modalities=[<span style="color:#98c379;">"AUDIO"</span>]),
    )
)</pre>
    </div>
    <div class="img-container" style="position:relative;padding-bottom:56.25%;height:0;overflow:hidden;border-radius:8px;">
      <iframe style="position:absolute;top:0;left:0;width:100%;height:100%;border:0 !important;border-radius:8px;" src="https://www.youtube.com/embed/J9Wqdqpqjx4?si=QX_W6MY0E-9NnJJ3" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe>
    </div>

    <h2>AI Voice Agent Starter Apps</h2>
    <p>Get up and running in minutes with our ready-to-use starter apps. Each one is pre-wired with Agent Participant support, live transcription, and AgentState UI out of the box.</p>
    <ul>
      <li><a href="https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/web-integrations/agent-starter-react" target="_blank"><strong>React Starter App</strong></a>: Web integration with full agent UI</li>
      <li><a href="https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/mobile-integrations/agent-starter-flutter" target="_blank"><strong>Flutter Starter App</strong></a>: Cross-platform mobile agent integration</li>
      <li><a href="https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/mobile-integrations/agent-starter-ios" target="_blank"><strong>iOS Starter App</strong></a>: Native iOS agent integration</li>
    </ul>

    <h2>New Content and Resources</h2>
    <p>New guides cover the unified Pipeline, hooks deep dive, agent lifecycle and events, and SDK-specific agent integration.</p>
    <h3>New Guides and Tutorials</h3>
    <ul>
      <li><strong>Deploy your AI Voice Agent:</strong> <a href="https://www.videosdk.live/blog/how-to-deploy-your-ai-voice-agents-in-videosdk" target="_blank">A step-by-step guide to deploying production-ready voice agents on VideoSDK.</a></li>
      <li><strong>Build AI Virtual Avatars with Anam AI:</strong> <a href="https://www.videosdk.live/blog/how-to-build-ai-virtual-avatars-using-anam-ai-and-videosdk-ai-voice-agents" target="_blank">How to build interactive AI avatars using the Anam AI plugin.</a></li>
      <li><strong>Pipeline hooks deep dive:</strong> <a href="https://docs.videosdk.live/ai_agents/core-components/pipeline-hooks" target="_blank">How to intercept and transform data at every pipeline stage.</a></li>
      <li><strong>LangChain and LangGraph integration:</strong> <a href="https://docs.videosdk.live/ai_agents/plugins/llm/langchain-llm" target="_blank">Connect agents to external tools and multi-step reasoning workflows.</a></li>
    </ul>
    <h3>Featured Videos</h3>
    <ul>
      <li>▶️ <a href="https://youtu.be/J9Wqdqpqjx4" target="_blank">Building with Google Gemini 3.1 on VideoSDK</a></li>
    </ul>

    <h2>SDK Sketches</h2>
    <div class="img-container">
      <img src="https://assets.videosdk.live/images/penguin1.png" alt="Product Updates - March 2026 : Agents SDK v1.0.0, Unified Pipeline & Agent Participants Across All SDKs" style="border-radius:8px;max-width:620px;"/>
      <span class="img-caption">This month's sketch: CascadingPipeline and RealtimePipeline walk into v1.0.0... and they're the same picture.</span>
    </div>

    <h2>What's Next?</h2>
    <p>As we move through 2026, our focus remains on pushing the boundaries of what's possible with real-time communication and AI. Expect even more powerful tools, deeper platform integrations, and a relentless focus on the developer experience.</p>

    <div class="cta-box">
      <h3 style="margin-top:0;">Ready to Build?</h3>
      <p>Upgrade to Agents SDK v1.0.0 and the latest RTC SDKs to get everything in this release.</p>
      <a href="https://github.com/videosdk-live/agents" style="background-color:#A497D9;color:#000000;padding:8px 22px;border-radius:6px;font-size:14px;font-weight:700;display:inline-block;text-decoration:none;margin-right:12px;">GitHub</a>
      <a href="https://discord.com/invite/Gpmj6eCq5u" style="background-color:#A497D9;color:#000000;padding:8px 22px;border-radius:6px;font-size:14px;font-weight:700;display:inline-block;text-decoration:none;">Join our Discord</a>
    </div>
  </div>
</body>
</html>
<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Build AI Avatar Agent with VideoSDK & Simli Face API]]></title><description><![CDATA[Add AI avatar to your VideoSDK agent with Python. Build an interactive, voice-enabled assistant that can answer live weather queries and more.]]></description><link>https://www.videosdk.live/blog/ai-avatar-agent</link><guid isPermaLink="false">686b573cd4a340b6b279c5bd</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[#sumit-so]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 14 Jul 2025 08:42:50 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/07/ai-avatar-demo.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/07/ai-avatar-demo.png" alt="Build AI Avatar Agent with VideoSDK & Simli Face API"/><p>In this blog, you'll learn how to add an <strong>AI Avatar</strong> to a VideoSDK agent in a straightforward, practical way. By the end, you’ll have a real-time, talking digital assistant with a face, a voice, and the power to answer live weather questions — all running in your browser.</p><h2 id="project-architecture">Project Architecture</h2><pre><code class="language-bash">├── main.py              # Main agent implementation
├── requirements.txt     # Python dependencies
├── mcp_weather.py       # Weather MCP server
├── .env.example         # Environment variables template
└── README.md            # This file</code></pre><h2 id="set-up-your-python-project">Set Up Your Python Project</h2><p>We'll build this project in Python. Start by ensuring your environment is ready and all required dependencies are installed.</p><ul><li>Make sure you have <code>python &gt;=3.12</code> installed</li></ul><h3 id="create-and-activate-a-virtual-environment">Create and Activate a Virtual Environment</h3><pre><code class="language-bash">python -m venv .venv
# On Windows
.venv\Scripts\activate
# On macOS/Linux
source .venv/bin/activate
</code></pre><h3 id="install-the-required-dependencies">Install the Required Dependencies</h3><p>Create a <code>requirements.txt</code> file and add these lines:</p><pre><code class="language-bash">videosdk-agents
videosdk-plugins-google
videosdk-plugins-simli
python-dotenv
fastmcp</code></pre><p>Then install them:</p><pre><code class="language-bash">pip install -r requirements.txt
</code></pre><h2 id="the-big-picture-how-the-pieces-connect">The Big Picture: How the Pieces Connect</h2><p>Before diving into the code, let’s map out the core components and how they interact:</p><ul><li><strong>VideoSDK Agent</strong><br>The “director” that orchestrates everything. It manages the session, connects to the playground, and coordinates the avatar, voice, and tools.</br></li><li><strong>Google Gemini (via VideoSDK plugin)</strong><br>The “brain” of your agent, responsible for understanding what you say and generating natural-sounding replies in real time.</br></li><li><strong>Simli Avatar (via VideoSDK plugin)</strong><br>The “face” and “voice” of your agent. It animates and speaks the responses generated by Gemini, making the agent feel alive.</br></li><li><strong>MCP Weather Tool (Model Context Protocol)</strong><br>The “specialist prop master.” When the conversation calls for weather info, the agent calls out to this separate process, which fetches live weather data and returns it as dialogue.</br></li></ul><p><strong>How it all works in a conversation:</strong></p><ol><li>You speak to the avatar in the browser or a mobile application (using the VideoSDK playground).</li><li>The agent (<code>main.py</code>) receives your message, processes it with Gemini, and speaks the response using Simli.</li><li>If you ask about the weather, the agent reaches out to the MCP weather tool (<code>mcp_weather.py</code>), which fetches the answer and brings it into the conversation in real time.</li></ol><p>For more on how the playground works, check out the <a href="https://docs.videosdk.live/ai_agents/playground">VideoSDK AI Playground documentation</a>.</p><h2 id="the-heart-of-the-show-%E2%80%94-the-key-files">The Heart of the Show — The Key Files</h2><h3 id="mainpy-%E2%80%94-the-orchestrator"><code>main.py</code> — The Orchestrator</h3><p>This is the main script where the “performance” comes together:</p><ul><li>It configures your AI agent with a voice, a face, 
and the ability to call out to external tools (like the weather server).</li><li>When you run it, it spins up a VideoSDK room and connects your agent to the browser-based playground, ready to talk in real time.</li></ul><pre><code class="language-python">import asyncio
import sys
from pathlib import Path
import requests
from videosdk.agents import Agent, AgentSession, RealTimePipeline, JobContext, RoomOptions, WorkerJob, MCPServerStdio
from videosdk.plugins.google import GeminiRealtime, GeminiLiveConfig
from videosdk.plugins.simli import SimliAvatar, SimliConfig
from dotenv import load_dotenv
import os

load_dotenv(override=True)

def get_room_id(auth_token: str) -&gt; str:
    url = "https://api.videosdk.live/v2/rooms"
    headers = {
        "Authorization": auth_token
    }
    response = requests.post(url, headers=headers)
    response.raise_for_status()
    return response.json()["roomId"]

class MyVoiceAgent(Agent):
    def __init__(self):
        mcp_script_weather = Path(__file__).parent / "mcp_weather.py"
        super().__init__(
            instructions="You are VideoSDK's AI Avatar Voice Agent with real-time capabilities. You are a helpful virtual assistant with a visual avatar that can answer questions about the weather and help with other tasks in real-time.",
            mcp_servers = [
                MCPServerStdio(
                    executable_path=sys.executable,
                    process_arguments= [str(mcp_script_weather)],
                    session_timeout=30
                )
                ]
        )

    async def on_enter(self) -&gt; None:
        await self.session.say("Hello! I'm your real-time AI avatar assistant. How can I help you today?")
    
    async def on_exit(self) -&gt; None:
        await self.session.say("Goodbye! It was great talking with you!")
        

async def start_session(context: JobContext):
    # Initialize Gemini Realtime model
    model = GeminiRealtime(
        model="gemini-2.0-flash-live-001",
        # Omit api_key when GOOGLE_API_KEY is set in .env; pass it explicitly otherwise
        # api_key="xxxxxx",
        config=GeminiLiveConfig(
            voice="Leda",  # Puck, Charon, Kore, Fenrir, Aoede, Leda, Orus, and Zephyr.
            response_modalities=["AUDIO"]
        )
    )

    # Initialize Simli Avatar
    simli_config = SimliConfig(
        apiKey=os.getenv("SIMLI_API_KEY"),
        faceId=os.getenv("SIMLI_FACE_ID", "0c2b8b04-5274-41f1-a21c-d5c98322efa9")  # default face
    )
    simli_avatar = SimliAvatar(config=simli_config)

    # Create pipeline with avatar
    pipeline = RealTimePipeline(
        model=model,
        avatar=simli_avatar
    )
    
    session = AgentSession(
        agent=MyVoiceAgent(),
        pipeline=pipeline
    )

    try:
        await context.connect()
        await session.start()
        await asyncio.Event().wait()
    finally:
        await session.close()
        await context.shutdown()

def make_context() -&gt; JobContext:
    auth_token = os.getenv("VIDEOSDK_AUTH_TOKEN")
    room_id = get_room_id(auth_token)
    room_options = RoomOptions(
        room_id=room_id,
        auth_token=auth_token,
        name="Simli Avatar Realtime Agent",
        playground=True 
    )
    return JobContext(room_options=room_options)


if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start() 
</code></pre><h3 id="mcpweatherpy-%E2%80%94-the-weather-specialist-mcp-tool"><code>mcp_weather.py</code> — The Weather Specialist (MCP Tool)</h3><p><strong>About MCP:</strong><br>The <strong>Model Context Protocol (MCP)</strong> allows your agent to “call out” to external specialists when it doesn’t know something itself. In this project, <code>mcp_weather.py</code> is that specialist: a dedicated service that fetches live weather data for any city using the <strong>OpenWeatherMap API</strong>. When you ask your agent about the weather, it seamlessly passes your request to this <strong>MCP tool</strong> and brings the answer back, all in real time.</br></p><pre><code class="language-python">from fastmcp import FastMCP
import httpx
import os
from dotenv import load_dotenv

load_dotenv(override=True)

OPENWEATHER_API_KEY = os.getenv("OPENWEATHER_API_KEY")  # set this in your .env file
OPENWEATHER_URL = "https://api.openweathermap.org/data/2.5/weather"

mcp = FastMCP("CurrentWeatherServer")

@mcp.tool()
async def get_current_weather(city: str) -&gt; str:
    """
    Get the current weather for a given city using OpenWeatherMap API.
    """
    params = {
        "q": city,
        "appid": OPENWEATHER_API_KEY,
        "units": "metric"
    }
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(OPENWEATHER_URL, params=params, timeout=10)
            
            # Better error handling for authorization issues
            if response.status_code == 401:
                return "Authorization error: Invalid API key. Please check your OpenWeatherMap API key."
            elif response.status_code == 404:
                return f"City '{city}' not found. Please check the spelling."
            
            response.raise_for_status()
            data = response.json()
            weather = data["weather"][0]["description"].capitalize()
            temp = data["main"]["temp"]
            feels_like = data["main"]["feels_like"]
            humidity = data["main"]["humidity"]
            wind_speed = data.get("wind", {}).get("speed", "N/A")
            return (f"Current weather in {city}:\n"
                    f"{weather}, temperature: {temp}°C, feels like: {feels_like}°C.\n"
                    f"Humidity: {humidity}%, Wind speed: {wind_speed} m/s")
        except httpx.RequestError as e:
            return f"Network error: Could not retrieve weather data for {city}: {e}"
        except Exception as e:
            return f"Could not retrieve weather data for {city}: {e}"

if __name__ == "__main__":
    mcp.run(transport="stdio")
</code></pre><h2 id="step-by-step-bringing-your-ai-avatar-to-life">Step-by-Step: Bringing Your AI Avatar to Life</h2><ol><li><strong>Set up your environment variables:</strong><ul><li>Copy <code>.env.example</code> to <code>.env</code> and fill in the keys shown below.</li></ul></li><li><strong>Run your agent:</strong><ul><li>Start it with <code>python main.py</code>.</li></ul></li><li><strong>Open the VideoSDK playground URL</strong> printed in your terminal. It will look like <code>https://playground.videosdk.live?token=...&amp;meetingId=...</code></li><li><strong>Talk to your AI avatar!</strong><ul><li>Say hello, ask about the weather (“What’s the weather in London?”), or have a general conversation.</li><li>The avatar will speak and respond using Gemini and Simli, and fetch live weather using the MCP tool.</li></ul></li></ol><p>Fill out the following in <code>.env</code>:</p><pre><code class="language-bash">VIDEOSDK_AUTH_TOKEN=your-videosdk-token
SIMLI_API_KEY=your-simli-api-key
SIMLI_FACE_ID=your-simli-face-id
OPENWEATHER_API_KEY=your-openweathermap-key
GOOGLE_API_KEY=your-google-api-key</code></pre><p><strong>Now step onto the stage, run the code, and meet your creation. The talking avatar is waiting. What will you say first?</strong></p><p>You can dive deeper into the playground and agent capabilities in the <a href="https://docs.videosdk.live/ai_agents/playground">VideoSDK AI Playground documentation</a>.</p>]]></content:encoded></item><item><title><![CDATA[How to Build an AI Telephony Agent for Inbound and Outbound Calls]]></title><description><![CDATA[Build an AI telephony agent for inbound and outbound calls. Step-by-step guide to SIP, call automation, and real-time AI voice solutions.]]></description><link>https://www.videosdk.live/blog/ai-telephony-agent-inbound-outbound-calls</link><guid isPermaLink="false">685d2da94ec7927e167c891a</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[#sumit-so]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 14 Jul 2025 08:40:20 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/06/Outbound-Calls.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/06/Outbound-Calls.png" alt="How to Build an AI Telephony Agent for Inbound and Outbound Calls"/><p>What if you could build your own <a href="https://www.videosdk.live/voice-agents" rel="noreferrer">AI-powered voice agent</a>—one that can answer and place calls, handle appointment scheduling, route customers, collect feedback, and even run automated surveys, all in real time? In this blog, I'll show you how to build a robust, production-ready AI telephony agent with SIP and VoIP integration using Python, VideoSDK, and the latest AI models—all open source and totally extensible.</p><p>We'll go step by step from project setup to a working inbound and outbound AI voice agent. You’ll get code you can copy, clear explanations, and links to deeper docs for each component. 
By the end, you’ll have the foundation for scalable, enterprise-grade customer service automation or custom telephony workflows.</p><h2 id="why-ai-telephony%E2%80%94and-why-now">Why AI Telephony—And Why Now?</h2><p>Traditional telephony systems are rigid, expensive, and hard to adapt to new business needs. But with AI voice agents and SIP (Session Initiation Protocol), you can build next-generation solutions: think appointment bots, emergency notification systems, automated feedback collectors, and more. The magic lies in combining real-time VoIP telephony (using SIP trunks from providers like Twilio) with advanced AI—like Google Gemini or OpenAI—for natural conversations and smart call handling.</p><h2 id="architecture-overview-modular-extensible-real-time">Architecture Overview: Modular, Extensible, Real-Time</h2><p>Our architecture separates concerns for maximum flexibility:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/06/image.png" class="kg-image" alt="How to Build an AI Telephony Agent for Inbound and Outbound Calls" loading="lazy" width="2379" height="1214"/></figure><ul><li><strong>SIP Integration</strong> (VoIP telephony, call control, DTMF, call transfer, call recording)</li><li><strong>AI Voice Agent</strong> (Powered by VideoSDK’s agent framework, integrates LLMs, STT, TTS, sentiment analysis)</li><li><strong>Session Management</strong> (Inbound/outbound call routing, session lifecycle)</li><li><strong>Provider Abstraction</strong> (Easily switch SIP providers—Twilio, Plivo, etc.)</li><li><strong>Pluggable AI Capabilities</strong> (Swap in Google, OpenAI, or custom models)</li></ul><p>You can add features like runtime configuration, call transcription, web dashboards, and more—all with Python.</p><h2 id="project-structure">Project Structure</h2><p>Let’s start by laying out the recommended project structure, just like the demo repo:</p><pre><code class="language-text">ai-telephony-demo/
├── ai/                  # AI and LLM plugins (optional, for custom logic)
├── providers/           # Telephony/SIP provider integrations
├── services/            # Business logic, utilities, and workflow services
├── voice_agent.py       # Core AI voice agent
├── server.py            # FastAPI application and entrypoint
├── config.py            # Environment-driven config
├── requirements.txt     # Python dependencies
</code></pre><h2 id="dependencies">Dependencies</h2><p>Install the dependencies listed in <code>requirements.txt</code>:</p><pre><code class="language-bash">pip install -r requirements.txt
</code></pre><p>Key dependencies include:</p><ul><li><code>fastapi</code> &amp; <code>uvicorn</code> for the server</li><li><code>videosdk</code>, <code>videosdk-agents</code>, and plugins for agent logic</li><li><code>twilio</code>, <code>google-cloud-speech</code>, <code>google-cloud-texttospeech</code> for SIP &amp; AI</li><li><code>python-dotenv</code> for config</li></ul><h2 id="configuration">Configuration</h2><p>Create a <code>.env</code> file in your project root with all the required keys:</p><pre><code class="language-env">VIDEOSDK_AUTH_TOKEN=your_videosdk_auth_token
VIDEOSDK_SIP_USERNAME=your_sip_username
VIDEOSDK_SIP_PASSWORD=your_sip_password
GOOGLE_API_KEY=your_google_api_key
TWILIO_SID=your_twilio_sid
TWILIO_AUTH_TOKEN=your_twilio_auth_token
TWILIO_NUMBER=your_twilio_phone_number
</code></pre><p>Your <code>config.py</code> loads and validates these:</p><pre><code class="language-python">import os
import logging
from dotenv import load_dotenv

load_dotenv()
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

class Config:
    VIDEOSDK_AUTH_TOKEN = os.getenv("VIDEOSDK_AUTH_TOKEN")
    VIDEOSDK_SIP_USERNAME = os.getenv("VIDEOSDK_SIP_USERNAME")
    VIDEOSDK_SIP_PASSWORD = os.getenv("VIDEOSDK_SIP_PASSWORD")
    GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
    TWILIO_ACCOUNT_SID = os.getenv("TWILIO_SID")
    TWILIO_AUTH_TOKEN = os.getenv("TWILIO_AUTH_TOKEN")
    TWILIO_NUMBER = os.getenv("TWILIO_NUMBER")

    @classmethod
    def validate(cls):
        required_vars = {
            "VIDEOSDK_AUTH_TOKEN": cls.VIDEOSDK_AUTH_TOKEN,
            "VIDEOSDK_SIP_USERNAME": cls.VIDEOSDK_SIP_USERNAME,
            "VIDEOSDK_SIP_PASSWORD": cls.VIDEOSDK_SIP_PASSWORD,
            "GOOGLE_API_KEY": cls.GOOGLE_API_KEY,
            "TWILIO_SID": cls.TWILIO_ACCOUNT_SID,
            "TWILIO_AUTH_TOKEN": cls.TWILIO_AUTH_TOKEN,
            "TWILIO_NUMBER": cls.TWILIO_NUMBER,
        }
        missing = [v for v, val in required_vars.items() if not val]
        if missing:
            for v in missing:
                logger.error(f"Missing environment variable: {v}")
            raise ValueError(f"Missing required environment variables: {', '.join(missing)}")
        logger.info("All required environment variables are set.")

Config.validate()
</code></pre><h2 id="the-voice-agent-ai-powered-call-automation">The Voice Agent: AI-Powered Call Automation</h2><p>Your agent logic lives in <code>voice_agent.py</code>. Here’s the real implementation from the repo:</p><pre><code class="language-python">import logging
from typing import Optional, List, Any
from videosdk.agents import Agent

logger = logging.getLogger(__name__)

class VoiceAgent(Agent):
    """An outbound call agent specialized for medical appointment scheduling."""

    def __init__(
        self,
        instructions: str = "You are a medical appointment scheduling assistant. Your goal is to confirm upcoming appointments (5th June 2025 at 11:00 AM) and reschedule if needed.",
        tools: Optional[List[Any]] = None,
        context: Optional[dict] = None,
    ) -&gt; None:
        super().__init__(
            instructions=instructions,
            tools=tools or []
        )
        self.context = context or {}
        self.logger = logging.getLogger(__name__)
        
    async def on_enter(self) -&gt; None:
        self.logger.info("Agent entered the session.")
        initial_greeting = self.context.get(
            "initial_greeting",
            "Hello, this is Neha, calling from City Medical Center regarding your upcoming appointment. Is this a good time to speak?"
        )
        await self.session.say(initial_greeting)

    async def on_exit(self) -&gt; None:
        self.logger.info("Call ended")
</code></pre><p>You can customize instructions, context, and plug in different tools/plugins for STT, TTS, or LLMs.</p><h2 id="the-server-handling-calls-routing-and-agent-sessions">The Server: Handling Calls, Routing, and Agent Sessions</h2><p>The <code>server.py</code> file uses FastAPI to handle incoming SIP webhooks, manage sessions, and glue everything together:</p><pre><code class="language-python">import logging
from fastapi import FastAPI, Request, Form, BackgroundTasks, HTTPException
from fastapi.responses import PlainTextResponse
from config import Config
from models import OutboundCallRequest, CallResponse, SessionInfo
from providers import get_provider
from services import VideoSDKService, SessionManager

logger = logging.getLogger(__name__)

app = FastAPI(
    title="VideoSDK AI Agent Call Server (Modular)",
    description="Modular FastAPI server for inbound/outbound calls with VideoSDK AI Agent using different providers.",
    version="2.0.0"
)

videosdk_service = VideoSDKService()
session_manager = SessionManager()
sip_provider = get_provider("twilio")  # Use your SIP provider

@app.get("/health", response_class=PlainTextResponse)
async def health_check():
    active_sessions = session_manager.get_active_sessions_count()
    return f"Server is healthy. Active sessions: {active_sessions}"

@app.post("/inbound-call", response_class=PlainTextResponse)
async def inbound_call(
    request: Request,
    background_tasks: BackgroundTasks,
    CallSid: str = Form(...),
    From: str = Form(...),
    To: str = Form(...),
):
    logger.info(f"Inbound call received from {From} to {To}. CallSid: {CallSid}")
    try:
        room_id = await videosdk_service.create_room()
        session = await session_manager.create_session(room_id, "inbound")
        background_tasks.add_task(session_manager.run_session, session, room_id)
        sip_endpoint = videosdk_service.get_sip_endpoint(room_id)
        twiml = sip_provider.generate_twiml(sip_endpoint)
        logger.info(f"Responding to {sip_provider.get_provider_name()} inbound call {CallSid} with TwiML to dial SIP: {sip_endpoint}")
        return twiml
    except HTTPException as e:
        logger.error(f"Failed to handle inbound call {CallSid}: {e.detail}")
        return PlainTextResponse(f"&lt;Response&gt;&lt;Say&gt;An error occurred: {e.detail}&lt;/Say&gt;&lt;/Response&gt;", status_code=500)
    except Exception as e:
        logger.error(f"Unhandled error in inbound call {CallSid}: {e}", exc_info=True)
        return PlainTextResponse("&lt;Response&gt;&lt;Say&gt;An unexpected error occurred. Please try again later.&lt;/Say&gt;&lt;/Response&gt;", status_code=500)

@app.post("/outbound-call")
async def outbound_call(request_body: OutboundCallRequest, background_tasks: BackgroundTasks):
    to_number = request_body.to_number
    initial_greeting = request_body.initial_greeting
    logger.info(f"Request to initiate outbound call to: {to_number}")

    if not to_number:
        raise HTTPException(status_code=400, detail="'to_number' is required.")

    try:
        room_id = await videosdk_service.create_room()
        session = await session_manager.create_session(
            room_id, 
            "outbound", 
            initial_greeting
        )
        background_tasks.add_task(session_manager.run_session, session, room_id)
        sip_endpoint = videosdk_service.get_sip_endpoint(room_id)
        twiml = sip_provider.generate_twiml(sip_endpoint)
        call_result = sip_provider.initiate_outbound_call(to_number, twiml)
        logger.info(f"Outbound call initiated via {sip_provider.get_provider_name()} to {to_number}. "
                   f"Call SID: {call_result['call_sid']}. VideoSDK Room: {room_id}")
        return CallResponse(
            message="Outbound call initiated successfully",
            twilio_call_sid=call_result['call_sid'],
            videosdk_room_id=room_id
        )
    except HTTPException as e:
        logger.error(f"Failed to initiate outbound call to {to_number}: {e.detail}")
        raise e
    except Exception as e:
        logger.error(f"Unhandled error initiating outbound call to {to_number}: {e}", exc_info=True)
        raise HTTPException(status_code=500, detail=f"Failed to initiate outbound call: {e}")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000) 
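# Example request (illustrative; phone number is a placeholder) once the
# server is running. Field names match the OutboundCallRequest usage above;
# the JSON response carries the Twilio call SID and the VideoSDK room id
# per the CallResponse fields in the handler:
#   curl -X POST http://localhost:8000/outbound-call \
#        -H "Content-Type: application/json" \
#        -d '{"to_number": "+15551234567", "initial_greeting": "Hello!"}'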
</code></pre><h2 id="modular-providers-services-and-models">Modular Providers, Services, and Models</h2><p>The demo repo is designed to be modular and extensible:</p><ul><li><code>providers/</code> contains code for handling different SIP providers (Twilio, Vonage, etc.).</li><li><code>services/</code> manages VideoSDK integration, room/session management, and business logic.</li><li><code>models.py</code> defines request/response data for FastAPI endpoints.</li></ul><p>You can easily add or swap providers, business rules, and AI models.</p><h2 id="extending-with-mcp-agent2agent-protocol">Extending with MCP &amp; Agent2Agent Protocol</h2><p>To enable advanced features like agent-to-agent transfer, call control, and real-time management:</p><ul><li>Integrate the <a href="https://docs.videosdk.live/ai_agents/mcp-integration">MCP protocol</a> for call control, muting, and participant management.</li><li>Use the <a href="https://docs.videosdk.live/ai_agents/a2a/overview">Agent2Agent protocol</a> to automate handoffs and workflows between agents.</li></ul><p>You can build on the provided classes and add hooks in your <code>VoiceAgent</code> or session logic to coordinate with these protocols.</p><h2 id="running-and-testing">Running and Testing</h2><ol><li>Use tools like ngrok to expose your server to the public internet for SIP webhooks.</li><li>Configure your SIP provider (e.g., Twilio) to point to your <code>/inbound-call</code> endpoint.</li><li>Trigger inbound or outbound calls and watch your AI agent handle real conversations!</li></ol><p>Start your FastAPI server:</p><pre><code class="language-bash">uvicorn server:app --reload
</code></pre><h2 id="key-takeaways">Key Takeaways</h2><ul><li>This open-source project provides a real, modular foundation for AI-powered telephony using SIP, VoIP, and cloud AI.</li><li>The code is production-grade and extensible—just add your workflows, providers, or AI plugins.</li><li>You can enable advanced call control, routing, A2A communication, and more with VideoSDK protocols.</li></ul><h2 id="resources-next-steps">Resources &amp; Next Steps</h2><ul><li>Explore the <a href="https://github.com/videosdk-community/ai-telephony-demo">ai-telephony-demo repo</a> for the full codebase and more docs.</li><li>Learn more about <a href="https://docs.videosdk.live/ai_agents/introduction">VideoSDK AI Agents</a>, <a href="https://docs.videosdk.live/ai_agents/a2a/overview">A2A</a>, and <a href="https://docs.videosdk.live/ai_agents/mcp-integration">MCP</a>.</li><li>Build your own use case: appointment scheduling, customer service automation, or scalable feedback collection!</li></ul>]]></content:encoded></item><item><title><![CDATA[Introducing "NAMO" Real-Time Speech AI Model: On-Device & Hybrid Cloud]]></title><description><![CDATA[VideoSDK revolutionizes industries with NAMO, the world's first open-source Real-Time Speech AI model, cutting costs 20x. Combining on-device and cloud efficiency, it ensures secure, fast AI interactions—empowering BFSI, Healthcare, and more to deploy powerful AI on everyday devices.]]></description><link>https://www.videosdk.live/blog/introducing-namo-real-time-speech-ai-model-on-device-hybrid-cloud</link><guid isPermaLink="false">67dba8584556210426689c2f</guid><category><![CDATA[ANNOUNCEMENT]]></category><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Fri, 21 Mar 2025 07:00:00 GMT</pubDate><content:encoded><![CDATA[<p>At VideoSDK, our mission is to build the infrastructure for digital humans that runs on every device.
We’re expanding into real-time speech-to-speech and vision AI agents that run privately on-device or in a hybrid cloud setting—delivering up to 98% of GPT-4’s accuracy at just 17.5% of its cost.</p><h2 id="unified-sdk-for-on-device-sslm-cloud-sslm-collaboration">Unified SDK for On-Device SSLM + Cloud SSLM Collaboration</h2><p>Today, we are launching the NAMO-SSLM (Small Speech Language Model) hybrid model with a state-of-the-art hybrid inference engine.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/03/Model-Diagram-V3.jpg" class="kg-image" alt="" loading="lazy" width="645" height="160"/></figure><p>Unified SDK for On-Device SSLM + Cloud SSLM</p><p>We’ve developed an inference engine that couples a streamlined, on-device SSLM with a more powerful, cloud-based SSLM, working in tandem for real-time speech use cases. Our engine ensures that most of the heavy lifting—long-context processing and advanced reasoning—occurs only when needed, thereby reducing cloud inference costs without sacrificing performance.</p>
<!--kg-card-begin: html-->
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/CFQFkwWc-XQ?si=4EH-8klYJ663bfb2" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""/>
<!--kg-card-end: html-->
<p>We’ve tackled three key challenges in end-to-end speech models: enabling multi-turn RAG for rich contextual responses, silent function calling for real-time tool use without delays, and client-side tool support like canvas and whiteboard via our Agent SDK.</p><h2 id="achieving-20%C3%97-cost-reduction-with-98-cloud-performance"><strong>Achieving 20× Cost Reduction with 98% Cloud Performance</strong></h2><p>Our engine orchestrates a local-remote workflow, delivering real-time speech capabilities on low-latency devices while leveraging cloud infrastructure for complex tasks. This results in a 20× cost reduction while maintaining 98% of cloud performance, ensuring a cost-effective, privacy-friendly, and high-performance solution.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://assets.videosdk.live/static-assets/ghost/2025/03/NAMO-SSLM-Device---Cloud.png" class="kg-image" alt="" loading="lazy" width="1920" height="1080"/><figcaption><span style="white-space: pre-wrap;">Cost Vs Accuracy Tradeoff</span></figcaption></figure><h2 id="enterprise-grade-privacy-compliance">Enterprise-Grade Privacy &amp; Compliance</h2><p>NAMO-SSLM keeps sensitive data on-device by default, ensuring documents and conversations never leave the device unless explicitly required. This hybrid design minimizes cloud dependency, reduces risk exposure, and delivers GDPR, HIPAA, and SOC2 compliance—making it more secure and privacy-first than traditional cloud-only models.</p><p>This approach not only meets the pressing need for robust compliance in highly regulated sectors like BFSI and Healthcare, but also benefits any organization demanding rapid, private, and cost-effective AI-driven customer experiences.</p><h2 id="open-sourcing-namo-sslm">Open Sourcing NAMO-SSLM</h2><p>We are excited to open-source <strong>NAMO-SSLM</strong>, a small yet powerful real-time multimodal model.
The AI landscape is shifting from massive, resource-intensive models to lightweight, optimized small models—and for good reason. Small models (like NAMO-SSLM) offer a compelling mix of efficiency, speed, and cost-effectiveness, making them the smarter choice for real-world applications.</p><p>Key research includes:</p><ul><li><strong>Run on CPU</strong>: Run the model in real time on consumer CPU devices.</li><li><strong>Multimodal (voice + vision)</strong>: Native support for real-time speech, vision, and OCR capabilities.</li><li><strong>Low Latency, Real-Time Processing</strong>: Real-time streaming support with end-to-end latency as low as 80 ms.</li><li><strong>Multilingual Support</strong>: Enables multilingual and hybrid-language capabilities such as Hinglish.</li><li><strong>Multi-turn RAG</strong>: Supports multi-turn RAG to retrieve rich context while keeping the conversation real-time.</li><li><strong>Voiced + Silent Function / Tools Calling</strong>: Function calling with both silent and voiced execution, producing text as well as voice output.</li><li><strong>Client Side Tool Support</strong>: Tool support for building client-side UI/UX such as canvas, whiteboard, etc.</li></ul>
<!--kg-card-begin: html-->
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/mq-2R7hjQL4?si=aQzz5ZBxq9X0cYgT" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""/>
<!--kg-card-end: html-->
<p/>
<!--kg-card-begin: html-->
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/WKydRV85MOc?si=jUQrSE1MVcno_yBy" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""/>
<!--kg-card-end: html-->
<p/>
<!--kg-card-begin: html-->
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/YD853cGuc78?si=HQDoE9mipF-13ezv" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""/>
<!--kg-card-end: html-->
<p><strong>NAMO-SSLM Github</strong>:  <a href="https://github.com/videosdk-live/namo-sslm">https://github.com/videosdk-live/namo-sslm</a></p><p>The future of AI is efficient, real-time, and accessible on every device. With NAMO-SSLM, we're pioneering a hybrid on-device + cloud approach that delivers 5.7× cost efficiency while maintaining 98% of cloud-level performance.</p><p>Our hybrid architecture enables low-latency, private AI for a wide range of industries:</p><ul><li><strong>Healthcare</strong> – Real-time AI assistants for clinical support</li><li><strong>Banking</strong> – Secure and seamless voice authentication</li><li><strong>Insurance</strong> – AI-powered automated claims processing</li><li><strong>Social</strong> – Instant multilingual voice translation</li><li><strong>Education</strong> – Personalized AI tutors for interactive learning</li><li><strong>Smart Glasses</strong> – Hands-free, AI-driven digital assistants</li><li><strong>Robots</strong> – Conversational AI for intelligent automation</li><li><strong>Gaming</strong> – Realistic, AI-powered NPC interactions</li></ul><p>By open-sourcing NAMO-SSLM, we are empowering developers and businesses to build privacy-first, low-latency AI applications across industries—from healthcare to gaming, smart glasses to education.</p><p>The AI revolution is shifting from large, resource-heavy models to lightweight, real-time, multimodal intelligence. 
NAMO-SSLM is the future—blending speech and vision AI with real-time processing, multilingual capabilities, and CPU-optimized performance.</p><p>Join us in <strong>building the next generation of digital humans.</strong> 🚀</p><p><a href="https://jobs.videosdk.live/forms/c0d46e347e2171b19b0bc2227031030e612ce9506bba525a4890df1751d4eb6e">Apply now</a></p>]]></content:encoded></item><item><title><![CDATA[Building Real-Time AI Voice Agents with Google Gemini 3.1 Flash Live and VideoSDK]]></title><description><![CDATA[In this blog post, you’ll learn how to build a real-time AI voice agent using Google’s Gemini 3.1 Flash Live Preview and the VideoSDK Python SDK.]]></description><link>https://www.videosdk.live/blog/how-to-build-real-time-ai-voice-agents-with-google-gemini-3-1-flash-live-and-videosdk</link><guid isPermaLink="false">69c76f6555831517a5a8a562</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 30 Mar 2026 08:18:22 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/03/Gemini-3-Pro---Flash.svg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/03/Gemini-3-Pro---Flash.svg" alt="Building Real-Time AI Voice Agents with Google Gemini 3.1 Flash Live and VideoSDK"/><p>Google just launched Gemini 3.1 Flash Live Preview, its most capable real-time voice and audio model yet. If you're building AI voice agents, conversational apps, or anything that needs low-latency audio intelligence, this model is a big deal.
And with <a href="https://github.com/videosdk-live/agents" rel="noreferrer">VideoSDK's Python SDK</a>, plugging it into your app takes just a few minutes.</p><p>In this blog, we'll walk through what the new model can do, and then build a working voice agent step by step using VideoSDK.</p><h3 id="whats-new-in-gemini-31-flash-live-preview">What's New in Gemini 3.1 Flash Live Preview</h3><p>Google describes this as its "<strong>highest-quality audio and voice model yet</strong>," and there are a few things that actually back that up.</p><p>It's built for real-time, audio-first experiences. Unlike models that convert speech to text and then process it, Gemini 3.1 Flash Live works audio-to-audio, meaning it hears you and responds as audio, keeping the conversation feeling natural and fast.</p><h3 id="heres-what-stands-out">Here's what stands out:</h3><ul><li>Lower latency than before. Compared to 2.5 Flash Native Audio, this model is noticeably faster. Fewer awkward pauses, snappier responses. That matters a lot when you're building voice agents where delays break the experience.</li><li>It actually understands how you say things. The model picks up on acoustic nuances: pitch, pace, and tone. So it can tell when you're asking a casual question vs. when you sound urgent or confused.</li><li>Better background noise handling. It filters out noise more effectively, which means it works in real environments, not just quiet studios.</li><li>Multilingual out of the box. Over 90 languages supported for real-time conversations.</li><li>Longer conversation memory. It can follow the thread of a conversation for twice as long as the previous generation. So your agent won't "forget" what was said earlier in a long session.</li><li>Tool use during live conversations. This one is huge for agent builders. The model can now trigger external tools (APIs, functions, searches) while a live conversation is happening, not just at the end of a turn.</li><li>Multimodal awareness. 
It handles audio and video inputs together, so you can build agents that respond to what they see and hear at the same time.</li></ul><p><strong>The model ID is: gemini-3.1-flash-live-preview</strong></p><h2 id="building-a-voice-agent-with-videosdk">Building a Voice Agent with VideoSDK</h2><p>VideoSDK gives you everything you need to wire Gemini 3.1 Flash Live into a real voice application. Here's how to get set up from scratch.</p><h3 id="step-1-create-and-activate-a-python-virtual-environment">Step 1 : Create and Activate a Python Virtual Environment</h3><p>First, create a clean Python environment so your project dependencies stay isolated.</p><pre><code class="language-bash">python3 -m venv venv</code></pre><p><strong>Activate it:</strong></p><p><strong>macOS/Linux</strong></p><pre><code class="language-bash">source venv/bin/activate</code></pre><p><strong>Windows</strong></p><pre><code class="language-bash">venv\Scripts\activate</code></pre><p>You should see (venv) in your terminal, which means you're good to go.</p><h3 id="step-2-set-up-your-environment-variables">Step 2 : Set Up Your Environment Variables</h3><p>Create a .env file in your project root and add your API keys:</p><pre><code class="language-bash">VIDEOSDK_AUTH_TOKEN=your_videosdk_token_here
GOOGLE_API_KEY=your_google_api_key_here</code></pre><p>You can get your <a href="https://dub.sh/zXYQt7V" rel="noreferrer">VideoSDK auth token</a> from the <a href="https://dub.sh/zXYQt7V" rel="noreferrer">VideoSDK dashboard</a> and your Google API key from Google AI Studio.</p><p>Important: when <a href="https://aistudio.google.com/app/api-keys" rel="noreferrer">GOOGLE_API_KEY</a> is set in your .env file, do not pass api_key as a parameter in your code; the SDK picks it up automatically.</p><h3 id="step-3-install-the-required-packages">Step 3 : Install the Required Packages</h3><p>Install VideoSDK's agents SDK along with the Google plugin:</p><pre><code class="language-bash">pip install "videosdk-agents[google]"</code></pre><h3 id="step-4-create-your-agent-mainpy">Step 4 : Create Your Agent (main.py)</h3><p>Create a file called main.py in your project folder and paste in the following code:</p><pre><code class="language-python">from videosdk.agents import Agent, AgentSession, Pipeline, JobContext, RoomOptions, WorkerJob
from videosdk.plugins.google import GeminiRealtime, GeminiLiveConfig
import logging
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", handlers=[logging.StreamHandler()])

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are VideoSDK's Voice Agent, a helpful voice assistant that can answer questions and help with tasks.",
        )

    async def on_enter(self) -&gt; None:
        await self.session.say("Hello, how can I help you today?")
    
    async def on_exit(self) -&gt; None:
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    agent = MyVoiceAgent()
    model = GeminiRealtime(
        model="gemini-3.1-flash-live-preview",
        # When GOOGLE_API_KEY is set in .env - DON'T pass api_key parameter
        # api_key="AIXXXXXXXXXXXXXXXXXXXX", 
        config=GeminiLiveConfig(
            voice="Leda", # Puck, Charon, Kore, Fenrir, Aoede, Leda, Orus, and Zephyr.
            response_modalities=["AUDIO"]
        )
    )

    pipeline = Pipeline(llm=model)
    session = AgentSession(
        agent=agent,
        pipeline=pipeline
    )

    await session.start(wait_for_participant=True, run_until_shutdown=True)

def make_context() -&gt; JobContext:
    room_options = RoomOptions(
        # room_id="&lt;room_id&gt;", # Replace it with your actual room_id
        name="Gemini Realtime Agent",
        playground=True,
    )

    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()</code></pre><p><strong>To run the agent:</strong></p><pre><code class="language-bash">python main.py</code></pre><p>Once you run this command, a playground URL will appear in your terminal. You can use this URL to interact with your AI agent.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/03/Screenshot-2026-03-28-at-1.05.35-PM.png" class="kg-image" alt="Building Real-Time AI Voice Agents with Google Gemini 3.1 Flash Live and VideoSDK" loading="lazy" width="2100" height="698"/></figure><h3 id="what-can-you-build-with-this">What Can You Build With This?</h3><p>Gemini 3.1 Flash Live + VideoSDK opens up a pretty wide range of real-world use cases:</p><ul><li>Customer support voice bots. Replace or supplement your call center with agents that actually understand tone and can handle multilingual customers in real time.</li><li>AI meeting assistants. Agents that join calls, take notes, answer questions from participants, and trigger follow-up actions mid-conversation.</li><li>Healthcare intake agents. Voice-based triage agents that collect patient information, ask follow-up questions, and route to the right department, all in a natural spoken conversation.</li><li>Language tutors. Real-time conversation partners that catch pronunciation issues, adjust their pace based on the learner, and respond naturally.</li><li>Voice-controlled IoT and home automation. Agents that listen continuously, understand context, and trigger device actions through tool use, all in sub-second response times.</li><li>Live interview prep tools. Candidates practice answering questions aloud and get spoken feedback instantly.</li></ul><h2 id="conclusion">Conclusion</h2><p>Gemini 3.1 Flash Live Preview is a meaningful step forward for real-time voice AI. 
The improvements in latency, noise handling, multilingual support, and especially live tool use make it a strong foundation for production voice agents.</p><p>VideoSDK wraps all of that into a clean Python SDK that gets you from zero to a running agent in a handful of lines. Whether you're prototyping or building something you intend to ship, the setup here gives you a solid starting point.</p><h2 id="next-steps-and-resources">Next Steps and Resources</h2><ul><li>Check the <a href="https://docs.videosdk.live/ai_agents/plugins/realtime/google-live-api" rel="noreferrer">Gemini 3.1 implementation</a> docs</li><li>Learn how to <a href="https://docs.videosdk.live/ai_agents/deployments/agent-cloud/cli/deploy" rel="noreferrer">deploy your agents</a></li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6">Discord community ↗</a>. We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[How to build AI Virtual Avatars using Anam-AI and VideoSDK AI Voice Agents]]></title><description><![CDATA[Learn how to build a fully interactive AI virtual avatar by combining real-time voice intelligence with lifelike digital humans. 
In this guide, you’ll explore how to connect conversational AI with expressive avatar rendering.]]></description><link>https://www.videosdk.live/blog/how-to-build-ai-virtual-avatars-using-anam-ai-and-videosdk-ai-voice-agents</link><guid isPermaLink="false">69c515c155831517a5a8a516</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 26 Mar 2026 11:33:51 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/03/thumbnail_avatar.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/03/thumbnail_avatar.png" alt="How to build AI Virtual Avatars using Anam-AI and VideoSDK AI Voice Agents"/><p>Building AI virtual avatars is no longer about stitching together experimental tools; it’s about delivering a complete, real-time conversational experience where voice, intelligence, and visual presence work as one system.</p><p>By combining Anam-AI’s lifelike digital humans with VideoSDK AI Voice Agents, developers can create interactive avatars that don’t just speak: they listen, reason, respond, and express themselves visually in real time.</p><p>In a production setting, the avatar is not merely a visual layer. It is the final stage of the conversational pipeline where AI responses become human presence. Once a model generates speech, that audio must be delivered with minimal latency and synchronized perfectly with facial animation. Even small delays can break immersion, disrupt turn-taking, and make the interaction feel artificial.</p><p><a href="https://anam.ai/" rel="noreferrer">AnamAI</a> is designed specifically for real-time avatar rendering. It converts live audio streams into natural facial motion, lip synchronization, and expressive behavior, allowing AI agents to appear as believable digital humans. 
Meanwhile, VideoSDK handles the conversational backbone, capturing user audio, routing it to AI models, streaming responses back, and managing real-time sessions at scale.</p><p>In this guide, we’ll walk through how to build a fully interactive AI virtual avatar using <a href="https://docs.videosdk.live/ai_agents/plugins/avatar/anam-ai" rel="noreferrer">AnamAI</a> and <a href="https://docs.videosdk.live/ai_agents/plugins/avatar/anam-ai" rel="noreferrer">VideoSDK AI Voice Agents</a>, connect a real-time speech model, enable tool usage, and deploy an avatar that can hold natural conversations with users.</p><h2 id="why-use-anamai-videosdk-for-ai-avatars">Why Use AnamAI + VideoSDK for AI Avatars?</h2><ul><li><strong>Real-time talking avatars</strong> with natural lip-sync</li><li><strong>End-to-end voice interaction pipeline</strong></li><li><strong>Low-latency streaming</strong>, suitable for live conversations</li><li><strong>Tool-enabled intelligence</strong> (weather, search, actions)</li><li><strong>Production-ready infrastructure</strong></li></ul><p>If you’re already building conversational AI, adding a visual avatar layer can dramatically improve engagement and trust. 
You can use either a <a href="https://github.com/videosdk-live/agents/blob/main/examples/avatar/anam_realtime_example.py" rel="noreferrer">realtime pipeline</a> or a <a href="https://github.com/videosdk-live/agents/blob/main/examples/avatar/anam_cascading_example.py" rel="noreferrer">cascading pipeline</a>.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/03/anam--1-.png" class="kg-image" alt="How to build AI Virtual Avatars using Anam-AI and VideoSDK AI Voice Agents" loading="lazy" width="3563" height="2019"/></figure><h2 id="let%E2%80%99s-get-started">Let’s get started</h2><h3 id="step-1-create-and-activate-the-virtual-environment">Step 1 : Create and activate the virtual environment</h3><p>macOS/Linux</p><pre><code class="language-bash">python3.12 -m venv venv
source venv/bin/activate</code></pre><p>Windows</p><pre><code class="language-bash">python -m venv venv
venv\Scripts\activate</code></pre><h3 id="step-2-install-all-dependencies">Step 2 : Install all dependencies</h3><p>Install the required VideoSDK Agents package:</p><pre><code class="language-bash">pip install "videosdk-agents[anam,google]"</code></pre><p>Install any additional plugins you plan to use (the Anam avatar plugin is included in the ecosystem).</p><h3 id="step-3-authentication">Step 3 : Authentication</h3><p>You will need:</p><ul><li><a href="https://lab.anam.ai/api-keys">AnamAI API key</a> and Avatar ID</li><li><a href="https://aistudio.google.com/">Google API key</a> (for Gemini realtime model)</li><li><a href="https://dub.sh/zXYQt7V">VideoSDK authentication token</a></li></ul><pre><code class="language-bash">ANAM_API_KEY=your_anam_key
ANAM_AVATAR_ID=your_avatar_id
GOOGLE_API_KEY=your_google_key
VIDEOSDK_AUTH_TOKEN=your_token</code></pre><p>When using a .env file, the SDK automatically reads credentials; you don’t need to pass them manually.</p><h3 id="step-4-create-a-mainpy-file">Step 4 : Create a <code>main.py</code> file</h3><p>In this example, we’ve used a <a href="https://github.com/videosdk-live/agents/blob/main/examples/avatar/anam_realtime_example.py" rel="noreferrer">realtime pipeline</a>. However, if you want to use a cascading pipeline, you can <a href="https://github.com/videosdk-live/agents/blob/main/examples/avatar/anam_cascading_example.py" rel="noreferrer">follow this example</a>.</p><pre><code class="language-python">import aiohttp
import os

from videosdk.agents import Agent, AgentSession, RealTimePipeline, function_tool, JobContext, RoomOptions, WorkerJob
from videosdk.plugins.google import GeminiRealtime, GeminiLiveConfig
from videosdk.plugins.anam import AnamAvatar
import logging
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", handlers=[logging.StreamHandler()])

@function_tool
async def get_weather(
    latitude: str,
    longitude: str,
):
    """Called when the user asks about the weather. This function will return the weather for
    the given location. When given a location, please estimate the latitude and longitude of the
    location and do not ask the user for them.

    Args:
        latitude: The latitude of the location
        longitude: The longitude of the location
    """
    print("###Getting weather for", latitude, longitude)
    url = f"https://api.open-meteo.com/v1/forecast?latitude={latitude}&amp;longitude={longitude}&amp;current=temperature_2m"
    weather_data = {}
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            if response.status == 200:
                data = await response.json()
                print("###Weather data", data)
                weather_data = {
                    "temperature": data["current"]["temperature_2m"],
                    "temperature_unit": "Celsius",
                }
            else:
                raise Exception(
                    f"Failed to get weather data, status code: {response.status}"
                )

    return weather_data

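# --- Illustrative addition (not part of the original example) ----------------
# Any async function decorated with @function_tool can be offered to the model
# through the Agent's `tools` list; the docstring and type hints are the schema
# the LLM sees. A second tool could follow the same pattern (add it to
# `tools=[...]` in MyVoiceAgent below to enable it):
try:
    function_tool  # already imported from videosdk.agents at the top of this file
except NameError:  # fallback so this snippet also runs standalone
    def function_tool(f):
        return f

@function_tool
async def get_current_time(utc_offset_hours: float):
    """Called when the user asks for the current time.

    Args:
        utc_offset_hours: The UTC offset of the user's timezone, e.g. 5.5
    """
    from datetime import datetime, timedelta, timezone
    tz = timezone(timedelta(hours=utc_offset_hours))
    return {"current_time": datetime.now(tz).strftime("%H:%M")}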

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are VideoSDK's AI Avatar Voice Agent with real-time capabilities. You are a helpful virtual assistant with a visual avatar that can answer questions about the weather and help with other tasks in real time.",
            tools=[get_weather]
        )

    async def on_enter(self) -&gt; None:
        await self.session.say("Hello! I'm your real-time AI avatar assistant powered by VideoSDK. How can I help you today?")
    
    async def on_exit(self) -&gt; None:
        await self.session.say("Goodbye! It was great talking with you!")
        

async def start_session(context: JobContext):
    # Initialize Gemini Realtime model
    model = GeminiRealtime(
        model="gemini-2.5-flash-native-audio-preview-12-2025",
        # When GOOGLE_API_KEY is set in .env - DON'T pass api_key parameter
        # api_key="AIXXXXXXXXXXXXXXXXXXXX", 
        config=GeminiLiveConfig(
            voice="Leda",  # Puck, Charon, Kore, Fenrir, Aoede, Leda, Orus, and Zephyr.
            response_modalities=["AUDIO"]
        )
    )

    # Initialize Anam Avatar
    anam_avatar = AnamAvatar(
        api_key=os.getenv("ANAM_API_KEY"),
        avatar_id=os.getenv("ANAM_AVATAR_ID"),
    )

    # Create pipeline with avatar
    pipeline = RealTimePipeline(model=model, avatar=anam_avatar)

    session = AgentSession(agent=MyVoiceAgent(), pipeline=pipeline)

    await session.start(wait_for_participant=True, run_until_shutdown=True)

def make_context() -&gt; JobContext:
    room_options = RoomOptions(
        room_id="&lt;room_id&gt;",
        name="Anam Avatar Realtime Agent",
        playground=False 
    )

    return JobContext(room_options=room_options)


if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()</code></pre><h3 id="step-5-run-the-file">Step 5: Run the file</h3><pre><code class="language-python">python main.py</code></pre><h3 id="step-6-deploy-your-ai-agent">Step 6: Deploy your AI Agent</h3><p>Follow <a href="https://www.videosdk.live/blog/how-to-deploy-your-ai-voice-agents-in-videosdk" rel="noopener noreferrer">this guide</a> to deploy your AI voice agent. We’ll walk you through every step required to set up, configure, and launch your agent successfully.</p><h2 id="real-world-applications">Real-World Applications</h2><p>AI virtual avatars unlock a wide range of real-time interactive experiences. By combining conversational AI with expressive digital humans, you can build applications such as:</p><ul><li><strong>Customer support agents</strong> that provide instant, human-like assistance</li><li><strong>Virtual tutors and trainers</strong> for personalized learning experiences</li><li><strong>Healthcare assistants</strong> that guide patients and answer common questions</li><li><strong>AI sales representatives</strong> that engage and qualify leads in real time</li><li><strong>Event hosts and presenters</strong> for webinars, conferences, and live streams</li><li><strong>Interactive entertainment characters</strong> for games and immersive experiences</li></ul><p>In any scenario where <strong>human-like communication improves engagement</strong>, AI avatars can significantly enhance the user experience.</p><h2 id="conclusion">Conclusion</h2><p>AI avatars represent the next evolution of conversational interfaces. 
Text chatbots lack presence, and voice assistants lack visual connection, but real-time digital humans combine intelligence, speech, and embodiment into a single experience.</p><p>With AnamAI providing expressive visual rendering and VideoSDK delivering the real-time conversational infrastructure, developers can now build production-ready virtual humans that feel natural, responsive, and engaging.</p><p>Plug in your model, choose an avatar, and bring your AI to life.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>You can also use a cascading pipeline instead of the realtime pipeline - <a href="https://github.com/videosdk-live/agents/blob/main/examples/avatar/anam_cascading_example.py">see the full working example here</a></li><li>For more information, visit the <a href="https://docs.videosdk.live/ai_agents/plugins/avatar/anam-ai">AnamAI documentation</a></li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction">deploy your AI Agents</a>.</li><li>Sign up at&nbsp;<a href="https://dub.sh/zXYQt7V">VideoSDK Dashboard</a></li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6">Discord community ↗</a>. We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[How to Deploy Your VideoSDK AI Voice Agents]]></title><description><![CDATA[Agent Cloud simplifies the deployment process by letting you package, deploy, and manage your agent in the cloud using a simple CLI workflow. 
In this guide, you'll learn how to move your agent from local development to a production-ready deployment.]]></description><link>https://www.videosdk.live/blog/how-to-deploy-your-ai-voice-agents-in-videosdk</link><guid isPermaLink="false">698f128355831517a5a8a286</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 12 Mar 2026 05:47:20 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/03/deployment.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/03/deployment.png" alt="How to Deploy Your VideoSDK AI Voice Agents"/><p>You built an AI agent. It works locally. Then reality hits.</p><p>How do you run it 24/7?<br>How do you scale it?<br>How do you secure keys, manage versions, and handle real users?</p><p>Productionizing an agent is not just “docker build &amp;&amp; run.”<br>It’s infrastructure, orchestration, secrets, sessions, regions, and reliability.</p><p>Agent Cloud removes that entire layer.<br>With a single CLI workflow, you can package your agent, deploy it globally, run live sessions, and manage everything from versioning to scaling without building your own backend platform.</p><p>This guide walks through the complete deployment process, from local code to a production-ready agent running in the cloud.</p><ul><li>Build and push your agent container</li><li>Deploy and version your agent</li><li>Monitor and control deployments</li><li>Use the VideoSDK dashboard for low-code management</li></ul><p>By the end, your agent will be live and ready to serve users.</p><h2 id="prerequisites">Prerequisites</h2><p>Before starting, make sure you have:</p><ul><li><a href="https://www.docker.com/" rel="noreferrer">Docker</a> installed and running</li><li>A <a href="https://dub.sh/zXYQt7V" rel="noreferrer">VideoSDK account</a></li><li>An AI agent 
project with a Dockerfile</li></ul><h2 id="1-install-the-cli">1. Install the CLI</h2><p>To get started with VideoSDK Agent Cloud, you need to install the VideoSDK CLI. There are two ways to install it:</p><h3 id="using-pip">Using pip</h3><p>You can install the VideoSDK CLI using&nbsp;<code>pip</code>, the Python package manager.</p><pre><code class="language-python">pip install videosdk-cli</code></pre><h3 id="using-curl">Using curl</h3><p>This is the quickest way to install the VideoSDK CLI on Linux and macOS.</p><pre><code class="language-python">curl -fsSL https://videosdk.live/install | bash</code></pre><h2 id="2-authenticate-the-cli">2. Authenticate the CLI</h2><p>First, connect your local environment to your account.</p><pre><code class="language-python">videosdk auth login</code></pre><p>The CLI opens a browser window where you approve the authentication request. Once approved, a secure token is stored locally and reused for future commands.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/auth-cli.png" class="kg-image" alt="How to Deploy Your VideoSDK AI Voice Agents" loading="lazy" width="3306" height="1458"/></figure><h2 id="initialize-your-agent">Initialize your agent</h2><p>The&nbsp;<code>init</code>&nbsp;command sets up a new agent deployment by creating an agent and a deployment in VideoSDK cloud. 
It also generates a&nbsp;<code>videosdk.yaml</code>&nbsp;configuration file in your project directory.</p><pre><code class="language-python">videosdk agent init --name my-agent</code></pre><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/Screenshot-2026-02-24-at-11.40.24-AM.png" class="kg-image" alt="How to Deploy Your VideoSDK AI Voice Agents" loading="lazy" width="2232" height="294"/></figure><h3 id="videosdkyaml-structure">videosdk.yaml Structure</h3><p>The generated&nbsp;<code>videosdk.yaml</code>&nbsp;file will look like this:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/Screenshot-2026-02-24-at-11.41.22-AM.png" class="kg-image" alt="How to Deploy Your VideoSDK AI Voice Agents" loading="lazy" width="462" height="280"/></figure><p>Add the <code>agent_id</code> from the generated <code>videosdk.yaml</code> file to the code in this section:</p><pre><code class="language-python">if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context,
        options=Options(
            register=True,
            agent_id="your-agent_id"
        )
    )
    job.start()</code></pre><h2 id="3-build-your-agent-container">3. Build Your Agent Container</h2><p>Agent Cloud runs agents as containers. Use the CLI to build a Docker image for your agent.</p><p>You need to log in to Docker first. After logging in, you will get your Docker ID. Replace <code>myrepo</code> with your Docker ID and use it in all the commands below.</p><pre><code class="language-python">videosdk agent build --image myrepo/myagent:v1
</code></pre><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/Screenshot-2026-02-24-at-12.06.58-PM.png" class="kg-image" alt="How to Deploy Your VideoSDK AI Voice Agents" loading="lazy" width="2206" height="986"/></figure><h3 id="what-happens">What Happens</h3><ol><li><strong>Cloud Creation</strong>: The CLI communicates with VideoSDK cloud to create a new agent and a corresponding deployment.</li><li><strong>Config Generation</strong>: A&nbsp;<code>videosdk.yaml</code>&nbsp;file is created in your current directory. This file contains the unique IDs for your agent and deployment.</li><li><strong>Project Setup</strong>: Your local project is now linked to the cloud resources.</li></ol><h2 id="4-push-the-image-to-a-registry">4. Push the Image to a Registry</h2><p>Before deployment, the image must be available in a container registry.</p><pre><code class="language-python">videosdk agent push --image myrepo/myagent:v1</code></pre><p>Supported registries include Docker Hub, GitHub Container Registry, AWS ECR, Google Container Registry, and Azure.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/Screenshot-2026-02-24-at-11.50.40-AM.png" class="kg-image" alt="How to Deploy Your VideoSDK AI Voice Agents" loading="lazy" width="2228" height="624"/></figure><p>For private registries:</p><pre><code class="language-python">videosdk agent push \
  --image ghcr.io/myorg/myagent:v1 \
  -u username \
  -p token</code></pre><h2 id="4-deploy-your-agent">4. Deploy Your Agent</h2><p>Create a new version of your agent on VideoSDK cloud.</p><pre><code class="language-python">videosdk agent deploy --image myrepo/myagent:v1
</code></pre><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/Screenshot-2026-02-24-at-11.53.51-AM.png" class="kg-image" alt="How to Deploy Your VideoSDK AI Voice Agents" loading="lazy" width="2216" height="490"/></figure><p>This creates a <strong>versioned deployment</strong> with configurable compute resources and scaling.</p><p>You can customize:</p><ul><li>Replica limits</li><li>Compute profile</li><li>Deployment region</li><li>Secrets</li><li>Version tags</li></ul><pre><code class="language-python">videosdk agent deploy my-version \
  --image myrepo/myagent:v1 \
  --profile cpu-medium \
  --region in002
</code></pre><h2 id="5-manage-versions">5. Manage Versions</h2><p>Each deployment creates a version that can be activated, updated, or rolled back.</p><p>List versions:</p><pre><code class="language-python">videosdk agent version list
</code></pre><p>Activate a version:</p><pre><code class="language-python">videosdk agent version activate -v ver123</code></pre><p>Update scaling:</p><pre><code class="language-python">videosdk agent version update -v ver123 \
  --min-replica 2 \
  --max-replica 10
</code></pre><p>Check status:</p><pre><code class="language-python">videosdk agent version status -v ver123
</code></pre><h2 id="6-secure-configuration-with-secrets">6. Secure Configuration with Secrets</h2><p>Agents often need API keys or credentials.</p><p>Create a secret set from a <code>.env</code> file:</p><pre><code class="language-python">videosdk agent secrets my-secrets create --file .env
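# The .env file is a plain KEY=VALUE list. For the agents in this series it
# would typically hold entries such as (illustrative; names taken from the
# examples above — adjust to the providers you actually use):
#   VIDEOSDK_AUTH_TOKEN=your_token
#   GOOGLE_API_KEY=...
#   ANAM_API_KEY=...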
</code></pre><p>Use it during deployment:</p><pre><code class="language-python">videosdk agent deploy \
  --image myrepo/myagent:v1 \
  --env-secret my-secrets
</code></pre><p>Secrets are injected securely as environment variables at runtime.</p><h2 id="7-start-a-live-agent-session">7. Start a Live Agent Session</h2><p>Once deployed, you can run the agent in a room.</p><pre><code class="language-python">videosdk agent session start -v ver123
</code></pre><p>This launches a live instance of your agent.</p><p>You can specify a room:</p><pre><code class="language-python">videosdk agent session start -v ver123 -r support-room
</code></pre><p>Stop the session:</p><pre><code class="language-python">videosdk agent session stop -r support-room
</code></pre><p>List sessions:</p><pre><code class="language-python">videosdk agent sessions list
</code></pre><h2 id="8-manage-everything-from-the-dashboard-low-code-option">8. Manage Everything from the Dashboard (Low-Code Option)</h2><p>If you prefer a visual workflow, the VideoSDK dashboard provides full control over deployments.</p><p>From the dashboard you can:</p><ul><li>View all agents</li><li>Monitor versions and replicas</li><li>Inspect logs and sessions</li><li>Manage secrets</li><li>Start or stop deployments</li><li>Configure regions and scaling</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/Screenshot-2026-02-24-at-1.38.11-PM.png" class="kg-image" alt="How to Deploy Your VideoSDK AI Voice Agents" loading="lazy" width="2926" height="1220"/></figure><p>The CLI and dashboard stay fully synchronized: changes made in one are reflected in the other.</p><h2 id="why-this-workflow-matters">Why This Workflow Matters</h2><p>Traditional deployment of real-time AI systems requires stitching together multiple components:</p><ul><li>Container orchestration</li><li>Autoscaling</li><li>Secrets management</li><li>Session routing</li><li>Infrastructure monitoring</li><li>Regional deployment</li></ul><p>Agent Cloud abstracts this complexity into a developer-friendly workflow while preserving control and flexibility.</p><p>You focus on building intelligence. 
The platform handles running it reliably.</p><h2 id="final-thoughts">Final Thoughts</h2><p>With the VideoSDK CLI, you can go from a local prototype to a production-ready deployment in minutes, complete with scaling, versioning, and live sessions.</p><p>Whether you prefer CLI automation or dashboard control, Agent Cloud provides a streamlined path to running AI agents in the real world.</p><h2 id="next-steps-and-resources">Next Steps and Resources</h2><ul><li>Explore more about deployments through <a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">the docs</a></li><li>Sign up at the VideoSDK&nbsp;<a href="https://dub.sh/BVOvGNr" rel="noreferrer">dashboard</a></li><li>Have feedback or ideas? Comment below or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a> to build and ship AI call agents faster with VideoSDK. We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[Introducing VideoSDK Agent Cloud: Deploy Voice Agents at Production Scale]]></title><description><![CDATA[In this launch, VideoSDK introduces Deployments, a purpose-built infrastructure layer for running AI voice agents reliably in production. 
Designed to handle scaling, concurrency, latency, and uptime out of the box.]]></description><link>https://www.videosdk.live/blog/introducing-videosdk-agent-cloud-deploy-voice-agents-at-production-scale</link><guid isPermaLink="false">699ebca155831517a5a8a3c3</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 26 Feb 2026 12:01:24 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/02/deployment.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/02/deployment.png" alt="Introducing VideoSDK Agent Cloud: Deploy Voice Agents at Production Scale"/><p>AI innovation doesn’t slow down in production, but infrastructure often does.<br>As agents move from demos to real users, the challenges shift from building intelligence to running it reliably at scale. Concurrency spikes, cold starts, regional latency, uptime guarantees: these aren’t model problems. They’re infrastructure problems.</p><p>At VideoSDK, we believe AI teams shouldn’t have to become infrastructure teams to ship production systems.</p><p>Today, we’re introducing <a href="https://docs.videosdk.live/ai_agents/deployments/agent-cloud/cli/deploy" rel="noreferrer"><strong>Cloud Deployments</strong></a> - a purpose-built infrastructure layer for running AI voice agents in the real world. 
Designed for scale, reliability, and operational visibility, Deployments handle the complexity of provisioning, scaling, and maintaining runtime environments so your agents can perform consistently under real traffic, managed through the <a href="https://docs.videosdk.live/ai_agents/deployments/agent-cloud/cli/deploy" rel="noreferrer">agent runtime dashboard</a> and <a href="https://docs.videosdk.live/ai_agents/deployments/agent-cloud/cli/deploy" rel="noreferrer">CLI</a>.</p><p>From startup MVPs to enterprise-grade workloads, this is the foundation that lets AI products move to production - without operational overhead.</p><h2 id="who-this-is-for">Who This Is For</h2><p>Deployments are built for teams shipping real AI products:</p><ul><li><strong>Voice AI builders</strong> running real-time conversations</li><li><strong>AI startups</strong> moving from demo → production</li><li><strong>Infra &amp; platform teams</strong> managing scale and reliability</li><li><strong>Enterprises</strong> deploying mission-critical automation</li></ul><h2 id="from-local-agent-to-production-endpoint">From Local Agent to Production Endpoint</h2><p>With Deployments, you can:</p><ul><li>Launch agents in dedicated compute environments</li><li>Scale automatically based on active sessions</li><li>Choose performance tiers based on workload</li><li>Run globally with regional isolation</li><li>Maintain predictable performance under load</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/blog3.png" class="kg-image" alt="Introducing VideoSDK Agent Cloud: Deploy Voice Agents at Production Scale" loading="lazy" width="2054" height="1388"/></figure><h2 id="deploy-in-minutesdashboard-or-cli">Deploy in Minutes - <a href="https://dub.sh/zXYQt7V" rel="noreferrer">Dashboard</a> or CLI</h2><p>Deployments support both visual and developer-first workflows.</p><h3 id="1-deploy-from-the-dashboard-fastest-path">1) Deploy from the <a 
href="https://dub.sh/zXYQt7V" rel="noreferrer">Dashboard</a> (Fastest Path)</h3><p><a href="https://dub.sh/zXYQt7V" rel="noreferrer">The dashboard</a> provides a guided experience where configuration, provisioning, and launch are handled automatically when you build agents from the runtime dashboard.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/blog.png" class="kg-image" alt="Introducing VideoSDK Agent Cloud: Deploy Voice Agents at Production Scale" loading="lazy" width="2360" height="1464"/></figure><p>After selecting CPU Profile options and a region, your agent is deployed to a managed environment, with no manual setup required.</p><h3 id="compute-plans-made-simple"><strong>Compute Plans Made Simple</strong></h3><p>We designed compute options to be intuitive, not cloud-jargon heavy.</p><ul><li>Agent Session CPU-Small (0.5 Cores, 1 GB)</li><li>Agent Session CPU-Medium (1 Core, 2 GB)</li><li>Agent Session CPU-Large (2 Cores, 3 GB)</li><li>Agent Reserved CPU-Small (0.5 Cores, 1 GB)</li><li>Agent Reserved CPU-Medium (1 Core, 2 GB)</li><li>Agent Reserved CPU-Large (2 Cores, 3 GB)</li><li>Agent Observability - Currently free</li></ul><h3 id="2-deploy-via-cli-automation-friendly">2) <a href="https://docs.videosdk.live/ai_agents/deployments/agent-cloud/cli/deploy" rel="noreferrer">Deploy via CLI</a> (Automation-Friendly)</h3><p>For teams integrating deployments into CI/CD pipelines or developer workflows, the <a href="https://docs.videosdk.live/ai_agents/deployments/agent-cloud/cli/deploy" rel="noreferrer">CLI</a> offers full control. 
Deployments can be launched with a few commands to configure and start the runtime environment.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/blog2.png" class="kg-image" alt="Introducing VideoSDK Agent Cloud: Deploy Voice Agents at Production Scale" loading="lazy" width="3162" height="905"/></figure><h2 id="deployment-observability-control">Deployment Observability &amp; Control</h2><p>Monitor and manage your Agent Deployment end-to-end from the dashboard: view active and historical sessions, track deployed versions, securely configure environment secrets, and inspect real-time logs and errors to debug issues, audit behavior, and ensure reliable production performance.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/blog5.png" class="kg-image" alt="Introducing VideoSDK Agent Cloud: Deploy Voice Agents at Production Scale" loading="lazy" width="1204" height="832"/></figure><h2 id="enterprise-onboarding-at-scale">Enterprise Onboarding at Scale</h2><p>Financial institutions and large enterprises often handle hundreds of customer onboarding interactions every day, many of them time-sensitive and compliance-critical.</p>
<!--kg-card-begin: html-->
<p style="font-size:20px; line-height:1.7; margin:20px 0;">

  Imagine an organization like 
  <span style="
    color: #B87EED;
    padding: 4px 8px;
    border-radius: 6px;
    font-weight: 600;
  ">
    ICICI Prudential running 100+ AI-powered onboarding calls daily
  </span>. 

  Traffic fluctuates throughout the day. Peak hours demand higher concurrency. 
  Latency must stay low. Uptime is non-negotiable.

</p>
<!--kg-card-end: html-->
<p>Instead of managing servers, autoscaling groups, and regional failovers, teams focus on improving the onboarding experience.</p><p>Deployments ensure enterprise-grade reliability while keeping operations simple.</p><h3 id="build-smarter-deploy-faster-scale-without-limits">Build Smarter. Deploy Faster. Scale Without Limits.</h3><p>Deployments remove the hardest part of shipping AI systems: running them reliably in the real world. What once required complex infrastructure, scaling strategies, and constant operational oversight can now be done in minutes with performance and reliability built in.</p><p>Whether you're launching your first production agent, handling thousands of conversations, or powering mission-critical workflows, Deployments give you the confidence to scale without re-architecting or managing servers.</p><p>Start small with on-demand sessions. Move to always-on capacity as you grow. Expand across regions as your users scale. The platform evolves with you from MVP to enterprise.</p><p>Your team focuses on intelligence and experience. We handle the infrastructure.</p><p><strong>Deploy your agent, reach real users, and build the future of AI Voice without operational friction.</strong></p><h2 id="resources"><strong>Resources</strong></h2><ul><li>Read more about how to deploy agents through CLI - <a href="https://docs.videosdk.live/ai_agents/deployments/agent-cloud/cli/deploy" rel="noreferrer">docs link</a>.</li><li>Low-code <a href="https://dub.sh/zXYQt7V" rel="noreferrer">agent runtime dashboard for AI Voice Agents</a>.</li><li>Sign in to <a href="https://dub.sh/zXYQt7V" rel="noreferrer">VideoSDK Dashboard</a>.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. 
We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[Product Updates - January 2026: Managed Inference, Expanded AI Ecosystem, and Advanced SDK Improvements]]></title><description><![CDATA[Kick off 2026 with VideoSDK’s biggest updates yet. January brings VideoSDK-managed inference for AI Agents, expanded AI integrations, new agent evaluation tools, advanced video optimizations, and deeper support for IoT voice experiences.]]></description><link>https://www.videosdk.live/blog/product-updates-january-2026-managed-inference-expanded-ai-ecosystem-and-advanced-sdk-improvements</link><guid isPermaLink="false">69942bd555831517a5a8a2a7</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 24 Feb 2026 11:18:19 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/02/2026.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/02/2026.png" alt="Product Updates - January 2026: Managed Inference, Expanded AI Ecosystem, and Advanced SDK Improvements"/><p>A complete recap of our biggest platform and product updates from January 2026.</p><p>Welcome to the January edition of the VideoSDK Monthly Updates! We’re starting 2026 with a major leap forward in AI infrastructure, voice capabilities, and developer control across the platform.</p><p>This month introduces a powerful new milestone: <a href="https://docs.videosdk.live/ai_agents/plugins/inference/videosdk-inference" rel="noreferrer"><strong>VideoSDK-managed inference for AI Agents</strong></a>, dramatically expanding what you can build without managing complex AI pipelines. 
Alongside this, we’ve broadened our AI ecosystem, launched evaluation tooling for agent performance, delivered advanced video optimization across SDKs, and pushed deeper into IoT voice experiences.</p><p>Let’s dive in.</p><h2 id="videosdk-managed-inference-in-agents-sdk">VideoSDK-Managed Inference In Agents SDK</h2><p>The biggest highlight this month is the launch of <a href="https://docs.videosdk.live/ai_agents/plugins/inference/videosdk-inference" rel="noreferrer"><strong>VideoSDK-managed inference support</strong></a> for Agents, a major step toward making real-time AI truly turnkey.</p><p>You can now run complete voice agent pipelines through the VideoSDK Gateway, eliminating the need to orchestrate multiple AI providers yourself.</p><h3 id="cascading-pipeline-supported-models-stt-%E2%86%92-llm-%E2%86%92-tts">Cascading Pipeline Supported Models (STT → LLM → TTS)</h3><p>Fully managed routing across components:</p><ul><li><strong>Speech-to-Text:</strong> <a href="https://docs.videosdk.live/ai_agents/plugins/inference/videosdk-inference#sttgoogle" rel="noreferrer">Google</a>, <a href="https://docs.videosdk.live/ai_agents/plugins/inference/videosdk-inference#stt-configuration" rel="noreferrer">SarvamAI</a>, Deepgram</li><li><strong>Large Language Models:</strong> <a href="https://docs.videosdk.live/ai_agents/plugins/inference/videosdk-inference#llmgoogle" rel="noreferrer">Google-supported models</a></li><li><strong>Text-to-Speech:</strong> <a href="https://docs.videosdk.live/ai_agents/plugins/inference/videosdk-inference#ttsgoogle" rel="noreferrer">Google</a>, <a href="https://docs.videosdk.live/ai_agents/plugins/inference/videosdk-inference#ttssarvam" rel="noreferrer">SarvamAI</a>, Cartesia</li></ul><h3 id="realtime-pipeline-supported-models">Realtime Pipeline Supported Models</h3><ul><li>Powered by <a href="https://docs.videosdk.live/ai_agents/plugins/inference/videosdk-inference#realtimegemini" rel="noreferrer">Gemini Realtime</a></li><li>Enables 
ultra-low-latency conversational experiences</li></ul><p>This update transforms agents from DIY integrations into a <strong>managed AI platform experience</strong>, dramatically reducing infrastructure overhead and time to production.</p><h3 id="expanded-speech-audio-capabilities">Expanded Speech &amp; Audio Capabilities</h3><ul><li>ElevenLabs: <a href="https://docs.videosdk.live/ai_agents/plugins/tts/eleven-labs" rel="noreferrer">Enhanced TTS</a> with language control</li><li><a href="https://docs.videosdk.live/ai_agents/plugins/tts/cartesia-tts" rel="noreferrer">Cartesia</a>: Advanced generation configuration (emotion, speed, volume)</li></ul><p>Together, these integrations significantly expand the range of voices, languages, latency profiles, and quality options available for building conversational AI.</p><h2 id="introducing-agent-evaluation-benchmarking">Introducing Agent Evaluation &amp; Benchmarking</h2><p>Building agents is only half the challenge; measuring their performance is equally critical.</p><p>This month introduces <a href="https://docs.videosdk.live/ai_agents/core-components/testing-and-evaluation" rel="noreferrer"><strong>videosdk-eval</strong></a>, a dedicated framework for testing and validating agent quality before production deployment.</p><p>Key capabilities include:</p><ul><li>Simulation of multi-turn conversations</li><li>Component-level evaluation (STT, LLM, TTS)</li><li>Latency tracking per component and end-to-end</li><li>Performance and quality benchmarking</li></ul>
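<p>The capabilities above come down to timing each stage and scoring its output. As a rough sketch of the kind of measurement such a framework automates (hand-rolled for illustration only, not the <code>videosdk-eval</code> API), per-component latency tracking can be as simple as wrapping each pipeline stage:</p><pre><code class="language-python">import time

latencies = {}

def timed(component):
    """Record wall-clock latency for one pipeline component."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            latencies.setdefault(component, []).append(time.perf_counter() - start)
            return result
        return inner
    return wrap

# Stand-ins for real STT / LLM calls:
@timed("stt")
def transcribe(audio):
    return "hello agent"

@timed("llm")
def respond(text):
    return "echo: " + text

reply = respond(transcribe(b"pcm bytes"))
end_to_end = sum(samples[0] for samples in latencies.values())</code></pre><p>A real evaluation framework applies this idea per component and end-to-end, across simulated multi-turn conversations, so regressions surface before production.</p>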
<!--kg-card-begin: html-->
<iframe width="560" height="315" src="https://www.youtube.com/embed/hPAdAmCU2OA?si=V2FAQRjKji3rNx3B" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""/>
<!--kg-card-end: html-->
<h2 id="real-time-analytics-observability-improvements">Real-Time Analytics &amp; Observability Improvements</h2><p>Production AI systems require deep visibility into performance. January brings major upgrades to analytics across the agent platform.</p><p>Enhancements include:</p><ul><li>Improved latency tracking and tracing</li><li>Token usage collection for major AI providers</li><li>Real-time analytics streaming via PubSub</li><li>Centralized playground analytics mode</li></ul><p>These capabilities provide actionable insights for optimizing cost, performance, and user experience.</p><h2 id="advanced-video-optimization-across-sdks">Advanced Video Optimization Across SDKs</h2><p>Our core RTC SDKs received significant upgrades to video quality control and monitoring.</p><h3 id="ios-sdkreliability-lifecycle-improvements"><a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer">iOS SDK</a> - Reliability &amp; Lifecycle Improvements</h3><p>New capabilities enhance stability for production apps:</p><ul><li>Explicit FAILED meeting state</li><li>Bulk removal of event listeners</li><li>Deterministic media track lifecycle management</li><li>Automatic restoration of microphone and camera states after reconnection</li></ul><h3 id="android-sdkimproved-observability"><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer">Android SDK</a> - Improved Observability</h3><p>Android updates focused on developer experience and runtime stability:</p><ul><li>Enhanced trace messaging</li><li>Safer listener management</li><li>Thread-safety improvements</li></ul><h2 id="expanding-into-iot-voice-experiences">Expanding Into IoT Voice Experiences</h2><p>January also marks continued progress toward embedded real-time communication.</p><h3 id="iot-sdk-enhancements"><a href="https://docs.videosdk.live/iot/guide/video-and-audio-calling-api-sdk/iot-sdk" rel="noreferrer">IoT SDK</a> 
Enhancements</h3><ul><li>Acoustic Echo Cancellation (AEC) support for ESP32-S3-Korvo-V2</li></ul><p>This enables clearer voice interactions on hardware devices such as smart assistants, kiosks, and industrial systems.</p><h2 id="%E2%9C%A8-what-this-means-for-developers">✨ What This Means for Developers</h2><p>January’s updates move the platform decisively toward a future where developers can build sophisticated real-time AI systems without managing infrastructure complexity.</p><p>From fully managed inference pipelines to expanded AI providers, evaluation tooling, and advanced media controls, these improvements significantly reduce the gap between prototype and production.</p><h2 id="sdk-sketches">SDK Sketches</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/jan-product-update.png" class="kg-image" alt="Product Updates - January 2026 : Managed Inference, Expanded AI Ecosystem, and Advanced SDK Improvements" loading="lazy" width="1344" height="768"/></figure><h2 id="whats-next">What's Next?</h2><p>As we continue through 2026, our focus remains on making real-time AI and communication infrastructure more powerful, flexible, and accessible to developers worldwide.</p><p>Expect deeper integrations, smarter agents, and continued improvements across performance, reliability, and developer experience.</p><h2 id="new-content-and-resources">New Content and Resources</h2><p>Explore our latest tutorials and blogs to help you build more advanced AI agents and voice workflows.</p><ul><li>Voicemail detection <a href="https://youtu.be/KSzxtUd5ULU" rel="noreferrer">tutorial</a> - Learn how to automatically detect voicemails and trigger intelligent callbacks or alternate flows.</li><li>Multi agent switching <a href="https://youtu.be/kF82zCIfT3s" rel="noreferrer">video tutorial</a> - See how to automatically hand off conversations between specialized agents while preserving context.</li><li>Testing and eval<a 
href="https://youtu.be/hPAdAmCU2OA" rel="noreferrer"> tutorial</a> - A step-by-step guide to validating agent performance using evaluation tools and simulated conversations.</li></ul><p>📝 Blogs</p><ul><li>Testing and <a href="https://www.videosdk.live/blog/introducing-testing-and-evaluation-in-ai-voice-agents" rel="noreferrer">eval blog</a> - Best practices for measuring quality, latency, and reliability before deploying to production.</li><li>Voicemail <a href="https://www.videosdk.live/blog/how-to-enable-voice-mail-detection-in-ai-voice-agents" rel="noreferrer">detection blog</a> - How to build smarter telephony agents that understand call outcomes.</li><li>Multi agent <a href="https://www.videosdk.live/blog/how-to-build-an-ai-voice-system-using-real-time-multi-agent-switching" rel="noreferrer">switching blog</a> - Designing complex workflows with specialized agents collaborating in real time.</li></ul>
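The "improved latency tracking" called out in the analytics section above comes down to recording per-turn timings and summarizing their distribution. A dependency-free sketch of that summary step (illustrative only, not the VideoSDK analytics API):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a non-empty list; p is in 0..100."""
    ranked = sorted(samples)
    k = math.ceil(p / 100 * len(ranked))
    return ranked[max(0, k - 1)]

# Hypothetical per-turn voice-agent latencies in milliseconds
turn_latencies_ms = [320, 410, 290, 505, 380, 450, 300, 620, 340, 395]

p50 = percentile(turn_latencies_ms, 50)  # typical turn
p95 = percentile(turn_latencies_ms, 95)  # slow-tail turn
```

Dashboards typically surface the p95/p99 tail rather than the mean, since a single slow turn is what breaks conversational flow.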
<!--kg-card-begin: html-->
<style>
   :root {
      --primary: #A497D9;
      --bg: #050608;
      --card-bg: #111217;
      --text-main: #E0E0E0;
      --text-muted: #A0A0A0;
      --border: #22242C;
    }
     .cta {
      font-family: 'Inter', -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif;
      line-height: 1.7;
      color: var(--text-main);
      background-color: var(--bg);
      margin: 0;
      padding: 0;
    }


      .cta-button {
        background-color: var(--primary);
        color: #000000;
        padding: 6px 20px;
        border-radius: 6px;
        font-size: 15px;
        font-weight: bold;
        display: inline-block;
        text-decoration: none;
    }
    .cta-box {
      text-align: left;
      background: linear-gradient(135deg, #111217 0%, #1a1b23 100%);
      padding: 40px;
      border-radius: 12px;
      border: 1px solid var(--primary);
      margin: 60px 0;
    }
</style>
<div class="cta">
<div class="cta-box">
      <h3>Ready to Build with the Latest?</h3>
      <p>Upgrade your SDKs to the latest versions to take advantage of all these new features and improvements.</p>
      <a href="https://discord.com/invite/Gpmj6eCq5u" class="cta-button">Join our Discord Community</a>
    </div></div>

<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Announcing VideoSDK Inference: One Magic API for Every Voice AI Model]]></title><description><![CDATA[We’re thrilled to announce Inferencing in VideoSDK AI Voice Agents: a unified way to run STT, LLM, TTS, and Realtime models directly inside your voice pipeline, without managing multiple accounts, through the Agent Runtime Dashboard and Python Agents SDK.]]></description><link>https://www.videosdk.live/blog/announcing-videosdk-inference-one-magic-api-for-every-voice-ai-model</link><guid isPermaLink="false">697c971555831517a5a8a12f</guid><category><![CDATA[AI voice agent]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 16 Feb 2026 06:05:52 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/02/image--49-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/02/image--49-.png" alt="Announcing VideoSDK Inference: One Magic API for Every Voice AI Model"/><p>Building AI voice agents has always been powerful, but slow. You had the models (STT, LLM, TTS) and the tools to use them. 
But maintaining accounts across multiple vendors for speech recognition, language models, and speech synthesis, each with its own keys, quotas, billing, and APIs, was a major challenge.</p><p><strong>Today, that changes.</strong></p><p>We’re thrilled to announce <strong>Inferencing in VideoSDK AI Voice Agents</strong>: a unified way to run STT, LLM, TTS, and Realtime models <strong>directly inside your voice pipeline</strong>, without managing multiple accounts, through the <a href="https://dub.sh/zXYQt7V" rel="noreferrer">Agent Runtime Dashboard</a> and the <a href="https://github.com/videosdk-live/agents" rel="noreferrer">Python Agents SDK</a>.</p><p>Inferencing works in both the <strong>CascadingPipeline</strong> and the <strong>RealtimePipeline</strong>, giving you full flexibility to build modular or fully streaming voice agents. Whether you want incremental transcripts, staged execution, or fully native realtime audio, Inferencing makes it easy.</p><h2 id="what-is-videosdk-inference">What is VideoSDK Inference?</h2><p><strong>VideoSDK Inference</strong> is a managed gateway that gives you access to multiple AI models. 
All <strong>without providing your own API keys</strong> for providers like Sarvam AI or Google Gemini.</p><p>Authentication, routing, retries, and billing are handled by VideoSDK; usage is simply charged against your <strong>VideoSDK account balance</strong>.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/inference-architecture-1.png" class="kg-image" alt="Announcing VideoSDK Inference: One Magic API for Every Voice AI Model" loading="lazy" width="3801" height="1176"/></figure><h3 id="supported-categories">Supported Categories</h3><ul><li><strong>STT</strong>: Sarvam, Google, Deepgram</li><li><strong>LLMs</strong>: Google Gemini</li><li><strong>TTS</strong>: Sarvam, Google, Cartesia</li><li><strong>Realtime</strong>: Gemini Native Audio</li></ul><h1 id="inferencing-via-agent-runtime-dashboard">Inferencing via Agent Runtime Dashboard</h1><p>Inferencing in VideoSDK is now fully accessible through the <a href="https://dub.sh/zXYQt7V" rel="noreferrer">dashboard</a>, giving developers direct control over model selection and pipeline configuration without needing to manage infrastructure manually.</p><p>From the <a href="https://dub.sh/zXYQt7V" rel="noreferrer">dashboard</a>, developers can:</p><ul><li><strong>Select STT, LLM, TTS, or Realtime models</strong> and enable them in the pipeline with a single click.</li><li><strong>Switch providers instantly</strong>, allowing rapid experimentation and iteration.</li><li><strong>Attach deployment endpoints</strong> for web or telephony, making the agent immediately accessible to users.</li></ul><p>With this approach, ideas move from configuration to <strong>live, interactive conversations in minutes</strong>, making it possible to test new workflows, swap models, or iterate on conversational design almost instantly.</p>
<!--kg-card-begin: html-->
<iframe width="560" height="315" src="https://www.youtube.com/embed/2XvhVJETSrU?si=sgy6ywKrF4Pp3v5z" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""/>
<!--kg-card-end: html-->
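Before the SDK examples below, it helps to see what a cascading pipeline means with the infrastructure stripped away: three swappable stages composed into one audio-in, audio-out function. A toy, dependency-free sketch (illustrative only, not VideoSDK code):

```python
from typing import Callable

def cascade(stt: Callable[[bytes], str],
            llm: Callable[[str], str],
            tts: Callable[[str], bytes]) -> Callable[[bytes], bytes]:
    """Compose STT -> LLM -> TTS into a single audio-to-audio function."""
    return lambda audio_in: tts(llm(stt(audio_in)))

# Toy stand-ins for real providers; swapping one stage never touches the others
agent = cascade(stt=lambda audio: "hello",
                llm=lambda text: text.upper(),
                tts=lambda text: text.encode())

print(agent(b"\x00\x01"))  # b'HELLO'
```

The Inference gateway applies the same idea to real providers: each stage is addressed by a constructor, so changing vendors is a one-argument change.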
<h1 id="inferencing-via-code-agents-sdk">Inferencing via Code (Agents SDK)</h1><p>With VideoSDK Inferencing, developers can now integrate STT, LLM, TTS, and Realtime models directly into their voice agents, all handled inside VideoSDK. This enables rapid experimentation, modular pipelines, and low-latency real-time conversations.</p><h3 id="installation">Installation</h3><p>The Inference plugin is included in the core <a href="https://pypi.org/project/videosdk-agents/" rel="noreferrer">VideoSDK Agents SDK</a>. Install it via:</p><pre><code class="language-python">pip install videosdk-agents
</code></pre><h3 id="importing-inference-classes">Importing Inference Classes</h3><p>You can import the Inference classes directly from <code>videosdk.agents.inference</code>:</p><pre><code class="language-python">from videosdk.agents.inference import STT, LLM, TTS, Realtime
</code></pre><h3 id="cascadingpipeline-example">CascadingPipeline Example</h3><p>The <a href="https://docs.videosdk.live/ai_agents/core-components/cascading-pipeline" rel="noreferrer"><strong>CascadingPipeline</strong></a> is ideal for modular, stage-by-stage processing. Here’s an example of building a simple agent using STT, LLM, and TTS via the VideoSDK Inference Gateway:</p><pre><code class="language-python"># Imports added so the snippet is self-contained; the SileroVAD
# import path assumes the videosdk-plugins-silero package is installed
from videosdk.agents import CascadingPipeline
from videosdk.agents.inference import STT, LLM, TTS
from videosdk.plugins.silero import SileroVAD

pipeline = CascadingPipeline(
        stt=STT.sarvam(model_id="saarika:v2.5", language="en-IN"),
        llm=LLM.google(model="gemini-2.5-flash"),
        tts=TTS.sarvam(model_id="bulbul:v2", speaker="anushka", language="en-IN"),
        vad=SileroVAD()
    )
</code></pre><h3 id="realtimepipeline-example">RealTimePipeline Example</h3><p>For <strong>low-latency, fully streaming voice agents</strong>, the <a href="https://docs.videosdk.live/ai_agents/core-components/realtime-pipeline" rel="noreferrer">RealTimePipeline</a> handles Realtime inference with minimal delay. Here’s an example using Gemini Live Native Audio:</p><pre><code class="language-python">from videosdk.agents import RealTimePipeline
from videosdk.agents.inference import Realtime

pipeline = RealTimePipeline(
        model=Realtime.gemini(
            model="gemini-2.5-flash-native-audio-preview-12-2025",
            voice="Puck",
            language_code="en-US",
            response_modalities=["AUDIO"],
            temperature=0.7
        )
    )</code></pre><p>With this approach, developers retain:</p><ul><li><strong>Full programmatic control</strong> over pipeline stages, model parameters, and execution behavior.</li><li><strong>Modular provider replacement</strong>, making it easy to swap STT, LLM, or TTS engines.</li></ul><p>The result: a <strong>fully configurable, production-ready AI voice agent</strong> that can be deployed in minutes.</p><h1 id="conclusion">Conclusion</h1><p>Voice AI is no longer limited by model capability. It’s limited by how fast you can deploy it. With <strong>Inferencing in VideoSDK AI Voice Agents</strong>, deployment becomes effortless. Whether through the dashboard or programmatically via the SDK, you can <strong>build, select, enable, and go live</strong> in minutes.</p><p>The era of modular, low-latency, real-time voice agents is here. With Inferencing, your ideas move from concept to conversation faster than ever.</p><p>Build. Select. Configure. Go live.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>For more information, read the <a href="https://docs.videosdk.live/ai_agents/plugins/inference/videosdk-inference" rel="noreferrer">Inference documentation</a>.</li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction">deploy your AI Agents</a>.</li><li>Sign up at the&nbsp;<a href="https://dub.sh/zXYQt7V">VideoSDK Dashboard</a>.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6">Discord community ↗</a>. 
We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[Introducing VideoSDK Phone Numbers: Build AI Call Agents in 60 seconds]]></title><description><![CDATA[Today, we’re launching VideoSDK Phone Numbers, a first-party telephony capability that lets you connect AI voice agents directly to the phone network.]]></description><link>https://www.videosdk.live/blog/introducing-videosdk-phone-numbers-build-ai-call-agents-in-60-seconds</link><guid isPermaLink="false">697b132c55831517a5a8a103</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 09 Feb 2026 14:20:41 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/02/image--47-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/02/image--47-.png" alt="Introducing VideoSDK Phone Numbers: Build AI Call Agents in 60 seconds"/><p>Today, we’re launching <a href="https://dub.sh/zXYQt7V" rel="noreferrer"><strong>VideoSDK</strong></a><strong> Phone Numbers</strong>, a first-party telephony capability that lets you connect voice agents and calling workflows directly to the phone network, <strong>without relying on third-party providers like Twilio</strong>.</p><p>You can now purchase phone numbers straight from the <a href="https://dub.sh/zXYQt7V" rel="noreferrer"><strong>VideoSDK Dashboard</strong></a>, attach them to your agent or calling logic, and go live in minutes.</p><p>No external accounts. No SIP trunk configuration. 
No extra setup.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/phone-number-dashboard.png" class="kg-image" alt="Introducing VideoSDK Phone Numbers: Build AI Call Agents in 60 seconds" loading="lazy" width="2920" height="1256"/></figure><h2 id="fewer-hops-better-calls">Fewer Hops. Better Calls.</h2><p>Until now, developers building phone-based voice experiences on VideoSDK typically had to:</p><ul><li>Sign up with a third-party telephony provider (like Twilio)</li><li>Purchase and manage phone numbers externally</li><li>Configure SIP trunks and credentials</li><li>Integrate and debug multiple systems before making the first call</li></ul><p>By offering <strong>native phone numbers directly within </strong><a href="https://dub.sh/zXYQt7V" rel="noreferrer"><strong>VideoSDK</strong></a>, we remove those extra layers and make phone-based voice apps significantly easier to build and operate in just 3 steps.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2026/02/phone-number-architecture1.png" class="kg-image" alt="Introducing VideoSDK Phone Numbers: Build AI Call Agents in 60 seconds" loading="lazy" width="2866" height="1150"/></figure><h2 id="how-it-works">How It Works</h2><p>Getting started with VideoSDK Phone Numbers is simple:</p><p><strong>1. Buy a Phone Number</strong>: Search and purchase available phone numbers directly from the <a href="https://dub.sh/zXYQt7V" rel="noreferrer"><strong>VideoSDK Dashboard</strong></a>.</p><p><strong>2. Attach Your Agent or Logic</strong>: Associate the phone number with your voice agent or inbound call routing logic so incoming calls are handled automatically.</p><p><strong>3. Go Live</strong>: Call the number and your agent answers instantly.<br/>Manage numbers, update routing, and monitor usage, all from one place.</p>
<!--kg-card-begin: html-->
<iframe width="600" height="400" src="https://www.youtube.com/embed/K4g51dBkrrk?si=AwHrXWs0fv3X9jlV" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""/>
<!--kg-card-end: html-->
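One practical note on step 1 above: numbers on the public phone network are conventionally written in E.164 format (a leading <code>+</code>, then country code and number). If your own routing logic stores or compares numbers gathered from other sources, normalizing them early avoids silent mismatches. A small best-effort helper you might write yourself (this is not part of the VideoSDK API):

```python
import re

def normalize_e164(raw, default_country_code="1"):
    """Best-effort normalization of a dialable string to E.164."""
    digits = re.sub(r"[^\d+]", "", raw)   # keep digits and any '+' sign
    if digits.startswith("+"):
        return digits
    if digits.startswith("00"):           # international dialing prefix
        return "+" + digits[2:]
    return "+" + default_country_code + digits

print(normalize_e164("(415) 555-0132"))     # +14155550132
print(normalize_e164("0044 20 7123 4567"))  # +442071234567
```

For production use, a full parsing library handles national formats and validation far better; the sketch only shows why normalization belongs at the boundary of your routing logic.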
<h2 id="what-you-can-build">What You Can Build</h2><p>Phone calls remain critical across many industries. With <a href="https://dub.sh/zXYQt7V" rel="noreferrer">VideoSDK Phone Numbers</a>, you can spin up phone-based voice agents in minutes for:</p><ul><li>Sales qualification and lead routing</li><li>Order tracking and delivery updates</li><li>Appointment booking and status updates</li><li>Restaurant and service business call automation</li></ul><p>If your application involves real-time voice and the phone network, this feature is built for you.</p><h2 id="resources">Resources</h2><ul><li>Sign up at VideoSDK -&nbsp;<a href="https://dub.sh/BVOvGNr" rel="noreferrer">dashboard</a></li><li>Have feedback or ideas? Comment below or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a> to build and ship AI call agents faster with VideoSDK. We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[Introducing the Ultravox Realtime Plugin in VideoSDK]]></title><description><![CDATA[Learn more about building real-time voice agents with Ultravox and VideoSDK Agents, where listening, reasoning, and speaking happen together for ultra-low latency, natural conversations.]]></description><link>https://www.videosdk.live/blog/introducing-the-ultravox-realtime-plugin-in-videosdk</link><guid isPermaLink="false">6968d02e55831517a5a89f8d</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 27 Jan 2026 04:51:01 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/Ultravox.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/01/Ultravox.png" alt="Introducing the Ultravox Realtime Plugin in VideoSDK"/><p>Real-time voice 
agents are fundamentally different from traditional AI pipelines. Instead of processing speech in separate steps (speech-to-text, reasoning, and text-to-speech), they operate as a continuous conversation loop. Every millisecond matters.</p><p>Ultravox is designed specifically for this use case. It enables low-latency, real-time conversational AI where listening, reasoning, and speaking happen together. In this blog, we’ll walk through how to use Ultravox with the VideoSDK Agents SDK to build responsive, interactive voice agents.</p><h2 id="key-features">Key Features</h2><ul><li><strong>Real-Time Conversations</strong>: Ultravox is optimized for live voice interactions, making conversations feel natural and responsive rather than delayed or scripted.</li><li><strong>Function Calling</strong>: Agents can call tools or external APIs during a conversation, such as fetching weather data or triggering workflows, without breaking the interaction flow.</li><li><strong>Custom Agent Behavior</strong>: You can shape how your agent behaves using system prompts, allowing you to define tone, personality, or role-specific behavior.</li><li><strong>Call Control</strong>: Ultravox-powered agents can manage the conversation lifecycle, including ending calls gracefully when the interaction is complete.</li><li><strong>MCP Integration</strong>: Ultravox supports Model Context Protocol (MCP), allowing agents to connect to external tools and data sources using:<ul><li><code>MCPServerStdio</code> for local processes</li><li><code>MCPServerHTTP</code> for remote services</li></ul></li></ul><p>This makes it easier to build agents that interact with real systems instead of just responding with text.</p><h2 id="installation">Installation</h2><p>To get started, install the <a href="https://pypi.org/project/videosdk-plugins-ultravox/" rel="noreferrer">Ultravox-enabled VideoSDK Agents package</a>:</p><pre><code class="language-python">pip install "videosdk-plugins-ultravox"</code></pre><h2 
id="authentication">Authentication</h2><p>Ultravox requires an API key.</p><ol><li>Generate an API key from the <a href="https://app.ultravox.ai/" rel="noreferrer">Ultravox dashboard</a></li><li>Sign up at VideoSDK - <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></li></ol><pre><code class="language-python">ULTRAVOX_API_KEY=your_api_key_here
VIDEOSDK_AUTH_TOKEN=your_videosdk_auth_token</code></pre><p>When using environment variables, you don’t need to pass the API key directly in your code; the SDK picks it up automatically.</p><h2 id="importing-ultravox">Importing Ultravox</h2><pre><code class="language-python">from videosdk.plugins.ultravox import UltravoxRealtime, UltravoxLiveConfig
</code></pre><h2 id="basic-usage-example">Basic Usage Example</h2><p>Below is a minimal example of setting up a real-time Ultravox agent using VideoSDK’s <code>RealTimePipeline</code>.</p><pre><code class="language-python">from videosdk.plugins.ultravox import UltravoxRealtime, UltravoxLiveConfig
from videosdk.agents import RealTimePipeline

# Initialize the Ultravox real-time model
model = UltravoxRealtime(
    model="fixie-ai/ultravox",
    config=UltravoxLiveConfig(
        voice="54ebeae1-88df-4d66-af13-6c41283b4332"
    )
)

# Create the real-time pipeline
pipeline = RealTimePipeline(model=model)</code></pre><p>This setup creates a real-time conversational agent where:</p><ul><li>Audio input is processed continuously</li><li>Responses are generated with minimal delay</li><li>Speech output is streamed back to the user</li></ul><h2 id="configuration-options">Configuration Options</h2><p>Ultravox provides fine-grained control over real-time behavior through <code>UltravoxLiveConfig</code>:</p><ul><li><code>voice</code>: Voice ID used for synthesized speech</li><li><code>language_hint</code>: Hint for the expected conversation language (e.g., <code>"en"</code>)</li><li><code>temperature</code>: Controls response randomness</li><li><code>vad_turn_endpoint_delay</code>: Delay (ms) before a speech turn is considered complete</li><li><code>vad_minimum_turn_duration</code>: Minimum duration (ms) for a valid speech turn</li></ul><p>These parameters help balance responsiveness, stability, and conversational accuracy.</p><h2 id="when-should-you-use-ultravox">When Should You Use Ultravox?</h2><p>Ultravox is a strong fit when:</p><ul><li>You need real-time, low-latency voice conversations</li><li>Turn-taking speed is critical</li><li>You want to avoid managing separate STT, LLM, and TTS components</li><li>Your agent needs to interact live with users or systems</li></ul><p>For batch processing or highly controlled pipelines, a traditional STT → LLM → TTS setup may still make sense. Ultravox shines when conversations need to feel immediate.</p><h2 id="conclusion">Conclusion</h2><p>Ultravox simplifies real-time voice agents by collapsing the entire conversational loop into a single model. Instead of orchestrating multiple components, developers can focus on agent behavior, tools, and interaction flow. 
When paired with VideoSDK’s real-time pipelines, Ultravox enables voice agents that respond quickly, act intelligently, and feel natural in live conversations.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>Read more information on <a href="https://docs.ultravox.ai/overview" rel="noreferrer">Ultravox realtime model</a></li><li>Check out<a href="https://github.com/videosdk-live/agents/blob/main/videosdk-plugins/videosdk-plugins-ultravox/videosdk/plugins/ultravox/ultravox_realtime.py" rel="noreferrer"> code implementation on github</a></li><li><strong>Explore more</strong> : Read docs on <a href="https://docs.videosdk.live/ai_agents/plugins/realtime/ultravox" rel="noreferrer">Ultravox Realtime Plugin</a></li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. 
We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li><li>Sign up at VideoSDK - <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></li></ul><p/><p/><h2 id=""/>]]></content:encoded></item><item><title><![CDATA[Introducing xAI Grok Real-Time Speech-to-Speech Plugin for VideoSDK Agents]]></title><description><![CDATA[Build real-time voice and text agents with xAI’s Grok now natively integrated into VideoSDK Agents for multimodal, context-aware AI experiences.]]></description><link>https://www.videosdk.live/blog/iintroducing-real-time-xai-grok-plugin-for-videosdk-ai-voice-agentin-videosdk</link><guid isPermaLink="false">6964fbfb55831517a5a89f03</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 22 Jan 2026 12:53:51 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/image--40-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/01/image--40-.png" alt="Introducing xAI Grok Real-Time Speech-to-Speech Plugin for VideoSDK Agents"/><p>We’re excited to introduce <strong>xAI (Grok) Realtime model support</strong> in <strong>VideoSDK AI Voice Agents</strong>, enabling developers to build <strong>real-time, multimodal AI voice systems</strong> powered by xAI’s Grok models.</p><p>With this integration, your agents can reason over <strong>voice and text and </strong>perform <strong>function calls</strong>.</p><h2 id="why-xai-grok-with-videosdk">Why xAI (Grok) with VideoSDK?</h2><p>xAI’s Grok models are designed for <strong>low-latency, real-time interactions</strong>, making them a strong fit for conversational AI systems. 
When combined with VideoSDK’s real-time streaming and agent pipeline, you can build:</p><ul><li>Voice-first AI agents</li><li>Multimodal assistants (voice + text)</li><li>Agents with live web and X search</li><li>Context-aware agents grounded in your own data</li></ul><p>All without managing complex audio or streaming infrastructure.</p><h2 id="key-features">Key Features</h2><ul><li><strong>Multi-modal Interactions</strong>: Utilize xAI's powerful Grok models for voice and text.</li><li><strong>Function Calling</strong>: Define custom tools to retrieve weather data, interact with external APIs, or perform other actions.</li><li><strong>Web Search</strong>: Enable real-time web search capabilities by setting&nbsp;<code>enable_web_search=True</code>.</li><li><strong>X Search</strong>: Access X (formerly Twitter) content by setting&nbsp;<code>enable_x_search=True</code>&nbsp;and providing&nbsp;<code>allowed_x_handles</code>.</li></ul><h2 id="authentication">Authentication</h2><ol><li>The xAI plugin requires an <a href="https://console.x.ai/home" rel="noreferrer"><strong>xAI API key</strong></a>. Set the API key as an environment variable in your <code>.env</code> file:</li><li>Sign up at VideoSDK for <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></li></ol><pre><code class="language-python">XAI_API_KEY=your-xai-api-key
VIDEOSDK_AUTH_TOKEN=your_videosdk_auth_token</code></pre><p>When using environment variables, you don’t need to pass the API key directly in your code. The SDK automatically picks it up at runtime.</p><h3 id="using-videosdk-with-xai%E2%80%99s-grok-plugin">Using VideoSDK with xAI’s Grok Plugin</h3><p><a href="https://pypi.org/project/videosdk-plugins-xai/" rel="noreferrer">Install the xAI plugin</a>:</p><pre><code class="language-python">pip install "videosdk-plugins-xai"</code></pre><h2 id="quick-example">Quick example:</h2><pre><code class="language-python">from videosdk.plugins.xai import XAIRealtime, XAIRealtimeConfig
from videosdk.agents import RealTimePipeline

# Initialize the xAI Grok real-time model
model = XAIRealtime(
    model="grok-4-1-fast-non-reasoning",
    api_key="your-xai-api-key",
    config=XAIRealtimeConfig(
        voice="Eve",
        # collection_id="your-collection-id" # Optional
    )
)

# Create the pipeline with the model
pipeline = RealTimePipeline(model=model)</code></pre><h2 id="configuration-options">Configuration Options</h2><ul><li><code>model</code>: The Grok model to use (e.g.,&nbsp;<code>"grok-4-1-fast-non-reasoning"</code>).</li><li><code>api_key</code>: Your xAI API key (can also be set via the&nbsp;<code>XAI_API_KEY</code>&nbsp;environment variable).</li><li><code>config</code>: An&nbsp;<code>XAIRealtimeConfig</code>&nbsp;object for advanced options:<ul><li><code>voice</code>: (str) The voice to use for audio output (e.g.,&nbsp;<code>"Eve"</code>,&nbsp;<code>"Ara"</code>,&nbsp;<code>"Rex"</code>,&nbsp;<code>"Sal"</code>,&nbsp;<code>"Leo"</code>).</li><li><code>enable_web_search</code>: (bool) Enable or disable web search capabilities.</li><li><code>enable_x_search</code>: (bool) Enable or disable search on X (Twitter).</li><li><code>allowed_x_handles</code>: (List[str]) A list of allowed X handles to search within.</li><li><code>collection_id</code>: (str, optional) The ID of a custom collection from your xAI Console storage to provide additional context.</li><li><code>turn_detection</code>: Configuration for detecting when a user has finished speaking.</li></ul></li></ul><h2 id="collection-storage">Collection Storage</h2><p>xAI Grok supports using "collections" to provide additional context to your agent, grounding its responses in your own documents or data.</p><p>To use a collection:</p><ol><li><strong>Navigate to xAI Console</strong>: Go to your&nbsp;<a href="https://console.x.ai/" rel="noopener noreferrer">console.x.ai</a>&nbsp;dashboard.</li><li><strong>Access Storage</strong>: Click on the&nbsp;<strong>Storage</strong>&nbsp;section in the sidebar.</li><li><strong>Create New Collection</strong>: Click the "Create New Collection" button.</li><li><strong>Upload Files</strong>: Upload your relevant documents or data files to the new collection.</li><li><strong>Get Collection ID</strong>: Once the collection is created, copy its&nbsp;<strong>Collection 
ID</strong>.</li><li><strong>Use in Config</strong>: Pass the copied ID to your agent's configuration:</li></ol><pre><code class="language-python">config=XAIRealtimeConfig(
    voice="Eve",
    collection_id="your-collection-id-from-console",
    # ... other config options
)</code></pre><p>The agent will now use the content of this collection to inform its responses.</p><h2 id="conclusion">Conclusion</h2><p>With xAI Grok now integrated into VideoSDK Agents, developers can build real-time AI voice systems that are faster, smarter, and easier to scale. By combining Grok’s powerful multimodal models with VideoSDK’s low-latency real-time pipeline, you can move from prototype to production-ready voice agents in just a few lines of code. Whether you’re building assistants, support agents, or interactive AI experiences, this integration gives you the foundation to create natural, real-time conversations with confidence.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>Explore the&nbsp;<a href="https://docs.videosdk.live/ai_agents/plugins/realtime/xai-grok" rel="noreferrer">documentation</a>.</li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. 
We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li><li>Sign up at VideoSDK for <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Introducing the Nvidia Speech to Text Plugin in VideoSDK]]></title><description><![CDATA[Learn how to integrate NVIDIA STT with the VideoSDK Agents SDK to generate fast, accurate, and production-ready transcriptions.]]></description><link>https://www.videosdk.live/blog/introducing-the-nvidia-speech-to-text-plugin-in-videosdk</link><guid isPermaLink="false">6968cc7555831517a5a89f4f</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 20 Jan 2026 13:11:43 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/image--38-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/01/image--38-.png" alt="Introducing the Nvidia Speech to Text Plugin in VideoSDK"/><p>Speech recognition is a critical building block for real-time AI voice agents. To deliver fast, accurate, and production-ready transcription, VideoSDK integrates with <strong>Nvidia Speech-to-Text (STT) </strong>a high‑performance, low‑latency speech recognition solution designed for real‑time applications.</p><p>In this blog, we’ll walk through how Nvidia STT works with the VideoSDK Agents SDK and how you can quickly integrate it into your AI voice pipeline.</p><h2 id="why-nvidia-stt">Why Nvidia STT?</h2><p>Nvidia STT is built for speed and accuracy. 
It is well suited for real-time voice agents where low latency, streaming transcription, and stable performance are essential.</p><p>With VideoSDK’s plugin-based architecture, you can easily swap or test STT providers, making Nvidia STT a strong choice for production-grade voice experiences.</p><h2 id="installation">Installation</h2><p>To get started, install the <a href="https://pypi.org/project/videosdk-plugins-nvidia/" rel="noreferrer">Nvidia-enabled VideoSDK Agents plugin</a>:</p><pre><code class="language-python">pip install "videosdk-plugins-nvidia"</code></pre><p>This package adds native support for Nvidia STT inside the VideoSDK Agents ecosystem.</p><h2 id="authentication">Authentication</h2><ol><li>The Nvidia STT plugin requires an <strong>Nvidia API key</strong>. Set the API key as an environment variable in your <code>.env</code> file:</li><li>Sign up at VideoSDK for an <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></li></ol><pre><code class="language-python">NVIDIA_API_KEY=your-nvidia-api-key
VIDEOSDK_AUTH_TOKEN=your-videosdk-auth-token</code></pre><p>When using environment variables, you don’t need to pass the API key directly in your code. The SDK automatically picks it up at runtime.</p><h2 id="importing-nvidia-stt">Importing Nvidia STT</h2><p>Once installed, import the Nvidia STT plugin into your project:</p><pre><code class="language-python">from videosdk.plugins.nvidia import NvidiaSTT</code></pre><h2 id="example-using-nvidia-stt-in-a-cascading-pipeline">Example: Using Nvidia STT in a Cascading Pipeline</h2><pre><code class="language-python">from videosdk.plugins.nvidia import NvidiaSTT
from videosdk.agents import CascadingPipeline

# Initialize the Nvidia STT model
stt = NvidiaSTT(
    model="parakeet-1.1b-en-US-asr-streaming-silero-vad-sortformer",
    language_code="en-US",
    profanity_filter=False,
    automatic_punctuation=True
)
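# Because NvidiaSTT reads NVIDIA_API_KEY from the environment when api_key is
# omitted, a missing key otherwise only surfaces at connect time. A minimal
# fail-fast sketch (require_env is a hypothetical helper, not part of the SDK;
# the variable names follow the .env example above):

```python
import os

def require_env(*names: str) -> None:
    """Raise at startup if any required environment variable is unset."""
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")

# Call this before constructing the pipeline so a missing key fails at
# startup rather than mid-call:
# require_env("NVIDIA_API_KEY", "VIDEOSDK_AUTH_TOKEN")
```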

# Add STT to the cascading pipeline
pipeline = CascadingPipeline(stt=stt)</code></pre><h2 id="configuration-options">Configuration Options</h2><p>Nvidia STT exposes several configuration options so you can fine-tune transcription behavior:</p><ul><li><strong>api_key</strong>: Nvidia API key (optional if set via environment variable)</li><li><strong>model</strong>: Nvidia Riva STT model to use</li><li><strong>server</strong>: Riva server address (default: <code>grpc.nvcf.nvidia.com:443</code>)</li><li><strong>function_id</strong>: Nvidia service function ID</li><li><strong>language_code</strong>: Language for transcription (default: <code>en-US</code>)</li><li><strong>sample_rate</strong>: Audio sample rate in Hz (default: <code>16000</code>)</li><li><strong>profanity_filter</strong>: Enable or disable profanity filtering</li><li><strong>automatic_punctuation</strong>: Enable automatic punctuation</li><li><strong>use_ssl</strong>: Enable SSL connection</li></ul><p>These options make it easy to adapt Nvidia STT to different real-world voice scenarios.</p><h2 id="conclusion">Conclusion</h2><p>By integrating Nvidia STT with VideoSDK Agents, you get a powerful, flexible speech recognition layer that fits naturally into real-time AI voice workflows. 
Whether you’re testing individual components or deploying a full voice agent pipeline, Nvidia STT gives you the speed and reliability required for modern conversational experiences.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>Read more information on <a href="https://docs.nvidia.com/deeplearning/riva/user-guide/docs/index.html" rel="noreferrer">Nvidia Riva STT</a></li><li>Check out <a href="https://github.com/videosdk-live/agents/blob/main/videosdk-plugins/videosdk-plugins-nvidia/videosdk/plugins/nvidia/stt.py" rel="noreferrer">full code implementation on github</a></li><li><strong>Explore more</strong> : Read docs on <a href="https://docs.videosdk.live/ai_agents/plugins/stt/nvidia" rel="noreferrer">Nvidia STT Plugin</a></li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. 
We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[Introducing the MurfAI Text To Speech Plugin in VideoSDK]]></title><description><![CDATA[Learn how to integrate Murf AI Text-to-Speech with VideoSDK Agents to generate natural, expressive, and low-latency voice output for AI agents.]]></description><link>https://www.videosdk.live/blog/introducing-the-murfai-text-to-speech-plugin-in-videosdk</link><guid isPermaLink="false">6964bccc55831517a5a89ebd</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 20 Jan 2026 06:26:32 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/image--32-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/01/image--32-.png" alt="Introducing the MurfAI Text To Speech Plugin in VideoSDK"/><p>We’re excited to introduce <strong>Murf AI Text-to-Speech (TTS) support in VideoSDK Agents</strong>, enabling developers to generate <strong>natural, expressive voice output</strong> using Murf AI’s high-quality speech models.</p><p>With this integration, you can add <strong>human-like voices</strong>, advanced <strong>voice customization</strong>, and <strong>low-latency streaming audio</strong> to your AI agents — all seamlessly within VideoSDK’s real-time pipeline.</p><h2 id="why-murf-ai-with-videosdk">Why Murf AI with VideoSDK?</h2><p>Murf AI offers studio-quality voices with fine-grained control over tone, pace, and style. 
When combined with VideoSDK Agents, you can build:</p><ul><li>Natural-sounding AI voice agents</li><li>Expressive speech with adjustable pitch, rate, and style</li><li>Low-latency, streaming TTS for real-time conversations</li><li>Globally deployable agents with multi-region support</li></ul><p>All without managing complex audio pipelines or streaming logic.</p><h2 id="authentication">Authentication</h2><ol><li>The Murf AI TTS plugin requires a <a href="https://murf.ai/" rel="noreferrer">Murf AI <strong>API key</strong></a>. Set the API key as an environment variable in your <code>.env</code> file:</li><li>Sign up at VideoSDK for an <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></li></ol><pre><code class="language-python">MURFAI_API_KEY=your-murfai-api-key
VIDEOSDK_AUTH_TOKEN=your-videosdk-auth-token</code></pre><p>When using environment variables, you don’t need to pass the API key directly in your code. The SDK automatically picks it up at runtime.</p><h2 id="using-videosdk-with-murf-ai-tts-plugin">Using VideoSDK with Murf AI TTS Plugin</h2><p>Install the <a href="https://pypi.org/project/videosdk-plugins-murfai/" rel="noreferrer">Murf AI plugin</a>:</p><pre><code class="language-python">pip install "videosdk-plugins-murfai"</code></pre><h2 id="quick-example">Quick Example</h2><pre><code class="language-python">from videosdk.plugins.murfai import MurfAITTS, MurfAIVoiceSettings
from videosdk.agents import CascadingPipeline

# Configure voice settings
voice_settings = MurfAIVoiceSettings(
    pitch=0,
    rate=0,
    style="Conversational",
    variation=1,
    multi_native_locale=None
)

# Initialize the Murf AI TTS model
tts = MurfAITTS(
    # If MURFAI_API_KEY is set in your .env file, omit the api_key parameter
    api_key="your-murfai-api-key",
    region="US_EAST",
    model="Falcon",
    voice="en-US-natalie",
    voice_settings=voice_settings,
    enable_streaming=True
)

# Add TTS to the cascading pipeline
pipeline = CascadingPipeline(tts=tts)
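# If you want a single code path that works whether or not MURFAI_API_KEY is
# present in .env, the usual standard-library pattern reads the environment
# first (plain Python; the variable name matches the .env example above):

```python
import os

# Prefer the environment; fall back to an explicit placeholder only in local dev.
murf_api_key = os.getenv("MURFAI_API_KEY", "your-murfai-api-key")
```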
</code></pre><blockquote>for detailed explanation on configuration options visit <a href="https://docs.videosdk.live/ai_agents/plugins/tts/murf-ai-tts#configuration-options" rel="noreferrer">murfai-plugin-documentation.</a></blockquote><h2 id="conclusion">Conclusion</h2><p>With Murf AI TTS now integrated into VideoSDK Agents, developers can deliver <strong>natural, expressive speech</strong> in real-time AI voice systems with minimal setup. By combining Murf AI’s powerful text-to-speech models with VideoSDK’s real-time agent pipelines, you can build production-ready voice experiences that sound more human and feel more engaging.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>Explore the&nbsp;<a href="https://docs.videosdk.live/ai_agents/plugins/tts/murf-ai-tts" rel="noreferrer">documentation</a>.</li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. 
We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li><li>Sign up at VideoSDK for <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></li></ul><p/>]]></content:encoded></item><item><title><![CDATA[Introducing the Nvidia Text to Speech Plugin in VideoSDK]]></title><description><![CDATA[Learn how to integrate NVIDIA Riva TTS with the VideoSDK Agents SDK to deliver real-time, low-latency speech that makes AI voice agents sound natural, responsive, and production-ready.]]></description><link>https://www.videosdk.live/blog/introducing-the-nvidia-text-to-speech-plugin-in-videosdk</link><guid isPermaLink="false">6968ce8355831517a5a89f74</guid><category><![CDATA[ai agents]]></category><category><![CDATA[plugins]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 19 Jan 2026 07:54:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/image--35-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/01/image--35-.png" alt="Introducing the Nvidia Text to Speech Plugin in VideoSDK"/><p>When an AI agent speaks, latency and voice quality matter as much as the words themselves. Text-to-speech is the final step in the interaction, and it directly shapes how natural and responsive an agent feels.</p><p>Nvidia TTS, powered by Riva, is built for real-time systems where speech needs to be generated quickly, consistently, and at scale. 
In this guide, we’ll walk through how to integrate Nvidia TTS with the VideoSDK Agents SDK and use it as part of a low-latency voice pipeline.</p><h2 id="installation">Installation</h2><p>To get started, install the <a href="https://pypi.org/project/videosdk-plugins-nvidia/" rel="noreferrer">Nvidia-enabled VideoSDK Agents plugin</a>:</p><pre><code class="language-python">pip install "videosdk-plugins-nvidia"</code></pre><p>This package adds native support for Nvidia TTS inside the VideoSDK Agents ecosystem.</p><h2 id="authentication">Authentication</h2><ol><li>The Nvidia TTS plugin requires an <a href="https://build.nvidia.com/settings/api-keys" rel="noreferrer"><strong>Nvidia API key</strong></a>. Set the API key as an environment variable in your <code>.env</code> file:</li><li>Sign up at VideoSDK for <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></li></ol><pre><code class="language-python">NVIDIA_API_KEY=your-nvidia-api-key
VIDEOSDK_AUTH_TOKEN=your-videosdk-auth-token</code></pre><p>When using environment variables, you don’t need to pass the API key directly in your code. The SDK automatically picks it up at runtime.</p><h2 id="importing-nvidia-tts">Importing Nvidia TTS</h2><p>Once installed, import the Nvidia TTS plugin into your project:</p><pre><code class="language-python">from videosdk.plugins.nvidia import NvidiaTTS</code></pre><h2 id="example-using-nvidia-tts-in-a-cascading-pipeline">Example: Using Nvidia TTS in a Cascading Pipeline</h2><pre><code class="language-python">from videosdk.plugins.nvidia import NvidiaTTS
from videosdk.agents import CascadingPipeline

# Initialize the Nvidia TTS model

tts = NvidiaTTS(
    api_key="your-nvidia-api-key",
    voice_name="Magpie-Multilingual.EN-US.Aria",
    language_code="en-US",
    sample_rate=24000
)

# Add TTS to the cascading pipeline
pipeline = CascadingPipeline(tts=tts)</code></pre><h2 id="configuration-options">Configuration Options</h2><p>Nvidia TTS exposes several configuration options so you can fine-tune speech synthesis behavior:</p><ul><li><code>api_key</code>: Your Nvidia API key (required, can also be set via environment variable)</li><li><code>server</code>: The Nvidia Riva server address (default:&nbsp;<code>"grpc.nvcf.nvidia.com:443"</code>)</li><li><code>function_id</code>: The specific function ID for the service (default:&nbsp;<code>"877104f7-e885-42b9-8de8-f6e4c6303969"</code>)</li><li><code>voice_name</code>: (str) The voice to use (default:&nbsp;<code>"Magpie-Multilingual.EN-US.Aria"</code>)</li><li><code>language_code</code>: (str) Language code for synthesis (default:&nbsp;<code>"en-US"</code>)</li><li><code>sample_rate</code>: (int) Audio sample rate in Hz (default:&nbsp;<code>24000</code>)</li><li><code>use_ssl</code>: (bool) Enable SSL connection (default:&nbsp;<code>True</code>)</li></ul><p>These options make it easy to adapt Nvidia TTS to different real-world voice scenarios.</p><h2 id="conclusion">Conclusion</h2><p>Nvidia TTS fits naturally into real-time AI agents where fast, reliable speech output is critical. By combining Riva’s optimized speech models with VideoSDK’s agent pipeline, you get precise control over voice output without adding complexity to your system. 
Whether you’re prototyping a voice assistant or running production-grade agents, having a predictable and testable TTS layer helps you build conversations that sound responsive, stable, and human.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li><strong>Explore more</strong> : Read docs on <a href="https://docs.videosdk.live/ai_agents/plugins/tts/nvidia" rel="noreferrer">Nvidia TTS Plugin</a></li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a></li><li>Read more information on <a href="https://docs.nvidia.com/deeplearning/riva/user-guide/docs/index.html" rel="noreferrer">Nvidia Riva </a>TTS</li><li>Check out <a href="https://github.com/videosdk-live/agents/blob/main/videosdk-plugins/videosdk-plugins-nvidia/videosdk/plugins/nvidia/tts.py" rel="noreferrer">full code implementation on github</a></li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul><p/>]]></content:encoded></item><item><title><![CDATA[Introducing the Gladia Speech to Text Plugin in VideoSDK]]></title><description><![CDATA[We’re introducing the Gladia Speech-to-Text plugin for VideoSDK. 
With multilingual support, instant partial results, and handling of mixed languages, it provides a reliable speech input layer for voice-driven applications.]]></description><link>https://www.videosdk.live/blog/introducing-the-gladia-stt-plugin-in-videosdk</link><guid isPermaLink="false">6968d27055831517a5a89fbb</guid><category><![CDATA[plugins]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[community]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 16 Jan 2026 05:24:28 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/image--33-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/01/image--33-.png" alt="Introducing the Gladia Speech to Text Plugin in VideoSDK"/><p>Speech-to-text is the first and most critical step in any voice agent. If transcription is slow or inaccurate, everything downstream reasoning and response suffers. Gladia STT is built for real-time transcription with strong multilingual support, fast partial results, and handling of code-switching.</p><p>In this guide, we’ll walk through how to integrate Gladia STT with the VideoSDK Agents SDK and use it as a reliable input layer for voice-driven applications</p><h2 id="why-gladia-stt">Why Gladia STT?</h2><p>Many voice agents operate in environments where users switch languages mid-sentence or expect instant feedback while speaking. Gladia is optimized for these scenarios. 
It provides:</p><ul><li>Low-latency transcription</li><li>Support for multiple languages</li><li>Automatic code-switching</li><li>Partial transcripts for faster turn detection</li></ul><p>This makes it a strong choice for real-time agents, live calls, and interactive voice applications.</p><h2 id="key-features">Key Features</h2><ul><li><strong>Real-Time Transcription</strong> : Gladia streams transcription results as audio is processed, reducing perceived latency in conversations.</li><li><strong>Multilingual Support</strong> : You can specify one or more languages, making it suitable for global or multilingual users.</li><li><strong>Code-Switching</strong> : Gladia can automatically detect and switch languages within the same conversation without manual intervention.</li><li><strong>Partial Transcripts</strong> : By enabling partial transcripts, agents can start reasoning before the user finishes speaking, improving responsiveness.</li></ul><h2 id="installation">Installation</h2><p>Install the <a href="https://pypi.org/project/videosdk-plugins-gladia/" rel="noreferrer">Gladia-STT VideoSDK Agents package</a>:</p><pre><code class="language-python">pip install "videosdk-plugins-gladia"</code></pre><h2 id="authentication">Authentication</h2><ol><li>Sign up at Gladia : <a href="https://app.gladia.io/signup" rel="noreferrer">signup link</a></li><li>Sign up at VideoSDK - <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></li></ol><pre><code class="language-python">GLADIA_API_KEY=your_api_key_here
VIDEOSDK_AUTH_TOKEN=your-videosdk-auth-token</code></pre><p>When using environment variables, you don’t need to pass the API key directly in code; the SDK reads it automatically.</p><h2 id="importing-gladia-stt">Importing Gladia STT</h2><pre><code class="language-python">from videosdk.plugins.gladia import GladiaSTT</code></pre><h2 id="basic-usage-example">Basic Usage Example</h2><p>Below is a minimal example showing how to configure Gladia STT and attach it to a cascading pipeline.</p><pre><code class="language-python">from videosdk.plugins.gladia import GladiaSTT
from videosdk.agents import CascadingPipeline

# Initialize the Gladia STT model
stt = GladiaSTT(
    api_key="your-gladia-api-key",
    languages=["en"],
    code_switching=True,
    receive_partial_transcripts=True
)
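# The input_sample_rate, bit_depth, and channels options described below must
# agree with the raw audio you feed in. A quick sanity calculation (plain
# Python, an illustrative sketch — frame_bytes is not an SDK API) for the byte
# size of a 20 ms audio frame:

```python
def frame_bytes(sample_rate: int, bit_depth: int, channels: int, frame_ms: int = 20) -> int:
    """Bytes occupied by one raw PCM audio frame of the given duration."""
    samples_per_frame = sample_rate * frame_ms // 1000
    return samples_per_frame * (bit_depth // 8) * channels

# 16 kHz, 16-bit mono: 320 samples -> 640 bytes per 20 ms frame
assert frame_bytes(16000, 16, 1) == 640
```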

# Add STT to the cascading pipeline
pipeline = CascadingPipeline(stt=stt)</code></pre><p>This setup enables:</p><ul><li>Real-time transcription</li><li>Automatic language switching</li><li>Partial transcripts for faster downstream processing</li></ul><h2 id="configuration-options">Configuration Options</h2><p>Gladia STT provides fine-grained control over transcription behavior:</p><ul><li><code>languages</code>: List of language codes to detect (e.g., <code>["en", "fr"]</code>)</li><li><code>code_switching</code>: Enables automatic language switching</li><li><code>receive_partial_transcripts</code>: Streams interim results for lower latency</li><li><code>model</code>: STT model to use (default: <code>"solaria-1"</code>)</li><li><code>input_sample_rate</code>: Incoming audio sample rate</li><li><code>output_sample_rate</code>: Processing sample rate</li><li><code>encoding</code>: Audio encoding format</li><li><code>bit_depth</code>: Audio bit depth</li><li><code>channels</code>: Number of audio channels (mono or stereo)</li></ul><p>These parameters let you tune accuracy, latency, and compatibility with your audio pipeline.</p><h2 id="conclusion">Conclusion</h2><p>Gladia STT provides a strong foundation for real-time voice agents by combining speed, accuracy, and multilingual flexibility. When integrated with VideoSDK’s agent pipelines, it enables agents to listen effectively even in dynamic, multilingual conversations. 
A reliable STT layer like Gladia helps ensure that downstream reasoning and responses stay accurate, responsive, and consistent.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>Read more information on <a href="https://docs.gladia.io/chapters/introduction" rel="noreferrer">Gladia STT model</a></li><li>Check out<a href="https://github.com/videosdk-live/agents/blob/main/videosdk-plugins/videosdk-plugins-gladia/videosdk/plugins/gladia/stt.py" rel="noreferrer"> full code implementation on github</a></li><li><strong>Explore more</strong> : Read documentation on <a href="https://docs.videosdk.live/ai_agents/plugins/stt/gladia" rel="noreferrer">Gladia STT Plugin</a></li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li>Sign up at <a href="https://dub.sh/zXYQt7V" rel="noreferrer">VideoSDK Dashboard</a></li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. 
We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul><p/><h2 id=""/><h2 id="-1"/>]]></content:encoded></item><item><title><![CDATA[Introducing Testing and Evaluation in AI Voice Agents]]></title><description><![CDATA[Learn how to run testing and evaluation for AI voice agents using the VideoSDK Agent SDK, including STT, LLM, and TTS benchmarking, latency metrics, and LLM-based response judging.]]></description><link>https://www.videosdk.live/blog/introducing-testing-and-evaluation-in-AI-Voice-Agents</link><guid isPermaLink="false">6964cfbe55831517a5a89ecb</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 15 Jan 2026 11:15:03 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/Eval.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/01/Eval.png" alt="Introducing Testing and Evaluation in AI Voice Agents"/><p>When building AI agents, the first validation step is often informal: trigger the agent, speak a sentence, and confirm that it responds. If speech is recognized, text is generated, and audio plays back, the system is considered functional.</p><p>This approach works for demos. It does not work for production.</p><p>As voice agents are deployed in real environments, previously invisible issues begin to surface. Response times increase under load. Transcription quality degrades gradually rather than catastrophically. Language models produce responses that are fluent but misaligned with user intent. When failures are reported, teams often lack the data required to diagnose them. 
There are no baselines, no thresholds, and no historical comparisons, only subjective impressions.</p><p>This reveals a fundamental problem with modern AI agents: <strong>they are easy to demonstrate, but difficult to measure</strong>.</p><p>To address this gap, we introduced a structured Testing and Evaluation framework in the VideoSDK Agent SDK. This post explains the engineering principles behind that framework and how to think about evaluating real-time voice agents.</p><h2 id="%E2%80%9Cdoes-it-work%E2%80%9D-is-not-an-engineering-metric">“Does It Work?” Is Not an Engineering Metric</h2><p>In early development, teams frequently rely on qualitative validation: does the agent respond correctly in a small number of manual tests? While useful during prototyping, this approach collapses under production constraints.</p><p>From an engineering standpoint, a single successful interaction demonstrates only that one execution path completed without failure. It provides no information about:</p><ul><li>Latency distributions</li><li>Variance across inputs and environments</li><li>Regression across versions</li><li>Sensitivity to load and concurrency</li></ul><p>More critically, it produces no measurable artifacts that can be compared over time.</p><p>Production systems require observability. Without quantitative metrics, regressions are detected only after user complaints, and root-cause analysis becomes speculative. 
What appears to be a “model issue” may in fact be a latency spike, a transcription error, or a cascading failure across multiple stages.</p><p>Voice agents must therefore be evaluated as <strong>systems</strong>, not demos.</p><h2 id="the-first-principles-view-what-is-an-ai-agent">The First-Principles View: What Is an AI Agent?</h2><p>At its core, a real-time agent is a pipeline:</p><ol><li><strong>Speech-to-Text (STT)</strong> converts audio into text</li><li><strong>LLM</strong> interprets intent, reasons, and decides what to say</li><li><strong>Text-to-Speech (TTS)</strong> turns the response back into audio</li></ol><p>Every failure in production maps to one of these layers. So instead of testing “the agent”, we should be asking:</p><ul><li>How long does STT take?</li><li>How accurate is the transcription?</li><li>Does the LLM respond correctly <em>for this input</em>?</li><li>Does latency compound across the pipeline?</li></ul><p>This is the mental model behind the evaluation framework.</p><h2 id="implementation-measuring-the-pipeline">Implementation: Measuring the Pipeline</h2><p>Once the mental model is clear, evaluation becomes a <strong>systematic process</strong>:</p><ol><li><strong>Define what to measure</strong> : Decide which metrics matter (latency, accuracy, quality).</li><li><strong>Instrument each component</strong> : Measure STT, LLM, and TTS individually.</li><li><strong>Measure end-to-end performance</strong> : Capture how component interactions affect the overall experience.</li></ol><pre><code class="language-python">from videosdk.agents import Evaluation, EvalMetric

eval = Evaluation(
    name="agent-eval",
    metrics=[
        EvalMetric.STT_LATENCY,
        EvalMetric.LLM_LATENCY,
        EvalMetric.TTS_LATENCY,
        EvalMetric.END_TO_END_LATENCY
    ],
    output_dir="./reports"
)</code></pre><p>This immediately answers questions like:</p><ul><li>How long does each component take?</li><li>Where is most of the latency coming from?</li><li>What does end-to-end user experience look like?</li></ul><h2 id="adding-a-turn-testing-the-full-pipeline">Adding a Turn: Testing the Full Pipeline</h2><p>A “turn” is a single user-agent interaction that involves <strong>STT → LLM → TTS</strong>. Testing a turn allows you to see how <strong>errors propagate</strong> and how <strong>latency accumulates</strong>.</p><pre><code class="language-python">from videosdk.agents import (
    EvalTurn, STTComponent, LLMComponent, TTSComponent,
    STTEvalConfig, LLMEvalConfig, TTSEvalConfig
)

eval.add_turn(
    EvalTurn(
        stt=STTComponent.deepgram(
            STTEvalConfig(file_path="./sample.wav")
        ),
        llm=LLMComponent.google(
            LLMEvalConfig(
                model="gemini-2.5-flash-lite",
                use_stt_output=True
            )
        ),
        tts=TTSComponent.google(
            TTSEvalConfig(
                model="en-US-Standard-A",
                use_llm_output=True
            )
        )
    )
)</code></pre><h2 id="evaluating-response-quality-with-an-llm-judge">Evaluating Response Quality with an LLM Judge</h2><p>Latency alone doesn’t guarantee a good experience. An agent can respond quickly and still be wrong.</p><p>To solve this, the SDK supports <strong>LLM-as-Judge</strong>, which evaluates responses on qualitative dimensions.</p><pre><code class="language-python">from videosdk.agents import LLMAsJudge, LLMAsJudgeMetric

judge = LLMAsJudge.google(
    model="gemini-2.5-flash-lite",
    prompt="Is the response relevant and logically correct?",
    checks=[
        LLMAsJudgeMetric.RELEVANCE,
        LLMAsJudgeMetric.REASONING,
        LLMAsJudgeMetric.SCORE
    ]
)</code></pre><h2 id="testing-components-in-isolation">Testing Components in Isolation</h2><p>Not every issue requires end-to-end testing. Sometimes you just want to isolate a single component.</p><h3 id="stt-only">STT Only</h3><pre><code class="language-python">eval.add_turn(
    EvalTurn(
        stt=STTComponent.deepgram(
            STTEvalConfig(file_path="./sports.wav")
        )
    )
)</code></pre><h3 id="llm-only">LLM Only</h3><pre><code class="language-python">eval.add_turn(
    EvalTurn(
        llm=LLMComponent.google(
            LLMEvalConfig(
                model="gemini-2.5-flash-lite",
                mock_input="Explain photosynthesis in one paragraph"
            )
        )
    )
)</code></pre><p>This makes debugging faster and removes noise from unrelated stages.</p><h2 id="running-the-evaluation">Running the Evaluation</h2><p>Once your turns are defined, running the evaluation is straightforward.</p><pre><code class="language-python">results = eval.run()
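# Between running and saving, the measured latencies can double as a regression
# gate. A minimal sketch in plain Python (no assumptions about the SDK's report
# schema; the samples below are illustrative millisecond values you would
# extract from the results):

```python
import math

def p95(samples_ms: list[float]) -> float:
    """95th-percentile latency (nearest-rank method)."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Example: fail a CI run if end-to-end latency drifts past a budget.
budget_ms = 1200.0
samples = [810.0, 905.0, 998.0, 1050.0, 1190.0]
assert p95(samples) <= budget_ms
```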
results.save()</code></pre><p>The SDK generates structured reports that you can track over time to catch regressions and compare model performance.</p><h2 id="conclusion">Conclusion</h2><p>Building a voice AI agent that “works” in a demo is easy. <strong>Ensuring it works reliably in the real world requires structured testing and evaluation at every stage  from speech recognition to language understanding to speech synthesis.</strong> This approach not only identifies hidden errors and latency issues but also ensures the agent responds accurately, handles interruptions, and delivers a seamless user experience. In short, <strong>testing is not optional; it’s the foundation for building AI agents users can trust.</strong></p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>Explore the&nbsp;<a href="https://docs.videosdk.live/ai_agents/core-components/testing-and-evaluation" rel="noreferrer">documentation</a>&nbsp;for full code implementation.</li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li><strong>Explore more:</strong>&nbsp;Check out the&nbsp;<a href="https://docs.videosdk.live/ai_agents/core-components/multi-agent-switching" rel="noreferrer">VideoSDK documentation</a>&nbsp;for more features.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[How to Build an AI Voice System Using Real-Time Multi-Agent Switching]]></title><description><![CDATA[In this blog you'll learn about how to build an AI systems with multi-agent switching that intelligently transfer control between specialized agents. 
Keep conversations natural, tasks organized, and users engaged by letting each agent focus on what it does best.]]></description><link>https://www.videosdk.live/blog/how-to-build-an-ai-voice-system-using-real-time-multi-agent-switching</link><guid isPermaLink="false">6964a72d55831517a5a89ea8</guid><category><![CDATA[ai agents]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 13 Jan 2026 04:54:55 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/image--29-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/01/image--29-.png" alt="How to Build an AI Voice System Using Real-Time Multi-Agent Switching"/><p>In today’s fast-paced world, users expect AI assistants to handle complex workflows. Imagine a healthcare assistant that not only understands your symptoms but can also instantly transfer you to the right specialist without making you repeat yourself. This is possible with <strong>multi-agent switching</strong>, and in this guide, we’ll show you how to build it using <strong>VideoSDK</strong>.</p><h2 id="why-multi-agent-switching-matters">Why Multi-Agent Switching Matters</h2><p>Traditional AI assistants often rely on a single agent to handle all tasks. This can get complicated when multiple tools or domains are involved. <strong>Multi-agent switching</strong> breaks a workflow into specialized agents, each focusing on a specific domain or task. 
<strong>For example, in a healthcare assistant:</strong></p><ul><li>One agent handles <strong>general healthcare inquiries</strong>.</li><li>Another manages <strong>appointment scheduling</strong>.</li><li>A third provides <strong>medical support or guidance</strong>.</li></ul><p>By coordinating smaller agents that operate independently, you create a system that is <strong>modular, maintainable, and more intelligent</strong>.</p><h2 id="context-inheritance-keeping-conversations-smooth">Context Inheritance: Keeping Conversations Smooth</h2><p>When switching agents, you want the new agent to either:</p><ul><li><strong>Know the previous conversation</strong> (<code>inherit_context=True</code>)<br/>→ Ideal for maintaining continuity, so users don’t have to repeat themselves.</li><li><strong>Start fresh</strong> (<code>inherit_context=False</code>)<br/>→ Useful when switching to a completely unrelated task.</li></ul><p>This flexibility ensures that your AI behaves naturally and intelligently.</p><h2 id="how-multi-agent-switching-works">How Multi-Agent Switching Works</h2><ol><li>The primary VideoSDK agent listens and understands the user’s intent.</li><li>If specialized assistance is needed, it invokes a <strong>function tool</strong> to transfer control.</li><li>The new agent takes over. If <code>inherit_context=True</code>, it has access to the previous chat, keeping the conversation seamless.</li><li>The specialized agent handles the user’s request and completes the interaction.</li></ol><h2 id="implementation-example-healthcare-agents">Implementation Example: Healthcare Agents</h2><p>Below is a simplified implementation of a <strong>Healthcare AI Voice System</strong> using VideoSDK:</p><pre><code class="language-python">import logging
from videosdk.agents import Agent, AgentSession, CascadingPipeline, function_tool, WorkerJob, ConversationFlow, JobContext, RoomOptions
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.google import GoogleLLM
from videosdk.plugins.cartesia import CartesiaTTS
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model

pre_download_model()


class HealthcareAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="""
You are a general healthcare assistant. Help users with medical inquiries, 
guide them to the right specialist, and route them to appointment booking or medical support when needed.
Respond clearly, calmly, and professionally.
""",
        )

    async def on_enter(self) -&gt; None:
        await self.session.reply(
            instructions="Greet the user politely and ask how you can assist with their health-related concern today."
        )

    async def on_exit(self) -&gt; None:
        await self.session.say("Take care and stay healthy!")

    @function_tool()
    async def transfer_to_appointment(self) -&gt; Agent:
        """Transfer to the healthcare appointment specialist for scheduling or changes."""
        return AppointmentAgent(inherit_context=True)

    @function_tool()
    async def transfer_to_medical_support(self) -&gt; Agent:
        """Transfer to medical support for symptoms, reports, or health guidance."""
        return MedicalSupportAgent(inherit_context=True)


class AppointmentAgent(Agent):
    def __init__(self, inherit_context: bool = False):
        super().__init__(
            instructions="""
You are an appointment specialist. Help users schedule, modify, or cancel 
doctor visits, follow-ups, tests, or telehealth appointments.
""",
            inherit_context=inherit_context,
        )

    async def on_enter(self) -&gt; None:
        await self.session.say(
            "You’re connected with appointments. What would you like to schedule or update today?"
        )

    async def on_exit(self) -&gt; None:
        await self.session.say("Your appointment request is complete. Wishing you good health!")


class MedicalSupportAgent(Agent):
    def __init__(self, inherit_context: bool = False):
        super().__init__(
            instructions="""
You are a medical support specialist. Help users with symptoms, 
health concerns, basic guidance, or understanding reports. 
You are NOT a doctor — provide general support and routing only.
""",
            inherit_context=inherit_context,
        )

    async def on_enter(self) -&gt; None:
        await self.session.say(
            "You’re now connected with medical support. How can I help with your health concern?"
        )

    async def on_exit(self) -&gt; None:
        await self.session.say("Glad I could help. Take care and stay well!")


async def entrypoint(ctx: JobContext):
    agent = HealthcareAgent()
    conversation_flow = ConversationFlow(agent)

    pipeline = CascadingPipeline(
        stt=DeepgramSTT(),
        llm=GoogleLLM(),
        tts=CartesiaTTS(),
        vad=SileroVAD(),
        turn_detector=TurnDetector()
    )
    session = AgentSession(
        agent=agent, 
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    await session.start(wait_for_participant=True, run_until_shutdown=True)

def make_context() -&gt; JobContext:
    room_options = RoomOptions(room_id="&lt;room_id&gt;", name="Multi Agent Switch Agent", playground=True)
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=entrypoint, jobctx=make_context)
    job.start()</code></pre><blockquote>With this setup, your healthcare AI can <strong>detect user intent</strong> and route them to the correct specialized agent while keeping context intact.</blockquote><p>Multi-agent AI is a <strong>game-changer for healthcare voice assistants</strong>. It allows complex workflows to be broken down into specialized agents, ensures <strong>smooth context transitions</strong>, and improves user experience. Whether you’re handling patient symptoms, appointment scheduling, or medical guidance, this system keeps conversations <strong>natural, professional, and effective</strong>.</p><p>With VideoSDK, building such a system is easier than ever. You just need a few agents, some function tools, and your imagination to create a truly intelligent healthcare assistant.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>Explore the&nbsp;<a href="https://github.com/videosdk-live/agents-quickstart/tree/main/Multi%20Agent%20Switch/Travel%20Agent" rel="noreferrer">travel-agent-example</a>&nbsp;for full code implementation.</li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li><strong>Explore more:</strong>&nbsp;Check out the&nbsp;<a href="https://docs.videosdk.live/ai_agents/core-components/multi-agent-switching" rel="noreferrer">VideoSDK documentation</a>&nbsp;for more features.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[How to enable Voice Mail Detection in AI Voice Agents]]></title><description><![CDATA[Learn how Voice Mail Detection improves outbound calling by identifying voicemail systems automatically. 
Instead of wasting time speaking into silence, your agent can leave a message or end the call smoothly, helping save call time, reduce costs, and improve overall calling efficiency.]]></description><link>https://www.videosdk.live/blog/how-to-enable-voice-mail-detection-in-ai-voice-agents</link><guid isPermaLink="false">69526a9b64df6f042b4d4b41</guid><category><![CDATA[ai telephony]]></category><category><![CDATA[SIP integration]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 08 Jan 2026 05:33:13 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/image--28-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2026/01/image--28-.png" alt="How to enable Voice Mail Detection in AI Voice Agents"/><p>When building outbound calling workflows, one of the most common challenges is handling unanswered calls that get redirected to voicemail systems. Without proper detection, an AI agent may continue speaking unnecessarily or wait indefinitely, leading to wasted resources and a poor user experience.</p><p><strong>Voice Mail Detection</strong> in VideoSDK solves this problem by automatically identifying voicemail scenarios and allowing your agent to take the appropriate action, such as leaving a message or ending the call gracefully.</p><h2 id="what-problem-this-solves">What Problem This Solves</h2><p>In outbound calling workflows, unanswered calls are often routed to voicemail systems. 
Without detection, agents may continue speaking or wait unnecessarily.</p><p>Voice Mail Detection lets you:</p><ul><li>Detect voicemail systems automatically</li><li>Control how your agent responds</li><li>End calls cleanly after voicemail handling</li></ul><h2 id="enabling-voice-mail-detection">Enabling Voice Mail Detection</h2><p>To use voicemail detection, import and add&nbsp;<code>VoiceMailDetector</code>&nbsp;to your agent configuration and register a callback that defines how voicemail should be handled.</p><pre><code class="language-python">from videosdk.agents import VoiceMailDetector
from videosdk.plugins.openai import OpenAILLM

async def voice_mail_callback(message):
    print("Voice Mail message received:", message)

voicemail = VoiceMailDetector(
    llm=OpenAILLM(),
    duration=5,
    callback=voice_mail_callback,
)

session = AgentSession(
    voice_mail_detector=voicemail
)</code></pre><h2 id="full-working-example">Full Working Example</h2><p>To set up incoming call handling, outbound calling, and routing rules, check out the&nbsp;<a href="https://docs.videosdk.live/telephony/ai-telephony-agent-quick-start#part-2-connect-your-agent-to-the-phone-network" rel="noopener noreferrer">Quick Start Example</a></p><pre><code class="language-python">import logging
from videosdk.agents import Agent, AgentSession, CascadingPipeline, WorkerJob, ConversationFlow, JobContext, RoomOptions, Options, VoiceMailDetector
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model

logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", handlers=[logging.StreamHandler()])
pre_download_model()
class VoiceAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are a helpful voice assistant that can answer questions."
        )
    async def on_enter(self) -&gt; None:
        await self.session.say("Hello, how can I help you today?")
    
    async def on_exit(self) -&gt; None:
        await self.session.say("Goodbye!")
        
async def entrypoint(ctx: JobContext):
    
    agent = VoiceAgent()
    conversation_flow = ConversationFlow(agent)

    pipeline=CascadingPipeline(
        stt=DeepgramSTT(),
        llm=OpenAILLM(),
        tts=ElevenLabsTTS(),
        vad=SileroVAD(),
        turn_detector=TurnDetector()
    )
    
    async def voice_mail_callback(message):
        print("Voice Mail message received:", message)

    voice_mail_detector = VoiceMailDetector(llm=OpenAILLM(), duration=5.0, callback=voice_mail_callback)

    session = AgentSession(
        agent=agent, 
        pipeline=pipeline,
        conversation_flow=conversation_flow,
        voice_mail_detector=voice_mail_detector,
    )

    await session.start(wait_for_participant=True, run_until_shutdown=True)

def make_context() -&gt; JobContext:
    room_options = RoomOptions(name="Voice Mail Detector Test", playground=True)
    return JobContext(room_options=room_options) 
 
if __name__ == "__main__":
    job = WorkerJob(entrypoint=entrypoint, jobctx=make_context, options=Options(agent_id="YOUR_AGENT_ID", max_processes=2, register=True, host="localhost", port=8081))
    job.start()</code></pre><h3 id="conclusion">Conclusion</h3><p>Voice Mail Detection ensures your AI agent handles unanswered calls intelligently. By automatically detecting voicemail and triggering the right action, it prevents wasted time, improves call efficiency, and makes outbound workflows reliable and production-ready.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>Explore the&nbsp;<strong>voice mail detection </strong><a href="https://github.com/videosdk-live/agents-quickstart/blob/main/Voice%20Mail%20Detector/voice_mail_detector.py" rel="noreferrer">implementation</a>&nbsp;on github.</li><li>Read <a href="https://docs.videosdk.live/ai_agents/core-components/voice-mail-detection" rel="noreferrer">voice mail detection</a> docs.</li><li>To set up inbound calls, outbound calls, and routing rules check out the&nbsp;<a href="https://docs.videosdk.live/telephony/managing-calls/making-outbound-calls" rel="noopener noreferrer">Quick Start Example</a>.</li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li><strong>Explore more:</strong>&nbsp;Check out the&nbsp;<a href="https://dub.sh/b19X60T" rel="noopener noreferrer">VideoSDK documentation</a>&nbsp;for more features.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[Product Updates - December 2025 : New Billing & Pricing, AI Agents with Graphs & Fallback, and More!]]></title><description><![CDATA[Announcing our new transparent billing system and pricing for 2026! 
This month's update also gives your AI agents a "brain" with Conversational Graphs, makes them unstoppable with Provider Fallback, and adds powerful new video optimization features across all core SDKs.
]]></description><link>http://www.videosdk.live/blog/product-updates-december-2025</link><guid isPermaLink="false">695ccadd55831517a5a89d9c</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 05 Jan 2026 13:25:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/13-1.jpg" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  
  <style>
    :root {
      --primary: #A497D9;
      --bg: #050608;
      --card-bg: #111217;
      --text-main: #E0E0E0;
      --text-muted: #A0A0A0;
      --border: #22242C;
    }
    body {
      font-family: 'Inter', -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif;
      line-height: 1.7;
      color: var(--text-main);
      background-color: var(--bg);
      margin: 0;
      padding: 0;
    }
    .container {
      max-width: 850px;
      margin: 0 auto;
      padding: 60px 20px;
    }
    header {
      text-align: left;
      margin-bottom: 60px;
    }
    h1 {
      font-size: 42px;
      line-height: 1.2;
      color: #FFFFFF;
      margin-bottom: 20px;
    }
    h2 {
      font-size: 28px;
      color: var(--primary);
      margin-top: 50px;
      padding-bottom: 10px;
      border-bottom: 1px solid var(--border);
    }
    h3 {
      font-size: 22px;
      color: #FFFFFF;
      margin-top: 35px;
    }
    p {
      margin-bottom: 20px;
    }
    strong {
      color: #FFFFFF;
    }
    .card {
      background-color: var(--card-bg);
      border: 1px solid var(--border);
      border-radius: 12px;
      padding: 30px;
      margin: 30px 0;
    }
    .img-container {
      margin: 40px 0;
      text-align: center;
    }
    .img-container img {
      width: 100%;
      height: auto;
      border-radius: 8px;
      border: 1px solid var(--border);
      box-shadow: 0 10px 30px rgba(0,0,0,0.5);
    }
    .img-caption {
      font-size: 13px;
      color: var(--text-muted);
      margin-top: 10px;
      display: block;
    }
    ul, ol {
      margin-bottom: 20px;
      padding-left: 20px;
    }
    li {
      margin-bottom: 10px;
    }
    .highlight {
      color: var(--primary);
      font-weight: 600;
    }
    a {
      color: var(--primary);
      text-decoration: none;
    }
    a:hover {
      text-decoration: underline;
    }
    .cta-button {
        background-color: var(--primary);
        color: #000000;
        padding: 6px 20px;
        border-radius: 6px;
        font-size: 15px;
        font-weight: bold;
        display: inline-block;
        text-decoration: none;
    }
    .cta-box {
      text-align: left;
      background: linear-gradient(135deg, #111217 0%, #1a1b23 100%);
      padding: 40px;
      border-radius: 12px;
      border: 1px solid var(--primary);
      margin: 60px 0;
    }
    hr {
      border: 0;
      border-top: 1px solid var(--border);
      margin: 60px 0;
    }
    @media (max-width: 600px) {
      h1 { font-size: 32px; }
      .container { padding: 40px 15px; }
    }
  </style>
</head>
<body>

  <div class="container">
    <header>
      
      <img src="https://assets.videosdk.live/static-assets/ghost/2026/01/13-1.jpg" alt="Product Updates - December 2025 : New Billing & Pricing, AI Agents with Graphs & Fallback, and More!"/><p style="font-size: 18px; color: var(--text-muted);">A complete recap of our biggest platform and product updates from December 2025.</p>
    </header>

    <p>Welcome to the December edition of the VideoSDK Monthly Updates! As we close out the year, we’re launching our most significant platform update yet: a completely redesigned <span class="highlight">transparent billing system and new pricing for 2026</span>.</p>
    <p>On the product front, our AI Agents have gained a "brain" with <strong>Conversational Graph support</strong>, become more resilient with <strong>provider fallback and recovery</strong>, and we’ve rolled out powerful <strong>video optimization features</strong> across all our core SDKs. This is a huge one, let's get started!</p>

    <hr>

    <h2>New Billing Dashboard & Pricing for 2026</h2>
    <p>Our mission has always been to empower developers with world-class infrastructure. As we head into 2026, we're taking a massive leap forward by making our billing as scalable, transparent, and real-time as our infrastructure.</p>
    <p>We have replaced traditional monthly billing with a high-performance <strong>Prepaid Wallet system</strong>, giving you complete visibility and control over your spending in real time.</p>

    <h3>Key Highlights of the New System:</h3>
    <ul>
        <li><strong>The Prepaid Wallet:</strong> No more month-end invoices. See your real-time balance and top up instantly.</li>
        <li><strong>$20 "Start Building" Balance:</strong> Every new account now receives a one-time $20 free balance immediately upon signup. No credit card required.</li>
        <li><strong>Simplified Plans:</strong> Our new Plans tab (Free, Pay-As-You-Go, Enterprise) makes it easy to find the right fit for your stage.</li>
        <li><strong>On-Demand Scaling:</strong> Manage concurrency, agent sessions, and recording limits in real-time from the new "Usage Limits" tab.</li>
        <li><strong>Redesigned Billing Dashboard:</strong> A cleaner, more intuitive experience to manage payments, view spending, and download invoices.</li>
    </ul>

    <div class="card">
      <h3 style="margin-top: 0;">Read the Full Announcement</h3>
      <p>This is a fundamental upgrade to our platform designed for total transparency and control. For all the details, including notes for existing users, check out our dedicated blog post.</p>
      <a href="https://www.videosdk.live/blog/videosdk-pricing-2026-billing-dashboard" class="cta-button">Read the Full Announcement</a>
    </div>

    <hr>

    <h2>Major AI Advancements: Building Smarter, More Resilient Agents</h2>
    <p>This month, the Agents SDK hit its 50th release and gained some of its most powerful features to date, focusing on workflow control, reliability, and an expanded ecosystem.</p>

    <h3><a href="https://docs.videosdk.live/ai_agents/core-components/conversational-graph">Introducing Conversational Graphs (Agents SDK v0.0.51)</a></h3>
    <p>Give your agents a "brain." This groundbreaking feature lets you design stateful, deterministic agent workflows using states and transitions. It's the ultimate tool for building complex, reliable, and strictly controlled conversational AI, from customer support flows to structured data collection.</p>
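    <p>To make the states-and-transitions idea concrete, here is a minimal, self-contained Python sketch. Note that this is illustrative only and is <em>not</em> the Agents SDK API; the <code>ConversationGraph</code> class and its methods are invented for this example, so see the linked documentation for the real interface.</p>

```python
# Illustrative sketch only: NOT the VideoSDK Agents API. It shows the core
# idea behind a conversational graph -- named states with explicitly allowed
# transitions, so the conversation can only move along edges you define.

class ConversationGraph:
    def __init__(self, initial: str):
        self.state = initial
        self.transitions: dict[str, set[str]] = {}

    def add_transition(self, src: str, dst: str) -> None:
        self.transitions.setdefault(src, set()).add(dst)

    def go(self, dst: str) -> bool:
        """Move to `dst` only if that edge exists; return whether we moved."""
        if dst in self.transitions.get(self.state, set()):
            self.state = dst
            return True
        return False

# A tiny support flow: greeting -> collect_issue -> resolve. There is no
# edge from greeting straight to resolve, so that jump is rejected.
graph = ConversationGraph("greeting")
graph.add_transition("greeting", "collect_issue")
graph.add_transition("collect_issue", "resolve")

assert not graph.go("resolve")    # disallowed: no greeting -> resolve edge
assert graph.go("collect_issue")  # allowed edge, state advances
```

    <p>The determinism comes from the graph itself: whatever the LLM proposes, the workflow can only advance along edges you declared, which is what makes flows like structured data collection auditable.</p>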
    
    <h3><a href="https://docs.videosdk.live/ai_agents/core-components/fallback-adapter">Unprecedented Reliability with Provider Fallback (Agents SDK v0.0.53)</a></h3>
    <p>Never worry about a single provider failure again. You can now configure a prioritized list of STT, LLM, and TTS providers. If a primary provider fails, the agent will automatically and seamlessly fall back to a secondary one, with cooldown-based recovery to switch back when the service is healthy.</p>
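    <p>The fallback-with-cooldown behavior can be sketched in plain Python. This is a conceptual illustration of the pattern, not the SDK's <code>FallbackAdapter</code> API; the <code>FallbackChain</code> class and the provider callables below are invented for the example.</p>

```python
# Conceptual sketch (not the VideoSDK FallbackAdapter API): try providers in
# priority order; a failing provider is put on cooldown and is retried only
# after the cooldown expires, so traffic returns to the primary once healthy.
import time

class FallbackChain:
    def __init__(self, providers, cooldown_s: float = 30.0):
        self.providers = providers              # callables, highest priority first
        self.cooldown_s = cooldown_s
        self.failed_at: dict[int, float] = {}   # provider index -> failure time

    def call(self, *args, **kwargs):
        for i, provider in enumerate(self.providers):
            failed = self.failed_at.get(i)
            if failed is not None and time.monotonic() - failed < self.cooldown_s:
                continue  # still cooling down, skip to the next provider
            try:
                result = provider(*args, **kwargs)
                self.failed_at.pop(i, None)  # succeeded: clear any cooldown
                return result
            except Exception:
                self.failed_at[i] = time.monotonic()
        raise RuntimeError("all providers failed or are cooling down")

def flaky(text):   # stands in for a primary STT/LLM/TTS call that is down
    raise ConnectionError("primary unavailable")

def backup(text):  # stands in for the secondary provider
    return f"backup:{text}"

chain = FallbackChain([flaky, backup], cooldown_s=30.0)
print(chain.call("hello"))  # → backup:hello
```

    <p>On the first call the primary raises, is stamped with a failure time, and the secondary answers; subsequent calls skip the primary until the cooldown window elapses.</p>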

    <h3>Advanced Telephony & Agent Control (Agents SDK v0.0.48 & v0.0.49)</h3>
    <p>We've added a suite of powerful telephony features:</p>
    <ul>
        <li><a href="https://docs.videosdk.live/ai_agents/core-components/call-transfer"><strong>SIP Call Transfer:</strong></a> Transfer an ongoing SIP call to another phone number or SIP endpoint.</li>
        <li><a href="https://docs.videosdk.live/ai_agents/core-components/multi-agent-switching"><strong>Multi-Agent Switch:</strong></a> Seamlessly hand off a conversation from one agent to another (e.g., from a general agent to a booking specialist), with optional context inheritance.</li>
        <li><a href="https://docs.videosdk.live/ai_agents/core-components/dtmf-events"><strong>DTMF Handling</strong></a> & <a href="https://docs.videosdk.live/ai_agents/core-components/voice-mail-detection"><strong>Voicemail Detection:</strong></a> Capture keypad input and automatically detect voicemails to trigger callbacks.</li>
    </ul>

    <h3>An Ever-Expanding AI Ecosystem (Agents SDK v0.0.52 & v0.0.55)</h3>
    <p>We've massively expanded our plugin support, adding integrations for <strong>Google Vertex AI</strong>, <a href="https://docs.videosdk.live/ai_agents/plugins/realtime/xai-grok"><strong>xAI (Grok) Realtime Voice & LLM</strong></a>, <a href="https://docs.videosdk.live/ai_agents/plugins/tts/murf-ai-tts"><strong>MurfAI TTS</strong></a>, <a href="https://docs.videosdk.live/ai_agents/plugins/realtime/ultravox"><strong>Ultravox Realtime</strong></a>, and <a href="https://docs.videosdk.live/ai_agents/plugins/stt/gladia"><strong>Gladia.IO STT</strong></a>.</p>
    
    <hr>

    <h2>Core SDK Enhancements: Video Optimization & Quality Monitoring</h2>
    <p>We've rolled out powerful new features across our core SDKs to give you more control over video quality and a deeper understanding of stream health.</p>

    <h3>Advanced Video Track Optimization</h3>
    <p>Now available on <strong>JS, React, React Native, and Flutter SDKs</strong>, these new parameters in <strong>createCameraVideoTrack()</strong> give you granular control over quality and bandwidth:</p>
    <ul>
        <li><strong>bitrateMode</strong>: Choose between <strong>high_quality</strong>, <strong>balanced</strong>, and <strong>bandwidth_optimized</strong>.</li>
        <li><strong>maxLayer</strong>: Specify the maximum number of simulcast layers to publish.</li>
    </ul>
    
    <h3>Real-Time Quality Monitoring</h3>
    <p>Our <strong>iOS and Flutter SDKs</strong> now feature a <span class="highlight">Quality Limitation Event</span> to detect bandwidth, network, or CPU issues on the local device, and a <span class="highlight">Stream State Change</span> method/event to monitor remote streams for freeze, stuck, or recovery events.</p>
    
    <hr>

    <h2>📚 New Content & Resources</h2>
    <p>Explore our latest guides, tutorials, and videos to get the most out of these new features.</p>
    
    <h3>New Guides & Tutorials</h3>
    <ul>
      <li><strong>Build an AI WhatsApp Voice Agent:</strong> <a href="https://dev.to/chaitrali_kakde/how-to-build-an-ai-whatsapp-voice-agent-with-videosdk-step-by-step-guide-2o6l">A step-by-step guide to deploying your first agent on WhatsApp.</a></li>
      <li><strong>Enable Preemptive Responses:</strong> <a href="https://www.videosdk.live/blog/how-to-enable-preemptive-response-in-ai-voice-agents">Learn how to make your agents respond faster with preemptive transcript generation.</a></li>
      <li><strong>Handle DTMF Events:</strong> <a href="https://www.videosdk.live/blog/dtmf-events-in-telephony-ai-agent">A deep dive into capturing and using keypad tones in your telephony agents.</a></li>
      <li><strong>Implement Call Transfers:</strong> <a href="https://www.videosdk.live/blog/how-to-transfer-calls-in-ai-voice-agents">Master the art of seamlessly transferring calls between agents or to phone numbers.</a></li>
    </ul>

    <h3>Featured Videos</h3>
    <ul>
        <li>▶️ <a href="https://dub.sh/znk9osA">Preemptive Response for AI Voice Agents</a></li>
        <li>▶️ <a href="https://dub.sh/j1KDeuN">DTMF Events for SIP AI Voice Agents</a></li>
        <li>▶️ <a href="https://dub.sh/wGsFwYu">Call Transfer for AI Voice Agents</a></li>
    </ul>

    <hr>

    <h2>✨ Community Spotlight</h2>
    
    <div class="img-container">
      <a href="https://www.videosdk.live/customers/exterview">
        <img src="https://strapi.videosdk.live/uploads/exterview_case_study_481edf1d53.png" alt="Product Updates - December 2025 : New Billing & Pricing, AI Agents with Graphs & Fallback, and More!" style="border: none; box-shadow: none;"/>
      </a>
    </div>

    <div style="border-left: 3px solid var(--primary); padding-left: 20px; margin-top: 30px; margin-bottom: 30px;">
        <a href="https://www.videosdk.live/customers/exterview" style="font-size: 18px; line-height: 1.5;">Explore how Exterview builds a global asynchronous video interview platform with VideoSDK.</a>
    </div>
    
    <h2>SDK Sketches</h2>
    <div class="img-container">
      <!-- Placeholder for the comic image -->
      <img src="https://strapi.videosdk.live/uploads/convo_graph_sketch_1c1e592ece.png" alt="Product Updates - December 2025 : New Billing & Pricing, AI Agents with Graphs & Fallback, and More!"/>
      <span class="img-caption">This month's sketch: The difference between a rigid, hard-coded AI and one powered by flexible Conversational Graphs.</span>
    </div>

    <hr>

    <h2>What's Next?</h2>
    <p>As we head into 2026, our focus remains on pushing the boundaries of what's possible with real-time communication and AI. Expect even more powerful tools, deeper platform integrations, and a relentless focus on the developer experience.</p>
    
    <div class="cta-box">
      <h3>Ready to Build with the Latest?</h3>
      <p>Upgrade your SDKs to the latest versions to take advantage of all these new features and improvements.</p>
      <a href="https://discord.com/invite/Gpmj6eCq5u" class="cta-button">Join our Discord Community</a>
    </div>

    
  </div>

</body>
</html>
<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Announcing New Pricing: Free Credits, Simpler Pricing, and a Smarter Billing Dashboard]]></title><description><![CDATA[VideoSDK introduces a new Pay-As-You-Go On-Demand model and a redesigned Billing Dashboard featuring a $20 free balance, prepaid wallet, and real-time usage controls.]]></description><link>https://www.videosdk.live/blog/videosdk-pricing-2026-billing-dashboard</link><guid isPermaLink="false">695bee1855831517a5a89d2c</guid><category><![CDATA[ANNOUNCEMENT]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 31 Dec 2025 18:40:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2026/01/Price-1.png" medium="image"/><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <style>
    :root {
      --primary: #A497D9;
      --bg: #050608;
      --card-bg: #111217;
      --text-main: #E0E0E0;
      --text-muted: #A0A0A0;
      --border: #22242C;
    }
    body {
      font-family: 'Inter', -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif;
      line-height: 1.7;
      color: var(--text-main);
      background-color: var(--bg);
      margin: 0;
      padding: 0;
    }
    .container {
      max-width: 850px;
      margin: 0 auto;
      padding: 60px 20px;
    }
    header {
      text-align: left;
      margin-bottom: 60px;
    }
    h1 {
      font-size: 42px;
      line-height: 1.2;
      color: #FFFFFF;
      margin-bottom: 20px;
    }
    h2 {
      font-size: 28px;
      color: var(--primary);
      margin-top: 50px;
      padding-bottom: 10px;
      border-bottom: 1px solid var(--border);
    }
    h3 {
      font-size: 22px;
      color: #FFFFFF;
      margin-top: 35px;
    }
    p {
      margin-bottom: 20px;
    }
    strong {
      color: #FFFFFF;
    }
    .card {
      background-color: var(--card-bg);
      border: 1px solid var(--border);
      border-radius: 12px;
      padding: 30px;
      margin: 30px 0;
    }
    .img-container {
      margin: 40px 0;
      text-align: center;
    }
    .img-container img {
      width: 100%;
      height: auto;
      border-radius: 8px;
      border: 1px solid var(--border);
      box-shadow: 0 10px 30px rgba(0,0,0,0.5);
    }
    .img-caption {
      font-size: 13px;
      color: var(--text-muted);
      margin-top: 10px;
      display: block;
    }
    ul, ol {
      margin-bottom: 20px;
      padding-left: 20px;
    }
    li {
      margin-bottom: 10px;
    }
    .highlight {
      color: var(--primary);
      font-weight: 600;
    }
    a {
      color: var(--primary);
      text-decoration: none;
    }
    a:hover {
      text-decoration: underline;
    }
    .cta-box {
      text-align: center;
      background: linear-gradient(135deg, #111217 0%, #1a1b23 100%);
      padding: 40px;
      border-radius: 12px;
      border: 1px solid var(--primary);
      margin: 60px 0;
    }
    hr {
      border: 0;
      border-top: 1px solid var(--border);
      margin: 60px 0;
    }
    @media (max-width: 600px) {
      h1 { font-size: 32px; }
      .container { padding: 40px 15px; }
    }
  </style>
</head>
<body>

  <div class="container">
    <header>
      <img src="https://assets.videosdk.live/static-assets/ghost/2026/01/Price-1.png" alt="Announcing New Pricing: Free Credits, Simpler Pricing, and a Smarter Billing Dashboard"/><p style="font-size: 18px; color: var(--text-muted);">Real-time scale. Transparent billing. Total control.</p>
    </header>

    <p>At VideoSDK, our mission has always been to empower developers to build world-class live video experiences without the complexity of infrastructure management. As we head into 2026, we are taking a massive leap forward by making our billing as scalable, transparent, and real-time as our infrastructure.</p>

    <p>Today, we are thrilled to announce our new Pricing plans and a completely redesigned <strong>Billing Dashboard</strong>.</p>

    <!-- Quick Links Navigation Grid -->
    <div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(140px, 1fr)); gap: 12px; margin: 35px 0;">
        <a href="https://www.videosdk.live/pricing" target="_blank" style="text-decoration: none; background: #111217; border: 1px solid #22242C; padding: 16px; border-radius: 10px; transition: transform 0.2s ease, border-color 0.2s ease;">
            <div style="color: #A497D9; font-size: 10px; font-weight: 800; text-transform: uppercase; margin-bottom: 4px; letter-spacing: 0.5px;">Pricing & Plans</div>
            <div style="color: #FFFFFF; font-size: 20px; font-weight: 500;">View Plans</div>
        </a>
        <a href="https://docs.videosdk.live/help_docs/pricing" target="_blank" style="text-decoration: none; background: #111217; border: 1px solid #22242C; padding: 16px; border-radius: 10px; transition: transform 0.2s ease, border-color 0.2s ease;">
            <div style="color: #A497D9; font-size: 10px; font-weight: 800; text-transform: uppercase; margin-bottom: 4px; letter-spacing: 0.5px;">Common Questions</div>
            <div style="color: #FFFFFF; font-size: 20px; font-weight: 500;">Pricing & FAQs</div>
        </a>
        <a href="https://docs.videosdk.live/help_docs/quotas-and-limit" target="_blank" style="text-decoration: none; background: #111217; border: 1px solid #22242C; padding: 16px; border-radius: 10px; transition: transform 0.2s ease, border-color 0.2s ease;">
            <div style="color: #A497D9; font-size: 10px; font-weight: 800; text-transform: uppercase; margin-bottom: 4px; letter-spacing: 0.5px;">Usage Limits</div>
            <div style="color: #FFFFFF; font-size: 20px; font-weight: 500;">Quotas & Scale</div>
        </a>
        <a href="https://www.videosdk.live/contact" target="_blank" style="text-decoration: none; background: #111217; border: 1px solid #22242C; padding: 16px; border-radius: 10px; transition: transform 0.2s ease, border-color 0.2s ease;">
            <div style="color: #A497D9; font-size: 10px; font-weight: 800; text-transform: uppercase; margin-bottom: 4px; letter-spacing: 0.5px;">Need help?</div>
            <div style="color: #FFFFFF; font-size: 20px; font-weight: 500;">Book a call</div>
        </a>
    </div>

    <hr>

    <h2>1. The Power of the Prepaid Wallet</h2>
    <p>We’ve replaced traditional monthly billing cycles with a high-performance <strong>Prepaid Wallet system</strong>. Instead of waiting for a month-end invoice, you now have complete visibility and control over your spending in real-time.</p>
    
    <p><a href="https://app.videosdk.live/profile/billing?tab=overview" target="_blank"><strong>Go to Billing Overview →</strong></a></p>

    <div class="img-container">
      <img src="https://strapi.videosdk.live/uploads/Overview_de99afd47f.png" alt="Announcing New Pricing: Free Credits, Simpler Pricing, and a Smarter Billing Dashboard">
      <span class="img-caption">Figure 1: The new Dashboard Overview with Real-Time Balance.</span>
    </div>

    <h3>Key Features of the Overview Tab:</h3>
    <ul>
      <li><strong>Real-Time Balance:</strong> Instantly see your available funds.</li>
      <li><strong>Quick Top-ups:</strong> Add funds with one click using preset amounts or custom values.</li>
      <li><strong>Auto-Reload:</strong> Never worry about service interruptions. Enable <a href="https://docs.videosdk.live/help_docs/pricing">Auto-Reload</a> to automatically top up your wallet when it falls below a threshold you set.</li>
      <li><strong>Spend Summary:</strong> A high-level view of your recent usage and costs.</li>
    </ul>

    <div class="card">
      <h3 style="margin-top: 0;">The $20 "Start Building" Balance</h3>
      <p>We believe every great idea deserves a chance to be built. That’s why starting today, <strong>every new account receives a one-time $20 free balance</strong> added to their wallet immediately upon signup. No credit card is required to start—just sign up and deploy your first AI Agent or Video Call room instantly.</p>
    </div>

    <hr>

    <h2>2. Simplified Plans for Every Stage</h2>
    <p>Whether you are a solo developer or a global enterprise, our new <a href="https://app.videosdk.live/profile/billing?tab=plans" target="_blank"><strong>Plans tab</strong></a> makes it easy to understand your trajectory.</p>

    <div class="img-container">
      <img src="https://strapi.videosdk.live/uploads/Plans_85710a8adc.png" alt="Announcing New Pricing: Free Credits, Simpler Pricing, and a Smarter Billing Dashboard">
      <span class="img-caption">Figure 2: Compare and switch between flexible plans.</span>
    </div>

    <ul>
      <li><strong>Free Tier:</strong> Perfect for exploration, featuring the $20 free balance and community support.</li>
      <li><strong>Pay-As-You-Go (On-Demand):</strong> Our most popular choice for production apps. Scale seamlessly with <a href="https://www.videosdk.live/pricing">usage-based pricing</a> and high concurrency.</li>
      <li><strong>Enterprise:</strong> For high-volume demands requiring dedicated support, custom SLAs, and private cloud options.</li>
    </ul>

    <hr>

    <h2>3. Manage Add-ons & Compliance (Beta)</h2>
    <p>Security and compliance are no longer "manual" processes. Within the new <a href="https://app.videosdk.live/profile/billing?tab=addons" target="_blank"><strong>Add-ons tab</strong></a>, you can activate enterprise-grade compliance features with a single click.</p>

    <div class="img-container">
      <img src="https://strapi.videosdk.live/uploads/Add_Ons_a9077c45a9.png" alt="Announcing New Pricing: Free Credits, Simpler Pricing, and a Smarter Billing Dashboard">
      <span class="img-caption">Figure 3: One-click activation for <a href="https://docs.videosdk.live/help_docs/security-and-privacy/iso-27001">ISO</a>, <a href="https://docs.videosdk.live/help_docs/security-and-privacy/soc2">SOC2</a>, <a href="https://docs.videosdk.live/help_docs/security-and-privacy/hipaa">HIPAA</a>, and <a href="https://docs.videosdk.live/help_docs/security-and-privacy/gdpr">GDPR</a> instantly.</span>
    </div>

    <p>Enable standards like <strong>ISO 27001</strong>, <strong>SOC2 Type II</strong>, <strong>HIPAA</strong>, and <strong>GDPR</strong> instantly. This tab also hosts powerful network features like <a href="https://docs.videosdk.live/help_docs/security-and-privacy/geo-fencing"><strong>Geo-fencing</strong></a> (restricting traffic to specific regions) and <a href="https://docs.videosdk.live/help_docs/security-and-privacy/cloud-proxy"><strong>Cloud Proxy</strong></a> (routing traffic through managed secure gates).</p>

    <hr>

    <h2>4. On-Demand Scaling with Usage Limits (Beta)</h2>
    <p>Gone are the days of emailing support to increase your concurrency limits. The <a href="https://app.videosdk.live/profile/billing?tab=concurrency" target="_blank"><strong>Usage Limits tab</strong></a> allows you to manage your capacity in real-time.</p>

    <div class="img-container">
      <img src="https://strapi.videosdk.live/uploads/Usage_Limits_d8bdbaf762.png" alt="Announcing New Pricing: Free Credits, Simpler Pricing, and a Smarter Billing Dashboard">
      <span class="img-caption">Figure 4: Purchase and monitor limit packs on demand.</span>
    </div>

    <p>Need to host more <strong>Agent Sessions</strong>, increase <strong>Agent Deployments</strong>, or scale your <strong>Recordings</strong>? Simply purchase <a href="https://docs.videosdk.live/help_docs/quotas-and-limit-new"><strong>Usage Limit Packs</strong></a>. These packs are applied to your account instantly and run on a monthly subscription model, deducted from your wallet balance.</p>

    <hr>

    <h2>5. Security & Payment Management</h2>
    <p>Transparency extends to how you manage your data and payments. The <a href="https://app.videosdk.live/profile/billing?tab=billinginfo" target="_blank"><strong>Payment Details</strong></a> and <a href="https://app.videosdk.live/profile/billing?tab=invoices" target="_blank"><strong>Billing History</strong></a> tabs provide a clear audit trail.</p>

    <div class="img-container">
      <img src="https://strapi.videosdk.live/uploads/Payment_Details_876c198e97.png" alt="Announcing New Pricing: Free Credits, Simpler Pricing, and a Smarter Billing Dashboard">
      <span class="img-caption">Figure 5: Manage cards and billing addresses securely.</span>
    </div>

    <ul>
      <li><strong>Secure Payments:</strong> Manage multiple credit cards and set default methods for Auto-Reload.</li>
      <li><strong>Verified Billing:</strong> Keep your business legal name, address, and VAT/GST details up to date for automated, compliant invoicing.</li>
    </ul>

    <div class="img-container">
      <img src="https://strapi.videosdk.live/uploads/Billing_History_70fe9b2bf5.png" alt="Announcing New Pricing: Free Credits, Simpler Pricing, and a Smarter Billing Dashboard">
      <span class="img-caption">Figure 6: Download invoices and track transaction status.</span>
    </div>

    <p><strong>Billing History:</strong> Track every transaction, check due dates, and download official PDF invoices for your accounting records.</p>

    <hr>

    <h2>6. Real-Time Spend Insights (Beta)</h2>
    <p>Understanding where your money goes is critical for optimization. The <a href="https://app.videosdk.live/profile/billing?tab=spendinfo" target="_blank"><strong>Spend Info tab</strong></a> provides a granular breakdown of every cent spent.</p>

    <div class="img-container">
      <img src="https://strapi.videosdk.live/uploads/Spend_Info_2c8dbb9f80.png" alt="Announcing New Pricing: Free Credits, Simpler Pricing, and a Smarter Billing Dashboard">
      <span class="img-caption">Figure 7: High-resolution breakdown of usage and costs.</span>
    </div>

    <p>Track costs for:</p>
    <ul>
      <li><strong>Session Minutes:</strong> RTC and Agent usage.</li>
      <li><strong>Concurrency Packs:</strong> Real-time application of your monthly <a href="https://docs.videosdk.live/help_docs/quotas-and-limit-new">limit packs</a>.</li>
      <li><strong>Overage Handling:</strong> Clear visibility into any usage that exceeds your current prepaid limits.</li>
    </ul>

    <div class="cta-box">
      <h3>Ready to Scale in 2026?</h3>
      <p>The new billing dashboard is live! Log in now to explore the new experience and claim your balance.</p>
      <a href="https://app.videosdk.live/profile/billing" style="background-color: var(--primary); color: #000000; padding: 8px 20px; border-radius: 6px; font-size: 15px; font-weight: bold; display: inline-block;">Go to My Dashboard</a>
    </div>

    <hr>

    <h3>Note:</h3>
    
    <div class="card">
        <h4 style="margin-top: 0;">Existing Free Users</h4>
        <p>To help you explore our paid features, we’ve added <strong>$20 in free credit</strong> to your account.</p>
        <ul>
            <li>If you already see the $20 balance, you’re all set.</li>
            <li>If you don’t see it, your account may be inactive. Simply add your <a href="https://app.videosdk.live/profile/billing?tab=billinginfo">billing details</a> in the dashboard, and the $20 credit will be applied automatically.</li>
        </ul>

        <p><strong>What you need to do:</strong><br>
        Add your <a href="https://app.videosdk.live/profile/billing?tab=billinginfo">billing information</a> to activate your account and use your free balance. If the credit still doesn't appear, our support team will be happy to help.</p>
    </div>

    <div class="card">
        <h4 style="margin-top: 0;">Existing PAYG Users</h4>
        <p>We’ve ensured a smooth transition with no service interruption.
</p>
        <ul>
            <li>Your service will continue as usual until 31 January under Pay-As-You-Go (postpaid) billing.
</li>
            <li>Your January usage will be billed in February, and the invoice will be sent on 3 February.
</li>
            <li>From 3 February onward, all usage will be charged under the new pricing model.
</li>
            <li>Your account will move to the Pay-As-You-Go (PAYG) on-demand model (prepaid), where you can add funds to your wallet and manage usage in real time.
</li>

        </ul>
        <p><strong>What you need to do:</strong><br>
        Please add funds to your wallet before <strong>31 January</strong> and review your <a href="https://app.videosdk.live/profile/billing?tab=concurrency">usage limits</a>. If your usage is high, increase your limits to avoid any service disruption.</p>
    </div>

    <p>For a deeper dive into the numbers, check out our <a href="https://docs.videosdk.live/help_docs/pricing"><strong>Detailed Pricing FAQ</strong></a>.</p>

    <p style="margin-top: 60px;">
      Happy Building in 2026!<br>
      <strong>Team VideoSDK</strong>
    </p>
  </div>

</body>
</html>
<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[How to Transfer Calls in AI Voice Agents Using SIP Telephony]]></title><description><![CDATA[Learn how to automatically transfer an ongoing SIP call to another phone number without disconnecting the call or redialing using Call Transfer in SIP Telephony]]></description><link>https://www.videosdk.live/blog/how-call-transfer-works-in-ai-voice-agents</link><guid isPermaLink="false">694e2f7f64df6f042b4d4af1</guid><category><![CDATA[ai telephony]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 26 Dec 2025 13:53:28 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/12/Call-Transfer.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/12/Call-Transfer.png" alt="How to Transfer Calls in AI Voice Agents Using SIP Telephony"/><p>When someone calls a business, they’re usually looking for a quick and clear resolution, not a conversation with a bot that can’t help them move forward. Sometimes it’s a billing issue. Sometimes it’s a sales inquiry. And sometimes, they just want to speak to a real person.</p><p>A well-designed AI voice agent knows when to assist and when to step aside. That’s where <strong>Call Transfer in VideoSDK</strong> comes in. It allows your AI agent to transfer an ongoing SIP call to the right person, without disconnecting the caller or breaking the flow of the conversation.</p><h2 id="what-is-call-transfer">What is Call Transfer</h2><p><strong>Call Transfer</strong> enables an AI agent to move an active SIP call to another phone number without ending the session. 
From the caller’s perspective, the transition is automatic: the call continues without dropping, there’s no need to redial, and the conversation flows naturally without awkward pauses or repeated explanations.</p><h3 id="how-it-works">How It Works</h3><ul><li>The agent evaluates the user’s intent to determine when a call transfer is required and then triggers the function tool.</li><li>When the function tool is triggered, it tells the system to move the call to another phone number.</li><li>The ongoing SIP call is forwarded to the new number instantly, without disconnecting or redialing.</li></ul><blockquote>Get started and sign up at <strong>VideoSDK</strong> to get your <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></blockquote><h2 id="how-to-trigger-call-transfer">How To Trigger Call Transfer</h2><p>To set up incoming call handling, outbound calling, and routing rules, check out the&nbsp;<a href="https://docs.videosdk.live/telephony/ai-telephony-agent-quick-start#part-2-connect-your-agent-to-the-phone-network" rel="noopener noreferrer">Quick Start Example</a>.</p><pre><code class="language-python">import os

from videosdk.agents import Agent, function_tool

class CallTransferAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are a call transfer agent that helps transfer the ongoing call to a new number. Use the transfer_call tool when the user asks to transfer the call.",
        )

    async def on_enter(self) -&gt; None:
        await self.session.say("Hello Buddy, How can I help you today?")

    async def on_exit(self) -&gt; None:
        await self.session.say("Goodbye Buddy, Thank you for calling!")

    @function_tool
    async def transfer_call(self) -&gt; None:
        """Transfer the call to Provided number"""
        token = os.getenv("VIDEOSDK_AUTH_TOKEN")
        transfer_to = os.getenv("CALL_TRANSFER_TO") 
        return await self.session.call_transfer(token, transfer_to)</code></pre><h2 id="full-working-example">Full Working Example</h2><pre><code class="language-python">import logging
import os

from videosdk.agents import (Agent, AgentSession, CascadingPipeline, function_tool, WorkerJob, ConversationFlow, JobContext, RoomOptions, Options)
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.google import GoogleLLM
from videosdk.plugins.cartesia import CartesiaTTS
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
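# Configuration comes from environment variables: VIDEOSDK_AUTH_TOKEN and
# CALL_TRANSFER_TO are read by the transfer_call tool below, and each
# STT/LLM/TTS plugin expects its own provider API key in the environment
# (exact variable names per the plugin docs).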

# Setup logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    handlers=[logging.StreamHandler()]
)

# Pre-download turn detector model
pre_download_model()


class CallTransferAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions=(
                "You are the Call Transfer Agent. "
                "Help transfer ongoing calls to a new number using the transfer_call tool."
            )
        )

    async def on_enter(self) -&gt; None:
        await self.session.say("Hello Buddy, How can I help you today?")

    async def on_exit(self) -&gt; None:
        await self.session.say("Goodbye Buddy, Thank you for calling!")

    @function_tool
    async def transfer_call(self) -&gt; None:
        """Transfer the call to the provided number"""
        token = os.getenv("VIDEOSDK_AUTH_TOKEN")
        transfer_to = os.getenv("CALL_TRANSFER_TO")
        return await self.session.call_transfer(token, transfer_to)


async def entrypoint(ctx: JobContext):
    agent = CallTransferAgent()
    conversation_flow = ConversationFlow(agent)

    pipeline = CascadingPipeline(
        stt=DeepgramSTT(),
        llm=GoogleLLM(),
        tts=CartesiaTTS(),
        vad=SileroVAD(),
        turn_detector=TurnDetector()
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    await session.start(wait_for_participant=True, run_until_shutdown=True)


def make_context() -&gt; JobContext:
    room_options = RoomOptions(name="Call Transfer Agent", playground=True)
    return JobContext(room_options=room_options)


if __name__ == "__main__":
    job = WorkerJob(
        entrypoint=entrypoint,
        jobctx=make_context,
        options=Options(
            agent_id="YOUR_AGENT_ID",
            register=True,
            host="localhost",
            port=8081
        )
    )
    job.start()</code></pre><h2 id="conclusion">Conclusion</h2><p>Call Transfer transforms your AI voice agent from a simple responder into a capable call-handling system. By automatically routing ongoing SIP calls to the right person, it ensures users never experience dropped calls or awkward handoffs. </p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>Explore the&nbsp;<strong>call-transfer</strong><a href="https://github.com/videosdk-live/agents-quickstart/blob/main/Call%20Transfer/call_transfer.py" rel="noreferrer">-implementation</a>&nbsp;on github.</li><li>To set up inbound calls, outbound calls, and routing rules check out the&nbsp;<a href="https://docs.videosdk.live/telephony/managing-calls/making-outbound-calls" rel="noopener noreferrer">Quick Start Example</a>.</li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li>Sign up at VideoSDK - <a href="https://dub.sh/BVOvGNr" rel="noreferrer">authentication token</a></li><li><strong>Explore more:</strong>&nbsp;Check out the&nbsp;<a href="https://dub.sh/b19X60T" rel="noopener noreferrer">VideoSDK documentation</a>&nbsp;for more features.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[How to enable DTMF Events in Telephony AI Agent]]></title><description><![CDATA[Learn how DTMF input powers reliable, menu-driven voice interactions. 
This blog explores common use cases and shows how VideoSDK voice agents process real-time keypad events to drive precise call flows.]]></description><link>https://www.videosdk.live/blog/dtmf-events-in-telephony-ai-agent</link><guid isPermaLink="false">6949321e64df6f042b4d4aa5</guid><category><![CDATA[ai telephony]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 24 Dec 2025 10:24:43 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/12/dtmf-event-thumbnail.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/12/dtmf-event-thumbnail.png" alt="How to enable DTMF Events in Telephony AI Agent"/><p>Not every caller wants to speak to a voice agent. In many call scenarios, users expect to press a key to make a selection, confirm an action, or move forward in a call flow. This is especially common in menu-based systems, short responses, or situations where speech recognition may not be reliable.</p><p>DTMF (Dual-Tone Multi-Frequency) input gives voice agents a clear and predictable way to handle these interactions. 
When a caller presses a key on their phone, the agent receives that input instantly and can use it to control the call flow or trigger application logic.</p><p>In this post, we’ll explore how DTMF events can be used in a VideoSDK-powered voice agent, starting from common interaction patterns and moving into how the system processes keypad input in real time.</p><h3 id="typical-interaction-patterns-using-dtmf">Typical Interaction Patterns Using DTMF</h3><p>DTMF input is commonly used at decision points in a call, such as:</p><ul><li>Selecting options from a call menu</li><li>Confirming or canceling an action</li><li>Providing short numeric input</li><li>Navigating between steps in a call flow</li></ul><p>These interactions are simple, fast, and familiar to callers, which makes them a good fit for structured voice experiences.</p><h2 id="how-it-works">How It Works</h2><ul><li><strong>DTMF Event Detection</strong>: The agent detects key presses (0–9, *, #) from the caller during a call session.</li><li><strong>Real-Time Processing</strong>: Each key press generates a DTMF event that is delivered to the agent immediately.</li><li><strong>Callback Integration</strong>: A user-defined callback function handles incoming DTMF events.</li><li><strong>Action Execution</strong>: The agent executes actions or triggers workflows based on the received DTMF input like building IVR flows, collecting user input, or triggering actions in your application.</li></ul><h2 id="step-1-enabling-dtmf-events">Step 1 : Enabling DTMF Events</h2><p>DTMF event detection can be enabled in two ways:</p><ul><li><strong>Via Dashboard:</strong><ul><li>When creating or editing a SIP gateway in the <a href="https://dub.sh/zXYQt7V" rel="noreferrer">VideoSDK dashboard</a>, enable the&nbsp;<code>DTMF</code>&nbsp;option.</li></ul></li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/12/DTMF-events--1-.png" class="kg-image" alt="How to enable DTMF 
Events in Telephony AI Agent" loading="lazy" width="1598" height="833"/></figure><ul><li><strong>Via API:</strong><ul><li>Set the&nbsp;<code>enableDtmf</code>&nbsp;parameter to&nbsp;<code>true</code>&nbsp;when creating or updating a SIP gateway using the API.</li></ul></li></ul><pre><code class="language-bash">curl -H "Authorization: $YOUR_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "Twilio Inbound Gateway",
    "enableDtmf": true,
    "numbers": ["+0123456789"]
  }' \
  -X POST https://api.videosdk.live/v2/sip/inbound-gateways</code></pre><p>Once enabled, DTMF events will be detected and published for all calls routed through that gateway.</p><h2 id="step-2-implementation">Step 2: Implementation</h2><p>To set up inbound calls, outbound calls, and routing rules, check out the&nbsp;<a href="https://docs.videosdk.live/telephony/managing-calls/making-outbound-calls" rel="noopener noreferrer">Quick Start Example</a>.</p><pre><code class="language-python">from videosdk.agents import AgentSession, DTMFHandler

async def entrypoint(ctx: JobContext):
  
    # `agent` is the Agent instance created earlier in the entrypoint
    async def dtmf_callback(digit: int):
        if digit == 1:
            agent.instructions = "You are a Sales Representative. Your goal is to sell our products"
            await agent.session.say(
                "Routing you to Sales. Hi, I'm from Sales. How can I help you today?"
            )
        elif digit == 2:
            agent.instructions = "You are a Support Specialist. Your goal is to help customers with technical issues."
            await agent.session.say(
                "Routing you to Support. Hi, I'm from Support. What issue are you facing?"
            )
        else:
            await agent.session.say(
                "Invalid input. Press 1 for Sales or 2 for Support."
            )

    dtmf_handler = DTMFHandler(dtmf_callback)

    session = AgentSession(
        # agent, pipeline, and conversation_flow are configured as usual
        dtmf_handler=dtmf_handler,
    )</code></pre><h2 id="full-working-example">Full Working Example</h2><pre><code class="language-python">import logging
from videosdk.agents import Agent, AgentSession, CascadingPipeline, WorkerJob, ConversationFlow, JobContext, RoomOptions, Options, DTMFHandler
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
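# The Deepgram, OpenAI, and ElevenLabs plugins read their API keys from the
# environment (e.g. DEEPGRAM_API_KEY); the exact variable names for the other
# providers are assumptions here - check each plugin's documentation.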

logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", handlers=[logging.StreamHandler()])
pre_download_model()
class VoiceAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are a helpful voice assistant that can answer questions."
        )
    async def on_enter(self) -&gt; None:
        await self.session.say("Hello, how can I help you today?")
    
    async def on_exit(self) -&gt; None:
        await self.session.say("Goodbye!")
        
async def entrypoint(ctx: JobContext):
    
    agent = VoiceAgent()
    conversation_flow = ConversationFlow(agent)

    pipeline=CascadingPipeline(
        stt=DeepgramSTT(),
        llm=OpenAILLM(),
        tts=ElevenLabsTTS(),
        vad=SileroVAD(),
        turn_detector=TurnDetector()
    )
    
    async def dtmf_callback(message):
        print("DTMF message received:", message)

    dtmf_handler = DTMFHandler(dtmf_callback)

    session = AgentSession(
        agent=agent, 
        pipeline=pipeline,
        conversation_flow=conversation_flow,
        dtmf_handler=dtmf_handler,
    )

    await session.start(wait_for_participant=True, run_until_shutdown=True)

def make_context() -&gt; JobContext:
    room_options = RoomOptions(name="DTMF Agent Test", playground=True)
    return JobContext(room_options=room_options) 
 
if __name__ == "__main__":
    job = WorkerJob(entrypoint=entrypoint, jobctx=make_context, options=Options(agent_id="YOUR_AGENT_ID", max_processes=2, register=True, host="localhost", port=8081))
    job.start()</code></pre><p>By enabling DTMF detection and handling events at the agent level, you can build predictable call flows, guide users through menus, and trigger application logic without interrupting the call experience. When combined with voice input, DTMF gives you more control over how users interact with your agent.</p><p>This makes DTMF a practical addition to any voice agent that needs clear, deterministic user input during a call.</p><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ul><li>Explore the&nbsp;<strong>dtmf-event</strong><a href="https://github.com/videosdk-live/agents-quickstart/blob/main/DTMF%20Handler/dtmf_handler.py" rel="noreferrer">-implementation-example</a>&nbsp;for full code implementation.</li><li>To set up inbound calls, outbound calls, and routing rules check out the&nbsp;<a href="https://docs.videosdk.live/telephony/managing-calls/making-outbound-calls" rel="noopener noreferrer">Quick Start Example</a>.</li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li><strong>Explore more:</strong>&nbsp;Check out the&nbsp;<a href="https://docs.videosdk.live/ai_agents/core-components/dtmf-events" rel="noreferrer">VideoSDK documentation</a>&nbsp;for more features.</li><li>👉 Share your thoughts, roadblocks, or success stories in the comments or join our&nbsp;<a href="https://dub.sh/yDV95i6" rel="noopener noreferrer">Discord community ↗</a>. 
We’re excited to learn from your journey and help you build even better AI-powered communication tools!</li></ul>]]></content:encoded></item><item><title><![CDATA[How to enable preemptive response in AI Voice Agents]]></title><description><![CDATA[Learn how Preemptive Response reduces voice AI latency by streaming partial transcripts to the LLM, enabling faster and more natural conversational agents.]]></description><link>https://www.videosdk.live/blog/how-to-enable-preemptive-response-in-ai-voice-agents</link><guid isPermaLink="false">693f8ddf64df6f042b4d4a37</guid><category><![CDATA[preemtive-response]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 15 Dec 2025 09:12:06 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/12/preemptive-response-thumbnail.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/12/preemptive-response-thumbnail.png" alt="How to enable preemptive response in AI Voice Agents"/><p>When it comes to voice AI, the real challenge isn’t speed <strong>it’s timing</strong>.</p><p>A response that arrives a second too late feels unnatural. That tiny pause is enough to remind users they’re talking to a machine. Humans don’t wait for sentences to end. We anticipate intent and respond at the right moment. Traditional voice agents don’t. They wait for silence and that’s what makes conversations feel slow.</p><p><strong>Preemptive Response</strong> fixes this by letting voice agents start understanding and preparing responses while the user is still speaking.</p><h2 id="what-is-preemptive-response">What Is Preemptive Response?</h2><p>Preemptive Response is a capability that allows a voice agent to start understanding a user’s intent <strong>before they finish speaking</strong>.</p><p>As the user talks, the Speech-to-Text engine emits <strong>partial transcripts in real time</strong>. 
These partial results are enough for the agent to begin reasoning early, instead of waiting for the full sentence and a moment of silence.</p><p>The goal isn’t to interrupt the user; it’s to be <em>ready</em> at the right moment.</p><h2 id="how-preemptive-response-works">How Preemptive Response Works</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/12/preemptive-response-1.png" class="kg-image" alt="How to enable preemptive response in AI Voice Agents" loading="lazy" width="2766" height="1571"/></figure><ul><li>User audio is streamed to the STT, which generates partial transcripts.</li><li>These partial transcripts are immediately sent to the LLM to enable preemptive (early) responses.</li><li>The LLM output is then passed to the TTS to generate the spoken response.</li></ul><h2 id="enabling-preemptive-response">Enabling Preemptive Response</h2><p>To enable this feature, set the&nbsp;<code>enable_preemptive_generation</code>&nbsp;flag to&nbsp;<code>True</code>&nbsp;when initializing your STT plugin (e.g.,&nbsp;<code>DeepgramSTTV2</code>).</p><pre><code class="language-python">from videosdk.plugins.deepgram import DeepgramSTTV2

stt = DeepgramSTTV2(
    enable_preemptive_generation=True
)</code></pre><p>Once enabled, partial transcripts start flowing automatically and your agent begins preparing responses earlier by design.</p><blockquote>Currently, preemptive response generation is limited to Deepgram’s STT implementation and is available only in the Flux model.</blockquote><h2 id="implementation">Implementation</h2><h3 id="prerequisites">Prerequisites</h3><ul><li>A VideoSDK authentication token (generate one from&nbsp;<a href="https://app.videosdk.live/" rel="noopener noreferrer">app.videosdk.live</a>); follow the guide to&nbsp;<a href="https://docs.videosdk.live/ai_agents/authentication-and-token">generate a VideoSDK token</a></li><li>A VideoSDK meeting ID (you can generate one using the&nbsp;<a href="https://docs.videosdk.live/api-reference/realtime-communication/create-room" rel="noopener noreferrer">Create Room API</a>)</li><li><a href="https://www.python.org/downloads/release/python-3120/" rel="noreferrer">Python 3.12 or higher</a></li></ul><h3 id="install-dependencies">Install dependencies</h3><pre><code class="language-python">pip install "videosdk-agents[deepgram,openai,elevenlabs,silero,turn_detector]"</code></pre><h3 id="set-api-keys-in-env">Set API Keys in .env</h3><pre><code class="language-python">DEEPGRAM_API_KEY = "Your Deepgram API Key"
OPENAI_API_KEY = "Your OpenAI API Key"
ELEVENLABS_API_KEY = "Your ElevenLabs API Key"
VIDEOSDK_AUTH_TOKEN = "VideoSDK Auth token"</code></pre><div class="kg-card kg-callout-card kg-callout-card-yellow"><div class="kg-callout-text">Get API keys from&nbsp;<a href="https://console.deepgram.com/" target="_blank" rel="noopener noreferrer">Deepgram ↗</a>,&nbsp;<a href="https://platform.openai.com/api-keys" target="_blank" rel="noopener noreferrer">OpenAI ↗</a>,&nbsp;<a href="https://elevenlabs.io/app/settings/api-keys" target="_blank" rel="noopener noreferrer">ElevenLabs ↗</a>&nbsp;&amp;&nbsp;the&nbsp;<a href="https://app.videosdk.live/api-keys" target="_blank" rel="noopener noreferrer">VideoSDK Dashboard ↗</a>, and follow the guide to&nbsp;<a href="https://docs.videosdk.live/ai_agents/authentication-and-token" target="_blank" rel="noopener noreferrer">generate a VideoSDK token</a></div></div><h2 id="full-working-example">Full Working Example</h2><pre><code class="language-python">import asyncio
import os
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTTV2
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS

# Pre-download the Turn Detector model to avoid delays during startup
pre_download_model()

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions="You are a helpful voice assistant that can answer questions and help with tasks.")

    async def on_enter(self):
        await self.session.say("Hello! How can I help you today?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # 1. Create the agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # 2. Define the pipeline with Preemptive Generation enabled
    pipeline = CascadingPipeline(
        stt=DeepgramSTTV2(
            model="flux-general-en",
            enable_preemptive_generation=True  # Enable low-latency partials
        ),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )
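    # How the pieces interact (illustrative, based on the flow described above):
    # Deepgram's Flux model emits partial transcripts as audio arrives; with
    # enable_preemptive_generation=True the pipeline forwards those partials to
    # the LLM early, so a draft reply is usually ready by the time the turn
    # detector decides the user has finished speaking.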

    # 3. Initialize the session
    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running
        await asyncio.Event().wait()
    finally:
        # Clean up resources
        await session.close()
        await context.shutdown()

def make_context() -&gt; JobContext:
    room_options = RoomOptions(
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()</code></pre><h4 id="run-the-python-script">Run the Python Script</h4><pre><code class="language-python">python main.py</code></pre><p>You can also run the script in console mode:</p><pre><code class="language-python">python main.py console</code></pre><p>With Preemptive Response enabled, the voice agent no longer waits for speech to end. It begins processing intent as audio arrives, reducing latency and keeping conversations natural. The result is a responsive, end-to-end voice experience that feels fluid in real time.</p><h2 id="next-steps">Next Steps</h2><ol><li>Explore the&nbsp;<a href="https://docs.videosdk.live/ai_agents/core-components/preemptive-response" rel="noreferrer">Preemptive Response docs</a>&nbsp;for more information.</li><li>Learn how to&nbsp;<a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li>Visit <a href="https://developers.deepgram.com/docs/flux/quickstart" rel="noreferrer">Deepgram's Flux documentation</a>.</li></ol>]]></content:encoded></item><item><title><![CDATA[Product Updates - November 2025 : Agent Runtime, WHIP/WHEP and Realtime Data Store]]></title><description><![CDATA[This month, we're changing the game for AI development. Introducing the Agent Runtime, our new no-code/low-code agent builder! Plus, WHIP/WHEP docs, sync data with the new Realtime Store, and check out our full suite of new quickstart guides.
]]></description><link>http://www.videosdk.live/blog/product-updates-november-2025</link><guid isPermaLink="false">6926a5ab64df6f042b4d4952</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 28 Nov 2025 14:48:27 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/12/1-1.jpg" medium="image"/><content:encoded><![CDATA[<blockquote>Welcome to the November edition of the VideoSDK Monthly Updates! This month, we’re changing the game for AI development with the launch of our <strong>Agent Runtime, a No-Code/Low-Code Agent Builder</strong> on the dashboard, which also lets you add a <strong>Knowledge Base</strong> from the dashboard. We're also giving your AI agents&nbsp;<strong>vision capabilities</strong>, now with <strong>OpenAI Realtime</strong>, and rolling out the&nbsp;<strong>Realtime Store</strong>, a powerful new data synchronization feature across all major SDKs. Let’s dive into the biggest updates!</blockquote><h2 id="build-ai-agents-runtime-with-no-code"><strong>Build AI Agents with Agent Runtime, No Code Required</strong></h2><img src="https://assets.videosdk.live/static-assets/ghost/2025/12/1-1.jpg" alt="Product Updates - November 2025 : Agent Runtime, WHIP/WHEP and Realtime Data Store"/><p>We are thrilled to announce the launch of the&nbsp;<strong>Agent Runtime</strong>, our new no-code/low-code AI agent builder, now live on your developer dashboard!</p><p>Building, testing, and deploying powerful conversational AI has never been easier. The Agent Runtime allows you to create and configure complex AI agents through an intuitive visual interface, dramatically reducing development time and making advanced AI accessible to everyone. 
Connect your favorite LLMs, configure responses, and deploy agents without writing a single line of code.</p><h3 id="connect-a-knowledge-base-from-the-dashboard"><strong>Connect a Knowledge Base from the Dashboard</strong></h3><p>Making your agents smarter is now just a few clicks away. You can now&nbsp;<strong>upload documents and create a knowledge base directly from the dashboard</strong>. Our system automatically handles the vectorization and retrieval (RAG), allowing your no-code agents to provide accurate answers based on your own custom data. It's the easiest way to build a specialized, expert AI agent.</p><ul><li><a href="https://www.google.com/url?sa=E&amp;q=link-to-dashboard"><strong>Start Building with the Agent Runtime Today!</strong></a></li><li><a href="https://docs.videosdk.live/ai_agents/agent-runtime/build-agent" rel="noreferrer">Check out the docs here</a></li></ul><h2 id="new-feature-spotlight-the-realtime-store"><strong>New Feature Spotlight: The Realtime Store</strong></h2><p>We're excited to introduce the&nbsp;<strong>Realtime Store</strong>, a synchronized key-value database built directly into your meeting sessions. 
This powerful feature acts as a shared data layer, allowing you to store, update, and observe custom data across all participants in real time.</p><p>It's perfect for building collaborative features like shared whiteboards, live polls, synchronized presentations, or tracking any shared state within your application.</p><p><strong>The Realtime Store is now available on:</strong></p><ul><li><strong>JavaScript SDK</strong>&nbsp;(&gt;v0.3.10)</li><li><strong>React SDK</strong>&nbsp;(&gt;v0.4.10) </li><li><strong>React Native SDK</strong>&nbsp;(&gt;v0.5.0)</li><li><strong>iOS SDK</strong>&nbsp;(&gt;v2.2.7)</li><li><strong>Android SDK</strong>&nbsp;(&gt;v1.1.0)</li><li>=&gt; <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/realtimestore" rel="noreferrer">View Docs</a></li></ul><h2 id="ai-agents-vision-updates-a-simpler-lifecycle"><strong>AI Agents Vision Updates &amp; A Simpler Lifecycle</strong></h2><p>We are fully invested in our Agents SDK and have shipped tons of upgrades this month, making agents smarter, more capable, and easier to manage.</p><ul><li><strong>Enhanced Vision Capabilities (Agents SDK v0.0.42):</strong>&nbsp;<ul><li>Vision in the cascading pipeline! With the new&nbsp;<strong>capture_frames()</strong>&nbsp;method, you can pass video frames directly into your agent's&nbsp;<strong>reply()</strong>&nbsp;method. This unlocks powerful new use cases like real-time visual analysis, describing on-screen activity, and more in either pipeline. We've even added support for continuous vision frame processing in our OpenAI and Gemini Live plugins.</li></ul></li><li><strong>Simplified Session Lifecycle (Agents SDK v0.0.44):</strong>&nbsp;<ul><li>We've made managing agent sessions effortless. 
You can now <strong>use&nbsp;session.start(wait_for_participant=True, run_until_shutdown=True)</strong>&nbsp;to handle the entire lifecycle with a single line of code, while still retaining full manual control if you need it.</li></ul></li><li><strong>An Expanding AI Ecosystem (Agents SDK v0.0.42-46):</strong>&nbsp;<ul><li>We've added support for&nbsp;<strong>Deepgram STT V2</strong>,&nbsp;<strong>ElevenLabs Scribe V2</strong>, and improved our plugins for&nbsp;<strong>Sarvam AI</strong>,&nbsp;<strong>Gemini Live</strong>, and&nbsp;<strong>OpenAI</strong>.</li></ul></li><li><a href="https://github.com/videosdk-live/agents/releases" rel="noreferrer"><strong>View full Agents SDK changelog</strong></a></li></ul><h2 id="a-better-developer-experience"><strong>A Better Developer Experience</strong></h2><p>We’ve shipped a series of quality-of-life improvements to give you deeper observability and control.</p><ul><li><strong>Deeper Insight into "Why":</strong>&nbsp;<ul><li>We’ve added&nbsp;reason&nbsp;and&nbsp;code&nbsp;parameters to the&nbsp;meeting-left&nbsp;and&nbsp;participant-left&nbsp;events across our&nbsp;<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer"><strong>JS</strong></a><strong>, </strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer"><strong>React</strong></a><strong>, </strong><a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer"><strong>React Native</strong></a><strong>, </strong><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer"><strong>Android</strong></a><strong>, and </strong><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer"><strong>Flutter</strong></a><strong> SDKs</strong>. 
You'll now know exactly why a user's session ended.</li></ul></li><li><strong>Enhanced Observability:</strong>&nbsp;<ul><li>We've made all trace messages user-facing and added a&nbsp;<strong>sessionId</strong>&nbsp;parameter across our&nbsp;<strong>JS, React, and React Native SDKs</strong>, enabling you to fetch and analyze traces directly from the dashboard.</li></ul></li><li><strong>New WHIP/WHEP Documentation:</strong>&nbsp;<ul><li>For developers working with standardized ingest and egress protocols, our comprehensive WHIP/WHEP documentation is now live!&nbsp;=&gt; <a href="https://docs.videosdk.live/javascript/guide/interactive-live-streaming/whip-whep-ils" rel="noreferrer"><strong>View the Docs</strong></a></li></ul></li></ul><h2 id="core-sdk-enhancements"><strong>Core SDK Enhancements</strong></h2><ul><li><strong>iOS SDK (v2.2.9 &amp; v2.3.0):</strong>&nbsp;<ul><li>Now includes advanced video track optimization with&nbsp;<strong>BitrateMode</strong>&nbsp;and&nbsp;<strong>maxLayer</strong>&nbsp;parameters for fine-tuning quality and bandwidth. =&gt; <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer">View Changelogs</a></li></ul></li><li><strong>Flutter SDK (v3.2.0):</strong>&nbsp;<ul><li>You can now&nbsp;<strong>pause()</strong>&nbsp;and&nbsp;<strong>resume()</strong>&nbsp;active streams, giving you more control over media playback. =&gt; <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer">View Changelogs</a></li></ul></li><li><strong>Android SDK (v1.0.0):</strong>&nbsp;<ul><li>Added the&nbsp;<strong>OldMessagesReceived</strong>&nbsp;event to retrieve persisted PubSub messages and set the default consumer video quality to high. 
=&gt; <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer">View Changelogs</a></li></ul></li><li><strong>React Native SDK (v0.5.0):</strong>&nbsp;<ul><li>This massive release includes Beta support for&nbsp;<strong>Adaptive Subscriptions</strong>, new quality limitation and stream state events, and the&nbsp;useStream&nbsp;hook for easier stream management. =&gt; <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer">View Changelogs</a></li></ul></li></ul><h2 id="%F0%9F%93%9A-new-content-resources">📚 New Content &amp; Resources</h2><h3 id="new-platform-specific-quickstart-guides-for-agents-runtime"><strong>New Platform-Specific Quickstart Guides for Agent Runtime</strong></h3><ul><li>Getting started with VideoSDK Agents has never been easier. With the release of Agent Runtime, we've also published a full suite of new quick-start agent integration guides tailored to your favourite platform, showing you how to connect your no-code agent from within your meeting room. Whether you're building for web or mobile, we've got you covered.</li></ul><p>Find your guide here:</p>
<!--kg-card-begin: html-->
<style>
    .custom-quickstart-table {
        width: 100%;
        border-collapse: collapse;
        text-align: left;
    }
    .custom-quickstart-table th, .custom-quickstart-table td {
        padding: 12px 15px;
        border-bottom: 1px solid #333; /* Dark theme border */
    }
    .custom-quickstart-table .section-header {
        background-color: #1a1a1a; /* Slightly different background for header */
        font-weight: bold;
        color: #fff;
    }
    .custom-quickstart-table a {
        color: #a585f7; /* A link color that might match your theme */
        text-decoration: none;
    }
     .custom-quickstart-table a:hover {
        text-decoration: underline;
    }
</style>

<table class="custom-quickstart-table">
    <thead>
        <tr>
            <th>Name</th>
            <th>Link</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td colspan="2" class="section-header">Web</td>
        </tr>
        <tr>
            <td>JavaScript Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/web-integrations/with-javascript">[Doc]</a></td>
        </tr>
        <tr>
            <td>React Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/web-integrations/with-react">[Doc]</a></td>
        </tr>
        <tr>
            <td colspan="2" class="section-header">Mobile</td>
        </tr>
        <tr>
            <td>React Native Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/mobile-integrations/with-react-native">[Doc]</a></td>
        </tr>
        <tr>
            <td>iOS Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/mobile-integrations/with-ios">[Doc]</a></td>
        </tr>
        <tr>
            <td>Flutter Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/agent-runtime/connect-agent/mobile-integrations/with-flutter">[Doc]</a></td>
        </tr>
    </tbody>
</table>
<!--kg-card-end: html-->
<h3 id="guide-integrate-with-traditional-telephony-using-sip-connect"><strong>Guide: Integrate with Traditional Telephony using SIP-Connect</strong></h3><p>Bridge the gap between your digital meetings and the world of traditional telephony. Our new guide shows you how to use SIP-Connect to enable dial-in participants and connect your application with any phone line.</p><ul><li>📖 <a href="https://docs.videosdk.live/telephony/sip-connect" rel="noreferrer">Read the SIP-Connect Guide</a></li></ul><h3 id="guide-ai-voice-agent-observability"><strong>Guide: AI Voice Agent Observability</strong></h3><p>Dive into our new practical guide on debugging latency and improving your AI agents' performance. Learn how to leverage VideoSDK traces for a deeper understanding of your agent's behavior.</p><ul><li>📖 <a href="https://dev.to/chaitrali_kakde/a-practical-guide-to-ai-voice-agent-observability-debugging-latency-with-videosdk-traces-4kmj" rel="noreferrer">Read the Observability Guide on Dev.to</a></li></ul><p/><h2 id="%E2%9C%A8-community-spotlight"><strong>✨ Community Spotlight</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/12/image-1.png" class="kg-image" alt="Product Updates - November 2025 : Agent Runtime, WHIP/WHEP and Realtime Data Store" loading="lazy" width="1886" height="702"/></figure><blockquote><a href="https://www.videosdk.live/customers/coderschool"><em>Explore how CoderSchool achieves student engagement growth of 100,000+ sessions per month and builds huge tech talent network in Southeast Asia.</em></a></blockquote><h2 id="sdk-sketches">SDK Sketches</h2><blockquote>This month's sketch: Using Agent Runtime vs Traditional Coding </blockquote><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/12/image-2.png" class="kg-image" alt="Product Updates - November 2025 : Agent Runtime, WHIP/WHEP and Realtime Data Store" loading="lazy" width="2166" 
height="1276"/></figure><p><em>That’s a wrap for November! From our new no-code builder to realtime store across SDKs, we can't wait to see what you build with these new tools.</em></p><p>We'd love to hear your feedback! If you have any questions, suggestions, or issues, please don't hesitate to contact our support team.</p><ul><li><a href="https://discord.com/invite/Gpmj6eCq5u" rel="noreferrer"><strong>Join our Discord Community</strong></a></li><li><a href="https://www.videosdk.live/contact" rel="noreferrer"><strong>Contact our support team</strong></a></li><li><a href="https://www.youtube.com/@VideoSDK" rel="noreferrer"><strong>Explore Youtube Videos</strong></a></li><li><a href="https://x.com/Video_SDK" rel="noreferrer"><strong>Follow us on Twitter/X</strong></a></li></ul><p>➡️ New to VideoSDK? <a href="https://www.videosdk.live/signup">Sign up now</a> and get <strong><em>10,000 free minutes</em></strong> to start building amazing audio &amp; video experiences!</p><p/>]]></content:encoded></item><item><title><![CDATA[Product Updates - October 2025 : Supercharged AI Agents, New SDK Features & More]]></title><description><![CDATA[This month's update is all about AI! We're unveiling Namo, our powerful in-house turn detection model for truly natural conversations, and a new WhatsApp AI Voice Agent Quickstart. Plus, get the details on major Android video control features and new React monitoring hooks.
]]></description><link>http://www.videosdk.live/blog/product-updates-october-2025</link><guid isPermaLink="false">6904956a64df6f042b4d477c</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 31 Oct 2025 14:16:54 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/11/Monthly-Image.png" medium="image"/><content:encoded><![CDATA[<blockquote>Welcome to the VideoSDK Monthly updates, your all-in-one recap of our latest releases and platform enhancements! This month was all about making our AI agents smarter and more responsive, giving you deeper control over your media streams, and improving stability across all our SDKs. We've launched a massive evolution for the&nbsp;<strong>VideoSDK Agents SDK</strong>, a brand new&nbsp;<strong>WhatsApp Agent Quickstart</strong>, and rolled out powerful new features for our&nbsp;<strong>Android and React SDKs</strong>. Let's get into the details!</blockquote><h2 id="a-major-leap-forward-for-ai-agents"><strong>A Major Leap Forward for AI Agents</strong></h2><img src="https://assets.videosdk.live/static-assets/ghost/2025/11/Monthly-Image.png" alt="Product Updates - October 2025 : Supercharged AI Agents, New SDK Features & More"/><p>This month, our open-source VideoSDK Agents SDK received a series of transformative updates, focusing on making AI-powered voice interactions more natural, stable, and efficient. At the heart of this evolution is&nbsp;<strong>Namo</strong>, our new proprietary multilingual turn detection model.</p><h3 id="introducing-namo-our-in-house-model-for-perfect-turn-taking"><strong>Introducing Namo: Our In-House Model for Perfect Turn-Taking</strong></h3><p>A key part of making AI interactions feel human is knowing&nbsp;exactly&nbsp;when to speak. 
To solve this complex challenge, we're proud to introduce&nbsp;<strong>Namo-Turn-Detector</strong>, our in-house turn detection model, developed right here at VideoSDK.</p><p>While standard Voice Activity Detection (VAD) can tell you if someone is talking, Namo goes a step further. It's specifically trained to understand the nuances of conversation, accurately determining the precise moment a user has finished their thought and is ready for the agent to respond.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/10/5.png" class="kg-image" alt="Product Updates - October 2025 : Supercharged AI Agents, New SDK Features & More" loading="lazy" width="3200" height="1800"/></figure><p>This leads to:</p><ul><li><strong>Seamless Turn-Taking:</strong>&nbsp;Eliminates awkward pauses and prevents the agent from interrupting the user mid-sentence.</li><li><strong>Higher Accuracy:</strong>&nbsp;Reduces errors in detecting the end of speech, making the conversation more reliable.</li><li><strong>A Truly Natural Flow:</strong>&nbsp;Creates interactions that feel less robotic and more like talking to a real person.</li><li><a href="https://docs.videosdk.live/ai_agents/core-components/turn-detection-and-vad" rel="noreferrer"><strong>Explore Docs</strong></a><strong> </strong></li><li><a href="https://huggingface.co/collections/videosdk-live/namo-turn-detector-v1" rel="noreferrer"><strong>Checkout all the languages supported and upvote/like if you find it worthwhile</strong></a></li></ul><h2 id="%F0%9F%9A%80-sdk-releases-updates"><strong>🚀 SDK Releases &amp; Updates</strong></h2><p>Here’s a full breakdown of all the SDK releases and enhancements from the past month.</p><h4 id="agents-sdk-v0037v0041">Agents SDK (v0.0.37 - v0.0.41)</h4><p>Beyond the conversational improvements above, we've packed the Agents SDK with powerful new tools for developers.&nbsp;</p><ul><li><strong>UtteranceHandle&nbsp;for Lifecycle 
Management:</strong>&nbsp;Gain granular control over the lifecycle of an agent's speech. You can now track completion, handle user interruptions, and&nbsp;await&nbsp;utterances to prevent overlapping TTS.</li><li><strong>Enhanced Background Audio:</strong>&nbsp;Create more immersive agent experiences by playing background audio (e.g., thinking sounds, music) during agent interactions with new methods like&nbsp;<strong>play_background_audio()</strong>.</li><li><strong>CometAPI Plugin Integration:</strong>&nbsp;Simplify your AI stack by using multiple STT, LLM, and TTS services from different providers with a single API key.</li><li><strong>Improved</strong>:&nbsp;We've also enhanced our plugin ecosystem with support for the <a href="https://docs.videosdk.live/ai_agents/plugins/namo-turn-detector" rel="noreferrer">Namo TurnDetector model</a>, <a href="https://docs.videosdk.live/ai_agents/plugins/tts/deepgram" rel="noreferrer">Deepgram TTS</a>, a new <a href="https://docs.videosdk.live/ai_agents/plugins/tts/eleven-labs" rel="noreferrer">ElevenLabs TTS</a> plugin, and full integration with <a href="https://docs.videosdk.live/ai_agents/plugins/realtime/azure-voice-live" rel="noreferrer">Azure's real-time voice</a> services.</li><li><a href="https://github.com/videosdk-live/agents/releases" rel="noreferrer"><strong>View full Agents SDK changelog</strong></a></li></ul><h4 id="android-sdk-v060"><strong>Android SDK (v0.6.0)</strong></h4><p>This release introduces advanced video track optimization features, giving you greater control over quality and bandwidth.</p><ul><li><strong>Video Bitrate Control (BitrateMode):</strong>&nbsp;Easily manage video quality by choosing between three modes:&nbsp;<strong>BANDWIDTH_OPTIMIZED</strong>,&nbsp;<strong>BALANCED</strong>&nbsp;(default), and&nbsp;<strong>HIGH_QUALITY</strong>.</li><li><strong>Simulcast Layer Control (maxLayer):</strong>&nbsp;Specify the maximum number of simulcast layers to publish for a video track, allowing for 
fine-tuned performance.</li><li><strong>Improved:</strong>&nbsp;The&nbsp;<strong>getVideoStats()</strong>&nbsp;method now returns a&nbsp;JsonArray, providing detailed statistics for all produced video layers.</li><li><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer"><strong>View full Android SDK changelog</strong></a></li></ul><h4 id="react-sdk-v043v049"><strong>React SDK (v0.4.3 - v0.4.9)</strong></h4><p>Our React SDK received a host of new features focused on real-time monitoring and easier stream management.</p><ul><li><strong>onQualityLimitation&nbsp;Event:</strong>&nbsp;Proactively monitor local call quality by detecting bandwidth limits, network congestion, or CPU limitations in real time.</li><li><strong>useStream&nbsp;Hook:</strong>&nbsp;Get direct access to all methods and properties of media streams within your components for simplified stream management.</li><li><strong>onStreamStateChanged&nbsp;Event:</strong>&nbsp;Better monitor the stream health of remote participants by detecting&nbsp;freeze,&nbsp;stuck, or&nbsp;recovery&nbsp;events.</li><li><strong>Improved:</strong>&nbsp;The&nbsp;VideoPlayer&nbsp;component now includes a&nbsp;muted&nbsp;attribute and supports passing a custom&nbsp;ref.</li><li><strong>Fixed:</strong>&nbsp;We resolved a memory leak in&nbsp;EventEmitter&nbsp;and fixed a video orientation issue on iOS browsers.</li><li><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer"><strong>View full React SDK changelog</strong></a></li></ul><h4 id="javascript-sdk-v036v037"><strong>JavaScript SDK (v0.3.6 - v0.3.7)</strong></h4><ul><li><strong>Improved:</strong>&nbsp;Enabled simulcast layers for all custom tracks.</li><li><strong>Fixed:&nbsp;</strong>Resolved a default camera issue in React Native, fixed a bug that created multiple webcam producers, and corrected a video orientation issue on iOS 
browsers.</li><li><strong>⚠Deprecated:</strong>&nbsp;The&nbsp;getNetworkStats&nbsp;method has been deprecated.</li><li><a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer"><strong>View full JS SDK changelog</strong></a></li></ul><h4 id="react-native-sdk-v041v042"><strong>React Native SDK (v0.4.1 - v0.4.2)</strong></h4><ul><li><strong>Improved:</strong>&nbsp;The SDK is now compatible with&nbsp;<strong>React Native 0.82+</strong>&nbsp;and&nbsp;<strong>Expo SDK 54+</strong>.</li><li><strong>Fixed:</strong>&nbsp;Resolved issues with the&nbsp;defaultCamera&nbsp;parameter and&nbsp;changeWebcam()&nbsp;method behavior.</li><li><a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer"><strong>View full React Native SDK changelog</strong></a></li></ul><h4 id="flutter-sdk-v310"><strong>Flutter SDK (v3.1.0)</strong>&nbsp;</h4><ul><li><strong>Fixed: </strong>Addressed an issue where the microphone would stop working after a device was removed and fixed a bug preventing stats from displaying correctly on the dashboard.</li><li><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/release-notes" rel="noreferrer"><strong>View full Flutter SDK changelog</strong></a></li></ul><h2 id="%F0%9F%94%A7-platform-dashboard-updates">🔧 Platform &amp; Dashboard Updates</h2><h4 id="a-brand-new-dashboard-experience"><strong>A Brand New Dashboard Experience!</strong></h4><p>We've rolled out a redesigned developer dashboard! 
The new interface is cleaner, more intuitive, and makes it easier than ever to navigate your projects, monitor usage, and access your API keys.&nbsp;<a href="https://app.videosdk.live/" rel="noreferrer"><strong>Log in to check it out!</strong></a></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/10/dash.png" class="kg-image" alt="Product Updates - October 2025 : Supercharged AI Agents, New SDK Features & More" loading="lazy" width="5760" height="3729"/></figure><h2 id="%F0%9F%93%9A-new-content-resources">📚 New Content &amp; Resources</h2><h4 id="new-platform-specific-quickstart-guides-for-agents"><strong>New Platform-Specific Quickstart Guides for Agents</strong></h4><ul><li>Getting started with VideoSDK Agents has never been easier. We've published a full suite of new quick-start agent integration guides tailored to your favourite platform. Whether you're building for web, mobile, or even IoT, we've got you covered.</li></ul><p>Find your guide here:</p>
<!--kg-card-begin: html-->
<style>
    .custom-quickstart-table {
        width: 100%;
        border-collapse: collapse;
        text-align: left;
    }
    .custom-quickstart-table th, .custom-quickstart-table td {
        padding: 12px 15px;
        border-bottom: 1px solid #333; /* Dark theme border */
    }
    .custom-quickstart-table .section-header {
        background-color: #1a1a1a; /* Slightly different background for header */
        font-weight: bold;
        color: #fff;
    }
    .custom-quickstart-table a {
        color: #a585f7; /* A link color that might match your theme */
        text-decoration: none;
    }
     .custom-quickstart-table a:hover {
        text-decoration: underline;
    }
</style>

<table class="custom-quickstart-table">
    <thead>
        <tr>
            <th>Name</th>
            <th>Link</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td colspan="2" class="section-header">Web</td>
        </tr>
        <tr>
            <td>JavaScript Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/ai-agent-quickstart-js">[Doc]</a></td>
        </tr>
        <tr>
            <td>React Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/ai-agent-quickstart-react">[Doc]</a></td>
        </tr>
        <tr>
            <td colspan="2" class="section-header">Mobile</td>
        </tr>
        <tr>
            <td>React Native Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/ai-agent-quickstart-react-native">[Doc]</a></td>
        </tr>
        <tr>
            <td>iOS Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/ai-agent-quickstart-ios">[Doc]</a></td>
        </tr>
        <tr>
            <td>Flutter Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/ai-agent-quickstart-flutter">[Doc]</a></td>
        </tr>
        <tr>
            <td>Unity Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/ai-agent-quickstart-unity">[Doc]</a></td>
        </tr>
        <tr>
            <td colspan="2" class="section-header">Physical AI</td>
        </tr>
        <tr>
            <td>IoT Quickstart</td>
            <td><a href="https://docs.videosdk.live/ai_agents/ai-agent-quickstart-iot">[Doc]</a></td>
        </tr>
    </tbody>
</table>
<!--kg-card-end: html-->
<h4 id="guide-build-an-ai-voice-agent-with-a-rag-pipeline"><strong>Guide: Build an AI Voice Agent with a RAG Pipeline</strong></h4><ul><li>Take your AI agents to the next level by connecting them to your own knowledge base. Our latest guide walks you through building a sophisticated AI voice agent using the Retrieval-Augmented Generation (RAG) pipeline, allowing it to answer questions based on your custom documents and data.</li><li>📖&nbsp;<a href="https://www.videosdk.live/blog/build-an-ai-voice-agent-using-the-rag-pipeline-and-videosdk"><strong>Read the RAG pipeline guide</strong></a></li></ul><h4 id="guide-handle-speech-with-the-namo-turn-detection-model"><strong>Guide: Handle Speech with the Namo Turn Detection Model</strong></h4><ul><li>Ready to put our powerful new Namo model into action? This practical guide provides the code and step-by-step instructions you need to implement Namo in your own AI voice agents for seamless, human-like turn-taking.</li><li>📖&nbsp;<a href="https://www.videosdk.live/blog/handle-speech-in-ai-voice-agents-with-namo-turn-detection-model"><strong>Read the Namo implementation guide</strong></a></li></ul><h4 id="the-whatsapp-ai-voice-agent-quickstart"><strong>The WhatsApp AI Voice Agent Quickstart</strong></h4><ul><li>You can now build and deploy AI voice agents that answer WhatsApp Business calls instantly. Our new Quickstart Guide leverages direct SIP integration with the Meta Business Platform, removing the need for third-party telephony and enabling fast, seamless conversational automation.</li><li>📖&nbsp;<a href="https://docs.videosdk.live/ai_agents/whatsapp-voice-agent-quick-start" rel="noreferrer"><strong>Get Started with the WhatsApp Quickstart</strong></a></li></ul><h2 id="%F0%9F%94%AE-whats-next">🔮 What's Next?</h2><p>This is just the beginning!
We're already hard at work on our next set of features, including expanding our AI capabilities and improving SDK performance across the board. Stay tuned for more updates next month!</p><h2 id="%E2%9C%A8-community-spotlight"><strong>✨ Community Spotlight</strong></h2><blockquote>Hear how the team at&nbsp;<strong>Fi money</strong>&nbsp;is enhancing their customer experience using VideoSDK.</blockquote>
<!--kg-card-begin: html-->
<iframe src="https://www.linkedin.com/embed/feed/update/urn:li:ugcPost:7379037677221912576?collapsed=1" height="551" width="504" frameborder="0" allowfullscreen="" title="Embedded post"></iframe>
<!--kg-card-end: html-->
<h2 id="sdk-sketches">SDK Sketches</h2><blockquote>This month's sketch: The difference between a frustrating, robotic conversation and one powered by Namo.</blockquote><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/11/8.png" class="kg-image" alt="Product Updates - October 2025 : Supercharged AI Agents, New SDK Features & More" loading="lazy" width="968" height="452"/></figure><p><em>That’s a wrap for this month! Upgrade your SDKs to the latest versions to take advantage of all these new features and improvements.</em></p><p>We'd love to hear your feedback! If you have any questions, suggestions, or issues, please don't hesitate to contact our support team.</p><ul><li><a href="https://discord.com/invite/Gpmj6eCq5u" rel="noreferrer"><strong>Join our Discord Community</strong></a></li><li><a href="https://www.videosdk.live/contact" rel="noreferrer"><strong>Contact our support team</strong></a></li><li><a href="https://www.youtube.com/@VideoSDK" rel="noreferrer"><strong>Explore YouTube Videos</strong></a></li><li><a href="https://x.com/Video_SDK" rel="noreferrer"><strong>Follow us on Twitter/X</strong></a></li></ul><p>➡️ New to VideoSDK?
<a href="https://www.videosdk.live/signup">Sign up now</a> and get <strong><em>10,000 free minutes</em></strong> to start building amazing audio &amp; video experiences!</p><p/>]]></content:encoded></item><item><title><![CDATA[How to handle speech in AI Voice Agents with Namo Turn Detection Model]]></title><description><![CDATA[Learn how to make your AI Voice Agents sound natural and interruption-aware using the NAMO Turn Detection model - a semantic, transformer-based system that replaces silence timers with true speech understanding.]]></description><link>https://www.videosdk.live/blog/handle-speech-in-ai-voice-agents-with-namo-turn-detection-model</link><guid isPermaLink="false">69047e2864df6f042b4d46ec</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><category><![CDATA[turn detection]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 31 Oct 2025 10:45:29 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/10/7.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/10/7.png" alt="How to handle speech in AI Voice Agents with Namo Turn Detection Model"/><p>When it comes to building conversational AI, the real challenge isn’t what your AI says; it’s when it says it. Timing defines whether your voice agent feels natural and human-like or robotic and awkward.</p><p>In real conversations, we overlap, pause, and jump in mid-sentence. Yet humans almost never talk over each other by accident. How? Because we’re constantly predicting intent. Traditional voice agents don’t have that instinct. They wait for silence - literally.
That’s why most bots either interrupt you too soon or sit there awkwardly, unsure if you’re done talking.</p><p>To fix this, <strong>VideoSDK</strong> built <strong>Namo-v1</strong> - an open-source, high-performance <strong>turn detection model</strong> that understands meaning, not just silence.</p><h2 id="from-silence-detection-to-speech-understanding">From Silence Detection to Speech Understanding</h2><p>Most voice agents rely on a <strong>silence timer</strong> to decide when you’ve finished speaking. For instance, after 800 ms of quiet, the bot assumes you’re done and replies. But what if you were just thinking, or hesitating before your next word?</p><p>That’s where <strong>Namo</strong> changes the game. Instead of reacting to quiet moments, it uses <strong>semantic understanding</strong> to detect intent and conversation flow.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/10/namo_v1_turn_detection_12e042c6ec.png" class="kg-image" alt="How to handle speech in AI Voice Agents with Namo Turn Detection Model" loading="lazy" width="4338" height="1767"/></figure><p>The diagram above illustrates the <strong>VideoSDK Namo-V1 Turn Detection architecture</strong>. It shows how a user’s speech (captured in the VideoSDK Room) passes through <strong>Speech-to-Text (STT)</strong> and <strong>ChatContext</strong>, which then interfaces with the <strong>Turn Detector</strong>.</p><h2 id="say-reply-and-interrupt">Say, Reply and Interrupt</h2><p>Every conversation can be broken down into three key speech events:</p><ol><li><strong>Say</strong> - The agent speaks.</li><li><strong>Reply</strong> - The agent listens and responds when the user is done.</li><li><strong>Interrupt</strong> - The agent gracefully stops talking if the user jumps in mid-sentence.</li></ol><p>The real test for these voice agents is <strong>interruption handling</strong>, or what we call barge-in.
That moment when the user says, “Wait no, that’s not what I meant…” right in the middle of the AI’s sentence. Most systems panic there. They either keep talking awkwardly or stop too late.</p><p>But with <strong>VAD + Namo</strong>, your agent can <strong>detect the user’s intent mid-response</strong>, immediately pause its speech output, and switch to listening mode, just like in a real human conversation.</p><p>Watch this YouTube video: <a href="https://youtu.be/IL0OSOD38bo">https://youtu.be/IL0OSOD38bo</a></p><h2 id="implementation-combining-vad-and-namo">Implementation: Combining VAD and Namo</h2><h3 id="1-voice-activity-detection-vad">1. <strong>Voice Activity Detection (VAD)</strong></h3><p>VAD is your first layer. It detects when speech is happening, separating human voice from background noise.</p><pre><code class="language-python">from videosdk.plugins.silero import SileroVAD

# Configure VAD to detect speech activity
vad = SileroVAD(
    threshold=0.5,                    # Sensitivity to speech (0.3-0.8)
    min_speech_duration=0.1,          # Ignore very brief sounds
    min_silence_duration=0.75         # Wait time before considering speech ended
)
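
# For contrast: the naive silence-timer heuristic that semantic turn
# detection replaces. It treats any pause longer than a fixed budget as
# end-of-turn, regardless of meaning (illustrative sketch, not a VideoSDK API):
def naive_turn_ended(ms_since_last_speech, budget_ms=800.0):
    return ms_since_last_speech >= budget_ms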
</code></pre><p>This helps your agent stay reactive without misfiring on every background sound.</p><h3 id="2-namo-turn-detection">2. <strong>Namo Turn Detection</strong></h3><p>Once the audio is cleaned up, the <strong>Namo Turn Detector</strong> adds intelligence. It understands the meaning behind the speech and predicts whether the user is truly finished talking or just pausing.</p><pre><code class="language-python">from videosdk.plugins.turn_detector import NamoTurnDetectorV1, pre_download_namo_turn_v1_model

# Pre-download the multilingual model to avoid runtime delays
pre_download_namo_turn_v1_model()

# Initialize the multilingual Turn Detector
turn_detector = NamoTurnDetectorV1(
    threshold=0.7  # Confidence level for triggering a response
)
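
# What the threshold means in practice: the detector scores the current
# utterance with an end-of-turn confidence between 0 and 1, and the agent
# replies only once that score clears the threshold. (Illustrative sketch,
# not the actual NamoTurnDetectorV1 internals.)
def ready_to_reply(end_of_turn_confidence, threshold=0.7):
    return end_of_turn_confidence >= threshold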
</code></pre><ul><li><strong>Multilingual Support</strong> - Works across 20+ languages</li><li><strong>Context-Aware</strong> - Recognizes thinking pauses</li><li><strong>Interrupt Smartness</strong> - Responds instantly to barge-ins</li></ul><h3 id="3-pipeline-integration">3. <strong>Pipeline Integration</strong></h3><p>You can plug both detectors into a unified <strong>Cascading Pipeline</strong> to give your agent real conversational timing.</p><pre><code class="language-python">from videosdk.agents import CascadingPipeline
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import NamoTurnDetectorV1, pre_download_namo_turn_v1_model

# Pre-download the model you intend to use
pre_download_namo_turn_v1_model(language="en")

pipeline = CascadingPipeline(
    stt=your_stt_provider,
    llm=your_llm_provider,
    tts=your_tts_provider,
    vad=SileroVAD(threshold=0.5),
    turn_detector=NamoTurnDetectorV1(language="en", threshold=0.7)
)
</code></pre><p>Now, when your agent is mid-reply and detects incoming speech via <strong>VAD</strong>, Namo helps it <strong>semantically confirm</strong> if it’s an interruption and instantly pause its own output. That’s real-time, real-human responsiveness.</p><h2 id="complete-example">Complete Example</h2><pre><code class="language-python">import os
from typing import AsyncIterator
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.elevenlabs import ElevenLabsTTS
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import NamoTurnDetectorV1, pre_download_namo_turn_v1_model

# Pre-downloading the Namo Turn Detector model
pre_download_namo_turn_v1_model()

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are VideoSDK's Voice Agent. You are a helpful voice assistant that can answer questions about weather. IMPORTANT: don't generate responses longer than 2 lines",
        )

    async def on_enter(self) -&gt; None:
        await self.session.say("Hello, how can I help you today?")
    
    async def on_exit(self) -&gt; None:
        await self.session.say("Goodbye!")
        

async def start_session(context: JobContext):

    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(),
        llm=OpenAILLM(),
        tts=ElevenLabsTTS(),
        vad=SileroVAD(),
        turn_detector=NamoTurnDetectorV1()
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    await context.run_until_shutdown(session=session, wait_for_participant=True)

def make_context() -&gt; JobContext:
    room_options = RoomOptions(
        room_id="&lt;room_id&gt;",
        name="Namo Turn Detector Agent",
        auth_token=os.getenv("VIDEOSDK_AUTH_TOKEN"),
        playground=True,
    )

    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start() </code></pre><p><strong>Run your agent:</strong></p><pre><code class="language-bash">python main.py
</code></pre><p><strong>Open the VideoSDK playground URL</strong> printed in your terminal.<br/>This will look like:</p><pre><code class="language-bash">https://playground.videosdk.live?token=...&amp;meetingId=...</code></pre><p>Your agent can now <strong>speak</strong>, <strong>wait</strong>, <strong>listen</strong>, and even <strong>yield mid-sentence</strong> when a human jumps in. Good conversation isn’t about perfect grammar; it’s about timing, empathy, and flow. By combining <strong>VAD</strong> and <strong>Namo</strong>, you give your AI agent the ability to truly listen like a human: to speak when it should, wait when it must, and stop when someone else has something to say.</p><h2 id="looking-ahead-future-directions">Looking Ahead: Future Directions</h2><ul><li><strong>Multi-party turn-taking</strong>&nbsp;detection: deciding when one speaker yields to another.</li><li><strong>Hybrid signals</strong>: combine semantics with prosody, pitch, silence, etc.</li><li><strong>Adaptive thresholds &amp; confidence models</strong>: dynamic sensitivity based on conversation flow.</li><li><strong>Distilled / edge versions</strong>&nbsp;for latency-constrained devices.</li><li><strong>Continuous learning / feedback loop</strong>: let models adapt to usage patterns over time.</li></ul><h3 id="integrate-namo-turn-detection-model-on-any-device">Integrate Namo-Turn-Detection-Model on Any Device</h3><ul><li><a href="https://docs.videosdk.live/ai_agents/ai-phone-agent-quick-start" rel="noreferrer">Telephony</a></li><li><a href="https://docs.videosdk.live/ai_agents/whatsapp-voice-agent-quick-start" rel="noreferrer">WhatsApp</a></li><li><a href="https://docs.videosdk.live/ai_agents/voice-agent-quick-start" rel="noreferrer">Web</a></li><li><a href="https://docs.videosdk.live/ai_agents/introduction" rel="noreferrer">Mobile</a></li><li><a href="https://docs.videosdk.live/unity/guide/video-and-audio-calling-api-sdk/concept-and-architecture" rel="noreferrer">Gaming</a></li><li><a 
href="https://docs.videosdk.live/iot/guide/video-and-audio-calling-api-sdk/concept-and-architecture" rel="noreferrer">Physical AI</a></li></ul><h2 id="resources-and-next-step">Resources and Next Steps</h2><ul><li>For a deep dive into Namo’s architecture, multilingual benchmarks, and model performance, visit the <a href="https://www.videosdk.live/blog/namo-turn-detection-v1-semantic-turn-detection-for-ai-voice-agents" rel="noreferrer">Namo turn detection plugin page</a>.</li><li>You can also explore the <a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Multilingual" rel="noreferrer">Hugging Face model collection</a> to find specialized models for each supported language.</li><li>Explore the quickstart guide to get started with the <a href="https://github.com/videosdk-live/agents-quickstart/tree/main/Namo%20Turn%20Detector">Namo Turn Detector</a>.</li><li>See an e-commerce agent with <a href="https://github.com/videosdk-community/ai-agent-demo/tree/conversational-flow">natural turn detection and interruption handling</a>.</li></ul><h2 id="citation">Citation</h2><pre><code class="language-bibtex">@software{namo2025,
  title={Namo Turn Detector v1: Semantic Turn Detection for Conversational AI},
  author={VideoSDK Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/collections/videosdk-live/namo-turn-detector-v1-68d52c0564d2164e9d17ca97}
}
</code></pre>]]></content:encoded></item><item><title><![CDATA[How to Build an AI Voice Agent Using the RAG Pipeline and VideoSDK]]></title><description><![CDATA[Learn how to build an AI Voice Agent with Retrieval-Augmented Generation (RAG). This guide walks through ingestion, embeddings, retrieval, and real-time voice response with complete code examples.]]></description><link>https://www.videosdk.live/blog/build-an-ai-voice-agent-using-the-rag-pipeline-and-videosdk</link><guid isPermaLink="false">6901e52a64df6f042b4d4598</guid><category><![CDATA[ai agents]]></category><category><![CDATA[AI voice agent]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 31 Oct 2025 10:25:33 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/10/image--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/10/image--1-.png" alt="How to Build an AI Voice Agent Using the RAG Pipeline and VideoSDK"/><p>Language models are powerful, but their responses are limited to the information within their context window. Once that limit is reached, they often start guessing. <strong>Retrieval-Augmented Generation (RAG)</strong> helps overcome this by allowing the model to fetch relevant information from an external knowledge base before generating a response.</p><p>In this post, we’ll <strong>build an example RAG-powered voice agent using VideoSDK, ChromaDB, and OpenAI</strong>. 
This demo shows how you can combine <strong>real-time audio input, intelligent data retrieval, and natural voice responses</strong> to create a more reliable and context-aware conversational agent.</p><h3 id="rag-architecture-explained">RAG Architecture Explained</h3><p>The architecture below shows how <strong>VideoSDK</strong> brings together real-time voice communication and <strong>Retrieval-Augmented Generation (RAG)</strong> to create a smarter, context-aware AI assistant.</p><p>Everything starts inside the <strong>VideoSDK Room</strong>, where the user speaks. The <strong>User Voice Input</strong> is captured and passed into the <strong>Voice Processing pipeline</strong>.</p><ol><li><strong>Speech to Text (STT)</strong> : The user’s audio is first converted into text using a speech recognition model like <strong>Deepgram</strong>.</li><li><strong>Embedding Model</strong> : The transcribed text is transformed into a numerical vector representation (embedding).</li><li><strong>Vector Database</strong> : These embeddings are used to search the knowledge base for semantically relevant documents. 
This is where retrieval happens: the AI fetches real, factual context instead of guessing.</li><li><strong>LLM (Large Language Model)</strong> : The retrieved context is passed to the <strong>LLM</strong>, which generates a grounded, accurate response.</li><li><strong>Text to Speech (TTS)</strong> : Finally, the generated text response is converted back into natural speech using a TTS engine like <strong>ElevenLabs</strong>, and streamed back to the user as the <strong>Agent Voice Output</strong>.</li></ol><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://assets.videosdk.live/static-assets/ghost/2025/10/voice_agent_rag.png" class="kg-image" alt="How to Build an AI Voice Agent Using the RAG Pipeline and VideoSDK" loading="lazy" width="3477" height="1615"/><figcaption><span style="white-space: pre-wrap;">Rag-Architecture</span></figcaption></figure><h2 id="prerequisites">Prerequisites</h2><ul><li>A VideoSDK authentication token (generate from&nbsp;<a href="https://app.videosdk.live/" rel="noopener noreferrer">app.videosdk.live</a>); follow the guide to&nbsp;<a href="https://docs.videosdk.live/ai_agents/authentication-and-token">generate a VideoSDK token</a></li><li>A VideoSDK meeting ID (you can generate one using the&nbsp;<a href="https://docs.videosdk.live/api-reference/realtime-communication/create-room" rel="noopener noreferrer">Create Room API</a>)</li><li><a href="https://www.python.org/downloads/release/python-3120/" rel="noreferrer">Python 3.12 or higher</a></li></ul><h3 id="install-dependencies">Install dependencies</h3><pre><code class="language-bash">pip install "videosdk-agents[deepgram,openai,elevenlabs,silero,turn_detector]"
pip install chromadb openai numpy</code></pre><h3 id="set-api-keys-in-env">Set API Keys in .env</h3><pre><code class="language-python">DEEPGRAM_API_KEY = "Your Deepgram API Key"
OPENAI_API_KEY = "Your OpenAI API Key"
ELEVENLABS_API_KEY = "Your ElevenLabs API Key"
VIDEOSDK_AUTH_TOKEN = "VideoSDK Auth token"</code></pre><div class="kg-card kg-callout-card kg-callout-card-yellow"><div class="kg-callout-text">Get API keys:&nbsp;<a href="https://console.deepgram.com/" target="_blank" rel="noopener noreferrer">Deepgram ↗</a>,&nbsp;<a href="https://platform.openai.com/api-keys" target="_blank" rel="noopener noreferrer">OpenAI ↗</a>,&nbsp;<a href="https://elevenlabs.io/app/settings/api-keys" target="_blank" rel="noopener noreferrer">ElevenLabs ↗</a>&nbsp;&amp;&nbsp;<a href="https://app.videosdk.live/api-keys" target="_blank" rel="noopener noreferrer">VideoSDK Dashboard ↗</a>, then follow the guide to&nbsp;<a href="https://docs.videosdk.live/ai_agents/authentication-and-token" target="_blank" rel="noopener noreferrer">generate a VideoSDK token</a></div></div><h2 id="implementation">Implementation</h2><h3 id="step-1-custom-voice-agent-with-rag">Step 1: Custom Voice Agent with RAG</h3><p>Create a main.py file and add a custom agent class that extends&nbsp;<code>Agent</code>&nbsp;and adds retrieval capabilities:</p><pre><code class="language-python">class VoiceAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="""You are a helpful voice assistant that answers questions
            based on provided context. Use the retrieved documents to ground your answers.
            If no relevant context is found, say so. Be concise and conversational."""
        )

        # Initialize OpenAI client for embeddings
        self.openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

        # Define your knowledge base
        self.documents = [
            "What is VideoSDK? VideoSDK is a comprehensive video calling and live streaming platform...",
            "How do I authenticate with VideoSDK? Use JWT tokens generated with your API key...",
            # Add more documents
        ]

        # Set up ChromaDB
        self.chroma_client = chromadb.Client()  # In-memory
        # For persistence: chromadb.PersistentClient(path="./chroma_db")
        self.collection = self.chroma_client.create_collection(
            name="videosdk_faq_collection"
        )

        # Generate embeddings and populate database
        self._initialize_knowledge_base()

    def _initialize_knowledge_base(self):
        """Generate embeddings and store documents."""
        embeddings = [self._get_embedding_sync(doc) for doc in self.documents]
        self.collection.add(
            documents=self.documents,
            embeddings=embeddings,
            ids=[f"doc_{i}" for i in range(len(self.documents))]
        )</code></pre><h3 id="step-2-embedding-generation">Step 2: Embedding Generation</h3><p>Implement both synchronous (for initialization) and asynchronous (for runtime) embedding methods:</p><pre><code class="language-python">    def _get_embedding_sync(self, text: str) -&gt; list[float]:
        """Synchronous embedding for initialization."""
        import openai
        client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        response = client.embeddings.create(
            input=text,
            model="text-embedding-ada-002"
        )
        return response.data[0].embedding
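
# How the vector search in step 3 ranks documents: by similarity between
# embeddings. The standard measure is cosine similarity, sketched here as a
# standalone plain-Python helper (illustrative only; ChromaDB computes this
# internally, so you do not need it in your agent):
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)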

    async def get_embedding(self, text: str) -&gt; list[float]:
        """Async embedding for runtime queries."""
        response = await self.openai_client.embeddings.create(
            input=text,
            model="text-embedding-ada-002"
        )
        return response.data[0].embedding</code></pre><h3 id="step-3-retrieval-method">Step 3: Retrieval Method</h3><p>Add semantic search capability:</p><pre><code class="language-python">    async def retrieve(self, query: str, k: int = 2) -&gt; list[str]:
        """Retrieve top-k most relevant documents from vector database."""
        # Generate query embedding
        query_embedding = await self.get_embedding(query)

        # Query ChromaDB
        results = self.collection.query(
            query_embeddings=[query_embedding],
            n_results=k
        )

        # Return matching documents
        return results["documents"][0] if results["documents"] else []</code></pre><h3 id="step-4-agent-lifecycle-hooks">Step 4: Agent Lifecycle Hooks</h3><p>Define agent behavior on entry and exit:</p><pre><code class="language-python"> async def on_enter(self) -&gt; None:
        """Called when agent session starts."""
        await self.session.say("Hello! I'm your VideoSDK assistant. How can I help you today?")

    async def on_exit(self) -&gt; None:
        """Called when agent session ends."""
        await self.session.say("Thank you for using VideoSDK. Goodbye!")</code></pre><h3 id="step-5-custom-conversation-flow">Step 5: Custom Conversation Flow</h3><p>Override the conversation flow to inject retrieved context:</p><pre><code class="language-python">class RAGConversationFlow(ConversationFlow):
    async def run(self, transcript: str) -&gt; AsyncIterator[str]:
        """
        Process user input with RAG pipeline.
        Args:
            transcript: User's speech transcribed to text
        Yields:
            Generated response chunks
        """
        # Step 1: Retrieve relevant documents
        context_docs = await self.agent.retrieve(transcript)

        # Step 2: Format context
        if context_docs:
            context_str = "\n\n".join([f"Document {i+1}: {doc}"
                                      for i, doc in enumerate(context_docs)])
        else:
            context_str = "No relevant context found."

        # Step 3: Inject context into conversation
        self.agent.chat_context.add_message(
            role="system",
            content=f"Retrieved Context:\n{context_str}\n\nUse this context to answer the user's question."
        )

        # Step 4: Generate response with LLM
        async for response_chunk in self.process_with_llm():
            yield response_chunk</code></pre><h3 id="step-6-session-and-pipeline-setup">Step 6: Session and Pipeline Setup</h3><p>Configure the agent session and start the job:</p><pre><code class="language-python">async def entrypoint(ctx: JobContext):
    agent = VoiceAgent()

    conversation_flow = RAGConversationFlow(
        agent=agent,
    )

    session = AgentSession(
        agent=agent,
        pipeline=CascadingPipeline(
            stt=DeepgramSTT(),
            llm=OpenAILLM(),
            tts=ElevenLabsTTS(),
            vad=SileroVAD(),
            turn_detector=TurnDetector()
        ),
        conversation_flow=conversation_flow,
    )

    # Register cleanup
    ctx.add_shutdown_callback(lambda: session.close())

    # Start agent
    try:
        await ctx.connect()
        print("Waiting for participant...")
        await ctx.room.wait_for_participant()
        print("Participant joined - starting session")
        await session.start()
        await asyncio.Event().wait()
    except KeyboardInterrupt:
        print("\nShutting down gracefully...")
    finally:
        await session.close()
        await ctx.shutdown()

def make_context() -&gt; JobContext:
    room_options = RoomOptions(name="RAG Voice Assistant", playground=True)
    return JobContext(room_options=room_options)
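
# Best practice (see the notes after this example): split long documents
# into moderate chunks before embedding. A minimal word-count chunker
# (illustrative sketch; chunk_document is not part of the VideoSDK SDK):
def chunk_document(text, max_words=500):
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]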

if __name__ == "__main__":
    job = WorkerJob(entrypoint=entrypoint, jobctx=make_context)
    job.start()</code></pre><h4 id="step-7-run-the-python-script">Step 7: Run the Python Script</h4><pre><code class="language-python">python main.py</code></pre><p>You can also use console for running the script :</p><pre><code class="language-python">python main.py console</code></pre><p>Now that the full RAG pipeline is in place, the agent can seamlessly handle every stage from capturing voice input to fetching relevant context and generating fact-based spoken responses. It’s a fully functional, end-to-end intelligent voice system powered by VideoSDK.</p><h2 id="best-practices">Best Practices</h2><ol><li>Document Quality: Use clear, well-structured documents with specific information</li><li>Chunk Size: Keep chunks between 300-800 words for optimal retrieval</li><li>Retrieval Count: Start with k=2-3, adjust based on response quality and latency</li><li>Context Window: Ensure retrieved context fits within LLM token limits</li><li>Persistent Storage: Use PersistentClient in production to save embeddings</li><li>Error Handling: Always handle retrieval failures gracefully</li><li>Testing: Test with diverse queries to ensure good coverage</li></ol><h2 id="resources-and-next-steps">Resources and Next Steps</h2><ol><li>Explore the <a href="https://github.com/videosdk-live/agents-quickstart/blob/main/RAG/rag.py" rel="noreferrer">rag-implementation-example</a> for full code implementation.</li><li>Read more about how to implement advanced methods like <a href="https://docs.videosdk.live/ai_agents/core-components/rag#dynamic-document-updates" rel="noreferrer">Dynamic Document Updates</a> and <a href="https://docs.videosdk.live/ai_agents/core-components/rag#document-chunking" rel="noreferrer">Document chunking</a> in RAG.</li><li>Learn how to <a href="https://docs.videosdk.live/ai_agents/deployments/introduction" rel="noreferrer">deploy your AI Agents</a>.</li><li>Visit <a href="https://docs.trychroma.com/docs/overview/getting-started" rel="noreferrer">Chroma DB 
Documentation</a></li><li>Build your own use case:&nbsp;knowledge-based chatbots, document search assistants, and context-aware voice agents.</li></ol>]]></content:encoded></item><item><title><![CDATA[Namo-Turn-Detection-v1: Semantic Turn Detection for AI Voice Agents]]></title><description><![CDATA[Turn-taking, the ability to know exactly when a user has finished speaking, is the invisible force behind natural human conversation. Yet most voice agents today rely on Voice Activity Detection (VAD) or fixed silence timers, leading to premature cut-offs or long, robotic pauses.

We introduce state of the art NAMO Turn Detector v1 (NAMO-v1), an open-source, ONNX-optimized semantic turn detector model that predicts conversational boundaries by understanding meaning, not just silence. NAMO achiev]]></description><link>https://www.videosdk.live/blog/namo-turn-detection-v1-semantic-turn-detection-for-ai-voice-agents</link><guid isPermaLink="false">68dc221564df6f042b4d43b3</guid><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Wed, 01 Oct 2025 14:10:02 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/10/Talk-Naturally.-Namo-v1-Listens-2--7-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/10/Talk-Naturally.-Namo-v1-Listens-2--7-.png" alt="Namo-Turn-Detection-v1: Semantic Turn Detection for AI Voice Agents"/><p>Turn-taking, the ability to know exactly when a user has finished speaking, is the invisible force behind natural human conversation. Yet most voice agents today rely on <strong>Voice Activity Detection (VAD)</strong> or fixed silence timers, leading to <strong>premature cut-offs</strong> or <strong>long, robotic pauses</strong>.</p><p>We introduce state of the art <a href="https://github.com/videosdk-live/NAMO-Turn-Detector-v1/tree/main" rel="noreferrer"><strong>NAMO Turn Detector v1 (NAMO-v1)</strong></a>, an <strong>open-source, ONNX-optimized semantic turn detector model</strong> that predicts conversational boundaries by understanding <strong>meaning</strong>, not just silence. NAMO achieves <strong>&lt;19 ms inference</strong> for specialized single-language models, <strong>&lt;29 ms for multilingual</strong>, and up to <strong>97.3 % accuracy,</strong> making it the first practical drop-in replacement for VAD in real-time voice systems.</p><h2 id="1-why-existing-approaches-break-down">1. 
Why Existing Approaches Break Down</h2><p>Most deployed voice agents use one of two approaches:</p><ul><li><strong>Silence-based VAD:</strong> very fast and lightweight but either interrupts users mid-sentence or waits too long to be sure they’re done.</li><li><strong>ASR endpointing (pause + punctuation):</strong> better than raw energy detection, but still a proxy; hesitations and lists often look “finished” when they’re not, and behavior varies wildly across languages.</li></ul><p>Both approaches force product teams into a painful <strong>latency vs. interruption trade-off</strong>: either set a long buffer (safe but robotic) or a short one (fast but rude).</p><h2 id="2-namo%E2%80%99s-semantic-advantage">2. NAMO’s Semantic Advantage</h2><p>NAMO replaces “silence as a proxy” with <strong>semantic understanding</strong>. The model looks at the text stream from your ASR and predicts whether the thought is complete. This single change brings:</p><ul><li><strong>Lower floor transfer time</strong> (snappier replies) <strong>without raising false cut-offs.</strong></li><li><strong>Multilingual robustness:</strong> one model works across <strong>23+ languages</strong>, no per-language tuning.</li><li><strong>Production latency:</strong> ONNX-quantized models run in <strong>&lt;30 ms</strong> on CPU/GPU with almost no accuracy loss.</li><li><strong>Observability &amp; tuning:</strong> you can get calibrated probabilities and adjust thresholds for “fast vs. 
safe.”</li></ul><p>Namo uses Natural Language Understanding to analyze the semantic meaning and context of speech, distinguishing between:</p><ul><li><strong>Complete utterances</strong>&nbsp;(user is done speaking)</li><li><strong>Incomplete utterances</strong>&nbsp;(user will continue speaking)</li></ul><h3 id="key-features">Key Features</h3><ul><li><strong>Semantic Understanding</strong>: Analyzes meaning and context, not just silence</li><li><strong>Ultra-Fast Inference</strong>: &lt;19ms for specialized models, &lt;29ms for multilingual</li><li><strong>Lightweight</strong>: ~135MB (specialized) / ~295MB (multilingual)</li><li><strong>High Accuracy</strong>: Up to 97.3% for specialized models, 90.25% average for multilingual</li><li><strong>Production-Ready</strong>: ONNX-optimized for real-time, enterprise-grade applications</li><li><strong>Easy Integration</strong>: Standalone usage or plug-and-play with VideoSDK Agents SDK</li></ul><h2 id="3-performance-benchmarks">3. Performance Benchmarks</h2><h3 id="latency-throughput">Latency &amp; Throughput</h3><p>Using ONNX quantization, NAMO moves from <strong>61 ms to 28 ms inference</strong> (multilingual) and <strong>38 ms to 14.9 ms</strong> (specialized).</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/10/image.png" class="kg-image" alt="Namo-Turn-Detection-v1: Semantic Turn Detection for AI Voice Agents" loading="lazy" width="4004" height="2810"/></figure><ul><li><strong>Relative speedup:</strong> up to <strong>2.56×</strong></li><li><strong>Throughput:</strong> doubled (35.6 to 66.8 tokens/sec)</li></ul><h3 id="accuracy-impact">Accuracy Impact</h3><p>Quantization preserves accuracy:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/10/image-1.png" class="kg-image" alt="Namo-Turn-Detection-v1: Semantic Turn Detection for AI Voice Agents" loading="lazy" width="3339" height="1307"/></figure><p>Confusion 
matrices show virtually unchanged true/false rates before and after quantization.</p><h3 id="language-coverage">Language Coverage</h3><p>Average multilingual accuracy: <strong>90.25 %</strong><br/>Specialized single-language models: <strong>97.3 %</strong> (Turkish/Korean), <strong>&gt;93 %</strong> Hindi, Japanese, German.</p><h2 id="5-impact-on-real-time-voice-ai">5. Impact on Real-Time Voice AI</h2><p>With NAMO you get:</p><ul><li><strong>Snappier responses</strong> without the “one Mississippi” delay.</li><li><strong>Fewer interruptions</strong> when users pause mid-thought.</li><li><strong>Consistent UX across markets</strong> without tuning for each language.</li><li><strong>Cost-effective scaling</strong> — works with any STT and runs efficiently on commodity servers.</li></ul><h2 id="6-impact-on-real-time-voice-ai">6. Available Models</h2><p>Namo offers both&nbsp;<strong>specialized single-language models</strong>&nbsp;and a&nbsp;<strong>unified multilingual model</strong>.</p><table>
<thead>
<tr>
<th>Variant</th>
<th>Languages / Focus</th>
<th>Model Size</th>
<th>Latency*</th>
<th>Typical Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Multilingual</strong></td>
<td>23 languages</td>
<td>~295 MB</td>
<td>&lt; 29 ms</td>
<td>~90.25 % (average)</td>
</tr>
<tr>
<td><strong>Language-Specialized</strong></td>
<td>One language per model</td>
<td>~135 MB</td>
<td>&lt; 19 ms</td>
<td>Up to 97.3 %</td>
</tr>
</tbody>
</table>
<blockquote>* Latency measured after quantization on target inference hardware.</blockquote><h3 id="multilingual-model-recommended">Multilingual Model (Recommended)</h3><ul><li>Model: Namo-Turn-Detector-v1-Multilingual</li><li>Base: mmBERT</li><li>Languages: All 23 supported languages</li><li>Inference: &lt;29ms</li><li>Size: ~295MB</li><li>Average Accuracy: 90.25%</li><li>Model Link: <a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Multilingual" rel="noreferrer">Namo Turn Detector v1 - Multilingual</a></li></ul><h4 id="performance-benchmarks-for-multilingual-model">Performance Benchmarks for Multilingual Model</h4><p>Evaluated on&nbsp;<strong>25,000+ diverse utterances</strong>&nbsp;across all supported languages.</p><table>
<thead>
<tr>
<th>Language</th>
<th>Accuracy</th>
<th>Precision</th>
<th>Recall</th>
<th>F1 Score</th>
<th>Samples</th>
</tr>
</thead>
<tbody>
<tr>
<td>🇹🇷 Turkish</td>
<td>97.31%</td>
<td>0.9611</td>
<td>0.9853</td>
<td>0.9730</td>
<td>966</td>
</tr>
<tr>
<td>🇰🇷 Korean</td>
<td>96.85%</td>
<td>0.9541</td>
<td>0.9842</td>
<td>0.9690</td>
<td>890</td>
</tr>
<tr>
<td>🇯🇵 Japanese</td>
<td>94.36%</td>
<td>0.9099</td>
<td>0.9857</td>
<td>0.9463</td>
<td>834</td>
</tr>
<tr>
<td>🇩🇪 German</td>
<td>94.25%</td>
<td>0.9135</td>
<td>0.9772</td>
<td>0.9443</td>
<td>1,322</td>
</tr>
<tr>
<td>🇮🇳 Hindi</td>
<td>93.98%</td>
<td>0.9276</td>
<td>0.9603</td>
<td>0.9436</td>
<td>1,295</td>
</tr>
<tr>
<td>🇳🇱 Dutch</td>
<td>92.79%</td>
<td>0.8959</td>
<td>0.9738</td>
<td>0.9332</td>
<td>1,401</td>
</tr>
<tr>
<td>🇳🇴 Norwegian</td>
<td>91.65%</td>
<td>0.8717</td>
<td>0.9801</td>
<td>0.9227</td>
<td>1,976</td>
</tr>
<tr>
<td>🇨🇳 Chinese</td>
<td>91.64%</td>
<td>0.8859</td>
<td>0.9608</td>
<td>0.9219</td>
<td>945</td>
</tr>
<tr>
<td>🇫🇮 Finnish</td>
<td>91.58%</td>
<td>0.8746</td>
<td>0.9702</td>
<td>0.9199</td>
<td>1,010</td>
</tr>
<tr>
<td>🇬🇧 English</td>
<td>90.86%</td>
<td>0.8507</td>
<td>0.9801</td>
<td>0.9108</td>
<td>2,845</td>
</tr>
<tr>
<td>🇵🇱 Polish</td>
<td>90.68%</td>
<td>0.8619</td>
<td>0.9568</td>
<td>0.9069</td>
<td>976</td>
</tr>
<tr>
<td>🇮🇩 Indonesian</td>
<td>90.22%</td>
<td>0.8514</td>
<td>0.9707</td>
<td>0.9071</td>
<td>971</td>
</tr>
<tr>
<td>🇮🇹 Italian</td>
<td>90.15%</td>
<td>0.8562</td>
<td>0.9640</td>
<td>0.9069</td>
<td>782</td>
</tr>
<tr>
<td>🇩🇰 Danish</td>
<td>89.73%</td>
<td>0.8517</td>
<td>0.9644</td>
<td>0.9045</td>
<td>779</td>
</tr>
<tr>
<td>🇵🇹 Portuguese</td>
<td>89.56%</td>
<td>0.8410</td>
<td>0.9676</td>
<td>0.8999</td>
<td>1,398</td>
</tr>
<tr>
<td>🇪🇸 Spanish</td>
<td>88.88%</td>
<td>0.8304</td>
<td>0.9681</td>
<td>0.8940</td>
<td>1,295</td>
</tr>
<tr>
<td>🇮🇳 Marathi</td>
<td>88.50%</td>
<td>0.8762</td>
<td>0.9008</td>
<td>0.8883</td>
<td>774</td>
</tr>
<tr>
<td>🇺🇦 Ukrainian</td>
<td>87.94%</td>
<td>0.8164</td>
<td>0.9587</td>
<td>0.8819</td>
<td>929</td>
</tr>
<tr>
<td>🇷🇺 Russian</td>
<td>87.48%</td>
<td>0.8318</td>
<td>0.9547</td>
<td>0.8890</td>
<td>1,470</td>
</tr>
<tr>
<td>🇻🇳 Vietnamese</td>
<td>86.45%</td>
<td>0.8135</td>
<td>0.9439</td>
<td>0.8738</td>
<td>1,004</td>
</tr>
<tr>
<td>🇸🇦 Arabic</td>
<td>84.90%</td>
<td>0.7965</td>
<td>0.9439</td>
<td>0.8639</td>
<td>947</td>
</tr>
<tr>
<td>🇧🇩 Bengali</td>
<td>79.40%</td>
<td>0.7874</td>
<td>0.7939</td>
<td>0.7907</td>
<td>1,000</td>
</tr>
</tbody>
</table>
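As a sanity check, each F1 value in the table is the harmonic mean of that row's precision and recall; a few lines of Python reproduce, for example, the Turkish and Korean rows:

```python
# F1 is the harmonic mean of precision and recall; the values below are
# taken from the Turkish and Korean rows of the multilingual benchmark table.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

assert abs(f1_score(0.9611, 0.9853) - 0.9730) < 1e-3  # Turkish
assert abs(f1_score(0.9541, 0.9842) - 0.9690) < 1e-3  # Korean
```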
<p><strong>Average Accuracy: 90.25%</strong> across all languages</p><h3 id="specialized-single-language-models">Specialized Single-Language Models</h3><ul><li><strong>Architecture</strong>: DistilBERT-based</li><li><strong>Inference</strong>: &lt;19ms</li><li><strong>Size</strong>: ~135MB each</li></ul><table>
<thead>
<tr>
<th>Language</th>
<th>Model Link</th>
<th>Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td>🇰🇷 Korean</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Korean">Namo-v1-Korean</a></td>
<td>97.3%</td>
</tr>
<tr>
<td>🇹🇷 Turkish</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Turkish">Namo-v1-Turkish</a></td>
<td>96.8%</td>
</tr>
<tr>
<td>🇯🇵 Japanese</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Japanese">Namo-v1-Japanese</a></td>
<td>93.5%</td>
</tr>
<tr>
<td>🇮🇳 Hindi</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Hindi">Namo-v1-Hindi</a></td>
<td>93.1%</td>
</tr>
<tr>
<td>🇩🇪 German</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-German">Namo-v1-German</a></td>
<td>91.9%</td>
</tr>
<tr>
<td>🇬🇧 English</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-English">Namo-v1-English</a></td>
<td>91.5%</td>
</tr>
<tr>
<td>🇳🇱 Dutch</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Dutch">Namo-v1-Dutch</a></td>
<td>90.0%</td>
</tr>
<tr>
<td>🇮🇳 Marathi</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Marathi">Namo-v1-Marathi</a></td>
<td>89.7%</td>
</tr>
<tr>
<td>🇨🇳 Chinese</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Chinese">Namo-v1-Chinese</a></td>
<td>88.8%</td>
</tr>
<tr>
<td>🇵🇱 Polish</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Polish">Namo-v1-Polish</a></td>
<td>87.8%</td>
</tr>
<tr>
<td>🇳🇴 Norwegian</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Norwegian">Namo-v1-Norwegian</a></td>
<td>87.3%</td>
</tr>
<tr>
<td>🇮🇩 Indonesian</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Indonesian">Namo-v1-Indonesian</a></td>
<td>87.1%</td>
</tr>
<tr>
<td>🇵🇹 Portuguese</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Portuguese">Namo-v1-Portuguese</a></td>
<td>86.9%</td>
</tr>
<tr>
<td>🇮🇹 Italian</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Italian">Namo-v1-Italian</a></td>
<td>86.8%</td>
</tr>
<tr>
<td>🇪🇸 Spanish</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Spanish">Namo-v1-Spanish</a></td>
<td>86.7%</td>
</tr>
<tr>
<td>🇩🇰 Danish</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Danish">Namo-v1-Danish</a></td>
<td>86.5%</td>
</tr>
<tr>
<td>🇻🇳 Vietnamese</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Vietnamese">Namo-v1-Vietnamese</a></td>
<td>86.2%</td>
</tr>
<tr>
<td>🇫🇷 French</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-French">Namo-v1-French</a></td>
<td>85.0%</td>
</tr>
<tr>
<td>🇫🇮 Finnish</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Finnish">Namo-v1-Finnish</a></td>
<td>84.8%</td>
</tr>
<tr>
<td>🇷🇺 Russian</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Russian">Namo-v1-Russian</a></td>
<td>84.1%</td>
</tr>
<tr>
<td>🇺🇦 Ukrainian</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Ukrainian">Namo-v1-Ukrainian</a></td>
<td>82.4%</td>
</tr>
<tr>
<td>🇸🇦 Arabic</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Arabic">Namo-v1-Arabic</a></td>
<td>79.7%</td>
</tr>
<tr>
<td>🇧🇩 Bengali</td>
<td><a href="https://huggingface.co/videosdk-live/Namo-Turn-Detector-v1-Bengali">Namo-v1-Bengali</a></td>
<td>79.2%</td>
</tr>
</tbody>
</table>
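One practical takeaway from the two tables: when a specialized model exists for your target language and beats the multilingual average, the smaller ~135 MB model is usually the better pick. A rough selection helper (the function and the accuracy subset here are illustrative, not part of the SDK):

```python
# Illustrative model-selection helper (not an SDK API). Accuracy figures are
# a subset of the specialized-model table above; extend as needed.
SPECIALIZED_ACCURACY = {
    "Korean": 0.973, "Turkish": 0.968, "Japanese": 0.935,
    "Hindi": 0.931, "German": 0.919, "English": 0.915,
}
MULTILINGUAL_AVERAGE = 0.9025  # average accuracy of the multilingual model

def pick_model(language: str) -> str:
    """Prefer the smaller, faster specialized model when it exists and
    outperforms the multilingual average; otherwise fall back."""
    if SPECIALIZED_ACCURACY.get(language, 0.0) >= MULTILINGUAL_AVERAGE:
        return f"Namo-Turn-Detector-v1-{language}"
    return "Namo-Turn-Detector-v1-Multilingual"

assert pick_model("Korean") == "Namo-Turn-Detector-v1-Korean"
assert pick_model("Swahili") == "Namo-Turn-Detector-v1-Multilingual"  # unsupported language
```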
<h3 id="try-it-yourself">Try It Yourself!</h3><p>We’ve provided an&nbsp;<a href="https://github.com/videosdk-live/NAMO-Turn-Detector-v1/blob/main/inference.py">inference script</a>&nbsp;to help you quickly test these models. Just plug it in and start experimenting!</p><ul><li>Hugging Face Models: <a href="https://huggingface.co/videosdk-live/models">https://huggingface.co/videosdk-live/models</a></li><li>GitHub Repo Link: <a href="https://github.com/videosdk-live/NAMO-Turn-Detector-v1/tree/main">https://github.com/videosdk-live/NAMO-Turn-Detector-v1/tree/main</a></li><li>Official Documentation: <a href="https://docs.videosdk.live/ai_agents/core-components/turn-detection-and-vad">https://docs.videosdk.live/ai_agents/core-components/turn-detection-and-vad</a></li></ul>
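If you prefer to call the ONNX model directly rather than through the provided script, the shape of the code looks roughly like this. This is a sketch under assumptions: a standard Hugging Face ONNX export with a binary classification head where index 1 means "turn complete". Treat the official inference.py in the repo as the authoritative version.

```python
# Sketch of standalone ONNX inference (assumed export layout; the official
# inference.py in the repo is the reference implementation).
import numpy as np

def predict_turn_complete(session, tokenizer, text: str) -> float:
    """Return the model's probability that `text` is a finished utterance."""
    encoded = tokenizer(text, return_tensors="np")
    # Feed only the inputs this ONNX graph actually declares
    expected = {i.name for i in session.get_inputs()}
    inputs = {k: v for k, v in encoded.items() if k in expected}
    (logits,) = session.run(None, inputs)
    # Numerically stable softmax over the two classes
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    return float(probs[0, 1])  # assumption: class index 1 == "turn complete"
```

With onnxruntime you would pair this with `session = onnxruntime.InferenceSession(model_path)` and the tokenizer published alongside the model.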
<!--kg-card-begin: html-->
<div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/IL0OSOD38bo?rel=0" style="top: 0; left: 0; width: 100%; height: 100%; position: absolute; border: 0;" allowfullscreen="" scrolling="no" allow="accelerometer *; clipboard-write *; encrypted-media *; gyroscope *; picture-in-picture *; web-share *;" referrerpolicy="strict-origin"/></div>
<!--kg-card-end: html-->
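Because Namo outputs a probability of turn completion rather than a hard yes/no, applications can tune the "fast vs. safe" trade-off with a threshold plus a silence fallback. The policy below is illustrative; the thresholds and timeout are examples, not SDK defaults:

```python
# Illustrative "fast vs. safe" endpointing policy layered on a semantic
# turn-completion probability (thresholds/timeouts are examples only).
def should_respond(turn_complete_prob: float, silence_ms: int, mode: str = "safe") -> bool:
    """Decide whether the agent should take the conversational floor."""
    threshold = 0.5 if mode == "fast" else 0.8
    if turn_complete_prob >= threshold:
        return True  # the utterance looks semantically complete
    # Fallback: a long enough silence ends the turn regardless of semantics
    return silence_ms >= 2000

assert should_respond(0.9, silence_ms=100)                   # complete thought
assert should_respond(0.6, silence_ms=100, mode="fast")      # fast mode cuts in sooner
assert not should_respond(0.6, silence_ms=100, mode="safe")  # safe mode keeps waiting
assert should_respond(0.2, silence_ms=2500)                  # long-silence fallback
```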
<h3 id="integration-with-videosdk-agents">Integration with VideoSDK Agents</h3><p>For seamless integration into your voice agent pipeline:</p><pre><code class="language-py">from videosdk_agents import NamoTurnDetectorV1, pre_download_namo_turn_v1_model

# Download model files (one-time setup)
# For multilingual (default):
pre_download_namo_turn_v1_model()

# For specific language:
# pre_download_namo_turn_v1_model(language="en")

# Initialize turn detector
turn_detector = NamoTurnDetectorV1()  # Multilingual
# turn_detector = NamoTurnDetectorV1(language="en")  # English-specific

# Add to your agent pipeline
from videosdk_agents import CascadingPipeline

pipeline = CascadingPipeline(
    stt=your_stt_service,
    llm=your_llm_service,
    tts=your_tts_service,
    turn_detector=turn_detector  # Namo integration
)</code></pre><h2 id="7-training-testing">7. Training &amp; Testing</h2><p>Each model includes Colab notebooks for training and testing:</p><ul><li><strong>Training Notebooks</strong>: Fine-tune models on your own datasets</li><li><strong>Testing Notebooks</strong>: Evaluate model performance on custom data</li></ul><p>Visit individual model pages for notebook links:</p><ul><li><a href="https://colab.research.google.com/drive/1WEVVAzu1WHiucPRabnyPiWWc-OYvBMNj" rel="nofollow">Multilingual Training Notebook</a></li><li><a href="https://colab.research.google.com/drive/1DqSUYfcya0r2iAEZB9fS4mfrennubduV" rel="nofollow">Specialized Model Training Notebook</a></li><li><a href="https://colab.research.google.com/drive/19ZOlNoHS2WLX2V4r5r492tsCUnYLXnQR" rel="nofollow">Testing Notebook</a></li><li><a href="https://github.com/videosdk-live/NAMO-Turn-Detector-v1/blob/main/inference.py" rel="noreferrer">Inference Script</a></li></ul><h2 id="looking-ahead-future-directions">Looking Ahead: Future Directions</h2><ul><li><strong>Multi-party turn-taking</strong> detection: deciding when one speaker yields to another.</li><li><strong>Hybrid signals</strong>: combine semantics with prosody, pitch, silence, etc.</li><li><strong>Adaptive thresholds &amp; confidence models</strong>: dynamic sensitivity based on conversation flow.</li><li><strong>Distilled / edge versions</strong> for latency-constrained devices.</li><li><strong>Continuous learning / feedback loop</strong>: let models adapt to usage patterns over time.</li></ul><h3 id="integrate-namo-turn-detection-model-on-any-device">Integrate the Namo Turn Detection Model on Any Device</h3><ul><li><a href="https://docs.videosdk.live/ai_agents/ai-phone-agent-quick-start" rel="noreferrer">Telephony</a></li><li><a href="https://docs.videosdk.live/ai_agents/whatsapp-voice-agent-quick-start"
rel="noreferrer">WhatsApp</a></li><li><a href="https://docs.videosdk.live/ai_agents/voice-agent-quick-start" rel="noreferrer">Web</a></li><li><a href="https://docs.videosdk.live/ai_agents/introduction" rel="noreferrer">Mobile</a></li><li><a href="https://docs.videosdk.live/unity/guide/video-and-audio-calling-api-sdk/concept-and-architecture" rel="noreferrer">Gaming</a></li><li><a href="https://docs.videosdk.live/iot/guide/video-and-audio-calling-api-sdk/concept-and-architecture" rel="noreferrer">Physical AI</a></li></ul><h2 id="conclusion">Conclusion</h2><p><strong>NAMO-v1</strong> turns a long-standing bottleneck, turn-taking, into a solved engineering problem. By combining <strong>semantic intelligence</strong> with <strong>real-time speed</strong>, it finally allows voice AI systems to feel human, fast, and globally scalable.</p><h3 id="citation">Citation</h3><pre><code class="language-md">@software{namo2025,
  title={Namo Turn Detector v1: Semantic Turn Detection for Conversational AI},
  author={VideoSDK Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/collections/videosdk-live/namo-turn-detector-v1-68d52c0564d2164e9d17ca97}
}
</code></pre>]]></content:encoded></item><item><title><![CDATA[Build an AI Phone Agent for Inbound & Outbound SIP Calls]]></title><description><![CDATA[Build and deploy an AI phone agent with VideoSDK—complete step-by-step guide from coding to live phone integration.]]></description><link>https://videosdk.live/blog/ai-phone-agent</link><guid isPermaLink="false">68a8f5b164df6f042b4d42ad</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[#sumit-so]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 28 Aug 2025 06:45:58 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/08/AI-Phone-Agent.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/08/AI-Phone-Agent.png" alt="Build an AI Phone Agent for Inbound & Outbound SIP Calls"/><p>This guide provides a step-by-step tutorial on how to build, containerize, and deploy a fully functional AI telephony agent using the VideoSDK open-source Agent SDK. We will cover the complete workflow, from writing the agent's logic in main.py to configuring SIP trunks for live inbound and outbound phone calls.</p><p>If you're a developer looking to bridge the gap between your AI models and real-world telephony, you're in the right place. Follow along to turn a few simple files into a globally accessible voice agent.</p><iframe width="720" height="405" src="https://www.youtube.com/embed/WgEvRs0zqcI?si=AfckG_XWEh99vkBX" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""> </iframe><h2 id="project-directory">Project Directory</h2><p>This simple structure is our final goal for the worker. By following along, you'll create this complete project from scratch.</p><pre><code class="language-python">worker/
├── Dockerfile              # Instructions to build the Docker container
├── main.py                 # The core logic for your AI voice agent
├── requirements.txt        # Python package dependencies
├── .env                    # Environment variables
└── videosdk.yaml           # VideoSDK configuration for deploying the agent</code></pre><h2 id="architectureinboundoutbound-calls-for-ai-phone-agent">Architecture - Inbound/Outbound Calls for AI Phone Agent</h2><p>This architecture shows how your AI Phone Agent connects to the global phone network for both inbound and outbound SIP calls.</p><p>Our container-based deployment flow handles the complex telephony plumbing for you.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/videosdk-sip-inbound-outbound-1.png" class="kg-image" alt="Build a AI Phone Agent for Inbound & Outbound SIP Calls" loading="lazy" width="3546" height="2078"/></figure><h2 id="setting-up-agent-worker-for-ai-phone-agent">Setting Up Agent Worker for AI Phone Agent</h2><p>Let's build our <code>main.py</code>. This file uses three core VideoSDK components to define our agent's logic and connect it to a call.</p><ol><li><strong>Agent:</strong></li></ol><p>This class defines your agent's personality and conversational flow. You can also integrate advanced protocols like MCP and Agent 2 Agent to provide external context to your agent.</p><ol start="2"><li><strong>Pipeline:</strong></li></ol><p>The <code>Pipeline</code> is the engine that processes the audio stream. It takes the user's voice as input and produces the agent's voice as output. VideoSDK offers two types of pipelines to fit your needs:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/videosdk-realtime-pipeline.png" class="kg-image" alt="Build a AI Phone Agent for Inbound & Outbound SIP Calls" loading="lazy" width="3037" height="586"/></figure><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/videosdk-casading-pipeline.png" class="kg-image" alt="Build a AI Phone Agent for Inbound & Outbound SIP Calls" loading="lazy" width="3026" height="582"/></figure><ol start="3"><li><strong>Agent Session:</strong></li></ol><p>This brings it all together, connecting your Agent and Pipeline to a live VideoSDK Room to start the call.</p><p>Project dependencies in <code>requirements.txt</code>:</p><pre><code class="language-python">videosdk-agents
videosdk-plugins-google
videosdk
python-dotenv</code></pre><p>Here’s how these components are implemented in the code <code>main.py</code>:</p><pre><code class="language-python">import asyncio
from videosdk.agents import Agent, AgentSession, RealTimePipeline, JobContext, RoomOptions, WorkerJob, MCPServerStdio
from videosdk.plugins.google import GeminiRealtime, GeminiLiveConfig
from dotenv import load_dotenv
import os

load_dotenv(override=True)

# Agent Component
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are VideoSDK's AI Voice Agent with real-time capabilities. You are a helpful virtual assistant that can answer questions about weather and help with other tasks in real-time.",
        )

    async def on_enter(self) -&gt; None:
        await self.session.say("Hello! I'm your real-time AI assistant. How can I help you today?")
    
    async def on_exit(self) -&gt; None:
        await self.session.say("Goodbye! It was great talking with you!")

async def start_session(context: JobContext):

    # Initialize Gemini Realtime model
    model = GeminiRealtime(
        model="gemini-2.0-flash-live-001",
        # If GOOGLE_API_KEY is set in your .env, you can omit this parameter;
        # it is read from the environment here for clarity.
        api_key=os.getenv("GOOGLE_API_KEY"),
        config=GeminiLiveConfig(
            voice="Leda",  # Other voices include Puck, Charon, etc.
            response_modalities=["AUDIO"]
        )
    )

    # Create pipeline with avatar
    pipeline = RealTimePipeline(
        model=model,
    )
    
    session = AgentSession(
        agent=MyVoiceAgent(),
        pipeline=pipeline
    )

    try:
        await context.connect()
        await session.start()
        await asyncio.Event().wait()
    finally:
        await session.close()
        await context.shutdown()

def make_context() -&gt; JobContext:
    room_options = RoomOptions(
        auth_token=os.getenv("VIDEOSDK_TOKEN"),
        room_id="ln4i-tuwm-yzkq",
        name="AI Agent",
        playground=True,
        recording=False
    )
    return JobContext(room_options=room_options)


if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start() </code></pre><h2 id="running-the-ai-phone-agent-locally">Running the AI Phone Agent Locally</h2><p>Before we containerize our agent, let's run the Python script directly in a local development environment. This is the fastest way to test your agent's core logic.</p><p><strong>Prerequisite:</strong>&nbsp;Ensure you have Python 3.12 or newer installed on your machine.</p><h4 id="1-create-and-activate-a-virtual-environment">1. Create and Activate a Virtual Environment</h4><p>First, open your terminal in the&nbsp;worker&nbsp;directory. It's a best practice to use a virtual environment to manage project dependencies.</p><p>Create the environment:</p><pre><code class="language-bash">python3 -m venv .venv</code></pre><p>Next, activate it. The command differs based on your operating system:</p><ul><li>On macOS/Linux</li></ul><pre><code class="language-bash">source .venv/bin/activate</code></pre><ul><li>On Windows</li></ul><pre><code class="language-bash">.venv\Scripts\activate</code></pre><p>You'll know the environment is active when you see&nbsp;(.venv)&nbsp;at the beginning of your terminal prompt.</p><h4 id="2-install-dependencies">2. Install Dependencies</h4><p>With the virtual environment active, install the necessary Python packages listed in your&nbsp;requirements.txt&nbsp;file:</p><pre><code class="language-bash">pip install -r requirements.txt</code></pre><h4 id="3-run-the-python-script">3. Run the Python Script</h4><p>Finally, run the agent:</p><pre><code class="language-bash">python main.py</code></pre><p>This will start your agent and connect it to a playground session, as defined by&nbsp;<code>playground=True</code>&nbsp;in the&nbsp;<code>make_context()</code>&nbsp;function in your code.
You can now interact with it for initial testing before moving on to a full deployment.</p><h2 id="build-and-test-your-ai-phone-agent">Build and Test Your AI Phone Agent</h2><p>Before deploying our agent to the cloud, it's crucial to ensure it runs correctly on our local machine. The VideoSDK CLI makes this incredibly simple.</p><p>In your terminal, navigate to the <code>worker</code> directory and run the following command:</p><pre><code class="language-bash">videosdk run</code></pre><p>This command reads your <code>videosdk.yaml</code> and <code>Dockerfile</code>, builds a local container, and starts your agent. You should see an output confirming that your worker is running. This is the perfect time to test your agent's basic functionality in a playground environment to catch any bugs before going live.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/videosdk-run.png" class="kg-image" alt="Build a AI Phone Agent for Inbound & Outbound SIP Calls" loading="lazy" width="3921" height="1180"/></figure><h2 id="deploy-your-ai-phone-agent-to-the-cloud">Deploy Your AI Phone Agent to the Cloud</h2><p>Once you've confirmed the agent works locally, it's time to deploy it to VideoSDK's global infrastructure.</p><h3 id="1-configure-the-deployment-manifest">1. Configure the Deployment Manifest</h3><p>First, make sure your <code>videosdk.yaml</code> file is configured for a cloud deployment. The most important line is <code>cloud: true</code>, which tells the CLI to push your container to our servers instead of just running it locally.</p><pre><code class="language-yaml">version: "1.0"
deployment:
  id: 82fed3a5-5316-4273-b29f-4bb26e885842
  entry:
    path: main.py

deploy:
  cloud: true

env:
  path: "./.env"

secrets:
VIDEOSDK_AUTH_TOKEN: # your_videosdk_token</code></pre><h3 id="2-deploy-with-a-single-command">2. Deploy with a Single Command</h3><p>Now for the magic. Run the <code>deploy</code> command:</p><pre><code class="language-bash">videosdk deploy</code></pre><p>The CLI will now package your worker, build the container, and upload it to the VideoSDK cloud. You'll see a live log of the progress.</p><p>Upon completion, you will get a <strong>Success!</strong> message along with your unique <strong>Worker ID</strong>.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/videosdk-deploy-.png" class="kg-image" alt="Build a AI Phone Agent for Inbound & Outbound SIP Calls" loading="lazy" width="2787" height="1127"/></figure><p><strong>Crucial Step:</strong> Copy this Worker ID! You will need this unique identifier in the next step to connect your deployed agent to a phone number using a Routing Rule in the VideoSDK dashboard.</p><h2 id="connect-your-ai-phone-agent-to-the-phone-network">Connect Your AI Phone Agent to the Phone Network</h2><p>Now we'll use the VideoSDK dashboard to connect our deployed agent to a real phone number.</p><h3 id="1-set-up-an-inbound-gateway">1. Set up an Inbound Gateway</h3><p>This tells your SIP provider (e.g., Twilio) where to forward incoming calls.</p><ul><li>In the VideoSDK Dashboard, go to <strong>Telephony</strong>, then <strong>Inbound Gateways</strong>, and click <strong>Add Inbound Gateway</strong>.</li><li>Give it a name and add your phone number.</li><li>The dashboard will generate a unique <strong>Inbound Gateway URL</strong>.
Copy this URL.</li><li>In your SIP provider's dashboard (like Twilio), paste this URL into the <strong>Origination SIP URI</strong> field for your SIP trunk.</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/gif-inbound-gateway.gif" class="kg-image" alt="Build an AI Phone Agent for Inbound & Outbound SIP Calls" loading="lazy" width="1280" height="720"/></figure><h3 id="2-set-up-an-outbound-gateway">2. Set up an Outbound Gateway</h3><p>This tells VideoSDK where to send outgoing calls from your agent.</p><ul><li>Go to <strong>Telephony → Outbound Gateways</strong> and click <strong>Add Outbound Gateway</strong>.</li><li>Give it a name and add your phone number.</li><li>In the <strong>Address</strong> field, enter the <strong>Termination SIP URI</strong> provided by your SIP provider.</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/outbound-gateway.gif" class="kg-image" alt="Build an AI Phone Agent for Inbound & Outbound SIP Calls" loading="lazy" width="1280" height="720"/></figure><h3 id="3-create-a-routing-rule">3. Create a Routing Rule</h3><p>This is the final step that connects everything. 
The routing rule links a phone number to your specific deployed agent.</p><ul><li>Go to&nbsp;<strong>Telephony → Routing Rules</strong> and click&nbsp;<strong>Add Routing Rule</strong>.</li><li>Select a direction (e.g.,&nbsp;inbound).</li><li>Choose the gateway you just created.</li><li>Set&nbsp;<strong>Agent Type</strong>&nbsp;to&nbsp;Cloud.</li><li>In the&nbsp;<strong>Deployment ID</strong>&nbsp;field, paste the&nbsp;<strong>Worker ID</strong>&nbsp;you copied after deployment.</li><li>Click&nbsp;<strong>Create</strong>.</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/sippart05-clip-1.gif" class="kg-image" alt="Build an AI Phone Agent for Inbound & Outbound SIP Calls" loading="lazy" width="1280" height="720"/></figure><h2 id="making-an-outbound-call">Making an Outbound Call</h2><p>To trigger an outbound call from your agent, you can make a simple API request to the VideoSDK SIP endpoint.</p><p>Use a <code>POST</code> request with your <code>VIDEOSDK_TOKEN</code> for authorization. In the body, specify the <code>gatewayId</code> (from your Outbound Gateway) and the phone number to call in <code>sipCallTo</code>.</p><pre><code class="language-bash">curl --request POST \
  --url https://api.videosdk.live/v2/sip/call \
  --header 'Authorization: YOUR_VIDEOSDK_TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{
    "gatewayId": "gw_123456789",
    "sipCallTo": "+14155550123"
  }'

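# Tip: avoid hard-coding the token by exporting it as an environment variable
# and referencing it in the header (sketch; assumes a POSIX shell):
#   export VIDEOSDK_TOKEN=your_videosdk_auth_token
#   curl --request POST --url https://api.videosdk.live/v2/sip/call \
#     --header "Authorization: $VIDEOSDK_TOKEN" \
#     --header 'Content-Type: application/json' \
#     --data '{"gatewayId": "gw_123456789", "sipCallTo": "+14155550123"}'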
</code></pre><h3 id="managing-agent-sessions-programmatically">Managing Agent Sessions Programmatically</h3><p>While routing rules automatically manage sessions, you can also control them via API for advanced use cases, like starting a session on demand or for cost management.</p><p><strong>Start a Deployment Session:</strong></p><pre><code class="language-bash">curl --request POST \
  --url https://api.videosdk.live/ai/v1/ai-deployment-sessions/start \
  --header 'Authorization: YOUR_VIDEOSDK_TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{
    "deploymentId": "&lt;your-deployment-id&gt;"
  }'

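# The start response returns a session id; pass it as "sessionId" to the
# end endpoint below to stop the session.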
</code></pre><p><strong>End a Deployment Session:</strong></p><pre><code class="language-bash">curl --request POST \
  --url https://api.videosdk.live/ai/v1/ai-deployment-sessions/end \
  --header 'Authorization: YOUR_VIDEOSDK_TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{
    "sessionId": "&lt;session-id-from-start-response&gt;"
  }'

</code></pre><h2 id="next-step">Next Step</h2><p>That's it! You've successfully built a Python AI agent, deployed it to the cloud, and connected it to the global telephone network for both inbound and outbound calls.</p><ul><li><a href="https://docs.videosdk.live/ai_agents/introduction" rel="noreferrer">AI Agent Documentation</a></li><li><a href="https://github.com/videosdk-live/agents" rel="noreferrer">Opensource AgentSDK </a></li><li><a href="https://discord.gg/f2WsNDN9S5" rel="noreferrer">Join Us on Discord</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Top AI Voice Agent Use Cases Across Industries in 2025]]></title><description><![CDATA[AI voice agent use cases in 2025 across industries, showcasing how conversational AI boosts efficiency, customer satisfaction, and business growth.]]></description><link>https://www.videosdk.live/blog/ai-voice-agent-use-cases/</link><guid isPermaLink="false">689af94a64df6f042b4d41f9</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 12 Aug 2025 08:29:28 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/08/Top-AI-Voice-Agent-Use-Cases-Across-Industries-in-2025-1.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction"><strong>Introduction</strong></h2><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Top-AI-Voice-Agent-Use-Cases-Across-Industries-in-2025-1.png" alt="Top AI Voice Agent Use Cases Across Industries in 2025"/><p>By 2025, AI is expected to handle over 95% of all customer interactions, and intelligent AI Voice Agents are at the forefront of this transformation. These are not the rigid, frustrating voice bots of the past. 
We're now in an era of conversational AI that can understand context, show empathy, and solve complex problems in real-time.</p><p>Many SaaS founders and developers recognize the potential of <a href="https://www.videosdk.live/voice-agents"><u>AI voice agents</u></a> but are often unsure about the most impactful, ROI-driven applications. They need a clear roadmap that goes beyond the hype and provides actionable implementation strategies.</p><p>This deep dive will explore the most transformative AI voice agent use cases across key industries for 2025. We will not only show you what is possible but also provide a technical blueprint for how you can build and deploy these solutions using a low-latency, scalable infrastructure like <a href="https://www.videosdk.live/"><u>VideoSDK</u></a>.</p><h2 id="what-are-ai-voice-agents-and-why-are-they-booming-in-2025"><strong>What Are AI Voice Agents, and Why Are They Booming in 2025?</strong></h2><p>An AI voice agent is a sophisticated software program designed to understand and respond to human speech, enabling it to automate conversations and perform a wide range of tasks. Unlike traditional interactive voice response (IVR) systems that rely on rigid, pre-programmed menus, AI voice agents leverage artificial intelligence, machine learning (ML), and natural language processing (NLP) to engage in natural, human-like conversations. They can be integrated into various channels, including phone systems, mobile apps, and smart devices.</p><p>The boom in AI voice agents in 2025 is driven by several factors. Advances in AI have made them more capable and affordable to develop and deploy. Businesses are also facing increasing pressure to improve customer experience, reduce operational costs, and enhance efficiency, all of which AI voice agents can help address. 
The growing consumer comfort with voice-based interactions, fueled by the widespread adoption of smart speakers and voice assistants, has further accelerated this trend.</p><p>Recent data underscores this surge. With the AI market projected to contribute $15.7 trillion to the global economy by 2030, the adoption of AI-powered solutions is no longer a niche trend but a business imperative. This explosive growth is a clear indicator that AI voice agents are set to become a standard, rather than a novel, feature in customer interactions.</p><h2 id="the-most-impactful-ai-voice-agent-use-cases-for-2025"><strong>The Most Impactful AI Voice Agent Use Cases for 2025</strong></h2><p>AI voice agents are reshaping industries like BFSI, healthcare, retail, and logistics by automating customer interactions, enhancing personalization, and improving overall efficiency. Let's take a deep dive into specific use cases, practical implementation tips, and why real-time communication solutions like VideoSDK are the go-to choice for developers.</p><h2 id="ai-voice-agents-revolutionizing-healthcare"><strong>AI Voice Agents Revolutionizing Healthcare</strong></h2><p>The healthcare industry is ripe for disruption by AI voice agents, which can alleviate administrative burdens, improve patient engagement, and enhance the overall quality of care.</p><h3 id="intelligent-appointment-scheduling-patient-intake"><strong>Intelligent Appointment Scheduling &amp; Patient Intake</strong></h3><p><strong>Problem:</strong> Medical receptionists are frequently overwhelmed by the sheer volume of calls for scheduling, rescheduling, and confirming appointments. 
This administrative bottleneck not only leads to long and frustrating wait times for patients but also contributes to staff burnout and a higher potential for human error in bookings.</p><p><strong>Solution:</strong> An AI voice agent can automate the entire appointment management lifecycle, handling bookings, reminders, and cancellations 24/7 without human intervention. By also conducting initial patient intake to gather medical history and symptoms before the visit, these agents ensure healthcare staff are better prepared and the patient's time with the doctor is maximized. For such a system to be viable in healthcare, it must be built on a foundation of enterprise-grade security and support <a href="https://www.videosdk.live/blog/hipaa-compliant-video-conferencing-api"><u>HIPAA compliance</u></a> to protect sensitive patient data—a core tenet of robust communication platforms.</p><h3 id="post-discharge-follow-up-and-chronic-care-management"><strong>Post-Discharge Follow-up and Chronic Care Management</strong></h3><p><strong>Problem:</strong> Ensuring patients adhere to post-operative care plans and effectively managing chronic conditions remotely are significant challenges for healthcare providers. Manual follow-up calls are time-consuming, difficult to scale, and often inconsistent, which can put patient recovery at risk.</p><p><strong>Solution:</strong> AI voice agents can conduct automated follow-up calls to check on a patient's recovery, provide medication reminders, and monitor symptoms for chronic diseases. These agents can ask a series of questions to assess the patient's condition and, if any warning signs are detected, can escalate the case to a healthcare professional for immediate attention. 
To enable this, a platform with reliable real-time audio streaming and the capability to integrate sentiment analysis is key, allowing healthcare teams to personalize care while offloading repetitive tasks to compliant AI agents.</p><h2 id="ai-voice-agents-for-customer-service"><strong>AI Voice Agents for Customer Service</strong></h2><p>Customer service is one of the most prominent areas where AI voice agents are making a significant impact, transforming how businesses support their customers.</p><h3 id="first-line-support-for-common-queries"><strong>First-Line Support for Common Queries</strong></h3><p><strong>Problem:</strong> Customer support teams are often bogged down by a high volume of repetitive and straightforward inquiries, such as questions about order status, business hours, or basic product information. This diverts skilled human agents from addressing more complex customer issues, leading to longer wait times across the board.</p><p><strong>Solution:</strong> AI voice agents can serve as the first line of support, handling a majority of these common queries instantly and 24/7. By automating responses to frequently asked questions, businesses can significantly reduce the workload on human agents and cut operational costs, allowing them to focus on resolving more sensitive customer issues. To effectively handle a high volume of concurrent calls, a robust and scalable infrastructure with a global network is essential for delivering clear, low-latency, and reliable customer interactions.</p><h3 id="smart-call-routing-and-escalation"><strong>Smart Call Routing and Escalation</strong></h3><p><strong>Problem:</strong> Misdirected calls are a common source of customer frustration and a major cause of inefficiency in contact centers. 
Traditional interactive voice response (IVR) systems are often rigid and confusing, leading to a poor customer experience and high call abandonment rates.</p><p><strong>Solution:</strong> AI-powered smart call routing can analyze a customer's intent in real-time and direct them to the most appropriate agent or department. If the AI agent cannot resolve an issue, it can seamlessly escalate the call to a human agent, providing them with the full context of the conversation so the customer doesn't have to repeat themselves. A platform with flexible APIs allows for seamless integration with AI and machine learning models, enabling the development of sophisticated routing logic based on real-time data and sentiment analysis.</p><h3 id="post-interaction-feedback-sentiment-analysis"><strong>Post-Interaction Feedback &amp; Sentiment Analysis</strong></h3><p><strong>Problem:</strong> Gathering customer feedback is crucial for improving service quality, but traditional survey methods like emails often suffer from low response rates. It is also difficult to gauge the emotional tone of a customer interaction without the right tools, potentially leaving valuable insights on the table.</p><p><strong>Solution:</strong> AI voice agents can automatically initiate post-interaction feedback calls or surveys, capturing insights while the experience is still fresh in the customer's mind. They can also perform real-time sentiment analysis during calls to gauge customer satisfaction and identify potential issues before they escalate. 
This requires a platform capable of capturing high-quality audio streams that can be fed into sentiment analysis engines and <a href="https://www.videosdk.live/real-time-transcription"><u>real-time transcription APIs</u></a> to provide a textual record for deeper analysis.</p><h2 id="ai-voice-agents-in-bfsi"><strong>AI Voice Agents in BFSI</strong></h2><p>In the banking, financial services, and insurance (BFSI) sector, AI voice agents are enhancing security, improving customer engagement, and automating routine processes.</p><h3 id="proactive-loan-servicing-emi-reminders"><strong>Proactive Loan Servicing &amp; EMI Reminders</strong></h3><p><strong>Problem:</strong> Manually contacting thousands of customers for <a href="https://www.wagedayadvance.co.uk/" rel="noreferrer">loan servicing</a> and Equated Monthly Installment (EMI) reminders is a resource-intensive and repetitive task for financial institutions. These manual efforts are difficult to scale and can lead to inconsistencies in communication.</p><p><strong>Solution:</strong> AI voice agents can automate these outbound calls, reminding customers of upcoming payments and providing them with self-service options to make payments or connect with a support agent. This proactive outreach improves collection rates and frees up financial advisors to handle more complex customer needs. This process relies on a secure and compliant communication channel to handle sensitive financial information and build customer trust.</p><h3 id="fraud-detection-account-alerts"><strong>Fraud Detection &amp; Account Alerts</strong></h3><p><strong>Problem:</strong> Financial fraud is a persistent and growing threat, and traditional identity verification methods over the phone can be vulnerable to social engineering. 
Protecting customer accounts requires a more dynamic and secure approach to authentication.</p><p><strong>Solution:</strong> AI voice agents can be integrated with voice biometric systems to provide a secure and convenient way to authenticate customers based on their unique voiceprint. They can also proactively send automated alerts for suspicious account activity, allowing for immediate action to secure the account. The effectiveness of this depends on a platform that supports real-time, high-quality audio streaming, which is a crucial component of a multi-layered fraud detection system.</p><h2 id="ai-voice-agents-in-e-commerce-retail"><strong>AI Voice Agents in E-Commerce &amp; Retail</strong></h2><p>For e-commerce and retail businesses, AI voice agents are creating more engaging, efficient, and personalized customer experiences.</p><h3 id="voice-powered-order-tracking-returns"><strong>Voice-Powered Order Tracking &amp; Returns</strong></h3><p><strong>Problem:</strong> "Where is my order?" is one of the most common customer inquiries, creating a significant and constant call volume for retail support centers. Similarly, managing returns can be a cumbersome process for both the customer and the business.</p><p><strong>Solution:</strong> AI voice agents can provide customers with instant, real-time updates on their order status and guide them through the return process through a natural, conversational interface. This self-service option is available 24/7, dramatically reducing the burden on human agents and improving customer satisfaction. 
Integrating this requires APIs that can connect seamlessly with e-commerce platforms and order management systems.</p><h3 id="promotional-campaigns-feedback-collection"><strong>Promotional Campaigns &amp; Feedback Collection</strong></h3><p><strong>Problem:</strong> Conducting outbound promotional campaigns to notify customers of sales or new products and collecting customer feedback over the phone requires a significant investment in time and manpower. Scaling these efforts during peak seasons like holidays is especially challenging.</p><p><strong>Solution:</strong> AI voice agents can automate these outbound calls, delivering personalized promotional messages and gathering valuable customer feedback at scale. These agents can reach thousands of customers in a short period, making campaigns more efficient and cost-effective. A scalable and reliable platform is ideal for running such large-scale outbound campaigns, with cross-platform SDKs ensuring a consistent experience across all customer devices.</p><h2 id="ai-voice-agents-for-restaurants"><strong>AI Voice Agents for Restaurants</strong></h2><p>The highly competitive restaurant industry is leveraging AI voice agents to improve operational efficiency in order taking, reservation management, and customer service.</p><h3 id="automated-order-taking-and-reservation-management"><strong>Automated Order Taking and Reservation Management</strong></h3><p><strong>Problem:</strong> During peak hours, restaurant staff are often too busy with in-house guests to answer the phone, leading to missed takeout orders and reservation opportunities. This results in lost revenue and a frustrating experience for customers trying to connect with the restaurant.</p><p><strong>Solution:</strong> An AI voice agent can handle a high volume of incoming calls simultaneously, taking complex orders and booking reservations directly into the restaurant's system without human intervention. 
This frees up staff to focus on providing excellent service to in-person customers while ensuring no call goes unanswered. Implementing a natural and intuitive voice-based system requires high-quality audio and low latency to create a seamless and efficient experience.</p><h3 id="handling-modifications-cancellations-and-delivery-coordination"><strong>Handling Modifications, Cancellations, and Delivery Coordination</strong></h3><p><strong>Problem:</strong> Managing last-minute changes to orders, processing cancellations, and coordinating with delivery drivers adds a significant layer of complexity to daily restaurant operations. These real-time communications are critical but can easily overwhelm staff during busy periods.</p><p><strong>Solution:</strong> AI voice agents can adeptly handle these real-time requests, automatically updating the restaurant's point-of-sale system and communicating with delivery personnel without manual intervention. This ensures that all parties—the customer, the restaurant, and the delivery driver—are always in sync. The real-time nature of a powerful communication platform is essential for the fast-paced restaurant environment, ensuring that all updates are transmitted instantly and reliably.</p><h3 id="customer-feedback-and-loyalty-campaigns"><strong>Customer Feedback and Loyalty Campaigns</strong></h3><p><strong>Problem:</strong> Gathering feedback from diners to improve service and keeping them engaged with loyalty programs can be a difficult and time-consuming task for busy restaurant owners. As a result, many valuable customer insights are lost, and loyalty-building opportunities are missed.</p><p><strong>Solution:</strong> AI voice agents can be programmed to conduct automated follow-up calls to gather feedback on the dining experience and inform customers about loyalty rewards and special offers. This consistent outreach helps build stronger customer relationships and provides a steady stream of data for service improvement. 
A scalable platform allows restaurants to easily implement these automated campaigns, helping them to improve their offerings and drive repeat business.</p><h2 id="ai-voice-agents-in-insurance"><strong>AI Voice Agents in Insurance</strong></h2><p>The insurance industry is using AI voice agents to streamline claims processing, improve customer engagement, and combat fraud.</p><h3 id="claims-processing-and-virtual-fnol-first-notice-of-loss"><strong>Claims Processing and Virtual FNOL (First Notice of Loss)</strong></h3><p><strong>Problem:</strong> The initial reporting of a claim, known as the First Notice of Loss (FNOL), is often a manual and time-consuming process for both the customer and the insurance company. This can lead to delays and inaccuracies at the most critical stage of the claims journey.</p><p><strong>Solution:</strong> AI voice agents can guide customers through the FNOL process 24/7, conversationally collecting all necessary information and automatically initiating the claim in the system. For more complex claims, the AI agent can seamlessly transition the call to a live video session with a human adjuster. The ability to integrate high-quality video APIs for virtual inspections and real-time assessments can accelerate the entire claims process and improve customer satisfaction.</p><h3 id="policy-renewals-and-premium-reminders"><strong>Policy Renewals and Premium Reminders</strong></h3><p><strong>Problem:</strong> Manually contacting every customer for policy renewals and premium reminders is a significant operational overhead for insurance companies. This repetitive work is not only costly but also prone to human error, potentially leading to missed renewals and lapsed policies.</p><p><strong>Solution:</strong> AI voice agents can automate outbound renewal and reminder calls, ensuring timely and consistent communication with policyholders. This proactive engagement can significantly improve policy retention rates and ensure on-time payments. 
The reliability and scalability of a communications platform are ideal for automating these critical customer touchpoints, ensuring that no renewal opportunity is missed.</p><h3 id="fraud-detection-and-identity-verification"><strong>Fraud Detection and Identity Verification</strong></h3><p><strong>Problem:</strong> Insurance fraud costs the industry billions of dollars annually, and verifying the identity of claimants over the phone can be a weak point in the security chain. Detecting fraudulent claims requires sophisticated tools that can identify subtle red flags.</p><p><strong>Solution:</strong> AI voice agents can use advanced voice biometrics to securely authenticate policyholders, adding a strong layer of security to the verification process. They can also be trained to analyze speech patterns and flag suspicious conversations for review by a specialized fraud detection team.&nbsp; Providing a secure and crystal-clear channel for communication is integral to an effective fraud detection and prevention strategy.</p><h2 id="implementing-ai-voice-agents-with-videosdk"><strong>Implementing AI Voice Agents with VideoSDK</strong></h2><p>Understanding the potential of AI voice agents is the first step; building them is the next. A successful implementation requires orchestrating several complex technologies to create a fluid, human-like conversational experience. This is where a dedicated SDK designed for real-time communication becomes invaluable.</p><h3 id="getting-started-with-an-ai-voice-agent-sdk"><strong>Getting Started with an AI Voice Agent SDK</strong></h3><p>To bring the use cases discussed above to life, developers need a streamlined way to integrate AI capabilities into a communication framework. An AI Voice Agent SDK, such as the one offered by VideoSDK, provides pre-built functionalities that handle the underlying complexities of real-time communication and AI integration. 
This allows developers to focus on crafting the agent's logic and personality rather than building the foundational infrastructure from scratch. The core of such an SDK revolves around four key components working in perfect harmony.</p><h4 id="overview-of-core-components"><strong>Overview of Core Components</strong></h4><ol><li><strong>Real-Time Streaming:</strong> This is the backbone of any live conversation. The SDK must manage the low-latency, bidirectional streaming of audio data between the user and the AI agent, ensuring the conversation flows naturally without awkward delays or interruptions.</li><li><strong>Speech-to-Text (STT):</strong> To understand the user, the AI agent needs to convert their spoken words into text. The SDK integrates with powerful STT engines that transcribe the user's audio in real-time, providing an accurate textual input for the AI model to process.</li><li><strong>Text-to-Speech (TTS):</strong> Once the AI has formulated a response, it needs to be converted back into natural-sounding speech. The SDK uses advanced TTS engines to generate high-quality, human-like audio, which is then streamed back to the user. The quality of the TTS is critical for user adoption and a positive experience.</li><li><strong>Agent Orchestration:</strong> This is the brain of the operation. The SDK orchestrates the entire workflow, managing the real-time flow of data between the STT service, your business logic or large language model (LLM), and the TTS service. This ensures that the agent can listen, think, and speak in a seamless, uninterrupted loop.</li></ol><h4 id="supported-integrations-for-maximum-flexibility"><strong>Supported Integrations for Maximum Flexibility</strong></h4><p>No single AI provider excels at everything. A flexible platform should allow developers to choose the best tools for their specific needs. VideoSDK's AI Voice Agent framework is designed to be plug-and-play, supporting integrations with leading AI services. 
Developers can mix and match providers for different components, including:</p><ul><li><strong>Speech-to-Text:</strong> Integrations with powerful engines like <strong>Google STT</strong> and <strong>OpenAI's Whisper</strong> ensure high-accuracy transcriptions across various languages and accents.</li><li><strong>Text-to-Speech:</strong> To create lifelike and emotionally resonant voices, the platform supports leading TTS providers like <strong>ElevenLabs</strong> and services from <strong>OpenAI</strong>.</li></ul><p>This "bring your own AI" model gives developers the freedom to leverage the best-in-class technology and future-proof their applications against a rapidly evolving AI landscape.</p><h4 id="sippstn-integration-for-telephony-grade-quality"><strong>SIP/PSTN Integration for Telephony-Grade Quality</strong></h4><p>While many AI interactions happen within apps, the ability to connect with traditional phone networks is crucial for countless business use cases, from customer service call centers to automated appointment reminders. The integration of Session Initiation Protocol (SIP) and Public Switched Telephone Network (PSTN) gateways is a vital feature. This allows the AI voice agent to make and receive calls from standard phone numbers, extending its reach beyond the digital-only world. VideoSDK's support for SIP/PSTN ensures that businesses can deploy AI agents into their existing telephony workflows, providing a seamless, telephony-grade quality experience for every user, regardless of how they connect.</p><h3 id="key-steps-to-build-your-own-voice-agent"><strong>Key Steps to Build Your Own Voice Agent</strong></h3><p>Here’s a step-by-step guide to creating your own AI voice agent using VideoSDK:</p><h4 id="step-1-choose-the-voice-model-tts-stt"><strong>Step 1: Choose the Voice Model (TTS + STT)</strong></h4><p>The first step is to select the text-to-speech and speech-to-text models that best suit your application. 
Consider factors like language support, accuracy, and the desired vocal characteristics of your agent.</p><p>Select providers based on:</p><ul><li><strong>Latency requirements</strong> (e.g., &lt;300ms for real-time calls)</li><li><strong>Language coverage</strong> (multi-lingual support for global deployments)</li><li><strong>Voice customization</strong> (brand-aligned tone &amp; gender)</li></ul><p><strong>Here is an example of the </strong><a href="https://docs.videosdk.live/ai_agents/plugins/tts/openai#example-usage"><strong><u>OpenAI TTS model</u></strong></a><strong>:</strong></p><pre><code class="language-python">from videosdk.plugins.openai import OpenAITTS
from videosdk.agents import CascadingPipeline

# Initialize the OpenAI TTS model
tts = OpenAITTS(
    # Omit api_key when OPENAI_API_KEY is set in your .env file
    api_key="your-openai-api-key",
    model="tts-1",
    voice="alloy",
    speed=1.0,
    response_format="pcm"
)
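# "alloy" is one of OpenAI's built-in voices (others include echo, fable,
# onyx, nova, and shimmer); speed accepts values between 0.25 and 4.0.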

# Add TTS to the cascading pipeline
pipeline = CascadingPipeline(tts=tts)</code></pre><p>Alternatively, you can use Google Gemini or AWS Nova Sonic.</p><p><strong>Here is an example of the </strong><a href="https://docs.videosdk.live/ai_agents/plugins/stt/openai#example-usage"><strong><u>OpenAI STT model</u></strong></a><strong>:</strong></p><pre><code class="language-python">from videosdk.plugins.openai import OpenAISTT
from videosdk.agents import CascadingPipeline

# Initialize the OpenAI STT model
stt = OpenAISTT(
    # Omit api_key when OPENAI_API_KEY is set in your .env file
    api_key="your-openai-api-key",
    model="whisper-1",
    language="en",
    prompt="Transcribe this audio with proper punctuation and formatting."
)

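# STT can be chained with an LLM and TTS in a single pipeline. A sketch,
# assuming an OpenAI LLM plugin is available alongside the plugins shown here:
#   from videosdk.plugins.openai import OpenAILLM
#   llm = OpenAILLM(model="gpt-4o")
#   pipeline = CascadingPipeline(stt=stt, llm=llm, tts=tts)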
# Add STT to the cascading pipeline
pipeline = CascadingPipeline(stt=stt)</code></pre><h4 id="step-2-configure-videosdk-for-real-time-transport"><strong>Step 2: Configure VideoSDK for real-time transport</strong></h4><p>Next, you'll need to set up your VideoSDK environment to handle the real-time transport of audio data. This involves configuring your authentication tokens and meeting IDs so the AI agent can join a communication session. Set up a <code>.env</code> file to securely store your API keys and tokens, as shown in the <a href="https://docs.videosdk.live/ai_agents/voice-agent-quick-start"><u>voice agent quick start</u></a>:</p><pre><code class="language-bash">VIDEOSDK_AUTH_TOKEN=your_videosdk_auth_token
OPENAI_API_KEY=your_openai_api_key</code></pre><p>If you are using Google Gemini or AWS Nova Sonic, you will need to provide their respective API keys instead.</p><h4 id="step-3-create-prompt-based-flows"><strong>Step 3: Create prompt-based flows</strong></h4><p>Define the conversational logic of your AI agent by creating prompt-based flows. This involves scripting the agent's initial greetings, questions, and responses based on potential user inputs. You can create a custom agent by inheriting from the base Agent class.</p><pre><code class="language-python">from videosdk.agents import Agent, AgentSession, WorkerJob, RoomOptions, JobContext
import asyncio

class VoiceAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are a helpful voice assistant that can answer questions and help with tasks."
        )

    async def on_enter(self) -&gt; None:
        """Called when the agent first joins the meeting"""
        await self.session.say("Hi there! How can I help you today?")

    async def on_exit(self) -&gt; None:
        """Called when the agent exits the meeting"""
        await self.session.say("Goodbye!")</code></pre><h4 id="step-4-add-fallback-and-escalation-logic"><strong>Step 4: Add fallback and escalation logic</strong></h4><p>It's crucial to account for scenarios where the AI agent may not understand a user's request or when an error occurs. Implementing fallback logic to provide a helpful response and, if necessary, a mechanism to escalate the conversation to a human agent is a best practice.</p><p>The VideoSDK AI Agent SDK allows you to handle these situations by overriding specific methods in your custom agent class.</p><h3 id="handling-unrecognized-intents-fallback"><strong>Handling Unrecognized Intents (Fallback)</strong></h3><p>If the Large Language Model (LLM) cannot determine the user's intent or if the user's speech is unclear, you can define a fallback behavior. In this example, if the agent doesn't understand, it will ask the user to rephrase their request.</p><pre><code class="language-python">from videosdk.agents import Agent, AgentSession
from videosdk.llm import LLM
from videosdk.stt import STT
from videosdk.tts import TTS

class VoiceAgent(Agent):
    def __init__(
        self,
        llm: LLM,
        stt: STT,
        tts: TTS,
    ):
        super().__init__(
            llm=llm,
            stt=stt,
            tts=tts,
            instructions="You are a helpful voice assistant that can answer questions and help with tasks."
        )

    async def on_enter(self) -&gt; None:
        """Called when the agent first joins the meeting"""
        await self.session.say("Hi there! How can I help you today?")

    async def on_fallback(self) -&gt; None:
        """Called when the agent cannot understand the user's intent."""
        await self.session.say("I'm sorry, I didn't quite catch that. Could you please rephrase?")

    async def on_exit(self) -&gt; None:
        """Called when the agent exits the meeting"""
        await self.session.say("Goodbye!")</code></pre><h3 id="handling-errors-and-escalation"><strong>Handling Errors and Escalation</strong></h3><p>For more critical errors, or if the user explicitly asks to speak to a human, you can implement an escalation path. This could involve triggering a notification, transferring the call, or providing the user with contact information for human support.</p><p>The on_error method can be used to catch exceptions that occur during the agent's operation.</p><pre><code class="language-python">import logging

# ... (previous imports)

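# Hypothetical helper (not part of the VideoSDK SDK): notify an external
# human-handoff service when the agent needs to escalate. The endpoint URL
# and payload shape below are placeholder assumptions for illustration.
async def notify_human_handoff(reason: str) -> None:
    import asyncio
    import json
    import urllib.request

    request = urllib.request.Request(
        "https://example.com/handoff",  # placeholder endpoint
        data=json.dumps({"reason": reason}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Run the blocking HTTP call off the event loop so the agent stays responsive
    await asyncio.to_thread(urllib.request.urlopen, request)
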
class VoiceAgent(Agent):
    # ... (__init__ and on_enter methods)

    async def on_fallback(self) -&gt; None:
        """Called when the agent cannot understand the user's intent."""
        await self.session.say("I'm sorry, I didn't quite catch that. Could you please rephrase?")

    async def on_error(self, error: Exception) -&gt; None:
        """Called when an error occurs."""
        logging.error(f"An error occurred: {error}")
        # Simple escalation: inform the user and provide a support email.
        await self.session.say("It seems I've run into a technical issue. Please contact our support team at support@example.com for assistance.")
        # In a more advanced scenario, you could trigger an API call
        # to a human handoff service here.

    async def on_exit(self) -&gt; None:
        """Called when the agent exits the meeting"""
        await self.session.say("Goodbye!")</code></pre><p>In a real-world application, the on_error or a custom function tool could be used to initiate a more sophisticated escalation process, such as:</p><ul><li><strong>Human Handoff:</strong> Triggering a workflow in a CRM or <a href="https://hiverhq.com/blog/best-free-helpdesk-ticketing-software" rel="noreferrer">helpdesk system</a> to alert a human agent to join the call.</li><li><strong>Ticket Creation:</strong> Automatically creating a support ticket with the conversation transcript.</li><li><strong>SIP Transfer:</strong> If using SIP integration, transferring the call to a pre-defined human agent's phone number.</li></ul><p>By implementing these fallback and escalation mechanisms, you ensure that your AI voice agent provides a reliable and helpful experience, even when faced with ambiguity or errors.</p><h3 id="step-5-deploy-to-production"><strong>Step 5: Deploy to production</strong></h3><p>Once you have thoroughly tested your AI voice agent, you can deploy it. The VideoSDK CLI allows you to run your agent locally for testing and then deploy it to the VideoSDK Cloud.</p><pre><code class="language-python"># Run the agent locally for testing
videosdk run

# Deploy the agent to the VideoSDK Cloud
videosdk deploy</code></pre><h2 id="best-practices-for-scale-and-accuracy"><strong>Best Practices for Scale and Accuracy</strong></h2><p>To ensure your AI voice agent performs optimally and delivers a high-quality user experience as your user base grows, consider these best practices:</p><h3 id="use-context-aware-agents-via-mcp-or-a2a-protocols"><strong>Use context-aware agents (via MCP or A2A Protocols)</strong></h3><p>A truly intelligent agent understands the flow of conversation. Instead of treating each user query as an isolated event, a context-aware agent maintains a memory of the dialogue. This allows for more natural and efficient interactions.</p><p>VideoSDK facilitates this through <a href="https://www.videosdk.live/blog/ai-voice-agent-a2a-mcp"><strong><u>Agent-to-Agent (A2A) communication protocols</u></strong></a>. For example, a general-purpose AI voice agent could handle initial user queries and then, upon identifying a specialized need (like a technical support issue), can seamlessly forward the query and the conversation history to a specialist agent. This ensures the user doesn't have to repeat themselves, creating a smoother experience.</p><h3 id="cache-common-responses"><strong>Cache common responses</strong></h3><p>Many businesses find that a significant portion of their customer inquiries are repetitive. For these frequently asked questions (e.g., "What are your business hours?" 
or "How do I reset my password?"), caching the generated audio response can significantly improve performance.</p><p>By storing the pre-rendered TTS audio for common answers, you can:</p><ul><li><strong>Reduce Latency:</strong> Deliver answers almost instantaneously, as you're bypassing the real-time TTS generation step.</li><li><strong>Lower Costs:</strong> Minimize the number of API calls to TTS services, leading to direct cost savings, especially at scale.</li><li><strong>Increase Consistency:</strong> Ensure the answer to a common question is always delivered in the same clear and consistent manner.</li></ul><h3 id="personalize-with-user-metadata"><strong>Personalize with user metadata</strong></h3><p>Personalization is key to transforming a generic interaction into a memorable customer experience. By leveraging user metadata—such as their name, past purchase history, or support ticket status—your AI voice agent can provide tailored and empathetic responses.</p><p>For instance, an e-commerce voice agent could greet a returning customer with:</p><p><em>"Welcome back, [Customer Name]! I see your recent order for the [Product Name] has been shipped. Are you calling about that, or is there something else I can help you with today?"</em></p><p>This level of personalization, achievable by integrating your AI agent with your CRM or user database, makes the interaction feel more human and significantly improves customer satisfaction.</p><h3 id="use-multi-turn-dialogs-via-llm"><strong>Use multi-turn dialogs via LLM</strong></h3><p>Early voice bots were often limited to simple, one-off commands. Modern AI voice agents, powered by sophisticated Large Language Models (LLMs), excel at handling <strong>multi-turn dialogues</strong>. This means the agent can manage complex, evolving conversations where the user's intent might be clarified over several exchanges.</p><p>For example, a user might start by saying, "I need a flight to New York." 
The agent can then ask clarifying questions like, "Which airport in New York?", "What date would you like to travel?", and "Are you looking for a one-way or round-trip ticket?" The LLM's ability to maintain context throughout this back-and-forth is what makes a truly conversational and useful AI possible. VideoSDK’s architecture is designed to support these stateful, long-running conversations seamlessly.</p><h2 id="conclusion"><strong>Conclusion</strong></h2><p>The proliferation of AI voice agents across industries in 2025 is a clear indicator of a fundamental shift in how we interact with technology. From making healthcare more accessible to providing 24/7 customer support, voice is becoming the new frontier of user experience. As these systems grow more sophisticated, they will unlock unprecedented opportunities for businesses to enhance customer engagement, boost operational efficiency, and drive growth.</p><p>For developers, marketers, and founders looking to stay ahead of the curve, the time to embrace AI-powered voice solutions is now. With its robust, scalable infrastructure, flexible integrations with top AI providers, and a developer-friendly SDK, VideoSDK provides the ultimate platform to build the next generation of intelligent voice agents. 
Whether you're developing for iOS, Android, or the web, our comprehensive tools empower you to bring your most ambitious voice agent projects to life.</p>]]></content:encoded></item><item><title><![CDATA[How to Build an AI Voice Agent in Minutes in 2025]]></title><description><![CDATA[Learn how to build an AI voice agent in minutes in 2025 using top conversational AI tools, voice automation, and real-time speech technologies.]]></description><link>https://www.videosdk.live/blog/build-ai-voice-agent</link><guid isPermaLink="false">6899f0a564df6f042b4d41b5</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 11 Aug 2025 13:49:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/08/How-to-Build-an-AI-Voice-Agent-in-Minutes-in-2025.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/08/How-to-Build-an-AI-Voice-Agent-in-Minutes-in-2025.png" alt="How to Build an AI Voice Agent in Minutes in 2025"/><p>Imagine a world where your customers can interact with an AI-powered agent that understands their needs, responds instantly, and provides a seamless experience across channels. This is the reality of AI Voice Agents in 2025.</p><p>In today’s fast-paced digital landscape, businesses are striving for ways to automate customer service and improve user experience. With AI-powered solutions like voice agents, SaaS companies can offer cutting-edge support that scales effortlessly. These intelligent systems are no longer a futuristic concept but a present-day reality, transforming how businesses engage with their customers.</p><p>In this blog, we’ll guide you through building an <a href="https://www.videosdk.live/voice-agents"><u>AI Voice Agent</u></a> in minutes using VideoSDK’s powerful tools and features, focusing on ease of integration, scalability, and customization. 
By the end, you'll understand the core components and be ready to deploy a sophisticated voice agent that can handle customer interactions, schedule appointments, and much more.</p><h2 id="what-are-ai-voice-agents-and-why-build-one-in-2025"><strong>What Are AI Voice Agents, and Why Build One in 2025?</strong></h2><p>An AI voice agent is an advanced software program designed to understand and respond to human speech in a natural, conversational manner. Unlike traditional, rigid IVR systems that rely on keypad inputs, AI voice agents leverage technologies like natural language processing to engage in human-like dialogue, automating both inbound and outbound calls without direct human oversight.</p><p>While off-the-shelf voice agent solutions exist, building your own provides unparalleled advantages in customization, brand identity, and data control. A custom voice agent allows you to create a unique and consistent auditory presence that aligns with your brand's personality, a critical differentiator in a crowded market. 
This ensures that every customer interaction, from a simple query to a complex transaction, reinforces your brand identity.</p><p><strong>The advantages of integrating a bespoke AI voice agent into your operations are substantial:</strong></p><ul><li><strong>Enhanced Customer Experience:</strong> Provide instant, 24/7 support and eliminate frustrating wait times, leading to higher customer satisfaction.</li><li><strong>Increased Operational Efficiency:</strong> Automate routine tasks like appointment scheduling and order updates, freeing up human agents to handle more complex issues.</li><li><strong>Cost Reduction:</strong> Scale your customer communication capabilities to handle high call volumes without a proportional increase in operational costs.</li><li><strong>Scalability and Consistency:</strong> Deliver standardized, high-quality responses that align with your brand's tone, ensuring a consistent experience for every customer.</li><li><strong>Data-Driven Insights:</strong> Gain valuable insights from customer interactions to further refine your products and services.</li></ul><h2 id="getting-started-with-your-ai-voice-agent-the-essential-toolkit"><strong>Getting Started with Your AI Voice Agent: The Essential Toolkit</strong></h2><p>Building a powerful AI voice agent involves the seamless integration of several core technologies. Here’s a breakdown of the essential components and their roles in creating a conversational AI experience.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Untitled--10-.png" class="kg-image" alt="How to Build an AI Voice Agent in Minutes in 2025" loading="lazy" width="2067" height="1311"/></figure><h3 id="automatic-speech-recognition-asr"><strong>Automatic Speech Recognition (ASR)</strong></h3><p>Automatic Speech Recognition (ASR), also known as speech-to-text, is the foundational technology that converts spoken language into written text. 
It is the "ears" of your AI agent, enabling it to listen to and understand human speech in real-time.&nbsp;</p><p>The accuracy and speed of your ASR system are critical for a smooth conversational flow. A high-quality ASR engine ensures that the user's words are transcribed precisely, which is the first and most crucial step for the agent to comprehend the request. Modern ASR can handle various languages, accents, and even operate in noisy environments, making it a robust solution for global businesses. For any developer looking to build a voice-driven application, integrating a powerful real-time transcription API is non-negotiable.</p><h3 id="natural-language-processing-nlp-large-language-models-llms"><strong>Natural Language Processing (NLP) &amp; Large Language Models (LLMs)</strong></h3><p>Natural Language Processing (NLP) is a field of AI that gives machines the ability to understand, interpret, and generate human language. Large Language Models (LLMs) are an advanced application of NLP, trained on massive datasets to understand context, nuance, and intent, enabling them to generate coherent and relevant responses.</p><p>NLP and LLMs are the "brain" of your AI voice agent. While ASR transcribes <em>what</em> is said, NLP/LLMs figure out <em>what is meant</em>.&nbsp; They analyze the transcribed text to identify user intent, extract key information, and formulate a contextually appropriate response. This combination allows for dynamic, human-like conversations that go far beyond scripted, rule-based interactions, leading to more satisfying and effective user engagement.</p><h3 id="text-to-speech-tts"><strong>Text-to-Speech (TTS)</strong></h3><p>Text-to-Speech (TTS) technology converts written text back into natural-sounding human speech. This is the "voice" of your AI agent, giving it the ability to communicate its responses audibly.</p><p>The quality of the TTS voice is paramount for a positive user experience. 
A robotic, unnatural voice can be off-putting, while a clear, expressive, and human-like voice builds trust and keeps users engaged. Modern TTS systems allow for customization of voice, tone, gender, and accent, enabling you to create a voice personality that perfectly reflects your brand. This consistency across touchpoints strengthens brand recall and fosters a more personal connection with your audience.</p><h3 id="real-time-communication-webrtc"><strong>Real-Time Communication (WebRTC)</strong></h3><p>WebRTC (Web Real-Time Communication) is an open-source framework that enables real-time voice, video, and data communication directly between web browsers and devices without requiring plugins.</p><p>WebRTC is the "nervous system" that transmits the audio data between the user and the AI agent instantly. It provides the low-latency, secure, and reliable infrastructure necessary for seamless, real-time conversations. Whether you are building for the web, iOS, or Android, leveraging a robust <a href="https://www.videosdk.live/audio-video-conferencing"><u>Voice Calling API SDK</u></a> powered by WebRTC is essential. VideoSDK offers highly reliable, cross-platform SDKs that allow you to deploy AI voice agents and other real-time communication features with just a few lines of code, ensuring a scalable and secure implementation anywhere in the world. By combining these powerful technologies on a flexible platform like <a href="https://www.videosdk.live/">VideoSDK</a>, you can build and deploy a sophisticated AI voice agent that not only meets but exceeds customer expectations in 2025.</p><h2 id="how-to-build-an-ai-voice-agent-in-6-simple-steps-with-videosdk"><strong>How to Build an AI Voice Agent in 6 Simple Steps with VideoSDK</strong></h2><p>Building a state-of-the-art AI voice agent is no longer a monumental task reserved for large corporations. With VideoSDK's open-source AI Agent SDK, you can create and deploy a powerful, conversational AI agent in minutes. 
Our Python-based framework is designed for flexibility, allowing you to either use integrated real-time pipelines or build custom agents by combining your preferred STT, LLM, and TTS providers.</p><p>Here’s a step-by-step guide to bringing your voice agent to life.</p><h3 id="step-1-set-up-your-development-environment"><strong>Step 1: Set Up Your Development Environment</strong></h3><p>First, ensure you have the necessary prerequisites in place. Your backend will host the AI agent, while a client application will connect users to the agent in a VideoSDK meeting room.</p><p><strong>Prerequisites:</strong></p><ul><li>Python 3.12 or higher.</li><li>A <strong>VideoSDK Auth Token</strong>. You can generate these from the <a href="https://app.videosdk.live/">VideoSDK dashboard</a>.</li><li>API keys for your chosen third-party services (e.g., OpenAI for LLM, Deepgram for STT, ElevenLabs for TTS).</li></ul><p><strong>Installation:</strong>Create a Python virtual environment and install the VideoSDK Agents package along with any provider plugins you need.</p><pre><code class="language-python"># Create and activate virtual environment
python3.12 -m venv venv
source venv/bin/activate

# Install the core VideoSDK AI Agent package
pip install videosdk-agents

# Example: Install plugins for OpenAI, Google, and AWS
pip install "videosdk-plugins-openai"
pip install "videosdk-plugins-google"
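# The ElevenLabs TTS plugin is used later in this guide; the package name below
# is an assumption following the same naming pattern as the plugins above
pip install "videosdk-plugins-elevenlabs"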
pip install "videosdk-plugins-aws"</code></pre><p>Next, create a .env file in your project's root to securely store your API keys and tokens.</p><pre><code class="language-python">VIDEOSDK_AUTH_TOKEN=YOUR_VIDEOSDK_AUTH_TOKEN
OPENAI_API_KEY=YOUR_OPENAI_API_KEY
ELEVENLABS_API_KEY=YOUR_ELEVENLABS_API_KEY</code></pre><h3 id="step-2-integrating-webrtc-for-real-time-communication"><strong>Step 2: Integrating WebRTC for Real-Time Communication</strong></h3><p>WebRTC is the cornerstone of real-time voice experiences, enabling ultra-low-latency audio streaming directly between your users and the AI agent. While traditional methods like WebSockets can introduce noticeable lag, especially on unstable networks, WebRTC is specifically designed for reliable, real-time voice in any environment.</p><p>VideoSDK abstracts the complexities of WebRTC. The AI Agent SDK handles all the underlying communication, allowing your agent to join a meeting room just like any other participant. Your backend simply needs to initiate the session. The client-side application can be built using any of VideoSDK's SDKs, including React, React Native, Android, and iOS.</p><p><strong>Here’s how your Python server instructs the agent to join a meeting:</strong></p><pre><code class="language-python"># main.py
from fastapi import FastAPI
from dotenv import load_dotenv
import os
from your_agent_file import VoiceAgent, create_agent_session # Your custom agent logic

load_dotenv()

app = FastAPI()
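
# Hypothetical helper: create a VideoSDK room to obtain a meeting_id for the
# agent to join. The REST endpoint and response field below are assumptions -
# check the VideoSDK Room API docs for the authoritative contract.
import json
import urllib.request

def create_meeting(auth_token: str) -> str:
    request = urllib.request.Request(
        "https://api.videosdk.live/v2/rooms",  # assumed room-creation endpoint
        method="POST",
        headers={"Authorization": auth_token},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["roomId"]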

@app.post("/join-agent")
async def join_agent(request_data: dict):
    meeting_id = request_data.get("meeting_id")
    auth_token = os.getenv("VIDEOSDK_AUTH_TOKEN")

    if not meeting_id or not auth_token:
        return {"error": "Meeting ID and auth token are required."}

    # Create and start the agent session
    session = await create_agent_session(meeting_id, auth_token)
    await session.start()

    return {"message": "Agent is joining the meeting."}</code></pre><h3 id="step-3-configuring-speech-to-text-and-nlp-capabilities"><strong>Step 3: Configuring Speech-to-Text and NLP Capabilities</strong></h3><p>Accurate and fast transcription is non-negotiable for a voice agent to understand users correctly. Similarly, a high-quality, natural-sounding voice is essential for user engagement. VideoSDK’s CascadingPipeline offers a modular approach, giving you complete control to mix and match the best STT and TTS providers for your needs. You can choose from providers like Deepgram, Google, and more.</p><p><strong>Here is how you can configure a pipeline using Google for STT and ElevenLabs for TTS:</strong></p><pre><code class="language-python"># agent_logic.py
from videosdk.agents import Agent, AgentSession, CascadingPipeline
from videosdk.plugins.google import GoogleSTT
from videosdk.plugins.elevenlabs import ElevenLabsTTS
from videosdk.plugins.openai import OpenAILLM

# Define your agent's personality and tools
class VoiceAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are a friendly and helpful assistant."
        )

# Configure the pipeline with your chosen STT and TTS providers
async def create_agent_session(meeting_id: str, auth_token: str):
    stt_config = {"model": "long", "language_code": "en-US"}
    stt_provider = GoogleSTT(config=stt_config)

    # Configure the OpenAI LLM
    llm_provider = OpenAILLM(
        model="gpt-4o",
        # If OPENAI_API_KEY is set in your .env file, omit this api_key parameter
        api_key="your-openai-api-key",
        temperature=0.7,
        tool_choice="auto",
        max_completion_tokens=1000
    )

    tts_config = {"model_id": "eleven_multilingual_v2"}
    tts_provider = ElevenLabsTTS(config=tts_config)
    
    pipeline = CascadingPipeline(
        stt=stt_provider,
        tts=tts_provider,
        llm=llm_provider
    )

    agent = VoiceAgent()
    session = AgentSession(agent=agent, pipeline=pipeline, meeting_id=meeting_id, auth_token=auth_token)
    return session</code></pre><h3 id="step-4-adding-ai-for-contextual-conversations"><strong>Step 4: Adding AI for Contextual Conversations</strong></h3><p>The Large Language Model (LLM) is the brain of your agent. This is where you integrate models like GPT-4 to process the transcribed text and generate intelligent, context-aware responses. VideoSDK’s framework seamlessly connects your chosen LLM into the conversation flow.</p><p><strong>Let's complete the pipeline from the previous step by adding OpenAI's GPT-4 as the LLM.</strong></p><pre><code class="language-python"># agent_logic.py (continued from Step 3)
from videosdk.plugins.openai import OpenAILLM

# ... (VoiceAgent class and other imports) ...

# Configure the pipeline with STT, TTS, and now LLM
async def create_agent_session(meeting_id: str, auth_token: str):
    # ... (STT and TTS provider setup) ...

    llm_config = {"model": "gpt-4"}
    llm_provider = OpenAILLM(config=llm_config)
    
    pipeline = CascadingPipeline(
        stt=stt_provider,
        llm=llm_provider,
        tts=tts_provider,
    )

    agent = VoiceAgent()
    session = AgentSession(agent=agent, pipeline=pipeline, meeting_id=meeting_id, auth_token=auth_token)

    # Add a greeting when the agent joins
    @agent.on_enter
    async def on_enter(session):
        await session.say("Hello! I'm your AI assistant. How can I help you today?")

    return session</code></pre><p>With this setup, the audio pipeline is complete: VideoSDK captures user audio, Google transcribes it, OpenAI processes the text and generates a response, and ElevenLabs converts that text back into speech for the user to hear.</p><h3 id="step-5-deploy-and-test-your-ai-voice-agent"><strong>Step 5: Deploy and Test Your AI Voice Agent</strong></h3><p>Your AI agent's backend logic, housed in a web server like FastAPI, can be deployed to any modern cloud service. Popular choices include serverless platforms like Vercel or AWS Lambda for scalability, or containerized applications on services like AWS Fargate or Google Cloud Run.</p><p><strong>Create a videosdk.yaml file with the following structure:</strong></p><pre><code class="language-python">version: "1.0"
deployment:
  id: your_ai_deployment_id
  entry:
    path: entry_point_for_deployment
env: # Optional to run your agent locally
  path: "./.env"
secrets:
  VIDEOSDK_AUTH_TOKEN: your_auth_token
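  # Add any provider keys your agent needs here as well, e.g. (name assumed):
  # OPENAI_API_KEY: your_openai_api_key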
deploy:
  cloud: true</code></pre><p><strong>Deploy your voice agent:</strong></p><p>1. Run the agent locally: </p><pre><code class="language-python">videosdk run</code></pre><p>2. Deploy the agent to the VideoSDK Cloud:</p><pre><code class="language-python">videosdk deploy</code></pre><p><strong>Testing focuses on two key metrics:</strong></p><ul><li><strong>Latency:</strong> Measure the time from when a user finishes speaking to when they hear the agent's response. An ideal response time is under one second to feel natural. VideoSDK's infrastructure is optimized for sub-80ms latency, giving you a strong foundation.</li><li><strong>Accuracy:</strong> Review call transcripts and agent responses to check for errors in transcription or intent recognition. Use this data to refine your agent's instructions (prompts) and improve its conversational abilities.</li></ul><h3 id="step-6-optimize-for-scale-and-performance"><strong>Step 6: Optimize for Scale and Performance</strong></h3><p>As your user base grows, you'll need to ensure your voice agent can handle high traffic without degrading performance.</p><p><strong>Best Practices for Optimization:</strong></p><ul><li><strong>Load Balancing:</strong> Deploy your backend server across multiple instances and use a load balancer to distribute incoming requests. This prevents any single server from becoming a bottleneck.</li><li><strong>Efficient Prompt Engineering:</strong> Optimize your LLM prompts for speed and clarity. Well-structured prompts reduce the computation required by the model, leading to faster response generation.</li><li><strong>Asynchronous Processing:</strong> Leverage asynchronous tasks for non-blocking operations. For instance, if your agent needs to fetch data from an external API, do it asynchronously so it can still handle other conversational turns. 
VideoSDK's Python SDK is built with async support to facilitate this.</li><li><strong>Monitor Performance Metrics:</strong> Continuously track metrics like first-call resolution, average response time, and user sentiment. Use these insights to iteratively improve your agent's logic and performance.</li></ul><h2 id="best-practices-for-an-exceptional-ai-voice-agent-experience"><strong>Best Practices for an Exceptional AI Voice Agent Experience</strong></h2><p>Building an agent that can talk is one thing; creating an experience that feels natural and intelligent is another. Here are the core principles to elevate your AI voice agent from functional to exceptional.</p><h3 id="low-latency-is-key"><strong>Low Latency is Key</strong></h3><p>In human conversation, the natural pause between turns is often just a few hundred milliseconds. Any delay beyond that feels awkward and breaks the conversational flow, making the AI feel slow or robotic. This is why ultra-low latency is not just a technical metric but a fundamental requirement for a great user experience. VideoSDK is built for this, with a global mesh network optimized for real-time communication and an AI Agent SDK engineered to minimize delays, typically achieving end-to-end latency of under 600ms. This ensures that interactions are snappy, responsive, and feel as natural as talking to a person.</p><h3 id="contextual-memory"><strong>Contextual Memory</strong></h3><p>A conversation without memory is just a series of disconnected questions and answers. For an agent to be truly helpful, it must remember what was said earlier in the conversation (short-term memory) and even recall information from past interactions (long-term memory). VideoSDK's framework is designed to support this, allowing you to build agents that maintain context. 
This enables more coherent, personalized, and intelligent dialogues where the agent can reference past details, understand follow-up questions, and provide truly relevant responses.</p><h3 id="handling-interruptions"><strong>Handling Interruptions</strong></h3><p>Natural conversations are not always perfectly linear; people interrupt each other. A great AI voice agent must handle this gracefully. This is achieved through advanced Voice Activity Detection (VAD), which detects when a user starts speaking and signals the agent to stop talking immediately. VideoSDK’s Agent SDK manages this complex process, allowing for fluid turn-taking where users can jump in, change the topic, or ask for clarification without waiting for the agent to finish its sentence. This capability is crucial for making the interaction feel collaborative rather than scripted.</p><h2 id="conclusion"><strong>Conclusion</strong></h2><p>In 2025, building a powerful, conversational AI voice agent is no longer a futuristic vision but an accessible reality. By combining the core technologies of ASR, NLP/LLMs, and TTS with a robust <a href="https://www.videosdk.live/blog/introduction-to-real-time-communication-sdk" rel="noreferrer">real-time communication</a> backbone like WebRTC, developers can create truly intelligent systems.</p><p>As we've seen, <a href="https://www.videosdk.live/" rel="noreferrer">VideoSDK</a> provides a comprehensive, open-source framework that abstracts away the complexities of real-time transport and multi-provider integration. With just a few lines of Python, you can deploy a scalable, low-latency agent equipped with contextual memory, function-calling capabilities, and natural interruption handling.</p><p>Whether you're looking to revolutionize customer support, automate sales qualification, or create interactive educational experiences, the tools are at your fingertips. 
The future of human-computer interaction is here, and with VideoSDK, you have everything you need to build it.</p>]]></content:encoded></item><item><title><![CDATA[10 Best AI Voice Agents and Platforms in 2025]]></title><description><![CDATA[Explore 10 top AI voice agents & platforms (2025) with conversational AI comparisons, virtual assistant features, voice automation tips, and real use cases.]]></description><link>https://www.videosdk.live/blog/best-ai-voice-agents</link><guid isPermaLink="false">6899d5cd64df6f042b4d4164</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 11 Aug 2025 12:06:08 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/08/10-Best-AI-Voice-Agents-and-Platforms-in-2025.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction"><strong>Introduction</strong></h2><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/10-Best-AI-Voice-Agents-and-Platforms-in-2025.png" alt="10 Best AI Voice Agents and Platforms in 2025"/><p>If your business is still relying solely on human-led voice interactions in 2025, you are likely leaving significant efficiency gains and customer satisfaction on the table. The era of clunky, command-based IVR systems is over, replaced by intelligent, human-like <a href="https://www.videosdk.live/voice-agents"><u>AI voice agents</u></a> that can understand context, manage complex conversations, and even close sales.</p><p>The global market for voice AI agents is projected to skyrocket, reflecting a massive shift in how businesses operate and interact with their customers. Valued at $2.4 billion in 2024, the market is expected to reach nearly <a href="https://market.us/report/voice-ai-agents-market/#:~:text=The%20Global%20Voice%20AI%20Agents,reaching%20an%20impressive%20$1.2%20billion."><u>$47.5 billion by 2034</u></a>, growing at a compound annual growth rate (CAGR) of 34.8%. 
Companies are increasingly deploying the best AI voice agents to automate tasks, reduce operational costs, and enhance the customer experience. It's predicted that by 2025, AI will power 95% of all customer interactions.&nbsp;</p><p>This blog post will guide you through the 10 best AI voice agents and platforms in 2025. We'll explore their key features, ideal use cases, and how you can leverage them to build next-generation voice experiences. We will also highlight how <a href="https://www.videosdk.live/"><u>VideoSDK's</u></a> robust infrastructure, with its powerful real-time communication (RTC) capabilities, can empower you to create and scale your own AI-powered voice agents with ease.&nbsp;</p>
<h2 id="what-are-ai-voice-agents-and-why-are-they-booming-in-2025"><strong>What Are AI Voice Agents, and Why Are They Booming in 2025?</strong></h2><p>An AI voice agent is a sophisticated software program designed to understand, process, and respond to human speech in a conversational manner. Unlike traditional interactive voice response (IVR) systems that rely on rigid, pre-programmed menus, AI voice agents use a combination of technologies, including <a href="https://www.videosdk.live/developer-hub/stt/what-is-automatic-speech-recognition" rel="noreferrer">automatic speech recognition (ASR)</a>, natural language processing (NLP), and text-to-speech (TTS) to engage in dynamic, two-way conversations. These agents can be integrated into various channels, such as phone systems, mobile apps, and smart devices, to automate a wide range of tasks, from answering customer queries to scheduling appointments.&nbsp;</p><p>The boom in AI voice agents in 2025 is driven by several key factors. Businesses are under constant pressure to <a href="https://www.eposnow.com/us/resources/payment-trends-in-2026-evaluation-and-predictions/" rel="noreferrer">improve efficiency</a> and reduce operational costs. AI voice agents offer a powerful solution by automating repetitive tasks and handling a high volume of inquiries simultaneously, thus freeing up human agents to focus on more complex and high-value interactions. Furthermore, customer expectations have evolved; today's consumers demand instant, 24/7 support, and AI agents can deliver this level of service without the limitations of a human workforce. The rapid advancements in AI technology, particularly in natural language understanding and generative AI, have also made these agents more human-like and capable of handling nuanced conversations, leading to a more positive customer experience.</p><p>The data speaks for itself. 
In 2025, it's anticipated that 80% of customer service organizations will utilize generative AI to boost agent productivity and the overall customer experience. This widespread adoption is a clear indicator of the significant impact AI voice agents are having on the customer service industry. Organizations that have implemented AI solutions have reported substantial benefits, including up to a 30% reduction in customer service operational costs and a significant increase in customer satisfaction. This trend underscores the importance of integrating AI-powered voice solutions to stay competitive in the modern business landscape.&nbsp;</p><h2 id="top-10-ai-voice-agents-and-platforms-in-2025"><strong>Top 10 AI Voice Agents and Platforms in 2025</strong></h2><p/><ol><li><a href="https://www.videosdk.live/" rel="noreferrer"><u>VideoSDK</u></a></li><li><a href="https://www.videosdk.live/blog/best-ai-voice-agents#vapi-best-for-omnichannel-support" rel="noreferrer"><u>Vapi</u></a></li><li><a href="https://www.videosdk.live/blog/best-ai-voice-agents#elevenlabs-best-for-expressive-ai-voices-agent" rel="noreferrer"><u>ElevenLabs</u></a></li><li><a href="https://www.videosdk.live/blog/best-ai-voice-agents#deepgram-best-for-highly-accurate-speech-recognition" rel="noreferrer"><u>Deepgram</u></a></li><li><a href="https://www.videosdk.live/blog/best-ai-voice-agents#openai-best-open-source-ai-voice-recognition" rel="noreferrer"><u>OpenAI</u></a></li><li><a href="https://www.videosdk.live/blog/best-ai-voice-agents#bland-best-for-generating-custom-ai-voices" rel="noreferrer"><u>Bland</u></a></li><li><a href="https://www.videosdk.live/blog/best-ai-voice-agents#synthflow-best-for-building-and-deploying-ai-voice-agents" rel="noreferrer"><u>Synthflow</u></a></li><li><a href="https://www.videosdk.live/blog/best-ai-voice-agents#retell-ai-best-for-support-teams" rel="noreferrer"><u>Retell AI</u></a></li><li><a 
href="https://www.videosdk.live/blog/best-ai-voice-agents#voiceflow-best-for-no-code-ai" rel="noreferrer"><u>Voiceflow</u></a></li><li><a href="https://www.videosdk.live/blog/best-ai-voice-agents#murfai-best-for-generating-studio-quality-ai-voices" rel="noreferrer"><u>Murf.ai</u></a></li></ol><h2 id="videosdk-best-for-real-time-ai-voice-agent"><a href="https://www.videosdk.live/"><strong><u>VideoSDK</u></strong></a><strong>: Best for Real-Time AI Voice Agent</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Screenshot-2025-08-08-153831.png" class="kg-image" alt="10 Best AI Voice Agents and Platforms in 2025" loading="lazy" width="1897" height="916"/></figure><p>VideoSDK provides the foundational infrastructure for creating scalable and low-latency AI voice agents. It’s not just a tool but a comprehensive solution for developers looking to integrate intelligent voice experiences into their applications. This platform is engineered for developers who need to build, deploy, and manage sophisticated AI voice agents that operate in real-time, offering a flexible and powerful alternative to off-the-shelf solutions. Unlike rigid voice bot frameworks, VideoSDK gives you the flexibility, modularity, and global performance needed to build AI voice agents for production that think, speak, and act in real-time. With its robust WebRTC-based architecture, VideoSDK is the go-to choice for building custom, high-performance voice AI applications.</p><p><strong>Key Features:</strong></p><p><strong>Global WebRTC Infrastructure &amp; Low-Latency Guarantee:</strong> VideoSDK is built on a geo-distributed <a href="https://www.videosdk.live/blog/webrtc"><u>WebRTC</u></a> architecture that intelligently routes media through the nearest server to the end-user. 
This engineering ensures audio latency remains under 80ms, which is critical for preventing the awkward pauses and interruptions that plague slower systems. For an AI voice agent, this means conversations are fluid and natural, mirroring human interaction.</p><p><strong>Truly Modular and Extensible AI Pipelines:</strong> Developers have granular control over the voice agent's "brain." You can plug and play different best-in-class services for each part of the process:</p><p><strong>Speech-to-Text (STT):</strong> Choose from providers like Google Speech, OpenAI, or specialized services like Deepgram for real-time transcription. You can even switch models on the fly based on the caller's language or dialect.</p><p><strong>Large Language Model (LLM): </strong>Integrate any LLM of your choice, whether it's OpenAI's GPT series, Anthropic's Claude, or a fine-tuned open-source model you host yourself. This flexibility is key to controlling costs and tailoring the agent's personality and knowledge.</p><p><strong>Text-to-Speech (TTS):</strong> Select from a range of TTS engines like ElevenLabs for hyper-realistic voices or Amazon Polly for a wide variety of languages and accents.</p><p><strong>Context-Awareness with Built-in RAG and Memory: </strong>An AI agent is only as smart as its access to information. VideoSDK’s platform includes built-in Retrieval-Augmented Generation (RAG) capabilities. This allows the agent to query external knowledge bases—like a product database, company FAQs, or a user's order history—in real time. The integrated memory ensures the agent remembers previous turns in the conversation, so users don't have to repeat themselves. This combination drastically reduces LLM "hallucinations" by grounding responses in factual data.</p><p><strong>Full-Stack Platform SDKs:</strong> True cross-platform development is a core strength. An AI voice assistant built with VideoSDK can be deployed natively within your application, regardless of the platform. 
This includes comprehensive SDKs for web (React, Angular, Vue, Javascript), mobile (iOS and Android, with wrappers for React Native and Flutter), and even specialized environments like Unity.</p><p><strong>Telephony (PSTN) and SIP Integration: </strong>Your AI voice agent isn't limited to apps. VideoSDK allows you to connect your agent to traditional phone networks. You can acquire phone numbers directly and have your agent answer inbound calls (PSTN) or integrate it into existing enterprise phone systems using the <a href="https://www.videosdk.live/blog/sip-connect"><u>SIP protocol</u></a>.</p><p><strong>Robust Audio Processing:</strong> To ensure the STT engine receives the cleanest possible audio for maximum accuracy, VideoSDK includes built-in audio processing features like advanced noise suppression and echo cancellation. This is crucial for real-world environments where background noise is common, such as call centers, drive-thrus, or users on mobile devices.</p><p><strong>Flexible and Scalable "Agent Cloud": </strong>Deployment is designed for developer choice. You can use VideoSDK's managed "Agent Cloud" to get your voice agent running in minutes without worrying about server management and auto-scaling. For enterprises with specific security or infrastructure requirements, the entire agent framework can be self-hosted on your own cloud or on-premise servers.</p><p><strong>Enterprise-Grade Security and Compliance:</strong> VideoSDK is architected for trust and security, meeting standards like SOC 2 Type II, GDPR, and HIPAA. This makes it a viable solution for industries handling sensitive information, such as healthcare (for patient scheduling) or finance (for customer verification).</p><p><strong>Use Cases:</strong></p><ul><li><strong>E-commerce and Retail - Smart Return &amp; Order Management:</strong> An AI voice agent handles inbound calls for product returns. 
It authenticates the customer, uses RAG to pull up their order history from a Shopify or Magento backend, understands the reason for the return, and initiates the Return Merchandise Authorization (RMA) process—all without human intervention.</li><li><strong>Healthcare - HIPAA-Compliant Appointment Scheduling:</strong> A patient calls a clinic to book an appointment. The AI agent, operating within HIPAA guidelines, authenticates the patient, checks the doctor's real-time availability via an API call to the clinic's scheduling software, and books the appointment. It can also handle follow-up tasks like sending confirmation texts.</li><li><strong>SaaS - Interactive In-App Onboarding Assistant:</strong> A new user logs into a complex software platform. An in-app voice assistant proactively offers help. The user can ask natural language questions like, "How do I add a new team member to my project?" The agent provides a verbal walkthrough while potentially highlighting the relevant UI elements, drawing its answers from the product's documentation via RAG.</li><li><strong>Logistics and Transportation - Automated Dispatch and ETA Updates:</strong> A truck driver can call a dispatch number and, through a voice agent, report their current status, log a completed delivery, or request their next assignment. For customers, an AI agent can provide real-time ETA updates by querying the company's logistics database.</li><li><strong>Hospitality - 24/7 Voice-Based Concierge and Booking:</strong> A hotel guest can call the front desk at any hour and interact with an AI agent to request a wake-up call, order room service, or ask for information about local attractions. The agent can also handle new room bookings by checking availability and processing payments.</li><li><strong>FinTech - Secure Customer Authentication and Support:</strong> A user calls their bank's support line to report a lost card. The AI agent guides them through a secure, multi-factor voice authentication process. 
Once verified, it can immediately lock the card and initiate the process for sending a replacement, then log the interaction in the CRM.</li></ul><p><strong>Pricing: </strong>VideoSDK offers a flexible pricing model that caters to different scales of business. You can find more details on their <a href="https://www.videosdk.live/pricing"><u>pricing page</u></a>.</p><table>
<thead>
<tr>
<th>Plan</th>
<th>Free</th>
<th>Pay-As-You-Go</th>
<th>Enterprise</th>
</tr>
</thead>
<tbody>
<tr>
<td>Description</td>
<td>Launch effortlessly, ideal for exploration and integration</td>
<td>Scale seamlessly as you grow with usage-based pricing</td>
<td>Designed for high-volume demands and customized use cases</td>
</tr>
<tr>
<td>Included Minutes</td>
<td>10,000 mins/month (conferencing + streaming)<br/>300 mins/month (add-ons)</td>
<td>Billed based on usage</td>
<td>Starts at 1M minutes</td>
</tr>
<tr>
<td>Audio Call Pricing</td>
<td>Free (within limits)</td>
<td>$0.0006 / participant-minute</td>
<td>Discounted pricing based on usage</td>
</tr>
<tr>
<td>Video Call Pricing</td>
<td>Free (within limits)</td>
<td>$0.003 / participant-minute</td>
<td>Discounted pricing based on usage</td>
</tr>
<tr>
<td>Live Streaming Pricing</td>
<td>Free (within limits)</td>
<td>$0.0015 / viewer-minute</td>
<td>Discounted pricing based on usage</td>
</tr>
<tr>
<td>Latency</td>
<td>&lt;80ms globally</td>
<td>&lt;80ms globally</td>
<td>&lt;80ms globally</td>
</tr>
<tr>
<td>Network</td>
<td>Global mesh network</td>
<td>Global mesh network</td>
<td>Global mesh network</td>
</tr>
<tr>
<td>Deployment Options</td>
<td>Shared infra</td>
<td>Shared infra</td>
<td>Dedicated cloud region stack</td>
</tr>
<tr>
<td>Support</td>
<td>Discord community</td>
<td>Community &amp; standard support</td>
<td>99.99% uptime SLAs<br/>Best-in-class support<br/>Dedicated assistance</td>
</tr>
<tr>
<td>Credit Card Required</td>
<td>No</td>
<td>Yes</td>
<td>No (via custom contract)</td>
</tr>
</tbody>
</table>
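The modular, cascading pipeline described in the VideoSDK section above — independent, swappable STT, LLM, and TTS stages — can be sketched in a few lines of plain Python. This is an illustrative sketch with hypothetical stand-in classes, not the actual VideoSDK Agents API; in a real agent each stage would wrap a provider such as Deepgram (STT), GPT or Claude (LLM), and ElevenLabs (TTS):

```python
# Illustrative sketch (hypothetical class names, not the VideoSDK API) of a
# cascading voice-agent pipeline: STT -> LLM -> TTS, each stage swappable.

class FakeSTT:
    """Stand-in speech-to-text stage; a real one would call Deepgram, Whisper, etc."""
    def transcribe(self, audio: bytes) -> str:
        return audio.decode("utf-8")  # pretend the 'audio' bytes are already text

class RuleBasedLLM:
    """Stand-in reasoning stage; a real one would call GPT, Claude, etc."""
    def reply(self, transcript: str) -> str:
        if "order" in transcript.lower():
            return "Let me pull up your order history."
        return "How can I help you today?"

class FakeTTS:
    """Stand-in text-to-speech stage; a real one would call ElevenLabs, Polly, etc."""
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")

class CascadingAgent:
    """Chains the three stages; any one can be replaced without touching the others."""
    def __init__(self, stt, llm, tts):
        self.stt, self.llm, self.tts = stt, llm, tts

    def handle_turn(self, audio_in: bytes) -> bytes:
        transcript = self.stt.transcribe(audio_in)  # 1. speech -> text
        answer = self.llm.reply(transcript)         # 2. text -> response
        return self.tts.synthesize(answer)          # 3. response -> speech

agent = CascadingAgent(FakeSTT(), RuleBasedLLM(), FakeTTS())
print(agent.handle_turn(b"Where is my order?").decode())
# -> Let me pull up your order history.
```

Because each stage only exposes one method, swapping providers (for cost, language, or voice quality) is a one-line change to the constructor call, which is the design property the platforms below compete on.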
<h2 id="vapi-best-for-omnichannel-support"><strong>Vapi: Best for Omnichannel Support</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Screenshot-2025-08-08-153924.png" class="kg-image" alt="10 Best AI Voice Agents and Platforms in 2025" loading="lazy" width="1896" height="920"/></figure><p>Vapi is a developer-centric platform designed for building, deploying, and scaling AI voice agents across various channels. It's particularly strong for teams that need to create a unified voice experience, whether through traditional phone calls, web, or mobile applications. Vapi acts as an orchestration layer, allowing developers to plug in their preferred models for STT, LLM, and TTS to construct a custom voice AI stack.</p><p><strong>Key Features:</strong></p><ul><li><strong>Omnichannel Deployment:</strong> Build a single voice agent and deploy it across telephony (PSTN), web (WebRTC), and mobile apps.</li><li><strong>BYO Model Integration:</strong> Offers the flexibility to "bring your own" models from providers like OpenAI, Deepgram, and ElevenLabs, enabling performance and cost optimization.</li><li><strong>Developer-Focused:</strong> Provides a rich developer ecosystem with detailed documentation, API keys for easy integration, and an active Discord community for support.</li><li><strong>Low-Latency Architecture:</strong> Engineered for real-time, responsive conversations, crucial for maintaining user engagement.</li><li><strong>Scalability:</strong> Built to handle high volumes of concurrent calls, making it suitable for businesses of all sizes.</li></ul><p><strong>Use Cases:</strong></p><ul><li>Automating inbound and outbound customer support calls.</li><li>AI-driven e-commerce order management, package tracking, and dispatch.</li><li>Building lead generation and qualification bots for sales teams.</li><li>Healthcare applications such as automated appointment 
scheduling.</li></ul><p><strong>Pricing:</strong> Vapi's pricing is usage-based and modular. The core orchestration costs $0.05 per minute, but the total cost increases as you add third-party services for telephony, LLM, STT, and TTS. A free trial is available with $10 in credits to get started.</p><h2 id="elevenlabs-best-for-expressive-ai-voices-agent"><strong>ElevenLabs: Best for Expressive AI Voices Agent</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Screenshot-2025-08-08-154027.png" class="kg-image" alt="10 Best AI Voice Agents and Platforms in 2025" loading="lazy" width="1890" height="919"/></figure><p>ElevenLabs is a leader in voice AI technology, renowned for its ability to generate incredibly realistic, expressive, and human-like speech. While primarily a Text-to-Speech (TTS) provider, its high-quality voice generation is a critical component for creating believable AI voice agents. Developers use the ElevenLabs API to give their agents a distinctive and emotionally resonant voice that can significantly enhance the user experience.</p><p><strong>Key Features:</strong></p><ul><li><strong>High-Fidelity Speech Synthesis:</strong> Produces natural-sounding audio with lifelike intonation and emotional range.</li><li><strong>Voice Cloning:</strong> Allows you to create a digital replica of a specific voice from a short audio sample, perfect for brand consistency.</li><li><strong>Multilingual Support:</strong> Supports speech generation in over 29 languages and more than 120 voices.</li><li><strong>Voice Design:</strong> Provides tools to create and customize unique synthetic voices by adjusting parameters like age, gender, and accent.</li><li><strong>API for Integration:</strong> Offers a robust API that allows developers to easily integrate its TTS capabilities into any AI voice agent platform.</li></ul><p><strong>Use Cases:</strong></p><ul><li>Powering AI agents for audiobooks and podcasts 
with unique character voices.</li><li>Creating voiceovers for videos, e-learning modules, and corporate training.</li><li>Developing AI-powered game characters with dynamic and realistic dialogue.</li><li>Building brand-specific voice assistants for marketing and customer engagement.</li></ul><p><strong>Pricing:</strong> ElevenLabs offers a tiered subscription model. There is a free plan with a 10,000-character monthly limit. Paid plans start at $5/month for the Starter tier, $11/month for the Creator tier, and go up to $99/month for the Pro plan, with increasing character limits and feature access at each level.</p><h2 id="deepgram-best-for-highly-accurate-speech-recognition"><strong>Deepgram: Best for Highly Accurate Speech Recognition</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Screenshot-2025-08-08-154119.png" class="kg-image" alt="10 Best AI Voice Agents and Platforms in 2025" loading="lazy" width="1876" height="915"/></figure><p>Deepgram is an AI speech platform that provides developers with building blocks for voice applications, centered around its industry-leading Speech-to-Text (STT) models. Its high accuracy and low latency in transcription are vital for any AI voice agent, as understanding the user correctly is the first step to a successful interaction. 
With the recent addition of Aura, its own TTS model, Deepgram now offers a more complete voice AI platform.</p><p><strong>Key Features:</strong></p><ul><li><strong>High-Accuracy STT:</strong> Renowned for its fast and precise speech-to-text transcription across more than 30 languages.</li><li><strong>Aura TTS Engine:</strong> A text-to-speech model built for responsive, conversational AI that minimizes latency.</li><li><strong>Audio Intelligence:</strong> Provides APIs for extracting insights from audio, such as summarization, sentiment analysis, and topic detection.</li><li><strong>Real-Time Processing:</strong> Engineered for sub-300ms response times, making it ideal for live, conversational applications.</li><li><strong>Custom Models:</strong> Allows businesses to train speech models on their specific audio data to improve accuracy for unique vocabularies or accents.</li></ul><p><strong>Use Cases:</strong></p><ul><li>Powering conversational AI and virtual assistants where transcription accuracy is critical.</li><li>Transcribing and analyzing calls in contact centers for quality assurance and compliance.</li><li>Creating accurate transcriptions for media such as podcasts and videos.</li><li>Building voice-controlled applications and devices.</li></ul><p><strong>Pricing:</strong> Deepgram uses a pay-as-you-go model. New users receive $200 in free credits. The Aura TTS service starts at $0.015 per 1,000 characters. 
For higher volume usage, Growth and Enterprise plans are available with discounted rates.</p><h2 id="openai-best-open-source-ai-voice-recognition"><strong>OpenAI: Best Open-Source AI Voice Recognition</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Screenshot-2025-08-08-154218.png" class="kg-image" alt="10 Best AI Voice Agents and Platforms in 2025" loading="lazy" width="1608" height="913"/></figure><p>While not a singular, pre-built voice agent platform, OpenAI provides the essential AI models that serve as the building blocks for creating powerful voice agents. Developers can combine OpenAI's Whisper model for speech recognition, a GPT model (like GPT-4) for intelligence and reasoning, and its TTS models for voice output. The "open" nature refers to the accessibility of its APIs, which allow for deep customization and integration.</p><p><strong>Key Features:</strong></p><ul><li><strong>Whisper for STT:</strong> A highly accurate, open-source speech recognition model that can be self-hosted or accessed via API.</li><li><strong>GPT Models for LLM:</strong> Provides the conversational intelligence, allowing agents to understand context, answer questions, and perform tasks.</li><li><strong>TTS API:</strong> Offers a range of natural-sounding voices for generating the agent's spoken responses.</li><li><strong>Agents SDK:</strong> OpenAI provides an SDK, particularly for TypeScript, to help developers build real-time, context-aware voice agents more easily.</li><li><strong>Function Calling:</strong> Allows the LLM to connect to external tools and APIs, enabling the agent to perform real-world actions like booking appointments or processing orders.</li></ul><p><strong>Use Cases:</strong></p><ul><li>Building custom, intelligent voice assistants from the ground up for any application.</li><li>Creating specialized agents that can hand off tasks to one another.</li><li>Developing proof-of-concept voice 
agents for new product ideas.</li><li>Integrating voice control and conversational AI into existing applications.</li></ul><p><strong>Pricing:</strong> Based on API usage for each model (Whisper, GPT, TTS). Costs are calculated per token for language models and per second or character for audio models. This modular pricing allows developers to pay only for what they use.</p><h2 id="bland-best-for-generating-custom-ai-voices"><strong>Bland: Best for Generating Custom AI Voices</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Screenshot-2025-08-08-154301.png" class="kg-image" alt="10 Best AI Voice Agents and Platforms in 2025" loading="lazy" width="1866" height="916"/></figure><p>Bland AI is an API-first platform designed for developers who want to build and scale AI-powered phone agents. It provides the infrastructure to handle high volumes of concurrent calls and is particularly suited for enterprise-level outbound and inbound call automation. 
While it offers a basic drag-and-drop builder, its core strength lies in its developer-centric tools and integrations.</p><p><strong>Key Features:</strong></p><ul><li><strong>High-Volume Calling:</strong> Capable of dispatching tens of thousands of calls per hour, making it suitable for large-scale campaigns.</li><li><strong>API-First Architecture:</strong> Gives developers deep control over call logic, workflows, and integrations via a flexible API.</li><li><strong>Voice Cloning (Beta):</strong> Offers the ability to create custom voices to align with specific brand identities.</li><li><strong>Telephony Integration:</strong> Supports integration with major telephony providers like Twilio and Vonage, as well as Bring-Your-Own-Carrier (BYOC) setups.</li><li><strong>Security and Compliance:</strong> Meets enterprise security standards, including SOC 2 and HIPAA certifications.</li></ul><p><strong>Use Cases:</strong></p><ul><li>Automating high-volume outbound sales and marketing calls.</li><li>Handling large-scale inbound customer service and support inquiries.</li><li>Building AI-powered appointment reminder and confirmation systems.</li><li>Conducting automated surveys and collecting customer feedback.</li></ul><p><strong>Pricing:</strong> Bland AI has a straightforward usage-based pricing model. Outbound calls are priced at $0.09 per minute and inbound calls at $0.04 per minute. Phone number rental is an additional $15 per month. 
Be aware that advanced features like voice cloning may incur extra fees.</p><h2 id="synthflow-best-for-building-and-deploying-ai-voice-agents"><strong>Synthflow: Best for Building and Deploying AI Voice Agents</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Screenshot-2025-08-08-154428.png" class="kg-image" alt="10 Best AI Voice Agents and Platforms in 2025" loading="lazy" width="1878" height="921"/></figure><p>Synthflow is a no-code/low-code platform that enables businesses to design, build, and deploy human-like AI voice agents quickly. It is designed to be accessible for both technical and non-technical users, featuring an intuitive drag-and-drop interface. Synthflow bundles together all the necessary components (telephony, STT, LLM, TTS) into a single package, simplifying the development process.</p><p><strong>Key Features:</strong></p><ul><li><strong>No-Code Flow Builder:</strong> A visual, drag-and-drop interface for designing complex conversational workflows without writing code.</li><li><strong>All-in-One Platform:</strong> Includes all necessary components, abstracting away the complexity of integrating multiple APIs.</li><li><strong>Low-Latency Responses:</strong> Optimized for fast, sub-400ms response times to ensure natural-sounding conversations.</li><li><strong>Third-Party Integrations:</strong> Seamlessly connects with over 200 tools, including CRMs like Salesforce and HubSpot, calendars, and other business systems.</li><li><strong>White-Labeling:</strong> Offers an agency plan that allows businesses to rebrand the platform as their own.</li></ul><p><strong>Use Cases:</strong></p><ul><li>Automating lead qualification and appointment scheduling for sales teams.</li><li>Providing 24/7 AI-powered customer support and service.</li><li>Building AI receptionists and answering services.</li><li>Creating voice-based surveys and data collection 
agents.</li></ul><p><strong>Pricing:</strong> Synthflow offers several tiered plans. The Starter plan is $29/month, the Pro plan is $375/month, and the Growth plan is $750/month, each with included minutes and features. A 14-day free trial is available for the Pro plan. An enterprise plan is also available with volume-based discounts.</p><h2 id="retell-ai-best-for-support-teams"><strong>Retell AI: Best for Support Teams</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Screenshot-2025-08-08-154509.png" class="kg-image" alt="10 Best AI Voice Agents and Platforms in 2025" loading="lazy" width="1873" height="896"/></figure><p>Retell AI is a developer-focused platform for building highly responsive, human-like voice agents. Its standout feature is its proprietary conversation engine, which excels at handling conversational nuances like turn-taking and interruptions, making it ideal for dynamic support interactions. Retell AI is designed for production environments where conversational fluidity is paramount.</p><p><strong>Key Features:</strong></p><ul><li><strong>Advanced Conversational Engine:</strong> Enables agents to handle interruptions and detect end-of-turn with less than 800ms latency, creating a more natural flow.</li><li><strong>Flexible AI Integration:</strong> Allows you to use your preferred LLM, including GPT and Claude models, to power your agent's intelligence.</li><li><strong>Multi-Platform Deployment:</strong> Deploy voice agents across web applications, mobile apps, and telephony services like Twilio.</li><li><strong>Comprehensive Monitoring:</strong> Provides post-call analysis, sentiment tracking, and task completion data to monitor and improve agent performance.</li><li><strong>Security and Compliance:</strong> Supports SOC 2, HIPAA, and GDPR compliance, making it suitable for regulated industries.</li></ul><p><strong>Use Cases:</strong></p><ul><li>Building AI-powered customer 
service agents that can handle complex and unscripted inquiries.</li><li>Creating interactive voice assistants for technical support and troubleshooting.</li><li>Developing AI agents for booking and scheduling that require natural conversation.</li><li>Prototyping and deploying sophisticated voice AI applications.</li></ul><p><strong>Pricing:</strong> Retell AI uses a pay-as-you-go pricing model with separate charges for each component (voice engine, LLM, telephony). Voice engine costs start at $0.07/minute. There are no platform fees, and users get $10 in free credits to start. Enterprise plans with volume discounts are also available.</p><h2 id="voiceflow-best-for-no-code-ai"><strong>Voiceflow: Best for No-Code AI</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Screenshot-2025-08-08-154542.png" class="kg-image" alt="10 Best AI Voice Agents and Platforms in 2025" loading="lazy" width="1875" height="918"/></figure><p>Voiceflow is a collaborative, no-code platform that allows teams to design, prototype, and deploy conversational AI agents for both voice and chat. It's particularly well-suited for teams with non-technical members, such as designers and product managers, thanks to its intuitive drag-and-drop interface. 
While strong on chat, its voice capabilities are typically enabled through integrations.</p><p><strong>Key Features:</strong></p><ul><li><strong>No-Code Visual Builder:</strong> An easy-to-use drag-and-drop canvas for designing complex conversation flows without any programming knowledge.</li><li><strong>Collaborative Workspace:</strong> Allows multiple team members to work on agent design in real-time, leaving comments and managing versions.</li><li><strong>Knowledge Base Training:</strong> Train your AI agent on your own data by uploading documents, websites, or articles to answer user questions accurately.</li><li><strong>Multi-Channel Deployment:</strong> Design an agent once and deploy it across various channels, including websites, mobile apps, and voice assistants like Amazon Alexa.</li><li><strong>LLM Compatibility:</strong> Supports various large language models, including GPT-4, Claude, and Gemini, or you can bring your own.</li></ul><p><strong>Use Cases:</strong></p><ul><li>Rapidly prototyping and testing new chatbot and voice assistant ideas.</li><li>Building customer support chatbots for websites to handle common queries.</li><li>Creating lead generation bots that capture user information.</li><li>Developing voice applications for smart speakers and other voice-enabled devices.</li></ul><p><strong>Pricing:</strong>Voiceflow offers a free Sandbox plan for individuals to experiment. 
The Pro plan starts at $60 per month per editor, and a custom-priced Enterprise plan is available for larger teams needing advanced features and security.</p><h2 id="murfai-best-for-generating-studio-quality-ai-voices"><strong>Murf.ai: Best for Generating Studio-Quality AI Voices</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Screenshot-2025-08-08-154615.png" class="kg-image" alt="10 Best AI Voice Agents and Platforms in 2025" loading="lazy" width="1875" height="898"/></figure><p>Murf.ai is a powerful AI voice generator that specializes in creating studio-quality, realistic voiceovers from text. While not an end-to-end voice agent platform, it excels at providing the high-quality audio output needed to make an agent sound professional and trustworthy. It's an excellent tool for content creators, marketers, and educators who need polished voiceovers for their projects.</p><p><strong>Key Features:</strong></p><ul><li><strong>Studio-Quality Voices:</strong> Offers a library of over 200 realistic and natural-sounding voices in more than 30 languages.</li><li><strong>Voice Customization:</strong> Allows users to fine-tune voice parameters like pitch, tone, and speed to match the desired style and emotion.</li><li><strong>AI Voice Cloning:</strong> Provides the ability to create a custom AI voice clone for consistent branding.</li><li><strong>Voice Over Video:</strong> An integrated editor that allows you to sync your generated voiceover with videos, images, and background music.</li><li><strong>API Integration:</strong> An API is available for developers to integrate Murf's voice generation capabilities into their own applications.</li></ul><p><strong>Use Cases:</strong></p><ul><li>Creating professional voiceovers for marketing videos, advertisements, and social media content.</li><li>Producing audio for e-learning courses, training materials, and explainer videos.</li><li>Generating high-quality 
audio for podcasts and audiobooks.</li><li>Providing the voice for corporate presentations and product demos.</li></ul><p><strong>Pricing:</strong>Murf.ai has a free plan with limited features. Paid plans include the Creator plan at $19/month and the Business plan at $66/month (billed annually). A custom Enterprise plan is also available for larger teams with unlimited voice generation needs.</p><h2 id="use-case-patterns-emerging-in-2025"><strong>Use Case Patterns Emerging in 2025</strong></h2><h3 id="ai-voice-agents-for-agent-assist-in-contact-centers"><strong>AI Voice Agents for Agent Assist in Contact Centers</strong></h3><ul><li><strong>Pain Point:</strong> Human agents in contact centers often waste valuable time searching through extensive knowledge bases or internal documents while on a live call, leading to long silences and increased Average Handle Time (AHT).</li><li><strong>Solution:</strong> AI voice agents can act as a real-time co-pilot. The AI listens to the conversation, understands the customer's query, and automatically fetches and displays the most relevant information, policy details, or troubleshooting steps on the agent's screen.</li><li><strong>Example:</strong> During an insurance claim call, as the customer describes the incident, the AI agent listens for keywords and instantly surfaces the relevant policy clauses, coverage limits, and required forms, reducing call handling time by up to 40%.</li></ul><h3 id="ai-agents-for-customer-support-and-bpos"><strong>AI Agents for Customer Support and BPOs</strong></h3><ul><li><strong>Pain Point:</strong> High call volumes for repetitive queries (e.g., order status, password resets) lead to long customer wait times, agent burnout, and high operational costs for Business Process Outsourcing (BPO) centers.</li><li><strong>Solution:</strong> Deploy AI voice agents to autonomously handle Tier-1 and Tier-2 support inquiries 24/7. 
This front line of AI deflects a significant portion of calls, freeing human agents to manage complex escalations and high-value customer interactions.</li><li><strong>Example:</strong> A large retail BPO implements AI agents to manage all "Where is my order?" inquiries. The agent authenticates the user, integrates with the logistics backend to provide a real-time status update, and can even initiate a support ticket if the item is lost, resolving 60% of these calls without human intervention.</li></ul><h3 id="voice-agents-for-fintech"><strong>Voice Agents for Fintech</strong></h3><ul><li><strong>Pain Point:</strong> Fintech platforms require secure and immediate support for sensitive operations like fraud reporting, transaction verification, and account inquiries, often happening outside standard business hours.</li><li><strong>Solution:</strong> Implement AI voice agents with robust security layers, such as voice biometrics, for user authentication. These agents can securely handle routine financial tasks, query transaction histories, and place temporary locks on accounts in real-time.</li><li><strong>Example:</strong> A user of a digital wallet notices a suspicious transaction. They call the support line and are greeted by an AI agent that uses the user's voiceprint to verify their identity. The user says, "I don't recognize the last transaction," and the agent immediately flags the charge and freezes the card, preventing further fraud.</li></ul><h3 id="outbound-calling-agents-for-reminders-and-sales"><strong>Outbound Calling Agents for Reminders and Sales</strong></h3><ul><li><strong>Pain Point:</strong> Manually calling customers for appointment reminders or sales follow-ups is a monotonous, time-intensive task that doesn't scale and is prone to human inconsistency.</li><li><strong>Solution:</strong> Use AI voice agents to automate outbound calling campaigns. 
The agent can dial thousands of numbers concurrently, deliver personalized messages, and engage in simple two-way conversation to confirm appointments, qualify leads, or process renewals.</li><li><strong>Example:</strong> A car dealership uses an AI agent to call customers whose leases are expiring. The agent reminds them of the date, asks if they're interested in renewing or exploring new models, and can directly schedule a test drive with a sales representative based on calendar availability.</li></ul><h3 id="intelligent-ivr-replacement-for-enterprises"><strong>Intelligent IVR Replacement for Enterprises</strong></h3><ul><li><strong>Pain Point:</strong> Traditional Interactive Voice Response (IVR) systems with rigid, numbered menus ("Press 1 for sales, press 2 for support...") are a major source of customer frustration and often fail to resolve issues, leading to misrouted calls.</li><li><strong>Solution:</strong> Replace the legacy IVR with a conversational AI "front door." This AI agent understands natural language, allowing callers to state their intent immediately ("I need to find out if you have the new X-model laptop in stock"). It can either resolve the query directly or route the call to the correct department with the full context of the conversation.</li><li><strong>Example:</strong> A major airline replaces its IVR. A traveler calls and says, "My flight to San Francisco was just canceled, I need to get on the next one." 
The AI agent authenticates the caller, finds their booking, and rebooks them on the next available flight, all within a single, seamless conversation.</li></ul><h3 id="proactive-voice-notifications-for-critical-events"><strong>Proactive Voice Notifications for Critical Events</strong></h3><ul><li><strong>Pain Point:</strong> Email or SMS alerts for urgent events like service outages, potential fraud, or flight cancellations can be easily missed or ignored by customers.</li><li><strong>Solution:</strong> Trigger automated, outbound voice calls for time-sensitive alerts. An AI agent can deliver the critical information clearly and can even ask for a verbal confirmation to ensure the message was received.</li><li><strong>Example:</strong> A financial institution's system flags a potentially fraudulent transaction. It immediately triggers an AI-powered call to the customer that says, "We've detected a possible fraudulent charge of $500 on your card. Please say 'yes' if this was you or 'no' if it was not."</li></ul><h3 id="autonomous-scheduling-and-reminders"><strong>Autonomous Scheduling and Reminders</strong></h3><ul><li><strong>Pain Point:</strong> The back-and-forth communication required to schedule appointments or meetings is a significant administrative burden for both businesses and their clients.</li><li><strong>Solution:</strong> Deploy an AI agent that integrates directly with calendar systems (e.g., Google Calendar, Microsoft Outlook). The agent can view real-time availability, offer open slots to the user, book the appointment, and send automated reminders.</li><li><strong>Example:</strong> A patient needs to book a follow-up with their doctor. They call the clinic's AI scheduler, which offers available times. 
The patient chooses a slot, and the agent books it directly in the doctor's calendar and sends an email and SMS confirmation to the patient.</li></ul><h3 id="debt-collection-with-empathy-compliance"><strong>Debt Collection with Empathy + Compliance</strong></h3><ul><li><strong>Pain Point:</strong> Debt collection is a highly regulated and emotionally charged process. Human agents can struggle with maintaining a consistent, empathetic tone while adhering strictly to compliance rules like the FDCPA.</li><li><strong>Solution:</strong> Utilize AI voice agents programmed with an empathetic tone and a script that is hard-coded for compliance. The agent can handle initial outreach, offer predefined payment plans, and process payments securely, ensuring every interaction is professional and legally sound.</li><li><strong>Example:</strong> A collections agency uses an AI agent for first-contact calls. The agent clearly states all required legal disclosures, offers a flexible payment plan without judgment, and can process a payment over the phone, all while logging the call for compliance auditing.</li></ul><h3 id="language-localized-customer-support-at-scale"><strong>Language-Localized Customer Support at Scale</strong></h3><ul><li><strong>Pain Point:</strong> Offering high-quality, 24/7 customer support in multiple languages is logistically complex and often too expensive for businesses to maintain.</li><li><strong>Solution:</strong> Deploy a single, multilingual AI voice agent. The agent can be programmed to detect the caller's language or offer a language choice, then switch its STT, LLM, and TTS models instantly to provide a fully localized support experience.</li><li><strong>Example:</strong> A global software company uses one AI agent for its European support line. When a caller from France begins speaking, the agent responds in fluent French. 
The next call from Germany is handled seamlessly in German, drawing answers from the same central knowledge base.</li></ul><h3 id="ai-voice-receptionists-for-smbs"><strong>AI Voice Receptionists for SMBs</strong></h3><ul><li><strong>Pain Point:</strong> Small and medium-sized businesses (SMBs) often can't afford a full-time human receptionist, which can lead to missed calls, lost business opportunities, and an unprofessional image.</li><li><strong>Solution:</strong> Implement an AI voice agent that acts as a virtual receptionist. It can answer calls 24/7, provide answers to frequently asked questions (e.g., business hours, location), intelligently route calls to the right employee's mobile phone, or take detailed messages.</li><li><strong>Example:</strong> A boutique marketing agency uses an AI receptionist. The agent answers calls with the agency's name, asks callers about their needs, and can distinguish between a new business lead (which gets routed directly to the founder) and a vendor call (which goes to voicemail).</li></ul><h3 id="voice-based-surveys-and-feedback-collection"><strong>Voice-Based Surveys and Feedback Collection</strong></h3><ul><li><strong>Pain Point:</strong> Traditional text and email surveys suffer from notoriously low response rates, and they rarely capture detailed, qualitative insights.</li><li><strong>Solution:</strong> Automate customer feedback collection with engaging, post-interaction AI voice calls. The AI agent can ask open-ended questions and capture nuanced, natural language responses, providing richer data than a simple 1-5 rating scale.</li><li><strong>Example:</strong> An online retailer programs an AI agent to call customers three days after their product is delivered. The agent asks, "How was your experience?" and can follow up with questions like, "What's one thing we could do to make it better?" 
The transcribed answers are then analyzed for sentiment and product insights.</li></ul><h3 id="llm-powered-in-app-voice-companions"><strong>LLM-Powered In-App Voice Companions</strong></h3><ul><li><strong>Pain Point:</strong> Applications in gaming, education, and the metaverse often feel static and lack truly interactive, immersive elements to keep users engaged.</li><li><strong>Solution:</strong> Embed an AI voice agent directly into the application as a character, guide, or companion. Powered by a flexible LLM and a real-time communication platform like VideoSDK, this agent can engage in dynamic, context-aware conversations that enhance the user experience.</li><li><strong>Example:</strong> A language-learning app features an AI "travel guide" who acts as a conversation partner. The user can practice speaking with the AI character, asking for directions or ordering food in a new language, and the AI responds realistically, corrects pronunciation, and makes the learning process feel like a real-world interaction.</li></ul><h2 id="why-videosdk-is-the-best-choice-among-all-the-available-option"><strong>Why VideoSDK is the best choice among all the available options</strong></h2><p>While the market offers a range of excellent point solutions—from specialized TTS engines like ElevenLabs to no-code builders like Voiceflow—VideoSDK stands apart as the definitive choice for developers who need to build truly custom, high-performance, and scalable AI voice agents. The key difference lies in its architecture and philosophy. VideoSDK provides the core, real-time infrastructure and a fully modular AI pipeline, giving you complete control over your creation without sacrificing performance.</p><p>Unlike platforms that lock you into their specific ecosystem or abstract away critical components, VideoSDK empowers you with a developer-first toolkit. You are not just building on a platform; you are building with a foundational technology. 
This means you can select the absolute best-in-class models for every part of your agent's "brain"—be it OpenAI for intelligence, Deepgram for transcription, or ElevenLabs for voice—and orchestrate them on VideoSDK’s global, low-latency WebRTC network. This modularity ensures your agent is not only powerful but also future-proof, allowing you to swap components as AI technology evolves. For businesses aiming to create a differentiated, proprietary voice experience that operates in true real-time, VideoSDK is not just an option; it is the strategic foundation for success.</p><h3 id="comparison-of-ai-voice-agent-platforms-in-2025"><strong>Comparison of AI Voice Agent Platforms in 2025</strong></h3><table>
<thead>
<tr>
<th>Platform</th>
<th>Real-Time Voice Infrastructure</th>
<th>Modular STT/LLM/TTS Pipeline</th>
<th>Cross-Platform SDKs</th>
<th>Custom Deployment (Self-hosting)</th>
<th>Built-in Memory &amp; RAG</th>
<th>Best For</th>
</tr>
</thead>
<tbody>
<tr>
<td>VideoSDK</td>
<td>Yes</td>
<td>Yes</td>
<td>Web, iOS, Android, RN, Unity, IoT</td>
<td>Yes</td>
<td>Yes</td>
<td>End-to-end AI voice agent infrastructure</td>
</tr>
<tr>
<td>Vapi</td>
<td>Yes</td>
<td>No</td>
<td>CLI-based only</td>
<td>No</td>
<td>No</td>
<td>Developer tool for rapid prototyping</td>
</tr>
<tr>
<td>ElevenLabs</td>
<td>No (TTS-only)</td>
<td>No (TTS-only)</td>
<td>API-based (TTS only)</td>
<td>No</td>
<td>No</td>
<td>High-quality voice generation</td>
</tr>
<tr>
<td>Deepgram</td>
<td>No (STT-only)</td>
<td>No (STT-only)</td>
<td>API-based (STT only)</td>
<td>Possible with enterprise plan</td>
<td>No</td>
<td>Fast, accurate speech recognition</td>
</tr>
<tr>
<td>OpenAI</td>
<td>Partial (Real-time APIs)</td>
<td>No</td>
<td>API only</td>
<td>No</td>
<td>Limited (via GPT-4)</td>
<td>Research-based STT/TTS/LLM access</td>
</tr>
<tr>
<td>Bland</td>
<td>Yes</td>
<td>No</td>
<td>Hosted-only</td>
<td>No</td>
<td>No</td>
<td>Outbound call automation</td>
</tr>
<tr>
<td>Synthflow</td>
<td>Yes</td>
<td>No (Predefined pipeline)</td>
<td>No SDKs</td>
<td>No</td>
<td>Limited</td>
<td>No-code enterprise agents</td>
</tr>
<tr>
<td>Retell AI</td>
<td>Yes</td>
<td>No (Fixed pipeline)</td>
<td>Hosted-only</td>
<td>No</td>
<td>Yes</td>
<td>Customer service automation</td>
</tr>
<tr>
<td>Voiceflow</td>
<td>No</td>
<td>No (Visual scripting only)</td>
<td>No SDKs</td>
<td>No</td>
<td>Limited (depends on LLM)</td>
<td>Voice bot design &amp; prototyping</td>
</tr>
<tr>
<td>Murf.ai</td>
<td>No</td>
<td>No (TTS-only)</td>
<td>API-based (TTS only)</td>
<td>No</td>
<td>No</td>
<td>Studio-quality voiceovers</td>
</tr>
</tbody>
</table>
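The modularity the table highlights can be made concrete with a short, framework-agnostic sketch. Note that the class and method names below are illustrative stand-ins, not VideoSDK's actual API; the point is only that when each stage of an STT → LLM → TTS chain sits behind a small interface, any vendor can be swapped without touching the rest of the pipeline.

```python
from typing import Protocol

# Minimal interfaces for each pipeline stage (illustrative, not VideoSDK's API).
class STT(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class LLM(Protocol):
    def complete(self, prompt: str) -> str: ...

class TTS(Protocol):
    def synthesize(self, text: str) -> bytes: ...

class CascadingPipeline:
    """Chains any STT, LLM, and TTS implementation, so each vendor
    (e.g. Deepgram, OpenAI, ElevenLabs) can be swapped independently."""

    def __init__(self, stt: STT, llm: LLM, tts: TTS) -> None:
        self.stt, self.llm, self.tts = stt, llm, tts

    def run(self, audio: bytes) -> bytes:
        text = self.stt.transcribe(audio)   # speech -> text
        reply = self.llm.complete(text)     # text -> response
        return self.tts.synthesize(reply)   # response -> speech

# Stub components stand in for real providers to demonstrate the swap:
class EchoSTT:
    def transcribe(self, audio: bytes) -> str:
        return audio.decode()

class ShoutLLM:
    def complete(self, prompt: str) -> str:
        return prompt.upper()

class BytesTTS:
    def synthesize(self, text: str) -> bytes:
        return text.encode()

pipeline = CascadingPipeline(EchoSTT(), ShoutLLM(), BytesTTS())
print(pipeline.run(b"hello agent"))  # b'HELLO AGENT'
```

Replacing `ShoutLLM` with a different `LLM` implementation changes one constructor argument and nothing else, which is the property the "Modular STT/LLM/TTS Pipeline" column is measuring.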
<h2 id="conclusion"><strong>Conclusion</strong></h2><p>The landscape of business communication is undergoing its most significant transformation in decades, and AI voice agents are at the heart of this revolution. From automating customer support and sales outreach to providing in-app conversational companions, the applications are as vast as they are impactful. We've explored the top platforms of 2025, each offering unique strengths—whether it's the beautiful voices of ElevenLabs, the powerful intelligence of OpenAI, or the rapid deployment of no-code builders like Synthflow.</p><p>This is where VideoSDK excels. By providing the foundational, low-latency WebRTC infrastructure and a completely modular AI pipeline, VideoSDK empowers you to build the exact voice agent you envision, powered by the best models on the market. You are in command of every component, ensuring your application is both powerful today and adaptable for tomorrow.</p><p><strong>Ready to build the future of voice communication?</strong></p><p>Explore VideoSDK's <a href="https://www.videosdk.live/voice-agents"><u>AI Voice Agent</u></a> capabilities: Dive into our <a href="https://docs.videosdk.live/ai_agents/introduction"><u>documentation</u></a> and see how our infrastructure can power your vision.</p><p>Start Building for Free: <a href="https://app.videosdk.live/signup"><u>Sign up for a free VideoSDK account</u></a> and get started with our robust APIs and SDKs.</p><p>Talk to an Expert: <a href="https://www.videosdk.live/contact"><u>Book a demo</u></a> with our solutions team to discuss how to bring your most ambitious voice projects to life.&nbsp;</p>]]></content:encoded></item><item><title><![CDATA[Deploy VideoSDK Telephony Voice Agents on Cerebrium]]></title><description><![CDATA[AI telephony agent with VideoSDK & Cerebrium. 
A step-by-step guide with full code to deploy your own voice solution.]]></description><link>https://www.videosdk.live/blog/deploy-telephony-agents-on-cerebrium</link><guid isPermaLink="false">688b61f964df6f042b4d40ec</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 05 Aug 2025 06:46:18 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/08/Cerebrium-x-VideoSDK--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/08/Cerebrium-x-VideoSDK--1-.png" alt="Deploy VideoSDK Telephony Voice Agents on Cerebrium"/><p> AI-powered telephony solutions are revolutionizing customer service, sales, and communication workflows. This comprehensive guide shows you how to build a sophisticated AI telephony agent using VideoSDK's powerful voice agent capabilities, deployed seamlessly on Cerebrium's cloud platform.</p><h2 id="what-were-building">What We're Building</h2><p>We'll create a complete AI telephony system that can:</p><ul><li>Handle both inbound and outbound voice calls</li><li>Integrate with SIP providers like Twilio</li><li>Leverage Google's Gemini AI for intelligent conversations</li><li>Deploy automatically on Cerebrium's scalable infrastructure</li><li>Provide real-time voice processing with minimal latency</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/08/cerebruim-telephony-agent.png" class="kg-image" alt="Deploy VideoSDK Telephony Voice Agents on Cerebrium" loading="lazy" width="2588" height="1425"/></figure><h2 id="architecture-overview">Architecture Overview</h2><p>Our AI telephony agent combines several powerful technologies:</p><ul><li><strong>VideoSDK Agents</strong>: The core voice agent framework</li><li><strong>SIP Integration</strong>: For telephony connectivity via Twilio</li><li><strong>Gemini AI</strong>: Real-time conversational 
intelligence</li><li><strong>Cerebrium</strong>: Cloud deployment and scaling platform</li></ul><h2 id="prerequisites">Prerequisites</h2><p>Before we start, you'll need accounts and credentials for the following services. Here are the links to get you started:</p><ul><li><a href="https://app.videosdk.live/">VideoSDK Auth Token</a></li><li><a href="https://console.twilio.com/">Twilio</a> SIP trunking setup</li><li><a href="https://aistudio.google.com/app/apikey">Google API key</a> for Gemini</li><li><a href="https://dashboard.cerebrium.ai/register">Cerebrium</a> account, follow docs <a href="https://docs.cerebrium.ai/cerebrium/getting-started/introduction">here</a></li></ul><h2 id="project-structure">Project Structure</h2><p>Our project follows a clean, modular structure:</p><pre><code class="language-shell">├── cerebrium.toml
├── main.py
├── requirements.txt
└── README.md
</code></pre><h2 id="initialize-your-project"><strong>Initialize Your Project</strong></h2><p>Let's begin by setting up our project directory and basic configuration using the Cerebrium Command Line Interface (CLI).</p><pre><code class="language-bash">pip install cerebrium
cerebrium login
cerebrium init videosdk-telephony-agent
</code></pre><h2 id="configure-cerebrium-deployment">Configure Cerebrium Deployment</h2><p>First, let's set up our <strong>cerebrium.toml</strong> configuration file for optimal deployment:</p><pre><code class="language-bash">[cerebrium.deployment]
name = "sip-ai-agent"
python_version = "3.12"
include = ["./*", "main.py", "cerebrium.toml"]
exclude = [".venv"]
disable_auth = true

[cerebrium.hardware]
region = "us-east-1"
provider = "aws"
compute = "CPU"
cpu = 2
memory = 4.0
gpu_count = 0

[cerebrium.runtime.custom]
port = 8000
entrypoint = ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
healthcheck_endpoint = "/"

[cerebrium.scaling]
min_replicas = 1
max_replicas = 2
cooldown = 30
replica_concurrency = 4
scaling_metric = "concurrency_utilization"
scaling_target = 80

[cerebrium.dependencies.paths]
pip = "requirements.txt"
</code></pre><p>This configuration ensures:</p><ul><li><strong>Scalability</strong>: Auto-scaling between 1-2 replicas based on concurrency, ensuring the app can handle fluctuating call volumes.</li><li><strong>Performance</strong>: Optimized CPU and memory allocation for real-time voice processing.</li><li><strong>Dependency Management</strong>: It clearly points to our&nbsp;requirements.txt&nbsp;file, keeping our dependencies separate and organized.</li></ul><h2 id="define-project-dependencies">Define Project Dependencies</h2><p>Create a <strong>requirements.txt</strong> file in your project directory, you can also view or download the complete file directly from the project's official <a href="https://github.com/videosdk-community/videosdk-telephony-agent/blob/main/requirements.txt">GitHub repository</a></p><h2 id="build-the-core-ai-agent">Build the Core AI Agent</h2><p>Now, let's create our main application in <strong>main.py</strong>. Here's the complete implementation:</p><pre><code class="language-python">import asyncio
import os
import logging
from contextlib import asynccontextmanager
from typing import Optional
from dotenv import load_dotenv
from fastapi import FastAPI, Request, Response
import uvicorn
from pyngrok import ngrok
from videosdk.plugins.sip import create_sip_manager
from videosdk.agents import Agent, JobContext, function_tool, RealTimePipeline
from videosdk.plugins.google import GeminiRealtime, GeminiLiveConfig

load_dotenv()

logging.basicConfig(level=os.getenv("LOG_LEVEL", "INFO"))
logger = logging.getLogger(__name__)

def create_agent_pipeline():
    """Function to create the specific pipeline for our agent."""
    model = GeminiRealtime(
        api_key=os.getenv("GOOGLE_API_KEY"),
        model="gemini-2.0-flash-live-001",
        config=GeminiLiveConfig(
            voice="Leda", # type: ignore
            response_modalities=["AUDIO"], # type: ignore
        ),
    )
    return RealTimePipeline(model=model)

class SIPAIAgent(Agent):
    """A AI agent for handling voice calls."""

    def __init__(self, ctx: Optional[JobContext] = None):
        super().__init__(
            instructions=(
             "You are a helpful voice assistant that can answer questions and help with tasks. Be friendly and concise."
             "Talk to the user as if you are a human and not a robot."
             ),
            tools=[self.end_call], # type: ignore
        )
        self.ctx = ctx
        self.greeting_message = "Hello! Thank you for calling. How can I assist you today?"
        logger.info(f"SIPAIAgent created")

    async def on_enter(self) -&gt; None:
        pass

    async def greet_user(self) -&gt; None:
        await self.session.say(self.greeting_message) # type: ignore

    async def on_exit(self) -&gt; None:
        pass

    @function_tool
    async def end_call(self) -&gt; str:
        """End the current call gracefully"""
        await self.session.say("Thank you for calling. Have a great day!") # type: ignore
        await asyncio.sleep(1)
        await self.session.leave() # type: ignore
        return "Call ended gracefully"

sip_manager = create_sip_manager(
    provider=os.getenv("SIP_PROVIDER", "twilio"),
    videosdk_token=os.getenv("VIDEOSDK_AUTH_TOKEN"),
    provider_config={
        # Twilio config
        "account_sid": os.getenv("TWILIO_ACCOUNT_SID"),
        "auth_token": os.getenv("TWILIO_AUTH_TOKEN"),
        "phone_number": os.getenv("TWILIO_PHONE_NUMBER"),
    }
)

@asynccontextmanager
async def lifespan(app: FastAPI):
    """Lifespan manager for FastAPI app startup and shutdown."""
    port = int(os.getenv("PORT", 8000))
    try:
        ngrok.kill()
        ngrok_auth_token = os.getenv("NGROK_AUTHTOKEN")
        if ngrok_auth_token:
            ngrok.set_auth_token(ngrok_auth_token)
        tunnel = ngrok.connect(str(port), "http")
        sip_manager.set_base_url(tunnel.public_url) # type: ignore
        logger.info(f"Ngrok tunnel created: {tunnel.public_url}")
    except Exception as e:
        logger.error(f"Failed to start ngrok tunnel: {e}")
    yield
    try:
        ngrok.kill()
        logger.info("Ngrok tunnel closed")
    except Exception as e:
        logger.error(f"Error closing ngrok tunnel: {e}")

app = FastAPI(title="SIP AI Agent", lifespan=lifespan)

@app.post("/call/make")
async def make_call(to_number: str):
    if not sip_manager.base_url:
        return {"status": "error", "message": "Service not ready (no base URL)."}
    agent_config = {"room_name": "Call", "enable_pubsub": True}
    details = await sip_manager.make_call(
        to_number=to_number,
        agent_class=SIPAIAgent,
        pipeline=create_agent_pipeline,
        agent_config=agent_config
    )
    return {"status": "success", "details": details}

@app.post("/sip/answer/{room_id}")
async def answer_webhook(room_id: str):
    logger.info(f"Answering call for room: {room_id}")
    body, status_code, headers = sip_manager.get_sip_response_for_room(room_id)
    return Response(content=body, status_code=status_code, media_type=headers.get("Content-Type"))

@app.post("/webhook/incoming")
async def incoming_webhook(request: Request):
    try:
        content_type = request.headers.get("Content-Type", "")
        if "x-www-form-urlencoded" in content_type:
            webhook_data = dict(await request.form())
        else:
            webhook_data = await request.json()
        logger.info(f"Received incoming webhook: {webhook_data}")

        agent_config = {"room_name": "Incoming Call", "enable_pubsub": True}
        body, status_code, headers = await sip_manager.handle_incoming_call(
            webhook_data=webhook_data,
            agent_class=SIPAIAgent,
            pipeline=create_agent_pipeline,
            agent_config=agent_config
        )
        return Response(content=body, status_code=status_code, media_type=headers.get("Content-Type"))
    except Exception as e:
        logger.error(f"Error in incoming webhook: {e}", exc_info=True)
        return Response(content="Error processing request", status_code=500)

@app.get("/sessions")
async def get_sessions():
    return {"sessions": sip_manager.get_active_sessions()}

@app.get("/")
async def root():
    return {"message": "SIP AI Agent"}

if __name__ == "__main__":
    port = int(os.getenv("PORT", 8000))
    logger.info(f"Starting SIP AI Agent on port {port}")
    uvicorn.run(app, host="0.0.0.0", port=port)
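
# ---------------------------------------------------------------------------
# Usage sketch (illustrative, not part of the server): once deployed, the
# endpoints above can be exercised with curl. The base URL below is a
# placeholder for your own Cerebrium deployment URL.
#
# Trigger an outbound call (the phone number is a placeholder; note the "+"
# must be URL-encoded as %2B in the to_number query parameter):
#   curl -X POST "https://<your-deployment-url>/call/make?to_number=%2B15551234567"
#
# List active agent sessions:
#   curl "https://<your-deployment-url>/sessions"
# ---------------------------------------------------------------------------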
</code></pre><h2 id="key-components-explained">Key Components Explained</h2><h3 id="1-agent-pipeline-creation">1. Agent Pipeline Creation</h3><p>The <strong>create_agent_pipeline()</strong> function sets up our Gemini AI model with specific configurations:</p><ul><li><strong>Model</strong>: <strong>gemini-2.0-flash-live-001</strong> for real-time processing</li><li><strong>Voice</strong>: "Leda" for natural-sounding responses</li><li><strong>Modalities</strong>: Audio-only responses for telephony</li></ul><h3 id="2-sipaiagent-class">2. SIPAIAgent Class</h3><p>Our custom agent class inherits from VideoSDK's <strong>Agent</strong> base class and includes:</p><ul><li><strong>Instructions</strong>: Clear behavioral guidelines for the AI</li><li><strong>Tools</strong>: Built-in function tools like <strong>end_call()</strong></li><li><strong>Lifecycle methods</strong>: <strong>on_enter()</strong>, <strong>greet_user()</strong>, <strong>on_exit()</strong></li></ul><h3 id="3-sip-manager-integration">3. SIP Manager Integration</h3><p>The SIP manager handles all telephony operations:</p><ul><li><strong>Provider Configuration</strong>: Twilio credentials and settings</li><li><strong>Call Management</strong>: Both inbound and outbound call handling</li><li><strong>Session Tracking</strong>: Active session monitoring</li></ul><h3 id="api-endpoints">API Endpoints</h3><p>Our API provides several key endpoints:</p><ul><li><strong>POST /call/make</strong>: Initiate outbound calls</li><li><strong>POST /webhook/incoming</strong>: Handle incoming call webhooks</li><li><strong>POST /sip/answer/{room_id}</strong>: Process SIP responses</li><li><strong>GET /sessions</strong>:Monitor active sessions</li></ul><h2 id="environment-configuration">Environment Configuration</h2><p>Set up these environment variables for your deployment by adding them to a&nbsp;<strong>.env</strong>&nbsp;file</p><pre><code class="language-bash"># VideoSDK Configuration
VIDEOSDK_AUTH_TOKEN=your_videosdk_token

# AI Configuration
GOOGLE_API_KEY=your_google_api_key

# Twilio Configuration
TWILIO_ACCOUNT_SID=your_account_sid
TWILIO_AUTH_TOKEN=your_auth_token
TWILIO_PHONE_NUMBER=your_phone_number

# Optional
NGROK_AUTHTOKEN=your_ngrok_token
SIP_PROVIDER=twilio
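
# Note: TWILIO_PHONE_NUMBER should be in E.164 format (e.g. +15551234567).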
</code></pre><p>To make these available to your deployment, add them to the&nbsp;<a href="https://docs.cerebrium.ai/cerebrium/other-topics/using-secrets#using-secrets"><strong>Secrets</strong></a>&nbsp;view on your Cerebrium dashboard.</p><h2 id="deploying-on-cerebrium">Deploying on Cerebrium</h2><p>Cerebrium makes deployment incredibly straightforward:</p><ol><li><strong>Install Cerebrium CLI</strong>:</li></ol><pre><code class="language-bash">pip install cerebrium
</code></pre><ol><li><strong>Deploy your application</strong>:</li></ol><pre><code class="language-bash">cerebrium deploy
</code></pre><p>After the deployment finishes, you will receive a public URL to interact with your application. It will have a format similar to this:</p><pre><code class="language-bash">&lt;https://api.aws.us-east-1.cerebrium.ai/v4/p-xxxxxxxx/sip-ai-agent/&gt;
</code></pre><h2 id="videosdk-documentation-resources">VideoSDK Documentation Resources</h2><p>For deeper integration and customization, explore these VideoSDK resources:</p><ul><li><a href="https://docs.videosdk.live/ai_agents/introduction">AI Agents Introduction</a></li><li><a href="https://docs.videosdk.live/ai_agents/voice-agent-quick-start">Quick Start Guide</a></li><li><a href="https://docs.videosdk.live/ai_agents/core-components/overview">Core Components Overview</a></li><li><a href="https://docs.videosdk.live/ai_agents/core-components/agent">Agent Implementation</a></li><li><a href="https://docs.videosdk.live/ai_agents/core-components/realtime-pipeline">Real-time Pipeline</a></li><li><a href="https://docs.videosdk.live/ai_agents/sip">SIP Integration</a></li><li><a href="https://docs.videosdk.live/ai_agents/running-multiple-agents">Multiple Agents</a></li></ul><h2 id="conclusion">Conclusion</h2><p>Building an AI telephony agent with VideoSDK and deploying on Cerebrium creates a powerful, scalable communication solution. This architecture provides:</p><ul><li><strong>Rapid Development</strong>: Get started in minutes with pre-built components</li><li><strong>Enterprise Scale</strong>: Handle thousands of concurrent calls</li><li><strong>Cost Efficiency</strong>: Pay-per-use pricing model</li><li><strong>Global Reach</strong>: Deploy worldwide with minimal latency</li></ul><p>The combination of VideoSDK's robust voice agent framework and Cerebrium's intelligent cloud platform creates an ideal environment for modern AI-powered telephony solutions.</p><p>Ready to get started? 
Check out the <a href="https://docs.videosdk.live/ai_agents/introduction">VideoSDK AI Agents documentation</a> and deploy your first agent on <a href="https://cerebrium.ai/">Cerebrium</a> today!</p>]]></content:encoded></item><item><title><![CDATA[Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK]]></title><description><![CDATA[Build an iOS video calling feature that triggers native call screens, manages VoIP notifications, and handles answering or rejecting calls seamlessly.]]></description><link>https://www.videosdk.live/blog/flutter-ios-callkit</link><guid isPermaLink="false">677e5cf8dbc171042ac708d9</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 21 Jul 2025 11:46:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/05/call-trigger_iOS_flutter.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/05/call-trigger_iOS_flutter.png" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK"/><p><br><em>Creating a native-like call experience on iOS using Flutter can be challenging—but with the right tools, it becomes seamless. 
In this guide, we’ll show you how to build a video calling feature for iOS that feels just like a regular phone call.</em></p><p><em>Using the flutter_callkit_incoming package, you’ll be able to trigger native iOS call screens, manage VoIP-style incoming call notifications, and handle user interactions like answering or rejecting a call—all without writing any Swift code.</em></p><p><em>Explore the </em><a href="https://github.com/videosdk-live/videosdk-rtc-flutter-call-trigger-example" rel="noreferrer"><em>full source code</em></a><em> to see how everything fits together.</em></p><h2 id="introduction-to-flutter-callkit-incoming"><em>Introduction to Flutter CallKit Incoming</em></h2><p><em>The <code>flutter_callkit_incoming</code> package simplifies the integration of iOS CallKit into Flutter applications. This package allows developers to:</em></p><ul><li><em>Present native iOS call screens.</em></li><li><em>Manage incoming call notifications.</em></li><li><em>Enhance the overall call experience.</em></li></ul><p><em>It abstracts the complexities of implementing CallKit in Swift, enabling quick and efficient call handling for iOS users.</em></p><h2 id="overview-of-the-application-in-ios"><em>Overview of the Application in iOS</em><br/></h2><h3 id="initiating-a-call"><em><strong>Initiating a Call</strong>:</em></h3><ul><li><em>The caller (e.g., Pavan) enters the recipient's unique ID (e.g., Jay) and presses the call button.</em></li><li><em>This action triggers a server request to initiate the call. The server sends a Firebase Cloud Messaging (FCM) notification to the recipient's device through the API, which triggers a native incoming-call screen on the recipient's side.</em></li></ul><h3 id="receiving-the-call"><em><strong>Receiving the Call</strong>:</em></h3><ul><li><em><strong>If the app is in the foreground</strong>: No extra handling is needed; the firebase_messaging package provided by Firebase for Flutter manages foreground messages for you.</em></li><li><em><strong>If the app is in the background</strong>: Firebase delivers the message to a top-level handler registered with onBackgroundMessage, letting you handle messages received while the app is closed or minimized.</em></li></ul><h3 id="handling-call-status"><em><strong>Handling Call Status</strong>:</em></h3><ul><li><em>Upon answering or rejecting the call, the recipient's action triggers a server request (update call). This request updates the call status and is communicated back to the caller (Pavan) to reflect the recipient's response.</em></li></ul><p><em>This structured interaction between client, server, and platform-specific components ensures a smooth and intuitive experience for both the caller and the recipient, providing reliable video communication across devices.</em></p><h2 id="core-components-of-the-app"><em>Core Components of the App</em></h2><p><em>The app integrates several core components to manage video calling and notifications seamlessly across iOS platforms:</em></p><h3 id="fluttercallkitincoming"><em>flutter_callkit_incoming</em></h3><ul><li><em><strong>Purpose</strong>: Provides a native call interface on iOS.</em></li><li><em><strong>Function</strong>: Displays the incoming call UI and handles actions like answering or rejecting calls, leveraging iOS’s native CallKit functionality without requiring native Swift code.</em></li></ul><h3 id="push-notifications"><em>Push Notifications</em></h3><ul><li><em><strong>Purpose</strong>: Notifies the recipient about incoming calls.</em></li><li><em><strong>Function</strong>: Uses Firebase Cloud Messaging (FCM) and Apple Push Notification Service (APNs) for iOS to send VoIP notifications, triggering the appropriate call UI even if the app is in the background or
inactive.</em></li></ul><h3 id="firebase-realtime-database"><em>Firebase Realtime Database</em></h3><ul><li><em><strong>Purpose</strong>: Stores user tokens and caller IDs.</em></li><li><em><strong>Function</strong>: Ensures secure and real-time communication between users, facilitating the initiation of calls and status updates.</em></li></ul><h3 id="videosdk"><em>VideoSDK</em></h3><ul><li><em><strong>Purpose</strong>: Powers the video calling functionality.</em></li><li><em><strong>Function</strong>: Enables real-time video and audio communication, providing high-quality video conferencing features for both Android and iOS users.</em></li></ul><h3 id="nodejs-server"><em>Node.js Server</em></h3><ul><li><em><strong>Purpose</strong>: Acts as the backend service for call initiation and status updates.</em></li><li><em><strong>Function</strong>: Sends push notifications via Firebase and APNs, and updates call statuses (e.g., accepted, rejected) between users.</em></li></ul><p><em>These core components work together to ensure a smooth, feature-rich experience for both video calling and real-time notifications across devices and platforms.</em></p><h2 id="prerequisites"><em>Prerequisites</em></h2><p><em>Before starting with the development of the video calling app, ensure that the following prerequisites are met:</em></p><ol><li><em><strong>Flutter Development Environment</strong>:</em><ul><li><em>Install Flutter SDK and set up your development environment. 
Follow the </em><a href="https://flutter.dev/docs/get-started/install"><em>official Flutter installation guide</em></a><em>.</em></li></ul></li><li><em><strong>Firebase Project</strong>:</em><ul><li><em>Create a Firebase project in the Firebase Console.</em></li><li><em>Set up Firebase Cloud Messaging (FCM) for push notifications.</em></li></ul></li><li><em><strong>VideoSDK Account</strong>:</em><ul><li><em>Sign up for a VideoSDK account at </em><a href="https://www.videosdk.live/"><em>VideoSDK</em></a><em> and obtain the required credentials (API keys and authentication tokens).</em></li></ul></li><li><em><strong>Node.js Server</strong>:</em><ul><li><em>Set up a Node.js server to manage API requests, handle call initiation, and send push notifications. This server will also interact with Firebase to send call-trigger notifications.</em></li></ul></li></ol><p><em>With these prerequisites in place, you'll be ready to begin implementing the video calling functionality, leveraging both Firebase and VideoSDK for a seamless cross-platform experience.</em></p><h2 id="connecting-firebase-to-flutter-using-flutterfire-cli"><em>Connecting Firebase to Flutter Using FlutterFire CLI</em></h2><p/><p><em>To easily set up Firebase with your Flutter app, follow these steps:</em></p><h3 id="step-1-create-a-firebase-project"><em>Step 1: Create a Firebase Project</em></h3><ol><li><em>Go to the </em><a href="https://console.firebase.google.com/" rel="noreferrer"><em>Firebase Console</em></a><em>.</em></li><li><em>Create a new project by following the prompts.</em></li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-10.02.30-AM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="2008" height="1306"/></figure><h3 id="step-2-select-the-flutter-option"><strong><em>Step 2: Select the Flutter 
Option</em></strong></h3><ul><li><em>Once your project is created, navigate to the "Add app" section.</em></li><li><em>Choose the Flutter option to proceed.</em></li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-5.37.00-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1508" height="910"/></figure><h3 id="step-3-install-firebase-cli"><strong><em>Step 3: Install Firebase CLI</em></strong></h3><ul><li><em>Use npm to globally install the Firebase CLI. Run the following command in your terminal:</em></li></ul><pre><code class="language-js">npm install -g firebase-tools</code></pre><h3 id="step-4-login-to-firebase"><strong><em>Step 4: Login to Firebase</em></strong></h3><ul><li><em>Log in to your Firebase account using the Firebase CLI by running:</em></li></ul><pre><code class="language-js">firebase login</code></pre><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-10.22.19-AM-1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="2506" height="562"/></figure><ul><li><em>This will open a browser window prompting you to sign in with your Google account.</em></li></ul><h3 id="step-5-configure-firebase-in-flutter-using-flutterfire-cli"><strong><em>Step 5: Configure Firebase in Flutter Using FlutterFire CLI</em></strong></h3><ul><li><em>Run <code>flutterfire configure</code> to connect your Flutter project to Firebase.</em></li><li><em>Select an existing Firebase project or create a new one.</em></li><li><em>Choose the platforms (e.g., Android, iOS) you want to integrate Firebase with.</em></li><li><em>This will add a new Dart file named <code>firebase_options.dart</code> to your project.</em></li></ul><figure class="kg-card kg-image-card"><img
src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-10.29.02-AM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="2466" height="556"/></figure><h3 id="step-6-add-dependency-in-pubspecyaml-file"><strong><em>Step 6: Add dependency in pubspec.yaml file</em></strong></h3><ul><li><em>Add the <code>firebase_core</code> dependency to your <code>pubspec.yaml</code> file to resolve import errors.</em></li></ul><h2 id="ios-setup-enabling-pushkit-and-callkit"><em>iOS Setup: Enabling PushKit and CallKit</em></h2><p/><p><em>To enable PushKit notifications in your application, it is essential to acquire the necessary certificates from your Apple Developer Program account and set them up for your iOS VoIP application. Follow the steps below:</em></p><h3 id="step-1-request-a-certificate-using-keychain"><em>Step 1: Request a Certificate Using Keychain</em></h3><p/><p><em>1.  Open the <strong>Keychain Access</strong> application on your Mac.</em></p><p><em>2.  Select <code>Certificate Assistant -&gt; Request a Certificate From a Certificate Authority</code>.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-5.14.40-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1822" height="804"/></figure><p><em>3.  Enter your email and common name, then click <strong>Continue</strong>.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-5.35.50-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1466" height="1066"/></figure><p><em>4.
Modify the certificate’s name and save it.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-5.44.45-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1528" height="1036"/></figure><h3 id="step-2-create-an-app-id-in-the-apple-developer-account"><em>Step 2: Create an App ID in the Apple Developer Account</em></h3><p><em>This process requires an active Apple Developer Program account. Follow these steps:</em></p><p><em>1. Log into your Apple Developer account.</em></p><p><em>2. Navigate to <strong>Certificates, Identifiers &amp; Profiles</strong> and select <strong>Identifiers</strong>.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-5.58.30-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1476" height="1058"/></figure><p><em>3. Click the <strong>+</strong> icon to add a new identifier.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-6.19.18-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1464" height="262"/></figure><p><em>4. 
Add a description, specify your bundle ID, check <strong>PushKit</strong> under Capabilities, and click <strong>Continue</strong>.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-6.24.27-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1458" height="502"/></figure><p><em>The image below shows the finished App ID.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-6.19.18-PM-1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1464" height="262"/></figure><h3 id="step-3-create-a-new-voip-services-certificate"><em>Step 3: Create a New VoIP Services Certificate</em></h3><p><br><em>1. Go to the <strong>Certificates</strong> section in your Apple Developer Program account.</em></br></p><p><em>2. Click to add a new certificate.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.07.51-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1776" height="1112"/></figure><p><em>3. Select the <strong>VoIP Services Certificate</strong> and choose the App ID you created.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.08.17-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1828" height="762"/></figure><p><em>4. 
Use the private certificate generated in Keychain Access and click <strong>Continue</strong>.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.08.38-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1834" height="888"/></figure><p><em>5. Download the generated <code>voip_services.cer</code> file.</em></p><h3 id="step-4-convert-cer-to-p12"><em>Step 4: Convert <code>.cer</code> to <code>.p12</code></em></h3><ol><li><em>Double-click the <code>voip_services.cer</code> file to open it in Keychain Access.</em></li><li><em>Locate the certificate titled "VoIP Services: YourProductName".</em></li><li><em>Right-click on it and select <strong>Export</strong> to save it as a <code>.p12</code> file.</em></li><li><em>Create a strong password when prompted and save the file securely.</em></li></ol><h3 id="step-5-convert-p12-to-pem"><em>Step 5: Convert <code>.p12</code> to <code>.pem</code></em></h3><ol><li><em>Open a terminal and navigate to the directory where the <code>.p12</code> file is saved.</em></li><li><em>Run the following command:</em></li></ol><pre><code class="language-javascript">openssl pkcs12 -in YourFileName.p12 -out Certificates.pem -nodes -clcerts -legacy</code></pre><p><em>Enter the password created earlier when prompted. A new file named <code>Certificates.pem</code> will be created in the same directory.</em></p><blockquote><em><strong>Note</strong>: The bundle ID of your VoIP services will influence the exact certificate name.
This <code>.pem</code> file is now ready for use in your push notification implementation.</em></blockquote><h2 id="pushkit-setup"><em>PushKit Setup</em></h2><p/><p><em>PushKit will allow us to send notifications to the iOS device. To implement push notifications, you must upload an APN Auth Key. The following details about the app are needed when sending push notifications via an APN Auth Key:</em></p><ul><li><em>Auth Key file</em></li><li><em>Team ID</em></li><li><em>Key ID</em></li><li><em>Your app’s bundle ID</em></li></ul><h3 id="creating-an-apn-auth-key"><em>Creating an APN Auth Key</em></h3><ol><li><em>Visit the Apple&nbsp;</em><a href="https://developer.apple.com/account/"><em>Developer Member Center</em></a></li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.10.09-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1644" height="1078"/></figure><ol start="2"><li><em>Click on Certificates, Identifiers &amp; Profiles. Go to Keys from the left side.
Create a new Auth Key by clicking on the plus button on the top right side.</em></li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.10.40-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1642" height="1084"/></figure><ol start="3"><li><em>On the following page, add a Key Name, and select APNs.</em></li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.12.59-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1644" height="1074"/></figure><ol start="4"><li><em>Click on the Register button.</em></li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.13.35-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1652" height="1082"/></figure><ol start="5"><li><em>You can download your auth key file from this page and upload this file to the Firebase dashboard without changing its name.</em></li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.14.03-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1644" height="1080"/></figure><ol start="6"><li><em>In your Firebase project, go to Settings and select the Cloud Messaging tab. 
Scroll down to the iOS app configuration and click <strong>Upload</strong> under APNs Authentication Key.</em></li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.14.51-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1772" height="1048"/></figure><ol start="7"><li><em>Enter&nbsp;<code>Key ID</code>&nbsp;and&nbsp;<code>Team ID</code>. The Key ID is part of the file name&nbsp;<code>AuthKey_{Key ID}.p8</code>&nbsp;and is 10 characters long. Your Team ID is in the Apple Member Center under the membership tab, and is always displayed under your account name in the top right corner.</em></li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.15.29-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1776" height="1014"/></figure><p><em>8.
Enable Push Notifications in Capabilities</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.16.20-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1776" height="1032"/></figure><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.16.36-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1776" height="1032"/></figure><ol start="9"><li><em>Enable the selected permissions in Background Modes</em></li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-7.17.01-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1780" height="554"/></figure><h2 id="project-structure"><em>Project Structure</em></h2><pre><code class="language-javascript">Client
│
├── ios
│   └── Runner
│       ├── AppDelegate.swift
│       ├── Info.plist
│       └── GoogleService-Info.plist
│
├── lib
│   │
│   ├── meeting
│   │   ├── api_call.dart
│   │   ├── join_screen.dart
│   │   ├── meeting_controls.dart
│   │   ├── meeting_screen.dart
│   │   └── participant_tile.dart
│   │
│   ├── firebase_options.dart
│   ├── home.dart
│   └── main.dart
│
├── .env
├── pubspec.yaml
└── README.md
</code></pre><p/><p><em>Now switch to the server side and set up the server.</em></p><h3 id="step-1-create-a-new-project-directory"><em>Step 1: Create a New Project Directory</em></h3><pre><code class="language-javascript">mkdir server
cd server 
npm init -y</code></pre><h3 id="step-2-install-required-dependencies"><em>Step 2: Install required dependencies</em></h3><pre><code class="language-javascript">npm install express cors morgan firebase-admin uuid</code></pre><h3 id="step-3-set-up-firebase"><em>Step 3: Set Up Firebase</em></h3><p><em>Enable the Realtime Database (setting its rules to allow read/write for development) and Firebase Cloud Messaging (FCM).</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-1.04.24-PM-1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1602" height="1032"/></figure><h3 id="step-4"><em>Step 4: Download the Service Account Key</em></h3><p><em>Download the private key by navigating to <strong>Project Settings &gt; Service Accounts</strong>, selecting <strong>Node.js</strong>, and then clicking <strong>Download</strong>.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-1.06.54-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1890" height="1218"/></figure><h3 id="step-5"><em>Step 5: Add the Key File to the Server</em></h3><p><em>Place this file in your server directory; the structure should look like this:</em></p><pre><code class="language-javascript">Server
  ├── node_modules/
  ├── server.js
  ├── callkit-3ec73-firebase-adminsdk-ghfto-9d9fc7a362.json
  ├── package-lock.json
└── package.json</code></pre><h3 id="step-6-now-go-to-serverjs-we-have-to-add-several-api-in-our-serverjs"><em>Step 6: Open server.js and add the following API endpoints</em></h3><ol><li><em>GET /: Returns a "Hello Coding" message to confirm the server is running.</em></li><li><em>POST /register-device: Registers a device by storing its unique ID and FCM token in Firebase.</em></li><li><em>POST /api/add: Stores call details (callerId, roomId, and calleeId) in memory.</em></li><li><em>GET /api/getByCallee/:calleeId: Retrieves call details for a given callee ID from memory.</em></li><li><em>POST /send-call-status: Sends a notification to the caller about the status of a call (e.g., accepted, rejected, ended).</em></li><li><em>POST /send-notification: Sends a notification to the recipient with details about an incoming call, including room ID and VideoSDK token.</em></li></ol><h3 id="step-7-now-we-add-below-code-to-run-the-server-at-your-ip"><em>Step 7: Add the code below to run the server on your local IP</em></h3><pre><code class="language-javascript">const PORT = process.env.PORT || 9000;
const LOCAL_IP = '10.0.0.161'; // Replace with your actual local IP address
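// Note: `app` is assumed to be the Express instance created earlier in
// server.js, e.g.:
//   const express = require('express');
//   const app = express();
//   app.use(express.json());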
app.listen(PORT, LOCAL_IP, () =&gt; {
console.log(`Server running on http://${LOCAL_IP}:${PORT}`);
});</code></pre><h3 id="step-8"><em>Step 8: Expose the Server with ngrok</em></h3><p><em>Set up ngrok and run the command below to redirect your local IP to a temporary public URL that you can share outside your local network: ngrok http <u>http://10.0.0.161:9000</u></em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-06-at-10.15.09-AM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1408" height="780"/></figure><p><em>We can now use <u>https://8190-115-246-20-252.ngrok-free.app</u> as our server URL.</em></p><p><em>Refer to the complete code of <strong>server.js</strong> </em><a href="https://github.com/videosdk-live/videosdk-rtc-flutter-call-trigger-example/blob/main/Server/server.js" rel="noreferrer"><em>here</em></a><em>.</em></p><p><em><strong>VideoSDK</strong> is a cutting-edge platform that enables seamless audio and video calling with low latency and robust performance.
Start by signing up at </em><a href="https://videosdk.live/" rel="noopener"><strong><em>VideoSDK</em></strong></a><em>, generate your API token from the dashboard, and integrate real-time calling into your app effortlessly!</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-2.29.43-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="2902" height="1326"/></figure><p><em>Now add this token to your <code>.env</code> file.</em></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-06-at-10.31.18-AM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" loading="lazy" width="1058" height="314"/></figure><p><em>Now add the required permissions for iOS.</em></p><p><br><em>For iOS, add all the required permissions to the <code>Info.plist</code> file:</em></br></p><pre><code class="language-xml">&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"&gt;
&lt;plist version="1.0"&gt;
&lt;dict&gt;
	&lt;key&gt;CADisableMinimumFrameDurationOnPhone&lt;/key&gt;
	&lt;true/&gt;
	&lt;key&gt;CFBundleDevelopmentRegion&lt;/key&gt;
	&lt;string&gt;$(DEVELOPMENT_LANGUAGE)&lt;/string&gt;
	&lt;key&gt;CFBundleExecutable&lt;/key&gt;
	&lt;string&gt;$(EXECUTABLE_NAME)&lt;/string&gt;
	&lt;key&gt;CFBundleIdentifier&lt;/key&gt;
	&lt;string&gt;$(PRODUCT_BUNDLE_IDENTIFIER)&lt;/string&gt;
	&lt;key&gt;CFBundleInfoDictionaryVersion&lt;/key&gt;
	&lt;string&gt;6.0&lt;/string&gt;
	&lt;key&gt;CFBundleName&lt;/key&gt;
	&lt;string&gt;Flutter VideoSDK App&lt;/string&gt;
	&lt;key&gt;CFBundlePackageType&lt;/key&gt;
	&lt;string&gt;APPL&lt;/string&gt;
	&lt;key&gt;CFBundleShortVersionString&lt;/key&gt;
	&lt;string&gt;$(FLUTTER_BUILD_NAME)&lt;/string&gt;
	&lt;key&gt;CFBundleSignature&lt;/key&gt;
	&lt;string&gt;????&lt;/string&gt;
	&lt;key&gt;CFBundleVersion&lt;/key&gt;
	&lt;string&gt;$(FLUTTER_BUILD_NUMBER)&lt;/string&gt;
	&lt;key&gt;LSRequiresIPhoneOS&lt;/key&gt;
	&lt;true/&gt;
	&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
	&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
	&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
	&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;
	&lt;key&gt;RTCAppGroupIdentifier&lt;/key&gt;
	&lt;string&gt;group.com.example.broadcastScreen&lt;/string&gt;
	&lt;key&gt;RTCScreenSharingExtension&lt;/key&gt;
	&lt;string&gt;live.videosdk.flutter.example.FlutterBroadcast&lt;/string&gt;
	&lt;key&gt;UIApplicationSupportsIndirectInputEvents&lt;/key&gt;
	&lt;true/&gt;
	&lt;key&gt;UIBackgroundModes&lt;/key&gt;
	&lt;array&gt;
		&lt;string&gt;voip&lt;/string&gt;
		&lt;string&gt;fetch&lt;/string&gt;
		&lt;string&gt;processing&lt;/string&gt;
		&lt;string&gt;remote-notification&lt;/string&gt;
	&lt;/array&gt;
	&lt;key&gt;BGTaskSchedulerPermittedIdentifiers&lt;/key&gt;
	&lt;array&gt;
		&lt;string&gt;dev.flutter.background.refresh&lt;/string&gt;
	&lt;/array&gt;
	&lt;key&gt;UILaunchStoryboardName&lt;/key&gt;
	&lt;string&gt;LaunchScreen&lt;/string&gt;
	&lt;key&gt;UIMainStoryboardFile&lt;/key&gt;
	&lt;string&gt;Main&lt;/string&gt;
	&lt;key&gt;UISupportedInterfaceOrientations&lt;/key&gt;
	&lt;array&gt;
		&lt;string&gt;UIInterfaceOrientationPortrait&lt;/string&gt;
		&lt;string&gt;UIInterfaceOrientationLandscapeLeft&lt;/string&gt;
		&lt;string&gt;UIInterfaceOrientationLandscapeRight&lt;/string&gt;
	&lt;/array&gt;
	&lt;key&gt;UISupportedInterfaceOrientations~ipad&lt;/key&gt;
	&lt;array&gt;
		&lt;string&gt;UIInterfaceOrientationPortrait&lt;/string&gt;
		&lt;string&gt;UIInterfaceOrientationPortraitUpsideDown&lt;/string&gt;
		&lt;string&gt;UIInterfaceOrientationLandscapeLeft&lt;/string&gt;
		&lt;string&gt;UIInterfaceOrientationLandscapeRight&lt;/string&gt;
	&lt;/array&gt;
	&lt;key&gt;UIViewControllerBasedStatusBarAppearance&lt;/key&gt;
	&lt;false/&gt;
	
&lt;/dict&gt;
&lt;/plist&gt;</code></pre><p><em>Along with the Info.plist file, also check the AppDelegate.swift file:</em></p><pre><code class="language-swift">import UIKit
import Flutter

@main
@objc class AppDelegate: FlutterAppDelegate {
  override func application(
    _ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
  ) -&gt; Bool {
    GeneratedPluginRegistrant.register(with: self)
      application.registerForRemoteNotifications()
    return super.application(application, didFinishLaunchingWithOptions: launchOptions)
  }
}</code></pre><p><em>Okay, let's move forward with integrating VideoSDK into our project. Please refer to the </em><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start" rel="noreferrer"><em>Quickstart</em></a><em> documentation linked here and follow the initial steps up to step 4, which covers creating the participant_tile.dart screen.</em></p><p><em>Now, after adding the other files to the Meeting folder, let's create a meeting_screen.dart file through which our meeting will be created, rendered, and managed.</em></p><pre><code class="language-dart">import 'dart:convert';

import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import 'package:videosdk_flutter_example/home.dart';

import 'package:videosdk_flutter_example/meeting/meeting_controls.dart';
import './participant_tile.dart';
import 'package:http/http.dart' as http;

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;
  final String url;
  final String callerId;
  String? source;
  MeetingScreen(
      {Key? key,
      required this.meetingId,
      required this.token,
      required this.callerId,
      required this.url,
      this.source})
      : super(key: key);

  @override
  State&lt;MeetingScreen&gt; createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;
  var micEnabled = true;
  var camEnabled = true;

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    // create room
    if (widget.source == "true") {
      sendnotification(
          widget.url, widget.callerId, "Call Accepted", widget.meetingId);
    }
    _room = VideoSDK.createRoom(
        roomId: widget.meetingId,
        token: widget.token,
        displayName: "John Doe",
        micEnabled: micEnabled,
        camEnabled: camEnabled,
        defaultCameraIndex: kIsWeb
            ? 0
            : 1 // Index of MediaDevices will be used to set default camera
        );

    setMeetingEventListener();

    // Join room
    _room.join();

    super.initState();
  }

  Future&lt;void&gt; sendnotification(String api, callerId, status, roomId) async {
    await sendCallStatus(
        serverUrl: api, callerId: callerId, status: status, roomId: roomId);
  }

  @override
  void setState(fn) {
    if (mounted) {
      super.setState(fn);
    }
  }

  // listening to meeting events
  void setMeetingEventListener() {
    _room.on(Events.roomJoined, () {
      setState(() {
        participants.putIfAbsent(
            _room.localParticipant.id, () =&gt; _room.localParticipant);
      });
    });

    _room.on(
      Events.participantJoined,
      (Participant participant) {
        setState(
          () =&gt; participants.putIfAbsent(participant.id, () =&gt; participant),
        );
      },
    );

    _room.on(Events.participantLeft, (String participantId) {
      if (participants.containsKey(participantId)) {
        setState(
          () =&gt; participants.remove(participantId),
        );
      }
    });

    _room.on(Events.roomLeft, () {
      participants.clear();
      Navigator.pushAndRemoveUntil(
        context,
        MaterialPageRoute(
            builder: (context) =&gt; Home(
                  callerID: widget.callerId,
                )),
        (route) =&gt; false, // Removes all previous routes
      );
    });
  }

  // onbackButton pressed leave the room
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }

  Future&lt;void&gt; sendCallStatus({
    required String serverUrl,
    required String callerId,
    required String status,
    required String roomId,
  }) async {
    final url = Uri.parse('$serverUrl/send-call-status');
   
    try {
      // Request payload
      final body = jsonEncode({
        'callerId': callerId,
        'status': status,
        'roomId': roomId,
      });

      // Sending the POST request
      final response = await http.post(
        url,
        headers: {
          'Content-Type': 'application/json',
        },
        body: body,
      );

      // Handling the response
      if (response.statusCode == 200) {
        print("Notification sent successfully: ${response.body}");
      } else {
        print("Failed to send notification: ${response.statusCode}");
  
      }
    } catch (e) {
      print("Error sending call status: $e");
    }
  }

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    // ignore: deprecated_member_use
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              Text(widget.meetingId),
              //render all participant
              Expanded(
                child: Padding(
                  padding: const EdgeInsets.all(8.0),
                  child: GridView.builder(
                    gridDelegate:
                        const SliverGridDelegateWithFixedCrossAxisCount(
                      crossAxisCount: 2,
                      crossAxisSpacing: 10,
                      mainAxisSpacing: 10,
                      mainAxisExtent: 300,
                    ),
                    itemBuilder: (context, index) {
                      return ParticipantTile(
                          key: Key(participants.values.elementAt(index).id),
                          participant: participants.values.elementAt(index));
                    },
                    itemCount: participants.length,
                  ),
                ),
              ),
              MeetingControls(
                micEnabled: micEnabled,
                camEnabled: camEnabled,
                onToggleMicButtonPressed: () {
                  setState(() {
                    micEnabled = !micEnabled;
                  });
                  micEnabled ? _room.unmuteMic() : _room.muteMic();
                },
                onToggleCameraButtonPressed: () {
                  setState(() {
                    camEnabled = !camEnabled;
                  });
                  camEnabled ? _room.enableCam() : _room.disableCam();
                },
                onLeaveButtonPressed: () {
                  _room.leave();
                },
              ),
            ],
          ),
        ),
      ),
      //home: JoinScreen(),
    );
  }
}
</code></pre><p><em>Great, we have successfully added our VideoSDK meeting screens. Let's now dive into the main.dart and home.dart files.<br><br>main.dart</br></br></em></p><pre><code class="language-dart">import 'package:firebase_core/firebase_core.dart';
import 'package:firebase_messaging/firebase_messaging.dart';
import 'package:flutter/material.dart';
import 'package:flutter_dotenv/flutter_dotenv.dart';
import 'package:videosdk_flutter_example/home.dart';

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await dotenv.load(fileName: ".env");
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      debugShowCheckedModeBanner: false,
      home: Home(),
    );
  }
}
</code></pre><p><em>After creating the main.dart file, it's time to create the <strong>home.dart</strong> file. home.dart manages incoming notifications and also contains the call-triggering logic.</em></p><pre><code class="language-dart">Future&lt;void&gt; sendNotification({
    required String callerId,
    required String callerInfo,
    required String roomId,
    required String token,
  }) async {
    // Prepare the request payload

    final Map&lt;String, dynamic&gt; payload = {
      'callerId': callerId,
      'callerInfo': {'id': callerInfo},
      'videoSDKInfo': {'roomId': roomId, 'token': token},
    };

    try {
      // Send POST request to the API
      final response = await http.post(
        Uri.parse("${apiUrl!}/send-notification"),
        headers: {'Content-Type': 'application/json'},
        body: jsonEncode(payload),
      );

      // Handle the response from the API
      if (response.statusCode == 200) {
        print('Notification sent successfully');
        if (mounted) {
          ScaffoldMessenger.of(context).showSnackBar(
            const SnackBar(content: Text("Message sent successfully")),
          );
        }
        print(response.body);
      } else {
        print('Failed to send notification: ${response.body}');
      }
    } catch (e) {
      print('Error occurred while sending notification: $e');
    }
  }


Future&lt;void&gt; makeFakeCallInComing(String callerId) async {

  {
    _currentUuid = const Uuid().v4();

    final params = CallKitParams(
      id: _currentUuid,
      appName: 'VideoSdk',
      avatar: 'https://i.pravatar.cc/100',
      handle: "VideoSdk",
      type: 0,
      duration: 30000,
      textAccept: 'Accept',
      textDecline: 'Decline',
      missedCallNotification: const NotificationParams(
        showNotification: true,
        isShowCallback: true,
        subtitle: 'Missed call',
        callbackText: 'Call back',
      ),
      extra: &lt;String, dynamic&gt;{'userId': '1a2b3c4d'},
      headers: &lt;String, dynamic&gt;{'apiKey': 'Abc@123!', 'platform': 'flutter'},
      ios: const IOSParams(
        iconName: 'CallKitLogo',
        handleType: '',
        supportsVideo: true,
        maximumCallGroups: 2,
        maximumCallsPerCallGroup: 1,
        audioSessionMode: 'default',
        audioSessionActive: true,
        audioSessionPreferredSampleRate: 44100.0,
        audioSessionPreferredIOBufferDuration: 0.005,
        supportsDTMF: true,
        supportsHolding: true,
        supportsGrouping: false,
        supportsUngrouping: false,
        ringtonePath: 'system_ringtone_default',
      ),
    );
    await FlutterCallkitIncoming.showCallkitIncoming(params);
    Future.delayed(const Duration(seconds: 10), () async {
      await FlutterCallkitIncoming.endAllCalls();
      // _initializePhoneAccount();
    });
  }
}</code></pre><p><em>To get the entire code for the home.dart file, you can refer </em><a href="https://github.com/videosdk-live/videosdk-rtc-flutter-call-trigger-example/blob/main/Client/lib/home.dart" rel="noreferrer"><em>here</em></a><em>.</em></p><p><em>Note: The above code only applies when the app is in the foreground or running in the background; handling the case where the app is fully terminated is still in progress.</em></p><hr><blockquote><em>App Reference</em></blockquote>
<!--kg-card-begin: html-->
<table>
  <tr>
    <td><img src="https://cdn.videosdk.live/website-resources/docs-resources/flutter-call-kit/calling-screen-flutter-ios.png" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" width="200" height="100"/></td>
    <td><img src="https://cdn.videosdk.live/website-resources/docs-resources/flutter-call-kit/home-screen-flutter-ios.png" alt="Build a Video Calling App with Call Trigger in Flutter - iOS using Firebase and VideoSDK" width="200" height="100"/></td>
  </tr>
</table>
<!--kg-card-end: html-->

<!--kg-card-begin: html-->
<video width="900" height="500" controls="">
  <source src="https://cdn.videosdk.live/website-resources/docs-resources/flutter_ios_callkit.mp4" type="video/mp4">
</source></video>
<!--kg-card-end: html-->
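The Dart code above posts JSON to two server endpoints, /send-notification and /send-call-status. The full server.js is linked earlier in this post; the sketch below only illustrates the shape of the payload handling (the function names, message fields, and the commented Express wiring are illustrative assumptions, not the repo's exact code):

```javascript
// Illustrative sketch of how the Node.js server could turn the client's
// POST bodies into FCM messages. Helper names here are hypothetical;
// see the linked server.js for the actual implementation.

// Build a data-only FCM message for an incoming call. A data message (rather
// than a notification message) lets the app or its FirebaseMessagingService
// decide how to surface the native call UI.
function buildCallNotification({ callerId, callerInfo, videoSDKInfo }) {
  return {
    data: {
      type: 'CALL_INITIATED',
      callerInfo: JSON.stringify(callerInfo),
      videoSDKInfo: JSON.stringify(videoSDKInfo),
    },
    // In the real server the device token is looked up from the
    // Firebase Realtime Database by callerId before sending.
    token: `<device token for ${callerId}>`,
  };
}

// Build the message that tells the original caller whether the call
// was accepted or declined (and which room to join on acceptance).
function buildCallStatusUpdate({ callerId, status, roomId }) {
  return {
    data: { type: 'CALL_STATUS', status, roomId },
    token: `<device token for ${callerId}>`,
  };
}

/* Hypothetical Express wiring (requires express and an initialized
   firebase-admin instance named `admin`):

   app.post('/send-notification', async (req, res) => {
     await admin.messaging().send(buildCallNotification(req.body));
     res.sendStatus(200);
   });
   app.post('/send-call-status', async (req, res) => {
     await admin.messaging().send(buildCallStatusUpdate(req.body));
     res.sendStatus(200);
   });
*/
```

Because the Dart client checks only for a 200 status code, any server framework works as long as it accepts JSON bodies on these two routes and responds with 200 after the FCM send succeeds.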
<h2 id="conclusion"><em>Conclusion</em></h2><p><em>With this, we've successfully built a Flutter iOS video calling app using the flutter_callkit_incoming package, VideoSDK, Firebase, and Node.js. For additional features like chat messaging and screen sharing, feel free to refer to our </em><a href="https://docs.videosdk.live" rel="noreferrer"><em>documentation</em></a><em>. If you encounter any issues with the implementation, don’t hesitate to reach out to us through our </em><a href="https://discord.gg/Gpmj6eCq5u" rel="noreferrer"><em>Discord community</em></a><em>.</em></p></hr>]]></content:encoded></item><item><title><![CDATA[Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK]]></title><description><![CDATA[Create a cross-platform video calling app for Android with native call triggers using Flutter, Node.js, Firebase, VideoSDK, and Telecom Framework.]]></description><link>https://www.videosdk.live/blog/flutter-android-calltrigger</link><guid isPermaLink="false">6777c913dbc171042ac7075a</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 21 Jul 2025 06:39:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/05/call-trigger_android_flutter.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/05/call-trigger_android_flutter.png" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK"/><p/><p>In today’s connected world, building a seamless video calling experience is more important than ever. 
If you're looking to create a cross-platform video calling app with native call-trigger functionality—just like a regular phone call—you’re in the right place.</p><p>In this tutorial, we’ll walk through building a powerful video calling app using:</p><ul><li><strong>Flutter</strong> for the cross-platform UI</li><li><strong>Node.js</strong> for handling server-side token generation</li><li><strong>Firebase</strong> for real-time user status and call event synchronization</li><li><strong>VideoSDK</strong> for high-quality, low-latency audio and video communication</li><li><strong>Android’s Telecom Framework</strong> to provide native call UI and call management behavior</li></ul><p>By the end, you’ll have a production-ready video calling experience complete with incoming/outgoing call screens, call controls, and real-time updates. Be sure to <a href="https://github.com/videosdk-live/videosdk-rtc-flutter-call-trigger-example" rel="noreferrer">check out the sample code</a> and demo video to see the final result in action.<br/></p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/YEtYwzdsw1k?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" title="I Built a Flutter CallKit Integration with VideoSDK (Step-by-Step)"/></figure><h2 id="introduction-to-telecom-framework">Introduction to Telecom Framework</h2><p>The Telecom framework in Android facilitates seamless management of audio and video calls, supporting both traditional SIM-based calls and VoIP functionality. 
It acts as the backbone for handling call connections and user interactions during calls.</p><h3 id="key-components">Key Components:</h3><ul><li><strong>ConnectionService</strong>: Manages call connections, tracks their states, and ensures proper routing of audio and video streams.</li><li><strong>InCallService</strong>: Provides the interface for call interactions, enabling users to view, answer, and manage ongoing calls.</li></ul><p>A thorough understanding of these components ensures a smoother development process when integrating call functionality into Android applications.</p><h2 id="overview-of-the-application-for-android">Overview of the Application for Android</h2><p>The application facilitates seamless video calling by leveraging a well-coordinated workflow between Flutter, Node.js, Firebase, and platform-specific features. Here’s an overview of the call process:</p><h3 id="initiating-a-call">Initiating a Call</h3><ol><li>The caller (e.g., Pavan) enters the recipient's unique ID (e.g., Jay) and presses the call button.</li><li>This action triggers a server request to initiate the call, which sends a Firebase Cloud Messaging (FCM) notification to the recipient's device through an API.</li></ol><h3 id="receiving-the-call">Receiving the Call</h3><ul><li><strong>If the app is in the foreground</strong>: The notification is handled on the Flutter side, where a method channel is used to invoke Kotlin code for the Android <code>TelecomManager</code>. This displays the call interface.</li><li><strong>If the app is in the background</strong>: The notification is processed by the <code>FirebaseMessagingService</code> in Kotlin. It then uses <code>TelecomManager</code> to display the call interface.</li></ul><h3 id="handling-call-status">Handling Call Status</h3><p>Upon answering or rejecting the call, the recipient's action triggers a server request to update the call status. 
This update is communicated back to the caller (Pavan) to reflect the recipient's response.</p><h2 id="core-components-of-the-app">Core Components of the App</h2><p>The app integrates several core components to manage video calling and notifications seamlessly across the Android platform:</p><h3 id="telecom-framework">Telecom Framework</h3><ul><li><strong>Purpose</strong>: Manages incoming and outgoing calls with native system integration.</li><li><strong>Function</strong>: Handles call connection states, audio/video routing, and user interactions through the Android <code>TelecomManager</code>.</li></ul><h3 id="push-notifications">Push Notifications</h3><ul><li><strong>Purpose</strong>: Notifies the recipient about incoming calls.</li><li><strong>Function</strong>: Uses Firebase Cloud Messaging (FCM) to trigger the appropriate call UI even if the app is in the background or inactive.</li></ul><h3 id="firebase-realtime-database">Firebase Realtime Database</h3><ul><li><strong>Purpose</strong>: Stores user tokens and caller IDs.</li><li><strong>Function</strong>: Ensures secure and real-time communication between users, facilitating the initiation of calls and status updates.</li></ul><h3 id="videosdk">VideoSDK</h3><ul><li><strong>Purpose</strong>: Powers the video calling functionality.</li><li><strong>Function</strong>: Enables real-time video and audio communication, providing high-quality video conferencing features for both Android and iOS users.</li></ul><h3 id="nodejs-server">Node.js Server</h3><ul><li><strong>Purpose</strong>: Acts as the backend service for call initiation and status updates.</li><li><strong>Function</strong>: Sends push notifications via Firebase and APNs, and updates call statuses (e.g., accepted, rejected) between users.</li></ul><p>These core components work together to ensure a smooth, feature-rich experience for both video calling and real-time notifications across devices and platforms.</p><h2 
id="prerequisites">Prerequisites</h2><p>Before starting with the development of the video calling app, ensure that the following prerequisites are met:</p><ol><li><strong>Flutter Development Environment</strong>:<ul><li>Install Flutter SDK and set up your development environment. Follow the <a href="https://flutter.dev/docs/get-started/install">official Flutter installation guide</a>.</li></ul></li><li><strong>Firebase Project</strong>:<ul><li>Create a Firebase project in the Firebase Console.</li><li>Set up Firebase Cloud Messaging (FCM) for push notifications.</li></ul></li><li><strong>VideoSDK Account</strong>:<ul><li>Sign up for a VideoSDK account at <a href="https://www.videosdk.live/">VideoSDK</a> and obtain the required credentials (API keys and authentication tokens).</li></ul></li><li><strong>Node.js Server</strong>:<ul><li>Set up a Node.js server to manage API requests, handle call initiation, and send push notifications. This server will also interact with Firebase to send call-trigger notifications.</li></ul></li></ol><p>With these prerequisites in place, you'll be ready to begin implementing the video calling functionality, leveraging both Firebase and VideoSDK for a seamless cross-platform experience.</p><h2 id="connecting-firebase-to-flutter-using-flutterfire-cli">Connecting Firebase to Flutter Using FlutterFire CLI</h2><p/><p>To easily set up Firebase with your Flutter app, follow these steps:</p><h3 id="step-1-create-a-firebase-project">Step 1: Create a Firebase Project</h3><ol><li>Go to the <a href="https://console.firebase.google.com/" rel="noreferrer">Firebase Console</a>.</li><li>Create a new project by following the prompts.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-10.02.30-AM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" loading="lazy" width="2008" 
height="1306"/></figure><h3 id="step-2-select-the-flutter-option"><strong>Step 2: Select the Flutter Option</strong></h3><ul><li>Once your project is created, navigate to the "Add app" section.</li><li>Choose the Flutter option to proceed.</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-5.37.00-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" loading="lazy" width="1508" height="910"/></figure><h3 id="step-3-install-firebase-cli"><strong>Step 3: Install Firebase CLI</strong></h3><ul><li>Use npm to globally install the Firebase CLI. Run the following command in your terminal:</li></ul><pre><code class="language-bash">npm install -g firebase-tools</code></pre><h3 id="step-4-login-to-firebase"><strong>Step 4: Login to Firebase</strong></h3><ul><li>Log in to your Firebase account using the Firebase CLI by running:</li></ul><pre><code class="language-bash">firebase login</code></pre><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-10.22.19-AM-1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" loading="lazy" width="2506" height="562"/></figure><ul><li>This will open a browser window prompting you to sign in with your Google account.</li></ul><h3 id="step-5-configure-firebase-in-flutter-using-flutterfire-cli"><strong>Step 5: Configure Firebase in Flutter Using FlutterFire CLI</strong></h3><ul><li>Use the FlutterFire CLI to connect your Flutter project to Firebase by running <code>flutterfire configure</code>.</li><li>Select an existing Firebase project or create a new one.</li><li>Choose the platforms (e.g., Android, iOS) you want to integrate Firebase with.</li><li>This will add a new Dart file named firebase_options.dart to your project.</li></ul><figure class="kg-card kg-image-card"><img 
src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-10.29.02-AM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" loading="lazy" width="2466" height="556"/></figure><h3 id="step-6-add-dependency-in-pubspecyaml-file"><strong>Step 6: Add dependency in pubspec.yaml file</strong></h3><ul><li>Add the firebase_core dependency to your pubspec.yaml file.</li></ul><hr><h2 id="project-structure">Project Structure</h2><pre><code class="language-javascript">Client
│
├── android
│   └── app
│       └── src
│           ├── main
|               ├── java
│               ├── kotlin/com/example/example
│               |   ├── CallConnectionService.kt
│               |   ├── MainActivity.kt
│               |   └── MyFirebaseMessagingService.kt
│               └── res
├── lib
│   |
│   ├── meeting
│   |   ├── api_call.dart
│   |   ├── join_screen.dart
│   |   ├── meeting_controls.dart
│   |   ├── meeting_screen.dart
│   |   └── participant_tile.dart
│   |
│   ├── firebase_options.dart
│   ├── home.dart
│   └── main.dart
│
├── .env
├── pubspec.yaml
└── README.md
</code></pre><h2 id="now-start-with-android-code-mainactivitykt">Now let's start with the Android code<br><br>MainActivity.kt</br></br></h2><p>MainActivity establishes the bridge for communication between Flutter and the native side using a MethodChannel. We use the Android Telecom API to handle native calls: we register a PhoneAccount, manage incoming calls, and open the phone account settings when required. We also initialize the CallConnectionService.kt class and Firebase from this file.</p><pre><code class="language-kotlin">package com.example.example

import android.content.ComponentName
import android.content.Intent
import android.net.Uri
import android.os.Bundle
import android.telecom.PhoneAccount
import android.telecom.PhoneAccountHandle
import android.telecom.TelecomManager
import android.util.Log
import io.flutter.embedding.android.FlutterActivity
import io.flutter.embedding.engine.FlutterEngine
import io.flutter.plugin.common.MethodChannel
import java.lang.Exception
import com.google.firebase.FirebaseApp

class MainActivity : FlutterActivity() {
    private val CHANNEL = "com.example.example/calls"
    private var telecomManager: TelecomManager? = null
    private var phoneAccountHandle: PhoneAccountHandle? = null
    private var methodChannel: MethodChannel? = null

    override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
        super.configureFlutterEngine(flutterEngine)
        FirebaseApp.initializeApp(this)
        Log.d("MainActivity", "Firebase Initialized")
        telecomManager = getSystemService(TELECOM_SERVICE) as TelecomManager
        val componentName = ComponentName(this, CallConnectionService::class.java)
        phoneAccountHandle = PhoneAccountHandle(componentName, "DhirajAccountId")
        methodChannel = MethodChannel(flutterEngine.dartExecutor.binaryMessenger, CHANNEL)
         Log.d("MainActivity", "Above the call connection Service!!!")
        CallConnectionService.setFlutterEngine(flutterEngine)
        MyFirebaseMessagingService.setFlutterEngine(flutterEngine)
        Log.d("MainActivity", "Below the call connection Service!!!")
        methodChannel?.setMethodCallHandler { call, result -&gt;
            when (call.method) {
                "registerPhoneAccount" -&gt; {
                    try {
                        registerPhoneAccount()
                        result.success("Phone account registered successfully")
                    } catch (e: Exception) {
                        result.error("ERROR", "Failed to register phone account", e.message)
                    }
                }
                "handleIncomingCall" -&gt; {
                    val callerId = call.argument&lt;String&gt;("callerId")
                    handleIncomingCall(callerId)
                    result.success("Incoming call handled successfully")
                }
                "openPhoneAccountSettings" -&gt; 
                {
                    openPhoneAccountSettings()
                    result.success("Incoming call phone account");
                }
                else -&gt; {
                    result.notImplemented()
                }
            }
        }
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        Log.d("MainActivity", "Initializing Firebase")
       
        super.onCreate(savedInstanceState)
    }
    
    override fun onPause() {
        super.onPause()
        Log.d("MainActivity", "App is paused. Performing necessary tasks.")
        // Perform any cleanup or save operations
    }

    override fun onResume() {
        super.onResume()
        Log.d("MainActivity", "App is resumed. Restoring state or resources.")
        // Restore any states or resources
    }

    override fun onStop() {
        super.onStop()
        Log.d("MainActivity", "App is stopped. Releasing resources.")
        
        // Check for incoming call and handle it
        val callerId = intent.getStringExtra("callerId") // Correct key
        Log.d("MainActivity", "Incoming call from: $callerId")
        if (callerId != null) {
            handleIncomingCall(callerId)
        }
    }

    override fun onStart() {
        super.onStart()
        Log.d("MainActivity", "App is started. Preparing resources.")
        // Initialize or prepare resources
    }

    // Register the phone account with the telecom manager
    private fun registerPhoneAccount() {
        val phoneAccount = PhoneAccount.builder(phoneAccountHandle, "VideoSdk")
            .setCapabilities(PhoneAccount.CAPABILITY_CALL_PROVIDER)
            .build()
        telecomManager?.registerPhoneAccount(phoneAccount)
    }

    // Check if the phone account is registered
    private fun isPhoneAccountRegistered(): Boolean {
        val phoneAccounts = telecomManager?.callCapablePhoneAccounts ?: return false
        return phoneAccounts.contains(phoneAccountHandle)
    }

    // Handle the incoming call or open settings if the account is not registered
    private fun handleIncomingCall(callerId: String?) {
        if (!isPhoneAccountRegistered()) {
            // Open phone account settings if not registered
            openPhoneAccountSettings()
            return
        }
        val extras = Bundle().apply {
            val uri = Uri.fromParts("tel", callerId, null)
            putParcelable(TelecomManager.EXTRA_INCOMING_CALL_ADDRESS, uri)
        }
        try {
            telecomManager?.addNewIncomingCall(phoneAccountHandle, extras)
        } catch (cause: Throwable) {
            Log.e("handleIncomingCall", "Error in addNewIncomingCall", cause)
        }
    }

    // Simulate an incoming call (e.g., during app stop)
    private fun simulateIncomingCall() {
        Log.d("MainActivity", "Simulating an incoming call in onStop.")

        // Simulate a caller ID for testing
        val testCallerId = "1234567890"
        handleIncomingCall(testCallerId)
    }

    // Open phone account settings
    private fun openPhoneAccountSettings() {
        try {
            val intent = Intent(TelecomManager.ACTION_CHANGE_PHONE_ACCOUNTS)
            intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
            startActivity(intent)
        } catch (e: Exception) {
            Log.e("openPhoneAccountSettings", "Unable to open settings", e)
        }
    }
}
</code></pre><h2 id="callconnectionservicekt">CallConnectionService.kt</h2><p>CallConnectionService acts as the bridge between the Android Telecom framework and the Flutter app, managing incoming call events. It creates a Connection object that handles call lifecycle actions such as answering or rejecting, and communicates these events back to Flutter via deep links and a method channel.</p><pre><code class="language-kotlin">
package com.example.example
import io.flutter.plugin.common.MethodChannel
import android.content.Intent
import android.net.Uri
import android.telecom.Connection
import android.telecom.ConnectionRequest
import android.telecom.ConnectionService
import android.telecom.TelecomManager
import android.util.Log
import io.flutter.embedding.engine.FlutterEngine
class CallConnectionService : ConnectionService() {
companion object {
        private const val CHANNEL = "com.example.example/calls"
        private var methodChannel: MethodChannel? = null
        fun setFlutterEngine(flutterEngine: FlutterEngine) {
            methodChannel = MethodChannel(flutterEngine.dartExecutor.binaryMessenger, CHANNEL)
        }
    }
    override fun onCreateIncomingConnection(
        connectionManagerPhoneAccount: android.telecom.PhoneAccountHandle?,
        request: ConnectionRequest?
    ): Connection {
        val connection = object : Connection() {
            override fun onAnswer() {
                super.onAnswer()

                val extras = request?.extras
                val roomId = extras?.getString("roomId")
                val callerId = extras?.getString("callerId")

                Log.d("CallConnectionService", "Call Answered")
                Log.d("Room ID:", roomId ?: "null")
                Log.d("Caller ID:", callerId ?: "null")

                // Generate deep link for the meeting screen
                val deepLink = "exampleapp://open/meeting?roomId=$roomId&amp;callerId=$callerId" //creating the deeplink

                val intent = Intent(Intent.ACTION_VIEW).apply {
                    addFlags(Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_CLEAR_TOP)
                    data = Uri.parse(deepLink)
                }
                startActivity(intent) // Fire the deep-link intent so Flutter can handle it
                destroy()
            }

            override fun onReject() {
                super.onReject()

                val extras = request?.extras
                val callerId = extras?.getString("callerId")


                // Generate deep link for the home screen
                val deepLink = "exampleapp://open/home?callerId=$callerId" // Deep link for the rejected call

                val intent = Intent(Intent.ACTION_VIEW).apply {
                    addFlags(Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_CLEAR_TOP)
                    data = Uri.parse(deepLink)
                }
                startActivity(intent) // Fire the deep-link intent so Flutter can handle it
                destroy()
            }
        }
        connection.setAddress(request?.address, TelecomManager.PRESENTATION_ALLOWED)
        connection.setCallerDisplayName("Incoming Call", TelecomManager.PRESENTATION_ALLOWED)
        connection.setInitializing()
        connection.setActive()
        return connection
    }
}
</code></pre><h2 id="myfirebasemessagingservicekt">MyFirebaseMessagingService.kt</h2><p>MyFirebaseMessagingService handles incoming Firebase notifications when the app is in the background, and triggers the incoming-call flow via the Telecom framework from there.</p><pre><code class="language-kotlin">package com.example.example
import android.app.ActivityManager
import android.content.ComponentName
import android.content.Context
import android.net.Uri
import android.os.Bundle
import android.telecom.PhoneAccountHandle
import android.telecom.TelecomManager
import android.util.Log
import androidx.core.app.ActivityCompat
import com.google.firebase.messaging.FirebaseMessagingService
import com.google.firebase.messaging.RemoteMessage
import android.content.pm.PackageManager
import android.Manifest
import io.flutter.embedding.engine.FlutterEngine
import io.flutter.plugin.common.MethodChannel
import android.os.Handler
import android.os.Looper
class MyFirebaseMessagingService : FirebaseMessagingService() {
    companion object {
        private const val CHANNEL_ID = "call_notifications_channel"
        private const val TAG = "MyFirebaseMessagingService"
        private const val FLUTTER_CHANNEL = "ack"
        var methodChannel: MethodChannel? = null
        fun setFlutterEngine(flutterEngine: FlutterEngine) {
            methodChannel = MethodChannel(flutterEngine.dartExecutor.binaryMessenger, FLUTTER_CHANNEL)
        }
    }
    private var telecomManager: TelecomManager? = null
    private var phoneAccountHandle: PhoneAccountHandle? = null
    override fun onMessageReceived(remoteMessage: RemoteMessage) {
      
  
        val data = remoteMessage.data
        Log.d(TAG, "Received notification data: $data")
        if (data.isNotEmpty()) {
            val callerId = data["callerInfo"]
            val roomId = data["roomId"]
            val type = data["type"]
            if (callerId != null &amp;&amp; roomId != null &amp;&amp; type == null) {
           
                handleIncomingCall(callerId, roomId)
                // Send data to Flutter on the main thread
                Handler(Looper.getMainLooper()).post {
                    sendToFlutter(callerId, roomId)
                }
            }
        }
    }
    private fun sendToFlutter(callerId: String, roomId: String) {
        if (methodChannel != null) {
            // Ensure this runs on the main thread
            Handler(Looper.getMainLooper()).post {
                methodChannel?.invokeMethod("onIncomingCall", mapOf("callerId" to callerId, "roomId" to roomId))
                
            }
        } else {
            Log.e(TAG, "MethodChannel is not initialized.")
        }
    }
    private fun handleIncomingCall(callerId: String, roomId: String) {
        initializeTelecomManager() // Ensure telecomManager and phoneAccountHandle are initialized
        if (!isPhoneAccountRegistered()) {
            Log.w(TAG, "Phone account not registered. Cannot handle incoming call.")
            return
        }
        if (!hasReadPhoneStatePermission()) {
            Log.e(TAG, "READ_PHONE_STATE permission not granted.")
            return
        }
        val extras = Bundle().apply {
            val uri = Uri.fromParts("tel", callerId, null)
            putParcelable(TelecomManager.EXTRA_INCOMING_CALL_ADDRESS, uri)
            putString("roomId",roomId)
            putString("callerId",callerId)
        }
        try {
            telecomManager?.addNewIncomingCall(phoneAccountHandle, extras)
            Log.d(TAG, "Incoming call handled successfully for Caller ID: $callerId")
        } catch (cause: Throwable) {
            Log.e(TAG, "Error handling incoming call", cause)
        }
    }
    private fun initializeTelecomManager() {
        if (telecomManager == null || phoneAccountHandle == null) {
            telecomManager = getSystemService(Context.TELECOM_SERVICE) as TelecomManager
            val componentName = ComponentName(this, CallConnectionService::class.java)
            phoneAccountHandle = PhoneAccountHandle(componentName, "DhirajAccountId")
            Log.d(TAG, "TelecomManager and PhoneAccountHandle initialized.")
        }
    }
    private fun isPhoneAccountRegistered(): Boolean {
        val phoneAccounts = telecomManager?.callCapablePhoneAccounts ?: return false
        return phoneAccounts.contains(phoneAccountHandle)
    }
    private fun hasReadPhoneStatePermission(): Boolean {
        return ActivityCompat.checkSelfPermission(this, Manifest.permission.READ_PHONE_STATE) == PackageManager.PERMISSION_GRANTED
    }
    private fun isAppInBackground(): Boolean {
        val activityManager = getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
        val runningAppProcesses = activityManager.runningAppProcesses ?: return true
        for (processInfo in runningAppProcesses) {
            if (processInfo.importance == ActivityManager.RunningAppProcessInfo.IMPORTANCE_FOREGROUND
                &amp;&amp; processInfo.processName == packageName
            ) {
                return false // App is in foreground
            }
        }
        return true // App is in background
    }
}

</code></pre><p>Now let's move to the server side and set it up.</p><h3 id="step-1-create-a-new-project-directory">Step 1: Create a new project directory</h3><pre><code class="language-javascript">mkdir server
cd server 
npm init -y</code></pre><h3 id="step-2-install-required-dependencies">Step 2: Install required dependencies</h3><pre><code class="language-javascript">npm install express cors morgan firebase-admin uuid</code></pre><h3 id="step-3-set-up-firebase">Step 3: Set up Firebase</h3><p>Enable the Realtime Database (with its rules set to allow read/write for testing) and Firebase Cloud Messaging (FCM).</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-1.04.24-PM-1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" loading="lazy" width="1602" height="1032"/></figure><h3 id="step-4">Step 4: Download the service account key</h3><p>Download the private key by navigating to <strong>Project Settings &gt; Service Accounts</strong>, selecting <strong>Node.js</strong>, and then clicking <strong>Download</strong>.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-1.06.54-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" loading="lazy" width="1890" height="1218"/></figure><h3 id="step-5">Step 5: Add the key to the server</h3><p>Place this file in your server directory; the project structure should look like this:</p><pre><code class="language-javascript">Server
  ├── node_modules/
  ├── server.js
  ├── callkit-3ec73-firebase-adminsdk-ghfto-9d9fc7a362.json
  ├── package-lock.json
  └── package.json</code></pre><h3 id="step-6-now-go-to-serverjs-we-have-to-add-several-api-in-our-serverjs">Step 6: Add the API routes to server.js</h3><ol><li>GET /: Returns a "Hello Coding" message to confirm the server is running.</li><li>POST /register-device: Registers a device by storing its unique ID and FCM token in Firebase.</li><li>POST /api/add: Stores call details (callerId, roomId, and calleeId) in memory.</li><li>GET /api/getByCallee/:calleeId: Retrieves call details for a given callee ID from memory.</li><li>POST /send-call-status: Notifies the caller about the status of a call (e.g., accepted, rejected, ended).</li><li>POST /send-notification: Notifies the callee of an incoming call, including the room ID and VideoSDK token.</li></ol><h3 id="step-7-now-we-add-below-code-to-run-the-server-at-your-ip">Step 7: Add the code below to run the server on your local IP</h3><pre><code class="language-javascript">const PORT = process.env.PORT || 9000;
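// ------------------------------------------------------------------
// Hedged sketch, not the complete server.js (see the linked repo for
// the full file). Firebase Admin would be initialized from the
// service-account key downloaded in Step 4 (commented out here):
// const admin = require('firebase-admin');
// const serviceAccount = require('./callkit-3ec73-firebase-adminsdk-ghfto-9d9fc7a362.json');
// admin.initializeApp({ credential: admin.credential.cert(serviceAccount) });
//
// Below, two of the six routes listed above, backed by an in-memory
// store. The helper names (addCall, getByCallee) are illustrative
// assumptions.
const calls = {}; // maps calleeId to { callerId, roomId }

function addCall(callerId, roomId, calleeId) {
  calls[calleeId] = { callerId: callerId, roomId: roomId };
}

function getByCallee(calleeId) {
  return calls[calleeId] || null;
}

if (typeof app !== 'undefined') {
  // POST /api/add: store call details in memory
  app.post('/api/add', function (req, res) {
    addCall(req.body.callerId, req.body.roomId, req.body.calleeId);
    res.json({ success: true });
  });

  // GET /api/getByCallee/:calleeId: retrieve call details for a callee
  app.get('/api/getByCallee/:calleeId', function (req, res) {
    const call = getByCallee(req.params.calleeId);
    if (call) {
      res.json(call);
    } else {
      res.status(404).json({ error: 'No call found' });
    }
  });
}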
const LOCAL_IP = '10.0.0.161'; // Replace with your actual local IP address
app.listen(PORT, LOCAL_IP, () =&gt; {
  console.log(`Server running on http://${LOCAL_IP}:${PORT}`);
});</code></pre><h3 id="step-8">Step 8: Expose the server with ngrok</h3><p>Set up ngrok. The command below tunnels our local IP to a temporary public URL that can be shared outside the local network: <code>ngrok http http://10.0.0.161:9000</code></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-06-at-10.15.09-AM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" loading="lazy" width="1408" height="780"/></figure><p>We can now use <u>https://8190-115-246-20-252.ngrok-free.app</u> as our server URL.</p><p>Refer to the complete code of <strong>server.js</strong> <a href="https://github.com/videosdk-live/videosdk-rtc-flutter-call-trigger-example/blob/main/Server/server.js" rel="noreferrer">here</a>.</p><p><strong>VideoSDK</strong> is a cutting-edge platform that enables seamless audio and video calling with low latency and robust performance. Start by signing up at <a href="https://videosdk.live/" rel="noopener"><strong>VideoSDK</strong></a>, generate your API token from the dashboard, and integrate real-time calling into your app effortlessly!</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-03-at-2.29.43-PM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" loading="lazy" width="2902" height="1326"/></figure><p>Now add this token to your .env file.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Screenshot-2025-01-06-at-10.31.18-AM.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" loading="lazy" width="1058" height="314"/></figure><p>Now add all the required Android permissions. We need camera, audio, internet, wake lock, call log, and post-notification permissions, among others. Here is the full AndroidManifest.xml:</p><pre><code class="language-xml">&lt;manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.example"&gt;

    &lt;!-- Hardware features --&gt;
    &lt;uses-feature android:name="android.hardware.camera" /&gt;
    &lt;uses-feature android:name="android.hardware.camera.autofocus" /&gt;

    &lt;!-- Permissions --&gt;
    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
    &lt;uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE" /&gt;
    &lt;uses-permission android:name="android.permission.CALL_PHONE" /&gt;
    &lt;uses-permission android:name="android.permission.MANAGE_OWN_CALLS" /&gt;
    &lt;uses-permission android:name="android.permission.READ_PHONE_STATE" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_PHONE_STATE" /&gt;
    &lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
    &lt;uses-permission android:name="android.permission.READ_CALL_LOG" /&gt;
    &lt;uses-permission android:name="android.permission.WRITE_CALL_LOG" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE_MEDIA_PROJECTION" /&gt;
    &lt;uses-permission android:name="android.permission.POST_NOTIFICATIONS" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;

    &lt;application
        android:label="Flutter Telecom App"
        android:icon="@mipmap/ic_launcher"
        android:enableOnBackInvokedCallback="true"&gt;

        &lt;!-- Main Activity --&gt;
        &lt;activity
            android:name=".MainActivity"
            android:launchMode="singleTask"
            android:theme="@style/LaunchTheme"
            android:configChanges="orientation|keyboardHidden|keyboard|screenSize|smallestScreenSize|locale|layoutDirection|fontScale|screenLayout|density|uiMode"
            android:hardwareAccelerated="true"
            android:windowSoftInputMode="adjustResize"
            android:exported="true"&gt;
            &lt;!-- Specifies an Android theme to apply to this Activity --&gt;
            &lt;meta-data
                android:name="io.flutter.embedding.android.NormalTheme"
                android:resource="@style/NormalTheme" /&gt;
            &lt;!-- Launch screen configuration --&gt;
            &lt;meta-data
                android:name="io.flutter.embedding.android.SplashScreenDrawable"
                android:resource="@drawable/launch_background" /&gt;
            &lt;intent-filter&gt;
                &lt;action android:name="android.intent.action.MAIN" /&gt;
                &lt;category android:name="android.intent.category.LAUNCHER" /&gt;
            &lt;/intent-filter&gt;
              &lt;intent-filter&gt;
                &lt;action android:name="android.intent.action.VIEW" /&gt;
                &lt;category android:name="android.intent.category.DEFAULT" /&gt;
                &lt;category android:name="android.intent.category.BROWSABLE" /&gt;
                &lt;data android:scheme="exampleapp" android:host="open" /&gt;
            &lt;/intent-filter&gt;
        &lt;/activity&gt;

        &lt;!-- Telecom Connection Service --&gt;
        &lt;service
            android:name=".CallConnectionService"
            android:permission="android.permission.BIND_TELECOM_CONNECTION_SERVICE"
            android:exported="true"&gt;
            &lt;intent-filter&gt;
                &lt;action android:name="android.telecom.ConnectionService" /&gt;
            &lt;/intent-filter&gt;
        &lt;/service&gt;
        
&lt;service
    android:name=".MyFirebaseMessagingService"
    android:permission="com.google.android.c2dm.permission.RECEIVE"
    android:exported="true"&gt;
    &lt;intent-filter&gt;
        &lt;action android:name="com.google.firebase.MESSAGING_EVENT" /&gt;
    &lt;/intent-filter&gt;
&lt;/service&gt;

        &lt;!-- Telecom Management Service --&gt;
        &lt;service
            android:name=".MyTelecomService"
            android:exported="true" /&gt;

        &lt;!-- FlutterActivity declaration --&gt;
        &lt;activity
            android:name="io.flutter.embedding.android.FlutterActivity"
            android:configChanges="orientation|keyboardHidden|keyboard|screenSize|smallestScreenSize|locale|layoutDirection|fontScale|screenLayout|density|uiMode"
            android:hardwareAccelerated="true"
            android:label="Flutter"
            android:theme="@style/LaunchTheme"
            android:exported="true" /&gt;

        &lt;!-- Main Application Metadata --&gt;
        &lt;meta-data android:name="flutterEmbedding" android:value="2" /&gt;

    &lt;/application&gt;
&lt;/manifest&gt;
</code></pre><p>Okay, let's move forward with integrating VideoSDK into our project. Please refer to the <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start" rel="noreferrer">Quickstart</a> documentation and follow it up to Step 4, which creates the participant_tile.dart screen.</p><p>Now, after adding the other files to the Meeting folder, let's create a meeting_screen.dart file, through which our meeting will be created, rendered, and managed.</p><pre><code class="language-Dart">import 'dart:convert';

import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import 'package:videosdk_flutter_example/home.dart';

import 'package:videosdk_flutter_example/meeting/meeting_controls.dart';
import './participant_tile.dart';
import 'package:http/http.dart' as http;

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;
  final String url;
  final String callerId;
  String? source;
  MeetingScreen(
      {Key? key,
      required this.meetingId,
      required this.token,
      required this.callerId,
      required this.url,
      this.source})
      : super(key: key);

  @override
  State&lt;MeetingScreen&gt; createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;
  var micEnabled = true;
  var camEnabled = true;

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    // create room
    if (widget.source == "true") {
      sendnotification(
          widget.url, widget.callerId, "Call Accepted", widget.meetingId);
    }
    _room = VideoSDK.createRoom(
        roomId: widget.meetingId,
        token: widget.token,
        displayName: "John Doe",
        micEnabled: micEnabled,
        camEnabled: camEnabled,
        defaultCameraIndex: kIsWeb
            ? 0
            : 1 // Index of MediaDevices will be used to set default camera
        );

    setMeetingEventListener();

    // Join room
    _room.join();

    super.initState();
  }

  Future&lt;void&gt; sendnotification(String api, callerId, status, roomId) async {
    await sendCallStatus(
        serverUrl: api, callerId: callerId, status: status, roomId: roomId);
  }

  @override
  void setState(fn) {
    if (mounted) {
      super.setState(fn);
    }
  }

  // listening to meeting events
  void setMeetingEventListener() {
    _room.on(Events.roomJoined, () {
      setState(() {
        participants.putIfAbsent(
            _room.localParticipant.id, () =&gt; _room.localParticipant);
      });
    });

    _room.on(
      Events.participantJoined,
      (Participant participant) {
        setState(
          () =&gt; participants.putIfAbsent(participant.id, () =&gt; participant),
        );
      },
    );

    _room.on(Events.participantLeft, (String participantId) {
      if (participants.containsKey(participantId)) {
        setState(
          () =&gt; participants.remove(participantId),
        );
      }
    });

    _room.on(Events.roomLeft, () {
      participants.clear();
      Navigator.pushAndRemoveUntil(
        context,
        MaterialPageRoute(
            builder: (context) =&gt; Home(
                  callerID: widget.callerId,
                )),
        (route) =&gt; false, // Removes all previous routes
      );
    });
  }

  // onbackButton pressed leave the room
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }

  Future&lt;void&gt; sendCallStatus({
    required String serverUrl,
    required String callerId,
    required String status,
    required String roomId,
  }) async {
    final url = Uri.parse('$serverUrl/send-call-status');
    try {
      // Request payload
      final body = jsonEncode({
        'callerId': callerId,
        'status': status,
        'roomId': roomId,
      });

      // Sending the POST request
      final response = await http.post(
        url,
        headers: {
          'Content-Type': 'application/json',
        },
        body: body,
      );

      // Handling the response
      if (response.statusCode == 200) {
        print("Notification sent successfully: ${response.body}");
      } else {
        print("Failed to send notification: ${response.statusCode}");
       
      }
    } catch (e) {
      print("Error sending call status: $e");
    }
  }

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    // ignore: deprecated_member_use
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              Text(widget.meetingId),
              //render all participant
              Expanded(
                child: Padding(
                  padding: const EdgeInsets.all(8.0),
                  child: GridView.builder(
                    gridDelegate:
                        const SliverGridDelegateWithFixedCrossAxisCount(
                      crossAxisCount: 2,
                      crossAxisSpacing: 10,
                      mainAxisSpacing: 10,
                      mainAxisExtent: 300,
                    ),
                    itemBuilder: (context, index) {
                      return ParticipantTile(
                          key: Key(participants.values.elementAt(index).id),
                          participant: participants.values.elementAt(index));
                    },
                    itemCount: participants.length,
                  ),
                ),
              ),
              MeetingControls(
                micEnabled: micEnabled,
                camEnabled: camEnabled,
                onToggleMicButtonPressed: () {
                  setState(() {
                    micEnabled = !micEnabled;
                  });
                  micEnabled ? _room.unmuteMic() : _room.muteMic();
                },
                onToggleCameraButtonPressed: () {
                  setState(() {
                    camEnabled = !camEnabled;
                  });
                  camEnabled ? _room.enableCam() : _room.disableCam();
                },
                onLeaveButtonPressed: () {
                  _room.leave();
                },
              ),
            ],
          ),
        ),
      ),
    );
  }
}
</code></pre><p>Great, we have successfully added our VideoSDK meeting screen. Let's dive into the main.dart and home.dart files.<br><br><strong>main.dart</strong></p><pre><code class="language-Dart">import 'dart:async';

import 'package:firebase_core/firebase_core.dart';
import 'package:firebase_messaging/firebase_messaging.dart';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

import 'package:flutter_dotenv/flutter_dotenv.dart';

import 'package:videosdk_flutter_example/home.dart';

import 'package:videosdk_flutter_example/meeting/meeting_screen.dart';

String? videoSdkKey = dotenv.env["VIDEO_SDK_KEY"];
String? url = dotenv.env["SERVER_URL"];
@pragma('vm:entry-point')
Future&lt;void&gt; _firebaseMessagingBackgroundHandler(RemoteMessage message) async {
  await Firebase.initializeApp();

}

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  FirebaseMessaging.onBackgroundMessage(_firebaseMessagingBackgroundHandler);
  await dotenv.load(fileName: ".env");
  runApp(const MyApp());

  const platform = MethodChannel('com.yourapp/call');
  platform.setMethodCallHandler((call) async {
    if (call.method == "incomingCall") {
      final data = call.arguments as Map;
      final roomId = data["roomId"];
      final callerId = data["callerId"];
      // Handle the incoming call here (e.g., navigate to the meeting screen).
    }
  });
}

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      initialRoute: '/',
      onGenerateRoute: (settings) {
        final uri = Uri.parse(settings.name ?? '');
        if (uri.path == '/meeting') {
          final roomId = uri.queryParameters['roomId'];
          final callerId = uri.queryParameters['callerId'];
          print("Caller id in main.dart file: $callerId");
          return MaterialPageRoute(
            builder: (context) {
              WidgetsBinding.instance.addPostFrameCallback((_) {
                Navigator.pushAndRemoveUntil(
                  context,
                  MaterialPageRoute(
                    builder: (context) =&gt; MeetingScreen(
                      meetingId: roomId!,
                      token: videoSdkKey!,
                      callerId: callerId!,
                      url: url!,
                      source: "true",
                    ),
                  ),
                  (route) =&gt; false, // Removes all previous routes
                );
              });
              return const SizedBox(); // Placeholder widget (not displayed)
            },
          );
        } else if (uri.path == '/home') {
          final callerId = uri.queryParameters['callerId'];
          return MaterialPageRoute(
            builder: (context) {
              WidgetsBinding.instance.addPostFrameCallback((_) {
                Navigator.pushAndRemoveUntil(
                  context,
                  MaterialPageRoute(
                    builder: (context) =&gt; Home(
                      callerID: callerId!,
                      source: "true",
                    ),
                  ),
                  (route) =&gt; false, // Removes all previous routes
                );
              });
              return const SizedBox(); // Placeholder widget (not displayed)
            },
          );
        } else {
          return MaterialPageRoute(
            builder: (context) {
              WidgetsBinding.instance.addPostFrameCallback((_) {
                Navigator.pushAndRemoveUntil(
                  context,
                  MaterialPageRoute(
                    builder: (context) =&gt; Home(),
                  ),
                  (route) =&gt; false, // Removes all previous routes
                );
              });
              return const SizedBox(); // Placeholder widget (not displayed)
            },
          );
        }
      },
      debugShowCheckedModeBanner: false,
    );
  }
}
</code></pre><p>After creating the main.dart file, it's time to create home.dart, where we manage notifications and the platform method calls that are required.</p><pre><code class="language-dart">Future&lt;void&gt; sendNotification({
    required String callerId,
    required String callerInfo,
    required String roomId,
    required String token,
  }) async {
    final Map&lt;String, dynamic&gt; payload = {
      'callerId': callerId,
      'callerInfo': {'id': callerInfo},
      'videoSDKInfo': {'roomId': roomId, 'token': token},
    };

    try {
      // Send POST request to the API
      final response = await http.post(
        Uri.parse("${apiUrl!}/send-notification"),
        headers: {'Content-Type': 'application/json'},
        body: jsonEncode(payload),
      );

      // Handle the response from the API
      if (response.statusCode == 200) {
       
        if (mounted) {
          ScaffoldMessenger.of(context).showSnackBar(
            const SnackBar(content: Text("Message sent successfully")),
          );
        }
      } else {
        print('Failed to send notification: ${response.body}');
      }
    } catch (e) {
      print('Error occurred while sending notification: $e');
    }
  }


Future makeCall(String callerID) async {
  callerId = callerID;
  if (await Permission.phone.isDenied) {
    await Permission.phone.request();
  }
  if (await Permission.phone.isGranted) {
    try {
      final result = await platform
          .invokeMethod('handleIncomingCall', {'callerId': callerID});
    } catch (e) {
      print('Error: $e');
    }
  } else {
    print('Phone permission is not granted');
  }
}</code></pre><p>To get the entire code for the home.dart file, you can refer <a href="https://github.com/videosdk-live/videosdk-rtc-flutter-call-trigger-example/blob/main/Client/lib/home.dart" rel="noreferrer">here</a>.</p><h2 id="deep-linking-in-flutter-for-android-seamlessly-transitioning-from-native-to-flutter"><strong>Deep Linking in Flutter for Android: Seamlessly Transitioning from Native to Flutter</strong></h2><p/><blockquote>Deep linking means that the <em>onclick</em> event redirects the user straight to the specified screen in the app, without any additional user interaction.</blockquote><p><strong>How It Works</strong></p><ol><li><strong>Native Call Trigger</strong>:<br>When a call is initiated while the app is in the background, the native side of the Android app detects the event. This setup ensures that the native platform efficiently handles the call logic.</br></li><li><strong>Handling Call Events</strong>:<br>Upon user interaction (either answering or rejecting the call), the app seamlessly integrates with Flutter. The native side triggers a deep link back into the app, directing the user to a specific Flutter screen depending on the action.</br></li><li><strong>Deep Link Transition</strong>:<br>The deep link opens the required Flutter screen, ensuring that the app behaves as expected, even after transitioning from the background state.</br></li></ol><h4 id="key-implementation">Key Implementation</h4><ul><li><strong>Defining the Deep Link</strong>:<br>A URI scheme defines the deep link format, for example <code>myapp://call?action=answered</code> or <code>myapp://call?action=rejected</code>. You can refer to the code in <code>CallConnectionService.kt</code>.</br></br></li><li><strong>Native Side Configuration</strong>:<br>The native Android code listens for call events and uses <code>Intent</code> to communicate with the Flutter engine. 
The <code>Intent</code> carries the deep link information to Flutter.</br></li><li><strong>Flutter Integration</strong>:<br>Flutter reads the deep link and routes to the page specified in it: if the call was answered, the user is taken directly to the meeting; if it was rejected, the app opens and notifies the caller that the receiver rejected the call.</br></li></ul><hr><blockquote>App Reference</blockquote>
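<p>To make the routing decision concrete, here is a minimal sketch of the deep-link parsing logic in Python (the Dart implementation follows the same idea; the function name and route strings below are illustrative, not part of the sample app):</p><pre><code class="language-python">from urllib.parse import urlparse, parse_qs

def route_for_deep_link(link):
    """Map a deep link such as myapp://call?action=answered to an app route."""
    parsed = urlparse(link)
    action = parse_qs(parsed.query).get("action", [""])[0]
    if parsed.scheme == "myapp" and parsed.netloc == "call":
        if action == "answered":
            return "/meeting"  # join the meeting screen
        if action == "rejected":
            return "/home"     # open the app and notify the caller
    return "/home"             # fallback for unrecognized links

print(route_for_deep_link("myapp://call?action=answered"))  # /meeting
print(route_for_deep_link("myapp://call?action=rejected"))  # /home
</code></pre><p>The same two branches map one-to-one onto the answered/rejected cases handled on the Flutter side.</p>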
<!--kg-card-begin: html-->
<table>
  <tr>
    <td><img src="https://cdn.videosdk.live/website-resources/docs-resources/flutter-call-kit/calling-screen-flutter-android.jpg" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" width="200" height="100"/></td>
    <td><img src="https://cdn.videosdk.live/website-resources/docs-resources/flutter-call-kit/home-screen-flutter-android.jpg" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" width="200" height="100"/></td>
  </tr>
</table>
<!--kg-card-end: html-->

<!--kg-card-begin: html-->
<video width="900" height="500" controls="" poster="default-image.jpg">
  <source src="https://cdn.videosdk.live/website-resources/docs-resources/flutter_android_callkit.mp4" type="video/mp4"/>
  <p>Your browser does not support the video tag or the video is unavailable. 
     <img src="default-image.jpg" alt="Build a Video Calling App with Call Trigger in Flutter - Android using Firebase and VideoSDK" width="900" height="500"/>
  </p>
</video>

<!--kg-card-end: html-->
<h2 id="conclusion">Conclusion</h2><p>With this, we've successfully built the Flutter Android video calling app using the Telecom framework, VideoSDK, and Firebase. For additional features like chat messaging and screen sharing, feel free to refer to our&nbsp;<a href="https://docs.videosdk.live/" rel="noreferrer">documentation</a>. If you encounter any issues with the implementation, don’t hesitate to reach out to us through our&nbsp;<a href="https://discord.gg/Gpmj6eCq5u" rel="noreferrer">Discord community</a>.</p></hr></hr>]]></content:encoded></item><item><title><![CDATA[Build a Conversational Flow AI Agent with Voice Activity & Turn Detection]]></title><description><![CDATA[Build a production-quality, self-contained Voice Agent featuring advanced conversational flow, voice activity detection (VAD), and turn detection.]]></description><link>https://www.videosdk.live/blog/conversational-flow-vad-turn-detection</link><guid isPermaLink="false">686b66b8d4a340b6b279c5fe</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[#sumit-so]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 14 Jul 2025 10:32:25 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/07/conversational-flow.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/07/conversational-flow.png" alt="Build a Conversational Flow AI Agent with Voice Activity & Turn Detection"/><p/><p>In this blog, you'll learn—step by step—how to build a <strong>production-quality, self-contained VideoSDK agent</strong> featuring advanced conversational flow, voice activity detection (VAD), and turn detection. 
By the end, you'll have a complete, working playground example: just run the script, join the playground, and experience a natural, back-and-forth AI conversation with Retrieval-Augmented Generation (RAG) for smart recommendations!</p><h2 id="what-were-building">What We're Building</h2><p>We’re transforming a complex, API-driven service into a simple, playground-ready <strong>AI Voice Agent</strong> with Python script. This agent will:</p><ul><li>Join a VideoSDK meeting room directly from your terminal</li><li>Support natural conversation with voice activity and turn detection</li><li>Use <strong>RAG (Retrieval-Augmented Generation)</strong> to give smart, context-aware answers (e.g. travel advice)</li><li>Be easy to run and extend for your own use cases</li></ul><h2 id="prerequisites">Prerequisites</h2><p>You’ll need accounts and API keys for:</p><ul><li><a href="https://app.videosdk.live/">VideoSDK</a> (<code>VIDEOSDK_AUTH_TOKEN</code>)</li><li><a href="https://aistudio.google.com/app/apikey">Google AI Studio</a> (<code>GOOGLE_API_KEY</code>)</li><li><a href="https://platform.openai.com/api-keys">OpenAI</a> (<code>OPENAI_API_KEY</code>)</li><li><a href="https://www.pinecone.io/">Pinecone</a> (<code>PINECONE_API_KEY</code>, <code>PINECONE_INDEX_NAME</code>)</li></ul><h2 id="project-architecture">Project Architecture</h2><pre><code class="language-bash">├── .env.example
├── agent.py               # Agent's personality, VAD, RAG, and dialogue logic
├── build_pinecone_store.py  # Utility to build the vector store
├── main.py                # Entrypoint: runs the playground agent
├── rag_handler.py         # Handles the RAG logic with Pinecone
├── requirements.txt       # Python dependencies
├── travel_destinations.csv # Data for the RAG system
└── README.md              # This file
</code></pre><h2 id="project-setup">Project Setup</h2><h3 id="create-and-activate-a-virtual-environment">Create and Activate a Virtual Environment</h3><pre><code class="language-bash">python -m venv .venv
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate</code></pre><h3 id="install-the-required-dependencies">Install the Required Dependencies</h3><p>Create a <code>requirements.txt</code> file with:</p><pre><code class="language-bash">videosdk-agents
videosdk-plugins-google
videosdk-plugins-simli
python-dotenv
fastmcp
langchain
langchain-openai
pinecone-client
openai
</code></pre><p>Then install:</p><pre><code class="language-bash">pip install -r requirements.txt
</code></pre><h3 id="configure-environment-variables">Configure Environment Variables</h3><p>Copy <code>.env.example</code> to <code>.env</code> and fill in your API keys:</p><pre><code class="language-bash">VIDEOSDK_AUTH_TOKEN=...
GOOGLE_API_KEY=...
OPENAI_API_KEY=...
PINECONE_API_KEY=...
PINECONE_INDEX_NAME=...</code></pre><h2 id="core-concept-what-is-rag">(Core Concept) What is RAG?</h2><p><strong>Retrieval-Augmented Generation (RAG)</strong> combines the power of language models with external knowledge. When the user asks a question, the agent first searches a knowledge base (using Pinecone) for relevant facts, then provides a personalized answer using those facts. This makes your agent much smarter and context-aware—perfect for things like a travel advisor!</p><h2 id="build-the-knowledge-base">Build the Knowledge Base</h2><p>You must first populate your Pinecone vector store with travel destination data:</p><pre><code class="language-bash">python build_pinecone_store.py
</code></pre><p>This reads <code>travel_destinations.csv</code>, generates embeddings with OpenAI, and uploads them to Pinecone.<br><strong>You must do this before you start the agent!</strong></br></p><h2 id="code-walkthrough">Code Walkthrough</h2><h3 id="mainpy-%E2%80%94-entrypoint-and-session-setup"><code>main.py</code> — Entrypoint and Session Setup</h3><pre><code class="language-python">import asyncio
import sys
import os
import requests
from pathlib import Path
from dotenv import load_dotenv

from videosdk.agents import (
    AgentSession,
    JobContext,
    RoomOptions,
    WorkerJob,
    RealTimePipeline
)
from videosdk.plugins.google import GeminiRealtime, GeminiLiveConfig
from agent import MyVoiceAgent, MyConversationFlow

load_dotenv(override=True)

def get_room_id(auth_token: str) -&gt; str:
    url = "https://api.videosdk.live/v2/rooms"
    headers = {"Authorization": auth_token}
    response = requests.post(url, headers=headers)
    response.raise_for_status()
    return response.json()["roomId"]

async def start_session(context: JobContext):
    model = GeminiRealtime(
        model="gemini-2.0-flash-live-001",
        config=GeminiLiveConfig(
            voice="Aoede",
            response_modalities=["AUDIO"]
        )
    )
    pipeline = RealTimePipeline(model=model)

    system_prompt = (
        "You are a knowledgeable and friendly travel advisor AI assistant. "
        "Your goal is to help users find perfect travel destinations based on their interests. "
        "Use the context provided from the knowledge base to give helpful, personalized recommendations. "
        "Be conversational and friendly - travel planning should be exciting!"
    )
    agent = MyVoiceAgent(system_prompt=system_prompt, personality="travel_advisor")
    conversation_flow = MyConversationFlow(agent)

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    await context.connect()
    await session.start()
    await asyncio.Event().wait()

def make_context() -&gt; JobContext:
    auth_token = os.getenv("VIDEOSDK_AUTH_TOKEN")
    if not auth_token:
        raise ValueError("VIDEOSDK_AUTH_TOKEN environment variable not set!")
    room_id = get_room_id(auth_token)
    room_options = RoomOptions(
        room_id=room_id,
        auth_token=auth_token,
        name="RAG Travel Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    print("🚀 Starting AI Agent for VideoSDK Playground...")
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
</code></pre><h3 id="agentpy-%E2%80%94-personality-conversational-flow-vad-and-rag"><code>agent.py</code> — Personality, Conversational Flow, VAD, and RAG</h3><pre><code class="language-python">import asyncio
from typing import AsyncIterator
from videosdk.agents import Agent, ConversationFlow, function_tool, ChatRole
from rag_handler import search_knowledge_base

class MyVoiceAgent(Agent):
    def __init__(self, system_prompt: str, personality: str):
        super().__init__(instructions=system_prompt)
        self.personality = personality

    async def on_enter(self) -&gt; None:
        await self.session.say("Hey, I'm your friendly travel advisor! Where are you dreaming of going?")

    async def on_exit(self) -&gt; None:
        await self.session.say("Happy travels! Goodbye!")

    @function_tool
    async def end_call(self) -&gt; None:
        await self.session.say("It was great planning with you. Goodbye!")
        await asyncio.sleep(1)
        await self.session.leave()

class MyConversationFlow(ConversationFlow):
    async def run(self, transcript: str) -&gt; AsyncIterator[str]:
        self.agent.chat_context.add_message(role=ChatRole.USER, content=transcript)
        retrieved_context = None
        try:
            retrieved_context = await search_knowledge_base(transcript)
        except Exception as e:
            print(f"Error during RAG retrieval: {e}")
        if retrieved_context:
            self.agent.chat_context.add_message(
                role=ChatRole.SYSTEM, 
                content=f"Here is some relevant context from the travel database: {retrieved_context}"
            )
        async for response_chunk in self.process_with_llm():
            yield response_chunk
</code></pre><h3 id="raghandlerpy-%E2%80%94-pinecone-rag-handler"><code>rag_handler.py</code> — Pinecone RAG Handler</h3><pre><code class="language-python">import os
from typing import List, Dict
from dotenv import load_dotenv
from langchain_openai import OpenAIEmbeddings
from pinecone import Pinecone
import logging

load_dotenv()

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class RAGHandler:
    def __init__(self):
        self.embeddings_model = None
        self.pinecone_index = None
        self.initialized = False

    async def initialize(self):
        if self.initialized:
            return
        try:
            openai_api_key = os.getenv("OPENAI_API_KEY")
            pinecone_api_key = os.getenv("PINECONE_API_KEY")
            pinecone_index_name = os.getenv("PINECONE_INDEX_NAME")
            if not all([openai_api_key, pinecone_api_key, pinecone_index_name]):
                raise ValueError("Missing RAG environment variables!")
            self.embeddings_model = OpenAIEmbeddings(openai_api_key=openai_api_key)
            pc = Pinecone(api_key=pinecone_api_key)
            self.pinecone_index = pc.Index(pinecone_index_name)
            self.initialized = True
            logger.info("RAG Handler initialized successfully.")
        except Exception as e:
            logger.error(f"Failed to initialize RAG Handler: {e}")
            raise

    async def search(self, query: str, top_k: int = 3) -&gt; List[Dict]:
        if not self.initialized:
            await self.initialize()
        query_embedding = self.embeddings_model.embed_query(query)
        search_results = self.pinecone_index.query(
            vector=query_embedding,
            top_k=top_k,
            include_metadata=True
        )
        return [
            {"content": match.metadata.get("text", ""), "score": match.score}
            for match in search_results.matches
        ]

    def format_context_for_llm(self, search_results: List[Dict]) -&gt; str:
        if not search_results:
            return "No relevant information found in the knowledge base."
        context_parts = []
        for i, result in enumerate(search_results, 1):
            context_parts.append(f"Reference {i}:\n{result['content']}\n")
        return "\n".join(context_parts)

rag_handler = RAGHandler()

async def search_knowledge_base(query: str, max_results: int = 3) -&gt; str:
    try:
        search_results = await rag_handler.search(query, top_k=max_results)
        return rag_handler.format_context_for_llm(search_results)
    except Exception as e:
        logger.error(f"Error in search_knowledge_base: {e}")
        return "I apologize, but I'm having trouble accessing the travel database right now."
</code></pre><h3 id="buildpineconestorepy-%E2%80%94-build-the-knowledge-base"><code>build_pinecone_store.py</code> — Build the Knowledge Base</h3><blockquote><em>This script is unchanged from the original. It reads your <code>travel_destinations.csv</code>, generates OpenAI embeddings, and uploads them to Pinecone. Run it before starting the agent!</em></blockquote><h3 id="traveldestinationscsv-%E2%80%94-your-data"><code>travel_destinations.csv</code> — Your Data</h3><blockquote><em>This is your knowledge base: a CSV file of travel destinations to recommend. You can expand it as you wish!</em></blockquote><h2 id="running-the-agent-and-seeing-the-result">Running the Agent and Seeing the Result</h2><p><strong>Build your knowledge base</strong> (if you haven't already):</p><pre><code class="language-bash">python build_pinecone_store.py
</code></pre><p><strong>Run the agent:</strong></p><pre><code class="language-bash">python main.py
</code></pre><p><strong>Talk to your AI agent!</strong> Open the VideoSDK Playground URL printed in your terminal and ask travel questions like "Where should I go for great beaches and food?" The agent will search your knowledge base and respond in real time, with voice activity and turn detection for smooth conversation.</p><p>Example playground URL:</p><pre><code class="language-bash">https://playground.videosdk.live?token=...&amp;meetingId=...
</code></pre><h2 id="conclusion-next-steps">Conclusion &amp; Next Steps</h2><p>You now have a fully working, playground-ready conversational AI with:</p><ul><li>Real-time voice and turn-taking</li><li>RAG-powered smart answers</li><li>Easy extensibility (swap out data, add tools, change the agent’s personality)</li></ul><p><strong>Next steps:</strong></p><ul><li>Add new tools (weather, booking, facts)</li><li>Enhance your knowledge base (more CSV data)</li><li>Try different agent personalities or pipelines</li></ul><p>For more, see the <a href="https://docs.videosdk.live/ai_agents/playground">VideoSDK AI Playground documentation</a>.</p>]]></content:encoded></item><item><title><![CDATA[How to Build a Voice Agent Using Agent2Agent Protocol (A2A) and MCP]]></title><description><![CDATA[Build an AI voice agent with Agent2Agent (A2A) and MCP. Step-by-step guide to SIP, call automation, and scalable agent-to-agent workflows.]]></description><link>https://www.videosdk.live/blog/ai-voice-agent-a2a-mcp</link><guid isPermaLink="false">685ce7404ec7927e167c88c6</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[#sumit-so]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 14 Jul 2025 08:15:21 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/06/Build-a-Voice-Agent-Using-MCP---A2A-Protocols.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/06/Build-a-Voice-Agent-Using-MCP---A2A-Protocols.png" alt="How to Build a Voice Agent Using Agent2Agent Protocol (A2A) and MCP"/><p>What if your AI could do more than just answer questions? What if it could coordinate with other AI Agents, handle bookings, and even trigger workflows in your favorite apps? 
Let’s build that together, step by step.<br>With the Agent-to-Agent (A2A) protocol and Model Context Protocol (MCP), you can turn your Python conversational agent into a powerful, collaborative system. In this blog, you’ll wire up a full-featured, multi-agent AI stack that:</br></p><ul><li>Speaks with users in real time (Gemini TTS/STT, via the VideoSDK pipeline)</li><li>Delegates work to specialist agents (flight, hotel, email) through the <strong>A2A protocol</strong></li><li>Integrates with external tools and workflows (Zapier, calendars, CRMs) using <strong>MCP</strong></li></ul><p>By following each section and copying the code blocks, you’ll build a working conversational AI orchestration layer—one you can extend for travel automation, email workflows, or any complex, multi-step process.</p><h2 id="why-a2a-and-mcp-matter">Why A2A and MCP Matter</h2><p>Most conversational agents do just one thing. But real-world automation demands <em>collaboration</em>: an agent should delegate, coordinate, and call out to other AIs or SaaS tools. <strong>A2A</strong> lets your agents talk to each other—no more brittle monoliths. <strong>MCP</strong> bridges your agents with the outside world, enabling access to tools, APIs, and automations like Zapier. Together, these protocols make your system modular, scalable, and endlessly extensible.</p><h2 id="prerequisites">Prerequisites</h2>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>Requirement</th>
<th>Why you need it</th>
</tr>
</thead>
<tbody>
<tr>
<td>Python 3.10+</td>
<td>Enables <code>asyncio</code> concurrency for agents</td>
</tr>
<tr>
<td>VideoSDK account</td>
<td>Needed for <code>VIDEOSDK_AUTH_TOKEN</code> and meetings</td>
</tr>
<tr>
<td>Google AI Studio key</td>
<td>Powers Gemini speech-to-text &amp; text-to-speech</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
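<p>Before wiring up the real framework, it helps to see what A2A gives you in miniature: a registry where agents advertise a capability domain, and typed message passing between them. The following is a toy, framework-free sketch of that pattern (all class and method names here are illustrative stand-ins, not the actual VideoSDK A2A API):</p><pre><code class="language-python">import asyncio

class ToyRegistry:
    """Tiny stand-in for an A2A registry: agents register under a domain,
    peers discover them by domain and send typed messages."""
    def __init__(self):
        self.agents = {}

    def register(self, agent_id, domain, handler):
        self.agents[agent_id] = {"domain": domain, "handler": handler}

    def find_by_domain(self, domain):
        return [aid for aid, info in self.agents.items() if info["domain"] == domain]

    async def send(self, to_agent, message_type, content):
        # Deliver the message to the target agent's registered handler.
        await self.agents[to_agent]["handler"](message_type, content)

registry = ToyRegistry()
results = []

async def flight_handler(message_type, content):
    # Specialist agent: reacts to a delegated query.
    results.append("search flights to " + content["destination"])

async def demo():
    registry.register("agent_flight_001", "flight", flight_handler)
    # Orchestrator side: discover by capability domain, then delegate.
    target = registry.find_by_domain("flight")[0]
    await registry.send(target, "flight_search_query", {"destination": "Tokyo"})

asyncio.run(demo())
print(results[0])  # search flights to Tokyo
</code></pre><p>The real <code>a2a.registry</code> and <code>a2a.send_message</code> calls used later in this post follow the same discover-then-delegate shape, with the framework handling transport and agent lifecycle for you.</p>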
<h2 id="environment-setup">Environment Setup</h2><p>Let’s get our environment ready:</p><pre><code class="language-bash">python -m venv .venv
source .venv/bin/activate        # (Windows: .venv\Scripts\activate)
pip install videosdk-agents==0.7.* \
            videosdk-plugins-google==0.2.* \
            aiohttp python-dotenv
</code></pre><p>Create a <code>.env</code> file in your project root:</p><pre><code class="language-bash">VIDEOSDK_AUTH_TOKEN=your_videosdk_token
GOOGLE_API_KEY=your_google_api_key
ZAPIER_MCP_SERVER=https://hooks.zapier.com/...   # optional for MCP
</code></pre><h2 id="project-layout">Project Layout</h2><p>Here’s how your folder should look:</p><pre><code class="language-bash">a2a-mcp-agents/
├── agents/
│   ├── email_agent.py
│   ├── flight_agent.py
│   ├── hotel_agent.py
│   └── travel_agent.py
├── session_manager.py
└── main.py
</code></pre><p>Create these folders and files as shown. Each agent gets its own file for clarity.</p><h2 id="building-specialist-agents">Building Specialist Agents</h2><p>Specialist agents run “silently” in the background, waiting for A2A messages from other agents. Here’s how to build them.</p><h3 id="emailagent-agentsemailagentpy">EmailAgent (<code>agents/email_agent.py</code>)</h3><pre><code class="language-python">from videosdk.agents import Agent, AgentCard, A2AMessage
import asyncio

class EmailAgent(Agent):
    """Sends confirmations and updates by email."""

    def __init__(self):
        super().__init__(
            agent_id="agent_email_001",
            instructions="You automate booking confirmations, travel updates, and notifications."
        )

    async def handle_send_booking_email(self, message: A2AMessage):
        email_type  = message.content.get("email_type", "")
        details     = message.content.get("details", "")
        recipient   = message.content.get("recipient", "")
        print(f"[EmailAgent] Sending {email_type} to {recipient}")
        await asyncio.sleep(0.5)  # Simulate I/O
        status = "sent"
        await self.a2a.send_message(
            to_agent="travel_agent_1",
            message_type="email_confirmation",
            content={"status": status, "email_type": email_type}
        )

    async def on_enter(self):
        await self.register_a2a(AgentCard(
            id="agent_email_001",
            name="Email Automation Service",
            domain="email",
            capabilities=["send_confirmations", "send_updates"]
        ))
        self.a2a.on_message("send_booking_email", self.handle_send_booking_email)

    async def on_exit(self):
        await self.unregister_a2a()
</code></pre><h3 id="flightagent-agentsflightagentpy">FlightAgent (<code>agents/flight_agent.py</code>)</h3><pre><code class="language-python">from videosdk.agents import Agent, AgentCard, A2AMessage

class FlightAgent(Agent):
    """Finds and books flights."""

    def __init__(self):
        super().__init__(
            agent_id="agent_flight_001",
            instructions="Provide flight options with prices, times, airline names."
        )

    async def handle_flight_search_query(self, message: A2AMessage):
        dest  = message.content["destination"]
        dates = message.content["dates"]
        email = message.content["customer_email"]
        reply = (f"Flights to {dest} on {dates}:\n"
                 f"1) Direct $299 08:00–11:30\n"
                 f"2) Economy Plus $399 14:15–17:45\n"
                 f"3) Premium Eco $549 18:30–22:00")
        await self.a2a.send_message(
            to_agent="travel_agent_1",
            message_type="flight_booking_response",
            content={"response": reply}
        )
        email_agent = self.a2a.registry.find_agents_by_domain("email")[0]
        await self.a2a.send_message(
            to_agent=email_agent,
            message_type="send_booking_email",
            content={"email_type": "flight_options", "details": reply, "recipient": email}
        )

    async def on_enter(self):
        await self.register_a2a(AgentCard(
            id="agent_flight_001",
            name="Skymate",
            domain="flight",
            capabilities=["search_flights"]
        ))
        self.a2a.on_message("flight_search_query", self.handle_flight_search_query)

    async def on_exit(self):
        await self.unregister_a2a()
</code></pre><h3 id="hotelagent-agentshotelagentpy">HotelAgent (<code>agents/hotel_agent.py</code>)</h3><pre><code class="language-python">from videosdk.agents import Agent, AgentCard, A2AMessage

class HotelAgent(Agent):
    """Finds and books hotels."""

    def __init__(self):
        super().__init__(
            agent_id="agent_hotel_001",
            instructions="Suggest hotels with amenities, price, and location."
        )

    async def handle_hotel_search_query(self, message: A2AMessage):
        dest  = message.content["destination"]
        dates = message.content["dates"]
        email = message.content["customer_email"]
        reply = (f"Hotels in {dest} for {dates}:\n"
                 f"1) Grand Plaza $180/night\n"
                 f"2) Comfort Inn $120/night\n"
                 f"3) Luxury Resort $350/night")
        await self.a2a.send_message(
            to_agent="travel_agent_1",
            message_type="hotel_booking_response",
            content={"response": reply}
        )
        email_agent = self.a2a.registry.find_agents_by_domain("email")[0]
        await self.a2a.send_message(
            to_agent=email_agent,
            message_type="send_booking_email",
            content={"email_type": "hotel_options", "details": reply, "recipient": email}
        )

    async def on_enter(self):
        await self.register_a2a(AgentCard(
            id="agent_hotel_001",
            name="Hotel Booker",
            domain="hotel",
            capabilities=["search_hotels"]
        ))
        self.a2a.on_message("hotel_search_query", self.handle_hotel_search_query)

    async def on_exit(self):
        await self.unregister_a2a()
</code></pre><h2 id="orchestrator-agent-travelagent">Orchestrator Agent: TravelAgent</h2><p>The <strong>TravelAgent</strong> is your “voice” agent. It listens to the user, delegates tasks to other agents using A2A, and (optionally) reaches out to external tools through MCP.</p><p><code>agents/travel_agent.py</code>:</p><pre><code class="language-python">from videosdk.agents import Agent, function_tool, AgentCard, A2AMessage, MCPServerHTTP
import asyncio, os
from typing import Dict, Any

class TravelAgent(Agent):
    def __init__(self):
        zapier_url = os.getenv("ZAPIER_MCP_SERVER", "")
        super().__init__(
            agent_id="travel_agent_1",
            instructions="Book complete trips: flights, hotels, emails.",
            mcp_servers=[MCPServerHTTP(url=zapier_url)] if zapier_url else []
        )

    @function_tool
    async def book_travel_package(self, destination: str, travel_dates: str, email: str) -&gt; Dict[str, Any]:
        await self.session.say(f"Looking up options for {destination}…")
        for _ in range(3):
            flights = self.a2a.registry.find_agents_by_domain("flight")
            hotels  = self.a2a.registry.find_agents_by_domain("hotel")
            if flights and hotels:
                break
            await asyncio.sleep(2)
        if not (flights and hotels):
            return {"status": "error", "message": "Booking agents are not available yet."}
        await self.a2a.send_message(flights[0], "flight_search_query",
                                    {"destination": destination, "dates": travel_dates, "customer_email": email})
        await self.a2a.send_message(hotels[0], "hotel_search_query",
                                    {"destination": destination, "dates": travel_dates, "customer_email": email})
        return {"status": "processing"}

    async def handle_flight_response(self, msg: A2AMessage):
        await self.session.say(f"Flight update: {msg.content['response']}")

    async def handle_hotel_response(self, msg: A2AMessage):
        await self.session.say(f"Hotel update: {msg.content['response']}")

    async def handle_email_confirm(self, msg: A2AMessage):
        await self.session.say("Confirmation email sent.")

    async def on_enter(self):
        await self.register_a2a(AgentCard(
            id="travel_agent_1",
            name="Travel Coordinator",
            domain="travel",
            capabilities=["travel_planning"]
        ))
        self.a2a.on_message("flight_booking_response", self.handle_flight_response)
        self.a2a.on_message("hotel_booking_response", self.handle_hotel_response)
        self.a2a.on_message("email_confirmation",     self.handle_email_confirm)
        await self.session.say("Hello! Where would you like to travel?")

    async def on_exit(self):
        await self.unregister_a2a()
</code></pre><h2 id="wiring-it-all-together">Wiring It All Together</h2><p><code>session_manager.py</code>:</p><pre><code class="language-python">import os, asyncio
from videosdk.agents import AgentSession, RealTimePipeline
from videosdk.plugins.google import GeminiRealtime, GeminiLiveConfig
from google.genai.types import Modality
from agents.travel_agent import TravelAgent
from agents.flight_agent  import FlightAgent
from agents.hotel_agent   import HotelAgent
from agents.email_agent   import EmailAgent

def make_voice_pipeline() -&gt; RealTimePipeline:
    return RealTimePipeline(
        model=GeminiRealtime(
            model="gemini-2.0-flash-exp",
            api_key=os.environ["GOOGLE_API_KEY"],
            config=GeminiLiveConfig(voice="Aoede", response_modalities=[Modality.AUDIO]),
        )
    )

def make_text_pipeline() -&gt; RealTimePipeline:
    return RealTimePipeline(
        model=GeminiRealtime(
            model="gemini-2.0-flash-exp",
            api_key=os.environ["GOOGLE_API_KEY"],
            config=GeminiLiveConfig(response_modalities=[Modality.TEXT]),
        )
    )

async def create_room() -&gt; str:
    import aiohttp
    async with aiohttp.ClientSession() as s:
        async with s.post(
            "https://api.videosdk.live/v2/rooms",
            headers={"Authorization": os.environ["VIDEOSDK_AUTH_TOKEN"], "Content-Type": "application/json"},
        ) as r:
            return (await r.json())["roomId"]

async def start_agents(room_id: str):
    # context.playground = True lets us test in the web UI
    travel = AgentSession(TravelAgent(), make_voice_pipeline(),
                          {"meetingId": room_id, "join_meeting": True, "playground": True})
    flight  = AgentSession(FlightAgent(),  make_text_pipeline(), {"join_meeting": False, "playground": True})
    hotel   = AgentSession(HotelAgent(),   make_text_pipeline(), {"join_meeting": False, "playground": True})
    email   = AgentSession(EmailAgent(),   make_text_pipeline(), {"join_meeting": False, "playground": True})

    await asyncio.gather(flight.start(), hotel.start(), email.start())
    await asyncio.sleep(3)     # allow registry
    await travel.start()
</code></pre><h2 id="application-entry-point">Application Entry Point</h2><p><code>main.py</code>:</p><pre><code class="language-python">#!/usr/bin/env python3
import asyncio, os, signal
from session_manager import create_room, start_agents

def validate_env():
    for var in ("VIDEOSDK_AUTH_TOKEN", "GOOGLE_API_KEY"):
        if var not in os.environ:
            raise RuntimeError(f"{var} is not set")

async def main():
    validate_env()
    room = await create_room()
    print(f"Meeting room created: {room}")
    loop = asyncio.get_running_loop()
    loop.add_signal_handler(signal.SIGINT, loop.stop)
    await start_agents(room)

if __name__ == "__main__":
    asyncio.run(main())
</code></pre><h2 id="try-it-in-the-videosdk-agents-playground">Try It in the VideoSDK Agents Playground</h2><p>When the <strong>TravelAgent</strong> session starts (with <code>playground: True</code> in the context), VideoSDK prints a link like:</p><pre><code class="language-bash">Agent started in playground mode
Interact with agent here at:
https://playground.videosdk.live?token=&lt;auth_token&gt;&amp;meetingId=&lt;meeting_id&gt;</code></pre><p>Open that link in Chrome, give microphone access, and start talking:</p><pre><code class="language-bash">You: I’d like to book a flight to Tokyo next month.
Agent: Looking up options for Tokyo…
Agent: Flight update: Flights to Tokyo on 2025-08-03…
Agent: Hotel update: Hotels in Tokyo…
Agent: Confirmation email sent.</code></pre><ul><li>Hear the agent respond in real time using Gemini TTS.</li><li>Watch as A2A messages coordinate between your specialist agents.</li><li>No client app needed—just use the web playground!</li></ul><blockquote><strong>Tip:</strong> The playground mode is for testing and debugging. Disable <code>playground: True</code> in production for a secure, scalable agent deployment.</blockquote><h2 id="key-takeaways">Key Takeaways</h2><ul><li><strong>A2A</strong> enables true agent collaboration, not just single-bot workflows.</li><li><strong>MCP</strong> opens the door to external tools and SaaS integrations.</li><li><strong>VideoSDK Agents Playground</strong> makes it easy to iterate, test, and show off your whole multi-agent system.</li><li>With just a handful of Python files, you can stand up a fully working, extensible, <em>open source</em> conversational AI.</li></ul><p>Now go build your own network of specialist agents—and let them do the heavy lifting for your users and your business!</p>]]></content:encoded></item><item><title><![CDATA[HIPAA Compliant Video Conferencing API: Complete Guide for Telehealth Video Calls In Your App]]></title><description><![CDATA[Embed HIPAA-compliant video conferencing API into your app using VideoSDK for seamless telehealth video calls. 
Ensure privacy and security.]]></description><link>https://www.videosdk.live/blog/hipaa-compliant-video-conferencing-api</link><guid isPermaLink="false">6638814320fab018df10e49e</guid><category><![CDATA[TeleHealth]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Fri, 31 Jan 2025 05:10:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/05/Hipaa-Compliant-Video-API--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/05/Hipaa-Compliant-Video-API--1-.jpg" alt="HIPAA Compliant Video Conferencing API: Complete Guide for Telehealth Video Calls In Your App"/><p>In the healthcare sector, the demand for secure and efficient communication channels is more important than ever. As Twilio discontinues its Programmable Video API, healthcare providers, and developers seek a HIPAA-compliant video conferencing solution. With the advent of telehealth services and the increasing reliance on virtual interactions, ensuring <a href="https://www.hhs.gov/hipaa/index.html" rel="noreferrer">HIPAA</a> compliance in video conferencing is paramount to safeguarding patients' sensitive information.</p><h2 id="what-is-hipaa">What is HIPAA?</h2><p>HIPAA stands for Health Insurance Portability and Accountability Act, a federal law that establishes standards for protecting sensitive patient information, known as <a href="https://www.hhs.gov/answers/hipaa/what-is-phi/index.html" rel="noreferrer">Protected Health Information (PHI)</a>. The HIPAA Act, enacted in 1996, sets forth regulations to safeguard patients' sensitive details and ensure data privacy and security within the healthcare industry. 
Any organization that handles PHI, including healthcare providers, insurance companies, and technology vendors, must adhere to HIPAA regulations to protect confidentiality and prevent data breaches.</p><h3 id="roles-of-covered-entities-and-business-associates">Roles of Covered Entities and Business Associates</h3><p>Under HIPAA regulations, healthcare providers are obligated to ensure the confidentiality and integrity of PHI. When these entities collaborate with third-party service providers like Twilio, Zoom, VideoSDK, or others, to facilitate communication services, they enter into Business Associate Agreements (BAAs) to ensure that PHI remains protected throughout the transmission process.</p><h2 id="importance-of-hipaa-compliant-video-conferencing-api">Importance of HIPAA Compliant Video Conferencing API</h2><p>In the healthcare industry, video communication has become an essential tool for remote consultations, patient-provider interactions, and team collaboration. However, traditional video conferencing platforms may not meet the stringent security and privacy requirements set forth by HIPAA. This is where a&nbsp;<strong>HIPAA Compliant Video conferencing</strong>&nbsp;API comes into play.</p><p>A <a href="https://www.videosdk.live/solutions/telehealth" rel="noreferrer">HIPAA-Compliant Video Conferencing API</a> is a secure, cloud-based solution that enables healthcare organizations to seamlessly integrate video communication into their existing systems and workflows. 
By adhering to HIPAA regulations, this API ensures that all video interactions and data transmissions are protected, safeguarding the confidentiality and integrity of sensitive patient information.</p><h2 id="benefits-of-secure-video-conferencing-api">Benefits of Secure Video Conferencing API</h2><ul><li>Encryption: A HIPAA Compliant Video API employs robust encryption protocols, such as AES-256, to protect video and audio data during transmission, ensuring that sensitive information remains secure and inaccessible to unauthorized parties.</li><li>Access Controls: The API provides granular access controls, allowing healthcare organizations to manage user permissions and restrict access to sensitive data based on individual roles and responsibilities.</li><li>Audit Logging: Comprehensive audit logging capabilities enable healthcare providers to track and monitor all video communication activities, ensuring compliance and facilitating the investigation of any potential security incidents.</li><li>Data Backup and Retention: The API offers secure data backup and retention features, ensuring that patient records and video interactions are properly stored and can be retrieved when needed, under HIPAA requirements.</li><li>Compliance Certifications: A HIPAA-Compliant Video API should hold relevant compliance certifications, such as HIPAA or SOC 2, demonstrating its adherence to industry-standard security and privacy practices.</li></ul><h2 id="what-are-the-requirements-for-hipaa-compliance">What are the Requirements for HIPAA Compliance?</h2><p>Ensure that your video application meets HIPAA compliance standards by implementing necessary security measures, including encrypted communication, signed webhook requests, and HTTP authentication. 
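As a concrete illustration of the "signed webhook requests" requirement, webhook signing is typically an HMAC computed over the raw request body and compared in constant time. The following is a hedged Python sketch only; the secret, payload, and HMAC-SHA256 scheme are illustrative assumptions, not VideoSDK's documented webhook format.

```python
import hashlib
import hmac

def verify_webhook(raw_body: bytes, signature_header: str, secret: bytes) -> bool:
    """Recompute the HMAC over the raw body and compare in constant time."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# Hypothetical secret and payload for illustration only.
secret = b"webhook-signing-secret"
body = b'{"event": "session-started"}'
signature = hmac.new(secret, body, hashlib.sha256).hexdigest()

assert verify_webhook(body, signature, secret)        # untampered body verifies
assert not verify_webhook(b'{"event": "x"}', signature, secret)  # tampered body fails
```

Using `hmac.compare_digest` rather than `==` avoids leaking the signature through timing differences.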
By integrating these security protocols into your application architecture, you can mitigate the risk of data breaches and unauthorized access to PHI.</p><p>Enforce access controls, such as HTTP Basic Authentication, to restrict access to video communication functionalities and sensitive data. By authenticating users' credentials before granting access to PHI-related resources, you can prevent unauthorized viewing or tampering with patient information.</p><h2 id="why-transition-from-twilio-to-other-hipaa-compliant-apis">Why Transition from Twilio to Other HIPAA-Compliant APIs?</h2><p>As Twilio sunsets its Programmable Video API, healthcare providers must migrate their video communication solutions to a new platform. VideoSDK's HIPAA-compliant video API offers a seamless transition path, providing a comprehensive set of tools and services to facilitate the migration process. By leveraging VideoSDK's telehealth video conferencing API, healthcare providers can maintain HIPAA compliance while benefiting from advanced features and improved security measures.
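The HTTP Basic Authentication control mentioned above boils down to decoding the `Authorization` header and checking credentials before serving any PHI-related resource. Here is a minimal, hedged Python sketch (the username and password are made up; a production system should compare against salted password hashes, never plaintext):

```python
import base64
from typing import Optional, Tuple

def parse_basic_auth(authorization: str) -> Optional[Tuple[str, str]]:
    """Decode an HTTP Basic 'Authorization' header into (username, password)."""
    scheme, _, encoded = authorization.partition(" ")
    if scheme.lower() != "basic" or not encoded:
        return None
    try:
        decoded = base64.b64decode(encoded).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return None
    username, sep, password = decoded.partition(":")
    return (username, password) if sep else None

header = "Basic " + base64.b64encode(b"clinician:s3cret").decode("ascii")
assert parse_basic_auth(header) == ("clinician", "s3cret")
assert parse_basic_auth("Bearer some-token") is None  # wrong scheme is rejected
```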
</p><h2 id="what-are-the-benchmarks-for-choosing-the-best-hipaa-compliant-video-api">What are the Benchmarks for Choosing the Best HIPAA-Compliant Video API?</h2><p>When selecting a HIPAA Compliant Video API, healthcare organizations should consider the following factors:</p><ol><ol><li>Security and Compliance: Ensure that the API has robust security measures in place and holds the necessary compliance certifications.</li><li>Scalability and Reliability: Choose an API that can accommodate the growing needs of your healthcare organization and provide reliable, high-quality video communication services.</li><li>Ease of Integration: Opt for an API that seamlessly integrates with your existing healthcare systems and workflows, minimizing the need for complex implementation and training.</li><li>Customer Support: Evaluate the level of customer support and technical assistance provided by the API vendor, as this can be crucial in ensuring a smooth implementation and ongoing operation.</li><li>Pricing and Flexibility: Consider the pricing structure and the flexibility of the API's licensing model to ensure it aligns with your organization's budget and long-term needs.</li></ol></ol><h2 id="building-a-hipaa-compliant-video-conferencing-with-videosdks-api">Building a HIPAA Compliant Video Conferencing with VideoSDK's API</h2><p><a href="https://www.videosdk.live/">VideoSDK's</a> video conferencing API emerges as a leading choice in the telehealth industry because it offers compliant <a href="https://www.videosdk.live/solutions/telehealth" rel="noreferrer">video conferencing solutions</a> tailored to meet the unique needs of healthcare organizations. 
By leveraging VideoSDK's telehealth video conferencing API, healthcare providers can facilitate secure video conferencing experiences while maintaining compliance with regulatory requirements.</p><h3 id="benefits-of-implementing-a-videosdks-api">Benefits of Implementing a VideoSDK's API</h3><ol><li><strong>Enhanced Patient Engagement</strong>: VideoSDK's API offers a secure and user-friendly platform for video communication, enhancing patient engagement and facilitating remote consultations, follow-ups, and educational sessions. By providing a seamless and reliable video experience, healthcare organizations can <a href="https://www.medesk.net/en/blog/how-to-get-more-patients/" rel="noreferrer">get more patients</a> and strengthen relationships with patients and improve overall satisfaction.</li><li><strong>Efficient Care Coordination</strong>: VideoSDK's API enables healthcare teams to collaborate effectively, regardless of physical location, through features like virtual meetings, case discussions, and real-time consultations. This streamlined communication enhances care coordination, leading to better patient outcomes and operational efficiency. Many organisations also rely on trained remote professionals from <a href="https://www.virtuallatinos.com/find-talent/healthcare-medical-virtual-assistant/" rel="noreferrer">Virtual Latinos</a> to support scheduling coordination, documentation follow-ups, and administrative communication tasks.</li><li><strong>Cost Savings</strong>: By leveraging VideoSDK's API for telehealth services, healthcare organizations can reduce costs associated with in-person visits, transportation, and infrastructure maintenance. 
The efficient use of telehealth technology can lead to significant savings while maintaining high-quality care delivery.</li><li><strong>Compliance Assurance</strong>: VideoSDK's API is designed to meet the stringent security and privacy requirements of HIPAA, ensuring that all video interactions and patient data remain confidential and secure. By choosing a trusted telehealth API provider like VideoSDK, healthcare organizations can rest assured that they are compliant with regulatory standards and safeguarding patient information.</li></ol><h2 id="comprehensive-features-of-videosdks-api">Comprehensive Features of VideoSDK's API</h2><p>VideoSDK provides <a href="https://docs.videosdk.live/">comprehensive documentation</a> and examples for building video collaboration apps across various platforms, ensuring a seamless development experience. By following best practices and adhering to VideoSDK's guidelines, developers can create secure communication channels that safeguard patient data against unauthorized access or interception.</p><h3 id="compliance-assurance">Compliance Assurance</h3><p>VideoSDK's API is designed to meet the stringent security and privacy requirements of HIPAA, ensuring that all video interactions and patient data remain confidential and secure. By choosing a trusted telehealth HIPAA API provider like VideoSDK, healthcare organizations can rest assured that they are compliant with regulatory standards and safeguarding patient information.</p><h3 id="enhanced-security-measures">Enhanced Security Measures</h3><p>VideoSDK's API prioritizes data security, offering end-to-end encryption to protect sensitive patient information. 
By implementing advanced encryption algorithms, VideoSDK ensures that all video communications remain secure and compliant with HIPAA regulations.</p><h3 id="customizable-video-communication-solutions">Customizable Video Communication Solutions</h3><p>VideoSDK's API provides developers with a flexible and customizable framework to build tailored video communication solutions. From patient consultations to remote monitoring, VideoSDK's API can be adapted to meet the unique requirements of healthcare providers.</p><h3 id="seamless-integration">Seamless Integration</h3><p>VideoSDK's API seamlessly integrates with existing healthcare systems and applications, allowing for a smooth transition from Twilio's Programmable Video API. By offering compatibility with popular development frameworks and platforms, VideoSDK simplifies the migration process for developers.</p><hr><p>Creating a HIPAA-compliant video communication solution with VideoSDK's API empowers healthcare providers to deliver secure and efficient virtual care services while safeguarding patients' sensitive information.</p><p>By following the guidelines outlined in this comprehensive guide, you can build a robust communication platform that meets HIPAA compliance standards and enhances the overall quality of patient care.</p>]]></content:encoded></item><item><title><![CDATA[Video PD (Personal Discussion): Customer Verification and Onboarding Solution]]></title><description><![CDATA[Explore effective onboarding strategies with Video PD.
Enhance employee engagement and learning through personalized video discussions.]]></description><link>https://www.videosdk.live/blog/video-personal-discussion-vpd</link><guid isPermaLink="false">6674299e20fab018df10ec98</guid><category><![CDATA[Industry Update]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 30 Jan 2025 11:08:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/Video-PD.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-video-pd">What is Video PD?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Video-PD.jpg" alt="Video PD (Personal Discussion): Customer Verification and Onboarding Solution"/><p>Video PD (Video Personal Discussion) is an innovative technology that transforms the way businesses interact with their customers. It is a video-based communication medium like Video KYC that enables real-time, personalized face-to-face interactions between customers and businesses, revolutionizing the traditional verification and onboarding processes.</p><p>VideoPD allows connecting with a live representative to customers through a secure video call. This enables businesses to verify the customer's identity, gather necessary information, and provide a tailored onboarding experience, all while maintaining a personal touch.</p><p>By integrating video solutions, businesses can better understand their customers' needs, build stronger relationships, and enhance the overall customer experience. 
This technology is particularly useful for banking, financial institutions, lenders, healthcare, live-commerce, and other organizations that require detailed customer information for verification and onboarding purposes.</p><h2 id="why-are-traditional-verification-procedures-not-correct">Why Do Traditional Verification Procedures Fall Short?</h2><p><a href="https://www.videosdk.live/blog/video-kyc-vs-traditional-kyc#what-is-traditional-kyc-physical-kyc" rel="noreferrer">Traditional customer verification</a> and onboarding processes face various challenges that limit their effectiveness and hurt the overall customer experience. The inability to establish a personal connection during the verification process can also undermine the trust businesses aim to build with their customers.</p><p>Another challenge is the limited ability to verify the customer's identity effectively. Traditional methods, such as document scanning or authentication, can be susceptible to fraud and may not provide the security and assurance that businesses and customers require in today's digital landscape.</p><p><strong>Further challenges in traditional verification processes include:</strong></p><ul><li><strong>Geographical Limitations</strong>: Businesses often cannot reach out to customers in remote or distant locations, leading to a limited customer base and missed opportunities.</li><li><strong>Delayed Onboarding:</strong> The time-consuming nature of in-person meetings and physical document verification can significantly slow down the customer onboarding process, resulting in a poor customer experience.</li><li><strong>Lack of Personalization:</strong> Phone calls and face-to-face interactions, while sometimes necessary, can often feel impersonal and fail to address the unique needs and preferences of each customer.</li><li><strong>Increased Operational Costs:</strong> Sending agents to customers' homes or offices can be a costly, time-consuming, and resource-intensive process, putting
a strain on the business.</li></ul><p>Understanding these challenges is crucial in appreciating the value that innovative solutions like Video PD bring to the table.</p><h2 id="how-video-pd-solves-the-personalizing-customer-interaction-problem">How Video PD Solves the Personalizing Customer Interaction Problem?</h2><p>Video PD addresses the shortcomings of traditional verification processes by providing a more personalized and engaging customer interaction. By leveraging real-time video interactions, businesses can create a seamless and efficient onboarding experience that caters to the unique needs of each customer. </p><p>One of the key advantages of Video PD is its ability to establish a personal connection between the customer and the business representative. Through the live video call, customers can interact with a real person, ask questions, and receive immediate feedback, encouraging a sense of trust and connection that is often lacking in traditional verification methods.</p><p>The <a href="https://www.videosdk.live/"><strong>video-based solution</strong></a> allows businesses to gather more comprehensive information about their customers, including visual cues and non-verbal communication. This enables them to better understand the customer's needs, preferences, and concerns, and tailor the onboarding process accordingly.</p><p>Businesses can also streamline the verification process, reducing the time and effort required from the customer by integrating video solutions. 
Instead of navigating through lengthy forms and questionnaires, customers can simply engage in a personalized video discussion, where the necessary information can be collected in a more efficient and user-friendly manner.</p><h2 id="trends-in-video-pd-for-bfsi">Trends in Video PD for BFSI</h2><h3 id="increasing-adoption">Increasing Adoption</h3><p>The BFSI sector is increasingly adopting Video PD solutions for applications such as customer onboarding, KYC (Know Your Customer) processes, and remote advisory services. This trend is fueled by the need for secure, face-to-face interactions in a digital format, which helps build trust and maintain compliance with regulatory standards.</p><h3 id="technological-advancements">Technological Advancements</h3><p>Integrating AI and machine learning with Video PD infra improves the security and efficiency of these interactions. For example, AI-driven facial recognition and biometric authentication are becoming standard features in Video PD systems used by banks and financial institutions to verify customer identities remotely.</p><h3 id="market-growth-in-bfsi">Market Growth in BFSI</h3><p>The AI in the BFSI market, which includes Video PD solutions, was valued at  $14.2 billion in 2022 and is expected to reach <strong>$94.1 billion by 2028</strong>, growing at a <strong>CAGR of 33.9% during 2023-2028</strong>. 
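Those figures can be sanity-checked with the standard compound-growth relationship, future = present × (1 + CAGR)^n. Solving for n shows roughly how many years of 33.9% annual growth it takes $14.2B to become $94.1B (the base year and compounding convention are assumptions in this sketch):

```python
import math

def years_to_reach(present: float, future: float, cagr: float) -> float:
    """Invert future = present * (1 + cagr) ** n to solve for n."""
    return math.log(future / present) / math.log(1.0 + cagr)

n = years_to_reach(14.2, 94.1, 0.339)
print(f"~{n:.1f} years of 33.9% growth")  # on the order of the 2022-2028 window
```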
This growth reflects the increasing dependence on AI-driven video solutions for customer interactions and fraud prevention in the BFSI sector.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Video-PD-Market-Size-2.png" class="kg-image" alt="Video PD (Personal Discussion): Customer Verification and Onboarding Solution" loading="lazy" width="1650" height="950"/></figure><h2 id="who-needs-a-video-pd-video-personal-discussion">Who needs a Video PD (video personal discussion)?</h2><p>Video PD is useful for organizations that require detailed customer information for verification and onboarding purposes, such as:</p><ul><li><strong>Financial Services</strong>: Video PD is particularly beneficial for banks, investment firms, and insurance companies can use VideoPD to securely onboard new customers, verify their identities, and collect necessary financial information in a personalized and efficient manner.</li><li><strong>Insurance Companies</strong>: Insurance providers require detailed information about customers to decide risk and set premiums.</li><li><strong>Government and Public Sector</strong>: Government agencies need to verify identities and gather information for various purposes, such as issuing identification documents or processing benefits.</li><li><strong>Live-commerce</strong>: Online retailers can use Video PD to enhance their customer onboarding experience, verify customer identities, and provide personalized support during the purchase process.</li><li><strong>Healthcare</strong>: Healthcare providers can leverage Video PD to onboard new patients, conduct virtual consultations, and ensure compliance with regulatory requirements.</li></ul><p>Across these diverse industries, Video PD offers a solution that addresses the limitations of traditional verification processes and delivers a more personalized, secure, and engaging customer experience.</p><h2 
id="why-should-businesses-implement-video-pd-in-their-customer-onboarding-process">Why should Businesses implement Video PD in their Customer Onboarding Process?</h2><p>Implementing Video Personal Discussions (Video PD) in the customer verification process can enjoy several benefits that improve the overall customer experience and streamline their operations:</p><ul><li><strong>Personalized Customer Experience</strong>: Video PD allows businesses to create a more personalized and engaging onboarding experience for their customers. By leveraging the strength of real-time video solutions, businesses can establish a personal connection, better understand customer needs, and provide tailored support accordingly.</li><li><strong>Improved Efficiency and Productivity</strong>: Video PD can streamline the customer onboarding process, reducing the time and effort required from both the customer and the business. By eliminating the need for lengthy forms and questionnaires, the process becomes more efficient and user-friendly. Recognizing the efforts of staff managing these smoother processes through an&nbsp;employee rewards software&nbsp;can help maintain high levels of motivation and service quality.</li><li><strong>Increased Security and Fraud Prevention</strong>: Video PD offers a more robust identity verification process compared to traditional methods. By leveraging facial recognition, document verification, and real-time interaction, businesses can better provide the authenticity of their customers and mitigate the risk of fraud.</li><li><strong>Competitive Advantage</strong>: Implementing Video PD can give businesses a competitive edge in their respective industries. 
By offering a more personalized and efficient onboarding experience, businesses can differentiate themselves from their competitors and attract and retain customers more effectively.</li><li><strong>Regulatory Compliance</strong>: In industries with strict regulatory requirements, such as finance and healthcare, Video PD can help businesses comply with relevant laws and regulations by providing a secure and auditable customer verification process.</li></ul><p>By adopting the Video PD Solution, businesses can enhance the customer experience, improve operational efficiency, and gain a competitive advantage in the market.</p><h2 id="how-does-videosdk-help-with-video-pd-solution">How does VideoSDK help with Video PD Solution?</h2><p><a href="https://www.videosdk.live/solutions/video-kyc">VideoSDK</a> is a leading and <strong>best video PD solutions provider</strong> across the globe that helps businesses streamline their personalized experience. We offer a comprehensive suite of features that cater to the diverse needs of businesses and their customers. Some of the key features include:</p><ul><li><strong>Secure Video Conferencing</strong>: Provides a secure and reliable video conferencing solution that enables real-time, face-to-face interactions between customers and business representatives.</li><li><strong>Identity Verification</strong>: VideoSDK leverages advanced technologies, such as facial recognition, document verification, and QR codes for secure customer identification, ensuring the authenticity of customer identities during the onboarding process. 
</li><li><strong>Customizable Workflows</strong>: Our end-to-end customizable SDK offers the flexibility to customize the onboarding workflow, allowing businesses to tailor the process to their specific requirements and customer needs.</li><li><strong>Compliance and Audit Trails</strong>: VideoSDK’s cloud recording feature provides detailed audit trails, ensuring that businesses can meet regulatory requirements and provide evidence of their customer verification processes.</li><li><strong>Integrations</strong>: Our solutions often integrate with existing systems and identity management platforms, enabling seamless data flow and a unified customer experience.</li><li><strong>Analytics and Reporting</strong>: VideoSDK’s solutions provide businesses with valuable insights and analytics, allowing them to track key performance indicators, identify areas for improvement, and make data-driven decisions.</li></ul><h3 id="features-offered-by-videosdk">Features Offered By VideoSDK</h3><p>VideoSDK solutions come with a range of features that enhance the customer experience and streamline the verification process:</p><ul><li><strong>Accurate Geo-tagging:</strong> Precise location tracking to verify the customer's identity and the authenticity of the interaction.</li><li><strong>Image and Video Capture:</strong> The option to capture and store relevant documents, photographs, and video footage for future reference and compliance purposes.</li><li><strong>Storage Integration:</strong> The option to integrate your own storage, with an instant dump feature for real-time recording and audit.</li><li><strong>Secure Data Encryption:</strong> Robust data encryption protocols to protect sensitive customer information and ensure compliance with data privacy regulations.</li><li><strong>Analytics</strong>: Our analytics dashboard provides valuable insights that allow you to track key performance indicators and make data-driven decisions.</li></ul><p>By leveraging these comprehensive features, businesses can enhance
their customer onboarding experience, improve operational efficiency, and maintain regulatory compliance, all while strengthening their competitive position in the market.</p><h2 id="conclusion">Conclusion</h2><p>As the digital landscape continues to develop, the demand for Video PD solutions will only grow. Businesses that embrace this technology will be well-positioned to meet the changing expectations of their customers, stay ahead of the competition, and maintain regulatory compliance in an increasingly complex business environment.</p><p>By implementing Video PD, businesses can unlock a new era of personalized customer interactions, driving customer satisfaction, operational efficiency, and long-term success. This technology will play a pivotal role in shaping the future of customer engagement and onboarding.</p>]]></content:encoded></item><item><title><![CDATA[Understanding Pre-call Integration in Flutter]]></title><description><![CDATA[Discover how to implement Precall Integration in Flutter SDK. Our comprehensive guide helps you enhance your app's communication capabilities, ensuring smooth and efficient user interactions.]]></description><link>https://www.videosdk.live/blog/precall-integration-in-flutter</link><guid isPermaLink="false">6685226720fab018df10f5f6</guid><category><![CDATA[Flutter]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 30 Jan 2025 09:45:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/Pre-Call-Check-Flutter.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Pre-Call-Check-Flutter.jpg" alt="Understanding Pre-call Integration in Flutter"/><p>In the dynamic world of video communication, ensuring a smooth and high-quality experience for users is paramount. 
One critical aspect of achieving this is through the implementation of a <code>Precall</code> system. This system allows users to test and configure their devices before joining a call, ensuring that audio and video are optimal. In this blog, we will explore the <strong>precall</strong> feature provided by VideoSDK's Flutter SDK, discussing its significance, goals, and step-by-step implementation.</p><h2 id="the-goal-of-precall-integration">The Goal of Precall Integration</h2><p>The primary goal of the <strong>precall</strong> feature is to enhance the user experience by allowing users to:</p><ul><li>Verify and configure their audio and video devices.</li><li>Troubleshoot any potential issues before joining the actual call.</li></ul><p>By incorporating a <strong>precall</strong> check, developers can significantly reduce the likelihood of technical difficulties during live video sessions, leading to more professional and uninterrupted communication.</p><h2 id="step-by-step-guide-integrating-the-precall-feature">Step-by-Step Guide: Integrating the Precall Feature</h2><h3 id="step-1-verify-permissions">Step 1: Verify Permissions</h3><p>First, ensure that your application has the required permissions to access user devices, including cameras, microphones, and speakers. Use the <code>checkPermissions()</code> method from the VideoSDK class to confirm if these permissions are granted.</p><pre><code class="language-dart">import 'package:videosdk/videosdk.dart';

void checkMediaPermissions() async {
  try {
  //By default both audio and video permissions will be checked.
  Map&lt;String, bool&gt;? checkAudioVideoPermissions = await VideoSDK.checkPermissions();
  //For checking just audio permission.
  Map&lt;String, bool&gt;? checkAudioPermissions = await VideoSDK.checkPermissions(Permissions.audio);
  //For checking just video permission.
  Map&lt;String, bool&gt;? checkVideoPermissions = await VideoSDK.checkPermissions(Permissions.video);
  //For explicitly checking both audio and video permissions.
  Map&lt;String, bool&gt;? checkBothPermissions = await VideoSDK.checkPermissions(Permissions.audio_video);
  } catch (ex) {
    print("Error in checkPermissions(): $ex");
  }
  // Output: Map object for both audio and video permission:
  /*
        Map(2)
        0 : {"audio" =&gt; true}
            key: "audio"
            value: true
        1 : {"video" =&gt; true}
            key: "video"
            value: true
    */
}</code></pre><p><strong>Enable Permissions on iOS</strong>: Add the following to your <code>Podfile</code> to enable microphone and camera permission checks:</p><pre><code class="language-ruby">post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
    target.build_configurations.each do |config|
    # Add these preprocessor definitions to your Podfile
      config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
        'PERMISSION_CAMERA=1',
        'PERMISSION_MICROPHONE=1',
      ]
    end
  end
end</code></pre><blockquote>Note: The <code>checkPermissions()</code> method is not supported in desktop applications or the Firefox browser.</blockquote><blockquote>Note: When microphone and camera permissions are blocked, rendering device lists is not possible:</blockquote><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-21.png" class="kg-image" alt="Understanding Pre-call Integration in Flutter" loading="lazy" width="2244" height="1376"/></figure><h3 id="step-2-request-permissions-if-necessary">Step 2: Request Permissions (if Necessary)</h3><p>If the required permissions are not granted, use the <code>requestPermissions()</code> method from the VideoSDK class to prompt users to allow access to their devices.</p><p>To request microphone and camera permissions on iOS devices, add the following to your Podfile:</p><pre><code class="language-ruby">post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
    target.build_configurations.each do |config|
    # Add these preprocessor definitions to your Podfile
      config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
        'PERMISSION_CAMERA=1',
        'PERMISSION_MICROPHONE=1',
      ]
    end
  end
end</code></pre><blockquote>NOTE: In case permissions are blocked by the user, the permission request dialogue cannot be re-rendered programmatically. In such cases, consider providing guidance to users on manually adjusting their permissions.</blockquote><pre><code class="language-dart">void requestMediaPermissions() async {
  try {
      //By default both audio and video permissions will be requested.
      Map&lt;String, bool&gt;? reqAudioVideoPermissions = await VideoSDK.requestPermissions();
      //For requesting just audio permission.
      Map&lt;String, bool&gt;? reqAudioPermissions = await VideoSDK.requestPermissions(Permissions.audio);
      //For requesting just video permission.
      Map&lt;String, bool&gt;? reqVideoPermissions = await VideoSDK.requestPermissions(Permissions.video);
      //For explicitly requesting both audio and video permissions.
      Map&lt;String, bool&gt;? reqBothPermissions = await VideoSDK.requestPermissions(Permissions.audio_video);

  } catch (ex) {
    print("Error in requestPermissions(): $ex");
  }
}</code></pre><blockquote>Note: The <code>requestPermissions()</code> method is not supported in desktop applications or the Firefox browser.</blockquote><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-22.png" class="kg-image" alt="Understanding Pre-call Integration in Flutter" loading="lazy" width="2244" height="1389"/></figure><h3 id="step-3-render-device-lists">Step 3: Render Device Lists</h3><p>After obtaining the necessary permissions, fetch and display lists of available video and audio devices using the <code>getVideoDevices()</code> and <code>getAudioDevices()</code> methods from the VideoSDK class. Allow users to select their preferred devices from these lists.</p><blockquote>Note: For iOS devices, the EARPIECE is not supported when a WIRED_HEADSET or BLUETOOTH device is connected. Additionally, WIRED_HEADSET and BLUETOOTH devices cannot be used simultaneously; the most recently connected device takes priority.</blockquote><pre><code class="language-dart">
List&lt;AudioDeviceInfo&gt; audioInputDevices = [];
List&lt;AudioDeviceInfo&gt; audioOutputDevices = [];

void getMediaDevices() async {
  try {
    //Method to get all available video devices.
    List&lt;VideoDeviceInfo&gt;? videoDevices = await VideoSDK.getVideoDevices();
    //Method to get all available Microphones.
    List&lt;AudioDeviceInfo&gt;? audioDevices = await VideoSDK.getAudioDevices();
    for (AudioDeviceInfo device in audioDevices!) {
      //For Mobile Applications
      if (!kIsWeb) {
        if (Platform.isAndroid || Platform.isIOS) {
          audioOutputDevices.add(device);
        }
      } else {
        //For Web and Desktop Applications
          if (device.kind == 'audioinput') {
            audioInputDevices.add(device);
          } else {
            audioOutputDevices.add(device);
          }
        }
      }
  } catch (err) {
    print("Error in getting audio or video devices: $err");
  }
}</code></pre><ul><li>Displaying device lists once permissions are granted:</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-23.png" class="kg-image" alt="Understanding Pre-call Integration in Flutter" loading="lazy" width="2244" height="1423"/></figure><h3 id="step-4-handle-device-changes">Step 4: Handle Device Changes</h3><p>Implement the <code>deviceChanged</code> callback in the VideoSDK class to dynamically update and re-render device lists whenever new devices are connected or removed. This ensures that users can seamlessly interact with newly connected devices without any disruptions.</p><blockquote>Note: The <code>deviceChanged</code> event is not supported in macOS applications.</blockquote><pre><code class="language-dart">VideoSDK.on(
  Events.deviceChanged,
  (devices) {
    print("device changed ${devices}");
  },
);</code></pre><h3 id="step-5-ensure-selected-devices-are-used-in-the-meeting">Step 5: Ensure Selected Devices Are Used in the Meeting</h3><p>Make sure all relevant states, such as microphone and camera status (on/off) and selected devices, are transferred from the precall screen to the meeting. This can be achieved by passing these crucial states and media streams to the VideoSDK and Room methods.</p><p><strong>For the video device:</strong></p><ul><li>Create a custom track using the selected <code>CameraID</code> and pass it to the <code>createRoom</code> method.</li></ul><p><strong>For audio devices in web and desktop applications:</strong></p><ul><li>Use the <code>switchAudioDevice</code> method to select the desired audio output device.</li><li>Use the <code>changeMic</code> method to select the desired audio input device once the room is joined.</li></ul><p><strong>For audio devices in mobile applications:</strong></p><ul><li>Use the <code>switchAudioDevice</code> method to select both the input and output audio devices.</li></ul><p>By ensuring this integration, users can smoothly transition from the precall setup to the actual meeting while maintaining their preferred settings.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'dart:io' show Platform;
import 'package:flutter/foundation.dart' show kIsWeb;
import 'package:videosdk/videosdk.dart';

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;

  const MeetingScreen(
      {super.key, required this.meetingId, required this.token});

  @override
  State&lt;MeetingScreen&gt; createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;

  @override
  void initState() {
    super.initState();
    initRoom();
  }

  //initState() cannot be async, so the await-based setup lives in a helper.
  Future&lt;void&gt; initRoom() async {
    //Create a custom track with the selected video device's id
    CustomTrack cameraTrack = await VideoSDK.createCameraVideoTrack(
        cameraId: selectedVideoDevice?.deviceId);
    // create room
    _room = VideoSDK.createRoom(
      roomId: widget.meetingId,
      token: widget.token,
      displayName: "John Doe",
      micEnabled: true,
      camEnabled: true,
      customCameraVideoTrack: cameraTrack, //For Video Devices
    );
    registerMeetingEvents(_room);
  }

  //For Audio Devices
void registerMeetingEvents(Room _meeting) {
    // Called when joined in meeting
  _meeting.on(
    Events.roomJoined,
    () {
      //When meeting is joined, call the switchAudioDevice method to switch Audio Output.
      _meeting.switchAudioDevice(widget.selectedAudioOutputDevice!);

      //When meeting is joined, call the changeMic method to switch Audio Input. (Only required for web and desktop applications)
        if (kIsWeb || Platform.isWindows || Platform.isMacOS) {
          _meeting.changeMic(widget.selectedAudioInputDevice!);
        }
    },
  );
}

  @override
  Widget build(BuildContext context) {
    return YourMeetingWidget();
  }
}</code></pre><p>This example demonstrates best practices for implementing precall functionality and can serve as a starting point for your own implementation.</p><pre><code class="language-terminal">git clone https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example.git</code></pre><h2 id="conclusion">Conclusion:</h2><p>Effective precall implementation is key to creating a polished and professional video calling experience. By leveraging <a href="https://www.videosdk.live/">VideoSDK's </a>precall capabilities, you can ensure that your users enter calls with properly configured devices and necessary permissions, setting the stage for successful and hassle-free video conferences.</p><p>Remember, the precall phase is your opportunity to address potential issues before they impact the user experience. By investing time in robust precall implementation, you're laying the groundwork for high-quality, reliable video calls that will keep your users coming back.</p>]]></content:encoded></item><item><title><![CDATA[Understanding Interactive Live Streaming API Pricing]]></title><description><![CDATA[Interact with your viewers on a live stream with ease and flexibility. 
Integrate interactive live streaming with affordability.]]></description><link>https://www.videosdk.live/blog/interactive-live-streaming-pricing</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb6f</guid><category><![CDATA[Pricing]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 30 Jan 2025 09:27:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/09/Interactive-live-streaming-Pricing-thumbnail--1--1.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Interactive-live-streaming-Pricing-thumbnail--1--1.jpg" alt="Understanding Interactive Live Streaming API Pricing"/><p>Interactive live streaming has become one of the most engaging ways to host social, cultural, political, and corporate events virtually. Since the pandemic, communication over the web has accelerated. </p><p>Interactive live streaming is a web-based live stream that allows interaction between the host and the viewers. Traditional streaming platforms give only the host the ability to speak and present; interactive live streaming, by contrast, allows both groups, the host and the viewers, to participate.</p><h3 id="what-is-interactive-live-streaming">What is Interactive Live Streaming</h3><p><a href="https://www.videosdk.live/interactive-live-streaming" rel="noreferrer">Interactive live streaming</a> refers to real-time video broadcasting where the audience can engage directly with the streamer through features like chat, polls, and Q&amp;A sessions. This technology enables creators and businesses to host immersive events, workshops, and webinars, fostering dynamic interaction. Key platforms include Twitch, YouTube Live, and proprietary services like VideoSDK, which cater to varied needs from gaming to educational sessions. 
The technology supports high-definition video and low-latency communication, making it ideal for a wide range of interactive experiences.<br/></p><blockquote>This blog is dedicated to pricing. We bring interactive live streaming to you with high quality and rich features, along with affordability. Our simplified pricing plans help users build effective strategies, and this post gives readers a clear picture of how our pricing works. You can always <a href="https://videosdk.live/contact">get in touch</a> with us in case of any query. We are happy to assist you. </blockquote><h2 id="how-to-calculate-the-cost-of-interactive-live-streaming-pricing">How to Calculate the Cost of Interactive Live Streaming Pricing?</h2><p>Interactive live streaming involves the cost computation of two entities:</p><p><strong>a) The host/broadcaster and b) The viewers</strong></p><ul><li>The cost is calculated per stream</li><li>We deliver interactive live streaming in two resolutions: 720p (HD) and 1080p (Full HD)</li><li>The calculation is the sum of the broadcaster cost and the viewer cost</li><li>A stream can have multiple hosts</li><li>Recording and RTMP out of the stream carry additional costs<br/></li></ul><p><strong>Cost of hosts</strong> = Unit price (rate per minute) x number of minutes x Total hosts</p><p><strong>Cost of viewers</strong> = Unit price (rate per minute) x number of minutes x Total viewers</p><p><strong>Total cost</strong> = Cost of Hosts + Cost of Viewers <br/></p><p><strong>At 720p: Cost per host = $0.00299, Cost per viewer = $0.0015</strong></p><p><strong>At 1080p: Cost per host = $0.00699, Cost per viewer = $0.004</strong><br/></p><p>Let’s get a better understanding with some examples.</p><p><strong>Example of Cost Calculation 1</strong> </p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/pricing_interactive-live-streaming.jpg" class="kg-image" 
alt="Understanding Interactive Live Streaming API Pricing" loading="lazy" width="1292" height="717"/></figure><p>Given: Total streaming minutes = 45, Total viewers = 1,000, Total hosts = 1</p><ol><li><strong>At 720p</strong> Total Cost = $67.63 (0.00299 x 45 x 1) + (0.0015 x 45 x 1000)</li><li><strong>At 1080p</strong> Total Cost = $180.31 (0.00699 x 45 x 1) + (0.004 x 45 x 1000)<br/></li></ol><p><strong>Let’s take an example with multiple hosts</strong></p><p><strong>Example of Cost Calculation 2</strong></p><p>Given: Total streaming minutes = 60, Total viewers = 2,500, Total hosts = 6</p><ol><li><strong>At 720p</strong> Total Cost = $226.07 (0.00299 x 60 x 6) + (0.0015 x 60 x 2500)</li><li><strong>At 1080p</strong> Total Cost = $602.51 (0.00699 x 60 x 6) + (0.004 x 60 x 2500)<br/></li></ol><h2 id="comparison-with-other-providers">Comparison with Other Providers</h2><p>Many providers design and develop interactive live streaming with the same goal of enhancing engagement. These companies also allow multi-host streaming with large viewer counts. One major difference between videosdk.live and these companies is that they calculate pricing based on the combined 2K+ on-screen resolution of the hosts. <br/></p><h2 id="what-is-2k-resolution">What is 2K+ resolution?</h2><p>2K+ refers to a combined resolution exceeding 2,160 pixels. These interactive live streaming API providers calculate resolution by adding up the screen resolutions of all hosts. For example, 4 hosts streaming live at 720p make a total of 720 x 4 = 2,880p, which is termed 2K+ resolution.<br/></p><h2 id="comparison-videosdklive-vs-other-companies">Comparison: Videosdk.live vs. Other Companies</h2><p>The image below describes the units and pricing policies of videosdk.live and other companies that deal with interactive live streaming APIs. We have taken two standard resolutions for comparison. 
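The per-stream arithmetic above can be sketched in a few lines of Python. This is an illustrative calculator using the per-minute rates published above; the function and variable names are invented for this sketch and are not part of any VideoSDK API.

```python
# Illustrative cost calculator for the per-stream pricing formula above.
# Rates are the published per-minute prices; all names here are invented.
RATES = {
    "720p": {"host": 0.00299, "viewer": 0.0015},
    "1080p": {"host": 0.00699, "viewer": 0.004},
}

def stream_cost(resolution: str, minutes: float, hosts: int, viewers: int) -> float:
    """Total cost = (host rate x minutes x hosts) + (viewer rate x minutes x viewers)."""
    rate = RATES[resolution]
    host_cost = rate["host"] * minutes * hosts
    viewer_cost = rate["viewer"] * minutes * viewers
    return round(host_cost + viewer_cost, 2)

# Example 1 from the article: 45 minutes, 1 host, 1,000 viewers.
print(stream_cost("720p", 45, 1, 1000))   # 67.63
print(stream_cost("1080p", 45, 1, 1000))  # 180.31
```

Plugging in Example 1's numbers reproduces the $67.63 and $180.31 totals shown above.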
Most companies’ pricing is in line with the estimates shown here.<br/></p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/pricing-comparison_interactive-live-streaming.jpg" class="kg-image" alt="Understanding Interactive Live Streaming API Pricing" loading="lazy" width="1292" height="1002"/></figure><blockquote>The above chart clearly shows the pricing difference between videosdk.live and other companies. With the same quality, features, and deliverables, we observe a large price difference. Interactive live streaming has become an increasingly attractive way for brands and corporates to scale engagement.</blockquote>]]></content:encoded></item><item><title><![CDATA[RTMP Out Explained: How Live Streaming Really Works]]></title><description><![CDATA[This article delves into the intricacies of RTMP Out, a crucial component in the transmission of live video data, exploring its technical details, step-by-step process, and how it enables seamless live streaming across various platforms.]]></description><link>https://www.videosdk.live/blog/rtmp-out-how-live-streaming-really-works</link><guid isPermaLink="false">660d35982a88c204ca9cfaaf</guid><category><![CDATA[RTMP]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 29 Jan 2025 13:02:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/RTMP-Out-Explained_-How-Live-Streaming-Really-Works.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/RTMP-Out-Explained_-How-Live-Streaming-Really-Works.jpg" alt="RTMP Out Explained: How Live Streaming Really Works"/><p>Live streaming has become an integral part of our world. 
From watching esports tournaments and concerts to attending virtual classes and webinars, live streams offer a unique way to connect and share experiences in real time.</p><p>But how exactly does this live video transmission work? This is where RTMP comes in: a communication protocol that plays a central role in the transmission. This article dives into the inner workings of RTMP Out, exploring its technical details and the step-by-step process by which it enables smooth live streaming.</p><h2 id="what-is-rtmp-and-rtmp-out">What is RTMP and RTMP Out?</h2><p>In simple words, RTMP (Real-Time Messaging Protocol) is a specialized communication protocol designed for the low-latency transmission of live video data over the Internet. Unlike protocols such as HTTP, which focus on file transfer, RTMP prioritizes real-time communication, ensuring the uninterrupted delivery of live streams.</p><p>RTMP Out is the one-way process of transmitting a live video stream from an encoder to a streaming platform or media server using the Real-Time Messaging Protocol (RTMP).</p><h2 id="different-elements-used-in-rtmp-out">Different Elements Used in RTMP Out</h2><p><strong>Encoder</strong>: An encoder is a software or hardware application responsible for (1) capturing video and audio from a source such as a camera, microphone, or screen capture card; (2) compressing it into a streamable format; and (3) preparing it for transmission.</p><p><strong>Streaming Platform</strong>: This is the online service that receives the encoded stream from the encoder, distributes it to viewers on various devices, and manages the playback experience. Popular examples include YouTube Live, Twitch, and Facebook Live.</p><p><strong>RTMP Server</strong>: The streaming platform has servers specifically configured to receive incoming RTMP streams. 
These servers handle the data transfer and make the stream accessible to viewers.</p><h2 id="how-rtmp-out-works">How RTMP Out Works?</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/09/diagram--3--1.png" class="kg-image" alt="RTMP Out Explained: How Live Streaming Really Works" loading="lazy" width="1600" height="990"/></figure><p>Here's a detailed analysis of the process involved in RTMP Out:</p><h3 id="stream-setup-and-configuration">Stream Setup and Configuration:</h3><p>The streamer chooses a streaming platform and sets up an account. The platform provides the streamer with an RTMP server URL and stream key. These act as unique identifiers that allow the platform to recognize and accept the incoming stream from the encoder.</p><p>The streamer opens their encoder software (OBS Studio, XSplit, vMix, etc.) and configures the streaming settings. Within the encoder settings, you'll typically find a dedicated section for "Streaming" or "Output."</p><h3 id="rtmp-and-connection-establishment">RTMP and Connection Establishment:</h3><p>The encoder initiates an RTMP connection with the streaming platform's RTMP server using the provided URL and stream key. This handshake establishes a persistent connection between the encoder and the server, ensuring a continuous flow of data for the live stream.</p><h3 id="data-chunking-and-stream-packaging">Data Chunking and Stream Packaging:</h3><p>To ensure efficient data transmission over the internet, the encoder breaks down the compressed video and audio data into smaller, manageable units called chunks or packets.</p><p>Each chunk is typically a few hundred bytes in size and contains information about the data type (audio/video), sequence number, and timestamp for proper reassembly at the receiving end. 
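As a rough illustration of this chunking-and-reassembly idea (this is not the actual RTMP wire format, whose binary chunk headers are defined by the RTMP specification; all names here are invented for the sketch):

```python
# Conceptual sketch only: splitting an encoded frame into sequence-numbered,
# timestamped chunks, then reassembling them in order on the receiving side.
from dataclasses import dataclass

@dataclass
class Chunk:
    kind: str        # "audio" or "video"
    seq: int         # sequence number, used to reassemble chunks in order
    timestamp: int   # presentation timestamp (ms) of the originating frame
    payload: bytes

def chunk_frame(data: bytes, kind: str, timestamp: int, size: int = 128):
    """Split one encoded frame into fixed-size chunks."""
    return [
        Chunk(kind, seq, timestamp, data[i:i + size])
        for seq, i in enumerate(range(0, len(data), size))
    ]

def reassemble(chunks):
    """Receiver side: sort by sequence number and concatenate payloads."""
    return b"".join(c.payload for c in sorted(chunks, key=lambda c: c.seq))

frame = bytes(300)                        # a fake 300-byte encoded frame
chunks = chunk_frame(frame, "video", timestamp=0)
print(len(chunks))                        # 3 chunks of at most 128 bytes
```

Even if the chunks arrive out of order, sorting by sequence number recovers the original frame, which is the role the sequence numbers and timestamps play in the real protocol.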
The encoder then packages these data chunks into RTMP messages according to the RTMP protocol specifications.</p><h3 id="sending-the-stream">Sending the Stream:</h3><p>The encoder transmits the RTMP messages containing the video and audio data chunks over the established connection to the streaming platform's RTMP server. RTMP uses TCP (Transmission Control Protocol) for reliable data transfer. </p><p>TCP ensures that the data packets arrive in the correct order and without errors. In case of any errors or missing packets, TCP requests retransmission.</p><h3 id="receives-and-processes-the-stream">Receives and Processes the Stream:</h3><p>The RTMP server receives the RTMP messages sent by the encoder. The server unpacks the messages, reassembles the data chunks based on their sequence numbers and timestamps, and buffers the received data.</p><h3 id="stream-distribution-and-playback">Stream Distribution and Playback:</h3><p>The streaming platform processes the reassembled video and audio data and prepares it for delivery to viewers. This might involve further transcoding the stream into different bitrates and resolutions to cater to viewers with varying internet connection speeds and device capabilities. </p><p>The platform then distributes the prepared stream to viewers who have connected to the stream on their devices (computers, phones, tablets). Viewers typically access the stream through the platform's website or dedicated apps. 
</p><p>The platform employs various protocols like HLS (HTTP Live Streaming) for this delivery, allowing viewers to experience smooth playback even with fluctuating internet connections.</p><h3 id="monitoring-and-stream-management">Monitoring and Stream Management:</h3><p>Streamers can monitor the health of their stream within the encoder software using tools like bitrate meters and frame rate displays, which provide valuable insight into the stream's performance. Most platforms also offer management options to control who has access, schedule the stream, and interact with viewers in real time through chat features.</p><h2 id="benefits-of-using-rtmp-out">Benefits of Using RTMP Out</h2><p><strong>Low Latency</strong>: A key benefit of RTMP is its ability to deliver low-latency streams. This means the delay between the action happening in real-time and viewers seeing it on their screens is minimal. This is crucial for interactive live experiences like gaming streams and live auctions where viewers need to react quickly.</p><p><strong>Reliability</strong>: RTMP prioritizes reliable data transmission. By employing error correction and congestion control mechanisms, RTMP ensures that the video stream reaches viewers with minimal interruptions or glitches. This stability is essential for professional live-streaming scenarios.</p><p><strong>Compatibility</strong>: As an open standard, RTMP is widely supported by a vast range of encoder software and streaming platforms. This flexibility allows streamers to choose the tools and services that best suit their needs without worrying about compatibility issues.</p><p><strong>Security</strong>: While not inherently secure, RTMP can be configured with authentication and encryption to protect live streams from unauthorized access. 
This is important for sensitive content or private events.</p><h2 id="alternatives-to-rtmp-out">Alternatives to RTMP Out</h2><p>RTMP remains a reliable and popular choice for live streaming, but newer protocols have emerged that offer additional functionalities:</p><h3 id="hls-http-live-streaming">HLS (HTTP Live Streaming):</h3><p>HLS is gaining traction due to its ability to deliver adaptive bitrate streams. This means the streaming platform can adjust the video quality based on the viewer's internet connection speed, ensuring smooth playback even for viewers with limited bandwidth. HLS also offers playlist-based delivery, allowing viewers to join the stream mid-broadcast without missing anything.</p><h3 id="rist-reliable-internet-streaming-transport">RIST (Reliable Internet Streaming Transport):</h3><p>This newer protocol offers even lower latency than RTMP and is gaining popularity for real-time broadcasting applications like live sports and news events.</p><hr/><p>Understanding RTMP Out allows you to appreciate the technical foundation behind live streaming. While newer protocols are emerging, RTMP remains a robust and widely supported solution for streaming live video content.</p><p>Its low latency, reliability, and compatibility continue to make it a valuable tool for streamers across various platforms. As technology evolves, RTMP will likely coexist with newer options, offering broadcasters the flexibility to choose the protocol that best suits their specific needs and streaming goals.</p>]]></content:encoded></item><item><title><![CDATA[Flutter-WebRTC: A Complete Guide]]></title><description><![CDATA[Learn how to build high-quality real-time video apps with Flutter WebRTC. 
Get started today and deliver the best user experience with your video apps using Flutter WebRTC.]]></description><link>https://www.videosdk.live/blog/flutter-webrtc</link><guid isPermaLink="false">63e1f61792324379f1d343ab</guid><category><![CDATA[WebRTC]]></category><dc:creator><![CDATA[Yash]]></dc:creator><pubDate>Tue, 28 Jan 2025 16:44:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/02/Flutter-WebRTC-7.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/02/Flutter-WebRTC-7.jpg" alt="Flutter-WebRTC: A Complete Guide"/><p>In this tutorial, we will discuss the most reliable and widely used RTC framework, WebRTC; the rapidly growing hybrid application development framework, Flutter; and how we can use them together via Flutter-WebRTC. I will explain how to create a Flutter-WebRTC app in just 7 steps. </p><p>Video calling has become a common medium of communication since the pandemic, and we have seen a huge wave in the real-time communication space (audio and video communication). There are many use cases of RTC in modern businesses, such as video conferencing, <a href="https://www.videosdk.live/interactive-live-streaming" rel="noreferrer">real-time streaming</a>, live commerce, education, telemedicine, surveillance, gaming, etc.</p><p>Developers often ask the same question: how do you build real-time applications with minimal effort? (Me too!) If you ask such questions, then you are in the right place.</p><p>For those who don't have the time to go through the process step by step, this repository is a great resource for quickly building a <a href="https://github.com/videosdk-live/webrtc">flutter-webrtc github project</a>. 
(feel free to give it a star ⭐)</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/02/Flutter-WebRTC.jpg" class="kg-image" alt="Flutter-WebRTC: A Complete Guide" loading="lazy" width="1280" height="720"/></figure><h2 id="what-is-flutter">What is Flutter?</h2><p><a href="https://flutter.dev">Flutter</a> is a mobile app development framework based on the <a href="https://dart.dev">Dart</a> programming language, developed by Google. One can develop Android apps, iOS apps, web apps, and desktop apps using the same code with the Flutter framework. Flutter has a large community and is one of the fastest-growing app development frameworks.</p><h2 id="what-is-webrtc">What is WebRTC?</h2><p><a href="https://www.videosdk.live/blog/webrtc">WebRTC</a> is an open-source framework for real-time communication (audio, video, and generic data) adopted by the majority of browsers; it can also be used on native platforms like Android, iOS, macOS, Linux, and Windows.</p><p>WebRTC relies on three major APIs:</p><ul><li>getUserMedia: gets local audio and video media.</li><li>RTCPeerConnection: establishes a connection with another peer.</li><li>RTCDataChannel: creates a channel for generic data exchange.</li></ul><h1 id="what-is-flutter-webrtc">What is Flutter-WebRTC?</h1>
<p>Flutter-WebRTC is a plugin for the Flutter framework that enables <a href="https://www.videosdk.live/blog/introduction-to-real-time-communication-sdk" rel="noreferrer">real-time communication</a> (RTC) capabilities in web and mobile applications. It is a collection of communication protocols and APIs that allow direct communication between web browsers and mobile applications without third-party plugins or software. With <a href="https://www.videosdk.live/blog/flutter-webrtc">Flutter-WebRTC</a>, you can easily build video call applications without dealing with the complexities of the underlying technologies.</p><h2 id="how-webrtc-works">How WebRTC works?</h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://assets.videosdk.live/static-assets/ghost/2023/02/webrtc-working-1.png" class="kg-image" alt="Flutter-WebRTC: A Complete Guide" loading="lazy" width="800" height="862"><figcaption><span style="white-space: pre-wrap;">Working of WebRTC</span></figcaption></img></figure><p>To understand the workings of WebRTC, we need to understand the following technologies.</p><p><strong>1. Signalling</strong></p><p>WebRTC allows peer-to-peer communication over the web even though a peer initially has no idea where the other peers are or how to connect and communicate with them.</p><p>To establish a connection between peers, WebRTC needs clients to exchange metadata to coordinate communication; this process is called Signalling. It allows peers to communicate across firewalls and work with <a href="https://en.wikipedia.org/wiki/Network_address_translation">NATs</a> (Network Address Translators). The technology most commonly used for signalling is <a href="https://www.videosdk.live/blog/what-is-a-websocket" rel="noreferrer">WebSocket</a>, which allows bidirectional communication between peers and signalling servers.</p><p><strong>2. SDP </strong></p><p><a href="https://en.wikipedia.org/wiki/Session_Description_Protocol">SDP</a> stands for Session Description Protocol. 
It describes session information such as:</p><ul><li>session ID</li><li>session expiry time</li><li>audio/video encodings, formats, encryption, etc.</li><li>audio/video IP and port</li></ul><p>Suppose two peers, Client A and Client B, want to connect over WebRTC. Client A generates and sends an SDP offer (session-related information such as the codecs it supports) to Client B, and Client B responds with an SDP answer (accepting or rejecting the offer). SDP is the format used for this negotiation between the two peers.</p><p><strong>3. ICE</strong></p><p><a href="https://en.wikipedia.org/wiki/Interactive_Connectivity_Establishment">ICE</a> stands for Interactive Connectivity Establishment, a framework that allows peers to find a working network path to each other. There are many reasons why a direct connection between peers may not work.</p><p>ICE handles bypassing firewalls that prevent connections from opening, obtaining a public-facing address when (as in most cases) the device does not have a public IP address, and relaying data through a server when the router does not allow direct peer connections. ICE uses <a href="https://en.wikipedia.org/wiki/STUN">STUN</a> or <a href="https://en.wikipedia.org/wiki/Traversal_Using_Relays_around_NAT">TURN</a> servers to accomplish this.</p><h1 id="lets-start-with-flutter-webrtc-project">Let's start with the Flutter-WebRTC Project</h1>
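<p>Before the setup steps, it helps to see what the signalling server actually does: it never touches media; it only relays SDP offers/answers and ICE candidates between caller and callee. The sketch below models that relay as a pure routing function (a hypothetical helper for illustration; the event names match the client code later in this tutorial, but the real server in the cloned repository may be wired differently):</p>

```javascript
// Hypothetical sketch of the signalling server's relay logic.
// Given an inbound event from peer `from`, decide what to forward and to whom.
function routeSignal(from, event, data) {
  switch (event) {
    case "makeCall": // caller sends an SDP offer to a callee
      return { to: data.calleeId, event: "newCall",
               payload: { callerId: from, sdpOffer: data.sdpOffer } };
    case "answerCall": // callee replies to the caller with an SDP answer
      return { to: data.callerId, event: "callAnswered",
               payload: { sdpAnswer: data.sdpAnswer } };
    case "IceCandidate": // ICE candidates are relayed to the callee
      return { to: data.calleeId, event: "IceCandidate",
               payload: { iceCandidate: data.iceCandidate } };
    default: // unknown events are dropped
      return null;
  }
}

// Example: caller "111111" invites callee "222222".
const invite = routeSignal("111111", "makeCall",
  { calleeId: "222222", sdpOffer: { type: "offer", sdp: "v=0 ..." } });
// invite.to === "222222", invite.event === "newCall"
```

<p>A real implementation would attach this routing to a socket.io server, typically using each client's callerId query parameter as its room name so messages can be addressed by caller ID.</p>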
<p>First of all, we need to set up the signalling server. </p><ul><li><strong>Clone the Flutter-WebRTC repository</strong></li></ul><pre><code class="language-bash">git clone https://github.com/videosdk-live/webrtc.git</code></pre><ul><li><strong>Go to webrtc-signalling-server and install dependencies for the Flutter-WebRTC App</strong></li></ul><pre><code class="language-bash">cd webrtc-signalling-server &amp;&amp; npm install</code></pre><ul><li><strong>Start the Signalling Server for the Flutter-WebRTC App</strong></li></ul><pre><code class="language-bash">npm run start</code></pre><ul><li><strong>Flutter-WebRTC Project Structure</strong></li></ul><pre><code class="language-text">lib
└── main.dart
└── services
	└── signalling.service.dart
└── screens
	└── join_screen.dart
	└── call_screen.dart</code></pre><h1 id="7-steps-to-build-a-flutter-webrtc-video-calling-app">7 Steps to Build a Flutter-WebRTC Video Calling App</h1>
<h2 id="step-1-create-flutter-wrbrtc-app-project">Step 1: Create Flutter-WebRTC app project</h2>
<pre><code class="language-bash">flutter create flutter_webrtc_app</code></pre><h2 id="step-2-add-project-dependency-for-flutter-webrtc-app">Step 2: Add project dependency for Flutter-WebRTC App</h2>
<pre><code class="language-bash">flutter pub add flutter_webrtc socket_io_client</code></pre><h2 id="step-3-flutter-wrbrtc-setup-for-ios-and-android">Step 3: Flutter-WebRTC Setup for iOS and Android</h2>
<ul><li><strong>Flutter-WebRTC iOS Setup</strong><br/>Add the following lines to your Info.plist file, located at &lt;project root&gt;/ios/Runner/Info.plist.</li></ul><pre><code class="language-xml">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><p>These lines allow your app to access the camera and microphone.</p><p><strong>Note: refer to the </strong><a href="https://pub.dev/packages/flutter_webrtc#note-for-ios"><strong>iOS setup notes</strong></a><strong> if you run into trouble.</strong></p><ul><li><strong>Flutter-WebRTC Android Setup</strong><br/>Add the following lines to AndroidManifest.xml, located at &lt;project root&gt;/android/app/src/main/AndroidManifest.xml.</li></ul><pre><code class="language-xml">&lt;uses-feature android:name="android.hardware.camera" /&gt;
&lt;uses-feature android:name="android.hardware.camera.autofocus" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
&lt;uses-permission android:name="android.permission.BLUETOOTH" android:maxSdkVersion="30" /&gt;
&lt;uses-permission android:name="android.permission.BLUETOOTH_ADMIN" android:maxSdkVersion="30" /&gt;</code></pre><p>If necessary, you will need to increase <code>minSdkVersion</code> of <code>defaultConfig</code> up to <code>23</code> in app level build.gradle file.</p><h2 id="step-4-create-signallingservice-for-flutter-webrtc-app">Step 4: Create SignallingService for Flutter-WebRTC App</h2>
<p>The SignallingService will handle communication with the signalling server. Here, we use the <a href="https://socket.io">socket.io</a> client to connect to the socket.io server, which is essentially a WebSocket server.</p><pre><code class="language-dart">import 'dart:developer';
import 'package:socket_io_client/socket_io_client.dart';

class SignallingService {
  // instance of Socket
  Socket? socket;

  SignallingService._();
  static final instance = SignallingService._();

  init({required String websocketUrl, required String selfCallerID}) {
    // init Socket
    socket = io(websocketUrl, {
      "transports": ['websocket'],
      "query": {"callerId": selfCallerID}
    });

    // listen onConnect event
    socket!.onConnect((data) {
      log("Socket connected !!");
    });

    // listen onConnectError event
    socket!.onConnectError((data) {
      log("Connect Error $data");
    });

    // connect socket
    socket!.connect();
  }
}</code></pre><h2 id="step-5-create-joinscreen-for-flutter-webrtc-app">Step 5: Create JoinScreen for Flutter-WebRTC App</h2>
<p>JoinScreen will be a <a href="https://api.flutter.dev/flutter/widgets/StatefulWidget-class.html">StatefulWidget</a>, which allows the user to join a session. From this screen, the user can start a session, or join one when another user calls them using their caller ID.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'call_screen.dart';
import '../services/signalling.service.dart';

class JoinScreen extends StatefulWidget {
  final String selfCallerId;

  const JoinScreen({super.key, required this.selfCallerId});

  @override
  State&lt;JoinScreen&gt; createState() =&gt; _JoinScreenState();
}

class _JoinScreenState extends State&lt;JoinScreen&gt; {
  dynamic incomingSDPOffer;
  final remoteCallerIdTextEditingController = TextEditingController();

  @override
  void initState() {
    super.initState();

    // listen for incoming video call
    SignallingService.instance.socket!.on("newCall", (data) {
      if (mounted) {
        // set SDP Offer of incoming call
        setState(() =&gt; incomingSDPOffer = data);
      }
    });
  }

  // join Call
  _joinCall({
    required String callerId,
    required String calleeId,
    dynamic offer,
  }) {
    Navigator.push(
      context,
      MaterialPageRoute(
        builder: (_) =&gt; CallScreen(
          callerId: callerId,
          calleeId: calleeId,
          offer: offer,
        ),
      ),
    );
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      backgroundColor: Theme.of(context).colorScheme.background,
      appBar: AppBar(
        centerTitle: true,
        title: const Text("P2P Call App"),
      ),
      body: SafeArea(
        child: Stack(
          children: [
            Center(
              child: SizedBox(
                width: MediaQuery.of(context).size.width * 0.9,
                child: Column(
                  mainAxisAlignment: MainAxisAlignment.center,
                  children: [
                    TextField(
                      controller: TextEditingController(
                        text: widget.selfCallerId,
                      ),
                      readOnly: true,
                      textAlign: TextAlign.center,
                      enableInteractiveSelection: false,
                      decoration: InputDecoration(
                        labelText: "Your Caller ID",
                        border: OutlineInputBorder(
                          borderRadius: BorderRadius.circular(10.0),
                        ),
                      ),
                    ),
                    const SizedBox(height: 12),
                    TextField(
                      controller: remoteCallerIdTextEditingController,
                      textAlign: TextAlign.center,
                      decoration: InputDecoration(
                        hintText: "Remote Caller ID",
                        alignLabelWithHint: true,
                        border: OutlineInputBorder(
                          borderRadius: BorderRadius.circular(10.0),
                        ),
                      ),
                    ),
                    const SizedBox(height: 24),
                    ElevatedButton(
                      style: ElevatedButton.styleFrom(
                        side: const BorderSide(color: Colors.white30),
                      ),
                      child: const Text(
                        "Invite",
                        style: TextStyle(
                          fontSize: 18,
                          color: Colors.white,
                        ),
                      ),
                      onPressed: () {
                        _joinCall(
                          callerId: widget.selfCallerId,
                          calleeId: remoteCallerIdTextEditingController.text,
                        );
                      },
                    ),
                  ],
                ),
              ),
            ),
            if (incomingSDPOffer != null)
              Positioned(
                child: ListTile(
                  title: Text(
                    "Incoming Call from ${incomingSDPOffer["callerId"]}",
                  ),
                  trailing: Row(
                    mainAxisSize: MainAxisSize.min,
                    children: [
                      IconButton(
                        icon: const Icon(Icons.call_end),
                        color: Colors.redAccent,
                        onPressed: () {
                          setState(() =&gt; incomingSDPOffer = null);
                        },
                      ),
                      IconButton(
                        icon: const Icon(Icons.call),
                        color: Colors.greenAccent,
                        onPressed: () {
                          _joinCall(
                            callerId: incomingSDPOffer["callerId"]!,
                            calleeId: widget.selfCallerId,
                            offer: incomingSDPOffer["sdpOffer"],
                          );
                        },
                      )
                    ],
                  ),
                ),
              ),
          ],
        ),
      ),
    );
  }
}</code></pre><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/02/Flutter-WebRTC-3.jpg" class="kg-image" alt="Flutter-WebRTC: A Complete Guide" loading="lazy" width="1280" height="720"/></figure><h2 id="step-6-create-callscreen-for-flutter-webrtc-app">Step 6: Create CallScreen for Flutter-WebRTC App</h2>
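<p>Step 6 implements WebRTC's offer/answer negotiation. As an orientation before reading the full widget, the sequence each side performs can be written out as plain data (illustrative only; the authoritative logic is the Dart code in this step):</p>

```javascript
// The negotiation sequence CallScreen performs, depending on call direction.
function negotiationSteps(isIncomingCall) {
  if (isIncomingCall) {
    // Callee: an SDP offer already arrived via the "newCall" event.
    return [
      "setRemoteDescription(offer)",
      "createAnswer()",
      "setLocalDescription(answer)",
      "emit answerCall with the SDP answer",
    ];
  }
  // Caller: initiate the call and wait for the remote answer.
  return [
    "createOffer()",
    "setLocalDescription(offer)",
    "emit makeCall with the SDP offer",
    "on callAnswered: setRemoteDescription(answer)",
    "emit the buffered IceCandidate messages",
  ];
}

const calleeFlow = negotiationSteps(true);
const callerFlow = negotiationSteps(false);
```

<p>Both sides also gather ICE candidates in parallel; note that the caller buffers its candidates and only sends them once the call has been answered.</p>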
<p>In CallScreen, we will show the user's local stream, the remote user's stream, and controls such as toggleCamera, toggleMic, switchCamera, and endCall. Here we establish an RTCPeerConnection between the peers, create the SDP offer and answer, and exchange ICE candidate data over the signalling server (<a href="https://socket.io">socket.io</a>).</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:flutter_webrtc/flutter_webrtc.dart';
import '../services/signalling.service.dart';

class CallScreen extends StatefulWidget {
  final String callerId, calleeId;
  final dynamic offer;
  const CallScreen({
    super.key,
    this.offer,
    required this.callerId,
    required this.calleeId,
  });

  @override
  State&lt;CallScreen&gt; createState() =&gt; _CallScreenState();
}

class _CallScreenState extends State&lt;CallScreen&gt; {
  // socket instance
  final socket = SignallingService.instance.socket;

  // videoRenderer for localPeer
  final _localRTCVideoRenderer = RTCVideoRenderer();

  // videoRenderer for remotePeer
  final _remoteRTCVideoRenderer = RTCVideoRenderer();

  // mediaStream for localPeer
  MediaStream? _localStream;

  // RTC peer connection
  RTCPeerConnection? _rtcPeerConnection;

  // list of rtcCandidates to be sent over signalling
  List&lt;RTCIceCandidate&gt; rtcIceCadidates = [];

  // media status
  bool isAudioOn = true, isVideoOn = true, isFrontCameraSelected = true;

  @override
  void initState() {
    // initializing renderers 
    _localRTCVideoRenderer.initialize();
    _remoteRTCVideoRenderer.initialize();

    // setup Peer Connection
    _setupPeerConnection();
    super.initState();
  }

  @override
  void setState(fn) {
    if (mounted) {
      super.setState(fn);
    }
  }

  _setupPeerConnection() async {
    // create peer connection
    _rtcPeerConnection = await createPeerConnection({
      'iceServers': [
        {
          'urls': [
            'stun:stun1.l.google.com:19302',
            'stun:stun2.l.google.com:19302'
          ]
        }
      ]
    });

    // listen for remotePeer mediaTrack event
    _rtcPeerConnection!.onTrack = (event) {
      _remoteRTCVideoRenderer.srcObject = event.streams[0];
      setState(() {});
    };

    // get localStream
    _localStream = await navigator.mediaDevices.getUserMedia({
      'audio': isAudioOn,
      'video': isVideoOn
          ? {'facingMode': isFrontCameraSelected ? 'user' : 'environment'}
          : false,
    });

    // add mediaTrack to peerConnection
    _localStream!.getTracks().forEach((track) {
      _rtcPeerConnection!.addTrack(track, _localStream!);
    });

    // set source for local video renderer
    _localRTCVideoRenderer.srcObject = _localStream;
    setState(() {});

    // for Incoming call
    if (widget.offer != null) {
      // listen for Remote IceCandidate
      socket!.on("IceCandidate", (data) {
        String candidate = data["iceCandidate"]["candidate"];
        String sdpMid = data["iceCandidate"]["id"];
        int sdpMLineIndex = data["iceCandidate"]["label"];

        // add iceCandidate
        _rtcPeerConnection!.addCandidate(RTCIceCandidate(
          candidate,
          sdpMid,
          sdpMLineIndex,
        ));
      });

      // set SDP offer as remoteDescription for peerConnection
      await _rtcPeerConnection!.setRemoteDescription(
        RTCSessionDescription(widget.offer["sdp"], widget.offer["type"]),
      );

      // create SDP answer
      RTCSessionDescription answer = await _rtcPeerConnection!.createAnswer();

      // set SDP answer as localDescription for peerConnection
      await _rtcPeerConnection!.setLocalDescription(answer);

      // send SDP answer to remote peer over signalling
      socket!.emit("answerCall", {
        "callerId": widget.callerId,
        "sdpAnswer": answer.toMap(),
      });
    }
    // for Outgoing Call
    else {
      // listen for local iceCandidate and add it to the list of IceCandidate
      _rtcPeerConnection!.onIceCandidate =
          (RTCIceCandidate candidate) =&gt; rtcIceCadidates.add(candidate);

      // when call is accepted by remote peer
      socket!.on("callAnswered", (data) async {
        // set SDP answer as remoteDescription for peerConnection
        await _rtcPeerConnection!.setRemoteDescription(
          RTCSessionDescription(
            data["sdpAnswer"]["sdp"],
            data["sdpAnswer"]["type"],
          ),
        );

        // send iceCandidate generated to remote peer over signalling
        for (RTCIceCandidate candidate in rtcIceCadidates) {
          socket!.emit("IceCandidate", {
            "calleeId": widget.calleeId,
            "iceCandidate": {
              "id": candidate.sdpMid,
              "label": candidate.sdpMLineIndex,
              "candidate": candidate.candidate
            }
          });
        }
      });

      // create SDP Offer
      RTCSessionDescription offer = await _rtcPeerConnection!.createOffer();

      // set SDP offer as localDescription for peerConnection
      await _rtcPeerConnection!.setLocalDescription(offer);

      // make a call to remote peer over signalling
      socket!.emit('makeCall', {
        "calleeId": widget.calleeId,
        "sdpOffer": offer.toMap(),
      });
    }
  }

  _leaveCall() {
    Navigator.pop(context);
  }

  _toggleMic() {
    // change status
    isAudioOn = !isAudioOn;
    // enable or disable audio track
    _localStream?.getAudioTracks().forEach((track) {
      track.enabled = isAudioOn;
    });
    setState(() {});
  }

  _toggleCamera() {
    // change status
    isVideoOn = !isVideoOn;

    // enable or disable video track
    _localStream?.getVideoTracks().forEach((track) {
      track.enabled = isVideoOn;
    });
    setState(() {});
  }

  _switchCamera() {
    // change status
    isFrontCameraSelected = !isFrontCameraSelected;

    // switch camera
    _localStream?.getVideoTracks().forEach((track) {
      // ignore: deprecated_member_use
      track.switchCamera();
    });
    setState(() {});
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      backgroundColor: Theme.of(context).colorScheme.background,
      appBar: AppBar(
        title: const Text("P2P Call App"),
      ),
      body: SafeArea(
        child: Column(
          children: [
            Expanded(
              child: Stack(children: [
                RTCVideoView(
                  _remoteRTCVideoRenderer,
                  objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
                ),
                Positioned(
                  right: 20,
                  bottom: 20,
                  child: SizedBox(
                    height: 150,
                    width: 120,
                    child: RTCVideoView(
                      _localRTCVideoRenderer,
                      mirror: isFrontCameraSelected,
                      objectFit:
                          RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
                    ),
                  ),
                )
              ]),
            ),
            Padding(
              padding: const EdgeInsets.symmetric(vertical: 12),
              child: Row(
                mainAxisAlignment: MainAxisAlignment.spaceAround,
                children: [
                  IconButton(
                    icon: Icon(isAudioOn ? Icons.mic : Icons.mic_off),
                    onPressed: _toggleMic,
                  ),
                  IconButton(
                    icon: const Icon(Icons.call_end),
                    iconSize: 30,
                    onPressed: _leaveCall,
                  ),
                  IconButton(
                    icon: const Icon(Icons.cameraswitch),
                    onPressed: _switchCamera,
                  ),
                  IconButton(
                    icon: Icon(isVideoOn ? Icons.videocam : Icons.videocam_off),
                    onPressed: _toggleCamera,
                  ),
                ],
              ),
            ),
          ],
        ),
      ),
    );
  }

  @override
  void dispose() {
    _localRTCVideoRenderer.dispose();
    _remoteRTCVideoRenderer.dispose();
    _localStream?.dispose();
    _rtcPeerConnection?.dispose();
    super.dispose();
  }
}</code></pre><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/02/Flutter-WebRTC-4.jpg" class="kg-image" alt="Flutter-WebRTC: A Complete Guide" loading="lazy" width="1280" height="720"/></figure><h2 id="step-7-modify-code-in-maindart">Step 7: Modify code in main.dart</h2>
<p>Finally, we initialize the SignallingService with the websocketUrl (your signalling server URL) and generate a random caller ID for the local user before showing the JoinScreen.</p><pre><code class="language-dart">import 'dart:math';

import 'package:flutter/material.dart';
import 'screens/join_screen.dart';
import 'services/signalling.service.dart';

void main() {
  // start videoCall app
  runApp(VideoCallApp());
}

class VideoCallApp extends StatelessWidget {
  VideoCallApp({super.key});

  // signalling server url
  final String websocketUrl = "WEB_SOCKET_SERVER_URL";

  // generate callerID of local user
  final String selfCallerID =
      Random().nextInt(999999).toString().padLeft(6, '0');

  @override
  Widget build(BuildContext context) {
    // init signalling service
    SignallingService.instance.init(
      websocketUrl: websocketUrl,
      selfCallerID: selfCallerID,
    );

    // return material app
    return MaterialApp(
      darkTheme: ThemeData.dark().copyWith(
        useMaterial3: true,
        colorScheme: const ColorScheme.dark(),
      ),
      themeMode: ThemeMode.dark,
      home: JoinScreen(selfCallerId: selfCallerID),
    );
  }
}</code></pre><h2 id="woohoo-finally-we-did-it">Woohoo!! Finally, we did it.</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/02/moment_when_your_code_works.jpg" class="kg-image" alt="Flutter-WebRTC: A Complete Guide" loading="lazy" width="500" height="449"/></figure><h2 id="problems-with-p2p-webrtc">Problems with P2P WebRTC</h2><ul><li><strong>Quality of Service: </strong>Call quality decreases as the number of peer connections increases.</li><li><strong>Client-Side Computation: </strong>Low-end devices cannot decode and synchronize many incoming streams.</li><li><strong>Scalability: </strong>As the number of peers grows, each client must upload its media to every other peer, and the computation and network load quickly become unmanageable.</li></ul><h2 id="solutions">Solutions</h2><ul><li><strong>MCU (Multipoint Control Unit)</strong></li><li><strong>SFU (Selective Forwarding Unit)</strong></li><li><strong>Video SDK</strong></li></ul><h2 id="integrate-flutter-webrtc-with-videosdk">Integrate Flutter-WebRTC with VideoSDK</h2><p><a href="https://videosdk.live">VideoSDK</a> is the most developer-friendly platform for live video and audio SDKs. Video SDK makes integrating live video and audio into your Flutter project considerably easier and faster. You can have a branded, customized, and programmable call up and running in no time with only a few lines of code.</p><p>In addition, VideoSDK provides best-in-class customization, giving you total control over layout and permissions. Plugins may be used to improve the experience, and end-to-end call logs and quality data can be accessed directly from your Video SDK dashboard or via <a href="https://docs.videosdk.live/api-reference/realtime-communication/intro">REST APIs</a>. 
This amount of data enables developers to debug any issues that arise during a conversation and improve their integrations for the <a href="https://www.commandbar.com/blog/how-to-build-a-frictionless-customer-experience/">best customer experience possible</a>.</p><p>Alternatively, you can follow this quick start guide to <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start">create a demo Flutter project with VideoSDK</a>, or start with the <a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example">code sample</a>.</p>
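<p>The mesh-scalability problem described above is easy to quantify: in a full P2P mesh, every participant encodes and uploads a copy of its stream to each other peer, whereas with an SFU each participant uploads once and the server fans the stream out. A back-of-the-envelope sketch (illustrative arithmetic, not an API):</p>

```javascript
// Per-client and total stream counts in an n-person call (illustrative arithmetic).
const meshUploadsPerClient = (n) => n - 1;   // one encoded copy per remote peer
const sfuUploadsPerClient = (n) => 1;        // single upload; the SFU fans out
const meshTotalStreams = (n) => n * (n - 1); // streams crossing the network

// A 6-person call:
const meshUp = meshUploadsPerClient(6); // 5 simultaneous uploads per device
const total = meshTotalStreams(6);      // 30 streams in flight
const sfuUp = sfuUploadsPerClient(6);   // 1 upload per device
```

<p>This is why mesh calls degrade quickly beyond a handful of participants, and why SFU/MCU architectures (or a managed service) become necessary at scale.</p>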
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h2 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h2>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="more-flutter-resources">More Flutter Resources</h2><ul><li><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/getting-started">Video SDK Flutter SDK Integration Guide</a></li><li><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start">Flutter SDK QuickStart</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example">Flutter SDK Github Example</a></li><li><a href="https://youtu.be/4h57eVcaC34">Video Call Flutter app with Video SDK (Android &amp; iOS)</a></li><li><a href="https://www.videosdk.live/blog/video-calling-in-flutter">Build a Flutter Video Calling App with Video SDK</a></li><li><a href="https://pub.dev/packages/videosdk">Official Video SDK Flutter plugin (feel free to give it a star ⭐)</a></li></ul>]]></content:encoded></item><item><title><![CDATA[eKYC: Future of Blockchain Identity Verification]]></title><description><![CDATA[This article delves into how blockchain technology is revolutionizing eKYC processes. By offering a decentralized, immutable, and transparent approach, blockchain enhances security, streamlines identity verification, and ensures regulatory compliance.]]></description><link>https://www.videosdk.live/blog/ekyc-future-of-identity-verification-using-blockchain</link><guid isPermaLink="false">66e1704320fab018df1101ca</guid><category><![CDATA[eKYC]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 28 Jan 2025 13:00:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/Future-of-eKYC_-Blockchain-Identity-Verification.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/Future-of-eKYC_-Blockchain-Identity-Verification.jpg" alt="eKYC: Future of Blockchain Identity Verification"/><p>In India, whenever you go to complete KYC, you notice that you have to go through a face-to-face procedure. 
This builds trust, but what if you are unable to travel to that location, or you are at your workplace and cannot make the appointment for some reason? This is not one person's problem.&nbsp;</p><p>It is a problem for every customer who needs to complete KYC. KYC is mandatory almost everywhere in the world because it proves that you are who you claim to be and ties your digital records to your identity. It becomes even more important in financial or health-related situations.</p><p>The KYC process that requires an in-person visit to a specific place is called offline KYC, physical KYC, or face-to-face KYC, and it typically uses a biometric system. Call it what you like; the main point is that it has limitations. It is exposed to problems such as data breaches and cyberattacks, posing a significant risk to both verifiers and customers.</p><p>Traditional KYC methods are static, storing sensitive customer data in a centralized database. This dependence on centralized databases also leads to inefficiencies and delays, which can frustrate customers and increase business operational costs.</p><p>Because of these problems, Electronic Know Your Customer (eKYC) and Video KYC have emerged as solutions that streamline the identity verification process while enhancing security and compliance. In this article, we focus on eKYC and how blockchain technology can verify identities, protect sensitive data, and ensure compliance with regulatory standards.</p><h2 id="what-is-ekyc">What is eKYC?</h2><p>eKYC is the digital process of verifying customers' identities to comply with regulatory requirements; its main role is preventing fraud, money laundering, and other illegal activities. 
The eKYC method is faster than manual verification and involves digitizing customer documents and using digital channels to validate identities.</p><p>eKYC not only reduces the time and costs associated with customer onboarding but also enhances the overall customer experience. By using it, organizations can ensure regulatory compliance while minimizing the risk of identity fraud.</p><h2 id="how-blockchain-enhances-ekyc"><strong>How Blockchain Enhances eKYC</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/09/blockchain-based-ekyc.png" class="kg-image" alt="eKYC: Future of Blockchain Identity Verification" loading="lazy" width="1365" height="902"/></figure><p><em>Source: Reimagining KYC Using Blockchain Technology</em></p><p>Blockchain technology’s decentralized, immutable, and transparent approach offers a powerful solution to the challenges of traditional eKYC. By decentralizing data storage, blockchain reduces the risk of data breaches and ensures that customer information is securely encrypted and accessible only to authorized parties. Each transaction or update is recorded in a block, which is then added to a chain of previous transactions, making it nearly impossible to change data without agreement from the network.</p><p>Blockchain's transparent nature also facilitates better compliance with regulatory standards. Smart contracts (self-executing contracts with the terms directly written into code) can automate compliance checks, ensuring that only verified and compliant identities are allowed to proceed through the onboarding process. This automation not only enhances security but also reduces the time and cost associated with manual compliance checks.</p><p>Several industries are already exploring the integration of blockchain into their eKYC processes. 
For example, banks are using blockchain to create secure and efficient digital identities for their customers, which can be reused across multiple services without the need to repeat the KYC process. Digital wallets and fintech companies are also leveraging blockchain to offer faster and more secure onboarding experiences for their users using eKYC.</p><h2 id="future-trends-and-innovations-in-ekyc">Future Trends and Innovations in eKYC</h2><h3 id="decentralized-identity-solutions">Decentralized Identity Solutions</h3><p>One of the most promising trends in eKYC is the rise of decentralized identity solutions, where individuals control their own digital identities, powered by blockchain. These solutions allow users to store their identity information securely on their devices and share it with service providers only when necessary. This approach reduces the dependency on centralized databases and gives users greater control over their personal information.</p><h3 id="ai-and-biometrics-in-blockchain-ekyc">AI and Biometrics in Blockchain eKYC</h3><p>The integration of AI and biometrics into blockchain-based eKYC processes is set to further enhance identity verification. AI can analyze patterns and detect anomalies in real time, improving the accuracy of verification processes.&nbsp;</p><p>Biometrics, such as <a href="https://lenso.ai/en/blog/general/facial-recognition-what-is-it-and-why-do-we-need-it" rel="noreferrer">facial recognition</a> and fingerprint scanning, add a layer of security, ensuring that only the rightful owner of the identity can access services. Together, AI, biometrics, and blockchain form a powerful trio that can transform the future of eKYC.</p><p>Moreover, as regulatory requirements become more stringent, businesses will need to adapt their eKYC processes to ensure compliance. 
This may involve the adoption of more sophisticated verification methods and the implementation of real-time monitoring systems.</p><p>Looking ahead, the future of eKYC is likely to be characterized by greater collaboration between businesses, regulatory bodies, and technology providers. By working together, these stakeholders can create a more secure and efficient identity verification ecosystem that benefits everyone involved.</p><h2 id="conclusion">Conclusion</h2><p>As we look towards the future, the integration of AI, biometrics, and decentralized identity solutions will continue to push the boundaries of what's possible in eKYC, making identity verification not only more secure but also more user-centric.</p><p>Blockchain technology holds the potential to address many of the inherent challenges in traditional eKYC processes. By providing a decentralized, secure, and efficient way to manage identity verification, blockchain can enhance user experience, reduce operational costs, and improve compliance.&nbsp;</p>]]></content:encoded></item><item><title><![CDATA[The Future of Virtual Events: A Complete Guide for 2025]]></title><description><![CDATA[Now that we are officially two decades into the 21st century, technology trends are taking over the events industry. 
These technologies are making life easier for event planners, and attendee experiences are being elevated as well.]]></description><link>https://www.videosdk.live/blog/future-of-virtual-events</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb75</guid><category><![CDATA[virtual events]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Tue, 28 Jan 2025 08:17:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/10/Future-of-Virtual-Event--1--1.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/10/Future-of-Virtual-Event--1--1.jpg" alt="The Future of Virtual Events: A Complete Guide for 2025"/><p/><p>The value of face-to-face interaction will no doubt remain irreplaceable! However, there are times when going out becomes physically impossible: no matter how vital an event is, attending it in person could put your health at risk. The current scenario demonstrates the importance of virtual meetings. The spread of COVID-19 has strongly proved that virtual events are the future of meetings and conferences. During isolation, most businesses still held conferences filled with networking opportunities, educational sessions, and insights.</p><p>Virtual events are no less than in-person events. Hosting a <a href="https://www.videosdk.live/solutions/virtual-events" rel="noreferrer">virtual event</a> requires the same attention and critical thinking. Apart from the venue and on-site participants, not a single element is missing from virtual events. Many platforms have become dominant players in the virtual event industry.</p><p>These platforms supported a broad range of event types, making virtual events a boon for the world. Many <a href="https://www.videosdk.live/audio-video-conferencing">Video API</a> platforms even saw their stock value rise tenfold compared to the pre-pandemic period. 
Check out some statistics and figures that show the bright future of virtual events in the upcoming years.</p><h3 id="covid-19-impact-on-virtual-events">Covid-19 Impact on Virtual Events</h3><p>No doubt, COVID-19 has widely enhanced the popularity of virtual events. Isolation forced many businesses to operate virtually, and remote working pushed many companies to run virtual events on Video API platforms. The most common virtual events during this period were team meetings, panel briefings and discussions, webinars, and live streaming sessions with guest speakers.</p><p>Interestingly, webinars saw about a 14% rise during the pandemic, and ask-me-anything sessions grew by about 9%. This illustrates how virtual events were empowered by COVID-19.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/10/Global-Market-Size.jpg" class="kg-image" alt="The Future of Virtual Events: A Complete Guide for 2025" loading="lazy" width="1600" height="900"/></figure><p>In one global analysis, 90% of participating organizations voted to continue virtual events even after the pandemic. The driving factors that made virtual events a success during the pandemic are:</p><ul><li>Instant connectivity.</li><li>Cost reduction in event organization.</li><li>Attending events from one's own comfort.</li></ul><h3 id="technological-advancements-and-advantages-of-virtual-events">Technological Advancements and Advantages of Virtual Events</h3><p>Virtual events existed long before the pandemic, but few organizations found it reasonable to conduct them instead of face-to-face events. Technological advancement first laid the foundation of virtual events: where connecting with clients was not feasible due to distance, virtual events served as a channel for communication. 
<a href="https://www.spotsaas.com/blog/communication-technology-trends/" rel="noreferrer">Communication tools</a> like instant messaging apps, video calling apps, Slack, and online webinars also boosted the popularity of virtual events long before the pandemic. <a href="https://www.emailmavlers.com/interactive-email-templates/" rel="noreferrer">Interactive email templates</a> are highly effective communication tools for webinars, as they go beyond static content and encourage recipients to engage directly within the email for webinar promotion, reminders, and follow-up.</p><p>Eventually, COVID-19 turned out to be the fuel that ignited the importance of virtual events. What technological evolution could not achieve in years, COVID-19 did in a few months!</p><p>Despite being the only option for communication during the pandemic, virtual events also delivered exceptional benefits to hosts. The common stereotype was that virtual events would draw smaller audiences than in-person events. It turned out to be wrong. Here are some of the perks you get from virtual events.</p><h3 id="extensive-reach-to-the-audience-along-with-inclusivity">Extensive reach to the audience along with inclusivity</h3><p>In-person events were no doubt lively, but a massive number of would-be participants could not attend them due to various challenges, some personal, some logistical. Virtual events, however, enabled all participants to take part across boundaries. Many businesses even reached long-distance selling points with the help of virtual events.</p><h3 id="saving-cost-and-time">Saving cost and time</h3><p>Virtual events proved to be cost-saving for both ends: hosts and participants. Hosts reduced the cost of hosting events by 75%; only 25% of the usual cost went into setting up the right equipment to make virtual meetings possible. 
With many organizations already having the hardware, they only had to buy a virtual event platform subscription, saving even more.</p><p>Besides the presenters, participants also saved on logistics. Most participants at distant locations joined the virtual meeting for the cost of a platform subscription alone.</p><h3 id="event-flexibility-and-easy-accumulation-of-powerful-data">Event flexibility and Easy accumulation of powerful data</h3><p>Flexibility means being able to host meetings across multiple event spaces. In-person events were limited to physical venues; virtual events, however, allow hosts to run an event from any location, multiple times. Advanced virtual event platforms also make events more accessible and interactive by offering language options. This helps you increase the number of participants as well.</p><p>Data collection is a major part of every event, yet in-person events make it difficult to gather insights such as participant numbers and feedback. Virtual events, in contrast, offer easy access to all such vital data, including participant counts, feedback, and responses.</p><h3 id="effect-on-virtual-events-and-isolation">Effect on Virtual Events and Isolation</h3><p>Although <a href="https://www.videosdk.live/solutions/virtual-events" rel="noreferrer">virtual events</a> are a great success, they offer limited interaction between participants and the host. Connectivity issues can sometimes obstruct chats and live interaction sessions, and users may even lose the meeting session temporarily.</p><p>Isolation is a key requirement in current scenarios, but it sometimes proves less fruitful during virtual events. Participants, in many cases, fail to connect with the emotions of the host. 
Isolation also makes it boring to attend long meetings in one place.</p><h3 id="virtual-events-trend-and-statistics">Virtual Events Trend and Statistics</h3><p>To determine the trends and statistics of virtual events, you need to know their market size. In 2020, the global virtual events market was valued at 94.04 billion USD, and it is projected to grow at a rate of 23.7% from 2021 to 2028. The increasing adoption of technologies such as live streaming APIs is what drives this growth. </p><h3 id="the-fastest-growing-uk-based-virtual-events-startup-is-now-unicornhopin">The fastest-growing UK-based virtual events startup is now a unicorn - HOPIN</h3><p>Hopin is a UK-based virtual events unicorn that recently scaled its valuation to $7.75 billion. On 5 August 2021, the company raised another $450 million in funding to reach its peak. It is one of the fastest-scaling tech startups and has beaten many of Europe’s largest private sector companies. In March 2020, with a valuation of $5 billion, the startup first gained the fastest-scaling company title and is now way bigger. Johnny Boufarhat is the founder of Hopin, who started the firm back in 2019. COVID-19 turned out to be the biggest turning point for the company and made it an events unicorn. The company has grown to 800 employees in 47 nations worldwide. </p><h3 id="a-new-frontier-for-virtual-events-platforms-to-expand">A new frontier for Virtual Events Platforms to expand</h3><p><strong>Hybrid Events </strong></p><p>Hybrid virtual events will be the standard choice in the post-pandemic world. If you're creating time-lapse cutaways of room turnarounds for hybrid recaps, dependable <a href="https://fixthephoto.com/best-projectors-for-presentations.html" rel="noreferrer">projectors for presentations</a> keep in-venue visuals crisp while you shoot. This allows organizers unparalleled reach &amp; profit. 
Further, this could lead to interesting ideas &amp; avenues where both the virtual &amp; non-virtual could interact, play games, and network. To bridge the physical and digital experience, organizers are increasingly using QR codes generated with tools like&nbsp;<a href="https://www.the-qrcode-generator.com/" rel="noopener noreferrer">The QR Code Generator</a>, Uniqode, Canva, and&nbsp;<a href="https://qrscanner.net/qr-code-generator" rel="noopener noreferrer">QRScanner.net</a>&nbsp; to let in-person attendees instantly access live polls, feedback forms, or event materials from their mobile devices, boosting real-time engagement. To make these interactions more impactful, organizers and attendees can use tools like QR-based event badges, LinkedIn QR codes, or <a href="https://www.uniqode.com/digital-business-card" rel="noreferrer">Uniqode’s business card</a> to exchange contact information instantly during sessions. These digital networking aids help participants follow up easily, especially when time is limited or face-to-face interaction isn’t always feasible.</p><p><strong>Virtual Studio</strong></p><p>The next wave is virtual studios: these platforms are now focusing on expanding into the creator economy, providing first-class studio support for creator platforms like YouTube and Facebook. Zuddi launched their Studio a month back as a private beta to experiment in the creator market.</p><p><strong>Virtual Co-working</strong></p><p>Apart from this, in the metaverse, a couple of companies are experimenting with virtual offices to drive the engagement of their existing customers with another solution. 
Companies like SoWork are facilitating virtual work to encourage diversity in the workplace and counter climate change.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/11/Future-of-Virtual-Event--1-.jpg" class="kg-image" alt="The Future of Virtual Events: A Complete Guide for 2025" loading="lazy" width="1600" height="900"/></figure><h3 id="virtual-events-future-prediction">Virtual Events Future Prediction</h3><p>Predictions point to 40.37% growth in virtual events by the end of 2021. The market impact after the pandemic will remain neutral, as users are now well-versed in the benefits of virtual events. Several players occupy the market, but North America will continue to contribute the most, with a 29% boost. Professionals predict that the future will bring an amazing amalgamation of in-person and virtual events. </p><p>Virtual is at the frontier of opportunities and unexplored spaces, and we didn’t even mention the gaming industry, the metaverse, or VR headsets! The possibilities are yet to be explored. Experts strongly believe that virtual is going to change the status quo. However, there’s a lot of work that needs to be done to make that vision a reality. Startups and event management companies are becoming more strategic about this shift, which will revolutionize free-style socializing events.</p>]]></content:encoded></item><item><title><![CDATA[Grow Your Edtech Business]]></title><description><![CDATA[Expand your ed-tech business. 
Flourish your platform with extraordinary technologies and add value to it.]]></description><link>https://www.videosdk.live/blog/grow-edtech-business</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb68</guid><category><![CDATA[edtech]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 28 Jan 2025 04:19:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/07/Integrate-It-Anywhere-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/07/Integrate-It-Anywhere-2.jpg" alt="Grow Your Edtech Business"/><p>Virtual classrooms have become one of the most useful teaching and learning tools, and one of the trendiest techniques of teaching: stay in one place and explore all the learning. These classrooms have made learning simpler. They are also an advantage for tutors, who can engage a large crowd of students at once. These classrooms can also be broadcast on multiple platforms, multiplying the reach. <br/></p><blockquote>At videosdk.live, we dedicate our products to education. Having built several educational classrooms on a virtual platform, we always try to simplify a tutor’s teaching and a student’s learning experience. We have designed several classrooms as per the needs of the tutor. 
These classrooms are fully functional with amazing attributes, and the best part is that they can be fully customized as per the teaching needs.</blockquote><p><br/><strong>Our classroom designs</strong></p><p><strong>How to create super awesome classrooms with customization and ease</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/07/one-to-one-video-chat.jpg" class="kg-image" alt="Grow Your Edtech Business" loading="lazy" width="1600" height="900"/><figcaption><span style="white-space: pre-wrap;">Small Classrooms</span></figcaption></figure><h3 id="1-small-classrooms">1. Small classrooms</h3><p>We also help tutors and students with one-to-one mentoring, <strong>engaging and enhancing personal attention</strong> towards learning. Keeping that in mind, we have designed classrooms where a student and a teacher can converse with each other, fostering engrossed and attentive learning. The students get personal guidance with a student-teacher ratio of 1:1. <br/></p><p>When an educator focuses on bringing up a leader, he handpicks the students he envisions as future leaders and coaches them along the path he aspires to, under his guidance. Videosdk.live's small classroom platform also fulfills a tutor’s vision to guide students in a small group of 5 to 10, giving them personal guidance and attentiveness. <br/></p><p>These classrooms give tutors full-fledged flexibility with operations and customization. The students can <strong>interact, debate and discuss</strong> with each other, making the virtual platform a living classroom. We design classrooms with scalability and high-performance facets like whiteboards, media and document uploads, and active screen sharing, which makes them more lively and engaging. 
This also helps a tutor drive active communication within the smaller group, making big changes for a better future.<br/></p><p>Small classrooms are fully functional with engaging tools. We allow customization of the classrooms as per the needs of teaching. To create a small classroom, one paramount step is required before commencing fruitful learning and discussions. Follow the link below and create a wonderful classroom with a lively environment.</p><p><a href="https://docs.videosdk.live/docs/tutorials/realtime-communication/prebuilt-sdk/quickstart-prebuilt-js">https://docs.videosdk.live/docs/tutorials/realtime-communication/prebuilt-sdk/quickstart-prebuilt-js</a> </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/07/Real-time-large-classes.jpg" class="kg-image" alt="Grow Your Edtech Business" loading="lazy" width="1600" height="900"/><figcaption><span style="white-space: pre-wrap;">Large Classrooms</span></figcaption></figure><h3 id="2-large-classrooms">2. Large classrooms</h3><p>Videosdk.live enables tutors to teach a large number of students at once: these rooms can accommodate 5,000 students. The tutor has authority over admitting, muting and unmuting, and screen sharing. These lecture rooms can be made interactive or non-interactive by the tutor. Videosdk.live provides amazing features for large classrooms, which can be recorded and played on demand. <br/></p><p>With a full-fledged explanation board and amazing real-time audio and video quality, we provide participants with screen sharing, query raising, and active real-time chat with reliable audio and video quality. We support more than 98% of devices. We build features that coordinate a large number of people together, without chaos. 
We also deliver <strong>highly coordinated audio</strong> with 360 spatial audio, voice changing, and an audio mixer, making the session lively. Our end-to-end encrypted video communication stands for <strong>secure classroom learning.</strong> <br/></p><p>Large classrooms are ideal when a tutor wants to deliver tutoring to a large number of people at once, with unlimited time allotment. Videosdk.live builds this amazing platform for education: develop a platform for educating a huge number of people within 10 minutes. Follow the informative link below to start a classroom instantly.</p><p><a href="https://docs.videosdk.live/docs/tutorials/realtime-communication/js-sdk/quickstart-js">https://docs.videosdk.live/docs/tutorials/realtime-communication/js-sdk/quickstart-js</a> </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/07/Go-live-anytime.jpg" class="kg-image" alt="Grow Your Edtech Business" loading="lazy" width="1600" height="900"/><figcaption><span style="white-space: pre-wrap;">Live Classrooms</span></figcaption></figure><h3 id="3-live-classrooms">3. Live classrooms</h3><p>For an educator who believes education is worth delivering to everyone who is located remotely but eager to learn, what can be better than live classrooms? A live classroom <strong>streams to a million attendees</strong>, where a tutor gets huge engagement and can teach with full flexibility. The tutor can <a href="https://slidemodel.com/seminar-presentation/" rel="noreferrer">present a seminar</a> or his lecture live on the platform, and can also encode and store a lecture video so that it can be streamed on demand.<br/></p><p>Videosdk.live provides tutors with high-quality streaming, even in unstable network conditions. We provide <strong>streaming over multiple platforms</strong> at once, enriching the audience count. 
With adaptive streaming, we provide tutors with dedicated features like screen sharing, recording, and on-demand playback. Viewers can communicate via the chatbox and use it to post queries and comments to the tutor.<br/></p><p>Live streaming can be organized by the tutor to attract huge engagement with what he teaches. A tutor can develop a live stream within a few minutes and take his tutoring live over the internet for mass learning. To make access even more seamless for students joining from different devices or environments, tutors can generate QR codes using <a href="https://www.uniqode.com/qr-code-generator" rel="noreferrer">Uniqode's QR generator</a>, Canva, etc., to enable quick scan-and-join access to live sessions. Build live streaming for the masses within a few minutes. Study the link below and set up your streaming.</p><p><a href="https://docs.videosdk.live/docs/tutorials/live-streaming/api/quickstart-rest-api">https://docs.videosdk.live/docs/tutorials/live-streaming/api/quickstart-rest-api</a> </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/07/Upload-video-from-anywhere.jpg" class="kg-image" alt="Grow Your Edtech Business" loading="lazy" width="1600" height="900"/><figcaption><span style="white-space: pre-wrap;">Pre-recorded Classrooms</span></figcaption></figure><h3 id="4-pre-recorded-classrooms">4. Pre-recorded classrooms</h3><p>Videosdk.live, being highly cooperative with tutors and students, also creates pre-recorded learning rooms. These classrooms are filled with pre-recorded lectures. A tutor who wishes to deliver a lecture but cannot manage a live session <strong>can record a lecture and stream</strong> it at a scheduled time of his choice. These lectures are pre-recorded, encoded in the UI libraries, and hosted on demand. 
Once hosted, these lessons can be distributed to your mailing database on platforms like <a href="https://www.getresponse.com/" rel="noreferrer">GetResponse</a>, Brevo, ActiveCampaign or Mailchimp.</p><p>The ease of these classrooms is that a tutor gets to explain his course all at once, without any network lag. Students can access the recorded lectures whenever they wish, at their convenience. This <strong>uninterrupted lecture recording</strong> can also be accompanied by slides and videos for better understanding. With adaptive streaming, we ensure secure accessibility for the students.<br/></p><p>Pre-recorded classrooms gather large student engagement, as students can view these lectures on a loop until completely understood. They can also post their queries to the lecturer later and review the answers for further help. Take a glance at how to build a pre-recorded classroom by referring to the link below.</p><p><a href="https://docs.videosdk.live/docs/tutorials/video-on-demand/api/quickstart-rest-api">https://docs.videosdk.live/docs/tutorials/video-on-demand/api/quickstart-rest-api</a> <br/></p><blockquote>Also, refer to our blog describing how Videosdk.live makes education entertaining and enjoyable, gathering the masses together. 
You can refer to this link to learn why education remains one of our most dedicated use cases.</blockquote><p><strong>(</strong><a href="https://videosdk.live/blog/engage-ed-tech-business"><strong>Preview Part 1</strong></a><strong>)</strong></p>]]></content:encoded></item><item><title><![CDATA[Engage Your Edtech Business]]></title><description><![CDATA[The education system has shifted all its focus towards growing e-learning classrooms, adapting to new technologies for an augmented teaching and learning system]]></description><link>https://www.videosdk.live/blog/engage-ed-tech-business</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb67</guid><category><![CDATA[edtech]]></category><category><![CDATA[video-conferencing]]></category><category><![CDATA[live-streaming]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 27 Jan 2025 18:10:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/07/Real-time-small-classes-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/07/Real-time-small-classes-2.jpg" alt="Engage Your Edtech Business"/><p>The modern education system works on the idea of teaching students in a virtual classroom environment, the reason being that it attracts mass learning. Education, always a top priority, needs the most advanced technology to date. We have all started learning at big institutes under the guidance of experts while sitting at home. It is because of technology that we have expanded our reach of learning.<br/></p><p>Today a student or learner can reach any place of learning without stepping out of his room. Everything has become virtual. They can reach out to educators from a different city, or even a different country. Simply put, education has no boundaries. This is all possible due to technological advancements in the sector. 
We can communicate over <a href="https://www.videosdk.live/audio-video-conferencing" rel="noreferrer">video conferencing</a>, webinars, live streaming, and one-to-one meetings with educators. <br/></p><p>For educators, teaching over the internet is a great opportunity. A tutor or educator can today teach a student from any part of the world, sitting at home. The benefits are enormous: it saves time, cost, and energy. The tutor also gets a wide reach of students, building out his student base. Technology in education, aka online learning, is a boon for an educated economy.  <br/></p><p>Videosdk.live works for the betterment of education with the help of technology. We design products that enhance the educational growth of the community: APIs and SDKs for real-time communication, customizable to teaching needs for a better communication system. We offer products like scalable audio and video streaming, along with live streaming and video conferencing APIs. Highly dedicated to promoting virtual classroom education, we regularly design products that benefit this community.<br/></p><h3 id="how-do-we-build-interactive-classrooms">How do we build interactive classrooms?</h3><p>Online education, or simply ed-tech, requires several technologies to connect a teacher with a learning community. E-learning with videosdk.live comes with enthusiasm: we design products with full customization as per teaching needs, ensure effortless management of work on a virtual platform, and aim for simplicity with easy-to-use teaching tools. 
Our products make it possible to connect with large audiences at once, in the least time.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/07/Real-time-small-classes-1.jpg" class="kg-image" alt="Engage Your Edtech Business" loading="lazy" width="1600" height="900"/><figcaption><span style="white-space: pre-wrap;">Real-time Communication</span></figcaption></figure><h3 id="1-real-time-audio-and-video-communication">1. <a href="https://docs.videosdk.live/docs/realtime-communication/intro">Real-time audio and video communication</a></h3><p>Videosdk.live curates live interactive sessions with real-time voice, video, and chat support, designed with no lag in connectivity. Educators and students can connect easily with our technologies: a wholesome, quality experience where teaching can take the form of personal tutoring, a small batch, or a large group. We have developed teaching tools with an extraordinary essence.<br/></p><ul><li><strong>Video conferencing</strong></li></ul><p>A well-developed video conferencing API that makes room for 5,000 participants at once, with no time limitations. Secure and stable, one can easily join a meeting without entering passcodes; we value your time. We provide tutors with host access, whiteboard technology, supported screen sharing, and the ability to <a href="https://www.uniqode.com/qr-code-generator" rel="noreferrer">incorporate QR codes</a> for quick access. Our scalable video conferencing works on more than 98% of devices, so that technology is not a barrier to learning.<br/></p><ul><li><strong>Audio conferencing</strong></li></ul><p>Videosdk.live also provides one-to-one and group voice calls. Students and teachers can communicate over audio calls to clear small doubts and queries. Voice calls are a faster communication tool for short conversations. 
We have designed the audio call APIs with cloud recording. We support audio conferencing with more than 5,000 participants at once, and that’s huge!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/07/Go-live-anytime-1.jpg" class="kg-image" alt="Engage Your Edtech Business" loading="lazy" width="1600" height="900"/><figcaption><span style="white-space: pre-wrap;">Live Streaming</span></figcaption></figure><h3 id="2-live-streaming">2. <a href="https://docs.videosdk.live/docs/live-streaming/intro">Live Streaming</a></h3><p>We provide live streaming with scalable quality and huge mass access. It can be interactive or non-interactive. We support streaming with unlimited participants at scalable quality for uninterrupted learning. Live streaming supports 98% of devices and adapts to screen resolution and internet bandwidth. We support cloud stream recording with an on-demand video streaming facility. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/07/On-Demand-Courses-1.jpg" class="kg-image" alt="Engage Your Edtech Business" loading="lazy" width="1600" height="900"/><figcaption><span style="white-space: pre-wrap;">On-demand Video Streaming</span></figcaption></figure><h3 id="3-on-demand-videos">3. <a href="https://docs.videosdk.live/docs/video-on-demand/intro">On-demand Videos</a></h3><p>We provide on-demand video streaming facilities for viewing streamed and other recorded videos. Students can help themselves by re-watching the videos for better study. Teachers can also pre-record videos so students can play them at their own learning speed; play and pause allow learning at a gradual pace. Our on-demand facility allows watching recorded video meetings and live streams after they have been broadcast. 
On-demand videos help students design a learning pattern, and help teachers review old videos for new, more refined explanations. These videos can also be repurposed for broader <a href="https://www.distribution.ai/blog/content-distribution" rel="noreferrer">content distribution</a>, helping educators extend their reach.<br/></p><h3 id="how-do-we-make-your-classroom-worth-it">How do we make your classroom worth it?</h3><p>Videosdk.live always focuses on quality. With a never-compromising attitude, we create experiences worth sharing in the community. Our quality development with generous attributes puts us above the standard level. We aim for the best product delivery to make learning better and more qualitative. Our APIs and SDKs ensure a classroom worth learning and teaching in, all with a cohesive bond.<br/></p><ol><li><strong>Customization as per need</strong></li></ol><p>Videosdk.live provides full customization of a classroom meeting. All the necessary adjustments can be made with the provided label. We ensure structured classroom delivery, and you can even add more stars to it. We are flexible about changes to our APIs for learning and education.<br/></p><p><strong>2. Astounding audio and video quality</strong></p><p>With amazing customizable classrooms, the audio and video quality we provide is beyond par. Our HD audio and video quality, active speaker switching, noise cancellation, speaker window switching, and more make our learning rooms worth employing for education. <br/></p><p><strong>3. Fully functional attributes</strong></p><p>Videosdk.live classrooms contain supreme attributes. We offer a whiteboard, real-time audio, video, and messaging, along with customizations. Supporting quality education, our tools help tutors keep the students engaged.  <br/></p><p><strong>4. 
Dedicated scalability</strong></p><p>Focusing on screen resolution and high-quality video delivery, we ensure scalability by supporting the majority of devices. To ensure that learning carries on even in unfavorable network conditions, we keep up with the latest technologies to deliver the best video and audio quality to the student and teacher community.<br/></p><p>Videosdk.live develops several customized classrooms for effective learning. Want to learn how we benefit educators and students with our exclusive classrooms?<br/>Find out how to develop effective and effortless classrooms at videosdk.live.</p><p>Click here (<a href="https://app.videosdk.live/dashboard"><strong>Try Now</strong></a>)</p><blockquote><strong>We value education!</strong></blockquote><p><strong>(</strong><a href="https://www.videosdk.live/blog/grow-edtech-business"><strong>Next: Part 2</strong></a><strong>)</strong></p>]]></content:encoded></item><item><title><![CDATA[Ultimate Guide to React Icons Enhancing Your Web Projects with Scalable Graphics]]></title><description><![CDATA[Explore the essentials of React Icons in this guide, from setup to advanced usage. 
Learn how to integrate and style icons within your React projects for improved performance and design.]]></description><link>https://www.videosdk.live/blog/react-icons-guide</link><guid isPermaLink="false">661b9dee2a88c204ca9d0c51</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 27 Jan 2025 04:42:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/luke-peters-B6JINerWMz0-unsplash-1.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction-to-react-icons">Introduction to React Icons</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/luke-peters-B6JINerWMz0-unsplash-1.jpg" alt="Ultimate Guide to React Icons Enhancing Your Web Projects with Scalable Graphics"/><p>React Icons is a powerful library that provides accessible SVG icons for React applications, integrating popular icon sets such as Font Awesome, Material Design, and Ionicons into your projects seamlessly. This library is essential for developers looking to add visually appealing, scalable, and easy-to-manage icons to their web applications.</p><h2 id="what-are-react-icons">What are React Icons?</h2><p>React Icons is a useful collection of popular icons that you can add to your React projects. It brings together many different icon libraries, giving you lots of choices for your designs.</p><h2 id="why-use-react-icons">Why Use React Icons?</h2><p>The primary advantage of using React Icons lies in its efficiency and ease of use. The library supports tree-shaking, meaning it only bundles the icons used in your project, thus reducing the overall bundle size and improving load times. 
Furthermore, each icon is treated as a React component, allowing for easier integration and manipulation within a React environment.</p><h3 id="diagram-for-installing-react-icons">Diagram for Installing React Icons</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/diagram--1-.png" class="kg-image" alt="Ultimate Guide to React Icons Enhancing Your Web Projects with Scalable Graphics" loading="lazy" width="1868" height="882"/></figure><h3 id="installation-and-basic-setup-fro-react-icons">Installation and Basic Setup for React Icons</h3><p>To get started with <a href="https://www.npmjs.com/package/react-icons">React Icons</a>, you need to install the package via npm, which is straightforward:</p><pre><code class="language-bash">npm install react-icons --save</code></pre><p>Once installed, you can begin using React Icons in your React project by importing them directly from the library. Here's a basic example of how to use an icon:</p><pre><code class="language-javascript">import { FaBeer } from 'react-icons/fa';

function App() {
  return &lt;h3&gt;Let's have a &lt;FaBeer /&gt; tonight!&lt;/h3&gt;;
}
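
// Note (illustrative addition, not from the original snippet): react-icons
// components also accept props such as `size` and `color`, which are applied
// to the rendered SVG, e.g.
//   &lt;FaBeer size={24} color="goldenrod" /&gt;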
</code></pre><p>This code snippet demonstrates the simplicity of incorporating an icon directly into your React components. The <code>{ FaBeer }</code> import represents a beer icon from the Font Awesome collection, which can be rendered as part of your JSX code.</p><p>React Icons provides a versatile and performance-efficient solution for using icons in React applications. By enabling developers to import only the icons they need, it ensures that the application remains lightweight and fast. As you proceed to integrate React Icons into your projects, you will appreciate the flexibility and aesthetic enhancement they bring to your user interfaces.</p><h2 id="exploring-icon-libraries-and-usage-with-react-icons">Exploring Icon Libraries and Usage with React Icons</h2><p>React Icons amalgamates several popular icon libraries, offering a vast selection of icons suitable for various design needs. This section delves into these libraries, discusses advanced icon configuration, and explores dynamic icon usage, providing you with the tools to enhance your project's visual and interactive aspects.</p><h3 id="icon-libraries-overview">Icon Libraries Overview</h3><p>React Icons integrates icons from multiple libraries, including FontAwesome, MaterialIcons, and Bootstrap Icons, among others. 
This integration offers a unified way to access a wide range of icons, from social media symbols to <a href="https://www.designmantic.com/how-to/how-to-get-cheap-logo-design-for-business" rel="noreferrer">business icons</a>, all within your React applications.</p><p><strong>FontAwesome:</strong> Known for its comprehensive collection of icons, FontAwesome is frequently used for its versatility and wide acceptance.</p><p><strong>MaterialIcons:</strong> These are grounded in Material Design principles and offer clean UI/UX designs and graphics perfect for modern web interfaces.</p><p><strong>Bootstrap Icons:</strong> Bootstrap Icons are specifically designed to work seamlessly with Bootstrap components but are versatile enough to be used in other contexts.</p><p>Using icons from these libraries is straightforward. For instance, to use a heart icon from FontAwesome, you would import it like so:</p><pre><code class="language-js">import { FaHeart } from 'react-icons/fa';</code></pre><p>This single line of code allows you to render a heart icon anywhere within your React components.</p><h3 id="advanced-icon-configuration">Advanced Icon Configuration</h3><p>To globally configure icons in your project, React Icons provides the <code>IconContext.Provider</code>. This component allows you to set default properties for all icons, such as size, color, and style, reducing the need to individually style each icon.</p><p>Here’s how you can set global properties for your icons:</p><pre><code class="language-js">import { IconContext } from 'react-icons';
import { FaHeart } from 'react-icons/fa';

function App() {
  return (
    &lt;IconContext.Provider value={{ color: 'blue', size: '50px' }}&gt;
      &lt;div&gt;
        &lt;FaHeart /&gt;
        {/* other components */}
      &lt;/div&gt;
    &lt;/IconContext.Provider&gt;
  );
}
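
// Note (based on react-icons' documented IconContext behavior): props passed
// directly to an individual icon take precedence over the context defaults,
// e.g.
//   &lt;FaHeart color="red" /&gt;  // renders red even inside this Provider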
</code></pre><p>This configuration sets all icons within the <code>Provider</code> to be blue and 50 pixels in size, ensuring consistency across your app without repeated code.</p><h3 id="dynamic-icon-usage">Dynamic Icon Usage</h3><p>React Icons can also dynamically change based on application state. This is particularly useful for interactive elements like buttons or toggles, where the icon might change in response to user actions.</p><p>For example, you can create a toggle button that switches between a "plus" and "minus" icon based on whether a section is expanded or collapsed:</p><pre><code class="language-js">import { useState } from 'react';
import { AiOutlinePlus, AiOutlineMinus } from 'react-icons/ai';

function ToggleButton() {
  const [isOpen, setIsOpen] = useState(false);

  return (
    &lt;button onClick={() =&gt; setIsOpen(!isOpen)}&gt;
      {isOpen ? &lt;AiOutlineMinus /&gt; : &lt;AiOutlinePlus /&gt;}
    &lt;/button&gt;
  );
}
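
// Tip (illustrative addition): since the icon alone conveys state visually,
// also give the button an accessible label that reflects the state, e.g.
//   &lt;button aria-label={isOpen ? 'Collapse section' : 'Expand section'}&gt;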
</code></pre><p>This snippet demonstrates the use of React state to conditionally render different icons, providing an intuitive visual cue for the user.</p><p>The React Icons library not only simplifies the process of integrating icons but also enhances the flexibility and functionality of your UI components. By leveraging the extensive libraries available and utilizing advanced configuration options, you can create a more engaging and visually appealing user experience. In the following sections, we will explore practical applications and best practices to optimize the performance and accessibility of these icons within your projects.</p><h2 id="practical-applications-and-best-practices-for-react-icons">Practical Applications and Best Practices for React Icons</h2><p>In this section, we delve into the practical applications of React Icons within a project, discussing how to optimize their performance and ensure accessibility, alongside tips for effective styling and customization. When building interfaces with sequential steps or interactive flows, React Icons help communicate actions clearly, and an <a href="https://venngage.com/ai-tools/flowchart-generator" rel="noreferrer">AI-powered flowchart creator</a> can also assist in designing structured visuals that complement your UI components. By adhering to these best practices, developers can enhance both the user experience and the technical quality of their applications.</p><h3 id="performance-optimization">Performance Optimization</h3><p>Optimizing the performance of React Icons is crucial for maintaining fast load times and efficient rendering in your application. If your workflow includes prepping SVG assets or running design tools alongside the dev server, a capable <a href="https://fixthephoto.com/best-computer-for-graphic-design.html" rel="noreferrer">computer for graphic design</a> helps keep previews, exports, and builds responsive. 
Here are several strategies:</p><p><strong>1. Selective Importing</strong>: Import only the icons you use within your components rather than entire libraries. This minimizes bundle size significantly.</p><pre><code class="language-js">import { FaBeer } from 'react-icons/fa';</code></pre><p>This code demonstrates importing a single icon rather than all FontAwesome icons, which keeps your application lightweight.</p><ol start="2"><li><strong>Lazy Loading</strong>: For applications using a large number of icons or large sets, consider lazy loading icons that aren't immediately visible to the user. This can be achieved with dynamic imports in React.</li><li><strong>Use SVGs Appropriately</strong>: Since React Icons are SVG elements, ensure that they are not re-rendered unless necessary. Using <code>React.memo</code> or the <code>shouldComponentUpdate</code> lifecycle method can prevent unnecessary re-renders.</li></ol><h3 id="accessibility-considerations">Accessibility Considerations</h3><p>Ensuring that icons are accessible is key to creating inclusive web applications. Here are some tips to enhance icon accessibility:</p><ol><li><strong>Aria Labels</strong>: Add appropriate <code>aria-label</code> attributes to icons that serve as interactive elements, providing a text description that screen readers can announce.</li><li><strong>Keyboard Accessibility</strong>: Ensure that icons used as buttons or links are focusable and can be triggered using the keyboard. Wrapping them in <code>&lt;button&gt;</code> or <code>&lt;a&gt;</code> tags can achieve this.</li><li><strong>Semantic HTML</strong>: Use appropriate HTML elements for icons depending on their function. 
Icons that trigger actions should be buttons, while meaningful non-interactive icons should use a <code>&lt;span&gt;</code> with <code>role="img"</code> and an <code>aria-label</code>; purely decorative icons are better hidden from assistive technology with <code>aria-hidden="true"</code>.</li></ol><h2 id="faqs-and-troubleshooting-common-issues-with-react-icons">FAQs and Troubleshooting Common Issues with React Icons</h2><p>In this final section of our comprehensive guide to React Icons, we address some frequently asked questions and provide troubleshooting tips for common issues that developers may encounter while using this library. This segment is designed to enhance your troubleshooting skills and help you resolve typical problems efficiently.</p><h3 id="frequently-asked-questions-faqs">Frequently Asked Questions (FAQs)</h3><p><strong>How do I ensure React Icons are properly aligned with text?</strong></p><ul><li>Icons may not always align perfectly with adjacent text. To fix alignment issues, wrap your icon and text in a flex container and use CSS to align items to the center.</li></ul><pre><code class="language-js">&lt;div style={{ display: 'flex', alignItems: 'center' }}&gt;
  &lt;FaBeer /&gt; &lt;span&gt;Enjoy responsibly!&lt;/span&gt;
&lt;/div&gt;
</code></pre><p><strong>Can I use multiple icon libraries in one project?</strong></p><ul><li>Yes, you can import icons from different libraries within the same project. However, be mindful of the overall size of your bundle and the performance implications of importing many libraries.</li></ul><p><strong>What should I do if an icon does not appear as expected?</strong></p><ul><li>Ensure that the import statement is correct and points to the right library. It’s common to mistakenly import an icon from a different library or misspell the icon name.</li></ul><p><strong>How can I change the color and size of an icon dynamically?</strong></p><ul><li>You can dynamically style icons using inline styles or CSS classes based on state or props, e.g. <code>&lt;FaBeer size={30} color={isActive ? 'gold' : 'gray'} /&gt;</code>, where <code>isActive</code> is a state variable.</li></ul><p><strong>Are there any accessibility concerns I should be aware of when using icons as buttons?</strong></p><ul><li>When using icons as interactive elements like buttons, ensure they are wrapped in <code>&lt;button&gt;</code> tags, have accessible labels, and are focusable for keyboard-only users.</li></ul><h3 id="troubleshooting-common-issues">Troubleshooting Common Issues</h3><p><strong>Icon Import Errors:</strong></p><ul><li>Double-check your import statements for typos. Use the exact name as defined in the library, and ensure that your project's dependencies include the icon library.</li></ul><p><strong>Icons Not Rendering:</strong></p><ul><li>Verify that the icon components are correctly implemented in the JSX. 
Check if there are any console errors related to SVG or React component rendering.</li></ul><p><strong>Performance Issues with a Large Number of Icons:</strong></p><ul><li>If performance is impacted by the use of many icons, consider implementing code splitting or dynamically importing icons only when they are needed.</li></ul><p><strong>Styling Issues:</strong></p><ul><li>For icons that don’t pick up the style properties you set, inspect the elements in a browser to see if styles are being overridden. Ensure that styles are not being applied to a parent container instead of directly to the icon.</li></ul><p><strong>Library Conflicts:</strong></p><ul><li>Sometimes, different icon libraries might conflict, especially if they use similar names for different icons. Use aliases in your import statements to resolve these conflicts.</li></ul><pre><code class="language-js">import { FaBeer as BeerIcon } from 'react-icons/fa';
</code></pre><p>By addressing these FAQs and troubleshooting common issues, developers can reduce downtime and frustration, ensuring a smoother development process and a better user experience. As you integrate React Icons into your projects, keep this guide handy to overcome any hurdles with ease.</p>]]></content:encoded></item><item><title><![CDATA[Top 10 Daily.co Alternatives in 2026]]></title><description><![CDATA[Explore the top Daily.co alternatives for 2026, compare their features and pricing, and find the video API that best fits your application.]]></description><link>https://www.videosdk.live/blog/daily-co-alternative</link><guid isPermaLink="false">64af84735badc3b21a5955b9</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Sun, 26 Jan 2025 12:55:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/Daily-alternative.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Daily-alternative.jpg" alt="Top 10 Daily.co Alternatives in 2026"/><p>If you're looking for seamless integration of real-time video into your application and seeking an <a href="https://www.videosdk.live/daily-vs-videosdk" rel="noreferrer"><strong>alternative to Daily.co</strong></a>, you've come to the right spot! While Daily.co is a popular option, there are plenty of untapped opportunities available beyond their platform. Stay tuned to discover what you might have been missing out on, particularly if you're already using Daily.co. Get ready to explore new possibilities!</p><h2 id="understanding-the-need-for-a-dailyco-alternative">Understanding the Need for a Daily.co Alternative</h2>
<p>It's important to note that Daily.co is not without issues. One of its main issues is that it does not work properly in the latest <a href="https://github.com/daily-co/daily-js/issues/156">Chrome on iOS</a> or in <a href="https://github.com/daily-co/daily-js/issues/132">Firefox</a>. The platform is also hindered by legacy code and an unintuitive developer experience, which can impact its overall usability.</p><p>If you're currently a Daily.co customer or considering trying it out, we recommend reading this blog to understand the potential limitations and missed opportunities you might encounter.</p><p>The <strong>top 10 Daily.co alternatives</strong> are VideoSDK, Twilio Video, MirrorFly, Agora, Jitsi, Vonage, AWS Chime, EnableX, Whereby, and SignalWire.</p><blockquote>
<h2 id="top-10-dailyco-alternatives-for-2026">Top 10 Daily.co Alternatives for 2026</h2>
<ul>
<li>VideoSDK</li>
<li>Twilio Video</li>
<li>MirrorFly</li>
<li>Agora</li>
<li>Jitsi</li>
<li>Vonage</li>
<li>AWS Chime SDK</li>
<li>Enablex</li>
<li>Whereby</li>
<li>SignalWire</li>
</ul>
</blockquote>
<h2 id="1-videosdk-seamless-integration-for-enhanced-audio-video-experience">1. VideoSDK: Seamless Integration for Enhanced Audio-Video Experience</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-for-Real-time-Communication-Live-Streaming-Video-API-7.jpeg" class="kg-image" alt="Top 10 Daily.co Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Experience the power of <a href="https://www.videosdk.live">VideoSDK</a>, an API designed to seamlessly integrate robust audio-video features into your applications. With minimal effort, you can enhance your app with live audio and video experiences across any platform.</p><h3 id="key-points-about-videosdk">Key points about VideoSDK</h3>
<ul><li>VideoSDK offers simplicity and speed of integration, freeing up your time to focus on developing innovative features that enhance user retention. </li><li>Say goodbye to complex integration processes and unlock a world of possibilities.</li><li>Enjoy the benefits of VideoSDK, including high scalability, <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming" rel="noreferrer">adaptive bitrate</a> technology, end-to-end customization, superior quality recordings, in-depth analytics, cross-platform streaming, seamless scaling, and comprehensive platform support. </li><li>It works effortlessly on mobile (Flutter, Android, iOS), web (JavaScript Core SDK + UI Kit), and desktop (Flutter Desktop), allowing you to create immersive video experiences.</li></ul><h3 id="videosdk-pricing">VideoSDK pricing</h3>
<ul><li>Get incredible value with VideoSDK! Take advantage of <a href="https://www.videosdk.live/pricing">$20 of free minutes</a> and <a href="https://www.videosdk.live/pricing#pricingCalc">flexible pricing options</a> for video and audio calls. </li><li><strong>Video calls</strong> start at just <strong>$0.003</strong> per participant per minute, while <strong>audio calls</strong> begin at <strong>$0.0006</strong>.</li><li>Additional costs include <strong>$0.015</strong> per minute for <strong>cloud recordings</strong> and <strong>$0.030</strong> per minute for <strong>RTMP output</strong>.</li><li>Plus, enjoy <strong>free 24/7 customer support</strong>.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/daily-vs-videosdk"><strong>Daily and VideoSDK</strong></a><strong>.</strong></blockquote>
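<p>To get a feel for how these rates combine, here is a rough cost sketch. It assumes a flat per-participant-minute model using the rates quoted above, which is a simplification; use VideoSDK's pricing calculator for authoritative figures.</p>

```javascript
// Rough cost estimate from the advertised rates (illustrative only; assumes
// a flat per-participant-minute model with no volume tiering).
const RATES = {
  videoPerParticipantMinute: 0.003,  // video calls, per participant per minute
  audioPerParticipantMinute: 0.0006, // audio calls, per participant per minute
  recordingPerMinute: 0.015,         // cloud recording, per session minute
  rtmpPerMinute: 0.03,               // RTMP output, per session minute
};

function estimateVideoCallCost({ participants, minutes, recording = false, rtmp = false }) {
  // Per-participant charge scales with both participants and duration.
  let cost = participants * minutes * RATES.videoPerParticipantMinute;
  // Recording and RTMP are charged per session minute, not per participant.
  if (recording) cost += minutes * RATES.recordingPerMinute;
  if (rtmp) cost += minutes * RATES.rtmpPerMinute;
  return Number(cost.toFixed(4));
}

// A recorded 60-minute call with 10 participants:
// 10 * 60 * 0.003 + 60 * 0.015 = 1.8 + 0.9 = 2.7 (USD)
console.log(estimateVideoCallCost({ participants: 10, minutes: 60, recording: true }));
```

<p>Under this model, the per-participant charge grows linearly with call size, while recording and RTMP add a fixed per-minute overhead regardless of participant count.</p>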
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</link></meta></meta></head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-twilio-video-versatile-live-video-for-mobile-and-web">2. Twilio Video: Versatile Live Video for Mobile and Web</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Communication-APIs-for-SMS-Voice-Video-Authentication_twilio-6.jpeg" class="kg-image" alt="Top 10 Daily.co Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Twilio stands as one of the top video SDK solutions, providing businesses with the ability to seamlessly integrate live video into their mobile and web applications.</p><p>The key advantage of utilizing Twilio is its versatility, allowing you to either build an app from the ground up or enhance existing solutions with powerful communication features. Whether you're starting fresh or expanding your app's capabilities, Twilio offers a reliable and comprehensive solution for incorporating live video into your applications.</p><h3 id="key-points-about-twilio">Key points about Twilio</h3>
<ul><li>Twilio provides web, iOS, and Android SDKs for integrating live video into applications.</li><li>Manual configuration and extra code are required for using multiple audio and video inputs.</li><li>Twilio's call insights can track and analyze errors, but additional code is needed for implementation.</li><li>Pricing can be a concern as usage grows, as Twilio lacks a built-in tiering system in the dashboard.</li><li>Twilio supports up to 50 hosts and participants in a call.</li><li>There are no plugins available for easy product development with Twilio.</li><li>The level of customization offered by the Twilio Video SDK may not meet the needs of all developers, resulting in additional code writing.</li></ul><h3 id="pricing-for-twilio">Pricing for Twilio</h3>
<ul><li>The <a href="https://www.twilio.com/en-us/video/pricing">pricing</a> for <a href="https://www.videosdk.live/blog/twilio-video-alternative"><strong>Twilio</strong></a> starts at <strong>$4</strong> per 1,000 minutes. </li><li><strong>Recordings</strong> are charged at <strong>$0.004</strong> per participant minute, <strong>recording compositions</strong> cost <strong>$0.01</strong> per composed minute, and <strong>storage</strong> is priced at <strong>$0.00167</strong> per GB per day after the first 10 GBs.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-daily"><strong>Daily and Twilio</strong></a><strong>.</strong></blockquote><h2 id="3-mirrorfly-enterprise-grade-in-app-communication">3. MirrorFly: Enterprise-Grade In-App Communication</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Live-Video-Call-API-Best-Video-Chat-SDK-for-Android-iOS-mirrorfly-6.jpeg" class="kg-image" alt="Top 10 Daily.co Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>MirrorFly is an outstanding in-app communication suite designed specifically for enterprises. It offers a wide range of powerful APIs and SDKs that deliver exceptional chat and calling experiences. With over 150 features for chat, voice, and video calling, this cloud-based solution seamlessly integrates to create a robust communication platform.</p><h3 id="key-points-about-mirrorfly">Key points about MirrorFly</h3>
<ul><li>MirrorFly may have limitations when it comes to customization options, which can restrict the ability to tailor the platform to specific branding or user experience needs. </li><li>This can hinder the uniqueness and personalization of communication features. </li><li>Scaling and handling high user volumes may pose challenges for MirrorFly, potentially impacting performance and stability, especially during periods of significant traffic or complex use cases. </li><li>Users have reported mixed experiences with MirrorFly's technical support, with some encountering delays or difficulties in issue resolution. </li><li>MirrorFly's pricing structure may not be suitable for all budgets or use cases, as costs can be higher depending on desired features and scalability requirements.</li><li>Integrating MirrorFly into existing applications or workflows may require considerable effort and technical expertise, as comprehensive documentation and developer resources may be lacking.</li></ul><h3 id="mirrorfly-pricing">MirrorFly pricing</h3>
<ul><li>MirrorFly's <a href="https://www.mirrorfly.com/pricing.php">pricing</a> starts at <strong>$299</strong> per month, making it a higher-cost option to consider.</li></ul><h2 id="4-agora-robust-features-for-engaging-live-interactions">4. Agora: Robust Features for Engaging Live Interactions</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Agora-Real-Time-Voice-and-Video-Engagement-6.jpeg" class="kg-image" alt="Top 10 Daily.co Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Agora's video calling SDK is packed with features such as embedded voice and video chat, real-time recording, live streaming, and instant messaging, empowering developers to create captivating live in-app experiences.</p><h3 id="key-points-about-agora">Key points about Agora</h3>
<ul><li>Agora's video SDK offers features like embedded voice and video chat, real-time recording, live streaming, and instant messaging.</li><li>Add-ons such as AR facial masks, sound effects, whiteboards, and other features are available for an additional cost.</li><li>Agora's SD-RTN provides global coverage, connecting individuals from over 200 countries and regions with ultra-low latency streaming capabilities.</li><li>The pricing structure can be complex and may not be suitable for businesses with limited budgets.</li><li>Users seeking hands-on support may experience delays as Agora's support team may require additional time to provide assistance.</li></ul><h3 id="agora-pricing">Agora pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/agora-alternative"><strong>Agora</strong></a> offers Premium and Standard <a href="https://www.agora.io/en/pricing/">pricing</a> options, where the usage duration for audio and video is calculated monthly. </li><li>Video usage is categorized into four types based on resolution, and pricing is determined accordingly. </li><li>The pricing includes <strong>Audio</strong> at <strong>$0.99</strong> per 1,000 participant minutes, <strong>HD Video</strong> at <strong>$3.99</strong> per 1,000 participant minutes, and <strong>Full HD Video</strong> at <strong>$8.99</strong> per 1,000 participant minutes.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-daily"><strong>Daily and Agora</strong></a><strong>.</strong></blockquote><h2 id="5-jitsi-open-source-video-conferencing-for-businesses">5. Jitsi: Open-Source Video Conferencing for Businesses</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Free-Video-Conferencing-Software-for-Web-Mobile-Jitsi-7.jpeg" class="kg-image" alt="Top 10 Daily.co Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Jitsi is a collection of free, open-source projects that offer versatile video conferencing services for individuals, teams, and organizations. Its most prominent components are Jitsi Meet and Jitsi Videobridge.</p><p>Jitsi Meet functions as an end-client application, similar to Zoom, enabling users to conduct video chats. It provides features such as screen sharing, real-time collaboration, user invitations, and more.</p><h3 id="key-points-about-jitsi">Key points about Jitsi</h3>
<ul><li>Jitsi is free, open-source, and offers end-to-end encryption, allowing code customization.</li><li>The live experience includes features like active speakers, text chatting (web only), room locking, screen sharing, raise/lower hand, push-to-talk mode, and an audio-only option.</li><li>Security is a major concern, as Jitsi lacks adequate privacy and encryption for confidential business meetings.</li><li>Ample storage space is required for storing and saving meeting data offline, posing challenges for businesses with frequent sessions.</li><li>Jitsi Meet has a limitation of handling up to 200 attendees, which can detract from the meeting experience for larger audiences.</li><li>Meetings lasting longer than three hours are not supported, making it inconvenient for large companies, organizations, webinars, and other institutions.</li><li>Video and audio quality in Jitsi Meet is poor, impacting the overall meeting experience for participants and hosts.</li></ul><h3 id="jitsi-pricing">Jitsi pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/jitsi-alternative"><strong>Jitsi</strong></a> is an open-source platform that is available for <strong>free</strong> usage. </li><li>However, if your business requires advanced features or technical support, it might be worth considering third-party providers that offer commercial support for Jitsi.</li><li>The cost of commercial support can vary based on your specific requirements and the chosen provider.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/jitsi-vs-daily"><strong>Daily and Jitsi</strong></a><strong>.</strong></blockquote><h2 id="6-vonage-reliable-communication-for-rapid-prototyping">6. Vonage: Reliable Communication for Rapid Prototyping</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-API-Fully-Programmable-and-Customizable-Vonage-5.jpeg" class="kg-image" alt="Top 10 Daily.co Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Although TokBox was acquired by Vonage and subsequently renamed the "Vonage Video API," the industry still commonly refers to it as TokBox. TokBox's SDKs enable reliable point-to-point communication, making them a viable option for building proofs of concept during hackathons or under investor deadlines.</p><h3 id="key-points-about-vonage">Key points about Vonage</h3>
<ul><li>The TokBox SDK enables building custom audio/video streams with effects, filters, and AR/VR on mobile devices.</li><li>It supports various use cases like 1:1 video, group video chat, and large-scale broadcast sessions.</li><li>Participants can share screens, exchange messages via chat, and send data during calls.</li><li>One challenge is scaling costs, as the price per stream per minute increases with a growing user base.</li><li>Additional features like recording and interactive broadcasts come at an extra cost.</li><li>After 2,000 connections, the platform switches to CDN delivery, resulting in higher latency.</li><li>Real-time streaming at scale is a struggle, as anything over 3,000 viewers requires switching to HLS, which introduces significant latency.</li></ul><h3 id="vonage-pricing">Vonage pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/vonage-alternative"><strong>Vonage</strong></a> follows a usage-based <a href="https://www.vonage.com/communications-apis/video/pricing/">pricing</a> model based on the number of participants in a video session, which is dynamically calculated every minute.</li><li>Their <strong>pricing plans</strong> begin at <strong>$9.99</strong> per month and include a free allowance of 2,000 minutes per month across all plans. </li><li>Once the free allowance is used up, users are charged at a rate of <strong>$0.00395</strong> per participant per minute. <strong>Recording</strong> services start at <strong>$0.010</strong> per minute, while <strong>HLS streaming</strong> carries a cost of <strong>$0.003</strong> per minute.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/vonage-vs-daily"><strong>Daily and Vonage</strong></a><strong>.</strong></blockquote><h2 id="7-aws-chime-sdk">7. AWS Chime SDK</h2>
<p>The Amazon Chime SDK is the underlying technology of Amazon Chime, operating without its outer shell or user interface.</p><h3 id="key-points-about-aws-chime-sdk">Key points about AWS Chime SDK</h3>
<ul><li>The Amazon Chime SDK allows up to 25 participants (or 50 for mobile users) in a video meeting.</li><li>Simulcast technology ensures consistent video quality across different devices and networks.</li><li>All calls, videos, and chats are encrypted for enhanced security.</li><li>It lacks certain features like polling, auto-sync with Google Calendar, and background blur effects.</li><li>Compatibility issues have been reported in Linux environments and with participants using the Safari browser.</li><li>Customer support experiences can vary, with inconsistent query resolution times depending on the support agent.</li></ul><h3 id="aws-chime-pricing">AWS Chime pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative"><strong>AWS Chime</strong></a>'s <strong>free basic plan</strong> allows users to have one-on-one audio/video calls and group chats.</li><li><strong>The Plus plan</strong>, <a href="https://aws.amazon.com/chime/pricing/">priced</a> at <strong>$2.50</strong> per monthly user, provides additional features including <strong>screen sharing</strong>, <strong>remote desktop control</strong>, <strong>1 GB of message history</strong> per user, and <strong>Active Directory integration</strong>.</li><li><strong>The Pro plan</strong>, priced at <strong>$15</strong> per user per month, includes all the features of the Plus plan and allows for <strong>meetings</strong> with three or more participants.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/amazon-chime-sdk-vs-daily"><strong>Daily and AWS Chime</strong></a><strong>.</strong></blockquote><h2 id="8-enablex">8. EnableX</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Call-API-Video-Chat-API-Voice-API-Video-Conferencing_enebleX-7.jpeg" class="kg-image" alt="Top 10 Daily.co Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>The EnableX SDK provides video and audio calling capabilities along with collaborative features such as a whiteboard, screen sharing, annotation, recording, host control, and chat. With the SDK, you can utilize a video builder to deploy custom video-calling solutions directly into your application. It allows you to create personalized live video streams with a custom user interface, hosting options, billing integration, and other essential functionalities tailored to your specific needs.</p><h3 id="key-points-about-enablex">Key points about EnableX</h3>
<ul><li>EnableX provides a self-service portal with reporting capabilities and live analytics to track quality and facilitate online payments.</li><li>The SDK supports the JavaScript, PHP, and Python programming languages.</li><li>Users can stream live content directly from their app/site or on platforms like YouTube and Facebook for broader audience reach.</li><li>The support team's response time may take up to 72 hours, which can be a drawback for users in need of timely assistance.</li></ul><h3 id="enablex-pricing">EnableX pricing</h3>
<ul><li>EnableX <a href="https://www.enablex.io/cpaas/pricing/our-pricing">pricing</a> starts at <strong>$0.004</strong> per minute per participant for rooms with <strong>up to 50 people</strong>. </li><li>If you require hosting larger meetings or events, you can reach out to their sales team for custom pricing options.</li><li>EnableX also provides a <strong>recording option</strong> priced at <strong>$0.010</strong> per minute per participant. </li><li>Additionally, if you need to transcode your video into a <strong>different format</strong>, it is available at a rate of <strong>$0.010</strong> per minute. </li><li>For <strong>additional storage</strong> or <strong>RTMP streaming</strong>, you can obtain them for <strong>$0.05</strong> per GB per month and <strong>$0.10</strong> per minute, respectively.</li></ul><h2 id="9-whereby">9. Whereby</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Calling-API-for-Web-and-App-Developers-Whereby-7.jpeg" class="kg-image" alt="Top 10 Daily.co Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Whereby is a straightforward and user-friendly video conferencing platform that caters to basic small- to medium-sized meetings. However, it may not be the ideal choice for larger businesses or those in need of more advanced features.</p><h3 id="key-points-about-whereby">Key points about Whereby</h3>
<ul><li>Whereby allows basic customization of the video interface, but it has limited options and doesn't support a fully custom experience.</li><li>Video calls can be embedded directly within websites, mobile apps, and web products, eliminating the need for external links or apps.</li><li>Whereby offers a seamless video conferencing experience, but it may lack advanced features found in other tools. </li><li>The maximum capacity is 50 participants.</li><li>Screen sharing for mobile users and customization options for the host interface may be limited.</li><li>Whereby does not offer a virtual background feature, and some users have reported issues with the mobile app, which may impact the overall user experience.</li></ul><h3 id="whereby-pricing">Whereby pricing</h3>
<ul><li>Whereby follows a <a href="https://whereby.com/information/pricing">pricing</a> model that begins at <strong>$6.99</strong> per month, offering up to 2,000 user minutes that are renewed monthly. </li><li>Once the allocated minutes are used up, an additional charge of <strong>$0.004</strong> per minute is applicable. </li><li>The platform also provides options for <strong>cloud recording</strong> and <strong>live streaming</strong>, which are charged at a rate of <strong>$0.01</strong> per minute.</li><li><strong>Email</strong> and <strong>chat support</strong> are available for <strong>free</strong> to all users, ensuring that assistance is readily accessible when needed. </li><li><strong>Paid</strong> support plans offer additional features such as <strong>technical onboarding</strong>, <strong>customer success management</strong>, and <strong>HIPAA compliance</strong>.</li></ul><h2 id="10-signalwire">10. SignalWire</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Building-The-Software-Defined-Telecom-Network-SignalWire-6.jpeg" class="kg-image" alt="Top 10 Daily.co Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>SignalWire is a platform powered by APIs, specifically designed to help developers seamlessly integrate live and on-demand video experiences into their applications. Its primary goal is to simplify video encoding, delivery, and renditions, ensuring a smooth and uninterrupted video streaming experience for users.</p><h3 id="overview-of-signalwire">Overview of SignalWire</h3>
<ul><li>SignalWire provides an SDK that enables the integration of real-time video and live streams into web, iOS, and Android applications. </li><li>Each video call can accommodate up to 100 participants in a real-time <a href="https://www.videosdk.live/blog/webrtc" rel="noreferrer">WebRTC</a> environment. </li><li>However, it's important to note that the SDK does not offer built-in support for managing disruptions or user publish-subscribe logic, which developers will need to implement separately.</li></ul><h3 id="signalwire-pricing">SignalWire pricing</h3>
<ul><li>SignalWire utilizes a <a href="https://signalwire.com/pricing/video">pricing</a> model based on per-minute usage. </li><li>The pricing includes <strong>$0.0060</strong> per minute for <strong>HD video calls</strong> and <strong>$0.012</strong> for <strong>Full HD video calls</strong>. </li><li>The actual cost may vary depending on the desired video quality for your application.</li><li>SignalWire also offers additional features such as <strong>recording</strong>, which is available at a rate of <strong>$0.0045</strong> per minute. </li><li>This allows you to capture and store video content for future use. </li><li>The platform also provides <strong>streaming capabilities</strong>, priced at <strong>$0.10</strong> per minute, allowing you to broadcast your video content in real-time.</li></ul><h2 id="certainly">Why VideoSDK?</h2>
<p><a href="https://www.videosdk.live">VideoSDK</a> stands out as an SDK that prioritizes fast and seamless integration. With a low-code solution, developers can quickly build live video experiences in their applications, deploying custom video conferencing solutions in under 10 minutes. Unlike other SDKs, VideoSDK offers a streamlined process with easy creation and embedding of live video experiences, enabling real-time connections, communication, and collaboration.</p><h2 id="still-skeptical">Still skeptical?</h2>
<p>Immerse yourself in the possibilities of VideoSDK by taking a deep dive into its comprehensive <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start" rel="noopener noreferrer">Quickstart guide</a>. Explore the potential with the <a href="https://docs.videosdk.live/code-sample" rel="noopener noreferrer">powerful sample app</a> exclusively designed for Video SDK. Sign up now and start your integration journey, and don't miss out on the chance to claim your <a href="https://www.videosdk.live/pricing" rel="noopener noreferrer">complimentary $20 free credit</a> to unlock the full potential of Video SDK. Our dedicated team is always ready to assist you whenever you need support. Get ready to showcase your creativity and build remarkable experiences with Video SDK. Let the world see what you can create!</p>]]></content:encoded></item><item><title><![CDATA[What is Computer Emergency Response Team (CERT)? Why is it Important Globally?]]></title><description><![CDATA[Discover the types of CERTs and their importance in maintaining digital security across organizations, industries, and nations.]]></description><link>https://www.videosdk.live/blog/what-is-cert</link><guid isPermaLink="false">66d6d6d220fab018df10fea0</guid><category><![CDATA[CERT]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Sun, 26 Jan 2025 11:54:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-CERT.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-CERT.png" alt="What is Computer Emergency Response Team (CERT)? Why is it Important Globally?"/><p>CERT, which stands for <a href="https://sei.cmu.edu/about/divisions/cert/index.cfm">Computer Emergency Response Team</a>, is a specialized group of cybersecurity professionals dedicated to addressing and managing cybersecurity incidents. 
It plays a crucial role in the global cybersecurity landscape by helping organizations, governments, and individuals protect against and respond to cyber threats.</p><p>Their primary responsibilities include protecting against, detecting, and responding to various cybersecurity threats, such as data breaches and denial-of-service attacks. CERTs also play a vital role in public awareness campaigns and research aimed at enhancing security systems.</p><p>The concept of CERT originated in 1988 with the establishment of the <a href="http://sei.cmu.edu/about/divisions/cert/index.cfm">CERT Coordination Center (CERT/CC)</a> at Carnegie Mellon University. This center was created to address the growing need for coordinated responses to cybersecurity incidents.</p><p>Over time, different CERTs have been formed globally, often affiliated with specific organizations or governmental bodies. For instance, the <a href="https://www.cisa.gov/sites/default/files/publications/infosheet_US-CERT_v2.pdf" rel="noreferrer">United States Computer Emergency Readiness Team</a> (US-CERT) was established in 2003 to serve as a coordination point for cyber threat prevention and response in the U.S.</p><h2 id="why-do-you-need-cert">Why do you need CERT?</h2><ul><li><strong>Incident Response</strong>: CERTs help organizations respond to and recover from cybersecurity incidents, minimizing the damage caused by attacks. They have the expertise to detect, analyze, and mitigate threats quickly, helping reduce downtime and prevent future incidents. A&nbsp;well-defined <a href="https://www.wiz.io/academy/incident-response-plan" rel="noreferrer">incident response plan</a>&nbsp;is essential in guiding these actions effectively and ensuring a coordinated approach.</li><li><strong>Threat Intelligence Sharing</strong>: CERTs collect and analyze data on emerging threats and vulnerabilities, which they share with clients and the broader cybersecurity community. 
This proactive approach helps organizations stay ahead of potential threats by applying necessary patches and security measures.</li><li><strong>Vulnerability Management</strong>: They identify security weaknesses in systems and provide actionable advice to address them. This includes offering guidance on <a href="https://www.motadata.com/patch-management-software/" rel="noreferrer">patch management</a> and securing systems, reducing the risk of exploitation by cybercriminals.</li><li><strong>Public Awareness</strong>: CERTs play an important role in educating the public and organizations on cybersecurity best practices. They conduct training and awareness programs to help users adopt secure behaviors and reduce the likelihood of falling victim to attacks.</li><li><strong>Global Coordination</strong>: CERTs collaborate with other security organizations, law enforcement, and international partners to respond to large-scale incidents and enhance the overall cybersecurity ecosystem. This global network strengthens the collective defense against cyber threats.</li></ul><h2 id="how-many-types-of-certs-are-in-the-world">How many types of CERTs are in the World?</h2><ul><li><strong>National and Regional CERTs</strong>: Many countries have their own national CERTs, such as US-CERT in the United States, <a href="https://www.cert-in.org.in/">CERT-In</a> in India, and <a href="https://www.jpcert.or.jp/">JPCERT</a> in Japan. 
These teams focus on the cybersecurity needs of their respective regions.</li><li><strong>Industry-Specific CERTs</strong>: Some CERTs specialize in specific industries, like finance or healthcare, to address sector-specific threats.</li><li><strong>Global CERT Networks</strong>: Organizations like <a href="https://www.first.org/">FIRST (Forum of Incident Response and Security Teams)</a> and the <a href="http://sei.cmu.edu/about/divisions/cert/index.cfm">CERT Coordination Center (CERT/CC)</a> work to connect CERTs globally, enhancing cooperation and effectiveness in tackling cybersecurity issues.</li></ul><p>Overall, CERTs are essential in the global effort to enhance cybersecurity readiness, response, and resilience against the ever-evolving threat landscape.</p>]]></content:encoded></item><item><title><![CDATA[Video KYC vs Traditional KYC: What's the Difference?]]></title><description><![CDATA[This article compares traditional in-person KYC with innovative Video KYC methods, highlighting their pros and cons.]]></description><link>https://www.videosdk.live/blog/video-kyc-vs-traditional-kyc</link><guid isPermaLink="false">66db01e620fab018df110099</guid><category><![CDATA[Video KYC]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Sun, 26 Jan 2025 11:52:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/Difference-between-Traditional-KYC-and-Video-KYC--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/Difference-between-Traditional-KYC-and-Video-KYC--1-.jpg" alt="Video KYC vs Traditional KYC: What's the Difference?"/><p>We know that customer verification processes, aka <a href="https://www.swift.com/your-needs/financial-crime-cyber-security/know-your-customer-kyc/kyc-process" rel="noreferrer">Know Your Customer (KYC)</a> processes, have become essential for businesses and their customers against fraud, money laundering, and other 
illegal activities, particularly in the financial sector.</p><p>These processes ensure that institutions can accurately identify and verify their customers, thereby mitigating risks associated with fraud and compliance violations.</p><p>As technology evolves, so do the methods of customer verification, with digital approaches such as eKYC and <a href="https://www.videosdk.live/solutions/video-kyc" rel="noreferrer">Video KYC</a> now emerging. These innovative methods promise to streamline the verification process, offering both convenience and enhanced security.</p><p>In this article, we'll explore the key differences between Video KYC and traditional KYC verification, highlight the advantages and challenges of each, and consider the implications for the future of financial services.</p><h2 id="what-is-traditional-kyc-physical-kyc"><strong>What is Traditional KYC (Physical KYC)?</strong></h2><p>Traditional KYC verification, often referred to as in-person or physical KYC, has been the standard method employed by financial institutions for decades. These established methods are used by businesses, particularly in the financial sector, to collect personal information and documentation, such as government-issued identification (Aadhaar card, PAN card, passport), proof of address (electricity bill or voter ID), and sometimes even financial history (passbook, bank statement), to confirm the identity of clients before initiating services.</p><p>Historically, this method has been the pillar of KYC (Know Your Customer) protocols, which were designed to reduce fraud and ensure compliance with regulatory requirements. 
However, as digital interactions have surged, limitations such as time consumption, inconvenience, and higher operational costs have initiated the search for more efficient alternatives like eKYC and Video KYC.</p><h3 id="the-regular-kyc-procedure">The Regular KYC Procedure</h3><ol><li><strong>Document Collection</strong>: Customers are required to provide various identification documents, such as government-issued IDs, proof of address, and sometimes additional supporting documents like utility bills or bank statements.</li><li><strong>In-Person Visit</strong>: The customer must visit a branch or designated location to present these documents in person. This step allows bank representatives to physically examine the documents and verify their authenticity.</li><li><strong>Face-to-Face Interview</strong>: A bank employee questions the customer to gather additional information and assess the risk profile.</li><li><strong>Document Verification</strong>: The collected documents are scrutinized for authenticity, often involving manual checks and comparisons against databases.</li><li><strong>Data Entry</strong>: Customer information is manually entered into the bank's systems, a process prone to human error.</li><li><strong>Approval Process</strong>: The compiled information undergoes review by relevant departments for final approval.</li></ol><h3 id="challenges-and-limitations-of-the-traditional-approach">Challenges and Limitations of the Traditional Approach</h3><ul><li><strong>Time-Consuming</strong>: The entire process can take days or weeks to complete, leading to delays in account opening or service activation.</li><li><strong>Resource-Intensive</strong>: It requires significant human resources like verification agents and physical infrastructure to manage in-person verifications.</li><li><strong>Geographical Limitations</strong>: Customers must be physically present, which can be inconvenient for most customers or impossible for those in remote areas or 
different countries.</li><li><strong>Paperwork Burden</strong>: The dependency on physical documents increases the risk of loss, damage, or misuse of sensitive information.</li><li><strong>Inconsistent Experience</strong>: The quality of verification can differ depending on the skills and training of individual bank employees.</li></ul><p>Despite these challenges, traditional verification has remained the prevailing KYC process due to its perceived reliability and the comfort of face-to-face interactions. But as we'll explore in the following sections, the emergence of Video KYC is offering a more efficient and technologically advanced alternative.</p><h2 id="what-is-video-kyc"><strong>What is Video KYC?</strong></h2><p><a href="https://www.videosdk.live/solutions/video-kyc" rel="noreferrer">Video KYC</a> (Know Your Customer) is a digital approach to verifying customer identities through live video interactions, allowing financial institutions to authenticate customers remotely without the need for physical presence. This process is also called Digital KYC because it combines digital technology with the security of face-to-face interaction, making it increasingly important in the digital banking landscape.</p><p>The Video KYC process involves a customer starting a video call with a trained agent, during which they present identification documents such as an Aadhaar card, PAN card, or passport for verification. Advanced technologies such as facial recognition and <a href="https://www.videosdk.live/blog/what-is-liveness-detection">liveness detection</a> are used to ensure that the individual is physically present and that their identity matches the provided documents.</p><p>Additionally, <a href="https://aws.amazon.com/what-is/ocr/">Optical Character Recognition (OCR)</a> technology is used to extract and verify information from these documents in real time. 
This innovative approach allows customers to complete their KYC requirements remotely, using video conferencing technology and advanced authentication methods.</p><h2 id="the-importance-of-video-kyc">The Importance of Video KYC</h2><p>Video KYC represents a significant leap forward in customer verification processes, leveraging digital technology to create a more efficient, accessible, and secure experience.</p><p>Key features of Video KYC include:</p><ol><li><strong>Remote Verification</strong>: Customers can complete the KYC process from anywhere with an internet connection and a device with a camera.</li><li><strong>Real-time Document Scanning</strong>: Advanced OCR (Optical Character Recognition) technology enables instant document verification.</li><li><strong>Facial Recognition</strong>: AI-powered facial recognition tools enhance identity verification accuracy.</li><li><strong>Geo-tagging and Timestamp</strong>: These features add an extra layer of security to the verification process.</li><li><strong>Recorded Sessions</strong>: Video KYC sessions are recorded for audit and compliance purposes.</li></ol><p>The regulatory framework supporting Video KYC has evolved rapidly, with many countries now recognizing its validity. For instance, the <a href="https://www.videosdk.live/blog/rbi-compliance-for-video-kyc">Reserve Bank of India (RBI) issued guidelines</a> in January 2020 allowing banks and other regulated entities to use Video KYC for customer onboarding.</p><h2 id="comparison-between-video-kyc-vs-traditional-kyc"><strong>Comparison between Video KYC vs Traditional KYC </strong></h2><p>While traditional customer verification methods have been the standard for decades, the emergence of Video KYC has introduced a more efficient and convenient alternative. Let's compare the two approaches:</p>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th><strong>Feature</strong></th>
<th><strong>Traditional Verification</strong></th>
<th><strong>Video KYC</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>Process</td>
<td>Involves in-person visits to branches or offices, where customers must physically show their identification documents</td>
<td>Allows customers to complete the verification process remotely through live video calls</td>
</tr>
<tr>
<td>Document Submission</td>
<td>Submit physical copies of their documents.</td>
<td>Enables digital document submission in front of the camera during the video call</td>
</tr>
<tr>
<td>Time Efficiency</td>
<td>Time-consuming and prone to delays due to manual processes and human error</td>
<td>Streamlines process through automation, allowing for faster customer onboarding and reduced waiting times</td>
</tr>
<tr>
<td>Customer Experience</td>
<td>Inconvenient for customers, requiring them to visit branches and wait in line</td>
<td>Offers a convenient, tech-savvy experience that aligns with digital banking trends.</td>
</tr>
<tr>
<td>Security and Fraud Prevention</td>
<td>Relies on human expertise to detect forgeries and impersonation attempts.</td>
<td>Utilizes AI and machine learning to enhance fraud detection, including deepfake prevention techniques.</td>
</tr>
<tr>
<td>Geographical Limitations</td>
<td>Limited by physical branch locations, challenging for remote or international customers.</td>
<td>Eliminates geographical barriers, allowing banks to reach a wider customer base</td>
</tr>
<tr>
<td>Cost</td>
<td>High operational costs due to physical infrastructure, staff, and paper-based processes.</td>
<td>Significantly lower costs, with reduced need for physical branches and streamlined digital processes.</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
<h2 id="advantages-of-video-kyc">Advantages of Video KYC</h2><p>Video KYC offers several compelling advantages that improve both customer experience and operational efficiency for financial institutions.</p><ul><li><strong>Enhanced Customer Onboarding</strong>: Faster processes lead to improved customer satisfaction and reduced drop-off rates during account opening.</li><li><strong>Reduced Operational Costs</strong>: Banks can significantly cut expenses for physical infrastructure and manual processing.</li><li><strong>Improved Accuracy</strong>: AI-powered systems reduce human errors in data entry and document verification.</li><li><strong>Scalability</strong>: Video KYC allows financial institutions to handle higher volumes of verifications without proportional increases in resources.</li><li><strong>Data Security</strong>: Digital processes often provide better encryption and security measures for sensitive customer information.</li></ul><p>These advantages make Video KYC an important option for both financial institutions and customers, driving its increasing adoption across the global financial sector.</p><h2 id="challenges-in-video-kyc">Challenges in Video KYC</h2><p>While Video KYC offers numerous advantages, it also presents some challenges that financial institutions must address:</p><ul><li><strong>Technology Infrastructure</strong>: Building an in-house <a href="https://www.videosdk.live/solutions/video-kyc" rel="noreferrer">Video KYC structure</a> requires robust IT systems capable of handling video streaming, data processing, and secure storage.</li><li><strong>Data Privacy and Security</strong>: With increased digital interactions, ensuring the protection of sensitive customer data becomes paramount.</li><li><strong>Regulatory Compliance</strong>: As a relatively new technology, Video KYC must navigate evolving regulatory landscapes across different jurisdictions.</li><li><strong>Digital Divide</strong>: Not all customers may have access to the internet or
the necessary technology, and some may feel uncomfortable with digital processes.</li></ul><h2 id="implementation-of-video-kyc">Implementation of Video KYC</h2><p>To successfully implement Video KYC, financial institutions should consider the following best practices:</p><ul><li><strong>Selecting the Right Solution</strong>: Choose the right <a href="https://www.videosdk.live/">Video KYC infrastructure</a> that aligns with your specific needs, regulatory requirements, and customer base.</li><li><strong>Staff Training</strong>: Ensure that employees are well-trained in using the new infrastructure and can guide customers through the process effectively. A structured <a href="https://www.qooper.io/blog/mentor-mentee-roles-and-expectations" rel="noreferrer">mentor-mentee</a> program can further support this, allowing experienced staff to guide newer team members and accelerate knowledge transfer.</li><li><strong>System Integration</strong>: Seamlessly integrate with existing customer onboarding and workflow management systems.</li><li><strong>User Experience Design</strong>: Create a user-friendly interface that caters to customers with varying levels of technical proficiency.</li><li><strong>Continuous Monitoring and Improvement</strong>: Regularly assess the performance of your Video KYC system and make necessary adjustments based on feedback and emerging technologies.</li></ul><h2 id="future-of-kyc-trends-and-predictions">Future of KYC: Trends and Predictions</h2><p>The KYC landscape continues to evolve rapidly. 
Some key trends to watch include:</p><ol><li><strong>AI and Machine Learning</strong>: Advanced algorithms will further enhance fraud detection and automate complex verification processes.</li><li><strong>Biometric Advancements</strong>: Innovations in biometric technology, such as behavioral biometrics, will provide even more secure authentication methods.</li><li><strong>Blockchain and Decentralized Identity</strong>: Blockchain technology could revolutionize KYC by creating secure, decentralized identity verification systems.</li><li><strong>Regulatory Technology (RegTech)</strong>: The growth of <a href="https://www.videosdk.live/blog/what-is-regtech">RegTech</a> solutions will help financial institutions navigate complex and changing KYC regulations more efficiently.</li></ol><p>As we look to the future, the key for financial institutions will be to balance innovation with security and regulatory compliance, ensuring that digital KYC processes remain robust, efficient, and user-friendly in an increasingly digital world.</p><p>While traditional methods have served well for decades, the digital age demands more efficient, secure, and customer-friendly solutions. By staying up-to-date on emerging trends, banks and other financial institutions can successfully leverage Video KYC to streamline their operations and meet the evolving needs of their customers.</p>]]></content:encoded></item><item><title><![CDATA[Cloud Recording and RTMP Pricing]]></title><description><![CDATA[Cloud record and stream to multiple streaming platforms with the most affordable pricing. 
Enjoy streaming to YouTube, Twitch, Facebook, and more]]></description><link>https://www.videosdk.live/blog/cloud-recording-and-rtmp-pricing</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb70</guid><category><![CDATA[Pricing]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Sun, 26 Jan 2025 09:35:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/09/Cloud-Recording---RTMP-thumbnail-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Cloud-Recording---RTMP-thumbnail-1.jpg" alt="Cloud Recording and RTMP Pricing"/><p>Audio and video calling has become essential for almost every company and organization. Demand for calling APIs grows when they come with additional functions and attractive deliverables. API providers bundle a variety of features to attract clients, but the two most important are cloud recording and RTMP.<br/></p><blockquote>This blog describes the two important functions, Cloud Recording and RTMP, with their pricing. Stay tuned!</blockquote><h2 id="cloud-recording">Cloud Recording</h2><p>Cloud Recording refers to storing a recorded video meeting so the host can retrieve the meeting data for future use. It is a computer data recording system that records and stores video meetings in digital pools, typically called “the cloud”.<br/></p><p>Videosdk.live has consistently provided RTC services with a cloud recording function, and here we present the pricing. Do note, our cloud recording feature is designed around our clients’ comfort. 
We provide various recording options to ease the workflow of our clients.<br/></p><p>There are two modes of cloud recording:</p><p><strong>a) Only cloud recording  b) Recording with storage and streaming</strong><br/></p><h3 id="only-cloud-recording">Only Cloud Recording</h3><p>This option suits a host who wants to record only the meeting from our video calling API and store the recording in their own cloud. Videosdk.live provides users with the option of only recording the meeting and then transcoding and transferring it to their cloud. This makes it easy to link your storage directly with your workflow, ensuring seamless access and management of your recordings. Additionally, you can leverage <a href="https://cloudchipr.com/blog/best-cloud-cost-optimization-tools" rel="noreferrer">cloud cost optimization tools</a> to manage storage expenses efficiently.<br/></p><p>When a user chooses this option, we assure:</p><ul><li>Only the cost of recording is charged</li><li>The cost of cloud recording is a one-time charge</li><li>There is no cost levied for storage and streaming</li><li>The meeting is recorded on our cloud</li><li>Users can transfer recorded meetings to their cloud</li><li>There is no transfer cost</li><li>The cost is computed on the number of meeting minutes, not the number of participants<br/></li></ul><p><strong>Calculation of cost</strong></p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/CLoud.jpg" class="kg-image" alt="Cloud Recording and RTMP Pricing" loading="lazy" width="1600" height="1040"/></figure><p><strong>Cost = total recording minutes x cost per minute</strong><br/></p><p>Let’s understand the concept with some examples.</p><p><strong>Our cost per minute = $0.0015 for a month</strong><br/></p><p><strong>Example 1</strong></p><p>Total minutes = 20</p><p>Cost of cloud recording = 20 x 0.0015 = $0.03<br/></p><p><strong>Example 2</strong></p><p>Total minutes = 160</p><p>Cost 
of cloud recording = 160 x 0.0015 = $0.24<br/></p><h3 id="cloud-recording-with-storage-and-streaming">Cloud Recording with storage and streaming</h3><p>Videosdk.live also provides cloud recording with storage and streaming. When the host makes this choice, we provide all three services: the video meeting is recorded, stored, and streamed.<br/></p><p>When a user chooses to record along with storage and streaming, these points must be noted:</p><ul><li>The cost is charged for all three functions [Recording, Storage, and Streaming]</li><li>All three functions have independent pricing</li><li>The cost of Storage is recurring per month</li><li>The streaming costs are charged as per the number of views</li><li>The cost of cloud recording is a one-time charge<br/></li></ul><p><strong>Calculation of cost</strong></p><p><strong>Cost of delivery (no. of views) = Total views x Total Minutes x $0.0010</strong><br/></p><p>Let’s understand the concept with some examples.</p><p><strong>Example 1</strong></p><p><strong>Total minutes = 30, Total Views = 90</strong><br/></p><p>Cost of recording = 30 x 0.0015 = $0.045</p><p>Cost of storage = 30 x 0.003 = $0.09</p><p>Cost of delivery = 30 x 90 x 0.0010 = $2.7<br/></p><p><strong>Example 2</strong></p><p><strong>Total Minutes = 100, Total Views = 200</strong></p><p>Cost of recording = 100 x 0.0015 = $0.15</p><p>Cost of storage = 100 x 0.003 = $0.3</p><p>Cost of delivery = 100 x 200 x 0.0010 = $20<br/></p><h2 id="real-time-messaging-protocol-rtmp">Real-Time Messaging Protocol (RTMP)</h2><p>RTMP, short for Real-Time Messaging Protocol, enables the smooth transmission of data. It lets streamers push large volumes of audio, video, and other data, streaming to multiple platforms without trouble. It is used widely for real-time communication, live streaming, on-demand video streaming, and more.<br/></p><blockquote>Videosdk.live brings its RTMP prices with amazing features. 
Our RTMP function lets streamers easily transmit video to several different platforms, such as YouTube, Twitch, Facebook, and more.</blockquote><p>At videosdk.live, with our RTMP,</p><ul><li>You can re-stream to unlimited streaming platforms</li><li>The cost is charged as per the RTMP minutes</li><li>RTMP per video is a one-time cost<br/></li></ul><p><strong>Calculation of cost of RTMP</strong></p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/RTMP.jpg" class="kg-image" alt="Cloud Recording and RTMP Pricing" loading="lazy" width="1600" height="1040"/></figure><p><strong>Cost of RTMP = Total RTMP minutes x $0.0025</strong><br/></p><p><strong>Example 1</strong></p><p>Total minutes = 60</p><p>Cost of RTMP = 60 x 0.0025 = $0.15<br/></p><p><strong>Example 2</strong></p><p>Total Minutes = 200</p><p>Cost of RTMP = 200 x 0.0025 = $0.5<br/></p><blockquote>Videosdk.live aims to deliver high-quality products. All our services, including cloud recording and RTMP, are served with astounding quality at affordable prices. Our add-on functions, cloud recording and RTMP, are outstanding examples of that quality.</blockquote>]]></content:encoded></item><item><title><![CDATA[Building a Virtual Event Platform with Prebuilt SDK]]></title><description><![CDATA[Learn how to build a virtual events platform in 5 easy steps. 
A step-by-step tutorial using the Prebuilt SDK to help you create a live virtual event on your platform.]]></description><link>https://www.videosdk.live/blog/building-a-virtual-event-platform-with-prebuilt-sdk</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb8b</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[virtual events]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Sat, 25 Jan 2025 13:45:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/05/Virtual-event-platform.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/05/Virtual-event-platform.jpg" alt="Building a Virtual Event Platform with Prebuilt SDK"/><p>Build your virtual event platform with VideoSDK using Prebuilt APIs within minutes.</p><p>Building a <a href="https://www.videosdk.live/solutions/virtual-events" rel="noreferrer">virtual event</a> platform is an excellent idea, since the virtual event industry is growing rapidly, with an estimated valuation of $774 billion by 2030. The only problem is finding the best video SDK to integrate into your app. No doubt there are many video SDKs on the market, but many are overpriced, lacking in features &amp; customization, or simply short on quality customer support. With Video SDK we try to solve all these industry dilemmas and provide you the best experience possible! </p><h3 id="why-choose-videosdk-to-build-your-virtual-event-platform">Why choose VideoSDK to build your Virtual event platform?</h3><p>VideoSDK is a video &amp; audio conferencing service you can integrate into your app within a few minutes. It helps you build the best virtual event platform with <a href="https://www.videosdk.live/pricing">$20 free credit</a>. 100+ startups are using <a href="https://www.videosdk.live/">VideoSDK</a> to build powerful platforms in the Virtual event space. 
We provide Prebuilt as well as <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/getting-started">advanced virtual event platform features</a> to integrate into your app. </p><p>If you are building a virtual events platform using Low-Code or No-Code apps then check out our below tutorials.</p><ul><li>How to embed “Video Calling API” on any low-code web builder?</li><li>How to build a video calling app with Bubble.io?</li><li><a href="https://www.videosdk.live/blog/video-calling-wordpress">How to build a video calling app on WordPress?</a></li></ul><h2 id="the-steps-to-build-a-virtual-event-platform-using-prebuilt-sdk">The steps to build a virtual event platform using Prebuilt SDK</h2><p>Following the below steps will guide you to quickly build the best virtual event platform. Make sure to follow them accordingly, and if you face any issues, join our <a href="https://discord.com/invite/Gpmj6eCq5u">discord community</a> and we will solve your issues the right way!</p><h2 id="1-create-a-videosdk-account-and-generate-api-key">1: Create a VideoSDK account and generate API Key</h2><p><a href="https://app.videosdk.live/"><strong>Sign Up</strong></a> on Video SDK to create your free account. Once you set up the account, go to <a href="https://app.videosdk.live/settings/api-keys" rel="noopener noreferrer">settings&gt;api-keys</a> and generate a new <strong>API key</strong> for integration. (For more info, you can follow <a href="https://docs.videosdk.live/docs/guide/prebuilt-video-and-audio-calling/signup-and-create-api" rel="noopener noreferrer">How to generate API Key?</a>)</p><h2 id="2-setup-a-meeting-in-your-virtual-event-platform">2: Set up a meeting in your virtual event platform </h2><p>Create an <code>index.html</code> file and add the following <code>&lt;script&gt;</code> tag at the end of your code's <code>&lt;body&gt;</code> tag. Initialize <code>VideoSDKMeeting</code> after the script gets loaded. 
Replace the <code>apiKey</code> with the one generated in <strong>Step 1</strong>.</p><pre><code class="language-js">&lt;!DOCTYPE html&gt;
&lt;html lang="en"&gt;
  &lt;head&gt;
    &lt;meta charset="UTF-8" /&gt;
    &lt;meta http-equiv="X-UA-Compatible" content="IE=edge" /&gt;
    &lt;meta name="viewport" content="width=device-width, initial-scale=1.0" /&gt;
    &lt;title&gt;Videosdk.live RTC&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
&lt;script&gt;
  var script = document.createElement("script");
  script.type = "text/javascript";

  script.addEventListener("load", function (event) {
    const meeting = new VideoSDKMeeting();

    const meetingId = "milkyway"; // enter your meeting id

    const config = {
      name: "John Doe",
      apiKey: "&lt;API KEY&gt;", // generated in step 1
      meetingId: meetingId,

      containerId: null,
      redirectOnLeave: "https://www.videosdk.live/",

      micEnabled: true,
      webcamEnabled: true,
      participantCanToggleSelfWebcam: true,
      participantCanToggleSelfMic: true,

      chatEnabled: true,
      screenShareEnabled: true,
      pollEnabled: true,
      whiteboardEnabled: true,
      raiseHandEnabled: true,

      recordingEnabled: true,
      recordingWebhookUrl: "https://www.videosdk.live/callback",
      recordingAWSDirPath: `/meeting-recordings/${meetingId}/`, // automatically save recording in this s3 path
      autoStartRecording: true, // auto start recording on participant joined

      brandingEnabled: true,
      brandLogoURL: "https://picsum.photos/200",
      brandName: "Awesome startup",
      poweredBy: true,

      participantCanLeave: true, // if false, leave button won't be visible

      // Live stream meeting to youtube
      livestream: {
        autoStart: true,
        outputs: [
          // {
          //   url: "rtmp://x.rtmp.youtube.com/live2",
          //   streamKey: "&lt;STREAM KEY FROM YOUTUBE&gt;",
          // },
        ],
      },

      permissions: {
        askToJoin: false, // Ask joined participants for entry in meeting
        toggleParticipantMic: true, // Can toggle other participant's mic
        toggleParticipantWebcam: true, // Can toggle other participant's webcam
        drawOnWhiteboard: true, // Can draw on whiteboard
        toggleWhiteboard: true, // Can toggle whiteboard
        toggleRecording: true, // Can toggle meeting recording
        removeParticipant: true, // Can remove participant
        endMeeting: true, // Can end meeting
      },

      joinScreen: {
        visible: true, // Show the join screen ?
        title: "Daily scrum", // Meeting title
        meetingUrl: window.location.href, // Meeting joining url
      },

      pin: {
        allowed: true, // participant can pin any participant in meeting
        layout: "SPOTLIGHT", // meeting layout - GRID | SPOTLIGHT | SIDEBAR
      },

      leftScreen: {
        // visible when redirectOnLeave is not provided
        actionButton: {
          // optional action button
          label: "Video SDK Live", // action button label
          href: "https://videosdk.live/", // action button href
        },
      },

      notificationSoundEnabled: true,

      maxResolution: "sd", // "hd" or "sd"
    };

    meeting.init(config);
  });

  script.src =
    "https://sdk.videosdk.live/rtc-js-prebuilt/0.3.1/rtc-js-prebuilt.js";
  document.getElementsByTagName("head")[0].appendChild(script);
&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><h2 id="3-generate-your-unique-meeting-link">3: Generate your unique meeting link </h2><p>Create <code>createMeeting.html</code> and add a <code>createMeeting()</code> function<a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/dynamic-meeting-link#step-2-create-createmeetinghtml-and-add-createmeeting-function">​</a></p><p>Add a <code>&lt;script&gt;</code> tag containing <code>createMeeting()</code>, which generates a link to a new meeting, and attach the method to the <code>onclick</code> of a <code>&lt;button&gt;</code>.</p><p>Your <code>&lt;body&gt;</code> should look something like this.</p><pre><code class="language-js">&lt;body&gt;
  &lt;script&gt;
    // Function to create meeting ID
    function createMeeting() {
          let meetingId =  'xxxxyxxx'.replace(/[xy]/g, function(c) {
              var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r &amp; 0x3 | 0x8);
              return v.toString(16);
          });
          // Log and display the same shareable link: served over https,
          // pointing at meeting.html, which reads meetingId from the URL.
          console.log("https://" + window.location.host + "/meeting.html?meetingId=" + meetingId);
          document.getElementById("copyInput").value = "https://" + window.location.host + "/meeting.html?meetingId=" + meetingId;
    }

    // Function to copy the link
    function copyFunction() {
      /* Get the text field */
      var copyText = document.getElementById("copyInput");

      /* Select the text field */
      copyText.select();
      copyText.setSelectionRange(0, 99999); /* For mobile devices */

      /* Copy the text inside the text field */
      navigator.clipboard.writeText(copyText.value);
    }
  &lt;/script&gt;
  &lt;div&gt;
    &lt;button onclick="createMeeting()"&gt;Create Meeting&lt;/button&gt;
    &lt;br/&gt;
    &lt;input type="text" id="copyInput"&gt;
    &lt;button onclick="myFunction()"&gt;Copy Link&lt;/button&gt;
  &lt;/div&gt;
&lt;/body&gt;</code></pre><h2 id="4-update-to-take-the-meetingid-from-the-url%E2%80%8B">4: Update to take the meetingId from the URL.<a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/dynamic-meeting-link#step-2-update-your-meetinghtml-to-take-the-meetingid-from-the-url">​</a></h2><p>In this way, you will be able to access the meetingId from the URL, and each unique URL will work as a different room.</p><pre><code class="language-js">//...
&lt;script&gt;

   script.addEventListener("load", function (event) {
      //Get URL query parameters
      const url = new URLSearchParams(window.location.search);

      //...

      const config = {
        // ...
        meetingId: url.get("meetingId"), // Get meeting id from params.
        // ...
      };

      const meeting = new VideoSDKMeeting();
      meeting.init(config);
    });

&lt;/script&gt;

//...</code></pre><h2 id="5-run-and-test">5: Run and test</h2><p>Install an HTTP server if you don't already have one and run the server to join the meeting from the browser.<a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/dynamic-meeting-link#step-3-run-and-test">​</a></p><ul><li>Node.js</li></ul><pre><code class="language-js">$ npm install -g live-server
$ live-server --port=8000</code></pre><h4 id="find-the-npm-run-start-for-python-php-wamp-and-xampp-here">Find the equivalent run commands for Python, PHP, WAMP, and XAMPP <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/quick-start">here</a>.</h4>
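Before moving on, it can help to sanity-check the meeting-link logic from Step 3 outside the browser. This is a minimal Node.js sketch of the same `'xxxxyxxx'.replace(...)` ID pattern; the `host` argument is a stand-in for `window.location.host`, which is not available outside a page:

```javascript
// UUID-style ID pattern from createMeeting(): every "x" becomes a random
// hex digit; the "y" position becomes one of 8, 9, a, b.
function generateMeetingId() {
  return "xxxxyxxx".replace(/[xy]/g, function (c) {
    var r = (Math.random() * 16) | 0;
    var v = c === "x" ? r : (r & 0x3) | 0x8;
    return v.toString(16);
  });
}

// Build the shareable link in the shape meeting.html expects.
function buildMeetingLink(host, meetingId) {
  return "https://" + host + "/meeting.html?meetingId=" + meetingId;
}

var id = generateMeetingId();
console.log(buildMeetingLink("example.com", id));
```

Each generated ID is eight hex characters, so every copied link points to its own room.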
<h3 id="prebuild-sdk-features-list"><br>Prebuilt SDK features list</br></h3><p/><p>Below are the features of the virtual event platform. They can be easily implemented into your virtual event app. Simply set a <em>true</em> or <em>false</em> value for each feature in the <em>permissions</em> object; for reference, see the video below. </p>
<!--kg-card-begin: html-->
<iframe width="560" height="315" src="https://www.youtube.com/embed/o8DIxmT1Ubg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/>
<!--kg-card-end: html-->
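The feature toggles above all follow the boolean pattern of the `permissions` object from Step 2. As a rough sketch (the `buildPermissions` helper below is illustrative, not part of the Prebuilt SDK), you can keep conservative defaults and flip on only what a given role needs:

```javascript
// Illustrative helper: merge role-specific overrides onto conservative
// defaults. Keys mirror the permissions object from the Step 2 config.
function buildPermissions(overrides) {
  var defaults = {
    askToJoin: false,
    toggleParticipantMic: false,
    toggleParticipantWebcam: false,
    drawOnWhiteboard: true,
    toggleWhiteboard: false,
    toggleRecording: false,
    removeParticipant: false,
    endMeeting: false,
  };
  return Object.assign({}, defaults, overrides || {});
}

// A host gets moderation powers; plain attendees keep the defaults.
var hostPermissions = buildPermissions({
  toggleParticipantMic: true,
  toggleParticipantWebcam: true,
  toggleRecording: true,
  removeParticipant: true,
  endMeeting: true,
});
console.log(hostPermissions.endMeeting); // true
```

Passing the resulting object as the `permissions` field of the config keeps the role logic in one place.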
<h3 id="build-a-custom-virtual-event-platform"><br>Build a custom virtual event platform</br></h3><p/><p>With Video SDK you can create a huge number of custom features. They are very helpful because the custom features are specific to your industry niche. For example, in a <a href="https://www.theknowledgeacademy.com/courses/project-management-courses/" rel="noreferrer">project management course</a>, these features can support live collaboration, breakout discussions, and hands-on training exercises. We have kept the implementation of these features as simple as possible for our developers as well as no code developers. </p><p><strong>Interactive live streaming</strong> - <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-hlsStream">Host up to 50,000 participants</a> on your virtual event platform in a single meeting. </p><p><strong>Break out room</strong> - Utilize the breakout rooms to flexibly merge meetings or transfer hosts to participants &amp; vice versa &amp; more.</p><p><strong>Custom video track</strong> - Integrate <a href="https://www.banuba.com/facear-sdk/face-filters?utm_source=adwords&amp;utm_medium=ppc&amp;utm_campaign=Brand+USA+Canada+India&amp;utm_term=banuba&amp;hsa_tgt=kwd-325576249823&amp;hsa_grp=133935013084&amp;hsa_mt=b&amp;hsa_cam=15897989032&amp;hsa_ver=3&amp;hsa_src=g&amp;hsa_net=adwords&amp;hsa_kw=banuba&amp;hsa_acc=6834955494&amp;hsa_ad=575300001743&amp;gclid=Cj0KCQjwyYKUBhDJARIsAMj9lkGud5Fj_CV8UOxiYTPkpt9YYQWOSJVS-emzsFTBSpouX7HpNCSIm6IaAq1_EALw_wcB">Banuba</a> a Face Filter SDK that provides awesome Filters and detailed analytics such as face tracking, background subtraction, &amp; more.</p><h3 id="open-source-projects">Open source projects</h3><p/><ul>
<li><strong><a href="https://github.com/videosdk-live/videosdk-rtc-react-prebuilt-ui">Prebuilt React SDK</a> UI Kit available on GitHub for free</strong></li>
</ul>
<p>You can go to GitHub and use our Prebuilt SDK UI kit, as it is open source. This means you can use the Prebuilt video SDK for free and customize it to your needs for your virtual event platform.</p><h3 id="conclusion">Conclusion</h3><p><br>Congrats on integrating Video SDK into your virtual event app! If you wish to add functionalities like chat messaging and screen sharing, you can always check out our <a href="https://docs.videosdk.live/">documentation</a>. If you face any difficulty with the implementation, you can check out the <a href="https://github.com/videosdk-live/videosdk-rtc-javascript-sdk-example">example on GitHub</a> or connect with us on our <a href="https://discord.gg/Gpmj6eCq5u">discord community</a>.<br/></br></p>]]></content:encoded></item><item><title><![CDATA[Building or Buying Video Calling API: Comprehensive Guide]]></title><description><![CDATA[Audio and video calling trends have increased, leaving companies with a question: should they build a calling API on their own or purchase it from a reliable managed service? Let's discuss!]]></description><link>https://www.videosdk.live/blog/build-or-buy-video-calling-api</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb6a</guid><category><![CDATA[video-conferencing]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Sat, 25 Jan 2025 13:26:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/08/Blog-Thumbnail--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Blog-Thumbnail--1-.jpg" alt="Building or Buying Video Calling API: Comprehensive Guide"/><p>Communication has always been a crucial part of developing interactions with a community. With the advent of technology, face-to-face communication has also changed its conduct. 
With the emergence of the COVID-19 pandemic, audio and video calls have become a crucial interaction channel between individuals, groups, businesses, and corporate organizations across all sectors. The questions arise, <br/></p><ul><li><em>How did these apps shift to video calling so rapidly?</em></li><li><em>Do they build it themselves, or do they source it from a development provider? </em><br/></li></ul><blockquote>Real-time communication is currently in increasing demand. This blog helps you decide whether a company should build its own video and audio calling API or count on a reliable managed service.</blockquote><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Table.png" class="kg-image" alt="Building or Buying Video Calling API: Comprehensive Guide" loading="lazy" width="1432" height="678"/></figure><h2 id="building-audio-and-video-calling-api">Building audio and video calling API</h2><p>Developing an audio and video calling API demands utmost care, as it involves a high amount of R&amp;D. A company developing these APIs has to build vast technical support and observe technology changes &amp; advancements to keep its platform updated at all times.</p><p>Companies exclusively planning to distribute these APIs as a <em>platform</em> should invest in building them, as the APIs serve their core purpose and must keep pace with technological advancements. These companies dedicate considerable resources and knowledge to the development of these APIs, making the work adaptive. 
Building an audio and video calling API with <a href="https://www.mavlers.com/salesforce-marketing-cloud-services/" rel="noreferrer">Salesforce Marketing Cloud services</a> is an interesting and slightly unconventional integration, as Salesforce Marketing Cloud is primarily focused on marketing automation and customer engagement.</p><h3 id="observations-while-developing-apis-on-your-own">Observations while developing APIs on your own</h3><p><strong>A well-defined Budget</strong></p><p>A company needs to invest a huge amount of money, as it involves high development costs concerning <a href="https://uxpilot.ai/website-design-templates" rel="noreferrer">UI/UX design</a>, database management, cloud services, scaling, security, etc. It also includes costs of infrastructure, hosting, maintenance, salaries to personnel, and more, increasing the overall budget.</p><p><strong>Manpower Costs</strong></p><p>Developing audio and video calling APIs involves a huge development team. It requires a skilled force to work on specialized device platforms like Android, iOS, web, etc. Instead of hiring manpower for these activities, a company can opt for a purchase and keep that team focused on the company's primary goals. </p><p><strong>Time Investment</strong></p><p>Developing an API involves a huge time investment. Building in-house also carries opportunity costs, and above all, the chances of successful development in one go are a real concern for the company.</p><p><strong>Scalability</strong></p><p>When a company develops an audio and video calling API on its own, scalability is hard to achieve. Open-source tooling generally scales only up to 50-100 participants in one video call. For a huge participant count in one meeting, managed services make the difference. 
They can scale to 5,000+ participants at once.</p><p><strong>Change Resilience</strong></p><p>A company developing APIs as a feature of its application often lacks resilience to change: it needs to keep up with new technologies and upgrades regularly. The problem arises when companies do not keep pace with these changing factors. A managed service, by contrast, handles upgrades and changes regularly.</p><h2 id="buying-audio-and-video-calling-api">Buying audio and video calling API</h2><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Buy-video-calling-API.jpg" class="kg-image" alt="Building or Buying Video Calling API: Comprehensive Guide" loading="lazy" width="2000" height="400"/></figure><p>When a company decides to buy <a href="https://www.videosdk.live/audio-video-conferencing">audio and video calling APIs</a>, it typically looks to integrate these APIs as a feature of its application, not as a dedicated platform. Here, buying APIs is a more defined approach than building them: the provider's developers are fully acquainted with the latest technologies and innovations, delivering dedicated security, customization, and support. Companies for which calling APIs are a secondary function should invest in buying them.</p><p>Corporations and organizations engaged in businesses like health, education, e-commerce, retail, and other similar industries should always choose to buy APIs from managed services. These services are easy to integrate and also provide security, customization, upgrades, and technical support for the APIs. They are reliable and carry low risk.</p><h3 id="observations-while-purchasing-apis">Observations while purchasing APIs</h3><p><strong>Cost-effective</strong></p><p>Readymade audio and video calling APIs include features like call encryption, HIPAA compliance, spatial audio, chat support, and other elements. These costs are huge if a company develops every single one of them on its own. 
When it purchases APIs instead, the overall effective cost is lower, making purchased APIs well worth the price for a company.</p><p><strong>Saves time and effort</strong></p><p>Buying an audio and video calling API from a managed service saves a lot of time and development effort for a company. It merely needs to meet the technical requirements for integration, investing very little to integrate the purchased APIs into its application.</p><p><strong>Customization</strong></p><p>A purchased API is often designed with customization as a priority. A company can easily alter the design and the contents as per its requirements without digging deep into the low-level specifications. By contrast, a company that designs an API on its own usually lacks customization, as it incurs huge development costs. </p><p><strong>Secured communication</strong></p><p>A managed video calling service comes with end-to-end encryption. This means that communication between two people or within a group is fully secured, and no one other than the participants can access it. A company building in-house generally neglects security in the initial phase of development due to higher costs.</p><p><strong>Scalability</strong></p><p>Managed services can scale to more than 5,000 participants in one meeting, whereas open-source solutions rarely scale beyond 100-150 participants. A company that has just begun its corporate journey should rely on these services for scalable support. </p><p><strong>Support</strong></p><p>Purchasing APIs comes with support and consultancy for the company, from help in customizing the desired API to upgrades as technology changes. Some services also provide 24x7 technical support.</p><p><strong>Prevents platform abuse</strong></p><p>A company is protected from platform abuse by purchasing video and audio calling APIs from managed services. 
The services are built with security that prevents any sort of trouble, malpractice, or misuse of the APIs in use.<br/></p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/build-vs-buy-1.jpg" class="kg-image" alt="Building or Buying Video Calling API: Comprehensive Guide" loading="lazy" width="2000" height="400"/></figure><h2 id="the-final-verdict">The Final Verdict</h2><p>Audio and video calling is often the most attractive feature of an application. Therefore, a company that relies on real-time communication with its clients and consumers wants to deliver it in the best possible way. Beyond looks, the security and quality of a meeting are the most pressing concerns. A company can achieve all of these on its own, but at huge cost. Managed services offer all these features, and they are more secure than what a company typically builds in-house.<br/></p><blockquote><a href="https://www.videosdk.live/">VideoSDK  </a>is a platform that develops audio and video calling APIs for clients across industries, easing their work through simple integrations along with upgrades, customization, and security. We support you in your dedicated work. 
Sign in to the <a href="https://app.videosdk.live/">Dashboard</a> and start building your desired audio and video calling API.<br/></blockquote>]]></content:encoded></item><item><title><![CDATA[Live Webinar - Create a Low Latency Live Streaming App in React Native ⚛️]]></title><description><![CDATA[Register for this workshop to build the sample app or integrate low latency live streaming within your React Native app.]]></description><link>https://www.videosdk.live/blog/create-a-low-latency-live-streaming-app-in-react-native</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb77</guid><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Sat, 25 Jan 2025 08:58:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/11/React-native-Live-streaming-1.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: html--><iframe width="560" height="315" src="https://www.youtube.com/embed/BQ1vWEC5WrE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/><!--kg-card-end: html--><img src="http://assets.videosdk.live/static-assets/ghost/2021/11/React-native-Live-streaming-1.png" alt="Live Webinar - Create a Low Latency Live Streaming App in React Native ⚛️"/><p><br>Register for this workshop to build the sample app or integrate low latency live streaming within your React Native app.<br><br>Tuesday, 16 Nov 06:00 PM<br><br>
In the session, we covered:<br><br>➟ Introduction to <a href="https://www.linkedin.com/feed/hashtag/?keywords=videosdk&amp;highlightedUpdateUrns=urn%3Ali%3Aactivity%3A6864207216082206720">#VideoSDK</a><br>➟ How the React Native Video SDK works<br>➟ Integration of the React Native SDK<br>➟ Go Live<br>➟ Various use cases<br>➟ Q&amp;A session and more<br><br>The <a href="https://videosdk.live">Video SDK</a> React Native SDK is available now to help you quickly add voice, video, and live broadcasting to your mobile applications on iOS and Android. You can get the <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example">Video SDK React Native SDK here on GitHub.</a><br><br>This code sample demonstrates a one-to-one and group video call application built with the <a href="https://docs.videosdk.live/docs/realtime-communication/sdk-reference/react-native-sdk/setup" rel="nofollow">Video SDK RTC React Native SDK</a> and the <a href="https://docs.videosdk.live/docs/realtime-communication/sdk-reference/react-sdk/setup" rel="nofollow">Video SDK RTC React SDK</a>.</p><h3 id="features">Features</h3><ul><li>Built for a serverless video calling experience on Android and iOS.</li><li>Scale up to 5,000 participants with low code.</li><li>10,000 free minutes every month.</li><li>Video API with real-time audio, video, and data streams.</li><li>Support for 5,000+ participants.</li><li>Chat support with rich media.</li><li>Screen sharing in HD and Full HD.</li><li>Play real-time video in a meeting.</li><li>Connect to social media such as Facebook and YouTube (RTMP out support).</li><li>Intelligent speaker switch.</li><li>Record your meetings in the cloud.</li><li>Customize the UI and build other rich features with our new data streams, such as whiteboard, polls, Q&amp;A, etc.</li></ul><figure class="kg-card kg-image-card"><img
src="http://assets.videosdk.live/static-assets/ghost/2021/11/Untitled-design--32-.png" class="kg-image" alt="Live Webinar - Create a Low Latency Live Streaming App in React Native ⚛️" loading="lazy" width="891" height="194" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2021/11/Untitled-design--32-.png 600w, http://assets.videosdk.live/static-assets/ghost/2021/11/Untitled-design--32-.png 891w" sizes="(min-width: 720px) 720px"/></figure>]]></content:encoded></item><item><title><![CDATA[Build an Android Live Streaming Video Chat App Using Kotlin?]]></title><description><![CDATA[In this article, we'll guide you through the process of building an Android(Kotlin) live-streaming app using VideoSDK. ]]></description><link>https://www.videosdk.live/blog/android-live-streaming</link><guid isPermaLink="false">642c269d2c7661a49f381e90</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Fri, 24 Jan 2025 13:33:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/HTTP-Live-Streaming-Kotlin-2.png" medium="image"/><content:encoded><![CDATA[<h2 id="tldr">tl;dr</h2>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/04/HTTP-Live-Streaming-Kotlin-2.png" alt="Build an Android Live Streaming Video Chat App Using Kotlin?"/><p><a href="https://www.videosdk.live/blog/what-is-http-live-streaming"><strong>HTTP Live Streaming</strong></a> (HLS) is a widely adopted streaming protocol developed by Apple Inc. to transmit audio and video content over the internet. It operates on a client-server model, delivering a seamless and adaptive streaming experience. HLS has gained popularity due to its compatibility with various devices and browsers, making it an ideal choice for content delivery.</p><p>You've likely consumed a fair amount of live streams if you're an avid consumer of online content in the present day. With live streaming becoming the preferred source of learning and entertainment for many, it's hard to miss out on live broadcasts, whether it's for attending online classes, following sports events, watching fitness lessons, or engaging with celebrities.</p><p>If you're a developer looking to build a top-notch live streaming experience in your Android app, this article is for you.</p><h2 id="why-videosdk-is-your-go-to-for-live-streaming">Why VideoSDK is Your Go-To for Live Streaming?</h2><p><a href="https://www.videosdk.live/">VideoSDK </a>is a perfect choice for those seeking a live-streaming platform that offers the necessary features to create high-quality streams. The platform supports screen sharing and real-time messaging, allows broadcasters to invite audience members to the stage, and supports 100 participants, ensuring that your live streams are interactive and engaging. With VideoSDK, you can also use your own customized layout template for live streaming. 
Additionally, VideoSDK caters to users in the United Kingdom, Australia, USA, India, UAE, Canada, and Nigeria.</p><p>In terms of integration, VideoSDK offers a simple and quick integration process, allowing you to seamlessly integrate live streaming into your app. This ensures that you can enjoy the benefits of live streaming without any technical difficulties or lengthy implementation processes.</p><p>Furthermore, VideoSDK is <a href="https://www.videosdk.live/pricing">budget-friendly</a>, making it an affordable option for businesses of all sizes. You can enjoy the benefits of a feature-rich live-streaming platform without breaking the bank, making it an ideal choice for startups and small businesses.</p><h2 id="build-a-live-streaming-android-app">Build a live-streaming Android App</h2><p>The steps below give you all the information you need to quickly build an interactive live-streaming <a href="https://uxpilot.ai/mobile-design-templates/android" rel="noreferrer">android app</a>. Please follow along carefully, and if you have any trouble, let us know right away on <a href="https://discord.gg/f2WsNDN9S5">Discord</a>, and we will be happy to help you.</p><h3 id="prerequisite">Prerequisites</h3><ul><li><a href="https://www.oracle.com/in/java/technologies/downloads/">Java Development Kit</a>.</li><li><a href="https://developer.android.com/studio">Android Studio 3.0</a> or later.</li><li>Android SDK API Level 21 or higher.</li><li>A mobile device that runs Android 5.0 or later.</li><li>A token from the VideoSDK <a href="https://app.videosdk.live/api-keys">dashboard</a></li></ul><h3 id="create-a-new-project">Create a new project</h3><p>In Android Studio, create a Phone and Tablet Android project with an Empty Activity.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/android_new_project.png" class="kg-image" alt="Build an Android Live Streaming Video Chat App Using Kotlin?" 
loading="lazy" width="899" height="652"/></figure><p>The next step is to provide a name. We have set the name as HLSDemo.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/Screenshot-from-2023-04-05-12-18-04.png" class="kg-image" alt="Build an Android Live Streaming Video Chat App Using Kotlin?" loading="lazy" width="887" height="641"/></figure><h2 id="integrate-videosdk">Integrate VideoSDK</h2><p>Add the repository to the project's <code>settings.gradle</code> file.</p>
<pre><code class="language-groovy">dependencyResolutionManagement {
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}</code></pre><p>Add the following dependency to your app's <code>build.gradle</code>.</p><pre><code class="language-groovy">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.26'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // other app dependencies
  }</code></pre><blockquote>If your project has set <code>android.useAndroidX=true</code>, then set <code>android.enableJetifier=true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflicts.</blockquote><h3 id="add-permissions-to-your-project">Add permissions to your project</h3><p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-xml">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;</code></pre><h3 id="structure-of-project">Structure of project</h3><p>We will create two activities and two fragments. First activity is <code>JoinActivity</code> , which allows users to create/join the meeting, and another one is <code>MeetingActivity</code> , which will initialize meetings and replace <code>mainLayout</code> with <code>SpeakerFragment</code> or with <code>ViewerFragment</code> according to the user's choice.</p><p>Our project structure would look like this.</p><pre><code class="language-js">  app
   ├── java
   │    ├── packagename
   │         ├── JoinActivity
   │         ├── MeetingActivity
   │         ├── SpeakerAdapter
   │         ├── SpeakerFragment
   |         ├── ViewerFragment
   ├── res
   │    ├── layout
   │    │    ├── activity_join.xml
   │    │    ├── activity_meeting.xml
   |    |    ├── fragment_speaker.xml
   |    |    ├── fragment_viewer.xml
   │    │    ├── item_remote_peer.xml</code></pre><blockquote>Set <code>JoinActivity</code> as the launcher activity.</blockquote><h3 id="app-architecture">App Architecture</h3>
<!--kg-card-begin: html-->
<center>

<img src="https://cdn.videosdk.live/website-resources/docs-resources/android_ils_quickstart_app_structure.png" alt="Build an Android Live Streaming Video Chat App Using Kotlin?"/>

</center>
<!--kg-card-end: html-->
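<p>In code terms, the diagram above boils down to a single decision made once the meeting is joined: which fragment to show for the local participant's mode. The helper below is only an illustrative sketch (it is not part of the SDK); the fragment tags match the ones used later in <code>MeetingActivity</code>.</p><pre><code class="language-kotlin">// Illustrative helper: which fragment tag MeetingActivity uses for a join mode.
// "CONFERENCE" participants host the stream; "VIEWER" participants watch it.
fun fragmentTagForMode(mode: String): String =
    when (mode) {
        "CONFERENCE" -&gt; "MainFragment"    // shows SpeakerFragment
        "VIEWER" -&gt; "viewerFragment"      // shows ViewerFragment
        else -&gt; error("Unknown mode: $mode")
    }</code></pre>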
<h2 id="essential-steps-for-building-the-video-calling-functionality">Essential Steps for Building the Video Calling Functionality</h2><h3 id="step-1-creating-joining-screen">Step 1: Creating Joining Screen</h3><p>Create a new Activity named <code>JoinActivity</code></p><h4 id="11-creating-ui-for-joining-the-screen">1.1 Creating UI for Joining the Screen</h4>
<p>The joining screen will include:</p><ol><li><strong>Create Button</strong> - Creates a new meeting for you.</li><li><strong>TextField for Meeting ID</strong> - Holds the meeting ID you want to join.</li><li><strong>Join as Host Button</strong> - Joins the meeting as a <strong>host</strong> with the <code>meetingId</code> you provided.</li><li><strong>Join as Viewer Button</strong> - Joins the meeting as a <strong>viewer</strong> with the <code>meetingId</code> you provided.</li></ol><p>In the <code>/app/res/layout/activity_join.xml</code> file, replace the content with the following.</p><pre><code class="language-xml">&lt;LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/createorjoinlayout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@color/black"
    android:gravity="center"
    android:orientation="vertical"&gt;

    &lt;Button
        android:id="@+id/btnCreateMeeting"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Create Meeting"
        android:textAllCaps="false" /&gt;

    &lt;TextView
        android:id="@+id/tvText"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:paddingVertical="5sp"
        android:text="OR"
        android:textColor="@color/white"
        android:textSize="20sp" /&gt;

    &lt;EditText
        android:id="@+id/etMeetingId"
        android:theme="@android:style/Theme.Holo"
        android:layout_width="250dp"
        android:layout_height="wrap_content"
        android:hint="Enter Meeting Id"
        android:textColor="@color/white"
        android:textColorHint="@color/white" /&gt;

    &lt;Button
        android:id="@+id/btnJoinHostMeeting"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="8sp"
        android:text="Join as Host"
        android:textAllCaps="false" /&gt;

    &lt;Button
        android:id="@+id/btnJoinViewerMeeting"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Join as Viewer"
        android:textAllCaps="false" /&gt;

&lt;/LinearLayout&gt;</code></pre><h4 id="12-integration-of-create-meeting-api">1.2 Integration of Create Meeting API</h4>
<ol>
<li>Create a field <code>sampleToken</code> in <code>JoinActivity</code> to hold the token generated from the <a href="https://app.videosdk.live/api-keys">VideoSDK dashboard</a>. This token will be used in the VideoSDK config as well as for generating the meetingId.</li>
</ol>
<pre><code class="language-kotlin">class JoinActivity : AppCompatActivity() {

  //Replace with the token you generated from the VideoSDK Dashboard
  private var sampleToken = ""

  override fun onCreate(savedInstanceState: Bundle?) {
    //...
  }
}</code></pre><ol start="2">
<li>
<p>On the <strong>Join as Host</strong> button's <code>onClick</code> event, we navigate to <code>MeetingActivity</code> with the token, meetingId, and mode set to <code>CONFERENCE</code>.</p>
</li>
<li>
<p>On the <strong>Join as Viewer</strong> button's <code>onClick</code> event, we navigate to <code>MeetingActivity</code> with the token, meetingId, and mode set to <code>VIEWER</code>.</p>
</li>
</ol>
<pre><code class="language-kotlin">class JoinActivity : AppCompatActivity() {

   //Replace with the token you generated from the VideoSDK Dashboard
   private var sampleToken = "" 

   override fun onCreate(savedInstanceState: Bundle?) {
      super.onCreate(savedInstanceState)
      setContentView(R.layout.activity_join)

      val btnCreate = findViewById&lt;Button&gt;(R.id.btnCreateMeeting)
      val btnJoinHost = findViewById&lt;Button&gt;(R.id.btnJoinHostMeeting)
      val btnJoinViewer = findViewById&lt;Button&gt;(R.id.btnJoinViewerMeeting)
      val etMeetingId = findViewById&lt;EditText&gt;(R.id.etMeetingId)

      // create meeting and join as Host
      btnCreate.setOnClickListener {
          createMeeting(
              sampleToken
          )
      }

      // Join as Host
      btnJoinHost.setOnClickListener {
          val intent = Intent(this@JoinActivity, MeetingActivity::class.java)
          intent.putExtra("token", sampleToken)
          intent.putExtra("meetingId", etMeetingId.text.toString().trim { it &lt;= ' ' })
          intent.putExtra("mode", "CONFERENCE")
          startActivity(intent)
      }

      // Join as Viewer
      btnJoinViewer.setOnClickListener {
          val intent = Intent(this@JoinActivity, MeetingActivity::class.java)
          intent.putExtra("token", sampleToken)
          intent.putExtra("meetingId", etMeetingId.text.toString().trim { it &lt;= ' ' })
          intent.putExtra("mode", "VIEWER")
          startActivity(intent)
      }
    }

    private fun createMeeting(token: String) {
      // we will explore this method in the next step
    }</code></pre><ol start="4">
<li>For the <strong>Create</strong> button, the <code>createMeeting</code> method generates a meetingId by calling the VideoSDK API and then navigates to <code>MeetingActivity</code> with the token, the generated meetingId, and mode set to <code>CONFERENCE</code>.</li>
</ol>
<pre><code class="language-kotlin">class JoinActivity : AppCompatActivity() {
  //...onCreate
 private fun createMeeting(token: String) {
  // we will make an API call to VideoSDK Server to get a roomId
  AndroidNetworking.post("https://api.videosdk.live/v2/rooms")
      .addHeaders("Authorization", token) //we will pass the token in the Headers
      .build()
      .getAsJSONObject(object : JSONObjectRequestListener {
          override fun onResponse(response: JSONObject) {
            try {
              // response will contain `roomId`
              val meetingId = response.getString("roomId")

              // starting the MeetingActivity with received roomId and our sampleToken
              val intent = Intent(this@JoinActivity, MeetingActivity::class.java)
              intent.putExtra("token", sampleToken)
              intent.putExtra("meetingId", meetingId)
              intent.putExtra("mode", "CONFERENCE")
              startActivity(intent)
            } catch (e: JSONException) {
                e.printStackTrace()
            }
          }

          override fun onError(anError: ANError) {
            anError.printStackTrace()
            Toast.makeText(this@JoinActivity, anError.message, Toast.LENGTH_SHORT)
                .show()
          }
      })
  }
}</code></pre><ol start="5">
<li>Our app is built around audio and video communication, so we need to request the <code>RECORD_AUDIO</code> and <code>CAMERA</code> runtime permissions. We will implement the permission logic in <code>JoinActivity</code>.</li>
</ol>
<pre><code class="language-kotlin">class JoinActivity : AppCompatActivity() {
  companion object {
    private const val PERMISSION_REQ_ID = 22
    private val REQUESTED_PERMISSIONS = arrayOf(
        Manifest.permission.RECORD_AUDIO,
        Manifest.permission.CAMERA
    )
  }

  private fun checkSelfPermission(permission: String, requestCode: Int) {
    if (ContextCompat.checkSelfPermission(this, permission) !=
        PackageManager.PERMISSION_GRANTED
    ) {
        ActivityCompat.requestPermissions(this, REQUESTED_PERMISSIONS, requestCode)
    }
  }

  override fun onCreate(savedInstanceState: Bundle?) {
    //... button listeners
    checkSelfPermission(REQUESTED_PERMISSIONS[0], PERMISSION_REQ_ID)
    checkSelfPermission(REQUESTED_PERMISSIONS[1], PERMISSION_REQ_ID)
  }
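
  // NOTE: not part of the original guide - an optional, hedged sketch of
  // handling the user's response to the permission dialog. Without
  // RECORD_AUDIO and CAMERA the SDK cannot capture media, so you may want
  // to surface a message if either permission is denied.
  override fun onRequestPermissionsResult(
      requestCode: Int, permissions: Array&lt;out String&gt;, grantResults: IntArray
  ) {
      super.onRequestPermissionsResult(requestCode, permissions, grantResults)
      if (requestCode == PERMISSION_REQ_ID &amp;&amp;
          grantResults.any { it != PackageManager.PERMISSION_GRANTED }
      ) {
          Toast.makeText(this, "Audio and camera permissions are required", Toast.LENGTH_SHORT).show()
      }
  }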
}</code></pre><blockquote>You will get an <code>Unresolved reference: MeetingActivity</code> error, but don't worry. It will be resolved automatically once you create <code>MeetingActivity</code>.</blockquote><h3 id="step-2-creating-meeting-screen">Step 2: Creating Meeting Screen</h3><p>Create a new Activity named <code>MeetingActivity</code>.</p><h4 id="21-creating-the-ui-for-the-meeting-screen">2.1 Creating the UI for the Meeting Screen</h4>
<p>In the <code>/app/res/layout/activity_meeting.xml</code> file, replace the content with the following.</p><pre><code class="language-xml">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/mainLayout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@color/black"
    tools:context=".MeetingActivity"&gt;

    &lt;TextView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:gravity="center"
        android:text="Creating a meeting for you"
        android:textColor="@color/white"
        android:textFontWeight="700"
        android:textSize="20sp" /&gt;

&lt;/RelativeLayout&gt;</code></pre><h4 id="22-initializing-the-meeting">2.2 Initializing the Meeting</h4>
<p>After getting the token, meetingId, and mode from <code>JoinActivity</code>:</p><ol>
<li>Initialize <strong>VideoSDK</strong>.</li>
<li>Configure <strong>VideoSDK</strong> with the token.</li>
<li>Initialize the meeting with the required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code>, <code>mode</code>, and more.</li>
<li>Join the room with the <code>meeting.join()</code> method.</li>
<li>Add a <code>MeetingEventListener</code> to listen for the <strong>Meeting Join</strong> event.</li>
<li>Check the mode of <code>localParticipant</code>: if the mode is <strong>CONFERENCE</strong>, replace mainLayout with <code>SpeakerFragment</code>; otherwise, replace it with <code>ViewerFragment</code>.</li>
</ol>
<pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
  var meeting: Meeting? = null
      private set

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)

    val meetingId = intent.getStringExtra("meetingId")
    val token = intent.getStringExtra("token")
    val mode = intent.getStringExtra("mode")
    val localParticipantName = "John Doe"
    val streamEnable = mode == "CONFERENCE"

    // initialize VideoSDK
    VideoSDK.initialize(applicationContext)

    // Configure VideoSDK with the token
    VideoSDK.config(token)

    // Initialize VideoSDK Meeting
    meeting = VideoSDK.initMeeting(
      this@MeetingActivity, meetingId, localParticipantName,
      streamEnable, streamEnable, null, null, false, null, null)

    // join Meeting
    meeting!!.join()

    // if mode is CONFERENCE then replace mainLayout with SpeakerFragment, otherwise with ViewerFragment
    meeting!!.addEventListener(object : MeetingEventListener() {
      override fun onMeetingJoined() {
          if (meeting != null) {
            if (mode == "CONFERENCE") {
              // pin the local participant
              meeting!!.localParticipant.pin("SHARE_AND_CAM")
              supportFragmentManager
                  .beginTransaction()
                  .replace(R.id.mainLayout, SpeakerFragment(), "MainFragment")
                  .commit()
              } else if (mode == "VIEWER") {
                supportFragmentManager
                    .beginTransaction()
                    .replace(R.id.mainLayout, ViewerFragment(), "viewerFragment")
                    .commit()
              }
          }
      }
    })
  }
}</code></pre><h3 id="step-3-implement-speakerview">Step 3: Implement SpeakerView</h3><p>After successfully entering the meeting, it's time to render the speaker's view and manage controls such as toggling the webcam and mic, starting/stopping HLS, and leaving the meeting.</p><ul><li>Create a new fragment named <code>SpeakerFragment</code>.</li><li>In the <code>/app/res/layout/fragment_speaker.xml</code> file, replace the content with the following.</li></ul><pre><code class="language-xml">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@color/black"
    android:gravity="center"
    android:orientation="vertical"
    tools:context=".SpeakerFragment"&gt;

    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginVertical="8sp"
        android:paddingHorizontal="10sp"&gt;

        &lt;TextView
            android:id="@+id/tvMeetingId"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:text="Meeting Id : "
            android:textColor="@color/white"
            android:textSize="18sp"
            android:layout_weight="3"/&gt;

        &lt;Button
            android:id="@+id/btnLeave"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:text="Leave"
            android:textAllCaps="false"
            android:layout_weight="1"/&gt;

    &lt;/LinearLayout&gt;

    &lt;TextView
        android:id="@+id/tvHlsState"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Current HLS State : NOT_STARTED"
        android:textColor="@color/white"
        android:textSize="18sp" /&gt;

    &lt;androidx.recyclerview.widget.RecyclerView
        android:id="@+id/rvParticipants"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_marginVertical="10sp"
        android:layout_weight="1" /&gt;

    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:gravity="center"&gt;

        &lt;Button
            android:id="@+id/btnHLS"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Start HLS"
            android:textAllCaps="false" /&gt;

        &lt;Button
            android:id="@+id/btnWebcam"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginHorizontal="5sp"
            android:text="Toggle Webcam"
            android:textAllCaps="false" /&gt;

        &lt;Button
            android:id="@+id/btnMic"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Toggle Mic"
            android:textAllCaps="false" /&gt;

    &lt;/LinearLayout&gt;

&lt;/LinearLayout&gt;</code></pre><p>Now, let's set listeners for the buttons, allowing the participant to toggle media.</p>
<pre><code class="language-kotlin">class SpeakerFragment : Fragment() {
  private var micEnabled = true
  private var webcamEnabled = true
  private var hlsEnabled = false
  private var btnMic: Button? = null
  private var btnWebcam: Button? = null
  private var btnHls: Button? = null
  private var btnLeave: Button? = null
  private var tvMeetingId: TextView? = null
  private var tvHlsState: TextView? = null
  override fun onAttach(context: Context) {
    super.onAttach(context)
    mContext = context
    if (context is Activity) {
      mActivity = context
      // getting meeting object from Meeting Activity
      meeting = (mActivity as MeetingActivity?)!!.meeting
    }
  }

  override fun onCreateView(
      inflater: LayoutInflater, container: ViewGroup?,
      savedInstanceState: Bundle?
  ): View? {
    // Inflate the layout for this fragment
    val view = inflater.inflate(R.layout.fragment_speaker, container, false)
    btnMic = view.findViewById(R.id.btnMic)
    btnWebcam = view.findViewById(R.id.btnWebcam)
    btnHls = view.findViewById(R.id.btnHLS)
    btnLeave = view.findViewById(R.id.btnLeave)
    tvMeetingId = view.findViewById(R.id.tvMeetingId)
    tvHlsState = view.findViewById(R.id.tvHlsState)
    if (meeting != null) {
      tvMeetingId!!.text = "Meeting Id : " + meeting!!.meetingId
      setActionListeners()
    }
    return view
  }

  private fun setActionListeners() {}

  companion object {
    private var mActivity: Activity? = null
    private var mContext: Context? = null
    private var meeting: Meeting? = null
  }
}</code></pre><pre><code class="language-kotlin">private fun setActionListeners() {
    btnMic!!.setOnClickListener {
      if (micEnabled) {
        meeting!!.muteMic()
        Toast.makeText(mContext, "Mic Muted", Toast.LENGTH_SHORT).show()
      } else {
        meeting!!.unmuteMic()
        Toast.makeText(
            mContext,
            "Mic Enabled",
            Toast.LENGTH_SHORT
        ).show()
      }
      micEnabled = !micEnabled
    }
    btnWebcam!!.setOnClickListener {
      if (webcamEnabled) {
        meeting!!.disableWebcam()
        Toast.makeText(
            mContext,
            "Webcam Disabled",
            Toast.LENGTH_SHORT
        ).show()
      } else {
        meeting!!.enableWebcam()
        Toast.makeText(
            mContext,
            "Webcam Enabled",
            Toast.LENGTH_SHORT
        ).show()
      }
      webcamEnabled = !webcamEnabled
    }
    btnLeave!!.setOnClickListener { meeting!!.leave() }
    btnHls!!.setOnClickListener {
      if (!hlsEnabled) {
        val config = JSONObject()
        val layout = JSONObject()
        JsonUtils.jsonPut(layout, "type", "SPOTLIGHT")
        JsonUtils.jsonPut(layout, "priority", "PIN")
        JsonUtils.jsonPut(layout, "gridSize", 4)
        JsonUtils.jsonPut(config, "layout", layout)
        JsonUtils.jsonPut(config, "orientation", "portrait")
        JsonUtils.jsonPut(config, "theme", "DARK")
        JsonUtils.jsonPut(config, "quality", "high")
        meeting!!.startHls(config)
      } else {
        meeting!!.stopHls()
      }
    }
  }</code></pre><p>After adding the button listeners, let's add a <code>MeetingEventListener</code> to the meeting and remove all listeners in the <code>onDestroy()</code> method.</p>
<pre><code class="language-kotlin">class SpeakerFragment : Fragment() {

  override fun onCreateView(
      inflater: LayoutInflater, container: ViewGroup?,
      savedInstanceState: Bundle?
  ): View? {
    //...
    if (meeting != null) {
      //...
      // add Listener to the meeting
      meeting!!.addEventListener(meetingEventListener)
    }
    return view
  }

  private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
      override fun onMeetingLeft() {
        // unpin the local participant
        meeting!!.localParticipant.unpin("SHARE_AND_CAM")
        if (isAdded) {
          val intents = Intent(mContext, JoinActivity::class.java)
          intents.addFlags(
              Intent.FLAG_ACTIVITY_NEW_TASK
                      or Intent.FLAG_ACTIVITY_CLEAR_TOP or Intent.FLAG_ACTIVITY_CLEAR_TASK
          )
          startActivity(intents)
          mActivity!!.finish()
        }
      }

      @RequiresApi(api = Build.VERSION_CODES.P)
      override fun onHlsStateChanged(HlsState: JSONObject) {
        if (HlsState.has("status")) {
          try {
              tvHlsState!!.text = "Current HLS State : " + HlsState.getString("status")
              if (HlsState.getString("status") == "HLS_STARTED") {
                hlsEnabled = true
                btnHls!!.text = "Stop HLS"
              }
              if (HlsState.getString("status") == "HLS_STOPPED") {
                hlsEnabled = false
                btnHls!!.text = "Start HLS"
              }
            } catch (e: JSONException) {
                e.printStackTrace()
            }
        }
      }
  }

  override fun onDestroy() {
      mContext = null
      mActivity = null
      if (meeting != null) {
          meeting!!.removeAllListeners()
          meeting = null
      }
      super.onDestroy()
  }
}</code></pre><p>The next step is to render the speaker's view. With <code>RecyclerView</code>, we will display a list of participants who joined the meeting as a <code>host</code>.</p>
<p>Create a new layout for the participant view named <code>item_remote_peer.xml</code> in the <code>res/layout</code> folder.</p>
<pre><code class="language-js">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="200dp"
    android:background="@color/cardview_dark_background"
    tools:layout_height="200dp"&gt;

    &lt;live.videosdk.rtc.android.VideoView
        android:id="@+id/participantView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:visibility="gone" /&gt;

    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom"
        android:background="#99000000"
        android:orientation="horizontal"&gt;

        &lt;TextView
            android:id="@+id/tvName"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:gravity="center"
            android:padding="4dp"
            android:textColor="@color/white" /&gt;

    &lt;/LinearLayout&gt;

&lt;/FrameLayout&gt;</code></pre><p>Create a RecyclerView adapter named <code>SpeakerAdapter</code> to show the participant list, and a <code>PeerViewHolder</code> class inside the adapter that extends <code>RecyclerView.ViewHolder</code>.</p>
<pre><code class="language-js">class SpeakerAdapter(private val meeting: Meeting) :
    RecyclerView.Adapter&lt;SpeakerAdapter.PeerViewHolder?&gt;() {
    private var participantList: MutableList&lt;Participant&gt; = ArrayList()

    init {
      updateParticipantList()
      // adding Meeting Event listener to get the participant join/leave event in the meeting.
      meeting.addEventListener(object : MeetingEventListener() {
        override fun onParticipantJoined(participant: Participant) {
          // check whether the participant joined as a Host/Speaker
          if (participant.mode == "CONFERENCE") {
              // pin the participant
              participant.pin("SHARE_AND_CAM")
              // add participant in participantList
              participantList.add(participant)
          }
          notifyDataSetChanged()
        }

        override fun onParticipantLeft(participant: Participant) {
          // find the participant in the list by id
          var pos = -1
          for (i in participantList.indices) {
              if (participantList[i].id == participant.id) {
                  pos = i
                  break
              }
          }
          if (pos &gt;= 0) {
              // unpin participant who left the meeting
              participant.unpin("SHARE_AND_CAM")
              // remove participant from participantList
              participantList.removeAt(pos)
              notifyItemRemoved(pos)
          }
        }
      })
    }

    private fun updateParticipantList() {
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PeerViewHolder {
    }

    override fun onBindViewHolder(holder: PeerViewHolder, position: Int) {
    }

    override fun getItemCount(): Int {
    }

    class PeerViewHolder(view: View) : RecyclerView.ViewHolder(view) {
    }
}</code></pre><pre><code class="language-js">private fun updateParticipantList() {
      // adding the local participant(You) to the list
      participantList.add(meeting.localParticipant)

      // adding participants who joined as Host/Speaker
      for (participant in meeting.participants.values) {
        if (participant.mode == "CONFERENCE") {
            // pin the participant
            participant.pin("SHARE_AND_CAM")
            // add participant in participantList
            participantList.add(participant)
        }
      }
    }</code></pre><pre><code class="language-js">override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PeerViewHolder {
      return PeerViewHolder(
          LayoutInflater.from(parent.context).inflate(R.layout.item_remote_peer, parent, false)
      )
    }

    override fun onBindViewHolder(holder: PeerViewHolder, position: Int) {
      val participant = participantList[position]
      holder.tvName.text = participant.displayName

      // adding the initial video stream for the participant into the 'VideoView'
      for ((_, stream) in participant.streams) {
        if (stream.kind.equals("video", ignoreCase = true)) {
          holder.participantView.visibility = View.VISIBLE
          val videoTrack = stream.track as VideoTrack
          holder.participantView.addTrack(videoTrack)
          break
        }
      }

      // add a listener to the participant to show or hide their video stream when it starts or stops
      participant.addEventListener(object : ParticipantEventListener() {
        override fun onStreamEnabled(stream: Stream) {
          if (stream.kind.equals("video", ignoreCase = true)) {
            holder.participantView.visibility = View.VISIBLE
            val videoTrack = stream.track as VideoTrack
            holder.participantView.addTrack(videoTrack)
          }
        }

        override fun onStreamDisabled(stream: Stream) {
          if (stream.kind.equals("video", ignoreCase = true)) {
            holder.participantView.removeTrack()
            holder.participantView.visibility = View.GONE
          }
        }
      })
    }

    override fun getItemCount(): Int {
      return participantList.size
    }

    class PeerViewHolder(view: View) : RecyclerView.ViewHolder(view) {
      // 'VideoView' to show Video Stream
      var participantView: VideoView
      var tvName: TextView

      init {
        tvName = view.findViewById(R.id.tvName)
        participantView = view.findViewById(R.id.participantView)
      }
    }
</code></pre><p>Add this adapter to the <code>SpeakerFragment</code>.</p>
<pre><code class="language-js">override fun onCreateView(
      inflater: LayoutInflater, container: ViewGroup?,
      savedInstanceState: Bundle?
  ): View? {
    //...
    if (meeting != null) {
      //...
      val rvParticipants = view.findViewById&lt;RecyclerView&gt;(R.id.rvParticipants)
      rvParticipants.layoutManager = GridLayoutManager(mContext, 2)
      rvParticipants.adapter = SpeakerAdapter(meeting!!)
    }
    // 'view' is the inflated layout created in the elided code above
    return view
}</code></pre><h3 id="step-4-implement-viewerview">Step 4: Implement ViewerView</h3><p>When the host starts the live stream, the viewer will be able to watch it.<br>
To implement the player view, we are going to use <code>ExoPlayer</code>, which makes it easy to play an HLS stream.<br>
Let's first add the dependency to the project.</p>
<pre><code class="language-js">dependencies {
  implementation 'com.google.android.exoplayer:exoplayer:2.18.5'
  // other app dependencies
}</code></pre><p>Now, we create a new Fragment named <code>ViewerFragment</code>.</p><h4 id="creating-the-ui-for-viewer-fragment">Creating the UI for Viewer Fragment</h4>
<p>The Viewer Fragment will include:</p><ul><li><strong>TextView for Meeting Id</strong> - The meeting ID you have joined with will be displayed in this text view.</li><li><strong>Leave Button</strong> - This button will leave the meeting.</li><li><strong>waitingLayout</strong> - A text view that is shown while there is no active HLS stream.</li><li><strong>StyledPlayerView</strong> - A media player view that will display the live stream.</li></ul><p>In the <code>/app/res/layout/fragment_viewer.xml</code> file, replace the content with the following.</p><pre><code class="language-js">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@color/black"
    tools:context=".ViewerFragment"&gt;

    &lt;LinearLayout
        android:id="@+id/meetingLayout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:paddingHorizontal="12sp"
        android:paddingVertical="5sp"&gt;

        &lt;TextView
            android:id="@+id/meetingId"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="3"
            android:text="Meeting Id : "
            android:textColor="@color/white"
            android:textSize="20sp" /&gt;

        &lt;Button
            android:id="@+id/btnLeave"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="Leave" /&gt;

    &lt;/LinearLayout&gt;

    &lt;TextView
        android:id="@+id/waitingLayout"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:text="Waiting for host \n to start the live streaming"
        android:textColor="@color/white"
        android:textFontWeight="700"
        android:textSize="20sp"
        android:gravity="center"/&gt;

    &lt;com.google.android.exoplayer2.ui.StyledPlayerView
        android:id="@+id/player_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:visibility="gone"
        app:resize_mode="fixed_width"
        app:show_buffering="when_playing"
        app:show_subtitle_button="false"
        app:use_artwork="false"
        app:show_next_button="false"
        app:show_previous_button="false"
        app:use_controller="true"
        android:layout_below="@id/meetingLayout"/&gt;

&lt;/RelativeLayout&gt;</code></pre><h4 id="initialize-player-and-play-hls-stream">Initialize player and Play HLS stream</h4>
<p>Initialize the player and play the HLS stream when the meeting's HLS state is <code>HLS_PLAYABLE</code>, and release it when the state is <code>HLS_STOPPED</code>. Whenever the meeting's HLS state changes, the <code>onHlsStateChanged</code> event is triggered.</p>
<pre><code class="language-js">class ViewerFragment : Fragment() {
  private var meeting: Meeting? = null
  private var playerView: StyledPlayerView? = null
  private var waitingLayout: TextView? = null
  private var player: ExoPlayer? = null
  private var dataSourceFactory: DefaultHttpDataSource.Factory? = null
  private val startAutoPlay = true
  private var downStreamUrl: String? = ""

  override fun onCreateView(
    inflater: LayoutInflater, container: ViewGroup?,
    savedInstanceState: Bundle?
  ): View? {
    // Inflate the layout for this fragment
    val view = inflater.inflate(R.layout.fragment_viewer, container, false)
    playerView = view.findViewById(R.id.player_view)
    waitingLayout = view.findViewById(R.id.waitingLayout)
    if (meeting != null) {
        // set MeetingId to TextView
        (view.findViewById&lt;View&gt;(R.id.meetingId) as TextView).text =
            "Meeting Id : " + meeting!!.meetingId
        // leave the meeting on btnLeave click
        (view.findViewById&lt;View&gt;(R.id.btnLeave) as Button).setOnClickListener { meeting!!.leave() }
        // add listener to meeting
        meeting!!.addEventListener(meetingEventListener)
    }
    return view
  }

  override fun onAttach(context: Context) {
      super.onAttach(context)
      mContext = context
      if (context is Activity) {
        mActivity = context
        // get meeting object from MeetingActivity
        meeting = (mActivity as MeetingActivity?)!!.meeting
      }
  }

  private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {}

  override fun onDestroy() {
  }

  companion object {
    private var mActivity: Activity? = null
    private var mContext: Context? = null
  }
}</code></pre><pre><code class="language-js">private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
      override fun onMeetingLeft() {
        if (isAdded) {
          val intents = Intent(mContext, JoinActivity::class.java)
          intents.addFlags(
              Intent.FLAG_ACTIVITY_NEW_TASK
                      or Intent.FLAG_ACTIVITY_CLEAR_TOP or Intent.FLAG_ACTIVITY_CLEAR_TASK
          )
          startActivity(intents)
          mActivity!!.finish()
        }
      }

      @RequiresApi(api = Build.VERSION_CODES.P)
      override fun onHlsStateChanged(hlsState: JSONObject) {
        if (hlsState.has("status")) {
            try {
              if (hlsState.getString("status") == "HLS_PLAYABLE" &amp;&amp; hlsState.has("downstreamUrl")) {
                downStreamUrl = hlsState.getString("downstreamUrl")
                waitingLayout!!.visibility = View.GONE
                playerView!!.visibility = View.VISIBLE
                // initialize player
                initializePlayer()
              }
              if (hlsState.getString("status") == "HLS_STOPPED") {
                // release the player
                releasePlayer()
                downStreamUrl = null
                waitingLayout!!.text = "Host has stopped \n the live streaming"
                waitingLayout!!.visibility = View.VISIBLE
                playerView!!.visibility = View.GONE
              }
            } catch (e: JSONException) {
                e.printStackTrace()
            }
          }
      }
  }

  private fun initializePlayer() {
  }

  private fun releasePlayer() {
  }</code></pre><pre><code class="language-js">private fun initializePlayer() {
    if (player == null) {
      dataSourceFactory = DefaultHttpDataSource.Factory()
      val mediaSource = HlsMediaSource.Factory(dataSourceFactory!!).createMediaSource(
          MediaItem.fromUri(Uri.parse(downStreamUrl))
      )
      val playerBuilder = ExoPlayer.Builder( /* context = */mContext!!)
      player = playerBuilder.build()
      // auto play when player is ready
      player!!.playWhenReady = startAutoPlay
      player!!.setMediaSource(mediaSource)
      // remove this line if you want to show the player's settings button
      playerView!!.findViewById&lt;View&gt;(com.google.android.exoplayer2.ui.R.id.exo_settings).visibility =
          View.GONE
      playerView!!.player = player
    }
    player!!.prepare()
  }
</code></pre><pre><code class="language-js"> private fun releasePlayer() {
    if (player != null) {
      player!!.release()
      player = null
      dataSourceFactory = null
      playerView!!.player = null
    }
  }

  override fun onDestroy() {
    mContext = null
    mActivity = null
    downStreamUrl = null
    releasePlayer()
    if (meeting != null) {
        meeting!!.removeAllListeners()
        meeting = null
    }
    super.onDestroy()
  }</code></pre><p>This is how the viewer will see their screen.</p>

<!--kg-card-begin: html-->
<iframe width="560" height="315" src="https://www.youtube.com/embed/NIqR32ajQBc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""/>
<!--kg-card-end: html-->
<h3 id="run-your-app">Run your App</h3><p><strong>Tadaa!!</strong> Our app is ready for live streaming. Easy, isn't it? <br>Install and run the app on two different devices and make sure both of them are connected to the internet. You should expect it to work as shown in the video below:</br></p><center> 
<p><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/done-its-done.gif" alt="Build an Android Live Streaming Video Chat App Using Kotlin?" loading="lazy"><br>
</br></img></p></center><p/>
<h2 id="conclusion">Conclusion</h2><p>In this blog, we have learned what VideoSDK is and how to create your own live streaming Android app with VideoSDK.</p><p>Go ahead and build advanced features like screen sharing, real-time messaging, and more. Browse our <a href="https://docs.videosdk.live/">documentation</a>.</p><p>To see the full implementation of the app, check out <a href="https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example/tree/one-to-one-demo">this</a> GitHub repository.</p><p>If you face any problems or have questions, feel free to join our <a href="https://discord.gg/Gpmj6eCq5u">Discord community</a>.</p><p>To unlock the full potential of VideoSDK and create easy-to-use video experiences, developers are encouraged to sign up for VideoSDK and further explore its features.</p><p><a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and get <strong>10,000 free minutes</strong> to take your video app to the next level!</p><h2 id="more-android-resources">More Android Resources</h2><ul><li><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/concept-and-architecture">Build an Android Video Calling App - docs</a></li><li><a href="https://youtu.be/Kj7jS3dbJFA">Build an Android Video Calling App using Android Studio and Video SDK</a> - YouTube</li><li><a href="https://github.com/videosdk-live/quickstart/tree/main/android-rtc">quickstart/android-rtc</a></li><li><a href="https://github.com/videosdk-live/quickstart/tree/main/android-hls">quickstart/android-hls</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example">videosdk-rtc-android-java-sdk-example</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example">videosdk-rtc-android-kotlin-sdk-example</a></li><li><a href="https://github.com/videosdk-live/videosdk-hls-android-java-example">videosdk-hls-android-java-example</a></li><li><a 
href="https://github.com/videosdk-live/videosdk-hls-android-kotlin-example">videosdk-hls-android-kotlin-example</a></li></ul>]]></content:encoded></item><item><title><![CDATA[How VideoSDK Overrides Codecs?]]></title><description><![CDATA[The article explores VideoSDK's features including codec selection, dynamic switching, custom configuration, fallback mechanisms, and low-latency optimizations.]]></description><link>https://www.videosdk.live/blog/how-video-sdk-overrides-codecs</link><guid isPermaLink="false">66d6d30d20fab018df10fe78</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Fri, 24 Jan 2025 12:19:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/How-VideoSDK-overrides-Codecs.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/How-VideoSDK-overrides-Codecs.jpg" alt="How VideoSDK Overrides Codecs?"/><p>VideoSDK offers the capability to dynamically override and manage codecs throughout video and audio sessions, allowing developers to adapt media streams based on specific needs like network conditions, device performance, or application requirements. Here’s how Video SDK typically handles codec overriding:</p><h2 id="codec-selection">Codec Selection</h2><p>The VideoSDK provides APIs to select the desired codec for encoding and decoding video streams. It includes support for widely used codecs such as H.264, H.265 (HEVC), VP9, and the newly emerging AV1 codec. Developers can choose the most appropriate codec by considering a variety of factors, including quality expectations, bitrate limitations, and compatibility with devices.</p><h2 id="dynamic-codec-switching">Dynamic Codec Switching</h2><p>VideoSDK enables dynamic switching of codecs during an ongoing session without interrupting the media stream. 
This is often handled internally by the SDK based on predefined conditions such as bandwidth changes, CPU usage, or packet loss.</p><p>The SDK monitors network conditions and automatically switches to a more suitable codec if the current one becomes inefficient. For example, it might switch from VP9 to VP8 or H.264 to reduce computational load or improve compatibility.</p><h2 id="custom-codec-configuration">Custom Codec Configuration</h2><p>The SDK provides detailed control over codec parameters via a configuration API. Developers can adjust settings like:</p><ul><li><strong>Bitrate</strong>: Controlling the target bitrate for encoding to optimize quality vs file size tradeoffs.</li><li><strong>Resolution</strong>: Specifying the desired resolution for encoding or decoding.</li><li><strong>Frame rate</strong>: Setting the frame rate for smooth playback.</li><li><strong>Encoding presets</strong>: Choosing presets optimized for different use cases like low-latency streaming or high-quality offline encoding.</li></ul><p>This level of flexibility enables developers to customize default codec settings to their specific needs and enhance performance.</p><h2 id="fallback-mechanisms">Fallback Mechanisms</h2><p>VideoSDK includes fallback mechanisms where, if a preferred codec fails (due to unsupported devices or insufficient resources), it automatically falls back to the next best available codec. This ensures that the session continues smoothly even under suboptimal conditions.</p><h2 id="codec-prioritization-and-negotiation">Codec Prioritization and Negotiation</h2><p>During session setup, VideoSDK negotiates codec capabilities between participants, prioritizing preferred codecs. 
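</p><p>As a rough sketch of what prioritized negotiation means: each side keeps an ordered preference list, and the first mutually supported codec wins. The <code>CodecPreferences</code> class and <code>negotiate</code> helper below are illustrative stand-ins for explanation only, not actual VideoSDK APIs:</p>

```kotlin
// Illustrative sketch only: CodecPreferences and negotiate() are hypothetical
// helpers for explanation, not part of the VideoSDK API.
data class CodecPreferences(
    // highest-priority codec first
    val video: List<String> = listOf("VP9", "H264", "VP8"),
    val audio: List<String> = listOf("OPUS")
)

// Pick the first codec from our priority list that the remote side also
// supports; null means no overlap, so the caller must fall back.
fun negotiate(preferred: List<String>, remoteSupported: Set<String>): String? =
    preferred.firstOrNull { it in remoteSupported }

fun main() {
    val prefs = CodecPreferences()
    // The remote peer supports only H.264 and VP8, so VP9 is skipped.
    println(negotiate(prefs.video, setOf("H264", "VP8"))) // prints H264
}
```

<p>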
If a codec mismatch occurs, the SDK adapts by choosing a mutually supported codec, ensuring compatibility.</p><p>Developers can influence this process by specifying codec priorities or excluding certain codecs from the negotiation process.</p><h2 id="support-for-low-latency-codecs">Support for Low-Latency Codecs</h2><p>For applications requiring low-latency communication, VideoSDK supports codecs optimized for minimal delay, such as VP8 or OPUS, and can override higher-latency codecs when real-time interaction is critical.</p><h2 id="api-access-for-codec-control">API Access for Codec Control</h2><p>VideoSDK provides APIs that give developers direct access to codec control, allowing them to implement custom codec-switching logic. For instance, you can create event listeners that trigger codec changes based on real-time network analytics or user-defined conditions.</p>]]></content:encoded></item><item><title><![CDATA[Why Video PD and Relationship Management are Important in Banking?]]></title><description><![CDATA[This article explores how these innovations are revolutionizing customer interactions, streamlining operations, and providing banks with a competitive edge. 
]]></description><link>https://www.videosdk.live/blog/videopd-and-relationship-management</link><guid isPermaLink="false">66d8296220fab018df10ff81</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 23 Jan 2025 12:19:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-Video-PD-and-RM_-Why-are-They-Important-in-Banking_.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-Video-PD-and-RM_-Why-are-They-Important-in-Banking_.jpg" alt="Why Video PD and Relationship Management are Important in Banking?"/><p>In an era where technology is reshaping every aspect of our lives, the banking sector is no exception. As financial institutions aim to enhance customer experiences and streamline operations, two innovative concepts have emerged as game-changers: <a href="https://www.videosdk.live/blog/video-personal-discussion-vpd">Video Personal Discussion (Video PD)</a> and Relationship Management (RM).</p><p>Video PD leverages <a href="https://www.videosdk.live/">video technology</a> to facilitate real-time, personalized interactions between banks and their customers, making processes like onboarding and verification more efficient and engaging. On the other hand, RM surrounds the strategies and tools that banks use to manage and nurture customer relationships, focusing on personalization and data-driven insights.</p><p>Together, Video PD and RM are revolutionizing the way banks operate, allowing them to meet the evolving needs of customers while maintaining a competitive edge in a rapidly changing landscape.</p><p>In this article, we will explore what Video PD and RM are, how they function, and their significance in the banking industry. 
By understanding these concepts, banks can harness their potential to drive customer satisfaction and operational efficiency.</p><h2 id="understanding-video-pd">Understanding Video PD</h2><h3 id="definition-and-functionality">Definition and Functionality</h3><p><a href="https://www.videosdk.live/blog/video-personal-discussion-vpd">Video Personal Discussion (Video PD)</a> is a cutting-edge technology that facilitates real-time, face-to-face interactions between banks and their customers through secure video calls. This innovative approach allows financial institutions to conduct customer onboarding, verification, and support in a more personalized manner, enhancing the overall customer experience.</p><p>Video PD integrates various features such as <a href="https://www.videosdk.live/blog/what-is-liveness-detection">liveness detection</a>, document sharing, and identity verification, making it an effective tool for streamlining banking processes.</p><h3 id="comparison-with-traditional-customer-verification-methods"><strong>Comparison with Traditional Customer Verification Methods</strong></h3><p>Traditional customer verification methods often involve in-person meetings, extensive paperwork, and time-consuming processes that can lead to customer frustration and drop-offs. These methods are limited by geographical constraints and can be prone to errors and fraud. 
In contrast, Video PD eliminates the need for physical visits, allowing banks to connect with customers remotely, thereby accelerating the onboarding process and improving efficiency.</p><p><strong>Key Features of Video PD</strong></p><ol><li><strong>Real-Time Interaction</strong>: Video PD enables live discussions, fostering a personal connection that builds trust between banks and customers.</li><li><strong>Document Sharing</strong>: Customers can securely upload necessary documents during the call, simplifying the verification process.</li><li><strong>Identity Verification</strong>: Advanced technologies like facial recognition and <a href="https://aws.amazon.com/what-is/ocr/">Optical Character Recognition (OCR)</a> ensure accurate identity checks.</li><li><strong>Audit Trails</strong>: All interactions are recorded for compliance and auditing purposes, providing a reliable record of the verification process.</li><li><strong>Accessibility</strong>: Video PD breaks geographical barriers, allowing banks to serve a wider audience, including those in remote areas.</li></ol><p>These features collectively enhance the customer onboarding experience, making Video PD a vital tool for modern banking practices.</p><h2 id="understanding-relationship-management-rm"><strong>Understanding Relationship Management (RM)</strong></h2><h3 id="definition-and-scope-of-rm-in-banking"><strong>Definition and Scope of RM in Banking</strong></h3><p>Relationship Management (RM) in banking refers to the strategies and practices employed by financial institutions to foster and maintain strong relationships with their customers. It encompasses a wide range of activities, from personalized customer service and engagement to data-driven insights that help banks understand customer needs and preferences. 
By leveraging <a href="https://www.pragmaticcoders.com/blog/data-science-in-finance" rel="noreferrer">data science in finance</a>, banks can more precisely tailor their RM strategies, ultimately enhancing customer loyalty and satisfaction. RM aims to create long-term relationships that benefit both the bank and its clients, ultimately enhancing customer loyalty and satisfaction.</p><h3 id="importance-of-personalized-interactions-and-data-analytics"><strong>Importance of Personalized Interactions and Data Analytics</strong></h3><p>Personalized interactions are crucial in today’s competitive banking landscape. Customers expect tailored services that cater to their unique financial situations and goals. RM leverages data analytics to gather insights into customer behavior, preferences, and transaction history, enabling banks to deliver customized solutions and proactive support. This personalization not only enhances the customer experience but also increases retention rates.</p><h3 id="how-rm-integrates-with-technology-to-enhance-customer-service"><strong>How RM Integrates with Technology to Enhance Customer Service?</strong></h3><p>The integration of <a href="https://www.videosdk.live/">video technology</a> in RM has revolutionized customer service in banking. Advanced Customer Relationship Management <a href="https://www.saasadviser.co/software/crm-software" rel="noreferrer">CRM software</a> enables banks to track customer interactions, manage inquiries, and automate follow-ups efficiently.</p><p>Additionally, the use of Artificial Intelligence (AI) and machine learning allows for predictive analytics, helping banks anticipate customer needs and offer relevant products and services. 
By combining personalized service with technological advancements, RM enhances overall customer satisfaction and drives business growth.</p><h2 id="importance-of-video-pd-and-rm-in-banking"><strong>Importance of Video PD and RM in Banking</strong></h2><h3 id="enhanced-customer-experience"><strong>Enhanced Customer Experience</strong></h3><p>Video PD and Relationship Management (RM) significantly improve customer engagement and satisfaction by offering personalized interactions. <a href="https://www.videosdk.live/blog/video-personal-discussion-vpd">Video PD</a> allows customers to connect face-to-face with bank representatives, creating a more intimate and trustworthy environment.</p><p>This personal touch helps banks better understand customer needs and preferences, leading to tailored solutions that enhance the overall experience. Furthermore, RM utilizes data analytics to provide insights into customer behavior, enabling banks to proactively address issues and offer relevant services. Together, these technologies foster deeper relationships and increase customer loyalty.</p><h3 id="operational-efficiency"><strong>Operational Efficiency</strong></h3><p>The integration of Video PD and RM streamlines banking processes, reducing costs and improving turnaround times. Video PD eliminates the need for in-person meetings and lengthy paperwork, allowing for quicker onboarding and verification. This efficiency not only saves time for both customers and banks but also reduces operational costs associated with traditional methods.</p><p>RM systems enhance workflow by automating customer interactions and follow-ups, ensuring that no inquiries fall through the cracks. As a result, banks can operate more effectively while delivering faster service to their clients.</p><h3 id="fraud-prevention-and-security"><strong>Fraud Prevention and Security</strong></h3><p>Video PD enhances security through real-time verification and identity checks. 
By utilizing advanced technologies such as facial recognition and document verification during live video interactions, banks can ensure the authenticity of their customers. This robust identity verification process significantly reduces the risk of fraud compared to traditional methods, which can be susceptible to manipulation. Additionally, the ability to record video interactions provides an audit trail that enhances compliance with regulatory standards, further bolstering security measures.</p><h3 id="competitive-advantage"><strong>Competitive Advantage</strong></h3><p>Banks that leverage Video PD and RM can differentiate themselves in a crowded market. By offering innovative, personalized services, these institutions can attract and retain customers more effectively. The convenience of remote interactions through Video PD appeals to a tech-savvy customer base that values efficiency and flexibility. Moreover, banks that successfully implement these technologies can position themselves as leaders in customer service, enhancing their reputation and building a loyal clientele. In an increasingly digital world, adopting Video PD and RM is not just an option; it is essential for maintaining a competitive edge in the banking industry.</p><h2 id="practical-applications-and-case-studies"><strong>Practical Applications and Case Studies</strong></h2><h3 id="use-cases-of-video-pd"><strong>Use Cases of Video PD</strong></h3><p>Video Personal Discussion (Video PD) has a wide range of practical applications in banking, particularly in customer onboarding, <a href="https://www.videosdk.live/solutions/video-kyc">Know Your Customer (KYC)</a> processes, and remote support. For customer onboarding, Video PD allows banks to conduct virtual meetings where customers can present identification documents and complete necessary forms in real-time. 
This approach not only accelerates the onboarding process but also enhances the customer experience by providing a personal touch.</p><p>In KYC processes, Video PD enables banks to verify customer identities through live video interactions, ensuring compliance with regulatory requirements while minimizing the risk of fraud. Additionally, for remote support, banks can utilize Video PD to assist customers with complex inquiries, providing a face-to-face interaction that fosters trust and clarity.</p><h3 id="future-trends"><strong>Future Trends</strong></h3><p>Looking ahead, the future of Video PD and RM in banking is bright, with emerging technologies like Artificial Intelligence (AI) and machine learning set to play a pivotal role. These technologies can enhance Video PD by automating document verification and providing predictive analytics that help banks anticipate customer needs. </p><p>The integration of AI-driven chatbots with Video PD can further streamline customer interactions, making the banking experience more efficient and personalized. 
As these technologies evolve, banks that embrace them will be better positioned to meet the demands of a dynamic financial landscape.</p>]]></content:encoded></item><item><title><![CDATA[What is Server to Server (S2S) Communication?]]></title><description><![CDATA[Discover fundamental factors of Server to Server (S2S) communication, including protocols, security measures, and potential challenges.]]></description><link>https://www.videosdk.live/blog/server-to-server-s2s-communication</link><guid isPermaLink="false">66d6d6b520fab018df10fe98</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 23 Jan 2025 12:18:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-Server-to-Server-Communication_-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-Server-to-Server-Communication_-1.png" alt="What is Server to Server (S2S) Communication?"/><p>Server to Server (S2S) communication refers to the process where two or more servers interact directly with each other without the involvement of a client (e.g., a user's device or a web browser). This communication can be used to exchange data, perform background processes, or coordinate actions between systems. It's commonly used in various scenarios, including microservices architecture, cloud services, data synchronization, and APIs.</p><h2 id="importance-of-server-to-server-s2s-communication">Importance of Server to Server (S2S) Communication</h2><h3 id="seamless-operations">Seamless Operations</h3><p>Server to Server (S2S) is critical for maintaining seamless operations in environments with high data traffic. 
It ensures that multiple servers can work together efficiently to handle large volumes of requests simultaneously.</p><h3 id="real-time-data-handling">Real-time Data Handling</h3><p>Server to Server (S2S) enables real-time data processing and synchronization, which is essential for applications that require immediate updates, such as management systems and customer relationship management (CRM) tools.</p><h3 id="enhanced-user-experience">Enhanced User Experience</h3><p>By ensuring that data is consistently updated across servers, Server to Server contributes to a better user experience. Users benefit from faster response times and more reliable services, which can lead to higher satisfaction and retention.</p><h3 id="scalability">Scalability</h3><p>Server to Server architecture supports the easy addition of more servers as demand increases. This scalability is vital for businesses experiencing growth, as it allows them to enhance their infrastructure without significant changes to the existing setup.</p><h2 id="key-aspects-of-server-to-server-s2s-communication">Key Aspects of Server to Server (S2S) Communication</h2><h3 id="direct-communication">Direct Communication</h3><p>S2S (Server-to-Server) allows servers to exchange data and requests directly, which minimizes latency and enhances performance. This is particularly useful in scenarios where rapid data exchange is crucial, such as real-time analytics.</p><h3 id="load-distribution">Load Distribution</h3><p>By enabling servers to communicate with one another, Server to Server (S2S) helps distribute workloads evenly. This ensures that no single server becomes a bottleneck, thereby improving overall system reliability and efficiency.</p><h3 id="data-synchronization">Data Synchronization</h3><p>In a typical Server-to-Server setup, servers can synchronize data across different systems. For example, one server might handle user requests, while another processes transactions and a third manages user data. 
Server to Server (S2S) ensures that all these servers remain updated with the latest information, which is critical for maintaining consistency and accuracy in operations.</p><h3 id="protocols-and-technologies">Protocols and Technologies</h3><p>Server to Server (S2S) communication typically takes place through APIs (Application Programming Interfaces) built on standardized protocols such as HTTP, HTTPS, WebSocket, or gRPC, or occasionally through custom protocols. These protocols define how servers communicate, ensuring that data is exchanged in a structured and reliable manner.</p><h3 id="security-and-authentication">Security and Authentication</h3><p>Since no direct client is involved, security measures are critical. Common methods include API keys, OAuth tokens, SSL/TLS for encrypted communication, and mutual authentication methods to ensure only authorized servers can communicate.</p><h2 id="protocols-and-methods">Protocols and Methods</h2><ul><li><strong>RESTful APIs</strong>: Using HTTP/HTTPS requests to GET, POST, PUT, and DELETE data.</li><li><strong>SOAP</strong>: A protocol for exchanging structured information in the implementation of web services.</li><li><strong>gRPC:</strong> A high-performance, open-source framework that uses HTTP/2 for transport, protocol buffers as the interface description language, and provides features like authentication, load balancing, and more.</li><li><strong>Message Queues</strong>: Systems like RabbitMQ, Kafka, or AWS SQS that enable asynchronous communication between servers.</li></ul><h2 id="challenges-in-server-to-server-communication">Challenges in Server-to-Server Communication</h2><p>While S2S (Server-to-Server) communication offers many advantages, it also presents challenges:</p><h3 id="network-latency">Network Latency</h3><p>The physical distance between servers can introduce latency, affecting the speed of data transfer. 
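</p><p><em>Illustration (not from the original post):</em> a minimal Python sketch of how one might measure the average round-trip latency of a server-to-server call. The <code>call</code> argument is a placeholder for whatever request one server makes to its peer, such as an authenticated HTTPS request.</p><!--kg-card-begin: markdown-->

```python
import time

def measure_latency(call, samples: int = 5) -> float:
    """Return the average wall-clock duration (in seconds) of call() over several runs."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        call()  # e.g. an HTTPS request from this server to the peer server
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)
```

<!--kg-card-end: markdown--><p>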
Optimizing network paths and using efficient data transfer protocols can mitigate this issue.</p><h3 id="security-concerns">Security Concerns</h3><p>As servers communicate directly, ensuring data security during transmission is crucial. Implementing encryption and secure protocols can help protect sensitive information from unauthorized access.</p><h3 id="complexity-in-management">Complexity in Management</h3><p>Managing multiple servers and ensuring they communicate effectively can be complex. Proper monitoring and management tools are necessary to maintain performance and reliability.</p><p>In short, Server to Server (S2S) communication is a foundational aspect of modern network architecture, enabling efficient data exchange, load balancing, and synchronization across multiple servers. Its importance is particularly evident in high-traffic environments where seamless operations and real-time data handling are critical for success.</p>]]></content:encoded></item><item><title><![CDATA[Audio Calling API Pricing]]></title><description><![CDATA[Pricing with no surprises. Compare the affordability and quality of several providers and make an informed choice! Experience well-defined audio calling.
]]></description><link>https://www.videosdk.live/audio-calling-api-pricing/</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb6c</guid><category><![CDATA[Pricing]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 23 Jan 2025 11:38:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/09/audio-calling-Pricing-thumbnail.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/09/audio-calling-Pricing-thumbnail.jpg" alt="Audio Calling API Pricing"/><p>Videosdk.live presents its pricing for audio communication. We deliver well-supported, high-quality audio at the most affordable price. This blog explains how our pricing policies consistently work out to an effective deal.<br/></p><p><strong>The audio communication API pricing.</strong></p><p><strong>Get 10,000 Minutes free each month, lifetime!</strong><br/></p><p>Audio calling APIs play an important role in real-time communication. Commonly used by large corporations and smaller businesses alike, they help teams engage clients and customers efficiently. Although video calling is considered an important aspect of communicating over the web, it is simply incomplete without audio. <br/></p><blockquote>Before we begin with the audio communication pricing, we commit to delivering the best quality experience to grow engagement. Alongside our effective pricing, we add valuable features to our platform. We want every client to have a worthwhile experience. </blockquote><ul><li>There is no need to speak to sales support to learn about pricing. You can view the pricing policies here.</li></ul><h2 id="how-to-calculate-participant-minutes">How to calculate Participant Minutes?</h2><p>Participant Minutes are the total number of minutes spent by all participants combined in one meeting. Videosdk.live calculates Participant Minutes based on the number of participants present in a meeting. 
The computation is simple.</p><p><strong>Participant Minutes = Number of participants (N) x Minutes Consumed (M)</strong><br/>The figure below illustrates the calculation.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/PArticipant--Minutes.jpg" class="kg-image" alt="Audio Calling API Pricing" loading="lazy" width="2000" height="477" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2021/09/PArticipant--Minutes.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2021/09/PArticipant--Minutes.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/size/w1600/2021/09/PArticipant--Minutes.jpg 1600w, http://assets.videosdk.live/static-assets/ghost/2021/09/PArticipant--Minutes.jpg 2000w" sizes="(min-width: 1200px) 1200px"/><figcaption>6 Participants x 20 Minutes = 120 Participant Minutes</figcaption></figure><h2 id="free-plan">Free Plan</h2><p>We reserve 10,000 minutes for you each month, which means the first 10,000 minutes you consume every month are free. After consuming 10,000 minutes, you pay as you go. How are these minutes counted? 
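</p><p><em>Illustration (not an official VideoSDK calculator):</em> a short Python sketch of the participant-minutes and free-allowance arithmetic described in this post; the function names are invented for this example.</p><!--kg-card-begin: markdown-->

```python
FREE_MINUTES_PER_MONTH = 10_000
PRICE_PER_MINUTE = 0.00060  # Videosdk.live's quoted price per participant minute

def participant_minutes(participants: int, minutes: int) -> int:
    """Participant Minutes = Number of participants (N) x Minutes Consumed (M)."""
    return participants * minutes

def billable_cost(pm: int, remaining_free: int = FREE_MINUTES_PER_MONTH) -> float:
    """Only participant minutes beyond the remaining free allowance are charged."""
    return max(0, pm - remaining_free) * PRICE_PER_MINUTE
```

<!--kg-card-end: markdown--><p>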
It can be illustrated with some examples.<br/></p><p><strong>Example:</strong></p><p><em><strong>Audio conferencing with 50 participants for 40 minutes</strong></em></p><ul><li>Total minutes consumed: 50 Participants x 40 Minutes = 2,000 Participant Minutes. <strong>BUT</strong> these minutes are free; there is no cost to be borne by you.</li><li><strong>Your free-minute balance is simply reduced by 2,000 minutes.</strong></li><li><strong>Remaining Free Minutes = 10,000 - 2,000 = 8,000 Minutes</strong></li><li>Similarly, the participant minutes of your next meeting will be deducted from the remaining free minutes.<br/></li></ul><p><em><strong>A bonus example</strong></em></p><blockquote>By our estimate, if 6 participants meet for 50 minutes a day, every day for 30 days (9,000 Participant Minutes in total), they can run <strong>30 FREE audio meetings each month!</strong></blockquote><h2 id="pro-plan">Pro Plan</h2><p>After you consume the first 10,000 free minutes, the paid plan applies. To keep it user-friendly, we have designed manageable, affordable pricing. The Pro Plan is recommended for companies looking to scale engagement profitably.<br/></p><p>The calculation is simple:</p><p><strong>Pricing = Number of participants x Meeting minutes x Unit Price per minute</strong></p><p><strong>Videosdk.live Price per minute = $0.00060</strong><br/></p><p><strong>Example:</strong></p><p>In an audio calling meeting, Total Participants (N) = 50 and Total Minutes consumed (M) = 100. 
The Total Participant Minutes (PM) = 5,000.</p><p><strong>On calculation: Total Price = N x M x Price per minute = 50 x 100 x 0.0006 = $3</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/pro-plan-pricing.jpg" class="kg-image" alt="Audio Calling API Pricing" loading="lazy" width="1189" height="997" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2021/09/pro-plan-pricing.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2021/09/pro-plan-pricing.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2021/09/pro-plan-pricing.jpg 1189w" sizes="(min-width: 720px) 720px"/><figcaption>50 Participants x 100 Minutes x 0.00060 = $3.00</figcaption></figure><h2 id="enterprise-plan"><strong>Enterprise Plan</strong></h2><p>The Enterprise Plan is for companies that need dedicated account management, support, and other technical services. We offer this plan to promote mass engagement at affordable prices.</p><blockquote><a href="https://videosdk.live/contact">Contact Support</a> for the best pricing deals.</blockquote><p><strong>Other Audio calling API providers</strong></p><p>Several other providers develop audio calling APIs with near-identical approaches. The main point of contrast between Videosdk.live and other providers is price. <br/></p><p>The calculation is the same:</p><p><strong>Pricing = Number of participants x Meeting minutes x Unit Price per minute</strong></p><blockquote>Other companies' price per minute = $0.00099</blockquote><p><strong>Calculation with the same example:</strong></p><p>In an audio calling meeting, Total Participants (N) = 50 and Total Minutes consumed (M) = 100. 
The Total Participant Minutes (PM) = 5,000.</p><p><strong>On calculation: Total Price = N x M x Price per minute = 50 x 100 x 0.00099 = $4.95 (approx. $5)</strong><br/></p><h2 id="price-comparison%E2%80%9Cvideosdklive-vs-other-api-providers%E2%80%9D">Price Comparison- “videosdk.live vs. other API providers”</h2><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/pricing-table_audio-call--1-.jpg" class="kg-image" alt="Audio Calling API Pricing" loading="lazy" width="1567" height="717" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2021/09/pricing-table_audio-call--1-.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2021/09/pricing-table_audio-call--1-.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2021/09/pricing-table_audio-call--1-.jpg 1567w" sizes="(min-width: 1200px) 1200px"/></figure><blockquote>Videosdk.live and other companies offer comparable features, with no difference in quality at all. The approach is identical, but the prices vary. You can always get in touch with us through our <a href="https://www.videosdk.live/contact">fastest support channel</a>. 
We welcome all your queries.</blockquote>]]></content:encoded></item><item><title><![CDATA[Comprehensive Comparison: Agora, Twilio, Vonage, Zoom, and VideoSDK]]></title><description><![CDATA[This article will break down the main features and limitations of five tools that provide the infrastructure for embedding in-app audio-video functionality, to help CEOs, CTOs, PMs, developers, and finance teams make a more informed decision: Agora, Twilio, Vonage, Zoom, and VideoSDK.]]></description><link>https://www.videosdk.live/blog/agora-vs-twilio-vs-vonage-vs-zoom-vs-video-sdk-comparison</link><guid isPermaLink="false">63a5439bbd44f53bde5ce4f9</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 22 Jan 2025 12:00:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/12/indepth-comparison1--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/12/indepth-comparison1--1-.jpg" alt="Comprehensive Comparison: Agora, Twilio, Vonage, Zoom, and VideoSDK"/><p>A brief and unbiased comparison of five major audio-video infrastructure providers: Agora SDK, Twilio Video, Vonage Video API, Zoom Video SDK, and VideoSDK.</p><p>This article aims to dissect the key attributes and constraints of five tools integral to incorporating in-app audio-video capabilities. This analysis is designed to empower CEOs, CTOs, project managers, developers, and finance teams with comprehensive insights, facilitating a more enlightened decision-making process. The tools under scrutiny include Agora, Twilio, Vonage, Zoom, and VideoSDK.</p><p>The video conferencing market is expected to grow at a CAGR of 12.6% during the forecast period, reaching USD 19.1 billion by 2027 from an estimated USD 10.6 billion in 2022. Because of the development of conferencing platforms based on machine learning and artificial intelligence, companies in this space are poised for profitable growth. 
These solutions enable businesses to maximize the use of collaboration platforms and increase meeting productivity by leveraging facial recognition and virtual assistant technology. With the use of AI in conferencing systems, organizations can learn more about the ideal meeting size, ideal meeting duration, and best meeting time of day.</p><p><a href="https://www.marketsandmarkets.com/Market-Reports/video-conferencing-market-99384414.html">Source</a></p><h2 id="the-verticalization-of-zoom">The 'Verticalization' of Zoom</h2><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/12/1_mAa1Q3jHZ8ZggUBLmhsPZA.webp" class="kg-image" alt="Comprehensive Comparison: Agora, Twilio, Vonage, Zoom, and VideoSDK" loading="lazy"/></figure><p><br/><a href="https://medium.com/swlh/the-verticalization-of-zoom-eb61a79d1cad">Source</a></p><p>The number of 5G subscribers globally is expected to reach 5 billion by 2028.</p><p>According to the latest edition of the <a href="https://www.ericsson.com/4ae28d/assets/local/reports-papers/mobility-report/documents/2022/ericsson-mobility-report-november-2022.pdf">Ericsson Mobility Report</a>, 65% of the world's population will have 5G coverage, with networks handling 45% of global mobile data traffic.</p><p>Video traffic in mobile networks is expected to increase by around 30% per year until 2025. 
It will account for nearly 75% of mobile data traffic, up from a little more than 60% in 2019.</p><blockquote>This article will evaluate solutions based on six criteria:</blockquote><ul><li><a href="https://www.videosdk.live/blog/agora-vs-twilio-vs-vonage-vs-zoom-vs-video-sdk-comparison#agora-real-time-voice-and-video-engagement">Company Brief Introduction</a></li><li><a href="https://www.videosdk.live/blog/agora-vs-twilio-vs-vonage-vs-zoom-vs-video-sdk-comparison#audio-video-agora-vs-twilio-vs-vonage-vs-zoom-vs-videosdk">Audio/Video</a></li><li><a href="https://www.videosdk.live/blog/agora-vs-twilio-vs-vonage-vs-zoom-vs-video-sdk-comparison#features-agora-vs-twilio-vs-vonage-vs-zoom-vs-videosdk">Features</a></li><li><a href="https://www.videosdk.live/blog/agora-vs-twilio-vs-vonage-vs-zoom-vs-video-sdk-comparison#interactive-features-agora-vs-twilio-vs-vonage-vs-zoom-vs-videosdk">Interactive Features</a></li><li><a href="https://www.videosdk.live/blog/agora-vs-twilio-vs-vonage-vs-zoom-vs-video-sdk-comparison#pricing-agora-vs-twilio-vs-vonage-vs-zoom-vs-videosdk">Pricing</a></li><li><a href="https://www.videosdk.live/blog/agora-vs-twilio-vs-vonage-vs-zoom-vs-video-sdk-comparison#support-agora-vs-twilio-vs-vonage-vs-zoom-vs-videosdk">Support</a></li></ul><h2 id="agora-pioneering-real-time-communication">Agora: Pioneering Real-Time Communication</h2><h3 id="what-is-agoraio">What is Agora.io?</h3><p>Agora is a company that offers a real-time video and audio communication platform. The Agora platform makes it simple for developers to create interactive speech and video apps by offering APIs that allow them to quickly add real-time voice and video features to their applications.</p><p>The Agora platform may be used to create applications for a wide range of use cases, such as real-time video conferencing, live streaming, online education, and customer support. 
The technology is built to serve huge numbers of users and can offer high-quality audio and video even in low-bandwidth conditions.</p><p>Agora is based in China and has offices in the USA. Since its inception in 2012, the firm has grown to become a major provider of real-time communication solutions for businesses and organizations.</p><h2 id="twilio-enhancing-real-time-engagement-within-apps">Twilio: Enhancing Real-Time Engagement within Apps</h2><h3 id="what-is-twilio-and-twilio-video">What is Twilio and Twilio Video?</h3><p>Twilio is a cloud communications platform that allows developers to create, grow, and manage communication solutions through the use of a set of APIs. Twilio's APIs enable developers to add voice, text, and messaging capabilities to their apps, allowing them to communicate in real time with their users.</p><p><a href="https://www.videosdk.live/alternative/twilio-vs-videosdk">Twilio Video</a> may be used to create apps for a wide range of applications, including video conferencing, live streaming, online education, and customer care, making it a valuable tool for the <a href="https://www.surveysensum.com/blog/best-companies-for-customer-experience">best companies for customer experience</a>. The technology is built to serve huge numbers of users and can deliver quality video even in constrained network conditions.</p><p>Twilio was formed in 2008 and is located in San Francisco, California. It has now grown to become a global provider of cloud communication solutions, servicing customers all over the world.</p><h2 id="vonage-video-seamless-integration-of-communication-apis">Vonage Video: Seamless Integration of Communication APIs</h2><h3 id="what-is-vonage-and-vonage-video">What is Vonage and Vonage Video?</h3><p>Vonage is a cloud communications platform provider that enables businesses and organizations to design, scale, and run communication solutions using a set of APIs (Application Programming Interfaces). 
Vonage's platform enables developers to integrate phone, message, and video capabilities into their apps, allowing them to engage with their consumers in real time.</p><p><a href="https://www.videosdk.live/alternative/vonage-vs-videosdk">Vonage Video</a> is a cloud API for real-time video communication. Vonage Video makes it simple for developers to build interactive voice and video apps by providing APIs that allow them to rapidly add real-time video capabilities to their applications.</p><p>Vonage was founded in 1998 in Holmdel, New Jersey. It has now evolved into a global provider of cloud communication solutions, serving customers worldwide.</p><h2 id="zoom-simplifying-video-conferencing">Zoom: Simplifying Video Conferencing</h2><h3 id="what-is-zoom-video-sdk">What is Zoom Video SDK?</h3><p>Zoom Video SDK is a collection of software development frameworks and tools made available by Zoom that enable developers to create unique video-based apps that interface with Zoom's platform. Zoom is a firm that offers a cloud-based videoconferencing and collaboration platform that allows users to connect and speak with one another in real time via the internet.</p><p>The Zoom SDK contains a number of APIs (Application Programming Interfaces) and SDKs (Software Development Kits) that allow developers to integrate video, audio, and messaging capabilities into their applications. Engineers may utilize the <a href="https://videosdk.live/alternative/zoom-vs-videosdk">Zoom SDK</a> to create unique video-based apps for a range of purposes such as videoconferencing, online education, and video broadcasts.</p><p>Zoom Video Communications, Inc. 
(or simply Zoom) is a San Jose, California-based American communications technology company.</p><h2 id="videosdk-developer-friendly-live-audio-video-integration">VideoSDK: Developer-Friendly Live Audio &amp; Video Integration</h2><h3 id="what-is-videosdk">What is VideoSDK?</h3><p>VideoSDK provides an API that allows developers to easily add powerful, extensible, scalable, and resilient audio-video features to their apps with just a few lines of code. Add live audio and video experiences to any platform in minutes. It abstracts the business logic of the conference room in <em><a href="https://www.videosdk.live/examples">templates</a></em>. Client-side SDKs include all edge cases within the SDK rather than leaving it to the application.</p><h2 id="audio-video-agora-vs-twilio-vs-vonage-vs-zoom-vs-videosdk">Audio-Video: Agora vs Twilio vs Vonage vs Zoom vs VideoSDK</h2><!--kg-card-begin: markdown--><table>
	<thead>
		<tr>
			<th></th>
			<th><strong>Agora</strong></th>
			<th><strong>Twilio</strong></th>
			<th><strong>Vonage</strong></th>
			<th><strong>Zoom</strong></th>
			<th><strong>VideoSDK</strong></th>
		</tr>
	</thead>
	<tbody>
			<tr>
				<td>Audio Calling</td>
				<td>✔ Built using the SDK</td>
				<td>✔ Built using the SDK</td>
				<td>✔ Built using the SDK</td>
				<td>✔ Built using the SDK</td>
				<td>✔ Built using the SDK</td>
			</tr>
		<tr>
			<td>Video Calling</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
		</tr>
		<tr>
			<td>Interactive Livestreaming</td>
			<td>✔ Built with Agora's Live Streaming SDK</td>
			<td>✔ Built with Twilio Live</td>
			<td>✔ Built with Vonage Live</td>
			<td>✘ No explicit mention in Zoom documentation</td>
			<td>✔ Built using the Video SDK</td>
		</tr>
	</tbody>
</table><!--kg-card-end: markdown--><h2 id="features-agora-vs-twilio-vs-vonage-vs-zoom-vs-videosdk">Features: Agora vs Twilio vs Vonage vs Zoom vs VideoSDK</h2><!--kg-card-begin: markdown--><table>
	<thead>
<tr>
            <td>
                &nbsp;
            </td>
            <td>
                <strong>Agora</strong>
            </td>
            <td>
                <strong>Twilio</strong>
            </td>
            <td>
                <strong>Vonage</strong>
            </td>
            <td>
                <strong>Zoom</strong>
            </td>
            <td>
                <strong>VideoSDK</strong>
            </td>
        </tr>
	</thead>
	<tbody>
        <tr>
			<td>Recording</td>
			<td>✔ Enabled using the dashboard and API</td>
			<td>✔ Enabled using the dashboard and API</td>
			<td>✔ Enabled using the dashboard and API</td>
			<td>✔ Enabled using the dashboard and API</td>
			<td>✔ Enabled using the dashboard and API</td>
		</tr>
        <tr>
			<td>External Streaming (RTMP Output)</td>
			<td>✔ Built using the Agora SDK</td>
			<td>✘ No direct support for RTMP</td>
			<td>✔ Built using the Vonage SDK</td>
			<td>✔ Built using the Zoom SDK</td>
			<td>✔ Built using the Video SDK</td>
		</tr>
		<tr>
			<td>Screen share</td>
			<td>✔ Built using the SDK.</td>
			<td>✔ Built using getDisplayMedia API</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
		</tr>
		<tr>
			<td>Remote mute</td>
			<td>✔ Built using the Agora RTM SDK.</td>
			<td>✔ Built using the Twilio Live SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
		</tr>
		<tr>
			<td>Active Speaker detection</td>
			<td>✔ Built using the SDK.</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
		</tr>
		<tr>
			<td>Noise reduction</td>
			<td>✔ Built using the third party integration</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Available, currently in Beta.</td>
		</tr>
	</tbody>
</table><!--kg-card-end: markdown--><h2 id="interactive-features-agora-vs-twilio-vs-vonage-vs-zoom-vs-videosdk">Interactive Features: Agora vs Twilio vs Vonage vs Zoom vs VideoSDK</h2><!--kg-card-begin: markdown--><table>
	<thead>
<tr>
            <td>
                &nbsp;
            </td>
            <td>
                <strong>Agora</strong>
            </td>
            <td>
                <strong>Twilio</strong>
            </td>
            <td>
                <strong>Vonage</strong>
            </td>
            <td>
                <strong>Zoom</strong>
            </td>
            <td>
                <strong>VideoSDK</strong>
            </td>
        </tr>
	</thead>
	<tbody>
		<tr>
			<td>Chat</td>
			<td>✔ Built using the RTM SDK</td>
			<td>✔ Built using Twilio Conversations API.</td>
			<td>✔ Built using the Separate SDK</td>
			<td>✔ Built using the Separate SDK</td>
			<td>✔ Built using Pub/Sub API</td>
		</tr>
			<tr>
				<td>Whiteboard</td>
				<td>✔ Built using the SDK</td>
				<td>✔ Built using the DataTrack API/SDK.</td>
				<td>✘ Unavailable</td>
				<td>✘ No explicit mention in Zoom docs</td>
				<td>✔ Built using Pub/Sub API</td>
			</tr>
			<tr>
				<td>Live polls, Q&A, Quizzes</td>
				<td>✔ Built using the Separate SDK</td>
				<td>✔ Built using the DataTrack API/SDK.</td>
				<td>✘ Unavailable </td>
				<td>✘ No explicit mention in Zoom docs</td>
				<td>✔ Built using Pub/Sub API</td>
			</tr>
		<tr>
			<td>Raise hand</td>
			<td>✔ Built using the agora RTM SDK</td>
			<td>✔ Built using the DataTrack API.</td>
			<td>✘ No explicit mention in Vonage docs</td>
			<td>✘ No explicit mention in Zoom docs</td>
			<td>✔ Built using Pub/Sub API</td>
		</tr>
		<tr>
			<td>Emoji</td>
			<td>✔ Built using the Agora RTM SDK</td>
			<td>✔ Built using the DataTrack API</td>
			<td>✘ No explicit mention in Vonage docs</td>
			<td>✘ Unavailable</td>
			<td>✔ Built using Pub/Sub API</td>
		</tr>
		<tr>
			<td>Notifications</td>
			<td>✔ Built using the Agora RTM SDK.</td>
			<td>✘ No explicit mention in Twilio Video docs</td>
			<td>✔ Built using the SDK </td>
			<td>✘ No explicit mention in Zoom docs</td>
			<td>✔ Built using Pub/Sub API</td>
		</tr>
		<tr>
			<td>Background</td>
			<td>✔ Virtual Background extension available</td>
			<td>✔ Built using the Twilio video processor SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using the SDK</td>
			<td>✔ Built using Custom Video Track</td>
		</tr>
	</tbody>
	</table><!--kg-card-end: markdown--><p>Resources:  <a href="https://docs.agora.io/en" rel="noopener noreferrer">Agora Docs</a>, <a href="https://www.twilio.com/docs" rel="noopener noreferrer">Twilio Docs</a>, <a href="https://marketplace.zoom.us/docs/sdk/video/introduction/" rel="noopener noreferrer">Zoom Video SDK Docs</a>, <a href="https://developer.vonage.com/documentation">Vonage Docs</a>, <a href="https://docs.videosdk.live/">Video SDK Docs</a></p><h2 id="pricing-agora-vs-twilio-vs-vonage-vs-zoom-vs-videosdk">Pricing: Agora vs Twilio vs Vonage vs Zoom vs VideoSDK</h2><!--kg-card-begin: markdown--><table>
	<thead>
		<td>
		</td>
		<td>
			<strong>Agora</strong>
		</td>
		<td>
			<strong>Twilio</strong>
		</td>
		<td>
			<strong>Vonage</strong>
		</td>
		<td>
			<strong>Zoom</strong>
		</td>
		<td>
			<strong>VideoSDK</strong>
		</td>
	</thead>
	
	<tr>
		<td>
			Pricing Model
		</td>
		<td>
			On the basis of usage
		</td>
		<td>
			On the basis of usage
		</td>
		<td>
			On the basis of usage + fixed cost
		</td>
		<td>
		On the basis of usage
		</td>
		<td>
		On the basis of usage
		</td>
	</tr>
	<tr>
		<td>Audio-Video Conferencing</td>
		<td>
			Price per 1,000 minutes depending on aggregate resolution:<br/>Audio: $0.99<br/>Video HD: $3.99<br/>Video Full HD: $8.99<br/>Video 2K: $15.99<br/>Video 2K+: $35.99
		</td>
		<td>Twilio P2P: $1.50 per 1,000 participant minutes<br/>Twilio Video Groups: $4.00 per 1,000 participant minutes</td>
		<td>Vonage Video Groups: $4.00 per 1,000 participant minutes + $9.99 per month</td>
		<td>Zoom Video Groups: $3.50 per 1,000 participant minutes<br/>$1,000/year with 30,000 free minutes included per month; priced at $0.003/min after free minutes</td>
		<td>Video SDK Video Groups, per 1,000 participant minutes:<br/>Audio: $0.60<br/>Video SD: $2.00<br/>Video HD: $3.00<br/>Video Full HD: $7.00</td></tr>
</table><!--kg-card-end: markdown--><!--kg-card-begin: markdown--><table>
	<thead>
		<tr>
			<td>
			</td>
			<td>
				<strong>Agora</strong>
			</td>
			<td>
				<strong>Twilio</strong>
			</td>
			<td>
				<strong>Vonage</strong>
			</td>
			<td>
				<strong>Zoom</strong>
			</td>
			<td>
				<strong>VideoSDK</strong>
			</td>
		</tr>
	</thead>
	
	<tr>
		<td>
			Streaming
		</td>
		<td>
			The <a href="https://docs.agora.io/en/Interactive%20Broadcast/product_live?platform=Android">price</a> of interactive live streaming is determined by the number of participant minutes for the host and audience, as well as the latency. <a href="https://docs.agora.io/en/Interactive%20Broadcast/product_live?platform=Android">Details can be found here</a>.
		</td>
		<td>
			Twilio Live video streaming:<br>Broadcaster: $0.004/min<br>Viewer: $0.0025/min<br>Encoding: $0.010/min
		</td>
		<td>
			The Vonage <a href="https://www.vonage.com/communications-apis/video/pricing/">price</a> of interactive live streaming is determined by the number of participant minutes for the host and audience, as well as the latency. <a href="https://video-api.support.vonage.com/hc/en-us/articles/360060601492-How-does-participant-based-video-pricing-work-">Details can be found here</a>.
		</td>
		<td>Zoom Video SDK: Streaming is allowed, but information on pricing is unavailable.
		</td>
		<td>
			10,000 free minutes per month.<br>Broadcaster: $0.002/min<br>Viewer: $0.0015/min<br>Encoding: $0.04/min
		</td>
	</tr>
  </table><!--kg-card-end: markdown--><!--kg-card-begin: markdown--><table>
	<thead>
		<tr>
			<td>
			</td>
			<td>
				<strong>Agora</strong>
			</td>
			<td>
				<strong>Twilio</strong>
			</td>
			<td>
				<strong>Vonage</strong>
			</td>
			<td>
				<strong>Zoom</strong>
			</td>
			<td>
				<strong>Video SDK</strong>
			</td>
		</tr>
	</thead>
	
<tr>
	<td>
		Recording
	</td>
	<td>
		Agora charges per 1,000 participant minutes:<br>Voice: $1.49<br>HD: $5.99<br>Full HD: $13.49<br>2K: $23.99<br>2K+: $53.99
		<br>
		Web Page Recording:<br>HD: $14.00<br>FHD: $28.00
	</td>
	<td>
		Twilio charges per recording participant minute: $0.0125<br>Per composed minute: $0.04
	</td>
	<td>
		Vonage charges per recording participant minute: $0.0125<br>Per composed minute:<br>SD: $1.49<br>HD: $0.035<br>Full HD: $0.045
	</td>
	<td>
		Zoom Video SDK: Recording is available with a Video SDK account and a Cloud Recording Storage Plan. Information on pricing is unavailable.
	</td>
	<td>
		Web Page Recording:<br>$0.015/min
	</td>
</tr>
 </table><!--kg-card-end: markdown--><h2 id="support-agora-vs-twilio-vs-vonage-vs-zoom-vs-videosdk">Support: Agora vs Twilio vs Vonage vs Zoom vs VideoSDK</h2><!--kg-card-begin: markdown--><table>
	<tbody>
		<tr>
			<td>
			</td>
			<td>
				<strong>Agora</strong>
			</td>
			<td>
				<strong>Twilio</strong>
			</td>
			<td>
				<strong>Vonage</strong>
			</td>
			<td>
				<strong>Zoom</strong>
			</td>
			<td>
				<strong>VideoSDK</strong>
			</td>
		</tr>
		<tr>
			<td>
			  Support plan monthly cost
			</td>
			<td>
		<strong>Standard Plan: </strong>$1200/month<br><strong>Premium Plan: </strong>$2900/month<br><strong>Enterprise Plan: </strong>$4900/month
			</td>
			<td>
				<strong>Production Plan: </strong>4% of monthly spend (or $250 minimum)
				<br>
				<strong>Business Plan: </strong>6% of monthly spend (or $1,500 minimum)
				<br>
				<strong>Personalized Plan: </strong>8% of monthly spend (or $5,000 minimum)
			</td>
			<td>
				<strong>Priority: </strong>$750/month
				<br>
				<strong>Premium: </strong>$1,500/month
				<br>
				<strong>Premier: </strong>$3,000/month
			</td>
			<td>
				<strong>Premier Developer Support:</strong>
				<br>
				<strong>Bronze: </strong>$675/month
				<br>
				<strong>Silver: </strong>$1,300/month
				<br>
				<strong>Gold: </strong>$1,900/month
			</td>
			<td>
				<strong>FREE for everyone</strong>
			</td>
		</tr>
	</tbody>
</table><!--kg-card-end: markdown--><!--kg-card-begin: markdown--><table>
	<tbody>
		<tr>
			<td>
			</td>
			<td>
				<strong>Agora</strong>
			</td>
			<td>
				<strong>Twilio</strong>
			</td>
			<td>
				<strong>Vonage</strong>
			</td>
			<td>
				<strong>Zoom</strong>
			</td>
			<td>
				<strong>VideoSDK</strong>
			</td>
		</tr>
		<tr>
	<td>
		Access to TAM / CS Engineer
	</td>
	<td>
		<strong>Premium plan:</strong>
		<br>- Access to CS engineer
		<br>- Code review
		<br>
		<strong>Enterprise plan:</strong>
		<br>- Everything in the Premium plan, plus access to an SA Engineer and live developer consultation and training.
	</td>
	<td>
		Only available in the Personalized plan. The plan also provides a support escalation line and quarterly status review.
	</td>
	<td>
		Available for Premier customers.
	</td>
	<td>
		Premier Developer Support provides developer enablement, onboarding, training, and architectural consultations.
	</td>
	<td>
		All paying customers have access to a dedicated CS manager and a solutions engineer.
	</td>
	</tr>
	</tbody>
</table><!--kg-card-end: markdown--><!--kg-card-begin: markdown--><table>
	<tbody>
		<tr>
			<td>
			</td>
			<td>
				<strong>Agora</strong>
			</td>
			<td>
				<strong>Twilio</strong>
			</td>
			<td>
				<strong>Vonage</strong>
			</td>
			<td>
				<strong>Zoom</strong>
			</td>
			<td>
				<strong>VideoSDK</strong>
			</td>
		</tr>
	<tr>
		<td>
		SLAs
		</td>
		<td>
			<strong>Standard Plan:</strong>
			<br>P1 – 4 business hours
			<br>P2 – 16 business hours
			<br>P3 – 24 business hours
			<br>
			<strong>Premium Plan:</strong>
			<br>P1 – 3 hours (24/7)
			<br>P2 – 8 business hours
			<br>P3 – 16 business hours
			<br>
			<strong>Enterprise Plan:</strong>
			<br>P1 – 2 hours (24/7)
			<br>P2 – 5 business hours
			<br>P3 – 9 business hours
		</td>
		<td>
			<strong>Production Plan:</strong>
			<br>P1 - 3 business hours
			<br>P2 - 6 business hours
			<br>P3 - 9 business hours
			<br>
			<strong>Business Plan:</strong>
			<br>P1 - 1 hour (24/7)
			<br>P2 - 2 business hours
			<br>P3 - 3 business hours
			<br>
			<strong>Personalized Plan:</strong>
			<br>P1 - 1 hour (24/7)
			<br>P2 - 2 business hours
			<br>P3 - 3 business hours
		</td>
		<td>
			<strong>Service Level Targets (SLT) for the initial response for Premium Plus:</strong>
			<br>S1: 30 minutes
			<br>S2: 2 hours
			<br>S3: 4 hours
		</td>
		<td>
			<strong>Premier Developer Support:</strong>
			<br>
			<strong>Developer:</strong>
			<br>P1 - N/A
			<br>P2 - N/A
			<br>P3 - N/A
			<br>
			<strong>Bronze:</strong>
			<br>P1 - 24 hours
			<br>P2 - 48 hours
			<br>P3 - 72 hours
			<br>
			<strong>Silver:</strong>
			<br>P1 - 6 hours
			<br>P2 - 12 hours
			<br>P3 - 24 hours
			<br>
			<strong>Gold:</strong>
			<br>P1 - 4 hours
			<br>P2 - 8 hours
			<br>P3 - 16 hours
		</td>
		<td>
			Critical - 8 hours
			<br>Major - 72 hours
			<br>Minor - Mutually aligned timeline
		</td>
	</tr>
	</tbody>
</table><!--kg-card-end: markdown--><!--kg-card-begin: markdown--><table>
	<tbody>
		<tr>
			<td>
			</td>
			<td>
				<strong>Agora</strong>
			</td>
			<td>
				<strong>Twilio</strong>
			</td>
			<td>
				<strong>Vonage</strong>
			</td>
			<td>
				<strong>Zoom</strong>
			</td>
			<td>
				<strong>Video SDK</strong>
			</td>
		</tr>
	<tr>
		<td>
			Channels
		</td>
		<td>
			Submit a ticket directly through the Agora Console.
			<br>All support plans offer ticket/email support. The Premium and Enterprise plans offer Emergency Phone Number Access.
		</td>
		<td>
			The Production, Business, and Personalized plans provide live support options: live chat on all three, and phone support on the latter two.
			<br>The Developer plan offers email support only.
		</td>
		<td>
			Premier users:
			<br>- Email support
			<br>- Chat support
			<br>- Phone support
		</td>
		<td>
			No specific communication channels are explicitly mentioned in the Premier Developer Support doc.
		</td>
		<td>
			Interact with support directly on Slack and Discord.
			<br>Paying customers can access dedicated Slack support.
		</td>
	</tr>
	</tbody>
</table><!--kg-card-end: markdown--><!--kg-card-begin: markdown--><table>
	<tbody>
		<tr>
			<td>
			</td>
			<td>
				<strong>Agora</strong>
			</td>
			<td>
				<strong>Twilio</strong>
			</td>
			<td>
				<strong>Vonage</strong>
			</td>
			<td>
				<strong>Zoom</strong>
			</td>
			<td>
				<strong>Video SDK</strong>
			</td>
		</tr>
	<tr>
		<td>
			Access to Testing
		</td>
		<td>
			Testing is not expressly supported.
		</td>
		<td>
			Testing is not expressly supported.
		</td>
		<td>
			Testing is not expressly supported.
		</td>
		<td>
			Testing is not expressly supported.
		</td>
		<td>
			All paid accounts get access to testing support from Video SDK.
		</td>
	</tr>
	</tbody>
</table><!--kg-card-end: markdown--><!--kg-card-begin: markdown--><table>
	<tbody>
		<tr>
			<td>
				&nbsp;
			</td>
			<td>
				<strong>Agora</strong>
			</td>
			<td>
				<strong>Twilio</strong>
			</td>
			<td>
				<strong>Vonage</strong>
			</td>
			<td>
				<strong>Zoom</strong>
			</td>
			<td>
				<strong>VideoSDK</strong>
			</td>
		</tr>
    <tr>
			<td>
				Community Support
			</td>
			<td>
				Agora’s Stack Overflow and Community Slack Channel
			</td>
			<td>
				The Twilio Forum has largely shifted to Stack Overflow as the Twilio Collective.
			</td>
			<td>
				Vonage's Slack Community
			</td>
			<td>
				Zoom Community
			</td>
			<td>
				VideoSDK has an active <a href="https://discord.gg/Gpmj6eCq5u">Discord Community</a>
			</td>
		</tr>
	</tbody>
</table><!--kg-card-end: markdown--><p>Resources: <a href="https://www.agora.io/en/pricing/">Agora Pricing</a>, <a href="https://www.twilio.com/video/pricing">Twilio Pricing</a>, <a href="https://zoom.us/buy/videosdk">Zoom Video SDK Pricing</a>, <a href="https://www.vonage.com/communications-apis/video/pricing/">Vonage Video API Pricing</a>, <a href="https://www.videosdk.live/pricing">Video SDK Pricing</a></p>]]></content:encoded></item><item><title><![CDATA[API Key vs API Token: Understanding the Differences]]></title><description><![CDATA[API Key and API Token are authentication methods in APIs. Keys are long-term identifiers, while tokens are short-lived and used for specific actions or sessions, enhancing security.]]></description><link>https://www.videosdk.live/blog/api-key-vs-api-token</link><guid isPermaLink="false">66767f5820fab018df10efdb</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 22 Jan 2025 11:12:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/api-key-vs-api-token.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/06/api-key-vs-api-token.jpg" alt="API Key vs API Token: Understanding the Differences"/><p>In the digital age, secure and efficient management of API interactions is crucial for the development and operation of modern software applications. API keys and tokens are foundational components in this landscape, enabling secure communication between software services. While they both serve to facilitate access control, their functions, implications, and best practices differ significantly. API keys are primarily used to identify the calling application, making them suitable for simpler authentication tasks. On the other hand, API tokens provide a robust mechanism for authenticating and authorizing individual users, offering a deeper level of security and control. This article explores the definitions, uses, and security measures associated with API keys and tokens, providing essential insights into choosing the right method for different scenarios.</p><h2 id="what-is-api-key">What is an API Key?</h2>
<p>An API key is a simple yet crucial component in the realm of software development, acting primarily as a unique identifier for application traffic. It does not authenticate individual users but rather serves to recognize the application or the project making a call to an API. This identifier enables external services and applications to interact securely by ensuring that only registered users with valid API keys can access specific functionalities.</p><h3 id="how-api-keys-work">How Do API Keys Work?</h3>
<p>The operation of an API key is straightforward. When an application makes a request to an API, it must include its API key, typically in the request header. The API server then validates this key against a list of authorized keys. If the key is valid, the server processes the request and returns the desired response. This system not only confirms the authenticity of the requestor but also helps manage and track each request's origin, which can be critical for analyzing usage patterns and enforcing security measures.</p><p>Moreover, API keys are often associated with specific access rights or permissions. For instance, one key might allow a developer to retrieve data, while another could enable the addition or deletion of data. This allows API providers to offer different levels of access to various users, which can be particularly useful for services that charge based on usage levels or that offer premium features.</p><h3 id="security-considerations">Security Considerations</h3>
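The validation step at the heart of this flow is also where security starts: comparing the supplied key in constant time prevents timing leaks. Below is a minimal server-side sketch; the `X-API-Key` header name and the in-memory key set are illustrative assumptions (real providers typically store hashed keys in a database).

```python
import hmac

# Hypothetical key store -- real services keep (hashed) keys in a database.
VALID_API_KEYS = {"key-abc123", "key-def456"}

def validate_api_key(headers):
    """Check the key sent in the X-API-Key request header against known keys.

    hmac.compare_digest runs in constant time, so the check does not leak
    information about the key through response-timing differences.
    """
    supplied = headers.get("X-API-Key", "")
    return any(hmac.compare_digest(supplied, key) for key in VALID_API_KEYS)

print(validate_api_key({"X-API-Key": "key-abc123"}))  # True
print(validate_api_key({"X-API-Key": "wrong-key"}))   # False
```

A lookup like this is also the natural place to attach per-key permissions or rate-limit counters, since every request passes through it.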
<p>While API keys are effective for managing access, they are not without security risks. If an API key is exposed, it can be used by unauthorized individuals to access the API, potentially leading to data breaches or service disruptions. Thus, securing API keys is paramount. Best practices for API key security include transmitting keys over HTTPS to prevent interception by third parties, rotating keys regularly to limit the duration of any exposure, and never embedding keys directly in code, where they can be easily extracted by malicious actors.</p><p>API providers must also implement rate limiting to prevent abuse. Rate limiting restricts the number of requests a given API key can make in a certain period, thus protecting the API from overuse and helping to maintain service availability for all users. To further strengthen API security in blockchain-based applications, businesses often <a href="https://www.tecla.io/network/hire-solidity-developers" rel="noreferrer">hire Solidity developers</a> who can implement secure smart contracts and manage authentication processes that protect data integrity and prevent unauthorized access.</p><h2 id="what-is-an-api-token">What is an API Token?</h2>
<p>API tokens are sophisticated credentials that do much more than just identify the source of an API request—they also authenticate and authorize it. Unlike API keys, which are static and somewhat straightforward, tokens such as JSON Web Tokens (JWTs) are rich in structure and information. A typical token contains encoded data that includes details about the user, the token's expiration time, and the permissions granted to the user. This makes tokens particularly suited for scenarios where security and fine-grained access control are paramount.</p><h3 id="security-and-scope">Security and Scope</h3>
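The encoded structure described above can be inspected directly: a JWT is three base64url segments joined by dots (header.payload.signature). This sketch decodes the payload with only the standard library; the claim names are illustrative, and signature verification is deliberately omitted here — production code must verify the signature (e.g. with a JWT library) before trusting any claim.

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the middle (payload) segment of a JWT without verifying it.

    JWT segments strip base64 padding, so it is restored before decoding.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample payload the same way an issuing server would encode it.
claims = {"sub": "user-42", "scope": "read:data", "exp": 1735689600}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"header.{payload}.signature"

print(decode_jwt_payload(token)["scope"])  # read:data
```

The decoded claims are exactly what the server consults when deciding whether the token is still valid and what the bearer may do.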
<p>The dynamic nature of API tokens allows for more secure interactions. Tokens are generated upon successful user authentication and have a limited lifespan, which reduces the risk of misuse if they are intercepted or exposed. Once a token expires, it must be renewed, often requiring user re-authentication, which adds an additional layer of security.</p><p>The scope of permissions associated with a token can be precisely controlled. For example, a token may grant a user read-only access to certain data while restricting access to other sensitive information. This is in stark contrast to API keys, which generally provide a fixed set of permissions and do not adapt to different user roles or contexts.</p><h3 id="use-cases">Use Cases</h3>
<p>Tokens excel in environments where user-specific authentication is crucial. They are extensively used in web and mobile applications where each user's identity and permissions need to be verified and maintained across sessions. Here are a few typical scenarios where tokens are preferred:</p><h4 id="user-authentication">User Authentication</h4>
<p>In web applications, tokens authenticate users and maintain their sessions, providing a continuous, secure user experience without requiring re-authentication for each request.</p><h4 id="fine-grained-access-control">Fine-Grained Access Control</h4>
<p>For applications that handle sensitive or personal data, tokens can enforce detailed access controls based on the user’s role within the organization or specific permissions granted to them.</p><h4 id="stateful-operations">Stateful Operations</h4>
<p>Applications that need to remember user information across multiple sessions use tokens to manage the state without compromising security. This capability is vital for personalized user experiences in complex applications.</p><h2 id="comparing-api-keys-and-tokens">Comparing API Keys and Tokens</h2>
<h3 id="key-differences">Key Differences</h3>
<p>While API keys and tokens may seem similar at a glance, they serve distinctly different purposes and offer different levels of security and functionality. The primary distinction lies in their scope and security implications:</p><h3 id="scope-of-use">Scope of Use</h3>
<p>API keys are essentially used for identifying the application making the API request, not the user. They are suited for scenarios where the identity of the application is sufficient to grant access to the API. Conversely, API tokens are designed to authenticate and authorize individual users, providing a more granular level of access control. This includes detailed information about user permissions and roles, which is critical in personalized user services.</p><h3 id="security">Security</h3>
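The expiry mechanism that separates tokens from static keys reduces to a timestamp comparison. This sketch assumes a JWT-style `exp` claim in Unix seconds and, as a fail-closed design choice of this example, treats a missing claim as expired:

```python
import time

def token_is_expired(claims, now=None):
    """Reject a token once its 'exp' (Unix-seconds) moment has passed.

    A token without an expiry claim is treated as expired (fail closed),
    so a malformed token can never grant indefinite access.
    """
    if "exp" not in claims:
        return True
    current = now if now is not None else time.time()
    return current >= claims["exp"]

print(token_is_expired({"exp": 1}, now=2))       # True  (past expiry)
print(token_is_expired({"exp": 10**12}, now=2))  # False (far in the future)
```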
<p>API keys are static, which means they do not change unless manually updated or rotated. This can pose a security risk if the key is exposed, as it remains valid until revoked. Tokens, on the other hand, are dynamic and often expire after a short duration, requiring re-authentication to renew, thus providing a more secure framework. Tokens also include additional security layers, such as encoding and potentially encrypting the contents, which are not typical features of API keys.</p><h4 id="implementation-complexity">Implementation Complexity</h4>
<p>API keys are easier to implement and manage as they require less infrastructure and management overhead compared to tokens. Tokens, with their complex structures and requirements for handling expiration and renewal, demand a more sophisticated system to manage.</p><h2 id="choosing-the-right-one">Choosing the Right One</h2>
<p>Deciding whether to use an API key or a token depends largely on the specific requirements of your application and the level of security you need:</p><h3 id="use-api-keys-when">Use API Keys When</h3>
<ul><li>You need a simple identifier for accessing APIs that do not contain sensitive or personal information.</li><li>Your application interacts with less critical external services where complex authentication is unnecessary.</li><li>You require a method for quick and simple rate limiting or access monitoring without detailed user-level controls.</li></ul><h3 id="use-tokens-when">Use Tokens When</h3>
<ul><li>User authentication is crucial, and you need to securely manage individual user sessions across requests.</li><li>Your application demands detailed access control based on user roles or permissions, especially when dealing with sensitive or personal information.</li><li>You require a system that can dynamically adjust permissions and access rights based on real-time analysis of user actions or attributes.</li></ul><h2 id="conclusion">Conclusion</h2>
<p>Understanding the differences between API keys and tokens is fundamental for developers and IT security professionals tasked with safeguarding digital interactions. While API keys offer a straightforward solution for identifying applications, they fall short in environments where user-specific authentication and fine-grained access controls are necessary. API tokens, with their dynamic and detailed approach to user permissions, are better suited for these more complex scenarios. By employing the appropriate authentication method—be it API keys for basic access control or tokens for advanced user-specific permissions—organizations can enhance the security and functionality of their digital platforms. As technology evolves and security demands intensify, the strategic implementation of these tools will continue to play a critical role in the development of secure and efficient software ecosystems.</p><h2 id="faqs-on-api-keys-and-api-tokens">FAQs on API Keys and API Tokens</h2>
<h3 id="1-what-is-an-api-key-and-how-is-it-used">1. What is an API key and how is it used?</h3>
<p>An API key is a unique identifier used to authenticate a calling application or project when making requests to an API. It helps manage access and track how the API is being used, but does not provide information about the user. API keys are typically used for simpler authentication tasks where user-specific security is not critical.</p><h3 id="2-how-do-api-tokens-differ-from-api-keys">2. How do API tokens differ from API keys?</h3>
<p>Unlike API keys, which only identify the application making the request, API tokens authenticate and authorize individual users, providing more detailed access control. API tokens contain encoded data about the user, their permissions, and the token's validity, making them suitable for scenarios that require stringent security measures and fine-grained access control.</p><h3 id="3-what-are-the-best-practices-for-securing-api-keys">3. What are the best practices for securing API keys?</h3>
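One common precaution for the question above, keeping the key out of source code, can be illustrated with an environment variable; the variable name here is a made-up example, and in practice it would be set in the shell or CI environment rather than in the program itself:

```python
import os

# Set here only so the example is self-contained; normally configured
# in the shell, a secrets manager, or the CI environment.
os.environ["MY_SERVICE_API_KEY"] = "key-abc123"

def load_api_key():
    """Read the API key from the environment instead of hard-coding it.

    Failing fast when the variable is missing surfaces misconfiguration
    immediately instead of sending unauthenticated requests later.
    """
    key = os.environ.get("MY_SERVICE_API_KEY")
    if not key:
        raise RuntimeError("MY_SERVICE_API_KEY is not set")
    return key

print(load_api_key())  # key-abc123
```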
<p>To secure API keys, it is recommended to transmit them only over HTTPS, rotate them regularly, and avoid hard-coding them in the application's source code. Additionally, implementing rate limiting and monitoring usage can help prevent abuse and detect potential security breaches.</p><h3 id="4-why-might-an-organization-choose-to-use-api-tokens-over-api-keys">4. Why might an organization choose to use API tokens over API keys?</h3>
<p>Organizations might prefer API tokens when they need robust user authentication, dynamic permissions management, and secure, personalized user interactions. Tokens are ideal in environments where access needs to be tightly controlled based on user roles or specific actions within an application.</p><h3 id="5-can-api-tokens-be-used-for-stateful-operations">5. Can API tokens be used for stateful operations?</h3>
<p>Yes, API tokens are well-suited for stateful operations as they can maintain user information across sessions. This is beneficial for applications that require a seamless user experience, where the user's identity and permissions need to be persistently managed without repeatedly asking for authentication.</p><h3 id="6-what-are-common-use-cases-for-api-keys">6. What are common use cases for API keys?</h3>
<p>Common use cases for API keys include third-party service integrations, simple authentication processes, and situations where detailed user-specific permissions are not necessary. They are often used in server-to-server communications, accessing public data, and rate limiting.</p><h3 id="7-are-api-tokens-more-secure-than-api-keys">7. Are API tokens more secure than API keys?</h3>
<p>Generally, API tokens are considered more secure than API keys because they offer more comprehensive security features, such as expiration, encoding, and potentially encryption. They also provide more granular control over what actions can be performed by the authenticated user, reducing the risk of unauthorized access.</p><h3 id="8-how-can-developers-manage-and-monitor-the-usage-of-api-keys-and-tokens-effectively">8. How can developers manage and monitor the usage of API keys and tokens effectively?</h3>
<p>Developers can manage and monitor API keys and tokens by using dedicated API management tools that provide features like key and token generation, expiration settings, activity logs, and security alerts. These tools help ensure that keys and tokens are used safely and according to organizational policies.</p><h3 id="9-what-should-an-organization-do-if-an-api-key-or-token-is-exposed">9. What should an organization do if an API key or token is exposed?</h3>
<p>If an API key or token is exposed, the organization should immediately revoke the compromised key or token, audit all recent API activity to check for unauthorized access, and implement stricter security measures to prevent future exposures.</p>]]></content:encoded></item><item><title><![CDATA[How to Build Live Streaming Video Call App in Java?]]></title><description><![CDATA[Explore the step-by-step tutorial, You will learn how to build an Android Live Streaming app with VideoSDK using Java.]]></description><link>https://www.videosdk.live/blog/android-java-interactive-live-streaming</link><guid isPermaLink="false">649434178ecddeab7f1791db</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 22 Jan 2025 06:17:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/HTTP-Live-Streaming-Java-2.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/04/HTTP-Live-Streaming-Java-2.png" alt="How to Build Live Streaming Video Call App in Java?"/><p>The <a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HTTP Live Streaming</a> (HLS) feature has gained huge popularity in recent times. It offers unique opportunities for content creators, businesses, and developers to connect with their audience in real-time.</p><p>The <a href="https://www.videosdk.live/">VideoSDK </a>is a powerful tool that allows you to include real-time live streaming capabilities into your video call applications. With HTTP live streaming, you can engage your users in dynamic and immersive experiences, such as live events, gaming broadcasts, virtual classrooms, and more.</p><p>By the end of this tutorial, you will have acquired valuable skills in integrating live video streaming into your Android app, that creates engaging and immersive experiences for your users. 
So, let's embark on this journey and unlock the potential of live video streaming in Android with VideoSDK!</p><h2 id="integrating-hls-with-videosdk-on-android">Integrating HLS with VideoSDK on Android</h2><p>Follow the steps to create the necessary environment to add HTTP live streaming in your Android app. You can find the code sample for Quickstart <a href="https://github.com/videosdk-live/quickstart/tree/main/android-hls">here</a>.</p><h3 id="prerequisites">Prerequisites</h3><p>First of all, your development environment should meet the following requirements:</p><ul><li><a href="https://www.oracle.com/in/java/technologies/downloads/">Java Development Kit.</a></li><li><a href="https://developer.android.com/studio?gclid=Cj0KCQjw4s-kBhDqARIsAN-ipH1LHN3BHD8E8boP9bhHZMQKve-7beXh5xlozj9hspl3Oc1FmGduOd8aAlm7EALw_wcB&amp;gclsrc=aw.ds">Android Studio</a> 3.0 or later.</li><li>Android SDK API Level 21 or higher.</li><li>A mobile device that runs Android 5.0 or later.</li></ul><blockquote>You should use a VideoSDK account to generate tokens. Visit VideoSDK <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">dashboard</a> to generate a token.</blockquote><h3 id="create-a-new-android-project">Create a new Android Project</h3><p>For a new project in Android Studio, create a Phone and Tablet Android project with an Empty activity.</p><p>Video SDK Android Quick Start New Project</p><blockquote><strong>CAUTION</strong>: After creating the project, Android Studio automatically starts gradle sync. Ensure that the sync succeeds before you continue.</blockquote><p>Add the repositories to the project's <code>settings.gradle</code> file.</p><pre><code class="language-groovy">dependencyResolutionManagement{
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}
</code></pre><p>Add the following dependency to your app's <code>app/build.gradle</code>.</p><pre><code class="language-groovy">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.26'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // Other dependencies specific to your app
}
</code></pre><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">ℹ️</div><div class="kg-callout-text">Android SDK compatible with <code>armeabi-v7a</code>, <code>arm64-v8a</code>, <code>x86_64</code> architectures. If you want to run the application in an emulator, choose ABI <code>x86_64</code> when creating a device.</div></div><h3 id="add-permissions-to-your-project">Add permissions to your project</h3><p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-java">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;</code></pre><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">⚠️</div><div class="kg-callout-text">If your project has set <code>android.useAndroidX = true</code>, then set <code>android.enableJetifier = true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflicts.</div></div><h3 id="structure-of-the-project">Structure of the project</h3><p>Your project structure should look like this:</p><!--kg-card-begin: markdown--><pre><code class="language-js">app
   ├── java
   │    ├── packagename
   │         ├── JoinActivity
   │         ├── MeetingActivity
   │         ├── SpeakerAdapter
   │         ├── SpeakerFragment
   |         ├── ViewerFragment
   ├── res
   │    ├── layout
   │    │    ├── activity_join.xml
   │    │    ├── activity_meeting.xml
   |    |    ├── fragment_speaker.xml
   |    |    ├── fragment_viewer.xml
   │    │    ├── item_remote_peer.xml
</code></pre>
<!--kg-card-end: markdown--><blockquote><strong>NOTE</strong>: You have to set <code>JoinActivity</code> as the Launcher activity.</blockquote><h3 id="app-architecture">App Architecture</h3><!--kg-card-begin: html--><center>

<img src="https://cdn.videosdk.live/website-resources/docs-resources/android_ils_quickstart_app_structure.png" alt="How to Build Live Streaming Video Call App in Java?"/>

</center><!--kg-card-end: html--><h2 id="build-an-android-live-video-streaming-app">Build an Android Live Video Streaming App</h2><h3 id="step-1-creating-joining-screen">Step 1: Creating Joining Screen</h3><p>Create a new Activity named <code>JoinActivity</code>.</p><!--kg-card-begin: markdown--><h4 id="a-creating-ui">(a) Creating UI</h4>
<!--kg-card-end: markdown--><p>The joining screen includes:</p><ul><li><strong>Create Button</strong>: Creates a new meeting.</li><li><strong>TextField for Meeting ID</strong>: Contains the meeting ID you want to join.</li><li><strong>Join as Host Button</strong>: Joins the meeting as the host with the provided <code>meetingId</code>.</li><li><strong>Join as Viewer Button</strong>: Joins the meeting as a viewer with the provided <code>meetingId</code>.</li></ul><p>In the <code>/app/res/layout/activity_join.xml</code> file, replace the content with the following:</p><!--kg-card-begin: markdown--><pre><code class="language-js">&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;

&lt;LinearLayout xmlns:android=&quot;http://schemas.android.com/apk/res/android&quot;
    android:id=&quot;@+id/createorjoinlayout&quot;
    android:layout_width=&quot;match_parent&quot;
    android:layout_height=&quot;match_parent&quot;
    android:background=&quot;@color/black&quot;
    android:gravity=&quot;center&quot;
    android:orientation=&quot;vertical&quot;&gt;

    &lt;Button
        android:id=&quot;@+id/btnCreateMeeting&quot;
        android:layout_width=&quot;wrap_content&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:text=&quot;Create Meeting&quot;
        android:textAllCaps=&quot;false&quot; /&gt;

    &lt;TextView
        android:id=&quot;@+id/tvText&quot;
        android:layout_width=&quot;wrap_content&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:paddingVertical=&quot;5sp&quot;
        android:text=&quot;OR&quot;
        android:textColor=&quot;@color/white&quot;
        android:textSize=&quot;20sp&quot; /&gt;

    &lt;EditText
        android:id=&quot;@+id/etMeetingId&quot;
        android:theme=&quot;@android:style/Theme.Holo&quot;
        android:layout_width=&quot;250dp&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:hint=&quot;Enter Meeting Id&quot;
        android:textColor=&quot;@color/white&quot;
        android:textColorHint=&quot;@color/white&quot; /&gt;

    &lt;Button
        android:id=&quot;@+id/btnJoinHostMeeting&quot;
        android:layout_width=&quot;wrap_content&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:layout_marginTop=&quot;8sp&quot;
        android:text=&quot;Join as Host&quot;
        android:textAllCaps=&quot;false&quot; /&gt;

    &lt;Button
        android:id=&quot;@+id/btnJoinViewerMeeting&quot;
        android:layout_width=&quot;wrap_content&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:text=&quot;Join as Viewer&quot;
        android:textAllCaps=&quot;false&quot; /&gt;

&lt;/LinearLayout&gt;
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h4 id="b-integration-of-create-meeting-api">(b) Integration of Create Meeting API</h4>
<!--kg-card-end: markdown--><p>Create a field <code>sampleToken</code> in <code>JoinActivity</code> to hold the token generated from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK dashboard</a>. This token will be used both in the VideoSDK config and to generate the <code>meetingId</code>.</p><!--kg-card-begin: markdown--><pre><code class="language-js">public class JoinActivity extends AppCompatActivity {

  //Replace with the token you generated from the VideoSDK Dashboard
  private String sampleToken =&quot;&quot;;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    //...
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>On the <strong>Join as Host</strong> button's <code>onClick</code> event, navigate to <code>MeetingActivity</code> with the token, the <code>meetingId</code>, and the mode <code>CONFERENCE</code>.</p><p>On the <strong>Join as Viewer</strong> button's <code>onClick</code> event, navigate to <code>MeetingActivity</code> with the token, the <code>meetingId</code>, and the mode <code>VIEWER</code>.</p><!--kg-card-begin: markdown--><pre><code class="language-js">public class JoinActivity extends AppCompatActivity {

  //Replace with the token you generated from the VideoSDK Dashboard
  private String sampleToken =&quot;&quot;;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_join);

    final Button btnCreate = findViewById(R.id.btnCreateMeeting);
    final Button btnJoinHost = findViewById(R.id.btnJoinHostMeeting);
    final Button btnJoinViewer = findViewById(R.id.btnJoinViewerMeeting);
    final EditText etMeetingId = findViewById(R.id.etMeetingId);

    // create meeting and join as Host
    btnCreate.setOnClickListener(v -&gt; createMeeting(sampleToken));

    // Join as Host
    btnJoinHost.setOnClickListener(v -&gt; {
        Intent intent = new Intent(JoinActivity.this, MeetingActivity.class);
        intent.putExtra(&quot;token&quot;, sampleToken);
        intent.putExtra(&quot;meetingId&quot;, etMeetingId.getText().toString().trim());
        intent.putExtra(&quot;mode&quot;, &quot;CONFERENCE&quot;);
        startActivity(intent);
    });

    // Join as Viewer
    btnJoinViewer.setOnClickListener(v -&gt; {
        Intent intent = new Intent(JoinActivity.this, MeetingActivity.class);
        intent.putExtra(&quot;token&quot;, sampleToken);
        intent.putExtra(&quot;meetingId&quot;, etMeetingId.getText().toString().trim());
        intent.putExtra(&quot;mode&quot;, &quot;VIEWER&quot;);
        startActivity(intent);
    });
  }

  private void createMeeting(String token) {
    // we will explore this method in the next step
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>For the <strong>Create</strong> button, the <code>createMeeting</code> method generates a <code>meetingId</code> by calling the VideoSDK API and then navigates to <code>MeetingActivity</code> with the token, the generated <code>meetingId</code>, and the mode <code>CONFERENCE</code>.</p><!--kg-card-begin: markdown--><pre><code class="language-js">public class JoinActivity extends AppCompatActivity {
  //...onCreate

  private void createMeeting(String token) {
    // we will make an API call to VideoSDK Server to get a roomId
    AndroidNetworking.post(&quot;https://api.videosdk.live/v2/rooms&quot;)
          .addHeaders(&quot;Authorization&quot;, token) //we will pass the token in the Headers
          .build()
          .getAsJSONObject(new JSONObjectRequestListener() {
              @Override
              public void onResponse(JSONObject response) {
                try {
                  // response will contain `roomId`
                  final String meetingId = response.getString(&quot;roomId&quot;);

                  // starting the MeetingActivity with received roomId and our sampleToken
                  Intent intent = new Intent(JoinActivity.this, MeetingActivity.class);
                  intent.putExtra(&quot;token&quot;, sampleToken);
                  intent.putExtra(&quot;meetingId&quot;, meetingId);
                  intent.putExtra(&quot;mode&quot;, &quot;CONFERENCE&quot;);
                  startActivity(intent);
                } catch (JSONException e) {
                    e.printStackTrace();
                }
              }

              @Override
              public void onError(ANError anError) {
                anError.printStackTrace();
                Toast.makeText(JoinActivity.this, anError.getMessage(), Toast.LENGTH_SHORT).show();
              }
          });
  }
}
</code></pre>
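The response handling above depends only on the `roomId` field. As an illustration of the response shape (the tutorial itself parses it with the `JSONObject` delivered by Fast Android Networking), a minimal plain-Java extraction might look like this; the class and method names here are hypothetical:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RoomIdExtractor {
    // Illustration only: the POST /v2/rooms response body contains {"roomId":"..."}.
    // The tutorial code reads it via response.getString("roomId"); this regex-based
    // helper just makes the expected shape explicit without a JSON library.
    static String extractRoomId(String body) {
        Matcher m = Pattern.compile("\"roomId\"\\s*:\\s*\"([^\"]+)\"").matcher(body);
        return m.find() ? m.group(1) : null;
    }
}
```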
<!--kg-card-end: markdown--><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">⚠️</div><div class="kg-callout-text">Don't be confused by the Room and Meeting keywords; they refer to the same thing.</div></div><p>Because the app is built entirely around audio and video communication, it needs the <code>RECORD_AUDIO</code> and <code>CAMERA</code> runtime permissions. We will implement the permission logic in <code>JoinActivity</code>.</p><!--kg-card-begin: markdown--><pre><code class="language-js">public class JoinActivity extends AppCompatActivity {
  private static final int PERMISSION_REQ_ID = 22;

  private static final String[] REQUESTED_PERMISSIONS = {
    Manifest.permission.RECORD_AUDIO,
    Manifest.permission.CAMERA
  };

  private void checkSelfPermission(String permission, int requestCode) {
    if (ContextCompat.checkSelfPermission(this, permission) !=
            PackageManager.PERMISSION_GRANTED) {
      ActivityCompat.requestPermissions(this, REQUESTED_PERMISSIONS, requestCode);
    }
  }

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    //... button listeners
   checkSelfPermission(REQUESTED_PERMISSIONS[0], PERMISSION_REQ_ID);
   checkSelfPermission(REQUESTED_PERMISSIONS[1], PERMISSION_REQ_ID);
  }
}
</code></pre>
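The snippet above only requests the permissions; a matching `onRequestPermissionsResult` override (not shown in the tutorial) would typically verify every grant. The grant check itself is plain logic — a sketch, assuming `PERMISSION_GRANTED` (which the Android SDK defines as 0):

```java
public class PermissionCheck {
    private static final int PERMISSION_GRANTED = 0; // PackageManager.PERMISSION_GRANTED

    // Returns true only when the request produced results and every one was granted.
    static boolean allGranted(int[] grantResults) {
        if (grantResults.length == 0) return false; // request was interrupted/cancelled
        for (int result : grantResults) {
            if (result != PERMISSION_GRANTED) return false;
        }
        return true;
    }
}
```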
<!--kg-card-end: markdown--><h3 id="step-2-creating-meeting-screen">Step 2: Creating Meeting Screen</h3><p>Create a new Activity named <code>MeetingActivity</code>. In <code>/app/res/layout/activity_meeting.xml</code> file, replace the content with the following.</p><!--kg-card-begin: markdown--><pre><code class="language-js">&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;RelativeLayout xmlns:android=&quot;http://schemas.android.com/apk/res/android&quot;
    xmlns:tools=&quot;http://schemas.android.com/tools&quot;
    android:id=&quot;@+id/mainLayout&quot;
    android:layout_width=&quot;match_parent&quot;
    android:layout_height=&quot;match_parent&quot;
    android:background=&quot;@color/black&quot;
    tools:context=&quot;.MeetingActivity&quot;&gt;

    &lt;TextView
        android:layout_width=&quot;match_parent&quot;
        android:layout_height=&quot;match_parent&quot;
        android:gravity=&quot;center&quot;
        android:text=&quot;Creating a meeting for you&quot;
        android:textColor=&quot;@color/white&quot;
        android:textFontWeight=&quot;700&quot;
        android:textSize=&quot;20sp&quot; /&gt;

&lt;/RelativeLayout&gt;
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h4 id="b-initializing-the-meeting">(b) Initializing the Meeting</h4>
<!--kg-card-end: markdown--><p>After receiving the token, <code>meetingId</code>, and mode from <code>JoinActivity</code>:</p><ol><li>Initialize <strong>VideoSDK</strong>.</li><li>Configure <strong>VideoSDK</strong> with the token.</li><li>Initialize the meeting with the required params, such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code>, <code>mode</code>, and more.</li><li>Join the room with the <code>meeting.join()</code> method.</li><li>Add a <code>MeetingEventListener</code> to listen for the <strong>Meeting Join</strong> event.</li><li>Check the mode of the <code>localParticipant</code>: if the mode is <strong>CONFERENCE</strong>, replace <code>mainLayout</code> with <code>SpeakerFragment</code>; otherwise, replace it with <code>ViewerFragment</code>.</li></ol><!--kg-card-begin: markdown--><pre><code class="language-js">public class MeetingActivity extends AppCompatActivity {
  private Meeting meeting;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_meeting);

    final String meetingId = getIntent().getStringExtra(&quot;meetingId&quot;);
    String token = getIntent().getStringExtra(&quot;token&quot;);
    String mode = getIntent().getStringExtra(&quot;mode&quot;);
    String localParticipantName = &quot;John Doe&quot;;
    boolean streamEnable = mode.equals(&quot;CONFERENCE&quot;);

    // initialize VideoSDK
    VideoSDK.initialize(getApplicationContext());

    // Configuration VideoSDK with Token
    VideoSDK.config(token);

    // Initialize VideoSDK Meeting
    meeting = VideoSDK.initMeeting(
            MeetingActivity.this, meetingId, localParticipantName,
            streamEnable, streamEnable, null, null, false, null, null);

    // join Meeting
    meeting.join();

    // if mode is CONFERENCE then replace mainLayout with SpeakerFragment, otherwise with ViewerFragment
    meeting.addEventListener(new MeetingEventListener() {
        @Override
        public void onMeetingJoined() {
          if (meeting != null) {
            if (mode.equals(&quot;CONFERENCE&quot;)) {
              // pin the local participant
              meeting.getLocalParticipant().pin(&quot;SHARE_AND_CAM&quot;);
              getSupportFragmentManager()
                  .beginTransaction()
                  .replace(R.id.mainLayout, new SpeakerFragment(), &quot;MainFragment&quot;)
                  .commit();
            } else if (mode.equals(&quot;VIEWER&quot;)) {
              getSupportFragmentManager()
                  .beginTransaction()
                  .replace(R.id.mainLayout, new ViewerFragment(), &quot;viewerFragment&quot;)
                  .commit();
            }
          }
        }
      });
  }
  public Meeting getMeeting() {
      return meeting;
  }
}
</code></pre>
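The routing rule inside `onMeetingJoined` is simple enough to state as a pure function — a sketch (the class and method names here are illustrative, not part of the SDK):

```java
public class FragmentRouter {
    // CONFERENCE participants (hosts/speakers) get the speaker UI; everyone else,
    // i.e. VIEWER mode, gets the HLS player UI.
    static String fragmentFor(String mode) {
        return "CONFERENCE".equals(mode) ? "SpeakerFragment" : "ViewerFragment";
    }
}
```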
<!--kg-card-end: markdown--><h3 id="step-3-implement-speakerview%E2%80%8B">Step 3: Implement <code>SpeakerView​</code></h3><p>After successfully entering the meeting, render the speaker's view and manage controls such as toggling the webcam/mic, starting/stopping HLS, and leaving the meeting.</p><p>Create a new fragment named <code>SpeakerFragment</code>. In the <code>/app/res/layout/fragment_speaker.xml</code> file, replace the content with the following.</p><!--kg-card-begin: markdown--><pre><code class="language-js">&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;LinearLayout xmlns:android=&quot;http://schemas.android.com/apk/res/android&quot;
    xmlns:tools=&quot;http://schemas.android.com/tools&quot;
    android:layout_width=&quot;match_parent&quot;
    android:layout_height=&quot;match_parent&quot;
    android:background=&quot;@color/black&quot;
    android:gravity=&quot;center&quot;
    android:orientation=&quot;vertical&quot;
    tools:context=&quot;.SpeakerFragment&quot;&gt;

    &lt;LinearLayout
        android:layout_width=&quot;match_parent&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:layout_marginVertical=&quot;8sp&quot;
        android:paddingHorizontal=&quot;10sp&quot;&gt;

        &lt;TextView
            android:id=&quot;@+id/tvMeetingId&quot;
            android:layout_width=&quot;0dp&quot;
            android:layout_height=&quot;wrap_content&quot;
            android:text=&quot;Meeting Id : &quot;
            android:textColor=&quot;@color/white&quot;
            android:textSize=&quot;18sp&quot;
            android:layout_weight=&quot;3&quot;/&gt;

        &lt;Button
            android:id=&quot;@+id/btnLeave&quot;
            android:layout_width=&quot;0dp&quot;
            android:layout_height=&quot;wrap_content&quot;
            android:text=&quot;Leave&quot;
            android:textAllCaps=&quot;false&quot;
            android:layout_weight=&quot;1&quot;/&gt;

    &lt;/LinearLayout&gt;

    &lt;TextView
        android:id=&quot;@+id/tvHlsState&quot;
        android:layout_width=&quot;wrap_content&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:text=&quot;Current HLS State : NOT_STARTED&quot;
        android:textColor=&quot;@color/white&quot;
        android:textSize=&quot;18sp&quot; /&gt;

    &lt;androidx.recyclerview.widget.RecyclerView
        android:id=&quot;@+id/rvParticipants&quot;
        android:layout_width=&quot;match_parent&quot;
        android:layout_height=&quot;0dp&quot;
        android:layout_marginVertical=&quot;10sp&quot;
        android:layout_weight=&quot;1&quot; /&gt;

    &lt;LinearLayout
        android:layout_width=&quot;match_parent&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:gravity=&quot;center&quot;&gt;

        &lt;Button
            android:id=&quot;@+id/btnHLS&quot;
            android:layout_width=&quot;wrap_content&quot;
            android:layout_height=&quot;wrap_content&quot;
            android:text=&quot;Start HLS&quot;
            android:textAllCaps=&quot;false&quot; /&gt;

        &lt;Button
            android:id=&quot;@+id/btnWebcam&quot;
            android:layout_width=&quot;wrap_content&quot;
            android:layout_height=&quot;wrap_content&quot;
            android:layout_marginHorizontal=&quot;5sp&quot;
            android:text=&quot;Toggle Webcam&quot;
            android:textAllCaps=&quot;false&quot; /&gt;

        &lt;Button
            android:id=&quot;@+id/btnMic&quot;
            android:layout_width=&quot;wrap_content&quot;
            android:layout_height=&quot;wrap_content&quot;
            android:text=&quot;Toggle Mic&quot;
            android:textAllCaps=&quot;false&quot; /&gt;

    &lt;/LinearLayout&gt;

&lt;/LinearLayout&gt;
</code></pre>
<!--kg-card-end: markdown--><p>Now, let's set the listener for buttons allowing the participant to toggle media.</p><!--kg-card-begin: markdown--><pre><code class="language-js">public class SpeakerFragment extends Fragment {

  private static Activity mActivity;
  private static Context mContext;
  private static Meeting meeting;
  private boolean micEnabled = true;
  private boolean webcamEnabled = true;
  private boolean hlsEnabled = false;
  private Button btnMic, btnWebcam, btnHls, btnLeave;
  private TextView tvMeetingId, tvHlsState;

  public SpeakerFragment() {
    // Required empty public constructor
  }

  @Override
  public void onAttach(@NonNull Context context) {
    super.onAttach(context);
    mContext = context;
    if (context instanceof Activity) {
      mActivity = (Activity) context;
      // getting meeting object from Meeting Activity
      meeting = ((MeetingActivity) mActivity).getMeeting();
    }
  }

  @Override
  public View onCreateView(LayoutInflater inflater, ViewGroup container,
                            Bundle savedInstanceState) {
    // Inflate the layout for this fragment
    View view = inflater.inflate(R.layout.fragment_speaker, container, false);
    btnMic = view.findViewById(R.id.btnMic);
    btnWebcam = view.findViewById(R.id.btnWebcam);
    btnHls = view.findViewById(R.id.btnHLS);
    btnLeave = view.findViewById(R.id.btnLeave);

    tvMeetingId = view.findViewById(R.id.tvMeetingId);
    tvHlsState = view.findViewById(R.id.tvHlsState);

    if (meeting != null) {
        tvMeetingId.setText(&quot;Meeting Id : &quot; + meeting.getMeetingId());
        setActionListeners();
    }
    return view;
  }

  private void setActionListeners() {
    btnMic.setOnClickListener(v -&gt; {
        if (micEnabled) {
          meeting.muteMic();
          Toast.makeText(mContext,&quot;Mic Muted&quot;,Toast.LENGTH_SHORT).show();
        } else {
          meeting.unmuteMic();
          Toast.makeText(mContext,&quot;Mic Enabled&quot;,Toast.LENGTH_SHORT).show();
        }
        micEnabled=!micEnabled;
    });

    btnWebcam.setOnClickListener(v -&gt; {
        if (webcamEnabled) {
          meeting.disableWebcam();
          Toast.makeText(mContext,&quot;Webcam Disabled&quot;,Toast.LENGTH_SHORT).show();
        } else {
          meeting.enableWebcam();
          Toast.makeText(mContext,&quot;Webcam Enabled&quot;,Toast.LENGTH_SHORT).show();
        }
        webcamEnabled=!webcamEnabled;
    });

    btnLeave.setOnClickListener(v -&gt; meeting.leave());

    btnHls.setOnClickListener(v -&gt; {
      if (!hlsEnabled) {
        JSONObject config = new JSONObject();
        JSONObject layout = new JSONObject();
        JsonUtils.jsonPut(layout, &quot;type&quot;, &quot;SPOTLIGHT&quot;);
        JsonUtils.jsonPut(layout, &quot;priority&quot;, &quot;PIN&quot;);
        JsonUtils.jsonPut(layout, &quot;gridSize&quot;, 4);
        JsonUtils.jsonPut(config, &quot;layout&quot;, layout);
        JsonUtils.jsonPut(config, &quot;orientation&quot;, &quot;portrait&quot;);
        JsonUtils.jsonPut(config, &quot;theme&quot;, &quot;DARK&quot;);
        JsonUtils.jsonPut(config, &quot;quality&quot;, &quot;high&quot;);
        meeting.startHls(config);
      } else {
        meeting.stopHls();
      }
    });
  }
}
</code></pre>
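The `startHls` config assembled with `JsonUtils` above has a fixed shape. As plain data (a sketch using standard `Map`s rather than `org.json`, purely to document the expected keys — `meeting.startHls` itself takes a `JSONObject`):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HlsConfigSketch {
    // Mirrors the JSONObject built in the click listener: a nested "layout"
    // object plus top-level orientation, theme, and quality settings.
    static Map<String, Object> buildHlsConfig() {
        Map<String, Object> layout = new LinkedHashMap<>();
        layout.put("type", "SPOTLIGHT");   // layout template
        layout.put("priority", "PIN");     // pinned participants shown first
        layout.put("gridSize", 4);         // max tiles in the grid

        Map<String, Object> config = new LinkedHashMap<>();
        config.put("layout", layout);
        config.put("orientation", "portrait");
        config.put("theme", "DARK");
        config.put("quality", "high");
        return config;
    }
}
```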
<!--kg-card-end: markdown--><p>After adding listeners for buttons, let's add <code>MeetingEventListener</code> to the meeting and remove all the listeners in <code>onDestroy()</code> method.</p><!--kg-card-begin: markdown--><pre><code class="language-js">public class SpeakerFragment extends Fragment {

  @Override
  public View onCreateView(LayoutInflater inflater, ViewGroup container,
                            Bundle savedInstanceState) {
    //...
    if (meeting != null) {
        //...
        // add Listener to the meeting
        meeting.addEventListener(meetingEventListener);
    }
    return view;
  }

  private final MeetingEventListener meetingEventListener = new MeetingEventListener() {
    @Override
    public void onMeetingLeft() {
      //unpin local participant
      meeting.getLocalParticipant().unpin(&quot;SHARE_AND_CAM&quot;);
      if (isAdded()) {
        Intent intents = new Intent(mContext, JoinActivity.class);
        intents.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK
                | Intent.FLAG_ACTIVITY_CLEAR_TOP | Intent.FLAG_ACTIVITY_CLEAR_TASK);
        startActivity(intents);
        mActivity.finish();
      }
    }

    @RequiresApi(api = Build.VERSION_CODES.P)
    @Override
    public void onHlsStateChanged(JSONObject HlsState) {
        if (HlsState.has(&quot;status&quot;)) {
          try {
            tvHlsState.setText(&quot;Current HLS State : &quot; + HlsState.getString(&quot;status&quot;));
            if (HlsState.getString(&quot;status&quot;).equals(&quot;HLS_STARTED&quot;)) {
                hlsEnabled=true;
                btnHls.setText(&quot;Stop HLS&quot;);
            }
            if (HlsState.getString(&quot;status&quot;).equals(&quot;HLS_STOPPED&quot;)) {
                hlsEnabled = false;
                btnHls.setText(&quot;Start HLS&quot;);
            }
          } catch (JSONException e) {
              e.printStackTrace();
          }
        }
    }
  };

  @Override
  public void onDestroy() {
    mContext = null;
    mActivity = null;
    if (meeting != null) {
        meeting.removeAllListeners();
        meeting = null;
    }
    super.onDestroy();
  }

}
</code></pre>
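The status handling in `onHlsStateChanged` above implements a small state rule: only the two terminal statuses flip the toggle button, while intermediate statuses (such as `HLS_STARTING`) leave it unchanged. As a standalone sketch with hypothetical names:

```java
public class HlsButtonState {
    // Returns the label the HLS toggle button should show after a status update.
    static String labelFor(String status, String currentLabel) {
        if ("HLS_STARTED".equals(status)) return "Stop HLS";
        if ("HLS_STOPPED".equals(status)) return "Start HLS";
        return currentLabel; // intermediate states keep the current label
    }
}
```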
<!--kg-card-end: markdown--><p>The next step is to render the speaker's view. With <code>RecyclerView</code>, we will display the list of participants who have joined the meeting as a <code>host</code>.</p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">- Here the participant's video is displayed using <code>VideoView</code>, but you may also use <code>SurfaceViewRenderer</code> for the same purpose.<br/>- For <code>VideoView</code>, the SDK version should be <code>0.1.13</code> or higher.<br/>- To learn more about <code>VideoView</code>, please visit the docs <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/render-media/display-video/understand-videoView-component">here</a>.</div></div><p>Create a new layout for the participant view named <code>item_remote_peer.xml</code> in the <code>res/layout</code> folder.</p><!--kg-card-begin: markdown--><pre><code class="language-js">&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;FrameLayout xmlns:android=&quot;http://schemas.android.com/apk/res/android&quot;
    xmlns:app=&quot;http://schemas.android.com/apk/res-auto&quot;
    xmlns:tools=&quot;http://schemas.android.com/tools&quot;
    android:layout_width=&quot;match_parent&quot;
    android:layout_height=&quot;200dp&quot;
    android:background=&quot;@color/cardview_dark_background&quot;
    tools:layout_height=&quot;200dp&quot;&gt;

    &lt;live.videosdk.rtc.android.VideoView
        android:id=&quot;@+id/participantView&quot;
        android:layout_width=&quot;match_parent&quot;
        android:layout_height=&quot;match_parent&quot;
        android:visibility=&quot;gone&quot; /&gt;

    &lt;LinearLayout
        android:layout_width=&quot;match_parent&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:layout_gravity=&quot;bottom&quot;
        android:background=&quot;#99000000&quot;
        android:orientation=&quot;horizontal&quot;&gt;

        &lt;TextView
            android:id=&quot;@+id/tvName&quot;
            android:layout_width=&quot;0dp&quot;
            android:layout_height=&quot;wrap_content&quot;
            android:layout_weight=&quot;1&quot;
            android:gravity=&quot;center&quot;
            android:padding=&quot;4dp&quot;
            android:textColor=&quot;@color/white&quot; /&gt;

    &lt;/LinearLayout&gt;

&lt;/FrameLayout&gt;
</code></pre>
<!--kg-card-end: markdown--><p>Create a RecyclerView adapter named <code>SpeakerAdapter</code> that will show the participant list. Inside the adapter, create a <code>PeerViewHolder</code> that extends <code>RecyclerView.ViewHolder</code>.</p><!--kg-card-begin: markdown--><pre><code class="language-js">public class SpeakerAdapter extends RecyclerView.Adapter&lt;SpeakerAdapter.PeerViewHolder&gt; {
  private List&lt;Participant&gt; participantList = new ArrayList&lt;&gt;();
  private final Meeting meeting;
  public SpeakerAdapter(Meeting meeting) {
    this.meeting = meeting;

    updateParticipantList();

    // adding Meeting Event listener to get the participant join/leave event in the meeting.
    meeting.addEventListener(new MeetingEventListener() {
      @Override
      public void onParticipantJoined(Participant participant) {
        // check whether the participant joined as a Host/Speaker (CONFERENCE mode)
        if (participant.getMode().equals(&quot;CONFERENCE&quot;)) {
          // pin the participant
          participant.pin(&quot;SHARE_AND_CAM&quot;);
          // add participant in participantList
          participantList.add(participant);
        }
        notifyDataSetChanged();
      }

      @Override
      public void onParticipantLeft(Participant participant) {
        int pos = -1;
        for (int i = 0; i &lt; participantList.size(); i++) {
          if (participantList.get(i).getId().equals(participant.getId())) {
            pos = i;
            break;
          }
        }
        if (pos &gt;= 0) {
          // unpin the participant who left the meeting
          participant.unpin(&quot;SHARE_AND_CAM&quot;);
          // remove the participant from the list and notify the adapter
          participantList.remove(pos);
          notifyItemRemoved(pos);
        }
      }
    });
  }

  private void updateParticipantList() {
    participantList = new ArrayList&lt;&gt;();

    // adding the local participant(You) to the list
    participantList.add(meeting.getLocalParticipant());

    // adding participants who join as Host/Speaker
    Iterator&lt;Participant&gt; participants = meeting.getParticipants().values().iterator();
    for (int i = 0; i &lt; meeting.getParticipants().size(); i++) {
      final Participant participant = participants.next();
      if (participant.getMode().equals(&quot;CONFERENCE&quot;)) {
        // pin the participant
        participant.pin(&quot;SHARE_AND_CAM&quot;);
        // add participant in participantList
        participantList.add(participant);
      }
    }
  }

  @NonNull
  @Override
  public PeerViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
    return new PeerViewHolder(LayoutInflater.from(parent.getContext()).inflate(R.layout.item_remote_peer, parent, false));
  }

  @Override
  public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
    Participant participant = participantList.get(position);

    holder.tvName.setText(participant.getDisplayName());

    // adding the initial video stream for the participant into the 'VideoView'
    for (Map.Entry&lt;String, Stream&gt; entry : participant.getStreams().entrySet()) {
      Stream stream = entry.getValue();
      if (stream.getKind().equalsIgnoreCase(&quot;video&quot;)) {
        holder.participantView.setVisibility(View.VISIBLE);
        VideoTrack videoTrack = (VideoTrack) stream.getTrack();
        holder.participantView.addTrack(videoTrack);
        break;
      }
    }

    // add Listener to the participant which will update start or stop the video stream of that participant
    participant.addEventListener(new ParticipantEventListener() {
        @Override
        public void onStreamEnabled(Stream stream) {
          if (stream.getKind().equalsIgnoreCase(&quot;video&quot;)) {
            holder.participantView.setVisibility(View.VISIBLE);
            VideoTrack videoTrack = (VideoTrack) stream.getTrack();
            holder.participantView.addTrack(videoTrack);
          }
        }

        @Override
        public void onStreamDisabled(Stream stream) {
          if (stream.getKind().equalsIgnoreCase(&quot;video&quot;)) {
            holder.participantView.removeTrack();
            holder.participantView.setVisibility(View.GONE);
          }
        }
    });
  }

  @Override
  public int getItemCount() {
    return participantList.size();
  }

  static class PeerViewHolder extends RecyclerView.ViewHolder {
    // 'VideoView' to show Video Stream
    public VideoView participantView;
    public TextView tvName;
    public View itemView;

    PeerViewHolder(@NonNull View view) {
        super(view);
        itemView = view;
        tvName = view.findViewById(R.id.tvName);
        participantView = view.findViewById(R.id.participantView);
    }
  }
}
</code></pre>
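The `onParticipantLeft` handler above locates the participant by id before removing it, because the `Participant` object delivered with the event is not guaranteed to be the same instance stored in the list. The lookup itself, extracted as a plain helper (a sketch over ids only):

```java
import java.util.List;

public class ParticipantLookup {
    // Returns the index of the first matching id, or -1 when the participant
    // was never in the speaker list (e.g. a viewer leaving the meeting).
    static int indexOfId(List<String> ids, String id) {
        for (int i = 0; i < ids.size(); i++) {
            if (ids.get(i).equals(id)) return i;
        }
        return -1;
    }
}
```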
<!--kg-card-end: markdown--><p>Add this adapter to the <code>SpeakerFragment</code>.</p><!--kg-card-begin: markdown--><pre><code class="language-js"> @Override
  public View onCreateView(LayoutInflater inflater, ViewGroup container,
                            Bundle savedInstanceState) {
    //...
    if (meeting != null) {
      //...
      final RecyclerView rvParticipants = view.findViewById(R.id.rvParticipants);
      rvParticipants.setLayoutManager(new GridLayoutManager(mContext, 2));
      rvParticipants.setAdapter(new SpeakerAdapter(meeting));
    }
    return view;
  }
</code></pre>
<!--kg-card-end: markdown--><h3 id="step-4-implement-viewerview">Step 4: Implement ViewerView</h3><p>Once the host starts live streaming, viewers can watch the stream.</p><p>To implement the player view, we will use <code>ExoPlayer</code>, which can play the HLS stream.</p><p>First, add the dependency to the project.</p><!--kg-card-begin: markdown--><pre><code class="language-js">dependencies {
  implementation 'com.google.android.exoplayer:exoplayer:2.18.5'
  // other app dependencies
  }
</code></pre>
<!--kg-card-end: markdown--><p>Create a new Fragment named <code>ViewerFragment</code>.</p><!--kg-card-begin: markdown--><h4 id="a-creating-ui">(a) Creating UI</h4>
<!--kg-card-end: markdown--><p>The Viewer Fragment will include:</p><ol><li><strong>TextView for Meeting ID</strong> - Displays the ID of the meeting you joined.</li><li><strong>Leave Button</strong> - Leaves the meeting.</li><li><strong>waitingLayout</strong> - A TextView shown when there is no active HLS stream.</li><li><strong>StyledPlayerView</strong> - A media player that displays the live stream.</li></ol><p>In the <code>/app/res/layout/fragment_viewer.xml</code> file, replace the content with the following.</p><!--kg-card-begin: markdown--><pre><code class="language-js">&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;RelativeLayout xmlns:android=&quot;http://schemas.android.com/apk/res/android&quot;
    xmlns:app=&quot;http://schemas.android.com/apk/res-auto&quot;
    xmlns:tools=&quot;http://schemas.android.com/tools&quot;
    android:layout_width=&quot;match_parent&quot;
    android:layout_height=&quot;match_parent&quot;
    android:background=&quot;@color/black&quot;
    tools:context=&quot;.ViewerFragment&quot;&gt;

    &lt;LinearLayout
        android:id=&quot;@+id/meetingLayout&quot;
        android:layout_width=&quot;match_parent&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:paddingHorizontal=&quot;12sp&quot;
        android:paddingVertical=&quot;5sp&quot;&gt;

        &lt;TextView
            android:id=&quot;@+id/meetingId&quot;
            android:layout_width=&quot;0dp&quot;
            android:layout_height=&quot;wrap_content&quot;
            android:layout_weight=&quot;3&quot;
            android:text=&quot;Meeting Id : &quot;
            android:textColor=&quot;@color/white&quot;
            android:textSize=&quot;20sp&quot; /&gt;

        &lt;Button
            android:id=&quot;@+id/btnLeave&quot;
            android:layout_width=&quot;0dp&quot;
            android:layout_height=&quot;wrap_content&quot;
            android:layout_weight=&quot;1&quot;
            android:text=&quot;Leave&quot; /&gt;

    &lt;/LinearLayout&gt;

    &lt;TextView
        android:id=&quot;@+id/waitingLayout&quot;
        android:layout_width=&quot;match_parent&quot;
        android:layout_height=&quot;match_parent&quot;
        android:text=&quot;Waiting for host \n to start the live streaming&quot;
        android:textColor=&quot;@color/white&quot;
        android:textFontWeight=&quot;700&quot;
        android:textSize=&quot;20sp&quot;
        android:gravity=&quot;center&quot;/&gt;

    &lt;com.google.android.exoplayer2.ui.StyledPlayerView
        android:id=&quot;@+id/player_view&quot;
        android:layout_width=&quot;match_parent&quot;
        android:layout_height=&quot;match_parent&quot;
        android:visibility=&quot;gone&quot;
        app:resize_mode=&quot;fixed_width&quot;
        app:show_buffering=&quot;when_playing&quot;
        app:show_subtitle_button=&quot;false&quot;
        app:use_artwork=&quot;false&quot;
        app:show_next_button=&quot;false&quot;
        app:show_previous_button=&quot;false&quot;
        app:use_controller=&quot;true&quot;
        android:layout_below=&quot;@id/meetingLayout&quot;/&gt;

&lt;/RelativeLayout&gt;
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h4 id="b-initialize-player-and-play-hls-stream">(b) Initialize player and Play HLS stream</h4>
<!--kg-card-end: markdown--><p>Initialize the player and start HLS playback when the HLS state is <code>HLS_PLAYABLE</code>, and release the player when the HLS state is <code>HLS_STOPPED</code>. Whenever the meeting's HLS state changes, the <code>onHlsStateChanged</code> event is triggered.</p><!--kg-card-begin: markdown--><pre><code class="language-java">public class ViewerFragment extends Fragment {

  private Meeting meeting;
  protected StyledPlayerView playerView;
  private TextView waitingLayout;
  protected @Nullable
  ExoPlayer player;

  private DefaultHttpDataSource.Factory dataSourceFactory;
  private boolean startAutoPlay=true;
  private String downStreamUrl = &quot;&quot;;
  private static Activity mActivity;
  private static Context mContext;

  public ViewerFragment() {
      // Required empty public constructor
  }

  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
  }

  @Override
  public View onCreateView(LayoutInflater inflater, ViewGroup container,
                            Bundle savedInstanceState) {
    // Inflate the layout for this fragment
    View view = inflater.inflate(R.layout.fragment_viewer, container, false);

    playerView = view.findViewById(R.id.player_view);

    waitingLayout = view.findViewById(R.id.waitingLayout);
    if(meeting != null) {
      // set MeetingId to TextView
      ((TextView) view.findViewById(R.id.meetingId)).setText(&quot;Meeting Id : &quot; + meeting.getMeetingId());
      // leave the meeting on btnLeave click
      ((Button) view.findViewById(R.id.btnLeave)).setOnClickListener(v -&gt; meeting.leave());
      // add listener to meeting
      meeting.addEventListener(meetingEventListener);
    }
    return view;
  }


  @Override
  public void onAttach(@NonNull Context context) {
    super.onAttach(context);
    mContext = context;
    if (context instanceof Activity) {
      mActivity = (Activity) context;
      // get meeting object from MeetingActivity
      meeting = ((MeetingActivity) mActivity).getMeeting();
    }
  }

  private final MeetingEventListener meetingEventListener = new MeetingEventListener() {

      @Override
      public void onMeetingLeft() {
        if (isAdded()) {
          Intent intents = new Intent(mContext, JoinActivity.class);
          intents.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK
                  | Intent.FLAG_ACTIVITY_CLEAR_TOP | Intent.FLAG_ACTIVITY_CLEAR_TASK);
          startActivity(intents);
          mActivity.finish();
        }
      }

      @RequiresApi(api = Build.VERSION_CODES.P)
      @Override
      public void onHlsStateChanged(JSONObject HlsState) {
        if (HlsState.has(&quot;status&quot;)) {
          try {
            if (HlsState.getString(&quot;status&quot;).equals(&quot;HLS_PLAYABLE&quot;) &amp;&amp; HlsState.has(&quot;downstreamUrl&quot;)) {
              downStreamUrl = HlsState.getString(&quot;downstreamUrl&quot;);
              waitingLayout.setVisibility(View.GONE);
              playerView.setVisibility(View.VISIBLE);
              // initialize player
              initializePlayer();
            }
            if (HlsState.getString(&quot;status&quot;).equals(&quot;HLS_STOPPED&quot;)) {
              // release the player
              releasePlayer();
              downStreamUrl = null;
              waitingLayout.setText(&quot;Host has stopped \n the live streaming&quot;);
              waitingLayout.setVisibility(View.VISIBLE);
              playerView.setVisibility(View.GONE);
            }
          } catch (JSONException e) {
              e.printStackTrace();
          }
        }
    }
  };

  protected void initializePlayer() {
    if (player == null) {
      dataSourceFactory = new DefaultHttpDataSource.Factory();
      HlsMediaSource mediaSource = new HlsMediaSource.Factory(dataSourceFactory).createMediaSource(
              MediaItem.fromUri(Uri.parse(this.downStreamUrl)));
      ExoPlayer.Builder playerBuilder =
              new ExoPlayer.Builder(/* context= */ mContext);
      player = playerBuilder.build();
      // auto play when player is ready
      player.setPlayWhenReady(startAutoPlay);
      player.setMediaSource(mediaSource);
      // the settings button is hidden here; remove this line to show it
      playerView.findViewById(com.google.android.exoplayer2.ui.R.id.exo_settings).setVisibility(View.GONE);
      playerView.setPlayer(player);
    }
    player.prepare();
  }

  protected void releasePlayer() {
    if (player != null) {
      player.release();
      player = null;
      dataSourceFactory = null;
      playerView.setPlayer(/* player= */ null);
    }
  }

  @Override
  public void onDestroy() {
    mContext = null;
    mActivity = null;
    downStreamUrl = null;
    releasePlayer();
    if (meeting != null) {
      meeting.removeAllListeners();
      meeting = null;
    }
    super.onDestroy();
  }

}
</code></pre>
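The branching in <code>onHlsStateChanged</code> above boils down to a small, UI-free decision. As a plain-Java sketch (a hypothetical helper, not part of the VideoSDK API), extracting that mapping makes it easy to unit-test without any Android classes:

```java
// Hypothetical helper (not part of the VideoSDK API) that mirrors the
// onHlsStateChanged branching above as a pure function.
public class HlsStateMapper {

    public enum Action { SHOW_PLAYER, RELEASE_PLAYER, IGNORE }

    public static Action actionFor(String status, boolean hasDownstreamUrl) {
        if ("HLS_PLAYABLE".equals(status) && hasDownstreamUrl) {
            return Action.SHOW_PLAYER;    // hide waitingLayout, initialize and show the player
        }
        if ("HLS_STOPPED".equals(status)) {
            return Action.RELEASE_PLAYER; // release the player, show the "stopped" message
        }
        return Action.IGNORE;             // any other status leaves the UI unchanged
    }

    public static void main(String[] args) {
        System.out.println(actionFor("HLS_PLAYABLE", true));  // SHOW_PLAYER
        System.out.println(actionFor("HLS_STOPPED", false));  // RELEASE_PLAYER
    }
}
```

Keeping the decision logic in a helper like this leaves the fragment code thin: the listener only parses the JSON and then applies whichever action the mapper returns.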
<!--kg-card-end: markdown--><h2 id="final-output">Final Output</h2><p>Congratulations! Following this tutorial, you have successfully integrated HTTP live streaming capabilities into your Android app using <strong>VideoSDK</strong>. Let's recap what you have accomplished:</p><ul><li>Created a user-friendly joining screen allowing users to enter a meeting ID and join as a host or viewer.</li><li>Implemented the necessary logic to generate meeting IDs and handle different roles (host/viewer).</li><li>Developed a meeting screen where users can participate in HTTP live-streaming sessions, engage with the stream, chat with other participants, and have real-time discussions.</li></ul><p>To unlock the full potential of VideoSDK and create easy-to-use video experiences, developers are encouraged to sign up for VideoSDK and further explore its features. </p><p><a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and Get <strong>10000 minutes free</strong> to take your Android video app to the next level!</p><!--kg-card-begin: markdown--><h2 id="more-android-resources">More Android Resources</h2>
<!--kg-card-end: markdown--><ul><li><a href="https://www.videosdk.live/blog/1-on-1-video-chat">Android<em> </em>One-To-One Video Calling App with Java</a></li><li><a href="https://www.videosdk.live/blog/android-live-streaming">Android Live Streaming App using Kotlin</a></li><li><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/concept-and-architecture">Build Android Video Calling App - docs</a></li><li><a href="https://youtu.be/Kj7jS3dbJFA">Build Android Video Calling App using Android Studio and VideoSDK</a></li><li><a href="https://github.com/videosdk-live/quickstart/tree/main/android-rtc">quickstart/android-rtc</a></li><li><a href="https://github.com/videosdk-live/quickstart/tree/main/android-hls">quickstart/android-hls</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example">videosdk-rtc-android-java-sdk-example</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example">videosdk-rtc-android-kotlin-sdk-example</a></li><li><a href="https://github.com/videosdk-live/videosdk-hls-android-java-example">videosdk-hls-android-java-example</a></li><li><a href="https://github.com/videosdk-live/videosdk-hls-android-kotlin-example">videosdk-hls-android-kotlin-example</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Build vs Buy VCIP Infrastructure: Complete Guide]]></title><description><![CDATA[This article compares building video KYC infrastructure to be costly and time-consuming and defines its limitations and why a buy ready-to-use InfraTech solution would be beneficial.]]></description><link>https://www.videosdk.live/blog/build-vs-buy-video-kyc-infrastructure</link><guid isPermaLink="false">651fee6e9eadee0b8b9eb4e2</guid><category><![CDATA[Video KYC]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 21 Jan 2025 11:31:00 GMT</pubDate><media:content 
url="https://assets.videosdk.live/static-assets/ghost/2023/10/Build-vs-Buy-Hero-img.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/10/Build-vs-Buy-Hero-img.jpg" alt="Build vs Buy VCIP Infrastructure: Complete Guide"/><p>The decision to build or buy a Video KYC (V-CIP) infrastructure is one of the most consequential technology choices a bank, NBFC, or fintech company makes. Get it wrong and you face regulatory penalties, customer drop-offs, or a fraud incident that bypasses your own system. Get it right and you shorten customer onboarding from days to minutes, reduce operational costs, and stay ahead of an evolving compliance landscape.</p><p>As per <a href="https://www.videosdk.live/blog/rbi-compliance-for-video-kyc">RBI regulations</a>, the video KYC verification process starts with collecting the customer’s consent to use their personal information for verification. To comply with these regulations, there are only two options available for any Financial Institution - building an in-house video KYC solution or buying a ready-to-use VCIP infrastructure. In this article, we try to give you an answer to that question and represent overall information about build vs buy video KYC infrastructure.</p><h2 id="understanding-vcip-infrastructure">Understanding VCIP Infrastructure</h2><p>VCIP Infrastructure is a framework that starts with onboarding a customer to complete the process with the <strong>end-to-end secure</strong> and reliable approach in a particular session. 
The purpose of this framework differs from organization to organization, depending on the scenario.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://assets.videosdk.live/static-assets/ghost/2023/10/Build-vs-Buy-Architecture-1.jpg" class="kg-image" alt="Build vs Buy VCIP Infrastructure: Complete Guide" loading="lazy" width="2049" height="1149"/></figure><h2 id="what-is-needed-to-build-complete-vcip-kyc-solution">What is Needed to Build Complete VCIP KYC Solution?</h2><p>Building an in-house VCIP KYC Solution involves diverse features and technologies to securely verify identities through video calls. To start, it is crucial to follow an effective implementation process.</p><h3 id="tech-stack"><strong>Tech Stack</strong></h3><p>Choose tools and technologies that integrate seamlessly with the components you already use. Leveraging appropriate technology frameworks like <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start"><strong>JavaScript,</strong></a><strong> </strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/concept-and-architecture"><strong>React,</strong></a><strong> </strong><a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/concept-and-architecture"><strong>React Native,</strong></a><strong> </strong><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/concept-and-architecture"><strong>Flutter,</strong></a><strong> </strong><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/concept-and-architecture"><strong>Android,</strong></a><strong> and </strong><a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/getting-started"><strong>iOS</strong></a> ensures compatibility with the tech stack.</p><h3 id="software-platform"><strong>Software Platform</strong></h3><p>To develop a web application 
for video KYC, you can use open-source platforms like JITSI, JANUS, or similar private platforms. These platforms offer flexibility and customization options, but they come with scalability limitations, custom features are costly to add, and they may lack the enterprise-grade security features required for identity verification and fraud prevention.</p><h3 id="team-of-executives"><strong>Team of Executives</strong></h3><p>To build video KYC infrastructure, it is important to have a well-structured team in place. Team executives play a crucial role in implementing, managing, and maintaining the video KYC solution. They are responsible for ensuring compliance with regulatory guidelines and integrating the necessary features.</p><h2 id="the-challenge-of-building-video-kyc-infrastructure">The Challenge of Building Video KYC Infrastructure</h2><p>When choosing between building and buying video KYC infrastructure, organizations must consider factors such as implementation process, tech stack compatibility, software platform features, team requirements, deployment options, compliance management, identity verification capabilities, security measures, scalability, data security, and customization options.</p><p>If the video KYC infrastructure is built on the core features of JITSI or JANUS, additional layers must be added on top: layers that provide <strong>enterprise-grade security</strong> features and other required capabilities.</p><h2 id="building-in-house-kyc-infra-vs-buy-video-kyc-infratech-solution">Building In-house KYC Infra vs. Buy Video KYC InfraTech Solution</h2><table>
<thead>
<tr>
<th/>
<th>Requirements for Building KYC Infrastructure</th>
<th>Buy Ready-to-use InfraTech Framework</th>
</tr>
</thead>
<tbody>
<tr>
<td>Compliance Management</td>
<td>Organizations need to incorporate compliance management solutions to ensure regulatory compliance. (Not available in Open Source Platforms)</td>
<td>Provides built-in compliance management to ensure regulatory compliance with RBI guidelines, including CERT-In and VAPT acceptance.</td>
</tr>
<tr>
<td>Complete On-Premises Deployment</td>
<td>Allows organizations to have complete control over data security by deploying the solution on their own servers. (May fall short under high demand)</td>
<td>Offers a solution hosted on the provider's servers, ensuring data security and protection.</td>
</tr>
<tr>
<td>Face Match Comparison</td>
<td>Requires integrating AI algorithms to compare customer's face with the photo on their government-issued ID for identity verification. Additional integration with third-party identity verification services may be needed.</td>
<td>Offers AI-powered face match comparison for identity verification, with integrated third-party identity verification services for enhanced authenticity.</td>
</tr>
<tr>
<td>Geo-Location Capture and IP Check</td>
<td>Requires capturing customer's location and performing IP checks to prevent fraud and unauthorized access.</td>
<td>Offers built-in geo-location capture and IP check features for fraud prevention.</td>
</tr>
<tr>
<td>End-to-End Encryption for Video</td>
<td>Requires implementing advanced encryption algorithms and protocols to secure data exchanged during video calls.</td>
<td>Provides end-to-end encryption for video calls to ensure data privacy and security.</td>
</tr>
<tr>
<td>Unlimited Video Storage and Instant Retrieval</td>
<td>Organizations need to provide unlimited video storage to meet compliance requirements.</td>
<td>Offers unlimited video storage and instant retrieval of video recordings for compliance purposes.</td>
</tr>
<tr>
<td>Battling Security Threats</td>
<td>Organizations need to implement robust security measures to detect and prevent security threats such as fake identity documents and pre-recorded videos.</td>
<td>Provides advanced security measures to battle security threats, ensuring protection against fraud and illegal activities.</td>
</tr>
<tr>
<td>Real-Time OVD Verification</td>
<td>Requires instant verification of government-issued identity documents for compliance.</td>
<td>Offers real-time verification of government-issued identity documents for regulatory compliance.</td>
</tr>
<tr>
<td>Cost Analysis</td>
<td>Consider the cost of hiring and retaining skilled personnel, development tools, infrastructure, and ongoing maintenance.</td>
<td>Evaluate the upfront purchase cost, licensing fees, and ongoing subscription or maintenance fees. Compare these costs over time.</td>
</tr>
<tr>
<td>Time-to-Market</td>
<td>Developing an in-house solution may take longer, especially if it requires research and development from scratch.</td>
<td>Purchasing an existing solution can significantly reduce time-to-market as the product is already developed and ready to use.</td>
</tr>
<tr>
<td>Customization Needs</td>
<td>If your organization requires a highly customized solution tailored to specific requirements, building in-house may be the better option.</td>
<td>Third-party solutions may have limitations in terms of customization. Evaluate if the available features meet your needs without extensive modification.</td>
</tr>
</tbody>
</table>
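Rows like "End-to-End Encryption for Video" above translate into real engineering work when building in-house. As a rough illustration of the kind of payload-encryption layer involved (a hypothetical, non-production sketch using the JDK's AES-GCM, not VideoSDK's actual implementation):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Hypothetical payload-encryption layer: AES-256-GCM with a random
// 12-byte IV prepended to the ciphertext. GCM also authenticates the
// payload, so tampering is detected at decrypt time.
public class PayloadCrypto {
    private static final int IV_LEN = 12;
    private static final int TAG_BITS = 128;

    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[IV_LEN + ct.length];
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return out;
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(blob, 0, IV_LEN)));
        return c.doFinal(Arrays.copyOfRange(blob, IV_LEN, blob.length));
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();
        byte[] blob = encrypt(key, "kyc-video-frame".getBytes());
        System.out.println(new String(decrypt(key, blob))); // prints: kyc-video-frame
    }
}
```

Even this small sketch hides further decisions a build-from-scratch team must make: key distribution, rotation, and IV uniqueness guarantees, which is part of why the table treats encryption as a whole row of effort rather than a checkbox.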
<h3 id="additional-core-regulatory-layers">Additional Core Regulatory Layers</h3>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th/>
<th>Build Framework</th>
<th>Buy Framework</th>
</tr>
</thead>
<tbody>
<tr>
<td>Flexibility</td>
<td>Depends on the organization's requirements and size, requiring high-end customization to support different customer journeys.</td>
<td>Offers high-end customization to accommodate diverse customer needs and ensure a smooth and personalized KYC experience.</td>
</tr>
<tr>
<td>Auto Scalability</td>
<td>Organizations need to ensure their infrastructure can scale up or down based on customer volume.</td>
<td>Provides auto scalability to handle sudden spikes in customer volume without performance issues or downtime.</td>
</tr>
<tr>
<td>Data Security</td>
<td>Requires implementing robust data encryption and security measures to protect customer data from unauthorized access or breaches.</td>
<td>Ensures data security by implementing robust encryption, firewall proxy support, and purging mechanisms to protect customer data.</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
<p>So, building a new infrastructure from scratch can be expensive, requiring substantial investments in technology, resources, and expertise. <strong>First-layer solution providers</strong> solve this issue efficiently: by using a ready-to-use infrastructure, organizations avoid building their own internal stack from scratch and instead leverage a pre-built platform.</p><h2 id="benefits-of-buying-video-kyc-infratech-solution">Benefits of Buying Video KYC InfraTech Solution</h2><p>Requirements of BFSI organizations differ depending on their size and specific needs. For example, consumer fintech startups and small union banks may have different VCIP priorities than mid-sized or large banks. It's crucial for organizations to carefully consider their unique needs and choose the right video KYC infrastructure.</p><p>One of the significant advantages of opting for <a href="https://www.videosdk.live/solutions/video-kyc">Video KYC InfraTech Solutions</a> is security and cost-effectiveness. This type of PaaS solution bundles the required features out of the box. This not only streamlines the onboarding process but also reduces operational costs, minimizes fraud, and ensures compliance with regulatory guidelines. A first-layer solution provider offers customization options, scalability, and reliability, allowing organizations to efficiently onboard more customers.</p><p>This cost advantage allows organizations to allocate their resources more efficiently and focus on other critical areas of their business. Thus, a ready-to-use infrastructure provides not only the benefits mentioned above but also a cost-effective path for organizations looking to implement video KYC solutions efficiently.</p><h2 id="which-should-be-good-for-financial-organizations">Which Option Is Right for Financial Organizations?</h2><p>The decision to build or buy video KYC infrastructure is crucial for organizations in the BFSI sector. 
Building an in-house infrastructure requires significant investments in technology, resources, and expertise. On the other hand, buying a ready-to-use solution offers cost-effectiveness, streamlined onboarding processes, reduced operational costs, and compliance with regulatory guidelines.</p><p>Both options can work, but buying a ready-to-use VCIP infrastructure from a video solution expert offers several advantages. Such providers obtain CERT-In and VAPT compliance in line with RBI requirements, affirming their commitment to the highest standards of security and regulatory compliance.</p><p>When it comes to choosing a provider, <a href="https://www.videosdk.live/">VideoSDK </a>stands out from the competition. Banks and fintech companies can benefit from a better identity verification SDK or solution by leveraging VideoSDK's pre-built infrastructure without the need to build their own internal stack from scratch. This not only saves time and resources but also ensures a streamlined onboarding process, reduced operational costs, and minimized fraud.</p><p>With VideoSDK, organizations can count on customization options, scalability, and reliability, enabling them to efficiently onboard more customers while remaining compliant with regulatory guidelines. 
Choose <a href="https://www.videosdk.live/about-us">VideoSDK </a>for a comprehensive and reliable video KYC solution that puts security and compliance at the forefront.</p><p>You can <a href="https://www.videosdk.live/contact">talk with our team</a> if you have any questions regarding Video KYC compliances or Infrastructure.</p><p/>]]></content:encoded></item><item><title><![CDATA[What is Custom Delivery Protocol?]]></title><description><![CDATA[This article explores common use cases for CDP, key features, a step-by-step approach to implementation, and the advantages and challenges of developing custom protocols.]]></description><link>https://www.videosdk.live/blog/what-is-custom-delivery-protocol</link><guid isPermaLink="false">66d6d25720fab018df10fe6a</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 21 Jan 2025 10:49:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-Custom-Delivery-Protocol_.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-Custom-Delivery-Protocol_.jpg" alt="What is Custom Delivery Protocol?"/><p>Custom Delivery Protocol refers to a bespoke communication method designed to transmit data between client and server systems. Unlike standard protocols such as HTTP, <a href="https://www.videosdk.live/developer-hub/websocket/websocket-guide">WebSockets</a>, or <a href="https://www.videosdk.live/blog/what-is-rtp-protocol">RTP</a>, CDP is tailored to address specific needs that off-the-shelf protocols cannot efficiently handle. 
This customization allows for optimized performance, enhanced security, and improved reliability in data transmission.</p><h2 id="common-use-cases-for-custom-delivery-protocols">Common use cases for custom delivery protocols</h2><p>CDP finds application in various industries and situations where standard protocols fall short:</p><ul><li><strong>Video Conferencing and Streaming</strong>: Enhancing the quality of video and audio transmission beyond traditional protocols.</li><li><strong>Internet of Things (IoT)</strong>: Facilitating efficient data transmission between devices with limited resources.</li><li><strong>High-Frequency Trading</strong>: Shaving critical milliseconds off trade execution where speed is paramount.</li><li><strong>Online Gaming</strong>: Reduced lag and improved responsiveness in multiplayer environments.</li></ul><h2 id="key-features-of-custom-delivery-protocol">Key Features of Custom Delivery Protocol</h2><ol><li><strong>Optimized Data Flow</strong>: CDP ensures efficient data transmission by reducing latency, minimizing bandwidth usage, and increasing throughput based on specific application requirements.</li><li><strong>Advanced Security Measures</strong>: With CDP, developers can integrate custom security protocols, encryption standards, and authentication mechanisms to meet unique security needs.</li><li><strong>Robust Error Handling</strong>: CDP implements more advanced error detection, correction, and data recovery techniques than standard protocol capabilities.</li><li><strong>Low Latency Communication</strong>: Especially beneficial for real-time applications such as <a href="https://www.videosdk.live/audio-video-conferencing">video conferencing</a>, gaming, or live streaming where minimal latency is important.</li><li><strong>Customizable Quality of Service (QoS)</strong>: CDP allows for tailored QoS settings, ensuring reliable data delivery even under varying network conditions.</li><li><strong>Protocol Flexibility</strong>: The 
ability to modify and extend protocol rules as needed enables adaptation to new requirements without a complete system overhaul.</li></ol><h2 id="creating-a-custom-delivery-protocol-a-step-by-step-approach">Creating a Custom Delivery Protocol: A Step-by-Step Approach</h2><p>For those looking to implement CDP in their messaging system, here's a general process:</p><ol><li><strong>Define Protocol Attributes</strong>: Start by defining Protocol Attribute Definitions (DPADs) in your system's Enterprise Designer.</li><li><strong>Configure Messaging and Packaging Attributes</strong>: Set the protocol's Messaging Attribute Definitions (MADs) and Packaging Attribute Definitions (PADs).</li><li><strong>Create Delivery Actions</strong>: Establish delivery action groups and actions under Custom Protocols in Host Explorer.</li><li><strong>Fine-Tune Parameters</strong>: Modify parameter settings for delivery actions as needed to optimize performance.</li></ol><p>Once defined, custom protocols can be assigned to B2B hosts and used to efficiently deliver messages between trading partners.</p><h2 id="advantages-of-implementing-a-custom-delivery-protocol">Advantages of implementing a custom delivery protocol</h2><ol><li><strong>Performance Tuning</strong>: CDP outperforms standard alternatives by focusing on the specific data flow needs of each use case.</li><li><strong>Enhanced Security</strong>: Allows the implementation of advanced or industry-specific security measures.</li><li><strong>Customization</strong>: Provides flexibility to add or remove features as application needs evolve.</li></ol><h2 id="challenges-in-developing-custom-delivery-protocols">Challenges in developing custom delivery protocols</h2><p>Although the benefits are significant, implementing CDP comes with its own set of challenges:</p><ol><li><strong>Complexity</strong>: Developing custom protocols requires deep expertise in networking, data structures, and security.</li><li><strong>Continuous 
Maintenance</strong>: Regular support and updates are required, especially to address emerging security vulnerabilities.</li><li><strong>Interoperability issues</strong>: Custom protocols can face challenges when interacting with standard systems, often requiring additional gateways or adapters.</li></ol><p>As digital communication continues to evolve, the role of Custom Delivery Protocols is set to grow. With the increasing demand for real-time, secure, and efficient data transmission across various industries, CDP offers a flexible and powerful solution for specialized communication needs.</p><p>As we move towards an increasingly connected world, the ability to customize and control data transmission at this level will undoubtedly become a crucial competitive advantage. Businesses and developers can ensure that their data delivery is not only optimized and secure, but also tailored to meet the unique demands of their specific applications and industries. </p>]]></content:encoded></item><item><title><![CDATA[Top 10 Amazon Chime Alternatives in 2026]]></title><description><![CDATA[Explore a game-changing alternative to AWS Chime SDK and elevate your online experience to new heights. Embrace the opportunity for success.]]></description><link>https://www.videosdk.live/blog/amazon-chime-sdk-alternative</link><guid isPermaLink="false">64abba985badc3b21a58f657</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 21 Jan 2025 06:04:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/AWS-Chime-Alternative.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/AWS-Chime-Alternative.jpg" alt="Top 10 Amazon Chime Alternatives in 2026"/><p>Looking for an <a href="https://www.videosdk.live/amazon-chime-sdk-vs-videosdk" rel="noreferrer"><strong>Amazon Chime SDK alternative</strong></a> that seamlessly integrates real-time video into your app? 
Well, you've come to the right place! While Amazon Chime SDK is a popular choice, there's a whole world of untapped possibilities beyond their platform.</p><p>Stick around to find out what you might be missing out on, especially if you're already an Amazon Chime SDK customer.<br><br>When looking for alternatives to AWS Chime, there are several options for video conferencing and communication platforms that offer similar features. Each provides robust collaboration tools and seamless integration with other services. When considering Amazon Chime pricing, these alternatives may offer competitive rates or unique features. Whether you need an AWS Chime app alternative for business meetings or team collaboration, these platforms deliver reliable performance and flexibility. Explore these options to find the best fit for your needs.</br></br></p><h2 id="exploring-alternatives-to-aws-chime-sdk">Exploring Alternatives to AWS Chime SDK</h2>
<p>Amazon Chime itself lacks some useful features, like background blurring effects and polls. Additionally, it doesn't automatically sync with Google Calendar. There have been complaints about compatibility issues, such as the lack of Linux support and problems with certain browsers like Safari 6.2. Users have reported unwanted noises during content sharing, screens going black, and choppy audio when using Apple-based devices and browsers. On top of that, <a href="https://www.videosdk.live/blog/amazon-chime-sdk-competitors">Amazon</a> follows a per-user pricing model, which means you still have to pay the full price for occasional video conferencing users. To enhance performance and address some of these challenges, companies may consider leveraging&nbsp;<a href="https://intellias.com/aws-cloud-migration-services/" rel="noreferrer">AWS migration services</a>&nbsp;to transition their applications to a more robust cloud infrastructure, enabling better scalability and reliability for video conferencing and other features.</p><p>The <strong>top 10 AWS Chime Alternatives</strong> are VideoSDK, Twilio, MirrorFly, Agora, Vonage, ApiRTC, Jitsi, SignalWire, Enablex, and Whereby.</p><blockquote>
<h2 id="top-10-amazon-chime-alternatives-for-2024">Top 10 Amazon Chime Alternatives for 2024</h2>
<ul>
<li><strong>VideoSDK</strong></li>
<li><strong>Twilio Video</strong></li>
<li><strong>MirrorFly</strong></li>
<li><strong>Agora</strong></li>
<li><strong>Vonage</strong></li>
<li><strong>ApiRTC</strong></li>
<li><strong>Jitsi</strong></li>
<li><strong>SignalWire</strong></li>
<li><strong>Enablex</strong></li>
<li><strong>Whereby</strong></li>
</ul>
</blockquote>
<h2 id="1-videosdk-a-swift-cross-platform-alternative">1. VideoSDK: A Swift, Cross-Platform Alternative</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-for-Real-time-Communication-Live-Streaming-Video-API-4.jpeg" class="kg-image" alt="Top 10 Amazon Chime Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>Experience the incredible power of <a href="https://www.videosdk.live">VideoSDK</a>, an API designed to seamlessly integrate robust audio-video features into your applications with minimal effort. </li><li>With just a few lines of code, you can enhance your app with live audio and video experiences across any platform in a matter of minutes.</li><li>One of the major advantages of Video SDK is its simplicity and speed of integration, allowing you to devote more time to developing innovative features that enhance user retention. </li><li>Say goodbye to complex integration processes and hello to a world of limitless possibilities.</li><li>Unlock the full potential of video technology with VideoSDK, offering a multitude of benefits. </li><li>Enjoy high scalability, adaptive bitrate technology, end-to-end customization, superior quality recordings, in-depth analytics, cross-platform streaming, seamless scaling, and comprehensive platform support.</li><li>Whether you're on mobile (Flutter, Android, iOS), web (JavaScript Core SDK + UI Kit), or desktop (Flutter Desktop), our Video SDK empowers you to effortlessly create immersive video experiences. </li><li>Join VideoSDK today and revolutionize your video capabilities like never before!</li><li>Get incredible value with VideoSDK! Take advantage of <a href="https://www.videosdk.live/pricing" rel="noreferrer">$20 free credit</a> and flexible <a href="https://www.videosdk.live/pricing#pricingCalc">pricing options</a> for video and audio calls. 
</li><li><strong>Video calls</strong> start at just <strong>$0.003</strong> per participant per minute, while <strong>audio calls</strong> begin at <strong>$0.0006</strong>.</li><li>Additional costs include <strong>$0.015</strong> per minute for <strong>cloud recordings</strong> and <strong>$0.030</strong> per minute for <strong>RTMP output</strong>. </li><li>Plus, they provide <strong>free 24/7 customer support</strong> to assist you with any inquiries or technical needs. </li><li>Upgrade your video capabilities today and embark on a new level of excellence!</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/amazon-chime-sdk-vs-videosdk"><strong>AWS Chime and VideoSDK</strong></a><strong>.</strong></blockquote>
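<p>To put the per-minute rates above in perspective, here is a minimal sketch of how a session's cost adds up under a per-participant-per-minute model. The function name is illustrative, and the flat recording rate is an assumption taken only from the prices quoted in this article; this is not an official VideoSDK pricing calculator.</p><pre><code class="language-js">// Estimate the cost of one session under the per-participant-per-minute
// pricing model described above. Rates quoted in this article (assumptions):
//   video: $0.003 / participant / minute, audio: $0.0006 / participant / minute,
//   cloud recording: $0.015 / minute (assumed billed once per session).
function estimateSessionCost({ participants, minutes, ratePerMinute, recording = false }) {
  const callCost = participants * minutes * ratePerMinute;
  const recordingCost = recording ? minutes * 0.015 : 0;
  // Round to avoid floating-point noise in the displayed total.
  return Number((callCost + recordingCost).toFixed(4));
}

// A 10-participant, 60-minute video call with cloud recording:
// 10 * 60 * 0.003 + 60 * 0.015 = 1.8 + 0.9 = 2.7 dollars
console.log(estimateSessionCost({ participants: 10, minutes: 60, ratePerMinute: 0.003, recording: true }));</code></pre><p>The same function can be used with the audio rate ($0.0006) to compare audio-only sessions against the per-1,000-minute bundles that some competitors below advertise.</p>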
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-twilio-video-reliable-video-conferencing-solution">2. Twilio Video: Reliable Video Conferencing Solution</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Communication-APIs-for-SMS-Voice-Video-Authentication_twilio-3.jpeg" class="kg-image" alt="Top 10 Amazon Chime Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li><strong>Twilio</strong> provides SDKs for web, iOS, and Android, allowing seamless integration of its services into applications. </li><li>However, incorporating multiple audio and video inputs requires manual code implementation. </li><li>Call insights help analyze errors, and Twilio supports <strong>up to 50 hosts and participants</strong> in a call. </li><li>It does not offer simplified product scaffolding, and handling call disruptions requires <strong>hard-coded</strong> custom logic. </li></ul><h3 id="twilio-pricing">Twilio pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/twilio-video-alternative">Twilio</a>'s <a href="https://www.twilio.com/en-us/video/pricing">pricing</a> starts at <strong>$4</strong> per 1,000 minutes, and <strong>free</strong> support includes <strong>API status notifications</strong> and <strong>email support</strong> during <strong>business hours</strong>. </li><li><strong>Additional services</strong> may come at an <strong>extra cost</strong>.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-amazon-chime-sdk"><strong>AWS Chime and Twilio</strong></a><strong>.</strong></blockquote><h2 id="3-mirrorfly-comprehensive-self-hosted-solution">3. MirrorFly: Comprehensive Self-Hosted Solution</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Live-Video-Call-API-Best-Video-Chat-SDK-for-Android-iOS-mirrorfly-3.jpeg" class="kg-image" alt="Top 10 Amazon Chime Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>MirrorFly is a feature-rich in-app communication suite designed for enterprises. </li><li>It offers intuitive APIs and SDKs that enable a top-notch chat and calling experience. </li><li>With a wide range of over 150 chat, voice, and video calling features, this cloud-based solution seamlessly integrates to provide a strong communication platform.</li></ul><h3 id="mirrorfly-pricing">MirrorFly pricing</h3>
<ul><li>It's worth mentioning that MirrorFly's <a href="https://www.mirrorfly.com/pricing.php">pricing</a> starts at <strong>$299</strong> per month, making it a higher-cost option to consider.</li></ul><h2 id="4-agora-high-performance-cpaas-with-limitations">4. Agora: High-Performance CPaaS with Limitations</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Agora-Real-Time-Voice-and-Video-Engagement-3.jpeg" class="kg-image" alt="Top 10 Amazon Chime Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>Agora is widely recognized for its robust real-time video conferencing capabilities. </li><li>While it offers powerful features, it's important to note a few <strong>limitations</strong>. <strong>Customization options</strong> are limited, and occasional <strong>performance issues</strong> may arise. </li><li>Advanced features like <strong>recording</strong> or <strong>transcription</strong> may come with <strong>additional costs</strong>. </li></ul><h3 id="agora-pricing">Agora pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/agora-alternative"><strong>Agora</strong></a>'s <a href="https://www.agora.io/en/pricing/">pricing</a> starts at <strong>$3.99</strong> per 1,000 minutes for <strong>video calling</strong> and <strong>$0.99</strong> per 1,000 minutes for <strong>voice calling</strong>.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-amazon-chime-sdk"><strong>AWS Chime and Agora</strong></a><strong>.</strong></blockquote><h2 id="5-vonage-versatile-api-for-live-video-and-audio">5. Vonage: Versatile API for Live Video and Audio</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-API-Fully-Programmable-and-Customizable-Vonage-2.jpeg" class="kg-image" alt="Top 10 Amazon Chime Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>OpenTok, now known as the Vonage Video API, is a reliable API for integrating live video and audio into web and mobile apps. </li><li>It supports <strong>up to 25 participants</strong> in video conferences and offers collaborative features like chat and whiteboard. </li><li><a href="https://www.videosdk.live/blog/vonage-alternative"><strong>Vonage</strong></a> has strong audio capabilities with dial-in numbers for <strong>60 countries</strong> and supports <strong>up to 200 participants</strong> in audio conferences.</li><li><strong>Call quality</strong> is generally <strong>satisfactory</strong>, but occasional <strong>audio issues</strong> have been reported.</li></ul><h3 id="vonage-pricing">Vonage pricing</h3>
<ul><li>Vonage pricing starts at <strong>$9.99 </strong>per month with 2000 free minutes, and <strong>additional costs</strong> apply for <strong>advanced features</strong>.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/vonage-vs-amazon-chime-sdk"><strong>AWS Chime and Vonage</strong></a><strong>.</strong></blockquote><h2 id="6-apirtc-easy-webrtc-integration-for-developers">6. ApiRTC: Easy WebRTC Integration for Developers</h2>
<ul><li>ApiRTC is an outstanding Platform as a Service (PaaS) that simplifies developers' access to WebRTC technology. </li><li>Their user-friendly API allows the seamless integration of real-time multimedia interactions <a href="https://www.ileeline.com/mobile-vs-desktop-statistics/">into websites and mobile</a> apps with minimal code. </li></ul><h3 id="apirtc-pricing">ApiRTC pricing</h3>
<ul><li>It's important to mention that the <strong>basic subscription</strong> for ApiRTC starts at <strong>$54.37</strong>, which may be considered relatively high for smaller developers.</li></ul><h2 id="7-jitsi-open-source-video-conferencing">7. Jitsi: Open-Source Video Conferencing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Free-Video-Conferencing-Software-for-Web-Mobile-Jitsi-4.jpeg" class="kg-image" alt="Top 10 Amazon Chime Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>Jitsi is a remarkable open-source suite that offers video conferencing solutions.</li><li>Jitsi Meet is a feature-rich platform with screen sharing and collaboration capabilities, accessible through web browsers and mobile apps. </li><li><a href="https://www.videosdk.live/blog/jitsi-alternative"><strong>Jitsi</strong></a> Videobridge is the media server (SFU) that routes video streams between participants. It's important to note that Jitsi is free, open-source, and provides end-to-end encryption.</li><li>However, call recording and support may require additional steps. It's worth mentioning that Jitsi does not manage user bandwidth during network issues.</li><li>While it's 100% open source and <strong>free</strong> to use, setting up servers and designing the user interface is necessary.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/amazon-chime-sdk-vs-jitsi"><strong>AWS Chime and Jitsi</strong></a><strong>.</strong></blockquote><h2 id="8-signalwire-flexible-video-call-integration">8. SignalWire: Flexible Video Call Integration</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Building-The-Software-Defined-Telecom-Network-SignalWire-3.jpeg" class="kg-image" alt="Top 10 Amazon Chime Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>SignalWire is a platform that simplifies the integration of video into applications. </li><li>It supports video calls with <strong>up to 100 participants</strong> and provides SDKs for web, iOS, and Android apps. </li><li>However, developers are responsible for managing disruptions and user logic on their own. </li><li><a href="https://signalwire.com/pricing/video">Pricing</a> is based on per-minute usage, with options available for both HD and Full HD calls. </li><li>Additional features, such as <strong>recording</strong> and <strong>streaming</strong>, can be added at <strong>separate rates</strong>.</li></ul><h2 id="9-enablex-tailored-video-solutions-for-developers">9. EnableX: Tailored Video Solutions for Developers</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Call-API-Video-Chat-API-Voice-API-Video-Conferencing_enebleX-4.jpeg" class="kg-image" alt="Top 10 Amazon Chime Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>EnableX provides convenient SDKs for incorporating live video, voice, and messaging functionalities, making it easier to create immersive live experiences within applications. </li><li>It caters to service providers, ISVs, SIs, and developers, offering flexibility to customize video-calling solutions and personalized live video streams. </li><li>The SDK supports JavaScript, PHP, and Python, empowering developers to work with their preferred programming languages. </li><li><a href="https://www.enablex.io/cpaas/pricing/our-pricing"><strong>Pricing</strong></a> begins at <strong>$0.004</strong> per participant minute for <strong>up to 50 participants</strong>, and there are <strong>additional charges</strong> for services like <strong>recording</strong>, <strong>transcoding</strong>, <strong>storage</strong>, and <strong>RTMP streaming</strong>.</li></ul><h2 id="10-whereby-browser-based-meetings-with-simple-setup">10. Whereby: Browser-Based Meetings with Simple Setup</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Calling-API-for-Web-and-App-Developers-Whereby-4.jpeg" class="kg-image" alt="Top 10 Amazon Chime Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>Whereby is a browser-based meeting platform that offers permanent rooms for users. </li><li>Joining meetings is hassle-free with a simple click; no downloads or registrations are needed. </li><li>They introduced a hybrid meeting solution that reduces echo and eliminates the need for expensive hardware. </li><li><strong>Customization </strong>options for the video interface are <strong>limited</strong>. Data privacy is a priority, and <a href="https://whereby.com/information/pricing"><strong>pricing</strong></a><strong> plans</strong> start at <strong>$6.99</strong> per month with additional charges for extra usage.</li></ul><h2 id="certainly">Which Alternative Stands Out?</h2>
<p>Among the mentioned video conferencing SDKs, <a href="https://www.videosdk.live/">VideoSDK</a> stands out for its emphasis on fast and seamless integration. It offers a low-code solution that enables developers to quickly build live video experiences in their applications. With VideoSDK, custom video conferencing solutions can be created and deployed in under 10 minutes, reducing integration time and effort. Unlike other SDKs with longer integration times or limited customization options, VideoSDK prioritizes a streamlined process. By leveraging Video SDK, developers can effortlessly create and embed live video experiences, facilitating real-time connections, communication, and collaboration for users.</p><h2 id="faqs">FAQs</h2><p><strong>Q1: What makes VideoSDK stand out among the top 10 AWS Chime Alternatives?</strong></p><p>VideoSDK stands out for its emphasis on fast and seamless integration. It offers a low-code solution that enables developers to build live video experiences in under 10 minutes.</p><p><strong>Q2: How does VideoSDK simplify the integration process compared to other SDKs?</strong></p><p>VideoSDK prioritizes a streamlined process, allowing developers to effortlessly create and embed live video experiences. This reduces integration time and effort compared to other SDKs.</p><p><strong>Q3: What sets VideoSDK apart in terms of features and benefits for video conferencing applications?</strong></p><p>VideoSDK offers a comprehensive set of features, including high scalability, adaptive bitrate technology, end-to-end customization, superior quality recordings, in-depth analytics, and cross-platform support, enhancing the overall video conferencing experience.</p><h2 id="still-skeptical">Still skeptical?</h2>
<p>Explore VideoSDK's comprehensive <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart guide</a> and discover the potential with our exclusive <a href="https://docs.videosdk.live/code-sample">sample app</a>. <a href="https://app.videosdk.live/">Sign up</a> today to begin your integration journey and claim your <a href="https://www.videosdk.live/pricing">complimentary $20 free credit</a>, unlocking the full power of VideoSDK. Our dedicated team is always ready to assist you whenever needed. Prepare to witness the incredible experiences you can create using VideoSDK's exceptional capabilities. Unleash your creativity and showcase your creations to the world!</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Active Speaker Indication in React Native Video Calling App?]]></title><description><![CDATA[Integrating Active Speaker Indication in a React Native video call app for Android enhances user experience by providing visual cues for identifying the current speaker.]]></description><link>https://www.videosdk.live/blog/active-speaker-indication-in-react-native-video-call-app</link><guid isPermaLink="false">660f78902a88c204ca9cfccc</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[React Native]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Mon, 20 Jan 2025 12:38:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-React-Native.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-React-Native.jpg" alt="How to Integrate Active Speaker Indication in React Native Video Calling App?"/><p>Consider a video call with numerous participants in a discussion. 
Wouldn't it be helpful to be able to easily identify who is speaking at any given moment throughout the constantly changing sequence of ideas? </p><p>By including active speaker indication in your React Native video call app, you can provide users with visual signals that make it simpler for them to follow conversations and stay interested, resulting in a more seamless and engaging communication experience.</p><p>By integrating Active Speaker Indication, users can easily identify who is speaking, leading to smoother conversations and reduced confusion. Active speaker indication also simplifies the user interface, making it more intuitive and user-friendly. </p><p>In group calls, participants can quickly focus on the current speaker, making meetings and discussions more efficient. Users feel more connected when they can easily identify who is speaking, which increases engagement and participation.</p><p>This feature can be incredibly useful in a variety of circumstances, including remote work setups, virtual classrooms, webinars, and social gatherings held via video conferences.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the Active Speaker functionality, we must use the capabilities that VideoSDK offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required VideoSDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.
For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites">Prerequisites</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li><li>Android Studio or Xcode installed</li></ul><h2 id="%E2%AC%87%EF%B8%8F-integrate-videosdk%E2%80%8B"><strong>⬇️ </strong>Integrate VideoSDK<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#videosdk-installation">​</a></h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the Active Speaker feature. Install VideoSDK using NPM or Yarn, depending on your project's setup.</p><ul><li>For NPM </li></ul><pre><code class="language-bash">npm install "@videosdk.live/react-native-sdk"  "@videosdk.live/react-native-incallmanager"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-bash">yarn add "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"</code></pre><h3 id="project-configuration">Project Configuration</h3><p>Before integrating the Active Speaker functionality, ensure that your project is correctly prepared for the integration. This setup consists of a sequence of steps for configuring permissions, dependencies, and platform-specific parameters so that VideoSDK can function seamlessly inside your application.</p><h3 id="android-setup">Android Setup</h3>
<ul><li>Add the required permissions in the <code>AndroidManifest.xml</code> file.</li></ul><pre><code class="language-xml">&lt;manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
&gt;
    &lt;!-- Give all the required permissions to app --&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) --&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" /&gt;

    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)--&gt;
    &lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;

    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
    &lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;

    &lt;application&gt;
   &lt;meta-data
      android:name="live.videosdk.rnfgservice.notification_channel_name"
      android:value="Meeting Notification"
     /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_channel_description"
    android:value="Whenever meeting started notification will appear."
    /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_color"
    android:resource="@color/red"
    /&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"&gt;&lt;/service&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"&gt;&lt;/service&gt;
  &lt;/application&gt;
&lt;/manifest&gt;</code></pre><ul><li>Update your <code>colors.xml</code> file for internal dependencies: </li></ul><figure class="kg-card kg-code-card"><pre><code class="language-xml">&lt;resources&gt;
  &lt;item name="red" type="color"&gt;
    #FC0303
  &lt;/item&gt;
  &lt;integer-array name="androidcolors"&gt;
    &lt;item&gt;@color/red&lt;/item&gt;
  &lt;/integer-array&gt;
&lt;/resources&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/src/main/res/values/colors.xml</span></p></figcaption></figure><ul><li>Link the necessary VideoSDK dependencies in <code>android/app/build.gradle</code>:</li></ul><pre><code class="language-groovy">  dependencies {
   implementation project(':rnwebrtc')
   implementation project(':rnfgservice')
  }</code></pre><figure class="kg-card kg-code-card"><pre><code class="language-groovy">include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/settings.gradle</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-java">import live.videosdk.rnwebrtc.WebRTCModulePackage;
import live.videosdk.rnfgservice.ForegroundServicePackage;

public class MainApplication extends Application implements ReactApplication {
  private static List&lt;ReactPackage&gt; getPackages() {
      @SuppressWarnings("UnnecessaryLocalVariable")
      List&lt;ReactPackage&gt; packages = new PackageList(this).getPackages();
      // Packages that cannot be autolinked yet can be added manually here, for example:

      packages.add(new ForegroundServicePackage());
      packages.add(new WebRTCModulePackage());

      return packages;
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MainApplication.java</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">/* This one fixes a weird WebRTC runtime problem on some devices. */
android.enableDexingArtifactTransform.desugaring=false</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/gradle.properties</span></p></figcaption></figure><ul><li>Include the following line in your <code>proguard-rules.pro</code> file (optional: if you are using Proguard)</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">-keep class org.webrtc.** { *; }</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/proguard-rules.pro</span></p></figcaption></figure><ul><li>In your <code>build.gradle</code> file, update the minimum OS/SDK version to <code>23</code>.</li></ul><pre><code class="language-js">buildscript {
  ext {
      minSdkVersion = 23
  }
}</code></pre><h3 id="ios-setup%E2%80%8B">iOS Setup​</h3>
<blockquote>IMPORTANT: Ensure that you are using CocoaPods version 1.10 or later.</blockquote><ul><li>To update CocoaPods, you can reinstall the gem using the following command:</li></ul><pre><code class="language-bash">$ sudo gem install cocoapods</code></pre><ul><li>Manually link react-native-incall-manager (if it is not linked automatically).</li></ul><p>Select <code>Your_Xcode_Project/TARGETS/BuildSettings</code>, and in Header Search Paths, add <code>"$(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager"</code></p><ul><li>Change the path of <code>react-native-webrtc</code> using the following command:</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-ruby">pod 'react-native-webrtc', :path =&gt; '../node_modules/@videosdk.live/react-native-webrtc'</code></pre><figcaption><p><span style="white-space: pre-wrap;">Podfile</span></p></figcaption></figure><ul><li>Change the version of your platform.</li></ul><p>You need to change the platform field in the Podfile to 12.0 or above, because <strong>react-native-webrtc</strong> doesn't support iOS versions earlier than 12.0. Update the line to: <code>platform :ios, '12.0'</code>.</p><ul><li>Install pods.</li></ul><p>After updating the version, install the pods by running the following command:</p><pre><code class="language-bash">pod install</code></pre><ul><li>Add the <strong><code>libreact-native-webrtc.a</code></strong> binary.</li></ul><p>Add the <strong><code>libreact-native-webrtc.a</code></strong> binary to the "Link Binary With Libraries" section in the target of your main project folder.</p><ul><li>Declare permissions in <strong>Info.plist</strong>:</li></ul><p>Add the following lines to your <code>Info.plist</code> file:</p><figure class="kg-card kg-code-card"><pre><code class="language-xml">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">project folder/ios/projectname/info.plist</span></p></figcaption></figure><h4 id="register-service">Register Service</h4>
<p>Register the VideoSDK services in your root <code>index.js</code> file so they are initialized before the app starts.</p><pre><code class="language-js">import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { register } from "@videosdk.live/react-native-sdk";

register();

AppRegistry.registerComponent(appName, () =&gt; App);</code></pre><h2 id="essential-steps-for-building-the-video-calling-functionality">Essential Steps for Building the Video Calling Functionality</h2><p>By following a few essential steps, you can seamlessly implement video in your applications with VideoSDK, which provides a robust set of tools and APIs for integrating video capabilities.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with api.js<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apijs">​</a></h3><p>Before moving on, you must make an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a>.</p><pre><code class="language-js">export const token = "&lt;Generated-from-dashboard&gt;";
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });

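  // NOTE: illustrative addition, not part of the original snippet.
  // If the token is invalid or expired the rooms API returns a non-2xx
  // status and `roomId` would come back undefined, so it is safer to
  // fail fast here:
  if (!res.ok) {
    throw new Error(`Room creation failed with status ${res.status}`);
  }
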
  const { roomId } = await res.json();
  return roomId;
};</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of <em>App.js</em>, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as <em>join</em>, <em>leave</em>, <em>enable</em>/<em>disable</em> the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. 
It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>, <strong>webcamStream</strong>, <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <em>App.js</em> file.</p><pre><code class="language-js">import React, { useState } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) =&gt; {
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}&gt;
      &lt;MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      &gt;
        &lt;MeetingView /&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  ) : (
    &lt;JoinScreen getMeetingId={getMeetingId} /&gt;
  );
}</code></pre><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-3--implement-join-screen">​</a></h3><p>The join screen lets the user either create a new meeting or join an existing one.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function JoinScreen(props) {
  const [meetingVal, setMeetingVal] = useState("");
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Create Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      /&gt;
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() =&gt; {
          props.getMeetingId(meetingVal);
        }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Join Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/SafeAreaView&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">JoinScreen Component</span></p></figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>The next step is to create a <code>ControlsContainer</code> component to manage actions such as joining/leaving a meeting and enabling/disabling the webcam or mic.</p><p>In this step, the <code>useMeeting</code> hook is used to acquire the required methods: <code>join()</code>, <code>leave()</code>, <code>toggleWebcam()</code>, and <code>toggleMic()</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const Button = ({ onPress, buttonText, backgroundColor }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: "center",
        alignItems: "center",
        padding: 12,
        borderRadius: 4,
      }}
    &gt;
      &lt;Text style={{ color: "white", fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

function ControlsContainer({ join, leave, toggleWebcam, toggleMic }) {
  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
          join();
        }}
        buttonText={"Join"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ControlsContainer Component</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantList() {
  return null;
}
function MeetingView() {
  const { join, leave, toggleWebcam, toggleMic, meetingId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {meetingId ? (
        &lt;Text style={{ fontSize: 18, padding: 12 }}&gt;
          Meeting Id: {meetingId}
        &lt;/Text&gt;
      ) : null}
      &lt;ParticipantList /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView Component</span></p></figcaption></figure><h3 id="step-5-render-participant-list%E2%80%8B">Step 5: Render Participant List<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-5--render-participant-list">​</a></h3><p>After implementing the controls, the next step is to render the joined participants.</p><p>You can get all the joined <code>participants</code> from the <code>useMeeting</code> Hook.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length &gt; 0 ? (
    &lt;FlatList
      data={participants}
      renderItem={({ item }) =&gt; {
        return &lt;ParticipantView participantId={item} /&gt;;
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 20 }}&gt;Press Join button to enter meeting.&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantList Component</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { join, leave, toggleWebcam, toggleMic, participants } = useMeeting({});
  const participantsArrId = [...participants.keys()];
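  // `participants` is a Map keyed by participantId, so spreading its keys
  // produces a plain array of ids for FlatList to render.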

  return (
    &lt;View style={{ flex: 1 }}&gt;
      &lt;ParticipantList participants={participantsArrId} /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView Component</span></p></figcaption></figure><h3 id="step-6-handling-participants-media%E2%80%8B">Step 6: Handling Participant's Media<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-6--handling-participants-media">​</a></h3><p>Before handling a participant's media, you need to understand two concepts.</p><h4 id="1-useparticipant-hook">1. useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant in the meeting. It takes a <code>participantId</code> as an argument.</p><pre><code class="language-js">const { webcamStream, webcamOn, displayName } = useParticipant(participantId);</code></pre><h4 id="2-mediastream-api">2. MediaStream API</h4>
<p>The MediaStream API wraps a MediaTrack so it can be passed to the <code>RTCView</code> component, enabling the playback of audio or video.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">MediaStream API Example</span></p></figcaption></figure><h4 id="rendering-participant-media">Rendering Participant Media</h4>
<figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);

  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantView Component</span></p></figcaption></figure><p>Congratulations! By following these steps, you now have basic video calling working in your application. Next, let's integrate a feature that makes the experience more immersive for your users.</p><h2 id="integrate-active-speaker-indication-feature">Integrate Active Speaker Indication Feature</h2><p>The Active Speaker Indication feature allows you to identify the participant who is currently the active speaker in a meeting. This feature proves especially valuable in larger meetings or webinars, where numerous participants can make it challenging to identify the active speaker.</p><p>Whenever any participant speaks in a meeting, the <code>onSpeakerChanged</code> event will trigger, providing the participant ID of the active speaker.</p><p>For example, suppose a meeting is running with <strong>Alice</strong> and <strong>Bob</strong>. Whenever either of them speaks, the <code>onSpeakerChanged</code> event will trigger and return the speaker's <code>participantId</code>.</p><pre><code class="language-js">import { useMeeting } from "@videosdk.live/react-native-sdk";

const MeetingView = () =&gt; {
  /** useMeeting hooks events */
  const {
    /** Methods */
  } = useMeeting({
    onSpeakerChanged: (activeSpeakerId) =&gt; {
      console.log("Active Speaker participantId", activeSpeakerId);
    },
  });
};</code></pre><p>To integrate the Active Speaker Indication feature into your app, you can refer to the UI provided in the VideoSDK React Native SDK example repository available at <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example">GitHub - videosdk-live/videosdk-rtc-react-native-sdk-example</a>. The provided UI showcases how Active Speaker Indication can be seamlessly integrated into your app's interface, providing visual cues to highlight the active speaker during video calls.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - videosdk-live/videosdk-rtc-react-native-sdk-example: WebRTC based video conferencing SDK for React Native (Android / iOS)</div><div class="kg-bookmark-description">WebRTC based video conferencing SDK for React Native (Android / iOS) - videosdk-live/videosdk-rtc-react-native-sdk-example</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate Active Speaker Indication in React Native Video Calling App?"/><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/f57930fc172abe10bc434d0849dff6a41a10f17023b7af84e47d6e6229939cb2/videosdk-live/videosdk-rtc-react-native-sdk-example" alt="How to Integrate Active Speaker Indication in React Native Video Calling App?" onerror="this.style.display = 'none'"/></div></a></figure><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-native-video-calling-app">✨ Want to Add More Features to React Native Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React Native video-calling app,</p><p><strong>
Check out these additional resources:</strong></p><ul><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-in-react-native-video-app">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-native-for-android-app">Link</a></li><li>Screen Share Feature in Android: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-android-video-call-app">Link</a></li><li>Screen Share Feature in iOS: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-ios-video-call-app">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-native-video-call-app">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/picture-in-picture-pip-in-react-native">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>Integrating Active Speaker Indication enriches a React Native video call app, enhancing communication by giving users intuitive cues to identify the active speaker. Use cases extend to virtual meetings, online classes, and remote collaboration, where clear identification of the active speaker streamlines interactions and boosts productivity.</p><p>Additionally, it improves accessibility for users with hearing impairments, facilitating a more inclusive experience for all participants.</p><p>If you are new here and want to build an interactive React Native app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>. 
This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[What is Codec Switching?]]></title><description><![CDATA[The article explores the role of codecs, the importance of codec switching, common codecs used in streaming, and how the switching process works, highlighting its significance in video conferencing, VoIP services, and streaming platforms.]]></description><link>https://www.videosdk.live/blog/what-is-codec-switching</link><guid isPermaLink="false">66d55ca320fab018df10fe33</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Mon, 20 Jan 2025 06:57:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-Codec-Switching_-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-Codec-Switching_-2.jpg" alt="What is Codec Switching?"/><p>Codec switching refers to the dynamic process of changing the codec (coder-decoder) used for encoding or decoding audio and video streams during transmission. This technology is important in streaming media, where fluctuating network conditions and diverse device specifications necessitate on-the-fly adjustments to maintain optimal performance and user satisfaction.</p><h2 id="the-role-of-codecs-in-streaming">The Role of Codecs in Streaming</h2><p>Before we delve further into codec switching, it's essential to understand the fundamental role of <a href="https://www.streamingmedia.com/Articles/ReadArticle.aspx?ArticleID=74487">codecs</a> in the streaming ecosystem. 
A codec is a specialized technology that compresses and decompresses multimedia files, facilitating efficient transmission and playback.</p><p>Codecs can be categorized into two main types:</p><ol><li><strong>Lossy Codecs</strong>: These are more commonly used in streaming due to their ability to significantly reduce file sizes while sacrificing some quality.</li><li><strong>Lossless Codecs</strong>: These preserve the original quality but result in larger file sizes.</li></ol><h2 id="the-importance-of-codec-switching">The Importance of Codec Switching</h2><h3 id="1-enabling-adaptive-streaming">1. Enabling Adaptive Streaming</h3><p><a href="https://getstream.io/glossary/video-codecs/">Adaptive bitrate streaming</a> heavily relies on codec switching. In this approach, multiple versions of a video are encoded at different bitrates. The streaming service can then seamlessly switch between these versions based on the viewer's current internet speed and device capabilities, ensuring a smooth viewing experience without buffering interruptions.</p><h3 id="2-optimizing-quality">2. Optimizing Quality</h3><p>Codec switching allows streaming platforms to deliver higher-quality streams when bandwidth permits and revert to lower-quality streams when bandwidth is constrained. This flexibility is crucial for maintaining user satisfaction across various network conditions.</p><h3 id="3-enhancing-device-compatibility">3. Enhancing Device Compatibility</h3><p>Different devices often support different codecs. 
By implementing codec switching, streaming services can deliver the most compatible format for each user's device, enhancing accessibility and overall user experience.</p><h2 id="common-codecs-in-streaming">Common Codecs in Streaming</h2><p>Several codecs are widely used in the streaming industry, each with its strengths:</p><ul><li><a href="https://corp.kaltura.com/blog/video-codec/"><strong>H.264</strong></a>: This codec is extensively used for video streaming due to its excellent balance of quality and compression efficiency.</li><li><strong>H.265 (HEVC)</strong>: Offering better compression than H.264, this codec is particularly suitable for high-resolution content like 4K streaming.</li><li><strong>AV1</strong>: An emerging open-source codec that promises improved efficiency and quality, potentially replacing older codecs.</li><li><strong>AAC</strong>: This codec is commonly used for audio and provides good quality at lower bitrates.</li></ul><h2 id="how-codec-switching-works">How Codec Switching Works</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/09/Flowchart-of-the-adaptive-codec-switching-1.png" class="kg-image" alt="What is Codec Switching?" loading="lazy" width="2570" height="1108"/></figure><p>The process of codec switching involves several key steps:</p><ol><li><strong>Network Monitoring</strong>: The system continuously monitors network conditions such as bandwidth, latency, jitter, and packet loss.</li><li><strong>Decision Making</strong>: Based on the monitored data, the system decides whether to switch codecs to maintain optimal performance.</li><li><strong>Seamless Transition</strong>: The switch occurs in real time, often with minimal disruption to the user. 
This is achieved by temporarily buffering the live media stream and gradually transitioning from one codec to another.</li><li><strong>Negotiation</strong>: During a session, devices negotiate which codecs are supported and can be switched to during the live streaming process.</li></ol><h2 id="use-cases-of-codec-switching">Use Cases of Codec Switching</h2><p>Codec switching finds applications in various domains:</p><ul><li><a href="https://www.videosdk.live/audio-video-conferencing"><strong>Video Conferencing</strong></a>: Ensures uninterrupted communication during fluctuating internet conditions.</li><li><a href="https://www.videosdk.live/developer-hub/sip/voip"><strong>VoIP Services</strong></a>: Enhances call quality by adapting codecs based on real-time network analysis.</li><li><a href="https://www.videosdk.live/interactive-live-streaming"><strong>Streaming Platforms</strong></a>: Optimizes the quality of live streams by adjusting to viewers' available bandwidth.</li></ul><p>As streaming continues to dominate the digital media landscape, technologies like codec switching play an increasingly vital role in ensuring high-quality, uninterrupted viewing experiences. By dynamically adapting to network conditions and device capabilities, codec switching allows streaming services to deliver optimal performance across a wide range of scenarios. 
As codecs continue to evolve and improve, we can expect even more efficient and seamless streaming experiences in the future.</p>]]></content:encoded></item><item><title><![CDATA[Best Amazon Chime SDK Competitors in 2025]]></title><description><![CDATA[Explore AWS Chime SDK's competition by comparing AWS Chime SDK with its competitors in the realm of real-time communication.]]></description><link>https://www.videosdk.live/blog/amazon-chime-sdk-competitors</link><guid isPermaLink="false">64b525319eadee0b8b9e71d1</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 20 Jan 2025 06:19:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/08/AWZ-Chime-competitors-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/08/AWZ-Chime-competitors-1.jpg" alt="Best Amazon Chime SDK Competitors in 2025"/><p>If you're currently exploring <strong>video communication APIs</strong>, the Amazon Chime SDK stands out as a noteworthy option among several contenders that might align with your requirements. It offers a robust set of features and functionalities that could potentially meet your needs.</p><p>However, navigating the landscape of <a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative">alternatives to Amazon Chime</a> SDK and finding the ideal fit can indeed be a complex task, given the diverse array of choices available. Before delving into the world of video API offerings, it's crucial to establish your project's budget, primary use case, and the essential features you require.</p><p>In the dynamic realm of video conferencing, choosing the right SDK is paramount for a seamless virtual collaboration experience. This article delves into the landscape of Amazon Chime SDK competitors in 2025, offering insights into key players and helping you make informed decisions for your conferencing needs.</p><h2 id="amazon-chime-sdk">Amazon Chime SDK</h2>
<p>AWS Chime SDK stands out as a versatile solution, providing developers with powerful tools to embed video and audio calling, screen sharing, and messaging into applications. As the technological landscape evolves, exploring the alternatives becomes imperative.</p><h3 id="key-points-about-amazon-chime-sdk">Key points about Amazon Chime SDK</h3>
<ul><li>The Amazon Chime SDK allows for video meetings with a <strong>maximum of 25 participants</strong> (50 for mobile users), making it a suitable platform for effective collaboration among users. </li><li>However, it's important to note that some specific features are <strong>not available</strong> in the Amazon Chime SDK, such as <strong>polling</strong>, <strong>auto-sync with Google Calendar</strong>, and <strong>background blur</strong> effects. </li><li>This limitation might impact users who require these functionalities for their applications.</li><li>Additionally, there have been reported <strong>compatibility issues</strong> in Linux environments, and participants using the Safari browser might encounter <strong>challenges</strong> while using the SDK. </li><li>These issues can potentially affect the overall user experience, especially for those using this platform.</li><li>It's worth mentioning that <strong>customer support experiences</strong> with the Amazon Chime SDK <strong>can vary</strong>. </li><li>Some users have reported <strong>inconsistent</strong> query resolution times, and the <strong>quality of support</strong> can depend on the specific support agent assigned to the case. </li><li>This aspect might impact the level of assistance users receive when troubleshooting issues or seeking guidance.</li></ul><h3 id="amazon-chime-sdk-pricing">Amazon Chime SDK pricing</h3>
<p>Amazon Chime SDK offers a <strong>tiered </strong><a href="https://aws.amazon.com/chime/pricing/"><strong>pricing</strong></a><strong> model</strong> that caters to users with varying collaboration needs:</p><ol><li><strong>Free Basic Plan</strong>: The free basic plan enables users to engage in <strong>one-on-one audio/video calls</strong> and <strong>group chats</strong> without any cost. This plan provides essential communication features for users who require simple interactions.</li><li><strong>Plus Plan</strong>: For users who need <strong>additional features and functionalities</strong>, there is the Plus plan available at <strong>$2.50</strong> per month per user. This plan includes additions such as <strong>screen sharing</strong>, <strong>remote desktop control</strong>, <strong>1 GB of message history per user</strong>, and <strong>integration with Active Directory</strong>.</li><li><strong>Pro Plan</strong>: The Pro plan, priced at <strong>$15</strong> per user per month, is designed for more extensive collaboration needs. This comprehensive plan encompasses all the features of the Plus plan and allows for meetings with <strong>three or more participants</strong>. It's suitable for larger group discussions, presentations, and more complex collaboration scenarios.</li></ol><h2 id="direct-comparison-aws-chime-sdk-vs-top-competitors">Direct Comparison: AWS Chime SDK vs Top Competitors</h2>
<p>The <strong>top competitors of AWS Chime SDK </strong>are VideoSDK,  100ms, WebRTC, <a href="https://www.videosdk.live/blog/jitsi-competitors">Jitsi</a>, Daily, and LiveKit.</p><p>Let's compare Chime with each of the above competitors.</p><h2 id="1-aws-chime-sdk-vs-videosdk">1. AWS Chime SDK vs VideoSDK</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/AWS-Chime-vs-Videosdk.png" class="kg-image" alt="Best Amazon Chime SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>VideoSDK offers developers a seamless API that simplifies incorporating robust, scalable, and dependable audio-video capabilities into their applications. With only a few lines of code, developers can introduce live audio and video experiences to various platforms within minutes. One of the primary benefits of opting for the <a href="https://www.videosdk.live/">VideoSDK</a> is its remarkable ease of integration. This characteristic enables developers to concentrate their efforts on crafting innovative features that contribute to enhanced user engagement and retention.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>AWS Chime SDK pricing</strong></td>
        <td><strong>Video SDK pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$1.7</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td>
            <strong>$15</strong> per 1,000 minutes, No limit on participants
        </td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$12.5</strong> per 1,000 minutes</td>
        <td>
            <strong>$15</strong> per 1,000 minutes, No limit on participants
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
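<p>To put the per-minute rates above into perspective, here is a quick back-of-the-envelope calculation in plain JavaScript. The rates come from the comparison table; the 100,000-minute monthly volume is a hypothetical example.</p><pre><code class="language-js">// Rates in the table are quoted per 1,000 minutes of usage.
function costFor(minutes, ratePer1000) {
  return (minutes / 1000) * ratePer1000;
}

// Hypothetical month with 100,000 minutes of each product:
const chimeVideo = costFor(100000, 1.7);       // 100 x $1.70 = $170
const videosdkVideo = costFor(100000, 2);      // 100 x $2.00 = $200
const chimeRecording = costFor(100000, 12.5);  // 100 x $12.50 = $1,250
const videosdkRecording = costFor(100000, 15); // 100 x $15.00 = $1,500</code></pre><p>At this volume the gap on core video calling is small; the bigger differentiator is that interactive live streaming and RTMP are listed as unavailable (NA) for the Chime SDK in the table above.</p>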
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/amazon-chime-sdk-vs-videosdk">AWS Chime SDK vs Video SDK</a>.</blockquote><h2 id="2-aws-chime-sdk-vs-100ms">2. AWS Chime SDK vs 100ms</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/AWS-Chime-vs-100ms.jpg" class="kg-image" alt="Best Amazon Chime SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p><a href="https://www.videosdk.live/blog/100ms-alternative">100ms</a> offers a cloud platform that empowers developers to effortlessly incorporate video and audio conferencing into a variety of applications, including web, Android, and iOS platforms. This platform is equipped with a range of powerful tools, including REST APIs, software development kits (SDKs), and an intuitive user-friendly dashboard. These tools collectively streamline the process of capturing, distributing, recording, and displaying live interactive audio and video content.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>AWS Chime SDK pricing</strong></td>
        <td><strong>100ms pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$1.7</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td><strong>$40</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$12.5</strong> per 1,000 minutes</td>
        <td><strong>$13.5</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/amazon-chime-sdk-vs-100ms">AWS Chime SDK vs 100ms</a>.</blockquote><h2 id="3-aws-chime-sdk-vs-webrtc">3. AWS Chime SDK vs WebRTC</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/AWS-Chime-vs-WebRTC.jpg" class="kg-image" alt="Best Amazon Chime SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>WebRTC is an open-source project that empowers web browsers to integrate Real-Time Communications (RTC) capabilities through user-friendly JavaScript APIs. The various components of <a href="https://www.videosdk.live/blog/webrtc-alternative">WebRTC</a> have been meticulously optimized to effectively facilitate real-time communication functionalities within web browsers. This initiative enables developers to seamlessly embed features like audio and video calls, chat, file sharing, and more directly into their web applications, creating dynamic and interactive user experiences.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>AWS Chime SDK pricing</strong></td>
        <td><strong>WebRTC pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$1.7</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$12.5</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/amazon-chime-sdk-vs-webrtc">AWS Chime SDK vs WebRTC</a>.</blockquote><h2 id="4-aws-chime-sdk-vs-jitsi">4. AWS Chime SDK vs Jitsi</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/AWS-Chime-vs-Jitsi-Meet.jpg" class="kg-image" alt="Best Amazon Chime SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>Jitsi is an open-source platform that focuses on simplifying the process of video conferencing. It offers a user-friendly experience that doesn't require any downloads or plugins, which makes it an attractive option for individuals or businesses seeking a straightforward and cost-effective solution for live video communication. With <a href="https://www.videosdk.live/blog/jitsi-alternative">Jitsi</a>, users can easily initiate video calls, participate in virtual meetings, and collaborate seamlessly without the need for extensive technical know-how or complex setups.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>AWS Chime SDK pricing</strong></td>
        <td><strong>Jitsi pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$1.7</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$12.5</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/amazon-chime-sdk-vs-jitsi">AWS Chime SDK vs Jitsi</a>.</blockquote><h2 id="5-aws-chime-sdk-vs-daily">5. AWS Chime SDK vs Daily</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/AWS-Chime-vs-Daily.jpg" class="kg-image" alt="Best Amazon Chime SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>Daily is a developer-friendly platform that empowers developers to effortlessly integrate real-time video and audio calls into their applications, directly within web browsers. With <a href="https://www.videosdk.live/blog/daily-co-alternative">Daily</a>'s tools and features, developers can easily handle the complex backend aspects of video calls across different platforms.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>AWS Chime SDK pricing</strong></td>
        <td><strong>Daily pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$1.7</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$1.2</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td><strong>$15</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$12.5</strong> per 1,000 minutes</td>
        <td><strong>$13.49</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/amazon-chime-sdk-vs-daily">AWS Chime SDK vs Daily</a>.</blockquote><h2 id="6-aws-chime-sdk-vs-livekit">6. AWS Chime SDK vs LiveKit</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/AWS-Chime-vs-LiveKIt.jpg" class="kg-image" alt="Best Amazon Chime SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>LiveKit offers a comprehensive solution for developers who want to integrate live video and audio capabilities into their native applications. With its set of software development kits (SDKs), <a href="https://www.videosdk.live/blog/livekit-alternative">LiveKit</a> makes it seamless to incorporate various real-time communication features into your applications, enhancing user engagement and interaction.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>AWS Chime SDK pricing</strong></td>
        <td><strong>LiveKit pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$1.7</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$20</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>
            <strong>$69</strong> per hour (up to 500 viewers only; doesn't support Full HD)
        </td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td>No accurate data available</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$12.5</strong> per 1,000 minutes</td>
        <td>No accurate data available</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/amazon-chime-sdk-vs-livekit">AWS Chime SDK vs LiveKit</a>.</blockquote><h2 id="have-you-determined-whether-chime-aligns-with-your-requirements-or-have-you-found-an-alternative">Have you determined whether Chime aligns with your requirements, or have you found an alternative?</h2>
<p>The AWS Chime SDK alternatives discussed above offer a diverse set of solutions for developers aiming to enhance in-app user experiences. If your requirements extend beyond simple in-app communication and you need a more comprehensive engagement strategy built on voice and video, solutions like <a href="https://www.videosdk.live/signup/">Video SDK</a> may align better with your needs. They provide the tools to create immersive, interactive experiences that go well beyond text-based communication, taking your application's user experience to the next level.</p><p>Given your specific requirements, budget, and must-have features, AWS Chime SDK may not be a perfect fit. Before committing, explore the alternatives discussed above; some, such as Video SDK, offer free trials that let you test their capabilities in real-world projects so you can gauge how well they meet your needs. And if your requirements change over time, you retain the flexibility to move away from AWS Chime SDK to a solution that better suits your evolving needs.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Active Speaker in Android(Java) Video Chat App?]]></title><description><![CDATA[In this article, learn how to use VideoSDK to add an active speaker indicator to your Android (Java) video app.
]]></description><link>https://www.videosdk.live/blog/active-speaker-in-java-video-chat-app</link><guid isPermaLink="false">65fbec9b2a88c204ca9cec05</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Sun, 19 Jan 2025 16:20:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-Android.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-Android.jpg" alt="How to Integrate Active Speaker in Android(Java) Video Chat App?"/><p>Have you ever been on a crowded video call where you can hear the conversation, but you have no idea who is actually talking? Video conferences with numerous participants can be confusing, making it difficult to follow the conversation and engage effectively.</p><p>This is where the <strong>Active Speaker indication</strong> comes in! This is an expert guide for Android developers who want to implement active speaker highlighting in their video apps (built with Java) using VideoSDK.</p><p>Highlighting the active speaker is especially beneficial in large meetings or webinars where numerous participants are present, which can lead to confusion about who is speaking at which time. 
By including this feature, you can improve the user experience on large group calls, encouraging more participation and better engagement.</p><h2 id="goals">Goals</h2><p>By the End of this Article:</p><ol><li>Create a <a href="https://www.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK</li><li>Enable active speaker indication</li></ol><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the active speaker highlighting functionality, we will need to use the capabilities that the VideoSDK offers. Before we dive into the implementation steps, let's make sure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. 
This token plays a crucial role in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/authentication-and-token#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Java Development Kit (JDK) installed.</li><li>Android Studio version 3.0 or later.</li><li>Android SDK API level 21 or higher.</li><li>A mobile device running Android 5.0 or later.</li></ul><h2 id="integrate-videosdk">Integrate VideoSDK</h2><p>Following the account creation and token generation steps, we'll guide you through the process of adding the VideoSDK library and other dependencies to your project. We'll also ensure your app has the required permissions to access features like audio recording, camera usage, and internet connectivity, all crucial for a seamless video experience.</p><h3 id="step-a-add-the-repositories-to-the-projects-settingsgradle-file">Step (a): Add the repositories to the project's <code>settings.gradle</code> file.</h3><pre><code class="language-Java">dependencyResolutionManagement {
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}
</code></pre><h3 id="step-b-include-the-following-dependency-within-your-applications-buildgradle-file">Step (b): Include the following dependency within your application's <code>build.gradle</code> file:</h3><pre><code class="language-Java">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.26'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // Other dependencies specific to your app
}
</code></pre><blockquote>If your project has set <code>android.useAndroidX=true</code>, then set <code>android.enableJetifier=true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflict.</blockquote><h3 id="step-c-add-permissions-to-your-project">Step (c): Add permissions to your project</h3><p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-Java">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
</code></pre><p>These permissions are essential for enabling core functionalities like audio recording, internet connectivity for real-time communication, and camera access for video streams within your video application.</p><h2 id="essential-steps-for-building-the-video-calling-functionality">Essential Steps for Building the Video Calling Functionality</h2><p>Now that your project is set up with VideoSDK, we'll delve into the core functionalities that power your video application. This section outlines the essential steps for implementing them within your app.</p><p>This section will guide you through four key aspects:</p><h3 id="step-1-generate-a-meetingid">Step 1: Generate a <code>meetingId</code></h3><p>Now, we can create a <code>meetingId</code> using VideoSDK's Rooms API. You can refer to this <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/setup-call/initialize-meeting#generating-meeting-id">documentation</a> to generate a <code>meetingId</code>.</p><h3 id="step-2-initializing-the-meeting">Step 2: Initializing the Meeting</h3><p>After getting the <code>meetingId</code>, the next step is initializing the meeting. For that, we need to:</p><ol><li>Initialize VideoSDK.</li><li>Configure <strong>VideoSDK</strong> with a token.</li><li>Initialize the meeting with required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code> and more.</li><li>Add a <code>MeetingEventListener</code> to listen for events such as Meeting Join/Left and Participant Join/Left.</li><li>Join the room with the <code>meeting.join()</code> method.</li></ol><p>Please copy the .xml file of the <code>MeetingActivity</code> from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/activity_meeting.xml">here</a>.</p><pre><code class="language-Java">public class MeetingActivity extends AppCompatActivity {
  // declare the variables we will be using to handle the meeting
  private Meeting meeting;
  private boolean micEnabled = true;
  private boolean webcamEnabled = true;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_meeting);

    final String token = ""; // Replace with the token you generated from the VideoSDK Dashboard
    final String meetingId = ""; // Replace with the meetingId you have generated
    final String participantName = "John Doe";

    // 1. Initialize VideoSDK
    VideoSDK.initialize(getApplicationContext());

    // 2. Configure VideoSDK with the token
    VideoSDK.config(token);

    // 3. Initialize VideoSDK Meeting
    meeting = VideoSDK.initMeeting(
            MeetingActivity.this, meetingId, participantName,
            micEnabled, webcamEnabled,null, null, false, null, null);

    // 4. Add event listener for listening upcoming events
    meeting.addEventListener(meetingEventListener);

    // 5. Join VideoSDK Meeting
    meeting.join();

    ((TextView)findViewById(R.id.tvMeetingId)).setText(meetingId);
  }

  // creating the MeetingEventListener
  private final MeetingEventListener meetingEventListener = new MeetingEventListener() {
    @Override
    public void onMeetingJoined() {
      Log.d("#meeting", "onMeetingJoined()");
    }

    @Override
    public void onMeetingLeft() {
      Log.d("#meeting", "onMeetingLeft()");
      meeting = null;
      if (!isDestroyed()) finish();
    }

    @Override
    public void onParticipantJoined(Participant participant) {
      Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " joined", Toast.LENGTH_SHORT).show();
    }

    @Override
    public void onParticipantLeft(Participant participant) {
      Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " left", Toast.LENGTH_SHORT).show();
    }
  };
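
  // (Hypothetical sketch, not part of the original tutorial) Instead of
  // generating the meetingId from the dashboard, you could create one in
  // code. This assumes VideoSDK's Rooms API (POST
  // https://api.videosdk.live/v2/rooms) and uses the android-networking
  // dependency added earlier; verify the endpoint and response fields
  // against the Rooms API docs before relying on it.
  private void createMeetingId(final String token) {
    AndroidNetworking.post("https://api.videosdk.live/v2/rooms")
        .addHeaders("Authorization", token)
        .build()
        .getAsJSONObject(new JSONObjectRequestListener() {
          @Override
          public void onResponse(JSONObject response) {
            // the created room's id serves as the meetingId
            String meetingId = response.optString("roomId");
            Log.d("#meeting", "Generated meetingId: " + meetingId);
          }

          @Override
          public void onError(ANError anError) {
            Log.e("#meeting", "Failed to create meeting: " + anError.getMessage());
          }
        });
  }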
}</code></pre><h3 id="step-3-handle-local-participant-media">Step 3: Handle Local Participant Media</h3><p>After successfully entering the meeting, it's time to manage the webcam and microphone for the local participant (you).</p><p>To enable or disable the webcam, we'll use the <code>Meeting</code> class methods <code>enableWebcam()</code> and <code>disableWebcam()</code>, respectively. Similarly, to mute or unmute the microphone, we'll utilize the methods <code>muteMic()</code> and <code>unmuteMic()</code>.</p><pre><code class="language-Java">public class MeetingActivity extends AppCompatActivity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_meeting);
    //...Meeting Setup is Here

    // actions
    setActionListeners();
  }

  private void setActionListeners() {
    // toggle mic
    findViewById(R.id.btnMic).setOnClickListener(view -&gt; {
      if (micEnabled) {
        // this will mute the local participant's mic
        meeting.muteMic();
        Toast.makeText(MeetingActivity.this, "Mic Disabled", Toast.LENGTH_SHORT).show();
      } else {
        // this will unmute the local participant's mic
        meeting.unmuteMic();
        Toast.makeText(MeetingActivity.this, "Mic Enabled", Toast.LENGTH_SHORT).show();
      }
      micEnabled=!micEnabled;
    });

    // toggle webcam
    findViewById(R.id.btnWebcam).setOnClickListener(view -&gt; {
      if (webcamEnabled) {
        // this will disable the local participant webcam
        meeting.disableWebcam();
        Toast.makeText(MeetingActivity.this, "Webcam Disabled", Toast.LENGTH_SHORT).show();
      } else {
        // this will enable the local participant webcam
        meeting.enableWebcam();
        Toast.makeText(MeetingActivity.this, "Webcam Enabled", Toast.LENGTH_SHORT).show();
      }
      webcamEnabled=!webcamEnabled;
    });

    // leave meeting
    findViewById(R.id.btnLeave).setOnClickListener(view -&gt; {
      // this will make the local participant leave the meeting
      meeting.leave();
    });
  }
}</code></pre><h3 id="step-4-handling-the-participants-view">Step 4: Handling the Participants' View</h3><p>To display a list of participants in your video UI, we'll utilize a <code>RecyclerView</code>.</p><p><strong>(a)</strong> This involves creating a new layout for the participant view named <code>item_remote_peer.xml</code> in the <code>res/layout</code> folder. You can copy <code>item_remote_peer.xml </code>file from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/item_remote_peer.xml">here</a>.</p><p><strong>(b)</strong> Create a RecyclerView adapter <code>ParticipantAdapter</code> which will be responsible for displaying the participant list. Within this adapter, define a <code>PeerViewHolder</code> class that extends <code>RecyclerView.ViewHolder</code>.</p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  @NonNull
  @Override
  public PeerViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
      return new PeerViewHolder(LayoutInflater.from(parent.getContext()).inflate(R.layout.item_remote_peer, parent, false));
  }

  @Override
  public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
  }

  @Override
  public int getItemCount() {
      return 0;
  }

  static class PeerViewHolder extends RecyclerView.ViewHolder {
    // 'VideoView' to show Video Stream
    public VideoView participantView;
    public TextView tvName;
    public View itemView;

    PeerViewHolder(@NonNull View view) {
        super(view);
        itemView = view;
        tvName = view.findViewById(R.id.tvName);
        participantView = view.findViewById(R.id.participantView);
    }
  }
}</code></pre><p><strong>(c)</strong> Now, we will render a list of <code>Participant</code> objects for the meeting. We will initialize this list in the constructor of the <code>ParticipantAdapter</code>.</p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  // creating an empty list which will store all participants
  private final List&lt;Participant&gt; participants = new ArrayList&lt;&gt;();

  public ParticipantAdapter(Meeting meeting) {
    // adding the local participant(You) to the list
    participants.add(meeting.getLocalParticipant());

    // adding Meeting Event listener to get the participant join/leave event in the meeting.
    meeting.addEventListener(new MeetingEventListener() {
      @Override
      public void onParticipantJoined(Participant participant) {
        // add participant to the list
        participants.add(participant);
        notifyItemInserted(participants.size() - 1);
      }

      @Override
      public void onParticipantLeft(Participant participant) {
        int pos = -1;
        for (int i = 0; i &lt; participants.size(); i++) {
          if (participants.get(i).getId().equals(participant.getId())) {
            pos = i;
            break;
          }
        }
        // remove the participant from the list by index
        if (pos &gt;= 0) {
          participants.remove(pos);
          notifyItemRemoved(pos);
        }
      }
    });
  }

  // replace getItemCount() method with following.
  // this method returns the size of total number of participants
  @Override
  public int getItemCount() {
    return participants.size();
  }
  //...
}</code></pre><p><strong>(d)</strong> We have listed our participants. Let's set up the view holder to display a participant video.</p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  // replace onBindViewHolder() method with following.
  @Override
  public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
    Participant participant = participants.get(position);

    holder.tvName.setText(participant.getDisplayName());

    // adding the initial video stream for the participant into the 'VideoView'
    for (Map.Entry&lt;String, Stream&gt; entry : participant.getStreams().entrySet()) {
      Stream stream = entry.getValue();
      if (stream.getKind().equalsIgnoreCase("video")) {
        holder.participantView.setVisibility(View.VISIBLE);
        VideoTrack videoTrack = (VideoTrack) stream.getTrack();
        holder.participantView.addTrack(videoTrack);
        break;
      }
    }
    // add Listener to the participant which will update start or stop the video stream of that participant
    participant.addEventListener(new ParticipantEventListener() {
      @Override
      public void onStreamEnabled(Stream stream) {
        if (stream.getKind().equalsIgnoreCase("video")) {
          holder.participantView.setVisibility(View.VISIBLE);
          VideoTrack videoTrack = (VideoTrack) stream.getTrack();
          holder.participantView.addTrack(videoTrack);
        }
      }

      @Override
      public void onStreamDisabled(Stream stream) {
        if (stream.getKind().equalsIgnoreCase("video")) {
          holder.participantView.removeTrack();
          holder.participantView.setVisibility(View.GONE);
        }
      }
    });
  }
}</code></pre><p><strong>(e)</strong> Now, add this adapter to the <code>MeetingActivity</code></p><pre><code class="language-Java">@Override
protected void onCreate(Bundle savedInstanceState) {
  //Meeting Setup...
  //...
  final RecyclerView rvParticipants = findViewById(R.id.rvParticipants);
  rvParticipants.setLayoutManager(new GridLayoutManager(this, 2));
  rvParticipants.setAdapter(new ParticipantAdapter(meeting));
}</code></pre><h2 id="active-speaker-integration">Active Speaker Integration</h2><p>The active speaker highlight feature aims to visually indicate the participant currently speaking in the video call. This functionality is especially valuable in scenarios with a large number of participants, where it can be difficult to identify the source of the incoming audio.</p><p>This is how we can integrate this feature with VideoSDK. Every time a participant actively speaks during the meeting, the <code><a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/meeting-event-listener-class#onspeakerchanged">onSpeakerChanged</a></code> event is triggered. This event transmits the participant ID of the person who is currently speaking.</p><p>By capturing this event and leveraging the participant ID, you can visually highlight the corresponding participant in your application's user interface. This can be achieved by modifying the user interface element that represents the speaking participant, such as changing the background color or adding a visual indicator.</p><pre><code class="language-Java">private final MeetingEventListener meetingEventListener = new MeetingEventListener() {
    @Override
    public void onSpeakerChanged(String participantId) {
        Toast.makeText(MeetingActivity.this, "Active Speaker participantId: " + participantId, Toast.LENGTH_SHORT).show();
        super.onSpeakerChanged(participantId);
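        // (Hypothetical sketch) To highlight the speaker visually rather
        // than just showing a Toast, you could forward the id to your
        // adapter, e.g.:
        //   participantAdapter.setActiveSpeaker(participantId);
        // where setActiveSpeaker() is a method you add that stores the id
        // and re-binds the affected rows so onBindViewHolder() can change
        // that participant's background color.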
    }
};</code></pre><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/03/Screenshot_20240319_181033_Android-VideoSDK-App--2--1.jpg" class="kg-image" alt="How to Integrate Active Speaker in Android(Java) Video Chat App?" loading="lazy" width="396" height="800"/></figure><p>To implement the dynamic speaker detection UI featured in the above image, feel free to check out our <a href="https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example">GitHub repository</a>.</p><h2 id="conclusion"><strong>Conclusion</strong></h2><p>This expert guide has provided Android developers with complete instructions on how to implement active speaker highlighting in their video apps using VideoSDK. By visually indicating the active speaker, you can reduce confusion and improve engagement among participants.</p><p>To unlock the full potential of VideoSDK and create easy-to-use video experiences, developers are encouraged to sign up for VideoSDK and further explore its features. Start implementing Active Speaker Highlighting today to improve your video app functionality and user engagement.</p><p><a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and Get <strong>10000 Free Minutes </strong>to<strong> </strong>take your video app to the next level!</p>]]></content:encoded></item><item><title><![CDATA[What is Liveness Detection and How it Works? 
Complete Guide]]></title><description><![CDATA[Understand the importance of liveness detection, its working principles, active and passive detection methods, applications in finance and healthcare, and how its real-time analysis and AI integration are revolutionizing identity verification and fraud prevention.]]></description><link>https://www.videosdk.live/blog/what-is-liveness-detection</link><guid isPermaLink="false">66d6d6fe20fab018df10fea7</guid><category><![CDATA[Liveness Detection]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Sun, 19 Jan 2025 11:01:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-Liveness-Detection_-and-How-it-Works_.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-Liveness-Detection_-and-How-it-Works_.jpg" alt="What is Liveness Detection and How it Works? Complete Guide"/><p>Liveness detection is a security feature used in biometric systems, such as facial recognition or fingerprint scanning, to determine whether the biometric sample being presented is from a live person and not from a spoofing attack using a photo, video, 3D mask, or other fraudulent methods. These algorithms ensure that the system can accurately differentiate between genuine and fake biometric data, enhancing the security of identity verification processes.</p><h2 id="why-is-liveness-detection-important">Why is Liveness Detection Important?</h2><p>Implementing liveness detection is a <a href="https://www.videosdk.live/blog/rbi-compliance-for-video-kyc">regulatory requirement</a> in many industries. 
As spoofing techniques become more advanced over time, continuously updating liveness detection is essential to maintaining robust security.</p><p>Liveness detection is not just a best practice but a necessary component of modern biometric authentication systems, ensuring the reliability, trustworthiness, and compliance of these critical security measures. This is especially important for sensitive applications like <a href="https://www.videosdk.live/solutions/video-banking">banking</a>, <a href="https://www.videosdk.live/solutions/telehealth">healthcare</a>, <a href="https://www.videosdk.live/solutions/auto-proctoring">HRtech</a>, and eGovernment services, where security is paramount.</p><h2 id="how-liveness-detection-algorithms-work">How Do Liveness Detection Algorithms Work?</h2><p>Liveness detection applies video-based techniques such as anti-spoofing and presentation attack detection (PAD) to determine whether a biometric sample comes from a live person. First, the system can prompt the user to perform certain actions, such as winking, smiling, or nodding. Real users respond with natural, involuntary movements that can be detected, while static images or videos cannot mimic these movements.</p><p>Then, algorithms examine the fine details and textures of the subject's skin or fingerprint. Real skin exhibits unique features and perspiration patterns that are difficult to replicate with photos or synthetic materials.</p><p>A specialized depth camera captures a 3D image of the face, creating a digital model that maps the shape and depth of the face. This method can distinguish between a real, 3D face and a flat image or mask, which would appear two-dimensional in a depth map. 
</p><p>The system may also use challenge-response tests, asking the user to perform specific actions such as turning their head or saying a random phrase, and then analyzing the responses to determine liveness.</p><p>Advanced AI and machine learning algorithms built on deep neural networks can detect subtle differences between real and fake biometric samples, including deepfakes, that are invisible to the human eye.</p><h2 id="what-is-active-passive-liveness-detection">What is Active &amp; Passive Liveness Detection?</h2><h3 id="active-liveness-detection">Active Liveness Detection</h3><p>Active liveness detection involves user interaction, asking the user to perform specific actions like blinking, smiling, nodding, or turning their head. It verifies liveness by checking whether these actions are performed correctly and in real time. This method is considered more robust against spoofing attacks.</p><h3 id="passive-liveness-detection">Passive Liveness Detection</h3><p>Passive liveness detection does not require user interaction; instead, it uses signals from the captured biometric data to determine liveness. Techniques include analyzing texture (e.g. <a href="https://www.ijsr.net/archive/v2i5/IJSROFF2013212.pdf">skin texture analysis</a>), checking for reflections in the eyes, or detecting micro-movements such as subtle changes in facial expression. 
It relies on AI and machine learning models trained to accurately identify patterns associated with real human features while optimizing efficiency and resource use.</p><h2 id="use-of-liveness-detection">Use of Liveness Detection</h2><ul><li><strong>Financial Services: </strong>Video liveness detection has become a critical safeguard for securing transactions and preventing identity fraud in services like mobile banking and ATM access.</li><li><strong>Digital Onboarding:</strong> For verifying identities during online registration processes, especially in remote setups. Video solutions are used to perform live liveness checks during onboarding for banks, telecom providers, and HR tech services that require identity verification.</li><li><strong>Healthcare:</strong> In telehealth, video liveness detection ensures that the right patient or healthcare provider is involved in a session, maintaining the integrity of virtual consultations.</li></ul><h2 id="how-can-videosdk-be-useful-in-liveness-detection">How can VideoSDK be useful in Liveness Detection?</h2><p>Video solutions play a significant role in enhancing liveness detection algorithms by providing dynamic, real-time data that can be analyzed to verify the authenticity of a user's identity. Here’s how <a href="https://www.videosdk.live/">VideoSDK’s</a> latest Infratech contributes to liveness detection:</p><h3 id="1-real-time-analysis">1. <strong>Real-Time Analysis</strong></h3><p>VideoSDK enables real-time monitoring of facial movements and expressions, helping algorithms analyze subtle, involuntary actions such as blinking, head tilting, and micro-expressions. 
These movements are difficult to replicate in static images or pre-recorded videos, thus helping to distinguish between a live person and a spoof attempt.</p><h3 id="2-3d-depth-and-movement-analysis"><strong>2. 3D Depth and Movement Analysis</strong></h3><p>VideoSDK can capture depth information and analyze movements, providing a way to distinguish between 2D representations (like photos or screens) and real 3D faces. This is achieved using stereoscopic cameras or depth-sensing technologies in advanced video solutions.</p><h3 id="3-enhanced-texture-and-depth-analysis">3. <strong>Enhanced Texture and Depth Analysis</strong></h3><p>VideoSDK can improve texture analysis by capturing a range of lighting conditions and angles, which can reveal unique skin textures and other characteristics of a live face. Additionally, advanced depth-sensing technologies can create a 3D map of the face from video data, further distinguishing between real and fake representations.</p><h3 id="4-challenge-response-interactions">4. <strong>Challenge-Response Interactions</strong></h3><p>VideoSDK can facilitate interactive liveness checks, where the system prompts users to perform specific actions, such as moving their heads or making facial expressions. This interactive element ensures that the subject is engaging in real-time, providing a robust verification process that is harder to spoof.</p><h3 id="5-integration-with-ai-and-machine-learning">5. <strong>Integration with AI and Machine Learning</strong></h3><p>VideoSDK data can be used to train machine learning models that improve the accuracy of liveness detection. By analyzing portions of video footage, these models can learn to identify subtle differences between real and fake faces, enhancing the overall security of biometric systems.</p><h3 id="6-compliance-and-audit-trails"><strong>6. 
Compliance and Audit Trails</strong></h3><p>VideoSDK provides recordings for a verifiable audit trail of the liveness check, which is valuable for compliance purposes in regulated industries. This ensures that organizations can demonstrate adherence to security standards and regulations.</p><p>By leveraging the dynamic capabilities of VideoSDK, liveness detection becomes more robust, user-friendly, and effective in preventing fraudulent activities, thereby safeguarding both users and service providers.</p><h2 id="what-are-the-major-challenges-in-liveness-detection">What are the Major Challenges in Liveness Detection?</h2><ul><li><strong>Accuracy:</strong> Balancing high accuracy against low false rejection rates can be difficult, especially in diverse lighting conditions or with various skin tones.</li><li><strong>Spoofing Attacks:</strong> As technology advances, so do the methods used to trick liveness detection systems, which is why continuous improvement and updates are required.</li><li><strong>User Experience:</strong> Active liveness checks can sometimes be intrusive or cumbersome, affecting user satisfaction.</li></ul><p>If you are considering integrating liveness detection into your system, or want to build a new tech stack to reduce spoofing and deepfakes, VideoSDK is a leading enterprise-grade Infratech solution that provides a highly accurate liveness detection system. VideoSDK enhances security and reliability, providing an additional layer of protection against fraudulent attempts.</p>]]></content:encoded></item><item><title><![CDATA[Top 10 Agora Alternatives in 2026]]></title><description><![CDATA[Discover a powerful Agora alternative that will revolutionize your online experience. 
Maximize your potential and take control of your success today.]]></description><link>https://www.videosdk.live/blog/agora-alternative</link><guid isPermaLink="false">64a3ecfc8ecddeab7f17f4ad</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Sun, 19 Jan 2025 05:16:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-alternative-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-alternative-1.jpg" alt="Top 10 Agora Alternatives in 2026"/><p>Looking for an <a href="https://www.videosdk.live/alternative/agora-vs-videosdk" rel="noreferrer"><strong>Agora alternative</strong></a> to seamlessly integrate real-time video into your application? Chances are you've come across Agora. Established in 2013, Agora was among the pioneers in developing a developer platform that offers broadcast, voice, and video calls for mobile and web applications through its software development kit (SDK).</p><p>While Agora's platform was groundbreaking during the early 2010s, it has fallen behind in releasing significant updates over the years, leaving room for innovative platforms like <a href="https://www.videosdk.live">VideoSDK</a> to provide a refreshing approach to live video.</p><p>PS: If you happen to be an Agora customer, I encourage you to keep reading to discover what you might be missing out on by sticking with the platform.</p><h2 id="choosing-the-right-agora-alternative">Choosing the Right Agora Alternative</h2>
<p>Undoubtedly, live video is challenging, and <a href="https://www.videosdk.live/blog/agora-competitors">Agora</a> only adds to the complexity. Merely building a basic live experience in your app with Agora requires integrating and paying for the various SDKs it offers. Therefore, when searching for an alternative live video solution, keep an eye out for the following:</p><h3 id="prioritize-customization">Prioritize Customization</h3>
<p>In an <strong>Agora alternative</strong>, prioritize customization. Choose an SDK that gives developers full control over the user interface (UI) and overall experience. Look for low-code prebuilt UI options to speed up initial setup, saving time and effort. Striking a balance between customization and user-friendly prebuilt options enables you to create a unique and engaging live video experience without extensive development. Customization ensures the alternative SDK meets your specific requirements, delivering a personalized and seamless solution.</p><h3 id="demand-high-adaptive-bitrate">Demand High Adaptive Bitrate</h3>
<p>Look for an <strong>alternative SDK</strong> that offers high <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming" rel="noreferrer">adaptive bitrate</a> technology, especially if your platform supports features like video translation, where maintaining consistent stream quality is essential for accurate and seamless user experiences. This ensures that your live video streams are optimized for different network conditions, providing a smooth and uninterrupted viewing experience for your users. Adaptive bitrate automatically adjusts the stream quality based on available bandwidth, resulting in optimal video quality without buffering or interruptions.</p><h3 id="comprehensive-single-sdk">Comprehensive Single SDK</h3>
<p>Building a live service in your application shouldn't necessitate mixing and matching dozens of SDKs. Instead, opt for a comprehensive single SDK that offers everything you require, from video calling to voice calling, streaming, and beyond.</p><h3 id="open-source-and-third-party-integrations">Open Source and Third-Party Integrations</h3>
<p>Consider whether open-source solutions like Jitsi provide the flexibility and transparency needed for your project. Additionally, assess the SDK’s compatibility with third-party tools for enhanced functionality and media management.</p><h3 id="transparent-pricing">Transparent Pricing</h3>
<p>It is crucial to consider the pricing structure of the <strong>alternative SDK</strong>. Look for a solution that offers transparent and competitive pricing, without hidden fees or additional costs. Ideally, the SDK should provide built-in collaboration features, eliminating the need for integrating multiple SDKs separately. This not only simplifies the development process but also helps to optimize costs by offering all the necessary features in one package.</p>
<p>With that in mind, let's delve into this guide, where we'll explore the leading players that deserve your consideration in the video SDK market.</p><p><strong>Agora alternatives</strong> are available to overcome issues like Customization, Network Management, Collaborative Features, Pricing, Security, and Customer support.</p><p>The top 10 Agora Alternatives are VideoSDK, Jitsi, Twilio, EnableX, Zoom Video SDK, TokBox OpenTok [Vonage], Whereby, AWS Chime, Daily, and SignalWire. These alternatives and competitors offer a variety of features, pricing choices, and more, enabling you to make an informed choice according to your specific requirements.</p><blockquote>
<h2 id="top-10-alternatives-to-agora-2026">Top 10 Alternatives to Agora 2026</h2>
<ul>
<li>VideoSDK</li>
<li>Jitsi</li>
<li>Twilio</li>
<li>Enablex</li>
<li>Zoom Video SDK</li>
<li>TokBox Opentok [Vonage]</li>
<li>Whereby</li>
<li>AWS Chime</li>
<li>Daily</li>
<li>SignalWire</li>
</ul>
</blockquote>
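<p>The "Demand High Adaptive Bitrate" criterion above boils down to a simple idea: continuously measure the viewer's available bandwidth and pick the highest rendition it can sustain. A minimal, illustrative sketch of that ladder-selection logic (the rendition ladder and the 1.5x headroom factor are assumptions for this sketch, not any vendor's actual defaults):</p>

```python
# Illustrative adaptive-bitrate ladder selection.
# The ladder values and headroom factor are assumptions for this sketch,
# not any vendor's actual defaults.
RENDITIONS = [          # (label, bitrate needed in kbps), highest first
    ("1080p", 4500),
    ("720p", 2500),
    ("480p", 1000),
    ("240p", 400),
]

def pick_rendition(measured_kbps: float, headroom: float = 1.5) -> str:
    """Pick the highest rendition that fits within the measured
    bandwidth after reserving a safety headroom."""
    budget = measured_kbps / headroom
    for label, kbps in RENDITIONS:
        if kbps <= budget:
            return label
    return RENDITIONS[-1][0]  # degrade gracefully to the lowest rung

print(pick_rendition(8000))  # ample bandwidth -> 1080p
print(pick_rendition(1300))  # constrained link -> 240p
```

<p>A production SDK runs this loop continuously per subscriber, re-measuring and switching renditions mid-stream; the point of the criterion is that this switching happens inside the SDK rather than in your application code.</p>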
<h3 id="1-videosdk">1. VideoSDK</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-for-Real-time-Communication-Live-Streaming-Video-API.jpeg" class="kg-image" alt="Top 10 Agora Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li><a href="https://www.videosdk.live">VideoSDK</a> provides an API that allows developers to easily add powerful, extensible, scalable, and resilient audio-video features to their apps with just a few lines of code. Add live audio and video experiences to any platform in minutes.</li><li>The key advantage of using Video SDK is it’s quite easy and quick to integrate, allowing you to focus more on building innovative features to enhance user retention.</li></ul><h4 id="detailed-features-of-videosdk">Detailed Features of VideoSDK</h4>
<ul><li><strong>High scalability</strong>: Video SDK’s high scalability, with unlimited rooms and zero maintenance, ensures uninterrupted availability with &lt;99ms latency. It supports up to 300 attendees, including 50 presenters, and empowers large-scale collaborations. This advanced infrastructure enables global reach and success in the digital landscape.</li><li><strong>High adaptive bitrate</strong>: Video SDK offers high adaptive bitrate technology for an immersive audio-video experience. It auto-adjusts stream quality under bandwidth constraints and adapts to varying network conditions. With a global infrastructure and secure usage in restricted network environments, Video SDK delivers optimal performance and seamless streaming.</li><li><strong>End-to-end customized SDK</strong>: With their end-to-end customized SDK, you have the power to fully customize the UI to meet your unique needs. Their code samples help accelerate your time-to-market, while template layouts can be easily customized in any orientation. Leveraging their PubSub feature, you can build engaging and interactive features, enhancing the overall user experience.</li><li><strong>Quality Recordings</strong>: Experience high-quality recordings on any connection with Video SDK. Their solution supports 1080p video recording, ensuring crystal-clear and detailed footage. With programmable layouts and custom templates, you can tailor the recording experience to your specific requirements. Easily store your recordings in the Video SDK cloud or popular cloud storage providers such as AWS, GCP, or Azure. 
Access your recordings conveniently from the dashboard itself, providing seamless management and retrieval of your valuable content.</li><li><strong>Detailed analytics</strong>: Gain access to in-depth analytics on video call metrics, including participant interactions and duration, allowing you to analyze participant interest throughout the session.</li><li><strong>Cross-platform streaming</strong>: Stream live events to millions of viewers across platforms such as YouTube, LinkedIn, Facebook, and more with built-in <a href="https://www.videosdk.live/developer-hub/rtmp/rtmp-vs-srt" rel="noreferrer">RTMP</a> support.</li><li><strong>Seamless scaling</strong>: Effortlessly scale live audio/video within your web app, accommodating from just a few users to over 10,000 and reaching millions of viewers through RTMP output.</li><li><strong>Platform support</strong>: Build your live video app for a specific platform and seamlessly run it across browsers, devices, and operating systems with minimal development efforts.</li><li><strong>Mobile</strong>: Flutter, Android (Java/Kotlin), iOS (Objective-C/Swift), React Native </li><li><strong>Web</strong>: JavaScript Core SDK + UI Kit for React JS, Angular, Web Components for other frameworks </li><li><strong>Desktop</strong>: Flutter Desktop</li></ul><h4 id="videosdk-pricing">VideoSDK Pricing</h4>
<ul><li>Video SDK offers <a href="https://www.videosdk.live/pricing" rel="noreferrer">$20 free credit</a> that renews monthly. You only start paying once you exhaust the free minutes. </li><li>Helpfully, video and audio calls are priced separately. Pricing for <strong>video calls</strong> begins at <strong>$0.002</strong> per participant per minute and for <strong>audio calls</strong>, it begins at <strong>$0.0006</strong> per participant per minute. </li><li>The additional cost for <strong>cloud recordings</strong> is <strong>$0.015</strong> per minute and <strong>RTMP output</strong> is <strong>$0.030</strong> per minute. You can estimate your costs using their <a href="https://www.videosdk.live/pricing#pricingCalc">pricing calculator</a>.</li><li>Video SDK provides free <strong>24/7</strong> support to all customers. Their dedicated team is available to assist you through your preferred communication channel whenever you need help with basic queries, upcoming events, or technical requirements.</li></ul><blockquote><strong>Here's a detailed comparison of  </strong><a href="https://www.videosdk.live/alternative/agora-vs-videosdk"><strong>Agora and Video SDK</strong></a><strong>.</strong></blockquote>
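<p>To make the per-participant-minute model concrete, here is a rough back-of-the-envelope estimator using the video and recording rates quoted above. Treat it as illustrative only; rates can change, so confirm current numbers on the pricing page:</p>

```python
# Rough cost estimator for per-participant-minute pricing, using the
# rates quoted in this article ($0.002 per video participant-minute,
# $0.015 per recorded minute). Illustrative only -- verify current rates.
VIDEO_RATE = 0.002      # $ per participant per minute
RECORDING_RATE = 0.015  # $ per recorded minute (billed per minute, not per participant)

def estimate_cost(participants: int, minutes: int, record: bool = False) -> float:
    cost = participants * minutes * VIDEO_RATE
    if record:
        cost += minutes * RECORDING_RATE
    return round(cost, 2)

# A recorded 60-minute call with 10 participants:
print(estimate_cost(10, 60, record=True))  # 10*60*0.002 + 60*0.015 = 2.1
```

<p>The same shape of calculation applies to the other providers below; only the rates and what counts as a billable unit (participant-minute, composed minute, GB-day of storage) differ.</p>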
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8"/>
	<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet"/>
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h4 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h4>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h3 id="2-jitsi">2. Jitsi</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Free-Video-Conferencing-Software-for-Web-Mobile-Jitsi.jpeg" class="kg-image" alt="Top 10 Agora Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Jitsi is a collection of open-source projects that enable you to build and deploy video conferencing solutions within your applications. Their most well-known offerings are Jitsi Meet and Jitsi Videobridge.</p><p>Jitsi Meet is a JavaScript-based client application that facilitates video chatting. It allows you to share screens, collaborate in real time, invite users, and more. You can access the conference via a web browser or Android/iOS apps.</p><p>Jitsi Videobridge is an XMPP server (Prosody) capable of hosting large-scale video chats. It is WebRTC-compatible and provides default encryption.</p><h4 id="key-points-about-jitsi">Key points about Jitsi</h4>
<ul><li>Jitsi is free, open-source, and offers end-to-end encryption, giving you the ability to review and modify the code according to your requirements.</li><li>The live experience includes features such as active speakers, text chatting (web only), room locking, screen sharing, raise/lower hand, push-to-talk mode, audio-only option, and more.</li><li>However, certain essential features like shared text documents based on Etherpad, streaming, telephone dial-in to a conference, dial-out to a telephone participant, and more, only work if Jibri is configured.</li><li>Recording a call requires additional effort. You need to live stream your conference to YouTube and access the recording from there or set up Jibri for this purpose.</li><li>Obtaining support to resolve issues can take over 48 hours.</li><li>The tool does not automatically manage user bandwidth in the event of network instability, potentially resulting in a blank screen.</li></ul><h4 id="pricing-for-jitsi">Pricing for Jitsi</h4>
<ul><li><a href="https://www.videosdk.live/blog/jitsi-alternative"><strong>Jitsi</strong></a> is 100% open source and available for <strong>free</strong> usage and development.</li><li>However, you are responsible for setting up your own servers and creating the UI from scratch. Product support comes at an additional cost.</li></ul><blockquote><strong>Here's a detailed comparison of  </strong><a href="https://www.videosdk.live/agora-vs-jitsi"><strong>Agora and Jitsi</strong></a><strong>.</strong></blockquote><h3 id="3-twilio">3. Twilio</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Communication-APIs-for-SMS-Voice-Video-Authentication_twilio.jpeg" class="kg-image" alt="Top 10 Agora Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Twilio initially focused on automating phone calls and SMS services but has now expanded its offerings to provide developers with a range of APIs for building business communication across various channels.</p><p>Twilio allows you to either create an app from scratch or enhance an existing solution with communication features. The SDK supports multiple programming languages, including Java and Ruby.</p><h4 id="key-points-about-twilio">Key Points about Twilio</h4>
<ul><li>Twilio provides web, iOS, and Android SDKs. However, when utilizing multiple audio and video inputs, developers need to manually configure them, requiring additional code implementation.</li><li>In case a user's call drops or any issues arise during the call, Twilio offers call insights to track and analyze errors.</li><li>Twilio supports a maximum of 50 hosts within a call and a total of 50 participants, including hosts.</li><li>Twilio does not offer any plugins to simplify product development.</li><li>With the SDK, you will need to allocate engineering resources to handle hard coding for various edge cases that can potentially disrupt a user's video call.</li></ul><h4 id="pricing-for-twilio">Pricing for Twilio</h4>
<ul><li><a href="https://www.videosdk.live/blog/twilio-video-alternative"><strong>Twilio</strong></a>'s <a href="https://www.twilio.com/en-us/video/pricing">pricing</a> starts at <strong>$4</strong> per 1,000 minutes. </li><li>Additionally, there are costs associated with <strong>recordings</strong>, which amount to <strong>$0.004</strong> per participant minute, <strong>recording compositions</strong> priced at <strong>$0.01</strong> per composed minute, and <strong>storage</strong> fees of <strong>$0.00167</strong> per GB per day after the first 10 GBs.</li><li>Twilio's free support plan includes API status notifications and email support during business hours. </li><li>Users can opt for additional services such as 24/7 live chat support, a support escalation line, quarterly status reviews, and guaranteed response times at an extra cost. </li><li>The price for these services varies, usually based on a percentage of the monthly plan or a specific minimum amount (ranging from <strong>$250</strong>/month to <strong>$5,000</strong>/month).</li></ul><blockquote><strong>Here's a detailed comparison of  </strong><a href="https://www.videosdk.live/agora-vs-twilio"><strong>Agora and Twilio</strong></a><strong>.</strong></blockquote><h3 id="4-enablex">4. EnableX</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Call-API-Video-Chat-API-Voice-API-Video-Conferencing_enebleX.jpeg" class="kg-image" alt="Top 10 Agora Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>EnableX offers live video, voice, and messaging SDKs that serve as fundamental building blocks to expedite the development of live experiences in your applications. It primarily targets service providers, ISVs, SIs, and developers.</p><h4 id="key-points-about-enablex">Key points about EnableX</h4>
<ul><li>The SDK provides a video builder that allows you to implement a customized video-calling solution into your application. </li><li>Alternatively, you can create personalized live video streams with tailored UI, hosting capabilities, billing integration, and other essential functionalities. </li><li>The self-service portal grants you access to reporting features and live analytics, enabling you to monitor quality and facilitate online payments from clients. </li><li>The SDK supports a limited range of programming languages, including JavaScript, PHP, and Python. </li><li>It enables your users to stream live content directly from your app or website, as well as stream directly on platforms like YouTube or Facebook for unlimited reach. </li><li>Please note that the support team may take up to 72 hours to respond to your support requests. Integrating the SDK into your application may require several weeks. </li><li>The SDK does not optimize users' videos in the event of device or network issues.</li></ul><h4 id="enablex-pricing">EnableX pricing</h4>
<ul><li>The SDK is <a href="https://www.enablex.io/cpaas/pricing/our-pricing">priced</a> at <strong>$0.004</strong> per participant minute for up to <strong>50 participants</strong> per room. </li><li>For pricing involving over 50 participants, it is necessary to contact their sales team. </li><li>The <strong>recording</strong> is charged at <strong>$0.10</strong> per participant per minute, <strong>transcoding</strong> at <strong>$0.10</strong> per minute, and <strong>storage</strong> at <strong>$0.05</strong> per GB per month. </li><li><a href="https://www.videosdk.live/developer-hub/rtmp/custom-rtmp" rel="noreferrer"><strong>RTMP streaming</strong></a> incurs a cost of <strong>$0.10</strong> per minute.</li></ul><h3 id="5-zoom-video-sdk">5. Zoom Video SDK</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-from-Zoom-Zoom.jpeg" class="kg-image" alt="Top 10 Agora Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>The Zoom Video SDK empowers developers to create customized live video-based applications utilizing the technology behind Zoom.</p><p>Zoom introduced the Video SDK as they recognized that the traditional Zoom client might not meet all customer requirements. By providing access to the underlying Zoom technology, customers can unlock additional benefits.</p><p>The SDK offers a comprehensive set of services, including video, audio, screen sharing, chat, data streams, and more. Developers have the flexibility to utilize all of these features or select specific ones based on their application's needs. Additionally, the Video SDK provides robust server-side APIs and webhooks.</p><h4 id="zoom-video-sdk-at-a-glance">Zoom Video SDK at a glance</h4>
<ul><li>With the Zoom Video SDK, you can build applications with customizable video compositions, supporting up to 1,000 co-hosts/participants per session, although the SDK offers limited customization options for the live video itself. </li><li>It enables you to incorporate screen sharing, third-party live streaming, and in-session chat, while also providing control over the call's layout. </li><li>Zoom supports seven major languages and offers open translation extensibility, facilitating international growth and enhancing user experience. </li><li>The SDK only supports the predetermined roles of host and participant, which may pose challenges for use cases that require customized permissions for peers. </li><li>Unless you opt for a paid support plan, you will receive slower email support. </li><li>The SDK provides partial assistance in managing user bandwidth consumption during network degradation.</li></ul><h4 id="zoom-video-sdk-pricing">Zoom Video SDK pricing</h4>
<ul><li><a href="https://www.videosdk.live/blog/zoom-video-sdk-alternative"><strong>Zoom</strong></a> provides 10,000 free minutes per month, and <a href="https://zoom.us/buy/videosdk">charges</a> apply only once you exceed this limit. Pricing starts at <strong>$0.31</strong> per user minute, with <strong>recordings</strong> available for <strong>$100</strong> per month for 1 TB of storage. Telephony services are priced at <strong>$100</strong> per month.</li><li>Zoom offers three customer support plans: <strong>Access</strong>, <strong>Premier</strong>, and <strong>Premier+</strong>. For detailed pricing information, it is necessary to contact Zoom directly.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-zoom"><strong>Agora and Zoom</strong></a><strong>.</strong></blockquote><h3 id="6-vonage-video-api-formerly-tokbox-opentok">6. Vonage Video API (Formerly TokBox OpenTok)</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-API-Fully-Programmable-and-Customizable-Vonage.jpeg" class="kg-image" alt="Top 10 Agora Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Vonage Video API is a viable option for creating customized video experiences in mobile, web, or desktop applications.</p><p>Opentok was established in 2008 when the company shifted its business strategy from providing a struggling consumer video conference product to offering the underlying technology. This allows companies to embed a video conference component into their websites.</p><p>In addition to live video, the API encompasses voice, messaging, and screen-sharing capabilities. It provides client libraries for web, iOS, Android, Windows, and Linux, along with server-side SDKs and a REST API.</p><h4 id="vonage-at-a-glance">Vonage at a glance</h4>
<ul><li>The SDK enables the creation of custom audio/video streams on mobile devices with various effects, filters, and AR/VR integration.</li><li>It supports use cases such as one-on-one video calls, group video chats, and large-scale broadcast sessions.</li><li>Calls can consist of video and voice, voice-only, or a combination. Participants in a call can share screens, exchange data, and chat with each other.</li><li>The SDK provides performance data for detailed session analysis through the account dashboard or via its Insights API.</li><li>All voice, video, and signaling traffic is encrypted using AES-128 or AES-256 encryption, and video recordings can optionally be encrypted with AES-256. The SDK complies with GDPR and HIPAA regulations.</li><li>It supports a maximum of 55 participants per call, though a stream can have up to 2,000 concurrent room participants.</li><li>The company offers chat-based support, which may take up to 72 hours to respond to your inquiries.</li><li>The platform does not handle the live video backend, so you need to allocate resources to develop edge case management capabilities.</li></ul><h4 id="vonage-pricing">Vonage pricing</h4>
<ul><li>Vonage follows a usage-based <a href="https://www.vonage.com/communications-apis/video/pricing/">pricing</a> model, calculated dynamically for each minute based on the number of participants in a video session. </li><li>Plans start at <strong>$9.99</strong> per month, which includes 2,000 free minutes per month across all plans. </li><li>After exhausting the free minutes, pricing is set at <strong>$0.00395</strong> per participant minute. <strong>Recording</strong> starts at <strong>$0.10</strong> per minute, and <strong>HLS streaming</strong> is priced at <strong>$0.15</strong> per minute.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-vonage"><strong>Agora and Vonage</strong></a><strong>.</strong></blockquote><h3 id="7-whereby">7. Whereby</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Calling-API-for-Web-and-App-Developers-Whereby.jpeg" class="kg-image" alt="Top 10 Agora Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Whereby offers browser-based meetings that can be accessed through a permanent room owned by each user. Guests can join meetings by simply clicking a link, without requiring any downloads or registrations. They have recently introduced a hybrid meeting solution for distributed teams, which reduces echo and eliminates the need for expensive meeting hardware.</p><h4 id="whereby-at-a-glance">Whereby at a glance</h4>
<ul><li>The tool allows you to customize the video interface by adding logos, colors, and buttons using their no-code interface editor. </li><li>However, the customization options are limited, and you cannot create a completely custom experience with Whereby. </li><li>It enables you to offer video calls seamlessly from your website, mobile apps, or other web products, without the need for external links or apps. </li><li>Whereby emphasizes data privacy and GDPR compliance. They do not mine or sell user data, and all content is encrypted. </li><li>The SDK provides basic collaborative features such as screen sharing, recording, picture-in-picture, and text chat. </li><li>However, it does not offer the ability to add more interactive elements through APIs. </li><li>The SDK does not automatically handle user-host publish-subscribe logic, requiring manual implementation on your end.</li></ul><h4 id="whereby-pricing">Whereby pricing</h4>
<ul><li>Whereby offers <a href="https://whereby.com/information/pricing">pricing</a> plans starting at <strong>$6.99</strong> per month, which includes up to 2,000 user minutes that renew monthly. </li><li><strong>Additional minutes</strong> are charged at <strong>$0.004</strong> per minute, and <strong>cloud recording</strong> and <strong>live streaming</strong> are available at <strong>$0.01</strong> per minute.</li><li><strong>Email</strong> and <strong>chat</strong> support are provided for <strong>free</strong> to all accounts. <strong>Technical onboarding</strong>, <strong>customer success manager</strong>, and <strong>HIPAA</strong> compliance options are available for <strong>enterprise plans</strong>.</li></ul><h3 id="8-aws-chime">8. AWS Chime</h3>
<p><a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative"><strong>AWS Chime</strong></a> is a video conferencing tool provided by Amazon Web Services, primarily designed for business users. It offers features such as VoIP calling, video messaging, and virtual meetings, allowing users to host or join remote meetings through the service.</p><h4 id="heres-an-overview-of-aws-chime">Here's an overview of AWS Chime</h4>
<ul><li>Conduct and attend online meetings with high-definition video, audio, dial-in numbers, and in-room video conference support.</li><li>Collaborative features include screen sharing, remote desktop control, and individual/group text-based chats.</li><li>Host team meetings with up to 250 participants, and manage meeting controls, recording, scheduling, delegate assignments, etc.</li><li>Enhanced security using AWS Identity and Access Management policies, enabling user administration, policy management, and SSO setup.</li><li>Supports audio recording in .m4a format and converts screen shares to video (.mp4). However, the attendee recording is not available.</li><li>Session analytics are not available unless you opt for the enterprise plan, which comes at a higher price.</li><li>Basic bandwidth management capabilities to handle minor disruptions in the user's network.</li><li>The platform does not provide edge case management capabilities, which means you are responsible for handling such scenarios.</li></ul><h4 id="aws-chime-pricing">AWS Chime pricing</h4>
<ul><li><strong>Basic Tier</strong>: <a href="https://aws.amazon.com/chime/pricing/"><strong>Free</strong></a>, includes <strong>one-on-one audio and video calls</strong> and <strong>group chat</strong>.</li><li><strong>Plus Tier</strong>: <strong>$2.50</strong> per user per month, includes all <strong>basic features</strong>, <strong>screen sharing</strong>, <strong>remote desktop control</strong>, <strong>1 GB message history per user</strong>, and <strong>Active Directory integration</strong>.</li><li><strong>Pro Tier</strong>: <strong>$15</strong> per user per month, includes all <strong>Plus features</strong>, allows <strong>scheduling and hosting</strong> meetings for three or more people (<strong>up to 100 attendees</strong>), <strong>meeting recording</strong>, <strong>Outlook integration</strong>, and more.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-amazon-chime-sdk"><strong>Agora and AWS Chime</strong></a><strong>.</strong></blockquote><h3 id="9-daily">9. Daily</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/WebRTC-Video-Audio-APIs-for-Every-Developer-Daily.jpeg" class="kg-image" alt="Top 10 Agora Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Daily is a platform that enables developers to build real-time video and audio calls that work directly in the browser. It provides SDKs and APIs to handle common backend video call use cases across different platforms.</p><h4 id="heres-an-overview-of-daily">Here's an overview of Daily</h4>
<ul><li>Two main approaches: <a href="https://www.videosdk.live/blog/daily-co-alternative"><strong>Daily</strong></a> Client SDKs allow developers to build custom UIs by interacting with Daily's core APIs. Daily Prebuilt is an embeddable video chat widget that can be added to any web app with fewer lines of code.</li><li>Collaborative features include HD screen sharing, breakout rooms, raise a hand, live transcription, whiteboard, and customizable text chat to enhance the user experience.</li><li>Prerecorded video can be embedded, interactive real-time calls can be hosted with up to 1,000 people, and live streaming to millions is possible with minimal latency. Real-time call data is available for debugging and optimization.</li><li>Mobile SDKs are currently in the beta phase of development, so their evolution and ability to solve specific use cases may vary.</li><li>Support response time can take up to 72 hours to resolve issues.</li><li>Users need to add their own publish-subscribe logic to manage live video interactions.</li><li>The platform does not have built-in edge case management capabilities.</li></ul><h4 id="daily-pricing">Daily pricing</h4>
<ul><li><strong>$0.004</strong> per participant minute, with 10,000 free minutes refreshed each month.</li><li>Additional charges: <strong>audio</strong> at <strong>$0.00099</strong> per user minute, <strong>streaming</strong> at <strong>$0.0012</strong> per minute, <strong>RTMP output</strong> at <strong>$0.015</strong> per minute, and <strong>recording</strong> at <strong>$0.01349</strong> per GB.</li><li><strong>Email</strong> and <strong>chat support</strong> are available for <strong>free</strong> for all accounts. </li><li><strong>Advanced support features</strong> are available through add-on packages starting from <strong>$250</strong> per month.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-daily"><strong>Agora and Daily</strong></a><strong>.</strong></blockquote><h3 id="10-signalwire">10. SignalWire</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Building-The-Software-Defined-Telecom-Network-SignalWire.jpeg" class="kg-image" alt="Top 10 Agora Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>SignalWire is an API-driven platform that allows developers to integrate live and on-demand video experiences into their applications. It simplifies video encoding, delivery, and renditions, aiming to provide a seamless video streaming experience.</p><h4 id="heres-a-summary-of-signalwire">Here's a summary of SignalWire</h4>
<ul><li>The SDK enables developers to embed real-time video and live streams into their applications. It supports web, iOS, and Android SDKs for building applications with live video capabilities.</li><li>Each call supports up to 100 participants in a real-time <a href="https://www.videosdk.live/blog/webrtc">webRTC</a> environment with their video enabled.</li><li>The SDK does not include built-in support for managing video call disruptions or handling the publish-subscribe logic of meeting users. These aspects would need to be implemented separately.</li></ul><h4 id="signalwire-pricing">SignalWire pricing</h4>
<ul><li><a href="https://signalwire.com/pricing/video">Pricing</a> includes <strong>$0.0060</strong> per minute for <strong>HD</strong> and <strong>$0.012</strong> per minute for a <strong>Full HD</strong> video call.</li><li>Additional features such as <strong>recording</strong> cost <strong>$0.0045</strong> per minute, and <strong>streaming</strong> is priced at <strong>$0.10</strong> per minute.</li></ul><h2 id="certainly">Why VideoSDK Stands Out</h2>
<p>While all the video conferencing SDKs mentioned offer various features and capabilities, <a href="https://www.videosdk.live">VideoSDK</a> stands out as an SDK that prioritizes a fast and seamless integration experience.</p><p>Video SDK offers a low-code solution that allows developers to quickly build live video experiences in their applications. With VideoSDK, it is possible to create and deploy custom video conferencing solutions in under 10 minutes, significantly reducing the time and effort required for integration.</p><p>Unlike other SDKs that may have longer integration times or limited customization options, <a href="https://www.videosdk.live">VideoSDK</a> aims to provide a streamlined process. By leveraging Video SDK, developers can create and embed live video experiences with ease, allowing users to connect, communicate, and collaborate in real-time.</p><h2 id="still-skeptical">Still skeptical?</h2>
<p>Take a deep dive into Video SDK's comprehensive <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart guide</a> and immerse yourself in the possibilities with our <a href="https://docs.videosdk.live/code-sample">powerful sample app</a>, built exclusively for Video SDK.</p><p>Embark on your integration journey today and seize the opportunity to claim your <a href="https://www.videosdk.live/pricing">complimentary $20 credit</a>, allowing you to unleash the full potential of Video SDK. And if you ever need assistance along the way, our dedicated team is just a click away, ready to support you.</p><p>Get ready and <a href="https://app.videosdk.live">sign up</a> to witness the remarkable experiences you can create using the extraordinary capabilities of Video SDK. Unleash your creativity and let the world see what you can build!</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Active Speaker in Android(Kotlin) Video Chat App?]]></title><description><![CDATA[In this article, we explore how to use VideoSDK to build active speaker highlighting in Android(Kotlin) video applications.]]></description><link>https://www.videosdk.live/blog/active-speaker-in-kotlin-video-chat-app</link><guid isPermaLink="false">65fbd25f2a88c204ca9ceb42</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Sat, 18 Jan 2025 16:20:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-Kotlin.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-Kotlin.jpg" alt="How to Integrate Active Speaker in Android(Kotlin) Video Chat App?"/><p>Imagine an active video conference with multiple participants. In group calls, it is important to identify the active speaker. 
By integrating the <strong>Active Speaker Indication</strong> feature into your Android (Kotlin) video chat application, you can enhance the user experience by providing visual cues that highlight who is currently speaking.</p><p>This guide will walk you through the step-by-step process of implementing active speaker indication in your application using VideoSDK. The feature transforms your Android video chat application by visually highlighting the currently speaking participant, ensuring a smooth and informative experience for your users during video calls.</p><h2 id="goals">Goals</h2><p>By the end of this article, you will:</p><ol><li>Create a <a href="https://app.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK.</li><li>Enable active speaker indication.</li></ol><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the active speaker highlighting functionality, we will need to use the capabilities that the VideoSDK offers. Before we dive into the implementation steps, let's make sure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. 
This token plays a crucial role in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/authentication-and-token#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>A supported version of the Java Development Kit.</li><li>Android Studio version 3.0 or later.</li><li>Android SDK API level 21 or higher.</li><li>A mobile device running Android 5.0 or later.</li></ul><h2 id="integrate-videosdk">Integrate VideoSDK</h2><p>Following the account creation and token generation steps, we'll guide you through the process of adding the VideoSDK library and other dependencies to your project. We'll also ensure your app has the required permissions to access features like audio recording, camera usage, and internet connectivity, all crucial for a seamless video experience.</p><h3 id="step-a-add-the-repositories-to-the-projects-settingsgradle-file">Step (a): Add the repositories to the project's <code>settings.gradle</code> file.</h3><pre><code class="language-groovy">dependencyResolutionManagement {
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}
</code></pre><h3 id="step-b-include-the-following-dependency-within-your-applications-buildgradle-file">Step (b): Include the following dependency within your application's <code>build.gradle</code> file:</h3><pre><code class="language-groovy">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.26'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // Other dependencies specific to your app
}
</code></pre><blockquote>If your project has set <code>android.useAndroidX=true</code>, then set <code>android.enableJetifier=true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflict.</blockquote><h3 id="step-c-add-permissions-to-your-project">Step (c): Add permissions to your project</h3><p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-xml">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
</code></pre><p>These permissions are essential for enabling core functionalities like audio recording, internet connectivity for real-time communication, and camera access for video streams within your video application.</p><h2 id="essential-steps-for-building-the-video-calling-functionality">Essential Steps for Building the Video Calling Functionality</h2><p>We'll now set up the functionalities that make your video application after setting up your project with VideoSDK. This section outlines the essential steps for implementing core functionalities within your app. This section will guide you through four key aspects.</p><h3 id="step-1-generate-a-meetingid">Step 1: Generate a <code>meetingId</code></h3><p>Now, we can create the <code>meetingId</code> from the VideoSDK's rooms API. You can refer to this <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/setup-call/initialize-meeting#generating-meeting-id">documentation</a> to generate meetingId.</p><h3 id="step-2-initializing-the-meeting">Step 2: Initializing the Meeting</h3><p>After getting <code>meetingId</code> , the next step involves initializing the meeting for that we need to,</p><ol><li>Initialize VideoSDK.</li><li>Configure <strong>VideoSDK</strong> with a token.</li><li>Initialize the meeting with required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code> and more.</li><li>Add <code>MeetingEventListener</code> for listening events such as Meeting Join/Left and Participant Join/Left.</li><li>Join the room with <code>meeting.join()</code> a method.</li></ol><p>Please copy the .xml file of the <code>MeetingActivity</code> from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/activity_meeting.xml">here</a>.</p><pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
  // declare the variables we will be using to handle the meeting
  private var meeting: Meeting? = null
  private var micEnabled = true
  private var webcamEnabled = true

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)

    val token = "" // Replace with the token you generated from the VideoSDK Dashboard
    val meetingId = "" // Replace with the meetingId you have generated
    val participantName = "John Doe"
    
    // 1. Initialize VideoSDK
    VideoSDK.initialize(applicationContext)

    // 2. Configuration VideoSDK with Token
    VideoSDK.config(token)

    // 3. Initialize VideoSDK Meeting
    meeting = VideoSDK.initMeeting(
      this@MeetingActivity, meetingId, participantName,
      micEnabled, webcamEnabled, null, null, false, null, null)

    // 4. Add event listener for listening upcoming events
    meeting!!.addEventListener(meetingEventListener)

    // 5. Join VideoSDK Meeting
    meeting!!.join()

    (findViewById&lt;View&gt;(R.id.tvMeetingId) as TextView).text = meetingId
  }

  // creating the MeetingEventListener
  private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
    override fun onMeetingJoined() {
      Log.d("#meeting", "onMeetingJoined()")
    }

    override fun onMeetingLeft() {
      Log.d("#meeting", "onMeetingLeft()")
      meeting = null
      if (!isDestroyed) finish()
    }

    override fun onParticipantJoined(participant: Participant) {
      Toast.makeText(
        this@MeetingActivity, participant.displayName + " joined",
        Toast.LENGTH_SHORT
      ).show()
    }

    override fun onParticipantLeft(participant: Participant) {
      Toast.makeText(
         this@MeetingActivity, participant.displayName + " left",
         Toast.LENGTH_SHORT
      ).show()
    }
  }
}
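
// Step 1 sketch: the meetingId comes from the VideoSDK rooms API
// (a POST to https://api.videosdk.live/v2/rooms with your auth token in the
// Authorization header -- endpoint and response shape assumed from the docs).
// The response body looks like {"roomId":"abcd-efgh-ijkl"}; a tiny helper to
// pull the roomId out of that JSON (a JSON library is the sturdier choice):
fun extractRoomId(responseBody: String): String? =
    Regex("\"roomId\"\\s*:\\s*\"([^\"]+)\"").find(responseBody)?.groupValues?.get(1)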
</code></pre><h3 id="step-3-handle-local-participant-media">Step 3: Handle Local Participant Media</h3><p>After successfully entering the meeting, it's time to manage the webcam and microphone for the local participant (you).</p><p>To enable or disable the webcam, we'll use the <code>Meeting</code> class methods <code>enableWebcam()</code> and <code>disableWebcam()</code>, respectively. Similarly, to mute or unmute the microphone, we'll utilize the methods <code>muteMic()</code> and <code>unmuteMic()</code></p><pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)
    //...Meeting Setup is Here

    // actions
    setActionListeners()
  }

  private fun setActionListeners() {
    // toggle mic
    findViewById&lt;View&gt;(R.id.btnMic).setOnClickListener { view: View? -&gt;
      if (micEnabled) {
        // this will mute the local participant's mic
        meeting!!.muteMic()
        Toast.makeText(this@MeetingActivity, "Mic Muted", Toast.LENGTH_SHORT).show()
      } else {
        // this will unmute the local participant's mic
        meeting!!.unmuteMic()
        Toast.makeText(this@MeetingActivity, "Mic Enabled", Toast.LENGTH_SHORT).show()
      }
      micEnabled = !micEnabled
    }

    // toggle webcam
    findViewById&lt;View&gt;(R.id.btnWebcam).setOnClickListener { view: View? -&gt;
      if (webcamEnabled) {
        // this will disable the local participant webcam
        meeting!!.disableWebcam()
        Toast.makeText(this@MeetingActivity, "Webcam Disabled", Toast.LENGTH_SHORT).show()
      } else {
        // this will enable the local participant webcam
        meeting!!.enableWebcam()
        Toast.makeText(this@MeetingActivity, "Webcam Enabled", Toast.LENGTH_SHORT).show()
      }
      webcamEnabled = !webcamEnabled
    }

    // leave meeting
    findViewById&lt;View&gt;(R.id.btnLeave).setOnClickListener { view: View? -&gt;
      // this will make the local participant leave the meeting
      meeting!!.leave()
    }
  }
}
</code></pre><h3 id="step-4-handling-the-participants-view">Step 4: Handling the Participants' View</h3><p>To display a list of participants in your video UI, we'll utilize a <code>RecyclerView</code>.</p><p><strong>(a)</strong> This involves creating a new layout for the participant view named <code>item_remote_peer.xml</code> in the <code>res/layout</code> folder. You can copy <code>item_remote_peer.xml </code>file from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/item_remote_peer.xml">here</a>.</p><p><strong>(b)</strong> Create a RecyclerView adapter <code>ParticipantAdapter</code> which will be responsible for displaying the participant list. Within this adapter, define a <code>PeerViewHolder</code> class that extends <code>RecyclerView.ViewHolder</code>.</p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) : RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PeerViewHolder {
    return PeerViewHolder(
      LayoutInflater.from(parent.context)
        .inflate(R.layout.item_remote_peer, parent, false)
    )
  }

  override fun onBindViewHolder(holder: PeerViewHolder, position: Int) {
  }

  override fun getItemCount(): Int {
    return 0
  }

  class PeerViewHolder(view: View) : RecyclerView.ViewHolder(view) {
    // 'VideoView' to show Video Stream
    var participantView: VideoView
    var tvName: TextView

    init {
        tvName = view.findViewById(R.id.tvName)
        participantView = view.findViewById(R.id.participantView)
    }
  }
}
</code></pre><p><strong>(c)</strong> Now, we will render a list of <code>Participant</code> for the meeting. We will initialize this list in the constructor of the <code>ParticipantAdapter</code></p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) :
    RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  // create an empty list that will store all participants
  private val participants: MutableList&lt;Participant&gt; = ArrayList()

  init {
    // adding the local participant(You) to the list
    participants.add(meeting.localParticipant)

    // adding Meeting Event listener to get the participant join/leave event in the meeting.
    meeting.addEventListener(object : MeetingEventListener() {
      override fun onParticipantJoined(participant: Participant) {
        // add participant to the list
        participants.add(participant)
        notifyItemInserted(participants.size - 1)
      }

      override fun onParticipantLeft(participant: Participant) {
        // find the participant's position by id, then remove it from the list
        val pos = participants.indexOfFirst { it.id == participant.id }
        if (pos &gt;= 0) {
          participants.removeAt(pos)
          notifyItemRemoved(pos)
        }
      }
    })
  }

  // replace getItemCount() method with following.
  // this method returns the size of total number of participants
  override fun getItemCount(): Int {
    return participants.size
  }
  //...
}
</code></pre><p><strong>(d)</strong> We have listed our participants. Let's set up the view holder to display a participant video.</p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) :
    RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  // replace onBindViewHolder() method with following.
  override fun onBindViewHolder(holder: PeerViewHolder, position: Int) {
    val participant = participants[position]

    holder.tvName.text = participant.displayName

    // adding the initial video stream for the participant into the 'VideoView'
    for ((_, stream) in participant.streams) {
      if (stream.kind.equals("video", ignoreCase = true)) {
        holder.participantView.visibility = View.VISIBLE
        val videoTrack = stream.track as VideoTrack
        holder.participantView.addTrack(videoTrack)
        break
      }
    }

    // add Listener to the participant which will update start or stop the video stream of that participant
    participant.addEventListener(object : ParticipantEventListener() {
      override fun onStreamEnabled(stream: Stream) {
        if (stream.kind.equals("video", ignoreCase = true)) {
          holder.participantView.visibility = View.VISIBLE
          val videoTrack = stream.track as VideoTrack
          holder.participantView.addTrack(videoTrack)
       }
      }

      override fun onStreamDisabled(stream: Stream) {
        if (stream.kind.equals("video", ignoreCase = true)) {
          holder.participantView.removeTrack()
          holder.participantView.visibility = View.GONE
        }
      }
    })
  }
}
</code></pre><p><strong>(e)</strong> Now, add this adapter to the <code>MeetingActivity</code></p><pre><code class="language-kotlin">override fun onCreate(savedInstanceState: Bundle?) {
  // Meeting Setup...
  //...
  val rvParticipants = findViewById&lt;RecyclerView&gt;(R.id.rvParticipants)
  rvParticipants.layoutManager = GridLayoutManager(this, 2)
  rvParticipants.adapter = ParticipantAdapter(meeting!!)
}
</code></pre><h2 id="integrate-active-speaker-indication">Integrate Active Speaker Indication</h2><p>Once you've successfully installed VideoSDK in your Android app, it's time to integrate the Active Speaker Indication feature. This functionality utilizes real-time audio analysis to identify the participant who is speaking at any given moment. VideoSDK then provides this information to your app, allowing you to integrate visual cues into your user interface.</p><p>The active speaker highlight feature aims to visually indicate the participant currently speaking in the video call. This functionality is especially valuable in scenarios with a large number of participants, where it can be difficult to identify the source of the incoming audio.</p><p>This is how we can integrate this feature with VideoSDK. Every time a participant actively speaks during the meeting, the <a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/meeting-event-listener-class#onspeakerchanged"><code>onSpeakerChanged</code></a> event is triggered. This event transmits the participant ID of the person who is currently speaking.</p><p>By capturing this event and leveraging the participant ID, you can visually highlight the corresponding participant in your application's user interface. 
This can be achieved by modifying the user interface element that represents the speaking participant, such as changing the background color or adding a visual indicator.</p><pre><code class="language-kotlin">private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
    override fun onSpeakerChanged(participantId: String?) {
        // Show which participant is currently speaking
        Toast.makeText(this@MainActivity, "Active Speaker participantId: $participantId", Toast.LENGTH_SHORT).show()
        super.onSpeakerChanged(participantId)
    }
}
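
// A minimal sketch (hypothetical helper, not part of the SDK) of how the
// participant ID reported by onSpeakerChanged can drive a UI highlight:
// track the last active speaker so each participant tile can check whether
// it should be highlighted (e.g. by changing its background color).
class ActiveSpeakerTracker {
    private var activeSpeakerId: String? = null

    // Call from onSpeakerChanged with the reported participant ID.
    fun onSpeakerChanged(participantId: String?) {
        activeSpeakerId = participantId
    }

    // A tile should be highlighted only for the current active speaker.
    fun isActive(participantId: String): Boolean = participantId == activeSpeakerId
}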
</code></pre><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/03/Screenshot_20240319_181033_Android-VideoSDK-App--2-.jpg" class="kg-image" alt="How to Integrate Active Speaker in Android(Kotlin) Video Chat App?" loading="lazy" width="396" height="800"/></figure><p>To implement the dynamic speaker detection UI featured in the above image, feel free to check out our <a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example">GitHub repository</a>.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - videosdk-live/videosdk-rtc-android-kotlin-sdk-example: WebRTC based video conferencing SDK for Android (Kotlin)</div><div class="kg-bookmark-description">WebRTC based video conferencing SDK for Android (Kotlin) - videosdk-live/videosdk-rtc-android-kotlin-sdk-example</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate Active Speaker in Android(Kotlin) Video Chat App?"/><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/67e89b2860333e9273193b829f48464ca203e0c9b6c22c3908faf4ca585a8940/videosdk-live/videosdk-rtc-android-kotlin-sdk-example" alt="How to Integrate Active Speaker in Android(Kotlin) Video Chat App?" onerror="this.style.display = 'none'"/></div></a></figure><h2 id="conclusion"><strong>Conclusion</strong></h2><p>This guide has equipped you with the knowledge to integrate active speaker signals into your Android video chat application using VideoSDK. 
With this feature, you improve the user experience by providing clear visual cues that strengthen communication and participation in your video calls.</p><p>To unlock the full potential of VideoSDK, we encourage you to sign up for VideoSDK and further explore its features. Start implementing VideoSDK and Active Speaker Indication today to improve your video app's functionality and user engagement.</p><p><a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and get <strong>10,000 free minutes</strong> to take your video app to the next level!</p>]]></content:encoded></item><item><title><![CDATA[RegTech (Regulatory Technology): Complete Guide on Compliance]]></title><description><![CDATA[Regulatory Technology (RegTech) has emerged as a transformative solution for enterprises facing complex regulatory structures in the modern financial environment. ]]></description><link>https://www.videosdk.live/blog/what-is-regtech</link><guid isPermaLink="false">66d54ac320fab018df10fe0b</guid><category><![CDATA[RegTech]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Sat, 18 Jan 2025 06:29:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-RegTech_-Understand-its-importance-in-Banking-and-Fintech.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-RegTech_-Understand-its-importance-in-Banking-and-Fintech.jpg" alt="RegTech (Regulatory Technology): Complete Guide on Compliance"/><p>Regulatory Technology, known as RegTech, is an innovative approach to compliance amid the complex, fast-changing regulatory structures of the contemporary financial environment. 
This approach is reshaping how banking and financial organizations manage complex regulations.</p><h2 id="what-is-regulatory-technology-regtech">What is Regulatory Technology (RegTech)?</h2><p>RegTech, short for Regulatory Technology, refers to innovative solutions designed to help organizations efficiently and effectively manage regulatory compliance. This cutting-edge field primarily serves industries such as financial services, banking, and insurance, where adherence to complex regulatory frameworks is necessary.</p><p>The term "RegTech" was first coined by the UK's Financial Conduct Authority (FCA) in 2015, although the concept has roots dating back to the 1990s when early <a href="https://www.videosdk.live/solutions/video-banking">digital banking solutions</a> began addressing compliance issues. The 2008 financial crisis further highlighted the need for more robust compliance mechanisms, leading to the rapid growth of the RegTech industry.</p><h2 id="how-regtech-is-becoming-a-key-aspect-of-compliance-needs">How Is RegTech Becoming a Key Aspect of Compliance?</h2><p>Regulatory Technology (RegTech) solutions provide an extensive collection of functionalities aimed at streamlining compliance processes, including:</p><h3 id="automation-of-compliance-processes">Automation of Compliance Processes: </h3><p>RegTech leverages automation to streamline various compliance-related tasks, including regulatory reporting, risk management, identity verification, transaction monitoring, and fraud detection.</p><h3 id="data-management-and-analysis">Data Management and Analysis: </h3><p>RegTech enables companies to handle and analyze vast volumes of data required for regulatory compliance by harnessing advanced analytics, machine learning, and artificial intelligence. 
This capability allows for the detection of patterns, anomalies, and potential risks.</p><h3 id="real-time-monitoring">Real-Time Monitoring: </h3><p>RegTech tools provide real-time monitoring of transactions and activities, allowing for immediate responses to any regulatory breaches or emerging risks.</p><h3 id="adaptability-to-regulatory-changes">Adaptability to Regulatory Changes: </h3><p>RegTech solutions are designed to be flexible and adaptable, ensuring that businesses can quickly implement necessary adjustments in response to evolving regulatory landscapes.</p><h3 id="cost-reduction-and-efficiency">Cost Reduction and Efficiency: </h3><p>The automation of compliance tasks significantly reduces the time and resources traditionally required for manual processes. This efficiency not only cuts costs but also helps companies avoid substantial fines and penalties associated with non-compliance.</p><h2 id="what-is-the-importance-of-regtech-for-financial-institutions">What is the Importance of RegTech for Financial Institutions?</h2><p>The financial services industry has been at the forefront of RegTech adoption. As regulatory demands grew, the need for more robust compliance mechanisms became clear. 
<a href="https://www.pwc.in/industries/financial-services/fintech/fintech-insights/regtech-a-new-disruption-in-the-financial-services-space.html">RegTech in banking</a> has since become an essential tool for managing risk, reducing costs, and ensuring adherence to complex regulatory requirements.</p><h3 id="kyc-and-aml-solutions">KYC and AML Solutions:</h3><p>Automated Know Your Customer (KYC) and Anti-Money Laundering (AML) checks, supported by <a href="https://www.idenfy.com/blog/best-aml-software-providers/" rel="noreferrer">AML software</a>, help verify customer identities and prevent illicit activities, enhancing the overall security of banking operations.</p><h3 id="enhanced-detection-capabilities">Enhanced Detection Capabilities: </h3><p>RegTech solutions utilize advanced technologies to monitor financial transactions in real time, detecting suspicious patterns and anomalies that may indicate fraudulent activity or money laundering.</p><h3 id="improved-accuracy-and-efficiency">Improved Accuracy and Efficiency: </h3><p>RegTech reduces human error and increases the speed and accuracy of detecting potential financial crimes by automating compliance processes.</p><h3 id="advanced-data-analysis">Advanced Data Analysis: </h3><p>RegTech's ability to analyze vast amounts of data quickly and accurately allows financial institutions to identify high-risk behaviors and potential threats more effectively.</p><h3 id="streamlined-regulatory-reporting">Streamlined Regulatory Reporting: </h3><p>Automated documentation and reporting processes ensure timely submission of suspicious activity reports to regulatory authorities, maintaining transparency and compliance.</p><h3 id="adaptability-to-new-threats">Adaptability to New Threats: </h3><p>As financial criminals become increasingly sophisticated, RegTech solutions can be rapidly adapted to detect and combat new types of financial crimes.</p><h2 id="challenges-of-regulatory-technology">Challenges of Regulatory 
Technology</h2><p>While RegTech has proven highly effective in enhancing regulatory compliance and combating financial crimes, some challenges remain:</p><ul><li><strong>Data Security</strong>: The increased reliance on data processing and storage raises concerns about potential data breaches and cybersecurity risks. To mitigate these risks, <a href="https://controld.com/blog/best-internet-filtering-software/" rel="noreferrer">internet filtering software</a> helps restrict access to malicious or untrusted online content, reducing exposure to cyber threats and strengthening overall data security.</li><li><strong>Regulatory Uncertainty</strong>: The rapid pace of technological advancement can sometimes outpace regulatory guidance, creating uncertainty for institutions implementing RegTech solutions.</li><li><strong>Talent Gap</strong>: Effectively leveraging RegTech requires specialized skills in areas like <a href="https://professional-education-gl.mit.edu/mit-online-data-science-program" rel="noreferrer">applied data science</a> and AI, posing a challenge for many organizations in attracting and retaining the right talent. An <a href="https://www.mygreatlearning.com/artificial-intelligence/courses" rel="noreferrer">Artificial Intelligence course</a> can help build the skills needed to govern data usage, manage access, and oversee related tasks in areas such as blockchain technology.</li></ul><h2 id="future-of-regulatory-technology">Future of Regulatory Technology</h2><p>Despite these challenges, the future of RegTech looks promising. As regulatory environments continue to grow more complex, the need for compliance intensifies. 
By 2027, the <a href="https://www.cbinsights.com/research/regtech-regulation-compliance-market-map/">global RegTech market</a> is projected to exceed $28 billion, underscoring its critical role in modern business operations.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/09/Reg-Tech-2017-VF4.png" class="kg-image" alt="RegTech (Regulatory Technology): Complete Guide on Compliance" loading="lazy" width="1822" height="1536"/></figure><p>Regulatory Technology represents a significant leap forward in how organizations manage regulatory compliance. By leveraging advanced technologies such as AI, machine learning, and <a href="https://intellias.com/big-data-analytics-telecom/">big data</a> analytics, RegTech is revolutionizing compliance and paving the way for a more secure and efficient financial ecosystem.</p><p>For businesses looking to stay ahead in an increasingly complex regulatory environment, embracing RegTech solutions is not just an option; it's essential for maintaining compliance, reducing risks, and focusing on core business operations in the digital age.</p>]]></content:encoded></item><item><title><![CDATA[10 Best 100ms.live Alternatives in 2026 (Tested & Compared)]]></title><description><![CDATA[Discover a revolutionary and transformative alternative to 100ms, unlocking a remarkable online experience that propels you to unprecedented success. 
Embrace this golden opportunity to unleash unparalleled potential and embark on a journey to new heights.]]></description><link>https://www.videosdk.live/blog/100ms-alternative</link><guid isPermaLink="false">64b0e97a9eadee0b8b9e6ec8</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Sat, 18 Jan 2025 04:32:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/100ms-alternative.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/100ms-alternative.jpg" alt="10 Best 100ms.live Alternatives in 2026 (Tested & Compared)"/><p>If you're <strong>searching for an </strong><a href="https://www.videosdk.live/100ms-vs-videosdk" rel="noreferrer"><strong>alternative to 100ms.live</strong></a> that offers effortless integration of real-time video into your application, you've landed at the perfect spot! While 100ms.live is undoubtedly a well-known option, there exists a vast array of untapped opportunities outside its platform.</p><p>Stay tuned to discover what you might be overlooking, particularly if you're already a 100ms.live user.</p><h2 id="what-is-a-100ms">What is 100ms?</h2><p>100ms is a cloud platform that purports to allow developers to integrate video and audio conferencing into their applications. The platform provides REST APIs, SDKs, and a dashboard for managing live interactive audio and video, including recording and playback.</p><p>However, 100ms has been criticized for its complexity and lack of user-friendly documentation, making it challenging for developers to implement its features seamlessly. Additionally, some users have reported issues with the platform's reliability, citing frequent downtime and technical glitches. 
While 100ms claims to offer a low-code solution, developers have found that the learning curve can be steep, resulting in time-consuming and frustrating development processes.</p><p>In summary, while 100ms offers the promise of video and audio conferencing integration, it falls short in terms of usability, reliability, and developer-friendliness, leaving many users dissatisfied with their experience.</p><h2 id="why-consider-alternatives-to-100mslive">Why Consider Alternatives to 100ms.live?</h2>
<p><strong>Finding an alternative to 100ms.live</strong> may be necessary for many reasons, such as needing <a href="https://github.com/100mslive/100ms-android/issues/213">better performance</a>, a better <a href="https://github.com/100mslive/100ms-flutter/issues/1424">user experience</a>, more features, increased customization, or cost considerations. By evaluating these factors, developers can make informed decisions that align with their project goals and enhance the overall video experience for their users.</p><p>The <strong>top 10 100ms.live alternatives</strong> are VideoSDK, Twilio, MirrorFly, Agora, Jitsi, Vonage, AWS Chime, EnableX, Whereby, and SignalWire.</p><blockquote>
<h2 id="10-best-100mslive-alternatives-for-2024">10 Best 100ms.live Alternatives for 2026</h2>
<ul>
<li>VideoSDK</li>
<li>Twilio Video</li>
<li>MirrorFly</li>
<li>Agora</li>
<li>Jitsi</li>
<li>Vonage</li>
<li>AWS Chime SDK</li>
<li>EnableX</li>
<li>Whereby</li>
<li>SignalWire</li>
</ul>
</blockquote>
<h2 id="1-videosdk-seamless-integration-and-comprehensive-features">1. VideoSDK: Seamless Integration and Comprehensive Features</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-for-Real-time-Communication-Live-Streaming-Video-API-5.jpeg" class="kg-image" alt="10 Best 100ms.live Alternatives in 2026 (Tested & Compared)" loading="lazy" width="1920" height="967"/></figure><p>Immerse yourself in the amazing capabilities of <a href="https://www.videosdk.live">VideoSDK</a>, an API designed to seamlessly integrate robust audio and video features into your applications. With minimal effort, you can enhance your app by providing live audio and video experiences across different platforms. Elevate your user experience and take your application to the next level with VideoSDK's powerful features.</p><h3 id="key-points-about-videosdk">Key points about VideoSDK</h3>
<ul><li>VideoSDK offers a seamless integration experience, combining simplicity and speed to enable you to focus on developing innovative features that enhance user retention. Say goodbye to complex integration processes and unlock a world of possibilities.</li><li>With VideoSDK, you can enjoy the benefits of exceptional scalability, adaptive bitrate technology for optimal video quality, extensive customization options to tailor the user experience, high-quality recording capabilities, comprehensive analytics to gain valuable insights, cross-platform streaming for broader reach, effortless scalability to accommodate growing needs, and robust support across various platforms.</li><li>Whether you're developing for mobile (Flutter, Android, iOS), web (JavaScript Core SDK + UI Kit), or desktop (Flutter Desktop), Video SDK empowers you to effortlessly create immersive and engaging video experiences.</li></ul><h3 id="videosdk-pricing">VideoSDK pricing</h3>
<ul><li>Experience the extraordinary value of Video SDK! Take full advantage of our generous offer of <a href="https://www.videosdk.live/pricing">$20 free credit</a> and enjoy <a href="https://www.videosdk.live/pricing#pricingCalc">flexible pricing</a> options for both video and audio calls.</li><li>With Video SDK, you can enjoy <strong>video calls</strong> starting at an incredible rate of <strong>only $0.003</strong> per participant per minute, allowing you to connect with others without breaking the bank. </li><li>Additionally, their <strong>audio calls</strong> are available at a minimal cost of just <strong>$0.0006</strong> per minute, ensuring affordable communication options.</li><li>To enhance your experience, they offer <strong>cloud recordings</strong> at an affordable rate of <strong>$0.015</strong> per minute, allowing you to capture and preserve important moments. </li><li>For those who require <strong>RTMP output</strong>, they provide competitive pricing at <strong>$0.030</strong> per minute, ensuring seamless streaming capabilities.</li><li>Rest assured that their dedicated support team is available to assist you <strong>24/7</strong>, providing prompt and reliable customer support whenever you need it.</li><li>Upgrade your video capabilities today and embark on a new level of excellence with VideoSDK!</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/100ms-vs-videosdk"><strong>100ms and VideoSDK</strong></a><strong>.</strong></blockquote>
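<p>As a quick sanity check on the rates above, here is a rough back-of-the-envelope cost estimate (a sketch assuming the quoted $0.003 per participant-minute video rate and $0.015 per minute recording rate; actual billing may differ):</p><pre><code class="language-kotlin">// Rough session cost estimate using the per-minute rates quoted above.
// Video is billed per participant-minute; recording per recorded minute.
fun sessionCostUsd(participants: Int, minutes: Int, withRecording: Boolean = false): Double {
    val videoRate = 0.003      // USD per participant per minute
    val recordingRate = 0.015  // USD per recorded minute
    val videoCost = participants * minutes * videoRate
    val recordingCost = if (withRecording) minutes * recordingRate else 0.0
    return videoCost + recordingCost
}

// Example: a 60-minute call with 10 participants, recorded:
// sessionCostUsd(10, 60, withRecording = true)</code></pre>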
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8"/>
	<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet"/>
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-twilio-video-versatile-and-reliable-video-integration">2. Twilio Video: Versatile and Reliable Video Integration</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Communication-APIs-for-SMS-Voice-Video-Authentication_twilio-4.jpeg" class="kg-image" alt="10 Best 100ms.live Alternatives in 2026 (Tested & Compared)" loading="lazy" width="1920" height="967"/></figure><p>Twilio is a top-tier video SDK solution that enables businesses to easily integrate live video into their mobile and web applications. One of the notable strengths of Twilio is its versatility, allowing you to either create a new app from the ground up or enhance your existing solutions with powerful communication features. Whether you're starting fresh or expanding the capabilities of your app, Twilio offers a dependable and comprehensive solution for seamlessly integrating live video into your applications.</p><h3 id="key-points-about-twilio">Key points about Twilio</h3>
<ul><li>Twilio offers web, iOS, and Android SDKs that enable seamless integration of live video into applications, providing developers with versatile tools to enhance their user experiences.</li><li>However, when using Twilio, manual configuration and additional code are required for utilizing multiple audio and video inputs, which may increase development complexity.</li><li>Twilio's call insights feature allows for error tracking and analysis, but implementing it requires additional code integration.</li><li>As usage grows, pricing can become a concern, as Twilio lacks a built-in tiering system in the dashboard to accommodate scaling needs effectively.</li><li>In terms of call capacity, Twilio supports up to 50 hosts and participants, which may be sufficient for many use cases.</li><li>It's worth noting that Twilio does not provide plugins for easy product development, which may require developers to invest additional time and effort.</li><li>Lastly, while Twilio offers customization options, the level of customization provided by the Twilio Video SDK may not meet the specific requirements of all developers, potentially necessitating additional code development.</li></ul><h3 id="pricing-for-twilio">Pricing for Twilio</h3>
<ul><li><a href="https://www.videosdk.live/blog/twilio-video-alternative"><strong>Twilio</strong></a> offers <a href="https://www.twilio.com/en-us/video/pricing">pricing</a> starting at <strong>$4</strong> per 1,000 minutes for their <strong>video services</strong>.</li><li>Additionally, they charge <strong>$0.004</strong> per participant minute for <strong>recordings</strong>, <strong>$0.01</strong> per composed minute for <strong>recording compositions</strong>, and <strong>storage</strong> costs <strong>$0.00167</strong> per GB per day after the initial 10 GB.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-100ms"><strong>100ms and Twilio</strong></a><strong>.</strong></blockquote><h2 id="3-mirrorfly-enterprise-level-communication-with-extensive-features">3. MirrorFly: Enterprise-Level Communication with Extensive Features</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Live-Video-Call-API-Best-Video-Chat-SDK-for-Android-iOS-mirrorfly-4.jpeg" class="kg-image" alt="10 Best 100ms.live Alternatives in 2026 (Tested & Compared)" loading="lazy" width="1920" height="967"/></figure><p>MirrorFly is an outstanding in-app communication suite designed specifically for enterprises. It offers a wide range of powerful APIs and SDKs that deliver exceptional chat and calling experiences. With over 150 remarkable features for chat, voice, and video calling, this cloud-based solution seamlessly integrates to create a robust communication platform.</p><h3 id="key-points-about-mirrorfly">Key points about MirrorFly</h3>
<ul><li>MirrorFly may have limited customization options, which can restrict the ability to tailor the platform according to specific branding or user experience requirements. This may limit the uniqueness and personalization of the communication features.</li><li>MirrorFly may face difficulties in scaling for larger applications or handling a high volume of users. The platform may struggle to maintain performance and stability when dealing with significant traffic or complex use cases.</li><li>Users have reported mixed experiences with MirrorFly's technical support. Some have found it lacking in responsiveness, leading to delays or difficulties in resolving issues or addressing concerns.</li><li>MirrorFly's pricing structure may not be suitable for all budgets or use cases. Depending on the desired features and scalability requirements, the costs associated with using MirrorFly may be higher compared to alternative communication platforms.</li><li>Integrating MirrorFly into existing applications or workflows may require significant effort and technical expertise. The platform might lack comprehensive documentation or robust developer resources, making the integration process challenging or time-consuming.</li></ul><h3 id="mirrorfly-pricing">MirrorFly pricing</h3>
<ul><li>MirrorFly's <a href="https://www.mirrorfly.com/pricing.php">pricing</a> starts at <strong>$299</strong> per month, making it a higher-cost option to consider.</li></ul><h2 id="4-agora-global-coverage-with-extensive-video-calling-features">4. Agora: Global Coverage with Extensive Video Calling Features</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Agora-Real-Time-Voice-and-Video-Engagement-4.jpeg" class="kg-image" alt="10 Best 100ms.live Alternatives in 2026 (Tested & Compared)" loading="lazy" width="1920" height="967"/></figure><p>Agora's video calling SDK provides a wide range of features, such as embedded voice and video chat, real-time recording, live streaming, and instant messaging. These features empower developers to create engaging and immersive live experiences within their applications.</p><h3 id="key-points-about-agora">Key points about Agora</h3>
<ul><li>Agora's video SDK offers embedded voice and video chat, real-time recording, live streaming, and instant messaging.</li><li>Additional add-ons like AR facial masks, sound effects, and whiteboards are available at an extra cost.</li><li>Agora's SD-RTN ensures extensive global coverage with ultra-low latency streaming capabilities.</li><li>The pricing structure may be complex and may not be suitable for businesses with limited budgets.</li><li>Users seeking hands-on support may experience delays as Agora's support team may require additional time to provide assistance.</li></ul><h3 id="agora-pricing">Agora pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/agora-alternative"><strong>Agora</strong></a> offers <strong>Premium</strong> and <strong>Standard </strong><a href="https://www.agora.io/en/pricing/"><strong>pricing</strong> </a>options for their services. The pricing is based on the usage duration of audio and video calls, calculated on a monthly basis. </li><li>It is categorized into four types, depending on the video resolution, providing flexibility and cost-effectiveness. </li><li>The pricing structure includes <strong>Audio calls</strong> at <strong>$0.99</strong> per 1,000 participant minutes, <strong>HD Video calls</strong> at <strong>$3.99</strong> per 1,000 participant minutes, and <strong>Full HD Video calls</strong> at <strong>$8.99</strong> per 1,000 participant minutes.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-100ms"><strong>100ms and Agora</strong></a><strong>.</strong> </blockquote><h2 id="5-jitsi-open-source-flexibility">5. Jitsi: Open-Source Flexibility</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Free-Video-Conferencing-Software-for-Web-Mobile-Jitsi-5.jpeg" class="kg-image" alt="10 Best 100ms.live Alternatives in 2026 (Tested & Compared)" loading="lazy" width="1920" height="967"/></figure><p>Jitsi is a collection of multiple open-source projects aimed at enabling video conferencing. As an open-source platform, it provides the flexibility to customize and utilize it according to your specific needs and requirements. The core components of Jitsi include Jitsi Meet, Jitsi Videobridge, Jibri, and Jigsai, each playing a distinct role within the Jitsi ecosystem.</p><h3 id="key-points-about-jitsi">Key points about Jitsi</h3>
<ul><li>Jitsi is an open-source and free platform that offers users the freedom to utilize it according to their preferences and requirements. </li><li>One of its prominent projects, Jitsi Meet, provides a range of features such as text sharing via Etherpad, room locking, text chatting (web only), raising hands, YouTube video access during calls, audio-only calls, and integrations with third-party apps. </li><li>However, Jitsi Meet alone does not include essential collaborative features like screen sharing, recording, or telephone dial-in to a conference. To access these features, additional setup of projects like Jibri and Jigsai is necessary, which can involve more time, resources, and coding efforts. </li><li>This additional setup may make Jitsi less suitable for users seeking a low-code option. While Jitsi ensures end-to-end encryption for video calls, it does not cover chat or polls, so users prioritizing robust security may need to consider additional measures. </li><li>It's worth noting that Jitsi can consume a significant amount of bandwidth due to the functioning of Jitsi Videobridge. </li><li>For large organizations requiring an SDK for frequent long video sessions with a substantial number of participants, Jitsi might not meet their specific needs and could feel less satisfactory in comparison.</li></ul><h3 id="jitsi-pricing">Jitsi pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/jitsi-alternative"><strong>Jitsi</strong></a> is available for <strong>free</strong>, which means you don't have to pay for any of its components. </li><li>However, it's worth mentioning that there is no dedicated technical support provided. If you encounter any issues or need assistance, you can seek help from the community of contributors who actively participate in the Jitsi project.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/100ms-vs-jitsi"><strong>100ms and Jitsi</strong></a><strong>.</strong></blockquote><h2 id="6-vonage-comprehensive-communication-sdk">6. Vonage: Comprehensive Communication SDK</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-API-Fully-Programmable-and-Customizable-Vonage-3.jpeg" class="kg-image" alt="10 Best 100ms.live Alternatives in 2026 (Tested & Compared)" loading="lazy" width="1920" height="967"/></figure><p>Even though it has been acquired by Vonage and renamed the "Vonage Video API," TokBox is still commonly referred to by its original name. TokBox's SDKs enable developers to establish reliable point-to-point communication, making it an ideal option for creating proofs of concept during hackathons or meeting investor deadlines. With Vonage's SDKs, developers have the necessary tools to build secure and seamless communication experiences within their applications.</p><h3 id="key-points-about-vonage">Key points about Vonage</h3>
<ul><li>The TokBox SDK empowers developers to create customized audio/video streams with various effects, filters, and AR/VR capabilities on mobile devices.</li><li>It offers support for diverse use cases, including 1:1 video calls, group video chat, and large-scale broadcast sessions. </li><li>Participants in a call can easily share screens, exchange messages through chat, and send data in real-time.</li><li>However, TokBox does pose some challenges. As the user base grows, scaling costs can become a concern since the price per stream per minute increases.</li><li>Additional features like recording and interactive broadcast come at an extra cost, which should be taken into account when considering the platform.</li><li>Additionally, once the number of connections reaches 2,000, the platform switches to CDN delivery, resulting in higher latency. </li><li>Real-time streaming at scale can be challenging, as accommodating over 3,000 viewers requires switching to HLS, which introduces significant latency.</li></ul><h3 id="vonage-pricing">Vonage pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/vonage-alternative"><strong>Vonage</strong></a> adopts a usage-based <a href="https://www.vonage.com/communications-apis/video/pricing/">pricing</a> model for their video sessions, with the cost determined by the number of participants and dynamically calculated every minute. </li><li>Their <strong>pricing plans</strong> begin at <strong>$9.99</strong> per month and include a generous free allowance of 2,000 minutes per month for all plans.</li><li>Once the free allowance is consumed, users are billed at a rate of <strong>$0.00395</strong> per participant per minute for the additional usage. </li><li>For those interested in <strong>recording</strong> services, Vonage offers them starting at <strong>$0.010</strong> per minute. </li><li>If <strong>HLS streaming</strong> is required, it is priced at <strong>$0.003</strong> per minute. These additional services come with their respective costs to enhance the video experience.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/vonage-vs-100ms"><strong>100ms and Vonage</strong></a><strong>.</strong></blockquote><h2 id="7-aws-chime-sdk-customizable-real-time-communication">7. AWS Chime SDK: Customizable Real-Time Communication</h2>
<p>The <a href="https://www.videosdk.live/amazon-chime-sdk-vs-videosdk" rel="noreferrer">Amazon Chime SDK</a> serves as the foundational technology of Amazon Chime, operating separately from its user interface and external shell. It provides the essential building blocks for integrating real-time audio and video communication into applications, enabling developers to create customized communication experiences tailored to their specific needs.</p><h3 id="key-points-about-aws-chime-sdk">Key points about AWS Chime SDK</h3>
<ul><li>The Amazon Chime SDK enables video meetings with a maximum of 25 participants (50 for mobile users), facilitating effective collaboration. </li><li>With the integration of simulcast technology, it ensures that video quality remains consistent across various devices and networks.</li><li>To prioritize security, the Amazon Chime SDK encrypts all calls, videos, and chats, providing a secure communication environment.</li><li>However, it is worth noting that certain features like polling, auto-sync with Google Calendar, and background blur effects are not available in the Amazon Chime SDK.</li><li>Compatibility issues have been reported in Linux environments, and participants using the Safari browser may also encounter challenges while using the SDK.</li><li>Customer support experiences may vary, as query resolution times are inconsistent and can depend on the specific support agent assigned to the case.</li></ul><h3 id="aws-chime-pricing">AWS Chime pricing</h3>
<ul><li>The <strong>free basic plan</strong> offered by <a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative"><strong>Amazon Chime</strong></a> allows users to engage in <strong>one-on-one audio/video calls</strong> and <strong>group chats</strong> without any <a href="https://aws.amazon.com/chime/pricing/">cost</a>.</li><li>For users who require additional features and functionalities, the <strong>Plus plan</strong> is available at a price of <strong>$2.50</strong> per month per user. This plan includes valuable additions such as <strong>screen sharing</strong>, <strong>remote desktop control</strong>, <strong>1 GB of message history</strong> per user, and <strong>Active Directory integration</strong>.</li><li>For more extensive collaboration needs, the <strong>Pro plan</strong> is available at a cost of <strong>$15</strong> per user per month. It encompasses all the features of the Plus plan and enables <strong>meetings</strong> with <strong>three</strong> or more participants, making it suitable for larger group discussions and presentations.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/amazon-chime-sdk-vs-100ms"><strong>100ms and AWS Chime</strong></a><strong>.</strong></blockquote><h2 id="8-enablex-customized-video-calling-experiences">8. EnableX: Customized Video Calling Experiences</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Call-API-Video-Chat-API-Voice-API-Video-Conferencing_enebleX-5.jpeg" class="kg-image" alt="10 Best 100ms.live Alternatives in 2026 (Tested & Compared)" loading="lazy" width="1920" height="967"/></figure><p>The EnableX SDK offers a wide range of capabilities, including video and audio calling, as well as collaborative features such as a whiteboard, screen sharing, annotation, recording, host control, and chat. With this SDK, you can easily integrate these functionalities into your application. It provides a video builder tool that allows you to create customized video-calling solutions that align with your application's requirements. You have the flexibility to personalize the live video streams with a custom user interface, choose appropriate hosting options, integrate billing functionality, and implement other essential features that cater to your specific needs.</p><h3 id="key-points-about-enablex">Key points about EnableX</h3>
<ul><li>EnableX offers a self-service portal that allows users to access reporting capabilities and live analytics. This portal enables users to track the quality of their communication experiences and facilitate online payments.</li><li>The EnableX SDK supports popular programming languages such as JavaScript, PHP, and Python. This broad language support makes it easier for developers to integrate the SDK into their applications.</li><li>With EnableX, users have the flexibility to stream live content directly from their app or website. They can also leverage popular platforms like YouTube and Facebook to reach a wider audience and extend the reach of their live content.</li><li>It's worth noting that the response time of EnableX's support team may take up to 72 hours. This extended wait time for assistance could be a potential drawback for users who require prompt and timely support.</li></ul><h3 id="enablex-pricing">EnableX pricing</h3>
<ul><li>EnableX offers <a href="https://www.enablex.io/cpaas/pricing/our-pricing"><strong>pricing</strong></a><strong> plans</strong> that start at <strong>$0.004</strong> per minute per participant for rooms with <strong>up to 50 people</strong>. If you require hosting larger meetings or events, you can contact their sales team for custom pricing options.</li><li><strong>Recording</strong> functionality is available at a rate of <strong>$0.010</strong> per minute per participant. This allows you to capture and save your video sessions for future reference.</li><li>If you need to <strong>transcode</strong> your video into a different format, EnableX provides this service at a rate of <strong>$0.010</strong> per minute. This enables you to convert your video content into a format that suits your specific requirements.</li><li><strong>Additional storage</strong> can be obtained at a cost of <strong>$0.05</strong> per gigabyte (GB) per month. This allows you to expand your storage capacity to accommodate your growing video content.</li><li>For those who require <a href="https://www.videosdk.live/developer-hub/rtmp/rtmp-protocol" rel="noreferrer"><strong>RTMP (Real-Time Messaging Protocol) </strong></a><strong>streaming</strong>, EnableX offers this feature at a rate of <strong>$0.10</strong> per minute. RTMP streaming enables you to deliver your video content in real-time to various platforms and devices.</li><li>Please note that the pricing may vary depending on your specific needs and requirements.</li></ul><h2 id="9-whereby-effective-small-scale-video-conferencing">9. Whereby: Effective Small-Scale Video Conferencing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Calling-API-for-Web-and-App-Developers-Whereby-5.jpeg" class="kg-image" alt="10 Best 100ms.live Alternatives in 2026 (Tested & Compared)" loading="lazy" width="1920" height="967"/></figure><p>Whereby is a user-friendly <a href="https://www.videosdk.live/audio-video-conferencing" rel="noreferrer">video conferencing platform</a> specifically designed for small to medium-sized meetings. It offers a straightforward and easy-to-use experience for users. However, it may not be the ideal choice for larger businesses or those in need of more advanced features that cater to their specific requirements.</p><h3 id="key-points-about-whereby">Key points about Whereby</h3>
<ul><li>Basic customization options are available for the video interface, but they are limited and do not support a fully custom experience.</li><li>Video calls can be embedded directly into websites, mobile apps, and web products, eliminating the need for external links or apps.</li><li>Whereby offers a seamless video conferencing experience, but it may lack advanced features compared to other tools.</li><li>The maximum capacity for meetings on Whereby is 50 participants.</li><li>Screen sharing for mobile users and customization options for the host interface may be limited.</li><li>Whereby does not have a virtual background feature, and some users have reported issues with the mobile app, which can impact the overall user experience.</li></ul><h3 id="whereby-pricing">Whereby pricing</h3>
<ul><li>Whereby <a href="https://whereby.com/information/pricing">pricing</a> starts at <strong>$6.99</strong> per month.</li><li>The <strong>basic plan</strong> includes <strong>up to 2,000 user minutes per month</strong>.</li><li><strong>Additional minutes</strong> are charged at <strong>$0.004</strong> per minute.</li><li><strong>Cloud recording</strong> and <strong>live streaming</strong> options are available at <strong>$0.01</strong> per minute.</li><li><strong>Email</strong> and <strong>chat support</strong> are provided <strong>free</strong> of charge.</li><li><strong>Paid</strong> support plans offer features like <strong>technical onboarding</strong> and <strong>customer success management</strong>.</li><li><strong>HIPAA</strong> compliance is also available as part of the <strong>paid</strong> support plans.</li></ul><h2 id="10-signalwire-video-integration-with-flexible-pricing">10. SignalWire: Video Integration with Flexible Pricing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Building-The-Software-Defined-Telecom-Network-SignalWire-4.jpeg" class="kg-image" alt="10 Best 100ms.live Alternatives in 2026 (Tested & Compared)" loading="lazy" width="1920" height="967"/></figure><p>SignalWire is a platform that leverages APIs to empower developers in seamlessly integrating live and on-demand video experiences into their applications. Its primary goal is to simplify the processes of video encoding, delivery, and renditions, ensuring a smooth and uninterrupted video streaming experience for users.</p><h3 id="overview-of-signalwire">Overview of SignalWire</h3>
<ul><li>SignalWire offers an SDK that enables developers to integrate real-time video and live streams into web, iOS, and Android applications. The SDK allows for video calls with up to 100 participants in a real-time WebRTC environment.</li><li>However, it is important to note that the SDK does not provide built-in support for managing disruptions or user publish-subscribe logic, which developers will need to implement separately.</li></ul><h3 id="signalwire-pricing">SignalWire pricing</h3>
<ul><li>SignalWire utilizes a <a href="https://signalwire.com/pricing/video">pricing</a> model based on per-minute usage. </li><li>For <strong>HD video calls</strong>, the pricing is <strong>$0.0060</strong> per minute, while for <strong>Full HD video calls</strong>, it is <strong>$0.012</strong> per minute. The actual cost may vary depending on the desired video quality for your application.</li><li>SignalWire also offers add-on features such as <strong>recording</strong>, which is available at a rate of <strong>$0.0045</strong> per minute. This allows you to capture and store video content for future use. </li><li>The platform also provides <strong>streaming</strong> capabilities priced at <strong>$0.10</strong> per minute, enabling you to broadcast your video content in real-time.</li></ul><h2 id="certainly">Why VideoSDK Stands Out</h2>
<p><a href="https://www.videosdk.live">VideoSDK</a> stands out as an SDK that prioritizes fast and seamless integration. With a low-code solution, developers can quickly build live video experiences in their applications, deploying custom video conferencing solutions in under 10 minutes. Unlike other SDKs, Video SDK offers a streamlined process with easy creation and embedding of live video experiences, enabling real-time connections, communication, and collaboration.</p><h2 id="still-skeptical">Still skeptical?</h2>
<p>Immerse yourself in the possibilities of VideoSDK by taking a deep dive into its comprehensive <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart guide</a>. Explore the potential with the powerful <a href="https://docs.videosdk.live/code-sample">sample app</a> exclusively designed for Video SDK. Sign up now and start your integration journey, and don't miss out on the chance to claim your <a href="https://www.videosdk.live/pricing">complimentary $20 free credit</a> to unlock the full potential of Video SDK. Our dedicated team is always ready to assist you whenever you need support. Get ready to showcase your creativity and build remarkable experiences with Video SDK. Let the world see what you can create!</p><h2 id="faqs">FAQs</h2>
<ol>
<li>
<p><strong>What is 100ms Live and what services does it offer?</strong><br/>
100ms Live is a real-time video and audio communication platform offering services that include seamless integration of live streaming, video conferencing, and interactive applications. It specializes in providing low-latency solutions for engaging and dynamic online experiences.</p>
</li>
<li>
<p><strong>What does 100ms Live do?</strong><br/>
100ms Live facilitates real-time video and audio communication, ensuring low-latency interactions. It enables seamless integration of live streaming, video conferencing, and interactive applications, catering to diverse needs such as virtual events, online education, and collaborative experiences with exceptional responsiveness.</p>
</li>
<li>
<p><strong>What are the top alternatives and competitors of 100ms Live?</strong><br/>
Top alternatives and competitors to 100ms Live include <a href="https://www.videosdk.live/100ms-vs-videosdk">VideoSDK</a>, <a href="https://www.videosdk.live/alternative/agora-vs-videosdk">Agora</a>, <a href="https://www.videosdk.live/alternative/twilio-vs-videosdk">Twilio</a>, <a href="https://www.videosdk.live/alternative/zoom-vs-videosdk">Zoom Video Communications</a>, and <a href="https://www.videosdk.live/daily-vs-videosdk">Daily.co</a>. Each offers real-time video and audio solutions, with varying features and pricing to cater to different user needs.</p>
</li>
<li>
<p><strong>Is the integration of 100ms Live SDK user-friendly for developers?</strong><br/>
Many developers find the 100ms Live SDK integration process less than user-friendly; it presents challenges and complexities that require additional effort and resources to implement successfully.</p>
</li>
</ol>
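The usage-based plans compared above (Vonage, Whereby, EnableX, SignalWire) share the same billing shape: a base fee, a free allowance, and a per-participant-per-minute overage rate. As a closing worked example, a back-of-envelope estimator can be sketched as follows; the defaults use Vonage's published figures from the comparison above purely for illustration, not as a quote.

```kotlin
// Back-of-envelope estimator for usage-based video pricing: base fee plus a
// per-participant-per-minute overage beyond a free allowance. Defaults use
// Vonage's published rates quoted above; they are illustrative only.
fun estimateMonthlyCostUsd(
    participantMinutes: Long,               // total participant-minutes used this month
    freeAllowanceMinutes: Long = 2_000,     // minutes included in the plan
    overageRatePerMinuteUsd: Double = 0.00395,
    basePlanUsd: Double = 9.99
): Double {
    val billable = (participantMinutes - freeAllowanceMinutes).coerceAtLeast(0L)
    return basePlanUsd + billable * overageRatePerMinuteUsd
}

fun main() {
    // 10 participants x 60 minutes x 20 sessions = 12,000 participant-minutes
    println(estimateMonthlyCostUsd(12_000))   // roughly $49.49
}
```

Plugging your own expected participant-minutes into a model like this makes the per-provider comparisons above much more concrete than the raw per-minute rates.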
]]></content:encoded></item><item><title><![CDATA[10 Best Voice Call APIs & SDKs]]></title><description><![CDATA[Discover the top ten audio calling APIs and SDKs that ensure crystal-clear conversations for your applications, offering seamless voice communication.]]></description><link>https://www.videosdk.live/blog/10-best-audio-calling-api</link><guid isPermaLink="false">64ed90d99eadee0b8b9e82a9</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 17 Jan 2025 11:32:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/08/10-Audio--Calling.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="the-rise-of-voice-call-applications-using-webrtc">The Rise of Voice Call Applications Using WebRTC</h3><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/10-Audio--Calling.jpg" alt="10 Best Voice Call APIs & SDKs"/><p>In today's digital age, everything around us is changing to make the most of new opportunities. Thanks to rapid technological advancements, voice call apps using WebRTC have become immensely popular in recent years.</p><p>These <a href="https://www.videosdk.live/live-audio-rooms" rel="noreferrer">voice APIs</a> have emerged as a cornerstone in the realm of voice chat apps, enabling seamless voice communication and serving as a catalyst for substantial business growth and success.</p><p>This article explores the top <strong>Voice Call and Voice Chat SDKs</strong> on the market that have made a significant impact across different areas. Their influence has motivated developers to try their hand at building their own voice applications.</p><p>Let's embark on a journey to explore these APIs and gain a fundamental understanding of their significance in the landscape of voice communication.</p><h2 id="key-features-to-look-for-in-an-audio-call-api">Key Features to Look for in an Audio Call API</h2>
<p>Voice calling APIs serve as the essential bridges that empower companies to seamlessly incorporate voice call functionality into their applications, facilitating connectivity and enabling business transactions to thrive.</p><p>In a sea of voice conferencing system APIs available in the market, it's imperative to navigate based on specific criteria to pinpoint the optimal voice API for your needs.</p><p>Let's delve into four crucial criteria that should guide your selection process:</p><ol><li><strong>Variety in API Features:</strong> Seek out an API that boasts a diverse range of features, enhancing the capabilities and distinctiveness of your application.</li><li><strong>Popularity and Market Demand:</strong> Gauge the popularity and demand for the API in the market. This assessment helps you judge the API's value and relevance in the broader landscape.</li><li><strong>Cost and Investment:</strong> The financial aspect plays a pivotal role in any development endeavor. It's essential to thoroughly evaluate the pricing and investment required for the chosen APIs, as it significantly shapes your development journey.</li><li><strong>Compatibility and Ease of Integration:</strong> Prioritize APIs that align seamlessly with your existing infrastructure. Ensuring compatibility and ease of integration into your pre-existing or ongoing projects is paramount.</li></ol><p>Discover the top 10 voice call APIs to revolutionize your VoIP and programmable voice apps. These APIs empower developers to build advanced communication solutions with ease, from call routing to real-time analytics. Enhance your app's capabilities with industry-leading APIs for an unparalleled user experience.</p><h2 id="top-10-audio-call-apis-sdks-in-2024">Top 10 Audio Call APIs &amp; SDKs in 2024</h2>
<p>The <strong>10 best voice-calling APIs and SDKs</strong> are Video SDK, Twilio, MirrorFly, Agora, Sendbird, Plivo, Sinch, EnableX, Vonage, and MessageBird.</p><h2 id="1-videosdk-a-synthesis-of-speed-and-flexibility">1. VideoSDK: A Synthesis of Speed and Flexibility</h2>
<p>Step into the realm of <a href="https://www.videosdk.live/">VideoSDK</a>, an <strong>extraordinary</strong> integration solution that has garnered a reputation for its remarkable speed in seamlessly infusing audio calling capabilities into applications, all accomplished within an astonishing <strong>10-minute</strong> span. This platform represents a paradigm shift in <strong>efficiency</strong>, offering a harmonious blend that caters to the needs of both end-users and developers alike.</p><p>A standout feature of Video SDK is its <strong>inherent versatility</strong>, a quality that resonates through its <strong>cross-platform</strong> compatibility. It effortlessly spans an impressive array of programming languages and frameworks, encompassing the likes of <strong>JavaScript</strong>, <strong>React JS</strong>, <strong>React Native</strong>, <strong>Android</strong>, <strong>Flutter</strong>, and <strong>iOS</strong>.</p><h3 id="features-offered-by-videosdk">Features offered by VideoSDK</h3>
<ul><li>High availability with zero maintenance</li><li><strong>Unlimited parallel room</strong> in real-time</li><li>Support up to <strong>300 attendees,</strong> including 50 presenters</li><li><strong>Global Infrastructure</strong> for every use case.</li><li>100% Fully <strong>customizable UI</strong></li><li>Accelerate your time-to-market with our <strong>code samples</strong></li><li>Customize <strong>template layouts,</strong> in any orientation</li><li>PubSub feature to <strong>build engaging features.</strong></li></ul>
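On the server side, a VideoSDK meeting room is typically provisioned over REST before participants join. The sketch below assumes the `POST https://api.videosdk.live/v2/rooms` endpoint and `Authorization` token header described in the public docs at the time of writing; verify both against the current API reference before relying on them.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Create a room server-side; returns the raw JSON response body.
// Endpoint and auth header are assumptions from the VideoSDK REST docs.
fun createRoom(token: String): String {
    val conn = URL("https://api.videosdk.live/v2/rooms").openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.setRequestProperty("Authorization", token)
    conn.setRequestProperty("Content-Type", "application/json")
    conn.doOutput = true
    conn.outputStream.use { it.write("{}".toByteArray()) }
    return conn.inputStream.bufferedReader().use { it.readText() }
}

// Naive extraction of the "roomId" field from the response JSON; a real
// application should use a proper JSON parser instead of a regex.
fun extractRoomId(json: String): String? =
    Regex("\"roomId\"\\s*:\\s*\"([^\"]+)\"").find(json)?.groupValues?.get(1)
```

The returned room id is what clients use to join the call; keep the auth token itself on the server rather than shipping it in the app.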
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</link></meta></meta></head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
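The four selection criteria from the start of this article (feature variety, popularity, cost, ease of integration) can be applied mechanically when shortlisting the providers below. One way to sketch that is a weighted scorecard; the provider names, weights, and scores here are entirely illustrative.

```kotlin
// Weighted scorecard over the four selection criteria discussed above.
// Scores are 1-5 per criterion; weights should sum to 1.0.
data class ApiCandidate(
    val name: String,
    val features: Int,
    val popularity: Int,
    val cost: Int,        // higher = more affordable
    val integration: Int  // higher = easier to integrate
)

fun score(c: ApiCandidate, w: DoubleArray = doubleArrayOf(0.3, 0.2, 0.25, 0.25)): Double =
    c.features * w[0] + c.popularity * w[1] + c.cost * w[2] + c.integration * w[3]

fun main() {
    val shortlist = listOf(
        ApiCandidate("Provider A", features = 5, popularity = 4, cost = 4, integration = 5),
        ApiCandidate("Provider B", features = 4, popularity = 5, cost = 3, integration = 3)
    )
    // Rank the shortlist, best weighted score first
    shortlist.sortedByDescending { score(it) }
        .forEach { println("${it.name}: ${"%.2f".format(score(it))}") }
}
```

Adjusting the weights to your own priorities (for example, weighting cost heavily for a startup) keeps the comparison honest rather than feature-list driven.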
<h2 id="2-twilio-enhancing-phone-services-with-programmable-voice-api">2. Twilio: Enhancing Phone Services with Programmable Voice API</h2>
<ul><li>Twilio is a widely known programmable voice API that aims to enhance phone services by enabling users to engage in phone calls and manage text messaging.</li><li>While it aspires to facilitate personalized interactions and engage users on a broad scale, there are certain considerations worth noting.</li><li>The REST API does offer voice chat capabilities, yet its implementation for routine purposes might be accompanied by certain limitations.</li><li>Additionally, while it provides access to carrier networks and tools to address communication challenges, there could be certain complexities associated with solving these issues.</li></ul><blockquote>
<p>See how Twilio compares with its <a href="https://www.videosdk.live/blog/twilio-video-competitors">competitors</a></p>
</blockquote>
<h2 id="3-mirrorfly-white-label-voice-api-for-unlimited-communication">3. MirrorFly: White-Label Voice API for Unlimited Communication</h2>
<ul><li>There's no need for different <a href="https://www.videosdk.live/developer-hub/sip/what-is-session-initiation-protocol-sip" rel="noreferrer">SIP</a> and voice call APIs for each feature; MirrorFly covers everything with its single platform. As a leading voice and video API provider, MirrorFly offers a customized self-hosted solution without limitations. From unlimited audio conferencing to WebRTC video capabilities, this API platform provides complete flexibility to developers.</li><li>MirrorFly's voice SDK platform is best known for its global infrastructure and flexible hosting options. Users can choose between on-cloud/on-premise hosting and deployment on their own servers.</li><li>Its key features include unlimited one-on-one and group calls, a 100% customizable voice API, SIP/VoIP calls, and more.</li></ul><h2 id="4-agora-real-time-voice-calls-with-cross-platform-support">4. Agora: Real-Time Voice Calls with Cross-Platform Support</h2>
<ul><li>Although Agora's voice call API offers real-time voice interactions and the convenience of cross-platform support, some users have reported occasional <strong>connectivity issues</strong> and a <strong>learning curve</strong> when integrating the API into their applications. </li><li>It's important to thoroughly test the API within your specific use case to ensure its performance meets your <strong>expectations</strong> and <strong>requirements</strong>.</li></ul><blockquote>
<p>See how Agora compares with its <a href="https://www.videosdk.live/blog/agora-competitors">competitors</a></p>
</blockquote>
<h2 id="5-sendbird-proven-infrastructure-with-integration-challenges">5. Sendbird: Proven Infrastructure with Integration Challenges</h2>
<ul><li>While SendBird offers a proven managed infrastructure for voice communication, some users have reported occasional challenges with the integration process and a learning curve when using the APIs.</li><li>Additionally, the pricing structure might not be the most budget-friendly option for smaller businesses or startups.</li><li>It's recommended to thoroughly assess your project's requirements and budget before committing to SendBird's voice API.</li></ul><h2 id="6-plivo-comprehensive-voice-and-text-integration">6. Plivo: Comprehensive Voice and Text Integration</h2>
<ul><li>While Plivo does provide a wide range of carrier network connections and the ability to embed voice calling and text messaging into apps, some users have encountered <strong>challenges</strong> in terms of the <strong>complexity</strong> of the API documentation and <strong>integration process</strong>. </li><li>Additionally, there have been reports of occasional <strong>latency issues</strong> during voice calls. </li><li>It's important to carefully review the documentation and consider the potential learning curve before opting for Plivo's voice-calling APIs. </li></ul><h2 id="7-sinch-communication-apis-with-quality-concerns">7. Sinch: Communication APIs with Quality Concerns</h2>
<ul><li>While Sinch does offer a range of communication APIs, including voice calls, some users have expressed concerns about the <strong>quality</strong> and <strong>reliability</strong> of the voice calls when using their APIs.</li><li>Reports of <strong>call drops</strong> and <strong>audio quality</strong> issues have been noted in certain cases.</li><li>It's important to thoroughly test Sinch's voice APIs for your specific use case to ensure that the <strong>call quality</strong> meets your standards and that there are no unexpected <strong>interruptions</strong> during critical communication moments.</li></ul><h2 id="8-enablex-rich-voice-calls-with-user-experience">8. EnableX: Rich Voice Calls with User Experience</h2>
<ul><li>While EnableX does offer a range of voice calling features through its APIs, some users have reported challenges with the overall user experience and the quality of voice calls.</li><li><strong>Delays</strong> in call setup, <strong>audio disruptions</strong>, and occasional <strong>call drops</strong> have been reported by certain users, impacting the seamless communication experience that is essential for audio-calling applications.</li><li>It's recommended to thoroughly evaluate EnableX's voice calling capabilities and conduct extensive testing to ensure that the platform meets your <strong>performance</strong> and <strong>reliability</strong> expectations.</li></ul><h2 id="9-vonage">9. Vonage</h2>
<ul><li>While Vonage's voice chat API does bring innovative features to the table, some users have reported <strong>challenges</strong> with the <strong>integration</strong> and <strong>customization</strong> process.</li><li>The flexibility touted by the API might come with a learning curve, and developers might find themselves needing to invest <strong>additional time</strong> to fully adapt the API to their specific business needs.</li><li>Furthermore, the <strong>voice control</strong> and <strong>voice bot features</strong>, while promising, may <strong>require careful implementation</strong> and tuning to achieve the desired level of performance and user satisfaction.</li><li>It's advised to thoroughly assess the API's documentation and support resources to ensure smooth integration and optimal utilization of its advanced voice features.</li></ul><blockquote>
<p>See how Vonage compares with its <a href="https://www.videosdk.live/blog/vonage-competitors">competitors</a></p>
</blockquote>
<h2 id="10-messagebird">10. MessageBird</h2>
<ul><li>While MessageBird offers a range of communication channels and features, some users have highlighted certain <strong>limitations</strong> in the voice API's <strong>performance</strong> and <strong>reliability</strong>. </li><li>Reports suggest that occasional <strong>call drops</strong> or <strong>delays</strong> might occur, which could impact the quality of real-time voice communication. </li><li>Additionally, the <strong>pricing structure</strong> for MessageBird's voice API might <strong>not</strong> be the most <strong>cost-effective</strong> solution for businesses with high call volumes, as the <strong>costs can escalate</strong> with increased usage. </li><li>Prospective users should carefully evaluate the API's performance and pricing against their specific requirements to determine whether it aligns well with their communication needs.</li></ul><h2 id="elevate-your-audio-calling-app-with-the-ideal-audio-calling-api-integration-today">Elevate Your Audio Calling App with the Ideal Audio Calling API Integration Today</h2>
<p>Embracing the power of voice communication, numerous platforms have emerged, catering to the global demand for interactive conversations. This comprehensive article dives into the realm of voice chat APIs, presenting an array of 10 prominent options that expedite the creation of voice call applications with efficiency and minimal resources.</p><p>Among these, <a href="https://www.videosdk.live/signup/">VideoSDK</a>'s voice-call API stands as a standout recommendation, celebrated for its exceptional attributes and robust support. Notably, Video SDK's offering boasts the remarkable capability of <strong>ultra-low latency</strong>, empowering developers to craft applications that seamlessly accommodate <strong>up to 10,000</strong> users per audio call.</p>]]></content:encoded></item><item><title><![CDATA[How to Build 1on1 Video Chat App in Kotlin for Android?]]></title><description><![CDATA[Learn how to build a basic 1on1 video call app in Kotlin Android. Create Video Calling right now by following this easy instruction!]]></description><link>https://www.videosdk.live/blog/1-to-1-video-chat-app-on-android-using-videosdk</link><guid isPermaLink="false">63c0e689bd44f53bde5d069e</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 17 Jan 2025 11:18:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2023/01/Android-meme--2-.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction-to-android-development">Introduction to Android Development</h2>
<img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Android-meme--2-.jpg" alt="How to Build 1on1 Video Chat App in Kotlin for Android?"/><p>Remote communication has become a pivotal part of our interactions post-pandemic, and without a doubt, it will continue to play a significant role in our future. Today's mobile applications frequently include functionalities for 1on1 video calls and video chats, but creating these features is extremely complex and time-consuming. This is where VideoSDK steps in. VideoSDK provides developers with simple-to-use, highly customizable, and widely compatible APIs to embed real-time video, voice, and interactive functionalities into their applications. With VideoSDK, developers can easily integrate 1on1 video call capabilities, even on platforms like Android, without the need to develop the technology or the underlying infrastructure for real-time engagement themselves. This makes adding video calling on Android and other platforms straightforward and efficient.</p><h2 id="goals">Goals</h2><p>By the end of this blog, you will be able to:</p><ul><li>Understand what VideoSDK is.</li><li>Create a Video SDK account and generate a token.</li><li>Create a 1-to-1 video chat app using Android and Video SDK.</li></ul><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/meeting_screen-1--1-.jpg" class="kg-image" alt="How to Build 1on1 Video Chat App in Kotlin for Android?" loading="lazy" width="270" height="567"/></figure><h2 id="what-is-videosdk">What is VideoSDK?</h2><p>VideoSDK is a versatile platform that empowers developers across the USA &amp; India to create rich in-app experiences by embedding real-time video, voice, recording, live streaming, and messaging functionalities. 
Available in JavaScript, ReactJS, React-Native, iOS, <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/getting-started">Android</a>, and Flutter, VideoSDK allows for seamless integration into diverse application frameworks. Additionally, VideoSDK offers a pre-built SDK designed for rapid deployment, enabling developers to integrate sophisticated real-time communication features into their applications in just 10 minutes.</p><p><strong>✨ Awesome, right?</strong></p><p>Let's create a 1-to-1 video call in Kotlin for Android using VideoSDK. But first, we need to create a VideoSDK account and generate a token.</p><h3 id="set-up-your-videosdk-account-and-generate-a-token">Set up your VideoSDK account and generate a token.</h3><ul><li>A Video SDK Token (<a href="https://app.videosdk.live/api-keys?utm_source=google&amp;utm_medium=blog&amp;utm_campaign=1-to-1+video+chat+app+on+android+kotlin">Dashboard &gt; Api-Key</a>) (<a href="https://youtu.be/RLOA0U62tOc">Video Tutorial</a>)</li></ul>
<!--kg-card-begin: html-->
<iframe type="text/html" frameborder="0" width="560" height="315" src="https://www.youtube.com/embed/RLOA0U62tOc" allowfullscreen=""></iframe>
<!--kg-card-end: html-->
<h3 id="%EF%B8%8F-prerequisites-setup-of-the-project">Prerequisites &amp; Setup of the project</h3>
<p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>Java Development Kit.</li><li>Android <a href="https://developer.android.com/studio/">Studio</a> 3.0 or later.</li><li>Android SDK API Level 21 or higher.</li><li>A mobile device that runs Android 5.0 or later.</li></ul><h3 id="create-a-new-project">Create a new project</h3>
<ul>
<li>Let’s start by creating a new project. In Android Studio, create a Phone and Tablet Android project with an Empty Activity.</li>
</ul>
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/android_new_project.png" class="kg-image" alt="How to Build 1on1 Video Chat App in Kotlin for Android?" loading="lazy" width="899" height="652"/><figcaption><span style="white-space: pre-wrap;">Create a new Android project</span></figcaption></figure><ul>
<li>Next, give the project a name. We will name it OneToOneDemo.</li>
</ul>
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/name_project.png" class="kg-image" alt="How to Build 1on1 Video Chat App in Kotlin for Android?" loading="lazy" width="899" height="652"/><figcaption><span style="white-space: pre-wrap;">Setting name for Android project</span></figcaption></figure><h3 id="integrating-videosdk-into-your-android-app">Integrating VideoSDK into Your Android App</h3>
<ul>
<li><strong>Add the repository to the project's <code>settings.gradle</code> file.</strong></li>
</ul>
<pre><code class="language-groovy">dependencyResolutionManagement {
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}</code></pre><ul>
<li><strong>Add the following dependency to your app's <code>build.gradle</code>.</strong></li>
</ul>
<pre><code class="language-groovy">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.13'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // other app dependencies
  }</code></pre><blockquote>If your project has set <code>android.useAndroidX=true</code>, then set <code>android.enableJetifier=true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflict.</blockquote><ul>
<li><strong>Add permissions to your project</strong></li>
</ul>
<p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-xml">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;</code></pre><h2 id="build-1on1-video-call-app-on-android">Build 1on1 Video Call App on Android</h2>
<h3 id="step-1-android-video-chat-sdk-structure-of-project">STEP 1: Android Video Chat SDK Structure of Project</h3><p>We will create two screens. The first is the <code>Joining screen</code>, which allows the user to create/join a meeting; the other is the <code>Meeting screen</code>, which shows participants in a WhatsApp-like view.</p><p>Our project structure will look like this:</p><pre><code class="language-text">   app
   ├── java
   │    ├── packagename
   │         ├── JoinActivity
   │         ├── MeetingActivity
   ├── res
   │    ├── layout
   │    │    ├── activity_join.xml
   │    │    ├── activity_meeting.xml</code></pre><blockquote>You have to set <code>JoinActivity</code> as Launcher activity.</blockquote><ul><li><strong>Creating Joining Screen</strong></li></ul><p>Create a new Activity named <code>JoinActivity</code></p><h3 id="step-2android-video-chat-sdk-creating-ui-for-joining-screen">STEP 2:Android Video Chat SDK Creating UI for Joining Screen</h3>
<p>The Joining screen will include:</p><ol><li><strong>Create Button</strong> - This button will create a new meeting for you.</li><li><strong>TextField for Meeting ID</strong> - This text field will contain the meeting ID you want to join.</li><li><strong>Join Button</strong> - This button will join the meeting with the <code>meetingId</code> you provided.</li></ol><p>In the <code>/app/res/layout/activity_join.xml</code> file, replace the content with the following.</p><pre><code class="language-xml">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center"
    android:orientation="vertical"
    tools:context=".JoinActivity"&gt;

    &lt;com.google.android.material.appbar.MaterialToolbar
        android:id="@+id/material_toolbar"
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        app:contentInsetStart="0dp"
        android:background="?attr/colorPrimary"
        app:titleTextColor="@color/white" /&gt;

    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:gravity="center"
        android:orientation="vertical"&gt;


        &lt;Button
            android:id="@+id/btnCreateMeeting"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginBottom="16dp"
            android:text="Create Meeting" /&gt;

        &lt;TextView
            style="@style/TextAppearance.AppCompat.Headline"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="OR" /&gt;

        &lt;com.google.android.material.textfield.TextInputLayout
            style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginVertical="16dp"
            android:hint="Enter Meeting ID"&gt;

            &lt;EditText
                android:id="@+id/etMeetingId"
                android:layout_width="250dp"
                android:layout_height="wrap_content" /&gt;
        &lt;/com.google.android.material.textfield.TextInputLayout&gt;

        &lt;Button
            android:id="@+id/btnJoinMeeting"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Join Meeting" /&gt;
    &lt;/LinearLayout&gt;

&lt;/LinearLayout&gt;</code></pre><h3 id="step-3-android-video-chat-sdk-integration-of-create-meeting-api">STEP 3: Android Video Chat SDK Integration of Create Meeting API</h3>
<ol>
<li>Create a field <code>sampleToken</code> in <code>JoinActivity</code>, which will hold the token generated from the <a href="https://app.videosdk.live/api-keys">VideoSDK dashboard</a>. This token will be used in the VideoSDK config as well as for generating a meetingId.</li>
</ol>
<pre><code class="language-kotlin">class JoinActivity : AppCompatActivity() {

  //Replace with the token you generated from the Video SDK Dashboard
  private var sampleToken = ""

  override fun onCreate(savedInstanceState: Bundle?) {
    //...
  }
}</code></pre><ol start="2">
<li>On the <strong>Join Button</strong> <code>onClick</code> event, we will navigate to <code>MeetingActivity</code> with the token and meetingId.</li>
</ol>
<pre><code class="language-kotlin">class JoinActivity : AppCompatActivity() {

   //Replace with the token you generated from the VideoSDK Dashboard
   private var sampleToken = "" 

   override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_join)

        val btnCreate = findViewById&lt;Button&gt;(R.id.btnCreateMeeting)
        val btnJoin = findViewById&lt;Button&gt;(R.id.btnJoinMeeting)
        val etMeetingId = findViewById&lt;EditText&gt;(R.id.etMeetingId)

		//set title
        val toolbar = findViewById&lt;Toolbar&gt;(R.id.material_toolbar)
        toolbar.title = "OneToOneDemo"
        setSupportActionBar(toolbar)
        
        btnCreate.setOnClickListener { v: View? -&gt;
            // we will explore this method in the next step
            createMeeting(sampleToken)
        }
        btnJoin.setOnClickListener { v: View? -&gt;
            val intent = Intent(this@JoinActivity, MeetingActivity::class.java)
            intent.putExtra("token", sampleToken)
            intent.putExtra("meetingId", etMeetingId.text.toString())
            startActivity(intent)
        }
    }

    private fun createMeeting(token: String) {
    }
}</code></pre><ol start="3">
<li>For the <strong>Create Button</strong>, in the <code>createMeeting</code> method we will generate a meetingId by calling the VideoSDK rooms API and then navigate to <code>MeetingActivity</code> with the token and the generated meetingId.</li>
</ol>
<pre><code class="language-kotlin">class JoinActivity : AppCompatActivity() {
  //...onCreate
 private fun createMeeting(token: String) {
    // we will make an API call to VideoSDK Server to get a roomId
    AndroidNetworking.post("https://api.videosdk.live/v2/rooms")
      .addHeaders("Authorization", token) //we will pass the token in the Headers
      .build()
      .getAsJSONObject(object : JSONObjectRequestListener {
          override fun onResponse(response: JSONObject) {
              try {
                  // response will contain `roomId`
                  val meetingId = response.getString("roomId")

                  // starting the MeetingActivity with received roomId and our sampleToken
                  val intent = Intent(this@JoinActivity, MeetingActivity::class.java)
                  intent.putExtra("token", sampleToken)
                  intent.putExtra("meetingId", meetingId)
                  startActivity(intent)
              } catch (e: JSONException) {
                  e.printStackTrace()
              }
          }

          override fun onError(anError: ANError) {
              anError.printStackTrace()
                    Toast.makeText(
                        this@JoinActivity, anError.message,
                        Toast.LENGTH_SHORT
                    ).show()
          }
      })
  }
}</code></pre><ol start="4">
<li>Our app is built entirely around audio and video communication, so we need to request the <code>RECORD_AUDIO</code> and <code>CAMERA</code> permissions at runtime. We will implement the permission logic in <code>JoinActivity</code>.</li>
</ol>
<pre><code class="language-kotlin">class JoinActivity : AppCompatActivity() {
    companion object {
        private const val PERMISSION_REQ_ID = 22
        private val REQUESTED_PERMISSIONS = arrayOf(
            Manifest.permission.RECORD_AUDIO,
            Manifest.permission.CAMERA
        )
    }

  private fun checkSelfPermission(permission: String, requestCode: Int): Boolean {
        if (ContextCompat.checkSelfPermission(this, permission) !=
            PackageManager.PERMISSION_GRANTED)
        {
            ActivityCompat.requestPermissions(this, REQUESTED_PERMISSIONS, requestCode)
            return false
        }
        return true
    }

  override fun onCreate(savedInstanceState: Bundle?) {
    //... button listeners
    checkSelfPermission(REQUESTED_PERMISSIONS[0], PERMISSION_REQ_ID)
    checkSelfPermission(REQUESTED_PERMISSIONS[1], PERMISSION_REQ_ID)
  }
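
  // Added sketch (not part of the original sample): handle the user's
  // response to the permission dialog. If either permission is denied,
  // the call screen cannot work, so we surface a simple message.
  override fun onRequestPermissionsResult(
      requestCode: Int,
      permissions: Array&lt;out String&gt;,
      grantResults: IntArray
  ) {
      super.onRequestPermissionsResult(requestCode, permissions, grantResults)
      if (requestCode == PERMISSION_REQ_ID) {
          if (grantResults.any { it != PackageManager.PERMISSION_GRANTED }) {
              Toast.makeText(this, "Camera and microphone permissions are required", Toast.LENGTH_LONG).show()
          }
      }
  }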
}</code></pre><blockquote>You will get <code>Unresolved reference: MeetingActivity</code> error, but don't worry. It will be solved automatically once you create <code>MeetingActivity</code>.</blockquote><ol start="5">
<li>The Joining screen is now complete, and it is time to create the participant's view in the Meeting screen. The Joining screen will look like this:</li>
</ol>
<figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/joining_screen.jpg" class="kg-image" alt="How to Build 1on1 Video Chat App in Kotlin for Android?" loading="lazy" width="297" height="600"/></figure><p><strong>Creating Meeting Screen</strong><br/>Create a new Activity named <code>MeetingActivity</code>.</p><h3 id="step-4-video-chat-android-sdk-creating-the-ui-for-meeting-screen">STEP 4: Video Chat Android SDK Creating the UI for Meeting Screen</h3>
<p>In the <code>/app/res/layout/activity_meeting.xml</code> file, replace the content with the following.</p><pre><code class="language-xml">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/mainLayout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:animateLayoutChanges="true"
    android:gravity="center"
    android:orientation="vertical"
    tools:context=".MeetingActivity"&gt;

    &lt;com.google.android.material.appbar.MaterialToolbar
        android:id="@+id/material_toolbar"
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        android:background="@color/black"
        app:contentInsetStart="0dp"
        app:titleTextColor="@color/white"&gt;

        &lt;LinearLayout
            android:id="@+id/meetingLayout"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginLeft="14dp"
            android:layout_marginTop="10dp"
            android:orientation="horizontal"&gt;

            &lt;RelativeLayout
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"&gt;

                &lt;TextView
                    android:id="@+id/txtMeetingId"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:layout_gravity="center"
                    android:fontFamily="sans-serif-medium"
                    android:textColor="@color/white"
                    android:textFontWeight="600"
                    android:textSize="16sp" /&gt;

                &lt;ImageButton
                    android:id="@+id/btnCopyContent"
                    android:layout_width="22dp"
                    android:layout_height="22dp"
                    android:layout_marginLeft="7dp"
                    android:layout_toRightOf="@+id/txtMeetingId"
                    android:backgroundTint="@color/black"
                    android:src="@drawable/ic_outline_content_copy_24" /&gt;

            &lt;/RelativeLayout&gt;

        &lt;/LinearLayout&gt;

        &lt;LinearLayout
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="end"
            android:layout_marginEnd="10dp"&gt;

            &lt;ImageButton
                android:id="@+id/btnSwitchCameraMode"
                android:layout_width="wrap_content"
                android:layout_height="match_parent"
                android:background="@color/black"
                android:contentDescription="Switch Camera mode"
                android:src="@drawable/ic_baseline_flip_camera_android_24" /&gt;

        &lt;/LinearLayout&gt;

    &lt;/com.google.android.material.appbar.MaterialToolbar&gt;

    &lt;FrameLayout
        android:id="@+id/participants_frameLayout"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:background="@color/black"&gt;

        &lt;androidx.cardview.widget.CardView
            android:id="@+id/ParticipantCard"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:layout_marginLeft="12dp"
            android:layout_marginTop="3dp"
            android:layout_marginRight="12dp"
            android:layout_marginBottom="3dp"
            android:backgroundTint="#2B3034"
            android:visibility="gone"
            app:cardCornerRadius="8dp"
            app:strokeColor="#2B3034"&gt;

            &lt;ImageView
                android:layout_width="150dp"
                android:layout_height="150dp"
                android:layout_gravity="center"
                android:src="@drawable/ic_baseline_person_24" /&gt;

            &lt;live.videosdk.rtc.android.VideoView
                android:id="@+id/participantView"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                android:visibility="gone" /&gt;

        &lt;/androidx.cardview.widget.CardView&gt;

        &lt;FrameLayout
            android:layout_width="match_parent"
            android:layout_height="match_parent"&gt;

        &lt;/FrameLayout&gt;

        &lt;androidx.cardview.widget.CardView
            android:id="@+id/LocalCard"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:layout_marginLeft="12dp"
            android:layout_marginTop="3dp"
            android:layout_marginRight="12dp"
            android:layout_marginBottom="3dp"
            android:backgroundTint="#1A1C22"
            app:cardCornerRadius="8dp"
            app:strokeColor="#1A1C22"&gt;

            &lt;ImageView
                android:id="@+id/localParticipant_img"
                android:layout_width="150dp"
                android:layout_height="150dp"
                android:layout_gravity="center"
                android:src="@drawable/ic_baseline_person_24" /&gt;

            &lt;live.videosdk.rtc.android.VideoView
                android:id="@+id/localView"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                android:visibility="gone" /&gt;

        &lt;/androidx.cardview.widget.CardView&gt;

    &lt;/FrameLayout&gt;
    
    &lt;!-- add bottombar here--&gt;

&lt;/LinearLayout&gt;</code></pre><p>Now, in place of the <code>&lt;!-- add bottombar here--&gt;</code> comment above, add the following bottom bar containing the leave, mic, and webcam controls:</p><pre><code class="language-xml">&lt;com.google.android.material.bottomappbar.BottomAppBar
        android:id="@+id/bottomAppbar"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:animateLayoutChanges="true"
        android:backgroundTint="@color/black"
        android:gravity="center_horizontal"
        android:paddingVertical="5dp"
        tools:ignore="BottomAppBar"&gt;

        &lt;RelativeLayout
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:paddingStart="16dp"
            android:paddingEnd="16dp"&gt;

            &lt;com.google.android.material.floatingactionbutton.FloatingActionButton
                android:id="@+id/btnLeave"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:contentDescription="Leave Meeting"
                android:src="@drawable/ic_end_call"
                app:backgroundTint="#FF5D5D"
                app:fabSize="normal"
                app:tint="@color/white" /&gt;

            &lt;com.google.android.material.floatingactionbutton.FloatingActionButton
                android:id="@+id/btnMic"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginStart="90dp"
                android:layout_toEndOf="@+id/btnLeave"
                android:contentDescription="Toggle Mic"
                android:src="@drawable/ic_mic_off"
                app:backgroundTint="@color/white"
                app:borderWidth="1dp"
                app:fabSize="normal" /&gt;

            &lt;com.google.android.material.floatingactionbutton.FloatingActionButton
                android:id="@+id/btnWebcam"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginStart="90dp"
                android:layout_toEndOf="@+id/btnMic"
                android:contentDescription="Toggle Camera"
                android:src="@drawable/ic_video_camera_off"
                app:backgroundTint="@color/white"
                app:borderWidth="1dp"
                app:fabSize="normal" /&gt;

        &lt;/RelativeLayout&gt;

    &lt;/com.google.android.material.bottomappbar.BottomAppBar&gt;

</code></pre><blockquote>Copy the required icons from <a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example/tree/one-to-one-demo/app/src/main/res/drawable">here</a> and paste them into your project's <code>res/drawable</code> folder.</blockquote><h3 id="step-5-video-chat-android-sdk-initializing-the-meeting">STEP 5: Video Chat Android SDK Initializing the Meeting</h3>
<p>After getting the token and meetingId from <code>JoinActivity</code>, we need to:</p><ol>
<li>Initialize <strong>VideoSDK</strong></li>
<li>Configure <strong>VideoSDK</strong> with the token.</li>
<li>Initialize the meeting with the required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code>, <code>participantId</code>, and a map of <code>CustomStreamTrack</code>.</li>
<li>Join the room with <code>meeting.join()</code> method.</li>
</ol>
<pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {

    private var meeting: Meeting? = null
    private var micEnabled = true
    private var webcamEnabled = true

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_meeting)

        //
        val toolbar = findViewById&lt;Toolbar&gt;(R.id.material_toolbar)
        toolbar.title = ""
        setSupportActionBar(toolbar)

        //
        val token = intent.getStringExtra("token")
        val meetingId = intent.getStringExtra("meetingId")

        // set participant name
        val localParticipantName = "Alex"

        // Initialize VideoSDK
        VideoSDK.initialize(applicationContext)

        // pass the token generated from api server
        VideoSDK.config(token)

        // create a new meeting instance
        meeting = VideoSDK.initMeeting(
            this@MeetingActivity, meetingId, localParticipantName,
            micEnabled, webcamEnabled, null, null
        )

        // join the meeting
        meeting?.join()

        //
        val textMeetingId = findViewById&lt;TextView&gt;(R.id.txtMeetingId)
        textMeetingId.text = meetingId

	 // copy meetingId to clipboard
        (findViewById&lt;View&gt;(R.id.btnCopyContent) as ImageButton).setOnClickListener {
            if (meetingId != null) {
                copyTextToClipboard(meetingId)
            }
        }
   }
      
      private fun copyTextToClipboard(text: String) {
        val clipboard = getSystemService(CLIPBOARD_SERVICE) as ClipboardManager
        val clip = ClipData.newPlainText("Copied text", text)
        clipboard.setPrimaryClip(clip)
        Toast.makeText(this@MeetingActivity, "Copied to clipboard!", Toast.LENGTH_SHORT).show()
    }

}</code></pre><h3 id="step-6-video-chat-android-sdk-handle-local-participant-media">STEP 6: Video Chat Android SDK Handle Local Participant Media</h3>
<p>We need to implement click handlers for the following <code>Views</code>:</p><ul><li><strong>Mic Button</strong></li><li><strong>Webcam Button</strong></li><li><strong>Switch Camera Button</strong></li><li><strong>Leave Button</strong></li></ul><p>Add the following implementation:</p><pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {

	private var btnWebcam: FloatingActionButton? = null
    private var btnMic: FloatingActionButton? = null
    private var btnLeave: FloatingActionButton? = null
    private var btnSwitchCameraMode: ImageButton? = null
    
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)
    
    //
    btnLeave = findViewById(R.id.btnLeave)
    btnSwitchCameraMode = findViewById(R.id.btnSwitchCameraMode)
    btnMic = findViewById(R.id.btnMic)
    btnWebcam = findViewById(R.id.btnWebcam)
        
    //...

    // actions
    setActionListeners()
  }
  
  private fun setActionListeners() {
        // Toggle mic
        btnMic!!.setOnClickListener { toggleMic() }

        // Toggle webcam
        btnWebcam!!.setOnClickListener { toggleWebCam() }

        // Leave meeting
        btnLeave!!.setOnClickListener { 
            // this will make the local participant leave the meeting
       	    meeting!!.leave()
        }

        // Switch camera
        btnSwitchCameraMode!!.setOnClickListener { 
        //a participant can change stream from front/rear camera during the meeting.
            meeting!!.changeWebcam() 
        }
    
   }
}</code></pre><pre><code class="language-kotlin">private fun toggleMic() {
        if (micEnabled) {
            // this will mute the local participant's mic
            meeting!!.muteMic()
        } else {
             // this will unmute the local participant's mic
            meeting!!.unmuteMic()
        }
        micEnabled = !micEnabled  
    	// change mic icon according to micEnable status
        toggleMicIcon()
    }

    @SuppressLint("ResourceType")
    private fun toggleMicIcon() {
        if (micEnabled) {
            btnMic!!.setImageResource(R.drawable.ic_mic_on)
            btnMic!!.setColorFilter(Color.WHITE)
            var buttonDrawable = btnMic!!.background
            buttonDrawable = DrawableCompat.wrap(buttonDrawable!!)
            if (buttonDrawable != null) DrawableCompat.setTint(buttonDrawable, Color.TRANSPARENT)
            btnMic!!.background = buttonDrawable
        } else {
            btnMic!!.setImageResource(R.drawable.ic_mic_off)
            btnMic!!.setColorFilter(Color.BLACK)
            var buttonDrawable = btnMic!!.background
            buttonDrawable = DrawableCompat.wrap(buttonDrawable!!)
            if (buttonDrawable != null) DrawableCompat.setTint(buttonDrawable, Color.WHITE)
            btnMic!!.background = buttonDrawable
        }
    }</code></pre><pre><code class="language-kotlin">private fun toggleWebCam() {
        if (webcamEnabled) {
        // this will disable the local participant webcam
            meeting!!.disableWebcam()
        } else {
        // this will enable the local participant webcam
            meeting!!.enableWebcam()
        }
        webcamEnabled = !webcamEnabled
        // change webCam icon according to webcamEnabled status
        toggleWebcamIcon()
    }

    @SuppressLint("ResourceType")
    private fun toggleWebcamIcon() {
        if (webcamEnabled) {
            btnWebcam!!.setImageResource(R.drawable.ic_video_camera)
            btnWebcam!!.setColorFilter(Color.WHITE)
            var buttonDrawable = btnWebcam!!.background
            buttonDrawable = DrawableCompat.wrap(buttonDrawable!!)
            if (buttonDrawable != null) DrawableCompat.setTint(buttonDrawable, Color.TRANSPARENT)
            btnWebcam!!.background = buttonDrawable
        } else {
            btnWebcam!!.setImageResource(R.drawable.ic_video_camera_off)
            btnWebcam!!.setColorFilter(Color.BLACK)
            var buttonDrawable = btnWebcam!!.background
            buttonDrawable = DrawableCompat.wrap(buttonDrawable!!)
            if (buttonDrawable != null) DrawableCompat.setTint(buttonDrawable, Color.WHITE)
            btnWebcam!!.background = buttonDrawable
        }
    }</code></pre><h3 id="step-7-setting-up-local-participant-view">STEP 7: Setting up Local participant view</h3>
<p>To set up the participant view, we have to implement the methods of the <code>ParticipantEventListener</code> abstract class and attach the listener to the <code>Participant</code> object using its <code>addEventListener()</code> method. <code>ParticipantEventListener</code> has two methods:</p>
<ol>
<li><code>onStreamEnabled</code> - Whenever any participant enables their mic/webcam in the meeting, the <code>onStreamEnabled</code> event fires with the corresponding <code>Stream</code>.</li>
<li><code>onStreamDisabled</code> - Whenever any participant disables their mic/webcam in the meeting, the <code>onStreamDisabled</code> event fires with the corresponding <code>Stream</code>.</li>
</ol>
<pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {

    private var localView: VideoView? = null
    private var participantView: VideoView? = null
    
    private var localCard: CardView? = null
    private var participantCard: CardView? = null
    private var localParticipantImg: ImageView? = null
    
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)
    
    //... 
    
    localCard = findViewById(R.id.LocalCard)
    participantCard = findViewById(R.id.ParticipantCard)
    localView = findViewById(R.id.localView)
    participantView = findViewById(R.id.participantView)
    localParticipantImg = findViewById(R.id.localParticipant_img)
        
    //...

    // setup local participant view
    setLocalListeners()
  }

  private fun setLocalListeners() {
        meeting!!.localParticipant
            .addEventListener(object : ParticipantEventListener() {
                override fun onStreamEnabled(stream: Stream) {
                    if (stream.kind.equals("video", ignoreCase = true)) {
                        val track = stream.track as VideoTrack
                        localView!!.visibility = View.VISIBLE
                        localView!!.addTrack(track)
                        localView!!.setZOrderMediaOverlay(true)
                        (localCard as View?)!!.bringToFront()
                    }
                }

                override fun onStreamDisabled(stream: Stream) {
                    if (stream.kind.equals("video", ignoreCase = true)) {
                        localView!!.removeTrack()
                        localView!!.visibility = View.GONE
                    }
                }
            })
    }
}</code></pre><h3 id="step-8-setting-up-remote-participant-view">STEP 8: Setting up Remote participant view</h3><p>Similarly, define a <code>ParticipantEventListener</code> for the remote participant that shows or hides the remote video stream:</p>
<pre><code class="language-kotlin">private val participantEventListener: ParticipantEventListener =
        object : ParticipantEventListener() {
        // trigger when participant enabled mic/webcam
            override fun onStreamEnabled(stream: Stream) {
                if (stream.kind.equals("video", ignoreCase = true)) {
                    localView!!.setZOrderMediaOverlay(true)
                    (localCard as View?)!!.bringToFront()
                    val track = stream.track as VideoTrack
                    participantView!!.visibility = View.VISIBLE
                    participantView!!.addTrack(track)
                }
            }

            // triggered when the participant disables mic/webcam
            override fun onStreamDisabled(stream: Stream) {
                if (stream.kind.equals("video", ignoreCase = true)) {
                    participantView!!.removeTrack()
                    participantView!!.visibility = View.GONE
                }
            }
        }</code></pre><h3 id="step-9-handle-meeting-events-manage-participants-view">STEP 9: Handle meeting events &amp; manage participant's view</h3>
<ul>
<li>Add a <code>MeetingEventListener</code> to listen for events such as Meeting Join/Left and Participant Join/Left.</li>
</ul>
<pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)
    //...

    // handle meeting events
    meeting!!.addEventListener(meetingEventListener)
  }
  
  private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
        override fun onMeetingJoined() {
            // change the mic/webcam icons after the meeting is successfully joined
            toggleMicIcon()
            toggleWebcamIcon()
        }

        override fun onMeetingLeft() {
            if (!isDestroyed) {
                val intent = Intent(this@MeetingActivity, JoinActivity::class.java)
                startActivity(intent)
                finish()
            }
        }

        override fun onParticipantJoined(participant: Participant) {
            // Display the local participant as MiniView when another participant joins
            changeLocalParticipantView(true)
            Toast.makeText(
                this@MeetingActivity, participant.displayName + " joined",
                Toast.LENGTH_SHORT
            ).show()
            participant.addEventListener(participantEventListener)
        }

        override fun onParticipantLeft(participant: Participant) {
            // Display the local participant as LargeView when the other participant leaves
            changeLocalParticipantView(false)
            Toast.makeText(
                this@MeetingActivity, participant.displayName + " left",
                Toast.LENGTH_SHORT
            ).show()
        }
    }
}</code></pre><ul>
<li>The <code>changeLocalParticipantView(isMiniView: Boolean)</code> function controls whether the local participant's video is displayed as a MiniView or a LargeView.</li>
<li>If the meeting has only one participant (local participant), then the local participant is displayed as LargeView.</li>
<li>When another participant (other than the local participant) joins, <code>changeLocalParticipantView(true)</code> is called. As a result, the local participant is shown as MiniView, while the other participant is shown as LargeView.</li>
</ul>
<pre><code class="language-kotlin">private fun changeLocalParticipantView(isMiniView: Boolean) {
        if (isMiniView) {
            // show localCard as miniView
            localCard!!.layoutParams =
                FrameLayout.LayoutParams(300, 430, Gravity.RIGHT or Gravity.BOTTOM)
            val cardViewMarginParams = localCard!!.layoutParams as MarginLayoutParams
            cardViewMarginParams.setMargins(30, 0, 60, 40)
            localCard!!.requestLayout()
            // set height-width of localParticipant_img
            localParticipantImg!!.layoutParams = FrameLayout.LayoutParams(150, 150, Gravity.CENTER)
            (localCard as View?)!!.bringToFront()
            participantCard!!.visibility = View.VISIBLE
        } else {
            // show localCard as largeView
            localCard!!.layoutParams =
                FrameLayout.LayoutParams(
                    ViewGroup.LayoutParams.MATCH_PARENT,
                    ViewGroup.LayoutParams.MATCH_PARENT
                )
            val cardViewMarginParams = localCard!!.layoutParams as MarginLayoutParams
            cardViewMarginParams.setMargins(30, 5, 30, 30)
            localCard!!.requestLayout()
            // set height-width of localParticipant_img
            localParticipantImg!!.layoutParams = FrameLayout.LayoutParams(400, 400, Gravity.CENTER)
            participantCard!!.visibility = View.GONE
        }
    }</code></pre><h3 id="step-10-destroying-everything">STEP 10: Destroying everything</h3>
<p>We need to release resources when the app is closed and no longer in use. Override the <code>onDestroy</code> method with the following code:</p><pre><code class="language-kotlin">override fun onDestroy() {
        if (meeting != null) {
            meeting!!.removeAllListeners()
            meeting!!.localParticipant.removeAllListeners()
            meeting!!.leave()
            meeting = null
        }
        if (participantView != null) {
            participantView!!.visibility = View.GONE
            participantView!!.releaseSurfaceViewRenderer()
        }
        if (localView != null) {
            localView!!.visibility = View.GONE
            localView!!.releaseSurfaceViewRenderer()
        }
        super.onDestroy()
    }</code></pre><p>This is how the meeting screen will look with two participants.</p><blockquote>
<p><code>java.lang.IllegalStateException: This Activity already has an action bar supplied by the window decor. Do not request Window.FEATURE_SUPPORT_ACTION_BAR and set windowActionBar to false in your theme to use a Toolbar instead.</code><br/>
If you face this error at runtime, add these lines to your <code>theme.xml</code> file:<br/>
<code>&lt;item name="windowActionBar"&gt;false&lt;/item&gt;</code><br/>
<code>&lt;item name="windowNoTitle"&gt;true&lt;/item&gt;</code></p>
</blockquote>
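For context, those two items live inside your app theme. A minimal sketch of the relevant part of `res/values/themes.xml` (the theme name and parent shown here are illustrative, not from the tutorial project):

```xml
<!-- res/values/themes.xml — theme name and parent are illustrative -->
<resources>
    <style name="Theme.MyVideoApp" parent="Theme.MaterialComponents.DayNight.NoActionBar">
        <!-- Disable the window-decor action bar so the Activity can supply its own toolbar -->
        <item name="windowActionBar">false</item>
        <item name="windowNoTitle">true</item>
    </style>
</resources>
```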
<h2 id="1-to-1-video-chat-app-demo">1-to-1 Video Chat App Demo</h2><p><strong>Tadaa!!</strong> Our app is ready. Easy, right?</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Add-a-heading.gif" class="kg-image" alt="How to Build 1on1 Video Chat App in Kotlin for Android?" loading="lazy" width="320" height="240"/></figure><p>I hope you enjoyed following along and building a 1-to-1 video chat Android app using the Video SDK.<br/><br/>Install and run the app on two different devices, make sure both are connected to the internet, and it should work as shown in the video below:</p>
<!--kg-card-begin: html-->
<iframe type="text/html" frameborder="0" width="560" height="315" src="https://www.youtube.com/embed/Kj7jS3dbJFA" allowfullscreen=""/>
<!--kg-card-end: html-->
<blockquote>This app only supports 2 participants; it does not manage more than 2. <a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example">If you want to handle more than 2 participants, check out our Group chat example here.</a></blockquote><h2 id="conclusion">Conclusion</h2><p>In this blog, we learned what VideoSDK is, how to obtain an access token from the VideoSDK <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer"><strong>Dashboard</strong></a>, and how to make a video call on Android using VideoSDK. Go ahead and create advanced features like screen-sharing, chat, and others. Browse our <a href="https://docs.videosdk.live/" rel="noopener noreferrer">Documentation</a>.</p><p>To see the full implementation of the app, check out the GitHub repository: <a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example/tree/one-to-one-demo">https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example/tree/one-to-one-demo</a></p><p>If you have any questions or comments, I invite you to <a href="https://discord.gg/Gpmj6eCq5u">Join the Video SDK Developer Discord</a> community.</p><h2 id="resources">Resources</h2><ul><li><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/getting-started">Get Started with Android Documentation</a></li><li><a href="https://youtu.be/Kj7jS3dbJFA">Build an Android Video Calling App using Android Studio and Video SDK</a></li><li><a href="https://www.videosdk.live/blog/react-native-android-video-calling-app-with-callkeep">Build a React Native Android Video Calling App with CallKeep using Firebase and Video SDK</a></li><li><a href="https://www.videosdk.live/blog/video-calling-in-flutter">Build a Flutter Video Calling App with Video SDK</a></li></ul>]]></content:encoded></item><item><title><![CDATA[2024 Rewind: The Rise of Digital-Human Intelligence]]></title><description><![CDATA[As 2024 wraps up, it’s the perfect time to pause, reflect, and share some of the incredible things we’ve accomplished at VideoSDK this year. It’s been a year of building connections, empowering developers, and creating real impact around the world.
]]></description><link>https://www.videosdk.live/blog/2024-year-in-review</link><guid isPermaLink="false">6787c7e64556210426689ae4</guid><category><![CDATA[year-in-review]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Thu, 16 Jan 2025 16:50:59 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2025/01/Thumbnail.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Thumbnail.jpg" alt="2024 Rewind: The Rise of Digital-Human Intelligence"/><p>As 2024 wraps up, it’s the perfect time to pause, reflect, and share some of the incredible things we’ve accomplished at VideoSDK this year. It’s been a year of building connections, empowering developers, and creating real impact around the world.</p><p>Let’s take a look back at the milestones we’ve hit, the lives we’ve touched, and what’s next in our story.</p><h2 id="lives-we%E2%80%99ve-touched"><strong>Lives We’ve Touched</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Lives-we-touched.jpg" class="kg-image" alt="2024 Rewind: The Rise of Digital-Human Intelligence" loading="lazy" width="3840" height="2160"/></figure><p>In India, over 2.8 million people verified their identities effortlessly with our video KYC solution, saving time and building trust on a massive scale. In the MENA region, we bridged the gap between patients and doctors, enabling 50,000 medical consultations and improving access to healthcare. Around the world, we powered online interviews for more than 40,000 people, helping them unlock exciting career opportunities. 
And on a lighter note, we brought smiles to 76 million people who stayed connected with their loved ones through social apps.</p><h2 id="numbers-that-make-us-proud"><strong>Numbers That Make Us Proud</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Number-that-make-us-proud.jpg" class="kg-image" alt="2024 Rewind: The Rise of Digital-Human Intelligence" loading="lazy" width="3840" height="2160"/></figure><p>Every month, more than 1 million participants from over 150 countries rely on VideoSDK for smooth and uninterrupted communication. This year alone, our npm and pub.dev packages have been downloaded over 10 million times. With our SDK, developers have built over 5,000 apps, empowering innovative solutions across diverse industries.&nbsp;</p><h2 id="scaling-to-new-heights"><strong>Scaling to New Heights</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Scaling-to-new-heights-1.jpg" class="kg-image" alt="2024 Rewind: The Rise of Digital-Human Intelligence" loading="lazy" width="3840" height="2160"/></figure><p>We also pushed our infrastructure to new heights. Imagine handling 9060 concurrent sessions with 970 users at peak, and delivering 5+ TB of recordings. That’s the scale we’re talking about.</p><h2 id="a-year-of-exceptional-support"><strong>A Year of Exceptional Support</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Execptional-Customer-Support.jpg" class="kg-image" alt="2024 Rewind: The Rise of Digital-Human Intelligence" loading="lazy" width="3840" height="2160"/></figure><p>Of course, great products need great support, and our team showed up in a big way. This year, we resolved over 1027 support tickets, keeping our average resolution time under 3 hours. And here’s the cherry on top: our <strong>91% customer satisfaction score</strong> (CSAT). 
Knowing our users trust us and love what we do keeps us going.</p><h2 id="leaving-our-mark-at-global-tech-and-fintech-events"><strong>Leaving Our Mark at Global Tech and Fintech Events</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/event-1.jpg" class="kg-image" alt="2024 Rewind: The Rise of Digital-Human Intelligence" loading="lazy" width="3840" height="2160"/></figure><p>We showcased our innovative Agent-assisted &amp; Agentless KYC solution at the Global Fintech Fest (GFF), gaining significant interest from many BFSI giants. We also spoke at the Google Developers Group Surat (GDG Surat) event, sharing insights on 'Developer's role in building real-time human-digital worker ecosystems,' highlighting the evolving landscape of application building with the developer community.</p><h2 id="meet-the-future-of-ai"><strong>Meet the Future of AI</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2025/01/Character-SDK.jpg" class="kg-image" alt="2024 Rewind: The Rise of Digital-Human Intelligence" loading="lazy" width="3840" height="2160"/></figure><p>This year, we introduced <strong>Character SDK - </strong>a multimodal AI agent that can see, hear, and respond like never before. Its launch on Product Hunt was a moment to remember—earning us titles like&nbsp;</p><ul><li>#1 Product of the Day</li><li>#1 Product of the Week in the Developer Tool category</li><li>#2 Product of the Month in AI</li><li>#3 Product of the Month overall</li></ul><p>From creating AI companions that think, learn, and act, to simplifying workflows, Character SDK is shaping the future of technology in ways we couldn’t have imagined.</p><h2 id="looking-ahead"><strong>Looking Ahead</strong></h2><p>As we step into 2025, we’re filled with gratitude and excitement. Thank you for being a part of our journey this year. 
Here’s to pushing boundaries, making an impact, and creating even more meaningful connections in the year ahead.</p><p>Let’s make it amazing—together!</p>]]></content:encoded></item><item><title><![CDATA[Top 10 Video Conferencing API for 2025]]></title><description><![CDATA[Connect with excellence using the ten best video calling APIs and SDKs, enhancing your apps with high-quality real-time video communication.]]></description><link>https://www.videosdk.live/blog/10-best-video-calling-apis</link><guid isPermaLink="false">64ec6ed89eadee0b8b9e7e6a</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 16 Jan 2025 10:25:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/08/10-Video-Calling.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/08/10-Video-Calling.jpg" alt="Top 10 Video Conferencing API for 2025"/><p>The global COVID-19 pandemic has triggered a complete revolution in the communication sector. What used to be reserved for special occasions, like live video calls for festive gatherings, has now become an essential tool for business meetings and professional discussions.</p><p>A Video-calling API (Application Programming Interface) also known as a video chat API or video conference API is a set of protocols, tools, and definitions that allow developers to integrate video-calling functionality into their applications or websites. Essentially, it provides a way for developers to build video calling features without having to develop the entire infrastructure from scratch.</p><p>These APIs typically include functions for establishing video calls, managing participants, handling audio and video streams, and managing other aspects of the video call experience such as screen sharing, chat, and recording. 
They often support a variety of platforms including web browsers, mobile devices, and desktop applications.</p><h2 id="what-essential-features-should-a-video-calling-sdk-include">What Essential Features Should A Video Calling SDK Include?</h2>
<p><strong>Key Features</strong> to Consider When Developing a <strong>Video Calling App</strong>:</p><ul><li><strong>Single and Group Chats:</strong> Enable users to engage in both one-on-one and group conversations through the conferencing feature.</li><li><strong>Content Sharing:</strong> Allow seamless sharing of content and screen during video calls.</li><li><strong>Enhanced Quality:</strong> Ensure minimal lags, jitters, and background noise for a smoother calling experience.</li><li><strong>Call Recording:</strong> Provide the functionality to record and store video call content for future reference.</li></ul><h2 id="video-conferencing-sdk-for-2024">Video Conferencing SDKs for 2025</h2>
<p>We've curated a compilation of the finest <strong>video calling APIs</strong>, featuring the following top selections: VideoSDK, Agora, Twilio, Mirrorfly, AWS Chime SDK, Zoom Video SDK, Vonage, 100ms, EnableX, LiveKit, and Whereby.</p><h2 id="1-videosdk-real-time-video-infra-for-developer">1. VideoSDK: Real-time video infra for developers</h2>
<ul><li>Embrace the seamless integration of VideoSDK, one of the leading video API providers, where simplicity harmonizes with speed to enable you to concentrate on crafting innovative features that elevate user engagement. </li><li>With VideoSDK, you unlock unparalleled <strong>scalability</strong>, <strong>adaptive bitrate</strong> technology ensuring top-notch video quality, <strong>extensive customization</strong> options for tailored user experiences, <strong>high-fidelity recording</strong> capabilities, <strong>comprehensive analytics</strong> offering insightful data, <strong>cross-platform streaming</strong> for wider accessibility, <strong>effortless scalability</strong> to meet growing demands, and <strong>robust support</strong> spanning diverse platforms. </li><li>Whether you're building for mobile (Flutter, Android, iOS), web (JavaScript Core SDK + UI Kit), or desktop (Flutter Desktop), VideoSDK empowers you to effortlessly forge immersive and captivating video interactions.</li></ul><h3 id="videosdk-pricing">VideoSDK Pricing</h3>
<ul><li>Unlock the exceptional value of VideoSDK! Embrace the generosity of their <a href="https://www.videosdk.live/pricing">10,000 free monthly minutes</a> and immerse yourself in the versatility of their <a href="https://www.videosdk.live/pricing#pricingCalc">pricing options</a> for both video and audio calls. </li><li>With VideoSDK, enjoy <strong>video calls</strong> starting at an astonishingly low rate of just <strong>$0.003</strong> per participant per minute, enabling seamless connections without stretching your budget. </li><li>Furthermore, their <strong>audio calls</strong> are affordably accessible at only <strong>$0.0006</strong> per minute, making communication both cost-effective and within reach. </li><li>To elevate your experiences, they offer <strong>cloud recordings</strong> at an affordable <strong>$0.015</strong> per minute, preserving crucial moments for future reference. </li><li>For those pursuing seamless RTMP output, VideoSDK offers competitive pricing at <strong>$0.030</strong> per minute, ensuring uninterrupted and fluid streaming capabilities.</li><li>Rest assured, their dedicated <strong>support</strong> team stands by <strong>24/7</strong>, poised to deliver prompt and dependable customer assistance whenever you need it.</li></ul>
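To make the per-minute rates above concrete, here is a small back-of-the-envelope bill estimator. It is a sketch: the function name and structure are ours, not part of the SDK, and the default rate is simply the $0.003 per-participant-per-minute video figure quoted above.

```kotlin
// Rough monthly cost estimate for video minutes.
// The default rate is the article's quoted figure; adjust to current pricing.
fun estimateVideoCost(
    participants: Int,
    minutesPerSession: Int,
    sessionsPerMonth: Int,
    ratePerParticipantMinute: Double = 0.003 // video, per participant per minute
): Double =
    participants * minutesPerSession * sessionsPerMonth * ratePerParticipantMinute

fun main() {
    // Example: 1-to-1 calls (2 participants), 30-minute sessions, 100 sessions a month
    val monthly = estimateVideoCost(participants = 2, minutesPerSession = 30, sessionsPerMonth = 100)
    println("Estimated monthly video cost: $" + "%.2f".format(monthly)) // about $18.00
}
```

The same shape works for the audio, recording, and RTMP rates: multiply total minutes by the corresponding per-minute price.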
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8"/>
	<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet"/>
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-agora-real-time-voicevideo-engagement">2. Agora: Real-Time Voice/Video Engagement</h2>
<ul><li>Agora's integrated SDK offers voice and video chat, real-time recording, live streaming, and instant messaging functionalities.</li><li>Users can also enhance their experiences by opting for premium add-ons such as <strong>AR facial masks</strong>, <strong>sound effects</strong>, and <strong>whiteboards</strong> at an <strong>additional cost</strong>.</li><li>With Agora’s SD-RTN (Software Defined Real-Time Network), users can enjoy extensive global coverage and benefit from ultra-low latency streaming capabilities, ensuring a seamless communication experience.</li><li>However, it’s worth noting that Agora’s <strong>pricing structure</strong> may be <strong>intricate</strong> and may not be the most suitable option for businesses with tighter budgets.</li><li>Users who require direct support from Agora’s team should be aware that assistance might involve <strong>longer response times</strong>.</li></ul><h3 id="agora-pricing">Agora pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/agora-alternative" rel="noopener">Agora</a> provides users with a choice between two <a href="https://www.agora.io/en/pricing/" rel="noopener">pricing</a> plans: <strong>Premium</strong> and <strong>Standard</strong>, tailored to their service requirements.</li><li>The pricing is based on the monthly duration of audio and video calls, offering users a cost-effective approach.</li><li>The pricing options are further divided into four categories, each corresponding to different video resolutions, ensuring flexibility and aligning with users’ specific needs.</li><li>For <strong>audio calls</strong>, Agora offers a rate of <strong>$0.99</strong> per 1,000 participant minutes.</li><li><strong>HD Video calls</strong> are priced at <strong>$3.99</strong> per 1,000 participant minutes, and <strong>Full HD Video calls</strong> are available at <strong>$8.99</strong> per 1,000 participant minutes.</li><li>This structure empowers users to select the pricing tier that best fits their usage patterns, ensuring they can make a well-informed decision based on their budget and requirements.</li></ul><blockquote>
<p>See how Agora compares with its <a href="https://www.videosdk.live/blog/agora-competitors">competitors</a></p>
</blockquote>
<h2 id="3-mirrorfly-voice-chat-apis-for-developers">3. MirrorFly: Voice &amp; Chat APIs for Developers</h2>
<ul><li>MirrorFly is known for its custom <a href="https://www.mirrorfly.com/video-call-solution.php">video calling API</a> and messaging SDK that includes 1000+ in-app communication capabilities for web and mobile apps.</li><li>However, its reliance on pre-built features could affect the platform's ability to offer a uniquely personalized communication experience.</li><li><strong>Scalability</strong> could be a potential <strong>concern</strong> for MirrorFly, particularly when dealing with larger applications or handling a substantial volume of users.</li><li>Maintaining <strong>performance</strong> and <strong>stability</strong> under high traffic conditions or complex use cases may pose <strong>challenges</strong>.</li><li><strong>Users' experiences</strong> with MirrorFly's technical support appear to be <strong>mixed</strong>. Some have reported <strong>less-than-optimal</strong> responsiveness, resulting in <strong>delays</strong> or <strong>difficulties</strong> in resolving issues and addressing concerns.</li><li>The <strong>pricing</strong> structure of MirrorFly <strong>may not align</strong> with all budget constraints or use case scenarios.</li><li>Depending on the <strong>desired features</strong> and <strong>scalability needs</strong>, the associated <strong>costs</strong> might be <strong>higher</strong> compared to alternative communication platforms.</li><li><strong>Integrating MirrorFly</strong> into existing applications or workflows could be a <strong>complex</strong> endeavor requiring significant <strong>technical expertise</strong>.</li><li><strong>Inadequate documentation</strong> or <strong>developer resources</strong> might make the integration process more challenging and time-consuming than desired.</li></ul><h3 id="mirrorfly-pricing">MirrorFly pricing</h3>
<ul><li>MirrorFly's pricing begins at <strong>$299</strong> per month, positioning it as a <strong>premium option</strong> within the market.</li><li>This cost should be carefully weighed against the platform's features and capabilities, and the specific budget constraints of your project.</li></ul><h2 id="4-twilio-video">4. Twilio Video</h2>
<ul><li>Twilio offers a range of SDKs for web, iOS, and Android, equipping developers with versatile tools to seamlessly integrate live video functionality into their applications and elevate user experiences.</li><li>However, it’s important to note that working with Twilio may entail <strong>manual configuration</strong> and <strong>additional coding</strong> effort, particularly for utilizing multiple audio and video inputs.</li><li>This complexity can potentially pose challenges during the development process.</li><li>Twilio’s call insights feature is designed for error tracking and analysis, yet its integration requires <strong>additional code</strong> implementation, which might add to the development workload.</li><li>As usage scales, pricing considerations come into play. Twilio <strong>lacks</strong> a <strong>built-in tiering</strong> system in the dashboard, which could lead to <strong>complexities</strong> in managing scaling needs efficiently.</li><li>While Twilio supports <strong>up to 50 hosts</strong> and participants, which covers many use cases, it’s worth mentioning that <strong>Twilio</strong> <strong>does not offer ready-made plugins</strong> for streamlined product development.</li><li>This could demand <strong>extra time</strong> and <strong>effort</strong> from developers for implementation.</li><li>Finally, Twilio does offer customization options, but the extent of customization may not always align perfectly with every developer’s specific requirements.</li><li>This might necessitate <strong>further code</strong> development to achieve the desired outcomes.</li></ul><h3 id="twilio-pricing">Twilio pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/twilio-video-alternative">Twilio</a> follows a <a href="https://www.twilio.com/en-us/video/pricing">pricing</a> model that commences at <strong>$4</strong> per 1,000 minutes for their <strong>video services</strong>.</li><li>Regarding <strong>recordings</strong>, Twilio charges at a rate of <strong>$0.004</strong> per participant minute, while recording <strong>compositions</strong> come with a price tag of <strong>$0.01</strong> per composed minute.</li><li>Furthermore, <strong>storage</strong> incurs costs at a rate of <strong>$0.00167</strong> per GB per day, applicable after the initial 10 GB.</li></ul><blockquote>
<p>See how Twilio compares with its <a href="https://www.videosdk.live/blog/twilio-video-competitors">competitors</a></p>
</blockquote>
<h2 id="5-aws-chime-sdk">5. AWS Chime SDK</h2>
<ul><li>The Amazon Chime SDK facilitates video meetings with a maximum capacity of <strong>25 participants</strong> (50 for mobile users), providing a platform for effective collaboration among users.</li><li>Through the integration of simulcast technology, it ensures uniform video quality across diverse devices and networks, fostering seamless communication.</li><li>To uphold security standards, the Amazon Chime SDK encrypts all calls, videos, and chats, creating a secure communication environment for users.</li><li>However, certain features like <strong>polling</strong>, <strong>auto-sync</strong> with Google Calendar, and <strong>background blur</strong> effects are <strong>not included</strong> in the Amazon Chime SDK, potentially limiting users seeking these specific functionalities.</li><li>Notably, compatibility <strong>issues</strong> have been reported in <strong>Linux</strong> environments, and participants using the <strong>Safari browser</strong> might encounter <strong>challenges</strong> while utilizing the SDK, possibly impacting the <strong>overall user experience</strong>.</li><li>Furthermore, <strong>customer support</strong> encounters with the Amazon Chime SDK can exhibit <strong>variability</strong>, with query resolution times contingent on the specific support agent assigned to the case.</li></ul><h3 id="aws-chime-pricing">AWS Chime pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative">Amazon Chime</a> <a href="https://aws.amazon.com/chime/pricing/">offers</a> a foundational <strong>free plan</strong>, allowing users to engage in one-on-one audio/video calls without incurring any cost.</li><li>For users seeking enhanced features and capabilities, the <strong>Plus plan</strong> is available at a rate of <strong>$2.50</strong> per month per user.</li><li>This plan extends valuable additions such as <strong>screen sharing</strong>, <strong>remote desktop control</strong>, <strong>1 GB of message history</strong> per user, and seamless <strong>Active Directory integration</strong>.</li><li>To cater to more intricate collaboration demands, the <strong>Pro plan</strong> is offered at <strong>$15</strong> per user per month.</li><li>This comprehensive plan encompasses all the features available in the Plus plan and accommodates <strong>meetings</strong> with <strong>three</strong> or more participants.</li><li>This Pro plan is well suited for larger group discussions, presentations, and collaborative endeavors.</li></ul><blockquote>
<p>See how AWS Chime compares with its <a href="https://www.videosdk.live/blog/amazon-chime-sdk-competitors">competitors</a></p>
</blockquote>
<h2 id="6-zoom-video-sdk-video-communications-platform">6. Zoom Video SDK: Video Communications Platform</h2>
<ul><li>The Zoom Video SDK empowers developers to craft tailored video compositions, accommodating <strong>up to 1,000 participants</strong> per session.</li><li>Enriching collaboration, it integrates functionalities such as <strong>screen sharing</strong>, <strong>live streaming</strong>, and <strong>in-session chat</strong>, complemented by layout control.</li><li>Zoom’s global outreach is strengthened by its support for multiple languages and the availability of support plans for expedited assistance.</li><li>Nonetheless, inherent <strong>role constraints</strong> and <strong>bandwidth management</strong> aspects should be considered.</li></ul><h3 id="zoom-pricing">Zoom pricing</h3>
<ul><li>In the realm of <a href="https://zoom.us/buy/videosdk">pricing</a>, <a href="https://www.videosdk.live/blog/zoom-video-sdk-alternative">Zoom</a> provides a generous allocation of 10,000 free monthly minutes, with charges incurred once this threshold is surpassed.</li><li>The pricing structure starts at <strong>$0.31</strong> per user minute, while an additional option offers video <strong>recordings</strong> at <strong>$100</strong> per month for a capacious <strong>1 TB</strong> of storage.</li><li>For <strong>telephony services</strong>, a fixed monthly fee of <strong>$100</strong> applies, rounding out Zoom’s comprehensive offering.</li></ul><blockquote>
<p>See how Zoom compares with its <a href="https://www.videosdk.live/blog/zoom-video-sdk-competitors">competitors</a></p>
</blockquote>
<h2 id="7-vonage">7. Vonage</h2>
<ul><li>The API brings a versatile set of tools to the table, encompassing live video, voice, messaging, and screen-sharing functionalities, with client libraries catering to a range of platforms.</li><li>Customization of audio/video streams, complete with effects, is facilitated, alongside support for collaborative features.</li><li><strong>Performance analysis tools</strong> are at <strong>your disposal</strong>, and paramount security and compliance measures are built in.</li><li>The solution adeptly scales to meet varying participant needs and even provides chat-based support.</li><li>It’s worth noting that the management of <strong>edge cases</strong> falls under the <strong>purview of the user</strong>.</li></ul><h3 id="vonage-pricing">Vonage pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/vonage-alternative">Vonage</a> employs a usage-based <a href="https://www.vonage.com/communications-apis/video/pricing/">pricing</a> strategy, with charges calculated dynamically by the minute, contingent on the number of participants in a video session.</li><li>Commencing at <strong>$9.99</strong> per month, <strong>every plan</strong> includes a complimentary 2,000 minutes each month.</li><li>Once the allotted free minutes are utilized, the pricing adjusts to <strong>$0.00395</strong> per minute per participant.</li><li><strong>Recording</strong> services commence at <strong>$0.10</strong> per minute, and if you’re considering <strong>HLS streaming</strong>, the cost is set at <strong>$0.15</strong> per minute.</li></ul><blockquote>
<p>See how Vonage compares with its <a href="https://www.videosdk.live/blog/vonage-competitors">competitors</a></p>
</blockquote>
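<p>Because Vonage bills per participant-minute on top of a base fee, it helps to sketch the arithmetic. The sketch below uses only the figures quoted above ($9.99 base, 2,000 free minutes, $0.00395 per participant-minute) as assumptions; check Vonage's pricing page for current rates.</p>

```java
// Illustrative cost model for the Vonage figures quoted in this article.
// These rates are assumptions from the text, not values from a Vonage API.
class VonageCostEstimator {
    static final double BASE_FEE = 9.99;              // USD per month
    static final int FREE_MINUTES = 2_000;            // included each month
    static final double PER_PARTICIPANT_MINUTE = 0.00395;

    // participantMinutes = session minutes summed across every participant,
    // e.g. a 60-minute call with 2 participants consumes 120 participant-minutes.
    static double estimateMonthlyCost(int participantMinutes) {
        int billable = Math.max(0, participantMinutes - FREE_MINUTES);
        return BASE_FEE + billable * PER_PARTICIPANT_MINUTE;
    }
}
```

<p>Under these assumptions, 10,000 participant-minutes in a month comes to roughly $41.59: the $9.99 base plus 8,000 billable minutes at $0.00395 each.</p>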
<h2 id="8-enablex-engagement-platform-for-video-voice-and-msg">8. EnableX: Engagement Platform for Video, Voice, and Messaging</h2>
<ul><li>EnableX equips users with a self-service portal, complete with reporting and real-time analytics.</li><li>This empowers users to monitor communication quality and manage online payments efficiently.</li><li>The SDK enables seamless live content streaming directly from your app or website, expanding your reach to platforms like YouTube and Facebook.</li><li>One aspect to consider is that the support team’s <strong>response time</strong> might extend up to <strong>72 hours</strong>, potentially posing a <strong>challenge</strong> for users seeking more immediate assistance.</li></ul><h3 id="enablex-pricing">EnableX pricing</h3>
<ul><li>EnableX extends <a href="https://www.enablex.io/cpaas/pricing/our-pricing">pricing</a> plans commencing at <strong>$0.004</strong> per minute per participant for rooms accommodating <strong>up to 50 individuals</strong>.</li><li><strong>Recording</strong> services, a valuable feature, come at a rate of <strong>$0.010</strong> per minute per participant, allowing you to capture and store video sessions for future use.</li><li>Should you require the conversion of videos into different formats, <strong>transcoding</strong> services are available at a rate of <strong>$0.010</strong> per minute.</li><li>To address your storage needs for growing video content, <strong>additional storage</strong> can be acquired at a rate of <strong>$0.05</strong> per GB per month. </li><li>Real-Time Messaging Protocol (<strong>RTMP</strong>) streaming is provided at a rate of <strong>$0.10</strong> per minute, enabling seamless real-time delivery to various platforms and devices. </li><li>Please note that pricing may be subject to variation based on specific requirements.</li></ul><h2 id="9-signalwire-audiovideo-communication-platform">9. SignalWire: Audio/Video Communication Platform</h2>
<ul><li>SignalWire presents an SDK that empowers developers to seamlessly embed real-time video and live streaming functionalities into web, iOS, and Android applications. </li><li>This versatile SDK facilitates <strong>video calls</strong> accommodating <strong>up to 100 participants</strong> within a real-time WebRTC environment. </li><li>It's important to note that the SDK <strong>lacks</strong> integrated <strong>support</strong> for managing disruptions or <strong>user publish-subscribe logic</strong>, which developers will need to implement independently for a comprehensive experience.</li></ul><h3 id="signalwire-pricing">SignalWire pricing</h3>
<ul><li>SignalWire adopts a <strong>per-minute pricing structure</strong> tailored to usage. </li><li><strong>HD video calls</strong> are priced at <strong>$0.0060</strong> per minute, while <strong>Full HD video calls</strong> are billed at <strong>$0.012</strong> per minute. </li><li>The exact cost may vary based on your chosen video quality for the application.</li><li>Moreover, SignalWire extends additional features including <strong>recording</strong> services at a rate of <strong>$0.0045</strong> per minute, facilitating the capture and storage of video content for future reference. </li><li>For real-time broadcasting of video content, the platform offers <strong>streaming</strong> capabilities at a rate of <strong>$0.10</strong> per minute, enabling seamless live broadcasts.</li></ul><h2 id="10-whereby">10. Whereby</h2>
<ul><li>While Whereby delivers a smooth <a href="https://www.loom.com/blog/video-conferencing" rel="noreferrer">video conferencing</a> experience, it's important to note that its <strong>customization</strong> options for the video interface are somewhat <strong>limited</strong>, potentially <strong>restricting full customization</strong>. </li><li>Notably, video calls can be seamlessly integrated into websites, mobile apps, and web products, negating the need for external links or additional applications. </li><li>However, Whereby <strong>may not offer</strong> the same breadth of <strong>advanced features</strong> compared to other tools in the market. </li><li>Meetings hosted on Whereby can accommodate <strong>up to 50 participants</strong>, which can serve the needs of many scenarios effectively. </li><li>Yet, it's worth mentioning that <strong>screen sharing</strong> for mobile users and <strong>host interface customization</strong> could have some <strong>constraints</strong>. </li><li>For users seeking <strong>virtual backgrounds</strong>, it's important to acknowledge that Whereby <strong>does not</strong> currently <strong>support</strong> this feature. </li><li>Furthermore, some users have reported <strong>issues</strong> with the <strong>mobile app</strong>, which could potentially impact the overall user experience.</li></ul><h3 id="whereby-pricing">Whereby pricing</h3>
<ul><li>Whereby offers a <a href="https://whereby.com/information/pricing">pricing</a> structure starting at <strong>$6.99</strong> per month. This <strong>baseline plan</strong> provides users with a monthly allocation of up to 2,000 user minutes. </li><li>If your usage surpasses the allocated minutes, excess usage is billed at a rate of <strong>$0.004</strong> per minute. </li><li>For those seeking advanced features, <strong>cloud recording</strong> and <strong>live streaming</strong> options can be accessed at a rate of <strong>$0.01</strong> per minute. </li><li>Notably, Whereby extends email and chat support at no additional cost to all users.</li><li>For users desiring <strong>enhanced support</strong>, <strong>paid plans</strong> offer added benefits like <strong>technical onboarding</strong> and <strong>customer success management</strong>. </li><li>Furthermore, Whereby caters to specific security and privacy requirements by offering HIPAA compliance as part of its paid support plans.</li></ul><p>When it comes to integrating video conferencing capabilities into applications, choosing the right API is crucial for ensuring a seamless user experience and maintaining brand consistency. The sections below explore these considerations in more detail.</p><h2 id="white-label-solutions-for-brand-consistency">White Label Solutions for Brand Consistency</h2><p>Offering <strong>white-label</strong> capabilities, APIs like <strong>Zoom Video SDK</strong> and <strong>Vonage</strong> allow businesses to integrate video conferencing features without third-party branding. This is crucial for maintaining brand consistency and trust, especially in customer-facing applications where user experience is tied closely to brand perception.</p><h2 id="third-party-integrations-and-extensions">Third-Party Integrations and Extensions</h2><p>Most top-tier video conferencing APIs now support extensive <strong>third-party integrations</strong>, enhancing their core functionalities. 
For instance, <strong>Agora</strong> and <strong>EnableX</strong> provide plugins and extensions for AR effects, additional security layers, and advanced analytics tools. These video conferencing API integrations allow developers to enhance their applications without extensive custom development, offering users a richer, more interactive experience.</p><h2 id="cross-platform-functionality-across-all-devices">Cross-Platform Functionality Across All Devices</h2><p><strong>Cross-platform support</strong> is a significant advantage of modern video conferencing APIs. <strong>LiveKit</strong>, <strong>MirrorFly</strong>, <strong>VideoSDK</strong>, and others offer SDKs that work fluidly across web, desktop, and mobile applications, ensuring that users have a consistent experience regardless of the device or operating system.</p><h2 id="diverse-use-cases-from-telehealth-to-education">Diverse Use Cases: From Telehealth to Education</h2><p>Highlighting specific <strong>use cases</strong> can help potential users visualize the application of these APIs in different contexts. For example, <strong>AWS Chime SDK &amp; VideoSDK</strong> are highly beneficial in telehealth for their high-quality video and secure data transmission, while Zoom Video SDK can transform educational delivery through its interactive and scalable video conferencing capabilities.</p><h2 id="ensuring-low-latency-for-real-time-communication">Ensuring Low Latency for Real-Time Communication</h2><p><strong>Low latency</strong> is critical for real-time communication, particularly in scenarios like live sports broadcasting or financial trading where delays can be costly. 
APIs like <strong>100ms</strong> and <strong>VideoSDK</strong> optimize their networking architecture to ensure ultra-low latency, enabling real-time interactions without lags or disruptions.</p><h2 id="self-hosted-options-for-enhanced-data-control">Self-Hosted Options for Enhanced Data Control</h2><p>For enterprises concerned with data privacy and regulatory compliance, <strong>self-hosted solutions</strong> offered by <strong>MirrorFly</strong> and <strong>LiveKit</strong> provide complete control over the data and infrastructure, aligning with stringent data protection laws.</p><h2 id="easy-embedding-into-existing-applications">Easy Embedding into Existing Applications</h2><p>The ability to <strong>embed video</strong> capabilities easily into existing platforms without extensive coding is a significant feature provided by <strong>Whereby</strong> and <strong>Agora</strong>. These APIs offer straightforward embedding options that can be integrated with a few lines of code, enabling businesses to enhance their websites or apps quickly.</p><h2 id="comparing-video-quality-across-platforms">Comparing Video Quality Across Platforms</h2><p>When evaluating the <strong>best video</strong> quality, it's essential to consider factors like resolution, frame rate, and compression artifacts. Each API has different capabilities, with some like <strong>Zoom</strong> and <strong>VideoSDK</strong> offering HD and Full HD options, ensuring that all visual communications are crisp and clear.</p><h2 id="elevate-your-video-app-with-the-perfect-video-calling-api-integration-today">Elevate Your Video App with the Perfect Video Calling API Integration Today!</h2>
<p>In this exploration of the top 10 video calling SDKs, we've unveiled a plethora of possibilities that might perfectly align with your business needs. Remember, the internet is a treasure trove of information waiting to be discovered.</p><p>Regardless of the <a href="https://www.videosdk.live/signup/">API</a> you opt for, make certain it encapsulates the benefits and features we've discussed above for your video-calling app. And don't forget to delve into the pricing details on the API's dedicated page. With these insights in hand, embark on your implementation journey with enthusiasm and delight!</p>]]></content:encoded></item><item><title><![CDATA[How to build a 1-on-1 Video Chat App in Android with Java & VideoSDK]]></title><description><![CDATA[In this tutorial, you'll learn about 1-on-1 video chat apps, with a step-by-step guide to building an Android video chat app using Java with VideoSDK.]]></description><link>https://www.videosdk.live/blog/1-on-1-video-chat</link><guid isPermaLink="false">63c7bfeabd44f53bde5d11b1</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 16 Jan 2025 05:21:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/06/1_1_android_conf-1.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/1_1_android_conf-1.jpg" alt="How to build a 1-on-1 Video Chat App in Android with Java & VideoSDK"/><p>Remote communication has become an essential mode of communication since the pandemic, and it will likely play a significant role in the future too. Today's mobile applications often include voice or video chat functionality, but such features are extremely complex and time-consuming to build. 
This is where <a href="https://www.videosdk.live">VideoSDK</a> comes into the picture.</p><p>VideoSDK is a platform that allows developers to create rich in-app experiences such as embedding real-time video, voice, real-time recording, live streaming, and real-time messaging.<br><br>VideoSDK is available for <a href="https://en.wikipedia.org/wiki/JavaScript">JavaScript</a>, <a href="https://en.wikipedia.org/wiki/React_(software)">ReactJS</a>, <a href="https://en.wikipedia.org/wiki/React_Native">React-Native</a>, <a href="https://en.wikipedia.org/wiki/IOS">iOS</a>, <a href="https://en.wikipedia.org/wiki/Android_(operating_system)">Android</a>, and <a href="https://en.wikipedia.org/wiki/Flutter_(software)">Flutter</a> for seamless integration. VideoSDK also provides a Pre-built SDK that lets you integrate real-time communication into your application in just 10 minutes!</p><p>Let's create a 1-on-1 video chat/call app using VideoSDK. But first, we need to create a VideoSDK account and generate a token.</p><h2 id="requirement">Requirement</h2><p>First of all, your development environment should meet the following requirements:</p><ul><li><a href="https://www.oracle.com/in/java/technologies/downloads/">Java Development Kit.</a></li><li><a href="https://developer.android.com/studio">Android Studio</a> 3.0 or later.</li><li>Android SDK API Level 21 or higher.</li><li>A mobile device that runs Android 5.0 or later.</li></ul><h2 id="setup-the-project">Setup the project</h2><h3 id="step-1-create-a-new-project">STEP 1: Create a new project</h3><ul><li>Let’s start by creating a new project. In Android Studio, create a new project with an Empty Activity.</li><li>Then, provide a name. We will name it OneToOneDemo.</li></ul><h3 id="step-2-integrate-videosdk">STEP 2: Integrate VideoSDK</h3><ul><li>Add the repository to the <code>settings.gradle</code> file.</li></ul><pre><code class="language-javascript">dependencyResolutionManagement{
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}</code></pre><ul>
<li>Add the following dependencies in the app-level <code>build.gradle</code> file.</li>
</ul>
<pre><code class="language-javascript">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.13'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // other app dependencies
  }</code></pre><blockquote>If your project has set <code>android.useAndroidX = true</code>, then set <code>android.enableJetifier = true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflict.</blockquote><h3 id="step-3-add-permissions-to-your-project">STEP 3: Add permissions to your project</h3>
<p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-javascript">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;</code></pre><h3 id="step-4-getting-started-with-the-code">STEP 4: Getting started with the code!</h3>
<h4 id="a-structure-of-project">(a) Structure of project</h4>
<p>We'll create two screens. The first is the <code>Joining screen</code>, which allows users to create or join a meeting; the second is the <code>Meeting screen</code>, which shows participants in a WhatsApp-style layout.</p><p>Our project structure will look like this.</p><pre><code class="language-javascript">   app
   ├── java
   │    ├── packagename
   │         ├── JoinActivity
   │         ├── MeetingActivity
   ├── res
   │    ├── layout
   │    │    ├── activity_join.xml
   │    │    ├── activity_meeting.xml</code></pre><blockquote>You have to set <code>JoinActivity</code> as Launcher activity.</blockquote><h4 id="b-creating-joining-screen">(b) Creating Joining Screen</h4>
<p>Create a new Activity named <code>JoinActivity</code></p><h4 id="c-creating-ui-for-joining-the-screen">(c) Creating UI for Joining the Screen</h4>
<p>The Joining screen will include:</p><ol><li><strong>Create Button</strong> - Creates a new meeting.</li><li><strong>TextField for MeetingID</strong> - Contains the <code>meetingId</code> you want to join.</li><li><strong>Join Button</strong> - Joins the meeting with the <code>meetingId</code> provided.</li></ol><p>In the <code>/app/res/layout/activity_join.xml</code> file, replace the content with the following.</p><pre><code class="language-javascript">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center"
    android:orientation="vertical"
    tools:context=".JoinActivity"&gt;

    &lt;com.google.android.material.appbar.MaterialToolbar
        android:id="@+id/material_toolbar"
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        app:contentInsetStart="0dp"
        android:background="?attr/colorPrimary"
        app:titleTextColor="@color/white" /&gt;

    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:gravity="center"
        android:orientation="vertical"&gt;


        &lt;Button
            android:id="@+id/btnCreateMeeting"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginBottom="16dp"
            android:text="Create Meeting" /&gt;

        &lt;TextView
            style="@style/TextAppearance.AppCompat.Headline"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="OR" /&gt;

        &lt;com.google.android.material.textfield.TextInputLayout
            style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginVertical="16dp"
            android:hint="Enter Meeting ID"&gt;

            &lt;EditText
                android:id="@+id/etMeetingId"
                android:layout_width="250dp"
                android:layout_height="wrap_content" /&gt;
        &lt;/com.google.android.material.textfield.TextInputLayout&gt;

        &lt;Button
            android:id="@+id/btnJoinMeeting"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Join Meeting" /&gt;
    &lt;/LinearLayout&gt;

&lt;/LinearLayout&gt;</code></pre><h4 id="d-integration-of-create-meeting-api">(d) Integration of Create Meeting API</h4>
<p>You need to create a field <code>sampleToken</code> in <code>JoinActivity</code> that holds the generated token from the <a href="https://app.videosdk.live/api-keys">VideoSDK dashboard</a>. This token will be used in the VideoSDK config as well as for generating the <code>meetingId</code>.</p>
<pre><code class="language-javascript">public class JoinActivity extends AppCompatActivity {

  //Replace with the token you generated from the VideoSDK Dashboard
  private String sampleToken = ""; 

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    //...
  }
}</code></pre><p>On the <strong>Join Button</strong>'s <code>onClick</code> event, we navigate to <code>MeetingActivity</code> with the token and meetingId.</p>
<pre><code class="language-javascript">public class JoinActivity extends AppCompatActivity {

  //Replace with the token you generated from the VideoSDK Dashboard
  private String sampleToken =""; 

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_join);

    final Button btnCreate = findViewById(R.id.btnCreateMeeting);
    final Button btnJoin = findViewById(R.id.btnJoinMeeting);
    final EditText etMeetingId = findViewById(R.id.etMeetingId);

    //set title
    Toolbar toolbar = findViewById(R.id.material_toolbar);
    toolbar.setTitle("OneToOneDemo");
    setSupportActionBar(toolbar);
        
    btnCreate.setOnClickListener(v -&gt; {
      // we will explore this method in the next step
      createMeeting(sampleToken);
    });

    btnJoin.setOnClickListener(v -&gt; {
      Intent intent = new Intent(JoinActivity.this, MeetingActivity.class);
      intent.putExtra("token", sampleToken);
      intent.putExtra("meetingId", etMeetingId.getText().toString());
      startActivity(intent);
    });
  }

  private void createMeeting(String token) {
  }
}</code></pre><p>For the <strong>Create Button</strong>, the <code>createMeeting</code> method generates a <code>meetingId</code> by calling the VideoSDK API and then navigates to <code>MeetingActivity</code> with the token and the generated meetingId.</p>
<pre><code class="language-javascript">public class JoinActivity extends AppCompatActivity {
  //...onCreate

  private void createMeeting(String token) {
      // we will make an API call to VideoSDK Server to get a roomId
      AndroidNetworking.post("https://api.videosdk.live/v2/rooms")
        .addHeaders("Authorization", token) //we will pass the token in the Headers
        .build()
        .getAsJSONObject(new JSONObjectRequestListener() {
            @Override
            public void onResponse(JSONObject response) {
                try {
                    // response will contain `roomId`
                    final String meetingId = response.getString("roomId");

                    // starting the MeetingActivity with received roomId and our sampleToken
                    Intent intent = new Intent(JoinActivity.this, MeetingActivity.class);
                    intent.putExtra("token", sampleToken);
                    intent.putExtra("meetingId", meetingId);
                    startActivity(intent);
                } catch (JSONException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onError(ANError anError) {
                anError.printStackTrace();
                Toast.makeText(JoinActivity.this, anError.getMessage(), Toast.LENGTH_SHORT).show();
            }
        });
  }
}</code></pre><p>Our app is built around audio and video communication, so we need to request the runtime permissions <code>RECORD_AUDIO</code> and <code>CAMERA</code>. We will implement this permission logic in <code>JoinActivity</code>.</p>
<pre><code class="language-javascript">public class JoinActivity extends AppCompatActivity {
  private static final int PERMISSION_REQ_ID = 22;

  private static final String[] REQUESTED_PERMISSIONS = {
    Manifest.permission.RECORD_AUDIO,
    Manifest.permission.CAMERA
  };

  private boolean checkSelfPermission(String permission, int requestCode) {
    if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {
      ActivityCompat.requestPermissions(this, REQUESTED_PERMISSIONS, requestCode);
      return false;
    }
    return true;
  }

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    //... button listeners
    checkSelfPermission(REQUESTED_PERMISSIONS[0], PERMISSION_REQ_ID);
    checkSelfPermission(REQUESTED_PERMISSIONS[1], PERMISSION_REQ_ID);
  }
}</code></pre><blockquote>You'll get an <code>Unresolved reference: MeetingActivity</code> error for now, but don't worry. It will be resolved automatically once you create <code>MeetingActivity</code>.</blockquote><p>We're done with the Joining screen; now it's time to create the participant views on the Meeting screen.</p>
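<p>One gap worth noting in the permission code above: it fires the permission dialogs but never inspects the user's answer. On Android you would additionally override <code>onRequestPermissionsResult</code> and compare each entry of <code>grantResults</code> against <code>PackageManager.PERMISSION_GRANTED</code>. The plain-Java sketch below shows just that check; <code>allGranted</code> is a hypothetical helper name, and the constant is inlined so the logic can run outside an Android project.</p>

```java
// Plain-Java sketch of the grant check normally done in onRequestPermissionsResult.
// PERMISSION_GRANTED mirrors android.content.pm.PackageManager.PERMISSION_GRANTED (0).
class PermissionCheck {
    static final int PERMISSION_GRANTED = 0;

    // Returns true only if every requested permission was granted.
    // Android passes an empty array when the request is interrupted,
    // so an empty result is treated as "not granted".
    static boolean allGranted(int[] grantResults) {
        if (grantResults.length == 0) return false;
        for (int result : grantResults) {
            if (result != PERMISSION_GRANTED) return false;
        }
        return true;
    }
}
```

<p>Inside your <code>onRequestPermissionsResult</code> override, if this check returns <code>false</code> you might show a rationale dialog or keep the user on the Joining screen instead of starting the meeting.</p>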
<h3 id="step-5-creating-meeting-screen">STEP 5: Creating Meeting Screen</h3>
<p>Create a new Activity named <code>MeetingActivity</code>.</p><h4 id="a-creating-the-ui-for-the-meeting-screen">(a) Creating the UI for the Meeting Screen</h4>
<p>In <code>/app/res/layout/activity_meeting.xml</code> file, replace the content with the following.</p><pre><code class="language-javascript">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/mainLayout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:animateLayoutChanges="true"
    android:gravity="center"
    android:orientation="vertical"
    tools:context=".MeetingActivity"&gt;

    &lt;com.google.android.material.appbar.MaterialToolbar
        android:id="@+id/material_toolbar"
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        android:background="@color/black"
        app:contentInsetStart="0dp"
        app:titleTextColor="@color/white"&gt;

        &lt;LinearLayout
            android:id="@+id/meetingLayout"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginLeft="14dp"
            android:layout_marginTop="10dp"
            android:orientation="horizontal"&gt;

            &lt;RelativeLayout
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"&gt;

                &lt;TextView
                    android:id="@+id/txtMeetingId"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:layout_gravity="center"
                    android:fontFamily="sans-serif-medium"
                    android:textColor="@color/white"
                    android:textFontWeight="600"
                    android:textSize="16sp" /&gt;

                &lt;ImageButton
                    android:id="@+id/btnCopyContent"
                    android:layout_width="22dp"
                    android:layout_height="22dp"
                    android:layout_marginLeft="7dp"
                    android:layout_toRightOf="@+id/txtMeetingId"
                    android:backgroundTint="@color/black"
                    android:src="@drawable/ic_outline_content_copy_24" /&gt;

            &lt;/RelativeLayout&gt;

        &lt;/LinearLayout&gt;

        &lt;LinearLayout
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="end"
            android:layout_marginEnd="10dp"&gt;

            &lt;ImageButton
                android:id="@+id/btnSwitchCameraMode"
                android:layout_width="wrap_content"
                android:layout_height="match_parent"
                android:background="@color/black"
                android:contentDescription="Switch Camera mode"
                android:src="@drawable/ic_baseline_flip_camera_android_24" /&gt;

        &lt;/LinearLayout&gt;

    &lt;/com.google.android.material.appbar.MaterialToolbar&gt;

    &lt;FrameLayout
        android:id="@+id/participants_frameLayout"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:background="@color/black"&gt;

        &lt;androidx.cardview.widget.CardView
            android:id="@+id/ParticipantCard"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:layout_marginLeft="12dp"
            android:layout_marginTop="3dp"
            android:layout_marginRight="12dp"
            android:layout_marginBottom="3dp"
            android:backgroundTint="#2B3034"
            android:visibility="gone"
            app:cardCornerRadius="8dp"
            app:strokeColor="#2B3034"&gt;

            &lt;ImageView
                android:layout_width="150dp"
                android:layout_height="150dp"
                android:layout_gravity="center"
                android:src="@drawable/ic_baseline_person_24" /&gt;

            &lt;live.videosdk.rtc.android.VideoView
                android:id="@+id/participantView"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                android:visibility="gone" /&gt;

        &lt;/androidx.cardview.widget.CardView&gt;

        &lt;FrameLayout
            android:layout_width="match_parent"
            android:layout_height="match_parent"&gt;

        &lt;/FrameLayout&gt;

        &lt;androidx.cardview.widget.CardView
            android:id="@+id/LocalCard"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:layout_marginLeft="12dp"
            android:layout_marginTop="3dp"
            android:layout_marginRight="12dp"
            android:layout_marginBottom="3dp"
            android:backgroundTint="#1A1C22"
            app:cardCornerRadius="8dp"
            app:strokeColor="#1A1C22"&gt;

            &lt;ImageView
                android:id="@+id/localParticipant_img"
                android:layout_width="150dp"
                android:layout_height="150dp"
                android:layout_gravity="center"
                android:src="@drawable/ic_baseline_person_24" /&gt;

            &lt;live.videosdk.rtc.android.VideoView
                android:id="@+id/localView"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                android:visibility="gone" /&gt;

        &lt;/androidx.cardview.widget.CardView&gt;

    &lt;/FrameLayout&gt;
    
    &lt;!-- add bottombar here--&gt;

&lt;/LinearLayout&gt;</code></pre><pre><code class="language-xml">&lt;com.google.android.material.bottomappbar.BottomAppBar
        android:id="@+id/bottomAppbar"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:animateLayoutChanges="true"
        android:backgroundTint="@color/black"
        android:gravity="center_horizontal"
        android:paddingVertical="5dp"
        tools:ignore="BottomAppBar"&gt;

        &lt;RelativeLayout
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:paddingStart="16dp"
            android:paddingEnd="16dp"&gt;

            &lt;com.google.android.material.floatingactionbutton.FloatingActionButton
                android:id="@+id/btnLeave"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:contentDescription="Leave Meeting"
                android:src="@drawable/ic_end_call"
                app:backgroundTint="#FF5D5D"
                app:fabSize="normal"
                app:tint="@color/white" /&gt;

            &lt;com.google.android.material.floatingactionbutton.FloatingActionButton
                android:id="@+id/btnMic"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginStart="90dp"
                android:layout_toEndOf="@+id/btnLeave"
                android:contentDescription="Toggle Mic"
                android:src="@drawable/ic_mic_off"
                app:backgroundTint="@color/white"
                app:borderWidth="1dp"
                app:fabSize="normal" /&gt;

            &lt;com.google.android.material.floatingactionbutton.FloatingActionButton
                android:id="@+id/btnWebcam"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginStart="90dp"
                android:layout_toEndOf="@+id/btnMic"
                android:contentDescription="Toggle Camera"
                android:src="@drawable/ic_video_camera_off"
                app:backgroundTint="@color/white"
                app:borderWidth="1dp"
                app:fabSize="normal" /&gt;

        &lt;/RelativeLayout&gt;

    &lt;/com.google.android.material.bottomappbar.BottomAppBar&gt;

</code></pre><blockquote>Copy required icons from <a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example/tree/one-to-one-demo/app/src/main/res/drawable">here</a> and paste them in your project's <code>res/drawable</code> folder.</blockquote><h4 id="b-initializing-the-meeting">(b) Initializing the Meeting</h4>
<p>After getting the token and meetingId from <code>JoinActivity</code>, we need to:</p><ul><li>Step 1: Initialize <strong>VideoSDK</strong></li><li>Step 2: Configure <strong>VideoSDK</strong> with the token.</li><li>Step 3: Initialize the meeting with required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code>, <code>participantId</code>, and a map of <code>CustomStreamTrack</code>.</li><li>Step 4: Join the room with the <code>meeting.join()</code> method.</li></ul><pre><code class="language-java">public class MeetingActivity extends AppCompatActivity {

    private static Meeting meeting;
    private boolean micEnabled = true;
    private boolean webcamEnabled = true;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_meeting);

        // setup toolbar
        Toolbar toolbar = findViewById(R.id.material_toolbar);
        toolbar.setTitle("");
        setSupportActionBar(toolbar);

        // get token and meetingId passed from JoinActivity
        String token = getIntent().getStringExtra("token");
        final String meetingId = getIntent().getStringExtra("meetingId");

        // set participant name
        String localParticipantName = "Alex";

        // Initialize VideoSDK
        VideoSDK.initialize(getApplicationContext());

        // pass the token generated from api server
        VideoSDK.config(token);

        // create a new meeting instance
        meeting = VideoSDK.initMeeting(
                MeetingActivity.this, meetingId, localParticipantName,
                micEnabled, webcamEnabled, null, null
        );

        // join the meeting
        if (meeting != null) meeting.join();

        // display the meetingId
        TextView textMeetingId = findViewById(R.id.txtMeetingId);
        textMeetingId.setText(meetingId);

        // copy meetingId to clipboard
        ((ImageButton) findViewById(R.id.btnCopyContent)).setOnClickListener(v -&gt; copyTextToClipboard(meetingId));
    }

    private void copyTextToClipboard(String text) {
        ClipboardManager clipboard = (ClipboardManager) getSystemService(Context.CLIPBOARD_SERVICE);
        ClipData clip = ClipData.newPlainText("Copied text", text);
        clipboard.setPrimaryClip(clip);

        Toast.makeText(MeetingActivity.this, "Copied to clipboard!", Toast.LENGTH_SHORT).show();
    }

}</code></pre><h3 id="step-6-handle-local-participant-media">STEP 6: Handle Local Participant Media</h3>
<p>We need to implement click handlers for the following <code>Views</code>:</p><ul><li>Mic Button</li><li>Webcam Button</li><li>Switch Camera Button</li><li>Leave Button</li></ul><p>Add the following implementation:</p><pre><code class="language-java">public class MeetingActivity extends AppCompatActivity {

    private FloatingActionButton btnWebcam, btnMic, btnLeave;
    private ImageButton btnSwitchCameraMode;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_meeting);

        // get references to the control buttons
        btnMic = findViewById(R.id.btnMic);
        btnWebcam = findViewById(R.id.btnWebcam);
        btnLeave = findViewById(R.id.btnLeave);
        btnSwitchCameraMode = findViewById(R.id.btnSwitchCameraMode);

        //...

        // register click listeners
        setActionListeners();
    }

    private void setActionListeners() {
        // Toggle mic
        btnMic.setOnClickListener(view -&gt; toggleMic());

        // Toggle webcam
        btnWebcam.setOnClickListener(view -&gt; toggleWebCam());

        // Leave meeting
        btnLeave.setOnClickListener(view -&gt; {
            // this will make the local participant leave the meeting
            meeting.leave();
        });

        // Switch camera
        btnSwitchCameraMode.setOnClickListener(view -&gt; {
            // a participant can switch between the front/rear camera during the meeting
            meeting.changeWebcam();
        });
    }
}</code></pre><pre><code class="language-java">private void toggleMic() {
        if (micEnabled) {
            // this will mute the local participant's mic
            meeting.muteMic();
        } else {
            // this will unmute the local participant's mic
            meeting.unmuteMic();
        }
        micEnabled = !micEnabled;
        // change mic icon according to micEnable status
        toggleMicIcon();
    }

    @SuppressLint("ResourceType")
    private void toggleMicIcon() {
        if (micEnabled) {
            btnMic.setImageResource(R.drawable.ic_mic_on);
            btnMic.setColorFilter(Color.WHITE);
            Drawable buttonDrawable = btnMic.getBackground();
            buttonDrawable = DrawableCompat.wrap(buttonDrawable);
            //the color is a direct color int and not a color resource
            if (buttonDrawable != null) DrawableCompat.setTint(buttonDrawable, Color.TRANSPARENT);
            btnMic.setBackground(buttonDrawable);

        } else {
            btnMic.setImageResource(R.drawable.ic_mic_off);
            btnMic.setColorFilter(Color.BLACK);
            Drawable buttonDrawable = btnMic.getBackground();
            buttonDrawable = DrawableCompat.wrap(buttonDrawable);
            //the color is a direct color int and not a color resource
            if (buttonDrawable != null) DrawableCompat.setTint(buttonDrawable, Color.WHITE);
            btnMic.setBackground(buttonDrawable);
        }
    }</code></pre><pre><code class="language-java">private void toggleWebCam() {
        if (webcamEnabled) {
            // this will disable the local participant webcam
            meeting.disableWebcam();
        } else {
            // this will enable the local participant webcam
            meeting.enableWebcam();
        }
        webcamEnabled = !webcamEnabled;
        // change webCam icon according to webcamEnabled status
        toggleWebcamIcon();
    }

    @SuppressLint("ResourceType")
    private void toggleWebcamIcon() {
        if (webcamEnabled) {
            btnWebcam.setImageResource(R.drawable.ic_video_camera);
            btnWebcam.setColorFilter(Color.WHITE);
            Drawable buttonDrawable = btnWebcam.getBackground();
            buttonDrawable = DrawableCompat.wrap(buttonDrawable);
            //the color is a direct color int and not a color resource
            if (buttonDrawable != null) DrawableCompat.setTint(buttonDrawable, Color.TRANSPARENT);
            btnWebcam.setBackground(buttonDrawable);

        } else {
            btnWebcam.setImageResource(R.drawable.ic_video_camera_off);
            btnWebcam.setColorFilter(Color.BLACK);
            Drawable buttonDrawable = btnWebcam.getBackground();
            buttonDrawable = DrawableCompat.wrap(buttonDrawable);
            //the color is a direct color int and not a color resource
            if (buttonDrawable != null) DrawableCompat.setTint(buttonDrawable, Color.WHITE);
            btnWebcam.setBackground(buttonDrawable);
        }
    }</code></pre><h4 id="a-setting-up-local-participant-view">(a) Setting up Local participant view</h4>
<p>To set up the participant view, we implement the methods of the <code>ParticipantEventListener</code> abstract class and register the listener on the <code>Participant</code> object using its <code>addEventListener()</code> method. <code>ParticipantEventListener</code> has two methods:</p>
<ol>
<li><code>onStreamEnabled</code> - Triggered whenever a participant enables their mic/webcam in the meeting; it receives the enabled <code>Stream</code>.</li>
<li><code>onStreamDisabled</code> - Triggered whenever a participant disables their mic/webcam in the meeting; it receives the disabled <code>Stream</code>.</li>
</ol>
<pre><code class="language-java">public class MeetingActivity extends AppCompatActivity {

    private VideoView localView;
    private VideoView participantView;
    private CardView localCard, participantCard;
    private ImageView localParticipantImg;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_meeting);

        //...

        localCard = findViewById(R.id.LocalCard);
        participantCard = findViewById(R.id.ParticipantCard);
        localView = findViewById(R.id.localView);
        participantView = findViewById(R.id.participantView);
        localParticipantImg = findViewById(R.id.localParticipant_img);

        //...

        // setup local participant view
        setLocalListeners();
    }

  private void setLocalListeners() {
        meeting.getLocalParticipant().addEventListener(new ParticipantEventListener() {
            @Override
            public void onStreamEnabled(Stream stream) {
                if (stream.getKind().equalsIgnoreCase("video")) {
                    VideoTrack track = (VideoTrack) stream.getTrack();
                    localView.setVisibility(View.VISIBLE);
                    localView.addTrack(track);
                    localView.setZOrderMediaOverlay(true);
                    localCard.bringToFront();
                }
            }

            @Override
            public void onStreamDisabled(Stream stream) {
                if (stream.getKind().equalsIgnoreCase("video")) {
                    localView.removeTrack();
                    localView.setVisibility(View.GONE);
                }
            }
        });
    }
}</code></pre><h4 id="b-setting-up-remote-participant-view">(b) Setting up Remote participant view</h4>
<pre><code class="language-java">private final ParticipantEventListener participantEventListener = new ParticipantEventListener() {
        // trigger when participant enabled mic/webcam
        @Override
        public void onStreamEnabled(Stream stream) {
            if (stream.getKind().equalsIgnoreCase("video")) {
                localView.setZOrderMediaOverlay(true);
                localCard.bringToFront();
                VideoTrack track = (VideoTrack) stream.getTrack();
                participantView.setVisibility(View.VISIBLE);
                participantView.addTrack(track);
            }
        }

        // trigger when participant disabled mic/webcam
        @Override
        public void onStreamDisabled(Stream stream) {
            if (stream.getKind().equalsIgnoreCase("video")) {
                participantView.removeTrack();
                participantView.setVisibility(View.GONE);
            }
        }
    };
</code></pre><h4 id="c-handle-meeting-events-manage-participants-view">(c) Handle meeting events &amp; manage participant's view</h4>
<p>Add a <code>MeetingEventListener</code> to listen for events such as Meeting Join/Left and Participant Join/Left.</p>
<pre><code class="language-java">public class MeetingActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_meeting);

        //...

        // handle meeting events
        meeting.addEventListener(meetingEventListener);
    }

    private final MeetingEventListener meetingEventListener = new MeetingEventListener() {
        @Override
        public void onMeetingJoined() {
            // change mic,webCam icon after meeting successfully joined
            toggleMicIcon();
            toggleWebcamIcon();
        }

        @Override
        public void onMeetingLeft() {
            if (!isDestroyed()) {
                Intent intent = new Intent(MeetingActivity.this, JoinActivity.class);
                startActivity(intent);
                finish();
            }
        }

        @Override
        public void onParticipantJoined(Participant participant) {
            // Display local participant as miniView when other participant joined
            changeLocalParticipantView(true);
            Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " joined",
                    Toast.LENGTH_SHORT).show();
            participant.addEventListener(participantEventListener);
        }

        @Override
        public void onParticipantLeft(Participant participant) {
            // Display local participant as largeView when other participant left
            changeLocalParticipantView(false);
            Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " left",
                    Toast.LENGTH_SHORT).show();
        }
    };
}</code></pre><ul>
<li>The <code>changeLocalParticipantView(boolean isMiniView)</code> function first checks whether the video of the local participant is displayed as a MiniView or a LargeView.</li>
<li>If the meeting has only one participant (local participant), then the local participant is displayed as LargeView.</li>
<li>When another participant (other than the local participant) joins, <code>changeLocalParticipantView(true)</code> is called. As a result, the local participant is shown as MiniView, while the other participant is shown as LargeView.</li>
</ul>
<pre><code class="language-java">private void changeLocalParticipantView(boolean isMiniView) {
        if (isMiniView) {
            // show localCard as miniView
            localCard.setLayoutParams(new CardView.LayoutParams(300, 430, Gravity.RIGHT | Gravity.BOTTOM));
            ViewGroup.MarginLayoutParams cardViewMarginParams = (ViewGroup.MarginLayoutParams) localCard.getLayoutParams();
            cardViewMarginParams.setMargins(30, 0, 60, 40);
            localCard.requestLayout();
            // set height-width of localParticipant_img
            localParticipantImg.setLayoutParams(new FrameLayout.LayoutParams(150, 150, Gravity.CENTER));
            participantCard.setVisibility(View.VISIBLE);
        } else {
            // show localCard as largeView
            localCard.setLayoutParams(new CardView.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.MATCH_PARENT));
            ViewGroup.MarginLayoutParams cardViewMarginParams = (ViewGroup.MarginLayoutParams) localCard.getLayoutParams();
            cardViewMarginParams.setMargins(30, 5, 30, 30);
            localCard.requestLayout();
            // set height-width of localParticipant_img
            localParticipantImg.setLayoutParams(new FrameLayout.LayoutParams(400, 400, Gravity.CENTER));
            participantCard.setVisibility(View.GONE);
        }
    }</code></pre><h4 id="d-destroying-everything">(d) Destroying everything</h4>
<p>We need to release resources when the activity is destroyed and the meeting is no longer in use. Override <code>onDestroy</code> with the following code:</p><pre><code class="language-java">@Override
protected void onDestroy() {
        if (meeting != null) {
            meeting.removeAllListeners();
            meeting.getLocalParticipant().removeAllListeners();
            meeting.leave();
            meeting = null;
        }
        if (participantView != null) {
            participantView.setVisibility(View.GONE);
            participantView.releaseSurfaceViewRenderer();
        }

        if (localView != null) {
            localView.setVisibility(View.GONE);
            localView.releaseSurfaceViewRenderer();
        }

        super.onDestroy();
    }</code></pre><blockquote>
<p><code>java.lang.IllegalStateException: This Activity already has an action bar supplied by the window decor. Do not request Window.FEATURE_SUPPORT_ACTION_BAR and set windowActionBar to false in your theme to use a Toolbar instead.</code><br>
If you face this error at runtime, include these lines in your <code>theme.xml</code> file.<br>
<code>&lt;item name="windowActionBar"&gt;false&lt;/item&gt;</code><br>
<code>&lt;item name="windowNoTitle"&gt;true&lt;/item&gt;</code></br></br></br></p>
</blockquote>
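<p>For context, here is a minimal sketch of where those two items typically live in a theme definition. The theme name <code>Theme.MyApp</code> and the parent theme are placeholders; adjust them to match your project's existing theme:</p><pre><code class="language-xml">&lt;!-- res/values/themes.xml (theme name and parent are illustrative) --&gt;
&lt;style name="Theme.MyApp" parent="Theme.MaterialComponents.DayNight.NoActionBar"&gt;
    &lt;!-- disable the window-decor action bar so the Toolbar can be used instead --&gt;
    &lt;item name="windowActionBar"&gt;false&lt;/item&gt;
    &lt;item name="windowNoTitle"&gt;true&lt;/item&gt;
&lt;/style&gt;</code></pre>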
<h3 id="app-demo">App Demo</h3>
<p>Tadaa!! Our app is ready. Easy, isn't it? <br>Install and run the app on two different devices and make sure they are connected to the internet.</p><blockquote>This app supports only 2 participants; it does not manage more than 2. If you want to handle more than 2 participants, check out our <strong>Group call</strong> example <a href="https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example">HERE</a>.</blockquote><h2 id="conclusion">Conclusion</h2><ul><li>In this blog, we have learned what VideoSDK is, how to obtain an access token from the VideoSDK <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">Dashboard</a>, and how to create a one-to-one <a href="https://www.contus.com/blog/build-a-live-video-chat-app/" rel="noopener noreferrer">video chat app</a> with the VideoSDK.</li><li>Go ahead and create advanced features like screen-sharing, chat, and others. Browse our <a href="https://docs.videosdk.live/">Documentation</a>.</li><li>To see the full implementation of the app, check out this <a href="https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example/tree/one-to-one-demo">GitHub Repository</a>.</li><li>If you face any problems or have questions, please join our <a href="https://discord.gg/Gpmj6eCq5u">Discord Community</a>.</li></ul>
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</link></meta></meta></head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h2 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h2>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>
		</div>
		<svg viewBox="0 0 1024 1024" class="absolute left-1/2 top-1/2 -z-10 h-[64rem] w-[64rem] -translate-x-1/2 [mask-image:radial-gradient(closest-side,white,transparent)]" aria-hidden="true">
			
			<defs>
				<radialGradient id="827591b1-ce8c-4110-b064-7cb85a0b1217">
					<stop stop-color="#CB4371"/>
					<stop offset="0.5" stop-color="#AE49B0"/>
					<stop offset="1" stop-color="#493BB9"/>
				</radialGradient>
			</defs>
		</svg>
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="more-android-resources">More Android Resources</h2>
<ul><li><a href="https://www.videosdk.live/blog/android-java-interactive-live-streaming">Android Live Streaming app with Java</a></li><li><a href="https://www.videosdk.live/blog/1-to-1-video-chat-app-on-android-using-videosdk">Android One-To-One video calling app with Kotlin</a></li><li><a href="https://www.videosdk.live/blog/android-live-streaming">Android Interactive Live Streaming App with Kotlin</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example">Android Video Conferencing App with Kotlin example code</a></li><li><a href="https://github.com/videosdk-live/videosdk-hls-android-kotlin-example">Android Interactive Live Streaming App with Kotlin example code</a></li><li><a href="https://github.com/videosdk-live/videosdk-hls-android-java-example">Android Interactive Live Streaming App with Java example code</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example">Android Video Conferencing App with Java example code</a></li></ul>]]></content:encoded></item><item><title><![CDATA[RTMP vs HLS | Choosing the Best Streaming Protocol for Your Needs?]]></title><description><![CDATA[RTMP offers low latency streaming ideal for live broadcasts, while HLS provides better compatibility and adaptability for varying network conditions, making it suitable for a wider audience.]]></description><link>https://www.videosdk.live/blog/rtmp-vs-hls</link><guid isPermaLink="false">65a520f66c68429b5fdf0f3d</guid><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Wed, 15 Jan 2025 11:35:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/RTMP-vs-HLS.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-the-streaming-protocol">What is the Streaming Protocol?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/RTMP-vs-HLS.png" alt="RTMP vs HLS | Choosing the Best Streaming Protocol for Your Needs?"/><p>The Streaming Protocol governs the 
transmission of multimedia content over the internet, ensuring real-time delivery. Common protocols include RTMP (Real-Time Messaging Protocol) and HLS (<a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HTTP Live Streaming</a>). RTMP excels in low-latency live video streaming, while HLS breaks content into small chunks, enabling adaptive streaming. These protocols collectively enhance the user experience by optimizing playback and minimizing buffering for diverse streaming needs.</p><h2 id="what-is-rtmpreal-time-messaging-protocol">What is RTMP (Real-Time Messaging Protocol)?</h2><p>RTMP, or <a href="https://www.videosdk.live/blog/what-is-rtmp">Real-Time Messaging Protocol</a>, is a proprietary protocol developed by Adobe Systems for high-performance audio, video, and data transmission over the Internet. RTMP was initially designed for streaming audio, video, and data between a Flash player and a server. It gained popularity for its low latency and real-time capabilities, making it suitable for interactive and live-streaming applications.</p><h3 id="how-does-rtmp-work-in-video-streaming">How Does RTMP Work in Video Streaming?</h3><p>RTMP (Real-Time Messaging Protocol) is used for streaming audio, video, and data over the internet. It operates by establishing a persistent connection between a server and a client, allowing for low-latency transmission of media. RTMP divides the stream into small packets, ensuring efficient delivery and reducing buffering. 
It's commonly used in live broadcasts and interactive applications due to its ability to provide real-time communication, making it ideal for platforms like Twitch and Facebook Live.</p><h2 id="advantages-and-disadvantages-of-rtmp">Advantages and disadvantages of RTMP:</h2><h3 id="advantages">Advantages:</h3><ul><li>Low-latency streaming.</li><li>Real-time communication capabilities.</li><li>Widely supported by Flash players.</li></ul><h3 id="disadvantages">Disadvantages:</h3><ul><li>Lack of native browser support.</li><li>Security concerns, as RTMP doesn't encrypt data by default.</li></ul><h2 id="what-is-hlshttp-live-streaming">What is HLS (HTTP Live Streaming)?</h2><p>HLS, or <a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HTTP Live Streaming</a>, is an adaptive streaming protocol developed by Apple. It breaks down video files into small chunks and delivers them over standard HTTP protocols. HLS was introduced by Apple in 2009 to support video streaming on iOS devices. It has since become a widely adopted protocol due to its compatibility with various devices and browsers.</p><h3 id="how-does-hls-work-in-live-streaming">How Does HLS Work in Live Streaming?</h3><p>HTTP Live Streaming (HLS) breaks video content into small, manageable chunks. These chunks are delivered over HTTP, allowing adaptive streaming. 
A manifest file, usually in M3U8 format, contains information about the chunks, enabling devices to adapt to varying network conditions and dynamically switch between different quality levels for smooth playback.</p><h2 id="advantages-and-disadvantages-of-hls">Advantages and disadvantages of HLS:</h2><h3 id="advantages-1">Advantages:</h3><ul><li>Broad compatibility with devices and browsers.</li><li>Effective adaptive streaming for varying network conditions.</li><li>Improved security as data is transmitted over standard HTTP(S).</li></ul><h3 id="disadvantages-1">Disadvantages:</h3><ul><li>Higher latency compared to RTMP.</li><li>Increased complexity in server-side implementations.</li></ul><h2 id="hls-vs-rtmp-a-direct-comparision">HLS vs RTMP: A Direct Comparison</h2><p><strong>Comparison of streaming quality: </strong>While RTMP offers low-latency streaming suitable for real-time applications, HLS provides adaptive streaming for improved video quality under varying network conditions.</p><p><strong>Latency considerations for live streaming: </strong>RTMP excels in low-latency scenarios, making it ideal for live broadcasts and interactive applications. HLS, on the other hand, has slightly higher latency due to its chunk-based delivery system.</p><p><strong>Device and browser compatibility: </strong>HLS boasts broad compatibility across devices and browsers, making it a versatile choice for reaching a diverse audience. RTMP, however, may require additional plugins for certain browsers.</p><p><strong>Adaptive streaming capabilities: </strong>HLS stands out with its adaptive bitrate streaming, adjusting video quality based on network conditions. RTMP provides consistent quality but may struggle with varying bandwidth.</p><p><strong>Security features: </strong>HLS, delivered over standard HTTP(S), offers improved security. 
RTMP, lacking default encryption, may raise security concerns, but encryption can be implemented separately.</p><p>Below is the comparison table between RTMP (Real-Time Messaging Protocol) and HLS (HTTP Live Streaming).</p><table>
<thead>
<tr>
<th>Feature</th>
<th>RTMP</th>
<th>HLS</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Protocol Type</strong></td>
<td>Proprietary protocol developed by Adobe.</td>
<td>HTTP-based protocol developed by Apple.</td>
</tr>
<tr>
<td><strong>Streaming Method</strong></td>
<td>Real-time streaming.</td>
<td>Adaptive streaming with chunks.</td>
</tr>
<tr>
<td><strong>Latency</strong></td>
<td>Lower latency (2-3 seconds typically).</td>
<td>Higher latency (10-30 seconds typically).</td>
</tr>
<tr>
<td><strong>Compatibility</strong></td>
<td>Widely supported in Flash-based players.</td>
<td>Widely supported on various platforms and devices (HTML5, iOS, Android, etc.).</td>
</tr>
<tr>
<td><strong>Firewall/Proxy Friendly</strong></td>
<td>May encounter issues with firewalls/proxies.</td>
<td>Generally more firewall/proxy-friendly as it uses standard HTTP ports (80, 443).</td>
</tr>
<tr>
<td><strong>Adaptive Bitrate (ABR)</strong></td>
<td>Limited support for ABR.</td>
<td>Designed for ABR with multiple quality levels and adaptive streaming.</td>
</tr>
<tr>
<td><strong>Live Streaming Focus</strong></td>
<td>Primarily designed for live streaming.</td>
<td>Supports both live and on-demand streaming.</td>
</tr>
<tr>
<td><strong>Encoding Overhead</strong></td>
<td>Lower encoding overhead.</td>
<td>Higher encoding overhead due to multiple bitrate renditions.</td>
</tr>
<tr>
<td><strong>Supported Devices</strong></td>
<td>Historically Flash-based, limited on mobile.</td>
<td>Widespread support on various devices, including mobile and smart TVs.</td>
</tr>
<tr>
<td><strong>Playback Continuity</strong></td>
<td>More susceptible to interruptions and buffering.</td>
<td>More robust in handling network fluctuations with adaptive streaming.</td>
</tr>
<tr>
<td><strong>Use Cases</strong></td>
<td>Older live streaming scenarios, interactive applications.</td>
<td>Mainly used for on-demand video streaming, live events, and broadcasts.</td>
</tr>
<tr>
<td><strong>Development Status</strong></td>
<td>Adobe has officially deprecated RTMP.</td>
<td>HLS is widely adopted and continues to be a standard for HTTP-based streaming.</td>
</tr>
<tr>
<td><strong>Security</strong></td>
<td>Limited security features.</td>
<td>Improved security with HTTPS transport.</td>
</tr>
</tbody>
</table>
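<p>To make the chunk-based delivery concrete, here is a simplified example of an HLS media playlist (an M3U8 manifest). The segment names and durations below are illustrative only, not taken from any real stream:</p><pre><code class="language-xml">#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.0,
segment0.ts
#EXTINF:6.0,
segment1.ts
#EXTINF:6.0,
segment2.ts
#EXT-X-ENDLIST</code></pre><p>The player fetches this playlist over plain HTTP, downloads the listed segments in order, and, with a master playlist on top, can switch between renditions of different bitrates as network conditions change.</p>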
<h2 id="use-cases">Use Cases:</h2><p><strong>Common scenarios where RTMP excels:</strong></p><ul><li>Real-time applications like live gaming and video conferencing.</li><li>Low-latency requirements for interactive streaming.</li></ul><p><strong>Situations where HLS is the preferred choice:</strong></p><ul><li>Wide distribution across diverse devices and browsers.</li><li>Adaptive streaming for varying network conditions.</li></ul><h2 id="which-streaming-protocol-is-best-for-you">Which Streaming Protocol is Best for You?</h2><p><strong>Factors to consider when choosing between RTMP and HLS:</strong></p><ul><li>Latency requirements.</li><li>Target audience and device/browser compatibility.</li><li>Security considerations.</li><li>Adaptive streaming needs.</li></ul><p><strong>Overview of emerging trends in streaming technology: </strong>Emerging trends like WebRTC and low-latency protocols are shaping the future of streaming, impacting the decision-making process.</p><p><strong>Addressing the evolving needs of the online audience: </strong>With the rise of mobile devices and diverse network conditions, choosing a protocol that adapts to changing demands is essential.</p><h2 id="the-role-of-videosdk-in-enhancing-streaming">The Role of VideoSDK in Enhancing Streaming</h2><p><a href="https://www.videosdk.live/">VideoSDK </a>offers real-time audio-video SDKs, providing flexibility, scalability, and control for seamless integration of <a href="https://www.videosdk.live/audio-video-conferencing">audio-video conferencing</a> and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into web and mobile apps.</p><h3 id="features-that-make-videosdk-stand-out">Features that make VideoSDK stand out:</h3><ul><li>Real-time capabilities for interactive applications.</li><li>Compatibility with both RTMP and HLS for versatile streaming options.</li></ul><p><em>Have questions about integrating HLS &amp; LL-HLS and VideoSDK? 
Our team offers </em><a href="https://www.videosdk.live/contact?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic"><em>expert advice</em></a><em> tailored to your unique needs. Unlock the full potential—</em><a href="https://www.videosdk.live/blog/what-is-http-live-streaming?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic"><em>sign up </em></a><em>now to access resources and join our </em><a href="https://discord.com/invite/Qfm8j4YAUJ?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic"><em>developer community</em></a><em>. Schedule a demo to see features in action and discover how our solutions meet your streaming app needs.</em></p><h2 id="does-videosdk-support-hls-ll-http-live-streaming">Does VideoSDK support HLS &amp; LL-HTTP live streaming?</h2><p>Yes, VideoSDK supports RTMP (Real-Time Messaging Protocol) and HLS (HTTP Live Streaming), offering flexibility to choose the streaming protocol that best fits your application's requirements.</p><h2 id="how-can-i-integrate-hls-with-videosdk-for-my-application">How can I integrate HLS with VideoSDK for my application?</h2><p>Integrating HLS with VideoSDK is a straightforward process. The VideoSDK provides comprehensive documentation and code examples to guide you through the integration. You can follow the step-by-step instructions to seamlessly incorporate HLS streaming into your application.</p><h2 id="is-it-possible-to-use-rtmp-with-videosdk-for-my-live-streaming-needs">Is it possible to use RTMP with VideoSDK for my live-streaming needs?</h2><p>Absolutely! VideoSDK is designed to offer flexibility, and this includes support for RTMP streaming. 
The documentation includes clear instructions and code snippets to help you integrate RTMP seamlessly into your application, ensuring a smooth and reliable live-streaming experience.</p><h2 id="can-i-use-both-hls-and-rtmp-within-the-same-application-with-videosdk">Can I use both HLS and RTMP within the same application with VideoSDK?</h2><p>Yes, you can! VideoSDK is versatile and allows you to choose the streaming protocol that best suits your requirements. Whether you prefer the adaptability of HLS or the low-latency features of RTMP, VideoSDK provides the framework to integrate both protocols within the same application.</p><h2 id="what-are-the-main-differences-between-rtmp-and-hls">What are the main differences between RTMP and HLS?</h2><p>RTMP (Real-Time Messaging Protocol) is a streaming protocol for live content delivery, while HLS (HTTP Live Streaming) is a protocol that uses HTTP for on-demand video streaming, enhancing compatibility.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate VideoSDK with Expo (React Native)?]]></title><description><![CDATA[Learn how to seamlessly integrate the VideoSDK with your Expo (React Native) project and add real-time video communication features to your mobile application.]]></description><link>https://www.videosdk.live/blog/integrate-videosdk-with-expo-react-native</link><guid isPermaLink="false">664b0fd020fab018df10e5b2</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 15 Jan 2025 09:31:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/05/Integrate-Video-SDK-with-Expo--React-Native--1.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/Integrate-Video-SDK-with-Expo--React-Native--1.jpg" alt="How to Integrate VideoSDK with Expo (React Native)?"/><p>Integrating a Video SDK with Expo (React Native) 
can be a powerful way to add real-time video communication features to your mobile application. In this guide, we'll walk you through the process of integrating the VideoSDK with your Expo project, step by step. By the end of this tutorial, you'll have a solid understanding of how to set up and configure the VideoSDK, and use its features in your Expo app.</p><h2 id="getting-started">Getting Started</h2><p>This integration builds on the capabilities that the <a href="https://www.videosdk.live/">VideoSDK</a> offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your  <a href="https://app.videosdk.live/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token authorizes your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li></ul><h3 id="project-structure">Project Structure</h3><p>Directory Structure</p><pre><code class="language-js">   root
   ├── node_modules
   ├── App.js
   ├── api.js
</code></pre><h3 id="install-videosdk-config">Install the VideoSDK</h3><p>Before going into the details, you need to set up VideoSDK within your project. Whether you install it with NPM or Yarn depends on your project's tooling.</p><p>For NPM</p><pre><code class="language-js">npm install "@videosdk.live/react-native-sdk"  "@videosdk.live/react-native-incallmanager"
</code></pre><p>For Yarn</p><pre><code class="language-js">yarn add "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"
</code></pre><h2 id="integrating-react-native-videosdk-with-expo">Integrating React Native VideoSDK with Expo</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/react-native-with-expo.png" class="kg-image" alt="How to Integrate VideoSDK with Expo (React Native)?" loading="lazy" width="1940" height="1206"/></figure><p>This guide details integrating VideoSDK seamlessly into your Expo project. Expo is a popular framework for building cross-platform mobile applications using React Native.</p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-text">Our SDK utilizes native code and cannot be directly used in the Expo Go app. However, you can leverage the <code spellcheck="false" style="white-space: pre-wrap;">expo-dev-client</code> library to run your Expo app with a development build for testing purposes.</div></div><h3 id="creating-a-new-expo-project">Creating a New Expo Project</h3><p>Open your terminal and run the following command to create a new Expo project:</p><pre><code class="language-js">npx create-expo-app my-video-app
</code></pre><ul><li>Replace  <code>my-video-app</code>  with your desired project name.</li></ul><p>Navigate to your project directory:</p><pre><code class="language-js">cd my-video-app
</code></pre><h3 id="installing-the-videosdk">Installing the VideoSDK</h3><p>Use the following commands to install the VideoSDK and its dependencies:</p><pre><code class="language-js">npx expo install @videosdk.live/react-native-sdk
npx expo install @videosdk.live/react-native-webrtc
npx expo install @videosdk.live/react-native-incallmanager
npx expo install @videosdk.live/react-native-foreground-service
npx expo install @videosdk.live/expo-config-plugin
npx expo install @config-plugins/react-native-webrtc
</code></pre><h3 id="adding-configuration-plugins">Adding Configuration Plugins</h3><p>Update your  <code>app.json</code>  file to include the configuration plugins:</p><pre><code class="language-js">{
  "expo": {
    // ...
    "plugins": [
      "@videosdk.live/expo-config-plugin",
      [
        "@config-plugins/react-native-webrtc",
        {
          "cameraPermission": "Allow $(PRODUCT_NAME) to access your camera",
          "microphonePermission": "Allow $(PRODUCT_NAME) to access your microphone"
        }
      ]
    ]
  }
}
</code></pre><p>This adds the  <code>@videosdk.live/expo-config-plugin</code>  and  <code>@config-plugins/react-native-webrtc</code>  plugins.<br/>The  <code>@config-plugins/react-native-webrtc</code>  plugin configuration optionally allows you to customize permission request messages for iOS.</p><h3 id="updating-native-folders">Updating Native Folders</h3><p>Run the following command to update the native folders with the added plugins:</p><pre><code class="language-js">npx expo prebuild
</code></pre><h3 id="troubleshooting-expo-50-compatibility">Troubleshooting: Expo 50+ Compatibility</h3><p>Expo versions above 50 use  <code>event-target-shim@5</code>, which conflicts with  <code>react-native-webrtc</code>'s dependency on  <code>event-target-shim@6</code>. To resolve this:</p><p>If the project doesn't have a <code>metro.config.js</code> file, create one at the project root and paste in the code below.</p><pre><code class="language-js">const { getDefaultConfig } = require("expo/metro-config");
const resolveFrom = require("resolve-from");

/** @type {import('expo/metro-config').MetroConfig} */
const config = getDefaultConfig(__dirname);

config.resolver.resolveRequest = (context, moduleName, platform) =&gt; {
  if (
    // If the bundle is resolving "event-target-shim" from a module that is part of "react-native-webrtc".
    moduleName.startsWith("event-target-shim") &amp;&amp;
    context.originModulePath.includes("@videosdk.live/react-native-webrtc")
  ) {
    const updatedModuleName = moduleName.endsWith("/index")
      ? moduleName.replace("/index", "")
      : moduleName;

    // Resolve event-target-shim relative to the react-native-webrtc package to use v6.
    // React Native requires v5 which is not compatible with react-native-webrtc.
    const eventTargetShimPath = resolveFrom(
      context.originModulePath,
      updatedModuleName
    );

    return {
      filePath: eventTargetShimPath,
      type: "sourceFile",
    };
  }

  // Ensure you call the default resolver.
  return context.resolveRequest(context, moduleName, platform);
};

module.exports = config;
</code></pre><h3 id="registering-services">Registering Services</h3><p>In your project's main  <code>App.js</code>  file, register the VideoSDK services:</p><pre><code class="language-js">import { register } from "@videosdk.live/react-native-sdk";
import { registerRootComponent } from "expo";

// Register VideoSDK services before app component registration.
register();
// `App` is defined below via a hoisted function declaration, so it already
// exists when this call runs.
registerRootComponent(App);

export default function App() {}
</code></pre><p>After installing the VideoSDK in your Expo project, you can start using its features to add real-time video communication to your app. The VideoSDK provides a comprehensive set of APIs and components that allow you to create, join, and manage video meetings, as well as control various aspects of the meeting experience.</p><h2 id="essential-steps-to-implement-the-video-calling-functionality">Essential Steps to Implement the Video Calling Functionality<br/></h2><h3 id="step-1-get-started-with-apijs">Step 1: Get started with api.js</h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples">videosdk-rtc-api-server-examples</a> or directly from the  <a href="https://app.videosdk.live/api-keys">VideoSDK Dashboard</a> for developers.</p><pre><code class="language-js">export const token = "&lt;Generated-from-dashboard&gt;";
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });

  const { roomId } = await res.json();
  return roomId;
};
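
// Hypothetical client-side sanity check: illustrative only, not part of the
// VideoSDK API. Room ids look like "abcd-efgh-ijkl", so a quick shape check
// before joining can catch obvious typos early.
function looksLikeRoomId(id) {
  return /^[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}$/.test(String(id));
}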
</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components">Step 2: Wireframe App.js with all the components</h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the  <strong>Context Provider </strong>and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value  <code>config</code>  and  <code>token</code>  as a prop. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join, leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>,  <strong>webcamStream</strong>,  <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the  <strong>App.js</strong>  file.</p><pre><code class="language-js">import React, { useState } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) =&gt; {
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}&gt;
      &lt;MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      &gt;
        &lt;MeetingView /&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  ) : (
    &lt;JoinScreen getMeetingId={getMeetingId} /&gt;
  );
}
</code></pre><h3 id="step-3-implement-join-screen">Step 3: Implement Join Screen</h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><pre><code class="language-js">function JoinScreen(props) {
  const [meetingVal, setMeetingVal] = useState("");
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Create Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      /&gt;
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() =&gt; {
          props.getMeetingId(meetingVal);
        }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Join Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/SafeAreaView&gt;
  );
}
</code></pre><h3 id="step-4-implement-controls">Step 4: Implement Controls</h3><p>The next step is to create a  <code>ControlsContainer</code>  component to manage features such as joining or leaving a meeting and enabling or disabling the webcam/mic.</p><p>In this step, the  <code>useMeeting</code>  hook is utilized to acquire all the required methods, such as  <code>join()</code>,  <code>leave()</code>,  <code>toggleWebcam</code>, and  <code>toggleMic</code>.</p><pre><code class="language-js">const Button = ({ onPress, buttonText, backgroundColor }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: "center",
        alignItems: "center",
        padding: 12,
        borderRadius: 4,
      }}
    &gt;
      &lt;Text style={{ color: "white", fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

function ControlsContainer({ join, leave, toggleWebcam, toggleMic }) {
  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
          join();
        }}
        buttonText={"Join"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      /&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>ControlsContainer Component</p><pre><code class="language-js">function ParticipantList() {
  return null;
}
function MeetingView() {
  const { join, leave, toggleWebcam, toggleMic, meetingId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {meetingId ? (
        &lt;Text style={{ fontSize: 18, padding: 12 }}&gt;
          Meeting Id: {meetingId}
        &lt;/Text&gt;
      ) : null}
      &lt;ParticipantList /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>MeetingView Component</p><h3 id="step-5-render-participant-list">Step 5: Render Participant List</h3><p>After implementing the controls, the next step is to render the joined participants.</p><p>You can get all the joined  <code>participants</code>  from the  <code>useMeeting</code>  Hook.</p><pre><code class="language-js">function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length &gt; 0 ? (
    &lt;FlatList
      data={participants}
      renderItem={({ item }) =&gt; {
        return &lt;ParticipantView participantId={item} /&gt;;
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 20 }}&gt;Press the Join button to enter the meeting.&lt;/Text&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>ParticipantList Component</p><pre><code class="language-js">function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { join, leave, toggleWebcam, toggleMic, participants } = useMeeting({});
  const participantsArrId = [...participants.keys()];

  return (
    &lt;View style={{ flex: 1 }}&gt;
      &lt;ParticipantList participants={participantsArrId} /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>MeetingView Component</p><h3 id="step-6-handling-participants-media">Step 6: Handling Participant's Media</h3><p>Before Handling the Participant's Media, you need to understand a couple of concepts.</p><h4 id="a-useparticipant-hook">[a] <code>useParticipant</code> Hook</h4><p>The  <code>useParticipant</code>  hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take  <code>participantId</code>  as argument.</p><pre><code class="language-js">const { webcamStream, webcamOn, displayName } = useParticipant(participantId);
</code></pre><p>useParticipant Hook Example</p><h4 id="b-mediastream-api">[b] MediaStream API</h4><p>The MediaStream API is beneficial for adding a MediaTrack to the  <code>RTCView</code>  component, enabling the playback of audio or video.</p><pre><code class="language-js">&lt;RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/&gt;
</code></pre><p>MediaStream API Example</p><h4 id="c-rendering-participant-media">[c] Rendering Participant Media</h4><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);

  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>We hope this guide has been helpful in getting you started with the VideoSDK in your Expo app. If you have any further questions or need additional support, be sure to check out the VideoSDK documentation and community resources.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/react-native-expo-setup"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Start a Video &amp; Audio Call in React Native - Video SDK Docs | Video SDK</div><div class="kg-bookmark-description">Build customizable real-time video &amp; audio calling applications in React Native Android SDK using Video SDK add live Video &amp; Audio conferencing to your applications.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="How to Integrate VideoSDK with Expo (React Native)?"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="How to Integrate VideoSDK with Expo (React Native)?" onerror="this.style.display = 'none'"/></div></a></figure><h2 id="conclusion">Conclusion</h2><p>In this guide, we have covered the steps to integrate the VideoSDK with your Expo (React Native) project. By following these steps, you will be able to add real-time video communication features to your mobile application, allowing your users to connect and collaborate in new ways.</p><p>By integrating the VideoSDK, you can provide your users with a powerful and flexible video communication solution that is easy to set up and customize. With features like screen sharing, recording, and moderation tools, you can create a meeting experience that meets the needs of your users and your business.</p><p>Remember, VideoSDK offers a generous <strong>free tier</strong> that includes <strong>10,000 minutes</strong>, allowing you to experiment and build your video call application without initial investment. Start your  <a href="https://www.videosdk.live/signup"><strong>free trial today</strong></a>  and unlock the potential of streaming in your React Native app!</p>]]></content:encoded></item><item><title><![CDATA[What is SIP Connect Protocol? How it Works with VideoSDK?]]></title><description><![CDATA[Explore how SIP protocol transforms industries like telehealth and customer support, and get step-by-step directions on using VideoSDK's SIP Connect feature.]]></description><link>https://www.videosdk.live/blog/sip-connect</link><guid isPermaLink="false">66505ad620fab018df10e6a3</guid><category><![CDATA[SIP Protocol]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 15 Jan 2025 09:30:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/05/SIP-Connect-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/05/SIP-Connect-1.png" alt="What is SIP Connect Protocol? How it Works with VideoSDK?"/><p>The integration of various platforms in communication technology is essential for enhancing user experience and operational efficiency. It's a necessity, and that's why we need game-changing technology like the SIP protocol, which is redefining how we interact.</p><p>SIP plays a pivotal role in integration, enabling seamless transitions between voice, video, and messaging applications. 
With this article, we explore the <a href="https://docs.videosdk.live/javascript/guide/sip-connect">SIP Connect</a> feature and what it makes possible for developers building communication applications.</p><h2 id="what-is-sip-connect-protocol">What is SIP Connect Protocol?</h2><p><a href="https://www.videosdk.live/developer-hub/sip">SIP</a> is a signaling protocol that facilitates the initiation, management, and termination of real-time multimedia communication sessions. It is widely used for voice calls, video conferencing, and messaging, making it a cornerstone of unified communication systems. It enables users to connect instantly and clearly and enhances collaboration across various devices and platforms.</p><h2 id="what-is-the-technology-behind-sip-connect">What is the Technology Behind SIP Connect?</h2><p>Let’s understand the main components of this technology and how SIP works:</p><h3 id="pstnpublic-switched-telephone-network">PSTN - Public Switched Telephone Network</h3><p><a href="https://www.twilio.com/docs/glossary/what-is-pstn">PSTN</a> is the conventional landline telephone system that utilizes circuit-switched networks to transmit voice calls. It depends on physical copper wires to establish connections between callers, which makes it a dependable but expensive communication method.</p><p>PSTN functions on a centralized infrastructure managed by telecommunications companies, providing stable voice quality but limited flexibility in terms of features and scalability.</p><h3 id="voip">VoIP</h3><p><a href="https://www.nextiva.com/blog/what-is-voip.html">Voice over Internet Protocol (VoIP)</a>, on the other hand, is a technology that allows voice communication over the Internet, eliminating the necessity for traditional phone lines. 
VoIP converts voice signals into digital data packets that are transmitted over IP networks, offering a more cost-effective and versatile communication solution.</p><p>One of the key advantages of using VoIP with SIP (Session Initiation Protocol) is the ability to integrate third-party PSTN providers like Twilio for making calls. This enables businesses to leverage the existing PSTN infrastructure while benefiting from the flexibility and cost-efficiency of VoIP technology.</p><h2 id="codec-switching">Codec Switching</h2><p>One of the advanced functionalities of SIP is <a href="https://www.videosdk.live/blog/what-is-codec-switching">codec switching</a>, which allows for the dynamic selection of audio and video codecs during a call. This capability is crucial for optimizing the quality of communication based on network conditions. By adjusting the codec in real time, SIP can ensure that users experience the best possible audio and video quality, even in varying bandwidth scenarios.</p><h2 id="how-to-implement-sip-connect-with-videosdk">How to Implement SIP Connect with VideoSDK?</h2><p>With VideoSDK's SIP Connect feature, you can enable your users to join VideoSDK meetings via VOIP using third-party service providers. This allows for establishing a bridge between participants using our client SDKs and those joining via SIP, enhancing the accessibility and connectivity of your video meetings.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/Videosdk-s-SIP-diagram.png" class="kg-image" alt="What is SIP Connect Protocol? How it Works with VideoSDK?" loading="lazy" width="1122" height="530"/></figure><h3 id="getting-sip-credentials">Getting SIP Credentials</h3><p>To use SIP Connect Protocol, you will first need to generate the credentials for the SIP. 
Go to the <a href="https://app.videosdk.live/">VideoSDK dashboard</a> and, under the <a href="https://app.videosdk.live/api-keys">API Keys</a> section, enable SIP for the desired API key. Afterward, you will be presented with the SIP username and password needed to establish a connection with our SIP servers.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/SIP-credentials-by-VideoSDK-1.gif" class="kg-image" alt="What is SIP Connect Protocol? How it Works with VideoSDK?" loading="lazy" width="1920" height="1080"/></figure><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">?</div><div class="kg-callout-text">This feature is currently available in US and EU regions only. If you want to enable SIP for another region, please contact us at <a href="mailto:support@videosdk.live">support@videosdk.live</a></div></div><h3 id="use-sip-to-join-an-individual-meeting">Use SIP to join an individual meeting</h3><p>Once you have the credentials, you can use a softphone like <a href="https://www.linphone.org/">Linphone</a> or <a href="https://www.zoiper.com/">Zoiper</a>, or a third-party service provider like Twilio, to connect with a VideoSDK meeting by initiating a SIP call to <code>sip:&lt;meeting-id&gt;@sip.videosdk.live</code>.</p><p>For example, if you want to connect to the <code>abcd-abcd-abcd</code> meeting, then the SIP address will look like <code>sip:abcd-abcd-abcd@sip.videosdk.live</code>.</p><h3 id="integration-with-twilio-pstn">Integration with Twilio PSTN</h3><p>To utilize third-party PSTN (Public Switched Telephone Network) providers like Twilio for calls via SIP (Session Initiation Protocol), follow these steps:</p><ol><li>Log in to your Twilio account and navigate to the Phone Numbers section</li><li>Purchase a phone number and access its configuration settings</li><li>Set up a Webhook handler that will initiate a call to the VideoSDK SIP endpoint 
whenever an incoming call is received on the selected phone number</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/Twilio-PSTN.png" class="kg-image" alt="What is SIP Connect Protocol? How it Works with VideoSDK?" loading="lazy" width="2000" height="747"/></figure><h3 id="example-for-creating-webhook-using-express">Example for Creating Webhook using Express</h3><pre><code class="language-jsx">const express = require("express");
const VoiceResponse = require("twilio").twiml.VoiceResponse;
const urlencoded = require("body-parser").urlencoded;

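// Note: this example assumes the express, twilio, and body-parser
// packages are installed in the project:
//   npm install express twilio body-parser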
const app = express();

// Parse incoming POST params with Express middleware
app.use(urlencoded({ extended: false }));

// Webhook route which will be called by Twilio when a call is received
app.post("/incoming-call-handler", (request, response) =&gt; {
  console.log({ request });
  // Use the Twilio Node.js SDK to build an XML response
  const twiml = new VoiceResponse();

  const dial = twiml.dial();
  dial.sip(
    {
      username: "&lt;User name generated for dashboard&gt;",
      password: "&lt;Password generated from dashboard&gt;",
    },
    "sip:&lt;meetingId&gt;@sip.videosdk.live"
  );

  // Render the response as XML in reply to the webhook request
  response.type("text/xml");
  response.send(twiml.toString());
});

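// For reference, the handler above replies with TwiML shaped like the
// following (credentials and meeting id are placeholders):
// &lt;Response&gt;
//   &lt;Dial&gt;
//     &lt;Sip username="..." password="..."&gt;sip:&lt;meetingId&gt;@sip.videosdk.live&lt;/Sip&gt;
//   &lt;/Dial&gt;
// &lt;/Response&gt;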
//Start the express app on port 8000
app.listen(8000, () =&gt; {
  console.log("Server listening on port 8000.");
});
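// Note: Twilio must be able to reach this endpoint over the public
// internet. For local development you can expose port 8000 with a
// tunneling tool (e.g., ngrok) and set the resulting URL as the
// phone number's incoming-call webhook.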
</code></pre><h2 id="why-integrate-sip-in-communication-systems">Why Integrate SIP in Communication Systems?</h2><h3 id="improved-user-experience">Improved User Experience</h3><p>SIP integration elevates user interactions by enabling smooth transitions between various communication modes. Users can effortlessly shift from video to phone calls without disruptions, maintaining conversation flow and engagement.</p><h3 id="global-reach">Global Reach</h3><p>SIP allows businesses to connect with clients and teams worldwide, facilitating international calls and enabling collaboration across geographical barriers.</p><h3 id="adaptability">Adaptability</h3><p>SIP's flexibility enables businesses to connect calls to video conferences and switch between media types, ensuring seamless and uninterrupted communication.</p><h3 id="cost-efficiency">Cost Efficiency</h3><p>Combining platforms through SIP protocol integration reduces hardware dependency and subscription expenses, streamlining operations and lowering costs.</p><h2 id="conclusion">Conclusion</h2><p>Bridging traditional telephony with modern video conferencing technologies offers unparalleled flexibility, cost-efficiency, and user experience. Whether you're in healthcare, customer support, or any industry requiring robust communication solutions, the SIP Connect Protocol provides the tools to stay connected and productive.</p><p>As we move forward in an increasingly interconnected world, technologies like the SIP Connect Protocol will continue to play a crucial role in shaping the future of communication. Embracing these innovations today can set your business apart, enhancing customer interactions and fostering global collaboration.</p><p>Want to read more about SIP? <a href="https://docs.videosdk.live/react-native/guide/sip-connect">Explore VideoSDK's SIP Connect</a> and take the first step towards a more connected future.</p>]]></content:encoded></item><item><title><![CDATA[What is Simulcast? 
How Simulcast Works?]]></title><description><![CDATA[Explore simulcast technology: broadcasting multiple streams at varying qualities. Learn its applications in streaming, broadcasting, and optimizing network bandwidth efficiently.]]></description><link>https://www.videosdk.live/blog/what-is-simulcast</link><guid isPermaLink="false">66756e6720fab018df10ecc6</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 14 Jan 2025 11:15:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/what-is-simulcast.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/06/what-is-simulcast.jpg" alt="What is Simulcast? How Simulcast Works?"/><p>In the digital age, the art of broadcasting has transcended traditional mediums to embrace the vast potential of the Internet. Simulcasting, or simultaneous broadcasting, has emerged as a powerful tool for content creators and businesses to amplify their reach across multiple digital platforms simultaneously. This technique allows a single stream to be broadcast to diverse platforms like YouTube, Twitch, and Facebook, tapping into varied audiences with minimal additional effort. As we explore the nuances of simulcasting, this article will delve into its workings, benefits, and the strategic considerations needed to maximize its potential. Whether you’re a seasoned broadcaster or new to digital media, understanding simulcasting could significantly enhance your content strategy.</p><h2 id="what-is-simulcast">What is Simulcast?</h2>
<p>Simulcasting, a term derived from the fusion of “simultaneous” and “broadcasting,” has evolved significantly from its origins in traditional media to its current prominence in digital streaming. Initially coined in the mid-20th century, simulcasting referred to broadcasting the same program simultaneously across different media channels, such as radio and television. Today, however, it encompasses a broader spectrum, particularly in the digital realm, where it involves streaming audio or video content simultaneously across multiple platforms or channels such as YouTube, Twitch, and Facebook Live.</p><p>Simulcasting is often used interchangeably with terms like multistreaming and multi-destination streaming, but subtle nuances distinguish these concepts. Multistreaming generally refers to streaming across various online platforms simultaneously, which is technically a form of simulcasting but specifically for digital channels. This adaptation reflects the term’s evolution from traditional broadcast methods to modern online content delivery, where it enhances reach and engagement by leveraging the unique audiences of different platforms.</p><h2 id="what-is-twitch-simulcasting">What is Twitch Simulcasting?</h2>
<p>Twitch simulcasting involves broadcasting the same live stream concurrently across multiple platforms, such as Twitch, YouTube, and Facebook. This strategy expands reach, diversifies audience engagement, and maximizes content exposure, making it a popular choice for streamers looking to grow their viewer base and increase interaction across various social media channels.</p><h2 id="how-simulcasting-works">How Simulcasting Works</h2>
<p>The technical setup for simulcasting involves both hardware and software components that manage the distribution of streams to multiple destinations. At the core of this process is a simulcasting or multistreaming service, such as Restream or Castr, which integrates various streaming platforms into a single interface. These services typically allow users to connect their video source—be it a camera or screen capture—to the service, which then encodes and distributes the stream to selected platforms like LinkedIn, Twitch, and more.</p><p>To ensure the efficient management and optimization of these streams, companies often turn to data warehousing consulting to handle the large volumes of data generated and ensure that streaming quality remains high across multiple platforms.</p><h3 id="setting-up-a-simulcast">Setting Up a Simulcast:</h3>
<ol><li><strong>Choose a Simulcasting Service:</strong> Select a platform that supports the desired streaming destinations and offers features like chat aggregation and analytics. Restream and Castr are popular choices that cater to a range of streaming needs from amateur content creators to professional broadcasters.</li><li><strong>Connect Your Video Source</strong>: This could be anything from professional broadcasting equipment to a simple webcam setup. The key is ensuring that the video feed is stable and of high quality.</li><li><strong>Configure Streaming Destinations</strong>: In the simulcasting service, specify where you want your content to go. This might include setting up individual stream keys or API access for platforms like YouTube, Facebook, and others.</li><li><strong>Go Live</strong>: Once everything is set up, you can start streaming simultaneously across all chosen platforms, significantly expanding your content’s reach.<br/></li></ol><p>The process is designed to be as seamless as possible, minimizing the technical challenges and allowing content creators to focus more on their presentation and less on the logistics of distribution.</p><p>By utilizing these platforms, broadcasters can effectively amplify their presence across the internet, tapping into diverse audience pools without the need for multiple separate streams, which would require significantly more bandwidth and resources. This efficiency not only simplifies the workflow but also maximizes the potential viewership with minimal additional effort.</p><h2 id="benefits-of-simulcasting">Benefits of Simulcasting</h2>
<p>Simulcasting offers several strategic advantages for content creators, businesses, and brands aiming to maximize their digital footprint. The primary benefit is the expansion of audience reach. By broadcasting simultaneously across multiple platforms, you tap into diverse viewer segments, each with unique demographics and engagement patterns. This approach not only increases visibility but also enhances the potential for viewer interaction and content virality.</p><p>Cost-effectiveness is another significant advantage. Simulcasting allows you to produce a single stream that reaches multiple channels, reducing the resources and time typically required for separate productions. This unified strategy maximizes return on investment, as the same content serves multiple audience bases without additional production costs.</p><p>Furthermore, simulcasting improves engagement rates. Platforms like Facebook Live and YouTube have built-in features that promote real-time interaction, such as comments and shares. By leveraging these features across multiple platforms simultaneously, creators can engage with a broader audience, receive instant feedback, and foster a sense of community among viewers.</p><h2 id="key-considerations-and-best-practices">Key Considerations and Best Practices</h2>
<p>While the benefits of simulcasting are clear, achieving optimal results requires attention to several key considerations. One of the main challenges is managing multiple audience interactions simultaneously. Creators must be adept at engaging with comments and reactions across different platforms, which can be overwhelming without the right tools or strategies.</p><p>To effectively manage this, it’s crucial to use simulcasting services that offer centralized control panels for monitoring and interacting with audiences across all platforms. This setup helps maintain a consistent level of engagement and ensures that no viewer feels neglected.</p><h2 id="best-practices-for-simulcasting-include">Best practices for simulcasting include</h2>
<ul><li>Testing all equipment and connections before going live to avoid technical issues that could disrupt the broadcast.</li><li>Crafting content that resonates across different platforms, considering the nuances of each audience.</li><li>Promoting the simulcast ahead of time on all platforms to maximize live viewership.</li><li>Analyzing performance metrics post-broadcast to understand viewer behavior and preferences, which can inform future simulcasts.</li></ul><p>By adhering to these best practices and continuously refining your approach based on audience feedback and analytics, you can harness the full potential of simulcasting to reach a wider audience, engage more effectively with viewers, and achieve your broadcasting goals with increased efficiency and impact.</p><h2 id="conclusion">Conclusion</h2>
<p>Simulcasting represents a modern broadcasting strategy that leverages technology to multiply audience reach without proportionate increases in effort or expense. By broadcasting content simultaneously across multiple platforms, creators and brands can achieve unprecedented levels of engagement and exposure. However, the success of simulcasting depends on careful planning, understanding the technical requirements, and effectively engaging with diverse audiences. As we’ve discussed, the benefits of simulcasting are substantial, from increased reach and engagement to cost efficiency. By adopting best practices and continuously refining your simulcasting strategies, you can effectively utilize this powerful tool to expand your digital footprint and connect with a broader audience more effectively than ever before.</p><h2 id="faqs-for-simulcasting">FAQs for Simulcasting</h2>
<h3 id="1-what-is-simulcast-and-how-does-it-differ-from-other-broadcasting-methods">1. What is simulcast and how does it differ from other broadcasting methods?</h3>
<p>Simulcast refers to the simultaneous broadcast of the same content across multiple platforms or channels. Unlike traditional broadcasting, which might limit content distribution to a single channel, simulcasting enables reaching diverse audiences concurrently.</p><h3 id="2-why-is-simulcasting-important-in-the-context-of-broadcasting-and-media-distribution">2. Why is simulcasting important in the context of broadcasting and media distribution?</h3>
<p>Simulcasting is crucial because it allows broadcasters to extend their reach and engagement by delivering content to various platforms simultaneously. It enhances audience accessibility and flexibility, catering to diverse viewing habits and preferences.</p><h3 id="3-what-types-of-content-can-be-simulcasted">3. What types of content can be simulcasted?</h3>
<p>Virtually any live or prerecorded content can be simulcasted, including video streams, audio broadcasts, webinars, concerts, and educational sessions, among others.</p><h3 id="4-is-there-a-difference-between-simulcasting-in-traditional-media-versus-digital-platforms">4. Is there a difference between simulcasting in traditional media versus digital platforms?</h3>
<p>Simulcasting in traditional media typically involves broadcasting across multiple TV or radio stations simultaneously, whereas digital platforms extend to streaming services, social media, and websites, catering to online audiences.</p><h3 id="5-is-simulcasting-the-same-as-live-streaming">5. Is simulcasting the same as live streaming?</h3>
<p>While simulcasting often involves live streaming, it specifically refers to broadcasting the same content simultaneously across multiple platforms. Live streaming, on the other hand, can involve broadcasting unique content to a single platform in real time.</p>]]></content:encoded></item><item><title><![CDATA[Post-Call Transcription & Summary in React]]></title><description><![CDATA[Introduction

Post-call transcription and summary is a powerful feature provided by VideoSDK that allows users to generate detailed transcriptions and summaries of recorded meetings after they have concluded. This feature is particularly beneficial for capturing and documenting important information discussed during meetings, ensuring that nothing is missed and that there is a comprehensive record of the conversation.


How Post-Call Transcription Works

Post-call transcription involves processi]]></description><link>https://www.videosdk.live/react-transcription-blog/</link><guid isPermaLink="false">667a5bb420fab018df10f0fa</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 14 Jan 2025 09:45:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/Post-time-transcription-and-summary-4.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Post-time-transcription-and-summary-4.png" alt="Post-Call Transcription & Summary in React"/><p>Post-call transcription and summary is a powerful feature provided by VideoSDK that allows users to generate detailed transcriptions and summaries of recorded meetings after they have concluded. This feature is particularly beneficial for capturing and documenting important information discussed during meetings, ensuring that nothing is missed and that there is a comprehensive record of the conversation.</p><h3 id="how-post-call-transcription-works">How Post-Call Transcription Works</h3><p><strong>Post-call transcription</strong> involves processing the recorded audio or video content of a meeting to produce a textual representation of the conversation. Here’s a step-by-step breakdown of how it works:</p><ol><li><strong>Recording the Meeting:</strong> During the meeting, the audio and video are recorded. This can include everything that was said and any shared content, such as presentations or screen shares.</li><li><strong>Uploading the Recording:</strong> Once the meeting is over, the recorded file is uploaded to the VideoSDK platform. This can be done automatically or manually, depending on the configuration.</li><li><strong>Transcription Processing:</strong> The uploaded recording is then processed by VideoSDK’s transcription engine. 
This engine uses advanced speech recognition technology to convert spoken words into written text.</li><li><strong>Retrieving the Transcription:</strong> After the transcription process is complete, the textual representation of the meeting is made available. This text can be accessed via the VideoSDK API and used in various applications.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e-1.png" class="kg-image" alt="Post-Call Transcription & Summary in React" loading="lazy" width="2906" height="1446"/></figure><h3 id="benefits-of-post-call-transcription">Benefits of Post-Call Transcription</h3><ul><li><strong>Accurate Documentation:</strong> Provides a precise record of what was discussed, which is invaluable for meeting minutes, legal documentation, and reference.</li><li><strong>Enhanced Accessibility:</strong> Makes content accessible to those who may have missed the meeting or have hearing impairments.</li><li><strong>Easy Review and Analysis:</strong> Enables quick review of key points and decisions made during the meeting without having to re-watch the entire recording.</li></ul><h2 id="lets-get-started">Let's Get started </h2><p>VideoSDK empowers you to seamlessly integrate the video calling feature into your React application within minutes.</p><p>In this quickstart, you'll explore the group calling feature of VideoSDK. 
Follow the step-by-step guide to integrate it within your application.</p><h3 id="prerequisites%E2%80%8B">Prerequisites<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#prerequisites">​</a></h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>A VideoSDK Developer Account (if you don't have one, sign up via the <strong><a href="https://app.videosdk.live/" rel="noopener noreferrer">VideoSDK Dashboard</a></strong>)</li><li>Basic understanding of React</li><li><strong><a href="https://www.npmjs.com/package/@videosdk.live/react-sdk" rel="noopener noreferrer">React VideoSDK</a></strong></li><li>Node and NPM installed on your device</li><li>Basic understanding of Hooks (useState, useRef, useEffect)</li><li>React Context API (optional)</li><li>A token generated from the VideoSDK <a href="https://app.videosdk.live/api-keys">dashboard</a></li></ul><h2 id="getting-started-with-the-code%E2%80%8B">Getting Started with the Code!<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#getting-started-with-the-code">​</a></h2><p>Follow the steps to create the environment necessary to add video calls into your app. You can also find the code sample for the <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">quickstart here</a>.</p><h3 id="create-new-react-app%E2%80%8B">Create new React App<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#create-new-react-app">​</a></h3><p>Create a new React App using the below command.</p><pre><code class="language-JS">$ npx create-react-app videosdk-rtc-react-app</code></pre><h3 id="install-videosdk%E2%80%8B">Install VideoSDK<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#install-videosdk">​</a></h3><p>Install the VideoSDK using the below-mentioned npm command. 
Make sure you are in your React app directory before you run this command.</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"</code></pre><figcaption>Terminal</figcaption></figure><h3 id="structure-of-the-project%E2%80%8B">Structure of the project<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#structure-of-the-project">​</a></h3><p>Your project structure should look like this.</p><pre><code class="language-Project Structure">   root
   ├── node_modules
   ├── public
   ├── src
   │    ├── API.js
   │    ├── App.js
   │    ├── index.js
   .    .</code></pre><p>You are going to use functional components to leverage react's reusable component architecture. There will be components for users, videos and controls (mic, camera, leave) over the video.</p><h3 id="app-architecture%E2%80%8B">App Architecture<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#app-architecture">​</a></h3><p>The App will contain a <code>MeetingView</code> component which includes a <code>ParticipantView</code> component which will render the participant's name, video, audio, etc. It will also have a <code>Controls</code> component which will allow the user to perform operations like leave and toggle media.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e.png" class="kg-image" alt="Post-Call Transcription & Summary in React" loading="lazy" width="1356" height="780"/></figure><p>You will be working on the following files:</p><ul><li>API.js: Responsible for handling API calls such as generating unique meetingId and token</li><li>App.js: Responsible for rendering <code>MeetingView</code> and joining the meeting.</li></ul><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with API.js<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-apijs">​</a></h3><p>Prior to moving on, you must create an API request to generate a unique meetingId. 
You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashboard&gt;";
// API call to create a meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};</code></pre><figcaption>API.js</figcaption></figure><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting and useParticipant hooks.</p><p>First you need to understand Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meeting such as join, leave, enable/disable mic or webcam etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant such as name, webcamStream, micStream etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the App.js file.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";

function JoinScreen({ getMeetingAndToken }) {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  //Getting the meeting id by calling the api we just wrote
  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  //This will set Meeting Id to null when meeting is left or ended
  const onMeetingLeave = () =&gt; {
    setMeetingId(null);
  };

  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
      }}
      token={authToken}
    &gt;
      &lt;MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} /&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} /&gt;
  );
}

export default App;</code></pre><figcaption>App.js</figcaption></figure><p/><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-3-implement-join-screen">​</a></h3><p>Join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function JoinScreen({ getMeetingAndToken }) {
  const [meetingId, setMeetingId] = useState(null);
  const onClick = async () =&gt; {
    await getMeetingAndToken(meetingId);
  };
  return (
    &lt;div&gt;
      &lt;input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) =&gt; {
          setMeetingId(e.target.value);
        }}
      /&gt;
      &lt;button onClick={onClick}&gt;Join&lt;/button&gt;
      {" or "}
      &lt;button onClick={onClick}&gt;Create Meeting&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><figcaption>JoinScreen Component</figcaption></figure><h4 id="output">Output</h4><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/image_2024-06-26_164057583.png" class="kg-image" alt="Post-Call Transcription & Summary in React" loading="lazy" width="720" height="130"/></figure><h3 id="step-4-implement-meetingview-and-controls%E2%80%8B">Step 4: Implement MeetingView and Controls<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-4-implement-meetingview-and-controls">​</a></h3><p>Next step is to create <code>MeetingView</code> and <code>Controls</code> components to manage features such as join, leave, mute and unmute.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function MeetingView(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  //We will also get the participants list to display all participants
  const { join, participants } = useMeeting({
    //callback for when meeting is joined successfully
    onMeetingJoined: () =&gt; {
      setJoined("JOINED");
    },
    //callback for when meeting is left
    onMeetingLeft: () =&gt; {
      props.onMeetingLeave();
    },
  });
  const joinMeeting = () =&gt; {
    setJoined("JOINING");
    join();
  };

  return (
    &lt;div className="container"&gt;
      &lt;h3&gt;Meeting Id: {props.meetingId}&lt;/h3&gt;
      {joined &amp;&amp; joined === "JOINED" ? (
        &lt;div&gt;
          &lt;Controls /&gt;
          {/* Render all participants in the meeting */}
          {[...participants.keys()].map((participantId) =&gt; (
            &lt;ParticipantView
              participantId={participantId}
              key={participantId}
            /&gt;
          ))}
        &lt;/div&gt;
      ) : joined &amp;&amp; joined === "JOINING" ? (
        &lt;p&gt;Joining the meeting...&lt;/p&gt;
      ) : (
        &lt;button onClick={joinMeeting}&gt;Join&lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><figcaption>MeetingView</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function Controls() {
  const { leave, toggleMic, toggleWebcam } = useMeeting();
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; leave()}&gt;Leave&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleMic()}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleWebcam()}&gt;toggleWebcam&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><figcaption>Controls Component</figcaption></figure><h3 id="step-5-configuring-transcription">Step 5: Configuring Transcription</h3><ul><li>In this step, we set up the configuration for post transcription and summary generation. We define the webhook URL where the webhooks will be received.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">function Controls() {
  const { leave, toggleMic, toggleWebcam, startRecording, stopRecording } =  useMeeting();

  // Webhook URL where webhooks will be received
  const webhookurl = "https://www.example.com";
  const transcription = {
    enabled: true, // Enables post transcription
    summary: {
      enabled: true, // Enables summary generation

      // Guides summary generation
      prompt:
        "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary",
    },
  };

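  // Note: in the startRecording call below, the webhook URL comes first;
  // the two null arguments leave the optional recording parameters at
  // their defaults, and the transcription object enables post-call
  // transcription and summary generation.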
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; leave()}&gt;Leave&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleMic()}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleWebcam()}&gt;toggleWebcam&lt;/button&gt;
      {/* Start Post-Call Transcription with Recording */}
      &lt;button
        onClick={() =&gt; startRecording(webhookurl, null, null, transcription)}
      &gt;
        Start Recording
      &lt;/button&gt;
      {/* Stop Recording */}
      &lt;button onClick={() =&gt; stopRecording()}&gt;Stop Recording&lt;/button&gt;
    &lt;/div&gt;
  );
}
</code></pre><figcaption>App.js</figcaption></figure><p><strong>Output of Controls Component</strong></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Screenshot-2024-06-27-111321-1.png" class="kg-image" alt="Post-Call Transcription & Summary in React" loading="lazy" width="787" height="131"/></figure><h3 id="step-6-implement-participant-view%E2%80%8B">Step 6: Implement Participant View<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-5-implement-participant-view">​</a></h3><p>Before implementing the participant view, you need to understand a couple of concepts.</p><p><strong>1. Forwarding Ref for mic and camera<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#1-forwarding-ref-for-mic-and-camera">​</a></strong></p><p>The <code>useRef</code> hook is responsible for referencing the audio and video components. It will be used to play and stop the audio and video of the participant.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const webcamRef = useRef(null);
const micRef = useRef(null);</code></pre><figcaption>Forwarding Ref for mic and camera</figcaption></figure><p><strong>2. useParticipant Hook</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#2-useparticipant-hook">​</a></p><p>The <code>useParticipant</code> hook handles all the properties and events of a particular participant in the meeting. It takes the <code>participantId</code> as an argument.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
  props.participantId
);</code></pre><figcaption>useParticipant Hook</figcaption></figure><p><strong>3. MediaStream API</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#3-mediastream-api">​</a></p><p>The MediaStream API is used to attach a MediaStreamTrack to an audio or video element, enabling playback of the participant's audio or video.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);

webcamRef.current.srcObject = mediaStream;
webcamRef.current
  .play()
  .catch((error) =&gt; console.error("videoElem.current.play() failed", error));</code></pre><figcaption>MediaStream API</figcaption></figure><p><strong>4. Implement ParticipantView<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#4-implement-participantview">​</a></strong></p><p>Now you can use both of the hooks and the API to create <code>ParticipantView</code></p><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() =&gt; {
    if (micRef.current) {
      if (micOn &amp;&amp; micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =&gt;
            console.error("micRef.current.play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    &lt;div&gt;
      &lt;p&gt;
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      &lt;/p&gt;
      &lt;audio ref={micRef} autoPlay playsInline muted={isLocal} /&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          //
          playsinline // required for inline playback on mobile browsers
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          //
          url={videoStream}
          //
          height={"300px"}
          width={"300px"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><figcaption>ParticipantView</figcaption></figure><h2 id="final-output%E2%80%8B">Final Output<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#final-output">​</a></h2><p>You have completed the implementation of a customized video calling app in React.js using VideoSDK. To explore more features, go through Basic and Advanced features.</p><figure class="kg-card kg-video-card"><div class="kg-video-container"><video src="http://blogs.videosdk.live/content/media/2024/06/part1-1.mp4" poster="https://img.spacergif.org/v1/1920x1080/0a/spacer.png" width="1920" height="1080" playsinline="" preload="metadata" style="background: transparent url('https://assets.videosdk.live/static-assets/ghost/2024/06/media-thumbnail-ember748.jpg') 50% 50% / cover no-repeat;"/><div class="kg-video-overlay"><button class="kg-video-large-play-icon"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/></svg></button></div><div class="kg-video-player-container"><div class="kg-video-player"><button class="kg-video-play-icon"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/></svg></button><button class="kg-video-pause-icon kg-video-hide"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/><rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/></svg></button><span class="kg-video-current-time">0:00</span><div class="kg-video-time">/<span class="kg-video-duration"/></div><input type="range" class="kg-video-seek-slider" max="100" value="0"><button class="kg-video-playback-rate">1&#215;</button><button class="kg-video-unmute-icon"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 
24"><path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/></svg></button><button class="kg-video-mute-icon kg-video-hide"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/></svg></button><input type="range" class="kg-video-volume-slider" max="100" value="100"/></input></div></div></div></figure><h2 id="fetching-the-transcription-from-the-dashboard">Fetching the Transcription from the Dashboard</h2><p>Once the transcription is ready, you can fetch it from the VideoSDK dashboard. 
The dashboard provides a user-friendly interface where you can view, download, and manage your transcriptions.</p><figure class="kg-card kg-video-card"><div class="kg-video-container"><video src="http://blogs.videosdk.live/content/media/2024/06/part2.mp4" poster="https://img.spacergif.org/v1/2520x1080/0a/spacer.png" width="2520" height="1080" playsinline="" preload="metadata" style="background: transparent url('https://assets.videosdk.live/static-assets/ghost/2024/06/media-thumbnail-ember696.jpg') 50% 50% / cover no-repeat;"/><div class="kg-video-overlay"><button class="kg-video-large-play-icon"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/></svg></button></div><div class="kg-video-player-container"><div class="kg-video-player"><button class="kg-video-play-icon"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/></svg></button><button class="kg-video-pause-icon kg-video-hide"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/><rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/></svg></button><span class="kg-video-current-time">0:00</span><div class="kg-video-time">/<span class="kg-video-duration"/></div><input type="range" class="kg-video-seek-slider" max="100" value="0"><button class="kg-video-playback-rate">1&#215;</button><button class="kg-video-unmute-icon"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/></svg></button><button class="kg-video-mute-icon kg-video-hide"><svg 
xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/></svg></button><input type="range" class="kg-video-volume-slider" max="100" value="100"/></input></div></div></div></figure><h2 id="conclusion">Conclusion</h2><p>Integrating post-call transcription and summary features into your React application using VideoSDK provides significant advantages for capturing and documenting meeting content. This guide has meticulously detailed the steps required to set up and implement these features, ensuring that every conversation during a meeting is accurately transcribed and easily accessible for future reference.</p>]]></content:encoded></item><item><title><![CDATA[Best Jitsi Competitors in 2025]]></title><description><![CDATA[Explore Jitsi's competition by comparing Jitsi with its competitors in the realm of real-time communication.]]></description><link>https://www.videosdk.live/blog/jitsi-competitors</link><guid isPermaLink="false">64b5257f9eadee0b8b9e71ec</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 14 Jan 2025 04:30:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/08/Jitsi-Meet-competitors.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Jitsi-Meet-competitors.jpg" alt="Best Jitsi Competitors in 2025"/><p>Absolutely, when considering <strong>video communication API</strong>s, Jitsi could be among the choices you're evaluating. 
</p><p>Exploring <a href="https://www.videosdk.live/blog/jitsi-alternative">alternatives to Jitsi</a> presents a vast landscape. Navigating this array can be challenging. Before delving into video API options, define your project's budget, use case, and essential features.</p><p>To simplify this process, we've curated a list of the <strong>best competitors to Jitsi</strong>. This compilation is intended to aid you in streamlining your selection process and identifying the optimal video API solution that harmonizes with your application's unique needs. It's important to emphasize that our objective is to offer an impartial overview of each provider, allowing you to make a well-informed choice that aligns with your specific requirements and preferences.</p><h2 id="jitsi-meet">Jitsi Meet</h2>
<h3 id="key-points-about-jitsi">Key points about Jitsi</h3>
<ul><li>Jitsi is a versatile open-source platform that allows users to tailor its usage to their preferences and needs. </li><li>Jitsi Meet, one of its flagship projects, offers an array of features such as text sharing through Etherpad, room locking, text chatting (web only), raising hands, accessing YouTube videos during calls, audio-only calls, and integrations with third-party applications.</li><li>However, Jitsi Meet alone might <strong>lack</strong> certain crucial collaborative functionalities like <strong>screen sharing</strong>, <strong>recording</strong>, or <strong>telephone dial-in</strong> for conferences. </li><li>To access these features, setting up <strong>additional projects</strong> like <strong>Jibri</strong> and <strong>Jigasi</strong> is necessary, which can entail <strong>more time</strong>, <strong>resources</strong>, and <strong>coding effort</strong>. </li><li>This can make Jitsi <strong>less suitable</strong> for users seeking a low-code option to quickly implement such features.</li><li>While Jitsi supports end-to-end encryption for video calls, this encryption <strong>doesn't cover</strong> aspects like <strong>chat</strong> or <strong>polls</strong>. </li><li>Users placing a high emphasis on robust security may need additional measures.</li><li>Jitsi can also consume a <strong>notable amount of bandwidth</strong> because of the way Jitsi Videobridge works. </li><li>For larger organizations that need an SDK for frequent, lengthy video sessions with a substantial number of participants, Jitsi might not fully meet their needs compared to more specialized solutions.</li></ul><h3 id="jitsi-pricing">Jitsi pricing</h3>
<ul><li>Jitsi is available for <strong>free</strong>, which means you can use its components without any payment.</li><li>However, it's important to note that dedicated technical support is not provided by the platform. </li><li>In case you encounter any issues or require assistance, you can seek help from the active community of contributors who participate in the Jitsi project.</li></ul><h2 id="direct-comparison-jitsi-vs-top-competitors">Direct Comparison: Jitsi vs Top Competitors</h2>
<p>The <strong>top Jitsi competitors</strong> are VideoSDK, 100ms, WebRTC, Daily, and LiveKit.</p><p>Let's compare Jitsi with each of them.</p><h2 id="1-jitsi-vs-videosdk">1. Jitsi vs VideoSDK</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Jitsi-Meet-vs-Videosdk.jpg" class="kg-image" alt="Best Jitsi Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>VideoSDK offers developers a seamless API that simplifies incorporating robust, scalable, and dependable audio-video capabilities into their applications. With only a few lines of code, developers can introduce live audio and video experiences to various platforms within minutes. One of the primary benefits of opting for the <a href="https://www.videosdk.live/">VideoSDK</a> is its remarkable ease of integration. This characteristic enables developers to concentrate their efforts on crafting innovative features that contribute to enhanced user engagement and retention.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Jitsi pricing</strong></td>
        <td><strong>VideoSDK pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td>
            <strong>$15</strong> per 1,000 minutes, No limit on participants
        </td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td>NA</td>
        <td>
            <strong>$15</strong> per 1,000 minutes, No limit on participants
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/jitsi-vs-videosdk">Jitsi vs VideoSDK</a>.</blockquote>
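<p>To put these per-1,000-minute rates in perspective, here is a rough cost sketch in JavaScript. The usage figures below are hypothetical, and the rates come from the comparison tables in this article; always confirm current pricing on each provider's site.</p>

```javascript
// Estimate monthly spend from usage-based, per-minute video API pricing.
// participantMinutes: total participant-minutes consumed in a month.
// ratePer1000: price in USD per 1,000 participant-minutes.
function estimateMonthlyCost(participantMinutes, ratePer1000) {
  return (participantMinutes / 1000) * ratePer1000;
}

// Hypothetical usage: 50 sessions/day x 4 participants x 30 minutes x 22 days
const minutes = 50 * 4 * 30 * 22; // 132,000 participant-minutes

console.log(estimateMonthlyCost(minutes, 2)); // video calling at $2/1,000 min -> 264
console.log(estimateMonthlyCost(minutes, 4)); // video calling at $4/1,000 min -> 528
```

<p>Because participant-minutes grow multiplicatively with session size and length, even a small per-1,000-minute difference compounds quickly at scale.</p>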
<!--kg-card-begin: html-->
<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
	<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
		Schedule a Demo with Our Live Video Expert!
	</h3>
	<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
		Discover how VideoSDK can help you build a cutting-edge real-time video app.
	</p>
	<div class="mt-4 flex items-center justify-center">
		<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
			Book a call
		</a>
	</div>
</div>
<!--kg-card-end: html-->
<h2 id="2-jitsi-vs-100ms">2. Jitsi vs 100ms</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Jitsi-Meet-vs-100ms.jpg" class="kg-image" alt="Best Jitsi Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p><a href="https://www.videosdk.live/blog/100ms-alternative">100ms</a> offers a cloud-based platform that empowers developers to seamlessly integrate video and audio conferencing into a variety of applications, spanning web, Android, and iOS platforms. With a comprehensive suite of tools, including REST APIs, software development kits (SDKs), and an intuitive user-friendly dashboard, 100ms simplifies the process of capturing, distributing, recording, and presenting live interactive audio and video content. This platform serves as a valuable resource for enhancing user engagement and communication experiences across diverse applications.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Jitsi pricing</strong></td>
        <td><strong>100ms pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td><strong>$40</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td>NA</td>
        <td><strong>$13.5</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/100ms-vs-jitsi">Jitsi vs 100ms</a>.</blockquote><h2 id="3-jitsi-vs-webrtc">3. Jitsi vs WebRTC</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Jitsi-Meet-vs-WebRTC.jpg" class="kg-image" alt="Best Jitsi Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>WebRTC is an open-source project that gives web browsers built-in Real-Time Communications (RTC) capabilities through simple JavaScript APIs. The components of <a href="https://www.videosdk.live/blog/webrtc-alternative">WebRTC</a> are optimized to deliver real-time communication directly in the browser, letting developers embed audio and video calls, chat, file sharing, and more into their web applications.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Jitsi pricing</strong></td>
        <td><strong>WebRTC pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/webrtc-vs-jitsi">Jitsi vs WebRTC</a>.</blockquote><h2 id="4-jitsi-vs-daily">4. Jitsi vs Daily</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Jitsi-Meet-vs-Daily.jpg" class="kg-image" alt="Best Jitsi Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>Daily is a developer-centric platform for adding real-time video and audio calls directly to applications running in web browsers. With its suite of tools and features, <a href="https://www.videosdk.live/blog/daily-co-alternative">Daily</a> handles the backend complexity of video calls across the platforms it supports.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Jitsi pricing</strong></td>
        <td><strong>Daily pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$1.2</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td><strong>$15</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td>NA</td>
        <td><strong>$13.49</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/jitsi-vs-daily">Jitsi vs Daily</a>.</blockquote><h2 id="5-jitsi-vs-livekit">5. Jitsi vs LiveKit</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Jitsi-Meet-vs-LiveKit.jpg" class="kg-image" alt="Best Jitsi Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>LiveKit offers a comprehensive solution for developers who want to integrate live video and audio into their native applications. With its well-crafted software development kits (SDKs), <a href="https://www.videosdk.live/blog/livekit-alternative">LiveKit</a> simplifies adding a wide range of real-time communication features to your applications.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Jitsi pricing</strong></td>
        <td><strong>LiveKit pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$20</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>
            <strong>$69</strong> per hour (up to 500 viewers only; Full HD not supported)
        </td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>NA</td>
        <td>No accurate data available</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td>NA</td>
        <td>No accurate data available</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/jitsi-vs-livekit">Jitsi vs LiveKit</a>.</blockquote><h2 id="have-you-determined-whether-jitsi-aligns-with-your-requirements-or-have-you-found-an-alternative">Have you determined whether Jitsi aligns with your requirements, or have you found an alternative?</h2>
<p>Absolutely, the alternatives to Jitsi that were discussed earlier provide a wide spectrum of solutions for developers looking to elevate in-app user experiences. However, if your objectives go beyond basic in-app communication and encompass a broader engagement strategy that integrates voice and video capabilities, solutions like <a href="https://www.videosdk.live/signup/">VideoSDK</a> might indeed align more closely with your requirements. These solutions offer the tools and features necessary to craft immersive and interactive experiences for your users, enabling you to provide a communication environment that goes beyond mere text-based interaction. By delving into these alternatives, you have the opportunity to enhance your application's user experience and create a more dynamic and engaging platform for your users.</p><p>Considering your unique requirements, budget constraints, and the essential features you're seeking, it's possible that Jitsi may not be the perfect match for your needs. To ensure that you make a well-informed decision, it's highly recommended to thoroughly explore the alternatives that were previously discussed. Some of these alternatives, such as VideoSDK, offer the advantage of free trial options. These trials provide an excellent opportunity to assess their capabilities in real-world projects, allowing you to gain valuable insights into how effectively they align with your requirements before committing fully. Furthermore, it's crucial to keep in mind that should your needs evolve over time, you retain the flexibility to transition away from Jitsi and opt for a solution that better aligns with your changing demands.</p>]]></content:encoded></item><item><title><![CDATA[How to Build an iOS Live Streaming App with VideoSDK]]></title><description><![CDATA[Learn to develop an iOS live-streaming app using VideoSDK. 
Master the process step-by-step for seamless video streaming on iOS devices.]]></description><link>https://www.videosdk.live/blog/ios-live-streaming-app</link><guid isPermaLink="false">662781052a88c204ca9d4bef</guid><category><![CDATA[iOS]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Mon, 13 Jan 2025 07:04:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/HTTP-Live-Stream-in-iOS-Video-Call-App.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/HTTP-Live-Stream-in-iOS-Video-Call-App.png" alt="How to Build an iOS Live Streaming App with VideoSDK"/><p>Creating a live streaming app for iOS involves a structured process to ensure its functionality and appeal to users. Initially, developers need to select suitable streaming protocols, such as HLS (<a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HTTP Live Streaming</a>), to ensure compatibility with iOS devices. User interface design is crucial, incorporating features like live chat, reactions, and notifications for enhanced engagement. It's essential to evaluate factors like performance, reliability, ease of integration, and platform support.</p><p>And here <a href="https://www.videosdk.live/">VideoSDK</a> stands out as a comprehensive solution, offering SDKs for iOS, Android, Flutter, and React Native. Its key advantage lies in providing an intuitive and easy-to-integrate SDK specifically tailored for live-streaming apps. 
VideoSDK ensures seamless performance, robust features, and excellent support across multiple platforms.</p><h3 id="benefits-of-ios-live-streaming-app">Benefits of iOS live streaming app:</h3><ul><li><strong>Real-Time Engagement</strong>: Live streaming allows for instant interaction and engagement with the audience, fostering a sense of connection and community.</li><li><strong>Increased Reach</strong>: With live streaming, you can reach a wider audience, as viewers can tune in from anywhere in the world.</li><li><strong>Cost-Effective Marketing</strong>: Compared to traditional marketing methods, live streaming can be a cost-effective way to promote your brand, as it requires minimal equipment and can be done from anywhere.</li><li><strong>Builds Trust and Authenticity</strong>: Live streaming provides an authentic and transparent way to connect with your audience, helping to build trust and credibility.</li></ul><h3 id="use-cases-of-ios-live-streaming-app">Use Cases of iOS live streaming app:</h3><ul><li><strong>Live Events and Webinars</strong>: Host live events or webinars to engage with your audience in real-time, share knowledge, and provide valuable information on specific topics. 
</li><li><strong>Product Launches</strong>: Live stream product launches to showcase new products or services, demonstrate features, and answer audience questions in real time.</li><li><strong>Q&amp;A Sessions</strong>: Host live Q&amp;A sessions to interact with your audience, address their questions, and provide valuable insights or advice.</li></ul><h2 id="how-to-develop-a-live-streaming-app-with-videosdk"><strong>How to Develop a Live Streaming App</strong> with VideoSDK</h2><h3 id="getting-started-with-videosdk">Getting Started with VideoSDK</h3><p>VideoSDK lets you integrate video &amp; audio calling into Web, Android, and iOS applications across many different frameworks. It is an infrastructure solution that provides programmable SDKs and REST APIs to build scalable video conferencing applications. This guide will get you up and running with VideoSDK video &amp; audio calling in minutes.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/login">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. 
For a more visual walkthrough of account creation and token generation, refer to the <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/server-setup">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><ul><li>iOS 11.0+</li><li>Xcode 12.0+</li><li>Swift 5.0+</li></ul><p><strong>This app will contain two screens:</strong></p><p><strong>Join Screen</strong>: This screen allows the user to either create a meeting or join a predefined meeting.</p><p><strong>Meeting Screen</strong>: This screen contains the local and remote participant views and meeting controls such as enabling/disabling the mic &amp; camera and leaving the meeting.</p><h2 id="integrate-videosdk%E2%80%8B">Integrate VideoSDK​</h2><p>To install VideoSDK, initialize CocoaPods in the project by running the following command:</p><pre><code class="language-swift">pod init</code></pre><p>This creates a Podfile in your project folder. Open it and add the dependency for VideoSDK, as below:</p><pre><code class="language-swift">pod 'VideoSDKRTC', :git =&gt; 'https://github.com/videosdk-live/videosdk-rtc-ios-sdk.git'</code></pre><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_podfile.png" class="kg-image" alt="How to Build an iOS Live Streaming App with VideoSDK" loading="lazy" width="1376" height="798"/></figure><p>Then run the following command to install the pod:</p><pre><code class="language-swift">pod install</code></pre><p>Then declare the camera and microphone permissions in Info.plist:</p><pre><code class="language-swift">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><h3 id="project-structure">Project Structure</h3><pre><code class="language-swift">iOSQuickStartDemo
   ├── Models
        ├── RoomStruct.swift
        └── MeetingData.swift
   ├── ViewControllers
        ├── StartMeetingViewController.swift
        └── MeetingViewController.swift
   ├── AppDelegate.swift // Default
   ├── SceneDelegate.swift // Default
   ├── APIService
           └── APIService.swift
   ├── Main.storyboard // Default
   ├── LaunchScreen.storyboard // Default
   └── Info.plist // Default
 Pods
      └── Podfile</code></pre><h3 id="create-models%E2%80%8B">Create models<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#create-models">​</a></h3><p>Create Swift files for the <code>MeetingData</code> and <code>RoomsStruct</code> models to hold meeting and room data as typed objects.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
struct MeetingData {
    let token: String
    let name: String
    let meetingId: String
    let micEnabled: Bool
    let cameraEnabled: Bool
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingData.swift</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
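
// RoomsStruct models the JSON returned by VideoSDK's create-room endpoint
// (POST https://api.videosdk.live/v2/rooms); the CodingKeys map the JSON's
// "roomId" and snake_case link fields onto Swift property names.
// Illustrative response shape (sample values made up):
// {"roomId": "abcd-efgh-ijkl", "links": {"get_room": "...", "get_session": "..."}}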
struct RoomsStruct: Codable {
    let createdAt, updatedAt, roomID: String?
    let links: Links?
    let id: String?
    enum CodingKeys: String, CodingKey {
        case createdAt, updatedAt
        case roomID = "roomId"
        case links, id
    }
}

// MARK: - Links
struct Links: Codable {
    let getRoom, getSession: String?
    enum CodingKeys: String, CodingKey {
        case getRoom = "get_room"
        case getSession = "get_session"
    }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">RoomStruct.swift</span></p></figcaption></figure><h2 id="essential-steps-for-building-the-video-calling">Essential Steps for Building the Video Calling</h2><p>This guide walks you through integrating video calling with <a href="https://www.videosdk.live/">VideoSDK</a>. We'll cover everything from setting up the SDK to wiring the calling flow into your app's interface, ensuring a smooth and efficient implementation process.</p><h3 id="step-1-get-started-with-apiclient%E2%80%8B">Step 1: Get started with APIClient<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apiclient">​</a></h3><p>Before anything else, we need an API call that generates a unique <code>meetingId</code>. You will require an <strong>authentication token</strong>; you can generate it either using the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-server-api-example</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a>.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation

let TOKEN_STRING: String = "&lt;AUTH_TOKEN&gt;"

class APIService {

  class func createMeeting(token: String, completion: @escaping (Result&lt;String, Error&gt;) -&gt; Void) {

    let url = URL(string: "https://api.videosdk.live/v2/rooms")!

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.addValue(TOKEN_STRING, forHTTPHeaderField: "authorization")

    URLSession.shared.dataTask(
      with: request,
      completionHandler: { (data: Data?, response: URLResponse?, error: Error?) in

        DispatchQueue.main.async {

          if let data = data {
            do {
              let dataArray = try JSONDecoder().decode(RoomsStruct.self, from: data)

              completion(.success(dataArray.roomID ?? ""))
            } catch {
              print("Error while creating a meeting: \(error)")
              completion(.failure(error))
            }
          }
        }
      }
    ).resume()
  }
}
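
// Example usage (illustrative): create a room and use the returned meetingId.
// APIService.createMeeting(token: TOKEN_STRING) { result in
//     switch result {
//     case .success(let meetingId):
//         print("Created room: \(meetingId)")
//     case .failure(let error):
//         print("Room creation failed: \(error)")
//     }
// }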
</code></pre><figcaption><p><span style="white-space: pre-wrap;">APIService.swift</span></p></figcaption></figure><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>The Join Screen will work as a medium to either schedule a new meeting or join an existing meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
import UIKit

class StartMeetingViewController: UIViewController, UITextFieldDelegate {

  private var serverToken = ""

  /// MARK: outlet for create meeting button
  @IBOutlet weak var btnCreateMeeting: UIButton!

  /// MARK: outlet for join meeting button
  @IBOutlet weak var btnJoinMeeting: UIButton!

  /// MARK: outlet for meetingId textfield
  @IBOutlet weak var txtMeetingId: UITextField!

  /// MARK: Initialize the private variable with TOKEN_STRING &amp;
  /// setting the meeting id in the textfield
  override func viewDidLoad() {
    txtMeetingId.delegate = self
    serverToken = TOKEN_STRING
    txtMeetingId.text = "PROVIDE-STATIC-MEETING-ID"
  }

  /// MARK: method for joining the meeting through the segue named "StartMeeting"
  /// after validating that the serverToken is not empty
  func joinMeeting() {

    txtMeetingId.resignFirstResponder()

    if !serverToken.isEmpty {
      DispatchQueue.main.async {
        self.dismiss(animated: true) {
          self.performSegue(withIdentifier: "StartMeeting", sender: nil)
        }
      }
    } else {
      print("Please provide auth token to start the meeting.")
    }
  }

  /// MARK: outlet for create meeting button tap event
  @IBAction func btnCreateMeetingTapped(_ sender: Any) {
    print("show loader while meeting gets connected with server")
    joinRoom()
  }

  /// MARK: outlet for join meeting button tap event
  @IBAction func btnJoinMeetingTapped(_ sender: Any) {
    if (txtMeetingId.text ?? "").isEmpty {

      print("Please provide meeting id to start the meeting.")
      txtMeetingId.resignFirstResponder()
    } else {
      joinMeeting()
    }
  }

  // MARK: - method for creating room api call and getting meetingId for joining meeting

  func joinRoom() {

    APIService.createMeeting(token: self.serverToken) { result in
      if case .success(let meetingId) = result {
        DispatchQueue.main.async {
          self.txtMeetingId.text = meetingId
          self.joinMeeting()
        }
      }
    }
  }

  /// MARK: preparing to animate to meetingViewController screen
  override func prepare(for segue: UIStoryboardSegue, sender: Any?) {

    guard let navigation = segue.destination as? UINavigationController,

      let meetingViewController = navigation.topViewController as? MeetingViewController
    else {
      return
    }

    meetingViewController.meetingData = MeetingData(
      token: serverToken,
      name: txtMeetingId.text ?? "Guest",
      meetingId: txtMeetingId.text ?? "",
      micEnabled: true,
      cameraEnabled: true
    )
  }
}
</code></pre><figcaption><p><span style="white-space: pre-wrap;">StartMeetingViewController.swift</span></p></figcaption></figure><h3 id="step-3-initialize-and-join-meeting%E2%80%8B">Step 3: Initialize and Join Meeting<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-and-join-meeting">​</a></h3><p>Using the provided <code>token</code> and <code>meetingId</code>, we will configure and initialize the meeting in <code>viewDidLoad()</code>.</p><p>Then, we'll add <strong>@IBOutlet</strong> for <code>localParticipantVideoView</code> and <code>remoteParticipantVideoView</code>, which can render local and remote participant media, respectively.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import UIKit
import VideoSDKRTC
import WebRTC
import AVFoundation

class MeetingViewController: UIViewController {

// MARK: - Properties
// outlet for local participant container view
   @IBOutlet weak var localParticipantViewContainer: UIView!

// outlet for label for meeting Id
   @IBOutlet weak var lblMeetingId: UILabel!

// outlet for local participant video view
   @IBOutlet weak var localParticipantVideoView: RTCMTLVideoView!

// outlet for remote participant video view
   @IBOutlet weak var remoteParticipantVideoView: RTCMTLVideoView!

// outlet for remote participant no media label
   @IBOutlet weak var lblRemoteParticipantNoMedia: UILabel!

// outlet for remote participant container view
   @IBOutlet weak var remoteParticipantViewContainer: UIView!

// outlet for local participant no media label
   @IBOutlet weak var lblLocalParticipantNoMedia: UILabel!

// Meeting data - required to start
   var meetingData: MeetingData!

// current meeting reference
   private var meeting: Meeting?

    // MARK: - video participants including self to show in UI
    private var participants: [Participant] = []

    // MARK: - Lifecycle Events

    override func viewDidLoad() {
        super.viewDidLoad()
        // configure the VideoSDK with the token
        VideoSDK.config(token: meetingData.token)

        // init meeting
        initializeMeeting()

        // set the meeting id in the label text
        lblMeetingId.text = "Meeting Id: \(meetingData.meetingId)"
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        navigationController?.navigationBar.isHidden = true
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        navigationController?.navigationBar.isHidden = false
        NotificationCenter.default.removeObserver(self)
    }

    // MARK: - Meeting

    private func initializeMeeting() {

        // Initialize the VideoSDK meeting
        meeting = VideoSDK.initMeeting(
            meetingId: meetingData.meetingId,
            participantName: meetingData.name,
            micEnabled: meetingData.micEnabled,
            webcamEnabled: meetingData.cameraEnabled
        )

        // Add the event listener to the meeting
        meeting?.addEventListener(self)

        // Join the meeting
        meeting?.join()
    }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingViewController.swift</span></p></figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>After initializing the meeting in the previous step, we will now add <strong>@IBOutlet</strong> for <code>btnLeave</code>, <code>btnToggleVideo</code> and <code>btnToggleMic</code>, which control the media in the meeting.</p><p>Along with that, we'll add one more <strong>@IBOutlet</strong> button, <code>btnToggleHLS</code>, which we'll use to start and stop HLS.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">class MeetingViewController: UIViewController {

...

    // outlet for leave button
    @IBOutlet weak var btnLeave: UIButton!

    // outlet for toggle video button
    @IBOutlet weak var btnToggleVideo: UIButton!

    // outlet for toggle audio button
    @IBOutlet weak var btnToggleMic: UIButton!
    
    // outlet for toggle HLS button
    @IBOutlet weak var btnToggleHLS: UIButton!

    // bool for mic
    var micEnabled = true
    // bool for video
    var videoEnabled = true
    // bool for hls state
    var hlsEnabled = false


    // outlet for leave button click event
    @IBAction func btnLeaveTapped(_ sender: Any) {
            DispatchQueue.main.async {
                self.meeting?.leave()
                self.dismiss(animated: true)
            }
        }

    // outlet for toggle mic button click event
    @IBAction func btnToggleMicTapped(_ sender: Any) {
        if micEnabled {
            micEnabled = !micEnabled // false
            self.meeting?.muteMic()
        } else {
            micEnabled = !micEnabled // true
            self.meeting?.unmuteMic()
        }
    }

    // outlet for toggle video button click event
    @IBAction func btnToggleVideoTapped(_ sender: Any) {
        if videoEnabled {
            videoEnabled = !videoEnabled // false
            self.meeting?.disableWebcam()
        } else {
            videoEnabled = !videoEnabled // true
            self.meeting?.enableWebcam()
        }
    }
    
      // outlet for toggle HLS button click event
    @IBAction func btnToggleHLSTapped(_ sender: Any) {
        if !hlsEnabled {
            // sample config
            let config: HLSConfig = HLSConfig(
                layout: ConfigLayout(type: .GRID, priority: .SPEAKER, gridSize: 4),
                theme: .DARK,
                mode: .video_and_audio,
                quality: .high,
                orientation: .portrait
            )
            // start HLS; adjust the config to your needs
            self.meeting?.startHLS(config: config)
            self.hlsEnabled = true
        } else {
            self.meeting?.stopHLS()
            self.hlsEnabled = false
        }
    }
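
    // NOTE: startHLS(config:)/stopHLS() are asynchronous. The resulting state
    // transitions (HLS_STARTING, HLS_STARTED, HLS_PLAYABLE, HLS_STOPPING,
    // HLS_STOPPED) arrive via MeetingEventListener's onHlsStateChanged callback,
    // so drive any HLS UI (e.g. the button title) from there, for example:
    //
    // func onHlsStateChanged(state: HLSState, hlsUrl: HLSUrl?) {
    //     btnToggleHLS.setTitle(state == .HLS_STOPPED ? "Start HLS" : "Stop HLS", for: .normal)
    // }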

...

}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingViewController.swift</span></p></figcaption></figure><h3 id="step-5-implementing-meetingeventlistener%E2%80%8B">Step 5: Implementing <code>MeetingEventListener</code>​</h3><p>In this step, we'll create an extension for the <code>MeetingViewController</code> that conforms to MeetingEventListener, implementing the <code>onMeetingJoined</code>, <code>onMeetingLeft</code>, <code>onParticipantJoined</code>, <code>onParticipantLeft</code>, <code>onParticipantChanged</code>, <code>onHlsStateChanged</code>, and other callback methods.</p><p>When HLS is started, it triggers the <code>onHlsStateChanged</code> event of the MeetingEventListener.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">
extension MeetingViewController: MeetingEventListener {

        /// Meeting started
        func onMeetingJoined() {

            // handle local participant on start
            guard let localParticipant = self.meeting?.localParticipant else { return }
            // add to list
            participants.append(localParticipant)

            // add event listener
            localParticipant.addEventListener(self)

            localParticipant.setQuality(.high)

            if(localParticipant.isLocal){
                self.localParticipantViewContainer.isHidden = false
            } else {
                self.remoteParticipantViewContainer.isHidden = false
            }
        }

        /// Meeting ended
        func onMeetingLeft() {
            // remove listeners
            meeting?.localParticipant.removeEventListener(self)
            meeting?.removeEventListener(self)
        }

        /// A new participant joined
        func onParticipantJoined(_ participant: Participant) {
            participants.append(participant)

            // add listener
            participant.addEventListener(self)

            participant.setQuality(.high)

            if(participant.isLocal){
                self.localParticipantViewContainer.isHidden = false
            } else {
                self.remoteParticipantViewContainer.isHidden = false
            }
        }

        /// A participant left from the meeting
        /// - Parameter participant: participant object
        func onParticipantLeft(_ participant: Participant) {
            participant.removeEventListener(self)
            guard let index = self.participants.firstIndex(where: { $0.id == participant.id }) else {
                return
            }
            // remove participant from list
            participants.remove(at: index)
            // hide from ui
            UIView.animate(withDuration: 0.5){
                if(!participant.isLocal){
                    self.remoteParticipantViewContainer.isHidden = true
                }
            }
        }
        
        /// HLS state changed event
        /// - Parameters:
        ///   - state: the new HLS state
        ///   - hlsUrl: the HLS URLs, if available
        func onHlsStateChanged(state: HLSState, hlsUrl: HLSUrl?) {
            switch state {
            case .HLS_STARTING:
                print("HLS Starting")
            case .HLS_STARTED:
                print("HLS Started")
            case .HLS_PLAYABLE:
                print("HLS Playable")
            case .HLS_STOPPING:
                print("HLS Stopping")
            case .HLS_STOPPED:
                print("HLS Stopped")
            }
        }
}
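// Illustrative: when the state reaches HLS_PLAYABLE, the accompanying hlsUrl
// carries the viewer-facing playback URL. The exact property names on HLSUrl
// are SDK-defined - check its declaration before using something like:
//
// case .HLS_PLAYABLE:
//     print("HLS Playable at: \(String(describing: hlsUrl))")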

</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingViewController.swift</span></p></figcaption></figure><h3 id="step-6-implementing-participanteventlistener">Step 6: Implementing <code>ParticipantEventListener</code></h3><p>In this step, we'll add an extension for the <code>MeetingViewController</code> that conforms to ParticipantEventListener, implementing the <code>onStreamEnabled</code> and <code>onStreamDisabled</code> methods, which fire when a participant's audio or video stream is enabled or disabled.</p><p>The <code>updateUI</code> helper updates the interface (showing or hiding video views and mic indicators) based on the MediaStream state.</p><pre><code class="language-swift">extension MeetingViewController: ParticipantEventListener {

/// Participant has enabled mic, video or screenshare
/// - Parameters:
/// - stream: enabled stream object
/// - participant: participant object
func onStreamEnabled(_ stream: MediaStream, forParticipant participant: Participant) {
    updateUI(participant: participant, forStream: stream, enabled: true)
 }

/// Participant has disabled mic, video or screenshare
/// - Parameters:
///   - stream: disabled stream object
///   - participant: participant object
        
func onStreamDisabled(_ stream: MediaStream, 
			forParticipant participant: Participant) {
            
  updateUI(participant: participant, forStream: stream, enabled: false)
 }
 
}

private extension MeetingViewController {

 func updateUI(participant: Participant, forStream stream: MediaStream, enabled: Bool) {
        switch stream.kind {
        case .state(value: .video):
            if let videotrack = stream.track as? RTCVideoTrack {
                if enabled {
                    DispatchQueue.main.async {
                        UIView.animate(withDuration: 0.5) {
                            if participant.isLocal {
                                self.localParticipantViewContainer.isHidden = false
                                self.localParticipantVideoView.isHidden = false
                                self.localParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.localParticipantViewContainer.bringSubviewToFront(self.localParticipantVideoView)
                                videotrack.add(self.localParticipantVideoView)
                                self.lblLocalParticipantNoMedia.isHidden = true
                            } else {
                                self.remoteParticipantViewContainer.isHidden = false
                                self.remoteParticipantVideoView.isHidden = false
                                self.remoteParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.remoteParticipantViewContainer.bringSubviewToFront(self.remoteParticipantVideoView)
                                videotrack.add(self.remoteParticipantVideoView)
                                self.lblRemoteParticipantNoMedia.isHidden = true
                            }
                        }
                    }
                } else {
                    UIView.animate(withDuration: 0.5) {
                        if participant.isLocal {
                            self.localParticipantViewContainer.isHidden = false
                            self.localParticipantVideoView.isHidden = true
                            self.lblLocalParticipantNoMedia.isHidden = false
                            videotrack.remove(self.localParticipantVideoView)
                        } else {
                            self.remoteParticipantViewContainer.isHidden = false
                            self.remoteParticipantVideoView.isHidden = true
                            self.lblRemoteParticipantNoMedia.isHidden = false
                            videotrack.remove(self.remoteParticipantVideoView)
                        }
                    }
                }
            }
        case .state(value: .audio):
            if participant.isLocal {
                localParticipantViewContainer.layer.borderWidth = 4.0
                localParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            } else {
                remoteParticipantViewContainer.layer.borderWidth = 4.0
                remoteParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            }
        default:
            break
        }
    }
}

...
</code></pre><h3 id="known-issue%E2%80%8B">Known Issue<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#known-issue">​</a></h3><p>If the rendered video overflows its container view, add the following lines to the <code>viewDidLoad</code> method of <code>MeetingViewController.swift</code>:</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">override func viewDidLoad() {

  localParticipantVideoView.frame = CGRect(x: 10, y: 0,
      width: localParticipantViewContainer.frame.width,
      height: localParticipantViewContainer.frame.height)

  localParticipantVideoView.bounds = CGRect(x: 10, y: 0,
      width: localParticipantViewContainer.frame.width,
      height: localParticipantViewContainer.frame.height)

  localParticipantVideoView.clipsToBounds = true

  remoteParticipantVideoView.frame = CGRect(x: 10, y: 0,
      width: remoteParticipantViewContainer.frame.width,
      height: remoteParticipantViewContainer.frame.height)

  remoteParticipantVideoView.bounds = CGRect(x: 10, y: 0,
      width: remoteParticipantViewContainer.frame.width,
      height: remoteParticipantViewContainer.frame.height)

  remoteParticipantVideoView.clipsToBounds = true
}
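
// Illustrative alternative: container frames are only final after Auto Layout
// runs, so the same fix can instead live in viewDidLayoutSubviews():
//
// override func viewDidLayoutSubviews() {
//     super.viewDidLayoutSubviews()
//     localParticipantVideoView.frame = localParticipantViewContainer.bounds
//     remoteParticipantVideoView.frame = remoteParticipantViewContainer.bounds
// }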
</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingViewController.swift</span></p></figcaption></figure><blockquote><strong>TIP:</strong><br>Stuck anywhere? Check out this <a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example" rel="noopener noreferrer">example code</a> on GitHub</br></blockquote><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - videosdk-live/videosdk-rtc-ios-sdk-example: WebRTC based video conferencing SDK for iOS (Swift / Objective C)</div><div class="kg-bookmark-description">WebRTC based video conferencing SDK for iOS (Swift / Objective C) - videosdk-live/videosdk-rtc-ios-sdk-example</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Build an iOS Live Streaming App with VideoSDK"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/3d2f5eef43ad3d03fbe693ee2c8633053215e90252a63bddc775d8b8d8a7e380/videosdk-live/videosdk-rtc-ios-sdk-example" alt="How to Build an iOS Live Streaming App with VideoSDK" onerror="this.style.display = 'none'"/></div></a></figure><p>Now you have successfully created an iOS Live Streaming Application with the VideoSDK.</p><h2 id="conclusion">Conclusion</h2><p>VideoSDK offers a seamless solution for building iOS live-streaming applications. With its user-friendly interface and robust features, developers can efficiently create high-quality streaming experiences for users. 
Partnering with an iOS development agency like <a href="https://www.simpalm.com/services/iphone-app-development-company"><strong>Simpalm</strong></a> can further enhance the development process, ensuring top-notch results.</p><p>Unlock the full potential of VideoSDK today and craft seamless video experiences! <a href="https://app.videosdk.live/dashboard"><strong>Sign up</strong></a> now to receive 10,000 free minutes and take your video app to new heights.</p>]]></content:encoded></item><item><title><![CDATA[Quality Comparison: VideoSDK vs Zoom in Web 1:1 Video Calls]]></title><description><![CDATA[If you're here, it's likely that the sudden news about Twilio Programmable Video has thrown a curveball into your quarterly plans. We get it - disruptions in your tech stack can be a game-changer. You might be considering Zoom as your next service provider, but between this twist, remember, it's not just a change but an opportunity to upgrade.

That's why we've conducted a comparison between VideoSDK and Zoom. Check the results before making your decision – let's ensure this unexpected turn beco]]></description><link>https://www.videosdk.live/blog/quality-comparison-videosdk-vs-zoom-in-web-1-1-video-calls</link><guid isPermaLink="false">65ab70ca6c68429b5fdf18a8</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 13 Jan 2025 06:17:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/VideoSDK-vs-Zoom-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/01/VideoSDK-vs-Zoom-1.png" alt="Quality Comparison: VideoSDK vs Zoom in Web 1:1 Video Calls"/><p>If you're here, it's likely that the sudden news about Twilio Programmable Video has thrown a curveball into your quarterly plans. We get it - disruptions in your tech stack can be a game-changer. You might be considering Zoom as your next service provider, but between this twist, remember, it's not just a change but an opportunity to upgrade. </p><p>That's why we've conducted a comparison between VideoSDK and Zoom. Check the results before making your decision – let's ensure this unexpected turn becomes a positive step forward.</p><h2 id="quality-is-the-core">Quality is the Core!</h2><p>When it comes to video calls, quality is everything. You can throw in a bunch of cool features or fancy tech, but if you lack quality, it's a deal-breaker. No arguments there! We understand this, and that's why in this blog, we're diving into the nitty-gritty of quality.</p><p>Numerous factors can affect the quality of a video call, with the most basic connection being tied to your network. While considering Zoom as your potential service provider, compromising on quality should never be on the table.</p><p>That's why we're putting VideoSDK against Zoom in a quality showdown under various network bandwidths. 
See for yourself how VideoSDK not only keeps up but outperforms standards previously set by Twilio and Zoom.</p><h2 id="setting-up-the-platform">Setting up the platform</h2><p>Before we jump into the comparison, let's quickly examine key parameters for our evaluation, including devices, configurations, network throttling, and more.</p><ul><li>We'll be covering standard one-to-one web video calls</li><li>Both the sender and receiver are using MacBook Pro devices. </li><li>In maintaining an unbiased assessment, VideoSDK and Zoom App configurations have been set to default. Since their default settings are almost identical.</li><li>Additionally, for measuring network bandwidth, we'll be using <a href="https://fast.com">fast.com</a></li></ul><h2 id="test-scenarios">Test Scenarios</h2><p>Now that we've covered the setup, let's cover what scenarios we'll be creating to compare call quality between <a href="https://www.videosdk.live/alternative/zoom-vs-videosdk">VideoSDK and Zoom</a>.</p><p>Firstly, we'll evaluate metrics under normal conditions to establish a baseline.</p><p>Following that, we'll introduce network bandwidth throttling from only the narrator's side, simulating various scenarios at - </p><ul><li>1mbps</li><li>500kbps</li><li>250kbps</li></ul><h2 id="results">Results</h2><p>Here's a look at the important metrics we followed in different network situations, showing how VideoSDK and Zoom tackled challenges. Watch Rajan in the next video showcasing VideoSDK, and Ahmed demonstrating how Zoom performed.</p><!--kg-card-begin: html--><div>
<video src="https://assets.videosdk.live/alternative-page/videosdk-quality-comparison.mp4" controls="true" width="100%"/>
</div>
<div>
<video src="https://assets.videosdk.live/alternative-page/zoom-videosdk-comparison.mp4" controls="true" width="100%"/>
</div><!--kg-card-end: html--><p>Let's break down the results, exploring how each handles FPS, latency, and packet loss under different network bandwidths.</p><h3 id="frames-per-second-fps">Frames Per Second (FPS)</h3><p>As shown in the video, VideoSDK outperformed Zoom in frames per second at each bandwidth.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/VideoSDK-vs-Zoom---fps-1.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Zoom in Web 1:1 Video Calls" loading="lazy" width="1280" height="638"/></figure><h3 id="latency">Latency</h3><p>VideoSDK clearly outperformed Zoom in managing latency by a considerable margin. This outcome was somewhat anticipated, given Zoom's lack of globally connected regions.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/VideoSDK-vs-Zoom---latency.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Zoom in Web 1:1 Video Calls" loading="lazy" width="1280" height="638"/></figure><h3 id="packet-loss">Packet loss</h3><p>In another instance of the same issue, VideoSDK demonstrated superiority in handling packet loss compared to Zoom.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/VideoSDK-vs-Zoom---packet-loss-1.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Zoom in Web 1:1 Video Calls" loading="lazy" width="1280" height="638"/></figure><h2 id="conclusion">Conclusion</h2><p>Upon reviewing the videos and closely tracking the metrics, it's evident that <a href="https://videosdk.live">VideoSDK</a> outperforms Zoom significantly, especially in low network conditions, across various quality aspects. However, it's important to note that Zoom faces limitations due to its approach to global latency issues. 
Zoom requires users to select servers before initiating meetings, which can lead to connectivity challenges, even in well-connected scenarios where users are connecting to servers thousands of kilometers away.</p><p>VideoSDK addresses this challenge effectively by allowing participants to connect to the nearest servers. With efficient server-to-server communication for data sharing, VideoSDK achieves lower latency at a higher quality, providing a robust solution to global latency issues.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/Conclusion.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Zoom in Web 1:1 Video Calls" loading="lazy" width="1493" height="1203"/></figure>]]></content:encoded></item><item><title><![CDATA[Quality Comparison: VideoSDK vs Twilio in Web 1:1 Video Calls]]></title><description><![CDATA[If you're here, it's likely that the sudden news about Twilio Programmable Video has thrown a curveball into your quarterly plans. We get it - disruptions in your tech stack can be a game-changer. But worry not! VideoSDK is here, not just for migration but to enhance your experience – turning this unexpected twist into an opportunity for an upgrade.


Quality is the Core!

When it comes to video calls, quality is everything. You can throw in a bunch of cool features or fancy tech, but if you lac]]></description><link>https://www.videosdk.live/blog/comparing-call-quality-in-web-1-1-video-calls-videosdk-vs-twilio</link><guid isPermaLink="false">65ab57c16c68429b5fdf17fa</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 13 Jan 2025 06:17:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/VideoSDK-vs-Twilio-3.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/01/VideoSDK-vs-Twilio-3.png" alt="Quality Comparison: VideoSDK vs Twilio in Web 1:1 Video Calls"/><p>If you're here, it's likely that the sudden news about Twilio Programmable Video has thrown a curveball into your quarterly plans. We get it - disruptions in your tech stack can be a game-changer. But worry not! VideoSDK is here, not just for migration but to enhance your experience – turning this unexpected twist into an opportunity for an upgrade.</p><h2 id="quality-is-the-core">Quality is the Core!</h2><p>When it comes to video calls, quality is everything. You can throw in a bunch of cool features or fancy tech, but if you lack in quality, it's a deal-breaker. No arguments there! We understand this, and that's why in this blog, we're diving into the nitty-gritty of quality.</p><p>Numerous factors can affect the quality of a video call, with the most basic connection being tied to your network. Thus, while exploring options for migrating to different service providers, you must not compromise the quality. </p><p>That's why we're putting VideoSDK against Twilio in a quality showdown under various network bandwidths. 
See for yourself how VideoSDK not only keeps up but outperforms Twilio's standards.</p><h2 id="setting-up-the-platform">Setting up the platform</h2><p>Before we jump into the comparison, let's quickly examine the key parameters for our evaluation, including devices, configurations, network throttling, and more.</p><ul><li>We'll be covering standard one-to-one web video calls.</li><li>Both the sender and receiver are using MacBook Pro devices.</li><li>To keep the assessment unbiased, both VideoSDK and Twilio were left at their default configurations, which are almost identical.</li><li>Additionally, for measuring network bandwidth, we'll be using <a href="https://fast.com">fast.com</a></li></ul><h2 id="test-scenarios">Test Scenarios</h2><p>Now that we've covered the setup, let's define the scenarios we'll create to compare call quality between VideoSDK and Twilio.</p><p>First, we'll evaluate metrics under normal conditions to establish a baseline.</p><p>Following that, we'll introduce network bandwidth throttling from only the narrator's side, simulating scenarios at:</p><ul><li>1 Mbps</li><li>500 Kbps</li><li>250 Kbps</li></ul><h2 id="results">Results</h2><p>Here's a look at the key metrics we tracked under different network conditions, showing how VideoSDK and Twilio handled each challenge. In the videos below, watch Rajan demonstrate how VideoSDK and Twilio performed.</p><!--kg-card-begin: html--><div>
<video src="https://assets.videosdk.live/alternative-page/videosdk-quality-comparison.mp4" controls="true" width="100%"/>
</div>
<div>
<video src="https://assets.videosdk.live/alternative-page/twilio-quality-comparison.mp4" controls="true" width="100%"/>
</div><!--kg-card-end: html--><p>Let's break down the results, exploring how each handles FPS, latency, and packet loss under different network bandwidths.</p><h3 id="frames-per-second-fps">Frames Per Second (FPS)</h3><p>As shown in the video, VideoSDK outshines Twilio in managing Frames Per Second (FPS) at different bandwidths. The comparison highlights VideoSDK's ability to deliver smoother, more consistent frame rates, and at better resolutions too.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/VideoSDK-vs-Twilio---fps-3.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Twilio in Web 1:1 Video Calls" loading="lazy" width="1280" height="638"/></figure><h3 id="latency">Latency</h3><p>The latency battle between <a href="https://www.videosdk.live/alternative/twilio-vs-videosdk">VideoSDK and Twilio</a> is neck and neck, with VideoSDK performing slightly better at higher resolutions.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/VideoSDK-vs-Twilio---latency.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Twilio in Web 1:1 Video Calls" loading="lazy" width="1280" height="638"/></figure><h3 id="packet-loss">Packet-loss</h3><p>Both VideoSDK and Twilio managed packet loss well, with Twilio edging slightly ahead, but at the expense of lower bitrate and resolution.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/VideoSDK-vs-Twilio---packet-loss.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Twilio in Web 1:1 Video Calls" loading="lazy" width="1280" height="638"/></figure><h2 id="conclusion">Conclusion</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Twilio-conclusion.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Twilio in Web 1:1 Video Calls" loading="lazy"
width="1493" height="1203"/></figure><p>Upon reviewing the videos and closely tracking the metrics, it's evident that <a href="https://videosdk.live">VideoSDK</a> and Twilio fought a neck-and-neck battle. VideoSDK won on resolution, bitrate, and FPS, even under lower network bandwidth conditions. Twilio, on the other hand, kept packet loss and latency in check, but only by dropping to lower bitrates and resolutions. Despite these trade-offs, VideoSDK emerged as the provider offering an overall enhanced experience for viewers.</p>]]></content:encoded></item><item><title><![CDATA[How to Implement Low Latency Video Calls with VideoSDK?]]></title><description><![CDATA[Discover how to minimize latency in WebRTC applications using VideoSDK and implement low-latency video calls with step-by-step instructions.]]></description><link>https://www.videosdk.live/blog/implement-low-latency-video-calls</link><guid isPermaLink="false">669a0bfd20fab018df10f9ad</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Sun, 12 Jan 2025 13:03:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/Low-Latency-Integration.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-low-latency">What is Low Latency? </h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Low-Latency-Integration.jpg" alt="How to Implement Low Latency Video Calls with VideoSDK?"/><p>Low latency refers to the minimal delay between the transmission and reception of data in a communication system. In the context of <a href="https://www.videosdk.live/developer-hub/webrtc">WebRTC</a> (Web Real-Time Communication), low latency is crucial for providing a seamless, real-time experience for users engaged in video calls, audio chats, or any form of live interaction over the web.
The goal is to reduce the time it takes for data to travel from its source to its destination, ensuring that conversations flow naturally and actions appear instantaneous.</p><h2 id="factors-affecting-latency">Factors Affecting Latency </h2><p>Several key factors influence the latency in WebRTC applications. Network conditions play a significant role, with factors such as geographical distance between participants, internet connection speed, and network congestion all impacting the overall latency.</p><p>Additionally, the processing power of devices, the efficiency of encoding and decoding algorithms, and the performance of the WebRTC infrastructure can affect latency. By optimizing these elements, developers can work towards achieving the lowest possible latency for their WebRTC applications.</p><h2 id="low-latency-in-videosdk">Low Latency in VideoSDK</h2><p><a href="https://www.videosdk.live/">VideoSDK</a>, like other WebRTC-based solutions, aims to minimize latency as much as possible. However, the exact "lowest latency" can vary depending on several factors and real-world conditions. That said, here's what we can say about VideoSDK's latency performance:</p><ul><li><strong>Lowest possible latency</strong>: Can reach as low as 12 milliseconds in ideal scenarios with internal servers.</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-29.png" class="kg-image" alt="How to Implement Low Latency Video Calls with VideoSDK?" 
loading="lazy" width="492" height="459"/></figure><p>Our<strong> average latency</strong> typically stays below 99 milliseconds end-to-end in optimal conditions.</p><h2 id="%E2%9A%99%EF%B8%8F-implementing-low-latency-video-calls-with-videosdk">⚙️ Implementing Low-Latency Video Calls with VideoSDK</h2><p>Now that we understand the factors affecting latency and VideoSDK's performance capabilities, let's walk through the process of setting up a low-latency video call using VideoSDK's prebuilt solution. This approach allows you to quickly implement real-time communication while leveraging VideoSDK's optimized infrastructure for minimal latency.</p><h3 id="step-1-clone-the-repository">Step 1: Clone the Repository</h3><p>First, clone the VideoSDK examples repository and navigate to the JavaScript directory:</p><pre><code class="language-bash">git clone https://github.com/videosdk-live/videosdk-rtc-prebuilt-examples.git
cd videosdk-rtc-prebuilt-examples
cd javascript</code></pre><h3 id="step-2-generate-api-key">Step 2: Generate API Key</h3><p>To generate your API key, visit <a href="https://www.videosdk.live/">VideoSDK's Dashboard</a>. This key is essential for accessing VideoSDK's low-latency infrastructure and optimizing your real-time communication application.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/API-generation.gif" class="kg-image" alt="How to Implement Low Latency Video Calls with VideoSDK?" loading="lazy" width="600" height="338"/></figure><h3 id="step-3-configure-the-api-key">Step 3: Configure the API Key</h3><p>Open the <code>index.html</code> file and update it with your API key generated from the VideoSDK dashboard. Also, set the meeting ID and participant name:</p><pre><code class="language-js">// Set apikey, meetingId and participant name
const apiKey = "&lt;API KEY&gt;"; // generated from app.videosdk.live
const meetingId = "milkyway";
const name = "John Doe";</code></pre><p>Replace <code>&lt;API KEY&gt;</code> with your actual API key. This key is crucial for authenticating your application with VideoSDK's servers and ensuring secure, low-latency communication.</p><h3 id="step-4-setup-a-local-server">Step 4:  Setup a Local Server</h3><p>To run your application, you'll need a local HTTP server. If you don't have one installed, you can use <code>live-server</code>. Install it globally using npm:</p><pre><code class="language-terminal">npm install -g live-server
live-server</code></pre><p>This will launch your application in your default web browser, allowing you to test your low-latency video call implementation.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/realtime-stats.gif" class="kg-image" alt="How to Implement Low Latency Video Calls with VideoSDK?" loading="lazy" width="600" height="338"/></figure><h2 id="conclusion">Conclusion</h2><p>Implementing low-latency video calls is essential for delivering a seamless, real-time communication experience. VideoSDK offers a powerful and easy-to-use solution for developers looking to integrate high-quality video calls into their applications.</p><p>Remember that while this setup provides a great starting point, achieving the lowest possible latency in production environments may require further optimization based on your specific use case and the factors we discussed earlier. With VideoSDK, you can ensure that your video call application meets the highest standards of performance and reliability, providing your users with the best possible experience.</p><p>If you are new to VideoSDK, just <a href="https://app.videosdk.live/signup">sign up</a> and get 10,000 free minutes every month.</p>]]></content:encoded></item><item><title><![CDATA[What is Jitter Buffer?]]></title><description><![CDATA[Discover what a jitter buffer is and how it optimizes network communication. 
Learn its role in maintaining audio and video quality seamlessly.]]></description><link>https://www.videosdk.live/blog/what-is-jitter-buffer</link><guid isPermaLink="false">6676639120fab018df10ee53</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Sun, 12 Jan 2025 11:14:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/What-is-Jitter-Buffer_.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="introduction-to-jitter-and-jitter-buffers">Introduction to Jitter and Jitter Buffers</h2>
<!--kg-card-end: markdown--><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/What-is-Jitter-Buffer_.png" alt="What is Jitter Buffer?"/><p>In the world of digital communications, particularly in systems like Voice over Internet Protocol (VoIP), data packets need to travel across networks to reach their destination. However, these packets sometimes face delays and may not arrive in the order they were sent. This disorder is known as “jitter,” which can significantly degrade the quality of voice and video communications. To manage this, a crucial component called a jitter buffer is deployed within the network infrastructure.</p><p>Jitter refers to the variability in packet delay at the receiving end of a conversation, which can result in garbled or scrambled communications. For VoIP systems, where clarity and timing are crucial, excessive jitter can make a conversation difficult to understand. Jitter buffers combat this issue by temporarily holding incoming packets to realign out-of-order packets into the correct order, thus smoothing out the voice transmission and ensuring a clearer conversation.</p><p>Understanding both jitter and the function of jitter buffers is essential for maintaining the integrity of VoIP communications. By effectively managing jitter, businesses and individuals can ensure high-quality audio and video transmissions, crucial for everything from business meetings to personal chats. As we dive deeper into the workings of jitter buffers, it’s important to recognize their role not just as a fix but as a proactive measure in network setup and maintenance.</p><!--kg-card-begin: markdown--><h2 id="understanding-jitter-and-jitter-buffers">Understanding Jitter and Jitter Buffers</h2>
<!--kg-card-end: markdown--><p>Jitter is a common challenge in network communications, affecting how data packets are transmitted over the internet. Particularly in Voice over Internet Protocol (VoIP) systems, jitter can cause packets to arrive at their destination at different intervals, which can severely impact the quality of the transmitted voice or video.</p><!--kg-card-begin: markdown--><h3 id="what-is-jitter">What is Jitter?</h3>
<!--kg-card-end: markdown--><p>Jitter in networking terms refers to the variation in time between packets arriving, caused by network congestion, timing drift, or route changes. This inconsistency can lead to choppy audio or video, echoes, and even dropped calls. The phenomenon becomes particularly problematic in real-time communications, where timing is crucial for the quality of the conversation.</p><!--kg-card-begin: markdown--><h3 id="role-of-jitter-buffers">Role of Jitter Buffers</h3>
<!--kg-card-end: markdown--><p>To mitigate the effects of jitter, jitter buffers are employed within the network infrastructure. A jitter buffer temporarily stores arriving packets to compensate for differences in packet arrival time before passing them to the user. This process allows the packets to be delivered in a more consistent flow, enhancing the overall quality of the voice or video call.</p><!--kg-card-begin: markdown--><h2 id="types-and-functions-of-jitter-buffers">Types and Functions of Jitter Buffers</h2>
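
Before comparing buffer types, the hold-and-reorder behaviour described above can be sketched in a few lines of JavaScript. This is a deliberately simplified toy, not VideoSDK's implementation; the `JitterBuffer` class and its `push`/`pop` methods are names invented here for illustration:

```javascript
// Toy jitter buffer: holds arriving packets briefly so they can be
// released in sequence-number order, smoothing out network reordering.
class JitterBuffer {
  constructor(depth) {
    this.depth = depth; // packets to accumulate before releasing any
    this.pending = [];  // held packets, kept sorted by sequence number
  }
  push(packet) {
    this.pending.push(packet);
    this.pending.sort((a, b) => a.seq - b.seq); // reorder on arrival
  }
  // Release the oldest packet once the buffer is full enough;
  // otherwise keep waiting for stragglers.
  pop() {
    return this.pending.length >= this.depth ? this.pending.shift() : null;
  }
}

// Packets arrive out of order (seq 2 before seq 1)...
const jb = new JitterBuffer(2);
jb.push({ seq: 2, payload: "world" });
jb.push({ seq: 1, payload: "hello" });
// ...but are handed onward in order.
const first = jb.pop(); // { seq: 1, payload: "hello" }
```

A real buffer would release packets on a timer and discard packets that arrive too late; here the fixed `depth` stands in for the buffering delay that trades a little latency for smoothness.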
<!--kg-card-end: markdown--><p>Jitter buffers are essential tools in network management, especially where VoIP and other real-time services are concerned. They come in two primary types: static and dynamic.</p><!--kg-card-begin: markdown--><h3 id="static-vs-dynamic-jitter-buffers">Static vs. Dynamic Jitter Buffers</h3>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h4 id="static-jitter-buffers">Static Jitter Buffers</h4>
<!--kg-card-end: markdown--><p>Static Jitter Buffers have a fixed size and delay capacity which are set during their configuration. They are simpler but less flexible, and they might not adapt well to changes in network conditions, which can vary widely throughout the day or even during a single call.</p><!--kg-card-begin: markdown--><h4 id="dynamic-jitter-buffers">Dynamic Jitter Buffers</h4>
<!--kg-card-end: markdown--><p>Dynamic Jitter Buffers, on the other hand, adjust their buffer size based on network conditions. This adaptability makes them more effective at handling jitter in environments where network performance is unpredictable. Dynamic buffers can improve the performance of VoIP applications by adjusting in real time to the current state of the network.</p><!--kg-card-begin: markdown--><h3 id="importance-of-jitter-buffer-in-voip">Importance of Jitter Buffer in VoIP</h3>
<!--kg-card-end: markdown--><p>In VoIP systems, the quality of voice transmission depends significantly on the consistency of packet delivery. Jitter buffers play a critical role in ensuring that voice packets are delivered smoothly and in sequence, thereby maintaining the clarity and understandability of the conversation. They are particularly vital in professional settings where high-quality communication is essential for effective collaboration and decision-making.</p><!--kg-card-begin: markdown--><h2 id="measuring-and-managing-jitter">Measuring and Managing Jitter</h2>
<!--kg-card-end: markdown--><p>Understanding and managing jitter involves measuring it accurately and implementing strategies to mitigate its effects. Effective jitter management can substantially improve the quality of VoIP communications and enhance user experience.</p><!--kg-card-begin: markdown--><h3 id="how-to-measure-jitter">How to Measure Jitter</h3>
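
In its simplest form, jitter can be computed as the mean absolute difference between successive packet transit delays. A small illustrative helper (the function name `meanJitter` is ours, not taken from any particular tool):

```javascript
// Mean jitter: average absolute difference between consecutive
// one-way transit delays (send -> receive), in the input's unit.
function meanJitter(delaysMs) {
  if (delaysMs.length < 2) return 0; // no variation measurable
  let total = 0;
  for (let i = 1; i < delaysMs.length; i++) {
    total += Math.abs(delaysMs[i] - delaysMs[i - 1]);
  }
  return total / (delaysMs.length - 1);
}

// Transit delays of five packets, in milliseconds.
// Differences: 15, 15, 20, 20 -> mean jitter 17.5 ms.
meanJitter([30, 45, 30, 50, 30]);
```

Production tools (and RTP's RFC 3550 estimator) use smoothed running averages rather than a plain mean, but the idea is the same: measure variation in delay, not the delay itself.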
<!--kg-card-end: markdown--><p>Jitter is typically measured as the average variation in packet-to-packet delay. Tools like ping tests and jitter calculators can help network administrators determine the current jitter levels on their networks. Understanding these metrics is crucial for setting up appropriate buffers and making informed decisions about network configurations.</p><!--kg-card-begin: markdown--><h3 id="strategies-to-reduce-jitter">Strategies to Reduce Jitter</h3>
<!--kg-card-end: markdown--><p>Several techniques can be employed to reduce jitter:</p><ol><li><strong>Quality of Service (QoS) Settings: </strong>By prioritizing VoIP and other real-time traffic over less time-sensitive data, QoS settings help maintain a stable and consistent packet flow.</li><li><strong>Network Infrastructure Optimization: </strong>Upgrading routers, switching to wired connections, and ensuring sufficient bandwidth are all critical steps in reducing jitter. Additionally, configuring network devices correctly to handle high-priority traffic effectively can prevent packets from being delayed or lost.</li></ol><p>By comprehending the types of jitter buffers and understanding how to measure and manage jitter, network administrators can significantly enhance the stability and clarity of VoIP calls. This section of the article provides a deeper insight into the technical solutions available to combat jitter, ensuring high-quality communication in various network environments.</p><!--kg-card-begin: markdown--><h3 id="common-issues-and-solutions-with-jitter-buffers">Common Issues and Solutions with Jitter Buffers</h3>
<!--kg-card-end: markdown--><p>While jitter buffers significantly enhance the quality of VoIP communications, they also introduce their own set of challenges. Understanding these issues and implementing effective solutions are crucial for maintaining optimal network performance.</p><!--kg-card-begin: markdown--><h3 id="challenges-with-jitter-buffers">Challenges with Jitter Buffers</h3>
<!--kg-card-end: markdown--><p>One of the main challenges with using jitter buffers is the additional latency they introduce into the communication stream. While they compensate for jitter by delaying the packet delivery slightly, this can sometimes lead to a perceptible delay in conversations. This delay, especially if improperly managed, can disrupt the natural flow of interactive communications such as video conferences or collaborative virtual workspaces.</p><p>Another issue arises from the balance between buffer size and delay. Larger buffers provide more room to smooth out packet delivery times but at the cost of increased delay. Conversely, smaller buffers reduce delay but might not compensate adequately for high levels of jitter, leading to degraded audio or video quality.</p><!--kg-card-begin: markdown--><h3 id="optimizing-jitter-buffer-settings">Optimizing Jitter Buffer Settings</h3>
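
To make the delay-versus-loss trade-off concrete, here is one hypothetical adaptive-sizing heuristic: grow the buffer toward a multiple of the measured jitter when the network is rough, and shrink it when conditions calm down. This is an illustrative policy sketch, not VideoSDK's or any vendor's actual algorithm; `nextBufferMs` and its parameters are invented names:

```javascript
// Hypothetical adaptive sizing: target roughly 2x the measured jitter,
// clamped so the buffer neither adds excessive latency nor underruns.
function nextBufferMs(currentMs, measuredJitterMs, minMs = 20, maxMs = 200) {
  const target = measuredJitterMs * 2;        // headroom over observed jitter
  // Move halfway toward the target each adjustment to damp oscillation.
  const next = currentMs + (target - currentMs) / 2;
  return Math.min(maxMs, Math.max(minMs, next));
}

let buf = 60;
buf = nextBufferMs(buf, 50); // jitter spiked: grow toward 100 ms -> 80
buf = nextBufferMs(buf, 10); // network calmed: shrink toward 20 ms -> 50
```

Halving the distance to the target each step avoids the buffer chasing every momentary spike, while the clamp bounds the extra latency the buffer can ever introduce.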
<!--kg-card-end: markdown--><p>To address these issues, it's essential to optimize jitter buffer settings according to the specific needs and conditions of the network:</p><ol><li><strong>Dynamic Adaptation:</strong> Implementing adaptive jitter buffers that can adjust their size dynamically based on real-time network conditions can help balance the trade-off between delay and jitter compensation.</li><li><strong>Network Assessments:</strong> Regular network performance assessments can help identify the optimal buffer settings for current conditions. This includes monitoring network traffic patterns and adjusting buffer sizes to prevent both excessive delay and packet loss.</li></ol><!--kg-card-begin: markdown--><h2 id="conclusion-the-critical-role-of-jitter-buffers-in-voip">Conclusion: The Critical Role of Jitter Buffers in VoIP</h2>
<!--kg-card-end: markdown--><p>Jitter buffers are vital components in the architecture of modern VoIP systems, playing a critical role in ensuring high-quality, reliable communications. By compensating for jitter, these buffers help to maintain sound and video quality over IP networks, which is crucial for business communications, teleconferencing, and other real-time services.</p><p>The effective use of jitter buffers, however, requires a thorough understanding of both their benefits and their limitations. Network administrators must be adept at configuring these buffers correctly and adjusting them as network conditions change. With the right settings and strategies, jitter buffers can significantly improve the user experience by providing clear and uninterrupted communication.</p><p>As VoIP and other real-time communication technologies continue to evolve, the management of jitter and the optimization of jitter buffers will remain key areas of focus for ensuring seamless digital interactions. Understanding the mechanisms of jitter buffers and implementing them wisely is essential for any organization that relies on stable and clear digital communication platforms.</p><p>By addressing the common challenges associated with jitter buffers and optimizing their functionality, businesses can enhance their communication systems, leading to improved productivity and satisfaction among users.</p><!--kg-card-begin: markdown--><h2 id="frequently-asked-questionsfaqs-about-jitter-buffers">Frequently Asked Questions (FAQs) About Jitter Buffers</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="1-what-is-a-jitter-buffer">1. What is a jitter buffer?</h3>
<!--kg-card-end: markdown--><p>A jitter buffer is a shared data area where voice packets can be collected, stored, and sent to the voice processor in evenly spaced intervals. By doing this, jitter buffers reduce the effects of jitter in VoIP communications, such as packet delay and out-of-order packet arrival, which are crucial for maintaining high-quality audio transmission.</p><!--kg-card-begin: markdown--><h3 id="2-how-does-a-jitter-buffer-improve-voip-call-quality">2. How does a jitter buffer improve VoIP call quality?</h3>
<!--kg-card-end: markdown--><p>Jitter buffers manage the timing of voice packet delivery, smoothing out the arrival time of these packets at the receiving end. This management helps to ensure that voice calls remain clear and free of distortions like echoing or choppiness, which are often caused by network irregularities.</p><!--kg-card-begin: markdown--><h3 id="3-are-there-different-types-of-jitter-buffers">3. Are there different types of jitter buffers?</h3>
<!--kg-card-end: markdown--><p>Yes, there are mainly two types of jitter buffers used in VoIP technologies: static and dynamic. Static jitter buffers have a fixed size that does not change during the call, while dynamic jitter buffers can adjust their size based on the changing conditions of the network traffic. The adaptability of dynamic buffers makes them more suitable for networks with highly variable delay patterns.</p><!--kg-card-begin: markdown--><h3 id="4-when-should-i-use-a-jitter-buffer">4. When should I use a jitter buffer?</h3>
<!--kg-card-end: markdown--><p>A jitter buffer should be used in any real-time audio or video communication setup, particularly in VoIP systems where timing and order of packet delivery are critical for quality. They are essential in environments where packet delivery is irregular and can significantly impact the communication experience.</p><!--kg-card-begin: markdown--><h3 id="5-can-jitter-buffers-eliminate-all-voip-issues">5. Can jitter buffers eliminate all VoIP issues?</h3>
<!--kg-card-end: markdown--><p>While jitter buffers effectively manage and mitigate many of the issues caused by jitter, they are not a cure-all solution. They do not address the root causes of jitter such as network congestion or inadequate bandwidth. To fully optimize VoIP performance, additional measures such as Quality of Service (QoS) adjustments and proper network configuration are necessary.</p><!--kg-card-begin: markdown--><h3 id="6-how-do-i-configure-a-jitter-buffer">6. How do I configure a jitter buffer?</h3>
<!--kg-card-end: markdown--><p>Configuring a jitter buffer typically involves setting its size and the delay it should introduce. This setup can be managed either automatically by dynamic jitter buffers or manually in the case of static buffers. The specific configuration steps can vary depending on the equipment and software in use, so consulting with a network professional or the documentation of the VoIP system is recommended.</p>]]></content:encoded></item><item><title><![CDATA[Post-call Transcription & Summary in JavaScript]]></title><description><![CDATA[Implement Post-call Transcription in JavaScript to convert audio calls into text seamlessly. Enhance accessibility, accuracy, and workflow efficiency effortlessly.]]></description><link>https://www.videosdk.live/blog/post-call-transcription-in-javascript</link><guid isPermaLink="false">6682a30020fab018df10f2a9</guid><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Sun, 12 Jan 2025 09:46:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/Post-time-transcription-and-summary-3.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Post-time-transcription-and-summary-3.png" alt="Post-call Transcription & Summary in JavaScript"/><p>Post-call transcription and summary is a powerful feature provided by <a href="https://www.videosdk.live/">VideoSDK</a> that allows users to generate detailed transcriptions and summaries of recorded meetings after they have concluded. 
This feature is particularly beneficial for capturing and documenting important information discussed during meetings, ensuring that nothing is missed and that there is a comprehensive record of the conversation.</p><h3 id="how-post-call-transcription-works">How Post-Call Transcription Works?</h3><p><strong>Post-call transcription</strong> involves processing the recorded audio or video content of a meeting to produce a textual representation of the conversation. Here’s a step-by-step breakdown of how it works:</p><ol><li><strong>Recording the Meeting:</strong> During the meeting, the audio and video are recorded. This can include everything that was said and any shared content, such as presentations or screen shares.</li><li><strong>Uploading the Recording:</strong> Once the meeting is over, the recorded file is uploaded to the VideoSDK platform. This can be done automatically or manually, depending on the configuration.</li><li><strong>Transcription Processing:</strong> The uploaded recording is then processed by VideoSDK’s transcription engine. This engine uses advanced speech recognition technology to convert spoken words into written text.</li><li><strong>Retrieving the Transcription:</strong> After the transcription process is complete, the textual representation of the meeting is made available. 
This text can be accessed via the VideoSDK API and used in various applications.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e-1.png" class="kg-image" alt="Post-call Transcription & Summary in JavaScript" loading="lazy" width="2906" height="1446"/></figure><h3 id="benefits-of-post-call-transcription">Benefits of Post-Call Transcription</h3><ul><li><strong>Accurate Documentation:</strong> Provides a precise record of what was discussed, which is invaluable for meeting minutes, legal documentation, and reference.</li><li><strong>Enhanced Accessibility:</strong> Makes content accessible to those who may have missed the meeting or have hearing impairments.</li><li><strong>Easy Review and Analysis:</strong> Enables quick review of key points and decisions made during the meeting without having to re-watch the entire recording.</li></ul><h2 id="lets-get-started">Let's Get Started</h2><p>VideoSDK empowers you to seamlessly integrate the video calling feature into your JavaScript application within minutes.</p><p>In this quickstart, you'll explore the group calling feature of VideoSDK.
Follow the step-by-step guide to integrate it within your application.</p><h3 id="prerequisites%E2%80%8B">Prerequisites<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#prerequisites">​</a></h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>A VideoSDK developer account (if you don't have one, sign up via the <strong><a href="https://app.videosdk.live/" rel="noopener noreferrer">VideoSDK Dashboard</a></strong>).</li><li>Basic understanding of JavaScript.</li><li><strong><a href="https://www.npmjs.com/package/@videosdk.live/js-sdk">JavaScript VideoSDK</a></strong></li><li>Node and NPM installed on your device.</li><li>A token generated from the VideoSDK <a href="https://app.videosdk.live/api-keys">dashboard</a>.</li></ul><h2 id="getting-started-with-the-code%E2%80%8B">Getting Started with the Code!<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#getting-started-with-the-code">​</a></h2><p>Follow the steps to create the environment necessary to add video calls to your app. You can also find the code sample for the <a href="https://github.com/videosdk-live/quickstart/tree/main/js-rtc" rel="noopener noreferrer">quickstart here</a>.</p><p>First, create an empty project folder using <code>mkdir folder_name</code> at your preferred location.</p><h3 id="install-video-sdk%E2%80%8B">Install Video SDK<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#install-video-sdk">​</a></h3><p>Import VideoSDK using the <code>&lt;script&gt;</code> tag as shown below, or install it with <code>npm install @videosdk.live/js-sdk</code> (run the command from your app directory).</p><pre><code class="language-html">&lt;html&gt;
  &lt;head&gt;
    &lt;!--.....--&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;!--.....--&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.89/videosdk.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><h2 id="structure-of-the-project%E2%80%8B">Structure of the project<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#structure-of-the-project">​</a></h2><p>Your project structure should look like this.</p><pre><code class="language-Structure">  root
   ├── index.html
   ├── config.js
   ├── index.js</code></pre><p>You will be working on the following files:</p><ul><li>index.html: Responsible for creating a basic UI.</li><li>config.js: Responsible for storing the token.</li><li>index.js: Responsible for rendering the meeting view and the join meeting functionality.</li></ul><h3 id="step-1-design-the-user-interface-ui%E2%80%8B">Step 1: Design the user interface (UI)<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-1--design-the-user-interface-ui">​</a></h3><p>Create an HTML file containing the screens, <code>join-screen</code> and <code>grid-screen</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-html">&lt;!DOCTYPE html&gt;
&lt;html&gt;

&lt;head&gt; &lt;/head&gt;

&lt;body&gt;
    &lt;div id="join-screen"&gt;
        &lt;!-- Create new Meeting Button --&gt;
        &lt;button id="createMeetingBtn"&gt;New Meeting&lt;/button&gt;
        OR
        &lt;!-- Join existing Meeting --&gt;
        &lt;input type="text" id="meetingIdTxt" placeholder="Enter Meeting id" /&gt;
        &lt;button id="joinBtn"&gt;Join Meeting&lt;/button&gt;
        &lt;select id="microphone-list"&gt;
        &lt;/select&gt;
        &lt;select id="speaker-list"&gt;&lt;/select&gt;
        &lt;select id="camera-list"&gt;&lt;/select&gt;
    &lt;/div&gt;

    &lt;!-- for Managing meeting status --&gt;
    &lt;div id="textDiv"&gt;&lt;/div&gt;

    &lt;div id="grid-screen" style="display: none"&gt;
        &lt;!-- To Display MeetingId --&gt;
        &lt;h3 id="meetingIdHeading"&gt;&lt;/h3&gt;
        &lt;!-- &lt;br&gt; --&gt;

        &lt;p id="micStatus" style="margin: 5px;"&gt;MIC | ON&lt;/p&gt;
        &lt;p id="cameraStatus" style="margin: 5px;"&gt;CAMERA | ON&lt;/p&gt;
        &lt;p id="recordingStatus" style="margin: 5px;"&gt;RECORDING | STOPPED&lt;/p&gt;

        &lt;!-- Controllers --&gt;
        &lt;button id="leaveBtn"&gt;Leave&lt;/button&gt;
        &lt;button id="toggleMicBtn"&gt;Toggle Mic&lt;/button&gt;
        &lt;button id="toggleWebCamBtn"&gt;Toggle WebCam&lt;/button&gt;
        &lt;button id="startRecording"&gt;Start Recording&lt;/button&gt;
        &lt;button id="stopRecording"&gt;Stop Recording&lt;/button&gt;

        &lt;!-- render Video --&gt;
        &lt;div class="row" id="videoContainer"&gt;&lt;/div&gt;
    &lt;/div&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.88/videosdk.js"&gt;&lt;/script&gt;
    &lt;script src="config.js"&gt;&lt;/script&gt;
    &lt;script src="index.js"&gt;&lt;/script&gt;
&lt;/body&gt;

&lt;/html&gt;</code></pre><figcaption>index.html</figcaption></figure><h4 id="output">Output</h4><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image.png" class="kg-image" alt="Post-call Transcription & Summary in JavaScript" loading="lazy" width="1782" height="694"/></figure><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>Configure the token in the <code>config.js</code> file, which you can obtain from the <a href="https://app.videosdk.live/login" rel="noopener noreferrer">VideoSDK Dashboard</a>.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">// Auth token will be used to generate a meeting and connect to it
const TOKEN = "Your_Token_Here";</code></pre><figcaption>config.js</figcaption></figure><p>Next, retrieve all the elements from the DOM and declare the following variables in the <code>index.js</code> file. Then, add event listeners to the join and create meeting buttons.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">// Getting Elements from DOM
const joinButton = document.getElementById("joinBtn");
const leaveButton = document.getElementById("leaveBtn");
const toggleMicButton = document.getElementById("toggleMicBtn");
const toggleWebCamButton = document.getElementById("toggleWebCamBtn");
const createButton = document.getElementById("createMeetingBtn");
const videoContainer = document.getElementById("videoContainer");
const textDiv = document.getElementById("textDiv");

// Declare Variables
let meeting = null;
let meetingId = "";
let isMicOn = false;
let isWebCamOn = false;

function initializeMeeting() {}

function createLocalParticipant() {}

function createVideoElement() {}

function createAudioElement() {}

function setTrack() {}
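// Optional guard (not part of the original quickstart): before wiring up the
// buttons, it can help to fail fast when the placeholder token from config.js
// was never replaced. A minimal sketch:

```javascript
// Detect the unchanged placeholder value from config.js before any API call.
function isTokenConfigured(token) {
  return (
    typeof token === "string" &&
    token.trim().length > 0 &&
    token !== "Your_Token_Here"
  );
}
```

// Usage: call once at startup, e.g.
// if (!isTokenConfigured(TOKEN)) alert("Set your VideoSDK token in config.js");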

// Join Meeting Button Event Listener
joinButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Joining the meeting...";

  const roomId = document.getElementById("meetingIdTxt").value;
  meetingId = roomId;

  initializeMeeting();
});
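// The inline fetch in the create-meeting handler below does not check the HTTP
// status before destructuring the response. A hedged alternative sketch (the
// createRoom helper name is ours, not part of the SDK) makes failures explicit:

```javascript
// Same POST https://api.videosdk.live/v2/rooms endpoint as the handler below,
// with an explicit status check so a bad token surfaces as a clear error.
async function createRoom(token) {
  const response = await fetch("https://api.videosdk.live/v2/rooms", {
    method: "POST",
    headers: { Authorization: token, "Content-Type": "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Room creation failed with HTTP ${response.status}`);
  }
  const { roomId } = await response.json();
  return roomId;
}
```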

// Create Meeting Button Event Listener
createButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Please wait, we are joining the meeting";

  // API call to create meeting
  const url = `https://api.videosdk.live/v2/rooms`;
  const options = {
    method: "POST",
    headers: { Authorization: TOKEN, "Content-Type": "application/json" },
  };

  const { roomId } = await fetch(url, options)
    .then((response) =&gt; response.json())
    .catch((error) =&gt; alert("error", error));
  meetingId = roomId;

  initializeMeeting();
});</code></pre><figcaption>index.js</figcaption></figure><h3 id="step-3-initialize-the-meeting%E2%80%8B">Step 3: Initialize the meeting<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-meeting">​</a></h3><p>Following that, initialize the meeting using the <code>initMeeting()</code> function and proceed to join the meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  window.VideoSDK.config(TOKEN);

  meeting = window.VideoSDK.initMeeting({
    meetingId: meetingId, // required
    name: "Thomas Edison", // required
    micEnabled: true, // optional, default: true
    webcamEnabled: true, // optional, default: true
  });

  meeting.join();

  // Creating local participant
  createLocalParticipant();

  // Setting local participant stream
  meeting.localParticipant.on("stream-enabled", (stream) =&gt; {
    setTrack(stream, null, meeting.localParticipant, true);
  });

  // meeting joined event
  meeting.on("meeting-joined", () =&gt; {
    textDiv.style.display = "none";
    document.getElementById("grid-screen").style.display = "block";
    document.getElementById(
      "meetingIdHeading"
    ).textContent = `Meeting Id: ${meetingId}`;
  });

  // meeting left event
  meeting.on("meeting-left", () =&gt; {
    videoContainer.innerHTML = "";
  });

  // Remote participants Event
  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    //  ...
  });

  // participant left
  meeting.on("participant-left", (participant) =&gt; {
    //  ...
  });
}</code></pre><figcaption>index.js</figcaption></figure><h4 id="output-1">Output</h4><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-3.png" class="kg-image" alt="Post-call Transcription & Summary in JavaScript" loading="lazy" width="890" height="330"/></figure><h3 id="step-4-create-the-media-elements%E2%80%8B">Step 4: Create the Media Elements<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-4--create-the-media-elements">​</a></h3><p>In this step, Create a function to generate audio and video elements for displaying both local and remote participants. Set the corresponding media track based on whether it's a video or audio stream.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">// creating video element
function createVideoElement(pId, name) {
  let videoFrame = document.createElement("div");
  videoFrame.setAttribute("id", `f-${pId}`);

  //create video
  let videoElement = document.createElement("video");
  videoElement.classList.add("video-frame");
  videoElement.setAttribute("id", `v-${pId}`);
  videoElement.setAttribute("playsinline", true);
  videoElement.setAttribute("width", "300");
  videoFrame.appendChild(videoElement);

  let displayName = document.createElement("div");
  displayName.textContent = `Name: ${name}`;

  videoFrame.appendChild(displayName);
  return videoFrame;
}
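// The f-/v-/a- id prefixes used above must match the lookups in the
// participant-left cleanup later in this file. An optional helper (ours, not
// from the SDK) keeps the naming in one place:

```javascript
// Centralize the element-id naming shared by createVideoElement,
// createAudioElement, and the participant-left cleanup.
const elementId = {
  frame: (pId) => `f-${pId}`,
  video: (pId) => `v-${pId}`,
  audio: (pId) => `a-${pId}`,
};
```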

// creating audio element
function createAudioElement(pId) {
  let audioElement = document.createElement("audio");
  // Boolean attributes are presence-based, so set the DOM properties directly
  // (setAttribute("autoPlay", "false") would actually enable autoplay).
  audioElement.autoplay = false;
  audioElement.playsInline = true;
  audioElement.controls = false;
  audioElement.setAttribute("id", `a-${pId}`);
  audioElement.style.display = "none";
  return audioElement;
}

// creating local participant
function createLocalParticipant() {
  let localParticipant = createVideoElement(
    meeting.localParticipant.id,
    meeting.localParticipant.displayName
  );
  videoContainer.appendChild(localParticipant);
}

// setting media track
function setTrack(stream, audioElement, participant, isLocal) {
  if (stream.kind == "video") {
    isWebCamOn = true;
    const mediaStream = new MediaStream();
    mediaStream.addTrack(stream.track);
    let videoElm = document.getElementById(`v-${participant.id}`);
    videoElm.srcObject = mediaStream;
    videoElm
      .play()
      .catch((error) =&gt;
        console.error("videoElem.current.play() failed", error)
      );
  }
  if (stream.kind == "audio") {
    if (isLocal) {
      isMicOn = true;
    } else {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(stream.track);
      audioElement.srcObject = mediaStream;
      audioElement
        .play()
        .catch((error) =&gt; console.error("audioElem.play() failed", error));
    }
  }
}</code></pre><figcaption>index.js</figcaption></figure><h3 id="step-5-handle-participant-events%E2%80%8B">Step 5: Handle participant events<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-5--handle-participant-events">​</a></h3><p>Thereafter, implement the events related to the participants and the stream.</p><p>The following are the events to be executed in this step:</p><ol><li><code>participant-joined</code>: When a remote participant joins, this event will trigger. In the event callback, create video and audio elements previously defined for rendering their video and audio streams.</li><li><code>participant-left</code>: When a remote participant leaves, this event will trigger. In the event callback, remove the corresponding video and audio elements.</li><li><code>stream-enabled</code>: This event manages the media track of a specific participant by associating it with the appropriate video or audio element.</li></ol><figure class="kg-card kg-code-card"><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  // ...

  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    let videoElement = createVideoElement(
      participant.id,
      participant.displayName
    );
    let audioElement = createAudioElement(participant.id);
    // stream-enabled
    participant.on("stream-enabled", (stream) =&gt; {
      setTrack(stream, audioElement, participant, false);
    });
    videoContainer.appendChild(videoElement);
    videoContainer.appendChild(audioElement);
  });

  // participant left
  meeting.on("participant-left", (participant) =&gt; {
    let vElement = document.getElementById(`f-${participant.id}`);
    vElement.remove();

    let aElement = document.getElementById(`a-${participant.id}`);
    aElement.remove();
  });
}</code></pre><figcaption>index.js</figcaption></figure><h4 id="output-2">Output</h4><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-4.png" class="kg-image" alt="Post-call Transcription & Summary in JavaScript" loading="lazy" width="1105" height="767"/></figure><h3 id="step-6-implement-controls%E2%80%8B">Step 6: Implement Controls<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-6--implement-controls">​</a></h3><p>Next, implement the meeting controls such as toggleMic, toggleWebcam and leave the meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">// leave Meeting Button Event Listener
leaveButton.addEventListener("click", async () =&gt; {
  meeting?.leave();
  document.getElementById("grid-screen").style.display = "none";
  document.getElementById("join-screen").style.display = "block";
});

// Toggle Mic Button Event Listener
toggleMicButton.addEventListener("click", async () =&gt; {
  if (isMicOn) {
    // Disable Mic in Meeting
    meeting?.muteMic();
  } else {
    // Enable Mic in Meeting
    meeting?.unmuteMic();
  }
  isMicOn = !isMicOn;
});

// Toggle Web Cam Button Event Listener
toggleWebCamButton.addEventListener("click", async () =&gt; {
  if (isWebCamOn) {
    // Disable Webcam in Meeting
    meeting?.disableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "none";
  } else {
    // Enable Webcam in Meeting
    meeting?.enableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "inline";
  }
  isWebCamOn = !isWebCamOn;
});</code></pre><figcaption>index.js</figcaption></figure><h3 id="step-7-configuring-transcription">Step 7: Configuring Transcription</h3><ul><li>In this step, we set up the configuration for post-call transcription and summary generation. We define the webhook URL where the webhooks will be received.</li><li>In the <code>startRecording</code> function, we have passed the transcription object and the webhook URL, which will initiate the post-call transcription process.</li><li>Finally, when we call the <code>stopRecording</code> function, both the post-call transcription and the recording will be stopped.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">const webhookurl = "example.site";

// Recording UI elements defined in index.html
const startRecordingButton = document.getElementById("startRecording");
const stopRecordingButton = document.getElementById("stopRecording");
const recordingStatus = document.getElementById("recordingStatus");

const transcription = {
  enabled: true, // Enables post transcription
  summary: {
    enabled: true, // Enables summary generation

    // Guides summary generation
    prompt:
      "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary",
  },
};

// Start Recording with Post Transcription
startRecordingButton.addEventListener("click", () =&gt; {
  recordingStatus.textContent = "RECORDING | STARTING...";
  meeting.startRecording(webhookurl, null, null, transcription);
});
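// Rather than setting the status label optimistically, it can be driven from
// the meeting's recording state events. The event name
// "recording-state-changed" and the state strings below are assumptions to
// verify against the VideoSDK JS docs.

```javascript
// Map an assumed recording state string to the UI labels used in this app.
function recordingLabel(state) {
  const labels = {
    RECORDING_STARTING: "RECORDING | STARTING...",
    RECORDING_STARTED: "RECORDING | STARTED",
    RECORDING_STOPPING: "RECORDING | STOPPING...",
    RECORDING_STOPPED: "RECORDING | STOPPED",
  };
  return labels[state] || "RECORDING | STOPPED";
}

// Hypothetical wiring (verify the event name against the SDK docs):
// meeting.on("recording-state-changed", (data) => {
//   recordingStatus.textContent = recordingLabel(data.status);
// });
```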

// Stop Recording with Post Transcription
stopRecordingButton.addEventListener("click", () =&gt; {
  recordingStatus.textContent = "RECORDING | STOPPING...";
  meeting.stopRecording();
});</code></pre><figcaption>index.js</figcaption></figure><h2 id="run-your-code%E2%80%8B">Run your code<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#run-your-code">​</a></h2><p>Once you have completed all the steps mentioned above, run your application using the code block below.</p><pre><code class="language-bash">live-server --port=8000</code></pre><h2 id="final-output">Final Output</h2><p>You have completed the implementation of a customized video calling app in JavaScript using VideoSDK. To explore more features, go through the Basic and Advanced features.<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#final-output">​</a></p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Untitled-video---Made-with-Clipchamp--2-.gif" class="kg-image" alt="Post-call Transcription & Summary in JavaScript" loading="lazy" width="1920" height="1080"/></figure><h2 id="fetching-the-transcription-from-the-dashboard">Fetching the Transcription from the Dashboard</h2><p>Once the transcription is ready, you can fetch it from the VideoSDK dashboard. The dashboard provides a user-friendly interface where you can view, download, and manage your transcriptions &amp; summaries.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/20240702_171456-ezgif.com-resize.gif" class="kg-image" alt="Post-call Transcription & Summary in JavaScript" loading="lazy" width="1920" height="1080"/><figcaption>To Access Transcription &amp; Summary Files</figcaption></figure><h2 id="conclusion">Conclusion</h2><p>Integrating post-call transcription and summary features into your JavaScript application using VideoSDK provides significant advantages for capturing and documenting meeting content. 
This guide has meticulously detailed the steps required to set up and implement these features, ensuring that every conversation during a meeting is accurately transcribed and easily accessible for future reference.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Screen Share in iOS Video Call App?]]></title><description><![CDATA[Integrate screen sharing in your iOS video call app with VideoSDK. Elevate user experience with seamless screen sharing features.]]></description><link>https://www.videosdk.live/blog/integrate-screen-share-in-ios-video-call-app</link><guid isPermaLink="false">662781442a88c204ca9d4bfd</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[iOS]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Sat, 11 Jan 2025 12:50:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-SHare-in-iOS-video-Call-App.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-SHare-in-iOS-video-Call-App.png" alt="How to Integrate Screen Share in iOS Video Call App?"/><p>Integrating screen share in an <a href="https://www.videosdk.live/blog/ios-video-calling-sdk">iOS video call app</a> enhances user experience and collaboration. With this feature, users can seamlessly share their screen during calls, facilitating presentations, demonstrations, and remote assistance. 
Implementing screen sharing requires integrating APIs for capturing device screens, ensuring smooth transmission of visuals, and maintaining privacy controls.</p><p><strong>Benefits of Screen Share in iOS Video Call App:</strong></p><ol><li>Enhanced Collaboration: Screen share enables users to collaborate more effectively by sharing documents, presentations, or designs during video calls, fostering better understanding and teamwork.</li><li>Improved Communication: Visual aids facilitate clearer communication, especially for technical support, education, or remote work scenarios, leading to faster issue resolution and knowledge transfer.</li><li>Increased Productivity: With real-time sharing, teams can discuss projects or review documents without the need for additional tools or meetings, saving time and boosting productivity.</li></ol><p><strong>Use Cases of Screen Share in iOS Video Call App:</strong></p><ol><li>Business Meetings: Sales teams can share presentations or product demos with clients, enhancing engagement and closing deals more effectively.</li><li>Remote Work: Colleagues can collaborate on projects by sharing screens to discuss documents, designs, or code, replicating in-person collaboration remotely.</li><li>Technical Support: Customer support agents can visually guide users through troubleshooting steps by sharing screens, resolving issues efficiently.</li></ol><p>This tutorial guides you through integrating this valuable feature into your iOS video call application using VideoSDK. We'll cover the steps required to leverage VideoSDK's capabilities and implement screen sharing within your app's interface.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>VideoSDK lets you integrate video &amp; audio calling into Web, Android, and iOS applications across many different frameworks. 
It is the best infrastructure solution that provides programmable SDKs and REST APIs to build scalable video conferencing applications. This guide will get you running with the VideoSDK video &amp; audio calling in minutes.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/login">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/server-setup">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><ul><li>iOS 11.0+</li><li>Xcode 12.0+</li><li>Swift 5.0+</li></ul><p>This App will contain two screens:</p><p><strong>Join Screen</strong>: This screen allows the user to either create a meeting or join the predefined meeting.</p><p><strong>Meeting Screen</strong>: This screen basically contains local and remote participant views and some meeting controls such as Enable/Disable the mic &amp; Camera and Leave meeting.</p><h2 id="integrate-videosdk%E2%80%8B">Integrate VideoSDK​</h2><p>To install VideoSDK, you must initialize the pod on the project by running the following command:</p><pre><code class="language-swift">pod init</code></pre><p>It will create the podfile in your project folder, Open that file and add the dependency for the VideoSDK, like below:</p><pre><code class="language-swift">pod 'VideoSDKRTC', :git =&gt; 
'https://github.com/videosdk-live/videosdk-rtc-ios-sdk.git'</code></pre><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_podfile.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" loading="lazy"/></figure><p>Then run the command below to install the pod:</p><pre><code class="language-bash">pod install</code></pre><p>Then declare the permissions in <code>Info.plist</code>:</p><pre><code class="language-xml">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><h3 id="project-structure">Project Structure</h3><pre><code class="language-swift">iOSQuickStartDemo
   ├── Models
        ├── RoomStruct.swift
        └── MeetingData.swift
   ├── ViewControllers
        ├── StartMeetingViewController.swift
        └── MeetingViewController.swift
   ├── AppDelegate.swift // Default
   ├── SceneDelegate.swift // Default
   └── APIService
           └── APIService.swift
   ├── Main.storyboard // Default
   ├── LaunchScreen.storyboard // Default
   └── Info.plist // Default
 BroadcastExtension
   ├── SampleHandler.swift // Default
   ├── Atomic.swift
   └── SocketConnection.swift
   ├── DarwinNotification.swift
   ├── SampleUploader.swift
   └── Info.plist // Default
 Pods
     └── Podfile</code></pre><h3 id="create-models%E2%80%8B">Create models<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#create-models">​</a></h3><p>Create swift file for <code>MeetingData</code> and <code>RoomStruct</code> class model for setting data in object pattern.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
struct MeetingData {
    let token: String
    let name: String
    let meetingId: String
    let micEnabled: Bool
    let cameraEnabled: Bool
}</code></pre><figcaption>MeetingData.swift</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
struct RoomsStruct: Codable {
    let createdAt, updatedAt, roomID: String?
    let links: Links?
    let id: String?
    enum CodingKeys: String, CodingKey {
        case createdAt, updatedAt
        case roomID = "roomId"
        case links, id
    }
}

// MARK: - Links
struct Links: Codable {
    let getRoom, getSession: String?
    enum CodingKeys: String, CodingKey {
        case getRoom = "get_room"
        case getSession = "get_session"
    }
}</code></pre><figcaption>RoomStruct.swift</figcaption></figure><h2 id="essential-steps-for-building-the-video-calling">Essential Steps for Building Video Calling</h2><p>This guide is designed to walk you through the process of integrating screen share with <a href="https://www.videosdk.live/">VideoSDK</a>. We'll cover everything from setting up the SDK to incorporating screen sharing into your app's interface, ensuring a smooth and efficient implementation process.</p><h3 id="step-1-get-started-with-apiclient%E2%80%8B">Step 1: Get Started with APIClient​</h3><p>Before jumping into anything else, we have to call an API to generate a unique <code>meetingId</code>. You will require an <strong>authentication token</strong>; you can generate it either using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-server-api-example</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">Video SDK Dashboard</a>.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation

let TOKEN_STRING: String = "&lt;AUTH_TOKEN&gt;"

class APIService {

  class func createMeeting(token: String, completion: @escaping (Result&lt;String, Error&gt;) -&gt; Void) {

    let url = URL(string: "https://api.videosdk.live/v2/rooms")!

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.addValue(TOKEN_STRING, forHTTPHeaderField: "authorization")

    URLSession.shared.dataTask(
      with: request,
      completionHandler: { (data: Data?, response: URLResponse?, error: Error?) in

        DispatchQueue.main.async {

          if let data = data {
            do {
              let dataArray = try JSONDecoder().decode(RoomsStruct.self, from: data)

              completion(.success(dataArray.roomID ?? ""))
            } catch {
              print("Error while creating a meeting: \(error)")
              completion(.failure(error))
            }
          }
        }
      }
    ).resume()
  }
}
</code></pre><figcaption>APIService.swift</figcaption></figure><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>The Join Screen will work as a medium to either schedule a new meeting or join an existing meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
import UIKit

class StartMeetingViewController: UIViewController, UITextFieldDelegate {

  private var serverToken = ""

  /// MARK: outlet for create meeting button
  @IBOutlet weak var btnCreateMeeting: UIButton!

  /// MARK: outlet for join meeting button
  @IBOutlet weak var btnJoinMeeting: UIButton!

  /// MARK: outlet for meetingId textfield
  @IBOutlet weak var txtMeetingId: UITextField!

  /// MARK: Initialize the private variable with TOKEN_STRING &amp;
  /// setting the meeting id in the textfield
  override func viewDidLoad() {
    txtMeetingId.delegate = self
    serverToken = TOKEN_STRING
    txtMeetingId.text = "PROVIDE-STATIC-MEETING-ID"
  }

  /// MARK: method for joining the meeting through the segue named "StartMeeting"
  /// after validating that the serverToken is not empty
  func joinMeeting() {

    txtMeetingId.resignFirstResponder()

    if !serverToken.isEmpty {
      DispatchQueue.main.async {
        self.dismiss(animated: true) {
          self.performSegue(withIdentifier: "StartMeeting", sender: nil)
        }
      }
    } else {
      print("Please provide auth token to start the meeting.")
    }
  }

  /// MARK: outlet for create meeting button tap event
  @IBAction func btnCreateMeetingTapped(_ sender: Any) {
    print("show loader while meeting gets connected with server")
    joinRoom()
  }

  /// MARK: outlet for join meeting button tap event
  @IBAction func btnJoinMeetingTapped(_ sender: Any) {
    if (txtMeetingId.text ?? "").isEmpty {

      print("Please provide meeting id to start the meeting.")
      txtMeetingId.resignFirstResponder()
    } else {
      joinMeeting()
    }
  }

  // MARK: - method for creating room api call and getting meetingId for joining meeting

  func joinRoom() {

    APIService.createMeeting(token: self.serverToken) { result in
      if case .success(let meetingId) = result {
        DispatchQueue.main.async {
          self.txtMeetingId.text = meetingId
          self.joinMeeting()
        }
      }
    }
  }

  /// MARK: preparing to animate to meetingViewController screen
  override func prepare(for segue: UIStoryboardSegue, sender: Any?) {

    guard let navigation = segue.destination as? UINavigationController,

      let meetingViewController = navigation.topViewController as? MeetingViewController
    else {
      return
    }

    meetingViewController.meetingData = MeetingData(
      token: serverToken,
      name: txtMeetingId.text ?? "Guest",
      meetingId: txtMeetingId.text ?? "",
      micEnabled: true,
      cameraEnabled: true
    )
  }
}
</code></pre><figcaption>StartMeetingViewController.swift</figcaption></figure><h3 id="step-3-initialize-and-join-meeting%E2%80%8B">Step 3: Initialize and Join Meeting<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-and-join-meeting">​</a></h3><p>Using the provided <code>token</code> and <code>meetingId</code>, we will configure and initialize the meeting in <code>viewDidLoad()</code>.</p><p>Then, we'll add <strong>@IBOutlet</strong> for <code>localParticipantVideoView</code> and <code>remoteParticipantVideoView</code>, which can render local and remote participant media, respectively.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import UIKit
import VideoSDKRTC
import WebRTC
import AVFoundation

class MeetingViewController: UIViewController {

    // MARK: - Properties

    // outlet for local participant container view
    @IBOutlet weak var localParticipantViewContainer: UIView!

    // outlet for label for meeting id
    @IBOutlet weak var lblMeetingId: UILabel!

    // outlet for local participant video view
    @IBOutlet weak var localParticipantVideoView: RTCMTLVideoView!

    // outlet for remote participant video view
    @IBOutlet weak var remoteParticipantVideoView: RTCMTLVideoView!

    // outlet for remote participant no media label
    @IBOutlet weak var lblRemoteParticipantNoMedia: UILabel!

    // outlet for remote participant container view
    @IBOutlet weak var remoteParticipantViewContainer: UIView!

    // outlet for local participant no media label
    @IBOutlet weak var lblLocalParticipantNoMedia: UILabel!

    // Meeting data - required to start
    var meetingData: MeetingData!

    // current meeting reference
    private var meeting: Meeting?

    // MARK: - video participants including self to show in UI
    private var participants: [Participant] = []

    // MARK: - Lifecycle Events

    override func viewDidLoad() {
        super.viewDidLoad()

        // configure the VideoSDK with token
        VideoSDK.config(token: meetingData.token)

        // init meeting
        initializeMeeting()

        // set meeting id in label text
        lblMeetingId.text = "Meeting Id: \(meetingData.meetingId)"
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        navigationController?.navigationBar.isHidden = true
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        navigationController?.navigationBar.isHidden = false
        NotificationCenter.default.removeObserver(self)
    }

    // MARK: - Meeting

    private func initializeMeeting() {

        // initialize the meeting with the VideoSDK
        meeting = VideoSDK.initMeeting(
            meetingId: meetingData.meetingId,
            participantName: meetingData.name,
            micEnabled: meetingData.micEnabled,
            webcamEnabled: meetingData.cameraEnabled
        )

        // add the event listener to the meeting
        meeting?.addEventListener(self)

        // join the meeting
        meeting?.join()
    }
}</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>After initializing the meeting in the previous step, we will now add <strong>@IBOutlet</strong> for <code>btnLeave</code>, <code>btnToggleVideo</code> and <code>btnToggleMic</code> which can control the media in the meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">class MeetingViewController: UIViewController {

...

    // outlet for leave button
    @IBOutlet weak var btnLeave: UIButton!

    // outlet for toggle video button
    @IBOutlet weak var btnToggleVideo: UIButton!

    // outlet for toggle audio button
    @IBOutlet weak var btnToggleMic: UIButton!

    // bool for mic
    var micEnabled = true
    // bool for video
    var videoEnabled = true


    // action for leave button tap event
    @IBAction func btnLeaveTapped(_ sender: Any) {
        DispatchQueue.main.async {
            self.meeting?.leave()
            self.dismiss(animated: true)
        }
    }

    // action for toggle mic button tap event
    @IBAction func btnToggleMicTapped(_ sender: Any) {
        if micEnabled {
            micEnabled = !micEnabled // false
            self.meeting?.muteMic()
        } else {
            micEnabled = !micEnabled // true
            self.meeting?.unmuteMic()
        }
    }

    // action for toggle video button tap event
    @IBAction func btnToggleVideoTapped(_ sender: Any) {
        if videoEnabled {
            videoEnabled = !videoEnabled // false
            self.meeting?.disableWebcam()
        } else {
            videoEnabled = !videoEnabled // true
            self.meeting?.enableWebcam()
        }
    }
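
    // The two toggle handlers above mirror each other; an equivalent, more
    // compact form using Bool.toggle() (shown only for illustration) is:
    //
    //   @IBAction func btnToggleMicTapped(_ sender: Any) {
    //       micEnabled.toggle()
    //       if micEnabled { meeting?.unmuteMic() } else { meeting?.muteMic() }
    //   }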

...

}</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-5-implementing-meetingeventlistener%E2%80%8B">Step 5: Implementing <code>MeetingEventListener</code></h3><p>In this step, we'll create an extension for the <code>MeetingViewController</code> that conforms to <code>MeetingEventListener</code> and implements the <code>onMeetingJoined</code>, <code>onMeetingLeft</code>, <code>onParticipantJoined</code>, <code>onParticipantLeft</code>, <code>onParticipantChanged</code>, <code>onSpeakerChanged</code>, etc. methods.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">
extension MeetingViewController: MeetingEventListener {

    /// Meeting started
    func onMeetingJoined() {

        // handle local participant on start
        guard let localParticipant = self.meeting?.localParticipant else { return }

        // add to list
        participants.append(localParticipant)

        // add event listener
        localParticipant.addEventListener(self)

        localParticipant.setQuality(.high)

        if localParticipant.isLocal {
            self.localParticipantViewContainer.isHidden = false
        } else {
            self.remoteParticipantViewContainer.isHidden = false
        }
    }

    /// Meeting ended
    func onMeetingLeft() {
        // remove listeners
        meeting?.localParticipant.removeEventListener(self)
        meeting?.removeEventListener(self)
    }

    /// A new participant joined
    func onParticipantJoined(_ participant: Participant) {
        participants.append(participant)

        // add listener
        participant.addEventListener(self)

        participant.setQuality(.high)

        if participant.isLocal {
            self.localParticipantViewContainer.isHidden = false
        } else {
            self.remoteParticipantViewContainer.isHidden = false
        }
    }

    /// A participant left the meeting
    /// - Parameter participant: participant object
    func onParticipantLeft(_ participant: Participant) {
        participant.removeEventListener(self)
        guard let index = self.participants.firstIndex(where: { $0.id == participant.id }) else {
            return
        }
        // remove participant from list
        participants.remove(at: index)
        // hide from ui
        UIView.animate(withDuration: 0.5) {
            if !participant.isLocal {
                self.remoteParticipantViewContainer.isHidden = true
            }
        }
    }

    /// Called when the speaker changes
    /// - Parameter participantId: participant id of the speaker, nil when no one is speaking.
    func onSpeakerChanged(participantId: String?) {

        // show indication for the active speaker
        if let participant = participants.first(where: { $0.id == participantId }) {
            self.showActiveSpeakerIndicator(participant.isLocal ? localParticipantViewContainer : remoteParticipantViewContainer, true)
        }

        // hide indication for the other participants
        let otherParticipants = participants.filter { $0.id != participantId }
        for participant in otherParticipants {
            if participants.count &gt; 1 &amp;&amp; participant.isLocal {
                showActiveSpeakerIndicator(localParticipantViewContainer, false)
            } else {
                showActiveSpeakerIndicator(remoteParticipantViewContainer, false)
            }
        }
    }

    func showActiveSpeakerIndicator(_ view: UIView, _ show: Bool) {
        view.layer.borderWidth = 4.0
        view.layer.borderColor = show ? UIColor.blue.cgColor : UIColor.clear.cgColor
    }
}

</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-6-implementing-participanteventlistener">Step 6: Implementing <code>ParticipantEventListener</code></h3><p>In this step, we'll add an extension for the <code>MeetingViewController</code> that conforms to <code>ParticipantEventListener</code> and implements the <code>onStreamEnabled</code> and <code>onStreamDisabled</code> methods, which are called when a participant's audio or video stream is enabled or disabled.</p><p>The <code>updateUI</code> helper is used to update the user interface (enable/disable the camera &amp; mic views) according to the MediaStream state.</p><pre><code class="language-swift">
extension MeetingViewController: ParticipantEventListener {

    /// Participant has enabled mic, video or screenshare
    /// - Parameters:
    ///   - stream: enabled stream object
    ///   - participant: participant object
    func onStreamEnabled(_ stream: MediaStream, forParticipant participant: Participant) {
        updateUI(participant: participant, forStream: stream, enabled: true)
    }

    /// Participant has disabled mic, video or screenshare
    /// - Parameters:
    ///   - stream: disabled stream object
    ///   - participant: participant object
    func onStreamDisabled(_ stream: MediaStream, forParticipant participant: Participant) {
        updateUI(participant: participant, forStream: stream, enabled: false)
    }
}

private extension MeetingViewController {

    func updateUI(participant: Participant, forStream stream: MediaStream, enabled: Bool) {
        switch stream.kind {
        case .state(value: .video):
            if let videotrack = stream.track as? RTCVideoTrack {
                if enabled {
                    DispatchQueue.main.async {
                        UIView.animate(withDuration: 0.5) {
                            if participant.isLocal {
                                self.localParticipantViewContainer.isHidden = false
                                self.localParticipantVideoView.isHidden = false
                                self.localParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.localParticipantViewContainer.bringSubviewToFront(self.localParticipantVideoView)
                                videotrack.add(self.localParticipantVideoView)
                                self.lblLocalParticipantNoMedia.isHidden = true
                            } else {
                                self.remoteParticipantViewContainer.isHidden = false
                                self.remoteParticipantVideoView.isHidden = false
                                self.remoteParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.remoteParticipantViewContainer.bringSubviewToFront(self.remoteParticipantVideoView)
                                videotrack.add(self.remoteParticipantVideoView)
                                self.lblRemoteParticipantNoMedia.isHidden = true
                            }
                        }
                    }
                } else {
                    UIView.animate(withDuration: 0.5) {
                        if participant.isLocal {
                            self.localParticipantViewContainer.isHidden = false
                            self.localParticipantVideoView.isHidden = true
                            self.lblLocalParticipantNoMedia.isHidden = false
                            videotrack.remove(self.localParticipantVideoView)
                        } else {
                            self.remoteParticipantViewContainer.isHidden = false
                            self.remoteParticipantVideoView.isHidden = true
                            self.lblRemoteParticipantNoMedia.isHidden = false
                            videotrack.remove(self.remoteParticipantVideoView)
                        }
                    }
                }
            }

        case .state(value: .audio):
            if participant.isLocal {
                localParticipantViewContainer.layer.borderWidth = 4.0
                localParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            } else {
                remoteParticipantViewContainer.layer.borderWidth = 4.0
                remoteParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            }

        default:
            break
        }
    }
}

</code></pre><h3 id="known-issue%E2%80%8B">Known Issue<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#known-issue">​</a></h3><p>If the video renders outside of its container, add the following lines to the <code>viewDidLoad</code> method of the <code>MeetingViewController.swift</code> file.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">override func viewDidLoad() {
    super.viewDidLoad()

    localParticipantVideoView.frame = CGRect(x: 10, y: 0,
        width: localParticipantViewContainer.frame.width,
        height: localParticipantViewContainer.frame.height)

    localParticipantVideoView.bounds = CGRect(x: 10, y: 0,
        width: localParticipantViewContainer.frame.width,
        height: localParticipantViewContainer.frame.height)

    localParticipantVideoView.clipsToBounds = true

    remoteParticipantVideoView.frame = CGRect(x: 10, y: 0,
        width: remoteParticipantViewContainer.frame.width,
        height: remoteParticipantViewContainer.frame.height)

    remoteParticipantVideoView.bounds = CGRect(x: 10, y: 0,
        width: remoteParticipantViewContainer.frame.width,
        height: remoteParticipantViewContainer.frame.height)

    remoteParticipantVideoView.clipsToBounds = true
}
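
// Note: this workaround assigns frames in viewDidLoad, before Auto Layout has
// produced final container sizes. If the views still misalign, applying the
// same frame assignments in viewDidLayoutSubviews() (after layout has run)
// is a more robust place for them.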
</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><blockquote><strong>TIP:</strong><br>Stuck anywhere? Check out this <a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example" rel="noopener noreferrer">example code</a> on GitHub</br></blockquote><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - videosdk-live/videosdk-rtc-ios-sdk-example: WebRTC based video conferencing SDK for iOS (Swift / Objective C)</div><div class="kg-bookmark-description">WebRTC based video conferencing SDK for iOS (Swift / Objective C) - videosdk-live/videosdk-rtc-ios-sdk-example</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate Screen Share in iOS Video Call App?"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/3d2f5eef43ad3d03fbe693ee2c8633053215e90252a63bddc775d8b8d8a7e380/videosdk-live/videosdk-rtc-ios-sdk-example" alt="How to Integrate Screen Share in iOS Video Call App?"/></div></a></figure><p/><h2 id="integrate-screen-share-in-video-app">Integrate Screen Share in Video App</h2><h3 id="step-1-open-target%E2%80%8B">Step 1: Open Target<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/handling-media/native-ios-screen-share#step-1--open-target">​</a></h3><p>Open your project in Xcode, then select <strong>File &gt; New &gt; Target</strong> from the menu bar.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step1-xcode-8cbf05258253368fa8095f10aa06459d.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" 
loading="lazy" width="801" height="580"/></figure><h3 id="step-2-select-target%E2%80%8B">Step 2: Select Target​</h3><p>Select <strong>Broadcast Upload Extension</strong> and click <strong>Next</strong>.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step2-xcode-5270422797cc8941922cefadffb5238d.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" loading="lazy" width="769" height="555"/></figure><h3 id="step-3-configure-broadcast-upload-extension%E2%80%8B">Step 3: Configure Broadcast Upload Extension​</h3><p>Enter the extension's name in the <strong>Product Name</strong> field, choose the team from the dropdown, uncheck the "Include UI extension" field, and click "Finish."</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step3-xcode-458facd46d26cfa090fd8724b6ba831b.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" loading="lazy" width="789" height="567"/></figure><h3 id="step-4-activate-extension-scheme%E2%80%8B">Step 4: Activate Extension scheme​</h3><p>You will be prompted with a popup: <strong>Activate "Your-Extension-name" scheme?</strong> Click <strong>Activate</strong>.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step4-xcode-6de6fa7fc079f920f52d8315b2717e9d.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" loading="lazy" width="510" height="490"/></figure><p>Now, the "Broadcast" folder will appear in the Xcode left sidebar.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step5-xcode-79ba89532c028fb25376bf6d4953f0b2.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" 
loading="lazy" width="474" height="652"/></figure><h3 id="step-5-add-an-external-file-in-the-created-extension%E2%80%8B">Step 5: Add an External file in the created extension.​</h3><p>Open the <a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example/tree/main/VideoSDKScreenShare" rel="noopener noreferrer">videosdk-rtc-ios-sdk-example</a> repository, and copy the following files: <code>SampleUploader.swift</code>, <code>SocketConnection.swift</code>, <code>DarwinNotificationCenter.swift</code>, and <code>Atomic.swift</code> to your extension's folder. Ensure that these files are added to the target.</p><h3 id="step-6-update-samplehandlerswift-file%E2%80%8B">Step 6: Update <code>SampleHandler.swift</code> file​</h3><p>Open <a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example/blob/main/VideoSDKScreenShare/SampleHandler.swift" rel="noopener noreferrer">SampleHandler.swift</a>, and copy the content of the file. Paste this content into your extension's SampleHandler.swift file.</p><h3 id="step-7-add-capability-to-the-app%E2%80%8B">Step 7: Add Capability to the App​</h3><p>In Xcode, navigate to <strong>YourappName &gt; Signing &amp; Capabilities</strong>, and click on <strong>+Capability</strong> to configure the app group.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step8-xcode-60177a1783e9203570a06479c30492b8.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" loading="lazy" width="930" height="529"/></figure><p>Choose <strong>App Groups</strong> from the list.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step9-xcode-13aa5448218f78f473f060c6854874a2.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" 
loading="lazy" width="803" height="579"/></figure><p>After that, select or add the generated App Group ID that you have created before.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step10-xcode-80bd6f05c87acae9a9fbaa4a89f414f9.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" loading="lazy" width="1176" height="735"/></figure><h3 id="step-8-add-capability-in-extension%E2%80%8B">Step 8: Add Capability in Extension​</h3><p>Go to <strong>Your-Extension-Name &gt; Signing &amp; Capabilities</strong> and configure the <strong>App Group</strong> capability, as done in the previous steps (the Group ID should be the same for both targets).</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step11-xcode-021329a0abd110f74fb1eba7d668fc79.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" loading="lazy" width="803" height="445"/></figure><h3 id="step-9-add-app-group-id-in-the-extension-file%E2%80%8B">Step 9: Add App Group ID in the Extension File​</h3><p>Go to the extension's <code>SampleHandler.swift</code> file and paste your group ID into the <code>appGroupIdentifier</code> constant.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step12-xcode-6f145ca7606a486596da66ebf5c31e36.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" 
loading="lazy" width="951" height="562"/></figure><h3 id="step-10-update-app-level-infoplist-file%E2%80%8B">Step 10: Update App level info.plist file​</h3><ol><li>Add a new key, <strong>RTCScreenSharingExtension</strong> in Info.plist with the extension's Bundle Identifier as the value.</li><li>Add a new key <strong>RTCAppGroupIdentifier</strong> in Info.plist with the extension's App groups Id as the value.</li></ol><p><strong>Note</strong>: For the extension's Bundle Identifier, go to <strong>TARGETS &gt; Your-Extension-Name &gt; Signing &amp; Capabilities</strong>.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step13-xcode-00cb59e564facb3c7f365235065f4561.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" loading="lazy" width="933" height="550"/></figure><blockquote>NOTE:<br>You can also check out the extension's <a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example/tree/main/VideoSDKScreenShare" rel="noopener noreferrer">example code</a> on Github.</br></blockquote><h2 id="integrate-screenshare-in-your-app%E2%80%8B">Integrate ScreenShare in your App​</h2><p>After successfully creating Broadcast Upload Extension using the above-listed steps, we can start using the <code>enableScreenShare</code> and <code>disableScreenShare</code> functions of the <code>Meeting</code> class.</p><h3 id="how-to-use-the-screenshare-functions">How to use the ScreenShare functions</h3><p>Use these functions in your app's Meeting Screen.</p><pre><code class="language-swift">@IBAction func ScreenShareButtonTapped(_ sender: Any) {
    Task {
      self.meeting?.enableScreenShare()
    }
}
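
// Reminder (see Step 10 above): the host app's Info.plist must contain the
// RTCScreenSharingExtension and RTCAppGroupIdentifier keys. The values below
// are placeholders - use your own extension bundle identifier and App Group ID:
//
//   &lt;key&gt;RTCScreenSharingExtension&lt;/key&gt;
//   &lt;string&gt;com.example.YourApp.ScreenShareExtension&lt;/string&gt;
//   &lt;key&gt;RTCAppGroupIdentifier&lt;/key&gt;
//   &lt;string&gt;group.com.example.YourApp&lt;/string&gt;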

@IBAction func StopScreenShareButtonTapped(_ sender: Any) {
    Task {
      self.meeting?.disableScreenShare()
    }
}</code></pre><blockquote><strong>CAUTION</strong>:<br>The <code>enableScreenShare</code> and <code>disableScreenShare</code> functions are async; therefore, use the syntax above to call them.</br></blockquote><p>Calling <code>enableScreenShare()</code> will prompt a <code>RPBroadcastPickerView</code> with the extension that was created using the above steps.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step23-xcode-e815d2c96e8061551e941055e32578b3.png" class="kg-image" alt="How to Integrate Screen Share in iOS Video Call App?" loading="lazy" width="307" height="504"/></figure><p>After clicking the <strong>Start Broadcast</strong> button, you will be able to get the screen share stream in the session.</p><ul><li>When the broadcast is started, it creates a Stream that has <code>MediaStream.kind = .share</code>. Using the stream kind, you can prompt a ScreenShare view for remote peers when ScreenShare is started by the local peer.</li><li>Similarly, you can use the same kind to dismiss the ScreenShare view on the remote peer when the ScreenShare is stopped.</li></ul><pre><code class="language-swift">extension MeetingViewController: ParticipantEventListener {
    /// Participant has enabled mic, video or screenshare
    /// - Parameters:
    ///   - stream: enabled stream object
    ///   - participant: participant object
    func onStreamEnabled(_ stream: MediaStream, forParticipant participant: Participant) {
        
        if stream.kind == .share {
        // show screen share
            showScreenSharingView(true)
            screenSharingView.showMediastream(stream)
            return
        }
        
        // show stream in cell
        if let cell = self.cellForParticipant(participant) {
            cell.updateView(forStream: stream, enabled: true)
        }
        
        if participant.isLocal {
        // turn on controls for local participant
            self.buttonControlsView.updateButtons(forStream: stream, enabled: true)
        }
    }
    
    /// Participant has disabled mic, video or screenshare
    /// - Parameters:
    /// - stream: disabled stream object
    /// - participant: participant object
    func onStreamDisabled(_ stream: MediaStream, forParticipant participant: Participant) {
        
        if stream.kind == .share {
        // remove screen share
            showScreenSharingView(false)
            screenSharingView.hideMediastream(stream)
            return
        }
        
        // hide stream in cell
        if let cell = self.cellForParticipant(participant) {
            cell.updateView(forStream: stream, enabled: false)
        }
        
        if participant.isLocal {
        // turn off controls for local participant
            self.buttonControlsView.updateButtons(forStream: stream, enabled: false)
        }
    }
}</code></pre><h2 id="conclusion">Conclusion</h2><p>Integrating screen sharing into your iOS video call app with VideoSDK is a straightforward process that unlocks a powerful new feature for your users. This functionality can streamline communication, boost productivity, and open doors for innovative use cases within your app.</p><p>Unlock the power of seamless video communication with VideoSDK! <a href="https://www.videosdk.live/signup">Sign up</a> now and dive into an extraordinary world of interactive video-calling experiences.</p><p>With VideoSDK, you get a generous 10,000 free minutes to kickstart your journey towards creating engaging and immersive connections. Whether you're building a social app, a collaboration tool, or an e-learning platform, our platform provides the tools you need to elevate your user experience to the next level.</p>]]></content:encoded></item><item><title><![CDATA[10 Best Video Live Streaming APIs for 2025]]></title><description><![CDATA[Stream with excellence using the 10+ best APIs for live video streaming, enhancing your apps with high-quality and real-time streaming capabilities.]]></description><link>https://www.videosdk.live/blog/10-best-live-streaming-api</link><guid isPermaLink="false">64ec8d109eadee0b8b9e80e2</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Sat, 11 Jan 2025 11:32:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/08/10-Live-Stream.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/08/10-Live-Stream.jpg" alt="10 Best Video Live Streaming APIs for 2025"/><p>In an era where live streaming has seamlessly integrated into our daily routines, its popularity is undeniably on the rise. As we continue to redefine content consumption and real-time interactions, the development of live-streaming apps plays a pivotal role. 
Choosing the perfect live-streaming SDK is crucial for those embarking on this journey.</p><p>In this comprehensive article, we explore the top live-streaming APIs &amp; SDK providers that are shaping the industry. We delve into solutions that range from basic live video streaming services APIs that cater to large audiences, to specialized <a href="https://www.videosdk.live/interactive-live-streaming" rel="noreferrer"><strong>live streaming APIs</strong></a> designed for interactive experiences. Additionally, we cover <strong>live video streaming APIs</strong> that support high-definition content and robust <a href="https://www.videosdk.live/audio-video-conferencing" rel="noreferrer"><strong>video broadcasting APIs</strong></a> for broader distribution needs. Each provider is evaluated to help you find the most reliable and scalable <strong>live stream API</strong> for your project.</p><h2 id="what-is-video-live-streaming-api">What is Video Live Streaming API?</h2>
<p>In the world of software development, a Software Development Kit (SDK) is a collection of tools and resources that developers can use to create mobile applications with finesse. This comprehensive toolkit includes a variety of helper libraries, precompiled modules, extensive documentation, code samples, well-designed processes, and informative guides. An SDK is customized to specific platforms or programming languages and helps to streamline the development process, making it easier to implement digital advertising for SaaS solutions. Among the various tools available, the <a href="https://www.videosdk.live/interactive-live-streaming" rel="noreferrer">live-streaming SDK</a> is particularly beneficial. It saves developers from the hassle of creating thousands of lines of code from scratch, offering a shortcut to implement live streaming features in their mobile apps. This SDK not only enhances the efficiency of the app development process but also boosts the overall capabilities of the mobile apps with features like analytics, advertising integration, and push notifications handling.</p><p>However, standing shoulder to shoulder with SDKs in the developer's toolkit is the mighty API – the Application Programming Interface. A veritable software intermediary, APIs orchestrate harmonious communication between two applications. Within this expansive landscape, the <a href="https://www.videosdk.live/interactive-live-streaming" rel="noreferrer"><strong>live streaming API</strong></a><strong> for websites</strong> and mobile apps shines, offering a suite of harmonious APIs curated for the world of video interactions. These include the <strong>streaming video API</strong>, which facilitates seamless video streaming across various platforms, and the <strong>video broadcasting API</strong>, designed to enhance video distribution and quality across diverse media channels.</p><h2 id="10-video-live-streaming-apis-in-2024">10+ Video Live Streaming APIs in 2025</h2>
<p>The <strong>Top 10 Live Streaming APIs &amp; SDKs</strong> are <strong>VideoSDK</strong>, <strong>MirrorFly</strong>, <strong>Agora</strong>, <strong>api.video</strong>, <strong>AWS IVS, 100ms, DaCast, Wowza, Vonage, and Dolby.io</strong>. You may choose your provider based on their features, pricing, and more.</p><h2 id="1-videosdk-real-time-video-infra-for-every-developer">1. VideoSDK: Real-time video infra for every developer</h2>
<p>Enter the realm of VideoSDK, a premier solution tailored for the bustling tech markets of the USA and India. The remarkable integration powerhouse is renowned for its <strong>lightning-fast</strong> prowess in <strong>seamlessly weaving</strong> live streaming capabilities into applications within a mere <strong>10 minutes</strong>. This platform redefines <strong>efficiency</strong>, offering a harmonious synergy that caters to both end-users and developers. Dive into an expansive array of Software Development Kit (SDK) functionalities, elegantly curated within its repository.</p><p>Noteworthy is VideoSDK's <strong>versatility</strong>, embracing <strong>cross-platform</strong> compatibility like a true virtuoso. It effortlessly spans an array of programming languages and frameworks, encompassing the likes of <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start"><strong>JavaScript</strong></a>, <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start"><strong>React JS</strong></a>, <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start"><strong>React Native</strong></a>, <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/getting-started"><strong>Android</strong></a>, <a href="https://www.videosdk.live/blog/video-calling-in-flutter"><strong>Flutter</strong></a>, and <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start"><strong>iOS</strong></a>, ensuring a universal canvas for your innovative creations. This cross-platform adaptability makes VideoSDK a universal toolkit for digital creators across the USA and India, empowering them to craft cutting-edge applications with ease.</p><h3 id="features-offered-by-videosdk">Features offered by VideoSDK</h3>
<ul><li>Host up to <strong>25 co-streamers</strong> in a single live session.</li><li>Let participants switch from <strong>viewer to co-host</strong> for deeper interactive engagement.</li><li>Maintain video quality at scale with <strong>adaptive bitrate</strong> streaming.</li><li>Choose from a range of <strong>resolution options</strong> to fit your use case.</li><li>Deliver orientation-based <strong>adaptive live streams</strong>.</li><li>Customize streams with your own <strong>logos</strong>, <strong>overlays</strong>, and <strong>backgrounds</strong>.</li><li>Simulcast to <strong>20+ concurrent platforms</strong> to broaden your reach.</li><li>Send streams to <strong>custom RTMP outputs</strong>.</li><li>Go live from <strong>any browser</strong>.</li></ul><h3 id="videosdk-pricing">VideoSDK Pricing</h3>
<ul><li>A usage-based <a href="https://www.videosdk.live/pricing">pricing</a> model, billed on the <strong>active participants</strong> in your live stream.</li><li>Pick the streaming quality that matches your viewers' expectations.</li><li>Use the <a href="https://www.videosdk.live/pricing#pricingCalc">pricing calculator</a> to align costs with your actual usage.</li><li>Transparent, <strong>usage-based</strong> billing that adapts to your streaming needs.</li></ul>
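To make the usage-based model concrete, here is a hedged sketch of how participant-minute billing is typically computed. The rate constant below is a placeholder for illustration only, not an actual VideoSDK price; check the pricing page for real numbers.

```javascript
// Hypothetical participant-minute estimator for usage-based live streaming pricing.
// RATE_PER_1000_MINUTES is a placeholder, NOT an actual VideoSDK rate.
const RATE_PER_1000_MINUTES = 1.5; // USD, assumed for illustration

function estimateCost(activeParticipants, sessionMinutes) {
  // Billing is typically proportional to total participant minutes:
  // each active participant accrues one billable minute per minute streamed.
  const participantMinutes = activeParticipants * sessionMinutes;
  return (participantMinutes / 1000) * RATE_PER_1000_MINUTES;
}

// Example: a 60-minute stream with 100 active participants = 6,000 participant minutes
console.log(estimateCost(100, 60).toFixed(2)); // "9.00"
```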
<!--kg-card-begin: html-->
<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
	<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
		Schedule a Demo with Our Live Video Expert!
	</h3>
	<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
		Discover how VideoSDK can help you build a cutting-edge real-time video app.
	</p>
	<div class="mt-4 flex items-center justify-center">
		<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;">
			Book a call
		</a>
	</div>
</div>
<!--kg-card-end: html-->
<h2 id="2-cloudinary">2. <a href="https://cloudinary.com/" rel="noreferrer">Cloudinary</a></h2><p>Cloudinary offers a comprehensive video live streaming solution designed for developers seeking to integrate real-time video capabilities into their applications. This platform supports both RTMP and WebRTC protocols, enabling seamless live streaming experiences across various devices and network conditions. Developers can leverage Cloudinary's APIs and SDKs to create, manage, and deliver live video content efficiently.</p><p><strong>Features Offered by Cloudinary</strong></p><ul><li><strong>RTMP Input Streaming</strong>: Initiate live streams using RTMP input, compatible with popular streaming software like OBS.</li><li><strong>WebRTC Support</strong>: Utilize WebRTC for low-latency, peer-to-peer video communication, ideal for interactive applications.</li><li><strong>Adaptive Bitrate Streaming</strong>: Deliver high-quality video experiences by automatically adjusting the stream quality based on the viewer's network conditions.</li><li><strong>Simulcasting</strong>: Broadcast live streams simultaneously to multiple platforms, such as YouTube and Twitch, by configuring additional output destinations via the API.</li><li><strong>Video Archiving</strong>: Automatically archive live streams for on-demand viewing, providing viewers with access to past content.</li><li><strong>Real-Time Monitoring</strong>: Monitor live stream health and performance metrics, including viewer count and stream status, through the Cloudinary Console.</li></ul><p><strong>Cloudinary Pricing</strong></p><p>Cloudinary offers a tiered pricing model to accommodate different usage needs:</p><ul><li><strong>Free Plan</strong>: Provides limited storage, transformations, and bandwidth, suitable for testing and small-scale applications.</li><li><strong>Plus Plan</strong>: Priced at $89 per month, this plan includes increased storage, bandwidth, and additional features for growing 
businesses.</li><li><strong>Advanced Plan</strong>: At $224 per month, this plan offers advanced features, dedicated support, and higher usage limits for enterprises.</li><li><strong>Enterprise Plan</strong>: Custom pricing for large organizations with specific requirements and higher usage needs.</li></ul><p>Each plan includes access to Cloudinary's video APIs, SDKs, and media management tools, with pricing based on usage metrics such as storage, bandwidth, and transformations.</p><h2 id="3-mux-video-streaming-api">3. Mux: Video Streaming API</h2>
<ul><li>Seamlessly integrate live streaming into your app with Mux's powerful tools and APIs.</li><li>Deliver captivating real-time video content to your users with ease.</li><li>Mux's SDK streamlines the live streaming process, handling encoding, transcoding, adaptive bitrate streaming, and delivery.</li><li>Empower your app with Mux's expertise in video streaming and analytics, creating a dynamic and engaging user experience.</li></ul><h3 id="mux-pricing">Mux pricing</h3>
<ul><li>Choose from a range of Starter Plans, perfect for small businesses and newcomers to live streaming.</li><li>Explore Pro Plans with advanced features and increased capacity, catering to businesses with larger audiences and higher demands.</li><li>Tailored Custom Enterprise Plans are available for larger organizations with specific and unique requirements.</li></ul><h2 id="4-agora-real-time-audiovideo-engagement">4. Agora: Real-time Audio/Video Engagement</h2>
<ul><li>Seamlessly integrate voice and video chat, real-time recording, live streaming, and instant messaging features into your application.</li><li>Enhance user engagement with premium add-ons like AR facial masks, sound effects, and whiteboards (additional cost).</li><li>Keep in mind that Agora's pricing structure may be detailed, which could be a consideration for businesses with budget constraints.</li><li>Users seeking direct support from Agora should be aware that response<strong> times</strong> may <strong>vary</strong>.</li></ul><h3 id="agora-pricing">Agora pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/agora-alternative">Agora</a> presents a choice between two pricing plans, Premium and Standard, allowing users to choose based on their service needs.</li><li>The pricing model is based on the monthly duration of audio and video calls, providing a cost-effective approach.</li><li>To enhance flexibility, Agora offers four pricing categories, each aligned with different video resolutions, ensuring precise fit to users' needs.</li><li><strong>Audio calls</strong> are priced at <strong>$0.99</strong> per 1,000 participant minutes.</li><li><strong>HD Video calls</strong> are available at <strong>$3.99</strong> per 1,000 participant minutes, and <strong>Full HD Video calls</strong> come in at <strong>$8.99</strong> per 1,000 participant minutes.</li></ul><blockquote>
<p>See how Agora compares with its <a href="https://www.videosdk.live/blog/agora-competitors">competitors</a></p>
</blockquote>
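Given the per-1,000-participant-minute rates quoted above, projecting a monthly Agora bill is simple arithmetic. Here is an illustrative JavaScript helper; the rates are copied from this article and may be outdated, so verify current pricing with Agora before relying on them.

```javascript
// Cost estimate based on the per-1,000-participant-minute rates quoted above.
// These numbers come from this article, not from Agora directly -- treat as illustration.
const AGORA_RATES = {
  audio: 0.99,   // USD per 1,000 participant minutes
  hd: 3.99,      // HD video
  fullHd: 8.99,  // Full HD video
};

function estimateAgoraCost(participantMinutes, tier) {
  const rate = AGORA_RATES[tier];
  if (rate === undefined) throw new Error("Unknown tier: " + tier);
  return (participantMinutes / 1000) * rate;
}

// Example: 50 participants on an HD call for 60 minutes = 3,000 participant minutes
console.log(estimateAgoraCost(3000, "hd").toFixed(2)); // "11.97"
```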
<h2 id="5-apivideo">5. api.video</h2>
<ul><li>api.video serves as a powerful platform with a comprehensive API toolkit, empowering developers to seamlessly integrate live streaming and video hosting functionalities into their applications and websites.</li><li>Developers can effortlessly infuse their projects with <strong>video streaming</strong>, <strong>hosting</strong>, and <strong>playback</strong> capabilities using api.video's offerings.</li><li>The platform encompasses a range of features, including <strong>live streaming</strong>, <strong>video recording</strong>, <strong>video encoding</strong>, and efficient content delivery, enhancing the multimedia experience for users.</li></ul><h3 id="apivideo-pricing">api.video pricing</h3>
<ul><li>api.video presents a diverse range of pricing plans that align with your usage patterns and specific requirements.</li><li>These plans encompass a variety of features, including <strong>live streaming</strong>, <strong>video recording</strong>, <strong>transcoding</strong>, and efficient <strong>content delivery</strong>.</li><li>The <strong>cost</strong> structure takes into account factors such as the <strong>duration</strong> of streamed minutes, the <strong>quality</strong> of the streams, and your <strong>storage needs</strong>.</li><li>With api.video's flexible pricing, you can select a plan that perfectly matches your project's demands and budget, ensuring a cost-effective solution tailored to your goals.</li></ul><h2 id="6-aws-ivs">6. AWS IVS</h2>
<ul><li>Amazon Interactive Video Service (IVS) emerges as a dynamic offering within Amazon Web Services (AWS), delivering a fully managed live video streaming solution.</li><li>This service empowers developers with the seamless integration of interactive live video content into their applications or websites through a suite of Software Development Kits (SDKs) and APIs.</li><li>Amazon IVS streamlines the process of enriching your platform with live video capabilities, enabling real-time engagement and interaction with your audience.</li><li>By harnessing the power of Amazon IVS, developers can create captivating live-streaming experiences that resonate with their users, driving engagement and growth.</li></ul><h3 id="aws-ivs-pricing">AWS IVS pricing</h3>
<ul><li>AWS IVS presents a versatile pricing structure, rooted in the <strong>pay-as-you-go</strong> model. Your costs are intricately tied to the volume of video input and output associated with your streams.</li><li>This model ensures that you are billed in proportion to factors like <strong>stream quality</strong>, <strong>duration</strong>, and the <strong>concurrent viewers</strong> engaging with your content.</li><li>AWS IVS extends an alternative avenue known as <strong>reserved pricing</strong>. By opting for reserved pricing, you commit to a <strong>predetermined usage level</strong>, consequently unlocking discounted rates that align with your long-term engagement plans.</li></ul><h2 id="7-100ms">7. 100ms</h2>
<ul><li>Step into the realm of 100ms, a cloud platform tailored to empower developers in seamlessly integrating video and audio conferencing functionalities into a spectrum of applications encompassing Web, Android, and iOS domains.</li><li>Within its arsenal lies a suite of meticulously designed tools. REST APIs and SDKs join forces with a user-friendly dashboard to elevate the process of capturing, distributing, recording, and showcasing live interactive audio and video content.</li><li>This platform stands as a testament to the fusion of convenience and capability, opening up avenues for creating immersive and dynamic user experiences in the realm of real-time communication.</li></ul><h2 id="8-dacast">8. DaCast</h2>
<ul><li>While DaCast promises ad-free streaming and monetization options, the actual <strong>implementation</strong> of these features might require a <strong>thorough exploration</strong> of their APIs and SDKs.</li><li>The <strong>integration process</strong>, despite the provided tools, could potentially demand substantial <strong>technical expertise</strong> and <strong>time investment</strong> to align with your desired outcomes.</li><li>The availability of various SDKs and APIs sounds promising, yet navigating their intricacies might not be entirely hassle-free, particularly for those less familiar with technical integrations.</li><li>While ongoing <strong>technical support</strong> is highlighted, the <strong>responsiveness</strong> and <strong>effectiveness</strong> of this support could <strong>vary</strong>, impacting your ability to swiftly resolve challenges you encounter during integration or usage.</li><li>DaCast offers a subscription-based pricing model.</li></ul><h2 id="9-wowza-embedded-video-platform-for-solution-builders">9. Wowza: Embedded Video Platform for Solution Builders</h2>
<ul><li>While Wowza Live Streaming SDK boasts versatility, integrating it into your application might <strong>not</strong> be as <strong>straightforward</strong> as expected.</li><li>The wide range of supported streaming protocols and formats could potentially lead to <strong>complexities</strong> in configuration and optimization, demanding <strong>extra development efforts</strong>.</li><li>Despite its rich feature set, the SDK's <strong>learning curve</strong> could pose <strong>challenges</strong>, especially for developers who are new to the intricacies of live video streaming.</li><li>The abundance of options might not always translate into a streamlined experience, potentially requiring <strong>more time</strong> and <strong>resources</strong> to fine-tune the integration according to your specific requirements.</li></ul><h2 id="10-vonage">10. Vonage</h2>
<ul><li>However, while the inclusion of live video, voice, messaging, and screen-sharing functionalities seems comprehensive, the depth of customization and integration ease might not align with all developers' expectations.</li><li>While customization options exist, they might not be as intuitive or robust as desired, potentially requiring more <strong>manual code development</strong> than initially anticipated.</li><li>The availability of performance analysis tools is valuable, but their usability and effectiveness might vary based on your familiarity with the provided metrics and insights.</li><li>The mention of paramount <strong>security</strong> and <strong>compliance</strong> measures is reassuring, yet the precise extent of these measures and how they align with your project's specific requirements may warrant <strong>further exploration</strong>.</li><li><strong>Scalability</strong> sounds promising, but the actual capacity to smoothly accommodate varying participant needs under real-world usage conditions could <strong>vary</strong>.</li><li>The note about edge case management being user responsibility might indicate that certain complex scenarios or troubleshooting might not be fully supported through the platform's native features.</li></ul><h3 id="vonage-pricing">Vonage pricing</h3>
<ul><li>Starting at <strong>$9.99</strong> per month might appear to be an appealing entry point; however, it's important to carefully assess your expected usage against this plan's allowances to determine its true value for your needs.</li><li>The allocation of 2,000 free minutes across plans could be enticing, but evaluating how these minutes align with your typical usage patterns is essential to assess their actual significance.</li><li>While the incremental charge of <strong>$0.00395</strong> per minute per participant after consuming the free minutes seems reasonable, projecting your anticipated participant counts and usage frequency can help determine if this rate is competitive for your expected activity.</li><li>The noted cost of <strong>$0.10</strong> per minute for <strong>recording</strong> services might appear affordable at first glance, but it's crucial to map out your recording needs and their potential impact on costs over time.</li><li>Similarly, the <strong>HLS</strong> streaming cost of <strong>$0.15</strong> per minute should be evaluated alongside your streaming requirements and long-term streaming plans to gauge its feasibility.</li></ul><h2 id="elevate-your-live-video-streaming-app-with-the-ideal-live-streaming-api-integration-today">Elevate Your Live Video Streaming App with the Ideal Live Streaming API Integration Today</h2>
<p>As you navigate the landscape of live streaming APIs, a realm rich with potential reveals itself, and the ten exceptional options we've uncovered here present their unique merits. Yet, it's vital to acknowledge that the digital universe stretches endlessly before you, beckoning you to explore further and unveil a myriad of opportunities that extend far beyond your current view.</p><p>No matter which API you decide to embrace, make certain it resonates with the constellation of <strong>features</strong> and <strong>advantages</strong> we've brought to light for your live-streaming app. As you traverse this terrain, keep <strong>pricing</strong> in your field of vision, as it holds a significant role. With these insights as your compass, set forth on your implementation journey with a spirit of boundless enthusiasm and unshakable confidence.</p><p>Happy Implementing &amp; Celebrating Your Success!</p>]]></content:encoded></item><item><title><![CDATA[Build Flutter Live Streaming App for Android, iOS, and Web]]></title><description><![CDATA[In this tutorial, build an interactive live stream Flutter app. Developers must follow steps to build a robust app that incorporates video playback, chat functionality, user authentication, and backend infrastructure. ]]></description><link>https://www.videosdk.live/blog/flutter-live-streaming</link><guid isPermaLink="false">642d44dd2c7661a49f3824b4</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Yash]]></dc:creator><pubDate>Sat, 11 Jan 2025 08:46:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/04/ils_flutter_blog.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<img src="https://assets.videosdk.live/static-assets/ghost/2023/04/ils_flutter_blog.jpg" alt="Build Flutter Live Streaming App for Android, iOS, and Web"/><p>Video calling has become a staple form of communication, but it often struggles to scale when user numbers climb into the millions. Interactive live streaming offers a robust solution, providing a way to engage a vast audience in near real-time.</p><p>Interactive live streaming enhances the user experience with features like real-time comments, emoji reactions, and hand-raising, making it indispensable across industries such as gaming, social platforms, and education.</p><h3 id="why-choose-videosdk-for-your-flutter-live-streaming-app">Why Choose VideoSDK for Your Flutter Live Streaming App?</h3>
<p>VideoSDK integrates seamlessly with Flutter, allowing developers to add real-time communication capabilities to their apps while scaling effortlessly to accommodate millions of users. It combines the intimacy of video calls with the broad reach of live streaming.</p><h3 id="key-features-to-incorporate-in-your-flutter-live-streaming-app">Key Features to Incorporate in Your Flutter Live Streaming App</h3>
<p>Flutter is Google’s UI toolkit for building natively compiled applications for mobile, web, and desktop from a single codebase. It is particularly well-suited for developing high-performance apps. VideoSDK’s Flutter package comes loaded with over 15 features that are essential for a top-notch live streaming experience.</p><h2 id="step-by-step-guide-to-building-your-app">Step-by-Step Guide to Building Your App</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/image-5.png" class="kg-image" alt="Build Flutter Live Streaming App for Android, iOS, and Web" loading="lazy" width="1656" height="864"/></figure><p><strong>Pre-requisites:</strong></p><ul><li>Basic understanding of the Flutter framework.</li><li>A VideoSDK developer account (don't have one? Sign up via the <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>).</li><li>The VideoSDK <a href="https://pub.dev/packages/videosdk">Flutter Package</a>.</li></ul><p>You are now ready to create a fresh Flutter project.</p><h3 id="a-create-and-configure-your-flutter-live-streaming-project">(a) <strong>Create and Configure Your Flutter Live Streaming Project</strong></h3><p>To create a new Flutter project, run the following command at your chosen location:</p><pre><code class="language-shell">flutter create videosdk_flutter_ils_app</code></pre><p>Then navigate to the project directory in the terminal and run:</p><pre><code class="language-shell">flutter pub add videosdk</code></pre><p>The <code>videosdk</code> package will now be added to <strong>pubspec.yaml</strong>.</p><p>You'll also need to add a few other packages:</p><ul><li>HTTP - <code>http: ^0.13.5</code></li><li>flutter_vlc_player - <code>flutter_vlc_player: ^7.2.0</code></li></ul><h3 id="b-%E2%9A%99%EF%B8%8F-configure-flutter-live-streaming-project">(b) ⚙️ Configure Flutter Live Streaming Project</h3><p><strong>For Flutter Android</strong></p><ul><li>Update <code>/android/app/src/main/AndroidManifest.xml</code> with the permissions required for the audio and video features.</li></ul><pre><code class="language-xml">&lt;uses-feature android:name="android.hardware.camera" /&gt;
&lt;uses-feature android:name="android.hardware.camera.autofocus" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET"/&gt;</code></pre><ul><li>App level <code>build.gradle</code> file at <code>/android/app/build.gradle</code> needs to be updated. You will need to increase <code>minSdkVersion</code> of <code>defaultConfig</code> up to <code>23</code></li></ul><p><strong>For Flutter iOS</strong></p><ul><li><code>Info.plist</code> file at <code>/ios/Runner/Info.plist</code> needs to be updated to add permissions for iOS.</li></ul><pre><code class="language-js">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><ul><li>You will need to uncomment the following line to define a global platform for your project in <code>/ios/Podfile</code></li></ul><pre><code># platform :ios, '11.0'</code></pre><h2 id="project-structure-for-flutter-live-streaming-app">Project Structure for Flutter Live Streaming App</h2><figure class="kg-card kg-code-card"><pre><code class="language-js">root
├── android
├── ios
├── lib
     ├── api_call.dart
     ├── ils_screen.dart
     ├── ils_speaker_view.dart
     ├── ils_viewer_view.dart
     ├── join_screen.dart
     ├── livestream_player.dart
     ├── main.dart
     ├── meeting_controls.dart
     ├── participant_tile.dart</code></pre><figcaption><p><span style="white-space: pre-wrap;">Project Structure</span></p></figcaption></figure><h3 id="step-1-get-started-flutter-live-streaming-with-apicalldart">STEP 1: Get Started Flutter Live Streaming with api_call.dart</h3><p>Before jumping to anything else, you will have to write a function to generate a unique meeting ID. To generate a meeting ID, you will need an Authentication Token. You can get an Authentication Token from <a href="https://app.videosdk.live/">VideoSDK Dashboard</a> (For Development) or <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples">VideoSDK Authentication Server</a> (For Production).</p><pre><code class="language-js">import 'dart:convert';
import 'package:http/http.dart' as http;

//Auth token we will use to generate a meeting and connect to it
String token = "&lt;&lt; Authentication Token &gt;&gt;";

// API call to create meeting
Future&lt;String&gt; createMeeting() async {
  final http.Response httpResponse = await http.post(
    Uri.parse("https://api.videosdk.live/v2/rooms"),
    headers: {'Authorization': token},
  );

//Destructuring the roomId from the response
  return json.decode(httpResponse.body)['roomId'];
}</code></pre><h3 id="step-2-develop-the-join-screen-widget-for-flutter-live-streaming-app">STEP 2: <strong>Develop the Join Screen</strong> Widget for Flutter Live Streaming App</h3><p>Let's create <code>join_screen.dart</code> file in <code>lib</code> directory and create a JoinScreen<strong> StatelessWidget.</strong></p><p>The JoinScreen will consist of:</p><ul><li><strong>Create Meeting Button - </strong>This button will create a new meeting for the speaker and join it with the mode <code>CONFERENCE</code>.</li><li><strong>Meeting ID TextField - </strong>This text field will contain the meeting ID, you want to join.</li><li><strong>Join Meeting as Host Button - </strong>This button will join the meeting in <code>CONFERENCE</code> mode for the speakers, which you have provided.</li><li><strong>Join Meeting as Viewer Button - </strong>This button will join the meeting in <code>VIEWER</code> mode for the viewers, which you have provided.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import 'package:http/http.dart' as http;

import 'api_call.dart';
import 'ils_screen.dart';

class JoinScreen extends StatelessWidget {
  
  final _meetingIdController = TextEditingController();

  JoinScreen({super.key});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      backgroundColor: Colors.black,
      appBar: AppBar(
        title: const Text('VideoSDK ILS QuickStart'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(12.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            // Creating a new meeting
            ElevatedButton(
              onPressed: () =&gt; onCreateButtonPressed(context),
              child: const Text('Create Meeting'),
            ),
            const SizedBox(height: 40),
            TextField(
              style: const TextStyle(color: Colors.white),
              decoration: const InputDecoration(
                hintText: 'Enter Meeting Id',
                border: OutlineInputBorder(),
                hintStyle: TextStyle(color: Colors.white),
              ),
              controller: _meetingIdController,
            ),
            // Joining the meeting as host
            ElevatedButton(
              onPressed: () =&gt; onJoinButtonPressed(context, Mode.CONFERENCE),
              child: const Text('Join Meeting as Host'),
            ),
            // Joining the meeting as viewer
            ElevatedButton(
              onPressed: () =&gt; onJoinButtonPressed(context, Mode.VIEWER),
              child: const Text('Join Meeting as Viewer'),
            ),
          ],
        ),
      ),
    );
  }
  
  // Creates a new meeting ID and joins it in CONFERENCE mode.
  void onCreateButtonPressed(BuildContext context) async {
    // TODO: Implement 
  }

  // Join the provided meeting with given Mode and meetingId
  void onJoinButtonPressed(BuildContext context, Mode mode) {
	// TODO: Implement
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">join_screen.dart</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">  // Creates a new meeting ID and joins it in CONFERENCE mode.
  void onCreateButtonPressed(BuildContext context) async {
    // Call createMeeting method
    final meetingId = await createMeeting();
   
    if (!context.mounted) return;
    Navigator.of(context).push(
      MaterialPageRoute(
        builder: (context) =&gt; ILSScreen(
          meetingId: meetingId,
          token: token,
          mode: Mode.CONFERENCE,
        ),
      ),
    );
  }</code></pre><figcaption><p><span style="white-space: pre-wrap;">onCreateButtonPressed() implementation</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">  // Join the provided meeting with given Mode and meetingId
  void onJoinButtonPressed(BuildContext context, Mode mode) {
    String meetingId = _meetingIdController.text;
    var re = RegExp("\\w{4}\\-\\w{4}\\-\\w{4}");
    
    // Check that the meeting ID is not empty or invalid
    if (meetingId.isNotEmpty &amp;&amp; re.hasMatch(meetingId)) {
      _meetingIdController.clear();
      // Navigate to ILSScreen
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; ILSScreen(
            meetingId: meetingId,
            token: token,
            mode: mode,
          ),
        ),
      );
    } else {
      ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
        content: Text("Please enter a valid meeting id"),
      ));
    }
  }</code></pre><figcaption><p><span style="white-space: pre-wrap;">onJoinButtonPressed() implementation</span></p></figcaption></figure><ul><li>Update the app's home to JoinScreen in the <code>main.dart</code> file.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">import 'package:flutter/material.dart';
import 'join_screen.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'VideoSDK QuickStart',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: JoinScreen(),
    );
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">main.dart</span></p></figcaption></figure><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/flutter_ils_joining--3-.png" class="kg-image" alt="Build Flutter Live Streaming App for Android, iOS, and Web" loading="lazy" width="226" height="450"/><figcaption><span style="white-space: pre-wrap;">JoinScreen</span></figcaption></figure><h3 id="step-3-flutter-live-streaming-screen-widget">STEP 3: Flutter Live Streaming Screen Widget</h3><p>Let's create the <code>ils_screen.dart</code> file with an <strong>ILSScreen </strong><code>StatefulWidget</code>, which takes the <code>meetingId</code>, <code>token</code> and <code>mode</code> of the participant in its constructor.</p><p>You will create a new room using the <code>createRoom</code> method and render the <code>ILSSpeakerView</code> or <code>ILSViewerView</code> based on the passed participant <code>mode</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import './ils_speaker_view.dart';
import './ils_viewer_view.dart';

class ILSScreen extends StatefulWidget {
  final String meetingId;
  final String token;
  final Mode mode;

  const ILSScreen(
      {super.key,
      required this.meetingId,
      required this.token,
      required this.mode});

  @override
  State&lt;ILSScreen&gt; createState() =&gt; _ILSScreenState();
}

class _ILSScreenState extends State&lt;ILSScreen&gt; {
  late Room _room;
  bool isJoined = false;

  @override
  void initState() {
    // TODO: Implement
    super.initState();
  }

  // listening to room events
  void setMeetingEventListener() {
    // TODO: Implement
  }

  // on back button press, leave the room
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }

  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        backgroundColor: Colors.black,
        appBar: AppBar(
          title: const Text('VideoSDK ILS QuickStart'),
        ),
        //Showing the Speaker or Viewer View based on the mode
        body: isJoined
            ? widget.mode == Mode.CONFERENCE
                ? ILSSpeakerView(room: _room)
                : widget.mode == Mode.VIEWER
                    ? ILSViewerView(room: _room)
                    : null
            : const Center(
                child: Text(
                  "Joining...",
                  style: TextStyle(color: Colors.white),
                ),
              ),
      ),
    );
  }
}
</code></pre><figcaption><p><span style="white-space: pre-wrap;">ils_screen.dart</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-dart">@override
void initState() {
  // create room when widget loads
  _room = VideoSDK.createRoom(
    roomId: widget.meetingId,
    token: widget.token,
    displayName: "John Doe",
    micEnabled: true,
    camEnabled: true,
    defaultCameraIndex: 1, // Index of MediaDevices will be used to set default camera
    mode: widget.mode,
  );

  // setting the event listener for join and leave events
  setMeetingEventListener();
  
  // Joining room
  _room.join();
  
   super.initState();
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">initState() implementation</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-dart">// listening to room events
void setMeetingEventListener() {

  // Setting the joining flag to true when meeting is joined
  _room.on(Events.roomJoined, () {
    if (widget.mode == Mode.CONFERENCE) {
      _room.localParticipant.pin();
    }
    setState(() {
      isJoined = true;
    });
  });

  //Handling navigation when meeting is left
  _room.on(Events.roomLeft, () {
    Navigator.popUntil(context, ModalRoute.withName('/'));
  });
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">setMeetingEventListener() implementation</span></p></figcaption></figure><h3 id="step-4-speakerview-widget-for-flutter-live-streaming-app">STEP 4: SpeakerView Widget for Flutter Live Streaming App</h3><p>Let's create the <code>ILSSpeakerView</code> which will show the <code>MeetingControls</code> and the <code>ParticipantTile</code> widgets in a grid.</p><ul><li>Let us start off by creating the <code>StatefulWidget</code> named <code>ParticipantTile</code> in the <code>participant_tile.dart</code> file. It will consist of an <code>RTCVideoView</code> which will show the participant's video stream.</li></ul><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ParticipantTile extends StatefulWidget {
  final Participant participant;
  const ParticipantTile({super.key, required this.participant});

  @override
  State&lt;ParticipantTile&gt; createState() =&gt; _ParticipantTileState();
}

class _ParticipantTileState extends State&lt;ParticipantTile&gt; {
  Stream? videoStream;

  @override
  void initState() {
    // initialize the video stream for the participant if it is already present
    widget.participant.streams.forEach((key, Stream stream) {
      setState(() {
        if (stream.kind == 'video') {
          videoStream = stream;
        }
      });
    });
    _initStreamListeners();
    super.initState();
  }

  //Event listener for the video stream of the participant
  _initStreamListeners() {
    widget.participant.on(Events.streamEnabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = stream);
      }
    });

    widget.participant.on(Events.streamDisabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = null);
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(8.0),
      //Rendering the video stream of the participant
      child: videoStream != null
          ? RTCVideoView(
              videoStream?.renderer as RTCVideoRenderer,
              objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
            )
          : Container(
              color: Colors.grey.shade800,
              child: const Center(
                child: Icon(
                  Icons.person,
                  size: 100,
                ),
              ),
            ),
    );
  }
}</code></pre><ul><li>Next, let us add the <code>StatelessWidget</code> named <code>MeetingControls</code> in the <code>meeting_controls.dart</code> file. This widget will accept the callback handlers for all the meeting control buttons and the current HLS state of the meeting.</li></ul><pre><code class="language-dart">import 'package:flutter/material.dart';

class MeetingControls extends StatelessWidget {
  final String hlsState;
  final void Function() onToggleMicButtonPressed;
  final void Function() onToggleCameraButtonPressed;
  final void Function() onHLSButtonPressed;

  const MeetingControls(
      {super.key,
      required this.hlsState,
      required this.onToggleMicButtonPressed,
      required this.onToggleCameraButtonPressed,
      required this.onHLSButtonPressed});

  @override
  Widget build(BuildContext context) {
    return Wrap(
      children: [
        ElevatedButton(
          onPressed: onToggleMicButtonPressed,
            child: const Text('Toggle Mic')),
        const SizedBox(width: 10),
        ElevatedButton(
            onPressed: onToggleCameraButtonPressed,
            child: const Text('Toggle WebCam')),
        const SizedBox(width: 10),
        ElevatedButton(
            onPressed: onHLSButtonPressed,
            child: Text(hlsState == "HLS_STOPPED"
                ? 'Start HLS'
                : hlsState == "HLS_STARTING"
                    ? "Starting HLS"
                    : hlsState == "HLS_STARTED" || hlsState == "HLS_PLAYABLE"
                        ? "Stop HLS"
                        : hlsState == "HLS_STOPPING"
                            ? "Stopping HLS"
                            : "Start HLS")),
      ],
    );
  }
}</code></pre><ul><li>Let's finally put all these widgets together in the <code>StatefulWidget</code> named <code>ILSSpeakerView</code> in the <code>ils_speaker_view.dart</code> file. This widget will listen to the <code>hlsStateChanged</code>, <code>participantJoined</code> and <code>participantLeft</code> events. It will render the participants and the meeting controls like leave, toggle mic and webcam, as well as start and stop HLS.</li></ul><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:flutter/services.dart'; // for Clipboard
import 'package:videosdk/videosdk.dart';
import 'package:videosdk_flutter_quickstart/meeting_controls.dart';
import 'package:videosdk_flutter_quickstart/participant_tile.dart';

class ILSSpeakerView extends StatefulWidget {
  final Room room;
  const ILSSpeakerView({super.key, required this.room});

  @override
  State&lt;ILSSpeakerView&gt; createState() =&gt; _ILSSpeakerViewState();
}

class _ILSSpeakerViewState extends State&lt;ILSSpeakerView&gt; {
  var micEnabled = true;
  var camEnabled = true;
  String hlsState = "HLS_STOPPED";

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    super.initState();
    //Setting up the event listeners and initializing the participants and hls state
    setMeetingEventListener();
    participants.putIfAbsent(
        widget.room.localParticipant.id, () =&gt; widget.room.localParticipant);
    //filtering the CONFERENCE participants to be shown in the grid
    widget.room.participants.values.forEach((participant) {
      if (participant.mode == Mode.CONFERENCE) {
        participants.putIfAbsent(participant.id, () =&gt; participant);
      }
    });
    hlsState = widget.room.hlsState;
  }

  @override
  void setState(VoidCallback fn) {
    if (mounted) {
      super.setState(fn);
    }
  }

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(8.0),
      child: Column(
        children: [
          Row(
            children: [
              Expanded(
                  child: Text(
                widget.room.id,
                style: const TextStyle(color: Colors.white),
              )),
              ElevatedButton(
                onPressed: () =&gt; {
                  Clipboard.setData(ClipboardData(text: widget.room.id)),
                  ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
                    content: Text("Meeting Id Copied"),
                  ))
                },
                child: const Text("Copy Meeting Id"),
              ),
              const SizedBox(width: 10),
              ElevatedButton(
                onPressed: () =&gt; {widget.room.leave()},
                style: ElevatedButton.styleFrom(
                  backgroundColor: Colors.red,
                ),
                child: const Text("Leave"),
              )
            ],
          ),
          const SizedBox(
            height: 20,
          ),
          //Showing the current HLS state
          Text(
            "Current HLS State: ${hlsState == "HLS_STARTED" || hlsState == "HLS_PLAYABLE" ? "Livestream is Started" : hlsState == "HLS_STARTING" ? "Livestream is starting" : hlsState == "HLS_STOPPING" ? "Livestream is stopping" : "Livestream is stopped"}",
            style: const TextStyle(color: Colors.white),
          ),
          //render all participants in the room
          Expanded(
            child: GridView.builder(
              gridDelegate: const SliverGridDelegateWithFixedCrossAxisCount(
                crossAxisCount: 1,
              ),
              itemBuilder: (context, index) {
                return ParticipantTile(
                    participant: participants.values.elementAt(index));
              },
              itemCount: participants.length,
            ),
          ),
          //Rendering the meeting controls
          MeetingControls(
            hlsState: hlsState,
            onToggleMicButtonPressed: () {
              micEnabled ? widget.room.muteMic() : widget.room.unmuteMic();
              micEnabled = !micEnabled;
            },
            onToggleCameraButtonPressed: () {
              camEnabled ? widget.room.disableCam() : widget.room.enableCam();
              camEnabled = !camEnabled;
            },
            //HLS handler which will start and stop HLS
            onHLSButtonPressed: () =&gt; {
              if (hlsState == "HLS_STOPPED")
                {
                  widget.room.startHls(config: {
                    "layout": {
                      "type": "SPOTLIGHT",
                      "priority": "PIN",
                      "GRID": 20,
                    },
                    "mode": "video-and-audio",
                    "theme": "DARK",
                    "quality": "high",
                    "orientation": "portrait",
                  })
                }
              else if (hlsState == "HLS_STARTED" || hlsState == "HLS_PLAYABLE")
                {widget.room.stopHls()}
            },
          ),
        ],
      ),
    );
  }

  // listening to room events for participants join, left and hls state changes
  void setMeetingEventListener() {
    widget.room.on(
      Events.participantJoined,
      (Participant participant) {
        //Adding only CONFERENCE-mode participants to the grid
        if (participant.mode == Mode.CONFERENCE) {
          setState(
            () =&gt; participants.putIfAbsent(participant.id, () =&gt; participant),
          );
        }
      },
    );

    widget.room.on(Events.participantLeft, (String participantId) {
      if (participants.containsKey(participantId)) {
        setState(
          () =&gt; participants.remove(participantId),
        );
      }
    });
    widget.room.on(
      Events.hlsStateChanged,
      (Map&lt;String, dynamic&gt; data) {
        setState(
          () =&gt; hlsState = data['status'],
        );
      },
    );
  }
}</code></pre><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/flutter_ils_speaker_view--1-.png" class="kg-image" alt="Build Flutter Live Streaming App for Android, iOS, and Web" loading="lazy" width="226" height="450"/><figcaption><span style="white-space: pre-wrap;">SpeakerView</span></figcaption></figure><h3 id="step-5-implementing-hls-for-flutter-live-streaming-app">STEP 5: <strong>Implementing HLS</strong> for Flutter Live Streaming app</h3><p>Let's create a <code>StatefulWidget</code> named <code>ILSViewerView</code> in the <code>ils_viewer_view.dart</code> file.</p><p>ILSViewerView will listen to the <code>hlsStateChanged</code> event and, once the HLS stream is playable, show the live stream using the <code>flutter_vlc_player</code> library.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:flutter/services.dart'; // for Clipboard
import 'package:videosdk/videosdk.dart';
import './livestream_player.dart';

class ILSViewerView extends StatefulWidget {
  final Room room;
  const ILSViewerView({super.key, required this.room});

  @override
  State&lt;ILSViewerView&gt; createState() =&gt; _ILSViewerViewState();
}

class _ILSViewerViewState extends State&lt;ILSViewerView&gt; {
  String hlsState = "HLS_STOPPED";
  String? downstreamUrl;

  @override
  void initState() {
    super.initState();
    //initialize the room's HLS state and the HLS downstream URL
    hlsState = widget.room.hlsState;
    downstreamUrl = widget.room.hlsDownstreamUrl;
    //add the hlsStateChanged event listener
    setMeetingEventListener();
  }

  @override
  Widget build(BuildContext context) {
    return SingleChildScrollView(
      child: Column(
        mainAxisAlignment: MainAxisAlignment.center,
        children: [
          Padding(
            padding: const EdgeInsets.all(8.0),
            child: Row(
              children: [
                Expanded(
                    child: Text(widget.room.id,
                        style: const TextStyle(color: Colors.white))),
                ElevatedButton(
                  onPressed: () =&gt;
                      {Clipboard.setData(ClipboardData(text: widget.room.id))},
                  child: const Text("Copy Meeting Id"),
                ),
                const SizedBox(width: 10),
                ElevatedButton(
                  onPressed: () =&gt; {widget.room.leave()},
                  style: ElevatedButton.styleFrom(
                    backgroundColor: Colors.red,
                  ),
                  child: const Text("Leave"),
                )
              ],
            ),
          ),
          //Show the livestream player if the downstreamUrl is present
          downstreamUrl != null
              ? LivestreamPlayer(downstreamUrl: downstreamUrl!)
              : const Text("Host has not started the HLS",
                  style: TextStyle(color: Colors.white)),
        ],
      ),
    );
  }

  void setMeetingEventListener() {
    // listening to hlsStateChanged events and updating the hlsState and downstreamUrl
    widget.room.on(
      Events.hlsStateChanged,
      (Map&lt;String, dynamic&gt; data) {
        String status = data['status'];
        if (mounted) {
          setState(() {
            hlsState = status;
            if (status == "HLS_PLAYABLE" || status == "HLS_STOPPED") {
              downstreamUrl = data['downstreamUrl'];
            }
          });
        }
      },
    );
  }
}</code></pre><ul><li>Let us use the <code>flutter_vlc_player</code> library to play the HLS stream on the Viewer side. For this, you will create a <code>StatefulWidget</code> named <code>LivestreamPlayer</code> in the <code>livestream_player.dart</code> file.</li></ul><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:flutter_vlc_player/flutter_vlc_player.dart';

class LivestreamPlayer extends StatefulWidget {
  final String downstreamUrl;
  const LivestreamPlayer({
    Key? key,
    required this.downstreamUrl,
  }) : super(key: key);

  @override
  LivestreamPlayerState createState() =&gt; LivestreamPlayerState();
}

class LivestreamPlayerState extends State&lt;LivestreamPlayer&gt;
    with AutomaticKeepAliveClientMixin {
  late VlcPlayerController _controller;

  @override
  bool get wantKeepAlive =&gt; true;

  @override
  void initState() {
    super.initState();
    //initializing the player
    _controller = VlcPlayerController.network(widget.downstreamUrl,
        options: VlcPlayerOptions());
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    super.build(context);
    //rendering the VlcPlayer
    return VlcPlayer(
      controller: _controller,
      aspectRatio: 9 / 16,
      placeholder: const Center(child: CircularProgressIndicator()),
    );
  }
}</code></pre><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/flutter_ils_viewer_view--1-.png" class="kg-image" alt="Build Flutter Live Streaming App for Android, iOS, and Web" loading="lazy" width="226" height="450"/><figcaption><span style="white-space: pre-wrap;">ViewerView</span></figcaption></figure><h3 id="final-steps-testing-and-launching-your-app"><strong>Final Steps: Testing and Launching Your App</strong></h3><p>The app is all set to test. Make sure to update the <code>token</code> in <code>api_call.dart</code>.<br/>Your app should look like this.</p>
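<p>For reference, creating a room against the VideoSDK REST API (the same endpoint your <code>createRoom</code> helper relies on) is a single authenticated POST. The sketch below is a hedged, language-agnostic illustration in Python — the helper names <code>build_create_room_request</code> and <code>parse_room_id</code> are illustrative, not part of the quickstart code:</p>

```python
import json
import urllib.request

API_URL = "https://api.videosdk.live/v2/rooms"

def build_create_room_request(token: str) -> urllib.request.Request:
    # Room creation is a POST with the auth token in the
    # "authorization" header (no request body required).
    return urllib.request.Request(
        API_URL, method="POST", headers={"authorization": token}
    )

def parse_room_id(body: bytes) -> str:
    # The response JSON carries the new room id under "roomId".
    return json.loads(body)["roomId"]

req = build_create_room_request("YOUR_AUTH_TOKEN")
print(req.get_method())                                # POST
print(parse_room_id(b'{"roomId": "abcd-efgh-ijkl"}'))  # abcd-efgh-ijkl
```

<p>To actually create a room you would pass the request to <code>urllib.request.urlopen(req)</code> and feed the response body to <code>parse_room_id</code>.</p>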
<!--kg-card-begin: html-->
<center>
<video height="450px" src="https://cdn.videosdk.live/website-resources/docs-resources/flutter_ils_speaker_video.mp4" controls="" muted=""> </video> 
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 
<video height="450px" src="https://cdn.videosdk.live/website-resources/docs-resources/flutter_ils_viewer_video.mp4" controls="" muted=""/>
</center>
<!--kg-card-end: html-->
<p>You can check out the <a href="https://github.com/videosdk-live/quickstart/tree/main/flutter-hls">complete code here</a>.</p>
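<p>For readability, here is the HLS state machine that the speaker view drives, spelled out as a plain mapping. This is a Python sketch of the same logic found in <code>meeting_controls.dart</code> and <code>onHLSButtonPressed</code> — an illustration only, not part of the Flutter app:</p>

```python
from typing import Optional

# Button label for each HLS state, equivalent to the nested
# ternaries in meeting_controls.dart.
HLS_BUTTON_LABELS = {
    "HLS_STOPPED": "Start HLS",
    "HLS_STARTING": "Starting HLS",
    "HLS_STARTED": "Stop HLS",
    "HLS_PLAYABLE": "Stop HLS",
    "HLS_STOPPING": "Stopping HLS",
}

def hls_button_label(state: str) -> str:
    # Unknown states fall back to "Start HLS", matching the Dart code.
    return HLS_BUTTON_LABELS.get(state, "Start HLS")

def hls_button_action(state: str) -> Optional[str]:
    # Only two situations trigger an API call in onHLSButtonPressed:
    # a stopped stream can be started, and a live stream can be stopped.
    if state == "HLS_STOPPED":
        return "startHls"
    if state in ("HLS_STARTED", "HLS_PLAYABLE"):
        return "stopHls"
    return None  # transitional states ignore the tap

print(hls_button_label("HLS_PLAYABLE"))   # Stop HLS
print(hls_button_action("HLS_STARTING"))  # None
```

<p>Keeping the transitional states (<code>HLS_STARTING</code>, <code>HLS_STOPPING</code>) inert prevents duplicate start/stop calls while a transition is in flight.</p>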
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8"/>
	<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet"/>
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h2 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h2>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="conclusion-enhancing-your-app-with-additional-functionalities"><strong>Conclusion: Enhancing Your App with Additional Functionalities</strong></h2><p>With this, we have successfully built a Flutter live streaming app using VideoSDK. If you wish to add functionalities like chat messaging and screen sharing, you can always check out our <a href="https://docs.videosdk.live/" rel="noopener">Documentation</a>. If you face any difficulty with the implementation, you can connect with us on our <a href="https://discord.gg/Gpmj6eCq5u" rel="noopener">Discord community</a>.</p><h2 id="more-flutter-resources">More Flutter Resources</h2>
<ul><li><a href="https://www.videosdk.live/blog/video-calling-in-flutter">Build a Flutter Video Calling App with Video SDK</a></li><li><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/getting-started">Video SDK Flutter SDK Integration Guide</a></li><li><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start">Flutter SDK QuickStart</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example">Flutter SDK Github Example</a></li><li><a href="https://youtu.be/4h57eVcaC34">Video Call Flutter app with Video SDK (Android &amp; IOS)</a></li><li><a href="https://pub.dev/packages/videosdk">Official Video SDK flutter plugin: (feel free to give it a star⭐)</a></li></ul>]]></content:encoded></item><item><title><![CDATA[How to Integrate Chat Feature using PubSub in iOS Video Call App?]]></title><description><![CDATA[Integrate real-time chat into the iOS video call app using PubSub for seamless communication. 
Enable users to chat while on call, enhancing user interaction.]]></description><link>https://www.videosdk.live/blog/integrate-chat-using-pubsub-in-ios-video-call-app</link><guid isPermaLink="false">66275e512a88c204ca9d4ae7</guid><category><![CDATA[iOS]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Sat, 11 Jan 2025 07:30:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-Feature-in-iOS-Video-Call-App.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-Feature-in-iOS-Video-Call-App.jpg" alt="How to Integrate Chat Feature using PubSub in iOS Video Call App?"/><p>Integrate Chat using PubSub in your <a href="https://www.videosdk.live/blog/ios-video-calling-sdk">iOS Video Call App</a> to enhance user communication and experience. With PubSub, real-time messaging becomes seamless, allowing users to exchange messages effortlessly during video calls. This integration adds a layer of interactivity, enabling users to chat, share information, and collaborate seamlessly while engaging in video conversations. 
By incorporating PubSub technology, your app ensures reliable message delivery and synchronization across all connected devices, enhancing the overall user experience.</p><p><strong>Benefits of Integrating Chat using PubSub in an iOS Video Call App:</strong></p><ol><li><strong>Real-time Communication:</strong> PubSub integration enables real-time messaging within your iOS Video Call App, enhancing user interaction during video calls.</li><li><strong>Seamless Collaboration:</strong> Users can share information, exchange messages, and collaborate effectively while engaged in video conversations, fostering teamwork and productivity.</li><li><strong>Enhanced User Experience:</strong> The addition of PubSub technology enriches the app's functionality, providing a smoother and more interactive experience for users.</li><li><strong>Reliable Message Delivery:</strong> PubSub ensures reliable message delivery and synchronization across all connected devices, eliminating delays and ensuring messages are delivered promptly.</li><li><strong>Scalability:</strong> PubSub architecture allows your app to scale effortlessly, accommodating a growing user base without compromising performance.</li></ol><p><strong>Use Cases of Integrating Chat using PubSub in an iOS Video Call App:</strong></p><ul><li><strong>Real-time Collaboration:</strong> While discussing project details via video call, team members can use the integrated chat feature powered by PubSub to share documents, links, and updates instantly.</li><li><strong>Instant Feedback:</strong> Team members can provide instant feedback or ask questions via chat without interrupting the flow of the video call, enhancing communication efficiency.</li><li><strong>Document Sharing:</strong> With PubSub, users can seamlessly share project documents, screenshots, or any necessary files directly within the chat, facilitating collaboration and decision-making.</li></ul><p>In this guide, we'll walk you through the process of seamlessly adding chat
features to your iOS video-calling application. From setting up the chat environment to handling chat interactions within your video call interfaces, we'll cover all the essential steps to improve your app's functionality and user experience.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To integrate the Chat Feature, we must use the capabilities that VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/login">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. 
For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/server-setup">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><ul><li>iOS 11.0+</li><li>Xcode 12.0+</li><li>Swift 5.0+</li></ul><p>This app will contain two screens:</p><p><strong>Join Screen</strong>: This screen allows the user to either create a meeting or join a predefined meeting.</p><p><strong>Meeting Screen</strong>: This screen contains local and remote participant views and some meeting controls, such as enabling/disabling the microphone or camera and leaving the meeting.</p><h2 id="integrate-videosdk%E2%80%8B">Integrate VideoSDK</h2><p>To install VideoSDK, initialize CocoaPods in the project by running the following command:</p><pre><code class="language-swift">pod init</code></pre><p>This creates a Podfile in your project folder. Open that file and add the dependency for VideoSDK, as below:</p><pre><code class="language-swift">pod 'VideoSDKRTC', :git =&gt; 'https://github.com/videosdk-live/videosdk-rtc-ios-sdk.git'</code></pre><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_podfile.png" class="kg-image" alt="How to Integrate Chat Feature using PubSub in iOS Video Call App?" loading="lazy"/></figure><p>Then run the command below to install the pod:</p><pre><code class="language-swift">pod install</code></pre><p>Then declare the required permissions in Info.plist:</p><pre><code class="language-swift">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><h3 id="project-structure">Project Structure</h3><pre><code class="language-swift">iOSQuickStartDemo
   ├── Models
        ├── RoomStruct.swift
        └── MeetingData.swift
   ├── ViewControllers
        ├── StartMeetingViewController.swift
        └── MeetingViewController.swift
   ├── AppDelegate.swift // Default
   ├── SceneDelegate.swift // Default
   └── APIService
           └── APIService.swift
   ├── Main.storyboard // Default
   ├── LaunchScreen.storyboard // Default
   └── Info.plist // Default
 Pods
      └── Podfile</code></pre><h3 id="create-models%E2%80%8B">Create models<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#create-models"></a></h3><p>Create Swift files for the <code>MeetingData</code> and <code>RoomStruct</code> models to hold the meeting and room data in a structured way.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
struct MeetingData {
    let token: String
    let name: String
    let meetingId: String
    let micEnabled: Bool
    let cameraEnabled: Bool
}</code></pre><figcaption>MeetingData.swift</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
struct RoomsStruct: Codable {
    let createdAt, updatedAt, roomID: String?
    let links: Links?
    let id: String?
    enum CodingKeys: String, CodingKey {
        case createdAt, updatedAt
        case roomID = "roomId"
        case links, id
    }
}

// MARK: - Links
struct Links: Codable {
    let getRoom, getSession: String?
    enum CodingKeys: String, CodingKey {
        case getRoom = "get_room"
        case getSession = "get_session"
    }
}</code></pre><figcaption>RoomStruct.swift</figcaption></figure><h2 id="essential-steps-for-building-the-video-calling">Essential Steps for Building the Video Calling App</h2><p>This guide is designed to walk you through the process of integrating Chat with <a href="https://www.videosdk.live/">VideoSDK</a>. We'll cover everything from setting up the SDK to incorporating the chat feature into your app's interface, ensuring a smooth and efficient implementation process.</p><h3 id="step-1-get-started-with-apiclient%E2%80%8B">Step 1: Get started with APIClient<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apiclient"></a></h3><p>Before jumping to anything else, we have to write an API to generate a unique <code>meetingId</code>. You will require an <strong>authentication token;</strong> you can generate it either using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-server-api-example</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation

let TOKEN_STRING: String = "&lt;AUTH_TOKEN&gt;"

class APIService {

  class func createMeeting(token: String, completion: @escaping (Result&lt;String, Error&gt;) -&gt; Void) {

    let url = URL(string: "https://api.videosdk.live/v2/rooms")!

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.addValue(TOKEN_STRING, forHTTPHeaderField: "authorization")

    URLSession.shared.dataTask(
      with: request,
      completionHandler: { (data: Data?, response: URLResponse?, error: Error?) in

        DispatchQueue.main.async {

          if let data = data {
            do {
              let dataArray = try JSONDecoder().decode(RoomsStruct.self, from: data)

              completion(.success(dataArray.roomID ?? ""))
            } catch {
              print("Error while creating a meeting: \(error)")
              completion(.failure(error))
            }
          }
        }
      }
    ).resume()
  }
}
</code></pre><figcaption>APIService.swift</figcaption></figure><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>The Join Screen will work as a medium to either schedule a new meeting or join an existing meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
import UIKit

class StartMeetingViewController: UIViewController, UITextFieldDelegate {

  private var serverToken = ""

  /// MARK: outlet for create meeting button
  @IBOutlet weak var btnCreateMeeting: UIButton!

  /// MARK: outlet for join meeting button
  @IBOutlet weak var btnJoinMeeting: UIButton!

  /// MARK: outlet for meetingId textfield
  @IBOutlet weak var txtMeetingId: UITextField!

  /// MARK: Initialize the private variable with TOKEN_STRING &amp;
  /// setting the meeting id in the textfield
  override func viewDidLoad() {
    super.viewDidLoad()
    txtMeetingId.delegate = self
    serverToken = TOKEN_STRING
    txtMeetingId.text = "PROVIDE-STATIC-MEETING-ID"
  }

  /// MARK: method for joining the meeting through the segue named "StartMeeting"
  /// after validating that the serverToken is not empty
  func joinMeeting() {

    txtMeetingId.resignFirstResponder()

    if !serverToken.isEmpty {
      DispatchQueue.main.async {
        self.dismiss(animated: true) {
          self.performSegue(withIdentifier: "StartMeeting", sender: nil)
        }
      }
    } else {
      print("Please provide auth token to start the meeting.")
    }
  }

  /// MARK: outlet for create meeting button tap event
  @IBAction func btnCreateMeetingTapped(_ sender: Any) {
    print("show loader while meeting gets connected with server")
    joinRoom()
  }

  /// MARK: outlet for join meeting button tap event
  @IBAction func btnJoinMeetingTapped(_ sender: Any) {
    if (txtMeetingId.text ?? "").isEmpty {

      print("Please provide meeting id to start the meeting.")
      txtMeetingId.resignFirstResponder()
    } else {
      joinMeeting()
    }
  }

  // MARK: - method for creating room api call and getting meetingId for joining meeting

  func joinRoom() {

    APIService.createMeeting(token: self.serverToken) { result in
      if case .success(let meetingId) = result {
        DispatchQueue.main.async {
          self.txtMeetingId.text = meetingId
          self.joinMeeting()
        }
      }
    }
  }

  /// MARK: preparing to animate to meetingViewController screen
  override func prepare(for segue: UIStoryboardSegue, sender: Any?) {

    guard let navigation = segue.destination as? UINavigationController,

      let meetingViewController = navigation.topViewController as? MeetingViewController
    else {
      return
    }

    meetingViewController.meetingData = MeetingData(
      token: serverToken,
      name: txtMeetingId.text ?? "Guest",
      meetingId: txtMeetingId.text ?? "",
      micEnabled: true,
      cameraEnabled: true
    )
  }
}
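
// NOTE: MeetingData is referenced above but never defined in this guide.
// A minimal definition matching the initializer used in prepare(for:) could
// look like this (a sketch; the example app may define it differently):
struct MeetingData {
    let token: String
    let name: String
    let meetingId: String
    let micEnabled: Bool
    let cameraEnabled: Bool
}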
</code></pre><figcaption>StartMeetingViewController.swift</figcaption></figure><h3 id="step-3-initialize-and-join-meeting%E2%80%8B">Step 3: Initialize and Join Meeting<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-and-join-meeting">​</a></h3><p>Using the provided <code>token</code> and <code>meetingId</code>, we will configure and initialize the meeting in <code>viewDidLoad()</code>.</p><p>Then, we'll add <strong>@IBOutlet</strong> for <code>localParticipantVideoView</code> and <code>remoteParticipantVideoView</code>, which can render local and remote participant media, respectively.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import UIKit
import VideoSDKRTC
import WebRTC
import AVFoundation

class MeetingViewController: UIViewController {

    // MARK: - Properties

    // outlet for local participant container view
    @IBOutlet weak var localParticipantViewContainer: UIView!

    // outlet for label for meeting Id
    @IBOutlet weak var lblMeetingId: UILabel!

    // outlet for local participant video view
    @IBOutlet weak var localParticipantVideoView: RTCMTLVideoView!

    // outlet for remote participant video view
    @IBOutlet weak var remoteParticipantVideoView: RTCMTLVideoView!

    // outlet for remote participant no media label
    @IBOutlet weak var lblRemoteParticipantNoMedia: UILabel!

    // outlet for remote participant container view
    @IBOutlet weak var remoteParticipantViewContainer: UIView!

    // outlet for local participant no media label
    @IBOutlet weak var lblLocalParticipantNoMedia: UILabel!

    // Meeting data - required to start
    var meetingData: MeetingData!

    // current meeting reference
    private var meeting: Meeting?

    // MARK: - video participants including self to show in UI
    private var participants: [Participant] = []

    // MARK: - Lifecycle Events

    override func viewDidLoad() {
        super.viewDidLoad()

        // configure the VideoSDK with token
        VideoSDK.config(token: meetingData.token)

        // init meeting
        initializeMeeting()

        // set meeting id in the label text
        lblMeetingId.text = "Meeting Id: \(meetingData.meetingId)"
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        navigationController?.navigationBar.isHidden = true
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        navigationController?.navigationBar.isHidden = false
        NotificationCenter.default.removeObserver(self)
    }

    // MARK: - Meeting

    private func initializeMeeting() {

        // Initialize the VideoSDK
        meeting = VideoSDK.initMeeting(
            meetingId: meetingData.meetingId,
            participantName: meetingData.name,
            micEnabled: meetingData.micEnabled,
            webcamEnabled: meetingData.cameraEnabled
        )

        // Adding the listener to meeting
        meeting?.addEventListener(self)

        // joining the meeting
        meeting?.join()
    }
}</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>After initializing the meeting in the previous step, we will now add <strong>@IBOutlet</strong> for <code>btnLeave</code>, <code>btnToggleVideo</code> and <code>btnToggleMic</code> which can control the media in the meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">class MeetingViewController: UIViewController {

...

    // outlet for leave button
    @IBOutlet weak var btnLeave: UIButton!

    // outlet for toggle video button
    @IBOutlet weak var btnToggleVideo: UIButton!

    // outlet for toggle audio button
    @IBOutlet weak var btnToggleMic: UIButton!

    // bool for mic
    var micEnabled = true
    // bool for video
    var videoEnabled = true


    // action for leave button tap event
    @IBAction func btnLeaveTapped(_ sender: Any) {
        DispatchQueue.main.async {
            self.meeting?.leave()
            self.dismiss(animated: true)
        }
    }

    // action for toggle mic button tap event
    @IBAction func btnToggleMicTapped(_ sender: Any) {
        if micEnabled {
            self.meeting?.muteMic()
        } else {
            self.meeting?.unmuteMic()
        }
        micEnabled.toggle()
    }

    // action for toggle video button tap event
    @IBAction func btnToggleVideoTapped(_ sender: Any) {
        if videoEnabled {
            self.meeting?.disableWebcam()
        } else {
            self.meeting?.enableWebcam()
        }
        videoEnabled.toggle()
    }

...

}</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-5-implementing-meetingeventlistener%E2%80%8B">Step 5: Implementing <code>MeetingEventListener</code>​</h3><p>In this step, we'll create an extension of the <code>MeetingViewController</code> that conforms to <code>MeetingEventListener</code> and implements the <code>onMeetingJoined</code>, <code>onMeetingLeft</code>, <code>onParticipantJoined</code>, <code>onParticipantLeft</code>, <code>onParticipantChanged</code>, etc. methods.</p><p>We'll also create an extension of the <code>MeetingViewController</code> that conforms to <code>PubSubMessageListener</code> and implements the <code>onMessageReceived</code> method, which is triggered whenever a new message is received.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">
extension MeetingViewController: MeetingEventListener {

    /// Meeting started
    func onMeetingJoined() {

        // handle local participant on start
        guard let localParticipant = self.meeting?.localParticipant else { return }

        // add to list
        participants.append(localParticipant)

        // add event listener
        localParticipant.addEventListener(self)

        localParticipant.setQuality(.high)

        if localParticipant.isLocal {
            self.localParticipantViewContainer.isHidden = false
        } else {
            self.remoteParticipantViewContainer.isHidden = false
        }
    }

    /// Meeting ended
    func onMeetingLeft() {
        // remove listeners
        meeting?.localParticipant.removeEventListener(self)
        meeting?.removeEventListener(self)
    }

    /// A new participant joined
    func onParticipantJoined(_ participant: Participant) {
        participants.append(participant)

        // add listener
        participant.addEventListener(self)

        participant.setQuality(.high)

        if participant.isLocal {
            self.localParticipantViewContainer.isHidden = false
        } else {
            self.remoteParticipantViewContainer.isHidden = false
        }
    }

    /// A participant left from the meeting
    /// - Parameter participant: participant object
    func onParticipantLeft(_ participant: Participant) {
        participant.removeEventListener(self)
        guard let index = self.participants.firstIndex(where: { $0.id == participant.id }) else {
            return
        }
        // remove participant from list
        participants.remove(at: index)
        // hide from ui
        UIView.animate(withDuration: 0.5) {
            if !participant.isLocal {
                self.remoteParticipantViewContainer.isHidden = true
            }
        }
    }
}

extension MeetingViewController: PubSubMessageListener {
    
    func onMessageReceived(_ message: PubSubMessage) {
        print("Message Received: " + message.message)
        
        /// Your ui code for showing the chat view
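
        // A minimal sketch of that UI code (both names below are assumed, not
        // part of the SDK): append the message to a local array and refresh
        // the view that renders the chat list.
        // messages.append(message)
        // chatTableView.reloadData()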
    }
}

...</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-6-implementing-participanteventlistener">Step 6: Implementing <code>ParticipantEventListener</code></h3><p>In this stage, we'll add an extension of the <code>MeetingViewController</code> that conforms to <code>ParticipantEventListener</code> and implements the <code>onStreamEnabled</code> and <code>onStreamDisabled</code> methods, which are called when a participant's audio or video <code>MediaStream</code> is enabled or disabled.</p><p>The <code>updateUI</code> helper function updates the user interface (showing or hiding the video views and no-media labels) based on the <code>MediaStream</code> state.</p><pre><code class="language-swift">extension MeetingViewController: ParticipantEventListener {

    /// Participant has enabled mic, video or screenshare
    /// - Parameters:
    ///   - stream: enabled stream object
    ///   - participant: participant object
    func onStreamEnabled(_ stream: MediaStream, forParticipant participant: Participant) {
        updateUI(participant: participant, forStream: stream, enabled: true)
    }

    /// Participant has disabled mic, video or screenshare
    /// - Parameters:
    ///   - stream: disabled stream object
    ///   - participant: participant object
    func onStreamDisabled(_ stream: MediaStream, forParticipant participant: Participant) {
        updateUI(participant: participant, forStream: stream, enabled: false)
    }
}

private extension MeetingViewController {

    func updateUI(participant: Participant, forStream stream: MediaStream, enabled: Bool) {
        switch stream.kind {
        case .state(value: .video):
            if let videotrack = stream.track as? RTCVideoTrack {
                if enabled {
                    DispatchQueue.main.async {
                        UIView.animate(withDuration: 0.5) {
                            if participant.isLocal {
                                self.localParticipantViewContainer.isHidden = false
                                self.localParticipantVideoView.isHidden = false
                                self.localParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.localParticipantViewContainer.bringSubviewToFront(self.localParticipantVideoView)
                                videotrack.add(self.localParticipantVideoView)
                                self.lblLocalParticipantNoMedia.isHidden = true
                            } else {
                                self.remoteParticipantViewContainer.isHidden = false
                                self.remoteParticipantVideoView.isHidden = false
                                self.remoteParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.remoteParticipantViewContainer.bringSubviewToFront(self.remoteParticipantVideoView)
                                videotrack.add(self.remoteParticipantVideoView)
                                self.lblRemoteParticipantNoMedia.isHidden = true
                            }
                        }
                    }
                } else {
                    UIView.animate(withDuration: 0.5) {
                        if participant.isLocal {
                            self.localParticipantViewContainer.isHidden = false
                            self.localParticipantVideoView.isHidden = true
                            self.lblLocalParticipantNoMedia.isHidden = false
                            videotrack.remove(self.localParticipantVideoView)
                        } else {
                            self.remoteParticipantViewContainer.isHidden = false
                            self.remoteParticipantVideoView.isHidden = true
                            self.lblRemoteParticipantNoMedia.isHidden = false
                            videotrack.remove(self.remoteParticipantVideoView)
                        }
                    }
                }
            }

        case .state(value: .audio):
            if participant.isLocal {
                localParticipantViewContainer.layer.borderWidth = 4.0
                localParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            } else {
                remoteParticipantViewContainer.layer.borderWidth = 4.0
                remoteParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            }

        default:
            break
        }
    }
}

...
</code></pre><h3 id="known-issue%E2%80%8B">Known Issue<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#known-issue">​</a></h3><p>If the participant video renders outside its container view, add the following lines to the <code>viewDidLoad</code> method of <code>MeetingViewController.swift</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">override func viewDidLoad() {

  localParticipantVideoView.frame = CGRect(x: 10, y: 0,
      width: localParticipantViewContainer.frame.width,
      height: localParticipantViewContainer.frame.height)

  localParticipantVideoView.bounds = CGRect(x: 10, y: 0,
      width: localParticipantViewContainer.frame.width,
      height: localParticipantViewContainer.frame.height)

  localParticipantVideoView.clipsToBounds = true

  remoteParticipantVideoView.frame = CGRect(x: 10, y: 0,
      width: remoteParticipantViewContainer.frame.width,
      height: remoteParticipantViewContainer.frame.height)

  remoteParticipantVideoView.bounds = CGRect(x: 10, y: 0,
      width: remoteParticipantViewContainer.frame.width,
      height: remoteParticipantViewContainer.frame.height)

  remoteParticipantVideoView.clipsToBounds = true
}
</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><blockquote><strong>TIP:</strong><br>Stuck anywhere? Check out this <a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example" rel="noopener noreferrer">example code</a> on GitHub</br></blockquote><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - videosdk-live/videosdk-rtc-ios-sdk-example: WebRTC based video conferencing SDK for iOS (Swift / Objective C)</div><div class="kg-bookmark-description">WebRTC based video conferencing SDK for iOS (Swift / Objective C) - videosdk-live/videosdk-rtc-ios-sdk-example</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate Chat Feature using PubSub in iOS Video Call App?"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/3d2f5eef43ad3d03fbe693ee2c8633053215e90252a63bddc775d8b8d8a7e380/videosdk-live/videosdk-rtc-ios-sdk-example" alt="How to Integrate Chat Feature using PubSub in iOS Video Call App?"/></div></a></figure><p>After successfully integrating the VideoSDK into your iOS app, you've unlocked the power of high-quality video calling. This not only improves user experience but can encourage longer call duration and deeper engagement in your app. By adding a chat feature, you provide users with more flexibility and control during their calls.</p><h2 id="integrate-chat-feature-in-video-app">Integrate Chat Feature in Video App </h2><p>For communication or any kind of messaging between the participants, VideoSDK provides <code>pubSub</code> classes that use the Publish-Subscribe mechanism and can be used to develop a wide variety of functionalities. 
For example, participants could use it to send chat messages to each other, share files or other media, or even trigger actions like muting or unmuting audio or video.</p><p>Now, let's see how we can use PubSub to implement chat functionality. If you are not familiar with the PubSub mechanism and the <code>pubSub</code> class, you can <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/pubsub">follow this guide</a>.</p><h3 id="implementing-group-chat">Implementing Group Chat</h3><ul><li>The first step in creating a group chat is choosing the topic that all the participants will publish and subscribe to in order to send and receive messages. We will be using <code>CHAT</code> as the topic for this one.</li><li>On the send button tap, publish the message that the sender typed in the <code>Text</code> field.</li></ul><blockquote>NOTE:<br>It is assumed that you have created a user interface along with the implementation of the meeting class and its event listeners. For reference, check out our iOS <a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example" rel="noopener noreferrer">example app</a> to see how to set up your app.</br></blockquote><pre><code class="language-swift">class MeetingViewController {
    //Button that will send the message when tapped
    @IBAction func sendMessageTapped(_ sender: Any) {
        let options = ["persist" : true]
        // publish message
        self.meeting?.pubsub.publish(topic: "CHAT", message: "How are you?", options: options)
    }
}</code></pre><ul><li>The next step would be to display the messages others send. For this, we have to <code>subscribe</code> to that topic, i.e., <code>CHAT</code>, and display all the messages. When a message is received, the <code>onMessageReceived</code> event of the <code>PubSubMessageListener</code> is triggered.</li></ul><pre><code class="language-swift">extension MeetingViewController: MeetingEventListener {
    func onMeetingJoined() {
        // subscribe to the topic 'CHAT' when onMeetingJoined is triggered
        meeting?.pubsub.subscribe(topic: "CHAT", forListener: self)
    }
}
extension MeetingViewController: PubSubMessageListener {
    // read message when it is received
    func onMessageReceived(_ message: PubSubMessage) {
        print("Message Received: " + message.message)
    }
}</code></pre><p>The final step in the group chat is to <code>unsubscribe</code> from the topic you previously subscribed to once it is no longer needed. Here we unsubscribe from the <code>CHAT</code> topic when the meeting is left.</p><pre><code class="language-swift">extension MeetingViewController: MeetingEventListener {
    func onMeetingLeft() {
        // unsubscribe from the topic 'CHAT' when onMeetingLeft is triggered
        meeting?.pubsub.unsubscribe(topic: "CHAT", forListener: self)
    }
    }
}</code></pre><ul><li><strong>If you want the full guide to all chat features, you can check our documentation:</strong></li></ul><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/chat-using-pubsub"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Chat messages with PubSub - Video SDK Docs | Video SDK</div><div class="kg-bookmark-description">PubSub features quick integrate in Javascript, React JS, Android, IOS, React Native, Flutter with Video SDK to add live video &amp; audio conferencing to your applications.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="How to Integrate Chat Feature using PubSub in iOS Video Call App?"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="How to Integrate Chat Feature using PubSub in iOS Video Call App?"/></div></a></figure><h3 id="integrate-private-chat">Integrate Private Chat</h3><p>To convert the above example into a private chat between two participants, all you have to do is pass the target participant's ID in the <code>sendOnly</code> option, e.g. <code>["sendOnly" : ["ABCD"]]</code>. <code>sendOnly</code> accepts an array of the participant IDs you wish to send the message to.</p><pre><code class="language-swift">class MeetingViewController {
    //Button that will send the private message when tapped
    @IBAction func sendPrivateMessageTapped(_ sender: Any) {
        //adding sendOnly option
        let options = ["sendOnly" : ["ABCD"]]
        // publish message
        self.meeting?.pubsub.publish(topic: "CHAT", message: "How are you?", options: options)
    }
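
    // Hypothetical combination (based on the two options shown in this guide):
    // persist a private message so that it also appears in the downloadable
    // chat history. The method name below is illustrative only.
    @IBAction func sendPrivatePersistedMessageTapped(_ sender: Any) {
        let options: [String : Any] = ["sendOnly" : ["ABCD"], "persist" : true]
        self.meeting?.pubsub.publish(topic: "CHAT", message: "How are you?", options: options)
    }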
}</code></pre><h3 id="downloading-chat-messages%E2%80%8B">Downloading Chat Messages<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/chat-using-pubsub#downloading-chat-messages">​</a></h3><p>All the PubSub messages that were published with <code>persist : true</code> can be downloaded as a <code>.csv</code> file. This file is available in the VideoSDK dashboard as well as through the <a href="https://docs.videosdk.live/api-reference/realtime-communication/fetch-session-using-sessionid">Sessions API</a>.</p><h2 id="conclusion">Conclusion</h2><p>By integrating chat functionality using PubSub, you've added a valuable layer of communication to your video call application with VideoSDK.live. This guide has equipped you with the knowledge to implement this feature in your iOS app. This integration improves engagement and fosters better communication among users. By leveraging PubSub, developers can create a robust and scalable chat feature that complements the video calling experience.</p><p>Unlock the full potential of VideoSDK today and craft seamless video experiences!
<strong><strong><a href="https://app.videosdk.live/dashboard">Sign up</a></strong></strong> now to receive 10,000 free minutes and take your video app to new heights.</p>]]></content:encoded></item><item><title><![CDATA[Integrate HLS Player in React JS Video Calling App: Complete Guide]]></title><description><![CDATA[Integrate HLS Player seamlessly into your React JS video calling app for smooth, high-quality streaming.]]></description><link>https://www.videosdk.live/blog/implement-hls-player-in-react-js</link><guid isPermaLink="false">6618f8102a88c204ca9d0980</guid><category><![CDATA[React]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Sat, 11 Jan 2025 06:54:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/HLS-player-React.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/HLS-player-React.png" alt="Integrate HLS Player in React JS Video Calling App: Complete Guide"/><p>Integrating an HLS (<a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HTTP Live Streaming</a>) player into your React JS video calling app can enhance the streaming experience for your users. With HLS, you can ensure smooth playback across various devices and network conditions. 
Using VideoSDK's powerful HLS playback feature, you can ensure a smooth and reliable video calling experience for your users.</p><h3 id="advantages-of-using-hls-player">Advantages of Using HLS Player:</h3><ul><li><strong>Seamless Streaming</strong>: HLS ensures smooth video playback by adapting to network fluctuations, and delivering uninterrupted communication during video calls.</li><li><strong>Cross-Platform Compatibility</strong>: HLS is supported across different devices and platforms, enabling users to engage in video calls seamlessly regardless of their device type.</li><li><strong>Improved User Experience</strong>: With HLS, users experience minimal buffering and faster load times, enhancing their overall satisfaction with the video calling app.</li><li><strong>Scalability:</strong> HLS supports scalable streaming, allowing your app to accommodate a growing user base without compromising on performance.</li><li><a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming"><strong>Adaptive Bitrate Streaming</strong></a>: HLS automatically adjusts the video quality based on the user's network speed, ensuring optimal viewing experience under varying conditions.</li></ul><h3 id="practical-applications-of-hls-player">Practical Applications of HLS Player:</h3><ul><li><strong>Virtual Meetings</strong>: HLS integration enables businesses to conduct virtual meetings with remote teams or clients, fostering collaboration regardless of geographical barriers.</li><li><strong>Online Education</strong>: Educational platforms can leverage HLS to deliver high-quality video lectures and interactive sessions, facilitating remote learning for students worldwide.</li><li><strong>Telehealth Services</strong>: Healthcare providers can use HLS-enabled video calling apps to offer remote consultations and medical advice, improving access to healthcare services.</li><li><strong>Live Events</strong>: Event organizers can live stream conferences, concerts, or sports 
events using HLS, reaching a wider audience and enhancing attendee engagement.</li><li><strong>Customer Support</strong>: Companies can integrate HLS into their customer support systems to provide real-time video assistance and troubleshooting, enhancing customer satisfaction and loyalty.</li></ul><p>Let's build a React HLS player integrated using <a href="https://www.videosdk.live/">VideoSDK</a>. With VideoSDK's robust APIs and SDKs, you can effortlessly incorporate React HLS streaming capabilities into your application, ensuring smooth and high-quality video playback for all users.</p><h2 id="getting-started-with-videosdk-and-hls"><strong>Getting Started with VideoSDK and HLS</strong></h2><p>To take advantage of the React HLS player functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, let's ensure that you have completed the necessary prerequisites.</p><h3 id="a-create-a-videosdk-account">[a] Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="b-generate-your-auth-token">[b] Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. 
This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="c-prerequisites-and-setup">[c] Prerequisites and Setup</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (don't have one? Create one on the <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>).</li><li><a href="https://www.npmjs.com/package/@videosdk.live/react-sdk" rel="noopener noreferrer"><strong>React VideoSDK</strong></a></li><li>Make sure Node and NPM are installed on your device.</li><li>Basic understanding of Hooks (useState, useRef, useEffect)</li><li>React Context API (optional)</li></ul><p>Follow the steps to create the environment necessary to add video calls to your app. You can also find the code sample for <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">Quickstart here</a>.<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#create-new-react-app" rel="noopener noreferrer">​</a></p><h3 id="creating-and-configuring-your-react-app">Creating and Configuring Your React App</h3><pre><code class="language-js">$ npx create-react-app videosdk-rtc-react-app</code></pre><h2 id="install-videosdk%E2%80%8B">Install VideoSDK<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#install-videosdk">​</a></h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the HLS feature. 
Install VideoSDK using NPM or Yarn, depending on your project's needs.</p><ul><li>For NPM</li></ul><pre><code class="language-js">$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">$ yarn add "@videosdk.live/react-sdk"

//For the Participants Video
$ yarn add "react-player"</code></pre><p>You are going to use functional components to leverage React's reusable component architecture. There will be components for users, videos and controls (mic, camera, leave) over the video.</p><h3 id="structure-of-the-project%E2%80%8B">Structure of the project<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start-ILS#structure-of-the-project">​</a></h3><p>Your project structure should look like this.</p><pre><code class="language-js">   root
   ├── node_modules
   ├── public
   ├── src
   │    ├── API.js
   │    ├── App.js
   │    ├── index.js
   ├── package.json
.    .</code></pre><h3 id="app-architecture%E2%80%8B">App Architecture<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start-ILS#app-architecture">​</a></h3><p>The App will contain a container component which includes a user component with videos. Each video component will have control buttons for the mic, camera, leave a meeting, and HLS.</p><p>You will be working on these files:</p><ul><li>API.js: Responsible for handling API calls such as generating unique meetingId and token</li><li>App.js: Responsible for rendering the container and joining the meeting.</li></ul><h4 id="architecture-for-speaker">Architecture for Speaker</h4>
<figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/react_quick_start_ils_speaker_arch.png" class="kg-image" alt="Integrate HLS Player in React JS Video Calling App: Complete Guide" loading="lazy" width="1324" height="844"/></figure><h4 id="architecture-for-viewer">Architecture for Viewer</h4>
<figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/react_quick_start_ils_viewer_arch.png" class="kg-image" alt="Integrate HLS Player in React JS Video Calling App: Complete Guide" loading="lazy" width="1324" height="844"/></figure><h2 id="implementing-hls-player-in-react-js">Implementing HLS Player in React JS</h2><p>To add video capability to your React application, you must first complete a sequence of prerequisites.</p><h3 id="step-1-get-started-with-apijs">Step 1: Get started with API.js</h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashboard&gt;";
// API call to create a meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};</code></pre><figcaption><p><span style="white-space: pre-wrap;">API.js</span></p></figcaption></figure><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand Context, which consists of a <strong>Provider</strong> and a <strong>Consumer</strong>. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can connect with many consumers, and Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. 
It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>, <strong>webcamStream</strong>, <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><pre><code class="language-js">import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  useParticipant,
  Constants,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";

function JoinScreen({ getMeetingAndToken }) {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);
  //Mode with which the participant joins: "CONFERENCE" (host) or "VIEWER"
  const [mode, setMode] = useState("CONFERENCE");

  //Getting the meeting id by calling the api we just wrote
  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  //This will set Meeting Id to null when meeting is left or ended
  const onMeetingLeave = () =&gt; {
    setMeetingId(null);
  };

  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
        mode,
      }}
      token={authToken}
    &gt;
      &lt;MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} /&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} setMode={setMode} /&gt;
  );
}
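// The create-or-join decision inside getMeetingAndToken can be pulled out
// into a small pure helper, which is easy to unit test. This is only an
// illustrative sketch; resolveMeetingId is not part of the VideoSDK API.

```javascript
// Returns the given id, or asks the supplied factory for a fresh one.
// `create` stands in for createMeeting({ token: authToken }).
async function resolveMeetingId(id, create) {
  return id == null ? await create() : id;
}

// A fake factory, used here only for illustration.
async function fakeCreateMeeting() {
  return "new-room";
}
```

// Usage: resolveMeetingId("abcd-1234", fakeCreateMeeting) resolves to the
// existing id, while resolveMeetingId(null, fakeCreateMeeting) creates one.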

export default App;</code></pre><h3 id="step-3-implement-join-screen">Step 3: Implement Join Screen</h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one as a host or a viewer.</p><p>This functionality will have 3 buttons:</p><ol><li><strong>Join as Host</strong>: When this button is clicked, the person will join the meeting with the entered <code>meetingId</code> as <code>HOST</code>.</li><li><strong>Join as Viewer</strong>: When this button is clicked, the person will join the meeting with the entered <code>meetingId</code> as <code>VIEWER</code>.</li><li><strong>Create Meeting</strong>: When this button is clicked, the person will join a new meeting as <code>HOST</code>.</li></ol><pre><code class="language-js">function JoinScreen({ getMeetingAndToken, setMode }) {
  const [meetingId, setMeetingId] = useState(null);
  //Set the mode of joining participant and set the meeting id or generate new one
  const onClick = async (mode) =&gt; {
    setMode(mode);
    await getMeetingAndToken(meetingId);
  };
  return (
    &lt;div className="container"&gt;
      &lt;button onClick={() =&gt; onClick("CONFERENCE")}&gt;Create Meeting&lt;/button&gt;
      &lt;br /&gt;
      &lt;br /&gt;
      {" or "}
      &lt;br /&gt;
      &lt;br /&gt;
      &lt;input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) =&gt; {
          setMeetingId(e.target.value);
        }}
      /&gt;
      &lt;br /&gt;
      &lt;br /&gt;
      &lt;button onClick={() =&gt; onClick("CONFERENCE")}&gt;Join as Host&lt;/button&gt;
      {" | "}
      &lt;button onClick={() =&gt; onClick("VIEWER")}&gt;Join as Viewer&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/quick_start_react_ils_join_screen.png" class="kg-image" alt="Integrate HLS Player in React JS Video Calling App: Complete Guide" loading="lazy" width="326" height="256"/></figure><h3 id="step-4-implement-meetingview-and-controls">Step 4: Implement MeetingView and Controls</h3><p><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-4-implement-meetingview-and-controls">​</a>The next step is to create a container to manage features such as join, leave, mute, unmute, and start and stop HLS for the HOST, and to display an HLS Player for the viewer.</p><p>You need to determine the mode of the <code>localParticipant</code>: if it is <code>CONFERENCE</code>, display the <code>SpeakerView</code> component; otherwise, show the <code>ViewerView</code> component.</p><pre><code class="language-js">function MeetingView(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  const { join } = useMeeting();
  const mMeeting = useMeeting({
    //callback for when a meeting is joined successfully
    onMeetingJoined: () =&gt; {
      setJoined("JOINED");
    },
    //callback for when a meeting is left
    onMeetingLeft: () =&gt; {
      props.onMeetingLeave();
    },
    //callback for when there is an error in a meeting
    onError: (error) =&gt; {
      alert(error.message);
    },
  });
  const joinMeeting = () =&gt; {
    setJoined("JOINING");
    join();
  };

  return (
    &lt;div className="container"&gt;
      &lt;h3&gt;Meeting Id: {props.meetingId}&lt;/h3&gt;
      {joined &amp;&amp; joined == "JOINED" ? (
        mMeeting.localParticipant.mode == Constants.modes.CONFERENCE ? (
          &lt;SpeakerView /&gt;
        ) : mMeeting.localParticipant.mode == Constants.modes.VIEWER ? (
          &lt;ViewerView /&gt;
        ) : null
      ) : joined &amp;&amp; joined == "JOINING" ? (
        &lt;p&gt;Joining the meeting...&lt;/p&gt;
      ) : (
        &lt;button onClick={joinMeeting}&gt;Join&lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><h3 id="step-5-implement-speakerview%E2%80%8B">Step 5: Implement SpeakerView<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start-ILS#step-5-implement-speakerview">​</a></h3><p>The next step is to create <code>SpeakerView</code> and <code>Controls</code> components to manage features such as join, leave, mute, and unmute.</p><ul><li>You have to retrieve all the <code>participants</code> using the <code>useMeeting</code> hook and filter them based on the mode set to <code>CONFERENCE</code> ensuring that only Speakers are displayed on the screen.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">function SpeakerView() {
  //Get the participants and HLS State from useMeeting
  const { participants, hlsState } = useMeeting();

  //Filtering the host/speakers from all the participants
  const speakers = useMemo(() =&gt; {
    const speakerParticipants = [...participants.values()].filter(
      (participant) =&gt; {
        return participant.mode == Constants.modes.CONFERENCE;
      }
    );
    return speakerParticipants;
  }, [participants]);
  return (
    &lt;div&gt;
      &lt;p&gt;Current HLS State: {hlsState}&lt;/p&gt;
      {/* Controls for the meeting */}
      &lt;Controls /&gt;

      {/* Rendering all the HOST participants */}
      {speakers.map((participant) =&gt; (
        &lt;ParticipantView participantId={participant.id} key={participant.id} /&gt;
      ))}
    &lt;/div&gt;
  );
}
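// The CONFERENCE-mode filtering used by SpeakerView can be sketched as a
// plain function over the Map that useMeeting() exposes. Illustrative only;
// filterSpeakers is not a VideoSDK API, and the mode string stands in for
// Constants.modes.CONFERENCE.

```javascript
// participants mimics the Map of participant objects from useMeeting();
// only entries joined in CONFERENCE mode count as speakers.
function filterSpeakers(participants) {
  return Array.from(participants.values()).filter(function (participant) {
    return participant.mode === "CONFERENCE";
  });
}

// Example with a hand-built Map:
const sampleParticipants = new Map([
  ["p1", { id: "p1", mode: "CONFERENCE" }],
  ["p2", { id: "p2", mode: "VIEWER" }],
  ["p3", { id: "p3", mode: "CONFERENCE" }],
]);
const sampleSpeakers = filterSpeakers(sampleParticipants);
```

// Maps preserve insertion order, so speakers render in join order here.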

function MeetingView(){
  ...

  const mMeeting = useMeeting({
    onMeetingJoined: () =&gt; {
      //Pin the local participant if they join in CONFERENCE mode
      if (mMeetingRef.current.localParticipant.mode == "CONFERENCE") {
        mMeetingRef.current.localParticipant.pin();
      }
      setJoined("JOINED");
    },
    ...
  });

  //Create a ref to meeting object so that when used inside the
  //Callback functions, meeting state is maintained
  const mMeetingRef = useRef(mMeeting);
  useEffect(() =&gt; {
    mMeetingRef.current = mMeeting;
  }, [mMeeting]);

  return &lt;&gt;...&lt;/&gt;;
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">SpeakerView</span></p></figcaption></figure><ul><li>You have to add the <code>Controls</code> component which will allow the participant to toggle their media.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">function Controls() {
  const { leave, toggleMic, toggleWebcam, startHls, stopHls } = useMeeting();
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; leave()}&gt;Leave&lt;/button&gt;
      &amp;emsp;|&amp;emsp;
      &lt;button onClick={() =&gt; toggleMic()}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleWebcam()}&gt;toggleWebcam&lt;/button&gt;
      &amp;emsp;|&amp;emsp;
      &lt;button
        onClick={() =&gt; {
          //Start the HLS in SPOTLIGHT mode and PIN as
          //priority so only speakers are visible in the HLS stream
          startHls({
            layout: {
              type: "SPOTLIGHT",
              priority: "PIN",
              gridSize: "20",
            },
            theme: "LIGHT",
            mode: "video-and-audio",
            quality: "high",
            orientation: "landscape",
          });
        }}
      &gt;
        Start HLS
      &lt;/button&gt;
      &lt;button onClick={() =&gt; stopHls()}&gt;Stop HLS&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">Controls Component</span></p></figcaption></figure><ul><li>You need to then create the <code>ParticipantView</code> to display the participant's name and media. To play the media, use the <code>webcamStream</code> and <code>micStream</code> from the <code>useParticipant</code> hook.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  //Playing the audio in the &lt;audio&gt;
  useEffect(() =&gt; {
    if (micRef.current) {
      if (micOn &amp;&amp; micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =&gt;
            console.error("videoElem.current.play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    &lt;div&gt;
      &lt;p&gt;
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      &lt;/p&gt;
      &lt;audio ref={micRef} autoPlay playsInline muted={isLocal} /&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          //
          playsinline // required for inline playback on iOS Safari
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          //
          url={videoStream}
          //
          height={"300px"}
          width={"300px"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantView</span></p></figcaption></figure><h4 id="output-of-speakerview-component%E2%80%8B">Output Of SpeakerView Component<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start-ILS#output-of-speakerview-component">​</a></h4><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/quick_start_react_ils_speaker.png" class="kg-image" alt="Integrate HLS Player in React JS Video Calling App: Complete Guide" loading="lazy" width="578" height="448"/></figure><h3 id="step-6-implement-viewerview-with-hls-player%E2%80%8B">Step 6: Implement ViewerView with HLS Player<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start-ILS#step-6-implement-viewerview">​</a></h3><p>When the host initiates the live streaming, viewers will be able to watch it. To implement the player view, you have to use <code>hls.js</code>. It will help play the HLS stream.</p><p>Begin by adding this package.</p><pre><code class="language-js">$ npm install hls.js</code></pre><p>or</p><pre><code class="language-js">$ yarn add hls.js</code></pre><p>With <code>hls.js</code> installed, you can now get the <code>hlsUrls</code> from the <code>useMeeting</code> hook which will be used to play the HLS in the player.</p><pre><code class="language-js">//importing hls.js
import Hls from "hls.js";

function ViewerView() {
  // States to store downstream url and current HLS state
  const playerRef = useRef(null);
  //Getting the hlsUrls
  const { hlsUrls, hlsState } = useMeeting();

  //Playing the HLS stream when the playbackHlsUrl is present and it is playable
  useEffect(() =&gt; {
    if (hlsUrls.playbackHlsUrl &amp;&amp; hlsState == "HLS_PLAYABLE") {
      if (Hls.isSupported()) {
        const hls = new Hls({
          maxLoadingDelay: 1, // max video loading delay used in automatic start level selection
          defaultAudioCodec: "mp4a.40.2", // default audio codec
          maxBufferLength: 0, // If buffer length is/becomes less than this value, a new fragment will be loaded
          maxMaxBufferLength: 1, // Hls.js will never exceed this value
          startLevel: 0, // Start playback at the lowest quality level
          startPosition: -1, // -1 means playback starts from the default position (the live edge for live streams)
          maxBufferHole: 0.001, // 'Maximum' inter-fragment buffer hole tolerance that hls.js can cope with when searching for the next fragment to load.
          highBufferWatchdogPeriod: 0, // if media element is expected to play and if currentTime has not moved for more than highBufferWatchdogPeriod and if there are more than maxBufferHole seconds buffered upfront, hls.js will jump buffer gaps, or try to nudge playhead to recover playback.
          nudgeOffset: 0.05, // In case playback continues to stall after the first playhead nudge, currentTime will be nudged even more following nudgeOffset to try to restore playback. media.currentTime += (nb nudge retry -1)*nudgeOffset
          nudgeMaxRetry: 1, // Max nb of nudge retries before hls.js raise a fatal BUFFER_STALLED_ERROR
          maxFragLookUpTolerance: .1, // This tolerance factor is used during fragment lookup.
          liveSyncDurationCount: 1, // if set to 3, playback will start from fragment N-3, N being the last fragment of the live playlist
          abrEwmaFastLive: 1, // Fast bitrate Exponential moving average half-life, used to compute average bitrate for Live streams.
          abrEwmaSlowLive: 3, // Slow bitrate Exponential moving average half-life, used to compute average bitrate for Live streams.
          abrEwmaFastVoD: 1, // Fast bitrate Exponential moving average half-life, used to compute average bitrate for VoD streams
          abrEwmaSlowVoD: 3, // Slow bitrate Exponential moving average half-life, used to compute average bitrate for VoD streams
          maxStarvationDelay: 1, // ABR algorithm will always try to choose a quality level that should avoid rebuffering
        });

        const player = playerRef.current;

        hls.loadSource(hlsUrls.playbackHlsUrl);
        hls.attachMedia(player);
      } else {
        if (typeof playerRef.current?.play === "function") {
          playerRef.current.src = hlsUrls.playbackHlsUrl;
          playerRef.current.play();
        }
      }
    }
  }, [hlsUrls, hlsState, playerRef.current]);

  return (
    &lt;div&gt;
      {/* Showing message if HLS is not started or is stopped by HOST */}
      {hlsState != "HLS_PLAYABLE" ? (
        &lt;div&gt;
          &lt;p&gt;HLS has not started yet or is stopped&lt;/p&gt;
        &lt;/div&gt;
      ) : (
        &lt;div&gt;
          &lt;video
            ref={playerRef}
            id="hlsPlayer"
            autoPlay
            controls
            style={{ width: "100%", height: "100%" }}
            playsInline
            muted={true}
            onError={(err) =&gt; {
              console.log(err, "hls video error");
            }}
          &gt;&lt;/video&gt;
        &lt;/div&gt;
      )}
    &lt;/div&gt;
  );
}
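// The branching in the effect above can be captured in a pure helper:
// hls.js drives playback where it is supported, otherwise the browser's
// native HLS support (e.g. Safari) is used. selectPlaybackPath is an
// illustrative name, not part of hls.js or VideoSDK.

```javascript
// Decide how (or whether) playback should start for a given HLS state.
// Returns "hlsjs", "native", or "none".
function selectPlaybackPath(hlsState, playbackHlsUrl, hlsJsSupported) {
  if (hlsState !== "HLS_PLAYABLE" || !playbackHlsUrl) {
    return "none";
  }
  return hlsJsSupported ? "hlsjs" : "native";
}

// Examples:
const beforeStart = selectPlaybackPath("HLS_STOPPED", null, true);
const withHlsJs = selectPlaybackPath("HLS_PLAYABLE", "https://example.com/index.m3u8", true);
const nativeOnly = selectPlaybackPath("HLS_PLAYABLE", "https://example.com/index.m3u8", false);
```

// In the component, "hlsjs" corresponds to the Hls.isSupported() branch
// and "native" to assigning the URL directly to the video element's src.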

</code></pre><h4 id="output-of-viewerview%E2%80%8B">Output of ViewerView<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start-ILS#output-of-viewerview">​</a></h4><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/quick_start_react_ils_viewer.png" class="kg-image" alt="Integrate HLS Player in React JS Video Calling App: Complete Guide" loading="lazy" width="1132" height="664"/></figure><p>Congrats! You have completed the implementation of a customized live-streaming app in ReactJS using VideoSDK. To explore more features, go through Basic and Advanced features.</p><p>For more reference, check our docs:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/interactive-livestream"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Interactive Livestream - Video SDK Docs | Video SDK</div><div class="kg-bookmark-description">Interactive Livestream features quick integrate in Javascript, React JS, Android, IOS, React Native, Flutter with Video SDK to add live video &amp; audio conferencing to your applications.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Integrate HLS Player in React JS Video Calling App: Complete Guide"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Integrate HLS Player in React JS Video Calling App: Complete Guide" onerror="this.style.display = 'none'"/></div></a></figure><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-js-video-calling-app">✨ Want to Add More Features to React JS Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React JS video 
calling app, check out these additional resources:</p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-react-js" rel="nofollow noopener">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-react-js" rel="nofollow noopener">Link</a></li><li>Image Capture: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-js" rel="nofollow noopener">Link</a></li><li>Screen Share: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-js" rel="nofollow noopener">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-js" rel="nofollow noopener">Link</a></li><li>Collaborative Whiteboard: <a href="https://www.videosdk.live/blog/integrate-whiteboard-in-react-js" rel="nofollow noopener">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-in-react-js" rel="nofollow noopener">Link</a></li></ul><h2 id="wrap-up">Wrap-up</h2><p>Integrating the HLS player into your React JS video calling app using VideoSDK offers a powerful solution for seamless streaming and an enhanced user experience. With the help of VideoSDK, you can easily implement HLS playback, ensuring smooth and reliable video communication for your users. </p><p>With HLS.js library integration and VideoSDK's APIs, you can build a feature-rich video-calling application that meets the demands of modern communication. </p><p>And, if you are new here and want to build an interactive react app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get 
<em>10000 free minutes every month.</em> This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Migrate Twilio Video to VideoSDK]]></title><description><![CDATA[This migration guide provides a step-by-step transition from Twilio Video to VideoSDK for your real-time communication needs.]]></description><link>https://www.videosdk.live/blog/migrate-twilio-video-to-videosdk</link><guid isPermaLink="false">6571c9a2cbe3c80e020ebb14</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 10 Jan 2025 14:48:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/12/migration_to_videosdk_720.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/12/migration_to_videosdk_720.jpg" alt="How to Migrate Twilio Video to VideoSDK"/><p>Integrating video calls into apps (using JavaScript, React, Android, iOS) is a common need, and developers often look for real-time voice and video features. Twilio Video and <a href="https://www.videosdk.live/">VideoSDK</a> are popular platforms, but with Twilio discontinuing Programmable Video, it's crucial to find a reliable alternative. VideoSDK emerges as a <a href="https://www.videosdk.live/blog/twilio-video-competitors">top competitors of Twilio Video</a>, providing a user-friendly API for the seamless integration of robust audio-video features into your apps with minimal code. Its quick integration allows you to focus on enhancing user experience effortlessly.</p><!--kg-card-begin: html--><!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</link></meta></meta></head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h2 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h2>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>
		</div>
	</div>
</body>

</html><!--kg-card-end: html--><h3 id="videosdk-key-features">VideoSDK Key Features:</h3><p><strong>High Scalability: Reaching New Heights</strong></p><p>Supports up to 300 attendees and 50 presenters, with global reach and &lt;99ms latency, ensuring uninterrupted large-scale collaborations.</p><p><strong>Adaptive Bitrate</strong>: <strong>A Symphony of Quality</strong></p><p>Offers adaptive bitrate technology for optimal audio-video experiences, auto-adjusting stream quality based on bandwidth and network conditions.</p><p><strong>Customizable SDK</strong>: <strong>Tailored Perfection</strong></p><p>Fully customize UI with end-to-end SDK, accelerate time-to-market with code samples, and build interactive features using PubSub for a tailored user experience.</p><p><strong>Quality Recordings</strong>: <strong>Cinematic Memories</strong></p><p>Supports 1080p video recording with programmable layouts and custom templates. Store recordings on VideoSDK cloud or popular providers like AWS, GCP, or Azure.</p><p><strong>Detailed Analytics</strong>: <strong>Peek Behind the Curtains</strong></p><p>Access in-depth video call metrics for comprehensive session analysis, including participant interactions and duration.</p><p><strong>Cross-Platform Streaming</strong>: <strong>Be Everywhere</strong></p><p>Stream live events on platforms like YouTube, LinkedIn, Facebook, etc., with built-in RTMP support.</p><p><strong>Seamless Scaling</strong>: <strong>Grow Without Limits</strong></p><p>Effortlessly scale live audio/video from a few users to over 10,000 within your web app, reaching millions through RTMP output.</p><p><strong>Platform Support</strong>: <strong>Across the Universe</strong></p><p>Build live video apps for specific platforms and run seamlessly across browsers, devices, and operating systems with minimal development effort.</p><p><strong>And here's the most exciting part</strong> – <strong>it's FREE</strong> for every developer to kickstart their journey! 
VideoSDK not only provides a cutting-edge solution but also boasts a pricing structure that effortlessly aligns with the growth of your business. Enjoy a guaranteed <strong>10,000 FREE</strong> minutes<strong> </strong>every month, ensuring that you not only get top-notch quality but also incredible value for your investment.</p><p>We're here to assist you in transitioning from <strong>Twilio Video</strong> to VideoSDK, ensuring a smooth journey as you migrate for powerful voice, video, and streaming integrations. Let's simplify the process together.</p><h2 id="twilio-video-to-videosdk-migration-guide">Twilio Video to <strong>VideoSDK</strong> Migration Guide:</h2><p><strong>Javascript: </strong>Smooth Transition Code</p><p>This migration guide provides a step-by-step transition from using <a href="https://docs.videosdk.live/tutorials/migration-guide-from-twilio-to-videosdk-js-sdk">Twilio Video to VideoSDK on Javascript </a>for your real-time communication needs. The process involves understanding key concepts, setup differences, installation procedures, and code adjustments.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/tutorials/migration-guide-from-twilio-to-videosdk-js-sdk"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Migration Guide From Twilio Video to Video SDK - Javascript | Video SDK</div><div class="kg-bookmark-description">Explore the seamless transition from Twilio to Video SDK for javascript with our comprehensive migration guide. 
Elevate your video communication with expert insights and step-by-step instructions.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="How to Migrate Twilio Video to VideoSDK"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="How to Migrate Twilio Video to VideoSDK"/></div></a></figure><p><strong>React: </strong>Elevate Your React Experience</p><p>This migration guide provides a step-by-step transition from using <a href="https://docs.videosdk.live/tutorials/migration-guide-from-twilio-to-videosdk-web-edition">Twilio Video to VideoSDK on React</a> for your real-time communication needs. The process involves understanding key concepts, setup differences, installation procedures, and code adjustments.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/tutorials/migration-guide-from-twilio-to-videosdk-web-edition"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Migration Guide From Twilio Video to Video SDK - React | Video SDK</div><div class="kg-bookmark-description">Easily migrate from Twilio to VideoSDK on React. 
Our guide provides concise steps for seamless integration, enhancing your app’s video capabilities effortlessly.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="How to Migrate Twilio Video to VideoSDK"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="How to Migrate Twilio Video to VideoSDK"/></div></a></figure><p><strong>Android: </strong>Androids, Assemble!</p><p>This migration guide provides a step-by-step transition from using <a href="https://docs.videosdk.live/tutorials/migration-guide-from-twilio-to-videosdk-android-sdk">Twilio Video to VideoSDK on Android</a> for your real-time communication needs. The process involves understanding key concepts, setup differences, installation procedures, and code adjustments.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/tutorials/migration-guide-from-twilio-to-videosdk-android-sdk"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Migration Guide From Twilio Video to Video SDK - Android | Video SDK</div><div class="kg-bookmark-description">Explore the seamless transition from Twilio to Video SDK for android with our comprehensive migration guide. 
Elevate your video communication with expert insights and step-by-step instructions.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="How to Migrate Twilio Video to VideoSDK"/><span class="kg-bookmark-author">Video SDK logo</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="How to Migrate Twilio Video to VideoSDK"/></div></a></figure><h3 id="features-map-choosing-excellence-over-good">Features Map: Choosing Excellence Over Good</h3><p>Choose wisely! Delve into the feature map comparing Twilio Video and VideoSDK. Understanding the unique offerings and capabilities of each will aid your decision-making based on project requirements.</p><h2 id="faqs">FAQs</h2><p><strong>Q1: Why should I migrate to VideoSDK?</strong></p><p>VideoSDK not only matches Twilio Video but exceeds it with features like high scalability, adaptive bitrate, a customizable SDK, and more. It's a strategic move for future-proofing your real-time communication apps.</p><p><strong>Q2: Is the migration process complicated?</strong></p><p>Not at all! We provide detailed migration guides for JavaScript, React, and Android, ensuring a smooth transition without headaches. You'll be up and running in no time.</p><p><strong>Q3: What makes VideoSDK stand out?</strong></p><p>VideoSDK offers not just quality features but also a generous FREE tier, providing 10,000 minutes every month. It's a cost-effective solution without compromising on excellence.</p><h3 id="conclusion">Conclusion</h3><p>Migrating from Twilio Video to VideoSDK marks a strategic move in ensuring the continued success and advancement of your real-time communication applications. 
With Twilio discontinuing Programmable Video, the transition becomes imperative, and VideoSDK emerges as an exceptional alternative, offering a seamless integration experience across JavaScript, React, Android, and iOS platforms.</p><p>Do you have any questions about switching to VideoSDK? We’d love to help. Contact our <a href="https://www.videosdk.live/support">support team</a> or jump into our <a href="https://discord.com/invite/Qfm8j4YAUJ">Discord community</a> to get started.</p>]]></content:encoded></item><item><title><![CDATA[How to Implement Chat Feature in Flutter Video Call App?]]></title><description><![CDATA[Explore the step-by-step guide on Flutter's video call app with VideoSDK, which enables real-time chat alongside video calls, enhancing collaboration.]]></description><link>https://www.videosdk.live/blog/implement-chat-feature-in-flutter-video-call-app</link><guid isPermaLink="false">661fa3e82a88c204ca9d43aa</guid><category><![CDATA[Flutter]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Fri, 10 Jan 2025 09:25:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-Feature-in-Flutter-Video-Call-App.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-Feature-in-Flutter-Video-Call-App.jpg" alt="How to Implement Chat Feature in Flutter Video Call App?"/><p>Integrating a real-time chat feature into your <a href="https://www.videosdk.live/blog/video-calling-in-flutter">Flutter video call app</a> improves participant collaboration and communication. Whether it's sharing important information, asking questions, or simply chatting, participants can engage with each other effortlessly during video call sessions. 
You can implement real-time messaging using the PubSub (Publish-Subscribe) mechanism, which ensures efficient and scalable message distribution across meetings.</p><p>In this guide, we'll walk through the process of seamlessly integrating real-time chat capabilities into your existing Flutter video chat app. From establishing the chat environment to managing real-time chat interactions within your video call interfaces, we'll cover all the essential steps to augment your app's functionality and user experience.</p><h3 id="benefits-of-implement-chat-feature-in-flutter-video-call-app">Benefits of Implementing a Chat Feature in a Flutter Video Call App</h3><ul><li><strong>Enhanced Collaboration:</strong> Chat enables participants to communicate ideas, share files, and ask questions, fostering collaboration and teamwork.</li><li><strong>Real-time Feedback:</strong> During video calls, users can provide immediate feedback, ask questions, or clarify doubts, improving communication efficiency.</li><li><strong>Documentation:</strong> Chat transcripts record discussions and decisions made during meetings, facilitating post-meeting review and follow-up.</li><li><strong>Increased Engagement:</strong> Chat encourages active participation from all participants, even those who might be hesitant to speak up during video calls.</li></ul><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To build this, we'll use the capabilities that VideoSDK offers. Before diving into the implementation steps and building the Flutter chat app, let's complete the necessary prerequisites for integrating the real-time chat feature.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. 
This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token authorizes your application to use VideoSDK features. For a more visual walkthrough of the account creation and token generation process, refer to the <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/authentication-and-tokens#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites%E2%80%8B">Prerequisites<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#prerequisites">​</a></h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (if you do not have one, visit the <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>A basic understanding of Flutter, with the Flutter SDK installed on your device</li><li><strong><a href="https://pub.dev/packages/videosdk" rel="noopener noreferrer">Flutter VideoSDK</a></strong></li></ul><h2 id="install-videosdk%E2%80%8B">Install VideoSDK<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#install-video-sdk">​</a></h2><p>Install VideoSDK using the Flutter command below. Make sure you are in your Flutter chat app directory before you run this command.</p><pre><code class="language-dart">$ flutter pub add videosdk

# run this command to add the http package, used to make the network call that generates a roomId
$ flutter pub add http</code></pre><h3 id="videosdk-compatibility">VideoSDK Compatibility</h3><!--kg-card-begin: html--><table style="border: 1px solid black;">
<thead>
<tr>
<th style="border:1px solid white;">Android and iOS app</th>
<th style="border:1px solid white;">Web</th>
<th style="border:1px solid white;">Desktop app</th>
<th style="border:1px solid white;">Safari browser</th>
</tr>
</thead>
<tbody>
<tr>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ❌ </center></td>
</tr>
</tbody>
</table>
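<p>For reference, running the two <code>flutter pub add</code> commands above writes the packages into the <code>dependencies</code> section of your <code>pubspec.yaml</code>. It should end up containing entries along these lines (the version constraints shown here are illustrative, not exact):</p><pre><code class="language-dart">dependencies:
  flutter:
    sdk: flutter
  videosdk: ^1.0.0  # constraint written by `flutter pub add videosdk`
  http: ^1.0.0      # constraint written by `flutter pub add http`</code></pre>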
<!--kg-card-end: html--><h3 id="structure-of-the-project%E2%80%8B">Structure of the project<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#structure-of-the-project">​</a></h3><p>Your project structure should look like this.</p><pre><code class="language-dart">    root
    ├── android
    ├── ios
    ├── lib
         ├── api_call.dart
         ├── join_screen.dart
         ├── main.dart
         ├── meeting_controls.dart
         ├── meeting_screen.dart
         ├── participant_tile.dart</code></pre><p>We are going to create Flutter widgets (JoinScreen, MeetingScreen, MeetingControls, and ParticipantTile).</p><h3 id="app-structure%E2%80%8B">App Structure<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#app-structure">​</a></h3><p>The app widget will contain the <code>JoinScreen</code> and <code>MeetingScreen</code> widgets. <code>MeetingScreen</code> will have the <code>MeetingControls</code> and <code>ParticipantTile</code> widgets.</p><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/flutter_quick_start_arch.png" class="kg-image" alt="How to Implement Chat Feature in Flutter Video Call App?" loading="lazy"/></figure><h3 id="configure-project">Configure Project</h3><h4 id="for-android%E2%80%8B">For Android<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-android">​</a></h4><ul><li>Update <code>/android/app/src/main/AndroidManifest.xml</code> with the permissions we will be using to implement the audio and video features.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">&lt;uses-feature android:name="android.hardware.camera" /&gt;
&lt;uses-feature android:name="android.hardware.camera.autofocus" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET"/&gt;
&lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
&lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;</code></pre><figcaption>AndroidManifest.xml</figcaption></figure><ul><li>Also, you will need to set your build settings to Java 8 because the official WebRTC jar now uses static methods in <code>EglBase</code> an interface. Just add this to your app-level <code>/android/app/build.gradle</code>.</li></ul><pre><code class="language-dart">android {
    //...
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}</code></pre><ul><li>If necessary, in the same <code>build.gradle</code> you will need to increase the <code>minSdkVersion</code> of <code>defaultConfig</code> to <code>23</code> (currently, the default Flutter generator sets it to <code>16</code>).</li><li>If necessary, in the same <code>build.gradle</code> you will need to increase <code>compileSdkVersion</code> and <code>targetSdkVersion</code> to <code>33</code> (currently, the default Flutter generator sets it to <code>30</code>).</li></ul><h4 id="for-ios%E2%80%8B">For iOS<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-ios">​</a></h4><ul><li>Add the following entries, which allow your app to access the camera and microphone, to your <code>/ios/Runner/Info.plist</code> file:</li></ul><pre><code class="language-dart">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><ul><li>Uncomment the following line to define a global platform for your project in <code>/ios/Podfile</code>:</li></ul><pre><code class="language-dart"># platform :ios, '12.0'</code></pre><h4 id="for-macos%E2%80%8B">For macOS<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-macos">​</a></h4><ul><li>Add the following entries, which allow your app to access the camera and microphone, to your <code>/macos/Runner/Info.plist</code> file:</li></ul><pre><code class="language-dart">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><ul><li>Add the following entries, which allow your app to access the camera and microphone and open outgoing network connections, to your <code>/macos/Runner/DebugProfile.entitlements</code> file:</li></ul><pre><code class="language-dart">&lt;key&gt;com.apple.security.network.client&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.camera&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.microphone&lt;/key&gt;
&lt;true/&gt;</code></pre><ul><li>Add the following entries, which allow your app to access the camera and microphone and open outgoing network connections, to your <code>/macos/Runner/Release.entitlements</code> file:</li></ul><pre><code class="language-dart">&lt;key&gt;com.apple.security.network.server&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.network.client&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.camera&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.microphone&lt;/key&gt;
&lt;true/&gt;</code></pre><h2 id="essential-steps-to-implement-video-calling-functionality">Essential Steps to Implement Video Calling Functionality</h2><p>After successfully integrating VideoSDK into your Flutter chat app, you have laid the foundation for real-time audio and video communication. Now, let's take it a step further by integrating the real time chat feature. This will allow users to send messages during a call, providing a convenient way to share quick messages, links, or information.</p><h3 id="step-1-get-started-with-apicalldart%E2%80%8B">Step 1: Get started with <code>api_call.dart</code><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-api_calldart">​</a></h3><p>Before jumping to anything else, you will write a function to generate a unique meetingId. You will require an authentication token, you can generate it either by using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or by generating it from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for development.</p><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'dart:convert';
import 'package:http/http.dart' as http;

//Auth token we will use to generate a meeting and connect to it
String token = "&lt;Generated-from-dashboard&gt;";

// API call to create meeting
Future&lt;String&gt; createMeeting() async {
  final http.Response httpResponse = await http.post(
    Uri.parse("https://api.videosdk.live/v2/rooms"),
    headers: {'Authorization': token},
  );

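  // Added sketch (not part of the original guide): check the HTTP status code
  // before decoding, so an invalid or expired token surfaces as a clear error
  // instead of a missing 'roomId' key.
  if (httpResponse.statusCode != 200) {
    throw Exception("Failed to create room: ${httpResponse.body}");
  }
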
//Destructuring the roomId from the response
  return json.decode(httpResponse.body)['roomId'];
}</code></pre><figcaption>api_call.dart</figcaption></figure><h3 id="step-2-creating-the-joinscreen%E2%80%8B">Step 2: Creating the JoinScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-2--creating-the-joinscreen">​</a></h3><p>Let's create a <code>join_screen.dart</code> file in the <code>lib</code> directory and create the JoinScreen <code>StatelessWidget</code>.</p><p>The JoinScreen will consist of:</p><ul><li><strong>Create Meeting Button</strong>: This button will create a new meeting for you.</li><li><strong>Meeting ID TextField</strong>: This text field will contain the meeting ID you want to join.</li><li><strong>Join Meeting Button</strong>: This button will join the meeting you have provided.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'api_call.dart';
import 'meeting_screen.dart';

class JoinScreen extends StatelessWidget {
  final _meetingIdController = TextEditingController();

  JoinScreen({super.key});

  void onCreateButtonPressed(BuildContext context) async {
    // call api to create meeting and then navigate to MeetingScreen with meetingId,token
    await createMeeting().then((meetingId) {
      if (!context.mounted) return;
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; MeetingScreen(
            meetingId: meetingId,
            token: token,
          ),
        ),
      );
    });
  }

  void onJoinButtonPressed(BuildContext context) {
    String meetingId = _meetingIdController.text;
    var re = RegExp("\\w{4}\\-\\w{4}\\-\\w{4}");
    // check that the meeting id is not empty or invalid
    // if the meeting id is valid, navigate to MeetingScreen with meetingId, token
    if (meetingId.isNotEmpty &amp;&amp; re.hasMatch(meetingId)) {
      _meetingIdController.clear();
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; MeetingScreen(
            meetingId: meetingId,
            token: token,
          ),
        ),
      );
    } else {
      ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
        content: Text("Please enter a valid meeting id"),
      ));
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('VideoSDK QuickStart'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(12.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            ElevatedButton(
              onPressed: () =&gt; onCreateButtonPressed(context),
              child: const Text('Create Meeting'),
            ),
            Container(
              margin: const EdgeInsets.fromLTRB(0, 8.0, 0, 8.0),
              child: TextField(
                decoration: const InputDecoration(
                  hintText: 'Meeting Id',
                  border: OutlineInputBorder(),
                ),
                controller: _meetingIdController,
              ),
            ),
            ElevatedButton(
              onPressed: () =&gt; onJoinButtonPressed(context),
              child: const Text('Join Meeting'),
            ),
          ],
        ),
      ),
    );
  }
}</code></pre><figcaption>join_screen.dart</figcaption></figure><ul><li>Update the home screen of the app in the <code>main.dart</code></li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'join_screen.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'VideoSDK QuickStart',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: JoinScreen(),
    );
  }
}</code></pre><figcaption>main.dart</figcaption></figure><h3 id="step-3-creating-the-meetingcontrols%E2%80%8B">Step 3: Creating the MeetingControls<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-3--creating-the-meetingcontrols">​</a></h3><p>Let's create a <code>meeting_controls.dart</code> file and create the MeetingControls <code>StatelessWidget</code>.</p><p>The MeetingControls will consist of:</p><ul><li><strong>Leave Button</strong>: This button will leave the meeting.</li><li><strong>Toggle Mic Button</strong>: This button will unmute or mute the mic.</li><li><strong>Toggle Camera Button</strong>: This button will enable or disable the camera.</li></ul><p>MeetingControls will accept 3 functions in the constructor:</p><ul><li><strong><code>onLeaveButtonPressed</code></strong>: invoked when the Leave button is pressed.</li><li><strong><code>onToggleMicButtonPressed</code></strong>: invoked when the toggle mic button is pressed.</li><li><strong><code>onToggleCameraButtonPressed</code></strong>: invoked when the toggle camera button is pressed.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';

class MeetingControls extends StatelessWidget {
  final void Function() onToggleMicButtonPressed;
  final void Function() onToggleCameraButtonPressed;
  final void Function() onLeaveButtonPressed;

  const MeetingControls(
      {super.key,
      required this.onToggleMicButtonPressed,
      required this.onToggleCameraButtonPressed,
      required this.onLeaveButtonPressed});

  @override
  Widget build(BuildContext context) {
    return Row(
      mainAxisAlignment: MainAxisAlignment.spaceEvenly,
      children: [
        ElevatedButton(
            onPressed: onLeaveButtonPressed, child: const Text('Leave')),
        ElevatedButton(
            onPressed: onToggleMicButtonPressed, child: const Text('Toggle Mic')),
        ElevatedButton(
            onPressed: onToggleCameraButtonPressed,
            child: const Text('Toggle WebCam')),
      ],
    );
  }
}</code></pre><figcaption>meeting_controls.dart</figcaption></figure><h3 id="step-4-creating-participanttile%E2%80%8B">Step 4: Creating ParticipantTile<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-4--creating-participanttile">​</a></h3><p>Let's create a <code>participant_tile.dart</code> file and create the ParticipantTile <code>StatefulWidget</code>.</p><p>The ParticipantTile will consist of:</p><ul><li><strong>RTCVideoView</strong>: This will show the participant's video stream.</li></ul><p>ParticipantTile will accept a <code>Participant</code> in the constructor:</p><ul><li><strong>participant:</strong> a participant of the meeting.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ParticipantTile extends StatefulWidget {
  final Participant participant;
  const ParticipantTile({super.key, required this.participant});

  @override
  State&lt;ParticipantTile&gt; createState() =&gt; _ParticipantTileState();
}

class _ParticipantTileState extends State&lt;ParticipantTile&gt; {
  Stream? videoStream;

  @override
  void initState() {
    // initial video stream for the participant
    widget.participant.streams.forEach((key, Stream stream) {
      setState(() {
        if (stream.kind == 'video') {
          videoStream = stream;
        }
      });
    });
    _initStreamListeners();
    super.initState();
  }

  _initStreamListeners() {
    widget.participant.on(Events.streamEnabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = stream);
      }
    });

    widget.participant.on(Events.streamDisabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = null);
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(8.0),
      child: videoStream != null
          ? RTCVideoView(
              videoStream?.renderer as RTCVideoRenderer,
              objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
            )
          : Container(
              color: Colors.grey.shade800,
              child: const Center(
                child: Icon(
                  Icons.person,
                  size: 100,
                ),
              ),
            ),
    );
  }
}</code></pre><figcaption>participant_tile.dart</figcaption></figure><h3 id="step-5-creating-the-meetingscreen%E2%80%8B">Step 5: Creating the MeetingScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-5--creating-the-meetingscreen">​</a></h3><p>Let's create a <code>meeting_screen.dart</code> file and create the MeetingScreen <code>StatefulWidget</code>.</p><p>MeetingScreen will accept a meetingId and token in the constructor:</p><ul><li><strong>meetingId:</strong> the meetingId of the meeting you want to join</li><li><strong>token</strong>: VideoSDK Auth token.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import './participant_tile.dart';
import './meeting_controls.dart';

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;

  const MeetingScreen(
      {super.key, required this.meetingId, required this.token});

  @override
  State&lt;MeetingScreen&gt; createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;
  var micEnabled = true;
  var camEnabled = true;

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    // create room
    _room = VideoSDK.createRoom(
      roomId: widget.meetingId,
      token: widget.token,
      displayName: "John Doe",
      micEnabled: micEnabled,
      camEnabled: camEnabled
    );

    setMeetingEventListener();

    // Join room
    _room.join();

    super.initState();
  }

  // listening to meeting events
  void setMeetingEventListener() {
    _room.on(Events.roomJoined, () {
      setState(() {
        participants.putIfAbsent(
            _room.localParticipant.id, () =&gt; _room.localParticipant);
      });
    });

    _room.on(
      Events.participantJoined,
      (Participant participant) {
        setState(
          () =&gt; participants.putIfAbsent(participant.id, () =&gt; participant),
        );
      },
    );

    _room.on(Events.participantLeft, (String participantId) {
      if (participants.containsKey(participantId)) {
        setState(
          () =&gt; participants.remove(participantId),
        );
      }
    });

    _room.on(Events.roomLeft, () {
      participants.clear();
      Navigator.popUntil(context, ModalRoute.withName('/'));
    });
  }

  // on back button pressed, leave the room
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }

  // Build the meeting UI.
  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              Text(widget.meetingId),
              // render all participants
              Expanded(
                child: Padding(
                  padding: const EdgeInsets.all(8.0),
                  child: GridView.builder(
                    gridDelegate: const SliverGridDelegateWithFixedCrossAxisCount(
                      crossAxisCount: 2,
                      crossAxisSpacing: 10,
                      mainAxisSpacing: 10,
                      mainAxisExtent: 300,
                    ),
                    itemBuilder: (context, index) {
                      return ParticipantTile(
                        key: Key(participants.values.elementAt(index).id),
                          participant: participants.values.elementAt(index));
                    },
                    itemCount: participants.length,
                  ),
                ),
              ),
              MeetingControls(
                onToggleMicButtonPressed: () {
                  micEnabled ? _room.muteMic() : _room.unmuteMic();
                  micEnabled = !micEnabled;
                },
                onToggleCameraButtonPressed: () {
                  camEnabled ? _room.disableCam() : _room.enableCam();
                  camEnabled = !camEnabled;
                },
                onLeaveButtonPressed: () {
                  _room.leave();
                },
              ),
            ],
          ),
        ),
      ),
    );
  }
}</code></pre><figcaption>meeting_screen.dart</figcaption></figure><blockquote><strong>CAUTION</strong><br/>If you get a <code>webrtc/webrtc.h</code> file not found error at runtime on iOS, check the solution <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/known-issues#issue--1" rel="noopener noreferrer">here</a>.</blockquote><blockquote><strong>TIP</strong>:<br/>You can check out the complete <a href="https://github.com/videosdk-live/quickstart/tree/main/flutter-rtc" rel="noopener noreferrer">quick start example here</a>.</blockquote><h2 id="integrate-chat-feature">Integrate Chat Feature</h2><p>For communication or any kind of messaging between participants, VideoSDK provides a <code>pubSub</code> mechanism that can be used to develop a wide variety of functionalities. For example, participants could use it to send messages to each other, share files or other media, or even trigger actions like muting or unmuting audio or video.</p><p>Now we will see how we can use PubSub to implement real-time chat functionality. If you are not familiar with the PubSub mechanism, you can <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/pubsub">follow this guide</a>.</p><h3 id="group-chat">Group Chat</h3><p>The first step in creating a group chat is choosing the topic that all participants will publish and subscribe to in order to send and receive messages. We will use <code>CHAT</code> as the topic here. So let us create a message input and a send button to publish messages using <code>pubSub</code> from the <code>Room</code> object.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ChatView extends StatefulWidget {
  final Room room;
  ...
}

class _ChatViewState extends State&lt;ChatView&gt; {

  final msgTextController = TextEditingController();

  // messages received on the 'CHAT' topic
  PubSubMessages? messages;


  @override
  void initState() {
    ...

    // Subscribing 'CHAT' Topic
    widget.room.pubSub
      .subscribe("CHAT", messageHandler)
      .then((value) =&gt; setState((() =&gt; messages = value)));
  }

  //Handler which will be called when a new message is received
  void messageHandler(PubSubMessage message) {
    setState(() =&gt; messages!.messages.add(message));
  }

  @override
  Widget build(BuildContext context) {
    return Column(
      children:[
        Row(
          children: [
            Expanded(
              child: TextField(
                style: TextStyle(
                  fontSize:16,
                  fontWeight: FontWeight.w500,
                ),
                controller: msgTextController,
                // rebuild so the UI can react to the current text
                onChanged: (value) =&gt; setState(() {}),
                decoration: const InputDecoration(
                  hintText: "Write your message",
                  border: InputBorder.none,
                ),
              ),
            ),
            ElevatedButton(
              onPressed:(){
                if (msgTextController.text.trim().isNotEmpty) {
                  widget.room.pubSub
                        .publish(
                          "CHAT",
                          msgTextController.text,
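                          // persist: true keeps the message for the session, so
                          // the subscribe() call above can return earlier messages
                          // to participants who join later (see the PubSub guide)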
                          const PubSubPublishOptions(
                              persist: true),
                        )
                        .then(
                            (value) =&gt; msgTextController.clear());
                }
              },
              child: const Text("Send Message"),
            ),
          ],
        ),
      ]
    );
  }

}</code></pre><p>The final step in the group chat is to display the messages others send. For this, we will use the <code>messages</code> list, which is populated by subscribing to the <code>CHAT</code> topic.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ChatView extends StatefulWidget {
  final Room room;
  ...
}

class _ChatViewState extends State&lt;ChatView&gt; {

  // PubSubMessages
  PubSubMessages? messages;

  @override
  void initState() {
    ...

    // Subscribing 'CHAT' Topic
    widget.room.pubSub
      .subscribe("CHAT", messageHandler)
      .then((value) =&gt; setState((() =&gt; messages = value)));
  }

  //Handler which will be called when a new message is received
  void messageHandler(PubSubMessage message) {
    setState(() =&gt; messages!.messages.add(message));
  }

  @override
  Widget build(BuildContext context) {
    return Column(
      children:[
        Expanded(
          child: messages == null
              ? const Center(child: CircularProgressIndicator())
              : SingleChildScrollView(
                  reverse: true,
                  child: Column(
                    children: messages!.messages
                        .map(
                          (message) =&gt; Text(
                            message.message
                          ),
                        )
                        .toList(),
                  ),
                ),
        ),

        ...
        //Send Message code Here
      ]
    );
  }

  @override
  void dispose() {
    // Unsubscribe
    widget.room.pubSub.unsubscribe("CHAT", messageHandler);
    super.dispose();
  }
}</code></pre><p>Now let us open this <code>ChatView</code> widget on a button click from our meeting screen.</p><pre><code class="language-dart">import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import './participant_tile.dart';
import './ChatView.dart';

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;

  const MeetingScreen(
      {super.key, required this.meetingId, required this.token});

  @override
  State&lt;MeetingScreen&gt; createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
   
  // Other code here... 
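  // NOTE (illustrative, assumed helper): `_onWillPop()` used below is elided
  // from the original snippet. A minimal sketch might leave the meeting and
  // then allow the screen to pop, e.g.:
  //
  //   Future&lt;bool&gt; _onWillPop() async {
  //     widget.room.leave(); // assumes this screen holds a Room reference
  //     return true;
  //   }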
  
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              // Other meeting widgets ...
              
              ElevatedButton(
                onPressed: (){
                	showModalBottomSheet(
                    context: context,
                    isScrollControlled: true,
                    builder: (context) =&gt; ChatView(
                        key: const Key("ChatScreen"),
                        room: widget.room),
                    );
                },
                child: const Text('Open')),
            ],
          ),
        ),
      ),
    );
  }
}</code></pre><h3 id="private-chat">Private Chat</h3><p>To convert the above example into a private chat between two participants, all you have to do is pass the <code>sendOnly</code> parameter in <code>PubSubPublishOptions</code>.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ChatView extends StatefulWidget {
  final Room room;
  ...
}

class _ChatViewState extends State&lt;ChatView&gt; {

  //...

  @override
  Widget build(BuildContext context) {
    return Column(
      children:[
        Row(
          children: [
            Expanded(
              child: TextField(
                style: TextStyle(
                  fontSize:16,
                  fontWeight: FontWeight.w500,
                ),
                controller: msgTextController,
                onChanged: (value) =&gt; setState(() {
                  msgTextController.text;
                }),
                decoration: const InputDecoration(
                  hintText: "Write your message",
                  border: InputBorder.none,
                ),
              ),
            ),
            ElevatedButton(
              onPressed:(){
                if (msgTextController.text.trim().isNotEmpty) {
                  // Pass the participantId of the participant to whom you want to send the message.
                  widget.room.pubSub
                        .publish(
                          "CHAT",
                          msgTextController.text,
                          const PubSubPublishOptions(
                            persist: true, sendOnly: ["xyz"]),
                        )
                        .then(
                            (value) =&gt; msgTextController.clear());
                }
              },
              child: const Text("Send Message"),
            ),
          ],
        ),
      ]
    );
  }

}</code></pre><h3 id="displaying-the-latest-message-notification">Displaying the Latest Message Notification</h3><p>You may want to show a notification to the user when a new message arrives. So let's continue our example and add an alert for new messages.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ChatView extends StatefulWidget {
  final Room room;
  ...
}

class _ChatViewState extends State&lt;ChatView&gt; {

  @override
  void initState() {
    ...
  }

  //Handler which will be called when a new message is received
  void messageHandler(PubSubMessage message) {
    //Show snackbar on new message
    if(context.mounted){
      ScaffoldMessenger.of(context).showSnackBar(SnackBar(
        content: Text(
          message.message,
          overflow: TextOverflow.fade,
        ),
      ));
    }
    setState(() =&gt; messages!.messages.add(message));
  }

  @override
  Widget build(BuildContext context) {
    return Column(
      children:[
        ...
      ]
    );
  }

  @override
  void dispose() {
    ...
  }
</code></pre><h3 id="downloading-chat-messages">Downloading Chat Messages</h3><p>All the messages that were published to PubSub with <code>persist: true</code> can be downloaded as a <code>.csv</code> file. This file is available in the VideoSDK dashboard as well as through the <a href="https://docs.videosdk.live/api-reference/realtime-communication/fetch-session-using-sessionid">Sessions API</a>.</p><p>A real-time chat feature in Flutter combines real-time voice and video with asynchronous text messaging, accommodating various communication preferences and scenarios with the functionality provided by <a href="https://www.videosdk.live/">VideoSDK</a>. With the guidance offered in this tutorial, you can develop a user-friendly video-calling platform that enables users to connect in innovative ways.</p><p>If you have just started to build or integrate new features with VideoSDK, you can unlock its full potential today and build personalized video experiences. Just <a href="https://app.videosdk.live/dashboard">sign up</a> now, receive <strong>10,000 free minutes</strong>, and take your video app to new heights.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Screen Share in Android(Java) Video Chat App?]]></title><description><![CDATA[This expert guide shows how to integrate the powerful Screen Share feature into your Android (Java) video chat app using VideoSDK.]]></description><link>https://www.videosdk.live/blog/how-to-integrate-screen-share-in-java-video-chat-app-for-android</link><guid isPermaLink="false">65fd0f0b2a88c204ca9cef67</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 09 Jan 2025 12:55:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share--Java-2.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img
src="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share--Java-2.png" alt="How to Integrate Screen Share in Android(Java) Video Chat App?"/><p>Ever need to show others exactly what's on your mobile screen during a video call? If your answer is 'Yes!', then the feature you need is called Screen Sharing. Screen sharing is the process of showing your smartphone screen to the other participants. It enables everyone in the conference to view precisely what you see on your screen, which is useful for presentations, demos, and collaborations.</p><p>Integrating the Screen Share feature into your video app offers various possibilities for improved collaboration and communication. Whether delivering presentations or collaborating on projects, the Screen Share functionality allows users to easily share their displays during video calls.</p><p>Android developers can create compelling and interactive video experiences for users by following the steps below and leveraging VideoSDK's capabilities. Start implementing the Screen Share feature today to transform your video app's functionality and user engagement.</p><h2 id="goals">Goals</h2><p>By the end of this article, you will:</p><ol><li>Create a <a href="https://www.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK.</li><li>Enable/disable Screen Share functionality.</li></ol><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the screen share functionality, we will need to use the capabilities that VideoSDK offers. 
Before we dive into the implementation steps, let's make sure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required VideoSDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token plays a crucial role in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/authentication-and-token#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>A supported Java Development Kit (JDK).</li><li>Android Studio version 3.0 or later.</li><li>Android SDK API level 21 or higher.</li><li>A mobile device running Android 5.0 or a later version.</li></ul><h2 id="integrate-videosdk">Integrate VideoSDK</h2><p>Following the account creation and token generation steps, we'll guide you through the process of adding the VideoSDK library and other dependencies to your project. We'll also ensure your app has the required permissions to access features like audio recording, camera usage, and internet connectivity, all crucial for a seamless video experience.</p><h3 id="step-a-add-the-repositories-to-the-projects-settingsgradle-file">Step (a): Add the repositories to the project's <code>settings.gradle</code> file.</h3><pre><code class="language-groovy">dependencyResolutionManagement{
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}
</code></pre><h3 id="step-b-include-the-following-dependency-within-your-applications-buildgradle-file">Step (b): Include the following dependency within your application's <code>build.gradle</code> file:</h3><pre><code class="language-groovy">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.26'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // Other dependencies specific to your app
}
</code></pre><blockquote>If your project has set <code>android.useAndroidX=true</code>, then set <code>android.enableJetifier=true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflict.</blockquote><h3 id="step-c-add-permissions-to-your-project">Step (c): Add permissions to your project</h3><p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-xml">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
</code></pre><p>These permissions are essential for enabling core functionalities like audio recording, internet connectivity for real-time communication, and camera access for video streams within your video application.</p><h2 id="essential-steps-for-building-the-video-calling-functionality">Essential Steps for Building the Video Calling Functionality</h2><p>Having set up your project with VideoSDK, we'll now delve into the functionalities that make up your video application. This section outlines the essential steps for implementing its core features.</p><p>This section will guide you through four key aspects:</p><h3 id="step-1-generate-a-meetingid">Step 1: Generate a <code>meetingId</code></h3><p>Now, we can create the <code>meetingId</code> using VideoSDK's Rooms API. You can refer to this <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/setup-call/initialize-meeting#generating-meeting-id">documentation</a> to generate a meetingId.</p><h3 id="step-2-initializing-the-meeting">Step 2: Initializing the Meeting</h3><p>After getting the <code>meetingId</code>, the next step involves initializing the meeting. For that, we need to:</p><ol><li>Initialize VideoSDK.</li><li>Configure <strong>VideoSDK</strong> with a token.</li><li>Initialize the meeting with required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code>, and more.</li><li>Add a <code>MeetingEventListener</code> for listening to events such as Meeting Join/Left and Participant Join/Left.</li><li>Join the room with the <code>meeting.join()</code> method.</li></ol><p>Please copy the .xml file of the <code>MeetingActivity</code> from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/activity_meeting.xml"><strong>here</strong></a>.</p><pre><code class="language-Java">public class MeetingActivity extends AppCompatActivity {
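  // NOTE (illustrative, not part of the original tutorial): the meetingId
  // used below is created via VideoSDK's Rooms API (Step 1). A minimal
  // sketch using the android-networking dependency added earlier -- the
  // endpoint and response field are assumptions, verify against the docs:
  //
  //   AndroidNetworking.post("https://api.videosdk.live/v2/rooms")
  //       .addHeaders("Authorization", token)
  //       .build()
  //       .getAsJSONObject(new JSONObjectRequestListener() {
  //           @Override
  //           public void onResponse(JSONObject response) {
  //               String meetingId = response.optString("roomId");
  //           }
  //           @Override
  //           public void onError(ANError error) { /* handle the error */ }
  //       });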
  // declare the variables we will be using to handle the meeting
  private Meeting meeting;
  private boolean micEnabled = true;
  private boolean webcamEnabled = true;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_meeting);

    final String token = ""; // Replace with the token you generated from the VideoSDK Dashboard
    final String meetingId = ""; // Replace with the meetingId you have generated
    final String participantName = "John Doe";

    // 1. Initialize VideoSDK
    VideoSDK.initialize(getApplicationContext());

    // 2. Configure VideoSDK with the token
    VideoSDK.config(token);

    // 3. Initialize VideoSDK Meeting
    meeting = VideoSDK.initMeeting(
            MeetingActivity.this, meetingId, participantName,
            micEnabled, webcamEnabled, null, null, false, null, null);

    // 4. Add event listener for listening upcoming events
    meeting.addEventListener(meetingEventListener);

    // 5. Join VideoSDK Meeting
    meeting.join();

    ((TextView)findViewById(R.id.tvMeetingId)).setText(meetingId);
  }

  // creating the MeetingEventListener
  private final MeetingEventListener meetingEventListener = new MeetingEventListener() {
    @Override
    public void onMeetingJoined() {
      Log.d("#meeting", "onMeetingJoined()");
    }

    @Override
    public void onMeetingLeft() {
      Log.d("#meeting", "onMeetingLeft()");
      meeting = null;
      if (!isDestroyed()) finish();
    }

    @Override
    public void onParticipantJoined(Participant participant) {
      Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " joined", Toast.LENGTH_SHORT).show();
    }

    @Override
    public void onParticipantLeft(Participant participant) {
      Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " left", Toast.LENGTH_SHORT).show();
    }
  };
}</code></pre><h3 id="step-3-handle-local-participant-media">Step 3: Handle Local Participant Media</h3><p>After successfully entering the meeting, it's time to manage the webcam and microphone for the local participant (you).</p><p>To enable or disable the webcam, we'll use the <code>Meeting</code> class methods <code>enableWebcam()</code> and <code>disableWebcam()</code>, respectively. Similarly, to mute or unmute the microphone, we'll utilize the methods <code>muteMic()</code> and <code>unmuteMic()</code></p><pre><code class="language-Java">public class MeetingActivity extends AppCompatActivity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_meeting);
    //...Meeting Setup is Here

    // actions
    setActionListeners();
  }

  private void setActionListeners() {
    // toggle mic
    findViewById(R.id.btnMic).setOnClickListener(view -&gt; {
      if (micEnabled) {
        // this will mute the local participant's mic
        meeting.muteMic();
        Toast.makeText(MeetingActivity.this, "Mic Disabled", Toast.LENGTH_SHORT).show();
      } else {
        // this will unmute the local participant's mic
        meeting.unmuteMic();
        Toast.makeText(MeetingActivity.this, "Mic Enabled", Toast.LENGTH_SHORT).show();
      }
      micEnabled=!micEnabled;
    });

    // toggle webcam
    findViewById(R.id.btnWebcam).setOnClickListener(view -&gt; {
      if (webcamEnabled) {
        // this will disable the local participant webcam
        meeting.disableWebcam();
        Toast.makeText(MeetingActivity.this, "Webcam Disabled", Toast.LENGTH_SHORT).show();
      } else {
        // this will enable the local participant webcam
        meeting.enableWebcam();
        Toast.makeText(MeetingActivity.this, "Webcam Enabled", Toast.LENGTH_SHORT).show();
      }
      webcamEnabled=!webcamEnabled;
    });

    // leave meeting
    findViewById(R.id.btnLeave).setOnClickListener(view -&gt; {
      // this will make the local participant leave the meeting
      meeting.leave();
    });
  }
}</code></pre><h3 id="step-4-handling-the-participants-view">Step 4: Handling the Participants' View</h3><p>To display a list of participants in your video UI, we'll utilize a <code>RecyclerView</code>.</p><p><strong>(a)</strong> This involves creating a new layout for the participant view named <code>item_remote_peer.xml</code> in the <code>res/layout</code> folder. You can copy <code>item_remote_peer.xml </code>file from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/item_remote_peer.xml">here</a>.</p><p><strong>(b)</strong> Create a RecyclerView adapter <code>ParticipantAdapter</code> which will be responsible for displaying the participant list. Within this adapter, define a <code>PeerViewHolder</code> class that extends <code>RecyclerView.ViewHolder</code>.</p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  @NonNull
  @Override
  public PeerViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
      return new PeerViewHolder(LayoutInflater.from(parent.getContext()).inflate(R.layout.item_remote_peer, parent, false));
  }

  @Override
  public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
  }

  @Override
  public int getItemCount() {
      return 0;
  }

  static class PeerViewHolder extends RecyclerView.ViewHolder {
    // 'VideoView' to show Video Stream
    public VideoView participantView;
    public TextView tvName;
    public View itemView;

    PeerViewHolder(@NonNull View view) {
        super(view);
        itemView = view;
        tvName = view.findViewById(R.id.tvName);
        participantView = view.findViewById(R.id.participantView);
    }
  }
}</code></pre><p><strong>(c)</strong> Now, we will render a list of <code>Participant</code> for the meeting. We will initialize this list in the constructor of the <code>ParticipantAdapter</code></p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  // creating a empty list which will store all participants
  private final List&lt;Participant&gt; participants = new ArrayList&lt;&gt;();

  public ParticipantAdapter(Meeting meeting) {
    // adding the local participant(You) to the list
    participants.add(meeting.getLocalParticipant());

    // adding Meeting Event listener to get the participant join/leave event in the meeting.
    meeting.addEventListener(new MeetingEventListener() {
      @Override
      public void onParticipantJoined(Participant participant) {
        // add participant to the list
        participants.add(participant);
        notifyItemInserted(participants.size() - 1);
      }

      @Override
      public void onParticipantLeft(Participant participant) {
        int pos = -1;
        for (int i = 0; i &lt; participants.size(); i++) {
          if (participants.get(i).getId().equals(participant.getId())) {
            pos = i;
            break;
          }
        }
        // remove the participant from the list by index, since
        // Participant may not override equals()
        if (pos &gt;= 0) {
          participants.remove(pos);
          notifyItemRemoved(pos);
        }
      }
    });
  }

  // replace getItemCount() method with following.
  // this method returns the size of total number of participants
  @Override
  public int getItemCount() {
    return participants.size();
  }
  //...
}</code></pre><p><strong>(d)</strong> We have listed our participants. Let's set up the view holder to display a participant video.</p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  // replace onBindViewHolder() method with following.
  @Override
  public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
    Participant participant = participants.get(position);

    holder.tvName.setText(participant.getDisplayName());

    // adding the initial video stream for the participant into the 'VideoView'
    for (Map.Entry&lt;String, Stream&gt; entry : participant.getStreams().entrySet()) {
      Stream stream = entry.getValue();
      if (stream.getKind().equalsIgnoreCase("video")) {
        holder.participantView.setVisibility(View.VISIBLE);
        VideoTrack videoTrack = (VideoTrack) stream.getTrack();
        holder.participantView.addTrack(videoTrack);
        break;
      }
    }
    // add Listener to the participant which will update start or stop the video stream of that participant
    participant.addEventListener(new ParticipantEventListener() {
      @Override
      public void onStreamEnabled(Stream stream) {
        if (stream.getKind().equalsIgnoreCase("video")) {
          holder.participantView.setVisibility(View.VISIBLE);
          VideoTrack videoTrack = (VideoTrack) stream.getTrack();
          holder.participantView.addTrack(videoTrack);
        }
      }

      @Override
      public void onStreamDisabled(Stream stream) {
        if (stream.getKind().equalsIgnoreCase("video")) {
          holder.participantView.removeTrack();
          holder.participantView.setVisibility(View.GONE);
        }
      }
    });
  }
}</code></pre><p><strong>(e)</strong> Now, add this adapter to the <code>MeetingActivity</code></p><pre><code class="language-Java">@Override
protected void onCreate(Bundle savedInstanceState) {
  //Meeting Setup...
  //...
  final RecyclerView rvParticipants = findViewById(R.id.rvParticipants);
  rvParticipants.setLayoutManager(new GridLayoutManager(this, 2));
  rvParticipants.setAdapter(new ParticipantAdapter(meeting));
}</code></pre><h2 id="screen-share-feature-integration">Screen Share Feature Integration</h2><p>The screen share feature enhances the collaborative experience in video conferences by allowing participants to share their screens with others. Integrating screen share functionality into your video app using VideoSDK is straightforward and can significantly enhance the usability and effectiveness of your application.</p><p>Let's walk through the steps to enable screen-sharing functionality using VideoSDK.</p><h3 id="how-does-screen-share-work%E2%80%8B">How does Screen Share work?<a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#how-screen-share-works">​</a></h3><p>The following diagram shows the flow of screen sharing in Android using VideoSDK :</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/flow_diagram-babfdcf534c77239e38b3a11d005def6.png" class="kg-image" alt="How to Integrate Screen Share in Android(Java) Video Chat App?" loading="lazy" width="674" height="826"/></figure><h3 id="enable-screen-sharing">Enable Screen Sharing</h3><p>To initiate screen sharing, utilize the <code>enableScreenShare()</code> function within the Meeting class. This enables the local participants to share their mobile screens with other participants seamlessly.</p><ul><li>You can pass customized screen share track in <code>enableScreenShare()</code> by using <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/render-media/optimize-video-track#custom-screen-share-track">Custom Screen Share Track</a>.</li><li>Screen Share stream of the participant can be accessed from the <code>onStreamEnabled</code> event of <code>ParticipantEventListener</code>.</li></ul><!--kg-card-begin: markdown--><h4 id="screenshare-permission%E2%80%8B">Screenshare permission​</h4>
<!--kg-card-end: markdown--><ul><li>Before commencing screen sharing, it's crucial to address screen share permissions. The participant's screen share stream is facilitated through the <code>MediaProjection</code> API, compatible only with <code>Build.VERSION_CODES.LOLLIPOP</code> or higher.</li><li>To attain permission for screen sharing, acquire an instance of the <code>MediaProjectionManager</code> and invoke the <code>createScreenCaptureIntent()</code> method within an activity. This prompts a dialog for the user to authorize screen projection. </li></ul><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/user_permission-d3adf623ea99e011b2b069031f2570be.jpg" class="kg-image" alt="How to Integrate Screen Share in Android(Java) Video Chat App?" loading="lazy"/></figure><ul><li>Following the permission grant, proceed to call the <code>enableScreenShare()</code> method.</li></ul><pre><code class="language-java">private void enableScreenShare() {
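    // NOTE: CAPTURE_PERMISSION_REQUEST_CODE used below is assumed to be an
    // int constant defined on this activity, e.g.:
    //   private static final int CAPTURE_PERMISSION_REQUEST_CODE = 1;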
    MediaProjectionManager mediaProjectionManager =
        (MediaProjectionManager) getApplication().getSystemService(
            Context.MEDIA_PROJECTION_SERVICE);
    startActivityForResult(
        mediaProjectionManager.createScreenCaptureIntent(), CAPTURE_PERMISSION_REQUEST_CODE);
}

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode != CAPTURE_PERMISSION_REQUEST_CODE)
        return;
    if (resultCode == Activity.RESULT_OK) {
        // Enabling screen share
        meeting.enableScreenShare(data);
    }
}
</code></pre><!--kg-card-begin: markdown--><h4 id="customize-notification">Customize notification</h4>
<!--kg-card-end: markdown--><ul><li>Upon initiating screen sharing, the presenter will receive a notification with a predefined title and message, which will look like this:</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/03/notification-screenshare-android.jpg" class="kg-image" alt="How to Integrate Screen Share in Android(Java) Video Chat App?" loading="lazy" width="600" height="531"/></figure><ul><li>You can customize the notification's title, message, and icon as per your requirements using the <code>&lt;meta-data&gt;</code> entries specified in <code>app/src/main/AndroidManifest.xml</code>.</li></ul><pre><code class="language-xml">&lt;application&gt;
  &lt;meta-data
    android:name="notificationTitle"
    android:value="@string/notificationTitle"
  /&gt;
  &lt;meta-data
    android:name="notificationContent"
    android:value="@string/notificationContent"
  /&gt;
  &lt;meta-data
    android:name="notificationIcon"
    android:resource="@mipmap/ic_launcher_round"
  /&gt;
&lt;/application&gt;</code></pre><h3 id="disable-screen-sharing">Disable Screen Sharing</h3><p>To stop screen sharing, employ the <code>disableScreenShare()</code> function from the <code>Meeting</code> class. This lets the local participant stop sharing their mobile screen with the other participants.</p><pre><code class="language-java">private void disableScreenShare(){
    // Disabling screen share
    meeting.disableScreenShare();
}</code></pre><h3 id="events-associated-with-screen-sharing"><strong>Events Associated with Screen Sharing</strong></h3><!--kg-card-begin: markdown--><h4 id="events-associated-with-enablescreenshare%E2%80%8B">Events associated with <code>enableScreenShare</code>​</h4>
<!--kg-card-end: markdown--><ul><li>The participant who shares their mobile screen will receive a callback on <a href="https://docs.videosdk.live/android/api/sdk-reference/participant-class/participant-event-listener-class#onstreamenabled"><code>onStreamEnabled()</code></a> of the <a href="https://docs.videosdk.live/android/api/sdk-reference/participant-class/introduction"><code>Participant</code></a> with a <code>Stream</code> object.</li><li>The other participants will receive the <a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/meeting-event-listener-class#onpresenterchanged"><code>onPresenterChanged()</code></a> callback of the <a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/introduction"><code>Meeting</code></a> class with the participantId of the presenter as <code>presenterId</code>.</li></ul><!--kg-card-begin: markdown--><h4 id="events-associated-with-disablescreenshare%E2%80%8B">Events associated with <code>disableScreenShare</code>​</h4><!--kg-card-end: markdown--><ul><li>The participant who stopped sharing their mobile screen will receive a callback on <a href="https://docs.videosdk.live/android/api/sdk-reference/participant-class/participant-event-listener-class#onstreamdisabled"><code>onStreamDisabled()</code></a> of the <a href="https://docs.videosdk.live/android/api/sdk-reference/participant-class/introduction"><code>Participant</code></a> with a <code>Stream</code> object.</li><li>The other participants will receive the <a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/meeting-event-listener-class#onpresenterchanged"><code>onPresenterChanged()</code></a> callback of the <a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/introduction"><code>Meeting</code></a> class with <code>presenterId</code> as <code>null</code>, indicating there is no presenter.</li></ul><!--kg-card-begin: markdown-->
<!--kg-card-end: markdown--><pre><code class="language-java">private void setLocalListeners() {
    meeting.getLocalParticipant().addEventListener(new ParticipantEventListener() {
        //Callback for when the participant starts a stream
        @Override
        public void onStreamEnabled(Stream stream) {
            if (stream.getKind().equalsIgnoreCase("share")) {
                Log.d("VideoSDK", "Share Stream On: onStreamEnabled " + stream);
            }
        }

        //Callback for when the participant stops a stream
        @Override
        public void onStreamDisabled(Stream stream) {
            if (stream.getKind().equalsIgnoreCase("share")) {
                Log.d("VideoSDK", "Share Stream Off: onStreamDisabled " + stream);
            }
        }
    });
}

private final MeetingEventListener meetingEventListener = new MeetingEventListener() {
    //Callback for when the presenter changes
    @Override
    public void onPresenterChanged(String participantId) {
      if(participantId != null){
        Log.d("VideoSDK", participantId + " started screen share");
      } else {
        Log.d("VideoSDK", "Someone stopped screen share");
      }
    }
};</code></pre><p>That's it. Following these steps will let you add screen-share capability to your video app, increasing its adaptability and usefulness across a wide range of use cases. </p><p>For an in-depth exploration of the code snippets along with thorough explanations, check out the <a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example" rel="noopener noreferrer">GitHub repository</a>. There you'll find the complete set of code snippets, accompanied by detailed explanations of their functionality and implementation.</p><h2 id="conclusion"><strong>Conclusion</strong></h2><p>We have discussed the essential steps for integrating the screen share feature into your Android video app using VideoSDK. By following these steps, you can improve the collaborative experience of your video apps, allowing users to effortlessly exchange information during video conferences. </p><p>The screen sharing feature not only increases user engagement but also extends the number of use cases for video communication services, making them more adaptable and beneficial to users. Adding screen-sharing functionality to your video app opens up new avenues for collaboration and communication. </p><p><a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and get <strong>10,000 free minutes</strong> to take your video app to the next level!</p>]]></content:encoded></item><item><title><![CDATA[Integrate Pre-Call Check in React]]></title><description><![CDATA[Discover how to implement Precall Integration in React SDK. 
Our comprehensive guide helps you enhance your app's communication capabilities, ensuring smooth and efficient user interactions.]]></description><link>https://www.videosdk.live/blog/precall-integration-in-react</link><guid isPermaLink="false">6683fe8420fab018df10f5bb</guid><category><![CDATA[React]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 09 Jan 2025 09:45:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/Pre-Call-Check-React.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Pre-Call-Check-React.jpg" alt="Integrate Pre-Call Check in React"/><p>Picture this: before diving into the depths of a video call, imagine giving your setup a quick check-up, like a tech-savvy doctor ensuring all systems are a go. That's essentially what a precall experience does- it’s like your extensive debug session before the main code execution—a crucial step in ensuring your app's performance is top-notch.</p><h2 id="why-is-it-necessary%E2%80%8B"><strong>Why is it necessary?</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/precall#why-is-it-necessary">​</a></h2><p>Why invest time and effort into crafting a precall experience, you wonder? Well, picture this scenario: your users eagerly join a video call, only to encounter a myriad of technical difficulties—muted microphones, pixelated cameras, and laggy connections. 
Not exactly the smooth user experience you had in mind, right?</p><p>By integrating a robust precall process into your app, developers become the unsung heroes, preemptively addressing potential pitfalls and ensuring that users step into their video calls with confidence.</p><h2 id="step-by-step-guide-integrating-precall-feature%E2%80%8B"><strong>Step-by-Step Guide: Integrating Precall Feature</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/precall#step-by-step-guide-integrating-precall-feature">​</a></h2><h3 id="step-1-check-permissions%E2%80%8B"><strong>Step 1: Check Permissions</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/precall#step-1-check-permissions">​</a></h3><ul><li>Begin by ensuring that your application has the necessary permissions to access user devices such as cameras, microphones, and speakers.</li><li>Utilize the <code>checkPermissions()</code> method of the <code>useMediaDevice</code> hook to verify if permissions are granted.</li></ul><pre><code class="language-js">import { useMediaDevice } from "@videosdk.live/react-sdk";

const { checkPermissions } = useMediaDevice();

const checkMediaPermission = async () =&gt; {
  //These methods return a Promise that resolves to a Map&lt;string, boolean&gt; object.
  const checkAudioPermission = await checkPermissions("audio"); //For getting audio permission
  const checkVideoPermission = await checkPermissions("video"); //For getting video permission
  const checkAudioVideoPermission = await checkPermissions("audio_video"); //For getting both audio and video permissions
  // Output: Map object for both audio and video permission:
  /*
        Map(2)
        0 : {"audio" =&gt; true}
            key: "audio"
            value: true
        1 : {"video" =&gt; true}
            key: "video"
            value: true
    */
};</code></pre><ul><li>When microphone and camera permissions are blocked, rendering device lists is not possible:</li></ul><!--kg-card-begin: html--><video src="https://cdn.videosdk.live/website-resources/docs-resources/precall_no_permissions.mp4" preload="auto" autoplay="" controls="" loop="" style="width: 100%; height: 100%;"/><!--kg-card-end: html--><h3 id="step-2-request-permissions-if-necessary%E2%80%8B"><strong>Step 2: Request Permissions (if necessary)</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/precall#step-2-request-permissions-if-necessary">​</a></h3><ul><li>If permissions are not granted, use the <code>requestPermission()</code> method of the <code>useMediaDevice</code> hook to prompt users to grant access to their devices.</li></ul><blockquote>NOTE<br/>If permissions are blocked by the user, the browser's permission request dialog cannot be re-triggered programmatically. In such cases, consider guiding users to enable the permissions manually from their browser settings.</blockquote><pre><code class="language-js">const requestAudioVideoPermission = async () =&gt; {
  try {
    //These methods return a Promise that resolves to a Map&lt;string, boolean&gt; object.
    const requestAudioPermission = await requestPermission("audio"); //For Requesting Audio Permission
    const requestVideoPermission = await requestPermission("video"); //For Requesting Video Permission
    const requestAudioVideoPermission = await requestPermission("audio_video"); //For Requesting Audio and Video Permissions
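    // For illustration (an assumption, not part of the original guide): the
    // resolved Map can be inspected to see exactly what the user granted,
    // so you can update your UI accordingly.
    const audioGranted = requestAudioVideoPermission.get("audio");
    const videoGranted = requestAudioVideoPermission.get("video");
    console.log("granted:", { audio: audioGranted, video: videoGranted });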
  } catch (ex) {
    console.log("Error in requestPermission ", ex);
  }
};</code></pre><ul><li>Requesting permissions if not already granted:</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-19.png" class="kg-image" alt="Integrate Pre-Call Check in React" loading="lazy" width="1920" height="1019"/></figure><h3 id="step-3-render-device-lists%E2%80%8B"><strong>Step 3: Render Device Lists</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/precall#step-3-render-device-lists">​</a></h3><ul><li>Once you have the necessary permissions, fetch and render lists of available camera, microphone, and speaker devices using the <code>getCameras()</code>, <code>getMicrophones()</code>, and <code>getPlaybackDevices()</code> methods of the <code>useMediaDevice</code> hook, respectively.</li><li>Enable users to select their preferred devices from these lists.</li></ul><pre><code class="language-js">const getMediaDevices = async () =&gt; {
  try {
    //Method to get all available webcams.
    //It returns a Promise that is resolved with an array of CameraDeviceInfo objects describing the video input devices.
    let webcams = await getCameras();
    //Method to get all available Microphones.
    //It returns a Promise that is resolved with an array of MicrophoneDeviceInfo objects describing the audio input devices.
    let mics = await getMicrophones();
    //Method to get all available speakers.
    //It returns a Promise that is resolved with an array of PlaybackDeviceInfo objects describing the playback devices.
    let speakers = await getPlaybackDevices();
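    // For illustration (an assumption, not part of the original guide): each
    // entry exposes a deviceId and a human-readable label, which is what you
    // would typically render in a device-picker dropdown.
    const cameraOptions = webcams.map((d) =&gt; ({ id: d.deviceId, label: d.label }));
    console.log("available cameras:", cameraOptions);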
  } catch (err) {
    console.log("Error in getting audio or video devices", err);
  }
};</code></pre><ul><li>Displaying device lists once permissions are granted:</li></ul><!--kg-card-begin: html--><video src="https://cdn.videosdk.live/website-resources/docs-resources/precall_render_device_list.mp4" preload="auto" autoplay="" controls="" loop="" style="width: 100%; height: 100%;"/><!--kg-card-end: html--><h3 id="step-4-handle-device-changes%E2%80%8B"><strong>Step 4: Handle Device Changes</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/precall#step-4-handle-device-changes">​</a></h3><ul><li>Implement the <code>onDeviceChanged</code> callback of the <code>useMediaDevice</code> hook to dynamically re-render device lists whenever new devices are attached or removed from the system.</li><li>Ensure that users can seamlessly interact with newly connected devices without disruptions.</li></ul><pre><code class="language-js">const {
    ...
  } = useMediaDevice({ onDeviceChanged });

//Fetch camera, mic and speaker devices again using this function.
function onDeviceChanged(devices) {
    console.log("Device Changed", devices)
}</code></pre><ul><li>Dynamically updating device lists when new devices are connected or disconnected:</li></ul><!--kg-card-begin: html--><video src="https://cdn.videosdk.live/website-resources/docs-resources/precall_on_device_change.mp4" preload="auto" autoplay="" controls="" loop="" style="width: 100%; height: 100%;"/><!--kg-card-end: html--><h3 id="step-5-create-media-tracks%E2%80%8B"><strong>Step 5: Create Media Tracks</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/precall#step-5-create-media-tracks">​</a></h3><ul><li>Upon user selection of devices, create media tracks for the selected microphone and camera using the <code>createMicrophoneAudioTrack()</code> and <code>createCameraVideoTrack()</code> methods.</li><li>Ensure that these tracks originate from the user-selected devices for accurate testing.</li></ul><pre><code class="language-js">import {
  createCameraVideoTrack,
  createMicrophoneAudioTrack,
} from "@videosdk.live/react-sdk";

//For Getting Audio Tracks
const getMediaTracks = async () =&gt; {
  try {
    //Returns a MediaStream object, containing the Audio Stream from the selected Mic Device.
    const customAudioStream = await createMicrophoneAudioTrack({
      // Here, selectedMicId should be the microphone id of the device selected by the user.
      microphoneId: selectedMicId,
    });
    //To retrieve the audio tracks that will be played to the user from the stream.
    const audioTracks = customAudioStream?.getAudioTracks();
    const audioTrack = audioTracks?.length ? audioTracks[0] : null;
  } catch (error) {
    console.log("Error in getting Audio Track", error);
  }

  //For Getting Video Tracks
  try {
    //Returns a MediaStream object, containing the Video Stream from the selected Webcam Device.
    const customVideoStream = await createCameraVideoTrack({
      // Here, selectedWebcamId should be the webcam id of the device selected by the user.
      cameraId: selectedWebcamId,
      encoderConfig: encoderConfig ? encoderConfig : "h540p_w960p",
      optimizationMode: "motion",
      multiStream: false,
    });
    //To retrieve the video tracks that will be displayed to the user from the stream.
    const videoTracks = customVideoStream?.getVideoTracks();
    const videoTrack = videoTracks?.length ? videoTracks[0] : null;
  } catch (error) {
    console.log("Error in getting Video Track", error);
  }
};</code></pre><ul><li>Rendering Media Tracks when necessary permissions are available:</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-24.png" class="kg-image" alt="Integrate Pre-Call Check in React" loading="lazy" width="1920" height="879"/></figure><h3 id="step-6-testing-microphone%E2%80%8B"><strong>Step 6: Testing Microphone</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/precall#step-6-testing-microphone">​</a></h3><ul><li>Testing the microphone device provides valuable insights into microphone quality and ensures users can optimize their audio setup for clear communication.</li><li>To facilitate this functionality, incorporate a recording feature that enables users to capture audio for a specified duration. After recording, users can play back the audio to evaluate microphone performance accurately.</li><li>To implement this functionality, refer to the official <a href="https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder" rel="noopener noreferrer">MediaRecorder</a> guide for comprehensive instructions and best practices.</li></ul><!--kg-card-begin: html--><video src="https://cdn.videosdk.live/website-resources/docs-resources/precall_test_mic.mp4" preload="auto" autoplay="" controls="" loop="" style="width: 100%; height: 100%;"/><!--kg-card-end: html--><h3 id="step-7-testing-speakers%E2%80%8B"><strong>Step 7: Testing Speakers</strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/precall#step-7-testing-speakers">​</a></h3><ul><li>Testing the speaker device allows users to assess audio playback clarity and fidelity, enabling them to fine-tune settings for optimal sound quality in calls and meetings.</li><li>To facilitate effective speaker testing, integrate sound playback functionality into your application.</li><li>This functionality empowers users to play a 
predefined audio sample, providing a precise evaluation of their speaker output quality.</li></ul><h2 id="conclusion">Conclusion</h2><p>Incorporating a robust precall experience into your video calling application is essential for ensuring seamless and high-quality communication. By proactively addressing potential technical issues before the call begins, you can significantly enhance the user experience and prevent common pitfalls such as muted microphones, pixelated cameras, and laggy connections.</p><p>This guide has provided a comprehensive step-by-step approach to integrating the precall feature into your app. From checking and requesting permissions to rendering device lists and creating media tracks, each step is designed to ensure that users can confidently join their video calls with optimized settings. </p>]]></content:encoded></item><item><title><![CDATA[How to Implement Screen Share in Flutter Video Call App for iOS?]]></title><description><![CDATA[Integrate screen sharing into your Flutter video call app for iOS with VideoSDK. 
Empower users to collaborate and present seamlessly.]]></description><link>https://www.videosdk.live/blog/implement-screen-share-flutter-video-call-app-for-ios</link><guid isPermaLink="false">661dfe012a88c204ca9d3ed4</guid><category><![CDATA[Flutter]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 09 Jan 2025 09:20:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share-Flutter-iOS.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share-Flutter-iOS.png" alt="How to Implement Screen Share in Flutter Video Call App for iOS?"/><p>Implement screen-sharing functionality into your <a href="https://www.videosdk.live/blog/video-calling-in-flutter">Flutter video call app for iOS</a> to enhance user experience and collaboration. With screen sharing, users can seamlessly share their screen content during video calls, fostering effective communication and teamwork. Utilizing Flutter's versatile framework, integrate a user-friendly interface that allows users to initiate screen sharing with just a few taps. 
VideoSDK offers easy-to-use APIs and <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/concept-and-architecture">comprehensive documentation</a>, simplifying the integration process for developers.</p><p><strong>Benefits of Implementing Screen Share in Flutter iOS Video Call App:</strong></p><ol><li><strong>Enhanced Collaboration:</strong> Screen sharing facilitates real-time sharing of information, fostering better collaboration among users.</li><li><strong>Improved Communication:</strong> Visual demonstrations and presentations help convey complex ideas more effectively, leading to clearer communication.</li><li><strong>Increased Productivity:</strong> With the ability to share screens, teams can work together more efficiently, reducing misunderstandings and saving time.</li><li><strong>Interactive Learning:</strong> Educators and trainers can use screen sharing to demonstrate concepts, conduct workshops, and engage learners interactively.</li><li><strong>Client Presentations:</strong> Professionals can showcase designs, reports, or demos directly to clients, enhancing engagement and understanding.</li></ol><p><strong>Use cases of Screen Share in Flutter iOS Video Call App:</strong></p><ol><li><strong>Remote Work:</strong> Teams working remotely can collaborate effectively by sharing screens and&nbsp;<a href="https://visme.co/blog/presentation-slides/" rel="noreferrer">presentation slides</a>&nbsp;during meetings, brainstorming sessions, and code reviews.</li><li><strong>Education:</strong> Teachers can use screen sharing to present slideshows, demonstrate software, or conduct virtual experiments for remote students.</li><li><strong>Technical Support:</strong> IT professionals can troubleshoot software issues by remotely accessing and viewing users' screens to identify and resolve problems.</li></ol><p>In this article, we'll explore how to integrate screen sharing into your Flutter video calling app for iOS using VideoSDK. 
We'll cover everything from managing permissions to leveraging VideoSDK's tools and seamlessly integrating the feature into your app's interface.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of Screen Share for iOS, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/authentication-and-tokens#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites%E2%80%8B">Prerequisites<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#prerequisites">​</a></h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (if you do not have one, follow <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>The basic understanding of Flutter.</li><li><a href="https://pub.dev/packages/videosdk" rel="noopener noreferrer"><strong>Flutter Video SDK</strong></a></li><li>Have Flutter installed on your device.</li></ul><h2 
id="install-videosdk%E2%80%8B">Install VideoSDK<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#install-video-sdk">​</a></h2><p>Install the VideoSDK package using the following Flutter command. Make sure you are in your Flutter app directory before you run it.</p><pre><code class="language-bash">$ flutter pub add videosdk

# Run this command to add the http package, used to make the network call that creates a roomId
$ flutter pub add http</code></pre><h3 id="videosdk-compatibility">VideoSDK Compatibility</h3>
<!--kg-card-begin: html-->
<table style="border: 1px solid black;">
<thead>
<tr>
<th style="border:1px solid white;">Android and iOS app</th>
<th style="border:1px solid white;">Web</th>
<th style="border:1px solid white;">Desktop app</th>
<th style="border:1px solid white;">Safari browser</th>
</tr>
</thead>
<tbody>
<tr>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ❌ </center></td>
</tr>
</tbody>
</table>

<!--kg-card-end: html-->
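<p>For reference, after running the commands above your <code>pubspec.yaml</code> should list both packages under <code>dependencies</code>. The sketch below is illustrative; the version constraints are placeholders, so keep whatever versions <code>flutter pub add</code> resolved for you:</p><pre><code class="language-yaml">dependencies:
  flutter:
    sdk: flutter
  videosdk: ^x.y.z # placeholder version
  http: ^x.y.z # placeholder version</code></pre>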
<h3 id="structure-of-the-project%E2%80%8B">Structure of the project<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#structure-of-the-project">​</a></h3><p>Your project structure should look like this.</p><pre><code class="language-dart">    root
    ├── android
    ├── ios
    ├── lib
         ├── api_call.dart
         ├── join_screen.dart
         ├── main.dart
         ├── meeting_controls.dart
         ├── meeting_screen.dart
         ├── participant_tile.dart</code></pre><p>We are going to create flutter widgets (JoinScreen, MeetingScreen, MeetingControls, and ParticipantTile).</p><h3 id="app-structure%E2%80%8B">App Structure<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#app-structure">​</a></h3><p>The app widget will contain <code>JoinScreen</code> and <code>MeetingScreen</code> widget. <code>MeetingScreen</code> will have <code>MeetingControls</code> and <code>ParticipantTile</code> widget.</p><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/flutter_quick_start_arch.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" loading="lazy" width="1920" height="1080"/></figure><h3 id="configure-project-for-ios%E2%80%8B">Configure Project For iOS<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-ios">​</a></h3><ul><li>Add the following entries that allow your app to access the camera and microphone in your <code>/ios/Runner/Info.plist</code> file:</li></ul><pre><code class="language-dart">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><ul><li>Uncomment the following line to define a global platform for your project in <code>/ios/Podfile</code>:</li></ul><pre><code class="language-dart"># platform :ios, '12.0'</code></pre><h2 id="essential-steps-to-implement-video-calling-functionality">Essential Steps to Implement Video Calling Functionality</h2><p>Before diving into the specifics of screen-sharing implementation, it's crucial to ensure you have VideoSDK properly installed and configured within your Flutter project. Refer to VideoSDK's documentation for detailed installation instructions. Once you have a functional video calling setup, you can proceed with adding the screen-sharing feature.</p><h3 id="step-1-get-started-with-apicalldart%E2%80%8B">Step 1: Get started with <code>api_call.dart</code><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-api_calldart">​</a></h3><p>Before jumping to anything else, you will write a function to generate a unique meetingId. You will require an authentication token; you can generate it either by using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for development.</p><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'dart:convert';
import 'package:http/http.dart' as http;

//Auth token we will use to generate a meeting and connect to it
String token = "&lt;Generated-from-dashboard&gt;";

// API call to create meeting
Future&lt;String&gt; createMeeting() async {
  final http.Response httpResponse = await http.post(
    Uri.parse("https://api.videosdk.live/v2/rooms"),
    headers: {'Authorization': token},
  );
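
  // For illustration (an assumption, not part of the original guide): fail fast
  // on a non-200 response so json.decode below never runs on an error body.
  if (httpResponse.statusCode != 200) {
    throw Exception("createMeeting failed with status ${httpResponse.statusCode}");
  }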

//Destructuring the roomId from the response
  return json.decode(httpResponse.body)['roomId'];
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">api_call.dart</span></p></figcaption></figure><h3 id="step-2-creating-the-joinscreen%E2%80%8B">Step 2: Creating the JoinScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-2--creating-the-joinscreen">​</a></h3><p>Let's create <code>join_screen.dart</code> file in <code>lib</code> directory and create JoinScreen <code>StatelessWidget</code>.</p><p>The JoinScreen will consist of:</p><ul><li><strong>Create Meeting Button</strong>: This button will create a new meeting for you.</li><li><strong>Meeting ID TextField</strong>: This text field will contain the meeting ID, you want to join.</li><li><strong>Join Meeting Button</strong>: This button will join the meeting, which you have provided.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'api_call.dart';
import 'meeting_screen.dart';

class JoinScreen extends StatelessWidget {
  final _meetingIdController = TextEditingController();

  JoinScreen({super.key});

  void onCreateButtonPressed(BuildContext context) async {
    // call api to create meeting and then navigate to MeetingScreen with meetingId,token
    await createMeeting().then((meetingId) {
      if (!context.mounted) return;
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; MeetingScreen(
            meetingId: meetingId,
            token: token,
          ),
        ),
      );
    });
  }

  void onJoinButtonPressed(BuildContext context) {
    String meetingId = _meetingIdController.text;
    var re = RegExp("\\w{4}\\-\\w{4}\\-\\w{4}");
    // check that the meeting id is not empty and matches the expected format
    // if the meeting id is valid, navigate to MeetingScreen with meetingId and token
    if (meetingId.isNotEmpty &amp;&amp; re.hasMatch(meetingId)) {
      _meetingIdController.clear();
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; MeetingScreen(
            meetingId: meetingId,
            token: token,
          ),
        ),
      );
    } else {
      ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
        content: Text("Please enter valid meeting id"),
      ));
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('VideoSDK QuickStart'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(12.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            ElevatedButton(
              onPressed: () =&gt; onCreateButtonPressed(context),
              child: const Text('Create Meeting'),
            ),
            Container(
              margin: const EdgeInsets.fromLTRB(0, 8.0, 0, 8.0),
              child: TextField(
                decoration: const InputDecoration(
                  hintText: 'Meeting Id',
                  border: OutlineInputBorder(),
                ),
                controller: _meetingIdController,
              ),
            ),
            ElevatedButton(
              onPressed: () =&gt; onJoinButtonPressed(context),
              child: const Text('Join Meeting'),
            ),
          ],
        ),
      ),
    );
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">join_screen.dart</span></p></figcaption></figure><ul><li>Update the home screen of the app in the <code>main.dart</code></li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'join_screen.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'VideoSDK QuickStart',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: JoinScreen(),
    );
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">main.dart</span></p></figcaption></figure><h3 id="step-3-creating-the-meetingcontrols%E2%80%8B">Step 3: Creating the MeetingControls<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-3--creating-the-meetingcontrols">​</a></h3><p>Let's create <code>meeting_controls.dart</code> file and create MeetingControls <code>StatelessWidget</code>.</p><p>The MeetingControls will consist of:</p><ul><li><strong>Leave Button</strong>: This button will leave the meeting.</li><li><strong>Toggle Mic Button</strong>: This button will unmute or mute the mic.</li><li><strong>Toggle Camera Button</strong>: This button will enable or disable the camera.</li></ul><p>MeetingControls will accept 3 functions in the constructor</p><ul><li><strong><code>onLeaveButtonPressed</code></strong>: invoked when the Leave button pressed</li><li><strong><code>onToggleMicButtonPressed</code></strong>: invoked when the toggle mic button pressed</li><li><strong><code>onToggleCameraButtonPressed</code></strong>: invoked when the toggle Camera button pressed</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';

class MeetingControls extends StatelessWidget {
  final void Function() onToggleMicButtonPressed;
  final void Function() onToggleCameraButtonPressed;
  final void Function() onLeaveButtonPressed;

  const MeetingControls(
      {super.key,
      required this.onToggleMicButtonPressed,
      required this.onToggleCameraButtonPressed,
      required this.onLeaveButtonPressed});

  @override
  Widget build(BuildContext context) {
    return Row(
      mainAxisAlignment: MainAxisAlignment.spaceEvenly,
      children: [
        ElevatedButton(
            onPressed: onLeaveButtonPressed, child: const Text('Leave')),
        ElevatedButton(
            onPressed: onToggleMicButtonPressed, child: const Text('Toggle Mic')),
        ElevatedButton(
            onPressed: onToggleCameraButtonPressed,
            child: const Text('Toggle WebCam')),
      ],
    );
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">meeting_controls.dart</span></p></figcaption></figure><h3 id="step-4-creating-participanttile%E2%80%8B">Step 4: Creating ParticipantTile<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-4--creating-participanttile">​</a></h3><p>Let's create the <code>participant_tile.dart</code> file and create the ParticipantTile <code>StatefulWidget</code>.</p><p>The ParticipantTile will consist of:</p><ul><li><strong>RTCVideoView</strong>: This will show the participant's video stream.</li></ul><p>ParticipantTile will accept a <code>Participant</code> in its constructor:</p><ul><li><strong>participant:</strong> the participant of the meeting.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ParticipantTile extends StatefulWidget {
  final Participant participant;
  const ParticipantTile({super.key, required this.participant});

  @override
  State&lt;ParticipantTile&gt; createState() =&gt; _ParticipantTileState();
}

class _ParticipantTileState extends State&lt;ParticipantTile&gt; {
  Stream? videoStream;

  @override
  void initState() {
    // Pick up the participant's video stream if it is already available
    widget.participant.streams.forEach((key, Stream stream) {
      setState(() {
        if (stream.kind == 'video') {
          videoStream = stream;
        }
      });
    });
    _initStreamListeners();
    super.initState();
  }

  _initStreamListeners() {
    widget.participant.on(Events.streamEnabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = stream);
      }
    });

    widget.participant.on(Events.streamDisabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = null);
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(8.0),
      child: videoStream != null
          ? RTCVideoView(
              videoStream?.renderer as RTCVideoRenderer,
              objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
            )
          : Container(
              color: Colors.grey.shade800,
              child: const Center(
                child: Icon(
                  Icons.person,
                  size: 100,
                ),
              ),
            ),
    );
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">participant_tile.dart</span></p></figcaption></figure><h3 id="step-5-creating-the-meetingscreen%E2%80%8B">Step 5: Creating the MeetingScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-5--creating-the-meetingscreen">​</a></h3><p>Let's create the <code>meeting_screen.dart</code> file and create the MeetingScreen <code>StatefulWidget</code>.</p><p>MeetingScreen will accept the meetingId and token in its constructor:</p><ul><li><strong>meetingId:</strong> the ID of the meeting you want to join</li><li><strong>token</strong>: your VideoSDK Auth token</li></ul><p>Your app should look like this after the implementation.</p><p>If you get a <code>webrtc/webrtc.h</code> file not found error at runtime on iOS, check the solution <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/known-issues#issue--1" rel="noopener noreferrer">here</a>.</p><p><strong>TIP:</strong> You can check out the complete <a href="https://github.com/videosdk-live/quickstart/tree/main/flutter-rtc" rel="noopener noreferrer">quick start example here</a>.</p><h2 id="integrate-screen-share-feature">Integrate Screen Share Feature</h2><p>Screen sharing in a meeting is the process of sharing your device screen with other participants in the meeting.
It allows everyone in the meeting to see exactly what you are seeing on your screen, which can be helpful for presentations, demonstrations, or collaborations.</p><h3 id="why-integrate-screen-sharing-with-videosdk">Why Integrate Screen Sharing with VideoSDK?</h3><p>Integrating screen sharing into your Flutter video calling app using VideoSDK offers several compelling advantages:</p><ol><li><strong>Simplified Development</strong>: With VideoSDK, the intricate tasks of screen capture and transmission are effortlessly managed, saving you valuable development time and resources.</li><li><strong>Native Functionality</strong>: Harnessing the power of Apple's ReplayKit framework, VideoSDK ensures a seamless and efficient screen-sharing experience for your users, providing them with a native feel and optimal performance.</li><li><strong>Seamless Integration</strong>: The integration process with VideoSDK is intuitive and hassle-free, enabling you to incorporate screen-sharing functionality into your existing video call features without extensive modifications.</li><li><strong>Robust Features</strong>: VideoSDK offers a robust screen-sharing solution equipped with essential features such as permission handling and in-app sharing capabilities, guaranteeing a reliable and user-friendly experience.</li></ol><p>By embracing screen sharing through VideoSDK, you elevate the functionality and appeal of your Flutter video-calling app, providing users with an enriched communication experience.</p><blockquote><strong>NOTE:</strong><br>Flutter iOS screen share is available from SDK version 1.0.5 and above, and the deployment target for the iOS app should be <strong>iOS 14 or newer</strong>.</blockquote><p>iOS requires you to add a Broadcast Upload Extension to capture the screen of the device.</p><h3 id="step-1-open-target%E2%80%8B">Step 1: Open Target<a
href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#step-1--open-target">​</a></h3><p>Open your project with Xcode and select <strong>File &gt; New &gt; Target</strong> in the menu bar.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step1-xcode-8cbf05258253368fa8095f10aa06459d.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" loading="lazy" width="801" height="580"/></figure><h3 id="step-2-select-target%E2%80%8B">Step 2: Select Target<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#step-2--select-target">​</a></h3><p>Choose <strong>Broadcast Upload Extension</strong> and click Next.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step2-xcode-5270422797cc8941922cefadffb5238d.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" loading="lazy" width="769" height="555"/></figure><h3 id="step-3-configure-broadcast-upload-extension%E2%80%8B">Step 3: Configure Broadcast Upload Extension<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#step-3--configure-broadcast-upload-extension">​</a></h3><p>Enter the extension name in the <strong>Product Name</strong> field, choose a team from the dropdown, uncheck the <strong>Include UI Extension</strong> field, and click Finish.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step3-xcode-458facd46d26cfa090fd8724b6ba831b.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?"
loading="lazy" width="789" height="567"/></figure><h3 id="step-4-activate-extension-scheme%E2%80%8B">Step 4: Activate Extension scheme<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#step-4--activate-extension-scheme">​</a></h3><p>You will be prompted with an <strong>Activate the "Your-Extension-name" scheme?</strong> pop-up; click <strong>Activate</strong>.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step4-xcode-6de6fa7fc079f920f52d8315b2717e9d.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" loading="lazy" width="510" height="490"/></figure><p>Now, the broadcast folder will appear in the Xcode left sidebar.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step5-xcode-79ba89532c028fb25376bf6d4953f0b2.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" loading="lazy" width="474" height="652"/></figure><h3 id="step-5-add-external-file-in-created-extension%E2%80%8B">Step 5: Add External Files to the Created Extension<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#step-5--add-external-file-in-created-extension">​</a></h3><p>Open <a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example/tree/main/ios/FlutterBroadcast" rel="noopener noreferrer">videosdk-rtc-flutter-sdk-example</a>, copy the <code>SampleUploader.swift</code>, <code>SocketConnection.swift</code>, <code>DarwinNotificationCenter.swift</code>, and <code>Atomic.swift</code> files into your extension's folder, and make sure they're added to the target.</p><h3 id="step-6-update-samplehandlerswift-file%E2%80%8B">Step 6: Update the <code>SampleHandler.swift</code> file<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#step-6--update-samplehandlerswift-file">​</a></h3><p>Open
<a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example/blob/main/ios/FlutterBroadcast/SampleHandler.swift" rel="noopener noreferrer">SampleHandler.swift</a>, copy the <code>SampleHandler.swift</code> file content, and paste it into your extension's <code>SampleHandler.swift</code> file.</p><h3 id="step-7-add-capability-in-the-app%E2%80%8B">Step 7: Add Capability in the App<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#step-7--add-capability-in-app">​</a></h3><p>In Xcode, go to <strong>YourAppName &gt; Signing &amp; Capabilities</strong> and click <strong>+Capability</strong> to configure the app group.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step8-xcode-60177a1783e9203570a06479c30492b8.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" loading="lazy" width="930" height="529"/></figure><p>Select <strong>App Groups</strong> from the list:</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step9-xcode-13aa5448218f78f473f060c6854874a2.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" loading="lazy" width="803" height="579"/></figure><p>After that, select or add the app group ID that you created earlier.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step10-xcode-80bd6f05c87acae9a9fbaa4a89f414f9.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?"
loading="lazy" width="1176" height="735"/></figure><h3 id="step-8-add-capability-in-extension%E2%80%8B">Step 8: Add Capability in Extension<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#step-8--add-capability-in-extension">​</a></h3><p>Go to <strong>Your-Extension-Name &gt; Signing &amp; Capabilities</strong> and configure the <strong>App Group</strong> capability, as performed in the previous steps. (The group ID should be the same for both targets.)</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step11-xcode-021329a0abd110f74fb1eba7d668fc79.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" loading="lazy" width="803" height="445"/></figure><h3 id="step-9-add-app-group-id-to-extension-file%E2%80%8B">Step 9: Add App Group ID to Extension File<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#step-9--add-app-group-id-in-extension-file">​</a></h3><p>Go to the extension's <code>SampleHandler.swift</code> file and paste your App Group ID into the <code>appGroupIdentifier</code> constant.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step12-xcode-6f145ca7606a486596da66ebf5c31e36.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?"
loading="lazy" width="951" height="562"/></figure><h3 id="step-10-update-app-level-infoplist-file%E2%80%8B">Step 10: Update App level info.plist file<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#step-10--update-app-level-infoplist-file">​</a></h3><ol><li>Add a new key, <strong>RTCScreenSharingExtension</strong> in <strong>Info.plist</strong> with the extension's Bundle Identifier as the value.</li><li>Add a new key, <strong>RTCAppGroupIdentifier</strong> in <strong>Info.plist</strong> with the extension's app group ID as the value.</li></ol><blockquote><strong>Note</strong> : For extension's Bundle Identifier, go to <strong>TARGETS &gt; Your-Extension-Name &gt; Signing &amp; Capabilities</strong> .</blockquote><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step13-xcode-00cb59e564facb3c7f365235065f4561.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" 
loading="lazy" width="933" height="550"/></figure><blockquote>NOTE: You can also check out the extension <a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example/tree/main/ios/FlutterBroadcast" rel="noopener noreferrer">example code</a> on GitHub.</blockquote><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example/tree/main/ios/FlutterBroadcast"><div class="kg-bookmark-content"><div class="kg-bookmark-title">videosdk-rtc-flutter-sdk-example/ios/FlutterBroadcast at main · videosdk-live/videosdk-rtc-flutter-sdk-example</div><div class="kg-bookmark-description">WebRTC based video conferencing SDK for Flutter (Android / iOS) - videosdk-live/videosdk-rtc-flutter-sdk-example</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Implement Screen Share in Flutter Video Call App for iOS?"/><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/de9f710aa0874a5467b708c1c752532d36ee6c1d7ea72f118df3eb47b3260ba6/videosdk-live/videosdk-rtc-flutter-sdk-example" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" onerror="this.style.display = 'none'"/></div></a></figure><h3 id="enable-screen-share%E2%80%8B">Enable Screen Share<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#enable-screen-share">​</a></h3><ul><li>To start screen sharing, just call the <code>room.enableScreenShare()</code> method.</li></ul><pre><code class="language-dart">ElevatedButton(
    child: Text("Start ScreenSharing"),
    onPressed: () {
        room.enableScreenShare();
    },
),</code></pre><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/step23-xcode-e815d2c96e8061551e941055e32578b3.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for iOS?" loading="lazy" width="307" height="504"/></figure><p>After tapping the <strong>Start Broadcast</strong> button, the screen stream becomes available in the session.</p><h3 id="disable-screen-share%E2%80%8B">Disable Screen Share<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share#disable-screen-share">​</a></h3><p>Using the <code>room.disableScreenShare()</code> method, a participant can stop publishing their screen stream to other participants.</p><pre><code class="language-dart">ElevatedButton(
    child: Text("Stop ScreenSharing"),
    onPressed: () {
        room.disableScreenShare();
    },
),</code></pre><h3 id="events-associated-with-screen-share%E2%80%8B">Events associated with Screen Share<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#events-associated-with-enablescreenshare">​</a></h3><h4 id="events-associated-with-enable-screenshare%E2%80%8B">Events associated with Enable ScreenShare​</h4>
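<p>The callbacks described in this section can be observed with a few listeners. The following is a minimal sketch that assumes an already-joined <code>room</code> object (the same pattern is used in the full example later in this article):</p><pre><code class="language-dart">// Fires on the Room object whenever the active presenter changes;
// presenterId is null when nobody is presenting.
room.on(Events.presenterChanged, (String? presenterId) {
  print("presenterId: $presenterId");
});

// Fires on a Participant object when one of its streams is enabled;
// a screen-share stream has kind == "share".
room.localParticipant.on(Events.streamEnabled, (Stream stream) {
  if (stream.kind == "share") {
    print("local participant started presenting");
  }
});</code></pre>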
<ul><li>Every Participant will receive a callback on <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/events#streamenabled"><code>Events.streamEnabled</code></a> of the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/introduction"><code>Participant</code></a> object with the <code>Stream</code> object.</li><li>Every Remote Participant will receive the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/events#presenterchanged"><code>Events.presenterChanged</code></a> callback on the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/introduction"><code>Room</code></a> object with <code>presenterId</code> set to the participant ID of the presenter who started the screen share.</li></ul><h4 id="events-associated-with-disable-screenshare%E2%80%8B">Events associated with Disable ScreenShare​</h4>
<ul><li>Every Participant will receive a callback on <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/events#streamdisabled"><code>Events.streamDisabled</code></a> of the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/introduction"><code>Participant</code></a> object with the <code>Stream</code> object.</li><li>Every Remote Participant will receive the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/events#presenterchanged"><code>Events.presenterChanged</code></a> callback on the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/introduction"><code>Room</code></a> object with <code>presenterId</code> as <code>null</code>.</li></ul><h3 id="example%E2%80%8B">Example<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#rendering-screen-share-stream">​</a></h3><p>To render the screen share, you will need to know the <code>participantId</code> of the participant who is presenting, which can be found in the <code>presenterId</code> property of the <code>Room</code> object.</p><p>We will listen for <code>Events.presenterChanged</code> on the <code>Room</code> object to detect when another participant starts screen sharing, and for <code>Events.streamEnabled</code> and <code>Events.streamDisabled</code> on the <code>localParticipant</code> object to identify whether the local participant is presenting.</p><pre><code class="language-dart">import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import './participant_tile.dart';
import './screen_share_view.dart';

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;

  const MeetingScreen(
      {super.key, required this.meetingId, required this.token});

  @override
  State&lt;MeetingScreen&gt; createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;
  var micEnabled = true;
  var camEnabled = true;
  String? _presenterId;

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    // create room
    _room = VideoSDK.createRoom(
      roomId: widget.meetingId,
      token: widget.token,
      displayName: "John Doe",
      micEnabled: micEnabled,
      camEnabled: camEnabled
    );

    setMeetingEventListener();

    // Join room
    _room.join();

    super.initState();
  }

  // listening to meeting events
  void setMeetingEventListener() {
   // ...

   //Listening if remote participant starts presenting
    _room.on(Events.presenterChanged, (String? presenterId) {
      setState(() =&gt; _presenterId = presenterId);
    });

    //Listening if local participant starts presenting
    _room.localParticipant.on(Events.streamEnabled, (Stream stream) {
      if (stream.kind == "share") {
        setState(() =&gt; _presenterId = _room.localParticipant.id);
      }
    });

    _room.localParticipant.on(Events.streamDisabled, (Stream stream) {
      if (stream.kind == "share") {
        setState(() =&gt; _presenterId = null);
      }
    });
  }

  // When the back button is pressed, leave the room
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }

  // Builds the meeting screen UI.
  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              Text(widget.meetingId),
              // Render all participants
              // ...
              
              // Render the screen share view if the presenterId is not null
              if (_presenterId != null)
                ScreenShareView(
                  participant: _presenterId == _room.localParticipant.id
                      ? _room.localParticipant
                      : _room.participants[_presenterId],
                ),
                // MeetingControls
                // ....
              ElevatedButton(
                  onPressed: () {
                    if (_presenterId != null) {
                      _room.disableScreenShare();
                    } else {
                      _room.enableScreenShare();
                    }
                  },
                  child: const Text('Toggle Screenshare')),
            ],
          ),
        ),
      ),
    );
  }
}</code></pre><p>Now that we know if there is an active presenter, let us get the screen share stream from the <code>Participant</code> object and render it.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ScreenShareView extends StatelessWidget {
  final Participant? participant;

  ScreenShareView({super.key, required this.participant}) {
  // Initialize shareStream from the participant's existing streams
    participant?.streams.forEach((key, value) {
      if (value.kind == "share") {
        shareStream = value;
      }
    });
  }
  Stream? shareStream;

  @override
  Widget build(BuildContext context) {
    return Container(
      color: Colors.grey.shade800,
      height: 300,
      width: 200,
      //show the screen share stream
      child: shareStream != null
          ? RTCVideoView(
              shareStream?.renderer as RTCVideoRenderer,
              objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
            )
          : null,
    );
  }
}</code></pre><h2 id="conclusion">Conclusion</h2><p>By integrating screen sharing into your Flutter video calling app using VideoSDK, you provide users with a powerful tool to enhance communication and collaboration. This feature proves beneficial across various scenarios, from business presentations to technical demonstrations, making your app a versatile and invaluable platform. </p><p>With <a href="https://www.videosdk.live/">VideoSDK's </a>seamless integration capabilities, incorporating screen sharing becomes not only feasible but also streamlined. This allows you to focus more on crafting a remarkable user interface and experience. Whether users need to share slides, collaborate on projects, or provide remote assistance, your app becomes their go-to solution. Empower users with the ability to connect, communicate, and collaborate effectively, all within a single, user-friendly platform.</p><p>Unlock the full potential of VideoSDK and create seamless video experiences today! <a href="https://app.videosdk.live/dashboard"><strong>Sign up now</strong></a> and get 10,000 free minutes to elevate your video app to the next level.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Chat Feature in React Native Video Call App?]]></title><description><![CDATA[This article explores the integration of a chat feature into a React Native video call app using VideoSDK, aiming to create more immersive and engaging user experiences.]]></description><link>https://www.videosdk.live/blog/integrate-chat-feature-in-react-native-video-call-app</link><guid isPermaLink="false">660f88a72a88c204ca9cfd4e</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[React Native]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 09 Jan 2025 05:55:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-Feature-React-Native-1.png" medium="image"/><content:encoded><![CDATA[<h2 
id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-Feature-React-Native-1.png" alt="How to Integrate Chat Feature in React Native Video Call App?"/><p>Participants in a multi-participant video call may find it difficult to engage in meaningful conversations without interrupting each other. Enhancing applications with real-time messaging capabilities, which let participants communicate during video calls, is also challenging. In this environment, the goal is to create more immersive and engaging user experiences, with features such as chat emerging as essential components.</p><p>Chat enables users to share messages, transfer files, and exchange data without disrupting the audio or video feeds. This is especially handy when participants want to ask questions, share feedback, or pass along relevant information during a video call. This article will walk you through the process of integrating a chat feature into a React Native video call app with VideoSDK.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><h3 id="goals">Goals</h3><p>By the end of this article, we'll:</p><ol><li>Create a <a href="https://app.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK.</li><li>Enable the Chat feature in your app.</li></ol><p>To take advantage of the chat functionality, we must use the capabilities that VideoSDK offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account.
This account gives you access to the required VideoSDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li><li>Android Studio or Xcode installed</li></ul><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk-config">⬇️ Install VideoSDK Config.</h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the Chat feature. Install VideoSDK using either NPM or Yarn, depending on your project's setup.</p><ul><li>For NPM </li></ul><pre><code class="language-js">npm install "@videosdk.live/react-native-sdk"  "@videosdk.live/react-native-incallmanager"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">yarn add "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"</code></pre><h3 id="project-configuration">Project Configuration</h3><p>Before integrating the Chat functionality, ensure that your project is correctly prepared to handle the integration. This setup consists of a sequence of steps for configuring permissions, dependencies, and platform-specific parameters so that VideoSDK can function seamlessly inside your application context.</p><h4 id="android-setup">Android Setup</h4>
<ul><li>Add the required permissions in the <code>AndroidManifest.xml</code> file.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
&gt;
    &lt;!-- Give all the required permissions to app --&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) --&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" /&gt;

    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)--&gt;
    &lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;

    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
    &lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;

    &lt;application&gt;
   &lt;meta-data
      android:name="live.videosdk.rnfgservice.notification_channel_name"
      android:value="Meeting Notification"
     /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_channel_description"
    android:value="Whenever meeting started notification will appear."
    /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_color"
    android:resource="@color/red"
    /&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"&gt;&lt;/service&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"&gt;&lt;/service&gt;
  &lt;/application&gt;
&lt;/manifest&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">AndroidManifest.xml</span></p></figcaption></figure><ul><li>Update your <code>colors.xml</code> file for internal dependencies.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;resources&gt;
  &lt;item name="red" type="color"&gt;
    #FC0303
  &lt;/item&gt;
  &lt;integer-array name="androidcolors"&gt;
    &lt;item&gt;@color/red&lt;/item&gt;
  &lt;/integer-array&gt;
&lt;/resources&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/src/main/res/values/colors.xml</span></p></figcaption></figure><ul><li>Link the necessary VideoSDK Dependencies.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">  dependencies {
   implementation project(':rnwebrtc')
   implementation project(':rnfgservice')
  }</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/build.gradle</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/settings.gradle</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">import live.videosdk.rnwebrtc.WebRTCModulePackage;
import live.videosdk.rnfgservice.ForegroundServicePackage;

public class MainApplication extends Application implements ReactApplication {
  protected List&lt;ReactPackage&gt; getPackages() {
      @SuppressWarnings("UnnecessaryLocalVariable")
      List&lt;ReactPackage&gt; packages = new PackageList(this).getPackages();
      // Packages that cannot be autolinked yet can be added manually here, for example:

      packages.add(new ForegroundServicePackage());
      packages.add(new WebRTCModulePackage());

      return packages;
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MainApplication.java</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">/* This one fixes a weird WebRTC runtime problem on some devices. */
android.enableDexingArtifactTransform.desugaring=false</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/gradle.properties</span></p></figcaption></figure><ul><li>Include the following line in your <code>proguard-rules.pro</code> file (optional: if you are using Proguard)</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">-keep class org.webrtc.** { *; }</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/proguard-rules.pro</span></p></figcaption></figure><ul><li>In your <code>build.gradle</code> file, update the minimum OS/SDK version to <code>23</code>.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">buildscript {
  ext {
      minSdkVersion = 23
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">build.gradle</span></p></figcaption></figure><h4 id="ios-setup%E2%80%8B">iOS Setup​</h4>
<blockquote>IMPORTANT: Ensure that you are using CocoaPods version 1.10 or later.</blockquote><ol><li>To update CocoaPods, you can reinstall the <code>gem</code> using the following command:</li></ol><pre><code class="language-swift">$ sudo gem install cocoapods</code></pre><p><strong>2.</strong> Manually link react-native-incall-manager (if it is not linked automatically).</p><p>Select <code>Your_Xcode_Project/TARGETS/BuildSettings</code>, and in Header Search Paths, add <code>"$(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager"</code>.</p><p><strong>3.</strong> Change the path of <code>react-native-webrtc</code> using the following command:</p><pre><code class="language-swift">pod 'react-native-webrtc', :path =&gt; '../node_modules/@videosdk.live/react-native-webrtc'</code></pre><p><strong>4. </strong>Change the version of your platform.</p><p>You need to change the platform field in the Podfile to 12.0 or above, because <strong>react-native-webrtc</strong> doesn't support iOS versions earlier than 12.0. Update the line: <code>platform :ios, '12.0'</code>.</p><p><strong>5. </strong>Install pods.</p><p>After updating the version, install the pods by running the following command:</p><pre><code class="language-swift">pod install</code></pre><p><strong>6. </strong>Add the "<strong>libreact-native-webrtc.a</strong>" binary.</p><p>Add the "<strong>libreact-native-webrtc.a</strong>" binary to the "Link Binary With Libraries" section in the target of your main project folder.</p><p><strong>7. </strong>Declare permissions in <strong>Info.plist</strong>:</p><p>Add the following lines to your <strong>Info.plist</strong> file located at (project folder/ios/projectname/Info.plist):</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">ios/projectname/info.plist</span></p></figcaption></figure><h4 id="register-service">Register Service</h4>
<p>Register the VideoSDK services in your root <code>index.js</code> file so they are initialized before the app starts.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { register } from "@videosdk.live/react-native-sdk";

register();

AppRegistry.registerComponent(appName, () =&gt; App);</code></pre><figcaption><p><span style="white-space: pre-wrap;">index.js</span></p></figcaption></figure><h2 id="essential-steps-for-implement-the-video-calling-functionality">Essential Steps to Implement the Video Calling Functionality</h2><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with api.js<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">export const token = "&lt;Generated-from-dashboard&gt;";
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });

  const { roomId } = await res.json();
  return roomId;
};</code></pre><figcaption><p><span style="white-space: pre-wrap;">api.js</span></p></figcaption></figure><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join, leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. 
It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>, <strong>webcamStream</strong>, <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import React, { useState } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) =&gt; {
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}&gt;
      &lt;MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      &gt;
        &lt;MeetingView /&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  ) : (
    &lt;JoinScreen getMeetingId={getMeetingId} /&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">App.js</span></p></figcaption></figure><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-3--implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function JoinScreen(props) {
  const [meetingVal, setMeetingVal] = useState("");
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Create Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      /&gt;
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() =&gt; {
          props.getMeetingId(meetingVal);
        }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Join Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/SafeAreaView&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">JoinScreen Component</span></p></figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>The next step is to create a <code>ControlsContainer</code> component to manage features such as Join or leave a Meeting and Enable or Disable the Webcam/Mic.</p><p>In this step, the <code>useMeeting</code> hook is utilized to acquire all the required methods such as <code>join()</code>, <code>leave()</code>, <code>toggleWebcam</code> and <code>toggleMic</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const Button = ({ onPress, buttonText, backgroundColor }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: "center",
        alignItems: "center",
        padding: 12,
        borderRadius: 4,
      }}
    &gt;
      &lt;Text style={{ color: "white", fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

function ControlsContainer({ join, leave, toggleWebcam, toggleMic }) {
  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
          join();
        }}
        buttonText={"Join"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ControlsContainer Component</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantList() {
  return null;
}
function MeetingView() {
  const { join, leave, toggleWebcam, toggleMic, meetingId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {meetingId ? (
        &lt;Text style={{ fontSize: 18, padding: 12 }}&gt;
          Meeting Id: {meetingId}
        &lt;/Text&gt;
      ) : null}
      &lt;ParticipantList /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView Component</span></p></figcaption></figure><h3 id="step-5-render-participant-list%E2%80%8B">Step 5: Render Participant List<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-5--render-participant-list">​</a></h3><p>After implementing the controls, the next step is to render the joined participants.</p><p>You can get all the joined <code>participants</code> from the <code>useMeeting</code> Hook.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length &gt; 0 ? (
    &lt;FlatList
      data={participants}
      renderItem={({ item }) =&gt; {
        return &lt;ParticipantView participantId={item} /&gt;;
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 20 }}&gt;Press the Join button to enter the meeting.&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantList Component</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { join, leave, toggleWebcam, toggleMic, participants } = useMeeting({});
  const participantsArrId = [...participants.keys()];

  return (
    &lt;View style={{ flex: 1 }}&gt;
      &lt;ParticipantList participants={participantsArrId} /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView Component</span></p></figcaption></figure><h3 id="step-6-handling-participants-media%E2%80%8B">Step 6: Handling Participant's Media<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-6--handling-participants-media">​</a></h3><p>Before Handling the Participant's Media, you need to understand a couple of concepts.</p><h4 id="1-useparticipant-hook">1. useParticipant Hook</h4>
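<p>A quick note on the <code>participants</code> collection used in the <code>MeetingView</code> component above: it is a JavaScript <code>Map</code> keyed by participant id, which is why the code spreads <code>participants.keys()</code> into an array before handing it to the <code>FlatList</code>. A minimal, SDK-free sketch of that pattern (the <code>Map</code> below is a stand-in, not the real SDK object):</p>

```javascript
// Stand-in for the Map that useMeeting() exposes as `participants`
// (keys are participant ids, values are participant objects).
const participants = new Map([
  ["p1", { displayName: "Alice" }],
  ["p2", { displayName: "Bob" }],
]);

// Same pattern as `[...participants.keys()]` in MeetingView above.
const participantsArrId = [...participants.keys()];
console.log(participantsArrId); // → ["p1", "p2"]
```

<p>Each of these ids is what you pass to the <code>useParticipant</code> hook described next to resolve that participant's streams and metadata.</p>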
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It takes <code>participantId</code> as an argument.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const { webcamStream, webcamOn, displayName } = useParticipant(participantId);</code></pre><figcaption><p><span style="white-space: pre-wrap;">useParticipant Hook Example</span></p></figcaption></figure><h4 id="2-mediastream-api">2. MediaStream API</h4>
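<p>The <code>webcamOn</code> and <code>webcamStream</code> values from <code>useParticipant</code> gate whether there is anything to feed into the MediaStream API. That guard can be isolated as a tiny plain-JavaScript helper; this is a hypothetical convenience, not part of the SDK:</p>

```javascript
// Hypothetical helper: decide whether a participant's video can be rendered.
// `participant` mimics the shape returned by useParticipant().
function canRenderVideo(participant) {
  return Boolean(participant.webcamOn && participant.webcamStream);
}

console.log(canRenderVideo({ webcamOn: true, webcamStream: { track: {} } })); // true
console.log(canRenderVideo({ webcamOn: true, webcamStream: null })); // false
```

<p>Checking both flags avoids constructing a MediaStream from a stream object that is not there yet.</p>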
<p>The MediaStream API is beneficial for adding a MediaTrack to the <code>RTCView</code> component, enabling the playback of audio or video.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">MediaStream API Example</span></p></figcaption></figure><h4 id="rendering-participant-media">Rendering Participant Media</h4>
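<p>The <code>ParticipantView</code> component below builds its stream URL with <code>new MediaStream([webcamStream.track]).toURL()</code>. Since that class comes from react-native-webrtc, the wiring can be exercised off-device by injecting a stub in its place; everything in this sketch is a hypothetical test harness, not SDK code:</p>

```javascript
// Hypothetical stub mimicking the MediaStream class for off-device testing.
class FakeMediaStream {
  constructor(tracks) {
    this.tracks = tracks;
  }
  toURL() {
    return `stream://${this.tracks.length}-track`;
  }
}

// Mirrors the streamURL expression used in ParticipantView.
function buildStreamURL(webcamStream, MediaStreamImpl) {
  return new MediaStreamImpl([webcamStream.track]).toURL();
}

console.log(buildStreamURL({ track: { id: "t1" } }, FakeMediaStream)); // "stream://1-track"
```

<p>Injecting the constructor keeps the URL-building logic testable in plain Node, while the real component passes the SDK's MediaStream class.</p>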
<figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);

  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantView Component</span></p></figcaption></figure><p>Congratulations! By following these steps, you now have basic video calling working within your application. Next, let's integrate a feature that builds a more immersive experience for your users: in-meeting chat.</p><h2 id="integrate-chat-feature">Integrate Chat Feature</h2><p>For communication or any kind of messaging between participants, VideoSDK provides the <code>usePubSub</code> hook, which utilizes the Publish-Subscribe mechanism. It can be employed to develop a wide variety of functionalities. For example, participants could use it to send chat messages to each other, share files or other media, or even trigger actions like muting or unmuting audio or video.</p><p>This guide focuses on using PubSub to implement Chat functionality. If you are not familiar with the PubSub mechanism and the <code>usePubSub</code> hook, you can <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/pubsub">follow this guide</a>.</p><h3 id="implementing-chat%E2%80%8B">Implementing Chat<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/chat-using-pubsub#implementing-chat">​</a></h3><p>The initial step in setting up a group chat involves selecting a topic to which all participants will publish and subscribe, facilitating the exchange of messages. In the following example, <strong>CHAT</strong> is used as the topic. Next, obtain the <code>publish()</code> method and the <code>messages</code> array from the <code>usePubSub</code> hook.</p><p><strong>Step 1:</strong> Add another button to <code>ControlsContainer</code> to enable chat functionality and open the Chat Modal.</p><pre><code class="language-js">// MeetingView component to manage the meeting and chat functionality
function MeetingView() {
  const [modalVisible, setModalVisible] = useState(false);

  // Function to toggle the visibility of the modal
  const toggleModal = () =&gt; {
    setModalVisible(!modalVisible);
  };

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {/* Other components for the meeting view */}
      &lt;ChatView modalVisible={modalVisible} toggleModal={toggleModal} /&gt;
      &lt;ControlsContainer
        // other props, join, leave etc.
        enableChat={() =&gt; {
          toggleModal();
        }}
      /&gt;
    &lt;/View&gt;
  );
}


function ControlsContainer({ enableChat }) {
  return (
    // Container for control buttons
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      {/* Button to enable chat */}
      &lt;Button
        onPress={() =&gt; {
          enableChat();
        }}
        buttonText={"Chat"}
        backgroundColor={"#1178F8"}
      /&gt;
    &lt;/View&gt;
  );
}

</code></pre><p><strong>Step 2:</strong> Add React Native Modal component to handle chat functionality.</p><pre><code class="language-js">import {
  TextInput,
  Modal,
  Pressable,
} from "react-native";
import {
  usePubSub,
} from "@videosdk.live/react-native-sdk";

// ChatView component for displaying chat messages and input
function ChatView({ modalVisible, toggleModal }) {
  // Destructure publish method from usePubSub hook
  const { publish, messages } = usePubSub("CHAT");

  // State to store the user typed message
  const [message, setMessage] = useState("");

  // Function to handle sending messages
  const handleSendMessage = () =&gt; {
    // Publish the message using the publish method
    publish(message, { persist: true });
    // Clear the message input after sending
    setMessage("");
  };

  return (
    &lt;View
      style={{
        flex: 1,
        justifyContent: "center",
        alignItems: "center",
        marginTop: 22,
      }}
    &gt;
      &lt;Modal animationType="slide" visible={modalVisible}&gt;
        &lt;SafeAreaView
          style={{
            flex: 1,
            backgroundColor: "#050A0E",
            justifyContent: "space-between",
          }}
        &gt;
          &lt;Pressable
            style={{
              height: 40,
              aspectRatio: 1,
              backgroundColor: "#5568FE",
              justifyContent: "center",
              alignItems: "center",
              borderRadius: 24,
              marginTop: 12,
              marginLeft: 12,
            }}
            onPress={toggleModal}
          &gt;
            &lt;Text style={{ fontWeight: "bold", fontSize: 24 }}&gt;X&lt;/Text&gt;
          &lt;/Pressable&gt;
          &lt;View&gt;
            {/* Render chat messages */}
            {messages.map((message, index) =&gt; {
              return (
                &lt;Text
                  key={index}
                  style={{
                    fontSize: 12,
                    color: "#FFFFFF",
                    marginVertical: 8,
                    marginHorizontal: 12,
                  }}
                &gt;
                  {message.senderName} says {message.message}
                &lt;/Text&gt;
              );
            })}

            &lt;View
              style={{
                paddingHorizontal: 12,
              }}
            &gt;
              {/* Render text input container */}
              &lt;TextInputContainer
                message={message}
                setMessage={setMessage}
                sendMessage={handleSendMessage}
              /&gt;
            &lt;/View&gt;
          &lt;/View&gt;
        &lt;/SafeAreaView&gt;
      &lt;/Modal&gt;
    &lt;/View&gt;
  );
}</code></pre><p><strong>Step 3:</strong> Implement the TextInputContainer component for inputting and sending messages.</p><pre><code class="language-js">// TextInputContainer component for inputting and sending messages
function TextInputContainer({ sendMessage, setMessage, message }) {
  // Function to render the text input UI
  const textInput = () =&gt; {
    return (
      &lt;View
        style={{
          height: 40,
          marginBottom: 14,
          flexDirection: "row",
          borderRadius: 10,
          backgroundColor: "#404B53",
        }}
      &gt;
        &lt;View
          style={{
            flexDirection: "row",
            flex: 2,
            justifyContent: "center",
            alignItems: "center",
          }}
        &gt;
          {/* TextInput for typing messages */}
          &lt;TextInput
            multiline
            value={message}
            placeholder={"Write your message"}
            style={{
              flex: 1,
              color: "white",
              marginLeft: 12,
              margin: 4,
              padding: 4,
            }}
            numberOfLines={2}
            onChangeText={setMessage}
            selectionColor={"white"}
            placeholderTextColor={"#9FA0A7"}
          /&gt;
        &lt;/View&gt;

        &lt;View
          style={{
            justifyContent: "center",
            alignItems: "center",
            backgroundColor: message.length &gt; 0 ? "#5568FE" : "transparent",
            margin: 4,
            padding: 4,
            borderRadius: 8,
          }}
        &gt;
          {/* Button to send the message */}
          &lt;TouchableOpacity
            onPress={sendMessage}
            style={{
              height: 40,
              aspectRatio: 1,
              justifyContent: "center",
              alignItems: "center",
              paddingVertical: 8,
            }}
          &gt;
            &lt;Text&gt;Send&lt;/Text&gt;
          &lt;/TouchableOpacity&gt;
        &lt;/View&gt;
      &lt;/View&gt;
    );
  };

  return &lt;&gt;{textInput()}&lt;/&gt;;
}</code></pre><h3 id="private-chat">Private Chat</h3><p>In the following example, to convert the chat into a private conversation between two participants, you can set the <code>sendOnly</code> property. This property ensures that messages are only sent to the intended recipient, making the chat interaction exclusive and private between the two users involved.</p><pre><code class="language-js">import { SafeAreaView, TouchableOpacity, TextInput, Text } from "react-native";

function ChatView() {
  // destructure publish method from usePubSub hook
  const { publish, messages } = usePubSub("CHAT");

  // State to store the user typed message
  const [message, setMessage] = useState("");

  const handleSendMessage = () =&gt; {
    // Sending the Message using the publish method
    // Pass the participantId of the participant to whom you want to send the message.
    publish(message, { persist: true, sendOnly: ['XYZ'] });
    // Clearing the message input
    setMessage("");
  };

 //...
}</code></pre><h3 id="downloading-chat-messages%E2%80%8B">Downloading Chat Messages<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/chat-using-pubsub#downloading-chat-messages">​</a></h3><p>All PubSub messages published with <code>persist: true</code> can be downloaded as a <code>.csv</code> file. This file will be accessible in the VideoSDK dashboard and through the <a href="https://docs.videosdk.live/api-reference/realtime-communication/fetch-session-using-sessionid">Sessions API</a>.</p><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-native-video-calling-app">✨ Want to Add More Features to React Native Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React Native video-calling app, check out these additional resources:</p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/active-speaker-indication-in-react-native-video-call-app">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-in-react-native-video-app">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-native-for-android-app">Link</a></li><li>Screen Share Feature in Android: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-android-video-call-app">Link</a></li><li>Screen Share Feature in iOS: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-ios-video-call-app">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/picture-in-picture-pip-in-react-native">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>Congratulations! You have successfully integrated video calling and chat, unlocking the full potential of real-time communication and enabling users to engage in seamless conversations during video calls.
Integrating a chat feature into a React Native video call app using VideoSDK can significantly enhance the user experience.</p><p>If you are new here and want to build an interactive React Native app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate RTMP Live Stream in Flutter Video Call App?]]></title><description><![CDATA[Learn how to integrate RTMP Live Stream into your Flutter video calling app and transform it into a real-time video calling platform with VideoSDK.
]]></description><link>https://www.videosdk.live/blog/integrate-rtmp-livestream-in-flutter-video-call-app</link><guid isPermaLink="false">661e69322a88c204ca9d413b</guid><category><![CDATA[Flutter]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 08 Jan 2025 12:30:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-in-Flutter-Video-Call-App.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-in-Flutter-Video-Call-App.jpg" alt="How to Integrate RTMP Live Stream in Flutter Video Call App?"/><p>Integrate <a href="https://www.videosdk.live/blog/what-is-rtmp">real-time messaging protocol</a> (RTMP) livestream functionality seamlessly into your <a href="https://www.videosdk.live/blog/video-calling-in-flutter">Flutter video call application</a> for a dynamic and engaging user experience. With RTMP support, users can effortlessly stream live video content, fostering interactive communication within your app. Enable users to broadcast their video feeds in real-time, creating vibrant and immersive virtual gatherings. 
Implementing RTMP in your Flutter app ensures smooth and efficient live streaming, enhancing the overall quality of video calls.</p><p><strong>Benefits of RTMP Livestream:</strong></p><ol><li><strong>Improved Communication</strong>: RTMP enhances communication by allowing users to share live video content, fostering collaboration and information sharing.</li><li><strong>High-Quality Streaming:</strong> RTMP ensures high-quality video streaming with minimal latency, providing users with a seamless viewing experience.</li><li><strong>Scalability:</strong> With RTMP support, your app can accommodate a large number of simultaneous viewers, making it suitable for hosting virtual events of varying sizes.</li><li><strong>Enhanced User Engagement:</strong> RTMP livestreaming facilitates real-time interaction, keeping users engaged and connected during video calls.</li></ol><p><strong>Use Cases of RTMP Livestream:</strong></p><ol><li><strong>Virtual Events</strong>: Host virtual conferences, concerts, or seminars where participants can stream live video content to engage with the audience in real-time.</li><li><strong>Educational Platforms</strong>: Enable teachers to conduct live lectures and interactive sessions, allowing students to stream video content and ask questions in real-time.</li><li><strong>Remote Work</strong>: Facilitate remote meetings and conferences where participants can share live video presentations and collaborate effectively.</li></ol><p>This VideoSDK tutorial provides clear instructions and code examples to help you seamlessly integrate RTMP live streaming into your Flutter video-calling app.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To integrate RTMP, we will use the capabilities VideoSDK offers. 
Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required VideoSDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, refer to the <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/authentication-and-tokens#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites%E2%80%8B">Prerequisites<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#prerequisites"></a></h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>A VideoSDK developer account (if you do not have one, create it from the <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>).</li><li>A basic understanding of Flutter.</li><li>The <strong><a href="https://pub.dev/packages/videosdk" rel="noopener noreferrer">Flutter VideoSDK</a></strong> package.</li><li>Flutter installed on your device.</li></ul><h2 id="install-videosdk%E2%80%8B">Install VideoSDK<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#install-video-sdk"></a></h2><p>Install VideoSDK using the Flutter command below. 
Make sure you are in your Flutter app directory before you run this command.</p><pre><code class="language-dart">$ flutter pub add videosdk

// Run this command to add the http package, used to make the network call that generates a roomId
$ flutter pub add http</code></pre><h3 id="videosdk-compatibility">VideoSDK Compatibility</h3><!--kg-card-begin: html--><table style="border: 1px solid black;">
<thead>
<tr>
<th style="border:1px solid white;">Android and iOS app</th>
<th style="border:1px solid white;">Web</th>
<th style="border:1px solid white;">Desktop app</th>
<th style="border:1px solid white;">Safari browser</th>
</tr>
</thead>
<tbody>
<tr>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ❌ </center></td>
</tr>
</tbody>
</table>
<!--kg-card-end: html--><h3 id="structure-of-the-project%E2%80%8B">Structure of the project<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#structure-of-the-project">​</a></h3><p>Your project structure should look like this.</p><pre><code class="language-dart">    root
    ├── android
    ├── ios
    ├── lib
         ├── api_call.dart
         ├── join_screen.dart
         ├── main.dart
         ├── meeting_controls.dart
         ├── meeting_screen.dart
         ├── participant_tile.dart</code></pre><p>We are going to create flutter widgets (JoinScreen, MeetingScreen, MeetingControls, and ParticipantTile).</p><h3 id="app-structure%E2%80%8B">App Structure<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#app-structure">​</a></h3><p>The app widget will contain <code>JoinScreen</code> and <code>MeetingScreen</code> widget. <code>MeetingScreen</code> will have <code>MeetingControls</code> and <code>ParticipantTile</code> widget.</p><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/flutter_quick_start_arch.png" class="kg-image" alt="How to Integrate RTMP Live Stream in Flutter Video Call App?" loading="lazy"/></figure><h3 id="configure-project">Configure Project</h3><h4 id="for-android%E2%80%8B"><br>For Android<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-android">​</a></br></h4><ul><li>Update <code>/android/app/src/main/AndroidManifest.xml</code> for the permissions we will be using to implement the audio and video features.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">&lt;uses-feature android:name="android.hardware.camera" /&gt;
&lt;uses-feature android:name="android.hardware.camera.autofocus" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET"/&gt;
&lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
&lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;</code></pre><figcaption>AndroidManifest.xml</figcaption></figure><ul><li>Also, you will need to set your build settings to Java 8 because the official WebRTC jar now uses static methods in <code>EglBase</code> the interface. Just add this to your app-level <code>/android/app/build.gradle</code>.</li></ul><pre><code class="language-dart">android {
    //...
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
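
    // Optional (see the notes below this snippet): raise the SDK versions
    // if your project still uses the older Flutter defaults. The values
    // here are illustrative, not part of the original guide.
    // defaultConfig {
    //     minSdkVersion 23
    // }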
}</code></pre><ul><li>If necessary, in the same <code>build.gradle</code>, increase the <code>minSdkVersion</code> of <code>defaultConfig</code> to <code>23</code> (the current Flutter generator defaults to <code>16</code>).</li><li>If necessary, in the same <code>build.gradle</code>, increase <code>compileSdkVersion</code> and <code>targetSdkVersion</code> to <code>33</code> (the current Flutter generator defaults to <code>30</code>).</li></ul><h4 id="for-ios%E2%80%8B">For iOS<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-ios"></a></h4><ul><li>Add the following entries to your <code>/ios/Runner/Info.plist</code> file to allow your app to access the camera and microphone:</li><li>Uncomment the following line in <code>/ios/Podfile</code> to define a global platform for your project:</li></ul><pre><code class="language-dart">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><pre><code class="language-dart"># platform :ios, '12.0'</code></pre><h4 id="for-macos%E2%80%8B">For MacOS<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-macos">​</a></h4><ul><li>Add the following entries to your <code>/macos/Runner/Info.plist</code> file that allows your app to access the camera and microphone:</li></ul><pre><code class="language-dart">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><ul><li>Add the following entries to your <code>/macos/Runner/DebugProfile.entitlements</code> file that allows your app to access the camera, microphone, and open outgoing network connections:</li></ul><pre><code class="language-dart">&lt;key&gt;com.apple.security.network.client&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.camera&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.microphone&lt;/key&gt;
&lt;true/&gt;</code></pre><ul><li>Add the following entries to your <code>/macos/Runner/Release.entitlements</code> file that allows your app to access the camera, microphone, and open outgoing network connections:</li></ul><pre><code class="language-dart">&lt;key&gt;com.apple.security.network.server&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.network.client&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.camera&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.microphone&lt;/key&gt;
&lt;true/&gt;</code></pre><h2 id="essential-steps-to-implement-video-calling-functionality">Essential Steps to Implement Video Calling Functionality</h2><p>Before diving into the specifics of screen sharing implementation, it's crucial to ensure you have VideoSDK properly installed and configured within your Flutter project. Refer to <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start">VideoSDK's documentation</a> for detailed installation instructions. Once you have a functional video calling setup, you can proceed with adding the screen-sharing feature.</p><h3 id="step-1-get-started-with-apicalldart%E2%80%8B">Step 1: Get started with <code>api_call.dart</code><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-api_calldart">​</a></h3><p>Before jumping to anything else, you will write a function to generate a unique meetingId. You will require an authentication token, you can generate it either by using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or by generating it from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for development.</p><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'dart:convert';
import 'package:http/http.dart' as http;

//Auth token we will use to generate a meeting and connect to it
String token = "&lt;Generated-from-dashboard&gt;";

// API call to create meeting
Future&lt;String&gt; createMeeting() async {
  final http.Response httpResponse = await http.post(
    Uri.parse("https://api.videosdk.live/v2/rooms"),
    headers: {'Authorization': token},
  );
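
  // Illustrative hardening (not part of the original guide): fail fast if
  // the room-creation request did not succeed before decoding the body.
  if (httpResponse.statusCode != 200) {
    throw Exception("Failed to create room: ${httpResponse.statusCode}");
  }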

//Destructuring the roomId from the response
  return json.decode(httpResponse.body)['roomId'];
}</code></pre><figcaption>api_call.dart</figcaption></figure><h3 id="step-2-creating-the-joinscreen%E2%80%8B">Step 2: Creating the JoinScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-2--creating-the-joinscreen">​</a></h3><p>Let's create <code>join_screen.dart</code> file in <code>lib</code> directory and create JoinScreen <code>StatelessWidget</code>.</p><p>The JoinScreen will consist of:</p><ul><li><strong>Create Meeting Button</strong>: This button will create a new meeting for you.</li><li><strong>Meeting ID TextField</strong>: This text field will contain the meeting ID, you want to join.</li><li><strong>Join Meeting Button</strong>: This button will join the meeting, which you have provided.</li><li>Update the home screen of the app in the <code>main.dart</code></li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'api_call.dart';
import 'meeting_screen.dart';

class JoinScreen extends StatelessWidget {
  final _meetingIdController = TextEditingController();

  JoinScreen({super.key});

  void onCreateButtonPressed(BuildContext context) async {
    // call api to create meeting and then navigate to MeetingScreen with meetingId,token
    await createMeeting().then((meetingId) {
      if (!context.mounted) return;
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; MeetingScreen(
            meetingId: meetingId,
            token: token,
          ),
        ),
      );
    });
  }

  void onJoinButtonPressed(BuildContext context) {
    String meetingId = _meetingIdController.text;
    var re = RegExp(r"\w{4}-\w{4}-\w{4}");
    // check that the meeting id is not empty and matches the expected format
    // if the meeting id is valid, navigate to MeetingScreen with meetingId, token
    if (meetingId.isNotEmpty &amp;&amp; re.hasMatch(meetingId)) {
      _meetingIdController.clear();
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; MeetingScreen(
            meetingId: meetingId,
            token: token,
          ),
        ),
      );
    } else {
      ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
        content: Text("Please enter valid meeting id"),
      ));
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('VideoSDK QuickStart'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(12.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            ElevatedButton(
              onPressed: () =&gt; onCreateButtonPressed(context),
              child: const Text('Create Meeting'),
            ),
            Container(
              margin: const EdgeInsets.fromLTRB(0, 8.0, 0, 8.0),
              child: TextField(
                decoration: const InputDecoration(
                  hintText: 'Meeting Id',
                  border: OutlineInputBorder(),
                ),
                controller: _meetingIdController,
              ),
            ),
            ElevatedButton(
              onPressed: () =&gt; onJoinButtonPressed(context),
              child: const Text('Join Meeting'),
            ),
          ],
        ),
      ),
    );
  }
}</code></pre><figcaption>join_screen.dart</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'join_screen.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'VideoSDK QuickStart',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: JoinScreen(),
    );
  }
}</code></pre><figcaption>main.dart</figcaption></figure><h3 id="step-3-creating-the-meetingcontrols%E2%80%8B">Step 3: Creating the MeetingControls<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-3--creating-the-meetingcontrols">​</a></h3><p>Let's create <code>meeting_controls.dart</code> file and create MeetingControls <code>StatelessWidget</code>.</p><p>The MeetingControls will consist of:</p><ul><li><strong>Leave Button</strong>: This button will leave the meeting.</li><li><strong>Toggle Mic Button</strong>: This button will unmute or mute the mic.</li><li><strong>Toggle Camera Button</strong>: This button will enable or disable the camera.</li></ul><p>MeetingControls will accept 3 functions in the constructor</p><ul><li><strong>onLeaveButtonPressed</strong>: invoked when the Leave button is pressed.</li><li><strong>onToggleMicButtonPressed</strong>: invoked when the toggle mic button is pressed.</li><li><strong>onToggleCameraButtonPressed</strong>: invoked when the toggle Camera button is pressed.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';

class MeetingControls extends StatelessWidget {
  final void Function() onToggleMicButtonPressed;
  final void Function() onToggleCameraButtonPressed;
  final void Function() onLeaveButtonPressed;

  const MeetingControls(
      {super.key,
      required this.onToggleMicButtonPressed,
      required this.onToggleCameraButtonPressed,
      required this.onLeaveButtonPressed});

  @override
  Widget build(BuildContext context) {
    return Row(
      mainAxisAlignment: MainAxisAlignment.spaceEvenly,
      children: [
        ElevatedButton(
            onPressed: onLeaveButtonPressed, child: const Text('Leave')),
        ElevatedButton(
            onPressed: onToggleMicButtonPressed, child: const Text('Toggle Mic')),
        ElevatedButton(
            onPressed: onToggleCameraButtonPressed,
            child: const Text('Toggle WebCam')),
      ],
    );
  }
}</code></pre><figcaption>meeting_controls.dart</figcaption></figure><h3 id="step-4-creating-participanttile%E2%80%8B">Step 4: Creating ParticipantTile<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-4--creating-participanttile">​</a></h3><p>Let's create <code>participant_tile.dart</code> file and create ParticipantTile <code>StatefulWidget</code>.</p><p>The ParticipantTile will consist of:</p><ul><li><strong>RTCVideoView</strong>: This will show the participant's video stream.</li></ul><p>ParticipantTile will accept <code>Participant</code> in constructor</p><ul><li><strong>participant:</strong> participant of the meeting.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ParticipantTile extends StatefulWidget {
  final Participant participant;
  const ParticipantTile({super.key, required this.participant});

  @override
  State&lt;ParticipantTile&gt; createState() =&gt; _ParticipantTileState();
}

class _ParticipantTileState extends State&lt;ParticipantTile&gt; {
  Stream? videoStream;

  @override
  void initState() {
    // initial video stream for the participant
    widget.participant.streams.forEach((key, Stream stream) {
      setState(() {
        if (stream.kind == 'video') {
          videoStream = stream;
        }
      });
    });
    _initStreamListeners();
    super.initState();
  }

  _initStreamListeners() {
    widget.participant.on(Events.streamEnabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = stream);
      }
    });

    widget.participant.on(Events.streamDisabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = null);
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(8.0),
      child: videoStream != null
          ? RTCVideoView(
              videoStream?.renderer as RTCVideoRenderer,
              objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
            )
          : Container(
              color: Colors.grey.shade800,
              child: const Center(
                child: Icon(
                  Icons.person,
                  size: 100,
                ),
              ),
            ),
    );
  }
}</code></pre><figcaption>participant_tile.dart</figcaption></figure><h3 id="step-5-creating-the-meetingscreen%E2%80%8B">Step 5: Creating the MeetingScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-5--creating-the-meetingscreen">​</a></h3><p>Let's create <code>meeting_screen.dart</code> file and create MeetingScreen <code>StatefulWidget</code>.</p><p>MeetingScreen will accept meetingId and token in the constructor.</p><ul><li><strong>meetingID:</strong> meetingId, you want to join</li><li><strong>token</strong>: VideoSDK Auth token.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import './participant_tile.dart';

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;

  const MeetingScreen(
      {super.key, required this.meetingId, required this.token});

  @override
  State&lt;MeetingScreen&gt; createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;
  var micEnabled = true;
  var camEnabled = true;

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    // create room
    _room = VideoSDK.createRoom(
      roomId: widget.meetingId,
      token: widget.token,
      displayName: "John Doe",
      micEnabled: micEnabled,
      camEnabled: camEnabled
    );

    setMeetingEventListener();

    // Join room
    _room.join();

    super.initState();
  }

  // listening to meeting events
  void setMeetingEventListener() {
    _room.on(Events.roomJoined, () {
      setState(() {
        participants.putIfAbsent(
            _room.localParticipant.id, () =&gt; _room.localParticipant);
      });
    });

    _room.on(
      Events.participantJoined,
      (Participant participant) {
        setState(
          () =&gt; participants.putIfAbsent(participant.id, () =&gt; participant),
        );
      },
    );

    _room.on(Events.participantLeft, (String participantId) {
      if (participants.containsKey(participantId)) {
        setState(
          () =&gt; participants.remove(participantId),
        );
      }
    });

    _room.on(Events.roomLeft, () {
      participants.clear();
      Navigator.popUntil(context, ModalRoute.withName('/'));
    });
  }

  // onbackButton pressed leave the room
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }
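
  // Note (added): on newer Flutter versions, WillPopScope is deprecated in
  // favor of PopScope; the leave-on-back logic above carries over directly.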

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              Text(widget.meetingId),
              //render all participant
              Expanded(
                child: Padding(
                  padding: const EdgeInsets.all(8.0),
                  child: GridView.builder(
                    gridDelegate: const SliverGridDelegateWithFixedCrossAxisCount(
                      crossAxisCount: 2,
                      crossAxisSpacing: 10,
                      mainAxisSpacing: 10,
                      mainAxisExtent: 300,
                    ),
                    itemBuilder: (context, index) {
                      return ParticipantTile(
                        key: Key(participants.values.elementAt(index).id),
                          participant: participants.values.elementAt(index));
                    },
                    itemCount: participants.length,
                  ),
                ),
              ),
              MeetingControls(
                onToggleMicButtonPressed: () {
                  micEnabled ? _room.muteMic() : _room.unmuteMic();
                  micEnabled = !micEnabled;
                },
                onToggleCameraButtonPressed: () {
                  camEnabled ? _room.disableCam() : _room.enableCam();
                  camEnabled = !camEnabled;
                },
                onLeaveButtonPressed: () {
                  _room.leave();
                },
              ),
            ],
          ),
        ),
      ),
    );
  }
}</code></pre><figcaption>meeting_screen.dart</figcaption></figure><blockquote><strong>CAUTION</strong><br>If you get a <code>webrtc/webrtc.h file not found</code> error at runtime in iOS, check the solution <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/known-issues#issue--1" rel="noopener noreferrer">here</a>.</br></blockquote><blockquote><strong>TIP</strong>:<br>You can check out the complete <a href="https://github.com/videosdk-live/quickstart/tree/main/flutter-rtc" rel="noopener noreferrer">quick start example here</a>.</br></blockquote><h2 id="integrate-rtmp-livestream-feature">Integrate RTMP Livestream Feature</h2><p>RTMP is a popular protocol for live streaming video content from VideoSDK to platforms such as YouTube, Twitch, Facebook, and others.</p><p>By providing the platform-specific stream key and stream URL, VideoSDK can connect to the platform's RTMP server and transmit the live video stream. This lets you live stream your meeting to any platform that supports RTMP ingestion.</p><p>VideoSDK also allows you to configure the livestream layout in numerous ways: you can set one of the prebuilt layouts in the configuration, or provide your own custom template to live stream according to your layout of choice.</p><p>After installing VideoSDK, you can unlock the power of live streaming by following these steps:</p><h3 id="start-livestream">Start Livestream</h3><p><code>startLivestream()</code>, which can be accessed from the <code>Room</code> object, is used to start an RTMP live stream of the meeting. This method accepts two parameters:</p><ul><li><code>outputs</code>: This parameter accepts an array of objects containing the RTMP <code>url</code> and <code>streamKey</code> of the platforms on which you want to start the live stream.</li><li><code>config</code> (optional): This parameter defines what the live stream layout should look like.</li></ul><pre><code class="language-dart">var outputs = [
  {
    "url": "rtmp://a.rtmp.youtube.com/live2",
    "streamKey": "&lt;STREAM_KEY&gt;",
  },
  {
    "url": "rtmps://",
    "streamKey": "&lt;STREAM_KEY&gt;",
  },
];
Map&lt;String, dynamic&gt; config = {
  // Layout Configuration
  "layout": {
    "type": "GRID", // "SPOTLIGHT" | "SIDEBAR", Default: "GRID"
    "priority": "SPEAKER", // "PIN", Default: "SPEAKER"
    "gridSize": 4, // MAX: 4
  },

  // Theme of livestream
  "theme": "DARK", // "LIGHT" | "DEFAULT"
};

room.startLivestream(outputs, config: config);</code></pre><h3 id="stop-live-stream">Stop Live Stream</h3><ul><li><code>stopLivestream()</code> is used to stop the meeting's live stream and can also be accessed from the <code>Room</code> object.</li></ul><h3 id="event-associated-with-livestream%E2%80%8B">Event associated with Livestream<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream#event-associated-with-livestream"></a></h3><p><strong>livestreamStateChanged</strong>: Whenever the livestream state changes, the <code>livestreamStateChanged</code> event is triggered.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'dart:developer'; // provides log()
import 'package:videosdk/videosdk.dart';

class MeetingScreen extends StatefulWidget {
  ...
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room room;

  @override
  void initState() {
    ...

    setupRoomEventListener();
  }

  @override
  Widget build(BuildContext context) {
    return YourMeetingWidget();
  }

  void setupRoomEventListener() {
  	//...
    room.on(Events.livestreamStateChanged, (String status) {
      //Status can be :: LIVESTREAM_STARTING
      //Status can be :: LIVESTREAM_STARTED
      //Status can be :: LIVESTREAM_STOPPING
      //Status can be :: LIVESTREAM_STOPPED
      log("Meeting Livestream status : $status");
    });
  }
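
  // Illustrative addition (not part of the original snippet): stop the
  // livestream from this widget once it has started.
  void stopLive() {
    room.stopLivestream();
  }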
}</code></pre><h2 id="example">Example</h2><ul><li>Let's create <code>LiveStreamControls</code> widget which will help to toggle RTMP LiveStream.</li></ul><pre><code class="language-dart">import 'package:flutter/material.dart';

class LiveStreamControls extends StatelessWidget {
  final String liveStreamState;
  final void Function() onToggleLiveStreamButtonPressed;

  const LiveStreamControls({
    super.key,
    required this.liveStreamState,
    required this.onToggleLiveStreamButtonPressed,
  });

  @override
  Widget build(BuildContext context) {
    return Wrap(
      children: [
        ElevatedButton(
            onPressed: onToggleLiveStreamButtonPressed,
            child: Text(liveStreamState == "LIVESTREAM_STOPPED"
                ? 'Start LiveStream'
                : liveStreamState == "LIVESTREAM_STARTING"
                    ? "Starting LiveStream"
                    : liveStreamState == "LIVESTREAM_STARTED"
                        ? "Stop LiveStream"
                        : liveStreamState == "LIVESTREAM_STOPPING"
                            ? "Stopping LiveStream"
                            : "Start LiveStream")),
      ],
    );
  }
}
</code></pre><ul><li>Now, we will add the <code>LiveStreamControls</code> widget in <code>MeetingScreen</code>, after the <code>MeetingControls</code> widget.</li><li>We will also listen to the <code>liveStreamStateChanged</code> event, which fires when the meeting's livestream status changes.</li></ul><pre><code class="language-dart">import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import './participant_tile.dart';
import './livestream_controls.dart';

class MeetingScreen extends StatefulWidget {
  //...
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;
  var micEnabled = true;
  var camEnabled = true;
  String liveStreamState = "LIVESTREAM_STOPPED";

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    // create room
    //...
  }

  // listening to meeting events
  void setMeetingEventListener() {
    // ...
    _room.on(
      Events.liveStreamStateChanged,
      (String status) {
        setState(
          () =&gt; liveStreamState = status,
        );
      },
    );
  }

  // onbackButton pressed leave the room
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              // ...,
              LiveStreamControls(
                onToggleLiveStreamButtonPressed: () {
                  if (liveStreamState == "LIVESTREAM_STOPPED") {
                    var outputs = [
                      {
                        "url": "&lt;URL&gt;",
                        "streamKey": "&lt;Stream-Key&gt;",
                      }
                    ];
                    var liveStreamConfig = {
                      'layout': {
                        'type': 'GRID',
                        'priority': 'SPEAKER',
                        'gridSize': 4,
                      },
                      'theme': "LIGHT",
                    };
                    _room.startLivestream(outputs, config: liveStreamConfig);
                  } else if (liveStreamState == "LIVESTREAM_STARTED") {
                    _room.stopLivestream();
                  }
                },
                liveStreamState: liveStreamState,
              ),
            ],
          ),
        ),
      ),
    );
  }
}</code></pre><h3 id="custom-template%E2%80%8B">Custom Template</h3><p>With VideoSDK, you can also use your own custom-designed layout template to livestream your meetings. To use a custom template, you first need to create one; you can <a href="https://docs.videosdk.live/react/guide/interactive-live-streaming/custom-template">follow this guide</a> to do so. Once you have set up the template, use the <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-livestream">REST API to start</a> the livestream with the <code>templateURL</code> parameter.</p><h2 id="conclusion">Conclusion</h2><p>Integrating RTMP livestream functionality into your Flutter video call app opens up a wide range of use cases. With RTMP support, users can engage in high-quality, real-time streaming, enhancing communication and collaboration within your application and opening the door to diverse multimedia experiences.</p><p>Unlock the full potential of VideoSDK and craft seamless video experiences effortlessly. Sign up with VideoSDK today and receive 10,000 free minutes to propel your video app to the next level.</p>]]></content:encoded></item><item><title><![CDATA[Complete Guide to React Native]]></title><description><![CDATA[Explore mobile app development with our guide to React Native. 
Learn setup, master components and advanced features, and deploy efficient cross-platform apps.]]></description><link>https://www.videosdk.live/blog/react-native</link><guid isPermaLink="false">66210a2a2a88c204ca9d4483</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 08 Jan 2025 12:29:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/pexels-realtoughcandy-11035471.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction-to-react-native">Introduction to React Native</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/pexels-realtoughcandy-11035471.jpg" alt="Complete Guide to React Native"/><p>React Native is an innovative and highly popular framework that allows developers to build cross-platform mobile applications using JavaScript. Introduced by Facebook in 2015, React Native quickly became a game-changer in mobile development. It enables programmers to write once and deploy for both iOS and Android, significantly reducing development time and maintaining a consistent user experience across platforms.</p><h3 id="what-is-react-native">What is React Native?</h3><p>React Native is a framework that bridges the gap between web and mobile platforms by allowing developers to use React's declarative UI paradigm and JavaScript to build native applications. Unlike traditional mobile development approaches, React Native uses the same fundamental building blocks as regular iOS and Android apps. 
The applications are composed using a mixture of JavaScript and JSX, which then interact with native platform components, rendering the application directly on the mobile device (<a href="https://reactnative.dev/docs/getting-started" rel="noreferrer">ReactNativeLearn</a>) (<a href="https://www.coursera.org/articles/what-is-react-native" rel="noreferrer">Coursera</a>).</p><h3 id="how-does-react-native-work">How Does React Native Work?</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/image-5.png" class="kg-image" alt="Complete Guide to React Native" loading="lazy" width="1365" height="603"/></figure><h3 id="who-uses-react-native">Who Uses React Native?</h3><p>From tech startups to big corporations, React Native has been adopted by a variety of companies looking to streamline their mobile development processes. Companies like Instagram, Tesla, and Airbnb have utilized React Native to enhance their mobile applications, leveraging its ability to integrate seamlessly with existing code while providing a native user experience.</p><h3 id="why-react-native-is-popular">Why React Native is Popular</h3><p>One of the main reasons for React Native's popularity is its efficiency in <a href="https://www.empat.tech/services/custom-mobile-app-development-services" rel="noreferrer">developing mobile applications</a>. Developers can use hot reloading features, which allow them to see the results of their latest changes instantly without recompiling the entire application. This feature not only speeds up the development process but also makes it more dynamic and responsive to changes.</p><p>Furthermore, React Native supports components and APIs that enable interaction with device functionality like the camera and location services, making it highly versatile for comprehensive app development. 
The framework’s focus on the user interface (UI) produces applications that are not only highly functional but also visually appealing (<a href="https://reactnative.dev/docs/getting-started" rel="noreferrer">ReactNativeLearn</a>) (<a href="https://reactnativecentral.com/react-native-essentials-flexbox-guide/" rel="noreferrer">React Native Central</a>).</p><h3 id="key-features-of-react-native">Key Features of React Native</h3><p>React Native is lauded for its rich set of features that cater to both developers and users:</p><ul><li><strong>Cross-Platform Development</strong>: Code written in React Native can be deployed on both iOS and Android, which reduces the time and cost associated with app development.</li><li><strong>Native Components</strong>: React Native uses components that are native to the user’s device platform, which ensures that the look and feel of the app conforms to the norms of the device’s OS.</li><li><strong>JavaScript Interface</strong>: Utilizing JavaScript, one of the most popular programming languages, React Native makes it easier for web developers to transition into mobile development, leveraging their existing skills (<a href="https://www.reactnative.express/" rel="noreferrer">React Native Express</a>) (<a href="https://blog.logrocket.com/react-native-track-player-complete-guide/" rel="noreferrer">LogRocket Blog</a>).</li></ul><p>As the mobile development landscape continues to evolve, React Native stands out as a powerful tool that simplifies the creation of robust, efficient, and aesthetically pleasing mobile applications. It not only helps in faster development cycles but also maintains high performance and native aesthetics, making it a preferred choice for many developers around the world.</p><h2 id="setting-up-your-development-environment">Setting Up Your Development Environment</h2><p>To start developing with React Native, the first critical step is setting up your development environment. 
This process ensures you have all necessary tools and dependencies installed, enabling you to create and run React Native applications on your computer. Below, we'll go through how to prepare your development environment for both iOS and Android platforms.</p><h3 id="installation-and-environment-setup">Installation and Environment Setup</h3><ul><li><strong>Node.js and Watchman</strong>: Begin by installing Node.js, which includes npm (Node Package Manager). This is essential as it helps manage the packages required for React Native development. Additionally, install Watchman, a tool developed by Facebook for watching changes in the filesystem; it's particularly useful in development for hot reloading (<a href="https://reactnative.dev/docs/environment-setup" rel="noreferrer">ReactNativeLearn</a>).</li><li><strong>React Native CLI</strong>: While you can use Expo CLI for a managed app development experience, installing the React Native CLI allows for more control and is necessary for apps that require custom native code. Use npm to install the React Native CLI globally on your system:</li></ul><pre><code class="language-bash">npm install -g react-native-cli</code></pre><ul><li><strong>Java Development Kit (JDK)</strong>: Install the JDK if you are planning to develop Android applications; it’s used to compile your app’s code.</li></ul><h3 id="tools-and-editors-for-react-native-app">Tools and Editors for React Native app</h3><ol><li><strong>Code Editor</strong>: A reliable code editor is crucial. Visual Studio Code (VS Code) is highly recommended for React Native development due to its robust ecosystem of extensions and built-in support for JavaScript and React Native.</li><li><strong>Android Studio and Xcode</strong>: For Android development, Android Studio provides the necessary SDKs and tools to build your apps. It also includes an emulator to test your apps. 
For iOS development, Xcode is indispensable as it contains the iOS SDK, simulator, and other tools needed for iOS app development​ (<a href="https://reactnative.dev/docs/environment-setup" rel="noreferrer">ReactNativeLearn</a>)​.</li><li><strong>iOS and Android Emulators</strong>: After installing Android Studio and Xcode, set up the emulators. These allow you to run and test your applications on different virtual devices without needing actual hardware.</li></ol><h3 id="preparing-the-devices-for-react-native-app">Preparing the Devices for React Native app</h3><ul><li><strong>Android Device Preparation</strong>: If you choose to use a physical Android device, enable USB debugging found in the developer options. This setting allows your development environment to interface with your device, enabling live application testing​ (<a href="https://reactnative.dev/docs/environment-setup" rel="noreferrer">ReactNativeLearn</a>)​.</li><li><strong>iOS Simulator Setup</strong>: With Xcode installed, access the iOS simulator through the Xcode development environment. 
It simulates iOS devices on your Mac, allowing for straightforward app testing and debugging.</li></ul><h3 id="running-your-first-react-native-app">Running Your First React Native App</h3><p>Once your environment is configured, you can start your first project and test it:</p><p><strong>Creating a New App</strong>:</p><pre><code class="language-bash">react-native init MyNewProject</code></pre><p>This command sets up a new React Native project with all necessary files and folders.</p><p><strong>Running the App</strong>:</p><ul><li>For Android:</li></ul><pre><code class="language-bash">react-native run-android</code></pre><ul><li>For iOS:</li></ul><pre><code class="language-bash">react-native run-ios </code></pre><p>These commands compile the app and launch it on the respective simulator or physical device, giving you a firsthand look at how your app appears and operates on mobile devices​ (<a href="https://reactnative.dev/docs/environment-setup" rel="noreferrer">ReactNativeLearn</a>)​.</p><h2 id="basic-components-and-api-for-react-native-app">Basic Components and API for React Native app</h2><p>Building React Native applications involves using a variety of components and APIs that enable you to create feature-rich, interactive, and efficient mobile apps. In this section, we will explore the core components of React Native and how to work with external APIs within your applications.</p><h4 id="core-components">Core Components</h4><p>React Native offers several built-in components that are essential for any <a href="https://www.suffescom.com/mobile-app-development-company">mobile app development</a>. These components are the building blocks that you will use to construct the user interface of your app:</p><ol><li><strong>View</strong>: The most fundamental component for building a UI, <code>View</code> is a container that supports layout with flexbox, style, some touch handling, and accessibility controls. 
It is similar to the <code>div</code> in web development.</li><li><strong>Text</strong>: To display text in your app, you use the <code>Text</code> component. It supports nesting, styling, and touch handling, and is essential for any kind of text representation in your app.</li><li><strong>Image</strong>: The <code>Image</code> component is used to display different types of images, including network images, static resources, temporary local images, and images from the local disk, like the camera roll.</li><li><strong>ScrollView</strong>: For creating a scrolling view within your app, <code>ScrollView</code> is used. It can host multiple components and views that can be scrolled using gestures.</li><li><strong>TextInput</strong>: To capture user input, <code>TextInput</code> is a component that allows users to input text into an app. It's essential for forms, search bars, and anything else that requires the user to enter text.</li><li><strong>Button</strong>: A simple touchable button is provided by the <code>Button</code> component. It comes with a few customization options, like title, color, and a simple handler for user taps.</li></ol><h3 id="working-with-apis">Working with APIs</h3><p>React Native enables developers to integrate a wide range of APIs to extend the functionality of apps. Here’s how you can work with APIs:</p><p><strong>Fetching Data</strong>: To fetch data from the web, you can use the <code>fetch</code> function, which works similarly to how it works in modern web browsers. Here is a basic example of using fetch to retrieve data from an API:</p><pre><code class="language-javascript">fetch('https://api.example.com/data')
  .then((response) =&gt; response.json())
  .then((json) =&gt; console.log(json))
  .catch((error) =&gt; console.error(error));</code></pre><p><strong>Integrating Third-Party APIs</strong>: React Native allows the integration of third-party APIs for added functionalities such as Google Maps or social media services. This usually involves installing a third-party library and linking it to your React Native project​​.</p><p><strong>Using Native Modules</strong>: Sometimes, you might need functionality that is not available in JavaScript APIs. In such cases, React Native allows you to create your own native modules in Java (for Android) or Objective-C/Swift (for iOS)​ (<a href="https://blog.risingstack.com/a-definitive-react-native-guide-for-react-developers/" rel="noreferrer">RisingStack Engineering</a>)​.</p><p>Understanding and utilizing these components and APIs will allow you to create versatile and robust applications. React Native's component-based structure also helps in managing the codebase and reusing code across different parts of your app, making the development process efficient and modular.</p><h2 id="advanced-features-and-techniques-for-react-native-app">Advanced Features and Techniques for React Native app</h2><p>As you become more familiar with the basics of React Native, you can begin to explore more advanced features and techniques that will allow you to build sophisticated and high-performing mobile applications. This part of the article covers navigation and routing, state management, and performance optimization in React Native.</p><h4 id="navigation-and-routing">Navigation and Routing</h4><p>Handling navigation is a fundamental aspect of mobile development. React Native does not include a built-in library for navigation, so developers typically use third-party solutions such as React Navigation. 
Here's how you can set up and use React Navigation in your app:</p><ol><li><strong>Installation</strong>: Install React Navigation via npm or yarn:</li></ol><pre><code class="language-bash">npm install @react-navigation/native</code></pre><pre><code class="language-bash">yarn add @react-navigation/native</code></pre><ol start="2"><li>You also need to install dependencies that link into the native platform, <code>react-native-screens</code> and <code>react-native-safe-area-context</code>.</li><li><strong>Setting Up a Navigator</strong>: React Navigation offers various types of navigators, such as stack, tab, and drawer navigators. Here's a simple example of setting up a stack navigator:</li></ol><pre><code class="language-javascript">import { createStackNavigator } from '@react-navigation/stack';
const Stack = createStackNavigator();

function MyStack() {
  return (
    &lt;Stack.Navigator&gt;
      &lt;Stack.Screen name="Home" component={HomeScreen} /&gt;
      &lt;Stack.Screen name="Details" component={DetailsScreen} /&gt;
    &lt;/Stack.Navigator&gt;
  );
}</code></pre><p>This setup enables navigation between the Home and Details screens (<a href="https://blog.risingstack.com/a-definitive-react-native-guide-for-react-developers/" rel="noreferrer">RisingStack Engineering</a>).</p><h4 id="state-management">State Management</h4><p>For complex applications, managing state can become cumbersome with React's built-in component state alone. Many React Native developers turn to Redux or to React's built-in Context API to make state management more manageable:</p><ol><li><strong>Using Redux</strong>: Redux provides a predictable state container for JavaScript apps, helping you write applications that behave consistently. It's especially useful in large-scale applications where state management can get complicated (<a href="https://blog.risingstack.com/a-definitive-react-native-guide-for-react-developers/" rel="noreferrer">RisingStack Engineering</a>).</li><li><strong>Context API</strong>: For simpler or medium-scale applications, the Context API might be sufficient. It allows you to share values between components without having to explicitly pass a prop through every level of the tree.</li></ol><h3 id="performance-optimization">Performance Optimization</h3><p>Optimizing the performance of a React Native app is crucial for maintaining a smooth and responsive user experience. Here are some strategies to enhance your app’s performance:</p><ol><li><strong>Reduce Render Cycles</strong>: Use React’s <code>shouldComponentUpdate</code>, <code>React.memo</code>, or <code>PureComponent</code> to prevent unnecessary re-renders of your components.</li><li><strong>Optimize Images</strong>: Large images can significantly impact performance. Resize and <a href="https://picsart.com/photo-editor/" rel="noreferrer">edit photos</a> down to the smallest size needed, convert them to appropriate image formats, and consider lazy loading images only when necessary. 
(<a href="https://www.freecodecamp.org/news/react-native-guide/" rel="noreferrer">FreeCodeCamp</a>) (<a href="https://reactnativecentral.com/react-native-essentials-flexbox-guide/" rel="noreferrer">React Native Central</a>).</li><li><strong>Use Native Modules Wisely</strong>: When you need more performance, move specific parts of your application, such as heavy computation tasks or animation, into native modules (<a href="https://www.reactnative.express/" rel="noreferrer">React Native Express</a>).</li></ol><p>Understanding these advanced features and techniques will improve not only the quality of your applications but also your efficiency as a developer. As you incorporate these practices, you’ll find that your React Native apps perform better and offer a more seamless user experience.</p><h2 id="building-and-deploying-your-react-native-app">Building and Deploying Your React Native App</h2><p>After developing your React Native app, the next crucial steps are building it for production and deploying it to app stores. This process involves several important stages, from ensuring your app meets platform-specific guidelines to actual publication in the App Store and Google Play Store. Here's a comprehensive guide to building and deploying your React Native application.</p><h4 id="building-for-android-and-ios">Building for Android and iOS</h4><p>Building a React Native app for Android and iOS involves different processes due to the distinct nature of each platform. Here are the essential steps for each:</p><h3 id="android">Android:</h3><ul><li><strong>Generating a Signed APK</strong>: Android requires that all apps be digitally signed with a certificate before they can be installed. 
To generate a signed APK, you need to create a keystore file, then configure your app’s <code>gradle</code> files to include the keystore details.</li><li><strong>Running the Build</strong>: Use the following command to create an APK file:</li></ul><pre><code class="language-bash">cd android &amp;&amp; ./gradlew assembleRelease</code></pre><p>This command compiles the release version of your app for Android, ready to be distributed on the Play Store (<a href="https://reactnative.dev/docs/environment-setup" rel="noreferrer">ReactNativeLearn</a>).</p><h3 id="ios">iOS:</h3><ul><li><strong>Xcode Build Settings</strong>: Open your app’s project file in Xcode, and ensure all build settings are correctly configured for the target iOS devices. This includes setting the device orientations, version number, and build identifier.</li><li><strong>Archiving the App</strong>: Use Xcode to archive the iOS app. This process prepares your app for distribution, either through the App Store or via other means like ad-hoc distribution.</li><li><strong>Export the IPA</strong>: Once archived, you can export the packaged IPA file from Xcode, which is ready for submission to the App Store (<a href="https://reactnative.dev/docs/environment-setup" rel="noreferrer">ReactNativeLearn</a>).</li></ul><h3 id="deployment">Deployment</h3><p>Deploying your app to the App Store and Google Play requires following specific guidelines provided by each platform:</p><p><strong>Google Play Store</strong>:</p><ul><li><strong>Create a Developer Account</strong>: You will need a Google Developer Account to submit apps to the Google Play Store. 
This involves a one-time registration fee.</li><li><strong>Prepare Store Listing</strong>: This includes details like app title, description, screenshots, and privacy policies.</li><li><strong>Upload APK and Publish</strong>: Once your APK is ready and all listing details are set, upload the APK through the Google Play Console and submit it for review​ (<a href="https://reactnative.dev/docs/environment-setup" rel="noreferrer">ReactNativeLearn</a>)​​ (<a href="https://blog.risingstack.com/a-definitive-react-native-guide-for-react-developers/" rel="noreferrer">RisingStack Engineering</a>)​.</li></ul><p><strong>Apple App Store</strong>:</p><ul><li><strong>Apple Developer Program</strong>: Enrollment in the Apple Developer Program is necessary to submit apps to the App Store and requires an annual fee.</li><li><strong>Prepare App Store Listing</strong>: Similar to Google Play, you’ll need to provide your app’s metadata, screenshots, and a preview video.</li><li><strong>Submit for Review</strong>: Using Xcode, submit your app for review. Apple’s review process can take from a few days to a couple of weeks, depending on the app’s complexity and adherence to guidelines​ (<a href="https://reactnative.dev/docs/environment-setup" rel="noreferrer">ReactNativeLearn</a>)​​ (<a href="https://blog.risingstack.com/a-definitive-react-native-guide-for-react-developers/" rel="noreferrer">RisingStack Engineering</a>)​.</li></ul><h3 id="maintaining-and-updating-your-app">Maintaining and Updating Your App</h3><p>After your app is live, maintaining and regularly updating it is crucial to keep it relevant and functional. Monitor user feedback and crash reports to address any issues and regularly update the app with improvements and new features. 
Both the Google Play Store and Apple App Store provide tools to analyze app performance and user engagement, which can help guide your updates​ (<a href="https://www.freecodecamp.org/news/react-native-guide/" rel="noreferrer">FreeCodeCamp</a>)​​​.</p><h3 id="examples-of-applications-built-with-react-native">Examples of applications built with React Native</h3><p>React Native, a highly favored technology among developers globally, powers numerous cross-platform applications. Below are some prominent examples:</p><table>
<thead>
<tr>
<th>Application</th>
<th>App Store Link</th>
<th>Google Play Link</th>
</tr>
</thead>
<tbody>
<tr>
<td>Facebook</td>
<td><a href="https://apps.apple.com/us/app/facebook/id284882215">Facebook on the App Store</a></td>
<td><a href="https://play.google.com/store/apps/details?id=com.facebook.katana">Facebook on Google Play</a></td>
</tr>
<tr>
<td>Pinterest</td>
<td><a href="https://apps.apple.com/us/app/pinterest/id429047995">Pinterest on the App Store</a></td>
<td><a href="https://play.google.com/store/apps/details?id=com.pinterest">Pinterest on Google Play</a></td>
</tr>
<tr>
<td>Oculus</td>
<td><a href="https://apps.apple.com/us/app/oculus/id1366478176">Oculus on the App Store</a></td>
<td><a href="https://play.google.com/store/apps/details?id=com.oculus.twilight">Oculus on Google Play</a></td>
</tr>
<tr>
<td>Salesforce</td>
<td><a href="https://apps.apple.com/us/app/salesforce/id404249815">Salesforce on the App Store</a></td>
<td><a href="https://play.google.com/store/apps/details?id=com.salesforce.chatter">Salesforce on Google Play</a></td>
</tr>
<tr>
<td>Airbnb</td>
<td><a href="https://apps.apple.com/us/app/airbnb/id401626263">Airbnb on the App Store</a></td>
<td><a href="https://play.google.com/store/apps/details?id=com.airbnb.android">Airbnb on Google Play</a></td>
</tr>
<tr>
<td>Coinbase</td>
<td><a href="https://apps.apple.com/us/app/coinbase-buy-sell-bitcoin/id886427730">Coinbase on the App Store</a></td>
<td><a href="https://play.google.com/store/apps/details?id=com.coinbase.android">Coinbase on Google Play</a></td>
</tr>
<tr>
<td>Shopify</td>
<td><a href="https://apps.apple.com/us/app/shopify-point-of-sale-pos/id507131708">Shopify on the App Store</a></td>
<td><a href="https://play.google.com/store/apps/details?id=com.shopify.mpos">Shopify on Google Play</a></td>
</tr>
</tbody>
</table>
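<p>The keystore configuration mentioned in the Android build section above is the step most often misconfigured before release. As a reference, here is a minimal sketch of a release signing configuration in <code>android/app/build.gradle</code>; the keystore filename, key alias, and the <code>MYAPP_RELEASE_*</code> Gradle properties are placeholders you would define yourself (for example in <code>android/gradle.properties</code>):</p>

```groovy
// android/app/build.gradle -- sketch only; the keystore name, alias, and the
// MYAPP_RELEASE_* properties are placeholders defined in gradle.properties.
android {
    signingConfigs {
        release {
            storeFile file('my-release-key.keystore')  // placeholder keystore file
            storePassword MYAPP_RELEASE_STORE_PASSWORD
            keyAlias 'my-key-alias'                    // placeholder key alias
            keyPassword MYAPP_RELEASE_KEY_PASSWORD
        }
    }
    buildTypes {
        release {
            // Sign release builds with the release key instead of the debug key
            signingConfig signingConfigs.release
        }
    }
}
```

<p>With this in place, <code>./gradlew assembleRelease</code> produces an APK signed with your release key, ready for upload to the Play Console.</p>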
<p>By carefully following these steps, you can ensure that your React Native app is well-prepared for release, helping to facilitate a smooth approval process on both the Google Play Store and Apple App Store. Successful deployment is not just about building a functional app but also about crafting an engaging store presence and maintaining a strong update cycle.</p><h3 id="community-and-resources">Community and Resources</h3><p>In the dynamic world of software development, continuous learning and community engagement are key to staying updated with the latest trends and advancements. React Native, being a widely-used and rapidly evolving framework, has a robust ecosystem of resources and a supportive community. This section will guide you through the best ways to engage with the React Native community and highlight the most valuable resources for further learning and growth.</p><h4 id="community-support">Community Support</h4><p>The React Native community is active and welcoming, offering several platforms for developers to interact, share knowledge, and solve problems together:</p><ol><li><strong>GitHub</strong>: The React Native repository on GitHub is not just a place to access its source code. It's also a hub where developers report issues, suggest features, and contribute to the project. Engaging here allows you to directly influence the development of React Native​.</li><li><strong>React Native Community on Discord and Reddit</strong>: These platforms host vibrant communities where you can ask questions, share your projects, and get feedback from fellow developers. They are ideal for real-time communication and getting to know other developers in the field​​.</li><li><strong>Conferences and Meetups</strong>: Attend React Native-specific events, conferences, and meetups to learn from experienced speakers, network with professionals, and discover new opportunities. 
Events like Chain React and React Native EU are popular among developers​ (<a href="https://reactnative.dev/docs/getting-started" rel="noreferrer">ReactNativeLearn</a>)​​ (<a href="https://blog.risingstack.com/a-definitive-react-native-guide-for-react-developers/" rel="noreferrer">RisingStack Engineering</a>)​.</li></ol><h4 id="learning-resources">Learning Resources</h4><p>To deepen your knowledge and skills in React Native, here are some of the top resources:</p><ol><li><strong>Official Documentation</strong>: The React Native <a href="https://reactnative.dev/docs/getting-started">official documentation</a> is the most authoritative resource for learning about the framework. It is comprehensive and regularly updated with the latest features and best practices​ (<a href="https://reactnative.dev/docs/getting-started" rel="noreferrer">ReactNativeLearn</a>)​.</li><li><strong>Online Courses and Tutorials</strong>: Platforms like Coursera, Udemy, and freeCodeCamp offer a range of tutorials and courses that cover everything from basic to advanced React Native development. These resources are great for structured learning and often include hands-on projects to practice what you learn​ ​​(<a href="https://www.reactnative.express/" rel="noreferrer">React Native Express</a>)​.</li><li><strong>Books and Blogs</strong>: Consider reading books like "Learning React Native" by Bonnie Eisenman or blogs from major React contributors. These can provide deeper insights into the framework and its use in real-world applications​​​ (<a href="https://blog.logrocket.com/react-native-track-player-complete-guide/" rel="noreferrer">LogRocket Blog</a>)​.</li><li><strong>Podcasts and Videos</strong>: Listening to podcasts like "React Native Radio" and watching tutorial videos on YouTube are excellent ways to stay updated and learn new concepts in a more digestible format. 
They can be particularly useful for visual learners and those who prefer audio-visual content​ (<a href="https://www.freecodecamp.org/news/react-native-guide/" rel="noreferrer">FreeCodeCamp</a>)​​ (<a href="https://reactnativecentral.com/react-native-essentials-flexbox-guide/" rel="noreferrer">React Native Central</a>)​.</li></ol><p>By engaging with the community and utilizing a variety of learning resources, you can stay at the forefront of React Native development. This will not only enhance your technical skills but also keep you inspired and motivated as you see what others in the community are achieving. Whether you're troubleshooting a complex problem or looking for the next big idea, the React Native community and the wealth of available resources will support you on your development journey.</p><h2 id="frequently-asked-questions-about-react-native">Frequently Asked Questions About React Native</h2><p>This section addresses some of the most commonly asked questions about React Native, providing clear and concise answers that are essential for both beginners and experienced developers.</p><h4 id="what-is-the-difference-between-react-and-react-native">What is the difference between React and React Native?</h4><p>React and React Native are both open-source libraries developed by Facebook, but they serve different purposes. React, also known as React.js, is a JavaScript library for building user interfaces primarily for web applications. It follows the component-based architecture, which allows developers to create reusable UI components.</p><p>React Native, on the other hand, extends React's architecture to build native mobile apps for iOS and Android. 
It uses the same design as React, letting you compose a rich mobile UI from declarative components using JavaScript and React, but it renders using native components instead of web components​ (<a href="https://reactnative.dev/docs/getting-started" rel="noreferrer">ReactNativeLearn</a>)​​ (<a href="https://www.reactnative.express/" rel="noreferrer">React Native Express</a>)​.</p><h4 id="how-do-i-choose-between-react-native-and-other-mobile-development-frameworks-like-flutter-or-xamarin">How do I choose between React Native and other mobile development frameworks like Flutter or Xamarin?</h4><p>Choosing between React Native and other frameworks like Flutter or Xamarin depends on several factors:</p><ul><li><strong>Project requirements</strong>: Consider the specific needs of your project. For instance, if you require a high degree of native functionality, React Native might be preferable because it integrates well with native components.</li><li><strong>Developer expertise</strong>: Your team's familiarity with the programming languages and frameworks plays a critical role. React Native uses JavaScript, which is widely used by web developers, whereas Flutter uses Dart, a less common language.</li><li><strong>Community and support</strong>: React Native has a large and active community, which can be beneficial for getting support and finding resources. 
Evaluate the community size and support for each framework to ensure long-term viability​ (<a href="https://www.freecodecamp.org/news/react-native-guide/" rel="noreferrer">FreeCodeCamp</a>)​​ (<a href="https://www.coursera.org/articles/what-is-react-native" rel="noreferrer">Coursera</a>)​.</li></ul><h4 id="what-are-the-prerequisites-for-learning-react-native">What are the prerequisites for learning React Native?</h4><p>To effectively learn React Native, you should have a basic understanding of:</p><ul><li><strong>JavaScript</strong>: Since React Native is based on JavaScript, having a solid grasp of JavaScript fundamentals is essential.</li><li><strong>React</strong>: Knowledge of React principles such as JSX, components, state, and props is crucial because React Native builds on these concepts.</li><li><strong>HTML/CSS</strong>: Although not mandatory, understanding HTML and CSS can help you grasp how to structure and style your applications​ (<a href="https://reactnative.dev/docs/getting-started" rel="noreferrer">ReactNativeLearn</a>)​​ (<a href="https://www.reactnative.express/" rel="noreferrer">React Native Express</a>)​.</li></ul><h4 id="can-i-use-react-native-to-develop-applications-for-platforms-other-than-ios-and-android">Can I use React Native to develop applications for platforms other than iOS and Android?</h4><p>Yes, while React Native is primarily used for developing iOS and Android applications, it can also be used to target other platforms. There are extensions available for React Native that allow you to build applications for the web, Windows, and macOS. 
Libraries like React Native Web, React Native Windows, and others enable developers to extend their applications to these platforms​ (<a href="https://reactnativecentral.com/react-native-essentials-flexbox-guide/" rel="noreferrer">React Native Central</a>)​​ (<a href="https://blog.logrocket.com/react-native-track-player-complete-guide/" rel="noreferrer">LogRocket Blog</a>)​.</p><p>These questions cover fundamental aspects of React Native, providing a solid foundation for understanding how to approach building applications with this powerful framework. For more detailed information or specific queries, the React Native community and official documentation are excellent resources.</p><h3 id="conclusion">Conclusion</h3><p>React Native has emerged as a powerful and versatile framework for developing mobile applications, enabling developers to create high-quality, native apps using JavaScript and React. Throughout this article, we've explored various aspects of React Native, from setting up the development environment and understanding basic components, to leveraging advanced features and optimizing app performance. We also discussed the vibrant community and extensive resources available that support learning and development in React Native.</p><p>As mobile technology continues to evolve, React Native provides a scalable solution for cross-platform development that meets the needs of today's fast-paced, app-driven markets. Whether you are a seasoned developer or just starting out, React Native offers the tools and flexibility to bring your mobile app ideas to life efficiently.</p><p>With its strong community support and continuous updates, React Native ensures that developers have access to the latest in mobile technology and best practices. 
By engaging with the community and utilizing the wealth of learning resources discussed, developers can enhance their skills and stay ahead in the competitive landscape of <a href="https://www.aalpha.net/services/mobile-app-development/">mobile app development</a>.</p><p>React Native's ability to integrate seamlessly with both Android and iOS platforms while providing a near-native user experience makes it an invaluable tool in the arsenal of any mobile developer looking to efficiently create versatile and robust mobile applications.</p>]]></content:encoded></item><item><title><![CDATA[How to Implement Screen Share in Flutter Video Call App for Android?]]></title><description><![CDATA[Boost your Flutter video call app with Screen Share functionality using VideoSDK for Android. Enhance user experience and engagement.]]></description><link>https://www.videosdk.live/blog/implement-screen-share-flutter-video-call-app-for-android</link><guid isPermaLink="false">661ccbd72a88c204ca9d3ccb</guid><category><![CDATA[Flutter]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 08 Jan 2025 09:08:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share--Flutter-Android.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share--Flutter-Android.png" alt="How to Implement Screen Share in Flutter Video Call App for Android?"/><p>Implement Screen Share functionality into your <a href="https://www.videosdk.live/blog/video-calling-in-flutter">Flutter video call app for Android</a> to enhance user experience and collaboration. With Screen Share, users can seamlessly share their device screens during video calls, making it easier to present documents, slideshows, or demonstrate apps. 
By integrating this feature, your app becomes a versatile tool for remote work, online learning, and virtual meetings. With Flutter's flexibility and powerful capabilities, you can quickly develop and deploy this feature, keeping your app at the forefront of modern communication technology.</p><p><strong>Benefits of implementing Screen Share in your Flutter video call app for Android:</strong></p><ol><li><strong>Enhanced Collaboration</strong>: Users can collaborate effectively by sharing their screens, allowing for real-time discussions and feedback.</li><li><strong>Efficient Communication</strong>: Screen Share enables clearer communication by visually demonstrating ideas, documents, or presentations.</li><li><strong>Remote Work Facilitation</strong>: Facilitates remote work by enabling virtual meetings where team members can share progress, review documents, and provide guidance.</li><li><strong>Technical Support</strong>: Users can share their screens to seek technical assistance, allowing support staff to diagnose issues or guide troubleshooting steps more efficiently.</li><li><strong>Ease of Use</strong>: The intuitive user interface makes it easy for users to initiate screen sharing, enhancing user experience.</li></ol><p><strong>Use Cases Screen Share in your Flutter video call app:</strong></p><ol><li><strong>Online Meetings</strong>: Participants can share their screens to discuss reports, and presentations, or collaborate on projects.</li><li><strong>Remote Training</strong>: Trainers can demonstrate software usage or guide trainees through processes by sharing their screens.</li><li><strong>Virtual Classrooms</strong>: Educators can share educational content, conduct lectures, or provide one-on-one assistance to students.</li><li><strong>Software Demos</strong>: Developers or product managers can provide software demonstrations to stakeholders or users.</li><li><strong>Collaborative Editing</strong>: Users can collectively edit documents or work on 
projects by sharing their screens.</li></ol><p>Elevate your app's capabilities and provide users with a truly immersive and productive video calling experience with screen sharing with VideoSDK.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of Screen Share, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/authentication-and-tokens#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites%E2%80%8B">Prerequisites<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#prerequisites">​</a></h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>Video SDK Developer Account (if you do not have one, follow <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>The basic understanding of Flutter.</li><li><strong><a href="https://pub.dev/packages/videosdk" rel="noopener noreferrer">Flutter VideoSDK</a></strong></li><li>Have Flutter installed on your 
device.</li></ul><h2 id="install-video-sdk%E2%80%8B">Install Video SDK<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#install-video-sdk">​</a></h2><p>Install the VideoSDK package with the Flutter command below. Make sure you are in your Flutter app directory before running it.</p><pre><code class="language-bash">$ flutter pub add videosdk

# Run this command to add the http package, used for the network call that creates a roomId
$ flutter pub add http</code></pre><h3 id="videosdk-compatibility">VideoSDK Compatibility</h3><!--kg-card-begin: html--><table style="border: 1px solid black;">
<thead>
<tr>
<th style="border:1px solid white;">Android and iOS app</th>
<th style="border:1px solid white;">Web</th>
<th style="border:1px solid white;">Desktop app</th>
<th style="border:1px solid white;">Safari browser</th>
</tr>
</thead>
<tbody>
<tr>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ❌ </center></td>
</tr>
</tbody>
</table>
<!--kg-card-end: html--><h3 id="structure-of-the-project%E2%80%8B">Structure of the project<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#structure-of-the-project">​</a></h3><p>Your project structure should look like this.</p><pre><code class="language-dart">    root
    ├── android
    ├── ios
    ├── lib
         ├── api_call.dart
         ├── join_screen.dart
         ├── main.dart
         ├── meeting_controls.dart
         ├── meeting_screen.dart
         ├── participant_tile.dart</code></pre><p>We are going to create flutter widgets (JoinScreen, MeetingScreen, MeetingControls, and ParticipantTile).</p><h3 id="app-structure%E2%80%8B">App Structure<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#app-structure">​</a></h3><p>The app widget will contain <code>JoinScreen</code> and <code>MeetingScreen</code> widget. <code>MeetingScreen</code> will have <code>MeetingControls</code> and <code>ParticipantTile</code> widget.</p><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/flutter_quick_start_arch.png" class="kg-image" alt="How to Implement Screen Share in Flutter Video Call App for Android?" loading="lazy"/></figure><h3 id="configure-project">Configure Project</h3><h4 id="for-android%E2%80%8B"><br>For Android<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-android">​</a></br></h4><ul><li>Update <code>/android/app/src/main/AndroidManifest.xml</code> for the permissions we will be using to implement the audio and video features.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">&lt;uses-feature android:name="android.hardware.camera" /&gt;
&lt;uses-feature android:name="android.hardware.camera.autofocus" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET"/&gt;
&lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
&lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;</code></pre><figcaption>AndroidManifest.xml</figcaption></figure><ul><li>You will also need to set your build settings to Java 8, because the official WebRTC jar now uses static methods in the <code>EglBase</code> interface. Just add this to your app-level <code>/android/app/build.gradle</code>.</li></ul><pre><code class="language-groovy">android {
    //...
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}</code></pre><ul><li>If necessary, in the same <code>build.gradle</code> you will need to increase <code>minSdkVersion</code> of <code>defaultConfig</code> up to <code>23</code> (currently default Flutter generator set to <code>16</code>).</li><li>If necessary, in the same <code>build.gradle</code> you will need to increase <code>compileSdkVersion</code> and <code>targetSdkVersion</code> up to <code>33</code> (currently, the default Flutter generator set to <code>30</code>).</li></ul><h2 id="essential-steps-to-implement-video-calling-functionality">Essential Steps to Implement Video Calling Functionality</h2><p>Before diving into the specifics of screen sharing implementation, it's crucial to ensure you have videoSDK properly installed and configured within your Flutter project. Refer to <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start">VideoSDK's documentation</a> for detailed installation instructions. Once you have a functional video calling setup, you can proceed with adding the screen-sharing feature.</p><h3 id="step-1-get-started-with-apicalldart%E2%80%8B">Step 1: Get started with <code>api_call.dart</code><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-api_calldart">​</a></h3><p>Before jumping to anything else, you will write a function to generate a unique meetingId. You will require an authentication token, you can generate it either by using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or by generating it from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for development.</p><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'dart:convert';
import 'package:http/http.dart' as http;

//Auth token we will use to generate a meeting and connect to it
String token = "&lt;Generated-from-dashboard&gt;";

// API call to create meeting
Future&lt;String&gt; createMeeting() async {
  final http.Response httpResponse = await http.post(
    Uri.parse("https://api.videosdk.live/v2/rooms"),
    headers: {'Authorization': token},
  );

  // Extract the roomId from the response body
  return json.decode(httpResponse.body)['roomId'];
}</code></pre><figcaption>api_call.dart</figcaption></figure><h3 id="step-2-creating-the-joinscreen%E2%80%8B">Step 2: Creating the JoinScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-2--creating-the-joinscreen">​</a></h3><p>Let's create <code>join_screen.dart</code> file in <code>lib</code> directory and create JoinScreen <code>StatelessWidget</code>.</p><p>The JoinScreen will consist of:</p><ul><li><strong>Create Meeting Button</strong>: This button will create a new meeting for you.</li><li><strong>Meeting ID TextField</strong>: This text field will contain the meeting ID, you want to join.</li><li><strong>Join Meeting Button</strong>: This button will join the meeting, which you have provided.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'api_call.dart';
import 'meeting_screen.dart';

class JoinScreen extends StatelessWidget {
  final _meetingIdController = TextEditingController();

  JoinScreen({super.key});

  void onCreateButtonPressed(BuildContext context) async {
    // Call the API to create a meeting, then navigate to MeetingScreen
    // with the new meetingId and the auth token.
    final meetingId = await createMeeting();
    if (!context.mounted) return;
    Navigator.of(context).push(
      MaterialPageRoute(
        builder: (context) =&gt; MeetingScreen(
          meetingId: meetingId,
          token: token,
        ),
      ),
    );
  }

  void onJoinButtonPressed(BuildContext context) {
    String meetingId = _meetingIdController.text;
    final re = RegExp(r"\w{4}-\w{4}-\w{4}");
    // Check that the meeting id is non-empty and matches the xxxx-xxxx-xxxx
    // format; if it is valid, navigate to MeetingScreen with the meetingId and token.
    if (meetingId.isNotEmpty &amp;&amp; re.hasMatch(meetingId)) {
      _meetingIdController.clear();
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; MeetingScreen(
            meetingId: meetingId,
            token: token,
          ),
        ),
      );
    } else {
      ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
        content: Text("Please enter a valid meeting id"),
      ));
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('VideoSDK QuickStart'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(12.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            ElevatedButton(
              onPressed: () =&gt; onCreateButtonPressed(context),
              child: const Text('Create Meeting'),
            ),
            Container(
              margin: const EdgeInsets.fromLTRB(0, 8.0, 0, 8.0),
              child: TextField(
                decoration: const InputDecoration(
                  hintText: 'Meeting Id',
                  border: OutlineInputBorder(),
                ),
                controller: _meetingIdController,
              ),
            ),
            ElevatedButton(
              onPressed: () =&gt; onJoinButtonPressed(context),
              child: const Text('Join Meeting'),
            ),
          ],
        ),
      ),
    );
  }
}</code></pre><figcaption>join_screen.dart</figcaption></figure><ul><li>Update the home screen of the app in the <code>main.dart</code></li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'join_screen.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'VideoSDK QuickStart',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: JoinScreen(),
    );
  }
}</code></pre><figcaption>main.dart</figcaption></figure><h3 id="step-3-creating-the-meetingcontrols%E2%80%8B">Step 3: Creating the MeetingControls<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-3--creating-the-meetingcontrols">​</a></h3><p>Let's create <code>meeting_controls.dart</code> file and create MeetingControls <code>StatelessWidget</code>.</p><p>The MeetingControls will consist of:</p><ul><li><strong>Leave Button</strong>: This button will leave the meeting.</li><li><strong>Toggle Mic Button</strong>: This button will unmute or mute the mic.</li><li><strong>Toggle Camera Button</strong>: This button will enable or disable the camera.</li></ul><p>MeetingControls will accept 3 functions in the constructor</p><ul><li><strong><code>onLeaveButtonPressed</code></strong>: invoked when the Leave button pressed</li><li><strong><code>onToggleMicButtonPressed</code></strong>: invoked when the toggle mic button pressed</li><li><strong><code>onToggleCameraButtonPressed</code></strong>: invoked when the toggle Camera button pressed</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';

class MeetingControls extends StatelessWidget {
  final void Function() onToggleMicButtonPressed;
  final void Function() onToggleCameraButtonPressed;
  final void Function() onLeaveButtonPressed;

  const MeetingControls(
      {super.key,
      required this.onToggleMicButtonPressed,
      required this.onToggleCameraButtonPressed,
      required this.onLeaveButtonPressed});

  @override
  Widget build(BuildContext context) {
    return Row(
      mainAxisAlignment: MainAxisAlignment.spaceEvenly,
      children: [
        ElevatedButton(
            onPressed: onLeaveButtonPressed, child: const Text('Leave')),
        ElevatedButton(
            onPressed: onToggleMicButtonPressed, child: const Text('Toggle Mic')),
        ElevatedButton(
            onPressed: onToggleCameraButtonPressed,
            child: const Text('Toggle WebCam')),
      ],
    );
  }
}</code></pre><figcaption>meeting_controls.dart</figcaption></figure><h3 id="step-4-creating-participanttile%E2%80%8B">Step 4: Creating ParticipantTile<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-4--creating-participanttile">​</a></h3><p>Let's create <code>participant_tile.dart</code> file and create ParticipantTile <code>StatefulWidget</code>.</p><p>The ParticipantTile will consist of:</p><ul><li><strong>RTCVideoView</strong>: This will show the participant's video stream.</li></ul><p>ParticipantTile will accept <code>Participant</code> in constructor</p><ul><li><strong>participant:</strong> participant of the meeting.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ParticipantTile extends StatefulWidget {
  final Participant participant;
  const ParticipantTile({super.key, required this.participant});

  @override
  State&lt;ParticipantTile&gt; createState() =&gt; _ParticipantTileState();
}

class _ParticipantTileState extends State&lt;ParticipantTile&gt; {
  Stream? videoStream;

  @override
  void initState() {
    // initial video stream for the participant
    widget.participant.streams.forEach((key, Stream stream) {
      setState(() {
        if (stream.kind == 'video') {
          videoStream = stream;
        }
      });
    });
    _initStreamListeners();
    super.initState();
  }

  _initStreamListeners() {
    widget.participant.on(Events.streamEnabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = stream);
      }
    });

    widget.participant.on(Events.streamDisabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = null);
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(8.0),
      child: videoStream != null
          ? RTCVideoView(
              videoStream?.renderer as RTCVideoRenderer,
              objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
            )
          : Container(
              color: Colors.grey.shade800,
              child: const Center(
                child: Icon(
                  Icons.person,
                  size: 100,
                ),
              ),
            ),
    );
  }
}</code></pre><figcaption>participant_tile.dart</figcaption></figure><h3 id="step-5-creating-the-meetingscreen%E2%80%8B">Step 5: Creating the MeetingScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-5--creating-the-meetingscreen">​</a></h3><p>Let's create <code>meeting_screen.dart</code> file and create MeetingScreen <code>StatefulWidget</code>.</p><p>MeetingScreen will accept meetingId and token in the constructor.</p><ul><li><strong>meetingID:</strong> meetingId, you want to join</li><li><strong>token</strong>: VideoSDK Auth token.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import './participant_tile.dart';
import './meeting_controls.dart';

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;

  const MeetingScreen(
      {super.key, required this.meetingId, required this.token});

  @override
  State&lt;MeetingScreen&gt; createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;
  var micEnabled = true;
  var camEnabled = true;

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    // create room
    _room = VideoSDK.createRoom(
      roomId: widget.meetingId,
      token: widget.token,
      displayName: "John Doe",
      micEnabled: micEnabled,
      camEnabled: camEnabled
    );

    setMeetingEventListener();

    // Join room
    _room.join();

    super.initState();
  }

  // listening to meeting events
  void setMeetingEventListener() {
    _room.on(Events.roomJoined, () {
      setState(() {
        participants.putIfAbsent(
            _room.localParticipant.id, () =&gt; _room.localParticipant);
      });
    });

    _room.on(
      Events.participantJoined,
      (Participant participant) {
        setState(
          () =&gt; participants.putIfAbsent(participant.id, () =&gt; participant),
        );
      },
    );

    _room.on(Events.participantLeft, (String participantId) {
      if (participants.containsKey(participantId)) {
        setState(
          () =&gt; participants.remove(participantId),
        );
      }
    });

    _room.on(Events.roomLeft, () {
      participants.clear();
      Navigator.popUntil(context, ModalRoute.withName('/'));
    });
  }

  // Leave the room when the back button is pressed.
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }

  // Build the meeting UI: meeting id, participant grid, and controls.
  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              Text(widget.meetingId),
              // Render all participants
              Expanded(
                child: Padding(
                  padding: const EdgeInsets.all(8.0),
                  child: GridView.builder(
                    gridDelegate: const SliverGridDelegateWithFixedCrossAxisCount(
                      crossAxisCount: 2,
                      crossAxisSpacing: 10,
                      mainAxisSpacing: 10,
                      mainAxisExtent: 300,
                    ),
                    itemBuilder: (context, index) {
                      return ParticipantTile(
                        key: Key(participants.values.elementAt(index).id),
                          participant: participants.values.elementAt(index));
                    },
                    itemCount: participants.length,
                  ),
                ),
              ),
              MeetingControls(
                onToggleMicButtonPressed: () {
                  micEnabled ? _room.muteMic() : _room.unmuteMic();
                  micEnabled = !micEnabled;
                },
                onToggleCameraButtonPressed: () {
                  camEnabled ? _room.disableCam() : _room.enableCam();
                  camEnabled = !camEnabled;
                },
                onLeaveButtonPressed: () {
                  _room.leave();
                },
              ),
            ],
          ),
        ),
      ),
    );
  }
}</code></pre><figcaption>meeting_screen.dart</figcaption></figure><blockquote><strong>CAUTION</strong><br>If you get <code>webrtc/webrtc.h file not found</code> error at a runtime in iOS, then check the solution <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/known-issues#issue--1" rel="noopener noreferrer">here</a>.</br></blockquote><blockquote><strong>TIP</strong>:<br>You can checkout the complete quick start example <a href="https://github.com/videosdk-live/quickstart/tree/main/flutter-rtc">here</a>.</br></blockquote><h2 id="integrate-screen-share-feature">Integrate Screen Share Feature</h2><p>Screen sharing in a meeting is the process of sharing your device screen with other participants in the meeting. It allows everyone in the meeting to see exactly what you are seeing on your screen, which can be helpful for presentations, demonstrations, or collaborations.</p><h3 id="why-integrate-screen-sharing-with-videosdk">Why Integrate Screen Sharing with VideoSDK?</h3><p>Integrating screen sharing into your Flutter video calling app using VideoSDK presents several compelling reasons:</p><ol><li><strong>Simplified Development</strong>: With VideoSDK, the intricate tasks of screen capture and transmission are effortlessly managed, saving you valuable development time and resources.</li><li><strong>Native Functionality</strong>: Harnessing the power of the Android Media Projection API, VideoSDK ensures a seamless and efficient screen-sharing experience for your users, providing them with a native feel and optimal performance.</li><li><strong>Seamless Integration</strong>: The integration process with VideoSDK is intuitive and hassle-free, enabling you to seamlessly incorporate screen-sharing functionality into your existing video call features without extensive modifications.</li><li><strong>Robust Features</strong>: VideoSDK offers a robust screen-sharing solution equipped with essential features such as permission handling and in-app 
sharing capabilities, guaranteeing a reliable and user-friendly experience.</li></ol><p>By embracing screen sharing through VideoSDK, you elevate the functionality and appeal of your Flutter video-calling app, providing users with an enriched communication experience.</p><h3 id="enable-screen-share">Enable Screen Share</h3><ul><li>By using <code>enableScreenShare()</code> function of <code>Room</code> object, a local participant can share his/her screen or window with other participants.</li><li>The screen share stream of the participant can be accessed from the <code>streams</code> property of <code>Participant</code> object.</li></ul><h3 id="disable-screen-share">Disable Screen Share</h3><ul><li>By using <code>disableScreenShare()</code> function of <code>Room</code> object, a local participant can stop sharing his/her screen or window with other participants.</li></ul><h3 id="events-associated-with-screen-share%E2%80%8B">Events associated with Screen Share<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#events-associated-with-enablescreenshare">​</a></h3><!--kg-card-begin: markdown--><h4 id="events-associated-with-enable-screenshare%E2%80%8B">Events associated with Enable ScreenShare​</h4>
<!--kg-card-end: markdown--><p>Every Participant will receive a callback on <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/events#streamenabled"><code>Events.streamEnabled</code></a> of the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/introduction"><code>Participant</code></a> object, with the corresponding <code>Stream</code> object.</p><ul><li>Every Remote Participant will receive the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/events#presenterchanged"><code>Events.presenterChanged</code></a> callback on the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/introduction"><code>Room</code></a> object, with <code>presenterId</code> set to the id of the participant who started the screen share.</li></ul><!--kg-card-begin: markdown--><h4 id="events-associated-with-disable-screenshare%E2%80%8B">Events associated with Disable ScreenShare​</h4>
<!--kg-card-end: markdown--><ul><li>Every Participant will receive a callback on <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/events#streamdisabled"><code>Events.streamDisabled</code></a> of the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/introduction"><code>Participant</code></a> object, with the corresponding <code>Stream</code> object.</li><li>Every Remote Participant will receive the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/events#presenterchanged"><code>Events.presenterChanged</code></a> callback on the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/introduction"><code>Room</code></a> object, with <code>presenterId</code> set to <code>null</code>.</li></ul><h3 id="example%E2%80%8B">Example<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#rendering-screen-share-stream">​</a></h3><p>To render the screen share, you need the <code>participantId</code> of the participant who is presenting, which is available in the <code>presenterId</code> property of the <code>Room</code> object.</p><p>We will listen for <code>Events.presenterChanged</code> on the <code>Room</code> object to detect when another participant starts or stops a screen share, and for <code>Events.streamEnabled</code> and <code>Events.streamDisabled</code> on the <code>localParticipant</code> object to identify whether the local participant is presenting.</p><pre><code class="language-dart">import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import './participant_tile.dart';
import './screen_share_view.dart';

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;

  const MeetingScreen(
      {super.key, required this.meetingId, required this.token});

  @override
  State&lt;MeetingScreen&gt; createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;
  var micEnabled = true;
  var camEnabled = true;
  String? _presenterId;

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    // create room
    _room = VideoSDK.createRoom(
      roomId: widget.meetingId,
      token: widget.token,
      displayName: "John Doe",
      micEnabled: micEnabled,
      camEnabled: camEnabled
    );

    setMeetingEventListener();

    // Join room
    _room.join();

    super.initState();
  }

  // listening to meeting events
  void setMeetingEventListener() {
   // ...

   //Listening if remote participant starts presenting
    _room.on(Events.presenterChanged, (String? presenterId) {
      setState(() =&gt; _presenterId = presenterId);
    });

    //Listening if local participant starts presenting
    _room.localParticipant.on(Events.streamEnabled, (Stream stream) {
      if (stream.kind == "share") {
        setState(() =&gt; _presenterId = _room.localParticipant.id);
      }
    });

    _room.localParticipant.on(Events.streamDisabled, (Stream stream) {
      if (stream.kind == "share") {
        setState(() =&gt; _presenterId = null);
      }
    });
  }

  // On back button press, leave the room
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              Text(widget.meetingId),
              //render all participant
              // ...
              
              //we will render the screenshare view if the presenterId is not null
              if (_presenterId != null)
                ScreenShareView(
                  participant: _presenterId == _room.localParticipant.id
                      ? _room.localParticipant
                      : _room.participants[_presenterId],
                ),
                // MeetingControls
                // ....
              ElevatedButton(
                  onPressed: () {
                    if (_presenterId != null) {
                      _room.disableScreenShare();
                    } else {
                      _room.enableScreenShare();
                    }
                  },
                  child: const Text('Toggle Screen Share')),
            ],
          ),
        ),
      ),
    );
  }
}</code></pre><p>Now that we know if there is an active presenter, let us get the screen share stream from the <code>Participant</code> object and render it.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ScreenShareView extends StatelessWidget {
  final Participant? participant;

  ScreenShareView({super.key, required this.participant}) {
    // initialize the shareStream
    participant?.streams.forEach((key, value) {
      if (value.kind == "share") {
        shareStream = value;
      }
    });
  }
  Stream? shareStream;

  @override
  Widget build(BuildContext context) {
    return Container(
      color: Colors.grey.shade800,
      height: 300,
      width: 200,
      //show the screen share stream
      child: shareStream != null
          ? RTCVideoView(
              shareStream?.renderer as RTCVideoRenderer,
              objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
            )
          : null,
    );
  }
}</code></pre><h2 id="conclusion">Conclusion</h2><p>Integrating screen sharing into your Flutter video calling app using VideoSDK offers users a powerful tool to enhance communication and collaboration. This feature is beneficial in a range of scenarios, from business presentations to technical demonstrations, making your app a versatile and valuable platform. </p><p>Leveraging VideoSDK's seamless integration capabilities, the process of incorporating screen sharing becomes not only feasible but also streamlined, enabling you to devote more attention to crafting a remarkable user interface and experience.</p><p>Unlock VideoSDK's full potential and craft seamless video experiences effortlessly. <a href="https://app.videosdk.live/dashboard"><strong>Sign up</strong></a> today and receive 10,000 free minutes to elevate your video app to new heights.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Chat Feature in React JS Video Call App?]]></title><description><![CDATA[Learn to seamlessly integrate a chat feature into your React JS video call app. Enhance communication and collaboration with our step-by-step guide.]]></description><link>https://www.videosdk.live/blog/integrate-chat-feature-in-react-js</link><guid isPermaLink="false">6618e1e52a88c204ca9d08c8</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 07 Jan 2025 12:18:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-React.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-React.png" alt="How to Integrate Chat Feature in React JS Video Call App?"/><p>Integrating a chat feature into a React JS video call app enhances communication and collaboration. 
By seamlessly blending real-time video and text chat, users can discuss, share links, and exchange information concurrently, enriching the interactive experience. Utilizing React JS ensures smooth integration, leveraging its component-based architecture for streamlined development. Implementing features like message threading, emoji support, and real-time updates further enrich the chat experience, fostering engagement and productivity.</p><p>Benefits of Chat Feature:</p><ol><li><strong>Enhanced Communication:</strong> Seamlessly integrate text chat with video calls for richer communication experiences.</li><li><strong>Increased Collaboration:</strong> Users can share links, files, and information in real-time, fostering collaboration.</li><li><strong>Improved Productivity:</strong> Instant messaging alongside video calls accelerates decision-making and task completion.</li><li><strong>Flexible Interaction:</strong> Users can choose between video or text communication based on preferences or situational needs.</li><li><strong>Better User Experience:</strong> React JS ensures smooth integration and UI updates, enhancing the overall user experience.</li></ol><p>Build your modern ReactJS video-calling app with a chat feature and VideoSDK - unlock enhanced communication and collaboration possibilities</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the chat functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. 
This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial for authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (Don't have one? Follow the <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>Basic understanding of React.</li><li><a href="https://www.npmjs.com/package/@videosdk.live/react-sdk" rel="noopener noreferrer"><strong>React VideoSDK</strong></a></li><li>Make sure Node and NPM are installed on your device.</li><li>Basic understanding of Hooks (useState, useRef, useEffect)</li><li>React Context API (optional)</li></ul><p>Follow the steps below to create the environment necessary to add video calls to your app.
You can also find the code sample for <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">Quickstart here</a>.<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#create-new-react-app" rel="noopener noreferrer">​</a></p><p><strong>Create a new React App using the below command.</strong></p><pre><code class="language-js">$ npx create-react-app videosdk-rtc-react-app</code></pre><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk%E2%80%8B">⬇️ Install VideoSDK<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#install-videosdk">​</a></h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the Chat feature. Installing VideoSDK using NPM or Yarn will depend on the needs of your project.</p><ul><li>For NPM</li></ul><pre><code class="language-js">$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">$ yarn add "@videosdk.live/react-sdk"

//For the Participants Video
$ yarn add "react-player"</code></pre><p>You are going to use functional components to leverage React's reusable component architecture. There will be components for the join screen, each participant's video, and the meeting controls (mic, camera, leave) rendered over the video.</p><h3 id="app-architecture">App Architecture</h3>
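As a quick orientation, the components mentioned above fit together roughly like this (a non-runnable outline — props are elided, and <code>ChatView</code> is the piece this guide adds):

```jsx
<MeetingProvider config={/* meetingId, name, mic/webcam flags */} token={authToken}>
  <MeetingView>
    <Controls />          {/* leave / toggleMic / toggleWebcam */}
    <ParticipantView />   {/* one per participant: video + audio */}
    <ChatView />          {/* PubSub-based chat */}
  </MeetingView>
</MeetingProvider>
```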
<p>The App will contain a <code>MeetingView</code> component which includes a <code>ParticipantView</code> component which will render the participant's name, video, audio, etc. It will also have a <code>Controls</code> component that will allow the user to perform operations like leave and toggle media.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e.png" class="kg-image" alt="How to Integrate Chat Feature in React JS Video Call App?" loading="lazy" width="1356" height="780"/></figure><p>You will be working on the following files:</p><ul><li>API.js: Responsible for handling API calls such as generating unique meetingId and token</li><li>App.js: Responsible for rendering <code>MeetingView</code> and joining the meeting.</li></ul><h2 id="essential-steps-to-implement-video-calling-functionality">Essential Steps to Implement Video Calling Functionality</h2><p>To add video capability to your React application, you must first complete a sequence of prerequisites.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with API.js<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><pre><code class="language-JavaScript">//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashboard&gt;";
// API call to create a meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. 
It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>, <strong>webcamStream</strong>, <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";

function JoinScreen({ getMeetingAndToken }) {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function ChatView() {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  //Getting the meeting id by calling the api we just wrote
  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  //This will set Meeting Id to null when meeting is left or ended
  const onMeetingLeave = () =&gt; {
    setMeetingId(null);
  };

  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
      }}
      token={authToken}
    &gt;
      &lt;MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} /&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} /&gt;
  );
}

export default App;</code></pre><figcaption><p><span style="white-space: pre-wrap;">App.js</span></p></figcaption></figure><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-3-implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><pre><code class="language-js">function JoinScreen({ getMeetingAndToken }) {
  const [meetingId, setMeetingId] = useState(null);
  const onClick = async () =&gt; {
    await getMeetingAndToken(meetingId);
  };
  return (
    &lt;div&gt;
      &lt;input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) =&gt; {
          setMeetingId(e.target.value);
        }}
      /&gt;
      &lt;button onClick={onClick}&gt;Join&lt;/button&gt;
      {" or "}
      &lt;button onClick={onClick}&gt;Create Meeting&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-join-screen-06fb57cf0d9e3bcc1e7da9fc032298c3.jpeg" class="kg-image" alt="How to Integrate Chat Feature in React JS Video Call App?" loading="lazy" width="720" height="130"/></figure><h3 id="step-4-implement-meetingview-and-controls%E2%80%8B">Step 4: Implement MeetingView and Controls<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-4-implement-meetingview-and-controls">​</a></h3><p>The next step is to create <code>MeetingView</code> and <code>Controls</code> components to manage features such as join, leave, mute, and unmute.</p><pre><code class="language-js">function MeetingView(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  //We will also get the participants list to display all participants
  const { join, participants } = useMeeting({
    //callback for when meeting is joined successfully
    onMeetingJoined: () =&gt; {
      setJoined("JOINED");
    },
    //callback for when meeting is left
    onMeetingLeft: () =&gt; {
      props.onMeetingLeave();
    },
  });
  const joinMeeting = () =&gt; {
    setJoined("JOINING");
    join();
  };

  return (
    &lt;div className="container"&gt;
      &lt;h3&gt;Meeting Id: {props.meetingId}&lt;/h3&gt;
      {joined &amp;&amp; joined == "JOINED" ? (
        &lt;div&gt;
          &lt;Controls /&gt;
          {/* For rendering all the participants in the meeting */}
          {[...participants.keys()].map((participantId) =&gt; (
            &lt;ParticipantView
              participantId={participantId}
              key={participantId}
            /&gt;
          ))}
          &lt;ChatView /&gt;
        &lt;/div&gt;
      ) : joined &amp;&amp; joined == "JOINING" ? (
        &lt;p&gt;Joining the meeting...&lt;/p&gt;
      ) : (
        &lt;button onClick={joinMeeting}&gt;Join&lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><figure class="kg-card kg-code-card"><pre><code class="language-js">function Controls() {
  const { leave, toggleMic, toggleWebcam } = useMeeting();
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; leave()}&gt;Leave&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleMic()}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleWebcam()}&gt;toggleWebcam&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">Control Component</span></p></figcaption></figure><h4 id="output-of-controls-component">Output of Controls Component</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-container-controls-2cebdfdfd1371b010b773cb6fb9c7ae8.jpeg" class="kg-image" alt="How to Integrate Chat Feature in React JS Video Call App?" loading="lazy" width="720" height="177"/></figure><h3 id="step-5-implement-participant-view%E2%80%8B">Step 5: Implement Participant View<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-5-implement-participant-view">​</a></h3><p>Before implementing the participant view, you need to understand a couple of concepts.</p><h4 id="51-forwarding-ref-for-mic-and-camera">5.1 Forwarding Ref for mic and camera</h4>
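Before the SDK-specific snippet below, it may help to see the ref pattern in isolation. This is a plain-JavaScript sketch — <code>useRefStub</code> is a hypothetical stand-in for React's <code>useRef</code>, and the fake element only mimics <code>play()</code>:

```javascript
// A ref is just a stable, mutable container: an object whose `current`
// field can be reassigned without triggering a re-render.
const useRefStub = (initial) => ({ current: initial });

const micRef = useRefStub(null);

// Once the real audio element mounts, the ref gets populated...
micRef.current = {
  playing: false,
  play() { this.playing = true; },
};

// ...and event handlers elsewhere can drive the element through the ref.
micRef.current.play();
console.log(micRef.current.playing); // true
```

The same access pattern (`ref.current` may be `null` before mount, so guard it) is what the real components below rely on.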
<p>The <code>useRef</code> hook is responsible for referencing the audio and video components. It will be used to play and stop the audio and video of the participant.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const webcamRef = useRef(null);
const micRef = useRef(null);</code></pre><figcaption><p><span style="white-space: pre-wrap;">Forwarding Ref for mic and camera</span></p></figcaption></figure><h4 id="52-useparticipant-hook">5.2 useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take participantId as an argument.</p><pre><code class="language-js">const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
  props.participantId
);</code></pre><h4 id="53-mediastream-api">5.3 MediaStream API</h4>
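For orientation, the pattern used in this step is simply "wrap a track in a stream container and hand it to a player". Here is a runnable plain-JS illustration with the browser API stubbed out — <code>FakeMediaStream</code> and the track/element objects are hypothetical placeholders for the real browser <code>MediaStream</code> and video element:

```javascript
// Minimal stand-in mirroring the two MediaStream calls the guide relies on.
class FakeMediaStream {
  constructor() { this.tracks = []; }
  addTrack(track) { this.tracks.push(track); }
  getTracks() { return this.tracks; }
}

// Hypothetical SDK-provided stream exposing a single media track.
const webcamStream = { track: { kind: "video", id: "cam-1" } };

// Wrap the track in a stream container...
const mediaStream = new FakeMediaStream();
mediaStream.addTrack(webcamStream.track);

// ...and hand it to the player (in the browser: videoElement.srcObject = mediaStream).
const videoElement = {};
videoElement.srcObject = mediaStream;

console.log(videoElement.srcObject.getTracks().length); // 1
```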
<p>The MediaStream API is beneficial for adding a MediaTrack to the audio/video tag, enabling the playback of audio or video.</p><pre><code class="language-js">const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);

webcamRef.current.srcObject = mediaStream;
webcamRef.current
  .play()
  .catch((error) =&gt; console.error("videoElem.current.play() failed", error));</code></pre><h4 id="54-implement-participantview%E2%80%8B">5.4 Implement <code>ParticipantView</code>​</h4>
<p>Now you can use both of the hooks and the API to create <code>ParticipantView</code></p><pre><code class="language-js">function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() =&gt; {
    if (micRef.current) {
      if (micOn &amp;&amp; micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =&gt;
            console.error("micRef.current.play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    &lt;div&gt;
      &lt;p&gt;
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      &lt;/p&gt;
      &lt;audio ref={micRef} autoPlay playsInline muted={isLocal} /&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          //
          playsinline // extremely crucial prop
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          //
          url={videoStream}
          //
          height={"300px"}
          width={"300px"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><h2 id="integrate-chat-feature">Integrate Chat Feature</h2><p>For communication or any kind of messaging between participants, VideoSDK provides the <code>usePubSub</code> hook, which utilizes the Publish-Subscribe mechanism. It can be employed to develop a wide variety of functionalities. For example, participants could use it to send chat messages to each other, share files or other media, or even trigger actions like muting or unmuting audio or video.</p><p>This guide focuses on using PubSub to implement Chat functionality. If you are not familiar with the PubSub mechanism and <code>usePubSub</code> hook, you can <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/pubsub">follow this guide</a>.</p><h3 id="implementing-group-chat">Implementing Group Chat</h3><ul><li>The initial step in setting up a group chat involves selecting a topic to which all participants will publish and subscribe, facilitating the exchange of messages. In the following example, <code>CHAT</code> is used as the topic. Next, obtain the <code>publish()</code> method and the <code>messages</code> array from the <code>usePubSub</code>hook.</li></ul><pre><code class="language-js">// importing usePubSub hook from react-sdk
import { usePubSub } from "@videosdk.live/react-sdk";

function ChatView() {
  // destructure publish method from usePubSub hook
  const { publish, messages } = usePubSub("CHAT");

  return &lt;&gt;...&lt;/&gt;;
}</code></pre><ul><li>Next create a message input and a send button to publish the messages.</li></ul><pre><code class="language-js">function ChatView() {
  // destructure publish method from usePubSub hook
  const { publish, messages } = usePubSub("CHAT");

  // State to store the user typed message
  const [message, setMessage] = useState("");

  const handleSendMessage = () =&gt; {
    // Sending the Message using the publish method
    // persist: true keeps the message for late joiners and the .csv export
    publish(message, { persist: true });
    // Clearing the message input
    setMessage("");
  };

  return (
    &lt;&gt;
      &lt;input
        value={message}
        onChange={(e) =&gt; {
          setMessage(e.target.value);
        }}
      /&gt;
      &lt;button onClick={handleSendMessage}&gt;Send Message&lt;/button&gt;
    &lt;/&gt;
  );
}</code></pre><ul><li>The final step in the group chat is to display the messages sent by others. For this use the <code>messages</code> array and display all the messages.</li></ul><pre><code class="language-js">function ChatView() {
  // destructure publish method from usePubSub hook
  const { publish, messages } = usePubSub("CHAT");

  const [message, setMessage] = useState("");

  const handleSendMessage = () =&gt; {
    ...
  };

  return (
    &lt;&gt;
      &lt;div&gt;
      &lt;p&gt;Messages: &lt;/p&gt;
      {messages.map((message) =&gt; {
        return (
          &lt;p&gt;
            {message.senderName} says {message.message}
          &lt;/p&gt;
        );
      })}
      &lt;/div&gt;
      &lt;input
        value={message}
        onChange={(e) =&gt; {
          setMessage(e.target.value);
        }}
      /&gt;
      &lt;button onClick={handleSendMessage}&gt;Send Message&lt;/button&gt;
    &lt;/&gt;
  );
}</code></pre><h3 id="implement-private-chat%E2%80%8B">Implement Private Chat<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/chat-using-pubsub#private-chat">​</a></h3><p>In the above example, to convert the chat into a private one between two participants, set the <code>sendOnly</code> property.</p><pre><code class="language-js">function ChatView() {
  // destructure publish method from usePubSub hook
  const { publish, messages } = usePubSub("CHAT");

  // State to store the user typed message
  const [message, setMessage] = useState("");

  const handleSendMessage = () =&gt; {
    // Sending the Message using the publish method
    // Pass the participantId of the participant to whom you want to send the message.
    publish(message, { persist: true, sendOnly: ['XYZ'] });
    // Clearing the message input
    setMessage("");
  };

  //...
}</code></pre><h3 id="display-the-latest-message-notification%E2%80%8B">Display the Latest Message Notification<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/chat-using-pubsub#display-latest-message-notificaiton">​</a></h3><p>To show the notification to the user when a new message arrives, the following code snippet has to be followed.</p><pre><code class="language-js">function ChatView() {
  // destructure publish method from usePubSub hook
  const { publish, messages } = usePubSub("CHAT", {
    onMessageReceived: (message)=&gt;{
      window.alert(message.senderName + " says " + message.message);
    }
  });
  const [message, setMessage] = useState("");
  const handleSendMessage = () =&gt; {
    ...
  };
  return &lt;&gt;...&lt;/&gt;;
}</code></pre><h3 id="downloading-chat-messages%E2%80%8B">Downloading Chat Messages<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/chat-using-pubsub#downloading-chat-messages">​</a></h3><p>All PubSub messages published with <code>persist: true</code> can be downloaded as a <code>.csv</code> file. This file is accessible in the VideoSDK dashboard and through the <a href="https://docs.videosdk.live/api-reference/realtime-communication/fetch-session-using-sessionid">Sessions API</a>.</p><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-js-video-calling-app">✨ Want to Add More Features to React JS Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React video-calling app, <strong>check out these additional resources:</strong></p><ul><li>HLS Player: <a href="https://www.videosdk.live/blog/implement-hls-player-in-react-js">Link</a></li><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-react-js">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-react-js">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-js">Link</a></li><li>Screen Share Feature: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-js">Link</a></li><li>Collaborative Whiteboard: <a href="https://www.videosdk.live/blog/integrate-whiteboard-in-react-js">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-in-react-js">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>By following these steps, you'll have successfully integrated a chat feature into your React.js video call app using VideoSDK. 
Integrating a chat feature into your React JS video calls offers numerous benefits, including enhanced communication, collaboration, and engagement among participants. </p><p>By incorporating this feature, you not only improve the overall user experience but also empower users to communicate more effectively during video calls. Whether for business meetings, remote collaboration, or virtual events, the addition of a chat feature enhances the functionality and utility of your video conferencing platform.</p><p>If you are new here and want to build an interactive React app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>
]]></description><link>https://www.videosdk.live/blog/what-is-low-latency-http-live-streaming</link><guid isPermaLink="false">65a514926c68429b5fdf0f0e</guid><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Tue, 07 Jan 2025 11:09:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/What-is--Low-latency-HTTP-Live-Streaming_.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-low-latency-http-live-streaming"><strong>What is Low-latency HTTP Live Streaming?</strong></h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/What-is--Low-latency-HTTP-Live-Streaming_.png" alt="What is Low-latency HTTP Live Streaming? | How Does LL-HLS Work?"/><p><strong>Low-latency HTTP Live Streaming</strong> (LL-HLS) is an advanced streaming protocol designed to minimize the delay between the transmission of audio-video content from the server to the viewer's device. Unlike traditional HTTP Live Streaming (HLS), LL-HLS aims to achieve ultra-low latency, enabling real-time interaction and enhancing the overall user experience.</p><h2 id="what-is-http-live-streaming">What is HTTP Live Streaming?</h2><p><a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HTTP Live Streaming (HLS)</a> is a protocol developed by Apple for transmitting live and on-demand audio and video content over the Internet. HLS breaks multimedia files into small segments, facilitating <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming">adaptive bitrate streaming</a>. This method enhances playback quality and allows seamless adjustment to varying network conditions for a smoother user experience.</p><h3 id="importance-of-low-latency-streaming-in-the-digital-age"><strong>Importance of Low-latency Streaming in the Digital Age</strong></h3><p>In the digital age, where instant communication and real-time experiences are paramount, low-latency streaming has become a crucial element for various applications. 
Whether it's live sports events, online gaming, or interactive webinars, minimizing latency ensures that users receive content with minimal delay, fostering a more immersive and engaging experience.</p><h3 id="key-features-of-low-latency-http-live-streaming"><strong>Key Features of Low-latency HTTP Live Streaming</strong></h3><p>LL-HLS comes with several key features that set it apart from traditional streaming protocols. These include:</p><ul><li><strong>Reduced Latency:</strong> The primary goal of <strong>Low-latency HTTP Live Streaming</strong> is to achieve significantly lower latency compared to standard HLS, enabling near-real-time content delivery.</li><li><strong>Chunked Transfer Encoding:</strong> LL-HLS utilizes chunked transfer encoding to break down content into smaller, more manageable chunks. This allows for quicker transmission and reduced buffering times.</li><li><strong>Media Playlists and Segments:</strong> Low-latency HTTP Live Streaming employs dynamic media playlists and segments, optimizing the delivery of content based on the viewer's network conditions and device capabilities.</li><li><strong>Adaptive Bitrate Streaming (ABR):</strong> ABR ensures that viewers receive the highest quality stream possible based on their network bandwidth, adapting in real-time to fluctuations in internet speed.</li></ul><h2 id="what-is-low-latency-streaming"><strong>What is Low-latency Streaming?</strong></h2><p>Low-latency streaming is a technology that minimizes delays in delivering audio or video content over the Internet. It ensures real-time interaction, enhancing user experiences in applications like live streaming, online gaming, and video conferencing by reducing delays and improving responsiveness.</p><h3 id="latency-in-video-streaming"><strong>Latency in Video Streaming</strong></h3><p>Latency in video streaming refers to the time delay between the moment content is transmitted from the server to when it is displayed on the viewer's screen. 
In live streaming scenarios, latency becomes particularly critical, as prolonged delays can hinder real-time interactions and diminish the user experience.</p><h3 id="significance-of-low-latency-in-live-streaming"><strong>Significance of Low Latency in Live Streaming</strong></h3><p>Low latency is crucial for various live streaming applications, such as live sports broadcasts, online gaming, and interactive virtual events. Minimizing latency enhances the sense of immediacy and responsiveness, allowing viewers to engage in real-time interactions like <a href="https://learn.g2.com/best-live-chat-software" rel="noreferrer">live chat</a>, polls, and collaborative activities.</p><h3 id="challenges-associated-with-latency-in-video-streaming"><strong>Challenges Associated with Latency in Video Streaming</strong></h3><p>While traditional streaming protocols may have acceptable latency for on-demand content, they often fall short in live-streaming scenarios. The challenges associated with latency include buffering issues, synchronization issues in interactive applications, and a lack of responsiveness in real-time communication.</p><h2 id="how-does-ll-hls-work"><strong>How Does LL-HLS Work?</strong></h2><h3 id="low-latency-http-live-streaming-architecture">Low-latency HTTP Live Streaming<strong> Architecture</strong></h3><p>LL-HLS achieves low latency through a combination of architectural enhancements and optimized streaming techniques. The key components of LL-HLS architecture include:</p><h3 id="chunked-transfer-encoding">Chunked Transfer Encoding</h3><p>Low-latency HTTP Live Streaming divides content into smaller chunks, typically a few seconds in duration. This allows for more efficient transmission, as each chunk can be delivered independently, reducing overall latency.</p><h3 id="media-playlists-and-segments">Media Playlists and Segments</h3><p>Dynamic media playlists and segments enable LL-HLS to adapt to changing network conditions and viewer capabilities. 
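</p><p>To make this concrete, here is a sketch of a low-latency media playlist. The tag names follow Apple's LL-HLS specification, but every value below is purely illustrative: full segments are still listed as in classic HLS, while short partial segments (<code>EXT-X-PART</code>) are advertised as soon as they are encoded, so players no longer wait for a whole segment before fetching media.</p><pre><code>#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:4
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0
#EXT-X-PART-INF:PART-TARGET=0.333
#EXTINF:4.0,
segment100.mp4
#EXT-X-PART:DURATION=0.333,URI="segment101.part1.mp4",INDEPENDENT=YES
#EXT-X-PART:DURATION=0.333,URI="segment101.part2.mp4"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment101.part3.mp4"</code></pre><p>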
The server can dynamically adjust the quality and type of content delivered based on the viewer's device and available bandwidth.</p><h3 id="adaptive-bitrate-streaming-abr">Adaptive Bitrate Streaming (ABR)</h3><p>ABR ensures a smooth viewing experience by adjusting the quality of the stream in real time. If network conditions change, Low-latency HTTP Live Streaming can seamlessly switch between different bitrates, preventing buffering and maintaining optimal video quality.</p><h2 id="benefits-of-low-latency-http-live-streaming"><strong>Benefits of </strong>Low-latency HTTP Live Streaming</h2><h3 id="improved-viewer-engagement"><strong>Improved Viewer Engagement</strong></h3><p>With lower latency, viewers can engage more actively with live content, participating in real-time discussions, polls, and other interactive elements. This heightened engagement contributes to a more satisfying user experience.</p><h3 id="real-time-interaction-opportunities"><strong>Real-time Interaction Opportunities</strong></h3><p>LL-HLS opens the door to real-time interactions, making live streaming more than just a one-way broadcast. Viewers can engage with content creators, fellow viewers, and event hosts, fostering a sense of community and participation.</p><h3 id="enhanced-user-experience"><strong>Enhanced User Experience</strong></h3><p>Reduced latency translates to a smoother and more enjoyable viewing experience. 
Viewers can watch live events without significant delays, resulting in a more immersive and satisfying experience.</p><h2 id="how-does-videosdk-enhance-the-ll-hls-experience"><strong>How does VideoSDK enhance the LL-HLS experience?</strong></h2><h3 id="introduction-to-videosdk"><strong>Introduction to VideoSDK</strong></h3><p><a href="https://www.videosdk.live/">VideoSDK</a> is a powerful set of audio-video software development kits (SDKs) that provide complete flexibility, scalability, and control for integrating <a href="https://www.videosdk.live/audio-video-conferencing">audio-video conferencing </a>and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into web and mobile applications.</p><h3 id="integration-steps-for-ll-hls-with-videosdk"><strong>Integration Steps for LL-HLS with VideoSDK</strong></h3><p>Integrating VideoSDK with LL-HLS is a seamless process. Developers can leverage VideoSDK to enhance the Low-latency HTTP Live Streaming experience by incorporating features such as:</p><ul><li><strong>Real-time Audio-Video Communication:</strong> VideoSDK enables developers to integrate real-time audio and video communication, enhancing the interactive elements of LL-HLS.</li><li><strong>Scalability:</strong> VideoSDK's scalable architecture ensures that the LL-HLS experience remains robust, even as the number of viewers and participants increases.</li><li><strong>Customization:</strong> Developers can customize the user interface and features of Low-latency HTTP Live Streaming using VideoSDK, tailoring the streaming experience to meet specific requirements.</li></ul><h3 id="features-and-capabilities-of-videosdk-in-ll-hls-environments"><strong>Features and Capabilities of VideoSDK in LL-HLS Environments</strong></h3><p>VideoSDK offers a range of features and capabilities to augment LL-HLS environments:</p><ul><li><strong>High-Quality Video:</strong> VideoSDK ensures high-quality video streaming, enhancing the visual 
appeal of live content delivered through LL-HLS.</li><li><strong>Low Latency:</strong> VideoSDK complements the low-latency nature of Low-latency HTTP Live Streaming, providing a seamless and responsive streaming experience.</li><li><strong>Cross-Platform Compatibility:</strong> VideoSDK supports cross-platform integration, allowing developers to create LL-HLS applications that work seamlessly on various devices and operating systems.</li></ul><h2 id="what-are-the-future-trends-and-innovations-in-ll-hls"><strong>What are the Future Trends and Innovations in LL-HLS</strong></h2><h3 id="ongoing-developments-in-low-latency-streaming-technologies"><strong>Ongoing Developments in Low-latency Streaming Technologies</strong></h3><p>The field of low-latency streaming is continually evolving, with ongoing developments focusing on further reducing latency and improving the overall streaming experience. Innovations in content delivery networks (CDNs), edge computing, and advanced compression algorithms contribute to the ongoing evolution of LL-HLS.</p><h3 id="emerging-standards-and-protocols"><strong>Emerging Standards and Protocols</strong></h3><p>As the demand for low-latency streaming grows, industry standards and protocols are emerging to streamline the adoption of these technologies. The emergence of new standards ensures interoperability and compatibility across different streaming platforms and devices.</p><p><em>Have questions about integrating Low-latency HTTP Live Streaming and VideoSDK? Our team offers expert advice tailored to your unique needs. Unlock the full potential—</em><a href="https://www.videosdk.live/blog/what-is-http-live-streaming?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic"><em>sign up </em></a><em>now to access resources and join our </em><a href="https://discord.com/invite/Qfm8j4YAUJ?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic"><em>developer community</em></a><em>. 
</em><a href="https://bookings.videosdk.live/#/discovery?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic"><em>Schedule a demo</em></a><em> to see features in action and discover how our solutions meet your streaming app needs.</em></p><h2 id="ll-http-live-streaming-faqs">LL-HTTP Live Streaming FAQs</h2><h3 id="does-videosdk-support-ll-http-live-streaming">Does VideoSDK support LL-HTTP Live Streaming?</h3><p>Yes, VideoSDK fully supports Low-latency HTTP Live Streaming (LL-HLS), providing developers with the tools to seamlessly integrate real-time audio-video capabilities into web and mobile applications.</p><h3 id="how-easy-is-it-to-integrate-low-latency-http-live-streaming-ll-hls-with-videosdk">How easy is it to integrate Low-latency HTTP Live Streaming (LL-HLS) with VideoSDK?</h3><p>Integrating LL-HLS with VideoSDK is designed to be straightforward. VideoSDK provides <a href="https://docs.videosdk.live/">comprehensive documentation</a> and APIs, making the integration process smooth for developers.</p><h3 id="does-videosdk-support-cross-platform-integration-with-ll-hls">Does VideoSDK support cross-platform integration with LL-HLS?</h3><p>Yes, VideoSDK supports cross-platform integration, ensuring that LL-HLS applications can be seamlessly deployed on various devices and operating systems.</p><h3 id="can-developers-customize-the-ll-hls-user-interface-using-videosdk">Can developers customize the LL-HLS user interface using VideoSDK?</h3><p>Absolutely. 
VideoSDK offers customization options, allowing developers to tailor the LL-HLS user interface and features according to specific requirements.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Screen Share in React Native (Android) Video Call App?]]></title><description><![CDATA[This tutorial will teach you how to seamlessly integrate Screen Share features in your React Native Android app using VideoSDK.]]></description><link>https://www.videosdk.live/blog/integrate-screen-share-in-react-native-android-video-call-app</link><guid isPermaLink="false">660f76632a88c204ca9cfca3</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[React Native]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 07 Jan 2025 10:37:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share-in-React-Native-iOS-3.png" medium="image"/><content:encoded><![CDATA[<h2 id="%F0%9F%93%96-introduction"><strong>📖 </strong>Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share-in-React-Native-iOS-3.png" alt="How to Integrate Screen Share in React Native (Android) Video Call App?"/><p>In today's digital world, good communication and teamwork are crucial. As people look for more effective methods to connect, video conferencing solutions have become indispensable. In this landscape, screen sharing has evolved into an integral element of modern communication and collaboration platforms, allowing users to share their device displays with others during meetings, presentations, and remote collaboration. </p><p>In this article, we'll look into how you can smoothly incorporate Screen Share capabilities with VideoSDK, allowing your users to easily share their displays. Let's start.</p><h2 id="%F0%9F%9A%80-getting-started-with-videosdk"><strong>🚀 
</strong>Getting Started with VideoSDK</h2><h3 id="goals">Goals</h3><p>By the end of this article, we'll:</p><ol><li>Create a <a href="https://app.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK.</li><li>Enable Screen Sharing feature.</li></ol><p>To take advantage of the Screen Share functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li><li>Android Studio or Xcode installed</li></ul><h2 id="%E2%AC%87%EF%B8%8F-integrate-videosdk-config"><strong>⬇️ </strong>Integrate VideoSDK Config.</h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the Screen Share feature. 
Installing VideoSDK using NPM or Yarn will depend on the needs of your project.</p><ul><li>For NPM </li></ul><pre><code class="language-js">npm install "@videosdk.live/react-native-sdk"  "@videosdk.live/react-native-incallmanager"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">yarn add "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"</code></pre><h3 id="project-configuration">Project Configuration</h3><p>Before integrating the Screen Share functionality, ensure that your project is correctly prepared to handle the integration. This setup consists of a sequence of steps for configuring rights, dependencies, and platform-specific parameters so that VideoSDK can function seamlessly inside your application context.</p><!--kg-card-begin: markdown--><h4 id="android-setup">Android Setup</h4>
<!--kg-card-end: markdown--><ul><li>Add the required permissions in the <code>AndroidManifest.xml</code> file.</li></ul><pre><code class="language-js">&lt;manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
&gt;
    &lt;!-- Give all the required permissions to app --&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) --&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" /&gt;

    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)--&gt;
    &lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;

    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
    &lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;

    &lt;application&gt;
   &lt;meta-data
      android:name="live.videosdk.rnfgservice.notification_channel_name"
      android:value="Meeting Notification"
     /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_channel_description"
    android:value="Whenever a meeting starts, a notification will appear."
    /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_color"
    android:resource="@color/red"
    /&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"&gt;&lt;/service&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"&gt;&lt;/service&gt;
  &lt;/application&gt;
&lt;/manifest&gt;</code></pre><ul><li>Update your <code>colors.xml</code> file for internal dependencies:</li></ul><pre><code class="language-js">&lt;resources&gt;
  &lt;item name="red" type="color"&gt;
    #FC0303
  &lt;/item&gt;
  &lt;integer-array name="androidcolors"&gt;
    &lt;item&gt;@color/red&lt;/item&gt;
  &lt;/integer-array&gt;
&lt;/resources&gt;</code></pre><ul><li>Link the necessary VideoSDK Dependencies: </li></ul><pre><code class="language-js">  dependencies {
   implementation project(':rnwebrtc')
   implementation project(':rnfgservice')
  }</code></pre><pre><code class="language-js">include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')</code></pre><pre><code class="language-js">import live.videosdk.rnwebrtc.WebRTCModulePackage;
import live.videosdk.rnfgservice.ForegroundServicePackage;

public class MainApplication extends Application implements ReactApplication {
  private List&lt;ReactPackage&gt; getPackages() {
      @SuppressWarnings("UnnecessaryLocalVariable")
      List&lt;ReactPackage&gt; packages = new PackageList(this).getPackages();
      // Packages that cannot be autolinked yet can be added manually here, for example:

      packages.add(new ForegroundServicePackage());
      packages.add(new WebRTCModulePackage());

      return packages;
  }
}</code></pre><pre><code class="language-js">/* This one fixes a weird WebRTC runtime problem on some devices. */
android.enableDexingArtifactTransform.desugaring=false</code></pre><ul><li>Include the following line in your <code>proguard-rules.pro</code> file (optional: if you are using Proguard)</li></ul><pre><code class="language-js">-keep class org.webrtc.** { *; }</code></pre><ul><li>In your <code>build.gradle</code> file, update the minimum OS/SDK version to <code>23</code>.</li></ul><pre><code class="language-js">buildscript {
  ext {
      minSdkVersion = 23
  }
}</code></pre><!--kg-card-begin: markdown--><h4 id="register-service">Register Service</h4>
<!--kg-card-end: markdown--><p>Register VideoSDK services in your root <code>index.js</code> file for the initialization service.</p><pre><code class="language-js">import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { register } from "@videosdk.live/react-native-sdk";

register();

AppRegistry.registerComponent(appName, () =&gt; App);</code></pre><h2 id="%F0%9F%93%8B-essential-steps-for-building-the-video-calling-functionality"><strong>📋 </strong>Essential Steps for Building the Video Calling Functionality</h2><p>By following these essential steps, you can seamlessly implement video calling in your application.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with api.js<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><pre><code class="language-js">export const token = "&lt;Generated-from-dashboard&gt;";
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
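
  // Illustrative guard, not in the original tutorial: surface HTTP errors
  // early instead of failing later on an undefined roomId.
  if (!res.ok) {
    throw new Error(`Room creation failed with status ${res.status}`);
  }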

  const { roomId } = await res.json();
  return roomId;
};</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. 
It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>,<strong> webcamStream</strong>,<strong> micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><pre><code class="language-js">import React, { useState } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) =&gt; {
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}&gt;
      &lt;MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      &gt;
        &lt;MeetingView /&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  ) : (
    &lt;JoinScreen getMeetingId={getMeetingId} /&gt;
  );
}</code></pre><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-3--implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><pre><code class="language-js">function JoinScreen(props) {
  const [meetingVal, setMeetingVal] = useState("");
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Create Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      /&gt;
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() =&gt; {
          props.getMeetingId(meetingVal);
        }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Join Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/SafeAreaView&gt;
  );
}</code></pre><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>The next step is to create a <code>ControlsContainer</code> component to manage features such as Join/leave a Meeting and Enable/Disable the Webcam or Mic.</p><p>In this step, the <code>useMeeting</code> hook is utilized to acquire all the required methods such as <code>join()</code>, <code>leave()</code>, <code>toggleWebcam</code> and <code>toggleMic</code>.</p><pre><code class="language-js">const Button = ({ onPress, buttonText, backgroundColor }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: "center",
        alignItems: "center",
        padding: 12,
        borderRadius: 4,
      }}
    &gt;
      &lt;Text style={{ color: "white", fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

function ControlsContainer() {
  const { join, leave, toggleWebcam, toggleMic } = useMeeting();
  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
          join();
        }}
        buttonText={"Join"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><pre><code class="language-js">function ParticipantList() {
  return null;
}
function MeetingView() {
  const { meetingId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {meetingId ? (
        &lt;Text style={{ fontSize: 18, padding: 12 }}&gt;
          Meeting Id: {meetingId}
        &lt;/Text&gt;
      ) : null}
      &lt;ParticipantList /&gt;
      &lt;ControlsContainer/&gt;
    &lt;/View&gt;
  );
}</code></pre><h3 id="step-5-render-participant-list%E2%80%8B">Step 5: Render Participant List<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-5--render-participant-list">​</a></h3><p>After implementing the controls, the next step is to render the joined participants.</p><p>You can get all the joined <code>participants</code> from the <code>useMeeting</code> Hook.</p><pre><code class="language-js">function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length &gt; 0 ? (
    &lt;FlatList
      data={participants}
      renderItem={({ item }) =&gt; {
        return &lt;ParticipantView participantId={item} /&gt;;
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 20 }}&gt;Press the Join button to enter the meeting.&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><pre><code class="language-js">function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { participants } = useMeeting({});
  const participantsArrId = [...participants.keys()];

  return (
    &lt;View style={{ flex: 1 }}&gt;
      &lt;ParticipantList participants={participantsArrId} /&gt;
      &lt;ControlsContainer /&gt;
    &lt;/View&gt;
  );
}</code></pre><h3 id="step-6-handling-participants-media%E2%80%8B">Step 6: Handling Participant's Media<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-6--handling-participants-media">​</a></h3><p>Before Handling the Participant's Media, you need to understand a couple of concepts.</p><!--kg-card-begin: markdown--><h4 id="1-useparticipant-hook">1. useParticipant Hook</h4>
<!--kg-card-end: markdown--><p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take <code>participantId</code> as argument.</p><pre><code class="language-js">const { webcamStream, webcamOn, displayName } = useParticipant(participantId);</code></pre><!--kg-card-begin: markdown--><h4 id="2-mediastream-api">2. MediaStream API</h4>
<!--kg-card-end: markdown--><p>The MediaStream API is beneficial for adding a MediaTrack to the <code>RTCView</code> component, enabling the playback of audio or video.</p><pre><code class="language-js">&lt;RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/&gt;</code></pre><!--kg-card-begin: markdown--><h4 id="rendering-participant-media">Rendering Participant Media</h4>
<!--kg-card-end: markdown--><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);

  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><p>Congratulations! By following these steps, you're on your way to unlocking the video capabilities within your application. Now, we are moving forward to integrate the feature that builds immersive video experiences for your users!</p><h2 id="%F0%9F%93%9A-integrate-screen-sharing-feature"><strong>📚 </strong>Integrate Screen Sharing Feature</h2><p>Adding the Screen Share functionality to your application improves cooperation by allowing users to share their device screens during meetings or presentations. It allows everyone in the conference to view precisely what you see on your screen, which is useful for presentations, demos, and collaborations.</p><h3 id="enable-screen-share">Enable Screen Share</h3><ul><li>By using the <code>enableScreenShare()</code> function of the <code>useMeeting</code> hook, the local participant can share their mobile screen with other participants.</li><li>The Screen Share stream of a participant can be accessed from the <code>screenShareStream</code> property of the <code>useParticipant</code> hook.</li></ul><h3 id="disable-screen-share">Disable Screen Share</h3><ul><li>By using the <code>disableScreenShare()</code> function of the <code>useMeeting</code> hook, the local participant can stop sharing their mobile screen with other participants.</li></ul><h3 id="toggle-screen-share">Toggle Screen Share</h3><ul><li>By using the <code>toggleScreenShare()</code> function of the <code>useMeeting</code> hook, the local participant can start or stop sharing their mobile screen with other participants based on the current state of the screen sharing.</li><li>The Screen Share stream of a participant can be accessed from the <code>screenShareStream</code> property of the <code>useParticipant</code> hook.</li></ul><pre><code class="language-js">const ControlsContainer = () =&gt; {
  //Getting the screen-share method from hook
  const { toggleScreenShare } = useMeeting();
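
  // Hedged sketch: per the Enable/Disable Screen Share sections above, the
  // same useMeeting() hook also exposes enableScreenShare() and
  // disableScreenShare() if you want explicit controls instead of toggling:
  // const { enableScreenShare, disableScreenShare } = useMeeting();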

  return (
    //...
    //...
    &lt;Button
      onPress={() =&gt; {
        toggleScreenShare();
      }}
      buttonText={"Toggle ScreenShare"}
      backgroundColor={"#1178F8"}
    /&gt;
    //...
    //...
  );
};</code></pre><h3 id="rendering-screen-share-stream%E2%80%8B">Rendering Screen Share Stream<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#rendering-screen-share-stream">​</a></h3><ul><li>To render the screenshare, you will need the <code>participantId</code> of the user presenting the screen. This can be obtained from the <code>presenterId</code> property of the <code>useMeeting</code> hook.</li></ul><pre><code class="language-js">function MeetingView() {
  const { presenterId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {/* ... */}
      {presenterId &amp;&amp; &lt;PresenterView presenterId={presenterId} /&gt;}
      &lt;ParticipantList /&gt;
      {/* ... */}
    &lt;/View&gt;
  );
}

const PresenterView = ({ presenterId }) =&gt; {
  return &lt;Text&gt;PresenterView&lt;/Text&gt;;
};</code></pre><ul><li>Now that you have the <code>presenterId</code>, you can obtain the <code>screenShareStream</code> using the <code>useParticipant</code> hook and play it in the <code>RTCView</code> component.</li></ul><pre><code class="language-js">const PresenterView = ({ presenterId }) =&gt; {
  const { screenShareStream, screenShareOn } = useParticipant(presenterId);

  return (
    &lt;&gt;
      {/* playing the media stream in the RTCView */}
      {screenShareOn &amp;&amp; screenShareStream ? (
        &lt;RTCView
          streamURL={new MediaStream([screenShareStream.track]).toURL()}
          objectFit={"contain"}
          style={{
            flex: 1,
          }}
        /&gt;
      ) : null}
    &lt;/&gt;
  );
};</code></pre><h3 id="events-associated-with-togglescreenshare%E2%80%8B">Events associated with toggleScreenShare<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#events-associated-with-togglescreenshare">​</a></h3><ul><li>Every Participant will receive a callback on <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/events#onstreamdisabled"><code>onStreamEnabled()</code></a> event of the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/introduction"><code>useParticipant()</code></a> hook with the <code>Stream</code> object, if the <strong>screen share broadcasting was started</strong>.</li><li>Every Participant will receive a callback on <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/events#onstreamdisabled"><code>onStreamDisabled()</code></a> event of the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/introduction"><code>useParticipant()</code></a> hook with the <code>Stream</code> object, if the <strong>screen share broadcasting was stopped</strong>.</li><li>Every Participant will receive the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-meeting/events#onpresenterchanged"><code>onPresenterChanged()</code></a> callback of the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-meeting/introduction"><code>useMeeting</code></a> hook, providing the <code>participantId</code> as the <code>presenterId</code> of the participant who started the screen share or <code>null</code> if the screen share was turned off.</li></ul><pre><code class="language-js">import { useParticipant, useMeeting } from "@videosdk.live/react-native-sdk";

const MeetingView = () =&gt; {
  //Callback for when the presenter changes
  function onPresenterChanged(presenterId) {
    if(presenterId){
      console.log(presenterId, "started screen share");
    }else{
      console.log("someone stopped screen share");
    }
  }

  const { participants } = useMeeting({
    onPresenterChanged,
    ...
  });

  return &lt;&gt;...&lt;/&gt;
}

const ParticipantView = ({ participantId }) =&gt; {
  //Callback for when the participant starts a stream
  function onStreamEnabled(stream) {
    if(stream.kind === 'share'){
      console.log("Share Stream On: onStreamEnabled", stream);
    }
  }

  //Callback for when the participant stops a stream
  function onStreamDisabled(stream) {
    if(stream.kind === 'share'){
      console.log("Share Stream Off: onStreamDisabled", stream);
    }
  }

  const {
    displayName,
    ...
  } = useParticipant(participantId, {
    onStreamEnabled,
    onStreamDisabled,
    ...
  });
  return &lt;&gt; Participant View &lt;/&gt;;
}</code></pre><p>By following the steps outlined in this guide, you can seamlessly integrate the Screen Share feature and empower your users to share their screens with ease, fostering better communication and collaboration.</p><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-native-video-calling-app">✨ Want to Add More Features to React Native Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React Native video-calling app, check out these additional resources:</p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/active-speaker-indication-in-react-native-video-call-app">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-in-react-native-video-app">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-native-for-android-app">Link</a></li><li>Screen Share Feature in iOS: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-ios-video-call-app">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-native-video-call-app">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/picture-in-picture-pip-in-react-native">Link</a></li></ul><h2 id="%F0%9F%93%98-conclusion">📘 Conclusion</h2><p>With VideoSDK's Screen Share functionality, you can create immersive and engaging video experiences that connect people from all over the world. Screen Share enables participants to easily share their ideas, presentations, and information during corporate meetings, educational sessions, or creative collaborations.</p><p>If you are new here and want to build an interactive React Native app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10000 free minutes every month</em>.
This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.<br/></p>]]></content:encoded></item><item><title><![CDATA[What is Video on Demand?]]></title><description><![CDATA[Video on Demand (VoD) creates an online video library for the viewers, which they can access anytime at their ease with any compatible device.]]></description><link>https://www.videosdk.live/blog/what-is-video-on-demand</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb63</guid><category><![CDATA[live-streaming]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 06 Jan 2025 14:45:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/07/What-is-VOD.png" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/07/What-is-VOD.png" alt="What is Video on Demand?"/><p>Video on Demand (VoD) creates an online video library that viewers can access anytime, at their ease, from any compatible device. Unlike traditional broadcast channels, which required viewers to watch videos only at the time of broadcast, VoD lets viewers watch content at their convenience.</p><p>Video on Demand technology, the future of online content delivery, has seen rapid adoption in recent times. Today, most content is at its most engaging in video format, and that is where VoD comes in. VoD stores video content in the cloud in a compressed file format. Researchers have found that video content is more alluring than a written article or an e-mail; it is well said that what we visualize carries more weight than what we read or listen to.
What we see simply sticks with us.</p><p>With the growing global population, smartphone and internet use has also increased, driving up demand for online digital content. Businesses have seen a surge in demand for video, a demand that keeps growing and creates endless opportunities. Video on Demand benefits content creators as well: it allows users to view videos at their comfort, whenever they want and however they choose. This blog explains Video on Demand in detail: what it is, how it works, and its benefits.</p><p>On a Video on Demand platform, viewers can play videos, seek forward and backward, pause and resume, and save videos to watch later. It helps manage videos and deliver content via CDNs, providing fast and reliable delivery for businesses. It also gives businesses full control over content delivery and distribution, lets them customize what they deliver, and builds a library of pre-recorded content that can be revisited in the future.</p><p>Live streaming and VoD are often considered similar, but in reality they are quite different. Let us understand what live streaming is.</p><h2 id="what-is-live-streaming">What is Live Streaming?</h2><p>Live streaming allows businesses to broadcast their content online in real time. Viewers can watch the stream as it happens and, if it is recorded, watch it later using VoD. On a live stream, viewers can also pause, play, and rewind. Live streaming is valuable because it connects a business to its customers, creating a live bond with them. It also allows live chat with viewers. VoD lacks a live chat facility, though it provides full playback of the same recorded live stream.</p><p>Live streaming helps businesses make public announcements, media releases, and press conferences to increase their PR activity, and it is a beneficial tool for boosting marketing and branding strategies too. A live stream can later be accessed as VoD. Choosing the <a href="https://castr.com/blog/best-vod-platforms/" rel="noreferrer">best VoD platform</a> ensures a smooth and engaging viewing experience for your audience.</p><h3 id="how-to-choose-an-ideal-video-on-demand-facility">How to choose an ideal Video on Demand facility?</h3><p>Choosing an ideal VoD facility matters because it directly shapes engagement. Viewers expect solid security and a reliable system that does not lag or throw unusual issues at them. Good branding is necessary, but in current trends businesses have turned customer-oriented, designing features around customer ease.<br/></p><p>VideoSDK makes the client experience better. We bring all the features of live streaming as well as video-on-demand together on one platform. We make your experience worth sharing with others.
Our product range includes:</p><ul><li>Customizable API and SDK with UI library</li><li>Low-latency scalable live streaming</li><li>Video-on-demand facility</li><li>Content Delivery Networks and more</li></ul><blockquote>We serve all these products to our clients on a single platform, enhancing development opportunities with a user-friendly approach.</blockquote><h2 id="videosdk-offers-clients-amazing-video-on-demand-facilities">VideoSDK offers clients amazing video on demand facilities<br/></h2><p><strong>(1) </strong><a href="https://docs.videosdk.live/docs/video-on-demand/intro"><strong>Whitelabel</strong></a></p><p>You can host your videos on live streaming with the Whitelabel facility, branding the screen with your own <a href="https://www.kittl.com/create/logos">logo</a>.</p><p><strong>(2) </strong><a href="https://docs.videosdk.live/docs/video-on-demand/tutorials/stream-video-files-tutorial"><strong>Scalable streaming</strong></a></p><p>We cater to flawless, uninterrupted streaming, with a stream recording facility, and help you engage millions of users.</p><p><strong>(3) </strong><a href="https://docs.videosdk.live/docs/video-on-demand/storage-api-reference/auth"><strong>Compatibility</strong></a></p><p>We are compatible with 98% of devices, including Android, iOS, and more. Our aim is to provide maximum engagement for your application.</p><p><strong>(4) </strong><a href="https://docs.videosdk.live/docs/video-on-demand/storage-api-reference/create-new-url"><strong>Customizable APIs and SDKs</strong></a></p><p>VideoSDK develops APIs and SDKs tailored to each client's requirements.</p><p><strong>(5) </strong><a href="https://docs.videosdk.live/docs/video-on-demand/storage-api-reference/upload-file"><strong>Video on demand facility</strong></a></p><p>Alongside live streaming, VideoSDK also offers a flexible VoD feature, where clients can view content at their ease.</p><p><strong>(6) </strong><a href="https://docs.videosdk.live/docs/video-on-demand/storage-api-reference/list-all-files"><strong>Video playback</strong></a></p><p>We also offer video playback, where viewers can loop the video, play and pause, and speed the video up or slow it down for their comfort.</p><p><strong>(7) </strong><a href="https://docs.videosdk.live/docs/video-on-demand/encoding-api-reference/auth"><strong>Secured accessibility</strong></a></p><p>We ensure secured access to your platform for your customers, making it a better engagement platform for you.</p><p><strong>(8) </strong><a href="https://docs.videosdk.live/docs/video-on-demand/encoding-api-reference/create-encoding-job"><strong>Adaptive Live Streaming</strong></a></p><p>VideoSDK caters to scalable streaming adapted to the device, supported quality, and internet bandwidth.</p><p><strong>(9) </strong><a href="https://docs.videosdk.live/docs/video-on-demand/encoding-api-reference/list-all-encoding-jobs"><strong>Adaptive Video Streaming</strong></a></p><p>We also provide video streaming services with effective scalability and a VoD facility, supporting the majority of devices.</p><p><strong>(10) </strong><a href="https://docs.videosdk.live/docs/video-on-demand/encoding-api-reference/get-encoding-job-details"><strong>Encoding</strong></a></p><p>VideoSDK helps encode videos and images, compressing them into digital formats compatible with all mobile devices.</p><p><strong>(11) </strong><a href="https://docs.videosdk.live/docs/live-streaming/api-reference/create-live-stream"><strong>Hosting</strong></a></p><p>We provide video hosting: clients upload their videos with us, and we host them on online platforms.</p><p><strong>(12) </strong><a href="https://docs.videosdk.live/docs/live-streaming/api-reference/get-live-stream"><strong>Content Delivery Network</strong></a></p><p>We provide a global CDN with geo-replication and edge-location delivery. Additionally, tools to <a href="https://picsart.com/photo-editor/" rel="noreferrer">edit images</a> for video covers can be immensely helpful. Protected against DDoS attacks, we ensure faster delivery with enterprise-grade security.</p><p><strong>(13) </strong><a href="https://docs.videosdk.live/docs/live-streaming/api-reference/update-live-stream"><strong>Multi-platform Streaming</strong></a></p><p>Stream live on several social media platforms at once, go live in less time, and build a strong branding strategy with us.<br/></p><p>VideoSDK is an ideal platform for developing flawless streaming platforms with no extra effort. We customize our APIs and SDKs according to client preferences to increase app engagement. The CDNs we use for storing digital content are secured, enabling reliable and scalable streaming. VideoSDK excels in its features.</p><p>Apart from the above facilities, we also deliver additional features, making ourselves a reliable platform to use.
Connect with us and explore what you never explored before.<br/></p><blockquote>Reach us and get enriched with more such value content and an everlasting business corporate relation.</blockquote><h2 id="find-our-documentation-here">Find our documentation here</h2><h3 id="video-on-demand">Video On Demand</h3><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/docs/video-on-demand/intro"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Introduction | videosdk.live Documentation</div><div class="kg-bookmark-description">Video-on-demand API provides end-to-end solution for building scalable on demand video platform for millions of the users.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/favicon.ico" alt="What is Video on Demand?"><span class="kg-bookmark-author">videosdk.live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/zujonow_32.png" alt="What is Video on Demand?" onerror="this.style.display = 'none'"/></div></a></figure><h3 id="live-streaming">Live Streaming</h3><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/docs/live-streaming/intro"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Introduction | videosdk.live Documentation</div><div class="kg-bookmark-description">Live streming lets you stream your live videos with just few lines of code. Reach to your audience across the globe.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/favicon.ico" alt="What is Video on Demand?"><span class="kg-bookmark-author">videosdk.live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/assets/images/live-streaming-20847dd496d7ef5166746aaec3747f49.jpg" alt="What is Video on Demand?" 
onerror="this.style.display = 'none'"/></div></a></figure><h3 id="reach-out-to-us-we-are-the-happiest-to-serve-you">Reach out to us, we are the happiest to serve you</h3><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://videosdk.live/contact"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Contact Us | videosdk.live</div><div class="kg-bookmark-description">Contact us for low latency live streaming, video on demand APIs</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://videosdk.live/favicon/apple-touch-icon.png" alt="What is Video on Demand?"><span class="kg-bookmark-author">videosdk.live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.videosdk.live/videosdklive-thumbnail.jpg" alt="What is Video on Demand?" onerror="this.style.display = 'none'"/></div></a></figure><h2 id=""><br/></h2>]]></content:encoded></item><item><title><![CDATA[How to Integrate RTMP Live Stream in Android(Java) Video Chat App?]]></title><description><![CDATA[This comprehensive guide explores integrating the RTMP Livestream feature seamlessly into your Java app using VideoSDK and amplifying it with the capability of a livestream.]]></description><link>https://www.videosdk.live/blog/integrate-rtmp-livestream-feature-in-android-java-video-chat-app</link><guid isPermaLink="false">6605498a2a88c204ca9cf4b0</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Mon, 06 Jan 2025 11:07:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-Java-1.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-Java-1.png" alt="How to Integrate RTMP Live Stream in Android(Java) Video Chat App?"/><p>If you're on the path to enhancing 
your Android video app with the power of RTMP live streaming, you're in the right place. In this technical guide, we will explore the integration of the RTMP Livestream feature within your Android video app with the help of VideoSDK. By following the outlined steps, you will not only learn to create a robust video calling experience but also amplify it with the capability to live stream your meetings to various platforms effortlessly.</p><h2 id="goals">Goals</h2><p>By the End of this Article:</p><ol><li>Create a <a href="https://www.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK</li><li>Enable the RTMP Livestream feature.</li></ol><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>Before we dive into the implementation steps, let's make sure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://www.videosdk.live/signup">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. 
This token plays a crucial role in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/authentication-and-token#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Java Development Kit is supported.</li><li>Android Studio version 3.0 or later.</li><li>Android SDK API level 21 or higher.</li><li>A mobile device with Android 5.0 or later version.</li></ul><h2 id="integrate-videosdk">Integrate VideoSDK</h2><p>Following the account creation and token generation steps, we'll guide you through the process of adding the VideoSDK library and other dependencies to your project. We'll also ensure your app has the required permissions to access features like audio recording, camera usage, and internet connectivity, all crucial for a seamless video experience.</p><h3 id="step-a-add-the-repositories-to-the-projects-settingsgradle-file">Step (a): Add the repositories to the project's <code>settings.gradle</code> file.</h3><pre><code class="language-Java">dependencyResolutionManagement{
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}
</code></pre><h3 id="step-b-include-the-following-dependency-within-your-applications-buildgradle-file">Step (b): Include the following dependency within your application's <code>build.gradle</code> file:</h3><pre><code class="language-Java">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.26'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // Other dependencies specific to your app
}
</code></pre><blockquote>If your project has set <code>android.useAndroidX=true</code>, then set <code>android.enableJetifier=true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflict.</blockquote><h3 id="step-c-add-permissions-to-your-project">Step (c): Add permissions to your project</h3><p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-Java">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
</code></pre><p>These permissions are essential for enabling core functionalities like audio recording, internet connectivity for real-time communication, and camera access for video streams within your video application.</p><h2 id="essential-steps-for-building-the-video-calling-functionality">Essential Steps for Building the Video Calling Functionality</h2><p>Now that your project is set up with VideoSDK, we'll delve into the functionalities that make up your video application. This section outlines the essential steps for implementing core functionalities within your app.</p><p>This section will guide you through four key aspects:</p><h3 id="step-1-generate-a-meetingid">Step 1: Generate a <code>meetingId</code></h3><p>Now, we can create a <code>meetingId</code> using VideoSDK's Rooms API. You can refer to this <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/setup-call/initialize-meeting#generating-meeting-id">documentation</a> to generate a meetingId.</p><h3 id="step-2-initializing-the-meeting">Step 2: Initializing the Meeting</h3><p>After getting the <code>meetingId</code>, the next step is initializing the meeting. For that, we need to:</p><ol><li>Initialize VideoSDK.</li><li>Configure <strong>VideoSDK</strong> with a token.</li><li>Initialize the meeting with required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code> and more.</li><li>Add a <code>MeetingEventListener</code> for listening to events such as Meeting Join/Left and Participant Join/Left.</li><li>Join the room with the <code>meeting.join()</code> method.</li></ol><p>Please copy the .xml file of the <code>MeetingActivity</code> from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/activity_meeting.xml">here</a>.</p><pre><code class="language-Java">public class MeetingActivity extends AppCompatActivity {
  // declare the variables we will be using to handle the meeting
  private Meeting meeting;
  private boolean micEnabled = true;
  private boolean webcamEnabled = true;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_meeting);

    final String token = ""; // Replace with the token you generated from the VideoSDK Dashboard
    final String meetingId = ""; // Replace with the meetingId you have generated
    final String participantName = "John Doe";

    // 1. Initialize VideoSDK
    VideoSDK.initialize(getApplicationContext());

    // 2. Configure VideoSDK with the token
    VideoSDK.config(token);

    // 3. Initialize VideoSDK Meeting
    meeting = VideoSDK.initMeeting(
            MeetingActivity.this, meetingId, participantName,
            micEnabled, webcamEnabled,null, null, false, null, null);

    // 4. Add event listener for listening upcoming events
    meeting.addEventListener(meetingEventListener);

    // 5. Join VideoSDK Meeting
    meeting.join();

    ((TextView)findViewById(R.id.tvMeetingId)).setText(meetingId);
  }

  // creating the MeetingEventListener
  private final MeetingEventListener meetingEventListener = new MeetingEventListener() {
    @Override
    public void onMeetingJoined() {
      Log.d("#meeting", "onMeetingJoined()");
    }

    @Override
    public void onMeetingLeft() {
      Log.d("#meeting", "onMeetingLeft()");
      meeting = null;
      if (!isDestroyed()) finish();
    }

    @Override
    public void onParticipantJoined(Participant participant) {
      Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " joined", Toast.LENGTH_SHORT).show();
    }

    @Override
    public void onParticipantLeft(Participant participant) {
      Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " left", Toast.LENGTH_SHORT).show();
    }
  };
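
  // --- Optional sketch (assumption): generating a meetingId from this activity ---
  // The article creates the meetingId via VideoSDK's Rooms API; this helper shows
  // one possible shape using the android-networking dependency added in Step (b).
  // The endpoint and the "roomId" response field follow the Rooms API docs; verify
  // them against the current API reference before relying on this in production.
  private void createMeetingId(final String token) {
    AndroidNetworking.post("https://api.videosdk.live/v2/rooms")
            .addHeaders("Authorization", token)
            .build()
            .getAsJSONObject(new JSONObjectRequestListener() {
              @Override
              public void onResponse(JSONObject response) {
                try {
                  // "roomId" is the id of the newly created meeting
                  Log.d("#meeting", "created meetingId: " + response.getString("roomId"));
                } catch (JSONException e) {
                  e.printStackTrace();
                }
              }

              @Override
              public void onError(ANError anError) {
                anError.printStackTrace();
              }
            });
  }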
}</code></pre><h3 id="step-3-handle-local-participant-media">Step 3: Handle Local Participant Media</h3><p>After successfully entering the meeting, it's time to manage the webcam and microphone for the local participant (you).</p><p>To enable or disable the webcam, we'll use the <code>Meeting</code> class methods <code>enableWebcam()</code> and <code>disableWebcam()</code>, respectively. Similarly, to mute or unmute the microphone, we'll utilize the methods <code>muteMic()</code> and <code>unmuteMic()</code>.</p><pre><code class="language-Java">public class MeetingActivity extends AppCompatActivity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_meeting);
    //...Meeting Setup is Here

    // actions
    setActionListeners();
  }

  private void setActionListeners() {
    // toggle mic
    findViewById(R.id.btnMic).setOnClickListener(view -&gt; {
      if (micEnabled) {
        // this will mute the local participant's mic
        meeting.muteMic();
        Toast.makeText(MeetingActivity.this, "Mic Disabled", Toast.LENGTH_SHORT).show();
      } else {
        // this will unmute the local participant's mic
        meeting.unmuteMic();
        Toast.makeText(MeetingActivity.this, "Mic Enabled", Toast.LENGTH_SHORT).show();
      }
      micEnabled = !micEnabled;
    });

    // toggle webcam
    findViewById(R.id.btnWebcam).setOnClickListener(view -&gt; {
      if (webcamEnabled) {
        // this will disable the local participant webcam
        meeting.disableWebcam();
        Toast.makeText(MeetingActivity.this, "Webcam Disabled", Toast.LENGTH_SHORT).show();
      } else {
        // this will enable the local participant webcam
        meeting.enableWebcam();
        Toast.makeText(MeetingActivity.this, "Webcam Enabled", Toast.LENGTH_SHORT).show();
      }
      webcamEnabled = !webcamEnabled;
    });

    // leave meeting
    findViewById(R.id.btnLeave).setOnClickListener(view -&gt; {
      // this will make the local participant leave the meeting
      meeting.leave();
    });
  }
}</code></pre><h3 id="step-4-handling-the-participants-view">Step 4: Handling the Participants' View</h3><p>To display a list of participants in your video UI, we'll utilize a <code>RecyclerView</code>.</p><p><strong>(a)</strong> This involves creating a new layout for the participant view named <code>item_remote_peer.xml</code> in the <code>res/layout</code> folder. You can copy the <code>item_remote_peer.xml</code> file from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/item_remote_peer.xml">here</a>.</p><p><strong>(b)</strong> Create a RecyclerView adapter, <code>ParticipantAdapter</code>, which will be responsible for displaying the participant list. Within this adapter, define a <code>PeerViewHolder</code> class that extends <code>RecyclerView.ViewHolder</code>.</p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  @NonNull
  @Override
  public PeerViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
      return new PeerViewHolder(LayoutInflater.from(parent.getContext()).inflate(R.layout.item_remote_peer, parent, false));
  }

  @Override
  public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
  }

  @Override
  public int getItemCount() {
      return 0;
  }

  static class PeerViewHolder extends RecyclerView.ViewHolder {
    // 'VideoView' to show Video Stream
    public VideoView participantView;
    public TextView tvName;
    public View itemView;

    PeerViewHolder(@NonNull View view) {
        super(view);
        itemView = view;
        tvName = view.findViewById(R.id.tvName);
        participantView = view.findViewById(R.id.participantView);
    }
  }
}</code></pre><p><strong>(c)</strong> Now, we will render the list of <code>Participant</code>s for the meeting. We will initialize this list in the constructor of the <code>ParticipantAdapter</code>.</p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  // creating an empty list which will store all participants
  private final List&lt;Participant&gt; participants = new ArrayList&lt;&gt;();

  public ParticipantAdapter(Meeting meeting) {
    // adding the local participant(You) to the list
    participants.add(meeting.getLocalParticipant());

    // adding Meeting Event listener to get the participant join/leave event in the meeting.
    meeting.addEventListener(new MeetingEventListener() {
      @Override
      public void onParticipantJoined(Participant participant) {
        // add participant to the list
        participants.add(participant);
        notifyItemInserted(participants.size() - 1);
      }

      @Override
      public void onParticipantLeft(Participant participant) {
        int pos = -1;
        for (int i = 0; i &lt; participants.size(); i++) {
          if (participants.get(i).getId().equals(participant.getId())) {
            pos = i;
            break;
          }
        }
        // remove the participant from the list by index (matched by id above)
        if (pos &gt;= 0) {
          participants.remove(pos);
          notifyItemRemoved(pos);
        }
      }
    });
  }

  // replace getItemCount() method with following.
  // this method returns the size of total number of participants
  @Override
  public int getItemCount() {
    return participants.size();
  }
  //...
}</code></pre><p><strong>(d)</strong> We have listed our participants. Let's set up the view holder to display a participant's video.</p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  // replace onBindViewHolder() method with following.
  @Override
  public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
    Participant participant = participants.get(position);

    holder.tvName.setText(participant.getDisplayName());

    // adding the initial video stream for the participant into the 'VideoView'
    for (Map.Entry&lt;String, Stream&gt; entry : participant.getStreams().entrySet()) {
      Stream stream = entry.getValue();
      if (stream.getKind().equalsIgnoreCase("video")) {
        holder.participantView.setVisibility(View.VISIBLE);
        VideoTrack videoTrack = (VideoTrack) stream.getTrack();
        holder.participantView.addTrack(videoTrack);
        break;
      }
    }
    // add Listener to the participant which will update start or stop the video stream of that participant
    participant.addEventListener(new ParticipantEventListener() {
      @Override
      public void onStreamEnabled(Stream stream) {
        if (stream.getKind().equalsIgnoreCase("video")) {
          holder.participantView.setVisibility(View.VISIBLE);
          VideoTrack videoTrack = (VideoTrack) stream.getTrack();
          holder.participantView.addTrack(videoTrack);
        }
      }

      @Override
      public void onStreamDisabled(Stream stream) {
        if (stream.getKind().equalsIgnoreCase("video")) {
          holder.participantView.removeTrack();
          holder.participantView.setVisibility(View.GONE);
        }
      }
    });
  }
}</code></pre><p><strong>(e)</strong> Now, add this adapter to the <code>MeetingActivity</code>.</p><pre><code class="language-Java">@Override
protected void onCreate(Bundle savedInstanceState) {
  //Meeting Setup...
  //...
  final RecyclerView rvParticipants = findViewById(R.id.rvParticipants);
  rvParticipants.setLayoutManager(new GridLayoutManager(this, 2));
  rvParticipants.setAdapter(new ParticipantAdapter(meeting));
}</code></pre><h2 id="integrate-rtmp-livestream-feature">Integrate RTMP Livestream Feature</h2><p>RTMP is a popular protocol for live streaming video content from VideoSDK to platforms such as YouTube, Twitch, Facebook, and others. It enables the transmission of live video streams by connecting VideoSDK to the platform's RTMP server.</p><p>VideoSDK allows you to live stream your meeting to any platform that supports RTMP ingestion. By providing the platform-specific stream key and stream URL, you can connect to the platform's RTMP server and transmit the live video stream.</p><p>VideoSDK also allows you to configure the livestream layout in numerous ways: you can simply set one of the prebuilt layouts in the configuration, or provide your own custom template to run the livestream with the layout of your choice.</p><p>This guide will provide an overview of how to implement RTMP Livestreaming in your video app.</p><h3 id="start-rtmp-livestreaming">Start RTMP Livestreaming</h3><p><code>startLivestream()</code>, accessible from the <code>Meeting</code> class, can be used to start an RTMP live stream of the meeting. This method accepts two parameters:</p><ul><li><code>outputs</code>: This parameter accepts a list of <code>LivestreamOutput</code> objects that contain the RTMP <code>url</code> and <code>streamKey</code> of the platforms on which you want to start the live stream.</li><li><code>config</code>: This parameter defines how the live stream layout should look. You can pass <code>null</code> for the default layout.</li></ul><pre><code class="language-Java">JSONObject config = new JSONObject();

// Layout Configuration
JSONObject layout = new JSONObject();
JsonUtils.jsonPut(layout, "type", "GRID"); // "SPOTLIGHT" | "SIDEBAR",  Default : "GRID"
JsonUtils.jsonPut(layout, "priority", "SPEAKER");  // "PIN", Default : "SPEAKER"
JsonUtils.jsonPut(layout, "gridSize", 4); // MAX : 4
JsonUtils.jsonPut(config, "layout", layout);

// Theme of livestream layout
JsonUtils.jsonPut(config, "theme", "DARK"); //  "LIGHT" | "DEFAULT"

List&lt;LivestreamOutput&gt; outputs = new ArrayList&lt;&gt;();
outputs.add(new LivestreamOutput(RTMP_URL, RTMP_STREAM_KEY));
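// Since outputs is a list, you can stream to several platforms at once by
// adding one LivestreamOutput per platform. The variable names below are
// placeholders for a second platform's RTMP URL and stream key:
// outputs.add(new LivestreamOutput(YOUTUBE_RTMP_URL, YOUTUBE_STREAM_KEY));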

meeting.startLivestream(outputs, config);</code></pre><h3 id="stop-rtmp-livestreaming">Stop RTMP Livestreaming</h3><ul><li><code>stopLivestream()</code>, accessible from the <code>Meeting</code> class, is used to stop the meeting's live stream.</li></ul><pre><code class="language-Java">// keep track of the livestream status; in a real app, make this a class
// field and update it from the onLivestreamStateChanged event
boolean liveStream = false;

findViewById(R.id.btnLiveStream).setOnClickListener(view -&gt; {
  if (!liveStream) {
    JSONObject config = new JSONObject();
    JSONObject layout = new JSONObject();
    JsonUtils.jsonPut(layout, "type", "GRID");
    JsonUtils.jsonPut(layout, "priority", "SPEAKER");
    JsonUtils.jsonPut(layout, "gridSize", 4);
    JsonUtils.jsonPut(config, "layout", layout);
    JsonUtils.jsonPut(config, "theme", "DARK");

    List&lt;LivestreamOutput&gt; outputs = new ArrayList&lt;&gt;();
    outputs.add(new LivestreamOutput(RTMP_URL, RTMP_STREAM_KEY));

    // Start LiveStream
    meeting.startLivestream(outputs,config);
  } else {
    // Stop LiveStream
    meeting.stopLivestream();
  }
});</code></pre><h3 id="event-associated-with-livestream">Event associated with Livestream</h3><ul><li><code>onLivestreamStateChanged</code> - this event triggers whenever the meeting's livestream state changes.</li></ul><pre><code class="language-Java">private final MeetingEventListener meetingEventListener = new MeetingEventListener() {
  @Override
  public void onLivestreamStateChanged(String livestreamState) {
    switch (livestreamState) {
      case "LIVESTREAM_STARTING":
          Log.d("LivestreamStateChanged", "Meeting livestream is starting");
          break;
      case "LIVESTREAM_STARTED":
          liveStream = true; // keep the start/stop toggle flag in sync
          Log.d("LivestreamStateChanged", "Meeting livestream is started");
          break;
      case "LIVESTREAM_STOPPING":
          Log.d("LivestreamStateChanged", "Meeting livestream is stopping");
          break;
      case "LIVESTREAM_STOPPED":
          liveStream = false; // keep the start/stop toggle flag in sync
          Log.d("LivestreamStateChanged", "Meeting livestream is stopped");
          break;
    }
  }
};

@Override
protected void onCreate(Bundle savedInstanceState) {
  //...

  // add listener to meeting
  meeting.addEventListener(meetingEventListener);
}</code></pre><h3 id="custom-template%E2%80%8B">Custom Template<a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream#custom-template">​</a></h3><p>With VideoSDK, you can also use your own custom-designed layout template to livestream the meetings. To use the custom template, you need to create a template for which you can <a href="https://docs.videosdk.live/android/guide/interactive-live-streaming/custom-template">follow this guide</a>. Once you have set the template, you can use the <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-livestream">REST API to start</a> the livestream with the <code>templateURL</code> parameter.</p><h2 id="conclusion">Conclusion</h2><p>Integrating the RTMP Livestream feature into your Android (Java) video call application using VideoSDK opens up a world of possibilities for enhancing real-time video communication. VideoSDK offers the flexibility needed to tailor the live stream experience according to your app's unique requirements. By following the instructions provided, you can seamlessly incorporate RTMP Livestreaming capabilities, extend the reach of your application to popular streaming platforms, and deliver captivating visual experiences to your audience. </p><p>To unlock the full potential of VideoSDK and create easy-to-use video experiences, developers are encouraged to sign up for VideoSDK and further explore its features. 
</p><p><a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and get <strong>10,000 minutes free</strong> to take your video app to the next level!</p>]]></content:encoded></item><item><title><![CDATA[How to Build Live Streaming App with React?]]></title><description><![CDATA[In this article, we provide a step-by-step guide to building a robust live streaming app with React and VideoSDK.]]></description><link>https://www.videosdk.live/blog/react-interactive-live-streaming</link><guid isPermaLink="false">642d44cc2c7661a49f3824b0</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Mon, 06 Jan 2025 10:41:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/04/React.js-Live-Streaming.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="%F0%9F%93%A1-what-is-live-streaming">📡 What is Live Streaming?</h2>
<!--kg-card-end: markdown--><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/React.js-Live-Streaming.jpg" alt="How to Build Live Streaming App with React?"/><p>Live Streaming is gaining popularity among younger audiences due to its ability to create a sense of community and provide immersive experiences, diverse content, and monetization opportunities. It's used by content creators, influencers, brands, celebrities, events, and educators to create engaging and meaningful content. </p><p>As a software developer in the video space, it's essential to understand the concepts and differences between traditional and <a href="https://www.videosdk.live/interactive-live-streaming">Interactive Live Streaming</a>. This allows for real-time engagement between the streamer and the audience, offering a personalized and innovative experience. This form of media is becoming increasingly popular and provides unique opportunities for content creators and businesses to build a loyal community and monetize their streams.</p><!--kg-card-begin: markdown--><h3 id="explore-top-platforms-for-interactive-live-streaming">Explore Top Platforms for Interactive Live Streaming</h3>
<!--kg-card-end: markdown--><p>Several platforms in the market offer Interactive Live Streaming, and among the most popular ones are Twitch, YouTube Live, Facebook Live, and Instagram Live.</p><!--kg-card-begin: markdown--><h3 id="creating-your-own-interactive-live-streaming-app">Creating Your Own Interactive Live Streaming App</h3>
<!--kg-card-end: markdown--><p>According to Simpalm, an app and web development company, to successfully host an interactive live streaming platform or build your own live streaming app, a robust infrastructure is essential to handle the workload of serving thousands of participants while maintaining reliability, stability, and optimization. However, building this type of highly complex application is difficult to do on your own. That's why VideoSDK provides a well-tested infrastructure that can handle this kind of workload.</p><p>Thus, we will be creating our Interactive Live Streaming App using VideoSDK.live, which will ensure a reliable and stable platform for our users. Let's get started!</p><!--kg-card-begin: markdown--><h2 id="%F0%9F%9B%A0%EF%B8%8F-steps-to-build-an-interactive-live-streaming-app-in-reactjs">🛠️ Steps to Build an Interactive Live Streaming App in ReactJS</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="tools-for-building-an-interactive-live-streaming-app">Tools for building an Interactive Live Streaming App</h3>
<!--kg-card-end: markdown--><ul><li><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">VideoSDK.Live's React SDK</a></li><li>VideoSDK.Live's HLS Composition</li><li><a href="https://docs.videosdk.live/react/guide/interactive-live-streaming/integrate-hls/overview">VideoSDK.Live's HLS Streaming</a></li></ul><!--kg-card-begin: markdown--><h3 id="step-1-understanding-app-functionalities-and-project-structure">STEP 1: Understanding App Functionalities and Project Structure</h3>
<!--kg-card-end: markdown--><p>I will be creating this app for two types of users: <code>Speaker</code> and <code>Viewer</code>.</p><ul><li>Speakers will have all media controls, i.e., they can toggle their webcam and mic to share information with the viewers. Speakers can also start an HLS stream so that viewers can consume the content.</li><li>Viewers will not have any media controls; they will just watch the VideoSDK HLS stream started by the speaker.</li></ul><!--kg-card-begin: markdown--><h4 id="pre-requisites-before-starting-to-write-code">Pre-requisites before starting to write code</h4>
<!--kg-card-end: markdown--><ul><li>A VideoSDK account; if you don't have one, you can sign up at <a href="https://app.videosdk.live/signup">https://app.videosdk.live/signup</a></li><li>A coding environment for React</li><li>A good understanding of React</li></ul><p>After our coding environment is set up, we can start writing our code. First, I will create a new <strong>React Streaming</strong> app using <code>create-react-app</code>, and we will also install some useful dependencies.</p><pre><code class="language-js">npx create-react-app videosdk-interactive-live-streaming-app

cd videosdk-interactive-live-streaming-app

npm install @videosdk.live/react-sdk react-player hls.js</code></pre><!--kg-card-begin: markdown--><h4 id="project-structure">Project Structure</h4>
<!--kg-card-end: markdown--><p>I will create 3 screens:</p><ol><li>Welcome Screen</li><li>Speaker Screen</li><li>Viewer Screen</li></ol><p>Below is the folder structure of our app.</p><pre><code class="language-js">root/
├──node_modules/
├──public/
├──src/
├────screens/
├───────WelcomeScreenContainer.js
├───────speakerScreen/
├──────────MediaControlsContainer.js
├──────────ParticipantsGridContainer.js
├──────────SingleParticipantContainer.js
├──────────SpeakerScreenContainer.js
├──────ViewerScreenContainer.js
├────api.js
├────App.js
├────index.js</code></pre><!--kg-card-begin: markdown--><h4 id="app-container">App Container</h4>
<!--kg-card-end: markdown--><p>I will prepare a basic <code>App.js</code>; this file will contain all the screens and render them conditionally according to <code>appData</code> state changes.</p><p>Add this code into <code>/src/App.js</code>:</p><pre><code class="language-js">import React, { useState } from "react";
import SpeakerScreenContainer from "./screens/speakerScreen/SpeakerScreenContainer";
import ViewerScreenContainer from "./screens/ViewerScreenContainer";
import WelcomeScreenContainer from "./screens/WelcomeScreenContainer";

const App = () =&gt; {
  const [appData, setAppData] = useState({ meetingId: null, mode: null });

  return appData.meetingId ? (
    appData.mode === "CONFERENCE" ? (
      &lt;SpeakerScreenContainer meetingId={appData.meetingId} /&gt;
    ) : (
      &lt;ViewerScreenContainer meetingId={appData.meetingId} /&gt;
    )
  ) : (
    &lt;WelcomeScreenContainer setAppData={setAppData} /&gt;
  );
};

export default App;</code></pre><!--kg-card-begin: markdown--><h3 id="step-2-welcome-screen">STEP 2: Welcome Screen</h3>
<!--kg-card-end: markdown--><p>Creating a new meeting will require an API call, so we will write some code for that.</p><p>A temporary auth token can be fetched from our user dashboard, but in production, we recommend using an authToken generated by your servers.</p><p>Follow this <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/authentication-and-token">guide</a> to get a temporary auth token from the user dashboard.</p><p>Add this code into <code>src/api.js</code>:</p><pre><code class="language-js">export const authToken = "temporary-generated-auth-token-goes-here";

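// Optional sketch (assumption): validate a user-entered meetingId before
// joining. The endpoint path follows VideoSDK's room validation API; verify
// it against the current API reference before relying on it.
export const validateRoom = async (roomId) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms/validate/${roomId}`, {
    method: "GET",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
  });
  return res.ok;
};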
export const createNewRoom = async () =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
  });

  const { roomId } = await res.json();
  return roomId;
};</code></pre><p><code>WelcomeScreenContainer</code> lets speakers create a new meeting. It also allows you to enter an already-created meetingId to join an existing session.</p><p>Add this code into <code>src/screens/WelcomeScreenContainer.js</code>:</p><pre><code class="language-js">import React, { useState } from "react";
import { createNewRoom } from "../api";

const WelcomeScreenContainer = ({ setAppData }) =&gt; {
  const [meetingId, setMeetingId] = useState("");

  const createClick = async () =&gt; {
    const meetingId = await createNewRoom();

    setAppData({ mode: "CONFERENCE", meetingId });
  };
  const hostClick = () =&gt; setAppData({ mode: "CONFERENCE", meetingId });
  const viewerClick = () =&gt; setAppData({ mode: "VIEWER", meetingId });

  return (
    &lt;div&gt;
      &lt;button onClick={createClick}&gt;Create new Meeting&lt;/button&gt;
      &lt;p&gt;{"\n\nor\n\n"}&lt;/p&gt;
      &lt;input
        placeholder="Enter meetingId"
        onChange={(e) =&gt; setMeetingId(e.target.value)}
        value={meetingId}
      /&gt;
      &lt;p&gt;{"\n\n"}&lt;/p&gt;
      &lt;button onClick={hostClick}&gt;Join As Host&lt;/button&gt;
      &lt;button onClick={viewerClick}&gt;Join As Viewer&lt;/button&gt;
    &lt;/div&gt;
  );
};

export default WelcomeScreenContainer;</code></pre><!--kg-card-begin: markdown--><h3 id="step-3-speaker-screen">STEP 3: Speaker Screen</h3>
<!--kg-card-end: markdown--><p>This screen will contain all media controls and the participants' grid. First, we'll wrap the screen's components in a <code>MeetingProvider</code> configured with the joining participant's details.</p><p><code>src/screens/speakerScreen/SpeakerScreenContainer.js</code></p><pre><code class="language-js">import { MeetingProvider } from "@videosdk.live/react-sdk";
import React from "react";
import MediaControlsContainer from "./MediaControlsContainer";
import ParticipantsGridContainer from "./ParticipantsGridContainer";

import { authToken } from "../../api";

const SpeakerScreenContainer = ({ meetingId }) =&gt; {
  return (
    &lt;MeetingProvider
      token={authToken}
      config={{
        meetingId,
        name: "C.V. Raman",
        micEnabled: true,
        webcamEnabled: true,
      }}
      joinWithoutUserInteraction
    &gt;
      &lt;MediaControlsContainer meetingId={meetingId} /&gt;
      &lt;ParticipantsGridContainer /&gt;
    &lt;/MeetingProvider&gt;
  );
};

export default SpeakerScreenContainer;</code></pre><!--kg-card-begin: markdown--><h4 id="mediacontrols">MediaControls</h4>
<!--kg-card-end: markdown--><p>This container will be used for toggling the mic and webcam. Also, we will add some code for starting HLS streaming.</p><p><code>src/screens/speakerScreen/MediaControlsContainer.js</code></p><pre><code class="language-js">import { useMeeting, Constants } from "@videosdk.live/react-sdk";
import React, { useMemo } from "react";

const MediaControlsContainer = () =&gt; {
  const { toggleMic, toggleWebcam, startHls, stopHls, hlsState, meetingId } =
    useMeeting();

  const { isHlsStarted, isHlsStopped, isHlsPlayable } = useMemo(
    () =&gt; ({
      isHlsStarted: hlsState === Constants.hlsEvents.HLS_STARTED,
      isHlsStopped: hlsState === Constants.hlsEvents.HLS_STOPPED,
      isHlsPlayable: hlsState === Constants.hlsEvents.HLS_PLAYABLE,
    }),
    [hlsState]
  );

  const _handleToggleHls = () =&gt; {
    // ignore clicks while HLS is in a transitional (starting/stopping) state
    if (isHlsStarted) {
      stopHls();
    } else if (isHlsStopped) {
      startHls({ quality: "high" });
    }
  };

  return (
    &lt;div&gt;
      &lt;p&gt;MeetingId: {meetingId}&lt;/p&gt;
      &lt;p&gt;HLS state: {hlsState}&lt;/p&gt;
      {isHlsPlayable &amp;&amp; &lt;p&gt;Viewers will now be able to watch the stream.&lt;/p&gt;}
      &lt;button onClick={toggleMic}&gt;Toggle Mic&lt;/button&gt;
      &lt;button onClick={toggleWebcam}&gt;Toggle Webcam&lt;/button&gt;
      &lt;button onClick={_handleToggleHls}&gt;
        {isHlsStarted ? "Stop Hls" : "Start Hls"}
      &lt;/button&gt;
    &lt;/div&gt;
  );
};

export default MediaControlsContainer;</code></pre><!--kg-card-begin: markdown--><h4 id="participantgridcontainer">ParticipantGridContainer</h4>
<!--kg-card-end: markdown--><p>This will get all joined participants from the <code>useMeeting</code> hook and render them individually. Here we will be using <code>SingleParticipantContainer</code> for rendering a single participant's webcam stream.</p><p><code>src/screens/speakerScreen/ParticipantsGridContainer.js</code></p><pre><code class="language-js">import { useMeeting } from "@videosdk.live/react-sdk";
import React, { useMemo } from "react";
import SingleParticipantContainer from "./SingleParticipantContainer";

const ParticipantsGridContainer = () =&gt; {
  const { participants } = useMeeting();

  const participantIds = useMemo(
    () =&gt; [...participants.keys()],
    [participants]
  );

  return (
    &lt;div&gt;
      {participantIds.map((participantId) =&gt; (
        &lt;SingleParticipantContainer
          {...{ participantId, key: participantId }}
        /&gt;
      ))}
    &lt;/div&gt;
  );
};

export default ParticipantsGridContainer;</code></pre><!--kg-card-begin: markdown--><h4 id="singleparticipantcontainer">SingleParticipantContainer</h4>
<!--kg-card-end: markdown--><p>This container receives a <code>participantId</code> from props and gets the webcam stream, mic stream, and other participant information from the <code>useParticipant</code> hook.</p><p>It renders both the audio and video streams of the participant whose <code>participantId</code> is provided via props.</p><p><code>src/screens/speakerScreen/SingleParticipantContainer.js</code></p><pre><code class="language-js">import { useParticipant } from "@videosdk.live/react-sdk";
import React, { useEffect, useMemo, useRef } from "react";
import ReactPlayer from "react-player";

const SingleParticipantContainer = ({ participantId }) =&gt; {
  const { micOn, micStream, isLocal, displayName, webcamStream, webcamOn } =
    useParticipant(participantId);

  const audioPlayer = useRef();

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() =&gt; {
    if (!audioPlayer.current) return;

    if (!isLocal &amp;&amp; micOn &amp;&amp; micStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(micStream.track);

      audioPlayer.current.srcObject = mediaStream;
      audioPlayer.current.play().catch((err) =&gt; {
        // Browsers block autoplay until the user interacts with the page
        if (
          err.message ===
          "play() failed because the user didn't interact with the document first. https://goo.gl/xX8pDD"
        ) {
          console.error("audio " + err.message);
        }
      });
    } else {
      // Detach the stream when the mic is off or for the local participant
      audioPlayer.current.srcObject = null;
    }
  }, [micStream, micOn, isLocal, participantId]);

  return (
    &lt;div style={{ height: 200, width: 360, position: "relative" }}&gt;
      &lt;audio autoPlay playsInline controls={false} ref={audioPlayer} /&gt;
      &lt;div
        style={{ position: "absolute", background: "#ffffffb3", padding: 8 }}
      &gt;
        &lt;p&gt;Name: {displayName}&lt;/p&gt;
        &lt;p&gt;Webcam: {webcamOn ? "on" : "off"}&lt;/p&gt;
        &lt;p&gt;Mic: {micOn ? "on" : "off"}&lt;/p&gt;
      &lt;/div&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          playsinline // required for inline playback on iOS Safari
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          url={videoStream}
          height={"100%"}
          width={"100%"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
};

export default SingleParticipantContainer;</code></pre><p>Our speaker screen is complete; now we can start coding <code>ViewerScreenContainer</code>.</p><!--kg-card-begin: markdown--><h3 id="step-4-viewer-screen">STEP 4: Viewer Screen</h3>
<!--kg-card-end: markdown--><p>The viewer screen is used by viewer participants; they will watch the HLS stream once the speaker starts streaming.</p><p>Like the speaker screen, this screen also goes through an initialization process.</p><p><code>src/screens/ViewerScreenContainer.js</code></p><pre><code class="language-js">import {
  MeetingConsumer,
  Constants,
  MeetingProvider,
  useMeeting,
} from "@videosdk.live/react-sdk";
import React, { useEffect, useMemo, useRef } from "react";
import Hls from "hls.js";
import { authToken } from "../api";

const HLSPlayer = () =&gt; {
  const { hlsUrls, hlsState } = useMeeting();

  const playerRef = useRef(null);

  const hlsPlaybackHlsUrl = useMemo(() =&gt; hlsUrls.playbackHlsUrl, [hlsUrls]);

  useEffect(() =&gt; {
    if (Hls.isSupported()) {
      const hls = new Hls({
        capLevelToPlayerSize: true,
        maxLoadingDelay: 4,
        minAutoBitrate: 0,
        autoStartLoad: true,
        defaultAudioCodec: "mp4a.40.2",
      });

      hls.loadSource(hlsPlaybackHlsUrl);
      hls.attachMedia(playerRef.current);

      // Clean up the previous Hls instance when the URL changes or on unmount
      return () =&gt; hls.destroy();
    } else if (typeof playerRef.current?.play === "function") {
      // Safari supports HLS natively, so set the source directly
      playerRef.current.src = hlsPlaybackHlsUrl;
      playerRef.current.play();
    }
  }, [hlsPlaybackHlsUrl, hlsState]);

  return (
    &lt;video
      ref={playerRef}
      id="hlsPlayer"
      autoPlay
      controls
      style={{ width: "70%", height: "70%" }}
      playsInline
      onError={(err) =&gt; console.log(err, "hls video error")}
    &gt;&lt;/video&gt;
  );
};

const ViewerScreenContainer = ({ meetingId }) =&gt; {
  return (
    &lt;MeetingProvider
      token={authToken}
      config={{ meetingId, name: "C.V. Raman", mode: "VIEWER" }}
      joinWithoutUserInteraction
    &gt;
      &lt;MeetingConsumer&gt;
        {({ hlsState }) =&gt;
          hlsState === Constants.hlsEvents.HLS_PLAYABLE ? (
            &lt;HLSPlayer /&gt;
          ) : (
            &lt;p&gt;Waiting for host to start stream...&lt;/p&gt;
          )
        }
      &lt;/MeetingConsumer&gt;
    &lt;/MeetingProvider&gt;
  );
};

export default ViewerScreenContainer;</code></pre><p>Our viewer screen is complete, and now we can test our application.</p><p><code>npm run start</code></p><!--kg-card-begin: markdown--><h2 id="%F0%9F%93%BA-output-of-interactive-live-streaming-app">📺 Output of Interactive Live Streaming App</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><iframe width="560" height="315" src="https://www.youtube.com/embed/FfQZnBH3zMQ" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""/><!--kg-card-end: html--><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/React-Live-Streaming.gif" class="kg-image" alt="How to Build Live Streaming App with React?" loading="lazy" width="540" height="340"/></figure><p>The Source Code of this app is available in this <a href="https://github.com/ChintanRajpara/videosdk-interactive-live-streaming-app">Github Repo</a>.</p><!--kg-card-begin: markdown--><h2 id="%E2%9D%93-what-next">❓ What Next?</h2>
<!--kg-card-end: markdown--><p>This was a very basic example of an interactive Live Streaming App using Video SDK, you can customize it in your own way. </p><ul><li>Add more CSS to make the UI more interactive</li><li>Add Chat using <a href="https://docs.videosdk.live/react/api/sdk-reference/use-pubsub">PubSub</a></li><li>Implement <a href="https://docs.videosdk.live/react/api/sdk-reference/use-meeting/methods#changemode">Change Mode</a>, by this, we can switch any participant from Viewer to Speaker, or vice versa.</li><li>You can also take reference from our Prebuilt App which is built using VideoSDK's React package. Here is the <a href="https://github.com/videosdk-live/videosdk-rtc-react-prebuilt-ui">Github Repo</a>.</li></ul><h2 id="%F0%9F%93%98-more-react-resources">? More React Resources</h2><ul><li><a href="https://www.videosdk.live/blog/react-js-video-calling">React Video Calling App</a></li><li><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">React Video Call Quick Start Docs</a></li><li><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start-ILS">React Interactive Live Streaming Quick Start Docs</a></li><li><a href="https://dev.to/video-sdk/build-video-calling-app-using-react-hooks-1a79">Build a Video Chat App with React Hooks</a></li><li><a href="https://docs.videosdk.live/code-sample">Code Samples</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Best Vonage Video Competitors in (2025)]]></title><description><![CDATA[Explore Vonage's competition by comparing Vonage(TokBox) with its alternatives in the realm of real-time communication.]]></description><link>https://www.videosdk.live/blog/vonage-competitors</link><guid isPermaLink="false">64b524fb9eadee0b8b9e71c8</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Sun, 05 Jan 2025 12:50:00 GMT</pubDate><media:content 
url="https://assets.videosdk.live/static-assets/ghost/2023/08/Vonage-competitors-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Vonage-competitors-1.jpg" alt="Best Vonage Video Competitors in (2025)"/><p>If you're currently exploring the world of <strong>video communication API</strong>s, Vonage(TokBox) might be one of the options you're considering, along with several other potential contenders that could meet your requirements. </p><p>Navigating the landscape of <a href="https://www.videosdk.live/blog/vonage-alternative">alternatives to Vonage</a>, or Vonage competitors, and finding the perfect fit can be a complex task given the diverse range of choices available. Before delving into the array of video API offerings, it's essential to define your project's budget, primary use case, and must-have features.</p><p>To simplify this process, we've curated a list of the <strong>top competitors to Vonage(TokBox)</strong>. This compilation aims to help you narrow down your choices and discover the <a href="https://www.videosdk.live/audio-video-conferencing"><strong>optimal video API</strong></a> solution that aligns seamlessly with your application's unique needs. It's important to note that while we might have a preference for a particular competitor, our goal is to provide an unbiased overview of each provider, enabling you to make an informed decision that suits your requirements.</p><h2 id="vonagetokbox-video-api">Vonage(TokBox) Video API</h2>
<h3 id="key-points-about-vonage">Key points about Vonage</h3>
<ul><li>Vonage provides developers with the tools to create personalized audio and video streams enriched with various effects, filters, and augmented or virtual reality capabilities on mobile devices.</li><li>During a call, participants can effortlessly share their screens, engage in chat conversations, and exchange real-time data.</li><li>However, there are some <strong>considerations</strong> to keep in mind with Vonage. As your user base expands, <strong>scaling costs</strong> can become a factor of concern, as the price per stream per minute increases.</li><li>Users should also be aware of potential <strong>additional costs</strong> associated with features like <strong>recording</strong> and <strong>interactive broadcasting</strong> as they evaluate the platform.</li><li>Additionally, it's worth noting that once the number of connections surpasses 2,000, the platform switches to Content Delivery Network (CDN) delivery, which might lead to <strong>higher latency</strong>.</li><li>Achieving <strong>real-time streaming</strong> at a large scale can be <strong>intricate</strong>. For instance, accommodating more than 3,000 viewers requires transitioning to <a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HTTP Live Streaming (HLS)</a>, which can introduce <strong>notable latency</strong>.</li><li>It's crucial to weigh these factors while considering Vonage Video Conferencing as your video streaming solution.</li></ul><h3 id="vonage-video-api-pricing">Vonage Video API Pricing</h3>
<ul><li>Vonage employs a usage-based <a href="https://www.vonage.com/communications-apis/video/pricing/">pricing</a> model for its video sessions, which is determined by the number of participants and recalculated dynamically every minute. </li><li>The <strong>pricing plans</strong> have a starting point of <strong>$9.99</strong> per month and encompass a free allowance of 2,000 minutes per month across all plans. </li><li>Once this free allowance is utilized, users are charged at a rate of <strong>$0.00395</strong> per participant per minute for any additional usage.</li><li>Furthermore, Vonage offers <strong>additional services</strong> to enhance the video experience, each with its own associated <strong>cost</strong>. </li><li><strong>Recording</strong> services are available starting at <strong>$0.010</strong> per minute, allowing you to capture important sessions for future reference. </li><li>If you're interested in <strong>HLS</strong> streaming, the pricing for this feature is set at <strong>$0.003</strong> per minute.</li><li>It's important to consider these pricing details and assess how they align with your anticipated usage and budget while evaluating Vonage as a video communication solution.</li></ul><h2 id="direct-comparison-vonagetokbox-vs-top-competitors">Direct Comparison: Vonage(TokBox) vs Top Competitors</h2>
<p>The <strong>top competitors of Vonage(TokBox) </strong>are VideoSDK, <a href="https://www.videosdk.live/blog/amazon-chime-sdk-competitors">AWS Chime</a>, 100ms, WebRTC, <a href="https://www.videosdk.live/blog/jitsi-competitors">Jitsi</a>, Daily, and LiveKit.</p><p>Let's compare Vonage with each of the above competitors.</p><h2 id="1-vonage-vs-videosdk">1. Vonage vs VideoSDK</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Vonage-vs-Videosdk.jpg" class="kg-image" alt="Best Vonage Video Competitors in (2025)" loading="lazy" width="1429" height="525"/></figure><p>VideoSDK offers developers a seamless API that simplifies incorporating robust, scalable, and dependable audio-video capabilities into their applications. With only a few lines of code, developers can introduce live audio and video experiences to various platforms within minutes. One of the primary benefits of opting for the <a href="https://www.videosdk.live/">VideoSDK</a> is its remarkable ease of integration. This characteristic enables developers to concentrate their efforts on crafting innovative features that contribute to enhanced user engagement and retention.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Vonage Video API Pricing</strong></td>
        <td><strong>VideoSDK pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td>
            <strong>$15</strong> per 1,000 minutes, No limit on participants
        </td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td>
            <strong>$15</strong> per 1,000 minutes, No limit on participants
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/alternative/vonage-vs-videosdk">Vonage vs VideoSDK</a>.</blockquote>
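<!--kg-card-begin: markdown-->
The per-1,000-minute rates in the table above can be turned into a rough monthly estimate. The sketch below is illustrative only: the rates are copied from the table, the usage figures are hypothetical, and actual invoices also depend on free-tier allowances and how each provider rounds participant-minutes.

```javascript
// Rates in USD per 1,000 participant-minutes, taken from the table above.
const RATES = {
  videoCalling: 2,
  interactiveLiveStreaming: 1,
  rtmp: 15,
  cloudRecording: 15,
};

// One participant connected for one minute = one participant-minute,
// so a 60-minute call with 10 participants consumes 600 participant-minutes.
function estimateCost(participantMinutes, ratePer1000) {
  return (participantMinutes / 1000) * ratePer1000;
}

// Hypothetical month: 10 participants, 60 min/day, 30 days.
const participantMinutes = 10 * 60 * 30; // 18,000 participant-minutes
console.log(estimateCost(participantMinutes, RATES.videoCalling)); // 36 (USD)
```

Note that RTMP and recording minutes are billed on top of the call minutes, so a recorded call accrues both rates for the same wall-clock time.
<!--kg-card-end: markdown-->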
<!--kg-card-begin: html-->
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>
		</div>
	</div>
<!--kg-card-end: html-->
<h2 id="2-vonage-vs-aws-chime">2. Vonage vs AWS Chime</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Vonage-vs-AWS-Chime.jpg" class="kg-image" alt="Best Vonage Video Competitors in (2025)" loading="lazy" width="1429" height="525"/></figure><p>The Amazon Chime SDK is tailored to simplify the process of integrating video conferencing, audio calls, and messaging directly into your applications. Leveraging the capabilities of Amazon Web Services (AWS), the <a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative">Chime SDK</a> offers essential features for live video communication. By utilizing AWS' user management capabilities, the SDK enhances the overall functionality of your applications, ensuring a seamless and reliable user experience for video conferencing and audio calls.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Vonage Video API Pricing</strong></td>
        <td><strong>AWS Chime pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$1.7</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td><strong>$12.5</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/vonage-vs-amazon-chime-sdk">Vonage vs AWS Chime</a>.</blockquote><h2 id="3-vonage-vs-100ms">3. Vonage vs 100ms</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Vonage-vs-100ms.jpg" class="kg-image" alt="Best Vonage Video Competitors in (2025)" loading="lazy" width="1429" height="525"/></figure><p><a href="https://www.videosdk.live/blog/100ms-alternative">100ms</a> offers a cloud platform that empowers developers to effortlessly incorporate video and audio conferencing into a variety of applications, including web, Android, and iOS platforms. This platform is equipped with a range of powerful tools, including REST APIs, software development kits (SDKs), and an intuitive user-friendly dashboard. These tools collectively streamline the process of capturing, distributing, recording, and displaying live interactive audio and video content.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Vonage Video API Pricing</strong></td>
        <td><strong>100ms pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td><strong>$40</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td><strong>$13.5</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/vonage-vs-100ms">Vonage vs 100ms</a>.</blockquote><h2 id="4-vonage-vs-webrtc">4. Vonage vs WebRTC</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Vonage-vs-WebRTC.jpg" class="kg-image" alt="Best Vonage Video Competitors in (2025)" loading="lazy" width="1429" height="525"/></figure><p>WebRTC is an open-source project that empowers web browsers to integrate Real-Time Communications (RTC) capabilities through user-friendly JavaScript APIs. The various components of <a href="https://www.videosdk.live/blog/webrtc-alternative">WebRTC</a> have been meticulously optimized to effectively facilitate real-time communication functionalities within web browsers. This initiative enables developers to seamlessly embed features like audio and video calls, chat, file sharing, and more directly into their web applications, creating dynamic and interactive user experiences.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Vonage Video API Pricing</strong></td>
        <td><strong>WebRTC pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/vonage-vs-webrtc">Vonage vs WebRTC</a>.</blockquote><h2 id="5-vonage-vs-jitsi">5. Vonage vs Jitsi</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Vonage-vs-Jitsi-Meet.jpg" class="kg-image" alt="Best Vonage Video Competitors in (2025)" loading="lazy" width="1429" height="525"/></figure><p>Jitsi is an open-source platform that focuses on simplifying the process of video conferencing. It offers a user-friendly experience that doesn't require any downloads or plugins, which makes it an attractive option for individuals or businesses seeking a straightforward and cost-effective solution for live video communication. With <a href="https://www.videosdk.live/blog/jitsi-alternative">Jitsi</a>, users can easily initiate video calls, participate in virtual meetings, and collaborate seamlessly without the need for extensive technical know-how or complex setups.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Vonage pricing</strong></td>
        <td><strong>Jitsi pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/vonage-vs-jitsi">Vonage vs Jitsi</a>.</blockquote><h2 id="6-vonage-vs-daily">6. Vonage vs Daily</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Vonage-vs-Daily.jpg" class="kg-image" alt="Best Vonage Video Competitors in (2025)" loading="lazy" width="1429" height="525"/></figure><p>Daily is a developer-friendly platform that empowers developers to effortlessly integrate real-time video and audio calls into their applications, directly within web browsers. With <a href="https://www.videosdk.live/blog/daily-co-alternative">Daily</a>'s tools and features, developers can easily handle the complex backend aspects of video calls across different platforms.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Vonage pricing</strong></td>
        <td><strong>Daily pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$1.2</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td><strong>$15</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td><strong>$13.49</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/vonage-vs-daily">Vonage vs Daily</a>.</blockquote><h2 id="7-vonage-vs-livekit">7. Vonage vs LiveKit</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Vonage-vs-LiveKit.jpg" class="kg-image" alt="Best Vonage Video Competitors in (2025)" loading="lazy" width="1429" height="525"/></figure><p>Livekit offers a comprehensive solution for developers who want to integrate live video and audio capabilities into their native applications. With its set of software development kits (SDKs), <a href="https://www.videosdk.live/blog/livekit-alternative">Livekit</a> makes it seamless to incorporate various real-time communication features into your applications, enhancing user engagement and interaction.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Vonage pricing</strong></td>
        <td><strong>LiveKit pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$20</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
        <td>
            <strong>$69</strong> per hour (up to 500 viewers only; doesn't support Full HD)
        </td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td>No accurate data available</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$15</strong> per 1,000 minutes</td>
        <td>No accurate data available</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/vonage-vs-livekit">Vonage vs LiveKit</a>.</blockquote><h2 id="have-you-determined-whether-vonage-aligns-with-your-requirements-or-have-you-found-an-alternative">Have you determined whether Vonage aligns with your requirements, or have you found an alternative?</h2>
<p>The alternatives to Vonage that have been mentioned earlier provide a wide range of solutions for developers looking to improve in-app user experiences. However, if your needs go beyond just in-app communication and you're looking for a more comprehensive engagement strategy that includes voice and video capabilities, solutions like <a href="https://www.videosdk.live/signup/">VideoSDK</a> might align better with your requirements. These solutions offer the tools and features to create immersive and interactive experiences for your users, enabling you to offer more than just text-based communication.</p><p>Considering your specific requirements, budget constraints, and the essential features you're looking for, Vonage might not be the exact fit for your needs. To ensure you make a well-informed decision, it's recommended to delve into the alternatives discussed earlier. Some of these alternatives, such as VideoSDK, offer free trial options. By testing these alternatives in real-world projects, you can better assess their suitability before making a substantial commitment. 
And keep in mind that if your requirements evolve over time, it's always possible to transition away from Vonage to a more suitable solution.</p>]]></content:encoded></item><item><title><![CDATA[Top Twilio Programmable Video Competitor in 2025]]></title><description><![CDATA[Explore Twilio Programmable Video's competition by comparing Twilio Programmable Video with its competitors in the realm of real-time communication.]]></description><link>https://www.videosdk.live/blog/twilio-video-competitors</link><guid isPermaLink="false">64b524689eadee0b8b9e71b6</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Sat, 04 Jan 2025 09:14:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/Twilio-competitors-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Twilio-competitors-2.jpg" alt="Top Twilio Programmable Video Competitor in 2025"/><p>If you're currently in the market for a Twilio Video Conferencing API, you may find yourself considering Twilio as well as other potential competitors for your needs.</p><p>However, navigating through the extensive range of <a href="https://www.videosdk.live/blog/twilio-video-alternative#:~:text=The%20top%2010%20Twilio%20Video,based%20on%20your%20specific%20needs.">alternatives to Twilio</a> programmable video and identifying a suitable option can be quite challenging, given the multitude of choices available. Before you delve into the landscape of video API offerings, it's essential to define your project's budget, primary use case, and must-have features.</p><p>To simplify this process, we've assembled a list of the <strong>top Twilio programmable video competitors</strong>. This compilation aims to help you streamline your selection and identify the <a href="https://www.videosdk.live/">perfect video API</a> solution that aligns with your application's requirements. 
It's worth noting that, while we may hold a preference for a particular Twilio competitor, our objective is to provide an unbiased overview of each provider, allowing you to make an informed decision based on your unique needs.</p><h2 id="twilio-programmable-video">Twilio Programmable Video</h2>
<h3 id="key-points-about-twilio-programmable-video">Key points about Twilio Programmable Video</h3>
<ul><li>Twilio Programmable Video provides a comprehensive array of web, iOS, and Android SDKs, equipping developers with versatile tools for seamlessly integrating live video capabilities into their applications and enhancing user experiences. This makes it useful not only for real-time communication apps but also for solutions that support <a href="https://www.digitalwebsolutions.com/video-production-services/" rel="noreferrer">video production service</a> workflows where reliable streaming infrastructure is essential.<br/></li><li>However, it's important to note that using Twilio might entail <strong>manual configuration</strong> and the inclusion of <strong>extra code</strong> to effectively utilize multiple audio and video inputs, thereby introducing an elevated level of development <strong>complexity</strong>.</li><li>As your usage scales, pricing considerations may arise. Twilio lacks a built-in tiering system in its dashboard to facilitate seamless scaling, which might become a <strong>concern</strong>.</li><li>In terms of capacity, Twilio supports <strong>up to 50 hosts</strong> and participants, which should cater to the needs of many use cases.</li><li>An aspect to consider is that Twilio doesn't offer <strong>readily available plugins</strong> for streamlined product development. This could potentially require developers to invest <strong>additional time and effort</strong>.</li><li>Lastly, while Twilio does provide customization options, the extent of customization it offers may not align with the <strong>specific requirements</strong> of all developers, potentially necessitating <strong>further code</strong> development to meet those needs.</li></ul><h3 id="twilio-video-pricing">Twilio Video Pricing</h3>
<ul><li>Twilio Programmable Video offers a <a href="https://www.twilio.com/en-us/video/pricing">pricing</a> structure that starts at <strong>$4</strong> per 1,000 minutes for their <strong>video room</strong>.</li><li>For <strong>Twilio</strong> <strong>video recordings</strong>, Twilio charges <strong>$0.004</strong> per participant minute, while recording <strong>compositions</strong> incur a cost of <strong>$0.01</strong> per composed minute.</li><li><strong>Storage</strong> fees are set at <strong>$0.00167</strong> per GB per day after the initial 10 GB of storage usage.</li></ul><h2 id="direct-comparison-twilio-vs-top-competitors">Direct Comparison: Twilio vs Top Competitors</h2>
<p>The top competitors of <strong>Twilio Programmable Video</strong> are VideoSDK, <a href="https://www.videosdk.live/blog/zoom-video-sdk-competitors">Zoom</a>, <a href="https://www.videosdk.live/blog/vonage-competitors">Vonage</a>, <a href="https://www.videosdk.live/blog/amazon-chime-sdk-competitors">AWS Chime</a>, 100ms, WebRTC, <a href="https://www.videosdk.live/blog/jitsi-competitors">Jitsi</a>, Daily, and LiveKit.</p><p>Let's compare Twilio with each of these competitors.</p><h2 id="1-videosdk-vs-twilio-video-call">1. VideoSDK vs. Twilio Video Call</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Twilio-vs-Videosdk.jpg" class="kg-image" alt="Top Twilio Programmable Video Competitor in 2025" loading="lazy" width="1429" height="525"/></figure><p>VideoSDK offers developers a seamless API that simplifies incorporating robust, scalable, and dependable audio-video capabilities into their applications. With only a few lines of code, developers can introduce live audio and video experiences to various platforms within minutes. One of the primary benefits of opting for the <a href="https://www.videosdk.live/">VideoSDK</a> is its remarkable ease of integration. This characteristic enables developers to concentrate their efforts on crafting innovative features that contribute to enhanced user engagement and retention.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><b>Twilio Programmable Video Pricing</b></td>
        <td><b>VideoSDK pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3</b> per 1,000 minutes</td>
        <td>Starts from <b>$2</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>NA</td>
        <td>Starts from <b>$1</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>NA</td>
        <td>
            <ul>
                <li>
                    <b>$15</b> per 1,000 minutes, No limit on participants
                </li>				
            </ul>
        </td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>
                    <b>$4</b> per 1,000 minutes + <b>$10</b> per 1,000 minutes composition charges
                </li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$15</b> per 1,000 minutes, No limit on participants
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/alternative/twilio-vs-videosdk">Twilio vs VideoSDK</a>.</blockquote>
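<p>To see how per-minute rates like those in the table above translate into an actual bill, here is a rough, illustrative cost estimator using Twilio's published recording rates ($0.004 per participant-minute plus $0.01 per composed minute). This is a back-of-envelope sketch only; real invoices depend on tiers, rounding, and the provider's current pricing page.</p>

```javascript
// Back-of-envelope Twilio recording cost estimate, using the per-minute
// rates quoted earlier in this article. Rates are illustrative only --
// always check the provider's pricing page for current values.
const RECORDING_PER_PARTICIPANT_MIN = 0.004; // $ per recorded participant-minute
const COMPOSITION_PER_MIN = 0.01;            // $ per composed output minute

function estimateRecordingCost({ participants, minutes }) {
  const recording = participants * minutes * RECORDING_PER_PARTICIPANT_MIN;
  const composition = minutes * COMPOSITION_PER_MIN; // one composed output
  return +(recording + composition).toFixed(2);
}

// e.g. a 60-minute call with 10 recorded participants:
console.log(estimateRecordingCost({ participants: 10, minutes: 60 })); // 3
```

Running the same function against a competitor's flat per-minute rate makes the break-even point between providers easy to compare for your expected usage.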
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8"/>
	<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet"/>
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-zoom-vs-twilio-video-call-flexible-video-solutions-for-developers">2. Zoom vs. Twilio Video Call: Flexible Video Solutions for Developers</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Twilio-vs-Zoom-1.jpg" class="kg-image" alt="Top Twilio Programmable Video Competitor in 2025" loading="lazy" width="1429" height="525"/></figure><p>The Zoom Video SDK provides a reliable video conferencing experience similar to the original client but with added flexibility. <a href="https://www.videosdk.live/blog/zoom-video-sdk-alternative">Zoom</a> encompasses a wide range of services, including audio, video, chat, screen sharing, data streams, server-side APIs, and webhooks. This extensive set of features allows developers to create customized and immersive video communication experiences within their applications.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><b>Twilio Programmable Video Pricing</b></td>
        <td><b>Zoom pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3</b> per 1,000 minutes</td>
        <td>Starts from <b>$3.5</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>NA</td>
        <td>
            <ul>
                <li>
                    <b>$3.5</b> per 1,000 minutes
                </li>				
            </ul>
        </td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>
                    <b>$4</b> per 1,000 minutes + <b>$10</b> per 1,000 minutes composition charges
                </li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$4</b> per 1,000 minutes
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/twilio-vs-zoom">Twilio vs Zoom</a>.</blockquote><h2 id="3-vonage-video-api-vs-twilio-video">3. Vonage Video API vs Twilio Video</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Twilio-vs-Vonage-1.jpg" class="kg-image" alt="Top Twilio Programmable Video Competitor in 2025" loading="lazy" width="1429" height="525"/></figure><p>The Vonage Video API, formerly known as TokBox, empowers developers to craft personalized video experiences for their web, mobile, or desktop applications. With <a href="https://www.videosdk.live/blog/vonage-alternative">Vonage</a> API, developers can seamlessly integrate video communication features, offering users a unique and immersive video experience within their respective apps.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><b>Twilio Programmable Video Pricing</b></td>
        <td><b>Vonage pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3</b> per 1,000 minutes</td>
        <td>Starts from <b>$2</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>NA</td>
        <td>Starts from <b>$1</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>NA</td>
        <td>
            <ul>
                <li>
                    <b>$15</b> per 1,000 minutes, No limit on participants
                </li>				
            </ul>
        </td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>
                    <b>$4</b> per 1,000 minutes + <b>$10</b> per 1,000 minutes composition charges
                </li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$15</b> per 1,000 minutes, No limit on participants
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/twilio-vs-vonage">Twilio vs Vonage</a>.</blockquote><h2 id="4-twilio-video-vs-aws-chime">4. Twilio Video vs AWS Chime</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Twilio-vs-AWS-Chime-1.jpg" class="kg-image" alt="Top Twilio Programmable Video Competitor in 2025" loading="lazy" width="1429" height="525"/></figure><p>The Amazon Chime is an SDK that simplifies the integration of video conferencing, audio calls, and messaging into your applications. <a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative">Amazon Chime SDK</a> offers essential live video features and utilizes AWS' user management capabilities to enhance the overall functionality.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><b>Twilio Programmable Video Pricing</b></td>
        <td><b>AWS Chime pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3</b> per 1,000 minutes</td>
        <td>Starts from <b>$1.7</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>
                    <b>$4</b> per 1,000 minutes + <b>$10</b> per 1,000 minutes composition charges
                </li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$12.5</b> per 1,000 minutes
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/twilio-vs-amazon-chime-sdk">Twilio vs AWS Chime</a>.</blockquote><h2 id="5-twilio-programmable-video-vs-100ms">5. Twilio Programmable Video vs 100ms</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Twilio-vs-100-ms-1.jpg" class="kg-image" alt="Top Twilio Programmable Video Competitor in 2025" loading="lazy" width="1429" height="525"/></figure><p>100ms is a cloud platform that empowers developers to effortlessly integrate video and audio conferencing into Web, Android, and iOS applications. <a href="https://www.videosdk.live/blog/100ms-alternative">100ms</a> offers a comprehensive set of REST APIs, SDKs, and a user-friendly dashboard, streamlining the process of capturing, distributing, recording, and displaying live interactive audio and video content.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><b>Twilio Programmable Video Pricing</b></td>
        <td><b>100ms pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3</b> per 1,000 minutes</td>
        <td>Starts from <b>$4</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>NA</td>
        <td>Starts from <b>$4</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>NA</td>
        <td>
            <ul>
                <li>
                    <b>$40</b> per 1,000 minutes
                </li>				
            </ul>
        </td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>
                    <b>$4</b> per 1,000 minutes + <b>$10</b> per 1,000 minutes composition charges
                </li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$13.5</b> per 1,000 minutes
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/twilio-vs-100ms">Twilio vs 100ms</a>.</blockquote><h2 id="6-twilio-programmable-video-vs-webrtc">6. Twilio Programmable Video vs WebRTC</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Twilio-vs-WebRTC-1.jpg" class="kg-image" alt="Top Twilio Programmable Video Competitor in 2025" loading="lazy" width="1429" height="525"/></figure><p>WebRTC is a free and open-source initiative that empowers web browsers with Real-Time Communications (RTC) capabilities through straightforward JavaScript APIs. The components of <a href="https://www.videosdk.live/blog/webrtc-alternative">WebRTC</a> have been meticulously optimized to effectively facilitate real-time communication functionalities within web browsers.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><b>Twilio Programmable pricing</b></td>
        <td><b>WebRTC pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3</b> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>
                    <b>$4</b> per 1,000 minutes + <b>$10</b> per 1,000 minutes composition charges
                </li>
            </ul>
        </td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/twilio-vs-webrtc">Twilio vs WebRTC</a>.</blockquote><h2 id="7-twilio-programmable-video-vs-jitsi">7. Twilio Programmable Video vs Jitsi</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Twilio-vs-Jitsi-Meet-1.jpg" class="kg-image" alt="Top Twilio Programmable Video Competitor in 2025" loading="lazy" width="1429" height="525"/></figure><p>Jitsi is a free and open-source platform crafted to simplify video conferencing. <a href="https://www.videosdk.live/blog/jitsi-alternative">Jitsi</a> offers a user-friendly experience that eliminates the need for downloads or plugins, making it an excellent option for individuals seeking a straightforward solution for live video communication without significant investment.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><b>Twilio Programmable video pricing</b></td>
        <td><b>Jitsi pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3</b> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>
                    <b>$4</b> per 1,000 minutes + <b>$10</b> per 1,000 minutes composition charges
                </li>
            </ul>
        </td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/twilio-vs-jitsi">Twilio vs Jitsi</a>.</blockquote><h2 id="8-twilio-programmable-video-vs-daily">8. Twilio Programmable Video vs Daily</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Twilio-vs-Daily-1.jpg" class="kg-image" alt="Top Twilio Programmable Video Competitor in 2025" loading="lazy" width="1429" height="525"/></figure><p>Daily enables developers to effortlessly create real-time video and audio calls that operate seamlessly within web browsers. <a href="https://www.videosdk.live/blog/daily-co-alternative">Daily</a> simplifies the handling of common backend video call functionalities across various platforms, providing practical defaults that streamline the development process and enhance the overall user experience.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><b>Twilio Programmable video pricing</b></td>
        <td><b>Daily pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3</b> per 1,000 minutes</td>
        <td>Starts from <b>$4</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>NA</td>
        <td>Starts from <b>$1.2</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>NA</td>
        <td>
            <ul>
                <li>
                    <b>$15</b> per 1,000 minutes
                </li>				
            </ul>
        </td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>
                    <b>$4</b> per 1,000 minutes + <b>$10</b> per 1,000 minutes composition charges
                </li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$13.49</b> per 1,000 minutes
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/twilio-vs-daily">Twilio vs Daily</a>.</blockquote><h2 id="9-twilio-programmable-video-vs-livekit">9. Twilio Programmable Video vs LiveKit</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/08/Twilio-vs-LiveKit-1.jpg" class="kg-image" alt="Top Twilio Programmable Video Competitor in 2025" loading="lazy" width="1429" height="525"/></figure><p>Livekit offers a collection of SDKs specifically crafted to seamlessly integrate live video and audio functionalities into your native applications. <a href="https://www.videosdk.live/blog/livekit-alternative">LiveKit</a> encompasses a range of notable features, such as live streaming, in-game communication, video calls, and more. By leveraging a modern end-to-end WebRTC stack, Livekit guarantees smooth and high-quality real-time communication experiences for users.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><b>Twilio programmable video pricing</b></td>
        <td><b>LiveKit pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3</b> per 1,000 minutes</td>
        <td>Starts from <b>$20</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>NA</td>
        <td>
            <b>$69</b> per hour (up to 500 viewers only; doesn't support Full HD)</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>NA</td>
        <td>No accurate data available</td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>
                    <b>$4</b> per 1,000 minutes + <b>$10</b> per 1,000 minutes composition charges
                </li>
            </ul>
        </td>
        <td>No accurate data available</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/twilio-vs-livekit">Twilio vs LiveKit</a>.</blockquote><h2 id="have-you-determined-whether-twilio-programmable-video-aligns-with-your-requirements-or-have-you-found-competitors">Have you determined whether Twilio Programmable Video aligns with your requirements, or have you found a better competitor?</h2>
<p>The alternatives to Twilio Programmable Video covered above offer a range of solutions for developers looking to improve in-app user experiences. If you're seeking an engagement strategy that extends beyond in-app communication to full voice and video capabilities, a solution like VideoSDK may be a better fit.</p><p>Given your specific needs, budget constraints, and essential feature requirements, Twilio may not be an exact match for your criteria. To make a well-informed decision, explore the alternatives above; some, such as VideoSDK, offer free trials. Testing these competitors in small practical projects will give you valuable insight into their fit before you make a substantial commitment. And keep in mind that if circumstances change, transitioning away from Twilio remains a viable option.</p>]]></content:encoded></item><item><title><![CDATA[How to Build React Video Chat App with VideoSDK]]></title><description><![CDATA[In this step-by-step tutorial, you'll learn how to build the best video calling app in React using Video SDK.]]></description><link>https://www.videosdk.live/blog/react-js-video-calling</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb86</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Thu, 02 Jan 2025 06:15:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/05/React.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<img src="http://assets.videosdk.live/static-assets/ghost/2022/05/React.jpg" alt="How to Build React Video Chat App with VideoSDK"/><p>Do you find existing video-calling apps limiting? If so, why not build your own video chat app that you can adapt to any use case? With <a href="https://www.videosdk.live/">VideoSDK</a>, it is easy to build your own video calling app with features like chat, polls, whiteboard, and much more.</p><p>VideoSDK is a platform that allows developers to create rich in-app experiences such as embedded real-time video, voice, recording, live streaming, and real-time messaging. VideoSDK offers SDKs for <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start">JavaScript</a>, <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">React</a>, <a href="https://www.videosdk.live/blog/how-to-make-a-video-calling-app-using-react-native">React Native</a>, <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start">iOS</a>, <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/getting-started">Android</a> and <a href="https://www.videosdk.live/blog/video-calling-in-flutter">Flutter</a> for seamless integration. VideoSDK also provides a <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/getting-started">prebuilt SDK</a> that lets you add real-time communication to your application in about 10 minutes.</p><p>In this tutorial, you'll explore the group calling feature of VideoSDK through a step-by-step guide to integrating video calling with the <a href="https://en.wikipedia.org/wiki/React_(software)">React</a> Video SDK.</p><p>This guide will get you up and running with VideoSDK video &amp; audio calling in minutes. Let's create a video calling app using React and VideoSDK.</p><h2 id="prerequisites">Prerequisites</h2>
<p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>A VideoSDK developer account (don't have one? Sign up at the <a href="https://app.videosdk.live/" rel="noopener noreferrer">VideoSDK Dashboard</a>)</li><li>Basic understanding of React</li><li><a href="https://www.npmjs.com/package/@videosdk.live/react-sdk" rel="noopener noreferrer">React Video SDK</a></li><li><a href="https://en.wikipedia.org/wiki/Node.js">Node</a> and npm installed on your device</li><li>A basic understanding of Hooks (useState, useRef, useEffect)</li><li>React Context API (optional)</li></ul><blockquote>You need a VideoSDK account to generate tokens. Visit the VideoSDK <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">Dashboard</a> to generate one.</blockquote><h2 id="getting-started-with-the-code">Getting Started with the Code</h2>
<p>Follow the steps to create the environment necessary to add video chat to your app.</p><h3 id="create-a-new-react-app">Create a new React app</h3>
<p>Create a new React app using the command below:</p><pre><code class="language-js">$ npx create-react-app videosdk-rtc-react-app
</code></pre>
<h3 id="install-video-sdk">Install Video SDK</h3>
<p>Install the VideoSDK package using the npm command below. Make sure you are inside your React app directory before running it.</p><pre><code class="language-javascript">$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"</code></pre><h3 id="structure-of-the-project">Structure of the project</h3>
<p>Your project structure should look like this after creating the app with create-react-app</p><pre><code class="language-javascript">root
   ├── node_modules
   ├── public
   ├── src
   │    ├── API.js
   │    ├── App.js
   │    ├── App.css
   │    ├── index.js
   │    ├── index.css
   .    .</code></pre><p>We are going to use functional components to leverage React's reusable component architecture. There will be components for participants, video, and call controls (mic, camera, leave).</p><h3 id="app-architecture">App Architecture</h3>
<p>The app will contain a <code>MeetingView</code> component which includes <code>ParticipantView</code> which will render the participant's name, video, audio, etc. We will also have a <code>Controls</code> component that will allow users to perform operations like leave and toggle media.</p><p><img src="https://docs.videosdk.live/assets/images/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e.png" alt="How to Build React Video Chat App with VideoSDK" loading="lazy"/></p>
<p>We are going to work on two files:</p><ul><li>API.js: Responsible for API calls such as generating a unique meetingId and handling the token</li><li>App.js: Responsible for rendering <code>MeetingView</code> and joining the meeting.</li></ul><h2 id="to-build-a-react-video-chat-app-use-these-5-steps">To build a React video chat app, use these 5 steps</h2><h3 id="step-1-get-started-with-apijs">STEP 1: Get started with API.js</h3><p>Before moving on, we must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a>.</p><pre><code class="language-javascript">//Auth token we will use to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashboard&gt;";
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components">STEP 2: Wireframe App.js with all the components</h3>
<p>To build up the wireframe of App.js, we are going to use the VideoSDK hooks and context providers. VideoSDK provides <code>MeetingProvider</code>, <code>MeetingConsumer</code>, <code>useMeeting</code>, and <code>useParticipant</code>. Let's understand each of them.</p><p>First, the Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: A Context Provider. It accepts <code>config</code> and <code>token</code> as props and passes its value to consuming components that are its descendants. One Provider can serve many consumers, and Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: A Context Consumer. All consumers that are descendants of a Provider re-render whenever the Provider's value prop changes.</li><li><strong>useMeeting</strong>: A React hook for the meeting. It exposes all meeting-level information such as participants, streams, etc.</li><li><strong>useParticipant</strong>: A React hook responsible for the events and props of one particular participant, such as join, leave, and mute.</li></ul><p>The meeting context lets you listen for changes such as a participant joining the meeting or toggling their mic or camera.</p><p>Let's get started by changing a couple of lines of code in App.js.</p><pre><code class="language-javascript">import "./App.css";
import React, { useEffect, useRef, useState } from "react";
import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";

function JoinScreen() {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: false,
        name: "C.V. Raman",
      }}
      token={authToken}
    &gt;
      &lt;MeetingConsumer&gt;
        {() =&gt; &lt;MeetingView meetingId={meetingId} /&gt;}
      &lt;/MeetingConsumer&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} /&gt;
  );
}

export default App;</code></pre><h3 id="step-3-implement-join-screen">Step 3: Implement Join Screen</h3>
<p>The join screen serves as the entry point to either create a new meeting or join an existing one.</p><pre><code class="language-javascript">function JoinScreen({ getMeetingAndToken }) {
  const [meetingId, setMeetingId] = useState(null);
  const onClick = async () =&gt; {
    await getMeetingAndToken(meetingId);
  };
  return (
    &lt;div&gt;
      &lt;input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) =&gt; {
          setMeetingId(e.target.value);
        }}
      /&gt;
      &lt;button onClick={onClick}&gt;Join&lt;/button&gt;
      {" or "}
      &lt;button onClick={() =&gt; getMeetingAndToken(null)}&gt;Create Meeting&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="output">Output:</h4>
<figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/04/image-1.png" class="kg-image" alt="How to Build React Video Chat App with VideoSDK" loading="lazy" width="720" height="130"/></figure><h3 id="step-4-implement-container-and-controls">Step 4: Implement Container and Controls</h3>
<p>The next step is to implement the <code>Container</code> and <code>Controls</code> components to manage features such as join, leave, mute, and unmute.</p><pre><code class="language-javascript">function Container(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  //We will also get the participants list to display all participants
  const { join, participants } = useMeeting({
    //callback for when meeting is joined successfully
    onMeetingJoined: () =&gt; {
      setJoined("JOINED");
    },
    //callback for when meeting is left
    onMeetingLeft: () =&gt; {
      props.onMeetingLeave();
    },
  });
  const joinMeeting = () =&gt; {
    setJoined("JOINING");
    join();
  };

  return (
    &lt;div className="container"&gt;
      &lt;h3&gt;Meeting Id: {props.meetingId}&lt;/h3&gt;
      {joined === "JOINED" ? (
        &lt;div&gt;
          &lt;Controls /&gt;
          {/* Render all the participants in the meeting */}
          {[...participants.keys()].map((participantId) =&gt; (
            &lt;ParticipantView
              participantId={participantId}
              key={participantId}
            /&gt;
          ))}
        &lt;/div&gt;
      ) : joined === "JOINING" ? (
        &lt;p&gt;Joining the meeting...&lt;/p&gt;
      ) : (
        &lt;button onClick={joinMeeting}&gt;Join&lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><p>In addition, a <code>Controls</code> component is required to handle user actions such as leaving the meeting and toggling the mic and webcam.</p><pre><code class="language-javascript">function Controls() {
  const { leave, toggleMic, toggleWebcam } = useMeeting();
  return (
    &lt;div&gt;
      &lt;button onClick={leave}&gt;Leave&lt;/button&gt;
      &lt;button onClick={toggleMic}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={toggleWebcam}&gt;toggleWebcam&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="a-output-of-container-component">(a)  Output of Container Component:</h4>
<figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/04/image-2.png" class="kg-image" alt="How to Build React Video Chat App with VideoSDK" loading="lazy" width="720" height="177"/></figure><h4 id="b-output-of-controls-component">(b)  Output of Controls Component:</h4>
<figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/04/image-3.png" class="kg-image" alt="How to Build React Video Chat App with VideoSDK" loading="lazy" width="720" height="177"/></figure><h3 id="step-5-implement-participant-view">Step 5: Implement Participant View</h3>
<p>Before implementing the video component, we need to understand a couple of concepts.</p><h4 id="a-forwarding-ref-for-mic-and-camera">(a)  Forwarding Ref for mic and camera</h4>
<p>Ref forwarding is a technique for automatically passing a ref through a component to one of its children. We are going to use refs to attach the audio and video tracks to their elements.</p><pre><code class="language-javascript">const webcamRef = useRef(null);
const micRef = useRef(null);</code></pre><h4 id="b-useparticipant-hook%E2%80%8B">(b)  useParticipant Hook​</h4>
<p>The useParticipant hook handles all the properties and events of one particular participant who has joined the meeting. It takes a participantId as an argument.</p><pre><code class="language-javascript">const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
  props.participantId
);</code></pre><h4 id="c-mediastream-api">(c)  MediaStream API</h4>
<p>The MediaStream API is used to attach a media track to an audio or video element so it can be played.<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#3-mediastream-api">​</a></p><pre><code class="language-javascript">const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);

webcamRef.current.srcObject = mediaStream;
webcamRef.current
  .play()
  .catch((error) =&gt; console.error("webcamRef.current.play() failed", error));</code></pre><p>Now let's use all of these APIs to create the ParticipantView component.<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#4-implemeting-video-component">​</a></p><pre><code class="language-javascript">//Note: import useMemo from "react" and ReactPlayer from "react-player" (npm install react-player)
function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() =&gt; {
    if (micRef.current) {
      if (micOn &amp;&amp; micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =&gt;
            console.error("micRef.current.play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    &lt;div&gt;
      &lt;p&gt;
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      &lt;/p&gt;
      &lt;audio ref={micRef} autoPlay playsInline muted={isLocal} /&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          //
          playsinline // required for inline playback on iOS Safari
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          //
          url={videoStream}
          //
          height={"300px"}
          width={"300px"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/05/giphy--1-.gif" class="kg-image" alt="How to Build React Video Chat App with VideoSDK" loading="lazy" width="480" height="270"/></figure><p>We are done with the implementation of a customized video calling app in ReactJS using VideoSDK. To explore more features, go through <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/initialise-meeting#">Basic</a> and <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/plugins/virtual-background">Advanced features</a>.</p>
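<p>For completeness: App.js imports <code>authToken</code> and <code>createMeeting</code> from <code>./API</code>, which is not shown above. Below is a minimal sketch of that file, assuming the VideoSDK v2 rooms REST endpoint as described in the VideoSDK quickstart; the placeholder token is an assumption and must be replaced with a token generated from your VideoSDK dashboard.</p>

```javascript
// API.js -- minimal sketch. Replace authToken with a token from your VideoSDK dashboard.
const authToken = "YOUR_VIDEOSDK_TOKEN"; // placeholder, not a real token

// Create a new meeting room via the VideoSDK REST API and return its roomId.
const createMeeting = async ({ token }) => {
  const res = await fetch("https://api.videosdk.live/v2/rooms", {
    method: "POST",
    headers: {
      authorization: token,
      "Content-Type": "application/json",
    },
  });
  const { roomId } = await res.json();
  return roomId;
};

// In your project, export both so App.js can import them:
// export { authToken, createMeeting };
```

<p>With this in place, clicking the Create Meeting button calls <code>createMeeting</code>, which returns the ID of the newly created room.</p>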
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h2 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h2>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="conclusion">Conclusion</h2>
<p>We have successfully completed the video calling app using ReactJS. If you wish to add functionalities like chat messaging, screen sharing, polls, etc, you can always check out our <a href="https://docs.videosdk.live/">documentation</a>. If you face any difficulty with the implementation, you can check out the <a href="https://github.com/videosdk-live/videosdk-rtc-react-sdk-example">example on GitHub</a> or connect with us on our <a href="https://discord.gg/Gpmj6eCq5u">Discord community</a>.</p><h2 id="more-react-resources">More React Resources</h2>
<ul><li><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">React Audio/Video call documentation</a></li><li><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start-ILS">React interactive live streaming documentation</a></li><li><a href="https://www.videosdk.live/blog/react-interactive-live-streaming">React interactive live streaming blog</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-react-sdk-example">React Audio/Video call example code</a></li><li><a href="https://github.com/videosdk-live/videosdk-live-streaming-react-api-example">React live streaming example code</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Best Agora Competitors in 2026]]></title><description><![CDATA[Explore Agora's competition by comparing Agora with its alternatives in the realm of real-time communication.]]></description><link>https://www.videosdk.live/blog/agora-competitors</link><guid isPermaLink="false">64b509929eadee0b8b9e71a8</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 01 Jan 2025 10:03:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-competitors-min.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-competitors-min.jpg" alt="Best Agora Competitors in 2026"/><p>If you're searching for a <a href="https://www.videosdk.live/audio-video-conferencing" rel="noreferrer"><strong>video communication API</strong></a>, you might be contemplating Agora or one of its competitors as a potential choice.</p><p>However, with thousands of <a href="https://www.videosdk.live/blog/agora-alternative">alternatives</a> on the market, finding a fair and direct comparison between Agora and its competitors can be difficult. 
Before delving into the competitive landscape of video API offerings, it's crucial to identify your project's budget, primary use case, and essential features.</p><p>To simplify the process, we've compiled a list of the <strong>top ten competitors to Agora</strong>, enabling you to streamline your options and discover the optimal video API solution for your application. However, before <strong>comparing alternative solutions to Agora</strong>, it's important to acquaint yourself with the company's product offerings. While we might have a slight bias towards a particular competitor, we will strive to provide an impartial overview of each provider.</p><h2 id="agora">Agora</h2>
<h3 id="key-points-about-agora">Key points about Agora</h3>
<ul><li>Agora's SDK offers <strong>integrated voice</strong> and <strong>video chat</strong>, <strong>real-time recording</strong>, <strong>live streaming</strong>, and <strong>instant messaging</strong> capabilities.</li><li>Users also have the option to enhance their experience with additional features such as <strong>AR facial masks</strong>, <strong>sound effects</strong>, and <strong>whiteboards</strong>, <strong>albeit</strong> at an <strong>additional cost</strong>.</li><li>However, it's worth noting that Agora's <strong>pricing structure</strong> can be <strong>intricate</strong> and <strong>might not align</strong> well with businesses operating on limited budgets.</li><li>For users seeking direct support from Agora's team, there could be potential <strong>delays</strong> in response time as the support team may require <strong>extra time</strong> to provide assistance.</li></ul><h3 id="agora-pricing">Agora pricing</h3>
<ul><li>Agora provides users with two <a href="https://www.agora.io/en/pricing/">pricing</a> options: <strong>Premium</strong> and <strong>Standard</strong>, tailored to the duration of audio and video calls, with calculations made monthly.</li><li>Their pricing is organized into four tiers, categorized by video resolution, offering users the advantage of flexibility, but not cost-efficiency.</li><li>The <strong>pricing structure</strong> encompasses <strong>Audio calls</strong> at <strong>$0.99</strong> per 1,000 participant minutes, <strong>HD Video calls</strong> at <strong>$3.99</strong> per 1,000 participant minutes, and <strong>Full HD</strong> <strong>Video calls</strong> at <strong>$8.99</strong> per 1,000 participant minutes.</li></ul><h2 id="direct-comparison-agora-vs-top-10-competitors">Direct Comparison: Agora vs Top 10 Competitors</h2>
<p>The <strong>top 10 competitors of Agora</strong> are VideoSDK, <a href="https://www.videosdk.live/blog/twilio-video-competitors">Twilio</a>, <a href="https://www.videosdk.live/blog/zoom-video-sdk-competitors">Zoom</a>, <a href="https://www.videosdk.live/blog/vonage-competitors">Vonage</a>, <a href="https://www.videosdk.live/blog/amazon-chime-sdk-competitors">AWS Chime</a>, 100ms, WebRTC, <a href="https://www.videosdk.live/blog/jitsi-competitors">Jitsi</a>, Daily, and LiveKit.</p><p>Let's compare Agora with each of the above competitors.</p><h2 id="1-agora-vs-videosdk">1. Agora vs VideoSDK</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-vs-Videosdk-min.jpg" class="kg-image" alt="Best Agora Competitors in 2026" loading="lazy" width="5716" height="2100"/></figure><p>VideoSDK offers developers a seamless API that simplifies the process of incorporating robust, scalable, and dependable audio-video capabilities into their applications. With only a few lines of code, developers can introduce <a href="https://www.videosdk.live/interactive-live-streaming">live audio and video</a> experiences to various platforms within minutes. One of the primary benefits of opting for the <a href="https://www.videosdk.live/">VideoSDK</a> is its remarkable ease of integration. This characteristic enables developers to concentrate their efforts on crafting innovative features that contribute to enhanced user engagement and retention.</p>
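<p>Because both providers bill per 1,000 participant minutes, pricing differences compound quickly at scale. As a rough illustration using the video calling rates from the comparison table below, a simple cost helper makes the gap concrete:</p>

```javascript
// Estimate a monthly bill from a per-1,000-minute rate and total participant minutes.
function monthlyCost(ratePer1000Min, participantMinutes) {
  return (ratePer1000Min * participantMinutes) / 1000;
}

// Illustrative example: 500 participants averaging 200 minutes each
// is 100,000 participant minutes per month.
const minutes = 100000;
console.log(monthlyCost(3.99, minutes).toFixed(2)); // Agora video calling: 399.00
console.log(monthlyCost(2, minutes).toFixed(2));    // VideoSDK video calling: 200.00
```

<p>At that volume, the listed rates work out to $399 versus $200 per month for video calling alone; run the same arithmetic against your own expected usage before choosing a provider.</p>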
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <th/>
        <td><b>Agora pricing</b></td>
        <td><b>Video SDK pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$2</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$1</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>
            <ul>
                <li>H.264 Full HD Output Video: <b>$15.99</b> per 1000 mins</li>
                <li>H.265 HD Output Video: <b>$19.99</b> per 1000 mins</li>
                <li>H.265 Full HD Output Video: <b>$39.99</b> per 1000 mins</li>
            </ul>
        </td>
        <td>
            <ul>
                <li><b>$15</b> per 1,000 minutes, No limit on participants</li>				</ul>
        </td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>Voice: <b>$1.49</b></li>
                <li>SD: <b>$5.99</b></li>
                <li>HD: <b>$5.99</b></li>
                <li>Full HD: <b>$13.49</b></li>
                <li>2K: <b>$23.99</b></li>
                <li>2K+: <b>$53.99</b></li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$15</b> per 1,000 minutes, No limit on participants
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/alternative/agora-vs-videosdk">Agora vs VideoSDK</a>.</blockquote>
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-agora-vs-twilio">2. Agora vs Twilio</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-vs-Twilio-min.jpg" class="kg-image" alt="Best Agora Competitors in 2026" loading="lazy" width="5716" height="2100"/></figure><p>Twilio initially began by assisting developers in automating traditional phone calls and SMS text messages. Today, however, developers can leverage <a href="https://www.videosdk.live/blog/twilio-video-alternative">Twilio</a> to build business communications, both within apps and beyond, across the entire customer journey.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <th/>
        <td><b>Agora pricing</b></td>
        <td><b>Twilio pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$3</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>No data available</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>
            <ul>
                <li>H.264 Full HD Output Video: <b>$15.99</b> per 1000 mins</li>
                <li>H.265 HD Output Video: <b>$19.99</b> per 1000 mins</li>
                <li>H.265 Full HD Output Video: <b>$39.99</b> per 1000 mins</li>
            </ul>
        </td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>Voice: <b>$1.49</b></li>
                <li>SD: <b>$5.99</b></li>
                <li>HD: <b>$5.99</b></li>
                <li>Full HD: <b>$13.49</b></li>
                <li>2K: <b>$23.99</b></li>
                <li>2K+: <b>$53.99</b></li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$0.004</b>/user/minute + <b>$0.01</b>/minute composition charges
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/agora-vs-twilio">Agora vs Twilio</a>.</blockquote><h2 id="3-agora-vs-zoom">3. Agora vs Zoom</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-vs-Zoom-min.jpg" class="kg-image" alt="Best Agora Competitors in 2026" loading="lazy" width="5716" height="2100"/></figure><p>Zoom Video SDK offers a dependable video conferencing experience akin to the original Client but with enhanced flexibility. <a href="https://www.videosdk.live/blog/zoom-video-sdk-alternative">Zoom</a> encompasses a comprehensive array of services, encompassing audio, video, chat, screen sharing, data streams, server-side APIs, and webhooks. This robust collection of features empowers developers to craft tailored and engaging video communication experiences within their applications.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <th/>
        <td><b>Agora pricing</b></td>
        <td><b>Zoom pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$3.5</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>
            <ul>
                <li>H.264 Full HD Output Video: <b>$15.99</b> per 1000 mins</li>
                <li>H.265 HD Output Video: <b>$19.99</b> per 1000 mins</li>
                <li>H.265 Full HD Output Video: <b>$39.99</b> per 1000 mins</li>
            </ul>
        </td>
        <td>
            <ul>
                <li><b>$3.5</b> per 1,000 minutes</li>
            </ul>
        </td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>Voice: <b>$1.49</b></li>
                <li>SD: <b>$5.99</b></li>
                <li>HD: <b>$5.99</b></li>
                <li>Full HD: <b>$13.49</b></li>
                <li>2K: <b>$23.99</b></li>
                <li>2K+: <b>$53.99</b></li>
            </ul>
        </td>
        <td>
            <ul>
                <li><b>$4</b> per 1,000 minutes</li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/agora-vs-zoom">Agora vs Zoom</a>.</blockquote><h2 id="4-agora-vs-vonage">4. Agora vs Vonage</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-vs-Vonage-min.jpg" class="kg-image" alt="Best Agora Competitors in 2026" loading="lazy" width="5716" height="2100"/></figure><p>The Vonage Video API, previously referred to as TokBox, empowers developers to design customized video experiences for their web, mobile, or desktop applications. Through <a href="https://www.videosdk.live/blog/vonage-alternative">Vonage</a> API, developers can seamlessly integrate video communication features, delivering users a distinctive and immersive video experience directly within their apps.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <th/>
        <td><b>Agora pricing</b></td>
        <td><b>Vonage pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$3.95</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$1.5</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>
            <ul>
                <li>H.264 Full HD Output Video: <b>$15.99</b> per 1000 mins</li>
                <li>H.265 HD Output Video: <b>$19.99</b> per 1000 mins</li>
                <li>H.265 Full HD Output Video: <b>$39.99</b> per 1000 mins</li>
            </ul>
        </td>
        <td>
            <ul>
                <li><b>$15</b> per 1,000 minutes</li>
            </ul>
        </td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>Voice: <b>$1.49</b></li>
                <li>SD: <b>$5.99</b></li>
                <li>HD: <b>$5.99</b></li>
                <li>Full HD: <b>$13.49</b></li>
                <li>2K: <b>$23.99</b></li>
                <li>2K+: <b>$53.99</b></li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$12.5</b> per 1,000 minutes
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/agora-vs-vonage">Agora vs Vonage</a>.</blockquote><h2 id="5-agora-vs-aws-chime">5. Agora vs AWS Chime</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-vs-AWS-Chime-min.jpg" class="kg-image" alt="Best Agora Competitors in 2026" loading="lazy" width="5716" height="2100"/></figure><p>Amazon Chime is an SDK designed to streamline the integration of video conferencing, audio calls, and messaging into your applications. <a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative">Amazon Chime SDK</a> provides fundamental live video features and leverages AWS' user management capabilities to elevate the overall functionality.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <th/>
        <td><b>Agora pricing</b></td>
        <td><b>AWS Chime pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$1.7</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>
            <ul>
                <li>H.264 Full HD Output Video: <b>$15.99</b> per 1000 mins</li>
                <li>H.265 HD Output Video: <b>$19.99</b> per 1000 mins</li>
                <li>H.265 Full HD Output Video: <b>$39.99</b> per 1000 mins</li>
            </ul>
        </td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>Voice: <b>$1.49</b></li>
                <li>SD: <b>$5.99</b></li>
                <li>HD: <b>$5.99</b></li>
                <li>Full HD: <b>$13.49</b></li>
                <li>2K: <b>$23.99</b></li>
                <li>2K+: <b>$53.99</b></li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$12.5</b> per 1,000 minutes
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/agora-vs-amazon-chime-sdk">Agora vs AWS Chime</a>.</blockquote><h2 id="6-agora-vs-100ms">6. Agora vs 100ms</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-vs-100ms-min.jpg" class="kg-image" alt="Best Agora Competitors in 2026" loading="lazy" width="5716" height="2100"/></figure><p>100ms is a cloud platform that empowers developers to seamlessly integrate video and audio conferencing into Web, Android, and iOS applications. <a href="https://www.videosdk.live/blog/100ms-alternative">100ms</a> provides a set of REST APIs, SDKs, and a user-friendly dashboard, simplifying the process of capturing, distributing, recording, and presenting live interactive audio and video content.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <th/>
        <td><b>Agora pricing</b></td>
        <td><b>100ms pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$4</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$4</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>
            <ul>
                <li>H.264 Full HD Output Video: <b>$15.99</b> per 1000 mins</li>
                <li>H.265 HD Output Video: <b>$19.99</b> per 1000 mins</li>
                <li>H.265 Full HD Output Video: <b>$39.99</b> per 1000 mins</li>
            </ul>
        </td>
        <td>
            <ul>
                <li><b>$15</b> per 1,000 minutes</li>
            </ul>
        </td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>Voice: <b>$1.49</b></li>
                <li>SD: <b>$5.99</b></li>
                <li>HD: <b>$5.99</b></li>
                <li>Full HD: <b>$13.49</b></li>
                <li>2K: <b>$23.99</b></li>
                <li>2K+: <b>$53.99</b></li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$13.5</b> per 1,000 minutes
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/agora-vs-100ms">Agora vs 100ms</a>.</blockquote><h2 id="7-agora-vs-webrtc">7. Agora vs WebRTC</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-vs-webRTC-min.jpg" class="kg-image" alt="Best Agora Competitors in 2026" loading="lazy" width="5716" height="2100"/></figure><p>WebRTC is a free and open-source initiative that empowers web browsers with Real-Time Communications (RTC) capabilities using simple JavaScript APIs. The components of <a href="https://www.videosdk.live/blog/webrtc-alternative">WebRTC</a> have been carefully optimized to efficiently enable real-time communication functionalities within web browsers.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <th/>
        <td><b>Agora pricing</b></td>
        <td><b>WebRTC pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>
            <ul>
                <li>H.264 Full HD Output Video: <b>$15.99</b> per 1000 mins</li>
                <li>H.265 HD Output Video: <b>$19.99</b> per 1000 mins</li>
                <li>H.265 Full HD Output Video: <b>$39.99</b> per 1000 mins</li>
            </ul>
        </td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>Voice: <b>$1.49</b></li>
                <li>SD: <b>$5.99</b></li>
                <li>HD: <b>$5.99</b></li>
                <li>Full HD: <b>$13.49</b></li>
                <li>2K: <b>$23.99</b></li>
                <li>2K+: <b>$53.99</b></li>
            </ul>
        </td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/agora-vs-webrtc">Agora vs WebRTC</a>.</blockquote><h2 id="8-agora-vs-jitsi">8. Agora vs Jitsi</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-vs-Jitsi-Meet-min.jpg" class="kg-image" alt="Best Agora Competitors in 2026" loading="lazy" width="5716" height="2100"/></figure><p>Jitsi is a free and open-source platform designed to streamline video conferencing. <a href="https://www.videosdk.live/blog/jitsi-alternative">Jitsi</a> provides a user-friendly experience that doesn't require downloads or plugins, making it a great choice for those looking for an uncomplicated solution for live video communication without extensive investment.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <th/>
        <td><b>Agora pricing</b></td>
        <td><b>Jitsi pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>
            <ul>
                <li>H.264 Full HD Output Video: <b>$15.99</b> per 1,000 minutes</li>
                <li>H.265 HD Output Video: <b>$19.99</b> per 1,000 minutes</li>
                <li>H.265 Full HD Output Video: <b>$39.99</b> per 1,000 minutes</li>
            </ul>
        </td>
        <td>NA</td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>Voice: <b>$1.49</b></li>
                <li>SD: <b>$5.99</b></li>
                <li>HD: <b>$5.99</b></li>
                <li>Full HD: <b>$13.49</b></li>
                <li>2K: <b>$23.99</b></li>
                <li>2K+: <b>$53.99</b></li>
            </ul>
        </td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/agora-vs-jitsi">Agora vs Jitsi</a>.</blockquote><h2 id="9-agora-vs-daily">9. Agora vs Daily</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-vs-Daily-min.jpg" class="kg-image" alt="Best Agora Competitors in 2026" loading="lazy" width="5716" height="2100"/></figure><p>Daily enables developers to easily create real-time video and audio calls that function directly within web browsers. <a href="https://www.videosdk.live/blog/daily-co-alternative">Daily</a> simplifies the management of typical backend video call features across different platforms, offering sensible defaults to streamline the development process and enhance the overall user experience.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <th/>
        <td><b>Agora pricing</b></td>
        <td><b>Daily pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$4</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$1.2</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>
            <ul>
                <li>H.264 Full HD Output Video: <b>$15.99</b> per 1,000 minutes</li>
                <li>H.265 HD Output Video: <b>$19.99</b> per 1,000 minutes</li>
                <li>H.265 Full HD Output Video: <b>$39.99</b> per 1,000 minutes</li>
            </ul>
        </td>
        <td>
            <ul>
                <li><b>$15</b> per 1,000 minutes</li>
            </ul>
        </td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>Voice: <b>$1.49</b></li>
                <li>SD: <b>$5.99</b></li>
                <li>HD: <b>$5.99</b></li>
                <li>Full HD: <b>$13.49</b></li>
                <li>2K: <b>$23.99</b></li>
                <li>2K+: <b>$53.99</b></li>
            </ul>
        </td>
        <td>
            <ul>
                <li>
                    <b>$13.49</b> per 1,000 minutes
                </li>
            </ul>
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/agora-vs-daily">Agora vs Daily</a>.</blockquote><h2 id="10-agora-vs-livekit">10. Agora vs LiveKit</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Agora-vs-LiveKit-min.jpg" class="kg-image" alt="Best Agora Competitors in 2026" loading="lazy" width="5716" height="2100"/></figure><p>LiveKit comprises a suite of SDKs designed to seamlessly incorporate live video and audio capabilities into your native applications. Noteworthy features include live streaming, in-game communication, video calls, and more. Leveraging a contemporary, end-to-end WebRTC stack, <a href="https://www.videosdk.live/blog/livekit-alternative">LiveKit</a> ensures smooth and high-quality real-time communication experiences for users.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <th/>
        <td><b>Agora pricing</b></td>
        <td><b>LiveKit pricing</b></td>
    </tr>
    <tr>
        <td><b>Video calling</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$20</b> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><b>Interactive live streaming</b></td>
        <td>Starts from <b>$3.99</b> per 1,000 minutes</td>
        <td>Starts from <b>$69</b> per hour</td>
    </tr>
    <tr>
        <td><b>RTMP</b></td>
        <td>
            <ul>
                <li>H.264 Full HD Output Video: <b>$15.99</b> per 1,000 minutes</li>
                <li>H.265 HD Output Video: <b>$19.99</b> per 1,000 minutes</li>
                <li>H.265 Full HD Output Video: <b>$39.99</b> per 1,000 minutes</li>
            </ul>
        </td>
        <td>No accurate data available</td>
    </tr>
    <tr>
        <td><b>Cloud Recording</b></td>
        <td>
            <ul>
                <li>Voice: <b>$1.49</b></li>
                <li>SD: <b>$5.99</b></li>
                <li>HD: <b>$5.99</b></li>
                <li>Full HD: <b>$13.49</b></li>
                <li>2K: <b>$23.99</b></li>
                <li>2K+: <b>$53.99</b></li>
            </ul>
        </td>
        <td>No accurate data available</td>
    </tr>
</table>
<!--kg-card-end: html-->
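To make the per-1,000-minute list prices above concrete, here is a minimal Kotlin sketch that turns a rate from these tables into an estimated monthly bill. This is an illustration only: real invoices depend on resolution tiers, volume discounts, and free-tier allowances.

```kotlin
import kotlin.math.roundToLong

// Convert a per-1,000-minute list rate into an estimated monthly bill,
// rounded to cents. Rates below are taken from the comparison tables
// above; actual pricing varies with resolution tiers and discounts.
fun monthlyCost(ratePer1000Min: Double, minutesUsed: Long): Double =
    (ratePer1000Min * minutesUsed / 1000.0 * 100).roundToLong() / 100.0

fun main() {
    val minutes = 50_000L  // e.g., 50,000 participant-minutes per month
    println("Agora video calling:   USD " + monthlyCost(3.99, minutes))
    println("Daily video calling:   USD " + monthlyCost(4.0, minutes))
    println("LiveKit video calling: USD " + monthlyCost(20.0, minutes))
}
```

At 50,000 participant-minutes, the roughly $0.01-per-1,000-minute gap between Agora and Daily amounts to about fifty cents, so at small scale the feature set matters far more than the headline rate.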
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/agora-vs-livekit">Agora vs LiveKit</a>.</blockquote><h2 id="have-you-determined-whether-agora-aligns-with-your-requirements-or-have-you-found-an-alternative">Have you determined whether Agora aligns with your requirements, or have you found an alternative?</h2>
<p>The ten competitors to Agora mentioned above offer diverse solutions for developers aiming to enhance in-app user experiences. Depending on your project's scale and requirements, you can opt for options like Twilio or Firebase for smaller projects where video communication is a secondary feature. Conversely, for a comprehensive engagement strategy spanning beyond in-app communication to include voice and video, solutions such as <a href="https://www.videosdk.live/signup">Video SDK</a> could be a more fitting choice.</p><p>Based on your specific needs, budget considerations, and the critical features you're seeking, Agora might not align perfectly with your requirements. To make an informed decision, it's advisable to explore the above-mentioned alternatives, some of which offer free trials like Video SDK. By testing these options through proof-of-concept projects, you can gain a better understanding of their suitability before making a significant commitment. Remember, if your needs change in the future, migrating away from Agora is always an option.</p>]]></content:encoded></item><item><title><![CDATA[Build a Video Calling App with Call Trigger in Android (Kotlin) using Firebase and VideoSDK]]></title><description><![CDATA[In this tutorial, you’ll learn how to make a native Android Kotlin video calling app with call trigger using the firebase and VideoSDK.]]></description><link>https://www.videosdk.live/blog/call-trigger-in-android-kotlin</link><guid isPermaLink="false">671b3459c646c7d24e60ea34</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 27 Nov 2024 07:40:04 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/11/clone_build_android_kotlin.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/clone_build_android_kotlin.png" alt="Build a Video Calling App with Call Trigger in Android (Kotlin) 
using Firebase and VideoSDK"/><p>In a world where connectivity through audio and video calls is essential, you've come to the right place if you're planning to create a video-calling app with call-trigger functionality.</p><p>In this tutorial, we will build a comprehensive video calling app for Android that enables smooth call handling and high-quality video communication. We’ll utilize the Telecom Framework to manage call functionality, Firebase for real-time data synchronization, and VideoSDK to deliver clear, reliable video conferencing.</p><p>Take a moment to watch the video demonstration and review <a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-call-trigger-example" rel="noreferrer">the complete code</a> for the sample app to see exactly what you'll be building in this blog.</p><h2 id="understanding-the-telecom-framework">Understanding the Telecom Framework</h2><p>Before we get started, let’s take a closer look at the Telecom Framework. This framework manages both audio and video calls on Android devices, supporting traditional SIM-based calls as well as VoIP calls via the ConnectionService API.<br/>The major components that Telecom manages are <code>ConnectionService</code> and <code>InCallService</code>:</p><ul><li><code>ConnectionService</code> handles the technical aspects of call connections, managing states, and audio/video routing.</li><li><code>InCallService</code> manages the user interface, allowing users to see and interact with ongoing calls.</li></ul><p>Understanding how the app will function internally before building it will make the development process smoother.</p><h2 id="app-functionality-overview">App Functionality Overview</h2><p>To understand how the app functions, consider the following scenario: John wants to call his friend Max. John opens the app, enters Max's caller ID, and presses "Call." Max sees an incoming call UI on his device, with options to accept or reject the call.
If he accepts, a video call is established between them using VideoSDK.</p><h3 id="steps-in-the-process">Steps in the Process:</h3><ol><li><strong>User Action</strong>: John enters Max's Caller ID and initiates the call.</li><li><strong>Database and Notification</strong>: The app maps the ID in the Firebase database and sends a notification to Max's device.</li><li><strong>Incoming Call UI</strong>: Max’s device receives the notification, triggering the incoming call UI using the Telecom Framework.</li><li><strong>Call Connection</strong>: If Max accepts, the video call begins using VideoSDK.</li></ol><p>Here is a pictorial representation of the flow for better understanding.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/connectionServiceFlow-1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Android (Kotlin) using Firebase and VideoSDK" loading="lazy" width="2330" height="890"/></figure><p/><p>Now that we have established the flow of the app and how it functions, let's get started with development!</p><h2 id="core-functionality-of-the-app">Core Functionality of the App</h2><p>The app relies on several key libraries to manage video calling and notifications:</p><ul><li><strong>Android Telecom Framework</strong>: Manages call routing and interaction with the system UI for incoming and outgoing calls.</li><li><strong>Retrofit</strong>: Used for sending and receiving API requests, including call initiation and status updates.</li><li><strong>Firebase Cloud Messaging (FCM)</strong>: Handles push notifications to trigger call events.</li><li><strong>Firebase Realtime Database</strong>: Stores user tokens and caller IDs for establishing video calls.</li><li><strong>VideoSDK</strong>: Manages the actual video conferencing features.</li></ul><h2 id="prerequisites">Prerequisites</h2><ul><li>Android Studio (for Android app development)</li><li>Firebase Project (for notifications and 
database management)</li><li>VideoSDK Account (for video conferencing functionality)</li><li>Node.js and Firebase Tools (for backend development)</li></ul><p>Make sure you have a basic understanding of Android development, Retrofit, and Firebase Cloud Messaging.</p><p>Now that we've covered the prerequisites, let's dive into building the app.</p><h2 id="android-app-setup">Android App Setup<br/></h2><h3 id="add-dependencies">Add Dependencies</h3><ol><li>In your<code>settings.gradle</code>, add Jetpack and Maven repositories:</li></ol><pre><code class="language-gradle">dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        mavenCentral()
        maven { url 'https://www.jitpack.io' }
        maven { url "https://maven.aliyun.com/repository/jcenter" }
    }
}</code></pre><ol start="2"><li>In your <code>build.gradle</code> file, add the following dependencies:</li></ol><pre><code class="language-gradle">   //VideoSdk
    implementation 'live.videosdk:rtc-android-sdk:0.1.35'
    implementation 'com.nabinbhandari.android:permissions:3.8'
    implementation 'com.amitshekhar.android:android-networking:1.0.2'

    //Firebase
    implementation 'com.google.firebase:firebase-messaging:23.0.0'
    implementation platform('com.google.firebase:firebase-bom:33.4.0')
    implementation 'com.google.firebase:firebase-analytics'

    //Retrofit
    implementation 'com.squareup.retrofit2:retrofit:2.9.0'
    implementation 'com.squareup.retrofit2:converter-gson:2.9.0'
    implementation 'com.squareup.retrofit2:converter-scalars:2.9.0'</code></pre><h3 id="set-permissions-in-androidmanifestxml">Set Permissions in <code>AndroidManifest.xml</code></h3><p>Ensure the following permissions are configured:</p><pre><code class="language-xml">    &lt;uses-feature
        android:name="android.hardware.camera"
        android:required="false" /&gt;
    &lt;uses-feature
        android:name="android.hardware.telephony"
        android:required="false" /&gt;

    &lt;uses-permission android:name="android.permission.ANSWER_PHONE_CALLS" /&gt;
    &lt;uses-permission android:name="android.permission.POST_NOTIFICATIONS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.READ_PHONE_STATE" /&gt;</code></pre><h2 id="how-to-configure-firebase-for-notifications-and-realtime-database">How to Configure Firebase for Notifications and Realtime Database<br/></h2><h3 id="a-firebase-setup-for-notifications-realtime-database">[a] Firebase Setup for Notifications &amp; Realtime Database<br/></h3><p><strong>Step 1: Add Firebase to Your Android App</strong></p><ul><li>Go to the Firebase Console and create a new project.</li><li>Download the <code>google-services.json</code> file and place it in your project’s <code>app/</code> directory</li></ul><p><strong>Step 2: Add Firebase Dependencies</strong></p><p>In your <code>build.gradle</code> files, add the necessary Firebase dependencies:</p><pre><code class="language-gradle">// Project-level build.gradle
classpath 'com.google.gms:google-services:4.3.10'

// App-level build.gradle
apply plugin: 'com.google.gms.google-services'

implementation 'com.google.firebase:firebase-messaging:23.0.0'
implementation 'com.google.firebase:firebase-database:20.0.0'
</code></pre><p><strong>Step 3: Enable Firebase Messaging and Real-time Database</strong></p><ul><li>Enable <strong>Firebase Cloud Messaging (FCM)</strong> and <strong>Realtime Database</strong> in the Firebase console under your project settings.</li></ul><p><strong>Step 4: Firebase Service Configuration</strong></p><p>Ensure your app is registered in Firebase, and implement a <code>FirebaseMessagingService</code> to handle notifications, which we will do later.</p><h3 id="b-firebase-server-side-setup-serviceaccountjson-file">[b] Firebase Server-Side Setup (serviceAccount.json file)</h3><p>To set up Firebase Admin SDK for your server, follow these steps:</p><ol><li>Go to the Firebase Console and select your project.</li><li>In <strong>Project Settings</strong>, navigate to the <strong>Service Accounts </strong>tab.</li><li>Click on <strong>Generate a new private key</strong> to download your service account JSON file. This file will be named something like <code>&lt;your-project&gt;-firebase-adminsdk-&lt;unique-id&gt;.json</code>.</li></ol><h3 id="project-structure">Project Structure</h3><pre><code class="language-Project Structure">Root/
├── app/
│   ├── src/
│   │   ├── main/
│   │   │   ├── java/
│   │   │   │   ├── FirebaseDatabase/
│   │   │   │   │   └── DatabaseUtils.kt
│   │   │   │   ├── Meeting/
│   │   │   │   │   ├── MeetingActivity.kt
│   │   │   │   │   └── ParticipantAdapter.kt
│   │   │   │   ├── Network/
│   │   │   │   │   ├── ApiClient.kt
│   │   │   │   │   ├── ApiService.kt
│   │   │   │   │   ├── NetworkCallhandler.kt
│   │   │   │   │   └── NetworkUtils.kt
│   │   │   │   ├── Services/
│   │   │   │   │   ├── CallConnectionService.kt
│   │   │   │   │   ├── MyFirebaseMessagingService.kt
│   │   │   │   │   └── MyInCallService.kt
│   │   │   │   ├── MainActivity.kt
│   │   │   │   ├── MainApplication.kt
│   │   │   │   └── MeetingIdCallBack.kt
│   │   ├── res/
│   │   │   └── layout/
│   │   │       ├── activity_main.xml
│   │   │       ├── activity_meeting.xml
│   │   │       └── item_remote_peer.xml
├── build.gradle
└── settings.gradle
</code></pre><p>Let's get started with the basic UI of the call-initiating screen.</p><h2 id="ui-development">UI Development</h2><p>The <strong>MainActivity layout</strong> provides the initial screen where users can initiate video calls. The key components include:</p><ul><li><strong>Caller ID Input</strong>: An <code>EditText</code> for entering the unique caller ID to initiate a call.</li><li><strong>Call Button</strong>: A button to trigger the call.</li><li><strong>Unique ID Display</strong>: Displays a unique ID for each user.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-xml"> &lt;LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:orientation="vertical"
    android:background="?android:attr/windowBackground"&gt;

    &lt;com.google.android.material.appbar.MaterialToolbar
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        app:title="VideoSDK CallKit Example"
        app:titleTextColor="@color/white"
        android:background="?attr/colorPrimaryDark"/&gt;

    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:orientation="vertical"
        android:gravity="center"
        android:padding="24dp"
        android:layout_weight="1"&gt;

   &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:background="@android:drawable/editbox_background_normal"
        android:orientation="vertical"
        android:layout_marginBottom="50dp"
        android:backgroundTint="#E7E1E1"&gt;

        &lt;TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:text="Your Caller ID"
            android:textSize="23sp"
            android:textColor="@color/black"
            android:paddingLeft="12dp"
            android:paddingRight="12dp"
            android:fontFamily="sans-serif-medium"
            android:gravity="center"
            android:elevation="4dp"
            android:layout_margin="8dp"
            android:clipToOutline="true"
            /&gt;

        &lt;View
            android:layout_width="match_parent"
            android:layout_height="1dp"
            android:background="@color/black"
            /&gt;

        &lt;LinearLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:orientation="horizontal"
            android:gravity="center"
            &gt;
            &lt;TextView
                android:id="@+id/txt_callId"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="Call Id"
                android:textSize="32sp"
                android:textColor="@color/black"
                android:paddingLeft="12dp"
                android:paddingRight="12dp"
                android:fontFamily="sans-serif-medium"
                android:gravity="center"
                android:layout_margin="8dp"
                android:textIsSelectable="true"
                android:clipToOutline="true"
                /&gt;

            &lt;ImageView
                android:id="@+id/copyIcon"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:src="@drawable/baseline_content_copy_24"
                android:padding="8dp"
                /&gt;
        &lt;/LinearLayout&gt;
    &lt;/LinearLayout&gt;


    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:background="@android:drawable/editbox_background_normal"
        android:orientation="vertical"
        android:gravity="center"
        android:backgroundTint="#E7E1E1"
        android:padding="14dp"
        &gt;
        &lt;TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:text="Enter call ID of user you want to call"
            android:textSize="23sp"
            android:textColor="@color/black"
            android:fontFamily="sans-serif-medium"
            android:elevation="4dp"
            android:layout_marginBottom="20dp"
            /&gt;

        &lt;EditText
            android:inputType="number"
            android:id="@+id/caller_id_input"
            android:layout_width="match_parent"
            android:layout_height="64dp"
            android:hint="Enter Caller ID"
            android:layout_marginBottom="10dp"
            android:background="@android:drawable/editbox_background_normal"
            android:textSize="18sp"
            android:textColor="@android:color/black"
            android:gravity="center"/&gt;

        &lt;Button
            android:id="@+id/call_button"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:text="Call"
            android:padding="16dp"
            android:layout_marginTop="20dp"
            android:textSize="18sp"
            android:textColor="@android:color/white"
            /&gt;

    &lt;/LinearLayout&gt;

    &lt;/LinearLayout&gt;
&lt;/LinearLayout&gt;
</code></pre><figcaption><p><span style="white-space: pre-wrap;">activity_main.xml</span></p></figcaption></figure><p>This is how the UI will look:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/1000000166--1-.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Android (Kotlin) using Firebase and VideoSDK" loading="lazy" width="200" height="445"/></figure><p>With the UI for calling in place, let's start with the actual calling development.</p><h3 id="firebase-messaging-for-call-initiation">Firebase Messaging for Call Initiation</h3><ol><li>To initiate the call process, we first need to secure user permission to manage notifications and calls.</li></ol><pre><code class="language-kotlin">class MainActivity : AppCompatActivity() {
    private lateinit var callerIdInput: EditText
    private lateinit var myId: TextView
    private lateinit var copyIcon: ImageView
    private var myCallId: String = (10000000 + Random().nextInt(90000000)).toString()
    private lateinit var FcmToken: String

    @RequiresApi(Build.VERSION_CODES.O)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        checkSelfPermission(REQUESTED_PERMISSIONS[0], PERMISSION_REQ_ID)
        if (Build.VERSION.SDK_INT &gt;= Build.VERSION_CODES.TIRAMISU) {
            checkSelfPermission(REQUESTED_PERMISSIONS[1], PERMISSION_REQ_ID)
        }
    }

    override fun onRequestPermissionsResult(requestCode: Int, permissions: Array&lt;String&gt;, grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == PERMISSION_REQ_ID) {
            var allPermissionsGranted = true
            for (result in grantResults) {
                if (result != PackageManager.PERMISSION_GRANTED) {
                    allPermissionsGranted = false
                    break
                }
            }

            if (allPermissionsGranted) {
                registerPhoneAccount()
            } else {
                Toast.makeText(this, "Permissions are required for call management", Toast.LENGTH_LONG).show()
            }
        }
    }

    private fun checkSelfPermission(permission: String, requestCode: Int): Boolean {
        if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED)
        {
            ActivityCompat.requestPermissions(this, REQUESTED_PERMISSIONS, requestCode)
            return false
        }
        return true
    }

    companion object {
        private const val PERMISSION_REQ_ID = 22

        private val REQUESTED_PERMISSIONS = arrayOf(
            Manifest.permission.READ_PHONE_STATE,
            Manifest.permission.POST_NOTIFICATIONS
        )
    }
}</code></pre><ol start="2"><li>Once permission is granted, the next step is to identify each user and retrieve their messaging token, enabling us to send notifications effectively.</li></ol><blockquote>Don't worry if you encounter an error due to a missing file. Please continue following the steps, as the required file will be provided later in the guide.</blockquote><pre><code class="language-kotlin">class MainActivity : AppCompatActivity() {
    private lateinit var callerIdInput: EditText
    private lateinit var myId: TextView
    private lateinit var copyIcon: ImageView
    private var myCallId: String = (10000000 + Random().nextInt(90000000)).toString()
    private lateinit var FcmToken: String

    @RequiresApi(Build.VERSION_CODES.O)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        myId = findViewById(R.id.txt_callId)
        callerIdInput = findViewById(R.id.caller_id_input)

        copyIcon = findViewById(R.id.copyIcon)
        val callButton = findViewById&lt;Button&gt;(R.id.call_button)
        myId.text = myCallId


        val clipboardManager = getSystemService(CLIPBOARD_SERVICE) as ClipboardManager
        val clipData = ClipData.newPlainText("copied text", myCallId)

        copyIcon.setOnClickListener {
            clipboardManager.setPrimaryClip(clipData)
            Toast.makeText(this, "Copied to clipboard", Toast.LENGTH_SHORT).show()
        }

        NetworkCallHandler.myCallId = myCallId
        DatabaseUtils.myCallId = myCallId


        NetworkUtils().createMeeting(object : MeetingIdCallBack {
            override fun onMeetingIdReceived(meetingId: String, token: String) {
                MainApplication.meetingId=meetingId

            }
        })
        //Firebase Notification
        val channel = NotificationChannel("notification_channel", "notification_channel",
            NotificationManager.IMPORTANCE_DEFAULT
        )
        val manager = getSystemService(NotificationManager::class.java)
        manager.createNotificationChannel(channel)
        FirebaseMessaging.getInstance().subscribeToTopic("general")
            .addOnCompleteListener { task -&gt;
                var msg = "Subscribed Successfully"
                if (!task.isSuccessful) {
                    msg = "Subscription failed"
                }
                Toast.makeText(this@MainActivity, msg, Toast.LENGTH_SHORT).show()
            }

        //Firebase Database Actions
        val databaseUtils = DatabaseUtils()
        val databaseReference = FirebaseDatabase.getInstance().reference
        FirebaseMessaging.getInstance().token.addOnCompleteListener { task: Task&lt;String&gt; -&gt;
            // task.result throws if the token fetch failed, so bail out early
            if (!task.isSuccessful) return@addOnCompleteListener
            FcmToken = task.result
            NetworkCallHandler.FcmToken = task.result
            DatabaseUtils.FcmToken = task.result
            databaseUtils.sendUserDataToFirebase(databaseReference)
        }

        //telecom Api
        registerPhoneAccount()

        callButton.setOnClickListener {
            val callerNumber = callerIdInput.text.toString()
            if (callerNumber.length == 8) {
                databaseUtils.retrieveUserData(databaseReference, callerNumber)
            } else {
                Toast.makeText(
                    this@MainActivity,
                    "Please input the correct caller ID",
                    Toast.LENGTH_SHORT
                ).show()
            }
        }
    }
    
    private fun registerPhoneAccount() {
        val phoneAccount = PhoneAccount.builder(MainApplication.phoneAccountHandle, "VideoSDK")
            .setCapabilities(PhoneAccount.CAPABILITY_CALL_PROVIDER)
            .build()

        MainApplication.telecomManager?.registerPhoneAccount(phoneAccount)


        if (ActivityCompat.checkSelfPermission(
                this,
                Manifest.permission.READ_PHONE_STATE
            ) != PackageManager.PERMISSION_GRANTED
        ) {
            return
        }
        var checkAccount = 0
        val list: List&lt;PhoneAccountHandle&gt; =
            MainApplication.telecomManager!!.callCapablePhoneAccounts
        for (handle in list) {
            if (handle.componentName.className == "live.videosdk.ConnectionService.quickstart.Services.CallConnectionService") {
                checkAccount++
                break
            }
        }
        if (checkAccount == 0) {
            val intent = Intent(TelecomManager.ACTION_CHANGE_PHONE_ACCOUNTS)
            startActivity(intent)
        }
    }
}</code></pre><p>Check if the FirebaseMessaging token is already present in the database. If it exists, update the caller ID; otherwise, create a new entry in the database.</p><pre><code class="language-kotlin">class DatabaseUtils {
    companion object {
        lateinit var myCallId: String
        lateinit var FcmToken: String
    }

    private lateinit var calleeInfoToken: String

    val networkUtils: NetworkCallHandler = NetworkCallHandler()
    fun sendUserDataToFirebase(databaseReference: DatabaseReference) {
        val usersRef = databaseReference.child("User")

        usersRef.orderByChild("token").equalTo(FcmToken)
            .addListenerForSingleValueEvent(object : ValueEventListener {
                override fun onDataChange(dataSnapshot: DataSnapshot) {
                    if (dataSnapshot.exists()) {
                        // Token exists, update the callerId
                        for (userSnapshot in dataSnapshot.children) {
                            userSnapshot.ref.child("callerId").setValue(myCallId)
                                .addOnSuccessListener { aVoid: Void? -&gt;
                                    Log.d("FirebaseData", "CallerId successfully updated.")
                                }
                                .addOnFailureListener { e: Exception? -&gt;
                                    Log.e("FirebaseError", "Failed to update callerId.", e)
                                }
                        }
                    } else {
                        // Token doesn't exist, create new entry
                        val userId = usersRef.push().key
                        val map: MutableMap&lt;String, Any?&gt; = HashMap()
                        map["callerId"] = myCallId
                        map["token"] = FcmToken
                        
                        if (userId != null) {
                            usersRef.child(userId).setValue(map)
                                .addOnSuccessListener { aVoid: Void? -&gt;
                                    Log.d("FirebaseData", "Data successfully saved.")
                                }
                                .addOnFailureListener { e: Exception? -&gt;
                                    Log.e("FirebaseError", "Failed to save data.", e)
                                }
                        }
                    }
                }

                override fun onCancelled(databaseError: DatabaseError) {
                    Log.e(
                        "FirebaseError",
                        "Error checking for existing token",
                        databaseError.toException()
                    )
                }
            })
    }
}</code></pre><p>When the call is initiated, first verify if the caller ID exists in the Firebase database. If it does, proceed to invoke the notification method.</p><pre><code class="language-java">class DatabaseUtils {

    fun retrieveUserData(databaseReference: DatabaseReference, callerNumber: String) {
        databaseReference.child("User").orderByChild("callerId").equalTo(callerNumber)
            .addListenerForSingleValueEvent(object : ValueEventListener {
                override fun onDataChange(snapshot: DataSnapshot) {
                    if (snapshot.exists()) {
                        for (data in snapshot.children) {
                            val token = data.child("token").getValue(
                                String::class.java
                            )
                            if (token != null) {
                                calleeInfoToken = token
                                NetworkCallHandler.calleeInfoToken = token
                                networkUtils.initiateCall()
                                break
                            }
                        }
                    } else {
                        Log.d("TAG", "retrieveUserData: No matching callerId found")
                    }
                }

                override fun onCancelled(error: DatabaseError) {
                    Log.e("FirebaseError", "Failed to read data from Firebase", error.toException())
                }
            })
    }
}</code></pre><p>We'll configure the VideoSDK token and meeting ID as soon as the home screen loads, so they're ready when the user initiates a call.</p><pre><code class="language-java">class NetworkUtils {
    //Replace with the token you generated from the VideoSDK Dashboard
    var sampleToken: String = MainApplication.token
    fun createMeeting(callBack: MeetingIdCallBack) {

        AndroidNetworking.post("https://api.videosdk.live/v2/rooms")
            .addHeaders("Authorization", sampleToken) //we will pass the token in the Headers
            .build()
            .getAsJSONObject(object : JSONObjectRequestListener {
                override fun onResponse(response: JSONObject) {
                    try {
                        // response will contain `roomId`
                        val meetingId = response.getString("roomId")
                        callBack.onMeetingIdReceived(meetingId, sampleToken)
                    } catch (e: JSONException) {
                        e.printStackTrace()
                        Log.d("TAG", "onResponse: $e")
                    }
                }

                override fun onError(anError: ANError) {
                    anError.printStackTrace()
                    Log.d("TAG", "onError: " + anError.message)
                }
            })
    }
}</code></pre><p>The <code>MeetingIdCallBack</code> interface lets us receive the <code>meetingId</code> and <code>token</code> in <code>MainActivity</code>.</p><pre><code class="language-java">interface MeetingIdCallBack {
    fun onMeetingIdReceived(meetingId: String, token: String)
}</code></pre><p>Use the <code>onMeetingIdReceived</code> callback to store the <code>meetingId</code> and <code>token</code>.</p><pre><code class="language-java">class MainActivity : AppCompatActivity() {

    @RequiresApi(Build.VERSION_CODES.O)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        NetworkUtils().createMeeting(object : MeetingIdCallBack {
            override fun onMeetingIdReceived(meetingId: String, token: String) {
                MainApplication.meetingId = meetingId
            }
        })
    }
}</code></pre><ol start="3"><li>The next step is to initiate the call.</li></ol><p>For this, we’ll set up an Express server with two APIs as Firebase functions: one to trigger notifications on the other device, and another to update the call status (accepted or rejected).</p><p>Start by importing and initializing the required packages in <code>server.js</code>. We will also need to initialize the Firebase Admin SDK.</p><pre><code class="language-javascript">const functions = require("firebase-functions");
const express = require("express");
const cors = require("cors");
const morgan = require("morgan");
var admin = require("firebase-admin");
const { v4: uuidv4 } = require("uuid");

const app = express();
app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(morgan("dev"));

// Path to your service account key file for Firebase Admin SDK
var serviceAccount = require("add_path_here");

// Initialize Firebase Admin SDK
admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "database_url" // Replace with your database URL
});

// Home Route
app.get("/", (req, res) =&gt; {
  res.send("Hello World!");
});

// Start the Express server (only needed for local development; when deployed,
// the Cloud Function export below serves requests instead)
app.listen(9000, () =&gt; {
  console.log(`API server listening at http://localhost:9000`);
});

// Export app as a Firebase Cloud Function
exports.app = functions.https.onRequest(app);</code></pre><ul><li>The first API we need is <code>initiate-call</code>, which sends a notification to the receiving user and starts the call by passing along details such as caller information and the VideoSDK room.</li></ul><pre><code class="language-javascript">// Initiate call notification (for Android)
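
// The request body this endpoint expects mirrors what the Android client builds
// in NetworkCallHandler.initiateCall() (callerInfo, calleeInfo, videoSDKInfo).
// As an illustrative sketch, the FCM message assembled in the handler below can
// be factored out like this (buildInitiateMessage is our own name, not part of
// the handler):
function buildInitiateMessage(body) {
  const info = JSON.stringify({
    callerInfo: body.callerInfo,
    videoSDKInfo: body.videoSDKInfo,
    type: "CALL_INITIATED",
  });
  return {
    data: { info },
    android: { priority: "high" }, // high priority wakes the device for the call
    token: body.calleeInfo.token,  // deliver to the callee's device
  };
}
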
app.post("/initiate-call", (req, res) =&gt; {
  const { calleeInfo, callerInfo, videoSDKInfo } = req.body;

  var FCMtoken = calleeInfo.token;
  const info = JSON.stringify({
    callerInfo,
    videoSDKInfo,
    type: "CALL_INITIATED",
  });
  var message = {
    data: {
      info,
    },
    android: {
      priority: "high",
    },
    token: FCMtoken,
  };

  // Send the FCM message using firebase-admin
  admin.messaging().send(message)
    .then((response) =&gt; {
      console.log("Successfully sent FCM message:", response);
      res.status(200).send(response);
    })
    .catch((error) =&gt; {
      console.log("Error sending FCM message:", error);
      res.status(400).send("Error sending FCM message: " + error);
    });
});</code></pre><ul><li>The second API we need is <code>update-call</code>, which updates the status of the incoming call (accepted, rejected, and so on) and sends a notification back to the caller.</li></ul><pre><code class="language-javascript">// Update call notification (for Android)
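
// Round-trip sketch: /update-call packs callerInfo and type into data.info, and
// the Android FirebaseMessagingService parses that same JSON back out of the
// payload. buildUpdateMessage is our own helper name, not part of the handler.
function buildUpdateMessage(callerInfo, type) {
  return {
    data: { info: JSON.stringify({ callerInfo, type }) },
    token: callerInfo.token, // the status notification goes back to the caller's device
  };
}
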
app.post("/update-call", (req, res) =&gt; {
  const { callerInfo, type } = req.body;
  const info = JSON.stringify({
    callerInfo,
    type,
  });

  var message = {
    data: { info },
    token: callerInfo.token, // Token for the target device
    android: {
      priority: "high",
      notification: {
        title: "Call Updated",
        body: "Your call has been updated by " + callerInfo.name,
      },
    },
  };

  // Send the update message through firebase-admin
  admin.messaging().send(message)
    .then((response) =&gt; {
      console.log("Successfully updated call:", response);
      res.status(200).send(response);
    })
    .catch((error) =&gt; {
      console.log("Error updating call:", error);
      res.status(400).send("Error updating call: " + error);
    });
});</code></pre><p>4. Now that the APIs are created, we will trigger them from the app.</p><p>Here, the <code>BASE_URL</code> needs to be updated with the URL of your Firebase Functions deployment.</p><p>You can either deploy the server or run it locally with <code>node server.js</code>.</p><p>If you're running the server locally, use your machine's IP address so the device can reach it.</p><pre><code class="language-java">object ApiClient {
// Base URL for the API endpoint, replace "YOUR_BASE_URL" with the actual URL. e.g http://172.20.10.6:9000/
    private const val BASE_URL = "YOUR_BASE_URL" 
    private var retrofit: Retrofit? = null

    val client: Retrofit?
        get() {
            if (retrofit == null) {
                retrofit = Retrofit.Builder()
                    .baseUrl(BASE_URL)
                    // ScalarsConverterFactory handles plain text responses
                    .addConverterFactory(ScalarsConverterFactory.create())
                    // GsonConverterFactory handles JSON responses
                    .addConverterFactory(GsonConverterFactory.create())
                    .build()
            }
            return retrofit
        }
}</code></pre><p>Now, let's define the endpoints for the API calls.</p><pre><code class="language-java">interface ApiService {

    // @JvmSuppressWildcards keeps Kotlin from generating wildcard types
    // (Map&lt;String, ? extends Object&gt;), which Retrofit cannot serialize
    @POST("/initiate-call")
    @JvmSuppressWildcards
    fun initiateCall(@Body callRequestBody: Map&lt;String, Any&gt;): Call&lt;String&gt;

    @POST("/update-call")
    @JvmSuppressWildcards
    fun updateCall(@Body callUpdateBody: Map&lt;String, Any&gt;): Call&lt;String&gt;
}
</code></pre><p>Initiates a call by sending caller, callee, and VideoSDK information to the server, then handles the server response, logging success or failure accordingly.</p><pre><code class="language-java">class NetworkCallHandler  {

    fun initiateCall() {
        val apiService: ApiService = ApiClient.client!!.create(ApiService::class.java)

        val callerInfo: MutableMap&lt;String, String&gt; = HashMap()
        val calleeInfo: MutableMap&lt;String, String&gt; = HashMap()
        val videoSDKInfo: MutableMap&lt;String, String&gt; = HashMap()

        // callerInfo
        callerInfo["callerId"] = myCallId
        callerInfo["token"] = FcmToken
        // calleeInfo
        calleeInfo["token"] = calleeInfoToken
        // videoSDKInfo
        videoSDKInfo["meetingId"] = MainApplication.meetingId ?: return
        videoSDKInfo["token"] = MainApplication.token

        val callRequestBody: MutableMap&lt;String, Any&gt; = HashMap()
        callRequestBody["callerInfo"] = callerInfo
        callRequestBody["calleeInfo"] = calleeInfo
        callRequestBody["videoSDKInfo"] = videoSDKInfo

        val call: Call&lt;String&gt; = apiService.initiateCall(callRequestBody)
        call.enqueue(object : Callback&lt;String&gt; {
            override fun onResponse(call: Call&lt;String&gt;, response: Response&lt;String&gt;) {
                if (response.isSuccessful) {
                    Log.d("API", "Call initiated: " + response.body())
                } else {
                    Log.e("API", "Failed to initiate call: " + response.message())
                }
            }

            override fun onFailure(call: Call&lt;String&gt;, t: Throwable) {
                Log.e("API", "API call failed: " + t.message)
            }
        })
    }
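
    // updateCall is invoked from CallConnectionService (onAnswer/onReject) later in
    // this post, but its body isn't shown elsewhere. Here is a minimal sketch: it
    // assumes the /update-call body shape (callerInfo + type), with the caller's
    // FCM token taken from the incoming notification (MyFirebaseMessagingService.FCMtoken).
    fun updateCall(type: String) {
        val apiService: ApiService = ApiClient.client!!.create(ApiService::class.java)

        val callerInfo: MutableMap&lt;String, String&gt; = HashMap()
        callerInfo["name"] = myCallId // display name used in the update notification
        callerInfo["token"] = MyFirebaseMessagingService.FCMtoken

        val callUpdateBody: MutableMap&lt;String, Any&gt; = HashMap()
        callUpdateBody["callerInfo"] = callerInfo
        callUpdateBody["type"] = type

        apiService.updateCall(callUpdateBody).enqueue(object : Callback&lt;String&gt; {
            override fun onResponse(call: Call&lt;String&gt;, response: Response&lt;String&gt;) {
                Log.d("API", "Call update sent: " + response.body())
            }

            override fun onFailure(call: Call&lt;String&gt;, t: Throwable) {
                Log.e("API", "API call failed: " + t.message)
            }
        })
    }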
}</code></pre><p>5. The notification payload is now configured. Next, we need to trigger the call when the notification is received.</p><p>We extract the call details from the notification body, which we'll later use to join the meeting.</p><pre><code class="language-java">class MyFirebaseMessagingService : FirebaseMessagingService() {

    companion object {
        private const val TAG = "FCMService"
        private const val CHANNEL_ID = "notification_channel"
        lateinit var FCMtoken: String
    }

    private var callerID: String? = null
    private var meetingId: String? = null
    private var token: String? = null

    override fun onNewToken(token: String) {
        super.onNewToken(token)
        // A refreshed token could be pushed to Firebase here; in this sample,
        // MainActivity reads the current token at startup instead.
    }

    @RequiresApi(Build.VERSION_CODES.O)
    override fun onMessageReceived(remoteMessage: RemoteMessage) {
        val data = remoteMessage.data

        if (data.isNotEmpty()) {
            try {
                val `object` = JSONObject(data["info"]!!)

                val callerInfo = `object`.getJSONObject("callerInfo")
                callerID = callerInfo.getString("callerId")
                FCMtoken = callerInfo.getString("token")

                if (`object`.has("videoSDKInfo")) {
                    val videoSdkInfo = `object`.getJSONObject("videoSDKInfo")
                    meetingId = videoSdkInfo.getString("meetingId")
                    token = videoSdkInfo.getString("token")
                    handleIncomingCall(callerID)
                }

                val type = `object`.getString("type")
                when (type) {
                    "ACCEPTED" -&gt; startMeeting()
                    "REJECTED" -&gt; {
                        showIncomingCallNotification(callerID)
                        Handler(Looper.getMainLooper()).post {
                            Toast.makeText(applicationContext, "CALL REJECTED FROM CALLER ID: $callerID", Toast.LENGTH_SHORT).show()
                        }
                    }
                }

            } catch (e: JSONException) {
                // Don't crash the service on a malformed payload; log it instead
                Log.e(TAG, "onMessageReceived: malformed notification payload", e)
            }
        } else {
            Log.d(TAG, "onMessageReceived: No data found in the notification payload.")
        }
    }

    private fun startMeeting() {
        val intent = Intent(applicationContext, MeetingActivity::class.java).apply {
            addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
            putExtra("meetingId", MainApplication.meetingId)
            putExtra("token", MainApplication.token)
        }
        startActivity(intent)
    }

    private fun handleIncomingCall(callerId: String?) {
        val extras = Bundle().apply {
            val uri = Uri.fromParts("tel", callerId, null)
            putParcelable(TelecomManager.EXTRA_INCOMING_CALL_ADDRESS, uri)
            putString("meetingId", meetingId)
            putString("token", token)
            putString("callerID", callerId)
        }

        try {
            MainApplication.telecomManager?.addNewIncomingCall(MainApplication.phoneAccountHandle, extras)
        } catch (cause: Throwable) {
            Log.e("handleIncomingCall", "error in addNewIncomingCall", cause)
        }
    }

    @RequiresApi(Build.VERSION_CODES.O)
    private fun showIncomingCallNotification(callerId: String?) {
        createNotificationChannel()

        val intent = Intent(this, MainActivity::class.java)
        val pendingIntent = PendingIntent.getActivity(this, 0, intent, PendingIntent.FLAG_MUTABLE)

        val notification = NotificationCompat.Builder(this, CHANNEL_ID)
            .setSmallIcon(R.drawable.baseline_call_24)
            .setContentTitle("Call REJECTED")
            .setContentText("Call from $callerId")
            .setPriority(NotificationCompat.PRIORITY_HIGH)
            .setFullScreenIntent(pendingIntent, true)
            .setAutoCancel(true)
            .setContentIntent(pendingIntent)
            .setCategory(NotificationCompat.CATEGORY_CALL)
            .build()

        val notificationManager = getSystemService(NOTIFICATION_SERVICE) as NotificationManager
        notificationManager.notify(1, notification)
    }

    @RequiresApi(Build.VERSION_CODES.O)
    private fun createNotificationChannel() {
        val channel = NotificationChannel(CHANNEL_ID, "Incoming Calls", NotificationManager.IMPORTANCE_HIGH)
        val notificationManager = getSystemService(NotificationManager::class.java)
        notificationManager.createNotificationChannel(channel)
    }
}</code></pre><p>Add this <code>MyFirebaseMessagingService</code> to <code>AndroidManifest.xml</code></p><pre><code class="language-xml">&lt;service
  android:name=".Services.MyFirebaseMessagingService"
  android:exported="false"&gt;
  &lt;intent-filter&gt;
      &lt;action android:name="com.google.firebase.MESSAGING_EVENT" /&gt;
  &lt;/intent-filter&gt;
&lt;/service&gt;</code></pre><p>Now, when the notification is received, it should trigger a call.</p><p>To achieve this, we need to register a <code>PhoneAccountHandle</code> with the <code>TelecomManager</code> and provide a <code>CallConnectionService</code> to manage and handle calls.</p><p>Initialize the <code>TelecomManager</code> and <code>PhoneAccountHandle</code> in the <code>MainApplication</code> class to make them accessible throughout the application.</p><p>Also, to initiate the call with VideoSDK, you need to add the VideoSDK token to <code>MainApplication</code>. You can obtain this token from the <a href="https://app.videosdk.live/" rel="noreferrer">VideoSDK Dashboard</a>.</p><pre><code class="language-java">class MainApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        VideoSDK.initialize(applicationContext)

        telecomManager = getSystemService(TELECOM_SERVICE) as TelecomManager
        val componentName: ComponentName = ComponentName(this, CallConnectionService::class.java)
        phoneAccountHandle = PhoneAccountHandle(componentName, "myAccountId")
    }

    companion object {
        var telecomManager: TelecomManager? = null
        var phoneAccountHandle: PhoneAccountHandle? = null
        var meetingId: String? = null
        var token: String = "VideoSDK token"
    }
}</code></pre><p>Now, in <code>MainActivity</code>, we'll register the <code>PhoneAccount</code> and check for the permissions required to manage calls.</p><pre><code class="language-java">class MainActivity : AppCompatActivity() {
    private lateinit var callerIdInput: EditText
    private lateinit var myId: TextView
    private lateinit var copyIcon: ImageView
    private var myCallId: String = (10000000 + Random().nextInt(90000000)).toString()
    private lateinit var FcmToken: String

    @RequiresApi(Build.VERSION_CODES.O)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        //telecom Api
        registerPhoneAccount()
    }

    private fun registerPhoneAccount() {
        val phoneAccount = PhoneAccount.builder(MainApplication.phoneAccountHandle, "VideoSDK")
            .setCapabilities(PhoneAccount.CAPABILITY_CALL_PROVIDER)
            .build()

        MainApplication.telecomManager?.registerPhoneAccount(phoneAccount)


        if (ActivityCompat.checkSelfPermission(
                this,
                Manifest.permission.READ_PHONE_STATE
            ) != PackageManager.PERMISSION_GRANTED
        ) {
            return
        }
        // Check whether our ConnectionService is already registered as a
        // call-capable phone account; if not, open the phone-accounts settings screen
        val isRegistered = MainApplication.telecomManager!!.callCapablePhoneAccounts.any {
            it.componentName.className == "live.videosdk.ConnectionService.quickstart.Services.CallConnectionService"
        }
        if (!isRegistered) {
            startActivity(Intent(TelecomManager.ACTION_CHANGE_PHONE_ACCOUNTS))
        }
    }
}</code></pre><p>Next, to manage VoIP calls, we’ll create a new service <code>CallConnectionService</code> that extends <code>ConnectionService</code>. This service will handle the technical aspects of call connections, such as managing call states and routing audio/video.</p><pre><code class="language-java">class CallConnectionService : ConnectionService() {
    var callerID: String? = null
    val obj = NetworkCallHandler()

    override fun onCreateIncomingConnection(
        connectionManagerPhoneAccount: PhoneAccountHandle,
        request: ConnectionRequest
    ): Connection {
        // Create a connection for the incoming call
        val connection: Connection = object : Connection() {
            override fun onAnswer() {
                super.onAnswer()
                //getting videosdk info
                val extras = request.extras
                val meetingId = extras.getString("meetingId")
                val token = extras.getString("token")
                callerID = extras.getString("callerID")
                obj.updateCall("ACCEPTED")
                // Start the meeting activity with the extracted data
                val intent = Intent(
                    applicationContext,
                    MeetingActivity::class.java
                )
                intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
                intent.putExtra("meetingId", meetingId)
                intent.putExtra("token", token)
                startActivity(intent)


                //update
                setDisconnected(DisconnectCause(DisconnectCause.LOCAL))
                destroy()
            }

            override fun onReject() {
                super.onReject()
                val intent = Intent(
                    applicationContext,
                    MainActivity::class.java
                )
                intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
                startActivity(intent)
                //update
                obj.updateCall("REJECTED")
                setDisconnected(DisconnectCause(DisconnectCause.LOCAL))
                destroy()
            }
        }

        // Set the call address and caller display name (the callerID travels in the request extras)
        connection.setAddress(request.address, TelecomManager.PRESENTATION_ALLOWED)
        connection.setCallerDisplayName(
            request.extras.getString("callerID"),
            TelecomManager.PRESENTATION_ALLOWED
        )
        connection.setRinging() // Incoming calls start out in the ringing state

        return connection
    }

    override fun onCreateOutgoingConnection(
        connectionManagerPhoneAccount: PhoneAccountHandle,
        request: ConnectionRequest
    ): Connection {
        // Create a connection for the outgoing call
        val connection: Connection = object : Connection() {}
        connection.setAddress(request.address, TelecomManager.PRESENTATION_ALLOWED)
        connection.setActive()
        return connection
    }
}</code></pre><p>Add the service to the <code>AndroidManifest.xml</code></p><pre><code class="language-xml">&lt;service
    android:name=".Services.CallConnectionService"
    android:enabled="true"
    android:exported="true"
    android:permission="android.permission.BIND_TELECOM_CONNECTION_SERVICE"&gt;
    &lt;intent-filter&gt;
        &lt;action android:name="android.telecom.ConnectionService" /&gt;
    &lt;/intent-filter&gt;
&lt;/service&gt;</code></pre><p>Now, to display the default Android call UI, we'll create another service called <code>MyInCallService</code>.</p><pre><code class="language-java">package live.videosdk.connectionservice.connectionservice_quickstart.Services

import android.Manifest
import android.content.pm.PackageManager
import android.telecom.Call
import android.telecom.InCallService
import android.telecom.TelecomManager
import androidx.core.app.ActivityCompat

class MyInCallService : InCallService() {
    override fun onCallAdded(call: Call) {
        super.onCallAdded(call)
        call.registerCallback(object : Call.Callback() {
            override fun onStateChanged(call: Call, state: Int) {
                super.onStateChanged(call, state)
                if (state == Call.STATE_ACTIVE) {
                    // Handle the active call state
                }
            }
        })
        // Bring up the default UI for managing the call
        setUpDefaultCallUI(call)
    }

    override fun onCallRemoved(call: Call) {
        super.onCallRemoved(call)
        // Clean up call-related resources
    }

    private fun setUpDefaultCallUI(call: Call) {
        // Bring up the default in-call UI; READ_PHONE_STATE is required
        // before calling showInCallScreen
        val telecomManager = getSystemService(TELECOM_SERVICE) as TelecomManager
        if (ActivityCompat.checkSelfPermission(
                this,
                Manifest.permission.READ_PHONE_STATE
            ) != PackageManager.PERMISSION_GRANTED
        ) {
            return
        }
        telecomManager.showInCallScreen(true)
    }
}</code></pre><p>Add the service to <code>AndroidManifest.xml</code>.</p><pre><code class="language-xml">&lt;service
  android:name=".Services.MyInCallService"
  android:exported="true"
  android:permission="android.permission.BIND_INCALL_SERVICE"&gt;
  &lt;intent-filter&gt;
      &lt;action android:name="android.telecom.InCallService" /&gt;
  &lt;/intent-filter&gt;
&lt;/service&gt;</code></pre><p/><blockquote>Wow!! You just implemented the calling feature, which works like a charm.</blockquote><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/1000000167--1--1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Android (Kotlin) using Firebase and VideoSDK" loading="lazy" width="224" height="500"/></figure><h2 id="videosdk-integration-for-videocall">VideoSDK Integration for VideoCall</h2><ol><li>The first step in integrating the VideoSDK is to <em>initialize</em> VideoSDK.</li></ol><p>MainApplication.kt</p><pre><code class="language-java">class MainApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        VideoSDK.initialize(applicationContext)
    }
}</code></pre><p>When the call is answered, an intent transitions from the call screen to the meeting screen, passing the <code>meetingId</code> and VideoSDK <code>token</code> along with it.</p><pre><code class="language-java">class CallConnectionService : ConnectionService() {
    
    override fun onCreateIncomingConnection(
        connectionManagerPhoneAccount: PhoneAccountHandle,
        request: ConnectionRequest
    ): Connection {
        // Create a connection for the incoming call
        val connection: Connection = object : Connection() {
            override fun onAnswer() {
                super.onAnswer()
                
                val intent = Intent(
                    applicationContext,
                    MeetingActivity::class.java
                )
                intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
                intent.putExtra("meetingId", meetingId)
                intent.putExtra("token", token)
                startActivity(intent)
            }
        }
            //...
    }
}</code></pre><p>Next, we will add the meeting layout, which shows the control buttons and the participants view in <code>MeetingActivity</code>.</p><pre><code class="language-xml">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center"
    android:orientation="vertical"
    tools:context=".MeetingActivity"&gt;

    &lt;TextView
        android:id="@+id/tvMeetingId"
        style="@style/TextAppearance.AppCompat.Display1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Meeting Id" /&gt;

    &lt;androidx.recyclerview.widget.RecyclerView
        android:id="@+id/rvParticipants"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" /&gt;

    &lt;LinearLayout
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"&gt;

        &lt;Button
            android:id="@+id/btnMic"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginVertical="8dp"
            android:text="Mic"/&gt;

        &lt;Button
            android:id="@+id/btnLeave"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginVertical="8dp"
            android:layout_marginHorizontal="8dp"
            android:text="Leave"/&gt;

        &lt;Button
            android:id="@+id/btnWebcam"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginVertical="8dp"
            android:text="Webcam" /&gt;

    &lt;/LinearLayout&gt;


&lt;/LinearLayout&gt;</code></pre><p>Here is the logic for the <code>MeetingActivity</code>:</p><pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
    // declare the variables we will be using to handle the meeting
    private var meeting: Meeting? = null
    private var micEnabled = true
    private var webcamEnabled = true

    private lateinit var rvParticipants: RecyclerView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_meeting)

        // REQUESTED_PERMISSIONS, PERMISSION_REQ_ID and this helper come from the
        // quickstart's permission utilities (not shown here)
        checkSelfPermission(REQUESTED_PERMISSIONS[0], PERMISSION_REQ_ID)
        checkSelfPermission(REQUESTED_PERMISSIONS[1], PERMISSION_REQ_ID)

        val token = intent.getStringExtra("token")
        val meetingId = intent.getStringExtra("meetingId")
        val participantName = "John Doe"

        // 1. Configure VideoSDK with the token
        VideoSDK.config(token)
        // 2. Initialize VideoSDK Meeting
        meeting = VideoSDK.initMeeting(
            this@MeetingActivity, meetingId, participantName,
            micEnabled, webcamEnabled, null, null, true, null, null
        )

        // 3. Add an event listener for upcoming meeting events
        meeting!!.addEventListener(meetingEventListener)

        // 4. Join the VideoSDK meeting
        meeting!!.join()

        (findViewById&lt;View&gt;(R.id.tvMeetingId) as TextView).text = meetingId

        // actions
        setActionListeners()

        rvParticipants = findViewById(R.id.rvParticipants)
        rvParticipants.layoutManager = GridLayoutManager(this, 2)
        rvParticipants.adapter = ParticipantAdapter(meeting!!)
    }

    // creating the MeetingEventListener
    private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
        override fun onMeetingJoined() {
            Log.d("#meeting", "onMeetingJoined()")
        }

        override fun onMeetingLeft() {
            Log.d("#meeting", "onMeetingLeft()")
            meeting = null
            if (!isDestroyed) finish()
        }

        override fun onParticipantJoined(participant: Participant) {
            Toast.makeText(
                this@MeetingActivity,
                participant.displayName + " joined",
                Toast.LENGTH_SHORT
            ).show()
        }

        override fun onParticipantLeft(participant: Participant) {
            Toast.makeText(
                this@MeetingActivity,
                participant.displayName + " left",
                Toast.LENGTH_SHORT
            ).show()
        }
    }

    private fun setActionListeners() {
        // toggle mic
        findViewById&lt;View&gt;(R.id.btnMic).setOnClickListener {
            if (micEnabled) {
                // this will mute the local participant's mic
                meeting!!.muteMic()
                Toast.makeText(
                    this@MeetingActivity,
                    "Mic Disabled",
                    Toast.LENGTH_SHORT
                ).show()
            } else {
                // this will unmute the local participant's mic
                meeting!!.unmuteMic()
                Toast.makeText(
                    this@MeetingActivity,
                    "Mic Enabled",
                    Toast.LENGTH_SHORT
                ).show()
            }
            micEnabled = !micEnabled
        }

        // toggle webcam
        findViewById&lt;View&gt;(R.id.btnWebcam).setOnClickListener {
            if (webcamEnabled) {
                // this will disable the local participant webcam
                meeting!!.disableWebcam()
                Toast.makeText(
                    this@MeetingActivity,
                    "Webcam Disabled",
                    Toast.LENGTH_SHORT
                ).show()
            } else {
                // this will enable the local participant webcam
                meeting!!.enableWebcam()
                Toast.makeText(
                    this@MeetingActivity,
                    "Webcam Enabled",
                    Toast.LENGTH_SHORT
                ).show()
            }
            webcamEnabled = !webcamEnabled
        }

        // leave meeting
        findViewById&lt;View&gt;(R.id.btnLeave).setOnClickListener {
            // this will make the local participant leave the meeting
            meeting!!.leave()
        }
    }

    override fun onDestroy() {
        rvParticipants.adapter = null
        super.onDestroy()
    }

    private fun checkSelfPermission(permission: String, requestCode: Int): Boolean {
        if (ContextCompat.checkSelfPermission(this, permission) !=
            PackageManager.PERMISSION_GRANTED
        ) {
            ActivityCompat.requestPermissions(this, REQUESTED_PERMISSIONS, requestCode)
            return false
        }
        return true
    }

    companion object {
        private const val PERMISSION_REQ_ID = 22

        private val REQUESTED_PERMISSIONS = arrayOf(
            Manifest.permission.RECORD_AUDIO,
            Manifest.permission.CAMERA
        )
    }
}</code></pre><p>Here, we display the participants in a <code>RecyclerView</code>. To implement this, you'll need the following <code>RecyclerView</code> adapter components:</p><p>a. <code>item_remote_peer</code>: the UI for each participant</p><pre><code class="language-xml">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="200dp"
    android:background="@color/cardview_dark_background"
    tools:layout_height="200dp"&gt;

    &lt;live.videosdk.rtc.android.VideoView
        android:id="@+id/participantView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:visibility="gone" /&gt;

    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom"
        android:background="#99000000"
        android:orientation="horizontal"&gt;

        &lt;TextView
            android:id="@+id/tvName"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:gravity="center"
            android:padding="4dp"
            android:textColor="@color/white" /&gt;

    &lt;/LinearLayout&gt;

&lt;/FrameLayout&gt;</code></pre><p>b. <code>ParticipantAdapter</code>: Responsible for displaying video call participants in a <code>RecyclerView</code>. It manages participant join/leave events, shows video streams, and updates the view when a participant's video starts or stops.</p><pre><code class="language-java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

    private final List&lt;Participant&gt; participants = new ArrayList&lt;&gt;();

    public ParticipantAdapter(Meeting meeting) {
        // adding the local participant(You) to the list
        participants.add(meeting.getLocalParticipant());

        // adding Meeting Event listener to get the participant join/leave event in the meeting.
        meeting.addEventListener(new MeetingEventListener() {
            @Override
            public void onParticipantJoined(Participant participant) {
                // add participant to the list
                participants.add(participant);
                notifyItemInserted(participants.size() - 1);
            }

            @Override
            public void onParticipantLeft(Participant participant) {
                int pos = -1;
                for (int i = 0; i &lt; participants.size(); i++) {
                    if (participants.get(i).getId().equals(participant.getId())) {
                        pos = i;
                        break;
                    }
                }
                // remove the participant found by ID (object equality may not match)
                if (pos &gt;= 0) {
                    participants.remove(pos);
                    notifyItemRemoved(pos);
                }
            }
        });
    }

    @NonNull
    @Override
    public PeerViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
        return new PeerViewHolder(LayoutInflater.from(parent.getContext()).inflate(R.layout.item_remote_peer, parent, false));
    }

    @Override
    public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
        Participant participant = participants.get(position);

        holder.tvName.setText(participant.getDisplayName());

        // adding the initial video stream for the participant into the 'VideoView'
        for (Map.Entry&lt;String, Stream&gt; entry : participant.getStreams().entrySet()) {
            Stream stream = entry.getValue();
            if (stream.getKind().equalsIgnoreCase("video")) {
                holder.participantView.setVisibility(View.VISIBLE);
                VideoTrack videoTrack = (VideoTrack) stream.getTrack();
                holder.participantView.addTrack(videoTrack);
                break;
            }
        }
        // add Listener to the participant which will update start or stop the video stream of that participant
        participant.addEventListener(new ParticipantEventListener() {
            @Override
            public void onStreamEnabled(Stream stream) {
                if (stream.getKind().equalsIgnoreCase("video")) {
                    holder.participantView.setVisibility(View.VISIBLE);
                    VideoTrack videoTrack = (VideoTrack) stream.getTrack();
                    holder.participantView.addTrack(videoTrack);
                }
            }

            @Override
            public void onStreamDisabled(Stream stream) {
                if (stream.getKind().equalsIgnoreCase("video")) {
                    holder.participantView.removeTrack();
                    holder.participantView.setVisibility(View.GONE);
                }
            }
        });
    }

    @Override
    public int getItemCount() {
        return participants.size();
    }

    static class PeerViewHolder extends RecyclerView.ViewHolder {
        // 'VideoView' to show the participant's video stream
        public VideoView participantView;
        public TextView tvName;

        PeerViewHolder(@NonNull View view) {
            super(view);
            // itemView is already provided by RecyclerView.ViewHolder
            tvName = view.findViewById(R.id.tvName);
            participantView = view.findViewById(R.id.participantView);
        }
    }

    @Override
    public void onViewRecycled(@NonNull PeerViewHolder holder) {
        holder.participantView.releaseSurfaceViewRenderer();
        super.onViewRecycled(holder);
    }
}</code></pre><p>The meeting is now set up on the callee's side; next, set it up on the caller's side as well.</p><p>When the callee accepts the call, the <code>update-call</code> function on the server is invoked, which triggers a silent notification to the caller.</p><pre><code class="language-kotlin">class NetworkCallHandler {

    fun updateCall(callUpdate: String) {
        val fcmToken: String = MyFirebaseMessagingService.FCMtoken

        val callerInfo: MutableMap&lt;String, String&gt; = HashMap()
        val callUpdateBody: MutableMap&lt;String, Any&gt; = HashMap()

        // caller info
        callerInfo["callerId"] = myCallId
        callerInfo["token"] = fcmToken
        // call update body
        callUpdateBody["callerInfo"] = callerInfo
        callUpdateBody["type"] = callUpdate

        val apiService = ApiClient.client!!.create(ApiService::class.java)
        val call : Call&lt;String&gt; = apiService.updateCall(callUpdateBody)
        call.enqueue(object :Callback&lt;String&gt;{
            override fun onFailure(call: Call&lt;String&gt;, t: Throwable) {
                Log.d("TAG", "onFailure: "+ t.message)
            }

            override fun onResponse(call: Call&lt;String&gt;, response: Response&lt;String&gt;) {
                Log.d("TAG", "Call updated successfully: " + response.body())
            }
        })
    }
}</code></pre><p>When the caller's phone receives this notification, the app either starts the meeting (if the call was accepted) or displays a Toast indicating the rejection.</p><pre><code class="language-kotlin">class MyFirebaseMessagingService : FirebaseMessagingService() {
    
    @RequiresApi(Build.VERSION_CODES.O)
    override fun onMessageReceived(remoteMessage: RemoteMessage) {
        val data = remoteMessage.data

        if (data.isNotEmpty()) {
            try {
                val payload = JSONObject(data["info"]!!)

                val callerInfo = payload.getJSONObject("callerInfo")
                callerID = callerInfo.getString("callerId")
                FCMtoken = callerInfo.getString("token")

                if (payload.has("videoSDKInfo")) {
                    val videoSdkInfo = payload.getJSONObject("videoSDKInfo")
                    meetingId = videoSdkInfo.getString("meetingId")
                    token = videoSdkInfo.getString("token")
                    handleIncomingCall(callerID)
                }

                val type = payload.getString("type")
                when (type) {
                    "ACCEPTED" -&gt; startMeeting()
                    "REJECTED" -&gt; {
                        showIncomingCallNotification(callerID)
                        Handler(Looper.getMainLooper()).post {
                            Toast.makeText(applicationContext, "CALL REJECTED FROM CALLER ID: $callerID", Toast.LENGTH_SHORT).show()
                        }
                    }
                }

            } catch (e: JSONException) {
                throw RuntimeException(e)
            }
        } else {
            Log.d(TAG, "onMessageReceived: No data found in the notification payload.")
        }
    }

    private fun startMeeting() {
        val intent = Intent(applicationContext, MeetingActivity::class.java).apply {
            addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
            putExtra("meetingId", MainApplication.meetingId)
            putExtra("token", MainApplication.token)
        }
        startActivity(intent)
    }
}</code></pre><p>Here is how the video call will look with two participants:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/Screenshot_20241023-172447--1--1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Android (Kotlin) using Firebase and VideoSDK" loading="lazy" width="224" height="500"/></figure><p/><blockquote>Hurray!!! With these steps, our video calling feature is complete. Here’s a video demonstrating how it looks.</blockquote>
<!--kg-card-begin: html-->
<video width="900" height="500" controls="">
  <source src="https://cdn.videosdk.live/website-resources/docs-resources/android_call_keep_demo.mp4" type="video/mp4" />
</video>
<!--kg-card-end: html-->
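The accept/reject handling in <code>MyFirebaseMessagingService</code> above boils down to a small decision function. Here is a minimal, Android-free Kotlin sketch of that dispatch logic; note that <code>CallAction</code> and <code>handleCallUpdate</code> are illustrative names for this sketch, not part of VideoSDK or the app's code.

```kotlin
// Hedged sketch: models the `when (type)` branch in onMessageReceived without
// any Android dependencies, so the dispatch logic can be unit-tested.
sealed class CallAction {
    // "ACCEPTED": start the meeting with the VideoSDK credentials from the payload
    data class StartMeeting(val meetingId: String, val token: String) : CallAction()
    // "REJECTED": surface a rejection message for this caller ID
    data class ShowRejected(val callerId: String) : CallAction()
    // anything else: ignore the notification
    object Ignore : CallAction()
}

fun handleCallUpdate(
    type: String,
    callerId: String,
    meetingId: String,
    token: String
): CallAction = when (type) {
    "ACCEPTED" -> CallAction.StartMeeting(meetingId, token)
    "REJECTED" -> CallAction.ShowRejected(callerId)
    else -> CallAction.Ignore
}
```

Keeping the decision pure like this makes it easy to verify both branches before wiring them to <code>startMeeting()</code> and the Toast.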
<h2 id="conclusion">Conclusion</h2><p>With this, we've successfully built the Android video calling app using the Telecom framework, VideoSDK, and Firebase. For additional features like chat messaging and screen sharing, feel free to refer to our <a href="https://docs.videosdk.live" rel="noreferrer">documentation</a>. If you encounter any issues with the implementation, don’t hesitate to reach out to us through our <a href="https://discord.gg/Gpmj6eCq5u" rel="noreferrer">Discord community</a>.</p><blockquote>You can clone the GitHub repo with the complete source code <a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-call-trigger-example" rel="noreferrer">here</a>.</blockquote>]]></content:encoded></item><item><title><![CDATA[Build a Video Calling App with Call Trigger in Android (Java) using Firebase and VideoSDK]]></title><description><![CDATA[In this tutorial, you’ll learn how to build a native Android (Java) video calling app with call trigger using Firebase and VideoSDK.]]></description><link>https://www.videosdk.live/blog/call-trigger-in-android-java</link><guid isPermaLink="false">6718efb9c646c7d24e60ea27</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 27 Nov 2024 07:39:57 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/11/clone_build_android_java.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/clone_build_android_java.png" alt="Build a Video Calling App with Call Trigger in Android (Java) using Firebase and VideoSDK"/><p>In a world where connectivity through audio and video calls is essential, a video-calling app with call-trigger functionality is a natural thing to build. If that's what you're planning, you've come to the right place.</p><p>In this tutorial, we will build a comprehensive video calling app for Android that enables smooth call handling and high-quality video 
communication. We’ll utilize the Telecom Framework to manage call functionality, Firebase for real-time data synchronization, and VideoSDK to deliver clear, reliable video conferencing.</p><p>Take a moment to watch the video demonstration and review <a href="https://github.com/videosdk-live/videosdk-rtc-android-java-call-trigger-example" rel="noreferrer">the complete code</a> for the sample app to see exactly what you'll be building in this blog.</p><h2 id="understanding-the-telecom-framework">Understanding the Telecom Framework</h2><p>Before we get started, let’s take a closer look at the Telecom Framework. This framework manages both audio and video calls on Android devices, supporting traditional SIM-based calls as well as VoIP calls via the ConnectionService API.</p><p>The major components that Telecom manages are <code>ConnectionService</code> and <code>InCallService</code>:</p><ul><li><code>ConnectionService</code> handles the technical aspects of call connections, managing states, and audio/video routing.</li><li><code>InCallService</code> manages the user interface, allowing users to see and interact with ongoing calls.</li></ul><p>Understanding how the app will function internally before building it will make the development process smoother.</p><h2 id="app-functionality-overview">App Functionality Overview</h2><p>To understand how the app functions, consider the following scenario: John wants to call his friend Max. John opens the app, enters Max's caller ID, and presses "Call." Max sees an incoming call UI on his device, with options to accept or reject the call. 
If he accepts, a video call is established between them using VideoSDK.</p><h3 id="steps-in-the-process">Steps in the Process:</h3><ol><li><strong>User Action</strong>: John enters Max's Caller ID and initiates the call.</li><li><strong>Database and Notification</strong>: The app maps the ID in the Firebase database and sends a notification to Max's device.</li><li><strong>Incoming Call UI</strong>: Max’s device receives the notification, triggering the incoming call UI using the Telecom Framework.</li><li><strong>Call Connection</strong>: If Max accepts, the video call begins using VideoSDK.</li></ol><p>Here is a pictorial representation of the flow for better understanding.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/connectionServiceFlow-1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Android (Java) using Firebase and VideoSDK" loading="lazy" width="2330" height="890"/></figure><p>Now that we have established the flow of the app and how it functions, let's get started with development!</p><h2 id="core-functionality-of-the-app">Core Functionality of the App</h2><p>The app relies on several key libraries to manage video calling and notifications:</p><ul><li><strong>Android Telecom Framework</strong>: Manages call routing and interaction with the system UI for incoming and outgoing calls.</li><li><strong>Retrofit</strong>: Used for sending and receiving API requests, including call initiation and status updates.</li><li><strong>Firebase Cloud Messaging (FCM)</strong>: Handles push notifications to trigger call events.</li><li><strong>Firebase Realtime Database</strong>: Stores user tokens and caller IDs for establishing video calls.</li><li><strong>VideoSDK</strong>: Manages the actual video conferencing features.</li></ul><h2 id="prerequisites">Prerequisites</h2><ul><li>Android Studio (for Android app development)</li><li>Firebase Project (for 
notifications and database management)</li><li>VideoSDK Account (for video conferencing functionality)</li><li>Node.js and Firebase Tools (for backend development)</li></ul><p>Make sure you have a basic understanding of Android development, Retrofit, and Firebase Cloud Messaging.</p><p>Now that we've covered the prerequisites, let's dive into building the app.</p><h2 id="android-app-setup">Android App Setup</h2><h3 id="add-dependencies">Add Dependencies</h3><ol><li>In your <code>settings.gradle</code>, add the JitPack and Maven repositories:</li></ol><pre><code class="language-gradle">dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        mavenCentral()
        maven { url 'https://www.jitpack.io' }
        maven { url "https://maven.aliyun.com/repository/jcenter" }
    }
}</code></pre><ol start="2"><li>In your <code>build.gradle</code> file, add the following dependencies:</li></ol><pre><code class="language-gradle">   //VideoSdk
    implementation 'live.videosdk:rtc-android-sdk:0.1.35'
    implementation 'com.nabinbhandari.android:permissions:3.8'
    implementation 'com.amitshekhar.android:android-networking:1.0.2'

    //Firebase
    implementation 'com.google.firebase:firebase-messaging:23.0.0'
    implementation platform('com.google.firebase:firebase-bom:33.4.0')
    implementation 'com.google.firebase:firebase-analytics'

    //Retrofit
    implementation 'com.squareup.retrofit2:retrofit:2.9.0'
    implementation 'com.squareup.retrofit2:converter-gson:2.9.0'
    implementation 'com.squareup.retrofit2:converter-scalars:2.9.0'</code></pre><h3 id="set-permissions-in-androidmanifestxml">Set Permissions in <code>AndroidManifest.xml</code></h3><p>Ensure the following permissions are configured:</p><pre><code class="language-xml">    &lt;uses-feature
        android:name="android.hardware.camera"
        android:required="false" /&gt;
    &lt;uses-feature
        android:name="android.hardware.telephony"
        android:required="false" /&gt;

    &lt;uses-permission android:name="android.permission.ANSWER_PHONE_CALLS" /&gt;
    &lt;uses-permission android:name="android.permission.POST_NOTIFICATIONS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.READ_PHONE_STATE" /&gt;</code></pre><h2 id="how-to-configure-firebase-for-notifications-and-realtime-database"><br>How to Configure Firebase for Notifications and Realtime Database</br></h2><h3 id=""/><h3 id="a-firebase-setup-for-notifications-realtime-database">[a] Firebase Setup for Notifications &amp; Realtime Database</h3><p><strong>Step 1: Add Firebase to Your Android App</strong></p><ul><li>Go to the Firebase Console and create a new project.</li><li>Download the <code>google-services.json</code> file and place it in your project’s <code>app/</code> directory</li></ul><p><strong>Step 2: Add Firebase Dependencies</strong></p><ul><li>In your <code>build.gradle</code> files, add the necessary Firebase dependencies:</li></ul><pre><code class="language-gradle">// Project-level build.gradle
classpath 'com.google.gms:google-services:4.3.10'

// App-level build.gradle
apply plugin: 'com.google.gms.google-services'

implementation 'com.google.firebase:firebase-messaging:23.0.0'
implementation 'com.google.firebase:firebase-database:20.0.0'
</code></pre><p><strong>Step 3: Enable Firebase Messaging and Real-time Database</strong></p><ul><li>Enable Firebase Cloud Messaging (FCM) and Realtime Database in the Firebase console under your project settings.</li></ul><p><strong>Step 4: Firebase Service Configuration</strong></p><p>Ensure your app is registered in Firebase, and implement a <code>FirebaseMessagingService</code> to handle notifications, which we will do later.</p><h3 id="b-firebase-server-side-setup-serviceaccountjson-file">[b] Firebase Server-Side Setup (serviceAccount.json file)</h3><p>To set up Firebase Admin SDK for your server, follow these steps:</p><ol><li>Go to the <a href="https://console.firebase.google.com/" rel="noreferrer">Firebase Console</a> and select your project.</li><li>In <strong>Project Settings</strong>, navigate to the <strong>Service Accounts </strong>tab.</li><li>Click on <strong>Generate a new private key</strong> to download your service account JSON file. This file will be named something like <code>&lt;your-project&gt;-firebase-adminsdk-&lt;unique-id&gt;.json</code>.</li></ol><h3 id="project-structure">Project Structure</h3><pre><code class="language-Project Structure">Root/
├── app/
│   ├── src/
│   │   ├── main/
│   │   │   ├── java/
│   │   │   │   ├── FirebaseDatabase/
│   │   │   │   │   └── DatabaseUtils.java
│   │   │   │   ├── Meeting/
│   │   │   │   │   ├── MeetingActivity.java
│   │   │   │   │   └── ParticipantAdapter.java
│   │   │   │   ├── Network/
│   │   │   │   │   ├── ApiClient.java
│   │   │   │   │   ├── ApiService.java
│   │   │   │   │   ├── NetworkCallhandler.java
│   │   │   │   │   └── NetworkUtils.java
│   │   │   │   ├── Services/
│   │   │   │   │   ├── CallConnectionService.java
│   │   │   │   │   ├── MyFirebaseMessagingService.java
│   │   │   │   │   └── MyInCallService.java
│   │   │   │   ├── MainActivity.java
│   │   │   │   ├── MainApplication.java
│   │   │   │   └── MeetingIdCallBack.java
│   │   ├── res/
│   │   │   └── layout/
│   │   │       ├── activity_main.xml
│   │   │       ├── activity_meeting.xml
│   │   │       └── item_remote_peer.xml
├── build.gradle
└── settings.gradle
</code></pre><p>Let's get started with the basic UI of the call-initiating screen.</p><h2 id="ui-development">UI Development</h2><p>The <strong>MainActivity layout</strong> provides the initial screen where users can initiate video calls. The key components include:</p><ul><li><strong>Caller ID Input</strong>: An <code>EditText</code> for entering the unique caller ID to initiate a call.</li><li><strong>Call Button</strong>: A button to trigger the call.</li><li><strong>Unique ID Display</strong>: Displays a unique ID for each user.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-xml"> &lt;LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:orientation="vertical"
    android:background="?android:attr/windowBackground"&gt;

    &lt;com.google.android.material.appbar.MaterialToolbar
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        app:title="VideoSDK CallKit Example"
        app:titleTextColor="@color/white"
        android:background="?attr/colorPrimaryDark"/&gt;

    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:orientation="vertical"
        android:gravity="center"
        android:padding="24dp"
        android:layout_weight="1"&gt;

   &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:background="@android:drawable/editbox_background_normal"
        android:orientation="vertical"
        android:layout_marginBottom="50dp"
        android:backgroundTint="#E7E1E1"&gt;

        &lt;TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:text="Your Caller ID"
            android:textSize="23sp"
            android:textColor="@color/black"
            android:paddingLeft="12dp"
            android:paddingRight="12dp"
            android:fontFamily="sans-serif-medium"
            android:gravity="center"
            android:elevation="4dp"
            android:layout_margin="8dp"
            android:clipToOutline="true"
            /&gt;

        &lt;View
            android:layout_width="match_parent"
            android:layout_height="1dp"
            android:background="@color/black"
            /&gt;

        &lt;LinearLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:orientation="horizontal"
            android:gravity="center"
            &gt;
            &lt;TextView
                android:id="@+id/txt_callId"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="Call Id"
                android:textSize="32sp"
                android:textColor="@color/black"
                android:paddingLeft="12dp"
                android:paddingRight="12dp"
                android:fontFamily="sans-serif-medium"
                android:gravity="center"
                android:layout_margin="8dp"
                android:textIsSelectable="true"
                android:clipToOutline="true"
                /&gt;

            &lt;ImageView
                android:id="@+id/copyIcon"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:src="@drawable/baseline_content_copy_24"
                android:padding="8dp"
                /&gt;
        &lt;/LinearLayout&gt;
    &lt;/LinearLayout&gt;


    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:background="@android:drawable/editbox_background_normal"
        android:orientation="vertical"
        android:gravity="center"
        android:backgroundTint="#E7E1E1"
        android:padding="14dp"
        &gt;
        &lt;TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:text="Enter call ID of user you want to call"
            android:textSize="23sp"
            android:textColor="@color/black"
            android:fontFamily="sans-serif-medium"
            android:elevation="4dp"
            android:layout_marginBottom="20dp"
            /&gt;

        &lt;EditText
            android:inputType="number"
            android:id="@+id/caller_id_input"
            android:layout_width="match_parent"
            android:layout_height="64dp"
            android:hint="Enter Caller ID"
            android:layout_marginBottom="10dp"
            android:background="@android:drawable/editbox_background_normal"
            android:textSize="18sp"
            android:textColor="@android:color/black"
            android:gravity="center"/&gt;

        &lt;Button
            android:id="@+id/call_button"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:text="Call"
            android:padding="16dp"
            android:layout_marginTop="20dp"
            android:textSize="18sp"
            android:textColor="@android:color/white"
            /&gt;

    &lt;/LinearLayout&gt;

    &lt;/LinearLayout&gt;
&lt;/LinearLayout&gt;
</code></pre><figcaption><p><span style="white-space: pre-wrap;">activity_main.xml</span></p></figcaption></figure><p>This is how the UI will look:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/1000000166--1-.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Android (Java) using Firebase and VideoSDK" loading="lazy" width="200" height="445"/></figure><p>With the UI for calling in place let's start with the actual calling development.</p><h3 id="firebase-messaging-for-call-initiation">Firebase Messaging for Call Initiation</h3><ol><li>To initiate the call process, we first need to secure user permission to manage notifications and calls.</li></ol><pre><code class="language-java">public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        checkSelfPermission(REQUESTED_PERMISSIONS[0], PERMISSION_REQ_ID);
        if(Build.VERSION.SDK_INT &gt;= Build.VERSION_CODES.TIRAMISU)
        {
            checkSelfPermission(REQUESTED_PERMISSIONS[1], PERMISSION_REQ_ID);
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        if (requestCode == PERMISSION_REQ_ID) {
            boolean allPermissionsGranted = true;
            for (int result : grantResults) {
                if (result != PackageManager.PERMISSION_GRANTED) {
                    allPermissionsGranted = false;
                    break;
                }
            }

            if (allPermissionsGranted) {
                registerPhoneAccount();
            } else {
                Toast.makeText(this, "Permissions are required for call management", Toast.LENGTH_LONG).show();
            }
        }
    }

    private static final int PERMISSION_REQ_ID = 22;

    private static final String[] REQUESTED_PERMISSIONS = new String[]{
            Manifest.permission.READ_PHONE_STATE,
            Manifest.permission.POST_NOTIFICATIONS
    };

    private boolean checkSelfPermission(String permission, int requestCode) {
        if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this, REQUESTED_PERMISSIONS, requestCode);
            return false;
        }
        return true;
    }
}</code></pre><ol start="2"><li>Once permission is granted, the next step is to identify each user and retrieve their messaging token, enabling us to send notifications effectively.</li></ol><blockquote>Don't worry if you encounter an error due to a missing file. Please continue following the steps, as the required file will be provided later in the guide.</blockquote><pre><code class="language-java">public class MainActivity extends AppCompatActivity {

    private EditText callerIdInput;
    private TextView myId;
    ImageView copyIcon;
    String myCallId = String.valueOf(10000000 + new Random().nextInt(90000000));

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        checkSelfPermission(REQUESTED_PERMISSIONS[0], PERMISSION_REQ_ID);
        if(Build.VERSION.SDK_INT &gt;= Build.VERSION_CODES.TIRAMISU)
        {
            checkSelfPermission(REQUESTED_PERMISSIONS[1], PERMISSION_REQ_ID);
        }
        myId = findViewById(R.id.txt_callId);
        callerIdInput = findViewById(R.id.caller_id_input);

        copyIcon = findViewById(R.id.copyIcon);
        Button callButton = findViewById(R.id.call_button);
        myId.setText(myCallId);

        ClipboardManager clipboardManager = (ClipboardManager) getSystemService(Context.CLIPBOARD_SERVICE);
        ClipData clipData = ClipData.newPlainText("copied text", myCallId);

        copyIcon.setOnClickListener(v -&gt; {
            clipboardManager.setPrimaryClip(clipData);
            Toast.makeText(this, "Copied to clipboard", Toast.LENGTH_SHORT).show();
        });

        NetworkUtils initMeeting = new NetworkUtils();

        initMeeting.createMeeting(new MeetingIdCallBack() {
            @Override
            public void onMeetingIdReceived(String meetingId, String token) {
                MainApplication.setMeetingId(meetingId);
            }
        });

        NetworkCallHandler.myCallId = myCallId;
        DatabaseUtils.myCallId = myCallId;

        //Firebase Notification
        NotificationChannel channel = new NotificationChannel("notification_channel", "notification_channel", NotificationManager.IMPORTANCE_DEFAULT);
        NotificationManager manager = getSystemService(NotificationManager.class);
        manager.createNotificationChannel(channel);
        FirebaseMessaging.getInstance().subscribeToTopic("general")
                .addOnCompleteListener(new OnCompleteListener&lt;Void&gt;() {
                    @Override
                    public void onComplete(@NonNull Task&lt;Void&gt; task) {
                        String msg = "Subscribed Successfully";
                        if (!task.isSuccessful()) {
                            msg = "Subscription failed";
                        }
                        Toast.makeText(MainActivity.this, msg, Toast.LENGTH_SHORT).show();
                    }
                });

       //Firebase Database Actions
        DatabaseUtils databaseUtils = new DatabaseUtils();

        DatabaseReference databaseReference = FirebaseDatabase.getInstance().getReference();
        FirebaseMessaging.getInstance().getToken().addOnCompleteListener( task -&gt; {
            NetworkCallHandler.FcmToken = task.getResult();
            DatabaseUtils.FcmToken= task.getResult();
            databaseUtils.sendUserDataToFirebase(databaseReference);
        });

        callButton.setOnClickListener(v -&gt; {
            String callerNumber = callerIdInput.getText().toString();

            if (callerNumber.length() == 8) {
                databaseUtils.retrieveUserData(databaseReference, callerNumber);
            } else {
                Toast.makeText(MainActivity.this, "Please input the correct caller ID", Toast.LENGTH_SHORT).show();
            }
        });
    }
}</code></pre><p>Check if the FirebaseMessaging token is already present in the database. If it exists, update the caller ID; otherwise, create a new entry in the database.</p><pre><code class="language-java">public class DatabaseUtils {

    String calleeInfoToken ;
    public static String FcmToken ;
    public static String myCallId;

    public void sendUserDataToFirebase(DatabaseReference databaseReference) {

        DatabaseReference usersRef = databaseReference.child("User");

        usersRef.orderByChild("token").equalTo(FcmToken).addListenerForSingleValueEvent(new ValueEventListener() {
            @Override
            public void onDataChange(@NonNull DataSnapshot dataSnapshot) {
                if (dataSnapshot.exists()) {
                    // Token exists, update the callerId
                    for (DataSnapshot userSnapshot : dataSnapshot.getChildren()) {
                        userSnapshot.getRef().child("callerId").setValue(myCallId)
                                .addOnSuccessListener(aVoid -&gt; {
                                    Log.d("FirebaseData", "CallerId successfully updated.");
                                })
                                .addOnFailureListener(e -&gt; {
                                    Log.e("FirebaseError", "Failed to update callerId.", e);
                                });
                    }
                } else {
                    // Token doesn't exist, create new entry
                    String userId = usersRef.push().getKey();
                    Map&lt;String, Object&gt; map = new HashMap&lt;&gt;();
                    map.put("callerId", myCallId);
                    map.put("token", FcmToken);

                    if (userId != null) {
                        usersRef.child(userId).setValue(map)
                                .addOnSuccessListener(aVoid -&gt; {
                                    Log.d("FirebaseData", "Data successfully saved.");
                                })
                                .addOnFailureListener(e -&gt; {
                                    Log.e("FirebaseError", "Failed to save data.", e);
                                });
                    }
                }
            }
            @Override
            public void onCancelled(@NonNull DatabaseError databaseError) {
                Log.e("FirebaseError", "Error checking for existing token", databaseError.toException());
            }
        });
    }
}</code></pre><p>When the call is initiated, first verify if the caller ID exists in the Firebase database. If it does, proceed to invoke the notification method.</p><pre><code class="language-java">public class DatabaseUtils {

 //...
    public void retrieveUserData(DatabaseReference databaseReference, String callerNumber) {
    
        NetworkCallHandler callHandler = new NetworkCallHandler();
        databaseReference.child("User").orderByChild("callerId").equalTo(callerNumber).addListenerForSingleValueEvent(new ValueEventListener() {
            @Override
            public void onDataChange(@NonNull DataSnapshot snapshot) {
                if (snapshot.exists()) {
                    for (DataSnapshot data : snapshot.getChildren()) {
                        String token = data.child("token").getValue(String.class);
                        if (token != null) {
                            calleeInfoToken = token;
                            NetworkCallHandler.calleeInfoToken = token;
                            callHandler.initiateCall();
                            break;
                        }
                    }
                } else {
                    Log.d("TAG", "retrieveUserData: No matching callerId found");
                }
            }
            @Override
            public void onCancelled(@NonNull DatabaseError error) {
                Log.e("FirebaseError", "Failed to read data from Firebase", error.toException());
            }
        });
    }
}</code></pre><p>We'll configure the VideoSDK token and Meeting ID as soon as the home screen loads, ensuring they're ready when the user initiates a call.</p><pre><code class="language-java">public class NetworkUtils {
    
    String sampleToken = MainApplication.getToken();

    public void createMeeting(MeetingIdCallBack callBack) {
        // Make an API call to the VideoSDK server to get a roomId
        AndroidNetworking.post("https://api.videosdk.live/v2/rooms")
                .addHeaders("Authorization", sampleToken) // pass the token in the headers
                .build()
                .getAsJSONObject(new JSONObjectRequestListener() {
                    @Override
                    public void onResponse(JSONObject response) {
                        try {
                            // the response contains the `roomId`
                            final String meetingId = response.getString("roomId");
                            callBack.onMeetingIdReceived(meetingId, sampleToken);
                        } catch (JSONException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onError(ANError anError) {
                        anError.printStackTrace();
                    }
                });
    }
}</code></pre><p>The <code>MeetingIdCallBack</code> interface allows us to receive the <code>meetingId</code> and <code>token</code> in our MainActivity.</p><pre><code class="language-java">public interface MeetingIdCallBack {
    void onMeetingIdReceived(String meetingId,String token);
}</code></pre><p>Use the <code>onMeetingIdReceived</code> callback to receive the <code>meetingId</code> and <code>token</code> in <code>MainActivity</code>.</p><pre><code class="language-java">public class MainActivity extends AppCompatActivity {

  @Override
  protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      //...
          NetworkUtils networkUtils = new NetworkUtils();
          networkUtils.createMeeting(new MeetingIdCallBack() {
          @Override
          public void onMeetingIdReceived(String meetingId, String token) {
              MainApplication.setMeetingId(meetingId);
          }
      });
  }
}</code></pre><ol start="3"><li>The next step is to initiate the call.</li></ol><p>For this, we’ll set up an Express server with two APIs as Firebase Functions: one to trigger a notification on the other device, and another to update the call status (accepted or rejected).</p><p>Start by importing and initializing the required packages in <code>server.js</code>. We'll also need to initialize the Firebase Admin SDK.</p><pre><code class="language-javascript">const functions = require("firebase-functions");
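// Assumed setup: these packages have been installed with
//   npm install express cors morgan firebase-admin uuid firebase-functions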
const express = require("express");
const cors = require("cors");
const morgan = require("morgan");
var admin = require("firebase-admin");
const { v4: uuidv4 } = require("uuid");

const app = express();
app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(morgan("dev"));

// Path to your service account key file for Firebase Admin SDK
var serviceAccount = require("add_path_here");

// Initialize Firebase Admin SDK
admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "database_url" // Replace with your database URL
});

// Home Route
app.get("/", (req, res) =&gt; {
  res.send("Hello World!");
});

// Start the Express server (local development only; when deployed,
// Firebase invokes the exported function and no listener is needed)
app.listen(9000, () =&gt; {
  console.log(`API server listening at http://localhost:9000`);
});

// Export app as a Firebase Cloud Function
exports.app = functions.https.onRequest(app);</code></pre><ul><li>The first API we need is <code>initiate-call</code>, which sends a notification to the receiving user and starts the call by passing along the caller information and VideoSDK room details.</li></ul><pre><code class="language-javascript">// Initiate call notification (for Android)
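// Expected request body (shape inferred from the Android client code in
// NetworkCallHandler; values are illustrative):
// {
//   "callerInfo":   { "callerId": "12345678", "token": "&lt;caller FCM token&gt;" },
//   "calleeInfo":   { "token": "&lt;callee FCM token&gt;" },
//   "videoSDKInfo": { "meetingId": "&lt;roomId&gt;", "token": "&lt;VideoSDK token&gt;" }
// }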
app.post("/initiate-call", (req, res) =&gt; {
  const { calleeInfo, callerInfo, videoSDKInfo } = req.body;

  var FCMtoken = calleeInfo.token;
    const info = JSON.stringify({
      callerInfo,
      videoSDKInfo,
      type: "CALL_INITIATED",
    });
    var message = {
      data: {
        info,
      },
      android: {
        priority: "high",
      },
      token: FCMtoken,
    };
    
  // Send the FCM message using firebase-admin
  admin.messaging().send(message)
    .then((response) =&gt; {
      console.log("Successfully sent FCM message:", response);
      res.status(200).send(response);
    })
    .catch((error) =&gt; {
      console.log("Error sending FCM message:", error);
      res.status(400).send("Error sending FCM message: " + error);
    });
});</code></pre><ul><li>The second API we need is <code>update-call</code>, which updates the status of the incoming call (such as accepted, rejected, etc.) and sends a notification to the caller.</li></ul><pre><code class="language-javascript">// Update call notification (for Android)
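// Expected request body (shape inferred from the handler below; "name" is
// only used in the notification text and is assumed optional):
// {
//   "callerInfo": { "callerId": "12345678", "token": "&lt;caller FCM token&gt;", "name": "Alice" },
//   "type": "ACCEPTED" // or "REJECTED"
// }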
app.post("/update-call", (req, res) =&gt; {
  const { callerInfo, type } = req.body;
  const info = JSON.stringify({
    callerInfo,
    type,
  });

  var message = {
    data: { info },
    token: callerInfo.token, // Token for the target device
    android: {
      priority: "high",
      notification: {
        title: "Call Updated",
        body: "Your call has been updated by " + callerInfo.name,
      },
    },
  };

  // Send the update message through firebase-admin
  admin.messaging().send(message)
    .then((response) =&gt; {
      console.log("Successfully updated call:", response);
      res.status(200).send(response);
    })
    .catch((error) =&gt; {
      console.log("Error updating call:", error);
      res.status(400).send("Error updating call: " + error);
    });
});</code></pre><p>4. Now that the APIs are created, we will trigger them from the app.</p><p>Here, <code>FCM_SERVER_URL</code> needs to be replaced with the URL of your Firebase Functions deployment.</p><p>You can either deploy the server or run it locally with <code>node server.js</code>.</p><p>If you're running the server locally, use your machine's IP address (not <code>localhost</code>) as the base URL so a physical device can reach it.</p><pre><code class="language-java">public class ApiClient {
    private static final String BASE_URL = "FCM_SERVER_URL";
    private static Retrofit retrofit = null;

    public static Retrofit getClient() {
        if (retrofit == null) {
            retrofit = new Retrofit.Builder()
                    .baseUrl(BASE_URL)
                    // ScalarsConverterFactory handles plain text responses
                    .addConverterFactory(ScalarsConverterFactory.create())
                    // GsonConverterFactory handles JSON responses
                    .addConverterFactory(GsonConverterFactory.create())
                    .build();
        }
        return retrofit;
    }
}</code></pre><p>Now, let's define endpoints for API Calls.</p><pre><code class="language-java">public interface ApiService {

    @POST("/initiate-call")
    Call&lt;String&gt; initiateCall(@Body Map&lt;String, Object&gt; callRequestBody);

    @POST("/update-call")
    Call&lt;String&gt; updateCall(@Body Map&lt;String,Object&gt; callUpdateBody);
}</code></pre><p>The <code>initiateCall</code> method sends the caller, callee, and VideoSDK information to the server, then handles the response and logs success or failure.</p><pre><code class="language-java">public class NetworkCallHandler {

    static ApiService apiService = ApiClient.getClient().create(ApiService.class);
    public static String myCallId;
    public static String FcmToken;
    public static String calleeInfoToken;

    public void initiateCall() {

        Map&lt;String,String&gt; callerInfo = new HashMap&lt;&gt;();
        Map&lt;String,String&gt; calleeInfo = new HashMap&lt;&gt;();
        Map &lt;String,String&gt; videoSDKInfo= new HashMap&lt;&gt;();

        //callerInfo
        callerInfo.put("callerId",myCallId);
        callerInfo.put("token",FcmToken);

        //calleeInfo
        calleeInfo.put("token",calleeInfoToken);

        //videoSDKInfo
        videoSDKInfo.put("meetingId", MainApplication.getMeetingId());
        videoSDKInfo.put("token",MainApplication.getToken());

        Map&lt;String,Object&gt; callRequestBody = new HashMap&lt;&gt;();
        callRequestBody.put("callerInfo",callerInfo);
        callRequestBody.put("calleeInfo",calleeInfo);
        callRequestBody.put("videoSDKInfo",videoSDKInfo);

        Call&lt;String&gt; call = apiService.initiateCall(callRequestBody);
        call.enqueue(new Callback&lt;String&gt;() {
            @Override
            public void onResponse(@NonNull Call&lt;String&gt; call, @NonNull Response&lt;String&gt; response) {
                if (response.isSuccessful()) {
                    Log.d("API", "Call initiated: " + response.body());
                } else {
                    Log.e("API", "Failed to initiate call: " + response.message());
                }
            }

            @Override
            public void onFailure(@NonNull Call&lt;String&gt; call, @NonNull Throwable t) {
                Log.e("API", "API call failed: " + t.getMessage());
            }
        });
    }
}</code></pre><p>5. Sending the notification is now configured. Next, we need to trigger the call when the notification is received.</p><p>Extract the call details from the notification payload; we'll use them to join the meeting.</p><pre><code class="language-java">public class MyFirebaseMessagingService extends FirebaseMessagingService {
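    // The FCM data payload carries a single "info" key whose value is a JSON
    // string, e.g. (illustrative, matching what the server sends):
    // {"callerInfo":{"callerId":"...","token":"..."},
    //  "videoSDKInfo":{"meetingId":"...","token":"..."},"type":"CALL_INITIATED"}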

    private static final String TAG = "FCMService";
    private static final String CHANNEL_ID = "notification_channel";

    String callerID;
    String meetingId ;
    String token ;
    public static String FCMtoken;

    @Override
    public void onNewToken(@NonNull String token) {
        super.onNewToken(token);
    }

    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {

        // Handle incoming message (call request)
        Map&lt;String, String&gt; data = remoteMessage.getData();

        if (!data.isEmpty()) {
            try {
                JSONObject object = new JSONObject(data.get("info"));
                JSONObject callerInfo = object.getJSONObject("callerInfo");
                callerID = callerInfo.getString("callerId");
                FCMtoken  =  callerInfo.getString("token");
                if (object.has("videoSDKInfo")) {
                    JSONObject videoSdkInfo = object.getJSONObject("videoSDKInfo");
                    meetingId = videoSdkInfo.getString("meetingId");
                    token = videoSdkInfo.getString("token");
                    handleIncomingCall(callerID);
                }
                String type = object.getString("type");

                if(type.equals("ACCEPTED")){
                    startMeeting();
                } else if (type.equals("REJECTED")) {
                    showIncomingCallNotification(callerID);
                    new Handler(Looper.getMainLooper()).post(new Runnable() {
                        @Override
                        public void run() {
                            Toast toast = Toast.makeText(getApplicationContext(), "CALL REJECTED FROM CALLER ID: " + callerID, Toast.LENGTH_SHORT);
                            toast.show();
                        }
                    });
                }
            } catch (JSONException e) {
                throw new RuntimeException(e);
            }
        } else {
            Log.d(TAG, "onMessageReceived: No data found in the notification payload.");
        }
    }

    private void startMeeting() {
        Intent intent = new Intent(getApplicationContext(), MeetingActivity.class);
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        intent.putExtra("meetingId", MainApplication.getMeetingId());
        intent.putExtra("token", MainApplication.getToken());
        startActivity(intent);
    }

    private void handleIncomingCall(String callerId) {

        // Create a bundle to pass call details
        Bundle extras = new Bundle();
        Uri uri = Uri.fromParts("tel", callerId, null);
        extras.putParcelable(TelecomManager.EXTRA_INCOMING_CALL_ADDRESS, uri);
        extras.putString("meetingId", meetingId);
        extras.putString("token", token);
        extras.putString("callerID",callerId);

        try {
            MainApplication.telecomManager.addNewIncomingCall(MainApplication.phoneAccountHandle, extras);
        } catch (Throwable cause) {
            Log.e("handleIncomingCall", "error in addNewIncomingCall ", cause.getCause());
        }
    }
    
    private void showIncomingCallNotification(String callerId) {
        createNotificationChannel();

        Intent intent = new Intent(this, MainActivity.class);

        PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent, PendingIntent.FLAG_MUTABLE);

        Notification notification = new NotificationCompat.Builder(this, CHANNEL_ID)
                .setSmallIcon(R.drawable.baseline_call_24)
                .setContentTitle("Call REJECTED ")
                .setContentText("Call from " + callerId)
                .setPriority(NotificationCompat.PRIORITY_HIGH)
                .setFullScreenIntent(pendingIntent, true)
                .setAutoCancel(true)
                .setContentIntent(pendingIntent)
                .setCategory(NotificationCompat.CATEGORY_CALL)
                .build();

        NotificationManager notificationManager = (NotificationManager) getSystemService(NOTIFICATION_SERVICE);
        notificationManager.notify(1, notification);
    }

    private void createNotificationChannel() {
        NotificationChannel channel = new NotificationChannel(CHANNEL_ID, "Incoming Calls", NotificationManager.IMPORTANCE_HIGH);
        NotificationManager notificationManager = getSystemService(NotificationManager.class);
        notificationManager.createNotificationChannel(channel);
    }
}</code></pre><p>Add <code>MyFirebaseMessagingService</code> to <code>AndroidManifest.xml</code>:</p><pre><code class="language-xml">&lt;service
  android:name=".Services.MyFirebaseMessagingService"
  android:enabled="true"
  android:exported="false"&gt;
  &lt;intent-filter&gt;
      &lt;action android:name="com.google.firebase.MESSAGING_EVENT" /&gt;
  &lt;/intent-filter&gt;
&lt;/service&gt;</code></pre><p>Now, receiving the notification should trigger a call.</p><p>To achieve this, we need to register a <code>PhoneAccountHandle</code> with the <code>TelecomManager</code> and create a <code>CallConnectionService</code> to manage and handle calls.</p><p>Initialize the <code>TelecomManager</code> and <code>PhoneAccountHandle</code> in the <code>MainApplication</code> class to make them accessible throughout the application.</p><p>Also, to start the call with VideoSDK, you need to add your VideoSDK token to the application class. You can obtain this token from the <a href="https://app.videosdk.live/dashboard/5PDQH85UT27D" rel="noopener">VideoSDK Dashboard</a>.</p><pre><code class="language-java">public class MainApplication extends Application {

    public static TelecomManager telecomManager;
    public static PhoneAccountHandle phoneAccountHandle;
    static String meetingId;
    static String token="VideoSDK token";
    public static void setMeetingId(String meetingId) {
        MainApplication.meetingId = meetingId;
    }
    public  static String getMeetingId(){
        return meetingId;
    }
    public static String getToken() {
        return token;
    }
    @Override
    public void onCreate() {
        super.onCreate();
        VideoSDK.initialize(getApplicationContext());

        telecomManager = (TelecomManager) getSystemService(TELECOM_SERVICE);
        ComponentName componentName = new ComponentName(this, CallConnectionService.class);
        phoneAccountHandle = new PhoneAccountHandle(componentName, "myAccountId");
    }
}</code></pre><p>Now, in <code>MainActivity</code>, we'll register the <code>PhoneAccount</code> and check for permission to manage calls.</p><pre><code class="language-java">public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        //telecom Api
        registerPhoneAccount();
    }

    private void registerPhoneAccount() {

        PhoneAccount phoneAccount = PhoneAccount.builder(MainApplication.phoneAccountHandle, "VideoSDK")
                .setCapabilities(PhoneAccount.CAPABILITY_CALL_PROVIDER)
                .build();

        MainApplication.telecomManager.registerPhoneAccount(phoneAccount);

        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.READ_PHONE_STATE) != PackageManager.PERMISSION_GRANTED) {
            return;
        }

        // Check whether our phone account is already enabled as call-capable
        boolean accountEnabled = false;
        List&lt;PhoneAccountHandle&gt; list = MainApplication.telecomManager.getCallCapablePhoneAccounts();
        for (PhoneAccountHandle handle : list) {
            if (handle.getComponentName().getClassName().equals("live.videosdk.ConnectionService.quickstart.Services.CallConnectionService")) {
                accountEnabled = true;
                break;
            }
        }

        if (!accountEnabled) {
            Intent intent = new Intent(TelecomManager.ACTION_CHANGE_PHONE_ACCOUNTS);
            startActivity(intent);
        }
    }
}</code></pre><p>Next, to manage VoIP calls, we’ll create a new service <code>CallConnectionService</code> that extends <code>ConnectionService</code>. This service will handle the technical aspects of call connections, such as managing call states and routing audio/video.</p><pre><code class="language-java">public class CallConnectionService extends ConnectionService {
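    // Flow summary: the Telecom framework invokes onCreateIncomingConnection()
    // after addNewIncomingCall() is called from MyFirebaseMessagingService; the
    // system call UI then drives onAnswer() / onReject() on the Connection.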

String callerID;

    @Override
    public Connection onCreateIncomingConnection(PhoneAccountHandle connectionManagerPhoneAccount, ConnectionRequest request) {
        // Create a connection for the incoming call
        Connection connection = new Connection() {
            @Override
            public void onAnswer() {
                super.onAnswer();
               //getting videosdk info
                Bundle extras = request.getExtras();
                String meetingId = extras.getString("meetingId");
                String token = extras.getString("token");
                callerID = extras.getString("callerID");

                // Start the meeting activity with the extracted data
                Intent intent = new Intent(getApplicationContext(), MeetingActivity.class);
                intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                intent.putExtra("meetingId", meetingId);
                intent.putExtra("token", token);
                startActivity(intent);
                NetworkCallHandler.updateCall("ACCEPTED");

                //update
                setDisconnected(new DisconnectCause(DisconnectCause.LOCAL));
                destroy();
            }

            @Override
            public void onReject() {
                super.onReject();
                Intent intent = new Intent(getApplicationContext(), MainActivity.class);
                intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                startActivity(intent);
                //update
                NetworkCallHandler.updateCall("REJECTED");

                setDisconnected(new DisconnectCause(DisconnectCause.LOCAL));
                destroy();
            }
        };

        // Set call address and caller display name (read callerID from the
        // request extras; the field is not yet populated when this method runs)
        connection.setAddress(request.getAddress(), TelecomManager.PRESENTATION_ALLOWED);
        callerID = request.getExtras().getString("callerID");
        connection.setCallerDisplayName(callerID, TelecomManager.PRESENTATION_ALLOWED);
        connection.setInitializing();  // Indicates that the call is being set up
        connection.setActive();  // Activate the call

        return connection;
    }

    @Override
    public Connection onCreateOutgoingConnection(PhoneAccountHandle connectionManagerPhoneAccount, ConnectionRequest request) {
        // Create a connection for the outgoing call
        Connection connection = new Connection(){};
        connection.setAddress(request.getAddress(), TelecomManager.PRESENTATION_ALLOWED);
        connection.setActive();
        return connection;
    }
}</code></pre><p>Add the service to <code>AndroidManifest.xml</code>:</p><pre><code class="language-xml">&lt;service
    android:name=".Services.CallConnectionService"
    android:enabled="true"
    android:exported="true"
    android:permission="android.permission.BIND_TELECOM_CONNECTION_SERVICE"&gt;
    &lt;intent-filter&gt;
        &lt;action android:name="android.telecom.ConnectionService" /&gt;
    &lt;/intent-filter&gt;
&lt;/service&gt;</code></pre><p>Now, to display the default Android call UI, we'll create another service called <code>MyInCallService</code>.</p><pre><code class="language-java">public class MyInCallService extends InCallService {

    @Override
    public void onCallAdded(Call call) {
        super.onCallAdded(call);
        call.registerCallback(new Call.Callback() {
            @Override
            public void onStateChanged(Call call, int state) {
                super.onStateChanged(call, state);
                if (state == Call.STATE_ACTIVE) {
                    // Handle the active call state
                }
            }
        });

        // Bring up the default UI for managing the call
        setUpDefaultCallUI(call);
    }

    @Override
    public void onCallRemoved(Call call) {
        super.onCallRemoved(call);
        // Clean up call-related resources
    }

    private void setUpDefaultCallUI(Call call) {
        // Start the default in-call UI
        TelecomManager telecomManager = (TelecomManager) getSystemService(TELECOM_SERVICE);
        if (telecomManager != null) {
            if (ActivityCompat.checkSelfPermission(this, android.Manifest.permission.READ_PHONE_STATE) != PackageManager.PERMISSION_GRANTED) {
                return;
            }
            telecomManager.showInCallScreen(true);
        }
    }
}</code></pre><p>Add the service to <code>AndroidManifest.xml</code>:</p><pre><code class="language-xml">&lt;service
  android:name=".Services.MyInCallService"
  android:exported="true"
  android:permission="android.permission.BIND_INCALL_SERVICE"&gt;
  &lt;intent-filter&gt;
      &lt;action android:name="android.telecom.InCallService" /&gt;
  &lt;/intent-filter&gt;
&lt;/service&gt;</code></pre><blockquote>Wow!! You just implemented the calling feature, which works like a charm.</blockquote><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/1000000167--1--1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Android (Java) using Firebase and VideoSDK" loading="lazy" width="224" height="500"/></figure><h2 id="videosdk-integration-for-videocall">VideoSDK Integration for Video Calls</h2><ol><li>The first step in integrating VideoSDK is to <em>initialize</em> it.</li></ol><p>MainApplication.java</p><pre><code class="language-java">public class MainApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        VideoSDK.initialize(getApplicationContext());
        //...
    }
}</code></pre><p>When the call is received, the intent is used to transition from the call screen to the meeting screen, passing the <code>meetingId</code> and <code>videoSDK token</code> along with it.</p><pre><code class="language-java">public class CallConnectionService extends ConnectionService {

String callerID;
// meetingId and token are assumed to be populated from the incoming
// notification payload before the call is answered
String meetingId;
String token;

    @Override
    public Connection onCreateIncomingConnection(PhoneAccountHandle connectionManagerPhoneAccount, ConnectionRequest request) {
        // Create a connection for the incoming call
        Connection connection = new Connection() {
            @Override
            public void onAnswer() {
                super.onAnswer();
                //...
                Intent intent = new Intent(getApplicationContext(), MeetingActivity.class);
                intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                intent.putExtra("meetingId", meetingId);
                intent.putExtra("token", token);
                startActivity(intent);
   
            }
        };
        return connection;
    }
}</code></pre><p>Next, we will add our MeetingView which will show the buttons and Participants View in the <code>MeetingActivity</code>.</p><pre><code class="language-xml">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center"
    android:orientation="vertical"
    tools:context=".MeetingActivity"&gt;

    &lt;TextView
        android:id="@+id/tvMeetingId"
        style="@style/TextAppearance.AppCompat.Display1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Meeting Id" /&gt;

    &lt;androidx.recyclerview.widget.RecyclerView
        android:id="@+id/rvParticipants"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" /&gt;

    &lt;LinearLayout
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"&gt;

        &lt;Button
            android:id="@+id/btnMic"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginVertical="8dp"
            android:text="Mic"/&gt;

        &lt;Button
            android:id="@+id/btnLeave"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginVertical="8dp"
            android:layout_marginHorizontal="8dp"
            android:text="Leave"/&gt;

        &lt;Button
            android:id="@+id/btnWebcam"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginVertical="8dp"
            android:text="Webcam" /&gt;

    &lt;/LinearLayout&gt;


&lt;/LinearLayout&gt;</code></pre><p>Here is the logic for the&nbsp;<code>MeetingActivity</code></p><pre><code class="language-java">public class MeetingActivity extends AppCompatActivity {

    private static final int PERMISSION_REQ_ID = 22;

    private static final String[] REQUESTED_PERMISSIONS = {
            android.Manifest.permission.RECORD_AUDIO,
            Manifest.permission.CAMERA
    };

    // declare the variables we will be using to handle the meeting
    private Meeting meeting;
    private boolean micEnabled = true;
    private boolean webcamEnabled = true;

    private RecyclerView rvParticipants;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_meeting);

        checkSelfPermission(REQUESTED_PERMISSIONS[0], PERMISSION_REQ_ID);
        checkSelfPermission(REQUESTED_PERMISSIONS[1], PERMISSION_REQ_ID);

        final String token = getIntent().getStringExtra("token");
        final String meetingId = getIntent().getStringExtra("meetingId");
        final String participantName = "John Doe";

        // 1. Configure VideoSDK with the token
        VideoSDK.config(token);
        // 2. Initialize VideoSDK Meeting
        meeting = VideoSDK.initMeeting(
                MeetingActivity.this, meetingId, participantName,
                micEnabled, webcamEnabled,null, null, true,null, null);

        // 3. Add an event listener to receive meeting events
        meeting.addEventListener(meetingEventListener);

        // 4. Join the VideoSDK meeting
        meeting.join();

        ((TextView)findViewById(R.id.tvMeetingId)).setText(meetingId);

        // actions
        setActionListeners();

        rvParticipants = findViewById(R.id.rvParticipants);
        rvParticipants.setLayoutManager(new GridLayoutManager(this, 2));
        rvParticipants.setAdapter(new ParticipantAdapter(meeting));
    }

    // creating the MeetingEventListener
    private final MeetingEventListener meetingEventListener = new MeetingEventListener() {
        @Override
        public void onMeetingJoined() {
            Log.d("#meeting", "onMeetingJoined()");
        }

        @Override
        public void onMeetingLeft() {
            Log.d("#meeting", "onMeetingLeft()");
            meeting = null;
            if (!isDestroyed()) finish();
        }

        @Override
        public void onParticipantJoined(Participant participant) {
            Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " joined", Toast.LENGTH_SHORT).show();
        }

        @Override
        public void onParticipantLeft(Participant participant) {
            Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " left", Toast.LENGTH_SHORT).show();
        }
    };

    private void setActionListeners() {
        // toggle mic
        findViewById(R.id.btnMic).setOnClickListener(view -&gt; {
            if (micEnabled) {
                // this will mute the local participant's mic
                meeting.muteMic();
                Toast.makeText(MeetingActivity.this, "Mic Disabled", Toast.LENGTH_SHORT).show();
            } else {
                // this will unmute the local participant's mic
                meeting.unmuteMic();
                Toast.makeText(MeetingActivity.this, "Mic Enabled", Toast.LENGTH_SHORT).show();
            }
            micEnabled=!micEnabled;
        });

        // toggle webcam
        findViewById(R.id.btnWebcam).setOnClickListener(view -&gt; {
            if (webcamEnabled) {
                // this will disable the local participant webcam
                meeting.disableWebcam();
                Toast.makeText(MeetingActivity.this, "Webcam Disabled", Toast.LENGTH_SHORT).show();
            } else {
                // this will enable the local participant webcam
                meeting.enableWebcam();
                Toast.makeText(MeetingActivity.this, "Webcam Enabled", Toast.LENGTH_SHORT).show();
            }
            webcamEnabled=!webcamEnabled;
        });

        // leave meeting
        findViewById(R.id.btnLeave).setOnClickListener(view -&gt; {
            // this will make the local participant leave the meeting
            meeting.leave();
        });
    }

    @Override
    protected void onDestroy() {
        rvParticipants.setAdapter(null);
        super.onDestroy();
    }

    private boolean checkSelfPermission(String permission, int requestCode) {
        if (ContextCompat.checkSelfPermission(this, permission) !=
                PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this, REQUESTED_PERMISSIONS, requestCode);
            return false;
        }
        return true;
    }
}</code></pre><p>Here, we display the participants in a <code>RecyclerView</code>. To implement this, you'll need to use the following <code>RecyclerView</code> Adapter components:</p><p>a. <code>item_remote_peer</code> : UI for each participant</p><pre><code class="language-xml">&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="200dp"
    android:background="@color/cardview_dark_background"
    tools:layout_height="200dp"&gt;

    &lt;live.videosdk.rtc.android.VideoView
        android:id="@+id/participantView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:visibility="gone" /&gt;

    &lt;LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom"
        android:background="#99000000"
        android:orientation="horizontal"&gt;

        &lt;TextView
            android:id="@+id/tvName"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:gravity="center"
            android:padding="4dp"
            android:textColor="@color/white" /&gt;

    &lt;/LinearLayout&gt;

&lt;/FrameLayout&gt;</code></pre><p>b.  <code>ParticipantAdapter</code>: Responsible for displaying video call participants in a <code>RecyclerView</code>. It manages participant join/leave events, shows video streams, and updates the view when a participant's video starts or stops.</p><pre><code class="language-java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

    private final List&lt;Participant&gt; participants = new ArrayList&lt;&gt;();

    public ParticipantAdapter(Meeting meeting) {
        // adding the local participant(You) to the list
        participants.add(meeting.getLocalParticipant());

        // adding Meeting Event listener to get the participant join/leave event in the meeting.
        meeting.addEventListener(new MeetingEventListener() {
            @Override
            public void onParticipantJoined(Participant participant) {
                // add participant to the list
                participants.add(participant);
                notifyItemInserted(participants.size() - 1);
            }

            @Override
            public void onParticipantLeft(Participant participant) {
                int pos = -1;
                for (int i = 0; i &lt; participants.size(); i++) {
                    if (participants.get(i).getId().equals(participant.getId())) {
                        pos = i;
                        break;
                    }
                }
                // remove participant from the list
                participants.remove(participant);

                if (pos &gt;= 0) {
                    notifyItemRemoved(pos);
                }
            }
        });
    }

    @NonNull
    @Override
    public PeerViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
        return new PeerViewHolder(LayoutInflater.from(parent.getContext()).inflate(R.layout.item_remote_peer, parent, false));
    }

    @Override
    public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
        Participant participant = participants.get(position);

        holder.tvName.setText(participant.getDisplayName());

        // adding the initial video stream for the participant into the 'VideoView'
        for (Map.Entry&lt;String, Stream&gt; entry : participant.getStreams().entrySet()) {
            Stream stream = entry.getValue();
            if (stream.getKind().equalsIgnoreCase("video")) {
                holder.participantView.setVisibility(View.VISIBLE);
                VideoTrack videoTrack = (VideoTrack) stream.getTrack();
                holder.participantView.addTrack(videoTrack);
                break;
            }
        }
        // add a listener to update the participant's view when their video stream starts or stops
        participant.addEventListener(new ParticipantEventListener() {
            @Override
            public void onStreamEnabled(Stream stream) {
                if (stream.getKind().equalsIgnoreCase("video")) {
                    holder.participantView.setVisibility(View.VISIBLE);
                    VideoTrack videoTrack = (VideoTrack) stream.getTrack();
                    holder.participantView.addTrack(videoTrack);
                }
            }

            @Override
            public void onStreamDisabled(Stream stream) {
                if (stream.getKind().equalsIgnoreCase("video")) {
                    holder.participantView.removeTrack();
                    holder.participantView.setVisibility(View.GONE);
                }
            }
        });
    }

    @Override
    public int getItemCount() {
        return participants.size();
    }

    static class PeerViewHolder extends RecyclerView.ViewHolder {
        // 'VideoView' to show Video Stream
        public VideoView participantView;
        public TextView tvName;
        public View itemView;

        PeerViewHolder(@NonNull View view) {
            super(view);
            itemView = view;
            tvName = view.findViewById(R.id.tvName);
            participantView = view.findViewById(R.id.participantView);
        }
    }

    @Override
    public void onViewRecycled(@NonNull PeerViewHolder holder) {
        holder.participantView.releaseSurfaceViewRenderer();
        super.onViewRecycled(holder);
    }
}</code></pre><p>The meeting is set up on the callee's side, and now you need to set up the meeting on the caller's side as well.</p><p>When the callee accepts the call, the <code>update-call</code> function on the server is called, which triggers a silent notification to the caller.</p><pre><code class="language-java">public  class NetworkCallHandler {

    // myCallId and apiService (a Retrofit service) are assumed to be
    // initialized elsewhere in the app
    public static void updateCall(String call_update) {

        String fcmToken = MyFirebaseMessagingService.FCMtoken;

        Map&lt;String, String&gt; callerInfo = new HashMap&lt;&gt;();
        Map&lt;String, Object&gt; callUpdateBody = new HashMap&lt;&gt;();

        //callerInfo
        callerInfo.put("callerId",myCallId);
        callerInfo.put("token",fcmToken);

        //CallUpdateBody
        callUpdateBody.put("callerInfo",callerInfo);
        callUpdateBody.put("type",call_update);

        Call&lt;String&gt; call = apiService.updateCall(callUpdateBody);
        call.enqueue(new Callback&lt;String&gt;() {
            @Override
            public void onResponse(@NonNull Call&lt;String&gt; call, @NonNull Response&lt;String&gt; response) {
                if (response.isSuccessful()) {
                    Log.d("API", "Call updated successfully: " + response.body());
                }
            }

            @Override
            public void onFailure(@NonNull Call&lt;String&gt; call, @NonNull Throwable t) {
                Log.e("API", "Call update failed", t);
            }
        });
    }
}</code></pre><p>When the caller's phone receives this notification, it either starts the meeting upon acceptance or displays a Toast message indicating rejection.</p><pre><code class="language-java"> public class MyFirebaseMessagingService extends FirebaseMessagingService {

    // callerID, FCMtoken, meetingId, token, and TAG are assumed to be
    // static fields declared on this service
    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {

        // Handle incoming message (call request)
        Map&lt;String, String&gt; data = remoteMessage.getData();

        if (!data.isEmpty()) {
            try {
                JSONObject object = new JSONObject(data.get("info"));
                JSONObject callerInfo = object.getJSONObject("callerInfo");
                callerID = callerInfo.getString("callerId");
                FCMtoken  =  callerInfo.getString("token");
                if (object.has("videoSDKInfo")) {
                    JSONObject videoSdkInfo = object.getJSONObject("videoSDKInfo");
                    meetingId = videoSdkInfo.getString("meetingId");
                    token = videoSdkInfo.getString("token");
                    handleIncomingCall(callerID);
                }
                String type = (String) object.get("type");

                if(type.equals("ACCEPTED")){
                    startMeeting();
                } else if (type.equals("REJECTED")) {
                    showIncomingCallNotification(callerID);
                    new Handler(Looper.getMainLooper()).post(new Runnable() {
                        @Override
                        public void run() {
                            Toast toast = Toast.makeText(getApplicationContext(), "CALL REJECTED FROM CALLER ID: " + callerID, Toast.LENGTH_SHORT);
                            toast.show();
                        }
                    });
                }
            } catch (JSONException e) {
                throw new RuntimeException(e);
            }
        } else {
            Log.d(TAG, "onMessageReceived: No data found in the notification payload.");
        }
    }
}</code></pre><p>Here is how the video call will look with two participants:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/Screenshot_20241023-172447--1--1.png" class="kg-image" alt="Build a Video Calling App with Call Trigger in Android (Java) using Firebase and VideoSDK" loading="lazy" width="224" height="500"/></figure><p/><blockquote>Hurray!!! With these steps, our video calling feature is complete. Here’s a video demonstrating how it looks.</blockquote>
<!--kg-card-begin: html-->
<video width="900" height="500" controls="">
  <source src="https://cdn.videosdk.live/website-resources/docs-resources/android_call_keep_demo.mp4" type="video/mp4" />
</video>
<!--kg-card-end: html-->
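<p>For reference, the silent FCM data payload that drives this flow might look like the following (sent as a JSON-encoded string under the <code>info</code> key). The field names are inferred from the <code>onMessageReceived</code> parsing code above; the values are placeholders:</p><pre><code class="language-json">{
  "info": {
    "callerInfo": {
      "callerId": "1234567890",
      "token": "CALLER_FCM_TOKEN"
    },
    "videoSDKInfo": {
      "meetingId": "abcd-efgh-ijkl",
      "token": "VIDEOSDK_AUTH_TOKEN"
    },
    "type": "ACCEPTED"
  }
}</code></pre>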
<h2 id="conclusion">Conclusion</h2><p>With this, we've successfully built the Android video calling app using the Call Trigger, VideoSDK, and Firebase. For additional features like chat messaging and screen sharing, feel free to refer to our <a href="https://docs.videosdk.live" rel="noreferrer">documentation</a>. If you encounter any issues with the implementation, don’t hesitate to reach out to us through our <a href="https://discord.gg/Gpmj6eCq5u" rel="noreferrer">Discord community</a>.</p><blockquote>Here is the Github repo you can clone to access all the source code <a href="https://github.com/videosdk-live/videosdk-rtc-android-java-call-trigger-example" rel="noreferrer">here</a></blockquote>]]></content:encoded></item><item><title><![CDATA[Build a VoIP Call App with CallKit in iOS]]></title><description><![CDATA[In this tutorial, you’ll learn how to make a iOS video calling app with callkit using the firebase and VideoSDK.]]></description><link>https://www.videosdk.live/blog/ios-voip-calling-app-with-callkit</link><guid isPermaLink="false">672f6ecfc646c7d24e60ed08</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 12 Nov 2024 07:40:09 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/11/call-trigger_ios--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/call-trigger_ios--1-.png" alt="Build a VoIP Call App with CallKit in iOS"/><p>This guide will show you how to build a native <a href="https://www.videosdk.live/developer-hub/webrtc/ios-video-calling-app" rel="noreferrer">iOS video calling app</a> using Swift, integrating CallKit for call management, PushKit for VoIP notifications, Firebase for push notifications, and <a href="https://www.videosdk.live/" rel="noreferrer">VideoSDK</a> for real-time communication.</p><h2 id="app-overview">App Overview</h2><p>Imagine Ted wants to call Robin. 
Ted opens the app, enters Robin’s ID, and hits call. Robin receives an incoming call notification and can either accept or reject it. If she accepts, the app initiates a video call using VideoSDK.</p><p><strong>Key Steps:</strong></p><ol><li>Call Initiation: Ted enters Robin's ID, which maps to Firebase, triggering a push notification to Robin's device.</li><li>Incoming Call UI: Robin’s device receives the notification, and CallKit presents the incoming call UI.</li><li>Call Connection: Upon acceptance, the app connects both users via VideoSDK for a video call.</li></ol><h2 id="core-components">Core Components</h2><ol>
<li><a href="https://developer.apple.com/documentation/callkit/">CallKit</a>
<ul>
<li>Purpose: Manages call actions, such as answering and rejecting calls, by providing a native UI.</li>
<li>Function: Displays the incoming call UI and handles call lifecycle events.</li>
</ul>
</li>
<li><a href="https://developer.apple.com/documentation/pushkit">PushKit</a>
<ul>
<li>Purpose: Handles VoIP notifications, ensuring the app receives call alerts even when inactive.</li>
<li>Function: Wakes the app to handle incoming calls, integrating with CallKit for a seamless experience.</li>
</ul>
</li>
<li><a href="https://firebase.google.com/">Firebase &amp; VoIP Notifications</a>
<ul>
<li>Purpose: Manages push notifications via Firebase and APNs.</li>
<li>Function: Triggers incoming call UI and manages call events.</li>
</ul>
</li>
<li>Node.js Server
<ul>
<li>Purpose: Acts as the backend for call initiation and status updates.</li>
<li>Function: Sends VoIP push notifications and updates call status (accepted/rejected).</li>
</ul>
</li>
</ol>
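<p>To make the CallKit piece concrete, here is a minimal sketch of reporting an incoming call with <code>CXProvider</code>. The handle value and configuration here are placeholders, not the sample app's exact code:</p><pre><code class="language-swift">import CallKit

// Configure the provider that drives the native in-call UI
let configuration = CXProviderConfiguration()
configuration.supportsVideo = true
let provider = CXProvider(configuration: configuration)
// In a real app, also call provider.setDelegate(_:queue:) to handle answer/end actions

// Describe the incoming call
let update = CXCallUpdate()
update.remoteHandle = CXHandle(type: .generic, value: "Ted")
update.hasVideo = true

// Ask the system to present the incoming-call screen
provider.reportNewIncomingCall(with: UUID(), update: update) { error in
    if let error = error {
        print("Failed to report incoming call: \(error)")
    }
}</code></pre>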
<p>If we look at the development requirements, here is what you will need:</p>
<ol>
<li>
<p>Development Environment:</p>
<ul>
<li>Xcode (latest version recommended) for iOS app development</li>
<li>macOS computer to run Xcode</li>
</ul>
</li>
<li>
<p>iOS Device:</p>
<ul>
<li>At least one physical iOS device for testing CallKit (CallKit features cannot be fully tested in simulators)</li>
</ul>
</li>
<li>
<p>Server-side Requirements:</p>
<ul>
<li>Node.js v12 or later</li>
<li>NPM v6 or later (typically included with Node.js)</li>
</ul>
</li>
<li>
<p>Apple Developer Account:</p>
<ul>
<li>Required for provisioning profiles and push notifications</li>
</ul>
</li>
<li>
<p>VideoSDK:</p>
<ul>
<li>Valid VideoSDK Token (obtainable from <a href="https://app.videosdk.live/api-keys">Dashboard &gt; API-Key</a>) <a href="https://www.youtube.com/watch?v=RLOA0U62tOc">Video Tutorial</a></li>
</ul>
</li>
</ol>
<p>Now that we've covered the prerequisites, let's dive into building the app. If you’d like a sneak peek at the final result, watch the video and review <a href="https://github.com/videosdk-live/videosdk-rtc-ios-swiftui-call-trigger-example" rel="noreferrer">the complete code</a> for a sample app.</p><h2 id="project-structure-in-xcode">Project Structure In Xcode</h2><pre><code class="language-md">CallKitSwiftUI
│
├── AppDelegate.swift
├── GoogleService-Info.plist
├── CallKitSwiftUI.entitlements
├── Info.plist // Default
│
├── Model
│   ├── CallStruct.swift
│   ├── InitiateCallInfo.swift
│   └── RoomsStruct.swift
│
├── ViewModel
│   ├── UserData.swift
│   ├── CallKitManager.swift
│   ├── PushNotificationManager.swift
│   └── MeetingViewController.swift
│
├── Views
│   ├── NavigationState.swift
│   ├── CallKitSwiftUIApp.swift // Default
│   ├── JoinView.swift
│   └── CallingView.swift
│   └── MeetingView.swift</code></pre><h2 id="firebase-setup">Firebase Setup</h2><ol>
<li>
<p><strong>Create a Firebase iOS App</strong>: Add your iOS app within the Firebase project.</p>
</li>
<li>
<p><strong>Add <code>GoogleService-info.plist</code></strong>: Download and include the <code>GoogleService-info.plist</code> file in your project.</p>
<p><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-25-1.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"/></p>
</li>
<li>
<p><strong>Integrate Firebase SDK</strong>: Use SPM or CocoaPods to add the Firebase SDK and necessary frameworks to your iOS project.</p>
</li>
</ol>
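<p>Once the Firebase SDK is added, the app delegate can initialize Firebase and register for VoIP pushes in one place. The sketch below is illustrative, assuming the <code>FirebaseCore</code> and <code>PushKit</code> modules; the sample app's actual wiring may differ:</p><pre><code class="language-swift">import UIKit
import PushKit
import FirebaseCore

class AppDelegate: NSObject, UIApplicationDelegate, PKPushRegistryDelegate {
    // Keep a strong reference so the registry stays alive
    private var voipRegistry: PKPushRegistry?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -&gt; Bool {
        // Initialize Firebase from GoogleService-Info.plist
        FirebaseApp.configure()

        // Register for VoIP push notifications
        let registry = PKPushRegistry(queue: .main)
        registry.delegate = self
        registry.desiredPushTypes = [.voIP]
        voipRegistry = registry
        return true
    }

    func pushRegistry(_ registry: PKPushRegistry, didUpdate pushCredentials: PKPushCredentials, for type: PKPushType) {
        // Send this token to your server so it can target this device
        let token = pushCredentials.token.map { String(format: "%02x", $0) }.joined()
        print("VoIP token: \(token)")
    }

    func pushRegistry(_ registry: PKPushRegistry, didReceiveIncomingPushWith payload: PKPushPayload, for type: PKPushType, completion: @escaping () -&gt; Void) {
        // On iOS 13+, you must report a new incoming call to CallKit here
        completion()
    }
}</code></pre>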
<h2 id="register-of-voip-and-apns">Register of VoIP and APNS</h2><p>To enable PushKit notifications in your application, it is essential to acquire the necessary certificates from your Apple Developer Program account and set them up for your iOS VoIP application. These certificates are crucial for registering both your app and the device it operates on with APNs.</p><p><strong>Request a Certificate Using Keychain</strong></p><p>The initial procedure for enabling PushKit functionality within the application involves obtaining a private certificate through the Keychain Access application on a Mac. This certificate establishes a connection to your Apple Developer Program account and is essential for signing iOS VoIP applications that incorporate CallKit support. To generate the certificate, open the Keychain Access application.</p><ol><li>Select&nbsp;Certificate Assistant -&gt; Request a Certificate From a Certificate Authority.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Screenshot-202024-08-06-20at-205.33.25-E2-80-AFPM-1.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="1508" height="669"/></figure><ol start="2"><li>Choose your email, and common name, and click continue.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Screenshot-202024-08-07-20at-2010.57.32-E2-80-AFAM-1.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="1211" height="890"/></figure><ol start="3"><li>Modify the certificate’s name and save it.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Screenshot-202024-08-06-20at-205.38.22-E2-80-AFPM-1.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="1459" height="1028"/></figure><p><strong>Now we have to create the An App iD From Apple 
Developer Account</strong></p><p>This process requires an active Apple Developer Program account. In this segment, the following actions must be undertaken:</p><ul><li>Generate an App ID</li><li>Define the Bundle Identifier</li><li>Activate Push Notifications within the capabilities.</li></ul><p>To proceed with the addition of an App ID, log into your Apple Developer account and adhere to the outlined steps.</p><ol><li>Select Identifiers under the section Certificates, Identifiers &amp; Profiles.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Screenshot-202024-08-07-20at-2011.03.21-E2-80-AFAM.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="2272" height="1586"/></figure><ol start="2"><li>Click the plus (<strong>+</strong>) icon next to&nbsp;<strong>Identifiers</strong>&nbsp;and follow the steps.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Screenshot-202024-08-07-20at-2011.05.41-E2-80-AFAM-1.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="2593" height="456"/></figure><ol start="3"><li>Add the description, specify your bundle ID, check PushKit under Capabilities, and click continue.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Screenshot-202024-08-07-20at-2011.08.18-E2-80-AFAM-1.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="2650" height="915"/></figure><p>The image below shows the finished App ID.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Screenshot-202024-08-07-20at-2011.05.41-E2-80-AFAM-2.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="2593" height="456"/></figure><p><strong>Now We have to Create a New VoIP Services 
Certificate</strong></p><p>Again Head to the Certificates category in your Apple Developer Program account and follow the steps below to add a new certificate.</p><ol><li>Choose the&nbsp;<strong>Certificates</strong>&nbsp;category next to&nbsp;<strong>Identifiers</strong>&nbsp;and click the plus (<strong>+</strong>) to add a new one.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Screenshot-202024-08-08-20at-205.51.27-E2-80-AFPM-1.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="2132" height="1313"/></figure><ol start="2"><li>Check&nbsp;<strong>VoIP Services Certificate</strong>&nbsp;and choose the&nbsp;<strong>App ID</strong>&nbsp;you created in the previous section of this tutorial. Apple recommends using a reverse domain for App IDs.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Screenshot-202024-08-06-20at-206.01.39-E2-80-AFPM.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="1504" height="626"/></figure><ol start="3"><li>Select the private certificate you generated in one of the previous steps using Keychain Access and click continue.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Screenshot-202024-08-06-20at-206.02.42-E2-80-AFPM.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="1525" height="735"/></figure><ol><li>Now,Download the VoIP services certificate provided by your VoIP service provider. 
It will be saved as a .cer file named voip_services.cer.</li><li>Now you have to&nbsp;<strong>Convert .cer to .p12</strong></li></ol><ul><li>Double-click the voip_services.cer file to open it in Keychain Access.</li><li>Locate the certificate titled "VoIP Services: YourProductName".</li><li>Right-click on it and choose the option to export it as a .p12 file.</li><li>You'll be prompted for a password. Create a strong password and remember it.</li><li>Save the .p12 file to a secure location on your Mac.</li></ul><ol start="6"><li>Open a terminal window and navigate to the directory where you saved the .p12 file. Run the following command, replacing YourFileName.p12 with the actual name of your .p12 file:</li></ol><pre><code class="language-swift">openssl pkcs12 -in YourCertificates.p12 -out Certificates.pem -nodes -clcerts -legacy
</code></pre><ul><li>You'll be asked for the password you set earlier.</li><li>A new file named Certificates.pem will be created in the same directory.</li></ul><p><strong>Note:</strong>&nbsp;The bundle ID of your VoIP services will influence the exact certificate name. This .pem file is now ready for use in your push notification implementation.</p><h2 id="pushkit-setup">PushKit Setup</h2><p>PushKit will allow us to send the notifications to the iOS device for which, You must upload an APN Auth Key to implement push notifications. We need the following details about the app when sending push notifications via an APN Auth Key:</p>
<ul>
<li>Auth Key file</li>
<li>Team ID</li>
<li>Key ID</li>
<li>Your app’s bundle ID</li>
</ul>
<p>To create an APN auth key, follow the steps below.</p>
<ol>
<li>
<p>Visit the Apple <a href="https://developer.apple.com/account/">Developer Member Center</a><br>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-1.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"/></br></p>
</li>
<li>
<p>Click on Certificates, Identifiers &amp; Profiles. Go to Keys from the left side. Create a new Auth Key by clicking on the plus button on the top right side.<br>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-2.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"/></br></p>
</li>
<li>
<p>On the following page, add a Key Name, and select APNs.<br>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-3.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"/></br></p>
</li>
<li>
<p>Click on the Register button.<br>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-4.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"/></br></p>
</li>
<li>
<p>Download your auth key file from this page. You will upload this file to the Firebase dashboard later, without changing its name.<br>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-5.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"/></br></p>
</li>
<li>
<p>In your Firebase project, go to Settings and select the Cloud Messaging tab. Scroll down to the iOS app configuration section and click Upload under APNs Authentication Key.<br>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-6.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"/></br></p>
</li>
<li>
<p>Enter the <code>Key ID</code> and <code>Team ID</code>. The Key ID is the 10-character string in the file name <code>AuthKey_{Key ID}.p8</code>. Your Team ID is in the Apple Member Center under the Membership tab, and is also always displayed under your account name in the top right corner.<br>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-7.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"/></br></p>
</li>
<li>
<p>Enable Push Notifications in Capabilities<br>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-8.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"><br>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-9.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"/></br></img></br></p>
</li>
<li>
<p>Enable the required permissions in Background Modes<br>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/image-10.png" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy"/></br></p>
</li>
</ol>
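<p>Before moving on, it helps to see how these four details fit together on the server. The sketch below is illustrative only (the key path, IDs, bundle ID, and helper names are placeholders, not part of the tutorial): it assembles the provider options a token-based APNs client such as node-apn would take, and the VoIP payload shape that the app's PushKit handler parses later in this tutorial (<code>callerInfo.name</code>, <code>callerInfo.callerID</code>, <code>videoSDKInfo.meetingId</code>).</p>

```javascript
// Sketch only: all concrete values and function names are placeholders.

// Options for a token-based APNs provider (e.g. node-apn's `new apn.Provider(...)`).
function buildApnProviderOptions(authKeyPath, keyId, teamId) {
  return {
    token: {
      key: authKeyPath, // path to AuthKey_{Key ID}.p8
      keyId: keyId,     // the 10-character Key ID
      teamId: teamId,   // the 10-character Team ID
    },
    production: false,  // true for App Store / TestFlight builds
  };
}

// The VoIP notification the app expects: the topic is the bundle ID with a
// ".voip" suffix, and the payload keys mirror what the app will read.
function buildVoipNotification(bundleId, callerName, callerID, meetingId) {
  return {
    topic: bundleId + ".voip",
    payload: {
      callerInfo: { name: callerName, callerID: callerID },
      videoSDKInfo: { meetingId: meetingId },
    },
  };
}
```

<p>A real implementation would hand these objects to a push library (the server section below installs node-apn) and send to the device token that the app registers via PushKit.</p>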
<h2 id="server-setup">Server Setup</h2><h3 id="steps">Steps</h3><ol><li><strong>Create a new project directory:</strong></li></ol><ul><li>Open your terminal or command prompt.</li><li>Navigate to the desired location for your project.</li><li>Create a new directory:</li></ul><pre><code class="language-bash">mkdir server
cd server</code></pre><ol start="2"><li><strong>Initialize npm:</strong></li></ol><ul><li>Create a <code>package.json</code> file to manage project dependencies:</li></ul><pre><code class="language-bash">npm init -y</code></pre><ul><li>This will create a <code>package.json</code> file with default settings.</li></ul><ol start="3"><li><strong>Install dependencies:</strong></li></ol><pre><code class="language-bash">npm install express body-parser cors firebase-admin morgan node-fetch uuid https://github.com/node-apn/node-apn.git</code></pre><ol start="4"><li><strong>Create a server.js file:</strong></li></ol><ul><li>Create a file named <code>server.js</code> at the root of your project.</li><li>Add the following code to the file:</li></ul><pre><code class="language-js">const express = require("express");
const cors = require("cors");
const morgan = require("morgan");
const { v4: uuidv4 } = require("uuid");

const app = express();
const port = 3000;

app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(morgan("dev"));

// Start the server
app.listen(port, () =&gt; {
  console.log(`Server running at http://localhost:${port}/`);
});</code></pre><ol start="5"><li><strong>Start the development server:</strong></li></ol><pre><code class="language-bash">node server.js</code></pre><ul><li>Your application should now be running on port 3000 (or the specified port). You can access it by opening a web browser and going to <code>http://localhost:3000</code>.</li></ul><h2 id="app-setup">App Setup</h2><h2 id="appdelegate-setup">AppDelegate Setup</h2><p>The AppDelegate class manages the app's lifecycle, push notifications, and Firebase integration for VoIP. It configures Firebase, handles FCM tokens, registers the app for remote notifications, and logs the APNs token for debugging purposes.</p><p><strong>Device Registration in AppDelegate</strong></p><p>During the app's first launch, the key steps are:</p><ul><li><strong>Device Registration:</strong> Configures push notifications to enable updates.</li><li><strong>Token Generation:</strong> Creates device and FCM tokens for notification identification.</li><li><strong>Error Handling:</strong> Manages errors related to remote notification registration.</li></ul><h3 id="voip-registration-and-firebase-cloud-messaging-fcm-integration">VoIP Registration and Firebase Cloud Messaging (FCM) Integration</h3><ul><li>This section explains how to set up VoIP registration with PushKit, enabling your iOS app to receive and handle incoming VoIP calls even when it's not in the foreground. It also covers Firebase Cloud Messaging (FCM) integration to handle push notifications.</li><li>It also stores the device token and FCM token in your Firebase database.</li></ul><p>Create a separate Swift file, <code>Model/CallStruct.swift</code>, to manage and store session information and variables.</p><p><code>CallStruct.swift</code></p><pre><code class="language-swift">import Foundation

struct CallingInfo {
    static var deviceToken: String?
    static var fcmTokenOfDevice: String?
    static var otherUIDOf: String?
    static var currentMeetingID: String? {
        get {
            return UserDefaults.standard.string(forKey: "currentMeetingID")
        }
        set {
            UserDefaults.standard.set(newValue, forKey: "currentMeetingID")
        }
    }
}</code></pre><p>Create a new Swift file, <code>AppDelegate.swift</code>, to get the device token.</p><p><code>AppDelegate.swift</code></p><pre><code class="language-swift">import UIKit
import FirebaseMessaging
import FirebaseCore

class AppDelegate: NSObject, UIApplicationDelegate {
    func application(_ application: UIApplication,
                    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey : Any]? = nil) -&gt; Bool {
        UIApplication.shared.applicationIconBadgeNumber = 0
        return true
    }
    
    func application(_ application: UIApplication,
                     didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
         // Set the APNS token for Firebase Messaging
         Messaging.messaging().apnsToken = deviceToken

         // Convert and log token
         let tokenString = deviceToken.map { String(format: "%02.2hhx", $0) }.joined()
         print("APNS token: \(tokenString)")
         
     }
     
     func application(_ application: UIApplication,
                     didFailToRegisterForRemoteNotificationsWithError error: Error) {
         print("Failed to register for remote notifications: \(error.localizedDescription)")
     }
}

extension Notification.Name {
    static let callAnswered = Notification.Name("callAnswered")
}</code></pre><p>Create a new Swift file, <code>ViewModel/PushNotificationManager.swift</code>, to manage the PushKit delegate and Remote notifications.</p><p><code>PushNotificationManager.swift</code></p><pre><code class="language-swift">import Foundation
import UserNotifications
import FirebaseMessaging
import PushKit
import UIKit
import SwiftUI

class PushNotificationManager: NSObject, ObservableObject {
    static let shared = PushNotificationManager()
    
    @Published var fcmToken: String?
    private var voipRegistry: PKPushRegistry?
    private var deviceToken: String?
    private var isFcmTokenAvailable: Bool = false
    private var isDeviceTokenAvailable: Bool = false
    @Published var isRegistering: Bool = false
    private var callStatus: String?
    
    override private init() {
        super.init()
        setupNotifications()
        setupVoIP()
    }
    
    private func setupNotifications() {
        UNUserNotificationCenter.current().delegate = self
        Messaging.messaging().delegate = self
        
        UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge]) { granted, error in
            if granted {
                DispatchQueue.main.async {
                    UIApplication.shared.registerForRemoteNotifications()
                }
            }
        }
    }
    
    private func setupVoIP() {
        voipRegistry = PKPushRegistry(queue: .main)
        voipRegistry?.delegate = self
        voipRegistry?.desiredPushTypes = [.voIP]
    }
}

// MARK: - UNUserNotificationCenterDelegate
extension PushNotificationManager: UNUserNotificationCenterDelegate {
    
    func userNotificationCenter(_ center: UNUserNotificationCenter, willPresent notification: UNNotification, withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -&gt; Void) {
        
        let userInfo = notification.request.content.userInfo
        if let callStatus = userInfo["type"] as? String {
            self.callStatus = callStatus
        }
        
        handleFcmNotification()
        completionHandler([.banner, .sound, .badge])
    }
    
    func userNotificationCenter(_ center: UNUserNotificationCenter, didReceive response: UNNotificationResponse, withCompletionHandler completionHandler: @escaping () -&gt; Void) {
        let userInfo = response.notification.request.content.userInfo
        print("Notification received with userInfo: \(userInfo)")
        
        handleFcmNotification()
        completionHandler()
    }
    
    private func handleFcmNotification() {
        if callStatus == "ACCEPTED" {
            DispatchQueue.main.async {
                      if let meetingId = CallingInfo.currentMeetingID {
                          NavigationState.shared.navigateToMeeting(meetingId: meetingId)
                      }
            }
        } else {
            DispatchQueue.main.async {
                NavigationState.shared.navigateToJoin()
            }
        }
    }
    
}

// MARK: - MessagingDelegate
extension PushNotificationManager: MessagingDelegate {
    func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String?) {
        self.fcmToken = fcmToken
        CallingInfo.fcmTokenOfDevice = fcmToken
        self.isFcmTokenAvailable = true
        
        self.isRegistering = true
        // Register user if both tokens are available
        if self.isDeviceTokenAvailable &amp;&amp; self.isFcmTokenAvailable {
            guard let deviceToken = deviceToken,
                  let fcmToken = fcmToken else { return }
            registerUser(deviceToken: deviceToken, fcmToken: fcmToken)
        } else {
            DispatchQueue.main.asyncAfter(deadline: .now() + 10.0) { [weak self] in
                guard let self = self,
                      let deviceToken = deviceToken,
                      let fcmToken = fcmToken else { return }
                
                if self.isDeviceTokenAvailable &amp;&amp; self.isFcmTokenAvailable {
                    self.registerUser(deviceToken: deviceToken, fcmToken: fcmToken)
                } else {
                    self.isRegistering = false
                }
            }
        }
    }
}

// MARK: - PKPushRegistryDelegate
extension PushNotificationManager: PKPushRegistryDelegate {
    func pushRegistry(_ registry: PKPushRegistry, didUpdate pushCredentials: PKPushCredentials, for type: PKPushType) {
        let token = pushCredentials.token.map { String(format: "%02x", $0) }.joined()
        CallingInfo.deviceToken = token
        self.deviceToken = token
        self.isDeviceTokenAvailable = true
    }
    
    func pushRegistry(_ registry: PKPushRegistry, didInvalidatePushTokenFor type: PKPushType) {
        print("Push token invalidated for type: \(type)")
    }
    
    func pushRegistry(_ registry: PKPushRegistry, didReceiveIncomingPushWith payload: PKPushPayload, for type: PKPushType, completion: @escaping () -&gt; Void) {
        // Handle VoIP push notification
        handleVoIPPushPayload(payload)
        completion()
    }
    
    private func handleVoIPPushPayload(_ payload: PKPushPayload) {
        let payloadDict = payload.dictionaryPayload
        guard let callerInfo = payloadDict["callerInfo"] as? [String: Any],
              let callerName = callerInfo["name"] as? String,
              let callerID = callerInfo["callerID"] as? String,
              let videoSDKInfo = payloadDict["videoSDKInfo"] as? [String: Any],
              let meetingId = videoSDKInfo["meetingId"] as? String else {
            return
        }
        
        CallingInfo.otherUIDOf = callerID
        CallingInfo.currentMeetingID = meetingId
        
        CallKitManager.shared.reportIncomingCall(callerName: callerName, meetingId: meetingId)
    }
}

extension PushNotificationManager {
    // Register user in firebase database
    private func registerUser(deviceToken: String, fcmToken: String) {
        let name = UIDevice.current.name
        UserData.shared.registerUser(name: name, deviceToken: deviceToken, fcmToken: fcmToken) { success in
            if success {
                print("user stored")
                self.isRegistering = false
            }
        }
    }
}</code></pre><p><br>By implementing these methods, your app can effectively manage incoming VoIP calls, providing a seamless experience even when the app is not active, and it ensures that the FCM token is available for push notifications.</br></p><hr><h2 id="callkit-setup">CallKit Setup</h2><p>This section covers setting up CallKit to manage incoming and outgoing calls in your iOS app. CallKit integrates your app with the native iOS calling interface, providing a seamless VoIP experience.<br>Create a new Swift file, <code>CallKitManager.swift</code>, to manage the CallKit objects for observing, monitoring, and controlling calls.</br></p><p><code>CallKitManager.swift</code></p><pre><code class="language-swift">import CallKit
import AVFoundation

class CallKitManager: NSObject, ObservableObject, CXProviderDelegate {
    
    static let shared = CallKitManager()

    private var provider: CXProvider
    private var callController: CXCallController
    @Published var callerIDs: [UUID: String] = [:]
    @Published var meetingIDs = [UUID: String]()
    
    override private init() {
        provider = CXProvider(configuration: CXProviderConfiguration(localizedName: "In CallKitSwiftUI"))
        callController = CXCallController()
        super.init()
        provider.setDelegate(self, queue: nil)
    }
    
    func reportIncomingCall(callerName: String, meetingId: String) {
        let uuid = UUID()
        let update = CXCallUpdate()
        update.remoteHandle = CXHandle(type: .generic, value: callerName)
        update.localizedCallerName = callerName
        
        callerIDs[uuid] = callerName
        meetingIDs[uuid] = meetingId
        
        provider.reportNewIncomingCall(with: uuid, update: update) { error in
            if let error = error {
                print("Error reporting incoming call: \(error)")
            }
        }
    }
    
    func endCall() {
        // End all active calls
        for (uuid, _) in callerIDs {
            let endCallAction = CXEndCallAction(call: uuid)
            let transaction = CXTransaction(action: endCallAction)
            
            callController.request(transaction) { error in
                if let error = error {
                    print("Error ending call: \(error.localizedDescription)")
                } else {
                    print("Call ended successfully")
                }
            }
        }
    }
    
    // CXProviderDelegate methods
    func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
        configureAudioSession()
        let update = CXCallUpdate()
        update.remoteHandle = action.handle
        update.localizedCallerName = action.handle.value
        provider.reportCall(with: action.callUUID, updated: update)
        action.fulfill()
    }
    
    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        if let callerID = callerIDs[action.callUUID] {
            print("Establishing call connection with caller ID: \(callerID)")
        }
        NotificationCenter.default.post(name: .callAnswered, object: nil)
        UserData.shared.UpdateCallAPI(callType: "ACCEPTED")
        action.fulfill()
    }
    
    func provider(_ provider: CXProvider, perform action: CXEndCallAction) {
        callerIDs.removeValue(forKey: action.callUUID)
        let meetingViewController = MeetingViewController()
        meetingViewController.onMeetingLeft()
        action.fulfill()
        UserData.shared.UpdateCallAPI(callType: "REJECTED")
        DispatchQueue.main.async {
            NavigationState.shared.navigateToJoin()
        }
    }

    func providerDidReset(_ provider: CXProvider) {
        callerIDs.removeAll()
    }
}</code></pre><hr><h2 id="storing-user-data">Storing User Data</h2><p>Create a new Swift file, <code>ViewModel/UserData.swift</code>, to store the user data.</p><p><code>UserData.swift</code></p><pre><code class="language-swift">import SwiftUI
import Firebase
import FirebaseFirestore
import FirebaseMessaging


class UserData: ObservableObject {
    @Published var callerID: String = "" // Store the caller ID
    @Published public var otherUserID: String = ""
    static let shared = UserData()
    private let callerIDKey = "callerIDKey" // Key for UserDefaults
    
    let TOKEN_STRING = ""
    
    init() {
        self.callerID = UserDefaults.standard.string(forKey: callerIDKey) ?? ""
    }
    
    // MARK: Generating Unique CallerID
    func generateUniqueCallerID() -&gt; String {
        let randomNumber = Int.random(in: 10000...99999)
        print("caller id", randomNumber)
        return String(randomNumber)
    }
    
    // MARK: Check and Register User if Required
    func registerUser(name: String, deviceToken: String, fcmToken: String, completion: @escaping (Bool) -&gt; Void) {
        // First check if user exists with this FCM token
        Firestore.firestore().collection("users")
            .whereField("fcmToken", isEqualTo: fcmToken)
            .getDocuments { [weak self] snapshot, error in
                if let error = error {
                    print("Error checking for existing user: \(error.localizedDescription)")
                    completion(false)
                    return
                }
                
                // If documents exist with this FCM token
                if let snapshot = snapshot, !snapshot.isEmpty {
                    print("User already exists")
                    PushNotificationManager.shared.isRegistering = false
                    completion(false)
                    return
                }
                
                // If no existing user found, create new user
                let callerID = self?.generateUniqueCallerID() ?? ""
                
                DispatchQueue.main.async {
                    self?.callerID = callerID
                    UserDefaults.standard.set(callerID, forKey: self?.callerIDKey ?? "")
                }
                
                Firestore.firestore().collection("users").addDocument(data: [
                    "name": name,
                    "callerID": callerID,
                    "deviceToken": deviceToken,
                    "fcmToken": fcmToken
                ]) { [weak self] error in
                    if let error = error {
                        print("Error adding document: \(error.localizedDescription)")
                        completion(false)
                    } else {
                        print("Document added successfully")
                        self?.storeCallerID(callerID)
                        completion(true)
                    }
                }
            }
    }

    func storeCallerID(_ callerID: String) {
        // Save the caller ID to UserDefaults
        UserDefaults.standard.set(callerID, forKey: callerIDKey)
        self.callerID = callerID
    }
}</code></pre><p>We use this function to store the user information in the Firebase database, as shown below in <code>PushNotificationManager.swift</code>.</p><p><code>PushNotificationManager.swift</code></p><pre><code class="language-swift">extension PushNotificationManager: MessagingDelegate {
    func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String?) {
        self.fcmToken = fcmToken
        CallingInfo.fcmTokenOfDevice = fcmToken
        self.isFcmTokenAvailable = true
        
        self.isRegistering = true
        // Register user if both tokens are available
        if self.isDeviceTokenAvailable &amp;&amp; self.isFcmTokenAvailable {
            guard let deviceToken = deviceToken,
                  let fcmToken = fcmToken else { return }
            registerUser(deviceToken: deviceToken, fcmToken: fcmToken)
        } else {
            DispatchQueue.main.asyncAfter(deadline: .now() + 10.0) { [weak self] in
                guard let self = self,
                      let deviceToken = deviceToken,
                      let fcmToken = fcmToken else { return }
                
                if self.isDeviceTokenAvailable &amp;&amp; self.isFcmTokenAvailable {
                    self.registerUser(deviceToken: deviceToken, fcmToken: fcmToken)
                } else {
                    self.isRegistering = false
                }
            }
        }
    }

    private func registerUser(deviceToken: String, fcmToken: String) {
        let name = UIDevice.current.name
        UserData.shared.registerUser(name: name, deviceToken: deviceToken, fcmToken: fcmToken) { success in
            if success {
                print("user stored")
                self.isRegistering = false
            }
        }
    }
}</code></pre><hr><h2 id="designing-the-app-interface-with-swiftui">Designing the App Interface with SwiftUI</h2><p>We'll start by creating three SwiftUI views: <code>JoinView</code>, <code>CallingView</code>, and <code>MeetingView</code>. We will also create a <code>NavigationState.swift</code> to manage the navigation between these views.<br/></p><pre><code class="language-md">├── Views
│   ├── CallKitSwiftUIApp.swift // Default
│   ├── JoinView.swift
│   ├── CallingView.swift
│   └── NavigationState.swift</code></pre><h3 id="navigationstateswift"><code>NavigationState.swift</code></h3><p>This file manages the navigation between the views.</p><pre><code class="language-swift">import SwiftUI

enum AppScreen: Hashable {
    case join
    case calling(userName: String, userNumber: String)
    case meeting(meetingId: String)
}

class NavigationState: ObservableObject {
    static let shared = NavigationState()
    
    @Published var path = NavigationPath()
    @Published var currentScreen: AppScreen = .join
    
    func navigateToCall(userName: String, userNumber: String) {
        currentScreen = .calling(userName: userName, userNumber: userNumber)
        path.append(AppScreen.calling(userName: userName, userNumber: userNumber))
    }
    
    func navigateToMeeting(meetingId: String) {
        currentScreen = .meeting(meetingId: meetingId)
        path.append(AppScreen.meeting(meetingId: meetingId))
    }
    
    func navigateToJoin() {
        path.removeLast(path.count)
        currentScreen = .join
    }
}</code></pre><h3 id="joinviewswift"><code>JoinView.swift</code></h3><p>The <code>Views/JoinView.swift</code> is the initial screen users see when they open the app. It displays the user's Caller ID, allows them to enter another user's ID to initiate a call, and manages navigation based on call status.</p><pre><code class="language-swift">import SwiftUI
import Firebase
import FirebaseFirestore

struct JoinView: View {
    
    @EnvironmentObject private var userData: UserData
    @EnvironmentObject private var callKitManager: CallKitManager
    @StateObject private var pushNotificationManager = PushNotificationManager.shared
    @EnvironmentObject private var navigationState: NavigationState
    
    @State public var otherUserID: String = ""
    @State private var userName: String = ""
    @State private var userNumber: String = ""
    
    var body: some View {
        NavigationView {
            ZStack {
                VStack(spacing: 30) {
                    Spacer()
                    
                    VStack(alignment: .leading, spacing: 10) {
                        Text("Your Caller ID")
                            .font(.headline)
                            .foregroundColor(.white)
                        
                        HStack(spacing: 10) {
                            Text(userData.callerID)
                                .font(.title)
                                .fontWeight(.bold)
                                .foregroundColor(.white)
                            
                            Image(systemName: "lock.fill")
                                .foregroundColor(.white)
                        }
                    }
                    .padding()
                    .background(Color(red: 0.1, green: 0.1, blue: 0.1))
                    .cornerRadius(12)
                    
                    Spacer(minLength: 2)
                    
                    VStack(alignment: .leading, spacing: 10) {
                        Text("Enter call ID of another user")
                            .font(.headline)
                            .foregroundColor(.white)
                        
                        TextField("Enter ID", text: $otherUserID)
                            .foregroundColor(.black)
                            .textFieldStyle(RoundedBorderTextFieldStyle())
                            .padding(.horizontal)
                    }
                    .padding()
                    .background(Color(red: 0.1, green: 0.1, blue: 0.1))
                    .cornerRadius(12)
                    
                    Spacer(minLength: 2)
                    
                    Button(action: {
                        // initiate call
                        userData.initiateCall(otherUserID: otherUserID) { callerInfo, calleeInfo, videoSDKInfo in
                            print("Initiating call to \(calleeInfo?.name ?? "Unknown")")
                            self.userName = calleeInfo?.name ?? "Unknown"
                            self.userNumber = calleeInfo?.callerID ?? "Unknown"
                            navigationState.navigateToCall(userName: self.userName, userNumber: self.userNumber)
                        }
                    }) {
                        HStack {
                            Text("Start Call")
                            Image(systemName: "phone.circle.fill")
                                .imageScale(.large)
                        }
                    }
                    .buttonStyle(.borderedProminent)
                    .padding(.trailing)
                    
                    Spacer()
                    
                }
                .padding()
                .frame(maxWidth: .infinity, maxHeight: .infinity)
                .background(Color(red: 0.05, green: 0.05, blue: 0.05))
                .edgesIgnoringSafeArea(.all)
                
                if pushNotificationManager.isRegistering {
                    ProgressView()
                        .progressViewStyle(CircularProgressViewStyle(tint: .white))
                        .scaleEffect(1.5)
                        .frame(maxWidth: .infinity, maxHeight: .infinity)
                        .background(Color.black.opacity(0.4))
                }
            }
            .onAppear {
                userData.fetchCallerID()
                NotificationCenter.default.addObserver(forName: .callAnswered, object: nil, queue: .main) { _ in
                    if let meetingId = CallingInfo.currentMeetingID {
                        navigationState.navigateToMeeting(meetingId: meetingId)
                    }
                }
            }
        }
    }
}</code></pre><p>Snapshot of JoinView</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/join-view.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="1920" height="1080"/></figure><p/><h3 id="callingviewswift"><code>CallingView.swift</code></h3><pre><code class="language-swift">import SwiftUI

struct CallingView: View {
    var userNumber: String
    var userName: String

    @EnvironmentObject private var navigationState: NavigationState

    var body: some View {
        ZStack {
            Color.black.edgesIgnoringSafeArea(.all)

            VStack(spacing: 30) {
                HStack(alignment: .center) {
                    Image(systemName: "person.circle.fill")
                        .resizable()
                        .frame(width: 100, height: 100)
                        .foregroundColor(.gray)

                    VStack(alignment: .leading, spacing: 5) {
                        Text(userName)
                            .font(.largeTitle)
                            .foregroundColor(.white)

                        Text(userNumber)
                            .font(.title)
                            .foregroundColor(.white)
                    }
                }

                Spacer()

                Text("Calling...")
                    .font(.title2)
                    .foregroundColor(.gray)

                Spacer()

                Button(action: {
                    navigationState.navigateToJoin()
                }) {
                    Image(systemName: "phone.down.fill")
                        .font(.system(size: 24))
                        .foregroundColor(.white)
                        .padding()
                        .background(Color.red)
                        .clipShape(Circle())
                }
                .padding(.bottom, 50)
            }
            .padding(.horizontal, 30)
        }
    }
}</code></pre><p>Snapshot of Calling View</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/calling-view.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="1920" height="1080"/></figure><h3 id="meetingviewswift"><code>MeetingView.swift</code></h3><pre><code class="language-md">├── Views
│   ├── MeetingView.swift</code></pre><p>The <code>Views/MeetingView.swift</code> serves as the main meeting screen with controls for the local participant. We will integrate the <a href="https://www.videosdk.live/" rel="noreferrer">VideoSDK</a> here for Audio and Video Calling.</p><pre><code class="language-swift">import SwiftUI
import VideoSDKRTC
import WebRTC

struct MeetingView: View{
    
    @ObservedObject var meetingViewController = MeetingViewController()
    
    // Variables for keeping the state of various controls
    @State var meetingId: String?
    @State var userName: String? = "Demo"
    @State var isUnMute: Bool = true
    @State var camEnabled: Bool = true
    @State var isScreenShare: Bool = false
    
    var userData = UserData()
    
    var body: some View {
        
        VStack {
            if meetingViewController.participants.count == 0 {
                Text("Meeting Initializing")
            } else {
                VStack {
                    VStack(spacing: 20) {
                        Text("Meeting ID: \(CallingInfo.currentMeetingID!)")
                            .padding(.vertical)
                        
                        List {
                            ForEach(meetingViewController.participants.indices, id: \.self) { index in
                                Text("Participant Name: \(meetingViewController.participants[index].displayName)")
                                ZStack {
                                    ParticipantView(track: meetingViewController.participants[index].streams.first(where: { $1.kind == .state(value: .video) })?.value.track as? RTCVideoTrack).frame(height: 250)
                                    if meetingViewController.participants[index].streams.first(where: { $1.kind == .state(value: .video) }) == nil {
                                        Color.white.opacity(1.0).frame(width: UIScreen.main.bounds.width, height: 250)
                                        Text("No media")
                                    }
                                }
                            }
                        }
                    }
                    
                    VStack {
                        HStack(spacing: 15) {
                            // mic button
                            Button {
                                if isUnMute {
                                    isUnMute = false
                                    meetingViewController.meeting?.muteMic()
                                }
                                else {
                                    isUnMute = true
                                    meetingViewController.meeting?.unmuteMic()
                                }
                            } label: {
                                Text("Toggle Mic")
                                    .foregroundStyle(Color.white)
                                    .font(.caption)
                                    .padding()
                                    .background(
                                        RoundedRectangle(cornerRadius: 25)
                                            .fill(Color.blue))
                            }
                            // camera button
                            Button {
                                if camEnabled {
                                    camEnabled = false
                                    meetingViewController.meeting?.disableWebcam()
                                }
                                else {
                                    camEnabled = true
                                    meetingViewController.meeting?.enableWebcam()
                                }
                            } label: {
                                Text("Toggle WebCam")
                                    .foregroundStyle(Color.white)
                                    .font(.caption)
                                    .padding()
                                    .background(
                                        RoundedRectangle(cornerRadius: 25)
                                            .fill(Color.blue))
                            }
                        }
                        HStack{
                            // end meeting button
                            Button {
                                meetingViewController.meeting?.end()
                                NavigationState.shared.navigateToJoin()
                                CallKitManager.shared.endCall()
                                
                            } label: {
                                Text("End Call")
                                    .foregroundStyle(Color.white)
                                    .font(.caption)
                                    .padding()
                                    .background(
                                        RoundedRectangle(cornerRadius: 25)
                                            .fill(Color.red))
                            }
                        }
                        .padding(.bottom)
                    }
                }
            }
        }.onAppear()
        {
            // MARK: - Configure VideoSDK
            VideoSDK.config(token: meetingViewController.token)
            if meetingId?.isEmpty == false {
                // join an existing meeting with provided meeting Id
                meetingViewController.joinMeeting(meetingId: meetingId!, userName: userName!)
            }
            else {
                // No meeting ID provided; nothing to join yet
            }
        }
    }    
}

/// VideoView for participant's video
class VideoView: UIView {
    
    var videoView: RTCMTLVideoView = {
        let view = RTCMTLVideoView()
        view.videoContentMode = .scaleAspectFill
        view.backgroundColor = UIColor.black
        view.clipsToBounds = true
        view.frame = CGRect(x: 0, y: 0, width: UIScreen.main.bounds.width, height: 250)
        
        return view
    }()
    
    init(track: RTCVideoTrack?) {
        super.init(frame: CGRect(x: 0, y: 0, width: UIScreen.main.bounds.width, height: 250))
        backgroundColor = .clear
        DispatchQueue.main.async {
            self.addSubview(self.videoView)
            self.bringSubviewToFront(self.videoView)
            track?.add(self.videoView)
        }
    }
    
    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

/// ParticipantView for showing and hiding VideoView
struct ParticipantView: UIViewRepresentable {
    var track: RTCVideoTrack?
    
    func makeUIView(context: Context) -&gt; VideoView {
        let view = VideoView(track: track)
        view.frame = CGRect(x: 0, y: 0, width: 250, height: 250)
        return view
    }
    
    func updateUIView(_ uiView: VideoView, context: Context) {
        // Re-attach the track when one is present. (An else branch calling
        // `track?.remove` could never execute, since `track` is nil there.)
        track?.add(uiView.videoView)
    }
}</code></pre><p>We'll use the <code>MeetingViewController</code> to manage the meeting.</p><h3 id="meetingviewcontrollerswift"><code>MeetingViewController.swift</code></h3><pre><code class="language-md">├── ViewModel
│   ├── MeetingViewController.swift</code></pre><pre><code class="language-swift">import Foundation
import VideoSDKRTC
import WebRTC

class MeetingViewController: ObservableObject {
    
    var token = "YOUR_TOKEN_HERE"
    var meetingId: String = ""
    var name: String = ""
    
    @Published var meeting: Meeting? = nil
    @Published var localParticipantView: VideoView? = nil
    @Published var videoTrack: RTCVideoTrack?
    @Published var participants: [Participant] = []
    @Published var meetingID: String = ""
    
    func initializeMeeting(meetingId: String, userName: String) {
        
        // Use the parameters passed in rather than hard-coded values
        meeting = VideoSDK.initMeeting(
            meetingId: meetingId,
            participantName: userName,
            micEnabled: true,
            webcamEnabled: true
        )
        
        meeting?.addEventListener(self)
        meeting?.join()
    }
}

extension MeetingViewController: MeetingEventListener {
    
    func onMeetingJoined() {
        
        guard let localParticipant = self.meeting?.localParticipant else { return }
        
        // add to list
        participants.append(localParticipant)
        
        // add event listener
        localParticipant.addEventListener(self)
        
        localParticipant.setQuality(.high)
    }
    
    func onParticipantJoined(_ participant: Participant) {
        
        participants.append(participant)
        
        // add listener
        participant.addEventListener(self)
        
        participant.setQuality(.high)
    }
    
    func onParticipantLeft(_ participant: Participant) {
        participants = participants.filter({ $0.id != participant.id })
    }
    
    func onMeetingLeft() {
        meeting?.localParticipant.removeEventListener(self)
        meeting?.removeEventListener(self)
        NavigationState.shared.navigateToJoin()
        CallKitManager.shared.endCall()
    }
    
    func onMeetingStateChanged(meetingState: MeetingState) {
        switch meetingState {
            
        case .CLOSED:
            participants.removeAll()
            
        default:
            break
        }
    }
}

extension MeetingViewController: ParticipantEventListener {
    func onStreamEnabled(_ stream: MediaStream, forParticipant participant: Participant) {
        // Local and remote participants are handled identically
        if let track = stream.track as? RTCVideoTrack {
            DispatchQueue.main.async {
                self.videoTrack = track
            }
        }
    }
    
    func onStreamDisabled(_ stream: MediaStream, forParticipant participant: Participant) {
        // Clear the published video track on the main thread for both
        // local and remote participants
        if stream.track is RTCVideoTrack {
            DispatchQueue.main.async {
                self.videoTrack = nil
            }
        }
    }
}

extension MeetingViewController {
    
    // Initialize a meeting with the given meeting ID (either new or existing)
    func joinMeeting(meetingId: String, userName: String) {
        
        if !token.isEmpty {
            // use provided token for the meeting
            self.meetingID = meetingId
            self.initializeMeeting(meetingId: meetingId, userName: userName)
        }
        else {
            print("Auth token required")
        }
    }
}</code></pre><p>With the basic UI in place, you can now focus on implementing the app's functionality, including method execution and API interactions.</p><hr><h2 id="integrating-the-call-initiation-api-on-the-join-screen">Integrating the call initiation API on the Join screen</h2><h3 id="workflow-for-call-initiation">Workflow for call initiation</h3><ol>
<li>
<p><strong>User Action</strong>: User enters recipient's ID and taps "Start Call".</p>
</li>
<li>
<p><strong>Data Fetching</strong>: Retrieve caller and callee information from local storage or backend.</p>
</li>
<li>
<p><strong>Meeting Creation</strong>: Initiate a meeting on the video conferencing platform.</p>
</li>
<li>
<p><strong>Call Request</strong>: Send a call request to the recipient, including meeting details.</p>
</li>
<li>
<p><strong>UI Update</strong>: Display a "Calling..." screen.</p>
</li>
</ol>
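<p>Taken together, the five steps above boil down to one JSON payload that the app ultimately POSTs to the server. The following JavaScript sketch shows the shape of that payload (the app itself builds it in Swift via the <code>CallRequest</code> struct defined next); the <code>buildCallRequest</code> helper and the sample values are illustrative only:</p><pre><code class="language-js">// Build the payload the iOS client sends to the /initiate-call endpoint.
// Field names mirror the CallerInfo / CalleeInfo / VideoSDKInfo structs.
function buildCallRequest(callerInfo, calleeInfo, meetingId) {
  const required = ["id", "name", "callerID", "deviceToken", "fcmToken"];
  for (const user of [callerInfo, calleeInfo]) {
    for (const field of required) {
      if (typeof user[field] !== "string") {
        throw new Error("missing or invalid field: " + field);
      }
    }
  }
  return { callerInfo, calleeInfo, videoSDKInfo: { meetingId } };
}

// Example usage:
const request = buildCallRequest(
  { id: "u1", name: "Alice", callerID: "1001", deviceToken: "apns-1", fcmToken: "fcm-1" },
  { id: "u2", name: "Bob", callerID: "1002", deviceToken: "apns-2", fcmToken: "fcm-2" },
  "room-abc"
);
console.log(request.videoSDKInfo.meetingId); // "room-abc"</code></pre>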
<h3 id="initiatecallinfoswift"><code>InitiateCallInfo.swift</code></h3><p>Before building the UI or making API calls, we need to create the <code>Model/InitiateCallInfo.swift</code> file. This file will contain the structures that hold call-related information.</p><p>Content:</p><ul><li><code>CallerInfo</code> and <code>CalleeInfo</code> structs contain details about the participants of the call, like IDs, names, and tokens.</li><li><code>VideoSDKInfo</code> struct holds information required by the video SDK for the call session.</li><li><code>CallRequest</code> struct combines all the above information into a single payload to initiate the call.</li></ul><pre><code class="language-swift">import Foundation

// USER A: The caller initiating the call
struct CallerInfo: Codable {
    let id: String
    let name: String
    let callerID: String
    let deviceToken: String
    let fcmToken: String
}

// USER B: The callee receiving the call
struct CalleeInfo: Codable {
    let id: String
    let name: String
    let callerID: String
    let deviceToken: String
    let fcmToken: String
}

// The meeting ID is read from MeetingManager once the room has been created
struct VideoSDKInfo: Codable {
    var meetingId: String = MeetingManager.shared.currentMeetingID ?? "null"
}

// Combines all three and sends the information to the server
struct CallRequest: Codable {
    let callerInfo: CallerInfo
    let calleeInfo: CalleeInfo
    let videoSDKInfo: VideoSDKInfo
}</code></pre><h3 id="userdataswift"><code>UserData.swift</code></h3><p>This file contains the logic for fetching user data, creating meetings, and initiating calls.<br/>Content:</p><ul><li>The <code>UserData</code> class manages user data such as the caller ID and tokens.</li><li>It includes functions to fetch <code>caller</code> and <code>callee</code> info, create a VideoSDK meeting, and initiate a call.</li><li>The <code>initiateCall</code> function combines all the gathered information and sends it to the server.</li></ul><pre><code class="language-swift">import SwiftUI
import Firebase

class UserData: ObservableObject {
    @Published var callerID: String = "" // Store the caller ID
    @Published public var otherUserID: String = ""
    static let shared = UserData()
    private let callerIDKey = "callerIDKey" // Key for UserDefaults
    private let TOKEN_STRING = "VideoSDK Token" // VideoSDK auth token; generate one at https://app.videosdk.live/api-keys

    init() {
        self.callerID = UserDefaults.standard.string(forKey: callerIDKey) ?? ""
    }

    // MARK: - Fetch CallerID From Defaults

    /// Retrieves the caller ID from UserDefaults
    /// - Returns: The stored caller ID or nil if not found
    func fetchCallerID() -&gt; String? {
        // Retrieve the caller ID from UserDefaults
        if callerID.isEmpty {
            return UserDefaults.standard.string(forKey: callerIDKey)
        }
        return callerID
    }

    // MARK: - Fetch Caller Info

    /// Fetches caller information from Firestore
    /// - Parameter completion: Closure called with the fetched CallerInfo or nil if not found
    func fetchCallerInfo(completion: @escaping (CallerInfo?) -&gt; Void) {
        guard let callerIDDevice = UserDefaults.standard.string(forKey: callerIDKey) else {
            completion(nil)
            return
        }

        Firestore.firestore().collection("users")
            .whereField("callerID", isEqualTo: callerIDDevice)
            .getDocuments { [weak self] snapshot, error in
                self?.handleFirestoreResponse(snapshot: snapshot, error: error, completion: completion)
            }
    }

    // MARK: - Fetch Callee Info

    /// Fetches callee information from Firestore
    /// - Parameters:
    ///   - callerID: The ID of the callee
    ///   - completion: Closure called with the fetched CalleeInfo or nil if not found
    func fetchCalleeInfo(callerID: String, completion: @escaping (CalleeInfo?) -&gt; Void) {
        Firestore.firestore().collection("users")
            .whereField("callerID", isEqualTo: callerID)
            .getDocuments { [weak self] snapshot, error in
                self?.handleFirestoreResponse(snapshot: snapshot, error: error, completion: completion)
            }
    }

    // MARK: - Meeting ID Generation

    /// Creates a new meeting using the Video SDK API
    /// - Parameters:
    ///   - token: The authentication token
    ///   - completion: Closure called with the result containing the room ID or an error
    func createMeeting(token: String, completion: @escaping (Result&lt;String, Error&gt;) -&gt; Void) {
        guard let url = URL(string: "https://api.videosdk.live/v2/rooms") else {
            completion(.failure(NSError(domain: "Invalid URL", code: -1, userInfo: nil)))
            return
        }

        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.addValue(token, forHTTPHeaderField: "Authorization")

        URLSession.shared.dataTask(with: request) { data, response, error in
            DispatchQueue.main.async {
                if let error = error {
                    completion(.failure(error))
                    return
                }

                guard let data = data else {
                    completion(.failure(NSError(domain: "No data", code: 500, userInfo: nil)))
                    return
                }

                do {
                    let dataArray = try JSONDecoder().decode(RoomsStruct.self, from: data)
                    let roomID = dataArray.roomID ?? ""
                    MeetingManager.shared.currentMeetingID = roomID
                    completion(.success(roomID))
                } catch {
                    completion(.failure(error))
                }
            }
        }.resume()
    }

    // MARK: - Initiate Call

    /// Initiates a call by fetching caller and callee info, creating a meeting, and sending a call request
    /// - Parameters:
    ///   - otherUserID: The ID of the user being called
    ///   - completion: Closure called with the caller info, callee info, and Video SDK info
    func initiateCall(otherUserID: String, completion: @escaping (CallerInfo?, CalleeInfo?, VideoSDKInfo?) -&gt; Void) {
        fetchCallerInfo { [weak self] callerInfo in
            guard let self = self, let callerInfo = callerInfo else {
                print("Error fetching caller info")
                completion(nil, nil, nil)
                return
            }

            self.fetchCalleeInfo(callerID: otherUserID) { calleeInfo in
                guard let calleeInfo = calleeInfo else {
                    print("Error fetching callee info")
                    completion(nil, nil, nil)
                    return
                }

                self.createMeeting(token: self.TOKEN_STRING) { result in
                    switch result {
                    case .success(let roomID):
                        print("Meeting created successfully with Room ID: \(roomID)")
                        DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
                            let videoSDKInfo = VideoSDKInfo()
                            completion(callerInfo, calleeInfo, videoSDKInfo)

                            let callRequest = CallRequest(callerInfo: callerInfo, calleeInfo: calleeInfo, videoSDKInfo: videoSDKInfo)
                            self.sendCallRequest(callRequest) { result in
                                switch result {
                                case .success(let data):
                                    print("Call request successful: \(String(describing: data))")
                                case .failure(let error):
                                    print("Error sending call request: \(error)")
                                }
                            }
                        }
                    case .failure(let error):
                        print("Error creating meeting: \(error)")
                        completion(nil, nil, nil)
                    }
                }
            }
        }
    }

    // MARK: - API Calls

    /// Sends a call request to the server
    /// - Parameters:
    ///   - request: The CallRequest object containing call details
    ///   - completion: Closure called with the result of the API call
    public func sendCallRequest(_ request: CallRequest, completion: @escaping (Result&lt;Data?, Error&gt;) -&gt; Void) {
        guard let url = URL(string: "YOURSERVERURL") else {
            completion(.failure(NSError(domain: "Invalid URL", code: -1, userInfo: nil)))
            return
        }

        var urlRequest = URLRequest(url: url)
        urlRequest.httpMethod = "POST"
        urlRequest.setValue("application/json", forHTTPHeaderField: "Content-Type")

        do {
            let jsonData = try JSONEncoder().encode(request)
            urlRequest.httpBody = jsonData

            URLSession.shared.dataTask(with: urlRequest) { data, response, error in
                if let error = error {
                    completion(.failure(error))
                } else if let response = response as? HTTPURLResponse, response.statusCode == 200 {
                    completion(.success(data))
                } else {
                    let error = NSError(domain: "API Error", code: (response as? HTTPURLResponse)?.statusCode ?? -1, userInfo: nil)
                    completion(.failure(error))
                }
            }.resume()
        } catch {
            completion(.failure(error))
        }
    }

    // MARK: - Helper Methods

    /// Handles the response from Firestore queries
    private func handleFirestoreResponse&lt;T: Codable&gt;(snapshot: QuerySnapshot?, error: Error?, completion: @escaping (T?) -&gt; Void) {
        if let error = error {
            print("Error fetching documents: \(error.localizedDescription)")
            completion(nil)
            return
        }

        guard let snapshot = snapshot, !snapshot.isEmpty, let document = snapshot.documents.first else {
            print("No documents found for the given caller ID")
            completion(nil)
            return
        }

        let data = document.data()
        let name = data["name"] as? String ?? ""
        let deviceToken = data["deviceToken"] as? String ?? ""
        let callerID = data["callerID"] as? String ?? ""
        let fcmToken = data["fcmToken"] as? String ?? ""

        let info = T.self == CallerInfo.self ?
            CallerInfo(id: document.documentID, name: name, callerID: callerID, deviceToken: deviceToken, fcmToken: fcmToken) as? T :
            CalleeInfo(id: document.documentID, name: name, callerID: callerID, deviceToken: deviceToken, fcmToken: fcmToken) as? T

        completion(info)
    }
}</code></pre><p>We'll invoke the <code>initiateCall</code> method from the <code>Start Call</code> button action in the <code>JoinView.swift</code> file.</p><h3 id="server-side-api-implementation">Server-side API implementation</h3><h4 id="serverjs"><code>server.js</code></h4><pre><code class="language-js">const express = require("express");
const cors = require("cors");
const admin = require("firebase-admin");
const morgan = require("morgan");
var Key = "YOUR_APNS_AUTH_KEY_PATH"; // TODO: replace with the path to your APNs auth key (.p8) file
var apn = require("apn");
const { v4: uuidv4 } = require("uuid");
const serviceAccount = require("./serviceAccountKey.json"); // Replace with the path to your service account key

const app = express();
const port = 3000;

app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(morgan("dev"));

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
});

app.post("/initiate-call", (req, res) =&gt; {
  const { calleeInfo, callerInfo, videoSDKInfo } = req.body;

  let deviceToken = calleeInfo.deviceToken;

  var options = {
    token: {
      key: Key,
      keyId: "KEY_ID",
      teamId: "TEAM_ID",
    },
    production: false,
  };

  var apnProvider = new apn.Provider(options);

  var note = new apn.Notification();

  note.expiry = Math.floor(Date.now() / 1000) + 3600; // Expires 1 hour from now.

  note.badge = 1;
  note.sound = "ping.aiff";
  note.alert = "You have a new message";
  note.rawPayload = {
    callerName: callerInfo.name,
    aps: {
      "content-available": 1,
    },
    handle: callerInfo.name,
    callerInfo,
    videoSDKInfo,
    type: "CALL_INITIATED",
    uuid: uuidv4(),
  };
  note.pushType = "voip";
  note.topic = "com.videosdk.live.CallKitSwiftUI.voip"; // your app's bundle ID with a ".voip" suffix
  apnProvider.send(note, deviceToken).then((result) =&gt; {
    if (result.failed &amp;&amp; result.failed.length &gt; 0) {
      console.log("RESULT", result.failed[0].response);
      res.status(400).send(result.failed[0].response);
    } else {
      res.status(200).send(result);
    }
  });
});

app.post("/update-call", (req, res) =&gt; {
  const { callerInfo, type } = req.body;
  const { name, fcmToken } = callerInfo;

  const message = {
    notification: {
      title: name,
      body: "Hello VideoSDK",
    },
    data: {
      type,
    },
    token: fcmToken,
    apns: {
      payload: {
        aps: {
          sound: "default",
          badge: 1,
        },
      },
    },
  };

  admin
    .messaging()
    .send(message)
    .then((response) =&gt; {
      res.status(200).send(response);
      console.log("Successfully sent message:", response);
    })
    .catch((error) =&gt; {
      res.status(400).send(error);
      console.log("Error sending message:", error);
    });
});

// Start the server
app.listen(port, () =&gt; {
  console.log(`Server running at http://localhost:${port}/`);
});</code></pre><ul><li>Generate and download a new service account key from your Firebase console and replace the <code>serviceAccountKey.json</code> file with it.</li><li>Use the Auth key you generated from your Apple Developer account.</li><li>Replace <code>KEY_ID</code> and <code>TEAM_ID</code> with the key ID and team ID from your Apple Developer account.</li></ul><hr><h2 id="acceptreject-incoming-call">Accept/Reject Incoming Call</h2><p>Once the call is initiated and the calling view is displayed, we need logic to handle call acceptance or rejection by the remote user. Depending on their decision, the application should either navigate to the shared meeting screen or end the call.<br/>We now need to adjust some CallKit functionality and the navigation based on call-status changes. Before that, we define the <code>UpdateCallAPI</code> method in <code>UserData.swift</code>.</p><pre><code class="language-swift">// MARK: - API call to update the call status
    public func UpdateCallAPI(callType: String) {
        let storedCallerID = OtherUserIDManager.SharedOtherUID.OtherUIDOf
        fetchCalleeInfo(callerID: storedCallerID ?? "null") { calleeInfo in
            guard let calleeInfo = calleeInfo else {
                print("No callee info found")
                return
            }
            // NOTE: replace this local development address with your server's URL
            guard let url = URL(string: "http://172.20.10.3:3000/update-call") else {
                print("Invalid URL")
                return
            }
            var request = URLRequest(url: url)
            request.httpMethod = "POST"
            request.setValue("application/json", forHTTPHeaderField: "Content-Type")

            let callerInfoDict: [String: Any] = [
                "id": calleeInfo.id,
                "name": calleeInfo.name,
                "callerID": calleeInfo.callerID,
                "deviceToken": calleeInfo.deviceToken,
                "fcmToken": calleeInfo.fcmToken
            ]
            let body: [String: Any] = ["callerInfo": callerInfoDict, "type": callType]
            do {
                request.httpBody = try JSONSerialization.data(withJSONObject: body, options: [])
            } catch {
                print("Error encoding request body: \(error)")
                return
            }
            URLSession.shared.dataTask(with: request) { data, response, error in
                if let error = error {
                    print("API call error: \(error)")
                    return
                }
                guard let httpResponse = response as? HTTPURLResponse, httpResponse.statusCode == 200 else {
                    print("Invalid response")
                    return
                }

                if let data = data {
                    print("Response data: \(String(data: data, encoding: .utf8) ?? "")")
                }
            }.resume()
        }
    }</code></pre><p>We call the <code>UpdateCallAPI</code> method from the CallKit delegate methods in <code>CallKitManager.swift</code>.</p><p><code>CallKitManager.swift</code></p><pre><code class="language-swift">    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        configureAudioSession()
        if let callerID = callerIDs[action.callUUID] {
            print("Establishing call connection with caller ID: \(callerID)")
        }
        NotificationCenter.default.post(name: .callAnswered, object: nil)
        // Update Call API
        UserData.shared.UpdateCallAPI(callType: "ACCEPTED")
        action.fulfill()
    }
    
    func provider(_ provider: CXProvider, perform action: CXEndCallAction) {
        callerIDs.removeValue(forKey: action.callUUID)
        let meetingViewController = MeetingViewController()
        meetingViewController.onMeetingLeft()
        action.fulfill()
        // Update Call API
        UserData.shared.UpdateCallAPI(callType: "REJECTED")
        DispatchQueue.main.async {
            NavigationState.shared.navigateToJoin()
        }
        }</code></pre><ul><li>When a user accepts the call, the <code>NotificationManager</code> sends a notification to the caller’s device. The <code>CallingView</code> observes the <code>NotificationManager</code>; when it detects the notification indicating that the call has been accepted, it triggers the <code>navigateToMeeting</code> method of <code>NavigationState</code>, which automatically transitions to the <code>MeetingView</code>, where the actual video call takes place.</li><li>If a user rejects the call, it triggers the <code>navigateToJoin</code> method of <code>NavigationState</code>, which automatically transitions to the <code>JoinView</code>.</li><li>When the meeting ends, it likewise triggers <code>navigateToJoin</code>, transitioning back to the <code>JoinView</code>.</li><li>Ending the meeting also calls the <code>endCall</code> method of <code>CallKitManager</code> to end the CallKit call.</li></ul><hr><p>With these changes, iOS devices can now receive the call and join the video session. This is what an incoming call looks like on an iOS device.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Untitled-design.png" class="kg-image" alt="Build a VoIP Call App with CallKit in iOS" loading="lazy" width="1920" height="1080"/></figure><p>Here is a video showing the incoming call and the start of a video session.</p>
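<p>The navigation rules above amount to a small mapping from the call-update type (the <code>ACCEPTED</code>/<code>REJECTED</code> strings passed to <code>UpdateCallAPI</code>) to a destination screen. Here is a JavaScript sketch of that mapping; the function name is illustrative:</p><pre><code class="language-js">// Map a call-update type to the screen the app should navigate to.
// "MeetingView" and "JoinView" are the SwiftUI views used in this guide.
function destinationFor(callType) {
  switch (callType) {
    case "ACCEPTED":
      return "MeetingView"; // callee picked up: move both sides into the meeting
    case "REJECTED":
    default:
      return "JoinView"; // rejection, meeting end, or anything else returns to the join screen
  }
}

console.log(destinationFor("ACCEPTED")); // "MeetingView"
console.log(destinationFor("REJECTED")); // "JoinView"</code></pre>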
<!--kg-card-begin: html-->
<video width="900" height="500" controls="">
  <source src="https://cdn.videosdk.live/website-resources/docs-resources/ios-callkit-demo.mp4" type="video/mp4"/>
</video>
<!--kg-card-end: html-->
</hr></hr></hr></hr></hr></hr>]]></content:encoded></item><item><title><![CDATA[Product Updates : October 2024]]></title><description><![CDATA[Explore the latest product updates from October 2024 at VideoSDK Live's blog. Stay informed on cutting-edge features shaping the future of video technology.]]></description><link>https://www.videosdk.live/blog/product-updates-october-2024</link><guid isPermaLink="false">672da8e1c646c7d24e60ecc6</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 08 Nov 2024 06:20:54 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/11/October-2024.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/11/October-2024.png" alt="Product Updates : October 2024"/><p>We are thrilled to share some incredible updates from October. This month has been all about pushing boundaries and bringing new tools to life that empower you to create even more impactful applications.<br/></p><h2 id="our-character-sdk-wins-big-on-product-hunt-%F0%9F%9A%80"><strong>Our Character SDK Wins Big on Product Hunt! 🚀</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Character-SDK.png" class="kg-image" alt="Product Updates : October 2024" loading="lazy" width="1280" height="720"/></figure><p>We’re excited to announce that our <strong>Character SDK</strong> made a huge splash on Product Hunt! Here’s how it performed</p><ul><li><strong>#1 Product of the day</strong></li><li><strong>#1 Product of the Week</strong> in the Artificial Intelligence category</li><li><strong>#2 Product of the Month</strong> in the Developer Tools category</li><li><strong>#3 Product of the Month</strong> overall</li></ul><p>We couldn’t have done it without your incredible support! 
Character SDK enables you to build multimodal AI companions that interact through speech, vision, and actions, opening up endless possibilities. Think AI-driven virtual friends, video KYC agents, sales reps, interviewers, and more—all personalized and ready to assist.<br><br>If you haven’t joined the waitlist yet, this is your chance! Be among the first to experience Character SDK’s capabilities by clicking the link below.<br><br>Join the waitlist: <a href="https://www.videosdk.live/character-sdk">https://www.videosdk.live/character-sdk</a></br></br></br></br></p><p><a href="https://character-demo.videosdk.live/" rel="noreferrer">Try the Demo Now! ↗️</a></p><p/><h2 id="whiteboard-feature-real-time-collaboration-made-easy"><strong>Whiteboard Feature: Real-Time Collaboration Made Easy </strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/11/Whiteboard.png" class="kg-image" alt="Product Updates : October 2024" loading="lazy" width="1280" height="720"/></figure><p>Alongside the Character SDK, we’ve introduced our new <strong>Whiteboard feature</strong>, available across major platforms like <strong>JavaScript, React, iOS, Android, Flutter, and React Native</strong>. 
Perfect for apps that focus on real-time collaboration, this feature allows you to:<br/></p><ul><li>Start and stop the whiteboard for all participants in a session</li><li>Generate a URL for easy embedding, making it simple to integrate into any app</li></ul><p>With this feature, you can seamlessly enhance teamwork and engagement in your apps, providing users with the power to collaborate visually in real time.</p><p>🔗 <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/whiteboard" rel="noreferrer">Whiteboard in Android SDK</a><br/>🔗 <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/whiteboard" rel="noreferrer">Whiteboard in iOS SDK</a><br/>🔗 <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/whiteboard" rel="noreferrer">Whiteboard in React Native SDK</a><br/>🔗 <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/whiteboard" rel="noreferrer">Whiteboard in React SDK</a><br/>🔗 <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/whiteboard" rel="noreferrer">Whiteboard in JS SDK</a><br/>🔗 <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/whiteboard" rel="noreferrer">Whiteboard in Flutter SDK</a></p><p/><h2 id="general-updates"><strong>General Updates</strong></h2><ul><li><strong>New Room Stats Added</strong>: You can now track pause_count, pause_duration, freeze_count, and total_freeze_duration for remote participant video in the React &amp; JavaScript SDKs.</li><li><strong>Reconnection Issue Fix</strong>: Fixed a reconnection issue to ensure stable connections during session rejoining in the iOS SDK.</li></ul><hr><h2 id="join-us-on-this-journey"><strong>Join Us on This Journey</strong></h2><p>Thanks for catching up with our
October updates! We’re just getting started and can’t wait to see how you leverage these tools to create amazing experiences. Make sure to join the waitlist for Character SDK and let’s build something extraordinary together.</p><p>If you have any questions, suggestions, or issues, please don't hesitate to contact our support team.</p><p>➡️ New to VideoSDK? <a href="https://www.videosdk.live/signup">Sign up now</a> and get <strong><em>10,000 free minutes</em></strong> to start building amazing audio &amp; video experiences!</p><p>Stay tuned for more updates next month—see you then!<br/>VideoSDK Team</p></hr>]]></content:encoded></item><item><title><![CDATA[Quality Comparison: VideoSDK vs Vonage in Web 1:1 Video Calls]]></title><description><![CDATA[Through real-life comparisons of VideoSDK vs Vonage under different network conditions, VideoSDK consistently outperformed in video quality, lower latency, and packet loss, making it the superior choice for reliable 1:1 video calls.]]></description><link>https://www.videosdk.live/blog/quality-comparison-videosdk-vs-vonage-in-web-1-1-video-calls</link><guid isPermaLink="false">6711f030c646c7d24e60e6ab</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 18 Oct 2024 05:22:46 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/10/VideoSDK-vs-Vonage-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/10/VideoSDK-vs-Vonage-1.png" alt="Quality Comparison: VideoSDK vs Vonage in Web 1:1 Video Calls"/><p>In the highly competitive video conferencing space, delivering top-notch video quality and low latency is paramount for user satisfaction. As more developers and businesses integrate video conferencing solutions into their platforms, choosing the right infrastructure can be challenging.
VideoSDK and Vonage both promise high-quality video, but how do they truly compare in real-world one-on-one video calls?</p><p>In this blog, we will explore a head-to-head comparison between VideoSDK and Vonage, focusing on critical aspects such as video quality, latency, bandwidth efficiency, and overall call performance to help you make an informed choice.</p><h2 id="quality-is-the-core">Quality is the Core!</h2><p>When it comes to video calls, quality is everything. You can throw in a bunch of cool features or fancy tech, but if quality is lacking, it's a deal-breaker. No arguments there! We understand this, and that's why in this blog, we're diving into the nitty-gritty of quality.</p><p>Numerous factors can affect the quality of a video call, the most fundamental being your network connection. Thus, while exploring options for migrating to a different service provider, you must not compromise on quality.</p><p>That's why we're pitting VideoSDK against Vonage in a quality showdown under various network bandwidths. See for yourself how VideoSDK not only keeps up but outperforms Vonage's standards.</p><h2 id="setting-up-the-platform">Setting up the platform</h2><p>Before we jump into the comparison, let's quickly examine the key parameters for our evaluation, including devices, configurations, network throttling, and more.</p><ul><li>We'll be covering standard one-to-one web video calls</li><li>Both the sender and receiver are using MacBook Pro devices.</li><li>To maintain an unbiased assessment, both VideoSDK and Vonage configurations have been left at their defaults.
Their default settings are almost identical.</li><li>Additionally, for measuring network bandwidth, we'll be using <a href="https://fast.com">fast.com</a></li></ul><h2 id="test-scenarios">Test Scenarios</h2><p>Now that we've covered the setup, let's outline the scenarios we'll create to compare call quality between VideoSDK and Vonage.</p><p>First, we'll evaluate metrics under normal conditions to establish a baseline.</p><p>Then we'll throttle network bandwidth on the narrator's side only, simulating conditions at:</p><ul><li>1 Mbps</li><li>500 kbps</li><li>250 kbps</li></ul><h2 id="results">Results</h2><p>Here's a look at the key metrics we tracked under different network conditions, showing how VideoSDK and Vonage handled each challenge. In the videos below, Harshit demonstrates how VideoSDK and Vonage performed.</p>
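Packet-loss figures like the ones in these benchmarks are typically derived from cumulative RTP counters. As a rough, hedged illustration, here is how the percentage can be computed; the field names mirror the W3C WebRTC inbound-rtp stats report and are an assumption here, not VideoSDK's or Vonage's API:

```javascript
// Rough sketch: packet-loss percentage from cumulative RTP counters.
// The field names mirror the W3C WebRTC stats (inbound-rtp) report;
// this is an illustration, not VideoSDK's or Vonage's API.
function packetLossPercent({ packetsLost, packetsReceived }) {
  const total = packetsLost + packetsReceived;
  if (total === 0) return 0; // no packets observed yet
  return (packetsLost * 100) / total;
}

// 50 packets lost out of 1000 observed → 5% loss
console.log(packetLossPercent({ packetsLost: 50, packetsReceived: 950 }));
```

In a real measurement loop you would sample these counters periodically and compute the loss over each interval rather than over the whole call.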
<!--kg-card-begin: html-->
<div>
  <video src="https://cdn.videosdk.live/website-resources/comparison-pages/videosdk-benchmarking.mp4" controls="true" width="100%">
  </video>
</div>

<div>
  <video src="https://cdn.videosdk.live/website-resources/comparison-pages/vonage-benchmarking.mp4" controls="true" width="100%">
  </video>
</div>
<!--kg-card-end: html-->
<p>Let's break down the results, exploring how each handles latency and packet loss under different network bandwidths.</p><h3 id="frames-per-second-fps">Frames Per Second (FPS)</h3><p>As shown in the video, VideoSDK outshines Vonage by delivering smooth video at a consistently high FPS across different bandwidths, while Vonage frequently experiences interruptions.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/image-2.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Vonage in Web 1:1 Video Calls" loading="lazy" width="1280" height="638"/></figure><h3 id="packet-loss">Packet Loss</h3><p>Once again, VideoSDK consistently delivers superior video quality with lower packet loss, while Vonage's video quality tends to degrade significantly under similar conditions.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/image-3.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Vonage in Web 1:1 Video Calls" loading="lazy" width="1280" height="638"/></figure><h2 id="conclusion">Conclusion</h2><p>VideoSDK consistently maintained high-quality video, achieved lower latency, and handled packet loss more efficiently, providing a smoother and more reliable experience.
Vonage, on the other hand, struggled to maintain the same level of performance, with noticeable drops in video quality and higher latency during challenging network scenarios.</p><p>Thus, it is evident from both the metrics and real-world testing that <strong>VideoSDK</strong> is the clear winner, offering superior performance for 1:1 video calls.</p>]]></content:encoded></item><item><title><![CDATA[Quality Comparison: VideoSDK vs Agora in Web 1:1 Video Calls]]></title><description><![CDATA[Through real-life comparisons of VideoSDK vs Agora under different network conditions, VideoSDK consistently outperformed in video quality, lower latency, and packet loss, making it the superior choice for reliable 1:1 video calls.]]></description><link>https://www.videosdk.live/blog/quality-comparison-videosdk-vs-agora-in-web-1-1-video-calls</link><guid isPermaLink="false">67111a75c646c7d24e60e620</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 17 Oct 2024 14:35:07 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/10/VideoSDK-vs-Agora-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/10/VideoSDK-vs-Agora-1.png" alt="Quality Comparison: VideoSDK vs Agora in Web 1:1 Video Calls"/><p>In the highly competitive video conferencing space, delivering top-notch video quality and low latency is paramount for user satisfaction. As more developers and businesses integrate video conferencing solutions into their platforms, choosing the right infrastructure can be challenging. 
VideoSDK and Agora both promise high-quality video, but how do they truly compare in real-world one-on-one video calls?</p><p>In this blog, we will explore a head-to-head comparison between VideoSDK and Agora, focusing on critical aspects such as video quality, latency, bandwidth efficiency, and overall call performance to help you make an informed choice.</p><h2 id="quality-is-the-core">Quality is the Core!</h2><p>When it comes to video calls, quality is everything. You can throw in a bunch of cool features or fancy tech, but if quality is lacking, it's a deal-breaker. No arguments there! We understand this, and that's why in this blog, we're diving into the nitty-gritty of quality.</p><p>Numerous factors can affect the quality of a video call, the most fundamental being your network connection. Thus, while exploring options for migrating to a different service provider, you must not compromise on quality.</p><p>That's why we're pitting VideoSDK against Agora in a quality showdown under various network bandwidths. See for yourself how VideoSDK not only keeps up but outperforms Agora's standards.</p><h2 id="setting-up-the-platform">Setting up the platform</h2><p>Before we jump into the comparison, let's quickly examine the key parameters for our evaluation, including devices, configurations, network throttling, and more.</p><ul><li>We'll be covering standard one-to-one web video calls</li><li>Both the sender and receiver are using MacBook Pro devices.</li><li>To maintain an unbiased assessment, both VideoSDK and Agora configurations have been left at their defaults.
Their default settings are almost identical.</li><li>Additionally, for measuring network bandwidth, we'll be using <a href="https://fast.com">fast.com</a></li></ul><h2 id="test-scenarios">Test Scenarios</h2><p>Now that we've covered the setup, let's outline the scenarios we'll create to compare call quality between VideoSDK and Agora.</p><p>First, we'll evaluate metrics under normal conditions to establish a baseline.</p><p>Then we'll throttle network bandwidth on the narrator's side only, simulating conditions at:</p><ul><li>1 Mbps</li><li>500 kbps</li><li>250 kbps</li></ul><h2 id="results">Results</h2><p>Here's a look at the key metrics we tracked under different network conditions, showing how VideoSDK and Agora handled each challenge. In the videos below, Harshit demonstrates how VideoSDK and Agora performed.</p>
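Latency numbers in benchmarks like this are usually estimated from round-trip-time measurements. As a rough, hedged sketch (the RTT source named in the comment follows standard WebRTC candidate-pair stats and is an assumption, not Agora's or VideoSDK's API):

```javascript
// Rough sketch: average one-way latency (ms) from round-trip-time samples
// in seconds, as exposed by WebRTC candidate-pair stats
// (currentRoundTripTime). Illustrative only, not Agora's or VideoSDK's API.
function avgOneWayLatencyMs(rttSamplesSec) {
  if (rttSamplesSec.length === 0) return 0;
  const avgRtt =
    rttSamplesSec.reduce((sum, rtt) => sum + rtt, 0) / rttSamplesSec.length;
  return (avgRtt / 2) * 1000; // one-way latency ≈ RTT / 2
}

// Three RTT samples averaging 100 ms → ~50 ms one-way latency
console.log(avgOneWayLatencyMs([0.08, 0.12, 0.1]));
```

The RTT/2 approximation assumes a symmetric network path; asymmetric routes can skew the one-way estimate.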
<!--kg-card-begin: html-->
<div>
  <video src="https://cdn.videosdk.live/website-resources/comparison-pages/videosdk-benchmarking.mp4" controls="true" width="100%">
  </video>
</div>

<div>
  <video src="https://cdn.videosdk.live/website-resources/comparison-pages/agora-benchmarking.mp4" controls="true" width="100%">
  </video>
</div>
<!--kg-card-end: html-->
<p>Let's break down the results, exploring how each handles latency and packet loss under different network bandwidths.</p><h3 id="latency">Latency</h3><p>As shown in the video, VideoSDK outshines Agora in managing latency by a considerable margin at different bandwidths.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/image.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Agora in Web 1:1 Video Calls" loading="lazy" width="1280" height="638"/></figure><h3 id="packet-loss">Packet Loss</h3><p>Once again, VideoSDK consistently delivers superior video quality with lower packet loss, while Agora's video quality tends to degrade significantly under similar conditions.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/image-1.png" class="kg-image" alt="Quality Comparison: VideoSDK vs Agora in Web 1:1 Video Calls" loading="lazy" width="1280" height="638"/></figure><h2 id="conclusion">Conclusion</h2><p>VideoSDK consistently maintained high-quality video, achieved lower latency, and handled packet loss more efficiently, providing a smoother and more reliable experience. Agora, on the other hand, struggled to maintain the same level of performance, with noticeable drops in video quality and higher latency during challenging network scenarios.</p><p>Thus, it is evident from both the metrics and real-world testing that <strong>VideoSDK</strong> is the clear winner, offering superior performance for 1:1 video calls.</p>]]></content:encoded></item><item><title><![CDATA[What is WebRTC Signaling?
How Does WebRTC Signaling Work?]]></title><description><![CDATA[WebRTC signaling is the process of exchanging information between peers to establish and manage real-time communication sessions, covering details like session descriptions and network configurations.]]></description><link>https://www.videosdk.live/blog/what-is-webrtc-signaling</link><guid isPermaLink="false">65b221712a88c204ca9ce4db</guid><category><![CDATA[WebRTC]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Thu, 17 Oct 2024 11:23:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/image--7-.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-webrtc-signaling">What is WebRTC Signaling?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/image--7-.png" alt="What is WebRTC Signaling? How Does WebRTC Signaling Work?"/><p>WebRTC signaling involves the exchange of metadata between devices to coordinate and manage the communication session. This metadata encompasses a variety of information, such as session descriptions, network details, and security parameters.</p><p>In WebRTC (<a href="https://www.videosdk.live/blog/webrtc">Web Real-Time Communication</a>), this metadata is exchanged during the signaling process: signaling is how peers share the information needed to establish and manage a communication session. Let's break down these metadata components:</p><ol><li><strong>Session Descriptions: </strong>WebRTC uses the Session Description Protocol (SDP) to describe the media capabilities, such as audio and video codecs, supported by each peer.</li><li><strong>Network Details: </strong>Network details include the exchange of candidates for ICE (Interactive Connectivity Establishment) negotiation.
WebRTC utilizes ICE to determine the most efficient and reliable network path for communication between peers.</li><li><strong>Security Parameters:  </strong>WebRTC signaling also addresses security concerns. It may involve negotiating and exchanging cryptographic parameters for securing the communication channel.</li></ol><p>During a WebRTC session, signaling occurs in multiple phases. Initially, devices exchange Session Description Protocol (SDP) messages, outlining their capabilities and preferences. Subsequently, Interactive Connectivity Establishment (ICE) comes into play, addressing network-related challenges and determining the best communication path.</p><h2 id="components-of-webrtc-signaling">Components of WebRTC Signaling</h2><p>WebRTC signaling comprises several key components that collaborate to ensure a seamless communication experience.</p><h3 id="session-description-protocol-sdp">Session Description Protocol (SDP)</h3><p>SDP is a format that describes multimedia communication sessions. In WebRTC signaling, SDP messages convey information about the codecs, media types, and other parameters that each device supports.</p><h3 id="interactive-connectivity-establishment-ice">Interactive Connectivity Establishment (ICE)</h3><p>ICE is responsible for overcoming network hurdles during communication. It facilitates the discovery of the most efficient communication path, addressing issues like firewalls, NAT traversal, and dynamic IP assignments.</p><h3 id="signaling-servers">Signaling Servers</h3><p>Signaling servers act as intermediaries between peers, facilitating the exchange of SDP and ICE information. 
They play a crucial role in the negotiation process and ensure that both devices can establish a connection.</p><h2 id="interplay-between-components-during-a-communication-session">Interplay Between Components During a Communication Session</h2><p>The interplay between SDP, ICE, and signaling servers is intricate but crucial for the success of a WebRTC session. When two devices wish to communicate, they exchange SDP messages through a signaling server. The SDP messages detail each device's capabilities and preferences.</p><p>Meanwhile, ICE actively explores the network environment to identify the optimal path for communication. It considers factors such as firewall configurations and NAT traversal, ensuring that the chosen path is both efficient and secure. The signaling server assists in coordinating this process, helping the devices reach a consensus on the best communication parameters.</p><h2 id="why-are-signaling-servers-for-webrtc-needed">Why Are Signaling Servers for WebRTC Needed?</h2><h3 id="necessity-of-signaling-servers">Necessity of Signaling Servers</h3><p>Direct peer-to-peer communication faces challenges that necessitate the involvement of signaling servers. These challenges include the dynamic nature of networks, firewalls blocking direct communication paths, and the need for negotiation between devices with varying capabilities.</p><h3 id="communication-establishment">Communication Establishment</h3><p>The process of signaling in WebRTC is instrumental in initiating a communication session. When devices connect, signaling servers negotiate parameters such as video resolution, audio codecs, and encryption methods. This negotiation ensures that both devices can communicate effectively by aligning their capabilities.</p><h3 id="handling-network-dynamics">Handling Network Dynamics</h3><p>WebRTC signaling servers play a vital role in adapting to changes in the network environment. 
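The intermediary role that signaling servers play can be sketched as a tiny in-memory hub that relays SDP offers, answers, and ICE candidates between peers. This is illustrative only: real deployments relay over a transport such as WebSocket, and the message shapes below are assumptions, not any particular server's protocol.

```javascript
// Minimal in-memory signaling hub: peers register a handler, and the hub
// relays SDP offers/answers and ICE candidates between them. In production
// this exchange runs over a transport such as WebSocket; the message
// shapes here are illustrative only.
class SignalingHub {
  constructor() { this.peers = new Map(); }
  register(id, onMessage) { this.peers.set(id, onMessage); }
  send(from, to, message) {
    const handler = this.peers.get(to);
    if (!handler) throw new Error(`unknown peer: ${to}`);
    handler({ from, ...message });
  }
}

// Usage: Alice sends Bob an SDP offer; Bob replies with an answer.
const hub = new SignalingHub();
const inbox = [];
hub.register("bob", (msg) => inbox.push(msg));
hub.register("alice", (msg) => inbox.push(msg));
hub.send("alice", "bob", { type: "offer", sdp: "v=0 ..." });
hub.send("bob", "alice", { type: "answer", sdp: "v=0 ..." });
```

Once the offer/answer exchange completes and ICE candidates have been relayed, media flows directly between the peers; the hub is no longer on the media path.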
Networks are dynamic, with devices frequently changing IP addresses or encountering firewalls. Signaling servers assist in navigating these challenges, enabling continuous communication even in the face of network fluctuations.</p><h2 id="enhancing-developer-experience-in-creating-peer-to-peer-websites">Enhancing Developer Experience in Creating Peer-to-Peer Websites</h2><h3 id="challenges-in-webrtc-implementation">Challenges in WebRTC Implementation</h3><p>Developers often face challenges when implementing WebRTC for peer-to-peer communication. These challenges may include complexities in negotiating communication parameters, addressing network-related issues, and ensuring a smooth user experience.</p><h3 id="introducing-videosdk-as-a-solution">Introducing VideoSDK as a Solution</h3><p>To streamline the development process and address these challenges, developers can turn to <a href="https://www.videosdk.live/">VideoSDK</a>. VideoSDK is a comprehensive live video infrastructure for developers, offering <a href="https://www.videosdk.live/audio-video-conferencing">real-time audio and video SDKs</a>. It provides complete flexibility, scalability, and control, making it effortless to integrate audio-video conferencing and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into web and mobile apps.</p><h3 id="seamless-integration-with-videosdk">Seamless Integration with VideoSDK</h3><p>VideoSDK simplifies the integration process for developers. Here's a step-by-step guide on how VideoSDK can be incorporated into projects:</p><ul><li><strong>SDK Integration:</strong> Begin by integrating VideoSDK's SDKs into your application. The SDKs are designed to seamlessly work with various platforms, providing a consistent experience across different devices.</li><li><strong>Configuration:</strong> Customize the SDK according to your specific requirements. 
VideoSDK offers flexibility in configuring parameters such as video quality, audio settings, and security measures.</li><li><strong>Testing and Debugging:</strong> VideoSDK provides robust testing and debugging tools, along with <a href="https://www.opkey.com/ai-test-automation" rel="noreferrer">AI QA automation</a>, allowing developers to ensure flawless integration. This step ensures a smooth user experience during real-time communication sessions.</li><li><strong>Scalability:</strong> Leverage VideoSDK's scalability features to accommodate varying numbers of users. Whether your application serves a handful of users or a large audience, VideoSDK can scale to meet the demands of your project.</li></ul><p>By opting for VideoSDK, developers can overcome the challenges associated with WebRTC implementation, creating a more efficient and user-friendly peer-to-peer communication experience.</p><h2 id="advantages-of-videosdk-in-peer-to-peer-communication">Advantages of VideoSDK in Peer-to-Peer Communication</h2><h4 id="performance-improvements">Performance Improvements</h4><p>VideoSDK brings notable improvements to real-time communication performance. The SDKs are optimized to minimize latency, ensuring that audio and video data is transmitted with minimal delay. This results in a more responsive and immersive communication experience for users.</p><p>Additionally, VideoSDK addresses quality concerns by implementing advanced codecs and adaptive bitrate streaming. This ensures that the communication quality remains consistently high, even in varying network conditions.</p><h4 id="scalability-and-flexibility">Scalability and Flexibility</h4><p>One of the standout features of VideoSDK is its scalability. Whether your application caters to a small team or a global audience, VideoSDK can scale to meet the demand. 
This scalability is essential for applications with dynamic user bases, providing a reliable solution for projects of any size.</p><p>Furthermore, VideoSDK offers flexibility in terms of customization. Developers can tailor the SDK to suit the unique requirements of their projects, adjusting settings, layouts, and features as needed. This adaptability ensures that VideoSDK can seamlessly integrate into a diverse range of applications.</p><p>WebRTC signaling is a crucial component in establishing and maintaining peer-to-peer communication channels for real-time audio and video interactions. The intricacies of SDP, ICE, and signaling servers play a pivotal role in overcoming challenges related to network dynamics and device capabilities.</p><p>In a digital landscape where effective communication is paramount, VideoSDK stands out as a reliable partner for developers aiming to deliver top-tier real-time audio and video experiences in their web and mobile applications.</p><p/>]]></content:encoded></item><item><title><![CDATA[What is RTMP-In and RTMP-Out?]]></title><description><![CDATA[This comprehensive guide dives into the world of RTMP streaming, explaining what it is, how it works, and its key functionalities. ]]></description><link>https://www.videosdk.live/blog/what-is-rtmp-in-and-rtmp-out</link><guid isPermaLink="false">660691fb2a88c204ca9cf689</guid><category><![CDATA[RTMP]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 17 Oct 2024 06:09:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-in-RTMP-out.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-in-RTMP-out.png" alt="What is RTMP-In and RTMP-Out?"/><p>In the age of social media and cord-cutting, live streaming has evolved into a crucial tool for entertainment, education, and communication. 
Viewers demand real-time connection and involvement in everything from riveting esports events to live product debuts and informative conferences.</p><p>Whether you're a streamer new to live broadcasting or a curious viewer looking to learn more about the technical components, this article from <a href="https://www.videosdk.live/">VideoSDK</a> gives a simple yet in-depth explanation of RTMP, as well as a quick overview of how to get started with this important live-streaming technology.</p><h2 id="what-is-rtmp-streaming"><strong>What is RTMP Streaming?</strong></h2><p>RTMP stands for Real-Time Messaging Protocol. It is a specialized communication protocol that allows for the low-latency transmission of live video and audio data across the Internet. Unlike traditional web protocols like HTTP, which focus on delivering entire files, RTMP favors real-time data packet transmission, allowing for seamless playback without buffering.</p><h3 id="history-of-rtmp">History of RTMP</h3><p>RTMP was developed by Macromedia (later acquired by Adobe) and was designed to interact with Adobe Flash Player, the dominant multimedia platform in the early days of the Internet. Flash Player's ability to handle audio, video, and interaction made it an ideal choice for live streaming, and RTMP became the de facto standard for sending live video content.</p><p>While Flash Player's popularity has dropped, RTMP remains a viable and extensively used protocol for live streaming due to its primary qualities of low latency and stability. Nowadays, it serves as a link between encoder software (which collects and processes video) and streaming platforms or media servers.
Encoders employ RTMP to transfer encoded video data to the platform for distribution and playback on viewers' devices.</p><h2 id="understanding-rtmp-in-and-rtmp-out"><strong>Understanding RTMP In and RTMP Out</strong></h2><p>The terms RTMP In and RTMP Out refer to the video stream's direction relative to a particular device or platform.</p><h3 id="rtmp-in-or-ingest">RTMP In (or Ingest)</h3><p>This is the video stream's receiving end. When a platform, such as YouTube or Facebook Live, accepts an RTMP stream, it operates as an RTMP In destination. These platforms typically provide an RTMP server address and a stream key, which you enter into your encoder software to create the connection. Their servers are specifically built to receive and handle incoming RTMP streams, ultimately making the video available to viewers.</p><h3 id="rtmp-out-or-output">RTMP Out (or Output)</h3><p>This refers to the video stream's transmitting end. RTMP Out describes an encoder program that captures video from a camera or other source and sends it to a streaming platform over RTMP. Popular encoder tools such as OBS Studio and XSplit support RTMP output, allowing users to broadcast live material to multiple platforms.</p><h3 id="analogy">Analogy</h3><p>Imagine a live sports event being aired. The video feed is captured by the stadium's outdoor broadcast van and sent via satellite. This vehicle serves as the RTMP Out source, transmitting the video feed. The satellite uplink facility receives the signal and relays it to broadcasters. This facility serves as the RTMP In destination.</p><h2 id="how-does-rtmp-streaming-work">How does RTMP streaming work?</h2><p>Here's an overview of the standard RTMP streaming workflow:</p><h3 id="heres-an-overview-of-the-standard-rtmp-streaming-workflowencoding">Encoding</h3><p>A live video source (camera or gameplay capture) is loaded into an encoder program.
The encoder converts the video and audio data into a streaming-compatible format (for example, H.264 for video and AAC for audio).</p><h3 id="rtmp-out">RTMP Out</h3><p>The encoded data stream is transferred from the encoder to a streaming platform or server that accepts RTMP ingestion. This step is known as RTMP Out. Popular streaming platforms, such as YouTube Live and Facebook Live, offer RTMP ingest options, allowing you to push your live feed directly.</p><h3 id="server-processing">Server Processing</h3><p>The streaming platform receives the RTMP stream and performs several operations. It may transcode the video into multiple bitrates to serve viewers with differing internet connections, and it may add other elements, such as subtitles or overlays, before distribution.</p><h3 id="delivery-and-playback">Delivery and Playback</h3><p>The processed stream is sent to viewers using more broadly supported protocols such as <a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HTTP Live Streaming</a> (HLS) or Dynamic Adaptive Streaming over HTTP (DASH), allowing playback on a variety of devices and browsers without requiring RTMP support.</p><h2 id="benefits-of-using-rtmp-streaming">Benefits of Using RTMP Streaming</h2><p>There are several advantages to using RTMP for live streaming:</p><h3 id="low-latency">Low Latency</h3><p>One of the biggest strengths of RTMP is its focus on low latency. Latency refers to the delay between a live event happening and viewers seeing it on their screens. Unlike protocols like HTTP that prioritize data integrity over speed, RTMP prioritizes real-time delivery, keeping that delay to a minimum.</p><h3 id="reliability">Reliability</h3><p>RTMP prioritizes reliable data transmission. It uses techniques like error correction and congestion control to maintain a persistent connection between the encoder and the server.
This reduces the risk of dropped frames or buffering issues, leading to a smoother, uninterrupted viewing experience.</p><h3 id="compatibility">Compatibility</h3><p>As an open standard, RTMP is widely supported by various encoder applications, streaming platforms, and media servers. This flexibility allows streamers to choose the tools and platforms that best suit their needs without worrying about compatibility issues.</p><h3 id="security">Security</h3><p>While not inherently the most secure protocol, RTMP can be configured with authentication and encryption to protect live streams from unauthorized access. This is important for sensitive content or private events.</p><h2 id="the-future-of-rtmp">The Future of RTMP</h2><p>While other protocols such as HLS (HTTP Live Streaming) are gaining popularity, RTMP is expected to stay important in the live streaming space. Its simplicity, dependability, and low-latency capabilities continue to make it a valuable option for a variety of streaming applications. As live streaming technology advances, we should expect RTMP to adapt and interoperate with newer protocols to ensure the seamless and efficient delivery of live video content.<br/></p><h2 id="getting-started-with-rtmp">Getting Started with RTMP</h2><p>To begin using RTMP streaming, you will need two things:</p><ul><li>An encoder</li><li>A streaming platform</li></ul><p>The encoder, whether software or hardware, captures video and audio and transforms them into a streaming format. The streaming platform accepts and distributes your stream to viewers; notable examples include YouTube Live, Twitch, and Facebook Live. To configure your encoder, enter the RTMP server URL and stream key provided by your platform into the encoder's settings. Once set up, you can start the stream, and the encoder will send it to the RTMP server for distribution to viewers.
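The configuration step above amounts to joining the server URL and the stream key into a single ingest target. A minimal sketch, assuming placeholder values (the URL and key below are not real credentials):

```javascript
// Build the full RTMP ingest target from the server URL and stream key
// supplied by your streaming platform. Values below are placeholders.
function rtmpIngestUrl(serverUrl, streamKey) {
  // Drop any trailing slash before appending the key
  return `${serverUrl.replace(/\/$/, "")}/${streamKey}`;
}

const target = rtmpIngestUrl("rtmp://a.rtmp.youtube.com/live2", "abcd-1234");
console.log(target); // the encoder pushes the stream to this address
```

An encoder is then pointed at this combined address; most tools accept it either as one URL or as separate server/key fields.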
</p><p>For more thorough instructions, see the resources below.</p><h3 id="resources">Resources</h3><ul><li><a href="https://obsproject.com/wiki/OBS-Studio-Quickstart">OBS Studio Setup Guide</a></li><li><a href="https://support.google.com/youtube/answer/2474026?hl=en">YouTube Live Streaming Guide</a></li></ul><h2 id="conclusion"><strong>Conclusion</strong></h2><p>RTMP is a foundational technology that continues to support a large percentage of live streaming today. Its low latency, dependability, and interoperability make it an invaluable resource for broadcasters, gamers, educators, and anybody else wishing to share live video experiences. </p><p>Whether you're an experienced streamer or just getting started, knowing RTMP In and RTMP Out will provide you with a good basis for exploring the world of live streaming. As technology advances, RTMP will most certainly continue to play a role alongside newer protocols, delivering seamless and compelling live video experiences for viewers globally.</p>]]></content:encoded></item><item><title><![CDATA[Top 10 Zoom Video SDK Alternatives in 2026]]></title><description><![CDATA[Discover an impressive alternative to Zoom Video SDK that is set to transform your online experience. 
Unleash your full potential and seize the opportunity for success starting today.]]></description><link>https://www.videosdk.live/blog/zoom-video-sdk-alternative</link><guid isPermaLink="false">64a68cfd5badc3b21a5896ab</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 16 Oct 2024 14:29:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-alternative.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-alternative.jpg" alt="Top 10 Zoom Video SDK Alternatives in 2026"/><p>Looking for an <a href="https://www.videosdk.live/alternative/zoom-vs-videosdk" rel="noreferrer"><strong>Alternative to Zoom</strong></a> that seamlessly integrates real-time video into your application? It's highly likely you've come across Zoom Video SDK - a platform established in 2011 that provides mobile and web applications with broadcast, voice, and video call capabilities through its software development kit (SDK).</p><p>P.S.: If you happen to be a Zoom Video SDK customer, I urge you to continue reading to discover the possibilities you might be missing out on by staying with the platform.</p><h2 id="explore-zoom-alternatives-for-video-conferencing">Explore Zoom Alternatives for Video conferencing</h2>
<p><a href="https://www.videosdk.live/blog/zoom-video-sdk-competitors">Zoom Video SDK</a> is a user-friendly video calling SDK designed for business meetings. While it has essential features and decent audio/video quality, there are limitations to consider. It's primarily focused on business communication, lacks interactivity, and may have audio/video issues with more participants. The SDK is resource-intensive and support can be lacking and expensive. Exploring alternatives is recommended for a better experience.</p><p>Here are the <strong>top 10 alternatives to Zoom Video SDK</strong>: VideoSDK, Twilio Video, Agora, Jitsi, ApiRTC, MirrorFly, EnableX, Whereby, AWS Chime, and SignalWire. These alternatives offer various features, pricing options, and more. You can carefully consider these options based on your specific needs.</p><blockquote>
<h2 id="top-10-zoom-alternatives-for-2024">Top 10 Zoom Alternatives for 2026</h2>
<ul>
<li>VideoSDK</li>
<li>Twilio Video</li>
<li>Agora</li>
<li>Jitsi</li>
<li>AWS Chime</li>
<li>ApiRTC</li>
<li>SignalWire</li>
<li>MirrorFly</li>
<li>Whereby</li>
<li>Enablex</li>
</ul>
</blockquote>
<h2 id="1-videosdk-quick-integration-for-custom-video-solutions">1. VideoSDK: Quick Integration for Custom Video Solutions</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-for-Real-time-Communication-Live-Streaming-Video-API-2.jpeg" class="kg-image" alt="Top 10 Zoom Video SDK Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p><a href="https://www.videosdk.live/">VideoSDK</a> provides an API that allows developers to easily add powerful, extensible, scalable, and resilient audio-video features to their apps with just a few lines of code. Add live audio and video experiences to any platform in minutes.</p><p>The key advantage of using VideoSDK is it’s quite easy and quick to integrate, allowing you to focus more on building innovative features to enhance user retention.</p><h3 id="with-videosdk-you-can-expect-the-following">With VideoSDK, you can expect the following</h3>
<ul><li>The Video SDK offers high scalability, <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming" rel="noreferrer">adaptive bitrate</a> technology, customized SDK with UI flexibility, quality recordings, detailed analytics, cross-platform streaming, seamless scaling, and platform support for mobile, web, and desktop. </li><li>It supports up to 300 attendees, offers &lt;99ms latency, and ensures uninterrupted availability. </li><li>With customizable UI and programmable layouts, it delivers immersive audio-video experiences. </li><li>The solution enables high-quality recordings, detailed analytics on participant interactions, and convenient storage options. </li><li>It also allows for seamless scaling and cross-platform streaming to millions of viewers.</li></ul><h3 id="videosdk-pricing">VideoSDK Pricing</h3>
<ul><li>Video SDK offers <a href="https://www.videosdk.live/pricing">$20 free credit</a>, with separate pricing for video and audio calls. </li><li><strong>Video calls</strong> start at <strong>$0.003</strong> per participant per minute, while <strong>audio calls</strong> start at <strong>$0.0006</strong> per participant per minute. </li><li>Additional costs include <strong>$0.015</strong> per minute for <strong>cloud recordings</strong> and <strong>$0.030</strong> per minute for <strong>RTMP output</strong>. </li><li>They provide a <a href="https://www.videosdk.live/pricing#pricingCalc">pricing calculator</a> for cost estimation. </li><li><strong>Free 24/7</strong> <strong>support</strong> is available to <strong>all customers</strong>, offering assistance for basic queries, upcoming events, and technical requirements.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/alternative/zoom-vs-videosdk"><strong>Zoom and VideoSDK</strong></a><strong>.</strong></blockquote>
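Using the per-minute rates quoted above, a back-of-the-envelope cost estimate is straightforward. The sketch below is illustrative only: the rates are the ones listed in this article and may change, so use the official pricing calculator for real quotes.

```python
# Illustrative cost estimate based on the rates quoted in this article:
#   video: $0.003 per participant-minute; cloud recording: $0.015 per minute.
VIDEO_RATE_PER_PARTICIPANT_MIN = 0.003
RECORDING_RATE_PER_MIN = 0.015

def estimate_call_cost(participants: int, minutes: int, record: bool = False) -> float:
    """Rough cost in USD for one video call, optionally cloud-recorded."""
    cost = participants * minutes * VIDEO_RATE_PER_PARTICIPANT_MIN
    if record:
        cost += minutes * RECORDING_RATE_PER_MIN
    return round(cost, 2)

# A recorded 60-minute call with 10 participants:
print(estimate_call_cost(10, 60, record=True))  # → 2.7  (i.e. $2.70)
```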
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</link></meta></meta></head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>
		</div>
		<svg viewBox="0 0 1024 1024" class="absolute left-1/2 top-1/2 -z-10 h-[64rem] w-[64rem] -translate-x-1/2 [mask-image:radial-gradient(closest-side,white,transparent)]" aria-hidden="true">
			
			<defs>
				<radialGradient id="827591b1-ce8c-4110-b064-7cb85a0b1217">
					<stop stop-color="#CB4371"/>
					<stop offset="0.5" stop-color="#AE49B0"/>
					<stop offset="1" stop-color="#493BB9"/>
				</radialGradient>
			</defs>
		</svg>
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-twilio-video-advanced-business-communication">2. Twilio Video: Advanced Business Communication</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Communication-APIs-for-SMS-Voice-Video-Authentication_twilio-1.jpeg" class="kg-image" alt="Top 10 Zoom Video SDK Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Twilio offers APIs for business communication across various channels. They provide SDKs for web, iOS, and Android, but configuring multiple audio and video inputs requires manual code implementation. </p><h3 id="key-points-about-twilio-video">Key points about Twilio Video</h3>
<ul><li>Call insights help track and analyze errors or dropped calls. Twilio supports up to 50 hosts and participants in a call. </li><li>However, they lack plugins for simplified product development, and engineering resources are required to handle coding for potential video call disruptions.</li></ul><h3 id="twilio-video-pricing">Twilio Video pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/twilio-video-alternative"><strong>Twilio</strong></a> offers <a href="https://www.twilio.com/en-us/video/pricing">pricing</a> starting at <strong>$4</strong> per 1,000 minutes, with <strong>extra charges</strong> for <strong>recordings</strong>, <strong>compositions</strong>, and <strong>storage</strong>. </li><li>The <strong>free support plan</strong> includes <strong>API status notifications</strong> and <strong>email support</strong> during business hours. </li><li>Additional services like 24/7 live <strong>chat support</strong>, <strong>support escalation</strong>, and <strong>guaranteed response</strong> times are available at <strong>separate costs</strong>.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-zoom"><strong>Zoom and Twilio</strong></a><strong>.</strong></blockquote><h2 id="3-agora-real-time-video-with-advanced-features">3. Agora: Real-Time Video with Advanced Features</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Agora-Real-Time-Voice-and-Video-Engagement-1.jpeg" class="kg-image" alt="Top 10 Zoom Video SDK Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>Agora is a renowned platform known for its powerful real-time video conferencing capabilities, but there are a few drawbacks to consider. </li><li>Customization options may be limited, occasional performance issues can affect communication quality, and users depend on Agora's servers, impacting accessibility. </li><li>Advanced features like recording, transcription, or specialized capabilities may incur additional costs.</li></ul><h3 id="agora-pricing">Agora pricing</h3>
<ul><li><strong>Video calling</strong> starts at <strong>$3.99</strong> per 1,000 minutes, while <strong>voice calling</strong> starts at <strong>$0.99</strong> per 1,000 minutes. </li><li>While evaluating the <a href="https://www.agora.io/en/pricing/">pricing</a> of <a href="https://www.videosdk.live/blog/agora-alternative"><strong>Agora</strong></a>, it's important to consider these factors and potential expenses.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-zoom"><strong>Zoom and Agora</strong></a><strong>.</strong></blockquote><h2 id="4-jitsi-open-source-video-conferencing">4. Jitsi: Open-Source Video Conferencing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Free-Video-Conferencing-Software-for-Web-Mobile-Jitsi-2.jpeg" class="kg-image" alt="Top 10 Zoom Video SDK Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>Jitsi is an open-source suite for video conferencing, offering Jitsi Meet for web and mobile with features like screen sharing. </li><li><a href="https://www.videosdk.live/blog/jitsi-alternative"><strong>Jitsi</strong></a> is <strong>free</strong>, open-source, and provides end-to-end encryption. Call recording and support response time can be challenging, and user bandwidth issues may cause screen blankness. </li><li>Setting up servers and UI is required, and additional support comes at a cost.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/zoom-vs-jitsi"><strong>Zoom and Jitsi</strong></a><strong>.</strong></blockquote><h2 id="5-aws-chime-secure-and-scalable-video-meetings-for-businesses">5. AWS Chime: Secure and Scalable Video Meetings for Businesses</h2>
<ul><li><a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative"><strong>AWS Chime</strong></a> is a great video conferencing tool for businesses, providing VoIP calling, video messaging, and virtual meetings. </li><li>It ensures high-quality online meetings with clear video and audio, along with features like screen sharing, text chats, and efficient meeting management for up to 250 participants. </li><li>Security is enhanced through AWS Identity and Access Management, and recording and analytics features are available. </li><li>Basic bandwidth management is included, but users are responsible for managing edge cases.</li></ul><h3 id="aws-chime-pricing">AWS Chime pricing</h3>
<ul><li>The service <a href="https://aws.amazon.com/chime/pricing/">offers</a> three tiers: <strong>Basic (free)</strong>, <strong>Plus ($2.50/user/month)</strong>, and <strong>Pro ($15/user/month)</strong>. </li><li><strong>Basic</strong> includes <strong>one-on-one audio/video calls</strong> and <strong>group chat</strong>. <strong>Plus</strong> adds features like <strong>screen sharing</strong> and <strong>message history</strong>. </li><li><strong>Pro</strong> includes everything from Plus, along with <strong>meeting scheduling</strong>, <strong>recording</strong>, and <strong>Outlook integration</strong>.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/zoom-vs-amazon-chime-sdk"><strong>Zoom and AWS Chime</strong></a><strong>.</strong></blockquote><h2 id="6-apirtc-seamless-webrtc-integration-for-developers">6. ApiRTC: Seamless WebRTC Integration for Developers</h2>
<ul><li>ApiRTC is a WebRTC Platform as a Service (PaaS) that simplifies developers' access to WebRTC technology. </li><li>With their API, you can incorporate real-time multimedia interactions into your websites and mobile apps with just a few lines of code.</li><li>However, the <strong>basic subscription</strong> for ApiRTC is priced at <strong>$54.37</strong>, which may be relatively expensive for small developers. </li></ul><h2 id="7-signalwire-flexible-video-calls-for-up-to-100-participants">7. SignalWire: Flexible Video Calls for Up to 100 Participants</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Building-The-Software-Defined-Telecom-Network-SignalWire-1.jpeg" class="kg-image" alt="Top 10 Zoom Video SDK Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>SignalWire is a platform for seamless video integration in applications. </li><li>It supports up to <strong>100 participants per video call</strong> and offers an SDK for web, iOS, and Android apps. </li><li>Developers handle disruptions and user logic separately. </li><li><a href="https://signalwire.com/pricing/video">Pricing</a> is based on per-minute usage, with options for HD and Full HD calls. </li><li>Additional features like <strong>recording</strong> and <strong>streaming</strong> are available at <strong>separate rates</strong>.</li></ul><h2 id="8-mirrorfly-comprehensive-in-app-communication">8. MirrorFly: Comprehensive In-App Communication</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Live-Video-Call-API-Best-Video-Chat-SDK-for-Android-iOS-mirrorfly-1.jpeg" class="kg-image" alt="Top 10 Zoom Video SDK Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>MirrorFly is an in-app communication suite designed for enterprises. </li><li>It offers intuitive APIs and SDKs that provide a remarkable chat and calling experience. </li><li>With a range of over 150 chat, voice, and video calling features, this cloud-based solution seamlessly integrates for a robust communication experience in all aspects. </li><li>However, <strong>MirrorFly's </strong><a href="https://www.mirrorfly.com/pricing.php"><strong>pricing</strong></a> starts at <strong>$299</strong> per month, which positions it as a higher-cost option.</li></ul><h2 id="9-whereby-easy-access-browser-based-meetings">9. Whereby: Easy Access Browser-Based Meetings</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Calling-API-for-Web-and-App-Developers-Whereby-2.jpeg" class="kg-image" alt="Top 10 Zoom Video SDK Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>Whereby is a browser-based meeting platform that offers permanent rooms for users. </li><li>Guests can join meetings with a simple click, and no downloads or registrations are required. </li><li>They recently introduced a hybrid meeting solution that reduces echo and eliminates the need for expensive hardware. </li><li>Whereby allows customization of the video interface but has limited options. </li><li>It prioritizes data privacy, provides basic collaborative features, and offers <a href="https://whereby.com/information/pricing">pricing</a> plans starting at <strong>$6.99</strong> per month with additional charges for extra usage.</li></ul><h2 id="10-enablex-customizable-live-video-for-devs">10. EnableX: Customizable Live Video for Devs</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Call-API-Video-Chat-API-Voice-API-Video-Conferencing_enebleX-2.jpeg" class="kg-image" alt="Top 10 Zoom Video SDK Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>EnableX offers SDKs for live video, voice, and messaging, facilitating the development of live experiences in applications. </li><li>It targets service providers, ISVs, SIs, and developers. The SDK enables the customization of video-calling solutions and personalized live video streams. </li><li>It supports JavaScript, PHP, and Python. <strong>Pricing</strong> starts at <strong>$0.004</strong> per participant minute for <strong>up to 50 participants</strong>. </li><li><strong>Additional </strong><a href="https://www.enablex.io/cpaas/pricing/our-pricing"><strong>charges</strong></a> apply for <strong>recording</strong>, <strong>transcoding</strong>, <strong>storage</strong>, and <strong>RTMP streaming</strong>.</li></ul><h2 id="certainly">The Verdict: Why VideoSDK Stands Out</h2>
<p>While all the video conferencing SDKs mentioned offer various features and capabilities, <a href="https://www.videosdk.live/">VideoSDK</a> stands out as an SDK that prioritizes a fast and seamless integration experience.</p><p>VideoSDK offers a low-code solution that allows developers to quickly build live video experiences in their applications. With Video SDK, it is possible to create and deploy custom video conferencing solutions in under 10 minutes, significantly reducing the time and effort required for integration.</p><p>Unlike other SDKs that may have longer integration times or limited customization options, <a href="https://www.videosdk.live/">VideoSDK</a> aims to provide a streamlined process. By leveraging Video SDK, developers can create and embed live video experiences with ease, allowing users to connect, communicate, and collaborate in real-time.</p><h2 id="still-skeptical">Still skeptical?</h2>
<p>Take a deep dive into VideoSDK's comprehensive <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart guide</a> and immerse yourself in the possibilities with our <a href="https://docs.videosdk.live/code-sample">powerful sample app</a>, built exclusively for Video SDK.</p><p><a href="https://app.videosdk.live/">Sign up</a> and embark on your integration journey today and seize the opportunity to claim your <a href="https://www.videosdk.live/pricing">complimentary free $20</a>, allowing you to unleash the full potential of VideoSDK. And if you ever need assistance along the way, our dedicated team is just a click away, ready to support you.</p><p>Get ready to witness the remarkable experiences you can create using the extraordinary capabilities of VideoSDK. Unleash your creativity and let the world see what you can build!</p>]]></content:encoded></item><item><title><![CDATA[What is Video API? | What are the Different Types of Video API?]]></title><description><![CDATA[A video API is a programming interface that enables developers to integrate video-related features and functionalities into applications, enhancing user experiences.]]></description><link>https://www.videosdk.live/blog/what-is-video-api</link><guid isPermaLink="false">65af77902a88c204ca9ce411</guid><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Wed, 16 Oct 2024 12:35:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/What-is-Video-API.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-video-api">What is Video API?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/What-is-Video-API.png" alt="What is Video API? | What are the Different Types of Video API?"/><p>Video APIs, or Video Application Programming Interfaces, are instrumental in facilitating communication and integration between applications. 
These APIs empower developers to seamlessly embed video streaming, recording, and real-time communication into their applications, offering users an immersive and interactive experience.</p><h2 id="what-is-api">What is API?</h2><p><strong>API </strong>stands for <em>Application Programming Interface</em>. At its core, an API acts as a bridge that allows one piece of software to interact with another. It defines a set of rules and protocols that applications must follow to communicate effectively. APIs serve as intermediaries, facilitating the exchange of data and functionalities between different software components.</p><h3 id="importance-of-video-apis">Importance of Video APIs</h3><p>The significance of Video APIs is far-reaching, impacting industries such as healthcare, education, e-commerce, and social media. By enabling the integration of video functionalities, these APIs enhance user engagement, foster remote collaboration, and pave the way for innovative solutions in various sectors. 
Any <a href="https://axify.io/blog/10x-engineer">10x engineer</a> understands the transformative potential of video APIs, leveraging them to develop highly scalable and innovative solutions that meet complex industry demands</p><h2 id="understanding-video-api">Understanding Video API</h2><h3 id="core-components">Core Components:</h3><ul><li><strong>Video Streaming</strong>: As the backbone of live video interactions, streaming APIs allow applications to deliver real-time video content to users, creating engaging and interactive experiences.</li><li><strong>Video Recording</strong>: APIs supporting video recording functionalities are crucial for preserving and sharing content within applications, whether it's for documentation, content creation, or knowledge sharing.</li><li><strong>Real-time Communication</strong>: These APIs enable instant messaging and live interactions, redefining user engagement by facilitating real-time conversations and collaborations.</li></ul><h3 id="key-features">Key Features:</h3><ul><li><strong>Scalability</strong>: Video APIs must seamlessly scale to accommodate growing user bases and evolving application requirements, ensuring a consistent and reliable user experience.</li><li><strong>Customization</strong>: Tailoring video functionalities to specific needs ensures a personalized and user-friendly experience, allowing developers to create unique and engaging applications.</li><li><strong>Integration Capabilities</strong>: The ability to integrate with diverse platforms and technologies is crucial for versatile application development, ensuring compatibility and accessibility across different devices.</li></ul><h3 id="exploring-real-time-communication-apis">Exploring Real-Time Communication APIs</h3><p>Real-time communication APIs play a pivotal role in enhancing user engagement. 
From empowering customer support with instant responses to enabling interactive live events, these APIs provide the foundation for dynamic and meaningful interactions within applications.</p><h3 id="practical-uses-of-video-apis">Practical Uses of Video APIs</h3><p>Here are a few ways you can use video APIs to improve your product, service, or application:</p><table>
<thead>
<tr>
<th><strong>Feature Category</strong></th>
<th><strong>Feature Description</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>1:1 Video Chats</td>
<td>Host secure, GDPR-compliant meetings.</td>
</tr>
<tr>
<td>Group Video Chats</td>
<td>Bring together 3 or more video participants.</td>
</tr>
<tr>
<td>Live Streaming</td>
<td>Live stream content to your audience in real time.</td>
</tr>
<tr>
<td>Video Playback</td>
<td>Add your branding to videos; allow users to change resolution, speed, and captions to their preferences.</td>
</tr>
<tr>
<td>Video Analytics</td>
<td>Obtain audience information such as playtime, rewind timestamps, viewing location, and video recommendations.</td>
</tr>
<tr>
<td>Interactive Video</td>
<td>Empower users with interactive features like quizzes, surveys, real-time chat, and more during video playback.</td>
</tr>
</tbody>
</table>
<h2 id="types-of-video-apis">Types of Video APIs</h2><p>There are three types of video APIs:</p><ol><li><em>Streaming Video APIs</em></li><li><em>Video Conferencing APIs</em></li><li><em>Video Editing APIs</em></li></ol><h3 id="video-streaming-apis">Video Streaming APIs</h3><p>Video Streaming APIs facilitate the integration of video content into applications, enabling real-time delivery and playback. These APIs provide developers with the tools to manage, customize, and optimize video streaming, ensuring a seamless and efficient experience for users across various platforms. Ideal for live events, gaming, and content delivery, <a href="https://www.videosdk.live/interactive-live-streaming">streaming video APIs</a> bring real-time experiences to users, allowing developers to create immersive and dynamic applications.</p><h3 id="video-conferencing-apis">Video Conferencing APIs</h3><p>Video Conferencing APIs empower developers to embed video conferencing capabilities into applications, enabling seamless virtual meetings. These APIs offer features like <a href="https://www.videosdk.live/audio-video-conferencing">real-time video and audio communication</a>, screen sharing, and collaboration tools. Integration of Video Conferencing APIs enhances applications with robust and scalable virtual meeting functionalities. </p><h3 id="video-editing-apis">Video Editing APIs</h3><p>Video Editing APIs provide developers with tools to programmatically edit and enhance videos within applications. These APIs offer features such as cutting, trimming, adding effects, and merging videos, allowing developers to create a customized and engaging video editing experience for users directly within their applications. 
Some platforms also integrate&nbsp;<a href="https://www.renderforest.com/text-to-video-ai" rel="noreferrer"><strong>text to video AI</strong></a>&nbsp;technology, enabling users to generate video content automatically from written scripts or prompts, streamlining the creative process.</p><h2 id="videosdk-a-game-changing-solution">VideoSDK: A Game-Changing Solution</h2><h3 id="what-is-videosdk">What is VideoSDK</h3><p><a href="https://www.videosdk.live/">VideoSDK </a>stands as a game-changing solution in the realm of live video infrastructure across the USA &amp; India. It not only provides developers with complete flexibility, scalability, and control but also bridges the gap between complex video functionalities and the development process.</p><ol><li><strong>Bridging the Gap Between Developers and Video Functionalities</strong>: VideoSDK simplifies the integration process, making it more accessible for developers to incorporate advanced audio-video features into their applications.</li><li><strong>Features and Capabilities of Video SDK</strong>: From robust video streaming to seamless real-time communication, VideoSDK offers a comprehensive suite of features that elevate the user experience.</li></ol><h3 id="how-video-sdk-enhances-video-api-integration">How Video SDK Enhances Video API Integration</h3><ol><li><strong>Simplifying Integration</strong>: VideoSDK streamlines the integration process, reducing complexity and saving valuable development time. 
Its user-friendly interface and extensive documentation empower developers, regardless of their expertise level.</li><li><strong>Boosting Development Efficiency</strong>: With pre-built tools and features, VideoSDK enhances development efficiency, allowing developers to focus on creating exceptional user experiences rather than grappling with intricate video functionalities.</li></ol><h3 id="choosing-the-right-video-api">Choosing the Right Video API</h3><p>Selecting the right Video API for a project is a critical decision that impacts its success. Considerations such as project requirements, scalability, and developer-friendly features should guide this decision, ensuring the chosen API aligns with the project's goals and future growth.</p><ol><li><strong>Project Requirements</strong>: Understanding the specific needs of the project is essential. Whether it's real-time communication, high-quality streaming, or advanced features, the chosen API should align with the project's objectives.</li><li><strong>Scalability and Future-Proofing</strong>: Opting for a Video API that can seamlessly scale as the user base grows ensures that the application remains robust and reliable in the long run.</li><li><strong>Developer-Friendly Features</strong>: A developer-friendly API, like VideoSDK, ensures that the integration process is smooth and efficient, allowing developers to focus on innovation rather than grappling with complex technicalities.</li><li><strong>Affordability:</strong> Pay for exactly what you need—nothing more, nothing less.</li><li><strong>Documentation:</strong> Choose a platform that has extensive <a href="https://docs.videosdk.live/">documentation</a>, <a href="https://www.videosdk.live/signup">demos</a>, and <a href="https://docs.videosdk.live/code-sample">sample code</a>.</li><li><strong>Video analytics:</strong> Learn how video performs within your application and make data-backed changes and improvements.</li><li><strong>Low latency:</strong> Choose the 
best geolocation to support your video application.</li><li><strong>Compliance:</strong> Keep your data secure and compliant with GDPR, HIPAA, data localization requirements, SOC 2 Type 2, and VAPT.</li></ol><h2 id="build-a-better-video-experience-with-videosdk-video-api-implementation">Build a better video experience with VideoSDK Video API <strong>Implementation</strong></h2><p>VideoSDK empowers you to craft personalized video experiences within your applications at scale. Whether you're implementing HIPAA-compliant 1:1 calls or group calls for your healthcare practice, VideoSDK offers a comprehensive solution.</p><p>Discover the capabilities by initiating a free build of a 1:1 video application, or take advantage of a free trial to develop an application with group video calling functionality. Explore the optimal starter plan for your requirements on our <a href="https://www.videosdk.live/pricing">VideoSDK pricing page</a>. VideoSDK is designed to seamlessly integrate and enhance your application with customizable video features.</p>]]></content:encoded></item><item><title><![CDATA[What is Jitter? Importance of Jitter in VoIP and Video Calls]]></title><description><![CDATA[Discover what jitter is and its impact on communications. Learn how jitter affects network performance in VoIP, video calls, and real-time applications.]]></description><link>https://www.videosdk.live/blog/what-is-jitter</link><guid isPermaLink="false">667571bd20fab018df10ed1c</guid><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 16 Oct 2024 11:15:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/what-is-jitter.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="what-is-jitter">What is Jitter?</h2>
<!--kg-card-end: markdown--><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/what-is-jitter.jpg" alt="What is Jitter? Importance of Jitter in VoIP and Video Calls"/><p>In telecommunications and network engineering, jitter (network jitter) describes the variability in the time delay, measured in milliseconds (ms), between data packets arriving over a network. Unlike consistent delay, which is predictable, jitter involves random and significant fluctuations that can degrade the quality of network communication. This variability is especially problematic in real-time communications, such as Voice over Internet Protocol (VoIP) calls and online gaming, where timing is crucial for performance and user experience.</p><!--kg-card-begin: markdown--><h3 id="why-does-jitter-matter">Why Does Jitter Matter?</h3>
<!--kg-card-end: markdown--><p>Understanding jitter is essential because it directly impacts the effectiveness of digital communications. For businesses, high levels of jitter can lead to poor voice call quality, unresponsive video conferencing, and even dropped connections. In a digital era where seamless communication is key to operational success, managing jitter is not just about enhancing quality; it’s about ensuring reliability and consistency in daily operations.</p><p>In the following sections, we will dive deeper into the causes of jitter, how it is measured, and the best practices for mitigating its effects to ensure smooth and reliable network performance. Stay tuned to learn more about optimizing your network to handle the challenges of jitter effectively.</p><!--kg-card-begin: markdown--><h2 id="understanding-jitter-in-depth">Understanding Jitter in Depth</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="jitter-causes">Jitter Causes</h3>
<!--kg-card-end: markdown--><p>Jitter in network communications occurs due to the variability in packet travel times across a network. This variability can stem from several factors, including network congestion, improper queue configurations, and varying packet routes. When data packets travel across a network, they can take different paths to reach their destination, and these paths might not always have consistent traffic and speeds, leading to uneven arrival times.</p><!--kg-card-begin: markdown--><h3 id="jitter-measuring">Jitter Measuring</h3>
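The inter-arrival-time comparison this section describes can be sketched in a few lines of code. This is an illustration only, not part of any particular tool, and the timestamps are made-up sample data:

```python
def mean_jitter_ms(arrival_times_ms):
    """Average absolute change between successive packet
    inter-arrival gaps, in milliseconds."""
    # Gaps between consecutive packet arrivals.
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    if len(gaps) < 2:
        return 0.0
    # Jitter = how much those gaps vary from one packet to the next.
    variations = [abs(b - a) for a, b in zip(gaps, gaps[1:])]
    return sum(variations) / len(variations)

# Packets sent every 20 ms but arriving unevenly (hypothetical timestamps):
print(mean_jitter_ms([0, 21, 39, 62, 80]))  # ~4.33 ms
```

Perfectly even arrivals (gaps of exactly 20 ms every time) would give a jitter of 0, regardless of how large the fixed delay is; jitter measures variation, not delay itself.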
<!--kg-card-end: markdown--><p>To manage jitter effectively, it is crucial to measure it accurately. Jitter is typically measured in milliseconds (ms) and calculated by comparing the delay times of successive packets. The most common method is to compute the difference in inter-arrival time between successive received packets. Network diagnostic tools often include jitter measurement functionality that helps network administrators monitor and troubleshoot jitter in real-time communications.</p><!--kg-card-begin: markdown--><h3 id="impact-on-network-performance">Impact on Network Performance</h3>
<!--kg-card-end: markdown--><p>Jitter can severely impact applications that rely on real-time data transmission. In <a href="https://www.ecosmob.com/voip-solutions-provider">VoIP communications solutions</a>, high jitter can result in garbled or scrambled audio which degrades the quality of the communication. For video conferences, jitter can cause poor video quality and out-of-sync audio and video. Online gaming and streaming services are also susceptible, where jitter can lead to lagging and buffering issues, directly affecting the user experience.</p><!--kg-card-begin: markdown--><h2 id="real-world-impact-of-high-jitter">Real-World Impact of High Jitter</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="voip-services">VoIP Services</h3>
<!--kg-card-end: markdown--><p>A telecommunications company experienced customer complaints about poor call quality. Upon investigation, it was found that high jitter levels due to inadequate bandwidth and network congestion were causing voice packets to arrive at uneven intervals.</p><!--kg-card-begin: markdown--><h3 id="live-streaming">Live Streaming</h3>
<!--kg-card-end: markdown--><p>A major broadcasting service noticed frequent dips in video quality during peak times. Analysis revealed that jitter, along with latency, was disrupting the smooth streaming of video, which was critical during live sports events.</p><!--kg-card-begin: markdown--><h2 id="mitigation-techniques-how-to-reduce-jitter">Mitigation Techniques: How to Reduce Jitter</h2>
<!--kg-card-end: markdown--><p>There are several strategies and technologies to mitigate jitter:</p><!--kg-card-begin: markdown--><h3 id="use-of-jitter-buffers">Use of Jitter Buffers</h3>
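To make the buffering idea concrete, here is a toy playout buffer: it holds each packet for a fixed delay after arrival and releases packets in sequence order, so uneven arrivals come out smooth. This is a simplified sketch (the class name, fields, and the 50 ms default are invented for illustration); production jitter buffers adapt their depth to measured jitter:

```python
import heapq


class JitterBuffer:
    """Toy playout buffer: holds each packet for `delay_ms` after it
    arrives and releases packets in sequence order, smoothing uneven
    arrival times. Illustrative only."""

    def __init__(self, delay_ms=50):
        self.delay_ms = delay_ms
        self._heap = []  # entries: (sequence_number, arrival_ms, payload)

    def push(self, seq, arrival_ms, payload):
        heapq.heappush(self._heap, (seq, arrival_ms, payload))

    def pop_ready(self, now_ms):
        """Return packets whose playout delay has elapsed, in sequence order."""
        out = []
        while self._heap and now_ms - self._heap[0][1] >= self.delay_ms:
            seq, _, payload = heapq.heappop(self._heap)
            out.append((seq, payload))
        return out


buf = JitterBuffer(delay_ms=50)
buf.push(2, 10, "audio-b")  # arrives first, but out of order
buf.push(1, 12, "audio-a")
print(buf.pop_ready(now_ms=62))  # both held >= 50 ms, released in order
```

The trade-off is visible in the `delay_ms` parameter: a deeper buffer absorbs more jitter but adds that much latency to playback, which is why VoIP stacks keep it as small as conditions allow.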
<!--kg-card-end: markdown--><p>A jitter buffer temporarily stores arriving packets in order to smooth out their arrival rate before they are processed. This can significantly reduce the effects of jitter in VoIP and video streaming applications.</p><!--kg-card-begin: markdown--><h3 id="upgrade-network-infrastructure">Upgrade Network Infrastructure</h3>
<!--kg-card-end: markdown--><p>Improving network hardware, increasing bandwidth, and optimizing router and switch settings can help manage traffic more effectively and thus reduce jitter.</p><!--kg-card-begin: markdown--><h3 id="quality-of-service-qos-settings">Quality of Service (QoS) Settings</h3>
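One concrete piece of this picture: applications can mark their own packets with a DSCP value so that QoS-enabled routers can prioritize them. A minimal sketch (the port number and payload are hypothetical, and the marking only helps where the OS supports `IP_TOS` and the network honors DSCP):

```python
import socket

# DSCP "Expedited Forwarding" (EF) is value 46; the socket option takes
# the full 8-bit ToS byte, so shift left past the two ECN bits.
DSCP_EF_TOS = 46 << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the OS to mark outgoing datagrams so QoS-enabled routers can
# prioritize them over bulk traffic.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF_TOS)
sock.sendto(b"voice-frame", ("127.0.0.1", 5004))  # hypothetical media port
sock.close()
```

Marking is only half of QoS: the routers and switches along the path must also be configured to queue EF-marked traffic ahead of everything else.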
<!--kg-card-end: markdown--><p>Implementing QoS on network devices can prioritize critical data traffic, especially for real-time services like VoIP and video conferencing, thereby minimizing jitter.</p><p>By understanding the causes of jitter, measuring it accurately, and implementing effective mitigation strategies, organizations can significantly enhance their network performance and reliability, ensuring that real-time communications are clear, consistent, and effective.</p><!--kg-card-begin: markdown--><h2 id="conclusion">Conclusion</h2>
<!--kg-card-end: markdown--><p>Jitter is an inevitable aspect of network communications, but its impact can be mitigated through strategic planning and robust network design. Understanding the sources and consequences of jitter is crucial for maintaining high-quality communications, especially in applications that depend on real-time data transmission. By implementing advanced tools and techniques such as jitter buffers, QoS settings, and ongoing network monitoring, businesses can effectively manage jitter and enhance the overall experience for their end users.</p><p>To ensure continuous improvement in network performance, it is essential to regularly assess the network infrastructure, update hardware and software as needed, and stay informed about the latest technologies and practices that can help reduce jitter. This proactive approach will not only solve current issues but also prepare the network to handle future demands, ensuring that digital communications remain seamless and effective.</p><!--kg-card-begin: markdown--><h2 id="faqs-for-jitter">FAQs for Jitter</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="1-what-is-the-difference-between-jitter-and-latency">1. What is the difference between jitter and latency?</h3>
<!--kg-card-end: markdown--><p>While jitter refers to the variability in the time delay of data packets arriving at their destination, latency is the consistent delay experienced by a data packet to travel from source to destination. Both impact network performance, but jitter affects the predictability and stability of packet delivery, making it particularly troublesome for real-time applications.</p><!--kg-card-begin: markdown--><h3 id="2-how-can-i-monitor-jitter-on-my-network">2. How can I monitor jitter on my network?</h3>
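For intuition about the number such tools report: RTP-based monitors typically maintain the running estimate defined in RFC 3550 (section 6.4.1), J = J + (|D| - J) / 16, where D is the change in packet transit time between successive packets. A sketch with hypothetical transit-time samples:

```python
def rfc3550_jitter(transit_times_ms):
    """Running interarrival-jitter estimate per RFC 3550, section 6.4.1:
    for each packet, J += (|D| - J) / 16, where D is the change in
    transit time (arrival minus send timestamp) between packets."""
    j = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)  # |D|: change in one-way transit time
        j += (d - j) / 16    # exponentially smoothed estimate
    return j

# Hypothetical per-packet transit times, in milliseconds:
print(rfc3550_jitter([50, 54, 49, 60, 52]))
```

The 1/16 gain makes the estimate a smoothed average, so a single late packet nudges the reported jitter up rather than spiking it, which is what you see in most monitoring dashboards.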
<!--kg-card-end: markdown--><p>Network monitoring tools often provide functionalities to measure and track jitter. These tools analyze the arrival times of packets and calculate the variability to help network administrators understand current network conditions and foresee potential issues.</p><!--kg-card-begin: markdown--><h3 id="3-are-there-any-industry-standards-for-acceptable-jitter-levels">3. Are there any industry standards for acceptable jitter levels?</h3>
<!--kg-card-end: markdown--><p>Yes, industry standards typically suggest that for good-quality VoIP or other real-time services, jitter should be kept below 30 milliseconds. Higher levels can lead to a noticeable degradation in service quality.</p><!--kg-card-begin: markdown--><h3 id="4-what-is-a-jitter-on-a-speed-test">4. What is Jitter on a Speed Test?</h3>
<!--kg-card-end: markdown--><p>Jitter in a speed test measures the variability in packet transmission times between devices. It reflects the stability of your connection. High jitter can cause issues like poor voice quality in VoIP calls and disrupted streaming experiences.</p><!--kg-card-begin: markdown--><h3 id="5-what-is-jitter-in-networking">5. What is Jitter in Networking?</h3>
<!--kg-card-end: markdown--><p>Jitter in networking refers to the variation in time between data packets arriving, caused by network congestion, route changes, or other inefficiencies. High jitter can lead to disrupted services, such as VoIP calls and real-time video streaming.</p><!--kg-card-begin: markdown--><h3 id="6-what-is-jitter-juice">6. What is Jitter Juice?</h3>
<!--kg-card-end: markdown--><p>Jitter juice is a playful term for a homemade drink typically made for children to help calm first-day-of-school nerves. It usually combines juice with sparkling water and is often accompanied by a fun, reassuring poem.</p>]]></content:encoded></item><item><title><![CDATA[Understanding Latency in WebRTC? How can VideoSDK Fix Latency?]]></title><description><![CDATA[Latency in WebRTC refers to the time delay between data transmission and reception, impacting real-time communication responsiveness in applications.]]></description><link>https://www.videosdk.live/blog/what-is-latency-in-webrtc</link><guid isPermaLink="false">65b7650a2a88c204ca9ce62e</guid><category><![CDATA[WebRTC]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Wed, 16 Oct 2024 09:49:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/image--8-.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-latency">What is Latency?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/image--8-.png" alt="Understanding Latency in WebRTC? How can VideoSDK Fix Latency?"/><p>Latency refers to the time delay between the initiation of a process (its starting point) and its completion (its endpoint). In the context of real-time communication, minimizing latency is crucial for achieving optimal user experiences. Whether it's <a href="https://www.videosdk.live/audio-video-conferencing" rel="noreferrer">video conferencing</a> or live streaming, low latency ensures smoother and more engaging interactions.</p><h2 id="what-is-latency-in-webrtc">What is Latency in WebRTC?</h2><p>In <a href="https://www.videosdk.live/blog/webrtc" rel="noreferrer">WebRTC</a> (Web Real-Time Communication), latency refers to the time delay between the transmission of data from one endpoint to another within a real-time communication session, such as video or audio conferencing, over the internet. 
Latency in WebRTC is characterized by three main components:</p><ul><li><strong>Transmission Delay:</strong> The time needed to push a packet's bits onto the link, determined by packet size and link bandwidth.</li><li><strong>Processing Delay:</strong> The time spent encoding and decoding audio and video data.</li><li><strong>Propagation Delay:</strong> The time it takes a signal to physically traverse the network path from sender to receiver.</li></ul><h3 id="the-role-of-webrtc-and-the-critical-need-for-low-latency">The Role of WebRTC and the Critical Need for Low Latency</h3><p>WebRTC enables real-time communication directly between web browsers, facilitating applications such as video conferencing, online gaming, and live streaming. Low latency is of paramount importance in WebRTC as it directly impacts the user experience. Reduced latency ensures that communication feels natural and instantaneous, crucial for applications where responsiveness is key.</p><h2 id="exploring-causes-of-latency-in-webrtc-challenges-and-solutions">Exploring Causes of Latency in WebRTC: Challenges and Solutions</h2><p>Several factors contribute to internet latency in the context of WebRTC, each posing unique challenges to developers.</p><h3 id="network-congestion">Network Congestion</h3><p>Network congestion occurs when the available bandwidth is insufficient to handle the volume of data being transmitted. This can result in delays and disruptions in audio-video communication. VideoSDK addresses this challenge by incorporating adaptive algorithms that optimize real-time video streaming even in congested network conditions.</p><h3 id="packet-loss">Packet Loss</h3><p><a href="https://www.videosdk.live/developer-hub/social/packet-loss" rel="noreferrer">Packet loss</a> refers to the loss of data packets during transmission. In WebRTC, packet loss can lead to jittery video and audio playback, degrading the overall quality of the user experience. 
VideoSDK tackles this issue by implementing adaptive bitrate streaming, dynamically adjusting the quality of the stream to compensate for packet loss, and ensuring a smooth and uninterrupted experience for users.</p><h3 id="jitter">Jitter</h3><p><a href="https://www.videosdk.live/blog/what-is-jitter" rel="noreferrer">Jitter</a> is the variation in the arrival time of data packets. In WebRTC, jitter can result in synchronization issues and disrupt the flow of real-time communication. VideoSDK mitigates jitter by incorporating advanced mechanisms that compensate for variations in packet arrival times, maintaining a consistent and smooth streaming experience.</p><h2 id="comparing-latency-webrtc-versus-other-streaming-protocols">Comparing Latency: WebRTC Versus Other Streaming Protocols</h2><p><a href="https://www.videosdk.live/blog/webrtc">WebRTC </a>(Web Real-Time Communication) is renowned for its low-latency capabilities, making it well-suited for real-time communication applications. When compared to other streaming protocols, WebRTC generally excels in minimizing latency, especially in scenarios like video conferencing and live broadcasting. Let's briefly compare WebRTC latency to some other streaming protocols:</p><h3 id="webrtc-vs-hls-http-live-streaming">WebRTC vs. HLS (HTTP Live Streaming)</h3><p>WebRTC offers lower latency compared to traditional HLS. <a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HLS </a>typically introduces latency in the range of several seconds due to its chunked delivery mechanism, while WebRTC can achieve much lower latency, often in the range of milliseconds to a few seconds.</p><h3 id="webrtc-vs-rtmp-real-time-messaging-protocol">WebRTC vs. RTMP (Real Time Messaging Protocol)</h3><p><a href="https://www.videosdk.live/blog/what-is-rtmp">RTMP </a>has been widely used for live streaming, but it can introduce noticeable latency. 
WebRTC, in contrast, is designed for real-time communication and can provide lower latency, making it a preferred choice for applications requiring quick and responsive interactions.</p><h3 id="webrtc-vs-mpeg-dash-dynamic-adaptive-streaming-over-http">WebRTC vs. MPEG-DASH (Dynamic Adaptive Streaming over HTTP)</h3><p>Similar to HLS, MPEG-DASH can introduce latency due to its segment-based delivery. When combined with Low-Latency CMAF (Common Media Application Format), MPEG-DASH can achieve reduced latency, but WebRTC often outperforms it in terms of real-time responsiveness.</p><h3 id="webrtc-vs-srt-secure-reliable-transport">WebRTC vs. SRT (Secure Reliable Transport)</h3><p>Both WebRTC and SRT focus on low-latency streaming, but they have different use cases. WebRTC is commonly associated with real-time communication on the web, while SRT is often used for secure and reliable video streaming over unreliable networks. The choice between them depends on the specific requirements of the application.</p><h3 id="webrtc-vs-quic-quick-udp-internet-connections">WebRTC vs. QUIC (Quick UDP Internet Connections)</h3><p>WebRTC and QUIC both aim to reduce latency, but they have different focuses. WebRTC is designed for real-time communication, while QUIC is a general-purpose transport protocol that can benefit various web applications, including streaming. The specific use case and requirements influence the choice between WebRTC and QUIC.</p><h2 id="how-videosdk-optimizes-webrtc-for-reduced-latency">How VideoSDK Optimizes WebRTC for Reduced Latency?</h2><h3 id="videosdk">VideoSDK</h3><p><a href="https://www.videosdk.live/">VideoSDK </a>is a comprehensive live video infrastructure designed for developers across the USA &amp; India. 
It offers real-time audio-video SDKs that provide complete flexibility, scalability, and control, making it seamless for developers to integrate <a href="https://www.videosdk.live/audio-video-conferencing">audio-video conferencing</a> and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into their web and mobile applications.</p><h3 id="features-of-videosdk">Features of VideoSDK</h3><ol><li><strong>Low-latency streaming capabilities:</strong> VideoSDK is engineered to deliver low-latency streaming, ensuring minimal delays in audio-video communication. This is particularly crucial for applications where real-time interaction is paramount.</li><li><strong>Adaptive bitrate streaming:</strong> VideoSDK employs <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming">adaptive bitrate streaming</a>, dynamically adjusting the quality of the video stream based on network conditions. This not only mitigates the impact of packet loss but also ensures a consistent viewing experience for users across varying internet speeds.</li></ol><h3 id="iimplementing-videosdk-to-combat-latency-issues-in-webrtc">Implementing VideoSDK to Combat Latency Issues in WebRTC</h3><ol><li><strong>Real-time video optimization:</strong> VideoSDK optimizes real-time video streaming by minimizing transmission and processing delays. This is achieved through advanced encoding and decoding algorithms, ensuring a smooth and responsive user experience.</li><li><strong>Adaptive algorithms for network conditions:</strong> VideoSDK's adaptive algorithms intelligently adapt to changing network conditions, optimizing the audio-video stream in real time. Whether faced with network congestion or packet loss, VideoSDK dynamically adjusts, ensuring a reliable and low-latency connection.</li></ol><p>In the dynamic landscape of real-time communication, addressing latency is paramount for developers aiming to provide optimal user experiences. 
VideoSDK stands out as a powerful ally, offering a comprehensive solution to mitigate latency challenges in WebRTC. By integrating VideoSDK into their applications, developers can unlock the full potential of real-time audio-video communication, providing users with a seamless and immersive experience. It's time for developers to explore the possibilities that VideoSDK opens up and elevate their applications to new heights of performance and user satisfaction.</p>]]></content:encoded></item><item><title><![CDATA[What is RTMP Protocol (Real Time Messaging Protocol)?]]></title><description><![CDATA[Real-Time Messaging Protocol (RTMP) is a robust communication protocol designed for real-time transmission of audio, video, and data over the internet.]]></description><link>https://www.videosdk.live/blog/what-is-rtmp</link><guid isPermaLink="false">65a674af6c68429b5fdf116e</guid><category><![CDATA[RTMP]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Tue, 15 Oct 2024 13:01:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/image--1-.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-rtmp-real-time-messaging-protocol">What is RTMP (Real Time Messaging Protocol)?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/image--1-.png" alt="What is RTMP Protocol (Real Time Messaging Protocol)?"/><p>Real-Time Messaging Protocol (RTMP) is a robust communication protocol designed for real-time transmission of audio, video, and data over the internet. It facilitates low-latency streaming by maintaining a persistent connection between the server and the client. RTMP is widely used in live streaming applications, online gaming, and other scenarios requiring instantaneous data delivery for an immersive user experience.</p><h3 id="key-features-of-rtmp">Key Features of RTMP</h3><p>RTMP boasts several key features that contribute to its popularity in the streaming landscape. 
These include low latency streaming, adaptive bitrate streaming, and support for various content types. In comparison to other streaming protocols, RTMP offers unique advantages that cater to the demands of different applications and use cases.</p><h3 id="rtmp-vs-other-streaming-protocols">RTMP vs. Other Streaming Protocols</h3><p>While various streaming protocols exist, Real-Time Messaging Protocol holds its ground due to its distinctive features. A comparison with other protocols, such as <a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HTTP Live Streaming</a> (HLS) and Dynamic Adaptive Streaming over HTTP (DASH), reveals the strengths and weaknesses of RTMP in different scenarios.</p><h2 id="how-does-rtmp-work">How Does RTMP Work?</h2><h3 id="real-time-messaging-protocol-architecture-and-components">Real-Time Messaging Protocol Architecture and Components</h3><p><strong>RTMP Server:</strong> At the core of RTMP communication is the RTMP server, responsible for managing client connections, handling streaming requests, and facilitating the transfer of multimedia data.</p><p><strong>RTMP Client: </strong>The RTMP client, typically embedded in applications or devices, establishes a connection with the RTMP server to send or receive audio, video, or data streams.</p><p><strong>RTMP Streams</strong>: RTMP streams are the channels through which multimedia content flows between the server and the client. 
These streams play a crucial role in delivering a seamless and uninterrupted streaming experience.</p><h3 id="the-rtmp-streaming-process-explained">The RTMP Streaming Process Explained</h3><p><strong>Handshake Phase:</strong> The RTMP communication begins with a handshake phase, where the client and server exchange information to establish a secure and reliable connection.</p><p><strong>Command Phase:</strong> During the command phase, the client and server exchange commands, enabling the client to request specific actions or stream-related tasks.</p><p><strong>Data Phase</strong>: In the data phase, the actual streaming data, including audio and video content, is transmitted between the client and server in real time, ensuring a smooth streaming experience.</p><h2 id="benefits-of-using-rtmp-for-live-streaming">Benefits of Using RTMP for Live Streaming</h2><p><strong>Low Latency Streaming:</strong> RTMP's low latency capabilities make it suitable for applications where real-time communication and interaction are crucial, such as live gaming and virtual events.</p><ul><li><strong>Adaptive Bitrate Streaming:</strong> The <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming">adaptive bitrate streaming</a> feature ensures a smooth viewing experience by adjusting the quality of the stream based on the viewer's internet connection, device, or other factors.</li><li><strong>Support for Various Types of Content:</strong> RTMP supports a wide range of multimedia content, making it versatile for applications beyond traditional video streaming, such as audio streaming and data transmission.</li><li><strong>Easy Integration with Different Platforms:</strong> RTMP's compatibility with various platforms simplifies integration, allowing developers to incorporate it seamlessly into web and mobile applications.</li></ul><h2 id="rtmps-impact-on-audio-video-conferencing">RTMP's Impact on Audio-Video Conferencing</h2><p>The Real-Time Messaging Protocol's role in 
facilitating real-time communication cannot be overstated. Whether powering audio-video conferencing or live streaming experiences, RTMP is the backbone of platforms that prioritize seamless user interactions. VideoSDK, a leading player in the field, harnesses the capabilities of RTMP to elevate the streaming experience.</p><h2 id="enhancing-streaming-with-videosdk-and-rtmp">Enhancing Streaming with VideoSDK and RTMP</h2><h3 id="what-is-videosdk">What is VideoSDK</h3><p><a href="https://www.videosdk.live/">VideoSDK </a>is a leading provider of real-time audio and video SDKs for every developer across the USA &amp; India. With <a href="https://www.videosdk.live/audio-video-conferencing">real-time audio-video SDKs</a>, VideoSDK empowers developers with unparalleled flexibility, scalability, and control, making it effortless to integrate audio-video conferencing and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into web and mobile apps.</p><h3 id="features-and-benefits-of-integrating-videosdk">Features and Benefits of Integrating VideoSDK</h3><ul><li><strong>High-Quality Video Streaming:</strong> VideoSDK ensures high-quality video streaming, enhancing the overall viewing experience for end-users.</li><li><strong>Customization Options</strong>: Developers can tailor the RTMP streaming experience with VideoSDK's customization options, allowing for a personalized and branded interface.</li><li><strong>Real-Time Analytics: </strong>VideoSDK provides real-time analytics, empowering content creators and platform owners with valuable insights into viewer engagement and streaming performance.</li></ul><h3 id="benefits-of-using-videosdk-with-real-time-messaging-protocol">Benefits of Using VideoSDK with Real-Time Messaging Protocol</h3><p>VideoSDK, in synergy with RTMP, offers enhanced audio-video quality and stability. 
Developers benefit from a streamlined integration process, and customization options ensure a personalized user experience. The combination of VideoSDK and RTMP is a game-changer in delivering high-quality streaming solutions.</p><p><em>Have questions about integrating RTMP? Our team offers expert advice tailored to your unique needs. Unlock the full potential: <a href="https://www.videosdk.live/signup">sign up </a>now to access resources and join our <a href="https://discord.com/invite/Qfm8j4YAUJ?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic">developer community</a>. <a href="https://bookings.videosdk.live/#/discovery?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic">Schedule a demo</a> to see features in action and discover how our solutions meet your streaming app needs.</em></p><h2 id="real-time-messaging-protocol-faqs">Real-Time Messaging Protocol FAQs</h2><h3 id="1-does-videosdk-support-rtmp">1. Does VideoSDK support RTMP?</h3><p>Yes, VideoSDK fully supports RTMP (Real-Time Messaging Protocol), providing developers with the capability to seamlessly integrate real-time audio-video streaming into their web and mobile applications.</p><h3 id="2-can-i-use-videosdk-with-popular-frameworks-like-react-or-javascript">2. Can I use VideoSDK with popular frameworks like React or JavaScript?</h3><p>Absolutely! VideoSDK is designed with flexibility in mind. 
It offers support for popular frameworks, including <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream">React</a>, <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream">JavaScript</a>, <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream">Flutter</a>, <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream">React Native</a>, Android, and iOS. This ensures that developers can leverage their preferred frameworks while incorporating powerful audio-video features.</p><h3 id="3-how-easy-is-it-to-integrate-videosdk-into-my-existing-application">3. How easy is it to integrate VideoSDK into my existing application?</h3><p>Integrating VideoSDK is designed to be user-friendly. With <a href="https://docs.videosdk.live/?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic">comprehensive documentation </a>and code examples, developers can follow a step-by-step process to smoothly integrate VideoSDK into their web or mobile applications.</p><h3 id="4-does-videosdk-support-cross-platform-development-for-both-ios-and-android">4. Does VideoSDK support cross-platform development for both iOS and Android?</h3><p>Yes, VideoSDK is built to support cross-platform development. Developers can use VideoSDK to add real-time audio-video capabilities to both iOS and Android applications, ensuring a consistent experience across platforms.</p><h3 id="5-how-does-videosdk-ensure-security-during-the-integration-process">5. How does VideoSDK ensure security during the integration process?</h3><p>Security is a top priority for VideoSDK. 
The integration process includes robust authentication mechanisms and encryption protocols to safeguard audio-video communication, providing a secure environment for users.</p>]]></content:encoded></item><item><title><![CDATA[What is Bitrate in WebRTC? How does WebRTC Bitrate work?]]></title><description><![CDATA[WebRTC Bitrate refers to the data transfer rate in WebRTC, crucial for real-time communication quality. Explore its impact on video conferencing performance.]]></description><link>https://www.videosdk.live/blog/what-is-bitrate</link><guid isPermaLink="false">65a8ed456c68429b5fdf15f3</guid><category><![CDATA[Bitrate]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Tue, 15 Oct 2024 11:58:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/image--5-.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-bitrate">What is Bitrate?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/image--5-.png" alt="What is Bitrate in WebRTC? How does WebRTC Bitrate work?"/><p>Bitrate, short for "bit rate," refers to the rate at which bits are processed, transmitted, or received per second. In the context of multimedia, it directly influences the quality of audio and video playback.</p><h2 id="what-is-webrtc">What is WebRTC?</h2><p>WebRTC, short for <a href="https://www.videosdk.live/blog/webrtc">Web Real-Time Communication</a>, is a robust collection of APIs and communication protocols that enable real-time communication over peer-to-peer connections within web browsers. It serves as the driving force behind seamless audio-video conferencing and live streaming on the web.</p><h2 id="what-is-bitrate-in-webrtc">What is Bitrate in WebRTC?</h2><p>Bitrate in WebRTC refers to the rate at which data is transmitted over the network during a real-time communication session. 
It impacts audio and video quality, ensuring smooth, reliable exchanges.</p><p>Bitrate is the heartbeat of WebRTC, dictating the quality and efficiency of data transmission. Whether facilitating a video call or live streaming, understanding how bitrate works is pivotal for delivering a top-notch user experience.</p><h2 id="understanding-importance-bitrate">Understanding the Importance of Bitrate</h2><h3 id="role-of-bitrate-in-video-streaming">Role of Bitrate in Video Streaming</h3><p>Bitrate plays a multifaceted role in determining the quality of video streaming. It influences critical aspects such as resolution, frame rate, and overall visual clarity, shaping the user's perception of the content.</p><h3 id="types-of-bitrate-video-bitrate-audio-bitrate">Types of Bitrate (Video Bitrate &amp; Audio Bitrate)</h3><p>Bitrate is not a one-size-fits-all term; rather, it is divided into Video Bitrate and Audio Bitrate in the multimedia realm. Each type contributes uniquely to the overall data transmission, with Video Bitrate focusing on visual aspects and Audio Bitrate on sound quality.</p><h3 id="what-is-video-bitrate">What is Video Bitrate?</h3><p>Video bitrate refers to the amount of data processed per unit of time in a video file. It is measured in bits per second (bps) and directly affects video quality. Higher bitrates result in better image clarity but also larger file sizes. Proper bitrate selection is crucial for achieving optimal balance in video streaming and storage.</p><h3 id="what-is-audio-bitrate">What is Audio Bitrate?</h3><p>Audio bitrate refers to the amount of data processed per unit of time in an audio file. It is measured in bits per second (bps) and influences the quality of sound in a recording. Higher bitrates generally lead to better audio quality, but they also result in larger file sizes. 
Proper bitrate selection is important for achieving the desired balance between audio fidelity and file size.</p><h3 id="differentiating-between-audio-and-video-bitrate">Differentiating Between Audio and Video Bitrate</h3><p>Understanding the nuances between audio and video bitrate is essential. While both are integral, they have distinct characteristics. Video Bitrate influences the quality of visuals, while Audio Bitrate governs the clarity and richness of sound.</p><h2 id="the-role-of-bitrate-in-webrtc">The Role of Bitrate in WebRTC</h2><h3 id="how-webrtc-handles-bitrate">How Does WebRTC Handle Bitrate?</h3><p>WebRTC operates in a dynamic environment, adapting to varying network conditions. It utilizes adaptive bitrate control mechanisms to ensure a seamless and high-quality communication experience.</p><h3 id="impact-on-video-and-audio-quality">Impact on Video and Audio Quality</h3><p>Bitrate directly affects the quality of both video and audio in WebRTC applications. Striking the right balance is paramount for delivering an immersive and satisfying user experience.</p><h3 id="relationship-with-network-conditions">Relationship with Network Conditions</h3><p>WebRTC's prowess in handling Bitrate is intricately tied to network conditions. It dynamically adjusts to factors such as latency, packet loss, and available bandwidth, ensuring a continuous and stable communication flow.</p><h2 id="how-does-webrtc-bitrate-work">How does WebRTC Bitrate work?</h2><h3 id="adaptive-bitrate-abr-in-webrtc">Adaptive Bitrate (ABR) in WebRTC</h3><ul><li><strong>Dynamic Adjustment Based on Network Conditions: </strong>WebRTC employs Adaptive Bitrate Control, a dynamic adjustment mechanism that responds in real time to changing network conditions. 
This adaptability ensures a seamless user experience by optimizing video quality based on available resources.</li><li><strong>Ensuring a Smooth User Experience: </strong>Adaptive Bitrate Control prevents buffering and optimizes video quality on the fly. It ensures a smooth user experience even in challenging network scenarios, making WebRTC applications resilient and user-friendly.</li></ul><h3 id="bitrate-control-mechanisms">Bitrate Control Mechanisms</h3><ul><li><strong>Sender-Side Bandwidth Estimation: </strong>WebRTC incorporates sender-side bandwidth estimation, a crucial aspect for gauging the available bandwidth and adjusting Bitrate accordingly. This mechanism enables the system to make informed decisions on data transmission rates.</li><li><strong>Receiver-Based Bitrate Adaptation: </strong>Receivers play a vital role in adapting to Bitrate changes. By dynamically adjusting to the sender's Bitrate, receivers contribute to the enhancement of overall communication quality.</li></ul><h3 id="factors-influencing-webrtc-bitrate">Factors Influencing WebRTC Bitrate</h3><ul><li><strong>Network Latency: </strong>Timely data transmission is critical for maintaining a smooth Bitrate. Network latency, the delay in data transmission, directly influences how WebRTC adapts to changing Bitrate requirements.</li><li><strong>Packet Loss: </strong>Addressing packet loss is a fundamental consideration for maintaining Bitrate stability. WebRTC employs strategies to mitigate the impact of packet loss on the overall communication quality.</li><li><strong>Available Bandwidth: </strong>Continuous assessment of available bandwidth is central to WebRTC's ability to optimize Bitrate. 
By dynamically adjusting to the available resources, WebRTC ensures an optimal and reliable communication experience.</li></ul><h2 id="importance-of-bitrate-optimization">Importance of Bitrate Optimization</h2><h3 id="ensuring-a-smooth-and-uninterrupted-user-experience">Ensuring a Smooth and Uninterrupted User Experience</h3><p>Optimal Bitrate is the linchpin for preventing glitches and buffering, and for ensuring a seamless user experience in WebRTC applications. VideoSDK, with its suite of features, takes the lead in optimizing Bitrate for uninterrupted communication.</p><h3 id="bandwidth-considerations-for-various-use-cases">Bandwidth Considerations for Various Use Cases</h3><p>Different use cases demand distinct levels of Bitrate optimization. VideoSDK, with its adaptive features, caters to these diverse requirements effortlessly, ensuring a tailored approach for varied applications.</p><h3 id="impact-on-scalability-in-large-scale-applications">Impact on Scalability in Large-Scale Applications</h3><p>Scalability is a pivotal aspect of any application, particularly in large-scale scenarios. Bitrate optimization with VideoSDK guarantees a scalable and reliable communication platform, addressing the needs of growing user bases.</p><h2 id="video-sdks-a-game-changer-in-bitrate-optimization">Video SDKs: A Game-Changer in Bitrate Optimization</h2><h3 id="introduction-to-videosdk">Introduction to VideoSDK</h3><p><a href="https://www.videosdk.live/">VideoSDK</a> stands tall as a comprehensive live video infrastructure designed for developers. 
It offers unparalleled flexibility, scalability, and control, empowering developers to seamlessly integrate <a href="https://www.videosdk.live/audio-video-conferencing">audio-video conferencing</a> and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into their applications.</p><h3 id="how-videosdk-enhances-bitrate-optimization-in-webrtc">How VideoSDK Enhances Bitrate Optimization in WebRTC</h3><p>VideoSDK emerges as a game-changer by providing developers with a suite of tools and features specifically designed to enhance Bitrate optimization in WebRTC applications. Let's explore how VideoSDK revolutionizes the Bitrate landscape:</p><h2 id="videosdk-features-for-optimizing-bitrate">VideoSDK Features for Optimizing Bitrate</h2><h3 id="real-time-bitrate-monitoring">Real-Time Bitrate Monitoring</h3><p>VideoSDK provides developers with real-time monitoring tools that empower them to gauge Bitrate performance as it happens. This proactive approach allows for dynamic adjustments, ensuring an optimized communication experience.</p><h3 id="adaptive-bitrate-control">Adaptive Bitrate Control</h3><p>At the core of VideoSDK's Bitrate optimization strategy is Adaptive Bitrate Control. This feature dynamically adjusts the Bitrate based on changing network conditions, preventing disruptions and guaranteeing a seamless user experience.</p><h2 id="best-practices-for-bitrate-management-in-webrtc">Best Practices for Bitrate Management in WebRTC</h2><h3 id="tips-for-developers-and-content-providers">Tips for Developers and Content Providers</h3><p>VideoSDK offers a comprehensive set of guidelines and best practices for developers and content providers. 
These tips cover a spectrum of considerations, from codec selection to network optimization, ensuring that applications achieve peak performance in Bitrate management.</p><h3 id="ensuring-optimal-performance-with-videosdk">Ensuring Optimal Performance with VideoSDK</h3><p>VideoSDK goes beyond being a tool; it serves as a knowledge hub for developers. By combining best practices, real-time monitoring, and adaptive features, VideoSDK ensures that developers can achieve and maintain optimal performance in Bitrate management throughout the lifecycle of their applications.</p><h2 id="faqs-about-bitrate">FAQs About Bitrate</h2><h3 id="how-does-videosdk-optimize-bitrate-in-webrtc-and-why-is-it-essential">How does VideoSDK optimize Bitrate in WebRTC, and why is it essential? </h3><p>VideoSDK optimizes Bitrate in WebRTC through features like real-time Bitrate monitoring and <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming">Adaptive Bitrate Streaming</a>. This ensures that your application dynamically adjusts to changing network conditions, delivering a smooth user experience. Optimizing Bitrate is essential for maintaining video and audio quality while adapting to varying network constraints.</p><h3 id="can-videosdk-handle-different-types-of-bitrate-including-both-video-and-audio">Can VideoSDK handle different types of Bitrate, including both video and audio?</h3><p>Yes, VideoSDK is equipped to handle both video and audio Bitrate. It offers tools and features for optimizing Video Bitrate, ensuring visual clarity, and Audio Bitrate, ensuring clear and rich sound quality. 
This comprehensive approach caters to the diverse needs of real-time communication applications.</p><h3 id="how-does-videosdk-assist-developers-in-implementing-best-practices-for-bitrate-management">How does VideoSDK assist developers in implementing best practices for Bitrate management?</h3><p>VideoSDK serves as a knowledge hub for developers, offering a comprehensive set of guidelines and best practices for Bitrate management. From <a href="https://docs.videosdk.live/">codec selection</a> to network optimization, VideoSDK empowers developers to achieve and maintain optimal performance in their applications.</p>]]></content:encoded></item><item><title><![CDATA[What is Real-Time Streaming Protocol(RTSP)?]]></title><description><![CDATA[RTSP (Real-Time Streaming Protocol) facilitates media streaming over networks, managing sessions between servers and clients efficiently.]]></description><link>https://www.videosdk.live/blog/what-is-rtsp-protocol</link><guid isPermaLink="false">6675746620fab018df10ed75</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 15 Oct 2024 11:15:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/what-is-rtsp.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/06/what-is-rtsp.jpg" alt="What is Real-Time Streaming Protocol(RTSP)?"/><p>The Real-Time Streaming Protocol (RTSP), established as a foundational technology in the domain of multimedia streaming, serves as a cornerstone for managing the transmission and control of audio and video data across the internet and within IP networks. As the demand for more sophisticated media services increases, understanding RTSP's functionalities, capabilities, and applications is essential for developers, engineers, and technologists in the multimedia and streaming sectors.</p><p>This article aims to demystify RTSP, explaining its role, operational mechanisms, and how it compares with other streaming protocols such as WebRTC and HLS. By exploring RTSP's command sequences, practical applications, and evolving role in modern multimedia services, readers will gain a comprehensive understanding of how RTSP contributes to the complex landscape of digital streaming and its pivotal role in driving the future of live and on-demand media.</p><p>In the subsequent sections, we will delve into the technical workings of RTSP, outline its key commands, compare it with other prominent streaming technologies, and showcase its real-world applications in industries like surveillance and live broadcasting. Whether you're a seasoned professional or a curious newcomer to the field of media streaming, this exploration of RTSP will enhance your understanding and appreciation of this vital network protocol.</p><h2 id="what-is-rtspreal-time-streaming-protocol">What is RTSP(Real-Time Streaming Protocol)?</h2>
<p>The Real-Time Streaming Protocol (RTSP) is a network control protocol designed for use in entertainment and communications systems to control streaming media servers. RTSP acts much like a network remote control for multimedia streams, facilitating the real-time or on-demand distribution of audio or video content. It operates at the application layer and allows users to take on-demand control of the media stream such as playing, pausing, and stopping the media file. RTSP uses standard methods like those used in HTTP but is tailored for controlling real-time media streams rather than static web data.</p><h3 id="how-does-rtsp-work">How Does RTSP Work?</h3>
<p>RTSP primarily uses the Transmission Control Protocol (TCP) for maintaining a persistent connection which ensures reliable control of the streaming session, although it can also use User Datagram Protocol (UDP) for data where reliability is less critical. One of RTSP's key features is its ability to manage state by maintaining session identifiers. These identifiers allow RTSP to manage streams across multiple transport protocols, a capability that is pivotal when streaming content needs to switch between different network configurations or when scaling across varied user locations.</p><h3 id="rtsp-commands">RTSP Commands</h3>
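<p>Before each command is described individually, it helps to see the shape of a whole session. The JavaScript sketch below builds the request strings for the SETUP → PLAY → PAUSE → TEARDOWN sequence; the URL, client ports, and session id are hypothetical, and no network I/O is performed:</p>

```javascript
// Minimal sketch of an RTSP command sequence (hypothetical values).
// A real client would write these strings to a persistent TCP connection
// and parse each response, e.g. to read the Session header from SETUP.
function rtspRequest(method, url, cseq, extraHeaders = {}) {
  const lines = [`${method} ${url} RTSP/1.0`, `CSeq: ${cseq}`];
  for (const [name, value] of Object.entries(extraHeaders)) {
    lines.push(`${name}: ${value}`);
  }
  return lines.join("\r\n") + "\r\n\r\n"; // blank line terminates the request
}

const url = "rtsp://example.com/media/stream1"; // hypothetical stream URL

// SETUP negotiates transport; the server's response carries a Session id.
const setup = rtspRequest("SETUP", url + "/track1", 1, {
  Transport: "RTP/AVP;unicast;client_port=8000-8001",
});

// Subsequent commands reuse the Session id returned by SETUP.
const session = "12345678"; // hypothetical value parsed from the SETUP response
const play = rtspRequest("PLAY", url, 2, { Session: session, Range: "npt=0-" });
const pause = rtspRequest("PAUSE", url, 3, { Session: session });
const teardown = rtspRequest("TEARDOWN", url, 4, { Session: session });
```

<p>The <code>Session</code> header returned by SETUP is what lets the later commands address the same stream, which is how RTSP maintains state across an otherwise simple request/response exchange.</p>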
<p>The efficacy of RTSP in managing streaming media is largely due to its suite of commands, which orchestrate every aspect of the streaming process:</p><h4 id="setup">SETUP</h4>
<p>This command initiates a streaming session and establishes the media parameters between the client and server. It's akin to tuning the instruments before a concert, ensuring that every parameter is aligned for the performance. The server responds with crucial information such as session identifiers which are used in subsequent requests.</p><h4 id="play">PLAY</h4>
<p>Once a session is established, the PLAY command cues the server to start streaming the media to the client. This command can specify not only when to start the playback but also supports playing the media from a given point, making it possible to jump to specific sections of the content.</p><h4 id="pause">PAUSE</h4>
<p>To temporarily halt the media stream without terminating the session, the PAUSE command is used. This functionality is essential for on-demand video services, where users may wish to halt the video and resume it without rebuffering or loss of connection state.</p><h4 id="teardown">TEARDOWN</h4>
<p>This command is used to end a session and release all allocated resources on the server. After a TEARDOWN request, a new SETUP command is needed to restart the streaming, effectively closing the curtain on the media session once the user is finished.</p><p>Each of these commands plays a vital role in ensuring that RTSP provides flexible and robust control over multimedia streaming, enabling a wide range of applications from video on demand to live broadcasting. The next section will explore how RTSP stands against newer protocols like WebRTC, highlighting its unique position in the streaming landscape.</p><h2 id="rtsp-vs-other-streaming-protocols-a-comparative-analysis">RTSP vs. Other Streaming Protocols: A Comparative Analysis</h2>
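<p>One way to see the architectural difference examined in this section is to compare how a client opens a media session under each protocol. The sketch below is illustrative only: the RTSP side builds a plain-text control request addressed to a known server (hypothetical URL), while the WebRTC side negotiates a peer-to-peer connection via an SDP offer. <code>RTCPeerConnection</code> is a browser-only API, so that function is defined for contrast and never invoked here; the <code>signaling</code> object stands in for an application-defined signaling channel.</p>

```javascript
// RTSP: client-server control -- the client addresses a known server URL
// with text commands (hypothetical URL shown, no network I/O performed).
const describeRequest =
  "DESCRIBE rtsp://example.com/media/stream1 RTSP/1.0\r\n" +
  "CSeq: 1\r\n" +
  "Accept: application/sdp\r\n\r\n";

// WebRTC: peer-to-peer -- media flows directly between endpoints after an
// SDP offer/answer exchange. Browser-only; defined here for illustration.
async function startWebRtcCall(signaling) {
  const pc = new RTCPeerConnection();
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(offer); // application-defined signaling channel
  return pc;
}
```

<p>The contrast matters in practice: RTSP keeps a server in the control path, which suits centrally managed feeds like surveillance, while WebRTC's direct peer connection minimizes latency for interactive calls.</p>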
<h3 id="rtsp-vs-webrtc">RTSP vs WebRTC</h3>
<p>When evaluating streaming technologies, RTSP and WebRTC frequently come into comparison due to their prevalent use in video streaming. While RTSP is primarily a network control protocol used for managing media sessions, WebRTC is designed for peer-to-peer communication, providing real-time media streaming directly between browsers without the need for intermediate servers.</p><p>This architectural difference is pivotal; RTSP relies on a server to control the stream, making it suitable for applications like surveillance where centralized control is necessary. In contrast, WebRTC allows direct media exchange, reducing latency and enhancing the interaction in real-time applications such as video chats and collaborative platforms.</p><p>WebRTC also integrates seamlessly into modern web technologies, as it is designed around HTML5 and supported by all major browsers. It offers features like ultra-low latency streaming, adaptive network conditions, and built-in security measures, which are crucial for user-centric applications.</p><h3 id="rtsp-vs-hls">RTSP vs HLS</h3>
<p>HTTP Live Streaming (HLS) is another significant protocol in the landscape of video streaming. Unlike RTSP, which facilitates control over streaming sessions, HLS delivers content using a series of small, downloadable files over HTTP. This method, known as adaptive bitrate streaming, adjusts the video quality in real time based on the user's internet speed, thus providing a smooth viewing experience even under fluctuating network conditions. HLS is widely used for delivering content across various platforms, including mobile devices and desktops, due to its high compatibility and reliability.</p><p>While RTSP provides a more controlled streaming experience, suitable for applications requiring direct interaction with the media stream, HLS offers broader accessibility and ease of use, making it ideal for public broadcasting and entertainment.</p><h2 id="emerging-trends-the-role-of-cmaf">Emerging Trends: The Role of CMAF</h2>
<p>The Common Media Application Format (CMAF) aims to unify the streaming market around a single media format to simplify delivery and reduce latency. CMAF can bridge the gap between different streaming protocols by enabling a single, standardized format that supports both MPEG-DASH and HLS, reducing the costs and complexity associated with using multiple formats. For RTSP, the advent of CMAF might influence future developments, especially in improving interoperability with HTTP-based streaming protocols.</p><h2 id="real-world-applications-of-rtsp-surveillance-and-broadcasting">Real-World Applications of RTSP: Surveillance and Broadcasting</h2>
<p>RTSP's role in modern multimedia applications is predominantly evident in surveillance systems and live broadcasting, where control and real-time delivery of video are paramount.</p><h3 id="surveillance-systems">Surveillance Systems</h3>
<p>RTSP is indispensable in the world of security and surveillance, where it manages the streaming of live video feeds from cameras to monitoring stations. In security setups, whether it's for traffic monitoring on bustling highways, overseeing activities at international airports, or enhancing home security, RTSP allows for direct control of video feeds.</p><p>Users can command cameras to pan, tilt, and zoom in real-time, ensuring that surveillance personnel can react immediately to any incidents that occur. The protocol's ability to manage stateful, real-time streaming sessions makes it an ideal choice for applications where reliability and direct control are required.</p><h3 id="broadcasting">Broadcasting</h3>
<p>In the broadcasting industry, RTSP plays a crucial role, particularly in live event streaming. From capturing the high-octane excitement of sports arenas to the serene visuals of live cultural events, RTSP facilitates the seamless transmission of live video to global audiences. By managing the setup, control, and teardown of media sessions, RTSP ensures a synchronized viewing experience that is scalable to handle varying audience sizes. The protocol's robust control capabilities allow broadcasters to offer viewers a continuous, uninterrupted stream of high-quality video, which is essential for maintaining viewer engagement and satisfaction during live broadcasts.</p><p>Both of these applications highlight RTSP's unique capabilities in environments where control over the video stream is crucial, demonstrating the protocol's enduring relevance in the ever-evolving landscape of digital media. As streaming technology continues to advance, the adaptability and control offered by RTSP will keep it at the forefront of critical applications like surveillance and live broadcasting, ensuring its continued utility and importance in the digital age.</p><h2 id="conclusion">Conclusion</h2>
<p>The exploration of the Real-Time Streaming Protocol (RTSP) throughout this article underscores its significant role in the realm of digital streaming. As we have discussed, RTSP offers specialized capabilities that make it indispensable for applications requiring precise control over streaming media, such as in surveillance systems and live broadcasting. The protocol's ability to manage and maintain stateful streaming sessions allows users to command and control multimedia content dynamically and in real time, which is crucial for both security applications and live events.</p><p>Moreover, the comparison with other streaming protocols such as WebRTC and HLS reveals the distinct niches that RTSP fills. While newer technologies like WebRTC cater to real-time, peer-to-peer communications and HLS ensures broad accessibility and adaptive streaming, RTSP remains the protocol of choice for scenarios where direct interaction with the media stream is necessary. Its detailed control commands and robust session management make it uniquely suited for environments where every second of delay matters, and every command impacts the user experience.</p><p>As streaming technologies continue to evolve and integrate, the future of RTSP may see it adapting to new standards and formats, like CMAF, enhancing its interoperability and efficiency in the broader streaming ecosystem. For developers, engineers, and technologists, understanding RTSP’s capabilities and applications provides a solid foundation for leveraging this protocol in current and future multimedia projects.</p><p>The insights provided in this article aim to enhance comprehension and facilitate a deeper appreciation of RTSP's pivotal role in driving multimedia streaming forward. 
As we look to the future, the continuous advancements in streaming technology promise to expand the possibilities of what can be achieved with protocols like RTSP, ensuring their relevance in the ever-changing landscape of digital media.</p><h2 id="faqs-for-rtspreal-time-streaming-protocol">FAQs for RTSP(Real-Time Streaming Protocol)</h2>
<h3 id="1-what-is-the-real-time-streaming-protocol-rtsp">1. What is the Real-Time Streaming Protocol (RTSP)?</h3>
<p>RTSP is a network control protocol used to manage the streaming of audio and video over the internet. It allows for functions like play, pause, and stop, similar to using a remote control with a TV.</p><h3 id="2-how-does-rtsp-differ-from-webrtc">2. How does RTSP differ from WebRTC?</h3>
<p>RTSP is mainly used to control streaming media sessions and relies on a server to manage the streams, making it ideal for applications like surveillance. WebRTC, on the other hand, is a peer-to-peer communication protocol that allows for real-time streaming directly between browsers or devices, reducing latency and improving real-time interactions.</p><h3 id="3-what-are-the-key-commands-used-in-rtsp">3. What are the key commands used in RTSP?</h3>
<p>The main commands in RTSP are SETUP (to start a session), PLAY (to start streaming), PAUSE (to temporarily stop streaming), and TEARDOWN (to end the session and release resources).</p><h3 id="4-in-what-applications-is-rtsp-most-commonly-used">4. In what applications is RTSP most commonly used?</h3>
<p>RTSP is commonly used in surveillance systems to manage real-time video feeds and in broadcasting to stream live events. Its precise control over the media stream makes it ideal for these uses.</p><h3 id="5-can-rtsp-and-hls-be-used-together">5. Can RTSP and HLS be used together?</h3>
<p>RTSP and HLS serve different purposes and are generally used in different situations. RTSP controls media sessions, while HLS efficiently delivers streaming content over the internet using adaptive bitrate streaming. They are usually not used together but are chosen based on the specific needs of an application.</p><h3 id="6-what-is-the-impact-of-emerging-technologies-like-cmaf-on-rtsp">6. What is the impact of emerging technologies like CMAF on RTSP?</h3>
<p>CMAF may affect the future use of RTSP by making it easier to deliver streaming media through a unified format that supports both MPEG-DASH and HLS. This can improve RTSP's efficiency and interoperability in the streaming world.</p>]]></content:encoded></item><item><title><![CDATA[What is HLS (HTTP Live Streaming)?]]></title><description><![CDATA[HTTP Live Streaming is a streaming protocol developed by Apple for transmitting audio and video content over the internet, utilizing a client-server model to deliver an adaptive streaming experience across various devices and browsers.]]></description><link>https://www.videosdk.live/blog/what-is-http-live-streaming</link><guid isPermaLink="false">65a504b96c68429b5fdf0e8d</guid><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Tue, 15 Oct 2024 10:17:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/What-is--HTTP-Live-Streaming_.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-http-live-streaming-hls"><strong>What is HTTP Live Streaming (HLS)?</strong></h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/What-is--HTTP-Live-Streaming_.png" alt="What is HLS (HTTP Live Streaming)?"/><p>HTTP Live Streaming (HLS) changes the way videos are sent over the internet by automatically changing video quality to match how fast your internet is. This means videos play smoothly on any kind of device without stopping to buffer. Made by Apple and used everywhere, HLS breaks videos into small, smart pieces and can switch between different quality levels, so you hardly ever have to wait for videos to load. 
It's a top choice for people who put videos on the internet.</p><h3 id="advantages-of-hls">Advantages of HLS</h3><ol><li><strong>Adaptive Bitrate Streaming:</strong> HLS's <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming">adaptive streaming</a> capability allows the client device to switch between different bitrate streams based on available network bandwidth. This ensures optimal playback quality regardless of the user's internet speed.</li><li><strong>Wide Compatibility:</strong> HTTP Live Streaming is supported by a broad range of devices, including iOS and Android devices, web browsers, and more. This widespread compatibility makes it a versatile choice for content delivery.</li><li><strong>Reduced Buffering:</strong> The segmented nature of HLS, with each segment containing a few seconds of content, minimizes buffering times. This results in a smoother streaming experience, especially when network conditions vary.</li></ol><h2 id="how-does-hls-work"><strong>How Does HLS Work?</strong></h2><h3 id="http-live-streaming-hls-architecture-overview"><strong>HTTP Live Streaming (HLS) Architecture Overview</strong></h3><p><strong>HTTP Live Streaming</strong> operates on a client-server model, where the server generates a series of media files and playlists that the client downloads and plays sequentially. 
The key components of HLS include:</p><ul><li><strong>Encoding</strong>: Converts video data using H.264 or H.265 encoding for broad compatibility.</li><li><strong>Segmenting</strong>: Divides the video into short segments, creating multiple quality levels and an index file for stream navigation.</li><li><strong>Manifest Files</strong>: Lists available streams and their qualities to assist client selection.</li><li><strong>CDNs</strong>: Enhances streaming performance globally by minimizing latency.</li></ul><p><strong>Adaptive Bitrate Streaming</strong>: A core feature of HLS that adjusts the video quality based on the user's network conditions, preventing buffering by switching between different bitrate streams.</p><p><strong>HLS's Advantages</strong>: Offers a consistent viewing experience across varying network conditions, ensures wide device and browser compatibility, and improves user experience with minimal buffering.</p><h3 id="wide-compatibility-with-devices-and-browsers">Wide Compatibility with Devices and Browsers</h3><p>HLS's compatibility extends to a wide range of devices and browsers, making it an accessible solution for content providers. Whether users are on mobile devices or desktops, HLS ensures seamless playback across platforms.</p><h3 id="enhanced-user-experience-with-reduced-buffering"><strong>Enhanced User Experience with Reduced Buffering</strong></h3><p>The segmented nature of HLS and adaptive bitrate streaming contribute to a superior user experience. 
Reduced buffering times and the ability to adapt to varying network speeds make HTTP Live Streaming a reliable choice for streaming services aiming to provide high-quality content delivery.</p><h2 id="how-videosdk-enhances-hls-http-live-streaming"><strong>How VideoSDK Enhances HLS (HTTP Live Streaming)?</strong></h2><h3 id="introduction-to-videosdk"><strong>Introduction to VideoSDK</strong></h3><p><a href="https://www.videosdk.live/">VideoSDK </a>is a powerful set of real-time audio and video SDKs that empower developers across the USA &amp; India to seamlessly integrate audio-video conferencing and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into web and mobile applications. It offers complete flexibility, scalability, and control over the streaming experience.</p><h3 id="enhancing-hls-with-videosdk">Enhancing HLS with VideoSDK:</h3><ul><li><strong>Interactive Communication</strong>: Enables <a href="https://www.videosdk.live/audio-video-conferencing">real-time audio and video interactions</a>.</li><li><strong>Scalability and Flexibility</strong>: Allows streaming services to grow and adapt to user demands.</li><li><strong>Customization and Reliability</strong>: Offers tailored streaming experiences with dependable performance.</li></ul><h3 id="benefits-of-using-videosdk-with-hls"><strong>Benefits of Using VideoSDK with HLS</strong></h3><ol><li><strong>Seamless Integration:</strong> VideoSDK seamlessly integrates with HLS, providing developers with the tools needed to enhance streaming capabilities within their applications.</li><li><strong>Customization:</strong> VideoSDK allows developers to customize the streaming experience, ensuring it aligns with the unique requirements of their applications.</li><li><strong>Reliability:</strong> With VideoSDK, developers can build robust and reliable streaming applications, ensuring a consistent and high-quality experience for end-users.</li></ol><h2 
id="technical-aspects-of-videosdk-and-hls-integration"><strong>Technical Aspects of VideoSDK and HLS Integration</strong></h2><h3 id="api-documentation-and-developer-resources"><strong>API Documentation and Developer Resources</strong></h3><p>Developers can leverage comprehensive API documentation provided by <a href="https://docs.videosdk.live/">VideoSDK to integrate HLS</a> seamlessly. The documentation includes detailed guides, code samples, and support resources to facilitate a smooth integration process.</p><h3 id="compatibility-with-different-platforms-and-devices"><strong>Compatibility with Different Platforms and Devices</strong></h3><p>VideoSDK ensures cross-platform compatibility, enabling developers to create applications that work seamlessly on a variety of devices, including smartphones, tablets, and desktops. This versatility is crucial for reaching a broad audience.</p><h3 id="performance-optimization-tips-for-hls-streaming-with-videosdk"><strong>Performance Optimization Tips for HLS Streaming with VideoSDK</strong></h3><p>To optimize HLS streaming performance with VideoSDK, developers should consider the following:</p><ol><li><strong>Bandwidth Adaptation:</strong> Leverage VideoSDK's capabilities to adjust bandwidth dynamically, aligning with HLS's adaptive streaming for an optimal user experience.</li><li><strong>Caching and Content Delivery:</strong> Implement effective caching mechanisms and leverage CDNs to enhance content delivery speed and reduce latency.</li><li><strong>Quality of Service (QoS) Monitoring:</strong> Integrate tools for monitoring QoS to track streaming performance and identify areas for improvement.</li></ol><h2 id="best-practices-for-implementing-hls-streaming"><strong>Best Practices for Implementing HLS Streaming</strong></h2><h3 id="tips-for-optimizing-hls-streaming-performance"><strong>Tips for Optimizing HLS Streaming Performance</strong></h3><ol><li><strong>Use Efficient Codecs:</strong> Employ modern and efficient video 
and audio codecs to ensure high-quality streaming at lower bitrates.</li><li><strong>Optimize Segmentation:</strong> Adjust the duration of media segments to balance between reducing buffering and minimizing the number of requests.</li><li><strong>CDN Optimization:</strong> Choose a reliable CDN and optimize its configuration to ensure efficient content delivery.</li></ol><h3 id="ensuring-compatibility-across-devices-and-browsers"><strong>Ensuring Compatibility Across Devices and Browsers</strong></h3><ol><li><strong>Browser Support:</strong> Verify the compatibility of HTTP Live Streaming with popular browsers and implement fallback options for non-supporting environments.</li><li><strong>Device Testing:</strong> Conduct thorough testing on various devices to ensure a consistent streaming experience across platforms.</li></ol><h3 id="security-considerations-for-hls-streaming"><strong>Security Considerations for HLS Streaming</strong></h3><ol><li><strong>Encryption:</strong> Implement encryption to secure the content and prevent unauthorized access.</li><li><strong>Tokenization:</strong> Use tokenization to control access to streaming content and protect against unauthorized sharing.</li></ol><p><strong>Integration Tips for VideoSDK and HLS</strong>:</p><ul><li><strong>Comprehensive API Support</strong>: Utilize detailed documentation for seamless integration.</li><li><strong>Cross-Platform Compatibility</strong>: Ensure smooth operation across all devices and platforms.</li><li><strong>Performance Optimization</strong>: Implement bandwidth adaptation, caching, and quality monitoring for superior streaming quality.</li></ul><p><strong>Optimizing HLS Streaming</strong>:</p><ul><li>Use efficient codecs and optimize segment duration for better streaming performance.</li><li>Leverage a reliable CDN and ensure browser and device compatibility.</li><li>Prioritize security through encryption and tokenization to protect content.</li></ul><p><strong>Support and Integration 
with VideoSDK</strong>:</p><ul><li>VideoSDK fully supports HLS, enabling developers to incorporate adaptive, high-quality streaming into applications.</li><li>Compatible with a wide range of web and mobile frameworks, offering flexibility in development environments.</li></ul>]]></content:encoded></item><item><title><![CDATA[What is Adaptive Bitrate Streaming? How Does ABR Work?]]></title><description><![CDATA[Adaptive Bitrate Streaming optimizes video quality by dynamically adjusting to network conditions, enhancing the streaming experience over HTTP networks. Explore its mechanics and benefits for seamless video delivery.]]></description><link>https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming</link><guid isPermaLink="false">65a7acfa6c68429b5fdf140b</guid><category><![CDATA[Adaptive Bitrate Streaming]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Mon, 14 Oct 2024 13:07:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/image--3-.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-adaptive-bitrate-streaming">What is Adaptive Bitrate Streaming?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/image--3-.png" alt="What is Adaptive Bitrate Streaming? How Does ABR Work?"/><p><strong>Adaptive Bitrate Streaming</strong> is a dynamic streaming technique that adjusts the quality of video playback in real time based on the viewer's network conditions and device capabilities. It involves the creation of multiple versions of a video at different bitrates and resolutions. 
These versions are then divided into small segments, and a manifest file guides the player in selecting the appropriate segment to deliver a smooth viewing experience.</p><h3 id="evolution-of-video-streaming-technologies">Evolution of Video Streaming Technologies</h3><p>The evolution of video streaming technologies has been marked by a continuous quest for improved quality and adaptability. Adaptive Bitrate Streaming represents a significant leap forward, surpassing traditional streaming methods by providing a flexible and responsive solution to varying network conditions.</p><h3 id="how-abr-enhances-user-experience">How ABR Enhances User Experience</h3><p>Understanding key terms like bitrate, encoding, and manifest files is crucial for grasping the intricacies of ABR. Bitrate refers to the amount of data processed per unit of time, encoding involves compressing video files for efficient transmission, and manifest files act as guides, providing information on available bitrates and helping the player make informed decisions during playback.</p><h2 id="how-adaptive-bitrate-streamingabr-works">How Does Adaptive Bitrate Streaming (ABR) Work?</h2><h3 id="overview-of-the-abr-workflow">Overview of the ABR Workflow</h3><p>Adaptive Bitrate Streaming (ABR) is a sophisticated video streaming technique that enhances user experience by dynamically adjusting video quality based on available network conditions. It ensures seamless playback by adapting to varying bandwidths, preventing buffering issues. ABR encodes a video into multiple quality versions (bitrates) and breaks it into small, manageable chunks. During playback, the streaming player monitors the viewer's internet connection and switches between these versions in real-time.</p><p>When network conditions improve, ABR increases the video quality for a clearer experience. Conversely, in poor conditions, it switches to lower bitrates to prevent buffering and maintain uninterrupted playback.
Popular streaming services like Netflix and YouTube utilize ABR to deliver optimal video quality across diverse devices and network scenarios, providing users with a smooth and enjoyable viewing experience regardless of their internet speed.</p><h3 id="importance-of-multiple-bitrates-for-different-devices-and-network-conditions">Importance of Multiple Bitrates for Different Devices and Network Conditions</h3><p>The availability of multiple bitrates for a single video allows the ABR system to adapt to various devices and network conditions. Higher bitrates deliver superior quality on high-speed connections and powerful devices, while lower bitrates ensure uninterrupted playback on slower networks or less capable devices.</p><h3 id="explanation-of-the-abr-decision-making-process">Explanation of the ABR Decision-Making Process</h3><p>The ABR decision-making process involves analyzing available bandwidth, device capabilities, and current network conditions. The player dynamically selects the appropriate bitrate and resolution for each segment, aiming to provide the best possible quality without causing buffering or interruptions.</p><h2 id="benefits-of-using-adaptive-high-bitrate-streamingabr">Benefits of Using Adaptive Bitrate Streaming (ABR)</h2><h3 id="improved-video-quality-and-user-experience">Improved Video Quality and User Experience</h3><p>The primary benefit of ABR is evident in the enhanced video quality and overall user experience. By adapting to changing network conditions, ABR ensures that viewers receive the best possible quality without disruptions, creating a more satisfying and engaging watching experience.</p><h3 id="seamless-playback-across-network-conditions">Seamless Playback Across Network Conditions</h3><p>ABR's ability to adjust on the fly allows for seamless playback, even in challenging network environments.
Whether a viewer is on a high-speed connection or facing intermittent disruptions, ABR ensures uninterrupted streaming by dynamically optimizing the video quality.</p><h3 id="optimized-bandwidth-usage">Optimized Bandwidth Usage</h3><p>The adaptive nature of ABR not only benefits users but also optimizes bandwidth usage for content providers. By dynamically adjusting the video quality based on network conditions, ABR helps minimize bandwidth requirements, leading to cost-effective content delivery.</p><h2 id="streaming-protocols-that-support-abr">Streaming Protocols That Support ABR</h2><p>Adaptive Bitrate Streaming (ABR) has become a cornerstone in delivering high-quality video content over the internet, ensuring a seamless viewing experience for users. Several streaming protocols support ABR, providing a versatile range of options for content delivery.</p><ol><li><strong>HLS (HTTP Live Streaming):</strong> <a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HLS </a>is one of the most widely adopted streaming protocols that supports ABR. Developed by Apple, it segments video content into smaller chunks and dynamically adjusts the bitrate based on the viewer's network conditions, ensuring optimal playback on various devices.</li><li><strong>DASH (Dynamic Adaptive Streaming over HTTP):</strong> DASH is an open-source standard that operates similarly to HLS. It divides video content into segments and utilizes manifest files to adaptively switch between different bitrates, allowing for a smooth streaming experience across various platforms.</li><li><strong>HDS (HTTP Dynamic Streaming):</strong> Adobe's HDS is an ABR protocol that utilizes HTTP for content delivery. 
Like other ABR protocols, it offers adaptive streaming by dividing content into segments and dynamically adjusting the bitrate to optimize playback.</li></ol><h2 id="integrating-videosdk-for-adaptive-bitrate-streaming">Integrating VideoSDK for Adaptive Bitrate Streaming</h2><h3 id="what-is-videosdk">What is VideoSDK?</h3><p><a href="https://www.videosdk.live/">VideoSDK</a> is the live video infrastructure for every developer across the USA &amp; India. Offering full flexibility, scalability, and control, VideoSDK simplifies the integration of audio-video conferencing and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into web and mobile apps.</p><h3 id="features-and-capabilities-of-video-sdk-for-abr">Features and Capabilities of VideoSDK for ABR</h3><p>VideoSDK takes ABR to the next level with its advanced features and capabilities. Whether you're looking for real-time audio-video communication or seamless live streaming, VideoSDK provides the tools needed to implement adaptive bitrate streaming effortlessly.</p><h3 id="step-by-step-guide-on-integrating-videosdk-for-abr-in-applications">Step-by-Step Guide on Integrating VideoSDK for ABR in Applications</h3><p>To harness the power of ABR with VideoSDK, developers can follow a step-by-step guide for seamless integration. This ensures that applications benefit from adaptive bitrate streaming, delivering an optimal viewing experience to users.</p><h2 id="best-practices-for-abr-adaptive-bitrate-streaming">Best Practices for ABR (Adaptive Bitrate Streaming)</h2><h3 id="optimal-encoding-settings">Optimal Encoding Settings</h3><p>Achieving the best results with ABR requires careful consideration of encoding settings.
Optimal encoding ensures that video files are compressed efficiently, allowing for smoother playback and better adaptation to varying network conditions.</p><h3 id="choosing-appropriate-bitrates">Choosing Appropriate Bitrates</h3><p>Selecting appropriate bitrates for different resolutions is crucial in maximizing the benefits of ABR. Striking a balance between video quality and bandwidth consumption ensures that users receive a consistent and enjoyable viewing experience.</p><h3 id="regularly-updating-abr-algorithms-for-dynamic-performance">Regularly Updating ABR Algorithms for Dynamic Performance</h3><p>The digital landscape is ever-changing, and regular updates to ABR algorithms are essential to maintain dynamic performance. VideoSDK, with its commitment to cutting-edge technology, ensures that ABR algorithms stay ahead of the curve, providing developers with the tools needed for top-tier adaptive bitrate streaming.</p><h3 id="does-videosdk-support-adaptive-bitrate-streaming">Does VideoSDK support Adaptive Bitrate Streaming?</h3><p>Yes, VideoSDK supports ABR streaming, allowing dynamic adjustment of video quality based on the viewer's internet connection. This ensures a smooth playback experience with optimal quality in varying network conditions. Learn more about improving playback with <a href="https://www.videosdk.live/audio-video-conferencing">Audio/Video Calling API</a>.</p>]]></content:encoded></item><item><title><![CDATA[What is WebRTC Video Bitrate?
How Does it Affect Video Quality?]]></title><description><![CDATA[WebRTC video bitrate refers to the amount of data transmitted per second during real-time communication, influencing video quality and bandwidth usage.]]></description><link>https://www.videosdk.live/blog/what-is-webrtc-video-bitrate</link><guid isPermaLink="false">65b23a8a2a88c204ca9ce552</guid><category><![CDATA[WebRTC]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Mon, 14 Oct 2024 11:48:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/image--6-.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-webrtc-video-bitrate">What is WebRTC Video Bitrate?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/image--6-.png" alt="What is WebRTC Video Bitrate? How Does it Affect Video Quality?"/><p>WebRTC Video Bitrate refers to the rate at which video data is transmitted over the network during a real-time communication session. It plays a pivotal role in determining the quality of the video stream, impacting factors like resolution, clarity, and overall user experience.</p><h2 id="what-is-video-bitrate">What is Video Bitrate?</h2><p>Video Bitrate is the amount of data transmitted per second in a video stream. It directly influences the quality of the video, with higher bitrates generally resulting in better quality. In the context of WebRTC, Video Bitrate holds the utmost significance due to its direct impact on the overall video communication experience.</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>Resolution</th>
<th>Bitrate (Mbps)</th>
<th>Upload Speed (Mbps)</th>
<th>Frame Rate (fps)</th>
</tr>
</thead>
<tbody>
<tr>
<td>240p</td>
<td>0.5-1.5</td>
<td>1-2</td>
<td>24-30</td>
</tr>
<tr>
<td>480p</td>
<td>1.5-3</td>
<td>2-4</td>
<td>24-30</td>
</tr>
<tr>
<td>720p (HD)</td>
<td>2-5</td>
<td>3-6</td>
<td>24-30</td>
</tr>
<tr>
<td>1080p (FHD)</td>
<td>5-10</td>
<td>7-12</td>
<td>24-60</td>
</tr>
<tr>
<td>1440p (QHD)</td>
<td>10-20</td>
<td>15-25</td>
<td>24-60</td>
</tr>
<tr>
<td>2160p (4K)</td>
<td>20-50</td>
<td>25-50</td>
<td>24-60</td>
</tr>
<tr>
<td>4320p (8K)</td>
<td>50-100</td>
<td>50-100</td>
<td>24-60</td>
</tr>
</tbody>
</table>
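As a rough illustration of how bitrate targets like those in the table above can be applied in WebRTC, the sketch below caps an outgoing video track's bitrate with the standard `RTCRtpSender.getParameters()`/`setParameters()` API. The resolution keys and upper-bound values are assumptions read off the table, and `recommendedMaxBitrate`/`applyMaxBitrate` are hypothetical helpers, not part of any SDK:

```javascript
// Upper-bound bitrates (bits per second) per resolution, taken from the
// table above. These names and values are illustrative assumptions.
const MAX_BITRATE_BPS = {
  "240p": 1_500_000,
  "480p": 3_000_000,
  "720p": 5_000_000,
  "1080p": 10_000_000,
};

function recommendedMaxBitrate(resolution) {
  // Fall back to a conservative cap for resolutions not in the table.
  return MAX_BITRATE_BPS[resolution] ?? 2_500_000;
}

// Browser-only: cap the video encoder via the standard RTCRtpSender API.
// `sender` would come from pc.getSenders() on an RTCPeerConnection.
async function applyMaxBitrate(sender, resolution) {
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    params.encodings = [{}];
  }
  params.encodings[0].maxBitrate = recommendedMaxBitrate(resolution);
  await sender.setParameters(params);
}
```

Lowering `maxBitrate` when the network degrades, and raising it again when it recovers, is the same trade-off the table expresses: picture quality versus the bandwidth actually available.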
<!--kg-card-end: markdown--><h2 id="what-is-webrtc">What is WebRTC?</h2><p><a href="https://www.videosdk.live/blog/webrtc">WebRTC </a>(Web Real-Time Communication) is an open-source project that enables real-time communication through web browsers and mobile applications. It facilitates peer-to-peer communication, allowing audio and video transmission without the need for plugins or additional software.</p><h2 id="importance-of-video-quality-in-webrtc">Importance of Video Quality in WebRTC</h2><p>Ensuring high-quality video is paramount in WebRTC applications. Whether it's video conferencing or live streaming, users expect a seamless and clear visual experience. Video Quality directly affects user satisfaction, engagement, and the effectiveness of communication.</p><h2 id="how-does-video-bitrate-work-in-webrtc">How does Video Bitrate Work in WebRTC?</h2><p>WebRTC (Web Real-Time Communication) employs video bitrate to determine data transfer rates during video transmission. Bitrate impacts video quality and bandwidth usage. Higher bitrates enhance quality but require more bandwidth. Adjustable dynamically, WebRTC adapts to network conditions, optimizing video streaming for a smoother, responsive communication experience.</p><h3 id="role-of-video-bitrate-in-webrtc">Role of Video Bitrate in WebRTC</h3><p>In WebRTC, Video Bitrate plays a crucial role in determining the amount of data transmitted between peers during a video call or streaming session. The goal is to strike a balance between maintaining optimal video quality and ensuring a smooth communication experience.</p><h3 id="factors-influencing-video-bitrate">Factors Influencing Video Bitrate</h3><p>Several factors influence Video Bitrate in WebRTC, including network conditions, device capabilities, and application requirements. 
The dynamic nature of real-time communication necessitates adaptive bitrate control to respond to changing conditions and deliver a consistent user experience.</p><h2 id="how-does-bitrate-affect-video-quality">How Does Bitrate Affect Video Quality?</h2><h3 id="relationship-between-bitrate-and-video-quality">Relationship between Bitrate and Video Quality</h3><p>The relationship between Video Bitrate and Video Quality is direct and proportional. Higher bitrates generally result in better video quality, offering clearer images and smoother motion. However, finding the optimal bitrate is crucial to prevent overconsumption of bandwidth.</p><h3 id="importance-of-optimal-bitrate-for-a-seamless-viewing-experience">Importance of Optimal Bitrate for a Seamless Viewing Experience</h3><p>Maintaining an optimal bitrate is essential for ensuring a seamless viewing experience. Overly high bitrates may lead to buffering and playback delays, while excessively low bitrates can result in pixelation and reduced clarity. Striking the right balance is key to providing users with a high-quality, uninterrupted video stream.</p><h3 id="common-video-quality-issues-associated-with-bitrate">Common Video Quality Issues Associated with Bitrate</h3><p>Insufficient or excessive bitrates can lead to common video quality issues. These include pixelation, artifacts, freezing, and synchronization problems between audio and video. Addressing these issues is vital for delivering a polished and professional real-time communication experience.</p><h2 id="problems-awair-with-video-bitrate">Problems Associated with Video Bitrate</h2><p>While Video Bitrate is essential for video communication, there are four common challenges associated with its implementation:</p><ul><li><strong>Overcompression Issues: </strong>Overcompression occurs when the bitrate is set too low, leading to a significant loss in video quality.
This can result in pixelation, blurriness, and an overall degradation of the viewing experience.</li><li><strong>Buffering and Playback Delays: </strong>Insufficient Video Bitrate can cause buffering and playback delays as the application struggles to transmit and render the video data in real time.</li><li><strong>Compatibility and Bandwidth Challenges:</strong> Different devices and network conditions require adaptive bitrate strategies. Lack of compatibility and inefficient bandwidth usage can hinder the seamless transmission of video data.</li><li><strong>Audio-Video Synchronization Problem:</strong> Improperly managed Video Bitrate can lead to synchronization issues between audio and video, creating a disjointed and unpleasant user experience</li></ul><h2 id="why-does-webrtc-video-bitrate-matter">Why does WebRTC Video Bitrate Matter?</h2><h3 id="importance-in-real-time-communication-applications">Importance in Real-time Communication Applications</h3><p>WebRTC Video Bitrate holds paramount importance in real-time communication applications. Whether it's a business conference call or a live streaming event, users expect a reliable and high-quality video experience. Video Bitrate directly influences the overall effectiveness of communication in these applications.</p><h3 id="user-experience-implications">User Experience Implications</h3><p>The user experience in WebRTC applications is heavily influenced by Video Bitrate. A smooth and clear video stream enhances engagement and satisfaction, leading to a positive perception of the application. 
On the contrary, poor video quality can result in user frustration and disengagement.</p><h2 id="best-solution-for-webrtc-video-bitrate-management-videosdk">Best Solution for WebRTC Video Bitrate Management: VideoSDK</h2><h4 id="videosdk">VideoSDK</h4><p><a href="https://www.videosdk.live/">VideoSDK </a>emerges as the ultimate solution for WebRTC Video Bitrate management, offering developers a powerful toolkit for seamless integration of <a href="https://www.videosdk.live/audio-video-conferencing">audio-video conferencing</a> and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into web and mobile applications. Its features set it apart as a comprehensive and efficient solution.</p><h3 id="adaptive-bitrate-control">Adaptive Bitrate Control</h3><p>VideoSDK incorporates <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming">adaptive bitrate control</a>, allowing the system to dynamically adjust the bitrate based on network conditions. This ensures optimal video quality without compromising on bandwidth usage.</p><h3 id="real-time-optimization-capabilities">Real-time Optimization Capabilities</h3><p>With real-time optimization capabilities, VideoSDK continually monitors network conditions and adjusts the video bitrate on the fly. This adaptive approach guarantees a smooth and uninterrupted communication experience for users.</p><h3 id="cross-platform-compatibility">Cross-platform Compatibility</h3><p>VideoSDK is designed to be compatible across various platforms, ensuring a consistent experience for users regardless of the device or browser they are using. 
This cross-platform compatibility enhances the reach and usability of applications integrated with VideoSDK.</p><h2 id="tips-for-optimizing-webrtc-video-bitrate">Tips for Optimizing WebRTC Video Bitrate</h2><h3 id="bandwidth-management-strategies">Bandwidth Management Strategies</h3><p>Implementing effective bandwidth management strategies is crucial for optimizing WebRTC Video Bitrate. This involves dynamically adjusting the bitrate based on available bandwidth to ensure a smooth and uninterrupted communication experience.</p><h4 id="adaptive-bitrate-techniques">Adaptive Bitrate Techniques</h4><p>Adopting adaptive bitrate techniques ensures that the application can respond to changing network conditions. VideoSDK's adaptive bitrate control is an excellent example of how dynamic adjustments can be made to maintain video quality in real time.</p><p>Understanding and effectively managing WebRTC Video Bitrate is crucial for developers seeking to provide high-quality real-time communication experiences in their applications. VideoSDK emerges as a comprehensive solution, addressing the challenges associated with bitrate management and offering a range of features that make it a standout choice for developers.</p>]]></content:encoded></item><item><title><![CDATA[Product Updates : September 2024]]></title><description><![CDATA[Explore the latest product updates from September 2024 at VideoSDK Live's blog. 
Stay informed on cutting-edge features shaping the future of video technology.]]></description><link>https://www.videosdk.live/blog/product-updates-september-2024</link><guid isPermaLink="false">6709227d988cc8510f930131</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 11 Oct 2024 13:34:44 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/10/sept-2024-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/10/sept-2024-1.png" alt="Product Updates : September 2024"/><p>September was a monumental month for us at VideoSDK! From exciting feature releases to an overwhelming response on Product Hunt for the launch of CharacterSDK, we’re thrilled to share updates that add meaningful value to every stakeholder.</p><h2 id="charactersdk-build-multimodal-real-time-ai-characters-easily"><strong>CharacterSDK: Build Multimodal Real-time AI Characters Easily</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/CharacterSDK.png" class="kg-image" alt="Product Updates : September 2024" loading="lazy" width="1920" height="1080"/></figure><p>We officially introduced CharacterSDK, which allows developers to build realtime AI characters that can listen, speak, see and even take action - all in real time. CharacterSDK allows for seamless integration into existing workflows, empowering developers to create highly personalized and engaging experiences.</p><p><a href="https://www.videosdk.live/character-sdk" rel="noreferrer">Explore CharacterSDK and its endless possibilities ↗️</a></p><p><a href="https://character-demo.videosdk.live/" rel="noreferrer">Try the Demo Now! 
↗️</a></p><h3 id="videosdk-30-was-1-product-of-the-day-on-product-hunt"><strong>VideoSDK 3.0 was #1 Product of the Day on Product Hunt</strong></h3><figure class="kg-card kg-image-card"><a href="https://www.producthunt.com/posts/video-sdk-3-0"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/no-1-product.png" class="kg-image" alt="Product Updates : September 2024" loading="lazy" width="1080" height="1080"/></a></figure><p>Thanks to our incredible community, VideoSDK 3.0 became the #1 Product of the Day and ranked #3 Product of the Week on Product Hunt. If you haven’t explored our launch yet, click the link below and don’t forget to upvote us!</p><p><a href="https://www.producthunt.com/posts/video-sdk-3-0" rel="noreferrer">Check out VideoSDK 3.0 ↗️</a></p><h2 id="introducing-participants-timeline-enhanced-troubleshooting-at-your-fingertips"><strong>Introducing Participants Timeline: Enhanced Troubleshooting at Your Fingertips</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/10/Participant-timeline.png" class="kg-image" alt="Product Updates : September 2024" loading="lazy" width="1280" height="720"/></figure><p>The new Participants Timeline feature is a game-changer for troubleshooting, providing a detailed breakdown of participant activity within a session. It allows you to pinpoint specific durations and investigate exactly which media-related events occurred, such as muting, unmuting, joining, or any errors encountered. This granular level of insight enables faster identification of issues, making it easier to resolve problems and ensure a smooth, uninterrupted session experience.</p><h2 id="collaborate-seamlessly-with-whiteboardgeneral-availability"><strong>Collaborate Seamlessly with Whiteboard - General availability</strong></h2><p>We’re excited to announce that our Whiteboard feature is now available across the React, JavaScript, and Flutter SDKs.
This feature is perfect for real-time collaboration, enabling teams to brainstorm, sketch ideas, and share thoughts visually and interactively. It’s an ideal addition for any app requiring live collaboration—whether it’s a classroom, a creative workshop, or a business meeting.</p><p>🔗  <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/whiteboard" rel="noreferrer">Whiteboard in React SDK</a><br>🔗 <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/whiteboard" rel="noreferrer">Whiteboard in JS SDK</a><br>🔗  <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/whiteboard" rel="noreferrer">Whiteboard in Flutter SDK</a></br></br></p><h2 id="general-updates"><strong>General Updates</strong></h2><ul><li><strong>Custom Video Processor</strong>: We’ve added methods to set a custom video processor, allowing you to apply unique effects to your video stream before it’s transmitted during video calls.</li><li><strong>Mozilla Browser Fix</strong>: We’ve resolved the video rotation issue in Mozilla browsers, improving overall user experience.</li></ul><hr><h3 id="wrapping-up-september"><strong>Wrapping Up September</strong></h3><p>We're constantly working to make VideoSDK.live the most developer-friendly real-time communication solution. These updates are aimed at improving performance, expanding capabilities, and making your development process smoother.</p><p>As always, we'd love to hear your feedback! If you have any questions, suggestions, or issues, please don't hesitate to contact our support team.</p><p>➡️ New to VideoSDK? 
<a href="https://www.videosdk.live/signup">Sign up now</a> and get <strong><em>10,000 free minutes</em></strong> to start building amazing audio &amp; video experiences!</p><p>See you next month!<br>VideoSDK Team</br></p></hr>]]></content:encoded></item><item><title><![CDATA[What is a WebSocket? How Does Websocket Work?]]></title><description><![CDATA[A WebSocket is a communication protocol that enables bidirectional, real-time data exchange between a client and a server, enhancing interactive web applications.]]></description><link>https://www.videosdk.live/blog/what-is-a-websocket</link><guid isPermaLink="false">65ae43052a88c204ca9ce338</guid><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Fri, 11 Oct 2024 12:10:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/what-is-websocket.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-a-websocket">What is a WebSocket?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/what-is-websocket.png" alt="What is a WebSocket? How Does Websocket Work?"/><p><em>WebSocket </em>is a communication protocol that provides a full-duplex communication channel over a single, long-lived connection. Unlike traditional request-response protocols, such as HTTP, WebSockets enable bidirectional communication between clients and servers, allowing for real-time data exchange.</p><h3 id="evolution-of-communication-protocols">Evolution of Communication Protocols</h3><p>To appreciate the significance of WebSockets, it's essential to understand the evolution of communication protocols. Traditional protocols like HTTP follow a request-response model, which can lead to latency issues in real-time applications. 
WebSockets emerged as a solution to this problem, offering a persistent connection for continuous data exchange.</p><h3 id="importance-of-real-time-communication">Importance of Real-time Communication</h3><p>Real-time communication is crucial in various applications, from live streaming to interactive collaboration tools. WebSockets play a pivotal role in achieving low-latency, high-performance communication, making them an ideal choice for developers aiming to deliver immersive user experiences.</p><h2 id="key-features-of-websocket-technology">Key Features of WebSocket Technology</h2><h3 id="1-client-server-communication">1. Client-Server Communication</h3><p>WebSockets facilitate communication between a client (usually a web browser) and a server. Unlike traditional protocols where the server can only respond to client requests, WebSockets allows both the client and server to send messages to each other at any time.</p><h3 id="2-full-duplex-communication">2. Full-Duplex Communication</h3><p>Full-duplex communication means that data can be sent and received simultaneously. This feature is particularly beneficial for real-time applications, as it ensures instant updates and responses.</p><h3 id="3-api-gateway">3. API Gateway</h3><p>API Gateways play a crucial role in managing WebSocket connections. They can help route and load balance WebSocket traffic, ensuring that real-time data flows smoothly and efficiently across your infrastructure.</p><h3 id="4-transmission-control-protocol-tcp">4. Transmission Control Protocol (TCP)</h3><p>WebSockets rely on the Transmission Control Protocol (TCP) to establish a connection. TCP provides a reliable, ordered, and error-checked delivery of a stream of bytes, ensuring that data integrity is maintained throughout the WebSocket communication.</p><h3 id="5-server-sent-events-sse">5. 
Server-Sent Events (SSE)</h3><p>While WebSockets offer full-duplex communication, Server-Sent Events (SSE) provide a unidirectional stream from server to client. SSE is simpler to implement but lacks the bidirectional capabilities that WebSockets provide.</p><h3 id="6-cloud-run">6. Cloud Run</h3><p>Implementing WebSockets in a serverless environment like Google Cloud Run allows developers to build scalable, real-time applications without managing server infrastructure. WebSockets can handle high-throughput data streams, making them ideal for modern web applications.</p><h3 id="7-http-streaming">7. HTTP Streaming</h3><p>HTTP streaming allows servers to push updates to the client over an open HTTP connection, similar to long polling. However, WebSockets offer a more efficient solution by maintaining a persistent, bidirectional connection.</p><h2 id="comparative-analysis-websocket-versus-traditional-protocols">Comparative Analysis: WebSocket Versus Traditional Protocols</h2><h3 id="1-http-vs-websocket">1. HTTP vs. WebSocket</h3><p>Contrasting with the stateless nature of HTTP, WebSockets provide a persistent connection, eliminating the need for repeated handshakes for each data exchange.</p><h3 id="2-short-polling-vs-websockets">2. Short Polling vs. WebSockets</h3><p>Short polling involves the client repeatedly sending requests to the server, resulting in increased latency. WebSockets, on the other hand, maintain an open connection, reducing latency and resource consumption.</p><h3 id="3-long-polling-vs-web-sockets">3. Long Polling vs. Web Sockets</h3><p>While long polling minimizes latency by keeping the request open until new data is available, WebSockets offer a more efficient solution with constant, bidirectional communication.</p><h3 id="4-persistent-connection-advantages">4. 
Persistent Connection Advantages</h3><p>The persistent connection established by WebSockets ensures a continuous flow of data, making them ideal for applications requiring real-time updates.</p><h2 id="how-websockets-works">How WebSockets Works?</h2><h3 id="handshake-process">Handshake Process</h3><ul><li><strong>Establishing Connection</strong>: The WebSocket connection begins with a handshake process, where the client requests an upgrade to the WebSocket protocol.</li><li><strong>Upgrade Header</strong>: Upon receiving the upgrade request, the server responds with an acknowledgment, and the connection is upgraded to a WebSocket connection.</li></ul><h3 id="websocket-frames">WebSocket Frames</h3><ul><li><strong>Data Frames</strong>: WebSockets use frames to encapsulate data. Data frames can be either text or binary, allowing for the transmission of various types of information.</li><li><strong>Control Frames</strong>: Control frames manage the connection, providing functionalities like closing the connection or responding to ping requests.</li></ul><h3 id="websocket-apis">WebSocket APIs</h3><ul><li><strong>JavaScript WebSocket API</strong>: WebSockets are well-supported in modern web browsers through the JavaScript WebSocket API, enabling developers to easily implement real-time communication in web applications.</li><li><strong>Server-side WebSocket Libraries</strong>: On the server side, various libraries and frameworks, such as Socket.io in Node.js, facilitate WebSocket implementation, ensuring compatibility across different platforms.</li></ul><h2 id="use-cases-of-websockets">Use Cases of WebSockets</h2><p>WebSockets find applications across various industries, enhancing user experiences in,</p><ul><li><strong>Real-time Chat Applications</strong>: Instant messaging platforms leverage WebSockets to deliver messages promptly, creating a seamless communication experience.</li><li><strong>Online Gaming Platforms</strong>: Multiplayer online games benefit from 
WebSockets' bidirectional communication, ensuring real-time updates for a smooth gaming experience.</li><li><strong>Financial Trading Systems</strong>: In the fast-paced world of financial trading, WebSockets enable instantaneous updates on stock prices and market changes.</li><li><strong>Live Streaming Services</strong>: WebSockets are integral to live streaming platforms, allowing for the real-time transmission of audio and video data.</li></ul><h2 id="advantages-of-adopting-websockets">Advantages of Adopting WebSockets</h2><ul><li><strong>Reduced Latency</strong>: The persistent connection established by WebSockets significantly reduces latency, ensuring that data is delivered in near real-time.</li><li><strong>Efficient Resource Utilization</strong>: WebSockets eliminate the need for repeated connections and reduce the overhead associated with protocols like HTTP, leading to more efficient resource utilization.</li><li><strong>Scalability</strong>: The lightweight nature of WebSockets makes them scalable, allowing developers to handle a large number of simultaneous connections without sacrificing performance</li><li><strong>Bi-Directional Communication</strong>: Bidirectional communication enables instant updates and responses, making WebSockets an ideal choice for interactive applications.</li></ul><h2 id="implementation-insights-with-videosdk">Implementation Insights with VideoSDK</h2><h3 id="what-is-videosdk">What is VideoSDK</h3><p><a href="https://www.videosdk.live/">VideoSDK </a>is a cutting-edge live video infrastructure designed to empower developers with real-time audio-video capabilities. 
Offering complete flexibility, scalability, and control, VideoSDK seamlessly integrates <a href="https://www.videosdk.live/audio-video-conferencing">audio-video conferencing</a> and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into web and mobile apps.</p><h3 id="how-videosdk-utilizes-websockets">How VideoSDK Utilizes WebSockets</h3><p>VideoSDK leverages WebSockets to provide a robust and low-latency real-time video streaming experience. This ensures that users receive high-quality video with minimal delay.</p><h3 id="interactive-features">Interactive Features</h3><p>The bidirectional communication enabled by WebSockets allows VideoSDK to implement interactive features seamlessly. From screen sharing to interactive whiteboarding, VideoSDK empowers developers to create engaging and interactive applications.</p><p>For developers looking to harness the full potential of real-time communication, VideoSDK stands as a reliable ally. Explore the features and capabilities of VideoSDK to unlock a world of possibilities in audio-video conferencing and live streaming.</p><p>The integration of WebSockets into the realm of real-time communication, exemplified by VideoSDK, opens up exciting possibilities for developers. 
As the digital landscape continues to evolve, the synergy between WebSockets and innovative platforms like VideoSDK will play a pivotal role in shaping the future of interactive and dynamic applications.</p>]]></content:encoded></item><item><title><![CDATA[What is RTP Protocol (Real-time Transport Protocol)?]]></title><description><![CDATA[Real-time Transport Protocol (RTP) is a network protocol delivering audio and video in real-time applications, ensuring timely data transmission.]]></description><link>https://www.videosdk.live/blog/what-is-rtp-protocol</link><guid isPermaLink="false">65a8c5b36c68429b5fdf150a</guid><category><![CDATA[RTP]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Fri, 11 Oct 2024 11:57:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/image--4-.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-the-rtp-protocol">What is the RTP Protocol?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/image--4-.png" alt="What is RTP Protocol (Real-time Transport Protocol)?"/><p>RTP stands for <a href="https://www.videosdk.live/blog/what-is-rtp-protocol" rel="noreferrer">Real-Time Transport Protocol</a> and is a standardized protocol used for delivering audio and video over IP networks in real-time. RTP plays a crucial role in a wide array of communication and entertainment systems involving streaming media, such as telephony, video conferencing applications, and television services.</p><p>RTP usually runs over the User Datagram Protocol (UDP) and is paired with the RTP Control Protocol (RTCP). RTP carries the media itself, while RTCP monitors how the session is going, reporting transmission statistics and quality, and helps synchronize related streams.
RTP is a key part of Voice over IP, often teaming up with a signaling protocol like Session Initiation Protocol (SIP) to set up connections across the network.</p><h3 id="how-rtp-enhances-video-communication">How Does RTP Enhance Video Communication?</h3><p>RTP plays a pivotal role in the world of video communication by providing a reliable framework for delivering real-time content. Whether it's a live video stream or an interactive video call, RTP ensures a smooth and synchronized experience for users.</p><h3 id="key-mechanisms-of-rtp-for-real-time-data-handling">Key Mechanisms of RTP for Real-time Data Handling</h3><p>RTP achieves real-time data transmission through its unique set of features, including packetization, timestamping, and sequence numbering. These elements work in harmony to deliver a continuous and coherent audio-video experience.</p><h2 id="basics-of-rtp">Basics of RTP</h2><h3 id="key-features-of-rtp">Key Features of RTP</h3><ul><li><strong>Packetization:</strong> RTP breaks down audio and video data into smaller packets, enabling efficient transmission over the network.</li><li><strong>Timestamping:</strong> Precise timing is crucial in real-time communication. RTP assigns timestamps to each packet, allowing for synchronization between sender and receiver.</li><li><strong>Sequence Numbering:</strong> To ensure the correct order of packets upon arrival, RTP assigns a sequence number to each packet.</li></ul><h3 id="rtp-vs-other-protocols-a-comparative-overview">RTP vs. Other Protocols: A Comparative Overview</h3><p>RTP (Real-time Transport Protocol) contrasts with UDP (User Datagram Protocol) and TCP (Transmission Control Protocol) in streaming applications. While RTP operates over UDP, offering low-latency delivery for real-time media, TCP ensures reliable but potentially delayed transmission.
<a href="https://www.videosdk.live/blog/what-is-rtsp-protocol" rel="noreferrer">RTSP (Real-Time Streaming Protocol)</a> facilitates control over media sessions but isn't designed for direct data delivery like RTP. RTP stands out for its focus on real-time media, UDP for low latency, TCP for reliability, and RTSP for session control in streaming scenarios. Each protocol serves distinct roles in optimizing communication and media streaming.</p><h3 id="incorporation-of-rtp-in-video-and-audio-communication">Incorporation of RTP in Video and Audio Communication</h3><p>RTP audio and RTP video are specific applications of the Real-Time Transport Protocol, designed to handle real-time audio and video data, respectively. RTP audio is crucial for ensuring clear, synchronized audio delivery, while RTP video focuses on delivering high-quality video streams with minimal delay. Together, they enable effective and seamless communication in various real-time applications, from video conferencing to live streaming.</p><h2 id="how-does-rtp-work">How Does RTP Work?</h2><h3 id="the-role-of-rtp-in-multimedia-communication">The Role of RTP in Multimedia Communication</h3><p>RTP acts as a mediator between applications, ensuring that audio and video data is transmitted seamlessly. It doesn't guarantee the delivery of every packet but focuses on maintaining real-time communication.</p><h3 id="packet-structure-and-format">Packet Structure and Format</h3><p>RTP packets consist of a header and payload. The header contains essential information, including timestamps and sequence numbers, while the payload carries the actual audio or video data.</p><h3 id="header-information-and-its-significance">Header Information and Its Significance</h3><p>The header plays a crucial role in maintaining synchronization and order. 
Timestamps enable the reconstruction of timing at the receiver's end, ensuring a coherent playback experience.</p><h3 id="transmission-process-from-sender-to-receiver">Transmission Process from Sender to Receiver</h3><p>RTP operates on a simple sender-receiver model. The sender packetizes audio and video data, adds the RTP header, and transmits it over the network, where network monitoring software helps ensure quality and reliability. The receiver reconstructs the data using timestamps and sequence numbers, ensuring synchronized playback.</p><h2 id="real-time-transport-protocol-in-video-streaming">Real-time Transport Protocol in Video Streaming</h2><h3 id="rtps-role-in-video-streaming-applications">RTP's Role in Video Streaming Applications</h3><p>RTP serves as the backbone for video streaming, enabling the real-time delivery of content to end-users. Its low latency and efficient packetization make it an ideal choice for platforms aiming to provide a seamless streaming experience.</p><h3 id="addressing-latency-challenges">Addressing Latency Challenges</h3><p>RTP addresses latency challenges by prioritizing the timely delivery of data. This is crucial in live-streaming scenarios where minimal delay is paramount for user engagement.</p><h3 id="adaptive-bitrate-streaming-with-rtp">Adaptive Bitrate Streaming with RTP</h3><p>RTP facilitates <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming" rel="noreferrer">adaptive bitrate</a> streaming, adjusting the quality of the video stream based on the user's network conditions. This ensures a consistent viewing experience across varying internet speeds.</p><h2 id="significance-of-rtp-in-video-conferencing">Significance of RTP in Video Conferencing</h2><h3 id="rtps-impact-on-real-time-communication">RTP's Impact on Real-Time Communication</h3><p>In video conferencing, real-time communication is non-negotiable.
RTP ensures that audio and video streams remain synchronized, providing a natural and responsive interaction for the user.</p><h3 id="synchronization-of-audio-and-video">Synchronization of Audio and Video</h3><p>RTP's timestamping mechanism ensures that audio and video streams arrive at the receiver simultaneously. This synchronization is crucial for maintaining the integrity of the conversation in video conferences.</p><h3 id="handling-packet-loss-and-jitter">Handling Packet Loss and Jitter</h3><p>RTP incorporates mechanisms to handle packet loss and jitter, common challenges in network transmission. This ensures a smooth video conferencing experience even in less-than-ideal network conditions.</p><h2 id="integrating-rtp-with-videosdk-for-enhanced-video-solutions">Integrating RTP with VideoSDK for Enhanced Video Solutions</h2><h3 id="overview-of-videosdk">Overview of VideoSDK</h3><p><a href="https://www.videosdk.live/">VideoSDK</a>, your live video infrastructure, empowers developers across the USA &amp; India to integrate <a href="https://www.videosdk.live/audio-video-conferencing">real-time audio-video conferencing </a>and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> effortlessly. With a focus on flexibility, scalability, and control, VideoSDK ensures a superior user experience.</p><h3 id="how-videosdk-utilizes-rtp-for-seamless-video-communication">How VideoSDK Utilizes RTP for Seamless Video Communication</h3><p>VideoSDK leverages RTP to guarantee the real-time delivery of audio and video data. 
The integration ensures that developers can create applications with minimal latency and optimal performance.</p><h3 id="benefits-of-using-videosdk-for-developers-and-businesses">Benefits of Using VideoSDK for Developers and Businesses</h3><ul><li><strong>Flexibility:</strong> VideoSDK offers developers the flexibility to tailor audio-video communication features according to their application's unique requirements.</li><li><strong>Scalability:</strong> As your user base grows, VideoSDK scales seamlessly to meet the increasing demand for real-time communication.</li><li><strong>Control:</strong> Developers have granular control over the integration, ensuring a customized and optimized experience for end-users.</li></ul><h2 id="best-practices-for-implementing-rtp">Best Practices for Implementing RTP</h2><h3 id="ensuring-optimal-performance">Ensuring Optimal Performance</h3><ul><li><strong>Network Optimization:</strong> Prioritize a robust and low-latency network infrastructure for optimal RTP performance.</li><li><strong>Codec Selection:</strong> Choose codecs wisely based on the application's requirements, balancing between compression efficiency and quality.</li></ul><h3 id="mitigating-latency-issues">Mitigating Latency Issues</h3><ul><li><strong>Real-Time Monitoring:</strong> Implement tools for real-time monitoring to identify and address latency issues promptly.</li><li><strong>Packet Loss Recovery:</strong> Integrate mechanisms for packet loss recovery to maintain a smooth audio-video experience.</li></ul><h3 id="security-considerations-with-rtp">Security Considerations with RTP</h3><ol><li><strong>Encryption:</strong> Implement end-to-end encryption to secure audio and video data during transmission.</li><li><strong>Authentication:</strong> Employ authentication mechanisms to ensure that only authorized parties can access the RTP streams.</li></ol><h2 id="future-trends-in-rtp-and-video-communication">Future Trends in RTP and Video Communication</h2><h3 
id="evolving-technologies-and-their-impact-on-rtp">Evolving Technologies and Their Impact on RTP</h3><ul><li><strong>5G Integration:</strong> The rollout of 5G networks will enhance the capabilities of RTP, enabling even faster and more reliable real-time communication.</li><li><strong>AI and Machine Learning:</strong> Integration of AI and machine learning algorithms to further optimize RTP for diverse network conditions.</li></ul><h3 id="emerging-standards-in-real-time-communication">Emerging Standards in Real-Time Communication</h3><ul><li><strong>WebRTC:</strong> <a href="https://www.videosdk.live/blog/webrtc">WebRTC</a>, a driving force in real-time communication, leverages RTP (Real-time Transport Protocol) for seamless media transmission. This emerging standard enables direct browser-to-browser communication, fostering audio and video interactions. RTP, integral to WebRTC, ensures timely and synchronized delivery, enhancing the overall user experience.</li></ul><h2 id="frequently-asked-questions-about-rtp-and-videosdk">Frequently Asked Questions about RTP and VideoSDK</h2><h3 id="does-videosdk-support-rtp">Does VideoSDK support RTP?</h3><p>Yes, VideoSDK supports RTP (Real-time Transport Protocol), enabling efficient and reliable transmission of audio and video streams. This compatibility ensures seamless integration with real-time communication applications, enhancing the overall performance and user experience.</p><h3 id="how-does-videosdk-ensure-security-with-rtp">How does VideoSDK ensure security with RTP?</h3><p>VideoSDK enhances security by implementing end-to-end encryption to protect audio and video data during transmission.
Additionally, authentication mechanisms are in place to ensure that only authorized parties can access RTP streams.</p><h3 id="why-should-i-choose-videosdk-for-real-time-communication">Why should I choose VideoSDK for real-time communication?</h3><p>VideoSDK is your gateway to unparalleled live video infrastructure, offering flexibility, scalability, and control. By leveraging RTP, VideoSDK empowers developers to create applications with minimal latency, optimal performance, and a superior user experience.</p><h3 id="what-role-does-rtp-play-in-videosdk-and-how-does-it-benefit-developers-and-businesses">What role does RTP play in VideoSDK, and how does it benefit developers and businesses?</h3><p>VideoSDK leverages RTP for seamless video communication. Developers benefit from the flexibility to tailor audio-video features, scalability to accommodate growing user bases, and granular control over integration, resulting in a superior user experience.</p>]]></content:encoded></item><item><title><![CDATA[WebSocket vs WebTransport: A Comprehensive Comparison]]></title><description><![CDATA[WebSockets enable real-time bidirectional communication between client and server. 
WebTransport, an evolving standard, extends this by offering multiplexed streams for enhanced performance and flexibility in web communication.]]></description><link>https://www.videosdk.live/blog/websocket-vs-webtransport</link><guid isPermaLink="false">65bb42172a88c204ca9ce813</guid><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Fri, 11 Oct 2024 11:06:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/02/Websockets-vs-Webtransport.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Websockets-vs-Webtransport.png" alt="WebSocket vs WebTransport: A Comprehensive Comparison"/><p>In the fast-evolving landscape of real-time communication, developers face crucial decisions when selecting protocols to integrate audio-video conferencing and live streaming into their applications. Two prominent contenders in this space are WebSockets and WebTransport, each offering its unique set of features and advantages.</p><h2 id="what-is-websocket">What is WebSocket?</h2><p><a href="https://www.videosdk.live/blog/what-is-a-websocket">WebSocket </a>is a communication protocol that provides full-duplex communication channels over a single, long-lived connection. Unlike traditional HTTP, which follows a request-response model, WebSockets allow for bidirectional communication, enabling real-time data transfer between clients and servers.</p><h2 id="how-websockets-facilitate-real-time-communication">How WebSockets Facilitate Real-time Communication</h2><p>WebSockets facilitate real-time communication by establishing a persistent connection between the client and server.
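</p><p>As a minimal sketch of what this looks like with the browser's WebSocket API (the endpoint URL, the subscribe message, and the reconnect policy below are illustrative assumptions, not part of any particular SDK):</p>

```javascript
// Exponential backoff helper for reconnect attempts: 250ms, 500ms, 1s, ...
// capped at 10s, so a flapping server is not hammered with connections.
function backoffDelayMs(attempt, baseMs = 250, maxMs = 10000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// One long-lived, full-duplex connection: the server can push at any time,
// and the client can send without issuing a new HTTP request.
function connect(url, onMessage) {
  const ws = new WebSocket(url); // e.g. "wss://example.com/feed" (placeholder)
  ws.onopen = () => ws.send(JSON.stringify({ type: "subscribe" }));
  ws.onmessage = (event) => onMessage(event.data);
  return ws;
}
```

<p>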
This connection remains open, allowing data to be transmitted in both directions without the need for repeated connection establishment.</p><h2 id="what-is-common-use-cases-for-websocket">What Are Common Use Cases for WebSocket?</h2><p>WebSockets are widely adopted in applications that require low-latency communication, such as chat applications, online gaming, financial trading platforms, and collaborative tools.</p><h3 id="pros-and-cons-of-using-websocket">Pros and Cons of Using WebSocket</h3><h3 id="pros-of-websocket">Pros of WebSocket:</h3><ul><li><a href="https://www.videosdk.live/blog/what-is-low-latency-http-live-streaming" rel="noreferrer">Low latency</a></li><li>Bidirectional communication</li><li>Efficient for small messages and frequent updates</li></ul><h3 id="cons-of-pros">Cons of WebSocket:</h3><ul><li>Lack of built-in support for advanced features like multiplexing</li><li>May face challenges with scalability under a high number of concurrent connections</li></ul><h2 id="what-is-webtransport">What is WebTransport?</h2><p><a href="https://www.videosdk.live/blog/what-is-webtransport">WebTransport </a>is an emerging protocol designed to overcome some of the limitations of WebSockets. It provides a multiplexed transport with a range of built-in features, aiming to enhance the efficiency and scalability of real-time communication.</p><h3 id="what-is-key-features-distinguishing-it-from-websockets">What Key Features Distinguish It from WebSockets?</h3><p>WebTransport introduces multiplexing, allowing multiple streams of data to be transmitted over a single connection.
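</p><p>A browser-only sketch of those multiplexed streams (the URL is a placeholder and the function name is ours; WebTransport support varies by browser):</p>

```javascript
// Send each message on its own unidirectional stream over one connection.
// Independent streams do not head-of-line block one another, unlike frames
// queued on a single WebSocket.
async function sendOnStreams(url, messages) {
  const transport = new WebTransport(url); // e.g. "https://example.com/wt"
  await transport.ready;
  for (const msg of messages) {
    const stream = await transport.createUnidirectionalStream();
    const writer = stream.getWriter();
    await writer.write(new TextEncoder().encode(msg));
    await writer.close();
  }
}
```

<p>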
This feature enhances bandwidth efficiency and reduces the overhead associated with maintaining multiple connections.</p><h3 id="advantages-of-using-webtransport-for-real-time-communication">Advantages of Using WebTransport for Real-time Communication</h3><ul><li>Improved bandwidth efficiency</li><li>Enhanced scalability through multiplexing</li><li>Built-in support for reliable and unreliable data delivery</li></ul><h3 id="what-are-the-use-cases-for-webtransport">What are the Use Cases for WebTransport?</h3><p>WebTransport is well-suited for applications requiring high-performance, reliable, and scalable real-time communication. Use cases include online collaboration tools, virtual classrooms, and interactive live-streaming platforms.</p><h2 id="webtransport-vs-websockets-a-head-to-head-comparison">WebTransport vs. WebSockets: A Head-to-Head Comparison</h2><h3 id="performance-comparison">Performance Comparison</h3><ul><li><strong>Bandwidth Efficiency</strong> - WebTransport excels in bandwidth efficiency due to its multiplexing capabilities. It allows simultaneous transmission of multiple streams over a single connection, reducing the overall data transfer overhead.</li><li><strong>Latency </strong>- While WebSockets offer low-latency communication, WebTransport further optimizes latency through multiplexing, making it a compelling choice for applications demanding real-time responsiveness.</li><li><strong>Scalability </strong>- WebTransport's multiplexing significantly improves scalability by efficiently managing multiple data streams over a single connection. This makes it more scalable than traditional WebSockets under high loads.</li></ul><p>Below is the feature comparison of WebTransport vs. WebSockets.</p><table>
<thead>
<tr>
<th>Feature</th>
<th>WebTransport</th>
<th>WebSockets</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Protocol</strong></td>
<td>Uses the QUIC protocol</td>
<td>Uses the WebSocket protocol</td>
</tr>
<tr>
<td><strong>Multiplexing</strong></td>
<td>Built-in multiplexing support</td>
<td>No built-in multiplexing</td>
</tr>
<tr>
<td><strong>Reliability</strong></td>
<td>Provides reliability with built-in error correction</td>
<td>Requires application-level error handling</td>
</tr>
<tr>
<td><strong>Security</strong></td>
<td>Encrypted by default using TLS</td>
<td>Requires secure WebSocket (wss) for encryption</td>
</tr>
<tr>
<td><strong>Flexibility</strong></td>
<td>Designed for low-level communication</td>
<td>Higher-level messaging protocol</td>
</tr>
<tr>
<td><strong>Stream Support</strong></td>
<td>Supports multiple streams for parallel data</td>
<td>Single, bi-directional communication channel</td>
</tr>
<tr>
<td><strong>Binary Data Support</strong></td>
<td>Efficient support for sending binary data</td>
<td>Supports binary and text data</td>
</tr>
<tr>
<td><strong>Flow Control</strong></td>
<td>Built-in flow control for better resource usage</td>
<td>Requires manual implementation for flow control</td>
</tr>
<tr>
<td><strong>Browser Support</strong></td>
<td>Limited support, evolving in major browsers</td>
<td>Widely supported in modern browsers</td>
</tr>
<tr>
<td><strong>Use Cases</strong></td>
<td>Low-latency, real-time applications, game development</td>
<td>Real-time communication, messaging applications</td>
</tr>
<tr>
<td><strong>Transport Layer</strong></td>
<td>Built on top of UDP, designed for performance</td>
<td>Built on top of TCP, reliable but with potentially higher latency</td>
</tr>
<tr>
<td><strong>Connection Setup</strong></td>
<td>Faster connection setup time</td>
<td>Slightly slower connection setup compared to WebTransport</td>
</tr>
<tr>
<td><strong>Standardization</strong></td>
<td>Still in the process of standardization by IETF</td>
<td>Well-established standard by the IETF</td>
</tr>
</tbody>
</table>
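<p>One practical consequence of the comparison above is that applications often feature-detect at runtime and fall back gracefully. A minimal, illustrative sketch (the function name is ours, not a standard API):</p>

```javascript
// Pick the best available transport from a global scope object.
// Prefer WebTransport where the browser exposes it, else WebSockets.
function pickTransport(scope) {
  if ("WebTransport" in scope) return "webtransport";
  if ("WebSocket" in scope) return "websocket";
  return "none";
}
```

<p>In a browser this would typically be called with <code>window</code>; given current support, WebSockets remain the safe fallback.</p>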
<h3 id="browser-support-and-compatibility">Browser Support and Compatibility</h3><p>As an established protocol, WebSockets enjoy broad support across various browsers. WebTransport is an evolving standard with increasing support, primarily in modern browsers, making it essential to consider your target audience when choosing between the two.</p><h3 id="security-considerations">Security Considerations</h3><p>Both WebSockets and WebTransport support encrypted communication: WebSockets over TLS via the wss:// scheme, and WebTransport encrypted by default through QUIC. However, developers should remain vigilant about implementing secure practices, such as encryption and authentication, regardless of the chosen protocol.</p><h3 id="developer-experience-and-ease-of-implementation">Developer Experience and Ease of Implementation</h3><p>WebSockets, being a mature technology, has extensive documentation and community support, making it relatively easy for developers to implement. WebTransport, being newer, might have a steeper learning curve, but its advanced features offer a compelling case for those seeking enhanced performance.</p><h2 id="choosing-the-right-technology-for-seamless-video-integration">Choosing the Right Technology for Seamless Video Integration</h2><h3 id="factors-influencing-the-decision">Factors Influencing the Decision</h3><ul><li><strong>Application Requirements</strong>: Consider the specific requirements of your application. If low latency is critical and your use case involves frequent updates with small messages, WebSockets might be a suitable choice. For applications demanding enhanced scalability and bandwidth efficiency, WebTransport could be more appropriate.</li><li><strong>Scalability Needs</strong>: Evaluate the scalability needs of your application. WebTransport's multiplexing makes it more scalable under high loads, making it advantageous for applications with a large user base.</li><li><strong>Browser Compatibility</strong>: Ensure that the chosen protocol aligns with the browsers your target audience commonly uses.
While WebSockets have broader support, WebTransport is gaining traction and may be suitable for modern applications</li></ul><h2 id="videosdk-and-webtransport">VideoSDK and WebTransport</h2><p><a href="https://www.videosdk.live/">VideoSDK </a>is a comprehensive solution providing real-time audio-video SDKs for developers. It offers complete flexibility, scalability, and control, making it effortless to integrate <a href="https://www.videosdk.live/audio-video-conferencing">audio-video conferencing</a> and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into web and mobile apps.</p><p>Integrating VideoSDK with WebTransport involves straightforward steps, thanks to the developer-friendly design of VideoSDK. Developers can leverage the multiplexing capabilities of WebTransport to enhance the overall performance of their real-time communication features.</p><p>Applications using VideoSDK with WebTransport have reported significant performance improvements. The combination of VideoSDK's robust features and WebTransport's efficiency has resulted in smoother and more responsive audio-video communication.</p><h2 id="choosing-the-right-solution-for-your-application">Choosing the Right Solution for Your Application</h2><p>Consider the following factors when choosing between WebTransport and WebSockets for your application,</p><ul><li><strong>Nature of Communication:</strong> For applications requiring low latency and frequent updates, WebSockets may be suitable. For bandwidth-efficient and scalable communication, WebTransport could be a better fit.</li><li><strong>Scalability:</strong> Evaluate the scalability needs of your application, especially if it involves a large user base. WebTransport's multiplexing capabilities make it advantageous for scalable solutions.</li><li><strong>Compatibility:</strong> Ensure that the chosen protocol aligns with the browsers commonly used by your target audience. 
WebSockets have broader support, while WebTransport is gaining traction in modern browsers.</li></ul><h2 id="how-videosdk-integrates-with-both-webtransport-and-websockets">How VideoSDK Integrates with Both WebTransport and WebSockets</h2><p>VideoSDK provides seamless integration with both WebTransport and WebSockets, offering developers the flexibility to choose the protocol that best aligns with their application requirements. The integration process is streamlined, allowing developers to focus on building robust real-time communication features.</p><p>In the dynamic realm of real-time communication, the choice between WebTransport and WebSockets depends on various factors, including application requirements, scalability needs, and browser compatibility. VideoSDK emerges as a versatile solution, seamlessly integrating with both protocols to cater to diverse developer needs. Experience firsthand how VideoSDK empowers developers with the flexibility, scalability, and control needed for building cutting-edge audio-video conferencing and live-streaming features.</p>]]></content:encoded></item><item><title><![CDATA[What is WebRTC? and How does WebRTC work and use?]]></title><description><![CDATA[This blog believes in a well-defined explanation of the set of theories involved in WebRTC. It is not a tutorial and does not contain many codes. As mentioned above, the series makes full efforts for the readers to make them avail the best knowledge.]]></description><link>https://www.videosdk.live/blog/webrtc</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb78</guid><category><![CDATA[WebRTC]]></category><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Fri, 11 Oct 2024 10:45:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/11/webRTC-thumbnail1--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/11/webRTC-thumbnail1--1-.jpg" alt="What is WebRTC? 
and How does WebRTC work and use?"/><p>This blog aims to provide firm guidance on WebRTC. Written and reviewed by developers, it orients even a complete beginner with the concept from the ground up, and covers what makes WebRTC worth working with.</p><p>The protocols and APIs involved are explained in plain language. It is not a deep dive into any one implementation; rather, it is an uncomplicated summary of the RFCs, covering details that often go undocumented, in easy-to-understand terms.</p><p>It is organized so that developers can make WebRTC more predictable and easier to reason about. We have written it with several kinds of readers in mind:</p><ul><li>Anyone who is curious about WebRTC technology</li><li>A novice developer completely new to WebRTC</li><li>A developer who knows what WebRTC is and wants to dig into deeper knowledge</li><li>A developer looking for a solution to a specific part of WebRTC</li><li>An implementer who wants clarity while debugging</li></ul><p>This blog focuses on well-defined explanations of the concepts involved in WebRTC. It is not a tutorial and does not contain much code.</p><h2 id="what-is-webrtc">What is WebRTC?</h2><p><a href="https://webrtc.org/">WebRTC</a> (Web Real-Time Communication) is an open-source technology that enables <a href="https://www.videosdk.live/blog/introduction-to-real-time-communication-sdk" rel="noreferrer">real-time communication</a> and data exchange across different browsers and devices. It enables the transmission of sound, video, and data via the Internet.
It is a protocol that allows two browsers to communicate in real time.</p><p>WebRTC is a specification that establishes communication without any external installation or plug-ins. With negligible latency, WebRTC enables the exchange of data between two different sources. This open-source protocol is supported by all major browsers.</p><p>WebRTC enables remote <a href="https://www.videosdk.live/developer-hub/webrtc/webrtc-p2p" rel="noreferrer">peer-to-peer connections</a> for voice and video chat, making corporate and cultural collaboration simpler even at a distance. It has become one of the most important tools for communication and data sharing today. Have you ever wondered how virtual interactions function with the help of WebRTC? Let’s understand.</p><h2 id="how-does-webrtc-work">How does WebRTC work?</h2><p>WebRTC embeds communications technologies into web browsers by utilizing JavaScript, APIs, and Hypertext Markup Language. It is intended to make audio, video, and data communication across browsers simple and straightforward. <a href="https://bloggeek.me/how-webrtc-works/">WebRTC is compatible</a> with the majority of popular web browsers.</p><h3 id="a-media-stream-in-webrtc">A) Media Stream in WebRTC</h3><p>MediaStream is an API that provides access to the camera and microphone of the device. It manages how media is captured and rendered on the device, and supports streaming of both audio and video data.</p><h3 id="b-peer-connection-on-webrtc">B) Peer Connection on WebRTC</h3><p>WebRTC was developed to establish peer-to-peer connections over the web. The RTC peer connection has the primary objective of creating direct communication without the aid of any intermediary connection.
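</p><p>A browser-only sketch of these two APIs working together (the STUN server is a well-known public example; the <code>signaling</code> object is a placeholder for whatever channel the application provides):</p>

```javascript
// Capture local media and feed it into a peer connection, then create an
// offer to hand to the remote peer over the app's own signaling channel.
async function startCall(signaling) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // example STUN
  });
  // MediaStream API: camera + microphone.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });
  for (const track of stream.getTracks()) pc.addTrack(track, stream);
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify(offer)); // deliver SDP to the remote peer
  return pc;
}
```

<p>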
Through it, peers can both produce media, specifically audio and video, and consume it.</p><h3 id="c-data-channel-in-webrtc">C) Data Channel in WebRTC</h3><p>The RTC data channel provides bi-directional transfer of arbitrary data between peers. It runs over SCTP (Stream Control Transmission Protocol), which adds congestion control and configurable reliability on top of the underlying UDP transport, ensuring dependable delivery of streams over the web.</p><h2 id="the-steps-involved-in-establishing-communication-through-webrtc">The steps involved in establishing communication through WebRTC</h2><p>The WebRTC protocol is a collection of several technologies that combine to establish secure communication over the web. A few steps are involved in building up the framework.</p><p>These steps happen sequentially; the next step can begin only once the preceding one has fully completed, and each step is itself built from a combination of protocols. This is why WebRTC can feel extensive and difficult to understand.</p><p>Let us look at why each step matters and how each one leads into the next to produce audio and video calls on a device.</p><h3 id="a-signalling-of-webrtc">A) Signalling of WebRTC</h3><p>Signaling refers to setting up and controlling a communication session. Peers joining a real-time session exchange their session details through a signaling server, which relays them to the receiving peer.</p><p>This communication can be bi-directional: each source describes the streams it will send, in a suitable resolution, to the other peer. <a href="https://webrtchacks.com/signalling-options-for-webrtc-applications/">Signalling uses the SDP protocol</a>, a plain-text format containing a list of media sections. 
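</p><!--kg-card-begin: markdown-->
Because SDP is plain text, its media-section structure is easy to see. The deliberately simplified splitter below (not a full SDP parser) illustrates the layout that signaling passes around:

```javascript
// Minimal illustration of SDP's plain-text layout: everything before the
// first "m=" line describes the session; each "m=" line opens a media
// section. A simplified sketch, not a complete RFC 8866 parser.
function splitSdp(sdp) {
  const lines = sdp.trim().split(/\r?\n/);
  const session = [];
  const media = [];
  let current = session;
  for (const line of lines) {
    if (line.startsWith('m=')) {
      current = [];        // an "m=" line starts a new media section
      media.push(current);
    }
    current.push(line);
  }
  return { session, media };
}

const example = [
  'v=0',
  'o=- 4611731400430051336 2 IN IP4 127.0.0.1',
  's=-',
  'm=audio 9 UDP/TLS/RTP/SAVPF 111',
  'a=sendrecv',
  'm=video 9 UDP/TLS/RTP/SAVPF 96',
].join('\n');

const parsed = splitSdp(example);
// parsed.media holds two sections: one audio, one video
```
<!--kg-card-end: markdown--><p>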
It exchanges several details between the peers:</p><ul><li>The location of the peers, i.e., the IP addresses of both agents</li><li>The audio and video tracks an agent expects to receive</li><li>The audio and video tracks an agent will send</li><li>The data channels, along with the media types and resolutions to be exchanged</li></ul><p>Signaling allows peers to exchange the metadata needed to coordinate communication. An app using WebRTC requires browser support, but under the hood the browsers coordinate through the signals they send via servers. That is the role of signaling: it lets the peers exchange what they need to establish direct communication, with the help of STUN and TURN servers.</p><h3 id="b-connecting-on-webrtc">B) Connecting on WebRTC</h3><p>Connecting refers to securing bi-directional communication between two peers. In WebRTC, communication happens over a <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers">P2P connection</a> rather than a client-server connection, negotiated between the two communicating agents through their transport addresses.</p><p>In general terms, establishing such a connection can be difficult because the peers sit on different networks with different transport addresses. These difficulties can be overcome with the ICE protocol and NAT traversal servers.</p><p><strong>ICE</strong>- ICE (Interactive Connectivity Establishment) is a protocol that tries to find the best possible way to connect two agents or peers. Each ICE agent publishes the addresses at which it is reachable, known as candidates. A candidate is nothing but a transport address at which the other connecting agent can reach this one.</p><p>ICE finds the best possible connection between the two peers even when their locations make a direct connection difficult. 
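</p><!--kg-card-begin: markdown-->
Candidate selection is driven by a simple arithmetic priority defined in RFC 8445. The sketch below uses the RFC's recommended type preferences to show why a direct host candidate always outranks a relayed one:

```javascript
// How an ICE agent ranks its candidates (RFC 8445, section 5.1.2):
// priority = 2^24 * typePreference + 2^8 * localPreference + (256 - componentId)
// Type preferences below are the RFC's recommended values.
const TYPE_PREFERENCE = { host: 126, prflx: 110, srflx: 100, relay: 0 };

function candidatePriority(type, localPreference, componentId) {
  return (
    (1 << 24) * TYPE_PREFERENCE[type] +
    (1 << 8) * localPreference +
    (256 - componentId)
  );
}

const host = candidatePriority('host', 65535, 1);
const relay = candidatePriority('relay', 65535, 1);
// A direct host candidate always outranks a TURN relay candidate
```
<!--kg-card-end: markdown--><p>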
To solve these difficulties efficiently, it uses two kinds of servers: STUN and TURN.</p><p><strong>STUN</strong>- STUN is an acronym for <a href="https://www.videosdk.live/developer-hub/stun-turn-server/what-is-a-stun-server" rel="noreferrer">Session Traversal Utilities for NAT</a>. These lightweight servers allow a WebRTC agent to discover its public IP address by making STUN requests; they work with NATs to create NAT mappings.</p><p>The mapping can then be shared with other agents, which generate traffic back through it in reverse. In this way an agent obtains its public network IP, allowing media to be shared between peers with the help of ICE.</p><p><strong>TURN</strong>- TURN is an acronym for <a href="https://www.videosdk.live/developer-hub/stun-turn-server/what-is-a-turn-server" rel="noreferrer">Traversal Using Relays around NAT.</a> TURN servers help establish a connection between two agents when a direct connection is not possible due to firewall restrictions.</p><p>This can happen when the two agents' networks are incompatible or do not support a common direct path. TURN also protects the privacy of the agents, since neither side learns the other's IP address: the TURN server allocates a temporary relay address and carries traffic to and fro, acting as a proxy.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/11/what-is-webRTC.jpg" class="kg-image" alt="What is WebRTC? and How does WebRTC work and use?" loading="lazy" width="1600" height="900"/><figcaption><span style="white-space: pre-wrap;">How does WebRTC work?</span></figcaption></figure><h3 id="c-securing-on-webrtc">C) Securing on WebRTC</h3><p>WebRTC ensures security. 
It ensures that all communication shared between the two agents is encrypted and remains confidential from any third party. It relies on two established protocols, DTLS and SRTP. These two <a href="https://webrtcforthecurious.com/docs/04-securing/">protocols make sure that the connection</a> between the two agents is secured and encrypted, leaving no room for tampering.</p><p><strong>DTLS</strong>-  Datagram Transport Layer Security (DTLS) allows WebRTC to establish secure, encrypted communication between two peers. To communicate, the client and the server need to agree on certain values, known as a cipher suite, during a DTLS handshake. DTLS is what secures the peers' data streams.</p><p><strong>SRTP</strong>-  SRTP is an acronym for Secure Real-time Transport Protocol. It secures and encrypts the media streams between two connecting peers and is initialized using keys generated by DTLS. This protocol is specifically designed for encrypting RTP packets.</p><h3 id="d-webrtc-communication">D) WebRTC Communication</h3><p>WebRTC is developed for transferring data, audio, and video over the web, with no hard limit on how much data can be shared. It allows a user to add and remove streams at any time during a call. These streams can be bundled together using two core protocols of WebRTC communication.</p><p><strong>RTP</strong>- <a href="https://www.videosdk.live/developer-hub/rtmp/rtc-server" rel="noreferrer">RTP</a> stands for Real-time Transport Protocol. This protocol is designed for real-time delivery of media and gives the agent streams that can carry multiple media feeds over one connection. RTP itself does not guarantee latency or reliability for media transfer, but it provides the tools to implement them.</p><p><strong>RTCP</strong>- RTCP is the RTP Control Protocol. It allows administrators to monitor the quality of calls from the collected metadata. 
Through it, an agent can attach whatever metadata it wants to communicate the statistics of the call; the protocol also tracks packet loss, latency, and other VoIP concerns.</p><p>WebRTC makes all these provisions to establish better connectivity, focusing on:</p><ul><li>Quality over latency</li><li>Authenticity of messages</li><li>Reduced bandwidth cost</li><li>Secured E2E communication</li><li>Coordinating SDP values and more</li></ul><p>For efficient functioning, WebRTC requires a dedicated subsystem for the peer-to-peer connection. It must deal with the servers and protocols mentioned above to keep communication evenly balanced between the two agents and deliver the best possible connection.</p><h2 id="history-of-webrtc">History of WebRTC</h2><p>Rapid technological advances in the ’90s produced many inventions that have become essential to carrying out day-to-day activities easily. <a href="https://www.callstats.io/blog/2018/05/11/history-of-webrtc-infographic">The evolution of the World Wide Web and text messaging played a crucial role in communication and knowledge orientation</a>.</p><p>As these inventions grew strong, a need emerged for virtual communication over video conferencing and meetings. There was a huge demand to connect with people in different places in a shorter span to get work done faster, met at first by plug-ins and assorted installations that made video conferencing possible. This period also led to the emergence of WebRTC.</p><p>The technology behind WebRTC was created in 1999 by Global IP Solutions (GIPS) in Sweden. WebRTC (Web Real-Time Communication) is an open-source project that enables peer-to-peer communication over the web through APIs. The technology was later taken over by Google in 2011. 
In the decade since 2011, WebRTC has shown a massive rise.</p><p>Google open-sourced the GIPS technology in 2011 as a browser-based real-time communication project and submitted it for standardization. In the same year, Ericsson produced an early WebRTC implementation and proposed modifications, and the W3C put forth its first draft of WebRTC; the first cross-browser video call followed in 2013.</p><p>2014 was remarkable for WebRTC's integration with Google Hangouts. Google’s Chrome and fellow browsers like Firefox, Opera, and others have given this RTC technology a big thumbs up.</p><p>As WebRTC evolved, several companies made efforts at curating the open-source project, with varying outcomes, and WebRTC drew scrutiny over security and privacy in communication and technical data sharing.</p><p>The main focus in gearing up WebRTC was standardization. Google started working with WebRTC to develop Gmail voice and video chat, which was difficult to set up because components like audio and video codecs required licensing from different companies.</p><p>It was a web of several protocols demanding knowledge in fields like networking, media, cryptography, and more. Meanwhile, Google's Chrome project was also getting under way, full of terms like WebGL, offline data capabilities, and inputs for low-latency gaming. To manage these devices effectively, <a href="https://scalefusion.com/chromeos-mdm-solution" rel="noreferrer">Chrome OS MDM solutions</a> emerged to provide centralized control and security.</p><p>WebRTC was further standardized with user privacy in mind, so that user data cannot be misused if anything goes wrong.</p><h2 id="present-of-webrtc">Present of WebRTC</h2><p>This period of rise gave WebRTC an exemplary boost. 
And that has not been a surprise either. Presently, <a href="https://webrtc.ventures/2022/09/arin-sime-and-alberto-gonzalez-to-present-at-tadsummit-2022/">we experience most of our daily communications over the web</a>, and much of that is due to one technology: WebRTC.</p><p>Its reach is not limited to peer-to-peer calling or video conferencing; WebRTC has also spread its wings over user privacy and secure data transmission, and it is now more accessible to developers than ever.</p><p>WebRTC is a framework of protocols and APIs; it requires no plug-ins or external installations to facilitate real-time communication. That makes it desirable and appealing to integrate into applications, and easily accessible on mobile phones, tablets, laptops, and other communication devices.</p><p>What could be a more genuine approach? Rapid advancements have made WebRTC easy and economical for developers to rely on. With a good internet connection it delivers supreme audio and video quality; nevertheless, it consumes little data and works at low latency.</p><p>Industries are experiencing massive gains, growing their global outreach and brand awareness. WebRTC has brought an astounding approach to enhancing communication and is surely going to excel in the future.</p><p>WebRTC being an open-source project, developers find it easy to use and modify to their needs. With customization, security, and easy-to-use API keys, they have managed well in developing virtual platforms and applications, and WebRTC has become an idealistic approach for the future too.</p><h2 id="future-of-webrtc">Future of WebRTC</h2><p>WebRTC is an open-source platform, available to everyone for free use. A lot has changed after the pandemic hit; greater digital curiosity among people has generated innovation.</p><p>Real-time communication has made work easier. 
<a href="https://medium.com/ringcentral-developers/the-future-of-webrtc-477ff42fbddb">Though the technology is difficult and exhausting to set up, the future seems optimistic</a>. Several new launches in WebRTC have taken place which will certainly keep it sustainable for the future too.</p><p>I am hopeful the future will see new technologies in WebRTC, with the introduction of 5G expanding OTT offerings. The foundation of WebRTC grows stronger each day, so we can expect the future to be versatile and more efficient.</p><p>I see WebRTC playing a stronger part in the telecom industry. The sector will extend itself with new technologies built on WebRTC, advancing with interactive client chat and cloud data management.</p><p>Cloud computing has already gained massive popularity. With increasing internet usage, a huge amount of data needs to be stored, and WebRTC has made cloud computing immensely important to the smooth functioning of voice and video calls.</p><p>The gaming industry has shot up thanks to WebRTC. I believe WebRTC will make the <a href="https://invogames.com/blog/future-of-game-development-unity-6-release/" rel="noreferrer">future of game development</a> more immersive: real-time communication channels can be integrated into gaming applications, putting enjoyment and entertainment in a single application.</p><p>The benefits of WebRTC will also be observed in trade, content, machine learning, backend integrations, and more. I believe the future will see a rapid increase in WebRTC use cases. 
New industries will emerge and grow through consistent use of this open-source technology.</p><h2 id="take-advantage-of-webrtc-with-videosdk">Take advantage of WebRTC with VideoSDK</h2><p>Build live, collaborative, and engaging applications with VideoSDK for <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/getting-started">JavaScript</a>, <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/getting-started">React JS</a>, Android, iOS, Flutter, and <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/getting-started">React Native</a>.</p><p>VideoSDK provides the software layer and servers, low-latency media relay, and signaling needed to power WebRTC-based programmable SDKs and <a href="https://docs.videosdk.live/api-reference/realtime-communication/intro">REST API</a>s for building scalable audio &amp; video conferencing applications.</p><p>Our cutting-edge technology and algorithms, such as adaptive bitrate streaming, low-latency UDP streaming, and real-time data streaming, are drafted to perfection.</p><p>VideoSDK WebRTC provides the full benefits of enterprise-grade insights, security, and reliability with a global, elastically scalable platform and intelligent bandwidth optimization. Plus, only pay for what you need with pay-as-you-go pricing.</p><p>The VideoSDK platform offers a cost-effective alternative that saves time and effort, with proprietary networks, platform-abuse prevention, reliable encoding/decoding technologies, professional enterprise-level technical support, and dedicated teams that you can trust for your project or application.</p><p>More importantly, it is <strong>FREE</strong> to start. 
You are guaranteed <a href="https://app.videosdk.live"><strong>10,000 free minutes EVERY MONTH</strong>.</a></p>]]></content:encoded></item><item><title><![CDATA[WebRTC vs RTMP: An In-Depth Comparison]]></title><description><![CDATA[Explore a comprehensive comparison between WebRTC and RTMP protocols, helping you make informed decisions for your real-time communication or streaming needs.]]></description><link>https://www.videosdk.live/blog/webrtc-vs-rtmp</link><guid isPermaLink="false">65a79b8e6c68429b5fdf1317</guid><category><![CDATA[RTMP]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Thu, 10 Oct 2024 13:03:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/image--2-.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/01/image--2-.png" alt="WebRTC vs RTMP: An In-Depth Comparison"/><p>In the world of live streaming, two protocols have emerged as the frontrunners: WebRTC (Web Real-Time Communication) and RTMP (Real-Time Messaging Protocol). Both offer unique advantages and suit different use cases. In this article, we'll compare RTMP and WebRTC and look at the fundamental differences between them to help you make an informed decision for your live streaming requirements.</p><h2 id="what-is-webrtc">What is WebRTC?</h2><p><a href="https://www.videosdk.live/blog/webrtc">Web Real-Time Communication</a>, commonly known as WebRTC, is an open-source project that enables real-time communication through web browsers. It facilitates peer-to-peer communication for video, audio, and data sharing without the need for additional plugins or software installations.</p><h3 id="understanding-webrtc">Understanding WebRTC</h3><p>WebRTC allows direct communication between browsers, reducing the need for intermediary servers and delivering high-quality, low-latency audio and video for a seamless user experience. 
Besides audio and video, WebRTC supports data channels for real-time information exchange. Major browsers such as Chrome, Firefox, Safari, and Edge support WebRTC, ensuring wide accessibility.</p><h3 id="usecases">Use Cases</h3><p>WebRTC is widely adopted in applications like VideoSDK.live, Zoom, and Microsoft Teams, and its real-time communication enhances multiplayer gaming experiences.</p><h2 id="what-is-rtmp">What is RTMP?</h2><p><a href="https://www.videosdk.live/blog/what-is-rtmp">Real-Time Messaging Protocol</a> (RTMP) is a multimedia streaming protocol developed by Adobe. Initially designed for streaming audio, video, and data over the internet, RTMP has become a benchmark in live streaming applications.</p><h3 id="understanding-rtmp">Understanding RTMP</h3><p>RTMP offers low-latency streaming, making it suitable for live broadcasts. It uses adaptive bitrate streaming to deliver content based on the user's internet speed, and it is compatible with a wide range of platforms and devices.</p><h3 id="use-cases">Use Cases</h3><p>RTMP is widely used for live-streaming events, concerts, and sports. It is popular for streaming live gaming sessions on platforms like Twitch. 
OTT platforms and media services also often utilize RTMP for content delivery.</p><h2 id="rtmp-vs-webrtc-direct-comparison">RTMP vs WebRTC: Direct Comparison</h2><h3 id="latency">Latency</h3><ul><li><strong>Explanation of Latency in Video Streaming: </strong>Latency is the delay between the transmission of data and its reception, crucial in real-time applications.</li><li><strong>How WebRTC Addresses Latency Challenges: </strong>WebRTC minimizes latency through direct peer-to-peer communication, ensuring near-instantaneous data transfer.</li><li><strong>RTMP's Approach to Latency: </strong>RTMP achieves relatively low latency by optimizing data transmission and supporting adaptive bitrate streaming, though it typically cannot match WebRTC's sub-second delays.</li></ul><h3 id="browser-compatibility">Browser Compatibility</h3><ul><li><strong>WebRTC's Compatibility with Web Browsers: </strong>WebRTC is natively supported by major browsers like Chrome, Firefox, and Safari, providing a seamless user experience.</li><li><strong>RTMP's Compatibility with Different Platforms: </strong>RTMP is versatile and compatible with various platforms, making it accessible across different devices and environments.</li></ul><h3 id="scalability">Scalability</h3><ul><li><strong>WebRTC's Scalability Features: </strong>WebRTC offers scalability through its decentralized architecture, allowing easy expansion as user numbers grow.</li><li><strong>RTMP's Scalability Options: </strong>RTMP's dynamic streaming adapts to varying network conditions, ensuring a smooth viewing experience even during peak demand.</li></ul><h3 id="security">Security</h3><ul><li><strong>Security Considerations with WebRTC: </strong>WebRTC ensures security through encryption protocols, safeguarding user data during transmission.</li><li><strong>Security Features of RTMP: </strong>RTMP incorporates secure streaming protocols, providing a protected environment for content delivery.</li></ul><p>Below is a comparison table between WebRTC and RTMP.</p><!--kg-card-begin: 
markdown--><table>
<thead>
<tr>
<th><strong>Feature</strong></th>
<th><strong>WebRTC</strong></th>
<th><strong>RTMP</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Latency</strong></td>
<td>Low latency, suitable for real-time communication applications.</td>
<td>Can have higher latency, more suitable for traditional streaming.</td>
</tr>
<tr>
<td><strong>Browser Support</strong></td>
<td>Widely supported in modern browsers (Chrome, Firefox, Safari, Edge).</td>
<td>Originally required Flash Player (now deprecated), diminishing support.</td>
</tr>
<tr>
<td><strong>Mobile Support</strong></td>
<td>Well-supported on mobile devices and platforms.</td>
<td>Can be challenging on some mobile devices due to Flash limitations.</td>
</tr>
<tr>
<td><strong>Firewall/NAT Traversal</strong></td>
<td>Built-in support for NAT traversal and firewall penetration.</td>
<td>May require additional configuration for firewall and NAT traversal.</td>
</tr>
<tr>
<td><strong>Encryption</strong></td>
<td>Supports encryption via SRTP (Secure Real-time Transport Protocol).</td>
<td>Originally lacked built-in encryption; can be secured using RTMPS.</td>
</tr>
<tr>
<td><strong>Codec Support</strong></td>
<td>Supports VP8, VP9, H.264 for video and Opus, G.711 for audio.</td>
<td>Typically uses H.264 for video and AAC for audio, but supports others.</td>
</tr>
<tr>
<td><strong>Peer-to-Peer Communication</strong></td>
<td>Supports peer-to-peer communication.</td>
<td>Primarily designed for server-based streaming; peer-to-peer is possible but less common.</td>
</tr>
<tr>
<td><strong>Adaptability</strong></td>
<td>Suitable for various real-time communication scenarios, including peer-to-peer.</td>
<td>Primarily designed for server-based streaming, may require additional server infrastructure.</td>
</tr>
<tr>
<td><strong>Security Features</strong></td>
<td>Includes built-in security features for real-time communication.</td>
<td>Originally lacked built-in security; security can be added with additional protocols (e.g., RTMPS).</td>
</tr>
<tr>
<td><strong>Usage</strong></td>
<td>Commonly used for video conferencing, online gaming, and live broadcasting.</td>
<td>Historically used for live streaming and on-demand video, now declining in popularity.</td>
</tr>
<tr>
<td><strong>Popularity</strong></td>
<td>Increasingly popular for real-time communication applications.</td>
<td>Declining in popularity due to Flash deprecation and emergence of newer streaming protocols.</td>
</tr>
<tr>
<td><strong>Development Environment</strong></td>
<td>Requires browser or native app integration for development.</td>
<td>Primarily used with Flash-based applications; alternatives used for newer developments.</td>
</tr>
<tr>
<td><strong>Open Source</strong></td>
<td>WebRTC is an open-source project.</td>
<td>RTMP is not open source; Adobe's specification was open, but Flash is deprecated.</td>
</tr>
<tr>
<td><strong>Community Support</strong></td>
<td>Has a large and active community supporting the technology.</td>
<td>Community support has diminished with the decline of Flash and RTMP.</td>
</tr>
</tbody>
</table>
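As a rough rule of thumb, the table reduces to a couple of questions. The helper below is purely illustrative (not part of any SDK or standard) and simply encodes that rule:

```javascript
// Purely illustrative rule of thumb distilled from the table above:
// WebRTC when you need sub-second, browser-native, two-way media;
// RTMP mainly survives as an ingest protocol into streaming platforms.
function suggestProtocol({ needsRealtime, runsInBrowser, isIngestOnly }) {
  if (isIngestOnly) return 'RTMP';
  if (needsRealtime || runsInBrowser) return 'WebRTC';
  return 'either';
}

suggestProtocol({ needsRealtime: true, runsInBrowser: true, isIngestOnly: false });  // 'WebRTC'
suggestProtocol({ needsRealtime: false, runsInBrowser: false, isIngestOnly: true }); // 'RTMP'
```
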
<!--kg-card-end: markdown--><h2 id="advantages-and-disadvantages-webrtc-rtmp">Advantages and Disadvantages of <strong>WebRTC &amp; RTMP</strong></h2><p><strong>Advantages of WebRTC</strong></p><ul><li>Low latency for real-time communication.</li><li>Native browser support enhances user accessibility.</li><li>Decentralized architecture allows for scalability.</li></ul><p><strong>Limitations of WebRTC</strong></p><ul><li>Complex implementation in certain scenarios.</li><li>Limited support for certain legacy browsers.</li></ul><p><strong>Advantages of RTMP</strong></p><ul><li>Low-latency streaming suitable for live broadcasts.</li><li>Versatile and compatible with various platforms.</li></ul><p><strong>Limitations of RTMP</strong></p><ul><li>Dependence on Adobe Flash, which is now obsolete.</li><li>Requires server infrastructure for streaming.</li></ul><h2 id="choosing-between-webrtc-and-rtmp-what-to-consider"><strong>Choosing Between WebRTC and RTMP: What to Consider</strong></h2><h3 id="factors-to-consider">Factors to Consider</h3><ul><li><strong>Latency Requirements:</strong> If low latency is crucial, WebRTC may be the better choice.</li><li><strong>Compatibility:</strong> Consider the platforms and devices your audience uses.</li><li><strong>Scalability:</strong> Assess the potential growth of your user base.</li></ul><h3 id="how-to-assess-your-streaming-requirements">How to Assess Your Streaming Requirements</h3><ul><li><strong>Evaluate Your Use Case:</strong> Different applications may have varying demands for latency and scalability.</li><li><strong>Consider User Experience:</strong> Choose a protocol that aligns with the desired user experience.</li></ul><h3 id="integrating-rtmp-webrtc">Integrating RTMP &amp; WebRTC</h3><p>Some applications use RTMP for ingesting live streams and then convert them to WebRTC for real-time distribution to browsers. 
Combining RTMP and WebRTC can leverage the strengths of both protocols, pairing RTMP's broadly supported ingest with WebRTC's low-latency, peer-to-peer delivery.</p><p>An RTMP-WebRTC gateway can facilitate the interoperability between RTMP streams and WebRTC clients, ensuring seamless content delivery across different platforms.</p><h2 id="videosdk-leveraging-the-power-of-webrtc-rtmp">VideoSDK: Leveraging the Power of WebRTC &amp; RTMP</h2><h3 id="what-is-videosdk">What is VideoSDK?</h3><p><a href="https://www.videosdk.live/">VideoSDK </a>is a revolutionary live video infrastructure designed for developers across the USA &amp; India, providing the ultimate flexibility, scalability, and control over <a href="https://www.videosdk.live/audio-video-conferencing">audio-video conferencing </a>and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> in web and mobile apps.</p><h3 id="how-does-videosdk-incorporate-both-webrtc-and-rtmp">How does VideoSDK incorporate both WebRTC and RTMP?</h3><p>VideoSDK seamlessly integrates the strengths of WebRTC and RTMP, offering developers the best of both worlds. Whether it's the low-latency real-time communication of WebRTC or the versatile streaming capabilities of RTMP, VideoSDK ensures a comprehensive solution.</p><h3 id="leveraging-videosdk-features-and-integration">Leveraging VideoSDK: Features and Integration</h3><ul><li><strong>Real-Time Communication:</strong> Enable peer-to-peer communication with minimal latency.</li><li><strong>Scalability:</strong> Easily scale your applications to accommodate growing user bases.</li><li><strong>Security:</strong> Leverage robust encryption protocols to safeguard user data.</li></ul><h2 id="webrtc-and-rtmp-faqs">WebRTC and RTMP FAQs</h2><h3 id="does-videosdk-support-webrtc-vs-rtmp">Does VideoSDK support WebRTC and RTMP?</h3><p>Yes, VideoSDK supports both WebRTC and RTMP protocols, offering flexibility for real-time communication. 
Developers can seamlessly integrate VideoSDK to enable diverse video applications with ease.</p><h3 id="does-videosdk-support-popular-web-frameworks-like-react-and-javascript">Does VideoSDK support popular web frameworks like React and JavaScript?</h3><p>Yes, VideoSDK is designed to integrate seamlessly with popular frameworks and platforms, including <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream">React</a>, <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream">JavaScript</a>, <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream">Flutter</a>, <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream">React Native</a>, Android, and iOS. It provides dedicated libraries and documentation for smooth integration.</p><h3 id="how-can-i-integrate-videosdk-into-my-custom-developed-application">How can I integrate VideoSDK into my custom-developed application?</h3><p>VideoSDK offers <a href="https://docs.videosdk.live/?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic">comprehensive documentation</a>, including step-by-step guides and sample code snippets, making it straightforward for developers to integrate the SDK into custom applications.</p><h3 id="what-is-the-main-difference-between-the-rtmp-and-webrtc">What is the Main Difference Between RTMP and WebRTC?</h3><p>The main difference between RTMP and WebRTC lies in their purposes and technology. 
WebRTC is designed for real-time communication, while RTMP is traditionally used for streaming multimedia content over the Internet.<br/></p>]]></content:encoded></item><item><title><![CDATA[WebRTC vs WebSocket: Ideal Protocol for Real-Time Communication]]></title><description><![CDATA[WebRTC excels in peer-to-peer communication with audio/video, while WebSocket ensures efficient, bidirectional data transfer, making both ideal for real-time communication.]]></description><link>https://www.videosdk.live/blog/webrtc-vs-websocket</link><guid isPermaLink="false">65a904e56c68429b5fdf16df</guid><category><![CDATA[WebRTC]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Thu, 10 Oct 2024 11:59:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/WebRTC-vs-Websocket.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-webrtc">What is WebRTC?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/WebRTC-vs-Websocket.png" alt="WebRTC vs WebSocket: Ideal Protocol for Real-Time Communication"/><p>WebRTC, an acronym for <a href="https://www.videosdk.live/blog/webrtc">Web Real-Time Communication</a>, is an open-source project that empowers real-time communication between browsers and mobile applications. Its fundamental purpose lies in facilitating peer-to-peer communication by providing a set of APIs and protocols for seamless audio and video streaming, as well as efficient data exchange.</p><h3 id="key-advantages-and-functions-of-webrtc">Key Advantages and Functions of WebRTC</h3><p>WebRTC boasts an impressive array of features, including but not limited to low-latency communication, high-quality audio and video capabilities, and support for a variety of codecs. 
Its capabilities extend beyond mere audio and video, encompassing screen sharing, file transfer, and the establishment of connections without the need for plugins or third-party software.</p><h3 id="practical-uses-of-webrtc">Practical Uses of WebRTC</h3><p>WebRTC finds application in diverse scenarios, such as video conferencing, online gaming, telehealth services, and more. Its flexibility and ease of integration make it a popular choice for developers aiming to incorporate real-time communication features into their applications.</p><h3 id="strengths-and-limitations-of-webrtc">Strengths and Limitations of WebRTC</h3><p><strong>Pros of WebRTC:</strong></p><ul><li>Native support in web browsers.</li><li>Peer-to-peer communication without intermediaries.</li><li>Versatility for multiple use cases.</li></ul><p><strong>Cons of WebRTC:</strong></p><ul><li>Limited scalability for larger audiences.</li><li>Firewall and NAT traversal challenges.</li></ul><h2 id="what-is-websockets">What are WebSockets?</h2><p><a href="https://www.videosdk.live/blog/what-is-a-websocket" rel="noreferrer">WebSockets</a> represent a communication protocol that enables bidirectional, full-duplex communication between clients and servers. Unlike traditional HTTP connections, WebSockets maintain a persistent connection, allowing real-time data exchange without the need for constant polling.</p><h3 id="how-websockets-enable-interactive-communication">How WebSockets Enable Interactive Communication</h3><p>WebSockets achieve bidirectional communication through a persistent connection, where both the server and the client can send and receive data at any time. This approach reduces latency and overhead associated with traditional request-response models.</p><h3 id="typical-applications-for-websockets">Typical Applications for WebSockets</h3><p>WebSockets excel in applications requiring real-time updates, such as chat applications, financial trading platforms, and online gaming. 
Their ability to push data instantly to connected clients makes them a preferred choice for a dynamic and interactive experience.</p><h3 id="strengths-and-limitations-of-websockets">Strengths and Limitations of WebSockets</h3><p><strong>Pros of WebSockets:</strong></p><ul><li>Low-latency communication.</li><li>Efficient use of resources with a persistent connection.</li><li>Ideal for applications with constant data updates.</li></ul><p><strong>Cons of WebSockets:</strong></p><ul><li>Lack of native support in some older browsers.</li><li>Challenges with proxy servers and firewalls.</li></ul><h2 id="websockets-vs-webrtc-understanding-the-key-differences">WebSockets vs WebRTC: Understanding the Key Differences</h2><h3 id="architecture-and-design-principles">Architecture and Design Principles</h3><p>WebRTC focuses on peer-to-peer communication, allowing devices to connect directly. In contrast, WebSockets employ a client-server architecture, maintaining a persistent connection between the client and the server.</p><h3 id="latency-and-bandwidth-considerations">Latency and Bandwidth Considerations</h3><p>WebRTC, optimized for low-latency communication, excels in scenarios where real-time interaction is critical. WebSockets, while still low-latency, may not match the instantaneous responsiveness of WebRTC.</p><h3 id="scalability-and-flexibility-comparison">Scalability and Flexibility Comparison</h3><p>WebRTC's peer-to-peer nature can pose challenges in scalability for larger audiences. WebSockets, with a centralized server, can scale more efficiently to accommodate a growing user base.</p><h3 id="security-features-and-considerations">Security Features and Considerations</h3><p>WebRTC employs encryption for secure communication, making it suitable for privacy-sensitive applications. 
WebSockets, while secure, may require additional measures for data protection, especially in critical use cases.</p><p>Here is a comparative breakdown of the key differences between WebRTC and WebSocket:</p><table>
<thead>
<tr>
<th>Feature</th>
<th>WebRTC</th>
<th>WebSocket</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Communication Type</strong></td>
<td>Real-time, peer-to-peer communication for audio, video, and data</td>
<td>Real-time, bidirectional communication for data</td>
</tr>
<tr>
<td><strong>Use Cases</strong></td>
<td>Video/audio conferencing, live streaming, file sharing</td>
<td>Real-time web applications, chat applications</td>
</tr>
<tr>
<td><strong>Protocol</strong></td>
<td>Uses both UDP and TCP for data transmission</td>
<td>Typically uses WebSocket protocol over TCP</td>
</tr>
<tr>
<td><strong>Browser Support</strong></td>
<td>Widely supported in modern browsers (Chrome, Firefox, Safari, Edge)</td>
<td>Widely supported in modern browsers</td>
</tr>
<tr>
<td><strong>Native APIs</strong></td>
<td>Provides APIs for audio, video, and data communication (getUserMedia, RTCPeerConnection)</td>
<td>Provides APIs for establishing and managing WebSocket connections (WebSocket API)</td>
</tr>
<tr>
<td><strong>Data Channels</strong></td>
<td>Supports data channels for sending arbitrary data</td>
<td>Primarily designed for sending textual or binary data</td>
</tr>
<tr>
<td><strong>Connection Setup</strong></td>
<td>Requires signaling server for initial setup and negotiation</td>
<td>Establishes a direct connection between client and server without a signaling server</td>
</tr>
<tr>
<td><strong>Latency</strong></td>
<td>Low latency due to peer-to-peer communication</td>
<td>Low latency for bidirectional data communication</td>
</tr>
<tr>
<td><strong>Firewall Traversal</strong></td>
<td>May require TURN servers for traversal of restrictive firewalls and NATs</td>
<td>Can traverse firewalls and NATs easily</td>
</tr>
<tr>
<td><strong>Encryption</strong></td>
<td>End-to-end encryption for media streams</td>
<td>Secure communication with WebSocket Secure (WSS)</td>
</tr>
<tr>
<td><strong>Scalability</strong></td>
<td>Scalable for peer-to-peer scenarios, may require additional infrastructure for large-scale deployments</td>
<td>Can be scaled using load balancers for multiple WebSocket server instances</td>
</tr>
<tr>
<td><strong>Use of Signaling</strong></td>
<td>Requires signaling for setting up and managing connections</td>
<td>Does not inherently require signaling, but signaling is often used for connection setup and teardown</td>
</tr>
<tr>
<td><strong>Flexibility</strong></td>
<td>More focused on real-time media communication</td>
<td>More general-purpose for real-time bidirectional data communication</td>
</tr>
<tr>
<td><strong>Mobile Support</strong></td>
<td>Supports mobile devices for real-time communication</td>
<td>Widely supported on mobile devices</td>
</tr>
</tbody>
</table>
<h2 id="webrtc-and-websockets-functionalities">WebRTC and WebSockets Functionalities</h2><h3 id="full-duplex-communication-with-websockets">Full Duplex Communication with WebSockets</h3><p>WebSockets enable a full-duplex communication mode, allowing data to flow simultaneously in both directions without waiting for a request-response cycle. This is crucial for applications that require real-time responsiveness and is a significant improvement over the traditional HTTP request model, which is half-duplex and can introduce delays. By utilizing full duplex communication, WebSockets minimize latency and maximize the efficiency of data transmission, making them ideal for real-time applications like interactive games and live financial trading platforms.</p><h3 id="socketio-enhancing-websocket-capabilities">Socket.io: Enhancing WebSocket Capabilities</h3><p>Socket.io is a popular library that enhances WebSocket capabilities by enabling real-time, bidirectional, and event-based communication. It provides a higher-level API that includes features like auto-reconnection, disconnection detection, and room partitioning, which are not natively supported by the WebSocket API. These features make Socket.io a practical choice for robust chat applications that need more than the standard WebSocket protocol provides.</p><h3 id="criteria-for-the-best-chat-app">Criteria for the Best Chat App</h3><p>When evaluating the best chat app, the choice of communication protocol plays a significant role. WebRTC is often preferred for its ability to handle direct peer-to-peer communication, which is ideal for private and secure video and audio calls. On the other hand, WebSockets are better suited for chat applications that require constant data updates from a server, such as multi-user environments and live event chats. 
These criteria help developers choose the right protocol based on the specific needs of their application.</p><h3 id="mobile-applications-performance-and-challenges">Mobile Applications: Performance and Challenges</h3><p>Both WebRTC and WebSockets are supported on mobile devices, but they face unique challenges in these environments. Mobile applications using WebRTC must carefully manage battery consumption and data usage, especially during peer-to-peer communication. Meanwhile, WebSockets in mobile applications can experience connectivity issues with mobile networks that frequently change IP addresses. Understanding these challenges helps developers optimize real-time communication features for mobile platforms.</p><h3 id="websockets-vs-http-efficiency-in-real-time-communication">WebSockets vs. HTTP: Efficiency in Real-Time Communication</h3><p>Traditional HTTP connections are less efficient for real-time communication because they require opening a new TCP connection for each request, which introduces latency and overhead. WebSockets overcome these limitations by establishing a single persistent connection that allows continuous data flow without the need to repeatedly establish connections. This is why WebSockets are more efficient for applications that require frequent, small updates, such as real-time dashboards or online multiplayer games.</p><h3 id="the-role-of-data-servers-in-websocket-applications">The Role of Data Servers in WebSocket Applications</h3><p>In WebSocket applications, data servers play a crucial role in managing connections and ensuring the efficient transmission of data to numerous clients. These servers must handle the complexities of multiple simultaneous connections, maintain session persistence, and efficiently distribute messages. 
This is especially important in environments where high volumes of data are transmitted in real-time, such as in large-scale chat applications or streaming services.</p><h3 id="tcp-connection-impacts-on-websocket-performance">TCP Connection: Impacts on WebSocket Performance</h3><p>The reliance on TCP connections by WebSockets has both benefits and drawbacks. While TCP ensures that packets are delivered reliably and in order, it can introduce latency due to its congestion control algorithms and the need for acknowledgment packets. Understanding how these characteristics of TCP affect WebSocket performance helps developers optimize their applications for speed and reliability.</p><h2 id="choosing-between-webrtc-and-websockets-considerations-for-developers"><strong>Choosing Between WebRTC and WebSockets</strong>: Considerations for Developers</h2><h3 id="factors-influencing-the-choice-between-webrtc-and-websockets">Factors Influencing the Choice Between WebRTC and WebSockets</h3><p>Developers must consider factors such as the nature of the application, audience size, and real-time requirements when choosing between WebRTC and WebSockets. Each protocol has its strengths and is better suited to specific use cases.</p><h3 id="use-case-scenarios-for-each-protocol">Use Case Scenarios for Each Protocol</h3><p>WebRTC is ideal for applications requiring direct peer-to-peer communication, such as video conferencing. WebSockets shine in scenarios demanding constant data updates, such as real-time dashboards.</p><h3 id="performance-considerations-for-different-applications">Performance Considerations for Different Applications</h3><p>Performance considerations, including latency, bandwidth usage, and scalability, play a pivotal role in selecting the appropriate protocol. 
Understanding the specific needs of the application ensures optimal performance.</p><h2 id="introducing-videosdk-a-fusion-of-webrtc-and-websockets">Introducing VideoSDK: A Fusion of WebRTC and WebSockets</h2><p><a href="https://www.videosdk.live/">VideoSDK </a>emerges as a game-changer in the realm of real-time communication. VideoSDK is a versatile platform that empowers developers across the USA &amp; India to create rich in-app experiences by embedding real-time video, <a href="https://www.eworldtrade.com/importers/voice-recording-buyer/">voice recording</a>, live streaming, and messaging functionalities. It serves as a comprehensive live video infrastructure, offering developers complete flexibility, scalability, and control over <a href="https://www.videosdk.live/audio-video-conferencing">audio-video conferencing</a> and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming.</a></p><h3 id="how-videosdk-leverages-webrtc-and-websockets">How VideoSDK Leverages WebRTC and WebSockets?</h3><p>VideoSDK harnesses the strengths of both WebRTC and WebSockets to provide a robust and versatile solution. 
By seamlessly integrating these protocols, VideoSDK ensures an unparalleled real-time communication experience for developers and end-users alike.</p><h3 id="key-features-of-videosdk-for-superior-communication">Key Features of VideoSDK for Superior Communication</h3><ul><li><strong>Adaptive Streaming:</strong> VideoSDK introduces <a href="https://www.videosdk.live/blog/what-is-adaptive-bitrate-streaming">adaptive streaming</a>, ensuring optimal video quality across varying network conditions.</li><li><strong>Cross-Platform Compatibility:</strong> Developers can utilize VideoSDK across different platforms, providing a consistent and reliable experience for users.</li><li><strong>Easy Integration:</strong> <a href="https://www.videosdk.live/">VideoSDK</a> simplifies integration with well-documented APIs and SDKs, facilitating a smooth integration process for developers.</li></ul><p>The choice between WebRTC and WebSockets depends on the specific needs of the application, considering factors such as audience size, real-time requirements, and scalability. With the introduction of VideoSDK, developers now have a powerful tool that combines the strengths of both protocols, offering a versatile and scalable solution for elevating real-time communication experiences. Explore VideoSDK today and revolutionize your applications with state-of-the-art live video infrastructure.</p>]]></content:encoded></item><item><title><![CDATA[Video API vs Video SDK: What's the Difference?]]></title><description><![CDATA[Video SDK offers pre-built tools for seamless integration, ensuring quick implementation. 
Video API provides a customizable interface, allowing flexible integration with third-party applications.]]></description><link>https://www.videosdk.live/blog/video-sdk-vs-video-api</link><guid isPermaLink="false">65b8958e2a88c204ca9ce6b4</guid><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Thu, 10 Oct 2024 11:26:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/image--9-.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-video-sdk">What is Video SDK?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/image--9-.png" alt="Video API vs Video SDK: What's the Difference?"/><p>Video SDK, or Video Software Development Kit, is a comprehensive set of tools, libraries, and documentation that empowers developers to seamlessly <a href="https://www.videosdk.live/audio-video-conferencing">integrate real-time audio and video</a> capabilities into their web and mobile applications. The primary purpose of a Video SDK is to offer flexibility, scalability, and control over the audio-video conferencing and live-streaming functionalities.</p><h3 id="core-features-of-video-sdk">Core Features of Video SDK</h3><ul><li><strong>Real-time Communication:</strong> Video SDK facilitates seamless real-time audio and video communication, ensuring a smooth and engaging user experience.</li><li><strong>Ease of Integration:</strong> Video SDKs are designed to simplify the integration process, allowing developers to seamlessly incorporate audio-video features without significant hurdles.</li><li><strong>Scalability:</strong> A robust Video SDK should be scalable, accommodating various user loads and adapting to the growing demands of an application.</li></ul><h3 id="applications-of-video-sdk">Applications of Video SDK</h3><ul><li><strong>Video Conferencing:</strong> Video SDKs are commonly used to enable real-time video conferences, facilitating virtual meetings and collaboration.</li><li><strong>Live 
Streaming:</strong> Developers leverage Video SDKs to incorporate live streaming capabilities into their applications, enhancing user engagement and interaction.</li><li><strong>In-App Video Capabilities:</strong> Video SDKs empower developers to embed video functionalities directly within their applications, enhancing user experience.</li></ul><h3 id="benefits-of-video-sdk">Benefits of Video SDK</h3><ul><li><strong>Improved User Experience and Engagement:</strong> Video SDKs contribute to a more engaging user experience, particularly in applications where real-time video interactions are crucial.</li><li><strong>Enhanced Video Quality and Performance:</strong> Developers can ensure high-quality video streaming and conferencing experiences by utilizing the advanced features of Video SDKs.</li><li><strong>Streamlined Development Process and Reduced Time-to-Market:</strong> Video SDKs accelerate the development process by offering pre-built components, reducing the time it takes to bring a product to market.</li></ul><h3 id="ideal-scenarios-for-video-sdk-usage">Ideal Scenarios for Video SDK Usage</h3><ul><li><strong>Ease of Implementation:</strong> Video SDKs are well-suited for projects where quick and straightforward integration of video capabilities is a priority.</li><li><strong>Specific Platform Compatibility:</strong> If the application needs to run seamlessly on specific platforms, Video SDKs provide tailored solutions.</li><li><strong>Support and Maintenance:</strong> Video SDKs are advantageous for applications requiring ongoing support and maintenance, as they often come with comprehensive documentation and support.</li></ul><h2 id="what-is-video-api">What is Video API?</h2><p>Video API, or Video Application Programming Interface, is another essential tool for developers looking to integrate video-related functionalities into their applications. 
Unlike SDKs, APIs are interfaces that allow applications to communicate with each other, and Video APIs specifically focus on providing a bridge for developers to interact with video-related services.</p><h3 id="core-features-and-functionalities-of-video-api">Core Features and Functionalities of Video API</h3><ul><li><strong>Seamless Integration with Third-Party Applications:</strong> Video APIs facilitate easy integration with other third-party applications, enabling a broader range of functionalities.</li><li><strong>Scalability and Flexibility:</strong> Video APIs are designed to be scalable, adapting to the varying needs of an application and offering flexibility in handling diverse video-related tasks.</li><li><strong>Customization:</strong> While Video API offers customization options, they might be more limited compared to the flexibility provided by Video SDK.</li></ul><h3 id="applications-of-video-api">Applications of Video API</h3><ul><li><strong>Integrating Video into Websites:</strong> Video APIs are commonly used to embed video content into websites, offering a seamless multimedia experience to users.</li><li><strong>Building Custom Video Applications:</strong> Developers can utilize Video APIs to build custom video applications tailored to their specific requirements.</li></ul><h3 id="benefits-of-video-api">Benefits of Video API</h3><ul><li><strong>Cost-Effectiveness and Resource Optimization:</strong> Video APIs allow developers to leverage existing video services, reducing the need to build everything from scratch and optimizing resource utilization.</li><li><strong>Scalability and Flexibility in Handling Diverse Video-Related Tasks:</strong> Video APIs provide the flexibility to handle various video-related tasks, adapting to the specific needs of different applications.</li><li><strong>Seamless Integration:</strong> Video API excels in integrating with third-party applications, ensuring a smooth incorporation of video functionalities.</li></ul><h3 
id="ideal-scenarios-for-video-api-usage">Ideal Scenarios for Video API Usage</h3><ul><li><strong>Customization:</strong> If the application demands highly customized video functionalities, or if integration with specific services is crucial, Video APIs offer the necessary flexibility.</li><li><strong>Integration with Other Services:</strong> Video APIs are preferable when seamless integration with third-party applications or services is a priority.</li><li><strong>Scalability:</strong> For applications with varying or unpredictable video-related tasks, Video APIs provide scalability and adaptability.</li></ul><h2 id="video-sdk-vs-video-api-in-depth-comparision">Video SDK vs Video API: In-depth Comparison</h2><p>A Video SDK (Software Development Kit) is a set of tools and libraries that enable developers to build custom video applications. In contrast, a Video API (Application Programming Interface) provides pre-built functionalities for integrating video capabilities into applications without requiring extensive coding. Both empower developers to create diverse video solutions.</p><h3 id="technical-distinctions">Technical Distinctions</h3><ul><li><strong>Interaction with Software:</strong> Video SDK requires a more hands-on approach, allowing developers to have greater control over the software's behavior. On the other hand, Video API offers a simpler integration process, abstracting much of the complexity associated with video functionalities.</li><li><strong>Level of Customization:</strong> Video SDK provides a higher level of customization, making it suitable for projects with specific requirements. Video API, while customizable, might have limitations in tailoring functionalities extensively.</li></ul><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>Aspect</th>
<th>Video SDK</th>
<th>Video API</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Definition</strong></td>
<td>A set of tools and libraries for custom video application development.</td>
<td>Pre-built functionalities for integrating video features into applications.</td>
</tr>
<tr>
<td><strong>Customization</strong></td>
<td>Offers greater flexibility and customization options.</td>
<td>Limited customization, as it provides predefined features.</td>
</tr>
<tr>
<td><strong>Development Time</strong></td>
<td>May require more development time for implementation.</td>
<td>Faster implementation due to pre-built features.</td>
</tr>
<tr>
<td><strong>Complexity</strong></td>
<td>Can be more complex; suitable for advanced use cases.</td>
<td>Simplifies integration, suitable for basic video functionality.</td>
</tr>
<tr>
<td><strong>Control</strong></td>
<td>Provides more control over video-related processes.</td>
<td>Offers less control but streamlines video integration.</td>
</tr>
<tr>
<td><strong>Use Cases</strong></td>
<td>Ideal for projects with specific and unique video requirements.</td>
<td>Suitable for applications needing standard video functionalities.</td>
</tr>
<tr>
<td><strong>Coding Skills</strong></td>
<td>Requires proficient coding skills for implementation.</td>
<td>Allows integration with minimal coding expertise.</td>
</tr>
<tr>
<td><strong>Examples</strong></td>
<td>VideoSDK, Twilio Video SDK, Agora Video SDK</td>
<td>VideoSDK Conferencing APIs, Twilio Programmable Video API, Agora Video API, Dyte</td>
</tr>
</tbody>
</table>
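The "Control" and "Coding Skills" rows above can be illustrated with a small, purely hypothetical sketch — the class and method names below are invented for illustration and are not VideoSDK's actual interfaces. The API style exposes the raw request the caller must assemble; the SDK style wraps it behind one call with defaults:

```python
# Hypothetical sketch; RawVideoAPI and VideoSDKClient are invented names,
# not VideoSDK's actual interfaces.

class RawVideoAPI:
    """API style: the caller assembles each request explicitly."""

    def create_room(self, region: str, max_participants: int) -> dict:
        # A real integration would issue an HTTP POST here; returning the
        # payload instead shows exactly what the caller must supply.
        return {
            "method": "POST",
            "path": "/v1/rooms",
            "body": {"region": region, "maxParticipants": max_participants},
        }

class VideoSDKClient:
    """SDK style: a higher-level surface with sensible defaults."""

    def __init__(self, api: RawVideoAPI, region: str = "us-east"):
        self._api = api
        self._region = region

    def quick_meeting(self) -> dict:
        # One call; the SDK fills in the request shape and defaults.
        return self._api.create_room(region=self._region, max_participants=100)

request = VideoSDKClient(RawVideoAPI()).quick_meeting()
print(request["path"])  # /v1/rooms
```

The trade-off is visible even in this toy: the API layer gives full control over every field, while the SDK layer trades some of that control for a shorter path to a working integration.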
<!--kg-card-end: markdown--><h2 id="how-videosdk-helps">How VideoSDK Helps</h2><p><a href="https://www.videosdk.live/">VideoSDK </a>is a leading provider of live video infrastructure for developers in the USA &amp; India. VideoSDK offers real-time audio-video SDKs with complete flexibility, scalability, and control, making it effortless for developers to integrate audio-video conferencing and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into web and mobile apps.</p><h3 id="highlight-its-features">Key Features</h3><ul><li>VideoSDK provides a comprehensive set of tools, libraries, and <a href="https://docs.videosdk.live/">documentation </a>for seamless integration.</li><li>It offers advanced features for high-quality video conferencing, live streaming, and in-app video capabilities.</li><li>The SDK is designed for ease of use, providing developers with the necessary components to enhance user experiences.</li></ul><p><strong>Ease of Integration:</strong></p><ul><li>VideoSDK prioritizes ease of integration, allowing developers to incorporate audio-video functionalities into their applications with minimal effort.</li><li>The SDK comes with detailed documentation and support, ensuring a smooth integration process for developers of varying expertise levels.</li></ul><p>The distinction between Video SDK and Video API is essential for developers navigating the complex landscape of live video infrastructure. While Video SDKs provide a ready-made solution with ease of implementation, Video APIs offer flexibility and customization options. 
VideoSDK, with its advanced features and focus on seamless integration, stands out as a top choice for developers looking to enhance their applications with real-time audio-video capabilities.</p>]]></content:encoded></item><item><title><![CDATA[Video MER (Medical Examination Report)]]></title><description><![CDATA[Video MER (Medical Examination Report) is a process insurance companies use to conduct medical examinations remotely through video conferencing, improving accessibility and efficiency in the underwriting process.]]></description><link>https://www.videosdk.live/blog/video-mer-medical-examination-report</link><guid isPermaLink="false">66742bed20fab018df10ecb8</guid><category><![CDATA[Industry Update]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 10 Oct 2024 11:11:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/Video-MER.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="what-is-a-medical-examination">What is a Medical Examination?</h3><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Video-MER.jpg" alt="Video MER (Medical Examination Report)"/><p>A medical examination is an evaluation conducted by a healthcare professional to assess a person's physical health. This process usually includes a review of the patient's medical history, a physical examination, and sometimes laboratory tests or imaging studies. The primary goal is to identify any potential health problems, assess overall health, and determine appropriate treatment if necessary. </p><p>In the early days of insurance, physical examinations were the primary method of assessing a patient's health. 
<a href="https://www.fmcsa.dot.gov/regulations/medical/medical-examination-report-form-commercial-driver-medical-certification"><strong>Physical MER</strong></a><strong> (Medical Examination Report)</strong> is the standard practice, where a doctor takes a detailed medical history and completes a physical examination of the patient. Yet, it has several drawbacks that have led to the development of alternative methods.</p><h3 id="why-is-a-medical-examination-needed">Why is a Medical Examination Needed?</h3><p>One of the primary issues with Physical MER is the inconvenience to the patients (proposers), particularly those living in remote or underserved areas. Traveling to a medical facility for an examination can be time-consuming and expensive, making it difficult for some patients to complete the process. </p><p>Also, Physical MER can be lengthy, usually requiring multiple visits and tests, which can be stressful for patients (proposers). To address these issues, more efficient and accessible methods of medical examinations became an obvious need. </p><h2 id="what-is-video-mer-medical-examination-report">What is Video MER (Medical Examination Report)?</h2><p><a href="https://www.videosdk.live/solutions/video-mer" rel="noreferrer">Video MER</a>, or <strong>Video Medical Examination Report</strong>, is a process insurance companies use to conduct medical examinations remotely through <a href="https://www.videosdk.live/audio-video-conferencing" rel="noreferrer">video conferencing</a> technology. This method allows qualified medical professionals to assess applicants or proposers without the need for in-person visits, thereby enhancing accessibility and efficiency in the insurance underwriting process.</p><p>In a Video MER, the insurance company streamlines a medical examination where a doctor interacts with the policyholders via video calls. This includes collecting health history, performing assessments, and discussing any pre-existing conditions. 
The examination is often complemented by physical tests, such as blood and urine samples, which are collected by a phlebotomist before the video consultation. </p><p>The results of these tests are then compiled into a Medical Examination Report (MER), and are fed into the respective EMR system. </p><h2 id="why-is-video-based-medical-examination-vmer-important">Why is Video-based Medical Examination (VMER) Important?</h2><h3 id="convenience-and-accessibility">Convenience and Accessibility</h3><p>Video MER provides an alternative to traditional in-person medical examinations, which can be time-consuming and costly. With the Video MER test, Proposers undergo medical examinations from the comfort of their own homes, reducing the need for travel and increasing the accessibility of medical services.</p><h3 id="reduced-risk-of-conflict-of-interest">Reduced Risk of Conflict of Interest</h3><p>Video MER can also help to reduce the risk of conflict of interest or influence between the medical professional and the proposer. In a traditional in-person medical examination, the medical professional and proposer may have direct contact, which can create a potential conflict of interest. Video MER eliminates this risk by allowing the medical professional and proposer to interact remotely.</p><h3 id="data-security">Data Security</h3><p>Video MER recordings can be stored and retained securely, ensuring the privacy and security of policyholders and proposers. 
This is particularly important in the insurance industry, where sensitive medical information like copies of test results and pathology reports are being shared, making robust <a href="https://nordlayer.com/blog/healthcare-data-security/" rel="noreferrer">healthcare data security</a> practices essential for maintaining compliance and trust.</p><h3 id="increased-efficiency">Increased Efficiency</h3><p>Video MER can help streamline the medical examination process, reducing the time and cost associated with traditional in-person examinations. This can increase efficiency and productivity for medical professionals and insurance companies.</p><h2 id="how-video-mer-helps-insurance-companies">How Video MER Helps Insurance Companies</h2><p>Video MER (Medical Examination Report) has revolutionized the insurance industry by providing a more efficient and cost-effective way for insurance companies to conduct medical examinations. By leveraging video conferencing technology, insurance companies can review medical exams remotely, reducing the need for in-person visits and associated costs. This approach not only saves time and resources but also enhances the overall efficiency of the underwriting process.</p><p>Video MER allows insurance companies to assess applicants' health conditions more accurately, which helps them make informed decisions about coverage and premiums. This results in better risk assessment and more reliable underwriting, ultimately benefiting both insurance providers and policyholders.</p><h2 id="what-is-the-difference-between-videography-and-video-mer">What is the difference between Videography and Video MER?</h2><p>Videography encompasses the recording of medical procedures and various events for documentation and verification. 
On the other hand, Video MER is a distinct procedure employed by insurance companies to conduct medical examinations of proposers remotely using video conferencing technology.</p><p>The primary objective of Video MER is to accurately and efficiently evaluate the health conditions of proposers, enabling insurance companies to make well-informed decisions regarding coverage and premiums. Conversely, videography plays a crucial role in visually verifying and recording all medical procedures, thereby enhancing the integrity and precision of health data utilized in underwriting.</p><h2 id="key-features-of-video-mer">Key Features of Video MER</h2><h3 id="ai-enhanced-mer-checks">AI-enhanced MER Checks</h3><p>Video MER checks are analyzed <strong>using advanced AI-driven tools</strong>, ensuring that all necessary medical information is accurately captured and assessed. This allows insurance companies to conduct more thorough and reliable health evaluations, leading to better risk assessment and more informed underwriting decisions.</p><h3 id="pre-issuance-verification-call-pivc">Pre-Issuance Verification Call (PIVC)</h3><p>Insurance companies conduct <strong>Pre-Issuance Verification calls</strong> to <strong>verify the identity of the proposer</strong>, ensuring that the person undergoing the Video MER is indeed the proposer. PIVC (Pre-Issuance Verification Call) is a verification process that ensures <strong>the proposer and the insurance company are on the same page</strong>, thereby fostering trust and transparency in the insurance contract.</p><p>This step <strong>improves the accuracy of identity verification</strong> and significantly reduces the risk of impersonation fraud, maintaining the integrity of the insurance process.</p><h3 id="mer-analysis">MER Analysis</h3><p>After completing the Video MER, the examination report is thoroughly checked through a process called "<strong>MER Check</strong>" and analyzed using AI. 
This assures that all necessary medical information has been accurately captured and assessed, <strong>maintaining high standards for medical examination</strong> quality and consistency.</p><h2 id="how-does-videosdk-transform-insurtech-via-the-video-mer-solution">How does VideoSDK transform InsurTech via the Video MER solution?</h2><p><a href="https://www.videosdk.live/"><strong>VideoSDK</strong></a> has transformed the insurance sector by offering a robust and secure infrastructure for Video MER. Our solution enables insurance companies to conduct remote medical examinations efficiently, leveraging advanced AI-driven analysis and enhanced security features. Here are some key ways <a href="https://www.videosdk.live/solutions/telehealth"><strong>VideoSDK's Video MER solution</strong></a> has transformed InsurTech:</p><h3 id="streamlined-insurance-operations">Streamlined Insurance Operations</h3><p>VideoSDK's Video MER solution helps insurance companies perform remote medical exams efficiently, reducing the need for in-person visits and costs. This allows companies to handle a large number of exams per day, ensuring policyholders get timely and accurate assessments.</p><h3 id="enhanced-claims-assessment">Enhanced Claims Assessment</h3><p>VideoSDK's AI-driven transcription and summary features ensure accurate documentation and analysis of medical exams. This reduces the risk of misinformation or fraud, helping companies make better decisions about coverage and premiums.</p><h3 id="improved-customer-experience">Improved Customer Experience</h3><p>VideoSDK's high-quality video calls provide a seamless exam experience for policyholders, even in Tier-2 and Tier-3 regions. 
This boosts overall customer satisfaction, building trust and loyalty with the insurance company.</p><h3 id="data-driven-decision-making">Data-Driven Decision Making</h3><p>VideoSDK's recording and analytics offer detailed insights into client-agent interactions, helping companies make informed decisions about coverage and premiums. This transparency enhances trust between the company and policyholders.</p><h3 id="compliance-and-security">Compliance and Security</h3><p>VideoSDK's enterprise-grade infrastructure ensures secure storage and transmission of data, following guidelines like IRDAI and GDPR. This maintains the integrity of the insurance process and protects policyholder information.</p><p>This innovative solution has enhanced the accuracy of claims assessment, improved the customer experience, and enabled data-driven decision-making, ultimately revolutionizing the insurance industry.</p><h2 id="conclusion">Conclusion</h2><p>As the insurance landscape continues to evolve, Video MER is playing a pivotal role in shaping the future of medical examinations. By providing a more convenient, efficient, and transparent experience to proposers, Video MER empowers insurance companies to offer higher sums assured at lower premiums, ultimately benefiting both the industry and the consumers.</p><p>Video MER has emerged as a transformative solution in the insurance industry, revolutionizing the way medical examinations are conducted. By leveraging video conferencing technology, insurance companies can now offer a more convenient, accessible, and efficient process for proposers to undergo medical assessments.</p><p>As we mentioned, one of the key players driving this innovation is VideoSDK, a leading provider of video conferencing solutions. VideoSDK's robust and secure platform has been instrumental in enabling insurance companies to seamlessly implement Video MER. 
Our solution ensures high-quality video and audio, advanced AI-driven analysis, and enhanced security features, making it an ideal choice for insurers.</p>]]></content:encoded></item><item><title><![CDATA[Top 10 Alternatives to Vonage Video API (formerly TokBox) in 2026]]></title><description><![CDATA[Experience an alternative to Vonage Video API that will revolutionize your online interactions. Unlock your true potential and embrace the opportunity for unparalleled success, starting right now.]]></description><link>https://www.videosdk.live/blog/vonage-alternative</link><guid isPermaLink="false">64a7dc235badc3b21a58c684</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 10 Oct 2024 11:00:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/Vonage-alternative.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Vonage-alternative.jpg" alt="Top 10 Alternatives to Vonage Video API (formerly TokBox) in 2026"/><p>If you're in search of a <a href="https://www.videosdk.live/alternative/vonage-vs-videosdk" rel="noreferrer"><strong>Vonage(TokBox) alternative</strong></a> that offers seamless integration of real-time video into your application, you've come to the right place. While Vonage(TokBox) is a well-established platform providing communication capabilities through its SDK, there might be untapped possibilities waiting for you beyond its offering.</p><p>Continue reading to explore the potential you might miss out on by sticking with the Vonage(TokBox) platform, especially if you're an existing customer.</p><h2 id="comprehensive-guide-to-top-vonage-tokbox-alternatives">Comprehensive Guide to Top Vonage (TokBox) Alternatives</h2>
<p><a href="https://www.videosdk.live/blog/vonage-competitors">Vonage </a>(TokBox-OpenTok) is a user-friendly solution for business meetings, offering essential video calling features, including Vonage conference calls and Vonage video conferencing through the Vonage API. However, there are certain limitations to consider. The SDK is primarily focused on business communication, which means it may lack interactivity compared to other options, especially when it comes to larger participant numbers in a Vonage conference call or Vonage video conferencing scenario.</p><p>One potential drawback is that audio and video issues can arise, particularly when dealing with larger participant numbers in a Vonage video call or <a href="https://www.videosdk.live/audio-video-conferencing" rel="noreferrer">video conferencing SDK</a>. Additionally, it's worth noting that the SDK requires significant resources, and support may be lacking or come at a high cost, which could impact the overall Vonage API experience. Exploring competitors of Vonage is recommended for a more enhanced and satisfying experience.</p><p>The <strong>top 10 Vonage(TokBox) Alternatives</strong> are VideoSDK, Twilio, MirrorFly, Jitsi, AWS Chime, ApiRTC, EnableX, Agora, SignalWire, and WhereBy.</p><blockquote>
<h2 id="top-10-vonage-tokbox-opentok-alternatives-for-2026">Top 10 Vonage (TokBox-OpenTok) Alternatives for 2026</h2>
<ul>
<li><strong>VideoSDK</strong></li>
<li><strong>Twilio Video</strong></li>
<li><strong>Mirrorfly</strong></li>
<li><strong>Jitsi</strong></li>
<li><strong>AWS Chime</strong></li>
<li><strong>ApiRTC</strong></li>
<li><strong>Enablex</strong></li>
<li><strong>Agora</strong></li>
<li><strong>SignalWire</strong></li>
<li><strong>Whereby</strong></li>
</ul>
</blockquote>
<p>These <strong>Vonage(TokBox) alternatives</strong> provide a range of features and pricing options to suit your specific needs. Take your time to carefully consider these options and choose the one that best aligns with your requirements.</p><h2 id="1-videosdk-real-time-video-infra-for-devs">1. VideoSDK: Real-time Video Infra for Devs</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-for-Real-time-Communication-Live-Streaming-Video-API-3.jpeg" class="kg-image" alt="Top 10 Alternatives to Vonage Video API (formerly TokBox) in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li><a href="https://www.videosdk.live/">VideoSDK</a> provides an API that allows developers to easily add powerful, extensible, scalable, and resilient audio-video features to their apps with just a few lines of code. </li><li>Add live audio and video experiences to any platform in minutes.</li><li>The essential advantage of using Video SDK is it’s quite easy and quick to integrate, allowing you to focus more on building innovative features to enhance user retention.</li><li>Experience the ultimate in video technology with our Video SDK. </li><li>It offers high scalability, adaptive bitrate technology, end-to-end customization, quality recordings, detailed analytics, cross-platform streaming, seamless scaling, and platform support. </li><li>Whether you're on mobile (Flutter, Android, iOS), web (JavaScript Core SDK + UI Kit), or desktop (Flutter Desktop), our Video SDK brings you the power to create immersive video experiences effortlessly. </li><li>Start using Video SDK and revolutionize your video capabilities today!</li><li>Get incredible value with Video SDK! Enjoy <a href="https://www.videosdk.live/pricing" rel="noreferrer">$20 free credit</a> and <a href="https://www.videosdk.live/pricing#pricingCalc">flexible pricing</a> for video and audio calls. </li><li><strong>Video calls</strong> start at <strong>$0.003</strong> per participant per minute, while <strong>audio calls</strong> start at <strong>$0.0006</strong>. </li><li>Additional costs include <strong>$0.015</strong> per minute for <strong>cloud recordings</strong> and <strong>$0.030</strong> per minute for <strong>RTMP output</strong>. 
</li><li>They also provide <strong>free 24/7</strong> <strong>customer support</strong> to assist with any queries or technical needs. Upgrade your video capabilities today!</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/alternative/vonage-vs-videosdk"><strong>Vonage and VideoSDK</strong></a><strong>.</strong></blockquote>
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</link></meta></meta></head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-twilio-video-features-and-pricing">2. Twilio Video: Features and Pricing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Communication-APIs-for-SMS-Voice-Video-Authentication_twilio-2.jpeg" class="kg-image" alt="Top 10 Alternatives to Vonage Video API (formerly TokBox) in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li><a href="https://www.videosdk.live/blog/twilio-video-alternative"><strong>Twilio</strong></a> offers SDKs for the web, iOS, and Android. Multiple audio and video inputs require manual code implementation. </li><li>Call insights to help analyze errors. Twilio supports <strong>up to 50 hosts</strong> <strong>and participants</strong> in a call. </li><li>Simplified product development is not provided. <strong>Hard coding</strong> is necessary for handling disruptions. </li></ul><h3 id="twilio-pricing">Twilio pricing</h3>
<ul><li><a href="https://www.twilio.com/en-us/video/pricing">Pricing</a> starts at <strong>$4</strong> per 1,000 minutes. <strong>Free</strong> support includes <strong>API status notifications</strong> and <strong>email support</strong> during <strong>business hours</strong>.</li><li><strong>Additional services</strong> are available at <strong>extra cost</strong>.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-vonage"><strong>Vonage and Twilio</strong></a><strong>.</strong></blockquote><h2 id="3-mirrorfly-enterprise-communication-suite">3. MirrorFly: Enterprise Communication Suite</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Live-Video-Call-API-Best-Video-Chat-SDK-for-Android-iOS-mirrorfly-2.jpeg" class="kg-image" alt="Top 10 Alternatives to Vonage Video API (formerly TokBox) in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>MirrorFly is a comprehensive in-app communication suite created for enterprises. </li><li>It provides intuitive APIs and SDKs that deliver an exceptional chat and calling experience. </li><li>With a diverse array of over 150 chat, voice, and video calling features, this cloud-based solution seamlessly integrates to offer a robust communication experience.</li></ul><h3 id="mirrorfly-pricing">MirrorFly pricing</h3>
<ul><li>It's important to note that MirrorFly's pricing starts at <strong>$299</strong> per month, positioning it as a higher-cost option.</li></ul><h2 id="4-jitsi-open-source-video-conferencing">4. Jitsi: Open-Source Video Conferencing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Free-Video-Conferencing-Software-for-Web-Mobile-Jitsi-3.jpeg" class="kg-image" alt="Top 10 Alternatives to Vonage Video API (formerly TokBox) in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>Jitsi is an open-source suite for video conferencing solutions. </li><li>Jitsi Meet offers features like screen sharing and collaboration, accessible on web browsers and mobile apps. </li><li>Jitsi Videobridge provides an XMPP server for encrypted video chats. It's free, open-source, and offers end-to-end encryption. </li><li>Call recording and support may require additional steps. Jitsi doesn't handle user bandwidth during network issues. </li><li><a href="https://www.videosdk.live/blog/jitsi-alternative"><strong>Jitsi</strong></a> is 100% open source and <strong>free</strong> to use, but setting up servers and creating the UI is required. </li><li>Additional support comes at a cost.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/vonage-vs-jitsi"><strong>Vonage and Jitsi</strong></a><strong>.</strong></blockquote><h2 id="5-aws-chime-business-video-conferencing">5. AWS Chime: Business Video Conferencing</h2>
<ul><li><a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative"><strong>AWS Chime</strong></a> is a fantastic video conferencing tool for businesses. </li><li>It offers VoIP calling, video messaging, and virtual meetings for seamless remote collaboration. </li><li>Enjoy high-quality online meetings with clear video and audio, collaborative features like screen sharing and text chats, and meeting management for <strong>up to 250 participants</strong>. </li><li>Security is enhanced through AWS Identity and Access Management, and recording and analytics features are available. </li><li>Basic bandwidth management is included, but edge case management is the user's responsibility.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/vonage-vs-amazon-chime-sdk"><strong>Vonage and AWS Chime</strong></a><strong>.</strong></blockquote><h2 id="6-apirtc-integrating-webrtc-with-ease">6. ApiRTC: Integrating WebRTC with Ease</h2>
<ul><li>ApiRTC is an exceptional WebRTC Platform as a Service (PaaS) that streamlines developers' access to WebRTC technology. </li><li>With their user-friendly API, you can effortlessly integrate real-time multimedia interactions into your websites and mobile apps using just a few lines of code. </li><li>However, it's worth noting that the <strong>basic subscription</strong> for ApiRTC is priced at <strong>$54.37</strong>, which may be considered relatively expensive for small developers.</li></ul><h2 id="7-enablex-customizable-video-calling-solutions">7. EnableX: Customizable Video Calling Solutions</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Call-API-Video-Chat-API-Voice-API-Video-Conferencing_enebleX-3.jpeg" class="kg-image" alt="Top 10 Alternatives to Vonage Video API (formerly TokBox) in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>EnableX offers SDKs for live video, voice, and messaging, simplifying the development of live experiences in applications. </li><li>It targets service providers, ISVs, SIs, and developers. The SDK enables the customization of video-calling solutions and personalized live video streams. </li><li>It supports JavaScript, PHP, and Python. </li><li><a href="https://www.enablex.io/cpaas/pricing/our-pricing">Pricing</a> starts at <strong>$0.004</strong> per participant minute for <strong>up to 50 participants</strong>, with <strong>additional charges</strong> for <strong>recording</strong>, <strong>transcoding</strong>, <strong>storage</strong>, and <strong>RTMP streaming</strong>.</li></ul><h2 id="8-agora-real-time-video-conferencing">8. Agora: Real-Time Video Conferencing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Agora-Real-Time-Voice-and-Video-Engagement-2.jpeg" class="kg-image" alt="Top 10 Alternatives to Vonage Video API (formerly TokBox) in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li><a href="https://www.videosdk.live/blog/agora-alternative"><strong>Agora</strong></a> is a renowned platform known for its powerful real-time video conferencing capabilities. </li><li>However, there are limitations to consider, such as limited customization options and occasional performance issues. </li><li><strong>Additional costs</strong> may apply for advanced features like <strong>recording</strong> or <strong>transcription</strong>. </li><li><a href="https://www.agora.io/en/pricing/">Pricing</a> starts at <strong>$3.99</strong> per 1,000 minutes for <strong>video calling</strong> and <strong>$0.99</strong> per 1,000 minutes for <strong>voice calling</strong>.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-vonage"><strong>Vonage and Agora</strong></a><strong>.</strong></blockquote><h2 id="9-signalwire-flexible-video-integration">9. SignalWire: Flexible Video Integration</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Building-The-Software-Defined-Telecom-Network-SignalWire-2.jpeg" class="kg-image" alt="Top 10 Alternatives to Vonage Video API (formerly TokBox) in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>SignalWire is a platform that enables effortless video integration into applications. </li><li>It allows up to 100 participants in each video call and provides an SDK for web, iOS, and Android apps. </li><li>Developers are responsible for managing disruptions and user logic independently. </li><li><a href="https://signalwire.com/pricing/video">Pricing</a> is determined by per-minute usage, offering options for both HD and Full HD calls. </li><li>Additional features like recording and streaming can be added at separate rates.</li></ul><h2 id="10-whereby-browser-based-meetings">10. Whereby: Browser-Based Meetings</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Calling-API-for-Web-and-App-Developers-Whereby-3.jpeg" class="kg-image" alt="Top 10 Alternatives to Vonage Video API (formerly TokBox) in 2026" loading="lazy" width="1920" height="967"/></figure><ul><li>Whereby is a browser-based meeting platform that offers permanent rooms for users. </li><li>Joining meetings is hassle-free with a simple click; no downloads or registrations are needed. </li><li>They introduced a hybrid meeting solution that reduces echo and eliminates the need for expensive hardware. </li><li>Customization options for the video interface are limited. </li><li>Data privacy is a priority, and <a href="https://whereby.com/information/pricing">pricing</a> plans start at <strong>$6.99</strong> per month with additional charges for extra usage.</li></ul><h2 id="certainly">Why VideoSDK Stands Out</h2>
<p><a href="https://www.videosdk.live">VideoSDK</a> takes the spotlight among the mentioned video conferencing SDKs, placing a strong emphasis on a fast and seamless integration experience. With its low-code solution, developers can swiftly build live video experiences within their applications. In just under 10 minutes, custom video conferencing solutions can be created and deployed, significantly reducing integration time and effort. Unlike other SDKs with longer integration processes and limited customization options, Video SDK prioritizes a streamlined approach. By harnessing the power of Video SDK, developers effortlessly create and embed live video experiences, enabling real-time connections, communication, and collaboration for users.</p><h2 id="still-skeptical">Still skeptical?</h2>
<p>Dive into VideoSDK's <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart guide</a> and explore its powerful <a href="https://docs.videosdk.live/code-sample">sample app</a>. <a href="https://app.videosdk.live/">Sign up</a> today to claim your <a href="https://www.videosdk.live/pricing">complimentary $20 free</a> and unleash your creativity. Our dedicated team is available to support you along the way. Get ready to create remarkable experiences with VideoSDK!</p>]]></content:encoded></item><item><title><![CDATA[WebRTC Android:  A Step-by-Step Comprehensive Guide]]></title><description><![CDATA[WebRTC on Android devices opens up a myriad of possibilities for app developers. From live streaming and video conferencing apps to real-time gaming and collaboration tools, WebRTC offers a robust foundation for building complex real-time interaction features within apps.]]></description><link>https://www.videosdk.live/blog/webrtc-android</link><guid isPermaLink="false">661ce5bc2a88c204ca9d3d66</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 09 Oct 2024 13:49:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/pexels-vanessa-garcia-6325970.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction-to-webrtc-on-android">Introduction to WebRTC on Android</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/pexels-vanessa-garcia-6325970.jpg" alt="WebRTC Android:  A Step-by-Step Comprehensive Guide"/><p><a href="https://www.videosdk.live/blog/webrtc" rel="noreferrer">Web Real-Time Communication (WebRTC)</a> is a powerful, open-source project that enables web browsers and mobile applications to communicate in real time via simple APIs. It supports video, voice, and generic data to be sent between peers, making it an essential tool for developing interactive, real-time communication applications on Android devices. 
This guide will walk you through building a complete WebRTC application on Android.</p><h2 id="what-is-webrtc-android">What is WebRTC Android?</h2><p>WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. In browsers, it allows audio and video communication to work directly inside web pages through peer-to-peer communication, eliminating the need to install plugins; on Android, the same capabilities are exposed through a native library. WebRTC has numerous components, but at its core are three main APIs (JavaScript in the browser, with native Android equivalents):</p><ul><li><strong>getUserMedia</strong>: captures audio and video.</li><li><strong>RTCPeerConnection</strong>: enables audio and video communication between peers.</li><li><strong>RTCDataChannel</strong>: allows bi-directional data transfer between peers.</li></ul><h2 id="why-is-webrtc-important-for-android">Why is WebRTC Important for Android?</h2><p>Integrating WebRTC on Android devices opens up a myriad of possibilities for app developers. From <a href="https://www.videosdk.live/interactive-live-streaming" rel="noreferrer">live streaming</a> and video conferencing apps to real-time gaming and collaboration tools, WebRTC offers a robust foundation for building complex real-time interaction features within apps. Its compatibility with Android ensures that developers can reach a wide audience, enhancing user engagement and satisfaction.</p><h2 id="setting-up-the-development-environment">Setting Up the Development Environment</h2><p>Before diving into coding, it's important to set up the development environment correctly. 
Developing WebRTC applications for Android requires specific tools and libraries, which can be integrated into Android Studio, the official IDE for Android development.</p><h3 id="tools-and-libraries">Tools and Libraries</h3><ol><li><strong>Android Studio:</strong> Ensure you have the latest version of Android Studio installed to support all new features and necessary updates.</li><li><strong>WebRTC Library:</strong> Add the WebRTC library to your project. For Android Studio 3 and newer, add the following dependency to your <code>build.gradle</code> file:</li></ol><pre><code class="language-groovy">implementation 'org.webrtc:google-webrtc:1.0.+'
</code></pre><p>This library includes the necessary WebRTC classes and methods tailored for Android.</p><h3 id="android-studio-configuration">Android Studio Configuration</h3><p>Ensure your development environment is set up on a compatible operating system, such as Linux, which supports Android development for WebRTC. The setup includes downloading the <a href="https://webrtc.github.io/webrtc-org/native-code/android/">WebRTC source specifically structured for Android</a>, which integrates additional Android-specific components​</p><p>In summary, understanding the basics of WebRTC and setting up the development environment are crucial first steps in leveraging this technology in Android applications. With the right setup, developers can start building innovative, real-time communication features that enhance user engagement and provide value in various application scenarios.</p><p>In the next part of this guide, we will delve into how to establish a basic peer connection and handle media streams and tracks, complete with code snippets and detailed explanations. Stay tuned!</p><h2 id="basic-implementation-of-webrtc-android">Basic Implementation of WebRTC Android</h2><p>Having set up your development environment, you can now dive into the core of WebRTC functionality on Android. This section covers the fundamental steps to establish a peer connection and manage media streams, providing practical code snippets to help you understand the process.</p><h3 id="establishing-a-peer-connection">Establishing a Peer Connection</h3><p>A peer connection forms the backbone of any WebRTC android application, facilitating the direct communication link between two devices. This connection handles the transmission of audio, video, and data.</p><h3 id="initialization-of-peerconnectionfactory">Initialization of PeerConnectionFactory</h3><p>Before creating a peer connection, you need to initialize the <code>PeerConnectionFactory</code>. 
This factory is crucial as it generates the instances required for managing the media streams and the connections themselves. Include the following setup in your Android project:</p><pre><code class="language-java">// Initialize PeerConnectionFactory globals.
PeerConnectionFactory.InitializationOptions initializationOptions =
    PeerConnectionFactory.InitializationOptions.builder(context)
        .createInitializationOptions();
PeerConnectionFactory.initialize(initializationOptions);

// Create a new PeerConnectionFactory instance.
PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
PeerConnectionFactory peerConnectionFactory = PeerConnectionFactory.builder()
    .setOptions(options)
    .createPeerConnectionFactory();
</code></pre><h3 id="creating-the-peerconnection">Creating the PeerConnection</h3><p>After setting up the factory, the next step is to create a peer connection object. This object uses configurations for STUN and TURN servers, which facilitate the connection across different networks and NATs (Network Address Translators):</p><pre><code class="language-java">// Configuration for the peer connection.
List&lt;PeerConnection.IceServer&gt; iceServers = new ArrayList&lt;&gt;();
iceServers.add(PeerConnection.IceServer.builder("stun:stun.l.google.com:19302").createIceServer());
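
// Optionally, a TURN server can be added as a relay fallback for cases where
// a direct peer-to-peer path cannot be established (e.g., symmetric NATs).
// This is only a sketch: the URL and credentials below are placeholders,
// not a real service.
// iceServers.add(PeerConnection.IceServer.builder("turn:turn.example.com:3478")
//     .setUsername("username")
//     .setPassword("credential")
//     .createIceServer());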

PeerConnection.RTCConfiguration rtcConfig = new PeerConnection.RTCConfiguration(iceServers);
// Create the peer connection instance.
PeerConnection peerConnection = peerConnectionFactory.createPeerConnection(rtcConfig, new PeerConnection.Observer() {
    @Override
    public void onSignalingChange(PeerConnection.SignalingState signalingState) {}

    @Override
    public void onIceConnectionChange(PeerConnection.IceConnectionState iceConnectionState) {}

    @Override
    public void onIceConnectionReceivingChange(boolean b) {}

    @Override
    public void onIceGatheringChange(PeerConnection.IceGatheringState iceGatheringState) {}

    @Override
    public void onIceCandidate(IceCandidate iceCandidate) {}

    @Override
    public void onIceCandidatesRemoved(IceCandidate[] iceCandidates) {}

    @Override
    public void onAddStream(MediaStream mediaStream) {}

    @Override
    public void onRemoveStream(MediaStream mediaStream) {}

    @Override
    public void onDataChannel(DataChannel dataChannel) {}

    @Override
    public void onRenegotiationNeeded() {}
});
</code></pre><h3 id="handling-media-streams-and-tracks">Handling Media Streams and Tracks</h3><p>Managing media involves creating and manipulating audio and video streams that are transmitted over the network. This involves capturing media from the device's hardware, like the camera and microphone, and preparing it for transmission.</p><h3 id="creating-audio-and-video-tracks">Creating Audio and Video Tracks</h3><p>You need to create audio and video sources and tracks from these sources. These tracks are then added to the peer connection and managed throughout the life cycle of the application:</p><pre><code class="language-java">// Create an AudioSource instance.
AudioSource audioSource = peerConnectionFactory.createAudioSource(new MediaConstraints());
AudioTrack localAudioTrack = peerConnectionFactory.createAudioTrack("101", audioSource);

// Create a VideoSource instance.
VideoSource videoSource = peerConnectionFactory.createVideoSource(false);
SurfaceTextureHelper surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBaseContext);
// createCameraCapturer(...) is assumed to be a helper that selects a device camera
// via the passed CameraEnumerator (its implementation is omitted in this guide).
VideoCapturer videoCapturer = createCameraCapturer(new Camera1Enumerator(false));
videoCapturer.initialize(surfaceTextureHelper, context, videoSource.getCapturerObserver());
videoCapturer.startCapture(1000, 1000, 30); // width, height, fps

VideoTrack localVideoTrack = peerConnectionFactory.createVideoTrack("102", videoSource);
localVideoTrack.setEnabled(true);

// Add tracks to peer connection.
peerConnection.addTrack(localAudioTrack);
peerConnection.addTrack(localVideoTrack);
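// Note (hedged): addTrack(...) returns an RtpSender you can keep to control the
// track later. With Unified Plan semantics you may also pass stream IDs, e.g.:
//   RtpSender videoSender = peerConnection.addTrack(localVideoTrack,
//       Collections.singletonList("localStream"));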
</code></pre><p>In this section, we explored how to establish a peer connection and handle audio and video streams in a WebRTC Android application. By following these steps and integrating the provided code snippets, you can build a basic real-time communication application.</p><h3 id="advanced-features-and-use-cases-of-webrtc-android">Advanced Features and Use Cases of WebRTC Android</h3><p>After establishing the basic peer connection and handling media streams, it's time to explore more advanced features and potential use cases for WebRTC on Android. This part of the article delves into enhancing WebRTC applications with additional UI components, and building a comprehensive video chat application.</p><h3 id="enhancing-webrtc-android-with-ui-components">Enhancing WebRTC Android with UI Components</h3><p>Integrating user interface (UI) components effectively is crucial for developing functional and user-friendly real-time communication apps. The use of <code>SurfaceViewRenderer</code> and <code>VideoTextureViewRenderer</code> enables the display and manipulation of video streams in the UI, providing a seamless user experience.</p><h3 id="customizing-video-components">Customizing Video Components</h3><p>The following snippet shows how to set up a <code>SurfaceViewRenderer</code> to display video:</p><pre><code class="language-java">// Setup the local video track to be displayed in a SurfaceViewRenderer.
SurfaceViewRenderer localVideoView = findViewById(R.id.local_video_view);
localVideoView.init(eglBaseContext, null);
localVideoView.setZOrderMediaOverlay(true);
localVideoView.setMirror(true);

// Attach the video track to the renderer.
localVideoTrack.addSink(localVideoView);
</code></pre><p>This setup includes initializing the renderer, setting its properties for overlay and mirroring, and attaching the video track. It ensures that the video stream from the device's camera is displayed correctly in the application's interface.</p><h3 id="handling-multiple-video-streams">Handling Multiple Video Streams</h3><p>In more complex applications, such as multi-user video conferences, managing multiple video streams becomes necessary. Each participant's video needs to be displayed simultaneously, requiring dynamic management of UI components:</p><pre><code class="language-java">// Assume a dynamic layout that can add or remove video views as needed.
for (PeerConnection pc : allPeerConnections) {
    SurfaceViewRenderer remoteVideoView = new SurfaceViewRenderer(context);
    remoteVideoView.init(eglBaseContext, null);
    videoLayout.addView(remoteVideoView);
    // Note: PeerConnection has no getRemoteVideoTrack() method; it stands in here
    // for your own bookkeeping of remote tracks (e.g., captured in onAddStream).
    pc.getRemoteVideoTrack().addSink(remoteVideoView);
}</code></pre><p>This code snippet suggests a way to iterate over all active peer connections, initializing a new video renderer for each and attaching the remote video track. It exemplifies how to dynamically add video views to a layout, accommodating any number of participants.</p><h3 id="building-a-complete-webrtc-android-video-chat-application">Building a Complete WebRTC Android Video Chat Application</h3><p>Creating a full-fledged video chat application involves not only managing video streams but also handling signaling and network traversal, session descriptions, and ICE candidates efficiently.</p><h3 id="integrating-signaling">Integrating Signaling</h3><p>Signaling is an essential part of establishing a peer connection in WebRTC Android, used for coordinating communication and managing sessions. Here's a basic overview of how signaling could be implemented using WebSocket:</p><pre><code class="language-java">WebSocketClient client = new WebSocketClient(uri) {
    @Override
    public void onOpen(ServerHandshake handshakedata) {
        // Send offer/answer and ICE candidates to the remote peer.
    }

    @Override
    public void onMessage(String message) {
        // Handle incoming offers, answers, and ICE candidates.
    }

    @Override
    public void onClose(int code, String reason, boolean remote) {
        // Handle closure of connection.
    }

    @Override
    public void onError(Exception ex) {
        // Handle errors during communication.
    }
};
client.connect();
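// Hedged sketch: one simple convention is to exchange JSON messages, e.g.
//   client.send("{\"type\":\"offer\",\"sdp\":\"" + offerSdp + "\"}");
// where offerSdp comes from the SessionDescription produced by createOffer().
// The exact message shape is up to your signaling protocol, not WebRTC itself.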
</code></pre><p>This WebSocket client setup facilitates the real-time exchange of signaling data necessary to initiate and maintain WebRTC Android connections.</p><h3 id="session-management-and-ice-handling">Session Management and ICE Handling</h3><p>Efficiently managing session descriptions and ICE candidates ensures that connections are established quickly and remain stable, even across complex network configurations:</p><pre><code class="language-java">// Handling an offer received from a remote peer.
peerConnection.setRemoteDescription(new SimpleSdpObserver(), sessionDescription);
peerConnection.createAnswer(new SimpleSdpObserver() {
    @Override
    public void onCreateSuccess(SessionDescription sdp) {
        peerConnection.setLocalDescription(new SimpleSdpObserver(), sdp);
        // Send the answer back to the remote peer through the signaling channel.
    }
}, new MediaConstraints());

// Adding received ICE candidates.
peerConnection.addIceCandidate(iceCandidate);
</code></pre><p>This section focuses on integrating UI components and building a comprehensive system for a multi-user video chat application using WebRTC on Android. These advanced implementations illustrate the capabilities of WebRTC Android in handling real-time media and data interactions, providing developers with the <a href="https://developer.android.com/">tools</a> needed to create robust and interactive communication applications.</p><h3 id="faqs-and-troubleshooting-for-webrtc-android">FAQs and Troubleshooting for WebRTC Android</h3><p>As developers dive deeper into integrating WebRTC into Android applications, questions and challenges are bound to arise. This section addresses common FAQs inspired by the "People Also Ask" section on Google, and provides troubleshooting tips to help developers <a href="https://developer.android.com/topic/performance/overview">optimize performance</a> and enhance<a href="https://developer.android.com/privacy-and-security/security-tips"> security</a> in their WebRTC implementations.</p><h3 id="faqs-on-webrtc-android">FAQs on WebRTC Android</h3><p><strong>What are the most common issues when implementing WebRTC on Android?</strong></p><ul><li><strong>Compatibility:</strong> Ensuring that WebRTC is compatible across different Android versions and devices.</li><li><strong>Connection Stability:</strong> Managing ICE candidates and network changes to maintain stable connections.</li><li><strong>Audio and Video Quality:</strong> Optimizing media stream configurations to improve quality.</li></ul><p><strong>How do I optimize WebRTC's performance on mobile devices?</strong></p><ul><li>Focus on efficient bandwidth management, adjust the resolution and frame rate based on network conditions, and utilize hardware-accelerated codecs where available.</li></ul><p><strong>Can WebRTC be used for data channels only, without audio or video?</strong></p><ul><li>Yes, WebRTC supports the creation of data channels independent of audio and video 
streams, allowing for the transfer of arbitrary data directly between clients.</li></ul><p><strong>What security practices should be followed when using WebRTC on Android?</strong></p><ul><li>Always use secure protocols (e.g., HTTPS, WSS) for signaling and encrypt all peer-to-peer communication. Implement application-level authentication and authorization mechanisms to control access.</li></ul><h2 id="troubleshooting-common-webrtc-issues-optimizing-video-and-audio-streams">Troubleshooting Common WebRTC Issues: Optimizing Video and Audio Streams</h2><p>Poor video or audio quality can often be due to inadequate handling of media streams, particularly in fluctuating network environments. Consider implementing adaptive bitrate algorithms that dynamically adjust the media quality. Here’s an example approach using WebRTC's native support:</p><pre><code class="language-java">// Configure media constraints for optimal performance.
MediaConstraints videoConstraints = new MediaConstraints();
videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxHeight", Integer.toString(desiredHeight)));
videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxWidth", Integer.toString(desiredWidth)));
videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxFrameRate", Integer.toString(desiredFps)));
videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("minFrameRate", Integer.toString(minFps)));
</code></pre><h3 id="handling-network-changes">Handling Network Changes</h3><p>Mobile devices frequently switch networks (e.g., from WiFi to cellular), which can disrupt active WebRTC sessions. Implementing robust network change listeners and re-establishing connections where necessary is critical:</p><pre><code class="language-java">// Monitor network changes.
ConnectivityManager connectivityManager = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkRequest.Builder builder = new NetworkRequest.Builder();
connectivityManager.registerNetworkCallback(builder.build(), new ConnectivityManager.NetworkCallback() {
    @Override
    public void onAvailable(Network network) {
        // Re-establish connections or update ICE candidates.
    }
});
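// Hedged suggestion: also override onLost(Network) inside the callback above to
// pause media or trigger an ICE restart when connectivity drops.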
</code></pre><h3 id="security-enhancements">Security Enhancements</h3><p>Security in WebRTC applications goes beyond encryption, focusing also on securing the signaling pathways and ensuring that ICE candidates are gathered and transferred securely:</p><pre><code class="language-java">// Securely transfer ICE candidates and session descriptions.
secureWebSocket.send(encrypt(iceCandidate.toString()));</code></pre><h2 id="conclusion">Conclusion</h2><p>The integration of WebRTC Android applications opens up a plethora of possibilities for real-time communication and data exchange. By following the best practices for setup, handling streams, and troubleshooting as discussed in this guide, developers can create robust, efficient, and secure applications. Continue to test and optimize based on real-world usage to ensure the best user experience.</p>]]></content:encoded></item><item><title><![CDATA[Introduction to Video Calling API Pricing]]></title><description><![CDATA[Pricing with no hassle. Study the affordability and quality of several entities and make a choice! Make video calling experiences supreme

]]></description><link>https://www.videosdk.live/blog/video-conferencing-api-pricing</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb6b</guid><category><![CDATA[Pricing]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 09 Oct 2024 07:30:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/09/Pricing-thumbnail.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Pricing-thumbnail.jpg" alt="Introduction to Video Calling API Pricing"/><p/>
<!--kg-card-begin: html-->
<iframe width="560" height="315" src="https://www.youtube.com/embed/eAf_hWgqWvI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/>
<!--kg-card-end: html-->
<p/><p><br>VideoSDK believes in delivering the best real-time audio and video experience to developers for their apps and websites. It builds APIs that integrate with a few lines of code in the quickest time, enhancing user engagement. We offer video calling APIs at affordable prices, with full flexibility and customization. <br/></br></p><p><strong>VideoSDK brings its pricing!</strong></p><p><strong>Free Credit: $20 free credit</strong><br>You can integrate video calling with your own choice of appearance and widgets into your web and mobile apps effortlessly with our APIs. Our video calling pre-built helps developers design calls with rich customization that upgrades their user interface. <br/></br></p><blockquote>Before we begin with pricing, here is a general tip on integrating a video call in the easiest way possible with us. We make sure each of our clients gets the best possible experience. <br/></blockquote><ul><li>You can integrate video calling without any outside assistance. It just demands integration with a few lines of code. You just need <a href="https://app.videosdk.live/">10 minutes</a> before you start!<br/></li><li>There is no need to speak to sales support to learn our pricing. You can view our <a href="https://videosdk.live/pricing">video calling pricing policies </a>here.<br/></li></ul><p>This blog explains how our API pricing differs from that of other <a href="https://videosdk.live">video calling API providers</a>. Our simplified pricing plans help users make effective strategies. It gives readers a clear picture of how our pricing works. <br/></p><h2 id="how-to-calculate-participant-minutes">How to Calculate Participant Minutes</h2><blockquote>Participant Minutes are the total number of minutes spent by each participant in one meeting. 
Videosdk.live calculates Participant Minutes based on the number of participants present in a meeting. The computation is simple.</blockquote><p><strong>Participant Minutes = Number of participants (N) x Minutes Consumed (M)</strong></p><p>The references below make the calculation easy to follow.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Paid-Plans2.jpg" class="kg-image" alt="Introduction to Video Calling API Pricing" loading="lazy" width="2000" height="588"><figcaption><span style="white-space: pre-wrap;">6 (N) x 50 (M) = 300 (PM)</span></figcaption></img></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/image-2.jpg" class="kg-image" alt="Introduction to Video Calling API Pricing" loading="lazy" width="2000" height="588"><figcaption><span style="white-space: pre-wrap;">2 (N) x 20 (M) = 40 (PM)</span></figcaption></img></figure><h2 id="free-plans">Free plans</h2><blockquote>As stated earlier, we have set aside 10,000 free minutes for you each month. How are these minutes counted? 
It can be illustrated with some examples.</blockquote><blockquote><strong>Note that</strong>- At videosdk.live, you can create <strong>unlimited rooms</strong> <strong>at the same time</strong> even when you are consuming free minutes!</blockquote><p><strong>Example:</strong></p><p><strong><em>Video Conferencing of 30 participants for 60 minutes</em></strong></p><ul><li>Total Minutes consumed - 30 Participants x 60 Minutes = 1,800 Participant Minutes. <strong>BUT, </strong>these minutes are free; there is no cost to be borne by you.</li><li><strong>On a very simple calculation, your free minutes will be reduced by 1,800 Free Minutes.</strong></li><li><strong>Remaining Free Minutes = 10,000 - 1,800 = 8,200 Minutes</strong></li><li>Similarly, the participant minutes of your next meeting will be further deducted from the remaining free minutes.<br/></li></ul><p><strong><em>Get a bonus example</em></strong></p><p>By our estimate, if 6 participants attend a 50-minute meeting regularly for 30 days, you can hold <strong>30 FREE meetings each month!! </strong></p><h2 id="pro-plans">Pro Plans</h2><blockquote>Pro plans are the paid plans that apply to your video calls after you consume 10,000 free minutes in a month. These pro plans are recommended for companies looking to scale engagement. <strong>Believe us, with the quality we develop, you cannot find a price lower than ours! It’s a profitable deal!</strong></blockquote><blockquote><strong>Note that</strong>- At videosdk.live, you can create <strong>unlimited rooms</strong> <strong>at the same time </strong>on your pro plan with<strong> no additional expenses!</strong></blockquote><p>We offer Pro Plans with flexibility in resolutions, so you can manage your projects at your convenience. We work on a <a href="https://growthrocks.com/blog/saas-subscription-models/" rel="noreferrer">simplified pricing strategy</a>, i.e.,<strong> </strong>based on the number of users. 
The references below will clarify our pricing strategy. </p><p><strong>Pricing = Number of participants x Meeting minutes x Unit Price per minute</strong><br/></p><p><strong>Example 1:</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/image-2--1-.jpg" class="kg-image" alt="Introduction to Video Calling API Pricing" loading="lazy" width="2000" height="588"><figcaption><span style="white-space: pre-wrap;">2 (N) x 50 (M) = 100 (PM)</span></figcaption></img></figure><p>The above reference signifies that with 2 participants in a meeting for 50 minutes, it sums up to 100 Participant Minutes as per our formulae. You just need to multiply the unit price to make a final pricing estimation.</p><ul><li>In a 480p resolution, the price per minute is $ 0.00199. So, total pricing= N x M x price per minute= 2 x 50 x 0.00199 = $0.20</li><li>In a 720p resolution, the price per minute is $ 0.00299. So, total pricing= N x M x price per minute= 2 x 50 x 0.00299 = $0.30</li><li>In a 1080p resolution, the price per minute is $ 0.00699. So, total pricing= N x M x price per minute= 2 x 50 x 0.00699 = $0.70</li></ul><p><br><strong>Example 2:</strong></br></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Paid-Plans2-1.jpg" class="kg-image" alt="Introduction to Video Calling API Pricing" loading="lazy" width="2000" height="588"><figcaption><span style="white-space: pre-wrap;">6 (N) x 50 (M) = 300 (PM)</span></figcaption></img></figure><p>This reference shows that with 6 participants in a meeting of 50 minutes, the Participant minutes sum up to 300. The further step is to correlate and multiply with the unit price.</p><p>We provide our Video calling APIs with resolutions of 480p (SD), 720p (HD), and 1080p (Full HD). 
A few examples make the pricing concrete:<br/></p><ul><li>In a 480p resolution, the price per minute is $0.00199. So, total pricing = N x M x price per minute = 6 x 50 x 0.00199 = $0.60</li><li>In a 720p resolution, the price per minute is $0.00299. So, total pricing = N x M x price per minute = 6 x 50 x 0.00299 = $0.90</li><li>In a 1080p resolution, the price per minute is $0.00699. So, total pricing = N x M x price per minute = 6 x 50 x 0.00699 = $2.10</li></ul><h2 id="enterprise-plan"><strong>Enterprise Plan</strong></h2><p>The Enterprise Plan is for companies that need dedicated management, support, and other technical services. We bring this plan to promote mass engagement at affordable prices.</p><blockquote><a href="https://videosdk.live/contact">Contact Support</a> for the best pricing deals.</blockquote><h3 id="comparison-with-other-api-providers">Comparison with Other API Providers</h3><p>Several other companies take the same approach, building video calling APIs for their developer clients. A major difference between VideoSDK and other API providers is pricing. These companies compute pricing as per <strong>Subscriber Minutes</strong>. With the same quality, there is a huge pricing difference.<br/></p><p><strong><em>How do these companies compute Subscriber Minutes?</em></strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Other-Providers-calculation-2.jpg" class="kg-image" alt="Introduction to Video Calling API Pricing" loading="lazy" width="2000" height="588"><figcaption><span style="white-space: pre-wrap;">2(N) x (2-1) x 50 (M) = 100 (PM)</span></figcaption></img></figure><p>The formula they typically use is:</p><p><strong>Number of Participants (N) x (N-1) x Minutes (M) x Unit Price per minute.</strong></p><ul><li>In a 720p resolution, price per minute = $0.00399. 
So, total pricing = N x (N-1) x M x price per minute = 2 x (2-1) x 50 x 0.00399 = $0.40</li><li>In a 1080p resolution, price per minute = $0.00899. So, total pricing = N x (N-1) x M x price per minute = 2 x (2-1) x 50 x 0.00899 = $0.90</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Other-Providers-calculation-1.jpg" class="kg-image" alt="Introduction to Video Calling API Pricing" loading="lazy" width="2000" height="588"><figcaption><span style="white-space: pre-wrap;">6(N) x (6-1) x 50 (M) = 1500 (PM)</span></figcaption></img></figure><ul><li>This is a 2K+ resolution, price per minute = $0.03599. So, total pricing = N x (N-1) x M x price per minute = 6 x (6-1) x 50 x 0.03599 = $54.00</li></ul><blockquote>2K+ is explained in the next section.</blockquote><p>The Subscriber Minutes pricing policy adopted by several other API providers adds up to a very high price. VideoSDK follows a simple policy of calculating by the number of users.<br/></p><h2 id="understanding-2k-resolution">Understanding 2K+ Resolution</h2><p>2K+ refers to a combined resolution exceeding 2160 pixels. Video calling API providers often calculate resolution based on the number of participants and their respective screen resolutions. 
The reference below shows how 2K+ is computed.<br/></p><p><strong><em>How is 2K+ resolution calculated?</em></strong></p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Pixels-calculation.jpg" class="kg-image" alt="Introduction to Video Calling API Pricing" loading="lazy" width="2000" height="588"/></figure><p>The image shows that when the resolutions of all meeting participants displayed on the screen are totaled, the sum exceeds 2K pixels, making the call 2K+.</p><p><strong>A point to note:</strong></p><p>Even if a meeting is conducted at a resolution of 480p, if the total resolution on the screen exceeds 2160p, it will be considered 2K+.<br/></p><h2 id="price-comparison%E2%80%9Cvideosdklive-vs-other-api-providers%E2%80%9D">Price Comparison- “Videosdk.live vs. other API providers”</h2><p>The table below compares Videosdk.live's pricing with that of other providers. Most companies price close to the estimates we have cited. <br/></p><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/pricing-table.jpg" class="kg-image" alt="Introduction to Video Calling API Pricing" loading="lazy" width="1576" height="1040"/></figure><p>The table may raise questions about what we deliver. We provide the same quality and capabilities as these companies; the approach is identical, but the prices differ enormously. You can always <a href="https://videosdk.live/contact">get in touch</a> with us through our fastest support channel. We welcome all your queries.</p>
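To make the comparison concrete, here is a minimal, self-contained Java sketch of the two pricing models discussed above. The class and method names are ours for illustration only (not part of any SDK), and the rates are the example rates quoted in this post:

```java
public class PricingComparison {

    // VideoSDK model: participants x minutes x unit price per minute.
    static double videoSdkCost(int participants, int minutes, double pricePerMinute) {
        return participants * minutes * pricePerMinute;
    }

    // "Subscriber Minutes" model used by some other providers:
    // participants x (participants - 1) x minutes x unit price per minute.
    static double subscriberMinutesCost(int participants, int minutes, double pricePerMinute) {
        return participants * (participants - 1) * minutes * pricePerMinute;
    }

    public static void main(String[] args) {
        // 6 participants for 50 minutes, using example rates from this post.
        System.out.printf("VideoSDK @ 720p:          $%.2f%n",
                videoSdkCost(6, 50, 0.00299));          // about $0.90
        System.out.printf("Subscriber minutes @ 2K+: $%.2f%n",
                subscriberMinutesCost(6, 50, 0.03599)); // about $54
    }
}
```

With identical call parameters, the per-participant model grows linearly with the number of participants, while the subscriber-minutes model grows quadratically, which is why the gap widens so quickly in larger meetings.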
<!--kg-card-begin: html-->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Video Calling & Conferencing API | Pricing | Pricing Comparison | Video SDK",
  "description": "Video Calling & Conferencing API | Pricing | Pricing Comparison | Video SDK We also offer you 10,000 minutes, free every month.",
  "thumbnailUrl": "https://img.youtube.com/vi/eAf_hWgqWvI/maxresdefault.jpg",
  "uploadDate": "2021-09-27",
  "duration": "PT7M19S",
  "contentUrl": "https://youtu.be/eAf_hWgqWvI",
  "embedUrl": "https://www.videosdk.live/blog/video-conferencing-api-pricing"
}
</script>
<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Embed Video Calls with Our Prebuilt SDK]]></title><description><![CDATA[Explore the power of Video SDK Embed with our insightful blog. Uncover tips, trends, and tech insights for seamless integration.]]></description><link>https://www.videosdk.live/blog/video-sdk-embed</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb66</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Getting Started]]></category><category><![CDATA[video-conferencing]]></category><category><![CDATA[Video-sdk]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Wed, 09 Oct 2024 07:15:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/10/Untitled-design--26-.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="effortless-video-call-integration-in-just-10-minutes">Effortless Video Call Integration in Just 10 Minutes</h2>
<!--kg-card-end: markdown--><img src="http://assets.videosdk.live/static-assets/ghost/2021/10/Untitled-design--26-.png" alt="Embed Video Calls with Our Prebuilt SDK"/><p>Following our announcement of the prebuilt launch in the previous blog, we are excited to deliver its manual. This blog will get readers up and running with the prebuilt in no time. </p><h3 id="seamless-video-call-embedding-with-videosdk">Seamless Video Call Embedding with VideoSDK</h3><p>Embedding a video call in your application becomes easy with VideoSDK. With our prebuilt, you can add video calls to your website and application with just a few lines of code; nothing could be simpler. You can share URLs with participants and accommodate up to 5,000 people on the same call.</p><p>We provide a free setup so you can experience our prebuilt and make the best of it. We also offer 10,000 free minutes every month.</p><p>Embedded video calling lets you integrate a real-time communication SDK without writing explicit code. It supports all modern JavaScript frameworks such as React, Vue, and Angular, as well as Vanilla JS.</p><h3 id="start-for-free-experience-our-prebuilt-sdk">Start for Free: Experience Our Prebuilt SDK</h3><p>For better understanding, we have divided the prebuilt setup into a few steps. Each step describes the code snippet needed to construct and integrate the SDK.</p><p>The prebuilt involves code that can be difficult for an inexperienced developer. In that case, you can take the help of a developer to configure the setup. You can always reach us; we provide 24/7 customer support for our clients.</p><h3 id="video-calls-with-videosdk-offer-users-amazing-features-with-quality-as-our-prior-most-concern">Video calls with VideoSDK offer users amazing features, with quality as our foremost concern.</h3><!--kg-card-begin: markdown--><ul>
<li>10,000 minutes free each month</li>
<li>Participant capacity up to 5,000</li>
<li>End-to-end encrypted calls</li>
<li>HD and Full HD Video calls</li>
<li>Audio support of 16kHz to 48kHz</li>
<li>360 Spatial Audio</li>
<li>Intelligent Active Speaker Switch</li>
<li>Real-time messaging</li>
<li>Cloud recording</li>
<li>Whiteboard and poll support</li>
<li>HIPAA Compliance in enterprise mode</li>
</ul>
<!--kg-card-end: markdown--><p>Let's begin with the prebuilt setup. Read all the steps carefully before installing it in your application.</p><p>It supports all the modern frameworks such as plain JavaScript, React JS, Vue, and Angular.</p><h2 id="step-by-step-guide-to-using-the-prebuilt-sdk">Step-by-Step Guide to Using the Prebuilt SDK</h2><h3 id="1-sign-up-for-a-free-account-prebuit-videosdk">1: Sign up for a free VideoSDK account</h3><p>Visit <a href="https://app.videosdk.live">https://app.videosdk.live</a> and sign up with your Google or GitHub account to generate a new <strong>API key</strong> to run the prebuilt.</p><h3 id="2-generate-an-api-key-and-secret">2: Generate an <a href="https://app.videosdk.live/settings/api-keys">API key and Secret</a></h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/12/image-3.png" class="kg-image" alt="Embed Video Calls with Our Prebuilt SDK" loading="lazy" width="1024" height="382"/></figure><h3 id="3-import-the-script-into-your-html-page">3: Import the script into your HTML page<br/></h3><figure class="kg-card kg-code-card"><pre><code class="language-javascript"> &lt;script src="https://sdk.videosdk.live/rtc-js-prebuilt/0.2.6/rtc-js-prebuilt.js"&gt;&lt;/script&gt;
</code></pre><figcaption>javascript</figcaption></figure><blockquote>Note: You can also use the official npm package: <a href="https://www.npmjs.com/package/@videosdk.live/rtc-js-prebuilt">rtc-js-prebuilt</a></blockquote><h3 id="4-add-script-and-set-up-the-meeting">4: Add script and set up the meeting</h3><p>Create an index.html<strong> </strong>file and add the following &lt;script&gt; tag at the end of your code's &lt;body&gt; tag. Initialize VideoSDKMeeting after the script gets loaded. Replace the apiKey with the one generated in <strong>Step 2</strong>.<br/></p><pre><code class="language-javascript">&lt;script&gt;
  var script = document.createElement("script");
  script.type = "text/javascript";

  script.addEventListener("load", function (event) {
    const meeting = new VideoSDKMeeting();

    const config = {
      name: "John Doe",
      apiKey: "&lt;API KEY&gt;", // generated in step 2
      meetingId: "milkyway", // enter your meeting id

      containerId: null,
      redirectOnLeave: "https://www.videosdk.live/",

      micEnabled: true,
      webcamEnabled: true,
      participantCanToggleSelfWebcam: true,
      participantCanToggleSelfMic: true,

      chatEnabled: true,
      screenShareEnabled: true,
      pollEnabled: true,
      whiteboardEnabled: true,
      raiseHandEnabled: true,

      recordingEnabled: true,
      recordingWebhookUrl: "https://www.videosdk.live/callback",
      recordingAWSDirPath: "/meeting-recordings/milkyway/", // S3 path where recordings are saved automatically; match your meetingId
      autoStartRecording: true, // auto start recording on participant joined

      brandingEnabled: true,
      brandLogoURL: "https://picsum.photos/200",
      brandName: "Awesome startup",
      poweredBy: true,

      participantCanLeave: true, // if false, leave button won't be visible

      // Live stream meeting to youtube
      livestream: {
        autoStart: true,
        outputs: [
          // {
          //   url: "rtmp://x.rtmp.youtube.com/live2",
          //   streamKey: "&lt;STREAM KEY FROM YOUTUBE&gt;",
          // },
        ],
      },

      permissions: {
        askToJoin: false, // Ask joined participants for entry in meeting
        toggleParticipantMic: true, // Can toggle other participant's mic
        toggleParticipantWebcam: true, // Can toggle other participant's webcam
        drawOnWhiteboard: true, // Can draw on whiteboard
        toggleWhiteboard: true, // Can toggle whiteboard
        toggleRecording: true, // Can toggle meeting recording
        removeParticipant: true, // Can remove participant
        endMeeting: true, // Can end meeting
      },

      joinScreen: {
        visible: true, // Show the join screen ?
        title: "Daily scrum", // Meeting title
        meetingUrl: window.location.href, // Meeting joining url
      },

      pin: {
        allowed: true, // participant can pin any participant in meeting
        layout: "SPOTLIGHT", // meeting layout - GRID | SPOTLIGHT | SIDEBAR
      },

      leftScreen: {
        // visible when redirectOnLeave is not provided
        actionButton: {
          // optional action button
          label: "Video SDK Live", // action button label
          href: "https://videosdk.live/", // action button href
        },
      },

      notificationSoundEnabled: true,

      maxResolution: "sd", // "hd" or "sd"
    };

    meeting.init(config);
  });

  script.src =
    "https://sdk.videosdk.live/rtc-js-prebuilt/0.2.6/rtc-js-prebuilt.js";
  document.getElementsByTagName("head")[0].appendChild(script);
&lt;/script&gt;</code></pre><h3 id="5-run-the-application">5: Run the application</h3><p>Install any HTTP server if you don't already have one and run the server to join the meeting from the browser.</p><figure class="kg-card kg-code-card"><pre><code class="language-Node.JS">$ npm install -g live-server
$ live-server --port=8000</code></pre><figcaption>Node.JS</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-Python">$ python3 -m http.server</code></pre><figcaption>Python</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-PHP">$ php -S localhost:8000
</code></pre><figcaption>PHP</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-XAMPP">Move the HTML file to C:\xampp\htdocs and start the XAMPP server</code></pre><figcaption>XAMPP</figcaption></figure><p/><p>and open localhost in your web browser<br/></p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/12/giphy--4-.gif" class="kg-image" alt="Embed Video Calls with Our Prebuilt SDK" loading="lazy" width="480" height="270"/></figure><p><strong>NOTE</strong></p><blockquote>Also check out this <a href="https://github.com/videosdk-live/videosdk-rtc-js-prebuilt-embedded-example" rel="noopener noreferrer">example code</a> on GitHub, or <a href="https://github.com/videosdk-live/videosdk-rtc-js-prebuilt-embedded-example/archive/refs/tags/v0.1.0.zip" rel="noopener noreferrer">download</a> the full source code and unzip it on your computer to get started quickly.</blockquote><p><strong>Find our documentation here:</strong></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/docs/tutorials/realtime-communication/prebuilt-sdk/quickstart-prebuilt-js"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Quickstart for Prebuilt JS | video sdk Documentation</div><div class="kg-bookmark-description">videosdk.live tutorials will help you to deep dive into details of all the SDK and API. 
We tried to include examples for all the possible programming languages.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Embed Video Calls with Our Prebuilt SDK"><span class="kg-bookmark-author">video sdk Documentation</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Embed Video Calls with Our Prebuilt SDK"/></div></a></figure><!--kg-card-begin: html--><iframe width="100%" height="315" src="https://www.youtube.com/embed/b3IzLHeDvyE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/><!--kg-card-end: html--><p/><p/>]]></content:encoded></item><item><title><![CDATA[VBIP - Video Based Identification Process]]></title><description><![CDATA[A video-based identification procedure involves verifying a person's identity through video footage, often used in security, remote verification, or authentication processes.]]></description><link>https://www.videosdk.live/blog/video-based-identification-procedure-vbip</link><guid isPermaLink="false">66742bae20fab018df10eca8</guid><category><![CDATA[Industry Update]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 08 Oct 2024 11:10:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/VBIP---Video-based-Identity-Verification---Insurance.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-vbip">What is VBIP?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/VBIP---Video-based-Identity-Verification---Insurance.jpg" alt="VBIP - Video Based Identification Process"/><p>VBIP (Video Based Identification Procedure) is a digital verification method that allows individuals to authenticate their identity remotely using video technology. 
VBIP is part of the onboarding journey that includes a consent-based, real-time interaction conducted digitally by an insurance company representative with a prospective customer.</p><p>This process is becoming increasingly important in the insurance industry. VBIP (Video-based Identification Procedure) enables insurers to onboard new customers quickly and securely through a live video session, without the need for in-person meetings or physical document submissions.</p><p><strong>The VBIP process involves the following steps:</strong></p><ul><li>The customer initiates the video verification process through a mobile app or web portal provided by the insurance company.</li><li>The customer captures a live video of themselves, following specific instructions such as looking at the camera, declaring their name, and performing certain actions to verify their identity.</li><li>The video is then analyzed by the insurance company's VBIP system, which uses <strong>advanced facial recognition</strong> and <strong>liveness detection algorithms</strong> to confirm the customer's identity via government-issued ID and ensure they are a real person (not a pre-recorded video or a deepfake).</li><li>Once the customer's identity is verified, the insurance company can proceed with the onboarding process, including collecting any necessary personal or financial information and issuing a suitable insurance policy.</li></ul><h2 id="difference-between-vbip-and-vcip">Difference between VBIP and VCIP</h2><p>While VBIP and <a href="https://www.videosdk.live/blog/rbi-compliance-for-video-kyc">VCIP (Video Based Customer Identification Process)</a> are both video-based identity verification methods, there are some key differences between the two:</p><p>VBIP is a more comprehensive process that verifies the customer's identity with liveness and authenticity checks, ensuring the person is real and not a pre-recorded video or deepfake. 
VCIP, on the other hand, is a simpler video-based identification process that primarily focuses on verifying the customer's identity using a live video session and government-issued ID, without the additional liveness and authenticity checks.</p><p>In the insurance industry, VBIP is generally preferred over VCIP as it provides a higher level of security and fraud prevention, which is crucial for claim settlement and sensitive personal data.</p><h2 id="what-does-irdai-vbip-mean-for-insurers">What does IRDAI VBIP mean for Insurers?</h2><p>The <a href="https://irdai.gov.in/document-detail?documentId=395494">IRDAI</a> (Insurance Regulatory and Development Authority of India) has issued guidelines for the implementation of VBIP as a mandatory requirement for insurance companies. In a circular sent exclusively to all general, life, and standalone health insurers, IRDAI expressed its readiness to introduce a Video-based Identification Process (VBIP) allowing insurers to offer a video-based KYC method to their customers for KYC verification.</p><p>The IRDAI's VBIP guidelines aim to modernize the insurance industry, making it more accessible and customer-centric while maintaining robust security and compliance standards. This means that insurers must adopt VBIP-enabled solutions to ensure compliance with regulatory standards. VBIP enhances the security and efficiency of the onboarding process and also helps insurers reduce costs and improve customer satisfaction.</p><p>Some of the key implications of IRDAI's VBIP guidelines for insurers include:</p><ul><li><strong>Mandatory Adoption</strong>: Insurers are mandated to implement VBIP as an alternative to traditional in-person KYC (Know Your Customer) processes, providing customers with a more convenient and secure onboarding experience.</li><li><strong>Compliance Requirements</strong>: Insurers must ensure that their VBIP systems and procedures adhere to the guidelines set by IRDAI. 
These guidelines include data security, customer consent, and record-keeping.</li><li><strong>Improved Efficiency</strong>: By digitizing the customer onboarding process through VBIP, insurers can significantly reduce the time and resources required for manual KYC verification, leading to faster policy issuance and improved customer satisfaction.</li></ul><p>The Video-Based Identification Process (VBIP) market is experiencing significant growth, driven by advancements in AI and biometric technologies. The global digital identity solutions market, which includes VBIP, is projected to grow from $34.5 billion in 2023 to $83.2 billion by 2029, at a compound annual growth rate (CAGR) of 19.3%. The video surveillance market, closely tied to VBIP, is estimated to grow from $81.68 billion in 2023 to $145.38 billion by 2029, with a CAGR of 12.22% during the forecast period.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/VBIP.png" class="kg-image" alt="VBIP - Video Based Identification Process" loading="lazy" width="2475" height="1425"/></figure><h2 id="download-the-latest-irdai-guidelines-e-book-for-free">Download the Latest IRDAI Guidelines e-book for Free!</h2><p>To learn more about VBIP and how it can benefit your insurance company, download our latest e-book</p><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<style>
		.center {
			text-align: center;
		}

		.my-button {
			width: 160px;
			font-size: 16px;
			background-color: #596bff;
			color: white;
			border: none;
			padding: 6px 6px;
			text-align: center;
			text-decoration: none;
			display: inline-block;
			cursor: pointer;
			border-radius: 5px; 
            margin-bottom: 20px;
		}

		.my-button:hover {
			background-color: #5f7ada;
		}
	</style>
</head>

<body>

	<div class="center">
		<a href="https://www.videosdk.live/resources/ebook/ebook-irdai-kyc-compliances">
			<button class="my-button">Download Now!</button>
		</a>
	</div>

</body>
</html><!--kg-card-end: html--><h2 id="what-are-the-problems-with-traditional-insurance-onboarding">What are the Problems with Traditional Insurance Onboarding?</h2><p>Traditional insurance onboarding processes often involve lengthy paperwork, manual data entry, and in-person document verification. This disparate workflow can be time-consuming, inconvenient, and prone to errors, leading to a frustrating customer experience and increased operational costs for insurers. The lack of standardization and the need for manual intervention makes it difficult for insurers to scale their operations efficiently.</p><p>Traditional insurance onboarding processes often involve several pain points for both customers and insurers, including:</p><ul><li><strong>Lengthy and Inconvenient</strong>: Customers are required to physically visit an insurance office or agent, fill out lengthy paper forms, and submit various identity documents, which can be time-consuming and inconvenient.</li><li><strong>Manual Verification</strong>: Insurance agents or back-office staff must manually review and verify the customer's identity documents, leading to delays and potential errors.</li><li><strong>Fragmented Workflows</strong>: The onboarding process may involve multiple touchpoints, such as document submission, payment, and policy issuance, which are often not integrated, resulting in a disjointed customer experience.</li><li><strong>Increased Risk of Fraud</strong>: Traditional onboarding methods are more susceptible to identity fraud, as they rely on physical documents that can be forged or misused.</li></ul><h2 id="how-will-vbip-help-insurance-companies">How will VBIP help Insurance companies?</h2><p>VBIP is part of the onboarding journey that includes a consent-based, real-time interaction conducted digitally by an insurance company representative with a prospective customer. 
VBIP-enabled solutions help insurers reduce the time and resources required for verification during the digital onboarding process.</p><p><strong>This leads to several benefits for insurance companies, such as:</strong></p><ul><li><strong>Faster Onboarding</strong>: VBIP allows customers to complete the onboarding process quickly and conveniently, without the need for in-person visits or physical document submissions, reducing the time and effort required for in-person verification.</li><li><strong>Regulatory Compliance</strong>: VBIP helps insurers comply with IRDAI's guidelines, ensuring that their onboarding practices meet the necessary security and data protection standards.</li><li><strong>Improved Security</strong>: VBIP's advanced facial recognition and liveness detection capabilities help insurers mitigate the risk of identity fraud, ensuring that only genuine customers are onboarded.</li><li><strong>Enhanced Customer Experience</strong>: VBIP provides a more convenient and seamless, digital-first onboarding experience that aligns with modern customer expectations, leading to higher satisfaction and retention rates.</li><li><strong>Reduced Operational Costs</strong>: VBIP virtualizes the identity verification process, eliminating the need for manual data entry and physical document handling, which can significantly reduce operational costs.</li><li><strong>Streamlined Workflows</strong>: By integrating VBIP into their onboarding processes, insurers can eliminate manual steps, reduce errors, and create a more efficient and cohesive customer journey.</li></ul><h2 id="what-are-the-key-verification-and-checks-involved-in-vbip">What are the Key Verification and Checks involved in VBIP?</h2><p>Insurers can ensure the authenticity of their customers by conducting verification and checks, allowing them to onboard new customers securely and in compliance with regulations.</p><p><strong>The process typically involves the following key verification and 
checks:</strong></p><ul><li><strong>Liveness detection</strong>: Ensuring that the customer is a real person and not a pre-recorded video or image.</li><li><strong>ID document verification</strong>: Verifying the authenticity and validity of the customer's government-issued ID.</li><li><strong>Facial matching</strong>: Comparing the customer's live video with the photo on the ID document, using AI technologies to ensure a match.</li><li><strong>Address verification</strong>: Confirming the customer's residential address, either by cross-checking the information on the ID document or against additional proof.</li><li><strong>Fraud detection</strong>: Employing fraud detection mechanisms, such as checking for suspicious behavior patterns or inconsistencies during the VBIP process.</li></ul><h2 id="how-does-videosdk-help-insurance-insurtech-companies">How does VideoSDK help Insurance (InsurTech) Companies?</h2><p><a href="https://www.videosdk.live/solutions/video-kyc">VideoSDK</a> is the <strong>only provider of IRDAI-compliant VBIP-enabled solutions</strong> for insurance companies. With VideoSDK, InsurTech companies can streamline their operations, reduce costs, and improve customer satisfaction through <strong>virtual claim settlement</strong>. 
Our solution integrates seamlessly with insurers' existing systems, providing compliance with IRDAI guidelines.</p><p><strong>Some of the key features and benefits of VideoSDK's solutions include:</strong></p><ul><li><strong>IRDAI-Compliant VBIP</strong>: Our VBIP solution is designed to meet all the requirements and guidelines set forth by the IRDAI, ensuring that InsurTech companies can seamlessly integrate it into their onboarding workflows.</li><li><strong>Robust Identity Verification</strong>: Integration with advanced facial recognition and liveness detection algorithms provides a high degree of accuracy in verifying customer identities, mitigating the risk of fraud.</li><li><strong>Seamless Integration</strong>: SDK and APIs can be easily integrated into InsurTech companies' existing platforms and applications, allowing for a smooth and efficient onboarding experience.</li><li><strong>Scalable and Secure</strong>: VideoSDK's cloud-based infrastructure and data security measures ensure that InsurTech companies can handle high volumes of customer onboarding while maintaining the highest standards of data protection.</li><li><strong>Comprehensive Support</strong>: Our team of experts provides comprehensive support and guidance to InsurTech companies throughout the implementation and ongoing maintenance of the VBIP solution.</li></ul><p>By integrating the VideoSDK solution, InsurTech companies can leverage a best-in-class VBIP solution that not only meets IRDAI's requirements but also delivers a superior customer experience with real-time video solutions.</p><h2 id="conclusion">Conclusion</h2><p>VBIP (Video-based Identification Procedure) is a transformative technology that is reshaping the insurance industry. 
By leveraging the power of video, VBIP offers a faster, more secure, and more convenient way for insurers to verify customer identities, leading to improved customer experiences and reduced operational costs.</p><p>As the IRDAI continues to provide regulatory guidance on VBIP, insurance companies that embrace this technology will be well-positioned to stay ahead of the curve and deliver exceptional service to their policyholders. By partnering with trusted providers like VideoSDK, <a href="https://www.videosdk.live/solutions/telehealth">InsurTech companies</a> can seamlessly integrate VBIP into their operations and unlock the full potential of this innovative solution.</p>]]></content:encoded></item><item><title><![CDATA[How to Add Video Calling in WordPress with Prebuilt SDK]]></title><description><![CDATA[In this tutorial, I will explain how to create a video calling app in WordPress with Prebuilt SDK in Just 5 minutes.]]></description><link>https://www.videosdk.live/blog/video-calling-wordpress</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb89</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Tue, 08 Oct 2024 09:27:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/05/Wordpress.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/05/Wordpress.jpg" alt="How to Add Video Calling in WordPress with Prebuilt SDK"/><p>This tutorial will provide a step-by-step guide that can help you create a video calling app in WordPress with the Prebuilt Video SDK in Just 5 minutes. I've been creating various guides, such as this one, on different subjects. My personal goal is to make it easier for new developers to start building Web and Mobile applications without all the unnecessary hurdles.</p><p>Prebuilt SDK is the easiest way to add a video calling feature to your website. 
It works on any device, with no installation needed. Create meetings, join meetings, and let people talk face-to-face over webcam in real time. </p><h2 id="step-1-create-an-account-and-generate-api-key">Step 1: Create an account and generate API Key</h2><p>First, you will need to <a href="https://app.videosdk.live"><strong>Sign Up</strong></a> to create a Video SDK account, which is absolutely <strong>free</strong>. Once you set up the account, go to <a href="https://app.videosdk.live/settings/api-keys" rel="noopener noreferrer">settings&gt;api-keys</a> and generate a new <strong>API key</strong> for integration. (For more info, you can follow <a href="https://docs.videosdk.live/docs/guide/prebuilt-video-and-audio-calling/signup-and-create-api" rel="noopener noreferrer">How to generate API Key?</a>)</p><h2 id="step-2-add-a-custom-html-block-to-your-page">Step 2: Add a Custom HTML block to your page</h2><p>Once you've logged in to your WordPress dashboard, you'll want to add a Custom HTML block to the page of your site where you wish to create a video call embed. 
(For more info, you can follow <a href="https://wordpress.org/support/article/custom-html/">this support article</a> on WordPress.org.)</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2022/05/wordprees-design--1-.gif" class="kg-image" alt="How to Add Video Calling in WordPress with Prebuilt SDK" loading="lazy" width="1440" height="810" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/05/wordprees-design--1-.gif 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/05/wordprees-design--1-.gif 1000w, http://assets.videosdk.live/static-assets/ghost/2022/05/wordprees-design--1-.gif 1440w" sizes="(min-width: 720px) 720px"><figcaption>Add a Custom HTML block to your page</figcaption></img></figure><p/><h2 id="step-3-add-prebuilt-code-in-html-block">Step 3: Add Prebuilt Code in HTML block</h2><p>Simply copy and paste the following code into your Custom HTML block.</p><pre><code class="language-Javascript">&lt;script&gt;
  var script = document.createElement("script");
  script.type = "text/javascript";

  script.addEventListener("load", function (event) {
    const config = {
      name: "Video SDK Live",
      meetingId: "prebuilt",
      apiKey: "&lt;YOUR API KEY&gt;",

      containerId: null,

      micEnabled: true,
      webcamEnabled: true,
      participantCanToggleSelfWebcam: true,
      participantCanToggleSelfMic: true,

      chatEnabled: true,
      screenShareEnabled: true,
    };

    const meeting = new VideoSDKMeeting();
    meeting.init(config);
  });

  script.src =
    "https://sdk.videosdk.live/rtc-js-prebuilt/0.3.1/rtc-js-prebuilt.js";
  document.getElementsByTagName("head")[0].appendChild(script);
&lt;/script&gt;</code></pre><p>Now, in the <a href="https://app.videosdk.live/settings/api-keys">apiKey</a> property (in the code above), replace <em><strong>"&lt;YOUR API KEY&gt;"</strong></em> with the <strong>API key</strong> we generated in <strong><a href="https://www.videosdk.live/blog/video-calling-wordpress#step-1-create-an-account-and-generate-api-key">Step 1</a></strong>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2022/05/wordpress-7-985981f79e1c6e48639fc189b6e8b138.png" class="kg-image" alt="How to Add Video Calling in WordPress with Prebuilt SDK" loading="lazy" width="1550" height="870" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/05/wordpress-7-985981f79e1c6e48639fc189b6e8b138.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/05/wordpress-7-985981f79e1c6e48639fc189b6e8b138.png 1000w, http://assets.videosdk.live/static-assets/ghost/2022/05/wordpress-7-985981f79e1c6e48639fc189b6e8b138.png 1550w" sizes="(min-width: 720px) 720px"><figcaption>Add the apiKey:"<strong>&lt;YOUR API KEY&gt;</strong>", then preview your site and go live.&nbsp;</figcaption></img></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2022/05/giphy--1--1.gif" class="kg-image" alt="How to Add Video Calling in WordPress with Prebuilt SDK" loading="lazy" width="480" height="270"><figcaption>You can also watch the step-by-step <a href="https://youtu.be/o8DIxmT1Ubg">video tutorial</a></figcaption></img></figure><blockquote>Create <a href="https://video-sdks.notion.site/Create-different-roles-host-guest-etc-b925cc4b97dd4e4c9989a03985b4f3a6">different roles</a> (host, guest, etc.)<br>Create unique <a href="https://video-sdks.notion.site/Create-unique-meeting-links-each-time-2858d22083674ab2acdf35d8dcc5df03">meeting links</a> each time</br></blockquote><!--kg-card-begin: 
markdown--><h4 id="advanced"><strong>Advanced</strong></h4>
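As an example of what the advanced configuration allows, the host/guest roles mentioned above can be sketched by deriving per-role prebuilt configs from one base object. The permission keys below are assumptions modeled on the prebuilt SDK's config options; verify the exact names against the Customize Permissions docs before relying on them.

```javascript
// Sketch: derive host and guest prebuilt configs from one base config.
// Permission key names are assumptions -- confirm them in the docs.
const baseConfig = {
  apiKey: "YOUR_API_KEY",
  meetingId: "prebuilt",
  micEnabled: true,
  webcamEnabled: true,
  chatEnabled: true,
  screenShareEnabled: true,
};

function configForRole(role) {
  const isHost = role === "host";
  return {
    ...baseConfig,
    name: isHost ? "Host" : "Guest",
    permissions: {
      toggleRecording: isHost,   // only the host controls recording
      removeParticipant: isHost, // only the host can remove participants
      endMeeting: isHost,        // only the host can end the meeting
    },
  };
}

const hostConfig = configForRole("host");
const guestConfig = configForRole("guest");
console.log(hostConfig.permissions.endMeeting, guestConfig.permissions.endMeeting);
// → true false
```

Each derived config would then be passed to `meeting.init(...)` exactly as in the snippet above, with the role chosen by your page (for example, from a query parameter).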
<!--kg-card-end: markdown--><blockquote>Developers, see our <a href="https://docs.videosdk.live/prebuilt/api/sdk-reference/setup">Prebuilt SDK</a>  for more information on creating and configuring rooms. <br><br><strong>Basic features:</strong> <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/join-screen">Join<strong> </strong>Screen</a>, <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/camera-controls">Camera Control</a>, <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/mic-controls">Mic Controls</a>, <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/redirect-on-leave">Redirect on Leave</a>, <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/screenshare">Share Your Screen</a>, <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/send-messages">Send Messages</a>, <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/pin-participants">Pin Participants</a>.<br><br><strong>Advanced features</strong>:  <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/recording-meeting">Cloud Recording</a>, <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/go-live-social-media">Go Live On Social Media</a>, <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/change-layout">Change Layout</a>, <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/customize-branding">Customize Branding</a>, <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/customize-permissions">Customize Permissions</a>, <a 
href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/whitelist-domain">Whitelist Domain For Better Security</a>.</br></br></br></br></blockquote><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2022/05/configuration-5b7667850e669d8e6d755692b451eec0--2-.png" class="kg-image" alt="How to Add Video Calling in WordPress with Prebuilt SDK" loading="lazy" width="1366" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/05/configuration-5b7667850e669d8e6d755692b451eec0--2-.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/05/configuration-5b7667850e669d8e6d755692b451eec0--2-.png 1000w, http://assets.videosdk.live/static-assets/ghost/2022/05/configuration-5b7667850e669d8e6d755692b451eec0--2-.png 1366w" sizes="(min-width: 720px) 720px"><figcaption>Advanced features in Prebuilt SDK</figcaption></img></figure><p>If you have questions while using the Video SDK Prebuilt SDK, I invite you to join the <a href="https://discord.gg/Gpmj6eCq5u" rel="noopener ugc nofollow">Discord Community</a>. 
You can ask your questions in the <code>#no-code channel</code>. Or you can reach out to me on <a href="https://twitter.com/sagarkava_" rel="noopener ugc nofollow">Twitter</a>.</p>]]></content:encoded></item><item><title><![CDATA[Build a Scalable JavaScript Video Calling App with Video SDK]]></title><description><![CDATA[This tutorial will walk you through the process of building a scalable, responsive JavaScript video calling app with Video SDK.]]></description><link>https://www.videosdk.live/blog/video-calling-javascript</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb7d</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Rajan Surani]]></dc:creator><pubDate>Tue, 08 Oct 2024 05:26:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/02/Javascript.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/02/Javascript.jpg" alt="Build a Scalable JavaScript Video Calling App with Video SDK"/><p>We have seen a major increase in the usage of virtual meetings in the past year, and the existing platforms cannot cater to everyone's custom needs. Building a custom, feature-rich solution from scratch is not a great option either, as you need to manage a lot of tasks; this is where <strong>VideoSDK.live</strong> comes to the rescue.<br><br>With <strong>Video SDK</strong> you can build a customized video chat app with all the features you need. In this guide, we will see how you can build a video chat app using <strong>JavaScript.</strong></br></br></p><h2 id="prerequisites">Prerequisites</h2><ol><li>Node.js v12+</li><li>NPM v6+ (comes pre-installed with newer Node versions)</li><li>You should have a Video SDK account to generate a token. 
Visit the Video SDK <strong><a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">dashboard</a></strong> to generate one.</li></ol><h2 id="project-structure">Project Structure</h2><!--kg-card-begin: markdown--><pre><code class="language-js">root
 ├── index.html
 ├── assets
 │    ├── css
 │    │    ├── index.css
 │    ├── js
 │         ├── index.js</code></pre>
<!--kg-card-end: markdown--><h2 id="implementation">Implementation</h2><h3 id="step-1-adding-videosdk">Step 1: Adding VideoSDK</h3><p>Update the <strong>index.html</strong> file with the <strong>&lt;script ... /&gt;</strong> tag shown below to add the VideoSDK JavaScript SDK to your project.</p><!--kg-card-begin: markdown--><pre><code class="language-html">&lt;html&gt;
  &lt;head&gt;
    ....
  &lt;/head&gt;
  &lt;body&gt;
    .....
    &lt;script src=&quot;https://sdk.videosdk.live/js-sdk/0.0.20/videosdk.js&quot;&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;
</code></pre>
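Because the script tag loads independently of your own code, it can help to guard on the SDK global before calling into it. A tiny sketch follows; the global name `VideoSDK` is an assumption here, so confirm what the js-sdk actually exposes.

```javascript
// Guard sketch: check that the SDK global exists before using it.
// The global name "VideoSDK" is an assumption -- confirm it in the js-sdk docs.
function sdkReady(globalScope) {
  return typeof globalScope.VideoSDK !== "undefined";
}

console.log(sdkReady({ VideoSDK: {} })); // true  (SDK loaded)
console.log(sdkReady({}));               // false (script not loaded yet)
```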
<!--kg-card-end: markdown--><p>If you don't want to use <strong>&lt;script ... /&gt;</strong>, you can use <strong>npm</strong> to install VideoSDK in your project.</p><!--kg-card-begin: markdown--><pre><code class="language-js">npm install @videosdk.live/js-sdk

//or you can use yarn
yarn add @videosdk.live/js-sdk
</code></pre>
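Before wiring up the UI, it helps to sketch what the join/create handler will do. Everything below is illustrative: `createMeeting` is a stub standing in for the real rooms API request (normally made server-side with your token), so only the branching logic is shown.

```javascript
// Sketch of the handler the "New Meeting" / "Join" buttons will call.
// createMeeting is a stub in place of the real rooms API call.
async function createMeeting(token) {
  return { roomId: "abcd-efgh-ijkl" }; // stubbed API response
}

async function meetingHandler(isCreate, token, enteredCode) {
  let meetingId;
  if (isCreate) {
    const room = await createMeeting(token); // "New Meeting" path
    meetingId = room.roomId;
  } else {
    meetingId = (enteredCode || "").trim();  // "Join" path
    if (!meetingId) throw new Error("Please enter a meeting code");
  }
  return meetingId; // the app then initializes the meeting with this id
}

meetingHandler(true, "YOUR_TOKEN").then((id) => console.log(id));
// → abcd-efgh-ijkl
```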
<!--kg-card-end: markdown--><h3 id="step-2-creating-the-ui">Step 2: Creating the UI</h3><p>For the interface, we will have simple Join and Create Meeting buttons, which join an existing meeting room or create a new one, respectively.</p><p>The meeting room will show the local and remote participant views, along with buttons to toggle the mic and webcam and to leave the meeting.</p><!--kg-card-begin: markdown--><pre><code class="language-html">&lt;html&gt;
  &lt;head&gt;
    &lt;!--favicon--&gt;
    &lt;link
      rel=&quot;shortcut icon&quot;
      href=&quot;https://videosdk.live/favicon/favicon.ico&quot;
    /&gt;
    &lt;meta charset=&quot;UTF-8&quot; /&gt;
    &lt;link rel=&quot;stylesheet&quot; href=&quot;./assets/css/index.css&quot; /&gt;
    &lt;!--add necessary bootstrap links here --&gt;
    ...
  &lt;/head&gt;
  &lt;body class=&quot;bg-secondary&quot;&gt;
    &lt;!--join-screen--&gt;
    &lt;div
      id=&quot;join-screen&quot;
      class=&quot;flex flex-row align-items-center justify-content-center h-100&quot; &gt;
      &lt;button
        class=&quot;btn btn-primary&quot;
        id=&quot;btnCreateMeeting&quot;
        onclick=&quot;meetingHandler(true)&quot; &gt;
        New Meeting
      &lt;/button&gt;
      &lt;input
        type=&quot;text&quot;
        id=&quot;txtMeetingCode&quot;
        placeholder=&quot;Enter Meeting Code ..&quot; /&gt;
      &lt;button
        id=&quot;btnJoinMeeting&quot;
        onclick=&quot;meetingHandler(false)&quot;
        class=&quot;btn btn-primary&quot; &gt;
        Join
      &lt;/button&gt;
    &lt;/div&gt;
      
    &lt;!--grid-screen--&gt;
    &lt;div id=&quot;grid-screen&quot;&gt;
      &lt;div&gt;
        &lt;input
          type=&quot;text&quot;
          class=&quot;form-control navbar-brand&quot;
          id=&quot;lblMeetingId&quot;
          readonly
        /&gt;
        &lt;button class=&quot;btn btn-dark&quot; id=&quot;btnToggleMic&quot;&gt;Unmute Mic&lt;/button&gt;
        &lt;button class=&quot;btn btn-dark&quot; id=&quot;btnToggleWebCam&quot;&gt;Disable Webcam&lt;/button&gt;
        &lt;button class=&quot;btn btn-dark&quot; id=&quot;btnLeaveMeeting&quot;&gt;Leave Meeting&lt;/button&gt;
      &lt;/div&gt;
      &lt;br /&gt;
      &lt;div id=&quot;videoContainer&quot;&gt;&lt;/div&gt;
      &lt;div
        style=&quot;position: absolute;
              top: 10px;
              right: 0px;
              height: 50%;
              overflow-y: scroll;&quot; &gt;
        &lt;h3&gt;Participants List&lt;/h3&gt;
        &lt;div id=&quot;participantsList&quot;&gt;&lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
      
    &lt;!--scripts--&gt;
    &lt;script src=&quot;./assets/js/config.js&quot;&gt;&lt;/script&gt;
    &lt;script src=&quot;./assets/js/index.js&quot;&gt;&lt;/script&gt;
    &lt;script src=&quot;https://sdk.videosdk.live/js-sdk/0.0.20/videosdk.js&quot;&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;
</code></pre>
<!--kg-card-end: markdown--><p>You can get the complete custom <a href="https://gist.github.com/rajansurani/eaf2e4b6f6d7d446f3b977aad6761f22">CSS style from here</a>.</p><p>We will declare all the <strong>DOM</strong> variables we will need in the <strong>index.js</strong> file.</p><!--kg-card-begin: markdown--><pre><code class="language-js">//DOM elements
let btnCreateMeeting = document.getElementById(&quot;btnCreateMeeting&quot;);
let btnJoinMeeting = document.getElementById(&quot;btnJoinMeeting&quot;);
let videoContainer = document.getElementById(&quot;videoContainer&quot;);
let btnLeaveMeeting = document.getElementById(&quot;btnLeaveMeeting&quot;);
let btnToggleMic = document.getElementById(&quot;btnToggleMic&quot;);
let btnToggleWebCam = document.getElementById(&quot;btnToggleWebCam&quot;);
</code></pre>
<!--kg-card-end: markdown--><h3 id="step-3-meeting-implementation">Step 3: Meeting Implementation</h3><p>To start the meeting implementation, we will need the <strong>token</strong>, so if you don't have one, you can generate it from <a href="https://app.videosdk.live/api-keys">here</a>.</p><ol><li>Update the <strong>token</strong> in the <strong>index.js</strong> file as shown, and add a simple validator.</li></ol><!--kg-card-begin: markdown--><pre><code class="language-js">//variables
let token = &quot;YOUR_TOKEN_HERE&quot;;

//handlers
async function tokenGeneration() {
  //Update this function with a server-side token generation for Production version
  if (token != &quot;&quot;) {
    console.log(&quot;token : &quot;, token);
  } else {
    alert(&quot;Please Provide Your Token First&quot;);
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>2. With the <strong>token</strong> in place, we will add the <strong>meetingHandler</strong>, which will create or join a meeting room.</p><!--kg-card-begin: markdown--><pre><code class="language-js">//variables
let meetingId = &quot;&quot;;

async function meetingHandler(newMeeting) {
  let joinMeetingName = &quot;JS-SDK&quot;;
  
  tokenGeneration();
  if (newMeeting) {
    const url = `${API_BASE_URL}/api/meetings`;
    const options = {
      method: &quot;POST&quot;,
      headers: { Authorization: token, &quot;Content-Type&quot;: &quot;application/json&quot; },
    };

    const { meetingId } = await fetch(url, options)
      .then((response) =&gt; response.json())
      .catch((error) =&gt; alert(&quot;error&quot;, error));
    document.getElementById(&quot;lblMeetingId&quot;).value = meetingId;
    document.getElementById(&quot;join-screen&quot;).style.display = &quot;none&quot;;
    document.getElementById(&quot;grid-screen&quot;).style.display = &quot;inline-block&quot;;
    startMeeting(token, meetingId, joinMeetingName);
  } else {
    meetingId = document.getElementById(&quot;txtMeetingCode&quot;).value;
    document.getElementById(&quot;lblMeetingId&quot;).value = meetingId;
    document.getElementById(&quot;join-screen&quot;).style.display = &quot;none&quot;;
    document.getElementById(&quot;grid-screen&quot;).style.display = &quot;inline-block&quot;;
    startMeeting(token, meetingId, joinMeetingName);
  }
}
</code></pre>
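One gap in the handler above: when joining, it is worth validating the code the user typed before calling `startMeeting()`. A small sketch, assuming room codes follow the common `xxxx-yyyy-zzzz` shape (an assumption — verify against the codes your account actually generates):

```javascript
// Hypothetical validator for user-entered meeting codes. The
// xxxx-yyyy-zzzz pattern is an assumption based on typical VideoSDK
// room ids -- adjust it to match the codes your dashboard issues.
function normalizeMeetingCode(raw) {
  const code = (raw || "").trim().toLowerCase();
  // three groups of four alphanumerics separated by hyphens
  const pattern = /^[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}$/;
  return pattern.test(code) ? code : null;
}
```

Calling this before `startMeeting()` lets you show an "invalid code" message instead of joining a room that does not exist.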
<!--kg-card-end: markdown--><p>3. Now the <strong>meetingId</strong> is either generated or taken from the value the user entered. After this, <strong>startMeeting()</strong> is triggered, which initializes the meeting with the required configuration and joins it.</p><!--kg-card-begin: markdown--><pre><code class="language-js">function startMeeting(token, meetingId, name) {
  // Meeting config
  window.ZujoSDK.config(token);

  // Meeting Init
  meeting = window.ZujoSDK.initMeeting({
    meetingId: meetingId, // required
    name: name, // required
    micEnabled: true, // optional, default: true
    webcamEnabled: true, // optional, default: true
    maxResolution: &quot;hd&quot;, // optional, default: &quot;hd&quot;
  });

  //join meeting
  meeting.join();
}
</code></pre>
<!--kg-card-end: markdown--><p>4. Now we will create the local participant view.</p><!--kg-card-begin: markdown--><pre><code class="language-js">function startMeeting(token, meetingId, name) {
  //...Meeting initialization and joining code is here

  //create Local Participant
  createParticipant(meeting.localParticipant);

  //local participant stream-enabled event
  meeting.localParticipant.on(&quot;stream-enabled&quot;, (stream) =&gt; {
    setTrack(
      stream,
      document.querySelector(`#v-${meeting.localParticipant.id}`),
      document.getElementById(`a-${meeting.localParticipant.id}`),
      meeting.localParticipant.id
    );
  });

}

//createParticipant
function createParticipant(participant) {

  //create videoElem of participant
  let participantVideo = createVideoElement(
    participant.id,
    participant.displayName
  );

  //create audioEle of participant
  let participantAudio = createAudioElement(participant.id);

  //append video and audio of participant to videoContainer div
  videoContainer.appendChild(participantVideo);
  videoContainer.appendChild(participantAudio);
}

// creating video element
function createVideoElement(id, name) {
  let videoFrame = document.createElement(&quot;div&quot;);
  videoFrame.classList.add(&quot;video-frame&quot;);

  //create video
  let videoElement = document.createElement(&quot;video&quot;);
  videoElement.classList.add(&quot;video&quot;);
  videoElement.setAttribute(&quot;id&quot;, `v-${id}`);
  videoElement.setAttribute(&quot;autoplay&quot;, true);
  videoFrame.appendChild(videoElement);

  //add overlay
  let overlay = document.createElement(&quot;div&quot;);
  overlay.classList.add(&quot;overlay&quot;);
  overlay.innerHTML = `Name : ${name}`;

  videoFrame.appendChild(overlay);
  return videoFrame;
}

// creating audio element
function createAudioElement(pId) {
  let audioElement = document.createElement(&quot;audio&quot;);
  audioElement.setAttribute(&quot;autoPlay&quot;, false);
  audioElement.setAttribute(&quot;playsInline&quot;, &quot;false&quot;);
  audioElement.setAttribute(&quot;controls&quot;, &quot;false&quot;);
  audioElement.setAttribute(&quot;id&quot;, `a-${pId}`);
  audioElement.style.display = &quot;none&quot;;
  return audioElement;
}

//set the video or audio stream in the video element
function setTrack(stream, videoElem, audioElement, id) {
  if (stream.kind == &quot;video&quot;) {
    // enablePermission(id);
    const mediaStream = new MediaStream();
    mediaStream.addTrack(stream.track);
    videoElem.srcObject = mediaStream;
    videoElem
      .play()
      .catch((error) =&gt;
        console.error(&quot;videoElem.current.play() failed&quot;, error)
      );
  }
  if (stream.kind == &quot;audio&quot;) {
    if (id == meeting.localParticipant.id) return;
    const mediaStream = new MediaStream();
    mediaStream.addTrack(stream.track);
    audioElement.srcObject = mediaStream;
    audioElement
      .play()
      .catch((error) =&gt; console.error(&quot;audioElem.play() failed&quot;, error));
  }
}
</code></pre>
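The code above resolves participant media elements through the `v-<id>` / `a-<id>` id convention in several places. Wrapping that lookup in one helper keeps every listener consistent; the injected `doc` parameter is just a test seam and defaults to the browser's `document`:

```javascript
// Resolve the <video> and <audio> elements for a participant, following
// the `v-<id>` / `a-<id>` id convention used by createParticipant().
// `doc` defaults to the global document; injecting it makes the helper
// testable outside a browser.
function findMediaElements(participantId, doc = document) {
  return {
    video: doc.querySelector(`#v-${participantId}`),
    audio: doc.getElementById(`a-${participantId}`),
  };
}
```

With this in place, both the local and remote `stream-enabled` handlers can call `findMediaElements(participant.id)` instead of repeating the selector strings.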
<!--kg-card-end: markdown--><p>5. To show the remote participants, we will add event listeners to <strong>meeting</strong>, which will notify us when a participant joins or leaves the meeting.</p><!--kg-card-begin: markdown--><pre><code class="language-js">function startMeeting(token, meetingId, name) {
  //... Meeting initialization and joining
  
  //... creating the local participant

  //participant joined
  meeting.on(&quot;participant-joined&quot;, (participant) =&gt; {
    //create view for the participant who joined
    createParticipant(participant);
    
    //listen for the stream change
    participant.on(&quot;stream-enabled&quot;, (stream) =&gt; {
      setTrack(
        stream,
        document.querySelector(`#v-${participant.id}`),
        document.getElementById(`a-${participant.id}`),
        participant.id
      );
    });
  });

  // participants left
  meeting.on(&quot;participant-left&quot;, (participant) =&gt; {
    let vElement = document.querySelector(`#v-${participant.id}`);
    vElement.parentNode.removeChild(vElement);
    let aElement = document.getElementById(`a-${participant.id}`);
    aElement.parentNode.removeChild(aElement);
    participants = new Map(meeting.participants);
    //remove it from participant list participantId;
    document.getElementById(`p-${participant.id}`).remove();
  });

}
</code></pre>
<!--kg-card-end: markdown--><p>6. Finally, we will add event listeners to the toggle buttons and the leave button.</p><!--kg-card-begin: markdown--><pre><code class="language-js">function startMeeting(token, meetingId, name) {
  //... Meeting initialization and joining

  //...creating local participants

  //...remote participant listeners

  addDomEvents();
}

function addDomEvents() {
  btnToggleMic.addEventListener(&quot;click&quot;, () =&gt; {
    if (btnToggleMic.innerText == &quot;Unmute Mic&quot;) {
      meeting.unmuteMic();
      btnToggleMic.innerText = &quot;Mute Mic&quot;;
    } else {
      meeting.muteMic();
      btnToggleMic.innerText = &quot;Unmute Mic&quot;;
    }
  });

  btnToggleWebCam.addEventListener(&quot;click&quot;, () =&gt; {
    if (btnToggleWebCam.innerText == &quot;Disable Webcam&quot;) {
      meeting.disableWebcam();
      btnToggleWebCam.innerText = &quot;Enable Webcam&quot;;
    } else {
      meeting.enableWebcam();
      btnToggleWebCam.innerText = &quot;Disable Webcam&quot;;
    }
  });

  btnLeaveMeeting.addEventListener(&quot;click&quot;, async () =&gt; {
    // leavemeeting
    meeting.leave();
    document.getElementById(&quot;join-screen&quot;).style.display = &quot;inline-block&quot;;
    document.getElementById(&quot;grid-screen&quot;).style.display = &quot;none&quot;;
  });
}
</code></pre>
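The toggle handlers above branch on the button label, so the label string is effectively the state. Pulling that transition into a pure function (the names here are illustrative, not part of the SDK) makes it easy to unit-test, and the webcam toggle follows the same pattern:

```javascript
// Pure state transition for the mic toggle: given the current button
// label, return which meeting method to call and the next label to show.
// Keeping this logic out of the DOM handler makes it unit-testable.
function nextMicAction(label) {
  return label === "Unmute Mic"
    ? { method: "unmuteMic", nextLabel: "Mute Mic" }
    : { method: "muteMic", nextLabel: "Unmute Mic" };
}
```

The click handler then becomes a two-liner: look up the action from the current label, call `meeting[action.method]()`, and set the button text to `action.nextLabel`.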
<!--kg-card-end: markdown--><h3 id="run-and-test">Run and Test</h3><p>To run the app, you will need <strong>live-server</strong>. If you don't have it installed already, you can install it with:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">npm install -g live-server
</code></pre>
<!--kg-card-end: markdown--><p>Once you have <strong>live-server</strong> installed, run it from the project root:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">live-server
</code></pre>
<!--kg-card-end: markdown--><p>When deploying your JavaScript to the web, consider shrinking it with a <a href="https://www.minifier.org/">JavaScript minifier</a>. Minified code downloads and parses faster in the browser, improving the overall UX of your app.</p><!--kg-card-begin: markdown--><h2 id="conclusion">Conclusion</h2>
<!--kg-card-end: markdown--><p>With this, we have successfully built a video chat app using Video SDK in JavaScript. If you wish to add functionalities like chat messaging and screen sharing, check out our <a href="https://docs.videosdk.live/">documentation</a>. If you face any difficulty with the implementation, see the <a href="https://github.com/videosdk-live/videosdk-rtc-javascript-sdk-example">example on GitHub</a> or connect with us on our <a href="https://discord.gg/Gpmj6eCq5u">Discord community</a>.</p><!--kg-card-begin: markdown--><h2 id="more-javascript-resources">More JavaScript Resources</h2>
<!--kg-card-end: markdown--><ul><li><a href="https://www.videosdk.live/blog/javascript-live-streaming">JavaScript Interactive Live Streaming App with Video SDK</a></li><li><a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start">JavaScript Documentation</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Top 10 WebRTC Alternatives in 2026]]></title><description><![CDATA[A practical look at the top 10 alternatives to WebRTC for building real-time video, covering the features, trade-offs, and pricing of each option.]]></description><link>https://www.videosdk.live/blog/webrtc-alternative</link><guid isPermaLink="false">64b4cb489eadee0b8b9e7035</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 07 Oct 2024 09:51:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/webrtc-alternative.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="the-webrtc-dilemmabuild-or-buy">The WebRTC Dilemma - Build or Buy?</h3><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/webrtc-alternative.jpg" alt="Top 10 WebRTC Alternatives in 2026"/><p>Are you looking for a seamless <strong>alternative to WebRTC</strong> that effortlessly integrates real-time video into your application? Look no further! Before adopting a video calling API, the main question in a developer’s mind is: <a href="https://www.videosdk.live/blog/build-or-buy-video-calling-api">Should I build or buy it?</a></p><p>Read on to see what you might have missed, especially if you already use <strong>WebRTC</strong>.</p><h2 id="why-consider-a-webrtc-alternative">Why Consider a WebRTC Alternative?</h2>
<p><a href="https://www.videosdk.live/blog/webrtc" rel="noreferrer">WebRTC</a> is an open-source project that enables real-time audio, video, and data sharing directly within web browsers. It’s ideal for building robust communication apps without additional plugins or installations.<br>Finding an <strong>alternative to WebRTC</strong> may be necessary for <a href="https://github.com/webrtc/samples/issues">reasons</a> such as performance, bugs in broadcast features, scalability, and more. To build one, a company must invest a significant amount of money in development costs, including UI/UX design, database management, cloud services, scaling, security, and more. Additionally, expenses for infrastructure, hosting, maintenance, and personnel salaries further increase the overall budget.</br></p><p>The <strong>top 10 WebRTC alternatives</strong> are VideoSDK, Twilio, MirrorFly, Agora, Jitsi, Vonage, AWS Chime, EnableX, Whereby, and SignalWire.</p><blockquote>
<h2 id="top-10-webrtc-alternatives-for-2026">Top 10 WebRTC Alternatives for 2026</h2>
<ul>
<li>VideoSDK</li>
<li>Twilio Video</li>
<li>MirrorFly</li>
<li>Agora</li>
<li>Jitsi</li>
<li>Vonage</li>
<li>AWS Chime SDK</li>
<li>EnableX</li>
<li>Whereby</li>
<li>SignalWire</li>
</ul>
</blockquote>
<h2 id="1-videosdk">1. VideoSDK</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-for-Real-time-Communication-Live-Streaming-Video-API-6.jpeg" class="kg-image" alt="Top 10 WebRTC Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Immerse yourself in the amazing capabilities of <a href="https://www.videosdk.live/">VideoSDK</a>, an API designed to seamlessly integrate robust audio and video features into your applications. With minimal effort, you can enhance your app by providing live audio and video experiences across different platforms. Elevate your user experience and take your application to the next level with VideoSDK's powerful features.</p><h3 id="key-points-about-videosdk">Key points about VideoSDK</h3>
<ul><li>Experience the <strong>seamless integration</strong> of VideoSDK, where simplicity and speed come together to let you focus on developing innovative features that enhance user retention.</li><li>With VideoSDK, you gain exceptional <strong>scalability</strong>, <strong>adaptive bitrate technology</strong> for optimal video quality, <strong>extensive customization</strong> options to tailor the user experience, <strong>high-quality recording</strong> capabilities, <strong>comprehensive analytics</strong> for valuable insights, cross-platform streaming for broader reach, effortless scalability for growing needs, and robust support across various platforms.</li><li>Whether you're developing for mobile (Flutter, Android, iOS), web (JavaScript Core SDK + UI Kit), or desktop (Flutter Desktop), Video SDK empowers you to effortlessly create immersive and engaging video experiences.</li></ul><h3 id="videosdk-pricing">VideoSDK Pricing</h3>
<ul><li>Discover the extraordinary value of VideoSDK! Embrace their generous offer of <a href="https://www.videosdk.live/pricing" rel="noreferrer">$20 free credit</a> and experience the flexibility of their <a href="https://www.videosdk.live/pricing#pricingCalc">pricing options</a> for video and audio calls.</li><li>With VideoSDK, indulge in <strong>video calls</strong> starting at an incredible rate of only <strong>$0.003</strong> per participant per minute, ensuring seamless connections without straining your budget. </li><li>Additionally, their <strong>audio calls</strong> are available at a minimal cost of just <strong>$0.0006</strong> per minute, making communication affordable and accessible.</li><li>For enhanced experiences, they provide <strong>cloud recordings</strong> at an affordable rate of <strong>$0.015</strong> per minute, preserving important moments for future reference. </li><li>For those seeking <strong>RTMP output</strong>, they offer competitive pricing at <strong>$0.030</strong> per minute, ensuring smooth and uninterrupted streaming capabilities.</li><li>Rest assured, their dedicated support team is available <strong>24/7</strong>, offering prompt and reliable customer assistance whenever you require it.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/webrtc-vs-videosdk"><strong>WebRTC and VideoSDK</strong></a><strong>.</strong></blockquote>
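To see what the quoted per-minute rates mean for a real call, note that cost scales with participant-minutes (participants × duration). A quick estimator sketch, treating the rates above as assumptions since pricing can change:

```javascript
// Rough monthly cost estimator using the per-participant-per-minute
// rates quoted above ($0.003 video, $0.0006 audio). Rates change over
// time -- treat these constants as assumptions, not authoritative pricing.
const RATES = { video: 0.003, audio: 0.0006 };

function estimateCost(kind, participants, minutes) {
  const participantMinutes = participants * minutes;
  // round to whole cents for display
  return +(participantMinutes * RATES[kind]).toFixed(2);
}

// e.g. a 60-minute video call with 5 participants:
// 5 * 60 = 300 participant-minutes * $0.003 = $0.90
```

The same participant-minute arithmetic applies when comparing the other providers below; only the rate constants differ.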
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-twilio-video-a-versatile-sdk-for-live-video-integration">2. Twilio Video: A Versatile SDK for Live Video Integration</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Communication-APIs-for-SMS-Voice-Video-Authentication_twilio-5.jpeg" class="kg-image" alt="Top 10 WebRTC Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Twilio stands as a leading SDK solution, empowering businesses to smoothly integrate live video into their mobile and web applications. Its key strength lies in its versatility, providing the flexibility to build apps from scratch or enhance existing solutions with robust communication features. Whether you're starting anew or expanding your app's capabilities, Twilio offers a dependable and all-encompassing solution for seamlessly integrating live video into your applications.</p><h3 id="key-points-about-twilio">Key points about Twilio</h3>
<ul><li>Twilio offers web, iOS, and Android SDKs, providing developers with versatile tools to seamlessly integrate live video into their applications and enhance user experiences.</li><li>However, using Twilio may require <strong>manual configuration</strong> and <strong>additional code</strong> for utilizing multiple audio and video inputs, leading to increased development <strong>complexity</strong>.</li><li>Twilio's call insights feature allows error tracking and analysis, but integrating it requires additional code implementation.</li><li>As usage grows, pricing may become a concern, as Twilio lacks a built-in tiering system in the dashboard to effectively handle scaling needs.</li><li>Twilio supports <strong>up to 50 hosts and participants</strong>, which should be sufficient for many use cases.</li><li>Notably, Twilio doesn't offer <strong>plugins</strong> for easy product development, demanding additional time and effort from developers.</li><li>Lastly, Twilio provides customization options, but the level of customization offered by Twilio may not meet all developers' specific requirements, potentially necessitating further code development.</li></ul><h3 id="pricing-for-twilio">Pricing for Twilio</h3>
<ul><li><a href="https://www.videosdk.live/blog/twilio-video-alternative"><strong>Twilio</strong></a> provides <a href="https://www.twilio.com/en-us/video/pricing">pricing</a> that begins at <strong>$4</strong> per 1,000 minutes for their <strong>video services</strong>.</li><li>For <strong>recordings</strong>, they charge <strong>$0.004</strong> per participant minute, and <strong>recording compositions</strong> cost <strong>$0.01</strong> per composed minute. <strong>Storage</strong> is priced at <strong>$0.00167</strong> per GB per day after the initial 10 GB.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-webrtc"><strong>WebRTC and Twilio</strong></a><strong>.</strong></blockquote><h2 id="3-mirrorfly-tailored-communication-solutions-for-enterprises">3. MirrorFly: Tailored Communication Solutions for Enterprises</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Live-Video-Call-API-Best-Video-Chat-SDK-for-Android-iOS-mirrorfly-5.jpeg" class="kg-image" alt="Top 10 WebRTC Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>MirrorFly is an exceptional in-app communication suite tailor-made for enterprises. It boasts an extensive array of powerful APIs and SDKs that provide unparalleled chat and calling experiences. With over 150 remarkable features for chat, voice, and video calling, this cloud-based solution seamlessly integrates to form a robust communication platform.</p><h3 id="key-points-about-mirrorfly">Key points about MirrorFly</h3>
<ul><li>MirrorFly may have <strong>limited customization options</strong>, which can restrict the ability to tailor the platform according to specific branding or user experience requirements. This may limit the uniqueness and personalization of the communication features.</li><li>MirrorFly may face <strong>difficulties in scaling</strong> for larger applications or handling a high volume of users. The platform may <strong>struggle to maintain performance</strong> and <strong>stability</strong> when dealing with significant traffic or complex use cases.</li><li>Users have reported <strong>mixed experiences</strong> with MirrorFly's technical support. Some have found it <strong>lacking in responsiveness</strong>, <strong>leading to delays</strong>, and <strong>difficulties in resolving issues</strong> or <strong>addressing concerns</strong>.</li><li>MirrorFly's <strong>pricing</strong> structure may not be suitable for all budgets or use cases. Depending on the desired features and scalability requirements, the <strong>costs</strong> associated with using MirrorFly may be <strong>higher</strong> compared to alternative communication platforms.</li><li><strong>Integrating MirrorFly</strong> into existing applications or workflows may require <strong>significant effort</strong> and <strong>technical expertise</strong>. The platform might <strong>lack comprehensive documentation</strong> or <strong>robust developer resources</strong>, making the integration process <strong>challenging</strong> or <strong>time-consuming</strong>.</li></ul><h3 id="mirrorfly-pricing">MirrorFly pricing</h3>
<ul><li>MirrorFly's <a href="https://www.mirrorfly.com/pricing.php">pricing</a> starts at <strong>$299</strong> per month, positioning it as a <strong>higher-cost option</strong> to take into account.</li></ul><h2 id="4-agora-applications-with-real-time-communication">4. Agora: Applications with Real-Time Communication</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Agora-Real-Time-Voice-and-Video-Engagement-5.jpeg" class="kg-image" alt="Top 10 WebRTC Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Agora's <a href="https://www.videosdk.live/audio-video-conferencing" rel="noreferrer">video calling SDK</a> offers a wide range of features, including embedded voice and video chat, real-time recording, live streaming, and instant messaging. These features provide developers with the tools they need to create captivating and immersive live experiences within their applications.</p><h3 id="key-points-about-agora">Key points about Agora</h3>
<ul><li>Agora's SDK provides embedded voice and video chat, real-time recording, live streaming, and instant messaging. Additionally, users can opt for additional add-ons like <strong>AR facial masks</strong>, <strong>sound effects</strong>, and <strong>whiteboards</strong> at an <strong>extra cost</strong>.</li><li>With Agora's SD-RTN (Software Defined Real-Time Network), users can enjoy extensive global coverage and benefit from ultra-low latency streaming capabilities.</li><li>However, the <strong>pricing structure</strong> may be <strong>complex</strong> and might not be suitable for businesses with limited budgets. Users seeking hands-on support from Agora's team may experience <strong>delays</strong>, as the support team may require <strong>additional time</strong> to provide assistance.</li></ul><h3 id="agora-pricing">Agora pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/agora-alternative"><strong>Agora</strong></a> offers two <a href="https://www.agora.io/en/pricing/">pricing</a> options, <strong>Premium</strong> and <strong>Standard</strong>, for their services. The pricing is based on the duration of audio and video calls, calculated monthly. </li><li>It is categorized into four types, depending on the video resolution, providing users with flexibility and cost-effectiveness.</li><li>The pricing structure includes <strong>Audio calls</strong> at <strong>$0.99</strong> per 1,000 participant minutes, <strong>HD Video calls</strong> at <strong>$3.99</strong> per 1,000 participant minutes, and <strong>Full HD Video calls</strong> at <strong>$8.99</strong> per 1,000 participant minutes. </li><li>Users can choose the option that best suits their needs and budget.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-webrtc"><strong>WebRTC and Agora</strong></a><strong>.</strong></blockquote><h2 id="5-jitsi-open-source-initiative-for-video-conferencing">5. Jitsi: Open-Source Initiative for Video Conferencing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Free-Video-Conferencing-Software-for-Web-Mobile-Jitsi-6.jpeg" class="kg-image" alt="Top 10 WebRTC Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Jitsi is a collection of several open-source projects designed to facilitate video conferencing. Being an open-source platform, it offers the flexibility to customize and utilize it based on specific needs and requirements. The core components of Jitsi consist of Jitsi Meet, Jitsi Videobridge, Jibri, and Jigasi, each fulfilling a distinct role within the Jitsi ecosystem.</p><h3 id="key-points-about-jitsi">Key points about Jitsi</h3>
<ul><li>Jitsi is an open-source and free platform that allows users to utilize it according to their preferences and requirements. </li><li>One of its prominent projects, Jitsi Meet, provides various features such as text sharing via Etherpad, room locking, text chatting (web only), raising hands, YouTube video access during calls, audio-only calls, and integrations with third-party apps.</li><li>However, Jitsi Meet alone <strong>lacks</strong> essential collaborative features like <strong>screen sharing</strong>, <strong>recording</strong>, or <strong>telephone dial-in</strong> to a conference. To access these features, <strong>additional setup</strong> of projects like Jibri and Jigasi is necessary, which can involve <strong>more time</strong>, <strong>resources</strong>, and <strong>coding efforts</strong>. This additional setup may make Jitsi <strong>less suitable</strong> for users seeking a low-code option.</li><li>While Jitsi ensures end-to-end encryption for video calls, it does not cover <strong>chat</strong> or <strong>polls</strong>, so users prioritizing robust security may need to consider additional measures. </li><li>It's worth noting that Jitsi can consume a significant amount of bandwidth due to the functioning of Jitsi Videobridge.</li><li>For large organizations requiring an SDK for frequent long video sessions with a substantial number of participants, Jitsi might not meet their specific needs and could feel <strong>less satisfactory</strong> in comparison.</li></ul><h3 id="jitsi-pricing">Jitsi pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/jitsi-alternative"><strong>Jitsi</strong></a> is available for <strong>free</strong>, which means you can use its components without any payment. </li><li>However, it's important to note that dedicated technical support is not provided by the platform. In case you encounter any issues or require assistance, you can seek help from the active community of contributors who participate in the Jitsi project.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/webrtc-vs-jitsi"><strong>WebRTC and Jitsi</strong></a><strong>.</strong></blockquote><h2 id="6-vonage-customized-communication-on-amazons-platform">6. Vonage (formerly TokBox): Programmable Video Communication</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-API-Fully-Programmable-and-Customizable-Vonage-4.jpeg" class="kg-image" alt="Top 10 WebRTC Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Even after being acquired by Vonage and renamed "Vonage API," TokBox is still widely known by its original name. TokBox's SDKs provide developers with the capability to establish dependable point-to-point communication, making it an excellent choice for creating proof of concepts during hackathons or meeting investor deadlines. With Vonage's SDKs, developers have all the essential tools to build secure and seamless communication experiences within their applications.</p><h3 id="key-points-about-vonage">Key points about Vonage</h3>
<ul><li>Vonage empowers developers to create customized audio/video streams with various effects, filters, and AR/VR capabilities on mobile devices.</li><li>It provides support for diverse use cases, including 1:1 video calls, group video chat, and large-scale broadcast sessions. </li><li>During a call, participants can easily share screens, exchange messages through chat, and send data in real-time.</li><li>However, Vonage does pose some <strong>challenges</strong>. As the user base grows, <strong>scaling costs</strong> can become a concern since the price per stream per minute increases.</li><li>Users should be aware of <strong>additional costs</strong> for features like <strong>recording</strong> and <strong>interactive broadcasts</strong> while considering the platform. </li><li>Moreover, once the number of connections reaches 2,000, the platform switches to CDN delivery, which may lead to <strong>higher latency</strong>. </li><li><strong>Real-time streaming</strong> at scale can be <strong>challenging</strong>, as accommodating over 3,000 viewers requires switching to HLS, resulting in <strong>significant latency</strong>.</li></ul><h3 id="vonage-pricing">Vonage pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/vonage-alternative"><strong>Vonage</strong></a> adopts a usage-based <a href="https://www.vonage.com/communications-apis/video/pricing/">pricing</a> model for video sessions.</li><li>The cost is determined by the number of participants and calculated dynamically every minute.</li><li><strong>Pricing plans</strong> start at <strong>$9.99</strong> per month and include a free allowance of 2,000 minutes per month for all plans.</li><li>Once the free allowance is used up, users are billed at a rate of <strong>$0.00395</strong> per participant per minute for additional usage.</li><li><strong>Recording</strong> services are available at a starting rate of <strong>$0.010</strong> per minute.</li><li><strong>HLS streaming</strong> is priced at <strong>$0.003</strong> per minute.</li><li>These additional services come with their respective costs to enhance the video experience.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/vonage-vs-webrtc"><strong>WebRTC and Vonage</strong></a><strong>.</strong></blockquote><h2 id="7-aws-chime-sdk-customized-communication-on-amazons-platform">7. AWS Chime SDK: Customized Communication on Amazon's Platform</h2>
<p>The Amazon Chime SDK serves as the core technology behind Amazon Chime, functioning independently without its user interface or outer shell. It provides the essential components for integrating real-time audio and video communication into applications, empowering developers to create customized communication experiences tailored to their specific needs.</p><h3 id="key-points-about-aws-chime-sdk">Key points about AWS Chime SDK</h3>
<ul><li>The Amazon Chime SDK allows video meetings with a maximum of <strong>25 participants</strong> (50 for mobile users), facilitating effective collaboration among users.</li><li>It ensures consistent video quality across different devices and networks through the integration of simulcast technology.</li><li>To prioritize security, the Amazon Chime SDK encrypts all calls, videos, and chats, providing a secure communication environment for users.</li><li>However, some features like <strong>polling</strong>, <strong>auto-sync with Google Calendar</strong>, and <strong>background blur effects</strong> are <strong>not available</strong> in the Amazon Chime SDK, which might be a <strong>limitation</strong> for users seeking these specific functionalities.</li><li>Additionally, there are compatibility <strong>issues</strong> reported in <strong>Linux</strong> environments, and participants using the <strong>Safari browser</strong> may encounter <strong>challenges</strong> while using the SDK, which can affect the <strong>overall user experience</strong>.</li><li>Moreover, <strong>customer support</strong> experiences with the Amazon Chime SDK may vary, as the query resolution times can be <strong>inconsistent</strong> and depend on the specific support agent assigned to the case.</li></ul><h3 id="aws-chime-pricing">AWS Chime pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative"><strong>Amazon Chime</strong></a> <a href="https://aws.amazon.com/chime/pricing/">offers</a> a <strong>free basic plan</strong> that allows users to engage in <strong>one-on-one audio/video calls</strong> and <strong>group chats</strong> without any cost.</li><li>For users who require additional features and functionalities, there is a <strong>Plus plan</strong> available for <strong>$2.50</strong> per month per user. This plan includes valuable additions such as <strong>screen sharing</strong>, <strong>remote desktop control</strong>, <strong>1 GB of message history</strong> per user, and <strong>Active Directory integration</strong>.</li><li>For more extensive collaboration needs, there is the <strong>Pro plan</strong> available for <strong>$15</strong> per user per month. This comprehensive plan encompasses all the features of the Plus plan and allows for <strong>meetings</strong> with <strong>three</strong> or more participants, making it suitable for larger group discussions and presentations.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/amazon-chime-sdk-vs-webrtc"><strong>WebRTC and AWS Chime</strong></a><strong>.</strong></blockquote><h2 id="8-enablex-interactions-with-rich-communication-features">8. EnableX: Interactions with Rich Communication Features</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Call-API-Video-Chat-API-Voice-API-Video-Conferencing_enebleX-6.jpeg" class="kg-image" alt="Top 10 WebRTC Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>The EnableX SDK provides video and audio calling, along with collaborative features like whiteboard, screen sharing, recording, and more. It offers a video builder tool for custom solutions, allowing personalized user interfaces and hosting options.</p><h3 id="key-points-about-enablex">Key points about EnableX</h3>
<ul><li>EnableX provides a self-service portal with reporting and live analytics, allowing users to track communication quality and handle online payments.</li><li>The SDK supports JavaScript, PHP, and Python, making integration seamless for developers. Users can stream live content from their app or website and reach a larger audience through platforms like YouTube and Facebook.</li><li>However, the support team's <strong>response time</strong> may take up to <strong>72 hours</strong>, which could be a <strong>concern</strong> for users needing timely assistance.</li></ul><h3 id="enablex-pricing">EnableX pricing</h3>
<ul><li>EnableX offers <a href="https://www.enablex.io/cpaas/pricing/our-pricing"><strong>pricing</strong></a><strong> plans</strong> starting at <strong>$0.004</strong> per minute per participant for rooms with <strong>up to 50 people</strong>. For larger meetings or events, custom pricing options are available through their sales team.</li><li><strong>Recording</strong> services are provided at a rate of <strong>$0.010</strong> per minute per participant, allowing you to capture and preserve video sessions for future reference.</li><li><strong>Transcoding</strong> video into different formats is available at a rate of <strong>$0.010</strong> per minute, enabling you to convert content as needed.</li><li><strong>Additional storage</strong> can be obtained at <strong>$0.05</strong> per GB per month, accommodating your growing video content.</li><li><strong>RTMP (Real-Time Messaging Protocol) streaming</strong> is offered at a rate of <strong>$0.10</strong> per minute, facilitating real-time delivery to various platforms and devices.</li><li>Please be aware that pricing may vary based on specific needs.</li></ul><h2 id="9-whereby-simplifying-video-conferencing">9. Whereby: Simplifying Video Conferencing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Calling-API-for-Web-and-App-Developers-Whereby-6.jpeg" class="kg-image" alt="Top 10 WebRTC Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Whereby is a user-friendly video conferencing platform tailored for small to medium-sized meetings. It provides a straightforward and intuitive experience for users. However, it might not be the most suitable option for larger businesses or those seeking advanced features that cater to their specific needs.</p><h3 id="key-points-about-whereby">Key points about Whereby</h3>
<ul><li><strong>Basic customization</strong> options for the video interface are available, but they are <strong>limited</strong> and do not offer a fully customized experience.</li><li>Video calls can be embedded directly into websites, mobile apps, and web products, eliminating the need for external links or additional apps.</li><li>While Whereby offers a seamless video conferencing experience, it may <strong>lack</strong> some <strong>advanced features</strong> found in other tools.</li><li>The <strong>maximum capacity</strong> for meetings on Whereby is <strong>50 participants</strong>, which may be sufficient for many use cases.</li><li><strong>Screen sharing</strong> for mobile users and <strong>customization</strong> options for the host interface may have some <strong>limitations</strong>.</li><li>Whereby does not support <strong>virtual backgrounds</strong>, and some users have reported <strong>issues</strong> with the <strong>mobile app</strong>, which could affect the overall <strong>user experience</strong>.</li></ul><h3 id="whereby-pricing">Whereby pricing</h3>
<ul><li>Whereby's <a href="https://whereby.com/information/pricing">pricing</a> starts at <strong>$6.99</strong> per month.</li><li><strong>The basic plan</strong> includes an allocation of up to 2,000 user minutes per month, which is renewed monthly.</li><li>If users exceed the allocated minutes, additional usage is charged at a rate of <strong>$0.004</strong> per minute.</li><li><strong>Cloud recording</strong> and <strong>live streaming</strong> options are available at a rate of <strong>$0.01</strong> per minute.</li><li><strong>Email</strong> and <strong>chat support</strong> are provided <strong>free</strong> of charge to all users, ensuring accessible assistance.</li><li><strong>Paid </strong>support plans offer additional features such as <strong>technical onboarding</strong> and <strong>customer success management</strong>.</li><li><strong>HIPAA</strong> compliance is also available as part of the <strong>paid</strong> support plans, catering to specific security and privacy requirements.</li></ul><h2 id="10-signalwire-seamless-integration-of-live-video-experiences">10. SignalWire: Seamless Integration of Live Video Experiences</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Building-The-Software-Defined-Telecom-Network-SignalWire-5.jpeg" class="kg-image" alt="Top 10 WebRTC Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>SignalWire is a platform that utilizes APIs to enable developers to effortlessly integrate live and on-demand video experiences into their applications. The main objective of SignalWire is to simplify the processes of video encoding, delivery, and renditions, ensuring a seamless and uninterrupted video streaming experience for users.</p><h3 id="overview-of-signalwire">Overview of SignalWire</h3>
<ul><li>SignalWire offers an SDK that allows developers to integrate real-time video and live streams into web, iOS, and Android applications. The SDK enables video calls with <strong>up to 100 participants</strong> in a real-time WebRTC environment.</li><li>However, it's worth noting that the SDK does not provide built-in <strong>support for managing disruptions</strong> or <strong>user publish-subscribe logic</strong>, which developers will need to implement <strong>separately</strong>.</li></ul><h3 id="signalwire-pricing">SignalWire pricing</h3>
<ul><li>SignalWire utilizes a <a href="https://signalwire.com/pricing/video"><strong>pricing</strong></a><strong> model</strong> based on per-minute usage. For <strong>HD video calls</strong>, the pricing is <strong>$0.0060</strong> per minute, while for <strong>Full HD</strong> <strong>video calls</strong>, it is <strong>$0.012</strong> per minute. The actual cost may vary depending on the desired video quality for your application.</li><li>SignalWire also offers <strong>recording</strong> at a rate of <strong>$0.0045</strong> per minute, allowing you to capture and store video content for future use. </li><li>The platform also provides <strong>streaming</strong> capabilities priced at <strong>$0.10</strong> per minute, enabling you to broadcast your video content in real-time.</li></ul><h2 id="certainly">Why VideoSDK Stands Out</h2>
<p><a href="https://www.videosdk.live">VideoSDK</a> stands out as an SDK that prioritizes fast and seamless integration. With a low-code solution, developers can quickly build live video experiences in their applications, deploying custom video conferencing solutions in under 10 minutes. Unlike other SDKs, VideoSDK offers a streamlined process with easy creation and embedding of live video experiences, enabling real-time connections, communication, and collaboration.</p><h2 id="still-skeptical">Still Skeptical?</h2>
<p>Dive into the possibilities of VideoSDK by exploring its comprehensive <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart</a> guide. Discover its potential with the powerful <a href="https://docs.videosdk.live/code-sample">sample app</a> exclusively designed for VideoSDK. <a href="https://app.videosdk.live/">Sign up</a> now and embark on your integration journey, and don't miss the chance to claim your <a href="https://www.videosdk.live/pricing">complimentary $20 free credit</a> to unlock the full potential of VideoSDK. Our dedicated team is always ready to assist you whenever you need support. Get ready to showcase your creativity and build remarkable experiences with VideoSDK. Let the world see what you can create!</p>]]></content:encoded></item><item><title><![CDATA[Top 10 Twilio Video Alternatives for 2026]]></title><description><![CDATA[Discover 10 powerful Twilio Video alternatives for 2026, compared on features and pricing, to help you find the right video API replacement for your app.]]></description><link>https://www.videosdk.live/blog/twilio-video-alternative</link><guid isPermaLink="false">64a58fde8ecddeab7f17f5a8</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 07 Oct 2024 05:04:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/Twilio-alternative-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Twilio-alternative-1.jpg" alt="Top 10 Twilio Video Alternatives for 2026"/><p>Looking for top Twilio Video alternatives in 2026? This article presents a carefully curated list of 10 video API platforms that can replace Twilio's Programmable Video features in your apps. These Twilio competitors have been meticulously chosen for their compatibility and functionality.</p><p>Now, let's focus on the key factors of pricing and features to enhance your in-app conversations. 
Without any delay, let's jump straight into the specifics.</p><h2 id="exploring-twilio-programmable-video-alternativesreplacement">Exploring Twilio Programmable Video Alternatives/Replacement</h2>
<p>Twilio Programmable Video is a software development kit (SDK) designed to facilitate live video communication within applications. It allows developers to integrate real-time communication features into their apps through signaling, user access control, media processing, and delivery functionalities. The tool enables direct media exchange between users or utilizes Twilio's servers, depending on the type of Video Room being used.</p><p>While Twilio offers a range of services such as video communication, voice calling, SMS and MMS messaging, and email, there are certain areas where it may fall short. These include limited coverage in certain countries, limited analytics, and the need for more comprehensive customer service and support. Moreover, <a href="https://www.videosdk.live/blog/twilio-video-competitors">Twilio</a>'s pricing may not be suitable for smaller startups, making it worthwhile to consider alternative vendors.</p><p>The <strong>top 10 Twilio Video Alternatives</strong> are VideoSDK, Agora, Jitsi, EnableX, Zoom Video SDK, TokBox OpenTok [Vonage], Whereby, AWS Chime, Daily, and SignalWire. These alternatives to Twilio offer a range of features, pricing options, and more, allowing you to make an informed decision based on your specific needs.</p><p>By exploring these Twilio alternatives, you can find the right Twilio video API replacement that suits your application's requirements and budget.</p><blockquote>
<h2 id="top-10-twilio-video-alternatives-for-2026">Top 10 Twilio Video Alternatives for 2026</h2>
<ul>
<li>VideoSDK</li>
<li>Zoom Video SDK</li>
<li>Agora</li>
<li>Jitsi</li>
<li>AWS Chime</li>
<li>Enablex</li>
<li>TokBox Opentok [Vonage]</li>
<li>Whereby</li>
<li>Daily</li>
<li>SignalWire</li>
</ul>
</blockquote>
<h2 id="1-videosdk">1. VideoSDK</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-for-Real-time-Communication-Live-Streaming-Video-API-1.jpeg" class="kg-image" alt="Top 10 Twilio Video Alternatives for 2026" loading="lazy" width="1920" height="967"/></figure><p><a href="https://www.videosdk.live"><strong>VideoSDK</strong></a> provides an API that allows developers to easily add powerful, extensible, scalable, and resilient audio-video features to their apps with just a few lines of code. Add live audio and video experiences to any platform in minutes.</p><p>The key advantage of using VideoSDK is it’s quite easy and quick to integrate, allowing you to focus more on building innovative features to enhance user retention.</p><h3 id="with-videosdk-you-can-expect-the-following">With VideoSDK, you can expect the following</h3>
<h4 id="high-scalability">High scalability</h4>
<p>VideoSDK scales to unlimited rooms with zero maintenance overhead, ensuring uninterrupted availability at &lt;99ms latency. It supports up to 300 attendees, including 50 presenters, empowering large-scale collaboration. This infrastructure enables global reach and success in the digital landscape.</p><h4 id="high-adaptive-bitrate">High adaptive bitrate</h4>
<p>VideoSDK offers high adaptive bitrate technology for an immersive audio-video experience. It auto-adjusts stream quality under bandwidth constraints and adapts to varying network conditions. With a global infrastructure and secure usage in restricted network environments, VideoSDK delivers optimal performance and seamless streaming.</p><h4 id="end-to-end-customized-sdk">End-to-end customized SDK</h4>
<p>With their end-to-end customized SDK, you have the power to fully customize the UI to meet your unique needs. Their code samples help accelerate your time-to-market, while template layouts can be easily customized in any orientation. Leveraging their PubSub feature, you can build engaging and interactive features, enhancing the overall user experience.</p><h4 id="quality-recordings">Quality Recordings</h4>
<p>Experience high-quality recordings on any connection with VideoSDK. Their solution supports 1080p video recording capability, ensuring crystal-clear and detailed footage. With programmable layouts and custom templates, you can tailor the recording experience to your specific requirements. Easily store your recordings in VideoSDK cloud or popular cloud storage providers such as AWS, GCP, or Azure. Access your recordings conveniently from the dashboard itself, providing seamless management and retrieval of your valuable content.</p><h4 id="detailed-analytics">Detailed analytics</h4>
<p>Gain access to in-depth analytics on video call metrics, including participant interactions and duration, allowing you to analyze participant interest throughout the session.</p><h4 id="cross-platform-streaming">Cross-platform streaming</h4>
<p>Stream live events to millions of viewers across platforms such as YouTube, LinkedIn, Facebook, and more with built-in RTMP support.</p><h4 id="seamless-scaling">Seamless scaling</h4>
<p>Effortlessly scale live audio/video within your web app, accommodating from just a few users to over 10,000, and reach millions of viewers through RTMP output.</p><h4 id="platform-support">Platform support</h4>
<p>Build your live video app for a specific platform and seamlessly run it across browsers, devices, and operating systems with minimal development efforts.</p><ul><li><strong>Mobile</strong></li></ul><p>Flutter, Android (Java/Kotlin), iOS (Objective-C/Swift), React Native </p><ul><li><strong>Web</strong></li></ul><p>JavaScript Core SDK + UI Kit for React JS, Angular, Web Components for other frameworks </p><ul><li><strong>Desktop</strong></li></ul><p>Flutter Desktop</p><h3 id="videosdk-pricing">VideoSDK Pricing</h3>
<ul><li>VideoSDK offers <a href="https://www.videosdk.live/pricing"><strong>10,000 free minutes</strong></a> that renew monthly. </li><li>You only start paying once you exhaust the free minutes. </li><li>Notably, pricing for video and audio calls is billed separately.</li><li>Pricing for <strong>video calls</strong> begins at <strong>$0.003</strong> per participant minute and for <strong>audio calls</strong>, it begins at <strong>$0.0006</strong> per participant minute. </li><li>The additional cost for <strong>cloud recordings</strong> is <strong>$0.015</strong> per minute and <strong>RTMP output</strong> is <strong>$0.030</strong> per minute. </li><li>You can estimate your costs using their <a href="https://www.videosdk.live/pricing#pricingCalc">pricing calculator</a>.</li><li>VideoSDK provides free <strong>24/7</strong> <strong>support </strong>to all customers. Their dedicated team is available to assist you through your preferred communication channel whenever you need help with basic queries, upcoming events, or technical requirements.</li></ul><blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/alternative/twilio-vs-videosdk">Twilio vs VideoSDK</a>.</blockquote>
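<p>To make the per-participant-minute rates above concrete, here is a back-of-the-envelope cost estimator. This is an illustrative sketch using the listed rates only; how the 10,000 free minutes are applied is an assumption on our part, so treat the official pricing calculator as authoritative.</p>

```python
# Illustrative monthly cost sketch from the rates listed above (USD).
# Assumption: the 10,000 free monthly minutes offset video usage first;
# consult VideoSDK's official pricing calculator for exact billing.
def estimate_monthly_cost(video_participant_minutes,
                          audio_participant_minutes=0,
                          recording_minutes=0,
                          rtmp_minutes=0,
                          free_minutes=10_000):
    billable_video = max(0, video_participant_minutes - free_minutes)
    return round(billable_video * 0.003                 # video: $0.003 / participant-minute
                 + audio_participant_minutes * 0.0006   # audio: $0.0006 / participant-minute
                 + recording_minutes * 0.015            # cloud recording: $0.015 / minute
                 + rtmp_minutes * 0.030, 2)             # RTMP output: $0.030 / minute

# A daily 10-person, 60-minute call over 22 workdays accrues
# 10 * 60 * 22 = 13,200 video participant-minutes:
print(estimate_monthly_cost(13_200, recording_minutes=500))  # → 17.1
```

<p>Note that usage is counted in participant-minutes: a 60-minute call with 10 participants consumes 600 minutes of the allowance, not 60.</p>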
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8"/>
	<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
	<title>Schedule a Demo with Our Live Video Expert</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet"/>
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>
		</div>

	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-zoom-video-sdk">2. Zoom Video SDK</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-from-Zoom-Zoom-1.jpeg" class="kg-image" alt="Top 10 Twilio Video Alternatives for 2026" loading="lazy" width="1920" height="967"/></figure><p>The Zoom Video SDK enables developers to create customized live video applications with access to Zoom's underlying technology. It offers video, audio, screen sharing, chat, and data streams, with the flexibility to choose specific features. The SDK also provides server-side APIs and webhooks.</p><h3 id="heres-an-overview-of-the-zoom-video-sdk">Here's an overview of the Zoom Video SDK</h3>
<ul><li>The Zoom Video SDK allows for customizable video compositions with support for up to 1,000 participants. </li><li>It enhances collaboration with features like screen sharing, live streaming, and in-session chat while providing layout control. </li><li>Zoom supports multiple languages and offers support plans for faster assistance. </li><li>However, role management and bandwidth handling have some limitations.</li></ul><h3 id="pricing-for-zoom-video-sdk">Pricing for Zoom Video SDK</h3>
<ul><li>Regarding <a href="https://zoom.us/buy/videosdk">pricing</a>, <a href="https://www.videosdk.live/blog/zoom-video-sdk-alternative"><strong>Zoom</strong></a> offers 10,000 free minutes per month, with charges applying only once you exceed this limit. </li><li>Pricing starts at <strong>$0.31</strong> per user minute, and <strong>recordings</strong> are available for <strong>$100</strong> per month for <strong>1 TB</strong> of storage. </li><li><strong>Telephony services</strong> are priced at <strong>$100</strong> per month.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-zoom"><strong>Zoom and Twilio</strong></a><strong>.</strong></blockquote><h2 id="3-agora">3. Agora</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Agora-Real-Time-Voice-and-Video-Engagement.jpeg" class="kg-image" alt="Top 10 Twilio Video Alternatives for 2026" loading="lazy" width="1920" height="967"/></figure><p>Agora is a platform renowned for its powerful video conferencing capabilities, allowing users to seamlessly connect and communicate in real-time. However, it is crucial to consider a few drawbacks that may impact the overall experience.</p><h3 id="key-points-about-agora">Key points about Agora</h3>
<ul><li>Agora's customization options may be limited, and occasional performance issues can impact the quality of communication. </li><li>Users are dependent on Agora's servers, which may affect accessibility. Advanced features may come with additional costs. </li><li>Consider these factors when evaluating Agora's suitability for your needs.</li></ul><h3 id="pricing-for-agora">Pricing for Agora</h3>
<ul><li>Some <strong>advanced features</strong> in <a href="https://www.videosdk.live/blog/agora-alternative"><strong>Agora</strong></a> may require <strong>additional </strong><a href="https://www.agora.io/en/pricing/"><strong>costs</strong></a>, such as <strong>recording</strong>, <strong>transcription</strong>, or <strong>specialized capabilities</strong>. </li><li>Users should consider these potential expenses while assessing the affordability and suitability of Agora for their specific needs.</li><li>Pricing for <strong>video calling</strong> starts at <strong>$3.99 </strong>per<strong> </strong>1,000 minutes.</li><li>Pricing for <strong>voice calling</strong> starts at <strong>$0.99</strong> per 1,000 minutes.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-twilio"><strong>Agora and Twilio</strong></a><strong>.</strong></blockquote><h2 id="4-jitsi">4. Jitsi</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Free-Video-Conferencing-Software-for-Web-Mobile-Jitsi-1.jpeg" class="kg-image" alt="Top 10 Twilio Video Alternatives for 2026" loading="lazy" width="1920" height="967"/></figure><p>Jitsi is an open-source collection for building video conferencing solutions. Jitsi Meet is a JavaScript client for video chatting with screen sharing and collaboration features. It's accessible through web browsers and mobile apps. Jitsi Videobridge is an XMPP server (Prosody) for large-scale video chats with WebRTC compatibility and default encryption.</p><h3 id="key-points-about-jitsi">Key Points about Jitsi</h3>
<ul><li>Jitsi is free, open-source, and offers end-to-end encryption. </li><li>It provides a live experience with features like active speakers, text chatting (web only), screen sharing, and more. </li><li>Additional features require Jibri configuration.</li><li>Jitsi requires extra steps for call recording, either by live streaming to YouTube or setting up Jibri. </li><li>Support response time may exceed 48 hours. </li><li>The tool doesn't adapt to users' available bandwidth, so network issues can result in blank video screens.</li></ul><h3 id="pricing-for-jitsi">Pricing for Jitsi</h3>
<ul><li><a href="https://www.videosdk.live/blog/jitsi-alternative"><strong>Jitsi</strong></a> is 100% open source and available for <strong>free</strong> usage and development.</li><li>However, you are responsible for setting up your own servers and creating the UI from scratch. </li><li>Product support comes at an additional cost.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-jitsi"><strong>Twilio and Jitsi</strong></a><strong>.</strong></blockquote><h2 id="5-aws-chime">5. AWS Chime</h2>
<p>AWS Chime is a video conferencing tool for business users, offering VoIP calling, video messaging, and virtual meetings for remote collaboration.</p><h3 id="heres-a-concise-overview-of-aws-chime">Here's a concise overview of AWS Chime</h3>
<ul><li><a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative"><strong>AWS Chime</strong></a> offers high-quality online meetings with crisp video and audio, collaborative capabilities including screen sharing and text chats, and meeting management for up to 250 participants. </li><li>Enhanced security is provided through AWS Identity and Access Management, and recording and analytics features are available. </li><li>Basic bandwidth management is included, but edge case management is the user's responsibility.</li></ul><h3 id="aws-chime-pricing">AWS Chime Pricing</h3>
<ul><li><strong>Basic Tier</strong>: <strong>Free of </strong><a href="https://aws.amazon.com/chime/pricing/"><strong>charge</strong></a>, offering <strong>one-on-one audio</strong> and <strong>video calls</strong> along with <strong>group chat</strong> functionality.</li><li><strong>Plus Tier</strong>: Available at <strong>$2.50</strong> per user per month, this tier includes all <strong>basic features</strong>, <strong>screen sharing</strong>, <strong>remote desktop control</strong>, <strong>1 GB message history per user</strong>, and <strong>Active Directory integration</strong>.</li><li><strong>Pro Tier</strong>: Priced at <strong>$15</strong> per user per month, the Pro Tier encompasses all <strong>Plus features</strong>. It allows <strong>scheduling and hosting meetings</strong> for three or more participants (<strong>up to 100 attendees</strong>), <strong>meeting recording</strong>, <strong>Outlook integration</strong>, and more.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-amazon-chime-sdk"><strong>Twilio and AWS Chime</strong></a><strong>.</strong></blockquote><h2 id="6-enablex">6. EnableX</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Call-API-Video-Chat-API-Voice-API-Video-Conferencing_enebleX-1.jpeg" class="kg-image" alt="Top 10 Twilio Video Alternatives for 2026" loading="lazy" width="1920" height="967"/></figure><p>EnableX offers a complete suite of SDKs for live video, voice, and messaging. Designed for service providers, ISVs, SIs, and developers, EnableX provides essential building blocks for creating engaging live video solutions.</p><h3 id="key-points-about-enablex">Key points about EnableX</h3>
<ul><li>The SDK enables customized video solutions with UI customization, hosting, and billing integration. </li><li>It provides live analytics and reporting, and supports various programming languages. </li><li>Users can stream live content from their app, website, or popular platforms like YouTube and Facebook. </li><li>Support responses may take up to 72 hours, and integration may take weeks. Video optimization for device and network issues requires separate handling.</li></ul><h3 id="enablex-pricing">EnableX Pricing</h3>
<ul><li><strong>Participant-Based </strong><a href="https://www.enablex.io/cpaas/pricing/our-pricing"><strong>Pricing</strong></a>: The SDK is priced at <strong>$0.004</strong> per participant minute for <strong>up to 50 participants</strong> per room. For pricing involving over 50 participants, it is necessary to contact their sales team.</li><li><strong>Additional Costs</strong>: <strong>Recording</strong> is charged at <strong>$0.10</strong> per participant per minute, <strong>transcoding</strong> at <strong>$0.10</strong> per minute, and <strong>storage</strong> at <strong>$0.05</strong> per GB per month.</li><li><strong>RTMP streaming</strong> incurs a cost of <strong>$0.10</strong> per minute.</li></ul><h2 id="7-tokbox-opentok-vonage-video-api">7. TokBox OpenTok (Vonage Video API)</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-API-Fully-Programmable-and-Customizable-Vonage-1.jpeg" class="kg-image" alt="Top 10 Twilio Video Alternatives for 2026" loading="lazy" width="1920" height="967"/></figure><p>TokBox OpenTok (Vonage Video API) enables customized video experiences in mobile, web, and desktop apps. Originally founded in 2008, it transitioned from a consumer product to a technology provider for embedding video conference components on websites.</p><h3 id="key-points-about-opentok-vonage-video-api">Key points about OpenTok (Vonage Video API)</h3>
<ul><li>The API offers live video, voice, messaging, and screen-sharing capabilities with client libraries for multiple platforms. </li><li>It enables customized audio/video streams with effects and supports collaboration features. </li><li>Performance analysis is available, and the SDK ensures security and compliance. </li><li>It scales to accommodate participants and provides chat-based support. </li><li>Edge case management is the user's responsibility.</li></ul><h3 id="opentok-pricing">OpenTok Pricing</h3>
<ul><li><strong>Usage-Based Model</strong>: <a href="https://www.videosdk.live/blog/vonage-alternative"><strong>Vonage</strong></a> follows a usage-based <a href="https://www.vonage.com/communications-apis/video/pricing/">pricing</a> model, dynamically calculated per minute based on the number of participants in a video session. </li><li>Plans start at <strong>$9.99</strong> per month, with 2,000 free minutes per month included across all plans. </li><li>Once the free minutes are exhausted, pricing is set at <strong>$0.00395</strong> per participant minute. </li><li><strong>Recording</strong> starts at <strong>$0.10</strong> per minute, and <strong>HLS streaming</strong> is priced at <strong>$0.15</strong> per minute.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-vonage"><strong>Twilio and Vonage</strong></a><strong>.</strong></blockquote><h2 id="8-whereby">8. Whereby</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Calling-API-for-Web-and-App-Developers-Whereby-1.jpeg" class="kg-image" alt="Top 10 Twilio Video Alternatives for 2026" loading="lazy" width="1920" height="967"/></figure><p>Whereby is a browser-based meeting solution with permanent rooms for easy access. Guests can join meetings effortlessly, without downloads or registrations. It also offers a hybrid meeting solution that eliminates echo and the need for expensive hardware, ideal for distributed teams.</p><h3 id="key-points-about-whereby">Key points about Whereby</h3>
<ul><li>Whereby offers limited customization of the video interface, with seamless in-app video calls.</li><li>Data privacy is GDPR-compliant.</li><li>Collaborative features are basic: screen sharing, recording, picture-in-picture, and text chat.</li><li>User-host publish-subscribe logic requires manual implementation.</li></ul><h3 id="whereby-pricing">Whereby Pricing</h3>
<ul><li><a href="https://whereby.com/information/pricing"><strong>Pricing</strong></a><strong> Plans</strong>: Whereby offers various pricing plans starting at <strong>$6.99</strong> per month. </li><li>The <strong>base plan</strong> includes up to 2,000 user minutes that renew monthly. <strong>Additional minutes</strong> are charged at a rate of <strong>$0.004</strong> per minute. <strong>Cloud recording</strong> and <strong>live streaming</strong> features are available at a cost of <strong>$0.01</strong> per minute.</li><li><strong>Support Options</strong>: <strong>Email</strong> and <strong>chat support</strong> are provided for <strong>free</strong> to all accounts. Additionally, <strong>technical onboarding</strong>, <strong>customer success management</strong>, and <strong>HIPAA</strong> compliance options are available for <strong>enterprise plans</strong>, offering additional support and specialized features.</li></ul><h2 id="9-daily">9. Daily</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/WebRTC-Video-Audio-APIs-for-Every-Developer-Daily-1.jpeg" class="kg-image" alt="Top 10 Twilio Video Alternatives for 2026" loading="lazy" width="1920" height="967"/></figure><p>Daily is a powerful platform designed to enable developers to create real-time video and audio calls directly within the browser. It provides a range of SDKs and APIs to handle various backend video call use cases across different platforms.</p><h3 id="heres-an-overview-of-daily">Here's an overview of Daily</h3>
<ul><li>Daily provides two approaches: the Daily Client SDKs for custom UI development and the Daily Prebuilt widget for easy integration. </li><li>It offers collaborative features like HD screen sharing, breakout rooms, hand raising, live transcription, whiteboard, and text chat. </li><li>Daily supports scalability and provides real-time call data for debugging and optimization. Note that the mobile SDKs are in beta. </li><li>Support is available through email and chat, with a response time of up to 72 hours. </li><li>Users are responsible for their own publish-subscribe logic, and edge case management is not built-in.</li></ul><h3 id="daily-pricing">Daily Pricing</h3>
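Per-participant-minute billing with a monthly free allowance is easy to sanity-check in a few lines of code. The sketch below is purely illustrative: the function name is ours, not part of any SDK, and the $0.004 rate and 10,000 free minutes are the figures quoted in this section.

```javascript
// Illustrative sketch of per-participant-minute billing (not part of any SDK).
// Defaults use the rates this article quotes for Daily: $0.004 per
// participant minute, with 10,000 free participant minutes per month.
function monthlyCost({ minutes, participants, ratePerMin = 0.004, freeMinutes = 10000 }) {
  const participantMinutes = minutes * participants;              // total usage
  const billable = Math.max(0, participantMinutes - freeMinutes); // after the free tier
  return billable * ratePerMin;                                   // USD
}

// 100 one-hour calls with 4 participants each:
// 100 * 60 * 4 = 24,000 participant minutes, of which 14,000 are billable (≈ $56).
console.log(monthlyCost({ minutes: 100 * 60, participants: 4 }));
```

The same function works for the other per-minute providers in this list by swapping in their rate and free-tier values.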
<ul><li><a href="https://www.videosdk.live/blog/daily-co-alternative"><strong>Daily</strong></a>'s pricing is based on a per-participant minute model, with a rate of <strong>$0.004</strong> per minute. </li><li>You receive 10,000 free participant minutes, refreshed each month. </li><li>Additional charges apply for <strong>audio</strong> at <strong>$0.00099</strong> per user minute, <strong>streaming</strong> at <strong>$0.0012</strong> per minute, <strong>RTMP output</strong> at <strong>$0.015</strong> per minute, and <strong>recording</strong> at <strong>$0.01349</strong> per GB.</li><li><strong>Email</strong> and <strong>chat support</strong> are available for <strong>free</strong> for all accounts. </li><li><strong>Advanced support features</strong> can be accessed through add-on packages, starting from <strong>$250</strong> per month.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-daily"><strong>Twilio and Daily</strong></a><strong>.</strong></blockquote><h2 id="10-signalwire">10. SignalWire</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Conferencing-API-SignalWire.jpeg" class="kg-image" alt="Top 10 Twilio Video Alternatives for 2026" loading="lazy" width="1920" height="967"/></figure><p>SignalWire is an API-driven platform designed to enable developers to integrate live and on-demand video experiences into their applications. It aims to simplify video encoding, delivery, and renditions, providing a seamless video streaming experience for users.</p><h3 id="heres-an-overview-of-signalwire">Here's an overview of SignalWire</h3>
<ul><li>SignalWire offers an SDK for integrating real-time video and live streams into web, iOS, and Android applications. Each video call can support up to 100 participants in a real-time WebRTC environment. </li><li>However, the SDK doesn't provide built-in support for managing disruptions or user publish-subscribe logic, requiring separate implementation by developers.</li></ul><h3 id="signalwire-pricing">SignalWire Pricing</h3>
<ul><li>SignalWire follows a <a href="https://signalwire.com/pricing/video">pricing</a> model based on per-minute usage. </li><li>The pricing includes <strong>$0.0060</strong> per minute for <strong>HD</strong> <strong>video calls</strong> and <strong>$0.012</strong> for <strong>Full HD</strong> <strong>video calls</strong>. </li><li>The cost may vary depending on the video quality required for your application.</li><li>Additional features such as <strong>recording</strong> are available at a rate of <strong>$0.0045</strong> per minute, allowing you to capture and store video content. </li><li><strong>Streaming</strong> is another feature provided by SignalWire, priced at <strong>$0.10</strong> per minute.</li></ul><h2 id="final-thoughts">Final Thoughts!</h2>
<p>While all the video conferencing SDKs mentioned offer various features and capabilities, <a href="https://www.videosdk.live/">VideoSDK </a>stands out as an SDK that prioritizes a fast and seamless integration experience.</p><p>VideoSDK offers a low-code solution that allows developers to quickly build live video experiences in their applications. With the help of VideoSDK, it is possible to create and deploy custom video conferencing solutions in under 10 minutes, significantly reducing the time and effort required for integration.</p><p>Unlike other SDKs that may have longer integration times or limited customization options, VideoSDK aims to provide a streamlined process. By leveraging VideoSDK, developers can create and embed live video experiences with ease, allowing users to connect, communicate, and collaborate in real-time.</p><h2 id="still-skeptical">Still skeptical?</h2>
<p>Take a deep dive into VideoSDK's comprehensive <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart guide</a> and immerse yourself in the possibilities with our <a href="https://docs.videosdk.live/code-sample">powerful sample app</a>, built exclusively for VideoSDK.</p><p><a href="https://app.videosdk.live/signup">Sign up</a> today to start your integration journey and claim your <a href="https://www.videosdk.live/pricing">complimentary $20 credit</a>, letting you unleash the full potential of VideoSDK. And if you ever need assistance along the way, our dedicated team is just a click away, ready to support you.</p><p>Get ready to witness the remarkable experiences you can create using the extraordinary capabilities of VideoSDK. Unleash your creativity and let the world see what you can build!</p>]]></content:encoded></item><item><title><![CDATA[Standard Live Streaming API Pricing]]></title><description><![CDATA[Stream with ease. Grasp the crystal clear pricing with cost-effective plans and supreme quality.]]></description><link>https://www.videosdk.live/standard-live-streaming-pricing/</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb6d</guid><category><![CDATA[Pricing]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 04 Oct 2024 07:38:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/09/Pricing-thumbnail--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Pricing-thumbnail--1-.jpg" alt="Standard Live Streaming API Pricing"/><p>Live streaming is fast becoming a major source of engagement and entertainment for streamers. Beyond social media streamers, major brands and corporations also use live streams to showcase their work and events to a mass audience. 
Live streaming has become a popular and lucrative medium for communication over the web. </p><p>We make sure that each of our clients gets the best possible experience. You can always <a href="https://videosdk.live/contact">get in touch</a> with us for a detailed conversation.<br/></p><blockquote>VideoSDK also provides live streaming APIs. This blog is dedicated to <strong>Standard Live Streaming</strong>: it explains how its price is computed and contrasts it with other providers' prices. </blockquote><h3 id="how-to-calculate-the-cost-of-standard-live-streaming">How to Calculate the Cost of Standard Live Streaming?</h3><p>A video for live streaming passes through three major stages from production to final output. These are <strong>1. Encoding  2. Storage  3. Delivery</strong></p><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Standarad-LS-components.jpg" class="kg-image" alt="Standard Live Streaming API Pricing" loading="lazy" width="722" height="473"/></figure><ul><li>The total price of live-streaming a video is the sum of the costs of all three components.</li><li><strong>Total cost of live streaming = Encoding + Storage + Delivery</strong><br/></li></ul><h2 id="pro-plan">Pro Plan</h2><p>The Pro Plan is devised by VideoSDK to keep costs simple and trouble-free for streamers.</p><p>Let's understand the computation.</p><p><strong>Example 1:</strong></p><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Pro-plan-pricing--2-.jpg" class="kg-image" alt="Standard Live Streaming API Pricing" loading="lazy" width="1457" height="717"/></figure><p>We can observe how the price is computed for a month.</p><p><strong>Points to note:</strong></p><ul><li>The encoding price is charged only once in a video's lifetime.</li><li>The same video will not incur encoding costs again.</li><li>The one-time encoding cost applies per video; each new video uploaded by the streamer is encoded (and charged) separately.</li><li>Storage costs are computed each month.</li><li>We calculate costs for 100% of the video.<br/></li></ul><p><strong>Lifetime video encoding= $0.05 per minute; Per month video storage= $0.003 per minute</strong><br/></p><p>The image explains the cost at a resolution of 1080p. The cost can be calculated similarly for other resolutions; let's take the same example. <br/></p><p><strong>Calculation of cost for Live Streaming</strong></p><p><strong>Total Minutes= 30; Total Views= 100</strong></p><p><strong>Encoding- 30 x 0.05= $1.5; Storage- 30 x 0.003= $0.09</strong></p><ol><li>Resolution- 240p; Unit price per Minute- 0.0004</li></ol><p>Calculation- Delivery= 30 x 100 x 0.0004 = $1.2</p><p><strong>Total cost at 240p= 1.5 + 0.09 + 1.2= $ 2.79</strong><br/></p><p>2. Resolution- 360p; Unit price per Minute- 0.0006</p><p>Calculation- Delivery= 30 x 100 x 0.0006 = $1.8</p><p><strong>Total cost at 360p= 1.5 + 0.09 + 1.8= $ 3.39</strong><br/></p><p>3. Resolution- 480p; Unit price per Minute- 0.0008</p><p>Calculation- Delivery= 30 x 100 x 0.0008 = $2.4</p><p><strong>Total cost at 480p= 1.5 + 0.09 + 2.4= $ 3.99</strong><br/></p><p>4. Resolution- 720p; Unit price per Minute- 0.0010</p><p>Calculation- Delivery= 30 x 100 x 0.0010 = $3</p><p><strong>Total cost at 720p= 1.5 + 0.09 + 3= $ 4.59</strong><br/></p><p>5. Resolution- 1080p; Unit price per Minute- 0.0012</p><p>Calculation- Delivery= 30 x 100 x 0.0012 = $3.6</p><p><strong>Total cost at 1080p= 1.5 + 0.09 + 3.6= $ 5.19</strong><br/></p><blockquote>Let's take another example and look at the pricing where the streaming minutes are increased. 
This example shows how the calculation changes when the units change: minutes, views, or both.</blockquote><p><strong>Example 2:</strong></p><p><strong>Calculation of cost of Live Streaming</strong></p><p><strong>Total Minutes- 150; Total views- 100</strong></p><p><strong>Encoding- 150 x 0.05= $7.5; Storage- 150 x 0.003= $0.45</strong><br/></p><ol><li>Resolution- 240p; Unit price per Minute- 0.0004</li></ol><p>Calculation- Delivery= 150 x 100 x 0.0004 = $6</p><p><strong>Total cost at 240p= 7.5 + 0.45 + 6= $ 13.95</strong><br/></p><p>2. Resolution- 360p; Unit price per Minute- 0.0006</p><p>Calculation- Delivery= 150 x 100 x 0.0006 = $9</p><p><strong>Total cost at 360p= 7.5 + 0.45 + 9= $ 16.95</strong><br/></p><p>3. Resolution- 480p; Unit price per Minute- 0.0008</p><p>Calculation- Delivery= 150 x 100 x 0.0008 = $12</p><p><strong>Total cost at 480p= 7.5 + 0.45 + 12= $ 19.95</strong><br/></p><p>4. Resolution- 720p; Unit price per Minute- 0.0010</p><p>Calculation- Delivery= 150 x 100 x 0.0010 = $15</p><p><strong>Total cost at 720p= 7.5 + 0.45 + 15= $ 22.95</strong><br/></p><p>5. Resolution- 1080p; Unit price per Minute- 0.0012</p><p>Calculation- Delivery= 150 x 100 x 0.0012 = $18</p><p><strong>Total cost at 1080p= 7.5 + 0.45 + 18= $ 25.95</strong><br/></p><h2 id="enterprise-plan">Enterprise plan</h2><p>The Enterprise Plan is for companies that stream regularly and expect growing viewership on their platform. We offer this plan to promote mass engagement at affordable prices.<br/></p><p><a href="https://videosdk.live/contact">Contact Support</a> for the best pricing deals.<br/></p><h2 id="amazon-web-services-aws-pricing">Amazon Web Services (AWS) Pricing</h2><p>AWS is a popular name among providers of <a href="https://www.videosdk.live/interactive-live-streaming">live-streaming services</a>. It offers streaming in two specific resolutions: 1080p and 4K. 
We will focus on the 1080p resolution. <br/></p><p>Note that AWS works with only these two resolutions; it does not provide any streaming resolution lower than 1080p. Where other companies offer streaming at resolutions from 240p up to 1080p, AWS provides only 1080p (and 4K).<br/></p><p>AWS calculates its pricing in GB. We have converted the pricing units into minutes for a clearer comparison; the unit conversion is given later in this blog.<br/></p><p><strong>AWS Pricing at 1080p per minute</strong></p><p>Encoding= $ 0.062</p><p>Storage= $ 0.0024</p><p>Delivery= $ 0.003<br/></p><h2 id="calculation-of-costs-in-minutes-or-gb">Calculation of costs in Minutes or GB?</h2><p>Calculating the cost of live streaming can be done in either of two ways. Some companies compute costs in GB, since they provide multiple services and prefer a single unified unit for all of them. <br/></p><blockquote>Other companies prefer calculating costs in video minutes to make calculations effortless for their clients. A video's length is always measured in minutes, so calculating in minutes is straightforward. <strong>VideoSDK calculates costs based on video minutes.</strong></blockquote><h2 id="how-to-convert-gb-into-minutes">How to convert GB into minutes?</h2><p>This simple table describes the GB-to-minute ratio.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/GB-into-min--1-.jpg" class="kg-image" alt="Standard Live Streaming API Pricing" loading="lazy" width="722" height="473"/></figure><h2 id="comparison">Comparison</h2><p>VideoSDK and AWS both aim at the same goal: providing the best-quality live streaming for streamers. We have compared the two based on their pricing policies. 
<br/></p><blockquote>Note that we have converted the streaming units of AWS into minutes from GB to make an unbiased comparison of pricing.</blockquote><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/pricing-table_standard-live-streaming.jpg" class="kg-image" alt="Standard Live Streaming API Pricing" loading="lazy" width="1292" height="717"/></figure><p>Compared to AWS, VideoSDK turns out to be the more affordable approach to live streaming. Quality might seem the obvious trade-off, but AWS is on the same page concerning quality: what the two offer is essentially identical. Since AWS streams only at 1080p while VideoSDK provides playback in multiple resolutions, we observe a huge pricing gap. Even when we compare the two at 1080p, the price difference is large, roughly double.</p>]]></content:encoded></item><item><title><![CDATA[How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2]]></title><description><![CDATA[In this article, you'll learn how to create a react native video calling app with callkeep using the firebase and video SDK.]]></description><link>https://www.videosdk.live/blog/react-native-ios-video-calling-app-with-callkeep</link><guid isPermaLink="false">63be5cb6bd44f53bde5cfca2</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Rajan Surani]]></dc:creator><pubDate>Fri, 04 Oct 2024 05:58:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2023/01/Build-a-React-Native-Video-Calling-App-with-Callkeep-using-Firebase-and-Video-SDK--Par-2-.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Build-a-React-Native-Video-Calling-App-with-Callkeep-using-Firebase-and-Video-SDK--Par-2-.jpg" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK 
Part-2"/><p/><p>Let us start with the second part of implementing the React Native video call app using CallKeep. If you have not read the first part, it's recommended to <a href="https://www.videosdk.live/blog/react-native-android-video-calling-app-with-callkeep">start from there</a>.</p><p>Just to give a quick recap: we started by understanding what CallKeep is and what it does, and then went over the flow and functioning of the app. Next, we installed and set up the libraries for Android devices. By the end of the previous article, we had video calling up and running on Android devices. You can get the complete code for the series at <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-call-trigger-example">our GitHub repository</a>.</p><p>In this part, we will focus on tweaking the implementation to support iOS devices as well. So without any more delay, let's jump right in.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/react-native-android-video-calling-app-with-callkeep"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Build a React Native Android Video Calling App with Callkeep using Firebase and Video SDK</div><div class="kg-bookmark-description">In this tutorial, you’ll learn how to make a react native video calling app with callkeep using the firebase and video SDK.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/favicons/android-icon-192x192.png" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2"><span class="kg-bookmark-author">Rajan Surani</span><span class="kg-bookmark-publisher">more posts</span></img></div></div><div class="kg-bookmark-thumbnail"><img 
src="http://assets.videosdk.live/static-assets/ghost/2023/01/Build-a-React-Native-Video-Calling-App-with-Callkeep-using-Firebase-and-Video-SDK--2--1.jpg" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2"/></div></a></figure><h2 id="libraries">Libraries</h2><p>We will need one additional library to handle push notifications on iOS devices, since there are a few cases where Firebase notifications fail.</p><p><a href="https://www.npmjs.com/package/react-native-voip-push-notification">React Native VoIP Push Notification</a> - This library is used to send push notifications on iOS devices, as Firebase notifications do not function reliably on iOS when the app is in a killed state.</p><h2 id="client-side-setup">Client Side Setup</h2><p>Let's install the library we discussed above by using the command:</p><pre><code class="language-js">npm install react-native-voip-push-notification</code></pre><h2 id="ios-setup">iOS Setup </h2><h3 id="videosdk-setup">VideoSDK Setup</h3><ol><li>Perform the manual linking of the <code>react-native-incall-manager</code>. <br>Select <code>Your_Xcode_Project/TARGETS/BuildSettings</code>, in Header Search Paths, add <code>"$(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager"</code></br></li><li>Update the <code>Podfile</code> in the <code>ios</code> directory to include <code>react-native-webrtc</code></li></ol><pre><code class="language-js">pod 'react-native-webrtc', :path =&gt; '../node_modules/@videosdk.live/react-native-webrtc'</code></pre><p>3. Update the <code>platform</code> field to <code>12.0</code>, as <code>react-native-webrtc</code> does not support iOS &lt; 12.</p><pre><code class="language-js">platform :ios, '12.0'</code></pre><p>4. Declare the permissions in the <code>Info.plist</code> to allow access to the camera and microphone.</p><pre><code class="language-js">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><h3 id="firebase-setup">Firebase Setup</h3><ol><li>Create a Firebase iOS App within the Firebase Project.</li><li>Download and add <code>GoogleService-info.plist</code> files to the project</li></ol><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-25.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="730" height="657"/></figure><p>3. Update the <code>Podfile</code> in the <code>ios</code> directory to include the Firebase.</p><pre><code class="language-js">pod 'Firebase', :modular_headers =&gt; true
pod 'FirebaseCoreInternal', :modular_headers =&gt; true
pod 'FirebaseCore', :modular_headers =&gt; true
pod 'GoogleUtilities', :modular_headers =&gt; true</code></pre><pre><code class="language-js">#import &lt;Firebase.h&gt;

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
	 //...
     
    // Add this line near the start of the method
	[FIRApp configure];
    
    //...
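    // (Sketch, an assumption based on react-native-voip-push-notification's
    // README, not part of the original article.) Once that library is linked,
    // VoIP push registration is typically triggered here as well:
    //   [RNVoipPushNotificationManager voipRegistration];
    // with #import "RNVoipPushNotificationManager.h" added at the top of the file.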
}</code></pre><h3 id="pushkit-setup">PushKit Setup</h3><p>PushKit allows us to deliver notifications to iOS devices. To implement push notifications, you must upload an APNs Auth Key. We need the following details about the app when sending push notifications via an APNs Auth Key:</p><ul><li>Auth Key file</li><li>Team ID</li><li>Key ID</li><li>Your app’s bundle ID</li></ul><p>To create an APNs auth key, follow the steps below.</p><ol><li>Visit the Apple <a href="https://developer.apple.com/account/" rel="nofollow">Developer Member Center</a></li></ol><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-26.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="2980" height="2028"/></figure><p>2. Click on <code>Certificates, Identifiers &amp; Profiles</code>. Go to Keys from the left side. Create a new Auth Key by clicking on the plus button on the top right side.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-27.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="2980" height="2028"/></figure><p>3. On the following page, add a Key Name, and select APNs.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-28.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="2980" height="2028"/></figure><p>4. 
Click on the Register button.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-29.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="2980" height="2028"/></figure><p>5. You can download your auth key file from this page and upload this file to the Firebase dashboard without changing its name.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-30.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="2980" height="2028"/></figure><p>6. In your Firebase project, go to <code>Settings</code> and select the <code>Cloud Messaging</code> tab. Scroll down to <code>iOS app configuration</code> and click Upload under <code>APNs Authentication Key</code>.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-31.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="1024" height="601"/></figure><p>7. Enter the Key ID and Team ID. The Key ID is the 10-character string in the file name <code>AuthKey_{Key ID}.p8</code>. Your Team ID is in the Apple Member Center under the <a href="https://developer.apple.com/account/#/membership" rel="nofollow">membership tab</a>, and is always displayed under your account name in the top right corner.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-32.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="1024" height="582"/></figure><p>8. 
Enable Push Notifications in Capabilities</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-33.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="1440" height="900"/></figure><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-34.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="1440" height="900"/></figure><p>9. Enable selected permission in Background Modes</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-35.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="1116" height="598"/></figure><h3 id="callkeep-setup">CallKeep Setup</h3><ol><li>Update the <code>Podfile</code> with the Call Keep library.</li></ol><pre><code class="language-js">pod 'RNCallKeep', :path =&gt; '../node_modules/react-native-callkeep'</code></pre><p>2.  Update the <code>ios/YourProject/Info.plist</code> file to allow deep linking.</p><pre><code class="language-js">&lt;array&gt;
    &lt;dict&gt;
        &lt;key&gt;CFBundleTypeRole&lt;/key&gt;
        &lt;string&gt;Editor&lt;/string&gt;
        &lt;key&gt;CFBundleURLName&lt;/key&gt;
        &lt;string&gt;videocalling&lt;/string&gt;
        &lt;key&gt;CFBundleURLSchemes&lt;/key&gt;
        &lt;array&gt;
        	&lt;string&gt;videocalling&lt;/string&gt;
        &lt;/array&gt;
    &lt;/dict&gt;
&lt;/array&gt;</code></pre><p>3.  Update the <code>ios/YourProject/AppDelegate.m</code> file with the following code changes to support CallKeep. These delegates will help invoke React Native CallKeep.</p><pre><code class="language-js">#import "RNCallKeep.h"
#import &lt;React/RCTLinkingManager.h&gt;

// Update these delegates with the CallKeep setup
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{

  [FIRApp configure];
  RCTBridge *bridge = [[RCTBridge alloc] initWithDelegate:self launchOptions:launchOptions];
  
  // Add these lines
  [RNCallKeep setup:@{
      @"appName": @"VideoSDK Call Trigger",
      @"maximumCallGroups": @3,
      @"maximumCallsPerCallGroup": @1,
      @"supportsVideo": @YES,
    }];
  
  RCTRootView *rootView = [[RCTRootView alloc] initWithBridge:bridge
                                                   moduleName:@"ReactNativeCallTrigger"
                                            initialProperties:nil];

  if (@available(iOS 13.0, *)) {
      rootView.backgroundColor = [UIColor systemBackgroundColor];
  } else {
      rootView.backgroundColor = [UIColor whiteColor];
  }

  self.window = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
  UIViewController *rootViewController = [UIViewController new];
  rootViewController.view = rootView;
  self.window.rootViewController = rootViewController;
  [self.window makeKeyAndVisible];
  return YES;
}

// Add the below delegate to handle invoking of a call
 - (BOOL)application:(UIApplication *)application
 continueUserActivity:(NSUserActivity *)userActivity
   restorationHandler:(void(^)(NSArray * __nullable restorableObjects))restorationHandler
 {
   return [RNCallKeep application:application
            continueUserActivity:userActivity
              restorationHandler:restorationHandler];
 }

//Add below delegate to allow deep linking
- (BOOL)application:(UIApplication *)application
   openURL:(NSURL *)url
   options:(NSDictionary&lt;UIApplicationOpenURLOptionsKey,id&gt; *)options
{
  return [RCTLinkingManager application:application openURL:url options:options];
}
</code></pre><h3 id="voip-push-notification-setup">VoIP Push Notification Setup</h3><ol><li>Update the <code>ios/YourProject/AppDelegate.m</code> file with the following code changes. These delegates will help us receive VoIP push notifications.</li></ol><pre><code class="language-js">#import &lt;PushKit/PushKit.h&gt;
#import "RNVoipPushNotificationManager.h"

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
  //...
    
  // Add this line to register for VoIP push notifications
  [RNVoipPushNotificationManager voipRegistration];
  
  //...
}


- (void)pushRegistry:(PKPushRegistry *)registry didUpdatePushCredentials:(PKPushCredentials *)credentials forType:(PKPushType)type {
  // Register VoIP push token (a property of PKPushCredentials) with server
  [RNVoipPushNotificationManager didUpdatePushCredentials:credentials forType:(NSString *)type];
}

- (void)pushRegistry:(PKPushRegistry *)registry didInvalidatePushTokenForType:(PKPushType)type
{
  // --- The system calls this method when a previously provided push token is no longer valid for use. No action is necessary on your part to reregister the push type. Instead, use this method to notify your server not to send push notifications using the matching push token.
}

- (void)pushRegistry:(PKPushRegistry *)registry didReceiveIncomingPushWithPayload:(PKPushPayload *)payload forType:(PKPushType)type withCompletionHandler:(void (^)(void))completion {
  

  // --- NOTE: Apple requires us to invoke CallKit as soon as we receive a VoIP push
  // --- see: react-native-callkeep

  // --- Retrieve information from your voip push payload
  NSString *uuid = payload.dictionaryPayload[@"uuid"];
  NSString *callerName = [NSString stringWithFormat:@"%@ Calling from VideoSDK", payload.dictionaryPayload[@"callerName"]];
  NSString *handle = payload.dictionaryPayload[@"handle"];

  // --- this is optional, only required if you want to call `completion()` on the js side
  [RNVoipPushNotificationManager addCompletionHandler:uuid completionHandler:completion];

  // --- Process the received push
  [RNVoipPushNotificationManager didReceiveIncomingPushWithPayload:payload forType:(NSString *)type];
//  NSDictionary *extra = [payload.dictionaryPayload valueForKeyPath:@"custom.path.to.data"];

  [RNCallKeep reportNewIncomingCall: uuid
                               handle: handle
                           handleType: @"generic"
                             hasVideo: YES
                  localizedCallerName: callerName
                      supportsHolding: YES
                         supportsDTMF: YES
                     supportsGrouping: YES
                   supportsUngrouping: YES
                          fromPushKit: YES
                              payload: nil
                withCompletionHandler: completion];
  
  // --- You don't need to call it if you stored `completion()` and will call it on the js side.
  completion();
}
</code></pre><h3 id="server-side-setup">Server Side Setup</h3><p>You have to add the <code>AuthKey_{Key ID}.p8</code> file, which we generated from the Apple Developer portal and uploaded to Firebase in the client setup, under the <code>functions</code> directory. This will help us with VoIP push notifications.</p><hr><h3 id="client-side-code">Client Side Code</h3><p>With our library all set, let's make the required changes on the app side.</p><ol><li> Let us start by storing the APN token in Firestore. To do that, update <code>getFCMToken()</code> and declare the state for the APN token.</li></ol><pre><code class="language-js">const [APN, setAPN] = useState(null);
const APNRef = useRef();
APNRef.current = APN;


// Replace getFCMToken() with the version below.
async function getFCMtoken() {
    const authStatus = await messaging().requestPermission();
    const enabled =
          authStatus === messaging.AuthorizationStatus.AUTHORIZED ||
          authStatus === messaging.AuthorizationStatus.PROVISIONAL;

    //Register the APN Token.
    Platform.OS === "ios" &amp;&amp; VoipPushNotification.registerVoipToken();

    if (enabled) {
        const token = await messaging().getToken();
        const querySnapshot = await firestore()
        .collection("users")
        .where("token", "==", token)
        .get();

        const uids = querySnapshot.docs.map((doc) =&gt; {
            if (doc &amp;&amp; doc?.data()?.callerId) {

                //We added the APN to the Data and firebaseUserConfig.
                const { token, platform, APN, callerId } = doc?.data();
                setfirebaseUserConfig({
                    callerId,
                    token,
                    platform,
                    APN,
                });
            }
            return doc;
        });

        if (uids &amp;&amp; uids.length == 0) {
            addUser({ token });
        } else {
            console.log("Token Found");
        }
    }
}
</code></pre><p>2. Update the <code>addUser()</code> to set the generated APN token in the Firestore database.</p><pre><code class="language-js">const addUser = ({ token }) =&gt; {
    const platform = Platform.OS === "android" ? "ANDROID" : "iOS";
    const obj = {
      callerId: Math.floor(10000000 + Math.random() * 90000000).toString(),
      token,
      platform,
    };
    
    //We will add the APN to firestore
    if (platform == "iOS") {
      obj.APN = APNRef.current;
    }
    firestore()
      .collection("users")
      .add(obj)
      .then(() =&gt; {
        setfirebaseUserConfig(obj);
        console.log("User added!");
      });
  };
</code></pre><p>3. Now we will listen to the VoipPushNotification for the <code>notification</code> event and initiate the call.</p><pre><code class="language-js">useEffect(() =&gt; {
    VoipPushNotification.addEventListener("register", (token) =&gt; {
      setAPN(token);
    });

    VoipPushNotification.addEventListener("notification", (notification) =&gt; {
      const { callerInfo, videoSDKInfo, type } = notification;
      if (type === "CALL_INITIATED") {
        const incomingCallAnswer = ({ callUUID }) =&gt; {
          updateCallStatus({
            callerInfo,
            type: "ACCEPTED",
          });
          navigation.navigate(SCREEN_NAMES.Meeting, {
            name: "Person B",
            token: videoSDKInfo.token,
            meetingId: videoSDKInfo.meetingId,
          });
        };
        const endIncomingCall = () =&gt; {
          Incomingvideocall.endAllCall();
          updateCallStatus({ callerInfo, type: "REJECTED" });
        };
        Incomingvideocall.configure(incomingCallAnswer, endIncomingCall);
      } else if (type === "DISCONNECT") {
        Incomingvideocall.endAllCall();
      }
      VoipPushNotification.onVoipNotificationCompleted(notification.uuid);
    });

    VoipPushNotification.addEventListener("didLoadWithEvents", (events) =&gt; {
      const { callerInfo, videoSDKInfo, type } =
        events.length &gt; 1 &amp;&amp; events[1].data;
      if (type === "CALL_INITIATED") {
        const incomingCallAnswer = ({ callUUID }) =&gt; {
          updateCallStatus({
            callerInfo,
            type: "ACCEPTED",
          });
          navigation.navigate(SCREEN_NAMES.Meeting, {
            name: "Person B",
            token: videoSDKInfo.token,
            meetingId: videoSDKInfo.meetingId,
          });
        };

        const endIncomingCall = () =&gt; {
          Incomingvideocall.endAllCall();
          updateCallStatus({ callerInfo, type: "REJECTED" });
        };

        Incomingvideocall.configure(incomingCallAnswer, endIncomingCall);
      }
    });

    return () =&gt; {
      VoipPushNotification.removeEventListener("didLoadWithEvents");
      VoipPushNotification.removeEventListener("register");
      VoipPushNotification.removeEventListener("notification");
    };
  }, []);
</code></pre><p>4. Inside the <code>index.js</code> file, update the <code>AppRegistry</code> with a <code>HeadlessCheck</code> so that when iOS launches the app headlessly in the background, no UI is rendered.</p><pre><code class="language-js">function HeadlessCheck({ isHeadless }) {
  if (isHeadless) {
    // App has been launched in the background by iOS, ignore
    return null;
  }

  return &lt;App /&gt;;
}

AppRegistry.registerComponent(appName, () =&gt; HeadlessCheck);</code></pre><p>With these, the client-side code is all set. But we need our Firebase function to send the APN notification instead of the simple FCM notification. So let's update the Firebase function to do the same.</p><h3 id="server-side-code">Server Side Code</h3><ol><li>Add the required <code>node-apn</code> library by running:</li></ol><pre><code class="language-js">npm install https://github.com/node-apn/node-apn.git</code></pre><p>2.  Add the imports for the AuthKey and apn.</p><pre><code class="language-js">var apn = require("apn");
var Key = "./AuthKey_{KEY ID}.p8";
</code></pre><p>3.  Inside our <code>initiate-call</code> API, we will check the callee's platform and send an APNs VoIP push for iOS or an FCM message for Android.</p><pre><code class="language-js">app.post("/initiate-call", (req, res) =&gt; {
  const { calleeInfo, callerInfo, videoSDKInfo } = req.body;

  //Check for the platform and send the notification accordingly.
  if (calleeInfo.platform === "iOS") {
    let deviceToken = calleeInfo.APN;
    var options = {
      token: {
        key: Key,
        keyId: "YOUR_KEY_ID",
        teamId: "YOUR_TEAM_ID",
      },
      production: true,
    };

    var apnProvider = new apn.Provider(options);

    var note = new apn.Notification();

    note.expiry = Math.floor(Date.now() / 1000) + 3600; // Expires 1 hour from now.
    note.badge = 1;
    note.sound = "ping.aiff";
    note.alert = "You have a new message";
    note.rawPayload = {
      callerName: callerInfo.name,
      aps: {
        "content-available": 1,
      },
      handle: callerInfo.name,
      callerInfo,
      videoSDKInfo,
      type: "CALL_INITIATED",
      uuid: uuidv4(),
    };
    note.pushType = "voip";
    note.topic = "org.reactjs.ReactNativeCallTrigger.voip";
    apnProvider.send(note, deviceToken).then((result) =&gt; {
      if (result.failed &amp;&amp; result.failed.length &gt; 0) {
        console.log("RESULT", result.failed[0].response);
        res.status(400).send(result.failed[0].response);
      } else {
        res.status(200).send(result);
      }
    });
  } else if (calleeInfo.platform === "ANDROID") {
    var FCMtoken = calleeInfo.token;
    const info = JSON.stringify({
      callerInfo,
      videoSDKInfo,
      type: "CALL_INITIATED",
    });
    var message = {
      data: {
        info,
      },
      android: {
        priority: "high",
      },
      token: FCMtoken,
    };
    FCM.send(message, function (err, response) {
      if (err) {
        res.status(400).send(response);
      } else {
        res.status(200).send(response);
      }
    });
  } else {
    res.status(400).send("Not supported platform");
  }
});</code></pre><p>4. In the above API, replace <strong><code>TEAM_ID</code></strong> and <strong><code>KEY_ID</code></strong> with your own values, which you can find under Firebase Project Settings &gt; Cloud Messaging.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Untitled-design--2-.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="1920" height="1080"/></figure><p>With these, iOS devices should now be able to receive the call and join the video call. This is what an incoming call on an iOS device looks like.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/IMG_0370.png" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="225" height="400"/></figure><blockquote>Congratulations!!! You have built a complete video calling app that works on both Android and iOS devices.</blockquote><p>Here is a video showing an incoming call and the start of a video session.</p><!--kg-card-begin: html--><iframe type="text/html" frameborder="0" width="560" height="315" src="https://www.youtube.com/embed/mJOei_fCrQs" allowfullscreen=""/><!--kg-card-end: html--><h2 id="conclusion">Conclusion</h2><p>With this, we successfully built a React Native video calling app with CallKeep using Video SDK and Firebase. You can always refer to our <a href="https://docs.videosdk.live/">documentation</a> if you want to add features like chat messaging and screen sharing. 
If you have any problems with the implementation, please contact us via our <a href="https://discord.gg/Gpmj6eCq5u">Discord community</a>.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Click-here-to-join-the-meeting-1.gif" class="kg-image" alt="How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2" loading="lazy" width="720" height="720"/></figure></hr>]]></content:encoded></item><item><title><![CDATA[Build React Native Live Streaming App: Step-by-Step Guide]]></title><description><![CDATA[In this tutorial, you’ll learn how to integrate Live Streaming in your React Native app using Video SDK.]]></description><link>https://www.videosdk.live/blog/react-native-live-streaming</link><guid isPermaLink="false">642d13c22c7661a49f382209</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Ahmed]]></dc:creator><pubDate>Fri, 04 Oct 2024 03:33:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/04/ils_react_native_blog--2-.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction-of-live-streaming">Introduction of Live Streaming</h2>
<img src="https://assets.videosdk.live/static-assets/ghost/2023/04/ils_react_native_blog--2-.jpg" alt="Build React Native Live Streaming App: Step-by-Step Guide"/><p>Live streaming has become an increasingly popular way to share information and connect with audiences in real-time. Whether you're a musician, a business owner, or a teacher, live streaming can be a powerful tool for engaging with your audience and sharing your message. </p><p>Choosing the right live stream platform is an important decision, as it can impact the quality of your stream, the size of your audience, and the overall success of your live stream. Here are a few factors to consider while choosing a live-stream platform:</p><ul><li><strong>Features</strong>: Look for live stream platforms that offer the features you need to create high-quality streams. For example, you might need a platform that supports multiple camera angles, screen sharing, or multiple broadcasters. </li><li><strong>Integration</strong>: Integration of live streaming into your React Native app should be simple and quick.</li><li><strong>Budget</strong>: Most live-streaming platforms that provide an SDK or API are expensive, so it is important to choose a platform that fits within your budget.</li></ul><h2 id="why-choose-the-videosdk">Why choose VideoSDK?</h2><p>VideoSDK is a perfect choice for those seeking a live-streaming platform that offers the necessary features to create high-quality streams. The platform supports screen sharing and real-time messaging, allows broadcasters to invite audience members to the stage, and supports <strong>100k+</strong> participants, ensuring that your live streams are interactive and engaging. With VideoSDK, you can also use your own custom-designed layout template for React Native live streaming.</p><p>In terms of integration, VideoSDK offers a simple and quick integration process, allowing you to integrate live streaming into your React Native app. 
This ensures that you can enjoy the benefits of live streaming without any technical difficulties or lengthy implementation processes.</p><p>Furthermore, VideoSDK provides a <a href="https://www.videosdk.live/pricing">budget-friendly</a> API, making it an affordable option for businesses of all sizes. You can enjoy the benefits of a feature-rich live-streaming platform without breaking the bank, making it an ideal choice for startups and small businesses.</p><h2 id="5-steps-to-integrate-live-streaming-in-react-native-app">5 Steps to Integrate Live Streaming in React Native app</h2><p>The steps will give you all the information to quickly build a React Native <a href="https://www.videosdk.live/interactive-live-streaming">Live Streaming</a> app. Please carefully read this guide, and if you have any trouble, let us know right away on <a href="https://discord.gg/f2WsNDN9S5">Discord</a>, we will be happy to help you.</p><h3 id="prerequisites">Prerequisites</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (Not having one? Follow <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>Basic understanding of React Native</li><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li><li>Android Studio or Xcode installed</li></ul><blockquote>
<p>You need a VideoSDK account to generate a token. Visit the VideoSDK <a href="https://app.videosdk.live/login">dashboard</a> to generate one.</p>
</blockquote>
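<p>For context, a VideoSDK auth token is a JSON Web Token (JWT) signed with your API secret. The sketch below is an illustration of that shape using only Node's standard library, not the official generation method; the payload field names and the <code>allow_join</code> permission are assumptions for illustration, and in practice you should generate tokens from the dashboard or with a JWT library on your server, never inside the app.</p><pre><code class="language-js">// Illustrative sketch (assumed shape, not the official method): a
// VideoSDK-style auth token is an HS256-signed JWT.
var crypto = require("crypto");

function base64url(obj) {
  // Serialize to JSON, base64-encode, then convert to the URL-safe alphabet.
  return Buffer.from(JSON.stringify(obj))
    .toString("base64")
    .replace(/=+$/, "")
    .replace(/\+/g, "-")
    .replace(/\//g, "_");
}

function signToken(apiKey, secret) {
  var header = { alg: "HS256", typ: "JWT" };
  // Placeholder payload: field names are assumptions for illustration.
  var payload = { apikey: apiKey, permissions: ["allow_join"] };
  var data = base64url(header) + "." + base64url(payload);
  // Signature = base64url(HMAC-SHA256 over "header.payload" with the secret).
  var signature = crypto
    .createHmac("sha256", secret)
    .update(data)
    .digest("base64")
    .replace(/=+$/, "")
    .replace(/\+/g, "-")
    .replace(/\//g, "_");
  return data + "." + signature;
}
</code></pre><p>Because the token embeds your API secret's signature, generation must happen server-side; shipping the secret inside a mobile app would expose it.</p>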
<h3 id="app-architecture%E2%80%8B">App Architecture<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start-ILS#app-architecture">​</a></h3><p>The app will contain three screens:</p><ol><li><code>Join Screen</code> : This screen allows a <code>SPEAKER</code> to create a studio or join a predefined studio, and a <code>VIEWER</code> to join the predefined studio.</li><li><code>Speaker Screen</code> : This screen contains the speaker list and studio controls such as Enable / Disable Mic &amp; Camera and Leave studio.</li><li><code>Viewer Screen</code> : This screen contains a live stream player in which the viewer will play the stream.</li></ol><center>
<img src="https://cdn.videosdk.live/website-resources/docs-resources/ils_app_arch.png" alt="Build React Native Live Streaming App: Step-by-Step Guide">
</img></center><h2 id="step-1-getting-started-with-the-code%E2%80%8B">STEP 1:  Getting Started with the Code!<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start-ILS#getting-started-with-the-code">​</a></h2><h3 id="a-create-app%E2%80%8B">[a] Create App<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start-ILS#create-app">​</a></h3><p>Create a new React Native app by running the command below.</p><pre><code class="language-js">npx react-native init AppName
</code></pre><p>For React Native setup, you can follow the <a href="https://reactnative.dev/docs/environment-setup" rel="noopener noreferrer">Official Docs</a>.</p><h3 id="b-video-sdk-installation%E2%80%8B">[b] Video SDK Installation<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start-ILS#videosdk-installation">​</a></h3><p>Install the Video SDK using the command below. Make sure you are in your project directory before running it.</p><p><strong>For NPM :</strong></p><pre><code class="language-javascript">npm install "@videosdk.live/react-native-sdk"

</code></pre><p><strong>For Yarn :</strong></p><pre><code class="language-javascript">yarn add "@videosdk.live/react-native-sdk"

</code></pre><h3 id="c-project-structure%E2%80%8B">[c] Project Structure<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start-ILS#project-structure">​</a></h3><pre><code class="language-javascript">  root
   ├── node_modules
   ├── android
   ├── ios
   ├── App.js
   ├── api.js
   ├── index.js</code></pre><h2 id="step-2-android-setup">STEP 2:  Android Setup</h2><h3 id="a-add-the-required-permission-in-the-androidmanifestxml-file">(a) Add the required permission in the AndroidManifest.xml file.</h3>
<pre><code class="language-xml">&lt;manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
&gt;
    &lt;!-- Give all the required permissions to app --&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) --&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" /&gt;

    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)--&gt;
    &lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;

    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
    &lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;
  &lt;application&gt;
   &lt;meta-data
      android:name="live.videosdk.rnfgservice.notification_channel_name"
      android:value="Meeting Notification"
     /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_channel_description"
    android:value="Whenever meeting started notification will appear."
    /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_color"
    android:resource="@color/red"
    /&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"&gt;&lt;/service&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"&gt;&lt;/service&gt;
  &lt;/application&gt;
&lt;/manifest&gt;
</code></pre>
<h3 id="b-link-a-couple-of-internal-library-dependencies-in-androidappbuildgradle-file">(b) Link a couple of internal library dependencies in android/app/build.gradle file</h3>
<pre><code class="language-js">dependencies {
    compile project(':rnfgservice') 
    compile project(':rnwebrtc') 
    compile project(':rnincallmanager')
  }

</code></pre>
<p>Include dependencies in <strong>android/settings.gradle</strong></p><pre><code class="language-js">include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnincallmanager'
project(':rnincallmanager').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-incallmanager/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')

</code></pre>
<p>Update <strong>MainApplication.java </strong>to use InCallManager and run some foreground services.</p><pre><code class="language-js">import live.videosdk.rnfgservice.ForegroundServicePackage;
import live.videosdk.rnincallmanager.InCallManagerPackage;
import live.videosdk.rnwebrtc.WebRTCModulePackage;

public class MainApplication extends Application implements ReactApplication {
  private static List&lt;ReactPackage&gt; getPackages() {
      return Arrays.&lt;ReactPackage&gt;asList(
          /* Initialise foreground service, incall manager and webrtc module */
          new ForegroundServicePackage(),
          new InCallManagerPackage(),
          new WebRTCModulePackage(),
      );
  }
}
</code></pre>
<p>Some devices might face WebRTC problems; to solve that, update your <strong>android/gradle.properties</strong> file with the following:</p><pre><code class="language-JS">/* This one fixes a weird WebRTC runtime problem on some devices. */
android.enableDexingArtifactTransform.desugaring=false</code></pre><p>If you use <strong>Proguard</strong>, make the changes shown below in <strong>android/app/proguard-rules.pro </strong>file (this is optional)</p><pre><code class="language-js">-keep class org.webrtc.** { *; }
</code></pre><h3 id="c-update-colorsxml-file-with-some-new-colors-for-internal-dependencies">(c) Update colors.xml file with some new colors for internal dependencies.</h3>
<pre><code class="language-xml">&lt;resources&gt;
    &lt;item name="red" type="color"&gt;#FC0303&lt;/item&gt;
    &lt;integer-array name="androidcolors"&gt;
    &lt;item&gt;@color/red&lt;/item&gt;
    &lt;/integer-array&gt;
&lt;/resources&gt;
</code></pre>
<h2 id="step-3-ios-setup">STEP 3:  iOS Setup </h2><h3 id="a-install-react-native-incallmanager">(a)  Install react-native-incallmanager</h3>
<pre><code class="language-JS">$ yarn add @videosdk.live/react-native-incallmanager
</code></pre><h3 id="b-install-the-gem-again-to-update-cocoapods">(b) Install the gem again to update Cocoapods.</h3>
<p>Make sure you are using CocoaPods 1.10 or higher. To update CocoaPods, you can simply install the gem again.</p><pre><code class="language-JS">$[sudo] gem install cocoapods
</code></pre><h3 id="c-manually-linking-if-react-native-incall-manager-is-not-linked-automatically">(c) Manually linking (if react-native-incall-manager is not linked automatically)</h3>
<ul>
<li>
<p>Drag node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager.xcodeproj under &lt;your_xcode_project&gt;/Libraries</p>
</li>
<li>
<p>Select &lt;your_xcode_project&gt; --&gt; Build Phases --&gt; Link Binary With Libraries</p>
</li>
<li>
<p>Drag Libraries/RNInCallManager.xcodeproj/Products/libRNInCallManager.a to Link Binary With Libraries</p>
</li>
<li>
<p>Select &lt;your_xcode_project&gt; --&gt; Build Settings In Header Search Paths, add $(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager</p>
</li>
</ul>
<h3 id="d-change-the-path-of-react-native-webrtc">(d) Change the path of react-native-webrtc</h3>
<pre><code class="language-JS">pod 'react-native-webrtc', :path =&gt; '../node_modules/@videosdk.live/react-native-webrtc'</code></pre><h3 id="e-%E2%AC%86%EF%B8%8F-change-your-platform-version">(e) ⬆️ Change your platform version</h3>
<ul>
<li>You have to change the platform field of the Podfile to 11.0 or above, as react-native-webrtc doesn’t support iOS versions below 11: <code>platform :ios, '11.0'</code></li>
</ul>
<h3 id="f-after-updating-the-version-you-have-to-install-pods">(f) After updating the version, you have to install pods</h3>
<pre><code class="language-JS">pod install
</code></pre><h3 id="g-then-add-%E2%80%9Clibreact-native-webrtca%E2%80%9D-in-link-binary-with-libraries-in-target-of-the-main-project-folder">(g)  Add “libreact-native-webrtc.a” under Link Binary With Libraries in the target of the main project folder.</h3>
<h3 id="h-now-add-the-following-permissions-to-the-infoplist-project-folderiosprojectnameinfoplist">(h) Now add the following permissions to the info.plist (project folder/IOS/projectname/info.plist):</h3>
<pre><code class="language-JS">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><h2 id="step-4-register-service">STEP 4:  Register Service</h2><p>Register Video SDK services in the root <strong>index.js</strong> file for initialization service.</p><pre><code class="language-js">import { register } from '@videosdk.live/react-native-sdk';
import { AppRegistry } from 'react-native';
import { name as appName } from './app.json';
import App from './src/App.js';
// Register the service
register();
AppRegistry.registerComponent(appName, () =&gt; App);
</code></pre>
<h2 id="%E2%9C%8D-step-5-start-writing-your-code-now">✍ STEP 5:  Start Writing Your Code Now</h2><h3 id="a-get-started-with-apijs">(a)  Get started with API.js</h3><p>Before jumping to anything else, we have to write an API call to generate a unique meetingId. You will require an auth token; you can generate it either by using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><pre><code class="language-js">// Auth token we will use to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashboard&gt;";

// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v1/meetings`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ region: "sg001" }),
  });

  const { meetingId } = await res.json();
  return meetingId;
};
</code></pre>
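To sanity-check the helper without a real token, you can stub <code>fetch</code> and confirm that the <code>meetingId</code> is extracted from the JSON response. This is an offline sketch: the stubbed response shape and the fake ID below are made up for the test, while the real API generates the meeting ID for you.

```javascript
// Stub global.fetch so createMeeting's flow can be exercised offline.
// The fake meetingId is hypothetical; the real API returns a generated one.
global.fetch = async (url, options) => ({
  json: async () => ({ meetingId: "abcd-efgh-ijkl" }),
});

const createMeeting = async ({ token }) => {
  const res = await fetch(`https://api.videosdk.live/v1/meetings`, {
    method: "POST",
    headers: { authorization: `${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ region: "sg001" }),
  });
  const { meetingId } = await res.json();
  return meetingId;
};

createMeeting({ token: "dummy-token" }).then((id) => console.log(id)); // logs "abcd-efgh-ijkl"
```

Swapping the stub for the real network call changes nothing in the helper itself, which is why this shape is easy to unit-test.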
<h3 id="b-wireframe-appjs-with-all-the-components">(b) <strong>Wireframe App.js with all the components</strong></h3><p>To build the wireframe of App.js, we are going to use the VideoSDK hooks and context providers. VideoSDK provides the MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks. Let's understand each of them.</p><p>First, we will explore the Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider: </strong>A Context Provider. It accepts <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop that is passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers, and Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer: </strong>A Context Consumer. All consumers that are descendants of a Provider re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting: </strong>The React hook for the meeting. It exposes everything related to the meeting, such as join, leave, and enabling/disabling the mic or webcam.</li><li><strong>useParticipant: </strong>The participant hook. It is responsible for handling all the events and props related to one particular participant, such as name, webcamStream, and micStream.</li></ul><p>The Meeting Context helps listen to all the changes, such as when a participant joins the meeting or toggles their mic or camera.</p><p>Let's get started by changing a couple of lines of code in App.js<br/></p><pre><code class="language-js">import React, { useState, useMemo, useRef, useEffect } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
  Clipboard,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
  Constants,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, authToken } from "./api";

// Responsible for either scheduling a new meeting or joining an existing meeting as a host or a viewer.
function JoinScreen({ getMeetingAndToken, setMode }) {
  return null;
}

// Responsible for managing participant video stream
function ParticipantView(props) {
  return null;
}

// Responsible for managing meeting controls such as toggle mic / webcam and leave
function Controls() {
  return null;
}

// Responsible for Speaker side view, which contains Meeting Controls(toggle mic/webcam &amp; leave) and Participant list
function SpeakerView() {
  return null;
}

// Responsible for Viewer side view, which contains video player for streaming HLS and managing HLS state (HLS_STARTED, HLS_STOPPING, HLS_STARTING, etc.)
function ViewerView() {
  return null;
}

// Responsible for managing the two views (Speaker &amp; Viewer) based on the provided mode (`CONFERENCE` &amp; `VIEWER`)
function Container(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  //State to handle the mode of the participant i.e. CONFERENCE or VIEWER
  const [mode, setMode] = useState("CONFERENCE");

  //Getting MeetingId from the API we created earlier

  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };


  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
        //These will be the mode of the participant CONFERENCE or VIEWER
        mode: mode,
      }}
      token={authToken}
    &gt;
      &lt;Container /&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} setMode={setMode} /&gt;
  );
}

export default App;
</code></pre>
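The render logic above boils down to a simple gate: show the meeting UI only when both an auth token and a meeting ID exist, otherwise show the join screen. As a plain-function sketch of the same decision (the function name is ours, not part of the SDK):

```javascript
// Mirrors App's ternary: MeetingProvider tree when both values exist,
// JoinScreen otherwise.
function screenFor({ authToken, meetingId }) {
  return authToken && meetingId ? "MEETING" : "JOIN";
}

console.log(screenFor({ authToken: "token", meetingId: "abcd-efgh-ijkl" })); // "MEETING"
console.log(screenFor({ authToken: "token", meetingId: null })); // "JOIN"
```

Keeping this decision in one place makes it obvious why `getMeetingAndToken` only needs to call `setMeetingId` — the re-render does the routing.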
<h3 id="c-implement-join-screen">(c) Implement Join Screen</h3><p>The join screen will work as a medium to either schedule a new meeting or to join an existing meeting as a host or as a viewer.</p><p>It will have 3 buttons:</p><ol><li>Join as Host: When this button is clicked, the person will join the entered 'meetingId' as 'HOST'.</li><li>Join as Viewer: When this button is clicked, the person will join the entered 'meetingId' as 'VIEWER'.</li><li>Create Studio Room: When this button is clicked, the person will join a new meeting as 'HOST'.</li></ol><pre><code class="language-js">function JoinScreen({ getMeetingAndToken, setMode }) {
  const [meetingVal, setMeetingVal] = useState("");

  const JoinButton = ({ value, onPress }) =&gt; {
    return (
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginVertical: 8,
          borderRadius: 6,
        }}
        onPress={onPress}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          {value}
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    );
  };
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "black",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        placeholderTextColor={"grey"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderColor: "white",
          borderRadius: 6,
          color: "white",
          marginBottom: 16,
        }}
      /&gt;
      &lt;JoinButton
        onPress={() =&gt; {
          getMeetingAndToken(meetingVal);
        }}
        value={"Join as Host"}
      /&gt;
      &lt;JoinButton
        onPress={() =&gt; {
          setMode("VIEWER");
          getMeetingAndToken(meetingVal);
        }}
        value={"Join as Viewer"}
      /&gt;
      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;

      &lt;JoinButton
        onPress={() =&gt; {
          getMeetingAndToken();
        }}
        value={"Create Studio Room"}
      /&gt;
    &lt;/SafeAreaView&gt;
  );
}
</code></pre>
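The placeholder suggests meeting IDs follow an xxxx-xxxx-xxxx shape, so a small client-side check could give the user faster feedback before calling `getMeetingAndToken`. This validator is a hypothetical addition of ours, not part of the VideoSDK API, and the exact ID alphabet is an assumption:

```javascript
// Hypothetical validator for the xxxx-xxxx-xxxx meeting ID format
// shown in the TextInput placeholder; not part of the VideoSDK API.
function looksLikeMeetingId(value) {
  return /^[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}$/i.test(value.trim());
}

console.log(looksLikeMeetingId("abcd-efgh-1234")); // true
console.log(looksLikeMeetingId("not-an-id")); // false
```

You could run this inside the Join as Host / Join as Viewer `onPress` handlers and show an error message instead of firing the API call for an obviously malformed ID.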
<h3 id="d-implement-container-component">(d) Implement Container Component</h3><ul><li>The next step is to create a container that will manage the <code>Join screen</code>, <code>SpeakerView</code> and <code>ViewerView</code> components based on the <code>mode</code>.</li><li>We will check the mode of the <code>localParticipant</code>: if it is <code>CONFERENCE</code>, we will show <code>SpeakerView</code>; otherwise we will show <code>ViewerView</code>.</li></ul><pre><code class="language-js">function Container() {
  const { join, changeWebcam, localParticipant } = useMeeting({
    onError: (error) =&gt; {
      console.log(error.message);
    },
  });

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {localParticipant?.mode == Constants.modes.CONFERENCE ? (
        &lt;SpeakerView /&gt;
      ) : localParticipant?.mode == Constants.modes.VIEWER ? (
        &lt;ViewerView /&gt;
      ) : (
        &lt;View
          style={{
            flex: 1,
            justifyContent: "center",
            alignItems: "center",
            backgroundColor: "black",
          }}
        &gt;
          &lt;Text style={{ fontSize: 20, color: "white" }}&gt;
            Press Join button to enter studio.
          &lt;/Text&gt;
          &lt;Button
            btnStyle={{
              marginTop: 8,
              paddingHorizontal: 22,
              padding: 12,
              borderWidth: 1,
              borderColor: "white",
              borderRadius: 8,
            }}
            buttonText={"Join"}
            onPress={() =&gt; {
              join();
              setTimeout(() =&gt; {
                changeWebcam();
              }, 300);
            }}
          /&gt;
        &lt;/View&gt;
      )}
    &lt;/View&gt;
  );
}

// Common Component which will also be used in Controls Component
const Button = ({ onPress, buttonText, backgroundColor, btnStyle }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        ...btnStyle,
        backgroundColor: backgroundColor,
        padding: 10,
        borderRadius: 8,
      }}
    &gt;
      &lt;Text style={{ color: "white", fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};
</code></pre>
<h3 id="e-implement-speakerview">(e) <strong>Implement SpeakerView</strong></h3><p>The next step is to create the <code>SpeakerView</code> and <code>Controls</code> components to manage features such as join, leave, mute, and unmute.</p><ol><li>We will get all the <code>participants</code> from the <code>useMeeting</code> hook and filter them for the mode set to <code>CONFERENCE</code> so that only speakers are shown on the screen.</li></ol><pre><code class="language-js">function SpeakerView() {
  // Get the Participant Map and meetingId
  const { meetingId, participants } = useMeeting({});

  // For getting speaker participant, we will filter out `CONFERENCE` mode participant
  const speakers = useMemo(() =&gt; {
    const speakerParticipants = [...participants.values()].filter(
      (participant) =&gt; {
        return participant.mode == Constants.modes.CONFERENCE;
      }
    );
    return speakerParticipants;
  }, [participants]);

  return (
    &lt;SafeAreaView style={{ backgroundColor: "black", flex: 1 }}&gt;
      {/* Render Header for copy meetingId and leave meeting*/}
      &lt;HeaderView /&gt;

      {/* Render Participant List */}
      {speakers.length &gt; 0 ? (
        &lt;FlatList
          data={speakers}
          renderItem={({ item }) =&gt; {
            return &lt;ParticipantView participantId={item.id} /&gt;;
          }}
        /&gt;
      ) : null}

      {/* Render Controls */}
      &lt;Controls /&gt;
    &lt;/SafeAreaView&gt;
  );
}

function HeaderView() {
  const { meetingId, leave } = useMeeting();
  return (
    &lt;View
      style={{
        flexDirection: "row",
        padding: 16,
        justifyContent: "space-evenly",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 24, color: "white" }}&gt;{meetingId}&lt;/Text&gt;
      &lt;Button
        btnStyle={{
          borderWidth: 1,
          borderColor: "white",
        }}
        onPress={() =&gt; {
          Clipboard.setString(meetingId);
          alert("MeetingId copied successfully");
        }}
        buttonText={"Copy MeetingId"}
        backgroundColor={"transparent"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      /&gt;
    &lt;/View&gt;
  );
}

function Container(){
  ...

  const mMeeting = useMeeting({
    onMeetingJoined: () =&gt; {
      // We will pin the local participant if they join in CONFERENCE mode
      if (mMeetingRef.current.localParticipant.mode == "CONFERENCE") {
        mMeetingRef.current.localParticipant.pin();
      }
    },
    ...
  });

  // We will create a ref to meeting object so that when used inside the
  // Callback functions, meeting state is maintained
  const mMeetingRef = useRef(mMeeting);
  useEffect(() =&gt; {
    mMeetingRef.current = mMeeting;
  }, [mMeeting]);

  return &lt;&gt;...&lt;/&gt;;
}
</code></pre>
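The `mMeetingRef` mirroring above exists because a callback like `onMeetingJoined` captures the `mMeeting` value from the render in which it was created, while reading through a ref always yields the latest object. A framework-free illustration of the difference (our own sketch, not SDK code):

```javascript
// Simulates a callback capturing a state snapshot vs. reading via a ref.
function demo() {
  let state = { joined: false };
  const ref = { current: state };

  const captured = state; // what a plain closure "remembers"
  const staleRead = () => captured.joined; // sees the old snapshot forever
  const freshRead = () => ref.current.joined; // always sees the latest value

  state = { joined: true }; // a "re-render" produces a new state object
  ref.current = state; // the useEffect above mirrors it into the ref

  return [staleRead(), freshRead()];
}

console.log(demo()); // [ false, true ]
```

This is why the tutorial updates `mMeetingRef.current` in a `useEffect` on every render: the event callbacks registered once at mount still see fresh meeting state.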
<p>2. We will create the <code>ParticipantView</code> to show a participant's media. For this, we will use the <code>webcamStream</code> from the <code>useParticipant</code> hook to play the participant's media.</p><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);
  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}
</code></pre>
<p>3. We will add the <code>Controls</code> component, which allows the speaker to toggle their media and start/stop HLS.</p><pre><code class="language-js">function Controls() {
  const { toggleWebcam, toggleMic, startHls, stopHls, hlsState } = useMeeting(
    {}
  );

  const _handleHLS = async () =&gt; {
    if (!hlsState || hlsState === "HLS_STOPPED") {
      startHls({
        layout: {
          type: "SPOTLIGHT",
          priority: "PIN",
          gridSize: 4,
        },
        theme: "DARK",
        orientation: "portrait",
      });
    } else if (hlsState === "HLS_STARTED" || hlsState === "HLS_PLAYABLE") {
      stopHls();
    }
  };

  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      /&gt;
      {hlsState === "HLS_STARTED" ||
      hlsState === "HLS_STOPPING" ||
      hlsState === "HLS_STARTING" ||
      hlsState === "HLS_PLAYABLE" ? (
        &lt;Button
          onPress={() =&gt; {
            _handleHLS();
          }}
          buttonText={
            hlsState === "HLS_STARTED"
              ? `Live Starting`
              : hlsState === "HLS_STOPPING"
              ? `Live Stopping`
              : hlsState === "HLS_PLAYABLE"
              ? `Stop Live`
              : `Loading...`
          }
          backgroundColor={"#FF5D5D"}
        /&gt;
      ) : (
        &lt;Button
          onPress={() =&gt; {
            _handleHLS();
          }}
          buttonText={`Go Live`}
          backgroundColor={"#1178F8"}
        /&gt;
      )}
    &lt;/View&gt;
  );
}
</code></pre>
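The nested ternary that picks the button label can be hard to scan. The same logic, factored into a small pure function, is easier to read and test; this is a refactor sketch of the code above (`hlsButtonText` is our name, not an SDK API):

```javascript
// Maps the VideoSDK hlsState string to the label rendered on the HLS button,
// mirroring the ternary chain in the Controls component above.
function hlsButtonText(hlsState) {
  switch (hlsState) {
    case "HLS_STARTED":
      return "Live Starting";
    case "HLS_STOPPING":
      return "Live Stopping";
    case "HLS_PLAYABLE":
      return "Stop Live";
    case "HLS_STARTING":
      return "Loading...";
    default:
      return "Go Live"; // HLS_STOPPED or no state yet
  }
}

console.log(hlsButtonText("HLS_PLAYABLE")); // "Stop Live"
console.log(hlsButtonText("HLS_STOPPED")); // "Go Live"
```

Inside `Controls` you could then render `buttonText={hlsButtonText(hlsState)}` and keep the JSX free of branching.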
<h3 id="f-implement-viewerview">(f) Implement ViewerView</h3><p>When the <strong>HOST</strong> (a 'CONFERENCE' mode participant) starts the live stream, the viewer will be able to watch it.</p><p>To implement the player view, we are going to use <a href="https://www.npmjs.com/package/react-native-video">react-native-video</a>, which will play the HLS stream.</p><p>Let's first add the package.</p><ul><li><strong>For NPM:</strong></li></ul><pre><code class="language-bash">npm install react-native-video
</code></pre>
<ul><li><strong>For Yarn:</strong></li></ul><pre><code class="language-bash">yarn add react-native-video
</code></pre>
<p>With <code>react-native-video</code> installed, we will get the <code>hlsUrls</code> and <code>hlsState</code> from the <code>useMeeting</code> hook, which will be used to play the HLS stream in the player.</p><pre><code class="language-js">// Import react-native-video
import Video from "react-native-video";

function ViewerView({}) {
  const { hlsState, hlsUrls } = useMeeting();

  return (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "black" }}&gt;
      {hlsState == "HLS_PLAYABLE" ? (
        &lt;&gt;
          {/* Render Header for copy meetingId and leave meeting*/}
          &lt;HeaderView /&gt;

          {/* Render VideoPlayer that will play `downstreamUrl`*/}
          &lt;Video
            controls={true}
            source={{
              uri: hlsUrls.downstreamUrl,
            }}
            resizeMode={"stretch"}
            style={{
              flex: 1,
              backgroundColor: "black",
            }}
            onError={(e) =&gt; console.log("error", e)}
          /&gt;
        &lt;/&gt;
      ) : (
        &lt;SafeAreaView
          style={{ flex: 1, justifyContent: "center", alignItems: "center" }}
        &gt;
          &lt;Text style={{ fontSize: 20, color: "white" }}&gt;
            HLS is not started yet or is stopped
          &lt;/Text&gt;
        &lt;/SafeAreaView&gt;
      )}
    &lt;/SafeAreaView&gt;
  );
}
</code></pre>
<h2 id="run-your-code-now">Run Your Code Now</h2><pre><code class="language-bash"># For Android
npx react-native run-android

# For iOS
npx react-native run-ios
</code></pre>
<blockquote>
<p>Stuck anywhere? Check out this <a href="https://github.com/videosdk-live/quickstart/tree/main/react-native-hls">example code</a> on GitHub</p>
</blockquote>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/05/Untitled-design.gif" class="kg-image" alt="Build React Native Live Streaming App: Step-by-Step Guide" loading="lazy" width="1152" height="648"/></figure><h2 id="conclusion">Conclusion</h2><p>With this, we have successfully built a React Native live streaming app using VideoSDK. If you wish to add functionalities like chat messaging and screen sharing, you can always check out our <a href="https://docs.videosdk.live/">Documentation</a>. If you face any difficulty with the implementation, you can connect with us on our <a href="https://discord.gg/Gpmj6eCq5u">Discord Community</a>.</p><h2 id="resources">Resources</h2><ul><li><a href="https://www.youtube.com/watch?v=L1x7wtH-ok8">React Interactive Live Streaming with VideoSDK - YouTube</a></li><li><a href="https://www.videosdk.live/blog/how-to-make-a-video-calling-app-using-react-native">Build a React Native Video Calling App with VideoSDK</a></li><li><a href="https://www.videosdk.live/blog/react-native-android-video-calling-app-with-callkeep">Build a React Native Android Video Calling App with CallKeep using 
Firebase and VideoSDK</a></li><li><a href="https://youtu.be/pqg1y3eRyK4">React Native Group Video Calling App Tutorial - YouTube</a></li><li><a href="https://www.videosdk.live/blog/react-native-ios-video-calling-app-with-callkeep">How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1]]></title><description><![CDATA[In this tutorial, you’ll learn how to make a react native video calling app with callkeep using the firebase and video SDK.]]></description><link>https://www.videosdk.live/blog/react-native-android-video-calling-app-with-callkeep</link><guid isPermaLink="false">63bbdc39bd44f53bde5cf599</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Rajan Surani]]></dc:creator><pubDate>Thu, 03 Oct 2024 11:54:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2023/01/Build-a-React-Native-Video-Calling-App-with-Callkeep-using-Firebase-and-Video-SDK--2--1.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Build-a-React-Native-Video-Calling-App-with-Callkeep-using-Firebase-and-Video-SDK--2--1.jpg" alt="Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1"/><p>In a world where we are all connected through phones over audio and video calls, if you are planning to make one such app, you have landed at the right place.</p><p>We will be building a complete video calling app in React Native, which will allow you to make and receive video calls seamlessly. We'll use <a href="https://www.videosdk.live/" rel="noreferrer">VideoSDK</a> for video conferencing and React Native CallKeep to manage the call UI. 
This is a two-part series in which we will first implement CallKeep in Android and then configure and tweak it for iOS.</p><p>Now that all the requirements are well explained, let us dive right into the fun part, but if you are too eager to see the results, here is the <a href="https://appdistribution.firebase.dev/i/e977b56536d45796">link to test the app</a> and the <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-call-trigger-example"><strong>complete code for the app</strong></a><strong>.</strong></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/react-native-ios-video-calling-app-with-callkeep"><div class="kg-bookmark-content"><div class="kg-bookmark-title">How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part 2</div><div class="kg-bookmark-description">In this tutorial, you’ll learn how to make a react native video calling app with callkeep using the firebase and video SDK.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/favicons/android-icon-192x192.png" alt="Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1"><span class="kg-bookmark-author">Rajan Surani</span><span class="kg-bookmark-publisher">more posts</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Build-a-React-Native-Video-Calling-App-with-Callkeep-using-Firebase-and-Video-SDK--Par-2-.jpg" alt="Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1" onerror="this.style.display = 'none'"/></div></a></figure><h2 id="what-is-callkeep">What is CallKeep?</h2><p><a href="https://www.videosdk.live/developer-hub/social/callkeep" rel="noreferrer">CallKeep</a> is a React Native library that allows you to handle the incoming call UI on the Android and iOS device in any given state of the app, 
i.e., foreground (running), background, quit, locked device, etc.</p><p>Before building the app, you should be aware of how the app will function internally, which in turn will help with the easy development process.</p><h3 id="how-will-the-app-function">How will the app function?</h3><p>To better understand how the app functions, let's take a scenario where John wants to call his friend Max. John will start by opening our app, where he will enter Max's caller ID and hit call. Max will see an incoming call UI on his phone, where he can accept or reject the call. Once he accepts the call, we will set up the <a href="https://www.videosdk.live/blog/how-to-make-a-video-calling-app-using-react-native">React Native video call</a> between them using VideoSDK.</p><p>You might think this is super simple. Well, let's elaborate a little more on the nuance of the implementation.</p><ol><li>When John enters Max's Caller ID and hits the Call button, the first thing we do is map it to our Firebase database and send a notification to his device.</li><li>When Max's device receives this notification, our app's logic will show him the incoming call UI using the <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-call-trigger-example">React Native CallKeep library</a>.</li><li>When Max accepts or rejects the incoming call, we will send the status back to John using notifications and eventually start up the video call between them.</li></ol><p>Here is a pictorial representation of the flow for a better understanding.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/call-keep-flow.png" class="kg-image" alt="Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1" loading="lazy" width="1080" height="342"/></figure><p>Now that we have established the flow of the app and how it functions, let's get started with the development without any more chit-chat.</p><h3 
id="core-functionality-of-the-app">Core Functionality of the App</h3><p>First, let's have a look at the set of libraries we will be using to establish the functionalities of the app.</p><ol><li><a href="https://www.npmjs.com/package/react-native-callkeep">React Native CallKeep</a>: This library will help with invoking the incoming call UI on the device.</li><li><a href="https://www.npmjs.com/package/react-native-voip-push-notification">React Native VoIP Push Notification</a>: This library is used to send push notifications on iOS devices, as Firebase notifications do not function well on iOS when the app is in a killed state.</li><li><a href="https://www.npmjs.com/package/videosdk-rn-android-overlay-permission">VideoSDK RN Android Overlay Permission</a>: This library will handle the overlay permission on newer Android versions, making sure the incoming call UI is always visible.</li><li><a href="https://rnfirebase.io/messaging/usage">React Native Firebase Messaging</a>: This library is used for sending and receiving the Firebase notifications that invoke our incoming call UI.</li><li><a href="https://rnfirebase.io/firestore/usage">React Native Firebase Firestore</a>: This library is used for storing the caller ID and device token, which are used for establishing video calls.</li></ol><p>If we look at the development requirements, here is what you will need:</p><ul><li>Node.js v12+</li><li>NPM v6+ (Included with newer Node versions)</li><li>Android Studio and Xcode installed</li><li>A Video SDK Token (<a href="https://app.videosdk.live/api-keys">Dashboard &gt; Api-Key</a>) (<a href="https://youtu.be/RLOA0U62tOc">Video Tutorial</a>)</li><li>A minimum of two physical devices is required to test the calling feature.</li></ul><h2 id="client-side-setup-for-a-react-native-android-app">Client-Side Setup for a React Native Android App</h2><p>Let's start by creating a new React Native app using the command:</p><pre><code 
class="language-bash">npx react-native init VideoSdkCallKeepExample</code></pre><p>Now that our basic app is created, let's start by installing all the dependencies.</p><ol><li>First, we will install <code>@react-navigation/native</code> and its dependencies to provide navigation within the app.</li></ol><pre><code class="language-bash">npm install @react-navigation/native
npm install @react-navigation/stack
npm install react-native-screens react-native-safe-area-context react-native-gesture-handler</code></pre><p>2. Second on our list is the VideoSDK library, which will provide video conferencing in the app.</p><pre><code class="language-bash">npm install "@videosdk.live/react-native-sdk"
npm install "@videosdk.live/react-native-incallmanager"</code></pre><p>3. Next, we will install the Firebase-related dependencies.</p><pre><code class="language-bash">npm install @react-native-firebase/app
npm install @react-native-firebase/messaging
npm install @react-native-firebase/firestore
npm install firebase
</code></pre><p>4. And finally, the React Native CallKeep library and the other libraries required for push notifications and permissions.</p><pre><code class="language-bash">npm install git+https://github.com/react-native-webrtc/react-native-callkeep#4b1fa98a685f6502d151875138b7c81baf1ec680
npm install react-native-voip-push-notification
npm install videosdk-rn-android-overlay-permission
npm install react-native-uuid</code></pre><blockquote>Note: We have put the reference to the React Native CallKeep library using the GitHub repository link, as the NPM version has build issues with Android.</blockquote><p>We are all set up with our dependencies. Let us now start with the Android setup for all the libraries that we have installed.</p><h2 id="react-native-android-setup">React Native Android Setup</h2><h3 id="videosdk-setup">VideoSDK Setup</h3><ol><li>Let's start by adding the required permissions and meta-data in the <code>AndroidManifest.xml</code> file. Below are all the permissions you need to add to <code>android/app/src/main/AndroidManifest.xml</code>:</li></ol><pre><code class="language-xml">&lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
&lt;!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) --&gt;
&lt;uses-permission
                 android:name="android.permission.BLUETOOTH"
                 android:maxSdkVersion="30" /&gt;
&lt;uses-permission
                 android:name="android.permission.BLUETOOTH_ADMIN"
                 android:maxSdkVersion="30" /&gt;

&lt;!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)--&gt;
&lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;
    
&lt;!-- Needed to access Camera and Audio --&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
&lt;uses-permission android:name="android.permission.ACTION_MANAGE_OVERLAY_PERMISSION" /&gt; 
&lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
&lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;  

&lt;application&gt;
    // ...
   	&lt;meta-data android:name="live.videosdk.rnfgservice.notification_channel_name"
      android:value="Meeting Notification"
     /&gt;
    &lt;meta-data android:name="live.videosdk.rnfgservice.notification_channel_description"
    android:value="Whenever meeting started notification will appear."
    /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_color"
    android:resource="@color/red"
    /&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"&gt;&lt;/service&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"&gt;&lt;/service&gt;
    // ...
&lt;/application&gt;</code></pre><p>2. Add the following lines in the app-level build.gradle file at <code>android/app/build.gradle</code>, inside the <code>dependencies {}</code> block.</p><pre><code class="language-groovy">implementation project(':rnfgservice')
implementation project(':rnwebrtc')
implementation project(':rnincallmanager')</code></pre><p>3. Add the following lines in the <code>android/settings.gradle</code> file.</p><pre><code class="language-groovy">include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnincallmanager'
project(':rnincallmanager').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-incallmanager/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')</code></pre><p>4. Update the <code>MainApplication.java</code> with the following packages.</p><pre><code class="language-java">//Add these imports
import live.videosdk.rnfgservice.ForegroundServicePackage; 
import live.videosdk.rnincallmanager.InCallManagerPackage;
import live.videosdk.rnwebrtc.WebRTCModulePackage;

public class MainApplication extends Application implements ReactApplication {
  private static List&lt;ReactPackage&gt; getPackages() {
    @SuppressWarnings("UnnecessaryLocalVariable")
    List&lt;ReactPackage&gt; packages = new PackageList(this).getPackages();
    // Packages that cannot be autolinked yet can be added manually here, for example:
    // packages.add(new MyReactNativePackage());
    
    //Add these packages
    packages.add(new ForegroundServicePackage());
    packages.add(new InCallManagerPackage());
    packages.add(new WebRTCModulePackage());
    return packages;
  }
}</code></pre><p>5. Lastly, register the VideoSDK service to the app in the <code>index.js</code> file.</p><pre><code class="language-js">// Import the library
import { register } from '@videosdk.live/react-native-sdk';

// Register the VideoSDK service
register();
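
// Note (illustrative sketch): register() must run before the root component
// is registered, so a typical index.js looks like the following —
// "YourAppName" is a placeholder for your project's registered component name:
//
//   import { AppRegistry } from "react-native";
//   import App from "./App";
//   AppRegistry.registerComponent("YourAppName", () =&gt; App);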
</code></pre><h3 id="callkeep-setup-for-react-native-android-app">CallKeep Setup for React Native Android App</h3><ol><li>Let's start by adding the required permissions and metadata in the <code>AndroidManifest.xml</code> file. Below are all the permissions you need to add to <code>android/app/src/main/AndroidManifest.xml</code>.</li></ol><pre><code class="language-xml">&lt;!-- Needed for the call-trigger functionality --&gt;
&lt;uses-permission android:name="android.permission.BIND_TELECOM_CONNECTION_SERVICE"/&gt;
&lt;uses-permission android:name="android.permission.READ_PHONE_STATE" /&gt;
&lt;uses-permission android:name="android.permission.CALL_PHONE" /&gt;

&lt;application&gt;
    &lt;!-- ... --&gt;
    
    &lt;activity
        android:name=".MainActivity"
        android:label="@string/app_name"
        android:configChanges="keyboard|keyboardHidden|orientation|screenSize|uiMode"
        android:launchMode="singleTask"
        android:windowSoftInputMode="adjustResize"
        android:exported="true"
        &gt;
        &lt;intent-filter&gt;
            &lt;action android:name="android.intent.action.MAIN" /&gt;
            &lt;category android:name="android.intent.category.LAUNCHER" /&gt;
        &lt;/intent-filter&gt;
        
        
        &lt;!-- Add this intent filter to allow deep linking --&gt;
        &lt;intent-filter&gt;
            &lt;action android:name="android.intent.action.VIEW" /&gt;
            &lt;category android:name="android.intent.category.DEFAULT" /&gt;
            &lt;category android:name="android.intent.category.BROWSABLE" /&gt;
            &lt;data android:scheme="videocalling" /&gt;
          &lt;/intent-filter&gt;
      &lt;/activity&gt;
    
    &lt;service android:name="io.wazo.callkeep.VoiceConnectionService"
        android:label="Wazo"
        android:permission="android.permission.BIND_TELECOM_CONNECTION_SERVICE"
        android:foregroundServiceType="camera|microphone"
        android:exported="true"
    &gt;
        
        &lt;intent-filter&gt;
            &lt;action android:name="android.telecom.ConnectionService" /&gt;
        &lt;/intent-filter&gt;
    &lt;/service&gt;
    &lt;service android:name="io.wazo.callkeep.RNCallKeepBackgroundMessagingService" /&gt;
    &lt;!-- ... --&gt;
&lt;/application&gt;</code></pre><h3 id="firebase-setup-for-react-native-android-app">Firebase Setup for React Native Android App</h3><ol><li>To start, go ahead and create a new Firebase project from here.</li><li>Once the project is created, add your React Native Android app to the Firebase project by clicking on the Android icon.</li><li>Fill in the applicationId for your app in the provided fields and click Register App.</li></ol><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Screenshot-from-2023-01-09-17-09-32.png" class="kg-image" alt="Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1" loading="lazy" width="1440" height="900"/></figure><p>4. Download the <code>google-services.json</code> file and move it to <code>android/app</code></p><p>5. Follow the steps shown to add the Firebase SDK to your Android app.</p><p>6. Create a new web app in your Firebase project that will be used to access the Firebase database.</p><p>7. Add the configuration file shown in the <code>database/firebaseDb.js</code> file in your project.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-24.png" class="kg-image" alt="Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1" loading="lazy" width="1440" height="900"/></figure><p>8. Go to Firebase Firestore in the left panel and create a database, which we will use to store caller IDs.</p><p>9. With these, we are all set with Firebase on Android.</p><h2 id="server-side-setup">Server-Side Setup</h2><p>Now that we have completed the setup for our app, Let us set up the server-side APIs as well. For creating these APIs, we will use Firebase functions. So let's get straight into it.</p><ol><li>Go to Firebase Functions in the left panel. To use Firebase functions, you will need to upgrade to a pay-as-you-go plan. 
Although there is no need to worry about <a href="https://cloud.google.com/functions/pricing">charges</a> if you are just building a hobby project; a generous free quota is available.</li><li>Let's get started with Firebase functions by installing the Firebase CLI using the below command.</li></ol><pre><code class="language-js">npm install -g firebase-tools</code></pre><p>3. Run <code>firebase login</code> to log in via the browser and authenticate the Firebase CLI.</p><p>4. Go to your Firebase project directory.</p><p>5. Run <code>firebase init functions</code> to initialize the Firebase functions project where we will write our APIs. Follow the setup instructions shown in the CLI, and once the process completes, you should see the <code>functions</code> folder created in your directory.</p><p>6. Download the service account key from the project settings and place it at <code>functions/serviceAccountKey.json</code>.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-23.png" class="kg-image" alt="Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1" loading="lazy" width="1366" height="768"/></figure><p>With these, we have completed the setup that we require to run our app.</p><h2 id="app-side-programming">App Side Programming</h2><p>Let's hop over to the React Native side of the code. We will be creating two screens, the first of which is where the user can see their Caller ID and enter the other person's Caller ID to initiate a new call.</p><p>We will be following the folder structure below:</p><pre><code class="language-js">.
└── Root/
    ├── android
    ├── ios
    ├── src/
    │   ├── api/
    │   │   └── api.js
    │   ├── assets/
    │   │   └── Get it from our repository
    │   ├── components/
    │   │   ├── Get it from our repository
    │   ├── navigators/
    │   │   └── screenNames.js
    │   ├── scenes/
    │   │   ├── home/
    │   │   │   └── index.js
    │   │   └── meeting/
    │   │       ├── OneToOne/
    │   │       ├── index.js
    │   │       └── MeetingContainer.js
    │   ├── styles/
    │   │   ├── Get it from our repository
    │   └── utils/
    │       └── incoming-video-call.js
    ├── App.js
    ├── index.js
    └── package.json</code></pre><p>Let's get started with the basic UI of the call-initiating screen.</p><ol><li>To give you a head start, we have already created the basic components we will need, like buttons, text fields, avatars, and icons. You can get direct access to all the <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-call-trigger-example/tree/master/client/src/assets/icons">icons</a> and <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-call-trigger-example/tree/master/client/src/components">components</a> from our <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-call-trigger-example">GitHub repository</a>.</li><li>With our basic components setup, let's add Navigation screens to the app. We will have a Home screen which will have the caller ID input and a call button and a meeting screen which will have the video call.So update the<code>src/navigators/screenNames.js</code> file with the following screen names.</li></ol><pre><code class="language-js">export const SCREEN_NAMES = {
  Home: "homescreen",
  Meeting: "meetingscreen",
};
</code></pre><p>3. Update the App.js file with the Navigation stack.</p><pre><code class="language-js">import React, { useEffect } from "react";
import "react-native-gesture-handler";
import { NavigationContainer } from "@react-navigation/native";
import { createStackNavigator } from "@react-navigation/stack";
import { SCREEN_NAMES } from "./src/navigators/screenNames";
import Meeting from "./src/scenes/meeting";
import { LogBox, Text, Alert } from "react-native";
import Home from "./src/scenes/home";
import RNCallKeep from "react-native-callkeep";
LogBox.ignoreLogs(["Warning: ..."]);
LogBox.ignoreAllLogs();

const { Navigator, Screen } = createStackNavigator();

const linking = {
  prefixes: ["videocalling://"],
  config: {
    screens: {
      meetingscreen: {
        path: `meetingscreen/:token/:meetingId`,
      },
    },
  },
};
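
// With this linking config, an incoming URL such as
//   videocalling://meetingscreen/&lt;token&gt;/&lt;meetingId&gt;
// is routed to the Meeting screen, and the two path segments arrive as
// route.params.token and route.params.meetingId.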

export default function App() {

  return (
    &lt;NavigationContainer linking={linking} fallback={&lt;Text&gt;Loading...&lt;/Text&gt;}&gt;
      &lt;Navigator
        screenOptions={{
          animationEnabled: false,
          presentation: "modal",
        }}
        initialRouteName={SCREEN_NAMES.Home}
      &gt;
        &lt;Screen
          name={SCREEN_NAMES.Meeting}
          component={Meeting}
          options={{ headerShown: false }}
        /&gt;
        &lt;Screen
          name={SCREEN_NAMES.Home}
          component={Home}
          options={{ headerShown: false }}
        /&gt;
      &lt;/Navigator&gt;
    &lt;/NavigationContainer&gt;
  );
}
</code></pre><p>4. With our Navigation stack ready, let us set up the home screen UI.<br>For which you have to update the <code>src/scenes/home/index.js</code></br></p><pre><code class="language-js">import React, { useEffect, useState, useRef } from "react";
import {
  Platform, KeyboardAvoidingView, TouchableWithoutFeedback,
  Keyboard, View, Text, Clipboard, Alert, Linking,
} from "react-native";
import { TouchableOpacity } from "react-native-gesture-handler";
import { CallEnd, Copy } from "../../assets/icons";
import TextInputContainer from "../../components/TextInputContainer";
import colors from "../../styles/colors";
import firestore from "@react-native-firebase/firestore";
import messaging from "@react-native-firebase/messaging";
import Toast from "react-native-simple-toast";
import {
  updateCallStatus, initiateCall,
  getToken, createMeeting,
} from "../../api/api";
import { SCREEN_NAMES } from "../../navigators/screenNames";
import Incomingvideocall from "../../utils/incoming-video-call";

export default function Home({ navigation }) {
  
  //This is the number the user will enter to make a call
  const [number, setNumber] = useState("");
  
  //This will store the details of the user's callerId and FCM token
  const [firebaseUserConfig, setfirebaseUserConfig] = useState(null);
  
  //Used to render the UI conditionally, based on whether the user is making a call or not
  const [isCalling, setisCalling] = useState(false);

  return (
    &lt;KeyboardAvoidingView
      behavior={Platform.OS === "ios" ? "padding" : "height"}
      style={{
        flex: 1,
        backgroundColor: colors.primary["900"],
        justifyContent: "center",
        paddingHorizontal: 42,
      }}
    &gt;
      {!isCalling ? (
        &lt;TouchableWithoutFeedback onPress={Keyboard.dismiss}&gt;
        {/*CALLER ID and Call Option UI*/}
        &lt;/TouchableWithoutFeedback&gt;
      ) : (
        &lt;View style={{ flex: 1, justifyContent: "space-around" }}&gt;
          {/*OUT GOING CALL UI*/}
        &lt;/View&gt;
      )}
    &lt;/KeyboardAvoidingView&gt;
  );
}
</code></pre><p>With the states and bare screen setup, let's first add the UI where the user will be able to see his caller ID and have the option to call another person.</p><pre><code class="language-js">{/*CALLER ID and Call Option UI*/}
&lt;&gt;
  &lt;View
    style={{
      padding: 35,
      backgroundColor: "#1A1C22",
      justifyContent: "center",
      alignItems: "center",
      borderRadius: 14,
    }}
  &gt;
    &lt;Text
      style={{
        fontSize: 18,
        color: "#D0D4DD",
        fontFamily: ROBOTO_FONTS.Roboto,
      }}
    &gt;
      Your Caller ID
    &lt;/Text&gt;
    &lt;View
      style={{
        flexDirection: "row",
        marginTop: 12,
        alignItems: "center",
      }}
    &gt;
      &lt;Text
        style={{
          fontSize: 32,
          color: "#ffff",
          letterSpacing: 8,
          fontFamily: ROBOTO_FONTS.Roboto,
        }}
      &gt;
        {firebaseUserConfig
          ? firebaseUserConfig.callerId
          : "Loading.."}
      &lt;/Text&gt;
      &lt;TouchableOpacity
        style={{
          height: 30,
          aspectRatio: 1,
          backgroundColor: "#2B3034",
          marginLeft: 12,
          justifyContent: "center",
          alignItems: "center",
          borderRadius: 4,
        }}
        onPress={() =&gt; {
          Clipboard.setString(
            firebaseUserConfig &amp;&amp; firebaseUserConfig.callerId
          );
          if (Platform.OS === "android") {
            Toast.show("Copied");
            Alert.alert(
              "Information",
              "This callerId will be unavailable, once you uninstall the App."
            );
          }
        }}
      &gt;
        &lt;Copy fill={colors.primary[100]} width={16} height={16} /&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/View&gt;
  &lt;/View&gt;

  &lt;View
    style={{
      backgroundColor: "#1A1C22",
      padding: 40,
      marginTop: 25,
      justifyContent: "center",
      borderRadius: 14,
    }}
  &gt;
    &lt;Text
      style={{
        fontSize: 18,
        color: "#D0D4DD",
        fontFamily: ROBOTO_FONTS.Roboto,
      }}
    &gt;
      Enter call id of another user
    &lt;/Text&gt;
    &lt;TextInputContainer
      placeholder={"Enter Caller ID"}
      value={number}
      setValue={setNumber}
      keyboardType={"number-pad"}
    /&gt;
    &lt;TouchableOpacity
      onPress={async () =&gt; {
        if (number) {
          const data = await getCallee(number);
          if (data) {
            if (data.length === 0) {
              Toast.show("CallerId Does not Match");
            } else {
              Toast.show("CallerId Match!");
              const { token, platform, APN } = data[0]?.data();
              initiateCall({
                callerInfo: {
                  name: "Person A",
                  ...firebaseUserConfig,
                },
                calleeInfo: {
                  token,
                  platform,
                  APN,
                },
                videoSDKInfo: {
                  token: videosdkTokenRef.current,
                  meetingId: videosdkMeetingRef.current,
                },
              });
              setisCalling(true);
            }
          }
        } else {
          Toast.show("Please provide CallerId");
        }
      }}
      style={{
        height: 50,
        backgroundColor: "#5568FE",
        justifyContent: "center",
        alignItems: "center",
        borderRadius: 12,
        marginTop: 16,
      }}
    &gt;
      &lt;Text
        style={{
          fontSize: 16,
          color: "#FFFFFF",
        }}
      &gt;
        Call Now
      &lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  &lt;/View&gt;
&lt;/&gt;</code></pre><p>Now we will add the UI for Outgoing calls which will show the Caller ID and end call option.</p><pre><code class="language-js">{/*OUT GOING CALL*/}
&lt;View
  style={{
    padding: 35,
    justifyContent: "center",
    alignItems: "center",
    borderRadius: 14,
  }}
&gt;
  &lt;Text
    style={{
      fontSize: 16,
      color: "#D0D4DD",
      fontFamily: ROBOTO_FONTS.Roboto,
    }}
  &gt;
    Calling to...
  &lt;/Text&gt;

  &lt;Text
    style={{
      fontSize: 36,
      marginTop: 12,
      color: "#ffff",
      letterSpacing: 8,
      fontFamily: ROBOTO_FONTS.Roboto,
    }}
  &gt;
    {number}
  &lt;/Text&gt;
&lt;/View&gt;
&lt;View
  style={{
    justifyContent: "center",
    alignItems: "center",
  }}
&gt;
  &lt;TouchableOpacity
    onPress={async () =&gt; {
      const data = await getCallee(number);
      if (data) {
        updateCallStatus({
          callerInfo: data[0]?.data(),
          type: "DISCONNECT",
        });
        setisCalling(false);
      }
    }}
    style={{
      backgroundColor: "#FF5D5D",
      borderRadius: 30,
      height: 60,
      aspectRatio: 1,
      justifyContent: "center",
      alignItems: "center",
    }}
  &gt;
    &lt;CallEnd width={50} height={12} /&gt;
  &lt;/TouchableOpacity&gt;
&lt;/View&gt;</code></pre><blockquote>Don't worry if you see errors popping up, as we will be adding the methods soon.</blockquote><p>You will come across the following methods in the above code:</p><ul><li><code>getCallee()</code>: used to get the details of the user you are trying to initiate a call with.</li><li><code>initiateCall()</code>: used to send a notification to the receiving user and start the call.</li><li><code>updateCallStatus()</code>: used to update the status of the incoming call (accepted, rejected, etc.).</li></ul><p>5. With the UI for calling in place, let's start with the actual calling development.</p><p>This is how the UI will look:</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/home-screen--2-.jpg" class="kg-image" alt="Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1" loading="lazy" width="185" height="400"/></figure><h3 id="firebase-messaging-to-initiate-calls">Firebase messaging to initiate calls</h3><p>The first step in establishing the call is to identify each user and get the messaging token for the user, which will allow us to send them notifications.</p><ol><li>On the home page of our app, we will get the Firebase messaging token. Using this token, we will query the Firestore database to see if the user is present in the database or not. If the user is present, we will update the <code>firebaseUserConfig</code> state in the app; otherwise, we will register the user in the database and update that state.</li></ol><pre><code class="language-js">  useEffect(() =&gt; {
    async function getFCMtoken() {
      const authStatus = await messaging().requestPermission();
      const enabled =
        authStatus === messaging.AuthorizationStatus.AUTHORIZED ||
        authStatus === messaging.AuthorizationStatus.PROVISIONAL;

      if (enabled) {
        const token = await messaging().getToken();
        const querySnapshot = await firestore()
          .collection("users")
          .where("token", "==", token)
          .get();

        const uids = querySnapshot.docs.map((doc) =&gt; {
          if (doc &amp;&amp; doc?.data()?.callerId) {
            const { token, platform, callerId } = doc?.data();
            setfirebaseUserConfig({
              callerId,
              token,
              platform,
            });
          }
          return doc;
        });

        if (uids &amp;&amp; uids.length === 0) {
          addUser({ token });
        } else {
          console.log("Token Found");
        }
      }
    }

    getFCMtoken();
  }, []);

const addUser = ({ token }) =&gt; {
    const platform = Platform.OS === "android" ? "ANDROID" : "iOS";
    const obj = {
      callerId: Math.floor(10000000 + Math.random() * 90000000).toString(),
      token,
      platform,
    };
    firestore()
      .collection("users")
      .add(obj)
      .then(() =&gt; {
        setfirebaseUserConfig(obj);
        console.log("User added!");
      });
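
    // Note: callerId above is a random 8-digit number (10000000–99999999).
    // This sketch does not handle the unlikely collision where two users draw
    // the same ID; a production app should check Firestore and retry.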
  };</code></pre><p>2. We will set up the VideoSDK token and Meeting ID when the home screen loads so that we have them ready when the user wants to start the call.</p><pre><code class="language-js">  
  const [videosdkToken, setVideosdkToken] = useState(null);
  const [videosdkMeeting, setVideosdkMeeting] = useState(null);
  
  const videosdkTokenRef = useRef();
  const videosdkMeetingRef = useRef();
  videosdkTokenRef.current = videosdkToken;
  videosdkMeetingRef.current = videosdkMeeting;
  
  useEffect(() =&gt; {
    async function getTokenAndMeetingId() {
      const videoSDKtoken = getToken();
      const videoSDKMeetingId = await createMeeting({ 
      	token: videoSDKtoken
      });
      setVideosdkToken(videoSDKtoken);
      setVideosdkMeeting(videoSDKMeetingId);
    }
    getTokenAndMeetingId();
  }, []);
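
  // The refs above mirror the state so that long-lived callbacks (e.g. the
  // FCM onMessage handler registered on mount) always read the freshest
  // token/meetingId through videosdkTokenRef.current and
  // videosdkMeetingRef.current, rather than stale closed-over state.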
</code></pre><p>3. We have to create the <code>getToken()</code> and <code>createMeeting()</code> used in the above step in the <code>src/api/api.js</code> file.</p><pre><code class="language-js">const API_BASE_URL = "https://api.videosdk.live/v2";
const VIDEOSDK_TOKEN = "UPDATE YOUR VIDEOSDK TOKEN HERE WHICH YOU GENERATED FROM DASHBOARD ";

export const getToken = () =&gt; {
  return VIDEOSDK_TOKEN;
};

export const createMeeting = async ({ token }) =&gt; {
  const url = `${API_BASE_URL}/rooms`;
  const options = {
    method: "POST",
    headers: { Authorization: token, "Content-Type": "application/json" },
  };

  const { roomId } = await fetch(url, options)
    .then((response) =&gt; response.json())
    .catch((error) =&gt; console.error("error", error));

  return roomId;
};
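
// Hypothetical usage sketch (requires a valid VIDEOSDK_TOKEN above):
//   const token = getToken();
//   const meetingId = await createMeeting({ token }); // resolves to the new roomId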
</code></pre><p>4. The next step is to initiate the call. To achieve that, we will have to create two APIs as Firebase functions that will trigger notifications on the other device and update the status of the call, whether it was accepted or rejected.</p><p>Start by updating <code>functions/index.js</code> with the basic Express server setup.</p><pre><code class="language-js">const functions = require("firebase-functions");
const express = require("express");
const cors = require("cors");
const morgan = require("morgan");
var fcm = require("fcm-notification");
var FCM = new fcm("./serviceAccountKey.json");
const app = express();
const { v4: uuidv4 } = require("uuid");

app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(morgan("dev"));


app.get("/", (req, res) =&gt; {
  res.send("Hello World!");
});

app.listen(9000, () =&gt; {
  console.log(`API server listening at http://localhost:9000`);
});

exports.app = functions.https.onRequest(app);
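
// Note: once deployed, Firebase invokes the Express app directly through
// functions.https.onRequest(app); the app.listen(9000) call above is only
// used when running this file as a plain Node server locally.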
</code></pre><ul><li>The first API we need is <code>initiate-call</code>, which will be used to send a notification to the receiving user and start the call by sending details like caller information and VideoSDK room details.</li></ul><pre><code class="language-js">app.post("/initiate-call", (req, res) =&gt; {
  const { calleeInfo, callerInfo, videoSDKInfo } = req.body;

  if (calleeInfo.platform === "ANDROID") {
    var FCMtoken = calleeInfo.token;
    const info = JSON.stringify({
      callerInfo,
      videoSDKInfo,
      type: "CALL_INITIATED",
    });
    var message = {
      data: {
        info,
      },
      android: {
        priority: "high",
      },
      token: FCMtoken,
    };
    FCM.send(message, function (err, response) {
      if (err) {
        // FCM failed to deliver: report the error to the caller
        res.status(400).send(err);
      } else {
        res.status(200).send(response);
      }
    });
  } else {
    res.status(400).send("Not supported platform");
  }
});</code></pre><ul><li>The second API we need is <code>update-call</code>, which will update the status of the incoming call (accepted, rejected, etc.) and send a notification to the caller.</li></ul><pre><code class="language-js">app.post("/update-call", (req, res) =&gt; {
  const { callerInfo, type } = req.body;
  const info = JSON.stringify({
    callerInfo,
    type,
  });

  var message = {
    data: {
      info,
    },
    apns: {
      headers: {
        "apns-priority": "10",
      },
      payload: {
        aps: {
          badge: 1,
        },
      },
    },
    token: callerInfo.token,
  };

  FCM.send(message, function (err, response) {
    if (err) {
      // FCM failed to deliver: report the error to the caller
      res.status(400).send(err);
    } else {
      res.status(200).send(response);
    }
  });
});</code></pre><p>5. Now that the APIs are created, we will trigger them from the app. Update <code>src/api/api.js</code> with the following API calls.</p><p>Here the <code>FCM_SERVER_URL</code> needs to be updated with the URL of your Firebase functions.</p><p>You will get this URL when you deploy the functions, or when you run them locally using <code>npm run serve</code>.</p><pre><code class="language-js">const FCM_SERVER_URL = "YOUR_FCM_URL";

export const initiateCall = async ({
  callerInfo,
  calleeInfo,
  videoSDKInfo,
}) =&gt; {
  await fetch(`${FCM_SERVER_URL}/initiate-call`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      callerInfo,
      calleeInfo,
      videoSDKInfo,
    }),
  })
    .then((response) =&gt; {
      console.log(" RESP", response);
    })
    .catch((error) =&gt; console.error("error", error));
};

export const updateCallStatus = async ({ callerInfo, type }) =&gt; {
  await fetch(`${FCM_SERVER_URL}/update-call`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      callerInfo,
      type,
    }),
  })
    .then((response) =&gt; {
      console.log("##RESP", response);
    })
    .catch((error) =&gt; console.error("error", error));
};
</code></pre><p>6. Notification sending is now configured. Next, we have to invoke the call UI when a notification is received; this is where React Native CallKeep comes into play.</p><h3 id="integration-of-call-keep-services">Integration of Call-Keep Services</h3><ol><li>Before initiating the call, we will have to ask for a few permissions and also set up React Native CallKeep. To do so, update <code>App.js</code> with the following code:</li></ol><pre><code class="language-js">  useEffect(() =&gt; {
    const options = {
      ios: {
        appName: "VideoSDK",
      },
      android: {
        alertTitle: "Permissions required",
        alertDescription:
          "This application needs to access your phone accounts",
        cancelButton: "Cancel",
        okButton: "ok",
        imageName: "phone_account_icon",
      },
    };
    RNCallKeep.setup(options);
    RNCallKeep.setAvailable(true);

    if (Platform.OS === "android") {
      OverlayPermissionModule.requestOverlayPermission();
    }
  }, []);</code></pre><p>This will ask for the overlay permission on Android devices and also set up the CallKeep library. Here is the reference for <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-call-trigger-example#step-3-allow-calling-and-overlay-permissions">how to grant these permissions.</a></p><p>2. You might remember that we set up the app to send message notifications but did not add any listeners for them. So let's add those listeners and show the call UI when a notification is received.</p><p>Update the <code>utils/incoming-video-call.js</code> file, which will handle all the functionality related to the incoming call.</p><pre><code class="language-js">import { Platform } from "react-native";
import RNCallKeep from "react-native-callkeep";
import uuid from "react-native-uuid";

class IncomingCall {
  constructor() {
    this.currentCallId = null;
  }

  configure = (incomingcallAnswer, endIncomingCall) =&gt; {
    try {
      this.setupCallKeep();
      Platform.OS === "android" &amp;&amp; RNCallKeep.setAvailable(true);
      RNCallKeep.addEventListener("answerCall", incomingcallAnswer);
      RNCallKeep.addEventListener("endCall", endIncomingCall);
    } catch (error) {
      console.error("initializeCallKeep error:", error?.message);
    }
  };

  //This method will set up CallKeep.
  setupCallKeep = () =&gt; {
    try {
      RNCallKeep.setup({
        ios: {
          appName: "VideoSDK",
          supportsVideo: false,
          maximumCallGroups: "1",
          maximumCallsPerCallGroup: "1",
        },
        android: {
          alertTitle: "Permissions required",
          alertDescription:
            "This application needs to access your phone accounts",
          cancelButton: "Cancel",
          okButton: "Ok",
        },
      });
    } catch (error) {
      console.error("initializeCallKeep error:", error?.message);
    }
  };
  
  // Use startCall to ask the system to start a call - Initiate an outgoing call from this point
  startCall = ({ handle, localizedCallerName }) =&gt; {
    // Your normal start call action
    RNCallKeep.startCall(this.getCurrentCallId(), handle, localizedCallerName);
  };

  reportEndCallWithUUID = (callUUID, reason) =&gt; {
    RNCallKeep.reportEndCallWithUUID(callUUID, reason);
  };

  //This method will end the incoming call
  endIncomingcallAnswer = () =&gt; {
    RNCallKeep.endCall(this.currentCallId);
    this.currentCallId = null;
    this.removeEvents();
  };

  //This method will remove all the event listeners
  removeEvents = () =&gt; {
    RNCallKeep.removeEventListener("answerCall");
    RNCallKeep.removeEventListener("endCall");
  };

  //This method will display the incoming call
  displayIncomingCall = (callerName) =&gt; {
    Platform.OS === "android" &amp;&amp; RNCallKeep.setAvailable(false);
    RNCallKeep.displayIncomingCall(
      this.getCurrentCallId(),
      callerName,
      callerName,
      "number",
      true,
      null
    );
  };

  //Bring the app to foreground
  backToForeground = () =&gt; {
    RNCallKeep.backToForeground();
  };

  //Return the ID of current Call
  getCurrentCallId = () =&gt; {
    if (!this.currentCallId) {
      this.currentCallId = uuid.v4();
    }
    return this.currentCallId;
  };

  //This method will end all calls
  endAllCall = () =&gt; {
    RNCallKeep.endAllCalls();
    this.currentCallId = null;
    this.removeEvents();
  };
  
}

const Incomingvideocall = new IncomingCall();
export default Incomingvideocall;
</code></pre><blockquote>Note: Check the code comments to learn about the function of each method.</blockquote><p>3. We have to add the Firebase notification listener, which will invoke CallKeep to handle the call UI. We can do this by adding the following code in <code>src/home/index.js</code>.</p><pre><code class="language-js">
  useEffect(() =&gt; {
    const unsubscribe = messaging().onMessage((remoteMessage) =&gt; {
      const { callerInfo, videoSDKInfo, type } = JSON.parse(
        remoteMessage.data.info
      );
      switch (type) {
        case "CALL_INITIATED":
          const incomingCallAnswer = ({ callUUID }) =&gt; {
            updateCallStatus({
              callerInfo,
              type: "ACCEPTED",
            });
            Incomingvideocall.endIncomingcallAnswer(callUUID);
            setisCalling(false);
            Linking.openURL(
              `videocalling://meetingscreen/${videoSDKInfo.token}/${videoSDKInfo.meetingId}`
            ).catch((err) =&gt; {
              Toast.show(`Error`, err);
            });
          };

          const endIncomingCall = () =&gt; {
            Incomingvideocall.endIncomingcallAnswer();
            updateCallStatus({ callerInfo, type: "REJECTED" });
          };

          Incomingvideocall.configure(incomingCallAnswer, endIncomingCall);
          Incomingvideocall.displayIncomingCall(callerInfo.name);

          break;
        case "ACCEPTED":
          setisCalling(false);
          navigation.navigate(SCREEN_NAMES.Meeting, {
            name: "Person B",
            token: videosdkTokenRef.current,
            meetingId: videosdkMeetingRef.current,
          });
          break;
        case "REJECTED":
          Toast.show("Call Rejected");
          setisCalling(false);
          break;
        case "DISCONNECT":
          Platform.OS === "ios"
            ? Incomingvideocall.endAllCall()
            : Incomingvideocall.endIncomingcallAnswer();
          break;
        default:
          Toast.show("Call could not be placed");
      }
    });

    return () =&gt; {
      unsubscribe();
    };
  }, []);
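
  // NOTE (illustrative): `updateCallStatus` used above is defined elsewhere in
  // this project, outside the snippet shown here. Conceptually it relays the
  // new call state to the other peer, e.g. via your own FCM-backed endpoint:
  //
  //   updateCallStatus({ callerInfo, type: "ACCEPTED" })
  //     --&gt; POST { callerInfo, type } to your backend
  //     --&gt; backend sends an FCM data message that the switch above handles.
  //
  // The endpoint and payload shape here are assumptions for illustration only.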
  
 //Used to get the details of the user you are trying to initiate a call with.
  const getCallee = async (num) =&gt; {
    const querySnapshot = await firestore()
      .collection("users")
      .where("callerId", "==", num.toString())
      .get();
    return querySnapshot.docs.map((doc) =&gt; {
      return doc;
    });
  };</code></pre><p>4. With the above code in place, the call UI works as expected while the app is in the foreground, but not while it is in the background. To handle the background case, register a background listener for notifications by adding the following code in the <code>index.js</code> file of your project:</p><pre><code class="language-js">
const firebaseListener = async (remoteMessage) =&gt; {
  const { callerInfo, videoSDKInfo, type } = JSON.parse(
    remoteMessage.data.info
  );

  if (type === "CALL_INITIATED") {
    const incomingCallAnswer = ({ callUUID }) =&gt; {
      Incomingvideocall.backToForeground();
      updateCallStatus({
        callerInfo,
        type: "ACCEPTED",
      });
      Incomingvideocall.endIncomingcallAnswer(callUUID);
      Linking.openURL(
        `videocalling://meetingscreen/${videoSDKInfo.token}/${videoSDKInfo.meetingId}`
      ).catch((err) =&gt; {
        Toast.show(`Error: ${err}`);
      });
    };

    const endIncomingCall = () =&gt; {
      Incomingvideocall.endIncomingcallAnswer();
      updateCallStatus({ callerInfo, type: "REJECTED" });
    };

    Incomingvideocall.configure(incomingCallAnswer, endIncomingCall);
    Incomingvideocall.displayIncomingCall(callerInfo.name);
    Incomingvideocall.backToForeground();
  }
};

// Register background handler
messaging().setBackgroundMessageHandler(firebaseListener);</code></pre><p>Here is how the incoming and outgoing calls will look:</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Untitled-design--1-.png" class="kg-image" alt="Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1" loading="lazy" width="1920" height="1080"/></figure><blockquote>Wow!! You just implemented the calling feature, which works like a charm. </blockquote><blockquote>But without a video call, it still feels incomplete. Well, for that, we have VideoSDK, which we will implement in the upcoming steps.</blockquote><h3 id="videosdk-integration">VideoSDK Integration</h3><ol><li>We will show the video call on the meeting screen we created earlier. This screen shows a waiting room until the meeting is joined; after that, it shows the remote participant in a large view and the local participant in a mini view, along with three buttons to toggle the mic, toggle the webcam, and leave the call.</li></ol><pre><code class="language-js">.
└── scenes/
    ├── home/
    └── meeting/
        ├── OneToOne/
        │   ├── LargeView/
        │   │   └── index.js
        │   ├── MiniView/
        │   │   └── index.js
        │   └── index.js
        ├── index.js
        └── MeetingContainer.js</code></pre><p>2. The first step in integrating VideoSDK is adding the <code>MeetingProvider</code> in <code>src/scenes/meeting/index.js</code>, which initializes the meeting and joins it.</p><pre><code class="language-js">import React from "react";
import { Platform, SafeAreaView } from "react-native";
import colors from "../../styles/colors";
import {
  MeetingConsumer,
  MeetingProvider,
} from "@videosdk.live/react-native-sdk";
import MeetingContainer from "./MeetingContainer";
import { SCREEN_NAMES } from "../../navigators/screenNames";
import IncomingVideoCall from "../../utils/incoming-video-call";

export default function ({ navigation, route }) {
  const token = route.params.token;
  const meetingId = route.params.meetingId;
  const micEnabled = route.params.micEnabled ? route.params.micEnabled : true;
  const webcamEnabled = route.params.webcamEnabled
    ? route.params.webcamEnabled
    : true;
  const name = route.params.name;

  return (
    &lt;SafeAreaView
      style={{ flex: 1, backgroundColor: colors.primary[900], padding: 12 }}
    &gt;
      &lt;MeetingProvider
        config={{
          meetingId: meetingId,
          micEnabled: micEnabled,
          webcamEnabled: webcamEnabled,
          name: name,
          notification: {
            title: "Video SDK Meeting",
            message: "Meeting is running.",
          },
        }}
        token={token}
      &gt;
        &lt;MeetingConsumer
          {...{
            onMeetingLeft: () =&gt; {
              Platform.OS === "ios" &amp;&amp; IncomingVideoCall.endAllCall();
              navigation.navigate(SCREEN_NAMES.Home);
            },
          }}
        &gt;
          {() =&gt; {
            return &lt;MeetingContainer webcamEnabled={webcamEnabled} /&gt;;
          }}
        &lt;/MeetingConsumer&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  );
}
</code></pre><p>3. Here we used the <code>MeetingContainer</code> component, which holds the different layouts for our meeting: a Waiting to Join view before the meeting is joined, and the complete meeting view once it is joined.</p><pre><code class="language-js">import {
  useMeeting,
  ReactNativeForegroundService,
} from "@videosdk.live/react-native-sdk";
import { useEffect, useState } from "react";
import OneToOneMeetingViewer from "./OneToOne";
import WaitingToJoinView from "./Components/WaitingToJoinView";
import React from "react";
import { convertRFValue } from "../../../styles/spacing";
import { Text, View } from "react-native";
import colors from "../../../styles/colors";


export default function MeetingContainer({ webcamEnabled }) {
  const [isJoined, setJoined] = useState(false);
  
  const { join, changeWebcam, participants, leave } = useMeeting({
    onMeetingJoined: () =&gt; {
      setTimeout(() =&gt; {
        setJoined(true);
      }, 500);
    },
  });

  useEffect(() =&gt; {
    setTimeout(() =&gt; {
      if (!isJoined) {
        join();
        if (webcamEnabled) changeWebcam();
      }
    }, 1000);

    return () =&gt; {
      leave();
      ReactNativeForegroundService.stopAll();
    };
  }, []);

  return isJoined ? (
    &lt;OneToOneMeetingViewer /&gt;
  ) : (
    &lt;View
      style={{
        flexDirection: "column",
        justifyContent: "center",
        alignItems: "center",
        height: "100%",
        width: "100%",
      }}
    &gt;
      &lt;Text
        style={{
          fontSize: convertRFValue(18),
          color: colors.primary[100],
          marginTop: 28,
        }}
      &gt;
        Creating a room
      &lt;/Text&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>4. Next, we will add our meeting view, which shows the control buttons and participant views, in <code>src/scenes/meeting/OneToOne/index.js</code>:</p><pre><code class="language-js">import React from "react";
import {
  View, Text,Clipboard, TouchableOpacity, ActivityIndicator,
} from "react-native";
import { useMeeting } from "@videosdk.live/react-native-sdk";
import { 
	CallEnd, CameraSwitch, Copy, MicOff, MicOn, VideoOff, VideoOn,
} from "../../../assets/icons";
import colors from "../../../styles/colors";
import IconContainer from "../../../components/IconContainer";
import LocalViewContainer from "./LocalViewContainer";
import LargeView from "./LargeView";
import MiniView from "./MiniView";
import Toast from "react-native-simple-toast";

export default function OneToOneMeetingViewer() {
  const {
    participants,
    localWebcamOn,
    localMicOn,
    leave,
    changeWebcam,
    toggleWebcam,
    toggleMic,
    meetingId,
  } = useMeeting({
    onError: (data) =&gt; {
      const { code, message } = data;
      Toast.show(`Error: ${code}: ${message}`);
    },
  });

  const participantIds = [...participants.keys()];

  const participantCount = participantIds ? participantIds.length : null;

  return (
    &lt;&gt;
      &lt;View
        style={{
          flexDirection: "row",
          alignItems: "center",
          width: "100%",
        }}
      &gt;
        &lt;View
          style={{
            flex: 1,
            justifyContent: "space-between",
          }}
        &gt;
          &lt;View style={{ flexDirection: "row" }}&gt;
            &lt;Text
              style={{
                fontSize: 16,
                color: colors.primary[100],
              }}
            &gt;
              {meetingId ? meetingId : "xxx - xxx - xxx"}
            &lt;/Text&gt;

            &lt;TouchableOpacity
              style={{
                justifyContent: "center",
                marginLeft: 10,
              }}
              onPress={() =&gt; {
                Clipboard.setString(meetingId);
                Toast.show("Meeting Id copied Successfully");
              }}
            &gt;
              &lt;Copy fill={colors.primary[100]} width={18} height={18} /&gt;
            &lt;/TouchableOpacity&gt;
          &lt;/View&gt;
        &lt;/View&gt;
        &lt;View&gt;
          &lt;TouchableOpacity
            onPress={() =&gt; {
              changeWebcam();
            }}
          &gt;
            &lt;CameraSwitch height={26} width={26} fill={colors.primary[100]} /&gt;
          &lt;/TouchableOpacity&gt;
        &lt;/View&gt;
      &lt;/View&gt;
      {/* Center */}
      &lt;View style={{ flex: 1, marginTop: 8, marginBottom: 12 }}&gt;
        {participantCount &gt; 1 ? (
          &lt;&gt;
            &lt;LargeView participantId={participantIds[1]} /&gt;
            &lt;MiniView participantId={participantIds[0]} /&gt;
          &lt;/&gt;
        ) : participantCount === 1 ? (
          &lt;LargeView participantId={participantIds[0]} /&gt;
        ) : (
          &lt;View
            style={{ flex: 1, justifyContent: "center", alignItems: "center" }}
          &gt;
            &lt;ActivityIndicator size={"large"} /&gt;
          &lt;/View&gt;
        )}
      &lt;/View&gt;
      {/* Bottom */}
      &lt;View
        style={{
          flexDirection: "row",
          justifyContent: "space-evenly",
        }}
      &gt;
        &lt;IconContainer
          backgroundColor={"red"}
          onPress={() =&gt; {
            leave();
          }}
          Icon={() =&gt; {
            return &lt;CallEnd height={26} width={26} fill="#FFF" /&gt;;
          }}
        /&gt;
        &lt;IconContainer
          style={{
            borderWidth: 1.5,
            borderColor: "#2B3034",
          }}
          backgroundColor={!localMicOn ? colors.primary[100] : "transparent"}
          onPress={() =&gt; {
            toggleMic();
          }}
          Icon={() =&gt; {
            return localMicOn ? (
              &lt;MicOn height={24} width={24} fill="#FFF" /&gt;
            ) : (
              &lt;MicOff height={28} width={28} fill="#1D2939" /&gt;
            );
          }}
        /&gt;
        &lt;IconContainer
          style={{
            borderWidth: 1.5,
            borderColor: "#2B3034",
          }}
          backgroundColor={!localWebcamOn ? colors.primary[100] : "transparent"}
          onPress={() =&gt; {
            toggleWebcam();
          }}
          Icon={() =&gt; {
            return localWebcamOn ? (
              &lt;VideoOn height={24} width={24} fill="#FFF" /&gt;
            ) : (
              &lt;VideoOff height={36} width={36} fill="#1D2939" /&gt;
            );
          }}
        /&gt;
      &lt;/View&gt;
    &lt;/&gt;
  );
}
</code></pre><p>5. Here we show the participants in two different views: when there is only one participant, the local participant fills the screen; when there are two participants, the local participant moves to the MiniView.</p><p>To achieve this, create the following two components:</p><p>a. <code>src/scenes/meeting/OneToOne/LargeView/index.js</code></p><pre><code class="language-js">import { useParticipant, RTCView, MediaStream } from "@videosdk.live/react-native-sdk";
import React, { useEffect } from "react";
import { View } from "react-native";
import colors from "../../../../styles/colors";
import Avatar from "../../../../components/Avatar";

export default LargeViewContainer = ({ participantId }) =&gt; {
  const { webcamOn, webcamStream, displayName, setQuality, isLocal } =
    useParticipant(participantId, {});

  useEffect(() =&gt; {
    setQuality("high");
  }, []);

  return (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: colors.primary[800],
        borderRadius: 12,
        overflow: "hidden",
      }}
    &gt;
      {webcamOn &amp;&amp; webcamStream ? (
        &lt;RTCView
          objectFit={'cover'}
          mirror={isLocal ? true : false}
          style={{ flex: 1, backgroundColor: "#424242" }}
          streamURL={new MediaStream([webcamStream.track]).toURL()}
        /&gt;
      ) : (
        &lt;Avatar
          containerBackgroundColor={colors.primary[800]}
          fullName={displayName}
          fontSize={26}
          style={{
            backgroundColor: colors.primary[700],
            height: 70,
            aspectRatio: 1,
            borderRadius: 40,
          }}
        /&gt;
      )}
    &lt;/View&gt;
  );
};
</code></pre><p>b. <code>src/scenes/meeting/OneToOne/MiniView/index.js</code></p><pre><code class="language-js">import { useParticipant, RTCView, MediaStream } from "@videosdk.live/react-native-sdk";
import React, { useEffect } from "react";
import { View } from "react-native";
import Avatar from "../../../../components/Avatar";
import colors from "../../../../styles/colors";

export default MiniViewContainer = ({ participantId }) =&gt; {
  const { webcamOn, webcamStream, displayName, setQuality, isLocal } =
    useParticipant(participantId, {});

  useEffect(() =&gt; {
    setQuality("high");
  }, []);

  return (
    &lt;View
      style={{
        position: "absolute",
        bottom: 10,
        right: 10,
        height: 160,
        aspectRatio: 0.7,
        borderRadius: 8,
        borderColor: "#ff0000",
        overflow: "hidden",
      }}
    &gt;
      {webcamOn &amp;&amp; webcamStream ? (
        &lt;RTCView
          objectFit="cover"
          zOrder={1}
          mirror={isLocal ? true : false}
          style={{ flex: 1, backgroundColor: "#424242" }}
          streamURL={new MediaStream([webcamStream.track]).toURL()}
        /&gt;
      ) : (
        &lt;Avatar
          fullName={displayName}
          containerBackgroundColor={colors.primary[600]}
          fontSize={24}
          style={{
            backgroundColor: colors.primary[500],
            height: 60,
            aspectRatio: 1,
            borderRadius: 40,
          }}
        /&gt;
      )}
    &lt;/View&gt;
  );
};
</code></pre><p>Here is how the video call will look with two participants:</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Screenshot_2023-01-10-18-17-36-98_d45cb549aa30b1e8d309ba53306ffaf2--1-.jpg" class="kg-image" alt="Build a React Native Video Calling App with Callkeep using Firebase and VideoSDK Part -1" loading="lazy" width="185" height="400"/></figure><blockquote>Hurray!!! With this, our video calling feature is complete. Here is a video of how it looks.</blockquote><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/SF4pVzRbuu4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="" title="React native callkeep in Video SDK"/></figure><p>Head over to the second part of the series to see how you can configure iOS to receive calls and initiate the video call.</p><h2 id="conclusion">Conclusion</h2><p>With this, we successfully built the React Native video calling app with Callkeep using the VideoSDK and Firebase. You can always refer to our <a href="https://docs.videosdk.live/">documentation</a> if you want to add features like chat messaging and screen sharing. 
If you have any problems with the implementation, please contact us via our <a href="https://discord.gg/Gpmj6eCq5u">Discord community</a>.</p>]]></content:encoded></item><item><title><![CDATA[RBI Video KYC (VKYC) Guidelines with the Important Compliances]]></title><description><![CDATA[Detailed information on RBI KYC Compliance updates for financial institutions with specific guidance and procedures.]]></description><link>https://www.videosdk.live/blog/rbi-compliance-for-video-kyc</link><guid isPermaLink="false">65156c3d9eadee0b8b9e84af</guid><category><![CDATA[Video KYC]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 02 Oct 2024 05:54:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/09/Video-KYC-GTM--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/09/Video-KYC-GTM--1-.jpg" alt="RBI Video KYC (VKYC) Guidelines with the Important Compliances"/><p>RBI introduced essential amendments to the Master Direction on Video KYC, reinforcing the importance of the KYC and Customer Identification process in financial transactions and introducing <a href="https://www.videosdk.live/solutions/video-kyc">Video KYC</a> as a modern and secure method of customer identification for Banks, NBFCs, and other financial entities.</p><p>It is a significant step to enhance security and streamline the Know Your Customer (KYC) process through the VCIP (Video-based Customer Identification Process).</p><p>This amendment’s major focus is on the Customer Due Diligence (CDD) process of improving KYC guidelines including Video KYC, video customer identification process, and Facial recognition systems. It allows businesses to adhere to <a href="https://www.cert-in.org.in/">CERT-in</a> (The Indian Computer Emergency Response Team) compliance standards while onboarding new customers on their platform. 
To know more, you can download the RBI KYC Master Direction:</p><blockquote><a href="https://www.videosdk.live/resources/ebook/ebook-for-video-kyc-compliances">Download Free eBook : New Compliance for Video KYC Guideline RBI (2024)</a></blockquote><h3 id="what-is-video-kyc">What is Video KYC?</h3><p>Video KYC, also known as VCIP (Video Customer Identification Process), is a process of customer identification used by regulated entities (REs) under the RBI. This process involves the usage of facial recognition technology and customer due diligence completed by an authorized official of the RE. Video KYC (V-CIP) is a secure, live, informed consent-based audio-visual interaction with the customer to collect the required identification information for Customer Due Diligence (CDD) purposes.</p><h2 id="rbi-video-kyc-guidelines">RBI Video KYC Guidelines</h2><h3 id="what-are-the-rbi-guidelines-on-video-kyc">What are the RBI guidelines on video KYC?</h3><p>RBI Video KYC (Know Your Customer) is a method introduced by the Reserve Bank of India (RBI) to enable banks and other financial institutions to verify the identity of their customers remotely through video conferencing. This process allows customers to open accounts or avail of various financial services without physically visiting a branch.</p><p>With Video KYC, customers can complete the KYC process from the comfort of their homes or offices using a smartphone or computer with internet connectivity. During the video call, the customer interacts with a bank representative who verifies their identity by asking for relevant documents and conducting necessary checks. 
This video customer identification method is aimed at enhancing customer convenience while ensuring compliance with regulatory requirements for identity verification.</p><h2 id="which-regulations-matter-for-video-kyc-in-your-app">Which Regulations matter for Video KYC in your app?</h2><p>It's essential for all Regulated Entities as well as <a href="https://www.videosdk.live/solutions/video-kyc"><strong>First Layer Video-based Infrastructure providers</strong></a> who provide video banking services to all major Banks, Fintech, Payment aggregators, and NBFCs to adopt these changes swiftly to ensure compliance and security in today's digital age. These measures not only protect customers but also strengthen the overall integrity of financial institutions and businesses.</p><h3 id="cyber-security-frameworks">Cyber Security &amp; Frameworks</h3><p>The REs should have complied with the RBI guidelines on the minimum baseline cyber security and resilience framework for banks. The infrastructure, including application software and workflows, should be regularly upgraded.</p><h3 id="spoof-ip-detection">Spoof IP detection</h3><p>To ensure the security and integrity of the <a href="https://www.videosdk.live/blog/build-vs-buy-video-kyc-infrastructure#understanding-video-kyc-infrastructure">V-CIP infrastructure/application</a>, it must possess the capability to prevent connections from IP addresses outside of India or from spoofed IP addresses. This measure is essential in safeguarding against potential threats and unauthorized access, thereby enhancing the overall security of the system.</p><h3 id="seamless-secure-and-visual-infrastructure">Seamless, Secure and Visual infrastructure</h3><p>Financial institutions and regulated entities (REs) are required to verify the identity of their customers using secure and live, Informed-consent-based audio-visual seamless interactions. 
This process includes facial recognition and customer due diligence conducted by an authorized official of the regulated entities. The official interacts with the customer to gather the required identification information for customer due diligence (CDD) purposes.</p><h3 id="end-to-end-encryption-for-protecting-customer-data">End-to-end encryption for protecting customer data</h3><p>The regulated entities shall guarantee end-to-end encryption of data between the customer device and the hosting point of the VCIP application, as per relevant encryption standards. The customer approval should be recorded in an auditable and alteration-proof manner.</p><h3 id="geo-tagging-of-customer-interactions">Geo-tagging of customer interactions</h3><p>The video recordings should contain the live GPS coordinates (geo-tagging) of the customer undertaking the VCIP and the date-time stamp. The video recordings will serve as a reliable and secure source of evidence for the VCIP procedure. This will provide a comprehensive VCIP process record and help verify the customer's identity.</p><h3 id="face-to-face-cip-customer-identification-process">Face-to-face CIP (Customer Identification Process)</h3><p>Video-based KYC and/or V-CIP is treated as face-to-face customer identification for regulatory purposes, provided it includes components such as face liveness/spoof detection and highly accurate face-matching technology. This matters especially in the context of digital banking and remote customer onboarding, even though the ultimate responsibility for customer identification rests with the REs.</p><h2 id="why-cert-in-and-vpat-are-confidential-compliances-for-video-kyc-software">Why CERT-in and VAPT Are Critical Compliances For Video KYC Software</h2><p>It's essential for all Regulated Entities (REs) as well as First Layer Infrastructure (FLI) providers who provide video banking services to all major Banks and NBFCs, to adopt these changes swiftly to ensure compliance and 
security in today's digital age.</p><h3 id="cert-in-compliances">CERT-in Compliances</h3><p>CERT-in is the national nodal agency for responding to cyber security incidents. CERT-in stands for The Indian Computer Emergency Response Team. It performs in the area of collection, analysis, and dissemination of information on cyber securities. Such tests should also be carried out periodically in conformance with internal/regulatory guidelines.</p><h3 id="vapt-and-security-audits"><strong>VAPT and Security Audits</strong></h3><p>To ensure the security and authenticity of video KYC software, every business should prioritize the use of Vulnerability Assessment and Penetration Testing (VAPT) and security audits. These measures are essential for independent verification of provided information and maintaining a secure audit trail. By conducting VAPT and security audits, businesses can identify and address any critical issues before implementing their video KYC software, ensuring its robustness and security.</p><h3 id="data-localization">Data Localization</h3><p>To ensure data security and compliance, every business should host its software on Indian data servers. This includes conducting appropriate tests for functional, performance, and maintenance strength before using the V-CIP application software and its relevant APIs/web services in a live environment. It is also important to conduct periodic tests by internal regulatory guidelines to ensure compliance with data localization requirements.</p><h2 id="conclusion">Conclusion</h2><p>RBI KYC guidelines for video highlight the importance of conducting necessary tests. Many Indian infrastructures shall undergo required tests such as Vulnerability Assessment, Penetration Testing, and a Security Audit to ensure their robustness and end-to-end encryption capabilities. Any critical gap reported under this process shall be mitigated before rolling out its implementation. 
It is recommended to conduct these tests periodically with the empaneled auditors of the Indian Computer Emergency Response Team (CERT-In), in accordance with internal and regulatory guidelines.</p><p>For detailed information and guidance on RBI's V-CIP and KYC updates, you can visit the official website of the Reserve Bank of India.</p>
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html>

<head>
	<style>
		.center {
			text-align: center;
		}

		.my-button {
			width: 250px;
			font-size: 24px;
			background-color: #5f7afa;
			color: white;
			border: none;
			padding: 10px 20px;
			text-align: center;
			text-decoration: none;
			display: inline-block;
			cursor: pointer;
			border-radius: 5px;
		}

		.my-button:hover {
			background-color: #5f7ada;
		}
	</style>
</head>

<body>

	<div class="center">
		<a href="https://www.videosdk.live/resources/ebook/ebook-for-video-kyc-compliances">
			<button class="my-button">Download Now!</button>
		</a>
	</div>

</body>
</html>
<!--kg-card-end: html-->
<p/><p>You can <a href="https://www.videosdk.live/contact">talk with our team</a> if you have any questions regarding CERT compliance or Video KYC for your app.</p>]]></content:encoded></item><item><title><![CDATA[Integrate Pre-Call Check in React Native]]></title><description><![CDATA[Discover the importance of implementing a precall feature in your React Native app to enhance user experience and ensure successful video calls. ]]></description><link>https://www.videosdk.live/blog/precall-integration-in-react-native</link><guid isPermaLink="false">669e34f820fab018df10fa2e</guid><category><![CDATA[React Native]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 01 Oct 2024 13:03:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/Pre-Call-Check-Setup.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Pre-Call-Check-Setup.jpg" alt="Integrate Pre-Call Check in React Native"/><p>Before diving into the depth of video calls, imagine giving your setup a quick check-up, like a tech-savvy doctor making sure all systems are in order. That's the pre-call experience: like an extensive debug session before the main code execution, it's a critical step in ensuring your application's performance is optimal.</p><p>As users increasingly rely on video conferencing for both personal and professional interactions, the importance of a well-structured pre-call experience cannot be overstated. It serves as a safeguard against potential technical hiccups that could disrupt the flow of communication.</p><p>In this article, we'll explore the importance of implementing the pre-call feature in React Native apps, guiding developers through the steps needed to create a robust and user-friendly setup. 
By prioritizing this preliminary phase, developers can enhance user trust and satisfaction, paving the way for successful and engaging video calls.</p><h2 id="why-is-it-necessary%E2%80%8B">Why is it Necessary?<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/setup-call/precall#why-is-it-necessary"></a></h2><p>Why invest time and effort into crafting a precall experience, you wonder? Well, picture this scenario: your users eagerly join a video call, only to encounter a myriad of technical difficulties—muted microphones, pixelated cameras, and laggy connections. Not exactly the smooth user experience you had in mind, right?</p><p>By integrating a robust precall process into your app, developers become the unsung heroes, preemptively addressing potential pitfalls and ensuring that users step into their video calls with confidence.</p><h2 id="step-by-step-guide-integrating-precall-feature%E2%80%8B">Step-by-Step Guide: Integrating Precall Feature</h2><h3 id="step-1-check-permissions%E2%80%8B">Step 1: <strong>Check Permissions</strong><a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/setup-call/precall#check-permissions"></a></h3><ul><li>Begin by ensuring that your application has the necessary permissions to access user devices such as the camera and microphone.</li><li>Utilize the <code>checkPermission()</code> and <code>checkBluetoothPermission()</code> methods of the <code>useMediaDevice</code> hook to verify if permissions are granted.</li></ul><pre><code class="language-js">import { useMediaDevice } from "@videosdk.live/react-native-sdk";

const { checkPermission, checkBluetoothPermission } = useMediaDevice();

const checkMediaPermission = async () =&gt; {
  //These methods return a Promise that resolve to a Map&lt;string, boolean&gt; object.
  const checkAudioPermission = await checkPermission("audio"); //For getting audio permission
  const checkVideoPermission = await checkPermission("video"); //For getting video permission
  const checkAudioVideoPermission = await checkPermission("audio_video"); //For getting both audio and video permissions
  const checkBTPermission = await checkBluetoothPermission(); // For getting Bluetooth permission
  // Output: Map object for both audio and video permission:
  /*
        Map(2)
        0 : {"audio" =&gt; true}
            key: "audio"
            value: true
        1 : {"video" =&gt; true}
            key: "video"
            value: true
    */
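  // Reading the result (illustrative): each of the calls above resolves to a
  // Map&lt;string, boolean&gt;, so a single permission can be checked with:
  //   const audioAllowed = checkAudioPermission.get("audio") === true;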
};</code></pre><h3 id="step-2-request-permissions">Step 2: Request Permissions</h3><p>If permissions are not granted, use the <code>requestPermission()</code> and <code>requestBluetoothPermission</code> methods of the <code>useMediaDevice</code> hook to prompt users to grant access to their devices.</p><pre><code class="language-js">const requestAudioVideoPermission = async () =&gt; {
  try {
    //These methods return a Promise that resolve to a Map&lt;string, boolean&gt; object.
    const requestAudioPermission = await requestPermission("audio"); //For Requesting Audio Permission
    const requestVideoPermission = await requestPermission("video"); //For Requesting Video Permission
    const requestAudioVideoPermission = await requestPermission("audio_video"); //For Requesting Audio and Video Permissions

    // Applicable only to Android; not required for iOS
    const checkBTPermission = await requestBluetoothPermission(); //For requesting Bluetooth Permission.
  } catch (ex) {
    console.log("Error in requestPermission ", ex);
  }
};</code></pre><h3 id="step-3-render-device-list">Step 3: Render Device List</h3><p>Once you have the necessary permissions, fetch and render the lists of available cameras, microphones, and all devices using the <code>getCameras()</code>, <code>getAudioDeviceList()</code>, and <code>getDevices()</code> methods of the <code>useMediaDevice</code> hook, respectively.</p><pre><code class="language-js">const getMediaDevices = async () =&gt; {
  try {
    //Method to get all available webcams.
    //It returns a Promise that is resolved with an array of CameraDeviceInfo objects describing the video input devices.
    let webcams = await getCameras();
    console.log("List of Cameras:", webcams);
    //Method to get all available Microphones.
    //It returns a Promise that is resolved with an array of MicrophoneDeviceInfo objects describing the audio input devices.
    const mics = await getAudioDeviceList();
    console.log("List of Microphones:", mics);
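```javascript
    // A minimal sketch: default to the first listed device when the user has
    // not picked one yet; the `mics` array above would be used the same way.
    // (A simulated list is shown here so the snippet is self-contained.)
    const simulatedMics = [
      { deviceId: "mic-1", label: "Built-in Microphone" },
      { deviceId: "mic-2", label: "USB Microphone" },
    ];
    const defaultMic = simulatedMics.length ? simulatedMics[0] : null;
    console.log("Default microphone:", defaultMic.label); // "Built-in Microphone"
```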
    //Method to get all available cameras and playback devices.
    //It returns a list of the currently available media input and output devices, such as microphones, cameras, headsets, and so forth
    let devices = await getDevices();
    console.log("List of Devices:", devices);
  } catch (err) {
    console.log("Error in getting audio or video devices", err);
  }
};</code></pre><h3 id="step-4-handle-device-changes%E2%80%8B">Step 4: Handle Device Changes</h3><ul><li>Implement the <code>onAudioDeviceChanged</code> callback of the <code>useMediaDevice</code> hook to dynamically re-render device lists whenever devices are attached to or removed from the system.</li><li>Ensure that users can seamlessly interact with newly connected devices without disruptions.</li></ul><pre><code class="language-js">const {
    ...
  } = useMediaDevice({ onAudioDeviceChanged });

//Fetch camera, mic and speaker devices again using this function.
function onAudioDeviceChanged(device) {
    console.log("Device Changed", device)
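```javascript
    // A minimal sketch: decide which picker to refresh from the changed
    // device's kind, assuming the payload carries a Web-style `kind` field
    // (simulated here so the snippet is self-contained). A real handler would
    // then re-run the getMediaDevices() helper from Step 3.
    const simulatedDevice = { deviceId: "bt-01", kind: "audioinput", label: "BT Headset" };
    const refreshMicList = simulatedDevice.kind === "audioinput";
    console.log("Refresh microphone list:", refreshMicList); // true
```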
}</code></pre><h3 id="step-5-create-media-tracks">Step 5: Create Media Tracks</h3><p>Create media tracks for the selected microphone and camera using the <code>createMicrophoneAudioTrack()</code> and <code>createCameraVideoTrack()</code> methods.</p><p>Ensure that these tracks originate from the user-selected devices for accurate testing.</p><pre><code class="language-js">import {
  createCameraVideoTrack,
  createMicrophoneAudioTrack,
} from "@videosdk.live/react-native-sdk";

//For Getting Audio Tracks
const getMediaTracks = async () =&gt; {
  try {
    //Returns a MediaStream object, containing the Audio Stream from the selected Mic Device.
    let customTrack = await createMicrophoneAudioTrack({
      encoderConfig: "speech_standard",
      noiseConfig: {
        noiseSuppression: true,
        echoCancellation: true,
        autoGainControl: true,
      },
    });
  } catch (error) {
    console.log("Error in getting Audio Track", error);
  }

  //For Getting Video Tracks
  try {
    //Returns a MediaStream object, containing the Video Stream from the selected Webcam Device.
    let customVideoTrack = await createCameraVideoTrack({
      optimizationMode: "motion",
      encoderConfig: "h720p_w1280p",
      facingMode: "user",
    });
    //To retrieve the video track that will be displayed to the user from the stream.
    const videoTracks = customVideoTrack?.getVideoTracks();
    const videoTrack = videoTracks?.length ? videoTracks[0] : null;
  } catch (error) {
    console.log("Error in getting Video Track", error);
  }
};</code></pre><h3 id="step-6-passing-states-to-meeting">Step 6: Passing States to Meeting</h3><p>Ensure that all relevant states, such as microphone and camera status (on/off) and selected devices, are passed into the meeting from the precall screen.</p><p>This can be accomplished by passing these crucial states and media streams to the VideoSDK MeetingProvider.</p><p>By ensuring this integration, users can seamlessly transition from the precall setup to the actual meeting while preserving their preferred settings.</p><pre><code class="language-js">&lt;MeetingProvider
    config={
        {
            ...
            //Status of Microphone Device as selected by the user (On/Off).
            micEnabled: micOn,
            //Status of Webcam Device as selected by the user (On/Off).
            webcamEnabled: webcamOn,
            //customVideoStream has to be the Video Stream of the user's selected Webcam device as created in Step-5.
            customCameraVideoTrack: customVideoStream,
            //customAudioStream has to be the Audio Stream of the user's selected Microphone device as created in Step-5.
            customMicrophoneAudioTrack: customAudioStream
        }
    } &gt;
&lt;/MeetingProvider&gt;</code></pre><h2 id="conclusion">Conclusion</h2><p>The step-by-step guide provided in this article equips developers with the tools and knowledge needed to implement a comprehensive pre-call process, turning potential frustrations into a smooth, enjoyable experience.</p><p>Incorporating precall features into your React Native apps is not only a best practice; it's a necessity. By addressing potential issues such as device permissions, connectivity, and media track management, you empower users to confidently join their video calls.</p><p>As you embark on this journey, remember that a little preparation goes a long way in ensuring that your users can focus on what matters: meaningful conversations and connections. Embrace the pre-call experience and watch your application thrive in an increasingly competitive landscape.</p>]]></content:encoded></item><item><title><![CDATA[Open Source vs VideoSDK: Which is the Best Video KYC Solution for Your Needs?]]></title><description><![CDATA[This article will help you understand the limitations of the Open Source platforms WebRTC and Jitsi, and why VideoSDK might be a better choice for Video KYC solutions.]]></description><link>https://www.videosdk.live/blog/comparison-of-open-source-and-videosdk</link><guid isPermaLink="false">651fd83d9eadee0b8b9eb413</guid><category><![CDATA[Video KYC]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 01 Oct 2024 11:57:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/10/Opensource-vs-VSDK.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/10/Opensource-vs-VSDK.jpg" alt="Open Source vs VideoSDK: Which is the Best Video KYC Solution for Your Needs?"/><p>In this article, we will explore the options available for Video KYC Infrastructure and compare the limitations of Open Source solutions with the benefits of VideoSDK. 
We will discuss why VideoSDK may be the better choice for your Video KYC solution. By understanding the differences between Open Source and VideoSDK, you can make a decision that aligns with your organization's needs. Let's walk through a detailed comparison and discover how VideoSDK can improve your Video KYC (generally referred to as VCIP) procedures.</p><h2 id="what-is-open-source-video-infrastructure">What is Open Source Video Infrastructure?</h2><p>Open Source Video Infrastructure is developed with contributions from the developer community, as most of its features are driven by community demand. It can be customized and branded for the specific use cases of video conferencing platforms, and it gives developers the flexibility to integrate video communication capabilities into their applications.</p><h2 id="what-are-the-limitations-of-open-source-infrastructure-used-by-industry">What are the limitations of Open Source Infrastructure used by industry?</h2><p>When it comes to video KYC solutions and infrastructure, various options are available, including open-source solutions. However, larger banks rarely consider them. Open-source platforms like Jitsi and Janus, built on WebRTC, can serve specific use cases, but they come with limitations. One key limitation is <strong>data security</strong>, a crucial aspect of any Video KYC process: open-source solutions give you control over your data, but they may not provide the comprehensive security guarantees that Video KYC requires.</p><h3 id="webrtc-web-real-time-communication">WebRTC (Web Real-Time Communication)</h3><p>WebRTC, the backbone of real-time communication on the web, has encountered a series of challenges that deserve attention, ranging from platform and device compatibility to codec, permission, and device-selection issues.</p><h4 id="scalable-video-codec-concerns">Scalable Video Codec Concerns</h4>
<p>Some noteworthy problems include the <a href="https://github.com/webrtc/samples/issues/1621">Video Analyzer</a> failing to function under default server settings, the need for a <a href="https://github.com/webrtc/samples/issues/1602">WebGPU</a> code refresh, and concerns about scalability with the <a href="https://github.com/webrtc/samples/issues/1597">Scalable Video Codec</a>. These reports underline the need for updates, particularly around samples that do not function correctly with the VP9 video codec.</p><h4 id="troubles-with-audio-and-video-source-selection">Troubles with Audio and Video Source Selection</h4>
<p>There are issues related to <a href="https://github.com/webrtc/samples/issues/1498">audio and video source selection</a> on Chrome/Android for external <a href="https://github.com/webrtc/samples/issues/1498">USB devices</a>, as well as difficulties with perfect negotiation, rendering resolutions in Firefox, and using Bluetooth headsets with WebRTC.</p><h4 id="overcoming-compatibility-challenges">Overcoming Compatibility Challenges</h4>
<p>Issues have also surfaced recently around permissions and device compatibility, ranging from <a href="https://github.com/webrtc/samples/issues/1626">mirroring and recording video</a> from camera streams to concerns related to the use of the Scalable Video Codec and <a href="https://github.com/webrtc/samples/issues/1556">Trickle ICE</a>. Fixes continue to land to give users a smoother and more reliable real-time communication experience.</p><h4 id="other-issues">Other Issues</h4>
<p>Additionally, there is a problem with code samples using deprecated functions and with <a href="https://github.com/webrtc/samples/issues/1415">webgl_teapot</a> samples rendering at incorrect resolutions in Firefox. These issues underscore the importance of continually improving and refining WebRTC, and they shed light on its evolving landscape, where developers are actively tackling them.</p><h3 id="jitsi">Jitsi</h3><p>Jitsi is built on WebRTC and is highly customizable, but according to <a href="https://foundation.mozilla.org/en/privacynotincluded/jitsimeet/#:~:text=Jitsi%20Meet%20uses%20encryption%2C%20but,%2Dend%20encryption%20on%20Firefox">Mozilla</a> it does not provide end-to-end encryption on Firefox, and users have experienced camera and microphone issues in Google Chrome when authentication is enabled without a guest domain.</p><h4 id="translation-issues">Translation Issues</h4>
<p>Reported problems include <a href="https://github.com/jitsi/jitsi-meet/issues/5056">translation</a> not functioning as expected, indicating potential localization issues, as well as features like <a href="https://github.com/jitsi/jitsi-meet/issues/8144">startWithAudioMuted and startWithVideoMuted</a> not behaving according to the configuration settings.</p><h4 id="quality-and-configuration-concerns">Quality and Configuration Concerns</h4>
<p>A peculiar problem arises where no camera or microphone is detected in <a href="https://github.com/jitsi/jitsi-meet/issues/13815">Google Chrome</a> when authentication is enabled without a guest domain, behavior that defies configuration expectations and impacts the quality of web conferencing.</p><h4 id="mobile-bandwidth-optimization">Mobile Bandwidth Optimization</h4>
<p>Another issue highlights the need to enable resolution constraints in the mobile apps to conserve <a href="https://github.com/jitsi/jitsi-meet/issues/5808">bandwidth</a>, particularly on iOS and Android. A recurring problem involves intermittent audio and video communication failures when connecting.</p><h4 id="other-issues">Other Issues</h4>
<p>Other reports mention users randomly experiencing <strong>issues</strong> with enhanced <strong>end-to-end encryption</strong> (E2EE) support for Android and iOS, while others note that Jitsi doesn't initiate connections on locked iPhones.</p><p>Concerns have also been raised about <strong>privacy</strong> in Jitsi, with reports indicating that <a href="https://github.com/jitsi/jitsi-meet/issues/6474"><code>JitsiMeet.framework</code></a> utilizes <code>react-native-netinfo</code>, which sends HTTP requests to Google, potentially raising privacy-related questions. These issues highlight the importance of continuous improvement and community involvement in open-source projects like Jitsi.</p><h2 id="unparalleled-limitations">Unparalleled Limitations</h2><p>Open source's limitations do not end here. Dig deeper and you'll find that it demands significantly more self-management, including:</p><h3 id="access-to-critical-datasets"><strong>Access to Critical Datasets</strong></h3><p>Open-source tools might <strong>lack access</strong> to KYC infrastructure's <strong>standardized datasets</strong>. These datasets are often maintained by other entities, limiting open-source tools' ability to deliver a comprehensive KYC and AML solution.</p><h3 id="data-security"><strong>Data Security</strong></h3><p>Open-source solutions may not offer the robust <strong>data security measures</strong> required for KYC and AML compliance. Their encryption protocols provide control over data, but they can be less comprehensive.</p><h3 id="scalability-and-reliability"><strong>Scalability and Reliability</strong></h3><p>When dealing with large volumes of customer data, scaling open-source solutions can be <strong>challenging</strong>, and ensuring reliability under varying workloads can be a hurdle.</p><p>Open-source tools may <strong>not have access</strong> to these datasets, making it <strong>difficult to provide</strong> a comprehensive KYC solution. 
Still, it's worth considering how transitioning to VideoSDK can revolutionize your KYC (Know Your Customer) processes.</p><h2 id="why-should-you-migrate-from-open-source-to-videosdks-infrastructure">Why should you migrate from Open Source to VideoSDK's Infrastructure?</h2><p><a href="https://www.videosdk.live/solutions/video-kyc">VideoSDK</a> offers an extensive collection of benefits, including cost-efficiency, regulatory compliance, and developer- and user-friendliness. By choosing VideoSDK, organizations can have peace of mind knowing that their video KYC processes meet the necessary <strong>security and compliance standards.</strong> It provides <strong>10,000 free minutes each month</strong>, ensuring that your KYC processes are not only efficient but also budget-friendly.</p><h3 id="developer-friendly">Developer Friendly</h3><p>VideoSDK has a developer-friendly interface and a comprehensive feature set designed to simplify your manual and VCIP KYC processes. Its API integration supports 20+ programming languages and frameworks used by prominent fintech &amp; finance companies. 
With appropriate technology frameworks like <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start"><strong>JavaScript,</strong></a><strong> </strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/concept-and-architecture"><strong>React,</strong></a><strong> </strong><a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/concept-and-architecture"><strong>React Native,</strong></a><strong> </strong><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/concept-and-architecture"><strong>Flutter,</strong></a><strong> </strong><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/concept-and-architecture"><strong>Android, and</strong></a><strong> </strong><a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/getting-started"><strong>iOS,</strong></a> it streamlines smooth and secure operations for clients and their customers.</p><h3 id="regulatory-compliance-with-cert-in">Regulatory Compliance with CERT-in</h3><p>At first, VideoSDK ensures that you meet critical regulatory requirements, including <a href="https://www.videosdk.live/blog/rbi-compliance-for-video-kyc">RBI's CERT-In compliance</a>. It provides features like <strong>Dedicated IP whitelisting, Firewall proxy support</strong> (beta), and <strong>Customer IP Masking</strong> and adheres to eKYC guidelines, giving you confidence in your compliance efforts.</p><h3 id="vapt-and-security-audits">VAPT and Security Audits</h3><p>VideoSDK provides essential tests such as <strong>Vulnerability Assessment, Penetration Testing</strong> (VAPT), and a <strong>Security Audit</strong> to ensure its robustness and end-to-end encryption. It is compliant and provides security to all major Banks and NBFCs in today's digital age. 
</p><p>These evaluations are thorough, and combined with <a href="https://www.aikido.dev/blog/top-automated-penetration-testing-tools" rel="noreferrer">automated penetration testing tools</a>, they help uncover vulnerabilities more effectively.</p><h3 id="data-localization">Data Localization</h3><p>VideoSDK's infrastructure provides robust frameworks for data management, maintaining reliable record-keeping processes aligned with the new PMLA Act in India. It offers the robust <strong>data security measures</strong> required for KYC and AML compliance, and its end-to-end encryption protocols keep your data under your control.</p><hr/><p>If you're deeply invested in open-source infrastructure, migrating to VideoSDK might seem like a significant shift. Consider your organization's specific needs, and weigh the advantages of VideoSDK carefully. It offers cost-efficiency, regulatory compliance, and user-friendliness: attributes that can elevate your KYC processes to new heights.</p><p>Make the strategic move to VideoSDK to elevate your KYC infrastructure and enhance your business's capabilities. It's time to embrace the future of KYC with VideoSDK.</p><p>You can <a href="https://www.videosdk.live/contact">talk with our team</a> if you have any questions regarding Video KYC infrastructure.</p>]]></content:encoded></item><item><title><![CDATA[Top 10 Jitsi Alternatives in 2026]]></title><description><![CDATA[Uncover a groundbreaking and transformative alternative to Jitsi, revolutionizing your online experience and propelling you toward unprecedented success. 
Embrace this golden opportunity to unlock unrivaled potential and embark on a journey to new heights.]]></description><link>https://www.videosdk.live/blog/jitsi-alternative</link><guid isPermaLink="false">64afa0a45badc3b21a595670</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 01 Oct 2024 10:22:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/Jitsi-Meet-alternative-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Jitsi-Meet-alternative-1.jpg" alt="Top 10 Jitsi Alternatives in 2026"/><p>If you're in search of a smooth integration of real-time video into your application and are considering an <a href="https://www.videosdk.live/jitsi-vs-videosdk" rel="noreferrer"><strong>alternative to Jitsi</strong></a>, you've landed in the perfect place! While Jitsi is widely known, there are numerous untapped opportunities outside their platform waiting to be explored. Stay tuned to uncover what you may have been missing out on, especially if you're already a Jitsi Meet user. Prepare yourself to delve into new possibilities!</p><h2 id="why-explore-a-jitsi-meet-alternative">Why explore a Jitsi Meet Alternative</h2>
<p>There are various reasons <strong>why someone might seek a Jitsi alternative</strong>. It could be due to specific <a href="https://github.com/jitsi/lib-jitsi-meet/issues/2205">feature requirements</a>, such as the <a href="https://github.com/jitsi/lib-jitsi-meet/issues/2314">need for better performance</a> in larger meetings or concerns about security and privacy. Integration with existing tools and a simpler, more user-friendly interface are also common considerations. By exploring alternative platforms, individuals or organizations can find a video conferencing solution that better aligns with their needs and preferences.</p><p>The <strong>top 10 Jitsi Alternatives</strong> are VideoSDK, Twilio, MirrorFly, Agora, WebRTC, Vonage, AWS Chime, EnableX, WhereBy, and SignalWire.</p><blockquote>
<h2 id="top-10-jitsi-alternatives-for-2026">Top 10 Jitsi Alternatives for 2026</h2>
<ul>
<li>VideoSDK</li>
<li>Twilio Video</li>
<li>MirrorFly</li>
<li>Agora</li>
<li>WebRTC</li>
<li>Vonage</li>
<li>AWS Chime SDK</li>
<li>Enablex</li>
<li>Whereby</li>
<li>SignalWire</li>
</ul>
</blockquote>
<h2 id="1-videosdk">1. VideoSDK</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-for-Real-time-Communication-Live-Streaming-Video-API-9.jpeg" class="kg-image" alt="Top 10 Jitsi Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Experience the incredible power of <a href="https://www.videosdk.live">VideoSDK</a>, an API specifically crafted to effortlessly integrate powerful audio and video features into your applications. With minimal effort, you can elevate your app by offering live audio and video experiences across various platforms.</p><h3 id="key-points-about-videosdk">Key points about VideoSDK</h3>
<ul><li>VideoSDK offers the perfect blend of simplicity and fast integration, allowing you to dedicate more time to developing innovative features that boost user retention. Bid farewell to cumbersome integration processes and open the door to endless possibilities.</li><li>Embrace the advantages of VideoSDK, such as its exceptional scalability, adaptive <a href="https://www.videosdk.live/blog/what-is-bitrate" rel="noreferrer">bitrate</a> technology, extensive customization options, top-notch recording quality, comprehensive analytics, cross-platform streaming, effortless scaling, and comprehensive platform support. </li><li>Whether you're on mobile (Flutter, Android, iOS), web (JavaScript Core SDK + UI Kit), or desktop (Flutter Desktop), Video SDK empowers you to effortlessly create immersive video experiences.</li></ul><h3 id="videosdk-pricing">VideoSDK pricing</h3>
<ul><li>Unlock amazing value with Video SDK! Make the most of the generous offer of <a href="https://www.videosdk.live/pricing" rel="noreferrer">$20 free credit</a> and enjoy <a href="https://www.videosdk.live/pricing#pricingCalc">flexible pricing</a> options for both video and audio calls. </li><li><strong>Video calls</strong> start at an incredible rate of <strong>just $0.003</strong> per participant per minute, while <strong>audio calls</strong> begin at a minimal cost of <strong>$0.0006</strong>.</li><li>For added convenience, <strong>cloud recordings</strong> are available at an affordable rate of <strong>$0.015</strong> per minute, and <strong>RTMP output</strong> comes at a competitive price of <strong>$0.030</strong> per minute.</li><li>What's more, you'll have access to <strong>free 24/7 customer support</strong>, ensuring assistance whenever you need it. Upgrade your video capabilities today and embark on a new level of excellence!</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/jitsi-vs-videosdk"><strong>Jitsi and VideoSDK</strong></a><strong>.</strong></blockquote>
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</link></meta></meta></head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-twilio-video">2. Twilio Video</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Communication-APIs-for-SMS-Voice-Video-Authentication_twilio-8.jpeg" class="kg-image" alt="Top 10 Jitsi Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Twilio is a leading video SDK solution that empowers businesses to effortlessly integrate live video into their mobile and web applications. The standout advantage of Twilio lies in its versatility, offering you the flexibility to build an app from scratch or enhance your existing solutions with robust communication features. Whether you're starting a new one or expanding your app's functionalities, Twilio provides a reliable and all-encompassing solution for seamlessly incorporating live video into your applications.</p><h3 id="key-points-about-twilio">Key points about Twilio</h3>
<ul><li>Twilio provides web, iOS, and Android SDKs for integrating live video into applications.</li><li>Manual configuration and extra code are required for using multiple audio and video inputs.</li><li>Twilio's call insights can track and analyze errors, but additional code is needed for implementation.</li><li>Pricing can be a concern as usage grows, as Twilio lacks a built-in tiering system in the dashboard.</li><li>Twilio supports up to 50 hosts and participants in a call.</li><li>There are no plugins available for easy product development with Twilio.</li><li>The level of customization offered by the Twilio Video SDK may not meet the needs of all developers, resulting in additional code writing.</li></ul><h3 id="pricing-for-twilio">Pricing for Twilio</h3>
<ul><li>The <a href="https://www.twilio.com/en-us/video/pricing">pricing</a> for <a href="https://www.videosdk.live/blog/twilio-video-alternative"><strong>Twilio</strong></a> starts at <strong>$4</strong> per 1,000 minutes. </li><li><strong>Recordings</strong> are charged at <strong>$0.004</strong> per participant minute, <strong>recording compositions</strong> cost <strong>$0.01</strong> per composed minute, and <strong>storage</strong> is priced at <strong>$0.00167</strong> per GB per day after the first 10 GBs.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-jitsi"><strong>Jitsi and Twilio</strong></a><strong>.</strong></blockquote><h2 id="3-mirrorfly">3. MirrorFly</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Live-Video-Call-API-Best-Video-Chat-SDK-for-Android-iOS-mirrorfly-8.jpeg" class="kg-image" alt="Top 10 Jitsi Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>MirrorFly is an exceptional in-app communication suite tailor-made for enterprises. It boasts an extensive array of powerful APIs and SDKs that provide unparalleled chat and calling experiences. With over 150 remarkable features for chat, voice, and video calling, this cloud-based solution seamlessly integrates to form a robust communication platform.</p><h3 id="key-points-about-mirrorfly">Key points about MirrorFly</h3>
<ul><li>MirrorFly may have limited customization options, restricting the ability to tailor the platform to specific branding or user experience requirements. This can limit the uniqueness and personalization of the communication features.</li><li>MirrorFly may face challenges in scaling for larger applications or handling a high volume of users. The platform may struggle to maintain performance and stability when dealing with significant traffic or complex use cases.</li><li>Users have reported mixed experiences with MirrorFly's technical support. Some have found it lacking in responsiveness, leading to delays or difficulties in resolving issues or addressing concerns.</li><li>MirrorFly's pricing structure may not be suitable for all budgets or use cases. Depending on the desired features and scalability requirements, the costs associated with using MirrorFly may be higher compared to alternative communication platforms.</li><li>Integrating MirrorFly into existing applications or workflows may require significant effort and technical expertise. The platform might lack comprehensive documentation or robust developer resources, making the integration process challenging or time-consuming.</li></ul><h3 id="mirrorfly-pricing">MirrorFly pricing</h3>
<ul><li>MirrorFly's pricing starts at <strong>$299</strong> per month, positioning it as a higher-cost option to take into account.</li></ul><h2 id="4-agora">4. Agora</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Agora-Real-Time-Voice-and-Video-Engagement-8.jpeg" class="kg-image" alt="Top 10 Jitsi Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Agora's video calling SDK offers a wealth of features, including embedded voice and video chat, real-time recording, live streaming, and instant messaging. These features provide developers with the tools they need to create captivating and immersive live experiences within their applications.</p><h3 id="key-points-about-agora">Key points about Agora</h3>
<ul><li>Agora's video SDK provides a range of features, including embedded voice and video chat, real-time recording, live streaming, and instant messaging.</li><li>Additional add-ons like AR facial masks, sound effects, whiteboards, and more are available for an extra cost.</li><li>Agora's SD-RTN ensures extensive global coverage, connecting users from over 200 countries and regions with ultra-low latency streaming capabilities.</li><li>The pricing structure can be complex and may not be suitable for businesses with limited budgets.</li><li>Users seeking hands-on support may experience delays as Agora's support team may require additional time to provide assistance.</li></ul><h3 id="agora-pricing">Agora pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/agora-alternative"><strong>Agora</strong></a> offers Premium and Standard <a href="https://www.agora.io/en/pricing/">pricing</a> options, with the usage duration for audio and video calculated every month. </li><li>The pricing is categorized into four types based on video resolution, ensuring flexibility and cost-effectiveness. </li><li>The pricing structure includes <strong>Audio</strong> at <strong>$0.99</strong> per 1,000 participant minutes, <strong>HD Video</strong> at <strong>$3.99</strong> per 1,000 participant minutes, and <strong>Full HD Video</strong> at <strong>$8.99</strong> per 1,000 participant minutes.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-jitsi"><strong>Jitsi and Agora</strong></a><strong>.</strong></blockquote><h2 id="5-webrtc">5. WebRTC</h2>
<p><a href="https://www.videosdk.live/blog/webrtc" rel="noreferrer">WebRTC (Web Real-Time Communication)</a> is an API that enables real-time communication between web browsers. It allows direct peer-to-peer connections for audio, video, and data transmission without the need for plugins. WebRTC supports applications like video conferencing and voice calling.</p><h3 id="key-points-about-webrtc">Key points about WebRTC</h3>
<ul><li>WebRTC relies on establishing direct peer-to-peer connections, which can be hindered by firewalls and network address translation (NAT). This can lead to connectivity issues, especially in corporate or restrictive network environments.</li><li>Although major web browsers support WebRTC, there may be differences in implementation and support for certain features. Developers may need to account for these variations, which can increase complexity and testing efforts.</li><li>Real-time audio and video streaming can consume significant bandwidth. In scenarios with limited or unstable network connections, WebRTC applications may experience degraded performance, such as audio/video quality reduction or interruptions.</li><li>While WebRTC supports secure communication through encryption, vulnerabilities can still exist if proper security measures are not implemented. Malicious actors may exploit these vulnerabilities, leading to potential privacy breaches or unauthorized access.</li><li>WebRTC performs well in small-scale applications, but it may face challenges when scaling to a large number of participants or when dealing with complex network topologies. Ensuring a smooth and reliable user experience in such scenarios requires careful architectural considerations.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/webrtc-vs-jitsi"><strong>Jitsi and WebRTC</strong></a><strong>.</strong></blockquote><h2 id="6-vonage">6. Vonage</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-API-Fully-Programmable-and-Customizable-Vonage-7.jpeg" class="kg-image" alt="Top 10 Jitsi Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Despite being acquired by Vonage and subsequently renamed as the "Vonage API," TokBox is still widely known by its original name. TokBox's SDKs provide reliable point-to-point communication, making it a suitable choice for establishing proof of concepts during hackathons or meeting investor deadlines. Its SDKs offer the necessary tools for developers to create secure and seamless communication experiences within their applications.</p><h3 id="key-points-about-vonage">Key points about Vonage</h3>
<ul><li>The TokBox SDK allows developers to build custom audio/video streams with effects, filters, and AR/VR capabilities on mobile devices.</li><li>It supports a wide range of use cases, including 1:1 video, group video chat, and large-scale broadcast sessions.</li><li>Participants in a call can share screens, exchange messages via chat, and send data during the call.</li><li>One challenge with TokBox is the scaling costs, as the price per stream per minute increases as the user base grows.</li><li>Additional features like recording and interactive broadcast come at an additional cost.</li><li>Once the number of connections reaches 2,000, the platform switches to CDN delivery, resulting in higher latency.</li><li>Real-time streaming at scale can be challenging, as anything over 3,000 viewers requires switching to HLS, which introduces significant latency.</li></ul><h3 id="vonage-pricing">Vonage pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/vonage-alternative"><strong>Vonage</strong></a> follows a usage-based <a href="https://www.vonage.com/communications-apis/video/pricing/">pricing</a> model for their video sessions, where the cost is determined by the number of participants and calculated dynamically every minute. Their <strong>pricing plans</strong> start at <strong>$9.99</strong> per month and include a free allowance of 2,000 minutes per month for all plans.</li><li>Once the free allowance is exhausted, users are billed at a rate of <strong>$0.00395</strong> per participant per minute. <strong>Recording</strong> services are available starting at <strong>$0.010</strong> per minute, while <strong>HLS streaming</strong> is priced at <strong>$0.003</strong> per minute.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/vonage-vs-jitsi"><strong>Jitsi and Vonage</strong></a><strong>.</strong></blockquote><h2 id="7-aws-chime-sdk">7. AWS Chime SDK</h2>
<p>The <a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative">Amazon Chime SDK</a> serves as the core technology behind Amazon Chime, functioning independently without its user interface or outer shell.</p><h3 id="key-points-about-aws-chime-sdk">Key points about AWS Chime SDK</h3>
<ul><li>The Amazon Chime SDK allows up to 25 participants (or 50 for mobile users) in a video meeting.</li><li>Simulcast technology ensures consistent video quality across different devices and networks.</li><li>All calls, videos, and chats are encrypted for enhanced security.</li><li>It lacks certain features like polling, auto-sync with Google Calendar, and background blur effects.</li><li>Compatibility issues have been reported in Linux environments and with participants using the Safari browser.</li><li>Customer support experiences can vary, with inconsistent query resolution times depending on the support agent.</li></ul><h3 id="aws-chime-pricing">AWS Chime pricing</h3>
<ul><li><strong>The free</strong> <strong>basic plan</strong> allows users to have <strong>one-on-one audio/video calls</strong> and <strong>group chats</strong>.</li><li><strong>The Plus plan</strong>, <a href="https://aws.amazon.com/chime/pricing/">priced</a> at <strong>$2.50</strong> per monthly user, provides additional features including <strong>screen sharing</strong>, <strong>remote desktop control</strong>, <strong>1 GB of message history</strong> per user, and <strong>Active Directory integration</strong>.</li><li><strong>The Pro plan</strong>, priced at <strong>$15</strong> per user per month, includes all the features of the Plus plan and allows for <strong>meetings</strong> with <strong>three</strong> or more participants.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/amazon-chime-sdk-vs-jitsi"><strong>Jitsi and AWS Chime</strong></a><strong>.</strong></blockquote><h2 id="8-enablex">8. EnableX</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Call-API-Video-Chat-API-Voice-API-Video-Conferencing_enebleX-9.jpeg" class="kg-image" alt="Top 10 Jitsi Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>The EnableX SDK offers a wide range of capabilities, including video and audio calling, as well as collaborative features such as a whiteboard, screen sharing, annotation, recording, host control, and chat. With this SDK, you can easily integrate these functionalities into your application. It provides a video builder tool that allows you to create customized video-calling solutions that align with your application's requirements. You have the flexibility to personalize the live video streams with a custom user interface, choose appropriate hosting options, integrate billing functionality, and implement other essential features that cater to your specific needs.</p><h3 id="key-points-about-enablex">Key points about EnableX</h3>
<ul><li>EnableX provides a self-service portal with reporting capabilities and live analytics for tracking quality and facilitating online payments.</li><li>The SDK supports JavaScript, PHP, and Python programming languages.</li><li>Users can stream live content directly from their app/site or on platforms like YouTube and Facebook.</li><li>The support team's response time may take up to 72 hours, which could be a potential drawback for users in need of timely assistance.</li></ul><h3 id="enablex-pricing">EnableX pricing</h3>
<ul><li>EnableX <a href="https://www.enablex.io/cpaas/pricing/our-pricing">pricing</a> starts at <strong>$0.004</strong> per minute per participant for rooms with <strong>up to 50 people</strong>. For larger meetings or events, custom pricing options are available through their sales team.</li><li><strong>Recording</strong> is offered at a rate of <strong>$0.010</strong> per minute per participant.</li><li><strong>Transcoding of video</strong> into a different format is available at a rate of <strong>$0.010</strong> per minute.</li><li><strong>Additional storage</strong> can be obtained for <strong>$0.05</strong> per GB per month, and <strong>RTMP streaming</strong> is priced at <strong>$0.10</strong> per minute.</li></ul><h2 id="9-whereby">9. Whereby</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Calling-API-for-Web-and-App-Developers-Whereby-9.jpeg" class="kg-image" alt="Top 10 Jitsi Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Whereby is a user-friendly video conferencing platform designed for small to medium-sized meetings. While it provides a straightforward experience, it may not be suitable for larger businesses or those requiring advanced features.</p><h3 id="key-points-about-whereby">Key points about Whereby</h3>
<ul><li>Basic customization options are available for the video interface, but it has limited choices and does not support a fully custom experience.</li><li>Video calls can be embedded directly into websites, mobile apps, and web products, eliminating the need for external links or apps.</li><li>Whereby provides a seamless video conferencing experience, but it may lack advanced features compared to other tools. The maximum capacity for meetings is 50 participants.</li><li>Screen sharing for mobile users and customization options for the host interface may be limited.</li><li>Whereby does not offer a virtual background feature, and some users have reported issues with the mobile app, which can affect the overall user experience.</li></ul><h3 id="whereby-pricing">Whereby pricing</h3>
<ul><li>Whereby offers a <a href="https://whereby.com/information/pricing"><strong>pricing</strong></a><strong> model</strong> starting at <strong>$6.99</strong> per month, which includes an allocation of up to 2,000 user minutes that are renewed monthly.</li><li>Once the allocated minutes are exhausted, an additional charge of <strong>$0.004</strong> per minute applies.</li><li><strong>Cloud recording</strong> and <strong>live streaming</strong> options are available at a rate of <strong>$0.01</strong> per minute.</li><li><strong>Email</strong> and <strong>chat support</strong> are provided <strong>free</strong> of charge to all users, ensuring accessible assistance.</li><li><strong>Paid</strong> support plans offer additional features like <strong>technical onboarding</strong>, <strong>customer success management</strong>, and <strong>HIPAA compliance</strong>.</li></ul><h2 id="10-signalwire">10. SignalWire</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Building-The-Software-Defined-Telecom-Network-SignalWire-8.jpeg" class="kg-image" alt="Top 10 Jitsi Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>SignalWire is a platform that utilizes APIs to empower developers to effortlessly integrate live and on-demand video experiences into their applications. Its main objective is to simplify the processes of video encoding, delivery, and renditions, ensuring a seamless and uninterrupted video streaming experience for users.</p><h3 id="overview-of-signalwire">Overview of SignalWire</h3>
<ul><li>SignalWire offers an SDK that allows for the integration of real-time video and live streams into web, iOS, and Android applications. The SDK enables video calls with up to 100 participants in a real-time webRTC environment.</li><li>However, it's important to mention that the SDK does not provide built-in support for managing disruptions or user publish-subscribe logic, which developers will need to implement separately.</li></ul><h3 id="signalwire-pricing">SignalWire pricing</h3>
<ul><li>SignalWire follows a <a href="https://signalwire.com/pricing/video">pricing</a> model based on per-minute usage. For <strong>HD video calls</strong>, the pricing is <strong>$0.0060</strong> per minute, while for <strong>Full HD</strong> <strong>video calls</strong>, it is <strong>$0.012</strong> per minute. The actual cost may vary depending on the desired video quality for your application.</li><li>SignalWire also offers add-on features such as <strong>recording</strong>, available at a rate of <strong>$0.0045</strong> per minute, which allows you to capture and store video content for future use. </li><li>The platform also provides <strong>streaming</strong> capabilities priced at <strong>$0.10</strong> per minute, enabling you to broadcast your video content in real-time.</li></ul><h2 id="certainly">Why Choose VideoSDK?</h2>
<p><a href="https://www.videosdk.live">VideoSDK</a> is a standout software development kit (SDK) that focuses on fast and smooth integration. It provides developers with a low-code solution to swiftly build live video experiences in their applications. With VideoSDK, custom video conferencing solutions can be created and deployed in less than 10 minutes, significantly reducing the time and effort required for integration. Unlike other SDKs, Video SDK offers a streamlined process that simplifies the creation and embedding of live video experiences, enabling real-time connections, communication, and collaboration with ease.</p><h2 id="still-skeptical">Still Skeptical?</h2>
<p>Delve into the extensive <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart guide</a> of Video SDK and explore the specially designed <a href="https://docs.videosdk.live/code-sample">sample app</a> to see what it can do. Sign up now to start your integration journey and claim your <a href="https://www.videosdk.live/pricing" rel="noreferrer">Free $20</a>. Our dedicated team is just a click away, ready to assist you whenever you need support. We can't wait to see the remarkable experiences you create with Video SDK!</p>]]></content:encoded></item><item><title><![CDATA[Build a Flutter Video Call App with VideoSDK]]></title><description><![CDATA[Learn how to create a video calling app and transform your Flutter application into a real-time video calling platform with Video SDK. ]]></description><link>https://www.videosdk.live/blog/video-calling-in-flutter</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb7f</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 01 Oct 2024 09:16:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/05/Flutter1-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/05/Flutter1-1.jpg" alt="Build a Flutter Video Call App with VideoSDK"/><p>Do you wonder what actually goes into a video-calling app? If not, no worries: you will have a clear idea by the end of this blog. The word <strong>virtual</strong> comes up more and more often nowadays. 
We connect with our employees, colleagues, and others through online platforms to share content and knowledge and to report to one another. <a href="https://www.videosdk.live/"><strong>VideoSDK</strong></a> makes it easy to build an app that helps people connect remotely. During a meeting, participants can present content to others, raise a query by dropping a text message, ask questions by turning on the mic, and use many more features that you will get acquainted with by the end of this blog.</p><h2 id="5-steps-to-build-a-video-calling-app-in-flutter">5 Steps to Build a Video Calling App in Flutter</h2><p>Develop and launch apps for both Android and iOS at the same time.</p><!--kg-card-begin: markdown--><h3 id="prerequisite">Prerequisite</h3>
<!--kg-card-end: markdown--><ul><li>A VideoSDK developer account (if you don't have one, sign up on the <a href="https://app.videosdk.live/" rel="noopener noreferrer">VideoSDK Dashboard</a>)</li><li>A VideoSDK auth token from the <a href="https://app.videosdk.live/api-keys">VideoSDK Dashboard</a></li><li>The Flutter SDK installed on your device</li></ul><!--kg-card-begin: markdown--><h3 id="project-structure">Project Structure</h3>
<!--kg-card-end: markdown--><p>Create a new Flutter video call app using the command below.</p><pre><code class="language-js">$ flutter create videosdk_flutter_quickstart
</code></pre><p>Your project structure's <strong>lib directory</strong> should look like the one shown below.</p><pre><code class="language-js">root
    ├── android
    ├── ios
    ├── lib
         ├── api.dart
         ├── join_screen.dart
         ├── main.dart
         ├── room_controls.dart
         ├── room_screen.dart
    ├── participant_tile.dart</code></pre><h2 id="step-1-flutter-video-call-sdk-integration">Step 1: Flutter Video Call SDK Integration</h2><p>1: Install the <a href="https://pub.dev/packages/videosdk">videosdk package</a> from <a href="https://pub.dev/">pub.dev</a>.</p><pre><code class="language-js">$ flutter pub add videosdk

// run this command to add the http library, used to make the network call that generates a roomId

$ flutter pub add http</code></pre><h2 id="step-2-setup-for-android">Step 2: Setup for Android </h2><p>1: Update the <strong>AndroidManifest.xml</strong>  for the permissions we will be using to implement the audio and video features. </p><ul><li>You can find the <strong>AndroidManifest.xml</strong> file at <strong>&lt;project root&gt;/android/app/src/main/AndroidManifest.xml</strong></li></ul><!--kg-card-begin: markdown--><pre><code class="language-xml">&lt;uses-feature android:name=&quot;android.hardware.camera&quot; /&gt;
&lt;uses-feature android:name=&quot;android.hardware.camera.autofocus&quot; /&gt;
&lt;uses-permission android:name=&quot;android.permission.CAMERA&quot; /&gt;
&lt;uses-permission android:name=&quot;android.permission.RECORD_AUDIO&quot; /&gt;
&lt;uses-permission android:name=&quot;android.permission.ACCESS_NETWORK_STATE&quot; /&gt;
&lt;uses-permission android:name=&quot;android.permission.CHANGE_NETWORK_STATE&quot; /&gt;
&lt;uses-permission android:name=&quot;android.permission.MODIFY_AUDIO_SETTINGS&quot; /&gt;
&lt;uses-permission android:name=&quot;android.permission.FOREGROUND_SERVICE&quot;/&gt;
&lt;uses-permission android:name=&quot;android.permission.WAKE_LOCK&quot; /&gt;

&lt;uses-permission android:name=&quot;android.permission.INTERNET&quot;/&gt;
</code></pre>
<!--kg-card-end: markdown--><p>2: You will also need to set your build settings to Java 8, because the official WebRTC jar now uses static methods in the <strong>EglBase</strong> interface. Just add this to your app-level <strong>build.gradle</strong>:</p><!--kg-card-begin: markdown--><pre><code class="language-js">android {
    //...
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}
</code></pre>
<!--kg-card-end: markdown--><ul><li>If necessary, in the same <strong>build.gradle</strong>, increase the <strong>minSdkVersion</strong> in <strong>defaultConfig</strong> to <strong>23</strong> (the default Flutter generator currently sets it to <strong>16</strong>).</li><li>If necessary, in the same <strong>build.gradle</strong>, increase <strong>compileSdkVersion</strong> and <strong>targetSdkVersion</strong> to <strong>32</strong> (the default Flutter generator currently sets them to <strong>30</strong>).</li></ul><h2 id="step-3-setup-for-ios">Step 3: Setup for iOS</h2><p>1: Add the following entries, which allow your app to access the camera and microphone, to your Info.plist file, located at <strong>&lt;project root&gt;/ios/Runner/Info.plist</strong>:</p><!--kg-card-begin: markdown--><pre><code class="language-js">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;
</code></pre>
<!--kg-card-end: markdown--><h2 id="step-4start-writing-your-code-for-flutter-video-call-app">Step 4: Start Writing Your Code for the Flutter Video Call App</h2><p>1: Let's first set up the <strong>api.dart</strong> file.</p><p>Before jumping into anything else, you will write a function to generate a unique roomId. This requires an auth token, which you can generate either by using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for development.</p><!--kg-card-begin: markdown--><pre><code class="language-js">import 'dart:convert';
import 'package:http/http.dart' as http;

String token = &quot;&lt;Generated-from-dashboard&gt;&quot;;

Future&lt;String&gt; createRoom() async {
  final http.Response httpResponse = await http.post(
    Uri.parse(&quot;https://api.videosdk.live/v2/rooms&quot;),
    headers: {'Authorization': token},
  );

  return json.decode(httpResponse.body)['roomId'];
}
</code></pre>
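If you want to sanity-check this room-creation flow outside of Flutter, the request shape and response parsing can be sketched in Python. This mirrors the Dart `createRoom()` above (same endpoint and `Authorization` header); the response body shown is hypothetical, for illustration only:

```python
import json

API_URL = "https://api.videosdk.live/v2/rooms"  # same endpoint as the Dart code


def build_create_room_request(token):
    # Mirror the Dart createRoom(): a POST with the auth token in the
    # Authorization header and no request body.
    return {"method": "POST", "url": API_URL, "headers": {"Authorization": token}}


def parse_room_id(response_body):
    # The Dart code reads json.decode(httpResponse.body)['roomId'];
    # this does the same on the raw JSON string.
    return json.loads(response_body)["roomId"]


# Hypothetical response body, for illustration only:
print(parse_room_id('{"roomId": "abcd-efgh-ijkl"}'))  # abcd-efgh-ijkl
```

Note that a dashboard-generated token is only suitable for development; in production, generate tokens on your own server.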
<!--kg-card-end: markdown--><p>2: Now you will set up the <strong>join_screen.dart</strong> file. The joining screen will consist of:</p><ul><li><strong>Create Room Button</strong> - creates a new room for you.</li><li><strong>TextField for RoomId</strong> - holds the RoomId you want to join.</li><li><strong>Join Button</strong> - joins the room you provided.</li></ul><p>JoinScreen will accept 3 functions in the constructor:</p><ul><li><strong>onCreateRoomButtonPressed</strong> - invoked when the Create Room button is pressed</li><li><strong>onJoinRoomButtonPressed</strong> - invoked when the Join button is pressed</li><li><strong>onRoomIdChanged</strong> - invoked when the RoomId TextField value changes</li></ul><p>Replace the content of the join_screen.dart file with the code below.</p><!--kg-card-begin: markdown--><pre><code class="language-js">import 'package:flutter/material.dart';

class JoinScreen extends StatelessWidget {
  final void Function() onCreateRoomButtonPressed;
  final void Function() onJoinRoomButtonPressed;
  final void Function(String) onRoomIdChanged;

  const JoinScreen({
    Key? key,
    required this.onCreateRoomButtonPressed,
    required this.onJoinRoomButtonPressed,
    required this.onRoomIdChanged,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Column(
      mainAxisAlignment: MainAxisAlignment.center,
      children: [
        ElevatedButton(
            child: const Text(&quot;Create Room&quot;),
            onPressed: onCreateRoomButtonPressed),
        const SizedBox(height: 16),
        TextField(
            decoration: const InputDecoration(
              hintText: &quot;Room ID&quot;,
              border: OutlineInputBorder(),
            ),
            onChanged: onRoomIdChanged),
        const SizedBox(height: 8),
        ElevatedButton(
          child: const Text(&quot;Join&quot;),
          onPressed: onJoinRoomButtonPressed,
        )
      ],
    );
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>3: We are done creating the join screen; now let's create the room controls for the video calling app.</p><p>Create a new <strong>room_controls.dart</strong> file with a stateless widget named <strong>RoomControls</strong>.</p><p><strong>The RoomControls will consist of:</strong></p><ul><li><strong>Leave Button</strong> - leaves the room.</li><li><strong>Toggle Mic Button</strong> - enables or disables the mic.</li><li><strong>Toggle Camera Button</strong> - enables or disables the camera.</li></ul><p><strong>RoomControls will accept 3 functions in the constructor:</strong></p><ul><li><strong>onLeaveButtonPressed</strong> - invoked when the Leave button is pressed</li><li><strong>onToggleMicButtonPressed</strong> - invoked when the Toggle Mic button is pressed</li><li><strong>onToggleCameraButtonPressed</strong> - invoked when the Toggle Camera button is pressed</li></ul><!--kg-card-begin: markdown--><pre><code class="language-js">import 'package:flutter/material.dart';

class RoomControls extends StatelessWidget {
  final void Function() onToggleMicButtonPressed;
  final void Function() onToggleCameraButtonPressed;
  final void Function() onLeaveButtonPressed;

  const RoomControls({
    Key? key,
    required this.onToggleMicButtonPressed,
    required this.onToggleCameraButtonPressed,
    required this.onLeaveButtonPressed,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Row(
      mainAxisAlignment: MainAxisAlignment.spaceAround,
      children: [
        ElevatedButton(
          child: const Text(&quot;Leave&quot;),
          onPressed: onLeaveButtonPressed,
        ),
        ElevatedButton(
          child: const Text(&quot;Toggle Mic&quot;),
          onPressed: onToggleMicButtonPressed,
        ),
        ElevatedButton(
          child: const Text(&quot;Toggle Camera&quot;),
          onPressed: onToggleCameraButtonPressed,
        )
      ],
    );
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>4: Now we will create a ParticipantTile for each participant who joins the room.</p><p>For that, create a <strong>participant_tile.dart</strong> file and create a <strong>ParticipantTile stateless widget</strong>.</p><p><strong>The ParticipantTile will consist of:</strong></p><ul><li><strong>RTCVideoView</strong> - shows a remote participant's video stream.</li></ul><p>ParticipantTile will accept a Stream in the constructor:</p><ul><li><strong>stream</strong> - the remote participant's video stream</li></ul><!--kg-card-begin: markdown--><pre><code class="language-js">import 'package:flutter/material.dart';
import 'package:videosdk/rtc.dart';

class ParticipantTile extends StatelessWidget {
  final Stream stream;
  const ParticipantTile({
    Key? key, required this.stream,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(8.0),
      child: SizedBox(
        height: 200,
        width: 200,
        child: RTCVideoView(
          stream.renderer!,
          objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
        ),
      ),
    );
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>5: In this step, we are going to create the RoomScreen.</p><p>For that, create a <strong>room_screen.dart</strong> file and create <strong>RoomScreen as a stateful widget</strong>.</p><p><strong>RoomScreen will accept roomId and token in the constructor:</strong></p><ul><li><strong>roomId</strong> - the roomId of the room you want to join</li><li><strong>token</strong> - the VideoSDK auth token</li></ul><!--kg-card-begin: markdown--><pre><code class="language-js">import 'package:flutter/material.dart';
import 'package:videosdk/rtc.dart';
import 'room_controls.dart';
import 'participant_tile.dart';

class RoomScreen extends StatefulWidget {
  final String roomId;
  final String token;
  final void Function() leaveRoom;

  const RoomScreen(
      {Key? key,
      required this.roomId,
      required this.token,
      required this.leaveRoom})
      : super(key: key);

  @override
  State&lt;RoomScreen&gt; createState() =&gt; _RoomScreenState();
}

class _RoomScreenState extends State&lt;RoomScreen&gt; {
  Map&lt;String, Stream?&gt; participantVideoStreams = {};

  bool micEnabled = true;
  bool camEnabled = true;
  late Room room;

  void setParticipantStreamEvents(Participant participant) {
    participant.on(Events.streamEnabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; participantVideoStreams[participant.id] = stream);
      }
    });

    participant.on(Events.streamDisabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; participantVideoStreams.remove(participant.id));
      }
    });
  }

  void setRoomEventListener(Room _room) {
    setParticipantStreamEvents(_room.localParticipant);
    _room.on(
      Events.participantJoined,
      (Participant participant) =&gt; setParticipantStreamEvents(participant),
    );
    _room.on(Events.participantLeft, (String participantId) {
      if (participantVideoStreams.containsKey(participantId)) {
        setState(() =&gt; participantVideoStreams.remove(participantId));
      }
    });
    _room.on(Events.roomLeft, () {
      participantVideoStreams.clear();
      widget.leaveRoom();
    });
  }
  
   @override
  void initState() {
    super.initState();
    // Create an instance of Room
    
    room = VideoSDK.createRoom(
      roomId: widget.roomId,
      token: widget.token,
      displayName: &quot;Yash Chudasama&quot;,
      micEnabled: micEnabled,
      camEnabled: camEnabled,
      maxResolution: 'hd',
      defaultCameraIndex: 1,
      notification: const NotificationInfo(
        title: &quot;Video SDK&quot;,
        message: &quot;Video SDK is sharing screen in the room&quot;,
        icon: &quot;notification_share&quot;, // drawable icon name
      ),
    );

    setRoomEventListener(room);

    // Join room
    room.join();
  }

  @override
  Widget build(BuildContext context) {
    return SingleChildScrollView(
      child: Column(
        crossAxisAlignment: CrossAxisAlignment.center,
        children: [
          Text(&quot;Room ID: ${widget.roomId}&quot;),
          RoomControls(
            onToggleMicButtonPressed: () {
              micEnabled ? room.muteMic() : room.unmuteMic();
              micEnabled = !micEnabled;
            },
            onToggleCameraButtonPressed: () {
              camEnabled
                  ? room.disableCamera()
                  : room.enableCamera();
              camEnabled = !camEnabled;
            },
            onLeaveButtonPressed: () =&gt; room.leave(),
          ),
          ...participantVideoStreams.values
              .map(
                (e) =&gt; ParticipantTile(
                  stream: e!,
                ),
              )
              .toList(),
        ],
      ),
    );
  }
}
</code></pre>
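The event handlers above boil down to simple bookkeeping: a map from participant id to video stream, where entries are added on `streamEnabled`, removed on `streamDisabled` and `participantLeft`, and cleared on `roomLeft`. An illustrative, language-neutral sketch of that logic (this is not SDK code):

```python
# Illustrative model of the participantVideoStreams bookkeeping in
# _RoomScreenState above. Only video streams are tracked; audio events
# are ignored, exactly as in the Dart handlers.
streams = {}


def on_stream_enabled(participant_id, kind, stream):
    if kind == "video":
        streams[participant_id] = stream


def on_stream_disabled(participant_id, kind):
    if kind == "video":
        streams.pop(participant_id, None)


def on_participant_left(participant_id):
    streams.pop(participant_id, None)


on_stream_enabled("alice", "video", "stream-a")
on_stream_enabled("bob", "audio", "stream-b")  # ignored: not a video stream
print(sorted(streams))  # ['alice']
```

In the Flutter widget, each mutation additionally goes through `setState` so the list of `ParticipantTile`s rebuilds whenever the map changes.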
<!--kg-card-end: markdown--><p>6: All of the above steps are of no use until we wire them up in our <strong>main.dart</strong> file.</p><p>In this step, we will change the <strong>main.dart</strong> file to add a condition that decides whether the JoinScreen or the RoomScreen appears.</p><p>Remove the boilerplate code from <strong>main.dart</strong>. Create a <strong>VideoSDKQuickStart</strong> StatefulWidget and pass it to <strong>MaterialApp</strong>.</p><p>The <strong>VideoSDKQuickStart</strong> widget will return <strong><code>RoomScreen</code></strong> if the room is active; otherwise, it returns <strong>JoinScreen</strong>.</p><!--kg-card-begin: markdown--><pre><code class="language-js">import 'package:flutter/material.dart';
import 'api.dart';
import 'join_screen.dart';
import 'room_screen.dart';

void main() {
  runApp(
    const MaterialApp(
      title: 'VideoSDK QuickStart',
      home: VideoSDKQuickStart(),
    ),
  );
}

class VideoSDKQuickStart extends StatefulWidget {
  const VideoSDKQuickStart({Key? key}) : super(key: key);

  @override
  State&lt;VideoSDKQuickStart&gt; createState() =&gt; _VideoSDKQuickStartState();
}

class _VideoSDKQuickStartState extends State&lt;VideoSDKQuickStart&gt; {
  String roomId = &quot;&quot;;
  bool isRoomActive = false;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text(&quot;VideoSDK QuickStart&quot;),
      ),
      body: Padding(
        padding: const EdgeInsets.all(16.0),
        child: isRoomActive
            ? RoomScreen(
                roomId: roomId,
                token: token,
                leaveRoom: () {
                  setState(() =&gt; isRoomActive = false);
                },
              )
            : JoinScreen(
                onRoomIdChanged: (value) =&gt; roomId = value,
                onCreateRoomButtonPressed: () async {
                  roomId = await createRoom();
                  setState(() =&gt; isRoomActive = true);
                },
                onJoinRoomButtonPressed: () {
                  setState(() =&gt; isRoomActive = true);
                },
              ),
      ),
    );
  }
}
</code></pre>
<!--kg-card-end: markdown--><h2 id="step-5run-your-code-now">Step 5: Run Your Code Now</h2><!--kg-card-begin: markdown--><pre><code class="language-bash">$ flutter run
</code></pre>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/05/giphy-1.gif" class="kg-image" alt="Build a Flutter Video Call App with VideoSDK" loading="lazy" width="480" height="237"/></figure><h3 id="conclusion">Conclusion</h3><p>With this, we successfully built a Flutter video call app with VideoSDK. If you wish to add features like chat messaging and screen sharing, you can always check out our <a href="https://docs.videosdk.live/" rel="noopener">documentation</a>. If you face any difficulty with the implementation, you can connect with us on our <a href="https://discord.gg/Gpmj6eCq5u" rel="noopener">Discord community</a>.</p><h3 id="more-flutter-resources">More Flutter Resources</h3><ul><li><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/getting-started">VideoSDK Flutter SDK Integration Guide</a></li><li><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start">Flutter SDK QuickStart</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example">Flutter SDK GitHub Example</a></li><li><a href="https://www.videosdk.live/blog/video-calling-in-flutter">Build a Flutter Video Calling App with VideoSDK</a></li><li><a href="https://youtu.be/4h57eVcaC34">Video Call Flutter App with VideoSDK (Android &amp; iOS)</a></li><li><a href="https://pub.dev/packages/videosdk">Official VideoSDK Flutter plugin (feel free to give it a star ⭐)</a></li></ul>]]></content:encoded></item><item><title><![CDATA[The Future of Live Commerce + Trends]]></title><description><![CDATA[Live commerce is an amazing opportunity for small sellers and big brands alike to showcase what they can offer their clients and viewers in the most lucrative way. 
]]></description><link>https://www.videosdk.live/blog/live-streaming-is-the-new-future-of-e-commerce</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb69</guid><category><![CDATA[E-Commerce]]></category><category><![CDATA[live-streaming]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Mon, 30 Sep 2024 14:59:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/08/Low-latency-live-streaming--2.png" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Low-latency-live-streaming--2.png" alt="The Future of Live Commerce + Trends"/><p>Ever since the e-commerce industry began, it has not stopped growing. It has become the trendiest channel of engagement when it comes to shopping. Before buying anything from anywhere, we now spend generous time on the internet to find the best deal, and we rarely skip that step. Live commerce marks a considerable change in how various retail industries function.</p><h2 id="what-is-live-commerce">What is Live Commerce? </h2><p>Live commerce, also known as live shopping or <a href="https://www.videosdk.live/solutions/live-shopping" rel="noreferrer">live streaming shopping</a>, refers to a form of e-commerce where sellers promote and sell products in real time through live video broadcasts. 
It combines elements of traditional TV shopping channels with the interactivity and convenience of online shopping.</p><h3 id="how-does-live-commerce-work">How does live commerce work?</h3><p>Live commerce involves a host or influencer showcasing products through a live video stream, while viewers can interact in real time by asking questions, making inquiries, or purchasing products directly from the stream.</p><h2 id="how-popular-is-live-commerce">How popular is live commerce?</h2><p>Live commerce has gained significant popularity in recent years, especially in countries like China and South Korea—opening new opportunities for showing off dropshipping products in real time and engaging directly with shoppers. It has also gained traction in other parts of the world, with many brands and retailers adopting live commerce strategies to engage with consumers and drive sales.</p><h3 id="what-makes-e-commerce-a-trend">What makes e-commerce a trend?</h3><p>The e-commerce industry has always been caught bringing up ideas that catch the eye of attraction for consumers. All that the companies need is to do something exciting as well as attractive for its end consumer. The companies are emerging and adapting a lot in bringing innovation to their sales strategies by incorporating the best sales lead database to enhance their targeting and customer engagement. Many brands now complement their online stores with a <a href="https://pagination.com/catalog-maker/" rel="noreferrer">digital catalog</a>, letting customers explore collections, compare products, and shop directly, seamlessly connecting with live commerce and social shopping. As we have kept e-commerce in knowledge for a decade or more, we have seen transitions in their way of approaching the consumer community. <br><br>Technology plays an important role in the upbringing of this industry. 
Research states that the e-commerce industry in China has grown its customer base over the last few years by changing its strategies. It was believed that the touch of personal selling in e-commerce helped these companies exceed their targets. Chinese companies like Alibaba (Taobao) and Vipshop have recently come up with the idea of live streaming sales, or simply live e-commerce. The rise of live e-commerce in China, led by Alibaba and Vipshop, emphasizes personal engagement to boost sales. This trend underscores the value of headless commerce development services for enhanced customer experiences. To support these advanced commerce strategies, <a href="https://www.digitalsilk.com/web-development/ecommerce-development/" rel="noreferrer">ecommerce developers</a> help businesses build scalable platforms that enable seamless live shopping, personalized experiences, and improved customer engagement.</br></br></p><h2 id="swift-in-the-past-years-with-live-stream-commerce-sales"><strong>Shift in the past years with live stream commerce sales</strong></h2><p>To be on point, livestream commerce has brought an amazing shift to the e-commerce industry in the past years. Companies that started selling via live streams on their applications claimed a rise of 30% in customer engagement. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Graphical-Increase-1.png" class="kg-image" alt="The Future of Live Commerce + Trends" loading="lazy" width="1600" height="900"><figcaption><span style="white-space: pre-wrap;">Shift in the past years with live-stream sales</span></figcaption></img></figure><p>The idea was to generate engagement. Live streaming selling, or live commerce, started with one key goal: to increase the client base and sustainability. 
Live streaming builds a huge viewing community, letting brands make a lasting impact on viewers with their products. It became a trend because viewers found a sense of personal touch from the brand; it is always appealing to be shown or taught something with genuine enthusiasm. Live commerce is also paired with live chat, where the seller can answer viewers’ questions. This has made selling and branding comparatively easy, since all of the customers’ ifs and buts can be addressed within the stream itself. Live commerce can also be seen as a retail market that converts clients through appealing communication over the web. </p><h2 id="what-are-the-benefits-of-live-commerce">What are the benefits of live commerce?</h2><p>Live commerce offers several advantages, such as creating an engaging and interactive shopping experience, allowing viewers to ask questions and receive immediate responses, building trust through product demonstrations, and providing a sense of urgency with limited-time offers.</p><h2 id="the-future-of-e-commerce-in-the-next-five-years-is-live-commerce">The future of e-commerce in the next five years is live commerce</h2><p>As industries modernize, the blueprint for winning customer attention has also changed. Companies and brands have started focusing more on client engagement than on selling. Leveraging different <a href="https://www.brandcrowd.com/blog/brand-architecture-demystified" rel="noreferrer">brand architecture models</a> can help companies structure their product lines and sub-brands effectively, ensuring consistent messaging and stronger engagement during live commerce campaigns. With consumers as the primary focus, plans are set to deliver satisfaction and novelty for them. The value of live streaming is increasing each day as sellers, companies, and brands learn how it moves them toward becoming known in the market. 
It is expected that by 2025, the global market share of  Live commerce will increase by multiple times with the technologies being adopted by brands and sellers.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/MArket-size-of-live-commerce.png" class="kg-image" alt="The Future of Live Commerce + Trends" loading="lazy" width="1600" height="900"><figcaption><span style="white-space: pre-wrap;">The Market Size of Live Commerce</span></figcaption></img></figure><h2 id="the-value-of-live-commerce-is-increasing-each-day">The value of live commerce is increasing each day</h2><p>Live streaming masters at creating engagement and so it manages to draft one for the e-commerce industry as well. As e-commerce dwells some of its part with live commerce it makes a huge area for the customers designing products with the mindset of the client. This has helped brands to create a strong client base for their companies. It helps brands to return viewers long to their streams, creating a lasting impression.<br/></p><p><strong>Client conversion ratio:</strong> Live streaming keeps clients engrossed with the content delivery while being live. It has been noticed that clients have shown up 10 times more than the standard e-commerce. <a href="https://www.videosdk.live/solutions/live-shopping">Live commerce</a> allows viewers to engage themselves with the liveliness of the stream. The stream also focuses on consumer-attractive deals, with coupons, offers, and <a href="https://www.shopper.com/discount-codes">discount vouchers</a> making them valid till the stream spans.  Displaying a QR code generated from <a href="https://www.the-qrcode-generator.com/" rel="noreferrer">The QR Code Generator (TQRCG)</a> on-screen during the stream, and linking it directly to the deal or product page, makes this even more effective. 
It gives viewers on any device a one-scan path from watching to purchasing without ever leaving the stream. These strategies create client sustainability along with higher conversions.<br/></p><p><strong>Personal touch:</strong> Live commerce, similar to a retail store, also ensures a personal touch. The idea revolves around engaging clients while allowing brands to <a href="https://www.logodesign.net/web" rel="noreferrer">create a custom website</a> that showcases their superior-quality products and highlights their benefits for customers. Live commerce creates a lively atmosphere during the stream, where the brand takes a homely sales approach with its customers. Because low latency live streaming happens in real time, viewers can also hold a conversation with the brands via live chat. All of this makes live commerce more captivating. Also, brands can use a <a href="https://www.brandcrowd.com/logo-maker">logo maker</a> to create branding elements for their e-commerce website to resonate with the audience.</p><p><strong>Customer engagement:</strong> A video broadcast live has always been engaging, and it creates more traffic. Above all, a seller can present its brand with so many lucrative deals that it ultimately captures customer attention. Even a single live button displayed on the screen makes an impact on viewers. The impression brands create through a video says a lot about how viewers flock to live commerce deals.</p><p><strong>Brand pull: </strong>Live commerce is a new technology that has given a boost to the e-commerce industry. Popular brands are going live to showcase their products with various offers, collaborating with celebrities and influencers. All these strategies, along with scalable technology, create a huge impact on viewers. 
These viewers tend to follow the people who influence them, so live commerce with these influencers becomes a plus. It considerably raises brand awareness among people, escalating brand growth. All brands need to do is ask influencers to add their <a href="https://www.designcrowd.com/logo-design">logo design</a> or other brand elements to <a href="https://contentbase.com/blog/successful-content-marketing-techniques/" rel="noreferrer">spread brand awareness</a> to their audience.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/Impact-of-Technology-3.png" class="kg-image" alt="The Future of Live Commerce + Trends" loading="lazy" width="1600" height="900"><figcaption><span style="white-space: pre-wrap;">Live commerce user retention</span></figcaption></img></figure><h2 id="how-does-technology-play-its-part-in-enhancing-live-commerce">How does technology play its part in enhancing live commerce?</h2><p>Live commerce is becoming more popular day by day, and it is technology that brought live commerce into existence. Live-streaming technology is, accordingly, becoming popular among various e-commerce websites, companies, and brands. Various technologies within live streaming make live commerce more interesting. </p><p><strong>Improved recommendations:</strong> Technology has advanced a long way. On an <a href="https://amasty.com/blog/20-best-e-commerce-stores/" rel="noreferrer">e-commerce website</a>, a viewer sees recommendations or choices similar to their taste. AI technology helps an e-commerce user find deals according to the suggestions displayed. Technology has become so precise that it does not let a seller miss the opportunity to convert a viewer into a trusted consumer. 
For instance, <a href="https://www.experro.com/blog/augmented-reality-jewelry/">jewelry AR</a> technology not only allows customers to virtually try on pieces but also enhances recommendations by analyzing user preferences and suggesting styles that best match their taste.</p><p><strong>Sense of being live:</strong> Technology today has become real-time. It has brought up products that have enhanced live communication and focused on brightening networks. Similarly, the innovation of live streaming has become a deal that can make huge engagements with the title of being live on a virtual platform. On live commerce, one can be in touch with the seller personally, chatting with them. Communicating in real-time helps to bring up bonding between the two communities.</p><p><strong>High information density:</strong> Technology enables a user to retain huge data. It collects and summarises data of clients, viewers, and also their searches and purchases, making a strong database. This is particularly beneficial in sectors like <a href="https://thelist.app/collections/Men/Clothing/Jackets"><strong>men's fashio</strong>n</a>, where tracking customer preferences, trends, and purchase history allows for personalized recommendations and targeted marketing. Technology has made it possible. A seller, brand, or company can avail of a huge database and retain their customers easily. Live commerce is a boon for generating a large database as it builds a huge viewing community, which communicates with the brands’ product sales. Nonetheless, they also help in making reliable engagement strategies.</p><p><strong>Increased conversion rates:</strong> Live commerce has considerably seen an increasing graph of viewers converting into customers. The innovation brought up by the brands cares for an appealing touch, increasing traffic at their streams. It is also noticed that just by live commerce strategies the conversion rates have comparatively increased. 
It was noticed that the fashion industry increased its online presence by 20%, just by live commerce, engaging viewers with numerous influencers.</p><h2 id="which-platforms-support-live-commerce">Which platforms support live commerce?</h2><p>Various platforms support live commerce, including social media platforms like Facebook, Instagram, and <a href="https://techcrunch.com/2023/02/17/more-brands-are-now-testing-tiktoks-shop-feature-in-the-u-s/">TikTok</a>, as well as dedicated live streaming platforms such as Amazon Live, Alibaba's Taobao Live, and JD Live in China. Effective <a href="https://www.shoutdigital.com.au/blog/video-marketing-guide-improve-keyword-research/" rel="noreferrer">video SEO strategies</a> through targeted keyword research can further enhance engagement and reach for live streams on these platforms. Each of these platforms provides the necessary features. For example, people can easily implement their <a href="https://www.highflyers.media/blogs/tiktok-video-marketing-strategies-for-a-new-brand">TikTok video marketing strategies</a> and attract more users and attention.</p><h3 id="big-tech-is-launching-live-shopping-platforms">Big Tech is launching Live Shopping platforms </h3><p><strong>Amazon Live</strong>: The E-commerce giant is naturally looking to expand towards live streaming as it launched Amazon Live for its US citizens in 2022. Amazon Live is a platform where brands, retailers, &amp; Amazon talent go live &amp; showcase the products, and customers can have live interaction with them via chat &amp; emojis and easily buy the products. Further, streamers on the platform get a unique URL that works as their storefront where customer can view all their previous live streams as well as recorded videos. 
The adoption of <a href="https://techcrunch.com/2023/05/05/amazons-tiktok-like-inspire-shopping-feed-available-all-customers-us/">Amazon Live</a> has been incredible so far, with content creators flocking to Amazon Live to generate huge profits for themselves.</p><p><strong>eBay Live</strong>: eBay is not far behind Amazon either, having announced a dedicated live shopping platform. eBay Live is in beta and will offer live streaming for curated trading cards &amp; collectibles, as eBay has always had a huge, fast-growing community around collectibles &amp; trading cards. Moreover, these collectibles will also include NFTs, as the collectible market &amp; NFTs go hand in hand. This unique combo just goes to show the reach of live shopping &amp; live commerce. Furthermore, eBay promises their live platform will have an engaging environment and deliver a more streamlined, sophisticated way for communities to connect, buy, and sell. With more sellers tapping into this live commerce opportunity, tools like <a href="https://linkmybooks.com/blog/best-accounting-software-for-ebay-sellers" rel="noreferrer">eBay accounting software</a> can help manage sales records, track profits, and simplify financial reporting.</p><p><strong>Facebook Live</strong>: Facebook is also testing out its Facebook Live for creators. Facebook live shopping is a live streaming feature that has been around for a couple of years, letting creators showcase their products to their friends, in groups, &amp; on pages, but seamless adoption by brands &amp; influencers was missing. However, as trends change and <a href="https://www.videosdk.live/solutions/live-shopping">Live shopping</a> becomes trendier, Facebook is looking to increase user adoption by making it much easier to host live streams. 
Keeping up with the latest <a href="https://leadsbridge.com/blog/facebook-trends/" rel="noreferrer">Facebook trends</a> helps brands understand how emerging features like live shopping are evolving and how they can use them to drive engagement and social commerce sales. They also launched a couple of features like shopping in groups &amp; product recommendations to encourage live shopping further. This would allow group admins to enable the shopping feature on Facebook pages &amp; groups. Additionally, on Facebook, many are hosting live-stream shopping events to showcase their products &amp; sell them successfully. This has been very profitable for influencers &amp; celebrities so far.</p><h3 id="can-anyone-participate-in-live-commerce">Can anyone participate in live commerce?</h3><p>While anyone can participate in watching live commerce broadcasts, hosting live commerce sessions typically requires a partnership or collaboration with the platform or an influencer. However, with the rise of social commerce, individuals can leverage live-streaming features on social media platforms to showcase and sell products to their audience.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2021/08/AR--VR--and-AI--2.png" class="kg-image" alt="The Future of Live Commerce + Trends" loading="lazy" width="1600" height="900"><figcaption><span style="white-space: pre-wrap;">How does Technology play its part in enhancing Live Commerce?</span></figcaption></img></figure><h2 id="future-of-live-commerce-with-technological-transitions">Future of live commerce with technological transitions</h2><p>Earlier, e-commerce was merely used by the fashion and apparel industry. Where the brands just focused on bringing up what’s new in the fashion market. Gradually with an increase in technology, the e-commerce industry got a kick with various other industries. 
These e-commerce websites being a B2C market also made conversions with joining <a href="https://virtocommerce.com/blog/what-is-b2b-ecommerce">B2B ecommerce</a> and C2C business models. </p><p>Technology has brought up several transitions in the e-commerce market. Here we realize that these markets are now experiencing a new phase which is live commerce. A combination of technology and marketplace. As the word sounds exciting, so is its happening. It has created an everlasting impact on e-commerce websites, brands, sellers, and the marketplace community. And the practice of this activity has made huge engagements. The sellers as well as the buyers, both have convinced themselves that live commerce is an approaching deal</p><p>In the current phase, the low latency live streaming trends the most. It assures viewers with a real-time experience. The brands can be in direct conversation with the viewers through live chat. The future is going to be more amazing with more enhanced attributes. We are going to experience huge changes in technology. </p><ul><li>Technology will help to enhance and expand the micro sectors, bringing them up, competing with huge brands, and excelling in the standard of living.</li><li>We will experience growth of under-rated influencers that will considerably lower the brands’ expenses</li><li>Technology is going to have a positive impact on the new sectors. We will observe those sectors which were never seen in the e-commerce industry to date.</li><li>It can be observed that technological transitions will bring changes in live commerce with tools like AR, VR, AI, and more, making real-time experiences more fascinating and accessible.</li></ul><p>Technology has always played a crucial role in bringing change in several industries. Bringing up newness each day into technology has increased engagement and has made the conversion ratio more accelerated. 
E-commerce has recently begun with the practice of live streaming and has implemented the technology well. With transitions in technology, live commerce creates a dynamic impact on sales and strategies for the coming times.</p>]]></content:encoded></item><item><title><![CDATA[Post-call Transcription & Summary in React-Native]]></title><description><![CDATA[Learn to integrate Post-call Transcription & Summary in React-Native. Our guide ensures your app delivers accurate transcriptions and concise summaries for better user engagement.]]></description><link>https://www.videosdk.live/blog/post-call-transcription-in-react-native</link><guid isPermaLink="false">6683d42f20fab018df10f3fd</guid><category><![CDATA[React Native]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 30 Sep 2024 09:46:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/Post-time-transcription-and-summary.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Post-time-transcription-and-summary.png" alt="Post-call Transcription & Summary in React-Native"/><p>Post-call transcription and summary is a powerful feature provided by <a href="https://www.videosdk.live/">VideoSDK </a>that allows users to generate detailed transcriptions and summaries of recorded meetings after they have concluded. This feature is particularly beneficial for capturing and documenting important information discussed during meetings, ensuring that nothing is missed and that there is a comprehensive record of the conversation.</p><h3 id="how-post-call-transcription-works">How Post-Call Transcription Works?</h3><p><strong>Post-call transcription</strong> involves processing the recorded audio or video content of a meeting to produce a textual representation of the conversation. 
Here’s a step-by-step breakdown of how it works:</p><ol><li><strong>Recording the Meeting:</strong> During the meeting, the audio and video are recorded. This can include everything that was said and any shared content, such as presentations or screen shares.</li><li><strong>Uploading the Recording:</strong> Once the meeting is over, the recorded file is uploaded to the VideoSDK platform. This can be done automatically or manually, depending on the configuration.</li><li><strong>Transcription Processing:</strong> The uploaded recording is then processed by VideoSDK’s transcription engine. This engine uses advanced speech recognition technology to convert spoken words into written text.</li><li><strong>Retrieving the Transcription:</strong> After the transcription process is complete, the textual representation of the meeting is made available. This text can be accessed via the VideoSDK API and used in various applications.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e-1.png" class="kg-image" alt="Post-call Transcription & Summary in React-Native" loading="lazy" width="2906" height="1446"/></figure><h3 id="benefits-of-post-call-transcription">Benefits of Post-Call Transcription</h3><ul><li><strong>Accurate Documentation:</strong> Provides a precise record of what was discussed, which is invaluable for meeting minutes, legal documentation, and reference.</li><li><strong>Enhanced Accessibility:</strong> Makes content accessible to those who may have missed the meeting or have hearing impairments.</li><li><strong>Easy Review and Analysis:</strong> Enables quick review of key points and decisions made during the meeting without having to re-watch the entire recording.</li></ul><h2 id="lets-get-started">Let's Get started </h2><p>VideoSDK empowers you to seamlessly integrate the video calling feature into your React application within minutes.</p><p>In 
this quickstart, you'll explore the group calling feature of VideoSDK. Follow the step-by-step guide to integrate it within your application.</p><h3 id="prerequisites%E2%80%8B">Prerequisites<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#prerequisites">​</a></h3><ul><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li><li>Android Studio or Xcode installed</li></ul><h3 id="app-architecture%E2%80%8B">App Architecture<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#app-architecture">​</a></h3><p>The App will contain two screens :</p><ol><li><code>Join Screen</code> : This screen allows users to either create a meeting or join a predefined meeting.</li><li><code>Meeting Screen</code> : This screen contains a participant list and meeting controls, such as enabling/disabling the microphone and camera and leaving the meeting.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-7.png" class="kg-image" alt="Post-call Transcription & Summary in React-Native" loading="lazy" width="2751" height="1541"/></figure><h2 id="getting-started-with-the-code%E2%80%8B">Getting Started with the Code!<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#getting-started-with-the-code">​</a></h2><h3 id="create-app%E2%80%8B">Create App<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#create-app">​</a></h3><p>Create a new React Native App using the below command.</p><pre><code class="language-bash">npx react-native init AppName</code></pre><p>For React Native setup, you can follow the <a href="https://reactnative.dev/docs/environment-setup" rel="noopener noreferrer">Official Documentation</a>.</p><h3 id="videosdk-installation%E2%80%8B">VideoSDK Installation<a 
href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#videosdk-installation">​</a></h3><p>Install the VideoSDK by using the following command. Ensure that you are in your project directory before running this command.</p><pre><code class="language-bash">npm install "@videosdk.live/react-native-sdk"  "@videosdk.live/react-native-incallmanager"</code></pre><h3 id="project-structure">Project Structure</h3><pre><code class="language-Structure">  root
   ├── node_modules
   ├── android
   ├── ios
   ├── App.js
   ├── api.js
   ├── index.js</code></pre><h3 id="project-configuration%E2%80%8B">Project Configuration<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#project-configuration">​</a></h3><h4 id="android-setup%E2%80%8B">Android Setup<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#android-setup">​</a></h4><ol><li>Add the required permissions in the <code>AndroidManifest.xml</code> file.</li></ol><p>AndroidManifest.xml</p><figure class="kg-card kg-code-card"><pre><code class="language-xml">&lt;manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
&gt;
    &lt;!-- Give all the required permissions to app --&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) --&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" /&gt;

    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)--&gt;
    &lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;

    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
    &lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;

    &lt;application&gt;
        &lt;meta-data
            android:name="live.videosdk.rnfgservice.notification_channel_name"
            android:value="Meeting Notification"
        /&gt;
        &lt;meta-data
            android:name="live.videosdk.rnfgservice.notification_channel_description"
            android:value="A notification will appear whenever a meeting starts."
        /&gt;
        &lt;meta-data
            android:name="live.videosdk.rnfgservice.notification_color"
            android:resource="@color/red"
        /&gt;
        &lt;service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"&gt;&lt;/service&gt;
        &lt;service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"&gt;&lt;/service&gt;
    &lt;/application&gt;
&lt;/manifest&gt;</code></pre><figcaption>AndroidManifest.xml</figcaption></figure><p>2. Update your <code>colors.xml</code> file for internal dependencies.</p><p>android/app/src/main/res/values/colors.xml</p><pre><code class="language-xml">&lt;resources&gt;
  &lt;item name="red" type="color"&gt;
    #FC0303
  &lt;/item&gt;
  &lt;integer-array name="androidcolors"&gt;
    &lt;item&gt;@color/red&lt;/item&gt;
  &lt;/integer-array&gt;
&lt;/resources&gt;</code></pre><p>3. Link the necessary VideoSDK Dependencies.</p><p>android/app/build.gradle</p><pre><code class="language-gradle">  dependencies {
   implementation project(':rnwebrtc')
   implementation project(':rnfgservice')
  }</code></pre><p>android/settings.gradle</p><pre><code class="language-gradle">include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')
</code></pre><p>MainApplication.java</p><pre><code class="language-java">import live.videosdk.rnwebrtc.WebRTCModulePackage;
import live.videosdk.rnfgservice.ForegroundServicePackage;

public class MainApplication extends Application implements ReactApplication {
  protected List&lt;ReactPackage&gt; getPackages() {
      @SuppressWarnings("UnnecessaryLocalVariable")
      List&lt;ReactPackage&gt; packages = new PackageList(this).getPackages();
      // Packages that cannot be autolinked yet can be added manually here, for example:

      packages.add(new ForegroundServicePackage());
      packages.add(new WebRTCModulePackage());

      return packages;
  }
}</code></pre><p>android/gradle.properties</p><pre><code class="language-properties"># This flag fixes a WebRTC runtime problem on some devices.
android.enableDexingArtifactTransform.desugaring=false</code></pre><p>4. Include the following line in your <code>proguard-rules.pro</code> file (optional; only required if you are using ProGuard).</p><p>android/app/proguard-rules.pro</p><pre><code class="language-java">-keep class org.webrtc.** { *; }</code></pre><p>5. In your <code>build.gradle</code> file, update the minimum OS/SDK version to <code>23</code>.</p><pre><code class="language-gradle">buildscript {
  ext {
      minSdkVersion = 23
  }
}</code></pre><h4 id="ios-setup%E2%80%8B">iOS Setup<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#ios-setup">​</a></h4><ol><li><strong>IMPORTANT</strong>: Ensure that you are using CocoaPods version 1.10 or later.</li></ol><p>To update CocoaPods, you can reinstall the gem using the following command:</p><pre><code class="language-sh">$ sudo gem install cocoapods</code></pre><p>2. Manually link react-native-incall-manager (if it is not linked automatically).</p><ul><li>Select <code>Your_Xcode_Project/TARGETS/BuildSettings</code>, in Header Search Paths, add <code>"$(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager"</code></li></ul><p>3. Change the path of <code>react-native-webrtc</code> in your Podfile:</p><p><strong>Podfile</strong></p><pre><code class="language-sh">pod 'react-native-webrtc', :path =&gt; '../node_modules/@videosdk.live/react-native-webrtc'</code></pre><p>4. Change the version of your platform.</p><p>You need to change the platform field in the Podfile to 12.0 or above, because react-native-webrtc doesn't support iOS versions earlier than 12.0. Update the line: <code>platform :ios, '12.0'</code>.</p><p>5. Install pods.</p><p>After updating the version, install the pods by running the following command:</p><pre><code class="language-sh">pod install</code></pre><p>6. 
Declare permissions in Info.plist:</p><p>Add the following lines to your Info.plist file, located at (project folder)/ios/projectname/Info.plist:</p><p>ios/projectname/Info.plist</p><pre><code class="language-xml">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><h4 id="register-service%E2%80%8B">Register Service<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#register-service">​</a></h4><p>Register VideoSDK services in your root <code>index.js</code> file for the initialization service.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { register } from "@videosdk.live/react-native-sdk";

register();

AppRegistry.registerComponent(appName, () =&gt; App);</code></pre><figcaption>index.js</figcaption></figure><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with api.js<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apijs">​</a></h3><p>Before moving on, you need to make an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><pre><code class="language-js">export const token = "&lt;Generated-from-dashboard&gt;";
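// --- Optional, hypothetical helper (not part of the quickstart) ---
// Validates an existing meeting id before joining. The endpoint below is an
// assumption based on VideoSDK's REST API; confirm it against the current
// API reference. Export these if you want to call them from App.js.
const validateMeetingUrl = (roomId) =>
  `https://api.videosdk.live/v2/rooms/validate/${roomId}`;

const validateMeeting = async ({ roomId, token }) => {
  const res = await fetch(validateMeetingUrl(roomId), {
    method: "GET",
    headers: { authorization: `${token}` },
  });
  return res.ok; // resolves true when the room id exists
};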
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });

  const { roomId } = await res.json();
  return roomId;
};</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meeting such as join, leave, enable/disable mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant such as name, webcamStream, micStream etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the App.js file.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import React, { useState, useEffect } from 'react';
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
  StyleSheet,
  Modal,
} from 'react-native';
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from '@videosdk.live/react-native-sdk';
import { createMeeting, token } from './api';

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) =&gt; {
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}&gt;
      &lt;MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      &gt;
        &lt;MeetingView /&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  ) : (
    &lt;JoinScreen getMeetingId={getMeetingId} /&gt;
  );
}</code></pre><figcaption>App.js</figcaption></figure><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-3--implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function JoinScreen(props) {
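  // (Illustrative, hypothetical helper) The placeholder below suggests meeting
  // ids shaped like XXXX-XXXX-XXXX; a quick client-side format check before
  // calling props.getMeetingId(meetingVal) could look like this:
  const looksLikeMeetingId = (id) =>
    /^[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}$/i.test(id);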
  const [meetingVal, setMeetingVal] = useState("");
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Create Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      /&gt;
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() =&gt; {
          props.getMeetingId(meetingVal);
        }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Join Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/SafeAreaView&gt;
  );
}</code></pre><figcaption>App.js</figcaption></figure><h4 id="output">Output</h4><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-13.png" class="kg-image" alt="Post-call Transcription & Summary in React-Native" loading="lazy" width="324" height="720"/></figure><h3 id="step-4-configuring-transcription-and-implement-controls">Step 4: Configure Transcription and Implement Controls</h3><p>In this step, we set up the configuration for post-call transcription and summary generation, and define the webhook URL where transcription webhooks will be delivered.</p><p>In the <code>startRecording</code> function, we pass the transcription object and the webhook URL, which initiates the post-call transcription process.</p><p>Finally, when we call the <code>stopRecording</code> function, both the post-call transcription and the recording are stopped.</p><p>The next step is to create a <code>ControlsContainer</code> component to manage features such as joining or leaving a meeting and enabling or disabling the webcam/mic.</p><p>In this step, the <code>useMeeting</code> hook is utilized to acquire all the required methods, such as <code>join()</code>, <code>leave()</code>, <code>toggleWebcam</code>, <code>toggleMic</code>, <code>startRecording</code>, and <code>stopRecording</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const Button = ({ onPress, buttonText, backgroundColor, disabled }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      disabled={disabled}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: 'center',
        alignItems: 'center',
        padding: 12,
        borderRadius: 4,
      }}&gt;
      &lt;Text style={{ color: 'white', fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

function ControlsContainer({ join, leave, toggleWebcam, toggleMic, startRecording, stopRecording }) {
  const [isJoined, setIsJoined] = useState(false);

  const handleJoin = () =&gt; {
    join();
    setIsJoined(true);
  }

  const handleLeave = () =&gt; {
    leave();
    setIsJoined(false)
  }

  const webhookUrl = "https://www.example.com";

  const transcription = {
    enabled: true, // Enables post transcription
    summary: {
      enabled: true, // Enables summary generation
  
      // Guides summary generation
      prompt:
        "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary",
    },
  };
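
  // Note: the Start Recording button below calls
  // startRecording(webhookUrl, null, null, transcription); the two nulls are
  // the (assumed) storage-path and recording-config arguments, left at their
  // defaults in this quickstart. If you want the summary prompt to be
  // configurable, the same shape can come from a small, hypothetical helper:
  const buildTranscription = (prompt) => ({
    enabled: true,
    summary: { enabled: true, prompt },
  });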

  return (
    &lt;SafeAreaView&gt;

      &lt;View
        style={{
          padding: 24,
          flexDirection: 'row',
          justifyContent: 'space-between',
        }}&gt;
        &lt;Button
          onPress={() =&gt; {
            handleJoin();
          }}
          buttonText={'Join'}
          backgroundColor={'#1178F8'}
        /&gt;
        &lt;Button
          onPress={() =&gt; {
            toggleWebcam();
          }}
          buttonText={'Toggle Webcam'}
          backgroundColor={'#1178F8'}
          disabled={!isJoined}
        /&gt;
        &lt;Button
          onPress={() =&gt; {
            toggleMic();
          }}
          buttonText={'Toggle Mic'}
          backgroundColor={'#1178F8'}
          disabled={!isJoined}
        /&gt;
        &lt;Button
          onPress={() =&gt; {
            handleLeave();
          }}
          buttonText={'Leave'}
          backgroundColor={'#FF0000'}
          disabled={!isJoined}
        /&gt;
      &lt;/View&gt;
      &lt;View
        style={{
          padding: 24,
          flexDirection: 'row',
          justifyContent: 'space-evenly',
        }}&gt;
        &lt;Button
          onPress={() =&gt; {
            startRecording(webhookUrl, null, null, transcription);
          }}
          buttonText={'Start Recording'}
          backgroundColor={'#1178F8'}
          disabled={!isJoined}
        /&gt;
        &lt;Button
          onPress={() =&gt; {
            stopRecording();
          }}
          buttonText={'Stop Recording'}
          backgroundColor={'#1178F8'}
          disabled={!isJoined}
        /&gt;
      &lt;/View&gt;
    &lt;/SafeAreaView&gt;
  );
}</code></pre><figcaption>App.js</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantList() {
  return null;
}
function MeetingView() {
  // Get `participants` from useMeeting Hook
  const [notificationVisible, setNotificationVisible] = useState(false);
  const [notificationMessage, setNotificationMessage] = useState('');
  const { join, leave, toggleWebcam, toggleMic, participants, meetingId, startRecording, stopRecording } = useMeeting({onRecordingStarted, onRecordingStopped});
  const participantsArrId = [...participants.keys()];

  useEffect(() =&gt; {
    if (notificationVisible) {
      const timer = setTimeout(() =&gt; {
        setNotificationVisible(false);
      }, 2000); // 2000 milliseconds = 2 seconds

      return () =&gt; clearTimeout(timer);
    }
  }, [notificationVisible]);

  function showNotification(message) {
    setNotificationMessage(message);
    setNotificationVisible(true);
  }

  function onRecordingStarted() {
    showNotification('Recording Started');
  }

  function onRecordingStopped() {
    showNotification('Recording Stopped');
  }

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {meetingId ? (
        &lt;Text style={{ fontSize: 18, padding: 12 }}&gt;Meeting Id: {meetingId}&lt;/Text&gt;
      ) : null}
      &lt;ParticipantList participants={participantsArrId} /&gt;
      {notificationVisible &amp;&amp; (
        &lt;View style={styles.notificationContainer}&gt;
          &lt;Text style={styles.notificationText}&gt;{notificationMessage}&lt;/Text&gt;
        &lt;/View&gt;
      )}
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
        startRecording={startRecording}
        stopRecording={stopRecording}
      /&gt;
    &lt;/View&gt;
  );
}

const styles = StyleSheet.create({
  meetingIdText: {
    fontSize: 18,
    padding: 12,
  },
  notificationContainer: {
    position: 'absolute',
    top: 0,
    left: 0,
    right: 0,
    backgroundColor: 'rgba(0, 0, 0, 0.8)',
    paddingVertical: 10,
    paddingHorizontal: 20,
    justifyContent: 'center',
    alignItems: 'center',
  },
  notificationText: {
    color: '#FFFFFF',
    fontSize: 16,
    textAlign: 'center',
  },
});</code></pre><figcaption>App.js</figcaption></figure><h4 id="output-1">Output</h4><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-12.png" class="kg-image" alt="Post-call Transcription & Summary in React-Native" loading="lazy" width="324" height="720"/></figure><h3 id="step-5-render-participant-list%E2%80%8B">Step 5: Render Participant List<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-5--render-participant-list">​</a></h3><p>After implementing the controls, the next step is to render the joined participants.</p><p>You can get all the joined <code>participants</code> from the <code>useMeeting</code> Hook.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length &gt; 0 ? (
    &lt;FlatList
      data={participants}
      renderItem={({ item }) =&gt; {
        return &lt;ParticipantView participantId={item} /&gt;;
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 20 }}&gt;Press Join button to enter meeting.&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption>ParticipantList Component</figcaption></figure><h3 id="step-6-handling-participants-media%E2%80%8B">Step 6: Handling Participant's Media<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-6--handling-participants-media">​</a></h3><p>Before handling the participant's media, you need to understand a couple of concepts.</p><h4 id="1-useparticipant-hook%E2%80%8B">1. useParticipant Hook<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#1-useparticipant-hook">​</a></h4><p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant in the meeting. It takes the participantId as an argument.</p><p>useParticipant Hook Example</p><pre><code class="language-js">const { webcamStream, webcamOn, displayName } = useParticipant(participantId);</code></pre><h4 id="2-mediastream-api%E2%80%8B">2. MediaStream API<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#2-mediastream-api">​</a></h4><p>The MediaStream API is useful for adding a MediaTrack to the <code>RTCView</code> component, enabling the playback of audio or video.</p><p>MediaStream API Example</p><pre><code class="language-js">&lt;RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/&gt;</code></pre><h4 id="rendering-participant-media">Rendering Participant Media</h4><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);

  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption>App.js</figcaption></figure><h4 id="output-2">Output</h4><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-16.png" class="kg-image" alt="Post-call Transcription & Summary in React-Native" loading="lazy" width="324" height="720"/></figure><h2 id="run-your-application%E2%80%8B">Run your application<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#run-your-application">​</a></h2><pre><code class="language-bash">npm run android # Android
npm run ios # iOS</code></pre><h2 id="output-3">Output</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/screen-20240702-181743-ezgif.com-video-to-gif-converter.gif" class="kg-image" alt="Post-call Transcription & Summary in React-Native" loading="lazy" width="576" height="1280"/></figure><h2 id="fetching-the-transcription-from-the-dashboard">Fetching the Transcription from the Dashboard</h2><p>Once the transcription is ready, you can fetch it from the VideoSDK dashboard. The dashboard provides a user-friendly interface where you can view, download, and manage your Transcriptions &amp; Summary.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/20240702_171456-ezgif.com-resize.gif" class="kg-image" alt="Post-call Transcription & Summary in React-Native" loading="lazy" width="1920" height="1080"/><figcaption>To Access Transcription &amp; Summary Files</figcaption></figure><h2 id="conclusion">Conclusion</h2><p>Implementing post-call transcription and summary features in your React Native application using VideoSDK greatly enhances the functionality and user experience of your video conferencing tool. This detailed guide has provided you with the necessary steps to set up and configure the transcription service, from recording meetings to processing and retrieving transcriptions. By following this guide, you can ensure that all critical information discussed during meetings is accurately captured and easily accessible for future reference.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate RTMP Livestream in JavaScript Video Chat App?]]></title><description><![CDATA[Learn how to seamlessly integrate RTMP Livestream into your JavaScript Video Chat App. 
Elevate user experiences with VideoSDK capabilities.]]></description><link>https://www.videosdk.live/blog/integrate-rtmp-livestream-in-javascript-video-chat-app</link><guid isPermaLink="false">662b84f62a88c204ca9d4f98</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Mon, 30 Sep 2024 06:29:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-in-JavaScript-Video-Call-App.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-in-JavaScript-Video-Call-App.jpg" alt="How to Integrate RTMP Livestream in JavaScript Video Chat App?"/><p>Integrating RTMP (<a href="https://www.videosdk.live/blog/what-is-rtmp">Real-Time Messaging Protocol</a>) live streaming into your <a href="https://www.videosdk.live/blog/video-calling-javascript">JavaScript video chat application</a> can enhance your platform's capabilities, allowing users to broadcast their video chats in real time. 
With RTMP, you can seamlessly stream video and audio content to various platforms such as YouTube, Twitch, or your own streaming server.</p><p><strong>Benefits of Integrating RTMP Livestream in a JavaScript video chat app:</strong></p><ul><li><strong>Enhanced Interactivity</strong>: RTMP allows for real-time streaming, enabling users to engage in live interactions, and making video chats more dynamic and engaging.</li><li><strong>Scalability</strong>: With RTMP, your app can handle a large number of concurrent streams, ensuring scalability as your user base grows.</li><li><strong>Low Latency</strong>: RTMP provides low-latency streaming, reducing the delay between sending and receiving video data, resulting in smoother conversations.</li></ul><p><strong>Use Cases of Integrating RTMP Livestream in a JavaScript video chat app:</strong></p><ul><li><strong>Live Events</strong>: Broadcasting live events such as conferences, webinars, or concerts where real-time interaction with viewers is crucial.</li><li><strong>Online Education</strong>: Facilitating live classes or tutoring sessions with the ability for students to ask questions and interact with the instructor in real time.</li><li><strong>Virtual Meetings</strong>: Enhancing video conferencing apps with real-time streaming capabilities for more engaging and productive meetings.</li></ul><p>This tutorial will guide you through a step-by-step process, enabling you to build a JavaScript video chat app with RTMP integration &amp; VideoSDK.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>Before we get into implementing RTMP Live Stream functionality, let's make sure you've completed the necessary prerequisites to take advantage of it with the VideoSDK capabilities.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/">VideoSDK dashboard</a> and sign up if you don't have an account. 
This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. Consider referring to the <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a> for a more visual understanding of the account creation and token generation process.</p><h3 id="prerequisites">Prerequisites</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (if you do not have one, follow <a href="https://app.videosdk.live/" rel="noopener noreferrer">VideoSDK Dashboard</a>)</li><li>Have Node and NPM installed on your device.</li></ul><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk">⬇️ Install VideoSDK</h2><p>Import VideoSDK using the <code>&lt;script&gt;</code> tag or install it using the following npm command. Make sure you are in your app directory before you run this command.</p><pre><code class="language-html">&lt;html&gt;
  &lt;head&gt;
    &lt;!--.....--&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;!--.....--&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.83/videosdk.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><ul><li><strong>npm</strong></li></ul><pre><code class="language-bash">npm install @videosdk.live/js-sdk</code></pre><ul><li><strong>Yarn</strong></li></ul><pre><code class="language-bash">yarn add @videosdk.live/js-sdk</code></pre><h3 id="structure-of-the-project">Structure of the project</h3><p>Your project structure should look like this.</p><pre><code class="language-Structure">  root
   ├── index.html
   ├── config.js
   ├── index.js</code></pre><p>You will be working on the following files:</p><ul><li><strong>index.html</strong>: Responsible for creating a basic UI.</li><li><strong>config.js</strong>: Responsible for storing the token.</li><li><strong>index.js</strong>: Responsible for rendering the meeting view and the join meeting functionality.</li></ul><h2 id="essential-steps-to-implement-video-call-functionality">Essential Steps to Implement Video Call Functionality</h2><p>Once you've successfully installed VideoSDK in your project, you'll have access to a range of functionalities for building your video call application. RTMP live streaming is one such feature: it lets you broadcast your meeting in real time to external platforms such as YouTube or Twitch.</p><h3 id="step-1-design-the-user-interface-ui%E2%80%8B">Step 1: Design the user interface (UI)<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-1--design-the-user-interface-ui">​</a></h3><p>Create an HTML file containing the screens, <code>join-screen</code> and <code>grid-screen</code>.</p><pre><code class="language-html">&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt; &lt;/head&gt;

  &lt;body&gt;
    &lt;div id="join-screen"&gt;
      &lt;!-- Create new Meeting Button --&gt;
      &lt;button id="createMeetingBtn"&gt;New Meeting&lt;/button&gt;
      OR
      &lt;!-- Join existing Meeting --&gt;
      &lt;input type="text" id="meetingIdTxt" placeholder="Enter Meeting id" /&gt;
      &lt;button id="joinBtn"&gt;Join Meeting&lt;/button&gt;
    &lt;/div&gt;

    &lt;!-- for Managing meeting status --&gt;
    &lt;div id="textDiv"&gt;&lt;/div&gt;

    &lt;div id="grid-screen" style="display: none"&gt;
      &lt;!-- To Display MeetingId --&gt;
      &lt;h3 id="meetingIdHeading"&gt;&lt;/h3&gt;

      &lt;!-- Controllers --&gt;
      &lt;button id="leaveBtn"&gt;Leave&lt;/button&gt;
      &lt;button id="toggleMicBtn"&gt;Toggle Mic&lt;/button&gt;
      &lt;button id="toggleWebCamBtn"&gt;Toggle WebCam&lt;/button&gt;

      &lt;!-- render Video --&gt;
      &lt;div class="row" id="videoContainer"&gt;&lt;/div&gt;
    &lt;/div&gt;

    &lt;!-- Add VideoSDK script --&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.83/videosdk.js"&gt;&lt;/script&gt;
    &lt;script src="config.js"&gt;&lt;/script&gt;
    &lt;script src="index.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>Configure the token in the <code>config.js</code> file, which you can obtain from the <a href="https://app.videosdk.live/">VideoSDK Dashbord</a>.</p><pre><code class="language-js">// Auth token will be used to generate a meeting and connect to it
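// NOTE (added caution, not part of the original guide): any token placed
// here is shipped to every client that loads the page. A temporary
// dashboard token is fine for local testing, but for production generate
// short-lived tokens on your own server instead of hard-coding them.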
TOKEN = "Your_Token_Here";</code></pre><p>Next, retrieve all the elements from the DOM and declare the following variables in the <code>index.js</code> file. Then, add an event listener to the join and create meeting buttons.</p><pre><code class="language-js">// Getting Elements from DOM
const joinButton = document.getElementById("joinBtn");
const leaveButton = document.getElementById("leaveBtn");
const toggleMicButton = document.getElementById("toggleMicBtn");
const toggleWebCamButton = document.getElementById("toggleWebCamBtn");
const createButton = document.getElementById("createMeetingBtn");
const videoContainer = document.getElementById("videoContainer");
const textDiv = document.getElementById("textDiv");

// Declare Variables
let meeting = null;
let meetingId = "";
let isMicOn = false;
let isWebCamOn = false;

function initializeMeeting() {}

function createLocalParticipant() {}

function createVideoElement() {}

function createAudioElement() {}

function setTrack() {}

// Join Meeting Button Event Listener
joinButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Joining the meeting...";

  const roomId = document.getElementById("meetingIdTxt").value;
  meetingId = roomId;

  initializeMeeting();
});
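
// --- Added sketch (not in the original quick start): a light sanity check
// for the meeting id entered above. The dash-separated pattern is an
// assumption for illustration; adapt it to the ids you actually use.
function isLikelyMeetingId(id) {
  if (typeof id !== "string") return false;
  // dash-separated groups of letters/digits, e.g. "abcd-efgh-ijkl"
  return /^[a-z0-9]+(-[a-z0-9]+)+$/i.test(id.trim());
}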

// Create Meeting Button Event Listener
createButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Please wait, we are joining the meeting";

  // API call to create meeting
  const url = `https://api.videosdk.live/v2/rooms`;
  const options = {
    method: "POST",
    headers: { Authorization: TOKEN, "Content-Type": "application/json" },
  };

  const { roomId } = await fetch(url, options)
    .then((response) =&gt; response.json())
    .catch((error) =&gt; alert("Error: " + error));
  meetingId = roomId;

  initializeMeeting();
});</code></pre><h3 id="step-3-initialize-meeting%E2%80%8B">Step 3: Initialize Meeting<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-meeting">​</a></h3><p>Following that, initialize the meeting using the <code>initMeeting()</code> function and proceed to join the meeting.</p><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  window.VideoSDK.config(TOKEN);

  meeting = window.VideoSDK.initMeeting({
    meetingId: meetingId, // required
    name: "Thomas Edison", // required
    micEnabled: true, // optional, default: true
    webcamEnabled: true, // optional, default: true
  });

  meeting.join();

  // Creating local participant
  createLocalParticipant();

  // Setting local participant stream
  meeting.localParticipant.on("stream-enabled", (stream) =&gt; {
    setTrack(stream, null, meeting.localParticipant, true);
  });

  // meeting joined event
  meeting.on("meeting-joined", () =&gt; {
    textDiv.style.display = "none";
    document.getElementById("grid-screen").style.display = "block";
    document.getElementById(
      "meetingIdHeading"
    ).textContent = `Meeting Id: ${meetingId}`;
  });

  // meeting left event
  meeting.on("meeting-left", () =&gt; {
    videoContainer.innerHTML = "";
  });

  // Remote participants Event
  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    //  ...
  });

  // participant left
  meeting.on("participant-left", (participant) =&gt; {
    //  ...
  });
}</code></pre><h3 id="step-4-create-the-media-elements%E2%80%8B">Step 4: Create the Media Elements<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-4--create-the-media-elements">​</a></h3><p>In this step, Create a function to generate audio and video elements for displaying both local and remote participants. Set the corresponding media track based on whether it's a video or audio stream.</p><pre><code class="language-js">// creating video element
function createVideoElement(pId, name) {
  let videoFrame = document.createElement("div");
  videoFrame.setAttribute("id", `f-${pId}`);
  videoFrame.style.width = "300px";
    

  //create video
  let videoElement = document.createElement("video");
  videoElement.classList.add("video-frame");
  videoElement.setAttribute("id", `v-${pId}`);
  videoElement.setAttribute("playsinline", true);
  videoElement.setAttribute("width", "300");
  videoFrame.appendChild(videoElement);

  let displayName = document.createElement("div");
  displayName.innerHTML = `Name : ${name}`;

  videoFrame.appendChild(displayName);
  return videoFrame;
}

// creating audio element
function createAudioElement(pId) {
  let audioElement = document.createElement("audio");
  audioElement.setAttribute("autoPlay", "false");
  audioElement.setAttribute("playsInline", "true");
  audioElement.setAttribute("controls", "false");
  audioElement.setAttribute("id", `a-${pId}`);
  audioElement.style.display = "none";
  return audioElement;
}
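
// --- Added sketch (not in the original guide): the element ids used above
// follow a simple convention -- "f-" frame, "v-" video, "a-" audio, plus
// the participant id. A tiny helper keeps them consistent:
function mediaElementId(kind, participantId) {
  // kind is expected to be "f", "v" or "a"
  return kind + "-" + participantId;
}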

// creating local participant
function createLocalParticipant() {
  let localParticipant = createVideoElement(
    meeting.localParticipant.id,
    meeting.localParticipant.displayName
  );
  videoContainer.appendChild(localParticipant);
}

// setting media track
function setTrack(stream, audioElement, participant, isLocal) {
  if (stream.kind == "video") {
    isWebCamOn = true;
    const mediaStream = new MediaStream();
    mediaStream.addTrack(stream.track);
    let videoElm = document.getElementById(`v-${participant.id}`);
    videoElm.srcObject = mediaStream;
    videoElm
      .play()
      .catch((error) =&gt;
        console.error("videoElem.current.play() failed", error)
      );
  }
  if (stream.kind == "audio") {
    if (isLocal) {
      isMicOn = true;
    } else {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(stream.track);
      audioElement.srcObject = mediaStream;
      audioElement
        .play()
        .catch((error) =&gt; console.error("audioElem.play() failed", error));
    }
  }
}</code></pre><h3 id="step-5-handle-participant-events%E2%80%8B">Step 5: Handle participant events<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-5--handle-participant-events">​</a></h3><p>Thereafter, implement the events related to the participants and the stream.</p><p>The following are the events to be executed in this step:</p><ol><li><code>participant-joined</code>: When a remote participant joins, this event will trigger. In the event callback, create video and audio elements previously defined for rendering their video and audio streams.</li><li><code>participant-left</code>: When a remote participant leaves, this event will trigger. In the event callback, remove the corresponding video and audio elements.</li><li><code>stream-enabled</code>: This event manages the media track of a specific participant by associating it with the appropriate video or audio element.</li></ol><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  // ...

  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    let videoElement = createVideoElement(
      participant.id,
      participant.displayName
    );
    let audioElement = createAudioElement(participant.id);
    // stream-enabled
    participant.on("stream-enabled", (stream) =&gt; {
      setTrack(stream, audioElement, participant, false);
    });
    videoContainer.appendChild(videoElement);
    videoContainer.appendChild(audioElement);
  });

  // participants left
  meeting.on("participant-left", (participant) =&gt; {
    let vElement = document.getElementById(`f-${participant.id}`);
    vElement.remove();

    let aElement = document.getElementById(`a-${participant.id}`);
    aElement.remove();
  });
}</code></pre><h3 id="step-6-implement-controls%E2%80%8B">Step 6: Implement Controls<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-6--implement-controls">​</a></h3><p>Next, implement the meeting controls, such as <code>toggleMic</code>, <code>toggleWebcam</code>, and leave the meeting.</p><pre><code class="language-js">// leave Meeting Button Event Listener
leaveButton.addEventListener("click", async () =&gt; {
  meeting?.leave();
  document.getElementById("grid-screen").style.display = "none";
  document.getElementById("join-screen").style.display = "block";
});

// Toggle Mic Button Event Listener
toggleMicButton.addEventListener("click", async () =&gt; {
  if (isMicOn) {
    // Disable Mic in Meeting
    meeting?.muteMic();
  } else {
    // Enable Mic in Meeting
    meeting?.unmuteMic();
  }
  isMicOn = !isMicOn;
});

// Toggle Web Cam Button Event Listener
toggleWebCamButton.addEventListener("click", async () =&gt; {
  if (isWebCamOn) {
    // Disable Webcam in Meeting
    meeting?.disableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "none";
  } else {
    // Enable Webcam in Meeting
    meeting?.enableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "inline";
  }
  isWebCamOn = !isWebCamOn;
});</code></pre><p>You can check out the complete guide here:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/quickstart/tree/main/js-rtc"><div class="kg-bookmark-content"><div class="kg-bookmark-title">quickstart/js-rtc at main · videosdk-live/quickstart</div><div class="kg-bookmark-description">A short and sweet tutorial for getting up to speed with VideoSDK in less than 10 minutes - videosdk-live/quickstart</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate RTMP Livestream in JavaScript Video Chat App?"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/de970b6f7db5c97b5728471cbf13bae285388d9a9ccaa9bd294da67c509984d5/videosdk-live/quickstart" alt="How to Integrate RTMP Livestream in JavaScript Video Chat App?" onerror="this.style.display = 'none'"/></div></a></figure><p>After you've installed the video SDK in your video call application, you're ready to explore the exciting world of RTMP live streaming. RTMP is a popular protocol designed specifically for real-time video and audio transmission. This enables broadcasters to share live experiences, host presentations, or run live tutorials within your app, expanding its reach and functionality.</p><h2 id="integrate-rtmp-livestream">Integrate RTMP Livestream</h2><p>RTMP is a widely used protocol for live streaming video content from VideoSDK to platforms like YouTube, Twitch, Facebook, and others.</p><p>To initiate live streaming from VideoSDK to platforms supporting RTMP ingestion, you simply need to provide the platform-specific stream key and stream URL. 
This enables VideoSDK to connect to the platform's RTMP server and transmit the live video stream.</p><p>Furthermore, VideoSDK offers flexibility in configuring livestream layouts. You can achieve this by either selecting different prebuilt layouts in the configuration or by providing your own custom template for live streaming, catering to your specific layout preferences.</p><p>This guide will provide an overview of how to implement starting and stopping RTMP live streaming with VideoSDK.</p><h3 id="start-live-stream">Start Live Stream</h3><p>The <code>startLivestream()</code> method, accessible from the <code>meeting</code> object, is used to initiate the RTMP live stream of a meeting. This method accepts the following two parameters:</p><ul><li><code>1. outputs</code>: This parameter takes an array of objects containing the RTMP <code>url</code> and <code>streamKey</code> specific to the platform where you want to initiate the live stream.</li><li><code>2. config (optional)</code>: This parameter defines the layout configuration for the live stream.</li></ul><pre><code class="language-js">let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});

const startLivestreamBtn = document.getElementById("startLivestreamBtn");
startLivestreamBtn.addEventListener("click", () =&gt; {
  // Start Livestream
  meeting?.startLivestream(
    [
      {
        url: "rtmp://a.rtmp.youtube.com/live2",
        streamKey: "key",
      },
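      // (Sketch) "outputs" accepts multiple RTMP destinations at once; a
      // second target such as Twitch could be listed here. The URL below is
      // Twitch's standard ingest endpoint, and the stream key is a
      // placeholder you would take from your Twitch dashboard:
      // {
      //   url: "rtmp://live.twitch.tv/app",
      //   streamKey: "your-twitch-stream-key",
      // },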
    ],
    {
      layout: {
        type: "GRID",
        priority: "SPEAKER",
        gridSize: 4,
      },
      theme: "DARK",
    }
  );
});</code></pre><h3 id="stop-live-stream">Stop Live Stream</h3><p>The <code>stopLivestream()</code> method, accessible from the <code>meeting</code> object is used to stop the RTMP live stream of a meeting.</p><pre><code class="language-js">let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});


const stopLivestreamBtn = document.getElementById("stopLivestreamBtn");
stopLivestreamBtn.addEventListener("click", () =&gt; {
  // Stop Livestream
  meeting?.stopLivestream();
});</code></pre><h3 id="event-associated-with-livestream%E2%80%8B">Event associated with Livestream<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream#event-associated-with-livestream">​</a></h3><ul><li><strong>livestream-state-changed</strong>: Whenever the livestream state changes, the <code>livestream-state-changed</code> event is triggered.</li></ul><pre><code class="language-js">let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});

const Constants = VideoSDK.Constants;

meeting.on("livestream-state-changed", (data) =&gt; {
  const { status } = data;

  if (status === Constants.livestreamEvents.LIVESTREAM_STARTING) {
    console.log("Meeting livestream is starting");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STARTED) {
    console.log("Meeting livestream is started");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STOPPING) {
    console.log("Meeting livestream is stopping");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STOPPED) {
    console.log("Meeting livestream is stopped");
  } else {
    //
  }
});</code></pre><h3 id="custom-template%E2%80%8B">Custom Template<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream#custom-template">​</a></h3><p>With VideoSDK, you have the option to employ your own custom-designed layout template for livestreaming a meeting. To use a custom template, <a href="https://docs.videosdk.live/javascript/guide/interactive-live-streaming/custom-template">follow this guide</a> to create and set up the template. Once the template is configured, you can initiate recording using the <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-livestream">REST API</a>, specifying the <code>templateURL</code> parameter.</p><h2 id="%E2%9C%A8-want-to-add-more-features-to-javascript-video-calling-app">✨ Want to Add More Features to JavaScript Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your JavaScript video-calling app,</p><p><strong>Check out these additional resources:</strong></p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-javascript-video-chat-app">Link</a></li><li>Image Capture: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-javascript-chat-app">Link</a></li><li>Screen Share Feature: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-javascript-video-chat-app">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-mode-in-javascript-video-chat-app">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>Whether it's for hosting live events, broadcasting presentations, or enabling interactive streaming sessions, RTMP Livestream opens up new possibilities for your app. Take your video chat app to the next level with VideoSDK and empower your users with high-quality live streaming. 
<a href="https://www.videosdk.live/signup">Get started</a> today and revolutionize your app's video capabilities.</p><p>If you are new here and want to build an interactive JavaScript app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get 10,000 free minutes every month. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Build a WebRTC React Native App?]]></title><description><![CDATA[In this tutorial, learn the fundamentals of WebRTC to build a React Native video calling application that can be implemented on iOS or Android.]]></description><link>https://www.videosdk.live/blog/webrtc-react-native</link><guid isPermaLink="false">63bcf639bd44f53bde5cf8bf</guid><category><![CDATA[WebRTC]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 27 Sep 2024 18:26:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2023/01/Build-a-React-Native-Video-Calling-App-with-Callkeep-using-Firebase-and-Video-SDK--3-.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction-to-react-native-and-webrtc">Introduction to React Native and WebRTC</h2><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Build-a-React-Native-Video-Calling-App-with-Callkeep-using-Firebase-and-Video-SDK--3-.jpg" alt="How to Build a WebRTC React Native App?"/><p>Video conferencing is an important part of today's world. However, due to its complexity, most developers (myself included) have difficulty implementing it.</p><p>React Native and WebRTC are a great combination for creating video conferencing and video call applications. In this guide, 
we will take a deep dive into these frameworks and develop one application.</p><p>If you are impatient to see the results, here is the whole <a href="https://github.com/videosdk-live/webrtc/tree/main/react-native-webrtc-app"><strong>react-native-webrtc</strong></a><strong> </strong>repo for your project.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Join-Screen.png" class="kg-image" alt="How to Build a WebRTC React Native App?" loading="lazy" width="1920" height="1080"><figcaption><span style="white-space: pre-wrap;">Free React Native WebRTC App</span></figcaption></img></figure><h3 id="what-is-react-native">What Is React Native?</h3><p>React Native is a JavaScript framework for creating natively rendered mobile apps for iOS and Android. It's built on React, Facebook's JavaScript toolkit for creating user interfaces, but instead of being aimed at browsers, it's aimed at mobile platforms. In other words, web developers can now create mobile applications that look and feel fully "native," all while using a JavaScript framework they are already familiar with. Furthermore, because much of the code you create can be shared between platforms, <a href="https://en.wikipedia.org/wiki/React_Native">React Native</a> makes it simple to build for both Android and iOS at the same time.</p><h3 id="what-is-webrtc">What is WebRTC?</h3><p><a href="https://www.videosdk.live/blog/webrtc" rel="noreferrer">WebRTC (Web Real-Time Communications)</a> is an open-source P2P protocol that allows web browsers and devices to communicate in real-time via voice, text, and video. 
WebRTC delivers application programming interfaces (APIs) defined in JavaScript to software developers.</p><p>P2P simply implies that two peers (for example, your device and mine) communicate directly with one another, without the need for a server in the middle.</p><p>WebRTC employs several technologies to enable real-time peer-to-peer communication across browsers.</p><ol>
<li>SDP (Session Description Protocol)</li>
<li>ICE (Interactivity Connection Establishment)</li>
<li>RTP (Real Time Protocol)</li>
</ol>
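<p>As a concrete taste of what the SDP negotiation described below boils down to, codec selection is essentially an intersection of the two peers' supported codec lists. A small illustrative sketch with sample data (not a VideoSDK or browser API):</p><pre><code class="language-js">// Codecs supported by both peers, in peer A's preference order
function sharedCodecs(codecsA, codecsB) {
  return codecsA.filter(function (codec) {
    return codecsB.indexOf(codec) !== -1;
  });
}

// e.g. A supports H264/VP8/VP9 while B supports only H264
console.log(sharedCodecs(["H264", "VP8", "VP9"], ["H264"])); // [ 'H264' ]
</code></pre>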
<p>Another component that is required to run WebRTC is a Signaling Server. However, there is no standard for implementing a signaling server and its implementation can vary from developer to developer. More information on Signaling Server will be provided later in this section.</p><p>Let's quickly go through some of the technology mentioned above.</p><h3 id="sdp-session-description-protocol">SDP (Session Description Protocol)</h3><ul><li>SDP is a simple protocol that is used to determine which codecs are supported by browsers. Assume there are two peers (Client A and Client B) that will be connected over WebRTC.</li><li>Clients A and B generate SDP strings that specify which codecs they support. Client A, for example, may support H264, VP8, and VP9 video codecs, as well as Opus and PCM audio codecs. Client B may only support H264 for video and the Opus codec for audio.</li><li>In this scenario, the codecs that will be used between Client A and Client B are H264 and Opus. Peer-to-peer communication cannot be established if there are no shared codecs.</li></ul><p>You may have a query regarding how these SDP strings communicate with one another. This is where we will use Signaling Server.</p><h3 id="ice-interactivity-connection-establishment">ICE (Interactivity Connection Establishment)</h3><p>ICE is the magic that connects peers even if they are separated by NAT.</p><ul><li>Client A uses the STUN server to determine their local and public Internet addresses, which they then relay to Client B via the Signaling Server. Each address received from the STUN server is referred to as an ICE candidate.</li></ul><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/untitled2x_1.png" class="kg-image" alt="How to Build a WebRTC React Native App?" loading="lazy" width="667" height="434"/></figure><ul><li>There are two servers in the image above. 
One of them is the STUN server, and the other is the TURN server.</li></ul><h4 id="stun-session-traversal-utilities-for-nat">STUN (Session Traversal Utilities for NAT)</h4><ul><li>The <a href="https://www.videosdk.live/developer-hub/stun-turn-server/what-is-a-stun-server" rel="noreferrer">STUN server</a> is used to allow Client A to discover all of its addresses.</li><li>STUN servers reveal their peer's public and local IP addresses. By the way, Google offers a free STUN server (stun.l.google.com:19302).</li></ul><h4 id="turn-traversal-using-relays-around-nat">TURN (Traversal Using Relays around NAT)</h4><ul><li>When a peer-to-peer connection cannot be formed, a TURN server is used. The TURN server simply relays data between the peers.</li></ul><h3 id="rtp-real-time-protocol">RTP (Real Time Protocol)</h3><ul><li>RTP is a well-established standard for transmitting real-time data. It is built on UDP. In WebRTC, audio and video are transferred using RTP.</li></ul><h3 id="webrtc-signalling">WebRTC Signalling</h3><ul><li>WebRTC media flows directly between peers, but a server is still required to establish the connection. The server serves as a channel for exchanging the information necessary to build a peer-to-peer connection.</li><li>The data that must be transferred is the <code>Offer</code>, the <code>Answer</code>, and <code>information about the Network Connection</code>.</li></ul><p>Let us understand it with an example.</p><ul><li>Client A, who will be the connection's initiator, will create an Offer. This offer will then be sent to Client B via the Signaling server. Client B will get the Offer and respond accordingly. This information will subsequently be relayed to Client A over the signal channel by Client B.</li><li>Once the offer and response have been completed, the connection between the two peers is established. An ICE candidate describes the protocol and routing information WebRTC needs to reach a remote device; the peers exchange these as <code>RTCIceCandidate</code> objects. 
Each peer proposes its candidates in order of preference, and the connection is established once both sides agree on a candidate to use.</li></ul><h3 id="implementing-the-signaling-server">Implementing the Signaling Server</h3><p>Now that we've covered the fundamentals of WebRTC, let's use it to build a video calling application that uses Socket.IO as a signaling channel.</p><ul><li>As previously said, we will use Node.js with Socket.IO to exchange signaling information between clients.</li></ul><p>Now we'll create a Node.js Express project; our directory structure will look like this.</p><pre><code class="language-js">server
    └── index.js
    └── socket.js
    └── package.json</code></pre><h4 id="step-1indexjs-file-will-look-like-this">Step 1: <code>index.js</code> file will look like this.</h4>
<pre><code class="language-js">const path = require('path');
const { createServer } = require('http');

const express = require('express');
const { getIO, initIO } = require('./socket');

const app = express();

app.use('/', express.static(path.join(__dirname, 'static')));

const httpServer = createServer(app);

let port = process.env.PORT || 3500;

initIO(httpServer);

httpServer.listen(port, () =&gt; {
  console.log("Server started on", port);
});

getIO();</code></pre><h4 id="step-2-socketjs-file-will-look-like-this">Step 2: <code>socket.js</code> file will look like this.</h4>
<pre><code class="language-js">const { Server } = require("socket.io");
let IO;

module.exports.initIO = (httpServer) =&gt; {
  IO = new Server(httpServer);

  IO.use((socket, next) =&gt; {
    if (socket.handshake.query) {
      let callerId = socket.handshake.query.callerId;
      socket.user = callerId;
      next();
    } else {
      // Reject connections that do not identify themselves with a callerId
      next(new Error("callerId missing from handshake query"));
    }
  });

  IO.on("connection", (socket) =&gt; {
    console.log(socket.user, "Connected");
    socket.join(socket.user);

    socket.on("call", (data) =&gt; {
      let calleeId = data.calleeId;
      let rtcMessage = data.rtcMessage;

      socket.to(calleeId).emit("newCall", {
        callerId: socket.user,
        rtcMessage: rtcMessage,
      });
    });

    socket.on("answerCall", (data) =&gt; {
      let callerId = data.callerId;
      let rtcMessage = data.rtcMessage;

      socket.to(callerId).emit("callAnswered", {
        callee: socket.user,
        rtcMessage: rtcMessage,
      });
    });

    socket.on("ICEcandidate", (data) =&gt; {
      console.log("ICEcandidate data.calleeId", data.calleeId);
      let calleeId = data.calleeId;
      let rtcMessage = data.rtcMessage;
        
      socket.to(calleeId).emit("ICEcandidate", {
        sender: socket.user,
        rtcMessage: rtcMessage,
      });
    });
  });
};

module.exports.getIO = () =&gt; {
  if (!IO) {
    throw Error("IO not initialized.");
  } else {
    return IO;
  }
};
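
// --- Illustrative client-side counterpart (an addition, not part of this
// server file). The event names mirror the handlers above; the port, the
// ids and the "offer" variable are placeholders:
//
//   const socket = io("http://localhost:3500", { query: { callerId: "12345" } });
//   socket.emit("call", { calleeId: "67890", rtcMessage: offer });
//   socket.on("callAnswered", function (data) {
//     // apply data.rtcMessage as the remote SDP answer
//   });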
</code></pre><p>As previously stated, we require a server to pass three pieces of information: <code>Offer</code>, <code>Answer</code> and <code>ICECandidate</code>. The <code>call</code> event sends the caller's offer to the callee, whereas the <code>answerCall</code> event sends the callee's answer to the caller. The <code>ICEcandidate</code> event relays ICE candidates between the two peers.</p><p>This is the most basic form of the signaling server that we require.</p><h4 id="step-3-packagejson-file-will-look-like-this">Step 3: <code>package.json</code> file will look like this.</h4>
<pre><code class="language-js">{
  "name": "WebRTC",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" &amp;&amp; exit 1",
    "start": "node index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.1",
    "socket.io": "^4.0.2" // Socket dependency
  }
}</code></pre><p>We are almost finished with the server-side programming. Let's create the client-side app with <code>react-native-webrtc</code>.</p><p>Before we begin development, let's first grasp the app's flow. The app assigns a CallerId (a 5-digit random number) whenever the user opens it.</p><p>For example, John and Michel each have a CallerId: <code>12345</code> for John and <code>67890</code> for Michel. John initiates the call by dialing Michel's caller ID. John then sees an Outgoing call screen, while Michel sees an Incoming call screen with an Accept button. After the call is accepted, John and Michel join the meeting.</p><h2 id="developing-the-react-native-webrtc-application">Developing the React Native WebRTC Application</h2><h4 id="step-1-setup-a-react-native-webrtc-project-using-react-native-cli">Step 1: Set up a React Native WebRTC project using react-native-cli</h4>
<p>You can follow this official guide — <a href="https://reactnative.dev/docs/environment-setup" rel="noopener ugc nofollow">https://reactnative.dev/docs/environment-setup</a></p><h4 id="step-2-after-successfully-running-your-demo-application-we-will-install-some-react-native-lib">Step 2: After successfully running your demo application, install the required React Native libraries</h4>
<p>Here are the <code>package.json</code> dependencies you will also need to install.</p><pre><code class="language-js">"dependencies": {
    "react": "17.0.2",
    "react-native": "0.68.2",
    "react-native-svg": "^13.7.0",
    "react-native-webrtc": "^1.94.2",
    "socket.io-client": "^4.5.4"
  }</code></pre><h4 id="step-3-android-setup-for-react-native-webrtc-package">Step 3: Android Setup for <code>react-native-webrtc</code> package</h4>
<p>Starting with React Native 0.60, a new auto-linking feature means you no longer need to follow manual linking steps, but you will still need to follow the other steps below if you plan on releasing your app to production.</p><p><strong>3.1 Declaring Permissions</strong></p><p>In <code>android/app/src/main/AndroidManifest.xml</code>, add the following permissions before the <code>&lt;application&gt;</code> section.</p><pre><code class="language-xml">&lt;uses-feature android:name="android.hardware.camera" /&gt;
&lt;uses-feature android:name="android.hardware.camera.autofocus" /&gt;
&lt;uses-feature android:name="android.hardware.audio.output" /&gt;
&lt;uses-feature android:name="android.hardware.microphone" /&gt;

&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;
</code></pre><p><br><strong>3.2 Enable Java 8 Support</strong></br></p><p>In <code>android/app/build.gradle</code> add the following inside the <code>android</code> section.</p><pre><code class="language-js">compileOptions {
	sourceCompatibility JavaVersion.VERSION_1_8
	targetCompatibility JavaVersion.VERSION_1_8
}</code></pre><p><strong>3.3 R8/ProGuard Support</strong></p><p>In <code>android/app/proguard-rules.pro</code> add the following on a new line.</p><pre><code class="language-js">-keep class org.webrtc.** { *; }
</code></pre><p><br><strong>3.4 Fatal Exception: java.lang.UnsatisfiedLinkError</strong></br></p><pre><code class="language-js">Fatal Exception: java.lang.UnsatisfiedLinkError: No implementation found for void org.webrtc.PeerConnectionFactory.nativeInitializeAndroidGlobals() (tried Java_org_webrtc_PeerConnectionFactory_nativeInitializeAndroidGlobals and Java_org_webrtc_PeerConnectionFactory_nativeInitializeAndroidGlobals__)
</code></pre><p>If you're experiencing the error above then in <code>android/gradle.properties</code> add the following.</p><pre><code class="language-js">android.enableDexingArtifactTransform.desugaring=false
</code></pre><h4 id="step-4-ios-setup-for-react-native-webrtc-package">Step 4: iOS Setup for <code>react-native-webrtc</code> Package</h4>
<p><br><strong>4.1 Adjusting the supported platform version</strong></br></p><p><strong>IMPORTANT:</strong> Make sure you are using CocoaPods 1.10 or higher.<br>You may have to change the <code>platform</code> field in your podfile.<br><code>react-native-webrtc</code> doesn't support iOS &lt; 12. Set it to '12.0' or above, or you'll get an error when running <code>pod install</code>.</br></br></p><pre><code class="language-js">platform :ios, '12.0'
</code></pre><p><strong>4.2 Declaring Permissions</strong></p><p>Navigate to <code>&lt;ProjectFolder&gt;/ios/&lt;ProjectName&gt;/</code> and edit <code>Info.plist</code>, add the following lines.</p><pre><code class="language-js">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><h4 id="step-5-develop-ui-screens">Step 5: Develop UI Screens</h4>
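<p>Each user in these screens is identified by a random numeric caller ID. As a standalone sketch of the generator used in <code>App.js</code> below (note that <code>Math.floor(100000 + Math.random() * 900000)</code> always yields a 6-digit number):</p>

```javascript
// Generates a random numeric caller ID, mirroring the logic in App.js.
// Math.floor(100000 + Math.random() * 900000) falls in [100000, 999999],
// so the ID is always 6 digits long.
function generateCallerId() {
  return Math.floor(100000 + Math.random() * 900000).toString();
}
```

<p>Keeping this in a helper makes it easy to swap in a server-assigned ID later.</p>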
<p>Our client-side project directory structure will look like this.</p><pre><code class="language-js">client
    └── android
    └── asset
    └── ios
    └── index.js
    └── App.js
    └── components
    └── package.json</code></pre><p>Now, we will develop <code>JoinScreen</code> , <code>IncomingCallScreen</code> and <code>OutgoingCallScreen</code>.</p><p>We will use <code>App.js</code> files throughout the development process.</p><p><strong>JoinScreen</strong></p><pre><code class="language-js">import React, {useEffect, useState, useRef} from 'react';
import {
  Platform,
  KeyboardAvoidingView,
  TouchableWithoutFeedback,
  Keyboard,
  View,
  Text,
  TouchableOpacity,
} from 'react-native';
import TextInputContainer from './src/components/TextInputContainer';

export default function App({}) {

  const [type, setType] = useState('JOIN');

  const [callerId] = useState(
    Math.floor(100000 + Math.random() * 900000).toString(),
  );
    
  const otherUserId = useRef(null);
  

  const JoinScreen = () =&gt; {
    return (
      &lt;KeyboardAvoidingView
        behavior={Platform.OS === 'ios' ? 'padding' : 'height'}
        style={{
          flex: 1,
          backgroundColor: '#050A0E',
          justifyContent: 'center',
          paddingHorizontal: 42,
        }}&gt;
        &lt;TouchableWithoutFeedback onPress={Keyboard.dismiss}&gt;
          &lt;&gt;
            &lt;View
              style={{
                padding: 35,
                backgroundColor: '#1A1C22',
                justifyContent: 'center',
                alignItems: 'center',
                borderRadius: 14,
              }}&gt;
              &lt;Text
                style={{
                  fontSize: 18,
                  color: '#D0D4DD',
                }}&gt;
                Your Caller ID
              &lt;/Text&gt;
              &lt;View
                style={{
                  flexDirection: 'row',
                  marginTop: 12,
                  alignItems: 'center',
                }}&gt;
                &lt;Text
                  style={{
                    fontSize: 32,
                    color: '#ffff',
                    letterSpacing: 6,
                  }}&gt;
                  {callerId}
                &lt;/Text&gt;
              &lt;/View&gt;
            &lt;/View&gt;

            &lt;View
              style={{
                backgroundColor: '#1A1C22',
                padding: 40,
                marginTop: 25,
                justifyContent: 'center',
                borderRadius: 14,
              }}&gt;
              &lt;Text
                style={{
                  fontSize: 18,
                  color: '#D0D4DD',
                }}&gt;
                Enter call id of another user
              &lt;/Text&gt;
              &lt;TextInputContainer
                placeholder={'Enter Caller ID'}
                value={otherUserId.current}
                setValue={text =&gt; {
                  otherUserId.current = text;
                }}
                keyboardType={'number-pad'}
              /&gt;
              &lt;TouchableOpacity
                onPress={() =&gt; {
                  setType('OUTGOING_CALL');
                }}
                style={{
                  height: 50,
                  backgroundColor: '#5568FE',
                  justifyContent: 'center',
                  alignItems: 'center',
                  borderRadius: 12,
                  marginTop: 16,
                }}&gt;
                &lt;Text
                  style={{
                    fontSize: 16,
                    color: '#FFFFFF',
                  }}&gt;
                  Call Now
                &lt;/Text&gt;
              &lt;/TouchableOpacity&gt;
            &lt;/View&gt;
          &lt;/&gt;
        &lt;/TouchableWithoutFeedback&gt;
      &lt;/KeyboardAvoidingView&gt;
    );
  };

  const OutgoingCallScreen = () =&gt; {
    return null;
  };

  const IncomingCallScreen = () =&gt; {
    return null;
  };

  switch (type) {
    case 'JOIN':
      return JoinScreen();
    case 'INCOMING_CALL':
      return IncomingCallScreen();
    case 'OUTGOING_CALL':
      return OutgoingCallScreen();
    default:
      return null;
  }
}
</code></pre><p>In the file above, we store a random 6-digit caller ID that represents this user and can be referenced by other connected users.</p><p>Here is the <code>TextInputContainer.js</code> component code.</p><pre><code class="language-js">import React from 'react';
import {View, TextInput} from 'react-native';
const TextInputContainer = ({placeholder, value, setValue, keyboardType}) =&gt; {
  return (
    &lt;View
      style={{
        height: 50,
        justifyContent: 'center',
        alignItems: 'center',
        backgroundColor: '#202427',
        borderRadius: 12,
        marginVertical: 12,
      }}&gt;
      &lt;TextInput
        style={{
          margin: 8,
          padding: 8,
          width: '90%',
          textAlign: 'center',
          fontSize: 16,
          color: '#FFFFFF',
        }}
        multiline={true}
        numberOfLines={1}
        cursorColor={'#5568FE'}
        placeholder={placeholder}
        placeholderTextColor={'#9A9FA5'}
        onChangeText={text =&gt; {
          setValue(text);
        }}
        value={value}
        keyboardType={keyboardType}
      /&gt;
    &lt;/View&gt;
  );
};

export default TextInputContainer;
</code></pre><p>Our Screen will look like this.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Screenshot_2023-01-12-16-29-44-351_com.webrtcexample--1-.jpg" class="kg-image" alt="How to Build a WebRTC React Native App?" loading="lazy" width="180" height="400"/></figure><p><strong>IncomingCallScreen</strong></p><pre><code class="language-js">import CallAnswer from './asset/CallAnswer';

export default function App({}) {
    //
  const IncomingCallScreen = () =&gt; {
    return (
      &lt;View
        style={{
          flex: 1,
          justifyContent: 'space-around',
          backgroundColor: '#050A0E',
        }}&gt;
        &lt;View
          style={{
            padding: 35,
            justifyContent: 'center',
            alignItems: 'center',
            borderRadius: 14,
          }}&gt;
          &lt;Text
            style={{
              fontSize: 36,
              marginTop: 12,
              color: '#ffff',
            }}&gt;
            {otherUserId.current} is calling..
          &lt;/Text&gt;
        &lt;/View&gt;
        &lt;View
          style={{
            justifyContent: 'center',
            alignItems: 'center',
          }}&gt;
          &lt;TouchableOpacity
            onPress={() =&gt; {
              setType('WEBRTC_ROOM');
            }}
            style={{
              backgroundColor: 'green',
              borderRadius: 30,
              height: 60,
              aspectRatio: 1,
              justifyContent: 'center',
              alignItems: 'center',
            }}&gt;
            &lt;CallAnswer height={28} fill={'#fff'} /&gt;
          &lt;/TouchableOpacity&gt;
        &lt;/View&gt;
      &lt;/View&gt;
    );
  };
  	//
</code></pre><p>You can get the SVG for the <code>CallAnswer</code> icon from the <a href="https://github.com/videosdk-live/webrtc/tree/main/react-native-webrtc-app/client/asset">assets folder</a>.</p><p>Our screen will look like this.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Screenshot_2023-01-12-16-33-10-501_com.webrtcexample--1-.jpg" class="kg-image" alt="How to Build a WebRTC React Native App?" loading="lazy" width="180" height="400"/></figure><p><strong>OutgoingCallScreen</strong></p><pre><code class="language-js">import CallEnd from './asset/CallEnd';

export default function App({}) {
    //
const OutgoingCallScreen = () =&gt; {
    return (
      &lt;View
        style={{
          flex: 1,
          justifyContent: 'space-around',
          backgroundColor: '#050A0E',
        }}&gt;
        &lt;View
          style={{
            padding: 35,
            justifyContent: 'center',
            alignItems: 'center',
            borderRadius: 14,
          }}&gt;
          &lt;Text
            style={{
              fontSize: 16,
              color: '#D0D4DD',
            }}&gt;
            Calling to...
          &lt;/Text&gt;

          &lt;Text
            style={{
              fontSize: 36,
              marginTop: 12,
              color: '#ffff',
              letterSpacing: 6,
            }}&gt;
            {otherUserId.current}
          &lt;/Text&gt;
        &lt;/View&gt;
        &lt;View
          style={{
            justifyContent: 'center',
            alignItems: 'center',
          }}&gt;
          &lt;TouchableOpacity
            onPress={() =&gt; {
              setType('JOIN');
              otherUserId.current = null;
            }}
            style={{
              backgroundColor: '#FF5D5D',
              borderRadius: 30,
              height: 60,
              aspectRatio: 1,
              justifyContent: 'center',
              alignItems: 'center',
            }}&gt;
            &lt;CallEnd width={50} height={12} /&gt;
          &lt;/TouchableOpacity&gt;
        &lt;/View&gt;
      &lt;/View&gt;
    );
  };
  	//
</code></pre><p>You can get the SVG for the <code>CallEnd</code> icon from the <a href="https://github.com/videosdk-live/webrtc/tree/main/react-native-webrtc-app/client/asset">assets folder</a>.</p><p>Our screen will look like this.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Screenshot_2023-01-12-16-32-15-953_com.webrtcexample--1-.jpg" class="kg-image" alt="How to Build a WebRTC React Native App?" loading="lazy" width="180" height="400"/></figure><h4 id="step-6-setup-websocket-and-webrtc">Step 6: Setup WebSocket and WebRTC</h4>
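<p>In this step we connect each client to the signaling server and create an <code>RTCPeerConnection</code>. The ICE configuration is just a plain object; here is a sketch of the one used below (only public Google STUN servers, no TURN server, so calls across restrictive NATs may fail):</p>

```javascript
// ICE server configuration passed to RTCPeerConnection.
// Only STUN servers are listed; for production you would typically
// add a TURN server as a relay fallback.
const peerConstraints = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    { urls: 'stun:stun1.l.google.com:19302' },
    { urls: 'stun:stun2.l.google.com:19302' },
  ],
};
```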
<p>After creating the UI, let's set up a socket and a WebRTC peer connection in the same file (<code>App.js</code>).</p><pre><code class="language-js">import SocketIOClient from 'socket.io-client'; // import socket io
// import WebRTC 
import {
  mediaDevices,
  RTCPeerConnection,
  RTCView,
  RTCIceCandidate,
  RTCSessionDescription,
} from 'react-native-webrtc';

export default function App({}) {

// Stream of local user
const [localStream, setlocalStream] = useState(null);

/* When a call is connected, the receiver's video stream is stored in this state */
const [remoteStream, setRemoteStream] = useState(null);

// This establishes your WebSocket connection
const socket = SocketIOClient('http://192.168.1.10:3500', {
    transports: ['websocket'],
    query: {
        callerId, 
    /* We have generated this `callerId` in `JoinScreen` implementation */
    },
  });

 /* This creates a WebRTC peer connection, which will be used to set local/remote descriptions and offers. */
 const peerConnection = useRef(
    new RTCPeerConnection({
      iceServers: [
        {
          urls: 'stun:stun.l.google.com:19302',
        },
        {
          urls: 'stun:stun1.l.google.com:19302',
        },
        {
          urls: 'stun:stun2.l.google.com:19302',
        },
      ],
    }),
  );
  
    useEffect(() =&gt; {
    socket.on('newCall', data =&gt; {
     /* This event occurs whenever any peer wishes to establish a call with you. */
    });

    socket.on('callAnswered', data =&gt; {
      /* This event occurs whenever remote peer accept the call. */
    });

    socket.on('ICEcandidate', data =&gt; {
      /* This event is for exchanging ICE candidates. */

    });

    let isFront = false;

/* The MediaDevices interface allows you to access connected media inputs such as cameras and microphones. We ask the user for permission to access those media inputs by invoking the mediaDevices.getUserMedia() method. */
    mediaDevices.enumerateDevices().then(sourceInfos =&gt; {
      let videoSourceId;
      for (let i = 0; i &lt; sourceInfos.length; i++) {
        const sourceInfo = sourceInfos[i];
        if (
          sourceInfo.kind == 'videoinput' &amp;&amp;
          sourceInfo.facing == (isFront ? 'user' : 'environment')
        ) {
          videoSourceId = sourceInfo.deviceId;
        }
      }
       
        
      mediaDevices
        .getUserMedia({
          audio: true,
          video: {
            mandatory: {
              minWidth: 500, // Provide your own width, height and frame rate here
              minHeight: 300,
              minFrameRate: 30,
            },
            facingMode: isFront ? 'user' : 'environment',
            optional: videoSourceId ? [{sourceId: videoSourceId}] : [],
          },
        })
        .then(stream =&gt; {
          // Get local stream!
          setlocalStream(stream);

          // setup stream listening
          peerConnection.current.addStream(stream);
        })
        .catch(error =&gt; {
          // Log error
        });
    });

    peerConnection.current.onaddstream = event =&gt; {
      setRemoteStream(event.stream);
    };

    // Setup ice handling
    peerConnection.current.onicecandidate = event =&gt; {
      
    };

    return () =&gt; {
      socket.off('newCall');
      socket.off('callAnswered');
      socket.off('ICEcandidate');
    };
  }, []);

}</code></pre><h4 id="step-7-establishing-and-managing-webrtc-calls">Step 7: Establishing and Managing WebRTC Calls</h4>
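<p>The offer/answer exchange below rides on three socket events: <code>call</code>, <code>answerCall</code>, and <code>ICEcandidate</code>. As a plain-JS sketch of how the client packages a local ICE candidate for the wire (the field names match the handlers below; the builder function itself is only illustrative):</p>

```javascript
// Packages a local ICE candidate into the payload emitted over the socket.
// The receiving side reconstructs an RTCIceCandidate from these fields.
function buildIceMessage(calleeId, candidate) {
  return {
    calleeId,
    rtcMessage: {
      label: candidate.sdpMLineIndex,  // media line index
      id: candidate.sdpMid,            // media stream identification tag
      candidate: candidate.candidate,  // the SDP candidate string
    },
  };
}
```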
<p>This step explains how a WebRTC call is established between peers.</p><pre><code class="language-js">let remoteRTCMessage = useRef(null);

useEffect(() =&gt; {
  socket.on("newCall", (data) =&gt; {
    remoteRTCMessage.current = data.rtcMessage;
    otherUserId.current = data.callerId;
    setType("INCOMING_CALL");
  });

  socket.on("callAnswered", (data) =&gt; {
    // 7. When Alice gets Bob's session description, she sets that as the remote description with `setRemoteDescription` method.

    remoteRTCMessage.current = data.rtcMessage;
    peerConnection.current.setRemoteDescription(
      new RTCSessionDescription(remoteRTCMessage.current)
    );
    setType("WEBRTC_ROOM");
  });

  socket.on("ICEcandidate", (data) =&gt; {
    let message = data.rtcMessage;

    // When Bob gets a candidate message from Alice, he calls `addIceCandidate` to add the candidate to the remote peer description.

   if (peerConnection.current) {
        peerConnection?.current
          .addIceCandidate(
            new RTCIceCandidate({
              candidate: message.candidate,
              sdpMid: message.id,
              sdpMLineIndex: message.label,
            }),
          )
          .then(data =&gt; {
            console.log('SUCCESS');
          })
          .catch(err =&gt; {
            console.log('Error', err);
          });
      }
  });

  // Alice creates an RTCPeerConnection object with an `onicecandidate` handler, which runs when network candidates become available.
  peerConnection.current.onicecandidate = (event) =&gt; {
    if (event.candidate) {
      // Alice sends serialized candidate data to Bob using Socket
      sendICEcandidate({
        calleeId: otherUserId.current,
        rtcMessage: {
          label: event.candidate.sdpMLineIndex,
          id: event.candidate.sdpMid,
          candidate: event.candidate.candidate,
        },
      });
    } else {
      console.log("End of candidates.");
    }
  };
}, []);

async function processCall() {
  // 1. Alice runs the `createOffer` method for getting SDP.
  const sessionDescription = await peerConnection.current.createOffer();

  // 2. Alice sets the local description using `setLocalDescription`.
  await peerConnection.current.setLocalDescription(sessionDescription);

  // 3. Send this session description to Bob using the socket
  sendCall({
    calleeId: otherUserId.current,
    rtcMessage: sessionDescription,
  });
}

async function processAccept() {
  // 4. Bob sets the description Alice sent him as the remote description using `setRemoteDescription()`
  await peerConnection.current.setRemoteDescription(
    new RTCSessionDescription(remoteRTCMessage.current)
  );

  // 5. Bob runs the `createAnswer` method
  const sessionDescription = await peerConnection.current.createAnswer();

  // 6. Bob sets that as the local description and sends it to Alice
  await peerConnection.current.setLocalDescription(sessionDescription);
  answerCall({
    callerId: otherUserId.current,
    rtcMessage: sessionDescription,
  });
}

function answerCall(data) {
  socket.emit("answerCall", data);
}

function sendCall(data) {
  socket.emit("call", data);
}

const JoinScreen = () =&gt; {
  return (
    /*
      ...
      ...
      ...
      */
    &lt;TouchableOpacity
      onPress={() =&gt; {
        processCall();
        setType("OUTGOING_CALL");
      }}
      style={{
        height: 50,
        backgroundColor: "#5568FE",
        justifyContent: "center",
        alignItems: "center",
        borderRadius: 12,
        marginTop: 16,
      }}
    &gt;
      &lt;Text
        style={{
          fontSize: 16,
          color: "#FFFFFF",
        }}
      &gt;
        Call Now
      &lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
    /*
      ...
      ...
      ...
      */
  );
};

const IncomingCallScreen = () =&gt; {
    return (
      /*
      ...
      ...
      ...
      */
      &lt;TouchableOpacity
        onPress={() =&gt; {
          processAccept();
          setType('WEBRTC_ROOM');
        }}
        style={{
          backgroundColor: 'green',
          borderRadius: 30,
          height: 60,
          aspectRatio: 1,
          justifyContent: 'center',
          alignItems: 'center',
        }}&gt;
        &lt;CallAnswer height={28} fill={'#fff'} /&gt;
      &lt;/TouchableOpacity&gt;
      /*
      ...
      ...
      ...
      */
    );
  };
</code></pre><h4 id="step-8-render-local-and-remote-mediastream">Step 8: Render Local and Remote MediaStream</h4>
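<p>Before the full room screen, note how mute/unmute works below: we flip <code>enabled</code> on each track rather than stopping it, so the connection stays alive and transmits silence or black frames instead. The core logic, sketched with plain objects standing in for media tracks:</p>

```javascript
// Flips `enabled` on every track and returns the new on/off state.
// With real WebRTC tracks, `enabled = false` keeps the track open but
// sends silence (audio) or black frames (video).
function toggleTracks(tracks, currentlyOn) {
  tracks.forEach((track) => {
    track.enabled = !currentlyOn;
  });
  return !currentlyOn;
}

// Simulated usage with stand-in track objects:
const audioTracks = [{ enabled: true }];
const micOn = toggleTracks(audioTracks, true); // mute: micOn is now false
```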
<pre><code class="language-js">import MicOn from "./asset/MicOn";
import MicOff from "./asset/MicOff";
import VideoOn from "./asset/VideoOn";
import VideoOff from "./asset/VideoOff";
import CameraSwitch from "./asset/CameraSwitch";
import IconContainer from "./src/components/IconContainer";

export default function App({}) {
  // Handling Mic status
  const [localMicOn, setlocalMicOn] = useState(true);

  // Handling Camera status
  const [localWebcamOn, setlocalWebcamOn] = useState(true);

  // Switch Camera
  function switchCamera() {
    localStream.getVideoTracks().forEach((track) =&gt; {
      track._switchCamera();
    });
  }

  // Enable/Disable Camera
  function toggleCamera() {
    localStream.getVideoTracks().forEach((track) =&gt; {
      track.enabled = !localWebcamOn;
    });
    setlocalWebcamOn(!localWebcamOn);
  }

  // Enable/Disable Mic
  function toggleMic() {
    localStream.getAudioTracks().forEach((track) =&gt; {
      track.enabled = !localMicOn;
    });
    setlocalMicOn(!localMicOn);
  }

  // Destroy WebRTC Connection
  function leave() {
    peerConnection.current.close();
    setlocalStream(null);
    setType("JOIN");
  }

  const WebrtcRoomScreen = () =&gt; {
    return (
      &lt;View
        style={{
          flex: 1,
          backgroundColor: "#050A0E",
          paddingHorizontal: 12,
          paddingVertical: 12,
        }}
      &gt;
        {localStream ? (
          &lt;RTCView
            objectFit={"cover"}
            style={{ flex: 1, backgroundColor: "#050A0E" }}
            streamURL={localStream.toURL()}
          /&gt;
        ) : null}
        {remoteStream ? (
          &lt;RTCView
            objectFit={"cover"}
            style={{
              flex: 1,
              backgroundColor: "#050A0E",
              marginTop: 8,
            }}
            streamURL={remoteStream.toURL()}
          /&gt;
        ) : null}
        &lt;View
          style={{
            marginVertical: 12,
            flexDirection: "row",
            justifyContent: "space-evenly",
          }}
        &gt;
          &lt;IconContainer
            backgroundColor={"red"}
            onPress={() =&gt; {
              leave();
              setlocalStream(null);
            }}
            Icon={() =&gt; {
              return &lt;CallEnd height={26} width={26} fill="#FFF" /&gt;;
            }}
          /&gt;
          &lt;IconContainer
            style={{
              borderWidth: 1.5,
              borderColor: "#2B3034",
            }}
            backgroundColor={!localMicOn ? "#fff" : "transparent"}
            onPress={() =&gt; {
              toggleMic();
            }}
            Icon={() =&gt; {
              return localMicOn ? (
                &lt;MicOn height={24} width={24} fill="#FFF" /&gt;
              ) : (
                &lt;MicOff height={28} width={28} fill="#1D2939" /&gt;
              );
            }}
          /&gt;
          &lt;IconContainer
            style={{
              borderWidth: 1.5,
              borderColor: "#2B3034",
            }}
            backgroundColor={!localWebcamOn ? "#fff" : "transparent"}
            onPress={() =&gt; {
              toggleCamera();
            }}
            Icon={() =&gt; {
              return localWebcamOn ? (
                &lt;VideoOn height={24} width={24} fill="#FFF" /&gt;
              ) : (
                &lt;VideoOff height={36} width={36} fill="#1D2939" /&gt;
              );
            }}
          /&gt;
          &lt;IconContainer
            style={{
              borderWidth: 1.5,
              borderColor: "#2B3034",
            }}
            backgroundColor={"transparent"}
            onPress={() =&gt; {
              switchCamera();
            }}
            Icon={() =&gt; {
              return &lt;CameraSwitch height={24} width={24} fill="#FFF" /&gt;;
            }}
          /&gt;
        &lt;/View&gt;
      &lt;/View&gt;
    );
  };

  switch (type) {
    case 'JOIN':
      return JoinScreen();
    case 'INCOMING_CALL':
      return IncomingCallScreen();
    case 'OUTGOING_CALL':
      return OutgoingCallScreen();
    case 'WEBRTC_ROOM': // Added line
      return WebrtcRoomScreen(); // Added line
    default:
      return null;
  }
}
</code></pre><p>You can get the SVGs for the <code>CameraSwitch</code>, <code>VideoOn</code> and <code>MicOn</code> icons from the <a href="https://github.com/videosdk-live/webrtc/tree/main/react-native-webrtc-app/client/asset">assets folder</a>.</p><p>Here is the <code>IconContainer.js</code> component code.</p><pre><code class="language-js">import React from 'react';
import {TouchableOpacity} from 'react-native';

const buttonStyle = {
  height: 50,
  aspectRatio: 1,
  justifyContent: 'center',
  alignItems: 'center',
};
const IconContainer = ({backgroundColor, onPress, Icon, style}) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        ...style,
        backgroundColor: backgroundColor ? backgroundColor : 'transparent',
        borderRadius: 30,
        height: 60,
        aspectRatio: 1,
        justifyContent: 'center',
        alignItems: 'center',
      }}&gt;
      &lt;Icon /&gt;
    &lt;/TouchableOpacity&gt;
  );
};
export default IconContainer;
</code></pre><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Screenshot_2023-01-12-16-41-46-547_com.webrtcexample--1-.jpg" class="kg-image" alt="How to Build a WebRTC React Native App?" loading="lazy" width="180" height="400"/></figure><p>Woohoo!! Finally we did it. </p><h4 id="step-9-handle-audio-routing-in-webrtc">Step 9: Handle Audio Routing in WebRTC</h4>
<p>We will use the third-party library <a href="https://github.com/react-native-webrtc/react-native-incall-manager">react-native-incall-manager</a> to handle audio routing and related edge cases during video conferencing.</p><pre><code class="language-js">import InCallManager from 'react-native-incall-manager';

 useEffect(() =&gt; {
    InCallManager.start();
    InCallManager.setKeepScreenOn(true);
    InCallManager.setForceSpeakerphoneOn(true);

    return () =&gt; {
      InCallManager.stop();
    };
  }, []);</code></pre><p>You can get the complete source code <a href="https://github.com/videosdk-live/webrtc/tree/main/react-native-webrtc-app">here</a>. In this blog, we created a WebRTC app with a signaling server, enabling peer-to-peer communication among 2-3 people in one room/meeting.</p><h2 id="integrate-webrtc-react-native-using-videosdk">Integrate WebRTC React Native using VideoSDK</h2><p><br><a href="https://videosdk.live/">VideoSDK</a> is the most developer-friendly platform for live video and audio SDKs. VideoSDK makes integrating live video and audio into your React Native project considerably easier and faster. You can have a branded, customized, and programmable call up and running in no time with only a few lines of code.</br></p><p>In addition, VideoSDK provides best-in-class customization, giving you total control over layout and permissions. Plugins may be used to improve the experience, and end-to-end call logs and quality data can be accessed directly from your VideoSDK dashboard or via <a href="https://docs.videosdk.live/api-reference/realtime-communication/intro">REST APIs</a>. This data enables developers to debug any issues that arise during a call and improve their integrations for the best possible customer experience.</p><p>Alternatively, you could follow this quickstart guide to <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/react-native-android-sdk">create a demo React Native project with VideoSDK</a>, or start with the <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example">code sample</a>.</p><h2 id="resources">Resources</h2><ul><li><a href="https://www.videosdk.live/blog/how-to-make-a-video-calling-app-using-react-native">Build a React Native Video Calling App with Video SDK</a></li><li><a href="https://www.videosdk.live/blog/react-native-android-video-calling-app-with-callkeep">Build a React Native Android Video Calling App with Callkeep using Firebase and Video SDK</a></li><li><a href="https://youtu.be/pqg1y3eRyK4">React Native Group Video Calling App Tutorial - YouTube</a><br/></li></ul>]]></content:encoded></item><item><title><![CDATA[Twilio Programmable Video End of Life and Zoom Out Twilio Migration]]></title><description><![CDATA[Twilio's Programmable Video SDK was shut down, which led to a migration to Zoom that faces challenges like latency and limited features.]]></description><link>https://www.videosdk.live/blog/zoom-out-twilio-migration</link><guid isPermaLink="false">65714e66cbe3c80e020eb989</guid><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Fri, 27 Sep 2024 15:13:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/12/Migration-to-VideoSDK.gif" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/12/Migration-to-VideoSDK.gif" alt="Twilio Programmable Video End of Life and Zoom Out Twilio Migration"/><p>Twilio has decided to shut down its Programmable Video SDK. This decision, while understandable, may leave many developers wondering about the future of their video-based applications.</p><p>In a recent statement, Twilio CEO Jeff Lawson explained the reasoning behind this decision. While I was disappointed with the decision, the Twilio Video SDK was one of the best products in town, especially for builders.</p><blockquote>Lastly, we’ve decided to end-of-life (EOL) Twilio Programmable Video as a standalone product. 
Given it’s such a niche area and a relatively small part of our portfolio, we believe partnering with video industry leaders is the best way to ensure long-term product innovation for our customers.</blockquote><blockquote>Removing Programmable Video from our portfolio will also allow Communications to more effectively focus on our pillar products - Messaging, Voice, and Email.</blockquote><p>I previously wrote about "Domino called exit(); for Twilio Programmable Video"; click the link below if you haven't read it yet.</p><h2 id="zooming-into-a-problem-situation">Zooming into a Problem Situation</h2><p>Twilio has partnered with Zoom to migrate from Twilio Programmable Video to the Zoom Video SDK:</p><blockquote>We recommend migrating your application to the API provided by our preferred video partner, Zoom. We've prepared this migration guide to assist you in minimizing any service disruption.</blockquote><p>The likely reason for picking Zoom is to avoid handing customers to a direct competitor: Zoom is nowhere near a direct competitor to Twilio Programmable Video, although both have similar types of customers, such as contact centers and corporate sales/marketing departments. Either way, Twilio customers have a year to find a solution unless drastic changes are made to the <a href="https://www.videosdk.live/"><strong>WebRTC API</strong></a><strong>.</strong></p><h2 id="future-of-building-real-time-video-apps">Future of Building Real-time Video Apps</h2><p>I think creating a great video app is an art. Every company has different uses and daily needs. Development flexibility is even more important because each use case requires high-end customization.</p><h3 id="compatibility-with-browsers-and-mobile-devices">Compatibility with browsers and mobile devices</h3><p>First and foremost, compatibility with major browsers and mobile devices is essential. Pre-call checks play a big role in improving the call experience. 
Audio/video capture failure is a leading cause of poor call experience; pre-checking capture saves a lot of time and confirms that browser/device support is available.</p><h3 id="end-to-end-customisable-uiux-experience">End to End Customisable UI/UX Experience</h3><p>Real-time experiences are about harmony between web apps and mobile apps, and each business requires a different set of features. </p><p>For example, an education use case is driven by a single speaker, tele-consultancy is continuous communication between two people, and real-time audio broadcasting requires a constant change of speakers. Each application requires a different type of user experience, implemented and managed according to the user base and its behavior.</p><h3 id="high-quality-audiovideo-experience">High-Quality Audio/Video Experience</h3><p>In an era where user experience is a key differentiator, a video application with high-quality audio and video capabilities stands out. Users expect nothing less than excellence, and an app that delivers on this front not only meets but exceeds those expectations. </p><p>This, in turn, contributes to positive reviews, increased user retention, and organic growth of the app's user base. To achieve that, high bitrates are required to send and receive audio and video with optimal compression mechanisms.</p><h3 id="native-integration-of-ai-on-top-of-audio-and-video">Native Integration of AI on top of Audio and Video</h3><p>As the world goes through a generative AI boom, real-time audio and video will play an important role in the adoption of AI. Background change, filters, and face tracking are much-needed features depending on the market segment. </p><p>Beyond that, speech-to-text and transcription are essential features when it comes to video analysis. 
This is just the beginning, as a growing number of companies are integrating native generative AI capabilities over real-time audio and video to better assist their users.</p><h3 id="collaboration-and-moderation-on-scale">Collaboration and Moderation on scale</h3><p>Whether in 1:1, small-group, or large-group calls, collaborative features like chat, polling, Q&amp;A, raising hands, and layout changes are essential to create a connection between participants. In large group calls, moderation controls like mute all, spotlight, and waiting rooms are a necessity.</p><h3 id="built-in-data-privacy-and-protection">Built-in Data Privacy and Protection</h3><p>Data privacy and security are among the most important aspects a company should consider before making any decision. It is essential to protect customer information from threats. A focus on privacy prevents unauthorized access, protects sensitive data, and maintains the app’s reputation in an environment where user trust is paramount.</p><h3 id="%E2%8F%BA%EF%B8%8F-instant-recordings-with-customizable-templates">⏺️ Instant recordings with customizable templates</h3><p>Most use cases around cloud recording involve either post-streaming or post-production of content. In education, for example, the use case is streaming recorded classes, while in virtual events it is post-producing content for websites and social media. Instant recording infrastructure plays a big role in such use cases, since users do not have to wait for the recording to be processed.</p><h2 id="nightmare-of-migrating-to-zoom-sdk">Nightmare of migrating to Zoom SDK</h2><p>Now let's talk about the elephant in the room, migration to Zoom. 
Zoom by nature is an MCU architecture, meaning it decrypts audio/video streams on the server and mixes them into one stream.</p><p>Compared to an SFU architecture, it is difficult to deliver a good developer experience on top of an MCU, and migration is becoming a nightmare for developers for several reasons. Below are the most important ones to consider:</p><h3 id="not-having-globally-connected-regions-to-solve-global-latency">Not having globally connected regions to solve global latency</h3><p>Zoom forces you to select a region before initiating the client, which makes global latency very difficult to solve: anyone from Europe joining a call on a US server will experience latency and massive packet loss.</p><pre><code class="language-js">client.init('en-US', 'Global', { patchJsMedia: true }).then(() =&gt; {
  client.join('sessionName', 'VIDEO_SDK_JWT', 'userName', 'sessionPasscode').then(() =&gt; {
    stream = client.getMediaStream()
  })
})</code></pre><h3 id="video-layout-with-canvas-painting">Video Layout with Canvas Painting</h3><p>Zoom doesn't expose raw access to audio and video streams in the SDK, so you have to write a lot of unnecessary positioning math to manage multiple layouts. Canvas rendering itself is fine, but developing and maintaining the layout logic can take almost as much time as building the product.</p><pre><code class="language-js">let participants = client.getAllUser()
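// Note: each renderVideo call takes (canvas, userId, width, height, x, y,
// quality) and must position every tile by hand. The four calls below lay
// out four 960x540 tiles as a 2x2 grid on a 1920x1080 canvas.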

stream.renderVideo(document.querySelector('#participant-videos-canvas'), participants[0].userId, 960, 540, 0, 540, 2)
stream.renderVideo(document.querySelector('#participant-videos-canvas'), participants[1].userId, 960, 540, 960, 540, 2)
stream.renderVideo(document.querySelector('#participant-videos-canvas'), participants[2].userId, 960, 540, 0, 0, 2)
stream.renderVideo(document.querySelector('#participant-videos-canvas'), participants[3].userId, 960, 540, 960, 0, 2)</code></pre><h3 id="raw-media-access-for-generative-ai-use-cases">Raw media Access for Generative AI Use cases</h3><p>As I said earlier, Zoom doesn't allow raw media stream access the way Twilio did, which means you can't integrate third-party SDKs or open-source models on the client or server side.</p><p>Generative AI is becoming increasingly important for every application integrating text-to-speech, face tracking, face recognition, and server-side audio/video analysis. None of this is possible with the Zoom SDK, due to the nature of the technology.</p><h3 id="large-size-of-sdk-binary">Large Size of SDK Binary</h3><p>The Zoom SDK averages 97 MB+ on mobile and can go up to 157 MB+ (I've even read 500 MB+ in community threads), which makes it heavy for a large number of use cases.</p><h3 id="%E2%AD%90-720p-resolution-is-not-supported">⭐ 720p+ resolution is not supported</h3><p>Zoom is not suitable if you are building apps for high-quality experiences, due to resolution and bitrate restrictions at 720p. This kills most use cases like broadcasting, high-resolution screen sharing, and high-quality content sharing.</p><h3 id="virtual-background-gallery-view-with-sharedbuffer-array">Virtual Background, Gallery View with SharedBuffer Array</h3><p>The <code>SharedArrayBuffer</code> requirement is a compatibility and cross-device support killer in the Zoom SDK. It is only supported by two browsers, as mentioned below by the Zoom team. 
The bad news is that your users' pages have to enable cross-origin isolation, because Chrome doesn't allow <code>SharedArrayBuffer</code> directly.</p><blockquote>As of <a href="https://developer.chrome.com/blog/enabling-shared-array-buffer">Chrome and Edge 92</a>, and <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer">Firefox 79</a> SharedArrayBuffer is only available if your web page is Cross-Origin Isolated, or if your web page uses Credentialless headers, or if you have registered for the SharedArrayBuffer Chrome Origin Trial (works only in Chrome and Edge).</blockquote><h3 id="%E2%9D%8C-not-compatible-majority-of-browsers">❌ Not compatible with the majority of browsers</h3><p>Since the Zoom SDK does not use WebRTC technology and relies on its own MCU infrastructure, it is not able to provide good support for web calls.</p><h3 id="end-to-end-encryption-of-audio-video-streams">End to End Encryption of Audio / Video Streams </h3><p>Zoom does not allow you to encrypt streams in the browser. This means that the data transmitted from your browser, including audio and video, is not end-to-end encrypted throughout its journey to the recipient. </p><p>While Zoom encrypts the data in transit between its servers, there is a potential vulnerability in the browser-to-server leg of the communication. When selecting a video conferencing platform, it is crucial to consider security needs. If end-to-end encryption is essential for your specific requirements, ensuring the platform you choose offers this functionality is critical.</p><h3 id="simulcast-for-adaptive-bitrate-streaming">Simulcast for Adaptive Bitrate Streaming</h3><p>Zoom does not allow multi-layer sending and receiving of video with adaptive bitrate streaming, due to the same MCU architecture. 
This prevents the Zoom SDK from adapting audio/video bitrate and resolution to fluctuating internet bandwidth.</p><h3 id="sender-receiver-media-track-subscription">Sender / Receiver media track subscription</h3><p>Zoom does not allow subscribing to or unsubscribing from individual audio/video streams, which makes use cases such as breakout rooms, backstage, and watch parties difficult to implement.</p><h3 id="pre-call-testing-for-the-best-call-experience">Pre-call Testing for the best call experience</h3><p>Zoom does not provide a pre-call test before starting a video call: only a device preview is available, with no connectivity or quality checks.</p><h3 id="qos-apis-and-dashboard">QOS APIs and Dashboard</h3><p>Quality of service (QoS) is paramount for any video conferencing platform, and developers need tools and insights to ensure optimal communication experiences. There are two key avenues for monitoring and managing QoS: APIs and a dashboard.</p><p>A QoS API grants developers access to detailed information on key performance metrics. By leveraging such an API, developers can integrate custom monitoring tools, automated remediation workflows, and real-time quality feedback into their applications.</p><h3 id="server-side-raw-video-streams-for-ai-use-case">Server-side raw video streams for AI use-case</h3><p>It is not possible to extract raw audio/video streams from the client or server side using the Zoom Video SDK. This makes it difficult to create custom AI use cases such as transcription, speech-to-text, or any other type of intelligence.</p><h3 id="%E2%9C%89%EF%B8%8F-missing-data-channel-for-collaboration-and-moderation-controls">✉️ Missing data channel for collaboration and moderation controls</h3><p>Zoom does not have a proper Data Channel feature within the SDK. 
That means you can't build collaborative features like polls, Q&amp;A, and layout changes, or moderation features like mute all and invite as a host.</p><h2 id="what-should-you-know-before-moving-to-the-next-api-sdk">What Should You Know Before Moving to the Next API / SDK?</h2><ul><li>List your requirements and prioritize SDKs based on match score.</li><li>Create a small POC and test the platform.</li><li>Check out their support in the community and their knowledge of the space.</li><li>Check out the latest releases and continuous updates for your industry.</li><li>Get a demo with their team and see how committed they are to your future roadmap and what they're building for the next couple of quarters.</li><li>Check whether the vendor has had layoffs in the last few quarters, as a signal of whether they will be around for long.</li><li>A red flag is people trying to sell demos instead of explaining what's available and not available compared to Twilio.</li><li>Another red flag is being onboarded through migration credits and offers; trust me, it's not worth it.</li><li>Invest in one vendor rather than buying from multiple, because in the long run one vendor will be better able to justify the usage, business needs, and relationship.</li></ul><h2 id="developer-first-approach-at-videosdk">Developer first approach at VideoSDK</h2><p>VideoSDK.live is focused on one problem: the best developer experience, reliability, and security for real-time video infrastructure. Compared to Zoom, we have a rich, highly flexible, and developer-friendly SDK. Here is the comparison:</p>
<!--kg-card-begin: html-->
<table>
    <tr>
        <th>Features</th>
        <th>Zoom SDK</th>
        <th>VideoSDK.live</th>
    </tr>
    <tr>
        <td>Globally Connected Regions</td>
        <td>No</td>
        <td>Yes</td>
    </tr>
    <tr>
        <td>Video Tiles Rendering</td>
        <td>Inflexible</td>
        <td>Flexible</td>
    </tr>
    <tr>
        <td>Raw Media Access</td>
        <td>No</td>
        <td>Yes</td>
    </tr>
    <tr>
        <td>SDK Binary Size</td>
        <td>100 MB+</td>
        <td>20 MB</td>
    </tr>
    <tr>
        <td>Max Resolution</td>
        <td>720p</td>
        <td>2k+</td>
    </tr>
    <tr>
        <td>Browser Compatibility</td>
        <td>10% browsers</td>
        <td>98% browsers</td>
    </tr>
    <tr>
        <td>Sender / Receiver media track subscription</td>
        <td>No</td>
        <td>Yes</td>
    </tr>
    <tr>
        <td>QOS API / Dashboard</td>
        <td>No</td>
        <td>Yes</td>
    </tr>
    <tr>
        <td>Pre-call Check</td>
        <td>No</td>
        <td>Yes</td>
    </tr>
    <tr>
        <td>Data Channel</td>
        <td>No</td>
        <td>Yes</td>
    </tr>
</table>
<!--kg-card-end: html-->
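<p>As a concrete sketch of the pre-call capture check argued for earlier, here is what a minimal browser-side probe looks like using the standard <code>getUserMedia</code> API. The helper names (<code>preCallCheck</code>, <code>classifyCaptureError</code>) are illustrative, not part of any vendor SDK:</p>

```javascript
// Map standard getUserMedia DOMException names to actionable diagnostics.
function classifyCaptureError(errName) {
  switch (errName) {
    case 'NotAllowedError': return 'permission-denied';
    case 'NotFoundError': return 'no-device';
    case 'NotReadableError': return 'device-in-use';
    case 'OverconstrainedError': return 'unsupported-constraints';
    default: return 'unknown';
  }
}

// Probe camera + mic before joining, then release the devices immediately.
async function preCallCheck() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
    stream.getTracks().forEach((track) => track.stop());
    return { ok: true };
  } catch (err) {
    return { ok: false, reason: classifyCaptureError(err.name) };
  }
}
```

<p>Running a probe like this before joining surfaces capture problems (denied permissions, missing or busy devices) while the user can still fix them, rather than mid-call.</p>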
<p>Follow the migration guide from Twilio programmable video to Video SDK, and at least start building a POC to see what works for you and what doesn't. Here are some references to do the same: </p><p><strong>Check out the Migration guide that we have written: </strong></p><ul><li><a href="https://docs.videosdk.live/tutorials/twilio-to-videosdk-migration-guide">Overview and Feature Comparison</a> </li><li><a href="https://docs.videosdk.live/tutorials/migration-guide-from-twilio-to-videosdk-js-sdk">JavaScript Migration Guide</a></li><li><a href="https://docs.videosdk.live/tutorials/migration-guide-from-twilio-to-videosdk-web-edition">React Migration Guide</a></li><li><a href="https://docs.videosdk.live/tutorials/migration-guide-from-twilio-to-videosdk-ios-sdk">iOS Migration Guide</a></li><li><a href="https://docs.videosdk.live/tutorials/migration-guide-from-twilio-to-videosdk-android-sdk">Android Migration Guide</a></li></ul><p>That's all for today, feel free to reach out if you need any help to navigate through the solution, <a href="https://www.videosdk.live/contact">Talk to Our Migration Expert</a></p><p>As I said earlier here is the link about "<a href="https://protocol.substack.com/p/end-of-life-for-twilios-programmable-video">Domino called exit(); for Twilio's Programmable Video</a>"</p><p>Until next time, see you.</p>]]></content:encoded></item><item><title><![CDATA[What is WebRTC Leak?]]></title><description><![CDATA[
WebRTC leaks can expose your real IP address even when you use a VPN. Learn what WebRTC leaks are, how to run a WebRTC leak test, and how to prevent leaks with leak shields, browser settings, and a secure DNS.
]]></description><link>https://www.videosdk.live/blog/webrtc-leak</link><guid isPermaLink="false">653785f5bbb6901662fde10a</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 27 Sep 2024 14:48:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/10/Frame-4.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/10/Frame-4.jpg" alt="What is WebRTC Leak?"/><p>In an age where online privacy is paramount, WebRTC leaks have emerged as a significant concern. Understanding what <a href="https://www.videosdk.live/blog/webrtc">WebRTC</a> leaks are, how to shield against them, conduct tests to detect vulnerabilities, and prevent potential leaks is crucial in maintaining your online privacy. This comprehensive article delves into these topics, providing insights, tips, and expert guidance.</p><h2 id="what-is-webrtc-leak">What is WebRTC Leak?</h2><p>WebRTC, or Web Real-Time Communication, is a technology that allows web browsers to communicate directly with one another. While it's a valuable tool for video and audio calls, it can inadvertently reveal your IP address, potentially compromising your privacy.</p><p>WebRTC leaks occur when your real IP address is exposed through your web browser, often due to default settings in various browsers. To prevent such leaks, you can make use of WebRTC leak shields.</p><h2 id="the-significance-of-webrtc-leak-shields">The Significance of WebRTC Leak Shields</h2><p>WebRTC leak shields, as the name suggests, act as protective barriers, preventing the leakage of your IP address during online communication. They ensure that the only IP address visible to the websites you visit is that of the VPN server you are connected to.</p><p>Using a VPN with WebRTC leak shield capabilities is a proactive way to ensure your online activities remain private and secure. 
To check if your VPN has WebRTC leak protection, you can perform a WebRTC leak test.</p><h2 id="conducting-a-webrtc-leak-test">Conducting a WebRTC Leak Test</h2><p>A WebRTC leak test is a straightforward method to determine whether your VPN is effectively protecting <a href="https://dnschecker.org/whats-my-ip-address.php">your IP address</a>. Here's how you can conduct one:</p><p><strong>Access a WebRTC Leak Test Website</strong>: Numerous websites provide WebRTC leak tests. Open one in your browser.</p><p><strong>Check for IP Leakage</strong>: Run the test, and the website will display the IP address it detects. If it shows your actual IP address instead of the VPN server's, your VPN may not be adequately protecting against WebRTC leaks.</p><p><strong>Ensure VPN WebRTC Leak Protection</strong>: If your VPN fails the test, check its settings or consider switching to a VPN with robust WebRTC leak protection.</p><h2 id="effective-methods-to-prevent-webrtc-leaks">Effective Methods to Prevent WebRTC Leaks</h2><p>Preventing WebRTC leaks involves more than just relying on your VPN. Here are some additional methods to ensure your online privacy:</p><p><strong>Disable WebRTC</strong>: In some browsers, you can disable WebRTC entirely. However, this may impact your ability to use certain online services that rely on this technology.</p><p><strong>Use Browser Extensions</strong>: Several browser extensions are available to prevent WebRTC leaks. 
They offer an extra layer of security.</p><p><strong>Regularly Update Your Browser</strong>: Keeping your browser updated ensures that you have the latest security features, including patches for any WebRTC vulnerabilities.</p><p><strong>Opt for a Trustworthy VPN</strong>: Ensure that your VPN provider has robust WebRTC leak protection and is committed to your online privacy.</p><p><strong>Use a Secure DNS</strong>: A secure Domain Name System (DNS) can also help prevent IP leakage during WebRTC communication.</p><p><strong>Educate Yourself</strong>: Knowledge is your best defense. Regularly research and stay informed about the latest developments in online privacy and security.</p><h2 id="faqs">FAQs</h2><p><strong>Q:</strong> Can WebRTC leaks be harmful?<br><strong>A:</strong> Yes, they can compromise your online privacy by revealing your real IP address, allowing websites to track your online activities.</br></p><p><strong>Q:</strong> Are all VPNs equipped with WebRTC leak protection?<br><strong>A:</strong> No, not all VPNs have WebRTC leak protection. It's essential to choose a VPN with this feature if online privacy is a priority.</br></p><p><strong>Q:</strong> Can disabling WebRTC affect my web browsing experience?<br><strong>A:</strong> Yes, disabling WebRTC may affect your ability to use certain web services that rely on it. It's a trade-off between privacy and functionality.</br></p><p><strong>Q:</strong> Are WebRTC leak tests reliable?<br><strong>A:</strong> WebRTC leak tests are generally reliable in identifying IP leakage. However, always ensure you're using a trusted testing platform.</br></p><p><strong>Q:</strong> How often should I update my browser?<br><strong>A:</strong> Regularly updating your browser is recommended. 
It helps keep your online experience secure by patching vulnerabilities.</br></p><p><strong>Q:</strong> What is DNS, and how does it relate to WebRTC leaks?<br><strong>A:</strong> DNS (Domain Name System) translates website addresses into IP addresses. Using a secure DNS can help prevent IP leakage during WebRTC communication.</br></p><h2 id="conclusion">Conclusion</h2><p>Understanding WebRTC leaks, the significance of WebRTC leak shields, conducting WebRTC leak tests, and implementing effective preventive measures are crucial for safeguarding your online privacy. By following the guidelines outlined in this article, you can ensure that your online activities remain secure and confidential.</p><h2 id="take-advantage-of-webrtc-with-videosdk">Take Advantage of WebRTC with VideoSDK</h2><p>With the rapid growth of online communication and real-time video interactions, harnessing the power of WebRTC (Web Real-Time Communication) through a <a href="https://www.videosdk.live/">VideoSDK </a>can greatly enhance your applications and services. Whether you're looking to integrate video capabilities into your web or mobile applications, We support various frameworks and provide comprehensive documentation to make the process seamless.</p><p><strong>WEB SDK:</strong></p><p>Our Web SDK offers <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/getting-started">prebuilt solutions</a> for quick and easy integration, supporting popular <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start">JavaScript</a> and <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/concept-and-architecture">React</a> frameworks. 
Our <a href="https://docs.videosdk.live/javascript/api/sdk-reference/setup">API reference</a>, <a href="https://www.videosdk.live/blog/tag/product">developer blogs</a>, and <a href="https://docs.videosdk.live/code-sample">code samples</a> are available to guide you through the implementation process.</p><p><strong>MOBILE SDK:</strong></p><p>For mobile app development, our SDK supports <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/concept-and-architecture">React Native</a>, <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/concept-and-architecture">Flutter</a>, <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/concept-and-architecture">Android</a>, and <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/getting-started">iOS platforms</a>, ensuring a consistent and reliable video experience across devices.</p><p><strong>Use Cases:</strong></p><p>Our VideoSDK finds applications across multiple industries, such as:</p><ol><li><a href="https://www.videosdk.live/solutions/video-kyc">Video KYC</a>: Streamline and secure your Know Your Customer processes with real-time video verification.</li><li><a href="https://www.videosdk.live/solutions/telehealth">Telehealth</a>: Enable high-quality video consultations for healthcare professionals and patients.</li><li><a href="https://www.videosdk.live/solutions/education">Education</a>: Enhance online learning experiences with interactive video classes and collaboration tools.</li><li><a href="https://www.videosdk.live/solutions/live-shopping">Live Shopping</a>: Create engaging live shopping experiences, enabling customers to interact with sellers in real time.</li><li><a href="https://www.videosdk.live/solutions/virtual-events">Virtual Events</a>: Host virtual conferences, expos, and trade shows with integrated video features.</li><li><a href="https://www.videosdk.live/solutions/social">Social 
Media</a>: Enhance social platforms with live video streaming for more engaging user interactions.</li><li><a href="https://www.videosdk.live/solutions/live-audio-streaming">Live Audio Streaming</a>: Offer real-time audio streaming for podcasts, music, and more.</li></ol><p>More importantly, it is <strong><strong>FREE</strong></strong> to start. You are guaranteed to receive <a href="https://app.videosdk.live/"><strong><strong>10,000 minutes of free EVERY MONTH</strong></strong>.</a></p><!--kg-card-begin: html--><!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</link></meta></meta></head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h2 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h2>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html><!--kg-card-end: html--><!--kg-card-begin: html--><script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Can WebRTC leaks be harmful?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes, they can compromise your online privacy by revealing your real IP address, allowing websites to track your online activities."
      }
    }
  ]
}
</script><!--FAQPage Code Generated by https://saijogeorge.com/json-ld-schema-generator/faq/-->
<!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Introduction to Real-time communication SDK]]></title><description><![CDATA[The real-time communication SDK is built with a blend of WebRTC and an optimised UDP protocol. Our SDK helps developers add real-time audio and video calls to any mobile app or web application.]]></description><link>https://www.videosdk.live/blog/introduction-to-real-time-communication-sdk</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb60</guid><category><![CDATA[Getting Started]]></category><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Fri, 27 Sep 2024 13:27:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/12/unlimited-host.webp" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/12/unlimited-host.webp" alt="Introduction to Real-time communication SDK"/><p>The real-time communication SDK is built with a blend of WebRTC and an optimised UDP protocol. Our SDK helps developers add real-time audio and video calls to any mobile app or web application.</p><p>With our embedded SDK, you can embed a video call widget in your web application. It supports 98% of devices across all platforms and offers adaptive video calling for better-quality calls with low latency. Developers can also customise the embedded SDK to fit their application.</p><p>Our research team has worked hard to handle all the edge cases, so you can focus on what matters.</p><h2 id="ways-to-start-developing">Ways to start developing</h2><h3 id="1-dashboard">1. Dashboard</h3><p>Create or schedule meetings instantly from the dashboard and try it out.</p><h3 id="2-embedded-sdk">2. Embedded SDK</h3><p>The best way to start is to embed a working video call widget in your app.</p><h3 id="3-programmable-api">3. 
Programmable API</h3><p>The Programmable API lets you create and manage rooms directly from your backend server.</p><h3 id="4-custom-meetings-interface-sdk">4. Custom Meetings Interface SDK</h3><p>Our front-end SDKs provide fine control to design a custom user interface and experience specifically for your needs.</p><p>Find out more in the documentation. </p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/docs/realtime-communication/intro"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Introduction | VideoSDK.live Documentation</div><div class="kg-bookmark-description">Real-time communication SDK is built with a blend of WebRTC and an optimised UDP protocol. Our SDK helps developers to add real-time audio and video calls to any mobile app or web application.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/favicon.ico" alt="Introduction to Real-time communication SDK"><span class="kg-bookmark-author">videosdk.live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/assets/images/Zujonow-whitelabel-min-7e7fcd47dedd07f03f4355427a764caf.jpg" alt="Introduction to Real-time communication SDK"/></div></a></figure>]]></content:encoded></item><item><title><![CDATA[End-to-End Encryption(E2EE): How It Secures Your Digital Communications?]]></title><description><![CDATA[E2EE (End-to-End Encryption) safeguards messages by encrypting them from sender to recipient, enhancing privacy and security online.]]></description><link>https://www.videosdk.live/blog/what-is-end-to-end-encryption-e2ee</link><guid isPermaLink="false">6676749e20fab018df10ef15</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 27 Sep 2024 11:13:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/end-to-end-encryption.png" medium="image"/><content:encoded><![CDATA[<h2 
id="introduction-to-end-to-end-encryption">Introduction to End-to-End Encryption</h2>
<img src="https://assets.videosdk.live/static-assets/ghost/2024/06/end-to-end-encryption.png" alt="End-to-End Encryption(E2EE): How It Secures Your Digital Communications?"/><p>In an era where digital communication is ubiquitous, securing the integrity and privacy of data as it travels across the internet is more crucial than ever. End-to-end encryption (E2EE) stands out as a vital technology in this landscape, ensuring that messages are readable only by the intended recipients. Unlike other encryption methods that protect data in transit but not when it is stored or processed, E2EE maintains the confidentiality of communication from the moment it leaves one device until it reaches another, with no intermediate decryptions. This method is especially significant in protecting against a wide array of cyber threats, including unauthorized surveillance and data breaches.</p><p>E2EE is not just a technical term but a foundation for modern privacy, crucial for both individuals and businesses. As digital communication evolves, the relevance of E2EE continues to grow, underscored by its widespread adoption in everything from messaging apps to financial transactions. Understanding how E2EE works, its benefits and its limitations is essential for anyone looking to safeguard their digital communications or navigate the complex landscape of cybersecurity.</p><h2 id="what-is-end-to-end-encryptione2ee">What is End-to-end encryption(E2EE)?</h2>
<p>End-to-end encryption (E2EE) ensures secure communication where only the sender and recipient can access messages. It prevents intermediaries, including service providers, from deciphering content. E2EE uses cryptographic keys: one for encryption by the sender and another for decryption by the recipient, ensuring data remains confidential throughout transmission. This technology is crucial for protecting sensitive information in messaging apps, emails, and online transactions.</p><h3 id="how-does-end-to-end-encryption-work">How Does End-to-End Encryption Work?</h3>
<p>End-to-end encryption (E2EE) is a method of secure communication that prevents third parties from accessing data while it's transferred from one end system or device to another. In E2EE, data is encrypted on the sender's system or device and only the recipient is able to decrypt it. No intermediary, not even the service provider, has access to the decryption keys necessary to decrypt the data.</p><h4 id="encryption-and-decryption-process">Encryption and Decryption Process:</h4>
<p>The process begins when data is encrypted by the sender using a key known only to the device and the intended recipient. This key is never exposed to the service providers or any other entities that facilitate the communication. The encrypted data, which appears as scrambled and unreadable ciphertext, travels securely through various transmission mediums until it reaches the recipient. Upon arrival, the recipient's device uses a corresponding private key to decrypt the data back into its original plaintext form.</p><h4 id="role-of-keys-in-asymmetric-encryption">Role of Keys in Asymmetric Encryption:</h4>
<p>In most E2EE setups, asymmetric cryptography, or public key cryptography, plays a crucial role. This system uses two separate keys: a public key, which is shared openly, and a private key, which remains confidential to each user. Data encrypted with a public key can only be decrypted by its corresponding private key and vice versa. This ensures that even if the public key is intercepted, the information remains secure because only the private key holder can decrypt it.</p><h4 id="securing-data-transmission">Securing Data Transmission:</h4>
<p>Public key infrastructure (PKI) supports the distribution and identification of public encryption keys, enabling users and devices to securely exchange data over networks without prior arrangements. The security and management of these keys are fundamental to the effectiveness of E2EE. Services like ProtonMail and applications such as WhatsApp utilize this technology to ensure that only the communicating users can read the messages, without the possibility of third-party access.</p><h2 id="benefits-of-end-to-end-encryption">Benefits of End-to-End Encryption</h2>
<h3 id="privacy-and-security">Privacy and Security</h3>
<p>The primary benefit of E2EE is that it significantly enhances the privacy and security of data. By ensuring that only the intended recipients can access the message content, E2EE protects sensitive information from eavesdroppers, hackers, and even the service providers themselves.</p><h3 id="data-integrity">Data Integrity</h3>
<p>E2EE also helps maintain the integrity of data by preventing unauthorized alterations during transit. Since the data can only be decrypted by the recipient's private key, any tampering with the encrypted data is easily detectable.</p><h3 id="regulatory-compliance">Regulatory Compliance</h3>
<p>For businesses, E2EE can help in complying with privacy laws and regulations such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA), which mandate the protection of personal and sensitive data.</p><h2 id="challenges-and-limitations">Challenges and Limitations</h2>
<p>Despite its strengths, E2EE is not without challenges. Key management can be complex and requires robust security measures to prevent unauthorized access. If a private key is lost or stolen, the corresponding data cannot be decrypted, which can lead to data loss.</p><h3 id="endpoint-security">Endpoint Security</h3>
<p>E2EE does not protect against vulnerabilities at the endpoints themselves. If malware or a malicious actor gains access to an endpoint after decryption, the security benefits of E2EE are nullified. Moreover, the encrypted nature of E2EE can complicate legal compliance efforts when lawful access to the data is required.</p><h3 id="implementation-complexity">Implementation Complexity</h3>
<p>Implementing E2EE correctly requires significant expertise and can introduce complexities in the architecture of applications. Poor implementation can lead to vulnerabilities that might be exploited by attackers.</p><p>As digital communications continue to evolve, understanding the intricate details of how E2EE works and its application in real-world scenarios becomes increasingly important. This technology forms the backbone of secure communication in our modern digital age, empowering users to control their data and maintain privacy in their digital interactions.</p><h2 id="real-world-applications-of-end-to-end-encryption">Real-world Applications of End-to-End Encryption</h2>
<p>End-to-end encryption (E2EE) isn't just a theoretical construct; it's actively employed across various platforms and industries to safeguard communication and data storage. This section explores how E2EE is utilized in the real world, emphasizing its versatility and critical role in digital security.</p><h3 id="messaging-applications">Messaging Applications</h3>
<p>Popular messaging apps like WhatsApp, Signal, and Telegram use E2EE to ensure that only the communicating users can access messages. This technology secures millions of personal and professional conversations daily, making it virtually impossible for hackers or even the service providers themselves to intercept and read messages.</p><h3 id="email-encryption">Email Encryption</h3>
<p>Email services such as ProtonMail and Tutanota offer E2EE for emails, attachments, and even contacts. Understanding how email encryption works is important to ensure that sensitive data remains protected from unauthorized access. This layer of security is crucial, especially for sharing sensitive information such as legal documents, medical records, or confidential business plans. To further reduce risks at the entry point, using a <a href="https://clearout.io/form-guard/" rel="noreferrer">form validator</a> can help ensure only accurate and verified data enters your system.</p><h3 id="data-storage">Data Storage</h3>
<p>Cloud storage providers, including Apple’s iCloud and Google Drive, have started implementing E2EE for data at rest and in transit. This means that data is encrypted before leaving the user's device and can only be decrypted by the intended recipient, thereby protecting personal and enterprise data from potential breaches.</p><h3 id="financial-transactions">Financial Transactions</h3>
<p>Banks and financial institutions are increasingly relying on E2EE to secure online transactions and protect customer data. This application is critical in building trust and compliance with financial regulations that require robust data protection measures.</p><h2 id="conclusion-the-importance-and-future-of-end-to-end-encryption">Conclusion: The Importance and Future of End-to-End Encryption</h2>
<p>The significance of E2EE in today's digital landscape cannot be overstated. It offers a high level of security that is crucial for protecting personal privacy and sensitive corporate information. As cyber threats continue to evolve, the role of E2EE in safeguarding digital communications will only grow more critical.</p><h3 id="challenges-ahead">Challenges Ahead</h3>
<p>While E2EE is powerful, it faces challenges, including issues with key management, the complexity of implementation, and the potential for misuse in shielding illegal activities. Future advancements in encryption technology must address these challenges while maintaining or enhancing security levels.</p><h3 id="future-innovations">Future Innovations</h3>
<p>Innovations such as quantum computing pose both an opportunity and a threat to E2EE. Quantum-resistant encryption algorithms are being developed to counter potential future threats, ensuring that E2EE remains a robust defense against evolving cyber threats.</p><h2 id="faqs-for-e2eeend-to-end-encryption">FAQs for E2EE (End-to-End Encryption)</h2>
<h3 id="1-what-happens-if-i-lose-my-private-key">1. What happens if I lose my private key?</h3>
<p>Losing a private key means losing access to decrypt any data encrypted with it. It’s crucial to manage and back up keys securely.</p><h3 id="2-can-e2ee-be-broken">2. Can E2EE be broken?</h3>
<p>While E2EE is theoretically secure, its actual security depends on the encryption protocol’s implementation and the security of the endpoints.</p><h3 id="3-is-e2ee-legal-everywhere">3. Is E2EE legal everywhere?</h3>
<p>The legality of E2EE varies by country. Some nations restrict or regulate its use due to concerns over national security and law enforcement.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Screen Share in the React Native iOS Video Call App?]]></title><description><![CDATA[In this article, we'll guide you on how to smoothly integrate Screen Share capabilities in your React Native iOS with VideoSDK.]]></description><link>https://www.videosdk.live/blog/integrate-screen-share-in-react-native-ios-video-call-app</link><guid isPermaLink="false">661391452a88c204ca9d0011</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[React Native]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Fri, 27 Sep 2024 10:38:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-SHare-in-React-Native-Android-2.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-SHare-in-React-Native-Android-2.png" alt="How to Integrate Screen Share in the React Native iOS Video Call App?"/><p>In today's digital world, good communication and teamwork are crucial. As people look for more effective methods to connect, video conferencing solutions have become indispensable. Screen sharing has evolved into an integral element of modern communication and collaboration platforms, allowing users to share their device displays with others during meetings, presentations, and distant collaborations. 
In this article, we'll look at how to smoothly incorporate Screen Share capabilities with VideoSDK.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><h3 id="goals">Goals</h3><p>By the end of this article, we'll:</p><ol><li>Create a <a href="https://app.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK.</li><li>Enable Screen Sharing feature.</li></ol><p>To take advantage of the Screen Sharing functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. 
This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li><li>Android Studio or Xcode installed</li></ul><h2 id="%E2%AC%87%EF%B8%8F-integrate-videosdk"><strong>⬇️ </strong>Integrate VideoSDK</h2><p>Install the VideoSDK by using the following command. Ensure that you are in your project directory before running this command.</p><ul><li>For NPM </li></ul><pre><code class="language-js">npm install "@videosdk.live/react-native-sdk"  "@videosdk.live/react-native-incallmanager"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">yarn add "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"</code></pre><h3 id="project-configuration">Project Configuration</h3><p>Before integrating the Screen Share functionality, ensure that your project is correctly prepared to handle the integration. This setup consists of a sequence of steps for configuring rights, dependencies, and platform-specific parameters so that VideoSDK can function seamlessly inside your application context.</p><h4 id="ios-setup%E2%80%8B">iOS Setup​</h4>
<blockquote>Ensure that you are using CocoaPods version 1.10 or later.</blockquote><p><strong>1. </strong>To update CocoaPods, you can reinstall the gem using the following command:</p><pre><code class="language-swift">$ sudo gem install cocoapods</code></pre><p><strong>2.</strong> Manually link react-native-incall-manager (if it is not linked automatically).</p><p>Select <code>Your_Xcode_Project/TARGETS/BuildSettings</code>, and in Header Search Paths, add <code>"$(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager"</code>.</p><p><strong>3.</strong> Change the path of <code>react-native-webrtc</code> using the following line in your Podfile:</p><pre><code class="language-swift">pod 'react-native-webrtc', :path =&gt; '../node_modules/@videosdk.live/react-native-webrtc'</code></pre><p><strong>4. </strong>Change the version of your platform.</p><p>You need to change the platform field in the Podfile to 12.0 or above because <strong>react-native-webrtc</strong> doesn't support iOS versions earlier than 12.0. Update the line to: <code>platform :ios, '12.0'</code>.</p><p><strong>5. </strong>Install pods.</p><p>After updating the version, install the pods by running the following command:</p><pre><code class="language-swift">pod install</code></pre><p><strong>6. </strong>Add the "libreact-native-webrtc.a" binary.</p><p>Add the "<strong>libreact-native-webrtc.a</strong>" binary to the "Link Binary With Libraries" section in the target of your main project folder.</p><p><strong>7. </strong>Declare permissions in <strong>Info.plist</strong>:</p><p>Add the following lines to your Info.plist file:</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">ios/projectname/info.plist</span></p></figcaption></figure><h4 id="register-service">Register Service</h4>
<p>Register VideoSDK services in your root <code>index.js</code> file for the initialization service.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { register } from "@videosdk.live/react-native-sdk";

register();

AppRegistry.registerComponent(appName, () =&gt; App);</code></pre><figcaption><p><span style="white-space: pre-wrap;">index.js</span></p></figcaption></figure><h2 id="essential-steps-to-add-video-in-your-app">Essential Steps to Add Video in Your App</h2><p>By following these essential steps, you can seamlessly implement video in your application.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with api.js<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">export const token = "&lt;Generated-from-dashboard&gt;";
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });

  const { roomId } = await res.json();
  return roomId;
};</code></pre><figcaption><p><span style="white-space: pre-wrap;">api.js</span></p></figcaption></figure><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. 
It is responsible for handling all the events and props related to one particular participant such as name, webcamStream, micStream etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the App.js file.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import React, { useState } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) =&gt; {
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}&gt;
      &lt;MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      &gt;
        &lt;MeetingView /&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  ) : (
    &lt;JoinScreen getMeetingId={getMeetingId} /&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">App.js</span></p></figcaption></figure><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-3--implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function JoinScreen(props) {
  const [meetingVal, setMeetingVal] = useState("");
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Create Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      /&gt;
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() =&gt; {
          props.getMeetingId(meetingVal);
        }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Join Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/SafeAreaView&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">JoinScreen Component</span></p></figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>The next step is to create a <code>ControlsContainer</code> component to manage features such as Join or leave a Meeting and Enable or Disable the Webcam/Mic.</p><p>In this step, the <code>useMeeting</code> hook is utilized to acquire all the required methods such as <code>join()</code>, <code>leave()</code>, <code>toggleWebcam</code> and <code>toggleMic</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const Button = ({ onPress, buttonText, backgroundColor }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: "center",
        alignItems: "center",
        padding: 12,
        borderRadius: 4,
      }}
    &gt;
      &lt;Text style={{ color: "white", fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

function ControlsContainer({ join, leave, toggleWebcam, toggleMic }) {
  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
          join();
        }}
        buttonText={"Join"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ControlsContainer Component</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantList() {
  return null;
}
function MeetingView() {
  const { join, leave, toggleWebcam, toggleMic, meetingId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {meetingId ? (
        &lt;Text style={{ fontSize: 18, padding: 12 }}&gt;
          Meeting Id :{meetingId}
        &lt;/Text&gt;
      ) : null}
      &lt;ParticipantList /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView Component</span></p></figcaption></figure><h3 id="step-5-render-participant-list%E2%80%8B">Step 5: Render Participant List<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-5--render-participant-list">​</a></h3><p>After implementing the controls, the next step is to render the joined participants.</p><p>You can get all the joined <code>participants</code> from the <code>useMeeting</code> Hook.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length &gt; 0 ? (
    &lt;FlatList
      data={participants}
      renderItem={({ item }) =&gt; {
        return &lt;ParticipantView participantId={item} /&gt;;
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 20 }}&gt;Press Join button to enter meeting.&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantList Component</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { join, leave, toggleWebcam, toggleMic, participants } = useMeeting({});
  const participantsArrId = [...participants.keys()];

  return (
    &lt;View style={{ flex: 1 }}&gt;
      &lt;ParticipantList participants={participantsArrId} /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView Component</span></p></figcaption></figure><h3 id="step-6-handling-participants-media%E2%80%8B">Step 6: Handling Participant's Media<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-6--handling-participants-media">​</a></h3><p>Before Handling the Participant's Media, you need to understand a couple of concepts.</p><h4 id="1-useparticipant-hook">1. useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take <code>participantId</code> as argument.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const { webcamStream, webcamOn, displayName } = useParticipant(participantId);</code></pre><figcaption><p><span style="white-space: pre-wrap;">useParticipant Hook Example</span></p></figcaption></figure><h4 id="2-mediastream-api">2. MediaStream API</h4>
<p>The MediaStream API is beneficial for adding a MediaTrack into the <code>RTCView</code> component, enabling the playback of audio or video.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">MediaStream API Example</span></p></figcaption></figure><h4 id="rendering-participant-media">Rendering Participant Media</h4>
<figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);

  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantView Component</span></p></figcaption></figure><p>Congratulations! By following these steps, you're on your way to unlocking the video within your application. Now, we are moving forward to integrate the feature that builds immersive video experiences for your users!</p><h2 id="integrate-screen-sharing-feature">Integrate Screen Sharing Feature</h2><p>Adding the Screen Share functionality to your application improves cooperation by allowing users to share their device screens during meetings or presentations. It allows everyone in the conference to view precisely what you see on your screen, which is useful for presentations, demos, and collaborations.</p><h3 id="create-broadcast-upload-extension-in-ios">Create Broadcast Upload Extension in iOS</h3><h4 id="step-1-open-target">Step 1: Open Target</h4>
<p>Open your project with Xcode, then select <strong>File &gt; New &gt; Target</strong> in the menu bar.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step1-xcode-8cbf05258253368fa8095f10aa06459d.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="801" height="580"/></figure><h4 id="step-2-select-target">Step 2: Select Target</h4>
<p>Select Broadcast Upload Extension and click next.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step2-xcode-5270422797cc8941922cefadffb5238d.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="769" height="555"/></figure><h4 id="step-3-configure-broadcast-upload-extension">Step 3: Configure Broadcast Upload Extension</h4>
<p>Enter the extension's name in the Product Name field, choose the team from the dropdown, uncheck the "Include UI extension" field, and click "Finish."</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step3-xcode-458facd46d26cfa090fd8724b6ba831b.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="789" height="567"/></figure><h4 id="step-4-activate-extension-scheme">Step 4: Activate Extension scheme</h4>
<p>You will be prompted with a popup: <strong>Activate "Your-Extension-name" scheme?</strong> click on activate.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step4-xcode-6de6fa7fc079f920f52d8315b2717e9d.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="510" height="490"/></figure><p>Now, the "Broadcast" folder will appear in the Xcode left sidebar.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step5-xcode-79ba89532c028fb25376bf6d4953f0b2.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="474" height="652"/></figure><h4 id="step-5-add-external-file-in-created-extension">Step 5: Add External file in Created Extension</h4>
<p>Open the <strong>videosdk-rtc-react-native-sdk-example</strong> repository, and copy the following files: <code>SampleUploader.swift</code>, <code>SocketConnection.swift</code>, <code>DarwinNotificationCenter.swift</code>, and <code>Atomic.swift</code> to your extension's folder. Ensure that these files are added to the target.</p><h4 id="step-6-update-samplehandlerswift-file">Step 6: Update SampleHandler.swift file</h4>
<p>Open <code>SampleHandler.swift</code>, and copy the content of the file. Paste this content into your extension's <code>SampleHandler.swift</code> file.</p><h4 id="step-7-add-capability-in-the-app">Step 7: Add Capability in the App</h4>
<p>In Xcode, navigate to <strong>YourappName &gt; Signing &amp; Capabilities</strong>, and click on +Capability to configure the app group.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step8-xcode-60177a1783e9203570a06479c30492b8-1.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="930" height="529"/></figure><p>Choose <strong>App Groups</strong> from the list.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step9-xcode-13aa5448218f78f473f060c6854874a2.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="803" height="579"/></figure><p>After that, select or add the generated App Group ID that you have created before.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step10-xcode-80bd6f05c87acae9a9fbaa4a89f414f9.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="1176" height="735"/></figure><h4 id="step-8-add-capability-in-extension">Step 8: Add Capability in Extension</h4>
<p>Go to <strong>Your-Extension-Name &gt; Signing &amp; Capabilities</strong> and configure the <strong>App Groups</strong> capability, just as you did for the app target in the previous steps. (The App Group ID must be the same for both targets.)</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step11-xcode-021329a0abd110f74fb1eba7d668fc79.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="803" height="445"/></figure><h4 id="step-9-add-app-group-id-in-extension-file">Step 9: Add App Group ID in Extension File</h4>
<p>Go to the extension's <code>SampleHandler.swift</code> file and paste your group ID into the <code>appGroupIdentifier</code> constant.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step12-xcode-6f145ca7606a486596da66ebf5c31e36.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="951" height="562"/></figure><h4 id="step-10-update-app-level-infoplist-file">Step 10: Update App level <code>info.plist</code> file</h4>
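<p>For reference, once the two keys described in this step are added, the relevant fragment of your app's <code>Info.plist</code> might look like the snippet below. The bundle identifier and group ID shown here are placeholders; substitute the values from your own extension target.</p><pre><code class="language-xml">&lt;key&gt;RTCScreenSharingExtension&lt;/key&gt;
&lt;string&gt;com.example.yourapp.BroadcastExtension&lt;/string&gt;
&lt;key&gt;RTCAppGroupIdentifier&lt;/key&gt;
&lt;string&gt;group.com.example.yourapp&lt;/string&gt;</code></pre>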
<ol><li>Add a new key, <strong>RTCScreenSharingExtension</strong>, to <strong>Info.plist</strong> with the extension's Bundle Identifier as the value.</li><li>Add a new key, <strong>RTCAppGroupIdentifier</strong>, to <strong>Info.plist</strong> with the extension's App Group ID as the value.</li></ol><p><strong>Note</strong>: To find the extension's Bundle Identifier, go to <strong>TARGETS &gt; Your-Extension-Name &gt; Signing &amp; Capabilities</strong>.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step13-xcode-00cb59e564facb3c7f365235065f4561.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="933" height="550"/></figure><blockquote>You can also check out the extension's <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example/tree/master/ios/BroadcastScreen" rel="noopener noreferrer">example code</a> on GitHub.</blockquote><h3 id="create-an-ios-native-module-for-rpsystembroadcastpickerview">Create an iOS Native Module for <code>RPSystemBroadcastPickerView</code></h3><h4 id="step-1-add-files-to-ios-project">Step 1: Add Files to iOS Project</h4>
<p>Go to <strong>Xcode &gt; Your App</strong> and create a new file named <code>VideosdkRPK.swift</code>.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step18-xcode-1e37fbb475ebcde38aa2cce7786794f1.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="940" height="629"/></figure><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step19-xcode-1fcb795f17181a1b6bc6c09558001b3c.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="781" height="592"/></figure><p>After you click the Create button, Xcode will prompt you to create a bridging header.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step20-xcode-31b738cbe8cd385e602531be157b6a62.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="721" height="536"/></figure><p>After creating the bridging header file, create an Objective-C file named <code>VideosdkRPK</code>.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step21-xcode-b15e58298ff12891ce3625919cc3b956.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="753" height="543"/></figure><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step22-xcode-ae36d17fc8a0a65c3726ee31deb51f34.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="767" height="549"/></figure><ul><li>For the <code>VideosdkRPK.swift</code> file, copy the content from <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example/blob/master/ios/VideosdkRPK.swift" rel="noopener noreferrer">here</a>.</li><li>In the <code>Appname-Bridging-Header</code> file, add the line <code>#import "React/RCTEventEmitter.h"</code>.</li><li>For the <code>VideosdkRPK.m</code> file, copy the content from <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example/blob/master/ios/VideosdkRPK.m" rel="noopener noreferrer">here</a>.</li></ul><h4 id="step-2-integrate-the-native-module-on-react-native-side">Step 2: Integrate the Native Module on the React Native side</h4>
<ul><li>Create a file named <code>VideosdkRPK.js</code> and copy the content from <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example/blob/master/VideosdkRPK.js" rel="noopener noreferrer">here</a>.</li><li>Add the lines given below for handling the enable and disable screen share event.</li></ul><h3 id="enable-screen-share">Enable Screen Share</h3><ul><li>By using the <code>enableScreenShare()</code> function of the <code>useMeeting</code> hook, the local participant can share their mobile screen with other participants.</li><li>The Screen Share stream of a participant can be accessed from the <code>screenShareStream</code> property of the <code>useParticipant</code> hook.</li></ul><h3 id="disable-screen-share">Disable Screen Share</h3><ul><li>By using the <code>disableScreenShare()</code> function of the <code>useMeeting</code> hook, the local participants can stop sharing their mobile screens with other participants.</li></ul><pre><code class="language-js">import React, { useEffect } from "react";
import VideosdkRPK from "../VideosdkRPK";
import { TouchableOpacity, Text } from "react-native";
import { useMeeting } from "@videosdk.live/react-native-sdk";

// Note: useMeeting and useEffect are React hooks, so the code below is a
// fragment of a component body; in your app it must live inside a component
// rendered under MeetingProvider.

const { enableScreenShare, disableScreenShare } = useMeeting({});

useEffect(() =&gt; {
  VideosdkRPK.addListener("onScreenShare", (event) =&gt; {
    if (event === "START_BROADCAST") {
      enableScreenShare();
    } else if (event === "STOP_BROADCAST") {
      disableScreenShare();
    }
  });

  return () =&gt; {
    VideosdkRPK.removeSubscription("onScreenShare");
  };
}, []);

return (
  &lt;&gt;
    &lt;TouchableOpacity
      onPress={() =&gt; {
        // Calling startBroadcast from iOS Native Module
        VideosdkRPK.startBroadcast();
      }}
    &gt;
      &lt;Text&gt; Start Screen Share &lt;/Text&gt;
    &lt;/TouchableOpacity&gt;

    &lt;TouchableOpacity
      onPress={() =&gt; {
        disableScreenShare();
      }}
    &gt;
      &lt;Text&gt; Stop Screen Share &lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  &lt;/&gt;
);</code></pre><p>The <code>VideosdkRPK.startBroadcast()</code> method produces the following result.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/step23-xcode-e815d2c96e8061551e941055e32578b3.png" class="kg-image" alt="How to Integrate Screen Share in the React Native iOS Video Call App?" loading="lazy" width="307" height="504"/></figure><p>After clicking the <strong>Start Broadcast</strong> button, you will be able to get the screen share stream in the session.</p><h3 id="rendering-screen-share-stream%E2%80%8B">Rendering Screen Share Stream</h3><ul><li>To render the screen share, you will need the <code>participantId</code> of the user presenting the screen. This can be obtained from the <code>presenterId</code> property of the <code>useMeeting</code> hook.</li></ul><pre><code class="language-js">function MeetingView() {
  const { presenterId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {/* ... */}
      {presenterId &amp;&amp; &lt;PresenterView presenterId={presenterId} /&gt;}
      &lt;ParticipantList /&gt;
      {/* ... */}
    &lt;/View&gt;
  );
}

const PresenterView = ({ presenterId }) =&gt; {
  return &lt;Text&gt;PresenterView&lt;/Text&gt;;
};</code></pre><ul><li>Now that you have the <code>presenterId</code>, you can obtain the <code>screenShareStream</code> using the <code>useParticipant</code> hook and play it in the <code>RTCView</code> component.</li></ul><pre><code class="language-js">const PresenterView = ({ presenterId }) =&gt; {
  const { screenShareStream, screenShareOn } = useParticipant(presenterId);

  return (
    &lt;&gt;
      {/* playing the media stream in the RTCView */}
      {screenShareOn &amp;&amp; screenShareStream ? (
        &lt;RTCView
          streamURL={new MediaStream([screenShareStream.track]).toURL()}
          objectFit={"contain"}
          style={{
            flex: 1,
          }}
        /&gt;
      ) : null}
    &lt;/&gt;
  );
};</code></pre><h3 id="events-associated-with-screen-share">Events Associated with Screen Share</h3><h4 id="events-associated-with-enablescreenshare">Events associated with <code>enableScreenShare</code></h4>
<ul><li>Every Participant will receive a callback on <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/events#onstreamenabled"><code>onStreamEnabled()</code></a> event of the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/introduction"><code>useParticipant()</code></a> hook with the <code>Stream</code> object.</li><li>Every Participant will receive the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-meeting/events#onpresenterchanged"><code>onPresenterChanged()</code></a> callback of the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-meeting/introduction"><code>useMeeting</code></a> hook, which provides the <code>participantId</code> as the <code>presenterId</code> of the participant who started the screen share.</li></ul><h4 id="events-associated-with-disablescreenshare">Events associated with <code>disableScreenShare</code></h4>
<ul><li>Every Participant will receive a callback on <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/events#onstreamdisabled"><code>onStreamDisabled()</code></a> event of the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/introduction"><code>useParticipant()</code></a> hook with the <code>Stream</code> object.</li><li>Every Participant will receive the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-meeting/events#onpresenterchanged"><code>onPresenterChanged()</code></a> callback of the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-meeting/introduction"><code>useMeeting</code></a> hook, with the <code>presenterId</code> as <code>null</code>, indicating that there is no current presenter.</li></ul><pre><code class="language-js">import { useParticipant, useMeeting } from "@videosdk.live/react-native-sdk";

const MeetingView = () =&gt; {
  //Callback for when the presenter changes
  function onPresenterChanged(presenterId) {
    if(presenterId){
      console.log(presenterId, "started screen share");
    }else{
      console.log("someone stopped screen share");
    }
  }

  const { participants } = useMeeting({
    onPresenterChanged,
    ...
  });

  return &lt;&gt;...&lt;/&gt;
}

const ParticipantView = ({ participantId }) =&gt; {
  //Callback for when the participant starts a stream
  function onStreamEnabled(stream) {
    if(stream.kind === 'share'){
      console.log("Share Stream On: onStreamEnabled", stream);
    }
  }

  //Callback for when the participant stops a stream
  function onStreamDisabled(stream) {
    if(stream.kind === 'share'){
      console.log("Share Stream Off: onStreamDisabled", stream);
    }
  }

  const {
    displayName,
    ...
  } = useParticipant(participantId,{
    onStreamEnabled,
    onStreamDisabled,
    ...
  });
  return &lt;&gt; Participant View &lt;/&gt;;
}</code></pre><p>By following the steps outlined in this guide, you can seamlessly integrate the Screen Share feature and empower your users to share their screens with ease, fostering better communication and collaboration.</p><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-native-video-calling-app">✨ Want to Add More Features to React Native Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React Native video-calling app, check out these additional resources:</p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/active-speaker-indication-in-react-native-video-call-app">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-in-react-native-video-app">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-native-for-android-app">Link</a></li><li>Screen Share Feature in Android: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-android-video-call-app">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-native-video-call-app">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/picture-in-picture-pip-in-react-native">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>Screen Share enables participants to easily share their ideas, presentations, and information during corporate meetings, educational sessions, or creative collaborations. With VideoSDK's Screen Share functionality, you can create immersive and engaging video call experiences that connect people from all over the world.</p><p>If you are new here and want to build an interactive React Native app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>.
This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate RTMP Live Stream in React JS Video Call App?]]></title><description><![CDATA[Integrate RTMP Live Stream seamlessly into your React JS Video Call App for dynamic engagement and immersive experiences.]]></description><link>https://www.videosdk.live/blog/integrate-rtmp-livestream-in-react-js</link><guid isPermaLink="false">6618e6522a88c204ca9d0942</guid><category><![CDATA[React]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Fri, 27 Sep 2024 07:09:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-in-React-Video-Call-App.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-in-React-Video-Call-App.jpg" alt="How to Integrate RTMP Live Stream in React JS Video Call App?"/><p>Integrating RTMP into a React JS video call application pairs cutting-edge streaming technology with a user-friendly interface. RTMP (Real-Time Messaging Protocol) ensures quick transmission of audiovisual data over the internet, enabling real-time communication with high-quality video streaming.</p><p>Integrating this protocol into React JS empowers developers to create immersive video call experiences within their applications. In the guide below, you will learn how to build a modern React JS video-calling application with RTMP live streaming using VideoSDK.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To utilize the RTMP feature, we need to leverage the capabilities provided by VideoSDK. 
Before we delve into the implementation steps, let's make sure you have completed the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.</p><p>Consider referring to the <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a> for a more visual understanding of the account creation and token generation process.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (Don't have one? Sign up at the <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>Basic understanding of React.</li><li><a href="https://www.npmjs.com/package/@videosdk.live/react-sdk" rel="noopener noreferrer"><strong>React VideoSDK</strong></a></li><li>Make sure Node and NPM are installed on your device.</li><li>Basic understanding of Hooks (useState, useRef, useEffect)</li><li>React Context API (optional)</li></ul><p>Follow the steps to create the environment necessary to add video calls to your app. 
You can also find the code sample for <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">Quickstart here</a>.<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#create-new-react-app" rel="noopener noreferrer">​</a></p><p><strong>Create a new React App using the below command.</strong></p><pre><code class="language-js">$ npx create-react-app videosdk-rtc-react-app</code></pre><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk%E2%80%8B">⬇️ Install VideoSDK<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#install-videosdk">​</a></h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the RTMP feature. Installing VideoSDK using NPM or Yarn will depend on the needs of your project.</p><ul><li>For NPM</li></ul><pre><code class="language-js">$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">$ yarn add "@videosdk.live/react-sdk"

//For the Participants Video
$ yarn add "react-player"</code></pre><p>You are going to use functional components to leverage React's reusable component architecture. There will be components for users, for their videos, and for controls (mic, camera, and leave) rendered over the video.</p><h3 id="app-architecture">App Architecture</h3>
<p>The App will contain a <code>MeetingView</code> component which includes a <code>ParticipantView</code> component which will render the participant's name, video, audio, etc. It will also have a <code>Controls</code> component that will allow the user to perform operations like leave and toggle media.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e.png" class="kg-image" alt="How to Integrate RTMP Live Stream in React JS Video Call App?" loading="lazy" width="1356" height="780"/></figure><p>You will be working on the following files:</p><ul><li>API.js: Responsible for handling API calls such as generating unique <code>meetingId</code> and token</li><li>App.js: Responsible for rendering <code>MeetingView</code> and joining the meeting.</li></ul><h2 id="essential-steps-to-implement-video-calling-functionality">Essential Steps to Implement Video Calling Functionality</h2><p>To add video capability to your React application, you must first complete a sequence of prerequisites.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with API.js<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><pre><code class="language-js">//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashboard&gt;";
// API call to create a meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components"></a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the concepts of Context <strong>Provider</strong> and <strong>Consumer</strong>. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>, <strong>webcamStream</strong>, <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><pre><code class="language-js">import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";

function JoinScreen({ getMeetingAndToken }) {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  //Getting the meeting id by calling the api we just wrote
  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  //This will set Meeting Id to null when meeting is left or ended
  const onMeetingLeave = () =&gt; {
    setMeetingId(null);
  };

  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
      }}
      token={authToken}
    &gt;
      &lt;MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} /&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} /&gt;
  );
}

export default App;</code></pre><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-3-implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><pre><code class="language-js">function JoinScreen({ getMeetingAndToken }) {
  const [meetingId, setMeetingId] = useState(null);
  const onClick = async () =&gt; {
    await getMeetingAndToken(meetingId);
  };
  return (
    &lt;div&gt;
      &lt;input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) =&gt; {
          setMeetingId(e.target.value);
        }}
      /&gt;
      &lt;button onClick={onClick}&gt;Join&lt;/button&gt;
      {" or "}
      &lt;button onClick={onClick}&gt;Create Meeting&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-join-screen-06fb57cf0d9e3bcc1e7da9fc032298c3.jpeg" class="kg-image" alt="How to Integrate RTMP Live Stream in React JS Video Call App?" loading="lazy" width="720" height="130"/></figure><h3 id="step-4-implement-meetingview-and-controls%E2%80%8B">Step 4: Implement MeetingView and Controls<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-4-implement-meetingview-and-controls">​</a></h3><p>The next step is to create <code>MeetingView</code> and <code>Controls</code> components to manage features such as join, leave, mute, and unmute.</p><pre><code class="language-js">function MeetingView(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  //We will also get the participants list to display all participants
  const { join, participants } = useMeeting({
    //callback for when meeting is joined successfully
    onMeetingJoined: () =&gt; {
      setJoined("JOINED");
    },
    //callback for when meeting is left
    onMeetingLeft: () =&gt; {
      props.onMeetingLeave();
    },
  });
  const joinMeeting = () =&gt; {
    setJoined("JOINING");
    join();
  };

  return (
    &lt;div className="container"&gt;
      &lt;h3&gt;Meeting Id: {props.meetingId}&lt;/h3&gt;
      {joined &amp;&amp; joined == "JOINED" ? (
        &lt;div&gt;
          &lt;Controls /&gt;
          {/* For rendering all the participants in the meeting */}
          {[...participants.keys()].map((participantId) =&gt; (
            &lt;ParticipantView
              participantId={participantId}
              key={participantId}
            /&gt;
          ))}
        &lt;/div&gt;
      ) : joined &amp;&amp; joined == "JOINING" ? (
        &lt;p&gt;Joining the meeting...&lt;/p&gt;
      ) : (
        &lt;button onClick={joinMeeting}&gt;Join&lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><figure class="kg-card kg-code-card"><pre><code class="language-js">function Controls() {
  const { leave, toggleMic, toggleWebcam } = useMeeting();
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; leave()}&gt;Leave&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleMic()}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleWebcam()}&gt;toggleWebcam&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">Control Component</span></p></figcaption></figure><h4 id="output-of-controls-component">Output of Controls Component</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-container-controls-2cebdfdfd1371b010b773cb6fb9c7ae8.jpeg" class="kg-image" alt="How to Integrate RTMP Live Stream in React JS Video Call App?" loading="lazy" width="720" height="177"/></figure><h3 id="step-5-implement-participant-view%E2%80%8B">Step 5: Implement Participant View<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-5-implement-participant-view">​</a></h3><p>Before implementing the participant view, you need to understand a couple of concepts.</p><h4 id="51-forwarding-ref-for-mic-and-camera">5.1 Forwarding Ref for mic and camera</h4>
<p>The <code>useRef</code> hook is responsible for referencing the audio and video components. It will be used to play and stop the audio and video of the participant.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const webcamRef = useRef(null);
const micRef = useRef(null);</code></pre><figcaption><p><span style="white-space: pre-wrap;">Forwarding Ref for mic and camera</span></p></figcaption></figure><h4 id="52-useparticipant-hook">5.2 useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take participantId as an argument.</p><pre><code class="language-js">const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
  props.participantId
);</code></pre><h4 id="53-mediastream-api">5.3 MediaStream API</h4>
<p>The MediaStream API is beneficial for adding a MediaTrack to the audio/video tag, enabling the playback of audio or video.</p><pre><code class="language-js">const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);

webcamRef.current.srcObject = mediaStream;
webcamRef.current
  .play()
  .catch((error) =&gt; console.error("webcamRef.current.play() failed", error));</code></pre><h4 id="54-implement-participantview%E2%80%8B">5.4 Implement <code>ParticipantView</code>​</h4>
<p>Now you can use both of the hooks and the API to create <code>ParticipantView</code>.</p><pre><code class="language-js">function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() =&gt; {
    if (micRef.current) {
      if (micOn &amp;&amp; micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =&gt;
            console.error("micRef.current.play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    &lt;div&gt;
      &lt;p&gt;
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      &lt;/p&gt;
      &lt;audio ref={micRef} autoPlay playsInline muted={isLocal} /&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          //
          playsinline // extremely crucial prop
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          //
          url={videoStream}
          //
          height={"300px"}
          width={"300px"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><blockquote>You can check out the complete <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">quick start example here</a>.</blockquote><h2 id="integrate-rtmp-stream">Integrate RTMP Stream</h2><p>RTMP is a widely used protocol for live streaming video content from VideoSDK to platforms like YouTube, Twitch, Facebook, and others.</p><p>To initiate live streaming from VideoSDK to platforms supporting RTMP ingestion, you simply need to provide the platform-specific stream key and stream URL. This enables VideoSDK to connect to the platform's RTMP server and transmit the live video stream.</p><p>Furthermore, VideoSDK offers flexibility in configuring livestream layouts. You can achieve this by either selecting different prebuilt layouts in the configuration or by providing your custom template for live streaming, catering to your specific layout preferences.</p><p>This guide will provide an overview of how to implement starting and stopping RTMP live streaming with VideoSDK.</p><h3 id="start-live-stream">Start Live Stream</h3><p>The <code>startLivestream()</code> method, accessible from the <code>useMeeting</code> hook, is used to initiate the RTMP livestream of a meeting. This method accepts the following two parameters:</p><ul><li><code>1. outputs</code>: This parameter takes an array of objects containing the RTMP <code>url</code> and <code>streamKey</code> specific to the platform where you want to initiate the livestream.</li><li><code>2. config (optional)</code>: This parameter defines the layout configuration for the livestream.</li></ul><pre><code class="language-js">const config = {
  // Layout Configuration
  layout: {
    type: "GRID", // "SPOTLIGHT" | "SIDEBAR",  Default : "GRID"
    priority: "SPEAKER", // "PIN", Default : "SPEAKER"
    gridSize: 4, // MAX : 4
  },

  // Theme of livestream layout
  theme: "DARK", //  "LIGHT" | "DEFAULT"
};

const outputs = [
  {
    url: "&lt;RTMP_URL&gt;",
    streamKey: "&lt;RTMP_STREAM_KEY&gt;",
  },
];

startLivestream(outputs, config);</code></pre><h3 id="stop-live-stream">Stop Live Stream</h3><p>The <code>stopLivestream()</code> method, accessible from the <code>useMeeting</code> hook, is used to stop the RTMP livestream of a meeting.</p><p><strong>Example:</strong></p><pre><code class="language-js">import { useMeeting } from "@videosdk.live/react-sdk";

const MeetingView = () =&gt; {
  const { startLivestream, stopLivestream } = useMeeting();

  const handleStartLivestream = () =&gt; {
    // Start Livestream
    startLivestream(
      [
        {
          url: "rtmp://a.rtmp.youtube.com/live2",
          streamKey: "key",
        },
      ],
      {
        layout: {
          type: "GRID",
          priority: "SPEAKER",
          gridSize: 4,
        },
        theme: "DARK",
      }
    );
  };

  const handleStopLivestream = () =&gt; {
    // Stop Livestream
    stopLivestream();
  };

  return (
    &lt;&gt;
      &lt;button onClick={handleStartLivestream}&gt;Start Livestream&lt;/button&gt;
      &lt;button onClick={handleStopLivestream}&gt;Stop Livestream&lt;/button&gt;
    &lt;/&gt;
  );
};</code></pre><h3 id="event-associated-with-livestream%E2%80%8B">Event associated with Livestream<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream#event-associated-with-livestream">​</a></h3><p>The <code>onLivestreamStateChanged()</code> event is triggered whenever the state of the meeting livestream changes.</p><pre><code class="language-js">import { Constants, useMeeting } from "@videosdk.live/react-sdk";

function onLivestreamStateChanged(data) {
  const { status } = data;

  if (status === Constants.livestreamEvents.LIVESTREAM_STARTING) {
    console.log("Meeting livestream is starting");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STARTED) {
    console.log("Meeting livestream is started");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STOPPING) {
    console.log("Meeting livestream is stopping");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STOPPED) {
    console.log("Meeting livestream is stopped");
  } else {
    //
  }
}
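
// A small illustrative mapping from livestream states to UI labels. The
// literal keys below are assumptions about the string values behind
// Constants.livestreamEvents, shown here only to sketch the state machine.
function livestreamStatusLabel(status) {
  var labels = {
    LIVESTREAM_STARTING: "Starting livestream...",
    LIVESTREAM_STARTED: "Live",
    LIVESTREAM_STOPPING: "Stopping livestream...",
    LIVESTREAM_STOPPED: "Offline",
  };
  return labels[status] || "Unknown";
}
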

const {
  meetingId,
  ...
} = useMeeting({
  onLivestreamStateChanged,
});</code></pre><h3 id="custom-template%E2%80%8B">Custom Template<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream#custom-template">​</a></h3><p>With VideoSDK, you have the option to employ your own custom-designed layout template for livestreaming a meeting. To use a custom template, <a href="https://docs.videosdk.live/react/guide/interactive-live-streaming/custom-template">follow this guide</a> to create and set up the template. Once the template is configured, you can initiate the livestream using the <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-livestream">REST API</a>, specifying the <code>templateURL</code> parameter.</p><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-js-video-calling-app">✨ Want to Add More Features to React JS Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React video-calling app,</p><p><strong>Check out these additional resources:</strong></p><ul><li>HLS Player: <a href="https://www.videosdk.live/blog/implement-hls-player-in-react-js">Link</a></li><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-react-js">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-js">Link</a></li><li>Screen Share Feature: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-js">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-js">Link</a></li><li>Collaborative Whiteboard: <a href="https://www.videosdk.live/blog/integrate-whiteboard-in-react-js">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-in-react-js">Link</a></li></ul><h2 id="wrap-up">Wrap-up</h2><p>Integrating RTMP live streaming into your React JS video call app with the VideoSDK 
React API enhances real-time communication. You can seamlessly incorporate RTMP functionality, enhancing the app's capabilities and user experience. </p><p>With RTMP, users can engage in high-quality live video streaming, making the communication process more immersive and dynamic. Elevate your app today and bring your community closer through the power of live streaming with VideoSDK.</p><p>If you are new here and want to build an interactive React app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get 10,000 free minutes every month. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate RTMP Live Stream in React Native Video Call App?]]></title><description><![CDATA[Integrate RTMP live streaming in your React Native video call app with VideoSDK by following the simple process outlined in this guide.]]></description><link>https://www.videosdk.live/blog/integrate-rtmp-in-react-native-video-app</link><guid isPermaLink="false">662f49f22a88c204ca9d5102</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[React Native]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Fri, 27 Sep 2024 06:58:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-in-React-Native-Video-Call-App.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="real-timeintroduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-in-React-Native-Video-Call-App.jpg" alt="How to Integrate RTMP Live Stream in React Native Video Call App?"/><p>Integrating RTMP (<a href="https://www.videosdk.live/blog/what-is-rtmp">Real-Time Messaging Protocol</a>) into your <a 
href="https://www.videosdk.live/blog/how-to-make-a-video-calling-app-using-react-native">React Native video call app </a>enables seamless live streaming functionality. With this feature, users can broadcast their video calls to a wider audience in real time. Implementing RTMP in React Native is achievable by using the appropriate SDKs and APIs. RTMP integration adds versatility and dynamism to your React Native video call app, enriching user engagement and satisfaction.</p><p><strong>Benefits of integrating the RTMP feature into a React Native video call app:</strong></p><ol><li><strong>Real-Time Broadcasting</strong>: RTMP enables real-time streaming of video and audio content, ensuring instant communication with minimal delay.</li><li><strong>Scalability</strong>: RTMP supports scalable streaming, allowing your app to handle a large number of concurrent viewers without compromising performance.</li><li><strong>High-Quality Streaming</strong>: RTMP offers high-quality video and audio streaming, providing users with a superior viewing experience.</li></ol><p><strong>Use cases of the RTMP feature in a React Native video call app:</strong></p><ol><li><a href="https://www.videosdk.live/blog/future-of-virtual-events" rel="noreferrer"><strong>Virtual Events</strong></a>: Host virtual conferences, webinars, and live seminars, allowing participants to join remotely and engage in real time.</li><li><strong>Live Tutorials</strong>: Conduct live tutorials and workshops, enabling instructors to interact with students in real time, answer questions, and provide feedback.</li><li><strong>Remote Collaboration</strong>: Facilitate remote team meetings, enabling teams to collaborate effectively regardless of their geographical locations.</li></ol><p>By following the provided guide, you can incorporate RTMP live streaming effortlessly, enhancing the capabilities of your app.</p><h2 id="getting-started-with-videosdk">Getting Started with 
VideoSDK</h2><p>To take advantage of the RTMP functionality, we must use the capabilities that the  <a href="https://www.videosdk.live/">VideoSDK</a> offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your  <a href="https://app.videosdk.live/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li><li>Android Studio or Xcode installed</li></ul><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk-config"><strong>⬇️ </strong>Install VideoSDK Config.</h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the RTMP feature. Installing VideoSDK using NPM or Yarn will depend on the needs of your project.</p><p>For NPM</p><pre><code class="language-js">npm install "@videosdk.live/react-native-sdk"  "@videosdk.live/react-native-incallmanager"
</code></pre><p>For Yarn</p><pre><code class="language-js">yarn add "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"
</code></pre><h3 id="project-configuration">Project Configuration</h3><p>Before integrating the RTMP live streaming functionality, ensure that your project is correctly prepared to handle the integration. This setup consists of a sequence of steps for configuring permissions, dependencies, and platform-specific parameters so that VideoSDK can function seamlessly inside your application context.</p><h4 id="android-setup">Android Setup</h4><p>Add the required permissions to the  <code>AndroidManifest.xml</code>  file.</p><pre><code class="language-js">&lt;manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
&gt;
    &lt;!-- Give all the required permissions to app --&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) --&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" /&gt;

    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)--&gt;
    &lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;

    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
    &lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;

    &lt;application&gt;
   &lt;meta-data
      android:name="live.videosdk.rnfgservice.notification_channel_name"
      android:value="Meeting Notification"
     /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_channel_description"
    android:value="Whenever meeting started notification will appear."
    /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_color"
    android:resource="@color/red"
    /&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"&gt;&lt;/service&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"&gt;&lt;/service&gt;
  &lt;/application&gt;
&lt;/manifest&gt;
</code></pre><p>Update your  <code>colors.xml</code>  file for internal dependencies.</p><pre><code class="language-js">&lt;resources&gt;
  &lt;item name="red" type="color"&gt;
    #FC0303
  &lt;/item&gt;
  &lt;integer-array name="androidcolors"&gt;
    &lt;item&gt;@color/red&lt;/item&gt;
  &lt;/integer-array&gt;
&lt;/resources&gt;
</code></pre><p>Link the necessary VideoSDK dependencies.</p><pre><code class="language-js">  dependencies {
   implementation project(':rnwebrtc')
   implementation project(':rnfgservice')
  }
</code></pre><pre><code class="language-js">include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')
</code></pre><pre><code class="language-js">import live.videosdk.rnwebrtc.WebRTCModulePackage;
import live.videosdk.rnfgservice.ForegroundServicePackage;

public class MainApplication extends Application implements ReactApplication {
  private static List&lt;ReactPackage&gt; getPackages() {
      @SuppressWarnings("UnnecessaryLocalVariable")
      List&lt;ReactPackage&gt; packages = new PackageList(this).getPackages();
      // Packages that cannot be autolinked yet can be added manually here, for example:

      packages.add(new ForegroundServicePackage());
      packages.add(new WebRTCModulePackage());

      return packages;
  }
}
</code></pre><pre><code class="language-js">/* This one fixes a weird WebRTC runtime problem on some devices. */
android.enableDexingArtifactTransform.desugaring=false
</code></pre><p>Include the following line in your  <code>proguard-rules.pro</code>  file (optional: if you are using Proguard)</p><pre><code class="language-js">-keep class org.webrtc.** { *; }
</code></pre><p>In your  <code>build.gradle</code>  file, update the minimum OS/SDK version to  <code>23</code>.</p><pre><code class="language-js">buildscript {
  ext {
      minSdkVersion = 23
  }
}
</code></pre><h4 id="ios-setup%E2%80%8B">iOS Setup​</h4><blockquote>IMPORTANT: Ensure that you are using CocoaPods version 1.10 or later.</blockquote><ol><li>To update CocoaPods, you can reinstall the  <code>gem</code>  using the following command:</li></ol><pre><code class="language-swift">$ sudo gem install cocoapods
</code></pre><ol><li>Manually link react-native-incall-manager (if it is not linked automatically).</li></ol><p>Select  <code>Your_Xcode_Project/TARGETS/BuildSettings</code>, in Header Search Paths, add  <code>"$(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager"</code></p><ol><li>Change the path of  <code>react-native-webrtc</code>  using the following command:</li></ol><pre><code class="language-swift">pod 'react-native-webrtc', :path =&gt; '../node_modules/@videosdk.live/react-native-webrtc'
</code></pre><ol><li>Change the version of your platform.</li></ol><p>You need to change the platform field in the Podfile to 12.0 or above because  <strong>react-native-webrtc</strong>  doesn't support iOS versions earlier than 12.0. Update the line to: <code>platform :ios, '12.0'</code>.</p><ol><li>Install pods.</li></ol><p>After updating the version, you need to install the pods by running the following command:</p><pre><code class="language-swift">pod install
</code></pre><ol><li>Add “<strong>libreact-native-webrtc.a</strong>” binary.</li></ol><p>Add the "<strong>libreact-native-webrtc.a</strong>" binary to the "Link Binary With Libraries" section in the target of your main project folder.</p><ol><li>Declare permissions in  <strong>Info.plist</strong>:</li></ol><p>Add the following lines to your  <strong>info.plist</strong>  file located at (project folder/ios/projectname/info.plist):</p><pre><code class="language-swift">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;
</code></pre><h4 id="register-service">Register Service</h4><p>Register VideoSDK services in your root  <code>index.js</code>  file for the initialization service.</p><pre><code class="language-js">import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { register } from "@videosdk.live/react-native-sdk";

register();

AppRegistry.registerComponent(appName, () =&gt; App);
</code></pre><p>index.js</p><h2 id="essential-steps-for-implement-the-video-calling-functionality">Essential Steps to Implement the Video Calling Functionality</h2><h3 id="step-1-get-started-with-apijs">Step 1: Get started with api.js</h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the  <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples">videosdk-rtc-api-server-examples</a>  or directly from the  <a href="https://app.videosdk.live/api-keys">VideoSDK Dashboard</a>  for developers.</p><pre><code class="language-js">export const token = "&lt;Generated-from-dashboard&gt;";
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });

  const { roomId } = await res.json();
  return roomId;
};
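
// Hypothetical client-side guard (our own helper, not a VideoSDK API): check a
// user-entered ID against the "XXXX-XXXX-XXXX" placeholder format used on the
// join screen before sending it to the rooms endpoint. The exact room-ID
// format is an assumption drawn from that placeholder.
function isValidMeetingIdFormat(id) {
  if (typeof id !== "string") {
    return false;
  }
  return /^[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}$/i.test(id);
}
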
</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components">Step 2: Wireframe App.js with all the components</h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the  <strong>Context Provider </strong>and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value  <code>config</code>  and  <code>token</code>  as a prop. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join, leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant such as  <strong>name</strong>,  <strong>webcamStream</strong>,  <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the  <strong>App.js</strong>  file.</p><pre><code class="language-js">import React, { useState } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) =&gt; {
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}&gt;
      &lt;MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      &gt;
        &lt;MeetingView /&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  ) : (
    &lt;JoinScreen getMeetingId={getMeetingId} /&gt;
  );
}
</code></pre><h3 id="step-3-implement-join-screen">Step 3: Implement Join Screen</h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><pre><code class="language-js">function JoinScreen(props) {
  const [meetingVal, setMeetingVal] = useState("");
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Create Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      /&gt;
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() =&gt; {
          props.getMeetingId(meetingVal);
        }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Join Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/SafeAreaView&gt;
  );
}
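
// Optional input hygiene (our own helper, not part of the SDK): trim and
// lowercase what the user typed before passing it to props.getMeetingId,
// so stray whitespace or uppercase characters do not cause a failed join.
function normalizeMeetingId(input) {
  return String(input).trim().toLowerCase();
}
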
</code></pre><h3 id="step-4-implement-controls">Step 4: Implement Controls</h3><p>The next step is to create a  <code>ControlsContainer</code>  component to manage features such as Join or leave a Meeting and Enable or Disable the Webcam/Mic.</p><p>In this step, the  <code>useMeeting</code>  hook is utilized to acquire all the required methods such as  <code>join()</code>,  <code>leave()</code>,  <code>toggleWebcam</code>  and  <code>toggleMic</code>.</p><pre><code class="language-js">const Button = ({ onPress, buttonText, backgroundColor }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: "center",
        alignItems: "center",
        padding: 12,
        borderRadius: 4,
      }}
    &gt;
      &lt;Text style={{ color: "white", fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

function ControlsContainer({ join, leave, toggleWebcam, toggleMic }) {
  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
          join();
        }}
        buttonText={"Join"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      /&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>ControlsContainer Component</p><pre><code class="language-js">function ParticipantList() {
  return null;
}
function MeetingView() {
  const { join, leave, toggleWebcam, toggleMic, meetingId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {meetingId ? (
        &lt;Text style={{ fontSize: 18, padding: 12 }}&gt;
          Meeting Id :{meetingId}
        &lt;/Text&gt;
      ) : null}
      &lt;ParticipantList /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>MeetingView Component</p><h3 id="step-5-render-participant-list">Step 5: Render Participant List</h3><p>After implementing the controls, the next step is to render the joined participants.</p><p>You can get all the joined  <code>participants</code>  from the  <code>useMeeting</code>  Hook.</p><pre><code class="language-js">function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length &gt; 0 ? (
    &lt;FlatList
      data={participants}
      renderItem={({ item }) =&gt; {
        return &lt;ParticipantView participantId={item} /&gt;;
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 20 }}&gt;Press Join button to enter meeting.&lt;/Text&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>ParticipantList Component</p><pre><code class="language-js">function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { join, leave, toggleWebcam, toggleMic, participants } = useMeeting({});
  const participantsArrId = [...participants.keys()];

  return (
    &lt;View style={{ flex: 1 }}&gt;
      &lt;ParticipantList participants={participantsArrId} /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>MeetingView Component</p><h3 id="step-6-handling-participants-media">Step 6: Handling Participant's Media</h3><p>Before Handling the Participant's Media, you need to understand a couple of concepts.</p><h4 id="1-useparticipant-hook">1. <code>useParticipant</code> Hook</h4><p>The  <code>useParticipant</code>  hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take  <code>participantId</code>  as argument.</p><pre><code class="language-js">const { webcamStream, webcamOn, displayName } = useParticipant(participantId);
</code></pre><p>useParticipant Hook Example</p><h4 id="2-mediastream-api">2. MediaStream API</h4><p>The MediaStream API is beneficial for adding a MediaTrack to the  <code>RTCView</code>  component, enabling the playback of audio or video.</p><pre><code class="language-js">&lt;RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/&gt;
</code></pre><p>RTCView with MediaStream Example</p><h4 id="rendering-participant-media">Rendering Participant Media</h4><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);

  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}
</code></pre><p>ParticipantView Component</p><p>Congratulations! By following these steps, you're on your way to unlocking video within your application. Next, let's integrate the RTMP feature to build immersive live-streaming experiences for your users.</p><h2 id="integrate-rtmp-live-stream-feature">Integrate RTMP Live Stream Feature</h2><p>RTMP is a widely used protocol for live streaming video content from VideoSDK to platforms like YouTube, Twitch, Facebook, and others.</p><p>To initiate live streaming from VideoSDK to platforms supporting RTMP ingestion, you simply need to provide the platform-specific stream key and stream URL. This enables VideoSDK to connect to the platform's RTMP server and transmit the live video stream.</p><p>Furthermore, VideoSDK offers flexibility in configuring livestream layouts. You can achieve this either by selecting different prebuilt layouts in the configuration or by providing your own custom template for live streaming, catering to your specific layout preferences.</p><p>This guide provides an overview of how to implement starting and stopping RTMP live streaming with VideoSDK.</p><h3 id="start-livestream">Start Livestream</h3><p>The <code>startLivestream()</code> method, accessible from the <code>useMeeting</code> hook, is used to initiate the RTMP live stream of a meeting. This method accepts the following two parameters:</p><ol><li><code>outputs</code>: This parameter takes an array of objects containing the RTMP <code>url</code> and <code>streamKey</code> specific to the platform where you want to initiate the live stream.</li><li><code>config (optional)</code>: This parameter defines the layout configuration for the live stream.</li></ol><pre><code class="language-js">const config = {
  // Layout Configuration
  layout: {
    type: "GRID", // "SPOTLIGHT" | "SIDEBAR",  Default : "GRID"
    priority: "SPEAKER", // "PIN", Default : "SPEAKER"
    gridSize: 4, // MAX : 4
  },

  // Theme of RTMP
  theme: "DARK", //  "LIGHT" | "DEFAULT"
};

const outputs = [
  {
    url: "&lt;RTMP_URL&gt;",
    streamKey: "&lt;RTMP_STREAM_KEY&gt;",
  },
];
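
//Hedged sketch (not part of the SDK): a small guard before starting, since
//every output entry needs both an RTMP url and a streamKey.
const isValidOutput = (output) =&gt; Boolean(output.url &amp;&amp; output.streamKey);
if (!outputs.every(isValidOutput)) {
  throw new Error("Each output needs an RTMP url and streamKey");
}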

startLivestream(outputs, config);
</code></pre><h3 id="stop-livestream">Stop Livestream</h3><p>The <code>stopLivestream()</code> method, accessible from the <code>useMeeting</code> hook, is used to stop the RTMP live stream of a meeting.</p><pre><code class="language-js">import { useMeeting } from "@videosdk.live/react-native-sdk";
import { TouchableOpacity, Text } from "react-native";

const MeetingView = () =&gt; {
  const { startLivestream, stopLivestream } = useMeeting();

  const handleStartLivestream = () =&gt; {
    // Start Livestream
    startLivestream(
      [
        {
          url: "rtmp://a.rtmp.youtube.com/live2",
          streamKey: "key",
        },
      ],
      {
        layout: {
          type: "GRID",
          priority: "SPEAKER",
          gridSize: 4,
        },
        theme: "DARK",
      }
    );
  };

  const handleStopLivestream = () =&gt; {
    // Stop Livestream
    stopLivestream();
  };

  return (
    &lt;&gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          handleStartLivestream();
        }}
      &gt;
        &lt;Text&gt;Start Livestream&lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;TouchableOpacity
        onPress={() =&gt; {
          handleStopLivestream();
        }}
      &gt;
        &lt;Text&gt;Stop Livestream&lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/&gt;
  );
};
</code></pre><h3 id="event-associated-with-livestream">Event associated with Livestream</h3><p><strong>onLivestreamStateChanged</strong>  - The  <code>onLivestreamStateChanged()</code>  event is triggered whenever the state of the meeting livestream changes.</p><pre><code class="language-js">import { Constants, useMeeting } from "@videosdk.live/react-native-sdk";

function onLivestreamStateChanged(data) {
  const { status } = data;

  if (status === Constants.livestreamEvents.LIVESTREAM_STARTING) {
    console.log("Meeting livestream is starting");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STARTED) {
    console.log("Meeting livestream is started");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STOPPING) {
    console.log("Meeting livestream is stopping");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STOPPED) {
    console.log("Meeting livestream is stopped");
  } else {
    //Handle any other or unknown livestream states here.
  }
}
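
//Hedged sketch (illustrative, not from the SDK docs): the same statuses could
//drive a UI label instead of console logs; this assumes the
//Constants.livestreamEvents values are distinct strings.
const livestreamLabel = (status) =&gt; {
  const labels = {
    [Constants.livestreamEvents.LIVESTREAM_STARTING]: "Starting...",
    [Constants.livestreamEvents.LIVESTREAM_STARTED]: "Live",
    [Constants.livestreamEvents.LIVESTREAM_STOPPING]: "Stopping...",
    [Constants.livestreamEvents.LIVESTREAM_STOPPED]: "Stopped",
  };
  return labels[status] ?? "Unknown";
};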

const {
  meetingId,
  ...
} = useMeeting({
  onLivestreamStateChanged,
});
</code></pre><h3 id="custom-template">Custom Template</h3><p>With VideoSDK, you can also use your own custom-designed layout template to record the meetings. To use the custom template, you need to create a template, for which you can <a href="https://docs.videosdk.live/react/guide/interactive-live-streaming/custom-template">follow this guide</a>. Once you have set up the template, you can use the <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-recording">REST API to start</a> the recording with the <code>templateURL</code> parameter.</p><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-native-video-calling-app">✨ Want to Add More Features to React Native Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React Native video-calling app, <strong>check out these additional resources:</strong></p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/active-speaker-indication-in-react-native-video-call-app">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-native-for-android-app">Link</a></li><li>Screen Share Feature in Android: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-android-video-call-app">Link</a></li><li>Screen Share Feature in iOS: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-ios-video-call-app">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-native-video-call-app">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/picture-in-picture-pip-in-react-native">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>Integrating RTMP functionality into your React Native app with VideoSDK empowers you to create a more versatile and engaging user experience. 
This guide provided a roadmap for setting up the development environment, configuring the SDK, and establishing a connection to your RTMP server. With RTMP support, you can enable features like live streaming to external platforms or broadcasting calls to a wider audience.</p><p>If you are new here and want to build an interactive React Native app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10000 free minutes every month</em>. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[Integrate Pre-Call Check in Javascript]]></title><description><![CDATA[Discover the importance of a Pre-call check in JavaScript and learn how to implement a robust setup for video calls. ]]></description><link>https://www.videosdk.live/blog/integrate-precall-in-javascript</link><guid isPermaLink="false">669e4f6d20fab018df10fac8</guid><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 26 Sep 2024 13:02:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/Pre-Call-Check-in-JavaScript.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Pre-Call-Check-in-JavaScript.jpg" alt="Integrate Pre-Call Check in Javascript"/><p>Imagine stepping into a virtual meeting room, only to be met with technical glitches that disrupt the flow of conversation. This is where the concept of a Pre-call Check comes into play. Much like a detailed pre-flight checklist for pilots, a Pre-call setup acts as a crucial preparatory phase, allowing developers to troubleshoot and optimize user settings before the main event. 
</p><p>In this article, we'll explore the pre-call check integration in JavaScript and provide a comprehensive guide on how to effectively implement this feature, ensuring your users can engage  without interruption.</p><h3 id="why-is-it-necessary%E2%80%8B">Why is it Necessary?<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/setup-call/precall#why-is-it-necessary">​</a></h3><p>Why invest time and effort into crafting a precall experience, you wonder? Well, picture this scenario: your users eagerly join a video call, only to encounter a myriad of technical difficulties—muted microphones, pixelated cameras, and laggy connections. Not exactly the smooth user experience you had in mind, right?</p><p>By integrating a robust precall process into your app, developers become the unsung heroes, preemptively addressing potential pitfalls and ensuring that users step into their video calls with confidence.</p><h2 id="step-by-step-guide-integrating-precall-feature%E2%80%8B">Step-by-Step Guide: Integrating Precall Feature<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/setup-call/precall#step-by-step-guide-integrating-precall-feature">​</a></h2><h3 id="step-1-check-permissions%E2%80%8B">Step 1: Check Permissions​</h3><ul><li>Begin by ensuring that your application has the necessary permissions to access user devices such as cameras, microphones, and speakers.</li><li>Utilize the <code>checkPermissions()</code> method of the <code>VideoSDK</code> class to verify if permissions are granted.</li></ul><pre><code class="language-js">const checkMediaPermission = async () =&gt; {
  //These methods return a Promise that resolves to a Map&lt;string, boolean&gt; object.
  const checkAudioPermission = await VideoSDK.checkPermissions("audio"); //For getting audio permission
  const checkVideoPermission = await VideoSDK.checkPermissions("video"); //For getting video permission
  const checkAudioVideoPermission = await VideoSDK.checkPermissions(
    "audio_video"
  ); //For getting both audio and video permissions
  // Output: Map object for both audio and video permission:
  /*
        Map(2)
        0 : {"audio" =&gt; true}
            key: "audio"
            value: true
        1 : {"video" =&gt; true}
            key: "video"
            value: true
    */
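
  //Hedged follow-up (illustrative): the resolved Map is read with standard
  //Map.get(), e.g. to decide whether the device lists can be rendered.
  const hasAudio = checkAudioVideoPermission.get("audio");
  const hasVideo = checkAudioVideoPermission.get("video");
  console.log("audio permission:", hasAudio, "video permission:", hasVideo);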
};</code></pre><p>When microphone and camera permissions are blocked, rendering device lists is not possible:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/precall_no_permissions.gif" class="kg-image" alt="Integrate Pre-Call Check in Javascript" loading="lazy" width="1920" height="1020"/></figure><h3 id="step-2-request-permissions">Step 2: Request Permissions</h3><p>If permissions are not granted, use the <code>requestPermission()</code> method of the VideoSDK class to prompt users to grant access to their devices.</p><pre><code class="language-js">const requestAudioVideoPermission = async () =&gt; {
  try {
    //These methods return a Promise that resolves to a Map&lt;string, boolean&gt; object.
    const requestAudioPermission = await VideoSDK.requestPermission("audio"); //For Requesting Audio Permission
    const requestVideoPermission = await VideoSDK.requestPermission("video"); //For Requesting Video Permission
    const requestAudioVideoPermission = await VideoSDK.requestPermission(
      "audio_video"
    ); //For Requesting Audio and Video Permissions
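
    //Hedged follow-up (illustrative): re-verify the result with the
    //checkPermissions() method from Step 1 before moving on to device enumeration.
    const grantedAfterRequest = await VideoSDK.checkPermissions("audio_video");
    console.log("Permissions after request:", grantedAfterRequest);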
  } catch (ex) {
    console.log("Error in requestPermission ", ex);
  }
};</code></pre><p>Requesting permissions if not already granted:</p><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/precall_requesting_permissions.png" class="kg-image" alt="Integrate Pre-Call Check in Javascript" loading="lazy"/></figure><h3 id="step-3-render-device-list%E2%80%8B">Step 3: Render Device List​</h3><ul><li>Once you have the necessary permissions, Fetch and render list of available camera, microphone, and speaker devices using the <code>getCameras()</code>, <code>getMicrophones()</code>, and <code>getPlaybackDevices()</code> methods of the <code>VideoSDK</code> class respectively.</li><li>Enable users to select their preferred devices from these lists.</li></ul><pre><code class="language-js">const getMediaDevices = async () =&gt; {
  try {
    //Method to get all available webcams.
    //It returns a Promise that is resolved with an array of CameraDeviceInfo objects describing the video input devices.
    let webcams = await VideoSDK.getCameras();
    //Method to get all available Microphones.
    //It returns a Promise that is resolved with an array of MicrophoneDeviceInfo objects describing the audio input devices.
    let mics = await VideoSDK.getMicrophones();
    //Method to get all available speakers.
    //It returns a Promise that is resolved with an array of PlaybackDeviceInfo objects describing the playback devices.
    let speakers = await VideoSDK.getPlaybackDevices();
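
    //Hedged sketch (illustrative): each device info object exposes deviceId and
    //label (mirroring the browser's MediaDeviceInfo), so dropdown options can
    //be derived like this before rendering the selection UI.
    const micOptions = mics.map((mic) =&gt; ({
      value: mic.deviceId,
      text: mic.label || "Microphone",
    }));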
  } catch (err) {
    console.log("Error in getting audio or video devices", err);
  }
};</code></pre><p>Displaying device lists once permissions are granted:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/precall_render_device_list.gif" class="kg-image" alt="Integrate Pre-Call Check in Javascript" loading="lazy" width="1920" height="1020"/></figure><h3 id="step-4-handle-device-changes%E2%80%8B">Step 4: Handle Device Changes​</h3><ul><li>Implement the <code>device-changed</code> event of the <code>VideoSDK</code> class to dynamically re-render device lists whenever new devices are attached or removed from the system.</li><li>Ensure that users can seamlessly interact with newly connected devices without disruptions.</li></ul><pre><code class="language-js">//Fetch camera, mic and speaker devices again using this function.
const deviceChangeEventListener = async (devices) =&gt; {
  //Re-fetch and re-render the device lists here, e.g. by calling the getMediaDevices() helper from Step 3.
  console.log("Device Changed", devices);
};

VideoSDK.on("device-changed", deviceChangeEventListener);</code></pre><p>Dynamically updating device lists when new devices are connected or disconnected:</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/precall_on_device_change.gif" class="kg-image" alt="Integrate Pre-Call Check in Javascript" loading="lazy" width="1920" height="1020"/></figure><h3 id="step-5-create-media-tracks%E2%80%8B">Step 5: Create Media Tracks​</h3><ul><li>Upon user selection of devices, create media tracks for the selected microphone and camera using the <code>createMicrophoneAudioTrack()</code> and <code>createCameraVideoTrack()</code> methods of the <code>VideoSDK</code> class.</li><li>Ensure that these tracks originate from the user-selected devices for accurate testing.</li></ul><pre><code class="language-js">//For Getting Audio Tracks
const getMediaTracks = async () =&gt; {
  try {
    //Returns a MediaStream object, containing the Audio Stream from the selected Mic Device.
    const customAudioStream = await VideoSDK.createMicrophoneAudioTrack({
      // Here, selectedMicId should be the microphone id of the device selected by the user.
      microphoneId: selectedMicId,
    });
    //To retrieve the audio tracks that will be displayed to the user from the stream.
    const audioTracks = customAudioStream?.getAudioTracks();
    const audioTrack = audioTracks?.length ? audioTracks[0] : null;
  } catch (error) {
    console.log("Error in getting Audio Track", error);
  }

  //For Getting Video Tracks
  try {
    //Returns a MediaStream object, containing the Video Stream from the selected Webcam Device.
    const customVideoStream = await VideoSDK.createCameraVideoTrack({
      // Here, selectedWebcamId should be the webcam id of the device selected by the user.
      cameraId: selectedWebcamId,
      encoderConfig: encoderConfig ? encoderConfig : "h540p_w960p",
      optimizationMode: "motion",
      multiStream: false,
    });
    //To retrieve the video tracks that will be displayed to the user from the stream.
    const videoTracks = customVideoStream?.getVideoTracks();
    const videoTrack = videoTracks?.length ? videoTracks[0] : null;
  } catch (error) {
    console.log("Error in getting Video Track", error);
  }
};</code></pre><p>Rendering Media Tracks when necessary permissions are available:</p><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/precall_render_media_tracks.png" class="kg-image" alt="Integrate Pre-Call Check in Javascript" loading="lazy"/></figure><h3 id="step-6-testing-microphone%E2%80%8B">Step 6: Testing Microphone​</h3><ul><li>The process of testing microphone device provides valuable insights into microphone quality and ensures users can optimize their audio setup for clear communication.</li><li>To facilitate this functionality, incorporate a recording feature that enables users to capture audio for a specified duration. After recording, users can playback the audio to evaluate microphone performance accurately.</li><li>For implementing this functionality, you can refer to the official guide of <a href="https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder" rel="noopener noreferrer">MediaRecorder</a> for comprehensive instructions and best practices.</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/precall_test_mic.gif" class="kg-image" alt="Integrate Pre-Call Check in Javascript" loading="lazy" width="1920" height="1020"/></figure><h3 id="step-7-testing-speakers%E2%80%8B">Step 7: Testing Speakers​</h3><ul><li>Testing the speaker device allows users to assess audio playback clarity and fidelity, enabling them to fine-tune settings for optimal sound quality in calls and meetings.</li><li>To facilitate effective speaker testing, integrate sound playback functionality into your application.</li><li>This functionality empowers users to play a predefined audio sample, providing a precise evaluation of their speaker output quality.</li></ul><pre><code class="language-js">const testSpeakers = () =&gt; {
  //Here, provide the path of your desired test sound file.
  const test_sound_path = "test_sound_path";
  //Create an audio tag using a test sound of your choice.
  const audio = new Audio(test_sound_path);
  try {
    //Set the sinkId of the audio to the speaker's device Id, as selected by the user.
    audio
      .setSinkId(selectedSpeakerDeviceId)
      .then(() =&gt; {
        audio.play();
      })
      .catch((error) =&gt; {
        //setSinkId rejects asynchronously, so the rejection must be caught on the Promise itself.
        console.log(error);
      });
  } catch (error) {
    console.log(error);
  }
};</code></pre><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/precall_test_speaker.gif" class="kg-image" alt="Integrate Pre-Call Check in Javascript" loading="lazy" width="1920" height="1020"/></figure><h3 id="step-8-network-quality-assessment%E2%80%8B">Step 8: Network Quality Assessment​</h3><ul><li>Utilize the <code>getNetworkStats()</code> method of the <code>VideoSDK</code> class to provide users with insights into their network upload and download speeds.</li><li>Handle potential errors gracefully, such as offline status or poor connection, to maintain a smooth user experience.</li></ul><pre><code class="language-js">const getNetworkStatistics = async () =&gt; {
  try {
    //The timeoutDuration option sets the time after which the method stops fetching stats and throws a timeout error.
    const options = { timeoutDuration: 45000 };
    //This method returns a Promise that resolves with an object, containing network speed statistics or rejects with an error message.
    const networkStats = await VideoSDK.getNetworkStats(options);
    const downloadSpeed = networkStats["downloadSpeed"];
    const uploadSpeed = networkStats["uploadSpeed"];
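
    //Hedged sketch (illustrative): the speeds (assumed here to be in Mbps) can
    //be mapped to a simple quality label for the precall screen; the
    //thresholds below are arbitrary examples, not SDK recommendations.
    let networkQuality = "poor";
    if (downloadSpeed &gt; 5 &amp;&amp; uploadSpeed &gt; 5) {
      networkQuality = "good";
    } else if (downloadSpeed &gt; 1 &amp;&amp; uploadSpeed &gt; 1) {
      networkQuality = "average";
    }
    console.log("Network quality:", networkQuality);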
  } catch (ex) {
    console.log("Error in networkStats: ", ex);
  }
};</code></pre><ul><li>Displaying the Upload and Download speeds of the network:</li></ul><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/precall_network_stats.png" class="kg-image" alt="Integrate Pre-Call Check in Javascript" loading="lazy"/></figure><ul><li>Error Handling when user is offline:</li></ul><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/precall_networ_stats_offline.png" class="kg-image" alt="Integrate Pre-Call Check in Javascript" loading="lazy"/></figure><h3 id="step-9-ensuring-correct-device-selection-in-the-meeting%E2%80%8B">Step 9: Ensuring Correct Device Selection in the Meeting​</h3><ul><li>Ensure that all relevant states, such as microphone and camera status (on/off), and selected devices, are passed into the meeting from the precall screen.</li><li>This can be accomplished by passing these crucial states and media streams during the initialization of the meeting using the <code>initMeeting</code> method of the <code>VideoSDK</code> class.</li><li>This ensures consistency and allows users to seamlessly transition from the precall setup to the actual meeting while retaining their chosen settings.</li></ul><pre><code class="language-js">const meeting = VideoSDK.initMeeting({
    ...
    //Status of Microphone Device as selected by the user (On/Off).
    micEnabled: micEnable,
    //Status of Webcam Device as selected by the user (On/Off).
    webcamEnabled: webCamEnable,
    //customVideoStream has to be the Video Stream of the user's selected Webcam device as created in Step-5.
    customCameraVideoTrack: customVideoStream,
    //customAudioStream has to be the Audio Stream of the user's selected Microphone device as created in Step-5.
    customMicrophoneAudioTrack: customAudioStream,
});</code></pre><h2 id="conclusion">Conclusion</h2><p>Following the step-by-step instructions, we explore the pre-call check integration in <a href="https://www.codewithfaraz.com/projects/html-css-and-javascript-projects-from-beginner-to-advanced">JavaScript </a>and prepare your users with the tools they need to optimize their audio and video settings, ensuring smooth and effective communication. From checking permissions to assessing network quality, each step plays a vital role in preemptively addressing potential issues.</p><p>As developers, embracing this proactive approach not only enhances user satisfaction but also positions your application as a reliable platform for virtual interactions. With a well-executed Pre-call check, you empower your users to focus on what truly matters—connecting and collaborating effectively.</p>]]></content:encoded></item><item><title><![CDATA[What is WebTransport?]]></title><description><![CDATA[WebTransport can be defined as a modern protocol designed to facilitate efficient and low-latency communication between clients and servers over the web.]]></description><link>https://www.videosdk.live/blog/what-is-webtransport</link><guid isPermaLink="false">65ba29762a88c204ca9ce753</guid><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Thu, 26 Sep 2024 12:50:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/What-is-WebTransport.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/01/What-is-WebTransport.png" alt="What is WebTransport?"/><p>WebTransport can be defined as a modern protocol designed to facilitate efficient and low-latency communication between clients and servers over the web. 
Unlike its predecessors, WebTransport is designed to meet the demands of modern web applications, providing enhanced performance, flexibility, and security.</p><p>In the traditional landscape, real-time communication often relied on technologies such as WebSockets or HTTP long polling. While effective, these approaches presented performance, scalability, and security limitations. WebTransport seeks to overcome these limitations by introducing the QUIC protocol, offering several advantages over conventional protocols like TCP.</p><p>One noteworthy advantage of WebTransport is its superior scalability. The QUIC protocol supports multiplexing, enabling the transmission of multiple data streams over a single connection. This eliminates the need to establish multiple connections, ultimately reducing latency and enhancing overall efficiency.</p><p>WebTransport also emphasizes security by incorporating the advanced security features of QUIC, such as encryption and authentication. This ensures the confidentiality and integrity of the data exchanged between clients and servers, mitigating common security threats like eavesdropping and tampering.</p><p>Furthermore, WebTransport encompasses essential features for real-time applications, including flow control, prioritization, and reliable delivery. These features empower developers to optimize application performance, delivering a superior user experience.</p><p>To understand the significance of WebTransport, it's essential to trace the evolution of web communication technologies. From the rudimentary HTTP/1.0 to the more advanced WebSockets and the latest HTTP/2 and HTTP/3 protocols, each iteration has strived to improve efficiency and responsiveness in web interactions.</p><h2 id="what-are-the-types-of-webtransport">What are the Types of WebTransport?</h2><p>Several web transport services are available for developers building real-time chat and messaging applications. 
These services offer scalable and secure platforms for data transmission:</p><h3 id="websockets">WebSockets</h3><p>A bidirectional communication protocol that allows real-time communication between clients and servers. It supports full-duplex communication, reducing latency and overhead.</p><h3 id="webrtc-web-real-time-communication">WebRTC (Web Real-Time Communication)</h3><p>WebRTC, an open-source project, facilitates real-time communication between web browsers and mobile applications. This technology enables seamless audio, video, and data transfer for peer-to-peer interaction. WebRTC can be used for various applications, including video conferencing, voice calling, live streaming, and more.</p><h3 id="server-sent-events-sse">Server-Sent Events (SSE)</h3><p>Server-Sent Events (SSE) is a unidirectional communication protocol designed to enable servers to push data to clients using HTTP. In contrast to WebSocket, SSE exclusively supports server-initiated communication. This means that the server can send information to the client without the need for client requests.</p><h3 id="http2-server-push">HTTP/2 Server Push</h3><p>HTTP/2 incorporates a notable feature known as server push, enabling servers to proactively send resources to clients without awaiting specific client requests. 
This functionality proves particularly advantageous in scenarios requiring real-time updates or the pre-loading of resources anticipated by the server.</p><h2 id="what-are-the-benefits-of-using-webtransport">What are the benefits of using WebTransport?</h2><p>WebTransport brings several advantages to the table, making it an appealing choice for developers:</p><h3 id="low-latency">Low Latency</h3><p>WebTransport significantly reduces latency through advanced congestion control algorithms, providing a more responsive user experience.</p><h3 id="improved-efficiency">Improved Efficiency</h3><p>With multiplexing and stream prioritization, WebTransport optimizes data transfer, reducing connection overhead and enhancing resource utilization.</p><h3 id="scalability">Scalability</h3><p>The multiplexing feature allows WebTransport to handle more concurrent connections efficiently, ensuring scalability for applications with high user engagement.</p><h3 id="flexibility">Flexibility</h3><p>WebTransport supports reliable and unreliable transport protocols, offering flexibility for different real-time communication patterns.</p><h3 id="security">Security</h3><p>WebTransport incorporates modern security mechanisms like TLS for encrypted communication, ensuring the confidentiality and integrity of transmitted data.</p><h3 id="compatibility">Compatibility</h3><p>WebTransport is designed to work seamlessly with existing web technologies, ensuring compatibility with a wide range of browsers and devices. This compatibility ensures that developers can easily integrate WebTransport into their applications.</p><h2 id="what-are-the-risks-associated-with-webtransport">What are the Risks Associated with WebTransport?</h2><p>While WebTransport presents numerous benefits, there are associated risks that developers should be mindful of:</p><h3 id="security-risks">Security Risks</h3><p>WebTransport uses a new protocol called QUIC, which can introduce new security concerns. 
Staying updated with security patches and best practices is crucial.</p><h3 id="compatibility-issues">Compatibility Issues</h3><p>WebTransport's support may vary across browsers, leading to compatibility issues, particularly with older or less common browsers.</p><h3 id="performance-impact">Performance Impact</h3><p>Improper implementation or excessive data transfer can impact performance. Code optimization is essential to minimize potential bottlenecks.</p><h3 id="complexity">Complexity</h3><p>Learning and implementing WebTransport introduces new concepts and APIs, such as the Streams API, which may add complexity to the development process.</p><h3 id="dependency-on-third-party-providers">Dependency on Third-Party Providers</h3><p>Reliance on third-party services introduces the risk of downtime or disruptions, emphasizing the importance of choosing reliable providers.</p><h2 id="how-videosdk-leverages-webtransport">How VideoSDK Leverages WebTransport</h2><h3 id="videosdk">VideoSDK</h3><p><a href="https://www.videosdk.live/">VideoSDK </a>is a comprehensive live video infrastructure designed for developers. 
It offers real-time audio-video SDKs that provide complete flexibility, scalability, and control, making it seamless for developers to integrate <a href="https://www.videosdk.live/audio-video-conferencing">audio-video conferencing</a> and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a> into their web and mobile applications.</p><h3 id="seamless-integration-of-videosdk-with-webtransport">Seamless Integration of VideoSDK with WebTransport</h3><p>VideoSDK seamlessly integrates WebTransport, empowering developers to incorporate real-time audio-video capabilities effortlessly into their web and mobile applications.</p><h3 id="videosdk-compatible-with-different-platforms">VideoSDK Compatible with Different Platforms</h3><p>VideoSDK ensures compatibility across diverse platforms, allowing developers to harness the power of WebTransport in creating cross-platform applications.</p><h3 id="features-and-capabilities">Features and Capabilities</h3><ul><li><strong>Real-time Video Streaming:</strong> VideoSDK leverages WebTransport to facilitate real-time video streaming, making it an ideal choice for applications requiring live video infrastructure.</li><li><strong>Interactive Communication:</strong> The bidirectional capabilities of WebTransport enhance interactive communication within applications powered by VideoSDK, creating a more engaging user experience.</li></ul>]]></content:encoded></item><item><title><![CDATA[Top Zoom Meeting SDK Competitors in 2025]]></title><description><![CDATA[Explore Zoom's competition by comparing Zoom with its alternatives in the realm of real-time communication.]]></description><link>https://www.videosdk.live/blog/zoom-video-sdk-competitors</link><guid isPermaLink="false">64b524889eadee0b8b9e71bf</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 26 Sep 2024 11:22:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-competitors.jpg" 
medium="image"/><content:encoded><![CDATA[<h2 id="exploring-zoom-video-sdk-alternatives">Exploring Zoom Video SDK Alternatives</h2><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-competitors.jpg" alt="Top Zoom Meeting SDK Competitors in 2025"/><p>If you're currently exploring video communication APIs, the Zoom Video SDK is probably on your radar, along with several other contenders that could suit your needs.</p><p>However, navigating the array of <a href="https://www.videosdk.live/blog/zoom-video-sdk-alternative">alternatives to Zoom</a> and pinpointing the ideal choice can be quite intricate, given the plethora of options available. Before delving into the landscape of video API offerings, it's crucial to outline your project's budget, primary use case, and essential feature requirements.</p><p>To streamline this process, we've compiled a list of the <strong>top Zoom competitors</strong> to simplify your decision-making. This compilation helps you refine your choices and find the ideal video API solution for your application. While we may have a preference, our aim is to provide an impartial overview of each provider, enabling you to make an informed decision tailored to your specific needs.</p><h2 id="zoom-video-sdk">Zoom Video SDK</h2>
<h3 id="key-points-about-zoom">Key points about Zoom</h3>
<ul><li>The Zoom Video SDK empowers you to construct applications featuring personalized video compositions, accommodating <strong>up to 1,000 co-hosts/participants</strong> within a session. </li><li>While Zoom offers certain customization options for the live video interface, its <strong>scope is limited</strong>.</li><li>It's worth noting that the Zoom Video SDK <strong>only supports the predefined roles</strong> of hosts and participants. </li><li>This constraint could be <strong>challenging</strong> for scenarios requiring <strong>customized permissions</strong> for peers. Additionally, unless you opt for paid support plans, you might encounter <strong>slower email-based support</strong>.</li><li>While the SDK does provide <strong>partial assistance</strong> in managing user bandwidth consumption during network degradation, it's important to consider this aspect for optimal performance.</li></ul><h3 id="zoom-video-sdk-pricing">Zoom Video SDK Pricing</h3>
<ul><li>Zoom offers a generous allowance of 10,000 free minutes per month, and charges are applicable only after surpassing this threshold. </li><li>The pricing structure commences at <strong>$3.5</strong> per 1,000 participant minutes. If you require <strong>recordings</strong>, they are available at a rate of <strong>$100</strong> per month, offering you <strong>1 TB</strong> of storage. <strong>Telephony services</strong> come with a price tag of <strong>$100</strong> per month.</li><li>Zoom presents <strong>three distinct customer support plans</strong>: <strong>Access</strong>, <strong>Premier</strong>, and <strong>Premier+</strong>.</li><li>Please note that the Zoom API <a href="https://zoom.us/buy/videosdk">pricing structure</a> might be subject to change or updates, so it's always recommended to verify the current pricing from the official sources before making any decisions.</li></ul><h2 id="zoom-video-sdk-vs-top-competitors">Zoom Video SDK vs Top Competitors</h2>
<p>The <strong>top competitors of Zoom Video SDK</strong> are VideoSDK, <a href="https://www.videosdk.live/blog/vonage-competitors">Vonage</a>, <a href="https://www.videosdk.live/blog/amazon-chime-sdk-competitors">AWS Chime</a>, 100ms, WebRTC, <a href="https://www.videosdk.live/blog/jitsi-competitors">Jitsi</a>, Daily, and LiveKit.</p><p>Let's compare Zoom with each of the above competitors.</p><h2 id="1-zoom-vs-videosdk-a-detailed-comparison">1. Zoom vs VideoSDK: A Detailed Comparison</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-vs-Videosdk.jpg" class="kg-image" alt="Top Zoom Meeting SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>VideoSDK offers developers a seamless API that simplifies incorporating robust, scalable, and dependable audio-video capabilities into their applications. With only a few lines of code, developers can introduce live audio and video experiences to various platforms within minutes. One of the primary benefits of opting for the <a href="https://www.videosdk.live/">VideoSDK</a> is its remarkable ease of integration. This characteristic enables developers to concentrate their efforts on crafting innovative features that contribute to enhanced user engagement and retention.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Zoom pricing</strong></td>
        <td><strong>Video SDK pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>
            <strong>$15</strong> per 1,000 minutes, No limit on participants
        </td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$4</strong> per 1,000 minutes</td>
        <td>
            <strong>$15</strong> per 1,000 minutes, No limit on participants
        </td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/alternative/zoom-vs-videosdk">Zoom vs VideoSDK</a>.</blockquote>
<!--kg-card-begin: html-->
<!-- CTA card; Tailwind is loaded inline because this HTML card is injected into the post body -->
<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet"/>
<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
	<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
		Schedule a Demo with Our Live Video Expert!
	</h3>
	<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
		Discover how VideoSDK can help you build a cutting-edge real-time video app.
	</p>
	<div class="mt-4 flex items-center justify-center">
		<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
			Book a call
		</a>
	</div>
</div>
<!--kg-card-end: html-->
<h2 id="2-zoom-vs-vonage">2. Zoom vs Vonage</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-vs-Vonage.jpg" class="kg-image" alt="Top Zoom Meeting SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>The Vonage Video API, previously known as TokBox, provides developers with the tools to create customized video experiences for their web, mobile, or desktop applications. By utilizing the <a href="https://www.videosdk.live/blog/vonage-alternative">Vonage</a> API, developers can easily integrate video communication features, providing users with a distinctive and immersive video experience directly within their apps.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Zoom pricing</strong></td>
        <td><strong>Vonage pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$2</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$1</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td><strong>$15</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td>
            <strong>$4</strong> per 1,000 minutes
        </td>
        <td><strong>$15</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/zoom-vs-vonage">Zoom vs Vonage</a>.</blockquote><h2 id="3-zoom-vs-aws-chime">3. Zoom vs AWS Chime</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-vs-AWS-Chime.jpg" class="kg-image" alt="Top Zoom Meeting SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>The Amazon Chime SDK is designed to streamline the integration of video conferencing, audio calls, and messaging into your applications. <a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative">AWS Chime SDK</a> provides fundamental live video features and leverages the user management capabilities of Amazon Web Services (AWS) to enhance the overall functionality of your applications.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Zoom pricing</strong></td>
        <td><strong>AWS Chime pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$1.7</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$4</strong> per 1,000 minutes</td>
        <td><strong>$12.5</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/zoom-vs-amazon-chime-sdk">Zoom vs AWS Chime</a>.</blockquote><h2 id="4-zoom-vs-100ms">4. Zoom vs 100ms</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-vs-100-ms.jpg" class="kg-image" alt="Top Zoom Meeting SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>100ms is a cloud platform that provides developers with the tools to seamlessly integrate video and audio conferencing into web, Android, and iOS applications. Through a comprehensive set of REST APIs, software development kits (SDKs), and a user-friendly dashboard, <a href="https://www.videosdk.live/blog/100ms-alternative">100ms</a> simplifies the tasks of capturing, distributing, recording, and presenting live interactive audio and video content.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Zoom pricing</strong></td>
        <td><strong>100ms pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td><strong>$40</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$4</strong> per 1,000 minutes</td>
        <td><strong>$13.5</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/zoom-vs-100ms">Zoom vs 100ms</a>.</blockquote><h2 id="5-zoom-vs-webrtc">5. Zoom vs WebRTC</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-vs-WebRTC.jpg" class="kg-image" alt="Top Zoom Meeting SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>WebRTC is an open-source initiative that enables web browsers to incorporate Real-Time Communications (RTC) capabilities using simple JavaScript APIs. The components of <a href="https://www.videosdk.live/blog/webrtc-alternative">WebRTC</a> have been carefully fine-tuned to efficiently support real-time communication features within web browsers.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Zoom pricing</strong></td>
        <td><strong>WebRTC pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$4</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/zoom-vs-webrtc">Zoom vs WebRTC</a>.</blockquote><h2 id="6-zoom-vs-jitsi">6. Zoom vs Jitsi</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-vs-Jitsi-Meet.jpg" class="kg-image" alt="Top Zoom Meeting SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>Jitsi is a free and open-source platform designed to streamline video conferencing. <a href="https://www.videosdk.live/blog/jitsi-alternative">Jitsi</a> provides a user-friendly experience that eliminates the need for downloads or plugins, making it an excellent choice for those looking for a simple and cost-effective solution for live video communication.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Zoom pricing</strong></td>
        <td><strong>Jitsi pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$4</strong> per 1,000 minutes</td>
        <td>NA</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/zoom-vs-jitsi">Zoom vs Jitsi</a>.</blockquote><h2 id="7-zoom-vs-daily">7. Zoom vs Daily</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-vs-Daily.jpg" class="kg-image" alt="Top Zoom Meeting SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>Daily enables developers to easily create real-time video and audio calls that seamlessly operate within web browsers. By simplifying the management of typical backend video call features across various platforms, <a href="https://www.videosdk.live/blog/daily-co-alternative">Daily</a> offers practical defaults that streamline the development process and enhance the overall user experience.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Zoom pricing</strong></td>
        <td><strong>Daily pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$4</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>Starts from <strong>$1.2</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td><strong>$15</strong> per 1,000 minutes
    </td></tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$4</strong> per 1,000 minutes</td>
        <td><strong>$13.49</strong> per 1,000 minutes</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/zoom-vs-daily">Zoom vs Daily</a>.</blockquote><h2 id="8-zoom-vs-livekit">8. Zoom vs LiveKit</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Zoom-vs-LiveKit.jpg" class="kg-image" alt="Top Zoom Meeting SDK Competitors in 2025" loading="lazy" width="1429" height="525"/></figure><p>Livekit provides a suite of SDKs designed to seamlessly integrate live video and audio capabilities into your native applications. With features including live streaming, in-game communication, video calls, and more, <a href="https://www.videosdk.live/blog/livekit-alternative">Livekit</a> offers a comprehensive solution. By utilizing a modern end-to-end WebRTC stack, Livekit ensures smooth and high-quality real-time communication experiences for users.</p>
<!--kg-card-begin: html-->
<table border="2px black" cellspacing="0px" style="border-radius: 10px;">
    <tr>
        <td/>
        <td><strong>Zoom pricing</strong></td>
        <td><strong>LiveKit pricing</strong></td>
    </tr>
    <tr>
        <td><strong>Video calling</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>Starts from <strong>$20</strong> per 1,000 minutes</td>
    </tr>
    <tr>
        <td><strong>Interactive live streaming</strong></td>
        <td>NA</td>
        <td>
            <strong>$69</strong> per hour (up to 500 viewers only; does not support Full HD)
        </td>
    </tr>
    <tr>
        <td><strong>RTMP</strong></td>
        <td>Starts from <strong>$3.5</strong> per 1,000 minutes</td>
        <td>No accurate data available</td>
    </tr>
    <tr>
        <td><strong>Cloud Recording</strong></td>
        <td><strong>$4</strong> per 1,000 minutes</td>
        <td>No accurate data available</td>
    </tr>
</table>
<!--kg-card-end: html-->
<blockquote>Here's a detailed comparison of <a href="https://www.videosdk.live/zoom-vs-livekit">Zoom vs LiveKit</a>.</blockquote><h2 id="have-you-determined-whether-zoom-aligns-with-your-requirements-or-have-you-found-an-alternative">Have you determined whether Zoom aligns with your requirements, or have you found an alternative?</h2>
<p>The alternatives to Zoom discussed earlier offer a diverse array of solutions for developers aiming to enhance in-app user experiences. However, if your requirements extend to a comprehensive engagement strategy that includes not only in-app communication but also voice and video functionality, a solution such as <a href="https://www.videosdk.live/signup/">VideoSDK</a> could be a more fitting choice. These solutions can empower you to create more immersive and interactive experiences for your users, going well beyond simple text-based communication.</p><p>Taking into account your specific needs, budget limitations, and the critical features you're seeking, Zoom may not perfectly align with your requirements. To make an informed decision, it's advisable to explore the alternatives mentioned earlier, some of which offer free trials, such as VideoSDK. By testing these alternatives through practical projects, you can gain a better understanding of their suitability before making a significant commitment. 
Remember, if your needs change in the future, transitioning away from Zoom is always a viable option.</p>]]></content:encoded></item><item><title><![CDATA[What is High-Grade Client-to-Server Encryption?]]></title><description><![CDATA[This article explores High-Grade Client-to-Server Encryption's significance, common encryption algorithms like AES and RSA, and best practices for secure communication.]]></description><link>https://www.videosdk.live/blog/high-grade-client-to-server-encryption</link><guid isPermaLink="false">66e2e59620fab018df110303</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 26 Sep 2024 06:30:05 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-High-Grade-Client-to-Server-Encryption.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/What-is-High-Grade-Client-to-Server-Encryption.jpg" alt="What is High-Grade Client-to-Server Encryption?"/><p>High-grade client-to-server encryption refers to the implementation of robust encryption protocols to protect data transmitted between a client (e.g., a user's device or application) and a server. This type of encryption ensures that data, such as personal information, financial details, or any other sensitive data, is securely transmitted and protected from unauthorized access, eavesdropping, or tampering during its journey across the network.</p><p>This article delves into the world of client-to-server encryption, exploring its importance, common algorithms, best practices, and the challenges developers face in implementing effective encryption solutions.</p><h3 id="understanding-client-to-server-encryption">Understanding Client-to-Server Encryption</h3><p>This type of encryption ensures that even if the data is intercepted during transmission, it remains unreadable to unauthorized parties. 
Unlike client-side (end-to-end) encryption, where data remains encrypted even while it resides on the server, client-to-server encryption protects data in transit: it is encrypted on the client side and decrypted on the server side.</p><h3 id="how-many-types-of-encryption-algorithms">Types of Encryption Algorithms</h3><p>Two of the most widely used encryption algorithms in client-to-server communication are AES (Advanced Encryption Standard) and <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)">RSA (Rivest-Shamir-Adleman</a>). AES is a symmetric-key algorithm that uses a single key for both encryption and decryption, while RSA is an asymmetric-key algorithm that uses a pair of keys: a public key for encryption and a private key for decryption.</p><p><a href="https://www.techopedia.com/definition/1779/hybrid-encryption">Hybrid encryption</a>, which combines both symmetric and asymmetric encryption, is also commonly used in client-to-server communication. This approach leverages the speed of symmetric encryption for bulk data encryption and the security of asymmetric encryption for key exchange.</p><h3 id="importance-of-high-grade-encryption">Importance of High-Grade Encryption</h3><p>The consequences of unencrypted data transmission can be severe, as evidenced by numerous data breaches that have occurred due to inadequate encryption. High-profile examples include the <a href="https://www.nytimes.com/2017/10/03/technology/yahoo-hack-3-billion-users.html">Yahoo data breach in 2013</a>, which affected over 3 billion user accounts, and the <a href="https://archive.epic.org/privacy/data-breach/equifax/">Equifax data breach in 2017</a>, which exposed the sensitive personal information of 147 million people.</p><p>High-grade encryption mitigates these risks by ensuring that even if data is intercepted, it remains unreadable to attackers. 
This not only protects the privacy and security of users but also helps organizations maintain compliance with data protection regulations such as GDPR and HIPAA.</p><h3 id="key-features-of-high-grade-client-to-server-encryption">Key Features of High-Grade Client-to-Server Encryption:</h3><ol><li><strong>Strong Encryption Algorithms:</strong> High-grade encryption typically uses advanced algorithms like AES (Advanced Encryption Standard) with 256-bit keys, which are considered very secure and resistant to brute-force attacks.</li><li><strong>TLS/SSL Protocols:</strong> The most common protocols used for high-grade encryption are TLS (Transport Layer Security) and its predecessor SSL (Secure Sockets Layer). These protocols establish an encrypted connection between the client and server, ensuring that all data transferred remains confidential.</li><li><strong>End-to-end Encryption:</strong> Although client-to-server encryption specifically secures the data between the client and the server, it is often part of a broader strategy of end-to-end encryption, which ensures that data is encrypted at all stages of transmission.</li><li><strong>Data Integrity:</strong> High-grade encryption not only ensures data privacy but also protects against data tampering. It uses hashing and digital signatures to verify that data has not been altered during transmission.</li><li><strong>Authentication:</strong> Secure connections are established through a handshake process that involves the exchange of digital certificates. 
This verifies the identities of both the client and the server, adding another layer of security.</li><li><strong>Forward Secrecy:</strong> Many high-grade encryption implementations support forward secrecy, which ensures that session keys used for encrypting the data are not compromised even if the server's private key is later exposed.</li></ol><h3 id="benefits">Benefits:</h3><ul><li><strong>Privacy Protection:</strong> Sensitive data like passwords, personal information, and financial details are encrypted, ensuring they cannot be read by unauthorized parties.</li><li><strong>Data Integrity:</strong> Prevents unauthorized modifications, ensuring that the data received is exactly what was sent.</li><li><strong>Compliance:</strong> Helps meet regulatory requirements for data protection, such as GDPR, HIPAA, and PCI-DSS.</li><li><strong>Trust:</strong> Builds user trust by ensuring secure communications, which is particularly important for financial transactions, healthcare data, and other sensitive communications.</li></ul><p>High-grade client-to-server encryption is essential in any secure web service, protecting users and their data from a wide range of security threats.</p><h3 id="implementing-client-to-server-encryption">Implementing Client-to-Server Encryption</h3><p>Implementing high-grade client-to-server encryption involves several steps:</p><ol><li><strong>Dev Environment Preparation</strong>: Developers must ensure that their development environment is properly configured with the necessary tools and libraries, such as WebCryptoAPI for client-side encryption in JavaScript or the Node.js Crypto module for server-side encryption.</li><li><strong>Key Generation</strong>: For asymmetric encryption algorithms like RSA, developers must generate public and private keys. 
These keys are used for encrypting and decrypting data, respectively.</li><li><strong>Data Encryption and Decryption</strong>: The process of encrypting data before transmission and decrypting it on the server side is a critical step in client-to-server encryption.</li></ol><h3 id="challenges-in-client-to-server-encryption">Challenges in Client-to-Server Encryption</h3><p>Implementing effective client-to-server encryption is not without its challenges. Key management and distribution can be complex, especially when dealing with asymmetric encryption algorithms like RSA. Performance issues can also arise due to the overhead of encryption and decryption processes. Ensuring compatibility across different platforms and devices is another challenge that developers must address.</p><h3 id="best-practices-for-high-grade-client-to-server-encryption">Best Practices for High-Grade Client-to-Server Encryption</h3><p>To ensure the effectiveness of client-to-server encryption, developers should adhere to best practices such as:</p><ul><li>Using established libraries and protocols like TLS (Transport Layer Security) for secure communication.</li><li>Regularly updating encryption standards and algorithms to keep pace with evolving security threats.</li><li>Educating users about secure practices, such as using strong passwords and enabling two-factor authentication.</li></ul><h3 id="future-of-client-to-server-encryption">Future of Client-to-Server Encryption</h3><p>As technology advances, new challenges emerge in the realm of client-to-server encryption. The rise of quantum computing poses a potential threat to current encryption standards, as quantum computers may be able to break these algorithms more efficiently. 
This has led to the development of post-quantum encryption standards, which aim to create encryption algorithms that are resistant to quantum attacks.</p><h3 id="conclusion">Conclusion</h3><p>High-grade client-to-server encryption is a critical component of modern web applications, ensuring the security and privacy of sensitive data during transmission. By understanding the types of encryption algorithms, implementing best practices, and staying informed about emerging challenges, developers can create secure and reliable applications that protect user data and maintain trust. As the digital landscape continues to evolve, the importance of client-to-server encryption will only grow, making it an essential skill for developers to master.</p>]]></content:encoded></item><item><title><![CDATA[How to Build IOS Video Call App with VideoSDK]]></title><description><![CDATA[Step-by-Step Tutorial: In this article, You will learn to create an IOS Video Call App in 6 steps.]]></description><link>https://www.videosdk.live/blog/ios-video-calling-sdk</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb81</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Wed, 25 Sep 2024 16:55:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/06/ios_video-calling.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="introduction">Introduction</h3>
<!--kg-card-end: markdown--><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/ios_video-calling.jpg" alt="How to Build IOS Video Call App with VideoSDK"/><p>This tutorial is about creating a video call app using the <a href="https://en.wikipedia.org/wiki/IOS">iOS</a> platform. You will use <a href="https://en.wikipedia.org/wiki/Xcode">Xcode</a> (an IDE for iOS app development) and <a href="https://en.wikipedia.org/wiki/Swift_(programming_language)">Swift</a> (a programming language).</p><p>The tutorial will guide you through setting up the project, integrating the <a href="https://www.videosdk.live">VideoSDK</a> library with <a href="https://en.wikipedia.org/wiki/CocoaPods">CocoaPods</a>, and implementing the joining screen and meeting functionalities.</p><p>By the end, you will have developed a video calling app, practiced iOS development, and gained skills in integrating external libraries and handling meeting interactions. This will enable you to create similar apps or enhance existing ones with video-calling capabilities.</p><!--kg-card-begin: markdown--><h2 id="prerequisites">Prerequisites</h2>
<!--kg-card-end: markdown--><ul><li>iOS 11.0+</li><li>Xcode 12.0+</li><li>Swift 5.0+</li><li>A token of VideoSDK.live from <a href="https://app.videosdk.live/api-keys">VideoSDK Dashboard</a>.</li></ul><!--kg-card-begin: markdown--><h2 id="app-architecture">App Architecture</h2>
<!--kg-card-end: markdown--><p>This app will contain two screens:</p><ol><li><strong>Join Screen:</strong> This screen allows the user to either create a new meeting or join an existing one.</li><li><strong>Meeting Screen:</strong> This screen contains the local and remote participant views and meeting controls such as enabling/disabling the mic and camera, and leaving the meeting.</li></ol><!--kg-card-begin: markdown--><p><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_architecture.png" alt="How to Build IOS Video Call App with VideoSDK" loading="lazy"/></p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="getting-started-with-the-code">Getting Started With the Code!</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="create-app">Create App</h3>
<!--kg-card-end: markdown--><p><strong>Step 1:</strong>  Create a new application by selecting <code>Create a new Xcode project</code></p><p><strong>Step 2:</strong> Choose <code>App</code> then click Next</p><!--kg-card-begin: markdown--><p><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_app_selection.png" alt="How to Build IOS Video Call App with VideoSDK" loading="lazy"/></p>
<!--kg-card-end: markdown--><p><strong>Step 3:</strong> Add the Product Name and Save the project.</p><!--kg-card-begin: markdown--><p><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_add_product_name.png" alt="How to Build IOS Video Call App with VideoSDK" loading="lazy"/></p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="videosdk-installation">VideoSDK Installation</h2>
<!--kg-card-end: markdown--><p>To install VideoSDK, first initialize CocoaPods in the project by running the following command.</p><!--kg-card-begin: markdown--><pre><code class="language-js">pod init
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_podfile.png" alt="How to Build IOS Video Call App with VideoSDK" loading="lazy"/></p>
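<p>Open the generated <code>Podfile</code> and add the VideoSDK dependency. A minimal sketch is shown below — the target name here is an assumption taken from this tutorial's project, so match it to your own Xcode project, and check the VideoSDK docs for the current pod version:</p>
<pre><code class="language-js"># Podfile
platform :ios, '11.0'

target 'iOSQuickStartDemo' do
  use_frameworks!

  # Real-time communication SDK from VideoSDK
  pod 'VideoSDKRTC'
end
</code></pre>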
<!--kg-card-end: markdown--><p>Then run the following command to install the pods:</p><!--kg-card-begin: markdown--><pre><code class="language-js">pod install
</code></pre>
<!--kg-card-end: markdown--><p>Then declare the camera and microphone permissions in <code>Info.plist</code>:</p><!--kg-card-begin: markdown--><pre><code class="language-js">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="project-structure">Project Structure</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-js">iOSQuickStartDemo
   ├── Models
   │    ├── RoomStruct.swift
   │    └── MeetingData.swift
   ├── ViewControllers
   │    ├── StartMeetingViewController.swift
   │    └── MeetingViewController.swift
   ├── APIService
   │    └── APIService.swift
   ├── AppDelegate.swift       // Default
   ├── SceneDelegate.swift     // Default
   ├── Main.storyboard         // Default
   ├── LaunchScreen.storyboard // Default
   └── Info.plist              // Default
Pods
   └── Podfile
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="mainstoryboard-design">Main.storyboard Design</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_storyboard.png" alt="How to Build IOS Video Call App with VideoSDK" loading="lazy"/></p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="create-models">Create models</h2>
<!--kg-card-end: markdown--><p>Create Swift files for the <code>MeetingData</code> and <code>RoomsStruct</code> models, which hold meeting and room data as structured objects.</p><!--kg-card-begin: markdown--><p><code>MeetingData.swift</code></p>
<pre><code class="language-js">import Foundation
struct MeetingData {
    let token: String
    let name: String
    let meetingId: String
    let micEnabled: Bool
    let cameraEnabled: Bool
}
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p><code>RoomStruct.swift</code></p>
<pre><code class="language-js">import Foundation
struct RoomsStruct: Codable {
    let createdAt, updatedAt, roomID: String?
    let links: Links?
    let id: String?
    enum CodingKeys: String, CodingKey {
        case createdAt, updatedAt
        case roomID = &quot;roomId&quot;
        case links, id
    }
}
// MARK: - Links
struct Links: Codable {
    let getRoom, getSession: String?
    enum CodingKeys: String, CodingKey {
        case getRoom = &quot;get_room&quot;
        case getSession = &quot;get_session&quot;
    }
}
</code></pre>
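<p>For reference, <code>RoomsStruct</code> decodes a response from the create-room API shaped roughly like the following — all values below are illustrative, not real:</p>
<pre><code class="language-js">{
  &quot;roomId&quot;: &quot;abcd-efgh-ijkl&quot;,
  &quot;createdAt&quot;: &quot;2024-01-01T00:00:00.000Z&quot;,
  &quot;updatedAt&quot;: &quot;2024-01-01T00:00:00.000Z&quot;,
  &quot;id&quot;: &quot;0000-example-id&quot;,
  &quot;links&quot;: {
    &quot;get_room&quot;: &quot;https://api.videosdk.live/v2/rooms/abcd-efgh-ijkl&quot;,
    &quot;get_session&quot;: &quot;https://api.videosdk.live/v2/sessions/&quot;
  }
}
</code></pre>
<p>The <code>CodingKeys</code> enums above exist to map the API's <code>roomId</code>, <code>get_room</code>, and <code>get_session</code> keys onto the Swift property names.</p>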
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="5-steps-to-build-ios-video-call-app">6 Steps to Build iOS Video Call App</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="step-1-get-started-with-apiclient">Step 1: Get started with APIService</h3>
<!--kg-card-end: markdown--><p>Before jumping into anything else, you need an API call that generates a unique meetingId. This requires an auth token, which you can generate either with the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-server-api-example</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a>.</p><!--kg-card-begin: markdown--><p><code>APIService.swift</code></p>
<pre><code class="language-js">import Foundation

let TOKEN_STRING: String = &quot;&lt;AUTH_TOKEN&gt;&quot;;

class APIService {

    class func createMeeting(token: String, completion: @escaping (Result&lt;String, Error&gt;) -&gt; Void) {
        let url = URL(string: &quot;https://api.videosdk.live/v2/rooms&quot;)!

        var request = URLRequest(url: url)
        request.httpMethod = &quot;POST&quot;
        request.addValue(TOKEN_STRING, forHTTPHeaderField: &quot;authorization&quot;)

        URLSession.shared.dataTask(with: request, completionHandler: { (data: Data?, response: URLResponse?, error: Error?) in
            DispatchQueue.main.async {
                // propagate transport errors instead of silently dropping them
                if let error = error {
                    completion(.failure(error))
                    return
                }
                guard let data = data else { return }
                do {
                    let dataArray = try JSONDecoder().decode(RoomsStruct.self, from: data)
                    completion(.success(dataArray.roomID ?? &quot;&quot;))
                } catch {
                    print(&quot;Error while creating a meeting: \(error)&quot;)
                    completion(.failure(error))
                }
            }
        }).resume()
    }
}
</code></pre>
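<p>To sanity-check your auth token outside the app, you can call the same endpoint that <code>APIService</code> uses directly from a terminal. The URL and header below come straight from the code above; replace <code>$VIDEOSDK_TOKEN</code> with the token from your dashboard:</p>
<pre><code class="language-js">curl -X POST https://api.videosdk.live/v2/rooms \
  -H &quot;Authorization: $VIDEOSDK_TOKEN&quot;
</code></pre>
<p>A successful response contains the <code>roomId</code> that <code>createMeeting</code> extracts.</p>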
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="step-2-implement-join-screen">Step 2: Implement Join Screen</h3>
<!--kg-card-end: markdown--><p>The join screen lets the user either create a new meeting or join an existing one.</p><!--kg-card-begin: markdown--><p><code>StartMeetingViewController.swift</code></p>
<pre><code class="language-js">import Foundation
import UIKit

class StartMeetingViewController: UIViewController, UITextFieldDelegate {

        private var serverToken = &quot;&quot;

        /// MARK: outlet for create meeting button
        @IBOutlet weak var btnCreateMeeting: UIButton!

        /// MARK: outlet for join meeting button
        @IBOutlet weak var btnJoinMeeting: UIButton!

        /// MARK: outlet for meetingId textfield
        @IBOutlet weak var txtMeetingId: UITextField!

        /// MARK: Initialize the private variable with TOKEN_STRING &amp;
        /// setting the meeting id in the textfield
        override func viewDidLoad() {
            super.viewDidLoad()
            txtMeetingId.delegate = self
            serverToken = TOKEN_STRING
            txtMeetingId.text = &quot;PROVIDE-STATIC-MEETING-ID&quot;
        }

        /// MARK: method for joining meeting through seague named as &quot;StartMeeting&quot;
        /// after validating the serverToken in not empty
        func joinMeeting() {
            txtMeetingId.resignFirstResponder()

            if !serverToken.isEmpty {
                DispatchQueue.main.async {
                    self.dismiss(animated: true) {
                        self.performSegue(withIdentifier: &quot;StartMeeting&quot;, sender: nil)
                    }
                }
            } else {
                print(&quot;Please provide auth token to start the meeting.&quot;)
            }
        }

        /// MARK: outlet for create meeting button tap event
        @IBAction func btnCreateMeetingTapped(_ sender: Any) {
                print(&quot;show loader while meeting gets connected with server&quot;)
            joinRoom()
        }

        /// MARK: outlet for join meeting button tap event
        @IBAction func btnJoinMeetingTapped(_ sender: Any) {
            if((txtMeetingId.text ?? &quot;&quot;).isEmpty){

                        print(&quot;Please provide meeting id to start the meeting.&quot;)
                txtMeetingId.resignFirstResponder()
            } else {
                joinMeeting()
            }
        }

        // MARK: - method for creating room api call and getting meetingId for joining meeting

        func joinRoom() {

           APIService.createMeeting(token: self.serverToken) { result in
            if case .success(let meetingId) = result {
                DispatchQueue.main.async {
                    self.txtMeetingId.text = meetingId
                    self.joinMeeting()
                }
            }
        }
        }

        /// MARK: preparing to animate to meetingViewController screen
        override func prepare(for segue: UIStoryboardSegue, sender: Any?) {

            guard let navigation = segue.destination as? UINavigationController,
                  let meetingViewController = navigation.topViewController as? MeetingViewController else {
                  return
              }

            meetingViewController.meetingData = MeetingData(
                token: serverToken,
                name: txtMeetingId.text ?? &quot;Guest&quot;,
                meetingId: txtMeetingId.text ?? &quot;&quot;,
                micEnabled: true,
                cameraEnabled: true
            )
        }
}
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h4 id="output">Output</h4>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><center>
    <img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_join_screen.png" height="450" width="200" alt="How to Build IOS Video Call App with VideoSDK"/></center><!--kg-card-end: html--><!--kg-card-begin: markdown--><h3 id="step-3-initialize-and-join-meeting">Step 3: Initialize and Join Meeting</h3>
<!--kg-card-end: markdown--><p>Using the provided <code>token</code> and <code>meetingId</code>, you will configure and initialize the meeting in <code>viewDidLoad()</code>.</p><p>Then, add <code>@IBOutlet</code>s for <code>localParticipantVideoView</code> and <code>remoteParticipantVideoView</code>, which render local and remote participant media respectively.</p><!--kg-card-begin: markdown--><p><code>MeetingViewController.swift</code></p>
<pre><code class="language-js">
import UIKit
import VideoSDKRTC
import WebRTC
import AVFoundation

class MeetingViewController: UIViewController {

    // MARK: - Properties
    // outlet for local participant container view
    @IBOutlet weak var localParticipantViewContainer: UIView!

    // outlet for label for meeting Id
    @IBOutlet weak var lblMeetingId: UILabel!

    // outlet for local participant video view
    @IBOutlet weak var localParticipantVideoView: RTCMTLVideoView!

    // outlet for remote participant video view
    @IBOutlet weak var remoteParticipantVideoView: RTCMTLVideoView!

    // outlet for remote participant no media label
    @IBOutlet weak var lblRemoteParticipantNoMedia: UILabel!

    // outlet for remote participant container view
    @IBOutlet weak var remoteParticipantViewContainer: UIView!

    // outlet for local participant no media label
    @IBOutlet weak var lblLocalParticipantNoMedia: UILabel!

    /// Meeting data - required to start
    var meetingData: MeetingData!

    /// current meeting reference
    private var meeting: Meeting?

    // MARK: - video participants including self to show in UI
    private var participants: [Participant] = []

        // MARK: - Lifecycle Events

        override func viewDidLoad() {
        super.viewDidLoad()
        // configure the VideoSDK with token
        VideoSDK.config(token: meetingData.token)

        // init meeting
        initializeMeeting()

        // set meeting id in button text
        lblMeetingId.text = &quot;Meeting Id: \(meetingData.meetingId)&quot;
      }

      override func viewWillAppear(_ animated: Bool) {
          super.viewWillAppear(animated)
          navigationController?.navigationBar.isHidden = true
      }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        navigationController?.navigationBar.isHidden = false
        NotificationCenter.default.removeObserver(self)
    }

        // MARK: - Meeting

        private func initializeMeeting() {

            // Initialize the VideoSDK
            meeting = VideoSDK.initMeeting(
                meetingId: meetingData.meetingId,
                participantName: meetingData.name,
                micEnabled: meetingData.micEnabled,
                webcamEnabled: meetingData.cameraEnabled
            )

            // Adding the listener to meeting
            meeting?.addEventListener(self)

            // joining the meeting
            meeting?.join()
        }
}
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="step-4-implement-controls">Step 4: Implement Controls</h3>
<!--kg-card-end: markdown--><p>After initializing the meeting in the previous step, add <code>@IBOutlet</code>s for <code>btnLeave</code>, <code>btnToggleVideo</code>, and <code>btnToggleMic</code>, which control media in the meeting.</p><!--kg-card-begin: markdown--><p><code>MeetingViewController.swift</code></p>
<pre><code class="language-js">
class MeetingViewController: UIViewController {

...

    // outlet for leave button
    @IBOutlet weak var btnLeave: UIButton!

    // outlet for toggle video button
    @IBOutlet weak var btnToggleVideo: UIButton!

    // outlet for toggle audio button
    @IBOutlet weak var btnToggleMic: UIButton!

    // bool for mic
    var micEnabled = true
    // bool for video
    var videoEnabled = true


    // outlet for leave button click event
    @IBAction func btnLeaveTapped(_ sender: Any) {
            DispatchQueue.main.async {
                self.meeting?.leave()
                self.dismiss(animated: true)
            }
        }

    // outlet for toggle mic button click event
    @IBAction func btnToggleMicTapped(_ sender: Any) {
        if micEnabled {
            micEnabled = !micEnabled // false
            self.meeting?.muteMic()
        } else {
            micEnabled = !micEnabled // true
            self.meeting?.unmuteMic()
        }
    }

    // outlet for toggle video button click event
    @IBAction func btnToggleVideoTapped(_ sender: Any) {
        if videoEnabled {
            videoEnabled = !videoEnabled // false
            self.meeting?.disableWebcam()
        } else {
            videoEnabled = !videoEnabled // true
            self.meeting?.enableWebcam()
        }
    }

...

}
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h4 id="output">Output</h4>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><center>
    <img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_loading.png" height="450" width="200" alt="How to Build IOS Video Call App with VideoSDK"/></center><!--kg-card-end: html--><!--kg-card-begin: markdown--><h3 id="step-5-implementing-meetingeventlistener">Step 5: Implementing MeetingEventListener</h3>
<!--kg-card-end: markdown--><p>In this step, you'll create an extension for <code>MeetingViewController</code> that conforms to <code>MeetingEventListener</code> and implements the <code>onMeetingJoined</code>, <code>onMeetingLeft</code>, <code>onParticipantJoined</code>, <code>onParticipantLeft</code>, <code>onParticipantChanged</code>, and <code>onSpeakerChanged</code> callbacks.</p><!--kg-card-begin: markdown--><p><code>MeetingViewController.swift</code></p>
<pre><code class="language-js">
extension MeetingViewController: MeetingEventListener {

        /// Meeting started
        func onMeetingJoined() {

            // handle local participant on start
            guard let localParticipant = self.meeting?.localParticipant else { return }
            // add to list
            participants.append(localParticipant)

            // add event listener
            localParticipant.addEventListener(self)

            localParticipant.setQuality(.high)

            if(localParticipant.isLocal){
                self.localParticipantViewContainer.isHidden = false
            } else {
                self.remoteParticipantViewContainer.isHidden = false
            }
        }

        /// Meeting ended
        func onMeetingLeft() {
            // remove listeners
            meeting?.localParticipant.removeEventListener(self)
            meeting?.removeEventListener(self)
        }

        /// A new participant joined
        func onParticipantJoined(_ participant: Participant) {
            participants.append(participant)

            // add listener
            participant.addEventListener(self)

            participant.setQuality(.high)

            if(participant.isLocal){
                self.localParticipantViewContainer.isHidden = false
            } else {
                self.remoteParticipantViewContainer.isHidden = false
            }
        }

        /// A participant left from the meeting
        /// - Parameter participant: participant object
        func onParticipantLeft(_ participant: Participant) {
            participant.removeEventListener(self)
            guard let index = self.participants.firstIndex(where: { $0.id == participant.id }) else {
                return
            }
            // remove participant from list
            participants.remove(at: index)
            // hide from ui
            UIView.animate(withDuration: 0.5){
                if(!participant.isLocal){
                    self.remoteParticipantViewContainer.isHidden = true
                }
            }
        }

        /// Called when speaker is changed
        /// - Parameter participantId: participant id of the speaker, nil when no one is speaking.
        func onSpeakerChanged(participantId: String?) {

            // show indication for active speaker
            if let participant = participants.first(where: { $0.id == participantId }) {
                self.showActiveSpeakerIndicator(participant.isLocal ? localParticipantViewContainer : remoteParticipantViewContainer, true)
            }

            // hide indication for others participants
            let otherParticipants = participants.filter { $0.id != participantId }
            for participant in otherParticipants {
                if participants.count &gt; 1 &amp;&amp; participant.isLocal {
                    showActiveSpeakerIndicator(localParticipantViewContainer, false)
                } else {
                    showActiveSpeakerIndicator(remoteParticipantViewContainer, false)
                }
            }
        }

        func showActiveSpeakerIndicator(_ view: UIView, _ show: Bool) {
            view.layer.borderWidth = 4.0
            view.layer.borderColor = show ? UIColor.blue.cgColor : UIColor.clear.cgColor
        }

}

</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="step-6-implementing-participanteventlistener">Step 6: Implementing ParticipantEventListener</h3>
<!--kg-card-end: markdown--><p>In this step, you'll add an extension for <code>MeetingViewController</code> that conforms to <code>ParticipantEventListener</code> and implements the <code>onStreamEnabled</code> and <code>onStreamDisabled</code> callbacks, fired when a participant's audio or video stream is enabled or disabled.</p><p>The <code>updateUI</code> function updates the user interface (showing or hiding video views and indicators) according to the MediaStream state.</p><!--kg-card-begin: markdown--><p><code>MeetingViewController.swift</code></p>
<pre><code class="language-js">extension MeetingViewController: ParticipantEventListener {

        /// Participant has enabled mic, video or screenshare
        /// - Parameters:
        ///   - stream: enabled stream object
        ///   - participant: participant object
        func onStreamEnabled(_ stream: MediaStream, forParticipant participant: Participant) {
            updateUI(participant: participant, forStream: stream, enabled: true)
        }

        /// Participant has disabled mic, video or screenshare
        /// - Parameters:
        ///   - stream: disabled stream object
        ///   - participant: participant object
        func onStreamDisabled(_ stream: MediaStream, forParticipant participant: Participant) {
            updateUI(participant: participant, forStream: stream, enabled: false)
        }
}

private extension MeetingViewController {

    func updateUI(participant: Participant, forStream stream: MediaStream, enabled: Bool) { // true
        switch stream.kind {
        case .state(value: .video):
            if let videotrack = stream.track as? RTCVideoTrack {
                if enabled {
                    DispatchQueue.main.async {
                        UIView.animate(withDuration: 0.5){
                            if(participant.isLocal){
                                self.localParticipantViewContainer.isHidden =   false
                                self.localParticipantVideoView.isHidden = false
                                self.localParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.localParticipantViewContainer.bringSubviewToFront(self.localParticipantVideoView)
                                videotrack.add(self.localParticipantVideoView)
                                self.lblLocalParticipantNoMedia.isHidden = true
                            } else {
                                self.remoteParticipantViewContainer.isHidden = false
                                self.remoteParticipantVideoView.isHidden = false
                                self.remoteParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.remoteParticipantViewContainer.bringSubviewToFront(self.remoteParticipantVideoView)
                                videotrack.add(self.remoteParticipantVideoView)
                                self.lblRemoteParticipantNoMedia.isHidden = true
                            }
                        }
                    }
                } else {
                    UIView.animate(withDuration: 0.5){
                        if(participant.isLocal){
                            self.localParticipantViewContainer.isHidden = false
                            self.localParticipantVideoView.isHidden = true
                            self.lblLocalParticipantNoMedia.isHidden = false
                            videotrack.remove(self.localParticipantVideoView)
                        } else {
                            self.remoteParticipantViewContainer.isHidden = false
                            self.remoteParticipantVideoView.isHidden = true
                            self.lblRemoteParticipantNoMedia.isHidden = false
                            videotrack.remove(self.remoteParticipantVideoView)
                        }
                    }
                }
            }

        case .state(value: .audio):
            if participant.isLocal {
                localParticipantViewContainer.layer.borderWidth = 4.0
                localParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            } else {
                remoteParticipantViewContainer.layer.borderWidth = 4.0
                remoteParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            }
        default:
            break
        }
    }
}


</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h4 id="output">Output</h4>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><center>
    <img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_meeting_screen.png" height="450" width="200" alt="How to Build IOS Video Call App with VideoSDK"/></center><!--kg-card-end: html--><!--kg-card-begin: markdown--><h2 id="known-issue">Known Issue</h2>
<!--kg-card-end: markdown--><p>If your video renders outside its container view, as shown in the image below, add the following lines to the <code>viewDidLoad</code> method of <code>MeetingViewController.swift</code>.</p><!--kg-card-begin: html--><center>
    <img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_known_issue.png" height="450" width="200" alt="How to Build IOS Video Call App with VideoSDK"/></center><!--kg-card-end: html--><!--kg-card-begin: markdown--><p><code>MeetingViewController.swift</code></p>
<pre><code class="language-js">override func viewDidLoad() {

    localParticipantVideoView.frame = CGRect(x: 10, y: 0, width: localParticipantViewContainer.frame.width, height: localParticipantViewContainer.frame.height)

    localParticipantVideoView.bounds = CGRect(x: 10, y: 0, width: localParticipantViewContainer.frame.width, height: localParticipantViewContainer.frame.height)

    localParticipantVideoView.clipsToBounds = true

    remoteParticipantVideoView.frame = CGRect(x: 10, y: 0, width: remoteParticipantViewContainer.frame.width, height: remoteParticipantViewContainer.frame.height)
    remoteParticipantVideoView.bounds = CGRect(x: 10, y: 0, width: remoteParticipantViewContainer.frame.width, height: remoteParticipantViewContainer.frame.height)
    remoteParticipantVideoView.clipsToBounds = true
}

</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="conclusion">Conclusion</h2>
<!--kg-card-end: markdown--><ul><li>By following this tutorial, you created a video calling app using the iOS platform, Xcode, and Swift.</li><li>Integrated the VideoSDK library with CocoaPods for seamless video calling functionality.</li><li>Implemented the Join Screen, allowing users to create new meetings or join existing ones.</li><li>Developed the Meeting Screen, which includes local and remote participant views and essential meeting controls like enabling/disabling the microphone and camera, and leaving the meeting.</li><li>Gained knowledge and hands-on experience in iOS app development, including integrating external libraries and handling meeting interactions.</li><li>If you are facing any issues, feel free to join our <a href="https://discord.gg/Gpmj6eCq5u">Discord community</a>. We would be happy to help.</li></ul><!--kg-card-begin: markdown--><h2 id="more-ios-resources">More iOS Resources</h2>
<!--kg-card-end: markdown--><ul><li><a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/getting-started">iOS video calling app documentation</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example">iOS video calling SDK</a></li><li><a href="https://www.videosdk.live/blog/video-calling-in-flutter">Flutter Video Calling App</a></li><li><a href="https://www.videosdk.live/blog/how-to-make-a-video-calling-app-using-react-native">React Native Video Calling App</a></li></ul><!--kg-card-begin: html-->





<script type="text/javascript">
</script><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[How to Integrate Screen Share in Android(Kotlin) Video Chat App?]]></title><description><![CDATA[This expert guide walks you through integrating the powerful Screen Share feature into your Android (Kotlin) video chat app using VideoSDK.]]></description><link>https://www.videosdk.live/blog/integrate-screen-share-in-kotlin-video-chat-app</link><guid isPermaLink="false">65fbf0b72a88c204ca9cec46</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 25 Sep 2024 12:59:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share-Kotlin-1.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share-Kotlin-1.png" alt="How to Integrate Screen Share in Android(Kotlin) Video Chat App?"/><p>Ever needed to show others exactly what's on your mobile screen during a video call? That capability is called screen sharing: the process of presenting your smartphone screen to the other participants. It lets everyone in the conference see precisely what you see on your screen, which is useful for presentations, demos, and collaboration.</p><p>Integrating the Screen Share feature into your video app opens up many possibilities for improved collaboration and communication. Whether users are delivering presentations or collaborating on projects, Screen Share lets them easily share their displays during video calls.</p><p>Android developers can create compelling, interactive video experiences by following the steps below and leveraging VideoSDK's capabilities. 
Start implementing the Screen Share feature immediately to transform your video app's functionality and user engagement.</p><h2 id="goals">Goals</h2><p>By the End of this Article:</p><ol><li>Create a <a href="https://www.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK.</li><li>Enable Screen Sharing Feature.</li></ol><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the screen share functionality, we will need to use the capabilities that the VideoSDK offers. Before we dive into the implementation steps, let's make sure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://dev.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. 
This token plays a crucial role in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/authentication-and-token#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>A supported Java Development Kit (JDK).</li><li>Android Studio version 3.0 or later.</li><li>Android SDK API level 21 or higher.</li><li>A mobile device running Android 5.0 or later.</li></ul><h2 id="integrate-videosdk">Integrate VideoSDK</h2><p>Following the account creation and token generation steps, we'll guide you through the process of adding the VideoSDK library and other dependencies to your project. We'll also ensure your app has the required permissions to access features like audio recording, camera usage, and internet connectivity, all crucial for a seamless video experience.</p><h3 id="step-a-add-the-repositories-to-the-projects-settingsgradle-file">Step (a): Add the repositories to the project's <code>settings.gradle</code> file.</h3><pre><code class="language-kotlin">dependencyResolutionManagement {
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}
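// Note: the snippet above uses the Groovy DSL (settings.gradle). If your
// project uses the Kotlin DSL (settings.gradle.kts) instead, each repository
// line would read: maven { url = uri("https://jitpack.io") }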
</code></pre><h3 id="step-b-include-the-following-dependency-within-your-applications-buildgradle-file">Step (b): Include the following dependency within your application's <code>build.gradle</code> file:</h3><pre><code class="language-kotlin">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.26'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // Other dependencies specific to your app
}
</code></pre><blockquote>If your project has set <code>android.useAndroidX=true</code>, then set <code>android.enableJetifier=true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflict.</blockquote><h3 id="step-c-add-permissions-to-your-project">Step (c): Add permissions to your project</h3><p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-kotlin">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
</code></pre><p>These permissions are essential for enabling core functionalities like audio recording, internet connectivity for real-time communication, and camera access for video streams within your video application.</p><h2 id="essential-steps-for-building-the-video-calling-functionality">Essential Steps for Building the Video Calling Functionality</h2><p>We'll now delve into the functionalities that make your video application after setting up your project with VideoSDK. This section outlines the essential steps for implementing core functionalities within your app.</p><p>This section will guide you through four key aspects:</p><h3 id="step-1-generate-a-meetingid">Step 1: Generate a <code>meetingId</code></h3><p>Now, we can create the <code>meetingId</code> from the VideoSDK's rooms API. You can refer to this <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/setup-call/initialize-meeting#generating-meeting-id">documentation</a> to generate meetingId.</p><h3 id="step-2-initializing-the-meeting">Step 2: Initializing the Meeting</h3><p>After getting <code>meetingId</code> , the next step involves initializing the meeting for that we need to,</p><ol><li>Initialize VideoSDK.</li><li>Configure <strong>VideoSDK</strong> with the token.</li><li>Initialize the meeting with required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code> and more.</li><li>Add <code>MeetingEventListener</code> for listening events such as Meeting Join/Left and Participant Join/Left.</li><li>Join the room with <code>meeting.join()</code> a method.</li></ol><p>Please copy the .xml file of the <code>MeetingActivity</code> from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/activity_meeting.xml"><strong>here</strong></a>.</p><pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
  // declare the variables we will be using to handle the meeting
  private var meeting: Meeting? = null
  private var micEnabled = true
  private var webcamEnabled = true

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)

    val token = "" // Replace with the token you generated from the VideoSDK Dashboard
    val meetingId = "" // Replace with the meetingId you have generated
    val participantName = "John Doe"
    
    // 1. Initialize VideoSDK
    VideoSDK.initialize(applicationContext)

    // 2. Configure VideoSDK with the token
    VideoSDK.config(token)

    // 3. Initialize VideoSDK Meeting
    meeting = VideoSDK.initMeeting(
      this@MeetingActivity, meetingId, participantName,
      micEnabled, webcamEnabled, null, null, false, null, null)

    // 4. Add event listener for listening upcoming events
    meeting!!.addEventListener(meetingEventListener)

    // 5. Join VideoSDK Meeting
    meeting!!.join()

    (findViewById&lt;View&gt;(R.id.tvMeetingId) as TextView).text = meetingId
  }

  // creating the MeetingEventListener
  private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
    override fun onMeetingJoined() {
      Log.d("#meeting", "onMeetingJoined()")
    }

    override fun onMeetingLeft() {
      Log.d("#meeting", "onMeetingLeft()")
      meeting = null
      if (!isDestroyed) finish()
    }

    override fun onParticipantJoined(participant: Participant) {
      Toast.makeText(
        this@MeetingActivity, participant.displayName + " joined",
        Toast.LENGTH_SHORT
      ).show()
    }

    override fun onParticipantLeft(participant: Participant) {
      Toast.makeText(
         this@MeetingActivity, participant.displayName + " left",
         Toast.LENGTH_SHORT
      ).show()
    }
  }
}
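
// --- Optional: generating a meetingId in-app (sketch) ---
// Step 1 above links to the rooms API docs. As a rough, hedged sketch (the
// endpoint URL and the "roomId" response field are assumptions based on those
// docs), the 'android-networking' dependency added earlier could be used like
// this (requires imports from com.androidnetworking and org.json):
fun createMeeting(token: String, onMeetingId: (String) -&gt; Unit) {
  AndroidNetworking.post("https://api.videosdk.live/v2/rooms")
    .addHeaders("Authorization", token)
    .build()
    .getAsJSONObject(object : JSONObjectRequestListener {
      override fun onResponse(response: JSONObject) {
        // the created room id is returned in the response body
        onMeetingId(response.getString("roomId"))
      }

      override fun onError(anError: ANError) {
        Log.e("#meeting", "Failed to create room: " + anError.errorBody)
      }
    })
}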
</code></pre><h3 id="step-3-handle-local-participant-media">Step 3: Handle Local Participant Media</h3><p>After successfully entering the meeting, it's time to manage the webcam and microphone for the local participant (you).</p><p>To enable or disable the webcam, we'll use the <code>Meeting</code> class methods <code>enableWebcam()</code> and <code>disableWebcam()</code>, respectively. Similarly, to mute or unmute the microphone, we'll utilize the methods <code>muteMic()</code> and <code>unmuteMic()</code></p><pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)
    //...Meeting Setup is Here

    // actions
    setActionListeners()
  }

  private fun setActionListeners() {
    // toggle mic
    findViewById&lt;View&gt;(R.id.btnMic).setOnClickListener { view: View? -&gt;
      if (micEnabled) {
        // this will mute the local participant's mic
        meeting!!.muteMic()
        Toast.makeText(this@MeetingActivity, "Mic Muted", Toast.LENGTH_SHORT).show()
      } else {
        // this will unmute the local participant's mic
        meeting!!.unmuteMic()
        Toast.makeText(this@MeetingActivity, "Mic Enabled", Toast.LENGTH_SHORT).show()
      }
      micEnabled = !micEnabled
    }

    // toggle webcam
    findViewById&lt;View&gt;(R.id.btnWebcam).setOnClickListener { view: View? -&gt;
      if (webcamEnabled) {
        // this will disable the local participant webcam
        meeting!!.disableWebcam()
        Toast.makeText(this@MeetingActivity, "Webcam Disabled", Toast.LENGTH_SHORT).show()
      } else {
        // this will enable the local participant webcam
        meeting!!.enableWebcam()
        Toast.makeText(this@MeetingActivity, "Webcam Enabled", Toast.LENGTH_SHORT).show()
      }
      webcamEnabled = !webcamEnabled
    }

    // leave meeting
    findViewById&lt;View&gt;(R.id.btnLeave).setOnClickListener { view: View? -&gt;
      // this will make the local participant leave the meeting
      meeting!!.leave()
    }
  }
}
</code></pre><h3 id="step-4-handling-the-participants-view">Step 4: Handling the Participants' View</h3><p>To display a list of participants in your video UI, we'll utilize a <code>RecyclerView</code>.</p><p><strong>(a)</strong> This involves creating a new layout for the participant view named <code>item_remote_peer.xml</code> in the <code>res/layout</code> folder. You can copy <code>item_remote_peer.xml </code>file from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/item_remote_peer.xml"><strong>here</strong></a>.</p><p><strong>(b)</strong> Create a RecyclerView adapter <code>ParticipantAdapter</code> which will be responsible for displaying the participant list. Within this adapter, define a <code>PeerViewHolder</code> class that extends <code>RecyclerView.ViewHolder</code>.</p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) : RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PeerViewHolder {
    return PeerViewHolder(
      LayoutInflater.from(parent.context)
        .inflate(R.layout.item_remote_peer, parent, false)
    )
  }

  override fun onBindViewHolder(holder: PeerViewHolder, position: Int) {
  }

  override fun getItemCount(): Int {
    return 0
  }

  class PeerViewHolder(view: View) : RecyclerView.ViewHolder(view) {
    // 'VideoView' to show Video Stream
    var participantView: VideoView
    var tvName: TextView

    init {
        tvName = view.findViewById(R.id.tvName)
        participantView = view.findViewById(R.id.participantView)
    }
  }
}
</code></pre><p><strong>(c)</strong> Now, we will render a list of <code>Participant</code> for the meeting. We will initialize this list in the constructor of the <code>ParticipantAdapter</code></p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) :
    RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  // creating an empty list which will store all participants
  private val participants: MutableList&lt;Participant&gt; = ArrayList()

  init {
    // adding the local participant(You) to the list
    participants.add(meeting.localParticipant)

    // adding Meeting Event listener to get the participant join/leave event in the meeting.
    meeting.addEventListener(object : MeetingEventListener() {
      override fun onParticipantJoined(participant: Participant) {
        // add participant to the list
        participants.add(participant)
        notifyItemInserted(participants.size - 1)
      }

      override fun onParticipantLeft(participant: Participant) {
        var pos = -1
        for (i in participants.indices) {
          if (participants[i].id == participant.id) {
            pos = i
            break
          }
        }
        // remove the participant from the list using the index found by id
        if (pos &gt;= 0) {
          participants.removeAt(pos)
          notifyItemRemoved(pos)
        }
      }
    })
  }

  // replace getItemCount() method with following.
  // this method returns the size of total number of participants
  override fun getItemCount(): Int {
    return participants.size
  }
  //...
}
</code></pre><p><strong>(d)</strong> We have listed our participants. Let's set up the view holder to display a participant video.</p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) :
    RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  // replace onBindViewHolder() method with following.
  override fun onBindViewHolder(holder: PeerViewHolder, position: Int) {
    val participant = participants[position]

    holder.tvName.text = participant.displayName

    // adding the initial video stream for the participant into the 'VideoView'
    for ((_, stream) in participant.streams) {
      if (stream.kind.equals("video", ignoreCase = true)) {
        holder.participantView.visibility = View.VISIBLE
        val videoTrack = stream.track as VideoTrack
        holder.participantView.addTrack(videoTrack)
        break
      }
    }

    // add a listener to the participant which will start or stop rendering that participant's video stream
    participant.addEventListener(object : ParticipantEventListener() {
      override fun onStreamEnabled(stream: Stream) {
        if (stream.kind.equals("video", ignoreCase = true)) {
          holder.participantView.visibility = View.VISIBLE
          val videoTrack = stream.track as VideoTrack
          holder.participantView.addTrack(videoTrack)
       }
      }

      override fun onStreamDisabled(stream: Stream) {
        if (stream.kind.equals("video", ignoreCase = true)) {
          holder.participantView.removeTrack()
          holder.participantView.visibility = View.GONE
        }
      }
    })
  }
}
</code></pre><p><strong>(e)</strong> Now, add this adapter to the <code>MeetingActivity</code></p><pre><code class="language-kotlin">override fun onCreate(savedInstanceState: Bundle?) {
  // Meeting Setup...
  //...
  val rvParticipants = findViewById&lt;RecyclerView&gt;(R.id.rvParticipants)
  rvParticipants.layoutManager = GridLayoutManager(this, 2)
  rvParticipants.adapter = ParticipantAdapter(meeting!!)
}
</code></pre><h2 id="screen-share-feature-integration">Screen Share Feature Integration</h2><p>The screen share feature enhances the collaborative experience in video conferences by allowing participants to share their screens with others. Integrating screen share functionality into your video app using VideoSDK is straightforward and can significantly enhance the usability and effectiveness of your application.</p><p>Let's walk through the steps to enable screen-sharing functionality using VideoSDK.</p><h3 id="how-does-screen-share-work%E2%80%8B">How does Screen Share work?<a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#how-screen-share-works">​</a></h3><p>The following diagram shows the flow of screen sharing in Android using VideoSDK :</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/flow_diagram-babfdcf534c77239e38b3a11d005def6.png" class="kg-image" alt="How to Integrate Screen Share in Android(Kotlin) Video Chat App?" loading="lazy" width="674" height="826"/></figure><h3 id="enable-screen-sharing">Enable Screen Sharing</h3><p>To initiate screen sharing, utilize the <code>enableScreenShare()</code> function within the Meeting class. This enables the local participants to share their mobile screens with other participants seamlessly.</p><ul><li>You can pass customized screen share track in <code>enableScreenShare()</code> by using <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/render-media/optimize-video-track#custom-screen-share-track">Custom Screen Share Track</a>.</li><li>Screen Share stream of the participant can be accessed from the <code>onStreamEnabled</code> event of <code>ParticipantEventListener</code>.</li></ul><!--kg-card-begin: markdown--><h4 id="screenshare-permission%E2%80%8B">Screenshare permission​</h4>
<!--kg-card-end: markdown--><ul><li>Before commencing screen sharing, it's crucial to address screen share permissions. The participant's screen share stream is facilitated through the <code>MediaProjection</code> API, compatible only with <code>Build.VERSION_CODES.LOLLIPOP</code> or higher.</li><li>To obtain permission for screen sharing, acquire an instance of the <code>MediaProjectionManager</code> and invoke the <code>createScreenCaptureIntent()</code> method within an activity. This prompts a dialog for the user to authorize screen projection.</li></ul><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/user_permission-d3adf623ea99e011b2b069031f2570be.jpg" class="kg-image" alt="How to Integrate Screen Share in Android(Kotlin) Video Chat App?" loading="lazy"/></figure><ul><li>Following the permission grant, proceed to call the <code>enableScreenShare()</code> method.</li></ul><pre><code class="language-kotlin">private fun enableScreenShare() {
    val mediaProjectionManager = application.getSystemService(
        MEDIA_PROJECTION_SERVICE
    ) as MediaProjectionManager
    startActivityForResult(
        mediaProjectionManager.createScreenCaptureIntent(), CAPTURE_PERMISSION_REQUEST_CODE
    )
}
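
// Note: CAPTURE_PERMISSION_REQUEST_CODE is referenced above but not defined in
// this snippet; declare it yourself with any unique request code, e.g.:
private val CAPTURE_PERMISSION_REQUEST_CODE = 1001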

public override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode != CAPTURE_PERMISSION_REQUEST_CODE) return
    if (resultCode == RESULT_OK) {
        // Enabling screen share
        meeting!!.enableScreenShare(data)
    }
}</code></pre><!--kg-card-begin: markdown--><h4 id="customize-notification">Customize notification</h4>
<!--kg-card-end: markdown--><ul><li>Upon initiating screen sharing, the presenter will receive a notification with a predefined title and message, which looks like this:</li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/03/notification-screenshare-android.jpg" class="kg-image" alt="How to Integrate Screen Share in Android(Kotlin) Video Chat App?" loading="lazy" width="600" height="531"/></figure><ul><li>You can customize the notification's title, message, and icon to match your requirements using <code>&lt;meta-data&gt;</code> entries specified in <code>app/src/main/AndroidManifest.xml</code>:</li></ul><pre><code class="language-kotlin">&lt;application&gt;
  &lt;meta-data
    android:name="notificationTitle"
    android:value="@string/notificationTitle"
  /&gt;
  &lt;meta-data
    android:name="notificationContent"
    android:value="@string/notificationContent"
  /&gt;
  &lt;meta-data
    android:name="notificationIcon"
    android:resource="@mipmap/ic_launcher_round"
  /&gt;
&lt;/application&gt;</code></pre><h3 id="disable-screen-sharing">Disable Screen Sharing</h3><p>You have to employ the <code>disableScreenShare()</code> function from the <code>Meeting</code> class. This action enables the local participant to halt sharing their mobile screen with other participants.</p><pre><code class="language-kotlin">private fun disableScreenShare() {
    // Disabling screen share
    meeting!!.disableScreenShare()
}</code></pre><h3 id="events-associated-with-screen-sharing"><strong>Events Associated with Screen Sharing</strong></h3><!--kg-card-begin: markdown--><h4 id="events-associated-with-enablescreenshare%E2%80%8B">Events associated with <code>enableScreenShare</code>​</h4>
<!--kg-card-end: markdown--><ul><li>The participant who shares their mobile screen will receive a callback on <a href="https://docs.videosdk.live/android/api/sdk-reference/participant-class/participant-event-listener-class#onstreamenabled"><code>onStreamEnabled()</code></a> of the <a href="https://docs.videosdk.live/android/api/sdk-reference/participant-class/introduction"><code>Participant</code></a> with <code>Stream</code> object.</li><li>While other Participants will receive <a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/meeting-event-listener-class#onpresenterchanged"><code>onPresenterChanged()</code></a> callback of the <a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/introduction"><code>Meeting</code></a> class with the participantId as <code>presenterId</code> who started the screen share.</li></ul><!--kg-card-begin: markdown--><h4 id="events-associated-with-disablescreenshare%E2%80%8B">Events associated with <code>disableScreenShare</code>​</h4>
<!--kg-card-end: markdown--><ul><li>The participant who shared their mobile screen will receive a callback on <a href="https://docs.videosdk.live/android/api/sdk-reference/participant-class/participant-event-listener-class#onstreamdisabled"><code>onStreamDisabled()</code></a> of the <a href="https://docs.videosdk.live/android/api/sdk-reference/participant-class/introduction"><code>Participant</code></a> with Stream object.</li><li>While other Participants will receive <a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/meeting-event-listener-class#onpresenterchanged"><code>onPresenterChanged()</code></a> callback of the <a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/introduction"><code>Meeting</code></a> class with the <code>presenterId</code> as <code>null</code> indicating there is no presenter.</li></ul><pre><code class="language-kotlin">private fun setLocalListeners() {
    meeting!!.localParticipant.addEventListener(object : ParticipantEventListener() {
        //Callback for when the participant starts a stream
        override fun onStreamEnabled(stream: Stream) {
           if (stream.kind.equals("share", ignoreCase = true)) {
              Log.d("VideoSDK","Share Stream On: onStreamEnabled $stream");
            }
        }

        //Callback for when the participant stops a stream
        override fun onStreamDisabled(stream: Stream) {
            if (stream.kind.equals("share", ignoreCase = true)) {
               Log.d("VideoSDK","Share Stream On: onStreamDisabled $stream");
            }
        }
    });
}

private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
    //Callback for when the presenter changes
    override fun onPresenterChanged(participantId: String) {
        if (!TextUtils.isEmpty(participantId)) {
            Log.d("VideoSDK", "$participantId started screen share")
        } else {
            Log.d("VideoSDK", "Someone stopped screen share")
        }
    }
}</code></pre><p>That's it. Following these steps will let you implement screen-share capability in your video app, making it more adaptable and useful across a wide range of use cases.</p><p>For an in-depth exploration of the code snippets along with thorough explanations, check out the <a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example" rel="noopener noreferrer">GitHub repository</a>. There you'll find the complete set of code snippets, accompanied by detailed explanations of their functionality and implementation.</p><h2 id="conclusion"><strong>Conclusion</strong></h2><p>We have discussed the essential steps for integrating the screen share feature into your Android video app using VideoSDK. By following these steps, you can improve the collaborative experience of your video apps, allowing users to effortlessly exchange information during video conferences. 
</p><p>The screen sharing feature not only increases user engagement but also extends the number of use cases for video communication services, making them more adaptable and beneficial to users.</p><p><a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and get <strong>10,000 free minutes</strong> to take your video app to the next level!</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Picture-in-Picture (PiP) Mode in React JS?]]></title><description><![CDATA[In this tutorial, you'll learn how to implement Picture-in-Picture mode into your React Video apps using VideoSDK in a few simple steps.]]></description><link>https://www.videosdk.live/blog/integrate-picture-in-picture-pip-in-react-js</link><guid isPermaLink="false">660fe7112a88c204ca9cfef7</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 25 Sep 2024 12:38:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/PIP-in-React.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/PIP-in-React.png" alt="How to Integrate Picture-in-Picture (PiP) Mode in React JS?"/><p>Picture-in-Picture (PiP) is a user interface feature that allows a secondary window to appear alongside the main window or screen. This secondary window often displays video content, allowing users to watch videos or make video calls while conducting other tasks. The PiP window remains visible and resizable, giving users uninterrupted access to the video content when switching between programs or windows.</p><h3 id="use-case-and-effectiveness">Use Case and Effectiveness</h3><p>The use cases for PiP mode are diverse, ranging from business meetings to personal video conversations. 
Professionals can use PiP mode to participate in video conferences while additionally examining documents, taking notes, or collaborating on projects. This feature boosts productivity by reducing the need to move between apps or windows, allowing for smooth multitasking during meetings.</p><p>In personal contexts, PiP mode allows for ongoing conversation with friends and family during video chats. Users may have video discussions while browsing the internet, reading emails, or using other apps on their smartphones. This flexibility improves the user experience by making it easier and more efficient to manage many activities at the same time. In this tutorial, we'll look at how to include Picture-in-Picture mode into a React application using VideoSDK.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the Picture-in-Picture (PiP) Mode functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. 
This token is crucial in authorizing your application to use VideoSDK features.</p><p>Consider referring to the <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a> for a more visual understanding of the account creation and token generation process.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (Not having one?, follow <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>Basic understanding of React.</li><li><a href="https://www.npmjs.com/package/@videosdk.live/react-sdk" rel="noopener noreferrer"><strong>React VideoSDK</strong></a></li><li>Make sure Node and NPM are installed on your device.</li><li>Basic understanding of Hooks (useState, useRef, useEffect)</li><li>React Context API (optional)</li></ul><p>Follow the steps to create the environment necessary to add video calls to your app. You can also find the code sample for <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">Quickstart here</a>.<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#create-new-react-app" rel="noopener noreferrer">​</a></p><p><strong>Create a new React App using the below command.</strong></p><pre><code class="language-js">$ npx create-react-app videosdk-rtc-react-app</code></pre><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk%E2%80%8B">⬇️ Install VideoSDK<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#install-videosdk">​</a></h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the Picture-in-Picture (PiP) Mode feature. 
You can install VideoSDK with either NPM or Yarn, depending on your project setup.</p><ul><li>For NPM</li></ul><pre><code class="language-js">$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">$ yarn add "@videosdk.live/react-sdk"

//For the Participants Video
$ yarn add "react-player"</code></pre><p>You are going to use functional components to leverage React's reusable component architecture. There will be components for users, videos and controls (mic, camera, leave) over the video.</p><h3 id="app-architecture">App Architecture</h3>
<p>The App will contain a <code>MeetingView</code> component which includes a <code>ParticipantView</code> component which will render the participant's name, video, audio, etc. It will also have a <code>Controls</code> component that will allow the user to perform operations like leave and toggle media.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e.png" class="kg-image" alt="How to Integrate Picture-in-Picture (PiP) Mode in React JS?" loading="lazy" width="1356" height="780"/></figure><p>You will be working on the following files:</p><ul><li>API.js: Responsible for handling API calls such as generating unique meetingId and token</li><li>App.js: Responsible for rendering <code>MeetingView</code> and joining the meeting.</li></ul><h2 id="essential-steps-to-implement-video-calling-functionality">Essential Steps to Implement Video Calling Functionality</h2><p>To add video capability to your React application, you must first complete a sequence of prerequisites.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with API.js<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><pre><code class="language-js">//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashboard&gt;";
// API call to create a meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>, <strong>webcamStream</strong>, <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><pre><code class="language-js">import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";

function JoinScreen({ getMeetingAndToken }) {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  //Getting the meeting id by calling the api we just wrote
  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  //This will set Meeting Id to null when meeting is left or ended
  const onMeetingLeave = () =&gt; {
    setMeetingId(null);
  };

  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
      }}
      token={authToken}
    &gt;
      &lt;MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} /&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} /&gt;
  );
}

export default App;</code></pre><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-3-implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><pre><code class="language-js">function JoinScreen({ getMeetingAndToken }) {
  const [meetingId, setMeetingId] = useState(null);
  const onClick = async () =&gt; {
    await getMeetingAndToken(meetingId);
  };
  return (
    &lt;div&gt;
      &lt;input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) =&gt; {
          setMeetingId(e.target.value);
        }}
      /&gt;
      &lt;button onClick={onClick}&gt;Join&lt;/button&gt;
      {" or "}
      &lt;button onClick={onClick}&gt;Create Meeting&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-join-screen-06fb57cf0d9e3bcc1e7da9fc032298c3.jpeg" class="kg-image" alt="How to Integrate Picture-in-Picture (PiP) Mode in React JS?" loading="lazy" width="720" height="130"/></figure><h3 id="step-4-implement-meetingview-and-controls%E2%80%8B">Step 4: Implement MeetingView and Controls<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-4-implement-meetingview-and-controls">​</a></h3><p>The next step is to create <code>MeetingView</code> and <code>Controls</code> components to manage features such as join, leave, mute, and unmute.</p><pre><code class="language-js">function MeetingView(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  //We will also get the participants list to display all participants
  const { join, participants } = useMeeting({
    //callback for when meeting is joined successfully
    onMeetingJoined: () =&gt; {
      setJoined("JOINED");
    },
    //callback for when meeting is left
    onMeetingLeft: () =&gt; {
      props.onMeetingLeave();
    },
  });
  const joinMeeting = () =&gt; {
    setJoined("JOINING");
    join();
  };

  return (
    &lt;div className="container"&gt;
      &lt;h3&gt;Meeting Id: {props.meetingId}&lt;/h3&gt;
      {joined &amp;&amp; joined == "JOINED" ? (
        &lt;div&gt;
          &lt;Controls /&gt;
          {/* For rendering all the participants in the meeting */}
          {[...participants.keys()].map((participantId) =&gt; (
            &lt;ParticipantView
              participantId={participantId}
              key={participantId}
            /&gt;
          ))}
        &lt;/div&gt;
      ) : joined &amp;&amp; joined == "JOINING" ? (
        &lt;p&gt;Joining the meeting...&lt;/p&gt;
      ) : (
        &lt;button onClick={joinMeeting}&gt;Join&lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><figure class="kg-card kg-code-card"><pre><code class="language-js">function Controls() {
  const { leave, toggleMic, toggleWebcam } = useMeeting();
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; leave()}&gt;Leave&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleMic()}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleWebcam()}&gt;toggleWebcam&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">Control Component</span></p></figcaption></figure><h4 id="output-of-controls-component">Output of Controls Component</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-container-controls-2cebdfdfd1371b010b773cb6fb9c7ae8.jpeg" class="kg-image" alt="How to Integrate Picture-in-Picture (PiP) Mode in React JS?" loading="lazy" width="720" height="177"/></figure><h3 id="step-5-implement-participant-view%E2%80%8B">Step 5: Implement Participant View<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-5-implement-participant-view">​</a></h3><p>Before implementing the participant view, you need to understand a couple of concepts.</p><h4 id="51-forwarding-ref-for-mic-and-camera">5.1 Forwarding Ref for mic and camera</h4>
<p>The <code>useRef</code> hook is responsible for referencing the audio and video components. It will be used to play and stop the audio and video of the participant.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const webcamRef = useRef(null);
const micRef = useRef(null);</code></pre><figcaption><p><span style="white-space: pre-wrap;">Forwarding Ref for mic and camera</span></p></figcaption></figure><h4 id="52-useparticipant-hook">5.2 useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take participantId as an argument.</p><pre><code class="language-js">const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
  props.participantId
);</code></pre><h4 id="53-mediastream-api">5.3 MediaStream API</h4>
<p>The MediaStream API is beneficial for adding a MediaTrack to the audio/video tag, enabling the playback of audio or video.</p><pre><code class="language-js">const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);

webcamRef.current.srcObject = mediaStream;
webcamRef.current
  .play()
  .catch((error) =&gt; console.error("webcamRef.current.play() failed", error));</code></pre><h4 id="54-implement-participantview%E2%80%8B">5.4 Implement <code>ParticipantView</code>​</h4>
<p>Now you can use both of the hooks and the API to create <code>ParticipantView</code></p><pre><code class="language-js">function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() =&gt; {
    if (micRef.current) {
      if (micOn &amp;&amp; micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =&gt;
            console.error("micRef.current.play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    &lt;div&gt;
      &lt;p&gt;
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      &lt;/p&gt;
      &lt;audio ref={micRef} autoPlay playsInline muted={isLocal} /&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          //
          playsinline // extremely crucial prop
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          //
          url={videoStream}
          //
          height={"300px"}
          width={"300px"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><blockquote>You can check out the complete <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">quick start example here</a>.</blockquote><h2 id="integrate-picture-in-picture-feature">Integrate Picture-in-Picture Feature</h2><p>Picture-in-picture (PiP) is a commonly used feature in video conferencing software, enabling users to simultaneously engage in a video conference and perform other tasks on their devices. With PiP, you can keep the video conference window open, resize it to a smaller size, and continue working on other tasks while still seeing and hearing the other participants in the conference. This feature proves beneficial when you need to take notes, send an email, or look up information during the conference.</p><p>This explains the steps to implement the Picture-in-Picture feature using VideoSDK.</p><h3 id="pip-video%E2%80%8B">PiP Video<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/render-media/picture-in-picture#pip-video">​</a></h3><p>All modern-day browsers support popping a video stream out from the <code>HTMLVideoElement</code>. You can achieve this either directly from the controls shown on the video element or by using the Browser API method <a href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLVideoElement/requestPictureInPicture" rel="noopener noreferrer"><code>requestPictureInPicture()</code></a> on the video element.</p><blockquote>
<p>Chrome, Edge, and Safari support this browser Web API; Firefox, however, has no programmatic way of triggering PiP.</p>
</blockquote>
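<p>As a rough sketch of the Web API described above, the toggle logic can be written as a small function. The <code>canUsePip</code> helper and the explicit <code>doc</code>/<code>video</code> parameters are illustrative choices (made so the logic can be exercised outside a browser), not part of VideoSDK; in a page you would call <code>togglePip(document, videoElement)</code>.</p>

```javascript
// Hedged sketch: toggle browser Picture-in-Picture for one <video> element.
// The document and the video element are passed in so the logic stays testable.
function canUsePip(doc) {
  // True only when the document exposes PiP and it is not disabled by policy.
  return Boolean(doc && doc.pictureInPictureEnabled);
}

async function togglePip(doc, video) {
  if (doc.pictureInPictureElement) {
    // A PiP window is already open, so close it.
    await doc.exitPictureInPicture();
  } else if (canUsePip(doc) && video) {
    // Pop the element out into a floating, always-on-top window.
    await video.requestPictureInPicture();
  }
}
```

<p>In Firefox, which lacks this API as noted above, <code>canUsePip</code> returns <code>false</code> and the function degrades to a no-op.</p>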
<h3 id="customize-video-pip-with-multiple-video-streams%E2%80%8B">Customize Video PiP with multiple video streams<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/render-media/picture-in-picture#customize-video-pip-with-multiple-video-streams">​</a></h3><p><strong>Step 1: </strong>Create a button that toggles the Picture-in-Picture (PiP) mode during the meeting. This button should invoke the <code>togglePipMode()</code> method when clicked.</p><pre><code class="language-js">function Controls() {
  const togglePipMode = async () =&gt; {};
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; togglePipMode()}&gt;Start PiP&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><p><strong>Step 2: </strong>Check whether the browser supports PiP mode; if not, display a message to the user.</p><pre><code class="language-js">function Controls() {
  const togglePipMode = async () =&gt; {
    //Check if browser supports PiP mode else show a message to user
    if ("pictureInPictureEnabled" in document) {
      //Browser supports PiP; the setup continues in the next step
    } else {
      alert("PiP is not supported by your browser");
    }
  };
  return ...;
}</code></pre><p><strong>Step 3: </strong>Now, if the browser supports PiP mode, create a <code>Canvas</code> element and a <code>Video</code> element. Generate a Stream from the Canvas and play it in the video element. Request PiP mode for the video element once the metadata has been loaded.</p><pre><code class="language-js">function Controls() {

  const pipWindowRef = useRef();

  const togglePipMode = async () =&gt; {
    //Check if browser supports PiP mode else show a message to user
    if ("pictureInPictureEnabled" in document) {
      //Create a Canvas which will render the PiP Stream
      const source = document.createElement("canvas");
      const ctx = source.getContext("2d");

      //Create a Video tag which will popout for PiP
      const pipVideo = document.createElement("video");
      pipWindowRef.current = pipVideo;
      pipVideo.autoplay = true;

      //Create a stream from canvas which will play
      const stream = source.captureStream();
      pipVideo.srcObject = stream;

      //Do initial Canvas Paint
      drawCanvas()

      //When Video is ready, start PiP mode
      pipVideo.onloadedmetadata = () =&gt; {
        pipVideo.requestPictureInPicture();
      };
      await pipVideo.play();
    } else {
      alert("PiP is not supported by your browser");
    }
  };
  return ...;
}</code></pre><p><strong>Step 4:  </strong>The next step is to paint the canvas with the Participant Grid, which will be visible in the PiP window.</p><pre><code class="language-js">function Controls() {

  const getRowCount = (length) =&gt; {
    return length &gt; 2 ? 2 : length &gt; 0 ? 1 : 0;
  };
  const getColCount = (length) =&gt; {
    return length &lt; 2 ? 1 : length &lt; 5 ? 2 : 3;
  };

  const togglePipMode = async () =&gt; {
    //Check if browser supports PiP mode else show a message to user
    if ("pictureInPictureEnabled" in document) {

      //Stream playing here
      //...

      //When the PiP mode starts, draw the canvas with PiP view
      pipVideo.addEventListener("enterpictureinpicture", (event) =&gt; {
        drawCanvas();
      });

      //When PiP mode exits, dispose the tracks that were created earlier
      pipVideo.addEventListener("leavepictureinpicture", (event) =&gt; {
        pipWindowRef.current = null;
        pipVideo.srcObject.getTracks().forEach((track) =&gt; track.stop());
      });

      //This will draw all the video elements in to the Canvas
      function drawCanvas() {
        //Getting all the video elements in the document
        const videos = document.querySelectorAll("video");
        try {
          //Perform initial black paint on the canvas
          ctx.fillStyle = "black";
          ctx.fillRect(0, 0, source.width, source.height);

          //Drawing the participant videos on the canvas in the grid format
          const rows = getRowCount(videos.length);
          const columns = getColCount(videos.length);
          for (let i = 0; i &lt; rows; i++) {
            for (let j = 0; j &lt; columns; j++) {
              if (j + i * columns &lt; videos.length) {
                ctx.drawImage(
                  videos[j + i * columns],
                  j &lt; 1 ? 0 : source.width / (columns / j),
                  i &lt; 1 ? 0 : source.height / (rows / i),
                  source.width / columns,
                  source.height / rows
                );
              }
            }
          }
        } catch (error) {
          //Ignore transient draw errors, e.g. while video elements are still loading
        }

        //If pip mode is on, keep drawing the canvas when ever new frame is requested
        if (document.pictureInPictureElement === pipVideo) {
          requestAnimationFrame(drawCanvas);
        }
      }

    } else {
      alert("PiP is not supported by your browser");
    }
  };
  return ...;
}</code></pre><blockquote>
<p>Only the participants who have their video turned on will be shown in PiP mode.</p>
</blockquote>
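<p>The index arithmetic inside the nested loop above can be hard to read at a glance. The following is the same layout math refactored into a pure helper for illustration (the name <code>tileRect</code> is hypothetical, not part of VideoSDK): given a tile index, the total number of videos, and the canvas size, it returns the rectangle that would be passed to <code>drawImage</code>.</p>

```javascript
// Same row/column rules as the Controls component above.
function getRowCount(length) {
  return length > 2 ? 2 : length > 0 ? 1 : 0;
}
function getColCount(length) {
  return length < 2 ? 1 : length < 5 ? 2 : 3;
}

// Hypothetical helper: the rectangle occupied by tile `index`
// out of `total` videos on a canvas of the given width and height.
function tileRect(index, total, width, height) {
  const rows = getRowCount(total);
  const cols = getColCount(total);
  const row = Math.floor(index / cols); // which row this tile sits in
  const col = index % cols;             // which column this tile sits in
  return {
    x: (col * width) / cols,
    y: (row * height) / rows,
    w: width / cols,
    h: height / rows,
  };
}
```

<p>For example, with 4 participants on a 640×360 canvas the grid is 2×2, so <code>tileRect(3, 4, 640, 360)</code> returns <code>{ x: 320, y: 180, w: 320, h: 180 }</code>.</p>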
<p><strong>Step 5: </strong>Exit the PiP mode if it is already active.</p><pre><code class="language-js">function Controls() {
  const togglePipMode = async () =&gt; {

    //Check if PiP Window is active or not
    //If active we will turn it off
    if (pipWindowRef.current) {
      await document.exitPictureInPicture();
      pipWindowRef.current = null;
      return;
    }

    //Check if browser supports PiP mode else show a message to user
    if ("pictureInPictureEnabled" in document) {
      ...
    } else {
      alert("PiP is not supported by your browser");
    }
  };
  return ...;
}</code></pre><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-js-video-calling-app">✨ Want to Add More Features to React JS Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React video-calling app, <strong>check out these additional resources:</strong></p><ul><li>HLS Player: <a href="https://www.videosdk.live/blog/implement-hls-player-in-react-js">Link</a></li><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-react-js">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-react-js">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-js">Link</a></li><li>Screen Share Feature: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-js">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-js">Link</a></li><li>Collaborative Whiteboard: <a href="https://www.videosdk.live/blog/integrate-whiteboard-in-react-js">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>This feature allows users to multitask while attending a conference, which boosts productivity and convenience. Embracing PiP mode allows you to provide immersive and feature-rich video conferencing solutions that meet the changing expectations of consumers in today's digital ecosystem.</p><p>If you are new here and want to build an interactive React app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>. 
This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[Live Streaming vs Video Conferencing - What's the difference?]]></title><description><![CDATA[A detailed comparison of live streaming and video conferencing, covering all the desired information one needs to know about these technologies]]></description><link>https://www.videosdk.live/blog/live-streaming-vs-video-conferencing</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb64</guid><category><![CDATA[live-streaming]]></category><category><![CDATA[video-conferencing]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Wed, 25 Sep 2024 10:17:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/07/video-conf-vs-live-streaming.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="what-is-live-streaming">What is Live Streaming?</h2><img src="http://assets.videosdk.live/static-assets/ghost/2021/07/video-conf-vs-live-streaming.jpg" alt="Live Streaming vs Video Conferencing - What's the difference?"/><p>Live streaming refers to streaming digital video content over the internet: anyone can watch, create, and share video content online. All it takes is an internet connection and a smartphone or PC.</p><p>Live streaming is an essential tool for both large corporations and small businesses, aiding in branding, marketing, and growth. It has become a modern technology that enhances an organization's digital presence. Live streaming has gained popularity not only in gaming, content creation, and education but also in media, entertainment, business, healthcare, and more.</p><p>Live streaming broadcasts media online across several social media platforms. 
A streamer can broadcast to multiple platforms at once, increasing reach and visibility. A live stream is organized by a host who invites viewers to the streaming session; it typically offers comments and chat during the session, and a huge number of viewers can participate. </p><p>Live streaming is delivered to viewers with a delay of roughly 5 to 10 seconds. The stream is encoded at multiple resolutions, enabling Adaptive Bitrate Streaming (ABR): if a viewer's connection is unstable, playback does not stop, because the player adjusts the viewing quality instead. Live streams can also be recorded for later viewing via video on demand (VoD), so a viewer who gets disconnected can still play back the recorded stream.</p><h2 id="what-is-video-conferencing">What is Video Conferencing?</h2><p>Video conferencing is a technology for meeting in closed groups. It is today a prominent mode of communication in corporate settings, connecting people from around the world on a single screen. </p><p>Video conferencing connects people over low-latency video and audio transmission over IP. It provides voice and chat features so participants can discuss issues and work out solutions in real time. A video conference generally has a limited number of participants, invited by the host, and video conferencing systems are typically secured with a passcode set by the host.</p><p>A video conferencing system delivers messages to participants without noticeable lag. Because it relies on low-latency streaming, video conferencing can glitch for participants with poor connectivity. 
When a participant loses power or connectivity, the conference immediately stops for them, and they cannot replay the discussion they missed. A video conference can be viewed on demand only if it was recorded. </p><h3 id="the-main-differences-between-live-streaming-and-video-conferencing">The main differences between Live Streaming and Video Conferencing</h3><p>Live streaming and video conferencing serve distinct purposes in the digital realm. Live streaming typically involves broadcasting content to a wide audience, fostering one-way communication, and often features events or performances. In contrast, video conferencing emphasizes interactive, two-way communication among participants, facilitating real-time collaboration and meetings. While live streaming appeals to larger audiences for entertainment or educational purposes, video conferencing excels in fostering direct engagement, making it an ideal tool for remote work, virtual meetings, and personal interactions. Each platform caters to different needs, offering unique advantages in the evolving landscape of online communication.</p><h3 id="use-cases">Use cases</h3><p/><p><strong>LIVE STREAMING</strong></p><ol><li><a href="https://videosdk.live/solutions/virtual-events">Virtual summits and events</a></li><li>Public announcements</li><li><a href="https://videosdk.live/solutions/virtual-events">Webinars</a></li><li>Gaming</li><li><a href="https://www.videosdk.live/solutions/edtech">Education</a></li><li><a href="https://videosdk.live/solutions/telehealth">Health care</a></li></ol><p><strong>VIDEO CONFERENCING</strong></p><ol><li>Business communication</li><li>Interviews</li><li>Education</li><li>Social media conversations</li><li>Retail</li><li>Telehealth<br/></li></ol><h3 id="what-does-the-blog-explain">What does the blog explain?</h3><p>This blog explains how live streaming differs from <a href="https://videosdk.live/blog/video-sdk-embed">video conferencing</a>. 
Both live streaming and video conferencing are real-time technologies, and both have become some of the most useful communication tools of recent years. In this blog, we will look at the pros and cons of each and at how they are similar to and different from each other.</p><h3 id="live-streamingpros-and-cons">Live streaming - Pros and Cons</h3><p/><p><strong>PROS</strong></p><ol><li>A huge number of participants can join a live stream</li><li>A live stream keeps working over an unstable internet connection</li><li>It allows video playback</li><li>A live stream can be recorded in HD quality</li><li>It supports on-demand services</li><li>A live stream supports streaming over multiple platforms at once</li><li>Supports screen sharing</li></ol><p><strong>CONS</strong></p><ol><li>Two-way communication is difficult</li><li>Viewers are not visible to the host</li><li>A chat box is available only for comments<br/></li></ol><h3 id="video-conferencingpros-and-cons">Video conferencing - Pros and cons</h3><p/><p><strong>PROS</strong></p><ol><li>Supports two-way communication</li><li>Supports screen sharing</li><li>Allotted space for participants’ visibility</li><li>Saves time and money</li><li>Easy scheduling of meetings</li><li>Real-time communication without lags</li></ol><p><strong>CONS</strong></p><ol><li>A limited number of participants can join</li><li>Poor video quality in case of an unstable internet connection</li><li>Limited streaming platform support<br/></li></ol><h3 id="similarities-between-live-streaming-and-video-conferencing">Similarities between live streaming and video conferencing</h3><ol><li><strong>Live-action: </strong>Both technologies work on live platforms. 
Content is streamed live to viewers, who can chat and converse along the way.</li><li><strong>Involvement of participants: </strong>These technologies are designed to draw maximum engagement. Hosts can allow participants to join the stream with their audio and video.</li><li><strong>Screenshare: </strong>Both live streaming and video conferencing allow screen sharing, displaying pre-recorded content and images on screen.</li><li><strong>Chat features: </strong>Both offer chat options during an ongoing stream, so participants can comment and converse with the host via the chat box.</li></ol><h3 id="dissimilarities-between-live-streaming-and-video-conferencing">Dissimilarities between live streaming and video conferencing</h3><ol><li><strong>Viewer screen space-  </strong>Live streaming does not allow viewers to share screen space with the live streamer; they can only communicate via chat. In video conferencing, each participant has an allotted box where they and the other participants in the meeting can be seen.</li><li><strong>Accessibility- </strong>Live streaming is accessible to a huge number of viewers at once, across different platforms, whereas video conferencing admits only a limited number of participants, making it a closed-group meeting.</li><li><strong>Communication ease- </strong>Video conferencing makes communication easier because the audience is small and everyone can contribute to reach a conclusion. A live stream, by contrast, is one-way: people can only watch the content and comment on it.</li><li><strong>Scalability- </strong> A live stream is encoded at several resolutions so that it is available at a manageable quality for each device type and connection. 
In video conferencing, by contrast, video quality degrades on poor connections, because low-latency delivery leaves little room for buffering.<br/></li></ol><h3 id="what-is-a-better-option">What is a better option?</h3><p>Both video conferencing and live streaming have their benefits and their own use cases; which one fits depends on your need at the time. Either way, all you need is a mobile device and an internet connection. </p><p>For instance, suppose a company has launched a product and wants to announce it publicly while boosting its branding and market value. Live streaming lets the announcement reach millions of viewers. If the same company opted for video conferencing instead, it might reach only a few thousand people, at higher cost and with lower reach. This shows that live streaming is the better option for announcements and large-scale engagement.</p><p>Now suppose a company has to make some decisions with its team, with no external audience involved at all. Here video conferencing is the right choice: it works in real time, delivering messages instantly, and it keeps a window for each participant, encouraging everyone's active participation during the meeting. 
Video conferencing is therefore the better option for small-group communication, where the general public is not the intended audience.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Video and Audio Calling Features into iOS Apps?]]></title><description><![CDATA[This comprehensive guide covers the step-by-step process, from setting up the development environment to implementing core functionalities, empowering you to build scalable, feature-rich communication solutions.]]></description><link>https://www.videosdk.live/blog/integrate-video-audio-calling-in-ios</link><guid isPermaLink="false">6697513120fab018df10f82a</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[iOS]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 25 Sep 2024 09:56:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/How-to-Integrate-Video-and-Audio-Calling-Features-into-iOS-Apps_-1.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/How-to-Integrate-Video-and-Audio-Calling-Features-into-iOS-Apps_-1.jpg" alt="How to Integrate Video and Audio Calling Features into iOS Apps?"/><p>Do you want to make your iOS app more engaging by letting users talk in real time inside the app, without sending them elsewhere? That means integrating audio and video calling features that are scalable, reliable, and secure. For that you need solid infrastructure such as <a href="https://www.videosdk.live/">VideoSDK</a>, which meets all of these requirements. VideoSDK allows you to add video and audio calling to web, Android, and iOS applications. 
It offers a customizable SDK and REST API for creating expandable video conferencing apps.</p><p>In this quickstart guide, we'll take you step-by-step through the process of integrating video and audio calling capabilities into your iOS app, empowering you to create innovative, engaging, and truly connected experiences for your users.</p><h2 id="getting-started-with-code">Getting Started with Code!</h2><h3 id="prerequisites%E2%80%8B">Prerequisites<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#prerequisites">​</a></h3><ul><li>iOS 11.0+</li><li>Xcode 12.0+</li><li>Swift 5.0+</li></ul><h3 id="app-architecture%E2%80%8B">App Architecture<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#app-architecture">​</a></h3><p>This App will contain two screens:</p><ol><li><code>Join Screen</code> : This screen allows the user to either create a meeting or join a predefined meeting.</li><li><code>Meeting Screen</code> : This screen basically contains local and remote participant views and some meeting controls such as Enable / Disable Mic &amp; Camera and Leave the meeting.</li></ol><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_architecture.png" class="kg-image" alt="How to Integrate Video and Audio Calling Features into iOS Apps?" loading="lazy"/></figure><h3 id="create-new-project">Create New Project </h3><p><strong>Step 1:</strong> Create a new application by selecting <code><strong>Create a new Xcode project</strong></code></p><p><strong>Step 2:</strong> Choose <strong>App</strong> then click Next</p><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_app_selection.png" class="kg-image" alt="How to Integrate Video and Audio Calling Features into iOS Apps?" 
loading="lazy"/></figure><p><strong>Step 3:</strong> Add the <strong>Product Name</strong> and Save the project.</p><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_add_product_name.png" class="kg-image" alt="How to Integrate Video and Audio Calling Features into iOS Apps?" loading="lazy"/></figure><h2 id="videosdk-installation%E2%80%8B">VideoSDK Installation<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#videosdk-installation">​</a></h2><p>To install VideoSDK, you must initialize the pod on the project by running the following command.</p><pre><code class="language-swift">pod init</code></pre><p>It will create the Podfile in your project folder, open that file, and add the dependency for the VideoSDK like below:</p><pre><code class="language-swift">pod 'VideoSDKRTC', :git =&gt; 'https://github.com/videosdk-live/videosdk-rtc-ios-sdk.git'</code></pre><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_podfile.png" class="kg-image" alt="How to Integrate Video and Audio Calling Features into iOS Apps?" loading="lazy"/></figure><p>then run the below code to install the pod:</p><pre><code class="language-swift">pod install</code></pre><p>then declare the permissions in <code>Info.plist</code> :</p><pre><code class="language-swift">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Allow camera access to start video.&lt;/string&gt;

&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Allow microphone access to start audio.&lt;/string&gt;</code></pre><h3 id="project-structure%E2%80%8B">Project Structure<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#project-structure">​</a></h3><pre><code class="language-swift">iOSQuickStartDemo
   ├── Models
   │    ├── RoomStruct.swift
   │    └── MeetingData.swift
   ├── ViewControllers
   │    ├── StartMeetingViewController.swift
   │    └── MeetingViewController.swift
   ├── APIService
   │    └── APIService.swift
   ├── AppDelegate.swift // Default
   ├── SceneDelegate.swift // Default
   ├── Main.storyboard // Default
   ├── LaunchScreen.storyboard // Default
   └── Info.plist // Default
 Pods
	 └── Podfile</code></pre><h3 id="mainstoryboard-design%E2%80%8B"><code>Main.storyboard</code> Design<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#mainstoryboard-design">​</a></h3><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_storyboard.png" class="kg-image" alt="How to Integrate Video and Audio Calling Features into iOS Apps?" loading="lazy"/></figure><h3 id="create-models%E2%80%8B">Create models<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#create-models">​</a></h3><p>Create Swift files for the <code>MeetingData</code> and <code>RoomStruct</code> models to hold data in an object pattern.</p><figure class="kg-card kg-code-card"><pre><code class="language-Swift">import Foundation
struct MeetingData {
    let token: String
    let name: String
    let meetingId: String
    let micEnabled: Bool
    let cameraEnabled: Bool
}</code></pre><figcaption>MeetingData.swift</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
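// The RoomsStruct below models the response of POST /v2/rooms, which
// looks roughly like this (illustrative sample, not an exact payload):
//   { "roomId": "abcd-efgh-ijkl", "createdAt": "...", "updatedAt": "...",
//     "id": "...", "links": { "get_room": "...", "get_session": "..." } }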
struct RoomsStruct: Codable {
    let createdAt, updatedAt, roomID: String?
    let links: Links?
    let id: String?
    enum CodingKeys: String, CodingKey {
        case createdAt, updatedAt
        case roomID = "roomId"
        case links, id
    }
}
// MARK: - Links
struct Links: Codable {
    let getRoom, getSession: String?
    enum CodingKeys: String, CodingKey {
        case getRoom = "get_room"
        case getSession = "get_session"
    }
}</code></pre><figcaption>RoomStruct.swift</figcaption></figure><h2 id="step-by-step-integration">Step-by-Step Integration</h2><h3 id="step-1-get-started-with-apiclient%E2%80%8B">Step 1: Get started with APIClient<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apiclient">​</a></h3><p>Before jumping into anything else, we have to write an API call that generates a unique meetingId. You will require an <strong><a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/server-setup">Auth token</a></strong>; you can generate one using either the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-server-api-example</a> or the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">Video SDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-Swift">import Foundation

let TOKEN_STRING: String = "&lt;AUTH_TOKEN&gt;"

class APIService {

    class func createMeeting(token: String, completion: @escaping (Result&lt;String, Error&gt;) -&gt; Void) {
        let url = URL(string: "https://api.videosdk.live/v2/rooms")!

        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.addValue(TOKEN_STRING, forHTTPHeaderField: "authorization")

        URLSession.shared.dataTask(with: request, completionHandler: { (data: Data?, response: URLResponse?, error: Error?) in
            DispatchQueue.main.async {
                // surface transport errors instead of dropping them silently
                if let error = error {
                    completion(.failure(error))
                    return
                }
                if let data = data {
                    do{
                        let dataArray = try JSONDecoder().decode(RoomsStruct.self,from: data)
                        completion(.success(dataArray.roomID ?? ""))
                    } catch {
                        print("Error while creating a meeting: \(error)")
                        completion(.failure(error))
                    }
                }
            }
        }).resume()
    }
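
    // Illustrative addition (hypothetical helper, not part of VideoSDK's
    // API): a quick client-side sanity check that user-typed input looks
    // like the "abcd-efgh-ijkl" shape VideoSDK room ids typically use,
    // before firing a join request.
    class func looksLikeMeetingId(_ id: String) -&gt; Bool {
        let parts = id.split(separator: "-")
        return parts.count == 3 &amp;&amp; parts.allSatisfy { part in
            part.count == 4 &amp;&amp; part.allSatisfy { $0.isLetter || $0.isNumber }
        }
    }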
}</code></pre><figcaption>APIService.swift</figcaption></figure><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>The join screen acts as the entry point to either create a new meeting or join an existing one.</p><figure class="kg-card kg-code-card"><pre><code class="language-Swift">import Foundation
import UIKit

class StartMeetingViewController: UIViewController, UITextFieldDelegate {

		private var serverToken = ""

		/// MARK: outlet for create meeting button
		@IBOutlet weak var btnCreateMeeting: UIButton!

		/// MARK: outlet for join meeting button
		@IBOutlet weak var btnJoinMeeting: UIButton!

		/// MARK: outlet for meetingId textfield
		@IBOutlet weak var txtMeetingId: UITextField!

		/// MARK: Initialize the private variable with TOKEN_STRING &amp;
		/// setting the meeting id in the textfield
		override func viewDidLoad() {
		    super.viewDidLoad()
		    txtMeetingId.delegate = self
		    serverToken = TOKEN_STRING
		    txtMeetingId.text = "PROVIDE-STATIC-MEETING-ID"
		}

		/// MARK: method for joining the meeting through the segue named "StartMeeting",
		/// after validating that serverToken is not empty
		func joinMeeting() {
		    txtMeetingId.resignFirstResponder()

		    if !serverToken.isEmpty {
		        DispatchQueue.main.async {
		            self.dismiss(animated: true) {
		                self.performSegue(withIdentifier: "StartMeeting", sender: nil)
		            }
		        }
		    } else {
				print("Please provide auth token to start the meeting.")
		    }
		}

		/// MARK: outlet for create meeting button tap event
		@IBAction func btnCreateMeetingTapped(_ sender: Any) {
				print("show loader while meeting gets connected with server")
		    joinRoom()
		}

		/// MARK: outlet for join meeting button tap event
		@IBAction func btnJoinMeetingTapped(_ sender: Any) {
		    if((txtMeetingId.text ?? "").isEmpty){

						print("Please provide meeting id to start the meeting.")
		        txtMeetingId.resignFirstResponder()
		    } else {
		        joinMeeting()
		    }
		}

		// MARK: - method for creating room api call and getting meetingId for joining meeting

		func joinRoom() {

		   APIService.createMeeting(token: self.serverToken) { result in
            switch result {
            case .success(let meetingId):
                DispatchQueue.main.async {
                    self.txtMeetingId.text = meetingId
                    self.joinMeeting()
                }
            case .failure(let error):
                // surface room-creation failures instead of ignoring them
                print("Failed to create a meeting: \(error)")
            }
        }
		}

		/// MARK: preparing to animate to meetingViewController screen
		override func prepare(for segue: UIStoryboardSegue, sender: Any?) {

		    guard let navigation = segue.destination as? UINavigationController,
		          let meetingViewController = navigation.topViewController as? MeetingViewController else {
		          return
		      }

		    meetingViewController.meetingData = MeetingData(
		        token: serverToken,
		        name: txtMeetingId.text ?? "Guest",
		        meetingId: txtMeetingId.text ?? "",
		        micEnabled: true,
		        cameraEnabled: true
		    )
		}
}</code></pre><figcaption>StartMeetingViewController.swift</figcaption></figure><h4 id="output%E2%80%8B">Output<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#output">​</a></h4><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/ios_quickstart_join_screen.jpg" class="kg-image" alt="How to Integrate Video and Audio Calling Features into iOS Apps?" loading="lazy" width="462" height="1000"/></figure><h3 id="step-3-initialize-and-join-meeting%E2%80%8B">Step 3: Initialize and Join Meeting<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-and-join-meeting">​</a></h3><p>Using the provided <code>token</code> and <code>meetingId</code>, we will configure and initialize the meeting in <code>viewDidLoad()</code>.</p><p>Then, we'll add <strong>@IBOutlet</strong> for <code>localParticipantVideoView</code> and <code>remoteParticipantVideoView</code>, which can render local and remote participant media respectively.</p><figure class="kg-card kg-code-card"><pre><code class="language-Swift">
import UIKit
import VideoSDKRTC
import WebRTC
import AVFoundation

class MeetingViewController: UIViewController {

    // MARK: - Properties
    // outlet for local participant container view
    @IBOutlet weak var localParticipantViewContainer: UIView!

    // outlet for label for meeting Id
    @IBOutlet weak var lblMeetingId: UILabel!

    // outlet for local participant video view
    @IBOutlet weak var localParticipantVideoView: RTCMTLVideoView!

    // outlet for remote participant video view
    @IBOutlet weak var remoteParticipantVideoView: RTCMTLVideoView!

    // outlet for remote participant no media label
    @IBOutlet weak var lblRemoteParticipantNoMedia: UILabel!

    // outlet for remote participant container view
    @IBOutlet weak var remoteParticipantViewContainer: UIView!

    // outlet for local participant no media label
    @IBOutlet weak var lblLocalParticipantNoMedia: UILabel!

    /// Meeting data - required to start
    var meetingData: MeetingData!

    /// current meeting reference
    private var meeting: Meeting?

    // MARK: - video participants including self to show in UI
    private var participants: [Participant] = []

		// MARK: - Lifecycle Events

		override func viewDidLoad() {
        super.viewDidLoad()
        // configure the VideoSDK with token
        VideoSDK.config(token: meetingData.token)

        // init meeting
        initializeMeeting()

        // set meeting id in button text
        lblMeetingId.text = "Meeting Id: \(meetingData.meetingId)"
	  }

	  override func viewWillAppear(_ animated: Bool) {
	      super.viewWillAppear(animated)
	      navigationController?.navigationBar.isHidden = true
	  }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        navigationController?.navigationBar.isHidden = false
        NotificationCenter.default.removeObserver(self)
    }

		// MARK: - Meeting

		private func initializeMeeting() {

		    // Initialize the VideoSDK
		    meeting = VideoSDK.initMeeting(
		        meetingId: meetingData.meetingId,
		        participantName: meetingData.name,
		        micEnabled: meetingData.micEnabled,
		        webcamEnabled: meetingData.cameraEnabled
		    )

		    // Adding the listener to meeting
		    meeting?.addEventListener(self)

		    // joining the meeting
		    meeting?.join()
		}
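
		// Hypothetical safeguard (not from the original guide): leave the
		// meeting if this controller is deallocated without an explicit
		// leave tap, so the room does not keep a ghost participant.
		deinit {
		    meeting?.leave()
		}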
}</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>After initializing the meeting in the previous step, we will now add <strong>@IBOutlet</strong>s for <code>btnLeave</code>, <code>btnToggleVideo</code>, and <code>btnToggleMic</code>, which control the media in the meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-Swift">
class MeetingViewController: UIViewController {

...

    // outlet for leave button
    @IBOutlet weak var btnLeave: UIButton!

    // outlet for toggle video button
    @IBOutlet weak var btnToggleVideo: UIButton!

    // outlet for toggle audio button
    @IBOutlet weak var btnToggleMic: UIButton!

    // bool for mic
    var micEnabled = true
    // bool for video
    var videoEnabled = true


    // outlet for leave button click event
    @IBAction func btnLeaveTapped(_ sender: Any) {
            DispatchQueue.main.async {
                self.meeting?.leave()
                self.dismiss(animated: true)
            }
        }

    // outlet for toggle mic button click event
    @IBAction func btnToggleMicTapped(_ sender: Any) {
        if micEnabled {
            micEnabled = !micEnabled // false
            self.meeting?.muteMic()
        } else {
            micEnabled = !micEnabled // true
            self.meeting?.unmuteMic()
        }
    }

    // outlet for toggle video button click event
    @IBAction func btnToggleVideoTapped(_ sender: Any) {
        if videoEnabled {
            videoEnabled = !videoEnabled // false
            self.meeting?.disableWebcam()
        } else {
            videoEnabled = !videoEnabled // true
            self.meeting?.enableWebcam()
        }
    }
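
    // Illustrative refactor (hypothetical helper, not part of VideoSDK):
    // both button handlers above follow the same flip-then-act pattern,
    // which can be expressed once:
    private func flip(_ isOn: inout Bool, whenOn: () -&gt; Void, whenOff: () -&gt; Void) {
        isOn.toggle()
        if isOn { whenOn() } else { whenOff() }
    }
    // e.g. flip(&amp;micEnabled,
    //           whenOn: { self.meeting?.unmuteMic() },
    //           whenOff: { self.meeting?.muteMic() })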

...

}</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h4 id="output%E2%80%8B-1">Output<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#output-1">​</a></h4><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/ios_quickstart_loading.jpg" class="kg-image" alt="How to Integrate Video and Audio Calling Features into iOS Apps?" loading="lazy" width="462" height="1000"/></figure><h3 id="step-5-implementing-meetingeventlistener%E2%80%8B">Step 5: Implementing MeetingEventListener<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-5--implementing-meetingeventlistener">​</a></h3><p>In this step, we'll create an extension of <code>MeetingViewController</code> that conforms to MeetingEventListener, implementing methods such as <code>onMeetingJoined</code>, <code>onMeetingLeft</code>, <code>onParticipantJoined</code>, <code>onParticipantLeft</code>, <code>onParticipantChanged</code>, and <code>onSpeakerChanged</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-Swift">
extension MeetingViewController: MeetingEventListener {

		/// Meeting started
		func onMeetingJoined() {

		    // handle local participant on start
		    guard let localParticipant = self.meeting?.localParticipant else { return }
		    // add to list
		    participants.append(localParticipant)

		    // add event listener
		    localParticipant.addEventListener(self)

		    localParticipant.setQuality(.high)

		    if(localParticipant.isLocal){
		        self.localParticipantViewContainer.isHidden = false
		    } else {
		        self.remoteParticipantViewContainer.isHidden = false
		    }
		}

		/// Meeting ended
		func onMeetingLeft() {
		    // remove listeners
		    meeting?.localParticipant.removeEventListener(self)
		    meeting?.removeEventListener(self)
		}

		/// A new participant joined
		func onParticipantJoined(_ participant: Participant) {
		    participants.append(participant)

		    // add listener
		    participant.addEventListener(self)

		    participant.setQuality(.high)

		    if(participant.isLocal){
		        self.localParticipantViewContainer.isHidden = false
		    } else {
		        self.remoteParticipantViewContainer.isHidden = false
		    }
		}

		/// A participant left from the meeting
		/// - Parameter participant: participant object
		func onParticipantLeft(_ participant: Participant) {
		    participant.removeEventListener(self)
		    guard let index = self.participants.firstIndex(where: { $0.id == participant.id }) else {
		        return
		    }
		    // remove participant from list
		    participants.remove(at: index)
		    // hide from ui
		    UIView.animate(withDuration: 0.5){
		        if(!participant.isLocal){
		            self.remoteParticipantViewContainer.isHidden = true
		        }
		    }
		}

		/// Called when speaker is changed
		/// - Parameter participantId: participant id of the speaker, nil when no one is speaking.
		func onSpeakerChanged(participantId: String?) {

		    // show indication for active speaker
		    if let participant = participants.first(where: { $0.id == participantId }) {
		        self.showActiveSpeakerIndicator(participant.isLocal ? localParticipantViewContainer : remoteParticipantViewContainer, true)
		    }

		    // hide indication for others participants
		    let otherParticipants = participants.filter { $0.id != participantId }
		    for participant in otherParticipants {
		        if participants.count &gt; 1 &amp;&amp; participant.isLocal {
		            showActiveSpeakerIndicator(localParticipantViewContainer, false)
		        } else {
		            showActiveSpeakerIndicator(remoteParticipantViewContainer, false)
		        }
		    }
		}

		func showActiveSpeakerIndicator(_ view: UIView, _ show: Bool) {
		    view.layer.borderWidth = 4.0
		    view.layer.borderColor = show ? UIColor.blue.cgColor : UIColor.clear.cgColor
		}

}

...
</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-6-implementing-participanteventlistener%E2%80%8B">Step 6: Implementing ParticipantEventListener<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-6--implementing-participanteventlistener">​</a></h3><p>In this step, we'll add an extension of <code>MeetingViewController</code> that conforms to ParticipantEventListener, implementing the <code>onStreamEnabled</code> and <code>onStreamDisabled</code> methods, which fire when a participant's audio or video stream is enabled or disabled.</p><p>The <code>updateUI</code> helper updates the user interface (enable/disable camera &amp; mic views) based on the MediaStream state.</p><figure class="kg-card kg-code-card"><pre><code class="language-Swift">
extension MeetingViewController: ParticipantEventListener {

		/// Participant has enabled mic, video or screenshare
		/// - Parameters:
		///   - stream: enabled stream object
		///   - participant: participant object
		func onStreamEnabled(_ stream: MediaStream, forParticipant participant: Participant) {
		    updateUI(participant: participant, forStream: stream, enabled: true)
		}

		/// Participant has disabled mic, video or screenshare
		/// - Parameters:
		///   - stream: disabled stream object
		///   - participant: participant object
		func onStreamDisabled(_ stream: MediaStream, forParticipant participant: Participant) {
		    updateUI(participant: participant, forStream: stream, enabled: false)
		}
}

private extension MeetingViewController {

    func updateUI(participant: Participant, forStream stream: MediaStream, enabled: Bool) { // true
        switch stream.kind {
        case .state(value: .video):
            if let videotrack = stream.track as? RTCVideoTrack {
                if enabled {
                    DispatchQueue.main.async {
                        UIView.animate(withDuration: 0.5){
                            if(participant.isLocal){
                                self.localParticipantViewContainer.isHidden =   false
                                self.localParticipantVideoView.isHidden = false
                                self.localParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.localParticipantViewContainer.bringSubviewToFront(self.localParticipantVideoView)
                                videotrack.add(self.localParticipantVideoView)
                                self.lblLocalParticipantNoMedia.isHidden = true
                            } else {
                                self.remoteParticipantViewContainer.isHidden = false
                                self.remoteParticipantVideoView.isHidden = false
                                self.remoteParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.remoteParticipantViewContainer.bringSubviewToFront(self.remoteParticipantVideoView)
                                videotrack.add(self.remoteParticipantVideoView)
                                self.lblRemoteParticipantNoMedia.isHidden = true
                            }
                        }
                    }
                } else {
                    // Mirror the enabled path: hop to the main thread before touching UIKit.
                    DispatchQueue.main.async {
                        UIView.animate(withDuration: 0.5){
                            if(participant.isLocal){
                                self.localParticipantViewContainer.isHidden = false
                                self.localParticipantVideoView.isHidden = true
                                self.lblLocalParticipantNoMedia.isHidden = false
                                videotrack.remove(self.localParticipantVideoView)
                            } else {
                                self.remoteParticipantViewContainer.isHidden = false
                                self.remoteParticipantVideoView.isHidden = true
                                self.lblRemoteParticipantNoMedia.isHidden = false
                                videotrack.remove(self.remoteParticipantVideoView)
                            }
                        }
                    }
                }
            }

        case .state(value: .audio):
            if participant.isLocal {
                localParticipantViewContainer.layer.borderWidth = 4.0
                localParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            } else {
                remoteParticipantViewContainer.layer.borderWidth = 4.0
                remoteParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            }
        default:
            break
        }
    }
}

...
</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h4 id="output">Output</h4><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/ios_quickstart_meeting_screen.jpg" class="kg-image" alt="How to Integrate Video and Audio Calling Features into iOS Apps?" loading="lazy" width="462" height="1000"/></figure><p>If you want to explore further, check out this GitHub example:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - videosdk-live/videosdk-rtc-ios-sdk: IOS SDK is a client for real-time communication for ios devices. It inherits the same terminology as all other SDKs do.</div><div class="kg-bookmark-description">IOS SDK is a client for real-time communication for ios devices. It inherits the same terminology as all other SDKs do. - videosdk-live/videosdk-rtc-ios-sdk</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate Video and Audio Calling Features into iOS Apps?"/><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/c1020f22bd8fedb3bfd08e9f60eaeee76b1c8965ea7e545f04cb2fb7a9c8e837/videosdk-live/videosdk-rtc-ios-sdk" alt="How to Integrate Video and Audio Calling Features into iOS Apps?"/></div></a></figure><h3 id="known-issue%E2%80%8B">Known Issue<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#known-issue">​</a></h3><p>If your video renders outside its container view, as shown in the image below, add the following lines to the <code>viewDidLoad</code> method of <code>MeetingViewController.swift</code>.</p><figure class="kg-card kg-code-card"><pre><code 
class="language-Swift">override func viewDidLoad() {
    super.viewDidLoad()

    localParticipantVideoView.frame = CGRect(x: 10, y: 0, width: localParticipantViewContainer.frame.width, height: localParticipantViewContainer.frame.height)

    localParticipantVideoView.bounds = CGRect(x: 10, y: 0, width: localParticipantViewContainer.frame.width, height: localParticipantViewContainer.frame.height)

    localParticipantVideoView.clipsToBounds = true

    remoteParticipantVideoView.frame = CGRect(x: 10, y: 0, width: remoteParticipantViewContainer.frame.width, height: remoteParticipantViewContainer.frame.height)
    remoteParticipantVideoView.bounds = CGRect(x: 10, y: 0, width: remoteParticipantViewContainer.frame.width, height: remoteParticipantViewContainer.frame.height)
    remoteParticipantVideoView.clipsToBounds = true
}
</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h2 id="conclusion">Conclusion</h2><p>The VideoSDK iOS SDK provides a powerful and flexible platform for integrating high-quality video and audio calling capabilities into iOS applications. By leveraging the VideoSDK platform, developers can focus on building the unique features and user experience of their application, rather than tackling the complex technical challenges of building a robust video conferencing system from scratch. The comprehensive set of APIs and tools provided by VideoSDK makes it easy to add advanced features like screen sharing, recording, and breakout rooms, allowing developers to create truly compelling video-enabled experiences for their users.</p><h2 id="additional-source">Additional Sources</h2><ul><li><strong><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart for Flutter</a></strong></li><li><strong><a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart for JavaScript</a></strong></li><li><strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart for React</a></strong></li><li><strong><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart for Android</a></strong></li><li><strong><a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart for React Native</a></strong></li></ul>]]></content:encoded></item><item><title><![CDATA[How to Integrate Image Capture in React Native Video Calling App?]]></title><description><![CDATA[Implementing image capture in React Native video calling apps boosts functionality, allowing users to take photos within the app.]]></description><link>https://www.videosdk.live/blog/integrate-image-capture-in-react-native-for-android-app</link><guid 
isPermaLink="false">660f797c2a88c204ca9cfce0</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[React Native]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 25 Sep 2024 09:23:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Image-Capture-in-React-Native-1.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Image-Capture-in-React-Native-1.png" alt="How to Integrate Image Capture in React Native Video Calling App?"/><p>Integrating an image capture feature into a React Native video-calling app enhances its functionality by allowing users to capture images alongside recording videos. Image capture lets users freeze memorable moments during video playback or grab stills instantly while shooting. This feature enriches the user experience, offering versatility and convenience within the app.</p><p>By seamlessly merging video recording and image capture, users can efficiently create multimedia content without switching between multiple applications. It also opens up additional creative possibilities, enabling users to express themselves more dynamically through a combination of videos and images on the same platform.</p><p>This article explains how to integrate the image capture feature into a React Native video calling app. We will guide you through installing VideoSDK, integrating it into your project, and adding an image capture feature to improve the video experience in your application.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>The image capture functionality builds on capabilities that VideoSDK provides. 
Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li><li>Android Studio or Xcode installed</li></ul><h2 id="%E2%AC%87%EF%B8%8F-integrate-videosdk%E2%80%8B"><strong>⬇️ </strong>Integrate VideoSDK<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#videosdk-installation">​</a></h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the Image Capture feature. 
You can install VideoSDK with either NPM or Yarn, depending on your project setup.</p><ul><li>For <strong>NPM</strong></li></ul><pre><code class="language-js">npm install "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"</code></pre><ul><li>For <strong>Yarn</strong></li></ul><pre><code class="language-js">yarn add "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"</code></pre><h3 id="project-configuration">Project Configuration</h3><p>Before integrating the Image Capture functionality, ensure that your project is correctly prepared to handle the integration. This setup consists of a sequence of steps for configuring permissions, dependencies, and platform-specific settings so that VideoSDK functions seamlessly within your application.</p><h4 id="android-setup">Android Setup</h4>
<ul><li>Add the required permissions in the <code>AndroidManifest.xml</code> file.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
&gt;
    &lt;!-- Give all the required permissions to app --&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) --&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" /&gt;

    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)--&gt;
    &lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;

    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
    &lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;

    &lt;application&gt;
   &lt;meta-data
      android:name="live.videosdk.rnfgservice.notification_channel_name"
      android:value="Meeting Notification"
     /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_channel_description"
    android:value="Whenever meeting started notification will appear."
    /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_color"
    android:resource="@color/red"
    /&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"&gt;&lt;/service&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"&gt;&lt;/service&gt;
  &lt;/application&gt;
&lt;/manifest&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">AndroidManifest.xml</span></p></figcaption></figure><ul><li>Update your <code>colors.xml</code> file for internal dependencies: </li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;resources&gt;
  &lt;item name="red" type="color"&gt;
    #FC0303
  &lt;/item&gt;
  &lt;integer-array name="androidcolors"&gt;
    &lt;item&gt;@color/red&lt;/item&gt;
  &lt;/integer-array&gt;
&lt;/resources&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/src/main/res/values/colors.xml</span></p></figcaption></figure><ul><li>Link the necessary VideoSDK Dependencies.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">  dependencies {
   implementation project(':rnwebrtc')
   implementation project(':rnfgservice')
  }</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/build.gradle</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/settings.gradle</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">import live.videosdk.rnwebrtc.WebRTCModulePackage;
import live.videosdk.rnfgservice.ForegroundServicePackage;

public class MainApplication extends Application implements ReactApplication {
  private static List&lt;ReactPackage&gt; getPackages() {
      @SuppressWarnings("UnnecessaryLocalVariable")
      List&lt;ReactPackage&gt; packages = new PackageList(this).getPackages();
      // Packages that cannot be autolinked yet can be added manually here, for example:

      packages.add(new ForegroundServicePackage());
      packages.add(new WebRTCModulePackage());

      return packages;
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MainApplication.java</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">/* This one fixes a weird WebRTC runtime problem on some devices. */
android.enableDexingArtifactTransform.desugaring=false</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/gradle.properties</span></p></figcaption></figure><ul><li>Include the following line in your <code>proguard-rules.pro</code> file (optional: if you are using Proguard)</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">-keep class org.webrtc.** { *; }</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/proguard-rules.pro</span></p></figcaption></figure><ul><li>In your <code>build.gradle</code> file, update the minimum OS/SDK version to <code>23</code>.</li></ul><pre><code class="language-js">buildscript {
  ext {
      minSdkVersion = 23
  }
}</code></pre><h4 id="ios-setup%E2%80%8B">iOS Setup​</h4>
<blockquote>Ensure that you are using CocoaPods version 1.10 or later.</blockquote><p><strong>1. </strong>To update CocoaPods, you can reinstall the <code>gem</code> using the following command:</p><pre><code class="language-gem">$ sudo gem install cocoapods</code></pre><p><strong>2.</strong> Manually link react-native-incall-manager (if it is not linked automatically).</p><p>Select <code>Your_Xcode_Project/TARGETS/BuildSettings</code>; in Header Search Paths, add <code>"$(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager"</code></p><p><strong>3.</strong> Change the path of <code>react-native-webrtc</code> using the following command:</p><pre><code class="language-Podfile">pod 'react-native-webrtc', :path =&gt; '../node_modules/@videosdk.live/react-native-webrtc'</code></pre><p><strong>4. </strong>Change the version of your platform.</p><p>You need to set the platform field in the Podfile to 12.0 or above, because <strong>react-native-webrtc</strong> doesn't support iOS versions earlier than 12.0. Update the line to: <code>platform :ios, '12.0'</code>.</p><p><strong>5. </strong>Install pods.</p><p>After updating the version, install the pods by running the following command:</p><pre><code class="language-sh">pod install</code></pre><p><strong>6. </strong>Add the "<strong>libreact-native-webrtc.a</strong>" binary.</p><p>Add the "<strong>libreact-native-webrtc.a</strong>" binary to the "Link Binary With Libraries" section in the target of your main project folder.</p><p><strong>7. </strong>Declare permissions in <strong>Info.plist</strong>:</p><p>Add the following lines to your Info.plist file:</p><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><figcaption><p><b><strong style="white-space: pre-wrap;">project folder/ios/projectname/info.plist</strong></b></p></figcaption></figure><h4 id="register-service">Register Service</h4>
<p>Register the VideoSDK services in your root <code>index.js</code> file so they are initialized before the app component is registered.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { register } from "@videosdk.live/react-native-sdk";

register();

AppRegistry.registerComponent(appName, () =&gt; App);</code></pre><figcaption><p><span style="white-space: pre-wrap;">index.js</span></p></figcaption></figure><h2 id="essential-steps-for-building-the-video-calling">Essential Steps for Building Video Calling</h2><p>By following these steps, you can seamlessly add video calling to your application: VideoSDK provides a robust set of tools and APIs to facilitate the integration of video capabilities.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with api.js<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">export const token = "&lt;Generated-from-dashboard&gt;";
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
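      // VideoSDK expects the raw auth token here; no "Bearer " prefix is required.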
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });

  const { roomId } = await res.json();
  return roomId;
};</code></pre><figcaption><p><span style="white-space: pre-wrap;">api.js</span></p></figcaption></figure><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context of Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meeting such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. 
It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>, <strong>webcamStream</strong>, <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import React, { useState } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) =&gt; {
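    // If no id is passed, create a new room via the VideoSDK API; otherwise reuse the given id.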
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}&gt;
      &lt;MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      &gt;
        &lt;MeetingView /&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  ) : (
    &lt;JoinScreen getMeetingId={getMeetingId} /&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">App.js</span></p></figcaption></figure><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-3--implement-join-screen">​</a></h3><p>The join screen lets the user either create a new meeting or join an existing one.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function JoinScreen(props) {
  const [meetingVal, setMeetingVal] = useState("");
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Create Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      /&gt;
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() =&gt; {
          props.getMeetingId(meetingVal);
        }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Join Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/SafeAreaView&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">JoinScreen Component</span></p></figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>The next step is to create a <code>ControlsContainer</code> component to manage features such as joining or leaving a meeting and enabling or disabling the webcam/mic.</p><p>In this step, the <code>useMeeting</code> hook is used to acquire the required methods: <code>join()</code>, <code>leave()</code>, <code>toggleWebcam()</code> and <code>toggleMic()</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const Button = ({ onPress, buttonText, backgroundColor }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: "center",
        alignItems: "center",
        padding: 12,
        borderRadius: 4,
      }}
    &gt;
      &lt;Text style={{ color: "white", fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

function ControlsContainer({ join, leave, toggleWebcam, toggleMic }) {
  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
          join();
        }}
        buttonText={"Join"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ControlsContainer Component</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantList() {
  return null;
}
function MeetingView() {
  const { join, leave, toggleWebcam, toggleMic, meetingId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {meetingId ? (
        &lt;Text style={{ fontSize: 18, padding: 12 }}&gt;
          Meeting Id: {meetingId}
        &lt;/Text&gt;
      ) : null}
      &lt;ParticipantList /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView Component</span></p></figcaption></figure><h3 id="step-5-render-participant-list%E2%80%8B">Step 5: Render Participant List<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-5--render-participant-list">​</a></h3><p>After implementing the controls, the next step is to render the joined participants.</p><p>You can get all the joined <code>participants</code> from the <code>useMeeting</code> Hook.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length &gt; 0 ? (
    &lt;FlatList
      data={participants}
      renderItem={({ item }) =&gt; {
        return &lt;ParticipantView participantId={item} /&gt;;
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 20 }}&gt;Press Join button to enter meeting.&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantList Component</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { join, leave, toggleWebcam, toggleMic, participants } = useMeeting({});
  const participantsArrId = [...participants.keys()];
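  // `participants` is a Map keyed by participantId, so its keys give the id list for rendering.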

  return (
    &lt;View style={{ flex: 1 }}&gt;
      &lt;ParticipantList participants={participantsArrId} /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView Component</span></p></figcaption></figure><h3 id="step-6-handling-participants-media%E2%80%8B">Step 6: Handling Participant's Media<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-6--handling-participants-media">​</a></h3><p>Before Handling the Participant's Media, you need to understand a couple of concepts.</p><h4 id="1-useparticipant-hook">1. useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant in the meeting. It takes <code>participantId</code> as an argument.</p><pre><code class="language-js">const { webcamStream, webcamOn, displayName } = useParticipant(participantId);</code></pre><h4 id="2-mediastream-api">2. MediaStream API</h4>
<p>The MediaStream API lets you wrap a media track in a stream that can be attached to the <code>RTCView</code> component for audio or video playback.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">useParticipant Hook Example</span></p></figcaption></figure><h4 id="rendering-participant-media">Rendering Participant Media</h4>
<figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);

  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantView Component</span></p></figcaption></figure><p>Congratulations! With these steps in place, basic video calling is working in your application. Next, let's integrate a feature that makes the video experience even more useful for your users.</p><h2 id="integrate-image-capture-feature">Integrate Image Capture Feature</h2><p>This feature proves particularly valuable in Video KYC scenarios, where users can hold up their identity documents to the camera for verification.</p><ul><li>By using the <code>captureImage()</code> function of the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/introduction"><code>useParticipant</code></a> hook, you can capture an image of the local participant from their video stream.</li><li>You can optionally specify the desired height and width in the <code>captureImage()</code> function. If not provided, VideoSDK automatically uses the dimensions of the local participant's <code>webcamStream</code>.</li><li>The <code>captureImage()</code> function returns the image as a <code>base64</code> string.</li></ul><pre><code class="language-js">import { useMeeting, useParticipant } from '@videosdk.live/react-native-sdk';

const {localParticipant} = useMeeting()

const { webcamStream, webcamOn, captureImage } = useParticipant(localParticipant.id);
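// As noted above, height and width are optional: when omitted, VideoSDK falls
// back to the dimensions of the participant's webcamStream. That fallback can
// be pictured with a small pure helper (hypothetical name, for illustration only):

```javascript
// Hypothetical sketch of the documented fallback: explicit dimensions win,
// otherwise the stream's own size is used.
function resolveCaptureSize(requested, streamSize) {
  const size = requested || {};
  return {
    height: size.height !== undefined ? size.height : streamSize.height,
    width: size.width !== undefined ? size.width : streamSize.width,
  };
}
```

// e.g. resolveCaptureSize(undefined, { height: 720, width: 1280 }) falls back to 720x1280.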

async function imageCapture() {
    if (webcamOn &amp;&amp; webcamStream) {
      const base64 = await captureImage({height:400,width:400}); // captureImage will return base64 string
      console.log("base64",base64);
    } else {
      console.error("Camera must be on to capture an image");
    }
}</code></pre><blockquote>You can only capture an image of a local participant. If you call the <code>captureImage()</code> function on a remote participant, you will receive an error. To capture an image of a remote participant, follow the steps below.</blockquote><h3 id="how-to-capture-images-of-remote-participants%E2%80%8B">How to Capture Images of Remote Participants?<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/handling-media/image-capturer#how-to-capture-image-of-remote-participant-">​</a></h3><ul><li>Before proceeding, it's crucial to understand <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/upload-fetch-temporary-file">VideoSDK's temporary file storage system</a> and the underlying <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/pubsub">pubSub mechanism</a>.</li><li>Here's a breakdown of the steps, using the names Participant A and Participant B for clarity:</li></ul><h3 id="step-1-initiate-image-capture-request%E2%80%8B">Step 1: Initiate Image Capture Request<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/handling-media/image-capturer#step-1--initiate-image-capture-request">​</a></h3><ul><li>First, send a request to Participant B, whose image you want to capture, using pubSub.</li><li>To do that, create a pubSub topic called <code>IMAGE_CAPTURE</code> in the <code>ParticipantView</code> component.</li><li>Use the <code>sendOnly</code> property of the <code>publish()</code> method so that the request is sent to that participant only.</li></ul><pre><code class="language-js">import {usePubSub} from '@videosdk.live/react-native-sdk';
import {
  TouchableOpacity,
  Text
} from 'react-native';

function ParticipantView({ participantId }) {
  // create pubsub topic to send Request
  const { publish } = usePubSub('IMAGE_CAPTURE');

  // send Request to participant
  function sendRequest() {
    // Pass the participantId of the participant whose image you want to capture
    // Here, it will be Participant B's id, as you want to capture the image of Participant B
    publish("Sending request to capture image", { persist: false, sendOnly: [participantId] });
  };
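// The sendOnly option is what scopes the pubSub message to a single
// participant. The options object passed to publish() above can be built by a
// tiny helper (hypothetical, shown for illustration):

```javascript
// Build non-persistent publish options targeted at specific participants,
// mirroring the { persist: false, sendOnly: [...] } object used above.
function targetedPublishOptions(participantIds) {
  return { persist: false, sendOnly: participantIds };
}
```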
  
  return &lt;&gt;
    {/* other components */}
    &lt;TouchableOpacity style={{ width: 80, height : 45, backgroundColor: 'red', position: 'absolute', top: 10 }} onPress={() =&gt; {
        sendRequest()
    }}&gt;
      &lt;Text style={{ fontSize: 15, color: 'white', left:10 }}&gt;
          Capture Image
      &lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  &lt;/&gt;;
}
</code></pre><h3 id="step-2-capture-and-upload-file%E2%80%8B">Step 2: Capture and Upload File<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/handling-media/image-capturer#step-2--capture-and-upload-file">​</a></h3><ul><li>To capture images from the remote participant (Participant B), create the <code>CaptureImageListner</code> component. When a participant receives an image capture request, this component uses the <code>captureImage</code> function of the <code>useParticipant</code> hook to capture the image.</li></ul><pre><code class="language-js">import {
  useFile,
  usePubSub,
  useParticipant
} from '@videosdk.live/react-native-sdk';

const CaptureImageListner = ({ localParticipantId }) =&gt; {

  const { captureImage } = useParticipant(localParticipantId);

  // subscribe to receive request
  usePubSub('IMAGE_CAPTURE', {
    onMessageReceived: (message) =&gt; {
      _handleOnImageCaptureMessageReceived(message);
    },
  });

  const _handleOnImageCaptureMessageReceived = (message) =&gt; {
    try {
      if (message.senderId !== localParticipantId) {
        // capture and store image when message received
        captureAndStoreImage({ senderId: message.senderId });
      }
    } catch (err) {
      console.log("error on image capture", err);
    }
  };
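// The guard above simply ignores messages that the local participant sent
// itself. Extracted as a pure function (hypothetical name), the rule is:

```javascript
// A capture request should be handled only when it came from another participant.
function shouldHandleCaptureRequest(message, localParticipantId) {
  return message.senderId !== localParticipantId;
}
```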

  async function captureAndStoreImage({ senderId }) {
    // capture image
    const base64Data = await captureImage({height:400,width:400});
    console.log('base64Data',base64Data);
  }

  return &lt;&gt;&lt;/&gt;;
};
  
export default CaptureImageListner;</code></pre><ul><li>The captured image is then stored in the VideoSDK's temporary file storage system using the <code>uploadBase64File()</code> function of the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-file"><code>useFile</code></a> hook. This operation returns a unique <code>fileUrl</code> of the stored image.</li></ul><pre><code class="language-js">const CaptureImageListner = ({ localParticipantId }) =&gt; {

  const { uploadBase64File } = useFile();
  
  async function captureAndStoreImage({ senderId }) {
    // capture image
    const base64Data = await captureImage({height:400,width:400});
    const token = "&lt;YOUR-TOKEN&gt;";
    const fileName = "myCapture.jpeg";  // specify a name for image file with extension
    // upload image to videosdk storage system
    const fileUrl = await uploadBase64File({base64Data,token,fileName});
    console.log('fileUrl',fileUrl)
  }

  //...
}</code></pre><ul><li>Next, the <code>fileUrl</code> is sent back to the participant who initiated the request using the <code>IMAGE_TRANSFER</code> topic.</li></ul><pre><code class="language-js">const CaptureImageListner = ({ localParticipantId }) =&gt; {

  //...

  // publish image Transfer
  const { publish: imageTransferPublish } = usePubSub('IMAGE_TRANSFER');

  async function captureAndStoreImage({ senderId }) {
    //...
    const fileUrl = await uploadBase64File({base64Data,token,fileName});
    imageTransferPublish(fileUrl, { persist: false , sendOnly: [senderId] });
  }

  //...
}</code></pre><ul><li>Then the <code>CaptureImageListener</code> component has to be rendered within the <code>MeetingView</code> component.</li></ul><pre><code class="language-js">import CaptureImageListner from './captureImageListner';
import { useMeeting } from '@videosdk.live/react-native-sdk';

function MeetingView() {

  //...

  // Get `localParticipant` from the useMeeting hook
  const { localParticipant } = useMeeting({});

  return (
  &lt;View&gt;
    {/* other components */}
    &lt;CaptureImageListner localParticipantId={localParticipant?.id} /&gt;
  &lt;/View&gt;
 );
}</code></pre><h3 id="step-3-fetch-and-display-image%E2%80%8B">Step 3: Fetch and Display Image<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/handling-media/image-capturer#step-3--fetch-and-display-image">​</a></h3><ul><li>To display a captured image, the <code>ShowImage</code> component is used. Here's how it works:</li><li>Within <code>ShowImage</code>, you need to subscribe to the <code>IMAGE_TRANSFER</code> topic, receiving the <code>fileUrl</code> associated with the captured image. Once obtained, leverage the <code>fetchBase64File()</code> function from the <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-file"><code>useFile</code></a> hook to retrieve the file in <code>base64</code> format from VideoSDK's temporary storage.</li></ul><pre><code class="language-js">import {
  useMeeting,
  useFile,
  usePubSub
} from '@videosdk.live/react-native-sdk';
import { useState } from 'react';

function ShowImage() {
  const mMeeting = useMeeting();
  const { fetchBase64File } = useFile();

  const topicTransfer = "IMAGE_TRANSFER";

  const [bitMapImg, setbitMapImg] = useState(null);

  usePubSub(topicTransfer, {
    onMessageReceived: (message) =&gt; {
      if (message.senderId !== mMeeting.localParticipant.id) {
        fetchFile({ url: message.message }); // pass fileUrl to fetch the file
      }
    }
  });

  async function fetchFile({ url }) {
    const token = "&lt;YOUR-TOKEN&gt;";
    const base64 = await fetchBase64File({ url, token });
    console.log("base64",base64); // here is your image in a form of base64
    setbitMapImg(base64);
  }
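// The modal that displays the image wraps the fetched base64 payload in a
// data: URI (data:image/jpeg;base64,...). That string construction can be
// factored into a small pure helper (hypothetical, for illustration only):

```javascript
// Wrap a raw base64 payload in a JPEG data URI for use as an Image source.
function toJpegDataUri(base64Data) {
  return "data:image/jpeg;base64," + base64Data;
}
```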
}</code></pre><ul><li>With the <code>base64</code> data in hand, you can now display the image in a modal. This seamless image presentation is integrated into the <code>MeetingView</code> component.</li></ul><pre><code class="language-js">import {
  Image,
  Modal,
  Pressable
} from 'react-native';

function ShowImage() {

 //...

 return &lt;&gt;
  {bitMapImg ? (
    &lt;View&gt;
      &lt;Modal animationType={"slide"} transparent={false} &gt;
        &lt;View style={{
          flex: 1,
          flexDirection: 'column',
          justifyContent: 'center',
          alignItems: 'center'
        }}&gt;
          &lt;View&gt;
            &lt;Image
              style={{ height: 400, width: 300, objectFit: "contain" }}
              source={{ uri: `data:image/jpeg;base64,${bitMapImg}` }}
            /&gt;
            &lt;Pressable onPress={() =&gt; setbitMapImg(null)}&gt;
              &lt;Text style={{ color: "black" }}&gt;Close Dialog&lt;/Text&gt;
            &lt;/Pressable&gt;
          &lt;/View&gt;
        &lt;/View&gt;
      &lt;/Modal&gt;
    &lt;/View&gt;
  ) : null}
 &lt;/&gt;;
}</code></pre><pre><code class="language-js">function MeetingView() {
  // ...
  return (
    &lt;View&gt;
      {/* other components */}
      &lt;CaptureImageListner localParticipantId={localParticipant?.id} /&gt;
      &lt;ShowImage /&gt;
    &lt;/View&gt;
  );
}</code></pre><p>Congratulations! By successfully integrating the Image Capture feature, developers can enhance the immersive video experience for users within their applications. </p><blockquote>The file stored in the <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/upload-fetch-temporary-file">VideoSDK's temporary file storage system</a> will be automatically deleted once the current room/meeting comes to an end.</blockquote><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-native-video-calling-app">✨ Want to Add More Features to React Native Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React Native video-calling app,</p><p><strong>Check out these additional resources:</strong></p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/active-speaker-indication-in-react-native-video-call-app">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-in-react-native-video-app">Link</a></li><li>Screen Share Feature in Android: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-android-video-call-app">Link</a></li><li>Screen Share Feature in iOS: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-ios-video-call-app">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-native-video-call-app">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/picture-in-picture-pip-in-react-native">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>In summary, integrating Image Capture into the React Native video-calling app not only enhances its functionality but also opens up various possibilities for users to create, share, and engage with multimedia content in diverse scenarios.</p><p>Developers can simply integrate the Image Capture functionality into their apps, 
whether for identity verification (such as Video KYC), content creation, or collaborative experiences, and use the capabilities of this feature to improve user experiences. </p><p>If you are new here and want to build an interactive React Native app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Picture-in-Picture (PiP) Mode in JavaScript Video Chat App?]]></title><description><![CDATA[Learn how to seamlessly integrate Picture-in-Picture (PiP) mode into your JavaScript video chat application for enhanced user experience.]]></description><link>https://www.videosdk.live/blog/integrate-picture-in-picture-pip-mode-in-javascript-video-chat-app</link><guid isPermaLink="false">662b6f452a88c204ca9d4f75</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 25 Sep 2024 08:59:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/05/PIP-mode-javascipt-chat-app.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/PIP-mode-javascipt-chat-app.png" alt="How to Integrate Picture-in-Picture (PiP) Mode in JavaScript Video Chat App?"/><p>With Picture-in-Picture (PiP) mode integrated into your <a href="https://www.videosdk.live/blog/video-calling-javascript">JavaScript video chat app</a> built with VideoSDK, you don't have to choose between staying connected and checking out that fleeting bit of 
online content. </p><p>PiP mode allows users to minimize the video call window into a resizable and movable window, keeping the call visible while they attend to other tasks on their screen.</p><p>This functionality enhances the user experience by providing greater flexibility and multitasking capabilities during video calls. This guide goes through the steps of integrating PiP mode into your video call app using VideoSDK.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of picture-in-picture mode functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. 
Consider referring to the <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a> for a more visual understanding of the account creation and token generation process.</p><h3 id="prerequisites">Prerequisites</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (if you do not have one, follow <a href="https://app.videosdk.live/" rel="noopener noreferrer">VideoSDK Dashboard</a>)</li><li>Have Node and NPM installed on your device.</li></ul><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk">⬇️ Install VideoSDK</h2><p>Import VideoSDK using the <code>&lt;script&gt;</code> tag or install it using the following npm command. Make sure you are in your app directory before you run this command.</p><pre><code class="language-js">&lt;html&gt;
  &lt;head&gt;
    &lt;!--.....--&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;!--.....--&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.83/videosdk.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><ul><li><strong>npm</strong></li></ul><pre><code class="language-js">npm install @videosdk.live/js-sdk</code></pre><ul><li><strong>Yarn</strong></li></ul><pre><code class="language-js">yarn add @videosdk.live/js-sdk</code></pre><h3 id="structure-of-the-project">Structure of the project</h3><p>Your project structure should look like this.</p><pre><code class="language-js">  root
   ├── index.html
   ├── config.js
   ├── index.js</code></pre><p>You will be working on the following files:</p><ul><li><strong>index.html</strong>: Responsible for creating a basic UI.</li><li><strong>config.js</strong>: Responsible for storing the token.</li><li><strong>index.js</strong>: Responsible for rendering the meeting view and the join meeting functionality.</li></ul><h2 id="essential-steps-to-implement-video-call-functionality">Essential Steps to Implement Video Call Functionality</h2><p>Once you've successfully installed VideoSDK in your project, you'll have access to a range of functionalities for building your video call application. Picture-in-Picture mode is one such feature built on top of these capabilities.</p><h3 id="step-1-design-the-user-interface-ui%E2%80%8B">Step 1: Design the user interface (UI)<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-1--design-the-user-interface-ui">​</a></h3><p>Create an HTML file containing the two screens, <code>join-screen</code> and <code>grid-screen</code>.</p><pre><code class="language-js">&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt; &lt;/head&gt;

  &lt;body&gt;
    &lt;div id="join-screen"&gt;
      &lt;!-- Create new Meeting Button --&gt;
      &lt;button id="createMeetingBtn"&gt;New Meeting&lt;/button&gt;
      OR
      &lt;!-- Join existing Meeting --&gt;
      &lt;input type="text" id="meetingIdTxt" placeholder="Enter Meeting id" /&gt;
      &lt;button id="joinBtn"&gt;Join Meeting&lt;/button&gt;
    &lt;/div&gt;

    &lt;!-- for Managing meeting status --&gt;
    &lt;div id="textDiv"&gt;&lt;/div&gt;

    &lt;div id="grid-screen" style="display: none"&gt;
      &lt;!-- To Display MeetingId --&gt;
      &lt;h3 id="meetingIdHeading"&gt;&lt;/h3&gt;

      &lt;!-- Controllers --&gt;
      &lt;button id="leaveBtn"&gt;Leave&lt;/button&gt;
      &lt;button id="toggleMicBtn"&gt;Toggle Mic&lt;/button&gt;
      &lt;button id="toggleWebCamBtn"&gt;Toggle WebCam&lt;/button&gt;

      &lt;!-- render Video --&gt;
      &lt;div class="row" id="videoContainer"&gt;&lt;/div&gt;
    &lt;/div&gt;

    &lt;!-- Add VideoSDK script --&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.83/videosdk.js"&gt;&lt;/script&gt;
    &lt;script src="config.js"&gt;&lt;/script&gt;
    &lt;script src="index.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>Configure the token (which you can obtain from the <a href="https://app.videosdk.live/login" rel="noopener noreferrer">VideoSDK Dashboard</a>) in the <code>config.js</code> file.</p><pre><code class="language-js">// Auth token will be used to generate a meeting and connect to it
TOKEN = "Your_Token_Here";</code></pre><p>Next, retrieve all the elements from the DOM and declare the following variables in the <code>index.js</code> file. Then, add an event listener to the join and create meeting buttons.</p><pre><code class="language-js">// Getting Elements from DOM
const joinButton = document.getElementById("joinBtn");
const leaveButton = document.getElementById("leaveBtn");
const toggleMicButton = document.getElementById("toggleMicBtn");
const toggleWebCamButton = document.getElementById("toggleWebCamBtn");
const createButton = document.getElementById("createMeetingBtn");
const videoContainer = document.getElementById("videoContainer");
const textDiv = document.getElementById("textDiv");

// Declare Variables
let meeting = null;
let meetingId = "";
let isMicOn = false;
let isWebCamOn = false;

function initializeMeeting() {}

function createLocalParticipant() {}

function createVideoElement() {}

function createAudioElement() {}

function setTrack() {}

// Join Meeting Button Event Listener
joinButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Joining the meeting...";

  roomId = document.getElementById("meetingIdTxt").value;
  meetingId = roomId;

  initializeMeeting();
});
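// The create-meeting listener below issues an authenticated POST to the
// VideoSDK rooms endpoint. The request it sends can be described by a pure
// builder (hypothetical helper, shown for illustration):

```javascript
// Describe the POST request used to create a room, mirroring the fetch()
// call in the create-meeting listener.
function buildCreateRoomRequest(token) {
  return {
    url: "https://api.videosdk.live/v2/rooms",
    options: {
      method: "POST",
      headers: { Authorization: token, "Content-Type": "application/json" },
    },
  };
}
```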

// Create Meeting Button Event Listener
createButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Please wait, we are joining the meeting";

  // API call to create meeting
  const url = `https://api.videosdk.live/v2/rooms`;
  const options = {
    method: "POST",
    headers: { Authorization: TOKEN, "Content-Type": "application/json" },
  };

  const { roomId } = await fetch(url, options)
    .then((response) =&gt; response.json())
    .catch((error) =&gt; alert("error", error));
  meetingId = roomId;

  initializeMeeting();
});</code></pre><h3 id="step-3-initialize-meeting%E2%80%8B">Step 3: Initialize Meeting<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-meeting">​</a></h3><p>Following that, initialize the meeting using the <code>initMeeting()</code> function and proceed to join the meeting.</p><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  window.VideoSDK.config(TOKEN);

  meeting = window.VideoSDK.initMeeting({
    meetingId: meetingId, // required
    name: "Thomas Edison", // required
    micEnabled: true, // optional, default: true
    webcamEnabled: true, // optional, default: true
  });

  meeting.join();

  // Creating local participant
  createLocalParticipant();

  // Setting local participant stream
  meeting.localParticipant.on("stream-enabled", (stream) =&gt; {
    setTrack(stream, null, meeting.localParticipant, true);
  });

  // meeting joined event
  meeting.on("meeting-joined", () =&gt; {
    textDiv.style.display = "none";
    document.getElementById("grid-screen").style.display = "block";
    document.getElementById(
      "meetingIdHeading"
    ).textContent = `Meeting Id: ${meetingId}`;
  });

  // meeting left event
  meeting.on("meeting-left", () =&gt; {
    videoContainer.innerHTML = "";
  });

  // Remote participants Event
  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    //  ...
  });

  // participant left
  meeting.on("participant-left", (participant) =&gt; {
    //  ...
  });
}</code></pre><h3 id="step-4-create-the-media-elements%E2%80%8B">Step 4: Create the Media Elements<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-4--create-the-media-elements">​</a></h3><p>In this step, Create a function to generate audio and video elements for displaying both local and remote participants. Set the corresponding media track based on whether it's a video or audio stream.</p><pre><code class="language-js">// creating video element
function createVideoElement(pId, name) {
  let videoFrame = document.createElement("div");
  videoFrame.setAttribute("id", `f-${pId}`);
  videoFrame.style.width = "300px";
    

  //create video
  let videoElement = document.createElement("video");
  videoElement.classList.add("video-frame");
  videoElement.setAttribute("id", `v-${pId}`);
  videoElement.setAttribute("playsinline", true);
  videoElement.setAttribute("width", "300");
  videoFrame.appendChild(videoElement);

  let displayName = document.createElement("div");
  displayName.innerHTML = `Name : ${name}`;

  videoFrame.appendChild(displayName);
  return videoFrame;
}

// creating audio element
function createAudioElement(pId) {
  let audioElement = document.createElement("audio");
  audioElement.setAttribute("autoPlay", "false");
  audioElement.setAttribute("playsInline", "true");
  audioElement.setAttribute("controls", "false");
  audioElement.setAttribute("id", `a-${pId}`);
  audioElement.style.display = "none";
  return audioElement;
}
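// The DOM ids in this file follow a simple convention: "f-" for a
// participant's frame, "v-" for the video element, and "a-" for the audio
// element, each suffixed with the participant id. A helper (hypothetical)
// expressing that convention:

```javascript
// Map an element kind to the id convention used throughout this file.
function participantElementId(kind, participantId) {
  const prefixes = { frame: "f-", video: "v-", audio: "a-" };
  return prefixes[kind] + participantId;
}
```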

// creating local participant
function createLocalParticipant() {
  let localParticipant = createVideoElement(
    meeting.localParticipant.id,
    meeting.localParticipant.displayName
  );
  videoContainer.appendChild(localParticipant);
}

// setting media track
function setTrack(stream, audioElement, participant, isLocal) {
  if (stream.kind == "video") {
    isWebCamOn = true;
    const mediaStream = new MediaStream();
    mediaStream.addTrack(stream.track);
    let videoElm = document.getElementById(`v-${participant.id}`);
    videoElm.srcObject = mediaStream;
    videoElm
      .play()
      .catch((error) =&gt;
        console.error("videoElem.current.play() failed", error)
      );
  }
  if (stream.kind == "audio") {
    if (isLocal) {
      isMicOn = true;
    } else {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(stream.track);
      audioElement.srcObject = mediaStream;
      audioElement
        .play()
        .catch((error) =&gt; console.error("audioElem.play() failed", error));
    }
  }
}</code></pre><h3 id="step-5-handle-participant-events%E2%80%8B">Step 5: Handle participant events<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-5--handle-participant-events">​</a></h3><p>Thereafter, implement the events related to the participants and the stream.</p><p>The following are the events to be executed in this step:</p><ol><li><code>participant-joined</code>: When a remote participant joins, this event will trigger. In the event callback, create video and audio elements previously defined for rendering their video and audio streams.</li><li><code>participant-left</code>: When a remote participant leaves, this event will trigger. In the event callback, remove the corresponding video and audio elements.</li><li><code>stream-enabled</code>: This event manages the media track of a specific participant by associating it with the appropriate video or audio element.</li></ol><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  // ...

  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    let videoElement = createVideoElement(
      participant.id,
      participant.displayName
    );
    let audioElement = createAudioElement(participant.id);
    // stream-enabled
    participant.on("stream-enabled", (stream) =&gt; {
      setTrack(stream, audioElement, participant, false);
    });
    videoContainer.appendChild(videoElement);
    videoContainer.appendChild(audioElement);
  });

  // participants left
  meeting.on("participant-left", (participant) =&gt; {
    let vElement = document.getElementById(`f-${participant.id}`);
    vElement.remove(vElement);

    let aElement = document.getElementById(`a-${participant.id}`);
    aElement.remove(aElement);
  });
}</code></pre><h3 id="step-6-implement-controls%E2%80%8B">Step 6: Implement Controls<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-6--implement-controls">​</a></h3><p>Next, implement the meeting controls, such as <code>toggleMic</code>, <code>toggleWebcam</code>, and leave the meeting.</p><pre><code class="language-js">// leave Meeting Button Event Listener
leaveButton.addEventListener("click", async () =&gt; {
  meeting?.leave();
  document.getElementById("grid-screen").style.display = "none";
  document.getElementById("join-screen").style.display = "block";
});

// Toggle Mic Button Event Listener
toggleMicButton.addEventListener("click", async () =&gt; {
  if (isMicOn) {
    // Disable Mic in Meeting
    meeting?.muteMic();
  } else {
    // Enable Mic in Meeting
    meeting?.unmuteMic();
  }
  isMicOn = !isMicOn;
});
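// Both control listeners follow the same pattern: call the matching SDK
// method for the current state, then flip the flag. The state transition on
// its own is just a negation, sketched here as a pure helper (hypothetical):

```javascript
// Compute the next control state after toggling one flag.
function toggleControl(state, control) {
  const next = { isMicOn: state.isMicOn, isWebCamOn: state.isWebCamOn };
  if (control === "mic") {
    next.isMicOn = !state.isMicOn;
  } else if (control === "webcam") {
    next.isWebCamOn = !state.isWebCamOn;
  }
  return next;
}
```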

// Toggle Web Cam Button Event Listener
toggleWebCamButton.addEventListener("click", async () =&gt; {
  if (isWebCamOn) {
    // Disable Webcam in Meeting
    meeting?.disableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "none";
  } else {
    // Enable Webcam in Meeting
    meeting?.enableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "inline";
  }
  isWebCamOn = !isWebCamOn;
});</code></pre><p><strong>You can check out the complete quickstart code here:</strong></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/quickstart/tree/main/js-rtc"><div class="kg-bookmark-content"><div class="kg-bookmark-title">quickstart/js-rtc at main · videosdk-live/quickstart</div><div class="kg-bookmark-description">A short and sweet tutorial for getting up to speed with VideoSDK in less than 10 minutes - videosdk-live/quickstart</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate Picture-in-Picture (PiP) Mode in JavaScript Video Chat App?"/><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/de970b6f7db5c97b5728471cbf13bae285388d9a9ccaa9bd294da67c509984d5/videosdk-live/quickstart" alt="How to Integrate Picture-in-Picture (PiP) Mode in JavaScript Video Chat App?" onerror="this.style.display = 'none'"/></div></a></figure><p>After installing VideoSDK, PiP mode becomes readily available for you to integrate into your video call app. This functionality enhances the user experience by providing flexibility and maintaining continuity.</p><h2 id="integrate-picture-in-picture-mode">Integrate Picture-in-Picture Mode</h2><p>Picture-in-picture (PiP) is a commonly used feature in video conferencing software, enabling users to simultaneously engage in a video conference and perform other tasks on their devices. </p><p>With PiP, you can keep the video conference window open, resize it to a smaller size, and continue working on other tasks while still seeing and hearing the other participants in the conference. </p><p>This feature proves beneficial when you need to take notes, send an email, or look up information during the conference. 
This guide explains the steps to implement the Picture-in-Picture feature using VideoSDK.</p><h3 id="pip-video%E2%80%8B">PiP Video<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/render-media/picture-in-picture#pip-video">​</a></h3><p>All modern-day browsers support popping a video stream out from the <code>HTMLVideoElement</code>. You can achieve this either directly from the controls shown on the video element or by using the Browser API method <a href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLVideoElement/requestPictureInPicture" rel="noopener noreferrer"><code>requestPictureInPicture()</code></a> on the video element.</p><h3 id="customize-video-pip-with-multiple-video-streams%E2%80%8B">Customize Video PiP with multiple video streams<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/render-media/picture-in-picture#customize-video-pip-with-multiple-video-streams">​</a></h3><p><strong>Step 1: </strong>Create a button that toggles the Picture-in-Picture (PiP) mode during the meeting. This button should invoke the <code>togglePipMode()</code> method when clicked.</p><pre><code class="language-js">const togglePipBtn = document.getElementById("togglePipBtn");

togglePipBtn.addEventListener("click", () =&gt; {
  togglePipMode();
});</code></pre><p><strong>Step 2: </strong>Check whether the browser supports PiP mode; if not, display a message to the user.</p><pre><code class="language-js">const togglePipMode = async () =&gt; {
  //Check if browser supports PiP mode else show a message to user
  if ("pictureInPictureEnabled" in document) {
    //
  } else {
    alert("PiP is not supported by your browser");
    return;
  }
};
</code></pre><p><strong>Step 3: </strong>Now, if the browser supports PiP mode, create <code>Canvas</code> element and a <code>Video</code> element. Generate a Stream from the Canvas and play it in the video element. Request PiP mode for the video element once the metadata has been loaded.</p><pre><code class="language-js">const togglePipMode = async () =&gt; {
  //Check if browser supports PiP mode else show a message to user
  if ("pictureInPictureEnabled" in document) {
    //Creating a Canvas which will render our PiP Stream
    const source = document.createElement("canvas");
    const ctx = source.getContext("2d");

    //Create a Video tag which we will pop out for PiP
    const pipVideo = document.createElement("video");
    //pipWindowRef (e.g. { current: null }) keeps a reference to the active PiP video; it is used again in Step 5 to exit PiP
    pipWindowRef.current = pipVideo;
    pipVideo.autoplay = true;

    //Creating stream from canvas which we will play
    const stream = source.captureStream();
    pipVideo.srcObject = stream;

    //Do initial Canvas Paint
    drawCanvas();

    //When Video is ready we will start PiP mode
    pipVideo.onloadedmetadata = () =&gt; {
      pipVideo.requestPictureInPicture();
    };
    await pipVideo.play();
  } else {
    alert("PiP is not supported by your browser");
  }
};</code></pre><p><strong>Step 4: </strong>The next step is to paint the canvas with the Participant Grid, which will be visible in the PiP window.</p><pre><code class="language-js">const getRowCount = (length) =&gt; {
  //1-2 participants: 1 row; 3 or more: 2 rows
  return length &gt; 2 ? 2 : length &gt; 0 ? 1 : 0;
};
const getColCount = (length) =&gt; {
  //1 participant: 1 column; 2-4: 2 columns; 5 or more: 3 columns
  return length &lt; 2 ? 1 : length &lt; 5 ? 2 : 3;
};

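//Worked example: with 5 participants, getRowCount(5) is 2 and getColCount(5)
//is 3, so each tile drawn below is (source.width / 3) x (source.height / 2)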
const togglePipMode = async () =&gt; {
  //Check if browser supports PiP mode else show a message to user
  if ("pictureInPictureEnabled" in document) {
    //Stream playing here
    //...

    //When the PiP mode starts, we will start drawing canvas with PiP view
    pipVideo.addEventListener("enterpictureinpicture", (event) =&gt; {
      drawCanvas();
    });

    //When PiP mode exits, we will dispose the track we created earlier
    pipVideo.addEventListener("leavepictureinpicture", (event) =&gt; {
      pipWindowRef.current = null;
      pipVideo.srcObject.getTracks().forEach((track) =&gt; track.stop());
    });

    //This will draw all the video elements in to the Canvas
    function drawCanvas() {
      //Getting all the video elements in the document
      const videos = document.querySelectorAll("video");
      try {
        //Perform initial black paint on the canvas
        ctx.fillStyle = "black";
        ctx.fillRect(0, 0, source.width, source.height);

        //Drawing the participant videos on the canvas in the grid format
        const rows = getRowCount(videos.length);
        const columns = getColCount(videos.length);
        for (let i = 0; i &lt; rows; i++) {
          for (let j = 0; j &lt; columns; j++) {
            if (j + i * columns &lt; videos.length) {
              ctx.drawImage(
                videos[j + i * columns],
                j &lt; 1 ? 0 : source.width / (columns / j),
                i &lt; 1 ? 0 : source.height / (rows / i),
                source.width / columns,
                source.height / rows
              );
            }
          }
        }
      } catch (error) {
        //Ignore transient draw errors (e.g. a video element not yet ready)
      }

      //If PiP mode is on, keep drawing the canvas whenever a new frame is requested
      if (document.pictureInPictureElement === pipVideo) {
        requestAnimationFrame(drawCanvas);
      }
    }
  } else {
    alert("PiP is not supported by your browser");
  }
};</code></pre><p><strong>Step 5: </strong>Exit the PiP mode if it is already active.</p><pre><code class="language-js">const togglePipMode = async () =&gt; {
  //Check if PiP Window is active or not
  //If active we will turn it off
  if (pipWindowRef.current) {
    await document.exitPictureInPicture();
    pipWindowRef.current = null;
    return;
  }

  //Check if browser supports PiP mode else show a message to user
  if ("pictureInPictureEnabled" in document) {
    // ...
  } else {
    alert("PiP is not supported by your browser");
  }
};</code></pre><h2 id="%E2%9C%A8-want-to-add-more-features-to-javascript-video-calling-app">✨ Want to Add More Features to JavaScript Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your JavaScript video-calling app, check out these additional resources:</p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-javascript-video-chat-app">Link</a></li><li>Image Capture: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-javascript-chat-app">Link</a></li><li>Screen Share Feature: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-javascript-video-chat-app">Link</a></li><li>RTMP Livestream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-javascript-video-chat-app">Link</a></li></ul><h2 id="wrap-up">Wrap-up</h2><p>By incorporating PiP mode into your VideoSDK video call app, you'll allow your users to navigate between applications without missing a beat during their video calls. They can stay on top of calls while checking emails, browsing the web, or even using other apps. </p><p>If you are new here and want to build an interactive JavaScript app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>. 
This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Screen Share Feature in React JS Video Call App?]]></title><description><![CDATA[Enhance your React video call app with seamless screen-sharing capabilities for effective collaboration.]]></description><link>https://www.videosdk.live/blog/integrate-screen-share-in-react-js</link><guid isPermaLink="false">6617ca5d2a88c204ca9d0778</guid><category><![CDATA[React]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Wed, 25 Sep 2024 08:14:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share-React.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share-React.png" alt="How to Integrate Screen Share Feature in React JS Video Call App?"/><p>Integrating a screen share feature into your <a href="https://www.videosdk.live/blog/react-js-video-calling">React JS video call app</a> can enhance collaboration and productivity. With this feature, users can seamlessly share their screens during calls, facilitating real-time presentations, demonstrations, and discussions. Begin by implementing a reliable screen-capturing mechanism using libraries like MediaDevices API or WebRTC. 
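Under the hood, browser screen capture comes down to a single standard call. The sketch below uses the MediaDevices API directly and is independent of VideoSDK; the function name is illustrative:

```javascript
// Minimal browser screen-capture sketch using the standard MediaDevices API.
// "captureScreen" is a hypothetical helper name, not a VideoSDK or browser API.
async function captureScreen() {
  // Prompts the user to pick a screen, window, or tab to share.
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: false,
  });
  // Stop capturing later with: stream.getTracks().forEach((t) => t.stop());
  return stream;
}
```

In practice you will not call this yourself; VideoSDK's screen-share methods described later in this guide handle the capture step for you.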
Design a user-friendly interface allowing users to initiate and stop screen sharing effortlessly.</p><p><strong>Benefits of Screen Share Feature:</strong></p><ol><li><strong>Enhanced Collaboration</strong>: Users can share their screens to demonstrate concepts, collaborate on projects, or provide real-time assistance, fostering better teamwork.</li><li><strong>Improved Communication</strong>: Visual aids enhance communication by allowing users to illustrate ideas, share documents, or present slideshows during video calls.</li><li><strong>Increased Productivity</strong>: Screen sharing reduces the need for separate meetings or lengthy explanations, leading to faster decision-making and task completion.</li><li><strong>Remote Work Facilitation</strong>: Facilitates remote work by enabling virtual presentations, training sessions, and remote troubleshooting.</li><li><strong>Seamless Integration</strong>: Integrating screen share seamlessly into your React JS app enhances its functionality, making it a comprehensive communication tool.</li></ol><p><strong>Use Cases Screen Share Feature:</strong></p><ol><li><strong>Remote Work</strong>: Employees can conduct virtual meetings, share progress reports, and collaborate on documents or presentations from anywhere.</li><li><strong>Online Education</strong>: Teachers can deliver interactive lessons, share educational resources, and provide one-on-one assistance to students through screen sharing.</li><li><strong>Customer Support</strong>: Support agents can guide customers through troubleshooting steps, demonstrate product features, or assist with software setup via screen sharing.</li><li><strong>Team Collaboration</strong>: Team members can brainstorm ideas, review designs, or collaborate on code by sharing their screens during video calls.</li><li><strong>Sales Demos</strong>: Sales professionals can deliver persuasive product demonstrations, share sales decks, and answer client questions in real-time using screen 
sharing.</li></ol><p>By the time you reach the end of this exhaustive guide, you'll be equipped with a fully-fledged React video call application that not only facilitates communication but also empowers users to collaborate with unparalleled efficiency. Say goodbye to mundane meetings and hello to a new era of dynamic collaboration!</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the Screen Share functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. 
This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (Not having one?, follow <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>Basic understanding of React.</li><li><a href="https://www.npmjs.com/package/@videosdk.live/react-sdk" rel="noopener noreferrer"><strong>React VideoSDK</strong></a></li><li>Make sure Node and NPM are installed on your device.</li><li>Basic understanding of Hooks (useState, useRef, useEffect)</li><li>React Context API (optional)</li></ul><p>Follow the steps to create the environment necessary to add video calls to your app. You can also find the code sample for <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">Quickstart here</a>.<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#create-new-react-app" rel="noopener noreferrer">​</a></p><p><strong>Create a new React App using the below command.</strong></p><pre><code class="language-js">$ npx create-react-app videosdk-rtc-react-app</code></pre><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk%E2%80%8B">⬇️ Install VideoSDK<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#install-videosdk">​</a></h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the Screen Share feature. 
Installing VideoSDK using NPM or Yarn will depend on the needs of your project.</p><ul><li>For NPM</li></ul><pre><code class="language-js">$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">$ yarn add "@videosdk.live/react-sdk"

//For the Participants Video
$ yarn add "react-player"</code></pre><p>You are going to use functional components to leverage React's reusable component architecture. There will be components for users, videos and controls (mic, camera, leave) over the video.</p><h3 id="app-architecture">App Architecture</h3>
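As a quick orientation, the component hierarchy described in this section can be sketched as plain data. The names are the components this guide implements (plus VideoSDK's MeetingProvider); the helper below is purely illustrative, not part of VideoSDK:

```javascript
// Sketch of the component tree built in this guide. Only MeetingProvider is a
// VideoSDK export; the rest are components you will write in the next steps.
const componentTree = {
  App: {
    JoinScreen: {}, // shown until a meeting is created or joined
    MeetingProvider: {
      MeetingView: {
        Controls: {}, // leave / toggleMic / toggleWebcam buttons
        ParticipantView: {}, // rendered once per participant
      },
    },
  },
};

// Walk the tree and collect every component name, depth-first.
function listComponents(node, names = []) {
  for (const [name, children] of Object.entries(node)) {
    names.push(name);
    listComponents(children, names);
  }
  return names;
}
```

Calling `listComponents(componentTree)` lists all six names, starting with App, in the order they appear in the sections that follow.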
<p>The App will contain a <code>MeetingView</code> component which includes a <code>ParticipantView</code> component which will render the participant's name, video, audio, etc. It will also have a <code>Controls</code> component that will allow the user to perform operations like leave and toggle media.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e.png" class="kg-image" alt="How to Integrate Screen Share Feature in React JS Video Call App?" loading="lazy" width="1356" height="780"/></figure><p>You will be working on the following files:</p><ul><li>API.js: Responsible for handling API calls such as generating unique meetingId and token</li><li>App.js: Responsible for rendering <code>MeetingView</code> and joining the meeting.</li></ul><h2 id="essential-steps-to-implement-videosdk">Essential Steps to Implement VideoSDK</h2><p>To add video capability to your React application, you must first complete a sequence of prerequisites.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with API.js<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><pre><code class="language-js">//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashbaord&gt;";
// API call to create a meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>, <strong>webcamStream</strong>, <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><pre><code class="language-js">import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";

function JoinScreen({ getMeetingAndToken }) {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  //Getting the meeting id by calling the api we just wrote
  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  //This will set Meeting Id to null when meeting is left or ended
  const onMeetingLeave = () =&gt; {
    setMeetingId(null);
  };

  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
      }}
      token={authToken}
    &gt;
      &lt;MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} /&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} /&gt;
  );
}

export default App;</code></pre><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-3-implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><pre><code class="language-js">function JoinScreen({ getMeetingAndToken }) {
  const [meetingId, setMeetingId] = useState(null);
  const onClick = async () =&gt; {
    await getMeetingAndToken(meetingId);
  };
  return (
    &lt;div&gt;
      &lt;input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) =&gt; {
          setMeetingId(e.target.value);
        }}
      /&gt;
      &lt;button onClick={onClick}&gt;Join&lt;/button&gt;
      {" or "}
      &lt;button onClick={onClick}&gt;Create Meeting&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-join-screen-06fb57cf0d9e3bcc1e7da9fc032298c3.jpeg" class="kg-image" alt="How to Integrate Screen Share Feature in React JS Video Call App?" loading="lazy" width="720" height="130"/></figure><h3 id="step-4-implement-meetingview-and-controls%E2%80%8B">Step 4: Implement MeetingView and Controls<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-4-implement-meetingview-and-controls">​</a></h3><p>The next step is to create <code>MeetingView</code> and <code>Controls</code> components to manage features such as join, leave, mute, and unmute.</p><pre><code class="language-js">function MeetingView(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  //We will also get the participants list to display all participants
  const { join, participants } = useMeeting({
    //callback for when meeting is joined successfully
    onMeetingJoined: () =&gt; {
      setJoined("JOINED");
    },
    //callback for when meeting is left
    onMeetingLeft: () =&gt; {
      props.onMeetingLeave();
    },
  });
  const joinMeeting = () =&gt; {
    setJoined("JOINING");
    join();
  };

  return (
    &lt;div className="container"&gt;
      &lt;h3&gt;Meeting Id: {props.meetingId}&lt;/h3&gt;
      {joined &amp;&amp; joined == "JOINED" ? (
        &lt;div&gt;
          &lt;Controls /&gt;
          {/* For rendering all the participants in the meeting */}
          {[...participants.keys()].map((participantId) =&gt; (
            &lt;ParticipantView
              participantId={participantId}
              key={participantId}
            /&gt;
          ))}
        &lt;/div&gt;
      ) : joined &amp;&amp; joined == "JOINING" ? (
        &lt;p&gt;Joining the meeting...&lt;/p&gt;
      ) : (
        &lt;button onClick={joinMeeting}&gt;Join&lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><figure class="kg-card kg-code-card"><pre><code class="language-js">function Controls() {
  const { leave, toggleMic, toggleWebcam } = useMeeting();
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; leave()}&gt;Leave&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleMic()}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleWebcam()}&gt;toggleWebcam&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">Control Component</span></p></figcaption></figure><h4 id="output-of-controls-component">Output of Controls Component</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-container-controls-2cebdfdfd1371b010b773cb6fb9c7ae8.jpeg" class="kg-image" alt="How to Integrate Screen Share Feature in React JS Video Call App?" loading="lazy" width="720" height="177"/></figure><h3 id="step-5-implement-participant-view%E2%80%8B">Step 5: Implement Participant View<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-5-implement-participant-view">​</a></h3><p>Before implementing the participant view, you need to understand a couple of concepts.</p><h4 id="51-forwarding-ref-for-mic-and-camera">5.1 Forwarding Ref for mic and camera</h4>
<p>The <code>useRef</code> hook is responsible for referencing the audio and video components. It will be used to play and stop the audio and video of the participant.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const webcamRef = useRef(null);
const micRef = useRef(null);</code></pre><figcaption><p><span style="white-space: pre-wrap;">Forwarding Ref for mic and camera</span></p></figcaption></figure><h4 id="52-useparticipant-hook">5.2 useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take participantId as an argument.</p><pre><code class="language-js">const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
  props.participantId
);</code></pre><h4 id="53-mediastream-api">5.3 MediaStream API</h4>
<p>The MediaStream API is beneficial for adding a MediaTrack to the audio/video tag, enabling the playback of audio or video.</p><pre><code class="language-js">const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);

webcamRef.current.srcObject = mediaStream;
webcamRef.current
  .play()
  .catch((error) =&gt; console.error("videoElem.current.play() failed", error));</code></pre><h4 id="54-implement-participantview%E2%80%8B">5.4 Implement <code>ParticipantView</code>​</h4>
<p>Now you can use both of the hooks and the API to create <code>ParticipantView</code></p><pre><code class="language-js">function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() =&gt; {
    if (micRef.current) {
      if (micOn &amp;&amp; micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =&gt;
            console.error("videoElem.current.play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    &lt;div&gt;
      &lt;p&gt;
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      &lt;/p&gt;
      &lt;audio ref={micRef} autoPlay playsInline muted={isLocal} /&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          //
          playsinline // extremely crucial prop
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          //
          url={videoStream}
          //
          height={"300px"}
          width={"300px"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><h2 id="integrate-screen-share-feature">Integrate Screen Share Feature</h2><p>Screen sharing in a meeting is the process of sharing your computer screen with other participants in the meeting. It allows everyone in the meeting to see exactly what you are seeing on your screen, which can be helpful for presentations, demonstrations, or collaborations.</p><h3 id="enable-screen-share">Enable Screen Share</h3><ul><li>By using the <code>enableScreenShare()</code> function of the <code>useMeeting</code> hook, the local participant can share their desktop screen with other participants.</li><li>The Screen Share stream of a participant can be accessed from the <code>screenShareStream</code> property of the <code>useParticipant</code> hook.</li></ul><h3 id="disable-screen-share">Disable Screen Share</h3><ul><li>By using the <code>disableScreenShare()</code> function of the <code>useMeeting</code> hook, the local participant can stop sharing their desktop screen with other participants.</li></ul><h3 id="toggle-screen-share%E2%80%8B">Toggle Screen Share<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#togglescreenshare">​</a></h3><ul><li>By using the <code>toggleScreenShare()</code> function of the <code>useMeeting</code> hook, the local participant can start or stop sharing their desktop screen with other participants based on the current state of the screen sharing.</li><li>The Screen Share stream of a participant can be accessed from the <code>screenShareStream</code> property of <code>useParticipant</code> hook.</li></ul><blockquote><strong>Note</strong>: Screen Sharing is only supported in the <strong>Desktop browsers</strong> and <strong>not in mobile/tab browser</strong>.</blockquote><pre><code class="language-js">const ControlsContainer = () =&gt; {
  //Getting the screen-share method from hook
  const { toggleScreenShare } = useMeeting();

  return (
    //...
    //...
    &lt;button onClick={() =&gt; toggleScreenShare()}&gt;Screen Share&lt;/button&gt;
    //...
    //...
  );
};</code></pre><h3 id="events-associated-with-togglescreenshare%E2%80%8B">Events associated with toggleScreenShare<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#events-associated-with-togglescreenshare">​</a></h3><ul><li>Every Participant will receive a callback on  <a href="https://docs.videosdk.live/react/api/sdk-reference/use-participant/events#onstreamdisabled"><code>onStreamEnable()</code></a> event of the <a href="https://docs.videosdk.live/react/api/sdk-reference/use-participant/introduction"><code>useParticipant()</code></a> hook with the <code>Stream</code> object, if the <strong>screen share broadcasting was started</strong>.</li><li>Every Participant will receive a callback on <a href="https://docs.videosdk.live/react/api/sdk-reference/use-participant/events#onstreamdisabled"><code>onStreamDisabled()</code></a> event of the <a href="https://docs.videosdk.live/react/api/sdk-reference/use-participant/introduction"><code>useParticipant()</code></a> hook with the <code>Stream</code> object, if the <strong>screen share broadcasting was stopped</strong>.</li><li>Every Participant will receive the <a href="https://docs.videosdk.live/react/api/sdk-reference/use-meeting/events#onpresenterchanged"><code>onPresenterChanged()</code></a> callback of the <a href="https://docs.videosdk.live/react/api/sdk-reference/use-meeting/introduction"><code>useMeeting</code></a> hook, providing the <code>participantId</code> as the <code>presenterId</code> of the participant who started the screen share or <code>null</code> if the screen share was turned off.</li></ul><pre><code class="language-js">import { useParticipant, useMeeting } from "@videosdk.live/react-sdk";

const MeetingView = () =&gt; {
  //Callback for when the presenter changes
  function onPresenterChanged(presenterId) {
    if(presenterId){
      console.log(presenterId, "started screen share");
    }else{
      console.log("someone stopped screen share");
    }
  }

  const { participants } = useMeeting({
    onPresenterChanged,
    ...
  });

  return &lt;&gt;...&lt;/&gt;
}

const ParticipantView = (participantId) =&gt; {
  //Callback for when the participant starts a stream
  function onStreamEnabled(stream) {
    if(stream.kind === 'share'){
      console.log("Share Stream On: onStreamEnabled", stream);
    }
  }

  //Callback for when the participant stops a stream
  function onStreamDisabled(stream) {
    if(stream.kind === 'share'){
      console.log("Share Stream Off: onStreamDisabled", stream);
    }
  }

  const {
    displayName,
    ...
  } = useParticipant(participantId, {
    onStreamEnabled,
    onStreamDisabled,
    ...
  });
  return &lt;&gt; Participant View &lt;/&gt;;
}</code></pre><h3 id="screen-share-with-audio%E2%80%8B">Screen Share with Audio<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/handling-media/screen-share#screen-share-with-audio">​</a></h3><p>To enable screen sharing with audio, select the <strong>Share tab audio</strong> option when sharing the Chrome tab as shown below.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/screenshare-with-audio-ca8ee299f68c32ba08cd811e3fb7cd2f.png" class="kg-image" alt="How to Integrate Screen Share Feature in React JS Video Call App?" loading="lazy" width="596" height="482"/></figure><p>After clicking the <code>Share</code> button, you will receive the selected tab's audio stream in the participant's <code>screenShareAudioStream</code>.</p><blockquote><strong>NOTE</strong>: Screen Share with Audio is only supported while sharing a <strong>Chrome Tab</strong> in a <strong>Chromium-based browser</strong> like Google Chrome, Brave, etc.</blockquote><h3 id="rendering-screen-share-and-screen-share-audio%E2%80%8B">Rendering Screen Share and Screen Share Audio</h3><ul><li>To render the screen share, you will need the <code>participantId</code> of the user presenting the screen. This can be obtained from the <code>presenterId</code> property of the <code>useMeeting</code> hook.</li></ul><pre><code class="language-js">const MeetingView = () =&gt; {
  //
  const { presenterId } = useMeeting();

  return &lt;&gt;{presenterId &amp;&amp; &lt;PresenterView presenterId={presenterId} /&gt;}&lt;/&gt;;
};

const PresenterView = ({ presenterId }) =&gt; {
  return &lt;div&gt;PresenterView&lt;/div&gt;;
};</code></pre><ul><li>Now that you have the <code>presenterId</code>, you can obtain the <code>screenShareStream</code> using the <code>useParticipant</code> hook and play it in the video tag.</li></ul><pre><code class="language-js">const PresenterView = ({ presenterId }) =&gt; {
  const { screenShareStream, screenShareOn } = useParticipant(presenterId);

  //Creating a media stream from the screen share stream
  const mediaStream = useMemo(() =&gt; {
    if (screenShareOn &amp;&amp; screenShareStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(screenShareStream.track);
      return mediaStream;
    }
  }, [screenShareStream, screenShareOn]);

  return (
    &lt;&gt;
      {/* Playing the media stream in the ReactPlayer */}
      &lt;ReactPlayer
        //
        playsinline // crucial prop: plays the video inline on mobile (iOS) browsers
        playIcon={&lt;&gt;&lt;/&gt;}
        //
        pip={false}
        light={false}
        controls={false}
        muted={true}
        playing={true}
        //
        url={mediaStream} // passing mediastream here
        //
        height={"100%"}
        width={"100%"}
        onError={(err) =&gt; {
          console.log(err, "presenter video error");
        }}
      /&gt;
    &lt;/&gt;
  );
};</code></pre><ul><li>You can then add the screen share audio to this component by retrieving the <code>screenShareAudioStream</code> from the <code>useParticipant</code> hook.</li></ul><pre><code class="language-js">const PresenterView = ({ presenterId }) =&gt; {
  const { screenShareAudioStream, isLocal, screenShareStream, screenShareOn } =
    useParticipant(presenterId);

  // Creating a reference to the audio element
  const audioPlayer = useRef();

  // Playing the screen share audio stream
  useEffect(() =&gt; {
    if (
      !isLocal &amp;&amp;
      audioPlayer.current &amp;&amp;
      screenShareOn &amp;&amp;
      screenShareAudioStream
    ) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(screenShareAudioStream.track);

      audioPlayer.current.srcObject = mediaStream;
      audioPlayer.current.play().catch((err) =&gt; {
        if (
          err.message ===
          "play() failed because the user didn't interact with the document first. https://goo.gl/xX8pDD"
        ) {
          console.error("audio" + err.message);
        }
      });
    } else if (audioPlayer.current) {
      audioPlayer.current.srcObject = null;
    }
  }, [screenShareAudioStream, screenShareOn, isLocal]);

  return (
    &lt;&gt;
      {/*... React player is here */}
      {/* Adding this audio tag to play the screen share audio */}
      &lt;audio autoPlay playsInline controls={false} ref={audioPlayer} /&gt;
    &lt;/&gt;
  );
};</code></pre><blockquote>⚠️ <strong>CAUTION</strong>: To use audio and video communication in the web browser, your site must be SSL enabled, i.e. it must be served over HTTPS.</blockquote><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-js-video-calling-app">✨ Want to Add More Features to React JS Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React video-calling app, <strong>check out these additional resources:</strong></p><ul><li>HLS Player: <a href="https://www.videosdk.live/blog/implement-hls-player-in-react-js">Link</a></li><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-react-js">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-react-js">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-js">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-js">Link</a></li><li>Collaborative Whiteboard: <a href="https://www.videosdk.live/blog/integrate-whiteboard-in-react-js">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-in-react-js">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>Congratulations! You've successfully integrated screen sharing into your React video call app using VideoSDK. Incorporating a screen share feature into your React JS video call app elevates its functionality, enabling seamless collaboration and communication among users.</p><p>By integrating this feature, you empower users to share their screens effortlessly, facilitating real-time demonstrations, presentations, and troubleshooting sessions. 
With careful implementation and consideration of performance and security aspects, your app becomes a powerful tool for remote teams, educators, and businesses to enhance productivity and engagement.</p><p>If you are new here and want to build an interactive React app with free resources, you can <a href="https://www.videosdk.live/signup"><u>Sign up with VideoSDK</u></a> and get <em>10,000 free minutes every month</em>. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Image Capture Feature in React JS Video Call App?]]></title><description><![CDATA[Integrate image capture into your React JS video call app for enhanced user experience and versatility. Enable users to snap and share moments seamlessly.


]]></description><link>https://www.videosdk.live/blog/integrate-image-capture-in-react-js</link><guid isPermaLink="false">6617d6022a88c204ca9d0809</guid><category><![CDATA[React]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 24 Sep 2024 12:31:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Image-Capture-React.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Image-Capture-React.png" alt="How to Integrate Image Capture Feature in React JS Video Call App?"/><p>Integrating image capture into a <a href="https://www.videosdk.live/blog/react-js-video-calling">React JS video call app</a> enhances the user experience and functionality. With the Image Capturer API from VideoSDK, you can effortlessly empower your app users to capture high-quality images during video calls, adding a dynamic dimension to their interactions.</p><p>Implementing this feature is seamless within your React JS application. By following the provided documentation and integrating the Image Capturer component, users can easily capture snapshots during their video conversations with just a click. 
Whether it's for preserving memorable moments or sharing essential information visually, this functionality enriches the overall user experience.</p><p><strong>Benefits of Image Capture:</strong></p><ul><li><strong>Enhanced Communication:</strong> Image capture enables users to express themselves more vividly during video calls, fostering richer communication.</li><li><strong>Memorable Moments:</strong> Users can capture memorable moments during video conversations, preserving them as images for future reference or sharing.</li><li><strong>Visual Information Sharing:</strong> Image snapshots allow users to convey complex information visually, making it easier to share ideas, documents, or diagrams.</li><li><strong>Increased Engagement:</strong> The ability to capture images adds interactivity to video calls, keeping participants engaged and attentive.</li><li><strong>Convenience:</strong> With a simple click, users can capture images without leaving the video call interface, ensuring a seamless experience.</li></ul><p><strong>Use Cases of Image Capture:</strong></p><ul><li><strong>Education:</strong> Students can capture whiteboard content or diagrams shared during online classes for later review.</li><li><strong>Business Meetings:</strong> Participants can capture key points discussed in meetings or presentations, ensuring clarity and accountability.</li><li><strong>Remote Collaboration:</strong> Teams working remotely can capture design mockups, charts, or code snippets for collaborative brainstorming sessions.</li><li><strong>Personal Communication:</strong> Friends and family members can capture fun moments or important information shared during video calls, preserving memories.</li></ul><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the Image Capture functionality, we must use the capabilities that the VideoSDK offers. 
Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. </p><p>Consider referring to the <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a> for a more visual understanding of the account creation and token generation process.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (Not having one? Follow <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>Basic understanding of React.</li><li><a href="https://www.npmjs.com/package/@videosdk.live/react-sdk" rel="noopener noreferrer"><strong>React VideoSDK</strong></a></li><li>Make sure Node and NPM are installed on your device.</li><li>Basic understanding of Hooks (useState, useRef, useEffect)</li><li>React Context API (optional)</li></ul><p>Follow the steps to create the environment necessary to add video calls to your app. 
You can also find the code sample for <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">Quickstart here</a>.<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#create-new-react-app" rel="noopener noreferrer">​</a></p><p><strong>Create a new React App using the below command.</strong></p><pre><code class="language-js">$ npx create-react-app videosdk-rtc-react-app</code></pre><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk%E2%80%8B">⬇️ Install VideoSDK<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#install-videosdk">​</a></h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the Image Capture feature. Installing VideoSDK using NPM or Yarn will depend on the needs of your project.</p><ul><li>For NPM</li></ul><pre><code class="language-js">$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">$ yarn add "@videosdk.live/react-sdk"

//For the Participants Video
$ yarn add "react-player"</code></pre><p>You are going to use functional components to leverage React's reusable component architecture. There will be components for users, videos and controls (mic, camera, leave) over the video.</p><h3 id="app-architecture">App Architecture</h3>
<p>The App will contain a <code>MeetingView</code> component which includes a <code>ParticipantView</code> component which will render the participant's name, video, audio, etc. It will also have a <code>Controls</code> component that will allow the user to perform operations like leave and toggle media.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e.png" class="kg-image" alt="How to Integrate Image Capture Feature in React JS Video Call App?" loading="lazy" width="1356" height="780"/></figure><p>You will be working on the following files:</p><ul><li>API.js: Responsible for handling API calls such as generating unique meetingId and token</li><li>App.js: Responsible for rendering <code>MeetingView</code> and joining the meeting.</li></ul><h2 id="essential-steps-to-implement-video-calling-functionality">Essential Steps to Implement Video Calling Functionality</h2><p>To add video capability to your React application, you must first complete a sequence of prerequisites.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with API.js<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><pre><code class="language-js">//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashboard&gt;";
// API call to create a meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: token,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. 
It is responsible for handling all the events and props related to one particular participant, such as <strong>name</strong>, <strong>webcamStream</strong>, <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><pre><code class="language-js">import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";

function JoinScreen({ getMeetingAndToken }) {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  //Getting the meeting id by calling the api we just wrote
  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  //This will set Meeting Id to null when meeting is left or ended
  const onMeetingLeave = () =&gt; {
    setMeetingId(null);
  };

  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
      }}
      token={authToken}
    &gt;
      &lt;MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} /&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} /&gt;
  );
}

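// Optional sanity check (a sketch, not part of the official quickstart):
// returns true when the auth token is missing or still contains the
// placeholder text from API.js, so you can warn before createMeeting()
// fails against the VideoSDK API.
function isPlaceholderToken(token) {
  return !token || token.includes("Generated-from-dash");
}
// e.g. if (isPlaceholderToken(authToken)) console.warn("Set a real token in API.js");
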
export default App;</code></pre><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-3-implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><pre><code class="language-JavaScript">function JoinScreen({ getMeetingAndToken }) {
  const [meetingId, setMeetingId] = useState(null);
  const onClick = async () =&gt; {
    await getMeetingAndToken(meetingId);
  };
  return (
    &lt;div&gt;
      &lt;input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) =&gt; {
          setMeetingId(e.target.value);
        }}
      /&gt;
      &lt;button onClick={onClick}&gt;Join&lt;/button&gt;
      {" or "}
      &lt;button onClick={() =&gt; getMeetingAndToken(null)}&gt;Create Meeting&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-join-screen-06fb57cf0d9e3bcc1e7da9fc032298c3.jpeg" class="kg-image" alt="How to Integrate Image Capture Feature in React JS Video Call App?" loading="lazy" width="720" height="130"/></figure><h3 id="step-4-implement-meetingview-and-controls%E2%80%8B">Step 4: Implement MeetingView and Controls<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-4-implement-meetingview-and-controls">​</a></h3><p>The next step is to create <code>MeetingView</code> and <code>Controls</code> components to manage features such as join, leave, mute, and unmute.</p><pre><code class="language-JavaScript">function MeetingView(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  //We will also get the participants list to display all participants
  const { join, participants } = useMeeting({
    //callback for when meeting is joined successfully
    onMeetingJoined: () =&gt; {
      setJoined("JOINED");
    },
    //callback for when meeting is left
    onMeetingLeft: () =&gt; {
      props.onMeetingLeave();
    },
  });
  const joinMeeting = () =&gt; {
    setJoined("JOINING");
    join();
  };

  return (
    &lt;div className="container"&gt;
      &lt;h3&gt;Meeting Id: {props.meetingId}&lt;/h3&gt;
      {joined &amp;&amp; joined === "JOINED" ? (
        &lt;div&gt;
          &lt;Controls /&gt;
          {/* For rendering all the participants in the meeting */}
          {[...participants.keys()].map((participantId) =&gt; (
            &lt;ParticipantView
              participantId={participantId}
              key={participantId}
            /&gt;
          ))}
        &lt;/div&gt;
      ) : joined &amp;&amp; joined === "JOINING" ? (
        &lt;p&gt;Joining the meeting...&lt;/p&gt;
      ) : (
        &lt;button onClick={joinMeeting}&gt;Join&lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><pre><code class="language-JavaScript">function Controls() {
  const { leave, toggleMic, toggleWebcam } = useMeeting();
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; leave()}&gt;Leave&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleMic()}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleWebcam()}&gt;toggleWebcam&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="output-of-controls-component">Output of Controls Component</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-container-controls-2cebdfdfd1371b010b773cb6fb9c7ae8.jpeg" class="kg-image" alt="How to Integrate Image Capture Feature in React JS Video Call App?" loading="lazy" width="720" height="177"/></figure><h3 id="step-5-implement-participant-view%E2%80%8B">Step 5: Implement Participant View<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-5-implement-participant-view">​</a></h3><p>Before implementing the participant view, you need to understand a couple of concepts.</p><h4 id="51-forwarding-ref-for-mic-and-camera">5.1 Forwarding Ref for mic and camera</h4>
<p>The <code>useRef</code> hook is responsible for referencing the audio and video components. It will be used to play and stop the audio and video of the participant.</p><pre><code class="language-js">const webcamRef = useRef(null);
const micRef = useRef(null);</code></pre><h4 id="52-useparticipant-hook">5.2 useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take participantId as an argument.</p><pre><code class="language-Javascript">const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
  props.participantId
);</code></pre><h4 id="53-mediastream-api">5.3 MediaStream API</h4>
<p>The MediaStream API is beneficial for adding a MediaTrack to the audio/video tag, enabling the playback of audio or video.</p><pre><code class="language-JavaScript">const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);

webcamRef.current.srcObject = mediaStream;
webcamRef.current
  .play()
  .catch((error) =&gt; console.error("videoElem.current.play() failed", error));</code></pre><h4 id="54-implement-participantview%E2%80%8B">5.4 Implement <code>ParticipantView</code>​</h4>
<p>Now you can use both of the hooks and the API to create <code>ParticipantView</code>.</p><pre><code class="language-JavaScript">function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() =&gt; {
    if (micRef.current) {
      if (micOn &amp;&amp; micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =&gt;
            console.error("micRef.current.play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    &lt;div&gt;
      &lt;p&gt;
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      &lt;/p&gt;
      &lt;audio ref={micRef} autoPlay playsInline muted={isLocal} /&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          //
          playsinline // crucial prop: plays the video inline on mobile (iOS) browsers
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          //
          url={videoStream}
          //
          height={"300px"}
          width={"300px"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><h2 id="integrate-image-capture-feature">Integrate Image Capture Feature</h2><p>This capability proves particularly valuable in Video KYC scenarios, enabling users to hold up their identity documents to the camera while an image is captured for verification.</p><blockquote>NOTE: The <code>captureImage()</code> function is supported from version <code>0.0.79</code> onward.<br/>From version <code>0.0.81</code> onward, the <code>height</code> and <code>width</code> parameters of <code>captureImage()</code> are optional.</blockquote><ul><li>By using the <code>captureImage()</code> function of the <code>useParticipant</code> hook, you can capture an image of a local participant from their video stream.</li><li>You have the option to specify the desired height and width in the <code>captureImage()</code> function; however, these parameters are optional. If not provided, VideoSDK will automatically use the dimensions of the local participant's webcamStream.</li><li>The <code>captureImage()</code> function returns the image in the form of a <code>base64</code> string.</li></ul><pre><code class="language-JavaScript">import { useMeeting, useParticipant } from "@videosdk.live/react-sdk";

const { localParticipant } = useMeeting();

const { webcamStream, webcamOn, captureImage } = useParticipant(
  localParticipant.id
);

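// Optional helper (a sketch, not part of the VideoSDK API): captureImage()
// resolves with a base64-encoded image string; to show it in an image tag or
// download it, you typically need a data URL. The JPEG mime type here is an
// assumption about the capture format.
function base64ToDataUrl(base64, mimeType = "image/jpeg") {
  // pass through unchanged if the string is already a data URL
  if (base64.startsWith("data:")) return base64;
  return `data:${mimeType};base64,${base64}`;
}
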
async function imageCapture() {
  if (webcamOn &amp;&amp; webcamStream) {
    const base64 = await captureImage({ height: 400, width: 400 }); // captureImage will return base64 string
    console.log("base64", base64);
  } else {
    console.error("Camera must be on to capture an image");
  }
}</code></pre><blockquote>NOTE: You can only capture an image of the local participant. If you call the <code>captureImage()</code> function on a remote participant, you will receive an error. To capture an image of a remote participant, refer to the documentation below.</blockquote><h3 id="how-to-capture-an-image-of-a-remote-participant%E2%80%8B">How to capture an image of a remote participant?<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/handling-media/image-capturer#how-to-capture-image-of-a-remote-participant-">​</a></h3><ul><li>Before proceeding, it's crucial to understand <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/upload-fetch-temporary-file">VideoSDK's temporary file storage system</a> and the underlying <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/pubsub">pubsub mechanism</a>.</li><li>Here's a breakdown of the steps, using the names Participant A and Participant B for clarity:</li></ul><ul><li>First, you have to send a request to Participant B, whose image you want to capture, using PubSub.</li><li>To do that, you have to create a PubSub topic called <code>IMAGE_CAPTURE</code> in the <code>ParticipantView</code> component.</li><li>Here, you will be using the <code>sendOnly</code> property of the <code>publish()</code> method. Therefore, the request will be sent to that participant only.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-JavaScript">import { usePubSub, useParticipant } from '@videosdk.live/react-sdk';

function ParticipantView({ participantId }) {
  // create pubsub topic to send Request
  const { publish } = usePubSub('IMAGE_CAPTURE');
  const { isLocal } = useParticipant(participantId);

  // send Request to participant
  function sendRequest() {
    // Pass the participantId of the participant whose image you want to capture
    // Here, it will be Participant B's id, as you want to capture the image of Participant B
    publish("Sending request to capture image", { persist: false, sendOnly: [participantId] });
  };

  return (
    &lt;&gt;
      // other components
      &lt;button
        style={{
          position: 'absolute', backgroundColor: "#00000066", top: 10, left: 10
        }}
        onClick={async () =&gt; {
          if (!isLocal) {
            sendRequest();
          }
        }}
      &gt;
        Capture Image
      &lt;/button&gt;
    &lt;/&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantView.js</span></p></figcaption></figure><h4 id="step-2-capture-and-upload-file">Step 2: Capture and Upload File</h4>
<p>To capture an image from the remote participant [Participant B], you have to create the <code>CaptureImageListner</code> component. When a participant receives an image capture request, this component uses the <code>captureImage</code> function of the <code>useParticipant</code> hook to capture the image.</p><figure class="kg-card kg-code-card"><pre><code class="language-JavaScript">import { useFile, useParticipant, usePubSub } from '@videosdk.live/react-sdk';

const CaptureImageListner = ({ localParticipantId }) =&gt; {

  const { captureImage } = useParticipant(localParticipantId);

  // subscribe to receive request
  usePubSub('IMAGE_CAPTURE', {
    onMessageReceived: (message) =&gt; {
      _handleOnImageCaptureMessageReceived(message);
    },
  });

  const _handleOnImageCaptureMessageReceived = (message) =&gt; {
    try {
      if (message.senderId !== localParticipantId) {
        // capture and store image when message received
        captureAndStoreImage({ senderId: message.senderId });
      }
    } catch (err) {
      console.log("error on image capture", err);
    }
  };

  async function captureAndStoreImage({ senderId }) {
    // capture image
    const base64Data = await captureImage({ height: 400, width: 400 });
    console.log("base64Data", base64Data);
  }

  return &lt;&gt;&lt;/&gt;;
};

export default CaptureImageListner;</code></pre><figcaption><p><span style="white-space: pre-wrap;">CaptureImageListner.js</span></p></figcaption></figure><ul><li>The captured image is then stored in VideoSDK's temporary file storage system using the <code>uploadBase64File()</code> function of the <code>useFile</code> hook. This operation returns a unique <code>fileUrl</code> of the stored image.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-JavaScript">const CaptureImageListner = ({ localParticipantId }) =&gt; {
  const { uploadBase64File } = useFile();

  async function captureAndStoreImage({ senderId }) {
    // capture image
    const base64Data = await captureImage({ height: 400, width: 400 });
    const token = "&lt;VIDEOSDK_TOKEN&gt;";
    const fileName = "myCapture.jpeg"; // specify a name for image file with extension
    // upload image to videosdk storage system
    const fileUrl = await uploadBase64File({ base64Data, token, fileName });
    console.log("fileUrl", fileUrl);
  }

  //...
};</code></pre><figcaption><p><span style="white-space: pre-wrap;">CaptureImageListner.js</span></p></figcaption></figure><ul><li>Next, the <code>fileUrl</code> is sent back to the participant who initiated the request using the <code>IMAGE_TRANSFER</code> topic.</li></ul><pre><code class="language-JavaScript">const CaptureImageListner = ({ localParticipantId }) =&gt; {
  //...

  // publish image Transfer
  const { publish: imageTransferPublish } = usePubSub("IMAGE_TRANSFER");

  async function captureAndStoreImage({ senderId }) {
    //...
    const fileUrl = await uploadBase64File({ base64Data, token, fileName });
    imageTransferPublish(fileUrl, { persist: false, sendOnly: [senderId] });
  }

  //...
};</code></pre><ul><li>Then the <code>CaptureImageListener</code> component has to be rendered within the <code>MeetingView</code> component.</li></ul><h4 id="step-3-fetch-and-display-image%E2%80%8B">Step 3: Fetch and Display Image<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/handling-media/image-capturer#step-3--fetch-and-display-image">​</a></h4><ul><li>To display a captured image, the <code>ShowImage</code> component is used. Here's how it works:</li><li>Within <code>ShowImage</code>, you need to subscribe to the <code>IMAGE_TRANSFER</code> topic, receiving the <code>fileUrl</code> associated with the captured image. Once obtained, leverage the <code>fetchBase64File()</code> function from the <code>useFile</code> hook to retrieve the file in <code>base64</code> format from VideoSDK's temporary storage.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-JavaScript">import {
  usePubSub,
  useMeeting,
  useFile
} from '@videosdk.live/react-sdk';
import { useState } from "react";

function ShowImage() {
  const mMeeting = useMeeting();
  const { fetchBase64File } = useFile();
  ​
  const topicTransfer = "IMAGE_TRANSFER";
  ​
  const [imageSrc, setImageSrc] = useState(null);
  const [open, setOpen] = useState(false);

  ​
  usePubSub(topicTransfer, {
    onMessageReceived: (message) =&gt; {
      if (message.senderId !== mMeeting.localParticipant.id) {
        fetchFile({ url: message.message }); // pass fileUrl to fetch the file
      }
    }
  });
  ​
  async function fetchFile({ url }) {
    const token = "&lt;VIDEOSDK_TOKEN&gt;";
    const base64 = await fetchBase64File({ url, token });
    console.log("base64",base64); // here is your image in the form of base64
    setImageSrc(base64);
    setOpen(true);
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ShowImage.js</span></p></figcaption></figure><ul><li>With the <code>base64</code> data in hand, you can now display the image in a modal. This seamless image presentation is integrated into the <code>MeetingView</code> component.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-JavaScript">import { Dialog, Transition } from "@headlessui/react";
import { Fragment } from "react";

function ShowImage() {
  //...

  return (
    &lt;&gt;
      {imageSrc &amp;&amp; (
        &lt;Transition appear show={open} as={Fragment}&gt;
          &lt;Dialog as="div" className="relative z-10" onClose={() =&gt; {}}&gt;
            &lt;Transition.Child
              as={Fragment}
              enter="ease-out duration-300"
              enterFrom="opacity-0"
              enterTo="opacity-100"
              leave="ease-in duration-200"
              leaveFrom="opacity-100"
              leaveTo="opacity-0"
            &gt;
              &lt;div className="fixed inset-0 bg-black/25" /&gt;
            &lt;/Transition.Child&gt;

            &lt;div className="fixed inset-0 overflow-y-auto"&gt;
              &lt;div className="flex min-h-full items-center justify-center p-4 text-center"&gt;
                &lt;Transition.Child
                  as={Fragment}
                  enter="ease-out duration-300"
                  enterFrom="opacity-0 scale-95"
                  enterTo="opacity-100 scale-100"
                  leave="ease-in duration-200"
                  leaveFrom="opacity-100 scale-100"
                  leaveTo="opacity-0 scale-95"
                &gt;
                  &lt;Dialog.Panel className="w-full max-w-md transform overflow-hidden rounded-2xl bg-gray-750 p-4 text-left align-middle shadow-xl transition-all"&gt;
                    &lt;Dialog.Title
                      as="h3"
                      className="text-lg font-medium leading-6 text-center text-gray-900"
                    &gt;
                      Image Preview
                    &lt;/Dialog.Title&gt;
                    &lt;div className="mt-8 flex flex-col items-center justify-center"&gt;
                      {imageSrc ? (
                        &lt;img
                          src={`data:image/jpeg;base64,${imageSrc}`}
                          width={300}
                          height={300}
                        /&gt;
                      ) : (
                        &lt;div style={{ width: 300, height: 300 }}&gt;
                          &lt;p className=" text-white  text-center"&gt;
                            Loading Image...
                          &lt;/p&gt;
                        &lt;/div&gt;
                      )}
                      &lt;div className="mt-4 "&gt;
                        &lt;button
                          type="button"
                          className="rounded border border-white bg-transparent px-4 py-2 text-sm font-medium text-white hover:bg-gray-700"
                          onClick={() =&gt; {
                            setOpen(false);
                          }}
                        &gt;
                          Okay
                        &lt;/button&gt;
                      &lt;/div&gt;
                    &lt;/div&gt;
                  &lt;/Dialog.Panel&gt;
                &lt;/Transition.Child&gt;
              &lt;/div&gt;
            &lt;/div&gt;
          &lt;/Dialog&gt;
        &lt;/Transition&gt;
      )}
    &lt;/&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ShowImage.js</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-JavaScript">function MeetingView() {
  // ... (localParticipant comes from useMeeting(): const { localParticipant } = useMeeting())
  return (
    &lt;div&gt;
      {/* other components */}
      &lt;CaptureImageListner localParticipantId={localParticipant?.id} /&gt;
      &lt;ShowImage /&gt;
    &lt;/div&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView.js</span></p></figcaption></figure><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-js-video-calling-app">✨ Want to Add More Features to React JS Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React video-calling app,</p><p><strong>Check out these additional resources:</strong></p><ul><li>HLS Player: <a href="https://www.videosdk.live/blog/implement-hls-player-in-react-js">Link</a></li><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-react-js">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-react-js">Link</a></li><li>Screen Share Feature: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-js">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-js">Link</a></li><li>Collaborative Whiteboard: <a href="https://www.videosdk.live/blog/integrate-whiteboard-in-react-js">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-in-react-js">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>In conclusion, integrating image capture functionality into a React JS video call app is a powerful enhancement that enriches user interaction and collaboration. This feature not only enhances the app's versatility but also fosters engagement and productivity among users. With intuitive UI components enabling easy photo capture and sharing, the app becomes more comprehensive and user-friendly.</p><p>It's important to keep in mind that VideoSDK offers a comprehensive and intuitive solution for image capture within React JS. 
Its wide array of features and straightforward integration process enables you to effortlessly incorporate this functionality into your application, propelling it towards greater functionality and user engagement.</p><p>If you are new here and want to build an interactive React app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[Simple usage-based Video SDK Pricing]]></title><description><![CDATA[Crystal Clear Pricing of all that we serve on the platform. Enhance working at cheaper rates and better quality.]]></description><link>https://www.videosdk.live/blog/video-sdk-api-pricing</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb71</guid><category><![CDATA[Pricing]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 24 Sep 2024 11:42:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/09/videosdk-pricing-thumbnail.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/09/videosdk-pricing-thumbnail.jpg" alt="Simple usage-based Video SDK Pricing"/><p>This is a shortcut blog. Here you can get all the pricing information of all our products at once. You can always redirect yourself to the links to get detailed information about each of them. <br/></p><p>In this write-up, we have briefly summarized the pricing of every product in one place.
Feel free to reach out to our sales team at any time.<br/></p><h2 id="audio-calling-api">Audio Calling API</h2><p><strong>Pricing = Number of participants x Meeting minutes x Unit Price per minute</strong></p><p><strong>Videosdk.live Price per minute = $ 0.00060</strong><br/></p><p><strong>Example:</strong></p><p>Total Participants (N)= 50, Total Minutes consumed (M)= 100. The Total Participant Minutes (PM)= 5000.</p><p><strong>Total Price= N x M x Price per minute = 50 x 100 x 0.0006 = $3</strong><br/></p><p><a href="https://www.videosdk.live/blog/audio-calling-api-pricing">Click here for a detailed explanation</a></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/audio-calling-api-pricing"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Pricing of Audio Calling API | Videosdk.live</div><div class="kg-bookmark-description">Enjoy the finest interactive group audio calls with affordable pricing. Get the first 10,000 minutes free.
Enjoy paid plans at just $0.0006 per minute</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/blog/favicon/android-icon-192x192.png" alt="Simple usage-based Video SDK Pricing"><span class="kg-bookmark-author">Pricing of Audio Calling API | Videosdk.live</span><span class="kg-bookmark-publisher">Video</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/audio-calling-Pricing-thumbnail.jpg" alt="Simple usage-based Video SDK Pricing"/></div></a></figure><h2 id="video-calling-api">Video Calling API</h2><p><strong>Pricing = Number of participants x Meeting minutes x Unit Price per minute</strong></p><p>Example:</p><ul><li>In a 480p resolution, total participants (N)= 5, total minutes consumed (M)= 60, price per minute= $ 0.00199</li></ul><p>So, total pricing= N x M x price per minute= 5 x 60 x 0.00199 = $0.60<br/></p><ul><li>In a 720p resolution, total pricing= 5 x 60 x 0.00299 = $0.90<br/></li><li>In a 1080p resolution, total pricing= N x M x price per minute= 5 x 60 x 0.00699 = $2.10<br/></li></ul><p><a href="https://www.videosdk.live/blog/video-conferencing-api-pricing">Click here for a detailed explanation</a><br/></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/video-conferencing-api-pricing"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Pricing of Video Calling API | videosdk.live</div><div class="kg-bookmark-description">Video conferencing API pricing, pay-as-you-go from as low as $0.001 per minute. 90% more affordable than other providers.
Get 10,000 minutes free every month.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/blog/favicon/android-icon-192x192.png" alt="Simple usage-based Video SDK Pricing"><span class="kg-bookmark-author">Pricing of Video Calling API | videosdk.live</span><span class="kg-bookmark-publisher">Video</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Pricing-thumbnail.jpg" alt="Simple usage-based Video SDK Pricing"/></div></a></figure><h2 id="interactive-live-streaming">Interactive Live Streaming</h2><p><strong>Cost of hosts</strong>= Unit price (resolution per minute) x number of minutes x Total hosts</p><p><strong>Cost of viewers</strong>= Unit price (resolution per minute) x number of minutes x Total viewers</p><p><strong>Total cost= </strong>Cost of Hosts + Cost of Viewers <br/></p><p>Total streaming minutes= 45, Total Viewers= 1,000, Total Hosts= 1</p><ol><li><strong>At 720p</strong> Total Cost= $ 67.63 = (0.00299 x 45 x 1) + (0.0015 x 45 x 1000)</li><li><strong>At 1080p</strong> Total Cost= $ 180.31 = (0.00699 x 45 x 1) + (0.004 x 45 x 1000)<br/></li></ol><p><a href="https://www.videosdk.live/blog/interactive-live-streaming-pricing">Click here for a detailed explanation</a><br/></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/interactive-live-streaming-pricing"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Interactive Live Streaming Pricing | videosdk.live</div><div class="kg-bookmark-description">Low Latency Interactive Live Streaming at the most effective pricing.
Easy, simple, and comparable pricing allowing 10000+ viewers with multiple hosts facility</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/blog/favicon/android-icon-192x192.png" alt="Simple usage-based Video SDK Pricing"><span class="kg-bookmark-author">Interactive Live Streaming Pricing | videosdk.live</span><span class="kg-bookmark-publisher">Video</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Interactive-live-streaming-Pricing-thumbnail--1--1.jpg" alt="Simple usage-based Video SDK Pricing"/></div></a></figure><h2 id="standard-live-streaming-pricing">Standard Live Streaming Pricing</h2><p><strong>Total cost of a live stream= Encoding + Storage + Delivery</strong></p><p>Lifetime video encoding= $0.05 per minute; Per month video storage= $0.003 per minute</p><p>Delivery= Total minutes x Total Views x Unit price (per resolution)<br/></p><p><strong>Example:</strong> Total Minutes= 30; Total Views= 100. Encoding= 30 x 0.05= $1.5; Storage= 30 x 0.003= $0.09</p><ul><li>Resolution- 240p; Unit price per Minute- 0.0004</li></ul><p>Calculation- Delivery= 30 x 100 x 0.0004 = $1.2</p><p><strong>Total cost at 240p= 1.5 + 0.09 + 1.2= $ 2.79</strong><br/></p><ul><li>Total cost at 360p= 1.5 + 0.09 + 1.8= $ 3.39<br/></li><li>Total cost at 480p= 1.5 + 0.09 + 2.4= $ 3.99<br/></li><li>Total cost at 720p= 1.5 + 0.09 + 3= $ 4.59<br/></li><li>Total cost at 1080p= 1.5 + 0.09 + 3.6= $ 5.19<br/></li></ul><p><a href="https://www.videosdk.live/blog/standard-live-streaming-pricing">Click here for a detailed explanation</a></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/standard-live-streaming-pricing"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Pricing on Standard Live Streaming | Videosdk.live</div><div class="kg-bookmark-description">Live stream at budget-friendly pricing. Easy, simple, and cost-effective fit for your application.
Now integrate live streaming hassle-free with videosdk.live</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/blog/favicon/android-icon-192x192.png" alt="Simple usage-based Video SDK Pricing"><span class="kg-bookmark-author">Pricing on Standard Live Streaming | Videosdk.live</span><span class="kg-bookmark-publisher">Video</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Pricing-thumbnail--1-.jpg" alt="Simple usage-based Video SDK Pricing"/></div></a></figure><h2 id="video-on-demand">Video-on-Demand</h2><p><strong>Total cost of an on-demand video= Encoding + Storage + Delivery</strong></p><p><strong>Lifetime video encoding= $0.05 per minute; Per month video storage= $0.003 per minute</strong><br/></p><p><strong>Example-</strong></p><p>Total Minutes= 30; Total Views= 100</p><p>Encoding- 30 x 0.05= $1.5; Storage- 30 x 0.003= $0.09<br/></p><ul><li>Resolution- 240p; Unit price per Minute- 0.0004</li></ul><p>Calculation- Delivery= 30 x 100 x 0.0004 = $1.2</p><p><strong>Total cost at 240p= 1.5 + 0.09 + 1.2= $ 2.79</strong><br/></p><ul><li>Resolution- 360p; Unit price per Minute- 0.0006</li></ul><p><strong>Total cost at 360p= 1.5 + 0.09 + 1.8= $ 3.39</strong><br/></p><ul><li>Resolution- 480p; Unit price per Minute- 0.0008</li></ul><p><strong>Total cost at 480p= 1.5 + 0.09 + 2.4= $ 3.99</strong><br/></p><ul><li>Resolution- 720p; Unit price per Minute- 0.0010</li></ul><p><strong>Total cost at 720p= 1.5 + 0.09 + 3= $ 4.59</strong><br/></p><ul><li>Resolution- 1080p; Unit price per Minute- 0.0012</li></ul><p><strong>Total cost at 1080p= 1.5 + 0.09 + 3.6= $ 5.19</strong><br/></p><p><a href="https://www.videosdk.live/blog/video-on-demand-pricing">Click here for a detailed explanation</a></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/video-on-demand-pricing"><div 
class="kg-bookmark-content"><div class="kg-bookmark-title">Video-on-Demand Pricing | Videosdk.live</div><div class="kg-bookmark-description">Budget-friendly on-demand videos. Pricing with easy and simpler standards. Cheapest prices for all businesses. Flexible rates with resolutions starting from 240p</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/blog/favicon/android-icon-192x192.png" alt="Simple usage-based Video SDK Pricing"><span class="kg-bookmark-author">Video-on-Demand Pricing | Videosdk.live</span><span class="kg-bookmark-publisher">Video</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/VOD-Pricing-thumbnail.jpg" alt="Simple usage-based Video SDK Pricing"/></div></a></figure><h2 id="cloud-recording">Cloud Recording</h2><p>There are two ways of cloud recording</p><p><strong>a) Only cloud recording  b) Recording with storage and streaming</strong><br/></p><h3 id="only-cloud-recording">Only cloud recording</h3><p><strong>Cost = Total recording minutes x cost per minute</strong><br/></p><p>For example- Total minutes= 20</p><p>Cost of cloud recording= 20 x 0.0015 = $ 0.03<br/></p><h3 id="cloud-recording-with-storage-and-streaming">Cloud Recording with storage and streaming</h3><p><strong>Total minutes= 30, Total Views= 90</strong><br/></p><p>Cost of recording= 30x 0.0015 = $0.045</p><p>Cost of storage= 30 x 0.003= $0.09</p><p>Cost of delivery= 30 x 90 x 0.0010= $2.7<br/></p><p><a href="https://www.videosdk.live/blog/cloud-recording-and-rtmp-pricing">Click here for a detailed explanation</a></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/cloud-recording-and-rtmp-pricing"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Cloud Recording and RTMP Pricing | videosdk.live</div><div class="kg-bookmark-description">Stream to YouTube, Twitch, Facebook, and 30+ 
streaming platforms at once starting from $0.0025 per minute. Enjoy Cloud recording starts at $0.0015 per month</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/blog/favicon/android-icon-192x192.png" alt="Simple usage-based Video SDK Pricing"><span class="kg-bookmark-author">Cloud Recording and RTMP Pricing | videosdk.live</span><span class="kg-bookmark-publisher">Video</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Cloud-Recording---RTMP-thumbnail-1.jpg" alt="Simple usage-based Video SDK Pricing"/></div></a></figure><h2 id="real-time-messaging-protocol-rtmp">Real-Time Messaging Protocol (RTMP)</h2><p><strong>Cost of RTMP= Total RTMP minutes x $0.0025</strong><br/></p><p><strong>Example:</strong></p><p>Total minutes= 60</p><p>Cost of RTMP= 60 x 0.0025= $0.15<br/></p><p><a href="https://www.videosdk.live/blog/cloud-recording-and-rtmp-pricing#real-time-messaging-protocol-rtmp">Click here for a detailed explanation</a></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/cloud-recording-and-rtmp-pricing#real-time-messaging-protocol-rtmp"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Cloud Recording and RTMP Pricing | videosdk.live</div><div class="kg-bookmark-description">Stream to YouTube, Twitch, Facebook, and 30+ streaming platforms at once starting from $0.0025 per minute.
Enjoy Cloud recording starts at $0.0015 per month</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/blog/favicon/android-icon-192x192.png" alt="Simple usage-based Video SDK Pricing"><span class="kg-bookmark-author">Cloud Recording and RTMP Pricing | videosdk.live</span><span class="kg-bookmark-publisher">Video</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Cloud-Recording---RTMP-thumbnail-1.jpg" alt="Simple usage-based Video SDK Pricing"/></div></a></figure><p>This blog gives the reader a basic understanding of the pricing of all our products. Visit the links for detailed help. You can always contact sales support if any queries arise.</p><p>Thanks for reading!</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate RTMP Live Stream in Android(Kotlin) Video Chat App?]]></title><description><![CDATA[This comprehensive guide explores integrating the RTMP Livestream feature seamlessly into your Kotlin app using VideoSDK to amplify it with the capability of livestream.]]></description><link>https://www.videosdk.live/blog/integrate-rtmp-livestream-in-kotlin-video-chat-app</link><guid isPermaLink="false">6604fc702a88c204ca9cf333</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 24 Sep 2024 10:41:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-Kotlin-1.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-Kotlin-1.png" alt="How to Integrate RTMP Live Stream in Android(Kotlin) Video Chat App?"/><p>If you're on the path to enhancing your Android video-calling app with the power of RTMP live streaming, you're in the right place.
In this technical guide, we will explore the integration of the RTMP Livestream feature within your Android video apps using Kotlin with the help of VideoSDK. By following the outlined steps, you will not only learn to create a robust video calling experience but also amplify it with the capability to live stream your meetings to various platforms effortlessly.</p><h2 id="goals">Goals</h2><p>By the End of this Article:</p><ol><li>Create a <a href="https://www.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK.</li><li>Enable RTMP Livestream Feature.</li></ol><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>Before we dive into the implementation steps, let's make sure you complete the necessary steps and prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. 
This token plays a crucial role in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/authentication-and-token#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Java Development Kit (JDK) installed.</li><li>Android Studio version 3.0 or later.</li><li>Android SDK API level 21 or higher.</li><li>A mobile device running Android 5.0 or later.</li></ul><h2 id="integrate-videosdk">Integrate VideoSDK</h2><p>Following the account creation and token generation steps, we'll guide you through the process of adding the VideoSDK library and other dependencies to your project. We'll also ensure your app has the required permissions to access features like audio recording, camera usage, and internet connectivity, all crucial for a seamless video experience.</p><h3 id="step-a-add-the-repositories-to-the-projects-settingsgradle-file">Step (a): Add the repositories to the project's <code>settings.gradle</code> file.</h3><pre><code class="language-kotlin">dependencyResolutionManagement {
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}
</code></pre><h3 id="step-b-include-the-following-dependency-within-your-applications-buildgradle-file">Step (b): Include the following dependency within your application's <code>build.gradle</code> file</h3><pre><code class="language-kotlin">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.26'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // Other dependencies specific to your app
}
</code></pre><blockquote>If your project has set <code>android.useAndroidX=true</code>, then set <code>android.enableJetifier=true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflict.</blockquote><h3 id="step-c-add-permissions-to-your-project">Step (c): Add permissions to your project</h3><p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-kotlin">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
</code></pre><p>These permissions are essential for enabling core functionalities like audio recording, internet connectivity for real-time communication, and camera access for video streams within your video application.</p><h2 id="essential-steps-for-building-the-video-calling-functionality">Essential Steps for Building the Video Calling Functionality</h2><p>Now that your project is set up with VideoSDK, we'll delve into the functionalities that make up your video application. This section outlines the essential steps for implementing core functionalities within your app.</p><p>This section will guide you through four key aspects:</p><h3 id="step-1-generate-a-meetingid">Step 1: Generate a <code>meetingId</code></h3><p>Now, we can create the <code>meetingId</code> from the VideoSDK's rooms API. You can refer to this <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/setup-call/initialize-meeting#generating-meeting-id">documentation</a> to generate a meetingId.</p><h3 id="step-2-initializing-the-meeting">Step 2: Initializing the Meeting</h3><p>After getting the <code>meetingId</code>, the next step is initializing the meeting. For that, we need to:</p><ol><li>Initialize VideoSDK.</li><li>Configure VideoSDK with the token.</li><li>Initialize the meeting with required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code>, and more.</li><li>Add a <code>MeetingEventListener</code> to listen for events such as Meeting Join/Left and Participant Join/Left.</li><li>Join the room with the <code>meeting.join()</code> method.</li></ol><p>Please copy the .xml file of the <code>MeetingActivity</code> from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/activity_meeting.xml">here</a>.</p><pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
  // declare the variables we will be using to handle the meeting
  private var meeting: Meeting? = null
  private var micEnabled = true
  private var webcamEnabled = true

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)

    val token = "" // Replace with the token you generated from the VideoSDK Dashboard
    val meetingId = "" // Replace with the meetingId you have generated
    val participantName = "John Doe"
    
    // 1. Initialize VideoSDK
    VideoSDK.initialize(applicationContext)

    // 2. Configure VideoSDK with the token
    VideoSDK.config(token)

    // 3. Initialize VideoSDK Meeting
    meeting = VideoSDK.initMeeting(
      this@MeetingActivity, meetingId, participantName,
      micEnabled, webcamEnabled, null, null, false, null, null)

    // 4. Add event listener for listening upcoming events
    meeting!!.addEventListener(meetingEventListener)

    // 5. Join VideoSDK Meeting
    meeting!!.join()

    (findViewById&lt;View&gt;(R.id.tvMeetingId) as TextView).text = meetingId
  }

  // creating the MeetingEventListener
  private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
    override fun onMeetingJoined() {
      Log.d("#meeting", "onMeetingJoined()")
    }

    override fun onMeetingLeft() {
      Log.d("#meeting", "onMeetingLeft()")
      meeting = null
      if (!isDestroyed) finish()
    }

    override fun onParticipantJoined(participant: Participant) {
      Toast.makeText(
        this@MeetingActivity, participant.displayName + " joined",
        Toast.LENGTH_SHORT
      ).show()
    }

    override fun onParticipantLeft(participant: Participant) {
      Toast.makeText(
         this@MeetingActivity, participant.displayName + " left",
         Toast.LENGTH_SHORT
      ).show()
    }
  }
}
</code></pre><h3 id="step-3-handle-local-participant-media">Step 3: Handle Local Participant Media</h3><p>After successfully entering the meeting, it's time to manage the webcam and microphone for the local participant (you).</p><p>To enable or disable the webcam, we'll use the <code>Meeting</code> class methods <code>enableWebcam()</code> and <code>disableWebcam()</code>, respectively. Similarly, to mute or unmute the microphone, we'll utilize the methods <code>muteMic()</code> and <code>unmuteMic()</code></p><pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)
    //...Meeting Setup is Here

    // actions
    setActionListeners()
  }

  private fun setActionListeners() {
    // toggle mic
    findViewById&lt;View&gt;(R.id.btnMic).setOnClickListener { view: View? -&gt;
      if (micEnabled) {
        // this will mute the local participant's mic
        meeting!!.muteMic()
        Toast.makeText(this@MeetingActivity, "Mic Muted", Toast.LENGTH_SHORT).show()
      } else {
        // this will unmute the local participant's mic
        meeting!!.unmuteMic()
        Toast.makeText(this@MeetingActivity, "Mic Unmuted", Toast.LENGTH_SHORT).show()
      }
      micEnabled = !micEnabled
    }

    // toggle webcam
    findViewById&lt;View&gt;(R.id.btnWebcam).setOnClickListener { view: View? -&gt;
      if (webcamEnabled) {
        // this will disable the local participant webcam
        meeting!!.disableWebcam()
        Toast.makeText(this@MeetingActivity, "Webcam Disabled", Toast.LENGTH_SHORT).show()
      } else {
        // this will enable the local participant webcam
        meeting!!.enableWebcam()
        Toast.makeText(this@MeetingActivity, "Webcam Enabled", Toast.LENGTH_SHORT).show()
      }
      webcamEnabled = !webcamEnabled
    }

    // leave meeting
    findViewById&lt;View&gt;(R.id.btnLeave).setOnClickListener { view: View? -&gt;
      // this will make the local participant leave the meeting
      meeting!!.leave()
    }
  }
}
</code></pre><h3 id="step-4-handling-the-participants-view">Step 4: Handling the Participants' View</h3><p>To display a list of participants in your video UI, we'll utilize a <code>RecyclerView</code>.</p><p><strong>(a)</strong> This involves creating a new layout for the participant view named <code>item_remote_peer.xml</code> in the <code>res/layout</code> folder. You can copy <code>item_remote_peer.xml </code>file from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/item_remote_peer.xml">here</a>.</p><p><strong>(b)</strong> Create a RecyclerView adapter <code>ParticipantAdapter</code> which will be responsible for displaying the participant list. Within this adapter, define a <code>PeerViewHolder</code> class that extends <code>RecyclerView.ViewHolder</code>.</p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) : RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PeerViewHolder {
    return PeerViewHolder(
      LayoutInflater.from(parent.context)
        .inflate(R.layout.item_remote_peer, parent, false)
    )
  }

  override fun onBindViewHolder(holder: PeerViewHolder, position: Int) {
  }

  override fun getItemCount(): Int {
    return 0
  }

  class PeerViewHolder(view: View) : RecyclerView.ViewHolder(view) {
    // 'VideoView' to show Video Stream
    var participantView: VideoView
    var tvName: TextView

    init {
        tvName = view.findViewById(R.id.tvName)
        participantView = view.findViewById(R.id.participantView)
    }
  }
}
</code></pre><p><strong>(c)</strong> Now, we will render a list of <code>Participant</code> objects for the meeting. We will initialize this list in the constructor of the <code>ParticipantAdapter</code>.</p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) :
    RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  // creating an empty list which will store all participants
  private val participants: MutableList&lt;Participant&gt; = ArrayList()

  init {
    // adding the local participant(You) to the list
    participants.add(meeting.localParticipant)

    // adding Meeting Event listener to get the participant join/leave event in the meeting.
    meeting.addEventListener(object : MeetingEventListener() {
      override fun onParticipantJoined(participant: Participant) {
        // add participant to the list
        participants.add(participant)
        notifyItemInserted(participants.size - 1)
      }

      override fun onParticipantLeft(participant: Participant) {
        var pos = -1
        for (i in participants.indices) {
          if (participants[i].id == participant.id) {
            pos = i
            break
          }
        }
        // remove the participant from the list by position
        // (removing by object reference may fail if instances differ)
        if (pos &gt;= 0) {
          participants.removeAt(pos)
          notifyItemRemoved(pos)
        }
      }
    })
  }

  // replace getItemCount() method with following.
  // this method returns the size of total number of participants
  override fun getItemCount(): Int {
    return participants.size
  }
  //...
}
</code></pre><p><strong>(d)</strong> We have listed our participants. Let's set up the view holder to display a participant video.</p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) :
    RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  // replace onBindViewHolder() method with following.
  override fun onBindViewHolder(holder: PeerViewHolder, position: Int) {
    val participant = participants[position]

    holder.tvName.text = participant.displayName

    // adding the initial video stream for the participant into the 'VideoView'
    for ((_, stream) in participant.streams) {
      if (stream.kind.equals("video", ignoreCase = true)) {
        holder.participantView.visibility = View.VISIBLE
        val videoTrack = stream.track as VideoTrack
        holder.participantView.addTrack(videoTrack)
        break
      }
    }

    // add Listener to the participant which will update start or stop the video stream of that participant
    participant.addEventListener(object : ParticipantEventListener() {
      override fun onStreamEnabled(stream: Stream) {
        if (stream.kind.equals("video", ignoreCase = true)) {
          holder.participantView.visibility = View.VISIBLE
          val videoTrack = stream.track as VideoTrack
          holder.participantView.addTrack(videoTrack)
       }
      }

      override fun onStreamDisabled(stream: Stream) {
        if (stream.kind.equals("video", ignoreCase = true)) {
          holder.participantView.removeTrack()
          holder.participantView.visibility = View.GONE
        }
      }
    })
  }
}
</code></pre><p><strong>(e)</strong> Now, add this adapter to the <code>MeetingActivity</code></p><pre><code class="language-kotlin">override fun onCreate(savedInstanceState: Bundle?) {
  // Meeting Setup...
  //...
  val rvParticipants = findViewById&lt;RecyclerView&gt;(R.id.rvParticipants)
  rvParticipants.layoutManager = GridLayoutManager(this, 2)
  rvParticipants.adapter = ParticipantAdapter(meeting!!)
}
</code></pre><h2 id="integrate-rtmp-livestream-feature">Integrate RTMP Livestream Feature</h2><p>RTMP is a popular protocol for live streaming video content from VideoSDK to platforms such as YouTube, Twitch, Facebook, and others. It enables the transmission of live video streams by connecting VideoSDK to the platform's RTMP server.</p><p>VideoSDK allows you to live stream your meeting to any platform that supports RTMP ingestion. By providing the platform-specific stream key and stream URL, you can connect to the platform's RTMP server and transmit the live video stream.</p><p>VideoSDK also allows you to configure the livestream layout in numerous ways: by setting different prebuilt layouts in the configuration, or by providing your own custom template so the livestream matches your layout of choice.</p><p>This guide will provide an overview of how to implement RTMP Livestreaming in your video chat app.</p><h3 id="start-rtmp-livestreaming">Start RTMP Livestreaming</h3><p><code>startLivestream()</code>, which can be accessed from the <code>Meeting</code> class, starts an RTMP live stream of the meeting. This method accepts two parameters:</p><ul><li><code>outputs</code>: This parameter accepts a list of <code>LivestreamOutput</code> objects that contain the RTMP <code>url</code> and <code>streamKey</code> of the platforms on which you want to start the live stream.</li><li><code>config</code>: This parameter defines how the live stream layout should look. You can pass <code>null</code> for the default layout.</li></ul><pre><code class="language-kotlin">val config = JSONObject()

// Layout Configuration
val layout = JSONObject()
JsonUtils.jsonPut(layout, "type", "GRID") // "SPOTLIGHT" | "SIDEBAR",  Default : "GRID"
JsonUtils.jsonPut(layout, "priority", "SPEAKER") // "PIN", Default : "SPEAKER"
JsonUtils.jsonPut(layout, "gridSize", 4) // MAX : 4
JsonUtils.jsonPut(config, "layout", layout)

// Theme of livestream layout
JsonUtils.jsonPut(config, "theme", "DARK") //  "LIGHT" | "DEFAULT"

val outputs: MutableList&lt;LivestreamOutput&gt; = ArrayList()
outputs.add(LivestreamOutput(RTMP_URL, RTMP_STREAM_KEY))

meeting!!.startLivestream(outputs, config)</code></pre><h3 id="stop-rtmp-livestreaming">Stop RTMP Livestreaming</h3><ul><li><code>stopLivestream()</code>, which can be accessed from the <code>Meeting</code> class, is used to stop the meeting's live stream.</li></ul><h3 id="example">Example</h3><pre><code class="language-kotlin">// keep track of livestream status
var liveStream = false

findViewById&lt;View&gt;(R.id.btnLiveStream).setOnClickListener { view: View? -&gt;
  if (!liveStream) {
    val config = JSONObject()
    val layout = JSONObject()
    JsonUtils.jsonPut(layout, "type", "GRID")
    JsonUtils.jsonPut(layout, "priority", "SPEAKER")
    JsonUtils.jsonPut(layout, "gridSize", 4)
    JsonUtils.jsonPut(config, "layout", layout)
    JsonUtils.jsonPut(config, "theme", "DARK")

    val outputs: MutableList&lt;LivestreamOutput&gt; = ArrayList()
    outputs.add(LivestreamOutput(RTMP_URL, RTMP_STREAM_KEY))

    // Start LiveStream
    meeting!!.startLivestream(outputs, config)
  } else {
    // Stop LiveStream
    meeting!!.stopLivestream()
  }
  // flip the flag ('val' would not allow reassignment)
  liveStream = !liveStream
}</code></pre><h3 id="event-associated-with-livestream">Event associated with Livestream</h3><ul><li><code>onLivestreamStateChanged</code>: Whenever the meeting's livestream state changes, the <code>onLivestreamStateChanged</code> event is triggered.</li></ul><pre><code class="language-kotlin">private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
  override fun onLivestreamStateChanged(livestreamState: String?) {
    when (livestreamState) {
      "LIVESTREAM_STARTING" -&gt; Log.d( "LivestreamStateChanged",
          "Meeting livestream is starting"
      )
      "LIVESTREAM_STARTED" -&gt; Log.d("LivestreamStateChanged",
          "Meeting livestream has started"
      )
      "LIVESTREAM_STOPPING" -&gt; Log.d("LivestreamStateChanged",
          "Meeting livestream is stopping"
      )
      "LIVESTREAM_STOPPED" -&gt; Log.d("LivestreamStateChanged",
          "Meeting livestream has stopped"
      )
    }
  }
}

override fun onCreate(savedInstanceState: Bundle?) {
  //...

  // add listener to meeting
  meeting!!.addEventListener(meetingEventListener)
}</code></pre><h3 id="custom-template%E2%80%8B">Custom Template<a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream#custom-template">​</a></h3><p>With VideoSDK, you can also use your own custom-designed layout template to livestream the meetings. To use the custom template, you need to create a template for which you can <a href="https://docs.videosdk.live/react/guide/interactive-live-streaming/custom-template">follow this guide</a>. Once you have set the template, you can use the <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-livestream">REST API to start</a> the livestream with the <code>templateURL</code> parameter.</p><h2 id="conclusion">Conclusion</h2><p>Integrating the RTMP Livestream feature into your Android (Kotlin) video call application using VideoSDK opens up a world of possibilities for enhancing real-time video communication. VideoSDK offers the flexibility needed to tailor the live stream experience according to your app's unique requirements. By following the instructions provided, you can seamlessly incorporate RTMP Livestreaming capabilities, extend the reach of your application to popular streaming platforms, and deliver captivating visual experiences to your audience. </p><p>To unlock the full potential of VideoSDK and create easy-to-use video chat experiences, developers are encouraged to sign up for VideoSDK and further explore its features.</p><p><a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and Get <strong>10000 minutes free</strong> to take your video app to the next level!</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Screen Share in JavaScript Video Chat App?]]></title><description><![CDATA[Learn to integrate screen sharing into your JavaScript video chat app with VideoSDK. 
Enhance collaboration and communication effortlessly.]]></description><link>https://www.videosdk.live/blog/integrate-screen-share-in-javascript-video-chat-app</link><guid isPermaLink="false">662b589a2a88c204ca9d4ea3</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 24 Sep 2024 09:10:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share-in-JavaScript-video-Call-App.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Screen-Share-in-JavaScript-video-Call-App.png" alt="How to Integrate Screen Share in JavaScript Video Chat App?"/><p>Integrating screen sharing into your JavaScript video chat app expands its capabilities, allowing users to share their screens during calls. This feature enhances collaboration by enabling participants to show presentations, documents, or other content directly from their screens. 
With seamless integration, users can easily switch between video chat and screen-sharing modes, making discussions more interactive and productive.</p><p><strong>Benefits of Integrating Screen Share in a JavaScript Video Chat App:</strong></p><ol><li><strong>Enhanced Collaboration</strong>: Screen sharing facilitates real-time collaboration by allowing users to share their screens, enabling them to demonstrate concepts, share documents, or provide visual instructions during video calls.</li><li><strong>Improved Communication</strong>: Visual aids provided by screen sharing enhance communication clarity, ensuring that all participants are on the same page, reducing misunderstandings, and improving overall comprehension.</li><li><strong>Interactive Learning</strong>: In educational settings, screen sharing facilitates interactive learning experiences, enabling instructors to demonstrate concepts, showcase multimedia content, or lead virtual workshops effectively.</li></ol><p>In the below-provided guide, you'll be able to implement screen-sharing functionality smoothly into your app using JavaScript with VideoSDK.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of Screen Share functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="1%EF%B8%8F%E2%83%A3-create-a-videosdk-account"><strong>1️⃣ </strong>Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/">VideoSDK dashboard</a> and sign up if you don't have an account. 
This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="2%EF%B8%8F%E2%83%A3-generate-your-auth-token"><strong>2️⃣ </strong>Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. Consider referring to the <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a> for a more visual understanding of the account creation and token generation process.</p><h3 id="3%EF%B8%8F%E2%83%A3-prerequisites"><strong>3️⃣ </strong>Prerequisites</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (if you do not have one, follow <a href="https://app.videosdk.live/" rel="noopener noreferrer">VideoSDK Dashboard</a>)</li><li>Have Node and NPM installed on your device.</li></ul><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk">⬇️ Install VideoSDK</h2><p>Import VideoSDK using the <code>&lt;script&gt;</code> tag or install it using the following npm command. Make sure you are in your app directory before you run this command.</p><pre><code class="language-js">&lt;html&gt;
  &lt;head&gt;
    &lt;!--.....--&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;!--.....--&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.85/videosdk.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><ul><li><strong>npm</strong></li></ul><pre><code class="language-js">npm install @videosdk.live/js-sdk</code></pre><ul><li><strong>Yarn</strong></li></ul><pre><code class="language-js">yarn add @videosdk.live/js-sdk</code></pre><h3 id="structure-of-the-project">Structure of the project</h3><p>Your project structure should look like this.</p><pre><code class="language-js">  root
   ├── index.html
   ├── config.js
   ├── index.js</code></pre><p>You will be working on the following files:</p><ul><li><strong>index.html</strong>: Responsible for creating a basic UI.</li><li><strong>config.js</strong>: Responsible for storing the token.</li><li><strong>index.js</strong>: Responsible for rendering the meeting view and the join meeting functionality.</li></ul><h2 id="essential-steps-to-implement-video-call-functionality">Essential Steps to Implement Video Call Functionality</h2><p>Once you've successfully installed VideoSDK in your project, you'll have access to a range of functionalities for building your video call application. Screen Share is one such feature: it leverages VideoSDK's capabilities to let participants share their screen with everyone else in the meeting.</p><h3 id="step-1-design-the-user-interface-ui%E2%80%8B">Step 1: Design the user interface (UI)<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-1--design-the-user-interface-ui">​</a></h3><p>Create an HTML file containing the two screens, <code>join-screen</code> and <code>grid-screen</code>.</p><pre><code class="language-js">&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt; &lt;/head&gt;

  &lt;body&gt;
    &lt;div id="join-screen"&gt;
      &lt;!-- Create new Meeting Button --&gt;
      &lt;button id="createMeetingBtn"&gt;New Meeting&lt;/button&gt;
      OR
      &lt;!-- Join existing Meeting --&gt;
      &lt;input type="text" id="meetingIdTxt" placeholder="Enter Meeting id" /&gt;
      &lt;button id="joinBtn"&gt;Join Meeting&lt;/button&gt;
    &lt;/div&gt;

    &lt;!-- for Managing meeting status --&gt;
    &lt;div id="textDiv"&gt;&lt;/div&gt;

    &lt;div id="grid-screen" style="display: none"&gt;
      &lt;!-- To Display MeetingId --&gt;
      &lt;h3 id="meetingIdHeading"&gt;&lt;/h3&gt;

      &lt;!-- Controllers --&gt;
      &lt;button id="leaveBtn"&gt;Leave&lt;/button&gt;
      &lt;button id="toggleMicBtn"&gt;Toggle Mic&lt;/button&gt;
      &lt;button id="toggleWebCamBtn"&gt;Toggle WebCam&lt;/button&gt;

      &lt;!-- render Video --&gt;
      &lt;div class="row" id="videoContainer"&gt;&lt;/div&gt;
      &lt;!-- render Screen Share Video --&gt;
      &lt;div class="row" id="screenShareVideoContainer"&gt;&lt;/div&gt;
    &lt;/div&gt;

    &lt;!-- Add VideoSDK script --&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.85/videosdk.js"&gt;&lt;/script&gt;
    &lt;script src="config.js"&gt;&lt;/script&gt;
    &lt;script src="index.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>Configure the token in the <code>config.js</code> file, which you can obtain from the <a href="https://app.videosdk.live/">VideoSDK Dashboard</a>.</p><pre><code class="language-js">// Auth token will be used to generate a meeting and connect to it
TOKEN = "Your_Token_Here";</code></pre><p>Next, retrieve all the elements from the DOM and declare the following variables in the <code>index.js</code> file. Then, add an event listener to the join and create meeting buttons.</p><pre><code class="language-js">// Getting Elements from DOM
const joinButton = document.getElementById("joinBtn");
const enableScreenShareButton = document.getElementById("enableScreenShareBtn");
const disableScreenShareButton = document.getElementById(
  "disableScreenShareBtn"
);
const leaveButton = document.getElementById("leaveBtn");
const toggleMicButton = document.getElementById("toggleMicBtn");
const toggleWebCamButton = document.getElementById("toggleWebCamBtn");
const createButton = document.getElementById("createMeetingBtn");
const videoContainer = document.getElementById("videoContainer");
const screenShareVideoContainer = document.getElementById(
  "screenShareVideoContainer"
);
const textDiv = document.getElementById("textDiv");

// Declare Variables
let meeting = null;
let meetingId = "";
let isMicOn = false;
let isWebCamOn = false;

function initializeMeeting() {}

function createLocalParticipant() {}

function createVideoElement() {}

function createAudioElement() {}

function setTrack() {}

// Join Meeting Button Event Listener
joinButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Joining the meeting...";

  const roomId = document.getElementById("meetingIdTxt").value;
  meetingId = roomId;

  initializeMeeting();
});

// Create Meeting Button Event Listener
createButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Please wait, we are joining the meeting";

  // API call to create meeting
  const url = `https://api.videosdk.live/v2/rooms`;
  const options = {
    method: "POST",
    headers: { Authorization: TOKEN, "Content-Type": "application/json" },
  };

  const { roomId } = await fetch(url, options)
    .then((response) =&gt; response.json())
    .catch((error) =&gt; alert("error", error));
  meetingId = roomId;

  initializeMeeting();
});</code></pre><h3 id="step-3-initialize-meeting%E2%80%8B">Step 3: Initialize Meeting<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-meeting">​</a></h3><p>Following that, initialize the meeting using the <code>initMeeting()</code> function and proceed to join the meeting.</p><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  window.VideoSDK.config(TOKEN);

  meeting = window.VideoSDK.initMeeting({
    meetingId: meetingId, // required
    name: "Thomas Edison", // required
    micEnabled: true, // optional, default: true
    webcamEnabled: true, // optional, default: true
  });

  meeting.join();

  // Creating local participant
  createLocalParticipant();

  // Setting local participant stream
  meeting.localParticipant.on("stream-enabled", (stream) =&gt; {
    setTrack(stream, null, meeting.localParticipant, true);
  });

  // meeting joined event
  meeting.on("meeting-joined", () =&gt; {
    textDiv.style.display = "none";
    document.getElementById("grid-screen").style.display = "block";
    document.getElementById(
      "meetingIdHeading"
    ).textContent = `Meeting Id: ${meetingId}`;
  });

  // meeting left event
  meeting.on("meeting-left", () =&gt; {
    videoContainer.innerHTML = "";
  });
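
  // (Sketch) Screen-share state changes surface as a "presenter-changed"
  // event on the meeting; presenterId identifies who is presenting and is
  // null when presenting stops. Shown here for illustration only.
  meeting.on("presenter-changed", function (presenterId) {
    console.log("presenter changed:", presenterId);
  });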

  // Remote participants Event
  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    //  ...
  });

  // participant left
  meeting.on("participant-left", (participant) =&gt; {
    //  ...
  });
}</code></pre><h3 id="step-4-create-the-media-elements%E2%80%8B">Step 4: Create the Media Elements<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-4--create-the-media-elements">​</a></h3><p>In this step, create a function to generate audio and video elements for displaying both local and remote participants. Set the corresponding media track based on whether it's a video or audio stream.</p><pre><code class="language-js">// creating video element
function createVideoElement(pId, name) {
  let videoFrame = document.createElement("div");
  videoFrame.setAttribute("id", `f-${pId}`);
  videoFrame.style.width = "300px";
  //create video
  let videoElement = document.createElement("video");
  videoElement.classList.add("video-frame");
  videoElement.setAttribute("id", `v-${pId}`);
  videoElement.setAttribute("playsinline", true);
  videoElement.setAttribute("width", "300");
  videoFrame.appendChild(videoElement);

  let displayName = document.createElement("div");
  displayName.innerHTML = `Name : ${name}`;

  videoFrame.appendChild(displayName);
  return videoFrame;
}

// creating audio element
function createAudioElement(pId) {
  let audioElement = document.createElement("audio");
  audioElement.setAttribute("autoPlay", "false");
  audioElement.setAttribute("playsInline", "true");
  audioElement.setAttribute("controls", "false");
  audioElement.setAttribute("id", `a-${pId}`);
  audioElement.style.display = "none";
  return audioElement;
}
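
// (Sketch) A dedicated element for the screen-share video, mirroring
// createVideoElement(); the "ss-v-" id prefix is an illustrative convention,
// not part of the quickstart.
function createShareVideoElement(pId) {
  let videoElement = document.createElement("video");
  videoElement.classList.add("video-frame");
  videoElement.setAttribute("id", `ss-v-${pId}`);
  videoElement.setAttribute("playsinline", true);
  videoElement.setAttribute("width", "600");
  return videoElement;
}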

// creating local participant
function createLocalParticipant() {
  let localParticipant = createVideoElement(
    meeting.localParticipant.id,
    meeting.localParticipant.displayName
  );
  videoContainer.appendChild(localParticipant);
}

// setting media track
function setTrack(stream, audioElement, participant, isLocal) {
  if (stream.kind == "video") {
    isWebCamOn = true;
    const mediaStream = new MediaStream();
    mediaStream.addTrack(stream.track);
    let videoElm = document.getElementById(`v-${participant.id}`);
    videoElm.srcObject = mediaStream;
    videoElm
      .play()
      .catch((error) =&gt;
        console.error("videoElem.current.play() failed", error)
      );
  }
  if (stream.kind == "audio") {
    if (isLocal) {
      isMicOn = true;
    } else {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(stream.track);
      audioElement.srcObject = mediaStream;
      audioElement
        .play()
        .catch((error) =&gt; console.error("audioElem.play() failed", error));
    }
  }
}</code></pre><h3 id="step-5-handle-participant-events%E2%80%8B">Step 5: Handle participant events<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-5--handle-participant-events">​</a></h3><p>Thereafter, implement the events related to the participants and the stream.</p><p>The following are the events to be executed in this step:</p><ol><li><code>participant-joined</code>: When a remote participant joins, this event will trigger. In the event callback, create video and audio elements previously defined for rendering their video and audio streams.</li><li><code>participant-left</code>: When a remote participant leaves, this event will trigger. In the event callback, remove the corresponding video and audio elements.</li><li><code>stream-enabled</code>: This event manages the media track of a specific participant by associating it with the appropriate video or audio element.</li></ol><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  // ...

  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    let videoElement = createVideoElement(
      participant.id,
      participant.displayName
    );
    let audioElement = createAudioElement(participant.id);
    // stream-enabled
    participant.on("stream-enabled", (stream) =&gt; {
      setTrack(stream, audioElement, participant, false);
    });
    videoContainer.appendChild(videoElement);
    videoContainer.appendChild(audioElement);
  });

  // participant left
  meeting.on("participant-left", (participant) =&gt; {
    let vElement = document.getElementById(`f-${participant.id}`);
    vElement.remove(); // Element.remove() takes no arguments

    let aElement = document.getElementById(`a-${participant.id}`);
    aElement.remove();
  });
}</code></pre><h3 id="step-6-implement-controls%E2%80%8B">Step 6: Implement Controls<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-6--implement-controls">​</a></h3><p>Next, implement the meeting controls, such as toggling the mic, toggling the webcam, and leaving the meeting.</p><pre><code class="language-js">// leave Meeting Button Event Listener
leaveButton.addEventListener("click", async () =&gt; {
  meeting?.leave();
  document.getElementById("grid-screen").style.display = "none";
  document.getElementById("join-screen").style.display = "block";
});

// Toggle Mic Button Event Listener
toggleMicButton.addEventListener("click", async () =&gt; {
  if (isMicOn) {
    // Disable Mic in Meeting
    meeting?.muteMic();
  } else {
    // Enable Mic in Meeting
    meeting?.unmuteMic();
  }
  isMicOn = !isMicOn;
});
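
// Screen Share Button Event Listeners (sketch; assumes the enable/disable
// buttons exist in the DOM, and uses the meeting methods described in the
// Screen Share section of this guide)
enableScreenShareButton.addEventListener("click", async function () {
  // start sharing the local participant's screen
  meeting?.enableScreenShare();
});

disableScreenShareButton.addEventListener("click", async function () {
  // stop sharing the local participant's screen
  meeting?.disableScreenShare();
});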

// Toggle Web Cam Button Event Listener
toggleWebCamButton.addEventListener("click", async () =&gt; {
  if (isWebCamOn) {
    // Disable Webcam in Meeting
    meeting?.disableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "none";
  } else {
    // Enable Webcam in Meeting
    meeting?.enableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "inline";
  }
  isWebCamOn = !isWebCamOn;
});</code></pre><p><strong>You can check out the complete quickstart example on GitHub.</strong></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/quickstart/tree/main/js-rtc"><div class="kg-bookmark-content"><div class="kg-bookmark-title">quickstart/js-rtc at main · videosdk-live/quickstart</div><div class="kg-bookmark-description">A short and sweet tutorial for getting up to speed with VideoSDK in less than 10 minutes - videosdk-live/quickstart</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate Screen Share in JavaScript Video Chat App?"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/de970b6f7db5c97b5728471cbf13bae285388d9a9ccaa9bd294da67c509984d5/videosdk-live/quickstart" alt="How to Integrate Screen Share in JavaScript Video Chat App?" onerror="this.style.display = 'none'"/></div></a></figure><h2 id="integrate-screen-share-feature">Integrate Screen Share Feature</h2><p>Screen sharing in a meeting is the process of sharing your computer screen with other participants in the meeting. 
It allows everyone in the meeting to see exactly what you are seeing on your screen, which can be helpful for presentations, demonstrations, or collaborations.</p><h3 id="enable-screen-share">Enable Screen Share</h3><p>By using the <code>enableScreenShare()</code> function of the <code>meeting</code> object, the local participant can share their desktop screen to other participants.</p><ul><li>You can also pass a customised screenshare track in <code>enableScreenShare()</code> by using <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/render-media/optimize-video-track#custom-screen-share-track">Custom Screen Share Track</a>.</li><li>The Screen Share stream of a participant can be accessed from the <code>streams</code> property of the <code>Participant</code> object.</li></ul><h3 id="disable-screen-share">Disable Screen Share</h3><p>By using <code>disableScreenShare()</code> function of the <code>meeting</code> object, the local participant can stop sharing their desktop screen to other participants.</p><blockquote><strong>NOTE:</strong><br>Screen Sharing is only supported in the <strong>Desktop browsers</strong> and <strong>not in mobile/tab browser</strong>.</br></blockquote><pre><code class="language-js">let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});


enableScreenShareButton.addEventListener("click", () =&gt; {
  // Enabling ScreenShare
  meeting?.enableScreenShare();
});


disableScreenShareButton.addEventListener("click", () =&gt; {
  // Disabling ScreenShare
  meeting?.disableScreenShare();
});</code></pre><h3 id="events-associated-with-screen-share">Events associated with Screen Share</h3><h4 id="enablescreenshare">enableScreenShare</h4>
<ul><li>Every Participant will receive a callback on the <a href="https://docs.videosdk.live/javascript/api/sdk-reference/participant-class/events#stream-enabled"><code>stream-enabled</code></a> event of the <a href="https://docs.videosdk.live/javascript/api/sdk-reference/participant-class/introduction"><code>participant</code></a> object with the <code>Stream</code> object.</li><li>Every Participant will receive a callback on the <a href="https://docs.videosdk.live/javascript/api/sdk-reference/meeting-class/events#presenter-changed"><code>presenter-changed</code></a> event of the meeting object with the <code>presenterId</code>.</li></ul><h4 id="disablescreenshare">disableScreenShare</h4>
<ul><li>Every Participant will receive a callback on the <a href="https://docs.videosdk.live/react/api/sdk-reference/use-participant/events#onstreamdisabled"><code>stream-disabled</code></a> event of the <a href="https://docs.videosdk.live/javascript/api/sdk-reference/participant-class/introduction"><code>participant</code></a> object with the <code>Stream</code> object.</li><li>Every Participant will receive a callback on the <a href="https://docs.videosdk.live/javascript/api/sdk-reference/meeting-class/events#presenter-changed"><code>presenter-changed</code></a> event of the meeting object with a <code>null</code> value, indicating there is no current presenter.</li></ul><pre><code class="language-js">let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});

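// Helper sketch (illustrative only, not a VideoSDK API): screen sharing
// produces two stream kinds, "share" for the shared video and "shareAudio"
// for tab audio. The handlers below branch on these kinds directly; a tiny
// predicate like this can make that intent explicit.
function isScreenShareStream(kind) {
  return kind === "share" || kind === "shareAudio";
}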
  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    // ...
    // ...
    participant.on("stream-enabled", (stream) =&gt; {
      setTrack(stream, audioElement, participant, false);

      if (stream.kind == "share") {
        //participant turned on screen share
        //Render screenshare logic here
      }
    });
    // ...
    // ...
      
    participant.on("stream-disabled", (stream) =&gt; {
      if (stream.kind === "share") {
        //participant turned off screen share
        //remove screenshare logic here
      }
    });
  });

meeting.on("presenter-changed", (presenterId) =&gt; {
  if (presenterId) {
    //someone started presenting
  } else {
    //someone stopped presenting
  }
});</code></pre><h3 id="screen-share-with-audio">Screen Share with Audio</h3><p>To enable screen sharing with audio, select the <em>Share tab audio</em> option when sharing the Chrome tab, as shown below.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/screenshare-with-audio-ca8ee299f68c32ba08cd811e3fb7cd2f.png" class="kg-image" alt="How to Integrate Screen Share in JavaScript Video Chat App?" loading="lazy" width="596" height="482"/></figure><p>After clicking the <code>Share</code> button, you will receive the selected tab's audio stream in the participant's <code>screenShareAudioStream</code>.</p><blockquote><strong>NOTE:</strong><br>Screen Share with Audio is only supported while sharing <strong>Chrome Tab</strong> in a <strong>Chromium based browser</strong> like Google Chrome, Brave etc.</br></blockquote><h3 id="rendering-screen-share-and-screen-share-audio%E2%80%8B">Rendering Screen Share and Screen Share Audio​</h3><p>To display the screenshare video stream, you will receive it in the participant's stream-enabled callback with the stream kind set as <code>share</code>.</p><pre><code class="language-js">participant.on("stream-enabled", (stream) =&gt; {
  if (stream.kind == "share") {
    const videoElem = createShareVideoElement(participant.id, stream);

    // add videoElem to your container (skipped for the local participant, where no element is returned)
    if (videoElem) screenShareVideoContainer.appendChild(videoElem);
  }

  if (stream.kind == "shareAudio") {
  }
});

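// Naming sketch (a convention used in this guide, not an SDK requirement):
// each media element id is a kind-specific prefix plus the participant id,
// matching the `v-share-${pId}` and `a-share-${pId}` ids used below.
function mediaElementId(kind, pId) {
  // prefixes mirror the ids this guide assigns per stream kind
  const prefixes = { video: "v-", audio: "a-", share: "v-share-", shareAudio: "a-share-" };
  return (prefixes[kind] || "") + pId;
}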
// creating video element
function createShareVideoElement(pId, stream) {
  if (pId == meeting.localParticipant.id) return;

  let videoElement = document.createElement("video");
  videoElement.setAttribute("autoPlay", "false");
  videoElement.setAttribute("controls", "false");
  videoElement.setAttribute("id", `v-share-${pId}`);

  const mediaStream = new MediaStream();
  mediaStream.addTrack(stream.track);
  videoElement.srcObject = mediaStream;
  videoElement
    .play()
    .catch((error) =&gt; console.error("audioElem.play() failed", error));
  return videoElement;
}

// creating audio element
function createShareAudioElement(pId, stream) {}</code></pre><p>Now to render the screenshare audio stream, you will receive it in the participant's stream-enabled callback with the stream kind set as <code>shareAudio</code>.</p><pre><code class="language-js">participant.on("stream-enabled", (stream) =&gt; {
  if (stream.kind == "share") {
  }
  if (stream.kind == "shareAudio") {
    const audioElem = createShareAudioElement(participant.id, stream);

    // add audioElem to your container (skipped for the local participant, where no element is returned)
    if (audioElem) screenShareVideoContainer.appendChild(audioElem);
  }
});

// creating video element
function createShareVideoElement(pId, stream) {}

// creating audio element
function createShareAudioElement(pId, stream) {
  if (pId == meeting.localParticipant.id) return;

  let audioElement = document.createElement("audio");
  audioElement.setAttribute("autoPlay", "false");
  audioElement.setAttribute("playsInline", "false");
  audioElement.setAttribute("controls", "false");
  audioElement.setAttribute("id", `a-share-${pId}`);
  audioElement.style.display = "none";

  const mediaStream = new MediaStream();
  mediaStream.addTrack(stream.track);
  audioElement.srcObject = mediaStream;
  audioElement
    .play()
    .catch((error) =&gt; console.error("audioElem.play() failed", error));
  return audioElement;
}</code></pre><h2 id="%E2%9C%A8-want-to-add-more-features-to-javascript-video-calling-app">✨ Want to Add More Features to JavaScript Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your JavaScript video-calling app, check out these additional resources:</p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-javascript-video-chat-app">Link</a></li><li>Image Capture: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-javascript-chat-app">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-mode-in-javascript-video-chat-app">Link</a></li><li>RTMP Livestream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-javascript-video-chat-app">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>Integrating screen sharing into your JavaScript video chat app is a powerful way to enhance communication and collaboration. By enabling users to share their screens during calls, you empower them to demonstrate ideas, share presentations, and collaborate on documents more effectively. </p><p>With a seamless integration process provided by VideoSDK, implementing screen sharing becomes straightforward, ensuring a smooth user experience. Whether for remote work, online education, or virtual meetings, this feature adds significant value to your app, making it more versatile and engaging for users.</p><p>If you are new here and want to build an interactive JavaScript app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>. 
This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Image Capture in JavaScript Video Chat App?]]></title><description><![CDATA[Integrate image capture into your JavaScript video chat app for enhanced user experience and versatility. Enable users to snap and share moments seamlessly.]]></description><link>https://www.videosdk.live/blog/integrate-image-capture-in-javascript-chat-app</link><guid isPermaLink="false">662b5fb72a88c204ca9d4f20</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Tue, 24 Sep 2024 05:49:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/05/Image-Capture-in-JavaScript-Video-Call-App.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/Image-Capture-in-JavaScript-Video-Call-App.jpg" alt="How to Integrate Image Capture in JavaScript Video Chat App?"/><p>Integrating image capture in a <a href="https://www.videosdk.live/blog/video-calling-javascript">JavaScript video chat app</a> enhances user interaction and functionality. With this feature, users can take snapshots during video calls, adding a new dimension to their communication experience. By incorporating image capture, the app enables users to capture memorable moments or important information shared during the call. 
JavaScript's flexibility allows seamless integration of image capture, ensuring a smooth user experience across different devices and browsers.</p><p><strong>Benefits of Integrating Image Capture in a JavaScript Video Chat App:</strong></p><ol><li><strong>Enhanced Communication</strong>: Image capture adds a visual element to video calls, improving communication and understanding.</li><li><strong>Memorable Moments</strong>: Users can capture snapshots of important moments during the call, preserving memories.</li><li><strong>Increased Engagement</strong>: Interactive features like image capture <a href="https://www.commandbar.com/blog/saas-user-engagement-lifecycle">keep users engaged</a> and active during video calls.</li></ol><p><strong>Use Cases of Integrating Image Capture in a JavaScript Video Chat App:</strong></p><ol><li><strong>Education:</strong> Students can capture whiteboard content or diagrams shared during online classes for later review.</li><li><strong>Business Meetings:</strong> Participants can capture key points discussed in meetings or presentations, ensuring clarity and accountability.</li><li><strong>Remote Collaboration:</strong> Teams working remotely can capture design mockups, charts, or code snippets for collaborative brainstorming sessions.</li></ol><p>This tutorial will guide you through the step-by-step process of integrating image capture functionality into a JavaScript chat app with <a href="https://www.videosdk.live/">VideoSDK</a>.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of image capture functionality, you must use the capabilities that VideoSDK offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a>. 
This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. Consider referring to the provided tutorial for a more visual understanding of the account creation and token generation process.</p><h3 id="prerequisites">Prerequisites</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (if you do not have one, follow <a href="https://app.videosdk.live/" rel="noopener noreferrer">VideoSDK Dashboard</a>)</li><li>Have Node and NPM installed on your device.</li></ul><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk">⬇️ Install VideoSDK</h2><p>Import VideoSDK using the <code>&lt;script&gt;</code> tag or install it using the following npm command. Make sure you are in your app directory before you run this command.</p><pre><code class="language-js">&lt;html&gt;
  &lt;head&gt;
    &lt;!--.....--&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;!--.....--&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.83/videosdk.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><ul><li><strong>npm</strong></li></ul><pre><code class="language-js">npm install @videosdk.live/js-sdk</code></pre><ul><li><strong>Yarn</strong></li></ul><pre><code class="language-js">yarn add @videosdk.live/js-sdk</code></pre><h3 id="structure-of-the-project">Structure of the project</h3><p>Your project structure should look like this.</p><pre><code class="language-js">  root
   ├── index.html
   ├── config.js
   ├── index.js</code></pre><p>You will be working on the following files:</p><ul><li><strong>index.html</strong>: Responsible for creating a basic UI.</li><li><strong>config.js</strong>: Responsible for storing the token.</li><li><strong>index.js</strong>: Responsible for rendering the meeting view and the join meeting functionality.</li></ul><h2 id="essential-steps-to-implement-video-call-functionality">Essential Steps to Implement Video Call Functionality</h2><p>Once you've successfully installed VideoSDK in your project, you'll have access to a range of functionalities for building your video call application. Image Capture is one such feature, letting you take a snapshot of a participant directly from their video stream.</p><h3 id="step-1-design-the-user-interface-ui%E2%80%8B">Step 1: Design the user interface (UI)<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-1--design-the-user-interface-ui">​</a></h3><p>Create an HTML file containing the screens, <code>join-screen</code> and <code>grid-screen</code>.</p><pre><code class="language-js">&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt; &lt;/head&gt;

  &lt;body&gt;
    &lt;div id="join-screen"&gt;
      &lt;!-- Create new Meeting Button --&gt;
      &lt;button id="createMeetingBtn"&gt;New Meeting&lt;/button&gt;
      OR
      &lt;!-- Join existing Meeting --&gt;
      &lt;input type="text" id="meetingIdTxt" placeholder="Enter Meeting id" /&gt;
      &lt;button id="joinBtn"&gt;Join Meeting&lt;/button&gt;
    &lt;/div&gt;

    &lt;!-- for Managing meeting status --&gt;
    &lt;div id="textDiv"&gt;&lt;/div&gt;

    &lt;div id="grid-screen" style="display: none"&gt;
      &lt;!-- To Display MeetingId --&gt;
      &lt;h3 id="meetingIdHeading"&gt;&lt;/h3&gt;

      &lt;!-- Controllers --&gt;
      &lt;button id="leaveBtn"&gt;Leave&lt;/button&gt;
      &lt;button id="toggleMicBtn"&gt;Toggle Mic&lt;/button&gt;
      &lt;button id="toggleWebCamBtn"&gt;Toggle WebCam&lt;/button&gt;

      &lt;!-- render Video --&gt;
      &lt;div class="row" id="videoContainer"&gt;&lt;/div&gt;
    &lt;/div&gt;

    &lt;!-- Add VideoSDK script --&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.83/videosdk.js"&gt;&lt;/script&gt;
    &lt;script src="config.js"&gt;&lt;/script&gt;
    &lt;script src="index.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>Configure the token in the <code>config.js</code> file, which you can obtain from the <a href="https://app.videosdk.live/login" rel="noopener noreferrer">VideoSDK Dashboard</a>.</p><pre><code class="language-js">// Auth token will be used to generate a meeting and connect to it
TOKEN = "Your_Token_Here";</code></pre><p>Next, retrieve all the elements from the DOM and declare the following variables in the <code>index.js</code> file. Then, add an event listener to the join and create meeting buttons.</p><pre><code class="language-js">// Getting Elements from DOM
const joinButton = document.getElementById("joinBtn");
const leaveButton = document.getElementById("leaveBtn");
const toggleMicButton = document.getElementById("toggleMicBtn");
const toggleWebCamButton = document.getElementById("toggleWebCamBtn");
const createButton = document.getElementById("createMeetingBtn");
const videoContainer = document.getElementById("videoContainer");
const textDiv = document.getElementById("textDiv");

// Declare Variables
let meeting = null;
let meetingId = "";
let isMicOn = false;
let isWebCamOn = false;

function initializeMeeting() {}

function createLocalParticipant() {}

function createVideoElement() {}

function createAudioElement() {}

function setTrack() {}

// Join Meeting Button Event Listener
joinButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Joining the meeting...";

  roomId = document.getElementById("meetingIdTxt").value;
  meetingId = roomId;

  initializeMeeting();
});

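// Defensive helper (illustrative only, not part of this guide's required code):
// the create-meeting fetch below assumes a successful response. A hypothetical
// unwrap like this fails loudly when the Create Room API rejects the request
// (for example, when the token is invalid), instead of silently yielding undefined.
function extractRoomId(responseBody) {
  if (!responseBody || !responseBody.roomId) {
    throw new Error("Room creation failed: " + JSON.stringify(responseBody));
  }
  return responseBody.roomId;
}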
// Create Meeting Button Event Listener
createButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Please wait, we are joining the meeting";

  // API call to create meeting
  const url = `https://api.videosdk.live/v2/rooms`;
  const options = {
    method: "POST",
    headers: { Authorization: TOKEN, "Content-Type": "application/json" },
  };

  const { roomId } = await fetch(url, options)
    .then((response) =&gt; response.json())
    .catch((error) =&gt; alert("error", error));
  meetingId = roomId;

  initializeMeeting();
});</code></pre><h3 id="step-3-initialize-meeting%E2%80%8B">Step 3: Initialize Meeting<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-meeting">​</a></h3><p>Following that, initialize the meeting using the <code>initMeeting()</code> function and proceed to join the meeting.</p><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  window.VideoSDK.config(TOKEN);

  meeting = window.VideoSDK.initMeeting({
    meetingId: meetingId, // required
    name: "Thomas Edison", // required
    micEnabled: true, // optional, default: true
    webcamEnabled: true, // optional, default: true
  });

  meeting.join();

  // Creating local participant
  createLocalParticipant();

  // Setting local participant stream
  meeting.localParticipant.on("stream-enabled", (stream) =&gt; {
    setTrack(stream, null, meeting.localParticipant, true);
  });

  // meeting joined event
  meeting.on("meeting-joined", () =&gt; {
    textDiv.style.display = "none";
    document.getElementById("grid-screen").style.display = "block";
    document.getElementById(
      "meetingIdHeading"
    ).textContent = `Meeting Id: ${meetingId}`;
  });

  // meeting left event
  meeting.on("meeting-left", () =&gt; {
    videoContainer.innerHTML = "";
  });

  // Remote participants Event
  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    //  ...
  });

  // participant left
  meeting.on("participant-left", (participant) =&gt; {
    //  ...
  });
}</code></pre><h3 id="step-4-create-the-media-elements%E2%80%8B">Step 4: Create the Media Elements<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-4--create-the-media-elements">​</a></h3><p>In this step, create a function to generate audio and video elements for displaying both local and remote participants. Set the corresponding media track based on whether it's a video or audio stream.</p><pre><code class="language-js">// creating video element
function createVideoElement(pId, name) {
  let videoFrame = document.createElement("div");
  videoFrame.setAttribute("id", `f-${pId}`);
  videoFrame.style.width = "300px";
    

  //create video
  let videoElement = document.createElement("video");
  videoElement.classList.add("video-frame");
  videoElement.setAttribute("id", `v-${pId}`);
  videoElement.setAttribute("playsinline", true);
  videoElement.setAttribute("width", "300");
  videoFrame.appendChild(videoElement);

  let displayName = document.createElement("div");
  displayName.innerHTML = `Name : ${name}`;

  videoFrame.appendChild(displayName);
  return videoFrame;
}

// creating audio element
function createAudioElement(pId) {
  let audioElement = document.createElement("audio");
  audioElement.setAttribute("autoPlay", "false");
  audioElement.setAttribute("playsInline", "true");
  audioElement.setAttribute("controls", "false");
  audioElement.setAttribute("id", `a-${pId}`);
  audioElement.style.display = "none";
  return audioElement;
}

// creating local participant
function createLocalParticipant() {
  let localParticipant = createVideoElement(
    meeting.localParticipant.id,
    meeting.localParticipant.displayName
  );
  videoContainer.appendChild(localParticipant);
}

// setting media track
function setTrack(stream, audioElement, participant, isLocal) {
  if (stream.kind == "video") {
    isWebCamOn = true;
    const mediaStream = new MediaStream();
    mediaStream.addTrack(stream.track);
    let videoElm = document.getElementById(`v-${participant.id}`);
    videoElm.srcObject = mediaStream;
    videoElm
      .play()
      .catch((error) =&gt;
        console.error("videoElem.current.play() failed", error)
      );
  }
  if (stream.kind == "audio") {
    if (isLocal) {
      isMicOn = true;
    } else {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(stream.track);
      audioElement.srcObject = mediaStream;
      audioElement
        .play()
        .catch((error) =&gt; console.error("audioElem.play() failed", error));
    }
  }
}</code></pre><h3 id="step-5-handle-participant-events%E2%80%8B">Step 5: Handle participant events<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-5--handle-participant-events">​</a></h3><p>Thereafter, implement the events related to the participants and the stream.</p><p>The following are the events to be executed in this step:</p><ol><li><code>participant-joined</code>: When a remote participant joins, this event will trigger. In the event callback, create video and audio elements previously defined for rendering their video and audio streams.</li><li><code>participant-left</code>: When a remote participant leaves, this event will trigger. In the event callback, remove the corresponding video and audio elements.</li><li><code>stream-enabled</code>: This event manages the media track of a specific participant by associating it with the appropriate video or audio element.</li></ol><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  // ...

  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    let videoElement = createVideoElement(
      participant.id,
      participant.displayName
    );
    let audioElement = createAudioElement(participant.id);
    // stream-enabled
    participant.on("stream-enabled", (stream) =&gt; {
      setTrack(stream, audioElement, participant, false);
    });
    videoContainer.appendChild(videoElement);
    videoContainer.appendChild(audioElement);
  });

  // participants left
  meeting.on("participant-left", (participant) =&gt; {
    let vElement = document.getElementById(`f-${participant.id}`);
    vElement.remove();

    let aElement = document.getElementById(`a-${participant.id}`);
    aElement.remove();
  });
}</code></pre><h3 id="step-6-implement-controls%E2%80%8B">Step 6: Implement Controls<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-6--implement-controls">​</a></h3><p>Next, implement the meeting controls, such as toggling the mic, toggling the webcam, and leaving the meeting.</p><pre><code class="language-js">// leave Meeting Button Event Listener
leaveButton.addEventListener("click", async () =&gt; {
  meeting?.leave();
  document.getElementById("grid-screen").style.display = "none";
  document.getElementById("join-screen").style.display = "block";
});

// Toggle Mic Button Event Listener
toggleMicButton.addEventListener("click", async () =&gt; {
  if (isMicOn) {
    // Disable Mic in Meeting
    meeting?.muteMic();
  } else {
    // Enable Mic in Meeting
    meeting?.unmuteMic();
  }
  isMicOn = !isMicOn;
});

// Toggle Web Cam Button Event Listener
toggleWebCamButton.addEventListener("click", async () =&gt; {
  if (isWebCamOn) {
    // Disable Webcam in Meeting
    meeting?.disableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "none";
  } else {
    // Enable Webcam in Meeting
    meeting?.enableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "inline";
  }
  isWebCamOn = !isWebCamOn;
});</code></pre><p><strong>You can check out the complete quickstart example on GitHub.</strong></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/quickstart/tree/main/js-rtc"><div class="kg-bookmark-content"><div class="kg-bookmark-title">quickstart/js-rtc at main · videosdk-live/quickstart</div><div class="kg-bookmark-description">A short and sweet tutorial for getting up to speed with VideoSDK in less than 10 minutes - videosdk-live/quickstart</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate Image Capture in JavaScript Video Chat App?"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/de970b6f7db5c97b5728471cbf13bae285388d9a9ccaa9bd294da67c509984d5/videosdk-live/quickstart" alt="How to Integrate Image Capture in JavaScript Video Chat App?" onerror="this.style.display = 'none'"/></div></a></figure><h2 id="integrate-image-capture-feature">Integrate Image Capture Feature</h2><p>This guide provides instructions on capturing images of participants from a video stream. This capability proves particularly valuable in Video KYC scenarios, enabling the capture of images where users can hold up their identity documents for verification.</p><p>By using the <code>captureImage()</code> method of the <code>Participant</code> class, you can capture an image of a local participant from their video stream.</p><ul><li>You have the option to specify the desired height and width in the <code>captureImage()</code> function; however, these parameters are optional. 
If not provided, the VideoSDK will automatically use the dimensions of the local participant's webcamStream.</li><li>The <code>captureImage()</code> function returns the image in the form of a <code>base64</code> string.</li></ul><pre><code class="language-js">let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});

let isWebcamOn; // status of your webcam, true/false

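// Hypothetical helper (not part of the VideoSDK API): captureImage() resolves
// with a raw base64 string, so prefix it to build a data URL you can assign to
// an img element's src attribute for preview or upload.
function toImageDataUrl(base64Data, mimeType) {
  // "image/jpeg" is an assumed default; match the format your SDK version returns
  return "data:" + (mimeType || "image/jpeg") + ";base64," + base64Data;
}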
async function imageCapture() {
  if (isWebcamOn) {
    const base64Data = await meeting.localParticipant.captureImage(); // captureImage will return base64 string
    console.log("base64", base64Data);
  } else {
    console.error("Camera must be on to capture an image");
  }
}</code></pre><blockquote><strong>TIP:</strong><br>Rather than utilizing the <code>participants.get(participantId).captureImage()</code> method to capture an image of a remote participant, it is advisable to refer to the provided documentation for a more effective approach. <br><br>The <code>participants.get(participantId).captureImage()</code> method captures an image from the current video stream being consumed from the remote participant. The alternative documentation is likely to provide a better and more appropriate method to achieve the desired result.</br></br></br></blockquote><h3 id="how-to-capture-an-image-of-a-remote-participant%E2%80%8B">How to capture an image of a remote participant?​</h3><ul><li>Before proceeding, it's crucial to understand <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/upload-fetch-temporary-file">VideoSDK's temporary file storage system</a> and the underlying <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/pubsub">pubsub mechanism</a>.</li><li>Here's a breakdown of the steps, using the names Participant A and Participant B for clarity:</li></ul><h4 id="step-1-initiate-image-capture-request">Step 1: Initiate Image Capture Request</h4>
<p>In this step, you first have to send a request to Participant B, whose image you want to capture, using PubSub.</p><ul><li>In order to do that, you have to create a PubSub topic called <code>IMAGE_CAPTURE</code> in the <code>index.js</code> file.</li><li>Here, you will be using the <code>sendOnly</code> property of the <code>publish()</code> method, so the request is sent only to that participant.</li></ul><pre><code class="language-js">let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});

function sendRequest({ participantId }) {
  // Pass the participantId of the participant whose image you want to capture
  // Here, it will be Participant B's id, as you want to capture the image of Participant B
  let message = "Sending request to capture image";
  meeting.pubSub
    .publish("IMAGE_CAPTURE", message, {
      persist: false,
      sendOnly: [participantId],
    })
    .then((res) =&gt; console.log(`response of publish : ${res}`))
    .catch((err) =&gt; console.log(`error of publish : ${err}`));
}</code></pre><p>Next, add a button for capturing a remote participant's image inside the <code>createVideoElement()</code> function in the <code>index.js</code> file.</p><pre><code class="language-js">// creating video element
function createVideoElement(pId, name) {
// ...

  // Create wrapper div
  let wrapperDiv = document.createElement("div");
    
  // Create button element
  let buttonElement = document.createElement("button");
  buttonElement.setAttribute("id", `btnCaptureImage-${pId}`);
  buttonElement.textContent = "CaptureImage";
  buttonElement.classList.add("capture-button");
    
  // Append video and button elements to the wrapper div
  wrapperDiv.appendChild(videoElement);
  wrapperDiv.appendChild(buttonElement);

  // Append the wrapper div to the container
  division.appendChild(wrapperDiv);

  // Set up event listener for hover effect

  wrapperDiv.addEventListener("mouseover", function () {
    buttonElement.style.display = "block";
  });

  wrapperDiv.addEventListener("mouseout", function () {
    buttonElement.style.display = "none";
  });

  wrapperDiv.style.width = `${videoElement.getAttribute("width")}px`;
  wrapperDiv.style.height = `${videoElement.getAttribute("height")}px`;

//...
}</code></pre><p>Now, call <code>sendRequest()</code> when the capture-image button is clicked, in <code>index.js</code>:</p><pre><code class="language-js">// creating video element
function createVideoElement(pId, name) {
// ...
    
    buttonElement.addEventListener("click", async function () {
    if (pId == meeting.localParticipant.id) {
      const base64Data = await meeting.localParticipant.captureImage();
      const base64 = "data:image/jpeg;base64," + base64Data;
      captureImageDiv.style.display = "block";
      captureImage.src = base64;
      captureImage.onload = function () {
        alert(this.width + "x" + this.height);
      };
    } else {
      const participant = participants.get(pId);
      sendRequest({ participantId: participant.id });
    }
  });
}</code></pre><h4 id="step-2-capture-store-and-send-the-image">Step 2: Capture, Store, and Send the Image</h4><p>To capture an image on the remote participant's side [Participant B], subscribe to the <code>IMAGE_CAPTURE</code> topic in the <code>meeting-joined</code> event of the <code>Meeting</code> class. When a participant receives an image capture request, the handler uses the <code>captureImage()</code> method of the <code>Participant</code> class to capture the image.</p><pre><code class="language-js">let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});

async function captureAndStoreImage() {
  // capture image
  const base64Data = await meeting.localParticipant.captureImage();
  console.log("base64Data", base64Data);
}

const _handleOnImageCaptureMessageReceived = (message) =&gt; {
  try {
    if (message.senderId !== meeting.localParticipant.id) {
      // capture and store image when message received
      captureAndStoreImage(message.senderId); // pass the requester's id so the fileUrl can be sent back
    }
  } catch (err) {
    console.log("error on image capture", err);
  }
};
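// The senderId check above can be factored into a tiny pure helper. This is
// an illustrative sketch (the helper is not part of the VideoSDK API): the
// handler defensively ignores any message it published itself and only
// reacts to requests that came from a remote participant.
function isFromRemote(message, localParticipantId) {
  return message.senderId !== localParticipantId;
}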

meeting.on("meeting-joined", () =&gt; {
  // ...
  meeting.pubSub.subscribe("IMAGE_CAPTURE", (data) =&gt; {
    _handleOnImageCaptureMessageReceived(data);
  });
});</code></pre><p>The captured image is then stored in VideoSDK's temporary file storage system using the <code>uploadBase64File()</code> function of the <code>Meeting</code> class. This operation returns a unique <code>fileUrl</code> of the stored image.</p><pre><code class="language-js">let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});
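// (Hypothetical helper, not part of the SDK.) Build the argument object for
// uploadBase64File(); as the code below notes, fileName must include a file
// extension, so this sketch validates that up front.
function buildUploadPayload(base64Data, token, fileName = "myImage.jpeg") {
  if (!/\.[A-Za-z0-9]+$/.test(fileName)) {
    throw new Error("fileName must include a file extension");
  }
  return { base64Data, token, fileName };
}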

async function captureAndStoreImage() {
  // capture image
  const base64Data = await meeting.localParticipant.captureImage();
  // upload image to videosdk storage system
  const fileUrl = await meeting.uploadBase64File({
    base64Data,
    token: "VIDEOSDK_TOKEN",
    fileName: "myImage.jpeg", // specify a name for image file with extension
  });

  console.log("fileUrl", fileUrl);
}</code></pre><p>Next, the <code>fileUrl</code> is sent back to the participant who initiated the request [Participant A], using the <code>IMAGE_TRANSFER</code> topic. Pass the requester's <code>senderId</code> (received in the pubsub message) into the function so the response can be targeted.</p><pre><code class="language-js">async function captureAndStoreImage(senderId) {
  // ...

  const fileUrl = await meeting.uploadBase64File({
    base64Data,
    token: "VIDEOSDK_TOKEN",
    fileName: "myImage.jpeg", // specify a name for image file with extension
  });

  // publish image Transfer
  meeting.pubSub
    .publish("IMAGE_TRANSFER", fileUrl, {
      persist: false,
      sendOnly: [senderId],
    })
    .then((res) =&gt; console.log(`response of publish : ${res}`))
    .catch((err) =&gt; console.log(`error of publish : ${err}`));
}</code></pre><h4 id="step-3-fetch-and-display-image">Step 3: Fetch and Display Image</h4>
<p>Upon publishing on the <code>IMAGE_TRANSFER</code> topic, subscribe to the same topic within the meeting-joined event of the Meeting class. This will provide access to the <code>fileUrl</code> associated with the captured image. Once obtained, use the <code>fetchBase64File()</code> method of the Meeting class to retrieve the file in <code>base64</code> format from VideoSDK's temporary storage.</p><pre><code class="language-js">async function captureImageAndDisplay(message) {
  const token = "VIDEOSDK_TOKEN";
  let base64 = await meeting.fetchBase64File({
    url: message.message,
    token,
  });
  console.log("base64", base64); // here is your image in a form of base64
}
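// (Hypothetical helper, not part of the SDK.) Browsers need a data-URL
// prefix in front of the raw base64 before it can be used as an image src;
// a small helper avoids mistyping the prefix:
function toDataUrl(base64Data, mimeType = "image/jpeg") {
  return "data:" + mimeType + ";base64," + base64Data;
}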

meeting.on("meeting-joined", () =&gt; {
  // ...
  meeting.pubSub.subscribe("IMAGE_TRANSFER", (data) =&gt; {
    if (data.senderId !== meeting.localParticipant.id) {
      captureImageAndDisplay(data);
    }
  });
});</code></pre><p>With the <code>base64</code> data in hand, you can now display the image.</p><pre><code class="language-js">let captureImage = document.getElementById("captureImage");

async function captureImageAndDisplay(message) {
  const token = "VIDEOSDK_TOKEN";
  let base64 = await meeting.fetchBase64File({
    url: message.message,
    token,
  });
  console.log("base64", base64); // here is your image in a form of base64

  base64 = "data:image/jpeg;base64," + base64;
  captureImage.src = base64;
}</code></pre><pre><code class="language-html">&lt;!-- add this to your HTML --&gt;
&lt;img id="captureImage" /&gt;</code></pre><blockquote><strong>NOTE:</strong><br>The file stored in the <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/upload-fetch-temporary-file">VideoSDK's temporary file storage system</a> will be automatically deleted once the current room/meeting comes to an end.</br></blockquote><h2 id="%E2%9C%A8-want-to-add-more-features-to-javascript-video-calling-app">✨ Want to Add More Features to JavaScript Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your JavaScript video-calling app, check out these additional resources:</p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-javascript-video-chat-app">Link</a></li><li>Screen Share Feature: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-javascript-video-chat-app">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-mode-in-javascript-video-chat-app">Link</a></li><li>RTMP Livestream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-javascript-video-chat-app">Link</a></li></ul><h2 id="wrap-up">Wrap-up</h2><p>Integrating image capture into a JavaScript video call app significantly enhances its functionality and user experience. This feature not only adds a visual dimension to communication but also provides users with a valuable tool for capturing and preserving important moments, information, or visuals shared during the call.</p><p>VideoSDK's capabilities ensure seamless integration of image capture functionality with VideoSDK, providing a smooth user experience across different platforms and devices. 
By incorporating this feature, the JavaScript video call app becomes more versatile, empowering users with enhanced communication tools and fostering a more interactive and engaging environment.</p><p>If you are new here and want to build an interactive JavaScript app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get 10,000 free minutes every month. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate RTMP Live Stream in iOS Video Call App?]]></title><description><![CDATA[Learn to integrate RTMP live streaming in your iOS video call app. This guide provides step-by-step instructions for seamless integration.]]></description><link>https://www.videosdk.live/blog/integrate-rtmp-live-stream-in-ios-video-call-app</link><guid isPermaLink="false">662781312a88c204ca9d4bf6</guid><category><![CDATA[iOS]]></category><category><![CDATA[RTMP]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Mon, 23 Sep 2024 12:51:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-in-iOS-Video-Call-App.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/RTMP-Live-Stream-in-iOS-Video-Call-App.jpg" alt="How to Integrate RTMP Live Stream in iOS Video Call App?"/><p>Integrating RTMP (<a href="https://www.videosdk.live/blog/what-is-rtmp">Real-Time Messaging Protocol</a>) live stream into your iOS video call app enables seamless broadcasting of live video content. With this integration, users can easily share live streams during video calls, enhancing communication and collaboration. 
By leveraging RTMP technology, your app can efficiently transmit real-time video data to streaming servers, ensuring high-quality and low-latency streaming experiences. </p><p><strong>Benefits of Integrating RTMP Live Stream in an iOS App:</strong></p><ol><li><strong>Enhanced Communication</strong>: Integrating RTMP live stream enables users to share live video content during video calls, enhancing communication and collaboration.</li><li><strong>Real-Time Interaction</strong>: Users can engage in real-time discussions and interactions while streaming live video, fostering dynamic communication.</li><li><strong>Versatility</strong>: The integration adds versatility to your iOS video call app, allowing users to utilize it for both regular video calls and live streaming purposes.</li></ol><p><strong>Use Cases of Integrating RTMP Live Stream in an iOS App:</strong></p><ol><li><strong>Training Sessions</strong>: Various departments conduct training sessions during the conference. The RTMP live stream feature enables remote employees to join these sessions, ensuring everyone receives the necessary training regardless of their physical location.</li><li><strong>Keynote Address</strong>: The CEO delivers the keynote address via video call, and the RTMP live stream feature allows all employees to watch the address in real-time, irrespective of their location.</li><li><strong>Product Demonstrations</strong>: The company showcases new products through live demonstrations. 
Employees can watch the live stream and interact with the presenters, providing feedback and asking questions.</li></ol><p>This comprehensive guide will lead you through the step-by-step process of integrating RTMP live streaming into your iOS video call app using VideoSDK.</p><h2 id="how-to-build-an-ios-live-streaming-app-using-rtmp">How to build an iOS live Streaming app using RTMP</h2><h3 id="getting-started-with-videosdk">Getting Started with VideoSDK</h3><p>VideoSDK enables the opportunity to integrate video &amp; audio calling into Web, Android, and iOS applications with so many different frameworks. It is the best infrastructure solution that provides programmable SDKs and REST APIs to build scalable video conferencing applications. This guide will get you running with the VideoSDK video &amp; audio calling in minutes.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/login">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. 
For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/server-setup">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><ul><li>iOS 11.0+</li><li>Xcode 12.0+</li><li>Swift 5.0+</li></ul><p>This app will contain two screens:</p><p><strong>Join Screen</strong>: This screen allows the user to either create a meeting or join an existing meeting.</p><p><strong>Meeting Screen</strong>: This screen contains the local and remote participant views and meeting controls such as enabling/disabling the mic &amp; camera and leaving the meeting.</p><h2 id="integrate-videosdk%E2%80%8B">Integrate VideoSDK​</h2><p>To install VideoSDK, initialize CocoaPods in the project by running the following command:</p><pre><code class="language-bash">pod init</code></pre><p>This creates a Podfile in your project folder. Open it and add the VideoSDK dependency as shown below:</p><pre><code class="language-ruby">pod 'VideoSDKRTC', :git =&gt; 'https://github.com/videosdk-live/videosdk-rtc-ios-sdk.git'</code></pre><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_podfile.png" class="kg-image" alt="How to Integrate RTMP Live Stream in iOS Video Call App?" loading="lazy"/></figure><p>Then run the command below to install the pod:</p><pre><code class="language-bash">pod install</code></pre><p>Then declare the camera and microphone permissions in <code>Info.plist</code>:</p><pre><code class="language-xml">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><h3 id="project-structure">Project Structure</h3><pre><code class="language-swift">iOSQuickStartDemo
   ├── Models
        ├── RoomStruct.swift
        └── MeetingData.swift
   ├── ViewControllers
        ├── StartMeetingViewController.swift
        └── MeetingViewController.swift
   ├── AppDelegate.swift // Default
   ├── SceneDelegate.swift // Default
   └── APIService
           └── APIService.swift
   ├── Main.storyboard // Default
   ├── LaunchScreen.storyboard // Default
   └── Info.plist // Default
 Pods
     └── Podfile</code></pre><h3 id="create-models%E2%80%8B">Create models<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#create-models">​</a></h3><p>Create Swift files defining the <code>MeetingData</code> and <code>RoomStruct</code> models, which hold meeting and room data in a structured form.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
struct MeetingData {
    let token: String
    let name: String
    let meetingId: String
    let micEnabled: Bool
    let cameraEnabled: Bool
}</code></pre><figcaption>MeetingData.swift</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
struct RoomsStruct: Codable {
    let createdAt, updatedAt, roomID: String?
    let links: Links?
    let id: String?
    enum CodingKeys: String, CodingKey {
        case createdAt, updatedAt
        case roomID = "roomId"
        case links, id
    }
}

// MARK: - Links
struct Links: Codable {
    let getRoom, getSession: String?
    enum CodingKeys: String, CodingKey {
        case getRoom = "get_room"
        case getSession = "get_session"
    }
}</code></pre><figcaption>RoomStruct.swift</figcaption></figure><h2 id="essential-steps-for-building-the-video-calling">Essential Steps for Building the Video Calling App</h2><p>This guide is designed to walk you through the process of integrating RTMP live streaming with <a href="https://www.videosdk.live/">VideoSDK</a>. We'll cover everything from setting up the SDK to incorporating the feature into your app's interface, ensuring a smooth and efficient implementation process.</p><h3 id="step-1-get-started-with-apiclient%E2%80%8B">Step 1: Get started with APIClient<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apiclient">​</a></h3><p>Before jumping to anything else, we have to write an API call to generate a unique <code>meetingId</code>. You will require an <strong>authentication token</strong>; you can generate it either using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-server-api-example</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">Video SDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation

let TOKEN_STRING: String = "&lt;AUTH_TOKEN&gt;"

class APIService {

  class func createMeeting(token: String, completion: @escaping (Result&lt;String, Error&gt;) -&gt; Void) {

    let url = URL(string: "https://api.videosdk.live/v2/rooms")!

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.addValue(TOKEN_STRING, forHTTPHeaderField: "authorization")

    URLSession.shared.dataTask(
      with: request,
      completionHandler: { (data: Data?, response: URLResponse?, error: Error?) in

        DispatchQueue.main.async {

          if let data = data, let utf8Text = String(data: data, encoding: .utf8) {
            do {
              let dataArray = try JSONDecoder().decode(RoomsStruct.self, from: data)

              completion(.success(dataArray.roomID ?? ""))
            } catch {
              print("Error while creating a meeting: \(error)")
              completion(.failure(error))
            }
          }
        }
      }
    ).resume()
  }
}
</code></pre><figcaption>APIService.swift</figcaption></figure><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>The Join Screen will work as a medium to either schedule a new meeting or join an existing meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
import UIKit

class StartMeetingViewController: UIViewController, UITextFieldDelegate {

  private var serverToken = ""

  /// MARK: outlet for create meeting button
  @IBOutlet weak var btnCreateMeeting: UIButton!

  /// MARK: outlet for join meeting button
  @IBOutlet weak var btnJoinMeeting: UIButton!

  /// MARK: outlet for meetingId textfield
  @IBOutlet weak var txtMeetingId: UITextField!

  /// MARK: Initialize the private variable with TOKEN_STRING &amp;
  /// setting the meeting id in the textfield
  override func viewDidLoad() {
    txtMeetingId.delegate = self
    serverToken = TOKEN_STRING
    txtMeetingId.text = "PROVIDE-STATIC-MEETING-ID"
  }

  /// MARK: method for joining meeting through the segue named "StartMeeting"
  /// after validating the serverToken in not empty
  func joinMeeting() {

    txtMeetingId.resignFirstResponder()

    if !serverToken.isEmpty {
      DispatchQueue.main.async {
        self.dismiss(animated: true) {
          self.performSegue(withIdentifier: "StartMeeting", sender: nil)
        }
      }
    } else {
      print("Please provide auth token to start the meeting.")
    }
  }

  /// MARK: outlet for create meeting button tap event
  @IBAction func btnCreateMeetingTapped(_ sender: Any) {
    print("show loader while meeting gets connected with server")
    joinRoom()
  }

  /// MARK: outlet for join meeting button tap event
  @IBAction func btnJoinMeetingTapped(_ sender: Any) {
    if (txtMeetingId.text ?? "").isEmpty {

      print("Please provide meeting id to start the meeting.")
      txtMeetingId.resignFirstResponder()
    } else {
      joinMeeting()
    }
  }

  // MARK: - method for creating room api call and getting meetingId for joining meeting

  func joinRoom() {

    APIService.createMeeting(token: self.serverToken) { result in
      if case .success(let meetingId) = result {
        DispatchQueue.main.async {
          self.txtMeetingId.text = meetingId
          self.joinMeeting()
        }
      }
    }
  }

  /// MARK: preparing to animate to meetingViewController screen
  override func prepare(for segue: UIStoryboardSegue, sender: Any?) {

    guard let navigation = segue.destination as? UINavigationController,

      let meetingViewController = navigation.topViewController as? MeetingViewController
    else {
      return
    }

    meetingViewController.meetingData = MeetingData(
      token: serverToken,
      name: txtMeetingId.text ?? "Guest",
      meetingId: txtMeetingId.text ?? "",
      micEnabled: true,
      cameraEnabled: true
    )
  }
}
</code></pre><figcaption>StartMeetingViewController.swift</figcaption></figure><h3 id="step-3-initialize-and-join-meeting%E2%80%8B">Step 3: Initialize and Join Meeting<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-and-join-meeting">​</a></h3><p>Using the provided <code>token</code> and <code>meetingId</code>, we will configure and initialize the meeting in <code>viewDidLoad()</code>.</p><p>Then, we'll add <strong>@IBOutlet</strong> for <code>localParticipantVideoView</code> and <code>remoteParticipantVideoView</code>, which can render local and remote participant media, respectively.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import UIKit
import VideoSDKRTC
import WebRTC
import AVFoundation

class MeetingViewController: UIViewController {

// MARK: - Properties
// outlet for local participant container view
   @IBOutlet weak var localParticipantViewContainer: UIView!

// outlet for label for meeting Id
   @IBOutlet weak var lblMeetingId: UILabel!

// outlet for local participant video view
   @IBOutlet weak var localParticipantVideoView: RTCMTLVideoView!

// outlet for remote participant video view
   @IBOutlet weak var remoteParticipantVideoView: RTCMTLVideoView!

// outlet for remote participant no media label
   @IBOutlet weak var lblRemoteParticipantNoMedia: UILabel!

// outlet for remote participant container view
   @IBOutlet weak var remoteParticipantViewContainer: UIView!

// outlet for local participant no media label
   @IBOutlet weak var lblLocalParticipantNoMedia: UILabel!

// Meeting data - required to start
   var meetingData: MeetingData!

// current meeting reference
   private var meeting: Meeting?

    // MARK: - video participants including self to show in UI
    private var participants: [Participant] = []

        // MARK: - Lifecycle Events

        override func viewDidLoad() {
        super.viewDidLoad()
        // configure the VideoSDK with token
        VideoSDK.config(token: meetingData.token)

        // init meeting
        initializeMeeting()

        // set meeting id in button text
        lblMeetingId.text = "Meeting Id: \(meetingData.meetingId)"
      }

      override func viewWillAppear(_ animated: Bool) {
          super.viewWillAppear(animated)
          navigationController?.navigationBar.isHidden = true
      }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        navigationController?.navigationBar.isHidden = false
        NotificationCenter.default.removeObserver(self)
    }

        // MARK: - Meeting

        private func initializeMeeting() {

            // Initialize the VideoSDK
            meeting = VideoSDK.initMeeting(
                meetingId: meetingData.meetingId,
                participantName: meetingData.name,
                micEnabled: meetingData.micEnabled,
                webcamEnabled: meetingData.cameraEnabled
            )

            // Adding the listener to meeting
            meeting?.addEventListener(self)

            // joining the meeting
            meeting?.join()
        }
}</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>After initializing the meeting in the previous step, we will now add <strong>@IBOutlet</strong> for <code>btnLeave</code>, <code>btnToggleVideo</code> and <code>btnToggleMic</code> which can control the media in the meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">class MeetingViewController: UIViewController {

...

    // outlet for leave button
    @IBOutlet weak var btnLeave: UIButton!

    // outlet for toggle video button
    @IBOutlet weak var btnToggleVideo: UIButton!

    // outlet for toggle audio button
    @IBOutlet weak var btnToggleMic: UIButton!

    // bool for mic
    var micEnabled = true
    // bool for video
    var videoEnabled = true


    // outlet for leave button click event
    @IBAction func btnLeaveTapped(_ sender: Any) {
            DispatchQueue.main.async {
                self.meeting?.leave()
                self.dismiss(animated: true)
            }
        }

    // outlet for toggle mic button click event
    @IBAction func btnToggleMicTapped(_ sender: Any) {
        if micEnabled {
            micEnabled = !micEnabled // false
            self.meeting?.muteMic()
        } else {
            micEnabled = !micEnabled // true
            self.meeting?.unmuteMic()
        }
    }

    // outlet for toggle video button click event
    @IBAction func btnToggleVideoTapped(_ sender: Any) {
        if videoEnabled {
            videoEnabled = !videoEnabled // false
            self.meeting?.disableWebcam()
        } else {
            videoEnabled = !videoEnabled // true
            self.meeting?.enableWebcam()
        }
    }

...

}</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-5-implementing-meetingeventlistener%E2%80%8B">Step 5: Implementing <code>MeetingEventListener</code>​</h3><p>In this step, we'll create an extension of <code>MeetingViewController</code> that conforms to <code>MeetingEventListener</code>, implementing the <code>onMeetingJoined</code>, <code>onMeetingLeft</code>, <code>onParticipantJoined</code>, <code>onParticipantLeft</code>, <code>onParticipantChanged</code>, <code>onSpeakerChanged</code>, etc. methods.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">
extension MeetingViewController: MeetingEventListener {

        /// Meeting started
        func onMeetingJoined() {

            // handle local participant on start
            guard let localParticipant = self.meeting?.localParticipant else { return }
            // add to list
            participants.append(localParticipant)

            // add event listener
            localParticipant.addEventListener(self)

            localParticipant.setQuality(.high)

            if(localParticipant.isLocal){
                self.localParticipantViewContainer.isHidden = false
            } else {
                self.remoteParticipantViewContainer.isHidden = false
            }
        }

        /// Meeting ended
        func onMeetingLeft() {
            // remove listeners
            meeting?.localParticipant.removeEventListener(self)
            meeting?.removeEventListener(self)
        }

        /// A new participant joined
        func onParticipantJoined(_ participant: Participant) {
            participants.append(participant)

            // add listener
            participant.addEventListener(self)

            participant.setQuality(.high)

            if(participant.isLocal){
                self.localParticipantViewContainer.isHidden = false
            } else {
                self.remoteParticipantViewContainer.isHidden = false
            }
        }

        /// A participant left from the meeting
        /// - Parameter participant: participant object
        func onParticipantLeft(_ participant: Participant) {
            participant.removeEventListener(self)
            guard let index = self.participants.firstIndex(where: { $0.id == participant.id }) else {
                return
            }
            // remove participant from list
            participants.remove(at: index)
            // hide from ui
            UIView.animate(withDuration: 0.5){
                if(!participant.isLocal){
                    self.remoteParticipantViewContainer.isHidden = true
                }
            }
        }

        /// Called when speaker is changed
        /// - Parameter participantId: participant id of the speaker, nil when no one is speaking.
        func onSpeakerChanged(participantId: String?) {

            // show indication for active speaker
            if let participant = participants.first(where: { $0.id == participantId }) {
                self.showActiveSpeakerIndicator(participant.isLocal ? localParticipantViewContainer : remoteParticipantViewContainer, true)
            }

            // hide indication for others participants
            let otherParticipants = participants.filter { $0.id != participantId }
            for participant in otherParticipants {
                if participants.count &gt; 1 &amp;&amp; participant.isLocal {
                    showActiveSpeakerIndicator(localParticipantViewContainer, false)
                } else {
                    showActiveSpeakerIndicator(remoteParticipantViewContainer, false)
                }
            }
        }

        func showActiveSpeakerIndicator(_ view: UIView, _ show: Bool) {
            view.layer.borderWidth = 4.0
            view.layer.borderColor = show ? UIColor.blue.cgColor : UIColor.clear.cgColor
        }

}

</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-6-implementing-participanteventlistener">Step 6: Implementing <code>ParticipantEventListener</code></h3><p>In this stage, we'll add an extension of <code>MeetingViewController</code> that conforms to <code>ParticipantEventListener</code>, implementing the <code>onStreamEnabled</code> and <code>onStreamDisabled</code> methods, which fire when a participant's audio or video stream is enabled or disabled.</p><p>The <code>updateUI</code> function updates the user interface (showing or hiding the video views and no-media labels) according to the MediaStream state.</p><pre><code class="language-swift">
extension MeetingViewController: ParticipantEventListener {

    /// Participant has enabled mic, video or screenshare
    /// - Parameters:
    ///   - stream: enabled stream object
    ///   - participant: participant object
    func onStreamEnabled(_ stream: MediaStream, forParticipant participant: Participant) {
        updateUI(participant: participant, forStream: stream, enabled: true)
    }

    /// Participant has disabled mic, video or screenshare
    /// - Parameters:
    ///   - stream: disabled stream object
    ///   - participant: participant object
    func onStreamDisabled(_ stream: MediaStream, forParticipant participant: Participant) {
        updateUI(participant: participant, forStream: stream, enabled: false)
    }

}

private extension MeetingViewController {

    func updateUI(participant: Participant, forStream stream: MediaStream, enabled: Bool) {
        switch stream.kind {
        case .state(value: .video):
            if let videotrack = stream.track as? RTCVideoTrack {
                if enabled {
                    // video enabled: show the video view and attach the track
                    DispatchQueue.main.async {
                        UIView.animate(withDuration: 0.5) {
                            if participant.isLocal {
                                self.localParticipantViewContainer.isHidden = false
                                self.localParticipantVideoView.isHidden = false
                                self.localParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.localParticipantViewContainer.bringSubviewToFront(self.localParticipantVideoView)
                                videotrack.add(self.localParticipantVideoView)
                                self.lblLocalParticipantNoMedia.isHidden = true
                            } else {
                                self.remoteParticipantViewContainer.isHidden = false
                                self.remoteParticipantVideoView.isHidden = false
                                self.remoteParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.remoteParticipantViewContainer.bringSubviewToFront(self.remoteParticipantVideoView)
                                videotrack.add(self.remoteParticipantVideoView)
                                self.lblRemoteParticipantNoMedia.isHidden = true
                            }
                        }
                    }
                } else {
                    // video disabled: hide the video view and detach the track
                    // (dispatched to the main thread since this updates the UI)
                    DispatchQueue.main.async {
                        UIView.animate(withDuration: 0.5) {
                            if participant.isLocal {
                                self.localParticipantViewContainer.isHidden = false
                                self.localParticipantVideoView.isHidden = true
                                self.lblLocalParticipantNoMedia.isHidden = false
                                videotrack.remove(self.localParticipantVideoView)
                            } else {
                                self.remoteParticipantViewContainer.isHidden = false
                                self.remoteParticipantVideoView.isHidden = true
                                self.lblRemoteParticipantNoMedia.isHidden = false
                                videotrack.remove(self.remoteParticipantVideoView)
                            }
                        }
                    }
                }
            }

        case .state(value: .audio):
            // indicate a muted mic with a red border around the participant's tile
            if participant.isLocal {
                localParticipantViewContainer.layer.borderWidth = 4.0
                localParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            } else {
                remoteParticipantViewContainer.layer.borderWidth = 4.0
                remoteParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            }

        default:
            break
        }
    }
}

...
</code></pre><h3 id="known-issue%E2%80%8B">Known Issue<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#known-issue">​</a></h3><p>If your video renders outside its container, add the following lines to the <code>viewDidLoad</code> method in the <code>MeetingViewController.swift</code> file.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">override func viewDidLoad() {
  super.viewDidLoad()

  localParticipantVideoView.frame = CGRect(x: 10, y: 0,
      width: localParticipantViewContainer.frame.width,
      height: localParticipantViewContainer.frame.height)

  localParticipantVideoView.bounds = CGRect(x: 10, y: 0,
      width: localParticipantViewContainer.frame.width,
      height: localParticipantViewContainer.frame.height)

  localParticipantVideoView.clipsToBounds = true

  remoteParticipantVideoView.frame = CGRect(x: 10, y: 0,
      width: remoteParticipantViewContainer.frame.width,
      height: remoteParticipantViewContainer.frame.height)

  remoteParticipantVideoView.bounds = CGRect(x: 10, y: 0,
      width: remoteParticipantViewContainer.frame.width,
      height: remoteParticipantViewContainer.frame.height)

  remoteParticipantVideoView.clipsToBounds = true
}
</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><p>After following these steps, you will be able to seamlessly integrate live streaming into your application, allowing users to broadcast their video calls directly to an RTMP server. This opens the door to a variety of applications, from hosting live Q&amp;A sessions to running interactive workshops.</p><blockquote><strong>TIP:</strong><br/>Stuck anywhere? Check out this <a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example" rel="noopener noreferrer">example code</a> on GitHub</blockquote><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - videosdk-live/videosdk-rtc-ios-sdk-example: WebRTC based video conferencing SDK for iOS (Swift / Objective C)</div><div class="kg-bookmark-description">WebRTC based video conferencing SDK for iOS (Swift / Objective C) - videosdk-live/videosdk-rtc-ios-sdk-example</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate RTMP Live Stream in iOS Video Call App?"/><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/3d2f5eef43ad3d03fbe693ee2c8633053215e90252a63bddc775d8b8d8a7e380/videosdk-live/videosdk-rtc-ios-sdk-example" alt="How to Integrate RTMP Live Stream in iOS Video Call App?"/></div></a></figure><h2 id="integrate-rtmp-live-stream-in-ios-video-app">Integrate RTMP Live Stream in iOS Video App</h2><p>RTMP is a widely used protocol for live streaming video content to platforms like YouTube, Twitch, and Facebook.
Provide the platform-specific stream key and URL, and VideoSDK connects to that platform's RTMP server to transmit the live video stream.</p><p>The guide below walks through starting and stopping an RTMP live stream.</p><h3 id="start-live-stream">Start Live Stream</h3><p><code>startLivestream()</code>, accessible from the <code>Meeting</code> class, starts an RTMP live stream of the meeting. It accepts the following parameter:</p><p><code>outputs</code>: a list of <code>LivestreamOutput</code> objects containing the RTMP <code>url</code> and <code>streamKey</code> of each platform on which you want to start the live stream.</p><pre><code class="language-swift">class MeetingViewController {

private let platformUrl = "&lt;url-of-the-platform&gt;"
private let privateKey = "&lt;private-key&gt;"

    // button to start livestream
    @IBAction func startLiveStreamButtonTapped(_ sender: Any) {
         self.meeting?.startLivestream(outputs: [LivestreamOutput(url: platformUrl, streamKey: privateKey)])
    }
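
    // Sketch (not part of the original guide): to stream to multiple
    // platforms at once, pass a list with several LivestreamOutput
    // objects. secondPlatformUrl / secondStreamKey below are hypothetical
    // values for an additional RTMP destination:
    //
    // self.meeting?.startLivestream(outputs: [
    //     LivestreamOutput(url: platformUrl, streamKey: privateKey),
    //     LivestreamOutput(url: secondPlatformUrl, streamKey: secondStreamKey)
    // ])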
}</code></pre><h3 id="stop-live-stream">Stop Live Stream</h3><p><code>stopLivestream()</code>, also accessible from the <code>Meeting</code> class, stops the meeting's live stream.</p><pre><code class="language-swift">class MeetingViewController {
    // button to stop livestream
    @IBAction func stopLiveStreamButtonTapped(_ sender: Any) {
         self.meeting?.stopLivestream()
    }
}</code></pre><h3 id="event-associated-with-livestream%E2%80%8B">Event associated with Livestream<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream#event-associated-with-livestream">​</a></h3><p>Whenever the livestream state changes, the <code>onLivestreamStateChanged</code> event is triggered.</p><pre><code class="language-swift">extension MeetingViewController: MeetingEventListener {
    // rtmp-event
    func onLivestreamStateChanged(state: LiveStreamState) {
        switch(state) {
            case .LIVESTREAM_STARTING:
                print("livestream starting")
            
            case .LIVESTREAM_STARTED:
                print("livestream started")
                
            case .LIVESTREAM_STOPPING:
                print("livestream stopping")
        
            case .LIVESTREAM_STOPPED:
                print("livestream stopped")
        }
    }
}</code></pre><h3 id="custom-template%E2%80%8B">Custom Template<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/recording-and-live-streaming/rtmp-livestream#custom-template">​</a></h3><p>With VideoSDK, you can also use your own custom-designed layout template to livestream the meetings. To use a custom template, first create one by <a href="https://docs.videosdk.live/react/guide/interactive-live-streaming/custom-template">following this guide</a>. Once you have set the template, you can use the <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-livestream">REST API to start</a> the live stream with the <code>templateURL</code> parameter.</p><h2 id="conclusion">Conclusion</h2><p>Integrating RTMP live streaming into an iOS video call app using VideoSDK is a powerful way to enhance real-time communication. With this integration, users can seamlessly broadcast live video content alongside their video calls, expanding the app's functionality.</p><p>Unlock the full potential of VideoSDK today and craft seamless video experiences! <strong><a href="https://app.videosdk.live/dashboard">Sign up</a></strong> now to receive 10,000 free minutes and take your video app to new heights.</p>]]></content:encoded></item><item><title><![CDATA[Post-Call Transcription & Summary in Flutter]]></title><description><![CDATA[Discover how to integrate Post-Call Transcription & Summary in Flutter. Our guide helps you enhance your app with precise transcriptions and efficient call summaries for better user experience.]]></description><link>https://www.videosdk.live/blog/post-call-transcription-in-flutter</link><guid isPermaLink="false">6683a14820fab018df10f330</guid><category><![CDATA[Flutter]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 23 Sep 2024 09:45:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/Post-time-transcription-and-summary-2.png" medium="image"/><content:encoded><![CDATA[<h3 id="introduction">Introduction</h3><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Post-time-transcription-and-summary-2.png" alt="Post-Call Transcription & Summary in Flutter"/><p>Post-call transcription and summary is a powerful feature provided by <a href="https://www.videosdk.live/">VideoSDK</a> that allows users to generate detailed transcriptions and summaries of recorded meetings after they have concluded.
This feature is particularly beneficial for capturing and documenting important information discussed during meetings, ensuring that nothing is missed and that there is a comprehensive record of the conversation.</p><h3 id="how-post-call-transcription-works">How Post-Call Transcription Works?</h3><p><strong>Post-call transcription</strong> involves processing the recorded audio or video content of a meeting to produce a textual representation of the conversation. Here’s a step-by-step breakdown of how it works:</p><ol><li><strong>Recording the Meeting:</strong> During the meeting, the audio and video are recorded. This can include everything that was said and any shared content, such as presentations or screen shares.</li><li><strong>Uploading the Recording:</strong> Once the meeting is over, the recorded file is uploaded to the VideoSDK platform. This can be done automatically or manually, depending on the configuration.</li><li><strong>Transcription Processing:</strong> The uploaded recording is then processed by VideoSDK’s transcription engine. This engine uses advanced speech recognition technology to convert spoken words into written text.</li><li><strong>Retrieving the Transcription:</strong> After the transcription process is complete, the textual representation of the meeting is made available. 
This text can be accessed via the VideoSDK API and used in various applications.</li></ol><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e-1.png" class="kg-image" alt="Post-Call Transcription & Summary in Flutter" loading="lazy" width="2906" height="1446"/></figure><h3 id="benefits-of-post-call-transcription">Benefits of Post-Call Transcription</h3><ul><li><strong>Accurate Documentation:</strong> Provides a precise record of what was discussed, which is invaluable for meeting minutes, legal documentation, and reference.</li><li><strong>Enhanced Accessibility:</strong> Makes content accessible to those who may have missed the meeting or have hearing impairments.</li><li><strong>Easy Review and Analysis:</strong> Enables quick review of key points and decisions made during the meeting without having to re-watch the entire recording.</li></ul><h2 id="lets-get-started">Let's Get started </h2><p>VideoSDK empowers you to seamlessly integrate the video calling feature into your Flutter application within minutes.</p><p>In this quickstart, you'll explore the group calling feature of VideoSDK. Follow the step-by-step guide to integrate it within your application.</p><h3 id="prerequisites">Prerequisites</h3><p><br>Before proceeding, ensure your development environment meets the following requirements:</br></p><ul><li>Video SDK Developer Account: If you don't have one, you can create it by following the instructions on the Video SDK Dashboard.</li><li> Basic Understanding of Flutter: Familiarity with Flutter development is necessary.</li><li> Flutter Video SDK: Ensure you have the Flutter Video SDK installed.</li><li> Flutter Installation: Flutter should be installed on your device.</li><li>One should have a VideoSDK account to generate tokens. 
Visit the VideoSDK <strong><a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">dashboard</a></strong> to generate a token.</li></ul><h2 id="getting-started-with-the-code%E2%80%8B">Getting Started with the Code!<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#getting-started-with-the-code">​</a></h2><p>Follow the steps to create the environment necessary to add video calls to your app. You can also find the code sample for <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">quickstart here</a>.</p><h3 id="create-a-new-flutter-project%E2%80%8B">Create a new Flutter project.<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#create-a-new-flutter-project">​</a></h3><p>Create a new Flutter app using the <code>flutter create</code> command.</p><h3 id="install-video-sdk%E2%80%8B">Install Video SDK<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#install-video-sdk">​</a></h3><p>Install VideoSDK using the Flutter commands below. Make sure you are in your Flutter app directory before you run them.</p><pre><code class="language-dart">//run this command in terminal to add videoSDK 
flutter pub add videosdk
//run this command to add http library to perform network call to generate roomId
flutter pub add http</code></pre><figure class="kg-card kg-code-card"><pre><code>root
├── android
├── ios
├── lib
     ├── api_call.dart
     ├── join_screen.dart
     ├── main.dart
     ├── meeting_controls.dart
     ├── meeting_screen.dart
     ├── participant_tile.dart
</code></pre><figcaption>Project Files Structure</figcaption></figure><h3 id="app-structure%E2%80%8B">App Structure<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#app-structure">​</a></h3><p>The app widget will contain <code>JoinScreen</code> and <code>MeetingScreen</code> widget. <code>MeetingScreen</code> will have <code>MeetingControls</code> and <code>ParticipantTile</code> widget.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-17.png" class="kg-image" alt="Post-Call Transcription & Summary in Flutter" loading="lazy" width="1920" height="1080"/></figure><h3 id="configure-project%E2%80%8B">Configure Project<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#configure-project">​</a></h3><h4 id="for-android%E2%80%8B">For Android<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-android">​</a></h4><ul><li>Update the <code>/android/app/src/main/AndroidManifest.xml</code> for the permissions we will be using to implement the audio and video features.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-xml">&lt;uses-feature android:name="android.hardware.camera" /&gt;
&lt;uses-feature android:name="android.hardware.camera.autofocus" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET"/&gt;
&lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
&lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;</code></pre><figcaption>AndroidManifest.xml</figcaption></figure><ul><li>Also, you will need to set your build settings to Java 8 because the official WebRTC jar now uses static methods in <code>EglBase</code> interface. Just add this to your app-level <code>/android/app/build.gradle</code>.</li></ul><pre><code class="language-gradle">android {
    //...
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
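
    // Sketch: if your project still targets older SDK levels, raise them
    // in defaultConfig as the notes below describe, e.g.
    // defaultConfig {
    //     minSdkVersion 23
    //     targetSdkVersion 31
    // }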
}</code></pre><ul><li>If necessary, in the same <code>build.gradle</code>, increase <code>minSdkVersion</code> of <code>defaultConfig</code> up to <code>23</code> (currently, the default Flutter generator sets it to <code>16</code>).</li><li>If necessary, in the same <code>build.gradle</code>, increase <code>compileSdkVersion</code> and <code>targetSdkVersion</code> up to <code>31</code> (currently, the default Flutter generator sets it to <code>30</code>).</li></ul><h4 id="for-ios%E2%80%8B">For iOS<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-ios">​</a></h4><ul><li>Add the following entries to your <code>/ios/Runner/Info.plist</code> file to allow your app to access the camera and microphone:</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-plist">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><figcaption>Info.plist</figcaption></figure><ul><li>Uncomment the following line to define a global platform for your project in <code>/ios/Podfile</code> :</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-podfile"># platform :ios, '12.0'</code></pre><figcaption>Podfile</figcaption></figure><h4 id="for-macos%E2%80%8B">For MacOS<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-macos">​</a></h4><ul><li>Add the following entries to your <code>/macos/Runner/Info.plist</code> file which allow your app to access the camera and microphone.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-plist">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><figcaption>Info.plist</figcaption></figure><ul><li>Add the following entries to your <code>/macos/Runner/DebugProfile.entitlements</code> file, which allow your app to access the camera and microphone and to open outgoing network connections.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-entitlements">&lt;key&gt;com.apple.security.network.client&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.camera&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.microphone&lt;/key&gt;
&lt;true/&gt;</code></pre><figcaption>DebugProfile.entitlements</figcaption></figure><ul><li>Add the following entries to your <code>/macos/Runner/Release.entitlements</code> file, which allow your app to access the camera and microphone and to open outgoing network connections.<br/></li></ul><figure class="kg-card kg-code-card"><pre><code class="language-entitlements">&lt;key&gt;com.apple.security.network.server&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.network.client&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.camera&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.microphone&lt;/key&gt;
&lt;true/&gt;</code></pre><figcaption>Release.entitlements</figcaption></figure><h3 id="step-1-get-started-with-apicalldart%E2%80%8B">Step 1: Get started with api_call.dart<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-api_calldart">​</a></h3><p>Before anything else, you will write a function to generate a unique meetingId. This requires an auth token, which you can generate either by using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">Video SDK Dashboard</a> for development.</p><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'dart:convert';
import 'package:http/http.dart' as http;

//Auth token we will use to generate a meeting and connect to it
String token = "&lt;Generated-from-dashboard&gt;";

// API call to create meeting
Future&lt;String&gt; createMeeting() async {
  final http.Response httpResponse = await http.post(
    Uri.parse("https://api.videosdk.live/v2/rooms"),
    headers: {'Authorization': token},
  );
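
  // Optional hardening (sketch, not in the original guide): check the HTTP
  // status before decoding, so an invalid token fails with a clear message.
  // if (httpResponse.statusCode != 200) {
  //   throw Exception("Failed to create room: ${httpResponse.body}");
  // }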

//Destructuring the roomId from the response
  return json.decode(httpResponse.body)['roomId'];
}</code></pre><figcaption>api_call.dart</figcaption></figure><h3 id="step-2-creating-the-joinscreen%E2%80%8B">Step 2: Creating the JoinScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-2--creating-the-joinscreen">​</a></h3><p>Let's create the <code>join_screen.dart</code> file in the <code>lib</code> directory and a JoinScreen <code>StatelessWidget</code>.</p><p>The JoinScreen will consist of:</p><ul><li><strong>Create a Meeting Button</strong> - This button will create a new meeting for you.</li><li><strong>Meeting ID TextField</strong> - This text field will contain the meeting ID you want to join.</li><li><strong>Join Meeting Button</strong> - This button will join the meeting whose ID you have provided.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'api_call.dart';
import 'meeting_screen.dart';

class JoinScreen extends StatelessWidget {
  final _meetingIdController = TextEditingController();

  JoinScreen({super.key});

  void onCreateButtonPressed(BuildContext context) async {
    // call api to create meeting and then navigate to MeetingScreen with meetingId,token
    await createMeeting().then((meetingId) {
      if (!context.mounted) return;
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; MeetingScreen(
            meetingId: meetingId,
            token: token,
          ),
        ),
      );
    });
  }

  void onJoinButtonPressed(BuildContext context) {
    String meetingId = _meetingIdController.text;
    var re = RegExp("\\w{4}\\-\\w{4}\\-\\w{4}");
    // check that the meeting id matches the expected format (e.g. abcd-efgh-ijkl)
    // if the meeting id is valid, navigate to MeetingScreen with meetingId, token
    if (meetingId.isNotEmpty &amp;&amp; re.hasMatch(meetingId)) {
      _meetingIdController.clear();
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; MeetingScreen(
            meetingId: meetingId,
            token: token,
          ),
        ),
      );
    } else {
      ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
        content: Text("Please enter valid meeting id"),
      ));
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('VideoSDK QuickStart'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(12.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            ElevatedButton(
              onPressed: () =&gt; onCreateButtonPressed(context),
              child: const Text('Create Meeting'),
            ),
            Container(
              margin: const EdgeInsets.fromLTRB(0, 8.0, 0, 8.0),
              child: TextField(
                decoration: const InputDecoration(
                  hintText: 'Meeting Id',
                  border: OutlineInputBorder(),
                ),
                controller: _meetingIdController,
              ),
            ),
            ElevatedButton(
              onPressed: () =&gt; onJoinButtonPressed(context),
              child: const Text('Join Meeting'),
            ),
          ],
        ),
      ),
    );
  }
}</code></pre><figcaption>join_screen.dart</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Untitledjoining-screen.png" class="kg-image" alt="Post-Call Transcription & Summary in Flutter" loading="lazy" width="636" height="682"/></figure><ul><li>Update the home screen of the app in the <code>main.dart</code> </li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'join_screen.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'VideoSDK QuickStart',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: JoinScreen(),
    );
  }
}</code></pre><figcaption>main.dart</figcaption></figure><h3 id="step-3-creating-the-meetingcontrols%E2%80%8B">Step 3: Creating the MeetingControls<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-3--creating-the-meetingcontrols">​</a></h3><p>Let's create the <code>meeting_controls.dart</code> file and a MeetingControls <code>StatelessWidget</code>.</p><p>The MeetingControls will consist of:</p><ul><li>Leave Button - This button will leave the meeting.</li><li>Toggle Mic Button - This button will unmute or mute the mic.</li><li>Toggle Camera Button - This button will enable or disable the camera.</li></ul><p>MeetingControls will accept 5 functions in the constructor.</p><ul><li>onLeaveButtonPressed - invoked when the Leave button is pressed.</li><li>onToggleMicButtonPressed - invoked when the Toggle Mic button is pressed.</li><li>onToggleCameraButtonPressed - invoked when the Toggle Camera button is pressed.</li><li>onStartRecordingPressed - invoked when the Start Recording button is pressed.</li><li>onStopRecordingPressed - invoked when the Stop Recording button is pressed.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';

class MeetingControls extends StatelessWidget {
  final void Function() onToggleMicButtonPressed;
  final void Function() onToggleCameraButtonPressed;
  final void Function() onLeaveButtonPressed;
  final void Function() onStartRecordingPressed;
  final void Function() onStopRecordingPressed;

  const MeetingControls(
      {super.key,
      required this.onToggleMicButtonPressed,
      required this.onToggleCameraButtonPressed,
      required this.onLeaveButtonPressed,
      required this.onStartRecordingPressed,
      required this.onStopRecordingPressed});

  @override
  Widget build(BuildContext context) {
    return Row(
      mainAxisAlignment: MainAxisAlignment.spaceEvenly,
      children: [
        ElevatedButton(
            onPressed: onLeaveButtonPressed, child: const Text('Leave')),
        ElevatedButton(
            onPressed: onToggleMicButtonPressed,
            child: const Text('Toggle Mic')),
        ElevatedButton(
            onPressed: onToggleCameraButtonPressed,
            child: const Text('Toggle WebCam')),
        ElevatedButton(
          onPressed: onStartRecordingPressed,
          child: const Text("Start Recording"),
        ),
        ElevatedButton(
          onPressed: onStopRecordingPressed,
          child: const Text("Stop recording"),
        ),
      ],
    );
  }
}
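
// Usage sketch (illustrative, not from the original guide): MeetingScreen can
// wire these callbacks to VideoSDK Room methods; micEnabled here is an
// assumed state flag tracked by the caller.
// MeetingControls(
//   onLeaveButtonPressed: () =&gt; room.leave(),
//   onToggleMicButtonPressed: () =&gt; micEnabled ? room.muteMic() : room.unmuteMic(),
//   ...
// )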
</code></pre><figcaption>meeting_controls.dart</figcaption></figure><h3 id="step-4-creating-participanttile%E2%80%8B">Step 4: Creating ParticipantTile<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-4--creating-participanttile">​</a></h3><p>Let's create the <code>participant_tile.dart</code> file and a ParticipantTile <code>StatefulWidget</code>.</p><p>The ParticipantTile will consist of:</p><ul><li>RTCVideoView - This will show the participant's video stream.</li></ul><p>ParticipantTile will accept a <code>Participant</code> in the constructor:</p><ul><li>participant - the participant of the meeting.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ParticipantTile extends StatefulWidget {
  final Participant participant;
  const ParticipantTile({super.key, required this.participant});

  @override
  State&lt;ParticipantTile&gt; createState() =&gt; _ParticipantTileState();
}

class _ParticipantTileState extends State&lt;ParticipantTile&gt; {
  Stream? videoStream;

  @override
  void initState() {
    // initial video stream for the participant
    widget.participant.streams.forEach((key, Stream stream) {
      setState(() {
        if (stream.kind == 'video') {
          videoStream = stream;
        }
      });
    });
    _initStreamListeners();
    super.initState();
  }

  _initStreamListeners() {
    widget.participant.on(Events.streamEnabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = stream);
      }
    });

    widget.participant.on(Events.streamDisabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = null);
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(8.0),
      child: videoStream != null
          ? RTCVideoView(
              videoStream?.renderer as RTCVideoRenderer,
              objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitContain,
            )
          : Container(
              color: Colors.grey.shade800,
              child: const Center(
                child: Icon(
                  Icons.person,
                  size: 100,
                ),
              ),
            ),
    );
  }
}
</code></pre><figcaption>participant_tile.dart</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/Untitledfirst-image.png" class="kg-image" alt="Post-Call Transcription & Summary in Flutter" loading="lazy" width="1138" height="682"/></figure><h3 id="step-5-creating-the-meetingscreen-configuring-transcription%E2%80%8B">Step 5: Creating the MeetingScreen &amp; Configuring Transcription<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-5--creating-the-meetingscreen">​</a></h3><p>In this step, we create a <code>meeting_screen.dart</code> file and define a MeetingScreen <code>StatefulWidget</code>, where we set up the configuration for post-call transcription and summary generation. You can also specify a webhook URL to which transcription webhooks will be delivered.</p><hr><p>We have introduced the <code>setupRoomEventListener()</code> function, which listens for recording state changes so that the post-call transcription service starts automatically when <code>startRecording</code> is called and stops when <code>stopRecording</code> is called. This streamlines the transcription process and integrates it seamlessly with the recording feature.</p><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import './participant_tile.dart';
import 'meeting_controls.dart';

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;

  MeetingScreen({
    Key? key,
    required this.meetingId,
    required this.token,
  }) : super(key: key);

  @override
  _MeetingScreenState createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;
  var micEnabled = true;
  var camEnabled = true;

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    super.initState();

    // create room
    _room = VideoSDK.createRoom(
      roomId: widget.meetingId,
      token: widget.token,
      displayName: "John Doe",
      micEnabled: micEnabled,
      camEnabled: camEnabled,
      defaultCameraIndex:
          kIsWeb ? 0 : 1, // Index of MediaDevices for default camera
    );

    setMeetingEventListener();

    // Join room
    _room.join();
  }

  // Set up meeting event listeners
  void setMeetingEventListener() {
    _room.on(Events.roomJoined, () {
      setState(() {
        participants[_room.localParticipant.id] = _room.localParticipant;
      });
    });

    _room.on(Events.participantJoined, (Participant participant) {
      setState(() {
        participants[participant.id] = participant;
      });
    });

    _room.on(Events.participantLeft, (String participantId) {
      if (participants.containsKey(participantId)) {
        setState(() {
          participants.remove(participantId);
        });
      }
    });

    _room.on(Events.roomLeft, () {
      setState(() {
        participants.clear();
      });
      Navigator.popUntil(context, ModalRoute.withName('/'));
    });
  }

  // Handle back button press to leave the room
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }

  // Recording status events. Register this listener before calling
  // `startRecording` so the initial RECORDING_STARTING update is not missed.
  void setupRoomEventListener() {
    _room.on(Events.recordingStateChanged, (String status) {
      // Status can be: RECORDING_STARTING, RECORDING_STARTED,
      // RECORDING_STOPPING, RECORDING_STOPPED
      print("Meeting Recording status : $status");
    });
  }

  Map&lt;String, dynamic&gt; config = {
    "layout": {
      "type": "GRID",
      "priority": "SPEAKER",
      "gridSize": 4,
    },
    "theme": "DARK",
    "mode": "video-and-audio",
    "quality": "high",
    "orientation": "portrait",
  };

  Map&lt;String, dynamic&gt; transcription = {
    "enabled": true,
    "summary": {
      "enabled": true,
      "prompt":
          "Write summary in sections like Title, Agenda, Speakers, Action Items, Outlines, Notes and Summary",
    }
  };

  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: _onWillPop,
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              Text(widget.meetingId),
              // Render all participants
              Expanded(
                child: Padding(
                  padding: const EdgeInsets.all(8.0),
                  child: GridView.builder(
                    gridDelegate:
                        const SliverGridDelegateWithFixedCrossAxisCount(
                      crossAxisCount: 2,
                      crossAxisSpacing: 10,
                      mainAxisSpacing: 10,
                      mainAxisExtent: 300,
                    ),
                    itemBuilder: (context, index) {
                      return ParticipantTile(
                        key: Key(participants.values.elementAt(index).id),
                        participant: participants.values.elementAt(index),
                      );
                    },
                    itemCount: participants.length,
                  ),
                ),
              ),
              MeetingControls(
                onToggleMicButtonPressed: () {
                  micEnabled ? _room.muteMic() : _room.unmuteMic();
                  setState(() {
                    micEnabled = !micEnabled;
                  });
                },
                onToggleCameraButtonPressed: () {
                  camEnabled ? _room.disableCam() : _room.enableCam();
                  setState(() {
                    camEnabled = !camEnabled;
                  });
                },
                onLeaveButtonPressed: () {
                  _room.leave();
                },
                onStartRecordingPressed: () {
                  // Register the status listener before starting, so the
                  // RECORDING_STARTING update is not missed.
                  setupRoomEventListener();
                  _room.startRecording(
                      config: config, transcription: transcription);
                },
                onStopRecordingPressed: () {
                  _room.stopRecording();
                },
              ),
            ],
          ),
        ),
      ),
    );
  }
}
</code></pre><figcaption>meeting_screen.dart</figcaption></figure><h2 id="run-and-test">Run and Test</h2><p>Run <code>flutter run</code><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#run-and-test">​</a> to start your app.</p><p>The app is now ready to test. Make sure to update the <code>token</code> in <code>api_call.dart</code>.</p><p>Your app should look like this after the implementation.</p><h4 id="output-1">Output</h4><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/20240702_164832-ezgif.com-video-to-gif-converter.gif" class="kg-image" alt="Post-Call Transcription & Summary in Flutter" loading="lazy" width="800" height="450"><figcaption>Your post-call transcription will start once you start recording</figcaption></img></figure><h2 id="fetching-the-transcription-from-the-dashboard">Fetching the Transcription from the Dashboard</h2><p>Once the transcription is ready, you can fetch it from the VideoSDK dashboard.
The dashboard provides a user-friendly interface where you can view, download, and manage your Transcriptions &amp; Summary.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/20240702_171456-ezgif.com-resize.gif" class="kg-image" alt="Post-Call Transcription & Summary in Flutter" loading="lazy" width="1920" height="1080"><figcaption>To Access Transcription &amp; Summary Files</figcaption></img></figure><blockquote>If you get a <code>webrtc/webrtc.h file not found</code> error at runtime on iOS, check the solution <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/known-issues#issue--1" rel="noopener noreferrer">here</a>.</blockquote><h2 id="conclusion">Conclusion</h2><p>Integrating post-call transcription and summary features into your Flutter application using VideoSDK provides significant advantages for capturing and documenting meeting content. This guide has detailed the steps required to set up and implement these features, ensuring that every conversation during a meeting is accurately transcribed and easily accessible for future reference.</p></hr>]]></content:encoded></item><item><title><![CDATA[Integrate Active Speaker Indication in React JS Video Call App: Complete Guide]]></title><description><![CDATA[Learn how to implement Active Speaker Indication in your React JS video call app using VideoSDK.]]></description><link>https://www.videosdk.live/blog/integrate-active-speaker-indication-in-react-js</link><guid isPermaLink="false">662b35dc2a88c204ca9d4dc3</guid><category><![CDATA[React]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Mon, 23 Sep 2024 07:04:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-React.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img
src="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-React.jpg" alt="Integrate Active Speaker Indication in React JS Video Call App: Complete Guide"/><p>In your <a href="https://www.videosdk.live/blog/react-js-video-calling">React JS video call application</a>, implementing Active Speaker Indication can enhance the user experience by highlighting the participant currently speaking. This feature helps users easily identify who is talking, fostering smoother communication and engagement within the call. By dynamically updating the interface to visually indicate the active speaker, such as through border highlights or avatar animations, participants can focus their attention more effectively during group discussions.</p><p><strong>Benefits of Implementing Active Speaker Indication in a React JS Video App:</strong></p><ol><li><strong>Enhanced Communication</strong>: Active Speaker Indication improves communication by helping participants easily identify who is speaking, reducing confusion and interruptions.</li><li><strong>Improved Engagement</strong>: Users stay more engaged in the conversation when they can easily follow who is speaking, leading to more active participation.</li><li><strong>Smooth User Experience</strong>: It provides a smoother user experience by making it effortless for participants to understand the flow of conversation.</li></ol><p><strong>Use cases of Active Speaker Indication in a React JS Video App:</strong></p><ol><li><strong>Group Meetings</strong>: During group meetings, Active Speaker Indication helps participants follow the conversation flow and know who is speaking at any given moment.</li><li><strong>Remote Work</strong>: In remote work scenarios, where team members communicate through video calls, this feature ensures smoother collaboration and better engagement.</li><li><strong>Educational Webinars</strong>: In online educational webinars or virtual classrooms, it helps students identify the speaker and stay
focused on the lesson.</li></ol><p>In the guide below, you will learn how to highlight active speakers in a React JS video chat application.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the active speaker functionality, we must use the capabilities that VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (Not having one?
Follow <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>Basic understanding of React.</li><li><a href="https://www.npmjs.com/package/@videosdk.live/react-sdk" rel="noopener noreferrer"><strong>React VideoSDK</strong></a></li><li>Make sure Node and NPM are installed on your device.</li><li>Basic understanding of Hooks (useState, useRef, useEffect).</li><li>React Context API (optional).</li></ul><p>Follow the steps to create the environment necessary to add video calls to your app. You can also find the code sample for <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">Quickstart here</a>.<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#create-new-react-app" rel="noopener noreferrer">​</a></p><p><strong>Create a new React app using the command below.</strong></p><pre><code class="language-js">$ npx create-react-app videosdk-rtc-react-app</code></pre><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk%E2%80%8B">⬇️ Install VideoSDK<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#install-videosdk">​</a></h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the Active Speaker feature. You can install VideoSDK with either NPM or Yarn, depending on your project's setup.</p><ul><li>For NPM</li></ul><pre><code class="language-js">$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">$ yarn add "@videosdk.live/react-sdk"

//For the Participants Video
$ yarn add "react-player"</code></pre><p>You are going to use functional components to leverage React's reusable component architecture. There will be components for participants, their video streams, and the controls (mic, camera, leave) overlaid on the video.</p><h3 id="app-architecture">App Architecture</h3>
<p>The App will contain a <code>MeetingView</code> component, which includes a <code>ParticipantView</code> component that renders the participant's name, video, audio, etc. It will also have a <code>Controls</code> component that will allow the user to perform operations like leave and toggle media.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e.png" class="kg-image" alt="Integrate Active Speaker Indication in React JS Video Call App: Complete Guide" loading="lazy" width="1356" height="780"/></figure><p>You will be working on the following files:</p><ul><li>API.js: Responsible for handling API calls, such as generating a unique <code>meetingId</code>.</li><li>App.js: Responsible for rendering <code>MeetingView</code> and joining the meeting.</li></ul><h2 id="essential-steps-to-implement-video-calling-functionality">Essential Steps to Implement Video Calling Functionality</h2><p>To add video capability to your React application, you must first complete a sequence of prerequisites.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with API.js<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique <code>meetingId</code>. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><pre><code class="language-js">//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashboard&gt;";
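// For a quick manual test, the createMeeting request below is equivalent to
// this curl command (substitute your dashboard token for $VIDEOSDK_TOKEN):
//   curl -X POST https://api.videosdk.live/v2/rooms \
//     -H "authorization: $VIDEOSDK_TOKEN" \
//     -H "Content-Type: application/json" \
//     -d "{}"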
// API call to create a meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};</code></pre><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings, such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant, such as name, webcamStream, micStream, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";

function JoinScreen({ getMeetingAndToken }) {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function ChatView() {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  //Getting the meeting id by calling the api we just wrote
  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  //This will set Meeting Id to null when meeting is left or ended
  const onMeetingLeave = () =&gt; {
    setMeetingId(null);
  };

  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
      }}
      token={authToken}
    &gt;
      &lt;MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} /&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} /&gt;
  );
}

export default App;</code></pre><figcaption><p><span style="white-space: pre-wrap;">App.js</span></p></figcaption></figure><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-3-implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><pre><code class="language-js">function JoinScreen({ getMeetingAndToken }) {
  const [meetingId, setMeetingId] = useState(null);
  const onClick = async () =&gt; {
    await getMeetingAndToken(meetingId);
  };
  return (
    &lt;div&gt;
      &lt;input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) =&gt; {
          setMeetingId(e.target.value);
        }}
      /&gt;
      &lt;button onClick={onClick}&gt;Join&lt;/button&gt;
      {" or "}
      &lt;button onClick={onClick}&gt;Create Meeting&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-join-screen-06fb57cf0d9e3bcc1e7da9fc032298c3.jpeg" class="kg-image" alt="Integrate Active Speaker Indication in React JS Video Call App: Complete Guide" loading="lazy" width="720" height="130"/></figure><h3 id="step-4-implement-meetingview-and-controls%E2%80%8B">Step 4: Implement MeetingView and Controls<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-4-implement-meetingview-and-controls">​</a></h3><p>The next step is to create <code>MeetingView</code> and <code>Controls</code> components to manage features such as join, leave, mute, and unmute.</p><pre><code class="language-js">function MeetingView(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  //We will also get the participants list to display all participants
  const { join, participants } = useMeeting({
    //callback for when meeting is joined successfully
    onMeetingJoined: () =&gt; {
      setJoined("JOINED");
    },
    //callback for when meeting is left
    onMeetingLeft: () =&gt; {
      props.onMeetingLeave();
    },
  });
  const joinMeeting = () =&gt; {
    setJoined("JOINING");
    join();
  };

  return (
    &lt;div className="container"&gt;
      &lt;h3&gt;Meeting Id: {props.meetingId}&lt;/h3&gt;
      {joined &amp;&amp; joined == "JOINED" ? (
        &lt;div&gt;
          &lt;Controls /&gt;
          //For rendering all the participants in the meeting
          {[...participants.keys()].map((participantId) =&gt; (
            &lt;ParticipantView
              participantId={participantId}
              key={participantId}
            /&gt;
          ))}
          &lt;ChatView /&gt;
        &lt;/div&gt;
      ) : joined &amp;&amp; joined == "JOINING" ? (
        &lt;p&gt;Joining the meeting...&lt;/p&gt;
      ) : (
        &lt;button onClick={joinMeeting}&gt;Join&lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><figure class="kg-card kg-code-card"><pre><code class="language-js">function Controls() {
  const { leave, toggleMic, toggleWebcam } = useMeeting();
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; leave()}&gt;Leave&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleMic()}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleWebcam()}&gt;toggleWebcam&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">Control Component</span></p></figcaption></figure><h4 id="output-of-controls-component">Output of Controls Component</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-container-controls-2cebdfdfd1371b010b773cb6fb9c7ae8.jpeg" class="kg-image" alt="Integrate Active Speaker Indication in React JS Video Call App: Complete Guide" loading="lazy" width="720" height="177"/></figure><h3 id="step-5-implement-participant-view%E2%80%8B">Step 5: Implement Participant View<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-5-implement-participant-view">​</a></h3><p>Before implementing the participant view, you need to understand a couple of concepts.</p><h4 id="51-forwarding-ref-for-mic-and-camera">5.1 Forwarding Ref for mic and camera</h4>
<p>The <code>useRef</code> hook is responsible for referencing the audio and video components. It will be used to play and stop the audio and video of the participant.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const webcamRef = useRef(null);
const micRef = useRef(null);</code></pre><figcaption><p><span style="white-space: pre-wrap;">Forwarding Ref for mic and camera</span></p></figcaption></figure><h4 id="52-useparticipant-hook">5.2 useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take <code>participantId</code> as an argument.</p><pre><code class="language-js">const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
  props.participantId
);</code></pre><h4 id="53-mediastream-api">5.3 MediaStream API</h4>
<p>The MediaStream API is beneficial for adding a MediaTrack to the audio/video tag, enabling the playback of audio or video.</p><pre><code class="language-js">const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);

webcamRef.current.srcObject = mediaStream;
webcamRef.current
  .play()
  .catch((error) =&gt; console.error("videoElem.current.play() failed", error));</code></pre><h4 id="54-implement-participantview%E2%80%8B">5.4 Implement <code>ParticipantView</code>​</h4>
<p>Now you can use both of the hooks and the API to create <code>ParticipantView</code></p><pre><code class="language-js">function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() =&gt; {
    if (micRef.current) {
      if (micOn &amp;&amp; micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =&gt;
            console.error("videoElem.current.play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    &lt;div&gt;
      &lt;p&gt;
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      &lt;/p&gt;
      &lt;audio ref={micRef} autoPlay playsInline muted={isLocal} /&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          //
          playsinline // extremely crucial prop
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          //
          url={videoStream}
          //
          height={"300px"}
          width={"300px"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><h2 id="integrate-active-speaker-feature">Integrate Active Speaker Feature</h2><p>The Active Speaker Indication feature in VideoSDK allows you to identify the participant who is currently the active speaker in a meeting. This feature proves especially valuable in larger meetings or webinars, where numerous participants can make it challenging to identify the active speaker.</p><p>Whenever any participant speaks in a meeting, the <code>onSpeakerChanged</code> event will trigger, providing the participant ID of the active speaker.</p><p>For example, suppose a meeting is running with <strong>Alice</strong> and <strong>Bob</strong>. Whenever either of them speaks, the <code>onSpeakerChanged</code> event will trigger and return the speaker's <code>participantId</code>.</p><pre><code class="language-js">import { useMeeting } from "@videosdk.live/react-sdk";

import { useState } from "react";

const MeetingView = () =&gt; {
  // Keep the active speaker's id in state so the UI can highlight their tile
  const [activeSpeakerId, setActiveSpeakerId] = useState(null);

  useMeeting({
    onSpeakerChanged: (id) =&gt; {
      console.log("Active Speaker participantId", id);
      setActiveSpeakerId(id);
    },
  });

  // Pass activeSpeakerId down to each ParticipantView and render a
  // highlighted border when participantId === activeSpeakerId
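
  // (Hypothetical helper, not part of the VideoSDK API.) One simple way to
  // turn the active speaker id into a visible indication: compute a border
  // style per tile and apply it to that participant's wrapper element.
  function tileBorder(participantId, speakerId) {
    return participantId === speakerId
        ? "3px solid #22c55e"
        : "1px solid transparent";
  }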
};</code></pre><h2 id="more-resources-for-react-js-video-calling-app">More Resources for React JS Video Calling App</h2><p>If you found this guide helpful and want to explore more features for your React JS video calling app, check out these additional resources:</p><ul><li>HLS Player: <a href="https://www.videosdk.live/blog/implement-hls-player-in-react-js">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-react-js" rel="nofollow noopener">Link</a></li><li>Image Capture: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-js" rel="nofollow noopener">Link</a></li><li>Screen Share: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-js" rel="nofollow noopener">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-js" rel="nofollow noopener">Link</a></li><li>Collaborative Whiteboard: <a href="https://www.videosdk.live/blog/integrate-whiteboard-in-react-js" rel="nofollow noopener">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-in-react-js" rel="nofollow noopener">Link</a></li></ul><h2 id="wrap-up">Wrap-up</h2><p>By implementing active speaker indication, you can significantly enhance the usability and clarity of your React JS video call application. Not only does it improve user experience, but it also fosters a more natural and focused communication flow. With VideoSDK, the active speaker indication feature is straightforward to integrate, allowing you to focus on building a captivating user experience.</p><p>And if you are new here and want to build an interactive React app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get
<em>10,000 free minutes every month</em>. This will help your new video calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Implement Chat in Android Video Calling App Using Java?]]></title><description><![CDATA[In this guide, we'll walk you through the seamless integration of chat functionality using PubSub within your Android(Java) Video Chat App.]]></description><link>https://www.videosdk.live/blog/integrate-chat-in-java-video-call-app</link><guid isPermaLink="false">6603ea3a2a88c204ca9cf068</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Mon, 23 Sep 2024 05:45:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-using-PubSub-Java-1.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-using-PubSub-Java-1.png" alt="How to Implement Chat in Android Video Calling App Using Java?"/><p>Integrating real-time communication features like chat adds an interactive layer to video applications. Features like video chat have become imperative for engaging user experiences in Android apps. This article aims to guide you through the process of integrating chat by leveraging the capabilities of VideoSDK.
</p><h2 id="goals">Goals</h2><p>By the end of this article, you will:</p><ol><li>Create a <a href="https://www.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK.</li><li>Integrate the chat feature.</li></ol><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the chat functionality, we will need to use the capabilities that VideoSDK offers. Before we dive into the implementation steps, let's make sure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required VideoSDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token.
This token plays a crucial role in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, refer to the <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/authentication-and-token#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>A supported Java Development Kit (JDK).</li><li>Android Studio version 3.0 or later.</li><li>Android SDK API level 21 or higher.</li><li>A mobile device with Android 5.0 or a later version.</li></ul><h2 id="integrate-videosdk">Integrate VideoSDK</h2><p>Following the account creation and token generation steps, we'll guide you through the process of adding the VideoSDK library and other dependencies to your project. We'll also ensure your app has the required permissions to access features like audio recording, camera usage, and internet connectivity, all crucial for a seamless video experience.</p><h3 id="step-a-add-the-repositories-to-the-projects-settingsgradle-file">Step (a): Add the repositories to the project's <code>settings.gradle</code> file</h3><pre><code class="language-Java">dependencyResolutionManagement {
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}</code></pre><h3 id="step-b-include-the-following-dependency-within-your-applications-buildgradle-file">Step (b): Include the following dependency within your application's <code>build.gradle</code> file</h3><pre><code class="language-Java">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.26'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // Other dependencies specific to your app
}</code></pre><blockquote>If your project has set <code>android.useAndroidX=true</code>, then set <code>android.enableJetifier=true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflict.</blockquote><h3 id="step-c-add-permissions-to-your-project">Step (c): Add permissions to your project</h3><p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-Java">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;</code></pre><p>These permissions are essential for enabling core functionalities like audio recording, internet connectivity for real-time communication, and camera access for video streams within your Android video chat application.</p><h2 id="essential-steps-for-building-the-video-calling-functionality">Essential Steps for Building the Video Calling Functionality</h2><p>We'll now add the functionalities that make your Android video chat application more exciting after setting up your project with VideoSDK. This section outlines the essential steps for implementing core functionalities within your app.</p><p>This section will guide you through four key aspects:</p><h3 id="step-1-generate-a-meetingid">Step 1: Generate a <code>meetingId</code></h3><p>Now, we can create the <code>meetingId</code> from the VideoSDK's rooms API. You can refer to this <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/setup-call/initialize-meeting#generating-meeting-id">documentation</a> to generate meetingId.</p><h3 id="step-2-initializing-the-meeting">Step 2: Initializing the Meeting</h3><p>After getting <code>meetingId</code> , the next step involves initializing the meeting for that we need to,</p><ol><li>Initialize VideoSDK.</li><li>Configure VideoSDK with a token.</li><li>Initialize the meeting with required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code> and more.</li><li>Add <code>MeetingEventListener</code> for listening events such as Meeting Join/Left and Participant Join/Left.</li><li>Join the room with <code>meeting.join()</code> a method.</li></ol><p>Please copy the .xml file of the <code>MeetingActivity</code> from <a 
href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/activity_meeting.xml"><strong>here</strong></a>.</p><pre><code class="language-Java">public class MeetingActivity extends AppCompatActivity {
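<p>As a side note, Step 1's room creation can be sketched in plain Java. The endpoint below (<code>POST https://api.videosdk.live/v2/rooms</code> with the auth token in the <code>Authorization</code> header) follows the linked docs; treat it as an assumption and verify it against the current API reference. The sketch only builds the request without sending it:</p>

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class CreateRoomRequest {
    // Build (but do not send) a request to VideoSDK's rooms API.
    // Assumed endpoint from the docs: POST https://api.videosdk.live/v2/rooms
    public static HttpRequest build(String authToken) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://api.videosdk.live/v2/rooms"))
                .header("Authorization", authToken) // token from the VideoSDK dashboard
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build("YOUR_TOKEN");
        System.out.println(req.method() + " " + req.uri()); // prints POST https://api.videosdk.live/v2/rooms
    }
}
```

<p>Sending it with <code>HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString())</code> returns a JSON body containing the room id to use as the <code>meetingId</code>.</p>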
  // declare the variables we will be using to handle the meeting
  private Meeting meeting;
  private boolean micEnabled = true;
  private boolean webcamEnabled = true;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_meeting);

    final String token = ""; // Replace with the token you generated from the VideoSDK Dashboard
    final String meetingId = ""; // Replace with the meetingId you have generated
    final String participantName = "John Doe";

    // 1. Initialize VideoSDK
    VideoSDK.initialize(getApplicationContext());

    // 2. Configure VideoSDK with the token
    VideoSDK.config(token);

    // 3. Initialize VideoSDK Meeting
    meeting = VideoSDK.initMeeting(
            MeetingActivity.this, meetingId, participantName,
            micEnabled, webcamEnabled,null, null, false, null, null);

    // 4. Add event listener for listening upcoming events
    meeting.addEventListener(meetingEventListener);

    // 5. Join VideoSDK Meeting
    meeting.join();

    ((TextView)findViewById(R.id.tvMeetingId)).setText(meetingId);
  }

  // creating the MeetingEventListener
  private final MeetingEventListener meetingEventListener = new MeetingEventListener() {
    @Override
    public void onMeetingJoined() {
      Log.d("#meeting", "onMeetingJoined()");
    }

    @Override
    public void onMeetingLeft() {
      Log.d("#meeting", "onMeetingLeft()");
      meeting = null;
      if (!isDestroyed()) finish();
    }

    @Override
    public void onParticipantJoined(Participant participant) {
      Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " joined", Toast.LENGTH_SHORT).show();
    }

    @Override
    public void onParticipantLeft(Participant participant) {
      Toast.makeText(MeetingActivity.this, participant.getDisplayName() + " left", Toast.LENGTH_SHORT).show();
    }
  };
}</code></pre><h3 id="step-3-handle-local-participant-media">Step 3: Handle Local Participant Media</h3><p>After successfully entering the meeting, it's time to manage the webcam and microphone for the local participant (you).</p><p>To enable or disable the webcam, we'll use the <code>Meeting</code> class methods <code>enableWebcam()</code> and <code>disableWebcam()</code>, respectively. Similarly, to mute or unmute the microphone, we'll utilize the methods <code>muteMic()</code> and <code>unmuteMic()</code></p><pre><code class="language-Java">public class MeetingActivity extends AppCompatActivity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_meeting);
    //...Meeting Setup is Here

    // actions
    setActionListeners();
  }

  private void setActionListeners() {
    // toggle mic
    findViewById(R.id.btnMic).setOnClickListener(view -&gt; {
      if (micEnabled) {
        // this will mute the local participant's mic
        meeting.muteMic();
        Toast.makeText(MeetingActivity.this, "Mic Disabled", Toast.LENGTH_SHORT).show();
      } else {
        // this will unmute the local participant's mic
        meeting.unmuteMic();
        Toast.makeText(MeetingActivity.this, "Mic Enabled", Toast.LENGTH_SHORT).show();
      }
      micEnabled=!micEnabled;
    });

    // toggle webcam
    findViewById(R.id.btnWebcam).setOnClickListener(view -&gt; {
      if (webcamEnabled) {
        // this will disable the local participant webcam
        meeting.disableWebcam();
        Toast.makeText(MeetingActivity.this, "Webcam Disabled", Toast.LENGTH_SHORT).show();
      } else {
        // this will enable the local participant webcam
        meeting.enableWebcam();
        Toast.makeText(MeetingActivity.this, "Webcam Enabled", Toast.LENGTH_SHORT).show();
      }
      webcamEnabled=!webcamEnabled;
    });

    // leave meeting
    findViewById(R.id.btnLeave).setOnClickListener(view -&gt; {
      // this will make the local participant leave the meeting
      meeting.leave();
    });
  }
}</code></pre><h3 id="step-4-handling-the-participants-view">Step 4: Handling the Participants' View</h3><p>To display a list of participants in your video UI, we'll utilize a <code>RecyclerView</code>.</p><p><strong>(a)</strong> This involves creating a new layout for the participant view named <code>item_remote_peer.xml</code> in the <code>res/layout</code> folder. You can copy <code>item_remote_peer.xml </code>file from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/item_remote_peer.xml"><strong>here</strong></a>.</p><p><strong>(b)</strong> Create a RecyclerView adapter <code>ParticipantAdapter</code> which will be responsible for displaying the participant list. Within this adapter, define a <code>PeerViewHolder</code> class that extends <code>RecyclerView.ViewHolder</code>.</p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  @NonNull
  @Override
  public PeerViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
      return new PeerViewHolder(LayoutInflater.from(parent.getContext()).inflate(R.layout.item_remote_peer, parent, false));
  }

  @Override
  public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
  }

  @Override
  public int getItemCount() {
      return 0;
  }

  static class PeerViewHolder extends RecyclerView.ViewHolder {
    // 'VideoView' to show Video Stream
    public VideoView participantView;
    public TextView tvName;
    public View itemView;

    PeerViewHolder(@NonNull View view) {
        super(view);
        itemView = view;
        tvName = view.findViewById(R.id.tvName);
        participantView = view.findViewById(R.id.participantView);
    }
  }
}</code></pre><p><strong>(c)</strong> Now, we will render a list of <code>Participant</code> for the meeting. We will initialize this list in the constructor of the <code>ParticipantAdapter</code></p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  // creating an empty list which will store all participants
  private final List&lt;Participant&gt; participants = new ArrayList&lt;&gt;();

  public ParticipantAdapter(Meeting meeting) {
    // adding the local participant(You) to the list
    participants.add(meeting.getLocalParticipant());

    // adding Meeting Event listener to get the participant join/leave event in the meeting.
    meeting.addEventListener(new MeetingEventListener() {
      @Override
      public void onParticipantJoined(Participant participant) {
        // add participant to the list
        participants.add(participant);
        notifyItemInserted(participants.size() - 1);
      }

      @Override
      public void onParticipantLeft(Participant participant) {
        int pos = -1;
        for (int i = 0; i &lt; participants.size(); i++) {
          if (participants.get(i).getId().equals(participant.getId())) {
            pos = i;
            break;
          }
        }
        // remove the participant by index so the RecyclerView is
        // notified of the exact position that changed
        if (pos &gt;= 0) {
          participants.remove(pos);
          notifyItemRemoved(pos);
        }
      }
    });
  }

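<p>A quick aside on the removal logic above: the index is looked up by participant id first, and the element is removed by index, so the adapter can tell the <code>RecyclerView</code> exactly which position changed. The same search-then-remove-by-index pattern, reduced to a standalone plain-Java sketch:</p>

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveById {
    // Find the index of an element by id, then remove by index so the
    // caller knows exactly which position to report to the adapter.
    public static int removeById(List<String> ids, String id) {
        int pos = -1;
        for (int i = 0; i < ids.size(); i++) {
            if (ids.get(i).equals(id)) { pos = i; break; }
        }
        if (pos >= 0) ids.remove(pos); // remove by index, not by object
        return pos; // -1 if the id was not present
    }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>(List.of("a", "b", "c"));
        System.out.println(removeById(ids, "b")); // prints 1
        System.out.println(ids); // prints [a, c]
    }
}
```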
  // replace getItemCount() method with following.
  // this method returns the size of total number of participants
  @Override
  public int getItemCount() {
    return participants.size();
  }
  //...
}</code></pre><p><strong>(d)</strong> We have listed our participants. Let's set up the view holder to display a participant video.</p><pre><code class="language-Java">public class ParticipantAdapter extends RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt; {

  // replace onBindViewHolder() method with following.
  @Override
  public void onBindViewHolder(@NonNull PeerViewHolder holder, int position) {
    Participant participant = participants.get(position);

    holder.tvName.setText(participant.getDisplayName());

    // adding the initial video stream for the participant into the 'VideoView'
    for (Map.Entry&lt;String, Stream&gt; entry : participant.getStreams().entrySet()) {
      Stream stream = entry.getValue();
      if (stream.getKind().equalsIgnoreCase("video")) {
        holder.participantView.setVisibility(View.VISIBLE);
        VideoTrack videoTrack = (VideoTrack) stream.getTrack();
        holder.participantView.addTrack(videoTrack);
        break;
      }
    }
    // add Listener to the participant which will update start or stop the video stream of that participant
    participant.addEventListener(new ParticipantEventListener() {
      @Override
      public void onStreamEnabled(Stream stream) {
        if (stream.getKind().equalsIgnoreCase("video")) {
          holder.participantView.setVisibility(View.VISIBLE);
          VideoTrack videoTrack = (VideoTrack) stream.getTrack();
          holder.participantView.addTrack(videoTrack);
        }
      }

      @Override
      public void onStreamDisabled(Stream stream) {
        if (stream.getKind().equalsIgnoreCase("video")) {
          holder.participantView.removeTrack();
          holder.participantView.setVisibility(View.GONE);
        }
      }
    });
  }
}</code></pre><p><strong>(e)</strong> Now, add this adapter to the <code>MeetingActivity</code></p><pre><code class="language-Java">@Override
protected void onCreate(Bundle savedInstanceState) {
  //Meeting Setup...
  //...
  final RecyclerView rvParticipants = findViewById(R.id.rvParticipants);
  rvParticipants.setLayoutManager(new GridLayoutManager(this, 2));
  rvParticipants.setAdapter(new ParticipantAdapter(meeting));
}</code></pre><h2 id="integrate-chat-feature">Integrate Chat Feature</h2><p>This section delineates the process of implementing group and private chat functionalities within your Android video calling app. VideoSDK provides a <code>pubSub</code> class that uses the Publish-Subscribe mechanism and can be used to build various features.</p><p>For example, participants can use it to send chat messages to each other, share files or other media, or trigger actions such as muting or unmuting audio or video. Now we will see how we can use PubSub to implement chat functionality. If you are not familiar with the PubSub mechanism and the <code>pubSub</code> class, you can <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/pubsub">follow this guide</a>.</p><h3 id="implement-group-chat">Implement Group Chat</h3><ul><li>The first step in creating a group chat is choosing the topic that all the participants will publish and subscribe to in order to send and receive messages. We will be using <code>CHAT</code> as the topic for this one.</li><li>On the send button, we will publish the message that the sender typed in the <code>EditText</code> field.</li></ul><pre><code class="language-Java">import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;
import android.widget.EditText;
import android.widget.Toast;
import java.util.List;
import live.videosdk.rtc.android.Meeting;
import live.videosdk.rtc.android.lib.PubSubMessage;
import live.videosdk.rtc.android.listeners.PubSubMessageListener;
import live.videosdk.rtc.android.model.PubSubPublishOptions;

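<p>Before wiring PubSub into the activity, it may help to see the mechanism in isolation: subscribers register a callback for a topic, and every publish on that topic fans out to all of its subscribers. A minimal plain-Java sketch of that model (illustrative only, not the SDK's implementation):</p>

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class MiniPubSub {
    // topic -> list of subscriber callbacks
    private final Map<String, List<Consumer<String>>> topics = new HashMap<>();

    public void subscribe(String topic, Consumer<String> listener) {
        topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(listener);
    }

    public void publish(String topic, String message) {
        // every subscriber of this topic receives the message
        for (Consumer<String> l : topics.getOrDefault(topic, List.of())) {
            l.accept(message);
        }
    }

    public static void main(String[] args) {
        MiniPubSub bus = new MiniPubSub();
        List<String> received = new ArrayList<>();
        bus.subscribe("CHAT", received::add);
        bus.publish("CHAT", "hello");
        System.out.println(received); // prints [hello]
    }
}
```

<p>The SDK's <code>pubSub.publish</code> and <code>pubSub.subscribe</code> add network delivery, persistence, and sender metadata on top of this basic fan-out.</p>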
public class ChatActivity extends AppCompatActivity {
  // Meeting
  Meeting meeting;
  // EditText for typing chat messages
  // (assumes an 'etmessage' id in your chat layout; adjust to match your XML)
  private EditText etmessage;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_chat);

    // initialize the message input field used by sendMessage()
    etmessage = findViewById(R.id.etmessage);

    /**
     * Here, we have created 'MainApplication' class, which extends android.app.Application class.
     * It has Meeting property and getter and setter methods of Meeting property.
     * In your android manifest, you must declare the class implementing android.app.Application
     * (add the android:name=".MainApplication" attribute to the existing application tag):
     * In MainActivity.java, we have set Meeting property.
     *
     * For Example: (MainActivity.java)
     * Meeting meeting = VideoSDK.initMeeting(context, meetingId, ParticipantName, micEnabled, webcamEnabled,participantId,mode,multiStream,customTrack,metaData);
     * ((MainApplication) this.getApplication()).setMeeting(meeting);
    */

    // Get Meeting
    meeting = ((MainApplication) this.getApplication()).getMeeting();

    findViewById(R.id.btnSend).setOnClickListener(view -&gt; sendMessage());
  }

  private void sendMessage()
  {
    // get message from EditText
    String message = etmessage.getText().toString();
    if (!message.equals("")) {
        PubSubPublishOptions publishOptions = new PubSubPublishOptions();
        publishOptions.setPersist(true);

        // Sending the Message using the publish method
        meeting.pubSub.publish("CHAT", message, publishOptions);

        // Clearing the message input
        etmessage.setText("");
    } else {
        Toast.makeText(ChatActivity.this, "Please Enter Message",
                Toast.LENGTH_SHORT).show();
    }
  }
}</code></pre><ul><li>The next step is to display the messages others send. For this, we have to <code>subscribe</code> to that topic, i.e., <code>CHAT</code>, and display all the messages.</li></ul><pre><code class="language-Java">public class ChatActivity extends AppCompatActivity {

  // PubSubMessageListener
  private PubSubMessageListener pubSubMessageListener = new PubSubMessageListener() {
    @Override
    public void onMessageReceived(PubSubMessage message) {
        // New message
        Toast.makeText(
          ChatActivity.this, message.getSenderName() + " says : " + message.getMessage(),
          Toast.LENGTH_SHORT
        ).show();
    }
  };

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_chat);

    //..

    // Subscribe for 'CHAT' topic
    List&lt;PubSubMessage&gt; pubSubMessageList = meeting.pubSub.subscribe("CHAT", pubSubMessageListener);

    for(PubSubMessage message : pubSubMessageList){
        // Persisted messages
          Toast.makeText(
            ChatActivity.this, message.getSenderName() + " says : " + message.getMessage(),
            Toast.LENGTH_SHORT
          ).show();
    }
  }
}</code></pre><ul><li>The final step in the group chat is to <code>unsubscribe</code> from the topic you previously subscribed to but no longer need. Here, we <code>unsubscribe</code> from the <code>CHAT</code> topic when the activity is destroyed.</li></ul><pre><code class="language-Java">public class ChatActivity extends AppCompatActivity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_chat);

    //..
  }

  @Override
  protected void onDestroy() {
    // Unsubscribe for 'CHAT' topic
    meeting.pubSub.unsubscribe("CHAT", pubSubMessageListener);
    super.onDestroy();
  }
}</code></pre><h3 id="implement-private-chat">Implement Private Chat</h3><ul><li>If you want to make the chat private between two participants, all you have to do is pass the <code>sendOnly</code> parameter in <code>PubSubPublishOptions</code>.</li></ul><pre><code class="language-Java">public class ChatActivity extends AppCompatActivity {
 
  //...

  private void sendMessage()
  {
    // get message from EditText
    String message = etmessage.getText().toString();
    if (!message.equals("")) {
        PubSubPublishOptions publishOptions = new PubSubPublishOptions();
        publishOptions.setPersist(true);
        // Pass the participantId of the participant to whom you want to send the message.
        String[] sendOnly = {
          "xyz" 
        };
        publishOptions.setSendOnly(sendOnly);

        // Sending the Message using the publish method
        meeting.pubSub.publish("CHAT", message, publishOptions);

        // Clearing the message input
        etmessage.setText("");
    } else {
        Toast.makeText(ChatActivity.this, "Please Enter Message",
                Toast.LENGTH_SHORT).show();
    }
  }
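<p>The effect of <code>setSendOnly</code> above can be pictured as a recipient filter on the publish fan-out: an empty filter means the message goes to every subscriber of the topic, otherwise only to the listed participant ids. A standalone plain-Java sketch of that rule (illustrative, not SDK code):</p>

```java
import java.util.List;
import java.util.Set;

public class SendOnlyFilter {
    // Decide which subscribers receive a publish: an empty sendOnly
    // set means broadcast; otherwise only the listed ids are targeted.
    public static List<String> recipients(List<String> subscriberIds, Set<String> sendOnly) {
        if (sendOnly.isEmpty()) {
            return subscriberIds; // group chat: everyone on the topic
        }
        return subscriberIds.stream()
                .filter(sendOnly::contains) // private chat: targeted ids only
                .toList();
    }

    public static void main(String[] args) {
        List<String> subs = List.of("abc", "xyz", "pqr");
        System.out.println(recipients(subs, Set.of()));      // prints [abc, xyz, pqr]
        System.out.println(recipients(subs, Set.of("xyz"))); // prints [xyz]
    }
}
```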
}</code></pre><h3 id="downloading-chat-messages%E2%80%8B">Downloading Chat Messages</h3><p>All the messages from PubSub that were published with <code>persist: true</code> can be downloaded as a <code>.csv</code> file. This file is available in the VideoSDK dashboard as well as through the <a href="https://docs.videosdk.live/api-reference/realtime-communication/fetch-session-using-sessionid">Sessions API</a>.</p><h2 id="conclusion">Conclusion</h2><p>With chat in place, your Android video app not only gains functionality but also opens doors to endless possibilities in real-time communication. Whether it's group chat, private chat, or downloading chat messages, leveraging PubSub adds a new dimension to your app, making it more engaging and user-friendly. Embrace this integration, explore its potential, and elevate your app's user experience to new heights.
Happy coding!</p><p><a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and get <strong>10,000 free minutes</strong> to take your video app to the next level!<br/></p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Chat Feature in Android(Kotlin) Video Calling App?]]></title><description><![CDATA[This article walks you through integrating chat into your Android(Kotlin) video chat app to unlock the full potential of real-time communication.]]></description><link>https://www.videosdk.live/blog/integrate-chat-in-kotlin-video-call-app</link><guid isPermaLink="false">6603f88f2a88c204ca9cf178</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Mon, 23 Sep 2024 05:44:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-using-PubSub-Kotlin-1.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Chat-using-PubSub-Kotlin-1.png" alt="How to Integrate Chat Feature in Android(Kotlin) Video Calling App?"/><p>In this technological landscape, where seamless video experiences are in high demand, integrating real-time chat features into Android applications has become paramount. Incorporating chat functionality within an Android(Kotlin) video call app is a strategic move.</p><p>This article aims to guide you through the process of leveraging the capabilities of VideoSDK, integrating chat, and using PubSub in your Android (Kotlin) video call app.
Our goal is to empower you to create a dynamic and immersive user experience, ensuring robust communication capabilities within your application.</p><h2 id="goals">Goals</h2><p>By the end of this article, you will:</p><ol><li>Create a <a href="https://app.videosdk.live/signup">VideoSDK account</a> and generate your VideoSDK auth token.</li><li>Integrate the VideoSDK library and dependencies into your project.</li><li>Implement core functionalities for video calls using VideoSDK.</li><li>Enable the chat feature.</li></ol><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the chat functionality, we will need to use the capabilities that VideoSDK offers. Before we dive into the implementation steps, let's make sure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required VideoSDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token.
This token plays a crucial role in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/authentication-and-token#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>A supported Java Development Kit (JDK).</li><li>Android Studio version 3.0 or later.</li><li>Android SDK API level 21 or higher.</li><li>A mobile device with Android 5.0 or a later version.</li></ul><h2 id="integrate-videosdk">Integrate VideoSDK</h2><p>Following the account creation and token generation steps, we'll guide you through the process of adding the VideoSDK library and other dependencies to your project. We'll also ensure your app has the required permissions to access features like audio recording, camera usage, and internet connectivity, all crucial for a seamless video experience.</p><h3 id="step-a-add-the-repositories-to-the-projects-settingsgradle-file">Step (a): Add the repositories to the project's <code>settings.gradle</code> file.</h3><pre><code class="language-kotlin">dependencyResolutionManagement {
  repositories {
    // ...
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
    maven { url "https://maven.aliyun.com/repository/jcenter" }
  }
}
</code></pre><h3 id="step-b-include-the-following-dependency-within-your-applications-buildgradle-file">Step (b): Include the following dependency within your application's <code>build.gradle</code> file:</h3><pre><code class="language-kotlin">dependencies {
  implementation 'live.videosdk:rtc-android-sdk:0.1.26'

  // library to perform Network call to generate a meeting id
  implementation 'com.amitshekhar.android:android-networking:1.0.2'

  // Other dependencies specific to your app
}
</code></pre><blockquote>If your project has set <code>android.useAndroidX=true</code>, then set <code>android.enableJetifier=true</code> in the <code>gradle.properties</code> file to migrate your project to AndroidX and avoid duplicate class conflict.</blockquote><h3 id="step-c-add-permissions-to-your-project">Step (c): Add permissions to your project</h3><p>In <code>/app/Manifests/AndroidManifest.xml</code>, add the following permissions after <code>&lt;/application&gt;</code>.</p><pre><code class="language-kotlin">&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
</code></pre><p>These permissions are essential for enabling core functionalities like audio recording, internet connectivity for real-time communication, and camera access for video streams within your video application.</p><h2 id="essential-steps-for-building-the-video-calling-functionality">Essential Steps for Building the Video Calling Functionality</h2><p>Now that your project is set up with VideoSDK, we'll delve into the functionalities that make up your video application. This section outlines the essential steps for implementing core functionalities within your app.</p><p>This section will guide you through four key aspects:</p><h3 id="step-1-generate-a-meetingid">Step 1: Generate a <code>meetingId</code></h3><p>Now, we can create the <code>meetingId</code> from VideoSDK's rooms API. You can refer to this <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/setup-call/initialize-meeting#generating-meeting-id">documentation</a> to generate a <code>meetingId</code>.</p><h3 id="step-2-initializing-the-meeting">Step 2: Initializing the Meeting</h3><p>After getting the <code>meetingId</code>, the next step is to initialize the meeting. For that, we need to:</p><ol><li>Initialize VideoSDK.</li><li>Configure VideoSDK with the token.</li><li>Initialize the meeting with required params such as <code>meetingId</code>, <code>participantName</code>, <code>micEnabled</code>, <code>webcamEnabled</code>, and more.</li><li>Add a <code>MeetingEventListener</code> to listen for events such as Meeting Join/Left and Participant Join/Left.</li><li>Join the room with the <code>meeting.join()</code> method.</li></ol><p>Please copy the .xml file of the <code>MeetingActivity</code> from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/activity_meeting.xml">here</a>.</p><pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
  // declare the variables we will be using to handle the meeting
  private var meeting: Meeting? = null
  private var micEnabled = true
  private var webcamEnabled = true

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)

    val token = "" // Replace with the token you generated from the VideoSDK Dashboard
    val meetingId = "" // Replace with the meetingId you have generated
    val participantName = "John Doe"
    
    // 1. Initialize VideoSDK
    VideoSDK.initialize(applicationContext)

    // 2. Configure VideoSDK with the token
    VideoSDK.config(token)

    // 3. Initialize VideoSDK Meeting
    meeting = VideoSDK.initMeeting(
      this@MeetingActivity, meetingId, participantName,
      micEnabled, webcamEnabled, null, null, false, null, null)

    // 4. Add event listener for listening upcoming events
    meeting!!.addEventListener(meetingEventListener)

    // 5. Join VideoSDK Meeting
    meeting!!.join()

    (findViewById&lt;View&gt;(R.id.tvMeetingId) as TextView).text = meetingId
  }

  // creating the MeetingEventListener
  private val meetingEventListener: MeetingEventListener = object : MeetingEventListener() {
    override fun onMeetingJoined() {
      Log.d("#meeting", "onMeetingJoined()")
    }

    override fun onMeetingLeft() {
      Log.d("#meeting", "onMeetingLeft()")
      meeting = null
      if (!isDestroyed) finish()
    }

    override fun onParticipantJoined(participant: Participant) {
      Toast.makeText(
        this@MeetingActivity, participant.displayName + " joined",
        Toast.LENGTH_SHORT
      ).show()
    }

    override fun onParticipantLeft(participant: Participant) {
      Toast.makeText(
         this@MeetingActivity, participant.displayName + " left",
         Toast.LENGTH_SHORT
      ).show()
    }
  }
}
</code></pre><h3 id="step-3-handle-local-participant-media">Step 3: Handle Local Participant Media</h3><p>After successfully entering the meeting, it's time to manage the webcam and microphone for the local participant (you).</p><p>To enable or disable the webcam, we'll use the <code>Meeting</code> class methods <code>enableWebcam()</code> and <code>disableWebcam()</code>, respectively. Similarly, to mute or unmute the microphone, we'll use the methods <code>muteMic()</code> and <code>unmuteMic()</code>.</p><pre><code class="language-kotlin">class MeetingActivity : AppCompatActivity() {
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_meeting)
    //...Meeting Setup is Here

    // actions
    setActionListeners()
  }

  private fun setActionListeners() {
    // toggle mic
    findViewById&lt;View&gt;(R.id.btnMic).setOnClickListener { view: View? -&gt;
      if (micEnabled) {
        // this will mute the local participant's mic
        meeting!!.muteMic()
        Toast.makeText(this@MeetingActivity, "Mic Muted", Toast.LENGTH_SHORT).show()
      } else {
        // this will unmute the local participant's mic
        meeting!!.unmuteMic()
        Toast.makeText(this@MeetingActivity, "Mic Enabled", Toast.LENGTH_SHORT).show()
      }
      micEnabled = !micEnabled
    }

    // toggle webcam
    findViewById&lt;View&gt;(R.id.btnWebcam).setOnClickListener { view: View? -&gt;
      if (webcamEnabled) {
        // this will disable the local participant webcam
        meeting!!.disableWebcam()
        Toast.makeText(this@MeetingActivity, "Webcam Disabled", Toast.LENGTH_SHORT).show()
      } else {
        // this will enable the local participant webcam
        meeting!!.enableWebcam()
        Toast.makeText(this@MeetingActivity, "Webcam Enabled", Toast.LENGTH_SHORT).show()
      }
      webcamEnabled = !webcamEnabled
    }

    // leave meeting
    findViewById&lt;View&gt;(R.id.btnLeave).setOnClickListener { view: View? -&gt;
      // this will make the local participant leave the meeting
      meeting!!.leave()
    }
  }
}
</code></pre><h3 id="step-4-handling-the-participants-view">Step 4: Handling the Participants' View</h3><p>To display a list of participants in your video UI, we'll utilize a <code>RecyclerView</code>.</p><p><strong>(a)</strong> This involves creating a new layout for the participant view named <code>item_remote_peer.xml</code> in the <code>res/layout</code> folder. You can copy the <code>item_remote_peer.xml</code> file from <a href="https://github.com/videosdk-live/quickstart/blob/main/android-rtc/Videosdk_android_kotlin_quickstart/app/src/main/res/layout/item_remote_peer.xml">here</a>.</p><p><strong>(b)</strong> Create a RecyclerView adapter, <code>ParticipantAdapter</code>, which will be responsible for displaying the participant list. Within this adapter, define a <code>PeerViewHolder</code> class that extends <code>RecyclerView.ViewHolder</code>.</p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) : RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PeerViewHolder {
    return PeerViewHolder(
      LayoutInflater.from(parent.context)
        .inflate(R.layout.item_remote_peer, parent, false)
    )
  }

  override fun onBindViewHolder(holder: PeerViewHolder, position: Int) {
  }

  override fun getItemCount(): Int {
    return 0
  }

  class PeerViewHolder(view: View) : RecyclerView.ViewHolder(view) {
    // 'VideoView' to show Video Stream
    var participantView: VideoView
    var tvName: TextView

    init {
        tvName = view.findViewById(R.id.tvName)
        participantView = view.findViewById(R.id.participantView)
    }
  }
}
</code></pre><p><strong>(c)</strong> Now, we will render a list of <code>Participant</code> objects for the meeting. We will initialize this list in the constructor of the <code>ParticipantAdapter</code>.</p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) :
    RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  // creating an empty list to store all participants
  private val participants: MutableList&lt;Participant&gt; = ArrayList()

  init {
    // adding the local participant(You) to the list
    participants.add(meeting.localParticipant)

    // adding Meeting Event listener to get the participant join/leave event in the meeting.
    meeting.addEventListener(object : MeetingEventListener() {
      override fun onParticipantJoined(participant: Participant) {
        // add participant to the list
        participants.add(participant)
        notifyItemInserted(participants.size - 1)
      }

      override fun onParticipantLeft(participant: Participant) {
        var pos = -1
        for (i in participants.indices) {
          if (participants[i].id == participant.id) {
            pos = i
            break
          }
        }
        // remove participant from the list
        participants.remove(participant)
        if (pos &gt;= 0) {
          notifyItemRemoved(pos)
        }
      }
    })
  }

  // replace getItemCount() method with the following.
  // it returns the total number of participants
  override fun getItemCount(): Int {
    return participants.size
  }
  //...
}
</code></pre><p><strong>(d)</strong> We have listed our participants. Let's set up the view holder to display a participant video.</p><pre><code class="language-kotlin">class ParticipantAdapter(meeting: Meeting) :
    RecyclerView.Adapter&lt;ParticipantAdapter.PeerViewHolder&gt;() {

  // replace onBindViewHolder() method with the following.
  override fun onBindViewHolder(holder: PeerViewHolder, position: Int) {
    val participant = participants[position]

    holder.tvName.text = participant.displayName

    // adding the initial video stream for the participant into the 'VideoView'
    for ((_, stream) in participant.streams) {
      if (stream.kind.equals("video", ignoreCase = true)) {
        holder.participantView.visibility = View.VISIBLE
        val videoTrack = stream.track as VideoTrack
        holder.participantView.addTrack(videoTrack)
        break
      }
    }

    // add a listener to the participant to start or stop rendering its video stream as it is enabled or disabled
    participant.addEventListener(object : ParticipantEventListener() {
      override fun onStreamEnabled(stream: Stream) {
        if (stream.kind.equals("video", ignoreCase = true)) {
          holder.participantView.visibility = View.VISIBLE
          val videoTrack = stream.track as VideoTrack
          holder.participantView.addTrack(videoTrack)
       }
      }

      override fun onStreamDisabled(stream: Stream) {
        if (stream.kind.equals("video", ignoreCase = true)) {
          holder.participantView.removeTrack()
          holder.participantView.visibility = View.GONE
        }
      }
    })
  }
}
</code></pre><p><strong>(e)</strong> Now, add this adapter to the <code>MeetingActivity</code>.</p><pre><code class="language-kotlin">override fun onCreate(savedInstanceState: Bundle?) {
  // Meeting Setup...
  //...
  val rvParticipants = findViewById&lt;RecyclerView&gt;(R.id.rvParticipants)
  rvParticipants.layoutManager = GridLayoutManager(this, 2)
  rvParticipants.adapter = ParticipantAdapter(meeting!!)
}
</code></pre><h2 id="integrate-chat-feature">Integrate Chat feature</h2><p>This section delineates the process of implementing group and private chat functionality within your Android video app. VideoSDK provides the <code>pubSub</code> class, which uses the publish-subscribe mechanism and can be used to build a variety of features. </p><p>For example, participants can use it to send chat messages to each other, share files or other media, or trigger actions such as muting or unmuting audio or video. Now we will see how we can use pubSub to implement chat functionality. If you are not familiar with the pub-sub mechanism and the <code>pubSub</code> class, you can <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/pubsub">follow this guide</a>.</p><p>Let's delve deeper into the implementation of chat functionality, exploring both group and private chat features, along with chat message downloads for enhanced user engagement and convenience.</p><h3 id="implement-group-chat">Implement Group Chat</h3><ul><li>The first step in creating a group chat is choosing the topic that all participants will publish and subscribe to in order to send and receive messages. We will be using <code>CHAT</code> as the topic for this one.</li><li>On the send button, we will publish the message that the sender typed in the <code>EditText</code> field.</li></ul><pre><code class="language-kotlin">import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.view.View
import android.widget.EditText
import android.widget.Toast
import androidx.appcompat.widget.Toolbar
import live.videosdk.rtc.android.Meeting
import live.videosdk.rtc.android.listeners.PubSubMessageListener
import live.videosdk.rtc.android.model.PubSubPublishOptions

class ChatActivity : AppCompatActivity() {
  // Meeting
  var meeting: Meeting? = null

  // EditText for the message input - initialize it from your layout in onCreate()
  private lateinit var etmessage: EditText

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_chat)

    /**
     * Here, we have created 'MainApplication' class, which extends android.app.Application class.
     * It has Meeting property and getter and setter methods of Meeting property.
     * In your android manifest, you must declare the class implementing android.app.Application
     * (add the android:name=".MainApplication" attribute to the existing application tag):
     * In MainActivity.kt, we have set Meeting property.
     *
     * For Example: (MainActivity.kt)
     * var meeting = VideoSDK.initMeeting(context, meetingId, participantName, micEnabled, webcamEnabled, participantId, mode, multiStream, customTrack, metaData)
     * (this.application as MainApplication).meeting = meeting
    */

    // Get Meeting
    meeting = (this.application as MainApplication).meeting

    findViewById&lt;View&gt;(R.id.btnSend).setOnClickListener { sendMessage() }
  }

  private fun sendMessage() {
    // get message from EditText
    val message: String = etmessage.text.toString()
    if (!TextUtils.isEmpty(message)) {
        val publishOptions = PubSubPublishOptions()
        publishOptions.setPersist(true)

        // Sending the Message using the publish method
        meeting!!.pubSub.publish("CHAT", message, publishOptions)

        // Clearing the message input
        etmessage.setText("")
    } else {
        Toast.makeText(
            this@ChatActivity, "Please Enter Message",
            Toast.LENGTH_SHORT
        ).show()
    }
  }
}</code></pre><ul><li>The next step is to display the messages others send. For this, we have to <code>subscribe</code> to that topic, i.e. <code>CHAT</code>, and display all the messages.</li></ul><pre><code class="language-kotlin">class ChatActivity : AppCompatActivity() {

  // PubSubMessageListener
  var pubSubMessageListener =
    PubSubMessageListener { message -&gt;
      // New message
      Toast.makeText(
            this@ChatActivity, message.senderName + " says : " + message.message,
            Toast.LENGTH_SHORT
        ).show()
  }

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_chat)

    //...

    // Subscribe for 'CHAT' topic
    val pubSubMessageList = meeting!!.pubSub.subscribe("CHAT", pubSubMessageListener)

    for (message in pubSubMessageList) {
      // Persisted messages
      Toast.makeText(
            this@ChatActivity, message.senderName + " says : " + message.message,
            Toast.LENGTH_SHORT
        ).show()
    }
  }
}</code></pre><ul><li>The final step for the group chat is to <code>unsubscribe</code> from the topic you previously subscribed to but no longer need. Here we <code>unsubscribe</code> from the <code>CHAT</code> topic when the activity is destroyed.</li></ul><pre><code class="language-kotlin">class ChatActivity : AppCompatActivity() {
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_chat)

    //...
  }

  override fun onDestroy() {
    // Unsubscribe for 'CHAT' topic
    meeting!!.pubSub.unsubscribe("CHAT", pubSubMessageListener)
    super.onDestroy()
  }
}</code></pre><h3 id="implement-private-chat">Implement Private Chat</h3><ul><li>If you want to turn this into a private chat between two participants, all you have to do is pass the <code>sendOnly</code> parameter in <code>PubSubPublishOptions</code>.</li></ul><pre><code class="language-kotlin">class ChatActivity : AppCompatActivity() {

  //..
  
  private fun sendMessage() {
    // get message from EditText
    val message: String = etmessage.text.toString()
    if (!TextUtils.isEmpty(message)) {
        val publishOptions = PubSubPublishOptions()
        publishOptions.setPersist(true)
        // Pass the participantId of the participant to whom you want to send the message.
        var sendOnly: Array&lt;String&gt; = arrayOf("xyz")
        publishOptions.setSendOnly(sendOnly);

        // Sending the Message using the publish method
        meeting!!.pubSub.publish("CHAT", message, publishOptions)

        // Clearing the message input
        etmessage.setText("")
    } else {
        Toast.makeText(
            this@ChatActivity, "Please Enter Message",
            Toast.LENGTH_SHORT
        ).show()
    }
  }
}</code></pre><h3 id="downloading-chat-messages%E2%80%8B">Downloading Chat Messages<a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/chat-using-pubsub#downloading-chat-messages">​</a></h3><p>All the messages from the PubSub that were published with <code>persist : true</code> can be downloaded as a <code>.csv</code> file. This file is available in the VideoSDK dashboard as well as through the <a href="https://docs.videosdk.live/api-reference/realtime-communication/fetch-session-using-sessionid">Sessions API</a>.</p><h2 id="conclusion">Conclusion</h2><p>Integrating chat using pubSub in your Android (Kotlin) video call app elevates its communication capabilities. Whether it's facilitating group chats, enabling private conversations, or downloading chat messages for analysis, pubSub helps you provide users with a richer, more immersive video chat experience.</p><p><a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and get <strong>10,000 free minutes</strong> to take your video app to the next level!</p>]]></content:encoded></item><item><title><![CDATA[Build JavaScript Live Video Streaming App]]></title><description><![CDATA[In this blog, you will learn how to add interactive JavaScript live-streaming features in your app in just 6 steps with VideoSDK.]]></description><link>https://www.videosdk.live/blog/javascript-live-streaming</link><guid isPermaLink="false">649019788ecddeab7f172df6</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 20 Sep 2024 08:14:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/06/ils_JS_blog.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<img src="https://assets.videosdk.live/static-assets/ghost/2023/06/ils_JS_blog.jpg" alt="Build JavaScript Live Video  Streaming App"/><p>This tutorial is about creating an interactive live-streaming app using <a href="https://en.wikipedia.org/wiki/JavaScript">JavaScript</a> with VideoSDK. It involves JavaScript, the Video SDK, and a server for streaming.</p><p>Following this tutorial will enhance your understanding of <a href="https://en.wikipedia.org/wiki/HTTP_Live_Streaming">live streaming</a> and give you practical skills in creating interactive live streams. The benefits of this software configuration include the ability to create dynamic streaming experiences, engage with viewers in real time, and customize the streaming functionality to your specific needs.</p><p>By following this tutorial, you will set up a server, integrate the VideoSDK with JavaScript, and create interactive live streams. You will gain hands-on experience in building and testing a streaming application, allowing you to understand the process from start to finish.</p><p>After completing the tutorial, you will have acquired new skills in video streaming and JavaScript programming. You will be able to create your own interactive live streams, customize streaming features, and deploy video streaming applications with the Video SDK and JavaScript.</p><p>When building a JavaScript live video streaming app, it's crucial to consider various aspects to ensure a seamless user experience. While we delve into real-time streaming, incorporating real-time messaging can foster interactivity among users during live sessions. Additionally, consider features like:</p><ol><li><strong>Mute/unmute mic</strong> for a seamless audio experience.</li><li>The <strong>on/off camera</strong> functionality ensures control over video visibility.</li><li><strong>Screen sharing</strong> enables participants to display their screen content effortlessly.
</li><li><strong>Active speaker indication</strong> highlights the current speaker for better communication.</li><li>The <strong>image capture</strong> feature allows users to take snapshots during the stream.</li></ol><p>Moreover, integrating with media services platforms and optimizing for low latency improve streaming quality. Security measures, such as API keys and stream keys, safeguard streaming sessions. By incorporating these elements and exploring open-source <a href="https://github.com/videosdk-live/videosdk-rtc-javascript-sdk-example">JavaScript live-streaming templates</a>, you can create a comprehensive and customizable live-streaming experience for your users.</p><h2 id="prerequisites-for-building-your-javascript-live-video-streaming-app">Prerequisites for Building Your JavaScript Live Video Streaming App</h2>
<ul><li>A VideoSDK developer account (don't have one? Get your API key from the <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a> <strong>-&gt; API Key</strong>)</li><li><a href="https://en.wikipedia.org/wiki/Node.js">Node</a> and NPM installed on your device.</li></ul><h3 id="install-video-sdk%E2%80%8B">Install Video SDK<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start-ILS#install-video-sdk">​</a></h3><p>Run the following npm command in your app directory.</p><pre><code class="language-js">npm install @videosdk.live/js-sdk
</code></pre>
<h3 id="structure-of-the-project%E2%80%8B">Structure of the project<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start-ILS#structure-of-the-project">​</a></h3><p>Your project directory should include:</p><pre><code class="language-js">  root
   ├── index.html
   ├── config.js
   ├── index.js
</code></pre>
<p>We are going to work on three files:</p><ul><li>index.html : Responsible for creating the basic UI.</li><li>config.js : Responsible for storing the token.</li><li>index.js : Responsible for rendering the meeting view and joining the meeting.</li></ul><h2 id="step-1-set-up-your-projects-user-interface">STEP 1: Set Up Your Project's User Interface</h2>
<p>Create the <a href="https://en.wikipedia.org/wiki/HTML">HTML</a> structure with join and grid screens in your <code>index.html</code> file. Include controls for managing meetings and initiating live streams with HLS (HTTP Live Streaming).</p><pre><code class="language-js">&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt; &lt;/head&gt;

  &lt;body&gt;
    &lt;div id="join-screen"&gt;
      &lt;!-- Create new Meeting Button --&gt;
      &lt;button id="createMeetingBtn"&gt;Create Meeting&lt;/button&gt;
      OR
      &lt;!-- Join existing Meeting --&gt;
      &lt;input type="text" id="meetingIdTxt" placeholder="Enter Meeting id" /&gt;
      &lt;button id="joinHostBtn"&gt;Join As Host&lt;/button&gt;
      &lt;button id="joinViewerBtn"&gt;Join As Viewer&lt;/button&gt;
    &lt;/div&gt;

    &lt;!-- for Managing meeting status --&gt;
    &lt;div id="textDiv"&gt;&lt;/div&gt;

    &lt;div id="grid-screen" style="display: none"&gt;
      &lt;!-- To Display MeetingId --&gt;
      &lt;h3 id="meetingIdHeading"&gt;&lt;/h3&gt;
      &lt;h3 id="hlsStatusHeading"&gt;&lt;/h3&gt;

      &lt;div id="speakerView" style="display: none"&gt;
        &lt;!-- Controllers --&gt;
        &lt;button id="leaveBtn"&gt;Leave&lt;/button&gt;
        &lt;button id="toggleMicBtn"&gt;Toggle Mic&lt;/button&gt;
        &lt;button id="toggleWebCamBtn"&gt;Toggle WebCam&lt;/button&gt;
        &lt;button id="startHlsBtn"&gt;Start HLS&lt;/button&gt;
        &lt;button id="stopHlsBtn"&gt;Stop HLS&lt;/button&gt;
      &lt;/div&gt;

      &lt;!-- render Video --&gt;
      &lt;div id="videoContainer"&gt;&lt;/div&gt;
    &lt;/div&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.67/videosdk.js"&gt;&lt;/script&gt;
    &lt;script src="config.js"&gt;&lt;/script&gt;
    &lt;script src="index.js"&gt;&lt;/script&gt;

    &lt;!-- hls lib script  --&gt;
    &lt;script src="https://cdn.jsdelivr.net/npm/hls.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;
</code></pre>
<h3 id="output">Output</h3>
<p><img src="https://cdn.videosdk.live/website-resources/docs-resources/js_ils_join_screen.png" alt="Build JavaScript Live Video  Streaming App" loading="lazy"/></p>
<h2 id="step-2-implement-the-join-screen-of-your-project">STEP 2: Implement the Join Screen of your project</h2>
<p>Set up token authentication for creating and joining meetings by adding the token you generated from the <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a> to the <code>config.js</code> file. You will then implement the DOM manipulation in <code>index.js</code> that lets users join a session as a host or viewer.</p><pre><code class="language-js">// Auth token we will use to generate a meeting and connect to it
TOKEN = "Your_Token_Here";
</code></pre>
<p>Now get all the elements from the DOM, declare the following variables in the <code>index.js</code> file, and then add event listeners to the join and create-meeting buttons.<br>The join screen acts as a medium to either schedule a new meeting or join an existing meeting as a host or a viewer.</p><p>It has 3 buttons:<br><strong>1. Join as a Host:</strong> When this button is clicked, the person joins the entered <code>meetingId</code> as a <code>HOST</code>.<br><strong>2. Join as a Viewer:</strong> When this button is clicked, the person joins the entered <code>meetingId</code> as a <code>VIEWER</code>.<br><strong>3. New Meeting:</strong> When this button is clicked, the person joins a new meeting as a <code>HOST</code>.</p><pre><code class="language-js">// getting Elements from Dom
const joinHostButton = document.getElementById("joinHostBtn");
const joinViewerButton = document.getElementById("joinViewerBtn");
const leaveButton = document.getElementById("leaveBtn");
const startHlsButton = document.getElementById("startHlsBtn");
const stopHlsButton = document.getElementById("stopHlsBtn");
const toggleMicButton = document.getElementById("toggleMicBtn");
const toggleWebCamButton = document.getElementById("toggleWebCamBtn");
const createButton = document.getElementById("createMeetingBtn");
const videoContainer = document.getElementById("videoContainer");
const textDiv = document.getElementById("textDiv");
const hlsStatusHeading = document.getElementById("hlsStatusHeading");

// declare Variables
let meeting = null;
let meetingId = "";
let isMicOn = false;
let isWebCamOn = false;

const Constants = VideoSDK.Constants;

function initializeMeeting() {}

function createLocalParticipant() {}

function createVideoElement() {}

function createAudioElement() {}

function setTrack() {}

// Join Meeting As Host Button Event Listener
joinHostButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Joining the meeting...";

  const roomId = document.getElementById("meetingIdTxt").value;
  meetingId = roomId;

  initializeMeeting(Constants.modes.CONFERENCE);
});

// Join Meeting As Viewer Button Event Listener
joinViewerButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Joining the meeting...";

  const roomId = document.getElementById("meetingIdTxt").value;
  meetingId = roomId;

  initializeMeeting(Constants.modes.VIEWER);
});

// Create Meeting Button Event Listener
createButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Please wait, we are joining the meeting";

  const url = `https://api.videosdk.live/v2/rooms`;
  const options = {
    method: "POST",
    headers: { Authorization: TOKEN, "Content-Type": "application/json" },
  };

  const { roomId } = await fetch(url, options)
    .then((response) =&gt; response.json())
    .catch((error) =&gt; alert("error", error));
  meetingId = roomId;

  initializeMeeting(Constants.modes.CONFERENCE);
});
</code></pre>
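The Create Meeting handler above calls the rooms REST endpoint but only surfaces failures through an `alert`. A slightly more defensive variant is sketched below; `createRoom` and its injectable `fetchImpl` parameter are illustrative names (the endpoint and the `{ roomId }` response shape come from the handler above).

```javascript
// Sketch: room creation with explicit error handling.
// fetchImpl is injectable so the function can be exercised
// without hitting the live VideoSDK API.
async function createRoom(token, fetchImpl = fetch) {
  const response = await fetchImpl("https://api.videosdk.live/v2/rooms", {
    method: "POST",
    headers: { Authorization: token, "Content-Type": "application/json" },
  });
  if (!response.ok) {
    // e.g. 401 for a bad token - fail loudly instead of silently
    throw new Error(`Room creation failed: HTTP ${response.status}`);
  }
  const { roomId } = await response.json();
  return roomId;
}
```

In the click handler, `meetingId = await createRoom(TOKEN);` could replace the inline fetch, with a try/catch around it to restore the join screen on failure.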
<h3 id="output">Output</h3>
<p><img src="https://cdn.videosdk.live/website-resources/docs-resources/js_ils_waiting.png" alt="Build JavaScript Live Video  Streaming App" loading="lazy"/></p>
<h2 id="step-3-initialize-your-meeting">STEP 3: Initialize your meeting</h2>
<p>Configure the meeting based on the session type (host or viewer), join it, and wire up the event handlers that manage media streams and HLS status. Keeping this setup lean helps preserve the low latency needed for real-time interaction.</p><pre><code class="language-js">// Initialize meeting
function initializeMeeting(mode) {
  window.VideoSDK.config(TOKEN);

  meeting = window.VideoSDK.initMeeting({
    meetingId: meetingId, // required
    name: "Thomas Edison", // required
    mode: mode,
  });

  meeting.join();

  meeting.on("meeting-joined", () =&gt; {
    textDiv.textContent = null;

    document.getElementById("grid-screen").style.display = "block";
    document.getElementById(
      "meetingIdHeading"
    ).textContent = `Meeting Id: ${meetingId}`;

    if (meeting.hlsState === Constants.hlsEvents.HLS_STOPPED) {
      hlsStatusHeading.textContent = "HLS has not started yet";
    } else {
      hlsStatusHeading.textContent = `HLS Status: ${meeting.hlsState}`;
    }

    if (mode === Constants.modes.CONFERENCE) {
      // pin the local participant if they join in CONFERENCE mode
      meeting.localParticipant.pin();

      document.getElementById("speakerView").style.display = "block";
    }
  });

  meeting.on("meeting-left", () =&gt; {
    videoContainer.innerHTML = "";
  });

  meeting.on("hls-state-changed", (data) =&gt; {
    //
  });

  if (mode === Constants.modes.CONFERENCE) {
    // creating local participant
    createLocalParticipant();

    // setting local participant stream
    meeting.localParticipant.on("stream-enabled", (stream) =&gt; {
      setTrack(stream, null, meeting.localParticipant, true);
    });

    // participant joined
    meeting.on("participant-joined", (participant) =&gt; {
      if (participant.mode === Constants.modes.CONFERENCE) {
        participant.pin();

        let videoElement = createVideoElement(
          participant.id,
          participant.displayName
        );
        let audioElement = createAudioElement(participant.id);

        // attach tracks as the participant's streams become available
        participant.on("stream-enabled", (stream) =&gt; {
          setTrack(stream, audioElement, participant, false);
        });

        videoContainer.appendChild(videoElement);
        videoContainer.appendChild(audioElement);
      }
    });

    // participants left
    meeting.on("participant-left", (participant) =&gt; {
      let vElement = document.getElementById(`f-${participant.id}`);
      vElement.remove();

      let aElement = document.getElementById(`a-${participant.id}`);
      aElement.remove();
    });
  }
}
</code></pre>
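The `hls-state-changed` handler above is left empty. For viewers, this is where playback typically gets wired up: once the stream is playable, feed its URL to hls.js (already included in `index.html`). The sketch below is an assumption-laden outline: the payload field carrying the URL differs across SDK versions (`downstreamUrl` vs. `playableHlsUrl`), and the `HLS_PLAYABLE` constant should be checked against the SDK build you load.

```javascript
// Pick the playback URL out of the hls-state-changed payload.
// The field name varies by SDK version, so check both.
function getPlaybackUrl(data) {
  return data.playableHlsUrl || data.downstreamUrl || null;
}

// Sketch of a viewer-side handler; register it with
// meeting.on("hls-state-changed", onHlsStateChanged)
function onHlsStateChanged(data) {
  hlsStatusHeading.textContent = `HLS Status: ${data.status}`;

  if (data.status === Constants.hlsEvents.HLS_PLAYABLE) {
    const url = getPlaybackUrl(data);
    const video = document.createElement("video");
    video.setAttribute("width", "600");
    video.setAttribute("controls", "true");
    videoContainer.appendChild(video);

    if (Hls.isSupported()) {
      // hls.js handles HLS playback in Chromium/Firefox
      const hls = new Hls();
      hls.loadSource(url);
      hls.attachMedia(video);
    } else {
      // Safari can play HLS natively
      video.src = url;
    }
  }
}
```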
<h3 id="output">Output</h3>
<p><img src="https://cdn.videosdk.live/website-resources/docs-resources/js_ils_controls.png" alt="Build JavaScript Live Video  Streaming App" loading="lazy"/></p>
<h2 id="step-4-controls-for-the-speaker">STEP 4: Controls for the Speaker</h2>
<p>Next, wire up the speaker controls: leaving the meeting, toggling the mic, toggling the webcam, and starting or stopping HLS.<br><br>You'll get all the <code>participants</code> from the <code>meeting</code> object and filter them for the mode set to <code>CONFERENCE</code>, so only speakers are shown on the screen.</p><pre><code class="language-js">// leave Meeting Button Event Listener
leaveButton.addEventListener("click", async () =&gt; {
  meeting?.leave();
  document.getElementById("grid-screen").style.display = "none";
  document.getElementById("join-screen").style.display = "block";
});

// Toggle Mic Button Event Listener
toggleMicButton.addEventListener("click", async () =&gt; {
  if (isMicOn) {
    // Disable Mic in Meeting
    meeting?.muteMic();
  } else {
    // Enable Mic in Meeting
    meeting?.unmuteMic();
  }
  isMicOn = !isMicOn;
});

// Toggle Web Cam Button Event Listener
toggleWebCamButton.addEventListener("click", async () =&gt; {
  if (isWebCamOn) {
    // Disable Webcam in Meeting
    meeting?.disableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "none";
  } else {
    // Enable Webcam in Meeting
    meeting?.enableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "inline";
  }
  isWebCamOn = !isWebCamOn;
});

// Start Hls Button Event Listener
startHlsButton.addEventListener("click", async () =&gt; {
  meeting?.startHls({
    layout: {
      type: "SPOTLIGHT",
      priority: "PIN",
      gridSize: "20",
    },
    theme: "LIGHT",
    mode: "video-and-audio",
    quality: "high",
    orientation: "landscape",
  });
});

// Stop Hls Button Event Listener
stopHlsButton.addEventListener("click", async () =&gt; {
  meeting?.stopHls();
});
</code></pre>
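<p>The step above mentions filtering the <code>participants</code> for the <code>CONFERENCE</code> mode so that only Speakers are rendered. Here is a minimal sketch of that filter, assuming <code>meeting.participants</code> is a Map and each participant object exposes a <code>mode</code> property as described in the VideoSDK ILS docs:</p><pre><code class="language-js">// Sketch: keep only participants whose mode is CONFERENCE (Speakers).
// Assumes `participants` is a Map of participant objects with a `mode` field.
function getSpeakers(participants) {
  const speakers = [];
  participants.forEach(function (participant) {
    if (participant.mode === "CONFERENCE") {
      speakers.push(participant);
    }
  });
  return speakers;
}</code></pre><p>Running this over <code>meeting.participants</code> before rendering the grid ensures viewer-mode participants never get a video tile.</p>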
<h2 id="step-5-configure-media-elements-for-speakers">STEP 5: Configure Media Elements for Speakers</h2>
<p>In this step, create the functions that generate the audio and video elements for displaying both local and remote participants, and attach the correct media track to each element so that audio and video playback stays in sync.</p><pre><code class="language-js">// creating video element
function createVideoElement(pId, name) {
  let videoFrame = document.createElement("div");
  videoFrame.setAttribute("id", `f-${pId}`);

  //create video
  let videoElement = document.createElement("video");
  videoElement.classList.add("video-frame");
  videoElement.setAttribute("id", `v-${pId}`);
  videoElement.setAttribute("playsinline", true);
  videoElement.setAttribute("width", "300");
  videoFrame.appendChild(videoElement);

  let displayName = document.createElement("div");
  displayName.innerHTML = `Name : ${name}`;

  videoFrame.appendChild(displayName);
  return videoFrame;
}

// creating audio element
function createAudioElement(pId) {
  let audioElement = document.createElement("audio");
  audioElement.setAttribute("autoPlay", "false");
  audioElement.setAttribute("playsInline", "true");
  audioElement.setAttribute("controls", "false");
  audioElement.setAttribute("id", `a-${pId}`);
  audioElement.style.display = "none";
  return audioElement;
}

// creating local participant
function createLocalParticipant() {
  let localParticipant = createVideoElement(
    meeting.localParticipant.id,
    meeting.localParticipant.displayName
  );
  videoContainer.appendChild(localParticipant);
}

// setting media track
function setTrack(stream, audioElement, participant, isLocal) {
  if (stream.kind == "video") {
    isWebCamOn = true;
    const mediaStream = new MediaStream();
    mediaStream.addTrack(stream.track);
    let videoElm = document.getElementById(`v-${participant.id}`);
    videoElm.srcObject = mediaStream;
    videoElm
      .play()
      .catch((error) =&gt;
        console.error("videoElem.current.play() failed", error)
      );
  }
  if (stream.kind == "audio") {
    if (isLocal) {
      isMicOn = true;
    } else {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(stream.track);
      audioElement.srcObject = mediaStream;
      audioElement
        .play()
        .catch((error) =&gt; console.error("audioElem.play() failed", error));
    }
  }
}
</code></pre>
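<p>Note that <code>play()</code> returns a promise that can reject under browser autoplay policies, which is why the code above attaches a <code>.catch</code>. If you prefer a reusable guard, here is an optional helper (a sketch, not part of the VideoSDK API) that resolves to a boolean instead of throwing:</p><pre><code class="language-js">// Optional helper: attempt to play a media element and swallow a rejected
// play() promise (e.g., blocked by the browser's autoplay policy).
// Resolves to true on success and false on failure.
function safePlay(mediaElement) {
  return Promise.resolve()
    .then(function () {
      return mediaElement.play();
    })
    .then(function () {
      return true;
    })
    .catch(function (error) {
      console.error("play() failed", error);
      return false;
    });
}</code></pre>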
<h3 id="output">Output</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/Screenshot--22-.png" class="kg-image" alt="Build JavaScript Live Video  Streaming App" loading="lazy" width="1366" height="726"/></figure><h2 id="step-6-set-up-the-viewer-view-for-live-streaming">STEP 6: Set Up the Viewer View for Live Streaming</h2>
<p>In this step, implement the viewer side of the stream using HLS. VideoSDK's media servers compose the speakers' streams and distribute them efficiently to any number of viewers.</p><p>To implement the player view, use <code>hls.js</code>, whose script you already added to the <code>index.html</code> file. On the <code>hls-state-changed</code> event, when the participant mode is <code>VIEWER</code> and the HLS status is <code>HLS_PLAYABLE</code>, pass the downstream URL to hls.js and play it.</p><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  // ...

  // hls-state-changed event
  meeting.on("hls-state-changed", (data) =&gt; {
    const { status } = data;

    hlsStatusHeading.textContent = `HLS Status: ${status}`;

    if (mode === Constants.modes.VIEWER) {
      if (status === Constants.hlsEvents.HLS_PLAYABLE) {
        const { downstreamUrl } = data;
        let video = document.createElement("video");
        video.setAttribute("width", "100%");
        video.setAttribute("muted", "false");
        // enableAutoPlay for browser autoplay policy
        video.setAttribute("autoplay", "true");

        if (Hls.isSupported()) {
          var hls = new Hls();
          hls.loadSource(downstreamUrl);
          hls.attachMedia(video);
          hls.on(Hls.Events.MANIFEST_PARSED, function () {
            video.play();
          });
        } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
          video.src = downstreamUrl;
          video.addEventListener("canplay", function () {
            video.play();
          });
        }

        videoContainer.appendChild(video);
      }

      if (status === Constants.hlsEvents.HLS_STOPPING) {
        videoContainer.innerHTML = "";
      }
    }
  });
}
</code></pre>
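<p>One detail the handler above leaves out: clearing <code>innerHTML</code> on <code>HLS_STOPPING</code> removes the video element but never disposes of the hls.js instance, which holds network and media resources until <code>destroy()</code> is called. A hedged cleanup sketch follows; the <code>teardownHls</code> helper and <code>activeHls</code> variable are illustrative names, while <code>hls.destroy()</code> is hls.js's documented teardown method.</p><pre><code class="language-js">// Keep a module-level reference to the current hls.js instance
// so it can be destroyed when the stream stops. Illustrative name.
let activeHls = null;

function teardownHls(hls, container) {
  if (hls) {
    // Release hls.js network and media resources
    hls.destroy();
  }
  if (container) {
    container.innerHTML = "";
  }
  return null; // assign the result back to activeHls
}</code></pre><p>On <code>HLS_PLAYABLE</code> you would store the new instance in <code>activeHls</code>, and on <code>HLS_STOPPING</code> call <code>activeHls = teardownHls(activeHls, videoContainer);</code> instead of only clearing the container.</p>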
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/tada-ta-1.gif" class="kg-image" alt="Build JavaScript Live Video  Streaming App" loading="lazy" width="640" height="532"/></figure><p>You're done with the implementation of a customized live-streaming app in JavaScript using VideoSDK.</p><h2 id="conclusion">Conclusion</h2>
<ul><li>You have configured a server and integrated the Video SDK with JavaScript to create interactive live streams.</li><li>You have gained practical skills in live streaming and JavaScript programming.</li><li>You can now create your own customized streaming experiences, engage with viewers in real-time, and deploy video streaming applications.</li><li>Go ahead and create advanced features like real-time messaging, screen-sharing, and others. Browse our <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start-ILS">documentation</a>.</li><li>If you face any problems, feel free to join our <a href="https://discord.gg/Gpmj6eCq5u">Discord community</a>.</li></ul><h2 id="more-javascript-resources">More JavaScript Resources</h2>
<ul><li><a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start">JavaScript Audio/Video call documentation</a></li><li><a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start-ILS">JavaScript interactive live stream documentation</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-javascript-sdk-example">Code sample of JavaScript Video call</a></li><li><a href="https://www.videosdk.live/blog/video-calling-javascript">Build JavaScript Video calling app</a></li></ul>]]></content:encoded></item><item><title><![CDATA[How to Integrate Active Speaker Indication in JavaScript Video Call App?]]></title><description><![CDATA[Integrate Active Speaker Indication into your JavaScript Video Chat App with a clear, step-by-step guide.]]></description><link>https://www.videosdk.live/blog/integrate-active-speaker-indication-in-javascript-video-chat-app</link><guid isPermaLink="false">662620b42a88c204ca9d47bc</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Fri, 20 Sep 2024 07:43:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-JS-1.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-JS-1.jpg" alt="How to Integrate Active Speaker Indication in JavaScript Video Call App?"/><p>Integrating Active Speaker Indication in a <a href="https://www.videosdk.live/blog/video-calling-javascript">JavaScript video chat app</a> enhances user experience by highlighting the participant currently speaking. This functionality visually distinguishes the active speaker, improving communication flow in group calls. Through JavaScript, real-time audio analysis detects sound levels and determines the speaker. 
Visual cues, such as highlighting their video feed or displaying a speaker icon, notify participants of the active speaker.</p><p><strong>Benefits of Integrating Active Speaker Indication:</strong></p><ol><li><strong>Improved Communication Flow</strong>: Participants can easily identify the active speaker, leading to smoother conversations and reduced interruptions.</li><li><strong>Enhanced User Experience</strong>: Active speaker indication adds a layer of interactivity, making the video chat app more engaging and user-friendly.</li><li><strong>Increased Engagement</strong>: Visual cues encourage active participation and attentiveness among users, fostering more meaningful interactions.</li><li><strong>Reduced Confusion</strong>: With clear visual indicators, users can avoid confusion about who's speaking, leading to more efficient communication.</li></ol><p><strong>Use Cases of Integrating Active Speaker Indication:</strong></p><ol><li><strong>Remote Work</strong>: In <a href="https://www.the-legends-agency.com/the-remote-work-landscape-in-2024/">remote work</a> scenarios, active speaker indication ensures smooth communication during team meetings, allowing members to follow discussions more easily.</li><li><strong>Online Education</strong>: In virtual classrooms, teachers can use active speaker indication to monitor student participation and facilitate discussions effectively.</li><li><strong>Customer Support</strong>: Active speaker indication in customer support video calls helps agents to know when customers are speaking, improving response times and service quality.</li></ol><p>This tutorial guides you through integrating this valuable feature into your JavaScript video call application using VideoSDK. We'll cover the steps required to leverage VideoSDK's capabilities and implement visual cues that highlight the active speaker within your app's interface. 
</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the Active Speaker Indication functionality, you must use the capabilities that VideoSDK offers. Before diving into the implementation steps, ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. Consider referring to the <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a> for a more visual understanding of the account creation and token generation process.</p><h3 id="prerequisites">Prerequisites</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (if you do not have one, follow <a href="https://app.videosdk.live/" rel="noopener noreferrer">VideoSDK Dashboard</a>)</li><li>Have Node and NPM installed on your device.</li></ul><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk">⬇️ Install VideoSDK</h2><p>Import VideoSDK using the <code>&lt;script&gt;</code> tag or install it using the following npm command. Make sure you are in your app directory before you run this command.</p><pre><code class="language-js">&lt;html&gt;
  &lt;head&gt;
    &lt;!--.....--&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;!--.....--&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.83/videosdk.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><ul><li><strong>npm</strong></li></ul><pre><code class="language-js">npm install @videosdk.live/js-sdk</code></pre><ul><li><strong>Yarn</strong></li></ul><pre><code class="language-js">yarn add @videosdk.live/js-sdk</code></pre><h3 id="structure-of-the-project">Structure of the project</h3><p>Your project structure should look like this.</p><pre><code class="language-js">  root
   ├── index.html
   ├── config.js
   ├── index.js</code></pre><p>You will be working on the following files:</p><ul><li><strong>index.html</strong>: Responsible for creating a basic UI.</li><li><strong>config.js</strong>: Responsible for storing the token.</li><li><strong>index.js</strong>: Responsible for rendering the meeting view and the join meeting functionality.</li></ul><h2 id="essential-steps-to-implement-video-call-functionality">Essential Steps to Implement Video Call Functionality</h2><p>Once you've successfully installed VideoSDK in your project, you'll have access to a range of functionalities for building your video call application. Active Speaker Indication is one such feature that leverages VideoSDK's capabilities. It leverages VideoSDK's capabilities to identify the user with the strongest audio signal (the one speaking)</p><h3 id="step-1-design-the-user-interface-ui%E2%80%8B">Step 1: Design the user interface (UI)<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-1--design-the-user-interface-ui">​</a></h3><p>Create an HTML file containing the screens, <code>join-screen</code> and <code>grid-screen</code>.</p><pre><code class="language-js">&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt; &lt;/head&gt;

  &lt;body&gt;
    &lt;div id="join-screen"&gt;
      &lt;!-- Create new Meeting Button --&gt;
      &lt;button id="createMeetingBtn"&gt;New Meeting&lt;/button&gt;
      OR
      &lt;!-- Join existing Meeting --&gt;
      &lt;input type="text" id="meetingIdTxt" placeholder="Enter Meeting id" /&gt;
      &lt;button id="joinBtn"&gt;Join Meeting&lt;/button&gt;
    &lt;/div&gt;

    &lt;!-- for Managing meeting status --&gt;
    &lt;div id="textDiv"&gt;&lt;/div&gt;

    &lt;div id="grid-screen" style="display: none"&gt;
      &lt;!-- To Display MeetingId --&gt;
      &lt;h3 id="meetingIdHeading"&gt;&lt;/h3&gt;

      &lt;!-- Controllers --&gt;
      &lt;button id="leaveBtn"&gt;Leave&lt;/button&gt;
      &lt;button id="toggleMicBtn"&gt;Toggle Mic&lt;/button&gt;
      &lt;button id="toggleWebCamBtn"&gt;Toggle WebCam&lt;/button&gt;

      &lt;!-- render Video --&gt;
      &lt;div class="row" id="videoContainer"&gt;&lt;/div&gt;
    &lt;/div&gt;

    &lt;!-- Add VideoSDK script --&gt;
    &lt;script src="https://sdk.videosdk.live/js-sdk/0.0.83/videosdk.js"&gt;&lt;/script&gt;
    &lt;script src="config.js"&gt;&lt;/script&gt;
    &lt;script src="index.js"&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>Configure the token in the <code>config.js</code> file, which you can obtain from the <a href="https://app.videosdk.live/login" rel="noopener noreferrer">VideoSDK Dashbord</a>.</p><pre><code class="language-js">// Auth token will be used to generate a meeting and connect to it
TOKEN = "Your_Token_Here";</code></pre><p>Next, retrieve all the elements from the DOM and declare the following variables in the <code>index.js</code> file. Then, add an event listener to the join and create meeting buttons.</p><pre><code class="language-js">// Getting Elements from DOM
const joinButton = document.getElementById("joinBtn");
const leaveButton = document.getElementById("leaveBtn");
const toggleMicButton = document.getElementById("toggleMicBtn");
const toggleWebCamButton = document.getElementById("toggleWebCamBtn");
const createButton = document.getElementById("createMeetingBtn");
const videoContainer = document.getElementById("videoContainer");
const textDiv = document.getElementById("textDiv");

// Declare Variables
let meeting = null;
let meetingId = "";
let isMicOn = false;
let isWebCamOn = false;

function initializeMeeting() {}

function createLocalParticipant() {}

function createVideoElement() {}

function createAudioElement() {}

function setTrack() {}

// Join Meeting Button Event Listener
joinButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Joining the meeting...";

  const roomId = document.getElementById("meetingIdTxt").value;
  meetingId = roomId;

  initializeMeeting();
});

// Create Meeting Button Event Listener
createButton.addEventListener("click", async () =&gt; {
  document.getElementById("join-screen").style.display = "none";
  textDiv.textContent = "Please wait, we are joining the meeting";

  // API call to create meeting
  const url = `https://api.videosdk.live/v2/rooms`;
  const options = {
    method: "POST",
    headers: { Authorization: TOKEN, "Content-Type": "application/json" },
  };

  try {
    const { roomId } = await fetch(url, options).then((response) =&gt;
      response.json()
    );
    meetingId = roomId;
  } catch (error) {
    alert(`Failed to create meeting: ${error}`);
    return;
  }

  initializeMeeting();
});</code></pre><h3 id="step-3-initialize-meeting%E2%80%8B">Step 3: Initialize Meeting<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-meeting">​</a></h3><p>Following that, initialize the meeting using the <code>initMeeting()</code> function and proceed to join the meeting.</p><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  window.VideoSDK.config(TOKEN);

  meeting = window.VideoSDK.initMeeting({
    meetingId: meetingId, // required
    name: "Thomas Edison", // required
    micEnabled: true, // optional, default: true
    webcamEnabled: true, // optional, default: true
  });

  meeting.join();

  // Creating local participant
  createLocalParticipant();

  // Setting local participant stream
  meeting.localParticipant.on("stream-enabled", (stream) =&gt; {
    setTrack(stream, null, meeting.localParticipant, true);
  });

  // meeting joined event
  meeting.on("meeting-joined", () =&gt; {
    textDiv.style.display = "none";
    document.getElementById("grid-screen").style.display = "block";
    document.getElementById(
      "meetingIdHeading"
    ).textContent = `Meeting Id: ${meetingId}`;
  });

  // meeting left event
  meeting.on("meeting-left", () =&gt; {
    videoContainer.innerHTML = "";
  });

  // Remote participants Event
  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    //  ...
  });

  // participant left
  meeting.on("participant-left", (participant) =&gt; {
    //  ...
  });
}</code></pre><h3 id="step-4-create-the-media-elements%E2%80%8B">Step 4: Create the Media Elements<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-4--create-the-media-elements">​</a></h3><p>In this step, Create a function to generate audio and video elements for displaying both local and remote participants. Set the corresponding media track based on whether it's a video or audio stream.</p><pre><code class="language-js">// creating video element
function createVideoElement(pId, name) {
  let videoFrame = document.createElement("div");
  videoFrame.setAttribute("id", `f-${pId}`);
  videoFrame.style.width = "300px";

  //create video
  let videoElement = document.createElement("video");
  videoElement.classList.add("video-frame");
  videoElement.setAttribute("id", `v-${pId}`);
  videoElement.setAttribute("playsinline", true);
  videoElement.setAttribute("width", "300");
  videoFrame.appendChild(videoElement);

  let displayName = document.createElement("div");
  displayName.innerHTML = `Name : ${name}`;

  videoFrame.appendChild(displayName);
  return videoFrame;
}

// creating audio element
function createAudioElement(pId) {
  let audioElement = document.createElement("audio");
  audioElement.setAttribute("autoPlay", "false");
  audioElement.setAttribute("playsInline", "true");
  audioElement.setAttribute("controls", "false");
  audioElement.setAttribute("id", `a-${pId}`);
  audioElement.style.display = "none";
  return audioElement;
}

// creating local participant
function createLocalParticipant() {
  let localParticipant = createVideoElement(
    meeting.localParticipant.id,
    meeting.localParticipant.displayName
  );
  videoContainer.appendChild(localParticipant);
}

// setting media track
function setTrack(stream, audioElement, participant, isLocal) {
  if (stream.kind == "video") {
    isWebCamOn = true;
    const mediaStream = new MediaStream();
    mediaStream.addTrack(stream.track);
    let videoElm = document.getElementById(`v-${participant.id}`);
    videoElm.srcObject = mediaStream;
    videoElm
      .play()
      .catch((error) =&gt;
        console.error("videoElem.current.play() failed", error)
      );
  }
  if (stream.kind == "audio") {
    if (isLocal) {
      isMicOn = true;
    } else {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(stream.track);
      audioElement.srcObject = mediaStream;
      audioElement
        .play()
        .catch((error) =&gt; console.error("audioElem.play() failed", error));
    }
  }
}</code></pre><h3 id="step-5-handle-participant-events%E2%80%8B">Step 5: Handle participant events<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-5--handle-participant-events">​</a></h3><p>Thereafter, implement the events related to the participants and the stream.</p><p>The following are the events to be executed in this step:</p><ol><li><code>participant-joined</code>: When a remote participant joins, this event will trigger. In the event callback, create video and audio elements previously defined for rendering their video and audio streams.</li><li><code>participant-left</code>: When a remote participant leaves, this event will trigger. In the event callback, remove the corresponding video and audio elements.</li><li><code>stream-enabled</code>: This event manages the media track of a specific participant by associating it with the appropriate video or audio element.</li></ol><pre><code class="language-js">// Initialize meeting
function initializeMeeting() {
  // ...

  // participant joined
  meeting.on("participant-joined", (participant) =&gt; {
    let videoElement = createVideoElement(
      participant.id,
      participant.displayName
    );
    let audioElement = createAudioElement(participant.id);
    // stream-enabled
    participant.on("stream-enabled", (stream) =&gt; {
      setTrack(stream, audioElement, participant, false);
    });
    videoContainer.appendChild(videoElement);
    videoContainer.appendChild(audioElement);
  });

  // participants left
  meeting.on("participant-left", (participant) =&gt; {
    let vElement = document.getElementById(`f-${participant.id}`);
    vElement.remove();

    let aElement = document.getElementById(`a-${participant.id}`);
    aElement.remove();
  });
}</code></pre><h3 id="step-6-implement-controls%E2%80%8B">Step 6: Implement Controls<a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-6--implement-controls">​</a></h3><p>Next, implement the meeting controls such as <code>toggleMic</code>, <code>toggleWebcam</code>, and leave the meeting.</p><pre><code class="language-js">// leave Meeting Button Event Listener
leaveButton.addEventListener("click", async () =&gt; {
  meeting?.leave();
  document.getElementById("grid-screen").style.display = "none";
  document.getElementById("join-screen").style.display = "block";
});

// Toggle Mic Button Event Listener
toggleMicButton.addEventListener("click", async () =&gt; {
  if (isMicOn) {
    // Disable Mic in Meeting
    meeting?.muteMic();
  } else {
    // Enable Mic in Meeting
    meeting?.unmuteMic();
  }
  isMicOn = !isMicOn;
});

// Toggle Web Cam Button Event Listener
toggleWebCamButton.addEventListener("click", async () =&gt; {
  if (isWebCamOn) {
    // Disable Webcam in Meeting
    meeting?.disableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "none";
  } else {
    // Enable Webcam in Meeting
    meeting?.enableWebcam();

    let vElement = document.getElementById(`f-${meeting.localParticipant.id}`);
    vElement.style.display = "inline";
  }
  isWebCamOn = !isWebCamOn;
});</code></pre><p><strong>You can check out the complete code example below.</strong></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/quickstart/tree/main/js-rtc"><div class="kg-bookmark-content"><div class="kg-bookmark-title">quickstart/js-rtc at main · videosdk-live/quickstart</div><div class="kg-bookmark-description">A short and sweet tutorial for getting up to speed with VideoSDK in less than 10 minutes - videosdk-live/quickstart</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate Active Speaker Indication in JavaScript Video Call App?"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/de970b6f7db5c97b5728471cbf13bae285388d9a9ccaa9bd294da67c509984d5/videosdk-live/quickstart" alt="How to Integrate Active Speaker Indication in JavaScript Video Call App?" onerror="this.style.display = 'none'"/></div></a></figure><h2 id="integrate-active-speaker-indication">Integrate Active Speaker Indication</h2><p>The Active Speaker Indication feature allows you to identify the participant who is currently the active speaker in a meeting. This feature proves especially valuable in larger conferences or webinars, where numerous participants can make it challenging to identify the active speaker.</p><p>Whenever any participant speaks in a meeting, the <code>speaker-changed</code> event will trigger, providing the participant ID of the active speaker.</p><p>For example, suppose a meeting is running with <strong>Alice</strong> and <strong>Bob</strong>. Whenever either of them speaks, the <code>speaker-changed</code> event will trigger and return the speaker's <code>participantId</code>.</p><pre><code class="language-js">let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});

// Track the previously highlighted participant so its highlight can be cleared
let previousActiveSpeaker = null;

meeting.on("speaker-changed", (activeSpeakerId) =&gt; {
    console.log("Active Speaker", activeSpeakerId);
    if (activeSpeakerId != null) {

      // To check if there was any previous active participant
      if (previousActiveSpeaker) {
        var previousDivElement = document.getElementById(`f-${previousActiveSpeaker}`);
        // To check if the previous active participant video element is still present or not
        if(previousDivElement){
          previousDivElement.style.webkitBoxShadow = '';
          previousDivElement.style.mozBoxShadow = '';
          previousDivElement.style.boxShadow = '';
        }
      }

      // Apply box shadow to the current active speaker
      var currentDivElement = document.getElementById(`f-${activeSpeakerId}`);
      // To check if the active participant video element is still present or not
      if(currentDivElement){
        currentDivElement.style.webkitBoxShadow = '0 0 20px blue';
        currentDivElement.style.mozBoxShadow = '0 0 20px blue';
        currentDivElement.style.boxShadow = '0 0 20px blue';
      }

      // Update the previous active speaker ID
      previousActiveSpeaker = activeSpeakerId;
    }
  })
</code></pre><h2 id="%E2%9C%A8-want-to-add-more-features-to-javascript-video-calling-app">✨ Want to Add More Features to JavaScript Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your JavaScript video-calling app, check out these additional resources:</p><ul><li>Image Capture: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-javascript-chat-app">Link</a></li><li>Screen Share Feature: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-javascript-video-chat-app">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-mode-in-javascript-video-chat-app">Link</a></li><li>RTMP Livestream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-javascript-video-chat-app">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>You have successfully integrated active speaker indication. This significantly improves your video call app: users will appreciate the clarity and reduced confusion during conversations, leading to a more engaging and productive video conferencing experience.</p><p>If you are new here and want to build an interactive JavaScript app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get
<em>10000 free minutes every month.</em> This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Active Speaker Indication in iOS Video Call App?]]></title><description><![CDATA[Integrate Active Speaker Indication into your iOS Video Call App using VideoSDK for seamless identification of the current speaker.]]></description><link>https://www.videosdk.live/blog/integrate-active-speaker-indication-in-ios-video-call-app</link><guid isPermaLink="false">661929ca2a88c204ca9d0b2e</guid><category><![CDATA[iOS]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Fri, 20 Sep 2024 07:34:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-iOS.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-iOS.jpg" alt="How to Integrate Active Speaker Indication in iOS Video Call App?"/><p>Integrating Active Speaker Indication in your <a href="https://www.videosdk.live/blog/ios-video-calling-sdk">iOS Video Call App</a> enhances user experience by highlighting the current speaker, making conversations more engaging and natural. 
With this feature, users can easily identify who is speaking, leading to smoother communication and better interaction.</p><p><strong>Benefits of Integrate Active Speaker Indication in iOS Video Call App:</strong></p><ol><li><strong>Enhanced Communication:</strong> Active Speaker Indication allows participants to easily identify who is speaking during video calls, facilitating smoother communication and reducing interruptions.</li><li><strong>Improved Engagement:</strong> By highlighting the current speaker, this feature keeps participants more engaged in the conversation, leading to better interaction and collaboration.</li><li><strong>Reduced Confusion:</strong> Active Speaker Indication reduces confusion by providing visual cues, eliminating the need for participants to guess who is talking and preventing talking over each other.</li></ol><p><strong>Use Case of Integrate Active Speaker Indication in iOS Video Call App:</strong></p><ol><li><strong>Enhanced Collaboration:</strong> During brainstorming sessions, when one team member starts speaking, their video feed becomes highlighted, allowing others to know who is presenting their ideas.</li><li><strong>Reduced Miscommunication:</strong> In discussions involving multiple participants, Active Speaker Indication helps in avoiding confusion by clearly indicating the current speaker, preventing overlapping conversations.</li><li><strong>Professional Meetings:</strong> Whether it's client meetings or internal discussions, Active Speaker Indication adds a level of professionalism to the video calls, making the interactions more effective and impactful.</li></ol><p>By following the guide provided by <a href="https://www.videosdk.live/"><strong>VideoSDK</strong></a>, you'll seamlessly implement Active Speaker Indication into your iOS app, ensuring that users can see who is actively speaking during video calls. 
This not only improves the overall usability of your app but also adds a professional touch to your video calling experience. With clear visual cues, participants can focus more on the conversation rather than guessing who is talking.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>VideoSDK lets you integrate video &amp; audio calling into Web, Android, and iOS applications across many different frameworks. It is an infrastructure solution that provides programmable SDKs and REST APIs to build scalable video conferencing applications. This guide will get you up and running with VideoSDK video &amp; audio calling in minutes.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. 
For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/server-setup">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><ul><li>iOS 11.0+</li><li>Xcode 12.0+</li><li>Swift 5.0+</li></ul><p>This app will contain two screens:</p><p><strong>Join Screen</strong>: This screen allows the user to either create a meeting or join a predefined meeting.</p><p><strong>Meeting Screen</strong>: This screen contains local and remote participant views and some meeting controls, such as enabling/disabling the mic &amp; camera and leaving the meeting.</p><h2 id="integrate-videosdk%E2%80%8B">Integrate VideoSDK​</h2><p>To install VideoSDK, initialize CocoaPods in the project by running the following command:</p><pre><code class="language-pod">pod init</code></pre><p>This will create a Podfile in your project folder. Open that file and add the dependency for the VideoSDK, as shown below:</p><pre><code class="language-pod">pod 'VideoSDKRTC', :git =&gt; 'https://github.com/videosdk-live/videosdk-rtc-ios-sdk.git'</code></pre><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/ios_quickstart_podfile.png" class="kg-image" alt="How to Integrate Active Speaker Indication in iOS Video Call App?" loading="lazy"/></figure><p>Then run the following command to install the pods:</p><pre><code class="language-swift">pod install</code></pre><p>Then declare the permissions in <code>Info.plist</code>:</p><pre><code class="language-swift">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><h3 id="project-structure">Project Structure</h3><pre><code class="language-swift">iOSQuickStartDemo
   ├── Models
        ├── RoomStruct.swift
        └── MeetingData.swift
   ├── ViewControllers
        ├── StartMeetingViewController.swift
        └── MeetingViewController.swift
   ├── AppDelegate.swift // Default
   ├── SceneDelegate.swift // Default
   └── APIService
           └── APIService.swift
   ├── Main.storyboard // Default
   ├── LaunchScreen.storyboard // Default
   └── Info.plist // Default
 Pods
     └── Podfile</code></pre><h3 id="create-models%E2%80%8B">Create models<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#create-models">​</a></h3><p>Create a swift file for <code>MeetingData</code> and <code>RoomStruct</code> class model for setting data in object pattern.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
struct MeetingData {
    let token: String
    let name: String
    let meetingId: String
    let micEnabled: Bool
    let cameraEnabled: Bool
}</code></pre><figcaption>MeetingData.swift</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
struct RoomsStruct: Codable {
    let createdAt, updatedAt, roomID: String?
    let links: Links?
    let id: String?
    enum CodingKeys: String, CodingKey {
        case createdAt, updatedAt
        case roomID = "roomId"
        case links, id
    }
}

// MARK: - Links
struct Links: Codable {
    let getRoom, getSession: String?
    enum CodingKeys: String, CodingKey {
        case getRoom = "get_room"
        case getSession = "get_session"
    }
}</code></pre><figcaption>RoomStruct.swift</figcaption></figure><h2 id="essential-steps-for-building-the-video-calling">Essential Steps for Building the Video Calling</h2><p>This guide is designed to walk you through the process of integrating Active Speaker Indication with <a href="https://www.videosdk.live/">VideoSDK</a>. We'll cover everything from setting up the SDK to incorporating the visual cues into your app's interface, ensuring a smooth and efficient implementation process.</p><h3 id="step-1-get-started-with-apiclient%E2%80%8B">Step 1: Get started with APIClient<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apiclient">​</a></h3><p>Before jumping to anything else, we have to write an API to generate a unique <code>meetingId</code>. You will require an <strong>authentication token;</strong> you can generate it either using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-server-api-example</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation

let TOKEN_STRING: String = "&lt;AUTH_TOKEN&gt;"

class APIService {

  class func createMeeting(token: String, completion: @escaping (Result&lt;String, Error&gt;) -&gt; Void) {

    let url = URL(string: "https://api.videosdk.live/v2/rooms")!

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.addValue(TOKEN_STRING, forHTTPHeaderField: "authorization")

    URLSession.shared.dataTask(
      with: request,
      completionHandler: { (data: Data?, response: URLResponse?, error: Error?) in

        DispatchQueue.main.async {

          // surface transport errors to the caller
          if let error = error {
            completion(.failure(error))
            return
          }

          if let data = data {
            do {
              let dataArray = try JSONDecoder().decode(RoomsStruct.self, from: data)

              completion(.success(dataArray.roomID ?? ""))
            } catch {
              print("Error while creating a meeting: \(error)")
              completion(.failure(error))
            }
          }
        }
      }
    ).resume()
  }
}
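
// Example usage (illustrative sketch, not part of the guide's file):
// call createMeeting from a view controller to obtain a meetingId.
//
// APIService.createMeeting(token: TOKEN_STRING) { result in
//     switch result {
//     case .success(let meetingId):
//         print("Created room: \(meetingId)")
//     case .failure(let error):
//         print("Room creation failed: \(error)")
//     }
// }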
</code></pre><figcaption>APIService.swift</figcaption></figure><h3 id="step-2-implement-join-screen%E2%80%8B">Step 2: Implement Join Screen<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-2--implement-join-screen">​</a></h3><p>The Join Screen will work as a medium to either schedule a new meeting or join an existing meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import Foundation
import UIKit

class StartMeetingViewController: UIViewController, UITextFieldDelegate {

  private var serverToken = ""

  /// MARK: outlet for create meeting button
  @IBOutlet weak var btnCreateMeeting: UIButton!

  /// MARK: outlet for join meeting button
  @IBOutlet weak var btnJoinMeeting: UIButton!

  /// MARK: outlet for meetingId textfield
  @IBOutlet weak var txtMeetingId: UITextField!

  /// MARK: Initialize the private variable with TOKEN_STRING &amp;
  /// setting the meeting id in the textfield
  override func viewDidLoad() {
    super.viewDidLoad()
    txtMeetingId.delegate = self
    serverToken = TOKEN_STRING
    txtMeetingId.text = "PROVIDE-STATIC-MEETING-ID"
  }

  /// MARK: method for joining meeting through segue named "StartMeeting"
  /// after validating that the serverToken is not empty
  func joinMeeting() {

    txtMeetingId.resignFirstResponder()

    if !serverToken.isEmpty {
      DispatchQueue.main.async {
        self.dismiss(animated: true) {
          self.performSegue(withIdentifier: "StartMeeting", sender: nil)
        }
      }
    } else {
      print("Please provide auth token to start the meeting.")
    }
  }

  /// MARK: outlet for create meeting button tap event
  @IBAction func btnCreateMeetingTapped(_ sender: Any) {
    print("show loader while meeting gets connected with server")
    joinRoom()
  }

  /// MARK: outlet for join meeting button tap event
  @IBAction func btnJoinMeetingTapped(_ sender: Any) {
    if (txtMeetingId.text ?? "").isEmpty {

      print("Please provide meeting id to start the meeting.")
      txtMeetingId.resignFirstResponder()
    } else {
      joinMeeting()
    }
  }

  // MARK: - method for creating room api call and getting meetingId for joining meeting

  func joinRoom() {

    APIService.createMeeting(token: self.serverToken) { result in
      if case .success(let meetingId) = result {
        DispatchQueue.main.async {
          self.txtMeetingId.text = meetingId
          self.joinMeeting()
        }
      }
    }
  }

  /// MARK: preparing to animate to meetingViewController screen
  override func prepare(for segue: UIStoryboardSegue, sender: Any?) {

    guard let navigation = segue.destination as? UINavigationController,

      let meetingViewController = navigation.topViewController as? MeetingViewController
    else {
      return
    }

    meetingViewController.meetingData = MeetingData(
      token: serverToken,
      name: txtMeetingId.text ?? "Guest",
      meetingId: txtMeetingId.text ?? "",
      micEnabled: true,
      cameraEnabled: true
    )
  }
}
</code></pre><figcaption>StartMeetingViewController.swift</figcaption></figure><h3 id="step-3-initialize-and-join-meeting%E2%80%8B">Step 3: Initialize and Join Meeting<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-and-join-meeting">​</a></h3><p>Using the provided <code>token</code> and <code>meetingId</code>, we will configure and initialize the meeting in <code>viewDidLoad()</code>.</p><p>Then, we'll add <strong>@IBOutlet</strong> for <code>localParticipantVideoView</code> and <code>remoteParticipantVideoView</code>, which can render local and remote participant media respectively.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">import UIKit
import VideoSDKRTC
import WebRTC
import AVFoundation

class MeetingViewController: UIViewController {

    // MARK: - Properties

    // outlet for local participant container view
    @IBOutlet weak var localParticipantViewContainer: UIView!

    // outlet for label for meeting Id
    @IBOutlet weak var lblMeetingId: UILabel!

    // outlet for local participant video view
    @IBOutlet weak var localParticipantVideoView: RTCMTLVideoView!

    // outlet for remote participant video view
    @IBOutlet weak var remoteParticipantVideoView: RTCMTLVideoView!

    // outlet for remote participant no media label
    @IBOutlet weak var lblRemoteParticipantNoMedia: UILabel!

    // outlet for remote participant container view
    @IBOutlet weak var remoteParticipantViewContainer: UIView!

    // outlet for local participant no media label
    @IBOutlet weak var lblLocalParticipantNoMedia: UILabel!

    // Meeting data - required to start
    var meetingData: MeetingData!

    // current meeting reference
    private var meeting: Meeting?

    // MARK: - video participants including self to show in UI
    private var participants: [Participant] = []

    // MARK: - Lifecycle Events

    override func viewDidLoad() {
        super.viewDidLoad()
        // configure the VideoSDK with token
        VideoSDK.config(token: meetingData.token)

        // init meeting
        initializeMeeting()

        // set meeting id in the label text
        lblMeetingId.text = "Meeting Id: \(meetingData.meetingId)"
    }

      override func viewWillAppear(_ animated: Bool) {
          super.viewWillAppear(animated)
          navigationController?.navigationBar.isHidden = true
      }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        navigationController?.navigationBar.isHidden = false
        NotificationCenter.default.removeObserver(self)
    }

    // MARK: - Meeting

    private func initializeMeeting() {

        // Initialize the meeting
        meeting = VideoSDK.initMeeting(
            meetingId: meetingData.meetingId,
            participantName: meetingData.name,
            micEnabled: meetingData.micEnabled,
            webcamEnabled: meetingData.cameraEnabled
        )

        // Adding the listener to the meeting
        meeting?.addEventListener(self)

        // joining the meeting
        meeting?.join()
    }
}</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>After initializing the meeting in the previous step, we will now add <strong>@IBOutlet</strong> for <code>btnLeave</code>, <code>btnToggleVideo</code> and <code>btnToggleMic</code> which can control the media in the meeting.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">class MeetingViewController: UIViewController {

...

    // outlet for leave button
    @IBOutlet weak var btnLeave: UIButton!

    // outlet for toggle video button
    @IBOutlet weak var btnToggleVideo: UIButton!

    // outlet for toggle audio button
    @IBOutlet weak var btnToggleMic: UIButton!

    // bool for mic
    var micEnabled = true
    // bool for video
    var videoEnabled = true


    // outlet for leave button click event
    @IBAction func btnLeaveTapped(_ sender: Any) {
            DispatchQueue.main.async {
                self.meeting?.leave()
                self.dismiss(animated: true)
            }
        }

    // outlet for toggle mic button click event
    @IBAction func btnToggleMicTapped(_ sender: Any) {
        if micEnabled {
            micEnabled = !micEnabled // false
            self.meeting?.muteMic()
        } else {
            micEnabled = !micEnabled // true
            self.meeting?.unmuteMic()
        }
    }

    // outlet for toggle video button click event
    @IBAction func btnToggleVideoTapped(_ sender: Any) {
        if videoEnabled {
            videoEnabled = !videoEnabled // false
            self.meeting?.disableWebcam()
        } else {
            videoEnabled = !videoEnabled // true
            self.meeting?.enableWebcam()
        }
    }
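
    // Sketch (assumption, not part of the original controls): reflect the new
    // state in the UI after each toggle, e.g. by updating the button titles:
    //
    // btnToggleMic.setTitle(micEnabled ? "Mute" : "Unmute", for: .normal)
    // btnToggleVideo.setTitle(videoEnabled ? "Disable Cam" : "Enable Cam", for: .normal)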

...

}</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-5-implementing-meetingeventlistener%E2%80%8B">Step 5: Implementing <code>MeetingEventListener</code>​</h3><p>In this step, we'll create an extension for the <code>MeetingViewController</code> that conforms to <code>MeetingEventListener</code> and implements the <code>onMeetingJoined</code>, <code>onMeetingLeft</code>, <code>onParticipantJoined</code>, <code>onParticipantLeft</code>, <code>onParticipantChanged</code>, <code>onSpeakerChanged</code>, and other methods.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">
extension MeetingViewController: MeetingEventListener {

        /// Meeting started
        func onMeetingJoined() {

            // handle local participant on start
            guard let localParticipant = self.meeting?.localParticipant else { return }
            // add to list
            participants.append(localParticipant)

            // add event listener
            localParticipant.addEventListener(self)

            localParticipant.setQuality(.high)

            if(localParticipant.isLocal){
                self.localParticipantViewContainer.isHidden = false
            } else {
                self.remoteParticipantViewContainer.isHidden = false
            }
        }

        /// Meeting ended
        func onMeetingLeft() {
            // remove listeners
            meeting?.localParticipant.removeEventListener(self)
            meeting?.removeEventListener(self)
        }

        /// A new participant joined
        func onParticipantJoined(_ participant: Participant) {
            participants.append(participant)

            // add listener
            participant.addEventListener(self)

            participant.setQuality(.high)

            if(participant.isLocal){
                self.localParticipantViewContainer.isHidden = false
            } else {
                self.remoteParticipantViewContainer.isHidden = false
            }
        }

        /// A participant left from the meeting
        /// - Parameter participant: participant object
        func onParticipantLeft(_ participant: Participant) {
            participant.removeEventListener(self)
            guard let index = self.participants.firstIndex(where: { $0.id == participant.id }) else {
                return
            }
            // remove participant from list
            participants.remove(at: index)
            // hide from ui
            UIView.animate(withDuration: 0.5){
                if(!participant.isLocal){
                    self.remoteParticipantViewContainer.isHidden = true
                }
            }
        }

        /// Called when speaker is changed
        /// - Parameter participantId: participant id of the speaker, nil when no one is speaking.
        func onSpeakerChanged(participantId: String?) {

            // show indication for active speaker
            if let participant = participants.first(where: { $0.id == participantId }) {
                self.showActiveSpeakerIndicator(participant.isLocal ? localParticipantViewContainer : remoteParticipantViewContainer, true)
            }

            // hide indication for other participants
            let otherParticipants = participants.filter { $0.id != participantId }
            for participant in otherParticipants {
                if participants.count &gt; 1 &amp;&amp; participant.isLocal {
                    showActiveSpeakerIndicator(localParticipantViewContainer, false)
                } else {
                    showActiveSpeakerIndicator(remoteParticipantViewContainer, false)
                }
            }
        }

        func showActiveSpeakerIndicator(_ view: UIView, _ show: Bool) {
            view.layer.borderWidth = 4.0
            view.layer.borderColor = show ? UIColor.blue.cgColor : UIColor.clear.cgColor
        }

}

...</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><h3 id="step-6-implementing-participanteventlistener">Step 6: Implementing <code>ParticipantEventListener</code></h3><p>In this step, we'll add an extension for the <code>MeetingViewController</code> that conforms to <code>ParticipantEventListener</code> and implements the <code>onStreamEnabled</code> and <code>onStreamDisabled</code> methods, which fire when a participant's audio or video stream is enabled or disabled.</p><p>The <code>updateUI</code> function controls the user interface (showing/hiding the camera &amp; mic views) in accordance with the MediaStream state.</p><pre><code class="language-swift">extension MeetingViewController: ParticipantEventListener {

    /// Participant has enabled mic, video or screenshare
    /// - Parameters:
    ///   - stream: enabled stream object
    ///   - participant: participant object
    func onStreamEnabled(_ stream: MediaStream, forParticipant participant: Participant) {
        updateUI(participant: participant, forStream: stream, enabled: true)
    }

    /// Participant has disabled mic, video or screenshare
    /// - Parameters:
    ///   - stream: disabled stream object
    ///   - participant: participant object
    func onStreamDisabled(_ stream: MediaStream, forParticipant participant: Participant) {
        updateUI(participant: participant, forStream: stream, enabled: false)
    }
}

private extension MeetingViewController {

    func updateUI(participant: Participant, forStream stream: MediaStream, enabled: Bool) {
        switch stream.kind {
        case .state(value: .video):
            if let videotrack = stream.track as? RTCVideoTrack {
                if enabled {
                    DispatchQueue.main.async {
                        UIView.animate(withDuration: 0.5) {
                            if participant.isLocal {
                                self.localParticipantViewContainer.isHidden = false
                                self.localParticipantVideoView.isHidden = false
                                self.localParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.localParticipantViewContainer.bringSubviewToFront(self.localParticipantVideoView)
                                videotrack.add(self.localParticipantVideoView)
                                self.lblLocalParticipantNoMedia.isHidden = true
                            } else {
                                self.remoteParticipantViewContainer.isHidden = false
                                self.remoteParticipantVideoView.isHidden = false
                                self.remoteParticipantVideoView.videoContentMode = .scaleAspectFill
                                self.remoteParticipantViewContainer.bringSubviewToFront(self.remoteParticipantVideoView)
                                videotrack.add(self.remoteParticipantVideoView)
                                self.lblRemoteParticipantNoMedia.isHidden = true
                            }
                        }
                    }
                } else {
                    UIView.animate(withDuration: 0.5) {
                        if participant.isLocal {
                            self.localParticipantViewContainer.isHidden = false
                            self.localParticipantVideoView.isHidden = true
                            self.lblLocalParticipantNoMedia.isHidden = false
                            videotrack.remove(self.localParticipantVideoView)
                        } else {
                            self.remoteParticipantViewContainer.isHidden = false
                            self.remoteParticipantVideoView.isHidden = true
                            self.lblRemoteParticipantNoMedia.isHidden = false
                            videotrack.remove(self.remoteParticipantVideoView)
                        }
                    }
                }
            }

        case .state(value: .audio):
            if participant.isLocal {
                localParticipantViewContainer.layer.borderWidth = 4.0
                localParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            } else {
                remoteParticipantViewContainer.layer.borderWidth = 4.0
                remoteParticipantViewContainer.layer.borderColor = enabled ? UIColor.clear.cgColor : UIColor.red.cgColor
            }

        default:
            break
        }
    }
}

...
</code></pre><h3 id="known-issue%E2%80%8B">Known Issue<a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/quick-start#known-issue">​</a></h3><p>If your video renders outside the container view, as shown in the image below, add the following lines to the <code>viewDidLoad</code> method of <code>MeetingViewController.swift</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-swift">override func viewDidLoad() {

  localParticipantVideoView.frame = CGRect(x: 10, y: 0,
      width: localParticipantViewContainer.frame.width,
      height: localParticipantViewContainer.frame.height)

  localParticipantVideoView.bounds = CGRect(x: 10, y: 0,
      width: localParticipantViewContainer.frame.width,
      height: localParticipantViewContainer.frame.height)

  localParticipantVideoView.clipsToBounds = true

  remoteParticipantVideoView.frame = CGRect(x: 10, y: 0,
      width: remoteParticipantViewContainer.frame.width,
      height: remoteParticipantViewContainer.frame.height)

  remoteParticipantVideoView.bounds = CGRect(x: 10, y: 0,
      width: remoteParticipantViewContainer.frame.width,
      height: remoteParticipantViewContainer.frame.height)

  remoteParticipantVideoView.clipsToBounds = true
}
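
// Explanatory note: pinning the video views' frame/bounds to their containers
// before the first layout pass keeps the rendered video from overflowing the
// container view.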
</code></pre><figcaption>MeetingViewController.swift</figcaption></figure><blockquote><strong>TIP:</strong><br/>Stuck anywhere? Check out this <a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example" rel="noopener noreferrer">example code</a> on GitHub.</blockquote><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - videosdk-live/videosdk-rtc-ios-sdk-example: WebRTC based video conferencing SDK for iOS (Swift / Objective C)</div><div class="kg-bookmark-description">WebRTC based video conferencing SDK for iOS (Swift / Objective C) - videosdk-live/videosdk-rtc-ios-sdk-example</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Integrate Active Speaker Indication in iOS Video Call App?"/><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/3d2f5eef43ad3d03fbe693ee2c8633053215e90252a63bddc775d8b8d8a7e380/videosdk-live/videosdk-rtc-ios-sdk-example" alt="How to Integrate Active Speaker Indication in iOS Video Call App?"/></div></a></figure><p>Once you've successfully installed VideoSDK into your iOS project, you'll unlock a range of functionality to enhance your video call application. This feature uses VideoSDK's advanced audio processing capabilities to identify the participant with the strongest audio signal in real-time. This translates to detecting the active speaker during a call, allowing you to provide visual feedback to users.</p><h2 id="integrate-active-speaker-indication">Integrate Active Speaker Indication</h2><p>This feature can be especially useful in large meetings or webinars, where there may be many participants and it may be difficult to tell who is speaking. 
Integrating Active Speaker Indication can significantly improve the user experience in various ways. It facilitates smoother communication by clearly indicating the current speaker, thus reducing confusion, particularly in larger group calls, and creating a more engaging environment for all participants involved.</p><p>Whenever any participant speaks in the meeting, the <code>onSpeakerChanged</code> event will be triggered with the participant ID of the active speaker.</p><p>For example, Alice and Bob are in a meeting. Whenever one of them speaks, the <code>onSpeakerChanged</code> event will be triggered and return the active speaker's <code>participantId</code>.</p><pre><code class="language-swift">extension MeetingViewController: MeetingEventListener {
    /// Called when speaker is changed
    /// - Parameter participantId: participant id of the speaker, nil when no one is speaking.
    func onSpeakerChanged(participantId: String?) {

        // show indicator for the active speaker
        if let participant = participants.first(where: { $0.id == participantId }) {
            // show indication for the active speaker,
            // ex. show border color
            // cell.contentView.layer.borderColor = UIColor.blue.cgColor
        }

        // hide indicator for the other participants
        let otherParticipants = participants.filter { $0.id != participantId }
        for participant in otherParticipants {
            // ex. remove border color
            // cell.contentView.layer.borderColor = UIColor.clear.cgColor
        }
    }
}</code></pre><h2 id="conclusion">Conclusion</h2><p>Active Speaker Indication is a valuable feature for enhancing the functionality and user experience of your iOS video call app. By providing visual cues to identify the current speaker, this feature improves communication, engagement, and overall efficiency during virtual meetings.</p><p>With Active Speaker Indication, users benefit from clearer understanding, reduced confusion, and a more professional video calling experience. Whether it's team collaborations, client meetings, or remote learning sessions, this feature ensures smoother interactions and more productive discussions.</p><p>Enhance your iOS video call app today with VideoSDK and offer an unparalleled video calling experience to your users. Remember, VideoSDK provides <strong>10,000 free minutes</strong> to empower your video call app with advanced features without any upfront investment. <a href="https://www.videosdk.live/signup"><strong>Sign up with VideoSDK</strong></a> today and take your video app to the next level.</p>]]></content:encoded></item><item><title><![CDATA[How to Implement Active Speaker Indication in Flutter Video Call App?]]></title><description><![CDATA[Add Active Speaker Indication to your Flutter video call app for seamless communication. 
Enhance the user experience by highlighting the speaker dynamically.]]></description><link>https://www.videosdk.live/blog/implement-active-speaker-indication-in-flutter-video-call-app</link><guid isPermaLink="false">661e0e962a88c204ca9d3fb1</guid><category><![CDATA[Flutter]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 19 Sep 2024 12:17:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-Flutter--1-.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Active-Speaker-Flutter--1-.jpg" alt="How to Implement Active Speaker Indication in Flutter Video Call App?"/><p>Active speaker indication is an important feature in any video calling application, particularly during group calls. It plays a crucial role in keeping users engaged and informed about who is speaking at any given moment. This not only enables smoother communication but also reduces confusion, leading to an overall improved user experience.</p><p>In this comprehensive guide, we will walk through the process of implementing active speaker indication in your Flutter video calling app using VideoSDK. Leveraging the advanced capabilities offered by VideoSDK, we will demonstrate how to accurately identify the loudest speaker in a call and seamlessly integrate visual cues within your app's interface.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of Active Speaker Indication, we will use the capabilities that VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. 
This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/authentication-and-tokens#1-generating-token-from-dashboard">provided tutorial</a>.</p><h3 id="prerequisites%E2%80%8B">Prerequisites<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#prerequisites">​</a></h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>Video SDK Developer Account (if you do not have one, visit the <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>)</li><li>A basic understanding of Flutter.</li><li><strong><a href="https://pub.dev/packages/videosdk" rel="noopener noreferrer">Flutter VideoSDK</a></strong></li><li>Flutter installed on your device.</li></ul><h2 id="install-videosdk%E2%80%8B">Install VideoSDK<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#install-video-sdk">​</a></h2><p>Install the VideoSDK package using the Flutter command below. Make sure you are in your Flutter app directory before you run this command.</p><pre><code class="language-dart">$ flutter pub add videosdk

//run this command to add the http package, used for the network call that creates a roomId
$ flutter pub add http</code></pre><h3 id="videosdk-compatibility">VideoSDK Compatibility</h3><!--kg-card-begin: html--><table style="border: 1px solid black;">
<thead>
<tr>
<th style="border:1px solid white;">Android and iOS app</th>
<th style="border:1px solid white;">Web</th>
<th style="border:1px solid white;">Desktop app</th>
<th style="border:1px solid white;">Safari browser</th>
</tr>
</thead>
<tbody>
<tr>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ✅ </center></td>
<td style="border:1px solid white;"><center> ❌ </center></td>
</tr>
</tbody>
</table>
<!--kg-card-end: html--><h3 id="structure-of-the-project%E2%80%8B">Structure of the project<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#structure-of-the-project">​</a></h3><p>Your project structure should look like this.</p><pre><code class="language-dart">    root
    ├── android
    ├── ios
    ├── lib
         ├── api_call.dart
         ├── join_screen.dart
         ├── main.dart
         ├── meeting_controls.dart
         ├── meeting_screen.dart
         ├── participant_tile.dart</code></pre><p>We are going to create Flutter widgets (JoinScreen, MeetingScreen, MeetingControls, and ParticipantTile).</p><h3 id="app-structure%E2%80%8B">App Structure<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#app-structure">​</a></h3><p>The app widget will contain the <code>JoinScreen</code> and <code>MeetingScreen</code> widgets. <code>MeetingScreen</code> will have the <code>MeetingControls</code> and <code>ParticipantTile</code> widgets.</p><figure class="kg-card kg-image-card"><img src="https://cdn.videosdk.live/website-resources/docs-resources/flutter_quick_start_arch.png" class="kg-image" alt="How to Implement Active Speaker Indication in Flutter Video Call App?" loading="lazy"/></figure><h3 id="configure-project-for-android%E2%80%8B">Configure Project For Android<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-android">​</a></h3><ul><li>Update <code>/android/app/src/main/AndroidManifest.xml</code> with the permissions we will be using to implement the audio and video features.</li><li>Also, you will need to set your build settings to Java 8, because the official WebRTC jar now uses static methods in the <code>EglBase</code> interface. Just add this to your app-level <code>/android/app/build.gradle</code>.</li><li>If necessary, in the same <code>build.gradle</code> you will need to increase <code>minSdkVersion</code> of <code>defaultConfig</code> up to <code>23</code> (the default Flutter template currently sets it to <code>16</code>).</li><li>If necessary, in the same <code>build.gradle</code> you will need to increase <code>compileSdkVersion</code> and <code>targetSdkVersion</code> up to <code>33</code> (the default Flutter template currently sets them to <code>30</code>).</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">&lt;uses-feature android:name="android.hardware.camera" /&gt;
&lt;uses-feature android:name="android.hardware.camera.autofocus" /&gt;
&lt;uses-permission android:name="android.permission.CAMERA" /&gt;
&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
&lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /&gt;
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET"/&gt;
&lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
&lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;</code></pre><figcaption>AndroidManifest.xml</figcaption></figure><pre><code class="language-dart">android {
    //...
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}</code></pre><h4 id="configure-project-for-ios%E2%80%8B">Configure Project For iOS<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-ios">​</a></h4><ul><li>Add the following entries that allow your app to access the camera and microphone in your <code>/ios/Runner/Info.plist</code> file:</li></ul><pre><code class="language-dart">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><ul><li>Uncomment the following line to define a global platform for your project in <code>/ios/Podfile</code> :</li></ul><pre><code class="language-dart"># platform :ios, '12.0'</code></pre><h4 id="for-macos%E2%80%8B">For MacOS<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#for-macos">​</a></h4><ul><li>Add the following entries to your <code>/macos/Runner/Info.plist</code> file that allow your app to access the camera and microphone:</li></ul><pre><code class="language-dart">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Camera Usage!&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;$(PRODUCT_NAME) Microphone Usage!&lt;/string&gt;</code></pre><ul><li>Add the following entries to your <code>/macos/Runner/DebugProfile.entitlements</code> file that allows your app to access the camera, microphone, and open outgoing network connections:</li></ul><pre><code class="language-dart">&lt;key&gt;com.apple.security.network.client&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.camera&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.microphone&lt;/key&gt;
&lt;true/&gt;</code></pre><ul><li>Add the following entries to your <code>/macos/Runner/Release.entitlements</code> file that allows your app to access the camera, microphone, and open outgoing network connections:</li></ul><pre><code class="language-dart">&lt;key&gt;com.apple.security.network.server&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.network.client&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.camera&lt;/key&gt;
&lt;true/&gt;
&lt;key&gt;com.apple.security.device.microphone&lt;/key&gt;
&lt;true/&gt;</code></pre><h2 id="essential-steps-to-implement-video-call-functionality">Essential Steps to Implement Video Call Functionality</h2><p>Before diving into the specifics of screen-sharing implementation, it's crucial to ensure you have VideoSDK properly installed and configured within your Flutter project. Refer to <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start">VideoSDK's documentation</a> for detailed installation instructions. Once you have a functional video calling setup, you can proceed with adding the screen-sharing feature.</p><h3 id="step-1-get-started-with-apicalldart%E2%80%8B">Step 1: Get started with <code>api_call.dart</code><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-1-get-started-with-api_calldart">​</a></h3><p>Before jumping to anything else, you will write a function to generate a unique meetingId. You will require an authentication token, you can generate it either by using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or by generating it from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for development.</p><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'dart:convert';
import 'package:http/http.dart' as http;

//Auth token we will use to generate a meeting and connect to it
String token = "&lt;Generated-from-dashboard&gt;";

// API call to create meeting
Future&lt;String&gt; createMeeting() async {
  final http.Response httpResponse = await http.post(
    Uri.parse("https://api.videosdk.live/v2/rooms"),
    headers: {'Authorization': token},
  );
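  // (optional hardening, sketch only): a production app may want to check the
  // response status before decoding, e.g.:
  // if (httpResponse.statusCode != 200) {
  //   throw Exception("Failed to create room: ${httpResponse.body}");
  // }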

//Destructuring the roomId from the response
  return json.decode(httpResponse.body)['roomId'];
}</code></pre><figcaption>api_call.dart</figcaption></figure><h3 id="step-2-creating-the-joinscreen%E2%80%8B">Step 2: Creating the JoinScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-2--creating-the-joinscreen">​</a></h3><p>Let's create <code>join_screen.dart</code> file in <code>lib</code> directory and create JoinScreen <code>StatelessWidget</code>.</p><p>The JoinScreen will consist of:</p><ul><li><strong>Create Meeting Button</strong>: This button will create a new meeting for you.</li><li><strong>Meeting ID TextField</strong>: This text field will contain the meeting ID, you want to join.</li><li><strong>Join Meeting Button</strong>: This button will join the meeting, which you have provided.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'api_call.dart';
import 'meeting_screen.dart';

class JoinScreen extends StatelessWidget {
  final _meetingIdController = TextEditingController();

  JoinScreen({super.key});

  void onCreateButtonPressed(BuildContext context) async {
    // call the API to create a meeting, then navigate to MeetingScreen with the meetingId and token
    final meetingId = await createMeeting();
    if (!context.mounted) return;
    Navigator.of(context).push(
      MaterialPageRoute(
        builder: (context) =&gt; MeetingScreen(
          meetingId: meetingId,
          token: token,
        ),
      ),
    );
  }

  void onJoinButtonPressed(BuildContext context) {
    String meetingId = _meetingIdController.text;
    final re = RegExp(r"\w{4}-\w{4}-\w{4}");
    // check that the meeting id is not empty and matches the expected xxxx-xxxx-xxxx format
    // if the meeting id is valid, navigate to MeetingScreen with the meetingId and token
    if (meetingId.isNotEmpty &amp;&amp; re.hasMatch(meetingId)) {
      _meetingIdController.clear();
      Navigator.of(context).push(
        MaterialPageRoute(
          builder: (context) =&gt; MeetingScreen(
            meetingId: meetingId,
            token: token,
          ),
        ),
      );
    } else {
      ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
        content: Text("Please enter valid meeting id"),
      ));
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('VideoSDK QuickStart'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(12.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            ElevatedButton(
              onPressed: () =&gt; onCreateButtonPressed(context),
              child: const Text('Create Meeting'),
            ),
            Container(
              margin: const EdgeInsets.fromLTRB(0, 8.0, 0, 8.0),
              child: TextField(
                decoration: const InputDecoration(
                  hintText: 'Meeting Id',
                  border: OutlineInputBorder(),
                ),
                controller: _meetingIdController,
              ),
            ),
            ElevatedButton(
              onPressed: () =&gt; onJoinButtonPressed(context),
              child: const Text('Join Meeting'),
            ),
          ],
        ),
      ),
    );
  }
}</code></pre><figcaption>join_screen.dart</figcaption></figure><ul><li>Update the home screen of the app in the <code>main.dart</code></li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'join_screen.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'VideoSDK QuickStart',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: JoinScreen(),
    );
  }
}</code></pre><figcaption>main.dart</figcaption></figure><h3 id="step-3-creating-the-meetingcontrols%E2%80%8B">Step 3: Creating the MeetingControls<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-3--creating-the-meetingcontrols">​</a></h3><p>Let's create <code>meeting_controls.dart</code> file and create MeetingControls <code>StatelessWidget</code>.</p><p>The MeetingControls will consist of:</p><ul><li><strong>Leave Button</strong>: This button will leave the meeting.</li><li><strong>Toggle Mic Button</strong>: This button will unmute or mute the mic.</li><li><strong>Toggle Camera Button</strong>: This button will enable or disable the camera.</li></ul><p>MeetingControls will accept 3 functions in the constructor.</p><ul><li><strong>onLeaveButtonPressed</strong>: invoked when the Leave button pressed</li><li><strong>onToggleMicButtonPressed</strong>: invoked when the toggle mic button pressed</li><li><strong>onToggleCameraButtonPressed</strong>: invoked when the toggle Camera button pressed</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';

class MeetingControls extends StatelessWidget {
  final void Function() onToggleMicButtonPressed;
  final void Function() onToggleCameraButtonPressed;
  final void Function() onLeaveButtonPressed;

  const MeetingControls(
      {super.key,
      required this.onToggleMicButtonPressed,
      required this.onToggleCameraButtonPressed,
      required this.onLeaveButtonPressed});

  @override
  Widget build(BuildContext context) {
    return Row(
      mainAxisAlignment: MainAxisAlignment.spaceEvenly,
      children: [
        ElevatedButton(
            onPressed: onLeaveButtonPressed, child: const Text('Leave')),
        ElevatedButton(
            onPressed: onToggleMicButtonPressed, child: const Text('Toggle Mic')),
        ElevatedButton(
            onPressed: onToggleCameraButtonPressed,
            child: const Text('Toggle WebCam')),
      ],
    );
  }
}</code></pre><figcaption>meeting_controls.dart</figcaption></figure><h3 id="step-4-creating-participanttile%E2%80%8B">Step 4: Creating ParticipantTile<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-4--creating-participanttile">​</a></h3><p>Let's create <code>participant_tile.dart</code> file and create ParticipantTile <code>StatefulWidget</code>.</p><p>The ParticipantTile will consist of:</p><ul><li><strong>RTCVideoView</strong>: This will show the participant's video stream.</li></ul><p>ParticipantTile will accept <code>Participant</code> in constructor</p><ul><li><strong>participant:</strong> participant of the meeting.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class ParticipantTile extends StatefulWidget {
  final Participant participant;
  const ParticipantTile({super.key, required this.participant});

  @override
  State&lt;ParticipantTile&gt; createState() =&gt; _ParticipantTileState();
}

class _ParticipantTileState extends State&lt;ParticipantTile&gt; {
  Stream? videoStream;

  @override
  void initState() {
    // set the initial video stream for the participant
    // (no setState needed here; initState runs before the first build)
    widget.participant.streams.forEach((key, Stream stream) {
      if (stream.kind == 'video') {
        videoStream = stream;
      }
    });
    _initStreamListeners();
    super.initState();
  }

  _initStreamListeners() {
    widget.participant.on(Events.streamEnabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = stream);
      }
    });

    widget.participant.on(Events.streamDisabled, (Stream stream) {
      if (stream.kind == 'video') {
        setState(() =&gt; videoStream = null);
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(8.0),
      child: videoStream != null
          ? RTCVideoView(
              videoStream?.renderer as RTCVideoRenderer,
              objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
            )
          : Container(
              color: Colors.grey.shade800,
              child: const Center(
                child: Icon(
                  Icons.person,
                  size: 100,
                ),
              ),
            ),
    );
  }
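  // To add active speaker indication here, you could pass a hypothetical
  // `isActive` flag into this tile (driven by the `speakerChanged` event shown
  // later in this guide) and wrap the tile body in a highlighted border when
  // it is true, e.g. (sketch):
  // Container(
  //   decoration: BoxDecoration(
  //     border: Border.all(color: Colors.blue, width: 2),
  //   ),
  //   child: ...,
  // )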
}</code></pre><figcaption>participant_tile.dart</figcaption></figure><h3 id="step-5-creating-the-meetingscreen%E2%80%8B">Step 5: Creating the MeetingScreen<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start#step-5--creating-the-meetingscreen">​</a></h3><p>Let's create <code>meeting_screen.dart</code> file and create MeetingScreen <code>StatefulWidget</code>.</p><p>MeetingScreen will accept meetingId and token in the constructor.</p><ul><li><strong>meetingID:</strong> meetingId, you want to join</li><li><strong>token</strong>: VideoSDK Auth token.</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-dart">import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';
import './meeting_controls.dart';
import './participant_tile.dart';

class MeetingScreen extends StatefulWidget {
  final String meetingId;
  final String token;

  const MeetingScreen(
      {super.key, required this.meetingId, required this.token});

  @override
  State&lt;MeetingScreen&gt; createState() =&gt; _MeetingScreenState();
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room _room;
  var micEnabled = true;
  var camEnabled = true;

  Map&lt;String, Participant&gt; participants = {};

  @override
  void initState() {
    // create room
    _room = VideoSDK.createRoom(
      roomId: widget.meetingId,
      token: widget.token,
      displayName: "John Doe",
      micEnabled: micEnabled,
      camEnabled: camEnabled
    );

    setMeetingEventListener();

    // Join room
    _room.join();
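    // joining the room fires Events.roomJoined once connected
    // (see setMeetingEventListener)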

    super.initState();
  }

  // listening to meeting events
  void setMeetingEventListener() {
    _room.on(Events.roomJoined, () {
      setState(() {
        participants.putIfAbsent(
            _room.localParticipant.id, () =&gt; _room.localParticipant);
      });
    });

    _room.on(
      Events.participantJoined,
      (Participant participant) {
        setState(
          () =&gt; participants.putIfAbsent(participant.id, () =&gt; participant),
        );
      },
    );

    _room.on(Events.participantLeft, (String participantId) {
      if (participants.containsKey(participantId)) {
        setState(
          () =&gt; participants.remove(participantId),
        );
      }
    });

    _room.on(Events.roomLeft, () {
      participants.clear();
      Navigator.popUntil(context, ModalRoute.withName('/'));
    });
  }

  // leave the room when the back button is pressed
  Future&lt;bool&gt; _onWillPop() async {
    _room.leave();
    return true;
  }

  // builds the meeting UI: meeting id, participant grid, and controls
  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: () =&gt; _onWillPop(),
      child: Scaffold(
        appBar: AppBar(
          title: const Text('VideoSDK QuickStart'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Column(
            children: [
              Text(widget.meetingId),
              // render all participants
              Expanded(
                child: Padding(
                  padding: const EdgeInsets.all(8.0),
                  child: GridView.builder(
                    gridDelegate: const SliverGridDelegateWithFixedCrossAxisCount(
                      crossAxisCount: 2,
                      crossAxisSpacing: 10,
                      mainAxisSpacing: 10,
                      mainAxisExtent: 300,
                    ),
                    itemBuilder: (context, index) {
                      return ParticipantTile(
                        key: Key(participants.values.elementAt(index).id),
                        participant: participants.values.elementAt(index));
                    },
                    itemCount: participants.length,
                  ),
                ),
              ),
              MeetingControls(
                onToggleMicButtonPressed: () {
                  micEnabled ? _room.muteMic() : _room.unmuteMic();
                  micEnabled = !micEnabled;
                },
                onToggleCameraButtonPressed: () {
                  camEnabled ? _room.disableCam() : _room.enableCam();
                  camEnabled = !camEnabled;
                },
                onLeaveButtonPressed: () {
                  _room.leave();
                },
              ),
            ],
          ),
        ),
      ),
    );
  }
}</code></pre><figcaption>meeting_screen.dart</figcaption></figure><blockquote>CAUTION<br/>If you get a <code>webrtc/webrtc.h file not found</code> error at runtime on iOS, check the solution <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/known-issues#issue--1" rel="noopener noreferrer">here</a>.</blockquote><blockquote>TIP<br/>You can check out the complete <a href="https://github.com/videosdk-live/quickstart/tree/main/flutter-rtc" rel="noopener noreferrer">quick start example here</a>.</blockquote><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/videosdk-live/quickstart/tree/main/flutter-rtc"><div class="kg-bookmark-content"><div class="kg-bookmark-title">quickstart/flutter-rtc at main · videosdk-live/quickstart</div><div class="kg-bookmark-description">A short and sweet tutorial for getting up to speed with VideoSDK in less than 10 minutes - videosdk-live/quickstart</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="How to Implement Active Speaker Indication in Flutter Video Call App?"/><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">videosdk-live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/de970b6f7db5c97b5728471cbf13bae285388d9a9ccaa9bd294da67c509984d5/videosdk-live/quickstart" alt="How to Implement Active Speaker Indication in Flutter Video Call App?"/></div></a></figure><h2 id="integrate-active-speaker-indication-feature">Integrate Active Speaker Indication Feature</h2><p>Once you've integrated VideoSDK into your Flutter project, you'll unlock the capability of Active Speaker Indication. This feature operates by gauging the audio volume of participants, dynamically discerning the speaker in real time. 
By implementing VideoSDK's provided guides, your application gains the ability to stay synced with participants' audio levels, ensuring a seamless experience for your users.</p><p>In the following sections, we'll go through these guides, offering comprehensive guidance on their integration. By following our step-by-step instructions, you'll adeptly implement these callbacks, effectively illuminating the active speaker within your video call interface.</p><p>The Active Speaker Indication feature in VideoSDK lets you know which participant in a meeting is currently speaking. This feature can be particularly useful in larger meetings or webinars, where there may be many participants and it can be difficult to tell who is speaking.</p><p>Whenever any participant speaks in a meeting, the <code>speakerChanged</code> event fires with the participant id of the active speaker.</p><p>For example, suppose a meeting is running with <strong>Alice</strong> and <strong>Bob</strong>. Whenever either of them speaks, the <code>speakerChanged</code> event fires with the speaker's <code>participantId</code>.</p><pre><code class="language-dart">import 'package:flutter/material.dart';
import 'package:videosdk/videosdk.dart';

class MeetingScreen extends StatefulWidget {
  ...
}

class _MeetingScreenState extends State&lt;MeetingScreen&gt; {
  late Room room;

  @override
  void initState() {
    ...

    setupRoomEventListener();
  }

  @override
  Widget build(BuildContext context) {
    return YourMeetingWidget();
  }

  void setupRoomEventListener() {

    room.on(Events.speakerChanged, (String? activeSpeakerId) {
      // the room's active speaker has changed;
      // activeSpeakerId is the participant id of the current active speaker
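      // e.g. (sketch, assuming a hypothetical _activeSpeakerId field in this
      // State class that your ParticipantTile widgets use for highlighting):
      // setState(() =&gt; _activeSpeakerId = activeSpeakerId);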
    });
  }
}</code></pre><h2 id="conclusion">Conclusion</h2><p>By following these steps, you'll successfully integrate active speaker indication into your Flutter video-calling app, elevating its group call functionalities to new heights. Empower your users with real-time insights into the active speaker, fostering more engaging and efficient communication experiences.</p><p>Remember to refer to <a href="https://www.videosdk.live/">VideoSDK's</a> documentation for ongoing support and updates, ensuring your app remains at the forefront of video conferencing innovation.</p>]]></content:encoded></item><item><title><![CDATA[Insurance Virtual Claim Settlement]]></title><description><![CDATA[Learn about insurance claim settlement, different types of claims, procedures involved, and importance of technology in enhancing customer experience.]]></description><link>https://www.videosdk.live/blog/insurance-claim-settlement</link><guid isPermaLink="false">66742bcd20fab018df10ecaf</guid><category><![CDATA[Industry Update]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 19 Sep 2024 11:08:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/Virtual-Claim-Settlement.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Virtual-Claim-Settlement.jpg" alt="Insurance Virtual Claim Settlement"/><p>The insurance industry is built on the foundation of trust and reliability, with policyholders (proposers) trusting their insurance providers to protect them from unexpected events and losses. 
However, the claims settlement process is often a critical and complex aspect of this relationship, requiring an in-depth understanding of the various types of claims, the procedures involved, and the role of technology in streamlining the process.</p><p>In this article, we will delve into the intricacies of claim settlement, exploring the different types of claims, the procedures involved, and the importance of technology in enhancing the customer experience.</p><h2 id="what-is-claim-settlement">What is Claim Settlement?</h2><p>Claim settlement refers to the process by which an insurance company pays out compensation to a policyholder for a covered loss or event. This process involves investigating the claim, verifying its validity, and determining the appropriate amount of compensation to be paid. The claim settlement process is a crucial aspect of the insurance industry, as it directly impacts the customer's experience and satisfaction with their insurance provider.</p><p>The primary purpose of claim settlement is to ensure that policyholders receive the financial support they need to recover from unexpected events or losses, such as property damage, personal injury, or life-altering incidents. By fulfilling proposers' contractual commitments, insurance companies play a vital role in helping individuals and businesses mitigate the financial impact of these unexpected circumstances.</p><h2 id="why-is-claims-settlement-important">Why is Claims Settlement Important?</h2><p>Claims settlement is a critical segment of the insurance industry, as it directly affects the customer experience and the overall reputation of insurance providers. When policyholders file a claim, they are often in a vulnerable state, having experienced a loss or unexpected event.</p><p>The efficiency and transparency of the claims settlement process can significantly impact their level of trust and satisfaction with their insurance provider. 
Fair and efficient claims settlement is essential for several reasons:</p><ul><li><strong>Customer Satisfaction</strong>: Efficient and transparent claim settlement processes can enhance customer satisfaction, leading to increased loyalty and positive word-of-mouth referrals.</li><li><strong>Reputation and Brand Image</strong>: Effective claim settlement practices can strengthen an insurance company's reputation and brand image, making it more appealing to potential customers.</li><li><strong>Regulatory Compliance</strong>: Insurance regulators closely monitor claim settlement practices to ensure that policyholders are treated fairly and that insurance companies adhere to industry standards and regulations.</li><li><strong>Financial Stability</strong>: Proper claim settlement practices help insurance companies manage their financial risks and maintain a healthy balance sheet, ensuring their long-term viability.</li></ul><h2 id="what-is-the-current-market-size-of-insurance">What is the Current Market Size of Insurance?</h2><p>The global insurance market has seen substantial growth over the past few years and is expected to continue this trend. Here’s an overview of the market size from the last five years and projections for the next five years.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Global-Insurance-Market-Size.png" class="kg-image" alt="Insurance Virtual Claim Settlement" loading="lazy" width="2475" height="1425"/></figure><p>This growth is driven by several factors, including advancements in technology, increased adoption of digital platforms, and rising awareness of insurance benefits. 
Further, the integration of AI and data analytics in the insurance process is improving efficiency and customer satisfaction, contributing to market expansion.</p><h2 id="what-is-the-objective-of-the-settlement-of-the-claim">What is the Objective of the Settlement of the Claim?</h2><p>The primary objective of the claim settlement process is to provide policyholders with the financial compensation they are entitled to under their insurance policy. This compensation is intended to help the policyholder recover from the loss or event that initiated the claim, whether it's property damage, personal injury, or a life-altering incident.</p><p>The claim settlement procedure aims to ensure that policyholders receive the support they need in a timely and efficient manner, allowing them to focus on the recovery process rather than worrying about the financial implications of the event.</p><p>The claim settlement process also helps maintain the integrity of the insurance industry by upholding fair, transparent, and customer-centric principles. These principles encourage trust and confidence in the insurance system, benefiting both policyholders and insurance providers.</p><h2 id="what-are-the-types-of-claim-settlement-in-insurance">What are the Types of Claim Settlement in Insurance?</h2><p>In the insurance industry, there are several types of claim settlement, each with its unique characteristics and requirements. Understanding the different types of claims is essential for both policyholders and insurance providers to navigate the claims process effectively. 
Some of the most common types of claim settlement include:</p><ul><li><strong>Survival Benefit Claim</strong>: This type of claim is applicable in endowment or money-back insurance plans, where the policyholder receives a predetermined amount of money after a specified period, provided the policy is in force and the policyholder is alive.</li><li><strong>Maturity Benefit Claim</strong>: This claim is paid out when the insurance policy reaches its maturity date, and the policyholder receives the accumulated funds, including any bonuses or additions.</li><li><strong>Death Benefit Claim</strong>: This claim is paid to the policyholder's nominated beneficiary or legal heir upon the policyholder's death during the policy term.</li><li><strong>Rider Claim</strong>: Riders are additional benefits attached to the main insurance policy, and a rider claim is filed to receive the specified compensation for the covered event, such as critical illness or accidental death.</li></ul><p>Each type of claim settlement has its own set of requirements, documentation, and procedures that must be followed to ensure a smooth and efficient claims process.</p><h2 id="what-is-the-procedure-for-the-claim-settlement">What is the Procedure for the Claim Settlement?</h2><p>The insurance claim settlement process typically involves several steps, each designed to ensure a fair and transparent outcome for the policyholder. 
The general procedure for the settlement of claims includes:</p><ul><li><strong>Claim Intimation</strong>: The policyholder or their representative must inform the insurance company about the loss or event that has occurred, providing the necessary details such as the policy number, date of incident, and a brief description of the claim.</li><li><strong>Claim Documentation</strong>: The policyholder is required to submit various documents, such as the original policy document, death certificate, medical reports, and any other relevant evidence to support the claim.</li><li><strong>Claim Investigation</strong>: The insurance company will assign a claim adjuster to investigate the claim, verify the information provided, and assess the extent of the loss or damage.</li><li><strong>Claim Evaluation</strong>: Based on the policy terms and conditions, the insurance company will determine the eligible amount of compensation and any applicable deductibles or exclusions.</li><li><strong>Claim Settlement</strong>: Once the claim is approved, the insurance company will issue the payment to the policyholder or their nominated beneficiary, typically within a specified timeframe as per regulatory guidelines.</li></ul><p>In some cases, the claims process may involve additional steps, such as the involvement of third-party experts or the need for further investigation. Insurance companies are also required to adhere to specific regulations and guidelines set by the regulatory authorities to ensure fair and timely claim settlement.</p><h2 id="importance-of-virtual-claim-settlement">Importance of Virtual Claim Settlement</h2><p>The insurance industry has witnessed a significant transformation with the advent of video-based claims processing, also known as virtual claim assessment. 
This technology has become increasingly important in the insurance claim settlement process, as it offers several benefits to both policyholders and insurance providers.</p><p>Virtual claim settlement allows policyholders to capture and submit visual evidence of the loss or damage directly from their mobile devices, streamlining the claims process and reducing the need for in-person inspections. This not only saves time and resources but also enhances the overall customer experience by providing a more convenient and efficient claims submission process.</p><h2 id="how-does-videosdk-transform-claim-settlement-through-the-virtual-claim-solution">How does VideoSDK Transform Claim Settlement through the Virtual Claim Solution?</h2><p>VideoSDK, the first IRDAI-compliant <a href="https://www.videosdk.live/solutions/virtual-claim-in-insurtech" rel="noreferrer">real-time video solution</a>, has developed a comprehensive virtual claim settlement (as well as <a href="https://www.videosdk.live/blog/video-mer-medical-examination-report" rel="noreferrer">MER</a>) solution that is transforming the insurance industry's claims settlement process. This innovative solution leverages the power of real-time video to streamline the claims submission and validation process, delivering significant benefits to both proposers and insurance providers.</p><p>VideoSDK’s real-time audio-video solution for virtual claim settlement enables proposers to capture and submit high-quality video evidence of their claims directly from their mobile devices. 
This visual documentation can include footage of the incident, damage assessments, and any other relevant information, providing claims adjusters with a comprehensive view of the claim.</p><p>By integrating VideoSDK's solution, insurance companies can benefit from:</p><ul><li><strong>Faster Claim Processing</strong>: The ability to quickly validate claims through video evidence can significantly reduce the time required for insurance claim settlement, improving the overall customer experience.</li><li><strong>Reduced Fraud Risk</strong>: The tamper-proof nature of the video evidence collected through the VideoSDK solution helps insurance companies mitigate the risk of fraudulent claims, ensuring the integrity of the claims process.</li><li><strong>Enhanced Operational Efficiency</strong>: The automation and streamlining of the claims submission and validation process can lead to cost savings and improved operational efficiency for insurance providers.</li><li><strong>Improved Customer Satisfaction</strong>: The convenience and transparency offered by VideoSDK’s real-time video solution can enhance policyholders' satisfaction with the claims settlement process, fostering stronger customer loyalty.</li></ul><h2 id="conclusion">Conclusion</h2><p>By understanding the different types of claims, the settlement procedures, and the role of technology in streamlining the process, both proposers and insurance providers can navigate the claims landscape more effectively. Virtual claim settlement is a critical factor in the insurance industry, ensuring policyholders receive the compensation they are entitled to when faced with unexpected events or losses.</p><p>The integration of innovative solutions like VideoSDK is transforming virtual claim settlement in the insurance industry, enabling faster claim processing, reducing fraud risk, and enhancing operational efficiency. 
As the insurance landscape continues to evolve, a focus on customer-centric claim settlement practices will be important in maintaining trust, loyalty, and the overall integrity of the insurance system.</p>]]></content:encoded></item><item><title><![CDATA[How to Integrate Collaborative Whiteboard in React JS Video Call App?]]></title><description><![CDATA[Discover step-by-step instructions for integrating a Collaborative Whiteboard into your React JS video call app for dynamic real-time collaboration.]]></description><link>https://www.videosdk.live/blog/integrate-whiteboard-in-react-js</link><guid isPermaLink="false">6618fe552a88c204ca9d09e0</guid><category><![CDATA[React]]></category><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Kishan Nakrani]]></dc:creator><pubDate>Thu, 19 Sep 2024 07:06:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/Collaborative-Whiteboard.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Collaborative-Whiteboard.png" alt="How to Integrate Collaborative Whiteboard in React JS Video Call App?"/><p>Adding a Collaborative Whiteboard feature to your React JS Video Call App not only enhances collaboration but also boosts productivity. This feature allows users to brainstorm ideas, sketch diagrams, and annotate documents in real time while engaged in video calls. 
By seamlessly integrating this feature, your app facilitates smoother communication and empowers users to visualize concepts, fostering a more interactive and engaging experience.</p><p><strong>Benefits of using Whiteboard:</strong></p><ul><li><strong>Enhanced Collaboration</strong>: Users can visually illustrate concepts, making communication more effective.</li><li><strong>Increased Productivity</strong>: Whiteboarding allows for on-the-fly problem-solving and idea generation, reducing the need for separate tools or meetings.</li><li><strong>Visual Learning</strong>: Visual aids help convey complex ideas more clearly, catering to different learning styles.</li><li><strong>Remote Work Facilitation</strong>: This is especially beneficial for remote teams, who, along with Whiteboard, can use <a href="https://www.virtosoftware.com/microsoft-365/virto-calendar-overlay-app/">Microsoft 365 calendar</a> to enable seamless collaboration despite geographical barriers.</li><li><strong>Documented Discussions</strong>: Whiteboard content can be saved for future reference, ensuring that valuable insights aren't lost.</li></ul><p><strong>Use Cases of Whiteboard:</strong></p><ul><li><strong>Education</strong>: Teachers can explain complex topics visually, engaging students in interactive lessons.</li><li><strong>Business Meetings</strong>: Teams can brainstorm strategies, visualize data, and plan projects together.</li><li><strong>Design Reviews</strong>: Designers can share concepts and receive feedback in real-time.</li><li><strong>Technical Support</strong>: Support teams can troubleshoot issues by visually demonstrating solutions to customers.</li></ul><p>Transform your React JS video call app into a dynamic platform for collaborative innovation, driving productivity and creativity to new heights. 
Follow the tutorial below and build a React JS Video Calling App with the Collaborative Whiteboard feature.</p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the whiteboard functionality, we must use the capabilities that VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. This account gives you access to the required VideoSDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token authorizes your application to use VideoSDK features. 
For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Before proceeding, ensure that your development environment meets the following requirements:</p><ul><li>VideoSDK Developer Account (Don't have one? Sign up via the <a href="https://app.videosdk.live/" rel="noopener noreferrer"><strong>VideoSDK Dashboard</strong></a>.)</li><li>Basic understanding of React.</li><li><a href="https://www.npmjs.com/package/@videosdk.live/react-sdk" rel="noopener noreferrer"><strong>React VideoSDK</strong></a></li><li>Make sure Node and NPM are installed on your device.</li><li>Basic understanding of Hooks (useState, useRef, useEffect)</li><li>React Context API (optional)</li></ul><p>Follow the steps to create the environment necessary to add video calls to your app. You can also find the code sample for <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">Quickstart here</a>.<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#create-new-react-app" rel="noopener noreferrer">​</a></p><p><strong>Create a new React App using the below command.</strong></p><pre><code class="language-js">$ npx create-react-app videosdk-rtc-react-app</code></pre><h2 id="%E2%AC%87%EF%B8%8F-install-videosdk%E2%80%8B">⬇️ Install VideoSDK<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#install-videosdk">​</a></h2><p>It is necessary to set up VideoSDK within your project before going into the details of integrating the whiteboard feature. Install VideoSDK using either NPM or Yarn, depending on the needs of your project.</p><ul><li>For NPM</li></ul><pre><code class="language-js">$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">$ yarn add "@videosdk.live/react-sdk"

//For the Participants Video
$ yarn add "react-player"</code></pre><p>You are going to use functional components to leverage React's reusable component architecture. There will be components for the participants, their videos, and the controls (mic, camera, leave) displayed over the video.</p><h3 id="app-architecture">App Architecture</h3>
<p>The App will contain a <code>MeetingView</code> component which includes a <code>ParticipantView</code> component which will render the participant's name, video, audio, etc. It will also have a <code>Controls</code> component that will allow the user to perform operations like leave and toggle media.</p><figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-quick-start-fafbfbc2ed2d7cbfd4c5ee2e36296f9e.png" class="kg-image" alt="How to Integrate Collaborative Whiteboard in React JS Video Call App?" loading="lazy" width="1356" height="780"/></figure><p>You will be working on the following files:</p><ul><li>API.js: Responsible for handling API calls such as generating unique meetingId and token</li><li>App.js: Responsible for rendering <code>MeetingView</code> and joining the meeting.</li></ul><h2 id="essential-steps-to-implement-video-calling-functionality">Essential Steps to Implement Video Calling Functionality</h2><p>To add video capability to your React application, you must first complete a sequence of prerequisites.</p><h3 id="step-1-get-started-with-apijs">Step 1: Get started with API.js</h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "&lt;Generated-from-dashboard&gt;";
// API call to create a meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
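  //Added guard (not part of the original quickstart, included for robustness):
  //fail fast with a readable error instead of letting res.json() fail
  //confusingly on a non-2xx response (e.g. an invalid auth token).
  if (!res.ok) {
    throw new Error(`Room creation failed with HTTP ${res.status}`);
  }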
  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};</code></pre><figcaption><p><span style="white-space: pre-wrap;">API.js</span></p></figcaption></figure><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. 
It is responsible for handling all the events and props related to one particular participant such as <strong>name</strong>, <strong>webcamStream</strong>, <strong>micStream</strong>, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><pre><code class="language-js">import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";

function JoinScreen({ getMeetingAndToken }) {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  //Getting the meeting id by calling the api we just wrote
  const getMeetingAndToken = async (id) =&gt; {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  //This will set Meeting Id to null when meeting is left or ended
  const onMeetingLeave = () =&gt; {
    setMeetingId(null);
  };

  return authToken &amp;&amp; meetingId ? (
    &lt;MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
      }}
      token={authToken}
    &gt;
      &lt;MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} /&gt;
    &lt;/MeetingProvider&gt;
  ) : (
    &lt;JoinScreen getMeetingAndToken={getMeetingAndToken} /&gt;
  );
}

export default App;</code></pre><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-3-implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><pre><code class="language-js">function JoinScreen({ getMeetingAndToken }) {
  const [meetingId, setMeetingId] = useState(null);
  const onClick = async () =&gt; {
    await getMeetingAndToken(meetingId);
  };
  return (
    &lt;div&gt;
      &lt;input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) =&gt; {
          setMeetingId(e.target.value);
        }}
      /&gt;
      &lt;button onClick={onClick}&gt;Join&lt;/button&gt;
      {" or "}
      &lt;button onClick={onClick}&gt;Create Meeting&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><h4 id="output">Output</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-join-screen-06fb57cf0d9e3bcc1e7da9fc032298c3.jpeg" class="kg-image" alt="How to Integrate Collaborative Whiteboard in React JS Video Call App?" loading="lazy" width="720" height="130"/></figure><h3 id="step-4-implement-meetingview-and-controls%E2%80%8B">Step 4: Implement MeetingView and Controls<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-4-implement-meetingview-and-controls">​</a></h3><p>The next step is to create <code>MeetingView</code> and <code>Controls</code> components to manage features such as join, leave, mute, and unmute.</p><pre><code class="language-js">function MeetingView(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  //We will also get the participants list to display all participants
  const { join, participants } = useMeeting({
    //callback for when meeting is joined successfully
    onMeetingJoined: () =&gt; {
      setJoined("JOINED");
    },
    //callback for when meeting is left
    onMeetingLeft: () =&gt; {
      props.onMeetingLeave();
    },
  });
  const joinMeeting = () =&gt; {
    setJoined("JOINING");
    join();
  };

  return (
    &lt;div className="container"&gt;
      &lt;h3&gt;Meeting Id: {props.meetingId}&lt;/h3&gt;
      {joined &amp;&amp; joined == "JOINED" ? (
        &lt;div&gt;
          &lt;Controls /&gt;
          {/* For rendering all the participants in the meeting */}
          {[...participants.keys()].map((participantId) =&gt; (
            &lt;ParticipantView
              participantId={participantId}
              key={participantId}
            /&gt;
          ))}
        &lt;/div&gt;
      ) : joined &amp;&amp; joined == "JOINING" ? (
        &lt;p&gt;Joining the meeting...&lt;/p&gt;
      ) : (
        &lt;button onClick={joinMeeting}&gt;Join&lt;/button&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><figure class="kg-card kg-code-card"><pre><code class="language-js">function Controls() {
  const { leave, toggleMic, toggleWebcam } = useMeeting();
  return (
    &lt;div&gt;
      &lt;button onClick={() =&gt; leave()}&gt;Leave&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleMic()}&gt;toggleMic&lt;/button&gt;
      &lt;button onClick={() =&gt; toggleWebcam()}&gt;toggleWebcam&lt;/button&gt;
    &lt;/div&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">Control Component</span></p></figcaption></figure><h4 id="output-of-controls-component">Output of Controls Component</h4>
<figure class="kg-card kg-image-card"><img src="https://docs.videosdk.live/assets/images/react-container-controls-2cebdfdfd1371b010b773cb6fb9c7ae8.jpeg" class="kg-image" alt="How to Integrate Collaborative Whiteboard in React JS Video Call App?" loading="lazy" width="720" height="177"/></figure><h3 id="step-5-implement-participant-view%E2%80%8B">Step 5: Implement Participant View<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start#step-5-implement-participant-view">​</a></h3><p>Before implementing the participant view, you need to understand a couple of concepts.</p><h4 id="51-forwarding-ref-for-mic-and-camera">5.1 Forwarding Ref for mic and camera</h4>
<p>The <code>useRef</code> hook is responsible for referencing the audio and video components. It will be used to play and stop the audio and video of the participant.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">const webcamRef = useRef(null);
const micRef = useRef(null);</code></pre><figcaption><p><span style="white-space: pre-wrap;">Forwarding Ref for mic and camera</span></p></figcaption></figure><h4 id="52-useparticipant-hook">5.2 useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It will take participantId as an argument.</p><pre><code class="language-js">const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
  props.participantId
);</code></pre><h4 id="53-mediastream-api">5.3 MediaStream API</h4>
<p>The MediaStream API is beneficial for adding a MediaTrack to the audio/video tag, enabling the playback of audio or video.</p><pre><code class="language-js">const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);

webcamRef.current.srcObject = mediaStream;
webcamRef.current
  .play()
  .catch((error) =&gt; console.error("videoElem.current.play() failed", error));</code></pre><h4 id="54-implement-participantview%E2%80%8B">5.4 Implement <code>ParticipantView</code>​</h4>
<p>Now you can use both of the hooks and the API to create <code>ParticipantView</code></p><pre><code class="language-js">function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() =&gt; {
    if (webcamOn &amp;&amp; webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() =&gt; {
    if (micRef.current) {
      if (micOn &amp;&amp; micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =&gt;
            console.error("videoElem.current.play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    &lt;div&gt;
      &lt;p&gt;
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      &lt;/p&gt;
      &lt;audio ref={micRef} autoPlay playsInline muted={isLocal} /&gt;
      {webcamOn &amp;&amp; (
        &lt;ReactPlayer
          //
          playsinline // extremely crucial prop
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          //
          url={videoStream}
          //
          height={"300px"}
          width={"300px"}
          onError={(err) =&gt; {
            console.log(err, "participant video error");
          }}
        /&gt;
      )}
    &lt;/div&gt;
  );
}</code></pre><blockquote>You can check out the complete <a href="https://github.com/videosdk-live/quickstart/tree/main/react-rtc" rel="noopener noreferrer">quick start example here</a>.</blockquote><h2 id="integrate-collaborative-whiteboard-canvas-drawing">Integrate Collaborative Whiteboard (Canvas Drawing)</h2><p>When in a meeting, it can be very handy to draw and share your views with all the collaborators. To achieve this, you can develop a drawing board shared in real time using the publish-subscribe mechanism. If you are not familiar with the PubSub mechanism (the <code>usePubSub</code> hook), you can <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/pubsub">follow this guide</a>.</p><h3 id="implementing-collaborative-whiteboard%E2%80%8B">Implementing Collaborative Whiteboard<a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/canvas-drawing-using-pubsub#implementing-canvas-drawing">​</a></h3><p>To implement the Whiteboard (Canvas Drawing) feature, you need to use a third-party library that provides an easy solution for drawing and rendering on the canvas.</p><ul><li>First, install all the dependencies.</li></ul><pre><code class="language-js">npm i "@shawngoh87/react-sketch-canvas"</code></pre><ul><li>With the dependencies installed, make a new <code>WhiteboardView</code> component which will be placed in the <code>MeetingView</code> component, and also add a basic whiteboard (canvas) to it.</li></ul><pre><code class="language-js">import { ReactSketchCanvas } from "@shawngoh87/react-sketch-canvas";

const MeetingView = () =&gt; {
  return (
    &lt;div&gt;
      &lt;WhiteboardView /&gt;
    &lt;/div&gt;
  );
};

const WhiteboardView = () =&gt; {
  //Define a reference for the canvas
  const canvasRef = useRef();

  //Define the props required by the canvas element used
  const canvasProps = {
    width: "100%",
    height: "500px",
    backgroundImage:
      "https://upload.wikimedia.org/wikipedia/commons/7/70/Graph_paper_scan_1600x1000_%286509259561%29.jpg",
    preserveBackgroundImageAspectRatio: "none",
    strokeWidth: 4,
    eraserWidth: 5,
    strokeColor: "#000000",
    canvasColor: "#FFFFFF",
    allowOnlyPointerType: "all",
    withViewBox: false,
  };
  return (
    &lt;div&gt;
      {/* Adding the actual canvas object */}
      &lt;ReactSketchCanvas ref={canvasRef} {...canvasProps} /&gt;
    &lt;/div&gt;
  );
};</code></pre><ul><li>With this, your canvas(whiteboard) is ready for drawing. If you draw something on your board, other participants won't be able to see those drawings yet. To share your drawings with others, use the <code>usePubSub</code> hook. Get the <code>publish()</code> method from the <code>usePubSub</code> hook for the topic <code>WHITEBOARD</code> to send your drawings to all the participants in the meeting.</li><li>The data you need to send to all the participants is the strokes you are drawing, so you will send a stringified JSON to everyone in the message.</li></ul><pre><code class="language-js">import { usePubSub } from "@videosdk.live/react-sdk";

const WhiteboardView = () =&gt; {
  //.. other declarations

  const { publish } = usePubSub("WHITEBOARD");

  // This callback from the canvas component will give us the stroke json we need to share
  const onStroke = (stroke, isEraser) =&gt; {
    // We will be setting the `persist:true` so that all the strokes
    // are available for the participants who have recently joined
    publish(JSON.stringify(stroke), { persist: true });
  };

  return (
    &lt;div&gt;
      &lt;ReactSketchCanvas ref={canvasRef} onStroke={onStroke} {...canvasProps} /&gt;
    &lt;/div&gt;
  );
};</code></pre><ol><li>Even after publishing, the drawings won't appear to other participants because they need to redraw the strokes received from others. This involves handling the <code>onMessageReceived</code> event and the <code>onOldMessagesReceived</code> event.</li></ol><ul><li>The data received in these events will be <code>stringified JSON</code>, which needs to be parsed before drawing.</li><li>Additionally, to avoid redrawing the strokes created by the local participant, an extra check determines whether the stroke originated from the local participant.</li></ul><pre><code class="language-js">import { useMeeting, usePubSub } from "@videosdk.live/react-sdk";

const WhiteboardView = () =&gt; {
  //.. other declarations

  const { localParticipant } = useMeeting();

  const { publish } = usePubSub("WHITEBOARD", {
    onMessageReceived: (message) =&gt; {
      //Check if the stroke is from remote participant only
      if (message.senderId !== localParticipant.id) {
        canvasRef.current.loadPaths(JSON.parse(message.message));
      }
    },
    onOldMessagesReceived: (messages) =&gt; {
      messages.forEach((message) =&gt; {
        canvasRef.current.loadPaths(JSON.parse(message.message));
      });
    },
  });
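
  //Hypothetical extension (not covered in this guide): clear the whiteboard
  //for everyone by broadcasting a sentinel message. Remote participants would
  //also need to detect it in `onMessageReceived` and call
  //canvasRef.current.clearCanvas(), a method the canvas library exposes.
  const clearBoard = () =&gt; {
    canvasRef.current.clearCanvas();
    publish(JSON.stringify({ type: "CLEAR" }), { persist: true });
  };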

  //This callback from the canvas component will give us the stroke json we need to share
  const onStroke = (stroke, isEraser) =&gt; {
    ...
  };

  return (
    &lt;div&gt;
      &lt;ReactSketchCanvas ref={canvasRef} onStroke={onStroke} {...canvasProps} /&gt;
    &lt;/div&gt;
  );
};</code></pre><p>Congratulations! You've successfully integrated a whiteboard feature into your React.js video-calling application.</p><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-js-video-calling-app">✨ Want to Add More Features to React JS Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React video-calling app, the resources below can help.</p><p><strong>Check out these additional resources:</strong></p><ul><li>HLS Player: <a href="https://www.videosdk.live/blog/implement-hls-player-in-react-js">Link</a></li><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/integrate-active-speaker-indication-in-react-js">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-livestream-in-react-js">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-js">Link</a></li><li>Screen Share Feature: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-js">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-js">Link</a></li><li>Picture-in-Picture (PiP) Mode: <a href="https://www.videosdk.live/blog/integrate-picture-in-picture-pip-in-react-js">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>In conclusion, integrating a Collaborative Whiteboard into your React JS Video Call App enriches communication and collaboration experiences. By providing users with a versatile platform to brainstorm, illustrate ideas, and annotate documents in real time, this feature enhances productivity, fosters creativity, and facilitates seamless remote collaboration.</p><p>Whether it's used in educational settings, business meetings, design reviews, or technical support sessions, a shared whiteboard adds value across contexts. Embrace this innovative feature today and enhance your video call app with a Collaborative Whiteboard from <a href="https://www.videosdk.live/">VideoSDK</a>. 
Elevate your users' experience and drive success in diverse contexts.</p><p>If you are new here and want to build an interactive React app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>
This feature proves beneficial when you need to take notes, send an email, or look up information during the conference.</p><p>PiP mode is very effective in apps that require users to watch a video while engaging with other information or features. For example, in a video conferencing program, users can continue their video call in a small window while accessing other elements of the application, such as chat or file sharing. Students on an e-learning platform can view a lecture video while taking notes or accessing other information.</p><p>PiP mode improves the user experience without interfering with the user's workflow, and it increases the flexibility and usability of applications by allowing users to multitask and access additional functions while watching a video. This can result in greater user engagement, satisfaction, and, ultimately, success for your application.</p><p>In this post, we'll explore how to implement Picture-in-Picture (PiP) mode in a React Native video calling app. We will guide you through the steps of installing VideoSDK, integrating it into your project, and adding PiP mode to improve the video viewing experience in your application. </p><h2 id="getting-started-with-videosdk">Getting Started with VideoSDK</h2><p>To take advantage of the Picture-in-Picture (PiP) functionality, we must use the capabilities that VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.</p><h3 id="create-a-videosdk-account">Create a VideoSDK Account</h3><p>Go to your <a href="https://app.videosdk.live/dashboard/">VideoSDK dashboard</a> and sign up if you don't have an account. 
This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.</p><h3 id="generate-your-auth-token">Generate your Auth Token</h3><p>Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.</p><p>For a more visual understanding of the account creation and token generation process, consider referring to the <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/authentication-and-token">provided tutorial</a>.</p><h3 id="prerequisites-and-setup">Prerequisites and Setup</h3><p>Make sure your development environment meets the following requirements:</p><ul><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li><li>Android Studio or Xcode installed</li></ul><h2 id="%E2%AC%87%EF%B8%8F-integrate-videosdk-%E2%80%8B"><strong>⬇️ </strong>Integrate VideoSDK <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#videosdk-installation">​</a></h2><p>Install the VideoSDK by using the following command. Ensure that you are in your project directory before running this command.</p><ul><li>For NPM </li></ul><pre><code class="language-js">npm install "@videosdk.live/react-native-sdk"  "@videosdk.live/react-native-incallmanager"</code></pre><ul><li>For Yarn</li></ul><pre><code class="language-js">yarn add "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"</code></pre><h3 id="project-configuration">Project Configuration</h3><p>Setting up your project correctly is the next essential phase in ensuring smooth operation and integration. 
Setting up permissions, updating required files, and connecting dependencies are all part of project configuration, which helps to make sure your application can make the most out of VideoSDK capabilities.</p><h4 id="android-setup">Android Setup</h4>
<ul><li>Add the required permissions in the <code>AndroidManifest.xml</code> file.</li></ul><pre><code class="language-js">&lt;manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
&gt;
    &lt;!-- Give all the required permissions to app --&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) --&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" /&gt;

    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)--&gt;
    &lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;

    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
    &lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;

    &lt;application&gt;
        &lt;meta-data
            android:name="live.videosdk.rnfgservice.notification_channel_name"
            android:value="Meeting Notification"
        /&gt;
        &lt;meta-data
            android:name="live.videosdk.rnfgservice.notification_channel_description"
            android:value="A notification will appear whenever a meeting starts."
        /&gt;
        &lt;meta-data
            android:name="live.videosdk.rnfgservice.notification_color"
            android:resource="@color/red"
        /&gt;
        &lt;service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"&gt;&lt;/service&gt;
        &lt;service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"&gt;&lt;/service&gt;
    &lt;/application&gt;
&lt;/manifest&gt;</code></pre><ul><li>Update your <code>colors.xml</code> file for internal dependencies: </li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;resources&gt;
  &lt;item name="red" type="color"&gt;
    #FC0303
  &lt;/item&gt;
  &lt;integer-array name="androidcolors"&gt;
    &lt;item&gt;@color/red&lt;/item&gt;
  &lt;/integer-array&gt;
&lt;/resources&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/src/main/res/values/colors.xml</span></p></figcaption></figure><ul><li>Link the necessary VideoSDK Dependencies:</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">  dependencies {
   implementation project(':rnwebrtc')
   implementation project(':rnfgservice')
  }</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/build.gradle</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/settings.gradle</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">import live.videosdk.rnwebrtc.WebRTCModulePackage;
import live.videosdk.rnfgservice.ForegroundServicePackage;

public class MainApplication extends Application implements ReactApplication {
  private List&lt;ReactPackage&gt; getPackages() {
      @SuppressWarnings("UnnecessaryLocalVariable")
      List&lt;ReactPackage&gt; packages = new PackageList(this).getPackages();
      // Packages that cannot be autolinked yet can be added manually here, for example:

      packages.add(new ForegroundServicePackage());
      packages.add(new WebRTCModulePackage());

      return packages;
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MainApplication.java</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">/* This one fixes a weird WebRTC runtime problem on some devices. */
android.enableDexingArtifactTransform.desugaring=false</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/gradle.properties</span></p></figcaption></figure><ul><li>Include the following line in your <code>proguard-rules.pro</code> file (optional: if you are using Proguard)</li></ul><figure class="kg-card kg-code-card"><pre><code class="language-js">-keep class org.webrtc.** { *; }</code></pre><figcaption><p><span style="white-space: pre-wrap;">android/app/proguard-rules.pro</span></p></figcaption></figure><ul><li>In your <code>build.gradle</code> file, update the minimum OS/SDK version to <code>23</code>.</li></ul><pre><code class="language-js">buildscript {
  ext {
      minSdkVersion = 23
  }
}</code></pre><h4 id="register-service">Register Service</h4>
<p>Register the VideoSDK services in your root <code>index.js</code> file so they are initialized before the app component is rendered.</p><pre><code class="language-js">import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { register } from "@videosdk.live/react-native-sdk";
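// register() initializes VideoSDK's native services; the guide calls it once
// here in the root index.js, before the app component is registered.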

register();

AppRegistry.registerComponent(appName, () =&gt; App);</code></pre><h2 id="essential-steps-for-building-the-video-calling">Essential Steps for Building the Video Calling App</h2><p>By following these essential steps, you can seamlessly implement video in your application with VideoSDK, which provides a robust set of tools and APIs to facilitate the integration of video capabilities into applications.</p><h3 id="step-1-get-started-with-apijs%E2%80%8B">Step 1: Get started with api.js<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-1--get-started-with-apijs">​</a></h3><p>Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or directly from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a> for developers.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">export const token = "&lt;Generated-from-dashboard&gt;";
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
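
  // Optional hardening (not part of the original snippet): fail loudly on a
  // non-2xx response instead of trying to read roomId from an error body.
  if (!res.ok) {
    throw new Error("Room creation failed with status " + res.status);
  }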

  const { roomId } = await res.json();
  return roomId;
};</code></pre><figcaption><p><span style="white-space: pre-wrap;">API.js</span></p></figcaption></figure><h3 id="step-2-wireframe-appjs-with-all-the-components%E2%80%8B">Step 2: Wireframe App.js with all the components<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-2-wireframe-appjs-with-all-the-components">​</a></h3><p>To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.</p><p>First, you need to understand the <strong>Context Provider</strong> and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: This is the Context Provider. It accepts value <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.</li><li><strong>useParticipant</strong>: This is the participant hook API. 
It is responsible for handling all the events and props related to one particular participant such as name, webcamStream, micStream, etc.</li></ul><p>The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.</p><p>Begin by making a few changes to the code in the <strong>App.js</strong> file.</p><pre><code class="language-js">import React, { useState } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) =&gt; {
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}&gt;
      &lt;MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      &gt;
        &lt;MeetingView /&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  ) : (
    &lt;JoinScreen getMeetingId={getMeetingId} /&gt;
  );
}</code></pre><h3 id="step-3-implement-join-screen%E2%80%8B">Step 3: Implement Join Screen<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-3--implement-join-screen">​</a></h3><p>The join screen will serve as a medium to either schedule a new meeting or join an existing one.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">function JoinScreen(props) {
  const [meetingVal, setMeetingVal] = useState("");
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Create Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      /&gt;
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() =&gt; {
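          // Optional (not in the original guide): a loose client-side check of
          // the dashed meeting-code format shown in the placeholder, before joining.
          // if (!/^\w{4}-\w{4}-\w{4}$/.test(meetingVal)) return;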
          props.getMeetingId(meetingVal);
        }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Join Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/SafeAreaView&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">JoinScreen Component</span></p></figcaption></figure><h3 id="step-4-implement-controls%E2%80%8B">Step 4: Implement Controls<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-4--implement-controls">​</a></h3><p>The next step is to create a <code>ControlsContainer</code> component to manage features such as Join/leave a Meeting and Enable/Disable the Webcam or Mic.</p><p>In this step, the <code>useMeeting</code> hook is utilized to acquire all the required methods such as <code>join()</code>, <code>leave()</code>, <code>toggleWebcam</code> and <code>toggleMic</code>. </p><figure class="kg-card kg-code-card"><pre><code class="language-js">const Button = ({ onPress, buttonText, backgroundColor }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: "center",
        alignItems: "center",
        padding: 12,
        borderRadius: 4,
      }}
    &gt;
      &lt;Text style={{ color: "white", fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

function ControlsContainer({ join, leave, toggleWebcam, toggleMic }) {
  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
          join();
        }}
        buttonText={"Join"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ControlsContainer Component</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantList() {
  return null;
}
function MeetingView() {
  const { join, leave, toggleWebcam, toggleMic, meetingId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {meetingId ? (
        &lt;Text style={{ fontSize: 18, padding: 12 }}&gt;
          Meeting Id: {meetingId}
        &lt;/Text&gt;
      ) : null}
      &lt;ParticipantList /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView Component</span></p></figcaption></figure><h3 id="step-5-render-participant-list%E2%80%8B">Step 5: Render Participant List<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-5--render-participant-list">​</a></h3><p>After implementing the controls, the next step is to render the joined participants in <strong>ParticipantList Component</strong>.</p><p>You can get all the joined <code>participants</code> from the <code>useMeeting</code> Hook.</p><pre><code class="language-js">function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length &gt; 0 ? (
    &lt;FlatList
      data={participants}
      renderItem={({ item }) =&gt; {
        return &lt;ParticipantView participantId={item} /&gt;;
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 20 }}&gt;Press Join button to enter meeting.&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figure class="kg-card kg-code-card"><pre><code class="language-js">function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { join, leave, toggleWebcam, toggleMic, participants } = useMeeting({});
  const participantsArrId = [...participants.keys()];
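  // `participants` from useMeeting is a Map keyed by participantId, so the
  // spread above turns its keys into a plain array that FlatList can render.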

  return (
    &lt;View style={{ flex: 1 }}&gt;
      &lt;ParticipantList participants={participantsArrId} /&gt;
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">MeetingView Component</span></p></figcaption></figure><h3 id="step-6-handling-participants-media%E2%80%8B">Step 6: Handling Participant's Media<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/quick-start#step-6--handling-participants-media">​</a></h3><p>Before Handling the Participant's Media, you need to understand a couple of concepts.</p><h4 id="1-useparticipant-hook">1. useParticipant Hook</h4>
<p>The <code>useParticipant</code> hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It takes <code>participantId</code> as an argument.</p><pre><code class="language-js">const { webcamStream, webcamOn, displayName } = useParticipant(participantId);</code></pre><h4 id="2-mediastream-api">2. MediaStream API</h4>
<p>The MediaStream API wraps a MediaTrack so it can be passed to the <code>RTCView</code> component, enabling the playback of audio or video.</p><figure class="kg-card kg-code-card"><pre><code class="language-js">&lt;RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/&gt;</code></pre><figcaption><p><span style="white-space: pre-wrap;">MediaStream API Example</span></p></figcaption></figure><h4 id="rendering-participant-media">Rendering Participant Media</h4>
<figure class="kg-card kg-code-card"><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);
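  // webcamStream can still be undefined for a moment even when webcamOn is true,
  // which is why the render below checks both before building a MediaStream.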

  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">ParticipantView Component</span></p></figcaption></figure><p>Congratulations! By following these steps, you're on your way to unlocking video within your application. Now, we are moving forward to integrate the feature that builds immersive video experiences for your users!</p><h2 id="integrate-picture-in-picture-pip-feature">Integrate Picture-in-picture (PiP) Feature</h2><p>As described in the introduction, PiP keeps the call visible in a small, resizable window while users work elsewhere in the app.</p><p>This guide explains the steps to implement the Picture-in-Picture feature using VideoSDK.</p><h3 id="step-1-install-package%E2%80%8B">Step 1: Install Package<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/render-media/picture-in-picture#step-1-install-package">​</a></h3><p>To begin with, you need to install a third-party package <a href="https://www.npmjs.com/package/react-native-pip-android" rel="noopener noreferrer">react-native-pip-android</a> to achieve PiP mode in Android.</p><pre><code class="language-npm">npm install react-native-pip-android</code></pre><h3 id="step-2-setup%E2%80%8B">Step 2: Setup<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/render-media/picture-in-picture#step-2-setup">​</a></h3><p>Include the following attributes in the <code>/android/app/src/main/AndroidManifest.xml</code> file.</p><pre><code class="language-js">  &lt;activity
    ...
      android:supportsPictureInPicture="true"
      android:configChanges=
        "screenSize|smallestScreenSize|screenLayout|orientation"
        ...</code></pre><h3 id="step-3-import-activity-in-mainactivityjava-file%E2%80%8B">Step 3: Import Activity in <code>MainActivity.java</code> file<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/render-media/picture-in-picture#step-3-import-activity-in-mainactivityjava-file">​</a></h3><pre><code class="language-js">...
import com.reactnativepipandroid.PipAndroidModule;

public class MainActivity extends ReactActivity {

...

@Override
  public void onPictureInPictureModeChanged (boolean isInPictureInPictureMode) {
    PipAndroidModule.pipModeChanged(isInPictureInPictureMode);
  }</code></pre><h3 id="step-4-setup-pip-for-rendering-participant-media%E2%80%8B">Step 4: Setup PiP for Rendering participant media<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/render-media/picture-in-picture#step-4-setup-pip-for-rendering-participant-media">​</a></h3><p>To set up Picture-in-Picture (PiP) mode for rendering participant media, you can utilize the <code>usePipModeListener</code> hook to control the rendering. The example below demonstrates how to render a participant list.</p><pre><code class="language-js">import PipHandler, { usePipModeListener } from "react-native-pip-android";

function ParticipantView({ participantId, inPipMode }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);

  return webcamOn &amp;&amp; webcamStream ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: inPipMode ? 75 : 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : null;
}

function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { participants } = useMeeting({});

  const inPipMode = usePipModeListener();

  // Use this boolean to show / hide ui when pip mode changes
  if (inPipMode) {
    // Render the participant in PiP Box

    return [...participants.keys()].map((participantId, index) =&gt; (
      &lt;ParticipantView
        key={index}
        participantId={participantId}
        inPipMode={true}
      /&gt;
    ));
  }

  return (
    &lt;View style={{ flex: 1 }}&gt;
      // ..
      // ..
    &lt;/View&gt;
  );
}</code></pre><h3 id="step-5-enter-pip-mode%E2%80%8B">Step 5: Enter PiP mode<a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/render-media/picture-in-picture#step-5-enter-in-pip-mode">​</a></h3><p>To enter Picture-in-Picture (PiP) mode, use the <code>enterPipMode</code> method of <code>PipHandler</code>. Provide the PiP box's height and width as parameters for this method.</p><pre><code class="language-js">function ControlsContainer() {
  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-evenly",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
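          // Per the guide, the arguments below are the PiP box's height and
          // width; 300 x 500 is just an example size.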
          PipHandler.enterPipMode(300, 500);
        }}
        buttonText={"PiP"}
        backgroundColor={"blue"}
      /&gt;
    &lt;/View&gt;
  );
}</code></pre><p>By following these essential steps, you can successfully integrate VideoSDK into your application and enhance it with the Picture-in-Picture (PiP) feature.</p><h2 id="%E2%9C%A8-want-to-add-more-features-to-react-native-video-calling-app">✨ Want to Add More Features to React Native Video Calling App?</h2><p>If you found this guide helpful and want to explore more features for your React Native video-calling app,</p><p><strong>Check out these additional resources:</strong></p><ul><li>Active Speaker Indication: <a href="https://www.videosdk.live/blog/active-speaker-indication-in-react-native-video-call-app">Link</a></li><li>RTMP Live Stream: <a href="https://www.videosdk.live/blog/integrate-rtmp-in-react-native-video-app">Link</a></li><li>Image Capture Feature: <a href="https://www.videosdk.live/blog/integrate-image-capture-in-react-native-for-android-app">Link</a></li><li>Screen Share Feature in Android: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-android-video-call-app">Link</a></li><li>Screen Share Feature in iOS: <a href="https://www.videosdk.live/blog/integrate-screen-share-in-react-native-ios-video-call-app">Link</a></li><li>Chat Feature: <a href="https://www.videosdk.live/blog/integrate-chat-feature-in-react-native-video-call-app">Link</a></li></ul><h2 id="conclusion">Conclusion</h2><p>Congratulations! You’ve now learned <strong>how to implement Picture-in-Picture mode in a React Native Android app</strong>! By following these systematic guidelines, developers can unlock the full potential of video communication with the Picture-in-Picture (PiP) feature within their applications. This feature can offer a great user experience and enable users to multitask more effectively. </p><p>VideoSDK provides a powerful set of tools and APIs that make it easier than ever to create immersive video experiences for your users. 
Whether you're building a video conferencing app, a live streaming service, or a virtual event platform, VideoSDK has everything you need to succeed.</p><p>If you are new here and want to build an interactive React Native app with free resources, you can <a href="https://www.videosdk.live/signup">Sign up with VideoSDK</a> and get <em>10,000 free minutes every month</em>. This will help your new video-calling app go to the next level without any costs associated with initial usage, allowing you to focus on building and scaling your application effectively.</p>]]></content:encoded></item><item><title><![CDATA[Building a React Native Video Calling App with VideoSDK]]></title><description><![CDATA[In this tutorial, you’ll learn how to add a video calling feature to your React Native app using VideoSDK.]]></description><link>https://www.videosdk.live/blog/how-to-make-a-video-calling-app-using-react-native</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb7e</guid><category><![CDATA[Developer Blog]]></category><dc:creator><![CDATA[Rajan Surani]]></dc:creator><pubDate>Wed, 18 Sep 2024 15:12:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/05/React-Native1.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/05/React-Native1.jpg" alt="Building a React Native Video Calling App with VideoSDK"/><p>Build a powerful video &amp; audio calling app using VideoSDK. It's the solution that helps you build, scale, &amp; innovate. </p><h2 id="why-use-videosdk-for-your-react-native-app">Why Use VideoSDK for Your React Native App?</h2><p>Let's simplify the process of building a cross-platform video &amp; audio calling app in React Native. 
Video calling has become an essential requirement: the feature is being used in ways never seen before, from telehealth apps to <a href="https://www.videosdk.live/blog/live-streaming-is-the-new-future-of-e-commerce">live commerce</a>, and that's only the start. We can't assume what kind of app you will be building, but whatever it is, we provide a Video SDK solution that is feature-rich, easy to implement, robust, &amp; <a href="https://www.videosdk.live/pricing">free to use</a>!</p><h2 id="key-features-to-enhance-your-react-native-video-calling-app">Key Features to Enhance Your React Native Video Calling App</h2><p>React Native is a great choice for building cross-platform apps. However, providing the exact same features on both platforms can be difficult. At VideoSDK, we have you covered: we provide features that work out of the box on both React Native Android and iOS, including <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/extras/react-native-ios-screen-share">React Native iOS screen sharing</a>, which is rare for iOS! We provide 20+ features, so you can create the best video &amp; audio calling app in React Native. <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/features/recording-meeting">Here's the list of features</a> you can add to your React Native application.</p><p>Now we are all set, so let's get started with the tutorial!</p><h2 id="getting-started-with-your-react-video-calling-app">Getting Started with Your React Video Calling App</h2><p>The steps below give you all the information you need to quickly integrate VideoSDK into your app. Please follow along carefully, &amp; if you face any difficulty, let us know on <a href="https://discord.com/invite/Gpmj6eCq5u">Discord</a> and we will help you right away.
</p><h3 id="prerequisites-for-installing-videosdk">Prerequisites for Installing VideoSDK</h3><ul><li>Node.js v12+</li><li>NPM v6+ (comes installed with newer Node versions)</li><li>Android Studio or Xcode installed</li><li>A token from the VideoSDK <a href="https://app.videosdk.live/api-keys">dashboard</a></li></ul><h2 id="create-a-new-react-native-app">Create a new React Native app</h2><p>Let's start by creating a new React Native app using the command:</p><pre><code class="language-js">$ npx react-native init AppName</code></pre><h3 id="install-video-sdk">Install Video SDK</h3><p>Install the Video SDK by running the command below. Make sure you are in your project directory before you run this command.</p><pre><code class="language-js">$ npm install "@videosdk.live/react-native-sdk"</code></pre>
<h3 id="project-structure">Project Structure</h3><figure class="kg-card kg-code-card"><pre><code>  root
   ├── node_modules
   ├── android
   ├── ios
   ├── App.js
   ├── api.js
   ├── index.js
</code></pre><figcaption><p><span style="white-space: pre-wrap;">Project Structure&nbsp;</span></p></figcaption></figure><h3 id="project-configuration">Project Configuration</h3><p>You need to configure your project for Android and iOS to make sure the app runs smoothly.</p><h2 id="step-by-step-guide-to-creating-a-new-react-native-app">Step-by-Step Guide to Creating a New React Native App</h2><h3 id="android-setup">Android Setup</h3><h4 id="step-1-add-the-required-permission-in-the-androidmanifestxml-file">Step 1:  Add the required permission in the AndroidManifest.xml file.</h4>
<pre><code class="language-xml">&lt;manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
&gt;
    &lt;!-- Give all the required permissions to app --&gt;
    &lt;uses-permission android:name="android.permission.INTERNET" /&gt;
    &lt;uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /&gt;
    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) --&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" /&gt;
    &lt;uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" /&gt;

    &lt;!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)--&gt;
    &lt;uses-permission android:name="android.permission.BLUETOOTH_CONNECT" /&gt;

    &lt;uses-permission android:name="android.permission.CAMERA" /&gt;
    &lt;uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" /&gt;
    &lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
    &lt;uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /&gt;
    &lt;uses-permission android:name="android.permission.FOREGROUND_SERVICE"/&gt;
    &lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;
  &lt;application&gt;
   &lt;meta-data
      android:name="live.videosdk.rnfgservice.notification_channel_name"
      android:value="Meeting Notification"
     /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_channel_description"
    android:value="Whenever meeting started notification will appear."
    /&gt;
    &lt;meta-data
    android:name="live.videosdk.rnfgservice.notification_color"
    android:resource="@color/red"
    /&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"&gt;&lt;/service&gt;
    &lt;service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"&gt;&lt;/service&gt;
  &lt;/application&gt;
&lt;/manifest&gt;
</code></pre>
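<p>One caveat about the CAMERA and RECORD_AUDIO entries above: on Android 6+, declaring a dangerous permission in the manifest is not enough; the app must also request it at runtime. A minimal sketch of that request follows. The helper name is made up, and the permissions API is passed in as a parameter; in the real app you would pass React Native's built-in <code>PermissionsAndroid</code> module, whose <code>requestMultiple</code> resolves to a map of permission name to result string.</p><pre><code class="language-js">// Permissions the meeting screen needs at runtime (Android 6+).
const MEETING_PERMISSIONS = [
  "android.permission.CAMERA",
  "android.permission.RECORD_AUDIO",
];

// `permissionsApi` is injected (e.g. React Native's PermissionsAndroid)
// so the helper stays easy to unit test with a stub.
async function requestMeetingPermissions(permissionsApi) {
  const results = await permissionsApi.requestMultiple(MEETING_PERMISSIONS);
  // True only when every permission came back "granted".
  return MEETING_PERMISSIONS.every(function (p) {
    return results[p] === "granted";
  });
}</code></pre><p>Call it once before joining a meeting, and fall back to an audio-only or error state when it resolves to <code>false</code>.</p>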
<h4 id="step-2-link-couple-of-internal-library-dependencies-in-androidappbuildgradle-file">Step 2: Link a couple of internal library dependencies in the android/app/build.gradle file</h4>
<pre><code class="language-js">dependencies {
    // `implementation` replaces the `compile` configuration, which was removed in Gradle 7+
    implementation project(':rnfgservice')
    implementation project(':rnwebrtc')
    implementation project(':rnincallmanager')
  }

</code></pre>
<p>Include dependencies in <strong>android/settings.gradle</strong></p><pre><code class="language-js">include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnincallmanager'
project(':rnincallmanager').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-incallmanager/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')

</code></pre>
<p>Update <strong>MainApplication.java </strong>to use InCall manager and run some foreground services.</p><pre><code class="language-js">import live.videosdk.rnfgservice.ForegroundServicePackage;
import live.videosdk.rnincallmanager.InCallManagerPackage;
import live.videosdk.rnwebrtc.WebRTCModulePackage;

public class MainApplication extends Application implements ReactApplication {
  private static List&lt;ReactPackage&gt; getPackages() {
      return Arrays.&lt;ReactPackage&gt;asList(
          /* Initialise foreground service, incall manager and webrtc module */
          new ForegroundServicePackage(),
          new InCallManagerPackage(),
          new WebRTCModulePackage(),
      );
  }
}
</code></pre>
<p>Some devices might face WebRTC problems; to solve that, update your <strong>android/gradle.properties</strong> file with the following:</p><pre><code class="language-JS"># Fixes a WebRTC runtime problem on some devices (gradle.properties uses # comments).
android.enableDexingArtifactTransform.desugaring=false</code></pre><p>If you use <strong>ProGuard</strong>, make the changes shown below in the <strong>android/app/proguard-rules.pro</strong> file (this is optional):</p><pre><code class="language-js">-keep class org.webrtc.** { *; }
</code></pre><h4 id="step-3-update-colorsxml-file-with-some-new-colours-for-internal-dependencies">Step 3: Update colors.xml file with some new colours for internal dependencies.</h4>
<pre><code class="language-xml">&lt;resources&gt;
    &lt;item name="red" type="color"&gt;#FC0303&lt;/item&gt;
    &lt;integer-array name="androidcolors"&gt;
    &lt;item&gt;@color/red&lt;/item&gt;
    &lt;/integer-array&gt;
&lt;/resources&gt;
</code></pre>
<h3 id="ios-setup">iOS Setup</h3><h4 id="step-1-install-react-native-incallmanager">Step 1: Install react-native-incallmanager</h4>
<pre><code class="language-JS">$ yarn add @videosdk.live/react-native-incallmanager
</code></pre><h4 id="step-2-make-sure-you-are-using-cocoapods-110-or-higher-to-update-cocoapods-you-can-simply-install-the-gem-again">Step 2: Make sure you are using CocoaPods 1.10 or higher. To update CocoaPods, you can simply install the gem again.</h4>
<pre><code class="language-JS">$ [sudo] gem install cocoapods
</code></pre><h4 id="step-3-manually-linking-if-react-native-incall-manager-is-not-linked-automatically">Step 3: Manually linking (if react-native-incall-manager is not linked automatically)</h4>
<ul>
<li>
<p>Drag node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager.xcodeproj under &lt;your_xcode_project&gt;/Libraries</p>
</li>
<li>
<p>Select &lt;your_xcode_project&gt; --&gt; Build Phases --&gt; Link Binary With Libraries</p>
</li>
<li>
<p>Drag Libraries/RNInCallManager.xcodeproj/Products/libRNInCallManager.a to Link Binary With Libraries</p>
</li>
<li>
<p>Select &lt;your_xcode_project&gt; --&gt; Build Settings In Header Search Paths, add $(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager</p>
</li>
</ul>
<h4 id="step-4-change-path-of-react-native-webrtc">Step 4: Change path of react-native-webrtc</h4>
<pre><code class="language-JS">pod 'react-native-webrtc', :path =&gt; '../node_modules/@videosdk.live/react-native-webrtc'</code></pre><h4 id="step-5-change-your-platform-version">Step 5: Change your platform version</h4>
<ul>
<li>You have to change the platform field of the Podfile to 11.0 or above, as react-native-webrtc doesn’t support iOS versions below 11: <code>platform :ios, '11.0'</code></li>
</ul>
<h4 id="step-6-after-updating-the-version-you-have-to-install-pods">Step 6: After updating the version, you have to install pods</h4>
<pre><code class="language-JS">pod install
</code></pre><h4 id="step-7-then-add-%E2%80%9Clibreact-native-webrtca%E2%80%9D-in-link-binary-with-libraries-in-target-of-main-project-folder">Step 7: Add “libreact-native-webrtc.a” under Link Binary With Libraries in the target of the main project folder.</h4>
<h4 id="step-8-now-add-following-permissions-to-infoplist-project-folderiosprojectnameinfoplist">Step 8: Now add the following permissions to Info.plist (project folder/ios/projectname/Info.plist):</h4>
<pre><code class="language-xml">&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;Camera permission description&lt;/string&gt;
&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;Microphone permission description&lt;/string&gt;</code></pre><h3 id="registering-and-initializing-videosdk-services">Registering and Initializing VideoSDK Services</h3><p>Register the VideoSDK services in the root <strong>index.js</strong> file to initialize the service.</p><pre><code class="language-js">import { register } from '@videosdk.live/react-native-sdk';
import { AppRegistry } from 'react-native';
import { name as appName } from './app.json';
import App from './App.js';
// Register the service
register();
AppRegistry.registerComponent(appName, () =&gt; App);
</code></pre>
<h3 id="building-the-app-interface-and-controls">Building the App Interface and Controls</h3><p><strong>Step 1</strong>: Before jumping to anything else, we have to write an API call to generate a unique meetingId. You will require an auth token; you can generate one either by using <a href="https://github.com/videosdk-live/videosdk-rtc-api-server-examples" rel="noopener noreferrer">videosdk-rtc-api-server-examples</a> or from the <a href="https://app.videosdk.live/api-keys" rel="noopener noreferrer">VideoSDK Dashboard</a>.</p><pre><code class="language-js">export const token = "&lt;Generated-from-dashboard&gt;";
// API call to create meeting
export const createMeeting = async ({ token }) =&gt; {
  const res = await fetch(`https://api.videosdk.live/v1/meetings`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ region: "sg001" }),
  });

  const { meetingId } = await res.json();
  return meetingId;
};
</code></pre>
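<p>Since <code>createMeeting</code> above mixes request construction with network I/O, one optional refactor is to split out a pure builder that can be unit tested without hitting the API. The function name below is illustrative, not part of VideoSDK:</p><pre><code class="language-js">// Pure builder for the create-meeting request, split out from the
// fetch call above so it can be tested without any network I/O.
function buildCreateMeetingRequest(token, region) {
  return {
    url: "https://api.videosdk.live/v1/meetings",
    options: {
      method: "POST",
      headers: {
        authorization: token,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ region: region }),
    },
  };
}</code></pre><p>The call site then becomes <code>const req = buildCreateMeetingRequest(token, "sg001"); const res = await fetch(req.url, req.options);</code>.</p>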
<p><strong>Step 2</strong>: To build up the wireframe of <strong>App.js</strong>, we are going to use <strong>Video SDK Hooks</strong> and <strong>Context Providers</strong>. Video SDK provides us with the MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks. Let's understand each of them.</p><p>First, we will explore the Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.</p><ul><li><strong>MeetingProvider</strong>: It is a Context Provider. It accepts <code>config</code> and <code>token</code> as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers, and Providers can be nested to override values deeper within the tree.</li><li><strong>MeetingConsumer</strong>: It is a Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.</li><li><strong>useMeeting</strong>: It is the React hook API for the meeting. It includes all the information related to the meeting, such as joining, leaving, and enabling/disabling the mic or webcam.</li><li><strong>useParticipant</strong>: It is the participant hook API, responsible for handling all the events and props related to one particular participant, such as name, webcamStream, and micStream.</li></ul><p>The Meeting Context helps to listen to all the changes when a participant joins the meeting or toggles the mic or camera.</p><p>Let's get started by changing a couple of lines of code in <strong>App.js</strong>.</p><pre><code class="language-js">import React, { useState } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) =&gt; {
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    &lt;SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}&gt;
      &lt;MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      &gt;
        &lt;MeetingView /&gt;
      &lt;/MeetingProvider&gt;
    &lt;/SafeAreaView&gt;
  ) : (
    &lt;JoinScreen getMeetingId={getMeetingId} /&gt;
  );
}
</code></pre>
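<p>The branch inside <code>getMeetingId</code> above (create a meeting when no id was passed in, otherwise reuse the given one) can also be isolated into a pure helper with the creator injected, which makes it trivial to test. The helper name is ours, not VideoSDK's:</p><pre><code class="language-js">// Decide which meeting id to use: keep `id` when the user supplied one,
// otherwise ask `create` (e.g. createMeeting) for a fresh id.
async function resolveMeetingId(id, create) {
  if (id == null) {
    return await create();
  }
  return id;
}</code></pre>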
<p><strong>Step 3</strong>: Let's now add a Join Screen to our app, from which you can create a new meeting or join an existing one.</p><pre><code class="language-js">function JoinScreen(props) {
  const [meetingVal, setMeetingVal] = useState("");
  return (
    &lt;SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    &gt;
      &lt;TouchableOpacity
        onPress={() =&gt; {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Create Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;

      &lt;Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      &gt;
        ---------- OR ----------
      &lt;/Text&gt;
      &lt;TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      /&gt;
      &lt;TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() =&gt; {
          props.getMeetingId(meetingVal);
        }}
      &gt;
        &lt;Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}&gt;
          Join Meeting
        &lt;/Text&gt;
      &lt;/TouchableOpacity&gt;
    &lt;/SafeAreaView&gt;
  );
}
</code></pre>
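<p>The <code>XXXX-XXXX-XXXX</code> placeholder in the text input hints at the shape of a meeting id. If you want to give users immediate feedback before calling the API, a loose client-side check like the one below helps; the helper is purely illustrative, and the server remains the source of truth for whether a meeting actually exists:</p><pre><code class="language-js">// Loose UI-level sanity check that the input matches the
// xxxx-xxxx-xxxx shape shown in the placeholder.
function looksLikeMeetingId(value) {
  return /^[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}$/i.test(String(value).trim());
}</code></pre>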
<p><strong>Step 4</strong>: The next step is to create a <strong>ControlsContainer</strong> component that manages features such as joining or leaving the meeting and enabling or disabling the webcam/mic.</p><p>In this step, we will use the <strong>useMeeting</strong> hook to get all the required methods, such as <strong>join()</strong>, <strong>leave()</strong>, <strong>toggleWebcam()</strong>, and <strong>toggleMic()</strong>.</p><p>So let's update <strong>ControlsContainer</strong> and add it to our <strong>MeetingView</strong>.</p><pre><code class="language-js">const Button = ({ onPress, buttonText, backgroundColor }) =&gt; {
  return (
    &lt;TouchableOpacity
      onPress={onPress}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: "center",
        alignItems: "center",
        padding: 12,
        borderRadius: 4,
      }}
    &gt;
      &lt;Text style={{ color: "white", fontSize: 12 }}&gt;{buttonText}&lt;/Text&gt;
    &lt;/TouchableOpacity&gt;
  );
};

function ControlsContainer({ join, leave, toggleWebcam, toggleMic }) {
  return (
    &lt;View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    &gt;
      &lt;Button
        onPress={() =&gt; {
          join();
        }}
        buttonText={"Join"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      /&gt;
      &lt;Button
        onPress={() =&gt; {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      /&gt;
    &lt;/View&gt;
  );
}

function ParticipantList() {
  return null;
}
function MeetingView() {
  const { join, leave, toggleWebcam, toggleMic, meetingId } = useMeeting({});

  return (
    &lt;View style={{ flex: 1 }}&gt;
      {meetingId ? (
        &lt;Text style={{ fontSize: 18, padding: 12 }}&gt;
          Meeting Id :{meetingId}
        &lt;/Text&gt;
      ) : null}
      &lt;ParticipantList /&gt; {/* Will implement in next steps */}
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}

</code></pre>
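<p><code>toggleWebcam()</code> and <code>toggleMic()</code> flip the local participant's device state inside the SDK. If you also mirror that state locally (for button labels or icons, say), a tiny reducer keeps the two flags in one place. This is an illustrative sketch of local UI state only, not how VideoSDK tracks devices internally:</p><pre><code class="language-js">// Illustrative local UI state for the two toggle buttons; VideoSDK
// tracks the real device state, this only drives labels/icons.
function deviceReducer(state, action) {
  switch (action) {
    case "TOGGLE_MIC":
      return { micOn: !state.micOn, webcamOn: state.webcamOn };
    case "TOGGLE_WEBCAM":
      return { micOn: state.micOn, webcamOn: !state.webcamOn };
    default:
      return state;
  }
}</code></pre><p>The initial state would mirror the <code>MeetingProvider</code> config used earlier (<code>micEnabled: false, webcamEnabled: true</code>).</p>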
<p><strong>Step 5: </strong>After implementing controls, it's time to render joined participants.</p><p>We will get joined <strong>participants</strong> from <strong>useMeeting</strong> Hook.</p><pre><code class="language-js">function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length &gt; 0 ? (
    &lt;FlatList
      data={participants}
      renderItem={({ item }) =&gt; {
        return &lt;ParticipantView participantId={item} /&gt;;
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 20 }}&gt;Press Join button to enter meeting.&lt;/Text&gt;
    &lt;/View&gt;
  );
}

function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { join, leave, toggleWebcam, toggleMic, participants } = useMeeting({});
  const participantsArrId = [...participants.keys()]; // Add this line

  return (
    &lt;View style={{ flex: 1 }}&gt;
      &lt;ParticipantList participants={participantsArrId} /&gt; {/* Pass participants */}
      &lt;ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      /&gt;
    &lt;/View&gt;
  );
}
</code></pre>
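<p>A note on why the <code>[...participants.keys()]</code> line is needed: <code>participants</code> from <code>useMeeting</code> is a Map keyed by participant id, while FlatList expects an array. The same conversion as a standalone function (the name is illustrative):</p><pre><code class="language-js">// `participants` is a Map keyed by participant id; FlatList wants an
// array, so extract the ids (in insertion order).
function participantIds(participants) {
  return Array.from(participants.keys());
}</code></pre>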
<p><strong>Step 6</strong>: The next step is to update the participant view to show the participant media, i.e. video and audio. Before handling participant media, we need to understand a couple of concepts.</p><h4 id="1-useparticipant-hook">1. useParticipant Hook</h4>
<p>The useParticipant hook is responsible for handling all the properties and events of one particular participant in the meeting. It takes participantId as an argument.</p><pre><code class="language-js">//Example for useParticipant Hook
const { webcamStream, webcamOn, displayName } = useParticipant(participantId);
</code></pre>
<h4 id="2-mediastream-api">2. MediaStream API</h4>
<p>The MediaStream API is used to wrap a media track so the <code>RTCView</code> component can play the audio and video.</p><pre><code class="language-js">//MediaStream API example
&lt;RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/&gt;
</code></pre>
<p>So let us combine these two concepts and render the participant view.</p><pre><code class="language-js">function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);
  return webcamOn ? (
    &lt;RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    /&gt;
  ) : (
    &lt;View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    &gt;
      &lt;Text style={{ fontSize: 16 }}&gt;NO MEDIA&lt;/Text&gt;
    &lt;/View&gt;
  );
}
</code></pre>
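<p>One defensive note on <code>ParticipantView</code>: depending on timing, <code>webcamStream</code> may still be unset for a moment after <code>webcamOn</code> flips, and <code>new MediaStream([webcamStream.track])</code> would then throw. A guard worth considering, written as a pure predicate (the helper name is ours, not part of VideoSDK):</p><pre><code class="language-js">// Render video only when the flag is on AND a stream with a track
// actually exists; guards against a momentarily missing stream.
function canRenderVideo(webcamOn, webcamStream) {
  if (!webcamOn) {
    return false;
  }
  if (webcamStream == null) {
    return false;
  }
  return webcamStream.track != null;
}</code></pre><p>In the component, render the <code>RTCView</code> only when <code>canRenderVideo(webcamOn, webcamStream)</code> is true.</p>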
<h3 id="launching-your-video-calling-app">Launching Your Video Calling App</h3><pre><code class="language-js">//for android
npx react-native run-android

//for ios
npx react-native run-ios</code></pre><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/05/giphy--6-.gif" class="kg-image" alt="Building a React Native Video Calling App with VideoSDK" loading="lazy" width="480" height="270"/></figure><h2 id="conclusion">Conclusion</h2><p>With this, we have successfully built a React Native video calling app using VideoSDK. If you wish to add functionalities like chat messaging and screen sharing, you can always check out our <a href="https://docs.videosdk.live/">documentation</a>. If you face any difficulty with the implementation, you can connect with us on our <a href="https://discord.gg/Gpmj6eCq5u">Discord community</a>.</p>]]></content:encoded></item><item><title><![CDATA[What is WebRTC Control?]]></title><description><![CDATA[Learn all about WebRTC control in Firefox, Chrome, Opera, and extensions. Explore how to manage WebRTC effectively, ensuring privacy and security.]]></description><link>https://www.videosdk.live/blog/webrtc-control</link><guid isPermaLink="false">653789ffbbb6901662fde132</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 18 Sep 2024 14:54:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/10/Frame-2-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/10/Frame-2-2.jpg" alt="What is WebRTC Control?"/><p>In today's digital age, where online communication is the norm, ensuring the security and privacy of your internet activities is crucial. One of the technologies that has become a central part of web-based communication is WebRTC (Web Real-Time Communication).
In this article, we will delve into what WebRTC control is and how it works in popular browsers like Firefox, Chrome, and Opera, along with browser extensions to enhance your online experience.</p><p>WebRTC is a powerful technology that enables real-time communication between web browsers and applications. It allows for voice and video calls, file sharing, and chat directly in your browser, eliminating the need for third-party plugins. However, with great power comes great responsibility, and it's essential to have control over WebRTC to maintain your online privacy and security.</p><h2 id="understanding-webrtc-control">Understanding WebRTC Control</h2><p>WebRTC is integrated into various web browsers, making it convenient for users to communicate without the need for external applications. However, this technology can also pose privacy risks, as it reveals your IP address and potentially allows websites to track your location. To address these concerns, browser developers have introduced features to control WebRTC.</p><h3 id="enhancing-webrtc-security-with-vpns">Enhancing WebRTC Security with VPNs</h3><p>When using WebRTC for communication, security and privacy are paramount. Utilizing a Virtual Private Network (VPN) can enhance security by masking your IP address, making it more difficult for third parties to track your location or intercept your data. By rerouting your internet connection through a secure server, a VPN ensures that your actual IP remains hidden, safeguarding your personal information from potential WebRTC leaks.</p><h3 id="understanding-ip-addresses-in-webrtc">Understanding IP Addresses in WebRTC</h3><p>WebRTC requires access to your IP addresses to establish direct connections. However, this can lead to the exposure of both your public and local IP addresses. Public IP addresses are assigned by your internet service provider and are visible on the internet, making them vulnerable to malicious activities. 
Local IPs, on the other hand, are used within your local network and are generally not exposed online. Knowing how WebRTC uses these IPs helps users take proactive steps to protect their privacy.</p><h3 id="public-ip-vs-local-ip-in-webrtc-leaks">Public IP vs. Local IP in WebRTC Leaks</h3><p>It's crucial to understand the difference between public and local IPs in the context of WebRTC. Public IPs can be used to identify your approximate location and internet service provider, which might be exposed during a WebRTC session if not properly secured. Local IPs are less critical but can reveal details about your internal network structure. Users should be aware of these differences to better understand and mitigate potential WebRTC leaks.</p><h3 id="webrtc-in-microsoft-edge">WebRTC in Microsoft Edge</h3><p>Microsoft Edge supports WebRTC and offers various settings to manage privacy and security related to WebRTC features. Users can control permissions for camera and microphone access, which are crucial for WebRTC's <a href="https://dialaxy.com/" rel="noreferrer">real-time communication </a>capabilities. Familiarizing yourself with Edge's privacy settings can help ensure that these features are used securely.</p><h3 id="finding-webrtc-controls-in-the-chrome-web-store">Finding WebRTC Controls in the Chrome Web Store</h3><p>For Chrome users concerned about WebRTC privacy, the Chrome Web Store offers extensions that can control WebRTC's behavior. These extensions can block or manage how WebRTC handles IP exposures, providing an added layer of privacy for users.</p><h3 id="webrtc-in-yandex-browser">WebRTC in Yandex Browser</h3><p>Yandex Browser supports WebRTC, but its management tools and privacy settings might differ slightly from other browsers. 
Users of Yandex Browser should check the settings menu for options to control WebRTC connections and protect personal information.</p><h3 id="accessing-webrtc-settings-via-browser-bars">Accessing WebRTC Settings via Browser Bars</h3><p>Most browsers allow users to access settings related to WebRTC through the search bar or address bar. This can include managing camera and microphone permissions or adjusting security settings to prevent IP leaks through WebRTC channels.</p><h3 id="impact-of-vpn-servers-on-webrtc">Impact of VPN Servers on WebRTC</h3><p>Connecting to different VPN servers can significantly influence WebRTC behavior and the exposure of your IP address. Different servers may offer varying levels of security and speed, affecting the overall performance and safety of your WebRTC sessions. Choosing the right VPN server is essential for maintaining privacy and ensuring high-quality communications.</p><h3 id="firefox-taking-control-of-webrtc">Firefox: Taking Control of WebRTC</h3><p>Firefox, a popular open-source browser, provides built-in settings to control WebRTC. By accessing the browser's configuration settings, you can disable WebRTC or use add-ons like "uBlock Origin" to have more granular control over WebRTC. These options give you the power to maintain your privacy while still enjoying the benefits of WebRTC.</p><h3 id="chrome-managing-webrtc">Chrome: Managing WebRTC</h3><p>Chrome, another widely used browser, offers an extension called "WebRTC Network Limiter," which allows users to control WebRTC functionality. This extension provides a balance between functionality and privacy, making it an excellent choice for those who rely on Chrome for their browsing needs.</p><h3 id="opera-embracing-webrtc-control">Opera: Embracing WebRTC Control</h3><p>Opera, known for its innovative features, also provides ways to manage WebRTC. Similar to Chrome, Opera offers the "WebRTC Network Limiter" extension for users to control their WebRTC settings. 
This ensures a secure and seamless browsing experience.</p><h3 id="browser-extensions-enhancing-control">Browser Extensions: Enhancing Control</h3><p>Browser extensions play a vital role in providing users with added control over WebRTC. These extensions offer features like IP masking, disabling WebRTC altogether, and selective control over websites' access to WebRTC. Some popular extensions include "WebRTC Leak Prevent" and "Privacy Badger."</p><h3 id="overview-of-the-webrtc-api">Overview of the WebRTC API</h3><p>For developers, the WebRTC API provides the framework for managing WebRTC settings programmatically. Understanding the API allows developers to customize WebRTC behaviors, such as connection establishment, data channel management, and security configurations, tailoring the experience to user needs while enhancing privacy.</p><h2 id="take-advantage-of-webrtc-with-videosdk">Take Advantage of WebRTC with VideoSDK</h2><p>With the rapid growth of online communication and real-time video interactions, harnessing the power of WebRTC (Web Real-Time Communication) through a <a href="https://www.videosdk.live/">VideoSDK </a>can greatly enhance your applications and services. Whether you're looking to integrate video capabilities into your web or mobile applications, We support various frameworks and provide comprehensive documentation to make the process seamless.</p><p><strong>WEB SDK:</strong></p><p>Our Web SDK offers <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/getting-started">prebuilt solutions</a> for quick and easy integration, supporting popular <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start">JavaScript</a> and <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/concept-and-architecture">React</a> frameworks. 
Our <a href="https://docs.videosdk.live/javascript/api/sdk-reference/setup">API reference</a>, <a href="https://www.videosdk.live/blog/tag/product">developer blogs</a>, and <a href="https://docs.videosdk.live/code-sample">code samples</a> are available to guide you through the implementation process.</p><p><strong>MOBILE SDK:</strong></p><p>For mobile app development, our SDK supports <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/concept-and-architecture">React Native</a>, <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/concept-and-architecture">Flutter</a>, <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/concept-and-architecture">Android</a>, and <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/getting-started">iOS platforms</a>, ensuring a consistent and reliable video experience across devices. As an <a href="https://www.branex.ae/mobile-app-development-company/">Application development company</a>, we provide comprehensive solutions that ensure seamless performance and compatibility across all platforms.</p><p><strong>Use Cases:</strong></p><p>Our Video SDK finds applications across multiple industries, such as:</p><ol><li><a href="https://www.videosdk.live/solutions/video-kyc">Video KYC</a>: Streamline and secure your Know Your Customer processes with real-time video verification.</li><li><a href="https://www.videosdk.live/solutions/telehealth">Telehealth</a>: Enable high-quality video consultations for healthcare professionals and patients.</li><li><a href="https://www.videosdk.live/solutions/education">Education</a>: Enhance online learning experiences with interactive video classes and collaboration tools.</li><li><a href="https://www.videosdk.live/solutions/live-shopping">Live Shopping</a>: Create engaging live shopping experiences, enabling customers to interact with sellers in real-time.</li><li><a 
href="https://www.videosdk.live/solutions/virtual-events">Virtual Events</a>: Host virtual conferences, expos, and trade shows with integrated video features.</li><li><a href="https://www.videosdk.live/solutions/social">Social Media</a>: Enhance social platforms with live video streaming for more engaging user interactions.</li><li><a href="https://www.videosdk.live/solutions/live-audio-streaming">Live Audio Streaming</a>: Offer real-time audio streaming for podcasts, music, and more.</li></ol><p>More importantly, it is <strong>FREE</strong> to start. You are guaranteed to receive <a href="https://app.videosdk.live/"><strong>10,000 free minutes EVERY MONTH</strong></a>.</p>
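<p>The programmatic control described under "Overview of the WebRTC API" can be made concrete with a small sketch. Forcing a relay-only ICE policy is one standard way to keep local IP addresses out of ICE candidates; the <code>iceTransportPolicy: "relay"</code> option is part of the standard <code>RTCConfiguration</code>, while the TURN server URL and credentials below are placeholders, not a real service:</p>

```javascript
// Sketch: build an RTCConfiguration that keeps all media on a TURN relay,
// so host/server-reflexive candidates (and thus local IPs) are never gathered.
// "turn.example.com" and the credentials are hypothetical placeholders.
function relayOnlyConfig(turnUrl, username, credential) {
  return {
    iceTransportPolicy: "relay", // skip host and srflx candidates entirely
    iceServers: [{ urls: [turnUrl], username, credential }],
  };
}

// In a browser, this object is passed to the RTCPeerConnection constructor:
//   const pc = new RTCPeerConnection(
//     relayOnlyConfig("turn:turn.example.com:3478", "alice", "s3cret"));
```

<p>The trade-off is that every packet transits the relay, adding latency and bandwidth cost, which is why most applications leave the default <code>"all"</code> policy and rely on browser or extension controls instead.</p>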
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h2 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h2>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
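<p>For reference, the configuration-level Firefox switch mentioned earlier is a single about:config preference; setting it back to <code>true</code> re-enables WebRTC:</p>

```ini
media.peerconnection.enabled = false
```

<p>Extensions such as "WebRTC Leak Prevent" offer more selective control without disabling WebRTC outright.</p>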
<h2 id="faqs">FAQs</h2><h3 id="what-are-the-privacy-risks-associated-with-webrtc">What are the privacy risks associated with WebRTC?</h3><p>WebRTC can reveal your IP address, potentially compromising your privacy and allowing websites to track your location.</p><h3 id="how-can-i-disable-webrtc-in-firefox">How can I disable WebRTC in Firefox?</h3><p>You can disable WebRTC in Firefox by accessing the browser's configuration settings or using browser add-ons like "uBlock Origin."</p><h3 id="which-browser-offers-the-webrtc-network-limiter-extension">Which browser offers the "WebRTC Network Limiter" extension?</h3><p>Both Chrome and Opera offer the "WebRTC Network Limiter" extension, providing users with control over WebRTC functionality.</p><h3 id="are-there-browser-extensions-to-enhance-webrtc-control">Are there browser extensions to enhance WebRTC control?</h3><p>Yes, there are browser extensions like "WebRTC Leak Prevent" and "Privacy Badger" that offer enhanced control over WebRTC.</p><h3 id="how-can-i-ensure-my-online-privacy-while-using-webrtc">How can I ensure my online privacy while using WebRTC?</h3><p>To ensure online privacy, you can use browser settings or extensions to control WebRTC and limit its access.</p><h3 id="is-webrtc-essential-for-online-communication">Is WebRTC essential for online communication?</h3><p>WebRTC is a valuable technology for real-time communication in web browsers, but it's essential to manage its settings for privacy and security.</p><h3 id="conclusion">Conclusion</h3><p>WebRTC control is a vital aspect of maintaining your online privacy and security while enjoying the benefits of real-time communication. 
By understanding how to control WebRTC in browsers like Firefox, Chrome, and Opera, along with browser extensions, you can protect your online activities and communicate with confidence.</p><p>Remember that while WebRTC enhances your online communication experience, taking control of it ensures that your personal information remains private. So, make the most of these options to enjoy the best of both worlds - seamless communication and online security.</p>]]></content:encoded></item><item><title><![CDATA[HLS vs WebRTC: Video Streaming Protocol Comparison?]]></title><description><![CDATA[HLS and WebRTC are video streaming protocols. HLS delivers adaptive streaming via HTTP, suitable for on-demand content. WebRTC offers real-time communication for interactive applications, prioritizing low latency.]]></description><link>https://www.videosdk.live/blog/hls-vs-webrtc</link><guid isPermaLink="false">65a52adc6c68429b5fdf0fa7</guid><dc:creator><![CDATA[Chetan Sandanshiv]]></dc:creator><pubDate>Tue, 17 Sep 2024 12:05:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/HLS-vs-WebRTC.png" medium="image"/><content:encoded><![CDATA[<h2 id="what-are-the-video-streaming-protocols">What are the Video Streaming Protocols?</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/HLS-vs-WebRTC.png" alt="HLS vs WebRTC: Video Streaming Protocol Comparison?"/><p>Video streaming protocols are essential for delivering multimedia content over the internet. Common protocols include HTTP Live Streaming (HLS), Dynamic Adaptive Streaming over HTTP (DASH), and <a href="https://www.videosdk.live/blog/what-is-rtmp">Real-Time Messaging Protocol</a> (RTMP). 
These ensure seamless playback by adapting to varying network conditions, enhancing the user's streaming experience.</p><h2 id="what-is-hls-http-live-streaming">What is HLS (HTTP Live Streaming)?</h2><p><a href="https://www.videosdk.live/blog/what-is-http-live-streaming">HLS</a>, or HTTP Live Streaming, is a widely adopted protocol for delivering streaming content over the internet. It breaks down video files into small, easily downloadable segments, providing adaptive streaming for a smoother viewing experience.</p><h3 id="how-does-hls-work">How does HLS Work?</h3><p>HLS divides video content into segments, each with multiple quality options. Clients download the segments based on their network conditions, enabling adaptive streaming and minimizing buffering.</p><p><strong>Key Features and Benefits of HLS:</strong></p><ul><li>Broad compatibility with various devices and browsers.</li><li>Support for adaptive streaming, ensuring optimal playback under changing network conditions.</li><li>Robust error recovery mechanism for uninterrupted streaming.</li></ul><h3 id="use-cases-for-hls">Use Cases for HLS:</h3><p>HLS is commonly used for live streaming events, video-on-demand services, and delivering video content over Content Delivery Networks (CDNs).</p><p><strong>Limitations and Challenges of HLS:</strong></p><ul><li>Increased latency due to segmented content delivery.</li><li>Limited real-time interactivity, making it less suitable for applications requiring low latency.</li></ul><h2 id="what-is-webrtc">What is WebRTC?</h2><p><a href="https://www.videosdk.live/blog/webrtc">WebRTC</a>, or Web Real-Time Communication, is a free, open-source project that provides web browsers and mobile applications with real-time communication via simple application programming interfaces (APIs).</p><h3 id="how-webrtc-enables-real-time-interactions">How WebRTC Enables Real-Time Interactions?</h3><p>WebRTC enables direct peer-to-peer communication between browsers or devices, allowing 
for <a href="https://www.videosdk.live/audio-video-conferencing">real-time audio and video streaming</a> without the need for plugins or additional software.</p><p><strong>Key Features and Benefits of WebRTC:</strong></p><ul><li>Low-latency communication, making it suitable for real-time applications.</li><li>Native browser support, eliminating the need for third-party plugins.</li><li>Encryption and security features for secure communication.</li></ul><h3 id="use-cases-for-webrtc">Use Cases for WebRTC: </h3><p>WebRTC is ideal for applications requiring low latency, such as video conferencing, online gaming, and <a href="https://www.videosdk.live/interactive-live-streaming">interactive live streaming</a>.</p><p><strong>Limitations and Challenges of WebRTC:</strong></p><ul><li>Dependency on browser support.</li><li>Challenges with firewall traversal in certain network environments.</li></ul><h2 id="webrtc-vs-hls-a-comparative-analysis">WebRTC vs HLS: A Comparative Analysis</h2><p><strong>Bandwidth Efficiency:</strong><em> </em>HLS optimizes bandwidth by offering adaptive streaming, adjusting the quality based on network conditions. WebRTC, on the other hand, provides efficient bandwidth utilization through peer-to-peer communication.</p><p><strong>Latency:</strong><em> </em>WebRTC excels in low-latency scenarios, making it suitable for applications requiring real-time interaction. HLS, with its segmented content delivery, may experience higher latency.</p><p><strong>Browser Compatibility: </strong>HLS enjoys broad compatibility across various browsers and devices. WebRTC, while widely supported, may face limitations in some older browsers.</p><p><strong>Adaptive Streaming Capabilities:</strong><em> </em>Both HLS and WebRTC support adaptive streaming, but they implement it differently. 
HLS uses segmented delivery, while WebRTC achieves adaptability through direct communication between peers.</p><p><strong>Security Considerations</strong><em><strong>:</strong> </em>Both protocols prioritize security, with encryption mechanisms in place. However, WebRTC's direct peer-to-peer communication can enhance security by reducing the attack surface.</p><p>Below is the comparison table between HLS (HTTP Live Streaming) and WebRTC (Web Real-Time Communication).</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>Feature</th>
<th>HLS</th>
<th>WebRTC</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Protocol</strong></td>
<td>HTTP</td>
<td>Real-time communication protocol</td>
</tr>
<tr>
<td><strong>Latency</strong></td>
<td>Higher latency (typically 10-30 seconds)</td>
<td>Lower latency (typically 0.5 - 2 seconds)</td>
</tr>
<tr>
<td><strong>Real-time</strong></td>
<td>Not designed for real-time communication</td>
<td>Designed for real-time communication</td>
</tr>
<tr>
<td><strong>Browser Support</strong></td>
<td>Supported by most browsers through HTML5</td>
<td>Requires browser support, but widely adopted</td>
</tr>
<tr>
<td><strong>Adaptability</strong></td>
<td>Easily adaptable to different network conditions</td>
<td>Adaptable to varying network conditions</td>
</tr>
<tr>
<td><strong>Bi-Directional</strong></td>
<td>Unidirectional (server to client)</td>
<td>Bidirectional (media flows both ways between peers)</td>
</tr>
<tr>
<td><strong>Encoding Overhead</strong></td>
<td>May involve additional encoding steps</td>
<td>Streams encoded media directly, without segmenting or repackaging</td>
</tr>
<tr>
<td><strong>Use Cases</strong></td>
<td>Streaming of pre-recorded content, VOD</td>
<td>Real-time communication, live streaming, video conferencing</td>
</tr>
<tr>
<td><strong>Scalability</strong></td>
<td>Scalable for large audiences (CDN support)</td>
<td>Scalable for peer-to-peer and server-based architectures</td>
</tr>
<tr>
<td><strong>Firewall Traversal</strong></td>
<td>Easier to traverse firewalls</td>
<td>May require additional configurations for firewall traversal</td>
</tr>
<tr>
<td><strong>Encryption</strong></td>
<td>Supports encryption through HTTPS</td>
<td>Built-in encryption for data transmission</td>
</tr>
<tr>
<td><strong>Standardization</strong></td>
<td>Standardized by IETF (RFC 8216)</td>
<td>Standardized by W3C and IETF</td>
</tr>
</tbody>
</table>
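
To make the adaptive-bitrate rows above concrete, here is a toy sketch of the per-segment rendition choice an HLS client makes: pick the highest-bitrate variant that fits within measured throughput, with some headroom. The bitrate ladder is a hypothetical example, not taken from any real playlist.

```javascript
// Toy HLS-style rendition picker. A real client reads the ladder from the
// master playlist (EXT-X-STREAM-INF); this ladder is a made-up example.
const renditions = [
  { resolution: "426x240",   bitrate: 400_000 },
  { resolution: "854x480",   bitrate: 1_400_000 },
  { resolution: "1280x720",  bitrate: 2_800_000 },
  { resolution: "1920x1080", bitrate: 5_000_000 },
]; // sorted ascending by bitrate

function pickRendition(measuredBps, ladder = renditions) {
  // Keep ~25% headroom so playback survives small throughput dips.
  const budget = measuredBps * 0.75;
  const fitting = ladder.filter((r) => r.bitrate <= budget);
  // Fall back to the lowest rung when even that does not fit.
  return fitting.length ? fitting[fitting.length - 1] : ladder[0];
}
```

Because the choice is re-evaluated segment by segment, quality steps up or down as network conditions change, which is exactly the adaptability HLS trades latency for.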
<!--kg-card-end: markdown--><h2 id="choosing-the-right-protocol-for-your-use-case">Choosing the Right Protocol for Your Use Case:</h2><p>Factors to Consider When Deciding Between HLS and WebRTC:</p><ul><li>Latency requirements</li><li>Browser and device compatibility</li><li>Interactivity needs</li></ul><p>Use Case Scenarios and Recommended Protocols:</p><ul><li>Real-time communication applications: WebRTC</li><li>High-quality streaming with adaptive bitrate: HLS</li></ul><h3 id="importance-of-scalability-and-ease-of-implementation">Importance of Scalability and Ease of Implementation:</h3><p>Consideration of scalability is crucial for applications expecting varying numbers of users. WebRTC's peer-to-peer architecture can be advantageous for scalability, while HLS may require additional infrastructure for larger audiences.</p><h2 id="videosdk-leveraging-the-power-of-webrtc">VideoSDK: Leveraging the Power of WebRTC:</h2><h3 id="introduction-to-videosdk">Introduction to VideoSDK:</h3><p><a href="https://www.videosdk.live/">VideoSDK</a>, powered by WebRTC, offers real-time audio-video SDKs with complete flexibility, scalability, and control for seamless integration into web and mobile apps.</p><h3 id="how-videosdk-utilizes-webrtc-for-seamless-video-streaming">How VideoSDK Utilizes WebRTC for Seamless Video Streaming:</h3><p>VideoSDK leverages WebRTC's peer-to-peer communication to deliver low-latency and high-quality audio-video experiences.</p><p><strong>Key Features and Benefits of Using VideoSDK:</strong></p><ul><li>Real-time communication capabilities</li><li>Scalability for varying user loads</li><li>Developer-friendly APIs for easy integration</li></ul><p><em>Have questions about integrating HLS and WebRTC? Our team offers expert advice tailored to your unique needs. 
Unlock the full potential—<a href="https://www.videosdk.live/blog/what-is-http-live-streaming?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic">sign up </a>now to access resources and join our <a href="https://discord.com/invite/Qfm8j4YAUJ?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic">developer community</a>. <a href="https://bookings.videosdk.live/#/discovery?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic">Schedule a demo</a> to see features in action and discover how our solutions meet your streaming app needs.</em></p><h2 id="does-videosdk-support-hls-and-webrtc">Does VideoSDK support HLS and WebRTC?</h2><p>Yes, VideoSDK supports HLS for adaptive streaming, delivering high-quality video content, and integrates with WebRTC for real-time communication features like video calls and conferencing.</p><h2 id="does-videosdk-support-hls-for-adaptive-streaming">Does VideoSDK support HLS for adaptive streaming?</h2><p>Yes, VideoSDK is designed to support HLS for adaptive streaming, allowing you to deliver high-quality video content with adaptive bitrate streaming for a seamless viewer experience across different network conditions.</p><h2 id="can-videosdk-leverage-webrtc-for-real-time-communication">Can VideoSDK leverage WebRTC for real-time communication?</h2><p>Absolutely, VideoSDK integrates with WebRTC to enable real-time communication features such as video calls, conferencing, and collaboration within your application or platform.</p><h2 id="which-platforms-does-videosdk-support-for-hls-integration">Which platforms does VideoSDK support for HLS integration?</h2><p>VideoSDK is designed to be versatile, supporting integration with various platforms, including web applications, mobile apps (iOS and Android), and desktop applications. 
Go to <a href="https://docs.videosdk.live/?utm_source=blog&amp;utm_medium=google&amp;utm_campaign=organic">VideoSDK documentation</a> for specific platform compatibility details.</p><h2 id="what-is-the-main-difference-between-the-hls-and-webrtc">What is the Main Difference Between the HLS and WebRTC? </h2><p>The main difference between HLS and WebRTC lies in their purpose. HLS is primarily used for adaptive streaming, delivering pre-recorded content, while WebRTC facilitates real-time communication for live interactions and collaboration.</p>]]></content:encoded></item><item><title><![CDATA[Top 10 LiveKit Alternatives in 2026]]></title><description><![CDATA[Discover an innovative and game-changing alternative to LiveKit that will revolutionize your online experience and propel you towards unparalleled success. Seize this golden opportunity to unlock limitless potential and embark on a remarkable journey to reach new heights.]]></description><link>https://www.videosdk.live/blog/livekit-alternative</link><guid isPermaLink="false">64afde9a9eadee0b8b9e6d95</guid><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 17 Sep 2024 10:23:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/LiveKit-alternative.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/LiveKit-alternative.jpg" alt="Top 10 LiveKit Alternatives in 2026"/><p>If you're seeking a seamless integration of real-time video into your application and considering an <a href="https://www.videosdk.live/livekit-vs-videosdk" rel="noreferrer"><strong>alternative to LiveKit</strong></a>, you've come to the right place! While LiveKit is well-known, there are numerous unexplored opportunities beyond their platform waiting to be discovered. Stay tuned to unveil what you may have been missing out on, particularly if you're already using LiveKit. 
Get ready to explore new possibilities and expand your horizons!</p><p>LiveKit is a powerful platform offering robust video and audio solutions. LiveKit Meet provides seamless video conferencing capabilities, ensuring high-quality communication. With flexible LiveKit pricing, you can choose a plan that fits your needs. The LiveKit API allows developers to integrate video and audio features effortlessly into their applications. Additionally, the LiveKit SDK offers extensive tools and documentation, making it easy to build customized real-time communication solutions. Explore LiveKit for scalable and reliable video technology.</p><h2 id="need-of-a-livekit-alternative">Need of a LiveKit alternative</h2>
<p>There are several reasons to consider exploring alternatives to LiveKit. One of the critical issues is that audio streaming can produce choppy playback on Android devices. Other common reasons include the need for features that aren't available in LiveKit, concerns about scalability or customization limitations, incompatibility with existing tools, insufficient support or documentation, and cost. Evaluating these factors can help you find an alternative that better meets your specific requirements and preferences. </p><p>The <strong>top 10 LiveKit alternatives</strong> are VideoSDK, Twilio, MirrorFly, Agora, Jitsi, Vonage, AWS Chime, EnableX, Whereby, and SignalWire.</p><blockquote>
<h2 id="top-10-livekit-alternatives-for-2026">Top 10 LiveKit Alternatives for 2026</h2>
<ul>
<li>VideoSDK</li>
<li>Twilio Video</li>
<li>MirrorFly</li>
<li>Agora</li>
<li>Jitsi</li>
<li>Vonage</li>
<li>AWS Chime SDK</li>
<li>Enablex</li>
<li>Whereby</li>
<li>SignalWire</li>
</ul>
</blockquote>
<h2 id="1-videosdk-your-premier-livekit-alternative">1. VideoSDK: Your Premier LiveKit Alternative</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-SDK-for-Real-time-Communication-Live-Streaming-Video-API-8.jpeg" class="kg-image" alt="Top 10 LiveKit Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Discover the remarkable capabilities of <a href="https://www.videosdk.live">VideoSDK</a>, an API designed to seamlessly incorporate robust audio and video functionalities into your applications. Effortlessly enhance your app by providing live audio and video experiences on multiple platforms with minimal implementation effort.</p><h3 id="key-benefits-of-choosing-videosdk">Key Benefits of Choosing VideoSDK</h3>
<ul><li>Experience the seamless integration and efficiency of VideoSDK, allowing you to focus more on developing innovative features that enhance user retention. Say goodbye to complex integration processes and unlock a world of possibilities.</li><li>Embrace the benefits of VideoSDK, including its outstanding scalability, adaptive bitrate technology, extensive customization options, exceptional recording quality, comprehensive analytics, cross-platform streaming, effortless scaling, and broad platform support. </li><li>Whether you're on mobile (Flutter, Android, iOS), web (JavaScript Core SDK + UI Kit), or desktop (Flutter Desktop), Video SDK empowers you to effortlessly create captivating video experiences.</li></ul><h3 id="competitive-pricing-of-videosdk">Competitive Pricing of VideoSDK</h3>
<ul><li>Discover incredible value with VideoSDK! Take advantage of the generous offer of <a href="https://www.videosdk.live/pricing">$20 free credit</a> and enjoy <a href="https://www.videosdk.live/pricing#pricingCalc">flexible pricing</a> options for both video and audio calls. </li><li><strong>Video calls</strong> start at <strong>just $0.003</strong> per participant per minute, while <strong>audio calls</strong> begin at a minimal cost of <strong>$0.0006</strong>.</li><li><strong>Cloud recordings</strong> are available at an affordable rate of <strong>$0.015</strong> per minute, and <strong>RTMP output</strong> comes at a competitive price of <strong>$0.030</strong> per minute. </li><li>Additionally, benefit from <strong>free 24/7 customer support</strong>, ensuring assistance whenever you need it. Elevate your video capabilities today and embark on a journey of excellence!</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/livekit-vs-videosdk"><strong>LiveKit and VideoSDK</strong></a><strong>.</strong></blockquote>
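<p>As a quick sanity check on the per-minute rates quoted above (video $0.003 and audio $0.0006 per participant-minute, cloud recording $0.015 per minute), a rough monthly bill can be estimated; the usage figures in the example are hypothetical:</p>

```javascript
// Rough monthly cost estimate from the per-minute rates quoted above.
// Usage numbers in the example comment are hypothetical.
const RATES = { video: 0.003, audio: 0.0006, recording: 0.015 }; // USD per minute

function estimateMonthlyCost({ videoParticipantMinutes = 0,
                               audioParticipantMinutes = 0,
                               recordingMinutes = 0 }) {
  return videoParticipantMinutes * RATES.video
       + audioParticipantMinutes * RATES.audio
       + recordingMinutes * RATES.recording;
}

// Example: 100 one-hour calls with 4 participants, all recorded:
// 100 * 60 * 4 = 24,000 video participant-minutes plus 6,000 recorded minutes.
```

<p>Note that participant-minutes scale with headcount, so a single hour-long call with ten participants bills 600 video minutes, not 60.</p>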
<!--kg-card-begin: html-->
<!DOCTYPE html>
<html lang="en">

<head>
	<meta charset="UTF-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Your Page Title</title>
	<!-- Include Tailwind CSS -->
	<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</head>

<body>
	<div class="relative w-full overflow-hidden rounded-2xl bg-gradient-to-b from-pink-700 to-purple-900 p-4 text-center shadow-xl">
		<h3 class="mx-auto text-3xl font-bold tracking-tight text-white sm:text-2xl" style="margin-top: 3px; margin-bottom: 12px;">
			Schedule a Demo with Our Live Video Expert!
		</h3>
		<p class="mx-auto mt-3 max-w-xl text-sm text-gray-400">
			Discover how VideoSDK can help you build a cutting-edge real-time video app.
			<span class="font-semibold text-lato"/>
		</p>
		<div class="mt-4 flex items-center justify-center">
			<a href="https://www.videosdk.live/contact" class="rounded-md bg-white px-8 py-3 text-sm font-semibold text-gray-900 shadow-sm hover:bg-gray-100 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:outline-white" target="_blank" style="text-decoration: none;color: black;" data-faitracker-click-bind="true">
				Book a call
			</a>	
		</div>
		
	</div>
</body>

</html>
<!--kg-card-end: html-->
<h2 id="2-twilio-video-a-versatile-sdk-for-live-communication">2. Twilio Video: A Versatile SDK for Live Communication</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Communication-APIs-for-SMS-Voice-Video-Authentication_twilio-7.jpeg" class="kg-image" alt="Top 10 LiveKit Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Twilio is a top-tier SDK solution that enables businesses to seamlessly integrate live video into their mobile and web applications. The key strength of Twilio lies in its versatility, allowing you to build an app from scratch or enhance your existing solutions with powerful communication features. Whether you're starting fresh or expanding your app's capabilities, Twilio offers a reliable and comprehensive solution for effortlessly integrating live video into your applications.</p><h3 id="unique-selling-points-of-twilio-video">Unique Selling Points of Twilio Video</h3>
<ul><li>Twilio offers web, iOS, and Android SDKs for seamless integration of live video into applications.</li><li>Utilizing multiple audio and video inputs may require manual configuration and additional code.</li><li>Call insights provided by Twilio can track and analyze errors, but implementation requires extra code.</li><li>As usage grows, pricing can become a concern due to the lack of a built-in tiering system in the dashboard.</li><li>Twilio supports up to 50 hosts and participants in a call.</li><li>There are no available plugins for easy product development with Twilio.</li><li>The level of customization provided by the Twilio Video SDK may not meet the requirements of all developers, necessitating additional code development.</li></ul><h3 id="pricing-for-twilio">Pricing for Twilio</h3>
<ul><li><a href="https://www.videosdk.live/blog/twilio-video-alternative"><strong>Twilio</strong></a>'s <a href="https://www.twilio.com/en-us/video/pricing">pricing</a> begins at <strong>$4</strong> per 1,000 minutes. <strong>Recordings</strong> are charged at <strong>$0.004</strong> per participant minute, while <strong>recording compositions</strong> cost <strong>$0.01</strong> per composed minute.</li><li><strong>Storage</strong> is priced at <strong>$0.00167</strong> per gigabyte per day after the initial 10 gigabytes.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/twilio-vs-livekit"><strong>LiveKit and Twilio</strong></a><strong>.</strong></blockquote><h2 id="3-mirrorfly-tailored-for-enterprise-communication-needs">3. MirrorFly: Tailored for Enterprise Communication Needs</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Live-Video-Call-API-Best-Video-Chat-SDK-for-Android-iOS-mirrorfly-7.jpeg" class="kg-image" alt="Top 10 LiveKit Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>MirrorFly is an outstanding in-app communication suite designed specifically for enterprises. It offers a wide range of powerful APIs and SDKs that deliver exceptional chat and calling experiences. With over 150 features for chat, voice, and video calling, this cloud-based solution seamlessly integrates to create a robust communication platform.</p><h3 id="challenges-and-pricing-model-of-mirrorfly">Challenges and Pricing Model of MirrorFly</h3>
<ul><li>MirrorFly's customization options may be limited, hindering the ability to tailor the platform to specific branding and user experience needs. This can restrict the uniqueness and personalization of the communication features.</li><li>Additionally, scaling MirrorFly for larger applications or handling a high volume of users can pose challenges, potentially impacting performance and stability under significant traffic or complex use cases. </li><li>Users have reported mixed experiences with MirrorFly's technical support, with some encountering delays or difficulties in issue resolution. </li><li>The pricing structure may not be suitable for all budgets or use cases, as costs can vary based on desired features and scalability requirements. </li><li>Integrating MirrorFly into existing applications or workflows may also require significant effort and technical expertise, as comprehensive documentation and robust developer resources may be lacking.</li></ul><h3 id="mirrorfly-pricing">MirrorFly pricing</h3>
<ul><li>MirrorFly's pricing starts at <strong>$299</strong> per month, positioning it as a higher-cost option to take into account.</li></ul><h2 id="4-agora-enhancing-real-time-interactions-with-rich-features">4. Agora: Enhancing Real-Time Interactions with Rich Features</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Agora-Real-Time-Voice-and-Video-Engagement-7.jpeg" class="kg-image" alt="Top 10 LiveKit Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Agora's <a href="https://www.videosdk.live/audio-video-conferencing" rel="noreferrer">video calling SDK</a> provides an extensive range of features, such as embedded voice and video chat, real-time recording, live streaming, and instant messaging. These features equip developers with the necessary tools to create engaging and immersive live experiences within their applications.</p><h3 id="benefits-and-pricing-structure-of-agora">Benefits and Pricing Structure of Agora</h3>
<ul><li>Agora's video SDK offers a comprehensive range of features, including embedded voice and video chat, real-time recording, live streaming, and instant messaging.</li><li>Additional add-ons like AR facial masks, sound effects, whiteboards, and more are available at an extra cost, enhancing the creative possibilities.</li><li>Agora's SD-RTN ensures extensive global coverage, facilitating connections from over 200 countries and regions with ultra-low latency streaming capabilities.</li><li>The pricing structure can be complex and may not be suitable for businesses with limited budgets, requiring careful evaluation.</li><li>Users seeking hands-on support may experience delays as Agora's support team may require additional time to provide assistance.</li></ul><h3 id="agora-pricing">Agora pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/agora-alternative"><strong>Agora</strong></a> provides Premium and Standard <a href="https://www.agora.io/en/pricing/">pricing</a> options, where the usage duration for audio and video is calculated monthly.</li><li>The pricing is categorized into four types based on video resolution, offering flexibility and cost-effectiveness.</li><li>The pricing structure includes <strong>Audio</strong> at <strong>$0.99</strong> per 1,000 participant minutes, <strong>HD Video</strong> at <strong>$3.99</strong> per 1,000 participant minutes, and <strong>Full HD</strong> Video at <strong>$8.99</strong> per 1,000 participant minutes.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/agora-vs-livekit"><strong>LiveKit and Agora</strong></a><strong>.</strong></blockquote><h2 id="5-jitsi-open-source-flexibility-for-video-conferencing">5. Jitsi: Open-Source Flexibility for Video Conferencing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Free-Video-Conferencing-Software-for-Web-Mobile-Jitsi-8.jpeg" class="kg-image" alt="Top 10 LiveKit Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Jitsi is a collection of several open-source projects designed for video conferencing. Being open source, it offers the flexibility to customize and utilize it according to your specific requirements.</p><p>The key components of Jitsi include Jitsi Meet, Jitsi Videobridge, Jibri, and Jigasi, each serving a different function within the Jitsi ecosystem.</p><h3 id="key-points-about-jitsi">Key points about Jitsi</h3>
<ul><li>Jitsi is an open-source and free platform that allows users to utilize it in any way they want.</li><li>Jitsi Meet, one of its featured projects, offers various features such as text sharing via Etherpad, room locking, text chatting (web only), raising hands, YouTube video access during calls, audio-only calls, and third-party app integrations.</li><li>However, Jitsi Meet alone does not provide essential collaborative features like screen sharing, recording, or telephone dial-in to a conference. To access those features, additional setup of projects like Jibri and Jigasi is required, which entails more time, resources, and coding effort.</li><li>This additional setup makes Jitsi less suitable for users seeking a low-code option.</li><li>While Jitsi provides end-to-end encryption for video calls, it does not cover chat or polls, so it may not be the best choice for users prioritizing robust security.</li><li>Jitsi can consume a significant amount of bandwidth, considering the functioning of Jitsi Videobridge.</li><li>For large organizations requiring an SDK for frequent long video sessions with a large number of participants, Jitsi might feel underwhelming and may not meet their specific needs.</li></ul><h3 id="jitsi-pricing">Jitsi pricing</h3>
<ul><li><a href="https://www.videosdk.live/blog/jitsi-alternative"><strong>Jitsi</strong></a> is offered <strong>free of charge</strong>, meaning you don't need to pay for any of its components. </li><li>However, it's important to note that there is no dedicated technical support available. In case you encounter any issues or require assistance, you may find help from community members who contribute to the Jitsi project.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/jitsi-vs-livekit"><strong>LiveKit and Jitsi</strong></a><strong>.</strong></blockquote><h2 id="6-vonage-reliable-communication-solutions-with-extensive-features">6. Vonage: Reliable Communication Solutions with Extensive Features</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-API-Fully-Programmable-and-Customizable-Vonage-6.jpeg" class="kg-image" alt="Top 10 LiveKit Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Despite being acquired by Vonage and rebranded as the "Vonage API," TokBox is still commonly recognized by its original name. TokBox's SDKs deliver dependable point-to-point communication, making it a fitting option for establishing proof of concepts during hackathons or meeting investor deadlines. The SDKs provide developers with the essential tools to build secure and smooth communication experiences within their applications.</p><h3 id="key-points-about-vonage">Key points about Vonage</h3>
<ul><li>Vonage enables developers to create custom audio/video streams with effects, filters, and AR/VR capabilities on mobile devices.</li><li>It supports a wide range of use cases, including 1:1 video, group video chat, and large-scale broadcast sessions.</li><li>Participants in a call can share screens, exchange messages via chat, and send data during the call.</li><li>One challenge with Vonage is the scalability costs, as the price per stream per minute increases with the growth of the user base.</li><li>Additional features like recording and interactive broadcasts come at an extra cost.</li><li>Once the number of connections reaches 2,000, the platform switches to CDN delivery, resulting in higher latency.</li><li>Real-time streaming at scale can be challenging, as anything beyond 3,000 viewers requires switching to HLS, which introduces significant latency.</li></ul><h3 id="vonage-pricing">Vonage pricing</h3>
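<p>Vonage-style billing is dynamic: the participant count is sampled every minute and each participant minute is charged. A minimal sketch of that model, with names of our own choosing and the per-minute rate taken from the figures quoted in this section (confirm on Vonage's pricing page):</p>

```javascript
// Illustrative per-participant-per-minute billing. The participant
// count is sampled once per minute; every participant minute beyond
// the plan's free allowance is billed at a flat rate.
const RATE_PER_PARTICIPANT_MINUTE = 0.00395; // quoted rate; verify with vendor

function sessionCost(participantsPerMinute, freeMinutes = 0) {
  // Sum the sampled participant counts to get total participant minutes.
  const totalParticipantMinutes = participantsPerMinute.reduce(
    (sum, count) => sum + count,
    0
  );
  const billable = Math.max(0, totalParticipantMinutes - freeMinutes);
  return billable * RATE_PER_PARTICIPANT_MINUTE;
}

// A 3-minute call where a third participant joins for the final minute
// accrues 2 + 2 + 3 = 7 participant minutes.
const cost = sessionCost([2, 2, 3]);
```

<p>Passing the plan's monthly free allowance as <code>freeMinutes</code> shows why small workloads can stay within the included 2,000 minutes while costs grow linearly once the allowance is exhausted.</p>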
<ul><li><a href="https://www.videosdk.live/blog/vonage-alternative"><strong>Vonage</strong></a> implements a usage-based <a href="https://www.vonage.com/communications-apis/video/pricing/">pricing</a> model for their video sessions, with costs based on the number of participants and dynamically calculated every minute.</li><li>Their <strong>pricing plans</strong> commence at <strong>$9.99</strong> per month and include a free allowance of <strong>2,000</strong> minutes per month for all plans.</li><li>Once the free allowance is utilized, users are billed at a rate of <strong>$0.00395</strong> per participant per minute. <strong>Recording</strong> services are available starting at <strong>$0.010</strong> per minute, while <strong>HLS streaming</strong> is priced at <strong>$0.003</strong> per minute.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/vonage-vs-livekit"><strong>LiveKit and Vonage</strong></a><strong>.</strong></blockquote><h2 id="7-aws-chime-sdk-high-quality-video-meetings">7. AWS Chime SDK: High-Quality Video Meetings</h2>
<p>The Amazon Chime SDK serves as the underlying technology of Amazon Chime, operating independently without its user interface or outer shell.</p><h3 id="key-points-about-aws-chime-sdk">Key points about AWS Chime SDK</h3>
<ul><li>The <a href="https://www.videosdk.live/blog/amazon-chime-sdk-alternative"><strong>Amazon Chime SDK</strong></a> supports video meetings with up to 25 participants (or 50 for mobile users).</li><li>Simulcast technology helps maintain consistent video quality across various devices and networks.</li><li>All calls, videos, and chats are encrypted to ensure enhanced security.</li><li>However, it lacks certain features like polling, auto-sync with Google Calendar, and background blur effects.</li><li>Compatibility issues have been reported in Linux environments and with participants using the Safari browser.</li><li>Customer support experiences can vary, with inconsistent query resolution times depending on the support agent.</li></ul><h3 id="aws-chime-pricing">AWS Chime pricing</h3>
<ul><li>The <strong>free basic plan</strong> allows users to have <strong>one-on-one audio/video calls</strong> and <strong>group chats</strong>.</li><li>The <strong>Plus plan</strong>, <a href="https://aws.amazon.com/chime/pricing/">priced</a> at <strong>$2.50</strong> per monthly user, provides additional features including <strong>screen sharing</strong>, <strong>remote desktop control</strong>, <strong>1 GB</strong> <strong>of message history</strong> per user, and <strong>Active Directory integration</strong>.</li><li>The <strong>Pro plan</strong>, priced at <strong>$15</strong> per user per month, includes all the features of the Plus plan and allows for <strong>meetings</strong> with <strong>three</strong> or more participants.</li></ul><blockquote><strong>Here's a detailed comparison of </strong><a href="https://www.videosdk.live/amazon-chime-sdk-vs-livekit"><strong>LiveKit and AWS Chime</strong></a><strong>.</strong></blockquote><h2 id="8-enablex-comprehensive-sdk-for-advanced-video-interaction">8. EnableX: Comprehensive SDK for Advanced Video Interaction</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Call-API-Video-Chat-API-Voice-API-Video-Conferencing_enebleX-8.jpeg" class="kg-image" alt="Top 10 LiveKit Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>The EnableX SDK provides a diverse set of capabilities, encompassing video and audio calling, along with collaborative features like a whiteboard, screen sharing, annotation, recording, host control, and chat. By integrating this SDK into your application, you can effortlessly incorporate these functionalities. The SDK includes a video builder tool that enables the creation of customized video-calling solutions tailored to your application's needs. It offers flexibility in personalizing live video streams with a custom user interface, selecting suitable hosting options, integrating billing functionality, and implementing other essential features that cater to your specific requirements.</p><h3 id="key-points-about-enablex">Key points about EnableX</h3>
<ul><li>EnableX offers a self-service portal that includes reporting capabilities and live analytics, allowing users to track quality and facilitate online payments.</li><li>The SDK supports JavaScript, PHP, and Python programming languages, providing flexibility for developers.</li><li>Users have the option to stream live content directly from their app or website, as well as on platforms like YouTube and Facebook.</li><li>However, it's important to note that the support team's response time may take up to 72 hours, which could be a potential drawback for users seeking timely assistance.</li></ul><h3 id="enablex-pricing">EnableX pricing</h3>
<ul><li>EnableX <a href="https://www.enablex.io/cpaas/pricing/our-pricing">pricing</a> begins at <strong>$0.004</strong> per minute per participant for rooms accommodating <strong>up to</strong> <strong>50</strong> <strong>people</strong>. For larger meetings or events, custom pricing options can be obtained through their sales team.</li><li><strong>Recording</strong> services are available at a rate of <strong>$0.010</strong> per minute per participant.</li><li><strong>Transcoding</strong> of video into a different format can be done at a rate of <strong>$0.010</strong> per minute.</li><li><strong>Additional storage</strong> can be acquired at a rate of <strong>$0.05</strong> per gigabyte per month, while <strong>RTMP streaming</strong> is priced at <strong>$0.10</strong> per minute.</li></ul><h2 id="9-whereby-simple-and-efficient-video-conferencing">9. Whereby: Simple and Efficient Video Conferencing</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Video-Calling-API-for-Web-and-App-Developers-Whereby-8.jpeg" class="kg-image" alt="Top 10 LiveKit Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>Whereby is a user-friendly video conferencing platform specifically designed for small to medium-sized meetings. It offers a straightforward experience, but it may not be the most suitable choice for larger businesses or those in need of advanced features.</p><h3 id="key-points-about-whereby">Key points about Whereby</h3>
<ul><li>Whereby offers basic customization options for the video interface, although the choices are limited and do not support a fully customized experience.</li><li>Video calls can be easily embedded directly into websites, mobile apps, and web products, eliminating the need for external links or additional apps.</li><li>While Whereby provides a seamless video conferencing experience, it may have fewer advanced features compared to other tools. The maximum meeting capacity is limited to 50 participants.</li><li>Screen sharing for mobile users and customization options for the host interface may be restricted.</li><li>Whereby does not offer a virtual background feature, and some users have reported issues with the mobile app, which can impact the overall user experience.</li></ul><h3 id="whereby-pricing">Whereby pricing</h3>
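<p>Whereby's plan combines a flat monthly fee, an included minute allowance, and a per-minute overage charge. A small illustrative sketch of that billing shape, using the figures quoted in this section (names are our own; confirm current prices on Whereby's pricing page):</p>

```javascript
// Illustrative monthly bill for a flat-fee plan with included minutes
// and a per-minute overage charge. Figures mirror the quoted $6.99
// plan with 2,000 included minutes and $0.004/min overage.
const PLAN = { base: 6.99, includedMinutes: 2000, overageRate: 0.004 };

function monthlyBill(userMinutes, plan = PLAN) {
  // Only minutes beyond the allowance incur the overage rate.
  const overageMinutes = Math.max(0, userMinutes - plan.includedMinutes);
  return plan.base + overageMinutes * plan.overageRate;
}

const lightMonth = monthlyBill(1500); // within the allowance: base fee only
const heavyMonth = monthlyBill(3000); // 1,000 overage minutes on top
```

<p>This makes the break-even point easy to reason about: the flat fee dominates until usage passes the included allowance, after which the bill grows linearly.</p>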
<ul><li>Whereby provides a <a href="https://whereby.com/information/pricing"><strong>pricing</strong></a><strong> model</strong> starting at <strong>$6.99</strong> per month, which includes an allocation of up to <strong>2,000</strong> user minutes that renew monthly.</li><li>After the allocated minutes are used up, an additional charge of <strong>$0.004</strong> per minute applies.</li><li><strong>Cloud recording</strong> and <strong>live streaming</strong> options are available at a rate of <strong>$0.01</strong> per minute.</li><li>Email and chat support are provided free of charge to all users, ensuring accessible assistance.</li><li>Paid support plans offer additional features such as technical onboarding, customer success management, and HIPAA compliance.</li></ul><h2 id="10-signalwire-streamlined-video-integration-for-developers">10. SignalWire: Streamlined Video Integration for Developers</h2>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Building-The-Software-Defined-Telecom-Network-SignalWire-7.jpeg" class="kg-image" alt="Top 10 LiveKit Alternatives in 2026" loading="lazy" width="1920" height="967"/></figure><p>SignalWire is a platform that leverages APIs to enable developers to easily integrate live and on-demand video experiences into their applications. Its primary goal is to streamline the tasks of video encoding, delivery, and renditions, ensuring a smooth and uninterrupted video streaming experience for users.</p><h3 id="overview-of-signalwire">Overview of SignalWire</h3>
<ul><li>SignalWire provides an SDK that facilitates the integration of real-time video and live streams into web, iOS, and Android applications. </li><li>With the SDK, developers can enable video calls in a real-time WebRTC environment, accommodating up to 100 participants.</li><li>However, it's worth noting that the SDK does not offer built-in support for managing disruptions or user publish-subscribe logic. Developers will need to implement these functionalities separately.</li></ul><h3 id="signalwire-pricing">SignalWire pricing</h3>
<ul><li>SignalWire implements a <a href="https://signalwire.com/pricing/video">pricing</a> model based on per-minute usage. For <strong>HD video</strong> <strong>calls</strong>, the pricing is <strong>$0.0060</strong> per minute, while <strong>Full HD video calls</strong> are priced at <strong>$0.012</strong> per minute. The actual cost may vary depending on the desired video quality for your application.</li><li>Additionally, SignalWire offers features such as <strong>recording</strong>, which is available at a rate of <strong>$0.0045</strong> per minute. This allows you to capture and store video content for future use. </li><li>The platform also provides <strong>streaming</strong> capabilities priced at <strong>$0.10</strong> per minute, enabling real-time broadcasting of your video content.</li></ul><h2 id="certainly">Why VideoSDK?</h2>
<p><a href="https://www.videosdk.live">VideoSDK</a> is an exceptional software development kit (SDK) that prioritizes fast and seamless integration. It empowers developers with a low-code solution to quickly construct live video experiences within their applications. With VideoSDK, custom video conferencing solutions can be created and deployed in less than 10 minutes, significantly reducing the time and effort required for integration. Unlike other SDKs, VideoSDK offers a streamlined process that simplifies the creation and embedding of live video experiences, facilitating effortless real-time connections, communication, and collaboration.</p><h2 id="still-skeptical">Still skeptical?</h2>
<p>Explore the comprehensive <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/quick-start">Quickstart guide</a> of VideoSDK and uncover its limitless possibilities. Immerse yourself in its potential by delving into the specially designed <a href="https://docs.videosdk.live/code-sample">sample app</a> that demonstrates the power of VideoSDK. Sign up now and embark on your integration journey, seizing the opportunity to claim your <a href="https://www.videosdk.live/pricing">$20 free credit</a> and unlock the full potential of VideoSDK. Rest assured, our dedicated team is just a click away, ready to assist you whenever you need support. Prepare to unleash your creativity and showcase the extraordinary experiences you can create with VideoSDK. Let the world witness your creations!</p><h2 id="faqs">FAQs</h2>
<p><strong>1. What is LiveKit?</strong></p><p>LiveKit is a robust communication platform facilitating seamless real-time interactions. It offers a versatile range of features, including video and audio calls, secure data handling, integration capabilities, and customizable interfaces, making it an ideal solution for diverse communication needs across businesses and industries.</p><p><strong>2. Is there a free trial option for LiveKit?</strong></p><p>Yes, LiveKit offers a free trial option, enabling users to experience its features and functionality before making a commitment. This allows potential users to explore the platform's capabilities and assess its suitability for their specific needs.</p><p><strong>3. Does LiveKit offer recording and playback features for meetings?</strong></p><p>Yes, LiveKit provides recording and playback features for meetings, allowing users to capture valuable discussions. However, it's essential to note that this feature may incur additional costs, and pricing details should be considered based on usage requirements.</p><p><strong>4. What are the top competitors and alternatives of LiveKit?</strong></p><p>Top alternatives and competitors to LiveKit include <a href="https://www.videosdk.live/livekit-vs-videosdk">VideoSDK</a>, <a href="https://www.videosdk.live/agora-vs-livekit">Agora</a>, <a href="https://www.videosdk.live/twilio-vs-livekit">Twilio Video</a>, <a href="https://www.videosdk.live/daily-vs-livekit">Daily.co</a>, and <a href="https://www.videosdk.live/vonage-vs-livekit">TokBox</a>. 
Each offers real-time video and audio solutions, with varying features and pricing to cater to different user needs.</p>]]></content:encoded></item><item><title><![CDATA[Product Updates August 2024]]></title><description><![CDATA[Explore and stay informed about VideoSDK.live's new feature releases, product updates, and improvements from August 2024 on our latest blog.]]></description><link>https://www.videosdk.live/blog/product-update-august-2024</link><guid isPermaLink="false">66dfe31b20fab018df1100cf</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 11 Sep 2024 12:10:47 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/09/August-2024.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/09/August-2024.png" alt="Product Updates August 2024"/><p>Hello, everyone! What's going on?</p><p>This month, we're excited to share a range of updates designed to simplify your development experience. We've rolled out noteworthy features and fixed bugs across our SDKs. Let's dive into what's new:</p><h3 id="%F0%9F%8E%A5-custom-video-processor-in-android-sdk">🎥 <strong>Custom Video Processor</strong> in Android SDK</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/09/Custom-Video-Processor-min.png" class="kg-image" alt="Product Updates August 2024" loading="lazy" width="1280" height="720"/></figure><p>We've introduced a Custom Video Processor in Android, which gives you the flexibility to modify raw video frames and add real-time video effects. It lets you apply any kind of filter, including virtual backgrounds, blur, or face emoji, to create unique and engaging video experiences in your Android applications.</p><p>
<a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/video-processor/overview">Documentation</a></p><h3 id="%F0%9F%94%94-participant-join-leave-notification-alerts-in-prebuilt-sdk">🔔 Participant Join &amp; Leave Notification Alerts in Prebuilt SDK</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/09/Join-Left-meeting-min.png" class="kg-image" alt="Product Updates August 2024" loading="lazy" width="1280" height="720"/></figure><p>A new <code>participantNotificationAlertsEnabled</code> parameter has been added: enable it to receive notifications when participants join or leave a meeting.</p><p><a href="https://docs.videosdk.live/prebuilt/api/sdk-reference/parameters/basic-parameters#participantnotificationalertsenabled">Documentation</a></p><h3 id="%F0%9F%93%8A-view-%E2%80%9Cuser-option-selection%E2%80%9D-in-the-poll-for-prebuilt-sdk">📊 View “User Option Selection” in the Poll for Prebuilt SDK</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/09/user-option-selection-min.png" class="kg-image" alt="Product Updates August 2024" loading="lazy" width="1280" height="720"/></figure><p>The Poll List tab panel now includes a <code>SHOW RESULTS</code> button that displays the options chosen by the participants.</p><h2 id="%F0%9F%98%84-meme-of-the-month">😄 Meme of the month!</h2><p>No one can kick out developers until the real OG comes!</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/09/s33aisnoo0md11.jpg" class="kg-image" alt="Product Updates August 2024" loading="lazy" width="562" height="574"/></figure><h2 id="%F0%9F%90%9E-bug-fixed">🐞 
Bug Fixed</h2><ul><li>We fixed the video rotation issue in the JS, React, and React Native SDKs for the Mozilla Firefox browser.</li><li>We also fixed the video status issue when removing an external camera in the JS and React SDKs.</li></ul><p>These bug fixes were requested by our community and bring performance improvements.</p><p>We're constantly working to make VideoSDK.live the most developer-friendly real-time communication solution. These updates are aimed at improving performance, expanding capabilities, and making your development process smoother.</p><p>As always, we'd love to hear your feedback! If you have any questions, suggestions, or issues, please don't hesitate to contact our support team.</p><p>➡️ New to VideoSDK? <a href="https://www.videosdk.live/signup">Sign up now</a> and get <em><strong>10,000 free minutes</strong></em> to start building amazing audio &amp; video experiences!</p><p>Happy coding!</p><p>VideoSDK.live Team</p>]]></content:encoded></item><item><title><![CDATA[2023 Rewind: वसुधैव कुटुम्बकम्]]></title><description><![CDATA[At VideoSDK, we achieved groundbreaking milestones in 2023, including the launch of globally low-latency infrastructure, the deployment of new servers, securing seed funding, trending on Product Hunt, and completing numerous integrations. 
Just take a look at this.]]></description><link>https://www.videosdk.live/blog/2023-year-in-review</link><guid isPermaLink="false">659c28fe6c68429b5fdedeb4</guid><category><![CDATA[year-in-review]]></category><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Mon, 02 Sep 2024 03:52:00 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/01/Year-in-review-2023-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/01/Year-in-review-2023-1.png" alt="2023 Rewind: वसुधैव कुटुम्बकम्"/><p>At VideoSDK, we believe the world is one family, and our hearts swell with pride as we look back on a groundbreaking 2023, marked by the launch of globally low-latency, real-time audio-video across interconnected networks.</p><p>VideoSDK serves as the connective bridge that collapses distances, making the globe feel like a shared living room where everyone is welcomed, heard, and seen. </p><p>We're proud to embrace the philosophy of "<strong>Vasudhaiva Kutumbakam</strong>" with our motto, "<strong>One Earth, One Family, One Future</strong>," by connecting over 90% of the world's population without adding any delays to your calls.</p><p>Let's take a moment to explore the milestones, triumphs, and lessons that shaped VideoSDK in 2023. Ready to unfold the pages? Let's get started!</p><h2 id="videosdks-global-impact-in-2023">VideoSDK's Global Impact in 2023</h2><h3 id="%F0%9F%8C%8E-131-countries-served-breaking-boundaries-connecting-hearts">🌎 131 Countries Served: Breaking Boundaries, Connecting Hearts</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/Year-in-review-2023-1-1.png" class="kg-image" alt="2023 Rewind: वसुधैव कुटुम्बकम्" loading="lazy" width="2560" height="1440"/></figure><p>We redefined global communication dynamics across 131 countries, witnessing a staggering 180% increase in participant engagement, with an average call joining time of less than 0.7 seconds, ensuring conversations ignite instantly.</p><p><strong>The global latency, a mere blink at less than 80ms</strong>, creates a realm of instant, seamless connections. To further elevate user experiences, we strategically deployed new servers across the globe, solidifying our commitment to a seamless and reliable global network.</p><p>It's not just a network; it's an experience.</p><h3 id="%F0%9F%94%8B-rock-solid-reliability-with-9999-uptime">🔋 Rock Solid Reliability with 99.99% Uptime</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/Year-in-review-2023-2.png" class="kg-image" alt="2023 Rewind: वसुधैव कुटुम्बकम्" loading="lazy" width="2560" height="1440"/></figure><p>VideoSDK became the bedrock of reliability, fuelling dreams globally. Countless app launches, extensive API calls, and a plethora of SDK updates, supported by over 99.99% server uptime, underscore our commitment to unwavering stability.</p><p>Joining us from 140+ countries, 10,000+ developers contributed to a 200% surge in minutes usage, a true testament to the collaborative dreams realised on VideoSDK.</p><h3 id="developers-%E2%80%93-weve-got-you-covered-2000-devices-support">Developers – we've got you covered: 2,000+ devices supported</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/Year-in-review-2023-3.png" class="kg-image" alt="2023 Rewind: वसुधैव कुटुम्बकम्" loading="lazy" width="2560" height="1440"/></figure><p>Our commitment to providing a user-friendly environment, compatible across devices, browsers, and operating systems, was unwavering. With a streamlined integration process, developers found themselves spending less time on technicalities and more time on what they do best – creating.</p><h3 id="enterprise-ready-infra-with-compliance-certified-excellence">Enterprise Ready Infra with Compliance Certified Excellence</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/compliance-2-1.png" class="kg-image" alt="2023 Rewind: वसुधैव कुटुम्बकम्" loading="lazy" width="2560" height="1440"/></figure><p>In 2023, we stood as a guardian of trust, prioritizing the security and privacy of our users. Our commitment to compliance is evident through the implementation of industry-leading standards. </p><p>These badges aren't just symbols; they are a testament to our unwavering dedication to safeguarding your data.</p><p>Your safety is our priority. </p><h3 id="global-community-with-1000-engagement-every-week">Global Community with 1000+ Engagement Every Week</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/Community-numbers.png" class="kg-image" alt="2023 Rewind: वसुधैव कुटुम्बकम्" loading="lazy" width="2560" height="1440"/></figure><p>Our developer community has grown into a thriving force. Developers from diverse backgrounds join our online communities, not just to connect but to share knowledge, solve technical challenges, and enhance their skills. It's a place where innovation thrives, and every developer is empowered to enhance their skills and create impactful solutions.</p><h3 id="fuelling-the-future-with-12m-funding-round">Fuelling the Future with $1.2m Funding Round</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/funding.webp" class="kg-image" alt="2023 Rewind: वसुधैव कुटुम्बकम्" loading="lazy" width="2000" height="1050"/></figure><p>Big news in the offices of VideoSDK!</p><p>We've secured $1.2 million from GVFL and strategic investors, showcasing our dedication to transforming live video. This funding has helped us push even higher, making it easier for developers to build reliable live video apps.</p><h3 id="achieved-most-loved-and-highly-rated-infra-worldwide">Achieved Most Loved and Highly Rated Infra Worldwide</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/Year-in-review-2023-7.png" class="kg-image" alt="2023 Rewind: वसुधैव कुटुम्बकम्" loading="lazy" width="2560" height="1440"/></figure><p>We won the esteemed Golden Kitty Award at the start of the year. Users gave us big thumbs up on G2, <a href="https://www.producthunt.com/products/video-sdk">Product Hunt</a>, and <a href="https://www.capterra.in/software/1015040/videosdklive">Capterra</a>. We also launched the Interactive Live Streaming SDK. These wins tell the world that VideoSDK is all about making things great and having a community that loves what we do.</p><h3 id="2000-features-released-in-2023">2000+ Features Released in 2023</h3>
<figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/01/Year-in-review-2023-8.png" class="kg-image" alt="2023 Rewind: वसुधैव कुटुम्बकम्" loading="lazy" width="2560" height="1440"/></figure><p>In a year marked by innovation, VideoSDK emerged as a pioneer. We haven't just evolved, we've revolutionized the live video landscape with features that are more powerful than imaginable.</p><ul><li><strong>Low Latency Interactive Live Streaming (ILS) </strong>Enhancements – Introducing RTMP integration with Instagram and Twitch and now achieving an impressive 4-5 second low latency with HLS.</li><li><strong>Device agnostic image capture </strong>– Capture seamless moments from any stream with new automatic image capture functionality.</li><li><strong>Geo-fencing </strong>– Streamline connections by enabling connections to a single selected server with geo-fencing.</li><li><strong>Cloud Proxy and Security Measures </strong>- Strengthen security measures with Cloud Proxy and new VPN detection and IP restriction capabilities.</li><li><strong>Cross-Platform Support </strong>– Enjoy Flutter's versatility in cross-platform apps with everything you need, including screen-share functionality.</li><li><strong>Easy TypeScript Integration </strong>- Harness the power of TypeScript with our newly added support for advanced development.</li><li><strong>User-friendly documentation </strong>– Explore our enhanced and user-friendly documentation, ensuring a better user experience.</li><li><strong>Powerful SDK </strong>- Experience advanced capabilities in all SDKs with features like call triggers, statistics, Jitpack and Maven Central Package Manager support, multi-stream parameters, code samples, examples, screen share, peep mode functionality, and more.</li><li><strong>Storage Integration with GCP </strong>- Embrace flexibility with the introduction of Google Cloud Platform (GCP) storage, allowing users to securely store their data and 
explore custom storage solutions.</li></ul><p>Here's to a year where VideoSDK embraces the cutting edge and leaps into what's next. This is a journey where your live video experiences are not only enhanced but also redefined.</p><h3 id="whats-in-2024">What's in 2024?</h3><p>In this journey, we are not only looking back but also looking to the future. VideoSDK is leading the way in generative AI and AR/VR technologies, introducing new frontiers of possibilities. Our focus remains on making things better, as we refine and elevate your VideoSDK experience step by step.</p><p>As we step into 2024, quiet anticipation fills the air at VideoSDK. The coming year promises thoughtful enhancements and intentional evolution to the live video experience. Reflecting on the achievements of 2023, we are carefully charting the way forward.</p>]]></content:encoded></item><item><title><![CDATA[Product Updates: July 2024]]></title><description><![CDATA[Explore the latest product updates from July 2024 at VideoSDK Live's blog. Stay informed on cutting-edge features shaping the future of video technology.]]></description><link>https://www.videosdk.live/blog/product-updates-july-2024</link><guid isPermaLink="false">66b0b50720fab018df10fc73</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 06 Aug 2024 06:21:15 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/08/July-2024.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/08/July-2024.png" alt="Product Updates: July 2024"/><p>Hello, everyone! How are you?</p><p>We are back again with the latest updates and improvements to VideoSDK.live. This month, we've rolled out new features, fixed bugs, and made noteworthy updates across our SDKs. (So, keep reading!)</p><h2 id="%F0%9F%9A%80-new-features">🚀 
New Features</h2><h3 id="individual-recording">Individual Recording</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/08/Individual-Recording.png" class="kg-image" alt="Product Updates: July 2024" loading="lazy" width="1280" height="720"/></figure><p>Now you can record the video streams of specific individual participants in your video calls. This is especially helpful for industries such as ed-tech, interview-as-a-service, and contact centers, enabling use cases like fraud detection and performance or behavioral analysis.</p><p>➡️ <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/recording/record-participant">JavaScript Documentation</a></p><p>➡️ <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/recording/record-participant">React Native Documentation</a></p><p>➡️ <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/recording/record-participant">Android Documentation</a></p><p>➡️ <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/recording/record-participant">iOS Documentation</a></p><p>➡️ <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/recording/record-participant">React Documentation</a></p><p>➡️ <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/recording/record-participant">Flutter Documentation</a></p><h3 id="pre-call-in-android-sdk">Pre-call in Android SDK</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/08/Pre-call-Check.png" class="kg-image" alt="Product Updates: July 2024" loading="lazy" width="1280" height="720"/></figure><p>We're thrilled to introduce pre-call functionality in our Android SDK. This feature lets you set up and configure your call settings before joining, enhancing the user experience.</p><p>➡️ 
<a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/setup-call/precall">Android Documentation</a></p><h3 id="virtual-background-for-flutter">Virtual Background for Flutter</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/08/Virtual-background.png" class="kg-image" alt="Product Updates: July 2024" loading="lazy" width="1280" height="720"/></figure><p>We're excited to introduce Virtual Background functionality in our Flutter SDK. This addition lets users blur their backgrounds for enhanced privacy or use custom images as backgrounds to maintain a consistent brand image.</p><p>➡️ <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/render-media/virtual-background">Virtual Background Documentation</a></p><h3 id="custom-video-processor-in-flutter">Custom Video Processor in Flutter</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/08/Custom-Video-Processor.png" class="kg-image" alt="Product Updates: July 2024" loading="lazy" width="1280" height="720"/></figure><p>We've also introduced a Custom Video Processor in Flutter, which gives you the flexibility to modify raw video frames and add real-time video effects. It lets you apply filters such as virtual backgrounds, blur, or face emoji to create unique and engaging video experiences in your Flutter applications.</p><p>➡️ 
<a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/video-processor/overview">Video Processor Documentation</a></p><h3 id="improved-error-handling-in-android-sdk"><strong>Improved Error Handling in Android SDK</strong></h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/08/Improved-Error-Handling.png" class="kg-image" alt="Product Updates: July 2024" loading="lazy" width="1280" height="720"/></figure><p>We've improved error handling in the Android SDK to surface more detailed errors on the <code>onError</code> event. These errors are also visible on the Analytics dashboard, giving you more accurate and in-depth error-handling capabilities.</p><p>➡️ <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/get-notified/error-events">Android Documentation</a></p><h3 id="network-stats-now-supported-in-the-latest-version-of-mozilla-safari">Network Stats Now Supported in the Latest Versions of Firefox &amp; Safari</h3><p>Recent Safari and Firefox updates broke stats collection in these browsers, affecting the overall data analytics in the dashboard. We've upgraded the <code>getAudioStats</code>, <code>getVideoStats</code>, and <code>getShareStats</code> methods of the <code>Participant</code> class to restore stats support. With these enhancements, you have access to accurate, real-time performance data across all browsers.</p><h2 id="%F0%9F%92%A1-updates">💡 Updates</h2><ul><li><strong>Audio Sharing Stats</strong>: Introduced the <code>getShareAudioStats</code> method for retrieving <strong>audio-sharing statistics</strong> on Chromium-based browsers (e.g., Chrome, Brave).</li><li><strong>Codec Support</strong>: Added support for the hardware-accelerated H.264 codec in iOS, improving video performance.</li></ul><h2 id="%F0%9F%98%84-programming-language-hierarchy-joke-of-the-month">😄 
Programming Language Hierarchy: Joke of the Month</h2><p>Enjoy!</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/08/pc5i6b3uqzfd1.png" class="kg-image" alt="Product Updates: July 2024" loading="lazy" width="800" height="598"/></figure><h2 id="%F0%9F%90%9E-bug-fixes">🐞 Bug Fixes</h2><ul><li><strong>Multi-camera Devices</strong>: Resolved an issue with switching between front and back cameras on devices with multiple cameras in the Android SDK.</li><li>Fixed an initial mic stream issue that occurred when joining meetings in the JavaScript and React SDKs.</li></ul><p>We're constantly working to make VideoSDK.live the most developer-friendly real-time communication solution. These updates are aimed at improving performance, expanding capabilities, and making your development process smoother.</p><p>As always, we'd love to hear your feedback! If you have any questions or suggestions, or run into any issues, please don't hesitate to reach out to our support team.</p><p>➡️ And if you're new here, <a href="https://www.videosdk.live/signup">sign up for VideoSDK</a> and get <strong>10,000</strong> free minutes.</p><p>Happy coding!</p><p>VideoSDK.live Team</p>]]></content:encoded></item><item><title><![CDATA[Product Updates: June 2024]]></title><description><![CDATA[Discover the latest VideoSDK updates for June 2024 and a recap of all features from the first half of 2024. 
Stay ahead with VideoSDK's commitment to innovation and developer support.]]></description><link>https://www.videosdk.live/blog/product-updates-june-2024</link><guid isPermaLink="false">6683ee0a20fab018df10f50d</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 04 Jul 2024 07:22:26 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/07/June-2024.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/07/June-2024.png" alt="Product Updates: June 2024"/><p>Hi, how are you?</p><p>The first half of 2024 has gone by in a flash, hasn't it? Some of your goals have been completed, and some are still in progress. The same goes for us. At VideoSDK, we have built many features to improve your video experience.</p><p>This June, we're sharing some new updates with you to enhance your development experience.</p><h2 id="%F0%9F%86%95-image-capture-is-now-available-in-ios-android-sdks">🆕 Image Capture is Now Available in iOS &amp; Android SDKs</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/image-capture.png" class="kg-image" alt="Product Updates: June 2024" loading="lazy" width="1280" height="720"/></figure><p>Our iOS and Android SDKs now include an image capture feature, allowing users to capture high-quality images during video calls.</p><p>➡️ <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/handling-media/image-capturer">iOS Docs</a></p><p>➡️ 
<a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/handling-media/image-capturer">Android Docs</a></p><h2 id="%F0%9F%96%BC%EF%B8%8F-virtual-background-added-to-ios-sdk">🖼️ Virtual Background added to iOS SDK</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/virtual-background.png" class="kg-image" alt="Product Updates: June 2024" loading="lazy" width="1280" height="720"/></figure><p>We've rolled out the virtual background feature for iOS. Say goodbye to distracting backgrounds and choose from a variety of virtual backgrounds. It will help users enhance their professional appearance during video calls, especially for business or education use cases, no matter where they are.</p><p>➡️ <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/render-media/virtual-background">iOS Docs</a></p><h2 id="%F0%9F%98%84-programming-language-hierarchy-joke-of-the-month">😄 Programming Language Hierarchy: Joke of the Month</h2><p>Enjoy some monthly developer humor from our team.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/programming-meme-v0-i61ewl5omda81.webp" class="kg-image" alt="Product Updates: June 2024" loading="lazy" width="535" height="585"/></figure><h2 id="%F0%9F%8C%8D-geo-tag-recording-guide">🌍 Geo-Tag Recording Guide</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/geo-tag-recording.png" class="kg-image" alt="Product Updates: June 2024" loading="lazy" width="1280" height="720"/></figure><p>We've also added a Geo-Tag Recording feature that lets you record video calls with geographic tagging. 
This feature lets you embed geo-location data (latitude and longitude) in your video KYC recordings, adding an extra layer of context to your sessions.</p><p>➡️ <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/recording/geo-tag-recording">Documentation Guide</a></p><h2 id="%F0%9F%93%A4-temporary-file-upload-added-to-android-sdk">📤 Temporary File Upload added to Android SDK</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/07/temp-file-upload.png" class="kg-image" alt="Product Updates: June 2024" loading="lazy" width="1280" height="720"/></figure><p>We've introduced a temporary file upload feature in our Android SDK. It allows users to upload documents during live video sessions. After the session is completed, the uploaded documents are automatically deleted, eliminating the need to store these files on your servers.</p><p>➡️ <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/collaboration-in-meeting/upload-fetch-temporary-file">Android Docs</a></p><hr/><p>We're really excited about how these features can elevate the apps you're building. Designed for easy integration, they promise to enhance your development experience. If you have any questions, our developer support team is always here to assist you.</p><p>➡️ <a href="https://www.videosdk.live/signup">Sign up for VideoSDK</a>, get <strong>10,000</strong> free minutes, and keep building amazing things! 
</p>]]></content:encoded></item><item><title><![CDATA[Product Updates: May 2024]]></title><description><![CDATA[Discover the latest updates from May 2024 including Transcription and AI summary across all SDKs, cloud proxy, pre-call support in Flutter, and more.]]></description><link>https://www.videosdk.live/blog/product-updates-may-2024</link><guid isPermaLink="false">6666ec9f20fab018df10eabf</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 11 Jun 2024 12:10:23 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/06/May-Monthly-Update-2024.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/May-Monthly-Update-2024.png" alt="Product Updates: May 2024"/><p>Hello everyone, good to see you again! In May, we introduced one of our most requested features, "Transcription &amp; AI Summary," across all the SDKs, and added extra security via Cloud Proxy. We also launched and updated several other features that emphasize our commitment to delivering exceptional video solutions. Discover the latest updates from May 2024 that promise to transform your video experience.</p><h2 id="%F0%9F%93%9D-transcription-in-android-react-native">📝 Transcription in Android &amp; React Native</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Real-time-transcription.png" class="kg-image" alt="Product Updates: May 2024" loading="lazy" width="1280" height="720"/></figure><p>The most requested feature is live now!</p><p>We're thrilled to announce the launch of our highly anticipated transcription feature in Android and React Native! With real-time and post-call transcription, along with AI-generated summaries, you can now easily convert speech to text and review the key points of your video calls. 
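As a rough sketch of what that single call can look like (the method name and options below are assumptions, stubbed out for illustration; check the linked docs for the real API):

```javascript
// Hypothetical sketch of a one-call transcription start. The SDK's
// meeting object is stubbed here so the flow is self-contained;
// verify the real method name and options against the VideoSDK docs.
function startMeetingTranscription(meeting) {
  return meeting.startTranscription({
    summary: { enabled: true }, // also request an AI-generated summary
  });
}

// Stub standing in for the real SDK meeting object:
const fakeMeeting = {
  startTranscription: (opts) => ({ started: true, opts }),
};
const result = startMeetingTranscription(fakeMeeting);
```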
You can get a transcription with a single function call, ensuring the best developer experience.</p><p>All our transcription and summary processes comply with HIPAA, GDPR, and other relevant data protection regulations.</p><p>➡️ <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/transcription-and-summary/realtime-transcribe-meeting">React Native Docs</a></p><p>➡️ <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/transcription-and-summary/realtime-transcribe-meeting">Android Docs</a></p><h2 id="%F0%9F%8C%90-cloud-proxy">🌐 Cloud Proxy</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Cloud-Proxy.png" class="kg-image" alt="Product Updates: May 2024" loading="lazy" width="1280" height="720"/></figure><p>We're also introducing the ability to choose your protocol mode with our Cloud Proxy feature.</p><p>Cloud Proxy offers three straightforward operating modes to fit different business and firewall requirements:</p><ul><li>UDP_OVER_TCP</li><li>Force UDP</li><li>Force TCP</li></ul><p>Cloud Proxy lets you control how your streaming traffic is routed across diverse network paths. By directing traffic to designated proxy servers, it helps you comply with regional regulations and cater to your specific requirements and the geographical location of your users.</p><p>➡️ <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/geo-fencing-and-proxy-controls/cloud-proxy">JavaScript</a></p><p>➡️ <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/geo-fencing-and-proxy-controls/cloud-proxy">React</a></p><h2 id="%F0%9F%8C%89-hello-san-francisco">🌉 
Hello, San Francisco!</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/San-Francisco.png" class="kg-image" alt="Product Updates: May 2024" loading="lazy" width="1280" height="720"/></figure><p>We're excited to announce the addition of a new server in San Francisco, alongside our pre-existing server locations. This expansion allows us to provide even better coverage and lower latency for our users on the West Coast of the United States.</p><h2 id="%F0%9F%93%9E-pre-call-support-in-flutter">📞 Pre-Call Support in Flutter</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Pre-call-in-flutter.png" class="kg-image" alt="Product Updates: May 2024" loading="lazy" width="1440" height="900"/></figure><p>We're introducing the pre-call screen feature for our Flutter users! This highly requested addition allows users to test their audio and video devices before joining a call, ensuring everyone involved has a smooth and frustration-free call experience.</p><p>➡️ <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/setup-call/precall">Pre-Call Docs</a></p><h2 id="debugging-delights-monthly-developer-joke">Debugging Delights: Monthly Developer Joke</h2><p>Enjoy some monthly developer humor from our team.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/joke-of-the-month--1-.png" class="kg-image" alt="Product Updates: May 2024" loading="lazy" width="1682" height="1440"/></figure><h2 id="%F0%9F%AB%B0%F0%9F%8F%BDexpo-support">🫰🏽 Expo Support</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Expo.png" class="kg-image" alt="Product Updates: May 2024" loading="lazy" width="1280" height="720"/></figure><p>We're excited to announce that we now support Expo for our React Native SDK! 
Forget the complexities of configuring Android Studio and Xcode: you can now use our SDK without having to eject from Expo, making it easier than ever to integrate our features into your Expo-based applications.</p><p>➡️ <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/react-native-expo-setup">React Native - Expo Docs</a></p><h2 id="%F0%9F%94%92pre-signed-urls-in-cloud-recording">🔒 Pre-signed URLs in Cloud Recording</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Pre-signed-URL.png" class="kg-image" alt="Product Updates: May 2024" loading="lazy" width="1280" height="720"/></figure><p>You can now provide pre-signed URLs to securely upload your recordings to AWS, Azure, or GCP. This feature offers a temporary, secure method of uploading objects to your cloud storage without sharing your full credentials.</p><p>By using it, you can ensure your files are uploaded safely and efficiently, without compromising the security of your account.</p><p>To use a pre-signed URL, you have to start a recording first. 
Here are the start recording references:</p><p>➡️ <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-recording#preSignedUrl">Start Recording Docs</a></p><p>➡️ <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/recording/presigned-URL">React Docs</a></p><p>➡️ <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/recording/presigned-URL">React Native Docs</a></p><p>➡️ <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/recording/presigned-URL">Flutter Docs</a></p><p>➡️ <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/recording/presigned-URL">Android Docs</a></p><p>➡️ <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/recording/presigned-URL">iOS Docs</a></p><h2 id="%F0%9F%93%B9-record-your-rtmp-and-live-streaming">📹 Record Your RTMP and Live Streaming</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/06/Recording-in-HLS-and-RTMP.png" class="kg-image" alt="Product Updates: May 2024" loading="lazy" width="1280" height="720"/></figure><p>We're introducing the ability to record your RTMP and live-streaming sessions. This feature allows you to capture and save your live events and broadcasts, making it easier than ever to share your content with your audience.</p><p>➡️ <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-livestream">RTMP Docs</a></p><p>➡️ <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-hlsStream">Live Streaming Docs</a></p><h2 id="%F0%9F%93%8C-conclusion">📌 Conclusion</h2><p>That's a wrap for May's updates. We're really happy to have introduced a bunch of new features and improvements to make your video experience even better. 
From transcription and Cloud Proxy to Expo support and pre-signed URLs, we're committed to continuously pushing the boundaries of what's possible with video technology.</p><p>As we look ahead, we're excited to continue innovating and delivering value to our users. Whether you're building a virtual claim settlement platform or creating a seamless video MER experience, we're here to support you every step of the way.</p><p>Thank you for being part of the VideoSDK community. We're honored to have you along for the ride and look forward to sharing more exciting updates with you in the months ahead.</p>]]></content:encoded></item><item><title><![CDATA[Product Updates: April 2024]]></title><description><![CDATA[Explore the latest product updates from April 2024 at VideoSDK Live's blog. Stay informed on cutting-edge features shaping the future of video technology.]]></description><link>https://www.videosdk.live/blog/product-updates-april-2024</link><guid isPermaLink="false">6638776f20fab018df10e494</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Thu, 09 May 2024 19:24:10 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/05/April-2024.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/05/April-2024.png" alt="Product Updates: April 2024"/><p>In April, we marked a major milestone in our mission to transform the video experience with the launch of a range of innovative features. 
Let's explore the latest updates that reinforce our dedication to providing exceptional video solutions.</p><h2 id="%E2%9C%A8-simplified-pricing-structure"><strong>✨ Simplified Pricing Structure</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/New-pricing-structure-5.png" class="kg-image" alt="Product Updates: April 2024" loading="lazy" width="1280" height="720"/></figure><p>Based on the valuable feedback we have received from our users, we are simplifying our pricing structure. This update aims to make our pricing more transparent and predictable, making it easier for you to plan and understand your costs.</p><p><a href="https://www.videosdk.live/blog/new-pricing-structure">Read the announcement →</a></p><p>The new pricing takes effect for new users from 1st June 2024. For existing Pay-as-you-Go users, it takes effect from 1st August 2024. For Enterprise users, the pricing specified in the contract remains unchanged until the end of the contract.</p><p>If you wish to retain the current pricing, you have the option to switch to our Enterprise plan.</p><h2 id="%F0%9F%A4%A9-introducing-growth-plan"><strong>🤩 Introducing the Growth Plan</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/what-is-growth-plan-1.png" class="kg-image" alt="Product Updates: April 2024" loading="lazy" width="1280" height="720"/></figure><p>We are excited to introduce the Growth Plan, a prepaid option with exclusive benefits that lets you access premium features and services without the commitment of a long-term contract. 
We are committed to providing unparalleled affordability and value to our customers, and the Growth Plan is a testament to that commitment.</p><p><a href="https://www.videosdk.live/pricing/coming-soon#view-pricing">Check out the pricing benefits →</a></p><h2 id="%F0%9F%92%AC-transcriptions-and-summary"><strong>💬 Transcriptions and Summary</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/real-time-transcription.png" class="kg-image" alt="Product Updates: April 2024" loading="lazy" width="1280" height="720"/></figure><p>We are thrilled to announce the integration of live transcription, post-call transcription, and summarisation into our platform. This functionality empowers users to seamlessly convert audio content into text in real time during a session and access comprehensive transcriptions of past meetings.</p><p>This is just the beginning of our journey to transform the way people collaborate and engage. 
We will build upon these foundational capabilities with innovative AI-powered features that will redefine the collaboration experience.</p><p>Check out the documentation:<br>➡️ <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/transcription-and-summary/realtime-transcribe-meeting">React SDK</a><br>➡️ <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/transcription-and-summary/realtime-transcribe-meeting">JavaScript SDK</a><br>➡️ <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/transcription-and-summary/realtime-transcribe-meeting">iOS SDK</a><br>➡️ <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/transcription-and-summary/realtime-transcribe-meeting">Flutter SDK</a></p><h2 id="%E2%98%8E%EF%B8%8F-sipsession-initiation-protocol"><strong>☎️ SIP - Session Initiation Protocol</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/05/sip-integration.png" class="kg-image" alt="Product Updates: April 2024" loading="lazy" width="1280" height="720"/></figure><p>We have integrated Session Initiation Protocol (SIP) functionality into our platform. This integration facilitates seamless connections between video calls and traditional phone networks, creating new possibilities for communication and collaboration.</p><p>➡️ <a href="https://docs.videosdk.live/javascript/guide/sip-connect">Check out the SIP Connect documentation →</a></p><h2 id="%F0%9F%86%99-major-updates-in-ios"><strong>🆙 
Major Updates in iOS</strong></h2><h3 id="screen-share">Screen Share</h3><p>We have further enhanced the Screen Share feature, ensuring improved clarity and flexibility in your iOS applications.</p><p>➡️ <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/handling-media/native-ios-screen-share">Screen Share documentation for iOS →</a></p><h3 id="pre-call">Pre-call</h3><p>We have introduced the pre-call screen feature for iOS users, a highly requested addition that allows users to test their audio and video devices before joining a call. This ensures a smooth and hassle-free call experience for all participants.</p><p>➡️ <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/setup-call/precall">Pre-Call documentation for iOS →</a></p><h2 id="%F0%9F%86%99-general-updates-bug-fixes"><strong>🆙 General Updates &amp; Bug Fixes</strong></h2><p>We are dedicated to enhancing your development experience with VideoSDK. This month, our focus has been on improving documentation across all platforms to offer clearer guidance and better clarity.</p><ul><li>Resolved connectivity issues when joining a meeting using Firefox on iOS.</li><li>Enhanced compatibility with WebRTC version 118 to ensure smoother and more efficient performance.</li><li>Included a SwiftUI example GitHub repository in the code sample section.</li><li>Fixed the Flutter issue of audio playing through the earpiece on iOS, and the problem with starting screen share on Android 14 in Flutter.</li></ul><h2 id="%F0%9F%93%8C-conclusion">📌 Conclusion</h2><p>And that's a wrap for the month. But worry not, dear reader: the hustle continues! We already have exciting and innovative features lined up for the coming months.</p><p>Oh, and if you haven't already, why not <a href="https://www.videosdk.live/signup">sign up for VideoSDK</a> and join our lively <a href="https://discord.gg/Q6UsUfqa">Discord community</a>? 
Let's continue the conversation over there.</p><p>Until next time, keep up the momentum and stay tuned for what's next!</p>]]></content:encoded></item><item><title><![CDATA[Product Updates: March 2024]]></title><description><![CDATA[Explore the latest product updates from March 2024 at VideoSDK Live's blog. Stay informed on cutting-edge features shaping the future of video technology.]]></description><link>https://www.videosdk.live/blog/product-updates-march-2024</link><guid isPermaLink="false">661ab63e2a88c204ca9d0bb5</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Sat, 13 Apr 2024 18:45:19 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/04/March-2024.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/04/March-2024.png" alt="Product Updates: March 2024"/><p>This month, we've introduced new features designed to enhance call quality measurement and streamline development processes to meet your needs. Let's explore the exciting updates that are enhancing the VideoSDK experience in March 2024.</p><h2 id="%F0%9F%93%8A-unlock-session-insights-with-advanced-analytics"><strong>📊 Unlock Session Insights with Advanced Analytics</strong></h2><p>Session Analytics gives developers insight into call performance so they can improve user experiences. It offers real-time data for session optimisation, monitoring, and understanding call quality fluctuations. 
With these detailed metrics, issues can be promptly identified and addressed.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/session-analytics-2.png" class="kg-image" alt="Product Updates: March 2024" loading="lazy" width="1280" height="720"/></figure><p>Session Analytics includes three major components:</p><ol><li><strong>Session Overview: </strong>Provides a holistic view of session and participant details.</li><li><strong>Session Stats:</strong> Offers in-depth data on audio-video quality and participant media stats such as jitter, RTT, bitrate, packet loss, FPS, etc.</li><li><strong>Errors:</strong> Provides possible reasons for known errors that have occurred.</li></ol><p>➡️ <a href="https://docs.videosdk.live/tutorials/videosdk-session-analytics-dashboard">Read more about Session Analytics</a></p><h2 id="%F0%9F%9B%91-improved-error-messaging"><strong>🛑 Improved Error Messaging</strong></h2><p>We're excited to introduce more precise error messaging for our SDKs, covering:</p><p>➡️ <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/release-notes#v0082">JavaScript SDK</a><br>➡️ <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/release-notes#v0185">React SDK</a><br>➡️ <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/release-notes#v016">React Native SDK</a></p><p>With detailed error codes and messages tailored for media-related errors, developers now have essential insights to diagnose and resolve issues effectively. This update enhances the <code>onError</code> event, simplifying the troubleshooting process and guaranteeing seamless integration.</p><p>Stay tuned for further updates as we extend this enhancement to our other SDKs, ensuring a consistent and enriched development experience across all platforms.</p><h2 id="%F0%9F%93%A6-introducing-new-pricing-plans-your-request-our-response"><strong>📦 
Introducing New Pricing Plans: Your Request, Our Response!</strong></h2><p>We're thrilled to announce some significant updates to our <a href="https://www.videosdk.live/pricing/coming-soon">pricing plans</a>, driven by your invaluable feedback and requests.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Plans-1.png" class="kg-image" alt="Product Updates: March 2024" loading="lazy" width="1368" height="739"/></figure><p><strong>Catering to Your Needs: </strong>These updates are a direct response to your inquiries and suggestions. We understand the importance of flexibility in choosing the right plan for your needs, ensuring it caters to you at every stage.</p><p><strong>Delivering a Premium Experience:</strong> By implementing carefully managed quotas and limitations, we're ensuring that each interaction with VideoSDK is of the utmost quality. It's our way of guaranteeing you a premium service, delivering consistent performance without compromise.</p><p><strong>Premium Features Made Accessible: </strong>We've made many premium features accessible across all plans. And yes, VideoSDK remains free to get started!</p><p>➡️ <a href="https://www.videosdk.live/pricing/coming-soon">Read more about New Pricing Plans here</a><br/>➡️ <a href="https://docs.videosdk.live/tutorials/quota-limit-and-plan">Read more about quotas and limitations here</a></p><h2 id="%F0%9F%86%99-general-updates-and-bug-fixes"><strong>🆙 General Updates and Bug Fixes</strong></h2><ul><li>Fixed an issue in the Prebuilt SDK where hosts with permission were unable to enable the mic, webcam, or screen share of remote participants.</li><li>Resolved an issue where multiple participants attempting screen sharing in a meeting could lead to a misleading display of stream showcases in the Prebuilt SDK.</li></ul><h2 id="%F0%9F%91%80-coming-soon"><strong>👀 
Coming Soon</strong></h2><p>Here's a sneak peek at what's coming soon to VideoSDK:</p><h3 id="%E2%98%8E%EF%B8%8F-sipsession-initiation-protocol">☎️ <strong>SIP -</strong> Session Initiation Protocol</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/03/SIP-integration.png" class="kg-image" alt="Product Updates: March 2024" loading="lazy" width="1280" height="720"/></figure><p>We're working on the exciting integration of Session Initiation Protocol (SIP) functionality into our platform. This will enable seamless integration between your video calls and traditional phone networks, opening up new avenues for communication and collaboration.</p><h3 id="%F0%9F%92%AC%EF%B8%8F-real-time-transcription">💬️ <strong>Real-Time Transcription</strong></h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/04/Real-time-transcription-1.png" class="kg-image" alt="Product Updates: March 2024" loading="lazy" width="1280" height="720"/></figure><p>Additionally, we're developing real-time transcription bundled with speaker diarization, translations, and more, providing a comprehensive solution for seamless communication and productive meetings on the fly.</p><h2 id="%F0%9F%9A%80-and-one-more-thing-about-twilio">🚀 And, One More Thing about Twilio!</h2><p>If you've been impacted by the Twilio Video API sunset and need a smooth transition, then</p><h3 id="%F0%9F%93%A6-we-launched-the-twilio-migration-package">📦 
We launched the Twilio Migration Package</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Migration-package-1.png" class="kg-image" alt="Product Updates: March 2024" loading="lazy" width="1025" height="541"/></figure><p>In this package, we're offering <strong>1 Million Free Minutes</strong>, dedicated CS support, and comprehensive migration guides to walk you through every step, making your migration from Twilio hassle-free. Trust VideoSDK to ensure your upgrade journey is seamless and efficient. Let's navigate this transition together!</p><p>➡️ See our <a href="https://www.videosdk.live/alternative/twilio-vs-videosdk">Migration Package</a><br/>➡️ Also, check our <a href="https://docs.videosdk.live/tutorials/twilio-to-videosdk-migration-guide">Migration Guides</a></p><h2 id="%F0%9F%93%8C-conclusion">📌 Conclusion</h2><p>And that's a wrap for March. But worry not, dear reader, because the hustle continues! We have more exciting and innovative features lined up for the coming months.</p><p>Oh, and if you haven't already, why not <a href="https://www.videosdk.live/signup">sign up for VideoSDK</a> and join our lively <a href="https://discord.gg/Q6UsUfqa">Discord community</a>? Let's continue this over there.</p><p>Until next time, keep up the momentum and stay tuned for what's next!</p>]]></content:encoded></item><item><title><![CDATA[Product Updates: February ‘24]]></title><description><![CDATA[Explore the latest product updates from February 2024 at VideoSDK Live's blog. 
Stay informed on cutting-edge features shaping the future of video technology.]]></description><link>https://www.videosdk.live/blog/product-updates-february-24</link><guid isPermaLink="false">65eaf3412a88c204ca9cea7e</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 11 Mar 2024 05:34:52 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/03/product-update-Feb-24.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/03/product-update-Feb-24.png" alt="Product Updates: February ‘24"/><p>This month, we're thrilled to deliver a range of exciting updates designed to simplify your development experience and empower you to create even more seamless video-calling experiences. Let's dive into what's new:</p><h2 id="%F0%9F%94%A5-new-feature-pre-call-screen-react-native"><strong>? New Feature: Pre-call Screen (React Native)</strong></h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/03/Pre-call.png" class="kg-image" alt="Product Updates: February ‘24" loading="lazy" width="1440" height="900"/></figure><p>We're introducing the pre-call screen feature for our React Native users! This highly requested addition allows users to test their audio and video devices before joining a call, ensuring a smooth and frustration-free call experience for everyone involved.</p><p>️➡️ <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-mediaDevice/introduction">Pre-Call documentation for React Native</a></p><h3 id="%E2%9A%99%EF%B8%8F-react-and-javascript-pre-call-documentation"><strong>⚙️ React and JavaScript Pre-Call Documentation</strong></h3><p>We launched Pre-Call Setup documentation with a significant upgrade, providing improved clarity and guidance. 
</p><p>➡️ <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/setup-call/precall">Pre-call documentation for React</a></p><p>➡️ <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/setup-call/precall">Pre-Call documentation for JavaScript</a></p><h2 id="%F0%9F%86%99-general-updates"><strong>🆙 General Updates</strong></h2><p>We're also committed to improving your development experience with VideoSDK. This month, we focused on making our documentation even better across all platforms. We've reworked our <strong>React</strong>, <strong>React Native</strong>, and <strong>JavaScript</strong> documentation to make it even clearer and easier to follow.</p><p>➡️ <strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/concept-and-architecture">React</a></strong></p><p>➡️ <strong><a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/concept-and-architecture">React Native</a></strong></p><p>➡️ <strong><a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/concept-and-architecture">JavaScript</a></strong></p><h3 id="internal-updates"><strong>Internal Updates</strong></h3><p>While some updates are user-facing, we've also been busy behind the scenes: this month we implemented several internal improvements to enhance the overall stability, reliability, and performance of VideoSDK.</p><h2 id="%F0%9F%91%80-coming-soon"><strong>👀 Coming Soon</strong></h2><p>We know you might be wondering what the VideoSDK team has been cooking up lately. While February didn't bring a major release, our engineers have been diligently working on some groundbreaking features that will revolutionize your video calling. 
Here's a sneak peek at what's coming soon to VideoSDK:</p><h3 id="%E2%98%8E%EF%B8%8F-sipsession-initiation-protocol">☎️ <strong>SIP -</strong> Session Initiation Protocol</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/03/SIP-integration.png" class="kg-image" alt="Product Updates: February ‘24" loading="lazy" width="1280" height="720"/></figure><p>We're working on the exciting integration of Session Initiation Protocol (SIP) functionality into our platform. This will enable seamless integration between your video calls and traditional phone networks, opening up new avenues for communication and collaboration.</p><h2 id="%F0%9F%9A%80-and-one-more-thing-about-twilio">? And, One More Thing about Twilio!</h2><p>If you've been impacted by the Twilio Video API sunset and need a smooth transition, then</p><h3 id="%F0%9F%93%A6-we-launched-the-twilio-migration-package">? We launched the Twilio Migration Package </h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Migration-package-1.png" class="kg-image" alt="Product Updates: February ‘24" loading="lazy" width="1025" height="541"/></figure><p>In this, we're offering <strong>1 Million Free Minutes</strong>, dedicated CS support, and comprehensive migration guides that will guide you every step of the way, making your migration hassle-free from Twilio. Trust VideoSDK to ensure your upgrade journey is seamless and efficient. Let's navigate this transition together!</p><p>➡️ See our <a href="https://www.videosdk.live/alternative/twilio-vs-videosdk">Migration Package</a></p><p>➡️ Also, check our <a href="https://docs.videosdk.live/tutorials/twilio-to-videosdk-migration-guide">Migration Guides</a></p><h2 id="%F0%9F%93%8C-conclusion">? Conclusion</h2><p>And that's a wrap for February. But worry not dear reader, because the hustle continues!  
We already have exciting and innovative features in the coming months.</p><p>Oh, and if you haven't already, why not <a href="https://www.videosdk.live/signup">Signup for VideoSDK</a> and join our lively <a href="https://discord.gg/Q6UsUfqa">Discord community</a>? Let's continue this over there.</p><p>Until next time, keep up the momentum and stay tuned for what's next!</p>]]></content:encoded></item><item><title><![CDATA[Product updates: January 2024]]></title><description><![CDATA[Explore the latest product updates from January 2024 at VideoSDK Live's blog. Stay informed on cutting-edge features shaping the future of video technology.]]></description><link>https://www.videosdk.live/blog/product-updates-january-2024</link><guid isPermaLink="false">65bcc2a22a88c204ca9ce8ce</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 05 Feb 2024 06:27:20 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2024/02/Product-updates-Jan-24.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Product-updates-Jan-24.png" alt="Product updates: January 2024"/><p>Hope your year started off with a bang and not a bug! As fellow code enthusiasts, we understand that the secret to an amazing year involves simple code, compiler triumphs, and exciting updates. We've been hard at work on <a href="https://www.videosdk.live/">VideoSDK</a>, and we're here to share the first batch of 2024 with you. Let's get into it.</p><h2 id="most-requested-feature-pre-call-testing">?Most Requested Feature: Pre-call testing</h2><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Pre-call.png" class="kg-image" alt="Product updates: January 2024" loading="lazy" width="1440" height="900"/></figure><p>We heard you loud and clear! Introducing Pre-Call Testing in both our JS and React SDKs. 
Now, you can configure and assign devices before joining a call, ensuring a top-notch experience for users once they join a call.</p><h3 id="%E2%9A%99%EF%B8%8F-pre-call-in-javascript-sdk">⚙️ Pre-Call in JavaScript SDK</h3><p>With the introduction of Pre-Call testing, we've added a set of new methods and events to the JavaScript SDK, providing you with a toolkit to enhance your Pre-call experience. For more information, check out our latest documentation and example here ?</p><p>️➡️ <a href="https://docs.videosdk.live/javascript/api/sdk-reference/videosdk-class/introduction">Pre-Call documentation for JS SDK</a></p><p>️️➡️ <a href="https://github.com/videosdk-live/videosdk-rtc-javascript-sdk-example">Pre-Call example in JS SDK</a></p><h3 id="%E2%9A%9B%EF%B8%8F-pre-call-in-react-sdk">⚛️ Pre-call in React SDK</h3><p>We've also integrated Pre-Call Testing into our React SDK, bringing a host of new methods and hooks to streamline your pre-call experience. To guide you through implementing Pre-Call in React SDK, check our docs page and code sample here ?</p><p>➡️ <a href="https://docs.videosdk.live/react/api/sdk-reference/use-mediaDevice/introduction">Pre-Call documentation in React SDK</a></p><p>➡️ <a href="https://github.com/videosdk-live/videosdk-rtc-react-sdk-example">Pre-Call example in React SDK</a></p><h2 id="security-enhancements">? Security Enhancements</h2><p>Your safety is always our priority. Here's how we're strengthening security in our latest update:</p><p><strong>Improved VPN and Proxy Detections:</strong> We've enhanced our model to detect and restrict VPNs and proxies with Geo-restrictions, ensuring your meetings remain secure and free from unauthorized access. 
</p><h2 id="general-updates">? General Updates</h2><ul><li><strong>Improved Analytics for React Native:</strong> We've made your React Native calls smarter! Get a clearer picture of what's going on with improved insights.</li><li><strong>Instagram RTMP Improvements: </strong>We've fine-tuned Instagram RTMP to work better than ever. These improvements mean that your live streams on Instagram will be smoother and of higher quality, making your broadcasting experience even more enjoyable.</li></ul><h2 id="bug-fixes">? Bug fixes</h2><ul><li>Resolved an issue causing landscape video problems between the Firefox browser and native Android SDK.</li><li>Fixed an issue where the video stream orientation was incorrect for users utilizing the Firefox browser on iOS SDK.</li><li>Resolved a bug where the system microphone remained active after a user exited a call.</li></ul><h2 id="and-one-more-thing-about-twilio">? And, One More Thing about Twilio!</h2><p>If you've been impacted by the Twilio Video API sunset and need a smooth transition, then</p><h3 id="we-launched-the-twilio-migration-package">? We launched the Twilio Migration Package </h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Migration-package-1.png" class="kg-image" alt="Product updates: January 2024" loading="lazy" width="1025" height="541"/></figure><p>In this, we're offering <strong>1 Million Free Minutes</strong>, dedicated CS support, and comprehensive migration guides that will guide you every step of the way, making your migration hassle-free from Twilio. Trust VideoSDK to ensure your upgrade journey is seamless and efficient. 
Let's navigate this transition together!</p><p>➡️ See our <a href="https://www.videosdk.live/alternative/twilio-vs-videosdk">Migration Package</a></p><p>➡️ Also, check our <a href="https://docs.videosdk.live/tutorials/twilio-to-videosdk-migration-guide">Migration Guides</a></p><p>Here's what our users have to say about us ??</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2024/02/Dan-tweet.png" class="kg-image" alt="Product updates: January 2024" loading="lazy" width="1388" height="712"/></figure><h2 id="conclusion">? Conclusion</h2><p>And that's a wrap for January. But worry not dear reader, because the hustle continues!  We already have exciting and innovative features in the coming months.</p><p>Oh, and if you haven't already, why not <a href="https://www.videosdk.live/signup">Signup for VideoSDK</a> and join our lively <a href="https://discord.gg/Q6UsUfqa">Discord community</a>? Let's continue this over there.</p><p>Until next time, keep up the momentum and stay tuned for what's next!</p>]]></content:encoded></item><item><title><![CDATA[Video SDK Aug 23' Product Updates for Developers]]></title><description><![CDATA[We're excited to announce our latest monthly release! 
Let's deep dive into it.]]></description><link>https://www.videosdk.live/blog/video-sdk-aug-23-product-updates-for-developers</link><guid isPermaLink="false">64f71e309eadee0b8b9e8422</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Tue, 05 Sep 2023 12:37:07 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/09/August-Monthly-update-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/09/August-Monthly-update-1.png" alt="Video SDK Aug 23' Product Updates for Developers"/><p/><h3 id="master-live-video-integration-with-our-quickstart-blogs">Master Live Video Integration with Our Quickstart Blogs</h3><p>Ready to Build a Video Calling or Live Streaming App but don’t know how to get started? Go from beginner to pro with our tailored step-by-step guide for developers.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/09/Developer-Blog.jpg" class="kg-image" alt="Video SDK Aug 23' Product Updates for Developers" loading="lazy" width="1881" height="1152"/></figure><p>➡️ Explore → <a href="https://www.videosdk.live/blog/tag/product" rel="noopener noreferrer">Developer blogs</a></p><h3 id="learn-better-with-our-youtube-quickstart-playlist">Learn Better with our YouTube Quickstart Playlist</h3><p>If you are not a blog reading kinda person, then how about a dynamic and engaging video tutorial? 
Our YouTube quickstart playlist brings learning to life and makes your integration process engaging.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/09/Thumbnails.jpg" class="kg-image" alt="Video SDK Aug 23' Product Updates for Developers" loading="lazy" width="1881" height="1152"/></figure><p>➡️ Check out → <a href="https://www.youtube.com/@VideoSDK/videos" rel="noopener noreferrer">Quickstart Playlist</a></p><h3 id="what-our-community-has-to-say">What Our Community Has to Say</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/09/User-testimonial.jpg" class="kg-image" alt="Video SDK Aug 23' Product Updates for Developers" loading="lazy" width="1881" height="1152"/></figure><p>➡️ Join our community → <a href="https://discord.gg/Qfm8j4YAUJ" rel="noopener noreferrer">Discord</a></p><h3 id="elon-musk-pioneering-video-audio-calling-in-x">Elon Musk pioneering Video &amp; Audio Calling in X</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/09/Twitter.jpg" class="kg-image" alt="Video SDK Aug 23' Product Updates for Developers" loading="lazy" width="1881" height="1152"/></figure><h3 id="few-bug-fixes-that-you-should-not-miss">A few bug fixes that you should not miss</h3><p><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/release-notes" rel="noopener noreferrer">Android SDK</a></p><p><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/release-notes" rel="noopener noreferrer">Flutter SDK</a></p><h3 id="previous-releases">Previous Releases</h3><p>Please check out our previous months' updates and feature launches.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/tag/product-updates"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Video SDK Product Updates</div><div 
class="kg-bookmark-description">Stay up to date with Video SDK. Get our latest product updates like feature releases or special announcements. We update something special every month.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/favicons/android-icon-192x192.png" alt="Video SDK Aug 23' Product Updates for Developers"/><span class="kg-bookmark-author">Video SDK Product Updates</span><span class="kg-bookmark-publisher">Sagar Kava</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.videosdk.live/site-meta.png" alt="Video SDK Aug 23' Product Updates for Developers"/></div></a></figure><!--kg-card-begin: html--><h2 style="text-align:center;font-weight:bold;">We are always here for you.</h2><!--kg-card-end: html--><!--kg-card-begin: html--><div style="text-align:center;">Contact us for a demo, technical queries, support, or just say hi.</div><!--kg-card-end: html--><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<meta name="viewport" content="width=device-width, initial-scale=1"/>
	<style>
		.button {
			border-radius: 4px;
			background-color: #5f7afa;
			border: none;
			color: #ffffff;
			text-align: center;
			font-size: 24px;
			padding: 10px;
			width: 150px;
			transition: all 0.5s;
			cursor: pointer;
			margin: 5px;
		}

		.button span {
			cursor: pointer;
			display: inline-block;
			position: relative;
			transition: 0.5s;
		}

		.button span:after {
			content: '\00bb';
			position: absolute;
			opacity: 0;
			top: 0;
			right: -20px;
			transition: 0.5s;
		}

		.button:hover span {
			padding-right: 25px;
		}

		.button:hover span:after {
			opacity: 1;
			right: 0;
		}
	</style>
</head>

<body>
	<center>
		<a href="https://www.videosdk.live/contact">
			<button class="button"><span>Talk to us</span></button>
		</a>
	</center>
</body>

</html><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Video SDK June 23' Product Updates for Developers]]></title><description><![CDATA[We're excited to announce our latest monthly release! This month, we've packed in a bunch of new features.]]></description><link>https://www.videosdk.live/blog/video-sdk-june-23-product-updates-for-developers</link><guid isPermaLink="false">64a2b9fa8ecddeab7f17f3e5</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 03 Jul 2023 12:42:19 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/07/June-2023-updates.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/07/June-2023-updates.jpg" alt="Video SDK June 23' Product Updates for Developers"/><p>This month, our team focused on enhancing user engagement through interactive live streaming examples, delivering valuable content by creating tutorial blogs, and ensuring a smoother experience through timely bug fixes.</p><h2 id="added-javascript-documentation-for-interactive-live-streaming">Added JavaScript Documentation for Interactive Live Streaming</h2><p>We've introduced an interactive live streaming documentation for JavaScript, along with a comprehensive quickstart guide for seamless integration.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/JS-Docs.jpg" class="kg-image" alt="Video SDK June 23' Product Updates for Developers" loading="lazy" width="1254" height="768"/></figure><p>➡️ Read docs: <a href="https://docs.videosdk.live/javascript/guide/interactive-live-streaming/integrate-hls/overview">JavaScript</a></p><h2 id="introducing-our-angular-quickstart-guide-and-example-for-audio-video-calling-app">Introducing Our Angular Quickstart Guide and Example for Audio-Video Calling App</h2><p>Fasten up your development of Audio-Video calling app 
in Angular using our latest Angular quickstart guide and example.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Angular-1---2.jpg" class="kg-image" alt="Video SDK June 23' Product Updates for Developers" loading="lazy" width="1254" height="768"/></figure><p>➡️ Quickstart Guide : <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/angular/angular-js/quick-start">Angular</a></p><p>➡️ Example: <a href="https://github.com/videosdk-live/quickstart/tree/main/angular-rtc">GitHub Example</a></p><h2 id="expanded-compatibility-in-flutter-example">Expanded Compatibility in Flutter example</h2><p>Flutter example now supports desktop and web applications, providing our users with even more flexibility and reach.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/07/Flutter-Example.jpg" class="kg-image" alt="Video SDK June 23' Product Updates for Developers" loading="lazy" width="1254" height="768"/></figure><p>➡️ Check out the example: <a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example">Flutter</a></p><h2 id="new-tutorial-blogs-for-developers">New tutorial blogs for developers</h2><ul><li><a href="https://www.videosdk.live/blog/javascript-live-streaming">Build Interactive Live Streaming App - JavaScript</a></li><li><a href="https://www.videosdk.live/blog/android-java-interactive-live-streaming">Build Interactive Live Streaming App - Android</a></li><li><a href="https://www.videosdk.live/blog/1-on-1-video-chat">Build 1-to-1 Video Chat App - Android</a></li><li><a href="https://www.videosdk.live/blog/react-js-video-calling">Build Video Calling App - React</a></li></ul><h2 id="previous-releases">Previous Releases</h2><p>Please check out our previous months' updates and feature launches.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" 
href="https://www.videosdk.live/blog/tag/product-updates"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Video SDK Product Updates</div><div class="kg-bookmark-description">Stay up to date with Video SDK. Get our latest product updates like feature releases or special announcements. We update something special every month.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/favicons/android-icon-192x192.png" alt="Video SDK June 23' Product Updates for Developers"/><span class="kg-bookmark-author">Video SDK Product Updates</span><span class="kg-bookmark-publisher">Sagar Kava</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.videosdk.live/site-meta.png" alt="Video SDK June 23' Product Updates for Developers"/></div></a></figure><!--kg-card-begin: html--><h2 style="text-align:center;font-weight:bold;">We are always here for you.</h2><!--kg-card-end: html--><!--kg-card-begin: html--><div style="text-align:center;">Contact us for a demo, technical queries, support, or just say hi.</div><!--kg-card-end: html--><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<meta name="viewport" content="width=device-width, initial-scale=1"/>
	<style>
		.button {
			border-radius: 4px;
			background-color: #5f7afa;
			border: none;
			color: #ffffff;
			text-align: center;
			font-size: 24px;
			padding: 10px;
			width: 150px;
			transition: all 0.5s;
			cursor: pointer;
			margin: 5px;
		}

		.button span {
			cursor: pointer;
			display: inline-block;
			position: relative;
			transition: 0.5s;
		}

		.button span:after {
			content: '\00bb';
			position: absolute;
			opacity: 0;
			top: 0;
			right: -20px;
			transition: 0.5s;
		}

		.button:hover span {
			padding-right: 25px;
		}

		.button:hover span:after {
			opacity: 1;
			right: 0;
		}
	</style>
</head>

<body>
	<center>
		<a href="https://www.videosdk.live/contact">
			<button class="button"><span>Talk to us</span></button>
		</a>
	</center>
</body>

</html><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Video SDK May 23' Product Updates for Developers]]></title><description><![CDATA[We're excited to announce our latest monthly release! This month, we've packed in a bunch of new features.]]></description><link>https://www.videosdk.live/blog/video-sdk-may-23-product-updates-for-developers</link><guid isPermaLink="false">647080212c7661a49f38c1b9</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Wed, 07 Jun 2023 12:58:11 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/06/image.jpeg" medium="image"/><content:encoded><![CDATA[<h2 id="free-minutes-for-developers">Free minutes for Developers</h2><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/image.jpeg" alt="Video SDK May 23' Product Updates for Developers"/><p>You can now explore interactive live streaming and other add-on services under our Monthly Free Minutes Plan. This bonus enables you to explore and test these valuable services without incurring additional costs, allowing for greater experimentation and integration possibilities.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/Free-minutes-plan-1.jpg" class="kg-image" alt="Video SDK May 23' Product Updates for Developers" loading="lazy" width="1254" height="768"/></figure><h2 id="screen-share-support-in-flutter-sdk-for-desktop-web">Screen share support in Flutter SDK for desktop &amp; web</h2><p>The latest update in flutter SDK introduces the highly anticipated screen-sharing feature for desktop (Mac and Windows) and web. 
</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/Screenshare-in-Flutter-_final.jpg" class="kg-image" alt="Video SDK May 23' Product Updates for Developers" loading="lazy" width="1254" height="768"/></figure><p>➡️ Read docs: <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share">Flutter</a></p><h2 id="add-funds-and-credit-card-feature-for-an-uninterrupted-experience">Add funds and credit card feature for an uninterrupted experience!</h2><p>You can now conveniently add funds and manage your payment methods by adding credit cards. This feature ensures that you have an uninterrupted experience even after using your monthly free minutes.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/Add-fund.gif" class="kg-image" alt="Video SDK May 23' Product Updates for Developers" loading="lazy" width="800" height="490"/></figure><h2 id="javascript-documentation-revamp">JavaScript documentation revamp</h2><p>The JavaScript documentation for the Video SDKs has been thoroughly revamped, offering improved clarity and ease of use. 
The updated documentation provides comprehensive guidance and reference materials, making it even more convenient for developers to integrate.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/JS-Docs.jpg" class="kg-image" alt="Video SDK May 23' Product Updates for Developers" loading="lazy" width="1254" height="768"/></figure><p>➡️ Read docs: <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/concept-and-architecture">JavaScript</a></p><h2 id="interactive-live-streaming-in-ios-sdk">Interactive Live Streaming in iOS SDK</h2><p>Build an immersive live streaming platform utilizing the iOS SDK that empowers you with real-time interaction through video, voice, and chat on a large scale.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/ILS-in-iOS.jpg" class="kg-image" alt="Video SDK May 23' Product Updates for Developers" loading="lazy" width="1254" height="768"/></figure><p>➡️ Learn how: <a href="https://docs.videosdk.live/ios/api/sdk-reference/meeting-class/methods#startlivestream">iOS</a></p><h2 id="pin-participant-feature-in-ios-sdk">Pin participant feature in iOS SDK</h2><p>In large meetings, you can pin one or two participants on the main screen to help viewers focus on them.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/06/pin_unpin.jpg" class="kg-image" alt="Video SDK May 23' Product Updates for Developers" loading="lazy" width="1254" height="768"/></figure><h2 id="typescript-support">TypeScript support</h2><p>TypeScript support has been implemented, empowering developers working with JavaScript, React, and React Native.</p><h3 id="previous-releases">Previous Releases</h3><p>Please check out our previous months' updates and feature launches.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" 
href="https://www.videosdk.live/blog/tag/product-updates"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Video SDK Product Updates</div><div class="kg-bookmark-description">Stay up to date with Video SDK. Get our latest product updates, like feature releases and special announcements. We ship something special every month.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/favicons/android-icon-192x192.png" alt="Video SDK May 23' Product Updates for Developers"/><span class="kg-bookmark-author">Video SDK Product Updates</span><span class="kg-bookmark-publisher">Sagar Kava</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.videosdk.live/site-meta.png" alt="Video SDK May 23' Product Updates for Developers"/></div></a></figure><!--kg-card-begin: html--><h2 style="text-align:center;font-weight:bold;">We are always here for you.</h2><!--kg-card-end: html--><!--kg-card-begin: html--><div style="text-align:center;">Contact us for a demo, technical queries, support, or just to say hi.</div><!--kg-card-end: html--><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<meta name="viewport" content="width=device-width, initial-scale=1"/>
	<style>
		.button {
			border-radius: 4px;
			background-color: #5f7afa;
			border: none;
			color: #ffffff;
			text-align: center;
			font-size: 24px;
			padding: 10px;
			width: 150px;
			transition: all 0.5s;
			cursor: pointer;
			margin: 5px;
		}

		.button span {
			cursor: pointer;
			display: inline-block;
			position: relative;
			transition: 0.5s;
		}

		.button span:after {
			content: '\00bb';
			position: absolute;
			opacity: 0;
			top: 0;
			right: -20px;
			transition: 0.5s;
		}

		.button:hover span {
			padding-right: 25px;
		}

		.button:hover span:after {
			opacity: 1;
			right: 0;
		}
	</style>
</head>

<body>
	<center>
		<a href="https://www.videosdk.live/contact">
			<button class="button"><span>Talk to us</span></button>
		</a>
	</center>
</body>

</html><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Video SDK April 23' Product Updates for Developers]]></title><description><![CDATA[This month has been very exciting for us. We have so much good news to share with our users.]]></description><link>https://www.videosdk.live/blog/video-sdk-april-23-product-updates-for-developers</link><guid isPermaLink="false">6450930d2c7661a49f38c03b</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Thu, 04 May 2023 13:30:58 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/05/April-2023-update--1-.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="we-now-support-flutter-webbeta-version">We now support Flutter Web (Beta Version)</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/05/Flutter-Release-Beta.png" class="kg-image" alt="Video SDK April 23' Product Updates for Developers" loading="lazy" width="3762" height="2304"/></figure><img src="https://assets.videosdk.live/static-assets/ghost/2023/05/April-2023-update--1-.jpg" alt="Video SDK April 23' Product Updates for Developers"/><p>Leverage the power and versatility of Flutter to create and deploy high-quality web applications. 
</p><p>➡️ Read: <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/concept-and-architecture">Flutter SDK docs</a></p><h3 id="documentation-of-interactive-live-streaming-for-seamless-integration">Documentation of Interactive Live Streaming for seamless integration</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/05/HLS-Docs.png" class="kg-image" alt="Video SDK April 23' Product Updates for Developers" loading="lazy" width="3762" height="2304"/></figure><p>Comprehensive documentation now helps developers effortlessly integrate Interactive Live Streaming into their platform.</p><p>➡️ Read: <a href="https://docs.videosdk.live/">Documentation</a></p><h3 id="gain-deeper-insights-into-meetings-live-streaming-hls-and-rtmp-simulcast-on-dashboard">Gain deeper insights into Meetings, Live Streaming (HLS), and RTMP Simulcast on the dashboard</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/05/HLS-Session-Data.png" class="kg-image" alt="Video SDK April 23' Product Updates for Developers" loading="lazy" width="3762" height="2304"/></figure><p>Get detailed data on services like conference meetings, live streaming, and RTMP Simulcast, including insights into configuration, webhook status, file URLs, and more.</p><h3 id="launch-week">Launch week</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/05/Product-Hunt.png" class="kg-image" alt="Video SDK April 23' Product Updates for Developers" loading="lazy" width="3762" height="2304"/></figure><p>We hosted a 5-day launch week event from April 10th to April 14th. 
Each day, we highlighted a different use case of our Interactive Live Streaming SDK across multiple platforms.</p><p>Watch now: <a href="https://youtube.com/playlist?list=PLrujdOR6BS_3z7ng7YZ6CSVUvMElCHxHI">Launch Week Sessions</a></p><h3 id="we-raised-12m-to-build-worlds-most-reliable-live-video-infrastructure-with-generative-ai">We raised $1.2M to build the world's most reliable live video infrastructure with Generative AI</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/05/Funding-banner.png" class="kg-image" alt="Video SDK April 23' Product Updates for Developers" loading="lazy" width="3762" height="2304"/></figure><p>We're excited to share that <a href="https://www.videosdk.live/">Video SDK</a> has secured $1.2 million in funding from GVFL and other strategic investors to drive the future of live video using generative AI, advanced video compression, and media routing technologies.</p><p>➡️ Read: <a href="https://www.videosdk.live/blog/seed-round">Full Story</a></p><h3 id="we-are-hiring-rockstar-developers-and-growth-team-membersjoin-our-amazing-team">We are hiring rockstar developers and growth team members - Join our AMAZING TEAM</h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/05/Join-our-team.png" class="kg-image" alt="Video SDK April 23' Product Updates for Developers" loading="lazy" width="3762" height="2304"/></figure><p>➡️ Apply Now: <a href="https://jobs.videosdk.live/jobs/Careers">View all jobs</a></p><h3 id="previous-releases"><strong>Previous Releases:</strong></h3><p>Please check out our previous months' updates and feature launches.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/tag/product-updates"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Video SDK Product Updates</div><div class="kg-bookmark-description">Stay up to date with Video SDK. 
Get our latest product updates, like feature releases and special announcements. We ship something special every month.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/favicons/android-icon-192x192.png" alt="Video SDK April 23' Product Updates for Developers"/><span class="kg-bookmark-author">Video SDK Product Updates</span><span class="kg-bookmark-publisher">Sagar Kava</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.videosdk.live/site-meta.png" alt="Video SDK April 23' Product Updates for Developers"/></div></a></figure><!--kg-card-begin: html--><h2 style="text-align:center;font-weight:bold;">We are always here for you.</h2><!--kg-card-end: html--><!--kg-card-begin: html--><div style="text-align:center;">Contact us for a demo, technical queries, support, or just to say hi.</div><!--kg-card-end: html--><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<meta name="viewport" content="width=device-width, initial-scale=1"/>
	<style>
		.button {
			border-radius: 4px;
			background-color: #5f7afa;
			border: none;
			color: #ffffff;
			text-align: center;
			font-size: 24px;
			padding: 10px;
			width: 150px;
			transition: all 0.5s;
			cursor: pointer;
			margin: 5px;
		}

		.button span {
			cursor: pointer;
			display: inline-block;
			position: relative;
			transition: 0.5s;
		}

		.button span:after {
			content: '\00bb';
			position: absolute;
			opacity: 0;
			top: 0;
			right: -20px;
			transition: 0.5s;
		}

		.button:hover span {
			padding-right: 25px;
		}

		.button:hover span:after {
			opacity: 1;
			right: 0;
		}
	</style>
</head>

<body>
	<center>
		<a href="https://www.videosdk.live/contact">
			<button class="button"><span>Talk to us</span></button>
		</a>
	</center>
</body>

</html><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[We raised $1.2M to build world's most reliable live video infrastructure with Generative AI]]></title><description><![CDATA[We extend our sincere thanks to our customers, team, and investors for their unwavering support.]]></description><link>https://www.videosdk.live/blog/seed-round</link><guid isPermaLink="false">6448e7792c7661a49f388fa2</guid><category><![CDATA[ANNOUNCEMENT]]></category><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Wed, 26 Apr 2023 14:04:38 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/04/Funding-banner.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/04/Funding-banner.jpg" alt="We raised $1.2M to build world's most reliable live video infrastructure with Generative AI"/><p>I'm excited to share that <a href="https://www.videosdk.live/">Video SDK</a> has secured $1.2 million in funding from GVFL and other strategic investors to drive the future of live video using generative AI, advanced video compression, and media routing technologies.</p><h2 id="future-of-video-is-immersive-and-interactive">The future of video is immersive and interactive.</h2><p>Building a live video product is complex. With cameras now built into every new device, from VR headsets to wearables and smart cameras, going live is easy, but building a reliable live video app is hard for developers.</p><p>Video SDK is here to help you. 
We are empowering 10,000+ developers, 5+ public limited companies, 3+ unicorns, 50+ mid-sized companies, and 300+ startups, including renowned names like FYND (frolic), Judge Group, ICICI Bank, Radius Insights, Alan, and many more.</p><p>Integrating live video into their applications has boosted companies' end-user engagement, collaboration, connectivity, and communication, leading to improved user satisfaction and retention, all within just a few days.</p><blockquote>Our mission is to help developers "build interactive and immersive live video experiences."</blockquote><p>Building live video infrastructure is challenging, but we took the bold step from day one and it has paid off. We recognized the complexity of the problem, including issues with real-time video reliability, compatibility with modern/legacy devices, and inconsistency due to public ISPs and congested networks.</p><p>We are the first company working to change the fundamental building blocks of live audio and video over the internet. We aim to create robust live video infrastructure that uses generative AI to compress audio and video with neural encoder-decoders, solving reliability and compatibility.</p><p>In addition, our QUIC-based two-way live streaming and global media routing algorithms, independent of ISPs, ensure a seamless live video experience for all our customers, even in congested and inconsistent networks.</p><h2 id="new-network-verticals-and-devices-are-going-to-give-more-power-to-developers">New networks, verticals, and devices are going to give more power to developers</h2><p>As networks like 5G and Fiber become more capable, and new devices like smart cameras in mobiles, cameras in VR headsets, and wearables become widespread, the use of live video applications is expected to skyrocket. 
This trend will impact traditional markets such as education, banking, customer engagement, and telecommunication, as well as emerging verticals like gaming, metaverse, live commerce, and fantasy e-sports.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/PR-image.jpg" class="kg-image" alt="We raised $1.2M to build world's most reliable live video infrastructure with Generative AI" loading="lazy" width="3840" height="2760"/></figure><p>We're investing in live video infrastructure to serve as a backbone for every developer's live app. We believe that now is the perfect time to replicate reality virtually.</p><h3 id="investing-in-the-future-with-our-investment">Investing in the future with this investment.</h3><p>We are laser-focused on creating next-generation live video infrastructure that sets the standard for enterprise-grade solutions. Our ambitious goal is to develop a cutting-edge platform that delivers unparalleled performance, reliability, and scalability to meet the demands of the future. </p><p>The investment will be used to further research, develop, and enhance Video SDK's protocols and infrastructure for audio and video conferencing and interactive live streaming. Our vision is to build an infrastructure that replicates reality virtually.</p><blockquote>Our vision is to build infrastructure that helps to "replicate reality virtually."</blockquote><p>Our aim is to turn cutting-edge generative AI research into reality. We're actively recruiting the brightest minds in the field to lead our advanced research and development efforts.</p><h3 id="story-behind-the-scene">The story behind the scenes</h3><p>As a founder of the company, I have been invested in video technology for many years. 
Our core team possesses deep expertise in video technology, backed by cutting-edge research capabilities.</p><p>Our team has successfully completed complex research projects at scale, including identifying sellable products from wild video datasets using triangular loss, recognizing faces and paths of individuals in wild CCTV footage using deep learning, developing a large-scale video recommendation engine for social learning videos with motion-based transformers, and designing a large-scale video encoding pipeline with AV1 and AVIF.</p><p>Our research capabilities have allowed us to approach the problem from a unique perspective and develop a robust live video infrastructure for developers.</p><p>The idea for Video SDK came to us while working on our previous venture, Zujo - a Social Learning Platform. We spent months building a live video experience, but we were not satisfied with the outcome. It lacked compatibility with all devices, and reliability over congested and inconsistent networks was a major concern.</p><p>After discussing these concerns with many developers and validating them, we realized that none of the current solutions were capable of solving this problem. That's when we started working on it as a hunch at night. In just a few months, we had a proof of concept (POC) ready to be tested, and we shared it with developers for feedback. The response was overwhelmingly positive, and they loved it.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/05/research.jpg" class="kg-image" alt="We raised $1.2M to build world's most reliable live video infrastructure with Generative AI" loading="lazy" width="3840" height="2481"/></figure><p>With our research-first approach, we are building a generative AI cloud for live video, including state-of-the-art research on neural audio compression with encoder-decoders and upscaling video from lower to higher resolution on the fly. 
<br/><br/>Apart from this, we are building QUIC-based two-way media streaming and a global media packet routing infrastructure using machine learning and customized congestion control algorithms, independent of Internet Service Providers. These algorithms are specifically designed for delivering large-scale media at low latency.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/team-member-logo--1-.jpg" class="kg-image" alt="We raised $1.2M to build world's most reliable live video infrastructure with Generative AI" loading="lazy" width="5120" height="3680"/></figure><p>Our fresh approach to solving legacy problems has been successful, and we are currently serving thousands of companies, delivering the best live video experience. Our research-first approach has enabled us to build the most reliable live video delivery protocol, and this is just the beginning of our journey.</p><h3 id="hear-from-our-customers-and-investors">Hear from our customers and investors</h3><p>Video SDK was awarded Most Innovative Technology Product of the Year 2022 by Product Hunt and featured as the Product of the Day on the platform. 
</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/client_logo.jpg" class="kg-image" alt="We raised $1.2M to build world's most reliable live video infrastructure with Generative AI" loading="lazy" width="3840" height="1710"/></figure><p>We have a customer rating of 4.9 out of 5 on platforms like Product Hunt, Capterra, and G2, reflecting consistently positive user feedback.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/client-testimonial--1-.jpg" class="kg-image" alt="We raised $1.2M to build world's most reliable live video infrastructure with Generative AI" loading="lazy" width="3840" height="1710"/></figure><blockquote>"We are excited to support Video SDK's vision of building cutting-edge live video infrastructure for developers. Their innovative use of generative AI and machine learning to enhance video quality and performance is truly groundbreaking. We believe in the capabilities of their executive team and the momentum they have in the market."<br/>Kamal Bansal, Managing Director at GVFL Ventures.</blockquote><h3 id="about-video-sdk-and-how-it-can-help-you">About Video SDK and how it can help you</h3><p>Video SDK is a comprehensive live video infrastructure that offers a wide range of features, such as real-time video conferencing, video chat, screen sharing, cloud recording, interactive live streaming, and more. These features are accessible through a robust set of SDKs (Software Development Kits).</p><p>We empower developers with highly reliable and customizable live video experiences, supporting all major platforms such as web, desktop, and mobile. 
Our solution is cross-channel and easily customizable, with support for popular programming languages including JavaScript, React, React Native, Flutter, Android Native, iOS Native, and many more.</p><p>Video SDK offers a pay-as-you-go, usage-based pricing model, eliminating the need for per-license payments. Our platform is highly scalable, customizable, and secure, making it suitable for businesses of all sizes and industries, from startups to enterprises.</p><h3 id="last-but-not-least-we-are-hiring">Last but not least, we are hiring!</h3><p>We are thrilled to announce that we are actively seeking exceptional talent to join our team and be part of our ambitious mission to build a large-scale live video infrastructure. </p><p>We are passionate about pushing the boundaries of what's possible with video technology, and we need brilliant minds to help us achieve our vision. As a team, we are dedicated to driving cutting-edge research initiatives and developing innovative solutions that will revolutionize the live video industry. </p><p>If you share our excitement and have expertise in areas such as video processing, real-time communications, or deep learning, we want to hear from you! Join us on this exhilarating journey as we shape the future of live video and make a significant impact in the industry. 
</p><p>Reach out to us at <a href="mailto:founders@videosdk.live">founders@videosdk.live</a> or apply directly at <a href="https://jobs.videosdk.live/jobs/Careers">jobs</a>.</p><p><strong><em>Be a part of such success stories right away - </em></strong><a href="https://www.videosdk.live/signup" rel="noopener noreferrer"><strong><em>Sign up</em></strong></a><strong><em> for a free trial or </em></strong><a href="https://www.videosdk.live/contact" rel="noopener noreferrer"><strong><em>book a demo</em></strong></a><strong><em> of Video SDK now!</em></strong></p>]]></content:encoded></item><item><title><![CDATA[Launch Week]]></title><description><![CDATA[To celebrate the launch of our new product, we're hosting a 5-day launch week event from April 10th to April 14th. Each day, we'll be highlighting a different version of our Interactive Live Streaming SDK.]]></description><link>https://www.videosdk.live/blog/launch-week</link><guid isPermaLink="false">6429c9362c7661a49f381db2</guid><category><![CDATA[launch week]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Thu, 06 Apr 2023 08:45:21 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/04/ILS_PH--3-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://assets.videosdk.live/static-assets/ghost/2023/04/ILS_PH--3-.jpg" alt="Launch Week"/><p>The world of live streaming has come a long way in recent years, with millions of people tuning in to watch events, shows, and concerts from all over the world. As the demand for high-quality and engaging live streaming experiences continues to grow, developers are constantly seeking new ways to create more interactive and immersive experiences for their audiences. 
</p><p>Our team has been working tirelessly to bring you this cutting-edge SDK, which allows developers to create live streaming experiences that are truly immersive and engaging.</p><p>That's where Video SDK comes in with its brand new product - the "<strong><a href="https://www.videosdk.live/live-streaming-api-sdk">Interactive Live Streaming SDK</a></strong>". This SDK offers a seamless live streaming experience with advanced features like interactive chat, polls, and Q&amp;A to engage with your audience in near real-time. Plus, our customizable and easy-to-use APIs make it easy to integrate into your existing platform.<br/><br/>We know that live streaming has become an essential part of how we consume and share content online. Our goal with this SDK is to empower you to take your live streaming game to the next level and connect with your audience in a whole new way.</p><p>To celebrate the launch of our new product, we're hosting a 5-day launch week event from April 10th to April 14th. Each day, we'll be highlighting a different version of our Interactive Live Streaming SDK and providing a live demo, along with answering any questions you may have. Here's a breakdown of what to expect each day:</p><h3 id="10-aprilmonday"><strong>10 April - Monday</strong></h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/Product-Hunt-Post_landscape-1.jpg" class="kg-image" alt="Launch Week" loading="lazy" width="1270" height="760"/></figure><p><strong>Day 1: </strong>Interactive Live Streaming SDK on Product Hunt. On the first day of our launch week, we'll be featuring our Interactive Live Streaming SDK, so <a href="https://www.producthunt.com/products/video-sdk"><strong>FOLLOW</strong></a> us on Product Hunt. 
</p><ul><li>Product Hunt launch: 10 April, 12:30 (IST). <a href="https://www.producthunt.com/products/video-sdk">Follow this page</a></li></ul><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/ILS_PH-2.jpg" class="kg-image" alt="Launch Week" loading="lazy" width="1600" height="900"/></figure><!--kg-card-begin: html--><a href="https://lu.ma/event/evt-AZK4gBvbYcl13tv" class="luma-checkout--button" data-luma-action="checkout" data-luma-event-id="evt-AZK4gBvbYcl13tv">
	
One-Click Register
</a>

<script id="luma-checkout" src="https://embed.lu.ma/checkout-button.js"/><!--kg-card-end: html--><p>Join us every day from April 11 to 14 for exclusive content that will elevate your development game:</p><ul><li><strong>Discover</strong> why interactive live streaming is crucial for Android developers in the video space, and how Video SDK delivers the lowest-latency interactive streaming experience.</li><li><strong>Unleash</strong> your potential with innovative live video apps and modern video infrastructure, giving you a competitive edge.</li><li><strong>Learn from developers</strong> how to craft stunning, interactive video experiences with custom layouts and animated visuals, such as toasts, polls, queues, and watermarks.</li></ul><p>Connect with other video developers, including Video SDK developers and those creating video experiences. Acquire expertise in interactive live streaming using Video SDK and gain a competitive advantage. This exclusive material will enable you to reach a global audience of thousands. Don't miss out on this opportunity to expand your skills and network with fellow developers; to <strong>join the event, please register via the links below</strong>.</p><h3 id="11-apriltuesday"><strong>11 April - Tuesday</strong></h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/ILS_React-1.jpg" class="kg-image" alt="Launch Week" loading="lazy" width="1600" height="900"/></figure><p><strong>Day 2: </strong></p><p>What's in it for you?</p><ul><li>Learn how interactive live streaming works and why it is so popular</li><li>Learn how to deploy &amp; integrate the React.js Video SDK</li><li>Learn how to add features to the React SDK - recordings, interactive chat, HD screen sharing, etc.</li></ul><p>Excited? 
Then don't wait!</p><!--kg-card-begin: html--><a href="https://lu.ma/event/evt-vnzi5JhZ9HGIWme" class="luma-checkout--button" data-luma-action="checkout" data-luma-event-id="evt-vnzi5JhZ9HGIWme">
One-Click Register
</a>

<script id="luma-checkout" src="https://embed.lu.ma/checkout-button.js"/><!--kg-card-end: html--><h3 id="12-aprilwednesday"><strong>12 April - Wednesday</strong></h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/ILS_React-Native.jpg" class="kg-image" alt="Launch Week" loading="lazy" width="1600" height="900"/></figure><p><strong>Day 3:</strong></p><p>What's in it for you?</p><ul><li>Learn how interactive live streaming works and why it is so popular</li><li>Learn how to deploy &amp; integrate the React Native Video SDK (Android &amp; iOS)</li><li>Learn how to add features to the React Native SDK - recordings, interactive chat, HD screen sharing, etc.</li></ul><p>Excited? Then don't wait!</p><!--kg-card-begin: html--><a href="https://lu.ma/event/evt-f0lMAO9uE73wdak" class="luma-checkout--button" data-luma-action="checkout" data-luma-event-id="evt-f0lMAO9uE73wdak">
One-Click Register
</a>

<script id="luma-checkout" src="https://embed.lu.ma/checkout-button.js"/><!--kg-card-end: html--><h3 id="13-aprilthursday"><strong>13 April - Thursday</strong></h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/ILS_Flutter.jpg" class="kg-image" alt="Launch Week" loading="lazy" width="1600" height="900"/></figure><p><strong>Day 4:</strong></p><p>What's in it for you?</p><ul><li>Learn how interactive live streaming works and why it is so popular</li><li>Learn how to deploy &amp; integrate the Flutter Video SDK (Android &amp; iOS)</li><li>Learn how to add features to the Flutter SDK - recordings, interactive chat, HD screen sharing, etc.</li></ul><p>Excited? Then don't wait!</p><!--kg-card-begin: html--><a href="https://lu.ma/event/evt-1HLSxqcD46K4f7C" class="luma-checkout--button" data-luma-action="checkout" data-luma-event-id="evt-1HLSxqcD46K4f7C">
One-Click Register
</a>

<script id="luma-checkout" src="https://embed.lu.ma/checkout-button.js"/><!--kg-card-end: html--><h3 id="14-aprilfriday"><strong>14 April - Friday</strong></h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/ILS_Android.jpg" class="kg-image" alt="Launch Week" loading="lazy" width="1600" height="900"/></figure><p><strong>Day 5:</strong></p><p>What's in it for you?</p><ul><li>Learn how interactive live streaming works and why it is so popular</li><li>Learn how to deploy &amp; integrate the Android Video SDK.</li><li>Learn how to add features to the Android SDK - recordings, interactive chat, HD screen sharing, etc.</li></ul><p>Excited? Then don't wait!</p><!--kg-card-begin: html--><a href="https://lu.ma/event/evt-lf5LrClsE5CJ2DV" class="luma-checkout--button" data-luma-action="checkout" data-luma-event-id="evt-lf5LrClsE5CJ2DV">
One-Click Register
</a>

<script id="luma-checkout" src="https://embed.lu.ma/checkout-button.js"/><!--kg-card-end: html--><p>During the launch week event, we're also offering <a href="https://www.videosdk.live/signup">10,000 free streaming minutes when you sign up</a>, so you can experience the future of interactive live streaming for yourself. This is a great opportunity to get hands-on experience with our SDK and see how it can help you create more engaging and immersive live streaming experiences for your audience.</p><p>The launch of our Interactive Live Streaming SDK is a significant milestone for Video SDK. We believe that this product has the potential to revolutionize the world of live streaming by offering developers an easy-to-use and customizable solution for creating more interactive and engaging experiences. We're excited to showcase our product during our launch week event and look forward to seeing how it will help developers take their live streaming to the next level.</p>]]></content:encoded></item><item><title><![CDATA[Video SDK March 23' Product Updates for Developers]]></title><description><![CDATA[This week we launch one of our highly requested features: Teams and Workspaces. 
You can now create workspaces on the Video SDK Dashboard, and invite team members to them.]]></description><link>https://www.videosdk.live/blog/video-sdk-march-23-product-updates-for-developers</link><guid isPermaLink="false">642e46f92c7661a49f3825ca</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Thu, 06 Apr 2023 06:15:18 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/04/March-2023-update.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="team-invite-and-organisation-feature"><strong>Team Invite and Organisation Feature</strong></h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/Add-a-heading--1-.png" class="kg-image" alt="Video SDK March 23' Product Updates for Developers" loading="lazy" width="1920" height="1080"/></figure><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/March-2023-update.jpg" alt="Video SDK March 23' Product Updates for Developers"/><p>This week we launch one of our most highly requested features: Teams and Organisations. You can now create organisations on the Video SDK Dashboard and <a href="https://app.videosdk.live/profile/teams">invite team members</a> to them. With this, collaboration becomes easier for a range of use cases: for instance, your team doesn’t have to juggle different accounts for your production and development environments. Set it up with the following steps, and you're good to go:</p><!--kg-card-begin: markdown--><table>
	<tr>
		<th>Role Type</th>
		<th>Rooms (Create API or Project)</th>
		<th>Invite team members</th>
		<th>Workspace Settings</th>
		<th>Billing</th>
	</tr>
	<tr>
		<td>Organization Owner</td>
		<td>✔</td>
		<td>✔</td>
		<td>✔</td>
		<td>✔</td>
	</tr>
	<tr>
		<td>Developer</td>
		<td>✔</td>
		<td>✔</td>
		<td>✘</td>
		<td>✘</td>
	</tr>
	<tr>
		<td>Billing Manager (coming soon)</td>
		<td>✘</td>
		<td>✔</td>
		<td>✔</td>
		<td>✔</td>
	</tr>
</table><!--kg-card-end: markdown--><p>➡️ Get started by <a href="https://app.videosdk.live/profile/teams">inviting members</a> to your organisation.</p><h3 id="hls-sample-code-for-flutter"><strong>HLS Sample Code for Flutter</strong></h3><p>Flutter example code that illustrates how to build a live streaming app using HLS with features like chat, raise hand, and inviting viewers to the livestream.</p><p>➡️ Clone Now →: <a href="https://github.com/videosdk-live/videosdk-hls-flutter-sdk-example">HLS example in Flutter</a></p><h3 id="hls-sample-code-for-react-native"><strong>HLS Sample Code for React-Native</strong></h3><p>React-Native example code that illustrates how to build a live streaming app using HLS with features like chat, raise hand, and inviting viewers to the livestream.</p><p>➡️ Clone Now →: <a href="https://github.com/videosdk-live/videosdk-hls-react-native-sdk-example">HLS example in React-Native</a></p><h3 id="what%E2%80%99s-next"><strong>What’s next?</strong></h3><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/04/Product-Hunt-Post_landscape.jpg" class="kg-image" alt="Video SDK March 23' Product Updates for Developers" loading="lazy" width="1270" height="760"/></figure><p>We are launching our Interactive Live Streaming SDK on Product Hunt on Monday, April 10th. <a href="https://www.producthunt.com/products/video-sdk"><strong>FOLLOW</strong></a> us on Product Hunt.</p><p>To celebrate the launch of our new product, the Interactive Live Streaming SDK, we're hosting a 5-day launch week event from April 10th to April 14th. Each day, we'll be highlighting a different version of our Interactive Live Streaming SDK and providing a live demo, along with answering any questions you may have. 
Here's a breakdown of what to expect each day: </p><p><a href="https://lu.ma/videosdk"><strong>One-Click Register</strong></a></p><h3 id="previous-releases"><strong>Previous Releases:</strong></h3><p>Please check out our previous months' updates and feature launches.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/tag/product-updates"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Video SDK Product Updates</div><div class="kg-bookmark-description">Stay up to date with Video SDK. Get our latest product updates like features release or special announcements. We update something special every month.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/favicons/android-icon-192x192.png" alt="Video SDK March 23' Product Updates for Developers"><span class="kg-bookmark-author">Video SDK Product Updates</span><span class="kg-bookmark-publisher">Sagar Kava</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.videosdk.live/site-meta.png" alt="Video SDK March 23' Product Updates for Developers"/></div></a></figure><!--kg-card-begin: html--><h2 style="text-align:center;font-weight:bold;">We are always here for you.</h2><!--kg-card-end: html--><!--kg-card-begin: html--><div style="text-align:center;">Contact us for a demo, technical queries, support, or just say hi ?.</div><!--kg-card-end: html--><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<style>
		.button {
			border-radius: 4px;
			background-color: #5f7afa;
			border: none;
			color: #ffffff;
			text-align: center;
			font-size: 24px;
			padding: 10px;
			width: 150px;
			transition: all 0.5s;
			cursor: pointer;
			margin: 5px;
		}

		.button span {
			cursor: pointer;
			display: inline-block;
			position: relative;
			transition: 0.5s;
		}

		.button span:after {
			content: '\00bb';
			position: absolute;
			opacity: 0;
			top: 0;
			right: -20px;
			transition: 0.5s;
		}

		.button:hover span {
			padding-right: 25px;
		}

		.button:hover span:after {
			opacity: 1;
			right: 0;
		}
	</style>
</head>

<body>
	<center>
		<a href="https://www.videosdk.live/contact">
			<button class="button"><span>Talk to us</span></button>
		</a>
	</center>
</body>

</html><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Video SDK's February 2023 Product Update]]></title><description><![CDATA[We’ve been hard at work these past few weeks improving our code samples to help make VideoSDK even more valuable for you. We hope you enjoy them as much as we do.]]></description><link>https://www.videosdk.live/blog/video-sdk-february-2023-product-update</link><guid isPermaLink="false">64017f922c7661a49f380a2e</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Fri, 03 Mar 2023 15:05:36 GMT</pubDate><media:content url="https://assets.videosdk.live/static-assets/ghost/2023/03/February-2023.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="februarymonthly-update">February - Monthly Update</h2><h2 id="control-your-network-bandwidth-in-prebuilt-sdk">Control your network bandwidth in Prebuilt SDK</h2><img src="https://assets.videosdk.live/static-assets/ghost/2023/03/February-2023.jpg" alt="Video SDK's February 2023 Product Update"/><p>Do you find yourself in a low network area, or want to reduce bandwidth consumption? Now you have the power to adjust the video resolution according to your network conditions.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/03/SD_HD_Resolution.jpg" class="kg-image" alt="Video SDK's February 2023 Product Update" loading="lazy" width="1254" height="768"/></figure><p><strong>➡️ Read docs: <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/release-notes#v0329">v0.3.29</a></strong></p><h2 id="mirror-your-camera-view-in-prebuilt-sdk">Mirror your camera view in Prebuilt SDK</h2><p>When sharing your camera, you can reverse the image of yourself by enabling the “Mirror View”. 
This is helpful if you are presenting something in the background (like a whiteboard) or your virtual background includes text.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/03/Mirror-View.jpg" class="kg-image" alt="Video SDK's February 2023 Product Update" loading="lazy" width="1254" height="768"/></figure><p><strong>➡️ Read docs: <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/release-notes#v0329">v0.3.29</a></strong></p><h2 id="retrieve-events-related-to-meeting-connections-in-prebuilt-sdk">Retrieve events related to meeting connections in Prebuilt SDK</h2><p>Get the events notification when you switch the networks or become disconnected in the Prebuilt SDK.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/03/network-reconnection.jpg" class="kg-image" alt="Video SDK's February 2023 Product Update" loading="lazy" width="1254" height="768"/></figure><p><strong>➡️ Read docs: <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/release-notes#v0329">v0.3.29</a></strong></p><h2 id="%E2%80%9Cmultistream%E2%80%9D-parameter-for-great-video-quality-experience-in-1-1-meeting">“Multistream” parameter for great video quality experience in 1-1 meeting</h2><p>Get the best quality video experience in 1-1 video meeting by receiving only the HD stream.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/03/multistrea_false_poster.jpg" class="kg-image" alt="Video SDK's February 2023 Product Update" loading="lazy" width="1254" height="768"/></figure><p><strong>➡️ Read docs: <a href="https://docs.videosdk.live/prebuilt/api/sdk-reference/parameters/advance-parameters/customize-audio-video#multistream">Multistream</a></strong></p><h2 id="hls-example-in-android">HLS example in Android</h2><p>Android example that illustrate how to build a Live streaming 
app using HLS with features like reactions, live commerce etc.</p><figure class="kg-card kg-image-card"><img src="https://assets.videosdk.live/static-assets/ghost/2023/03/hls_example.jpg" class="kg-image" alt="Video SDK's February 2023 Product Update" loading="lazy" width="1254" height="768"/></figure><p><strong>➡️ Clone Now →: <a href="https://github.com/videosdk-live/videosdk-hls-android-java-example">HLS example in Android</a></strong></p><h2 id="what%E2%80%99s-next">What’s next?</h2><h3 id="team-invite-feature">Team invite feature</h3><p>Create your organization, invite team members and build together.</p><h3 id="previous-releases">Previous Releases:</h3><p>Please check out our previous months' updates and feature launches.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/tag/product-updates"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Video SDK Product Updates</div><div class="kg-bookmark-description">Stay up to date with Video SDK. Get our latest product updates like features release or special announcements. We update something special every month.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/favicons/android-icon-192x192.png" alt="Video SDK's February 2023 Product Update"><span class="kg-bookmark-author">Video SDK Product Updates</span><span class="kg-bookmark-publisher">Sagar Kava</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.videosdk.live/site-meta.png" alt="Video SDK's February 2023 Product Update"/></div></a></figure><!--kg-card-begin: html--><h2 style="text-align:center;font-weight:bold;">We are always here for you.</h2><!--kg-card-end: html--><!--kg-card-begin: html--><div style="text-align:center;">Contact us for a demo, technical queries, support, or just say hi ?.</div><!--kg-card-end: html--><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<style>
		.button {
			border-radius: 4px;
			background-color: #5f7afa;
			border: none;
			color: #ffffff;
			text-align: center;
			font-size: 24px;
			padding: 10px;
			width: 150px;
			transition: all 0.5s;
			cursor: pointer;
			margin: 5px;
		}

		.button span {
			cursor: pointer;
			display: inline-block;
			position: relative;
			transition: 0.5s;
		}

		.button span:after {
			content: '\00bb';
			position: absolute;
			opacity: 0;
			top: 0;
			right: -20px;
			transition: 0.5s;
		}

		.button:hover span {
			padding-right: 25px;
		}

		.button:hover span:after {
			opacity: 1;
			right: 0;
		}
	</style>
</head>

<body>
	<center>
		<a href="https://www.videosdk.live/contact">
			<button class="button"><span>Talk to us</span></button>
		</a>
	</center>
</body>

</html><!--kg-card-end: html--><hr/>]]></content:encoded></item><item><title><![CDATA[Video SDK's January 2023 Product Update]]></title><description><![CDATA[We can’t thank you enough for voting for us, and for the amazing support you’ve given us over the past year. We Won the 2022 Golden Kitty Award for the Most innovative technology category. ???]]></description><link>https://www.videosdk.live/blog/video-sdk-january-2023-product-update</link><guid isPermaLink="false">63d91b0fbd44f53bde5d1e1e</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Wed, 01 Feb 2023 07:11:57 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2023/02/January-2023-update.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="introducing-call-trigger-in-react-native-sdk">Introducing Call Trigger in React Native SDK </h3><img src="http://assets.videosdk.live/static-assets/ghost/2023/02/January-2023-update.jpg" alt="Video SDK's January 2023 Product Update"/><p>Call kit enables you to make or receive VoIP calls on their Android or iOS smartphone Take a look at: <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-call-trigger-example">Call Trigger</a></p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Call-Trigger--1-.jpg" class="kg-image" alt="Video SDK's January 2023 Product Update" loading="lazy" width="1254" height="768"/></figure><h3 id="call-statistics-added-in-android-react-native-flutter-sdk">Call statistics added in Android, React Native &amp; Flutter SDK</h3><p>Methods are added so you can fetch the real-time data for the participants' media streams during an ongoing meeting, including jitter, bitrate, packet loss, latency, etc.</p><p> ➡️ Read docs: →<a href="https://docs.videosdk.live/android/api/sdk-reference/participant-class/methods#getvideostats">Android</a>, <a 
href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/methods#getvideostats">React Native</a>, <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/methods#getvideostats">Flutter</a></p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Call-stats--1-.jpg" class="kg-image" alt="Video SDK's January 2023 Product Update" loading="lazy" width="1254" height="768"/></figure><h3 id="record-the-session-for-up-to-4-hours">Record the session for up to 4 hours </h3><p>Want to capture those lengthy events? You can now record meetings for up to four hours.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/Recording.jpg" class="kg-image" alt="Video SDK's January 2023 Product Update" loading="lazy" width="1254" height="768"/></figure><h3 id="maven-central-is-supported-now">Maven Central is supported now</h3><p>In addition to Jitpack, the Maven Central package manager is supported. 
You are free to use any of them.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/jitpack-and-maven.jpg" class="kg-image" alt="Video SDK's January 2023 Product Update" loading="lazy" width="1254" height="769"/></figure><h3 id="new-developer-blog-%F0%9F%94%96">New Developer Blog ?</h3><ul><li><a href="https://www.videosdk.live/blog/react-native-android-video-calling-app-with-callkeep">Build a React Native <strong>Android</strong> Video Calling App with Callkeep using Firebase and Video SDK Part -1</a></li><li><a href="https://www.videosdk.live/blog/react-native-ios-video-calling-app-with-callkeep">How to Build React Native <strong>IOS</strong> Video Call app using CallKeep using Firebase and Video SDK Part-2</a></li><li><a href="https://www.videosdk.live/blog/1-to-1-video-chat-app-on-android-using-videosdk">1-to-1 Video Chat App on Android Using Video SDK and Kotlin</a></li><li><a href="https://videosdk.live/blog/webrtc-react-native">How to Build a WebRTC React Native App Free</a></li></ul><h2 id="%F0%9F%8E%89-hooray-we-won-golden-kitty-2022">? Hooray!! We Won Golden Kitty 2022 </h2><!--kg-card-begin: html-->
<blockquote class="twitter-tweet" data-theme="dark"><p lang="en" dir="ltr">We are thrilled to announce that we have won the <a href="https://twitter.com/hashtag/GoldenKittyAward?src=hash&amp;ref_src=twsrc%5Etfw">#GoldenKittyAward</a> in the Most innovative technology category on <a href="https://twitter.com/ProductHunt?ref_src=twsrc%5Etfw">@ProductHunt</a>! ?This is a huge honor for us and we couldn&#39;t have done it without the support of our community.❤️ Thanks to everyone who supported us along the way. ?? <a href="https://t.co/mn66gzJbRv">pic.twitter.com/mn66gzJbRv</a></p>&mdash; Video SDK (@video_sdk) <a href="https://twitter.com/video_sdk/status/1618148038564007936?ref_src=twsrc%5Etfw">January 25, 2023</a></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"/>
<!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[2022, Year in review]]></title><description><![CDATA[In 2022, we had remarkable growth and were the preferred option of developers for building live audio and video applications with high customization and better reliability.]]></description><link>https://www.videosdk.live/blog/2022-year-in-review</link><guid isPermaLink="false">63aa7f5cbd44f53bde5ce9c0</guid><category><![CDATA[year-in-review]]></category><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Tue, 03 Jan 2023 12:03:59 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2023/01/blog-thumbnail_messi.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2023/01/blog-thumbnail_messi.jpg" alt="2022, Year in review"/><p>Another year has passed quickly, and despite establishing technically sophisticated infrastructure, we have expanded significantly in 2022.</p><p>In 2022, we had remarkable growth and were the preferred option of developers for building live audio and video applications with high customization and better reliability.</p><p>First and foremost, I'd want to thank all of our customers, ranging from public limited businesses to independent developers. You provided us with a cause to work really hard while building such fantastic products.</p><p>Last but not least, I'd want to thank all of my team members and colleagues for believing in Video SDK's vision and purpose. Thank you very much for working so hard and making us all proud while building a global company.</p><blockquote>We are a terrific team of hustlers, builders and executors. We don't learn from mistakes but mistakes learn from us. 
</blockquote><h2 id="growth-on-steroid">Growth on Steroid</h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-13.png" class="kg-image" alt="2022, Year in review" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2023/01/image-13.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2023/01/image-13.png 1000w, http://assets.videosdk.live/static-assets/ghost/2023/01/image-13.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>Revenue Growth in 2022</figcaption></img></figure><ul><li>+1,498% Revenue growth in 2022</li><li>150% MOM growth in the last 12 months</li><li>+2,926%  ARPU in 2022</li><li>551% net revenue retention (NRR) in 2022</li><li>4.8 rating on customer success and support (When I said, we are loved by our customers. I mean it)</li><li>500% growth of the community with 1,600+ active members and 200+ 1:1 team interactions.  
</li><li>Massive product hunt launch with #1 product of the day against</li></ul><h2 id="2022-highlights-at-video-sdk">2022 highlights at Video SDK</h2><h3 id="global-infrastructure">Global Infrastructure </h3><p>We have established a worldwide infrastructure to provide the finest audio and video experience possible around the world.</p><p>It allows for reduced latency and greater bitrates from San Francisco and Singapore.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-1.png" class="kg-image" alt="2022, Year in review" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2023/01/image-1.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2023/01/image-1.png 1000w, http://assets.videosdk.live/static-assets/ghost/2023/01/image-1.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>Global Low Latency Infrastructure</figcaption></img></figure><h3 id="cross-platform-mobile-sdks">Cross-platform mobile SDKs </h3><p>We have released mobile SDKs that are highly optimised and cross-platform. All major programming languages are supported, including React Native (Android + iOS), Flutter (Android + iOS), Android Native, and iOS Native.</p><p>Mobile SDKs are cross-functional and cross-platform. It supports all the popular devices and synchronisation between each device. More power to developers. 
?</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-2.png" class="kg-image" alt="2022, Year in review" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2023/01/image-2.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2023/01/image-2.png 1000w, http://assets.videosdk.live/static-assets/ghost/2023/01/image-2.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>Cross-Function Mobile SDKs</figcaption></img></figure><h3 id="interactive-live-streaming">Interactive Live Streaming</h3><p>We released interactive live streaming with a seamless mobile and web experience that supports 98% of devices and has a 3x lower live streaming delay than the market norm.</p><p>Web and mobile UI examples that cover all edge situations, such as adaptive streaming based on internet bandwidth, screen resolution, and codec support with cross-device compatibility.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-3.png" class="kg-image" alt="2022, Year in review" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2023/01/image-3.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2023/01/image-3.png 1000w, http://assets.videosdk.live/static-assets/ghost/2023/01/image-3.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>Interactive Live Streaming SDK</figcaption></img></figure><h3 id="customisation-on-steroid">Customisation on Steroid</h3><p>We are well-known for providing extensive customization help for building products ranging from customised UI to customised audio and video tracks.</p><p>In 2022, we released capabilities that enabled customers to develop extremely engaging audio/video conferencing, collaborative features, and interactive live-streaming 
experiences.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-5.png" class="kg-image" alt="2022, Year in review" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2023/01/image-5.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2023/01/image-5.png 1000w, http://assets.videosdk.live/static-assets/ghost/2023/01/image-5.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>End-to-End Customisation on Cross-Platform</figcaption></img></figure><p>Aside from that, we haven't stopped there. You can now not only customise the conference experience but also develop a Studio-like product with all of the interactive elements.</p><p>It is now possible to customise RTMP (social media simulation), Cloud Recording, and Interactive live streaming with features such as callouts, background change, custom branding, interactive layouts, and many more.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-7.png" class="kg-image" alt="2022, Year in review" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2023/01/image-7.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2023/01/image-7.png 1000w, http://assets.videosdk.live/static-assets/ghost/2023/01/image-7.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>Custom Template Support in RTMP, Cloud Recording &amp; ILS</figcaption></img></figure><h2 id="a-global-community-with-1600-members">A global community with 1,600 members</h2><p>We are really appreciative of everyone in our community who has helped us develop and expand. We love you all, and we know you love us.</p><p>With a 500% community growth rate and an average customer satisfaction score of 4.8. 
We have now arrived at the top of the world.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-15.png" class="kg-image" alt="2022, Year in review" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2023/01/image-15.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2023/01/image-15.png 1000w, http://assets.videosdk.live/static-assets/ghost/2023/01/image-15.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>World's Fastest Growing Community</figcaption></img></figure><p>Every day, our community contributes to the improvement of our products and infrastructure.</p><p>We all enjoy issue-solving problems, and community is the finest approach we've discovered to do it.</p><h2 id="analytics-insights-infrastructure">Analytics + Insights Infrastructure</h2><p>We discovered that many developers are trying to optimise their applications. We've got you covered with our analytics infrastructure.</p><p>It provides information on audio and video quality scores, bitrate, jitter, packet loss, RTT, and many other metrics.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-11.png" class="kg-image" alt="2022, Year in review" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2023/01/image-11.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2023/01/image-11.png 1000w, http://assets.videosdk.live/static-assets/ghost/2023/01/image-11.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>Advanced Analytics &amp; Insights Infrastructure</figcaption></img></figure><h2 id="1-product-of-the-day-on-product-hunt">#1 Product of the Day on Product Hunt</h2><p>Last but not least, we ranked first on Product Hunt. 
It was a pleasure to collaborate with everyone in the team.</p><p>Our community, other founders, and customers all contributed to us being the most loved live video SDK on Product Hunt.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2023/01/image-10.png" class="kg-image" alt="2022, Year in review" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2023/01/image-10.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2023/01/image-10.png 1000w, http://assets.videosdk.live/static-assets/ghost/2023/01/image-10.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>#1 Product of the day on Product Hunt</figcaption></img></figure><h3 id="we-became-a-global-company">We became a global company</h3><p>We began with the goal of building infrastructure that would allow you to virtually experience the real world. We are one step closer to achieving the same vision.</p><p>Customers from all across the world use VideoSDK. Their faith in us validates our empathy and goal to become the greatest cloud infrastructure.</p><h3 id="so-what%E2%80%99s-coming-up-in-2023"><strong>So, what’s coming up in 2023?</strong></h3><p>We have a lot of exciting things planned for you in 2023. We are developing technology that will transform the way we communicate and interact with live video.</p><p>Hold on tight!</p>]]></content:encoded></item><item><title><![CDATA[Video SDK November 22' Month Updates for Developers]]></title><description><![CDATA[Just bringing you a quick recap of what the Video SDK team has been developing over the last month. 
So let’s dive into it.]]></description><link>https://www.videosdk.live/blog/video-sdk-november-22-month-updates-for-developers</link><guid isPermaLink="false">638ae095bd44f53bde5cd6a8</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Sat, 03 Dec 2022 06:57:12 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/12/November-2022.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="1-%E2%80%9Cmultistream%E2%80%9D-parameter-for-great-video-quality-experience-in-1-1-meeting">1. “Multistream” parameter for great video quality experience in 1-1 meeting</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/12/multistream_false.jpg" class="kg-image" alt="Video SDK November 22' Month Updates for Developers" loading="lazy" width="1255" height="768"/></figure><img src="http://assets.videosdk.live/static-assets/ghost/2022/12/November-2022.jpg" alt="Video SDK November 22' Month Updates for Developers"/><p>Get the best quality video experience in 1-1 video meeting by receiving only the HD stream.</p><p>➡️ Read docs: → <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/render-media/optimize-video-track">Javascript</a>, <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/features/custom-track/custom-video-track">React Native</a>, <a href="https://docs.videosdk.live/flutter/api/sdk-reference/videosdk-class/methods#parameters">Flutter</a>.</p><h3 id="2-in-meeting-call-statistics-in-prebuilt-sdk">2. 
In-meeting call statistics in Prebuilt SDK.</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/12/Call-stats.jpg" class="kg-image" alt="Video SDK November 22' Month Updates for Developers" loading="lazy" width="1254" height="768"/></figure><p>Access the real-time data for the participants' media streams during an ongoing meeting, including jitter, bitrate, packet loss, latency, etc. Simply click over the network symbol in the participant feed's upper right corner.</p><p>➡️ Update version: →<a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/release-notes#v0323">v0.3.23</a></p><h3 id="3-group-call-example-in-react-native">3. Group call example in React Native</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/12/React-native-Group-call.jpg" class="kg-image" alt="Video SDK November 22' Month Updates for Developers" loading="lazy" width="1254" height="768"/></figure><p>A freshly developed code sample of group calling with significant UI enhancements has been released.</p><p>➡️ Read docs: <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example">React Native Example</a></p><h3 id="4-interactive-live-streaming-example-for-react-sdk">4. Interactive live streaming example for React SDK</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/12/ILS-React-example.jpg" class="kg-image" alt="Video SDK November 22' Month Updates for Developers" loading="lazy" width="1254" height="768"/></figure><p>A new code sample for interactive live streaming with a bunch of features like reactions, poll, recording has been released</p><p>➡️ Read docs: <a href="https://github.com/videosdk-live/videosdk-rtc-react-sdk-example">React Example</a></p><h3 id="5-banuba-integration-in-android-sdk">5. 
Banuba integration in Android SDK</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/12/Banuba.jpg" class="kg-image" alt="Video SDK November 22' Month Updates for Developers" loading="lazy" width="1254" height="768"/></figure><p>Now Banuba SDK is supported in Android SDK to enhance video calls with real-time face filters and virtual backgrounds.</p><p>➡️ Read docs: <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/extras/banuba-integration">Android</a></p><h3 id="previous-releases">Previous Releases:</h3><p>Please check out our previous months' updates and feature launches.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.videosdk.live/blog/tag/product-updates"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Video SDK Product Updates</div><div class="kg-bookmark-description">Stay up to date with Video SDK. Get our latest product updates like features release or special announcements. We update something special every month.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.videosdk.live/favicons/android-icon-192x192.png" alt="Video SDK November 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK Product Updates</span><span class="kg-bookmark-publisher">Sagar Kava</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.videosdk.live/site-meta.png" alt="Video SDK November 22' Month Updates for Developers"/></div></a></figure><!--kg-card-begin: html--><h2 style="text-align:center;font-weight:bold;">We are always here for you.</h2>
<!--kg-card-end: html--><!--kg-card-begin: html--><div style="text-align:center;">Contact us for a demo, technical queries, support, or just say hi.</div><!--kg-card-end: html--><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<style>
		.button {
			border-radius: 4px;
			background-color: #5f7afa;
			border: none;
			color: #ffffff;
			text-align: center;
			font-size: 24px;
			padding: 10px;
			width: 150px;
			transition: all 0.5s;
			cursor: pointer;
			margin: 5px;
		}

		.button span {
			cursor: pointer;
			display: inline-block;
			position: relative;
			transition: 0.5s;
		}

		.button span:after {
			content: '\00bb';
			position: absolute;
			opacity: 0;
			top: 0;
			right: -20px;
			transition: 0.5s;
		}

		.button:hover span {
			padding-right: 25px;
		}

		.button:hover span:after {
			opacity: 1;
			right: 0;
		}
	</style>
</head>

<body>
	<center>
		<a href="https://www.videosdk.live/contact">
			<button class="button"><span>Talk to us</span></button>
		</a>
	</center>
</body>

</html><!--kg-card-end: html--><hr/>]]></content:encoded></item><item><title><![CDATA[Video SDK October 22' Month Updates for Developers]]></title><description><![CDATA[We’ve been hard at work these past few weeks improving our code samples to help make VideoSDK even more valuable for you. We hope you enjoy them as much as we do.]]></description><link>https://www.videosdk.live/blog/video-sdk-october-22-month-updates-for-developers</link><guid isPermaLink="false">6361109cbd44f53bde5cbcf7</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Wed, 02 Nov 2022 09:05:46 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/11/October-2022-updates--1-.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="1-new-region-added-for-american-users">1. New Region Added for American Users</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/11/US.jpg" class="kg-image" alt="Video SDK October 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/11/US.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/11/US.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/11/US.jpg 1254w" sizes="(min-width: 720px) 720px"/></figure><img src="http://assets.videosdk.live/static-assets/ghost/2022/11/October-2022-updates--1-.jpg" alt="Video SDK October 22' Month Updates for Developers"/><p>We have deployed a new server in the Ohio area in response to the growing user base in the US. To receive the best meeting quality, please choose the server closest to you.</p><p>➡️ Read docs: <a href="https://docs.videosdk.live/api-reference/realtime-communication/create-room">Set Region </a></p><h3 id="2-new-code-sample-in-android-flutter-react-and-react-native">2. 
New Code Sample in Android, Flutter, React, and React Native</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/11/Code-Sample.jpg" class="kg-image" alt="Video SDK October 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/11/Code-Sample.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/11/Code-Sample.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/11/Code-Sample.jpg 1254w" sizes="(min-width: 720px) 720px"/></figure><p>A freshly developed code sample with significant UI enhancements has been released</p><p>➡️ Clone Now → <a href="https://github.com/videosdk-live/videosdk-rtc-react-sdk-example">React</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example">React Native</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example">Flutter</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example">Android (Java)</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-android-kotlin-sdk-example">Android (Kotlin)</a>.</p><h3 id="3-audio-output-routing">3. Audio Output Routing</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/11/Audio-output-routing.jpg" class="kg-image" alt="Video SDK October 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/11/Audio-output-routing.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/11/Audio-output-routing.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/11/Audio-output-routing.jpg 1254w" sizes="(min-width: 720px) 720px"/></figure><p>The user has the option to change the audio output to a device other than the default one focused by SDK. 
<br><br>➡️ Read docs: <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/features/switch-audio-output">React Native</a>, <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/features/switch-audio-output">Flutter</a></br></br></p><h3 id="4-audio-recording-free-in-beta">4. Audio Recording (Free in Beta)</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/11/Audio-Recording.jpg" class="kg-image" alt="Video SDK October 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/11/Audio-Recording.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/11/Audio-Recording.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/11/Audio-Recording.jpg 1254w" sizes="(min-width: 720px) 720px"/></figure><p>Now, the user has the option to only record the audio of the entire meeting. The output of that recording in the storage will be in <code>mp3</code> format. By default File recording will be done in Video, for audio recording you will need to configure the start recording function and Rest API endpoint like the code snipped shown below:</p><!--kg-card-begin: html-->config:
    mode: "video-and-audio" | "audio"<!--kg-card-end: html--><p>➡️ Read docs: <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-recording">Rest API Endpoint</a>, <a href="https://docs.videosdk.live/javascript/api/sdk-reference/meeting-class/methods#startrecording">JS SDK Method</a>, <a href="https://docs.videosdk.live/react/api/sdk-reference/use-meeting/methods#startrecording">React SDK Method </a></p><h3 id="5-resolution-wise-recording">5. Resolution-wise Recording </h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/11/Resolution-wise-recording.jpg" class="kg-image" alt="Video SDK October 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/11/Resolution-wise-recording.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/11/Resolution-wise-recording.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/11/Resolution-wise-recording.jpg 1254w" sizes="(min-width: 720px) 720px"/></figure><p>Users will now be able to run a Video Recording of the session in 3 different qualities: Low(480p), Medium(720p), or High(1080p). </p><p>➡️ Read docs: <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-recording">Rest API Endpoint</a>, <a href="https://docs.videosdk.live/javascript/api/sdk-reference/meeting-class/methods#startrecording">JS SDK Method</a>, <a href="https://docs.videosdk.live/react/api/sdk-reference/use-meeting/methods#startrecording">React SDK Method</a> </p><h3 id="6-audio-hls-free-in-beta">6. 
Audio HLS (Free in Beta)</h3><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2022/11/Audio-HLS.jpg" class="kg-image" alt="Video SDK October 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/11/Audio-HLS.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/11/Audio-HLS.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/11/Audio-HLS.jpg 1254w" sizes="(min-width: 720px) 720px"><figcaption>Video SDK New Audio HLS</figcaption></img></figure><p>Users will now be able to record the session in an AUDIO HLS. <strong>Users can build large-scale Audio rooms with Audio HLS</strong>. By default, HLS recording will be done in video, for audio recording you will need to configure the start HLS function and Rest API endpoint like the code snipped shown below:</p><!--kg-card-begin: html-->config:
    mode: "video-and-audio" | "audio"
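For context, a rough sketch of how the mode value above might be assembled into a start-recording / start-HLS request body (the helper name, the roomId field, and the overall payload shape are illustrative assumptions; the linked REST API docs define the actual request):

```javascript
// Illustrative sketch only: assembles a request body carrying the
// config.mode value from this post ("video-and-audio" | "audio").
// roomId and the overall shape are assumptions -- see the REST docs.
function buildStartConfig(roomId, audioOnly) {
  return {
    roomId: roomId,
    config: {
      mode: audioOnly ? "audio" : "video-and-audio",
    },
  };
}

// Audio-only request body:
console.log(JSON.stringify(buildStartConfig("room-123", true)));
// → {"roomId":"room-123","config":{"mode":"audio"}}
```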
<!--kg-card-end: html--><p>➡️ Read docs: <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-hlsStream">Rest API Endpoint</a> ,  <a href="https://docs.videosdk.live/javascript/api/sdk-reference/meeting-class/methods#starthls">JS SDK Method</a> ,  <a href="https://docs.videosdk.live/react/api/sdk-reference/use-meeting/methods#starthls">React SDK Method</a> , </p><h3 id="7-added-screen-share-support-in-flutter-ios">7. Added Screen Share Support in Flutter iOS</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/11/Flutter-iOS-screensharing.jpg" class="kg-image" alt="Video SDK October 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/11/Flutter-iOS-screensharing.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/11/Flutter-iOS-screensharing.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/11/Flutter-iOS-screensharing.jpg 1254w" sizes="(min-width: 720px) 720px"/></figure><p>Flutter iOS Screen Sharing allows you to publish your screen as a video track. 
This document explains how to set up screen sharing on iOS.</p><p>➡️ Read docs: <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/extras/flutter-ios-screen-share">Share screen - Flutter iOS</a></p><h3 id="change-logs">Change logs:</h3><p>Please find the bug fixes and change logs below.</p><ul><li>Release Notes: <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/release-notes">Flutter</a></li></ul><h3 id="previous-releases">Previous Releases:</h3><p>Please check out our previous months' updates and feature launches.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://videosdk.live/blog/tag/product-updates"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Video SDK Product Updates</div><div class="kg-bookmark-description">Stay up to date with Video SDK. Get our latest product updates like features release or special announcements. We update something special every month.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://videosdk.live/favicons/android-icon-192x192.png" alt="Video SDK October 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK Product Updates</span><span class="kg-bookmark-publisher">Video</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.videosdk.live/site-meta.png" alt="Video SDK October 22' Month Updates for Developers"/></div></a></figure><!--kg-card-begin: html--><h2 style="text-align:center;font-weight:bold;">We are always here for you.</h2>
<!--kg-card-end: html--><!--kg-card-begin: html--><div style="text-align:center;">Contact us for a demo, technical queries, support, or just say hi.</div><!--kg-card-end: html--><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<style>
		.button {
			border-radius: 4px;
			background-color: #5f7afa;
			border: none;
			color: #ffffff;
			text-align: center;
			font-size: 24px;
			padding: 10px;
			width: 150px;
			transition: all 0.5s;
			cursor: pointer;
			margin: 5px;
		}

		.button span {
			cursor: pointer;
			display: inline-block;
			position: relative;
			transition: 0.5s;
		}

		.button span:after {
			content: '\00bb';
			position: absolute;
			opacity: 0;
			top: 0;
			right: -20px;
			transition: 0.5s;
		}

		.button:hover span {
			padding-right: 25px;
		}

		.button:hover span:after {
			opacity: 1;
			right: 0;
		}
	</style>
</head>

<body>
	<center>
		<a href="https://www.videosdk.live/contact">
			<button class="button"><span>Talk to us</span></button>
		</a>
	</center>
</body>

</html>
        <!--kg-card-end: html--><p/><hr/>]]></content:encoded></item><item><title><![CDATA[Video SDK September 22' Month Updates for Developers]]></title><description><![CDATA[We’re back to share what we have been cooking up in September. We’ve added a bunch of new features and made some major improvements that will make your experience with Video SDK better than ever.]]></description><link>https://www.videosdk.live/blog/video-sdk-september-22-month-updates-for-developers</link><guid isPermaLink="false">63353189bd44f53bde5c8a7f</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Mon, 03 Oct 2022 08:53:50 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/09/September-2022-updates.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="product-updates"><strong>Product Updates</strong><br/></h2><h3 id="light-and-dark-theme">Light and Dark Theme</h3><img src="http://assets.videosdk.live/static-assets/ghost/2022/09/September-2022-updates.jpg" alt="Video SDK September 22' Month Updates for Developers"/><p>You can customize the theme of your meetings, recordings, and live streams in Prebuilt SDK. 
Besides the default theme, two new themes have been introduced: LIGHT &amp; DARK.</p><p>➡️ Read docs : <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/theme">Meetings</a> | <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/recording-meeting">Cloud Recording</a> | <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/go-live-social-media">Go Live On Social Media</a> | <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/go-live-hls">Go Live On HLS</a></p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/09/New-Thame.gif" class="kg-image" alt="Video SDK September 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/09/New-Thame.gif 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/09/New-Thame.gif 1000w, http://assets.videosdk.live/static-assets/ghost/2022/09/New-Thame.gif 1254w" sizes="(min-width: 720px) 720px"/></figure><h3 id="quality-analytics-beta">Quality Analytics (Beta)</h3><p>You can track call statistics for meetings and individual users and get insights into factors that influence audio and video quality. Currently, it is in the beta version and only available for Web SDKs (Prebuilt, React, and JS). 
You can access it from the <a href="https://app.videosdk.live/meetings/sessions">Video SDK dashboard</a> (Dashboard &gt; Meetings &gt; Sessions).</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/09/Analytics--1-.jpg" class="kg-image" alt="Video SDK September 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/09/Analytics--1-.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/09/Analytics--1-.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/09/Analytics--1-.jpg 1254w" sizes="(min-width: 720px) 720px"/></figure><h3 id="virtual-background">Virtual Background</h3><p>You can choose an image as your background during a meeting. You can also apply a blur effect to your background.</p><p>➡️ Read docs: <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/virtual-background">Prebuilt SDK v0.3.21</a></p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/09/Virtual-background--1-.jpg" class="kg-image" alt="Video SDK September 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/09/Virtual-background--1-.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/09/Virtual-background--1-.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/09/Virtual-background--1-.jpg 1254w" sizes="(min-width: 720px) 720px"/></figure><h3 id="noise-suppression">Noise Suppression</h3><p>Once activated, it helps you to reduce the background noise during the meeting and provides you with significantly clearer audio.</p><p>➡️Read docs:<a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/virtual-background">Prebuilt SDK v0.3.21</a></p><figure class="kg-card 
kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/09/denoise-1.jpg" class="kg-image" alt="Video SDK September 22' Month Updates for Developers" loading="lazy" width="1254" height="768" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/09/denoise-1.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/09/denoise-1.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/09/denoise-1.jpg 1254w" sizes="(min-width: 720px) 720px"/></figure><h3 id="change-logs">Change logs:</h3><p>Please find the bug fixes and change logs below.</p><ul><li>Release Notes: <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/release-notes">Javascript</a></li><li>Release Notes: <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/release-notes">React JS</a></li><li>Release Notes: <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/release-notes">React Native</a> </li><li>Release Notes: <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/release-notes">Flutter</a></li></ul><h3 id="previous-releases">Previous Releases:</h3><p>Please check out our previous months' updates and feature launches.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://videosdk.live/blog/tag/product-updates"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Video SDK Product Updates</div><div class="kg-bookmark-description">Stay tuned for company news, product updates, and articles about online video technology and performance.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://videosdk.live/favicons/android-icon-192x192.png" alt="Video SDK September 22' Month Updates for Developers"/><span class="kg-bookmark-author">Video SDK Product Updates</span><span class="kg-bookmark-publisher">Video</span></div></div><div 
class="kg-bookmark-thumbnail"><img src="https://www.videosdk.live/site-meta.png" alt="Video SDK September 22' Month Updates for Developers"/></div></a></figure><!--kg-card-begin: html--><h2 style="text-align:center;font-weight:bold;">We are always here for you.</h2>
<!--kg-card-end: html--><!--kg-card-begin: html--><div style="text-align:center;">Contact us for a demo, technical queries, support, or just say hi.</div><!--kg-card-end: html--><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<style>
		.button {
			border-radius: 4px;
			background-color: #5f7afa;
			border: none;
			color: #ffffff;
			text-align: center;
			font-size: 24px;
			padding: 10px;
			width: 150px;
			transition: all 0.5s;
			cursor: pointer;
			margin: 5px;
		}

		.button span {
			cursor: pointer;
			display: inline-block;
			position: relative;
			transition: 0.5s;
		}

		.button span:after {
			content: '\00bb';
			position: absolute;
			opacity: 0;
			top: 0;
			right: -20px;
			transition: 0.5s;
		}

		.button:hover span {
			padding-right: 25px;
		}

		.button:hover span:after {
			opacity: 1;
			right: 0;
		}
	</style>
</head>

<body>
	<center>
		<a href="https://www.videosdk.live/contact">
			<button class="button"><span>Talk to us</span></button>
		</a>
	</center>
</body>

</html><!--kg-card-end: html--><hr/>]]></content:encoded></item><item><title><![CDATA[Video SDK August 22' Month Updates for Developers ?]]></title><description><![CDATA[Thank you for your support & love!?
We made it to ?#1 Product of the Day on Product Hunt with your help.?]]></description><link>https://www.videosdk.live/blog/video-sdk-august-22-month-updates-for-developers</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb90</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Fri, 02 Sep 2022 05:36:17 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/09/Aug-2022-updates.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/09/Aug-2022-updates.jpg" alt="Video SDK August 22' Month Updates for Developers ?"/><p/><h2 id="%F0%9F%8E%89-hooray-we-are-the%F0%9F%A5%871-product-of-the-day">? Hooray!! We are the?#1 Product of the Day</h2><!--kg-card-begin: html--><blockquote class="twitter-tweet"><p lang="en" dir="ltr">? Hooray!! We are the #1 Product of the Day on <a href="https://twitter.com/ProductHunt?ref_src=twsrc%5Etfw">@ProductHunt</a> ?<br><br>Overwhelmed by all the love and support we’ve received ?<br><br>Check it out here: <a href="https://t.co/Yjaj8rNhsB">https://t.co/Yjaj8rNhsB</a><a href="https://twitter.com/hashtag/1ProductoftheDay?src=hash&amp;ref_src=twsrc%5Etfw">#1ProductoftheDay</a> <a href="https://twitter.com/hashtag/producthunt?src=hash&amp;ref_src=twsrc%5Etfw">#producthunt</a> <a href="https://twitter.com/hashtag/productlaunch?src=hash&amp;ref_src=twsrc%5Etfw">#productlaunch</a> <a href="https://twitter.com/hashtag/devtools?src=hash&amp;ref_src=twsrc%5Etfw">#devtools</a> <a href="https://t.co/YV9NKKnAeH">pic.twitter.com/YV9NKKnAeH</a></br></br></br></br></p>&mdash; Video SDK (@video_sdk) <a href="https://twitter.com/video_sdk/status/1564876675543752704?ref_src=twsrc%5Etfw">August 31, 2022</a></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"/><!--kg-card-end: html--><hr><h2 id="prebuilt-sdk">Prebuilt SDK</h2><h3 id="v0317">v0.3.17</h3><p> <br><strong>Change log 
:</strong><br>1. Waiting screen customization added<br>    <strong>Docs</strong>: <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/waiting-screen">Waiting screen</a></br></br></br></p><p>2. Provided permission for participant icon button visibility.<br>    <strong>Docs</strong>: <a href="https://docs.videosdk.live/prebuilt/api/sdk-reference/parameters/basic-parameters#participanttabpanelenabled">Enabled Participant Tab Panel</a></br></p><p>3. Provided permission to toggle participant tab panel.<br>    <strong>Docs</strong>: <a href="https://docs.videosdk.live/prebuilt/api/sdk-reference/parameters/advance-parameters/permissions#cantoggleparticipanttab">Toggle Participant Tab Panel</a></br></p><p><strong>Bug Fix :</strong><br>1. Timestamp issue in the chat has been resolved.</br></p><h3 id="v0315">v0.3.15</h3><p><br><strong>Change log :</strong><br>1. While Screen sharing, participants can share audio of their chrome tab.<br>    <strong>Docs</strong>: <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/screenshare#screen-share-with-audio">Screen Share with Audio</a></br></br></br></p><h3 id="v0314">v0.3.14</h3><p><strong>Change log :</strong><br>1. Few edge cases have been covered in the poll.</br></p><h2 id="javascript-sdk">Javascript SDK</h2><h3 id="v0049">v0.0.49</h3><p><br><strong>Change Log:</strong><br>1. Fixed issues with Custom audio and video tracks.</br></br></p><p>2. Updated types indicating optional value or not.<br><br><strong>Bug Fix :</strong><br>1. Fix <code>reading s.data on undefined</code> error.</br></br></br></p><h3 id="v0047-v0044">v0.0.47 &amp; v0.0.44</h3><p><br><strong>Change Log:</strong><br>1. Added support for screenshare with audio.<br>    <strong>Docs</strong>: <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/features/screenshare#screen-share-with-audio">Screenshare with Audio</a></br></br></br></p><p>2. 
Custom audio, video and share track now accepts <code>MediaStream</code> instead of <code>MediaStreamTrack</code>.</p><p>3. Added types for better IDE support.</p><h2 id="react-sdk">React SDK</h2><h3 id="v0149">v0.1.49</h3><p><br><strong>Bug Fix :</strong><br>1. Fix <code>reading s.data on undefined</code> error.<br><br>2. Participant initial audio &amp; video improper state issue fixed.</br></br></br></br></p><h3 id="v0148">v0.1.48</h3><p/><p><strong>Change log:</strong><br>1. Added <code>getVideoStats</code> and <code>getAudioStats</code> methods for getting particular participant streams statistics.<br>    <strong>Docs: </strong><a href="https://docs.videosdk.live/react/api/sdk-reference/use-participant/methods#getvideostats">getVideoStats</a> <br>    <strong>Docs:</strong> <a href="https://docs.videosdk.live/react/api/sdk-reference/use-participant/methods#getaudiostats">getAudioStats </a></br></br></br></p><p>2. Added <code>onMeetingStateChanged</code> event for getting the state of the meeting.<br>    <strong>Docs:</strong> <code>onMeetingStateChanged</code> </br></p><h3 id="v0146">v0.1.46</h3><p/><p><strong>Change log :</strong><br>1. Set Audio packet priority high.<br>2. Internal dependency update.</br></br></p><h2 id="react-native-sdk">React Native SDK</h2><h3 id="v0033">v0.0.33</h3><p><br><strong>Change log :</strong><br>1. Recording and Livestream status event added.<br>    <strong>Docs</strong> : <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-meeting/events#onrecordingstatechanged">Recording Events</a></br></br></br></p><p>2. Set Audio packet priority high.</p><p>3. Internal dependency update.</p><p>4. Added <code>getVideoStats</code> and <code>getAudioStats</code> methods for getting particular participant streams statistics.</p><p><strong>5. SDK Reference</strong> : <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/methods#getvideostats">getVideoStats</a></p><p><strong>6. 
SDK Reference</strong> : <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-participant/methods#getaudiostats">getAudioStats</a></p><p>7. Added <code>onMeetingStateChanged</code> event for getting state of meeting changes.</p><p><strong>8. SDK Reference</strong> : <a href="https://docs.videosdk.live/react-native/api/sdk-reference/use-meeting/events#onmeetingstatechanged">onMeetingStateChanged</a></p><p>9. Custom audio, video and share track now accepts <code>MediaStream</code> instead of <code>MediaStreamTrack</code>.</p><p>10 .Added types for better IDE support.</p><p><strong>Bug Fix :</strong><br>1. Fixed issues with Custom audio and video tracks.<br><br>2. Updated types indicating optional value or not.<br><br>3. Fix <code>reading s.data on undefined</code> error.</br></br></br></br></br></p><h2 id="android-sdk">Android SDK</h2><h3 id="v0025">v0.0.25</h3><p><br><strong>Change log :</strong><br>1. Add <a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/meeting-event-listener-class#onmeetingstatechanged">onMeetingStateChanged</a> Event Listener for Websocket connection status.</br></br></p><p>2. Throw <code>PREV_RECORDING_PROCESSING</code> error.</p><h3 id="v0024">v0.0.24</h3><p><strong>Bug Fix :</strong><br>1. Fixed Echo issue on Xiaomi Device after Unmute and Mute Audio.</br></p><h2 id="flutter-sdk">Flutter SDK</h2><h3 id="v102">v1.0.2</h3><p><br><strong>Change log:</strong><br>1. Renamed <code>Meeting</code> class to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/introduction"><code>Room</code></a> class.</br></br></p><p>2. Changed import file <code>package:videosdk/rtc.dart</code> to <code>package:videosdk/videosdk.dart</code></p><p>3. 
Changed events  </p><ul><li><code>Events.meetingJoined</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/events#roomjoined"><code>Events.roomJoined</code></a>.</li><li><code>Events.meetingLeft</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/events#roomleft"><code>Events.roomLeft</code></a>.</li><li><code>Events.webcamRequested</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/events#camerarequested"><code>Events.cameraRequested</code></a>.</li></ul><p>4.  Changed properties and methods for <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/introduction"><code>Room</code></a> class</p><ul><li><code>selectedWebcamId</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/properties#selectedcamid"><code>selectedCamId</code></a>.</li><li><code>enableWebcam()</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/methods#enablecam"><code>enableCam()</code></a>.</li><li><code>disableWebcam()</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/methods#disablecam"><code>disableCam()</code></a>.</li><li><code>changeWebcam()</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/methods#changecam"><code>changeCam()</code></a>.</li><li><code>getWebcams()</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/methods#getcameras"><code>getCameras()</code></a>.</li></ul><p>5.  
Changed methods for <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/introduction"><code>Participant</code></a> class</p><ul><li><code>enableMic()</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/methods#unmutemic"><code>unmuteMic()</code></a></li><li><code>disableMic()</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/methods#mutemic"><code>muteMic()</code></a></li><li><code>enableWebcam()</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/methods#enablecam"><code>enableCam()</code></a></li><li><code>disableWebcam()</code> to <a href="https://docs.videosdk.live/flutter/api/sdk-reference/participant-class/methods#disablecam"><code>disableCam()</code></a></li></ul><p>6.  Added <a href="https://docs.videosdk.live/flutter/api/sdk-reference/videosdk-class/methods#createroom"><code>VideoSDK.createRoom()</code></a> to create VideoSDK Rooms. Use <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/methods#join"><code>join()</code></a> to join VideoSDK Room.<br><br>7.  Added <a href="https://docs.videosdk.live/flutter/api/sdk-reference/videosdk-class/methods#createroom"><code>defaultCameraIndex</code></a> option to select default camera for <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/introduction"><code>Room</code></a> Class.</br></br></p><p>8.  Added <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/properties#micenabled"><code>micEnabled</code></a> property for <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/introduction"><code>Room</code></a> Class.</p><p>9.  Added <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/properties#camenabled"><code>camEnabled</code></a> property for <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/introduction"><code>Room</code></a> Class.</p><p>10. 
Added <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/methods#end"><code>end()</code></a> method for the <a href="https://docs.videosdk.live/flutter/api/sdk-reference/room-class/introduction"><code>Room</code></a> Class.</p><p>11. Removed <code>MeetingBuilder</code> Widget.</p><p><strong>Bug Fix</strong> :<br>1. Fixed an issue where a room (meeting) could be joined multiple times.<br><br>2. Fixed issues related to resource consumption.<br><br>3. Fixed an issue that occurred when the room ends.<br><br>4. Removed the local participant after calling the <code>end()</code> method.</p><hr>]]></content:encoded></item><item><title><![CDATA[Video SDK July 22' Month Updates for Developers]]></title><description><![CDATA[Thanks for checking out the July month developer updates! This month, we've released some big improvements that help developers work more efficiently.]]></description><link>https://www.videosdk.live/blog/video-sdk-july-22-month-updates-for-developers-2</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb8f</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 01 Aug 2022 14:42:30 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/08/July-2022.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/08/July-2022.jpg" alt="Video SDK July 22' Month Updates for Developers"/><p>NEW! This is the July 2022 release announcement. 
Here is a list of all new enhancements and product updates on videosdk.live</p><h2 id="prebuilt-sdk">PreBuilt SDK</h2><h3 id="v0313">v0.3.13</h3><p><strong>Change log:</strong></p><ul><li>Added internet connection speed status on each participant's view.</li><li>Integrated PubSub for the <code>CHAT</code> and <code>RAISE HAND</code> features so that every Video SDK (JavaScript, React JS, React Native, Android, iOS, and Flutter) can communicate with the Prebuilt SDK.</li></ul><h3 id="v0312">v0.3.12</h3><p><strong>Change log :</strong></p><ul><li>Released Create Poll (BETA).</li><li>Any participant can create a poll by passing <code>canCreatePoll</code> as true.</li></ul><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/live-poll"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Prebuilt Live Poll Video &amp; Audio Call | Video SDK Embed Docs | Video SDK</div><div class="kg-bookmark-description">Live Poll features prebuilt Video SDK embedded is an easy-to-use video calling API. Video SDK Prebuilt makes it easy for developers to add video calls 10 in minutes to any website or app.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><h3 id="v0311">v0.3.11</h3><p><strong>Change log :</strong></p><ul><li>In HLS, you can pass <code>playerControlsVisible</code> as true so that participants can see the player controls.</li></ul><p><strong>Documentation</strong></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/go-live-hls"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Prebuilt Go Live On HLS Video &amp; Audio Call | Video SDK Embed Docs | Video SDK</div><div class="kg-bookmark-description">Go Live On HLS features prebuilt Video SDK embedded is an easy-to-use video calling API. Video SDK Prebuilt makes it easy for developers to add video calls 10 in minutes to any website or app.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><h3 id="v0310">v0.3.10</h3><p><strong>Change log:</strong></p><ul><li>HLS can be enabled by any participant with the <code>hls.enabled</code> permission.</li><li>Participants can toggle other participants' modes by passing <code>toggleParticipantMode</code> as true.</li><li>HLS can be toggled by any participant by passing <code>toggleHls</code>.</li><li>Added an HLS <code>mode</code> parameter; it can be <code>VIEWER</code> or <code>CONFERENCE</code>.</li></ul><p><strong>Documentation</strong></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/go-live-hls"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Prebuilt Go Live On HLS Video &amp; Audio Call | Video SDK Embed Docs | Video SDK</div><div class="kg-bookmark-description">Go Live On HLS features prebuilt Video SDK embedded is an easy-to-use video calling API. Video SDK Prebuilt makes it easy for developers to add video calls 10 in minutes to any website or app.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><h3 id="v039">v0.3.9</h3><p><strong>Change log :</strong></p><ul><li>Participants can now toggle another participant's screen share if they have the <code>partcipantCanToogleOtherScreenShare</code> permission.</li></ul><p><strong>Documentation</strong></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/prebuilt/api/sdk-reference/parameters/advance-parameters/permissions#toggleparticipantscreenshare"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Permissions Parameters | Video SDK</div><div class="kg-bookmark-description">permissions</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img 
src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><h3 id="v038">v0.3.8</h3><p><strong>Change log :</strong></p><ul><li>Participants can now toggle another participant's screen share if they have the <code>partcipantCanToogleOtherScreenShare</code> permission.</li></ul><p><strong>Bug Fixes:</strong></p><ul><li>Join screen is now responsive if <code>title</code> or <code>meetingUrl</code> is not provided.</li></ul><p><strong>Documentation</strong></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/prebuilt/api/sdk-reference/parameters/advance-parameters/permissions#toggleparticipantscreenshare"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Permissions Parameters | Video SDK</div><div class="kg-bookmark-description">permissions</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><h3 id="v037">v0.3.7</h3><p><strong>Change log:</strong></p><ul><li>Added Google echoCancellation params during the creation of the audio stream.</li><li>Added a loader animation between the join screen and the meeting screen.</li></ul><p><strong>Bug Fixes:</strong></p><ol><li>Removed the googDsp dependency warning.</li></ol><hr><h2 id="javascript-sdk">Javascript SDK</h2><h3 id="v0042"><strong><strong>v0.0.42</strong></strong></h3><p><strong>Change log:</strong></p><ul><li>Added <code>getVideoStats</code> and <code>getAudioStats</code> methods for getting a particular participant's stream statistics.<br>- <a 
href="https://docs.videosdk.live/javascript/api/sdk-reference/participant-class/methods#getvideostats">getVideoStats</a><br>- <a href="https://docs.videosdk.live/javascript/api/sdk-reference/participant-class/methods#getaudiostats">getAudioStats</a></li><li>Added the <code>meeting-state-changed</code> event for getting meeting state changes.</li></ul><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/javascript/api/sdk-reference/meeting-class/events#meeting-state-changed"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Meeting Class Events | Video SDK</div><div class="kg-bookmark-description">---</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><h3 id="v0041">v0.0.41</h3><p><strong>Change log:</strong></p><ul><li>Set audio packet priority to high.</li><li>Internal dependency update.</li></ul><h3 id="v0040"><strong><strong>v0.0.40</strong></strong></h3><p><strong>Change log:</strong></p><ul><li>Added recording and livestream status events.</li></ul><p><strong>Docs</strong>: <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/features/recording-meeting">Recording Events</a></p><hr><h2 id="react-sdk">React SDK</h2><h3 id="v0143"><strong><strong>v0.1.43</strong></strong></h3><p><strong>Change log:</strong></p><ul><li>Added <code>getVideoStats</code> and <code>getAudioStats</code> methods for getting a particular participant's stream statistics.<br>- <a href="https://docs.videosdk.live/react/api/sdk-reference/use-participant/methods#getvideostats">getVideoStats</a><br>- <a 
href="https://docs.videosdk.live/react/api/sdk-reference/use-participant/methods#getaudiostats">getAudioStats</a></li><li>Added the <code>onMeetingStateChanged</code> event for getting the state of the meeting.</li></ul><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/react/api/sdk-reference/use-meeting/events#onmeetingstatechanged"><div class="kg-bookmark-content"><div class="kg-bookmark-title">useMeeting Hook Events Callbacks | Video SDK</div><div class="kg-bookmark-description">---</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><h3 id="v0142"><strong><strong>v0.1.42</strong></strong></h3><p><strong>Change log :</strong></p><ol><li>Set audio packet priority to high.</li><li>Internal dependency update.</li></ol><h3 id="v0141">v0.1.41</h3><p><strong>Change log :</strong></p><ul><li>Added recording and livestream status events. 
(<strong>Docs</strong>: <a href="https://docs.videosdk.live/react/api/sdk-reference/use-meeting/events#onrecordingstatechanged">Recording Events</a>)</li></ul><hr><h2 id="android-sdk">Android SDK</h2><h3 id="v0023"><strong><strong>v0.0.23</strong></strong></h3><p><strong>Change log :</strong></p><p>Added custom tracks for audio, video, and screen share.</p><ul><li><strong>Docs</strong>: <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/features/custom-track/custom-video-track">Custom Video Track</a></li><li><strong>Docs</strong>: <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/features/custom-track/custom-audio-track">Custom Audio Track</a></li><li><strong>Docs</strong>: <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/features/custom-track/custom-screen-share-track">Custom Screen Share Track</a></li></ul><p>Added support for Banuba integration.</p><p><strong>Code Sample</strong>: <a href="https://github.com/videosdk-live/videosdk-rtc-android-sdk-banuba-example">videosdk-rtc-android-sdk-banuba-example</a></p><p><strong>Bug Fix :</strong></p><ol><li>Fixed <code>PendingIntent.FLAG_IMMUTABLE</code> for Android 12 or later.</li><li>Fixed camera flicker on screen share.</li><li>The camera now turns off automatically when you open another activity.</li></ol><hr><h2 id="video-sdk-api-reference">Video SDK API-Reference</h2><ul><li>Release <strong>Fetch Session Quality Stats API</strong></li></ul><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/api-reference/realtime-communication/fetch-session-quality-stats"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Fetch Session Quality Stats | Video SDK | Video SDK</div><div class="kg-bookmark-description">&lt;Method</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates 
for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><ul><li>Release <strong>Fetch Participant Quality Stats API</strong></li></ul><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/api-reference/realtime-communication/fetch-session-peer-quality-stats"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Fetch Participant Quality Stats | Video SDK | Video SDK</div><div class="kg-bookmark-description">&lt;Method</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><h2 id="video-sdk-docs">Video SDK Docs</h2><ol><li>Added Release notes for JS SDK.</li></ol><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/release-notes"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Release Notes | Video SDK</div><div class="kg-bookmark-description">This page will keep you update all the releases of JavaScript SDK.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for 
Developers"/></div></a></figure><p>2. Added Release notes for React SDK.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/release-notes"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Release Notes | Video SDK</div><div class="kg-bookmark-description">This page will keep you update all the releases of JavaScript SDK.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><p>3. Added Release notes for React Native SDK.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/release-notes"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Release Notes | Video SDK</div><div class="kg-bookmark-description">This page will keep you update all the releases of JavaScript SDK.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><p>4. 
Added Release notes for Android SDK.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/release-notes"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Release Notes | Video SDK</div><div class="kg-bookmark-description">This page will keep you update all the releases of JavaScript SDK.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><p>5. Added Release notes for iOS SDK.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/release-notes"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Release Notes | Video SDK</div><div class="kg-bookmark-description">This page will keep you update all the releases of iOS SDK.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><p>6. 
Added Release notes for Flutter SDK.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/release-notes"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Release Notes | Video SDK</div><div class="kg-bookmark-description">This page will keep you update all the releases of iOS SDK.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><p>7. Added an Edit button at the bottom of every docs page for easier contribution.</p><h2 id="video-sdk-infrastructure">Video SDK Infrastructure</h2><ul><li>Added Azure cloud storage support for storing recordings.</li></ul><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/docs/tutorials/user-dashboard/recording-storage-config"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Configure Recording Storage (Aws/ Azure) | Video SDK</div><div class="kg-bookmark-description">- This feature allows you to configure storage provider for your meeting recordings.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><ul><li>Provided HLS orientation: <code>landscape</code> | <code>portrait</code></li></ul><figure class="kg-card 
kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/api-reference/realtime-communication/start-hlsStream"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Start HLS Stream | Video SDK | Video SDK</div><div class="kg-bookmark-description">&lt;Method</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/icons/favicon.ico" alt="Video SDK July 22' Month Updates for Developers"><span class="kg-bookmark-author">Video SDK logo</span></img></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/videosdklive-thumbnail.jpg" alt="Video SDK July 22' Month Updates for Developers"/></div></a></figure><!--kg-card-begin: markdown--><h1 id="%F0%9F%93%A2-open-sourcing-video-sdk-docs">📢 Open-sourcing Video SDK Docs</h1>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2022/08/Final-GIF-1.gif" class="kg-image" alt="Video SDK July 22' Month Updates for Developers" loading="lazy" width="600" height="338" srcset="http://assets.videosdk.live/static-assets/ghost/2022/08/Final-GIF-1.gif 600w"><figcaption>Added <code>Edit Button</code> at the bottom of every docs page for easier contribution.</figcaption></img></figure><h2 id="contributions-welcome-%F0%9F%91%8B">Contributions Welcome! 👋</h2><p>If you like our docs and are working with them, or you are a customer who spotted a flaw in them, please feel free to <a href="https://github.com/videosdk-live/videosdk-docs">make contributions</a> by forking the repository and raising a PR.</p><p>Contributions, issues, and feature requests are welcome! Give a ⭐ (<a href="https://github.com/videosdk-live/videosdk-docs">star</a>) if you like this project!</p><hr><h3 id="%F0%9F%9A%A8updating-our-data-retention-policy">🚨 Updating our Data Retention Policy<br/></h3><p>We know how much you love Video SDK and we want to let you know that we're updating our data retention policy. <br><br><strong><strong>Starting from 15th August, we're implementing a 7-day data retention policy.</strong></strong> <br><br>This means that we will be storing your <em><em>video call recordings</em></em> (video files), <em><em>call quality analytics</em></em> (packet loss, jitter, RTT), <em><em>chat</em></em>, and <em><em>PubSub logs</em></em> only for 7 days. <br><br>We know how many questions you would have to answer to auditors if your user data is stored with vendors. 
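Since recordings, analytics, chat, and PubSub logs will only live for 7 days, it is worth automating your export job. As a small illustrative sketch (this helper is ours, not part of any VideoSDK SDK), here is one way to check whether a session's data is still inside the retention window before trying to fetch it:

```javascript
// Illustrative helper (not a VideoSDK API): checks whether a session's
// data is still within the 7-day retention window described above.
const RETENTION_DAYS = 7;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function isWithinRetention(sessionEndedAt, now = new Date()) {
  const ageMs = now.getTime() - new Date(sessionEndedAt).getTime();
  return ageMs <= RETENTION_DAYS * MS_PER_DAY;
}

// A session that ended 3 days ago can still be fetched...
console.log(isWithinRetention(new Date(Date.now() - 3 * MS_PER_DAY))); // true
// ...one that ended 10 days ago cannot.
console.log(isWithinRetention(new Date(Date.now() - 10 * MS_PER_DAY))); // false
```

A check like this in a scheduled export job lets you pull anything approaching the 7-day mark via the REST API before it expires.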
We believe in continuous improvement and therefore this change will benefit both us and you!</p><blockquote><strong><strong><code>Actionable items for you...</code> Please make sure to fetch all your data within 7 days of the session date. Here is the REST API to fetch it: <a href="https://docs.videosdk.live/api-reference/realtime-communication/intro" rel="noreferrer noopener">https://docs.videosdk.live/api-reference/realtime-communication/intro</a></strong></strong></blockquote><p>For any queries, please use the <a href="https://discord.gg/8vkDPnSbMg">#raise-an-issue-here</a> Discord channel or email <em><strong>support@videosdk.live</strong></em>.</p><hr><p>Feel free to join the Video SDK <a href="https://discord.gg/Gpmj6eCq5u">developer community</a> to know more about future events, community programs, and opportunities, &amp; get developer support from the Video SDK team.</p><p>You can always <a href="https://videosdk.live/contact">connect with us</a> in case of any query or help. We are happy to assist you.</p><p>We can't wait to see what you build next with Video SDK!<br><br>Thanks for reading.</p>]]></content:encoded></item><item><title><![CDATA[Launching: Prebuilt Video SDK  1.0 for developers]]></title><description><![CDATA[ Extremely delighted, we launch our video conferencing API, with a pre-built UI set-up.  
This blog will make you familiar with our pre-built SDK for video calls and how it becomes helpful in building a video calling feature in your web and mobile applications]]></description><link>https://www.videosdk.live/blog/video-sdk-prebuilt</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb65</guid><category><![CDATA[Developer Blog]]></category><category><![CDATA[video-conferencing]]></category><category><![CDATA[Getting Started]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Wed, 13 Jul 2022 12:33:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/07/Integrate-It-Anywhere.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="immensely-glad-we-launch-our-pre-built">Immensely glad, we launch our pre-built!</h2><img src="http://assets.videosdk.live/static-assets/ghost/2021/07/Integrate-It-Anywhere.jpg" alt="Launching: Prebuilt Video SDK  1.0 for developers"/><p>The modern world demands technologies out of the box. Each day a tech company announces something new to keep the community inclined and interested in their product. Altogether, bringing something new to users with creativity is the only key to enhancing engagement. The good part is that real-time audio and video communication is seeing rapid advancements at the current time. Decades back, our community began searching for long-distance modes of communication. Whether it is a corporate firm or a social community, everyone seeks video communication. <br/></p><h3 id="invest-10-minutes-shape-your-video-calling-api-with-an-embedded-prebuilt-code-it%E2%80%99s-that-simple">Invest 10 minutes. Shape your video-calling API with an embedded prebuilt code. It’s that simple!</h3><p>The easy one-click video we use for communication is skillfully curated by developers. Videosdk.live introduces its API for video calls. Easy to construct, our APIs help developers secure a quality video experience for their clients, with a well-supported prebuilt UI. 
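To make "embedded prebuilt code" concrete: integration essentially boils down to loading the prebuilt script and handing it a configuration object. The snippet below is a rough sketch only; the field names are illustrative assumptions rather than the authoritative schema, which lives in the setup documentation linked at the end of this post.

```javascript
// Illustrative prebuilt configuration sketch. Field names here are
// assumptions for the sake of example; consult the setup docs for
// the real schema.
const prebuiltConfig = {
  name: "Guest",                // display name shown to other participants
  meetingId: "your-meeting-id", // placeholder: the room to join
  apiKey: "your-api-key",       // placeholder: your VideoSDK API key
  micEnabled: true,             // join with the microphone on
  webcamEnabled: true,          // join with the camera on
  screenShareEnabled: true,     // allow partial or entire screen sharing
  chatEnabled: true,            // in-built chat support
};
```

The point of the prebuilt approach is that a small object like this, plus one embedded script, is the whole integration; there is no custom calling UI to build.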
<br/></p><h3 id="the-idea-that-turned-into-motivation">The idea that turned into motivation</h3><p>Ever since technology expanded, people have made themselves comfortable with an enormous range of technologies. One such successful technology is easy video conferencing. With several defined features, video conferencing enables people to communicate over long distances. <br/></p><p>Video conferencing or video calling has been one of the most significant tools for people today. From big corporate conferences to small meetings, from family meetings to one-to-one video calls, everyone needs video calling for face-to-face communication. Video conferencing has been a major demand of every industry today to connect with people. A layman user needs just a device and an internet connection to join a video conference, whereas a developer needs APIs to build a video calling feature for applications and deliver it to the end-user. That made us believe in using our skills to deliver something extraordinary for the developers’ community. <br/></p><h3 id="from-developers-to-developers">From developers to developers</h3><p>This blog describes how simply a developer can build a video calling API with the help of embedded pre-built SDKs here at Videosdk.live. We have designed this API to minimize, or simply cut down, the stress of product development for developers. Our pre-built UI helps developers build their products with minimal effort. We here at Videosdk.live believe that our pre-builts can help a developer curate their product in no more than 10 minutes, making them extremely organized and swift. Our pre-builts assure minimal wastage of resources in terms of development time and cost. <br/></p><h3 id="what-is-videosdklive-pre-built-sdk">What is Videosdk.live Pre-built SDK?</h3><p>Videosdk.live brings its embedded pre-built SDK, simple and easy to use, to developers. 
Among the various SDKs we offer, this blog describes the embedded pre-built SDK, which one can easily substitute for the time-consuming procedures of integrating a video calling tool. We offer economical, time-efficient pre-built SDKs for developers to accomplish half of their tasks, making their work simpler and faster. <br/></p><h3 id="what-makes-our-video-calling-apis-a-sure-shot-deal">What makes our Video Calling APIs a sure-shot deal</h3><p>We offer multiple features that emphasize faster delivery of outputs with quality support. Our team works on delivering the best quality support to developers, where everything is designed with simplicity and decorated with modern tech. Here at Videosdk.live, we offer developers some all-time notable assistance.<br/></p><ol><li>We offer top <strong>browsing support</strong> to developers on browsers like Chrome, Edge, Firefox, and Safari, providing flexible support to users.</li><li>We dispense efficient <strong>support to more than 98% of devices</strong>, helping developers and users make it a fair deal to use.</li><li>Videosdk.live designs its embedded pre-built SDKs in a <strong>simple and easy-to-use</strong> manner, making them a utility even for a fresher or an inexperienced developer.</li><li>We help to create a <strong>one-click workplace </strong>for video calls, establishing video conferences in one click and enabling a huge participant crowd at once.</li><li>We build a <strong>stronger UI</strong> for developers so that they can make their work simpler and less time-consuming.</li><li>Videosdk.live works for <strong>security and privacy</strong>: all calls are end-to-end encrypted. 
We treat a user’s privacy as being of the utmost significance.</li><li>We deliver astounding video calling with <strong>low latency and minimal disturbances, </strong>which makes it a better fit for communication.<br/></li></ol><h3 id="our-apis-are-super-sleek">Our APIs are super sleek</h3><p>The Videosdk.live APIs give you an amazing video-calling experience to make communication effective. We offer a high-quality video and audio experience for users on any mobile device.<br/></p><p>With our embedded SDK, a developer can embed a video call widget in a web application. It supports 98% of devices across all platforms and adaptive video calling for better quality calls with low latency. Developers can also customize the embedded SDK to make it more convenient for an application. <br/></p><h3 id="experience-a-quick-setup-of-video-calling-api-with-our-pre-built-ui">Experience a quick setup of video calling API with our pre-built UI</h3><p>Build your one-click video calls in just a few minutes. We offer real-time communication with wholesome attributes and alluring audio and video quality. Videosdk.live has always developed its products with technical sustainability and modern creativity. Our attractive video-calling SDKs also allow customization as per the developer's needs.</p><p><strong>High audio quality support</strong></p><p>The pre-built embedded SDKs allow developers to experience high-quality natural audio support with a clear voice and no clutter, covering the full sound bandwidth. 
Get scalable audio from 16 kHz to 48 kHz with 360° spatial audio support.</p><p><strong>AI-powered audio mixing and speaker switch</strong></p><p>Experience AI-powered voice and audio mixing with an intelligent active speaker switch.</p><p><strong>Noise and disturbance cancellation</strong></p><p>The pre-built SDK's algorithms cancel unwanted, distracting sounds, making a meeting noise-free and crystal clear.</p><p><strong>High video quality standards</strong></p><p>With low latency, the SDK ensures high-quality video conferencing, with adaptive quality supporting 98% of devices. Enjoy resolution scalable up to 2K.</p><p><strong>HD and Full HD video calling support</strong></p><p>Enjoy video conferencing in HD and Full HD, with ultra-wide and enhanced-quality screen support.</p><p><strong>Screen sharing</strong></p><p>With our embedded pre-built SDK, share the screen partially or entirely.</p><p><strong>Notable recording and chat support</strong></p><p>Record meetings and reuse them with the VoD facility. Get built-in chat support, with query raising and other functionality.</p><p><strong>Bonus activities</strong></p><p>Additional bonus activities like whiteboards and polls, for education, health, and other use cases.<br/></p><blockquote>These video call APIs are carefully constructed for users across several industries. As mentioned above, developers get customization features on hand, so they can tailor the video calling experience to the sectors they serve and deliver it in their respective applications. 
 <br/></blockquote><ul><li>Easy and simple to use</li><li>Developed with maximum diligence to avoid glitches</li><li>Flexible device support</li><li>One-click joining for meetings</li><li>Secured and encrypted</li><li>Scalable video quality<br/></li></ul><h3 id="use-cases-of-video-conferencing">Use cases of video conferencing</h3><ul><li>Online health consultancy</li><li>E-learning, or education over long distances</li><li>Social engagements</li><li>Internal and external business communication</li><li>Religious and motivational sessions<br/></li></ul><blockquote>Videosdk.live facilitates multiple features in its video calling API. With our pre-built SDKs, you can build an app with video calling in just 10 minutes. Our eminent features make a developer’s work less stressful. Our team puts all its effort into building products with top-notch quality, delivering the best to developers every time.<br/></blockquote><blockquote>Connect with us to get more such valuable content and an everlasting corporate relationship.<br/></blockquote><blockquote>Find our documentation here:</blockquote><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/docs/realtime-communication/sdk-reference/prebuilt-sdk-js/setup"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Setup | videosdk.live Documentation</div><div class="kg-bookmark-description">Using prebuilt sdk</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/favicon.ico" alt="Launching: Prebuilt Video SDK  1.0 for developers"/><span class="kg-bookmark-author">videosdk.live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/zujonow_32.png" alt="Launching: Prebuilt Video SDK  1.0 for developers"/></div></a></figure><figure class="kg-card kg-image-card"><img 
src="http://assets.videosdk.live/static-assets/ghost/2021/07/Untitled-design--7--1.gif" class="kg-image" alt="Launching: Prebuilt Video SDK  1.0 for developers" loading="lazy" width="1440" height="810"/></figure>]]></content:encoded></item><item><title><![CDATA[Video SDK June 22' Month Updates for Developers]]></title><description><![CDATA[Thanks for checking out the June month developer updates! This month, we've released some big improvements that help developers work more efficiently.]]></description><link>https://www.videosdk.live/blog/video-sdk-june-22-month-updates-for-developers</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb8e</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 08 Jul 2022 14:43:10 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/07/June-2022-updates.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/07/June-2022-updates.jpg" alt="Video SDK June 22' Month Updates for Developers"/><p>NEW! This is the June 2022 release announcement. 
Here is a list of all new enhancements and product updates on videosdk.live<br/></p><h1 id="prebuilt-sdk">Prebuilt SDK</h1><p><strong>v0.3.6</strong></p><ol><li>Fix: Resolve UDP port blocking and video blackout issue</li></ol><p><strong>v0.3.7</strong></p><ol><li>Add ViewPort for better quality.</li><li>Provide echo cancellation on the audio stream.</li><li>Remove the googDsp dependency warning.</li><li>Added a loader animation on the join screen.</li></ol><p><a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/getting-started"><strong>Update Version Now ➡️</strong></a></p><hr><h1 id="javascript-sdk">Javascript SDK</h1><p><strong>v0.0.34 and v0.0.35</strong></p><ol><li>Fix: Resolve UDP port blocking and video blackout issue.</li></ol><p><strong>v0.0.36</strong></p><ol><li>Add the ViewPort method for better quality.</li><li>Docs: <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/features/set-viewport">https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/features/set-viewport</a></li><li>Provide echo cancellation on the audio stream.</li><li>Remove the googDsp dependency warning.</li><li>Fix: Resolve <code>changeWebcam</code> and <code>changeMic</code> customTrack issue.</li></ol><p><a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/getting-started"><strong>Update Version Now ➡️</strong></a></p><hr><h1 id="react-js-sdk">React JS SDK</h1><p><strong>v0.1.35 and v0.1.36</strong></p><ol><li>Fix: Resolve UDP port blocking and video blackout issue</li></ol><p><strong>v0.1.37</strong></p><ol><li>Add the ViewPort method for better quality.</li><li>Docs: <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/features/set-viewport">https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/features/set-viewport</a></li><li>Provide echo cancellation on the audio stream.</li><li>Remove the googDsp dependency warning.</li><li>Fix: Resolve <code>changeWebcam</code> and <code>changeMic</code> customTrack issue.</li></ol><p><strong><a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/getting-started">Update Version Now ➡️</a></strong></p><hr><h1 id="react-native-sdk">React Native SDK</h1><p><strong>v0.0.30 and v0.0.31</strong></p><ol><li>Fix: Resolve the UDP port blocking and video blackout issue</li></ol><p><strong>v0.0.32</strong></p><ol><li>Add the ViewPort method for better quality.</li><li>Docs: <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/features/set-viewport">https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/features/set-viewport</a></li><li>Provide echo cancellation on the audio stream.</li><li>Fix: Resolve <code>changeWebcam</code> and <code>changeMic</code> customTrack issue.</li></ol><p><a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/getting-started"><strong>Update Version Now ➡️</strong></a></p><hr><h1 id="flutter-sdk">Flutter SDK</h1><p><strong>v0.0.14</strong></p><ol><li>Add the ViewPort method for better quality.</li><li>Docs: <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/features/set-viewport">https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/features/set-viewport</a></li></ol><p><a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/getting-started"><strong>Update Version Now ➡️</strong></a></p><hr><h1 id="android-sdk">Android SDK</h1><p><strong>v0.0.19</strong></p><ol><li>Fix: Resolve UDP port blocking and video blackout issue</li></ol><p><strong>v0.0.20</strong></p><ol><li>Removed audio-related errors thrown from the SDK for the <code>armeabi-v7a</code> architecture.</li></ol><p><strong>v0.0.21</strong></p><ol><li>Add the ViewPort method for better quality.</li><li>Docs: <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/getting-started">https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/getting-started</a></li></ol><p><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/getting-started"><strong>Update Version Now ➡️</strong></a></p><hr><h1 id="ios-sdk">iOS SDK</h1><p><strong>v0.1.20</strong></p><ol><li>Bug Fix: Resolve the <code>s.delegate?.didReceive(event: event, client: s)</code> error thrown on meeting leave or end</li></ol><p><a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/getting-started"><strong>Update Version Now ➡️</strong></a></p><hr><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://www.videosdk.live/blog/worlds-fastest-growing-audio-video-developers-community"><img src="http://assets.videosdk.live/static-assets/ghost/2022/07/fastest-growing-community.jpg" class="kg-image" alt="Video SDK June 22' Month Updates for Developers" loading="lazy" width="1280" height="720" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/07/fastest-growing-community.jpg 600w, 
http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/07/fastest-growing-community.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/07/fastest-growing-community.jpg 1280w" sizes="(min-width: 720px) 720px"/></a><figcaption>World's Fastest-Growing Video Engineer Community? <a href="https://discord.gg/Gpmj6eCq5u">join community</a></figcaption></figure><p>Feel free to join the Video SDK <a href="https://discord.gg/Gpmj6eCq5u">developer community</a> to know more about future events, community programs, and opportunities, &amp; get developer support from the Video SDK team.</p><p>You can always <a href="https://videosdk.live/contact">connect with us</a> in case of any query or help. We are happy to assist you.</p><p>​​​​We can't wait to see what you build next with Video SDK!<br><br>Thanks for reading.</br></br></p></hr></hr></hr></hr></hr></hr></hr>]]></content:encoded></item><item><title><![CDATA[World's Fastest-Growing Audio-Video Developers Community?]]></title><description><![CDATA[videosdk community is for developers, founders, and anyone else who wants to share their work with others. They have created a place where people can come together and share their work to help each other grow.]]></description><link>https://www.videosdk.live/blog/worlds-fastest-growing-audio-video-developers-community</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb8d</guid><category><![CDATA[community]]></category><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Tue, 21 Jun 2022 10:35:05 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/06/fastest-growing-community-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/06/fastest-growing-community-2.jpg" alt="World's Fastest-Growing Audio-Video Developers Community?"/><p>Become part of the fastest-growing developer community on Discord. 
Join the Video SDK Discord server to meet like-minded people and grow together! The metrics below display the power of a community-led product. </p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/06/Screenshot-from-2022-06-11-18-14-23.png" class="kg-image" alt="World's Fastest-Growing Audio-Video Developers Community?" loading="lazy" width="868" height="377"/></figure><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="http://assets.videosdk.live/static-assets/ghost/2022/06/Screenshot-from-2022-06-11-18-14-02.png" width="428" height="375" loading="lazy" alt="World's Fastest-Growing Audio-Video Developers Community?"/></div><div class="kg-gallery-image"><img src="http://assets.videosdk.live/static-assets/ghost/2022/06/Screenshot-from-2022-06-11-18-14-58.png" width="869" height="377" loading="lazy" alt="World's Fastest-Growing Audio-Video Developers Community?"/></div></div></div><figcaption>Video SDK family is growing &amp; talking a lot!</figcaption></figure><h2 id="why-did-we-choose-discord">Why Did We Choose Discord?</h2><p>Real-time communication has become essential for talking with different groups very quickly. To meet this demand, Discord is rising to become the preferred platform, especially among developers, and rightfully so. </p><p>What makes Discord unique is its flexibility to categorize whatever you throw at it. For example, you can create different channels for each specific need. Furthermore, Discord provides the ability to automate many things. That's what developers love to do! They automate! </p><p>For these reasons and plenty of others, we at Video SDK prefer Discord for communication. </p><p>Furthermore, it helps us chat with you casually and resolve your issues ASAP! Not only that, but even other channel members can help you out. This makes sure that whatever your issue might be, it always gets resolved.</p><h2 id="introduction-to-video-sdk-server">Introduction to Video SDK Server!</h2><p>We welcome you to our Discord channel where all the cool devs hang out. Here is a quick introduction to the Video SDK Discord server. </p><p>Discord channels can get pretty complex for newcomers going through the rules &amp; roles, but worry not, as we have kept everything pretty simple &amp; chill. </p><ul><li><strong>WELCOME TO VIDEO SDK</strong></li></ul><p>Catch all the latest announcements &amp; product updates here and stay up to date with all the new features we are pumping out!</p><ul><li><strong>HELP ME: VIDEO SDK TEAM</strong></li></ul><p>If you have something super urgent, go to the "<strong>help me raise an issue</strong>" channel and raise a ticket to tell us what's wrong; our developers will get in touch with you ASAP.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2022/06/Untitled-design.png" class="kg-image" alt="World's Fastest-Growing Audio-Video Developers Community?" loading="lazy" width="425" height="611"/><figcaption><strong>Help me: Video SDK team</strong></figcaption></figure><ul><li><strong>HELP ME: COMMUNITY DEVS</strong></li></ul><p>If your issue is less urgent, or you think other developers might already have solved it, you can find the respective channel under this section and ask your query.</p><ul><li><strong>VIDEO SDK COMMUNITY</strong></li></ul><p>You can just chill with us in the general chat; we would love that. We would also love your feedback, which you can give in the feedback channel. Open your hearts and share whatever you think of the platform, or suggest new features. We appreciate it all. </p><!--kg-card-begin: markdown--><blockquote>
<h3 id="if-youre-a-developer-who-wants-to-explore-the-possibility-of-using-video-sdk-for-a-small-proof-of-concept-chances-are-youre-not-going-to-need-your-bosss-approval-to-pay-for-it-or-try-it-out">&quot;If you're a developer who wants to explore the possibility of using Video SDK for a small proof of concept, chances are, you're not going to need your boss's approval to pay for it or try it out&quot;</h3>
<h1 id=""/>
<h1 id=""/>
<h4 id="-arjun-kava">- Arjun Kava</h4>
</blockquote>
<!--kg-card-end: markdown--><h2 id="why-developers-are-choosing-the-video-sdk">Why Developers Are Choosing The Video SDK?</h2><p/><p>What's not to love about Video SDK when we provide support for <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/getting-started">React</a>, <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/getting-started">JavaScript</a>, <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/getting-started">React Native</a>, <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/getting-started">Flutter</a>, <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/getting-started">Android</a>, &amp; <a href="https://docs.videosdk.live/ios/guide/video-and-audio-calling-api-sdk/getting-started">iOS</a>? More importantly, apart from building with the <a href="https://docs.videosdk.live/">custom SDK</a>, you can also work with the <a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/getting-started">Pre-built SDK</a> to quickly integrate it into your platform.</p><p>It takes less than 10 minutes to integrate Video SDK on any platform, be it an app or a website. </p><p>Video SDK pricing is killer; no competitor can match it. </p><p>Detailed analytics of the video &amp; audio streaming activity of your app/website.</p><p>Custom interactive live streaming.</p><p>Drive exposure to your own website, not to Zoom!</p><p>Easy to integrate &amp; easy to migrate to.</p><p>Premium support; call us anytime.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/06/all-platform--1-.png" class="kg-image" alt="World's Fastest-Growing Audio-Video Developers Community?" loading="lazy" width="425" height="611"/></figure><p>Become part of Video SDK, where every role is essentially a badge that you can earn. 
</p><p>Thanks for becoming a part of the Video SDK Discord server and contributing. We appreciate everyone's efforts in helping the server become this big. It wouldn't be possible without you!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2022/06/devloper-develope-developre.gif" class="kg-image" alt="World's Fastest-Growing Audio-Video Developers Community?" loading="lazy" width="360" height="270"><figcaption>Developers, developer, developers ? <a href="https://youtu.be/Vhh_GeBPOhs" rel="">link</a></figcaption></img></figure><!--kg-card-begin: html--><!DOCTYPE html>
<html>

<head>
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<style>
		.button {
			border-radius: 4px;
			background-color: #5f7afa;
			border: none;
			color: #FFFFFF;
			text-align: center;
			font-size: 17px;
			padding: 10px;
			width: 300px;
			transition: all 0.5s;
			cursor: pointer;
			margin: 5px;
		}

		.button span {
			cursor: pointer;
			display: inline-block;
			position: relative;
			transition: 0.5s;
		}

		.button span:after {
			content: '\00bb';
			position: absolute;
			opacity: 0;
			top: 0;
			right: -20px;
			transition: 0.5s;
		}

		.button:hover span {
			padding-right: 25px;
		}

		.button:hover span:after {
			opacity: 1;
			right: 0;
		}
	</style>
</meta></head>

<body>

	<h2/>
	<a href="https://discord.gg/Gpmj6eCq5u">
          <button class="button"><span>???? ??????? ?????????</span></button>

		
		


</a></body></html><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[May 22' Month Updates for Developers]]></title><description><![CDATA[Thanks for checking out the May month developer updates! This month, we've released some big improvements that help developers work more efficiently.]]></description><link>https://www.videosdk.live/blog/may-22-month-updates-for-developers</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb8c</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 03 Jun 2022 10:57:52 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/06/May-2022-updates.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/06/May-2022-updates.jpg" alt="May 22' Month Updates for Developers"/><p>NEW! This is the May 2022 release announcement. Here is a list of all new enhancements and product updates on videosdk.live</p><!--kg-card-begin: markdown--><h1 id="prebuilt-sdk">Prebuilt SDK</h1>
<h4 id="v032">v0.3.2</h4>
<p>• Fix: Mozilla browser (Mac OS) Video Control button not working.</p>
<h4 id="v033">v0.3.3</h4>
<ul>
<li>Region support added for new meetings</li>
</ul>
<h4 id="v034">v0.3.4</h4>
<ul>
<li>Added <code>preferedProtocols</code> to the init meeting config</li>
<li>Fixed a custom track issue</li>
<li>Added a new event for when getUserMedia / getDisplayMedia fails</li>
<li>Resolved the &quot;No peers found for the Data consumer&quot; error when the recorder joined</li>
</ul>
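The `preferedProtocols` option in the list above is set when building the prebuilt init config. A minimal sketch of what that config might look like; `buildPrebuiltConfig` is a hypothetical helper, the protocol values and other config keys are illustrative, and the authoritative config shape lives in the prebuilt setup docs:

```javascript
// Sketch: assembling a prebuilt init config that pins which transport
// protocols to try first. `preferedProtocols` (spelling as in the release
// notes) and the other keys shown here are illustrative, not exhaustive.
function buildPrebuiltConfig({ meetingId, name, apiKey }) {
  return {
    name,                  // participant display name
    apiKey,                // your VideoSDK API key
    meetingId,             // room to join
    micEnabled: true,      // join with mic on
    webcamEnabled: true,   // join with camera on
    // Try UDP first, then fall back to TCP on restrictive networks
    preferedProtocols: ["UDP", "TCP"],
  };
}

const config = buildPrebuiltConfig({
  meetingId: "abcd-efgh-ijkl",
  name: "Alice",
  apiKey: "YOUR_API_KEY",
});

// In the browser, this object would then be handed to the embedded SDK,
// e.g. new VideoSDKMeeting().init(config) per the prebuilt setup docs.
console.log(config.preferedProtocols.join(",")); // → "UDP,TCP"
```

The idea is that the whole prebuilt experience is driven by this one config object, so protocol preferences slot in alongside the existing keys rather than requiring a separate API call.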
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2022/06/604892287590e58910d67314_giphy-1.gif" class="kg-image" alt="May 22' Month Updates for Developers" loading="lazy" width="480" height="270"><figcaption>Bug Fix</figcaption></img></figure><!--kg-card-begin: markdown--><h1 id="javascript-sdk">Javascript SDK</h1>
<h4 id="v0030">v0.0.30</h4>
<ul>
<li>Custom Video track on <code>changeWebcam</code> method.</li>
<li>Custom Audio track on <code>changeMic</code> method.</li>
<li>Bug Fix: Mozilla browser (Mac OS) localParticipant video blackout</li>
</ul>
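The custom-track support on `changeWebcam` and `changeMic` listed above pairs with the custom-track APIs. A minimal sketch of how a custom video track could be created and swapped in; the `createCameraVideoTrack` helper name follows the custom-track docs, but the option values and the `switchToCustomWebcam` wrapper are illustrative:

```javascript
// Sketch: supplying a custom track to changeWebcam instead of letting the
// SDK open the camera itself. The preset string and option values below
// are illustrative; check the custom-track docs for the supported values.
function cameraTrackOptions(mode) {
  return {
    optimizationMode: mode,        // "text" | "motion" | "detail"
    encoderConfig: "h720p_w1280p", // illustrative resolution preset
    facingMode: "environment",     // prefer the back camera on mobile
  };
}

// Browser-only: capture our own track, then hand it to the meeting so the
// SDK publishes it in place of the default webcam track.
async function switchToCustomWebcam(VideoSDK, meeting) {
  const customTrack = await VideoSDK.createCameraVideoTrack(
    cameraTrackOptions("motion")
  );
  meeting.changeWebcam(customTrack);
}

console.log(cameraTrackOptions("text").optimizationMode); // → "text"
```

Building the options in one place like this makes it easy to, say, flip `optimizationMode` between "text" for screen-heavy tutoring sessions and "motion" for regular calls without touching the capture code.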
<h4 id="v0031">v0.0.31</h4>
<ul>
<li>Bug Fix: Fixed the custom track issue on initMeeting.</li>
<li>Added a new error event for when the device or browser does not support audio or video communication.</li>
<li>Bug Fix: Resolved the &quot;No peers found for the Data consumer&quot; error when starting recording/livestream/HLS.</li>
</ul>
<h4 id="v0032-and-v0033">v0.0.32 and v0.0.33</h4>
<ul>
<li>SDK Enhancements.</li>
</ul>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h1 id="react-js-sdk">React JS SDK</h1>
<h4 id="v0131">v0.1.31</h4>
<ul>
<li>Custom Video track on <code>changeWebcam</code> method.</li>
<li>Custom Audio track on <code>changeMic</code> method.</li>
<li>Bug Fix: Mozilla browser (Mac OS) localParticipant video blackout.</li>
</ul>
<h4 id="v0132">v0.1.32</h4>
<ul>
<li>Bug Fix: Fixed the custom track issue on initMeeting</li>
<li>Added a new error event for when the device or browser does not support audio or video communication.</li>
<li>Bug Fix: Resolved the &quot;No peers found for the Data consumer&quot; error when starting recording/livestream/HLS.</li>
</ul>
<h4 id="v0133-and-v0134">v0.1.33 and v0.1.34</h4>
<ul>
<li>SDK Enhancements</li>
</ul>
<h1 id="react-native-sdk">React Native SDK</h1>
<h4 id="v0027">v0.0.27</h4>
<ul>
<li>Custom Video track on <code>changeWebcam</code> method.</li>
<li>Custom Audio track on <code>changeMic</code> method.</li>
<li>Bug Fix: Fixed the custom track issue on initMeeting</li>
<li>Added a new error event for when the device does not support audio or video communication.</li>
<li>Bug Fix: Resolved the &quot;No peers found for the Data consumer&quot; error when starting recording/livestream/HLS.</li>
</ul>
<h4 id="v0028-and-v0029">v0.0.28 and v0.0.29</h4>
<ul>
<li>SDK Enhancements</li>
</ul>
<h1 id="android-sdk">Android SDK</h1>
<h4 id="v0015">v0.0.15</h4>
<ul>
<li>Fixed a crash on <code>meeting.end()</code></li>
<li>Bug Fix: Opening the native camera stops video publishing from the meeting.</li>
</ul>
<h4 id="v0016">v0.0.16</h4>
<ul>
<li>Custom ParticipantId on <code>initMeeting</code>: <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/features/start-join-meeting#2-initialization">Here</a></li>
<li>Added error code events: <a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/features/error-event">Here</a></li>
<li>Bug Fix: FATAL EXCEPTION: <code>java.lang.UnsatisfiedLinkError</code> on AudioTrackJavaThread when leaving a meeting.</li>
</ul>
<h1 id="ios-sdk">iOS SDK</h1>
<h4 id="v0129">v0.1.29</h4>
<ul>
<li>Bug Fix: Fixed a crash on meeting leave.</li>
</ul>
<!--kg-card-end: markdown--><hr><!--kg-card-begin: markdown--><blockquote>
<h4 id="video-sdk-is-now-my-go-to-software-for-my-meetings-very-simple-to-use-and-uncomplicated-also-completely-white-labeled-so-it-gives-a-very-professional-experience-when-we-do-client-meetings">&quot;Video SDK is now my go to software for my meetings. Very simple to use and uncomplicated. Also completely white labeled so it gives a very professional experience when we do client meetings.&quot;</h4>
<h6 id="venkatesh-b-group-ceo-compunet-connections">Venkatesh B. Group CEO, Compunet Connections</h6>
</blockquote>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://www.videosdk.live/blog/how-to-make-a-video-calling-app-using-react-native"><img src="http://assets.videosdk.live/static-assets/ghost/2022/06/React-Native1.jpg" class="kg-image" alt="May 22' Month Updates for Developers" loading="lazy" width="1281" height="721" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/06/React-Native1.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/06/React-Native1.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2022/06/React-Native1.jpg 1281w" sizes="(min-width: 720px) 720px"/></a><figcaption><a href="https://www.videosdk.live/blog/how-to-make-a-video-calling-app-using-react-native">Building Video Calling App in React Native</a></figcaption></figure><p><em>In this tutorial, you’ll learn how to make a <a href="https://www.videosdk.live/blog/how-to-make-a-video-calling-app-using-react-native">video calling app feature in your React Native</a> app using Video SDK.</em></p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="200" height="113" src="https://www.youtube.com/embed/kI3hJpMA9mo?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/><figcaption>Interactive Live Streaming App in React Js with Video SDK</figcaption></figure><h3 id="twitter-%F0%9F%90%A6">Twitter ?</h3><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="und" dir="ltr"><a href="https://t.co/5nK2qW0RVp">https://t.co/5nK2qW0RVp</a></p>&mdash; Video SDK (@video_sdk) <a href="https://twitter.com/video_sdk/status/1531617853979656194?ref_src=twsrc%5Etfw">May 31, 2022</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"/>
</figure><figure class="kg-card kg-image-card"><a href="https://twitter.com/i/broadcasts/1YqJDqnzgRoxV"><img src="http://assets.videosdk.live/static-assets/ghost/2022/06/unnamed-1.png" class="kg-image" alt="May 22' Month Updates for Developers" loading="lazy" width="800" height="800" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/06/unnamed-1.png 600w, http://assets.videosdk.live/static-assets/ghost/2022/06/unnamed-1.png 800w" sizes="(min-width: 720px) 720px"/></a></figure><p>Feel free to join the Video SDK <a href="https://discord.gg/Gpmj6eCq5u">developer community</a> to know more about future events, community programs, and opportunities, &amp; get developer support from the Video SDK team.</p><p>​​​​We can't wait to see what you build next with Video SDK!<br><br/></br></p></hr>]]></content:encoded></item><item><title><![CDATA[April 22' Month Updates for Developers]]></title><description><![CDATA[Thanks for checking out the April month developer updates! This month, we've released some big improvements that help developers work more efficiently.]]></description><link>https://www.videosdk.live/blog/april-22-month-updates-for-developers</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb85</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Tue, 03 May 2022 13:04:17 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/05/April-2022-updates.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/05/April-2022-updates.jpg" alt="April 22' Month Updates for Developers"/><p>NEW! This is the April 2022 release announcement. Here is a list of all new enhancements and product updates on videosdk.live<br><br>When the Video SDK's default custom video track cannot meet your application's requirement, the SDK allows you to customize the video track process. 
By enabling custom video capture, you can manage video capture on your own and send the captured video data to the SDK for subsequent video encoding and stream publishing. </br></br></p><p>With custom video capture enabled, you can still call the SDK's API to render the video for local preview, which means you don't have to implement the rendering yourself.</p><p>Listed below are some scenarios where enabling a custom video track is recommended:</p><ul><li>Your application needs to use a third-party beauty SDK (e.g., the <a href="https://www.banuba.com/">Banuba</a> SDK). In such cases, you can perform video capture and preprocessing using the beauty SDK and then pass the preprocessed video data to the Video SDK for subsequent encoding and stream publishing.<br/></li><li>Your application needs to perform another task that also uses the camera during live streaming, which would conflict with the Video SDK's default video capturing module.</li><li>Custom audio tracks: this feature can be used to add custom processing layers, such as background noise removal and echo cancellation, to audio before sending it to other participants.</li></ul><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/05/videosdk-custom-track.gif" class="kg-image" alt="April 22' Month Updates for Developers" loading="lazy" width="1280" height="720"/></figure><p><br>You now have more control over camera and audio publishing before starting calls, with options such as: </br></p><p><strong>cameraId</strong>: Select a specific camera by default before starting a call. </p><p><strong>encoderConfig</strong>: Pick the best-matching resolution before publishing the video or audio streams. We ship presets for common resolutions and audio modes, such as stereo, music rooms, and high-quality audio. </p><p><strong>facingMode</strong>: Select the front or back camera before publishing the video stream. 
</p><p><strong>optimizationMode: </strong>This controls the video quality trade-off and is useful depending on the app's use case. It has three modes:</p><ul><li><strong>Text</strong>: Helpful while mentoring students or solving their issues while they write on paper; it focuses on text as the first priority.</li><li><strong>Motion</strong>: Prioritizes motion over details. </li><li><strong>Detail</strong>: Prioritizes details over motion.</li></ul><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/05/REC--3-.gif" class="kg-image" alt="April 22' Month Updates for Developers" loading="lazy" width="1280" height="720"/></figure><p><strong>Javascript: v0.0.29</strong></p><ol><li>Custom Video Track: <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/render-media/optimize-video-track"><strong>Docs</strong></a></li><li>Custom Audio Track: <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/render-media/optimize-audio-track"><strong>Docs</strong></a></li><li>Custom Screen Share Track: <a href="https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/handling-media/screen-share"><strong>Docs</strong></a></li></ol><p><strong>ReactJS: v0.1.30</strong></p><ol><li>Custom Video Track: <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/features/custom-track/custom-video-track"><strong>Docs</strong></a></li><li>Custom Audio Track: <a href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/features/custom-track/custom-audio-track"><strong>Docs</strong></a></li><li>Custom Screen Share Track: <a 
href="https://docs.videosdk.live/react/guide/video-and-audio-calling-api-sdk/features/custom-track/custom-screen-share-track"><strong>Docs</strong></a></li></ol><p><strong>React Native: v0.0.26</strong></p><ol><li>Custom Video Track:<strong> <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/features/custom-track/custom-video-track">Docs</a><strong> </strong></strong></li><li>Custom Audio Track:<strong> <a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/features/custom-track/custom-audio-track">Docs</a></strong></li><li>Custom Screen share Track<strong><strong>:</strong> </strong><a href="https://docs.videosdk.live/react-native/guide/video-and-audio-calling-api-sdk/features/custom-track/custom-screen-share-track"><strong>Docs</strong></a></li></ol><p><strong>Flutter : v0.0.12</strong></p><ol><li>IOS mic issue resolved</li></ol><p><strong>Prebuilt SDK: v0.3.2</strong></p><ol><li>Firefox browser camera permission issue has been resolved</li></ol><p><br><strong>Rest API /v2 released.</strong><br>HLS Live Streaming APIs :</br></br></p><p>Low latency HLS streaming for you now with API as well. It can help to broadcast to thousands of viewers with low latency. 
</p><ol><li>Start HLS: <a href="https://docs.videosdk.live/api-reference/realtime-communication/start-hlsStream"><strong>Docs</strong></a></li><li>Stop HLS: <a href="https://docs.videosdk.live/api-reference/realtime-communication/stop-hlsStream"><strong>Docs</strong></a></li><li>Fetch All HLS: <a href="https://docs.videosdk.live/api-reference/realtime-communication/fetch-all-hls"><strong>Docs</strong></a></li><li>Fetch an HLS: <a href="https://docs.videosdk.live/api-reference/realtime-communication/fetch-an-hls"><strong>Docs</strong></a></li></ol><p>Auth security in v2: if you want to generate a token that can only access the v2 APIs, provide `version` and `role` in the payload while signing the JWT: <strong><a href="https://docs.videosdk.live/api-reference/realtime-communication/intro">Docs</a></strong></p><pre><code class="language-NodeJS">const jwt = require('jsonwebtoken');
const API_KEY = "&lt;YOUR API KEY&gt;";
const SECRET = "&lt;YOUR SECRET&gt;";
const options = { 
 expiresIn: '10m', 
 algorithm: 'HS256' 
};
const payload = {
 apikey: API_KEY,
 version: 2,
 role: ['CRAWLER'],
};
const token = jwt.sign(payload, SECRET, options);
console.log(token);
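
// For reference (a sketch, not SDK documentation): an HS256 JWT is just
// base64url(header) + '.' + base64url(payload), HMAC-SHA256-signed with the
// secret. The demo values below are illustrative only.
const crypto = require('crypto');
const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64url');
const demoPayload = { apikey: 'demo-key', version: 2, role: ['CRAWLER'] };
const unsigned = b64url({ alg: 'HS256', typ: 'JWT' }) + '.' + b64url(demoPayload);
const signature = crypto
  .createHmac('sha256', 'demo-secret')
  .update(unsigned)
  .digest('base64url');
const demoToken = unsigned + '.' + signature; // same three-part shape jwt.sign returns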
</code></pre><p><strong>Tutorials</strong>: Customized Layout: This feature allows you to start recording / HLS / Livestream with Customized Layout by providing `<a href="https://docs.videosdk.live/docs/tutorials/customized-layout"><strong>templateUrl</strong></a>`<strong><strong>.</strong></strong><br><br><strong>We are VERY excited to announce our new website is now live in BETA </strong>?<br><br><strong>Go check out</strong>: <a href="https://www.videosdk.live/v1">https://www.videosdk.live/v1</a><br><br>Thanks for reading.</br></br></br></br></br></br></p>]]></content:encoded></item><item><title><![CDATA[March 22' Month Updates for Developers]]></title><description><![CDATA[Thanks for checking out the March developer updates! This month, we've released some big improvements that help developers work more efficiently.]]></description><link>https://www.videosdk.live/blog/march-22-month-updates-for-developers</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb84</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Mon, 04 Apr 2022 09:07:35 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/04/March-2022-updates.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/04/March-2022-updates.jpg" alt="March 22' Month Updates for Developers"/><p>NEW! This is the March 2022 release announcement. Here is a list of all new enhancements and product updates on videosdk.live</p><blockquote>⭐ <strong>Important</strong>: If you are a couple of versions behind, upgrade your web &amp; app VideoSDK latest version. We <strong>strongly recommend upgrading SDK versions</strong> incrementally if possible. </blockquote><!--kg-card-begin: markdown--><ul>
<li>Prebuilt SDK v0.3.1</li>
<li>Javascript: v0.0.28</li>
<li>ReactJS: v0.1.27</li>
<li>React Native: v0.0.25</li>
<li>Flutter: v0.0.11</li>
<li>Android: v0.0.14</li>
<li>iOS: v1.2.6</li>
</ul>
<!--kg-card-end: markdown--><p>If you have questions while upgrading SDK versions, join the <a href="https://discord.gg/Gpmj6eCq5u" rel="noopener ugc nofollow">Discord Community</a>. You can ask your questions on the <code>#general</code> channel. Or you can reach out to me on <a href="https://twitter.com/video_sdk">Twitter</a>.</p><h3 id="whats-new">What's New </h3><ul><li>Meeting HLS (Beta)</li><li>Participants will now be able to start HLS for meetings. And can watch HLS using the <em><strong>.m3u8</strong></em> file. This feature is released in JS and ReactJS SDK</li><li>Change layout in prebuilt meetings dynamically for all participants.</li><li>Redesigned Documentation Website</li><li>Provided Quickstart guide for all SDKs, which will help developers to integrate VideoSDK into their projects within minutes.</li><li>Precise API Reference for SDK and Rest API.</li><li>Rest API v2 released</li><li>New Regions are supported</li></ul><h3 id="prebuilt-sdk-v031"><a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/getting-started">Prebuilt SDK: v0.3.1</a><br/></h3><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="http://assets.videosdk.live/static-assets/ghost/2022/04/go-live-bcbded3bfdac93a1b7df31d4e30fb2ed.png" width="1366" height="768" loading="lazy" alt="March 22' Month Updates for Developers" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/04/go-live-bcbded3bfdac93a1b7df31d4e30fb2ed.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/04/go-live-bcbded3bfdac93a1b7df31d4e30fb2ed.png 1000w, http://assets.videosdk.live/static-assets/ghost/2022/04/go-live-bcbded3bfdac93a1b7df31d4e30fb2ed.png 1366w" sizes="(min-width: 720px) 720px"/></div><div class="kg-gallery-image"><img 
src="http://assets.videosdk.live/static-assets/ghost/2022/04/configuration-5b7667850e669d8e6d755692b451eec0--1-.png" width="1366" height="768" loading="lazy" alt="March 22' Month Updates for Developers" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/04/configuration-5b7667850e669d8e6d755692b451eec0--1-.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/04/configuration-5b7667850e669d8e6d755692b451eec0--1-.png 1000w, http://assets.videosdk.live/static-assets/ghost/2022/04/configuration-5b7667850e669d8e6d755692b451eec0--1-.png 1366w" sizes="(min-width: 720px) 720px"/></div></div></div><figcaption><strong><strong>RTMP Live Stream</strong> &amp; <strong>Change layout dynamically</strong></strong></figcaption></figure><!--kg-card-begin: markdown--><ul>
<li>
<h6 id="start-rtmp-live-stream"><a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/go-live-social-media">Start RTMP Live Stream</a></h6>
</li>
</ul>
<!--kg-card-end: markdown--><p>Livestreaming allows participants to broadcast meetings on various social media platforms such as Facebook, YouTube, Twitter, Twitch, and more.</p><pre><code class="language-Prebuilt SDK 0.3.1">const config = {
  // ...

  permissions: {
    //other permissions
    toggleLivestream: true,
  },
  livestream: {
    autoStart: true,
    enabled: true,
  },
  // ...
};</code></pre><!--kg-card-begin: markdown--><ul>
<li>
<h6 id="change-layout-dynamically-for-all-participants"><a href="https://docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/change-layout">Change layout dynamically for all participants</a></h6>
</li>
</ul>
<!--kg-card-end: markdown--><p>Change the layout of a meeting on your screen. For example, if a lot of people are sharing ideas, you might want to see as many participants as possible. The layout options you have depend on what's happening in your meeting.</p><pre><code class="language-Prebuilt SDK 0.3.1">const config = {
  // ...

  permissions: {
    //other permissions
    toggleLivestream: true,
  },
  livestream: {
    autoStart: true,
    enabled: true,
  },
  // ...
};</code></pre><h3 id="javascript-v0028"><a href="https://docs.videosdk.live/javascript/api/sdk-reference/meeting-class/methods#starthls">Javascript: v0.0.28</a></h3><p><strong>HLS Streaming</strong>: delivers live and on-demand content streams to global-scale audiences. Historically, HLS has favored stream reliability over latency. Low-Latency HLS extends the protocol to enable video streaming while maintaining scalability.</p><h3 id="reactjs-v0122"><a href="https://docs.videosdk.live/react/api/sdk-reference/use-meeting/methods#starthls">ReactJS: v0.1.22</a></h3><p><strong>HLS Streaming</strong>: delivers live and on-demand content streams to global-scale audiences. Historically, HLS has favored stream reliability over latency. Low-Latency HLS extends the protocol to enable video streaming while maintaining scalability.</p><h3 id="android-v0014"><a href="https://docs.videosdk.live/android/guide/video-and-audio-calling-api-sdk/getting-started">Android v0.0.14</a></h3><!--kg-card-begin: markdown--><p>1.Change input/output devices</p>
<ul>
<li><a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/methods#changemic">changeMic()</a></li>
<li><a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/methods#changewebcam">changeWebcam()</a></li>
<li><a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/methods#setaudiodevicechangelistener">setAudioDeviceChangeListener()</a></li>
</ul>
<ol start="2">
<li>Pause/Resume Participant Stream</li>
</ol>
<ul>
<li><a href="https://docs.videosdk.live/android/api/sdk-reference/stream-class/methods#resume">resume()</a></li>
<li><a href="https://docs.videosdk.live/android/api/sdk-reference/stream-class/methods#pause">pause()</a></li>
</ul>
<ol start="3">
<li>setQuality for participant stream</li>
</ol>
<ul>
<li><a href="https://docs.videosdk.live/android/api/sdk-reference/participant-class/methods#setquality">setQuality()</a></li>
</ul>
<ol start="4">
<li>External call detection event</li>
</ol>
<ul>
<li><a href="https://docs.videosdk.live/android/api/sdk-reference/meeting-class/meeting-event-listener-class#onexternalcallstarted">onExternalCallStarted()</a></li>
</ul>
<!--kg-card-end: markdown--><h3 id="rest-api-v2-released"><a href="https://docs.videosdk.live/api-reference/realtime-communication/intro">Rest API /v2 released</a> </h3><!--kg-card-begin: markdown--><ul>
<li>Rooms APIs</li>
<li>Sessions APIs</li>
<li>Recordings APIs</li>
</ul>
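<p>As an illustration of the v2 flow, a room can be created with a single authenticated POST. This is a sketch: the endpoint path is taken to be <code>/v2/rooms</code> per the API reference linked above, and global <code>fetch</code> requires Node 18+:</p><pre><code class="language-js">// Hedged sketch: create a room via the v2 REST API with a VideoSDK auth token.
async function createRoom(token) {
  const res = await fetch('https://api.videosdk.live/v2/rooms', {
    method: 'POST',
    headers: { Authorization: token, 'Content-Type': 'application/json' },
  });
  if (!res.ok) throw new Error(`Room creation failed: ${res.status}`);
  return res.json(); // the response includes the roomId used to join a meeting
}</code></pre>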
<!--kg-card-end: markdown--><h3 id="new-regions-are-supported"><a href="https://docs.videosdk.live/docs/api-reference/realtime-communication/create-join-meeting#create-meeting">New Regions are supported</a> </h3><ul><li><strong>eu001</strong> Region Code for Frankfurt ??</li></ul><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/TuRZQWkAWjE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/></figure><p>Join our <a href="https://discord.gg/f2WsNDN9S5">Discord</a> Community<br><br>You can always <a href="https://videosdk.live/contact">connect with us</a> in case of any query or help. We are happy to assist you.</br></br></p><p>Thanks for reading.<br/></p>]]></content:encoded></item><item><title><![CDATA[February 22' Month Updates for Developers ?]]></title><description><![CDATA[Thanks for checking out the February developer updates! This month, we've released some big improvements that help developers work more efficiently.]]></description><link>https://www.videosdk.live/blog/february-22-month-updates-for-developers</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb83</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Tue, 08 Mar 2022 05:57:55 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/03/February-2022.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/03/February-2022.jpg" alt="February 22' Month Updates for Developers ?"/><p>NEW! This is the February 2022 release announcement. 
Here is a list of all new enhancements and product updates on videosdk.live<br/></p><h3 id="prebuilt-sdk024">Prebuilt SDK: (<a href="https://dev-docs.videosdk.live/prebuilt/guide/prebuilt-video-and-audio-calling/features/recording-meeting">0.2.4</a>)</h3><p>The new recording &amp; RTMP custom layout feature is live (in beta) with Prebuilt SDK (<strong>0.2.4</strong>). Please refer to the sample code below for usage instructions.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2022/03/Dynamic-layout--1-.png" class="kg-image" alt="February 22' Month Updates for Developers" loading="lazy" width="1600" height="900" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2022/03/Dynamic-layout--1-.png 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2022/03/Dynamic-layout--1-.png 1000w, http://assets.videosdk.live/static-assets/ghost/2022/03/Dynamic-layout--1-.png 1600w" sizes="(min-width: 720px) 720px"/></figure><p><strong>What's new</strong><br>Regular <strong>layout</strong> attributes and Recording/RTMP <strong>layout</strong> attributes have been separated. This change offers better flexibility if you want different layout settings on each side.</p><p>Layout type: <strong>"SPOTLIGHT" | "SIDEBAR" | "GRID"</strong>. Each type offers a different look.<br><br>Priority: <strong>"SPEAKER" | "PIN"</strong>. This lets you choose who gets priority on screen: the active speaker or the pinned speaker.</p><p><strong>P.S. At any point in time you can have six combinations of settings:</strong><br><br>1. <strong>GRID</strong> (type) &amp; <strong>SPEAKER</strong> (priority) <br>2. <strong>SIDEBAR</strong> (type) &amp; <strong>SPEAKER</strong> (priority) <br>3. <strong>SPOTLIGHT</strong> (type) &amp; <strong>SPEAKER</strong> (priority) <br>4. <strong>GRID</strong> (type) &amp; <strong>PIN</strong> (priority) <br>5. <strong>SIDEBAR</strong> (type) &amp; <strong>PIN</strong> (priority) <br>6. <strong>SPOTLIGHT</strong> (type) &amp; <strong>PIN</strong> (priority)</p><p>gridSize: <strong>"3 | 4 | 5"</strong>. Sets the maximum number of speakers shown on screen.<br><br><strong>Recording</strong></p><!--kg-card-begin: markdown--><pre><code class="language-js">const config = {
  // ...
  recording: {
    enabled: true,
    webhookUrl: &quot;https://www.videosdk.live/callback&quot;,
    awsDirPath: `/meeting-recordings/${meetingId}/`,
    autoStart: false,

    layout: {
      type: &quot;SIDEBAR&quot;, // &quot;SPOTLIGHT&quot; | &quot;SIDEBAR&quot; | &quot;GRID&quot;
      priority: &quot;PIN&quot;, // &quot;SPEAKER&quot; | &quot;PIN&quot;,
      gridSize: 3,
    },
  },

  permissions: {
    toggleRecording: true,
    //...
  },

  //...
};</code></pre>
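<p>Because the same <code>layout</code> object shape is used for both recording and RTMP, a small guard can catch typos in these values early. This is a plain-JS sketch based on the values documented above, not part of the SDK:</p><pre><code class="language-js">// Validate a layout object against the documented values:
// type: "SPOTLIGHT" | "SIDEBAR" | "GRID", priority: "SPEAKER" | "PIN", gridSize: 3 | 4 | 5
function validateLayout({ type, priority, gridSize }) {
  const types = ['SPOTLIGHT', 'SIDEBAR', 'GRID'];
  const priorities = ['SPEAKER', 'PIN'];
  if (!types.includes(type)) throw new Error(`layout.type must be one of: ${types.join(', ')}`);
  if (!priorities.includes(priority)) throw new Error('layout.priority must be "SPEAKER" or "PIN"');
  if (![3, 4, 5].includes(gridSize)) throw new Error('layout.gridSize must be 3, 4 or 5');
  return { type, priority, gridSize };
}</code></pre>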
<!--kg-card-end: markdown--><p><strong>RTMP</strong></p><!--kg-card-begin: markdown--><pre><code class="language-js">const config = {
  // ...
  livestream: {
    autoStart: true,
    outputs: [
      {
        url: &quot;rtmp://x.rtmp.youtube.com/live2&quot;,
        streamKey: &quot;&lt;STREAM KEY FROM YOUTUBE&gt;&quot;,
      },
    ],
    layout: {
      type: &quot;SIDEBAR&quot;, // &quot;SPOTLIGHT&quot; | &quot;SIDEBAR&quot; | &quot;GRID&quot;
      priority: &quot;PIN&quot;, // &quot;SPEAKER&quot; | &quot;PIN&quot;,
      gridSize: 3,
    },
  },
  // ...
};</code></pre>
<!--kg-card-end: markdown--><figure class="kg-card kg-code-card"><pre><code class="language-Prebuilt SDK (0.2.4)">&lt;script src="https://sdk.videosdk.live/rtc-js-prebuilt/0.2.4/rtc-js-prebuilt.js"&gt;&lt;/script&gt;</code></pre><figcaption>Prebuilt SDK (0.2.4)</figcaption></figure><h3 id="javascript-v0025">Javascript: v0.0.25</h3><ol><li><strong>PubSub</strong>: The PubSub feature allows a participant to send and receive messages on the topics to which they have subscribed.</li><li><strong>Customize Recording and Live Streaming layout</strong>: Options can be provided to change the recording and live-streaming layout when calling startRecording(...options) and startLivestream(...options).</li></ol><h3 id="reactjs-v0122">ReactJS: v0.1.22</h3><ol><li><strong>PubSub</strong>: The PubSub feature allows a participant to send and receive messages on the topics to which they have subscribed.</li><li><strong>Customize Recording and Live Streaming layout</strong>: Options can be provided to change the recording and live-streaming layout when calling startRecording(...options) and startLivestream(...options).</li></ol><h3 id="react-native-v0024">React Native: v0.0.24</h3><ol><li><strong>PubSub</strong>: The PubSub feature allows a participant to send and receive messages on the topics to which they have subscribed.</li><li><strong>Customize Recording and Live Streaming layout</strong>: Options can be provided to change the recording and live-streaming layout when calling startRecording(...options) and startLivestream(...options).</li></ol><h3 id="flutter-v0010">Flutter: v0.0.10</h3><ol><li><strong>Custom participantId</strong>: Provide a custom participantId while initializing the MeetingBuilder, so that you can track that participant's events according to your need.</li><li><strong>PubSub</strong>: The PubSub feature allows a participant to send and receive messages on the topics to which they have subscribed.</li><li><strong>Screenshare (Flutter Android)</strong>: Enable screen share from an Android device and share it with other participants.</li><li><strong>Enums for Events</strong>: Provided Enums support for all event listeners of meetings, participants, and streams.</li></ol><h3 id="android-v009">Android v0.0.9</h3><ol><li><strong>Screenshare</strong>: Enable screen share from an Android device and share it with other participants.</li><li><strong>PubSub</strong>: The PubSub feature allows a participant to send and receive messages on the topics to which they have subscribed.</li></ol><h3 id="ios-v125">iOS v1.2.5</h3><ol><li><strong>Stream Pause / Resume</strong>: Pause and resume the video stream of a remote participant.</li><li><strong>Stream setQuality</strong>: Set the quality of a remote participant's webcam stream to low, med, or high.</li><li><strong>Output Device Selection</strong>: Select the audio output device.</li><li><strong>PubSub</strong>: The PubSub feature allows a participant to send and receive messages on the topics to which they have subscribed.</li></ol><h3 id="pubsub-for-all-client-sdk%E2%80%99s-new-feature">PubSub for all client SDKs: New Feature</h3><p>The PubSub feature allows a participant to send and receive messages on the topics to which they have subscribed.</p><ol><li><strong>Javascript: v0.0.25</strong></li><li><strong>ReactJS: v0.1.22</strong></li><li><strong>React Native: v0.0.24</strong></li><li><strong>Flutter: v0.0.10</strong></li><li><strong><strong>Android: 
v0.0.9</strong></strong></li><li><strong><strong>iOS: v1.2.5</strong></strong><br/></li></ol><p>You can always <a href="https://videosdk.live/contact">connect with us</a> in case of any query or help. We are happy to assist you.</p><p>Thanks for reading.<br><br><br/></br></br></p>]]></content:encoded></item><item><title><![CDATA[January 22' Month Updates for Developers ?]]></title><description><![CDATA[Here we announce the January updates on our product line and web applications. The newly enhanced features work best towards fulfilling developer needs.]]></description><link>https://www.videosdk.live/blog/january-22-month-updates-for-developers</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb7c</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Wed, 02 Feb 2022 13:28:20 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2022/02/January-2022-update.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2022/02/January-2022-update.jpg" alt="January 22' Month Updates for Developers ?"/><p/><p>NEW! This is the January 2022 release announcement. Here is a list of all new enhancements and product updates on videosdk.live<br/></p><p><strong>1. User dashboard </strong></p><p>All new signup and login pages</p><p>Added an option to generate tokens from the API keys table</p><p>Profile simplified and billing details are now available in the same form.<br/></p><p><strong>2. 
RTC Javascript Prebuilt SDK v0.1.30</strong></p><ol><li>Added the <a href="https://docs.videosdk.live/docs/guide/prebuilt-video-and-audio-calling/features/left-screen">Meeting left screen</a>, with configurable options, shown on leaving the meeting.</li><li>Introducing <a href="https://docs.videosdk.live/docs/guide/prebuilt-video-and-audio-calling/features/debug-mode">Debug Mode</a> for better development experience to raise a popup for errors encountered during the meeting.</li></ol><p><strong>3. Javascript SDK v0.0.23</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://assets.videosdk.live/static-assets/ghost/2022/02/Connection-poster.jpg" class="kg-image" alt="January 22' Month Updates for Developers ?" loading="lazy" width="1600" height="900"><figcaption>Connect Meetings (BETA)</figcaption></img></figure><p>Connect Meetings (BETA): This new feature enables you to fetch participant data between two or more meetings and make participants switch meetings.</p><p>Added meeting.on(“<a href="https://docs.videosdk.live/docs/guide/video-and-audio-calling-api-sdk/features/error-event">error</a>”) event listener to subscribe to all meeting errors occurring in the SDK.</p><p>You can now pass a unique custom <a href="https://docs.videosdk.live/docs/realtime-communication/sdk-reference/javascript-sdk/meeting-class/#properties">participantId</a> while initializing the meeting.</p><p><strong>4. 
React SDK v0.1.18</strong></p><p>Connect Meetings (BETA): This new feature enables you to fetch participant data between two or more meetings and make participants switch meetings.</p><p><a href="https://docs.videosdk.live/docs/guide/video-and-audio-calling-api-sdk/features/error-event/">Added error event</a> listener to subscribe to all meeting errors occurring in the SDK.</p><p>You can now pass a unique custom <a href="https://docs.videosdk.live/docs/realtime-communication/sdk-reference/react-sdk/use-participant/#parameters">participantId</a> while initializing the meeting.<br/></p><p><strong>5. React Native SDK v0.0.21</strong></p><p>Connect Meetings (BETA): This new feature enables you to fetch participant data between two or more meetings and make participants switch meetings.</p><p>Added meeting.on(“<a href="https://docs.videosdk.live/docs/guide/video-and-audio-calling-api-sdk/features/error-event/">error</a>”) event listener to subscribe to all meeting errors occurring in the SDK.</p><p>You can now pass a unique custom <a href="https://docs.videosdk.live/docs/realtime-communication/sdk-reference/react-native-sdk/use-participant#parameters">participantId</a> while initializing the meeting.<br/></p><p><strong>6. Android SDK v0.0.7</strong></p><p>Added onPresenterChanged event for displaying screen share from other platforms.</p><p><strong>7. Documentation &amp; Support</strong></p><p>A list of all meeting error codes is available <a href="https://docs.videosdk.live/docs/guide/video-and-audio-calling-api-sdk/features/error-event/">here</a>.<br/></p><p><strong>8. 
Rest APIs</strong></p><p>Added streamUrl in livestream API request response to be paired with streamKey.</p><p/><!--kg-card-begin: html--><iframe width="100%" height="315" src="https://www.youtube-nocookie.com/embed/videoseries?list=PLrujdOR6BS_2EpApD_L9xGziVJ4h8nf2A" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/>
<!--kg-card-end: html--><p/><!--kg-card-begin: markdown--><ul>
<li><a href="https://github.com/videosdk-live/videosdk-rtc-javascript-sdk-example">GitHub Code</a> (feel free to give it a star ⭐):</li>
<li>Official Video SDK <a href="https://www.npmjs.com/package/@videosdk.live/js-sdk">NPM package</a></li>
<li>Official Video SDK <a href="https://docs.videosdk.live/docs/guide/video-and-audio-calling-api-sdk/getting-started/">Documentation</a></li>
</ul>
<!--kg-card-end: markdown--><p><br>You can always <a href="https://videosdk.live/contact">connect with us</a> in case of any query or help. We are happy to assist you.</br></p><p>Thanks for reading.</p>]]></content:encoded></item><item><title><![CDATA[December 21' Month Updates for Developers ?]]></title><description><![CDATA[Here we announce the December updates on our product line and web applications. The newly enhanced features work best towards fulfilling developer needs.]]></description><link>https://www.videosdk.live/blog/december-month-updates-for-developers</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb7b</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Fri, 31 Dec 2021 12:18:04 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/12/December-updates.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/12/December-updates.jpg" alt="December 21' Month Updates for Developers ?"/><p>NEW! This is the December 2021 release announcement. Here is a list of all new enhancements and product updates on videosdk.live</p><p><strong>1. <strong><strong>User dashboard</strong></strong></strong></p><ul><li><strong>End meeting or remove a participant</strong> from an ongoing meeting session.</li><li><strong>Wildcard domain</strong> entry is now supported for <strong>whitelisting</strong> in apikeys.</li><li>And other minor bug fixes.<br/></li></ul><p><strong>2. <strong><strong>RTC Javascript prebuilt v0.1.26</strong></strong></strong></p><ul><li><strong>End meeting and remove a participant</strong> option now available.</li><li><strong>Whiteboard</strong> released in BETA phase.</li></ul><!--kg-card-begin: html--><div style="position: relative; width: 100%; height: 0; padding-top: 56.2500%;
 padding-bottom: 48px; box-shadow: 0 2px 8px 0 rgba(63,69,81,0.16); margin-top: 1.6em; margin-bottom: 0.9em; overflow: hidden;
 border-radius: 8px; will-change: transform;">
  <iframe loading="lazy" style="position: absolute; width: 100%; height: 100%; top: 0; left: 0; border: none; padding: 0;margin: 0;" src="https:&#x2F;&#x2F;www.canva.com&#x2F;design&#x2F;DAE0HUweuIY&#x2F;watch?embed" allowfullscreen="allowfullscreen" allow="fullscreen">
  </iframe>
</div>
<a href="https:&#x2F;&#x2F;www.canva.com&#x2F;design&#x2F;DAE0HUweuIY&#x2F;watch?utm_content=DAE0HUweuIY&amp;utm_campaign=designshare&amp;utm_medium=embeds&amp;utm_source=link" target="_blank" rel="noopener">Design</a> by Sagar Kava
<!--kg-card-end: html--><ul><li><strong>Chat history</strong> now available for those who join an <strong>ongoing meeting</strong>.</li><li><strong>Meeting Leave Screen</strong> now available for those who leave the meeting but haven’t provided <em><strong>redirectOnLeave</strong></em>.<br/></li></ul><p><strong>3. <strong><strong>Javascript SDK v0.0.18</strong></strong></strong></p><ul><li><strong><strong><strong>Chat history </strong></strong></strong>is now available once you join an ongoing meeting.<br/></li></ul><p><strong>4. <strong><strong>React SDK v0.1.12</strong></strong></strong></p><ul><li><strong><strong><strong>Chat history</strong></strong></strong> is now available once you join an ongoing meeting.<br/></li></ul><p><strong>5. <strong><strong>React Native SDK v0.0.18</strong></strong></strong></p><ul><li><strong><strong><strong>Screenshare</strong> </strong></strong>is now available on the iOS platform too<strong><strong>.</strong></strong></li><li><strong>Chat history </strong>is<strong> </strong>now available once you join an ongoing meeting.<br/></li></ul><p><strong>6. <strong><strong>Android SDK v0.0.6</strong></strong></strong></p><ul><li><strong><strong>Better support for devices <strong>lower than Android 10</strong>.</strong></strong></li><li><strong>End meeting</strong> to remove all participants.<br/></li></ul><p><strong>7. <strong><strong>Documentation &amp; Support</strong></strong></strong></p><ul><li><a href="https://docs.videosdk.live/docs/tutorials/realtime-communication/prebuilt-sdk/quickstart-prebuilt-wordpress/">Wordpress integration</a> guide released<br/></li></ul><p><strong>8. 
<strong><strong>Rest APIs</strong></strong></strong></p><ul><li><a href="https://docs.videosdk.live/docs/realtime-communication/rest-api-reference/create-join-meeting">Meeting create API</a>: Added region selection option for better latency.</li><li><a href="https://docs.videosdk.live/docs/realtime-communication/rest-api-reference/end-session">End meeting session API</a>: For ending an ongoing meeting session.</li><li><a href="https://docs.videosdk.live/docs/realtime-communication/rest-api-reference/remove-participant">Remove participant API</a>: For removing participants from an ongoing session.</li></ul><h3 id="build-a-clubhouse-%F0%9F%91%8B-like-audio-streaming-app-in-flutter-in-30-minutes">Build a Clubhouse ? like Audio Streaming App in Flutter in 30 minutes</h3><p/><!--kg-card-begin: html--><iframe width="100%" height="315" src="https://www.youtube.com/embed/g-S6rQR8GyI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/>

<!--kg-card-end: html--><p/><!--kg-card-begin: markdown--><ul>
<li>Here is the package manager of the <a href="https://pub.dev/packages/videosdk">flutter meeting SDK</a> integration</li>
<li>Here is the <a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example">GitHub repo</a> of the flutter meeting SDK integration</li>
<li>Here is the <a href="https://docs.videosdk.live/flutter/guide/video-and-audio-calling-api-sdk/quick-start">documentation</a> link for VideoSDK Flutter</li>
</ul>
<!--kg-card-end: markdown--><h3 id="react-video-chat-app-with-video-sdk-full-tutorial"><br><br>React Video Chat App with Video SDK (Full Tutorial)</br></br></h3><p/><!--kg-card-begin: html--><iframe width="100%" height="315" src="https://www.youtube.com/embed/videoseries?list=PLrujdOR6BS_3L_RO99U_ocVipm7Gc8wJq" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/>
<!--kg-card-end: html--><p/><!--kg-card-begin: markdown--><ul>
<li><a href="https://github.com/videosdk-live/videosdk-rtc-react-sdk-example">GitHub Code</a> (feel free to give it a star ⭐)</li>
<li>Official Video SDK <a href="https://www.npmjs.com/package/@videosdk.live/react-sdk">NPM package</a></li>
<li>Official Video SDK <a href="https://docs.videosdk.live/docs/guide/video-and-audio-calling-api-sdk/getting-started/">Documentation</a></li>
</ul>
<!--kg-card-end: markdown--><h3 id="you-can-always-connect-with-us-in-case-of-any-query-or-help-we-are-happy-to-assist-you">You can always <a href="https://videosdk.live/contact">connect with us</a> in case of any query or help. We are happy to assist you.</h3><p>Thanks for reading.</p><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/12/prI.gif" class="kg-image" alt="December 21' Month Updates for Developers ?" loading="lazy" width="480" height="260"/></figure>]]></content:encoded></item><item><title><![CDATA[November 21' Month Updates for Developers ?]]></title><description><![CDATA[Here we announce the november updates on our product line and web applications. The newly enhanced features work best towards fulfilling developer needs.]]></description><link>https://www.videosdk.live/blog/november-month-updates</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb79</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Wed, 01 Dec 2021 06:47:52 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/12/November-updates.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/12/November-updates.jpg" alt="November 21' Month Updates for Developers ?"/><p><br><strong>NEW! This is the November 2021 release announcement. Here is a list of all new enhancements and product updates on Video SDK.</strong></br></p><h3 id="1-user-dashboard">1. 
User dashboard</h3><ul><li>Now <strong>download chat</strong> for new sessions as a CSV file.</li><li><strong>Domains</strong> prefixed with <strong>www</strong> are now automatically <strong>allowed</strong>.</li><li><strong>New guide links</strong> in quickstart and overview page.</li></ul><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/12/https___s3.amazonaws.com_appforest_uf_f1628685577359x338695008335103200_giphy-2.gif" class="kg-image" alt="November 21' Month Updates for Developers ?" loading="lazy" width="512" height="283"/></figure><h3 id="2-rtc-javascript-prebuilt-v0117">2. RTC Javascript prebuilt v0.1.17</h3><ul><li>The <strong>prebuilt website</strong> code is now <a href="https://github.com/videosdk-live/videosdk-rtc-react-prebuilt-ui">open-sourced</a> and available on our Github repo for <strong>contribution</strong> or use in your own website: <a href="https://github.com/videosdk-live/videosdk-rtc-react-prebuilt-ui">videosdk-live/videosdk-rtc-react-prebuilt-ui</a>.<br/></li></ul><h3 id="3-flutter-sdk-v008">3. Flutter SDK v0.0.8</h3><ul><li><strong><strong>Toggle other <strong>participants'</strong> <strong>mic or webcam</strong>.</strong></strong></li><li><strong>Remove any participant</strong> from the meeting.</li><li><strong>Pause and resume</strong> incoming participant video and audio <strong>streams</strong>.</li><li>Set <strong>incoming video</strong> stream quality based to <strong>low, med or high</strong>.</li><li><strong>Presenter change event</strong> available for screenshare from other platforms.<br/></li></ul><h3 id="4-android-sdk-v004">4. 
Android SDK v0.0.4</h3><ul><li><strong><strong>Toggle other <strong>participants' mic or webcam</strong>.</strong></strong></li><li><strong>Remove any participant</strong> from the meeting.</li><li><strong>Start cloud recording</strong> for the meeting.</li><li><strong>Livestream</strong> meeting to <strong>Youtube and other RTMP</strong> supported platforms.<br/></li></ul><h3 id="5-ios-sdk-v100">5. iOS SDK v1.0.0</h3><ul><li><strong><strong>Toggle other <strong>participants' mic or webcam</strong>.</strong></strong></li><li><strong>Remove any participant</strong> from the meeting.</li><li><strong>Start cloud recording</strong> for the meeting.</li><li><strong>Livestream</strong> meeting to <strong>Youtube and other RTMP-supported</strong> platforms.<br/></li></ul><h3 id="6-react-native-sdk-v0018">6. React Native SDK v0.0.18</h3><ul><li><strong><strong>Support for projects ejected from <strong>expo</strong>.</strong></strong><br/></li></ul><h3 id="7-code-samples">7. Code Samples</h3><ul><li>Code sample released for <a href="https://github.com/videosdk-live/videosdk-rtc-react-prebuilt-ui">React Prebuilt UI</a> and <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-prebuilt-ui">React Native UI</a></li><li>(Experimental) <a href="https://github.com/videosdk-live/videosdk-rtc-android-prebuilt-webview-example">Prebuilt webview for Android</a>: ideal for one-to-one calls and small meetings with up to on-screen 6 participants.</li><li>(Experimental) <a href="https://github.com/videosdk-live/videosdk-rtc-ios-prebuilt-webview-example">Prebuilt webview for iOS</a>: ideal for one-to-one calls and small meetings with up to on-screen 6 participants.</li></ul><h3 id="8-rest-apis">8. Rest APIs</h3><ul><li>Meeting <strong>chat CSV file link</strong> now available in <a href="https://docs.videosdk.live/docs/realtime-communication/rest-api-reference/list-meeting-sessions">Meeting sessions API</a></li></ul><h3 id="9-documentation-support">9. 
Documentation &amp; Support</h3><p>We are going live <strong>every Tuesday</strong> providing a live quickstart demo for all of our SDKs one by one. Here are the links for the previously hosted events. Stay tuned for more events on our <a href="https://www.linkedin.com/company/video-sdk/events/">Linkedin page</a>.<br/></p><ul><li><strong>Build a Video Conferencing App in Flutter</strong></li></ul><!--kg-card-begin: html--><iframe width="560" height="315" src="https://www.youtube.com/embed/jvzE4j1Pj2Q" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/><!--kg-card-end: html--><p/><ul><li><strong>Create a Low Latency Live Streaming App in React Native</strong> ⚛️</li></ul><!--kg-card-begin: html--><iframe width="560" height="315" src="https://www.youtube.com/embed/BQ1vWEC5WrE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/><!--kg-card-end: html--><p/><ul><li><strong>Plug &amp; Play  Live Shopping in E-commerce Website.</strong></li></ul><!--kg-card-begin: html--><iframe width="560" height="315" src="https://www.youtube.com/embed/ZMLMBmkSwDA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/><!--kg-card-end: html--><p/><ul><li><strong>Build Video Calling in any No-code Platform within 5 Minutes</strong>.</li></ul><!--kg-card-begin: html--><iframe width="560" height="315" src="https://www.youtube.com/embed/sI6vJGc2XH0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""/><!--kg-card-end: html--><p/><p><strong>Website &amp; Support</strong></p><ul><li>Join our <a href="https://discord.gg/f2WsNDN9S5">Discord</a> Community<br><br>You 
can always <a href="https://videosdk.live/contact">connect with us</a> in case of any query or help. We are happy to assist you.</br></br></li></ul><p>Thanks for reading.</p>]]></content:encoded></item><item><title><![CDATA[October 2021 Updates for Developers ?]]></title><description><![CDATA[Here we announce the october updates on our product line and web applications. The newly enhanced features work best towards fulfilling developer needs.]]></description><link>https://www.videosdk.live/blog/october-updates-2021</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb76</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 29 Oct 2021 12:35:00 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/10/October-updates-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/10/October-updates-1.jpg" alt="October 2021 Updates for Developers ?"/><p>NEW! This is the October 2021 release announcement. 
Here is a list of all new enhancements and product updates on <a href="https://videosdk.live/">videosdk.live</a></p><h3 id="user-dashboard">User dashboard</h3><ul><li><strong><a href="https://app.videosdk.live/live-streams/all-live-streams">List all live streams</a></strong> and get live stream details for standard live streaming.</li><li>Now <strong>filter usage</strong> in the dashboard by date range.</li><li>And other minor bug fixes.</li></ul><h3 id="rtc-javascript-prebuilt-v0115">RTC Javascript prebuilt v0.1.15</h3><ul><li><strong>Pin participants are </strong>now available with 3 different layout options.</li><li>Layout ( <strong><a href="https://docs.videosdk.live/docs/guide/prebuilt-video-and-audio-calling/features/pin-participants#1-grid-layout">GRID</a> | <a href="https://docs.videosdk.live/docs/guide/prebuilt-video-and-audio-calling/features/pin-participants#2-sidebar-layout">SIDEBAR</a> | <a href="https://docs.videosdk.live/docs/guide/prebuilt-video-and-audio-calling/features/pin-participants#3-spotlight-layout">SPOTLIGHT</a> </strong>)</li><li>Disabled uppercase of branding title.</li></ul><h3 id="javascript-sdk-v0016"><strong>Javascript SDK v0.0.16</strong></h3><ul><li><strong><strong><strong>Pin participants are </strong>now available with 3 different <a href="https://docs.videosdk.live/docs/guide/video-and-audio-calling-api-sdk/features/pin-participants">layout options.</a></strong></strong></li></ul><h3 id="react-sdk-v019">React SDK v0.1.9</h3><ul><li><strong><strong><strong>Pin participants are </strong>now available with 3 different <a href="https://docs.videosdk.live/docs/guide/video-and-audio-calling-api-sdk/features/pin-participants">layout options</a>.</strong></strong></li></ul><h3 id="documentation-support">Documentation &amp; Support</h3><p>We have prepared an extensive guide in the docs for all SDK and API integrations.</p><ul><li><strong><a 
href="https://docs.videosdk.live/docs/guide/prebuilt-video-and-audio-calling/getting-started">Prebuilt Video &amp; Audio Calling SDK</a></strong></li><li><strong><a href="https://docs.videosdk.live/docs/guide/video-and-audio-calling-api-sdk/getting-started">Custom Video &amp; Audio Calling SDK</a></strong></li><li><strong><a href="https://docs.videosdk.live/docs/guide/standard-live-streaming-api-sdk/getting-started">Standard Live Stream API</a></strong></li><li><strong><a href="https://docs.videosdk.live/docs/guide/video-on-demand/getting-started">Video on Demand API</a></strong></li></ul><h3 id="code-samples">Code Samples</h3><ul><li>Code sample released for <a href="https://github.com/videosdk-live/videosdk-rtc-javascript-sdk-example"><strong>Javascript SDK</strong></a></li><li>Pin participant and other new features now available in Prebuilt code samples: <strong><a href="https://github.com/videosdk-live/videosdk-rtc-js-prebuilt-embedded-example">Javascript</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-angular-prebuilt-example">Angular</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-react-prebuilt-example">React</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-vue-prebuilt-example">Vue</a></strong></li></ul><h3 id="rest-apis">Rest APIs</h3><ul><li><a href="https://docs.videosdk.live/docs/realtime-communication/rest-api-reference/get-meeting-details"><strong>Meeting details API</strong></a>: For meeting id and other details</li><li><a href="https://docs.videosdk.live/docs/realtime-communication/rest-api-reference/list-meeting-sessions"><strong>Meeting sessions API</strong></a>: For participants log and session duration</li><li><a href="https://docs.videosdk.live/docs/realtime-communication/rest-api-reference/list-recordings"><strong>Meeting recordings API</strong></a>: For meeting and session recordings</li></ul><h3 id="website-support">Website &amp; Support</h3><ul><li>Join our <a 
href="https://discord.gg/f2WsNDN9S5"><strong>Discord</strong></a> Community</li></ul><blockquote>You can always <a href="https://videosdk.live/contact"><strong>connect with us</strong></a> in case of any query or help. We are happy to assist you.</blockquote><p>Thanks for reading.</p>]]></content:encoded></item><item><title><![CDATA[September 2021 New launch and Updates for Developers ?]]></title><description><![CDATA[September went fast but amazing too. We have made some positive updates and alterations to our product line. This blog is amazing, you’ll like it. We have also launched two SDKs this month, stay tuned!]]></description><link>https://www.videosdk.live/blog/video-sdk-september-2021-updates</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb73</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Sagar Kava]]></dc:creator><pubDate>Fri, 01 Oct 2021 09:06:18 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/10/September-updates.jpg" medium="image"/><content:encoded><![CDATA[<h2 id=""><br/></h2><img src="http://assets.videosdk.live/static-assets/ghost/2021/10/September-updates.jpg" alt="September 2021 New launch and Updates for Developers ?"/><p>NEW! This is the September 2021 release announcement. Here is a list of all new enhancements and product updates on videosdk.live<br/></p><p><strong>1. <strong><strong>User dashboard</strong></strong></strong></p><ul><li>Simplified API key creation and domain whitelisting.</li><li><strong>Site tour</strong> tutorial now available! Get a quick walkthrough of all features.</li><li>Now join the <a href="https://discord.gg/f2WsNDN9S5">Discord community</a> from the console itself.</li><li>Find the <strong>Quickstart</strong> on the top right corner to get started with any SDK.</li><li>Upfront payment credit is now visible on the homepage.</li><li>And other minor bug fixes.<br/></li></ul><p><strong>2. 
<strong><strong>RTC Javascript prebuilt v0.1.12</strong></strong></strong></p><ul><li><strong>Join screen</strong> now available with minimal configuration.</li><li>Smoother navigation for 100+ participants.</li></ul><h3 id="%F0%9F%9A%80-new-rtc-sdk-launch-ios-sdk-and-flutter-sdk">🚀 New RTC SDK launch: iOS SDK and Flutter SDK</h3><figure class="kg-card kg-image-card"><img src="http://assets.videosdk.live/static-assets/ghost/2021/10/giphy--2-.gif" class="kg-image" alt="September 2021 New launch and Updates for Developers" loading="lazy" width="388" height="274"/></figure><p><strong>1. iOS SDK v1.0.0 (NEW!)</strong></p><p>We are launching the iOS SDK this month, and it is completely compatible with our other SDKs. Visit our docs to start integrating right now!<br/></p><ul><li><strong><strong><strong>Join or start</strong> the same meeting from iOS devices.</strong></strong></li><li>Manage and <strong>display participant videos</strong> in a grid, list, or any custom layout.</li><li>Support for wired and wireless headsets for <strong>audio calling</strong>.</li><li><a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example">Code sample</a> on iOS</li></ul><p><strong>2. Flutter SDK v0.0.4 (NEW!)</strong></p><p>We are also launching the Flutter SDK this October, and it will be completely compatible with our other SDKs. The documentation is still a work in progress. 
Stay tuned for the update.<br/></p><ul><li>A single <strong>MeetingBuilder</strong> widget for integrating the meeting.</li><li>Manage and <strong>display participant videos</strong> in a grid or list or any custom layout.</li><li>Support for wired and wireless headsets for <strong>audio calling</strong>.</li><li><a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example">Code sample</a> on Flutter</li></ul><h3 id="code-samples">Code Samples</h3><p>New code samples released.</p><ul><li>RTC SDK: <a href="https://github.com/videosdk-live/videosdk-rtc-flutter-sdk-example">Flutter</a></li><li>RTC SDK:<a href="https://github.com/videosdk-live/videosdk-rtc-ios-sdk-example"> IOS</a></li></ul><p>You can always <a href="https://videosdk.live/contact">connect with us</a> in case of any query or help. We are happy to assist you.</p><p>Thanks for reading.</p><p>Videosdk.live presents you its Flutter and IOS Video SDK. Integrate easy-to-use real-time audio and video calling with this robust flutter video API and make experiences better with full flexibility and customization.<br/></p>]]></content:encoded></item><item><title><![CDATA[August 2021 Updates for Developers]]></title><description><![CDATA[Here we announce the August updates on our product line and web applications. The newly enhanced features work best towards fulfilling developer needs.]]></description><link>https://www.videosdk.live/product-update-and-announcements/</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb72</guid><category><![CDATA[Product Updates]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Mon, 06 Sep 2021 12:26:53 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/09/August-updates.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/09/August-updates.jpg" alt="August 2021 Updates for Developers"/><p>NEW! This is the August 2021 release announcement. 
Here is a list of all new enhancements and product updates on <a href="https://videosdk.live">videosdk.live</a></p><h2 id="customer-console">Customer console</h2><ul><li><strong>Find and list</strong> all the <a href="https://app.videosdk.live/vod/encoding-jobs"><strong>encoding jobs</strong></a> in detail for video-on-demand service.</li><li><strong>Domain Whitelisting</strong> is now available for the prebuilt in <a href="https://app.videosdk.live/settings/api-keys">API keys</a> page.</li><li><strong><strong>Update <a href="https://app.videosdk.live/settings/billing-details">billing details</a> and make <a href="https://app.videosdk.live/settings/invoices">invoice payments</a> directly from the console.</strong></strong></li></ul><h2 id="rtc-javascript-prebuilt-v015">RTC Javascript prebuilt v<strong><strong>0.1.5</strong></strong></h2><ul><li><strong>Branding: </strong>Toggle branding feature, add a custom logo, add a brand name, enable/disable powered by remark from the navbar.</li><li><strong>Recording: </strong>Enable/disable recording feature, set recording notification URL, allow the participant to toggle recording.</li><li><strong>Restream with RTMP: </strong>Livestream the meeting to streaming platforms like YouTube, Facebook, or other platforms supporting RTMP.</li><li><strong>Host control: </strong>Permission to join without asking and toggle other participants’ mic &amp; webcam.</li><li><strong>Join screen: </strong>Make join screen visible, set title for join screen, set meeting link for copying</li></ul><h2 id="javascript-sdk-v010">Javascript SDK v<strong><strong>0.1.0</strong></strong></h2><ul><li><strong><strong><strong>Pause, resume and seek</strong> external video to a particular time.</strong></strong></li><li><strong>Request</strong> other participants to enable their <strong>mic or webcam</strong>.</li><li>Simplified entry requests with new <strong><em>allow</em>()</strong> and <strong><em>deny</em>()</strong> callbacks.</li></ul><h2 
id="react-sdk-v010">React SDK v<strong><strong>0.1.0</strong></strong></h2><ul><li><strong><strong><strong>Pause, resume and seek</strong> external video to a particular time.</strong></strong></li><li><strong>Request</strong> other participants to enable their <strong>mic or webcam</strong>.</li><li>Simplified entry requests with new <strong><em>allow</em>()</strong> and <strong><em>deny</em>()</strong> callbacks.</li></ul><h2 id="react-native-sdk-v0012">React Native SDK v<strong><strong>0.0.12</strong></strong></h2><ul><li><strong>Pause, resume and seek</strong> external video to a particular time.</li></ul><h2 id="android-sdk-alpha">Android SDK (Alpha)</h2><ul><li><strong>Join</strong> <strong>meeting</strong> with meeting Id</li><li><strong>Participants list</strong> and join/leave events.</li><li>Share your <strong>mic and webcam</strong>.</li><li><strong>Toggle</strong> your own mic and webcam.</li><li>View other participant’s videos.</li><li>Documentation available on <a href="https://docs.videosdk.live/docs/realtime-communication/sdk-reference/android-sdk/setup">docs.videosdk.live</a></li><li><a href="https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example">Code sample</a> for Android SDK is available on Github</li></ul><h2 id="code-samples">Code Samples</h2><p>New code samples released</p><ul><li><strong>RTC Prebuilt</strong>: <a href="https://github.com/videosdk-live/videosdk-rtc-js-prebuilt-embedded-example">Javascript</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-angular-prebuilt-example">Angular</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-vue-prebuilt-example">Vue</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-react-prebuilt-example">React</a></li><li><strong>RTC SDK</strong>: <a href="https://github.com/videosdk-live/videosdk-rtc-react-sdk-example">React</a>, <a href="https://github.com/videosdk-live/videosdk-rtc-react-native-sdk-example">React Native</a>, <a 
href="https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example">Android</a></li><li><strong>Video-on-demand</strong>: <a href="https://github.com/videosdk-live/videosdk-vod-react-api-example">React</a>, <a href="https://github.com/videosdk-live/videosdk-vod-react-native-api-example">React Native</a></li><li><strong>Standard Live streaming</strong>: <a href="https://github.com/videosdk-live/videosdk-live-streaming-react-api-example">React</a>, <a href="https://github.com/videosdk-live/videosdk-live-streaming-react-native-api-example">React Native</a> </li></ul><h2 id="website-support">Website &amp; Support</h2><ul><li>Join our <a href="https://discord.gg/f2WsNDN9S5">Discord</a> Community</li></ul><blockquote>You can always <a href="https://videosdk.live/contact">connect with us</a> in case of any query or help. We are happy to assist you.</blockquote><p>Thanks for reading.</p>]]></content:encoded></item><item><title><![CDATA[Video-on-Demand Pricing]]></title><description><![CDATA[On-demand video at the most effective pricing. Easy and simple to integrate with all affordability]]></description><link>https://www.videosdk.live/video-on-demand-pricing/</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb6e</guid><category><![CDATA[Pricing]]></category><dc:creator><![CDATA[Video SDK Team]]></dc:creator><pubDate>Fri, 03 Sep 2021 05:34:36 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/09/VOD-Pricing-thumbnail.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/09/VOD-Pricing-thumbnail.jpg" alt="Video-on-Demand Pricing"/><p>The majority of the content today is found engaging and attractive when it is in a video format. Video on demand has been a great idea to manage online pre-recorded content deliveries for a long-time approach. On-demand videos are the recorded videos uploaded on various video engagement platforms. 
It allows videos to get stored in a compressed format and makes them available at viewers’ comfort. <br/></p><p>Videosdk.live brings its on-demand video facility with the most amazing experience for viewers. We make sure to deliver the facilities with a trouble-free approach. You can always <a href="https://videosdk.live/contact">connect with us</a> for any queries and help. This blog describes the pricing of our Video-on-demand facility. <br/></p><h2 id="calculation-of-the-cost-of-video-on-demand-facility">Calculation of the Cost of Video-on-Demand facility</h2><p>A pre-recorded video requires an on-demand facility for uploading it on streaming platforms. A video is composed from raw data to a polished video in three steps.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/VOD-component--1-.jpg" class="kg-image" alt="Video-on-Demand Pricing" loading="lazy" width="722" height="473" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2021/09/VOD-component--1-.jpg 600w, http://assets.videosdk.live/static-assets/ghost/2021/09/VOD-component--1-.jpg 722w"/></figure><p><strong>1. Encoding  2. Storage  3. 
Delivery</strong></p><ul><li>The total price of an on-demand video is the sum of the costs of all three components.</li><li><strong><strong><strong>The total cost=</strong></strong> <strong><strong>Encoding + Storage + Delivery</strong></strong></strong><br/></li></ul><h2 id="pro-plan">Pro Plan</h2><p>A pro plan is a plan which is devised by videosdk.live to keep up with the costs for the viewers with an elementary approach.<br/></p><p>Let’s understand the computation</p><p><strong>Example 1 :</strong></p><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/Pro-plan-pricing--1-.jpg" class="kg-image" alt="Video-on-Demand Pricing" loading="lazy" width="1457" height="717" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2021/09/Pro-plan-pricing--1-.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2021/09/Pro-plan-pricing--1-.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2021/09/Pro-plan-pricing--1-.jpg 1457w" sizes="(min-width: 1200px) 1200px"/></figure><p>We can observe how the price computation for a month.</p><p><strong>To make an observation :</strong></p><ul><li>The encoding price is only charged once for a lifetime.</li><li>The video will not comply with further encoding costs for the next time</li><li>The lifetime encoding cost does not fall with any other video uploaded by the source</li><li>Storage costs will be computed for each month.</li><li>We calculate costs on 100% of the video<br/></li></ul><p><strong>Lifetime video encoding= $0.05 per minute; Per month video storage= $0.003 per minute</strong><br/></p><p>As we see, the image explains the cost with a resolution of 1080p. Similarly, the cost can also be calculated for different resolutions as well. Let’s take the same example. 
<br/></p><p><strong>Calculation of cost for Video-on-demand</strong></p><p><strong>Total Minutes= 30; Total Views= 100</strong></p><p><strong>Encoding- 30 x 0.05= $1.5; Storage- 30 x 0.003= $0.09</strong></p><ol><li>Resolution- 240p; Unit price per Minute- 0.0004</li></ol><p>Calculation- Delivery= 30 x 100 x 0.0004 = $1.2</p><p><strong>Total cost at 240p= 1.5 + 0.09 + 1.2= $ 2.79</strong><br/></p><p>2. Resolution- 360p; Unit price per Minute- 0.0006</p><p>Calculation- Delivery= 30 x 100 x 0.0006 = $1.8</p><p><strong>Total cost at 360p= 1.5 + 0.09 + 1.8= $ 3.39</strong><br/></p><p>3. Resolution- 480p; Unit price per Minute- 0.0008</p><p>Calculation- Delivery= 30 x 100 x 0.0008 = $2.4</p><p><strong>Total cost at 480p= 1.5 + 0.09 + 2.4= $ 3.99</strong><br/></p><p>4. Resolution- 720p; Unit price per Minute- 0.0010</p><p>Calculation- Delivery= 30 x 100 x 0.0010 = $3</p><p><strong>Total cost at 720p= 1.5 + 0.09 + 3= $ 4.59</strong><br/></p><p>5. Resolution- 1080p; Unit price per Minute- 0.0012</p><p>Calculation- Delivery= 30 x 100 x 0.0012 = $3.6</p><p><strong>Total cost at 1080p= 1.5 + 0.09 + 3.6= $ 5.19</strong><br/></p><blockquote>Let's take another example and look at the pricing where the streaming minutes are increased. This example will help you understand the calculations when different units change- Minutes, or Views, or both.</blockquote><p><strong>Example 2:</strong></p><p><strong>Calculation of cost for Video-on-demand</strong></p><p><strong>Total Minutes- 150; Total views- 100</strong></p><p><strong>Encoding- 150 x 0.05= $7.5; Storage- 150 x 0.003= $0.45</strong><br/></p><ol><li>Resolution- 240p; Unit price per Minute- 0.0004</li></ol><p>Calculation- Delivery= 150 x 100 x 0.0004 = $6</p><p><strong>Total cost at 240p= 7.5 + 0.45 + 6= $ 13.95</strong><br/></p><p>2. Resolution- 360p; Unit price per Minute- 0.0006</p><p>Calculation- Delivery= 150 x 100 x 0.0006 = $9</p><p><strong>Total cost at 360p= 7.5 + 0.45 + 9= $ 16.95</strong><br/></p><p>3. 
Resolution- 480p; Unit price per Minute- 0.0008</p><p>Calculation- Delivery= 150 x 100 x 0.0008 = $12</p><p><strong>Total cost at 480p= 7.5 + 0.45 + 12= $ 19.95</strong><br/></p><p>4. Resolution- 720p; Unit price per Minute- 0.0010</p><p>Calculation- Delivery= 150 x 100 x 0.0010 = $15</p><p><strong>Total cost at 720p= 7.5 + 0.45 + 15= $ 22.95</strong><br/></p><p>5. Resolution- 1080p; Unit price per Minute- 0.0012</p><p>Calculation- Delivery= 150 x 100 x 0.0012 = $18</p><p><strong>Total cost at 1080p= 7.5 + 0.45 + 18= $ 25.95</strong><br/></p><h2 id="enterprise-plan">Enterprise plan</h2><p>An Enterprise Plan is a plan for companies that pre-record and upload videos regularly and demand increasing viewers on their platform. We bring this plan to promote mass engagement at affordable prices.<br/></p><p><a href="https://videosdk.live/contact">Contact Support</a> for the best pricing deals.<br/></p><h2 id="amazon-web-services-aws-video-on-demand-pricing">Amazon Web Services (AWS) Video-on-Demand Pricing</h2><p>AWS is a popular name among the providers of on-demand video services. To be on point, it provides on-demand services in two specific resolutions- 1080p and 4K. We will focus on the 1080p resolution for this blog. <br/></p><blockquote>Note that: AWS works with only two resolutions. It does not service in video resolutions lower than 1080p. Where other companies make a provision of on-demand videos with different resolutions i.e., 240p to 1080p, and AWS only provides the VoD facility with 1080p resolution.</blockquote><p>AWS calculates its pricing based on GB. We have converted the pricing units into minutes to make a better understanding of the pricing concept. 
We have explained unit conversion in the coming part of this blog.<br/></p><p><strong>AWS Pricing at 1080p per minute </strong></p><p>Encoding= $ 0.062</p><p>Storage= $ 0.0024</p><p>Delivery= $ 0.003<br/></p><p><strong>Calculation of costs in Minutes or GB?</strong></p><p>Calculating the cost of on-demand videos can be done in either of the ways. Some companies compute costs with GB, the reason being they provide multiple services and prefer keeping a unified unit for all. <br/></p><p>While other companies prefer calculating costs through the video minutes to make calculations effortless for their clients. A video is always uploaded in minutes, therefore calculation in minutes is easy and not challenging. <strong>Videosdk.live calculates costs based on video minutes.</strong><br/></p><h2 id="how-to-convert-gb-into-minutes">How to convert GB into minutes?</h2><p>This is a simple table that describes the ratio of GB to minute.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/gb-into-min.jpg" class="kg-image" alt="Video-on-Demand Pricing" loading="lazy" width="722" height="473" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2021/09/gb-into-min.jpg 600w, http://assets.videosdk.live/static-assets/ghost/2021/09/gb-into-min.jpg 722w"/></figure><h2 id="comparison">Comparison</h2><p>Videosdk.live and AWS both work on an identical aspect and that is providing the best quality videos for playback. We have distinguished these two based on their pricing policies. 
<br/></p><blockquote>Note that we have converted the on-demand video units of AWS into minutes from GB to make an unbiased comparison of pricing.</blockquote><figure class="kg-card kg-image-card kg-width-wide"><img src="http://assets.videosdk.live/static-assets/ghost/2021/09/pricing-table_vod.jpg" class="kg-image" alt="Video-on-Demand Pricing" loading="lazy" width="1292" height="717" srcset="http://assets.videosdk.live/static-assets/ghost/size/w600/2021/09/pricing-table_vod.jpg 600w, http://assets.videosdk.live/static-assets/ghost/size/w1000/2021/09/pricing-table_vod.jpg 1000w, http://assets.videosdk.live/static-assets/ghost/2021/09/pricing-table_vod.jpg 1292w" sizes="(min-width: 1200px) 1200px"/></figure><p>On comparing the prices and quality of both the companies, it can be well observed what calls for a better choice. Videosdk.live comes up to be an affordable approach for the VoD facility when we compare it to AWS. Nothing is distinguishable, the quality, the deliverables, the features. Everything is identical, still, there’s a huge pricing gap.</p><blockquote>AWS deals in the resolution of 1080p and videosdk.live provides the VoD facility in multiple resolutions, we observe a price variance. 
Even comparing the 1080p tier of both companies, there is a huge price difference: AWS comes out at roughly double.</blockquote>]]></content:encoded></item><item><title><![CDATA[Introduction to Video On Demand]]></title><description><![CDATA[Video-on-demand API provides an end-to-end solution for building a scalable on-demand video platform for millions of users.]]></description><link>https://www.videosdk.live/introduction-to-video-on-demand/</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb62</guid><category><![CDATA[Getting Started]]></category><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Wed, 30 Jun 2021 13:35:05 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/06/adaptive-video-streaming.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/06/adaptive-video-streaming.jpg" alt="Introduction to Video On Demand"/><p>Video-on-demand API provides an end-to-end solution for building scalable on-demand video platforms for millions of users.</p><p>VOD APIs are divided into three major parts.</p><h3 id="1-storage-api">1. Storage API</h3><p>These APIs make it easy to store media in the cloud. They fetch all the meta-information along with video records, making developers' lives much easier.</p><h3 id="2-encoding-api">2. Encoding API</h3><p>The Encoding API compresses video into multiple resolutions, from 240p to 4K, and makes it playable on 98% of devices across the globe.</p><h3 id="3-streaming-api">3. 
Streaming API</h3><p>The Streaming API distributes all media files to more than 150 edge locations across the globe.</p><p>Find out more about it:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/docs/video-on-demand/intro"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Introduction | VideoSDK.live Documentation</div><div class="kg-bookmark-description">Video-on-demand API provides an end-to-end solution for building a scalable on-demand video platform for millions of users.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/favicon.ico" alt="Introduction to Video On Demand"/><span class="kg-bookmark-author">videosdk.live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/img/zujonow_32.png" alt="Introduction to Video On Demand"/></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Introduction to Live Streaming SDK]]></title><description><![CDATA[Live streaming lets you stream your live videos with just a few lines of code. Reach your audience across the globe.]]></description><link>https://www.videosdk.live/blog/introduction-to-live-streaming-sdk</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb61</guid><category><![CDATA[Getting Started]]></category><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Wed, 30 Jun 2021 13:31:46 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/06/embedded-player.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/06/embedded-player.jpg" alt="Introduction to Live Streaming SDK"/><p>Live streaming lets you stream your live videos with just a few lines of code. 
Reach your audience across the globe.</p><h2 id="adaptive-live-streaming">Adaptive Live Streaming</h2><p>The Live Stream SDK uses adaptive live streaming to provide the best user experience, supporting concurrent live events at up to 4K.</p><h2 id="98-device-support">98% Device Support</h2><p>It helps you stream your video from your browser, an online meeting, OBS, StreamYard, a phone, and more. The streamed video is supported on 98% of devices.</p><h2 id="low-latency-live-streaming">Low Latency Live Streaming</h2><p>The Live Streaming SDK enables low-latency streaming to edge locations based on internet availability and bandwidth.</p><p>Find out more about it in the documentation.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.videosdk.live/docs/live-streaming/intro"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Introduction | VideoSDK.live Documentation</div><div class="kg-bookmark-description">Live streaming lets you stream your live videos with just a few lines of code. Reach your audience across the globe.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.videosdk.live/img/favicon.ico" alt="Introduction to Live Streaming SDK"/><span class="kg-bookmark-author">videosdk.live</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.videosdk.live/assets/images/live-streaming-20847dd496d7ef5166746aaec3747f49.jpg" alt="Introduction to Live Streaming SDK"/></div></a></figure>]]></content:encoded></item><item><title><![CDATA[ZujoNow ----> Videosdk.live]]></title><description><![CDATA[VideoSDK.live offers content-centric APIs built specifically for media. We cater to a wide range of services, including real-time communication, live streaming, on-demand videos, and more. 
We focus on solving real problems.]]></description><link>https://www.videosdk.live/blog/zujonow-videosdk-live</link><guid isPermaLink="false">6322de0b5ed4260c94d4fb5f</guid><category><![CDATA[Getting Started]]></category><dc:creator><![CDATA[Arjun Kava]]></dc:creator><pubDate>Wed, 30 Jun 2021 12:07:38 GMT</pubDate><media:content url="http://assets.videosdk.live/static-assets/ghost/2021/06/ZUJONOW--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://assets.videosdk.live/static-assets/ghost/2021/06/ZUJONOW--1-.jpg" alt="ZujoNow ----> Videosdk.live"/><p>In today's tech-driven world, content has become the most effective way to convey and present information.</p><p>The use of content-centric APIs has made it easy to connect people. It has helped build presentable, responsible, and ethical platforms for industries worldwide serving their customers.</p><p>VideoSDK.live offers content-centric APIs built specifically for media. We cater to a wide range of services, including real-time communication, live streaming, on-demand videos, and more. We focus on solving real problems.</p><p>Our cutting-edge technology and algorithms, such as adaptive bitrate streaming, low-latency UDP streaming, and real-time data streaming, are crafted to perfection.</p><p>Writing about our company makes me proud, because the appreciation we have received from our clients has been enormous.</p><p>VideoSDK.live is built on trust and reliability. To date, we have never left a client unsatisfied, and that is the best part of our company.</p><p>Our mission is to make an enormous difference in engagement for our clients. We have earned their trust and built strong, reliable relationships, and we provide them with end-to-end industry solutions.</p><p>VideoSDK.live has helped its customers solve hard, core problems, scaling solutions to millions of users and saving their research and development costs. 
Our adaptive streaming technology ensures smooth access across all compatible platforms.</p><p>We look forward to building a lasting relationship between VideoSDK.live and your company, one marked by success and satisfaction. Connect with us today. Reach out at VideoSDK.live to learn more and start the conversation.</p><!--kg-card-begin: html-->Join our developer community!
<embed src="https://invidget.switchblade.xyz/V4anwpNVuA" style="width: 60%" type="text/html">
</embed><!--kg-card-end: html-->]]></content:encoded></item></channel></rss>