LiveKit Agents

The Agent Framework is designed for building realtime, programmable participants that run on servers. Use it to create conversational, multi-modal voice agents that can see, hear, and understand.

The framework includes plugins for common workflows, such as voice activity detection and speech-to-text.

Agents integrates seamlessly with LiveKit Cloud or a self-hosted LiveKit server, offloading job queuing and scheduling to it. This eliminates the need for additional queuing infrastructure. Agent code developed on your local machine can scale to thousands of concurrent sessions when deployed to production servers.

This SDK is currently in Developer Preview. During this period, you may encounter bugs and the APIs may change.

We welcome and appreciate any feedback or contributions. You can create issues in this repository or chat live with us in the LiveKit Community Slack.

Docs & Guides

Note

There are breaking API changes between versions 0.7.x and 0.8.x. Please refer to the 0.8 migration guide for a detailed overview of the changes.

Examples

  • Voice assistant: A voice assistant with STT, LLM, and TTS. Demo
  • Video publishing: A demonstration of publishing RGB frames to a LiveKit Room
  • STT: An agent that transcribes a participant's audio into text
  • TTS: An agent that publishes synthesized speech to a LiveKit Room

Installation

To install the core Agents library:

pip install livekit-agents

Agents includes a set of prebuilt plugins that make it easier to compose agents. These plugins cover common tasks such as converting speech to text (and vice versa) and running inference on a generative AI model. You can install a plugin as follows:

pip install livekit-plugins-deepgram

The following plugins are available today:

Plugin                        Features
livekit-plugins-anthropic     LLM
livekit-plugins-azure         STT, TTS
livekit-plugins-cartesia      TTS
livekit-plugins-deepgram      STT
livekit-plugins-elevenlabs    TTS
livekit-plugins-google        STT, TTS
livekit-plugins-nltk          Utilities for working with text
livekit-plugins-openai        LLM, STT, TTS
livekit-plugins-silero        VAD

Using LLMs

The Agents framework supports a wide range of LLMs and hosting providers.

OpenAI-compatible models

Most LLM providers offer an OpenAI-compatible API, which can be used with the livekit-plugins-openai plugin.

from livekit.plugins.openai.llm import LLM
  • OpenAI: LLM(model="gpt-4o")
  • Azure: LLM.with_azure(azure_endpoint="", azure_deployment="")
  • Cerebras: LLM.with_cerebras(api_key="", model="")
  • Fireworks: LLM.with_fireworks(api_key="", model="")
  • Groq: LLM.with_groq(api_key="", model="")
  • OctoAI: LLM.with_octo(api_key="", model="")
  • Ollama: LLM.with_ollama(base_url="http://localhost:11434/v1", model="")
  • Perplexity: LLM.with_perplexity(api_key="", model="")
  • TogetherAI: LLM.with_together(api_key="", model="")

Anthropic Claude

Anthropic Claude can be used with the livekit-plugins-anthropic plugin.

from livekit.plugins.anthropic.llm import LLM

myllm = LLM(model="claude-3-opus-20240229")

Concepts

  • Agent: A function that defines the workflow of a programmable, server-side participant. This is your application code.
  • Worker: A container process responsible for managing job queuing with LiveKit server. Each worker is capable of running multiple agents simultaneously.
  • Plugin: A library class that performs a specific task, like speech-to-text, from a specific provider. An agent can compose multiple plugins together to perform more complex tasks.
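To make the plugin concept concrete, here is a schematic sketch of how an agent might compose speech-to-text, LLM, and text-to-speech stages. The stub classes below are illustrative stand-ins, not the real plugin API — the actual framework interfaces are asynchronous and stream-based.

```python
# Schematic illustration of composing "plugins" into one agent workflow.
# FakeSTT/FakeLLM/FakeTTS are stand-ins for real plugins such as
# livekit-plugins-deepgram (STT) or livekit-plugins-elevenlabs (TTS).

class FakeSTT:
    def transcribe(self, audio: bytes) -> str:
        return "what's the weather?"  # pretend speech recognition

class FakeLLM:
    def complete(self, prompt: str) -> str:
        return f"You asked: {prompt}"  # pretend language model

class FakeTTS:
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")  # pretend audio synthesis

class VoiceAgent:
    """Composes three 'plugins' into a single respond() workflow."""

    def __init__(self, stt, llm, tts):
        self.stt, self.llm, self.tts = stt, llm, tts

    def respond(self, audio: bytes) -> bytes:
        text = self.stt.transcribe(audio)
        reply = self.llm.complete(text)
        return self.tts.synthesize(reply)

agent = VoiceAgent(FakeSTT(), FakeLLM(), FakeTTS())
print(agent.respond(b"...").decode())  # "You asked: what's the weather?"
```

The real plugins share the same idea: each class does one narrow task, and the agent wires their inputs and outputs together.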

Running an agent

The framework exposes a CLI interface to run your agent. To get started, you'll need the following environment variables set:

  • LIVEKIT_URL
  • LIVEKIT_API_KEY
  • LIVEKIT_API_SECRET
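For example, in a POSIX shell (the URL and credentials below are placeholders — use the values from your own LiveKit project):

```shell
# Placeholder values -- substitute the URL and keys from your LiveKit project.
export LIVEKIT_URL="wss://your-project.livekit.cloud"
export LIVEKIT_API_KEY="your-api-key"
export LIVEKIT_API_SECRET="your-api-secret"
```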

Starting the worker

This command starts the worker and registers it with your LiveKit server, where it waits for jobs:

python my_agent.py start

To run the worker in dev-mode (with hot code reloading), you can use the dev command:

python my_agent.py dev

Using playground for your agent UI

To ease the process of building and testing an agent, we've developed a versatile web frontend called "playground". You can use or modify this app to suit your specific requirements. It can also serve as a starting point for a completely custom agent application.

Joining a specific room

To join a LiveKit room that's already active, you can use the connect command:

python my_agent.py connect --room <my-room>

What happens when I run my agent?

When you follow the steps above to run your agent, a worker starts and opens an authenticated WebSocket connection to a LiveKit server instance (defined by your LIVEKIT_URL and authenticated with an access token).

No agents are actually running at this point. Instead, the worker is waiting for LiveKit server to give it a job.

When a room is created, the server notifies one of the registered workers about a new job. The notified worker can decide whether or not to accept it. If the worker accepts the job, the worker will instantiate your agent as a participant and have it join the room where it can start subscribing to tracks. A worker can manage multiple agent instances simultaneously.

If a notified worker rejects the job or does not accept within a predetermined timeout period, the server will route the job request to another available worker.
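The dispatch flow described above can be sketched as a plain-Python simulation. This models the behavior only — it is not the server's actual protocol or the framework's API:

```python
# Toy model of job dispatch: the server offers a job to workers in turn
# until one accepts; a rejection (or timeout, modeled here the same way)
# routes the job to the next available worker.

class Worker:
    def __init__(self, name: str, busy: bool = False):
        self.name, self.busy = name, busy

    def accepts(self, job: dict) -> bool:
        # A real worker could also reject based on load or job metadata.
        return not self.busy

def dispatch(job: dict, workers: list) -> "Worker | None":
    """Offer `job` to each worker; return the one that accepts, or None."""
    for worker in workers:
        if worker.accepts(job):
            return worker  # worker instantiates the agent and joins the room
    return None  # no worker available

workers = [Worker("w1", busy=True), Worker("w2")]
assigned = dispatch({"room": "my-room"}, workers)
print(assigned.name)  # w2: the first worker rejected, so the job was re-routed
```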

What happens when I SIGTERM a worker?

The orchestration system was designed for production use cases. Unlike a typical web server, an agent is a stateful program, so it's important that a worker isn't terminated while active sessions are ongoing.

When a worker receives SIGTERM, it signals to the LiveKit server that it no longer wants additional jobs. It also auto-rejects any new job requests that arrive before the server has processed that signal. The worker remains alive while it manages any agents still connected to rooms.
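This draining behavior can be sketched in plain Python. It's a simplified model of the lifecycle, not the framework's actual shutdown code:

```python
# Toy model of graceful shutdown: after SIGTERM the worker stops taking
# new jobs but stays alive until all of its active sessions finish.

class DrainingWorker:
    def __init__(self):
        self.draining = False
        self.active_sessions = set()

    def handle_sigterm(self):
        self.draining = True  # tell the server: no more jobs, please

    def offer_job(self, job_id: str) -> bool:
        if self.draining:
            return False  # auto-reject jobs that arrive mid-shutdown
        self.active_sessions.add(job_id)
        return True

    def session_finished(self, job_id: str):
        self.active_sessions.discard(job_id)

    def can_exit(self) -> bool:
        # The process only terminates once every session has ended.
        return self.draining and not self.active_sessions

w = DrainingWorker()
w.offer_job("job-1")
w.handle_sigterm()
print(w.offer_job("job-2"))  # False: rejected, worker is draining
print(w.can_exit())          # False: job-1 is still running
w.session_finished("job-1")
print(w.can_exit())          # True: safe to terminate now
```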

Downloading model files

Some plugins require model files to be downloaded before they can be used. To download all the necessary models for your agent, execute the following command:

python my_agent.py download-files

If you're developing a custom plugin, you can integrate this functionality by implementing a download_files method in your Plugin class:

import torch

from livekit.agents import Plugin


class MyPlugin(Plugin):
    def __init__(self):
        # __version__ is assumed to be defined in your plugin's package.
        super().__init__(__name__, __version__)

    def download_files(self):
        # Fetch and cache model weights ahead of time so the agent
        # doesn't have to download them at session start.
        _ = torch.hub.load(
            repo_or_dir="my-repo",
            model="my-model",
        )


LiveKit Ecosystem

Realtime SDKs: React Components · Browser · Swift Components · iOS/macOS/visionOS · Android · Flutter · React Native · Rust · Node.js · Python · Unity (web) · Unity (beta)
Server APIs: Node.js · Golang · Ruby · Java/Kotlin · Python · Rust · PHP (community)
Agents Frameworks: Python · Playground
Services: LiveKit server · Egress · Ingress · SIP
Resources: Docs · Example apps · Cloud · Self-hosting · CLI
