Traditional human-machine interaction has often been a frustrating experience, with systems struggling to grasp natural language and user intent. This communication gap leads to inefficiencies, poor customer experiences, and barriers to accessing information.
Modern Conversational AI, powered by large language models (LLMs), is changing this experience. By letting us interact with technology in a more natural, intuitive way, it has the potential to reshape how we use it altogether.
This guide explores the world of Conversational AI, covering:
The role of LLMs in Conversational AI
Key components of LLM-powered systems
How Conversational AI systems work
A step-by-step guide to integrating Conversational AI into your own projects
Let’s learn how Conversational AI bridges the gap between humans and machines. 🚀
Understanding Conversational AI
Before we get into the details of Conversational AI, it is important to build a strong foundation. This section covers what Conversational AI is, its different types, and why LLM-powered Conversational AI outperforms other approaches.
What is Conversational AI?
Conversational AI enables computers to converse with people in natural language, making text or voice exchanges feel like conversations between humans. Early chatbots like ELIZA, built in the 1960s, were the first of their kind.
However, recent progress in AI, especially in Large Language Models (LLMs), has made Conversational AI much better at understanding context, responding in a way that sounds like a human, and keeping conversations going smoothly with multiple turns.
These improvements have made AI interactions more natural, accurate, and adaptable across various applications.
Types of Conversational AI
We can broadly categorize Conversational AI into three main types, each with distinct mechanisms and capabilities:
Rule-based systems: Usually chatbots that follow a predefined set of rules and can only respond to specific keywords or phrases. While they are simple to implement, they lack the flexibility to handle complex queries or contextual understanding.
Retrieval-based systems: These systems use machine learning algorithms to rank the most appropriate response from a predefined set of responses. They offer more flexibility than rule-based systems but are still limited by the scope of their training data.
Generative chat systems powered by LLMs: These systems use LLMs to generate responses dynamically based on the input and context of the conversation. LLMs enable them to engage in more natural, human-like conversations and handle various topics and queries.
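The gap between the first type and LLM-powered systems is easy to see in code. A rule-based responder reduces to a keyword lookup, as in the minimal sketch below (the rule table and responses are illustrative, not from any particular product):

```python
# Minimal rule-based chatbot: responses are fixed and keyed to keywords.
# Anything outside the rule table falls through to a canned fallback.
RULES = {
    "refund": "To request a refund, visit your order history and select 'Return'.",
    "hours": "Our support team is available from 9am to 5pm, Monday to Friday.",
    "password": "You can reset your password from the login page.",
}

def rule_based_reply(user_message: str) -> str:
    text = user_message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand that. Could you rephrase?"

print(rule_based_reply("How do I get a refund?"))
print(rule_based_reply("Tell me a joke"))  # falls outside the rules
```

A generative system replaces the lookup with a model call, so it can attempt the second question too; the trade-off is that its output must then be validated rather than guaranteed by the rule table.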
Advantages of LLM-powered Conversational AI over Traditional Approaches
LLM-powered Conversational AI offers several advantages over traditional approaches:
Improved natural language understanding: LLMs can better understand the nuances and context of human language for a more accurate interpretation of user intent and sentiment.
Greater flexibility: LLMs can handle a wide range of topics and adapt to different conversational styles, which makes them suitable for various applications and industries.
Ability to handle complex conversations: LLMs can maintain context across multiple turns of conversation, allowing for more engaging and coherent interactions.
Continuous learning: You can fine-tune LLMs on domain-specific data so they can improve over time and adapt to evolving user needs and preferences.
Previous generations of chatbots aimed for similar goals but were constrained by their simpler, rule-based designs, lack of coherent reasoning, and limited ability to understand and generate human-like text.
Modern LLMs bring in-depth contextual comprehension and generation capabilities that previous systems couldn't match. While most interactions happen via text, human interaction goes beyond text and involves other modalities and channels, like voice.
Conversational AI Interfaces
When building or using a Conversational AI system, the interface is crucial to how users interact with the technology. There are two primary types of interfaces:
Chat Interface (Text)
Chat interfaces enable users to interact with Conversational AI systems through text-based communication. These interfaces are commonly found in various forms:
1. Web Chat: Web chat interfaces are embedded within websites or web applications, allowing users to engage in text-based conversations with chatbots or virtual assistants. These interactions often involve more formal language, such as "Hello, I need help changing the delivery address on my account." They reflect the context of online inquiries and support interactions.
2. SMS/Messaging Platforms: Conversational AI can also be integrated into SMS or messaging platforms like WhatsApp or Facebook Messenger. These interfaces facilitate communication through shorter, informal language like "I need to change my address," mirroring the conversational nature of messaging.
3. Cobots: Cobots, or collaborative robots, are AI-powered assistants that work alongside human agents to enhance customer support interactions. They can process conversations in real-time, suggest responses to agents (the next best action), and even automate certain tasks to improve efficiency and response times.
Voice Interface
Voice interfaces enable users to interact with Conversational AI systems through spoken language. These interfaces can be deployed in various forms:
Software-based virtual assistants: Virtual assistants like Siri, Google Assistant, and Alexa are prime examples of software-based voice interfaces that can be accessed through smartphones, smart speakers, or other devices.
Video-based agents: Video-based agents incorporate visual elements, such as animated avatars or facial expressions, to improve the conversational experience. These agents are often used in customer service or virtual receptionist scenarios.
Phone-based agents: Phone-based agents, commonly used in interactive voice response (IVR) systems, enable users to interact with Conversational AI through phone calls. These agents handle tasks like routing calls, providing information, or processing transactions.
Voice interfaces present challenges that chat interfaces do not. Variations in voice, accent, sentence structure, pauses, and tone make spoken language inherently complex and variable. Interference and background noise can also make voice interactions more difficult.
Therefore, voice AI systems require specialized components to handle the complexities of spoken language before applying Natural Language Understanding (NLU) to extract meaning and intent. These components include Automatic Speech Recognition (ASR) to convert speech to text and potentially Text-to-Speech (TTS) to generate spoken responses.
In the following sections, we'll explore the key components of LLM-powered Conversational AI systems and how they work together to create seamless conversational experiences for both chat (text) and voice-based interfaces.
How Conversational AI Works: A Technical Overview
To fully understand Conversational AI, it helps to look under the hood. In this section, we will examine the main technical components that make human-computer conversations possible.
First, you will get a quick look at a Conversational AI pipeline (workflow).
Simplified Overview of a Conversational AI Pipeline
To build a Conversational AI system, let’s consider the general flow of the pipeline. The pipeline involves several key stages that work together to process and respond to human language.
Here's a high-level overview of the typical stages involved in a Conversational AI interaction:
Input Capture: This stage captures the user's input, which could be speech or text.
Automatic Speech Recognition (ASR): For voice-based interactions, ASR converts the spoken language into text. This step is crucial for transforming speech into a machine-readable format (text).
Natural Language Understanding (NLU): NLU processes the text to extract meaning, intent, and relevant entities. It involves techniques like syntactic parsing, semantic analysis, and intent classification.
Dialogue Management: This component maintains the conversation's context, handles user responses, manages multi-turn interactions, and generates system replies.
Natural Language Generation (NLG): NLG takes the structured response generated by the dialogue manager and converts it into natural, human-like language.
Response Delivery: Finally, the generated response is delivered to the user as text or speech.
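The stages above can be wired together as a simple function chain. In the sketch below, every component is a stub standing in for a real ASR, NLU, or LLM system; all names and return values are illustrative:

```python
def asr(audio: bytes) -> str:
    """Stub: a real system would call a speech-to-text model here."""
    return "what is the weather in paris"

def nlu(text: str) -> dict:
    """Stub: extract a coarse intent and entities from the transcript."""
    intent = "get_weather" if "weather" in text else "unknown"
    entities = {"city": "paris"} if "paris" in text else {}
    return {"intent": intent, "entities": entities}

def dialogue_manager(parsed: dict, history: list) -> dict:
    """Stub: decide on a structured action given intent and context."""
    history.append(parsed)
    return {"action": "report_weather", "city": parsed["entities"].get("city")}

def nlg(action: dict) -> str:
    """Stub: a real system would have an LLM verbalize the action."""
    return f"The weather in {action['city'].title()} is sunny."

def handle_turn(audio: bytes, history: list) -> str:
    text = asr(audio)                           # 1-2. input capture + ASR
    parsed = nlu(text)                          # 3. NLU
    action = dialogue_manager(parsed, history)  # 4. dialogue management
    return nlg(action)                          # 5-6. NLG + response delivery

history: list = []
print(handle_turn(b"<audio>", history))
```

The value of laying the pipeline out this way is that each stage can be swapped independently: replacing the `asr` stub with a real speech-to-text call leaves the rest of the chain untouched.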
Core Components of a Conversational AI System
While the architecture and workings of different Conversational AI systems may vary, they fundamentally contain the following components:
Automatic Speech Recognition (ASR)
In voice-based Conversational AI, ASR takes center stage. It converts spoken language into text, bridging the user's voice and the system's understanding.
Advanced ASR models, like those offered by Deepgram (e.g., Nova and Whisper Cloud), use Transformer-based architectures to achieve high accuracy and robustness, even in challenging acoustic environments (noisy backgrounds).
Natural Language Understanding (NLU)
Once the user input is converted into text, NLU extracts meaning and intent—it’s responsible for comprehension.
NLU involves techniques like syntactic parsing, which analyzes the grammatical structure of the text, and entity recognition, which identifies and categorizes key elements like names, dates, and locations.
Because of their extensive world knowledge, LLMs have improved NLU by providing a deeper understanding of context (sentences and sequences of sentences).
For example, they can comprehend idiomatic expressions and detect sentiment, which makes interactions more natural and effective.
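As a concrete illustration of the NLU step, the sketch below does keyword-based intent classification and regex-based entity recognition. Production systems use trained models or LLMs rather than hand-written rules like these; the intent names and patterns are purely illustrative:

```python
import re

# Hand-written rules standing in for a trained intent classifier.
INTENT_KEYWORDS = {
    "book_flight": ["flight", "fly", "plane"],
    "check_order": ["order", "package", "delivery"],
}

# A simple ISO-date matcher standing in for a trained entity recognizer.
DATE_PATTERN = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")

def understand(text: str) -> dict:
    """Return a coarse intent plus any date entities found in the text."""
    lowered = text.lower()
    intent = "unknown"
    for name, keywords in INTENT_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            intent = name
            break
    return {"intent": intent, "dates": DATE_PATTERN.findall(text)}

print(understand("I'd like to book a flight on 2024-07-15"))
```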
Dialogue Management (DM)
DM is responsible for maintaining the conversation's flow. It tracks the conversation's context, handles user responses, manages multi-turn conversations, and deals with ambiguity or context switches, ensuring the interaction remains coherent even when users ask follow-up questions or shift topics.
LLMs contribute to dialogue management by enabling more dynamic conversations, since they can maintain and recall context over extended interactions. Through chain-of-thought prompting, they can also act as creative problem solvers.
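A minimal version of that context tracking is a slot store that survives across turns, so each new user utterance is resolved against what has already been said. The slot names and replies below are illustrative:

```python
class DialogueState:
    """Track filled slots across turns and ask for whatever is missing."""

    REQUIRED = ("restaurant", "time", "party_size")

    def __init__(self):
        self.slots: dict = {}

    def update(self, new_info: dict) -> str:
        self.slots.update(new_info)
        missing = [s for s in self.REQUIRED if s not in self.slots]
        if missing:
            return f"Got it. Could you tell me the {missing[0]}?"
        return (f"Booking {self.slots['restaurant']} at {self.slots['time']} "
                f"for {self.slots['party_size']}.")

state = DialogueState()
print(state.update({"restaurant": "Luigi's"}))  # asks for the time
print(state.update({"time": "7pm"}))            # asks for the party size
print(state.update({"party_size": 2}))          # all slots filled: confirms
```

In an LLM-powered system this explicit state machine is often replaced, or supplemented, by the model's own conversation history, but keeping a structured state like this makes the system's behavior easier to audit and test.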
Natural Language Generation (NLG)
NLG is responsible for creating human-like responses that are both informative and engaging. It transforms the extracted meaning and intent into well-structured and contextually relevant text.
LLMs are crucial in generating more natural, fluent, and personalized responses, making the interaction feel truly conversational.
Integration with External Systems
To be truly effective, Conversational AI systems often need to connect with external knowledge bases, APIs, or databases to retrieve information, handle queries, and execute tasks. This integration allows the system to handle queries and transactions beyond its built-in capabilities, for example fetching real-time weather data or pulling answers from a domain-specific knowledge repository.
RAG-based Applications
Retrieval-Augmented Generation (RAG) plays a crucial role here by retrieving relevant data from external sources, like a database or knowledge graph, to enrich the conversation.
The system then integrates this information into the conversation context for more informed and contextually accurate dialogue. Finally, the system uses this augmented context to generate highly relevant, accurate, and useful responses.
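Stripped to its core, the RAG loop is retrieve, augment, generate. The sketch below uses naive word-overlap scoring in place of a real vector store, and stops where the LLM call would go; the documents and scoring are purely illustrative:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (stand-in for embeddings)."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the user query with retrieved context before calling the LLM."""
    context = retrieve(query, documents)
    return ("Answer using only the context below.\n"
            "Context:\n- " + "\n- ".join(context) +
            f"\nQuestion: {query}")

docs = [
    "Refunds are processed within 5 business days.",
    "Our headquarters are in Berlin.",
    "Shipping is free for orders over 50 euros.",
]
prompt = build_prompt("How long do refunds take?", docs)
print(prompt)  # the refund document ranks first in the retrieved context
```

A production system swaps the overlap scoring for embedding similarity against a vector database, but the shape of the loop, retrieve then augment then generate, stays the same.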
AI Agents
AI agents, which are software programs that can perform tasks autonomously, are also increasingly being used in Conversational AI applications. These agents can handle complex tasks, such as scheduling appointments, making reservations, or providing personalized recommendations, by integrating with external systems and APIs.
In the following sections, we’ll focus on generative, LLM-powered conversational AI because today's human-machine conversations are becoming more complex and dynamic.
Challenges of Developing LLM-Powered Conversational AI Systems
LLMs have transformed Conversational AI, but implementing them is not always easy. The challenges span various aspects, including ensuring the quality of interactions, providing a seamless developer experience, and addressing privacy, security, and compliance concerns.
Quality
The quality of interactions in LLM-powered Conversational AI systems is paramount. However, achieving and maintaining high quality can be challenging due to:
Accuracy concerns:
LLMs can sometimes generate inaccurate or nonsensical responses, often called "hallucinations." They may also struggle to follow instructions precisely, leading to unexpected or undesirable outputs. In other cases, they might echo and reinforce a user's opinions and biases, even when those are inaccurate, a phenomenon called sycophancy.
You need to adopt robust error handling and validation techniques to mitigate these issues and ensure the reliability of the system's responses.
Balancing cost and latency vs. quality:
Striking the right balance between cost, latency, and quality is crucial when deploying LLM-powered systems. Larger models often offer superior performance but come with increased computational costs and latency.
You must carefully evaluate your requirements and choose models that deliver optimal performance while remaining cost-effective and responsive.
Developer Experience
Building and maintaining LLM-powered Conversational AI systems can present challenges for developers:
Control and execution:
Controlling the precise execution steps of LLMs can be challenging because their generative nature can sometimes produce unexpected outputs. A developer may struggle to ensure the system follows a specific sequence of actions—to guarantee deterministic behavior—or adheres to predefined rules.
Techniques like prompt engineering with structured output and fine-tuning can help improve control, but it remains an ongoing area of research and development.
Prompt complexity:
Managing prompts—instructions to the LLM—for various scenarios and user inputs can become increasingly complex as the system's capabilities expand. This complexity can hinder development and maintenance efforts.
You must adopt strategies like modular prompt design and version control to manage the prompt complexity effectively.
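One way to keep that complexity in check is to treat prompts like code: small, named, versioned fragments that compose into the final prompt. The fragment names, versions, and company name below are hypothetical; this is a sketch of the pattern, not a specific framework's API:

```python
# Modular, versioned prompt fragments: each piece is owned and reviewed
# separately, and the final prompt is assembled at call time.
PROMPT_LIBRARY = {
    ("system_role", "v2"): "You are a concise support assistant for ACME.",
    ("tone", "v1"): "Be polite and avoid jargon.",
    ("safety", "v3"): "Refuse requests for personal data about other users.",
}

def build_system_prompt(parts: list[tuple[str, str]]) -> str:
    """Assemble a system prompt from (name, version) pairs, in order."""
    return "\n".join(PROMPT_LIBRARY[p] for p in parts)

prompt = build_system_prompt([
    ("system_role", "v2"),
    ("tone", "v1"),
    ("safety", "v3"),
])
print(prompt)
```

Pinning versions this way means a change to the safety fragment can be rolled out, or rolled back, without touching the rest of the prompt, which is the same discipline version control brings to ordinary code.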
Scalability:
Scaling LLM-powered applications from proof-of-concept (POC) to production environments can be a hurdle. Handling increased traffic, ensuring consistent performance, and managing computational resources require careful planning and architectural considerations.
Debugging and improvement:
Debugging and troubleshooting LLM-powered systems can be challenging due to their inherent complexity and the black-box nature of LLMs. Identifying the root cause of issues and implementing effective fixes may require extensive analysis and experimentation.
Privacy, Security, and Compliance
Protecting user data, ensuring cybersecurity, and adhering to compliance regulations are critical aspects of building LLM-powered Conversational AI systems:
Data privacy and IP protection:
Protecting user data and intellectual property (IP) is paramount. To ensure the confidentiality and integrity of sensitive information, you need to implement robust security measures, including encryption, access controls, and data anonymization techniques.
Cybersecurity:
Like any other online application, conversational AI systems are susceptible to cyberattacks and vulnerabilities. Therefore, it is paramount to ensure robust security measures to protect against threats and safeguard user information.
Compliance:
Compliance with industry-specific regulations and regional data protection laws, such as GDPR in Europe or HIPAA in healthcare, is essential. You must ensure that your conversational AI systems adhere to these requirements, including obtaining necessary consent, implementing data retention policies, and providing transparency to users.
By understanding and addressing these issues of quality, developer experience, privacy, security, and compliance, developers can build LLM-powered Conversational AI applications that are robust, trustworthy, and valuable to both users and businesses.
Implementation Approaches for Developing Conversational AI Systems
Choosing the right implementation approach is critical when developing a conversational AI system. There are three primary paths to consider, each with its own trade-offs regarding ease of execution, investment required, and level of control over the solution.
Self-developed Platform
This approach involves building a platform in-house from scratch. This offers full control and customization over the solution but demands significant investment and specialized talent.
Ease of execution: Difficult
Investment: Significant (development, infrastructure, talent)
Control: Full control over the entire system
If your firm has access to top developer talent, prioritizes complete control, and is willing to invest in the resources required, this approach might be suitable. However, carefully evaluate the feasibility and potential risks of such an undertaking.
Third-Party Platforms
The second approach involves leveraging end-to-end third-party platforms from major cloud tech providers like Amazon, Google (e.g., Gen App Builder), and Microsoft (Power Apps). This simplifies development and integration but limits customization and locks you into a specific technology stack.
It could also involve modularizing the stack using different tools that integrate well with the system's components.
Ease of execution: Easier than self-developed
Investment: Significant (licensing, customization, support)
Control: Control with limitations imposed by the framework
This approach balances control and ease of implementation, especially for organizations with existing cloud infrastructure. However, it requires specialized talent to customize and support the solution effectively.
Partnership with Specialists
This approach involves partnering with a Conversational AI specialist. This leverages their expertise and technology but sacrifices some control over the solution.
Ease of execution: Easier
Investment: Lower; no specialized in-house talent required (the partner handles development and support)
Control: Loss of control over the underlying technology
This approach is beneficial if you want a quick and efficient implementation, particularly if the partner offers pre-trained models for your industry.
Choosing the right approach requires honest self-reflection. Evaluate your organization's capabilities, past experiences with in-house development, and comfort level with relying on external partners. If you lack experience building AI systems, partnering with a specialist might be the most prudent choice.
The optimal implementation approach ultimately depends on your needs, resources, and priorities. Carefully consider each option before deciding to ensure your Conversational AI system aligns with your goals and delivers the desired outcomes.
Step-by-Step Process for Implementing an LLM-Powered Conversational AI System
Implementing a Conversational AI system involves several key steps, from defining objectives and selecting the right technology stack to designing conversation flows, optimizing models, and continuously monitoring and improving the system.
Let's explore each step in detail.
Step 1: Define Clear Objectives and Use Cases
Identify business goals:
Determine what you want to achieve with Conversational AI.
Common objectives include improving customer service, automating routine tasks, or enhancing user engagement.
Choose use cases:
Based on your objectives, select specific use cases where Conversational AI can add value.
Examples include customer support chatbots, virtual assistants, or voice-activated systems for smart devices.
Choose an implementation approach:
Refer back to the previous section, "Implementation Approaches for Developing Conversational AI Systems," for guidance.
Align your use cases with the most pressing needs of your business or target audience to ensure that the system delivers tangible benefits.
Step 2: Choose the Right Technology Stack
Large language model (LLM):
Select a suitable LLM (e.g., GPT-3, GPT-4, BLOOM, LLaMA) based on your requirements and budget.
Considerations:
Accuracy and output quality on tasks relevant to your use case
Ability to handle accents and dialects
Multilingual support
Contextual understanding
Cost and licensing
Prompt engineering framework:
Choose a framework (e.g., LangChain, Haystack) to help structure and manage prompts for the LLM effectively, considering voice-specific nuances and contexts.
Considerations:
Ease of use and flexibility
Support for various prompt engineering techniques
Integration capabilities with other components of your stack
Automatic speech recognition (ASR) system:
Select a high-quality ASR system (e.g., Deepgram, Whisper) known for its accuracy, low latency, and robust handling of diverse accents and noisy environments.
Considerations:
Accuracy and Word Error Rate (WER)
Real-time or batch processing capabilities
Language and accent support
Customization options
Integration with other components
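Word Error Rate, the accuracy metric listed above, is the word-level edit distance between the reference transcript and the ASR hypothesis, divided by the number of reference words. A standard dynamic-programming implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of four reference words: WER = 0.25
print(word_error_rate("change my delivery address", "change my address"))
```

Lower is better: a WER of 0.25 means one in four reference words was transcribed incorrectly, which is often enough to break downstream intent detection.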
Text-to-Speech (TTS) system (if applicable):
If your use case requires it, choose a natural-sounding TTS system (e.g., Amazon Polly, Google Cloud Text-to-Speech) to convert the LLM-generated responses into spoken language.
Considerations:
Voice quality and naturalness
Language and accent support
Customization options (e.g., voice styles, emotional expressions)
Integration with other components
Backend infrastructure:
Set up a robust backend system (e.g., Node.js, Python Flask, Django) to handle API calls, database interactions, LLM integration, ASR integration, and potentially TTS integration.
Considerations:
Scalability and performance
Security and data privacy
Integration with existing systems
Ease of development and maintenance
Deployment platform:
Choose a deployment platform (e.g., cloud providers like AWS, Azure, GCP, or on-premises servers) that can handle the computational demands of LLMs and ASR models, ensuring scalability and low latency for real-time voice interactions.
Considerations:
Cost-effectiveness
Scalability and performance
Security and compliance (user authentication, privacy certifications like GDPR and HIPAA)
Level of support and documentation
Integration with other tools and services
Step 3: Design the Conversation Flow and Prompts
Create user stories:
Develop user stories to outline typical voice interactions and identify various conversation paths, accounting for potential ASR errors and ambiguities.
Design prompts:
Craft effective prompts for the LLM, considering the voice-specific context and nuances. Include system instructions, few-shot examples (using transcribed speech), and clear guidelines for handling ASR uncertainties.
Experiment with prompt engineering:
Iterate on your prompts, trying different variations and techniques (e.g., chain-of-thought prompting, role prompting) to optimize response quality and control, especially in handling transcribed speech variations.
Handle ASR errors:
Implement robust strategies to gracefully handle potential errors in ASR transcriptions. Consider techniques like confidence scores, clarification requests, and context-based inference to improve understanding.
Implement safety measures:
Incorporate stringent safety guidelines and filters to prevent the LLM from generating harmful or biased content, especially when dealing with potentially sensitive voice inputs.
Define fallback strategies:
Plan graceful responses for when the system encounters unfamiliar requests or cannot fulfill a user's intent.
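The ASR-error handling and fallback strategies above can share one decision point: gate on the recognizer's confidence score before committing to an intent. The thresholds and replies below are illustrative placeholders to be tuned against real traffic:

```python
CONFIRM_THRESHOLD = 0.85   # below this, echo back what was heard
REJECT_THRESHOLD = 0.50    # below this, ask the user to repeat

def route_transcript(transcript: str, confidence: float) -> str:
    """Decide whether to proceed, confirm, or fall back based on ASR confidence."""
    if confidence < REJECT_THRESHOLD:
        return "Sorry, I didn't catch that. Could you say it again?"
    if confidence < CONFIRM_THRESHOLD:
        return f'I heard "{transcript}". Is that right?'
    return f"PROCEED: {transcript}"

print(route_transcript("cancel my order", 0.95))  # high confidence: proceed
print(route_transcript("cancel my order", 0.70))  # medium: confirm first
print(route_transcript("???", 0.30))              # low: ask to repeat
```

The same gate doubles as a fallback path: anything the system cannot resolve after a confirmation round can be escalated to a human agent instead of looping.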
Step 4: Select and Optimize the Models
Data preparation:
Clean and preprocess your collected audio and text data.
For ASR, consider techniques like noise reduction, data augmentation, and accent diversity to enhance robustness.
For the LLM, ensure high-quality and diverse text data relevant to your use case.
Considerations:
Data quality and relevance
Data cleaning and preprocessing techniques
Data augmentation strategies for ASR
Fine-tuning:
Fine-tune the pre-trained LLM for your specific domain and use cases, incorporating transcribed speech data to improve its understanding and generation of voice-based responses. Fine-tune the ASR model to your specific domain and audio characteristics if possible.
Considerations:
Fine-tuning techniques and strategies
Hyperparameter tuning
Evaluation metrics
Optimization:
Optimize the LLM and ASR model parameters and hyperparameters to improve response quality, reduce latency, and minimize computational costs.
Considerations:
Model compression and quantization
Hardware acceleration
Performance monitoring and optimization tools
Step 5: Develop and Integrate
Build the Conversational AI system:
Implement the core components of your system (e.g., prompt engineering framework, LLM integration, ASR integration, backend logic, TTS integration if needed) using your chosen programming language and frameworks.
Considerations:
Software development best practices
Code modularity and reusability
Version control
Integrate with external systems:
Connect your system to relevant external systems (e.g., databases, APIs, knowledge graphs) to access and update information, enabling the LLM to provide more comprehensive and up-to-date responses.
Considerations:
API integration and data exchange protocols
Data synchronization and consistency
Streamline the ASR-LLM-TTS pipeline:
Ensure efficient communication and data flow between the ASR, LLM, and TTS systems (if used). Consider real-time streaming or batch processing based on your use case and latency requirements.
Considerations:
Data flow optimization
Latency reduction techniques
Error handling and recovery mechanisms
Test thoroughly:
Conduct rigorous testing across different scenarios, user inputs, and audio conditions, including user acceptance testing (UAT), to identify and fix any issues. Pay close attention to the interplay between ASR, LLM, and TTS (if used).
Considerations:
Test case design and coverage
Automated testing frameworks
User experience testing
Deploy the system:
Deploy your Conversational AI system to your chosen platform, ensuring it can handle the expected load and traffic, including real-time interactions.
Considerations:
Deployment strategies (e.g., blue-green deployment, canary deployment)
Load balancing and scaling
Monitoring and logging
Step 6: Monitor, Maintain, and Improve
Monitor performance:
Use analytics and logs to continuously monitor the system's performance (e.g., response quality, user satisfaction, conversation completion rate, ASR accuracy, and TTS naturalness).
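Metrics like conversation completion rate can be computed directly from interaction logs. The record fields below describe a hypothetical log format, not any particular analytics product:

```python
# Each record summarizes one conversation from a hypothetical interaction log.
logs = [
    {"completed": True,  "turns": 4, "asr_confidence": 0.93},
    {"completed": False, "turns": 9, "asr_confidence": 0.61},
    {"completed": True,  "turns": 3, "asr_confidence": 0.88},
    {"completed": True,  "turns": 5, "asr_confidence": 0.95},
]

def summarize(records: list[dict]) -> dict:
    """Aggregate the monitoring metrics mentioned above from raw logs."""
    n = len(records)
    return {
        "completion_rate": sum(r["completed"] for r in records) / n,
        "avg_turns": sum(r["turns"] for r in records) / n,
        "avg_asr_confidence": sum(r["asr_confidence"] for r in records) / n,
    }

print(summarize(logs))
```

Tracking these aggregates over time turns "monitor performance" into something actionable: a drop in average ASR confidence, for instance, often precedes a drop in completion rate.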
Collect user feedback:
Gather user feedback through surveys or in-app mechanisms to identify areas for improvement, especially regarding voice interactions and ASR performance.
Iterate and enhance:
Regularly update and enhance your system based on performance data, user feedback, and evolving requirements.
Fine-tune:
Periodically fine-tune your LLM and ASR models on new data to adapt to changing user behavior, language patterns, knowledge, and audio characteristics.
Remember, implementing a successful Conversational AI system is an ongoing process. Continuous monitoring, improvement, and adaptation are key to ensuring your system remains effective and delivers exceptional user experiences.
Real-World Applications of Conversational AI
Conversational AI's versatility has led to its adoption across various domains, revolutionizing how we interact with technology and conduct our daily lives.
Let's explore some real-world applications of Conversational AI, categorized into business and personal use cases.
Business Use Cases
Conversational AI is changing how businesses operate and communicate with their customers, improving efficiency, raising customer satisfaction, and cutting costs. Let's see how.
1. Customer support and service:
Conversational AI-powered chatbots and virtual assistants provide 24/7 customer support, handling inquiries, troubleshooting issues, and guiding customers through various processes. This frees human agents to focus on complex tasks, leading to faster response times and improved customer satisfaction.
2. Sales and marketing:
Conversational AI can assist in lead generation, qualify prospects, and provide personalized product recommendations. Chatbots can engage customers in interactive conversations, answer questions about products or services, and even guide them through purchasing.
3. HR and employee engagement:
Conversational AI can streamline HR processes, such as onboarding new employees, answering policy-related questions, and providing training and development resources. It can also facilitate employee engagement surveys and feedback collection, enhancing internal communication and employee satisfaction.
4. Industry-specific applications:
Conversational AI finds applications in various industries, including healthcare, finance, and retail. In healthcare, AI-powered systems can assist in appointment scheduling, provide basic medical advice, and support mental health through conversational therapy.
In finance, chatbots can help customers with account inquiries and transaction tracking, and even provide personalized financial advice. In retail, AI-powered assistants can offer product recommendations, assist with online shopping, and provide virtual styling advice.
Personal Use Cases
Conversational AI also transforms how we manage our daily lives, from simplifying tasks to enhancing entertainment and well-being.
1. Virtual assistants (e.g., Siri, Alexa, Google Assistant):
Virtual assistants have become an integral part of many households, enabling users to control smart home devices, set reminders, play music, get weather updates, and much more using simple voice commands.
2. Personalized recommendations and content curation:
Conversational AI can leverage user data and preferences to deliver personalized product, service, or content recommendations. This enhances user engagement and helps individuals discover new and relevant information tailored to their interests.
3. Mental health and wellness support:
Conversational AI-powered chatbots can provide mental health support and resources, offering coping strategies and stress reduction techniques and even connecting individuals with professional help.
These are just a few examples of the diverse applications of Conversational AI. As the technology continues to evolve, we can expect even more innovative and impactful use cases to emerge, further blurring the lines between human-machine interaction and transforming various aspects of our lives.
Conclusion
This article has shown how Conversational AI combines LLMs, ASR, and TTS to enable natural, human-like interactions between machines and users. By understanding its core components, implementation approaches, and real-world applications, businesses and individuals can use Conversational AI to make their operations more efficient, engaging, and accessible.
Our contributions at Deepgram, particularly with Nova for Speech-to-Text (STT) and Aura for Text-to-Speech (TTS), have improved the accuracy and naturalness of conversational AI systems for many users.
We encourage you to continue exploring Conversational AI through hands-on experimentation, integration into your business, or contribution to ethical AI development.
Frequently Asked Questions
1. What are the key components of a Conversational AI system?
A Conversational AI system typically includes components like Automatic Speech Recognition (ASR), Natural Language Understanding (NLU), Dialogue Management, and Natural Language Generation (NLG). These work together to interpret and respond to user inputs effectively.
2. How does Conversational AI improve customer engagement?
Conversational AI enhances customer engagement by providing 24/7 support, personalizing interactions, and resolving queries quickly, leading to higher satisfaction and loyalty.
3. What are the benefits of using AI-powered chatbots in businesses?
AI-powered chatbots automate routine tasks, reduce operational costs, and provide consistent customer service, helping businesses improve efficiency and customer experience.
4. What are some challenges in implementing voice-based conversational AI?
Voice-based AI faces challenges like accurately transcribing diverse accents and handling background noise. It requires robust ASR and strategies for addressing transcription errors. Also, ensure ethical handling of voice data and user privacy.