Our LLM token use at Linkt is ripping. GTM teams are loving our research agents for finding and monitoring companies. Excited to upgrade to Gemini 3.0 (...this week?👀)
Linkt is at the forefront of democratizing AI technology. Our primary goal is to transform how various industries leverage artificial intelligence, ensuring it's not just accessible but also profoundly beneficial. We excel in creating tailor-made AI solutions that revolutionize customer engagement and significantly elevate team productivity. Our expertise lies in deploying cutting-edge algorithms and user-friendly interfaces to deliver seamless experiences. Join us on our journey as we redefine the future of AI in business, fostering innovation and driving measurable success across sectors.
Coding orgs look completely different than they did this time last year, with AI agents fundamentally changing how they work. Over the next 6 months, sales teams will undergo a similar transition, and our team at Linkt is excited to lead this charge. We're officially launching Signal. Sales teams don't need another tool that creates more noise. They need real intelligence. Signal is a fleet of research agents that does the prospecting for you—finding your ICP and the verifiable buying triggers that matter, so your team can win more deals. Learn about our new approach: https://lnkd.in/gZ4WjS5K
It was an honor to speak at last week's ISG (Information Services Group) AI Impact Summit in Boston. I discussed the tangible results businesses are achieving with artificial intelligence. During my session, I covered:
- How Linkt is helping companies boost operational efficiency by 50%+ with custom AI workflows.
- How our fleet of AI Agents, 'Signal,' is empowering sales teams to uncover net-new revenue.
A huge thank you to the team at ISG for hosting such a fantastic event, and to everyone who attended the session and shared their valuable insights.
The Linkt team is thrilled to launch two new AI Agents—Research Agent and Admin Agent—designed to streamline your team’s day-to-day tasks.
Research Agent: Digs through the entire web to find key information like emerging trends, new products, and top talent.
Admin Agent: Manages repetitive digital tasks such as data entry and document processing, giving you back valuable time.
What Sets Us Apart?
Agent Onboarding: Just like training a new employee, our agents learn the specific processes unique to your workflow.
Agent Feedback: We continually monitor and fine-tune your agents to reduce error rates and ensure real ROI.
#SXSW
This year our dev team delivered 10x the number of features—while only adding two new engineers. Here’s how:
AI in our workflow: We made Cursor our company's required IDE, greatly increasing our team's efficiency.
Lean team structure: We operate without traditional PM or design roles, relying on direct customer feedback and using call transcripts and AI for instant Jira ticket creation (a rough sketch of that flow is below).
AI problem solving: Our team is full of AI-native problem solvers. Bottlenecks tend to be just a few ChatGPT questions away from resolution.
Curious to see the impact? Take a look at two of our developers’ GitHub stats below.
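For readers wondering what "call transcripts and AI for instant Jira ticket creation" can look like in practice, here is a minimal, hypothetical sketch. The post doesn't describe Linkt's actual pipeline; the model name, project key, and environment variables below are assumptions for illustration only.

```python
# Hypothetical sketch: summarize a call transcript with an LLM, then file a Jira ticket.
# Model name, project key, and env vars are illustrative assumptions, not Linkt's setup.
import os
import requests
from openai import OpenAI

def transcript_to_ticket(transcript: str) -> dict:
    """Ask an LLM to condense a call transcript into a ticket title and description."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system",
             "content": "Summarize this call into a one-line ticket title, then a short "
                        "description. Separate the two with a single newline."},
            {"role": "user", "content": transcript},
        ],
    )
    title, _, body = resp.choices[0].message.content.partition("\n")
    return {"summary": title.strip(), "description": body.strip()}

def create_jira_issue(ticket: dict) -> str:
    """Create a Jira issue via the REST API (v2) and return its issue key."""
    r = requests.post(
        f"{os.environ['JIRA_BASE_URL']}/rest/api/2/issue",
        auth=(os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"]),
        json={"fields": {
            "project": {"key": "ENG"},      # assumed project key
            "issuetype": {"name": "Task"},
            "summary": ticket["summary"],
            "description": ticket["description"],
        }},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["key"]
```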
A recent Microsoft paper estimates that OpenAI's GPT-4o-mini is roughly an 8-billion-parameter model, pointing to effective distillation techniques. The paper lists other estimated model sizes, including Claude 3.5 Sonnet at 175 billion, GPT-4 at 1.76 trillion, GPT-4o at 200 billion, o1-preview at 300 billion, and o1-mini also at 200 billion. These estimates highlight a clear trend in AI model development: optimizing performance while shrinking size. The successful distillation of GPT-4o-mini suggests that smaller models can retain significant reasoning capabilities, making them practical for a wider range of applications and for devices like smartphones. The findings matter because they indicate that smaller models like GPT-4o-mini can deliver competitive performance relative to larger counterparts, potentially reducing computational costs and improving deployment efficiency. This is particularly relevant for developers and organizations looking to integrate AI solutions without the extensive resource requirements of the largest models. For more in-depth insights, you can access the full paper here: https://lnkd.in/ezZcEDzq
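As context for the distillation claim: distillation typically trains a small "student" model to match a larger "teacher" model's softened output distribution alongside the ground-truth labels. The paper does not describe OpenAI's recipe; the sketch below is a generic knowledge-distillation loss in PyTorch, with the temperature and weighting chosen purely for illustration.

```python
# Generic knowledge-distillation loss (Hinton-style), for illustration only --
# not OpenAI's training recipe.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 2.0,       # softening temperature (assumed)
                      alpha: float = 0.5    # weight on the soft-target term (assumed)
                      ) -> torch.Tensor:
    # Soft targets: student matches the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale to keep gradient magnitudes comparable across temperatures
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```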
Sonus AI has launched its new family of large language models (LLMs) with the Sonus-1 series, designed to cater to various needs in AI applications. The series includes four models: Sonus-1 Mini, optimized for speed and cost-effectiveness; Sonus-1 Air, which balances performance and resource usage; Sonus-1 Pro, the top-tier model for complex tasks; and Sonus-1 Pro (with Reasoning), featuring advanced chain-of-thought reasoning capabilities. The Sonus-1 Pro model has demonstrated impressive performance across several benchmarks. It achieved an 87.5% accuracy on the MMLU benchmark, which increased to 90.15% when utilizing its reasoning capabilities. On the MMLU-Pro benchmark, Sonus-1 Pro scored 71.8%, showcasing its versatility in handling diverse tasks. The model excels in mathematical reasoning as well, achieving a remarkable 91.8% on the MATH-500 benchmark, indicating its strong capabilities in complex calculations. Additionally, it scored 88.9% on the DROP reasoning benchmark and 67.3% on GPQA-Diamond, further solidifying its position as a competitive player in the LLM space. These advancements highlight Sonus AI's commitment to delivering high-performance models tailored for various applications, from rapid responses to intricate reasoning tasks. For more details, visit https://lnkd.in/eH-YeZBm
DeepSeek-AI has launched DeepSeek-V3, a powerful Mixture-of-Experts (MoE) language model boasting 671 billion parameters, with 37 billion activated per token. This model aims to optimize inference and training costs through innovative architectures like Multi-head Latent Attention (MLA) and an auxiliary-loss-free strategy for load balancing. DeepSeek-V3 was pre-trained on an impressive 14.8 trillion diverse tokens, followed by supervised fine-tuning and reinforcement learning stages. The model's performance was rigorously evaluated across multiple benchmarks, demonstrating superior accuracy in various tasks. For instance, it achieved 88.5 on the MMLU benchmark and 75.9 on MMLU-Pro, outperforming all other open-source models and rivaling leading closed-source models like GPT-4o. Training DeepSeek-V3 required only 2.788 million H800 GPU hours, roughly $5.576 million at the report's assumed rental rate of $2 per GPU hour, making it economically efficient for the scale of its capabilities. The training process was notably stable, with no irrecoverable loss spikes observed. DeepSeek-V3 excels in code generation and mathematical reasoning, achieving state-of-the-art performance on benchmarks like MATH-500 and LiveCodeBench. This positions it as a leading choice for developers and researchers seeking robust AI solutions. 🔗 Read more about DeepSeek-V3 here: https://lnkd.in/gkUTJRd8
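To make the "37 billion activated per token" idea concrete: in a Mixture-of-Experts layer, a router selects a few experts per token, so only those experts' parameters do any work. The sketch below is a generic top-k MoE layer in PyTorch with toy sizes; it is not DeepSeek-V3's implementation (no MLA, no auxiliary-loss-free balancing), just the basic routing pattern.

```python
# Minimal top-k Mixture-of-Experts layer (toy sizes) -- shows why only a fraction
# of a model's parameters are active per token. Not DeepSeek-V3's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, n_experts)
        topk, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk, dim=-1)      # mix only the k chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: route 4 toy tokens through the layer.
layer = TopKMoE()
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```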
A recent study published on arXiv reveals that OpenAI's o1-preview model significantly outperforms human doctors in medical reasoning tasks. The findings show that while doctors achieved only a 30% correct diagnosis rate, the AI model reached an impressive 80% accuracy. This research, conducted by teams from Harvard Medical School and Stanford University, evaluated o1-preview’s performance through five rigorous experiments, including differential diagnosis and management reasoning. Traditional evaluations of large language models (LLMs) often rely on multiple-choice questions, which do not effectively simulate real clinical scenarios. In contrast, the o1-preview model was assessed based on its ability to synthesize clinical data and provide accurate diagnoses, showcasing its potential in real-world applications. The model excelled particularly in generating differential diagnoses and demonstrated high-quality diagnostic reasoning. Despite its strengths, the study noted that o1-preview did not show significant improvements in probabilistic reasoning tasks. Nevertheless, this advancement is crucial as it opens up new avenues for accessible medical care. With AI providing high-quality diagnostic advice that is always available, it could alleviate some challenges faced by individuals seeking medical attention. Overall, this study highlights the potential of AI in enhancing clinical decision-making and improving healthcare accessibility while emphasizing the need for ongoing research to refine these technologies further. 🔗 Read more about the study here: https://lnkd.in/e9HD2j3y