The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more.
[AI Agent Application Development Framework] - 🚀 Build AI-agent-native applications with very little code 💬 Interact with AI agents in code using structured data and a chained-call syntax 🧩 Extend an AI agent with plugins instead of rebuilding a whole new agent
An open-source, LLM-based automated daily news-collection workflow showcase, powered by the Agently AI application development framework.
Awesome LLM Plaza: daily tracking of all sorts of awesome LLM topics, e.g. LLMs for coding, robotics, reasoning, multimodality, etc.
A Spotlight-style app. Talk to ChatGPT, or snip anything and send it to ChatGPT, at your fingertips.
This is the official repository for HypoGeniC (Hypothesis Generation in Context), an automated, data-driven tool that leverages large language models to generate hypotheses for open-domain research. For more details, please see the original paper linked below.
AI-powered Twitter bio generator built using Next.js and Groq | Shadcn | Llama 3
Open Interview automates technical Q&A generation from resumes, offers document and audio outputs, and customizable settings for efficient interview prep.
AI-powered social media bio generator built using Next.js and Groq | Shadcn | Llama 3
An intelligent work-and-life assistant app developed with Flutter that supports API calls to many large AI models.
Samples showing a Java Spring backend application powered by generative AI and LLMs served through Ollama, using Spring AI.
An innovative Python tool that sifts through Gmail for bank notifications, harnesses OpenAI's GPT-3.5-turbo for insightful analysis, and seamlessly syncs with YNAB, revolutionizing financial tracking for hackers and developers.
A tool for building and managing projects for technical learning, convenient software/tool downloads, and tutorials.
A starter repo on how to create your first simple LLM application using LangChain
An LLM chatbot for designing a pediatric CT protocol.
Automatic translation of i18n messages for your products with AI.
The application uses GPT-3.5 Turbo, making calls to Large Language Models (LLMs) through Portkey for efficient and accurate translations. The project also uses Vite to substitute environment variables and avoids Cross-Origin Resource Sharing (CORS) issues by using the Vite server proxy.
A Flutter app combining minimalist expense tracking, a lucky-wheel random dish picker, and simple AI chat, text-to-image, and image-understanding features.
This repository provides a template for building Large Language Model (LLM) powered microservices using FastAPI. It's designed to help you quickly set up and deploy AI-driven APIs that leverage the power of LLMs like GPT-3, GPT-4, Claude or other similar models.