- stealth-startup
- Houston
- pubsubmusic.com
- in/abramflansburg
Stars
Community maintained hardware plugin for vLLM on Apple Silicon
Build applications that make decisions (chatbots, agents, simulations, etc.). Monitor, trace, persist, and execute on your own infrastructure.
A Rust crate for cooking up terminal user interfaces (TUIs) 👨‍🍳🐀 https://ratatui.rs
🎨 NeMo Data Designer: A general library for generating high-quality synthetic data from scratch or based on seed data.
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflo…
Main repo including core data model, data marts, data quality tests, and terminology sets.
👻 Ghostty is a fast, feature-rich, and cross-platform terminal emulator that uses platform-native UI and GPU acceleration.
MedAgentBench: A Realistic Virtual EHR Environment to Benchmark Medical LLM Agents
The AI framework that adds the engineering to prompt engineering (Python/TS/Ruby/Java/C#/Rust/Go compatible)
90% of what you need for LLM app development. Nothing you don't.
Replace zsh's default completion selection menu with fzf!
An Open Standard for lineage metadata collection
A scalable, distributed, collaborative, document-graph database, for the realtime web
PandaAGI provides a simple, intuitive API for building general AI agents in just a few lines of code
Expose your FastAPI endpoints as Model Context Protocol (MCP) tools, with Auth!
What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
Virtual whiteboard for sketching hand-drawn like diagrams
Provide a conversational assistant that can answer DevOps-related questions. This helps demonstrate how large language models (LLMs) can be applied to real-world knowledge domains (in this case, cl…
🤗 smolagents: a barebones library for agents that think in code.
Python package wrapping llama.cpp for on-device LLM inference