Stars
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
Autonomous coding agent right in your IDE, capable of creating/editing files, executing commands, using the browser, and more with your permission every step of the way.
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
An open-source AI agent that brings the power of Gemini directly into your terminal.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
An Open Source Machine Learning Framework for Everyone
Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
A high-throughput and memory-efficient inference and serving engine for LLMs
JDK main-line development https://openjdk.org/projects/jdk
A machine learning compiler for GPUs, CPUs, and ML accelerators
Lightweight coding agent that runs in your terminal
AI agents, MCPs, and AI workflow automation, with ~400 MCP servers for AI agents
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
🔥 The Web Data API for AI - Turn entire websites into LLM-ready markdown or structured data
IntelliJ IDEA & IntelliJ Platform
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Production-Grade Container Scheduling and Management
OpenVINO™ is an open source toolkit for optimizing and deploying AI inference
The FreeBSD src tree publish-only repository. Experimenting with 'simple' pull requests...
AWS MCP Servers — helping you get the most out of AWS, wherever you use MCP.
A lightweight, powerful framework for multi-agent workflows
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs.
🦜🔗 The platform for reliable agents.