Starred repositories
- On-device AI across mobile, embedded and edge for PyTorch
- An Open Source Machine Learning Framework for Everyone
- Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
- 🌐 Make websites accessible for AI agents. Automate tasks online with ease.
- An open-source AI agent that brings the power of Gemini directly into your terminal.
- Tensors and Dynamic neural networks in Python with strong GPU acceleration
- 🚀 The fast, Pythonic way to build MCP servers and clients
- A Datacenter Scale Distributed Inference Serving Framework
- Code at the speed of thought – Zed is a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.
- 🚀 Fast, secure, lightweight containers based on WebAssembly
- Protocol Buffers - Google's data interchange format
- Custom AI agent platform to speed up your work.
- PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Built by researchers, for research.
- Autonomous coding agent right in your IDE, capable of creating/editing files, executing commands, using the browser, and more with your permission every step of the way.
- CUDA Python: Performance meets Productivity
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. Tensor…
- The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM
- Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
- Langflow is a powerful tool for building and deploying AI-powered agents and workflows.
- An open-source, cross-platform terminal for seamless workflows
- Lightweight coding agent that runs in your terminal