Lists (19)
🤖 cognitive agents
🕸️ cyber
💾 data mesh
🏎 embedded systems
♾️ energy based models
🎮 game dev
📓 language models
💽 machine learning
🦾 mathematical optimization
🗺️ model guidance
📯 recommender systems
🕹️ reinforcement learning
🦿 robotics
🧠 second brain
💻 software agent
☄️ space software
🛸 uav
🥽 vision model
🗣️ voice ai
Starred repositories
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
Protocol Buffers - Google's data interchange format
🔍 A Hex Editor for Reverse Engineers, Programmers and people who value their retinas when working at 3 AM.
GoogleTest - Google Testing and Mocking Framework
Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
FoundationDB - the open source, distributed, transactional key-value store
Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk
World's largest Contributor driven code dataset | Used in Quark Search Engine, @OpenGenus IQ, OpenGenus Visual Project
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Speech-to-text, text-to-speech, speaker diarization, speech enhancement, source separation, and VAD using next-gen Kaldi with onnxruntime without Internet connection. Supports embedded systems, Andr…
LostRuins / koboldcpp
Forked from ggml-org/llama.cpp
Run GGUF models easily with a KoboldAI UI. One File. Zero Install.
The POCO C++ Libraries are powerful cross-platform C++ libraries for building network- and internet-based applications that run on desktop, server, mobile, IoT, and embedded systems.
High-speed Large Language Model Serving for Local Deployment
Lightweight, standalone C++ inference engine for Google's Gemma models.
Transformer related optimization, including BERT, GPT
The AI-native database built for LLM applications, providing incredibly fast hybrid search of dense vector, sparse vector, tensor (multi-vector), and full-text.
Fast inference engine for Transformer models
Kernels & AI inference engine for mobile devices.
A machine learning compiler for GPUs, CPUs, and ML accelerators
Turbopilot is an open source large-language-model based code completion engine that runs locally on CPU