  • 1
    c/ua

    c/ua is the Docker Container for Computer-Use AI Agents

    Cua is a Docker-based framework that facilitates the deployment and management of computer-use AI agents. It provides a sandboxed environment where agents can perform tasks on macOS and Linux virtual machines, supporting various AI models and ensuring safe execution. Cua is particularly useful for developers looking to test and run AI agents in controlled settings.
    Downloads: 0 This Week
  • 2
    llama2.c

    Inference Llama 2 in one file of pure C

    llama2.c is a minimalist implementation of the Llama 2 language model architecture designed to run entirely in pure C. Created by Andrej Karpathy, this project offers an educational and lightweight framework for performing inference on small Llama 2 models without external dependencies. It provides a full training and inference pipeline: models can be trained in PyTorch and later executed using a concise 700-line C program (run.c). While it can technically load Meta’s official Llama 2 models...
    Downloads: 0 This Week
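    As an illustrative sketch of the compile-and-run flow described above, driven from Python: this assumes the llama2.c repository has already been cloned and that a small checkpoint such as stories15M.bin (referenced in the project's README) has been downloaded; the paths and arguments are assumptions, not part of this listing.

    ```python
    # Hypothetical driver: build run.c with make, then run inference on a small
    # checkpoint. Assumes the current directory is a llama2.c checkout and that
    # stories15M.bin has been downloaded as described in its README.
    import subprocess

    subprocess.run(["make", "run"], check=True)          # compiles the ./run binary
    result = subprocess.run(
        ["./run", "stories15M.bin"],                     # samples a short story
        check=True, capture_output=True, text=True,
    )
    print(result.stdout)
    ```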
  • 3
    GPT4All

    Run Local LLMs on Any Device. Open-source

    GPT4All is an open-source project that allows users to run large language models (LLMs) locally on their desktops or laptops, eliminating the need for API calls or GPUs. The software provides a simple, user-friendly application that can be downloaded and run on various platforms, including Windows, macOS, and Ubuntu, without requiring specialized hardware. It integrates with the llama.cpp implementation and supports multiple LLMs, allowing users to interact with AI models privately. This...
    Downloads: 144 This Week
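    As a minimal sketch of the local-inference workflow GPT4All describes, using its Python bindings; the model file name below is an assumption taken from the GPT4All model catalog, and it is downloaded on first use.

    ```python
    # Minimal sketch: run a local model through the gpt4all Python bindings.
    # The model file name is an assumption; any catalog model can be substituted.
    from gpt4all import GPT4All

    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # downloaded on first use
    with model.chat_session():
        reply = model.generate(
            "Explain in one sentence why running an LLM locally helps privacy.",
            max_tokens=100,
        )
        print(reply)
    ```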
  • 4

    PaddleOCR

    Awesome multilingual OCR toolkits based on PaddlePaddle

    PaddleOCR offers exceptional, multilingual, and practical Optical Character Recognition (OCR) tools that help users train better models and apply them in practice (see the sketch below). Built on PaddlePaddle, PaddleOCR is an ultra-lightweight OCR system with multilingual recognition, digit recognition, vertical text recognition, and long text recognition. It features the PP-OCR series of high-quality pre-trained models, which includes: ultra lightweight ppocr_mobile series models, general...
    Downloads: 42 This Week
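    A minimal sketch of running the PaddleOCR Python API on a single image, as referenced above; the image path is a placeholder, and the result structure shown matches the PaddleOCR 2.x API, which may differ in newer releases.

    ```python
    # Minimal sketch: detect and recognize text in one image with PaddleOCR.
    # "receipt.png" is a placeholder path; models are downloaded on first run.
    from paddleocr import PaddleOCR

    ocr = PaddleOCR(use_angle_cls=True, lang="en")
    result = ocr.ocr("receipt.png", cls=True)

    for box, (text, confidence) in result[0]:   # detections for the first image
        print(f"{confidence:.2f}  {text}")
    ```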
  • 5
    TensorFlow

    TensorFlow is an open source library for machine learning

    Originally developed by Google for internal use, TensorFlow is an open source platform for machine learning. Available across all common operating systems (desktop, server, and mobile), TensorFlow provides stable APIs for Python and C, as well as APIs for a variety of other languages that are either third party or not guaranteed to be backwards compatible. The platform can be easily deployed on multiple CPUs, GPUs, and Google's proprietary chip, the Tensor Processing Unit (TPU); a minimal usage sketch follows this entry. TensorFlow...
    Downloads: 16 This Week
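    The usage sketch mentioned above: a small, self-contained example of TensorFlow's stable Python API, with toy data chosen purely for illustration.

    ```python
    # Minimal sketch: eager tensor math plus a one-layer Keras model on toy data.
    import tensorflow as tf

    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    print(tf.reduce_sum(x * 2.0))                     # simple eager computation

    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")
    model.fit(x, tf.constant([[3.0], [7.0]]), epochs=10, verbose=0)
    print(model.predict(x, verbose=0))                # predictions for the toy inputs
    ```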
  • 6
    Aider

    Aider is AI pair programming in your terminal

    Aider is an open-source AI pair programming tool that runs directly in your terminal, allowing developers to collaborate with LLMs as if they were coding alongside a senior engineer. It maps your entire codebase to provide deep context, making it effective for both small scripts and large, complex projects. Aider supports leading cloud models like Claude 3.7 Sonnet, DeepSeek R1, OpenAI’s o1/o3/GPT-4o, as well as local LLMs for privacy-conscious workflows. Its Git integration ensures every...
    Downloads: 10 This Week
  • 7
    AlphaFold 3

    AlphaFold 3 inference pipeline

    AlphaFold 3, developed by Google DeepMind, is an advanced deep learning system for predicting biomolecular structures and interactions with exceptional accuracy. This repository provides the complete inference pipeline for running AlphaFold 3, though access to the model parameters is restricted and must be obtained directly from Google under specific terms of use. The system is designed for scientific research applications in structural biology, biochemistry, and bioinformatics, enabling...
    Downloads: 7 This Week
  • 8
    Triton Inference Server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution

    ... real-time, batched, ensembles, and audio/video streaming. It provides a Backend API that allows adding custom backends and pre/post-processing operations, and supports model pipelines using ensembling or Business Logic Scripting (BLS). Its HTTP/REST and GRPC inference protocols are based on the community-developed KServe protocol (see the client sketch below), and a C API and Java API allow Triton to link directly into your application for edge and other in-process use cases.
    Downloads: 7 This Week
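    The client sketch referenced above: a minimal example of the HTTP/REST inference path using the tritonclient Python package. The server address is Triton's default HTTP port, while the model name "my_model" and its input/output tensor names and shape are assumptions that must match the deployed model's configuration.

    ```python
    # Minimal sketch: send one inference request to a running Triton server.
    # Model name, tensor names, and shape are assumptions for illustration.
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    data = np.random.rand(1, 3, 224, 224).astype(np.float32)
    inputs = [httpclient.InferInput("input__0", list(data.shape), "FP32")]
    inputs[0].set_data_from_numpy(data)
    outputs = [httpclient.InferRequestedOutput("output__0")]

    response = client.infer(model_name="my_model", inputs=inputs, outputs=outputs)
    print(response.as_numpy("output__0").shape)
    ```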
  • 9
    torchvision

    Datasets, transforms and models specific to Computer Vision

    ... conda install libpng or any of the package managers for Debian-based and RHEL-based Linux distributions, and libjpeg, which can be installed via conda (conda install jpeg) or any of those same package managers; libjpeg-turbo is supported as well. libpng and libjpeg must be available at compile time in order for the corresponding image-reading features to be included. TorchVision also offers a C++ API that contains the C++ equivalent of the Python models (a Python usage sketch follows this entry).
    Downloads: 6 This Week
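    The Python sketch mentioned above: loading a pretrained torchvision classifier together with its matching preprocessing transforms. The image path is a placeholder, and the weights-enum style shown is the current torchvision idiom (older releases used pretrained=True).

    ```python
    # Minimal sketch: classify one image with a pretrained ResNet-18.
    # "cat.jpg" is a placeholder path; reading it exercises the libjpeg/libpng
    # image backends discussed above.
    import torch
    from torchvision import models
    from torchvision.io import read_image

    weights = models.ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights).eval()
    preprocess = weights.transforms()            # resize/normalize to match the weights

    img = read_image("cat.jpg")                  # uint8 tensor, shape (C, H, W)
    batch = preprocess(img).unsqueeze(0)

    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    print(weights.meta["categories"][int(probs.argmax())])
    ```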
  • 10
    CodeGeeX

    CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)

    CodeGeeX is a large-scale multilingual code generation model with 13 billion parameters, trained on 850B tokens across more than 20 programming languages. Developed with MindSpore and later made PyTorch-compatible, it is capable of multilingual code generation, cross-lingual code translation, code completion, summarization, and explanation. It has been benchmarked on HumanEval-X, a multilingual program synthesis benchmark introduced alongside the model, and achieves state-of-the-art...
    Downloads: 6 This Week
  • 11
    ChatGPT Academic

    ChatGPT extension for scientific research work

    A ChatGPT extension for scientific research work, with a specially optimized experience for polishing academic papers. It supports custom shortcut buttons and custom function plug-ins, Markdown table display, dual rendering of TeX formulas, and complete code display. It also adds local Python/C++/Go project-tree analysis, self-translation of project source code, batch summarization of PDF and Word documents, and full-text translation of PDF papers. All buttons are dynamically...
    Downloads: 4 This Week
  • 12
    BERTopic

    Leveraging BERT and c-TF-IDF to create easily interpretable topics

    BERTopic is a topic modeling technique that leverages transformers and c-TF-IDF to create dense clusters, allowing for easily interpretable topics whilst keeping important words in the topic descriptions. BERTopic supports guided, supervised, semi-supervised, manual, long-document, hierarchical, class-based, dynamic, and online topic modeling, and it even supports visualizations similar to LDAvis (a basic usage sketch follows this entry). For a more detailed overview, you can read...
    Downloads: 4 This Week
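    The basic usage sketch referenced above, following the fit/transform pattern from BERTopic's documentation; the 20 Newsgroups corpus from scikit-learn is used purely as readily available example data, and the default embedding model is downloaded on first use.

    ```python
    # Minimal sketch: fit BERTopic on the 20 Newsgroups corpus and inspect topics.
    from bertopic import BERTopic
    from sklearn.datasets import fetch_20newsgroups

    docs = fetch_20newsgroups(subset="all",
                              remove=("headers", "footers", "quotes")).data

    topic_model = BERTopic(verbose=True)
    topics, probs = topic_model.fit_transform(docs)

    print(topic_model.get_topic_info().head())   # topic sizes and c-TF-IDF labels
    print(topic_model.get_topic(0))              # top words for topic 0
    ```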
  • 13
    openpilot

    Open source driver assistance system

    openpilot is an open-source driver assistance system designed to improve upon the existing driver assistance of most modern cars today. openpilot gives you Tesla Autopilot-like functionality with functions like Adaptive Cruise Control (ACC), Automated Lane Centering (ALC), Forward Collision Warning (FCW) and Lane Departure Warning (LDW). All these with just a push of a button. openpilot also includes a camera-based driver monitoring feature that alerts distracted and asleep drivers while it...
    Downloads: 4 This Week
  • 14
    Megatron

    Ongoing research training transformer models at scale

    ... and training sophisticated natural language processing models with billions or trillions of parameters.
    Downloads: 4 This Week
  • 15
    fairseq2

    FAIR Sequence Modeling Toolkit 2

    fairseq2 is a modern, modular sequence modeling framework developed by Meta AI Research as a complete redesign of the original fairseq library. Built from the ground up for scalability, composability, and research flexibility, fairseq2 supports a broad range of language, speech, and multimodal content generation tasks, including instruction fine-tuning, reinforcement learning from human feedback (RLHF), and large-scale multilingual modeling. Unlike the original fairseq—which evolved into a...
    Downloads: 2 This Week
  • 16
    Pedalboard

    A Python library for audio

    pedalboard is a Python library for working with audio: reading, writing, rendering, adding effects, and more. It supports the most popular audio file formats and a number of common audio effects out of the box and also allows the use of VST3® and Audio Unit formats for loading third-party software instruments and effects. pedalboard was built by Spotify’s Audio Intelligence Lab to enable using studio-quality audio effects from within Python and TensorFlow. Internally at Spotify, pedalboard...
    Downloads: 2 This Week
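    A minimal sketch of the read-process-write flow pedalboard is built for; the file names are placeholders, and the effect parameters are arbitrary illustrative values.

    ```python
    # Minimal sketch: read a WAV file, apply compression and reverb, write it out.
    # "input.wav" and "output.wav" are placeholder paths.
    from pedalboard import Pedalboard, Compressor, Reverb
    from pedalboard.io import AudioFile

    board = Pedalboard([Compressor(threshold_db=-20, ratio=4), Reverb(room_size=0.3)])

    with AudioFile("input.wav") as f:
        audio = f.read(f.frames)                 # float32 array, (channels, samples)
        samplerate = f.samplerate

    processed = board(audio, samplerate)

    with AudioFile("output.wav", "w", samplerate, processed.shape[0]) as f:
        f.write(processed)
    ```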
  • 17
    fastdup

    An unsupervised and free tool for image and video dataset analysis

    fastdup is a powerful free tool designed to rapidly extract valuable insights from your image and video datasets. It helps you increase the quality of your dataset images and labels and reduce your data operations costs at unparalleled scale.
    Downloads: 2 This Week
  • 18
    Jittor

    Jittor is a high-performance deep learning framework

    ... learning, etc. The front-end language is Python, using module design and dynamic graph execution, the most popular interface design for deep learning frameworks. The back-end is implemented in high-performance languages such as CUDA and C++. Jittor's op interface is similar to NumPy: for example, you can create Vars a and b via the jt.float32 operation and add them, and printing those variables shows they have the same shape and dtype (a sketch of this follows the entry).
    Downloads: 2 This Week
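    The sketch referenced in the description above: creating two Vars via jt.float32 and adding them (this assumes the jittor package is installed and simply mirrors the operation the entry describes).

    ```python
    # Minimal sketch: create two Jittor Vars from NumPy arrays and add them.
    # The inputs and their sum share the same shape and dtype.
    import numpy as np
    import jittor as jt

    a = jt.float32(np.random.rand(3, 3))
    b = jt.float32(np.random.rand(3, 3))
    c = a + b

    for name, var in (("a", a), ("b", b), ("c", c)):
        print(name, var.shape, var.dtype)
    ```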
  • 19
    SHAP

    A game theoretic approach to explain the output of ml models

    SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. While SHAP can explain the output of any machine learning model, we have developed a high-speed exact algorithm for tree ensemble methods. Fast C++ implementations are supported for XGBoost, LightGBM, CatBoost, scikit-learn and pyspark...
    Downloads: 2 This Week
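    A minimal sketch of the fast tree-ensemble path described above, using shap.TreeExplainer with an XGBoost classifier; the dataset and model hyperparameters are illustrative choices, not anything prescribed by SHAP.

    ```python
    # Minimal sketch: explain an XGBoost classifier with SHAP's tree algorithm.
    import shap
    import xgboost
    from sklearn.datasets import load_breast_cancer

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = xgboost.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

    explainer = shap.TreeExplainer(model)        # exact, high-speed for tree models
    shap_values = explainer.shap_values(X)

    # Per-feature attributions (in log-odds) for the first prediction
    print(dict(zip(X.columns, shap_values[0])))
    ```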
  • 20
    Kaldi

    kaldi-asr/kaldi is the official location of the Kaldi project

    Kaldi is an open source toolkit for speech recognition research. It provides a powerful framework for building state-of-the-art automatic speech recognition (ASR) systems, with support for deep neural networks, Gaussian mixture models, hidden Markov models, and other advanced techniques. The toolkit is widely used in both academia and industry due to its flexibility, extensibility, and strong community support. Kaldi is designed for researchers who need a highly customizable environment to...
    Downloads: 1 This Week
  • 21
    Diplomacy Cicero

    Code for Cicero, an AI agent that plays the game of Diplomacy

    ... focused on no-press diplomacy) as described in the README. The codebase is implemented primarily in Python with performance-critical components in C++ (via pybind11 bindings) and is configured to run in a high‐GPU cluster environment. Configuration is managed via protobuf files to define tasks such as self-play, benchmark agent comparisons, and RL training. The project is now archived and read-only, reflecting that it is no longer actively developed but remains publicly available for research use.
    Downloads: 1 This Week
  • 22
    BitNet

    Inference framework for 1-bit LLMs

    BitNet (bitnet.cpp) is a high-performance inference framework designed to optimize the execution of 1-bit large language models, making them more efficient for edge devices and local deployment. The framework offers significant speedups and energy reductions, achieving up to 6.17x faster performance on x86 CPUs and around 70% energy savings, making it possible to run models such as a 100B BitNet b1.58 model with impressive efficiency. With support for lossless inference and enhanced processing power,...
    Downloads: 1 This Week
  • 23
    PaddleSpeech

    Easy-to-use Speech Toolkit including Self-Supervised Learning model

    PaddleSpeech is an open-source toolkit on the PaddlePaddle platform for a variety of critical tasks in speech and audio, with state-of-the-art and influential models. Through an easy-to-use, efficient, flexible, and scalable implementation, it aims to empower both industrial applications and academic research, covering training, inference and testing modules, and the deployment process. It has low barriers to installation, and a CLI, a Server, and a Streaming Server are available to quick-start your journey. We provide...
    Downloads: 1 This Week
  • 24
    Transformer Engine

    A library for accelerating Transformer models on NVIDIA GPUs

    Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your framework-specific code. TE also includes a framework-agnostic C++ API...
    Downloads: 1 This Week
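    A minimal sketch of the PyTorch integration described above: one te.Linear layer run under fp8_autocast, loosely following the quickstart in Transformer Engine's documentation. It assumes an NVIDIA GPU with FP8 support (e.g. Hopper) plus the transformer_engine package, and the layer sizes are illustrative.

    ```python
    # Minimal sketch: run a single Transformer Engine linear layer in FP8.
    # Requires an FP8-capable NVIDIA GPU; dimensions are illustrative only.
    import torch
    import transformer_engine.pytorch as te
    from transformer_engine.common import recipe

    layer = te.Linear(768, 3072, bias=True).cuda()   # parameters on the GPU
    x = torch.randn(16, 768, device="cuda")

    fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)
    with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
        y = layer(x)

    print(y.shape)   # torch.Size([16, 3072])
    ```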
  • 25
    llm.c

    LLM training in simple, raw C/CUDA

    llm.c is a minimalist, systems-level implementation of a small transformer-based language model in C that prioritizes clarity and educational value. By stripping away heavy frameworks, it exposes the core math and memory flows of embeddings, attention, and feed-forward layers. The code illustrates how to wire forward passes, losses, and simple training or inference loops with direct control over arrays and buffers. Its compact design makes it easy to trace execution, profile hotspots...
    Downloads: 0 This Week