Archived repositories

    • cortex.llamacpp

      Public archive
      cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server at runtime.
      C++
      Updated Jul 4, 2025
    • cortex.cpp

      Public archive
      Local AI API Platform
      C++
      Updated Jul 4, 2025
    • homebrew.ltd

      Public archive
      Homebrew Website
      MDX
      Updated Feb 24, 2025
    • ichigo-package

      Public archive
      pip install ichigo
      Python
      Updated Feb 6, 2025
    • cortex.python

      Public archive
      C++ code that runs Python embedding
      C++
      Updated Oct 17, 2024
    • docs

      Public archive
      Jan.ai Website & Documentation
      MDX
      Updated Oct 14, 2024
    • cortex.so

      Public archive
      TypeScript
      Updated Oct 9, 2024
    • cortex.tensorrt-llm

      Public archive
      Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It includes NVIDIA's TensorRT-LLM as a submodule for GPU-accelerated inference on NVIDIA GPUs.
      C++
      Updated Sep 26, 2024
    • cortex.onnx

      Public archive
      C++
      Updated Aug 12, 2024
    • cortex.js

      Public archive
      The official Node.js / TypeScript library for the OpenAI API
      TypeScript
      Updated Aug 9, 2024
    • cortex.audio

      Public archive
      C++
      Updated Jul 9, 2024
    • homebrew-cortexso

      Public archive
      Ruby
      Updated Jul 5, 2024
    • cortex.py

      Public archive
      The official Python library for the OpenAI API
      Python
      Updated May 20, 2024
    • pymaker

      Public archive
      Make the py
      Updated Apr 9, 2024
    • tensorrtllm_backend

      Public archive
      The Triton TensorRT-LLM Backend
      Python
      Updated Mar 19, 2024
    • triton_tensorrt_llm

      Public archive
      Shell
      Updated Mar 15, 2024
    • openai_trtllm

      Public archive
      OpenAI-compatible API for the TensorRT-LLM Triton backend (see the request sketch after this list)
      Rust
      Updated Mar 15, 2024
    • This reference can be used with any existing OpenAI-integrated app to run TRT-LLM inference locally on a GeForce GPU on Windows instead of in the cloud.
      Python
      Updated Mar 8, 2024
    • architecture

      Public archive
      Updated Feb 28, 2024
    • infinity

      Public archive
      The AI-native database built for LLM applications, providing incredibly fast vector and full-text search
      C++
      Updated Feb 19, 2024
    • langchainjs

      Public archive
      TypeScript
      Updated Feb 19, 2024
    • model-converter

      Public archive
      Python
      Updated Dec 14, 2023
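
Several of the repositories above (openai_trtllm, cortex.cpp, cortex.js, cortex.py) expose or consume an OpenAI-compatible HTTP API. As a rough, non-authoritative sketch — the base URL, port, and model name below are assumptions for illustration, not defaults taken from any of these repositories — a local OpenAI-compatible server is typically queried like this:

    import json
    import urllib.request

    # Assumed local endpoint and model name; check each repository's
    # documentation for its actual defaults.
    BASE_URL = "http://localhost:8000/v1"
    payload = {
        "model": "my-local-model",
        "messages": [{"role": "user", "content": "Say hello."}],
    }

    # POST a standard chat-completions request and print the reply text.
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])

The same request shape works against any server implementing the OpenAI chat-completions endpoint, which is why the client libraries above can be pointed at a local engine instead of the hosted API.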