Nvidia GmbH (Aachen)

Stars
The NVIDIA TensorRT RTX Execution Provider (EP) is an inference deployment solution designed specifically for NVIDIA RTX GPUs, optimized for client-centric use cases.
NVIDIA FastGen: Fast Generation from Diffusion Models
MLPerf Client is a benchmark for Windows, Linux, and macOS, focusing on client form factors in ML inference scenarios.
Official inference repo for FLUX.1 models
Tool to statically recompile N64 games into native executables
Generative AI extensions for onnxruntime
A developer reference project for creating Retrieval Augmented Generation (RAG) chatbots on Windows using TensorRT-LLM
Official Code for Stable Cascade
Modern C++ Programming Course (C++03/11/14/17/20/23/26)
The NVIDIA® Tools Extension SDK (NVTX) is a C-based Application Programming Interface (API) for annotating events, code ranges, and resources in your applications.
TensorRT Extension for Stable Diffusion Web UI
The Free and Open Source Cross Platform YUV Viewer with an advanced analytics toolset
Stable Diffusion web UI
The #1 most feature-rich GPT wrapper for git: generate commit messages with an LLM in one second. Works with Claude, GPT, and every other provider, and supports local Ollama models too.
Sample showing OpenGL and CUDA interop
C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows
CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision.
GGRS is a reimagination of GGPO, enabling P2P rollback networking in Rust. Rollback to the future!
A VSCode extension that allows you to use ChatGPT
AI-related samples made available by the DevTech ProViz team
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
GPU and accelerator process monitoring for AMD, Apple, Huawei, Intel, NVIDIA, and Qualcomm
Extended and advanced applications to the Rivermax SDK (Networking SDK for Media and Data Streaming).