- NVIDIA
- Warsaw, Poland
- alex0dd.github.io
Starred repositories
- SGLang Omni: High-Performance Multi-Stage Pipeline Framework for Omni Models
- Fastest, smallest, and fully autonomous AI assistant infrastructure written in Zig
- Run OpenClaw more securely inside NVIDIA OpenShell with managed inference
- Hundreds of models & providers. One command to find what runs on your hardware.
- A curated list of awesome LLM and AI Agent Skills, resources, and tools for customising AI Agent workflows - works with Claude Code, Codex, Gemini CLI, and your custom AI Agents
- [ECCV2024] This is an official inference code of the paper "Glyph-ByT5: A Customized Text Encoder for Accurate Visual Text Rendering" and "Glyph-ByT5-v2: A Strong Aesthetic Baseline for Accurate Mu…
- [NeurIPS 2025] 4KAgent: Agentic Any Image to 4K Super-Resolution. An intelligent computer vision agent that can magically restore any image to perfect-4K!
- Docker configuration for running vLLM on dual DGX Sparks
- One-command vLLM installation for NVIDIA DGX Spark with Blackwell GB10 GPUs (sm_121 architecture)
- A framework for efficient model inference with omni-modality models
- The Ultimate Linux micro distribution written in JavaScript! A very functional minimal userspace for Linux written in... pure JavaScript! Not quite, but almost. It's good, I promise!
- FlashInfer: Kernel Library for LLM Serving
- Native and Compact Structured Latents for 3D Generation
- Helpful kernel tutorials and examples for tile-based GPU programming
- cuTile is a programming model for writing parallel kernels for NVIDIA GPUs
- Wan: Open and Advanced Large-Scale Video Generative Models
- Official inference repo for FLUX.1 models
- Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism
- Collective communications library with various primitives for multi-machine training.
- CUDA Python: Performance meets Productivity
- Index your Gmail Inbox with Elasticsearch
- Open-source library for scalable, reproducible evaluation of AI models and benchmarks.
- Command line tool to create and query container image manifest lists/indexes
- DC-Gen: Post-Training Diffusion Acceleration with Deeply Compressed Latent Space
- [ICLR2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models