NVIDIA, Austin, TX

Stars
cuTile Rust provides a safe, tile-based kernel programming DSL for the Rust programming language. It features a safe host-side API for passing tensors to asynchronously executed kernel functions.
ComputeEval: a framework for generating and evaluating CUDA code from Large Language Models
TabICLv2: A state-of-the-art tabular foundation model
21 Lessons, Get Started Building with Generative AI
Helpful kernel tutorials and examples for tile-based GPU programming
cuTile is a programming model for writing parallel kernels for NVIDIA GPUs
A high-performance, zero-overhead, extensible Python compiler with built-in NumPy support
Python for HPC Tutorial Notebooks
No-GIL Python environment featuring NVIDIA Deep Learning libraries.
CUDA Python: Performance meets Productivity
A place to collect questions and discussions concerning PyData repositories
NVIDIA curated collection of educational resources related to general purpose GPU programming.
Ecosystem of libraries and tools for writing and executing fast GPU code fully in Rust.
✊🏾 Diversity & Inclusion in Scientific Computing committee of NumFOCUS
The NumFOCUS DISCOVER Cookbook (Diverse & Inclusive Spaces and Conferences: Overall Vision and Essential Resources), a guide for organizing more diverse and inclusive events and conferences
Tensors and Dynamic neural networks in Python with strong GPU acceleration
A clean, three-column Sphinx theme with Bootstrap for the PyData community
C implementation of nqq (a modified version of clox)
🔮 Instill Core is a full-stack AI infrastructure tool for data, model and pipeline orchestration, designed to streamline every aspect of building versatile AI-first applications
Slides for "Power Users, Long Tail Users, and Everything In Between" Presentation - Dror Guldin and Alon Nir
Policies, Configurations, and Documentation of NumFOCUS Managed Infrastructure
📺 Discover the latest machine learning / AI courses on YouTube.
The easiest way to serve AI apps and models: build model inference APIs, job queues, LLM apps, multi-model pipelines, and more