🎯 Focusing
  • Paris, France
  • 09:08 (UTC +02:00)

Organizations

@NVIDIA

Repositories

NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process communication and coordination overheads by allowing programmer…

C++ 494 67 Updated Mar 24, 2026

NVIDIA Inference Xfer Library (NIXL)

C++ 960 278 Updated Mar 28, 2026

A Datacenter Scale Distributed Inference Serving Framework

Rust 6,432 968 Updated Mar 30, 2026

Official inference framework for 1-bit LLMs

Python 36,838 3,208 Updated Mar 10, 2026

SYCL implementation of Fused MLPs for Intel GPUs

C++ 50 11 Updated Nov 24, 2025

Grok open release

Python 51,515 8,468 Updated Aug 30, 2024

An innovative library for efficient LLM inference via low-bit quantization

C++ 352 38 Updated Aug 30, 2024
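The core idea behind low-bit quantization libraries like this one can be sketched generically. Below is a minimal symmetric per-tensor INT8 round-trip in NumPy; it is an illustrative sketch of the technique, not this repository's actual API (the function names here are made up for the example):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map floats into [-127, 127]."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 2.4], dtype=np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)
# Per-element quantization error is bounded by half the scale step
assert np.max(np.abs(recovered - weights)) <= scale / 2 + 1e-6
```

Storing weights as INT8 plus one float scale cuts memory roughly 4x versus FP32; real libraries add per-channel scales, calibration, and fused low-bit kernels on top of this basic scheme.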

Real-time human detection and tracking camera using YOLOv5 and Arduino

Python 10 4 Updated Nov 26, 2023

Official inference library for Mistral models

Jupyter Notebook 10,743 1,029 Updated Feb 26, 2026

SPEAR: A Simulator for Photorealistic Embodied AI Research

C++ 318 24 Updated Mar 30, 2026

Intel XeSS SDK

C 932 55 Updated Mar 9, 2026

SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime

Python 2,608 301 Updated Mar 27, 2026

⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel platforms ⚡

Python 2,179 217 Updated Oct 8, 2024

Intel® Extension for TensorFlow*

C++ 352 45 Updated Oct 29, 2025

An Open Framework for Federated Learning.

Python 833 235 Updated Feb 21, 2026