🎯 Focusing
  • Paris, France
  • 15:47 (UTC +01:00)

Organizations

@NVIDIA

Showing results

NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process communication and coordination overheads by allowing programmer…

C++ 417 48 Updated Nov 13, 2025
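As a rough illustration of the one-sided programming model the NVSHMEM description refers to, here is a minimal ring-shift sketch. It only uses publicly documented NVSHMEM calls (nvshmem_init, nvshmem_malloc, nvshmem_int_p, nvshmemx_barrier_all_on_stream); the ring-shift pattern, build setup (nvcc plus libnvshmem), and one-GPU-per-PE assumption are illustrative assumptions, not taken from this listing.

```cpp
// Minimal NVSHMEM ring-shift sketch (assumption: compiled with nvcc,
// linked against libnvshmem, one GPU per PE).
#include <cstdio>
#include <cuda_runtime.h>
#include <nvshmem.h>
#include <nvshmemx.h>

// Each PE writes its rank directly into the symmetric buffer of the next PE;
// the one-sided put replaces an explicit send/receive pair.
__global__ void ring_shift(int *destination) {
    int mype = nvshmem_my_pe();
    int npes = nvshmem_n_pes();
    int peer = (mype + 1) % npes;
    nvshmem_int_p(destination, mype, peer);
}

int main() {
    nvshmem_init();

    // Bind this PE to a GPU on its node.
    int mype_node = nvshmem_team_my_pe(NVSHMEMX_TEAM_NODE);
    cudaSetDevice(mype_node);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Symmetric allocation: every PE allocates the same buffer at the same
    // symmetric address, so remote puts need no address exchange.
    int *destination = static_cast<int *>(nvshmem_malloc(sizeof(int)));

    ring_shift<<<1, 1, 0, stream>>>(destination);
    nvshmemx_barrier_all_on_stream(stream);  // order the remote put before the local read

    int msg = -1;
    cudaMemcpyAsync(&msg, destination, sizeof(int), cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);
    printf("PE %d received %d\n", nvshmem_my_pe(), msg);

    nvshmem_free(destination);
    nvshmem_finalize();
    return 0;
}
```

Launching is done with the launcher bundled with the NVSHMEM install (e.g. nvshmrun) or an MPI launcher, depending on how the library was built.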

NVIDIA Inference Xfer Library (NIXL)

C++ 773 208 Updated Dec 18, 2025

A Datacenter Scale Distributed Inference Serving Framework

Rust 5,653 748 Updated Dec 19, 2025

Official inference framework for 1-bit LLMs

Python 24,464 1,913 Updated Jun 3, 2025

SYCL implementation of Fused MLPs for Intel GPUs

C++ 49 11 Updated Nov 24, 2025

C++ 61 20 Updated Dec 18, 2024

Grok open release

Python 50,572 8,373 Updated Aug 30, 2024

An innovative library for efficient LLM inference via low-bit quantization

C++ 351 39 Updated Aug 30, 2024

Real-time human detection and tracking camera using YOLOv5 and Arduino

Python 11 4 Updated Nov 26, 2023

Official inference library for Mistral models

Jupyter Notebook 10,603 1,000 Updated Nov 21, 2025

SPEAR: A Simulator for Photorealistic Embodied AI Research

C++ 290 25 Updated Nov 20, 2025

Intel XeSS SDK

C 889 53 Updated Nov 13, 2025

SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime

Python 2,549 286 Updated Dec 19, 2025

⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel platforms ⚡

Python 2,170 216 Updated Oct 8, 2024

Intel® Extension for TensorFlow*

C++ 350 45 Updated Oct 29, 2025

An Open Framework for Federated Learning.

Python 822 233 Updated Dec 1, 2025