Over 60 figures and diagrams of LLMs, quantization, low-rank adapters (LoRA), and chat templates FREE TO USE in your blog posts, slides, presentations, or papers.
Updated Feb 18, 2025
A flexible utility for converting tensor precision in PyTorch models and safetensors files, enabling efficient deployment across various platforms.
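A minimal sketch of what such a conversion can look like, assuming the `safetensors` and `torch` packages; the function and file names here are hypothetical, not the utility's actual API:

```python
import torch
from safetensors.torch import load_file, save_file

def convert_safetensors(src: str, dst: str, dtype: torch.dtype = torch.bfloat16) -> None:
    """Load every tensor from `src`, cast floating-point tensors to `dtype`,
    and write the result to `dst`. Integer tensors are left untouched."""
    tensors = load_file(src)
    converted = {
        name: t.to(dtype) if t.is_floating_point() else t
        for name, t in tensors.items()
    }
    save_file(converted, dst)

convert_safetensors("model.safetensors", "model-bf16.safetensors")
```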
Converts a floating-point number, or a hexadecimal representation of one, into various formats and displays them in binary/hexadecimal.
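A minimal sketch of this kind of inspection using only the Python standard library (not this tool's own code): BF16 is the top 16 bits of the IEEE 754 single-precision encoding, so truncating an FP32 bit pattern yields a round-toward-zero BF16 view.

```python
import struct

def fp32_bits(x: float) -> int:
    """Return the 32-bit IEEE 754 single-precision pattern of x."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

x = 3.14159
bits32 = fp32_bits(x)
bits16 = bits32 >> 16  # BF16 keeps the sign bit, 8 exponent bits, and top 7 mantissa bits
print(f"FP32: 0x{bits32:08X}  {bits32:032b}")
print(f"BF16: 0x{bits16:04X}  {bits16:016b}")
```

Note that hardware BF16 conversion typically uses round-to-nearest-even rather than the simple truncation shown here.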
Python implementations for multi-precision quantization in computer vision and sensor fusion workloads, targeting the XR-NPE Mixed-Precision SIMD Neural Processing Engine. The code includes visual-inertial odometry (VIO), object classification, and eye-gaze extraction workloads in FP4, FP8, Posit4, Posit8, and BF16 formats.
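A minimal sketch of the general evaluation pattern for such workloads, measuring round-trip error when casting activations down to BF16 with PyTorch; this illustrates the idea only and is not the repository's XR-NPE kernels:

```python
import torch

def roundtrip_error(x: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    """Cast x down to `dtype` and back, returning the mean relative error."""
    x_lp = x.to(dtype).to(x.dtype)
    return ((x - x_lp).abs() / x.abs().clamp_min(1e-12)).mean()

features = torch.randn(1024, 256)  # hypothetical stand-in for VIO / classification activations
print(f"BF16 mean relative error: {roundtrip_error(features, torch.bfloat16):.2e}")
```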
Ultralytics YOLO11 🚀 with Prodigy Optimizer and BF16 precision
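A minimal sketch of pairing the Prodigy optimizer with BF16 autocast in a plain PyTorch loop, assuming the `prodigyopt` package; how this repository actually wires Prodigy into the Ultralytics trainer may differ.

```python
import torch
from prodigyopt import Prodigy

model = torch.nn.Linear(128, 10).cuda()
optimizer = Prodigy(model.parameters(), lr=1.0)  # Prodigy self-tunes its step size; lr=1.0 is the suggested default
loss_fn = torch.nn.CrossEntropyLoss()

for _ in range(10):
    inputs = torch.randn(32, 128, device="cuda")
    targets = torch.randint(0, 10, (32,), device="cuda")
    optimizer.zero_grad()
    # BF16 autocast: no GradScaler needed, since BF16 shares FP32's exponent range.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```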