13 releases

Uses the Rust 2024 edition

0.21.0-pre.3 Apr 8, 2026
0.21.0-pre.2 Mar 2, 2026
0.21.0-pre.1 Feb 9, 2026
0.20.1 Jan 23, 2026
0.19.0 Oct 28, 2025

#795 in Machine learning


30,989 downloads per month
Used in 104 crates (18 directly)

MIT/Apache

1.5MB
29K SLoC


Burn Store

Advanced model storage and serialization for the Burn deep learning framework


A comprehensive storage library for Burn that enables efficient model serialization, cross-framework interoperability, and advanced tensor management.

Migrating from burn-import? See the Migration Guide for help moving from PyTorchFileRecorder/SafetensorsFileRecorder to the new Store API.
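As a rough before/after illustration of that migration (a hedged sketch: the recorder-based code below reflects the older burn-import API and may differ slightly between versions):

```rust
// Before: recorder API from burn-import (illustrative, version-dependent).
use burn::record::FullPrecisionSettings;
use burn_import::pytorch::{LoadArgs, PyTorchFileRecorder};

let record = PyTorchFileRecorder::<FullPrecisionSettings>::default()
    .load(LoadArgs::new("model.pt".into()), &device)?;
let model = model.load_record(record);

// After: Store API from burn-store.
use burn_store::{ModuleSnapshot, PytorchStore};

let mut store = PytorchStore::from_file("model.pt");
model.load_from(&mut store)?;
```

The Migration Guide covers the SafeTensors path and adapter configuration in the same way.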

Features

  • Burnpack Format - Native Burn format with CBOR metadata, memory-mapped loading, ParamId persistence for stateful training, and no-std support
  • SafeTensors Format - Industry-standard format for secure and efficient tensor serialization
  • PyTorch Support - Direct loading of PyTorch .pth/.pt files with automatic weight transformation
  • Zero-Copy Loading - Memory-mapped files and lazy tensor materialization for optimal performance
  • Flexible Filtering - Load/save specific model subsets with regex, exact paths, or custom predicates
  • Tensor Remapping - Rename tensors during load/save for framework compatibility
  • Half-Precision Storage - Automatic F32/F16 conversion with smart defaults for reduced model file size
  • No-std Support - Burnpack and SafeTensors formats available in embedded and WASM environments
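As a sketch of the filtering and remapping features above (the builder method names `with_regex` and `with_key_remapping` are taken from the crate's documented feature set but should be checked against the current API docs):

```rust
use burn_store::{ModuleSnapshot, SafetensorsStore};

// Load only the encoder subset of a checkpoint, renaming the source
// framework's prefix on the way in. Method names are illustrative.
let mut store = SafetensorsStore::from_file("model.safetensors")
    .with_regex(r"^encoder\..*")                        // keep matching tensors only
    .with_key_remapping(r"^transformer\.", "encoder."); // rename keys during load
model.load_from(&mut store)?;
```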

Quick Start

use burn_store::{
    BurnpackStore, HalfPrecisionAdapter, ModuleSnapshot, PyTorchToBurnAdapter,
    PytorchStore, SafetensorsStore,
};

// Load from PyTorch
let mut store = PytorchStore::from_file("model.pt");
model.load_from(&mut store)?;

// Load from SafeTensors (with PyTorch adapter)
let mut store = SafetensorsStore::from_file("model.safetensors")
    .with_from_adapter(PyTorchToBurnAdapter);
model.load_from(&mut store)?;

// Save to Burnpack
let mut store = BurnpackStore::from_file("model.bpk");
model.save_into(&mut store)?;

// Save with half-precision (F32 -> F16, ~50% smaller files)
let adapter = HalfPrecisionAdapter::new();
let mut store = BurnpackStore::from_file("model_f16.bpk")
    .with_to_adapter(adapter.clone());
model.save_into(&mut store)?;

// Load half-precision back (F16 -> F32, same adapter)
let mut store = BurnpackStore::from_file("model_f16.bpk")
    .with_from_adapter(adapter);
model.load_from(&mut store)?;

Documentation

For comprehensive documentation including:

  • Exporting weights from PyTorch
  • Loading weights into Burn models
  • Saving models to various formats
  • Advanced features (filtering, remapping, partial loading, zero-copy)
  • API reference and troubleshooting

See the Burn Book - Saving and Loading chapter.

Running Benchmarks

# Generate model files (one-time setup)
uv run benches/generate_unified_models.py

# Run loading benchmarks
cargo bench --bench unified_loading

# Run saving benchmarks
cargo bench --bench unified_saving

# With specific backend
cargo bench --bench unified_loading --features metal

License

This project is dual-licensed under MIT and Apache-2.0.

Dependencies

~9–58MB
~1M SLoC