datago

A natively parallel dataloader for Python, written in Rust. Serving data at GB/s speeds, while covering aspect ratio bucketing, crop and resize for image ML workloads.

A data loader written in Rust, usable as a Python module. It handles several data sources, from local files to webdataset, plus a VectorDB-focused HTTP stack which is soon to be open sourced. It is focused on image data at the moment, but could easily be made more generic.

Datago handles, outside of the Python GIL:

  • per-sample IO
  • deserialization (JPEG and PNG decompression)
  • some optional vision processing (aligning different image payloads)
  • optional serialization

Samples are exposed in the Python scope as native Python objects, using PIL and NumPy base types. Speed will be network dependent, but GB/s throughput is typical. Depending on the front end, datago can be rank and world-size aware, in which case samples are dispatched according to their hash, as sketched below.
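A minimal sketch of what hash-based dispatch means in practice, assuming a simple modulo scheme (this illustrates the general idea, not datago's exact hashing):

import hashlib

# Each rank keeps only the samples whose hash lands on it, so the ranks
# collectively cover the dataset exactly once, with no overlap.
def belongs_to_rank(sample_id: str, rank: int, world_size: int) -> bool:
    digest = int(hashlib.md5(sample_id.encode()).hexdigest(), 16)
    return digest % world_size == rank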

Datago organization

Use it

You can simply install datago with pip install datago (or uv pip install datago).

Use the package from Python

Please note that in all of the following cases, you can directly get a torch-compatible IterableDataset with the following code snippet:

from dataset import DatagoIterDataset
client_config = {} # See below for examples
datago_dataset = DatagoIterDataset(client_config, return_python_types=True)

return_python_types enforces that images are returned as PIL.Image objects, for instance; datago being an external binary module should stay transparent to the user.
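As a usage sketch, the wrapper behaves like any torch IterableDataset, so it can be iterated directly or handed to a DataLoader (continuing from the snippet above):

from torch.utils.data import DataLoader

# Iterate directly; with return_python_types=True, images arrive as PIL.Image objects
for i, sample in enumerate(datago_dataset):
    if i >= 10:
        break

# Or wrap it in a torch DataLoader. batch_size=None keeps samples un-collated,
# which avoids torch trying to stack PIL images.
loader = DataLoader(datago_dataset, batch_size=None)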

Dataroom
from datago import DatagoClient, initialize_logging
import os
import json

# Respects RUST_LOG=INFO env var for setting log level
# If omitted, the logger will be initialized when the client starts.
initialize_logging()

config = {
    "source_config": {
        "sources": os.environ.get("DATAROOM_TEST_SOURCE", ""),
        "page_size": 500,
        "rank": 0,
        "world_size": 1,
    },
    "limit": 200,
    "samples_buffer_size": 32,
}

client = DatagoClient(json.dumps(config))

for _ in range(10):
    sample = client.get_sample()

Please note that the image buffers are passed around as raw pointers; see below for the Python utilities we provide to convert them to PIL types.

Local files

To test datago while serving local files (jpg, png, ...), the code would look like the following. Note that because datago serves files from many concurrent threads, there will be some randomness in the sample ordering even if random_sampling is not set.

from datago import DatagoClient, initialize_logging
import os
import json

# Can also set the log level directly instead of using RUST_LOG env var
initialize_logging(log_level="warn")

config = {
    "source_type": "file",
    "source_config": {
        "root_path": "myPath",
        "random_sampling": False, # True if used directly for training
        "rank": 0, # Optional, distributed workloads are possible
        "world_size": 1,
    },
    "limit": 200,
    "samples_buffer_size": 32,
}

client = DatagoClient(json.dumps(config))

for _ in range(10):
    sample = client.get_sample()

[experimental] Webdataset

Please note that this implementation is very new and probably still has significant limitations; it has not yet been tested at scale. Also note that a more complete example lives in /python/benchmark_webdataset.py, which shows how to convert everything to more pythonic types (PIL images).

from datago import DatagoClient, initialize_logging
import os
import json

# Can also set the log level directly instead of using RUST_LOG env var
initialize_logging(log_level="warn")

# URL of the test bucket
bucket = "https://storage.googleapis.com/webdataset/fake-imagenet"
dataset = "/imagenet-train-{000000..001281}.tar"
url = bucket + dataset

client_config = {
    "source_type": "webdataset",
    "source_config": {
        "url": url,
        "random_sampling": False,
        "max_concurrency": 8, # The number of TarballSamples which should be handled concurrently
        "rank": 0,
        "world_size": 1,
    },
    "prefetch_buffer_size": 128,
    "samples_buffer_size": 64,
    "limit": 1_000_000, # Dummy example, max number of samples you would like to serve
}

client = DatagoClient(json.dumps(client_config))

for _ in range(10):
    sample = client.get_sample()
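The {000000..001281} part of the URL is a brace range over tar shards, following the webdataset convention; it expands to one URL per shard. A quick illustration of the expansion:

# The brace range above covers shards 000000 through 001281 inclusive,
# i.e. imagenet-train-000000.tar, imagenet-train-000001.tar, ...
shards = [f"imagenet-train-{i:06d}.tar" for i in range(1282)]
print(len(shards))  # 1282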

Process images on the fly

Datago can also process images on the fly, for instance to align different image payloads. This is done by adding an image_config section to the configuration, as in the example below.

Processing can be very CPU heavy, but it is distributed over all CPU cores without requiring multiple Python processes. That is, you can keep a single Python process calling get_sample() on the client and still saturate all CPU cores.

There are three main processing topics that you can choose from:

  • crop the images to fit within an aspect ratio bucket (very handy for Transformer / patch-based architectures)
  • resize the images (the setting refers to the square aspect ratio bucket; other buckets will differ accordingly)
  • pre-encode the images to a specific format (jpg, png, ...)

config = {
    "source_type": "file",
    "source_config": {
        "root_path": "myPath",
        "random_sampling": False, # True if used directly for training
    },
    # Optional pre-processing of the images, placing them in an aspect ratio bucket to preserve as much as possible of the original content
    "image_config": {
        "crop_and_resize": True, # False to turn it off, or just omit this part of the config
        "default_image_size": 1024,
        "downsampling_ratio": 32,
        "min_aspect_ratio": 0.5,
        "max_aspect_ratio": 2.0,
        "pre_encode_images": False,
    },
    "limit": 200,
    "samples_buffer_size": 32,
}
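For intuition, the bucket shapes follow from the config above: sides are multiples of downsampling_ratio, the area stays close to default_image_size squared, and the aspect ratio is clamped to [min_aspect_ratio, max_aspect_ratio]. A rough sketch of how such buckets could be enumerated (this mirrors the general idea, not datago's exact internal logic):

# Hypothetical enumeration of aspect ratio buckets: widths step over
# multiples of the downsampling ratio, heights are chosen to keep the
# area near default_image_size**2, and out-of-range ratios are dropped.
def candidate_buckets(default_image_size=1024, downsampling_ratio=32,
                      min_ar=0.5, max_ar=2.0):
    target_area = default_image_size ** 2
    buckets = []
    for w in range(downsampling_ratio, 2 * default_image_size + 1, downsampling_ratio):
        h = round(target_area / w / downsampling_ratio) * downsampling_ratio
        if h > 0 and min_ar <= w / h <= max_ar:
            buckets.append((w, h))
    return buckets

print(candidate_buckets())  # e.g. (736, 1440), ..., (1024, 1024), ..., (1440, 736)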

Match the raw exported buffers with typical Python types

See the helper functions provided in raw_types.py; they should be self-explanatory. Check the Python benchmarks for examples. As mentioned above, we also provide a wrapper so that you get a dataset directly.
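As a sketch of what such a conversion looks like (the field names below are illustrative, not raw_types.py's exact API):

import numpy as np
from PIL import Image

# Hypothetical conversion: wrap a raw image buffer plus its shape metadata
# into a NumPy array, then a PIL image. Check raw_types.py for the actual
# field names and helpers.
def buffer_to_pil(data: bytes, width: int, height: int, channels: int) -> Image.Image:
    array = np.frombuffer(data, dtype=np.uint8).reshape(height, width, channels)
    if channels == 1:
        array = array[:, :, 0]  # PIL infers mode "L" from a 2D array
    return Image.fromarray(array)  # RGB for 3 channels, RGBA for 4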

Logging

We are using the log crate with env_logger. You can set the log level using the RUST_LOG environment variable. E.g. RUST_LOG=INFO.

When using the library from Python, env_logger is initialized automatically when creating a DatagoClient. There is also an initialize_logging function in the datago module which, if called before using a client, lets you customize the log level. This only works if RUST_LOG is not set.

Env variables

There are a couple of env variables which change the behavior of the library, for settings which felt too low level to be exposed in the config. A short sketch of setting them from Python follows the list.

  • DATAGO_MAX_TASKS: the number of threads used to load the samples. Defaults to a multiple of the CPU cores.
  • RUST_LOG: see above; changes the log level for the whole library, which can be useful for debugging or when reporting an issue here.
  • DATAGO_MAX_RETRIES: number of retries for a failed sample load, defaults to 3.
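These need to be set before the client is created; the values below are purely illustrative:

import os

# Must be set before DatagoClient is constructed, since the thread pool
# and logger are configured at client creation time.
os.environ["DATAGO_MAX_TASKS"] = "16"   # cap the loading thread pool
os.environ["DATAGO_MAX_RETRIES"] = "5"  # retry failed sample loads more aggressively
os.environ["RUST_LOG"] = "info"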

Build it

Preamble

Just install the Rust toolchain via rustup.

[Apple Silicon macOS only]

If you are using an Apple Silicon macOS machine, create a .cargo/config file and paste the following:

[target.x86_64-apple-darwin]
rustflags = [
  "-C", "link-arg=-undefined",
  "-C", "link-arg=dynamic_lookup",
]

[target.aarch64-apple-darwin]
rustflags = [
  "-C", "link-arg=-undefined",
  "-C", "link-arg=dynamic_lookup",
]

Build a benchmark CLI

cargo run --release -- -h will list all the options; it should be fairly straightforward.

Run the rust test suite

From the datago folder

cargo test

Generate the python package binaries manually

Build a wheel usable locally

maturin build -i python3.11 --release --target "x86_64-unknown-linux-gnu"

Build a wheel which can be uploaded to pypi or related

  • either use a manylinux docker image

  • or cross compile using zig

maturin build -i python3.11 --release --target "x86_64-unknown-linux-gnu" --manylinux 2014 --zig

Then you can pip install the resulting wheel from target/wheels.

Update the pypi release (maintainers)

Create a new tag and a new release in this repo; a new package will be pushed automatically.

Benchmarks

As usual, benchmarks are a tricky game, and you shouldn't read too much into the following plots but rather do your own tests. Some Python benchmark examples are provided in the ./python/ folder.

In general, datago will be impactful if you want to load a lot of images very fast; if you consume them as you go at a more leisurely pace, it is not really needed. The more CPU work there is on the images and the higher their quality, the more datago will shine. The following benchmarks use ImageNet 1k, which is very low resolution and thus something of a worst-case scenario. Data is served from the OS cache and the images are not pre-processed. In this case the receiving Python process is typically the bottleneck, and caps out at around 2000 images per second.

[Benchmark plot] AMD Zen3 laptop & M2 SSD - IN1k - disk

[Benchmark plot] AMD EPYC 9454 - IN1k - disk

This benchmark uses the PD12M dataset, a 12M-image dataset containing many high resolution images. It is accessed through the webdataset front end, and datago is compared with the popular Python webdataset library. Note that datago starts streaming images faster here (almost instantly!), so given enough time the two results would look closer.

[Benchmark plot] AMD EPYC 9454 - PD12M - webdataset

License

MIT License

Copyright (c) 2025 Photoroom

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
