#synchronization-primitive #async-concurrency #mutex #concurrency-mutex

saa

Word-sized low-level synchronization primitives providing both asynchronous and synchronous interfaces

35 stable releases (3 major)

Uses Rust 2024 edition

5.6.0 May 12, 2026
5.5.0 Feb 16, 2026
5.4.2 Dec 17, 2025
5.3.3 Nov 7, 2025
0.4.0 Aug 22, 2025

#55 in Concurrency


283,705 downloads per month
Used in 132 crates (2 directly)

Apache-2.0

185KB
3.5K SLoC

Synchronous and Asynchronous Synchronization Primitives


Word-sized low-level synchronization primitives providing both asynchronous and synchronous interfaces.

Features

  • No heap allocation.
  • No hidden global variables.
  • Provides both asynchronous and synchronous interfaces.
  • lock_api support: features = ["lock_api"].
  • Loom support: features = ["loom"].
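Either optional feature is enabled from the dependent crate's Cargo.toml; a minimal sketch (the version requirement shown is illustrative):

```toml
[dependencies]
# `lock_api` adds the Mutex/RwLock wrappers; `loom` is for model
# testing and automatically disables `lock_api` (see the lock_api
# section below).
saa = { version = "5", features = ["lock_api"] }
```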

Lock

saa::Lock is a low-level shared-exclusive lock providing both asynchronous and synchronous interfaces. Synchronous locking methods such as lock_sync and share_sync can be used simultaneously with their asynchronous counterparts lock_async and share_async. saa::Lock implements an allocation-free fair wait queue shared between both synchronous and asynchronous methods.

use saa::{Lock, TryLockError};

// At most `62` concurrent shared owners are allowed.
assert_eq!(Lock::MAX_SHARED_OWNERS, 62);

let lock = Lock::default();

// Exclusive acquisition succeeds; further attempts would block.
assert!(lock.try_lock().is_ok());
assert_eq!(lock.try_lock(), Err(TryLockError::WouldBlock));
assert_eq!(lock.try_share(), Err(TryLockError::WouldBlock));

// The lock is held exclusively, so only `release_lock` succeeds.
assert!(!lock.release_share());
assert!(lock.release_lock());

assert!(lock.lock_sync());

// `Lock` can be poisoned.
assert!(lock.poison_lock());
assert!(!lock.lock_sync());
assert!(lock.clear_poison());

async {
    assert!(lock.share_async().await);
    assert!(lock.release_share());
    assert!(lock.lock_async().await);
    assert!(lock.release_lock());
};

lock_api support

The lock_api feature is automatically disabled when the loom feature is enabled since loom atomic types cannot be instantiated in const contexts.

#[cfg(all(feature = "lock_api", not(feature = "loom")))]
use saa::{Mutex, RwLock, lock_async, read_async, write_async};

#[cfg(all(feature = "lock_api", not(feature = "loom")))]
fn example() {
    let mutex: Mutex<usize> = Mutex::new(0);
    let rwlock: RwLock<usize> = RwLock::new(0);

    let mut mutex_guard = mutex.lock();
    assert_eq!(*mutex_guard, 0);
    *mutex_guard += 1;
    assert_eq!(*mutex_guard, 1);
    drop(mutex_guard);

    let mut write_guard = rwlock.write();
    assert_eq!(*write_guard, 0);
    *write_guard += 1;
    drop(write_guard);

    let read_guard = rwlock.read();
    assert_eq!(*read_guard, 1);
    drop(read_guard);

    async {
        let mutex_guard = lock_async(&mutex).await;
        assert_eq!(*mutex_guard, 1);
        drop(mutex_guard);

        let mut write_guard = write_async(&rwlock).await;
        *write_guard += 1;
        drop(write_guard);

        let reader_guard = read_async(&rwlock).await;
        assert_eq!(*reader_guard, 2);
        drop(reader_guard);
    };
}

Barrier

saa::Barrier is a synchronization primitive that enables multiple tasks to start execution at the same time.

use std::sync::Arc;
use std::thread;

use saa::Barrier;

// At most `63` concurrent tasks/threads can be synchronized.
assert_eq!(Barrier::MAX_TASKS, 63);

let barrier = Arc::new(Barrier::with_count(8));

let mut threads = Vec::new();

for _ in 0..8 {
    let barrier = barrier.clone();
    threads.push(thread::spawn(move || {
        for _ in 0..4 {
            barrier.wait_sync();
        }
    }));
}

for thread in threads {
    thread.join().unwrap();
}

Semaphore

saa::Semaphore is a synchronization primitive that allows a fixed number of tasks or threads to concurrently access a resource.

use saa::Semaphore;

// At most `63` concurrent tasks/threads can be synchronized.
assert_eq!(Semaphore::MAX_PERMITS, 63);

let semaphore = Semaphore::default();

semaphore.acquire_many_sync(Semaphore::MAX_PERMITS - 1);

assert!(semaphore.try_acquire());
assert!(!semaphore.try_acquire());

assert!(semaphore.release());
assert!(!semaphore.release_many(Semaphore::MAX_PERMITS));
assert!(semaphore.release_many(Semaphore::MAX_PERMITS - 1));

async {
    semaphore.acquire_async().await;
    assert!(semaphore.release());
};

Gate

saa::Gate is an unbounded barrier that can be opened or sealed manually as needed.

use std::sync::Arc;
use std::thread;

use saa::Gate;
use saa::gate::State;

let gate = Arc::new(Gate::default());

let mut threads = Vec::new();

for _ in 0..4 {
    let gate = gate.clone();
    threads.push(thread::spawn(move || {
        assert_eq!(gate.enter_sync(), Ok(State::Controlled));
    }));
}

let mut count = 0;
// Keep permitting until all four waiting threads have been let
// through; `permit` returns how many waiters were released.
while count != 4 {
    if let Ok(n) = gate.permit() {
        count += n;
    }
}

for thread in threads {
    thread.join().unwrap();
}

Pager

saa::Pager enables waiting for a resource to become available from anywhere in the program.

use std::pin::pin;

use saa::{Gate, Pager};
use saa::gate::State;

let gate = Gate::default();

let mut pinned_pager = pin!(Pager::default());

assert!(gate.register_pager(&mut pinned_pager, true));
assert_eq!(gate.open().1, 1);

assert_eq!(pinned_pager.poll_sync(), Ok(State::Open));

Notes

Using synchronous methods in an asynchronous context may lead to deadlocks. Consider a scenario where an asynchronous runtime uses two threads to execute three tasks.

  • ThreadId(0): task-0 is share-waiting (pending), while task-1 blocks the thread in a synchronous lock-wait.
  • ThreadId(1): task-2 releases the lock, waking task-0, and then becomes lock-waiting (pending).

In this example, task-0 has logically acquired a shared lock once task-2 releases it; however, task-0 may remain in the task queue indefinitely, because the thread that would poll it, ThreadId(0), is blocked by task-1's synchronous wait. Whether another thread can pick task-0 up depends on the task scheduling policy.

Changelog

Dependencies

~0–1.8MB
~21K SLoC