A custom async runtime for Rust featuring a priority-based task queue system. This runtime provides fine-grained control over task execution with separate high and low priority queues, allowing you to build efficient concurrent applications with custom scheduling semantics.
- Priority-based Task Queues: Separate high and low priority queues for task scheduling
- Configurable Worker Threads: Customize the number of worker threads for each priority level
- Custom HTTP Connector: Built-in HTTP/HTTPS client integration with Hyper
- Macro-based API: Convenient macros for spawning tasks and joining futures
- Thread-safe Execution: Safe concurrent task execution with proper synchronization
```
async_runtime/
├── src/
│   ├── main.rs           # Example usage and entry point
│   ├── config_runtime/   # Runtime configuration and initialization
│   ├── queue/            # Priority queue implementation
│   ├── connector/        # Custom HTTP connector for Hyper
│   ├── macros/           # Convenience macros for task spawning
│   └── examples/         # Example futures and async implementations
└── Cargo.toml
```
The runtime maintains two separate task queues:
- High Priority Queue: Tasks scheduled here are processed first by dedicated high-priority worker threads
- Low Priority Queue: Tasks scheduled here are processed when high-priority queues are empty
Worker threads intelligently fall back to the other queue when their primary queue is empty, ensuring efficient resource utilization. When both queues are empty, workers back off briefly to avoid busy-waiting.
- Rust 1.85 or later (required for edition 2024)
- Cargo (Rust's package manager)
- Clone the repository:

  ```bash
  git clone <your-repo-url>
  cd runtime
  ```

- Build the project:

  ```bash
  cd async_runtime
  cargo build --release
  ```

Basic usage:

```rust
use async_runtime::{Runtime, spawn_tasks, FuturePriority};
use futures_lite::future;

fn main() {
    // Initialize runtime with custom worker thread counts
    let runtime = Runtime::new()
        .with_high_nums(3) // 3 high-priority worker threads
        .with_low_nums(1); // 1 low-priority worker thread
    runtime.run();

    // Spawn a high-priority task
    let task = spawn_tasks!(async {
        println!("High priority task executed!");
    }, FuturePriority::High);

    // Wait for the task to complete
    future::block_on(task);
}
```

The runtime includes a custom HTTP connector that integrates seamlessly with Hyper:
```rust
use async_runtime::{Runtime, connector::fetch, spawn_tasks, FuturePriority};
use futures_lite::future;
use http::request;
use hyper::Body;

fn main() {
    let runtime = Runtime::new().with_high_nums(3).with_low_nums(1);
    runtime.run();

    let future = async {
        let request = request::Request::get("https://example.com")
            .body(Body::empty())
            .unwrap();
        let response = fetch(request).await.unwrap();
        // Process response...
    };

    let task = spawn_tasks!(future, FuturePriority::High);
    future::block_on(task);
}
```

The `Runtime` struct provides several configuration options:
- `Runtime::new()`: Creates a runtime with default settings (number of cores minus 2 for high priority, 1 for low priority)
- `with_high_nums(n)`: Sets the number of high-priority worker threads
- `with_low_nums(n)`: Sets the number of low-priority worker threads
- `run()`: Initializes the runtime and spawns worker threads
Tasks can be spawned using the spawn_tasks! macro:
```rust
// Spawn with default (low) priority
let task = spawn_tasks!(async { /* ... */ });

// Spawn with explicit priority
let task = spawn_tasks!(async { /* ... */ }, FuturePriority::High);
```

Use the `join!` macro to wait for multiple futures:
```rust
use async_runtime::join;

let task1 = spawn_tasks!(async { 1 }, FuturePriority::High);
let task2 = spawn_tasks!(async { 2 }, FuturePriority::Low);
let results = join!(task1, task2);
```

After building, you can run the example program:

```bash
cargo run --release
```

The example demonstrates:
- Runtime initialization
- HTTP request execution using the custom connector
- Task spawning with priority levels
- `async-task`: Lightweight task spawning
- `futures-lite`: Future utilities
- `flume`: Fast, bounded and unbounded channels
- `hyper`: HTTP client/server library
- `smol`: Small async runtime utilities
- `async-native-tls`: TLS support for async I/O
- `tokio`: Async runtime (used for I/O traits)
The runtime uses a work-stealing approach where:
- High-priority workers primarily process high-priority tasks
- Low-priority workers primarily process low-priority tasks
- Both worker types fall back to the other queue when their primary queue is empty
- Workers back off when no work is available to reduce CPU usage
This design ensures that high-priority tasks get preferential treatment while still maintaining good throughput for lower-priority work.
Contributions are welcome! Please feel free to submit a Pull Request.