girolle
Description
A nameko-rpc-like library in Rust. Check the To-Do section to see the current limitations.
Do not use in production!
Girolle uses the Nameko architecture to send requests and receive responses.
Documentation
User documentation and Rust documentation
Installation
cargo add girolle
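Alternatively, declare the dependency in `Cargo.toml` by hand (the version shown here is illustrative; pick the latest release):

```toml
[dependencies]
girolle = "2.0"
```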
Configuration
There are two ways to create a configuration. The first is to use the `Config::with_yaml_defaults` function, which reads the configuration from a YAML file (see the example below). The second is to build the configuration by hand.
Create a configuration from a yaml file
The configuration is read from a YAML file, which should be compatible with a Nameko one. The file should look like this:

```yaml
AMQP_URI: 'amqp://toto:super@172.16.1.1:5672//'
rpc_exchange: 'nameko-rpc'
max_workers: 10
parent_calls_tracked: 10
```
In this example:

- `AMQP_URI` is the connection string to the RabbitMQ server.
- `rpc_exchange` is the exchange name for the RPC calls.
- `max_workers` is the maximum number of workers created to handle the RPC calls.
- `parent_calls_tracked` is the number of parent calls tracked by the service.
Create a configuration by hand
```rust
let conf = Config::default()
    .with_amqp_uri("amqp://toto:super@localhost:5672/")
    .with_rpc_exchange("nameko-rpc")
    .with_max_workers(10)
    .with_parent_calls_tracked(10);
```
Environment variables
The configuration supports environment-variable expansion using the `${VAR_NAME}` syntax, as in this example:

```yaml
AMQP_URI: 'amqp://${RABBITMQ_USER}:${RABBITMQ_PASSWORD}@${RABBITMQ_HOST}:${RABBITMQ_PORT}/%2f'
rpc_exchange: 'nameko-rpc'
max_workers: 10
parent_calls_tracked: 10
```
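As a sketch of what that expansion produces, the `${VAR_NAME}` syntax behaves like ordinary shell variable expansion. The values below are illustrative, not defaults:

```shell
# Illustrative values; girolle substitutes ${VAR_NAME} in the YAML at load time.
export RABBITMQ_USER=toto
export RABBITMQ_PASSWORD=super
export RABBITMQ_HOST=localhost
export RABBITMQ_PORT=5672

# The AMQP_URI entry above would then expand to:
echo "amqp://${RABBITMQ_USER}:${RABBITMQ_PASSWORD}@${RABBITMQ_HOST}:${RABBITMQ_PORT}/%2f"
```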
How to use it
The core concept is to remove the pain of queue creation and replies by
mirroring the Nameko architecture with an `RpcService` or `RpcClient`, and
by using the abstract type `serde_json::Value` to carry serializable data.
Service handlers run as async futures and receive an `RpcContext` giving
access to inbound headers and two capability handles:

- `ctx.rpc.call(service, method, payload).await` — call any other service.
- `ctx.events.dispatch(source, event_type, &payload).await` — emit a Nameko-compatible event on `<source>.events`.
The #[girolle] attribute generates a handler from a sync or async function.
An optional first ctx: RpcContext argument is detected automatically.
If you'd rather skip the macro you can build an `RpcTask` by hand from an
`RpcHandler` closure:

```rust
use girolle::prelude::*;
use std::sync::Arc;

fn fibonacci() -> RpcTask {
    RpcTask::new(
        "fibonacci",
        Arc::new(|_ctx: RpcContext, payload: Payload| -> BoxFuture<GirolleResult<Value>> {
            Box::pin(async move {
                // Deserialize the first positional argument.
                let n: u64 = serde_json::from_value(payload.args()[0].clone())?;
                Ok(serde_json::to_value(fibonacci_core(n))?)
            })
        }),
    )
}
```
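The snippet above calls a `fibonacci_core` helper that is not shown. A minimal stand-in (an assumption for illustration, not the crate's actual code) could be an iterative Fibonacci:

```rust
/// Hypothetical stand-in for the `fibonacci_core` helper referenced above.
/// Iterative, so it avoids the exponential cost of naive recursion.
fn fibonacci_core(n: u64) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}

fn main() {
    // fibonacci_core(30) is 832040, the value asserted in the client example below.
    println!("{}", fibonacci_core(30));
}
```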
Examples
The `examples/` crate contains runnable services and senders:

| example | what it shows |
|---|---|
| `simple_macro` | basic `#[girolle]` service with sync handlers |
| `simple_service` | hand-rolled `RpcTask::new` with an async closure |
| `proxy_service` | `ctx.rpc.call` from inside a handler |
| `event_emitter` | `ctx.events.dispatch` from inside a handler |
| `event_observer` | `RpcService::subscribe` to consume events |
| `cli_sender` | generic CLI sender — `<service> <method> [arg…]` |
| `simple_sender` | `RpcClient` round-trip with sync + async calls |
Create a simple service
```rust
use girolle::prelude::*;
use std::{thread, time};

#[girolle]
fn hello(s: String) -> String {
    format!("Hello, {}!", s)
}

#[girolle]
fn sub(a: i64, b: i64) -> i64 {
    a - b
}

#[girolle]
fn slip(n: u64) -> String {
    thread::sleep(time::Duration::from_secs(n));
    format!("Slept for {} seconds", n)
}

#[girolle]
fn fibonacci(n: u64) -> u64 {
    if n <= 1 {
        return n;
    }
    fibonacci(n - 1) + fibonacci(n - 2)
}

fn main() {
    let conf: Config = Config::with_yaml_defaults("staging/config.yml".to_string()).unwrap();
    let _ = RpcService::new(conf, "video")
        .register(hello)
        .register(sub)
        .register(slip)
        .register(fibonacci)
        .start();
}
```
Call another service from a handler
```rust
use girolle::prelude::*;

#[girolle]
async fn proxy_hello(ctx: RpcContext, name: String) -> String {
    let result = ctx
        .rpc
        .call("video", "hello", Payload::new().arg(name))
        .await
        .expect("rpc call failed");
    serde_json::from_value(result).expect("video.hello did not return a String")
}
```
Emit an event from a handler
```rust
use girolle::prelude::*;
use serde::Serialize;

#[derive(Serialize)]
struct UserCreated {
    name: String,
}

#[girolle]
async fn create_user(ctx: RpcContext, name: String) -> String {
    ctx.events
        .dispatch("users", "user_created", &UserCreated { name: name.clone() })
        .await
        .expect("event dispatch failed");
    format!("User {} created", name)
}
```
Subscribe to events
```rust
use girolle::prelude::*;
use std::sync::Arc;

fn main() {
    let conf = Config::with_yaml_defaults("staging/config.yml".to_string()).unwrap();
    let _ = RpcService::new(conf, "event-observer")
        .subscribe(
            "users",
            "user_created",
            Arc::new(|_ctx: RpcContext, payload: Value| -> BoxFuture<GirolleResult<()>> {
                Box::pin(async move {
                    println!("[users.user_created] {}", payload);
                    Ok(())
                })
            }),
        )
        .start();
}
```
A service can mix RPC methods (`register(...)`) and event subscriptions
(`subscribe(...)`) freely; either one alone is also valid.
Make multiple sync and async calls to service methods
```rust
use girolle::prelude::Payload;
use girolle::{serde_json, Config, RpcClient, Value};
use std::time::Instant;
use std::{thread, time};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the configuration
    let conf: Config = Config::with_yaml_defaults("staging/config.yml".to_string())?;
    let service_name = "video";
    // Create the client from the configuration
    let mut rpc_client = RpcClient::new(conf);
    // Register the service
    rpc_client.register_service(service_name).await?;
    // Start the client and the consumers
    rpc_client.start().await?;
    // Build the payload
    let p = Payload::new().arg(30);
    // Send the request synchronously
    let new_result = rpc_client.send(service_name, "fibonacci", p)?;
    // Deserialize the result
    let fib_result: u64 = serde_json::from_value(new_result.get_value())?;
    // Print the result
    println!("fibonacci: {:?}", fib_result);
    assert_eq!(fib_result, 832040);
    // Unregister the service and close the client
    rpc_client.unregister_service(service_name)?;
    rpc_client.close().await?;
    Ok(())
}
```
Stack
Girolle uses lapin as its AMQP client library.
Supported features
- standalone client (sync and async)
- simple service with `#[girolle]`
- error handling
- tests
- macro
  - basic
  - handle `return`
  - handle recursive functions
  - support `async fn` and an optional `ctx: RpcContext` first argument
- in-service RPC client — `ctx.rpc.call(...)` from inside a handler
- event publishing — `ctx.events.dispatch(...)` (Nameko `{source}.events`)
- event subscriptions — `RpcService::subscribe(source, event_type, handler)`
- `#[girolle_event]` macro for ergonomic subscription handlers
- HTTP / web service entrypoint (not yet implemented; see the nameko-web section)
nameko-client
The Girolle client provides sync and async sends. There's also a
cli_sender example for poking at any service
from a terminal without hand-editing a sender file.
nameko-rpc
RpcService plus the #[girolle] macro is the core. Handlers are async,
receive an RpcContext, and can call other services from inside the handler
via ctx.rpc.call(...) — the in-service RPC core handles the reply queue,
correlation map, and nameko.call_id_stack propagation transparently.
nameko-pubsub
Both sides are supported: handlers can publish events with
ctx.events.dispatch(source, event_type, &payload) and services can
subscribe to events with RpcService::subscribe(source, event_type, handler).
Exchanges follow the Nameko {source}.events topic convention, so a Girolle
service can publish events that a Python Nameko service subscribes to (and
vice versa).
nameko-web
The web service is not implemented, and I'm not sure if I will implement it. The client first needs to be reworked to make it 100% thread-safe; this is a concern shared with the proxy.
Limitation
The current code has been tested with the Nameko and Girolle examples in this repository.
| | nameko_test.py | simple_sender.rs |
|---|---|---|
| nameko_service.py | x | x |
| simple_macro | x | x |
Benchmark
Simple message benchmark
| | nameko_test.py | simple_sender.rs |
|---|---|---|
| nameko_service.py | 15.587 s | 11.532 s |
| simple_macro.rs | 15.654 s | 8.078 s |
Client benchmark
The client benchmarks were run with hyperfine.
Girolle client (with Girolle service)

```text
$ hyperfine -N './target/release/examples/simple_sender'
Benchmark 1: ./target/release/examples/simple_sender
  Time (mean ± σ):      9.995 s ±  0.116 s    [User: 0.163 s, System: 0.197 s]
  Range (min … max):    9.778 s … 10.176 s    10 runs
```
Nameko client (with Girolle service)

```text
$ hyperfine -N --warmup 3 'python nameko_test.py'
Benchmark 1: python nameko_test.py
  Time (mean ± σ):     15.654 s ±  0.257 s    [User: 1.455 s, System: 0.407 s]
  Range (min … max):   15.202 s … 15.939 s    10 runs
```
Service benchmark
Girolle service (with Girolle client)

```text
$ hyperfine -N './target/release/examples/simple_sender'
Benchmark 1: ./target/release/examples/simple_sender
  Time (mean ± σ):      9.995 s ±  0.116 s    [User: 0.163 s, System: 0.197 s]
  Range (min … max):    9.778 s … 10.176 s    10 runs
```
Nameko service running Python 3.9.15 (with Girolle client)

```text
$ hyperfine -N --warmup 3 'target/release/examples/simple_sender'
Benchmark 1: target/release/examples/simple_sender
  Time (mean ± σ):     11.532 s ±  0.091 s    [User: 0.199 s, System: 0.213 s]
  Range (min … max):   11.396 s … 11.670 s    10 runs
```
Nameko service running Python 3.9.15 (with Nameko client)

```text
$ hyperfine -N --warmup 3 'python nameko_test.py'
Benchmark 1: python nameko_test.py
  Time (mean ± σ):     15.587 s ±  0.325 s    [User: 1.443 s, System: 0.420 s]
  Range (min … max):   15.181 s … 16.034 s    10 runs
```
Fibonacci benchmark
The benchmark uses a static set of random integers to compute Fibonacci numbers.
| | nameko_fib_payload.py |
|---|---|
| nameko_service.py | 03 min 58.11 s |
| simple_macro.rs | 6.99 s |
Macro-overhead benchmark
This benchmark measures the overhead added by the macro.