A project by the HOPR Association
HOPR is a privacy-preserving messaging protocol which enables the creation of a secure communication network via relay nodes powered by economic incentives using digital tokens.
- Table of Contents
- About
- Install
- Usage
- Testnet accessibility
- Migrating between releases
- Develop
- Local cluster
- Test
- Profiling & Instrumentation
- Contact
- License
The HOPR project produces multiple artifacts that allow running, maintaining and modifying the HOPR node. The most relevant components for production use cases are:
- hopr-lib
- A fully self-contained reference implementation of the HOPR protocol over a libp2p-based connection mechanism that can be incorporated into other projects as a transport layer.
- hoprd
- Daemon application providing a higher-level interface for creating a HOPR protocol-compliant node that exposes a dedicated REST API.
- hoprd-api-schema
- Utility to generate the OpenAPI spec for the `hoprd`-served REST API.
- hoprd-cfg
- Utility for configuration management of the `hoprd` daemon.
Unless stated otherwise, the following sections only apply to hoprd.
Multiple installation options exist; for any production system the preferred choice is the container image (e.g. via Docker).
All releases and associated changelogs are located in the official releases section of the hoprnet repository.
Check DockerHub for a specific version tag or use a custom tag:

- I want the latest code from active development → `latest` or `latest-main`
- I want the latest stable release from active development → `release-main`
- I want the latest code from the LTS branch → `latest-lts`
- I want the latest stable release from the LTS branch → `release-lts`
- I want what is currently running in production → `stable`

```shell
docker pull hoprnet/hoprd:stable
```

It is recommended to set up an alias `hoprd` for the docker command invocation.
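A full container invocation might then look roughly as follows; this is a sketch, not a supported recipe, and the host volume path, published ports, password and token below are placeholders you must replace:

```shell
# Run a hoprd node from the stable image with a persisted data directory.
# /opt/hoprd-db, the ports and <PASSWORD>/<MY_TOKEN> are placeholders.
docker run -d --name hoprd \
  -v /opt/hoprd-db:/app/hoprd-db \
  -p 9091:9091 -p 3001:3001 \
  hoprnet/hoprd:stable \
  --identity /app/hoprd-db/.hopr-identity \
  --password <PASSWORD> --init --announce \
  --host "0.0.0.0:9091" --api --apiToken <MY_TOKEN>
```

The flags themselves are documented in the `hoprd --help` reference in the Usage section.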
Install via Nix package manager
WARNING: This setup should only be used for development or advanced usage without any further support.
Clone and initialize the hoprnet repository:
```shell
git clone https://github.com/hoprnet/hoprnet
cd hoprnet
```

Build and install the `hoprd` binary, e.g. on a UNIX platform:

```shell
nix build
sudo cp result/bin/* /usr/local/bin/
```

To build and access man pages for `hoprd`:

```shell
# Build man page for hoprd
nix build .#hoprd-man
man ./result/share/man/man1/hoprd.1.gz

# Or install them system-wide
sudo cp -r result/share/man/man1/* /usr/local/share/man/man1/
```

Linux packages are available with every GitHub release; download the latest package from https://github.com/hoprnet/hoprnet/releases/latest. To install on a specific distribution, see the detailed information.
`hoprd` provides various command-line switches to configure its behaviour. For reference, these are documented here as well:
```shell
$ hoprd --help
HOPR node executable

Usage: hoprd [OPTIONS]

Options:
      --identity <IDENTITY>
          The path to the identity file [env: HOPRD_IDENTITY=]
      --data <DATA>
          Specifies the directory to hold all the data [env: HOPRD_DATA=]
      --host <HOST>
          Host to listen on for P2P connections [env: HOPRD_HOST=]
      --announce...
          Announce the node on chain with a public address [env: HOPRD_ANNOUNCE=]
      --api...
          Expose the API on localhost:3001 [env: HOPRD_API=]
      --apiHost <HOST>
          Set host IP to which the API server will bind [env: HOPRD_API_HOST=]
      --apiPort <PORT>
          Set port to which the API server will bind [env: HOPRD_API_PORT=]
      --apiToken <TOKEN>
          A REST API token for user authentication [env: HOPRD_API_TOKEN=]
      --password <PASSWORD>
          A password to encrypt your keys [env: HOPRD_PASSWORD=]
      --blokliUrl <URL>
          URL of the Blokli provider used by the node to connect to the blockchain [env: HOPRD_BLOKLI_URL=]
      --init...
          Initialize a database if it doesn't already exist [env: HOPRD_INIT=]
      --forceInit...
          Initialize a database, even if it already exists [env: HOPRD_FORCE_INIT=]
      --probeRecheckThreshold <SECONDS>
          Timeframe in seconds after which it is reasonable to recheck the nearest neighbor [env: HOPRD_PROBE_RECHECK_THRESHOLD=]
      --configurationFilePath <CONFIG_FILE_PATH>
          Path to a file containing the entire HOPRd configuration [env: HOPRD_CONFIGURATION_FILE_PATH=]
      --safeAddress <HOPRD_SAFE_ADDR>
          Address of the Safe that safeguards tokens [env: HOPRD_SAFE_ADDRESS=]
      --moduleAddress <HOPRD_MODULE_ADDR>
          Address of the node management module [env: HOPRD_MODULE_ADDRESS=]
  -h, --help
          Print help
  -V, --version
          Print version
```

On top of the default command-line configuration options, the following environment variables can be used to tweak the node functionality:
- `ENV_WORKER_THREADS` - the number of worker threads for the tokio executor
- `HOPRD_LOG_FORMAT` - override for the default stdout log formatter (follows tracing formatting options)
- `HOPRD_USE_OPENTELEMETRY` - enable the OpenTelemetry output for this node
- `HOPRD_OTEL_SIGNALS` - comma-separated OTLP signals to export when OpenTelemetry is enabled (`traces`, `logs`, `metrics`), defaults to `traces`
- `HOPR_INTERNAL_CHAIN_DISCOVERY_CHANNEL_CAPACITY` - the maximum capacity of the channel for chain-generated discovery signals for the p2p transport
- `HOPR_INTERNAL_ACKED_TICKET_CHANNEL_CAPACITY` - the maximum capacity of the acknowledged ticket processing queue
- `HOPR_INTERNAL_LIBP2P_MAX_CONCURRENTLY_DIALED_PEER_COUNT` - the maximum number of concurrently dialed peers in libp2p
- `HOPR_INTERNAL_LIBP2P_MAX_NEGOTIATING_INBOUND_STREAM_COUNT` - the maximum number of negotiating inbound streams
- `HOPR_INTERNAL_LIBP2P_SWARM_IDLE_TIMEOUT` - timeout for all idle libp2p swarm connections in seconds
- `HOPR_INTERNAL_DB_PEERS_PERSISTENCE_AFTER_RESTART_IN_SECONDS` - cutoff duration from now; peers with older records are not retained in the peers database (e.g. after a restart)
- `HOPR_INTERNAL_MANUAL_PING_CHANNEL_CAPACITY` - the maximum capacity of the awaiting manual ping queue
- `HOPR_INTERNAL_MIXER_CAPACITY` - capacity of the mixer buffer
- `HOPR_INTERNAL_MIXER_MINIMUM_DELAY_IN_MS` - the minimum mixer delay in milliseconds
- `HOPR_INTERNAL_MIXER_DELAY_RANGE_IN_MS` - the maximum range of the mixer delay above the minimum value in milliseconds
- `HOPR_INTERNAL_PROTOCOL_BIDIRECTIONAL_CHANNEL_CAPACITY` - the maximum capacity of HOPR messages processed by the node
- `HOPR_INTERNAL_SESSION_CTL_CHANNEL_CAPACITY` - the maximum capacity of the session control channel
- `HOPR_INTERNAL_SESSION_INCOMING_CAPACITY` - the maximum capacity of the queue storing unprocessed incoming and outgoing messages inside a session
- `HOPR_INTERNAL_SESSION_BALANCER_LEVEL_CAPACITY` - the maximum capacity of the session balancer
- `HOPR_INTERNAL_RAW_SOCKET_LIKE_CHANNEL_CAPACITY` - the maximum capacity of the raw socket-like bidirectional API interface
- `HOPR_INTERNAL_IN_PACKET_PIPELINE_CONCURRENCY` - the maximum number of incoming packets to process concurrently (default: 8 × CPU cores, 0 = no limit)
- `HOPR_INTERNAL_OUT_PACKET_PIPELINE_CONCURRENCY` - the maximum number of outgoing packets to process concurrently (default: 8 × CPU cores, 0 = no limit)
- `HOPR_CPU_TASK_QUEUE_LIMIT` - maximum number of CPU-bound tasks (queued + running) in the Rayon thread pool. If not set, the queue is unbounded. Set this to prevent unbounded queue growth under sustained load (e.g. `1000`). When the limit is reached, new tasks are rejected and packet decode operations may fail with "local CPU queue full" errors.
- `HOPR_BALANCER_PID_P_GAIN` - proportional (P) gain for the PID controller in the outgoing SURB balancer (default: `0.6`)
- `HOPR_BALANCER_PID_I_GAIN` - integral (I) gain for the PID controller in the outgoing SURB balancer (default: `0.7`)
- `HOPR_BALANCER_PID_D_GAIN` - derivative (D) gain for the PID controller in the outgoing SURB balancer (default: `0.2`)
- `HOPR_CAPTURE_PACKETS` - allows capturing the customized HOPR packet format to a PCAP file or to a `udpdump` host. Note that `hoprd` must be built with the `capture` feature.
- `HOPR_CAPTURE_PATH_TRIGGER` - path used as a trigger to start capturing customized HOPR packets. When a file exists at that path, capturing starts.
- `HOPR_TRANSPORT_MAX_CONCURRENT_PACKETS` - maximum number of concurrently processed incoming packets from all peers (default: `10`)
- `HOPR_TRANSPORT_STREAM_OPEN_TIMEOUT_MS` - maximum time (in milliseconds) to wait until a stream connection is established to a peer (default: `2000` ms)
- `HOPR_PACKET_PLANNER_CONCURRENCY` - maximum number of concurrently planned outgoing packets (default: `10`)
- `HOPR_SESSION_FRAME_SIZE` - the maximum chunk of data that can be written to the Session's input buffer (default: `1500`)
- `HOPR_SESSION_FRAME_TIMEOUT_MS` - the maximum time (in milliseconds) for an incomplete frame to stay in the Session's output buffer (default: `800` ms)
- `HOPR_PROTOCOL_SURB_RB_SIZE` - size of the SURB ring buffer (default: `10 000`)
- `HOPR_PROTOCOL_SURB_RB_DISTRESS` - threshold on the number of SURBs in the ring buffer that triggers a distress packet signal (default: `1000`)
- `HOPRD_SESSION_PORT_RANGE` - allows restricting the port range (syntax: `start:end`, inclusive) of the Session listener's automatic port selection (when port 0 is specified)
- `HOPRD_SESSION_ENTRY_UDP_RX_PARALLELISM` - sets the number of UDP listening sockets for UDP sessions on an Entry node (defaults to the number of CPU cores)
- `HOPRD_SESSION_EXIT_UDP_RX_PARALLELISM` - sets the number of UDP listening sockets for UDP sessions on an Exit node (defaults to the number of CPU cores)
- `HOPRD_NAT` - indicates whether the host is behind a NAT and sets transport-specific settings accordingly (default: `false`)
- `HOPRD_NUM_CPU_THREADS` - sets the number of threads for CPU-bound tasks (default: number of CPU cores / 2)
- `HOPRD_NUM_IO_THREADS` - sets the number of threads for IO-bound tasks (default: number of CPU cores / 2)
- `HOPRD_THREAD_STACK_SIZE` - sets the thread stack size (default: 10 MB)
- `HOPR_METRICS_UNACK_PER_PEER` - enable per-peer unacknowledged ticket cache metrics (disabled by default to reduce Prometheus cardinality). Set to `1` or `true` for debugging specific peer acknowledgement issues. Warning: do not enable in production with many peers.
Running the node without any command-line argument might not work depending on the installation method used. Some command line arguments are required.
A basic, reasonable setup uses a custom identity and enables the REST API of `hoprd`:
```shell
hoprd --identity /app/hoprd-db/.hopr-identity --password switzerland --init --announce --host "0.0.0.0:9091" --apiToken <MY_TOKEN> --blokliUrl "http://blokli-provider.here"
```

Here is a short breakdown of each argument.
```shell
hoprd
# store your node identity information in the persisted database folder
--identity /app/hoprd-db/.hopr-identity
# set the encryption password for your identity
--password switzerland
# initialize the database and identity if not present
--init
# announce the node to other nodes in the network and act as relay if publicly reachable
--announce
# set IP and port of the P2P API to the container's external IP so it can be reached on your host
--host "0.0.0.0:9091"
# specify password for accessing REST API
--apiToken <MY_TOKEN>
# blokli provider supplying the HOPR updates
--blokliUrl "http://blokli-provider.here"
```

Please follow the documentation for docker compose based deployment.
A `hoprd` node running the REST API exposes an endpoint at `http://<address>/api-docs/openapi.json` with the full OpenAPI specification of the REST API, including the current version of the API.
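For example, the spec of a locally running node could be fetched with curl; the host, port and the `X-Auth-Token` header name below are assumptions based on the defaults above, so verify the exact authentication scheme against the spec itself:

```shell
# Fetch the OpenAPI spec from a node with the API on the default port 3001.
# <MY_TOKEN> is a placeholder; the X-Auth-Token header name is an assumption.
curl -H "X-Auth-Token: <MY_TOKEN>" http://localhost:3001/api-docs/openapi.json
```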
To participate in a public network the node must be eligible. See Network Registry for details.
Node eligibility is not required in a local development cluster (see Develop section below).
There is NO backward compatibility between releases.
We attempt to provide instructions on how to migrate your tokens between releases.
- Set your automatic channel strategy to `passive`.
- Redeem all unredeemed tickets.
- Close all open payment channels.
- Once all payment channels have closed, withdraw your funds to an external wallet.
- Run `info` and take note of the network name.
- Once funds are confirmed to exist in a different wallet, back up the `.hopr-identity` folder.
- Launch a new `HOPRd` instance using the latest release and observe the account address.
- Only transfer funds to the new `HOPRd` instance if it operates on the same network as the last release; you can compare the two networks using `info`.
Either set up nix and flakes to use the nix environment, or install the Rust toolchain from rust-toolchain.toml as well as the foundry-rs binaries (forge, anvil).
Install nix from the official website at https://nix.dev/install-nix.html.
Create a nix configuration file at ~/.config/nix/nix.conf with the following content:
```
experimental-features = nix-command flakes
```

Install the nix-direnv package to introduce direnv:

```shell
nix-env -i nix-direnv
```

Append the following line to your shell rc file (depending on the shell used it can be `~/.zshrc`, `~/.bashrc`, `~/.cshrc`, etc.), replacing `<shell>` with the shell you currently use (zsh, bash, csh, etc.):

```shell
eval "$(direnv hook <shell>)"
```

From within the hoprnet repository's directory, execute the following command:

```shell
direnv allow .
```

We provide a couple of packages, apps and shells to make building and development easier. You may get the full list like so:
```shell
nix flake show
```

All nix, rust, solidity and python code can be automatically formatted:

```shell
nix fmt
```

These formatters are also run automatically as a Git pre-commit check.

All linters can be executed via a Nix flake helper app:

```shell
nix run .#lint
```

This will in particular run clippy for the entire Rust codebase.
A Python SDK is not distributed but can be generated to connect to the HOPRd API using the generate-python-sdk.sh script.
Prerequisites:
- swagger-codegen3
- build the repository to get the `hoprd-api-schema` generated
The generated SDK will be available in the /tmp/hoprd-sdk-python/ directory. Modify the script to generate SDKs for different programming languages supported by swagger-codegen3.
For usage examples of the generated SDK, refer to the generated README.md file in the SDK directory.
Docker images can be built using the respective nix flake outputs. The following command builds the hoprd image for x86_64-linux platform:
```shell
nix build .#docker-hoprd-x86_64-linux
```

The best way to test with multiple HOPR nodes is by using a local cluster of interconnected nodes.
Run all tests: `cargo test`

Run only unit tests: `cargo test --lib`
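Standard cargo test filtering applies as well; for instance (the crate name is taken from this repository, the test name is hypothetical):

```shell
# Run the tests of a single crate
cargo test -p hopr-crypto-packet
# Run a single test selected by a substring of its name (hypothetical name)
cargo test -p hopr-crypto-packet packet_roundtrip
```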
We run a fair amount of automation using GitHub Actions. To see the full list of workflows, check out the workflow docs.
When using the nix environment, the test environment preparation and activation is automatic.
Tests use the pytest infrastructure.
With the environment activated, execute the tests locally:
```shell
just run-smoke-test integration
```

Coverage reports are generated using LLVM source-based instrumentation and uploaded to Codecov. See docs/coverage.md for workspace-wide and single-crate usage.
Multiple layers of profiling and instrumentation can be used to debug `hoprd`:
Requires a special build:
- Set `RUSTFLAGS="--cfg tokio_unstable"` before building
- Enable the `prof` feature on the `hoprd` package: `cargo build --features prof`
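The two steps above can be combined into a single invocation; the `-p hoprd` package selection is assumed from the package name mentioned above:

```shell
# Build hoprd with an instrumented tokio and the prof feature enabled
RUSTFLAGS="--cfg tokio_unstable" cargo build --features prof -p hoprd
```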
Once an instrumented tokio is built into `hoprd`, the application can be inspected with `tokio-console` as described in the official crate documentation.
hoprd can stream OpenTelemetry to a compatible endpoint. This behavior is turned off by default. To enable it, configure these environment variables:
Detailed reference: OTLP.md
- `HOPRD_USE_OPENTELEMETRY` - `true` to enable the OpenTelemetry streaming, `false` to disable it
- `HOPRD_OTEL_SIGNALS` - comma-separated signal list from `traces`, `logs`, `metrics` (default: `traces`)
- `HOPRD_OTLP_ENDPOINT` - base URL of an OTLP endpoint. Transport is inferred from the URL scheme (`grpc://...`, `http://...`, or `https://...`)
- `HOPRD_METRIC_EXPORT_INTERVAL` - OTLP metric export interval config in `default,prefix=interval` form (for example `15000,hopr_session=1000`). Intervals support raw milliseconds (`15000`) or suffixes (`1s`, `250ms`, `1m`).
- `HOPRD_OTEL_EXPORT_LABELS` - comma-separated `key=value` pairs added as extra attributes to all OTEL signals (for example `HOPRD_OTEL_EXPORT_LABELS="country=UK,city=london"`).
Examples:
- Traces only (backward-compatible default): `HOPRD_OTEL_SIGNALS=traces`
- Metrics only: `HOPRD_OTEL_SIGNALS=metrics`
- Full export: `HOPRD_OTEL_SIGNALS=traces,logs,metrics`
- OTEL traces, logs, and metrics include `node_address` and `node_peer_id` attributes automatically.
- OTLP logs are emitted as structured objects (typed fields/attributes), and use the OTLP HTTP JSON protocol when `HOPRD_OTLP_ENDPOINT` is `http(s)://...`.
- With metrics enabled, OTEL exports keep Prometheus family naming (`<metric>`, `<metric>_count`, `<metric>_sum`, `<metric>_bucket`) and labels (`le` for histogram buckets, `quantile` for summaries).
- Session metrics are exported to OTEL (`hopr_session_*` series with a `session_id` attribute) and are excluded from the Prometheus `/metrics` endpoint.
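Putting the variables together, a full-export setup might look like the following sketch; the collector endpoint URL and the labels are placeholders for your own infrastructure:

```shell
# Enable full OTLP export (traces, logs and metrics) before starting hoprd.
export HOPRD_USE_OPENTELEMETRY=true
export HOPRD_OTEL_SIGNALS=traces,logs,metrics
# Placeholder endpoint; the http:// scheme selects HTTP transport
export HOPRD_OTLP_ENDPOINT=http://otel-collector.example:4318
# Default 15 s export interval, 1 s for hopr_session metrics
export HOPRD_METRIC_EXPORT_INTERVAL="15000,hopr_session=1000"
# Extra attributes attached to all signals (placeholder values)
export HOPRD_OTEL_EXPORT_LABELS="country=UK,city=london"
```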
- `perf` installed on the host system
- flamegraph (install via e.g. `cargo install flamegraph`)
- Perform a build of your chosen benchmark with the `--no-rosegment` linker flag:

```shell
RUSTFLAGS="-Clink-arg=-fuse-ld=lld -Clink-arg=-Wl,--no-rosegment" cargo bench --no-run -p hopr-crypto-packet
```

Use `mold` instead of `lld` if needed.

- Find the built benchmarking binary and check if it contains debug symbols:

```shell
readelf -S target/release/deps/packet_benches-ce70d68371e6d19a | grep debug
```

The output of the above command should contain AT LEAST: `.debug_line`, `.debug_info` and `.debug_loc`.

- Run `flamegraph` on the benchmarking binary of a selected benchmark with a fixed profile time (e.g. 30 seconds):

```shell
flamegraph -- ./target/release/deps/packet_benches-ce70d68371e6d19a --bench --exact packet_sending_no_precomputation/0_hop_0_surbs --profile-time 30
```

- The `flamegraph.svg` will be generated in the project root directory and can be opened in a browser.
Using the environment variable `HOPR_CAPTURE_PACKETS` allows capturing the customized HOPR packet format to a PCAP file or to a `udpdump` host. Also define the environment variable `HOPR_CAPTURE_PATH_TRIGGER` with a path that will be periodically inspected; once a file exists at that path, packet capturing starts.
However, for that to work the `hoprd` binary has to be built with the `capture` feature.

For ease of use we provide different nix flake outputs that build `hoprd` with the `capture` feature enabled:

```shell
nix build .#binary-hoprd-profile-x86_64-linux
nix build .#binary-hoprd-profile-aarch64-linux
nix build .#binary-hoprd-profile-x86_64-darwin
nix build .#binary-hoprd-profile-aarch64-darwin
```
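A capture session with such a build could then be driven as in this sketch; the PCAP output path and trigger path are placeholders, and `hoprd` here must be the capture-enabled binary built above:

```shell
# Capture HOPR packets to a PCAP file (placeholder paths).
export HOPR_CAPTURE_PACKETS=/tmp/hopr-packets.pcap
export HOPR_CAPTURE_PATH_TRIGGER=/tmp/start-capture

# Start the capture-enabled node with your usual arguments, then
# create the trigger file when you want capturing to begin:
touch /tmp/start-capture
```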
GPL v3 © HOPR Association