A TCP reverse proxy built with Pingora.
```bash
# Single proxy
pj --proxy 0.0.0.0:8787:127.0.0.1:22

# Multiple proxies
pj \
  --proxy 0.0.0.0:8787:127.0.0.1:22 \
  --proxy 0.0.0.0:8788:127.0.0.1:80 \
  --proxy 0.0.0.0:8789:127.0.0.1:443

# Show help
pj --help
```

You can also configure proxy mappings using environment variables:
```bash
# Single proxy mapping
export PJ_PROXY="0.0.0.0:8787:127.0.0.1:22"
pj

# Multiple proxy mappings (comma or semicolon separated)
export PJ_PROXIES="0.0.0.0:8787:127.0.0.1:22,0.0.0.0:8080:127.0.0.1:80"
pj

# Or using semicolons
export PJ_PROXIES="0.0.0.0:8787:127.0.0.1:22;0.0.0.0:8080:127.0.0.1:80;0.0.0.0:8443:127.0.0.1:443"
pj
```

**Priority Order:**

1. Command line arguments (highest priority)
2. `PJ_PROXIES` environment variable (for multiple mappings)
3. `PJ_PROXY` environment variable (for single mapping)
If command line arguments are provided, environment variables are ignored.
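This precedence can be sketched as a small pure resolver (a hypothetical helper; function and parameter names are illustrative, not pj's actual code):

```rust
// Resolve proxy mappings by priority: CLI args, then PJ_PROXIES, then PJ_PROXY.
// Illustrative sketch only; pj's real resolution logic may differ.
fn resolve_mappings(
    cli_args: &[String],
    pj_proxies: Option<&str>,
    pj_proxy: Option<&str>,
) -> Vec<String> {
    if !cli_args.is_empty() {
        // Command line arguments win; environment variables are ignored.
        return cli_args.to_vec();
    }
    if let Some(list) = pj_proxies {
        // PJ_PROXIES accepts comma- or semicolon-separated mappings.
        return list
            .split(|c| c == ',' || c == ';')
            .map(|s| s.trim().to_string())
            .filter(|s| !s.is_empty())
            .collect();
    }
    // PJ_PROXY carries at most one mapping.
    pj_proxy.map(|s| vec![s.to_string()]).unwrap_or_default()
}

fn main() {
    let mappings = resolve_mappings(
        &[],
        Some("0.0.0.0:8787:127.0.0.1:22;0.0.0.0:8080:127.0.0.1:80"),
        None,
    );
    println!("{mappings:?}");
}
```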
Enable OpenTelemetry to export traces and metrics to a Grafana stack:

```bash
# Export to local OTLP collector (Tempo/Prometheus)
PJ_OTLP_ENDPOINT=http://localhost:4317 pj --proxy 0.0.0.0:8787:127.0.0.1:22

# Export to Grafana Cloud
PJ_OTLP_ENDPOINT=https://otlp-gateway-prod-us-east-0.grafana.net/otlp pj --proxy 0.0.0.0:8787:127.0.0.1:22
```

**Exported Metrics:**
| Metric | Type | Description |
|---|---|---|
| `pj_connections_total` | Counter | Total connections established |
| `pj_connections_active` | Gauge | Currently active connections |
| `pj_bytes_sent_total` | Counter | Total bytes sent to clients |
| `pj_bytes_received_total` | Counter | Total bytes received from clients |
| `pj_connection_duration_seconds` | Histogram | Connection duration |

All metrics include `listen_addr` and `backend_addr` labels.
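For illustration, once these metrics land in a Prometheus backend they might render like this (hypothetical values; the addresses shown are just example labels):

```text
pj_connections_total{listen_addr="0.0.0.0:8787",backend_addr="127.0.0.1:22"} 42
pj_connections_active{listen_addr="0.0.0.0:8787",backend_addr="127.0.0.1:22"} 3
pj_bytes_sent_total{listen_addr="0.0.0.0:8787",backend_addr="127.0.0.1:22"} 1048576
pj_bytes_received_total{listen_addr="0.0.0.0:8787",backend_addr="127.0.0.1:22"} 524288
pj_connection_duration_seconds_count{listen_addr="0.0.0.0:8787",backend_addr="127.0.0.1:22"} 42
```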
```text
Options:
  -p, --proxy <PROXY>  Proxy mapping in format "listen_ip:listen_port:proxy_ip:proxy_port"
                       Can be specified multiple times for multiple mappings
  -h, --help           Print help
  -V, --version        Print version

Environment Variables:
  PJ_PROXY                   Single proxy mapping (same format as --proxy)
  PJ_PROXIES                 Multiple proxy mappings, comma or semicolon separated
  PJ_LOG                     Log level (error, warn, info, debug, trace). Default: info
  PJ_OTLP_ENDPOINT           OTLP endpoint for OpenTelemetry export (e.g., http://localhost:4317)
  PJ_CONN_ID_RESET_INTERVAL  Reset connection ID after interval (e.g., 6h, 1d)
  PJ_CONN_ID_RESET_COUNT     Reset connection ID after count (e.g., 100k, 1m)
```
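The `100k` / `1m` shorthand accepted by `PJ_CONN_ID_RESET_COUNT` could be parsed along these lines (a sketch, not pj's actual parser):

```rust
// Parse a count with an optional k/m suffix, e.g. "100k" -> 100_000.
// Illustrative sketch of the shorthand shown above; pj's real parser may differ.
fn parse_count(s: &str) -> Option<u64> {
    let s = s.trim().to_ascii_lowercase();
    let (digits, mult) = match s.strip_suffix('k') {
        Some(d) => (d, 1_000),
        None => match s.strip_suffix('m') {
            Some(d) => (d, 1_000_000),
            None => (s.as_str(), 1),
        },
    };
    // Reject anything that isn't digits once the suffix is stripped.
    digits.parse::<u64>().ok().map(|n| n * mult)
}

fn main() {
    println!("{:?}", parse_count("100k"));
}
```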
- SSH proxy:

  ```bash
  pj --proxy 0.0.0.0:8787:127.0.0.1:22
  ```

- Multiple service proxy:

  ```bash
  pj --proxy 0.0.0.0:8787:127.0.0.1:22 --proxy 0.0.0.0:8080:127.0.0.1:80
  ```

- Docker container proxy:

  ```bash
  pj --proxy 0.0.0.0:8080:172.17.0.2:80
  ```

- Using environment variables:

  ```bash
  # Single proxy
  PJ_PROXY="0.0.0.0:8787:127.0.0.1:22" pj

  # Multiple proxies
  PJ_PROXIES="0.0.0.0:8787:127.0.0.1:22,0.0.0.0:8080:127.0.0.1:80" pj

  # In Docker
  docker run -e PJ_PROXY="0.0.0.0:8080:backend:80" -p 8080:8080 pj:latest
  ```
- Rust 1.82.0 or later
- CMake (required for building dependencies)
```bash
# Clone the repository
git clone https://github.com/yvictor/pj.git
cd pj

# Build debug version
cargo build

# Build release version
cargo build --release

# Run directly with cargo
cargo run -- --proxy 0.0.0.0:8787:127.0.0.1:22
```

The project includes comprehensive unit tests and integration tests to ensure reliability.
```bash
# Run all tests
cargo test

# Run unit tests only
cargo test --lib

# Run integration tests only
cargo test --test integration_test

# Run tests with single thread (useful for integration tests)
cargo test -- --test-threads=1

# Run tests with output displayed
cargo test -- --nocapture

# Run a specific test
cargo test test_basic_proxy_functionality
```

- **Parser Tests**: Validate proxy mapping format parsing
  - Valid IPv4 format parsing
  - Localhost format support
  - Invalid format error handling
  - Various edge cases
- **ProxyApp Tests**: Core proxy functionality
  - ProxyApp instance creation
  - Service creation and configuration
  - ProxyMapping clone and debug traits
- **DuplexEvent Tests**: Data transfer event handling
  - Downstream read events
  - Upstream read events
  - Event structure validation
- **Basic Proxy Functionality**: End-to-end proxy data transfer
- **Multiple Concurrent Connections**: Simultaneous connection handling
- **Multiple Proxy Mappings**: Multiple port mappings in single instance
- **Large Data Transfer**: Bulk data transmission (10KB+)
- **Error Handling**: Unreachable upstream server scenarios
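The mapping format exercised by the parser tests can be illustrated with a minimal IPv4-only parser (a sketch with hypothetical names; the crate's real parser also covers cases such as localhost and richer error handling):

```rust
use std::net::SocketAddr;

// Parse "listen_ip:listen_port:proxy_ip:proxy_port" into two socket addresses.
// IPv4-only illustration; not pj's actual implementation.
fn parse_mapping(s: &str) -> Result<(SocketAddr, SocketAddr), String> {
    let parts: Vec<&str> = s.split(':').collect();
    if parts.len() != 4 {
        return Err(format!("expected 4 colon-separated fields, got {}", parts.len()));
    }
    let listen: SocketAddr = format!("{}:{}", parts[0], parts[1])
        .parse()
        .map_err(|e| format!("bad listen address: {e}"))?;
    let backend: SocketAddr = format!("{}:{}", parts[2], parts[3])
        .parse()
        .map_err(|e| format!("bad backend address: {e}"))?;
    Ok((listen, backend))
}

fn main() {
    println!("{:?}", parse_mapping("0.0.0.0:8787:127.0.0.1:22"));
}
```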
```bash
# Run tests with verbose output
cargo test -- --nocapture

# Run tests in release mode for performance testing
cargo test --release

# Check test coverage (requires cargo-tarpaulin)
cargo install cargo-tarpaulin
cargo tarpaulin --out Html
```

```bash
# Build the Docker image
make image

# Run the container
make run
```

```yaml
version: '3'
services:
  tcp-proxy:
    image: pj:latest
    # Option 1: Using command line arguments
    command: --proxy 0.0.0.0:8787:backend:22 --proxy 0.0.0.0:8080:backend:80
    # Option 2: Using environment variables (uncomment to use)
    # environment:
    #   - PJ_PROXIES=0.0.0.0:8787:backend:22,0.0.0.0:8080:backend:80
    ports:
      - "8787:8787"
      - "8080:8080"
```

The proxy is built using Cloudflare's Pingora framework and follows these design principles:
- **Async I/O**: Uses Tokio for high-performance async operations
- **Zero-copy**: Efficient data transfer between client and upstream
- **Memory Safety**: Written in Rust with compile-time guarantees
- **Resource Isolation**: Each proxy mapping runs in its own service
- **1:1 Mapping**: Each listening port maps to exactly one backend (no load balancing)
The proxy is optimized for low latency and high throughput:
- Uses jemalloc for efficient memory management
- Implements bidirectional data copying with minimal overhead
- Supports thousands of concurrent connections
- Buffer size: 1024 bytes (configurable in source)
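One direction of the bidirectional copy can be sketched with std traits and the documented 1024-byte buffer (a simplified, blocking illustration; the real proxy runs two such loops concurrently on async Pingora/Tokio I/O):

```rust
use std::io::{Read, Write};

// One direction of a proxy copy loop: read up to 1024 bytes at a time
// and forward them until EOF. Simplified sketch, not pj's actual code.
fn pump<R: Read, W: Write>(mut from: R, mut to: W) -> std::io::Result<u64> {
    let mut buf = [0u8; 1024]; // matches the documented buffer size
    let mut total = 0u64;
    loop {
        let n = from.read(&mut buf)?;
        if n == 0 {
            break; // EOF: peer closed its write side
        }
        to.write_all(&buf[..n])?;
        total += n as u64;
    }
    to.flush()?;
    Ok(total)
}

fn main() {
    let data = vec![7u8; 5000];
    let mut out = Vec::new();
    let copied = pump(&data[..], &mut out).unwrap();
    println!("copied {copied} bytes");
}
```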
Contributions are welcome! Please ensure:

- All tests pass (`cargo test`)
- Code follows Rust conventions (`cargo fmt` and `cargo clippy`)
- New features include appropriate tests

- **Error Handling**: Remove all `unwrap()` calls and implement proper error handling
  - Replace with proper `Result` types and error propagation
  - Add graceful error recovery
  - Implement comprehensive error logging
  - Ensure no panics in production
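The unwrap-removal item amounts to a change like this (a hypothetical before/after, not a diff from pj's source):

```rust
use std::net::SocketAddr;

// Before: panics on a malformed address.
fn parse_addr_panicky(s: &str) -> SocketAddr {
    s.parse().unwrap()
}

// After: propagates the error to the caller via a `?`-friendly Result.
fn parse_addr(s: &str) -> Result<SocketAddr, std::net::AddrParseError> {
    s.parse()
}

fn main() {
    let _ = parse_addr_panicky("127.0.0.1:22"); // valid input, so no panic here
    match parse_addr("not-an-addr") {
        Ok(addr) => println!("parsed {addr}"),
        Err(e) => eprintln!("invalid address: {e}"), // recovered instead of panicking
    }
}
```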
- **Connection Logging**: Display connection information
  - Log client IP addresses
  - Log destination addresses
  - Connection timestamps
  - Connection duration
  - Bytes transferred
  - Connection status (success/failure)
  - Format: `[timestamp] Connection #ID established/closed: client_ip:port -> proxy:port -> backend:port | Duration: Xs | Sent: X bytes | Received: X bytes`
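A formatter matching the proposed log line might look like this (all names are hypothetical; this feature is a roadmap item, not current behavior):

```rust
// Build the proposed connection log line. Parameter names are illustrative.
fn format_conn_log(
    timestamp: &str,
    id: u64,
    event: &str, // "established" or "closed"
    client: &str,
    proxy: &str,
    backend: &str,
    duration_secs: u64,
    sent: u64,
    received: u64,
) -> String {
    format!("[{timestamp}] Connection #{id} {event}: {client} -> {proxy} -> {backend} | Duration: {duration_secs}s | Sent: {sent} bytes | Received: {received} bytes")
}

fn main() {
    println!("{}", format_conn_log(
        "2024-01-01T00:00:00Z", 1, "closed",
        "203.0.113.5:51234", "0.0.0.0:8787", "127.0.0.1:22",
        12, 4096, 8192,
    ));
}
```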
- **Load Balancing**: Support multiple backends for a single listening port
  - Round-robin algorithm
  - Least connections algorithm
  - Health checks for backend servers
  - Automatic failover
  - Configuration format: `--proxy "0.0.0.0:8080:backend1:80,backend2:80,backend3:80"`
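Round-robin selection over the proposed multi-backend format could be sketched as follows (hypothetical; load balancing is not implemented in pj today):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical round-robin backend picker for the proposed
// "backend1:80,backend2:80" list. Not part of pj's current feature set.
struct RoundRobin {
    backends: Vec<String>,
    next: AtomicUsize,
}

impl RoundRobin {
    fn new(backends: Vec<String>) -> Self {
        Self { backends, next: AtomicUsize::new(0) }
    }

    // Each call returns the next backend in rotation; the atomic counter
    // keeps selection safe across concurrent connections.
    fn pick(&self) -> &str {
        let i = self.next.fetch_add(1, Ordering::Relaxed) % self.backends.len();
        &self.backends[i]
    }
}

fn main() {
    let rr = RoundRobin::new(vec!["backend1:80".into(), "backend2:80".into()]);
    println!("{} {} {}", rr.pick(), rr.pick(), rr.pick());
}
```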
- **Metrics & Monitoring**: OpenTelemetry support with OTLP export for Grafana stack
- **Configuration File**: Support YAML/TOML configuration files
- **Hot Reload**: Reload configuration without downtime
- **TLS/SSL Support**: Add support for encrypted connections
- **Connection Pooling**: Reuse upstream connections for better performance
- **Rate Limiting**: Add per-client rate limiting capabilities
- **Access Control**: IP-based access control lists