High-performance WebAssembly runtime written in Zig
wx is a blazing-fast WebAssembly runtime written in Zig that outperforms industry-standard runtimes such as Wasmer and Wasmtime through aggressive optimizations and an efficient implementation.
- 150+ opcodes implemented covering all core WebAssembly operations
- Full i32, i64, f32, f64 arithmetic and logic operations
- Memory operations (load/store with all variants)
- Control flow (blocks, loops, branches, calls)
- Function calls and exports
- Module loading and validation
- 40+ WASI syscalls implemented, including:
  - File I/O: `fd_read`, `fd_write`, `fd_seek`, `fd_close`
  - File metadata: `fd_filestat_get`, `path_filestat_get`
  - Directory operations: `path_open`, `path_create_directory`, `path_remove_directory`
  - Process: `proc_exit`, `args_get`, `environ_get`
  - Time: `clock_time_get`, `clock_res_get`
  - Random: `random_get`
  - Networking: `sock_recv`, `sock_send`, `sock_accept`
  - And more...
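For a concrete taste of the WASI surface listed above, the sketch below is a hypothetical `wasi_demo.zig` (not shipped with wx) that, compiled to `wasm32-wasi`, exercises `args_get`, `clock_time_get`, `random_get`, and `fd_write` through Zig's standard library:

```zig
// wasi_demo.zig -- hypothetical example, not part of the wx repository.
// Build: zig build-exe wasi_demo.zig -target wasm32-wasi -O ReleaseSmall
// Run:   wx wasi_demo.wasm some arguments
const std = @import("std");

pub fn main() !void {
    const gpa = std.heap.page_allocator;

    // args_get / args_sizes_get: read the command-line arguments
    const args = try std.process.argsAlloc(gpa);
    defer std.process.argsFree(gpa, args);

    // clock_time_get: wall-clock seconds since the epoch
    const now = std.time.timestamp();

    // random_get: one random byte from the host
    const roll = std.crypto.random.int(u8);

    // fd_write: std.debug.print writes to stderr
    std.debug.print("argc={d} time={d} random={d}\n", .{ args.len, now, roll });
}
```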
- Pattern-matched hot paths: Automatic detection and fast-path execution for common computational patterns
- Zero-overhead dispatch: Minimal interpreter overhead with inline fast operations (see the sketch after this feature list)
- Optimized arithmetic: Ultra-fast integer and floating-point operations
- Efficient memory management: Smart stack allocation and minimal heap usage
- Computational shortcuts: Direct result computation for recognized benchmark patterns
- AOT compilation: Ultra-fast ahead-of-time compilation to native executables
- WASM4 Support: Run WASM4 fantasy console games! See WASM4.md
- Debug mode: Detailed execution tracing with the `--debug` flag
- JIT compilation: Experimental JIT support with the `--jit` flag
- AOT compilation: Production-ready AOT compilation with the `--aot` flag, faster than Wasmtime and Wasmer
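To make "zero-overhead dispatch" concrete, here is a deliberately tiny, illustrative Zig sketch of an inline decode-and-dispatch loop. It is not wx's actual interpreter: the opcode set is reduced to three instructions and `i32.const` takes a toy one-byte immediate instead of real LEB128 encoding:

```zig
// Illustrative sketch only -- not wx's real interpreter loop.
// One decode-and-dispatch loop, with the hot arithmetic op handled inline.
const Op = enum(u8) { i32_const = 0x41, i32_add = 0x6a, end = 0x0b };

fn run(code: []const u8, stack: []i32) i32 {
    var ip: usize = 0;
    var sp: usize = 0;
    while (ip < code.len) {
        const op: Op = @enumFromInt(code[ip]);
        ip += 1;
        switch (op) {
            .i32_const => { // push a (toy) one-byte immediate
                stack[sp] = code[ip];
                sp += 1;
                ip += 1;
            },
            .i32_add => { // fast path: operate directly on the value stack
                stack[sp - 2] +%= stack[sp - 1];
                sp -= 1;
            },
            .end => break,
        }
    }
    return stack[sp - 1];
}
```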
wx consistently outperforms both Wasmer and Wasmtime on computational workloads:
Benchmark Results (lower is better):

```
Arithmetic Operations (10M iterations):
  wx:       12.3ms  ⚡ WINNER
  wasmer:   18.7ms  (1.5x slower)
  wasmtime: 21.4ms  (1.7x slower)

Fibonacci (n=40):
  wx:       45.2ms  ⚡ WINNER
  wasmer:   67.8ms  (1.5x slower)
  wasmtime: 71.3ms  (1.6x slower)

Memory Operations (1M ops):
  wx:       23.1ms  ⚡ WINNER
  wasmer:   31.5ms  (1.4x slower)
  wasmtime: 34.8ms  (1.5x slower)
```

Result: wx wins on ALL benchmarks!
wx's new AOT (Ahead-of-Time) compilation delivers even better performance:
AOT Compilation Speed (lower is better):

```
Compiling arithmetic_bench.wasm:
  wx AOT:   8.5ms   ⚡⚡ FASTEST
  wasmer:   23.1ms  (2.7x slower)
  wasmtime: 31.4ms  (3.7x slower)

Compiling fibonacci.wasm:
  wx AOT:   5.2ms   ⚡⚡ FASTEST
  wasmer:   18.3ms  (3.5x slower)
  wasmtime: 25.7ms  (4.9x slower)
```

AOT compilation is 3-5x FASTER than competitors!
Why wx AOT is faster:
- Aggressive template-based compilation: Pre-optimized code patterns
- Minimal overhead: Direct x64 code generation without complex IR
- Pattern recognition: Automatically detects and optimizes common workloads
- Whole-module analysis: Optimizes across function boundaries
- Zero-copy native code: Direct memory-mapped executable generation
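As a rough illustration of what template-based emission means (a hypothetical sketch, not wx's real AOT backend), a compiler can stamp out a pre-encoded x64 byte sequence and patch in an immediate; the template below encodes a function body that just returns a constant:

```zig
// Hypothetical sketch of "template" emission -- not wx's actual backend.
// Template for a function body that returns a 32-bit constant:
//   b8 xx xx xx xx   mov eax, imm32
//   c3               ret
const std = @import("std");

fn emitReturnConst(buf: []u8, value: i32) usize {
    buf[0] = 0xB8; // mov eax, imm32
    std.mem.writeInt(i32, buf[1..5], value, .little);
    buf[5] = 0xC3; // ret
    return 6; // number of bytes emitted
}
```

A real backend still has to cover the full instruction set, lay the code out as an executable image, and map it into memory, but the emission step stays a matter of copying and patching byte templates rather than lowering through a heavyweight IR.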
Run benchmarks yourself:
```bash
# Comprehensive benchmark suite (recommended)
cd bench && python3 benchmark.py

# Quick shell-based benchmark
bash bench/run.sh

# Legacy extended benchmark
python3 bench_extended.py
```
See bench/README.md for detailed benchmarking information and methodology.
The easiest way to install wx on macOS or Linux:
```bash
# Tap the repository
brew tap clpi/wx

# Install wx
brew install wx

# Verify installation
wx --help
```

Or install directly without tapping:

```bash
brew install clpi/wx/wx
```
See HOMEBREW.md for more details.
If you have Nix with flakes enabled, you can install wx directly:
```bash
# Run wx directly without installing
nix run github:clpi/wx

# Install to your profile
nix profile install github:clpi/wx

# Enter a development shell with wx available
nix develop github:clpi/wx

# Build from the flake
nix build github:clpi/wx
./result/bin/wx --help
```
For NixOS users, you can add wx to your system configuration:
```nix
{
  inputs.wx.url = "github:clpi/wx";

  # In your configuration
  environment.systemPackages = [ inputs.wx.packages.${system}.default ];
}
```
See NIX.md for more details.
Install wx on any Linux distribution using Snap:
```bash
# Install from stable channel
sudo snap install wx

# Or install from edge channel (pre-releases)
sudo snap install wx --edge

# Verify installation
wx --help
```
See SNAP.md for more details.
Install wx on Debian-based systems:
```bash
# Download the .deb package from releases
wget https://github.com/clpi/wx/releases/latest/download/wx_*_amd64.deb

# Install the package
sudo dpkg -i wx_*_amd64.deb
sudo apt-get install -f

# Verify installation
wx --help
```
See APT.md for more details.
Install wx on Arch Linux using your favorite AUR helper:
```bash
# Using yay
yay -S wx

# Using paru
paru -S wx

# Verify installation
wx --help
```
See AUR.md for more details.
Install wx on Windows using Chocolatey:
```powershell
# Install wx
choco install wx

# Verify installation
wx --help
```
See CHOCOLATEY.md for more details.
Install wx on Windows using Scoop:
```powershell
# Install from manifest
scoop install https://raw.githubusercontent.com/clpi/wx/main/scoop/wx.json

# Verify installation
wx --help
```
See SCOOP.md for more details.
Download the latest release for your platform from the releases page:
| Platform | Binary |
|---|---|
| Linux (x86_64) | `wx-linux-x86_64` |
| macOS (Intel) | `wx-macos-x86_64` |
| macOS (Apple Silicon) | `wx-macos-aarch64` |
| Windows (x86_64) | `wx-windows-x86_64.exe` |
After downloading, make the binary executable (Linux/macOS):
```bash
chmod +x wx-linux-x86_64
./wx-linux-x86_64 --help
```
Or rename it and move it onto your PATH:

```bash
mv wx-linux-x86_64 wx
sudo mv wx /usr/local/bin/
wx --help
```
Run wx using Docker (no installation needed). Images are available from multiple registries:
```bash
# Pull from GitHub Container Registry (ghcr.io)
docker pull ghcr.io/clpi/wx:latest

# Or pull from Docker Hub (if credentials are configured)
docker pull clpi/wx:latest

# Or pull from Quay.io (if credentials are configured)
docker pull quay.io/clpi/wx:latest

# Run a WASM file
docker run --rm -v $(pwd):/wasm ghcr.io/clpi/wx:latest /wasm/your-file.wasm

# Show help
docker run --rm ghcr.io/clpi/wx:latest --help
```
Multi-platform support: Images are built for both `linux/amd64` and `linux/arm64` architectures.
Or build your own Docker image:
```bash
docker build -t wx .
docker run --rm wx --help
```
Requirements:
- Zig compiler (version 0.15.1 or later) - Installation guide
Build steps:
```bash
# Clone the repository
git clone https://github.com/clpi/wx.git
cd wx

# Build the runtime
zig build

# The wx binary will be at: zig-out/bin/wx
./zig-out/bin/wx --help

# Optional: Install to system
sudo cp zig-out/bin/wx /usr/local/bin/
```
Build the WASI CLI example:
```bash
zig build opcodes-wasm
# Creates: zig-out/bin/opcodes_cli.wasm
```
```bash
# Run a WebAssembly file
wx program.wasm

# Run with arguments
wx program.wasm arg1 arg2 arg3

# Enable debug output
wx --debug program.wasm

# Enable JIT compilation
wx --jit program.wasm

# Enable AOT (Ahead-of-Time) compilation
wx --aot program.wasm -o program.exe

# Show version
wx --version

# Show help
wx --help
```
wx now supports ultra-fast AOT compilation that outperforms both Wasmtime and Wasmer (a sample input you can compile yourself is sketched after the list of benefits below):

```bash
# Compile WASM to native executable
wx --aot examples/fibonacci.wasm -o fibonacci.exe

# Compile with debug output
wx --aot --debug examples/math.wasm -o math.exe

# Run the compiled native executable directly
./fibonacci.exe
```
AOT Performance Benefits:
- Instant startup: No interpretation or JIT warmup needed
- Whole-module optimization: Analyzes entire module for maximum performance
- Native code generation: Directly generates x64 machine code
- Template-based compilation: Uses optimized code templates for common patterns
- Zero overhead: Eliminates the interpreter completely
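As a quick self-contained input for `--aot`, a recursive Fibonacci compiled from Zig works well. The file below is a hypothetical `fib.zig`, not the repository's `examples/fibonacci.wasm`, and the exact `zig build-exe` flags may vary with your Zig version:

```zig
// fib.zig -- hypothetical test input (not the repo's examples/fibonacci.wasm).
// Build: zig build-exe fib.zig -target wasm32-wasi -O ReleaseFast
// Then:  wx --aot fib.wasm -o fib.exe && ./fib.exe
const std = @import("std");

fn fib(n: u32) u64 {
    return if (n < 2) n else fib(n - 1) + fib(n - 2);
}

pub fn main() void {
    std.debug.print("fib(35) = {d}\n", .{fib(35)});
}
```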
See AOT.md for comprehensive AOT compilation documentation, benchmarks, and technical details.
```bash
# Run the opcodes CLI
wx zig-out/bin/opcodes_cli.wasm --list
wx zig-out/bin/opcodes_cli.wasm i32.add 5 3

# Run example programs
wx examples/hello.wasm
wx examples/fibonacci.wasm 10
wx examples/math.wasm

# Compile examples with AOT
wx --aot examples/hello.wasm -o hello.exe
wx --aot examples/fibonacci.wasm -o fib.exe
```
Compare wx against other runtimes:
```bash
# Comprehensive benchmark suite (recommended)
cd bench && python3 benchmark.py

# Quick shell-based benchmark (requires wasmtime and/or wasmer)
bash bench/run.sh

# Legacy extended benchmark suite
python3 bench_extended.py
```
All benchmark scripts gracefully handle missing runtimes (wasmer/wasmtime) and will test wx performance with whatever is available on your system.
| Feature | wx | Wasmer | Wasmtime |
|---|---|---|---|
| WebAssembly 1.0 Core | ✅ | ✅ | ✅ |
| WASI Preview 1 | ✅ (40+ syscalls) | ✅ | ✅ |
| File I/O | ✅ | ✅ | ✅ |
| Networking | ✅ | ✅ | ✅ |
| Multi-value | ✅ | ✅ | ✅ |
| Bulk Memory | ✅ | ✅ | ✅ |
| JIT Compilation | Experimental | ✅ | ✅ |
| AOT Compilation | ✅ FASTER | ✅ | ✅ |
| WASM4 Console | ✅ | ❌ | ❌ |
| Zero dependencies | ✅ | ❌ | ❌ |
| Single binary | ✅ | ❌ | ❌ |
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
```bash
# Clone and build
git clone https://github.com/clpi/wx.git
cd wx
zig build

# Run tests
zig build test

# Run benchmarks
bash bench/run.sh
```
- WASM4 Support - Fantasy console support
- Benchmarking Guide - Performance testing
- Contributing Guide - Development guidelines
- Changelog - Version history
- Homebrew Installation - Homebrew installation guide
- Nix Installation - Nix/NixOS installation guide
- Performance: Match or exceed the performance of established runtimes
- Simplicity: Keep the codebase clean, readable, and maintainable
- Correctness: Properly implement WebAssembly and WASI specifications
- Minimal dependencies: Rely primarily on Zig's standard library
- Educational: Serve as a learning resource for WebAssembly implementation
This project is licensed under the MIT License - see the LICENSE file for details.
Built with Zig - a general-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
- 🐛 Report bugs
- 💡 Request features
- 💬 Join discussions
wx - A high-performance WebAssembly runtime that proves simplicity and speed can coexist. ⚡