Production-grade MCP server for universal reverse engineering automation.
Connect Claude Desktop, MCP-compatible IDEs, or custom tooling to a complete reverse engineering backend. One server, every RE tool, orchestrated through the Model Context Protocol.
- Features
- Quick Start
- IDE & Client Setup
- Configuration
- Tool Availability
- Architecture
- Security Model
- Testing
- Scripts & Automation
- Usage Examples
- Performance & Limitations
- Troubleshooting
- Contributing
- License
- Binary Parsing: PE/ELF/Mach-O via LIEF with hash computation and suspicious indicator detection
- Disassembly: Multi-backend support including Capstone (always available), radare2, and objdump for x86/x64/ARM/MIPS/RISC-V
- String Extraction: FLOSS integration, regex fallback, 17 classifier patterns (URLs, IPs, crypto, registry keys)
- Entropy Analysis: Shannon entropy with sliding window, per-section analysis, and packing detection
- Symbol Extraction: DWARF, PDB, LIEF universal; function prologue scanning for stripped binaries
- YARA Scanning: Inline rules, file/directory rules, and community rules support
- Capa Integration: ATT&CK mapping, MBC behaviors, capability enumeration
- Decompilation: Ghidra (headless), RetDec, Binary Ninja with caching
- GDB Adapter: Full GDB/MI protocol with breakpoints, stepping, registers, memory, backtrace, and heap inspection
- LLDB Adapter: Native SB API integration for macOS/Linux debugging
- Frida Adapter: Spawn/attach, script injection, function interception, memory scan/dump, and RPC exports
- Code Coverage: DynamoRIO drcov, Frida Stalker block tracing, and coverage analysis
- APK Parsing: Manifest extraction, permission analysis, component enumeration, and resource inspection
- DEX Analysis: Class/method listing, bytecode stats, and string extraction
- Decompilation: jadx/apktool integration, smali disassembly/assembly/patching
- Native Binary Analysis: ARM/AArch64 .so analysis with JNI detection
- Device Interaction: ADB bridge with 12 actions (logcat, install, shell, dumpsys, screenshot)
- Frida for Android: Root bypass, crypto hooking, SSL pinning bypass, API tracing, and memory dump
- Traffic Interception: tcpdump/mitmproxy integration with SSL key extraction
- Repack and Sign: APK rebuild with smali patches, zipalign + apksigner
- Security Scanners: MobSF, Quark-Engine, Semgrep, and manifest vulnerability detection
- Rizin/r2: Automated analysis with 13 actions and binary diffing
- GDB Enhanced: Heap analysis, ROP gadget finding, exploit helpers (pattern create/find, checksec)
- QEMU: User-mode emulation (4 actions) and full system emulation (5 actions)
- Shellcode: Generation, encoding, bad-char analysis, extraction, and emulation testing
- Format String: Offset calculation, write payload generation, GOT overwrite, and address leaking
- Detection: Scan for anti-debug, anti-VM, anti-tamper, and packing indicators
- Bypass Generation: Frida/GDB/patch/LD_PRELOAD scripts for ptrace, IsDebuggerPresent, timing, and VM checks
- Triage: Multi-hash, IoC extraction, suspicious import scoring, and risk assessment
- Sandbox Queries: VirusTotal, Hybrid Analysis, and MalwareBazaar API integration
- YARA Generation: Auto-generate YARA rules from binary artifacts
- Config Extraction: C2 URLs, IPs, domains, encryption keys, and mutexes
- Extraction: binwalk scan/extract, entropy analysis, and filesystem identification
- Vulnerability Scanning: Hardcoded credentials, known CVEs, unsafe functions, and weak crypto
- Base Address Detection: String reference analysis for firmware base address recovery
- PCAP Analysis: tshark-based with 8 actions (summary, flows, DNS, HTTP, TLS, filter, export, IoC)
- Protocol Dissection: Binary structure inference, field boundary detection, and pattern analysis
- Protocol Fuzzing: Mutation-based, boundary testing, field-specific, and template fuzzing
- Packer Detection: UPX, Themida, VMProtect, ASPack, PECompact, MPRESS, and more
- UPX Unpacking: Static unpacking with automatic backup
- Dynamic Unpacking: Frida-based memory dump with OEP detection
- PE Rebuild: Fix section alignments, imports, and entry point after memory dump
- String Deobfuscation: XOR brute force, ROT variants, Base64, RC4, and stack string reconstruction
- Control Flow Flattening Detection: OLLVM-style CFF pattern identification
- Opaque Predicate Detection: Always-true/false branch identification
- angr Integration: Path exploration, constraint solving, CFG generation, and vulnerability scanning
- Triton DSE: Dynamic symbolic execution with concrete and symbolic state
- APK/DEX: Android analysis including manifest, permissions, native libs, and DEX parsing
- .NET IL: Assembly metadata, type/method listing, and IL disassembly
- Java Class: Class file parsing, javap integration, and bytecode disassembly
- WebAssembly: WASM section parsing, import/export extraction, and disassembly
- Hex Tools: Hexdump, pattern search (IDA-style wildcards), and binary diff
- Crypto: Hashing (MD5/SHA/TLSH/ssdeep), XOR analysis, and crypto constant scanning
- Patching: Binary patching with backup and NOP-sled support
- Network: PCAP analysis with protocol stats, DNS extraction, and C2 beacon detection
- Server Status: Version, tool count, cache stats, rate limit stats, and available tools
- Cache Management: View stats, clear cache, and invalidate specific entries
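The entropy analysis described above (Shannon entropy with a sliding window for packing detection) can be sketched as follows. This is an illustrative implementation, not revula's actual API; the function names are made up for the example:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def sliding_entropy(data: bytes, window: int = 256, step: int = 128) -> list[float]:
    """Entropy over overlapping windows; sustained values near 8.0
    across many windows suggest packed or encrypted content."""
    return [shannon_entropy(data[i:i + window])
            for i in range(0, max(1, len(data) - window + 1), step)]

high = shannon_entropy(bytes(range(256)))  # every byte value once -> 8.0
low = shannon_entropy(b"A" * 256)          # single repeated byte -> 0.0
```

Plain code sections of a PE/ELF typically land around 5 to 6.5 bits per byte; a packer pushes whole sections close to 8.0, which is the signal the packing detector looks for.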
- Python 3.11 or later
- Linux recommended (macOS and WSL2 supported)
- pip (or uv/pipx for isolated installs)
# Clone
git clone https://github.com/president-xd/revula.git
cd revula
# Option 1: Automated install (recommended)
bash scripts/install/install_all.sh
# Option 2: Manual install
pip install -e .
# Option 3: Install with all optional dependencies
pip install -e ".[full]"
# Verify installation
python scripts/test/validate_install.py

The automated installer handles Python version checks, virtual environment creation, dependency installation, external tool detection, and configuration file generation.
python -c "from revula.config import get_config, format_availability_report; print(format_availability_report(get_config()))"

This prints a table showing which external tools and Python modules are detected on your system.
Revula uses stdio transport only. The server reads JSON-RPC from stdin and writes to stdout. Every MCP client listed below launches revula as a local subprocess. There is no HTTP server, no SSE endpoint, and no remote connection.
What this means for you:
- Revula must be installed on the same machine where your IDE/client runs.
- If you use a remote server or Docker, you must run both the client and revula inside the same environment (or use SSH piping; see Custom / Other Clients).
- Every client below uses the same `revula` command. The only difference is where you put the config.
Make sure revula is installed and the command works:
# Should print the MCP protocol handshake (Ctrl+C to exit)
revula
# If you installed in a venv, activate it first:
source /path/to/venv/bin/activate
revula
# Or use the full path:
/path/to/venv/bin/revula

If revula is not in your PATH, use the full path in every config below.
Status: Fully supported. This is the primary client.
Config file locations:
| Platform | Path |
|---|---|
| Linux | ~/.config/Claude/claude_desktop_config.json |
| macOS | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Windows | %APPDATA%\Claude\claude_desktop_config.json |
| WSL2 | /mnt/c/Users/<YOU>/AppData/Roaming/Claude/claude_desktop_config.json |
Option A: Automatic setup (recommended)
python scripts/setup/setup_claude_desktop.py

This auto-detects your OS, finds the config file, and merges the revula entry. It creates a backup first.
Option B: Manual setup
Add to your claude_desktop_config.json:
{
"mcpServers": {
"revula": {
"command": "revula",
"args": []
}
}
}

If revula is in a virtualenv:
{
"mcpServers": {
"revula": {
"command": "/home/you/venvs/revula/bin/revula",
"args": []
}
}
}

If using uvx (zero-install):
{
"mcpServers": {
"revula": {
"command": "uvx",
"args": ["revula"]
}
}
}

After editing: Quit and reopen Claude Desktop. Check the MCP tools icon to confirm 107 tools are available.
Status: Fully supported.
Option A: CLI command (recommended)
claude mcp add revula -- revula

Claude Code will start revula as a subprocess when needed.
Option B: Manual config
Edit ~/.claude.json (or ~/.claude/settings.json depending on version):
{
"mcpServers": {
"revula": {
"command": "revula",
"args": []
}
}
}

Status: Supported. Requires the GitHub Copilot extension with MCP support (VS Code 1.99+).
Important: MCP support in VS Code is available through the GitHub Copilot Chat extension. Make sure you have:
- VS Code 1.99 or later
- GitHub Copilot extension installed and active
- MCP enabled in settings:
"chat.mcp.enabled": true
Option A: Workspace config (already included in this repo)
This repo ships with .vscode/mcp.json:
{
"servers": {
"revula": {
"command": "revula",
"args": [],
"env": {}
}
}
}

Just open this project in VS Code and Copilot will discover the MCP server automatically.
Option B: User-level config (global, all projects)
Open VS Code settings (Ctrl+,) → search "mcp" → edit settings.json:
{
"chat.mcp.enabled": true,
"mcp": {
"servers": {
"revula": {
"command": "revula",
"args": [],
"env": {}
}
}
}
}

Option C: Create .vscode/mcp.json in any project
Copy the file from this repo or create it manually:
mkdir -p .vscode
cat > .vscode/mcp.json << 'EOF'
{
"servers": {
"revula": {
"command": "revula",
"args": [],
"env": {}
}
}
}
EOF

After editing: Reload VS Code window (Ctrl+Shift+P → "Developer: Reload Window"). The MCP tools should appear in Copilot Chat.
Status: Supported. Cursor has built-in MCP support.
Config file: ~/.cursor/mcp.json (global) or .cursor/mcp.json (per-project).
This repo ships with .cursor/mcp.json for per-project use.
Option A: Per-project (already included)
The .cursor/mcp.json in this repo:
{
"mcpServers": {
"revula": {
"command": "revula",
"args": []
}
}
}

Option B: Global config
mkdir -p ~/.cursor
cat > ~/.cursor/mcp.json << 'EOF'
{
"mcpServers": {
"revula": {
"command": "revula",
"args": []
}
}
}
EOF

After editing: Restart Cursor. Check Settings → MCP to verify revula appears.
Status: Supported. Windsurf Cascade supports MCP servers.
Config file: ~/.codeium/windsurf/mcp_config.json
mkdir -p ~/.codeium/windsurf
cat > ~/.codeium/windsurf/mcp_config.json << 'EOF'
{
"mcpServers": {
"revula": {
"command": "revula",
"args": []
}
}
}
EOF

After editing: Restart Windsurf. The Cascade panel should show revula tools.
Status: Supported. Continue has MCP support in recent versions.
Config file: ~/.continue/config.json
Add to your existing config.json:
{
"mcpServers": [
{
"name": "revula",
"command": "revula",
"args": []
}
]
}

If you use config.yaml:
mcpServers:
  - name: revula
    command: revula
    args: []

After editing: Restart your IDE. Continue should detect the MCP server.
Status: Supported. Zed has native MCP support via context servers.
Config file: ~/.config/zed/settings.json (Linux/macOS)
Add to your settings.json:
{
"context_servers": {
"revula": {
"command": "revula",
"args": []
}
}
}

After editing: Restart Zed. The context server should appear in the Assistant panel.
Any MCP client that supports stdio transport will work with revula. The protocol is standard JSON-RPC over stdin/stdout.
Direct invocation:
# Start the server (reads from stdin, writes to stdout, logs to stderr)
revula

Over SSH (remote machine):
# Run revula on a remote machine with stdio piped through SSH
ssh user@remote-host revula

In Docker:
FROM python:3.11-slim
RUN pip install revula
# The entrypoint speaks stdio MCP
ENTRYPOINT ["revula"]

docker build -t revula .
# Use docker as the command in your client config:
# "command": "docker", "args": ["run", "-i", "--rm", "revula"]

Python client (programmatic):
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
async def main():
server_params = StdioServerParameters(command="revula", args=[])
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
await session.initialize()
tools = await session.list_tools()
print(f"Connected: {len(tools.tools)} tools available")
# Call a tool
result = await session.call_tool("re_entropy", {"binary_path": "/bin/ls"})
print(result)
asyncio.run(main())

Configure any client with one command:
# Interactive: pick a client from the menu
python scripts/setup/setup_ide.py
# Configure a specific client
python scripts/setup/setup_ide.py --client vscode
python scripts/setup/setup_ide.py --client cursor
python scripts/setup/setup_ide.py --client claude-desktop
python scripts/setup/setup_ide.py --client windsurf
python scripts/setup/setup_ide.py --client zed
# Configure all detected clients at once
python scripts/setup/setup_ide.py --all
# Print all configs without writing files (review first)
python scripts/setup/setup_ide.py --print-only
# Override the command (e.g., full path to venv)
python scripts/setup/setup_ide.py --client cursor --command "/home/you/venv/bin/revula"

The script auto-detects how to run revula (PATH, uvx, or python -m), creates backups before writing, and merges into existing configs.
Create ~/.revula/config.toml (or use the interactive generator):
python scripts/setup/setup_config_toml.py

Example configuration:
[tools]
ghidra_install = "/opt/ghidra"
r2_path = "/usr/bin/radare2"
ida_path = "/opt/idapro"
jadx_path = "/usr/local/bin/jadx"
[security]
max_memory_mb = 512
timeout_seconds = 60
allowed_dirs = ["/home/user/samples", "/tmp/analysis"]
[cache]
dir = "~/.revula/cache"
max_entries = 256
ttl_seconds = 600
[rate_limit]
global_rpm = 120
per_tool_rpm = 30

Environment variables override config file values:
export GHIDRA_INSTALL=/opt/ghidra # Ghidra installation path
export REVULA_TIMEOUT=120 # Subprocess timeout (seconds)
export REVULA_MAX_MEMORY=1024 # Memory limit (MB)
export VT_API_KEY=your-key-here # VirusTotal API key
export HA_API_KEY=your-key-here      # Hybrid Analysis API key

revula degrades gracefully. Tools that depend on missing backends return clear error messages instead of crashing. Here is what each category needs:
| Category | Always Available | Needs External Tool | Needs Python Module |
|---|---|---|---|
| Static | PE/ELF parsing, entropy, strings | objdump, radare2, ghidra, retdec, floss, capa | capstone ✓, lief ✓, pefile ✓, yara ✓ |
| Dynamic | | gdb, lldb | frida |
| Android | APK manifest/DEX parsing (via zipfile) | jadx, apktool, adb, zipalign, apksigner, tcpdump | frida, quark-engine |
| Platform | | rizin, radare2, gdb, qemu-user, qemu-system-* | r2pipe, binaryninja |
| Exploit | Format string calculator | | pwntools, keystone-engine |
| Anti-Analysis | Pattern scanning (via lief + capstone) | | |
| Malware | File hashing, IoC extraction, risk scoring | | yara ✓, ssdeep, tlsh |
| Firmware | | binwalk, sasquatch | |
| Protocol | Binary protocol dissection, fuzzing | tshark | scapy |
| Unpacking | Packer signature detection | upx | frida |
| Deobfuscation | XOR/ROT/Base64 deobfuscation | | capstone ✓ |
| Symbolic | | | angr, triton |
| Binary Formats | | aapt, javap, monodis, wasm2wat | |
| Utilities | Hex dump, binary diff, patching | tshark | scapy, ssdeep, tlsh |
✓ = included in core dependencies (always installed).
# Frida (dynamic instrumentation)
pip install frida frida-tools
# angr (symbolic execution, large install ~2 GB)
pip install angr
# radare2 bindings
pip install r2pipe
# Fuzzy hashing
pip install ssdeep tlsh
# Network analysis
pip install scapy
# Everything at once
pip install -e ".[full]"

# Core analysis
sudo apt install gdb binutils radare2 binwalk upx-ucl
# Android RE
sudo apt install apktool jadx android-sdk adb zipalign apksigner
# Network
sudo apt install tshark
# Ghidra: download from https://ghidra-sre.org/
export GHIDRA_INSTALL=/opt/ghidra

src/revula/              # 19,400+ LOC across 63 Python files
├── __init__.py # Version (__version__ = "0.1.0")
├── config.py # Tool detection, TOML config, env var loading
├── sandbox.py # Secure subprocess execution, path validation
├── session.py # Session lifecycle manager (debuggers, Frida)
├── server.py # MCP server entrypoint (stdio transport)
├── cache.py # LRU result cache with TTL
├── rate_limit.py # Token-bucket rate limiter
└── tools/
├── __init__.py # Tool registry + @register_tool decorator
├── static/ # 8 files: PE/ELF, disasm, strings, entropy, symbols, YARA, capa, decompile
├── dynamic/ # 4 files: GDB, LLDB, Frida, coverage
├── android/ # 9 files: APK, DEX, decompile, native, device, frida, traffic, repack, scanners
├── platform/ # 3 files: Rizin, GDB-enhanced, QEMU
├── exploit/ # 2 files: shellcode generation, format string exploitation
├── antianalysis/ # 1 file: anti-debug/VM detection and bypass generation
├── malware/ # 1 file: triage, sandbox queries, YARA gen, config extraction
├── firmware/ # 1 file: extraction, vuln scanning, base address detection
├── protocol/ # 1 file: PCAP analysis, protocol dissection, fuzzing
├── deobfuscation/ # 1 file: string deobfuscation, CFF, opaque predicates
├── unpacking/ # 1 file: packer detection, UPX, dynamic unpack, PE rebuild
├── symbolic/ # 1 file: angr + Triton
├── binary_formats/ # 1 file: .NET, Java, WASM
├── utils/ # 4 files: hex, crypto, patching, network
└── admin/ # 1 file: server status, cache management
- Startup. `server.py` loads `config.py`, which probes the system for external tools (via `shutil.which`) and Python modules (via `importlib.util.find_spec`). Results are cached in a `ServerConfig` singleton.
- Tool Registration. Each tool file uses `@TOOL_REGISTRY.register()` to declare its name, description, JSON Schema, and async handler. Tools self-register on import.
- Request Dispatch. When a `tools/call` request arrives, the server looks up the handler in `TOOL_REGISTRY`, validates arguments against the JSON Schema, checks rate limits, checks the result cache, and dispatches to the handler.
- Subprocess Execution. All external tool invocations go through `sandbox.safe_subprocess()`, which enforces `shell=False`, sets `RLIMIT_AS` and `RLIMIT_CPU`, validates paths, and captures stdout/stderr.
- Result Caching. Deterministic operations (disassembly, parsing) are cached with a configurable TTL. Mutating operations (patching, Frida injection) bypass the cache automatically.
- Session Management. Long-lived debugger and Frida sessions are tracked by `SessionManager`, with automatic cleanup after 30 minutes of idle time.
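The registration and dispatch flow can be sketched roughly like this. The class and decorator below are simplified stand-ins that mirror the description, not revula's actual source:

```python
import asyncio

class ToolRegistry:
    """Decorator-based registry: name + JSON Schema + async handler."""
    def __init__(self):
        self._tools = {}

    def register(self, name: str, schema: dict):
        def wrap(handler):
            self._tools[name] = {"schema": schema, "handler": handler}
            return handler
        return wrap

    async def dispatch(self, name: str, args: dict):
        entry = self._tools.get(name)
        if entry is None:
            raise KeyError(f"unknown tool: {name}")
        # The real server also validates args against the JSON Schema,
        # checks rate limits, and consults the result cache here.
        return await entry["handler"](args)

TOOL_REGISTRY = ToolRegistry()

@TOOL_REGISTRY.register("re_entropy", {
    "type": "object",
    "properties": {"binary_path": {"type": "string"}},
})
async def entropy_tool(args: dict):
    # Handlers return MCP content blocks: a list of dicts.
    return [{"type": "text", "text": f"entropy of {args['binary_path']}"}]

result = asyncio.run(TOOL_REGISTRY.dispatch("re_entropy", {"binary_path": "/bin/ls"}))
```

Because tools self-register at import time, adding a new tool is just a new module with a decorated handler; no central dispatch table needs editing.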
| Component | Purpose | Key Detail |
|---|---|---|
| ResultCache | Avoid redundant subprocess calls | LRU, 256 entries, 10-minute TTL |
| RateLimiter | Prevent resource exhaustion | Token-bucket, 120 global / 30 per-tool RPM |
| ToolRegistry | Decorator-based tool dispatch | JSON Schema validation before handler call |
| SessionManager | Debugger/Frida persistence | Auto-cleanup after 30 min idle |
| sandbox.py | Secure execution layer | shell=False, RLIMIT enforcement, path validation |
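The ResultCache row above (LRU with TTL) maps onto a small, well-known pattern. A minimal sketch, assuming `OrderedDict` ordering as the recency list; this is illustrative, not revula's implementation:

```python
import time
from collections import OrderedDict

class ResultCache:
    """LRU cache with per-entry TTL, in the spirit of the component above."""
    def __init__(self, max_entries: int = 256, ttl_seconds: float = 600.0):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._data: OrderedDict[str, tuple[float, object]] = OrderedDict()

    def get(self, key: str):
        item = self._data.get(key)
        if item is None:
            return None
        ts, value = item
        if time.monotonic() - ts > self.ttl:
            del self._data[key]          # entry expired
            return None
        self._data.move_to_end(key)      # mark most-recently-used
        return value

    def put(self, key: str, value) -> None:
        self._data[key] = (time.monotonic(), value)
        self._data.move_to_end(key)
        while len(self._data) > self.max_entries:
            self._data.popitem(last=False)   # evict least-recently-used

cache = ResultCache(max_entries=2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")        # touching "a" makes "b" the LRU entry
cache.put("c", 3)     # capacity exceeded: "b" is evicted
```

Keying such a cache on (tool name, argument hash, file hash) is what makes it safe to serve repeated disassembly or parsing calls instantly while mutating tools bypass it.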
revula operates on the principle that user-supplied arguments are untrusted. The following hardening measures are applied:
- No `shell=True`: Every subprocess call uses `shell=False` with explicit argument lists. This is enforced by a CI test (`test_no_shell_true`) that scans every source file.
- No `eval()`/`exec()`: No dynamic code evaluation of user input.
- No f-string injection: User-supplied values are never interpolated into `python3 -c` code strings. Values are passed via `sys.argv`, stdin, or environment variables. Enforced by `test_no_fstring_in_subprocess_python_code`.
- JavaScript escaping: All user-controlled values interpolated into Frida JavaScript strings pass through `_js_escape()`, which escapes backslashes, quotes, newlines, and other injection vectors.
- Resource limits: Every subprocess gets `RLIMIT_AS` (512 MB default) and `RLIMIT_CPU` (60 s default) via `resource.setrlimit()`.
- Timeout enforcement: `asyncio.wait_for()` wraps all subprocess calls.
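The subprocess hardening above can be sketched with the standard library alone. This is an illustrative POSIX-only version (the `resource` module does not exist on Windows, and `RLIMIT_AS` behaves inconsistently on macOS); the function name mirrors the description, not the actual source:

```python
import resource
import subprocess

def safe_subprocess(argv: list[str], timeout: float = 60.0,
                    max_memory_mb: int = 512) -> subprocess.CompletedProcess:
    """Run an external tool with shell=False plus memory/CPU limits."""
    def limit_resources():
        # Runs in the child between fork() and exec().
        mem = max_memory_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (mem, mem))            # cap address space
        resource.setrlimit(resource.RLIMIT_CPU,
                           (int(timeout), int(timeout)))              # cap CPU seconds
    return subprocess.run(
        argv,                  # explicit argument list, never a shell string
        shell=False,
        capture_output=True,   # collect stdout/stderr for the tool result
        timeout=timeout,       # wall-clock timeout on top of the CPU limit
        preexec_fn=limit_resources,
    )

proc = safe_subprocess(["echo", "hello"])
```

The key property is that user-supplied values only ever appear as individual `argv` entries, so shell metacharacters in a filename or flag cannot change what gets executed.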
- Fail-closed: `validate_path()` rejects all paths when no `allowed_dirs` are configured (falls back to `get_config().security.allowed_dirs`). It does not silently pass.
- Traversal blocked: `..` components are rejected after `os.path.realpath()` resolution.
- Absolute paths required: Relative paths are rejected.
- Validated everywhere: All file-accepting tool handlers call `validate_path()` before any file I/O.
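The fail-closed validation rules above amount to a short containment check. A minimal sketch (not the actual `validate_path()` source): resolve symlinks and `..` with `realpath`, then require the result to sit under an allowlisted directory, refusing everything when the allowlist is empty:

```python
import os

def validate_path(path: str, allowed_dirs: list[str]) -> str:
    """Fail-closed path check: no allowlist means no access at all."""
    if not allowed_dirs:
        raise PermissionError("no allowed_dirs configured; refusing all paths")
    if not os.path.isabs(path):
        raise PermissionError(f"relative path rejected: {path}")
    real = os.path.realpath(path)   # resolves symlinks and '..' components
    for base in allowed_dirs:
        base_real = os.path.realpath(base)
        if os.path.commonpath([real, base_real]) == base_real:
            return real             # contained within an allowed directory
    raise PermissionError(f"path {path} is not within allowed directories")

validate_path("/tmp/analysis/sample.bin", ["/tmp/analysis"])   # accepted
# validate_path("/tmp/analysis/../../etc/passwd", ["/tmp/analysis"])  -> PermissionError
```

Doing the containment check on the realpath-resolved value is what defeats both `..` traversal and symlink escapes in one step.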
- Script size limit: Frida scripts are capped at 1 MB to prevent memory exhaustion.
- Memory dump limit: Memory dumps are capped at 100 MB.
- JS injection prevention: Class names, method names, module names, and other user-supplied values are escaped before interpolation into JavaScript templates.
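The escaping layer described above can be sketched as a per-character mapping. This is an illustrative stand-in for `_js_escape()`, not the real function; the replacement table covers the classic string-literal breakout vectors:

```python
def js_escape(value: str) -> str:
    """Escape a user-supplied string for embedding inside a quoted
    JavaScript string literal (illustrative version)."""
    replacements = {
        "\\": "\\\\",
        "'": "\\'",
        '"': '\\"',
        "\n": "\\n",
        "\r": "\\r",
        "\u2028": "\\u2028",   # JS line separators also terminate string literals
        "\u2029": "\\u2029",
    }
    # Each character is mapped exactly once, so escapes are never applied twice.
    return "".join(replacements.get(ch, ch) for ch in value)

# A hostile "class name" can no longer break out of a Frida template:
payload = "a'); Process.exit(); //"
script = f"Java.use('{js_escape(payload)}');"
```

After escaping, the quote in the payload becomes `\'` inside the generated script, so the injected `Process.exit()` stays inert text inside the string literal instead of executing.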
- No `tempfile.mktemp()`: All temporary files use `tempfile.NamedTemporaryFile()` or `tempfile.mkdtemp()` to prevent TOCTOU race conditions.
- No hardcoded `/tmp` paths: All temporary paths use the `tempfile` module.
- Global limit: 120 requests per minute (configurable).
- Per-tool limit: 30 requests per minute (configurable).
- Cache TTL: 10-minute TTL on read-only results, 256-entry LRU.
- Session TTL: Idle sessions auto-cleaned after 30 minutes.
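The token-bucket scheme behind the request limits above works like this. A minimal sketch with made-up names, configured for the default 120 RPM:

```python
import time

class TokenBucket:
    """Token bucket: `rpm` requests per minute, refilled continuously,
    so short bursts up to the capacity are allowed."""
    def __init__(self, rpm: int):
        self.capacity = float(rpm)
        self.tokens = float(rpm)          # start full: allow an initial burst
        self.fill_rate = rpm / 60.0       # tokens added per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, clamped to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rpm=120)
granted = sum(bucket.allow() for _ in range(200))  # rapid burst of 200 calls
```

Roughly the first 120 calls of the burst succeed immediately; the rest are refused until the bucket refills at 2 tokens per second. Layering one global bucket over per-tool buckets gives the 120/30 split described above.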
# Run all 161 tests
python -m pytest tests/ --timeout=30
# With coverage
python -m pytest tests/ --cov=revula --cov-report=html --timeout=30
# Verbose output
python -m pytest tests/ -v --timeout=30
# Specific test suites
python -m pytest tests/test_infra.py -v # Cache, rate limiter, sessions
python -m pytest tests/test_core.py -v # Config, sandbox, tool registry
python -m pytest tests/test_static.py -v # Static analysis tools
python -m pytest tests/test_android.py -v # Android module tests
python -m pytest tests/test_tools_new.py -v # Exploit, malware, firmware, protocol, etc.
python -m pytest tests/test_security.py -v # Security invariant tests
# Using the test runner script
bash scripts/test/run_tests.sh

| Suite | Tests | Covers |
|---|---|---|
| test_infra.py | Cache, rate limiter, session manager | Infrastructure correctness |
| test_core.py | Config loading, sandbox, tool registry | Core module behavior |
| test_static.py | Entropy, hex, crypto, strings, symbols | Static analysis tools |
| test_android.py | APK parse, DEX, device, Frida Android | Android module tests |
| test_tools_new.py | Exploit, malware, firmware, protocol, antianalysis, platform, deobfuscation, symbolic, unpacking, binary formats | All remaining tool categories |
| test_security.py | shell=True scan, injection scan, mktemp scan, hardcoded /tmp scan, path validation, JS escaping, shellcode validation | Security regression tests |
The `TestVulnerabilityHardeningV3` suite in `test_security.py` enforces:

- No f-string code injection: Scans all source files for `"-c"` arguments containing f-strings.
- No `tempfile.mktemp()`: Prevents TOCTOU race conditions.
- No hardcoded `/tmp/` paths: Enforces use of the `tempfile` module.
- Fail-closed path validation: Verifies `validate_path()` rejects paths when `allowed_dirs` is empty.
- Frida JS escaping: Verifies `_js_escape()` blocks injection payloads.
- Shellcode hex validation: Verifies non-hex input is rejected, not passed to subprocess.
All scripts are in scripts/ and are fully implemented:
| Script | Purpose |
|---|---|
| scripts/install/install_all.sh | Master installer: Python check, venv, deps, external tools, config |
| scripts/install/install_verify.sh | Post-install verification: checks all dependencies and paths |
| Script | Purpose |
|---|---|
| scripts/setup/setup_ide.py | Universal IDE/client configurator for Claude Desktop, VS Code, Cursor, Windsurf, Zed, and Continue |
| scripts/setup/setup_claude_desktop.py | Claude Desktop-specific auto-configurator (legacy, still functional) |
| scripts/setup/setup_config_toml.py | Interactive config.toml generator |
| scripts/setup/setup_android_device.sh | Prepare an Android device for RE (root, frida-server, certs) |
| Script | Purpose |
|---|---|
| scripts/test/run_tests.sh | Run full test suite with coverage |
| scripts/test/validate_install.py | Comprehensive installation validator (526 lines) |
| scripts/dev/add_tool.py | Scaffold a new tool module (creates file, registers, adds test) |
| scripts/dev/lint_and_type.sh | Run ruff + mypy |
| scripts/utils/download_frida_server.py | Download frida-server for a target architecture |
Ask Claude: "Analyze this binary for me: /home/user/samples/malware.exe"
Behind the scenes, Claude can call:
- `re_pe_elf` to parse PE headers, sections, imports, and exports
- `re_strings` to extract and classify strings (URLs, IPs, crypto constants)
- `re_entropy` to check for packing (high entropy sections)
- `re_yara_scan` to scan with YARA rules
- `re_capa_scan` to map to ATT&CK techniques
Ask Claude: "Debug /home/user/crackme and find the password check"
Claude can orchestrate:
- `re_gdb` with action `start` to launch the binary under GDB
- `re_disasm` to disassemble key functions
- `re_gdb` with action `breakpoint` to set breakpoints at comparison instructions
- `re_gdb` with actions `continue` and `registers` to run and inspect state
Ask Claude: "Analyze this APK for security issues: /home/user/app.apk"
Claude can call:
- `re_apk_parse` to extract manifest, permissions, and components
- `re_dex_analyze` to list classes and find suspicious methods
- `re_android_decompile` to decompile with jadx
- `re_android_scanner` to run security scanners
- `re_antianalysis_detect` to check for anti-tampering
Ask Claude: "Triage this suspected malware sample"
Claude can call:
- `re_malware_triage` for hashes, IoCs, import analysis, and risk score
- `re_malware_config` to extract C2 URLs and encryption keys
- `re_malware_yara_gen` to generate a YARA rule for the sample
- `re_malware_sandbox` to query VirusTotal/Hybrid Analysis
With just the core install (pip install -e .), you get full functionality for:
- PE/ELF/Mach-O parsing and header analysis
- Multi-architecture disassembly (via Capstone)
- String extraction and classification
- Shannon entropy analysis and packing detection
- YARA rule scanning
- Binary patching
- Hex dump and pattern search
- File hashing (MD5, SHA-1, SHA-256)
- Format string payload calculation
- XOR/ROT/Base64 deobfuscation
- Anti-analysis pattern detection
These tools produce clear "tool not found" errors when backends are missing:
- Decompilation requires Ghidra, RetDec, or Binary Ninja
- Dynamic analysis requires GDB, LLDB, or Frida
- Android RE requires jadx, apktool, and ADB
- Symbolic execution requires angr (large dependency, ~2 GB)
- Network analysis requires tshark or scapy
- Firmware extraction requires binwalk
- Startup: ~1 second (probes the system for available tools via `shutil.which` and `importlib.util.find_spec`)
- Static analysis: Sub-second for most operations on files under 100 MB
- Disassembly: Capstone disassembles ~1 MB/s; radare2 adds full analysis overhead
- Subprocess calls: Each external tool invocation has ~50-200 ms overhead from process spawn
- Caching: Repeat calls to deterministic tools (disassembly, parsing) return instantly from cache
- Rate limiting: 120 requests/minute global, 30/minute per tool (configurable)
- No Windows native support. Designed for Linux. macOS works for most tools. Windows requires WSL2.
- stdio transport only. There is no HTTP/SSE server. Revula must run on the same machine as your IDE (or be piped via SSH/Docker). This is a deliberate design choice for security: MCP over stdio is simpler and avoids exposing a network socket.
- No GUI. This is a headless MCP server. Use Claude Desktop, VS Code Copilot, Cursor, or another MCP client for the interface.
- Large binary analysis. Files over 500 MB may hit the default memory limit (512 MB). Increase via `REVULA_MAX_MEMORY`.
- angr install size. The `angr` optional dependency is ~2 GB and takes several minutes to install.
- Frida version coupling. Frida client and server versions must match exactly. Use `scripts/utils/download_frida_server.py` to get the right version.
- Single-user design. The server handles one MCP client at a time via stdio. There is no multi-tenant isolation. Each IDE/client spawns its own server process.
- IDA Pro integration. Requires IDA Pro with the REST API plugin. Not included.
# Check Python version (need 3.11+)
python --version
# Check MCP is installed
python -c "import mcp; print(mcp.__version__)"
# Run with debug logging
REVULA_LOG_LEVEL=DEBUG revula

# Check what's available
python -c "from revula.config import get_config, format_availability_report; print(format_availability_report(get_config()))"
# The report shows ✓/✗ for every external tool and Python module.
# Install what you need and restart the server.

Error: Path /some/path is not within allowed directories
Add the directory to your config:
[security]
allowed_dirs = ["/home/user/samples", "/tmp/analysis", "/some/path"]

Alternatively, set the value via environment variable. The server will use the config file's allowed_dirs as a fallback.
Error: Rate limit exceeded for tool re_disasm
Increase limits in config:
[rate_limit]
global_rpm = 240
per_tool_rpm = 60

# Check Frida version match
frida --version
frida-server --version # on device
# Download matching server
python scripts/utils/download_frida_server.py --arch arm64

# Run with verbose output
python -m pytest tests/ -v --timeout=30 --tb=long
# Clear bytecode cache (fixes stale imports)
find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null
python -m pytest tests/ --timeout=30

# Run the device setup script
bash scripts/setup/setup_android_device.sh
# Manual check
adb devices
adb shell id  # should show root or shell

Use the scaffold generator:

python scripts/dev/add_tool.py

This creates the tool file, registers it in the category __init__.py, and generates a test stub.
# Lint and type-check
bash scripts/dev/lint_and_type.sh
# Run full test suite
python -m pytest tests/ --timeout=30 -q
# Validate install
python scripts/test/validate_install.py

- Every tool handler is `async` and returns `list[dict]` (MCP content blocks).
- All subprocess calls go through `sandbox.safe_subprocess()`.
- All file paths must be validated via `sandbox.validate_path()`.
- No `shell=True`, no `eval()`, no f-string interpolation into subprocess code.
- Every new tool needs at least one test.
Released under the GNU General Public License. See LICENSE for details.