Always sanitize user input.
Langroid executes Python code generated by Large Language Models (LLMs), for example through TableChatAgent and LanceDocChatAgent. While this enables powerful data analysis capabilities, it is dangerous if exposed to untrusted input without safeguards: a malicious user can craft input that induces the LLM to generate harmful code, which is then executed, potentially resulting in sensitive data exposure, denial of service, or complete system compromise.
If your LLM application accepts untrusted input, implement input sanitization and sandboxing to mitigate these risks.
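Below is a minimal, illustrative sketch of pre-submission input sanitization. The pattern list and the commented-out agent call are assumptions for illustration only, not part of Langroid's API, and a denylist like this is not a complete defense on its own.

```python
import re

# Illustrative denylist of patterns that often indicate attempts to inject
# code or escape a data-analysis workflow. This is NOT exhaustive.
SUSPICIOUS_PATTERNS = [
    r"\bimport\s+(os|sys|subprocess|shutil|socket)\b",
    r"__\w+__",               # dunder access, e.g. __import__, __subclasses__
    r"\bopen\s*\(",           # direct file access
    r"\beval\s*\(|\bexec\s*\(",
]

def sanitize_query(user_query: str, max_len: int = 2000) -> str:
    """Reject queries that look like code-injection attempts before they
    reach the LLM or the code-execution layer."""
    if len(user_query) > max_len:
        raise ValueError("Query too long")
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, user_query, flags=re.IGNORECASE):
            raise ValueError("Query contains a disallowed pattern")
    return user_query

# Hypothetical usage with an agent (agent construction omitted):
# safe_query = sanitize_query(incoming_request_text)
# response = agent.llm_response(safe_query)
```

Pattern-based checks like this are easy to bypass, so treat them only as a first layer; for real isolation, run any generated code in a separate process or container with strict timeouts and resource limits.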
Security updates are provided for Langroid versions >= 0.18.x.
If you discover a security vulnerability in this repository, please report it privately. Security issues should not be reported using GitHub Issues or any other public forum.
To report a security vulnerability privately:
- Go to the repository's Security Advisories section.
- Click on "Report a vulnerability".
- Provide the necessary details about the vulnerability.
Your report will remain confidential, and we will respond as quickly as possible (usually within 48 hours) to evaluate the issue and work on a fix. We greatly appreciate your responsible disclosure.
Please do not report vulnerabilities through GitHub Issues, discussions, or other public channels as this could expose the issue to a wider audience before it is resolved.
Once a security vulnerability is reported, we will work to:
- Acknowledge the report within 48 hours.
- Investigate and confirm the issue.
- Develop a patch or mitigation strategy.
- Publish the fix and publicly disclose the advisory once the issue is resolved.