AI and ML Threat Modelling Resources
Threat modelling AI systems is a critical practice for understanding and mitigating potential vulnerabilities. This process involves identifying threats, assessing the risks they pose, and developing strategies to defend against them. By proactively analysing the ways in which an AI system can be compromised, organizations can bolster their defences and ensure the integrity, confidentiality, and availability of their AI-driven solutions. Effective threat modelling not only addresses known attack vectors but also anticipates emerging threats, fostering a robust security posture in the ever-evolving landscape of AI technology.
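The identify–assess–mitigate cycle described above can be sketched in code. The following is a minimal illustration, not a prescribed methodology: the threat names, 1–5 scales, and the simple likelihood × impact scoring matrix are assumptions chosen for the example, though that style of risk matrix is common in threat modelling practice.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str          # e.g. "Training data poisoning" (illustrative)
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact scoring, a common risk-matrix approach
        return self.likelihood * self.impact

# Rank identified threats so mitigation effort targets the highest risk first
threats = [
    Threat("Prompt injection", likelihood=5, impact=4),
    Threat("Model theft", likelihood=2, impact=5),
    Threat("Training data poisoning", likelihood=3, impact=5),
]
for t in sorted(threats, key=lambda t: t.risk_score, reverse=True):
    print(f"{t.risk_score:>2}  {t.name}")
```

Keeping the assessment in a structured form like this makes it easy to re-score threats as the system or the threat landscape changes.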
ATLAS Matrix
ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally accessible, living knowledge base of adversary tactics and techniques against AI-enabled systems based on real-world attack observations and realistic demonstrations from AI red teams and security groups. The ATLAS Matrix shows the progression of tactics used in attacks.
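The matrix's column structure (tactics ordered by attack progression, each containing techniques) maps naturally onto a simple lookup table. This is a hand-picked sketch with a handful of tactic and technique names drawn from the public ATLAS Matrix; it is illustrative only and nowhere near the full knowledge base.

```python
# A few ATLAS-style tactics (attack-progression order) mapped to example
# techniques. Names follow the public ATLAS Matrix, but the selection and
# this schema are assumptions made for illustration.
atlas_matrix = {
    "Reconnaissance": ["Search for Victim's Publicly Available Research Materials"],
    "ML Model Access": ["ML Model Inference API Access"],
    "ML Attack Staging": ["Craft Adversarial Data"],
    "Exfiltration": ["Exfiltration via ML Inference API"],
    "Impact": ["Evade ML Model"],
}

def techniques_for(tactic: str) -> list[str]:
    """Look up example techniques under a given tactic column."""
    return atlas_matrix.get(tactic, [])

print(techniques_for("ML Attack Staging"))
```

In practice you would consume the official ATLAS data (published by MITRE) rather than hand-coding entries like this.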
The MIT – AI Risk Repository
The AI Risk Repository has three parts:
- The AI Risk Database captures 1000+ risks extracted from 56 existing frameworks and classifications of AI risks
- The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur
- The Domain Taxonomy of AI Risks classifies these risks into 7 domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”)
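A single repository entry can be pictured as a record carrying both taxonomies: the causal dimensions (how, when, and why the risk occurs) alongside its domain and subdomain. The schema below is an assumption made for illustration and is not the repository's actual data format; the example description is invented.

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    description: str   # the risk as extracted from a source framework
    entity: str        # causal taxonomy: who/what causes it, e.g. "Human" or "AI"
    intent: str        # causal taxonomy: e.g. "Intentional" or "Unintentional"
    timing: str        # causal taxonomy: e.g. "Pre-deployment" or "Post-deployment"
    domain: str        # domain taxonomy, e.g. "Misinformation"
    subdomain: str     # domain taxonomy, e.g. "False or misleading information"

# Hypothetical entry showing how the two taxonomies classify one risk
entry = AIRiskEntry(
    description="Model confidently generates fabricated facts",
    entity="AI",
    intent="Unintentional",
    timing="Post-deployment",
    domain="Misinformation",
    subdomain="False or misleading information",
)
print(entry.domain, "/", entry.subdomain)
```

Structuring risks this way is what lets the repository be filtered along either taxonomy, e.g. all post-deployment, unintentional risks, or everything in one domain.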
The OWASP Gen AI Security Project
The OWASP Top 10 for LLM Applications list is a significant undertaking, built on the collective expertise of an international team of more than 500 experts and over 150 active contributors. Contributors come from diverse backgrounds, including AI companies, security companies, ISVs, cloud hyperscalers, hardware providers, and academia.