This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The Artificial Intelligence Security Verification Standard (AISVS) focuses on providing developers, architects, and security professionals with a structured framework to evaluate and verify the security and ethical considerations of AI-driven applications. Modeled after existing OWASP standards (such as the ASVS for web applications), AISVS will define categories of requirements for areas including:
- Data Integrity and Privacy: Ensuring the integrity of training data, verifying minimal and privacy-respecting data collection, and monitoring for data poisoning or bias.
- Model Security: Guidance on tamper resistance, secure model distribution, and policy enforcement.
- Model Explainability and Transparency: Requirements that encourage interpretability and accountability.
- Infrastructure and Deployment Security: Verification of containerization, cloud security, and code dependencies.
- Ethical and Compliance Considerations: Requirements for fairness, bias mitigation, and regulatory compliance where applicable.
Please log an issue if you find a bug or have an idea. We may subsequently ask you to open a pull request based on the discussion in the issue.
The project is led by Jim Manico and Russ Memisyazici.