AI Risks Are Mounting
But along with these opportunities, dangers and risks have surged: AI-enabled scams, deepfakes, and misinformation, as well as risks to privacy and safety (including national security).
For organisations looking to capitalise on AI technology, the clock is running: to keep up with AI regulations, the time to act is now.
Regulations
The EU AI Act is the world’s first comprehensive AI law. It classifies AI systems into four categories according to the risk they pose to users in specific applications: minimal risk, limited risk, high risk, and unacceptable risk. The higher the risk level, the more closely a system is regulated. The European Parliament adopted its position on the Act in June 2023, and it is expected to become law by the end of 2023.
President Biden’s Executive Order, signed in October 2023, sets out guidelines for expanding the ways in which the US will use artificial intelligence to achieve pre-existing goals, while also placing stricter regulations on the private use of AI to manage its potential risks. It tasks US agencies with drafting more detailed regulatory guidelines, which are expected in 2024.
New York City Local Law 144 regulates the use of AI tools in employment decisions. The law focuses on the results of the tool (the impact) rather than how the results are achieved (the process). Other legislatures have proposed similar laws focusing on the impact of tools used for employment purposes, including New York (Proposed): Bill A00567, New Jersey (Proposed): Bill A4909, California (Proposed): Bill AB331, and Massachusetts (Proposed): Bill H.1873.
Colorado SB21-169 protects consumers in Colorado from insurance practices that unfairly discriminate on the basis of race, colour, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. The law makes insurers accountable for testing their AI systems - including external consumer data and information sources, algorithms, and predictive models - to avoid unfair discrimination against consumers, and for taking corrective action to address any consumer harms that are discovered.
The Canadian AI and Data Act is designed to provide a meaningful framework to be completed and brought into effect through detailed regulations. While these regulations are still being drafted, the Canadian government has stated its intent for them to build on existing best practices.
The UK’s pro-innovation approach aims to support innovation while ensuring that risks are identified and addressed. It is a proportionate, pro-innovation regulatory framework that focuses on the context in which AI is deployed.
Standards
The NIST AI Risk Management Framework (AI RMF) offers a resource for designing, developing, deploying, or using AI systems to manage the risks of AI and promote trustworthy and responsible development and use of AI systems. Unlike many emerging regulations, the framework is voluntary and use-case agnostic.
The World Ethical Data Foundation has published an open letter with questions and considerations to encourage the release of more ethical AI models. The format is a free online forum where new suggestions and approaches are invited. The questions are organised around the core stages of building AI (training, building, and testing) and the actors involved at each stage, with the aim of reducing silos in the process.
SR 11-7, published by the Federal Reserve, offers supervisory guidance on firms’ model risk management practices. The guidance applies to all banks regulated by the Fed and is thus quickly becoming an industry standard.
The ISO framework for AI was published in February 2023. It offers strategic guidance to businesses on managing the risks connected with developing and using AI. A key benefit of the ISO framework is that it can be customised to any organisation and its business context.