The European Economic and Social Committee (EESC) suggests that the EU should develop a certification for trustworthy AI applications, to be delivered by an independent body after testing the products for key requirements such as resilience, safety, and absence of prejudice, discrimination or bias.
The proposal was put forward in two recent EESC opinions assessing the European Commission's ethical guidelines on AI – one covering the communication on Building trust in human-centric artificial intelligence as a whole, the other its specific implications for the automotive sector. Both opinions stress that such a certification would go a long way towards increasing public trust in artificial intelligence (AI) in Europe.
The issue of how to build confidence in AI is central to the conversation on AI in Europe, which has focused on ethics and a human-in-command approach. Some insist that, for people to trust AI products, algorithms need to be explainable. But algorithms have become so complex that even the people developing them do not really know what their outcomes will be, and they have to develop testing tools to see where the algorithms' limits lie.
The EESC proposes entrusting the testing to an independent body – an agency, a consortium or some other entity yet to be determined – which would test systems for prejudice, discrimination, bias, resilience, robustness and, in particular, safety. Companies could use the certificate to prove that they are developing AI systems that are safe, reliable and in line with European values and standards. The EESC believes that such a certification system would give Europe a competitive edge internationally, and it stresses the need for clear rules on responsibility, which must always be linked to a person, whether natural or legal.
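To give a concrete sense of what automated testing for discrimination could involve, below is a minimal, purely illustrative sketch in Python of one common fairness metric, the demographic parity gap between two groups. The function, the toy data and the 0.1 threshold are all hypothetical assumptions chosen for illustration; neither the EESC opinions nor the Commission's guidelines prescribe any specific metric or tooling.

```python
# Illustrative sketch only: one way a testing body might quantify bias,
# using the demographic parity gap. All names and the 0.1 threshold are
# hypothetical assumptions, not part of the EESC proposal.

def demographic_parity_gap(predictions, groups):
    """Absolute gap in favourable-outcome rates between groups A and B.

    predictions: list of 0/1 model decisions (1 = favourable outcome)
    groups: list of group labels ("A" or "B"), aligned with predictions
    """
    rate = {}
    for label in ("A", "B"):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rate[label] = sum(outcomes) / len(outcomes)
    return abs(rate["A"] - rate["B"])


if __name__ == "__main__":
    # Toy data: favourable decisions for two demographic groups.
    preds = [1, 1, 0, 1, 0, 0, 1, 0]
    grps = ["A", "A", "A", "A", "B", "B", "B", "B"]

    gap = demographic_parity_gap(preds, grps)
    print(f"Demographic parity gap: {gap:.2f}")

    # A certifier might require the gap to stay below some agreed
    # threshold; 0.1 here is an arbitrary illustrative value.
    print("PASS" if gap < 0.1 else "FAIL")
```

In practice a certification body would combine many such metrics with robustness and safety tests, but the simple measure-and-threshold pattern above captures the basic shape of an automated check.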
Background
In December 2018 the European Commission's high-level expert group on AI published a set of draft ethical guidelines for developing AI in Europe in a way that puts people at the centre. The guidelines, revised in March 2019, identify the following seven key requirements that AI applications should meet to be considered trustworthy:
- human agency and oversight
- technical robustness and safety
- privacy and data governance
- transparency
- diversity, non-discrimination and fairness
- societal and environmental well-being
- accountability
As a next step, the Commission has launched a piloting phase in which stakeholders are invited to test the assessment list and provide practical feedback on how it can be improved. In early 2020 the assessment list will be reviewed and, if appropriate, the Commission will propose further measures.