European Commission Guidelines for Trustworthy AI
On 8 April 2019, the High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy Artificial Intelligence. This followed the publication of the guidelines’ first draft in December 2018, on which more than 500 comments were received through an open consultation. According to the Guidelines, trustworthy AI should be: (1) lawful – respecting all applicable laws and regulations; (2) ethical – respecting ethical principles and values; and (3) robust – both from a technical and a social perspective. The Guidelines put forward seven key requirements that AI systems should meet in order to be deemed trustworthy, and a specific assessment list aims to help verify the application of each of these requirements.