The PIO Model (Principles, Indicators and Observables) on Obligations and Rights: a self-assessment approach for regulatory compliance of data and artificial intelligence systems within the European Union framework

The Observatory for Ethics in Artificial Intelligence (OEIAC) has developed an interactive tool, the PIO Model, to determine whether an Artificial Intelligence (AI) system and its data are ready for and comply with current legislative requirements as well as ethical standards and recommendations. The PIO Model promotes compliance and readiness for the responsible use of AI through a comprehensive self-assessment checklist. It identifies appropriate actions through a series of questions linked to regulations and ethical standards, and raises awareness among quadruple helix stakeholders on the responsible use of data and AI systems. Furthermore, this free and comprehensive checklist helps to classify the risk level of an AI system and facilitates an understanding of what needs to be done under the EU AI Act. It is a resource open to everyone and currently available in Catalan, Spanish, English and French. The PIO Model is part of CATALONIA.AI, the Artificial Intelligence Strategy of Catalonia promoted by the Government of Catalonia.
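To illustrate the kind of mapping such a self-assessment checklist can perform, below is a minimal sketch that assigns a system to one of the EU AI Act's risk tiers (unacceptable, high, limited, minimal) based on yes/no screening answers. The tier names follow the EU AI Act, but the questions, field names and decision logic here are illustrative assumptions and do not reflect the actual PIO Model questionnaire.

```python
# Hypothetical sketch: mapping self-assessment answers to EU AI Act risk tiers.
# The questions and mapping logic are illustrative assumptions, not the PIO Model.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # high-risk use cases
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal"             # no specific obligations


@dataclass
class Answers:
    """Yes/no answers to a few illustrative screening questions."""
    uses_prohibited_practice: bool   # e.g. social scoring by public authorities
    is_high_risk_use_case: bool      # e.g. employment, credit, law enforcement
    interacts_with_people: bool      # e.g. chatbots, synthetic media


def classify(answers: Answers) -> RiskTier:
    """Return the first (most severe) tier whose condition is met."""
    if answers.uses_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if answers.is_high_risk_use_case:
        return RiskTier.HIGH
    if answers.interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    example = Answers(
        uses_prohibited_practice=False,
        is_high_risk_use_case=True,
        interacts_with_people=True,
    )
    print(classify(example).value)  # -> "high"
```

In practice, a checklist like the PIO Model covers far more dimensions (principles, indicators and observables) than this screening example, but the underlying idea of mapping structured answers to a risk classification is the same.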
This document is licensed under a Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) license.