
A methodology to conduct human rights impact assessments of AI systems

Following the Council of Europe's call to develop methodologies for conducting human rights impact assessments (HRIA), The Alan Turing Institute developed the Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems (developed in 2021, published in February 2022).

The framework has four stages:

1) Preliminary Context-Based Risk Analysis​

• overview of the risks that the AI system may pose to human rights, democracy, and the rule of law

• definition of the level of stakeholder involvement

2) Stakeholder Engagement Process​

• establishment of who participates and how

• developers’ reflection on their preconceptions (positionality matrix)

• reception of stakeholder feedback

3) Human Rights, Democracy, and the Rule of Law Impact Assessment

• evaluation of the negative effects that the design, development, and deployment of the AI system may have on human rights, democracy, and the rule of law

• mitigation strategies

4) Human Rights, Democracy, and the Rule of Law Assurance Case

• determination of the risk management strategy

• impact mitigation plan and further steps

Next week we’ll publish the tools to apply this framework.

Link to the document here

 

 
