Council of Europe’s Guidelines on the Human Rights Impacts of Algorithmic Systems (2020)

Have you ever heard of Human Rights Impact Assessments for AI systems?

On April 8, 2020, the Committee of Ministers of the Council of Europe (CoE) adopted Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems.

In this document, the CoE called on its Member States to take a precautionary approach to the development and use of AI systems and adopt legislation, policies and practices that fully respect fundamental rights.

Crucially, it issued a set of guidelines to address the Human Rights Impacts of AI Systems.

What should private parties developing or using AI systems know about them? When and how should private actors conduct a human rights impact assessment (HRIA)?

Chapter C outlines the responsibilities of private sector actors with respect to human rights and fundamental freedoms in the context of algorithmic systems.

The measures apply to every organisation, irrespective of its size (SME or not) or domain.

It demands due diligence with respect to human rights, requiring organisations to take proactive and reactive steps to avoid human rights violations and to document these efforts.

But it also requires:

• ongoing review: human rights impacts should be evaluated at regular intervals and throughout the entire AI system lifecycle (C.1.2)

• democratic participation and public awareness: include the views of relevant stakeholders in the evaluation of the AI system (AIS) and promote knowledge about the opportunities and challenges of AIS (B.1.3)

• informational self-determination: organisations must inform individuals beforehand that they are interacting with an AIS. Individuals should be permitted to avoid being identified by automated systems, in accordance with the law (B.2.1)

• computational experimentation: where computational experimentation may impair fundamental rights, it should only be performed after an HRIA (B.3.1)

• testing: periodic assessment of the AIS against relevant standards should be integrated into the evaluation routine, especially where the AIS functions and generates outputs in real time (B.3.3)

• identifiability: AI systems must be identifiable and traceable (B.4.2)

• non-discrimination: AI systems should not produce discriminatory outputs (C.1.4)

• consent: where personal data is used, individuals should be informed and their consent to the processing should be obtained (C.2.1), unless another legal basis applies

• data minimisation and default opt-in for tracking (C.2.2)

• data quality: ensure the use of high-quality data that is free from errors and bias (C.3.1)

• representativeness: datasets should be representative of the populations (C.3.2)

• security: implement security measures to ensure confidentiality, integrity and availability (CIA) (C.3.3)

• transparency and redress: organisations should provide information about the potential human rights impacts of the AIS and give an opportunity to challenge its use (C.4.1); end users should be given the opportunity to have decisions reviewed by humans (C.4.2), as well as effective remedies before an impartial and independent reviewer (C.4.4)

• complaints and consultation: organisations should disclose the number and type of complaints received concerning the AIS (C.4.3) and engage in a consultation process for the design, development and use of the AIS (C.4.5)

• continuous monitoring: establish and document internal procedures to guarantee that the development and use of the AIS is continuously monitored (C.5.1)

• HRIA: stakeholders should be involved in the HRIA and risk mitigation techniques should be implemented (C.5.3); the staff conducting the HRIA should be trained (C.5.2); and the HRIA should be reviewed at regular intervals (C.5.4)

Also related to HRIAs, but in the context of public institutions: “For algorithmic systems carrying high risks to human rights, impact assessments should include an evaluation of the possible transformations that these systems may have on existing social, institutional or governance structures, and should contain clear recommendations on how to prevent or mitigate the high risks to human rights” (B.5.2).

Therefore, HRIAs are an important accountability tool that AI providers and AI users should start considering in order to successfully harness trustworthy AI.

Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems
