
Are Human Rights Impact Assessments for AI systems mandatory?

To date, AI systems remain largely unregulated. Regulation currently applies only to certain AI systems used by public institutions (e.g. the Canadian Directive on Automated Decision-Making), at the sectoral level (e.g. AI systems for healthcare, or systems processing personal data under the GDPR) or at the local level (e.g. the NY law on automated employment decision tools). The most comprehensive initiative to regulate AI systems was started by the EU, but other countries have also proposed laws to regulate at least some aspects of AI systems (e.g. the USA and Brazil).

In Europe, data protection authorities have taken significant steps to ensure that providers and users of AI systems respect fundamental rights, in particular the rights to data protection and to privacy (see, for instance, the decisions issued by the Italian DPA against Clearview AI, Deliveroo and Foodinho).

Despite the efforts made by data protection authorities, a question is worth answering: is the current data protection framework enough to deal with the challenges posed by AI systems? The short answer is no. Data protection frameworks, however useful, fall short in protecting individuals and society against the risks posed by AI systems.

One of the most important tools included in data protection laws to evaluate the effects of processing activities and to mitigate the risks to individuals is the data protection impact assessment. However, DPIAs (or PIAs) are mostly concerned with risks related to the privacy of individuals.

What about human rights impact assessments (HRIA)?

An HRIA analyses the effects that the activities of public administrations or private actors have on rights-holders such as workers, local community members, consumers and others. It is not limited to data protection rights but covers the whole range of fundamental rights that individuals enjoy.

Should providers or users of AI systems undertake HRIA?

For some high-risk AI systems, an HRIA, while not mandatory, is recommended as a best practice.

Are HRIAs mandatory?

According to the Slovak Constitutional Court, users of AI systems must, under certain circumstances, carry out an HRIA.

In December 2021, the Slovak Constitutional Court considered, in the eKasa case, that where public administrations deploy automated decision-making systems, the impact assessment must focus on the overall human rights impact on individuals.
The case challenged a Slovak law that required the collection of all store receipts and their transmission to a central database administered by the Tax Authority. The authority created risk profiles of all companies, using data from the receipts combined with other inputs (lists or datasets). The profiles were used to conduct supervisory activities.

Some of the most important parts of the ruling are summarised here:

• Automated assessment of an individual on the basis of comprehensive data collection constitutes an interference with the right to informational self-determination, regardless of whether it has a concrete consequence for the individual (para 120); hence it must be established by law (para 122)

• Automation cannot be used wherever it is technically possible and useful, simply because it saves public resources, and the mere fact that there is sufficient interest in the collection of certain data does not mean that the same interest exists in their further use (para 125). Any state that develops and uses new technologies has to balance them against fundamental rights (para 126).

• The application of technological progress in public administration cannot result in an impersonal state whose decisions are inexplicable and unexplained, and for which no one is responsible (para 127). Automated assessment often concerns a large group of persons, and the criteria, patterns or linked databases used are not easy for the addressee to understand and may contain errors or lead to erroneous conclusions about the individual (para 128)

• The specific details of the algorithm were not even disclosed to the Constitutional Court (para 130)

• While the processing of non-personal data may fall outside the scope of the provisions of the Slovak Constitution concerning the protection of personal data and privacy (arts. 19 and 22), it may still fall under other provisions, in particular the right to a fair trial, the prohibition of unequal treatment and the freedom of expression or assembly. Also, the Slovak Constitution may protect legal persons under arts. 19 and 22 (para 131)

• Laws must ensure that the criteria, models or linked databases used in the context of automated systems are up-to-date, reliable and non-discriminatory. In addition to general safeguards for the processing of personal data, it must be ensured: a) transparency; b) individual protection; and c) collective supervision (para 132)

• Individuals must be informed that administrative decisions are taken or assisted by automated systems. In particular, individuals should be aware of the existence, scope and implications of such decisions, so that they can effectively challenge them (para 133)

• While the public administration (PA) should perform a DPIA for these processing operations, the impact assessment MUST focus on the overall human rights impact of automated systems on individuals (Recommendation CM/Rec(2020)1, B.5.2). It must also identify specific risks, document the scope of the assessment at each step of the data processing process, the method of dataset testing and the model used, and mention alternative, more environmentally friendly solutions (para 134)

• The same conditions of transparency must be met where the state procures the AI solution from a vendor. IP rights cannot be a reason to deny access to the information; otherwise, data subjects' rights could be denied simply by involving external suppliers (para 135)

• There must be independent collective control over the use of such a system, which operates both ex-ante and ex-post. The data protection framework is not enough, since it relies on individual protection. Collective control, either through independent state institutions, certification, civil society involvement, or academia, complements individual protection against collective harm (para 136)

• Control must concern the quality of the system, its components, errors and imperfections before and after deployment (e.g. via audits, quality review of decisions, reporting and statistics). The more complex the system, the deeper the control must be. Civil servants must be aware of the blind spots of the system. Documentation and recording of logs must enable collective and individual control (para 137)

• Individuals should be in a position to effectively defend themselves against imperfections or errors of the system, and supervision must be entrusted to an independent supervisory authority (para 138). Proper supervision should lead to a change in the conduct of the PA, and, as a last resort, there should be the possibility to issue an order to stop the use of the system (para 139)

• The GDPR allows states to introduce more specific provisions (arts. 6(2) and 6(1)(e) GDPR – public interest) (para 140)
