
Explaining decisions made with AI – summary

I have written a brief summary of the Information Commissioner's Office (ICO) guideline 'Explaining decisions made with AI'.

While reading the guideline in full is highly recommended to appreciate the complexities surrounding the explainability of decisions made by, or assisted by, AI-powered systems, this document summarises what I consider its most important parts.


Explaining decisions made with artificial intelligence systems

(summary based on the ICO guide ‘Explaining decisions made with AI’)


Two subcategories of explanations:

A) Process-based explanations of AI systems: aimed at demonstrating that those who develop and deploy AI systems have followed good governance processes and best practices throughout their design and use.

B) Outcome-based explanations of AI systems: aimed at clarifying the results of a specific decision; they involve explaining the reasoning behind a particular algorithmically generated outcome in clear, easily understandable language.


Ways of explaining AI decisions:

1.- Rationale explanation: the reasons that led to a decision (the ‘why’ of the decision), delivered in an accessible and non-technical way. Purposes: challenging the decision and changing behaviour

2.- Responsibility explanation: who is involved in the development, management and implementation of an AI system, and who to contact for a human review of a decision. Purposes: challenging a decision and providing information.

3.- Data explanation: what data has been used in a particular decision and how, and what the sources of that data are. Purposes: challenging a decision and providing reassurance.

4.- Fairness explanation: steps taken across the design and implementation of an AI system to ensure that the decisions it supports are generally unbiased and fair, and whether or not an individual has been treated equitably. Purposes: challenging a decision and building trust.

Fairness can also be broken down into:

a) Dataset fairness: the system is trained and tested on properly representative, relevant, accurately measured, and generalisable datasets.

b) Design fairness: its model architecture does not include target variables, features, processes, or analytical structures which are unreasonable or unjustifiable.

c) Outcome fairness: it does not have discriminatory or inequitable impacts on the lives of the people it affects.

d) Implementation fairness: it is deployed by users sufficiently trained to implement it responsibly and without bias (a quantitative check for outcome fairness is sketched after this list).

5.- Safety and performance explanation: steps taken across the design and implementation of an AI system to maximise the accuracy, reliability, security and robustness of its decisions and behaviours; you need to demonstrate all four of these properties. Purposes: challenging a decision, providing reassurance and information.

6.- Impact explanation: steps taken across the design and implementation of an AI system to consider and monitor the impacts that the use of an AI system and its decisions has or may have on an individual, and on wider society. Purposes: understanding the consequences and providing reassurance.
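For outcome fairness in particular, a common quantitative check is demographic parity: comparing the rate of favourable decisions across groups. Below is a minimal sketch, assuming a pandas DataFrame of logged decisions with hypothetical `group` and `prediction` columns (the ICO guide does not prescribe this specific metric):

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Share of favourable decisions per group (demographic parity check)."""
    return df.groupby(group_col)[pred_col].mean()

# Hypothetical decision log: one row per AI-assisted decision.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "prediction": [1, 0, 1, 0, 0, 1],  # 1 = favourable outcome
})

rates = selection_rates(decisions, "group", "prediction")
print(rates)
# Disparate-impact ratio: values well below 1 suggest one group receives
# favourable outcomes far less often than another.
print("ratio:", rates.min() / rates.max())
```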


For each type of explanation, the guide sets out what the process-based and the outcome-based explanation should cover:

RATIONALE

Process-based explanation:

– How the procedures help you provide meaningful explanations of the underlying logic of your AI model's results
– How these procedures are suitable given the model's particular domain context and its possible impacts
– How you have set up your system's design and deployment workflow

Outcome-based explanation:

– The formal and logical rationale of the AI system: how the system is verified against its formal specifications
– The technical rationale of the system's output: how the model's components (its variables and rules) transform inputs into outputs, to identify the features and parameters that significantly influence a particular output
– Translation of the system's workings into accessible everyday language
– Clarification of how a statistical result is applied to the individual concerned

RESPONSIBILITY

Process-based explanation:

– The roles and functions across your organisation that are involved in the various stages of your AI system; if your system is procured, include information about the providers or developers involved
– Broadly, what the roles do and who is ultimately accountable
– Who is responsible at each step of the AI system

Outcome-based explanation:

– Information on how to request a human review of an AI-enabled decision or object to the use of AI
– A way for individuals to directly contact the role or team responsible for the review

DATA

Process-based explanation:

– What data was collected, the sources of that data, and the methods used to collect it
– Who took part in choosing the data to be collected and in its collection
– How data quality was assessed and the steps taken to address any quality issues discovered
– What the training/testing/validation split was and how it was determined
– How data pre-processing, labelling, and augmentation supported the interpretability and explainability of the model
– What measures were taken to ensure the data used to train, test, and validate the system was representative, relevant, accurately measured, and generalisable
– How you ensured that any potential bias and discrimination in the dataset have been mitigated

Outcome-based explanation:

– Clarification of the input data used for a specific decision, and its sources

FAIRNESS

Process-based explanation:

– The measures taken to mitigate risks of bias and discrimination at any stage
– How these measures were chosen, and how you have managed informational barriers to bias-aware design
– The results of your fairness testing, self-assessment, and external validation

Outcome-based explanation:

– How your formal fairness criteria were implemented in the case of a particular decision
– Presentation of the relevant fairness metrics and performance measurements in the delivery interface of your model
– Explanations of how others similar to the individual were treated (i.e. whether they received the same decision outcome as the individual)

SAFETY AND PERFORMANCE

Process-based explanation:

For accuracy:
– How you measure it
– Why you chose those measures
– What you did at the data collection stage to ensure your training data was up to date and reflective of the characteristics of the people to whom the results apply
– What kinds of external validation you have undertaken
– What the overall accuracy rate of the system was at the testing stage
– What you do to monitor this

For reliability:
– How you measure it
– The results of the formal verification of the system's programming specifications

For security:
– How you measure it
– How you manage the security of confidential and private information that is processed in the model

For robustness:
– How you measure it
– Why you chose those measures

Outcome-based explanation:

– Assurance that, at run time, your AI system operated reliably, securely, and robustly for a specific decision
– For accuracy and the other performance metrics, inclusion in your model's delivery interface of the results of your cross-validation (training/testing splits) and any external validation carried out (see the sketch after this table)

IMPACT

Process-based explanation:

– The considerations you gave to your AI system's potential effects, how you undertook these considerations, and the measures and steps you took to mitigate possible negative impacts on society
– How you plan to monitor and re-assess impacts while your system is deployed

Outcome-based explanation:

– Help for decision recipients in understanding the impact of the AI-assisted decisions that specifically affect them, e.g. explaining the consequences for the individual of the different possible decision outcomes and how changes in their behaviour could have brought about a different outcome
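As an illustration of the cross-validation evidence mentioned in the safety and performance row above, here is a minimal scikit-learn sketch; the dataset, model, and metric are placeholders rather than anything prescribed by the ICO:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder dataset and model standing in for a real decision system.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: accuracy measured on held-out folds, the kind of
# figure that could be surfaced in the model's delivery interface.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"accuracy per fold: {scores}")
print(f"mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```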


CONTEXTUAL FACTORS to consider

1.- Domain factor: sector where the AI system is deployed

2.- Impact factor: the effect an AI-assisted decision can have on an individual and wider society

3.- Data factor: the data used to train and test the AI model, as well as the input data used at the point of the decision

4.- Urgency factor: urgency relates to the importance of receiving, or acting on the outcome of an AI-assisted decision within a short timeframe.

5.- Audience factor: consider the individuals you are explaining the decision to

For each contextual factor, the types of explanation to prioritise:

DOMAIN
– In safety-critical settings: safety and performance explanation
– Where bias and discrimination are a concern: fairness explanation
– In low-stakes domains: basic rationale and responsibility explanation

IMPACT
– High-impact decisions: fairness, safety and performance, and impact explanation; also rationale and responsibility

DATA
– Social data: rationale, fairness and data explanation
– Biophysical data: rationale, impact, and safety and performance explanation

URGENCY
– Where urgency is high: impact and safety and performance explanation

AUDIENCE
– In general, tailor the explanation to the needs of the most vulnerable recipients
– Audiences with domain expertise: rationale and safety and performance explanation
– Lay audiences: responsibility and safety and performance explanation; also rationale


TASKS

1.- Select priority explanations by considering the domain, use case, and impact on the individual

Acknowledge the different types of explanation (i.e. process/outcome-based explanation vs rationale/responsibility/data/fairness/safety/impact explanation). This should help you to separate out the different aspects of an AI-assisted decision that people may want you to explain.

2.- Collect and pre-process your data in an explanation-aware manner

The manner in which you collect and pre-process the data you use in your chosen model has a bearing on the quality of the explanation you can offer to decision recipients.
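One way to make collection and pre-processing "explanation-aware" is to keep provenance metadata next to every transformation, so a later data explanation can state what data fed a decision and how it was handled. A minimal sketch; the class, field names, and example steps are illustrative assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceLog:
    """Audit trail of data handling steps, kept for later data explanations."""
    source: str
    steps: list = field(default_factory=list)

    def record(self, step: str, rationale: str) -> None:
        self.steps.append({
            "step": step,
            "rationale": rationale,
            "at": datetime.now(timezone.utc).isoformat(),
        })

log = ProvenanceLog(source="loan_applications_2023.csv")  # hypothetical source
log.record("dropped rows with missing income",
           "income is a decisive feature; imputing it could distort decisions")
log.record("scaled numeric features to [0, 1]",
           "required by the chosen model; does not change feature ranking")
# 'log' can now back the data explanation for any decision using this dataset.
```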

3.- Build your system to ensure you are able to extract relevant information for a range of explanation types

You should understand the inner workings of your AI system, particularly to be able to comply with certain parts of the GDPR. The model you choose should be at the right level of interpretability for your use case and for the impact it will have on the decision recipient. If you use a ‘black box’ model, use supplementary explanation techniques to provide a reliable and accurate representation of the system’s behaviour.
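For example, a model-agnostic technique such as permutation feature importance can give a view of which inputs drive a black-box model's behaviour; scikit-learn provides an implementation. A minimal sketch with a placeholder dataset and model (not taken from the ICO guide):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A 'black box' stand-in: an ensemble whose internals are hard to read directly.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score:
# features whose shuffling hurts most influence the model's outputs most.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```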

4.- Translate the rationale of your system’s results into usable and easily understandable reasons

Determine how you are going to convey your model's statistical results to users and decision recipients as understandable reasons. It is crucial to explain how the statistical inferences that formed the basis of your model's output played a part in your overall reasoning. This involves translating the mathematical rationale of the explanation extraction tools into easily understandable language that justifies the outcome. Make sure there is a simple way to describe or explain the result to an individual.
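As a toy illustration of this translation step, the sketch below turns numeric per-feature contributions (however obtained, e.g. from an explanation extraction tool) into short, non-technical reasons; all feature names, values, and phrasings are hypothetical:

```python
# Hypothetical per-feature contributions for one decision, e.g. produced by a
# supplementary explanation technique (positive pushes towards approval).
contributions = {
    "income_stability": 0.42,
    "existing_debt": -0.57,
    "years_at_address": 0.08,
}

TEMPLATES = {  # hand-written, domain-reviewed phrasings
    "income_stability": "your income history",
    "existing_debt": "your current level of debt",
    "years_at_address": "how long you have lived at your address",
}

def plain_language_reasons(contribs: dict, top_n: int = 2) -> list:
    """Turn the largest contributions into short, non-technical sentences."""
    ranked = sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = []
    for feature, value in ranked[:top_n]:
        direction = "counted in your favour" if value > 0 else "counted against you"
        reasons.append(f"{TEMPLATES[feature].capitalize()} {direction}.")
    return reasons

print(plain_language_reasons(contributions))
# ['Your current level of debt counted against you.',
#  'Your income history counted in your favour.']
```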

5.- Prepare implementers to deploy your AI system

When human decision-makers are meaningfully involved in an AI-assisted outcome, they must be appropriately trained and prepared to use your model's results responsibly and fairly. If the system is wholly automated and provides a result directly to the decision recipient, it should be set up to provide understandable explanations to them.

6.- Consider how to build and present your explanation

Consider how you will build and present your explanation to an individual, whether you are doing this through a website or app, in writing or in person.

Evaluate contextual factors (domain, impact, data, urgency, audience) to help you decide:

– how you should deliver appropriate information to the individual

– what kind of and how much information to provide

– what information you should provide before and after the decision

A layered approach is recommended: put the priority explanations in a first layer and more detailed information in a second layer.
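As a sketch of what such layering might look like in a delivery interface, with illustrative wording and a hypothetical accuracy figure:

```python
# A layered explanation for one decision: layer 1 is shown up front,
# layer 2 is revealed on request ("tell me more").
explanation = {
    "layer_1": {
        "outcome": "Your application was not approved.",
        "main_reasons": [
            "Your current level of debt counted against you.",
        ],
        "human_review": "You can ask for a human review via our contact form.",
    },
    "layer_2": {
        "data_used": ["income history", "existing debt", "address history"],
        "fairness": "People in similar circumstances received the same outcome.",
        "accuracy": "The system was 94% accurate in testing.",  # hypothetical
    },
}

for layer, content in explanation.items():
    print(layer, "->", content)
```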


Source

ICO, 'Explaining decisions made with AI' (2020)
