State of AI applied to Quality Engineering 2021-22
Section 9: Trust AI

Chapter 3 by Capgemini Engineering

How to build AI that people trust

Business ●●●●○
Technical ●○○○○


Introduction

Many assume that AI success hinges on getting the model to interpret data accurately. This is important, but accuracy is only one element of a more nuanced challenge: trust.

Take a hypothetical AI which looks at medical scans and identifies whether they contain signs of an early-stage tumour. The model is tested on medical data and shown to be 80% accurate in spotting early-stage tumours, 10% higher than a human specialist (figures for illustration only).

But the AI has learned by making complex correlations between the images in its training data, and, whilst the data shows it is accurate, it is not entirely clear how it reached that decision.

Do we trust it? Is 80% accuracy enough? Should we believe the claim of 80% accuracy? Are we sure the scans were interpreted correctly? Are we sure the test data was right? If a specialist disagrees, should she go with the AI or her own diagnosis, neither of which is perfect?

AI must be planned and designed for human trust and understanding

What should she do about that instinctive feeling that something is wrong that she can’t quite put her finger on?

These are complex questions. For many, the straightforward answer is ‘if an AI is shown to perform better than a human, we should trust the AI’. But relative concepts such as ‘better than’ are hard to be sure of in the complex world of AI, and that raises very difficult questions when it comes to trust. Many companies have implemented AI but still have issues trusting the results.

AI is a technical concept, but trust is a human one

Trust is certainly undermined by low accuracy, but high accuracy alone does not guarantee trust. An AI can be 99% accurate, but so complex and confusing – or new – that no one trusts it. We need comprehensible evidence it works, and explainability of how and why it works, before we can trust it.

Even with all that, trust may take time to earn. Would you get on the maiden flight of a plane piloted only by AI? If you were told it had been shown in trials to be safer than human pilots, would that be enough?

 

It means managing expectations around what AI can or can’t do

AI must be planned and designed for human trust and understanding. That means managing expectations around what AI can and can’t do, and rolling out a complex technology in a way that aligns with how people learn.

This whitepaper will explain the importance of ‘Trusted AI’ and discuss how to develop it.

Why is trust a unique issue for AI?

Most people are familiar with the software in their everyday lives and workplaces, which operates based on rules. The rules may be complex, but they are (bugs aside) predictable: the software has been explicitly programmed to follow a set of instructions which turn inputs into outputs.

AI works differently. It ingests data and learns how to interpret it by establishing connections between different data sets. So, an English-Spanish translation AI is not explicitly told, word by word, ‘perro = dog’, ‘gato = cat’, and so on, alongside fixed grammar rules. It is fed texts which have already been translated and told to learn what pattern links one to the other (with guidance from language and data experts).

This allows it to learn complex tasks such as translation or image recognition quickly. Many tasks performed with AI would not be possible with traditional software, or would take decades of programming.
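As a hedged illustration of this difference, the sketch below uses a toy sentiment-labelling task (a stand-in for translation, which needs far larger models and data) to contrast an explicitly programmed rule with a scikit-learn model that infers its own mapping from labelled examples.

```python
# Contrast: explicit rules vs. a model that learns a mapping from examples.
# Toy sentiment task used as a stand-in for translation-scale problems.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Rule-based: every input/output pair is programmed by hand.
RULES = {"good": "positive", "bad": "negative"}

def rule_based(text: str) -> str:
    for word, label in RULES.items():
        if word in text.lower():
            return label
    return "unknown"          # silent on anything not explicitly coded

# Learning-based: the mapping is inferred from labelled examples.
train_texts = ["a good film", "really good acting", "a bad plot", "bad and boring"]
train_labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(rule_based("an awful film"))          # -> "unknown": no rule covers it
print(model.predict(["a boring film"])[0])  # a learned guess, right or wrong
```

The learned model generalises to inputs no rule anticipated, but only as well as its training examples allow, which is exactly where the trust questions in this chapter begin.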

However, this approach brings unpredictability, because the input data is complex and imperfect, not a set of binary options. To learn a language, an AI needs huge amounts of text, and there is no time to manually check it all. Some translations may be poor, contain mistakes, or deliberately misuse language. Even correct ones contain nuance, where experts disagree on the precise translation. A phrase can be translated in several ways, depending on the context. Anyone who has used a translation app will know they are good, but not perfect.

Translation is usually low stakes, and we can trust a language translation AI for many applications, even if we can see it makes some mistakes. But for AIs which diagnose disease, spot when a plane engine component needs replacing, or predict drug formulations, we need to be very confident that they have reached the right answer before we can trust them.

Added to this complexity is that AI conclusions may be confusing, but still be correct. NASA used AI to design an antenna against a defined set of criteria. The result would never have occurred to a human, but it was better aligned to their needs than anything a human came up with. What does one do when an AI recommends something completely counter-intuitive? It could be a breakthrough (as in NASA’s case), or it could be a spectacular oversight in the AI design or training. How do we know?

All of this raises questions of trust. If we know it is not 100% accurate, we need to reach a decision about how much we trust its recommendation. This comes down to multiple factors, including how accurate we are told it is, how much we believe that claim, how much control we had over the inputs, how well we understand its decision-making, what supplementary information it has provided to back up its recommendations, its past record, the consequences of it being wrong, and the user’s own knowledge of the problem.

Many tasks performed with AI would not be possible with traditional software.

Why trust matters: Examples of good and bad AI

High-profile examples of AI failure have undermined trust. But this is not the full story: plenty of AIs are also having a hugely positive impact on organizations. We look at two cases that got trust wrong, and two that got it right.

The good

AI drug design
  • The goal: Use AI to identify drug molecules for treating OCD.
  • What went well: Algorithms were used to sift through potential compounds for an OCD treatment, checking them against a huge database of parameters. There was a dedicated focus on high-quality data acquisition and checking, and on tailoring algorithms to the specific task at hand. This was achieved through close collaboration between AI and drug chemistry experts, who checked inputs and outputs throughout to ensure results could be trusted.
  • The outcome: A drug molecule ready to go into clinical trials was developed in 12 months, where the industry average is 4.5 years.

Google’s Bolo reading tutor
  • The goal: Develop a speech recognition app to help children in rural India with reading skills.
  • What went well: The ‘tutor’ app encourages, explains and corrects the child as they read aloud. It applied existing speech recognition and text-to-speech technology to a specific application, developed with a clear purpose in mind and carefully tested in 200 Indian villages.
  • The outcome: The pilot was verified by ongoing research in the field, which showed that 64% of children made significant improvements in reading proficiency. The app has since been rolled out widely.

The bad

An AI to predict premature births
  • The goal: Identify a link between non-invasive electrohysterography readings and premature births.
  • What went wrong: An initial project suggested up to 99% accuracy, but the same datasets had been used for both training and validation: the model was trained to learn a correlation, then tested with the same data to see whether that correlation existed. Given that the initial dataset was quite small anyway, this produced a very high apparent accuracy. When other researchers reproduced the models, accuracy dropped to 50%.
  • The outcome: Trust in the original accuracy claims was destroyed at a stroke. A model that looked like it should go into clinical practice was shown to be one that definitely should not.

Amazon’s AI recruitment tool
  • The goal: Look at applicant CVs and predict the best candidates based on similarities with previous applicants.
  • What went wrong: Models were trained by observing patterns in CVs submitted over ten years. Most came from men, reflecting bias in the tech industry. Not accounting for this, the system taught itself that male candidates were preferable and started rejecting applicants for being female. As a result, it could not be trusted to make unbiased decisions.
  • The outcome: The model was scrapped. Considerable negative coverage ensued.
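The premature-births failure above is a classic case of data leakage: the model was evaluated on the same records it was trained on. The hedged sketch below reproduces the pitfall on synthetic scikit-learn data (not the original study’s dataset or model) and contrasts it with evaluation on a held-out split.

```python
# Data leakage sketch: evaluating on training data vs. a held-out split.
# Synthetic data; illustrative only, not the original study's dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Wrong: train and "validate" on the same records -> inflated accuracy.
leaky_model = RandomForestClassifier(random_state=0).fit(X, y)
print("Accuracy on training data:", accuracy_score(y, leaky_model.predict(X)))

# Better: hold out data the model never saw during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
honest_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Accuracy on held-out data:", accuracy_score(y_test, honest_model.predict(X_test)))
```

On the training data the model looks near-perfect; on data it has never seen, the accuracy figure is the one that deserves any trust.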

How much trust do I need?

The level of confidence needed in an AI’s output before the user will trust it depends on the seriousness of the consequences of failure. Users will trust a useful low-risk AI even if it is far from perfect, but high-risk AI decisions need much greater levels of confidence in order to create trust.

Figure: Examples of AI Applications and levels of trust

The points in the AI lifecycle where trust can be undermined

There are various stages where trust can be undermined in the AI development and deployment process. In this section, we discuss the main risk factors.

Bias in training data
Unconscious gender or racial bias has often hit the headlines, usually created by applying AI to process automation without understanding the limits of the data. Amazon’s Rekognition, for example, misidentified women and people of colour, likely due to using smaller training data sets for these groups. Such stories undermine the credibility of commercially available technology.

The AI doesn’t learn incorrectly; it learns to reflect bias in its training data, which reflects bias in the real world. Prejudice is the nasty face of this, but bias in data can also extend to misplaced assumptions by scientists, doctors recording incorrect diagnoses, and even people’s choice of written or recorded language.

Badly curated data
Data can also be mislabelled, or poorly curated, so that the AI struggles to make sense of it. If data is not appropriately selected, then the model will not learn how to reach the right conclusions. And if conclusions seem suspect, people won’t trust it (or worse, they will trust it and take bad decisions as a result).

If data is not appropriately selected, then the model will not learn how to reach the right conclusions

User interface and explainability
Trust is not just about how good the model is, but about how easy it is to use and interact with, and how clearly the answers are presented to the user. If the user does not feel they can input the information they want, they are likely to be suspicious of the result. If the interface is overly complex, or the results are presented in a confusing way or with no explanation of how they were reached (even if they are correct), the AI will quickly be abandoned. Even something simple like a film recommendation is much more trustworthy if you can see which aspects of your viewing history led to it.

If the user does not feel they can input the information they want, they are likely to be suspicious of the result

Bias in the real world

Many AIs continue to learn post-deployment, but they are not necessarily well prepared for the complexities of real-world data. Famously, Microsoft’s Tay, an artificially intelligent chatbot, was designed to learn from its interactions with real people on Twitter. Some users decided to feed it offensive content, which it had not been designed to deal with appropriately. Within 24 hours Tay had to be deactivated and withdrawn for spreading deeply upsetting opinions.

Malicious attacks

AI is susceptible to new kinds of malicious attack in ways that are poorly understood by users. AIs that appear to make human-like decisions can be fooled in ways that humans cannot be.

In a test case, an AI was trained to recognize images. By changing just one pixel in an image, researchers fooled the AI into wrongly labelling what it saw (sometimes very wide of the mark: one model thought a stealth bomber was a dog). Tesla’s self-driving image recognition systems have been tricked by placing stickers on roads and signs, causing the cars to suddenly accelerate or change lanes.
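The one-pixel and sticker attacks are specific published techniques; as a hedged illustration of the general idea, the sketch below applies the simpler Fast Gradient Sign Method (FGSM) to an untrained stand-in PyTorch classifier, showing how a gradient-guided perturbation that a human would barely notice is constructed.

```python
# FGSM sketch: a small, gradient-guided change to the input aims to flip a
# classifier's prediction. Related to, but simpler than, one-pixel attacks.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Untrained stand-in classifier and a random "image"; a real attack would
# target a trained CNN, where a tiny perturbation is usually enough.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28, requires_grad=True)

logits = model(image)
predicted = logits.argmax(dim=1)                    # attack the current prediction
loss = nn.functional.cross_entropy(logits, predicted)
loss.backward()

epsilon = 0.1                                       # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("original prediction:   ", predicted.item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
# With an untrained model the label may not flip; against trained classifiers
# such perturbations routinely do, while looking unchanged to a human.
```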

Lack of transparency

Sitting above all these issues is a fear fed by AI’s lack of transparency. Not only do end-users not understand how AIs make their decisions; in many cases, neither do their makers.

Apple’s credit card, backed by Goldman Sachs, was investigated by regulators after customers complained that the card’s lending algorithms discriminated against women. No-one from Apple or Goldman was able to justify the output or describe how the algorithm worked. The apparent correlation between gender and credit doesn’t necessarily mean one is causing the other, but it creates suspicion that bias has crept in. Without transparency it’s impossible to know, and that makes it hard to trust.

Not only do end-users not understand how AIs make their decisions, in many cases neither do their makers

A framework for building and deploying trusted AI

Despite the risks discussed in this paper, AI delivers huge value when done well. And away from the negative headlines, it is often done very well.

The problems usually come when poor process and lack of experience lead to poor choices: the wrong algorithm, bad data, inadequate verification, poor interfaces, or a lack of post-deployment support. These errors are often baked in from the outset by fundamental mistakes in initial scoping and design, caused by a lack of understanding of AI and of the real-world problem it solves. All of these undermine trust.

As AI plays an increasingly important role in our lives, we need to design it to be trusted. This goes beyond data scientists designing an algorithm that learns about correlations and works on test data. AI must be designed as a whole product, with a set of support services around it, that allow the user to trust its outputs. Doing so needs a rigorous approach to AI development.

In this final section, we outline five key parameters for creating trusted AI.

1. Assured

A data-driven decision is only as trusted as the data that underpins it.

The most obvious aspect of trusted AI is ensuring it does what it is supposed to. Because AI learns from data, that data must be reliable. You can train an AI to recognize cats and dogs by feeding it lots of labelled images of each. But if some cats are labelled as dogs, some are not labelled, or some show a completely different animal, the AI will learn incorrectly and make incorrect decisions. If all images of dogs are in the snow, the AI may learn to detect snow rather than dogs.

As soon as it makes mistakes, users will stop trusting it. This may not matter too much for classifying cats and dogs, but it matters a lot for classifying images of healthy vs precancerous cells.

Trusted AIs must use a well-designed model, and be trained and tested on data that is proven to be accurate, complete, from trusted sources, and free from bias. Capturing that data requires rigorous processes around data collection and curation.

Those designing an assured AI model should examine AI inputs and ask:

  • Does this data accurately represent the system to be modelled?
  • Does the data contain confounding or irrelevant information?
  • How will I know that the data is of sufficient quality?
  • Is the underlying data biased – and how would I tell?
  • Are my assumptions about data collection biased?

Trusted AIs must use a well-designed model, and be trained and tested on data that is proven to be accurate, complete, from trusted sources, and free from bias
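One practical starting point for these questions is to profile the training data before any modelling. The sketch below is a minimal, hedged example using pandas; the file path, column names and choice of checks are hypothetical and would need tailoring to the actual dataset.

```python
# Basic data-quality profiling before training; path and columns are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")   # hypothetical dataset

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_per_column": df.isna().sum().to_dict(),
    "label_balance": df["label"].value_counts(normalize=True).to_dict(),
    # Crude bias check: does the label distribution shift across a sensitive group?
    "label_by_group": df.groupby("group")["label"].value_counts(normalize=True).to_dict(),
}

for check, value in report.items():
    print(f"{check}: {value}")
```

None of these checks proves the data is fit for purpose, but they surface missing labels, duplicates and skewed distributions early, before they are baked into a model.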

2. Explainable

A functioning model is not enough: users need to understand how it works. The AI earns trust by backing up its recommendations with transparent explanations and further detail.

If a bank turned you down for a mortgage, you’d expect to know why. Was it past or existing debt? Was it an error? Did they confuse you with someone else? Knowing the reason allows you to move forward in the most constructive manner. For the bank, it allows them to spot faults, retain customers, and improve processes.

It’s the same for AI. A recommendation is much more useful if you understand how and why it was made. Explainability allows the user to see if the AI supports their own intuition (e.g. about a disease diagnosis or the best way to make a new material), or helps them question it. And it allows developers to spot errors in the AI’s learning and remedy them.

A responsibly designed AI will have tools to analyse what data was used, its provenance, and how the model weighted different inputs, and then report its conclusions in clear language appropriate to the user’s expertise.

Explainability may also involve some trade-off between raw predictive power and transparency of interpretation. If AI cannot fully explain its outcome, trust may still be built in some cases through rigorous validation to show it has a high success rate, and by ensuring the user has the information they need to understand that validation.
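As one hedged illustration of such tooling, permutation importance (here via scikit-learn; libraries such as SHAP offer richer per-prediction explanations) estimates how much each input feature contributes to a trained model’s performance. The dataset and model below are placeholders, not a prescription.

```python
# Permutation importance: how much does scrambling each feature hurt the model?
# A simple, model-agnostic route to "how were the inputs weighted?".
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential features in plain terms.
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```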

Those designing AI to be explainable should ask:

  • What could be known in principle about the working of this AI?
  • Does the model need to be fully interpretable or are post-hoc explanations sufficient?
  • Can the AI rationalize why it decided to offer the user this piece of clarifying information, rather than another?
  • How consistent is the given answer with previous examples?
  • Does too much transparency make the AI vulnerable to attack?
  • Does the information on offer meet the accountability needs of different human users?

3. Human

A trusted AI is intuitive to use. Netflix would be less successful if users had to enter a complex set of parameters to get film recommendations. Instead, it automatically presents films you may like based on your history or search terms, in an easy-to-navigate interface, and sometimes even tells you why it recommended them (‘Because you watched…’).

An intuitive interface, consistently good recommendations, and easy-to-understand decisions all help the user come to trust it over time.

Intuitive doesn’t always mean simple. A ‘simple’ smartphone app may use very intuitive guided decision-making. A drug property prediction platform can expect advanced chemistry knowledge from its user and display complex information in a manner appropriate for an expert to understand and interact with.

The complexity of these ‘guided decisions’ must be matched to the user’s knowledge. Equally, the time it takes the user to fully trust the AI will be relative to the complexity and risk of failure.

The complexity of these ‘guided decisions’ must be matched to the user’s knowledge

Making AI usable for humans means understanding the end user and how they interact and learn over time. Those designing AIs should ask:

  • Why would users not trust this AI?
  • What reassurances are they likely to need?
  • What training and support is needed for different users?
  • Should the system allow users to ask for more details?
  • How can we retain confidence if the AI gets it wrong?
  • How do we make human users feel the AI is accountable?

4. Legal and ethical

A trusted AI should reach decisions that are fair and impartial, with privacy and ethical concerns given equal weight to accuracy.

An AI may conclude that certain groups are more likely to reoffend or default on loans. Whilst this may be true at a group level (for broader socioeconomic reasons), it does not mean an individual from that group is more likely to do so. An AI using this as a decision-making factor creates undue prejudice and opens its user up to legal challenges.

An AI may conclude that certain groups are more likely to reoffend or default on loans

Those designing AI to be ethical and legally compliant should ask:

  • Why are we building this AI at all?
  • Are we aligned with prevailing ethical standards?
  • Is it fair and impartial?
  • Is it proportionate in its decisions?
  • How is it governed?
  • Are we honest about what we claim it can do?
  • Are we transparent about what it’s doing, or is it doing something else in the background?
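As a hedged illustration of one basic check behind these questions, the sketch below computes demographic parity, the gap in positive-decision rates between groups, on hypothetical decision data. Real audits combine several fairness metrics with legal and ethical review.

```python
# Demographic parity check: compare the model's positive-decision rate per group.
# Data and tolerance are hypothetical; one check among many, not a full audit.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)                          # sensitive attribute
approved = rng.random(1000) < np.where(group == "A", 0.60, 0.45)   # model decisions

rates = {g: approved[group == g].mean() for g in ("A", "B")}
gap = abs(rates["A"] - rates["B"])

print("approval rate per group:", rates)
print("demographic parity gap: ", round(gap, 3))
if gap > 0.1:   # illustrative tolerance only
    print("Warning: decision rates differ materially between groups; investigate.")
```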

5. Performant

Finally, a trusted AI continues to work after deployment.

Too many AIs work well in a controlled environment but fall over once deployed, either because they are not ready for the complexities of real-world data, or because they have not been designed to integrate into the user’s working life (either technically or practically). Users will quickly lose trust in an AI they see making less and less reliable decisions.

A truly performant AI is future-proofed for throughput, accuracy, robustness and security, balancing raw predictive power with transparent interpretation, whilst remaining aligned to genuine business need.

Those designing AI to perform in the real world should examine AI outputs

Those designing AI to perform in the real world should examine AI outputs, and ask:

  • Does this AI actually solve the intended business problem?
  • Do we understand the required levels of throughput and robustness?
  • Do we understand the required output quality (accuracy, precision)?
  • What safety barriers are needed if the AI makes a mistake?
  • How robust is our testing, validation and verification policy?
  • Is the in-service AI protected against adversarial attacks?
  • Do we have a plan to continuously assess and improve in-service performance?
  • Do we know when and how the in-service model could become invalid?
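As a hedged illustration of one in-service check, the sketch below monitors input drift by comparing a live feature distribution against its training baseline with a two-sample Kolmogorov–Smirnov test. The data, feature and alert threshold are hypothetical; a production setup would track many features and model-quality metrics continuously.

```python
# In-service drift check: compare live feature distributions against the
# training baseline. Data and thresholds are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # baseline snapshot
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)       # recent production data

result = ks_2samp(training_feature, live_feature)
print(f"KS statistic: {result.statistic:.3f}, p-value: {result.pvalue:.4f}")

if result.pvalue < 0.01:   # illustrative alert threshold
    print("Input distribution has drifted; retraining or review may be needed.")
```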

About the author

Matt Jones

Matt is in charge of developing data-driven market offers for R&D-intensive industries, fusing human creativity, science, technology and data to revolutionize R&D. Engineering and R&D leaders are addressing a new generation of global challenges, such as harnessing green energy, enabling sustainable mobility and discovering novel disease treatments. He firmly believes that the magnitude and diversity of these data-centric challenges necessitate a new and innovative approach to R&D.

Sam Genway

I am a machine learning expert with an academic research background in quantum theory who leads projects at the forefront of artificial intelligence to transform R&D. I have worked with several of the largest, most innovative companies to develop their capabilities and enhance their operations.

I work in partnership with leaders in R&D to identify transformative opportunities, engineer technical solutions and lead teams in their delivery. My contributions are wide ranging, spanning everything from developing novel AI approaches for drug design to modelling the condition of strategic assets for a $1bn asset-repurposing decision.

I pioneered an in-house technology accelerator, CORTEX, which undertakes research projects using cutting-edge technology such as AI and quantum computing. CORTEX projects have demonstrated success across techniques and applications, with recent examples including reinforcement learning for resource management in traffic scenarios and quantum computing for lab optimisation.

As a thought leader in applications of new technology, I write articles and speak at conferences about the transformative opportunities AI and quantum technologies will bring to R&D and engineering, and support discussions with senior leaders in industry.

John Godfree

I’m the Head of Consulting for Tessella, where I lead Tessella’s development and delivery of Consulting & Business Analysis activities. Fundamentally, my focus is on the value that data and data science can bring to an organisation, from the R&D space all the way to enabling consumer insights. My team tends to work on the fuzzy and ill-defined problems, using our roadmapping approaches combined with exploratory data science.

I have over 25 years’ experience as a Consultant and Senior Project Manager who has worked with clients ranging from small-scale innovative start-ups, to multi-national corporations. My work has ranged from the delivery of Data Science & Digitalisation Roadmaps in multiple sectors (Life Sciences, Consumer Goods, Energy etc.), the management, requirement analysis, design and implementation of the UK National Flow Forecasting System (NFFS), delivery of Radiation Dosimetry systems as well as providing consultancy, feasibility studies and requirements analysis for Government Agencies, large Multinational Energy & Petrochemical companies and technology innovators.

About Capgemini Engineering

Capgemini Engineering combines, under one brand, a unique set of strengths from across the Capgemini Group: the world leading engineering and R&D services of Altran – acquired by Capgemini in 2020 – and Capgemini’s digital manufacturing expertise. With broad industry knowledge and cutting-edge technologies in digital and software, Capgemini Engineering supports the convergence of the physical and digital worlds. Combined with the capabilities of the rest of the Group, it helps clients to accelerate their journey towards Intelligent Industry. Capgemini Engineering has more than 52,000 engineer and scientist team members in over 30 countries across sectors including aeronautics, automotive, railways, communications, energy, life sciences, semiconductors, software & internet, space & defence, and consumer products.

Visit us at capgemini-engineering.com

 

 
