Section 1 - Chapter 1

How to Build AI That People Trust

What is Quality Engineering / Recent Trends in QE

 

 

 

Introduction

 

 

Many assume that AI success hinges on getting the model to accurately interpret data. This is important, but accuracy is one element of a more nuanced challenge: Trust.

Take a hypothetical AI that looks at medical scans and identifies whether they contain signs of an early-stage tumour. The model is tested on medical data and shown to be 80% accurate in spotting early-stage tumours, 10% higher than a human specialist (figures for illustration only).

But the AI has learned by making complex correlations between the images in its training data, and, while the data shows it is accurate, it is not entirely clear how it reached its decision.

Do we trust it? Is 80% accuracy enough? Should we believe the claim of 80% accuracy? Are we sure the scans were interpreted correctly? Are we sure the test data was right? If a specialist disagrees, should she go with the AI or her own diagnosis, neither of which is perfect?

What should she do about that instinctive feeling that something is wrong that she can’t quite put her finger on?

These are complex questions. For many, the straightforward answer is ‘if an AI is shown to perform better than a human, we should trust the AI’. But relative concepts such as ‘better than’ are hard to pin down in the complex world of AI, and that raises very difficult questions when it comes to trust. Many companies have implemented AI but struggle to trust the results.

"AI must be planned and designed for human trust and understanding."


AI is a technical concept, but trust is a human one

Trust is certainly undermined by low accuracy, but high accuracy alone does not guarantee trust. An AI can be 99% accurate, yet so complex, confusing or new that no one trusts it. We need comprehensible evidence that it works, and an explanation of how and why it works, before we can trust it.

Even with all that, trust may take time to earn. Would you get on the maiden flight of a plane piloted only by AI? If you were told it had been shown in trials to be safer than human pilots, would that be enough?

AI must be planned and designed for human trust and understanding. That means managing expectations around what AI can and can’t do, and rolling out a complex technology in a way that aligns with how people learn.

This article will explain the importance of ‘Trusted AI’ and discuss how to develop it.

 

 

Why are trust and transparency a unique issue for AI?

 

 

Most people are familiar with the software in their everyday life and workplace, which operates on rules. The rules may be complex, but they are (bugs aside) predictable: the software has been explicitly programmed to follow a set of instructions that turn inputs into outputs.

AI works differently. It ingests data and learns how to interpret it by establishing connections between different data sets. So, an English-Spanish translation AI is not explicitly told word by word ‘perro = dog’, ‘gato = cat’, etc, alongside fixed grammar rules. It is fed texts which have been translated and told to learn what pattern links one to the other (with guidance from language and data experts).
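To make the contrast concrete, here is a minimal, illustrative sketch in Python (using scikit-learn on a tiny toy corpus invented for this example, so it is a sketch of the idea rather than a real translation system): rule-based software needs every mapping written by hand, whereas a learned model infers patterns from labelled examples and can generalise to text it has never seen.

    # Rule-based software: every mapping must be written out by hand.
    RULES = {"perro": "dog", "gato": "cat"}

    def translate_word(word):
        return RULES.get(word, "<unknown>")  # anything not explicitly listed fails

    # Learned model: patterns are inferred from labelled examples instead.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["el perro corre", "the dog runs", "el gato duerme", "the cat sleeps"]
    labels = ["es", "en", "es", "en"]  # toy corpus, illustration only

    model = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),  # character patterns, not hand-written rules
        LogisticRegression(),
    )
    model.fit(texts, labels)
    print(model.predict(["la casa es grande"]))  # applies the learned patterns to unseen text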

This allows it to learn complex tasks such as translation or image recognition quickly. Many tasks performed with AI would not be possible with traditional software or would take decades of programming.

However, this approach brings unpredictability because the input data is complex and imperfect, not a set of binary options. To learn a language, an AI needs huge amounts of text and there is not time to manually check it all. Some translations may be poor, or contain mistakes, or deliberately misuse language. Even correct ones contain nuance, where experts disagree on the precise translation. A phrase can be translated in several ways, depending on the context. Anyone who has used a translation app will know they are good, but not perfect.

Translation is usually low stakes, and we can trust a language translation AI for many applications, even if we can see it makes some mistakes. But for AIs which diagnose disease, spot when a plane engine component needs replacing, or predict drug formulations, we need to be very confident that it has reached the right answer before we can trust it.

"Many tasks performed with AI would not be possible with traditional software."

Added to this complexity, AI conclusions may be confusing but still correct. NASA used AI to design an antenna against a defined set of criteria. The result would never have occurred to a human, but it was better aligned to their needs than anything a human came up with. What does one do when an AI recommends something completely counter-intuitive? It could be a breakthrough (as in NASA’s case), or it could be a spectacular oversight in the AI’s design or training. How do we know?

All of this raises questions of trust. If we know an AI is not 100% accurate, we need to decide how much we trust its recommendation. This comes down to multiple factors: how accurate we are told it is; how much we believe that claim; how much control we had over the inputs; how well we understand its decision-making; what supplementary information it provides to back up its recommendations; its past record; the consequences of it being wrong; and the user’s own knowledge of the problem.

 

 

Why trust matters: examples of good and bad AI

 

 

High-profile examples of AI failure have undermined trust. But this is not the full story: plenty of AIs are having a hugely positive impact on organizations. Let's look at two cases that got trust right and two that got it wrong.

The good

AI drug design

(Link)

The goal: Use AI to identify drug molecules for treating OCD.

What went well: Algorithms were used to sift through potential compounds for an OCD treatment, checking them against a huge database of parameters. There was a dedicated focus on high-quality data acquisition and checking, and on tailoring algorithms to the specific task at hand. This was achieved through close collaboration between AI and drug chemistry experts, who checked inputs and outputs throughout to ensure results could be trusted.

The outcome: A drug molecule ready to go into clinical trials was developed in 12 months, against an industry average of 4.5 years.

Google’s Bolo reading tutor

(Link)

The goal: Develop a speech recognition app to help children in rural India with reading skills.

What went well: The ‘tutor’ app encourages, explains and corrects the child as they read aloud. It applied existing speech recognition and text-to-speech technology to a specific application, developed with a clear purpose in mind and carefully tested in 200 Indian villages.

The outcome: The pilot was verified by ongoing research in the field, which showed that 64% of children made significant improvements in reading proficiency. The app has since been rolled out widely.

 


The bad

An AI to predict premature births

(Link)

The goal: Identify the link between non-invasive electro-hysterography readings and premature births. An initial project suggested up to 99% accuracy.

What went wrong: The same datasets were used in both training and validation, so the model was trained to learn a correlation and then tested on the same data to see whether that correlation existed. Because the initial dataset was quite small, this produced a very high apparent accuracy (a sketch of this trap follows the table).

The outcome: When other researchers reproduced the models, accuracy dropped to 50%. Trust in the original accuracy claims was destroyed at a stroke. A model that looked like it should go into clinical practice was shown to be one that definitely should not.

Amazon’s AI recruitment tool

(Link)

The goal: Look at applicant CVs and predict the best candidates based on similarities with previous successful hires.

What went wrong: Models were trained by observing patterns in resumes submitted over ten years. Most came from men, reflecting a wider bias in the tech industry. Not understanding this, the system taught itself that male candidates were preferable and started rejecting applicants for being female. As a result, it could not be trusted to make unbiased decisions.

The outcome: The model was scrapped, and considerable negative coverage ensued.
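The premature-births case is a classic example of data leakage. The sketch below uses synthetic data (it is not the study's code) to show the trap: scoring a model on the data it was trained on gives a flattering accuracy, while a held-out test set gives a far more honest estimate.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic, noisy classification data standing in for a real clinical dataset.
    X, y = make_classification(n_samples=300, n_features=20, flip_y=0.3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("Accuracy on the training data:", model.score(X_train, y_train))  # near-perfect, and misleading
    print("Accuracy on held-out data:   ", model.score(X_test, y_test))     # the honest estimate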

 

 

 

How trusted does your AI need to be?

 

 

The level of confidence in an AI output needed before the user will trust it depends on the seriousness of the consequences of failure. Users will trust a useful low-risk AI even if it is far from perfect, but high-risk AI decisions need much greater levels of confidence in order to create trust.

Examples of AI applications and the levels of trust required. Applications are listed from the highest level of confidence needed for trusted use to the lowest, with the potential negative consequences of failure shown for each.

  • Disease diagnosis: preventable death (highest confidence needed)
  • Drug design: missed opportunities, expensive mistakes
  • Oil well drilling: major financial loss
  • Predictive maintenance: unnecessary downtime
  • Mortgage recommendation: harm to customers, legal challenges
  • Targeted adverts: missed sales opportunities
  • Translation: miscommunication
  • Film recommendation: occasionally frustrated customers
  • Photo tagging: wrongly tagged photos
  • AI-created artwork: probably none (lowest confidence needed)

 

 

How trust in AI can be undermined

 

 

There are various stages where trust can be undermined in the AI development and deployment process. In this section, we discuss the main risk factors.

 


Bias in training data

Unconscious gender or racial bias has often hit the headlines, usually created by applying AI to process automation without understanding the limits of the data. Amazon’s Rekognition, for example, misidentified women and people of colour, likely due to smaller training data sets for these groups. Such stories undermine the credibility of commercially available technology.

The AI doesn’t learn incorrectly; it learns to reflect bias in its training data, which in turn reflects bias in the real world. Prejudice is the nasty face of this, but bias in data can also extend to misplaced assumptions by scientists, doctors recording incorrect diagnoses, and even people’s choice of written or recorded language.


Badly curated data

Data can also be mislabelled, or poorly curated, so that the AI struggles to make sense of it. If data is not appropriately selected, then the model will not learn how to reach the right conclusions. And if conclusions seem suspect, people won’t trust it (or worse, they will trust it and make bad decisions as a result).

"If data is not appropriately selected, then the model will not learn how to reach the right conclusions."


User interfaces and explainability

Trust is not just about how good the model is, but about how easy it is to use and interact with, and how clearly the answers are presented to the user. If users do not feel they can input the information they want, they are likely to be suspicious of the result. If the interface is overly complex, or the results are presented in a confusing way or with no explanation of how they were reached (even if they are correct), the AI will quickly be abandoned. Even something simple like a film recommendation is much more trustworthy if you can see what aspects of your viewing history led to it.

 
 


"If users don't feel they can input the information they want, they are likely to be suspicious of the result."


Bias in the real world

Many AIs continue to learn post-deployment, but they are not necessarily well prepared for the complexities of real-world data. Famously, Microsoft’s Tay, an artificially intelligent chatbot, was designed to learn from interactions it had with real people on Twitter. Some users decided to feed it offensive information, which it had not been designed to deal with appropriately. Within 24 hours Tay had to be deactivated and withdrawn for spreading deeply upsetting opinions.


Malicious attacks

AI is susceptible to new kinds of malicious attack in ways that are poorly understood by users. AIs that appear to make human-like decisions can be fooled in ways that humans cannot.

In a test case, an AI was trained to recognize images. By changing just one pixel in an image, researchers fooled the AI into wrongly labelling what it saw, sometimes very wide of the mark: one model thought a stealth bomber was a dog. Tesla’s self-driving image recognition systems have been tricked by stickers placed on roads and signs, causing the cars to suddenly accelerate or change lanes.
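The idea behind such attacks can be illustrated with a deliberately fragile toy classifier (invented for this sketch; it is not the cited research or a real vision model): because the 'model' over-relies on a narrow feature, a brute-force search over single-pixel changes finds one that flips its prediction.

    import numpy as np

    # Toy stand-in classifier that over-relies on a single pixel;
    # a real attack would query the actual model instead.
    weights = np.zeros((8, 8))
    weights[3, 4] = 5.0

    def predict(image):
        return "cat" if (image * weights).sum() > 2.5 else "dog"

    image = np.random.default_rng(0).random((8, 8))
    original = predict(image)

    # Brute-force search: does changing any single pixel flip the prediction?
    for i in range(8):
        for j in range(8):
            perturbed = image.copy()
            perturbed[i, j] = 1.0 - perturbed[i, j]
            if predict(perturbed) != original:
                print(f"Prediction flipped from '{original}' by changing pixel ({i}, {j})")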


Lack of AI transparency

Sitting above all these issues is a fear fed by AI’s lack of transparency. Not only do end-users not understand how AIs make their decisions; in many cases, neither do their makers.

Apple’s credit card, backed by Goldman Sachs, was investigated by regulators after customers complained that the card’s lending algorithms discriminated against women. No one from Apple or Goldman was able to justify the output or describe how the algorithm worked. The apparent correlation between gender and credit decisions doesn’t necessarily mean one is causing the other, but it creates suspicion that bias has crept in. Without transparency, it is impossible to know, and that makes it hard to trust.

"Not only do end-users not understand how AIs make their decisions, in many cases, neither do their makers."

 

 

A framework for building trusted AI

 

 

Despite the risks discussed in this paper, AI delivers huge value when done well. And away from the negative headlines, it is often done very well.

The problems usually come when poor processes and lack of experience lead to poor choices: the wrong algorithm, bad data, inadequate verification, poor interfaces, or a lack of post-deployment support. These errors are often baked in from the outset by fundamental mistakes in initial scoping and design, caused by a lack of understanding of AI and of the real-world problem it solves. All of this undermines trust.

As AI plays an increasingly important role in our lives, we need to design it to be trusted. This goes beyond data scientists designing an algorithm that learns about correlations and works on test data. AI must be designed as a whole product, with a set of support services around it, that allows the user to trust its outputs. Doing so requires a rigorous approach to AI development.

In this final section, we outline five key parameters for creating trusted AI.

1) Trusted AI is Assured

A data-driven decision is only as trusted as the data that underpins it.

The most obvious aspect of trusted AI is ensuring it does what it is supposed to. Because AI learns from data, that data must be reliable. You can train an AI to recognize cats and dogs by feeding it lots of labeled images of each. But if some cats are labeled as dogs, some are not labeled, or some show a completely different animal, the AI will learn incorrectly and make incorrect decisions. If all images of dogs are in the snow, the AI may learn to detect snow rather than dogs.

As soon as it makes mistakes, users will stop trusting it. This may not matter too much for classifying cats and dogs, but it matters a lot for classifying images of healthy vs precancerous cells.

Trusted AIs must use a well-designed model, and be trained and tested on data that is proven to be accurate, complete, from trusted sources, and free from bias. Capturing that data requires rigorous processes around data collection and curation. Those designing an assured AI model should examine AI inputs and ask the following questions (a minimal sketch of such checks follows the list):

  • Does this data accurately represent the system to be modeled?
  • Does the data contain confounding or irrelevant information?
  • How will I know that the data is of sufficient quality?
  • Is the underlying data biased and how can I tell?
  • Are my assumptions about data collection biased?
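To make some of these questions concrete, here is a minimal sketch of basic input-data checks. The file name and the 'label' and 'background' columns are hypothetical; it illustrates the kind of scrutiny involved rather than a complete data-assurance process.

    import pandas as pd

    df = pd.read_csv("training_data.csv")  # hypothetical training dataset

    print(df["label"].value_counts(normalize=True))  # is any class badly under-represented?
    print("Duplicate rows:", df.duplicated().sum())  # repeated records can inflate apparent accuracy
    print("Missing values per column:")
    print(df.isna().sum())

    # Spot a confounder such as 'all dogs photographed in snow': does a supposedly
    # irrelevant column line up suspiciously well with the label?
    print(pd.crosstab(df["background"], df["label"], normalize="index"))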

"Trusted AIs must use a well-designed model. They must be trained and tested on data that's proven to be accurate, complete, from trusted sources, and free from bias."

2) Trusted AI is explainable

A functioning model is not enough; users need to understand how it works. The AI earns trust by backing up its recommendations with transparent explanations and further detail.

If a bank turned you down for a mortgage, you’d expect to know why. Was it past or existing debt? Was it an error? Did they confuse you with someone else? Knowing the reason allows you to move forward in the most constructive manner. For the bank, it allows them to spot faults, retain customers, and improve processes.

It’s the same for AI. A recommendation is much more useful if you understand how and why it was made. Explainability allows users to see whether the AI supports their own intuition (e.g. about a disease diagnosis or the best way to make a new material) or helps them question it. And it allows developers to spot errors in the AI's learning and remedy them.

A responsibly designed AI will have tools to analyze what data was used, its provenance, and how the model weighted different inputs, then report on that conclusion in clear language appropriate to the user’s expertise.
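As one illustration of such tooling, the sketch below (synthetic data, scikit-learn) uses permutation importance, a common model-agnostic technique, to report how heavily a trained model leans on each input feature. It is a simple example of the kind of analysis meant here, not a full explainability suite.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {score:.3f}")  # larger values mean the model leans on that input more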

Explainability may also involve some trade-off between raw predictive power and transparency of interpretation. If AI cannot fully explain its outcome, trust may still be built in some cases through rigorous validation to show it has a high success rate and by ensuring the user has the information they need to understand that validation.
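One way to make such validation rigorous is to repeat it across several splits of the data rather than relying on a single, possibly lucky, train/test split. A minimal sketch with synthetic data:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=400, n_features=10, random_state=0)

    # Five-fold cross-validation: train and score on five different splits of the data.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print("Accuracy per fold:", scores.round(3))
    print(f"Mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")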

Those designing AI to be explainable should ask:

  • What could be known in principle about the working of this AI?
  • Does the model need to be fully interpretable or are post hoc explanations sufficient?
  • Can the AI rationalize why it decided to offer the user this piece of clarifying information, rather than another?
  • How consistent is the given answer with previous examples?
  • Does too much transparency make the AI vulnerable to attack?
  • Does the information on offer meet the accountability needs of different users?

3) Trusted AI is human

A trusted AI is intuitive to use. Netflix would be less successful if users had to enter a complex set of parameters to get film recommendations. Instead, it automatically presents films you may like based on your history or search terms, in an easy-to-navigate interface, and sometimes even tells you why it recommended them (‘Because you watched…’).

An intuitive interface, consistently good recommendations, and easy-to-understand decisions, all help the user come to trust it over time.

Intuitive doesn’t always mean simple. A ‘simple’ smartphone app may use very intuitive guided decision-making. A drug property prediction platform can expect advanced chemistry knowledge from its user and display complex information in a manner appropriate for an expert to understand and interact with.

The complexity of these ‘guided decisions’ must be matched to the user’s knowledge. Equally, the time it takes the user to fully trust the AI will be relative to the complexity and risk of failure.

Making AI usable for humans means understanding the end-user and how they interact and learn over time. Those designing AIs should ask:

  • What does each user need to understand about why the AI did what it did?
  • How should we communicate with users and collect feedback?
  • Why would users not trust this AI?
  • What reassurances are they likely to need?
  • What training and support are needed for different users?
  • Should the system allow users to ask for more details?
  • How can we retain confidence if the AI gets things wrong?
  • How do we make users feel the AI is accountable? 

4) Trusted AI is legal and ethical

A trusted AI should reach decisions that are fair and impartial, with privacy and ethical concerns given equal weight to accuracy.

An AI may conclude that certain groups are more likely to re-offend or default on loans. Whilst this may be true at a group level (for broader socioeconomic reasons), it does not mean an individual from that group is more likely to do so. An AI using this as a decision-making factor creates undue prejudice and opens its user up to legal challenges.
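One simple check for this kind of problem, sketched below with invented data and column names, is to compare the model's positive-decision rate across groups before deployment; a large gap does not prove unfairness on its own, but it flags a decision process that needs scrutiny.

    import pandas as pd

    # Hypothetical model decisions, with the protected attribute recorded for auditing only.
    results = pd.DataFrame({
        "group":    ["A", "A", "B", "B", "B", "A"],
        "approved": [1,   0,   0,   0,   1,   1],
    })

    rates = results.groupby("group")["approved"].mean()
    print(rates)  # positive-decision rate per group
    print("Disparity ratio:", rates.min() / rates.max())  # values far below 1 warrant investigation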

Those designing AI to be ethical and legally compliant should ask:

  • Why are we building this AI at all?
  • Are we aligned with prevailing ethical standards?
  • Is it fair and impartial?
  • Is it proportionate in its decisions?
  • How is it governed?
  • Are we honest about what we claim it can do?
  • Are we transparent about what it's doing, or is it doing something else behind the scenes?

"An AI may conclude that certain groups are more likely to re-offend or default on loans."

5) Trusted AI is performant

Finally, a trusted AI continues to work after deployment.

Too many AIs work well in a controlled environment but fall over once deployed, either because they are not ready for the complexities of real-world data, or because they have not been designed to integrate into the user's working life (whether technically or practically). Users will quickly lose trust in an AI they see making less and less reliable decisions.

A truly performant AI is future-proofed for throughput, accuracy, robustness and security, balancing raw predictive power with transparent interpretation while remaining aligned to genuine business needs.
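One small part of staying performant is noticing when live inputs drift away from what the model saw in training. The sketch below (synthetic data) uses a two-sample Kolmogorov-Smirnov test to flag such drift for a single input feature; a production system would monitor many features and outputs over time.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # distribution the model was trained on
    live_feature = rng.normal(loc=0.4, scale=1.2, size=5000)      # what the deployed model now sees

    result = ks_2samp(training_feature, live_feature)
    if result.pvalue < 0.01:
        print(f"Input drift detected (KS statistic {result.statistic:.3f}): review or retraining may be needed")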

Those designing AI to perform in the real world should examine AI outputs and ask:

  • Does this AI actually solve the intended business problem?
  • Do we understand the needed levels of throughput and robustness?
  • Do we understand the required output quality? 
  • What safety barriers are needed in case the AI makes a mistake?
  • How robust is my testing, validation, and verification policy?
  • Is the in-service AI protected against adversarial attacks?
  • Do we have a plan to continually assess and improve in-service performance?
  • Do we know when and how the in-service model could become invalid?

"Those designing AI to perform in the real world should examine AI outputs."

About the authors of the report:

Antoine Aymer, Strategic Portfolio Director at Sogeti

 

Albert Tort, Sogeti

 


