
Never Mind Why! Let’s get an AI

Reading through the Quality Characteristics report, it struck me that every time we create a new technology to help us fix a problem, many of the same problems come back to haunt us: knowledge gaps, hype, high cost and low return, and products rushed to market before they, or the market, are ready. And they always bring new “friends” tagging along as well.

Now we have a situation in which an AI can produce answers that we cannot validate, because it has learned from, and seen, things that we cannot. This is both a problem and an opportunity: while the answer may be correct for the AI, it is not necessarily correct for the user or the business.

We could be asking the wrong question based on what we expect, or selecting data that supports our preconceptions. Face recognition algorithms, for example, have turned out to be very good at recognizing the people who built them, because the developers used their own photos in the training data too often.
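
As a minimal sketch of how that kind of selection bias might be caught early, the snippet below flags identities that dominate a face-recognition training set. The directory layout, the file extension and the 10% threshold are illustrative assumptions, not taken from any real project.

```python
# Sketch: flag identities that are overrepresented in a face-recognition
# training set. Layout and threshold are illustrative assumptions.
from collections import Counter
from pathlib import Path

def overrepresented_identities(dataset_dir: str, max_share: float = 0.10) -> dict[str, float]:
    """Return identities whose share of all images exceeds max_share."""
    # Assumes one sub-directory per identity, e.g. dataset/alice/img001.jpg
    counts = Counter(p.parent.name for p in Path(dataset_dir).rglob("*.jpg"))
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items() if n / total > max_share}

if __name__ == "__main__":
    for name, share in overrepresented_identities("training_faces").items():
        print(f"WARNING: {name} makes up {share:.1%} of the training set")
```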

Already I am seeing organisations implementing an AI first and only then asking what they can do with it.

Often they want to use it as a direct replacement for an existing system, and they expect to get the same answers they already get.

But there is so much more that can be done with an AI. They can learn. They can process far more data than we can, open that data up, and give us insights and patterns we have never seen before. In return, they give us “unexpected results”, not as errors but as genuine, valid outcomes.

For many people involved in IT, especially in testing, “expected results” is the mantra. But what do we do when we don’t know what the expected results should be? How do we trust that the machine is telling us what we need to know, rather than what we think we need to know? To make the best of it, it all comes down to the data and how good it is.
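
One answer from the testing world is metamorphic testing: even when no expected result is known, we can still assert a relation that must hold between related inputs. The sketch below uses a hypothetical `predict_risk` stand-in for a trained model, and the monotonicity relation is an illustrative assumption, not anyone’s production test suite.

```python
# Sketch of a metamorphic test: we never assert a specific "expected result",
# only a relation between the outputs for two related inputs.
def predict_risk(age: int, dose_mg: float) -> float:
    """Stand-in for a trained model's scoring function (toy placeholder)."""
    return min(1.0, 0.01 * age + 0.002 * dose_mg)

def test_dose_monotonicity():
    # Relation: for the same patient, a higher dose should never be scored
    # as lower risk. We don't need to know the "right" risk value to check it.
    for age in (30, 50, 70):
        low = predict_risk(age, dose_mg=100)
        high = predict_risk(age, dose_mg=200)
        assert high >= low, f"risk dropped as dose rose at age {age}"

test_dose_monotonicity()
print("metamorphic relation held on all sampled inputs")
```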

Look at the case of IBM’s Watson for Oncology. It was recently reported that Watson had been making unsafe medical suggestions. No patients were put at risk, because the hospitals identified the flaw in the recommendations. Investigations by the hospitals and by IBM showed that the problem lay not with the Watson approach but with the data it was trained on.

It hadn’t been trained on real patients’ data. Instead, it had been fed hypothetical cases generated by doctors associated with the project, and, like all of us, those doctors brought their own experiences and biases to the data they created. That in turn took Watson down paths it would never have followed had it been trained on real-world, non-synthetic data.

The data, the quality of that data, and its realism were critical.
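
As a minimal sketch of how that realism might be checked, the snippet below compares one feature’s distribution in the training data against a real-world sample before the model is trusted. The feature, the sample values and the 0.05 threshold are all illustrative assumptions.

```python
# Sketch: a "realness" check comparing a training-data feature against a
# real-world sample, using a two-sample Kolmogorov-Smirnov test.
from scipy.stats import ks_2samp

def training_data_looks_real(train_values, real_values, alpha: float = 0.05) -> bool:
    """A small p-value means the training distribution is measurably
    different from the real-world one."""
    result = ks_2samp(train_values, real_values)
    return result.pvalue >= alpha

# Hypothetical example: patient ages in synthetic vs. real records.
synthetic_ages = [45, 46, 47, 48, 50, 51, 52, 53, 54, 55]  # doctors' scenarios
real_ages = [23, 37, 41, 58, 62, 67, 71, 74, 79, 85]       # actual admissions
if not training_data_looks_real(synthetic_ages, real_ages):
    print("WARNING: training data does not match the real-world distribution")
```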

If you want to see more about how to address this and other related issues, check out our new report.

Andrew Fullen
Solutions Director
+44 (0) 207 014 8900
