
Playing With Reality - AI vs the Fraudsters

Who is winning in the battle between fraudsters and those fighting them? Will AI be the saviour here, or something that makes the issue of fraud worse? Find out on this week's episode of Playing with Reality.


Show Notes

With the increasing prevalence of digital monetary transactions, fraud has become an ever-present threat to all of us online. Advancements in AI have made it possible for fraudsters to use sophisticated techniques to perpetrate their crimes. From deepfakes to investment scams, AI has made it easier than ever for fraudsters to manipulate people and systems. But AI is also being used to fight back - identifying patterns of fraud, detecting anomalies in transactions, and even using behavioural biometrics to spot fraudsters before they can commit the crime. So who is winning in the battle between fraudsters and those fighting them? Will AI be the saviour here, or something that makes the issue of fraud worse? Find out on this week's episode of Playing with Reality.




Today’s Guest

 

Daniel Holmes

Daniel is a highly experienced global fraud prevention leader whose career spans multiple business sectors, including banks and fraud and financial crime technology providers. He focuses on how data, technology, analytics, process and education can help banks succeed in protecting customers and reducing fraud losses. He is the Fraud Prevention SME at Feedzai, a company whose RiskOps platform leverages machine learning and big data to prevent and detect financial crime for some of the world's largest banks.

https://www.linkedin.com/in/daniel-holmes-65911483/?originalSubdomain=uk

https://feedzai.com/

 

 




Episode Transcript

[Music Playing]

Menno: We've all seen the power of AI for good, but what about when it's used for bad?

Daniel: The way in which the pig butchering terminology came about was the pig was being fattened up before it was butchered.

Menno: Fraud committed using AI is on the rise, and it's getting ever more sophisticated. This week I'm speaking to fraud AI specialist Daniel Holmes of Feedzai to find out more about how fraudsters are utilising this technology, and about the people like him who are fighting against it.

Welcome back to Playing With Reality with me, Menno van Doorn, a podcast from Sogeti, the home for technology talent.

Tia Nikolic, my co-host and one of Sogeti's top AI specialists is here with me again. Hi Tia.

Tia: Hey, Menno. Thank you for the nice compliment, top AI specialist.

Menno: Exactly.

Tia: That's really nice.

Menno: How does it feel?

Tia: Feels great coming from you, especially.

Menno: Well, just before our little talk started, you said, “I was testing LLMs”, so I think then you must be a specialist in AI.

Tia: Yes, exactly. Yeah. It's something that I'm very happy about and proud of the work we've been doing. So, the blog post is going to come out soon and people can look out for that.

Menno: Well, we are not going to talk about LLMs as such, although we might touch on them where they affect the topic of this podcast, which is how fraudsters are using AI.

And I've got this story for you about how … everybody has a story. You've got the story, what's going on, and how people are fooled by other people in technology.

It's coming from a close colleague; I won't mention his name. His mother-in-law got a phone call from her bank, and I think 10 minutes later a guy was knocking on the door, and five minutes later she handed over her iPad with all her bank details, and she transferred an amount of (I believe it was) 50,000 euros.

And just before he left, she handed over her jewellery because it wasn't safe in her home. Can you imagine that story?

Tia: Oh, I can't.

Menno: And there's a good ending, in that the bank blocked the transfer because they didn't trust it, but her jewellery is gone.

Tia: That's terrible. That's terrible. All of your life savings and your personal belongings being taken away, that must feel really bad. I can't imagine it.

Menno: So, should we be worried about people, or should we be more worried about technology, now that when technology steps in, it's easier to fool people even more?

Tia: Yeah. Technology doesn't fool people. People fool people.

Menno: People fool people. Sounds like a song. People fool people.

Tia: Exactly.

Menno: Then the question could be what kind of role can AI play in this whole scenery? Because it can maybe help you, it can be part of the fraud scheme. What can AI do to help us to improve and to be better prepared for those situations? What's your guess?

Tia: To be more prepared and better prepared. So, if we're talking about that, I went on ChatGPT before we filmed this podcast and asked for advice on how to stop yourself from being scammed, et cetera.

And it gave some very nice pointers that we also covered just now, to also check spelling mistakes, legitimacy of the website, et cetera.

AI can also be used on a deeper level: for example, you can use data points and data from user devices to see if they have been compromised or have been engaged in fraudulent behaviour before.

So, these are some techniques that I'm sure our guest is going to go into more detail about.

Menno: That's a very clever answer Tia. From now on, I will call you GP Tia, because you can't do it alone anymore.

So, this is all about the future of fraud, and I spoke with someone who has worked in both banking and AI to give us a picture of the issues from all sides. That person is Daniel Holmes.

Daniel is a fraud prevention leader who has worked across banking fraud and financial crime technology. He now works for Feedzai, one of the world's leading RiskOps platforms for financial risk management.

They use machine learning and artificial intelligence to work alongside banks to detect user anomalies and protect customers from fraud at all levels. I started off by asking him to tell us about how fraud is generally carried out in the digital technology space.

[Music Playing]

Okay Dan, good afternoon. Great to have you on the show.

Daniel: Thank you. Very good to be here. Thank you for the invite.

Menno: I would say what a timing. There's so much going on in the space of AI that we desperately need to talk with someone who has specialised in the combination of AI, I would say, and fraud. Don't you think so?

Daniel: Absolutely. I think whilst AI has been evolving at significant pace over the past few years, the fraud landscape has moved at a similar pace.

Then I think, the media hype around AI and the use of AI has really amplified that. I think we've always typically thought of AI and machine learning as, let's say, a defensive mechanism for the banks.

And I think it's now becoming clear that AI is becoming an offensive mechanism for the fraudsters and criminals as well. So, really good timing in that regard.

Menno: Exactly. Let's first chat a little bit on the status quo of fraud in itself. So, we talk about fraud, fraud, fraud, but can you give some examples maybe for the listeners to get a better overview of what kinds of fraud that we are talking about?

Daniel: Yeah, so when we think about fraud, we tend to think about unauthorised transactions on your account.

So, that means that Menno, you log into your online banking one morning and you recognize that a transaction has left your account and you know nothing about it. And that as a consumer is probably the worst customer experience you can ever have, logging in to see that your money's gone.

So, we tend to think about fraud as an unauthorised transaction. So, that could of course be through one of many channels. There are so many channels for us as consumers to interact through now. It could be a rogue card transaction.

And then of course, more recently with the explosion and utilisation and adoption of digital banking, it could be me compromising your online banking user ID and password, going in and then extracting money out of your account that way.

But we tend to think about it as unauthorised access with the end goal for the fraudster, of course being financial gain.

Menno: Yeah. Well, all kind of terrible things can happen to you, but maybe the worst thing is when you are the one that transfers the money yourself and somebody impersonates, and then I'm the one that is sending the money and you feel really bad about it.

Daniel: Absolutely. That's the evolution that we've been on in the fraud industry over the last three or four years. When we think about some of those unauthorised fraud types that I described, the banks have actually done a really good job in terms of adopting the right technology, adopting the right process, bringing in the right people to protect customers from those unauthorised attacks.

And there's a plethora of different technologies out there that have put the bank in a really good position when it comes to protecting customers from those types of attacks.

Now, what that did is it gave the banks the upper hand. That doesn't mean that the fraudsters just stop. They're not going to stop trying to commit fraud. What they do is they look for the weakest part of the chain.

Now, once upon a time, the weakest part of the chain was the bank's ecosystem and the bank's fraud defences. We hit a point where that was no longer the case. The weakest part of the chain was the consumer, the individual themselves.

So, rather than trying to take on the bank systems, to your point, the fraudster would contact the victim directly and then ask them to process a transaction on their behalf.

And what that did is it rendered a lot of the technologies that the banks had deployed far less effective, because traditional fraud controls look at things like the device that the customer uses and the location the transaction comes in from. And the banks have had to really rethink what they do in order to, again, get themselves in a position where they're succeeding more times than they're failing.

Menno: So, then you're in a situation that you lose money, you can get compensated. We can talk about that.

But can you describe what else can happen to you besides the part of the money? How will it hit you maybe as an individual or community beyond just financial loss?

Daniel: So, we always tend to assume that fraud is about the financial loss. And clearly that's a part of it.

Now, of the two fraud typologies that we've spoken about so far, Menno, there's the unauthorised fraud typology, and then there are authorised transactions, which are what we might call scams. There's a big difference in terms of the refund policies that banks have when it comes to reimbursing customer losses.

So, on the unauthorised side, globally, the general consensus is that if there's an unauthorised transaction on your account, the bank will refund you that money because they've let that fraudster take that money from your account without your consent.

On the authorised side, actually the global perspective is very different. Now, the UK is a little bit ahead of the curve in terms of where they are with that in that around 50 to 60% of those authorised frauds get refunded.

But if you look wider, if you go into the U.S., if you look at Asia Pacific and Australia and New Zealand, for example, currently the customer is having to foot that bill themselves.

Now there's constant policy and regulatory change, which I'm sure we can get into. And the general trend is that the regulators are starting to side with the consumer rather than the banks.

But you put yourself in that position of an individual that's perhaps in their late 50s, early 60s, they've just retired, they've just had a large pension pay out, and then they lose that money to an investment scam. That's not just a financial loss, that's a fundamental change to how that person will have to live the rest of their lives.

So, the emotional impacts and the future financial repercussions for that individual are hugely significant.

[Music Playing]

Menno: Tia, what do you make of this liability issue? Do you think that banks should take more responsibility, for instance?

Tia: Yes, although I know that they're already doing a lot, and in some cases they cover almost 100% of the lost money, which should be the case, because the consumer needs to be protected.

I think what also might be a good idea is for banks to get involved in digital literacy classes for children in schools, for example, to teach them about these sorts of risks. I think that could also help.

Menno: Yeah. Teach them also maybe about what new technology can do to you, if you have never heard of a deepfake voice, for instance. It could be very helpful if, in education, people learn that the truth isn't always the truth.

Tia: Exactly. I'm a big advocate for this. I think we should talk about these pitfalls of technology a lot together, of course with the positive sides, because there's always the other side of the coin.

Menno: Yeah. And there's of course also the psychological impact and what can we do about that problem?

Tia: Absolutely. I think, yeah, we can all provide a bit more empathy towards people in our day-to-day life. Emotional intelligence is extremely important when handling technology as well. And when talking about this, it again ties in with this digital literacy and how to be human around technology, the pitfalls.

Menno: Next, let's hear a bit more from Dan on how AI is changing the way we think about fraud.

[Music Playing]

What can you do with AI to increase the problem? And we've all been seeing all these kind of Midjourney deepfake technologies that look so real that I can't believe my eyes, this is really this person. This is really this situation and deep voice and all this kind of thing.

So, what do you think will this new technology wave bring to your work personally?

Daniel: Yeah. So, I don't think it will fundamentally change the type of fraud that we see, but clearly what it will do is it will increase the level of sophistication that the fraudsters are able to apply to the attacks.

So, we spoke earlier about authorised fraud, and that might be me calling you pretending to be from your bank and then convincing you to do something on my behalf.

Now, how can that become more sophisticated? Well, I saw a video detailing a fraud case that had been publicised by the media in the U.S. And in this particular case, what the fraudster had done is they'd taken a video from social media, and they trained a generative AI using the voice of an individual.

And they then called that individual's parents, and they used the voice AI that they'd built to say, “Dad, I've been kidnapped, I'm in trouble. You need to send $10,000 immediately to this account in order for them to release me.”

And of course, in that scenario, the parent immediately fell into panic because all he could think was, it's my daughter on the phone and I need to send this money immediately in order to ensure that she remains safe, and every parent would act the same way.

So, that's an increase of the level of sophistication that I was talking about, where if you get that call from your bank, perhaps you take that step back and you think something isn't quite right. I might just hang up and call my bank directly to validate this before I do it.

But when you get a call from somebody that you think is a relative, or in this case a daughter, logic goes to one side, and you immediately want to try and reconcile that position and protect your daughter.

And that's an example of how these things can become much more sophisticated and frankly scary from the consumer's perspective.

Menno: I think it's a very good example. People are in panic, and if you are in panic, you can't think clearly, so then you send money.

So, do you think that AI can be of any help in this kind of situation so that AI automatically detects whether this is your son or something else is happening at the other side, because at this moment they're very bad at doing that?

Daniel: Yeah. So, I think where AI is going to add the most value from scenarios like that is looking at the transaction when it takes place. When that father in that scenario goes to his bank and he asks to send $10,000, there's a whole string of questions that you can ask from a machine learning perspective to understand whether this transaction is normal for that customer.

Has the customer sent money to this beneficiary before? Does the payment amount that they're sending align with the norms that we've seen for this customer in the past? Has anybody else in the bank sent money to this particular beneficiary before? Has the customer sent money at this time of day before?

So, you can build out all these strings of logic into features that can ultimately fuel the model. And I think where we need to go from a fraud detection perspective is we need to recognize that consumers behave differently.

You can't just have a sense of what's abnormal at a bank level and assume that that will apply to everybody, because of the KPIs that you think about in fraud: finding the most fraud, high levels of detection, keeping your false positives and your intervention low. You can't hit those KPIs and put a good service for your consumers at the heart of your strategy if you assume that everybody behaves the same.

So, I think really becoming customer centric with your approach is ultimately what's going to allow the banks to succeed. And this applies to a whole range of things within the bank as well. In the sense that as more channels have come online for customers to interact with and bank with, you go back 10 years, we did our banking via branch, then via card, then through telephone banking, then through web banking, then through mobile banking.

One of the industry problems that we observe is that none of that data is connected. So, somebody can transact with a card and there's no way to connect that to the event that happened on mobile 10 minutes ago.

So, it's about ripping out that legacy set of silos and replacing it with that true customer-centric view, layering machine learning on top of that to make the most of the data, and then providing a true customer-based outcome, which is going to allow us to succeed in the industry.
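To make the per-customer checks Daniel lists concrete, here is a minimal sketch of how questions like "has this customer paid this beneficiary before?" could be turned into model features. The field names (customer_id, beneficiary, amount, hour) and the in-memory history list are assumptions for illustration only, not Feedzai's actual schema or implementation.

```python
# Minimal sketch: turning per-customer questions into model features.
# Field names and the `history` list are illustrative assumptions.
from statistics import mean, stdev

def transaction_features(txn, history):
    """Compare a new transaction to this customer's own past behaviour."""
    past = [t for t in history if t["customer_id"] == txn["customer_id"]]
    amounts = [t["amount"] for t in past] or [0.0]
    spread = (stdev(amounts) if len(amounts) > 1 else 0.0) or 1.0

    return {
        # Has the customer sent money to this beneficiary before?
        "new_beneficiary": not any(t["beneficiary"] == txn["beneficiary"] for t in past),
        # How far is the amount from this customer's own norm?
        "amount_zscore": (txn["amount"] - mean(amounts)) / spread,
        # Has the customer transacted at this hour of day before?
        "unusual_hour": txn["hour"] not in {t["hour"] for t in past},
    }
```

Features like these would then feed the supervised model Daniel mentions, so that "abnormal" is judged per customer rather than at a bank-wide level.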

Menno: So, in the end, it's about making the banking system itself more intelligent by using AI while at the same time individuals can make mistakes or send money or be a victim of fraud, but there's a sort of safety net then.

Daniel: Absolutely.

Menno: Do you have examples of companies, financial institutions that are working this way or heading into this direction already?

Daniel: So, this is very much the direction that the industry is moving. I think one of the buzz terms around the industry right now is customer centricity and recognizing that you have to put the customer at the heart of the risk decision in order to drive the best outcomes.

And I think ultimately, Menno, it comes down to having layers. And there's some of these layers that you can think of in a covert sense, and there's some of these that are a little bit more overt from the customer's perspective as well.

So, you have that transaction risk analysis, but then you can layer that with a whole bunch of other AI led technologies like device recognition, like behavioural biometrics, which is thinking about not just what you are doing, but how you are interacting with your device.

The way which you and I hold and interact with our device, although we don't consciously think about it as we are doing it, will actually fundamentally be very different when you look at the data.

So, that's some of the covert things that go on that customers don't really know about, but then you've got some of those other things as well, like education of customers.

Menno: Yeah, of course, of course. Can you give an example of these behavioural biometrics? So, how does it work?

Daniel: Yeah, absolutely. So, as a customer interacts with a web session or a mobile session, the data that's available for collection will be different depending on the device type that you are using.

So, the easiest fork in the road that we can create here is that we have a laptop or a desktop device, and we think about the data from two perspectives: the way in which the user moves the mouse and the way in which the user interacts with the keyboard.

So, what's the speed that the mouse moves at? What's the count of inflections and the sudden changes of direction that the user will make? What's the click rate on the mouse?

From the keyboard perspective, what's the speed of typing? How long do people hold the keys down for as they're perhaps tapping in a payment reference or similar? Do they use keyboard shortcuts, et cetera?

And you can use that to learn how the user normally interacts over a period of about five to eight sessions. So, it's about five to eight times we need to see that user so that we can store and build that baseline.

And then when we see that customer again in future, we can think, okay, how does today's behaviour compare to that baseline that we've created?
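As a hedged sketch of that baseline idea, assuming roughly five to eight sessions of keyboard and mouse telemetry per user, the logic could look like the example below. The feature names, structures and scoring are assumptions for illustration, not Feedzai's behavioural-biometrics product.

```python
# Sketch: build a per-user behavioural baseline from a handful of past
# sessions, then score how far a new session deviates from it.
# Feature names are illustrative assumptions.
from statistics import mean, stdev

FEATURES = ["mouse_speed", "direction_changes", "click_rate",
            "typing_speed", "key_hold_ms"]

def build_baseline(sessions):
    """sessions: 5-8 dicts of telemetry features for one user."""
    return {f: (mean(s[f] for s in sessions),
                stdev(s[f] for s in sessions) or 1.0)
            for f in FEATURES}

def session_anomaly(session, baseline):
    """Mean absolute z-score across features; higher = less like this user."""
    return mean(abs(session[f] - mu) / sd for f, (mu, sd) in baseline.items())
```

A high anomaly score for today's behaviour relative to the stored baseline is one signal that someone other than the genuine customer may be driving the session.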

Menno: Can you give me maybe some examples of how Feedzai successfully has prevented or detected fraudulent activities?

Daniel: Yeah, absolutely. So, one of the key USPs for Feedzai in the market is that the platform has always been extremely well set up and market leading when it comes to consuming vast amounts of data at scale and making a real-time fraud decision for the bank.

So, this has typically been centred around transaction analysis and some of those things that we described earlier. What we've added in the last couple of years into the technology stack is the ability to supplement that data that we consume with data that we generate ourselves.

So, we can generate that behavioural biometric data, we can generate those device footprints, we can generate those location signals, and we can combine what the user's doing with how they're doing it. And that's what really puts us in a market leading position and is very much our USP within the market.

And what we've seen is when we've implemented that approach into the ecosystems of our customers, we've seen significant improvements from a KPI perspective.

So, that means we're finding more fraud and we are flagging fewer false positives or let's say putting less friction in the way of genuine transactions and genuine customers.

And ultimately there's always a balance there between what that right approach should be. And each bank and each customer of ours have a slightly different risk appetite, but ultimately, we'll work within that risk appetite to help them manage their strategy effectively.

[Music Playing]

Menno: So Tia, what do you make of this kidnapping story that Dan talks about? What would you do in this situation?

Tia: It's very difficult to say. This is like really the scammers are playing on your emotional strings. So, I really cannot say how I would react. I definitely know I would panic. That would be the first step.

Yeah, definitely. We've come a long way since the Nigerian Prince Scam over email, which we all know about.

Menno: So, there are a lot more combinations to make digitally to …

Tia: Yeah.

Menno: And when you look at deep voice technology, for instance, what can we do against it? What can we do against deep voice technologies?

Tia: Oh, that's a big question that researchers are working really hard on. You wrote about deepfakes a lot, and Vince has already talked about them. This has been part of research and discussion for years now. And of course, I understand it, but I'm sad to say that there's, again, no one answer to it.

So, it's also behaviour of people that are being scammed. How are they going to react? The companies that are releasing these models, do they watermark the output in a specific way?

But then we also go back to scammers. Are they sophisticated enough to remove this watermark, for example? So, it's a very complicated multifaceted question.

Menno: Yeah. So, the other solution would be to just accept that it exists, accept that you make a mistake, accept that you pay the money. But then biometrics step in.

Tia: Yes.

Menno: And the bank can actually stop the transfer because of your own marks.

Tia: Exactly. It's a great technology to use here to pre-empt scammers.

Menno: Yeah. So, the banks actually have a digital twin, a digital Tia that knows how she behaves when she pushes the buttons of her laptop or mobile phone.

Tia: Am I GP Tia or digital twin Tia?

Menno: Well, you're both.

Tia: Nice, nice.

Menno: You're both.

Tia: Okay, great. Then Menno, I'm very curious what you spoke to Dan about next.

Menno: Yeah. Well, you know how obsessed we are with GPT-4 on this show. So, now I want to hear if generative models of AI like this are making it easier for fraudsters.

[Music Playing]

So, I’m a, let's call it an amateur fraudster. So, I go to GPT-4 and ask questions about please send an email by saying that Dan Holmes has won the lottery. Blah, blah, blah, blah, blah. And very convincingly.

And so, normally you would go to the dark web. I don't visit the dark web, but there's now GPT-4, 5, 6 I don’t know. So, how do you look at these possibilities of making it so easy for people to do these kind of things?

Daniel: I think ChatGPT is a challenge for banks in the sense that when you think about the example you gave of Dan Holmes, you've won the lottery and the fraudster would send an email to thousands of different addresses because ultimately, it's a numbers game. If they send that email to 10,000 people and they get a 1% response rate, that's not bad for sitting there and clicking a button.

And I think one of the biggest red flags, and one of the ways in which banks have historically educated their customers to spot suspicious emails, has been to look for grammatical errors, non-native phrasing, or perhaps, let's say, unorthodox ways in which sentences and paragraphs have been structured.

ChatGPT solves that, because it's very easy for the fraudster to tap a grammatically incorrect passage or sentence into ChatGPT and say, please format this in a way that makes more sense in the core English language.

So, I think it's going to force the banks to rethink how they educate consumers when they're looking for things like bogus phishing emails. I think it will, at least in the short term, increase the success rate that the fraudsters have.

And there isn't a lot that the banks can do other than educate their customers when it comes to how frequently they will respond.

Where the banks have got control is what they're doing from a technology perspective and a process perspective to ensure that, if the fraudster is able to use a tool like ChatGPT to more successfully extract personal credentials from a customer, then when the fraudster comes to use those credentials, the bank is set up to succeed from a technology perspective: to stop that fraud, to stop that transaction, to spot that anomaly within the user's usual behaviour, and to protect the customer and reconcile their financial position quickly.

Menno: So, give it some time, you're saying also?

Daniel: I think so. I think it's a little bit too early to quantify the impact that it's had. It's a little bit too soon to say for now.

Menno: People are always interested in bad stories or bad news maybe. I don’t know. So, what are some of the major scams that you're currently seeing?

Daniel: So, I think probably the highest-profile one that I saw recently was that the multiple Olympic gold medallist Usain Bolt had been scammed out of, I think, about 12 million pounds; he was the victim of an investment scam in the sense that he was convinced to send a significant amount of money to a bogus investment.

An investment scam is just one of the many scam typologies that we see. One of the big challenges in the industry last year was investment scams with the use of cryptocurrency.

So, a fraudster contacting a victim and saying, “Hey, deposit a thousand pounds into this crypto wallet and we can guarantee you a 1000% ROI in two weeks, and then you can have your money back.”

And whilst the crypto markets, and the stock markets more generally, were let's say bullish, customers were falling for that, and the fraudsters were having high success. That's tailed off a little bit in the last three or four months, and I think that's largely down to how the crypto market has become bearish rather than bullish. But I don't doubt that these things will return.

Menno: I've heard that hackers are using this technique or idea of, it's called pig butchering. Can you tell me what it is?

Daniel: Yeah. Pig butchering is a fraud typology that emerged last year. And the way in which the pig butchering terminology came about was this analogy was that the pig was being fattened up before it was butchered because the fatter it gets, the more meat is on that pig for the consumer to then eat.

And how it works in a practical sense is that it often involves a romance scam, in the sense that somebody is approached on a dating website, and they build a rapport with that individual.

They'll try and take the conversation outside of the dating website and into a more traditional messaging app like WhatsApp, for example. And there'll be exchange of messages. Often, they'll begin to exchange hundreds of messages every day. And from the fraudster's perspective, it's all about trust.

So, once they build a certain level of trust with the victim, the victim will often then trust the fraudster more than they trust their bank, which is an extremely dangerous position to be in because even if the bank warns them of the risk, they're more likely to listen to the criminal than they are to their own banking institution.

So, they build that relationship over a period of time, and then once they've got to the point in that relationship where they feel the level of trust is at the right place, they will then start to try and extort money from the victim.

So, that could be through a story of I can make you 10 X returns in an investment. I know an individual that can broker that for us. It might be that I really want to come and live with you in the UK or whatever part of the world you are in, but I need two and a half thousand dollars to facilitate a plane ticket.

It could be one of many stories, but ultimately, it's an emotional connection that the fraudster then capitalises on.

But the pig butchering term comes from that fattening analogy. So, they kind of fatten the victim up over a period of time before then going in for that extraction of funds. So, it's very much playing the long game, let's say, rather than the short game.

[Music Playing]

Menno: So, Tia, as you know, I'm a big fan of catchphrases, so pig butchering, fantastic words.

Tia: Fantastic.

Menno: So, what do you make of it?

Tia: I don't like the terminology, but-

Menno: You don't?

Tia: It's very graphic, but I think actually, yeah, it's a good metaphor for what's happening right now in the fraud world. And as Dan also explained, it's basically the idea of people slowly being fed information so that they will trust the scammer in the end.

And we already spoke about this a bit. So, scammers, first of all, use email as their first channel to scam people. Then they create legitimate-looking websites to increase their apparent legitimacy.

And then they use Twitter accounts and influencers to further push this. And then people are more likely to actually buy into the scam itself. So, it makes sense.

Menno: So, it can be done in a very simple way by sending some emails, et cetera. So, that’s already been done for many years. So, now ChatGPT kicks in. And how can that I would say, improve the pig butchering strategies?

Tia: Well, given that I'm GPT Tia, GP Tia as you called me, I also asked the GPT to help me out with this.

Menno: Not again.

Tia: But of course, you cannot explicitly ask ChatGPT to help you with a scam. It's going to say no. But I tried to prompt-engineer my way around it, and I asked how it would write an email where people need to click a link because of the fear of missing out, with a money component added, so they will make money if they click the link. And it actually gave me a very nice post.

Then I asked, okay, where should I send it? Which channels? And then it told me, first start with email, then use social media, then try to get an influencer and an actual human to call someone or create a social media post.

I was like, “Wow, this is really elaborate. And this I think very nicely illustrates how this can be used.”

Menno: Yeah. It's funny that you say — I've done the same.

Tia: Of course.

Menno: I've talked about it with Dan. And so, we could say that the dark web is here, or the next version of the dark web is here, and it's called ChatGPT.

Tia: Yeah. To some extent, that's true.

Menno: It makes it easy.

Tia: That’s true.

Menno: It's sort of the vulgarisation of AI, because now everyone can do it.

Now to end, let's hear from Dan on what he thinks the future holds for fraud and financial crime.

Coming back to the technology itself, the capability of technology to talk to humans very convincingly, like ChatGPT, GPT-4, 5, can only mean that we'll see many more of these cases, because it's easier to convince people that there's a person at the other side.

Daniel: It certainly is. And often, it's English-speaking countries that bear the brunt of this, Menno, in the sense that English is probably the closest thing to a universal language that we have.

And often the fraudsters, they're located in a certain part of the world, but English is the language that they feel is going to give them the most opportunity to attack and convince customers to do what they want them to do in all parts of the world.

And ChatGPT is only going to accelerate how strong their use of the English language is. So, for somebody that perhaps has broken English, whenever they're communicating through a digital thread, whether it be WhatsApp or a dating site or whatever it might be, ChatGPT is going to be a very easy mechanism for them to improve that level of dialogue and enhance the level of English that they're using, which is only going to convince the victim more.

Menno: So finally, fast forward to the future. So, can AI be used to monitor and detect emerging fraud trends? How can AI help us to be on top of whatever is happening, whatever they can invent to do?

Daniel: So, I think Feedzai are a machine learning first, AI first company. That's very much where we position ourselves in the market. And the core of our fraud detection is based around AI.

There are challenges with that approach, and we've catered for those in our overall product suite to make sure that we're always giving our customers the best chance to succeed.

But the question, Menno, is a good one, in the sense that often a machine learning model is only as good as the data that it's trained on. So, we would typically take six months to a year's worth of transactions for a customer.

We'd have the fraud transactions labelled within that data set, which would allow us to build the supervised model.

But if a fraud typology emerges in future that the model hasn't seen before and hasn't been trained on, it's not always easy for the model to detect that.

So, we supplement what the model is doing with some business rules on top. That makes it very easy for us to react, in that if we see a new fraud trend happening at 9:00 AM, by 9:30 or 10:00 AM we can have new rules in the system that are proactively stopping those fraud attacks.

And then what we can do very quickly is retrain the machine learning model that's doing the bulk of the detection, using those new fraud labels and using the data from those new fraud attacks to make sure that we normalise the position of the machine learning model very quickly.

So, that's a way in which we've been able to manage emerging fraud trends very effectively for our customers.
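As a sketch of that layering of fast rules on top of a trained model's score, the logic could look like the example below. The rule, field names and 0.8 threshold are made-up assumptions for illustration, not Feedzai's actual configuration.

```python
# Illustrative only: quickly deployed business rules layered on a model score.
# The rule, field names and 0.8 threshold are assumptions for this sketch.

def decide(txn, score_fn, rules, threshold=0.8):
    """score_fn: trained model returning a fraud probability for a transaction.
    rules: named predicates that can be added within minutes of a new trend."""
    hits = [name for name, condition in rules.items() if condition(txn)]
    if hits or score_fn(txn) >= threshold:
        return "hold_for_review", hits
    return "approve", hits

# Example: a rule pushed mid-morning after a new crypto investment scam
# pattern is spotted; the transactions it catches are then labelled and
# used to retrain the supervised model.
new_rules = {
    "large_payment_to_new_crypto_payee": lambda t: (
        t.get("payee_category") == "crypto_exchange"
        and t.get("amount", 0) > 1000
        and t.get("new_beneficiary", False)
    ),
}

print(decide({"payee_category": "crypto_exchange", "amount": 2500,
              "new_beneficiary": True},
             score_fn=lambda t: 0.2, rules=new_rules))
# -> ('hold_for_review', ['large_payment_to_new_crypto_payee'])
```

The rules catch the brand-new typology immediately, while the retrained model takes back the bulk of the detection once the new fraud labels have been fed into it.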

Menno: So, could you imagine a future without any frauds? Zero, doesn't exist. So, this is science fiction, it doesn't happen maybe, but let's imagine. So, what do you think the role of technology would be? So, what kind of technology would be in place in a world with zero frauds?

Daniel: The dream for us is to implement the right technology with the right people and the right process to detect all fraud that's attempted against customers. That would remove those horrible customer experiences that we talked about, where somebody has not just the financial pain, but the emotional pain of recognizing that their money has been taken. That would be utopia for all fraud practitioners.

If we were ever to get to that point, I suspect that it would only last for a short period, because fraud is adversarial by nature. And as soon as you close one door, the fraudster is already looking at the next door that they can get through in order to continue to monetize.

And I think wherever you've got significant amounts of money being stored in one place, and a bank is a prime example of that, there are always going to be criminals and always going to be fraudsters that are attempting to take it. So, I'm sorry to spoil the fairy-tale.

Menno: Yeah. You spoiled the fairy-tale. I thought we should end with a fairy-tale. Well, we can also end with a nightmare. So, let's imagine the opposite. There's frauds to the max. Everybody's using GPT-4, 5, whatever Midjourney, fake deep voice. It's going berserk.

Daniel: Well, the answer in that case, Menno, is simple. All the banks would buy the Feedzai software, and we would quickly normalise that position for them by detecting all of the fraud.

Menno: Well, I also heard another solution from people working at banks. Come to my office, forget digital, you're not allowed. Forget all these fancy ways of payment, et cetera. Just do it like we did it before.

Daniel: That's a very interesting point. And I think, one of the things we always have to think about as a fraud practitioner and as a fraud technologist is how do we create the balance between the right level of protection whilst maintaining a good level of customer experience?

And I think you're always fighting that constant balance. And I think that applies in both a covert sense in terms of what the bank's doing outside of the eyes of the customer, but also what the customer sees.

The customer doesn't always necessarily want things just happening under the surface. Sometimes they want a little bit of friction put into their journey. So, I think we've seen very much a cultural shift towards friction within journeys.

I think the best way to close this is with a very cliché line from the fraud world. But the best piece of advice you can always give the consumer is: if it sounds too good to be true, it probably is. Step away and have a think about what you're doing before you go ahead and do it.

Menno: Thank you, Dan.

[Music Playing]

Daniel: Thanks, Menno. Really enjoyed it.

Menno: That's all for today. Thank you so much for listening. And a big thank you to Daniel as well. You can find out more about Feedzai and what they do in the show notes.

Tia: If you enjoyed this episode and want to let us know, please do get in touch on LinkedIn, Twitter, or Instagram. You can find us at Sogeti. And don't forget to subscribe and review Playing With Reality on your favourite podcast app as it really helps others find our show.

Menno: And in two weeks we'll be getting creative with an episode about how generative AI models are shaking up the artistic industries from music to art and more. Do join us again next time on Playing With Reality.