Playing With Reality - GPT-4 Changes the Code
Today on Playing with Reality we ask: Just how much has GPT-4 changed the code?
Menno: GPT-4 has just been released, and its possibilities seem endless.
Joakim: So, this is a new tool which creates higher productivity, a higher efficiency and higher quality, hopefully.
Menno: And now these systems could even turn natural language into code.
Joakim: You could have a transcription of a meeting note, you could feed that into generative AI. And from that, actually get the use cases.
Menno: This week I'm speaking to Sogeti's very own head of Data and AI, Joakim Wahlqvist. We are going to explore AI systems which could potentially create faultless code. Welcome back to Playing with Reality, with me, Menno van Doorn, a podcast from Sogeti, the home for technology talent.
As ever, I'm joined by my co-host, Tia Nikolic. A data scientist and one of our AI specialists at Sogeti. Hi Tia.
Tia: Hi Menno. And as ever, I'm very happy to be joining you again as co-host. How are you doing? What do you want to speak about?
Menno: Well, of course I want to speak about GPT-4. Maybe you can say whether you are also happy about the release of it. What was your first initial reaction when you heard the news?
Tia: Yeah, I wasn't surprised at all that it came out, with all of the competition that we're seeing currently in the market with all of the large language models. It's like a rat race, right?
Tia: So, it was expected. I haven't got to try it yet, though. I'm still in the queue, unfortunately.
Menno: Me too. But you've seen the examples, haven't you? So, what do you make of it? Or is it too early to tell?
Tia: I think the idea that's very interesting here to introduce to the listeners and also for us to discuss is the idea of multi-modality of this model. It sounds a bit technical, but I will explain it.
So, the GPT-4 can actually take different kinds of inputs as prompts now, and this is extremely exciting for me and for everyone else, of course, because now you can prompt it with an image even and say, “Oh, can you explain to me in layman's terms what this image is?” And then you can just feed it an image from a research paper, for example.
Menno: Yeah, I think that was a great example of these balloons up in the sky. A picture was fed to GPT-4, and the question was: what will happen if I cut the rope? And then it actually understands that the balloons will go up in the sky. So, this kind of magic, I would say, is going to happen.
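Tia's multimodality point can be made concrete with a small sketch. The message shape below follows the chat-style format popularised by OpenAI's API at the time of writing; the exact field names may differ between versions, so treat them as assumptions rather than a definitive reference.

```python
import base64

def image_question_payload(image_bytes: bytes, question: str) -> dict:
    """Build a chat-style user message pairing a question with an image.

    The image is embedded as a base64 data URL, one common way that
    multimodal chat APIs accept image input.
    """
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{encoded}"}},
        ],
    }

# With a real photo of the balloons, this payload could be sent to a
# multimodal model along with Menno's question.
payload = image_question_payload(b"<png bytes here>",
                                 "What will happen if I cut the rope?")
print(payload["content"][0]["text"])  # What will happen if I cut the rope?
```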
Menno: Yeah. But today we are going to talk about the power of this version of GPT in the context of creating code. So Tia, what do you know about how artificial intelligence has been used to write code?
Tia: Actually, I've been researching and working on this for quite some time. So, the models that are behind large language models such as GPT-3, 4, 2, all of the GPT models, they're transformer-based models. They actually can capture the sequence of natural language, like human language, like English that we are speaking in currently, but also, they can capture the sequence of code.
So, ever since GPT came out, this has been a point of research. So, it was really interesting and exciting back in 2021 when OpenAI released Codex, which was a fine-tuned GPT-3 model that could generate code.
And then we all jumped on the bandwagon and requested to use it in Visual Studio as a plugin, because it was also connected to GitHub as Copilot, if you remember. And it was amazing to see what this model could do.
So, this is for me very exciting because as a developer, you can of course use it if you're stuck with a logical problem and you want to code it, but then also you can use it to improve maintenance of your code.
And this is where it's really, really exciting and interesting for me, because these are the things that really no coder wants to do, like writing unit tests. They just want to release their code into the world and for it to be used.
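To make this concrete, here is the kind of unit test an AI assistant might draft for you. The function and the tests are hypothetical examples, not output captured from any specific model.

```python
def normalize_email(address: str) -> str:
    """Lowercase an email address and strip surrounding whitespace."""
    return address.strip().lower()

# Plain assert-based tests, in the style a code assistant typically suggests.
def test_strips_whitespace():
    assert normalize_email("  user@example.com ") == "user@example.com"

def test_lowercases():
    assert normalize_email("User@Example.COM") == "user@example.com"

def test_leaves_clean_input_unchanged():
    assert normalize_email("user@example.com") == "user@example.com"

test_strips_whitespace()
test_lowercases()
test_leaves_clean_input_unchanged()
print("all tests passed")
```

Generating this scaffolding is exactly the repetitive work Tia describes; the developer still decides whether the cases cover the real requirements.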
Menno: So, can you tell me what all of this boils down to? So, what's your conclusion?
Tia: Yes. So again, the power of it. So, using it for maintainability, for almost a kind of peer review.
Menno: That’s one.
Tia: That's one. Improving the documentation and comments in your code. That's also a very important one. Maybe even creating designs.
So, what I see in the future is GPT-5 or 6 coming out, and they can actually give you an answer in the form of an image, not just text. So, that can also happen.
But then we have to really be careful, as coders and developers, not to over-rely on it. We still need to keep that human part.
So, it is going to leave our hands free to do more human-like things, to be more critical of the software we're developing, of the model, of these ethical issues.
Menno: Yeah. Overreliance is a big risk.
Tia: Absolutely. Thanks for the great discussion, Menno. But I'm really wondering who you are speaking with this week.
Menno: Well, today I sat down with someone at Sogeti, you probably know, Joakim Wahlqvist. He has been with us for the past six years, and he recently moved into his new role as the CTO of data and AI.
I think he has a wealth of experience across the AI space, from being a developer himself to helping our clients implement these technologies and bring about rapid innovation.
We started off by talking about GPT-4, of course, and how AI is already being used by coders worldwide. So, hi Joakim!
Joakim: Hello, Menno. How you doing?
Menno: I'm great. I'm also happy that I'm here to talk with you about coding and AI, and my first question to you would be, obviously, are you a coder?
Joakim: Well, that's a tricky question, actually. I identify myself as a coder because I started when I was something around 10 years old, and I went on for 20 years. But nowadays, sadly, I'm not hands-on anymore.
Menno: Maybe GPT-4 will help you to become a better coder.
Joakim: I hope.
Menno: Because yeah, that's the talk of the town, of course. So, you're on top of it, I know. So, could you already explain or tell us what the difference is between GPT-3.5 and this one for coders?
Joakim: That's also an interesting question, because we can see that the performance in GPT-4 has increased a lot, if we look at the complexity of the type of questions you could ask, and the accuracy. They have benchmarked it quite a lot, something like 30 or 35 different benchmarks of different intelligence tests.
And the answers have increased in accuracy a lot, but there is not much information on how much it has improved for coders, actually.
And they still make a remark, quite a clear one, that you shouldn't trust the code that is generated just like that. You still need to go through it, and you need to make sure it's accurate and that there's no malicious code in there. But it'll most certainly be more accurate and can do more complex tasks.
Menno: So, you've dived into the new version of GPT, and have you found any new functionality that could be of interest for coders?
Joakim: Yes. Yes, definitely. Because it has been trained more. It's smarter and more intelligent. But also in other capabilities: you can now use images as input, where before it was text input and text output.
Now they've added images combined with text. So, we can take a photo of some sketchy notes and say, "I would like to have a webpage looking like this, with this functionality." And it'll actually interpret the image together with your instruction in text and generate the code for such a webpage.
Or maybe saying, "I need five versions of the webpage looking somewhat like this." So, you will get five different mock-ups, which you could show to a potential business and say: which one do you like the most? And then you have a base to continue from.
Menno: Okay. But let's take a step back now to look at the broader picture. What do you think are the main areas we should look at when we talk about AI-generated code?
Joakim: Yeah, I think the most important thing to understand is that now AI can actually generate code, and not only in one language. It can do more or less every language that is out there.
So, you can write something, say, "Hey, I want a function that takes this data and does this," with some kind of logic in it. You describe it in your own words, and the AI can generate that code for you in any language. So, that's the most important feature.
Then of course, understanding code: it can also help you as a kind of co-pilot. Finding bugs, structuring your code, rewriting it to get a better structure or naming conventions, or whatever it might be.
I think other scenarios could be that you could actually go from a very normal written text to functional requirements. So, like a pre-step of coding.
And then, if you have your code, you could ask this function to also create test cases for it. These are all different flavours of the fact that we now have generative AI that can understand language and understand and generate code.
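As a sketch of what Joakim describes, here is a one-sentence spec turned into code. The spec and the function are hypothetical; a model like GPT-4 could produce something similar in any mainstream language.

```python
from datetime import date

# Hypothetical spec: "I want a function that takes a list of orders and
# returns the active ones, newest first."
def active_orders_newest_first(orders):
    active = [o for o in orders if o["status"] == "active"]
    return sorted(active, key=lambda o: o["created"], reverse=True)

orders = [
    {"id": 1, "status": "active",    "created": date(2023, 1, 5)},
    {"id": 2, "status": "cancelled", "created": date(2023, 2, 1)},
    {"id": 3, "status": "active",    "created": date(2023, 3, 9)},
]
print([o["id"] for o in active_orders_newest_first(orders)])  # [3, 1]
```

The same model could then be asked to produce test cases for this function, the pre-step and post-step Joakim mentions.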
Menno: I see a future where two people are sitting at a bar, one takes a pen, writes something down about the website, feeds it to GPT, and bam, there you are.
Menno: It's coded.
Joakim: Yeah. And I think that future is actually here now. Exactly how advanced it can be, what details you can draw on that piece of paper, that still needs to be found out. But the capability of going from the paper to actual code now exists in GPT-4.
Menno: As they say, the devil is in the details. So Tia, Joakim and I talked about co-pilots, and you talked about over-reliance, but a co-pilot, actually is someone you should rely on. Should we use another word?
Tia: Oh, yeah. But it's really funny that you're using this term, because GitHub gave the name Copilot to their plugin for coding. So, they really want to tell you that this is your helper, and that you shouldn't really just leave it to be the only pilot, let's say it like that.
So, it's your right hand and it's going to help you, but not to the point where you're over-relying on it. And what do I mean by this? Just writing a prompt to the model and copy-pasting or using that code as is: not testing it, not making sure that it's robust enough and that it actually does the task as you would expect it to. So, you still need to test it out. You just use it as a helper.
Menno: Okay. So, if we take this co-pilot and open the brain of this person sitting next to you in the plane, what are we finding inside his head? So, how does he operate?
Tia: That's a great question. Very nicely put. So, we can also tie it back to translation to make it a bit more digestible. We keep on with the medical terms here, I really love it.
So, if you have a Dutch input for Google Translate or ChatGPT or whatever sort of language model we're talking about, it actually can give you an English translation to it.
And how does it do it? It actually learned in its training data, which Dutch words are mapped to which English words.
So, the same thing applies to code. We have an enormous body of data, gigabytes and terabytes, scraped from the web, from GitHub, from Stack Overflow, coders' favourite website with questions and answers. And then all of that input is used and mapped to specific outputs.
So, you have questions like, how do I solve this specific issue? How do I code this logic?
And then you have answers, for example, in Stack Overflow or on GitHub in repositories, in forms of code. So, then you actually can train AI to see these patterns, see these connections, and later on use it.
So, for example, in your day-to-day life, you could ask for a function like: I want to apply an if-else logic; if a number is odd, output a text saying it's an odd number. And GPT-4 is going to give you code in any language based on that. So, it's quite interesting.
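Tia's odd-number example, written out. This is a plausible rendering of what the model might return for that prompt, not a captured GPT-4 answer.

```python
def describe_parity(n: int) -> str:
    """Apply the if-else logic from the prompt: report whether n is odd."""
    if n % 2 != 0:
        return f"{n} is an odd number"
    return f"{n} is an even number"

print(describe_parity(7))   # 7 is an odd number
print(describe_parity(10))  # 10 is an even number
```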
Tia: Okay. Now that we know how these models work, I want to ask you what's next?
Menno: Next to your nice words about the co-pilot, I wanted to see how AI is actually being used, not just by coders, but in more of a, let's call it a workflow sense: changing natural language into code.
So, what do you make of this invention of AI doing code nowadays, looking back from the early days of the PC to where we are now?
Joakim: So, the way I see this is that we enhanced, or matured, the way that we can code. So, we are at a higher and higher level of coding. If you start with assembly, it's very down to the bits and bytes, really.
And then you have a lot of different coding languages like C, C++ and C# and whatnot, which make it simpler and simpler. You don't need to write as much; you don't need to control every detail. You do not need to know everything about how the computer works, how the memory works.
You could basically do that with a simpler version of coding. And I would say this AI-generated code is the next step in that. You can now actually create your code more simply, and with that, you can do it with higher productivity.
Menno: You already had a go at describing how it works, but could you maybe explain how it'll be used in organisations, in a more workflow sense? So, can you take us through the steps?
Joakim: Yeah. So, there are multiple steps. First, you need to understand what it is you want to code, what the app is about, what the use case is about, and get that into some kind of functional specification.
So, you could have a transcription of a meeting note, you could feed that into generative AI, a large language model. And from that actually get the use cases very clearly specified.
Then you could take these use cases and ask the generative AI to basically generate the code for that. And, as of now, you will probably get some kind of good structure. You will get the base logic that you need for that use case, but you will probably need to tailor it.
You need to look at it and make sure it's actually what you asked for. You need to change it a bit around. And then the next thing will be quality assurance. You need to have test cases.
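The steps Joakim walks through can be sketched as prompt-building code. The prompt texts and function names here are assumptions for illustration; in practice these message lists would be sent to a model such as GPT-4, and the generated code would still go through the human review and test cases he stresses.

```python
def build_use_case_prompt(meeting_transcript: str) -> list:
    """Step 1: turn raw meeting notes into clearly specified use cases."""
    return [
        {"role": "system",
         "content": "Extract clearly specified software use cases "
                    "from the meeting notes below."},
        {"role": "user", "content": meeting_transcript},
    ]

def build_codegen_prompt(use_case: str, language: str = "Python") -> list:
    """Step 2: ask the model to generate code for one extracted use case."""
    return [
        {"role": "system",
         "content": f"Generate {language} code implementing this use case."},
        {"role": "user", "content": use_case},
    ]

notes = "Notes: the team wants a weekly sales report emailed every Monday."
messages = build_use_case_prompt(notes)
print(messages[0]["role"])  # system
```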
Menno: Yeah, we need to talk about quality. We will, later on. But first: you were using a lot of "would" and "could" and these kinds of words, which I think means we are at a very early stage of describing how things will go. Am I right?
Joakim: Yes, definitely. So, as I said in the beginning, this is an early stage of this development. It's been a couple of months, at most a year or so. I think Codex was released somewhere in the fall of 2021.
This is not fully matured yet. A lot of developers are out there now, trying these things out, understanding how it works and how we should use it to enhance our work.
Menno: So, OpenAI's Codex. For people that don't know what that exactly is and means, can you explain in simple words what OpenAI's Codex is?
Joakim: So, Codex is part of the functionality that we see now with ChatGPT. Behind the scenes, the intelligence of this functionality is something called GPT-3. It's an AI model, we could say, which understands text and has learned from a vast amount of text, basically 60% of the internet. So, a lot of text.
And on top of that, they also trained this model to understand code. Those two in combination became Codex. So, it understands text, understands code, and can generate both text and code.
Menno: So, where does ChatGPT, or other AI code generators, get their knowledge from?
Joakim: What Codex or the fine tune model of GPT-3 that could generate code is trained on, is basically code snippet from internet. It's not like it's only GitHub or only some other library of code. It's probably a lot of different libraries that they have got hold of, which is open.
Menno: And part of these libraries was created under an open-source licence. And currently there's a class action lawsuit against OpenAI and Microsoft claiming, I believe, $12.7 billion, because code that has been used was created under GPL open-source licences.
So, would it be possible that we are going into a scenario where you're not allowed to use this code and this sort of intelligence is seen as theft?
Joakim: Of course, that scenario could happen. But I think we can't jump to conclusions, because that process is nowhere near finished, and it will take a lot of time. And I don't think we have the possibility to judge that from the sidelines either.
But of course, there is a possibility. Looking at the terms and conditions, what Microsoft and OpenAI claim is that the results coming from these models are the property of the user, if they pay for it. As of now, we shouldn't be afraid; what will happen in the future with this big lawsuit, we simply don't know.
Menno: What do you think Tia, is going to happen? Will taking natural language into code be something that a lot of big organisations will actually do?
Tia: Yes, I think they will definitely have to do it because the developers working there are going to use it.
Menno: That would be my answer too. Yeah.
Tia: Absolutely. So, the possibilities are endless. Joakim also talked about it. There are a lot of different things it can be used for now, with GPT-4 being multimodal. You can even show it a diagram or a website, and then it can give you HTML code back to create that website.
So, these possibilities and this power is too big to not actually use it in an organisation. So, it's going to happen.
Menno: Yeah. And it's not organisations that are going to decide, it's actually the coders.
Tia: Yes, definitely.
Menno: Yeah, yeah.
Tia: And I know that coders are already using this. For example, Codex, which we already spoke about. We spoke about GitHub Copilot; this is the model that's underneath it. And I told you we were already waiting in line, two years ago, a year and a half ago, to use it.
And of course, we made sure we don't use it for commercial purposes, because we know that that's a legal issue. But it is something that really excited us. So, coders drive companies forward; they push them forward, they develop them. Based on their passion, this is definitely going to happen.
Menno: And what about this legal issue that I talked about with Joakim? What do you make of it, with people saying they're just stealing the intellectual property, the knowledge of the best coders in the world, like you Tia, and turning it into a machine?
Tia: That's a great compliment. Thanks, Menno. Yeah, used like that, it ties back to the copyright issue around generated visual art. We already spoke about this when I was a guest on your podcast last season. This is an ethical issue. This is a copyright issue. You can argue that.
Next to the intellectual property, though, I'm less concerned, because as I mentioned before, these are bodies of code from GitHub. These are open repositories. They're not private, so you have already put them under a specific licence. So, there are fewer issues there, because you already opened your code. It can be made explicit.
I'm a bit more worried about private information leaking because maybe a coder forgot to delete a password or some sensitive information before committing it to a public repository. Even if it's under an MIT licence, for example.
Menno: Okay Tia, I think we should go back to Joakim to hear some more about the bigger questions. Like is he worried about where this technology is going and what it could do to the coding industry?
Okay. We, or actually you, have described the engine: how it works, what you can do with it. And that raises tons of questions, of course, also a lot of questions about the quality of the whole thing and how dependent we will be on this new AI-generated code.
So, could we now say that we have opened Pandora's box of AI, and it'll double and double and double? What do you make of the quality? Should we be worried about the quality, and come together and talk about it with all the engineers?
Joakim: I definitely think we should. Not because we need to be so worried; we have seen these kinds of changes many times in history, and there are always a lot of questions to be asked which need their answers. And people need to come on board and understand the journey.
And so, I think for the sake of creating that arena to discuss this, to create this common understanding of where we are, where we are heading, and how we should see this new technology, that's important.
But still, with this new capability, which generative AI that can generate code actually is, I think we shouldn't be too afraid, because the developers will still be there. I don't think we will very soon see fully automated, generated applications or software where there is no human intervention. I don't see that in the near future.
So, this is a new tool which enhances our work and creates higher productivity, higher efficiency and higher quality, hopefully. But of course, we need to discuss and understand this, right?
Menno: I think you're very innovative, or you like innovation. I'm not sure whether the quality assurers are as optimistic as you are; let's see how it goes.
But I can understand when you're saying, okay, there's a human on the other side. But what can we tell about the quality of the software itself, when it's completely generated? How good is it?
Joakim: So, that's still to be seen, I would say. We see that the code generated works. It's correctly written. A developer understands it and can use it.
But going from there to actually building a fully functional system, end to end, with everything needed, with integrations and data, we are not there yet. It'll probably come, but I think there still needs to be more assisted coding, where you have a developer who can use this functionality to increase productivity. Really building a fully functional system, that's not where we are as of today, at least.
Menno: So, tell me more about Microsoft's Power Platform, because the Pandora's box that we call OpenAI and ChatGPT opened only recently, but Power Platform started earlier on. So, what can you tell us about no-code and low-code, and how different is it from what we can actually do now with these tools?
Joakim: So, first of all, the OpenAI services we've seen today could do so many different things, and it's more about what you use them for; you can stitch a full piece of software together as a developer.
But Power Platform is a very capable tool, more of a citizen developer tool. And in Power Platform there is actually functionality to write code from text. So, you can write a description of what you would like to happen, and Power Platform will generate that code for you.
And that generation of code is actually done by OpenAI's Codex. So, in the background in Power Platform, this Codex functionality, basically GPT-3, is part of the product, and has been for quite some time.
Menno: So, what's the big fuss? Why are we talking about it now, and everybody's excited about creating code, whereas Microsoft Power Platform already had these capabilities for a long time?
Joakim: I think ChatGPT opened up our eyes. The 30th of November 2022 was the day the world got to know that AI is here for real, and it works. It's not just for niche companies that could put a lot of budget into research and build AI functionality.
It's actually spread out and available at your fingertips, basically. And with that, all kinds of businesses understood that now we can do things with AI.
The questions start to come: what can we use it for? What does it mean for my business, my sector? And with that interest created, the developers and the whole community of developers also understood: okay, this function can actually generate code. What can we do with that?
But the capabilities have been there for some years, actually. The GPT-3 functionality which could generate code was launched in May 2020. So, it's been around.
Menno: Maybe the difference is, I think, also what you're saying: that you don't need Power Platform anymore, it's just there for everyone. So, the democratisation of AI is the thing that makes everyone so excited, I would say.
Joakim: Yeah, everyone can actually try it now and see that it works.
Menno: So, we can actually see that creating code is being democratised. But the question behind it is, what does that mean?
Tia: Yes, democratisation of code. Also tying it back to hyperscalers like Microsoft and Power Platform, it's very important, because we can see low-code really helped organisations implement standardised, already-tested functions and models, so they don't have to redo a bunch of steps.
Like, why reinvent the wheel? Why should every company do it themselves, and have these sparse resources all over the place, when we can have a centralised system that's tested against industry standards, and then open it up so people can use it and democratise it?
So, that's the accessibility of very complex IT systems or code. That's something really important that has helped accelerate the adoption of AI and RPA, robotic process automation, in the past few years.
Menno: Yeah. So, democratisation can mean opening up, as a sort of policy, when you do code, but it can also mean everybody being able to code; so, the number of people.
Menno: And it's funny that you said, "I'm not a coder"; you learned to code through Coursera. So, I have this fantasy of how people will learn to code in the future by using OpenAI. What's your fantasy? You learned to code from Coursera, but can you imagine a different kind of education system where people learn how to code in the future?
Tia: Definitely. It's a great point of discussion, because I think education is currently being impacted by ChatGPT; people are using it to write essays, code, et cetera, and to learn more things, which is great.
And also, I think it's going to be even more impacted in the future because the possibilities are almost endless. Like you can have your own personal tutor, you can have also specific teachers that are catered towards specific types of people. Also, you can have increased accessibility through that as well.
So, it's really an exciting area of application for generative AI, I think. So, I want to ask you finally, what did you finish speaking with Joakim about?
Menno: I finished off with Joakim by making some predictions, actually.
Tia: Oh, exciting.
Menno: Let's look at some of the future scenarios. And I'd like to provoke a little bit and also see whether you can come up with more negative scenarios maybe, or things that we should be scared of. But let's see. So, what will coding AI unlock in the future for the good, would you say?
Joakim: So, for the good, I believe that we will be more productive. We will gain quality, because we can focus our limited time and brainpower on what really matters. And then we can focus on being creative.
So basically, finding what types of scenarios and use cases we should create, not actually working on creating them.
And then of course, spending the time and energy on making sure it works as it should and has the right quality. The positive side of me believes that this technology will actually help us on that journey.
Menno: So, we will build better stuff. That's what you're saying?
Joakim: Yeah. Better and more.
Menno: Yeah. Better and more. Okay, perfect.
Joakim: But the negative side of this is, of course, that people that want to do bad, that want to somehow exploit this functionality, will have a lower threshold for doing that. Because now they can create code without having those advanced skills.
So, just as we will see more programmers on the good side, I think we will also see more hackers, programmers on the bad side, trying to exploit this and maybe commit fraud, defraud people, earn money from it.
Or even worse, of course: there could be a lot of applications creating software which is meant for bad purposes.
Menno: I think the $1 million question about this whole thing of generating code by AI is, of course: what will coding AI's ultimate impact be on jobs?
Joakim: That's the bigger question, right?
Joakim: My thinking about this, my personal thinking of course, is that if we look back at other big technological changes coming into play, we could take the steam engine, or electricity, or robotics in manufacturing.
If we go to a manufacturer today, on the shop floor there are a lot of people. They are maybe not doing the hands-on work anymore, but they are still there, making sure everything works as it should and stitching all the pieces together.
And my belief is that it'll be somewhat the same when it comes to coding and software development: maybe we as coders will not be the ones writing all the code. Maybe we will have these functions to help us do that faster.
But to create the bigger solution, make sure it actually aligns with the use case, be creative about the use case, and stitch everything together so it actually works, I think that will demand a lot of effort, and that needs to be done by humans.
So, I'm not very afraid that it will change a lot in terms of employment; we will still have a lot of jobs around developers.
Menno: So, your crystal ball is saying it'll actually grow the industry. There will be more jobs instead of fewer jobs, because of the efficiency. Yeah.
I would like to try, as a sort of final question, to predict your future, Joakim. Now that you've seen all these things happening in coding, wouldn't you like to go back in time and become a coder again? With all these opportunities for creating more and better code, it must be like being a child in a candy store for you.
Joakim: Of course. I've actually been thinking about this a couple of times just in the last month. It's time-consuming to code. You have some idea, something you want to create, and then it takes time, basically. And you need to keep up with how to write new types of code.
And probably, if I were still a hands-on coder, I would be in Python; I would need to learn a new language, stepping away from the old ones. And now that's possible, and it'll be much easier. That threshold is much lower. So, maybe I will start again.
Menno: Okay. Thank you, Joakim.
Joakim: Thank you.
Menno: Tia, what do you think, what's the optimistic future scenario for how AI will be used by coders?
Tia: Yeah, coders can use it to educate themselves. So, how to code better.
Tia: Or even how to code in general; have specific course material that's catered to them, their learning rates, all that. Some people are much quicker with coding, while others are a bit slower and want more visuals, et cetera. So, this can really help them.
But if we're talking about mid-level to senior coders, what AI can really help them with, again, to tie it back to my first point which we discussed at the beginning of this episode, is unbiased testing, for example, better code maintenance, accelerated peer reviews.
And of course, when you have your hands free from all of these mundane things, you can be better at collaboration, better at creativity, more critical about how software is deployed and how it's going to affect the end users, so-
Menno: Oh, it sounds fantastic, but what can we learn from the pessimistic scenario? Shall I give you my version first?
Tia: Yes, definitely. I'm really curious.
Menno: Well, it's not that bad, but I'm not sure whether people will want a teacher next to them. Specifically when you code, you want to figure out the puzzle yourself. So, maybe a pessimistic scenario could be that coding will not be fun anymore.
Tia: That's a good point. This again, ties back to overreliance. So, if you're using these models to actually completely solve the puzzle, the logic, it's not going to work, and this really can quickly turn into a pessimistic scenario.
Menno: What's your pessimistic scenario?
Tia: It ties back to being over-reliant on the system, using it to solve the issue completely. And also, because again, I'm visual, I like to tie it back to the generation of images and videos: human creativity can't be suffocated.
So, we need to let people be creative, solve issues, create art, be artists. So, if we try to suffocate that part by applying it in business, it's going to again turn very quickly into a pessimistic scenario.
So, we still need to remain creative and human, and use it as a helper, not as your senior developer and your peer reviewer.
Menno: Nicely said. That's all for today. Thanks so much to you for listening, and thanks to Joakim and Tia.
Tia: If you enjoyed this episode and want to let us know, please do get in touch on LinkedIn, Twitter, or Instagram. You can find us at Sogeti. And don’t forget to subscribe and review Playing with Reality on your favourite podcast app, as it really helps others find our show.
Menno: In two weeks, we'll be returning with an episode which explores how AI is being used, in more sinister context by fraudsters and the people trying to stop them. Do join us again next time on Playing with Reality.
- Menno van Doorn, Director of VINT
+31 6 51 27 09 85