
Playing With Reality - The Remarkable World of AI-Enabled Healthcare

AI is making the lives of doctors more efficient and has even helped out in the fight against Covid. But will this new AI see doctors dependent on it for all diagnoses? Make hypochondriacs of us all? And where is it going next? Welcome to the remarkable world of AI-enabled Healthcare.



Generative models of Artificial Intelligence are being used across industries to huge and varied effect - and as we learnt in our first episode of season 2, sometimes with potentially sinister consequences. But there’s one realm where this kind of AI is broadly positive: healthcare. In medicine and drug discovery, AI is being used to scan huge data sets and even discover new drugs. It’s making the lives of doctors more efficient and has even helped out in the fight against Covid. But will this new AI see doctors dependent on it for all diagnoses? Make hypochondriacs of us all? And where is it going next? Welcome to the remarkable world of AI-enabled Healthcare.

Today’s Guests
Deepa Mamtani
Deepa Mamtani leads Sogeti’s AI Centre of Excellence in the Netherlands, and together with her team develops AI solutions using deep neural networks, GANs and computer vision. With a multi-disciplinary consulting background in strategy, analytics and data science, she is passionate about analysing and leveraging data and translating them into strategic outcomes.

Aaron Morris
Aaron Morris is the co-founder and CEO of PostEra, a company building an end-to-end medicinal chemistry platform to advance drug discovery. After working in the financial sector, Aaron saw the limiting nature of drug discovery in biotech companies and pharma, and so set up a company to come in at the early stage of drug discovery and improve efficiency, using AI to do so. 



Episode Transcript

[Music Playing]
Menno: Artificial intelligence isn't just about making art, writing essays or speaking to a human-like chatbot.
Aaron: You apply machine learning to try and design the actual drug, design the actual pill.
Menno: Soon it'll change the way we think about healthcare, and it's coming to a hospital near you.
Deepa: So, the AI doctor is also going to be one of the doctors giving you an opinion.
Menno: This week I'm speaking to Aaron Morris and Deepa Mamtani, two people who are taking AI in drug discovery and healthcare to dizzying new heights.
So, welcome back to Playing with Reality with me, Menno van Doorn, a podcast from Sogeti, the home for technology talent.
As always, in this series, I'm accompanied by Tia Nikolic. Hi, Tia.
Tia: Hey, Menno. Happy to be here again.
Menno: Okay. What have you been up to last week? Doing stuff on ChatGPT?
Tia: Yes, I feel like everyone is. This past week, we've been working on seeing how we can make ChatGPT, and of course large language models in general, more practical.
And one big part of that, which our team, the AI Centre of Excellence team here in the Netherlands, is really focused on, is validating these models, testing them, making sure they're implemented correctly.
Menno: Now, today we are looking into how artificial intelligence is being used in healthcare. This is one of the biggest and most vital industries in the world.
So, the applications for using machine learning within it are huge and varied. But I want to know what the impact will be on the people working within healthcare when innovations in AI arrive. And what will they look like? Are we at a turning point in healthcare as elsewhere?
Okay, Tia?
Tia: Yes.
Menno: What do you make of these healthcare applications? And why do you think generative AI is the one being spoken of so often in this context?
Tia: Regarding why generative AI is the one being used most in healthcare and medicine: something that's very important to say here is that synthetic data and generative models can be used to create data that cannot be traced back to a patient.
And this is why generative AI is very, very important in sensitive industries such as medicine, because then we can actually have more data to analyse, to use as training data and for other applications without any issues with privacy of patients. And this is of crucial importance here.
Menno: Yeah. Any other areas where we currently see AI being used in healthcare? That's an interesting one, sidestepping all the privacy issues. Any others?
Tia: Well, the primary aim of any health-related AI application is to analyse relationships between different clinical techniques and patient outcomes. So, of course, to help the patient. That's also the point of medicine, of course, in general.
And here, AI programs are applied to practices such as diagnostics, treatment, protocol development, drug development, of course, that's a big one. And also, there's personalised medicine. It's being more and more talked about these days and of course, monitoring of patients and care of patients.
And here it's again, very interesting to mention ChatGPT, because for patients that have some issues, for example, with different symptoms, et cetera, or maybe just want to talk to someone, then they can use this model even.
So, that's an interesting take and definitely we are going to talk about it more during this episode. But I want to ask you, who did you speak to first this week?
Menno: Yeah, so first off, I sat down with my colleague and one of our top AI experts at Sogeti, Deepa Mamtani.
So, you and I know that she's leading the AI Centre of Excellence in the Netherlands. And together with the team, she's developing solutions using deep neural networks, for instance.
And she's been turning her focus now to how this can work in healthcare recently. So, I was really excited to speak to her about these kinds of generative models as being used in this space.
Tia: Great. Let's hear it. I'm also excited.
[Music Playing]
Menno: The general public is going berserk about generative AI, machine learning, deep learning, diffusion models, large language models. Is there a way to give a simple overview for dummies, what are the differences and similarities of all the things that I just mentioned?
Deepa: Well, the similarity of these models is that they all fall under the banner of AI and machine learning.
The differences are quite numerous. Essentially, you could think of it as discriminative models versus generative models. Discriminative models discriminate between data points. So, is this a dog, is this a cat? They make predictions based on labelled data, because they've already seen the data.
Then you get generative models, which are essentially models that are shown what the data looks like, so they know what the data needs to look like, and then they generate data points that are similar to the actual dataset.
So, they create new data, if you will. And that's the family of generative AI models that we are seeing nowadays. So, DALL-Es, GPTs, they're all creating data, but they've learned what the solution space is, they know what to expect, and then they create a new data point.
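Deepa's dog-or-cat distinction can be sketched in a few lines of Python. This is a toy illustration, not from the episode, assuming scikit-learn and made-up 2-D data: a discriminative model (logistic regression) assigns labels to points it is shown, while a generative model (here a Gaussian mixture, standing in for the far larger GAN/GPT family) learns the shape of the data and samples new points that resemble it.

```python
# Discriminative vs. generative, in miniature (hypothetical toy data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two labelled clusters standing in for two classes: "cat" (0) and "dog" (1).
cats = rng.normal(loc=[0, 0], scale=0.5, size=(100, 2))
dogs = rng.normal(loc=[3, 3], scale=0.5, size=(100, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)

# Discriminative: predicts a label for a point it is shown.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.9, 3.1]]))  # -> [1], i.e. "dog"

# Generative: models the data distribution, then creates new points.
gen = GaussianMixture(n_components=2, random_state=0).fit(X)
new_points, _ = gen.sample(5)   # five synthetic points similar to X
print(new_points.shape)         # (5, 2)
```

The generative half is the part that matters for healthcare: the sampled points are realistic but not real, which is exactly the property that makes synthetic patient data shareable.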
Menno: And the large language models?
Deepa: The large language models, ChatGPT being the most prominent of them all, are a type of generative AI model. They're called transformers, and they are basically the best-performing language models that we have right now.
And that's because they took the transformer architecture, which is a type of neural network architecture. And the way these models were trained made it so that it could understand the context of whatever was being said.
Menno: So, how about transforming healthcare? And there's a lot of things going on in healthcare for a long time. It's called digital health, and everybody was already excited, and now these transformers are coming. So, what would this mean for healthcare, do you think?
Deepa: Well, I think with the onset of generative models like GANs, they had already started transforming healthcare well before these large language models came in.
And that's because GANs were used to create synthetic data, which within the healthcare world is quite a big need, because when you deal with patient data, you are dealing with very, very sensitive data, which usually doesn't get shared around very easily. And that's for our own privacy and security.
Menno: Okay, I understand. So, are you saying that this is helping the patients, this is helping the doctors, this is helping the healthcare system?
Deepa: It's helping everyone.
Menno: Everyone?
Deepa: It's helping the researchers most especially by giving them data that is realistic. So, it's not real data, but it's realistic data, so they can play around with and train models and create models that can be used to accelerate different healthcare research avenues.
So, for example, using X-rays to identify blockages or to be able to identify tumour growth, all of those models were created or can be created using real data, but also synthetic data.
Menno: So, there you are: you're a synthetic data creator, and you've created synthetic X-rays of teeth. So, what do you do with them?
Deepa: Well, the one that you were talking about the X-rays of teeth, that was a very interesting use case for a social security agency somewhere in the Nordics. And they process a lot of dental claims.
And so, they receive a lot of these dental X-rays by different parties to say that work was done. And there was also a lot of fraud in this entire process.
So, we were looking at different solutions and one of them was, can we use AI to create some sort of a detection model, so that we can quickly detect whether or not this is a fraudulent X-ray, dental X-ray?
So, we created synthetic data using open sources of data sets that were available. And we created very realistic looking synthetic images, which were then used as a foundation to train other models.
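The dental X-ray models themselves aren't public, but the adversarial mechanism behind the GANs Deepa describes can be sketched on toy data. This is a minimal, hypothetical example assuming PyTorch, with 2-D points standing in for images: a generator learns to produce samples the discriminator cannot tell apart from the real distribution. Scaled up with convolutional networks, the same loop is what yields realistic synthetic medical images.

```python
# A minimal GAN sketch: generator vs. discriminator on toy 2-D "data".
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = torch.randn(512, 2) * 0.5 + torch.tensor([2.0, 2.0])  # "real" samples

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))   # noise -> sample
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-2)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(300):
    # Discriminator step: push real points toward 1, generated points toward 0.
    fake = G(torch.randn(64, 8)).detach()
    real = real_data[torch.randint(0, 512, (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its output real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic = G(torch.randn(1000, 8)).detach()
print(synthetic.mean(dim=0))  # should drift toward the real mean near (2, 2)
```

The synthetic samples resemble the real distribution without any one of them being a real record, which is the privacy property the dental use case relies on.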
Menno: So, in general, what are the places where AI can improve healthcare? So, it's going to be all over the place, maybe, but can you pinpoint out some of the areas?
Deepa: I think the biggest area really started when AlphaFold came into play, because they solved a decades-long problem: being able to predict the structure of proteins.
Menno: Okay.
Deepa: They narrowed down the window of research. So, research to predict the structure of proteins that would have cost millions of dollars was suddenly done within a weekend.
And so, you can imagine all of the dollars that are saved, but also the time. To have these huge challenges being solved by AI, that's how we are going to see the next generation of the new medicines, and the new drugs, and the new protein structures. That's all going to be done thanks to AI.
[Music Playing]
Menno: So Tia, could you tell me a little bit more about AlphaFold?
Tia: Yes, definitely. That's a very, very impressive model that's really reshaping medicine and also drug discovery. It is DeepMind's model, developed together with the European Bioinformatics Institute, and they decided to make it open-source and freely available to the scientific community.
So, that's something that's very admirable and that I'm very happy about because you can already see that it's really accelerating research in drug discovery, so you can see an immediate positive impact on society.
Menno: Yeah, it reminds me of many years ago: we did research on crowdsourcing in the mid-2000s, and there were a lot of models using the power of the outside world, of people outside your company. And it sort of went silent after that. We should return to the roots of crowdsourcing, I would say.
Tia: Yes. Definitely, definitely. It is a very big part of data science and AI, making things open source, making them transparent, open to people to provide feedback. That's something that's very impactful in this industry and especially in healthcare.
And to go back to AlphaFold, this is a software program that actually solves one of biology's greatest challenges. And again, as a biologist, I really love that.
And it's about understanding protein folding. For people listening who aren't biologists, you can just remember that protein folding actually determines what kind of function the protein has in a body, or in a plant, et cetera.
So, understanding this function can actually help drug discovery, can help research into how different proteins interact, and how they can help patients with different diseases, et cetera.
So, that's why this model is so important. And especially, the database of predicted protein structures can help scientists working on specific sequences to very quickly find these protein folds and accelerate their research.
Menno: Yeah, enabling new inventions that were impossible before.
Tia: Yes Menno, that's very interesting. But I'm itching to hear who you are speaking to next, so can you please tell me?
Menno: Yeah, a great guy, Tia, that I talked to. And it really felt like a privilege to talk with Aaron Morris. He's the co-founder and CEO of PostEra, and, what's in a name, a company that is building an end-to-end medicinal chemistry platform, as they say, to advance drug discovery. That's what they do.
I started, of course, by talking about drug discovery in the industry in general, and then zoomed in on what kind of role AI plays in this whole new process.
[Music Playing]
So Aaron, great having you here.
Aaron: It is good to be here. Thank you so much for the invite and having me on today to discuss PostEra.
Menno: Help us a little bit to understand what AI or generative AI, maybe machine learning means for drug discovery.
Aaron: Sure. So, maybe I'll just take one step back and try and very simply outline drug discovery, which is a huge process. It is over 10 years of work typically to bring a drug to market. You've got so many different moving pieces and therefore so many different areas of application for machine learning.
The simple way that I think about it is, it is a three-step transition between different scientific domains. The first scientific domain being biology.
You are looking at a patient who has a disease of some sort, and you're trying to ask yourself what is the underlying biological mechanism that is causing or driving that disease?
The second step is what is the chemistry solution? What is the best way to rectify the broken biological issue that is occurring?
In the case of an antiviral, you're taking a piece of chemistry and turning it into a pill. That pill is then ingested by the patient, and the chemical matter goes around the whole body and effectively breaks the virus.
The third stage is then you have to prove that this chemical solution to your biological problem works in real patients. So, then it becomes a medicine or medical problem of who are the right patients to select for this particular solution that you found.
So, you go from biology to chemistry to medicine, and obviously the latter is the clinical trials, et cetera. There are so many applications of machine learning across all three of those stages.
One of the biggest challenges in biology is connecting diseases to underlying biological phenomena. Trying to find out what gene or what protein is involved in a given disease.
That is almost a matching-type problem that you can apply machine learning to solve. In chemistry, which is what PostEra does, you apply machine learning to try and design the actual drug, design the actual pill.
In the case of medicine, you are trying to intelligently sub-select certain patient populations that based on given biomarkers, based on even given age group or gender or ethnicity or whatever it is, you're trying to identify which patients will be most helped and most suited for this particular cure that I've developed.
And so, there are so many varied use cases for machine learning in drug discovery.
Menno: So, where does the AI step in? So, what are maybe the hotspots in this whole process that you described that AI can really make the difference?
Aaron: Well, maybe I'll try and again speak to the areas that I know well, which is the areas that PostEra is working on.
One of the challenges of developing the chemistry solution, I'll just call it the drug from now on, is that a drug has to satisfy many different properties. There are drugs which can easily kill a virus, but they'll also kill the human. And that's not great.
And so, you are trying to often balance a lot of competing properties of a given molecule, as that molecule advances toward ultimately clinical trials.
And humans, Menno, humans are very good at homing in on one problem and fixing that one problem, and then in a serial fashion moving to the next problem and fixing that.
And that is, to some extent, a lot of how human-driven drug discovery is done. Drugs often have to satisfy 10 to 15 different properties; we call this a target product profile, or a target candidate profile.
And so, what machine learning offers is a way to optimise those different properties simultaneously.
And we have reasonable data for this. We often don't have billions of data points, but certainly we have a reasonable number of data points for these properties, so we can let machine learning do what humans often struggle with: balancing competing properties all at the same time.
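Aaron's point about balancing competing properties can be sketched as a simple multi-property scoring pass. All molecule names, property values and weights below are hypothetical; a real pipeline would use trained property predictors rather than hand-typed numbers, but the idea of ranking candidates against a whole target product profile at once is the same.

```python
# Hypothetical predicted properties per candidate (0..1, higher is better).
candidates = {
    "mol_A": {"potency": 0.9, "safety": 0.3, "solubility": 0.8},
    "mol_B": {"potency": 0.7, "safety": 0.8, "solubility": 0.6},
    "mol_C": {"potency": 0.5, "safety": 0.9, "solubility": 0.9},
}

# Weights encode how much each property matters in the target profile.
weights = {"potency": 0.5, "safety": 0.3, "solubility": 0.2}

def desirability(props: dict[str, float]) -> float:
    """Weighted score across all properties at the same time."""
    return sum(weights[name] * value for name, value in props.items())

ranked = sorted(candidates, key=lambda m: desirability(candidates[m]), reverse=True)
print(ranked[0])  # -> mol_B: the best all-round candidate, not the most potent one
```

Note that the winner is not the most potent molecule: optimising all properties at once is exactly what serial, one-property-at-a-time human iteration tends to miss.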
Menno: Yeah, it sounds like it's too big for our brains. And the product itself that you're delivering, how can you ensure that it's not just a black box? Do you think it's important to be open about it and explain to people what they're working with?
Aaron: Yeah, I'll say a couple things. Firstly, I would say the product from PostEra is not Proton or the AI technology. The product is the drug, Menno, like what Pfizer and others expect from PostEra and what patients expect is a drug. So, that's how we benchmark ourselves and that's what we care about.
But to your point, as it pertains to the underlying platform that is producing these drug candidates, yes, we absolutely care about interpretability and I'll make a few comments on that.
So firstly, we tried to be very open from day one, how our technology works. You can see the core publications on our website, and there is code for some of those publications as well.
So, you can get a sense as to how is PostEra actually doing machine learning for drug discovery.
Secondly, what I'll say is that part of the work that we've done, again, I'll take Pfizer, is to not only develop drug candidates, but actually innovate on machine learning for drug discovery.
And investors and pharmaceutical companies had some level of frustration with the literature around this topic. And one of their areas of frustration was the lack of interpretability, these black boxes that would just spit out molecules.
And so, PostEra spent two years actually working with Pfizer on this exact problem, alongside other problems. And we take this very, very seriously within PostEra, where our chemists have very high expectations of a model: not just to give a prediction, but to give an interpretation.
And so, I'll say that's my second comment, in that model interpretability is arguably one of the foundational approaches of PostEra.
And I'll say the third aspect is that you often get two different types of stakeholders when you are discussing your company. There are some stakeholders who care about the input and others who care about the output.
What I mean is there are some people who just want to know, tell me how the machine learning works.
But there are a lot of people, particularly in today's market who are like, “Just tell me about the output. I'll believe you, if the output looks good.”
And so, to add a level of transparency, we have also tried to be very open about the drug discovery that we're doing. And COVID Moonshot, as I know we'll talk about, is an example where we've really tried to avoid any black boxness, not only in the input, but also in the output by disclosing structures, showing the results of all our data, et cetera.
So, we do take model interpretability and transparency very seriously at PostEra.
[Music Playing]
Menno: So, he talked about the COVID Moonshot, and we'll return to that later. But first, Tia, maybe we should talk about this black box idea in AI. What do you make of this?
Tia: Again, something I really like to talk about as part of my specialisation. So, ethical AI, interpretable AI, how to test it. One big part of it is actually understanding that AI is a black box.
And what does that refer to? It refers to the fact that most AI models and AI-based tools don't actually provide enough insight into what's happening inside of them.
So, when you speak about traditional software, it's usually rule-based and explicitly programmed. And in terms of AI, we already defined it in the last episode: we said machine learning models, as part of AI, are models that are not explicitly programmed, and they exhibit human-like intelligence.
So, one big part of implementing these models, especially in industries that are as sensitive as medicine and drug discovery use case, one big part is actually making them transparent and opening that black box.
Menno: Yeah, and maybe one day we can also open the black box of the human brain, I would say. But coming back to drug discovery, maybe a little explainer of what PostEra actually does?
Tia: Yes. So, PostEra has a platform called Proton, and it focuses on three different stages of drug discovery. And this is very important, because drug discovery is an end-to-end process, so you need to make every step of the way efficient.
And they actually cover three different steps: drug design, drug candidate synthesis, and also testing of those drug candidates.
And this platform actually allows PostEra to do faster drug discovery, find more optimised drug candidates, and develop better cures for patients, which is the most important part of their mission, of course.
Menno: That goes back to your remark about cheaper, faster, better, all these kinds of things that you want.
Tia: Of course.
Menno: You want more medicine, better medicine, cheaper medicine, faster. So Tia, let's go back to Deepa, and to hear some more about how ChatGPT is being used in healthcare.
[Music Playing]
Let's go back to ChatGPT and language models; they are doing something different. So, have you already seen examples of how they are being used in healthcare, or where do you think it's going?
Deepa: So, there are essentially two trends that you can see. One is people using ChatGPT to find information, medical information, or even to use it almost as a mental health therapist.
And this is not so good, because ChatGPT is a very, very general model which has been trained on the corpus of the internet. So, anything that it advises or outputs is not the gospel truth, and I think people need to keep that in mind.
But then there's also a positive trend that we see, really leveraging the power of these language models: they can hold a conversation, a very natural, human-like conversation. You can actually use that as an advantage.
So, there's a university, Drexel University, that's doing this fascinating study of using ChatGPT to be able to predict early onset of dementia and Alzheimer's in patients.
So, they fine-tuned ChatGPT or actually the underlying model, so GPT-3, they fine-tuned that model on transcripts of conversations between people with dementia and people without dementia.
And this model was able to understand the underlying patterns and pick up on key indicators, because language impairment is one of the biggest symptoms of Alzheimer's, affecting a majority of patients.
Menno: Of course, yeah.
Deepa: So, they're using this, and they found that actually this fine-tuned GPT-3 model was 20% more accurate in predicting if this person had dementia or not. So, we see the positive sides also coming out.
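The Drexel study fine-tuned GPT-3 itself; as a rough stand-in for the general pattern (speech transcript → language features → dementia-vs-control classifier), here is a sketch using TF-IDF features and invented toy snippets, assuming scikit-learn. It only illustrates the shape of the approach, not the study's actual method, data or accuracy.

```python
# Toy transcript classifier: label 1 = dementia-like language, 0 = control.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, invented snippets standing in for real clinical transcripts.
transcripts = [
    "and then we we went to the the thing, the place with the water",
    "I took the train to the market and bought fresh bread and cheese",
    "the, um, the thing you use for, for eating, I forget the word",
    "we discussed the quarterly budget and agreed on a revised plan",
]
labels = [1, 0, 1, 0]

# Word and word-pair frequencies as crude proxies for language impairment cues.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

probe = "the, the thing for, um, for writing, I forget what you call it"
print(model.predict([probe]))  # expected to flag the dementia-like phrasing
```

A real system would replace the TF-IDF step with a fine-tuned language model's own representations, which is what made the Drexel result notably more accurate than classical baselines.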
Menno: So, maybe a last question, but I'd like to ask you about a future scenario.
Deepa: Yeah.
Menno: So, in 10 years’ time, you go to the doctor, okay. What's the data set that the doctor will have, that you will have at that moment? What kind of knowledge will come from the AI? What's your preferred scenario?
Deepa: Well, I definitely would not want it to be trained on WebMD, as an example.
Menno: People will go to the doctor for every single-
Deepa: Exactly.
Menno: Yeah, okay. So that's not the-
Deepa: So that's not the answer.
Menno: No.
Deepa: But what you want is for these models to be trained on the vast literature that is available, right?
Menno: Mm-Hmm (affirmative).
Deepa: So, medical scientific papers, previous hospital records of other patients that are maybe similar in demographics to me. Though a lot of smaller minorities may be left out, because there's not enough data.
But if we can gather enough data from different sources and from different channels of information, we might be able to cover those underrepresented communities as well.
So, you want it to be domain specific. And if you have AI models built on, let's say credible medical journals and histories and patient data, then that would be let's say the golden source of truth.
But then again, it's like going to a doctor and getting a second opinion. So, the AI doctor is also going to be one of the doctors giving you an opinion.
Menno: So, you won't trust the doctor anymore who's not using AI?
Deepa: No. It's a double-edged sword. You're damned if you do, and damned if you don't. But I see positive reasons for using AI to do what humans can't.
So, humans can't read through massive volumes of journals and literature and … that's what you're supposed to do when you study. Great.
But do I trust that doctors continue doing that? I don't know. I think a lot of doctors are overwhelmed with the amount of work that they have. So, if they don't have the time to do the research and go through what's new in their field of expertise, why not get the AI to do it?
[Music Playing]
Menno: It sounds so reasonable, doesn't it? So, AI and doctors working closely together, hand in hand, that sounds like a really good future scenario. What do you make of it?
Tia: Yeah, definitely. Again, going back to that human-centred part: if it helps doctors in their workflow, if it doesn't amplify bias, if it doesn't make their work less efficient or wrong, then definitely this is a great application and I can see it happening.
And to actually have a good basis for that, you have to have credible data sources. I think this is something that's very important to understand about large language models such as ChatGPT: they're trained on a huge corpus of data.
And it's very important to check the credibility of these sources; that really helps increase the credibility of these models.
Menno: But let's say, what would be your fantasy scenario? Let's say that you can tame ChatGPT, or any other GPT, and put it all over healthcare. What could the future of healthcare be?
Tia: I think ChatGPT can be used as a brainstorming tool for doctors and nurses, and for patients as well. I already mentioned it in the beginning: if you have something that you need to look up very quickly, it can help you out. And of course, it shouldn't be used as a ground source of truth.
So, you should always do additional research, apply your knowledge that you already know of, and then use that to go off and help patients. Especially because again, it's such a sensitive application or industry to apply AI in.
Menno: So, in my words, that's your future scenario. Great, if the quality of the data is good, and if there's more data. You always want more data, Tia, I know.
Tia: Better data. Okay, great discussion as always. So, I want to ask you, what are you ending with then, Menno?
Menno: Well, finally of course, I wanted to hear from Aaron about this amazing open-source project I mentioned earlier, that he worked on with PostEra, the COVID Moonshot.
[Music Playing]
It's a very special story, what you've done with the COVID Moonshot. And I was thinking back to maybe 2010, when I was working on open-source innovation and crowdsourcing, these kinds of words, and everybody was going, “Wow, we need the brains of people.”
And now we're talking about, “We need the brains of the machine.” It's actually something that you've done, you've worked with. So, tell me more about the COVID Moonshot in a couple of steps. So first, what it is and then how it works and what did it bring?
Aaron: Sure, yeah. What was innovative about COVID Moonshot, again to use this input and output split, is that we were crowdsourcing, and we were open-sourcing the output of the code, not the input of the code, which is how it works.
So, for COVID Moonshot, the idea was can we crowdsource a drug? Can we find a cure for COVID, an antiviral that we can distribute around the world at low cost without any patents, without any IP, without any profit?
We were motivated by some amazing work that a group in Oxford, UK called Diamond Light Source had done, which is they'd run a preliminary experiment, it's called a fragment screen against a core part of the virus. And they'd released the data on Twitter.
Now, one of my co-founders spotted that data and realised that our machine learning technology was actually quite suited to help take this preliminary experiment and advance it forward.
And so, what we decided to do was say, well, rather than just doing this by ourselves, why don't we create a quick website and ask people for their opinions on, given this data, what would you do next to develop a drug? And we will use our machine learning to evaluate and score those ideas.
And so, we were three founders in an Airbnb at the time, March 2020. I was surprised that anybody noticed our website, but it actually blew up over the course of several months: we had over 20,000 submissions, and about 400 scientists around the world got involved.
And what began on Twitter turned into the world's largest open science effort to develop a COVID antiviral.
And so, in the very earliest days, and for the first six months, scientists all at home, locked out of their labs, would design molecules and submit them to PostEra. We would score them using our machine learning and figure out how to make them. And then eventually the money came in from donors to not just do this in theory, but do it in practice.
And so, we began to make actual molecules in chemistry inspired by the ideas of scientists and scored by our machine learning.
And that has now, over the last two years, gone from just that preliminary experiment on Twitter to a drug that is being prepared for clinical trials next year, which is really incredible.
As far as we know, it will be the world's first crowdsourced drug, if you will, taken end to end to clinical trials.
And so, it's been a phenomenal story Menno, of yes, there's some exciting machine learning involved, but also a huge amount of global support and collaboration from hundreds of people all over the world.
Menno: And why did you decide to make it an open-source project?
Aaron: Well, I think first, there were three people in an Airbnb and lawyers were expensive. And I think we also felt that when the vaccines came, which they did, it would be the richer, more developed nations that would be able to access them, while developing nations would often be left behind.
There is a challenge in getting equitable access to vaccines. And we knew, therefore, that an antiviral would likely be the main line of defence for a lot of people in these nations, because a pill in a box is much easier to distribute around the world than a vaccine with minus 90 degrees Celsius storage requirements.
And so, we felt that not only could we move faster without any IP and patents, but also that it would ensure the antivirals that resulted would be low cost, so that they could be accessible and provide equitable access to everyone around the world.
But I think also, thirdly, we realised that because we were putting all of our data, all of our structures, all of the experimental results in the open, our work would actually be able to help other scientists and other drug companies who were working on cures for COVID, even if they were doing it for profit, which we were fine with.
And I can actually point you to drug candidates that are now in late-stage clinical trials whose development was inspired by COVID Moonshot, and they put that in the manuscript.
And so, we felt those were the main reasons why we were very happy to leave it open-source and IP free over the course of time.
Menno: So, this format, this model that you've worked with, it sounds like there are a lot more health problems in the world where the big bio companies don't step in.
Aaron: That's very true, Menno. We get asked this a lot, and I'll make a couple of comments. I think it's fantastic that we have helped show that this can be done, and we have not executed perfectly.
But to answer your question, I think it's now an option on the table that people should think about. I don't think it's a silver bullet. There were certain special circumstances, for example, there was just an intense amount of global focus on this one disease at a particular time.
So, I think there was that, but I certainly think that there were areas like antibiotics and other infectious diseases where there's very limited incentives for big companies to get involved in research there for a number of reasons, often just not scientific, just economic ones.
And so, to at least look at COVID Moonshot and say, well, this is a model that has been tried and proven to some extent, can we at least consider it and talk about it? I think we've made that option very viable for people to at least discuss.
So, I'm not going to claim it's a silver bullet and cure all problems, but I think in certain situations, as was COVID, the approach of crowdsourcing scientific expertise, I think has been validated to some extent.
[Music Playing]
Menno: Aaron, thank you so much for your fascinating and intriguing story about PostEra.
Aaron: Thank you, Menno, it's a pleasure to be with you.
Menno: So, that's all for today. Thank you so much for listening. And thanks to Aaron, Deepa and Tia for all the insights.
Tia: If you enjoyed this episode and want to let us know, please do get in touch on LinkedIn, Twitter, or Instagram. You can find us at Sogeti. And don't forget to subscribe and review Playing with Reality on your favourite podcast app, as it really helps others find our show.
Menno: And in two weeks, we will be looking at another AI case study, this time focusing on coding and the developers using it to build new digital frameworks. Do join us again next time on Playing with Reality.