April 18, 2022
The use cases for artificial intelligence (AI) continue to grow, from applications in autonomous vehicles and intelligent industry, to chatbots, smart software testing, fast prototyping and transformed business processes. Whatever the use case, however, there are several common themes emerging that will be part of AI in the coming year — and further ahead.
The conversation around the ethical and responsible use of AI remains critical as the use cases grow and AI adoption accelerates. I expect to see an increasing number of organizations moving beyond talk to practical application, and I am already seeing this within our own AI Center of Excellence, where we focus heavily on creating trustworthy, transparent AI. This means having a clear understanding of the outcomes of the AI models we're creating, along with knowledge of the end-to-end process of what it takes to create each model. That transparency begins with an understanding of both the business need and the use case.
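To make that end-to-end transparency slightly more concrete, here is a minimal sketch of a provenance record that travels with a trained model. This is purely illustrative, not our actual tooling; every field name and value below is a hypothetical example.

```python
from dataclasses import dataclass, field


@dataclass
class ModelRecord:
    """Minimal provenance record kept alongside a trained model."""
    name: str
    business_need: str   # why the model exists
    intended_use: str    # the use case it was approved for
    training_data: str   # pointer to the exact dataset snapshot used
    metrics: dict = field(default_factory=dict)      # evaluation outcomes
    limitations: list = field(default_factory=list)  # known gaps and caveats


# Hypothetical example of a filled-in record:
record = ModelRecord(
    name="churn-predictor-v3",
    business_need="Reduce customer churn in the retail segment",
    intended_use="Rank accounts for proactive outreach, not automated decisions",
    training_data="s3://datalake/churn/2022-03-snapshot",
    metrics={"auc": 0.87},
    limitations=["Under-represents customers with < 6 months tenure"],
)
print(record.intended_use)
```

Keeping the business need and intended use next to the evaluation results is what lets a reviewer judge whether an outcome is justified for that use case.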
We've developed a quality AI framework as a foundation for the entire development process, with a number of gates that act as sanity checks for ethical and responsible outcomes: is there unintentional bias, under-sampling, invasion of privacy, or an unjustified outcome? This type of framework, along with the supporting platform and tools, will be increasingly important for ensuring trust in AI and the outcomes it delivers.
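As a rough sketch of what one such gate might automate, assume tabular training data in a pandas DataFrame with a sensitive-attribute column. The check below flags under-sampled groups and large gaps in positive-outcome rates; the column names and thresholds are illustrative assumptions, not part of the framework described above.

```python
import pandas as pd


def gate_check(df: pd.DataFrame, group_col: str, outcome_col: str,
               min_group_share: float = 0.10,
               max_rate_gap: float = 0.20) -> list[str]:
    """Sanity-check a training set before a model is allowed through the gate."""
    issues = []
    # Under-sampling: flag any group that makes up too little of the data.
    shares = df[group_col].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_group_share:
            issues.append(f"under-sampled group {group!r}: {share:.1%} of rows")
    # Unintentional bias: flag large gaps in positive-outcome rates across groups.
    rates = df.groupby(group_col)[outcome_col].mean()
    if rates.max() - rates.min() > max_rate_gap:
        issues.append(f"outcome-rate gap across groups: {rates.max() - rates.min():.1%}")
    return issues


# Illustrative use with a toy dataset:
df = pd.DataFrame({
    "segment": ["a"] * 95 + ["b"] * 5,
    "approved": [1] * 60 + [0] * 35 + [1] * 1 + [0] * 4,
})
for issue in gate_check(df, "segment", "approved"):
    print("GATE FLAG:", issue)
```

A real gate would cover far more (privacy, consent, explainability), but even a check this simple catches problems before a model ships rather than after.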
Hyper-scalers like Microsoft, Google, AWS, and IBM are changing the AI game. They're pushing the democratization of AI by offering more and more low-code AI solutions to the market. This is something of a double-edged sword: while it makes it much easier for more people to train AI models, the question of responsible and ethical AI rears its head once again. If you're using a low-code approach, you may not fully understand what the model you've trained is actually doing, which can cause serious complications depending on the underlying use case. Here, too, frameworks are needed to manage the risk, and I hope to see a wider debate on this topic to raise awareness of what is possible, ethical, and responsible when it comes to AI, and what is not. So, while we will see a lot of low-code AI solutions in the year ahead, this should be matched with an understanding of what could go wrong.
I envisage a stronger focus on how AI solutions are engineered and scaled. In the past few years we've seen AI teams build amazing proofs of concept, only to struggle to push those solutions into production. These teams are now maturing, and they need a solid foundation so that their solutions can mature as well. This is where MLOps comes into play. What is MLOps? The Capgemini Research Institute defines it as “a set of practices to shorten the time to update and go live of analytics and self-learning systems”. Sound MLOps principles will help teams scale up faster and more consistently.
Essentially, MLOps offers best-practice design principles that help teams build quality AI with transparency and auditability embedded into it, all within a data-centric framework. The data-centric approach ensures that data is managed and used as an asset. MLOps also enables teams to run feedback loops and keep maintainability cycles going, so that they can identify risks such as model drift or bias emerging when datasets change. MLOps will be key to AI teams becoming more effective in production and when scaling up, and consequently we will see a number of platforms and solutions in the market catering to this.
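One concrete form such a feedback loop can take is a drift check that compares the live feature distribution against the training-time distribution. The sketch below uses the population stability index (PSI), a common heuristic for this; the 0.2 alert threshold is a widely used rule of thumb, not something prescribed by any particular MLOps platform.

```python
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between training-time and live data."""
    # Bin edges come from the training (expected) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clip live values into the training range so every value lands in a bin.
    actual = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty bins to avoid division by zero and log(0).
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))


rng = np.random.default_rng(seed=0)
train_feature = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_feature = rng.normal(0.5, 1.2, 10_000)   # shifted distribution in production
print(f"PSI = {psi(train_feature, live_feature):.3f}")  # > 0.2 commonly flags drift
```

Run on a schedule against fresh production data, a check like this turns "the dataset changed" from a surprise into a routine alert.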
We've seen AI accelerate creativity with the evolution of Generative Adversarial Networks, especially in the arts. We will see a similar acceleration in collaboration with the development of the metaverse. Although it's almost 30 years since the term was coined, we're hearing more and more about it today from the likes of Facebook and Microsoft, the latter announcing at its Ignite conference that it was introducing tools to create a metaverse for connecting employees. In a digital metaverse, people can connect in virtual spaces, such as a virtual workspace for brainstorming the next company innovation.
And all of this is going to be driven by multiple AI models in the background. As we look ahead, we will see AI inspiring more creativity and enabling a new era of collaboration in the metaverse.
I am ending this article with one of the most important topics of the day: sustainability. It is clearly gaining traction in the AI space because it is becoming so relevant to the broader IT and business community as well. Like all industries, the AI industry must start considering ways to be more sustainable and raise awareness among practitioners. How much computational power is being used, and what is the carbon footprint of training models? How do we create better, more efficient AI solutions? Yes, AI will fix a lot of things, but we must also consider, and talk about, the environmental impact of training AI models, or at the very least how to offset it. So, as we look to the future, we need to start talking now about how sustainable AI can be.
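For a back-of-the-envelope sense of that footprint, one common approach multiplies hardware power draw by training time and datacenter overhead (PUE), then by the grid's carbon intensity. All of the numbers below are illustrative assumptions, not measurements from any real training run.

```python
# Rough CO2 estimate for a training run; every input is an illustrative assumption.
gpu_count = 8
gpu_power_kw = 0.3          # ~300 W per GPU under load
training_hours = 72
pue = 1.5                   # datacenter power usage effectiveness (overhead factor)
grid_kg_co2_per_kwh = 0.4   # varies widely by region and energy mix

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
co2_kg = energy_kwh * grid_kg_co2_per_kwh
print(f"~{energy_kwh:.0f} kWh, ~{co2_kg:.0f} kg CO2e")
```

Even a crude estimate like this makes the trade-off visible: the same model trained on a cleaner grid, or with fewer retraining cycles, carries a very different footprint.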