State of AI applied to Quality Engineering 2021-22
Section 7: Secure

Chapter 2 by Sogeti

AI in security testing: Building trust with AI

Business ●●●●○
Technical ●○○○○


While the hype surrounding machine learning and artificial intelligence is considerable, these technologies are not magical. Even if these machine learning/artificial intelligence-driven tools have the potential to deliver significant improvements, security solutions remain in their infancy (particularly in terms of automation/remediation tasks). Organizations must establish clear objectives and expectations in advance and choose the appropriate tools (implementing the best AI/ML), as not all are equal.

Back to the future

Since the mid-1990s, the need for speed and accuracy in detecting and responding to threats has been rising at a geometric rate. In 1995, the WM.Concept macro malware was discovered, and shortly afterwards over 20,000 new macro viruses became known: a dramatic jump from just under 6,000 known malware samples to over 25,000 in a matter of weeks. This marked the beginning of the relentless pursuit of speed. Automation was the only way forward, and the pace has only accelerated since: well over half a million new pieces of malware now appear each day.

It is not just malware that has seen this need to automate. Network security has evolved from simple port blocking to managing the state and content of billions of connections and continuously identifying new risks. IoT has also led to a massive surge in devices and data to sift through to find threats, misconfigurations and suspect behaviour. All areas of security are feeling the pressures of scale and speed.

This has led to ever greater reliance on automation to keep up. It began with simple rule-based responses to make sure that things were running as “normal” and that only the right people could access the right things. However, this strategy quickly became insufficient. Newer solutions emerged that use machine learning to detect anomalies and threats, such as EDR (Endpoint Detection and Response)/XDR and UEBA (User and Entity Behavior Analytics), along with guided analysis to help find the root causes of threats.

All of this effort has occurred against the backdrop of a massive skills shortage in Cyber and the requirement to keep up with an ever-increasing volume of data with the same, or less, dedicated staff.

Time of Artificial Intelligence!

Early steps with AI and its application in cybersecurity

We saw the first experiments a few years ago to detect polymorphic viruses, where algorithms were used to extract the viral signature. Things have moved on at pace, and AI is now being applied to new fields and more complex contexts to help humans solve difficult problems and speed up analysis. For instance, AI is the natural choice for User and Entity Behavior Analytics (UEBA), where it can help to identify deviations by users or entities from the normal behavior patterns a system has collected and analyzed.

Other fields of cyber application include smart honeypots, or deceptive security, whereby AI might be used to automate the creation of lures and trap attackers with what they believe are valuable real assets into which they can inject their malware. Honeypots themselves are not new; the technology has been deployed since 1992 and has proven its worth in the form of decoy systems that set off security alerts once an attacker has penetrated a network. Exciting advances in AI-powered honeypots are ongoing, with research into applying machine learning to honeypots on real systems, rather than decoy ones, so that attack data can be continually captured, analyzed, and acted on to thwart attackers.
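The core honeypot idea described above is simple: expose a decoy service and treat any contact with it as a high-confidence signal. The following minimal sketch illustrates the principle in Python; the port number, severity label, and record fields are illustrative assumptions, not any vendor's actual format.

```python
import socket
from datetime import datetime, timezone

def record_attempt(source_addr, decoy_port):
    """Turn a connection attempt into an alert record for the SOC.

    Any contact with a decoy is suspect by definition: no legitimate
    user has a reason to touch a service that is not really there.
    """
    return {
        "time": datetime.now(timezone.utc).isoformat(),
        "source": source_addr,
        "decoy_port": decoy_port,
        "severity": "high",
    }

def run_honeypot(port=2323, max_events=1):
    """Listen on a decoy port (2323 mimics Telnet) and log whoever connects."""
    events = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while len(events) < max_events:
            conn, (addr, _) = srv.accept()
            conn.close()  # no real service behind the lure
            events.append(record_attempt(addr, port))
    return events
```

Where AI enters the picture is in what the research above proposes: generating convincing lures automatically and analyzing the captured attack data at scale, rather than hand-crafting each decoy.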

Identifying patterns with machine learning

Based on “normal” system behavior, we can now identify dangerous patterns with a high level of confidence. Supervised or unsupervised ML-powered solutions can detect patterns associated with different threats in very large volumes of data. Unsupervised ML models learn what normal behavior looks like by observing events over a period of time in a given environment (such as network traffic or device CPU usage). They can then adapt and redefine the baseline from one environment to another. Because this approach uses no fixed rules or thresholds, it is especially useful for detecting APTs (Advanced Persistent Threats) and other targeted attacks.
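The baselining idea can be illustrated with a deliberately simple sketch: learn the mean and spread of a metric from past observations, then flag readings that deviate strongly. Real unsupervised solutions use far richer models over many signals at once; the CPU figures and the 3-sigma threshold here are illustrative assumptions only.

```python
import statistics

def build_baseline(observations):
    """Learn 'normal' from a window of past metric readings (e.g. CPU %)."""
    return statistics.mean(observations), statistics.stdev(observations)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag readings that deviate from the learned baseline by more
    than `threshold` standard deviations."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Learn the baseline from a week of CPU-usage samples for one host.
normal_cpu = [22, 25, 19, 24, 21, 23, 20, 26, 22, 24]
baseline = build_baseline(normal_cpu)

print(is_anomalous(23, baseline))   # a typical reading
print(is_anomalous(98, baseline))   # a sustained spike stands out
```

Note that nothing here is a hand-set rule about CPU levels: the same code relearns a different baseline in a different environment, which is exactly the adaptability the text describes.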

Laptops’ computing capacity has also greatly improved, and many local automated remediation actions based on AI analysis can now be performed using their GPUs (graphics processing units). This GPU computing power is really useful when time to detect is vital to contain an attack (for instance ransomware) and prevent its propagation. It can be used in a detection “mode”, but the real progress lies in anticipating the issue and using deep learning mechanisms to prevent attacks: deep learning structures algorithms in layers to create an “artificial neural network” that can learn and make intelligent decisions on its own.

Deep learning solutions based on a graph approach can detect abnormal behavior, such as the abusive use of supercomputers specifically for the purpose of cryptocurrency mining. In these solutions, the AI system determines whether the generated graph it analyzes looks like the ones the legitimate program is supposed to generate while running on the system. If not, it can ring alarms and trigger remediation actions to block the program execution.
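A toy version of this graph comparison can convey the intuition: represent a program run as the set of caller-to-callee edges it produced, then measure how much that set overlaps with the edges the legitimate program is known to generate. The Jaccard similarity used here, and the function names in the traces, are illustrative stand-ins for the deep learning models real solutions apply.

```python
def call_graph_edges(trace):
    """Build the set of caller->callee edges observed while a program runs."""
    return {(caller, callee) for caller, callee in trace}

def graph_similarity(observed, reference):
    """Jaccard similarity between two edge sets: 1.0 means identical graphs."""
    if not observed and not reference:
        return 1.0
    return len(observed & reference) / len(observed | reference)

# Edges the legitimate program is known to generate while running.
reference = call_graph_edges([("main", "load_data"), ("load_data", "parse"),
                              ("main", "compute"), ("compute", "write_out")])

# A run that suddenly spends its time in hashing loops looks very different,
# as a hijacked job mining cryptocurrency might.
suspect = call_graph_edges([("main", "load_data"), ("main", "hash_block"),
                            ("hash_block", "hash_block"), ("hash_block", "nonce")])

print(round(graph_similarity(reference, reference), 2))  # 1.0
print(round(graph_similarity(suspect, reference), 2))    # 0.14
```

When the similarity falls below an agreed threshold, the system can raise alarms and trigger remediation to block the program, as described above.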

Most solution vendors and global systems integrators are investing massively in AI and machine learning to develop new algorithms and solve ever more complex security challenges. There are also interesting open source experiments, such as the toolkit shared by the Microsoft Defender research team, aimed at gamifying machine learning for stronger security and AI models. Their experiment is based on autonomous agents interacting with their environment and learning through reinforcement learning techniques.

Addressing scarce resources

AI-based security solutions not only crack complex problems, they also positively contribute to solving human resources issues in SecOps environments, application development and security testing. As most organizations will have experienced, cybersecurity is suffering from a huge resource scarcity, with millions more professionals needed. Therefore, people’s time must be used more effectively, with a focus on the most demanding and interesting analysis.

Using AI, machines can be trained to identify unnecessary events, such as false positives, and free up SOC analysts to concentrate on real incidents. This has a twofold impact: first, clearly, a solution embedding AI should be more productive; and second, AI helps to improve job satisfaction. How? An analyst concentrating on more interesting tasks without wasting energy looking for a needle in a haystack is generally more motivated to complete the “mission” at hand.
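The triage idea above can be sketched as a scoring function that down-ranks alerts from historically noisy rules and up-ranks alerts on critical assets, routing only high scores to a human. The weights, rule names, and hosts below are illustrative assumptions; a production system would learn such weights from analyst feedback rather than hard-code them.

```python
def triage_score(alert, noisy_rules, asset_criticality):
    """Score an alert in [0, 1]; low scores are likely false positives."""
    score = 0.5
    if alert["rule"] in noisy_rules:
        score -= 0.4                       # rule with a history of false positives
    score += 0.3 * asset_criticality.get(alert["host"], 0.0)
    if alert["repeat_count"] > 10:
        score += 0.2                       # persistence suggests a real incident
    return max(0.0, min(1.0, score))

def needs_analyst(alert, noisy_rules, asset_criticality, threshold=0.6):
    """Route only high-scoring alerts to a human SOC analyst."""
    return triage_score(alert, noisy_rules, asset_criticality) >= threshold

noisy_rules = {"legacy-av-signature"}
criticality = {"db01": 1.0, "kiosk07": 0.1}

benign = {"rule": "legacy-av-signature", "host": "kiosk07", "repeat_count": 1}
serious = {"rule": "lateral-movement", "host": "db01", "repeat_count": 14}

print(needs_analyst(benign, noisy_rules, criticality))   # filtered out
print(needs_analyst(serious, noisy_rules, criticality))  # escalated
```

Even this crude filter shows the twofold impact described above: the machine absorbs the haystack, and the analyst sees only the needles.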

In many cases, the quickly evolving cybersecurity threat landscape leaves organizations floundering about what to do. None want to be blamed (publicly) for failing to take the right measures, so they keep adding layers of defensive software, encumbering their software stack, and slowing down their products. This problem worsens as the attack surface is extended with software and applications that are not securely coded, with the result that it is not uncommon for dozens of security bugs to be introduced per 1,000 lines of source code.

Considering the hundreds of millions of existing source code lines likely to harbor security “anomalies”, this is a big challenge for security professionals, and we cannot expect them to review every single line of code. Nor do we believe all coders will become 100% secure coders. Here again, we see a case for machine learning models. They can now detect over 90% of security bugs, as well as analyze the quality of the code developers produce, guide them to correct coding errors, and teach them how to avoid those errors. This is particularly interesting in a DevOps cycle where, for example, an AI-powered API security testing tool can be integrated into the CI/CD pipeline tools, workflows, and processes, minimizing the impact on the velocity of key stakeholders, from developers to security experts.
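To make the pipeline integration concrete, here is a deliberately simplified, rule-based stand-in for what such a scanner does: walk the source line by line, flag insecure patterns, and report findings that a CI/CD gate could act on. The three regex rules are toy examples; ML-powered tools learn far richer signals than any fixed pattern list.

```python
import re

# Illustrative insecure-code patterns, not an exhaustive or vendor rule set.
RULES = [
    (re.compile(r"\beval\s*\("), "use of eval() on untrusted input"),
    (re.compile(r"password\s*=\s*['\"]"), "hard-coded credential"),
    (re.compile(r"verify\s*=\s*False"), "TLS certificate verification disabled"),
]

def scan_source(source):
    """Return (line_number, message) findings, as a CI/CD pipeline gate might."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

snippet = '''password = "hunter2"
result = eval(user_input)
'''
for lineno, message in scan_source(snippet):
    print(f"line {lineno}: {message}")
```

Run on every commit, such a check gives developers feedback in minutes instead of waiting for a security review, which is exactly where the velocity benefit described above comes from.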

Augmenting human know-how with AI

In these few examples, we can see the benefits organizations could get from AI-based solutions. This isn’t a case of pitting AI against human endeavor but is about combining AI with human know-how and expertise to get the best results. This means organizations must embrace a new approach and adapt the security workforce capabilities and skills to fully benefit from AI.

The ML and AI hype is high; but it is not a magic stick. Even if these ML/AI-driven tools hold the potential to deliver substantial improvements, the security solutions still need to mature (especially for automation/remediation tasks). It is necessary for organizations to set clear objectives and expectations upfront and select the right tools (implementing the best AI/ML) as all are not equal.

We should furthermore keep in mind that if defenders are using AI solutions, hackers are also leveraging these techniques to refine their attacks and make them more sophisticated and discreet. The game is not yet over. It is likely that we will need more AI in the future to fight against AI-powered hackers.

About the author

Jean-Marc Bianchini

Jean-Marc is the Global Cybersecurity Head at Sogeti, part of Capgemini. He establishes the entity's business plan and objectives. He is a proponent of cross-functional collaboration between business unit and technical managers at the national level. He is accountable for the satisfaction of customers and consultants, as well as the profit and loss and budgetary performance of his Business Unit.

About Sogeti

Part of the Capgemini Group, Sogeti operates in more than 100 locations globally. Working closely with clients and partners to take full advantage of the opportunities of technology, Sogeti combines agility and speed of implementation to tailor innovative future-focused solutions in Digital Assurance and Testing, Cloud and Cybersecurity, all fueled by AI and automation. With its hands-on ‘value in the making’ approach and passion for technology, Sogeti helps organizations implement their digital journeys at speed.

Visit us at www.sogeti.com

Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organization of 270,000 team members in nearly 50 countries. With its strong 50-year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fueled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering and platforms. The Group reported 2020 global revenues of €16 billion.
Get the Future You Want!