When it comes to testing updates to packaged applications like SAP and Salesforce, most organizations lean to one of two extremes:
- Test everything: In this approach, testers attempt to manually execute their entire regression suite before each release — a time-consuming and often inefficient exercise.
- Test in production: This usually involves asking key business users to do some manual testing just before the update is released, then using hypercare to fix defects quickly after problems emerge in production.
Neither extreme is ideal, obviously. A smarter option? Test the right things. With this strategy, you use AI-driven impact analysis to identify which objects are most at risk from an application update, and you test only those. These are the “right things” to test because they are the likely sources of production defects. By pinpointing them, AI-driven impact analysis eliminates unnecessary testing, cutting the average test scope for a release by 85%. In other words, it gives you up to 100% risk reduction for only 15% of the effort. This lets you deliver higher-quality releases faster than ever before, while preventing the defects that cause business disruption.
This is the strategy that The Coca-Cola Company adopted. Coca-Cola relies heavily on SAP, releasing a high volume of custom and standard transports throughout the year. Their IT team struggled with the overwhelming amount of manual analysis required to identify every object impacted by a software change. It simply took too much time, required too many people, and put too much of the company’s operations at risk. So, they turned to automation.
They now use a smart impact analysis tool, Tricentis LiveCompare, to automatically identify the SAP objects at risk from an update. The results are impressive. By automating risk analysis, teams at Coca-Cola can tailor a test plan for each release based on the actual risks to their business in that release. By testing the right things, they are not only speeding up their testing and minimizing business risk; they are also saving as much as 40% per release.
This AI-driven change impact analysis is implemented in Tricentis LiveCompare. It assesses packaged apps automatically to learn about their business processes, integrations, custom code, security permissions, and configurations. Then, it investigates the actual landscape and usage data from your production systems to determine the true risks posed by an application update.
For instance, when examining SAP, LiveCompare detects integration points that use standard SAP interfacing techniques such as IDoc and BDC, then alerts engineers whenever an update puts those integrations at risk.
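The underlying idea can be sketched in a few lines. This hypothetical Python snippet (the object names, integration names, and dependency map are all illustrative, not LiveCompare's actual data model) flags any integration whose dependencies overlap with the objects touched by an update:

```python
# Hypothetical sketch: flag integrations whose underlying objects changed.
# All names below are invented for illustration.

changed_objects = {"Z_SALES_ORDER_BAPI", "MATERIAL_MASTER_VIEW"}

# Map each integration point to the SAP objects it depends on.
integration_dependencies = {
    "IDoc: ORDERS05": {"Z_SALES_ORDER_BAPI", "VBAK"},
    "BDC: material upload": {"MATERIAL_MASTER_VIEW"},
    "IDoc: INVOIC02": {"VBRK"},
}

at_risk = sorted(
    name
    for name, deps in integration_dependencies.items()
    if deps & changed_objects  # any dependency touched by the update
)

print(at_risk)  # → ['BDC: material upload', 'IDoc: ORDERS05']
```

The untouched integration (`IDoc: INVOIC02`) drops out of scope, which is exactly the point: only at-risk integrations need testing.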
But knowing what to test is only the beginning. The next challenge is knowing which tests to run. Through integration with test automation tools such as Tricentis Tosca, LiveCompare can intelligently select the most appropriate regression tests to run for each object that needs testing. If no test for an at-risk object is detected, it highlights the coverage gap and generates requirements for the creation of associated tests.
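A minimal sketch of that selection logic, assuming a simple mapping from at-risk objects to existing automated regression tests (the object and test names here are hypothetical, not drawn from LiveCompare or Tosca):

```python
# Hypothetical test selection: run existing tests for covered objects,
# and report objects with no tests as coverage gaps.

at_risk_objects = ["Create sales order", "Post goods issue", "Custom pricing exit"]

# Invented mapping of objects to automated regression tests.
tests_by_object = {
    "Create sales order": ["TC_VA01_smoke", "TC_VA01_pricing"],
    "Post goods issue": ["TC_VL02N_regression"],
}

selected_tests, coverage_gaps = [], []
for obj in at_risk_objects:
    tests = tests_by_object.get(obj)
    if tests:
        selected_tests.extend(tests)
    else:
        coverage_gaps.append(obj)  # needs a new test to be created

print(selected_tests)  # tests worth running for this release
print(coverage_gaps)   # at-risk objects with no automated coverage
```

Here "Custom pricing exit" surfaces as a coverage gap, mirroring how LiveCompare would generate a requirement to create the missing test.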
For example, LiveCompare can look back at a customer’s historical application changes to identify “hotspots”: objects that are updated frequently and are therefore at heightened risk of ongoing functional and performance issues. Because updates often expose the same items to risk (“Create sales order” transactions, for example), quality engineers should prioritize automating tests for these hotspots.
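As a rough illustration of hotspot detection, assuming the change history is available as a flat list of touched objects (the objects and the threshold are invented for the example):

```python
# Hypothetical hotspot detection: count how often each object appears
# in past changes and flag the frequently changed ones.
from collections import Counter

# Invented change log: one entry per object touched by a past transport.
change_history = [
    "Create sales order", "Create sales order", "Billing run",
    "Create sales order", "Material master", "Billing run",
]

change_counts = Counter(change_history)

# Treat anything changed in at least two past releases as a hotspot.
HOTSPOT_THRESHOLD = 2
hotspots = [obj for obj, n in change_counts.most_common() if n >= HOTSPOT_THRESHOLD]

print(hotspots)  # → ['Create sales order', 'Billing run']
```

Objects that clear the threshold are the ones whose tests pay off most when automated, since they will almost certainly land in the test scope of future releases too.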
Using this strategy to identify the right items to test before releasing an update eliminates the need for post-release hypercare. Hypercare periods after an update are common in many enterprises: emergency teams are deployed to quickly diagnose and fix defects that are causing business disruption and/or downtime. These periods are costly, disruptive, and usually accompanied by a marked drop in business productivity. They can last weeks, effectively capping the rate at which teams can deliver innovation. If each delivery requires six weeks of hypercare, it makes little difference whether your development teams deliver daily, weekly, or monthly; hypercare will always slow your ability to deliver innovation.
Ultimately, AI-driven smart impact analysis speeds up the testing process while reducing the number of defects released into production to near zero.