Virtually every system, process, business, and industry is being disrupted by software. As more people come to depend on code, the velocity, volume, and variability of software development have changed considerably. However, the frameworks and approaches available to test and manage software have not evolved to meet these demands.
One approach to addressing this challenge is to combine automation and artificial intelligence: "autonomous" testing. However, initial experiments with this approach faced significant challenges, owing to the explosion of computational states and the difficulty of guaranteeing that relevant scenarios are generated and executed against the code under test. Previous approaches to this challenge include formulating test case generation as a many-objective optimization problem [1] and generating test cases to identify test-suite-overfitted patches [2]. However, both operate at the source code level, which can be impractical and insufficient for manual testers, automation engineers, and business analysts.
[1] A. Panichella, F. M. Kifetew, and P. Tonella, "Automated test case generation as a many-objective optimisation problem with dynamic selection of the targets," IEEE Transactions on Software Engineering, vol. 44, no. 2, pp. 122–158, Feb. 2018.
[2] Q. Xin and S. P. Reiss, "Identifying test-suite-overfitted patches through test case generation," in Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis, ser. ISSTA 2017. New York, NY, USA: ACM, 2017, pp. 226–236. [Online]. Available: http://doi.acm.org/10.1145/3092703.3092718