Software developers have always tolerated some level of faults, as long as they are not “show-stoppers”. Even within agile sprints, teams exhibit varying degrees of collaboration, with varying impact on their overall velocity. In some cases, testing is either nonexistent or still begins only after all development is complete, consuming more time and resources than the development itself. Developers frequently need to context switch away from their current work to address issues discovered in code checked in days or weeks earlier. A workflow can move no faster than its slowest step, and work that piles up ahead of that step is waste. Likewise, context switching manifests itself in the form of untested or unrepaired code. Testing as the last activity in the workflow is unquestionably the most significant bottleneck in the delivery of high-quality code. Why is that so?
The impact of legacy technology
For nearly three decades, automated software testing has meant manually writing scripts in a variety of languages for a variety of automated test types (unit, functional, smoke, regression, performance, load, security and others). The earliest commercial vendors established this script-based definition of "automation". By the mid-1990s, their tools had gained enormous popularity, laying the groundwork for the majority of today's test automation tools.
In the mid- to late-2000s, a second generation of automation tools emerged, but the workflow remained the same: create scripts, primarily by hand (or via some form of recording), in the hope of reusing them in future releases. This required (and still requires) long script-development times and heavy maintenance whenever object properties or test data change. Given the proliferation of different interface behaviors (mobile, web, embedded software, etc.), the time required to validate all test conditions across all devices – including edge scenarios – is incompatible with continuous delivery and multiple builds/releases per day. Moreover, critical visual bugs in the UI layout, localization errors and digital accessibility issues cannot be caught by typical automated tests at all.
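The maintenance problem described above can be sketched in a few lines. This is an illustrative toy example, not taken from the article: the page maps and element ids are invented, and a real script would use a UI automation framework, but the failure mode is the same. A script hard-codes a locator, and a harmless rename in the application breaks the test until someone updates the script.

```python
# Sketch: why hand-written test scripts need maintenance when
# object properties change. The "pages" here are toy element maps
# standing in for a real UI tree (hypothetical ids, for illustration).

# Version 1 of the app's login page.
page_v1 = {"btn-login": "Log in", "txt-user": ""}

# Version 2: a developer renames the button's id during a redesign.
page_v2 = {"btn-signin": "Log in", "txt-user": ""}

def click(page, element_id):
    """Pretend to click an element; fail if the locator no longer matches."""
    if element_id not in page:
        raise KeyError(f"locator '{element_id}' not found - script needs maintenance")
    return f"clicked {element_id}"

# The automated script hard-codes the locator from version 1 ...
print(click(page_v1, "btn-login"))

# ... so the very same script breaks on version 2, even though the UI
# still works perfectly well for a human user.
try:
    click(page_v2, "btn-login")
except KeyError as err:
    print("test broke:", err)
```

Multiply this by thousands of locators, dozens of device types and several releases per day, and the maintenance cost quickly outgrows the time saved by automation.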
Numerous tools aid in the development, management and automation of tests - as well as the analysis of application code to ensure coverage - but this no longer appears sufficient. It is obvious that legacy tools and processes fall short of modern quality engineering requirements.