State of AI applied to Quality Engineering 2021-22
Section 1: Get Started

Chapter 1 by Applitools & Sogeti

Quality Engineering and recent trends

Business ●●●●●
Technical ○○○○○


While delivery cycle time is decreasing, the technical complexity required to deliver a positive user experience and maintain a competitive edge is increasing—as is the rate at which we need to introduce compelling innovations.

Businesses are being forced to transform at a faster rate than ever before. Time to market is critical, as is perfection in all aspects of delivery. As users immerse themselves in rich user experiences outside of work, expectations for the same caliber of experiences have become table stakes. While the discipline of quality engineering at its core remains unchanged, just about every aspect of how quality is achieved has changed. This includes legacy approaches to automation that are no longer adequate for modern applications.

Businesses cannot afford to sacrifice quality for speed and expect to succeed. Modern teams are reshaping their organizational structures, testing schedules, the accountability for testing, the focus and prioritization of testing, and the tools and technologies they use.

This chapter discusses the evolution of Quality Engineering (QE) and the need for Artificial Intelligence.

From software testing to quality engineering

Since the early 2000s, the major process change in software development has been the adoption of agile, Scrum, or some other form of small-team software delivery. To alleviate the lengthy delay between coding and testing, software teams organized themselves into small groups of developers and testers and divided work into "stories" to be completed over the course of two-, three-, or four-week sprints.

Along with team-based delivery, software components are now created using a continuous integration/continuous delivery (CI/CD) process, which automates the entire check-in workflow for each developer's code. By embedding testing into these processes and working together, engineers catch unintended changes at check-in rather than discovering them well afterwards.
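
As an illustration, below is a minimal sketch of the kind of fast automated check a CI pipeline can run on every check-in. The checkout function, the test names, and the pipeline wiring are hypothetical; a real pipeline would invoke the test runner (for example, pytest) automatically on each commit.

    # checkout.py -- trivial function under test (hypothetical example)
    def calculate_total(subtotal: float, tax_rate: float) -> float:
        """Return the cart total including tax."""
        return round(subtotal * (1 + tax_rate), 2)

    # Tests run by the pipeline (e.g., `pytest -q`) on every check-in.
    # A failure breaks the build immediately, so an unintended change
    # surfaces at check-in instead of weeks later.
    def test_total_includes_tax():
        assert calculate_total(subtotal=100.00, tax_rate=0.21) == 121.00

    def test_empty_cart_totals_zero():
        assert calculate_total(subtotal=0.00, tax_rate=0.21) == 0.00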

This was further reinforced by the introduction of DevOps, where each team member contributes to the management of feature design, workflow design, release processes, and feedback collection throughout the development process. Software quality is no longer addressed at a single fixed point in the software development lifecycle.

More recent innovation comes from:

  • Test orchestration as a single source of quality truth, from release management to deployment, with integrated tooling, quality checks, and metrics that meet business requirements.
  • The growing prominence of cloud-based and containerized systems to better manage test environments[1].
  • The use of synthetic data, driven by its easier availability and by the data-compliance demands of regulations such as GDPR or OCC (a minimal sketch follows this list).
  • The use of production data, analytics[2], user behavior, and sentiment: feeding this information back, learning from it, and incorporating it into new iterations.
  • The adoption of a fully optimized organizational structure, covering ways of working, processes, people, and tooling, with a scalable mix of onshore, nearshore, and offshore (right-shore) resources.
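
To make the synthetic data point concrete, here is a minimal Python sketch (standard library only) that fabricates customer records for a test environment. The field names and value pools are illustrative assumptions; because every value is fabricated, no personal data subject to GDPR is exposed.

    # Generate fabricated customer records for test environments.
    import random
    import uuid
    from datetime import date, timedelta

    def synthetic_customer() -> dict:
        """Fabricate one customer record; all values are invented."""
        first = random.choice(["Alice", "Bruno", "Chen", "Dana", "Elif"])
        last = random.choice(["Martin", "Okafor", "Silva", "Tanaka", "Weber"])
        signup = date(2020, 1, 1) + timedelta(days=random.randint(0, 700))
        return {
            "customer_id": str(uuid.uuid4()),
            "name": f"{first} {last}",
            "email": f"{first}.{last}@example.test".lower(),
            "signup_date": signup.isoformat(),
        }

    if __name__ == "__main__":
        # Print a small batch, e.g., to seed a test database.
        for _ in range(3):
            print(synthetic_customer())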


Software testing is a critical component of the software lifecycle. On its own, however, it neither ensures that fast-paced agile and DevOps teams produce high-quality software nor provides a holistic view of quality throughout the entire application lifecycle. That key role is played by modern quality engineering.



[1] Test Environment Management (TEM) is the practice of managing a complex set of infrastructure, data, and application requirements in order to provide, on demand, a test environment that replicates what the application will encounter in production.
[2] https://techbeacon.com/resources/mobile-analytics-playbook-download-full-book

What is modern quality engineering all about?

Quality engineering is a collaborative discipline devoted to ensuring the highest possible application quality at the right moment for the benefit of the business. Quality engineering contributes to the transformation of IT into a change enabler and disruptor. It entails cross-functional, rather than multidisciplinary, teams of professionals with complementary backgrounds and abilities collaborating to accomplish a common goal. In a multidisciplinary team, individuals continue to work on their own island within the team; it is a collection of specialists who each perform their own task. In a cross-functional team, all members share responsibility for completing all tasks. The cross-functional approach strengthens the team's adaptability and resilience: when one team member is unavailable, the team continues to function. The term cross-functional also appears in the definition of DevOps.

Quality engineering activities should be integrated into both the process and the people involved, ensuring the team's ability to create and deliver quickly, cheaply, and flexibly, with adequate quality. Teams need a way of working that lets them say with confidence that they are satisfied with both the quality of the software they deliver and the speed with which it was delivered. The overarching principle is application quality, for which all share equal responsibility and accountability.

The ultimate goal is to deliver business value.

Arguably, the greatest issues teams face are almost never technical; they are human: confidence, trust, communication, and cooperation. You will find these at the heart of any product or process failure. The primary goal of quality engineering is therefore also human-centered: exploring, learning a system, and gaining confidence, so as to ultimately deliver the pursued business value.


Quality engineering topics

The field of quality engineering can be quite broad. With today's wide variety of IT delivery models in mind, we have described a common set of topics that are always relevant for quality engineering, regardless of the IT development, operations, and maintenance approach the organization follows. How these topics are addressed in your situation depends on many factors, not least the IT delivery model you use. We are convinced, however, that for effective and efficient quality engineering, all 20 topics need to be addressed in one way or another.

Figure: the 20 topics with the activities relevant for quality engineering

Learn more about quality engineering


TMAP is the body of knowledge for quality engineering in IT delivery. The strength of TMAP can be largely attributed to the considerable practical experience that is the basis for the body of knowledge. This experience comes from thousands of IT professionals in as many projects over the last twenty-five years. The TMAP body of knowledge is easily accessible on the website www.tmap.net.

The book “Quality for DevOps teams” (ISBN 978-90-75414-89-9) describes how quality engineering can be implemented in DevOps and other high-performance IT delivery models.

To learn more about TMAP, find information about the TMAP certification training courses at https://pages.isqi.org/tmap-2020/

Key challenges faced by Quality Engineering

The evolution towards quality engineering should result in the highest possible true velocity while also producing code of adequate quality. In reality, many organizations today are still focused on software testing and end up trading quality for time.

Software developers have come to accept some level of faults, as long as they are not "show-stoppers". Even within agile sprints, teams exhibit varying degrees of collaboration, with varying impact on their overall velocity. In some cases, testing is either nonexistent or still begins only after all development is complete, consuming more time and resources than the development itself. Developers frequently have to context-switch away from their current work to address issues discovered in code checked in days or weeks earlier. A workflow can move only as fast as its slowest step; anything produced faster upstream piles up as waste, which here takes the form of untested or unrepaired code. Testing as the last activity in the workflow is unquestionably the most significant bottleneck in the delivery of high-quality code. Why is that so?

The impact of legacy technology

For nearly three decades, automated software testing has meant manually writing scripts in a variety of languages for a variety of automated test types (unit, functional, smoke, regression, performance, load, security, and others). The earliest commercial vendors built this script-based definition of "automation", and by the mid-1990s their tools had gained enormous popularity, laying the groundwork for the majority of today's test automation tools.

In the mid- to late-2000s, a second generation of automation tools emerged, but the workflow remained the same: create scripts, primarily by hand (or via some form of recording), in the hope of reusing them in future releases. This required (and still requires) lengthy script development and heavy maintenance whenever object properties or test data change. With the proliferation of different interface behaviors (mobile, web, embedded software, etc.), the time required to validate all test conditions across all devices, including edge scenarios, is incompatible with continuous delivery and multiple builds and releases per day. Critical visual bugs in the UI layout, localization errors, or digital accessibility issues, for instance, cannot be caught by typical automated tests.
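
The script below is a minimal sketch of such a hand-written UI test (Python with Selenium, assuming a local Chrome/chromedriver setup); the URL and element IDs are hypothetical. It shows the two weaknesses described above: the locators are coupled to object properties, and the assertion checks only text content, so the page can pass while its layout is visually broken.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")  # hypothetical app under test

        # Coupled to object properties: if a developer renames any of these
        # IDs, the locator breaks and the script must be maintained.
        driver.find_element(By.ID, "username").send_keys("demo-user")
        driver.find_element(By.ID, "password").send_keys("demo-pass")
        driver.find_element(By.ID, "submit").click()

        # Text-only assertion: this passes even if the layout is visually
        # broken (overlapping elements, truncated labels, localization
        # errors) on some devices or browsers.
        assert driver.find_element(By.ID, "greeting").text == "Welcome, demo-user"
    finally:
        driver.quit()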

Numerous tools aid in the development, management, and automation of tests, as well as in the analysis of application code to ensure coverage, but this no longer appears to be sufficient. It is obvious that legacy tools and processes fall short of modern quality engineering requirements.

Application complexity continues to increase

Software has grown enormously in size over the last few decades. Increased software volume[1] means that more components are being executed on multiple servers and networks, using specific protocols and generating enormous amounts of data. This increased number of components also adds complexity: each provides specific functions, with varying degrees of criticality (e.g., a ticketing system vs. a payment system), and requires adequate testing. Application complexity is increasing faster than quality engineering teams can keep up.

The concept of quality is evolving

The digitalization and digital transformation of many areas of life have resulted in a significant shift in customer needs and expectations. Users expect mobile apps to work seamlessly, regardless of their choice of browser or device, their location, or context. They expect applications to be valuable, elegant, and useful, among other things. The user experience is emotional in nature and is determined by the way users think, perceive, and feel. Intimacy, proximity, and privacy are critical. As a result, it is critical to assess these various facets of software quality and make improvements to those that fall short of expectations. Quality engineers must quickly and efficiently validate the user experience across multiple interfaces to ensure deliverables achieve the business objectives.
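
Below is a minimal sketch of what this validation looks like in practice, using pytest to repeat one functional check per browser; the URL is hypothetical and the browser drivers are assumed to be installed locally. Add devices, locales, and viewports as further dimensions, and the matrix grows multiplicatively.

    import pytest
    from selenium import webdriver

    # Browsers to repeat the same check on; real matrices add devices,
    # locales, and viewport sizes as further dimensions.
    BROWSERS = {
        "chrome": webdriver.Chrome,
        "firefox": webdriver.Firefox,
    }

    @pytest.mark.parametrize("browser", BROWSERS)
    def test_home_page_title(browser):
        driver = BROWSERS[browser]()
        try:
            driver.get("https://example.test/")  # hypothetical app under test
            assert "Example" in driver.title     # one check, repeated per browser
        finally:
            driver.quit()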

We need assistance

To deal with these changes, complexities, and high expectations, we need to accelerate software development while meeting the quality requirements of a future driven by the Internet of Things, robotics, and quantum computing. We need to bring more intelligence into our quality engineering activities. It is about time we considered Artificial Intelligence to help us perform quality engineering faster, cheaper, and more accurately, with the promise of going beyond traditional automated testing. The following chapter provides an overview of this technology, and the report is structured to allow a more thorough examination of the convergence of QE and AI across its various sections.



[1] Size is a critical characteristic of a software product. The Lines of Code metric (SLOC or LOC) is one of the most widely used in industry. The effort required to maintain a software system grows with its size and depends heavily on the technical quality of its source code.

About the authors

Moshe Milman

Moshe brings over 15 years of experience in leadership, engineering, and operational roles in public and private companies. Prior to founding Applitools, Moshe held leadership roles at Wave Systems Corp., Safend, Amdocs (NASDAQ: DOX), and Sat-Smart Ltd. Moshe holds a Bachelor's degree in Information Systems from the Technion, Israel Institute of Technology, and an MBA from IDC (Wharton Business School GCP program).

Antoine Aymer (chief editor)

Antoine Aymer is a technologist with a structured passion for innovation. He is currently the Chief Technology Officer for Sogeti's quality engineering business, accountable for bringing solutions and services to the global market: analyzing market trends, evaluating innovation, defining the scope of services and tools, and advising customers and delivery teams. Apart from numerous industry reports, such as the Continuous Testing Report and the 2020 State of Performance Engineering, Antoine co-authored the "Mobile Analytics Playbook", which aims to help practitioners improve the quality, velocity, and efficiency of their mobile applications through the integration of analytics and testing.

Rik Marselis

Rik Marselis is a principal quality consultant at Sogeti in the Netherlands. He is a well-appreciated presenter, trainer, author, consultant, and coach who has supported many organizations and people in improving their quality engineering practice with useful tools and checklists, practical support, and in-depth discussions.

Rik is a fellow of SogetiLabs, Sogeti's R&D network, whose activities result in presentations, books, white papers, articles, podcasts, and blogs about IT in general and quality engineering in particular. He has contributed to 22 books since 1998. He is co-author of the books “Testing in the digital age – AI makes the difference” (2018) and “Quality for DevOps teams” (2020).

Rik is an accredited trainer for TMAP, ISTQB, and TPI certification training courses, and he has also created and delivered many bespoke workshops and training courses, for example on Intelligent Machines, exploratory testing, and DevOps testing.

About Applitools

Applitools delivers a next-generation test automation platform for cross-browser and cross-device testing, powered by an AI-assisted computer vision technology known as Visual AI. Visual AI helps developers, test automation engineers, and QA professionals release high-quality web and mobile apps while enabling CI/CD.

Hundreds of companies across industries such as Technology, Banking, Insurance, Retail, Pharmaceuticals, and Media, including 50 of the Fortune 100, use Applitools to deliver the best possible digital experiences to millions of customers across all screens.

Visit us at www.applitools.com
