State of Artificial Intelligence applied to Quality Engineering 2021-2022
Section 4.2: Automate & Scale

Chapter 2 by Digital.ai

Expedite mobile testing cycles

Business ●●○○○
Technical ●●●○○


The biggest challenge modern organizations face is not developing solutions but testing them. Building a solution is, in many ways, a collection of solved problems: source control, version control, cloud, security, and more. The component that enterprises across industries still struggle with is quality.

Challenges

Product quality goes hand in hand with the velocity of change, and both are driven by market demand. The market demands a higher velocity of product delivery without any dilution of quality. As this demand increases, so does the required throughput, and with it the pressure to provide a fully automated testing pipeline.

First, we must understand the solution that is being built.

The Digital.ai Continuous Testing solution leverages multiple bots (driven through APIs) to operate a fleet of mobile devices as part of an app exploration.

The goal of the app exploration is to build a model of the application, enabling users to gain full visibility into all paths of execution throughout the application.
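To make this concrete, such a model can be pictured as a directed graph whose nodes are activities/pages and whose edges are the user actions discovered during exploration. The Python sketch below is a minimal illustration; the class and method names are ours, not part of the Digital.ai product API.

from collections import defaultdict

class AppModel:
    """Minimal directed-graph model of an app: nodes are activities/pages,
    edges are the user actions discovered while exploring the app."""

    def __init__(self):
        self.edges = defaultdict(list)  # activity -> [(action, next_activity)]

    def add_transition(self, activity, action, next_activity):
        self.edges[activity].append((action, next_activity))

    def paths(self, start, max_depth=10):
        """Enumerate paths of execution from `start`, up to a depth limit."""
        stack = [(start, [start])]
        while stack:
            node, path = stack.pop()
            yield path
            if len(path) < max_depth:
                for action, nxt in self.edges[node]:
                    if nxt not in path:  # skip cycles in this simple sketch
                        stack.append((nxt, path + [nxt]))

model = AppModel()
model.add_transition("Login", "tap_submit", "Home")
model.add_transition("Home", "tap_profile", "Profile")
for p in model.paths("Login"):
    print(" -> ".join(p))  # Login; Login -> Home; Login -> Home -> Profile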

Figure: Paths of execution

By deploying this solution, users benefit greatly from this model-based approach to testing.

Creating sanity tests is one of the main use cases. Based on the testing model, the automation engine generates test cases that verify that every activity/page within the application is functional.
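As an illustration of this idea, a sanity suite can be derived from the model by computing, for every reachable activity, the shortest sequence of actions that reaches it. The function below builds on the hypothetical AppModel sketch above; it is our simplification, not the product's actual generation engine.

from collections import deque

def generate_sanity_tests(model, start):
    """One sanity test per reachable activity: the shortest action
    sequence (found via BFS) that navigates from `start` to it."""
    tests = {start: []}          # activity -> [action, action, ...]
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for action, nxt in model.edges[node]:
            if nxt not in tests:
                tests[nxt] = tests[node] + [action]
                queue.append(nxt)
    return tests

Each entry can then be replayed on a device: execute the actions in order and assert that the target activity renders.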

Figure: Testing model

Internal Approach to AI

In general, our R&D team is not eager to solve problems with AI algorithms; rather, we do everything we can to avoid needing them.

There are several reasons why we avoid AI-based algorithms:

  • AI algorithms are ‘expensive’ to develop compared with other types of solutions, as they are resource-intensive overall.
  • With AI-based algorithms, there is no certainty that the solution will yield the desired results.
  • In many cases, AI algorithms require ongoing ‘maintenance’ as the data and the needed heuristics change.

Therefore, we turn to AI tools only after attempting to solve the problem with traditional approaches, and only once those approaches have failed to produce reasonable results.

Below are a few examples where we have used AI to solve various problems:

Identifying Anomalies in the User Interface

When we refer to anomalies, we are talking about defects or bugs.

Now, it is relatively easy to identify a change from the model (or from the expected behavior) when the change appears consistently across all of the device metrics.

The complexity increases when anomalies occur only within specific segments or devices. To elaborate on this point, let’s dive a bit deeper into the subject of mobile device fragmentation.

One of the main challenges in building apps that work for all of your customers is that the landscape of target devices on which the app is deployed is highly fragmented.

The testing device metrics axes include different operating systems (mainly Android and iOS), different OS versions (for example, iOS 14.x and iOS 15.x), different screen sizes, different manufacturers (such as Samsung and Xiaomi), and different models.
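For illustration, these axes can be crossed into a concrete device matrix. The values below are hypothetical placeholders; a real project would draw them from its own market analytics.

from itertools import product

os_versions   = [("iOS", "14.8"), ("iOS", "15.2"), ("Android", "11"), ("Android", "12")]
screen_sizes  = ["5.4in", "6.1in", "6.7in"]
manufacturers = ["Apple", "Samsung", "Xiaomi"]

matrix = [
    {"os": os_name, "version": ver, "screen": scr, "maker": mk}
    for (os_name, ver), scr, mk in product(os_versions, screen_sizes, manufacturers)
    if not (os_name == "iOS" and mk != "Apple")   # prune impossible combinations
]
print(len(matrix), "device configurations to cover")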

Figure: Testing device metrics axis

An issue that occurs across all device metrics is a totally different matter from one that occurs only on devices running, say, iOS 13.x.

Understanding the scope of an issue can save R&D teams significant time and money.

When building a hypothesis about the scope of an issue, we aim to calculate the certainty of that hypothesis, as we prefer to surface only hypotheses with very high certainty.

The available data sample pool is relatively small: the average project uses about 10 devices per operating system, and an average test takes about 5 minutes to execute.

A low number of data samples, combined with high fluctuations in the metrics, results in low certainty for any hypothesis.

To improve the certainty of a hypothesis, the solution generates more data samples by executing the model against new devices with different properties.

This results in highly accurate insights that save users a great deal of time in test result analysis.
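One way to formalize this certainty, purely as an illustration, is a one-sided binomial test asking whether a segment's failure rate is significantly higher than the overall baseline. The sketch below uses SciPy; the field names and the "certainty" score are our assumptions, not the product's actual algorithm.

from scipy.stats import binomtest

def segment_certainty(results, segment_key, segment_value):
    """Certainty that failures concentrate in one device segment,
    e.g. segment_key="os_version", segment_value="iOS 13.x".
    `results` is a list of dicts like {"os_version": ..., "passed": bool}."""
    overall_rate = sum(not r["passed"] for r in results) / len(results)
    seg = [r for r in results if r[segment_key] == segment_value]
    if not seg:
        return 0.0
    failures = sum(not r["passed"] for r in seg)
    # One-sided test: is the segment's failure rate above the baseline?
    test = binomtest(failures, len(seg), overall_rate, alternative="greater")
    return 1.0 - test.pvalue   # crude "certainty" score in [0, 1]

With only a handful of devices in a segment, the p-value stays high and the certainty stays low; executing the model on additional devices that share the suspect property is precisely what drives the certainty up.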

Identifying Anomalies in App Performance

Now, let’s explore what client-facing UX performance is and why it is critical.

User experience is the compilation of many elements, and many factors affect the usability and overall satisfaction of a mobile application. Among the high-level factors, users must consider usefulness, usability, value creation, ease of use, credibility, accessibility, and more. A key factor, from the user experience perspective, is application performance itself.

Mobile applications operate in complex network environments, where the communication channel parameters (bandwidth, latency, and packet loss, to name a few) are continuously changing. As a direct consequence, this can cause irreparable damage to the usability of an application and to overall brand sentiment.

Every application communicates with some type of backend, and this communication is typically REST API-based, enabling applications to download and update information as needed, as well as to stay synchronized.

To analyze application performance, users should split the problem into transactions. A transaction starts when a user interacts with the application (for example, by clicking the login button) and ends when the application renders the results received from the backend. Note that a single transaction can include tens of REST API requests and responses.

Just as the communication channel affects the experience, other parameters within a transaction matter as well, such as the number of requests and round trips, the size of the downloaded data, and the corresponding response delays (from the server's perspective).

Users may also encounter non-networking elements that impact the overall user experience, such as consumed CPU, memory, and/or battery.

Another important transaction performance metric to consider is ‘Speed Index’, which is highly critical to the user experience.
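For illustration, a transaction can be summarized as a small record that aggregates its network and resource metrics. The structure below is a hypothetical sketch, not the product's data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ApiCall:
    url: str
    request_bytes: int
    response_bytes: int
    duration_ms: float

@dataclass
class Transaction:
    """One user interaction (e.g. tapping Login) up to the rendered result."""
    name: str
    duration_ms: float            # interaction start -> fully rendered
    speed_index: float            # visual-completeness metric
    cpu_pct: float                # device CPU consumed during the transaction
    memory_mb: float              # device memory consumed during the transaction
    calls: List[ApiCall] = field(default_factory=list)

    @property
    def request_count(self) -> int:
        return len(self.calls)

    @property
    def downloaded_bytes(self) -> int:
        return sum(c.response_bytes for c in self.calls)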

When examining an application model (as shown earlier), one can see that every connection from one activity to the next (an edge) can be considered a transaction.

The aim of the solution, from the tenant's perspective, is to identify performance regressions and their root causes.

We would like to create a performance baseline from past executions and, to begin with, identify any significant deviations. A deviation may occur in any of the elements listed above, such as transaction duration, Speed Index, CPU, or memory.

In this instance, the solution builds a hypothesis around deviations from a performance standpoint, and it leverages the large amount of information compiled on each transaction to identify the root causes.
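A minimal sketch of such a baseline, assuming the hypothetical Transaction records above: keep per-metric statistics over past executions and flag the current run when a value falls more than a few standard deviations away (the threshold k is illustrative).

from statistics import mean, stdev

def find_deviations(history, current, k=3.0):
    """Flag metrics of `current` deviating more than k sigma from the
    baseline built from `history` (a list of past Transaction records)."""
    anomalies = {}
    for metric in ("duration_ms", "speed_index", "cpu_pct", "memory_mb"):
        past = [getattr(t, metric) for t in history]
        if len(past) < 2:
            continue               # not enough executions for a baseline
        mu, sigma = mean(past), stdev(past)
        value = getattr(current, metric)
        if sigma > 0 and abs(value - mu) > k * sigma:
            anomalies[metric] = {"value": value, "baseline": mu, "sigma": sigma}
    return anomalies

Each flagged metric then becomes a candidate root cause to correlate with the transaction's request-level details.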

Figure: Performance baseline

About the author

Guy Arieli

Mr. Arieli has a strong track record in technology innovation, with more than 25 years of development and hands-on experience in test automation. Mr. Arieli co-founded Experitest, which was acquired by Digital.ai. Prior to Experitest, he spent several years in management positions at HP (formerly Mercury), Cisco, and 3Com. In addition, he founded and sold the largest local test automation services company, TopQ (formerly Aqua Software), to a leading publicly traded technology group, Top Group. Mr. Arieli leads the largest test automation forum online and is a keynote speaker at events worldwide. He holds a B.Sc. from Israel's world-renowned Technion.

About Digital.ai

Digital.ai is an industry-leading technology company dedicated to helping Global 5000 enterprises modernize and transform their businesses to compete in today's digital markets. Digital.ai combines leading agile, DevOps, security, testing, and analytics technologies in an advanced, AI/ML-powered platform that provides the end-to-end visibility and unprecedented insights enterprises need to accelerate business value from software investments and increase operational efficiency while reducing costs and software-related risks. Purpose-built to manage the scale and complexity of large organizations, the Digital.ai platform enables enterprises to align software orchestration to strategic outcomes and optimize their business around the flow of value for their customers. Learn more at www.digital.ai and join the conversation on Twitter.