
Building a Roadmap to Cognitive QA

I’ve previously written about the value of analyzing software repositories to improve testing. It’s part of the journey to what Sogeti terms Cognitive QA, which leverages self-learning and analytical technologies to test and assure the quality of increasingly smart products and applications.

Although test automation and the application of statistical techniques are already part of agile testing and DevOps models, Cognitive QA, with its inherent analytics and smart technology, goes much further. It digs far deeper into the steadily proliferating data volumes in the modern software lifecycle.

But even if you recognize the value of applying analytics to improve testing, there’s now so much data that it’s difficult to know where to start the journey. That’s why we’ve developed a Roadmap and accompanying Rules of the Road to help our clients move forward with confidence to a new world of smart QA and Testing.

From Proof of Value to Operationalization

The Roadmap has two distinct phases: a Proof of Value followed by Deployment, Maintenance & Refinement. The Rules of the Road establish a set of procedures and best practices for implementing Cognitive QA. This is a wholly data-driven approach, grounded in the recognition that data drives most of the activities, the results, and their accuracy.

In the following Roadmap stages undertaken jointly by Sogeti and our clients, Weeks 1 – 8 form the Proof of Value (PoV), while Week 9 onwards takes QA and Testing into planning for operationalizing the PoV into production.

  • Week 1: Are we analytics ready?
    To move ahead on this journey you need to know what data is available (test, development, operations, customer insight) and identify what additional information is required. This stage is an opportunity to identify the rules to be implemented, attributes from repositories to be collected, and scope of work for the project going forward.
  • Weeks 2-3: Data collection and preparation
    The success of Cognitive QA depends on the quality of the data being analyzed. This stage identifies missing, duplicate, and incomplete data, and repairs incomplete records. It also refines the data rules, such as identifying riskier files with higher design complexity, and links them with the data collected.
  • Weeks 4-5: Modeling Cognitive QA rules
    The objective here is to derive statistical insights before modeling and implementing the rules in an analytical package. The models can then be tested and refined as required to produce graphs, trends, outlier analyses, and recommendations once the rules are applied.
  • Weeks 6-8: Validation and reporting
    The recommended results are validated by a subject matter expert through manual inspection, quality metrics are applied, and benchmarking is carried out before a set of recommendations (e.g. prioritized test lists) is put forward for possible deployment.
  • Week 9 onwards: Operationalization
    With the repositories and data sets identified and prepared, and rules established, the final stage is to bring it all together in a fully operational approach. Connectors must be developed that pull the records from repositories, a data lake has to be implemented, a user interface portal deployed, the visualization layer (reports, views, dashboard) customized, workflow and job scheduling integrated, and ongoing monitoring of accuracy and maintenance established.
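The data-preparation and rule-modeling stages above (Weeks 2-5) can be sketched in a few lines of code. This is a minimal illustration, not Sogeti's implementation: the record fields, the duplicate-removal logic, and the z-score threshold for "riskier files with higher design complexity" are all assumptions standing in for whatever attributes and models a real engagement would define.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class FileRecord:
    path: str
    complexity: int       # e.g. cyclomatic complexity from a static-analysis tool
    recent_defects: int   # defects logged against this file in the tracker

def clean(records):
    """Data preparation: drop duplicate paths and records missing key fields."""
    seen, cleaned = set(), []
    for r in records:
        if r.path and r.complexity is not None and r.path not in seen:
            seen.add(r.path)
            cleaned.append(r)
    return cleaned

def flag_risky(records, z_threshold=1.0):
    """Simple statistical rule: flag files whose design complexity sits more
    than z_threshold standard deviations above the portfolio mean."""
    mu = mean(r.complexity for r in records)
    sigma = stdev(r.complexity for r in records)
    if sigma == 0:
        return []
    return [r for r in records if (r.complexity - mu) / sigma > z_threshold]
```

Starting with a transparent rule like this, rather than a complex model, makes the Week 6-8 expert validation step far easier, since a reviewer can see exactly why each file was flagged.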

5 Rules of the Road

No one Cognitive QA journey is the same as another. Nonetheless, there are a number of general rules and guidelines that can help keep things on course as you adopt the analytics of software repositories to improve testing.

  • Establish the available data sources by engaging with the project stakeholders to understand their data sources, process, data life cycle, and challenges (what questions do they want the analytics to answer), then establish a set of Analytics Questions.
  • If a review of the data (availability, accessibility, quality) shows that it does not meet the expected criteria, pause the project and fill the gaps with a data quality improvement program before proceeding.
  • Understand the relationship between different sets of data/artifacts, such as source to test case, or requirements to test cases.
  • Use data exploration techniques to derive insights and flag both positive and negative patterns, then model selected rules using a best-fit algorithm. Start with simple statistical models and move toward more complex mathematical models.
  • Have a SME validate the models and refine them through multiple iterations.

The above are just a small number of the Rules of the Road that we have established to smooth the journey to Cognitive QA. If you’d like to read more, or better understand the Roadmap that we’ve developed – and proven – to help clients prioritize the test cases that matter from across multiple repositories, please get in touch.

Vivek Jaykrishnan
Senior Director - Technology, Product Engineering Services V&V