State of AI applied to Quality Engineering 2021-22
Section 3.2: Inform & Interpret

Chapter 3 by Capgemini

Bringing NLG into Software Development

Business ●○○○○
Technical ●●●●○


In this short chapter, we present a sample of software engineering use cases that can benefit from Natural Language Generation (NLG). This is a fledgling technology with numerous potential uses, one that we are only beginning to explore.

In May 2020, OpenAI unveiled the world's largest neural network, dubbed GPT-3, a 175-billion-parameter language model. GPT-3 was trained on practically all publicly available text on the Internet and outperformed state-of-the-art models on a range of natural language processing (NLP) tasks, including translation, question answering, and cloze tests. Its main breakthrough is eliminating the need for task-specific fine-tuning. One of the tasks GPT-3 is very good at is text generation – in our case, code generation.

While GPT-3 can be used without any task-specific training (zero-shot learning), its already strong performance improves further with one-shot or few-shot learning, where one or a handful of examples are included in the prompt.

After establishing the language task, we compile a small collection of handpicked examples. In more conventional machine learning, a large dataset would be required, followed by annotation. Language models, however, have a constrained input window: when using the OpenAI API to GPT-3, the text prompt and generated completion together must be less than 2,048 tokens (about 1,500 words). Within this constraint, we must communicate very effectively what we want from the model (the 'prompt'), guided by the new craft of 'prompt engineering'. This requires only a very small collection of examples, in contrast to standard machine learning, where data preparation typically consumes between 60% and 80% of the total project time.
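
As a minimal sketch (assuming the pre-1.0 openai Python package and the 'davinci' engine name of that period; the SQL examples are invented for illustration), a few-shot prompt sent to the completion endpoint might look like this:

```python
import openai  # pre-1.0 openai package; requires an API key

openai.api_key = "YOUR_API_KEY"  # placeholder

# A handful of handpicked examples defines the task inside the prompt itself.
# The prompt plus the generated completion must fit in the ~2,048-token window.
prompt = """Translate the question into a SQL query.

Question: How many customers are there?
SQL: SELECT COUNT(*) FROM customers;

Question: List the names of customers from Paris.
SQL: SELECT name FROM customers WHERE city = 'Paris';

Question: What is the average order value?
SQL:"""

completion = openai.Completion.create(
    engine="davinci",   # GPT-3 engine name exposed by the API at the time
    prompt=prompt,
    max_tokens=64,
    temperature=0.0,    # deterministic completions suit code generation
    stop=["\n\n"],      # stop at the blank line that separates examples
)

print(completion.choices[0].text.strip())
# Plausible (not guaranteed) output: SELECT AVG(total) FROM orders;
```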

Figure: The old way of task definition


Given a prompt, the GPT-3 API determines the pattern that best fits our purpose. Conceptually, it applies 'fuzzy matching' against a very large model: it seeks out the closest match, and its behavior can be influenced by several generation settings.
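
The settings in question are the standard completion parameters; the sketch below lists sensible starting points (the values are assumptions to be tuned per prompt, not recommendations from our implementations):

```python
# Knobs that steer how closely GPT-3 "matches" the pattern set by the prompt.
# Values below are illustrative starting points only.
generation_settings = {
    "temperature": 0.2,        # lower = more deterministic, tighter matches
    "top_p": 1.0,              # nucleus sampling; tune this OR temperature
    "max_tokens": 128,         # upper bound on the length of the completion
    "frequency_penalty": 0.0,  # > 0 discourages verbatim repetition
    "presence_penalty": 0.0,   # > 0 nudges the model toward new content
    "stop": ["###"],           # sequence that ends the completion cleanly
}
```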

We combine Natural Language Understanding (NLU) with Natural Language Generation (NLG) in two domains. While we generate structured code rather than free text, this still falls under the umbrella of NLG.

Use case: Test case generation

The figure below illustrates how GPT-3 works with SQL. The format is question and answer: we ask GPT-3 to perform an action and it determines the best match. The critical engineering lies in the prompt design used to select the appropriate examples for querying GPT-3. The actual solution, which we cannot demonstrate here owing to IP protection, works in a similar way: users upload test cases and Python code is generated.

Figure: GPT-3 generating SQL in a question-and-answer format


In our situation, we provide test cases as input and receive code as output.
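
In a simplified sketch of that flow (the test-case wording, prompt, and expected completion below are illustrative assumptions, not the protected solution), the uploaded test case becomes the prompt and Python code is the completion:

```python
# Illustrative only: a manual test case phrased in plain English becomes the
# prompt, and GPT-3 is expected to complete it with executable Python.
test_case = (
    "Test case: log in with a valid user and check that the dashboard "
    "page title is 'Dashboard'."
)

prompt = f"""Convert the manual test case into a Python test using requests.

{test_case}

Python:
"""

# A plausible completion (what we would hope GPT-3 returns, not a guaranteed
# output); the tester reviews it before adding it to the test suite.
expected_completion = '''
import requests

def test_login_shows_dashboard():
    session = requests.Session()
    response = session.post("https://example.com/login",
                            data={"user": "alice", "password": "secret"})
    assert response.status_code == 200
    page = session.get("https://example.com/dashboard")
    assert "<title>Dashboard</title>" in page.text
'''
```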

Use case: Test coverage generation

Code coverage is one of the most critical aspects of agile projects, since it enables us to build CI/CD pipelines. While programmers enjoy writing code, they are not as enthusiastic about building test cases. Typically, these code coverage test cases are written in the same language as the underlying software. GPT-3 is well-suited for this type of test code generation, as GitHub code is among its training data sources. This significantly increases developer productivity: all the developer has to do is check that the test code supplied by GPT-3 is accurate, turning the task into a debugging exercise rather than a coverage programming exercise.
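
As a sketch of the idea (the function, prompt wording, and parameters are illustrative assumptions, not output from a customer project), the developer pastes the code under test into the prompt and reviews the generated tests:

```python
import openai  # pre-1.0 openai package; requires an API key

openai.api_key = "YOUR_API_KEY"  # placeholder

# The function we want test coverage for is pasted into the prompt.
source = '''
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)
'''

prompt = (
    "Write pytest unit tests for the following Python function.\n\n"
    + source
    + "\n# Unit tests\n"
)

completion = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=200,
    temperature=0.0,
)

# The generated tests are checked (and fixed where needed) by the developer,
# turning coverage work into a review and debugging exercise.
print(completion.choices[0].text)
```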

Learnings from a few customer implementations

One of the most important lessons we have learned from customer implementations is the diversity of templates that exist across teams. Because our Key-Value-Pair model must be trained on specific templates in order to perform well, obtaining sufficient training data (samples of each template type) is a significant challenge, even more so when newer templates are introduced into projects.

When demonstrating these GPT-3-based solutions to customers, expectations are occasionally oversold, or customers come to believe the models can do everything. This is not true. GPT-3 frequently generates deprecated code and makes deprecated library calls. Occasionally it simply does not work properly and produces rubbish. Designing the appropriate prompt is a skill that must be acquired through extensive use of GPT-3 and experimentation with its model parameters. As of July 2021, GPT-3 is not integrated with any cloud systems, which poses security and integration difficulties.

Newer models such as CuBERT (Code Understanding BERT) are constantly being developed, and it is worth adopting them for NLP/NLU tasks to increase quality.

About the author

Rajeswaran Viswanathan

Rajeswaran Viswanathan is the head of the AI Center of Excellence in India. He has published many papers and articles. He is the author of 'proportion', a comprehensive R package for inference on a single binomial proportion and Bayesian computations, widely used for categorical data analysis. He has a passion for teaching and mentoring the next generation of data scientists.

About Capgemini

Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organisation of 325,000 team members in nearly 50 countries. With its strong 55-year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fueled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering and platforms. The Group reported 2021 global revenues of €18 billion.

Get the Future You Want | www.capgemini.com