State of Artificial Intelligence applied to Quality Engineering 2021-2022
Section 8: Operate

Chapter 5 by Capgemini

Industry 4.0: from point data to business impact

Business ●●●○○
Technical ●●○○○


Predictive alerts and intelligent automation can be implemented at every stage of the lifecycle, resulting in increased business impact. This is accomplished via a five-stage data transformation process.

Effective quality engineering is based on establishing quality at each stage of the product's lifecycle and ensuring that the solution meets business objectives.

We face multiple constraints – the four Vs of data (volume, velocity, variety, and veracity), complex use cases and stringent timelines, the high cost of defect recovery at later stages of the life cycle, the breadth of open-source and licensed technology, and a skills shortage. In addition, correlating point data with business objectives is a real challenge. The abundance of siloed data and the plethora of tools make it difficult to mine data and predict patterns with accuracy.

Given the increasing integration of quality engineering into all processes and the growth of artificial intelligence, it is time we considered the value our ALM (Application Lifecycle Management) data could provide, such as the prospect of predicting defects rather than reacting to them.

Through two real-world deployments, this chapter delves into the process of converting ALM data into knowledge and building a new collection of analytics with the help of artificial intelligence.

The deployment you are about to read about helped business users obtain predictive maintenance warnings covering critical trends in machine and equipment faults, as well as manufacturing process performance and quality. The successful deployment is estimated to have saved around 350 hours of operational downtime and 700 hours of preventive maintenance effort per year for 600+ robots.

The journey from point data to business decision

The purpose of this article is to evaluate the deployment of predictive monitoring and maintenance at two distinct organizations, both market leaders in their respective industries: automotive manufacturing and aviation engine manufacturing. Both had very comparable ambitions: to achieve business benefits through effective analysis of software data, some of it generated by sensors or IoT (Internet of Things) devices.

The following action plan was put in place:

  • As part of Industry 4.0, these organizations wanted to examine how to transition from a preventive to a predictive maintenance model in order to minimize downtime, breakdowns, and interruptions.
  • As part of the contractual agreement, customers installed sensors in devices (robots and machines) to monitor software performance and use the data to optimize maintenance.
  • A big data platform was developed to integrate tens of use cases from predictive maintenance, aftersales and warranty, and marketing and sales, all of which are powered by data science and machine learning algorithms.

Although these examples ultimately deal with factory machinery and equipment, the machine learning models we developed relate to software quality engineering.

The figures below illustrate the high-level architecture and the 5 stages of the data transformation journey. In both diagrams, everything begins with data and concludes with a decision tied to the business objective. We will relate each stage to the aforementioned high-level architecture.

Figure: High-level architecture

Figure: The 5 stages of the data transformation journey

In God we trust, all others must bring data.

- W. Edwards Deming

Data acquisition is the initial stage in the journey from data to decision. Throughout the data journey, we convert data into information and then into knowledge, which we can further transform into wisdom. We can gain a better understanding of the data-to-decision path by applying frameworks such as DIKW (Data, Information, Knowledge, Wisdom). At each stage, we established continuous quality through the use of AI tools and extended the principle with a pillar of Business Impact. This means that the wisdom state of data does not qualify as the optimal outcome unless it is tied to a business objective function - either growing the top line or reducing costs to improve the bottom line.

Stage 1: Data

Due to IoT sensors in the assembly line, our data sets are unstructured, structured, and semi-structured. We created a Big Data analytics platform driven by Hadoop and Spark. In near real time, IoT sensor data and other operational process data from across factories are assimilated. Each distinct source requires a unique ingestion approach based on the variety, velocity, validity, and volume of the data. The solution used "schema on read," which means that data is not checked prior to loading but rather when a query is submitted; this results in a very quick initial load because no data is read, but it defers data-quality risk to subsequent phases. As a result, we built an automated quality engineering method for reconciling the data, ensuring its quality and completeness, and detecting the absence of crucial data points. It is integrated with an escalation-based notification system, which ensures that the appropriate team is notified immediately. Without this mechanism, it would have been a case of garbage-in, garbage-out.
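To illustrate the idea (not the production pipeline), the sketch below shows how a post-load reconciliation check might validate completeness and escalate to the right team; the column names, thresholds, and notification hook are assumptions made for the example.

```python
# Minimal sketch of a post-ingestion reconciliation check; schema, thresholds,
# and the alerting hook are illustrative assumptions, not the deployed platform.
import pandas as pd

REQUIRED_COLUMNS = ["machine_id", "sensor_id", "timestamp", "reading"]  # hypothetical schema
MAX_NULL_RATIO = 0.02  # assumed tolerance for missing critical values

def notify(team: str, message: str) -> None:
    """Placeholder for the escalation-based notification system."""
    print(f"[ALERT -> {team}] {message}")

def reconcile(batch: pd.DataFrame, expected_rows: int) -> bool:
    """Run completeness and quality checks on an ingested batch; escalate on failure."""
    ok = True
    missing_cols = [c for c in REQUIRED_COLUMNS if c not in batch.columns]
    if missing_cols:
        notify("ingestion-team", f"Missing critical columns: {missing_cols}")
        ok = False
    if len(batch) < expected_rows:
        notify("ingestion-team", f"Row count {len(batch)} below expected {expected_rows}")
        ok = False
    null_ratio = batch.reindex(columns=REQUIRED_COLUMNS).isna().mean().max()
    if null_ratio > MAX_NULL_RATIO:
        notify("data-quality-team", f"Null ratio {null_ratio:.1%} exceeds {MAX_NULL_RATIO:.0%}")
        ok = False
    return ok

# Example batch with a missing reading: the check escalates instead of letting
# garbage flow silently into later stages.
batch = pd.DataFrame({"machine_id": [1, 2], "sensor_id": [10, 11],
                      "timestamp": ["2021-03-02T07:14", "2021-03-02T07:15"],
                      "reading": [0.42, None]})
reconcile(batch, expected_rows=2)
```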

Across all assignments, we evaluated more than 1 TB of data from a diverse sample of IoT machines and assembled robots, with 76+ million operational process data points and 600+ business drivers.

Stage 2: How to travel from Data to Information

After defining the problem statement for predicting failures in the manufacturing area and acquiring data, we began data pre-processing. This is crucial for ensuring that acquired data is clean, complete, and curated. To begin, without context for data, we cannot analyze anything in the past or harness it for future decision-making. We labeled raw data prior to beginning pre-processing with AI algorithms. Data labeling is a time-consuming and laborious procedure that entails data annotation, tagging, classification, moderation, and processing. There are numerous manual techniques; however, we chose to label using machines assisted by machine learning/artificial intelligence technologies. We concentrated on a strategy that is relatively little known but quite effective. Self-supervised learning is a methodology for autonomously labeling data by matching and exploiting connections between various input indicators. Self-supervised learning is ideally suited to changing environments because industrialized models can keep learning in production. At this point, our data has been turned into information.

Figure: Self-supervised learning
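As a minimal illustration of the labeling principle, the sketch below derives pseudo-labels from the relationship between two correlated indicators and trains a classifier on them without any human annotation; the signals and the pretext rule are invented for the example and are not the deployed logic.

```python
# Minimal sketch of self-supervised (pseudo-)labeling: labels come from a
# relationship between input signals, not from human annotators.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
vibration = rng.normal(0.5, 0.1, 5000)                      # hypothetical indicator 1
temperature = 20 + 30 * vibration + rng.normal(0, 1, 5000)  # hypothetical correlated indicator 2

# Pretext task: flag readings where temperature deviates from what vibration alone
# would predict, i.e. exploit the connection between indicators to create labels.
expected_temp = 20 + 30 * vibration
pseudo_labels = (np.abs(temperature - expected_temp) > 2.0).astype(int)

features = np.column_stack([vibration, temperature])
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(features, pseudo_labels)  # trained entirely on machine-generated labels
print("pseudo-label positive rate:", round(pseudo_labels.mean(), 3))
```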

Stage 3: How to leap forward from Information to Knowledge

To achieve the Knowledge state of the data, pre-processing of the data was conducted. This included data normalization, identifying duplicates and quality issues, imputation of missing values, and identifying extreme data points that were outliers. We used distance-based algorithms and genetic algorithms for pattern matching in order to build a golden copy of part IDs through identification, cleaning, and standardization based on placed orders and varied BOMs (bills of materials). Missing values were imputed using machine learning algorithms that predict them from the available data. All of these machine learning algorithms contribute to the principle of continuous quality assurance at each stage of the journey. The entire standardization and quality assurance process was automated based on DevOps principles. Each change was logged, analyzed, and communicated to stakeholders.
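As a simple illustration of ML-based imputation, the sketch below models each incomplete feature as a function of the others using scikit-learn's iterative imputer; the deployed pipeline was richer and domain-specific, so treat this only as the general idea.

```python
# Minimal sketch of ML-based missing-value imputation on a numeric feature matrix.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates the estimator)
from sklearn.impute import IterativeImputer

X = np.array([
    [7.0, 2.0, 3.0],
    [4.0, np.nan, 6.0],
    [10.0, 5.0, np.nan],
    [8.0, 8.0, 9.0],
])

# Each feature with missing values is predicted from the other features, iteratively.
imputer = IterativeImputer(max_iter=10, random_state=0)
X_complete = imputer.fit_transform(X)
print(X_complete)
```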

The historical data for machine failures was skewed, with the minority class (percentage of failures) ranging between 3% and 5%. It would be misleading to feed such data into predictive maintenance systems. Collecting further failure data was also not a possibility. We used advanced machine learning techniques to turn an imbalanced data set into a balanced training dataset, but left the test/validation datasets alone to avoid bias during the validation process.
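A minimal sketch of this balancing step is shown below, using SMOTE oversampling (the technique named later in this chapter) applied to the training split only; the feature data and class ratio are synthetic stand-ins.

```python
# Minimal sketch: oversample only the training split so validation/test data keep
# the true (3-5%) failure rate; the dataset here is synthetic for illustration.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10000, weights=[0.96, 0.04], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)  # training data only
print("before:", Counter(y_train), "after:", Counter(y_bal), "test untouched:", Counter(y_test))
```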

Every dataset has a few extreme observations, and ours was no exception. We had to identify extreme observations that should be ignored when creating models. To prevent outliers from entering the models, we applied approaches such as Cook's distance and other similar strategies. We are now pushing onward to the level of Knowledge.
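For illustration, an outlier screen based on Cook's distance might look like the sketch below; the 4/n cutoff is a common rule of thumb, not necessarily the threshold used in these deployments.

```python
# Minimal sketch of flagging extreme observations with Cook's distance before modeling.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=200)
y[10] += 15  # inject one extreme observation for the example

model = sm.OLS(y, sm.add_constant(X)).fit()
cooks_d = model.get_influence().cooks_distance[0]
outliers = np.where(cooks_d > 4 / len(y))[0]  # common 4/n rule of thumb
print("flagged observations:", outliers)
```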

Additionally, we encountered a cold-start issue wherever new units were deployed and we lacked sufficient data to train our models. Machine learning-based similarity algorithms enabled us to match new objects to old ones based on a variety of criteria. The nearest-matched historical product/machine data was then used to predict, and it was refreshed with each use of the model output via a feedback loop that fed prediction deviations back to the model as a self-learning feature.
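A minimal sketch of the cold-start matching idea follows, assuming a few hypothetical machine descriptors; the real matching criteria were far more varied.

```python
# Minimal sketch of the cold-start workaround: match a newly deployed machine to its
# nearest historical counterpart and reuse that history until enough data accumulates.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical descriptors: [rated load, cycle time, age in months] for known machines.
historical = np.array([[120, 4.5, 36], [80, 6.0, 12], [200, 3.2, 60], [150, 5.1, 24]])
new_machine = np.array([[140, 4.8, 0]])  # no failure history yet

scaler = StandardScaler().fit(historical)
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(historical))
_, idx = nn.kneighbors(scaler.transform(new_machine))
print("borrow history from machine index:", idx[0][0])
```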

The challenge of obtaining representative test data

Although test data is typically included in the initial data set, we were unable to obtain production data due to security, privacy, and local laws. As a result, we used unsupervised deep learning algorithms to generate production-grade data based on production systems without exposing real production data; the generated data nevertheless retained the same patterns as the live production data. Data validation tests are now automated using continuous integration. Anytime a new data-related feature becomes available, it integrates with the master flow via automated pipelines. Following integration, we used automated continuous delivery (CD) pipelines to push the feature to all candidate platforms with the necessary approval and exception processes. This enabled real-time training and validation of the machine learning solutions within the corresponding platform using the necessary dataset. Section 6 provides a deep insight into synthetic data.
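As one illustration of the kind of automated data validation that can run in such a CI pipeline, the sketch below checks that generated data keeps the distribution of the source signal; the two-sample KS test, the stand-in data, and the significance level are assumptions for the example.

```python
# Minimal sketch of an automated CI check that generated data preserves the source
# distribution; both samples here are stand-ins, not real or generated production data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
real_readings = rng.gamma(shape=2.0, scale=1.5, size=5000)       # stand-in for protected data
synthetic_readings = rng.gamma(shape=2.0, scale=1.5, size=5000)  # stand-in for generated data

stat, p_value = ks_2samp(real_readings, synthetic_readings)
if p_value < 0.05:
    print(f"reject generated data: distribution differs from source (p={p_value:.3f})")
else:
    print(f"generated data keeps the source pattern (KS={stat:.3f}, p={p_value:.3f})")
```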

Stage 4: Time to fly towards Wisdom

When we established a relationship between the objective function and various additional variables, we arrived at the Knowledge stage of data. To avoid misleading results, pairwise correlation, scatter plots, and VIF (Variance Inflation Factor) were used to identify correlations between independent features. A set of classification machine learning models helped develop machine intelligence by establishing the trend behind failures over the last 24 months, enabling the fully trained solutions to take on the forward-looking part of the Wisdom phase. When machine learning techniques are used, significant statistics such as true negatives, true positives, false positives, and false negatives are generated, and a confusion matrix is established to help grasp the critical metrics. We built functions to compare these indicators to predefined thresholds and to notify the appropriate stakeholder at the appropriate moment. Once we were satisfied with the trained model's performance on the training data set, we compared it to the validation data set. A considerable difference in metrics between the training and validation datasets generally indicates an overfitting problem, which reduces prediction accuracy on new data sets. We built a function to check for overfitting and automated it to generate alerts in the event of a deviation from the preset overfitting threshold.
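A minimal sketch of these checks is shown below: confusion-matrix statistics are compared to thresholds, and a simple train/validation gap rule stands in for the overfitting function; the threshold values are assumptions, not the ones used in the deployment.

```python
# Minimal sketch of automated metric and overfitting checks with assumed thresholds.
from sklearn.metrics import confusion_matrix, recall_score

MIN_RECALL = 0.80          # assumed business threshold for catching failures
MAX_TRAIN_VAL_GAP = 0.05   # assumed allowed train/validation gap before an overfitting alert

def evaluate(y_train_true, y_train_pred, y_val_true, y_val_pred):
    tn, fp, fn, tp = confusion_matrix(y_val_true, y_val_pred).ravel()
    recall_train = recall_score(y_train_true, y_train_pred)
    recall_val = recall_score(y_val_true, y_val_pred)
    alerts = []
    if recall_val < MIN_RECALL:
        alerts.append(f"validation recall {recall_val:.2f} below threshold {MIN_RECALL}")
    if recall_train - recall_val > MAX_TRAIN_VAL_GAP:
        alerts.append(f"train/validation gap {recall_train - recall_val:.2f} suggests overfitting")
    return {"tn": tn, "fp": fp, "fn": fn, "tp": tp}, alerts

metrics, alerts = evaluate([1, 0, 1, 1], [1, 0, 1, 1], [1, 0, 1, 0, 1], [1, 0, 0, 0, 1])
print(metrics, alerts)
```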
Now we faced a more difficult task: maintaining a high level of quality while progressing toward Wisdom. Here, we forecasted failures for the following 24 hours based on qualified features.
Regular data points were generated by IoT devices, and the solution projected the probability of failure per machine and per plant. Once the model emitted results, an automatic layered notification procedure routed them into three categories. We observed the models deteriorating after a quarter, which was detected by the business team before the technical team. It was time to retrain the model. Using statistical techniques, we found drift in the model's features and created continuous training pipelines for automatic retraining once recalibration is confirmed and accepted.
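For illustration, a feature-drift check of this kind can be sketched with a population stability index (PSI) comparing the training-time reference distribution against recent production data; the PSI metric and the 0.2 cutoff are common conventions assumed here, not necessarily the statistical techniques used in the deployment.

```python
# Minimal sketch of a drift check that could trigger the continuous training pipeline.
import numpy as np

def population_stability_index(reference, recent, bins=10):
    """PSI between a training-time reference distribution and recent production data."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    recent = np.clip(recent, edges[0], edges[-1])  # keep out-of-range values in the end bins
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    new_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log(0) / division by zero
    new_pct = np.clip(new_pct, 1e-6, None)
    return float(np.sum((new_pct - ref_pct) * np.log(new_pct / ref_pct)))

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 10000)   # feature distribution at training time
recent = rng.normal(0.4, 1.2, 2000)       # same feature after a quarter in production

psi = population_stability_index(reference, recent)
if psi > 0.2:  # common rule of thumb: PSI > 0.2 signals a significant shift
    print(f"feature drift detected (PSI={psi:.2f}); trigger the continuous training pipeline")
```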

Stage 5: The Holy Grail of business impact

Finally, we tied outcomes to the business purpose of improving the bottom line and published considerable improvements in decreasing plant/machine shutdowns and optimizing both fixed and variable costs.

What we learnt from these deployments

We observed the importance of instilling continuous quality at each step. In contrast to transactional assignments, we must constantly monitor the accuracy of current models, assess their impact, and determine whether or not to refresh our models with new data. Continuous quality cannot be achieved without intelligent automation powered by advanced machine learning/artificial intelligence algorithms that can learn and optimize themselves with each execution of the solution. Rather than relying on bundled products' drag-and-drop functionality, we strongly encourage writing code-based scripts on open source stacks, since each script can be registered, automated, adjusted, and customized to meet specific needs. Due to their scalability, self-learning capabilities, and statistical approaches, AI/ML techniques are recommended for incorporating continuous quality into the entire solution as well as into individual steps.

As mentioned previously, the first hurdle is correctly labeling the data. Our historical records lacked labels denoting failure and non-failure. We used AI/NLP algorithms to analyze error logs and textual data for key terms and flagged records as failure or non-failure. The next challenge was the scarcity of failure records: the minority class was less than 5%, leaving the data collection imbalanced and skewed toward one class. Such data is unsuitable for AI/ML techniques to produce accurate and dependable results, and a few machines lacked the data needed to train AI/ML models at all. We addressed the imbalanced data set by oversampling with SMOTE algorithms, converting the data set into a balanced one during the training phase only. Additionally, to address the absence of data in a few cases, we generated synthetic data using unsupervised deep learning algorithms (GANs).
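A minimal sketch of the keyword-driven labeling idea follows; the term list and log lines are invented for the example, and the deployed solution used richer NLP than plain pattern matching.

```python
# Minimal sketch of flagging error-log records as failure / non-failure via key terms.
import re

FAILURE_TERMS = re.compile(r"\b(fault|overheat|stall|torque limit|e-stop)\b", re.IGNORECASE)

logs = [
    "2021-03-02 07:14 robot-114 cycle complete, torque nominal",
    "2021-03-02 07:19 robot-114 FAULT: overheat detected on axis 3",
    "2021-03-02 07:25 robot-207 e-stop triggered by operator",
]

labels = [int(bool(FAILURE_TERMS.search(line))) for line in logs]  # 1 = failure record
print(labels)  # [0, 1, 1]
```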

Figure: Quality Engineering during preprocessing and feature engineering

 

The team utilized a variety of classification algorithms on the training data sets, including logistic regression and tree-based techniques such as decision trees and ensembles (Random Forest and XGBoost). After tuning and validating all algorithms many times, we built a bespoke ensemble methodology that weights each method's vote based on the data segment. Following training, models are validated and deployed in production.
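A minimal sketch of a voting ensemble over these algorithm families is shown below; the per-segment vote weighting of the bespoke ensemble is reduced here to static weights, and the dataset is synthetic, both assumptions made for illustration.

```python
# Minimal sketch of a soft-voting ensemble over logistic regression, Random Forest,
# and XGBoost; weights are static here, whereas the real solution tuned votes per segment.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=0)),
    ],
    voting="soft",
    weights=[1, 2, 2],  # assumed weights for the example
)
ensemble.fit(X_train, y_train)
print("validation accuracy:", ensemble.score(X_val, y_val))
```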

Automated acceptance criteria testing and A/B testing were the solution's highlights: rather than creating a single challenger solution, we implemented four challenger models with the purpose of answering the following questions (a comparison sketch follows the list):

  1. When is it necessary to replace an obsolete model with a new one?
  2. How often should we proactively retrain our models?
  3. When is it appropriate to revert to a previous version of the model?
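For illustration, the first question can be reduced to a champion/challenger comparison on a recent scoring window, as sketched below; the metric, the window, and the swap margin are assumptions, not the deployed acceptance criteria.

```python
# Minimal sketch of a champion/challenger replacement decision on recent labeled data.
from sklearn.metrics import f1_score

SWAP_MARGIN = 0.02  # assumed margin the challenger must exceed to replace the champion

def choose_model(y_true, champion_pred, challenger_pred):
    champ_f1 = f1_score(y_true, champion_pred)
    chall_f1 = f1_score(y_true, challenger_pred)
    if chall_f1 - champ_f1 > SWAP_MARGIN:
        return "promote challenger", champ_f1, chall_f1
    return "keep champion", champ_f1, chall_f1

decision, champ_f1, chall_f1 = choose_model(
    [1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 0], [1, 0, 1, 1, 0, 1]
)
print(decision, round(champ_f1, 2), round(chall_f1, 2))
```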

About the author

Jatinder Kumar Kautish

Jatinder K Kautish is a Director in the Artificial Intelligence Offering, an L3 Certified Chief Architect and IAF Certified, working from Hyderabad, India. He is a regular industry speaker at leading conferences and academic institutes. He was awarded 3 AI Innovation Awards in 2020-21 and served on an advisory panel of the Confederation of Indian Industry (Govt. of India) for 2020-21. He has a passion for positioning AI technology at the core of every business and converting solutions into established offerings. Outside of work, you’ll likely find him mentoring academic and NGO technology projects, writing and reciting poems, training ambitious folks in Punjabi folk dance (Bhangra), or enjoying long drives.

About Capgemini

Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organisation of 270,000 team members in nearly 50 countries. With its strong 50-year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fueled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering and platforms. In 2020, the Group reported global revenues of €16 billion.

Get the Future You Want | www.capgemini.com

 

 

 
