However, finding, making and preparing consistent data journeys for complex systems is a multi-step undertaking in which several processes must be resolved in a precise order. Rather than relying on an overworked Ops team to run linear, disparate processes by hand, the individual test data utilities must be combinable in a way that respects the referential integrity of the data.
This can again be handled by model-based approaches, as shown in the figure below. If the individual processes are automated and parameterised, one process can pick up parameters from another. They can then be ordered so that parameters pass from the end of one process to the start of the next, while overlaying rules and constraints ensures that the combined utilities produce complex data accurately.
Figure: Test data processes are executed by blue automation waypoints, while embedded subprocesses contain reusable subflows. This intuitive approach combines test data processes, passing parameters seamlessly from one to another.
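As a rough sketch of this chaining, each utility can be written as a function that accepts a parameter dictionary and returns an enriched one, so the output of one step becomes the input of the next. The utility names below are hypothetical illustrations, not any specific tool's API:

```python
def mask_customer(params):
    # Stand-in for a masking utility: derive a masked ID from the input.
    return {**params, "masked_id": f"MASKED-{params['customer_id']}"}

def subset_orders(params):
    # Stand-in for a subsetting utility: report related rows pulled along
    # with the customer, preserving referential integrity.
    return {**params, "order_count": 3}

def chain(steps, initial_params):
    """Run each utility in order, passing one step's output parameters
    straight into the next step."""
    params = dict(initial_params)
    for step in steps:
        params = step(params)
    return params

result = chain([mask_customer, subset_orders], {"customer_id": 42})
print(result)
```

The point of the pattern is that ordering and parameter passing are declared once, in the chain, rather than being re-analysed by a human between every pair of steps.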
Combining utilities in this way removes the need for a human to analyse the results of one process and identify the next process needed to fulfil a data request. Instead, automated test data utilities can handle variation and move seamlessly from one process to the next. This minimises human intervention, while the ability to respond to differing requests mirrors “intelligent” human decision-making.
One application of this approach lies in automated “Find and Makes”, which can substantially accelerate the fulfilment of data requests. A “Find and Make” searches existing sources for data matching a set of provided criteria. If the data is not found, the criteria are passed as parameters into a data generation job, which creates the missing data. The new data is then added to the test database, where it will be available for future data “finds”.
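The Find-and-Make logic described above can be sketched as a single function: search, and on a miss, generate and store. The in-memory list standing in for the test database and the trivial generation job are assumptions for illustration:

```python
def find_or_make(criteria, database, generate):
    """Search existing rows for a match; if none exists, pass the
    criteria into a generation job and store the result."""
    for row in database:
        if all(row.get(k) == v for k, v in criteria.items()):
            return row  # "find" succeeded
    # "Find" failed: the criteria become the generation job's parameters.
    new_row = generate(criteria)
    database.append(new_row)  # available for future "finds"
    return new_row

# Hypothetical generation job: builds a row straight from the criteria.
def generate(criteria):
    return {**criteria, "generated": True}

db = [{"country": "UK", "status": "active"}]
found = find_or_make({"country": "UK", "status": "active"}, db, generate)
made = find_or_make({"country": "FR", "status": "active"}, db, generate)
```

After the second call, the generated French row sits in the database, so an identical future request is satisfied by a "find" alone.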
Find and Makes can be performed, for instance, using intuitive forms that are accessible to all users. Alternatively, SQL queries might be used to search for data. If sufficient data cannot be found, the query itself is parsed to create new data, constructing values that will satisfy it. For example, the bounds in “greater than” and “between” predicates are used to construct new values.
Figure: An automated data “Find and Make” searches for message data based on parameters provided for a SQL query. If no data is found, the parameters or the parsed query are passed into an automated data generation job.
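To make the query-parsing step concrete, the following minimal sketch derives a satisfying value from a single “greater than” or “between” predicate. It supports only these two patterns as an illustration; a real implementation would parse the full WHERE-clause grammar:

```python
import re

def value_from_predicate(predicate):
    """Construct a value that satisfies one simple SQL predicate."""
    text = predicate.strip()
    # "col > n": any value above the bound satisfies the predicate.
    m = re.fullmatch(r"(\w+)\s*>\s*(\d+)", text)
    if m:
        return m.group(1), int(m.group(2)) + 1
    # "col BETWEEN a AND b": the midpoint of the range satisfies it.
    m = re.fullmatch(r"(\w+)\s+BETWEEN\s+(\d+)\s+AND\s+(\d+)", text,
                     re.IGNORECASE)
    if m:
        low, high = int(m.group(2)), int(m.group(3))
        return m.group(1), (low + high) // 2
    raise ValueError(f"Unsupported predicate: {predicate}")

print(value_from_predicate("amount > 100"))           # ('amount', 101)
print(value_from_predicate("age BETWEEN 18 AND 65"))  # ('age', 41)
```

The generated column/value pairs would then feed a generation job as parameters, exactly as with criteria supplied through a form.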
This approach to performing data “Find and Makes” is well suited to situations where a tester needs data for a set of scenarios. For instance, if they need data for 20 test scenarios but only 18 can be found in a database, data for the two missing scenarios is generated by parsing the SQL query.
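A batch version of this idea is a short loop: check which scenarios already have data, and generate rows only for the gaps. The scenario and row shapes below are hypothetical stand-ins:

```python
# 20 required scenarios, of which 18 already have data in the database.
scenarios = [{"scenario_id": i} for i in range(1, 21)]
database = [{"scenario_id": i, "value": "found"} for i in range(1, 19)]

def fulfil(scenarios, database):
    """Generate rows only for scenarios with no existing data."""
    existing_ids = {row["scenario_id"] for row in database}
    generated = []
    for scenario in scenarios:
        if scenario["scenario_id"] not in existing_ids:
            # Stand-in for a real data generation job.
            row = {**scenario, "value": "generated"}
            database.append(row)
            generated.append(row)
    return generated

missing = fulfil(scenarios, database)
print(len(missing))  # 2
```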
Alternatively, testers might want to generate a full spread of values in a test environment, creating “gold copy” data that is ready for an extensive range of test scenarios. In this instance, model-based approaches to test data generation might be used to construct the data and generate a complete spread of values.