Enterprises house vast amounts of information, and its secure, timely handling is key to their core business, especially where sensitive personal and financial data is concerned. As a result, the enterprise IT developer community has taken on the added role of compliance enforcer and protector of data.
Because most of this information is stored within vast IT systems, many departments struggle to balance application development and testing against the strict compliance requirements imposed by regulations such as Sarbanes-Oxley and HIPAA. Data masking, or de-identification, technology enables developers, quality assurance (QA) teams and database administrators to secure confidential information while still ensuring realistic and effective application testing.
Testing for Bugs
Businesses constantly push IT departments to maintain and modernize key enterprise applications, but many changes can ripple across large areas of the system, well beyond the scope of the new feature being added. To ensure continuity of service, it is crucial to execute test routines that verify the quality of modifications and pinpoint program bugs.
However, test routines will only be as effective as the quality of the data used within the testing process. Artificially created data may be sufficient for initial unit testing but can rarely recreate the unpredictable and unique combination of circumstances that can arise in production, and thus could leave key scenarios untested.
The obvious answer is to derive test data from production data, although building a productive and useful test system may require creating a well-defined subset of that production data. Regardless of its size, the subset can still contain confidential and private information.
Such sets of sensitive data — be it personal, financial or corporate information — are essential for business survival and are therefore at greater risk of a security breach. IT departments have little room for error when confidential material could be exposed during test procedures.
Amid the onslaught of identity and data theft, privacy is under constant public scrutiny and subject to regulations such as Basel II and PCI. With the right specialized tools, safe, de-identified test data can be created through data masking techniques that cleanse the information without compromising its integrity or overtaxing the IT budget.
Quality assurance, regulatory compliance and cost effectiveness all shape how an organization de-identifies data that will not only be used for testing within secure environments but may also be exposed to procedures beyond production systems, such as auditing.
While this can be done manually, manual masking is both time consuming and error prone. To avoid high-cost, labor-intensive methods, organizations implement automated data masking processes, ensuring complete, rigorous, repeatable and consistent test environments and application functionality. The technology also enhances the reliability of test data, improves overall testing and increases quality assurance productivity.
Data masking modules keep live systems running and permit a window into the live data without compromising its integrity or security. Combining a mainframe engine with a client configuration tool, data masking products give organizations a way to read, catalog and store original data that can later be de-identified whenever a new test generation is required.
Through this process, the masked data is made anonymous, whether through obscured views or rearrangement of data order, yet the test environment retains the original data's characteristics, preserving its realistic nature. The result is efficient and accurate intelligence analyses.
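Masking by rearranging data order can be sketched in a few lines. This is a hypothetical illustration, not the vendor's implementation: each sensitive column is shuffled independently across rows, so no row keeps its original combination of identifying values, while every column retains its realistic formats and value distribution.

```python
import random

# Hypothetical sketch: shuffle-based masking. Column names and records
# here are invented for illustration, not taken from any real system.
def mask_by_shuffling(records, columns_to_mask, seed=42):
    """Return de-identified copies of records: values in each sensitive
    column are shuffled across rows, breaking the link between a person
    and their data while preserving each column's characteristics."""
    rng = random.Random(seed)  # fixed seed keeps test generations repeatable
    masked = [dict(r) for r in records]  # work on copies, never live data
    for col in columns_to_mask:
        values = [r[col] for r in masked]
        rng.shuffle(values)  # rearrange data order within the column
        for r, v in zip(masked, values):
            r[col] = v
    return masked

customers = [
    {"name": "Ann Lee", "ssn": "123-45-6789", "balance": 1200},
    {"name": "Bob Roy", "ssn": "987-65-4321", "balance": 310},
    {"name": "Cy Diaz", "ssn": "555-11-2222", "balance": 87},
]
test_data = mask_by_shuffling(customers, ["name", "ssn"])
```

Because the non-sensitive columns are untouched and the shuffled columns still hold real-looking values, queries and test routines behave as they would against production, without exposing any actual customer identity.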
Because many enterprise IT systems run on disparate platforms, the mainframe engine must be able to deposit extracted data into a central repository to ensure across-the-board compliance and a single view of mission-critical data. Information can then be sourced at a later date, enabling the client configuration tools to define rules for generating new databases in a secure test environment.
A further challenge for IT professionals is the sheer volume of data created for testing. Full production copies may be the most accurate test data, but for large-scale projects they are costly and time consuming. Data subset extraction reduces the volume of data in a test environment while sustaining a consistent and coherent subset. Extraction supports queries, variables and analyses against specific databases, cuts the disk storage required to house test data, reduces testing time and ultimately keeps costs down.
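The key property of a useful subset is internal consistency: rows pulled from one table must be accompanied by the rows they reference elsewhere. The following is a minimal sketch under assumed table shapes (the `orders`/`customers` structures and `customer_id` key are invented for illustration), showing how a predicate-driven extract can stay referentially consistent while shrinking the data.

```python
# Hypothetical sketch of data subset extraction: select a slice of a
# large "production" table, then include only the related rows it
# references, so the test subset stays internally consistent.
def extract_subset(orders, customers, predicate):
    """Select orders matching a predicate, plus only the customers
    those orders reference (referential consistency)."""
    order_subset = [o for o in orders if predicate(o)]
    needed_ids = {o["customer_id"] for o in order_subset}
    customer_subset = [c for c in customers if c["id"] in needed_ids]
    return order_subset, customer_subset

orders = [
    {"id": 1, "customer_id": 10, "region": "EU"},
    {"id": 2, "customer_id": 11, "region": "US"},
    {"id": 3, "customer_id": 10, "region": "EU"},
]
customers = [
    {"id": 10, "name": "Ann"},
    {"id": 11, "name": "Bob"},
    {"id": 12, "name": "Cy"},  # unreferenced by the EU subset, so excluded
]

# Extract only the EU slice of the data.
order_sub, customer_sub = extract_subset(
    orders, customers, lambda o: o["region"] == "EU"
)
```

A real extraction tool walks foreign-key relationships across many tables the same way; the payoff is identical: less disk storage, faster test runs and no dangling references in the test database.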
When dealing with mission critical databases, safeguards that ensure efficiency, such as reducing data volume, are vital in their ability to minimize the overall impact on standard business operations.
Data Protection a High Priority
The proliferation of sensitive information in non-secure test environments poses serious risks to a company's customers, its bottom line and its reputation. The protection of personal data has moved up the list of priorities for IT departments, and rightfully so.
Although data testing allows a glimpse into an enterprise’s application system functionality and quality assurance, the danger of exposing critical business assets brings with it the pressing need to mask confidential information. By implementing sophisticated data de-identification technology, organizations are better equipped to handle data testing procedures, increase productivity through application modernization, and comply with regulatory standards to keep key business and customer information out of the hands of criminals.
John Billman is product director at Micro Focus, a provider of enterprise application management and modernization software.