Customer data integration (CDI) hubs have become a strategic driver for organizations that need a unified view of their customers across sales and distribution channels and multiple product lines. However, while CDI initiatives now rank among the top five priorities for most Fortune 500 chief information officers, these projects remain risky ventures, and many fail during implementation.
One of the main reasons for failure is a lack of appreciation for all the factors that affect the scalability of a customer master data hub. Many CDI vendors, for instance, define scalability narrowly, in terms of just one or two factors such as transactions per second or the total number of records loaded. Concerned about their stalled CDI projects, data architects are beginning to review the full set of scalability factors that can affect project viability.
Today it is more critical than ever for vendors and project leads alike to consider every factor that affects scalability throughout the lifecycle of a CDI project, across four stages: building, managing, sharing and extending the customer master data hub.
Scalability Assessment Framework
Many organizations are therefore looking for a CDI Scalability Assessment Framework that reviews all the critical scalability and performance levers and metrics across each of these stages. Such a framework can then serve as a guide to lower the implementation risk of CDI initiatives.
For a CDI team to weigh all these disparate scalability factors, it must be confident that the solution architecture under consideration can adapt as those factors change. One of the primary misconceptions in the CDI category has been the belief that companies must choose between adaptability and scalability in their customer master data hub. As a result, organizations have selected suboptimal solutions based on a single scalability metric (such as transactions per second) and accepted tradeoffs that proved painful to implement and maintain.
Recent CDI benchmarking results have demonstrated that high performance and scalability can be achieved while maintaining an adaptive, extensible architecture.
Therefore, organizations need to review and select an architecture that is adaptable by design — and can take into account all the scalability factors over the master data lifecycle.
One characteristic of an adaptive architecture is a flexible yet proven data model for customer master data entities. This ensures the optimal performance and storage of master data within the CDI hub without forcing an organization to standardize on an inflexible application data model.
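To make the idea of a "flexible yet proven data model" concrete, here is a minimal sketch of one common pattern: a fixed core customer entity extended with organization-specific attributes. The class and field names (`CustomerMasterRecord`, `extensions`) are hypothetical illustrations, not part of any particular CDI product.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerMasterRecord:
    # Core, proven attributes shared by every deployment
    customer_id: str
    full_name: str
    # Extension attributes let each organization add its own
    # fields without altering the core schema
    extensions: dict = field(default_factory=dict)

    def set_extension(self, name: str, value) -> None:
        self.extensions[name] = value

    def get_extension(self, name: str, default=None):
        return self.extensions.get(name, default)

# A bank and a retailer can reuse the same core model while
# each adds different extension attributes:
record = CustomerMasterRecord("C-1001", "Jane Smith")
record.set_extension("risk_rating", "low")
print(record.get_extension("risk_rating"))  # prints: low
```

The core schema stays stable (and therefore tunable for performance), while the extension map absorbs per-organization variation.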
Further, such an approach should be able to aggregate the relevant transaction data virtually for unified customer views. This avoids duplicating data in the customer hub and ensures that organizations can scale to larger record volumes. By comparison, "scalable" data hubs that store all customer data types (including transaction data) in a single operational data hub for fast access can impose onerous data management and poor extensibility.
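The virtual-aggregation pattern described above can be sketched as follows: the hub keeps only master identities and cross-references to source-system keys, and a unified view is assembled at request time by querying the source systems. The source names and keys (`crm_orders`, `billing_invoices`, `hub_xref`) are invented for illustration; real deployments would issue federated queries over the network.

```python
# Transaction detail stays in the source systems, represented
# here as simple in-memory lookups.
crm_orders = {"crm-42": [{"order": "A-1", "amount": 120.0}]}
billing_invoices = {"bill-7": [{"invoice": "I-9", "amount": 80.0}]}

# The hub stores only cross-references, not the transactions.
hub_xref = {
    "C-1001": {"crm": "crm-42", "billing": "bill-7"},
}

def unified_view(customer_id: str) -> dict:
    """Assemble a unified customer view on demand by pulling
    transaction data from each source system."""
    keys = hub_xref[customer_id]
    return {
        "customer_id": customer_id,
        "orders": crm_orders.get(keys["crm"], []),
        "invoices": billing_invoices.get(keys["billing"], []),
    }

view = unified_view("C-1001")
print(len(view["orders"]), len(view["invoices"]))  # prints: 1 1
```

Because the hub holds only identities and cross-references, its storage footprint grows with the number of customers rather than with transaction volume.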
Finally, organizations should consider a solution that can optionally be deployed in distributed hub configurations to meet localized performance and scalability requirements, for example where multiple divisions within a large organization each have their own data governance regime.
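A distributed deployment of this kind can be reduced to a very simple sketch: each division runs its own hub instance with its own governance settings, and requests are routed to the appropriate instance. The endpoints and policy fields below are hypothetical placeholders.

```python
# Each division's hub carries its own governance policy,
# e.g. a local data-retention period.
hubs = {
    "emea": {"endpoint": "hub-emea.example.internal", "retention_days": 365},
    "apac": {"endpoint": "hub-apac.example.internal", "retention_days": 730},
}

def hub_for(division: str) -> dict:
    """Route a request to the division's local hub instance."""
    try:
        return hubs[division]
    except KeyError:
        raise ValueError(f"no hub configured for division: {division}")

print(hub_for("emea")["retention_days"])  # prints: 365
```

The point of the routing layer is that scalability and governance decisions stay local to each division rather than being forced through one central hub.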
So before embarking on a CDI initiative fraught with risk, make sure you leverage a CDI Scalability Assessment Framework that identifies the critical drivers across the master data lifecycle, and demand a solution architecture adaptive enough to accommodate these changing factors.
Click here for Part 2 of Why Customer Data Integration Projects Fail…
Anurag Wadehra is the Vice President of Marketing and Product Management at Siperian, a leading customer data integration and management provider.