This podcast is the second in a three-part series, “Application Transformation: Getting to the Bottom Line.” Through panel discussions, we examine the rationale and likely returns of assessing the true role and character of legacy applications, and then determine the paybacks from modernization.
To gain the most return on modernization projects, many enterprises are separating core from context when it comes to legacy enterprise applications and their modernization processes. As enterprises seek to cut their total IT costs, they need to identify what legacy assets are working for them and carrying their own weight, and which ones are merely hitching a high-cost — but largely unnecessary — ride.
A widening cost and productivity division exists between older, hand-coded software assets and replacement technologies on newer, more efficient standards-based systems. Somewhere in the mix, there are also core legacy assets distinct from so-called contextual assets. There are peripheral legacy processes and tools that are costly vestiges of bygone architectures. There is legacy wheat and legacy chaff.
With us to delve deeper into the high rewards of transforming legacy enterprise applications are Steve Woods, distinguished software engineer at HP, and Paul Evans, worldwide marketing lead on applications transformation at HP. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.
Listen to the podcast (31:00 minutes).
Here are some excerpts:
Paul Evans: This podcast is about two types of IT assets: core and context. That whole approach to classifying business processes and their associated applications was invented by Geoffrey Moore, who wrote Crossing the Chasm, Inside the Tornado, etc.
In Dealing with Darwin: How Great Companies Innovate at Every Phase of their Evolution, he came up with this notion of core and context applications. Core applications are those that provide the true innovation and differentiation for an organization. Those are the ones that keep your customers. Those are the ones that improve the service levels. Those are the ones that generate your money. They are really important, which is why they’re called “core.”
Many of these applications were built to provide core capabilities 5, 10, 15, or 20 years ago. What we have to understand is that what was core 10 years ago may not be core anymore. There are now ways of doing the same thing effectively at a much different price point.
As Moore points out, organizations should be looking to build “core,” because that is the unique intellectual property of the organization, and to then buy “context.” They need to understand, how do I get the lowest-cost provision of something that doesn’t make a huge difference to my product or service, but I need it anyway?
The “context” applications are not less important, but … you should be looking to understand how that could be done in terms of lower-cost provisioning [of them].
Steve Woods: [A lot of the interest in separating core and context in legacy IT applications] has to do with the pain users are going through. We have had customers who had assessments with us before, as much as a year ago, and now they’re coming back and saying they want to get started and actually do something. So, a good deal of the interest is caused by the need to drive down costs.
Also, there’s the realization that a lot of these tools — extract, transform, and load (ETL) tools, enterprise application integration (EAI) tools, reporting, and business process management (BPM) — have proven themselves now. We can no longer say that moving to these tools is risky. Customers realize that the strength of these tools is that they bring a lot of agility, solve skill-set issues, and make you much more responsive to the business needs of the organization. …
What I created at HP is a tool, an algorithm, that can go into any language legacy code and find the duplicate code, and not only find it, but visualize it in very compelling ways. That helps us drill down to identify what I call the “unintended design.” When we find these unintended designs, they lead us to ask very critical questions that are paramount to understanding how to design the transformation strategy. …
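The implementation details of Woods’s clone-finding algorithm aren’t described here, but the general technique of detecting duplicate code by fingerprinting normalized blocks of lines can be sketched as follows. This is a minimal illustration, not HP’s actual tool; the normalization rules, comment handling, and window size are all assumptions for the sake of the example:

```python
import hashlib
import re
from collections import defaultdict

def normalize(line):
    """Crude normalization -- drop comments, collapse whitespace,
    lowercase -- so trivially different copies still match.
    (Stripping from '*' is a rough stand-in for COBOL comment rules.)"""
    line = re.sub(r"\*.*$", "", line)
    return re.sub(r"\s+", " ", line).strip().lower()

def find_clones(sources, window=5):
    """Given {filename: source_text}, return a map of fingerprint ->
    [(filename, start_index), ...] for every `window`-line block of
    normalized code that appears more than once across the corpus."""
    seen = defaultdict(list)
    for name, text in sources.items():
        lines = [normalize(l) for l in text.splitlines()]
        lines = [l for l in lines if l]  # skip blank/comment-only lines
        for i in range(len(lines) - window + 1):
            block = "\n".join(lines[i:i + window])
            digest = hashlib.sha1(block.encode()).hexdigest()
            seen[digest].append((name, i))
    return {h: locs for h, locs in seen.items() if len(locs) > 1}
```

A real tool would work on language-aware tokens rather than raw lines, and would visualize the resulting clone clusters; this sketch only shows why duplicate blocks surface cheaply once code is normalized and hashed.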
When you identify the IT elements that are not core and that could be moved out of handwritten code, you’re transferring power from the developers — say, of COBOL — to the users of the more modern tools, like the BPM tools.
So there is always a political issue. What we try to do, when we present our findings, is to be very objective. You can’t argue with the finding that 65 percent of the application is not doing core work. You can then focus the conversation on something more productive: what do we do with this? The worst thing you could possibly do is take a million lines of COBOL that’s generating reports and rewrite it as hand-written Java or C# code.
We take the concept of core versus context not just to a possible off-the-shelf application, but down to the architectural component level. In many cases, we find that this helps customers identify legacy code that could be moved very incrementally to these new architectures. …
A typical COBOL application — this is true of all legacy code, but particularly mainframe legacy code — can be as much as 5, 10, or 15 million lines of code. The sheer size of the application is an impediment in itself. There is some sort of inertia there. An object at rest tends to stay at rest, and it’s been at rest for years, sometimes 30 years.
So, the biggest impediment is the belief that it’s just too big and complex to move, and even too big and complex to understand. Our approach is a very lightweight process, where we go in and answer a lot of questions, remove a lot of uncertainty, and give them some very powerful visualizations and understanding of the source code and what their options are. …
When you go to the legacy side of the house, you start finding that 65 percent of this application is just doing ETL. It’s just parsing files and putting them into databases. Why don’t you replace that with a tool? The big resistance there is that, if we replace it with a tool, then the people who are maintaining the application right now are either going to have to learn that tool or they’re not going to have a job.
If we get the facts on the table, particularly visually, then we find that we get a lot of consensus. It may be partial consensus, but it’s consensus nonetheless, and we open up the possibilities and different options, rather than just continuing to move through with hand-written code.
Evans: If you look at this whole core-context thing, at the moment, organizations are still in survival mode. Money is still tight in terms of consumer spending. Money is still tight in terms of company spending. Therefore, you’re in this position where keeping your customers or trying to get new customers is absolutely fundamental for staying alive. And you do that by improving service levels, improving your services, and improving your product. …
The line-of-business people are now pushing on technology and saying, “You can’t back off. You can’t not give us what we want. We have to have this ability to innovate and differentiate, because that way we will keep our customers and we will keep this organization alive.”
That applies equally to the public and private sectors. The public sector organizations have this mandate of improving service, whether it’s in healthcare, insurance, tax, or whatever. So all of these commitments are being made, and people have to deliver on them, even though the money — the IT budget behind it — is shrinking or has shrunk.
The leaders must understand what drives their company. Understand the values, the differentiation, and the innovations that you want and put your money on those and then find a way of dramatically reducing the amount of money you spend on the contextual stuff, which is pure productivity. …
Woods: Decentralizing the architecture improves your efficiency and your redundancy. There is much more opportunity for building a solid, maintainable architecture than there would be if you kept a sort of monolithic approach that’s typical on the mainframe. …
The problem is sometimes not nearly as big as it seems. If you look at the clone code that we find, there are many areas where we can look at the code and say that it may not be as relevant to a transformation process as you think it is.
I do a presentation called “Honey, I Shrunk the Mainframe.” If you start looking at these different aspects — the clone code, and what I call the asymmetrical transformation from hand-written code to model-driven architecture — you start really seeing it.
We see this, when we go in to do the workshops. The subject matter experts and the stakeholders very slowly start to understand that this is actually possible. It’s not as big as we thought. There are ways to transform it that we didn’t realize, and we can do this incrementally. We don’t have to do it all at once.
Dana Gardner is president and principal analyst at Interarbor Solutions, which tracks trends, delivers forecasts and interprets the competitive landscape of enterprise applications and software infrastructure markets for clients. He also produces BriefingsDirect sponsored podcasts. Follow Dana Gardner on Twitter. Disclosure: HP sponsored this podcast.