Technical and economic incentives are mounting that make a strong case for modernizing and transforming enterprise mainframe applications, along with the aging infrastructure that supports them.
IT budget planners are using the harsh economic environment to force a harder look at alternatives to inflexible and hard-to-manage legacy systems, especially as enterprises seek to cut their total and long-term IT operations spending.
The rationale around reducing total costs is also forcing a recognition of the intrinsic difference between core applications and so-called context — context being applications that are there for commodity productivity reasons, not for core innovation, customization or differentiation.
With a commodity productivity application, the most cost-effective delivery is on the lowest-cost platform or from an outside provider. The problem is that 20 or 30 years ago, people put everything on mainframes, writing it all in custom code.
The challenge now is how to free up the applications that offer no differentiation, and so do not need to be on a mainframe, and which could run on much lower-cost infrastructure or come from a completely different means of delivery, such as Software as a Service (SaaS).
There are demonstrably less expensive ways of delivering such plain-vanilla applications and services, and significant financial rewards for separating the core from the context in legacy enterprise implementations.
This discussion is the third and final in a series that examines “Application Transformation: Getting to the Bottom Line.” The series coincides with a trio of Hewlett-Packard (HP) virtual conferences on the same subject.
Helping to examine how alternatives to mainframe computing can work, we’re joined by John Pickett, worldwide mainframe modernization program manager at HP; Les Wilson, Americas mainframe modernization director at HP; and Paul Evans, worldwide marketing lead on applications transformation at HP. The discussion is moderated by BriefingsDirect’s Dana Gardner, principal analyst at Interarbor Solutions.
Listen to the podcast (28:46 minutes).
Here are some excerpts:
Paul Evans: We have seen organizations doing a lot with their infrastructure, consolidating it, virtualizing it, all the right things. At the same time, a lot of CIOs or IT directors know that the legacy applications environment has been somewhat ignored.
Now, with the pressure on cost, people are saying, “We’ve got to do something, but what can come out of that and what is coming out of that?” People are looking at this and saying, “We need to accomplish two things. We need a longer term strategy. We need an operational plan that fits into that, supported by our annual budget.”
Foremost is the desire to get away from this ridiculous backlog of application changes, to get more agility into the system, and to enable these core applications, the ones that provide the differentiation and the innovation for organizations, to communicate with a far more mobile workforce.
What people have to look at is where we’re going strategically with our technology and our business alignment. At the same time, how can we have a short-term plan that starts delivering on some of the real benefits that people can get out there? …
These things have got to pay for themselves. An analyst recently looked me in the face and said, “People want to get off the mainframe. They understand now that the costs associated with it are just not supportable and are not necessary.”
One of the sessions from our virtual conference features Geoffrey Moore, where he talks about this whole difference between core applications and context — context being applications that are there for productivity reasons, not for innovation or differentiation.
John Pickett: It’s not really just about the overall cost, but it’s also about agility, and being able to leverage the existing skills as well.
One of the case studies that I like is from the National Agricultural Cooperative Federation (NACF). It’s a mouthful, but take a look at its scale: 5,500 branches and regional offices, making it essentially one of the largest banks in Korea.
One of the items that they were struggling with was how to overcome some of the technology and performance limitations of the platform that they had. Certainly, in the banking environment, high availability and making sure that the applications and the services are running were absolutely key.
At the same time, they also knew that the path to the future was going to be through the IT systems that they had and they were managing. What they ended up doing was modernizing their overall environment, essentially moving their core banking structure from their current mainframe environment to a system running HP-UX. It included the customer and account information. They were able to integrate that with the sales and support piece, so they had more of a 360-degree view of the customer.
We talk about reducing costs. In this particular example, they were able to save (US)$40 million on an annual basis. That’s nice, and certainly saving that much money is significant, but, at the same time, they were able to improve their system response time two- to three-fold. So, it was a better response for the users.
But from a business perspective, they were able to reduce their time to market. They cut the time to develop a new product or service from one month to five days.
If you are a bank and now you can produce a service much faster than your competition, that certainly makes it a lot easier and makes you a lot more agile. So, the agility is not just for the data center, it’s for the business as well.
To take this story just a little bit further, they saw that in addition to the savings I just mentioned, they were able to triple the capacity of the systems in their environment. So, it’s not only running faster and being able to have more capacity so you are set for the future, but you are also able to roll out business services a whole lot quicker than you were previously. …
Another example of what we were just talking about is that, if we shift to the Europe, Middle East, and Africa region, there is a very large insurance company in Spain. It ended up modernizing 14,000 MIPS (million instructions per second). Even though the applications had been developed over decades, they were able to make the transition in a relatively short, three- to six-month time frame.
With that, they saw a 2x increase in their batch performance. It’s recognized as one of the largest batch re-hosts out there. It’s not just an HP thing. They worked with Oracle on that as well to be able to drive Oracle 11g within the environment. …
Les Wilson: In the virtual conferences, there are also two particular customer case studies worth mentioning.
In terms of customer situations, we’ve always had a very active business working with organizations in manufacturing, retail and communications. One thing that I’ve perceived in the last year specifically — it will come as no surprise to you — is that financial institutions, and some of the largest ones in the world, are now approaching HP with questions about the commitment they have to their mainframe environments.
We’re seeing a tremendous amount of interest from some of the largest banks in the United States, insurance companies, and benefits management organizations, in particular.
Second, maybe benefiting from some of the stimulus funds, a large number of government departments are approaching us as well. We’ve been very excited by customer interest in financial services and public sector.
The first case study is a project we recently completed at a wood and paper products company, a worldwide concern. In this particular instance we worked with their Americas division on a re-hosting project of applications written in the Software AG environment. I hope that many of the listeners will be familiar with the ADABAS database and the Natural language. These applications were written some years ago, using those Software AG tools.
The user company had divested one of the major divisions within the company, and that meant that the demand for mainframe services was dramatically lowered. So, they chose to take the residual applications, the Software AG applications, representing about 300-350 MIPS, and migrate those in their current state, away from the mainframe, to an HP platform.
Many folks listening to this will understand that the Software AG environment can either be transformed and rewritten to run, say, in an Oracle or a Java environment, or we can maintain the customer’s investment in the applications and simply migrate the ADABAS and Natural, almost as they are, from the mainframe to an alternative HP infrastructure. The latter is what we did.
By not needing to touch the mainframe code or the business rules, we were able to complete this project in a period of six months, from beginning to end. The user tells us that they are saving over $1 million today in avoiding the large costs associated with mainframe software, as well as maintenance and depreciation on the mainframe environment. …
The more monolithic approach to applications development and maintenance on the mainframe is a model that was probably appropriate in the days of the large conglomerates, where we saw a lot of companies trying to centralize all of that processing in large data centers. This consolidation made a lot of sense, when folks were looking for economies of scale in the mainframe world.
Today, we’re seeing customers driving for a higher degree of agility. In fact, my second case study represents that concept in spades. This is a large multinational manufacturing concern. We will just refer to them as “a manufacturing company.” They have a large number of businesses in their portfolio.
Our particular customer in this case study is the manufacturer of electronic appliances. One of the driving factors for their mainframe migration was … to divest themselves from the large mainframe corporate environment, where most of the processing had been done for the last 20 years.
They wanted control of their own destiny to a certain extent, and they also wanted to prepare themselves for potential investment, divestment, and acquisition, just to make sure that they were masters of their own future. …
Pickett: Just within the past few months, there was a survey by AFCOM, a group that represents data-center workers. It indicated that, over the next two years, 46 percent of the mainframe users said that they’re considering replacing one or more of their mainframes.
Now, let that sink in — 46 percent say they’re considering replacing high-end systems over the next two years. That’s a remarkably high number. So, it certainly points to a trend that we are seeing in that particular environment — not a blip at all.
Dana Gardner is president and principal analyst at Interarbor Solutions, which tracks trends, delivers forecasts and interprets the competitive landscape of enterprise applications and software infrastructure markets for clients. He also produces BriefingsDirect sponsored podcasts. Follow Dana Gardner on Twitter. Disclosure: HP sponsored this podcast.