“The rumors of my death have been greatly exaggerated.” –Mark Twain
Many core systems provided by major vendors, including Fiserv, Jack Henry, OSI, Harland, Metavante and Fidelity, were written in COBOL. COBOL (COmmon Business Oriented Language), one of the earliest “high-level” programming languages, came to life in the late 1950s. It was created by a group of computer professionals known as the Conference on Data Systems Languages (CODASYL). COBOL became the language of choice for building large, complex business systems.
Although an estimated 180 to 200 billion (yes, “b” for billion) lines of COBOL code are still running worldwide today, the death of COBOL-based systems has been predicted for the last 15 years. According to the Gartner Group, 15% of new programming code each year is written in COBOL, largely as functional enhancements to existing systems.
It sure seems that COBOL and the cockroach share a characteristic: both will be around for a very long time. However, the cost of continuing to operate COBOL-based systems is rising and is expected to rise at an increasing rate. Two facts make this prediction convincing. First, the pool of COBOL-skilled programmers is shrinking by 13% each year, according to Gartner. Second, and not surprisingly, the highest-paid programmers are those with COBOL skills.
As systems age, they become more complex and more difficult to maintain and extend. “Difficult” translates to “more expensive (and slower) to add enhancements.” At the same time, in Tom Friedman’s flat world, the pace of change in every industry is accelerating. Banking is no exception. Our clients look longingly at solutions that can respond quickly to their competitive needs.
There will not be a watershed event that signals the end of COBOL and starts a rush to replace these systems. This will occur quietly and slowly over time. Technical strategists for each core provider have been contemplating this issue for years. Their clock is ticking and the sound is getting louder each year, but what to do?
Is a rewrite in order? Can the vendor afford the time, cost and client disruption of a rewrite? Should it acquire a solution that is already using current technology? These are just a few of the thoughts being considered.
Stick with Darwin.
Evolution of a system to keep up with the times is always a viable option. For the software vendor, the process goes something like this…
First, let’s freshen up the applications running on the user desktops, commonly referred to as the presentation layer. Then, since we engineered the applications with a tiered architecture in mind, let’s swap out the middleware for the latest service-oriented architecture (SOA). Finally, let’s tackle the 800-pound gorilla called the mainframe, slowly replacing major sections of data repositories and logic as the cost of maintaining and enhancing the current code becomes prohibitive.
Obviously, this approach tends to be slower and more expensive over time, but given the bumps in the road that most rewrite efforts encounter, it sure does have its appeal. Companies that evolve their systems to keep pace with technological change often garner a great deal of respect in the market if they can keep the advances transparent to their customer base. That is not to say there won’t be disruptions; it is just that companies that execute well, keep the customer advised, and plan for things to go wrong seem to avoid the bad press associated with failed conversions.
The good news – most vendors have been following this approach for years. The bad news – most are now staring at the 800-pound gorilla.
Grow up the kids. (Or buy them!)
As a result of the acquisition sprees we’ve seen over the past few years, most vendors have built out a decent portfolio of offerings that serve their small- to mid-market customers. A large percentage of these systems are already designed in a tiered fashion and utilize the latest technologies, such as .NET, Java, Linux, relational databases and blade servers. With the domain knowledge available in-house and within their customer base, it is natural for vendors to apply the lessons learned from systems built for their larger customers to the “kids” in the product family.
Another force driving the maturity level of these next generation systems is the merger and acquisition activity in our industry in general. As billion-dollar banks transform into 10-billion-dollar banks as a result of a few deals, the core system vendors are put under pressure to implement features required to play at the new market level, or risk losing the customer.
With architectures that support separation of data storage, business logic, and presentation, these systems prove to be a bit more nimble when responding to shifting market demands. As we have all seen in the past few years, it can be much less of a chore to integrate a new third-party offering if our core system has robust middleware and a flexible database supporting it.
Keep the beast alive.
Thanks to increasing investment by companies such as IBM in the global workforce, a newly trained group of mainframe programmers is now entering the market in force in places such as China, India and the Philippines. In 2004, IBM revealed a goal of helping put 20,000 new mainframe-educated students into the workplace by 2010 and has made a multimillion-dollar investment in an academic initiative to encourage mainframe studies. Ten thousand of these students will be from China. And where did IBM choose to unveil its new line of z9 Mainframe computers in late April? You guessed it – China.
For companies that are not prepared to tackle that 800-pound gorilla just yet, reducing the cost of maintenance and countering the brain drain we are facing here in the good old U S of A will become a priority. And what better way to deal with it than hiding it in the closet… a.k.a. outsourcing development to China. I know this is not a popular topic, but for development shops facing the pinch of an increasingly expensive and hard-to-find labor pool, it is an option that must be considered.
So what will be the tipping point for vendors? It will be either cost or peer pressure. We all understand how to analyze the cost side pretty well, but peer pressure can sure be a wild card. For vendors, the question will be: Do you want to be the leader in the space that makes the bold move, or the competitor enjoying the spoils as the first mover struggles to stabilize its new platform? Sounds like a no-brainer until you consider who wins in the long run. My guess is that the smart money will end up backing the company that is intent on keeping up with the technology trends, not the stodgy vendor that has decided “do nothing” is the best strategy!
Now the question near and dear to us all…
What does this mean to bankers?
What it does NOT mean is the sky is falling. What it does mean is it’s necessary to take an active interest in the direction the core vendor is heading. A few questions bankers must ask themselves:
If you have answered no to any of these questions, perhaps it is about time to be more proactively involved with your vendor’s plans. This is all about confidence that your vendor can successfully navigate the changes ahead.