At GonzoBanker, we spend time talking with vendors about “architecture” on a regular basis. It’s not uncommon for us to have a dialogue going with several competing salespeople at once, each one shamelessly dedicated to the proposition that their system is head-and-shoulders better due to its “superior system architecture.” Vendors who offer “open” (or, more accurately, “contemporary”) solutions argue that their “advanced” system and database architectures can provide you with a competitive advantage.
Vendors whose systems are more legacy in nature (though they never use the term “legacy”) correctly assert that the vast majority of banking applications today are running on “enterprise servers” (a.k.a. mainframes) utilizing an operating system first released in the 1960s or early 1970s. These vendors maintain that their systems, when coupled with new middleware tools, are every bit as “open” as any contemporary system on the market today.
We witness a great deal of nose-thumbing among the different vendor architectures. Newer server-based vendors look down upon the legacy providers. The IBM-centered vendors speak ill of the Unisys platform, even though a large number of financial institutions continue to run hardware and databases with roots in the 1960s. Occasionally, we even encounter competing assertions from the same vendor, such as when the IBM zSeries folks disrespect the iSeries folks, who in turn disrespect the pSeries folks. That’s a lot of trash talk to keep track of.
When bankers try to sort out the true value of an architecture, they need look no further than the browsers on their desktops for inspiration. The Internet was originally designed to keep communications flowing even in the event of a nuclear attack that wiped out major elements of our nation’s communications network. Back in those ancient times (the late 1960s), communications ran over dedicated, point-to-point circuits, so the disruption of a single link could compromise the entire network. The Internet, by contrast, was designed to be “survivable”: every message is broken into numerous small, discrete packets, and each packet carries enough information to know where it came from and where it is going, so it can be routed independently. This was quite a break from the accepted methodology of the day, dedicated point-to-point circuits.
The architecture of the Internet has now been refined to work so well that it is commonplace to break voice and video into packets, route the packets willy-nilly across the Web, and re-assemble them so quickly and correctly that listeners and viewers can’t tell the dialogue has been sliced, diced, routed over different paths, and flawlessly re-assembled.
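The slice-route-reassemble idea described above can be sketched in a few lines of Python. This is a toy illustration of the concept, not a real network stack; the function names and the four-character packet size are arbitrary choices for the example.

```python
import random

def packetize(message: str, size: int = 4):
    """Split a message into small packets, each tagged with its sequence number."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort packets back into sequence order and rejoin their payloads."""
    return "".join(payload for _, payload in sorted(packets))

packets = packetize("The check is in the mail.")
random.shuffle(packets)  # simulate packets arriving over different paths, out of order
original = reassemble(packets)
```

Because each packet carries its own sequence number, the receiver can rebuild the message no matter what order the packets arrive in, which is exactly why the design survives the loss of any particular route.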
Was this architecture a success? Measured against its original design objectives, it is an elegant solution, although the very openness and flexibility that achieved those objectives now make the network difficult to secure, so security will remain an issue for the foreseeable future. The technology is now used for purposes far wider in scope and greater in number than its inventors ever anticipated. The same can be said of many of those “legacy” banking architectures—if they hadn’t been so well done, they wouldn’t still be around.
Since these legacy banking system architectures were first conceived more than 30 years ago, what are the factors to be considered when a vendor comes a-calling with promises of future glory for your organization if you will simply install one of these new contemporary systems? What are the myths and the realities—decision factors, if you will—regarding the differences between legacy and contemporary systems? Should you abandon your current system just to get an open architecture? How do they stack up against each other?
First, remember that hardware, operating system, database, and programming language together form a “technology complex.” All of these primary components of an architecture are required, but they evolve at different speeds: some rapidly, others at a snail’s pace. While much of the software in use today was written in aging languages (typically COBOL and RPG), hardware and operating systems are another matter: they have progressed dramatically in computational power, reliability, and cost. Even legacy systems now run on hardware and operating systems every bit as capable and inexpensive as those underlying the newer systems.
Decision: generally a tie between “contemporary” and “legacy.”
Second, databases are an integral component of architecture, but they evolve at a different speed than either hardware or the “business logic” contained in software. Legacy software that has been updated to use a modern database is quite different from legacy software using an older, “native mode” database. Native mode databases were developed for a world where hardware constraints were a bigger problem than software limitations; today’s world is exactly the opposite, with hardware capacity doubling every 18 months or so while software resources remain expensive, difficult, and awkward to manage.
Decision: “Contemporary” databases are far superior to “legacy” databases when it comes to maintainability, flexibility, and feature/functionality, but they may initially cost more.
Third, realize that the newer architectures on the market today were designed to be rapidly and easily modifiable. While these newer solutions are generally not as feature-rich as the older systems, they will catch up rapidly because they use newer technologies both in the software development process and for the databases on which the software runs.
Decision: “contemporary” wins, because the future belongs to the flexible.
Fourth, be aware that even with newer, “rapid development” software technologies, much effort is still required to define business requirements and document business needs before any software can be developed. The probability of a software project running over budget is inversely proportional to the amount of time spent on front-end requirements definition and specification documentation. Moreover, some newer systems are much more complex to install and operate than older, more mature technology.
Decision: “legacy” wins. Those who enter any large development effort without investing in well-defined business requirements and software development specifications will experience deadline problems and cost overruns. Most “contemporary” solutions are still maturing.
Fifth, there is much that is right in legacy software which has been running for years and has well-developed, proven functionality. Even when the languages used are not “developing” rapidly, the business logic and processing rules contained in that legacy software become more robust with every new release. A close examination of any of the newer “open” systems on the market today will reveal that they do not have the breadth and depth of functionality that older legacy systems have. Software is like a teenage child—it takes a while for it to mature, and the maturation process can be rather difficult.
Decision: “legacy” wins.
Finally, remember that middleware can do much to make a legacy system act more “open.” Some of the open architectures in the marketplace today still require custom programming to interface with external or ancillary systems such as loan origination systems, ATM switches, and free-standing data warehouses. Some of the legacy systems in the marketplace include an easy-to-use XML-layer interface that exposes every data element in the database, making access to and from a proprietary database every bit as easy as access to and from an open-architected system. In this respect, overall openness depends on many factors, including how willing a particular vendor is to facilitate easier interfaces between systems.
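To make the XML-layer idea concrete, here is a minimal sketch in Python using only the standard library. The account document and its element names are purely hypothetical; no particular vendor’s schema is implied.

```python
import xml.etree.ElementTree as ET

# A hypothetical response from a legacy core's XML layer. The element and
# attribute names are illustrative only, not any real vendor's schema.
response = """
<account number="1001">
  <type>DDA</type>
  <balance currency="USD">2500.00</balance>
</account>
"""

root = ET.fromstring(response)
account_number = root.get("number")
account_type = root.find("type").text
balance = float(root.find("balance").text)
```

Once every data element is exposed this way, an ancillary system needs only an XML parser to read from the core, regardless of how proprietary the underlying database is.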
There are many critical aspects to consider before changing systems. When evaluating new systems, perform an unbiased, side-by-side comparison of their architectures across the factors discussed above: the technology complex, the database, ease of modification, implementation effort, functional maturity, and openness to interfaces.
Does architecture matter? Absolutely! Choosing the appropriate architecture is an important factor in any system selection decision. And architecture decisions are long-term decisions—most bankers rarely have the opportunity to make them. If your bank has just moved into a new home office built with inadequate plumbing, you’ll be back in front of the board asking for money to rectify that problem much sooner than you would like. Install a system with an outdated architecture and that same uncomfortable future awaits you—and systems are way more costly to correct than plumbing! Architecture will have a major impact on your future capability and flexibility. Because banking systems usually remain in use far longer than planned, it’s important to “do it right” the first time!
For some interesting tidbits about the architecture of the system used to create the original WorldWideWeb browser, see: http://www.w3.org/People/Berners-Lee/WorldWideWeb.html