Legacy Systems

28 WHO'S WHO OF FSI ANALYSIS // LEGACY SYSTEMS

To renovate or innovate? Escaping the legacy conundrum

With increased competition from nimble local and global players, there is an urgent need to upgrade, simplify and enhance our core product and service offerings. However, 'legacy' solutions are often inflexible, difficult and costly to change. Phil Abernathy explores the critical success factors.

The time has come. No longer can organisations delay investment in innovation and competitive advantage. A large percentage of core software systems are more than 20 years old. They were written and first developed in the roaring 80s, when the internet was science fiction and mainframes ruled the roost. Astute new businesses have seized the day and seen the gap. The competition is here, and with force. Whether it is a major multinational from across the ocean that uses the internet as a cheap and easy entry-level platform into the Australian market, or a dynamic brand such as Virgin that decides the lethargy in the financial services sector makes it ripe for the picking, the threat is real. The time has come to change or wither on the vine.

The cost of maintaining and running these archaic systems is rising by the day, and Chief Executive Officers (CEOs) looking to cut IT operating costs are being told by their Chief Information Officers (CIOs) that it's just not possible. The operational cost of running large mainframes is high, and only keeps increasing as systems age. The large mainframe vendors see no reason to reduce costs in the end-of-life phase, as they too see the end in sight. It's time to milk the existing lock-ins for as much as possible, because any discounts or reductions will not delay the inevitable end of their existing revenue stream.

Almost every large financial services organisation has seen the writing on the wall and has decided to change. From both a cost and a competitive advantage point of view, the time has come to change. The previous investment has paid for itself many times over, and the book value of key IT assets has long been zero. The time has come and the decision has been made to innovate or renovate, with just one simple caveat: operational resilience. Keep the existing operations running smoothly with no drop in availability or reliability.

In short, go forth and carry out a heart and lung transplant while we jog along on the current road and, by the way, even before the transplant is complete we would like to start running uphill at an increased pace.

The challenge

Just how do we go about this? Having made the decision to change, what now? This is where the CIO and IT leadership team enter the game. They have been trying to convince executive management for years of the need to change, but the need was not tangible enough. Now, all of a sudden, the pain is being felt and exponential change is mandatory.

However, like the IT systems at the heart of large organisations, the IT leaders have also been around for 20+ years. There has not been much change in the way IT systems have been built and run over the decades and the problems of the present are the result of practices of the past. Newer, dynamic and different approaches are frowned upon and resisted at worst and given only marginal support at best.

Will these practices of the past be sufficient and suitable to engineer the solutions of the future, or will we merely recreate the problems we have, with just a new set of technological clothes? Is it the technology that has got us into the current position, or is it our way of working?

If we hope to avoid reinventing the current problems we have, we must first understand what those problems are.

The problem

What is the problem with our legacy systems, and what's the root cause of the current 'legacy situation'?

But first, what is legacy? One well-known definition of legacy is "everything that works well". We often hear people say, "Let's not touch it, because we don't really know how it works and we don't want to jeopardise that". Another definition is "everything that was implemented last week". However, when we look at the context in which the word is commonly used, we see it linked to certain limitations. Legacy seems to stand for a set of atypical challenges:
• Technical complexity
• High cost of change
• Limited flexibility
• Limited transparency of the inner workings

So what’s the cause of these four challenges? The answer is technical debt. Technical debt is the debt we incur when we incrementally change a situation and don’t invest in re-factoring the existing solution to better fit the change made. Systems and solutions are initially designed for a specific purpose. The design at that time fits the purpose but as the purpose changes over time and small incremental changes are made to suit the changing purpose, we do need to invest in a redesign and sometimes a rebuild to cater for the changes.
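The mechanics of technical debt can be shown in miniature. The sketch below is illustrative Python with entirely hypothetical names (a premium-calculation routine, not anything from a real system): the first version has absorbed years of incremental changes as bolted-on branches, while the second is the refactored equivalent that each of those changes should have funded.

```python
# Illustrative only: a routine after years of incremental change.
# Each new product was bolted on as another nested branch, so every
# new change now means reading (and risking) the whole tangle.
def premium_legacy(product, base):
    if product == "motor":
        return base * 1.10
    else:
        if product == "home":
            return base * 1.20
        else:
            if product == "life":        # added later, never refactored
                return base * 1.35
            else:
                if product == "travel":  # added later again, copy-pasted
                    return base * 1.05
                else:
                    return base

# The refactored equivalent: identical behaviour, restructured so the
# next product is a one-line data change rather than another branch.
LOADINGS = {"motor": 1.10, "home": 1.20, "life": 1.35, "travel": 1.05}

def premium_refactored(product, base):
    return base * LOADINGS.get(product, 1.0)
```

Both versions return the same results today; the debt is the difference in what the next change will cost.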

The root cause of technical debt lies in our way of working, not in the technology or the systems. With our current way of working it's hard to identify and quantify technical debt, and then to justify to the business why investing in it makes good business sense. There is never a business justification to invest in reducing technical debt, as the result does not deliver a visible business benefit in the short term. Like financial debt, if we keep borrowing and never pay back, over time the debt builds up to a crushing level where it is no longer sustainable. Technical debt is what creates the complex spaghetti that is currently our 'legacy systems', and results in a high cost of change, limited flexibility, technical complexity and limited transparency (see figure 1).

[Figure 1: Actual versus optimal cost of change over the years. As technical debt accumulates, the actual cost of change rises well above the optimal, and adaptability falls.]

Furthermore, there are very few legacy systems, if any, with a comprehensive automated test suite that can be run to validate integrity when changes are made. Almost all testing is manual, so after 20 years of incremental change one can imagine the complexity involved in testing legacy systems. Moreover, the people who knew the system inside out have moved on, and the documentation is not good enough to work out what it actually does. Any change carries great risk, as its impact on other areas within the system is not well understood or documented.

To aggravate the problem most IT departments have a testing resource to development resource ratio of 1:5 or worse, thus increasing the technical debt and risk with every passing day. There is no way one tester could thoroughly manually test what five developers produce in a reasonable period of time. This is the single largest cause of delays in time to market with the current solutions.
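One widely used way out of the manual-testing trap is to build a harness of 'characterisation tests' before changing anything: record what the legacy system currently does for a representative sample of inputs, then replay that sample after every change. A minimal sketch in Python, where `calculate_fee` and its inputs are hypothetical stand-ins for a legacy routine:

```python
# Characterisation testing: pin down current behaviour before changing it.
# We do not assert what the system *should* do, only that it still does
# what it did yesterday.

def calculate_fee(balance, tier):
    """Hypothetical stand-in for a legacy routine nobody dares to read."""
    fee = balance * 0.01
    if tier == "gold":
        fee *= 0.5
    return round(fee, 2)

# Step 1: record current outputs for a representative set of inputs.
SAMPLE_INPUTS = [(1000, "standard"), (1000, "gold"), (250_000, "standard")]
baseline = {args: calculate_fee(*args) for args in SAMPLE_INPUTS}

# Step 2: after any change, replay the same inputs and diff against the
# recorded baseline. Any difference is an unintended behaviour change.
def check_against_baseline():
    return {args: (baseline[args], calculate_fee(*args))
            for args in SAMPLE_INPUTS
            if calculate_fee(*args) != baseline[args]}

# An empty dict means behaviour is unchanged and the change is safe to ship.
```

The point is not the three sample inputs but the discipline: once the baseline exists, one tester can re-verify in minutes what previously took days of manual effort.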

So for purposes of this article, legacy is defined as any existing working solution that is costly to change; has a rigid structure and is not flexible to change; is technically complex; and is not transparent as to how the insides really work. Legacy is a system with high technical debt.

A number of the current legacy systems, especially the large mainframe ones in most financial services companies, fall into this category. They were once off-the-shelf packages that have been bought and customised extensively to the point where it may not be feasible to take normal upgrades from the original vendor, if the vendor even exists. In some cases the source code has also been bought and the organisation has taken over full responsibility for all maintenance and support of what was once an off-the-shelf package. As can be expected, this is often more expensive than paying a fixed annual maintenance and support fee to the vendor.

The challenge is to move away from this scenario to a more cost-effective solution that is flexible, costs less to change, and is more transparent in how it really works, thus reducing risk. In addition, one does not want to create the same legacy challenges with the new solution.

To implement reliable, cost effective, flexible and fit-for-purpose solutions that can morph with time we must change the way we work.

The options

There are a number of options for solving this problem:

1. New COTS (Commercial Off-The-Shelf) system
This involves purchasing a new, more modern COTS system to replace one or more legacy systems.

2. As-is rewrite
This involves rewriting the existing legacy system in a more modern language, making it more modular and documenting it better.

3. Wrap and extend
This option looks at wrapping the most complex part and treating it as a black box that will never be changed, while extending the functionality outside the wrapper to cater for the necessary changes and flexibility.

4. Complete upgrade and re-engineer of the existing system
In this case the existing system is completely overhauled, re-engineered and extensively rewritten to make it more flexible, cost effective and less complex.

5. Build a bespoke system from scratch
Here a new fit-for-purpose system is built from scratch.

6. Combi
This is a combination of two or more of the options shown above.

Each of these options has its own merits and demerits that can vary from situation to situation. The problem is not in the option chosen, but in the way it is chosen and implemented. It's the way we work that is the critical success factor.
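Option 3 in particular maps onto a well-known pattern, sometimes called the strangler fig approach: the legacy core is called but never modified, and all new behaviour lives in a wrapper around it. A minimal sketch in Python, with a hypothetical legacy quoting routine standing in for the real black box:

```python
# Illustrative wrap-and-extend sketch. The legacy routine is treated as a
# black box: it is called, never changed.

def legacy_quote(customer_id):
    """Hypothetical black box, e.g. a call into the old mainframe system."""
    return {"customer": customer_id, "premium": 500.0}

class QuoteService:
    """Wrapper: all new behaviour lives outside the black box."""

    def __init__(self, discount_rate=0.1):
        self.discount_rate = discount_rate  # config for the new feature only

    def quote(self, customer_id, loyalty_member=False):
        result = legacy_quote(customer_id)      # untouched legacy core
        if loyalty_member:                      # extension lives out here,
            result["premium"] *= (1 - self.discount_rate)
        result["channel"] = "online"            # as do fields the old
        return result                           # system never knew about

service = QuoteService()
standard = service.quote("C42")
loyal = service.quote("C42", loyalty_member=True)
```

The design choice is that risk is contained: the wrapper can grow, be tested and be replaced freely, while the unreadable core keeps doing exactly what it did before.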

The current way of working

In most multi-million dollar transformations, the decision on how to proceed and which option is suitable is made top down, sometimes far removed from the people who know the systems and challenges best.

As some major ERP vendors like to brag, most of their multimillion dollar sales are closed on the golf course and have nothing to do with detailed cost benefit analysis or option evaluation.

Build or buy decisions are sometimes made far removed from the people who best know the existing systems, or from the ones who have to implement, operate and maintain the new solutions. The decisions are handed down with huge budgets in the tens or hundreds of millions, and large programs are kicked off to execute and implement the chosen option.

This is where the biggest and most costly mistakes are made: at the very start of the initiative! The program is set up to fail: program managers are given the solution, given the deadlines, given the budgets or 'guesstimates', and told to deliver. The deadlines, budgets and even the solutions are sometimes literally conjured up from guesses, wishes, promises and expectations.

Almost every senior IT executive, bald or grey, will have gone through this harrowing experience and will dread the challenge ahead. The Standish reports are littered with horror stories, and a recent report states that any project over $3 million has a 90 per cent chance of failure; yet projects and programs in the $10 million plus range are regularly kicked off in the old manner.

Complex and heavy governance procedures are put in place, with gate-checks and steering committees, in the hope of averting disaster, but major failures still occur. Why? The key reason is that the basic premise, that one can make accurate estimates up front for large pieces of work, is flawed.

Accurate estimates are an oxymoron, yet they are demanded, and steering committees then live in the false comfort of their accuracy. Heads will roll if estimates prove inaccurate, as if the fear this creates were enough to make people estimate accurately. There is a false belief that the heavier the governance structure, the lower the risk of failure. The fact that there is always a funnel of uncertainty in any estimate is conveniently ignored (see figure 2).
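The funnel of uncertainty can be made concrete with a toy model. Assuming, purely for illustration, that the variance band narrows linearly from plus or minus 100 per cent at idea conception to zero at the delivery date, a sketch in Python:

```python
# Toy model of the funnel of uncertainty: the plausible range around a
# point estimate narrows as the project progresses. The linear narrowing
# and the +/-100% starting band are illustrative assumptions, not data.

def estimate_range(point_estimate, progress):
    """Return (low, high) around a point estimate.

    progress: 0.0 at idea conception, 1.0 at the delivery date.
    """
    band = 1.0 * (1.0 - progress)   # +/-100% shrinking linearly to zero
    return (point_estimate * (1 - band), point_estimate * (1 + band))

# A $10m estimate at conception could plausibly mean anywhere from $0 to
# $20m; committing the full budget this early is where risk is locked in.
at_conception = estimate_range(10_000_000, 0.0)
at_kickoff = estimate_range(10_000_000, 0.5)
at_delivery = estimate_range(10_000_000, 1.0)
```

The practical lesson matches the article's funding model: fund only the next phase, re-estimate at each gate check as the band narrows, and never treat the conception-stage number as a commitment.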

A better way of working

So is there a better way of meeting the original business needs, solving the legacy challenges, and reducing risk of failure? Is it possible to carry out this heart and lung transplant with no disruption to business as usual?

To quote an old but true saying: “If we always do things the way we did, we will get what we always got.”

This is where the Agile approach combined with Lean principles comes into play and acts as a game changer, delivering a fit-for-purpose solution faster, more cost effectively, and with reduced risk.

[Figure 2: The funnel of uncertainty. The estimate variance range narrows from plus or minus 100 per cent at idea conception, through project kick-off, to zero at the delivery date.]

Agile is no longer just a development methodology but a way of working that stretches from idea to decommissioning of a system. It spans portfolio and project governance, project execution and operations.

It applies to new solutions, COTS implementations, integration projects and the replacement of legacy systems. It is often stated that Agile is not applicable to all sorts of projects. Why not? Why should waterfall be applicable to all sorts of projects but not Agile?

Agile is a set of social, management and leadership practices, based on a set of values and principles that define a new way of working. When done properly, it is extremely structured and disciplined, lending itself to better governance and incremental funding models that manage risk.

Using Agile in the concept and planning phase is more critical and has more impact than just using it in the build/deliver phase. While Agile in its infancy focused on just the delivery phase, it has now grown and matured to address the more critical start-up phases of an initiative. IT leadership and methods need to make a step change in order not to recreate the problem we are trying to dig ourselves out of. A step change is called for not just in the system landscape but in the approach and way of working.

Outline of the new approach

Rather than deep diving into a detailed requirements analysis, specification, package selection or build, the Agile approach would be to thin-slice the work into two short, dynamic phases before implementation starts.

Phase 1 would be a high-level concept phase and Phase 2 a more detailed initiate phase, with strict governance and decision-making gate checks between them. Both phases include almost the same steps, but at deepening levels of detail.

Another key Lean principle that should be adopted is to make decisions at the last responsible moment. For example, it is not necessary to choose how the project will be implemented before, or at the same time as, establishing why it is being done.

Phase 1 – Concept Phase

Step 1 – Problem
The very first step is to conceptualise the problem. This involves validating and understanding the problem and its root causes, identifying and validating the impact of the problem, and quantifying the pain. A classic mistake made in this step is that it is done in isolation by a few, or sometimes even just one, stakeholder. It is essential that this step is done with ALL key stakeholders, both in the business and in IT. No solutions should be discussed at this stage; the key focus should be to understand the problem, the root causes and their impacts.

Step 2 – Outcomes and features
In this step the desired business outcomes, high-level features and required functionality should be explored. All stakeholders should be present and adequately represented. This means the appropriate people at both senior and intermediate levels need to make time and contribute.

Step 3 – Options
The options should be evaluated at a high level, with their pros and cons, and a short list of preferred options finalised. No detailed costs are needed at this stage, just high-level ball-park ranges.

Step 4 – RAIDS
Risks, Assumptions, Impacts, Dependencies and conStraints should be examined at a high level by all stakeholders. This step is vital in order to make an informed decision at the end of the concept phase.

Step 5 – Plan
A high-level plan is put together showing rough costs, timelines and resources.

For a large legacy replacement this whole phase should take in the region of one to four weeks, depending on the complexity, the number of stakeholders involved, and geographic spread.

There is a clear deliverable at the end of this phase in the form of a project charter, which includes a business case and enough information for senior executive management to make a decision as to whether it is worth pursuing this initiative to the next phase.


They don’t have to decide whether to do the project as a whole – just whether there is business justification to do the next phase. This reduces risk and provides better control and governance.

Phase 2 – Initiate Phase

In this phase the five steps listed above are repeated, but at a deeper level of detail. The features are broken down into stories (requirements) and the short-listed options are further detailed and costed to arrive at a preferred option. High-level architectural designs are done and validated.

Once again ALL stakeholders should be involved from both the business and IT.

Stories are prioritised, estimated and planned into releases. At the end of this phase the deliverable is a release plan with timelines, milestones, an updated cost benefit analysis and a resource plan.

A management checkpoint is essential after this stage and, if given the go-ahead, the delivery phase can start.

In the delivery phase (or, as some call it, the implementation phase), design, build and test are carried out in short, sharp iterations using cross-functional teams and automated testing practices to improve quality and reduce risk.

Tips and tricks

The key to getting out of the legacy conundrum is to appreciate what created the problem in the first place: our original way of working. The following are a few tips and tricks to keep in mind while adopting a new way of working.

• Go Agile from the concept phase, not just for the delivery phase.
• Involve all stakeholders from day one and avoid the temptation to leave a number of them out to save costs or time. The cost of leaving them out increases exponentially with time.
• If deciding to go down the Agile route, follow the book and avoid cutting corners until one is experienced enough to make the 'let's cut this corner' decision.
• Create cross-functional teams of business subject matter experts, architects, analysts, developers and testers across silos, with clear delivery responsibility.
• Encourage collaboration using facilitated workshops instead of meetings, and build in continuous feedback loops through Agile retrospectives.
• Avoid deciding on the solution too early, and avoid making this decision without using the 'wisdom of the crowd'.
• Avoid accumulating technical debt.
• If a COTS solution is chosen, try to configure instead of customise as much as possible. This keeps the system compatible with new releases from the vendor and avoids technical debt.
• If the decision is to customise or develop a solution, it's essential to invest in re-factoring the code to reduce technical debt. This early investment will pay off enormously in the medium to long run.
• Allow designs to evolve instead of attempting to design the perfect solution upfront, as this could lead to a gold-plated, inflexible solution that is costly to change in the future.
• Peer code reviews and technical quality reviews go a long way in ensuring code quality and reducing technical debt.
• Whether a COTS or a bespoke solution is chosen, ensure that a full automated test suite exists around and through the solution. This will reduce the cost of change, improve the time to market and reduce the risk of operational incidents in the future.

It's important to remember that the 'legacy problem' companies find themselves in was created by poor software development practices and disciplines, not by restrictions of old coding languages or other technical system limitations.

Consequently, to avoid getting into the same situation again we need to change the way we work. To escape the legacy conundrum, whether we innovate or renovate, we must take a new Agile approach to the journey.

*Phil Abernathy is an executive coach and consultant with some 30 years of IT and management experience. His speciality is helping executive teams, in IT and the business, deliver more with less using Agile and Lean techniques. He can be contacted at [email protected].
