Avoiding the Pitfalls of New Legacy

An Introduction

This blog is the second in a series on Digital Transformation. The first, written by Siva Yuvaraj, asked "Is your Digital Transformation Journey Future-Proof?"

This blog addresses a common challenge faced by organisations that have performed a cloud migration and yet are struggling to realise the benefits they expected. The question is often asked: why do we face application performance and reliability issues despite migrating to the cloud and modern technologies?

An Analogy

Lumberjack with big ol' axe

Picture a man in a forest, chopping down a tree. He’s swinging his axe, working up a sweat, but making pretty slow progress. A travelling salesman with a secondary interest in graph theory is passing by. He pauses and observes the painstaking labour of the lumberjack. Filled with a capitalist pity for the man’s plight, he lays a hand on his shoulder. “Here,” he says. “Try this.” With these words, he plucks a chainsaw from his load and hands it to the worker.

“Brilliant!” the weary whacker wheezes. “This will expedite my delivery and surely digitise my production line, all while disrupting big data.” He takes a good grip of the chainsaw, lines up the nearest pillar of pine and swings the tool with force. The salesman winces, collects his payment and proceeds on his merry way.

An Application

The point of the above story is that while new technology certainly can improve our lives and speed up our processes, it only does so when used correctly. Unfortunately, a natural tendency when handed a new platform is to shoehorn it into doing things the way we have always done them. A problem with this approach is that it can negate most – if not all – of the benefits that the new tech has to offer in the first place.

Today, a big push we see almost ubiquitously is that of a digital “Cloud Migration”: moving data, systems, and applications from their snug home on-premises to a cloud computing environment such as AWS, Google Cloud or Azure. Executives are promised a utopia of reduced hosting costs, auto-scaling infrastructure and a rapidly diminishing support bill.

Cliched planning involving whiteboards and sticky notes

The engineers are excited. They take a furtive look at the quivering monolith of legacy code creaking on their servers and dream of a world of serverless functions, event streaming and microservices.

The possibilities are endless. But what happens? The monolith gets wrapped with some VM configuration and exported to EC2. The home servers get turned off and they declare the cloud migration a success. The executives pat each other on the back as the salesman winces, collects his payment and proceeds on his merry way.

As you may imagine, they do not realise the lofty hopes of the cloud migration. Sure, there might be some savings in infrastructure hosting, but the support costs and the speed of delivery have not noticeably improved. Why not? Because new legacy was built. Or, perhaps more accurately, old legacy was replatformed, creating new legacy.

From the field we see this all too often. Companies can have huge historic systems hosted in their “closet datacenter”. An ambitious migration project moves the data and all its custom-built services into a cloud account. Virtual machines are spun up and the entire system is dropped in place. Sometimes, and to their credit, digital services using cloud-native features are designed and built to provide access to the data and functionality of the old system. However, since they rely on the monolith of legacy services and data structures, which remain fundamentally unchanged, the performance and reliability of the new technology suffer, and the desired improvements go unrealised.

Choosing a Migration Strategy

The successful planning and implementation of a cloud migration isn’t a one-size-fits-all problem. Every business is unique, and – even within a business – different systems and applications will have their own distinct migration requirements.

Railroad intersection. Abstractly representing different options or something.

Thanks to the over-representation of pirates undergoing digital transformation, there are five or six generally accepted “Rs” of migration. These include Rehosting (lifting and shifting as-is), Replatforming (lifting and shifting with a few optimisations), Repurchasing (finding a COTS solution that does it all for you), Retiring (decommissioning what is no longer needed), Retaining (doing nothing, for now) and Rearchitecting.

Of all these various Rs, rearchitecting is the most likely to provide the largest benefit when migrating a sufficiently legacy system to the cloud. Rearchitecting allows the system to be designed from the ground up with cloud-native tools and features. Monoliths can be broken down into services or microservices, heavy SOAP web services can be simplified into lightweight APIs backed by serverless functions, and giant relational databases can be refactored into robust data platforms.
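To make that last point a little more concrete, here is a minimal sketch of what one such lightweight API might look like: a single serverless function (written here as an AWS Lambda handler sitting behind an API Gateway proxy route) serving one operation that previously lived inside a heavyweight SOAP endpoint. The get_order helper and the orderId path parameter are purely illustrative assumptions, not part of any real system described above.

```python
import json


# Hypothetical data-access helper. In the SOAP-era monolith this lookup would
# have sat behind a heavyweight service layer and a shared relational schema;
# here it stands in for a purpose-built data store owned by this one service.
def get_order(order_id: str) -> dict:
    return {"orderId": order_id, "status": "DISPATCHED"}


def handler(event, context):
    """AWS Lambda entry point, invoked via an API Gateway proxy integration."""
    order_id = (event.get("pathParameters") or {}).get("orderId")
    if not order_id:
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "orderId is required"}),
        }

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(get_order(order_id)),
    }
```

The point is less the specific code than the shape of it: a small, independently deployable function with its own data access, rather than another thin wrapper over the unchanged monolith.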

Rearchitecting may not always be necessary though. Simply because an application is deployed on-premises doesn’t mean that it is legacy or ready for the scrap heap. Conversely, anti-patterns can be developed for cloud services as readily as they can be for more traditional platforms. Like many things in life, the best strategy to follow depends on the specifics of the situation.

In Conclusion

It is generally accepted that legacy systems can be a problem. But legacy doesn’t always mean old. Without proper planning, an application deployed to the very newest and shiniest of platforms can creak and groan like a mainframe monolith.

To avoid building new legacy, thought needs to be given to how the system can take advantage of the benefits offered by the new platform. Sometimes this might require a complete design overhaul; other times perhaps only a few tweaks are required. One thing to keep in mind is that if you are swinging a chainsaw at a tree, you’re probably doing it wrong.
