Three reasons why you need to modernize your legacy enterprise data architecture

Previously published on itproportal.com.

A system must undergo “modernization” when it can no longer adequately address contemporary problems. Many systems that now need overhauling were once the best available options for the challenges of their day. But those challenges were confined to the business, technological, and regulatory environments in which the systems were conceived. Informatica, for example, was founded in 1993, before the internet had become a fixture of enterprise life. It goes without saying that enterprise integration has changed profoundly since then.

One set of systems that desperately needs modernizing is traditional on-premises data architecture. The huge increase in the volume, variety, and velocity of data today is overwhelming legacy systems. At present, legacy data architectures are bending beneath the weight of these three data challenges. Soon they may break.

Volume: Data is getting too big for legacy britches

Cosmic amounts of data glut our world. Every day, 3.5 billion Google searches are conducted, 300 million photos are uploaded to Facebook, and 2.5 quintillion bytes of data are created. IDC predicts global data will grow ten-fold between 2016 and 2025 to a whopping 163 zettabytes.[1] One of SnapLogic’s biotechnology customers processes a remarkable five billion documents a day.[2] And that’s just one company.

Managing these surging volumes of data in an on-premises setting is unsustainable. IT ends up pouring valuable time and resources into purchasing, installing, and managing hardware, and into writing heaps of code to operate the systems in which the data resides (databases, data warehouses, and the like). Organizations that persist with this approach to data management will never achieve the depth of analytics the digital economy demands. They will be like surfers endlessly paddling near the shore without ever getting past the breakers.

Variety: Data is too disparate for rigid legacy systems

Most data was of a similar breed in the past. By and large, it was structured and easy to collate. Not so today. Now, some data lives in on-premises databases while other data resides in cloud applications. A given enterprise might collect data that is structured, unstructured, and semi-structured. The variety keeps widening.

According to one survey, enterprises use around 1,180 cloud services, many of which produce unique data. In another example, we (SnapLogic) integrated over 400 applications for a major enterprise IT firm.

Integrating all this wildly disparate data is, by itself, too great a task for legacy systems. Within a legacy data architecture, you often have to hand-code your data pipelines, which then need repairing as soon as an API changes. You might also have to oversee an amalgam of integration solutions, ranging from limited point-to-point tools to bulky platforms that must be kept alive through custom scripting. These traditional approaches are slow, fraught with complexity, and ill-matched to today's growing variety of data. In practice, legacy systems thwart companies' efforts to use the data they collect.
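To make the fragility concrete, here is a minimal sketch of the kind of hand-coded, point-to-point pipeline described above; the endpoint, field names, and warehouse step are all hypothetical, invented purely for illustration:

```python
# A hypothetical hand-coded, point-to-point pipeline. Every field name
# and endpoint here is an assumption for illustration, not a real API.
import requests

CRM_URL = "https://crm.example.com/api/v1/contacts"  # hypothetical endpoint

def extract_contacts():
    """Pull raw records from the (hypothetical) CRM REST API."""
    response = requests.get(CRM_URL, timeout=30)
    response.raise_for_status()
    return response.json()["contacts"]

def transform(record):
    """Map source fields to warehouse columns, one hard-coded key at a time.
    If the vendor renames 'email_addr' in the next API version, this
    raises KeyError and the whole nightly load fails."""
    return {
        "contact_id": record["id"],
        "email": record["email_addr"],      # brittle: schema frozen in code
        "created": record["created_date"],
    }

def load(rows):
    """Stand-in for a warehouse bulk insert (details omitted)."""
    for row in rows:
        print(f"INSERT INTO contacts VALUES ({row})")

if __name__ == "__main__":
    load(transform(r) for r in extract_contacts())
```

One renamed field in the source API is enough to break the load, and every such pipeline in the organization has to be patched by hand.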

Velocity: Data needs to move faster than legacy systems can handle

In years past, scenarios demanding high-speed data processing were far fewer than they are today. Now, mission-critical operations rely more and more on real-time data processing. Even a 10-second lag in data delivery can pose a threat if you're dealing with, say, “hypercritical” data (data upon which people's health and well-being depend). IDC estimates that 10 percent of all data will be hypercritical by 2025. In some cases, if such data is not processed instantly, the consequences can be dire; air travel, self-driving cars, and healthcare use cases come to mind.

Legacy data architectures struggle to process big data with the speed and consistency that mission-critical situations demand. One reason is that, in an on-premises setting, IT essentially has to guess how much computing power it will need at a given time. Provision too few servers for a “peak-load” event, and the system could suffer an outage. What's more, as the volume and variety of incoming data strain a traditional database management system, that strain hobbles data processing speeds as well.
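A toy back-of-the-envelope model makes the guesswork plain; all of the numbers below are invented for illustration:

```python
# Toy capacity-planning model for the "guess the peak" problem above.
# All figures are made up for illustration.
PROVISIONED_SERVERS = 40          # bought and racked months in advance
EVENTS_PER_SERVER_SEC = 5_000     # assumed per-server throughput

scenarios = {                     # hypothetical event rates (events/sec)
    "typical weekday": 120_000,
    "end-of-quarter batch": 180_000,
    "flash-sale peak": 260_000,
}

capacity = PROVISIONED_SERVERS * EVENTS_PER_SERVER_SEC
for name, load in scenarios.items():
    status = "OK" if load <= capacity else "OVERLOADED -- queueing or outage"
    print(f"{name:>22}: {load:>7,} ev/s vs {capacity:,} capacity -> {status}")
```

The fixed fleet handles the first two scenarios but not the peak, and there is no way to add racked servers on the day the spike arrives.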

Modern enterprise data architecture: Solve today’s problems … and tomorrow’s

By all indications, legacy data architectures are on a path to becoming obsolete. The rate at which this happens will indeed vary by sector. But before long, most, if not all, organizations will be forced to reckon with data challenges for which legacy systems have no answer.

Organizations need to modernize their data architecture to triumph in the fast-paced, big-data world of today. Such a shift is likely even more critical for thriving in the age that’s yet to come.

A modern enterprise data architecture (MEDA) is rooted in the cloud. With a cloud data lake at its core, it frees organizations from pouring resources into non-strategic activities like maintaining servers and acquiring hardware. And it can absorb mountains of incoming data at scale.

Self-service is also a hallmark of a modern data architecture. In this environment, a new crop of low-code data management tools vastly reduces the time spent on basic data manipulation. These tools automate the moving, cleansing, and transforming of data, regardless of the data's format, and the need for tedious manual scripting wanes.
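As a rough illustration of that declarative, self-service style, consider the sketch below; the pipeline spec format and helper functions are hypothetical, not any particular vendor's API:

```python
# A minimal sketch of the declarative style described above.
# The spec format and helpers are hypothetical, not a vendor API.
import csv, json, io

PIPELINE_SPEC = {
    # declare *what* to do; the engine decides *how*
    "source_format": "csv",                     # could equally be "json"
    "rename": {"EMAIL_ADDR": "email", "ID": "contact_id"},
    "drop_if_missing": ["email"],
    "lowercase": ["email"],
}

def read_records(raw, fmt):
    """Parse CSV or JSON into plain dicts so later steps are format-agnostic."""
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(raw)))
    return json.loads(raw)

def run(raw, spec):
    out = []
    for rec in read_records(raw, spec["source_format"]):
        rec = {spec["rename"].get(k, k): v for k, v in rec.items()}
        if any(not rec.get(f) for f in spec["drop_if_missing"]):
            continue                            # cleanse: discard bad rows
        for f in spec["lowercase"]:
            rec[f] = rec[f].lower()
        out.append(rec)
    return out

sample = "ID,EMAIL_ADDR\n1,Ann@Example.com\n2,\n"
print(run(sample, PIPELINE_SPEC))
# -> [{'contact_id': '1', 'email': 'ann@example.com'}]
```

The point is the division of labor: the analyst declares what should happen to the data in a small spec, and the engine handles parsing, renaming, and cleansing across formats without hand-written glue code.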

In such a setting, analysts and data scientists no longer have to devote 80 percent of their day to preparing data. Instead, they can busy themselves with extracting value from data through analytics. Not only that, knowledge workers across the entire organization, not just in IT, are empowered with actionable data. And they’re able to harness it to make high-impact business decisions. Unlike legacy systems, a modern architecture creates immense value out of complex, heterogeneous data.

Finally, a modern architecture is designed to process data in real time, even when faced with dramatic spikes in data traffic. It prevents irksome glitches as well as devastating outages, giving companies confidence as they increasingly rely on high-speed data processing for their most critical operations.

The inexorable rise in the volume, variety, and velocity of data will be the end of legacy systems. If you wait to modernize your data architecture until legacy systems have fully met that end, it may be too late.

To learn more about modern enterprise data architecture, download our ebook, “The State of Modern Enterprise Data Architecture for Big Data Analytics.”


[1] 1 zettabyte = 1 trillion gigabytes

[2] A single document often contains multiple units of data.

Former Chief Data Officer at SnapLogic
