Three Important Considerations for Delivering a Data Mesh


The demand for data has never been greater, especially in this age of artificial intelligence (AI) and generative AI (GenAI). Businesses face escalating pressure for more agile, domain-specific data product development and data management solutions. In response, the concept of a data mesh has emerged as a transformative framework that is redefining how enterprises approach data product ownership, management, and governance.

By empowering business domains to directly manage their own data pipelines and data products, the data mesh framework promises decentralized, self-service capabilities that scale better and deliver faster business outcomes than traditional data architectures, which tend to rely heavily on routing data requests through IT.

For a quick primer, a data mesh is an enterprise data management framework that defines how business-domain-specific data is managed so that the business domains themselves own and operate their data. It empowers domain-specific data producers and consumers to collect, store, analyze, and manage data pipelines and data products with little or no need for an intermediary IT team.

In a recent webinar, we dove deeper into implementing decentralization and self-service. Here are three important considerations we covered for applying the data mesh framework:

1. Logical domains help with achieving decentralization

Data mesh puts the spotlight on decentralization, self-service access, domain data products, and federated computational governance. It fundamentally challenges traditional data architectures, in which there is usually a single production domain and requests for data (whether integrating data, developing pipelines, or creating data products) normally flow through an IT department. That makes the traditional data architecture a rigid, single-production-domain environment that lacks the flexibility to deploy multiple, independent domains, let alone scale those domains to support large numbers of citizen integrators. It is simply too complex to do so.

To the rescue are logical domains. SnapLogic's highly scalable platform enables the deployment of logical domains (a few, a dozen, or a few dozen), which we call Orgs. Each Org logically stands on its own, with its own set of assets. Achieving decentralization becomes dramatically easier.
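To make the idea concrete, here is a minimal, hypothetical sketch of what isolated logical domains might look like in code. The Org class, its fields, and the register_asset method are illustrative assumptions for this post, not SnapLogic's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Org:
    """A logical domain: it owns its own assets and operates independently."""
    name: str
    owners: list[str]
    assets: dict[str, str] = field(default_factory=dict)  # asset name -> asset type

    def register_asset(self, asset_name: str, asset_type: str) -> None:
        # Each domain manages its own pipelines and data products,
        # without routing the request through a central IT queue.
        self.assets[asset_name] = asset_type


# Each business domain gets its own Org with its own assets.
finance = Org(name="finance", owners=["finance-data-team"])
marketing = Org(name="marketing", owners=["marketing-ops"])

finance.register_asset("revenue_pipeline", "pipeline")
marketing.register_asset("campaign_attribution", "data_product")
```

Because every Org carries its own owners and assets, adding a new domain is a matter of creating another isolated unit rather than reworking a single shared production environment.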

2. Federated integration is key to deploying a data mesh

The capability to deploy logical domains is just one important consideration when planning a data mesh framework. Overall, you want to federate all integration necessities into a single infrastructure. This also cuts down on complexity and significantly reduces the time it takes to get a data mesh or data fabric up and running.

By all the integration necessities a data mesh or data fabric will require, I mean a common, super iPaaS infrastructure foundation for deploying data integrations, app-to-app integrations and automations, API-led integrations and data services, and GenAI/AI data connectivity and integration preparation. A one-stop solution that federates these functions will go a long way toward setting you up for success with your data mesh or data fabric deployment.
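As a rough illustration only, the sketch below models a single federated foundation that registers all four integration styles in one place. The FederatedPlatform class and IntegrationType names are hypothetical, assumed for this example rather than drawn from any product API.

```python
from enum import Enum


class IntegrationType(Enum):
    DATA = "data"    # data pipelines (ELT/ETL)
    APP = "app"      # app-to-app integration and automation
    API = "api"      # API-led integration and data services
    GENAI = "genai"  # AI/GenAI data connectivity and preparation


class FederatedPlatform:
    """One shared infrastructure that every domain deploys integrations onto."""

    def __init__(self) -> None:
        self._registry: dict[tuple[str, str], IntegrationType] = {}

    def deploy(self, org: str, name: str, kind: IntegrationType) -> None:
        # All integration styles land on the same foundation, so domains
        # reuse one set of connectivity, monitoring, and governance tooling.
        self._registry[(org, name)] = kind


platform = FederatedPlatform()
platform.deploy("finance", "salesforce_to_warehouse", IntegrationType.DATA)
platform.deploy("marketing", "lead_enrichment_api", IntegrationType.API)
platform.deploy("support", "ticket_summarizer", IntegrationType.GENAI)
```

The point of the sketch is the single registry: when every integration style shares one foundation, standing up a new domain does not mean standing up new infrastructure.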

3. A structure is necessary for a smooth-running self-service model

Every company wants empowered users. No company wants the Wild West and shadow IT.  

Some form of control mechanism is necessary for managing a decentralized structure and self-service. To some, a control point may seem to fly in the face of the spirit of data mesh, a concept that is all about decentralization.

In practical terms, however, some type of orchestrated control (such as self-service policies), whether it comes from IT, a center of excellence, or business-group technical leadership, is necessary to provide a central nervous system for administering and managing the decentralized infrastructure. This is what SnapLogic refers to as managed self-service. If directed from IT, it is an opportunity for IT and business teams to collaborate and deliver mutual benefits. Naturally, the federated integration platform should have security controls and provisioning features that aid in the process. In the long term, enterprises that own and handle data via federated controls, administration, and governance will enhance the reliability and quality of their data, deliver a good experience for users, and keep operations running smoothly.
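For a sense of what managed self-service guardrails might look like, here is a minimal sketch of a policy check applied before a domain deploys a pipeline. The SelfServicePolicy fields, thresholds, and endpoint names are hypothetical assumptions for illustration, not a description of SnapLogic's security and provisioning features.

```python
from dataclasses import dataclass


@dataclass
class SelfServicePolicy:
    """Guardrails a central team defines; domains deploy within them."""
    allowed_endpoints: set[str]
    review_threshold_rows: int


def can_deploy(policy: SelfServicePolicy, endpoint: str, est_rows: int) -> tuple[bool, str]:
    # Domains move fast on their own; the policy acts as the central nervous
    # system that keeps decentralization from turning into shadow IT.
    if endpoint not in policy.allowed_endpoints:
        return False, f"endpoint '{endpoint}' is not on the approved list"
    if est_rows > policy.review_threshold_rows:
        return False, "volume exceeds threshold; request a review from the CoE"
    return True, "deploy permitted under managed self-service"


policy = SelfServicePolicy(
    allowed_endpoints={"snowflake", "s3"},
    review_threshold_rows=10_000_000,
)
print(can_deploy(policy, "snowflake", 250_000))  # (True, ...)
print(can_deploy(policy, "mysql-prod", 1_000))   # (False, ...)
```

In practice, a center of excellence would define the policy once and every domain would deploy against it, which keeps self-service fast without giving up central oversight.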

Modern times require a modern approach to building an infrastructure for data

Remember, as stated at the start, it's all about having the infrastructure in place to meet the growing demand for data, not just for analytics but also for what will be needed to support new AI and GenAI initiatives. While what it is called (data mesh vs. data fabric vs. something else) may remain a subject of industry debate for some time, let's nonetheless embrace the change data mesh promises: organizational domain scalability and democratized data accessibility that combine to drive better business outcomes, faster. This will be especially important in this new era of AI and GenAI.

Learn more about implementing decentralization and managed self-service for your environment. To see these capabilities in action, watch our latest webinar on demand: Data Mesh Drill Down: A Roadmap to Self-Service Data Products and Enterprise Empowerment

Michael Nixon, VP of Cloud Data Marketing at SnapLogic
