A lot has happened since I last checked in! And even more so since my first post. Finally, our integration strategy is coming into focus. I will try my best to capture all the progress we’ve made.
A new week: more clarity on our enterprise IT vision
While we identified the key pillars of our enterprise IT vision, there were a lot of details left unresolved. We met a couple more times to iron those things out. Through these discussions, we not only covered the infrastructure elements, but also accounted for various use cases, current and future data sources, data variety, and data velocity. We involved a variety of business process owners to ensure that we were making assumptions grounded in reality.
A lot of times, the devil is in the details. Having discussed all these aspects of our integration plan puts my mind at ease and makes me confident that we can find a platform that will enable our company in the long term.
Some business owners also brought specific GenAI questions into the discussion. So as we evaluate vendors, we are also looking to answer the following questions:
- How can we leverage GenAI to help us accelerate key business processes, and provide greater access to insights?
- What kind of connectivity do we need to public and private Large Language Models (LLMs)?
- How can we provide the right data to these models to realize the promised ROI from GenAI applications?
The latest: a recap on the recently concluded POC
We followed our vision workshops with a formal RFP to gather information from vendors that could help us build a modern data platform to serve our current and future needs. We got a handful of responses, so we shortlisted three candidates to do a POC with. The four-week POC concluded last week.
Our use cases were varied: they spanned cloud and on-premises environments, required connectivity across databases, data warehouses, and enterprise applications, and covered both batch and real-time workflows. We even incorporated two GenAI use cases that the business leaders are keen to implement in the short term.
We evaluated the vendors on ease of use, seamless onboarding, time to build production-ready workflows, automation capabilities, and performance, across both simple and complex use cases.
On the surface, all three shortlisted vendors had the requisite capabilities, but in practice one vendor stood out for its ease of use across both simple and complex use cases, ease of onboarding, and performance. In fact, we saw that for large data movement use cases to cloud data warehouses, they were 10x-50x faster than the other vendors. It was a bonus that the same vendor supported a diverse set of GenAI use cases, including those the business teams want to implement.
Our choice was SnapLogic’s Platform for Generative Integration.
What’s next? An effortless migration
After the technical evaluation, we will review the commercial terms. But a preliminary look at the pricing tells me we'll likely get started on the platform that won the technical evaluation. Before that happens, there is one more hurdle: migration.
Today, we met internally to plan for this big upcoming migration effort. Our teams have already spent some time on classifying DataStage jobs so that we can estimate the time and effort needed to move the jobs over to the new platform.
The SnapLogic team and their partners, such as EXL, have significant expertise in migration and should be able to estimate the migration cost and effort more accurately. Before a full-scale migration, we will also run a pilot to make sure our assumptions hold.
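To make the job-classification exercise concrete, here is a minimal sketch of how such an estimate could be computed. The complexity buckets, stage-count thresholds, and per-job effort hours below are hypothetical placeholders for illustration, not figures from our actual classification work.

```python
# Hypothetical sketch: bucketing DataStage jobs by complexity to get a
# rough migration-effort estimate. Thresholds and per-bucket effort
# figures are illustrative placeholders, not real benchmarks.

def classify_job(stage_count: int) -> str:
    """Bucket a job by its number of stages."""
    if stage_count <= 5:
        return "simple"
    if stage_count <= 15:
        return "medium"
    return "complex"

# Assumed average migration effort per job, in hours (placeholder values).
EFFORT_HOURS = {"simple": 4, "medium": 12, "complex": 40}

def estimate_effort(stage_counts: list[int]) -> dict:
    """Return job counts per bucket and a total effort estimate in hours."""
    counts = {"simple": 0, "medium": 0, "complex": 0}
    for stages in stage_counts:
        counts[classify_job(stages)] += 1
    total = sum(counts[b] * EFFORT_HOURS[b] for b in counts)
    return {"counts": counts, "total_hours": total}

if __name__ == "__main__":
    # Example inventory: stage counts pulled from a job export.
    inventory = [3, 8, 22, 5, 14, 30, 2]
    print(estimate_effort(inventory))
```

A real exercise would also weigh factors beyond stage count, such as custom routines, shared containers, and scheduling dependencies, which is where a partner's migration experience helps refine the estimate.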
So, while building a modern, scalable data platform is a long journey, I do see a well-lit path toward our stated goal. Finally!
Ready for easier, faster integration? Take a page from Ian’s playbook and compare IBM DataStage with SnapLogic.