According to Vaikom Krishnan, SnapLogic's vice president of engineering:
“Integration must run at the speed of business and it must be unified. Whether you’re moving to the cloud or re-thinking your data architecture with Spark and Hadoop, there’s never been a better time to re-think how you’re going to tackle the age-old problem of connecting your applications and data at scale. Our Winter 2016 release continues to push the boundaries of how integration is defined and delivered in the modern enterprise.”
Today we’re pleased to announce our Winter 2016 release, which aims to further simplify big data integration and give customers greater flexibility to execute data pipelines on the optimal processing framework for a job’s data volume or latency requirements: Spark or MapReduce. Further, SnapLogic alone enables the development of Spark pipelines without coding. This democratizes Spark for in-memory, high-volume data processing by putting it in the hands of non-experts, and it relieves data scientists of much of the data engineering work associated with both MapReduce and Spark so they can focus on gleaning insights from data.
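For context, here is a minimal sketch of the kind of hand-written Spark job that a no-code pipeline designer abstracts away. This is illustrative PySpark, not SnapLogic-generated code; the input path, field positions, and app name are assumptions for the example.

```python
# Illustrative PySpark job: filter and aggregate a CSV of orders.
# A visual pipeline tool produces the equivalent of this without code;
# the paths and field positions here are hypothetical.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("orders-rollup")
sc = SparkContext(conf=conf)

lines = sc.textFile("hdfs:///data/orders.csv")    # assumed input location

totals = (lines
          .map(lambda line: line.split(","))      # parse CSV fields
          .filter(lambda f: f[2] == "COMPLETE")   # keep completed orders
          .map(lambda f: (f[1], float(f[3])))     # (customer_id, amount)
          .reduceByKey(lambda a, b: a + b))       # sum amounts per customer

totals.saveAsTextFile("hdfs:///data/order_totals")
sc.stop()
```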
Some highlights of Winter 2016 include:
- Spark data pipelines – the ability to translate data pipelines into the Spark data processing framework without scripting
- Snappy compression – support for Snappy, an open-source, high-speed compression and decompression library, for big data sources (see the sketch after this list)
- Support for multiple versions of Hadoop – allows SnapLogic to use native libraries on whatever Hadoop implementation customers are running
- A “queued” state for data pipeline tasks
- An expanded library of Snaps
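To give a sense of what Snappy trades off (speed over compression ratio), here is a minimal round-trip sketch using the python-snappy bindings. This is an illustration of the Snappy codec itself, not of how SnapLogic invokes it internally.

```python
# Minimal Snappy round-trip using the python-snappy bindings
# (pip install python-snappy); illustrative only.
import snappy

original = b"row1,alpha\nrow2,beta\n" * 10000   # repetitive sample data

compressed = snappy.compress(original)    # fast, moderate-ratio compression
restored = snappy.decompress(compressed)  # lossless round-trip

assert restored == original
print(len(original), "bytes ->", len(compressed), "bytes")
```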
For more information, visit our Winter 2016 page, featuring full details of the release and additional assets. We’re also hosting a SnapLogic Live session tomorrow (Thursday, February 18th, at 1pm PST / 4pm EST) with a live demo of SnapLogic highlighting these new features and product updates. Register here.