Talend Adds Graphical Tool for Building Data Pipelines

Talend has been on a building and buying spree to bolster its iPaaS offering.

Talend has announced that the Spring 2019 release of Talend Cloud, the company’s integration platform-as-a-service (iPaaS) environment, adds a graphical tool to its portfolio that will make it much simpler to create data pipelines.

Despite all recent advances in automating IT processes, building the data pipelines that applications consume remains one of the most manual processes in all of IT. In fact, while increased adoption of best DevOps practices has led to faster rates of application development, IT operations teams are finding it challenging to keep pace with the demand for increased access to data. As a result, data pipelines have become a bottleneck that winds up thwarting investments in agile development methodologies.

See also: Talend looks to transform data loading by buying Stitch

Pipeline Designer from Talend addresses that issue by making it possible to design data pipelines using both structured and unstructured data, regardless of where that data is stored, across both batch and streaming use cases, says Ray Christopher, a product marketing manager for Talend.

That tool makes it possible to build pipelines using schema-on-read capabilities that eliminate the manual effort associated with integrating multiple data sources, Christopher adds. It also provides a preview mode for debugging those pipelines before pushing them into production, he says.
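Talend has not published code-level details of Pipeline Designer here, but the schema-on-read idea itself is easy to illustrate. The sketch below uses PySpark rather than Talend’s own tooling (an assumption for illustration only), and the file paths and column names are hypothetical: the schemas of two sources are inferred at read time, so they can be joined and previewed without hand-written column definitions.

```python
# Minimal schema-on-read sketch in PySpark -- illustrative only, not Talend's API.
# Bucket names, file paths, and column names below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

# Schemas are discovered at read time rather than declared up front.
orders = spark.read.json("s3://example-bucket/orders/")
customers = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")   # column types inferred from the data
    .csv("s3://example-bucket/customers.csv")
)

# Because both schemas were inferred on read, the sources can be combined
# without manually mapping fields for either one.
preview = (
    orders.join(customers, "customer_id")
          .select("order_id", "customer_name", "total")
)
preview.show(5)  # quick look at the output before the pipeline goes to production
```

The `show(5)` call stands in for the kind of preview-before-production step the product is described as offering; in a real deployment the inferred schemas would still need validation before the pipeline is trusted.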

The need to accelerate the rate at which data can be integrated is further intensified by the rise of artificial intelligence (AI) applications that need to aggregate massive amounts of data from multiple streaming and static data sources. Couple that requirement with the rise of streaming data to drive various classes of application processing in near real time, and it becomes apparent that organizations need to recruit a new class of IT professionals known as data engineers.

“Data engineers are now in more demand than data scientists,” says Christopher.

Assuming an organization can find and retain data engineers or anyone else with data pipeline expertise, it quickly becomes apparent there is a need to maximize the productivity of these individuals across multiple application development and deployment projects.

In fact, going forward, the decision to employ one integration platform over another is likely to have as much to do with the quality of the tools provided to data engineers as with the number of data sources the platform can integrate at scale.

That productivity requirement is, in fact, the primary force behind an ongoing reinvention of how data integration is achieved. On the one hand, organizations are increasing investments in self-service tools to enable end users to integrate some classes of data on their own. In turn, that should free up the IT staff to take on more complex data integration tasks that increasingly involve the processing of streaming data to drive an analytics application in near real time.

One of the biggest issues that organizations face today is a skills shortage that arises from the fact that processing streaming data is fundamentally different from traditional batch processing. Training IT staff to master the nuances of, for example, a Kafka platform requires time, effort, and, frequently, patience.
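To make that contrast concrete, the sketch below shows a bare-bones Kafka consumer written with the kafka-python client (an assumption; the topic name and broker address are hypothetical). Unlike a batch job, the loop has no natural end: records arrive continuously, and the team has to reason about offsets, consumer groups, and delivery semantics rather than files.

```python
# Minimal Kafka consumer sketch (kafka-python) -- illustrative only.
# The topic name, broker address, and field names are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream",                       # hypothetical topic
    bootstrap_servers="localhost:9092",  # assumed local broker
    group_id="analytics-demo",
    auto_offset_reset="earliest",        # where to start if no committed offset
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Unlike a batch job, this loop never "finishes": it blocks and yields
# records one at a time as they arrive on the topic.
for message in consumer:
    event = message.value
    # A real pipeline would enrich or aggregate here; printing just shows
    # the unbounded, record-at-a-time processing model.
    print(event.get("user_id"), event.get("page"))
```

Even this toy example surfaces the concepts (consumer groups, offsets, deserialization) that batch-trained staff typically have to learn from scratch, which is why higher-level tooling matters.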

On the plus side, tools such as Pipeline Designer make it possible to manage all the data pipelines needed to feed next-generation applications at a higher level of abstraction. That’s critical when the data sources feeding those applications are subject to frequent change as analysts and developers move to dynamically incorporate new data sources within applications. Organizations will not tolerate for long a situation in which requests for access to additional data sources take months to fulfill.

Put it all together and it is clear data integration is being utterly transformed. Arguably, the difference between organizations that succeed or fail in the age of digital transformation will come down to the degree to which they can master the art and science of data integration. In most cases, that means providing IT teams and end users alike with tools that make it as simple as possible to organize, manipulate, and wrangle data whenever and wherever required.
