Sponsored by StreamSets

Beyond Mainframe Data Access: The Need for Smart Data Pipelines


Smart data pipelines take data from its source (e.g., a mainframe), put the data into the right format needed by those working with it, and integrate the data with the appropriate systems and tools.

Easy access to powerful cloud platforms is disrupting modern business. Lines of business and data scientists can now run analytics and generate reports on vast volumes of data, elastically scaling compute power up or down with demand and getting results quickly, all without setting up infrastructure.

One highly valuable data source that many want to use resides on mainframes. There are two aspects to using that data. First, the data must be made available and, some might say, liberated from the mainframes. Second, the data must be delivered to cloud systems in a format suited to its intended purpose.

The situation is a bit like making use of water held in reservoirs. The water must first be released, then directed to a destination for its final use. Depending on the specific use case, along the way that water might need to be diverted, converted (e.g., frozen into ice), or combined with other ingredients, as when it goes into a beverage product or concrete.

Freeing up mainframe data

For years, access to mainframe data was a tightly controlled operation. Making the data available for reports or analysis requires great effort. The work is challenging and often takes too long to deliver information while it is still useful to a business whose needs are constantly changing.

Fortunately, that obstacle can be removed with solutions that provide secure access to the data in such a way that it does not place an enormous burden on those responsible for the mainframe environments. This is an area where StreamSets, a Software AG company, can help. Its StreamSets Mainframe Collector helps companies unlock data from the depths of their mainframe systems for cloud analytics. 

StreamSets Mainframe Collector connects to mainframe data sources through a lightweight listener to avoid high additional costs and presents data in a relational format, allowing users to easily find, understand, and include the data in their cloud analytics efforts. To that end, it offers reliable delivery to modern data platforms, including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, Snowflake, and Databricks, at a lower cost and with less effort than alternative solutions.

Enter smart data pipelines

Increasingly, one of the most common use cases for a mainframe data pipeline is to move the data and make it available in cloud systems. For example, the data might be moved into a cloud data platform or cloud data warehouse. Once there, the data can be put to many uses.

To start, users then have highly scalable compute resources to analyze the data. Businesses can quickly ramp up and handle new demands as they emerge. For instance, a business might want to quickly assess the impact of a new promotional campaign by region or type of customer (e.g., online vs. in-store, loyalty club members vs. non-members). That analysis would need to be performed soon after the promotion launched, and it would likely be a one-time effort. Getting the data to the cloud is essential because cloud services are ideal for such bursty workloads.

Going back to the water analogy, once mainframe data is available via the StreamSets Mainframe Collector, businesses want to incorporate it into reporting and analytics workflows. Accomplishing that requires smart data pipelines.

One use of a smart data pipeline is to make streaming data continuously available to analytics or reporting teams. With such needs becoming common, businesses must address the critical steps that take data from its source (e.g., a mainframe), put the data into the right format needed by those working with it, integrate the data with the appropriate systems and tools, and more.

To that end, smart data pipelines are data pipelines with built-in intelligence to abstract away details and automate as much as possible. They transform raw data into data ready for analytics and reporting. They keep data flowing to solve problems and allow business units to make informed decisions.

Anatomy of a smart data pipeline

When a data pipeline is deployed and running, it pulls data from the source, applies rules for transformation and processing, then gets the data to its destination. Let’s take a look at what is involved.

Smart data pipelines must continuously flow data from the source (e.g., a mainframe) to its destination.

To do this, data must first be ingested and loaded into a data pipeline. A smart data pipeline should also handle data integration. Data integration means consolidating data from multiple sources into a single dataset to be used for consistent business intelligence or analytics.
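As a rough illustration of that consolidation step, here is a minimal sketch in Python using pandas. The file names, column names, and rename mapping are hypothetical stand-ins for a mainframe extract and a cloud-application extract; they are not part of any StreamSets API.

```python
import pandas as pd

# Hypothetical extracts: one landed from a mainframe as CSV,
# one from a cloud application as JSON. Names are illustrative only.
mainframe_orders = pd.read_csv("mainframe_orders.csv")   # e.g., ORDER_ID, CUST_NO, AMT
web_orders = pd.read_json("web_orders.json")             # e.g., order_id, customer_id, amount

# Normalize the schemas so the two sources line up.
mainframe_orders = mainframe_orders.rename(
    columns={"ORDER_ID": "order_id", "CUST_NO": "customer_id", "AMT": "amount"}
)

# Consolidate into a single dataset for consistent downstream analytics.
orders = pd.concat([mainframe_orders, web_orders], ignore_index=True)
orders = orders.drop_duplicates(subset="order_id")
```

A production pipeline would run this kind of consolidation continuously rather than as a one-off script, but the core idea is the same: align the schemas, then merge the sources into one dataset.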

Data structure varies significantly depending on the source. A smart data pipeline must transform the data to prepare it for proper use. Depending on the use case, there might be a need for several transformations. Examples include the merging or joining of datasets, converting data types or fields, converting data formats (e.g., JSON to Parquet), masking PII (personally identifiable information) for privacy compliance, and more.
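To make a few of those transformation types concrete, the sketch below applies three of them in pandas: a data-type conversion, PII masking via a one-way hash, and a JSON-to-Parquet format conversion. The input file and column names are hypothetical, and writing Parquet from pandas assumes the pyarrow library is installed.

```python
import hashlib
import pandas as pd

# Hypothetical raw records in JSON form; field names are illustrative.
df = pd.read_json("customer_events.json")

# Convert data types: parse the timestamp field into a proper datetime.
df["event_time"] = pd.to_datetime(df["event_time"])

# Mask PII: replace the email address with a one-way SHA-256 digest so
# analysts can still count distinct customers without seeing identities.
df["email"] = df["email"].apply(
    lambda e: hashlib.sha256(e.encode("utf-8")).hexdigest()
)

# Convert formats: write the cleaned data as Parquet (requires pyarrow).
df.to_parquet("customer_events.parquet", index=False)
```

Hashing rather than deleting the PII field is a common design choice: it preserves the ability to join and de-duplicate on the masked value while keeping the original identity out of the analytics environment.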

Teaming with a technology partner

Businesses can take a do-it-yourself approach to building smart data pipelines. But that requires a staff with the time and skills to bring the various elements of data ingestion, integration, transformation, and more together into a production-level system.

An alternative is to work with a technology partner that offers the right tools, expertise, real-world knowledge, and best practices for creating and maintaining smart data pipelines. StreamSets brings all of that to the table. It combines its StreamSets Mainframe Collector with the StreamSets platform, which helps businesses build the smart data pipelines needed to power DataOps across their operations.

Want to learn more? Visit StreamSets’ blog and read: Mainframe Data Is Critical for Cloud Analytics Success—But Getting to It Isn’t Easy


About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
