Case Study: Delivering Real-Time Data at 100,000 Samples per Second


A provider of edge sensor tech doubles down on its commitment to store and manage customer data streaming in from many device types.

Name of Organization: Gantner Instruments

Industry: Manufacturing

Location: Schruns, Austria

Opportunity or Challenge Encountered: In business since 1984, Gantner Instruments develops and produces edge devices that measure and control various electrical, thermal and mechanical quantities within enterprises across the globe. Gantner’s measurement systems are used by leading companies such as John Deere, Caterpillar, Cummins Diesel, FIAT, Bosch, and Airbus.

Gantner recently developed a solution to monitor and analyze all data sent from devices in real time, expanding its capabilities from data capture into intelligent provisioning of real-time data for monitoring and analytics. The company provides its customers with back-end data storage and the ability to query sensor data such as temperature, acceleration, and tension at high speed from thousands of touchpoints.


Use cases for Gantner’s devices include tracking the load and stress on bridges, measuring the vibrations and displacement of railway tracks, calibrating helicopter test benches, monitoring rocket launch pads, measuring energy generation assets (hydro and conventional), and monitoring renewables (utility-scale solar plants and batteries). Gantner’s edge computing devices monitor and control customers’ assets, with varying input/output signals that enable the measurement of all the connected sensors as well as outbound control signals.

The company provides testing (e.g. engine testing) for a duration of hours or days (with high data rates that can get to 10,000 samples/second); long-term asset monitoring (e.g. container batteries, bridges, structural health) for a duration of years or even decades; and event-based data logging with upwards of 100,000 samples/second for a short period (minutes, usually), which is processed on the edge device with the results — as well as the raw data files — sent to the database.
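The event-based mode described above, where a short 100,000 samples/second burst is processed on the edge device, typically requires keeping a rolling pre-trigger history so the samples leading up to the event survive. As a minimal sketch of that idea (the class name, window sizes, and trigger condition here are illustrative, not Gantner's actual firmware logic):

```python
from collections import deque

class BurstLogger:
    """Sketch of event-based edge logging: keep a rolling pre-trigger
    window and, when a sample crosses the trigger threshold, capture a
    fixed number of post-trigger samples to form one burst."""

    def __init__(self, pre_samples=1000, post_samples=4000, threshold=5.0):
        self.pre = deque(maxlen=pre_samples)  # rolling pre-trigger history
        self.post_samples = post_samples
        self.threshold = threshold
        self.remaining = 0                    # post-trigger samples still to collect
        self.burst = []

    def feed(self, sample):
        """Feed one sample; returns the completed burst when capture
        finishes, otherwise None."""
        if self.remaining:
            self.burst.append(sample)
            self.remaining -= 1
            if self.remaining == 0:
                burst, self.burst = self.burst, []
                return burst
            return None
        if abs(sample) >= self.threshold:     # trigger condition
            self.burst = list(self.pre) + [sample]
            self.remaining = self.post_samples
            return None
        self.pre.append(sample)
        return None
```

The completed burst (pre-trigger context plus the event itself) is what would then be shipped to the database alongside any KPIs computed on-device.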

The company has been collecting and maintaining customer instrumentation data since 1984, says Jürgen Sutterlüti, manager of business development for Gantner. “Of course, since then data logging/collecting technology has evolved tremendously, and we are now positioned for the edge computing era by focusing on high-speed, decentralized, and synchronized data acquisition.” This speed now goes up to measuring “parameters from up to 100,000 sensor inputs per second,” he says. Signals can be analog or digital, and the system understands different industrial protocols. “The system is designed to be flexible and scalable since we have a broad customer base with different variables to measure.”

Gantner’s philosophy “is to provide customers with open interfaces where they can store data and send that data wherever they want,” Sutterlüti adds. “Our customers value this option quite a bit because it relieves them from handling data (and how to store it). Instead, they have easy access to APIs, and all the existing functions they are used to are still available.”

Gantner’s challenge was to achieve the needed scale to store massive amounts of customers’ sensor data and keep it available on a 24×7 basis.

How This Opportunity or Challenge Was Met: To capture, analyze, and store the enormous volume of data that Gantner customers’ use cases require, and to ensure it’s available for applications, Gantner turned to a combination of Apache Kafka (data streaming) and CrateDB (a distributed SQL database built for IoT/industrial use cases). The company uses CrateDB for real-time hot storage and Kafka for cheaper, file-based cold storage. This data stack strategy makes the Gantner Cloud possible.
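The two write paths can be sketched as follows. This is a minimal illustration under assumed names, not Gantner's actual pipeline: the topic name, table name, and column schema are hypothetical, and the real system would hand these artifacts to a Kafka producer and a CrateDB client rather than just construct them.

```python
import json

RAW_TOPIC = "sensor-raw"        # hypothetical Kafka topic for cold/raw data
HOT_TABLE = "sensor_readings"   # hypothetical CrateDB table for hot data

def to_kafka_record(device_id, batch):
    """Cold path: serialize a raw sample batch as one Kafka record,
    keyed by device so all of a device's data lands in one partition."""
    key = device_id.encode()
    value = json.dumps(batch).encode()
    return RAW_TOPIC, key, value

def to_cratedb_bulk_insert(batch):
    """Hot path: build a parameterized bulk INSERT for CrateDB, which
    speaks standard SQL over HTTP or the PostgreSQL wire protocol."""
    stmt = (f"INSERT INTO {HOT_TABLE} (device_id, ts, channel, value) "
            "VALUES (?, ?, ?, ?)")
    rows = [(s["device"], s["ts"], s["channel"], s["value"]) for s in batch]
    return stmt, rows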

Connectivity between edge devices and Gantner core systems is established and maintained through several methods. For example, its engine-testing capability is managed through gigabit intranet connections. Long-term asset monitoring is achieved through 4G modems, and ultimately 5G when it is available. For connections, the company employs WebSockets, full data encryption, two-way control, and remote configuration.
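A WebSocket link like this typically carries JSON frames in both directions: measurements going up, control and configuration commands coming down. As a rough sketch, with field names that are purely illustrative (not Gantner's actual wire format):

```python
import json
import time

def encode_measurement(device_id, channels):
    """Device-to-cloud frame: one timestamped measurement message
    covering several channels. Field names are illustrative only."""
    return json.dumps({
        "type": "measurement",
        "device": device_id,
        "ts": time.time(),
        "channels": channels,   # e.g. {"temp": 21.5, "strain": 0.03}
    })

def decode_control(frame):
    """Cloud-to-device frame: two-way control / remote configuration.
    Returns the command and its parameters, rejecting other frame types."""
    msg = json.loads(frame)
    if msg.get("type") != "control":
        raise ValueError("not a control frame")
    return msg["command"], msg.get("params", {})
```

In the real system these frames would travel over an encrypted (TLS) WebSocket connection; the sketch covers only the message shape.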

Gantner also recently launched a service called GI.cloud, a data-layer solution that advances the monitoring and controlling of systems for more efficient data acquisition, storage, and enrichment. It integrates high-resolution measuring, big data analytics, and data accessibility, and combines edge-type monitoring and control units with an adaptive cloud system and flexible APIs.

“We’re seeing particular growth in distributed and adaptive monitoring and control applications,” says Sutterlüti. “It’s a trend that is requiring better and faster utilization of data streams – where reliable and distributed data acquisition and exploitation is mandatory. And that’s why we’re investing a lot into a scalable cloud backend (GI.cloud) that enables what we describe as an adaptable and scalable platform for high-performance edge computing services.” The platform contains services for connectivity, data transfer (both directions) and authentication down to the specific edge device, he adds. “From a data standpoint – and importantly – we separate between hot and cold data storage. On the cold data/raw data side, we use Apache Kafka for continuous and event-based data stream processing. For hot data we use the time-series database CrateDB.”

Anomaly detection tends to differ by use case. For example, with a drawbridge, Gantner collects data on vibration, pressure, temperature, electrical parameters, angles, strain, and so on. From these signals, the company calculates KPIs and the load of a train on the bridge and stores the results. If thresholds are exceeded, it sends out notifications in real time and creates specific KPI reports for customers (as ensuring the bridge is safe for a train is, of course, imperative). And since the platform spans many use cases, Gantner cooperates with domain experts in the fields where customers run their applications.
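The signals-to-KPIs-to-thresholds flow described here can be sketched in a few lines. This is a deliberately simplified illustration, assuming a made-up calibration factor and hypothetical KPI names, not Gantner's actual load model:

```python
def bridge_load_kpi(strain_readings, calibration=0.85):
    """Illustrative KPI: estimate a load figure from strain-gauge
    readings. The calibration coefficient is a placeholder, not a
    real structural model."""
    return calibration * (sum(strain_readings) / len(strain_readings))

def check_thresholds(kpis, limits):
    """Return the KPIs that exceed their configured limit, so the
    platform can fire a real-time notification for each violation."""
    return {name: value for name, value in kpis.items()
            if name in limits and value > limits[name]}
```

A monitoring loop would compute KPIs per measurement window, call `check_thresholds`, and push any violations to the notification channel while storing the KPI values for the periodic customer reports.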

Benefits from This Initiative: With this data infrastructure, Gantner is able to provide a platform that allows enterprise customers and test-and-certification institutes to capture, monitor, analyze, and react to physical data in real time, regardless of data volume. The benefits are being seen on a case-by-case basis among Gantner’s customers.

For example, Sutterlüti relates, microgrid operators have been “able to improve energy self-consumption by 20 percent” as a result of enhanced monitoring capabilities. In addition, “a hydro energy use case has improved its pumping strategy for more efficiency, where a one-percent gain equals more than $300,000 in savings for each plant.” Monitoring has also reduced maintenance cycles and given the operator insight into asset status, “which saves an additional $200,000 a year.”
In another example, “utility-scale battery lifespans have been improved eight percent, which saves around $200,000 annually,” he says. “Reducing battery degradation, optimizing self-discharge, and enabling faster charging has been shown to cut costs by another $125,000 per year.”

With the more recent launch of the GI.cloud backend, “we have expanded our cloud connectivity service and data storage options,” Sutterlüti continues. “Benchmarking tests for how best to do that for GI.cloud showed that a combination of Kafka and CrateDB would work well for flexibility, low recurring costs, and the ability to process event-based data down to 100,000 samples per second.”

(Source: Gantner Instruments)
