Sponsored by Dell Technologies
Center for Edge Computing and 5G

Extracting Gold From Mountains of Data


Organizations can derive extraordinary value by collecting new data smartly and efficiently, at the right place at the right price, and analyzing that data to extract business gold.

Is there really gold in them hills? Living in Colorado, I ponder that question whenever I head west into the Rocky Mountains. Driving by abandoned mines and tourist spots where families can get out and pan for gold, I cannot help but think there must still be gold waiting to be discovered.

But what’s the cost and risk of getting that gold? Locating it would be hard enough; extracting it and profiting from it, harder still. At some point, the miners of earlier eras decided “the juice wasn’t worth the squeeze” anymore and closed up shop. Despite that, my enthusiastic curiosity makes me wonder how much gold remains undiscovered.

Watch Now: Dell Technologies Edge Solutions (Video)

Today, mining is happening in mountains of newly generated data, extracting nuggets of valuable and profitable information. But at what price? Poor performance and increased cost? There must be value in all of that data, mustn’t there?

Data gravity, the tendency of accumulating data to attract applications and services while becoming ever harder and costlier to move, can take its toll on an organization. With proper planning, organizations can derive extraordinary value by collecting new data smartly and efficiently, at the right place at the right price, and analyzing that data to extract the gold. To do so, we must collect, refine, and reserve.

Collection: The big picture of global data creation, according to Statista, indicates we will be generating 180 zettabytes (180 billion terabytes) of data annually by 2025. Today, most of that data is NOT being stored: only about 2% of the roughly 64 ZB created in 2020, or 1.28 ZB, was retained, but that share is forecast to grow to over 25% by 2025. Combining data creation growth and data storage growth, that is a 3,086% increase in the amount of data stored from 2020 to 2025. Akin to a snowball rolling down a mountain, gaining speed and size, the impact of data gravity, left unresolved, is skyrocketing.

A common way to minimize your data footprint is to apply data reduction technologies such as deduplication, which stores only one copy of repeated content, and compression, which encodes each copy more compactly.
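To make the idea concrete, here is a minimal Python sketch of chunk-level deduplication combined with compression. It illustrates the principle only; the fixed 4 KB chunk size, the SHA-256 fingerprints, and zlib are assumptions made for the example, and production storage systems use far more sophisticated techniques.

```python
import hashlib
import zlib

CHUNK_SIZE = 4096  # fixed-size chunks; real systems often use variable-size chunking

def store(data: bytes, chunk_store: dict) -> list:
    """Split data into chunks, keep only one compressed copy of each
    unique chunk, and return the list of chunk fingerprints (the 'recipe')."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in chunk_store:                   # deduplication
            chunk_store[fingerprint] = zlib.compress(chunk)  # compression
        recipe.append(fingerprint)
    return recipe

def restore(recipe: list, chunk_store: dict) -> bytes:
    """Rebuild the original data from its recipe."""
    return b"".join(zlib.decompress(chunk_store[fp]) for fp in recipe)

chunk_store = {}
payload = b"sensor reading 42\n" * 10_000  # highly repetitive, like much machine data
recipe = store(payload, chunk_store)

stored_bytes = sum(len(c) for c in chunk_store.values())
print(f"logical size: {len(payload):,} bytes, physical size: {stored_bytes:,} bytes")
assert restore(recipe, chunk_store) == payload
```

On the highly repetitive payload above, the physical footprint ends up a tiny fraction of the logical size, which is exactly the effect deduplication and compression deliver on redundant machine-generated data.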

Refining: In algorithmic decision processes, new data enters and is analyzed against rules built by training on previously collected data. As more data is collected, new patterns are identified, generating more precise and personalized results; the algorithms get updated, and the value proposition increases. Where should this refinement occur? At the location where the data is created? Or somewhere else, so that it does not encumber the environment required to provide the highest throughput?
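As a sketch of that refine-as-you-go loop, the following Python example uses scikit-learn’s incremental partial_fit API to update a classifier batch by batch as new data streams in. The synthetic next_batch stream and the choice of model are assumptions made for illustration, not part of any particular product.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])

# A linear classifier that can be refined batch by batch as new data arrives.
model = SGDClassifier(loss="log_loss")

def next_batch(n=200):
    """Stand-in for a stream of new observations (e.g., edge telemetry)."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden rule the model must learn
    return X, y

for step in range(10):
    X, y = next_batch()
    if step > 0:
        print(f"batch {step}: accuracy on fresh data = {model.score(X, y):.2f}")
    model.partial_fit(X, y, classes=classes)  # refine the model with the newest batch
```

Notice that each batch is scored before it is used for training, a common way to track whether the refined model really is getting more precise over time.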

Reserve: Lifecycle management policies are critical to the data gravity discussion. We collected, we refined, but now what? Will this data be useful again, or have we extracted all of its value? The answer will most likely differ by use case. Do you want to compare satellite images over time? Then keeping the image data may be helpful down the road. Did you only need the average of rapid data ingest speeds? Then you can most likely delete the raw data once you have the answer; perhaps it never needed to be stored at all, especially if the average could be measured in real time. The point is that different data types have different values and need to be managed accordingly, creating the least amount of pain, most notably in terms of cost and latency.
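The sketch below shows how such lifecycle rules might be encoded in Python: each data type gets its own hot-tier and archive windows, and an object’s placement is decided from its type and age. The data types and retention windows here are invented for illustration, not recommendations.

```python
import time
from dataclasses import dataclass

DAY = 86_400  # seconds

# Hypothetical retention rules per data type: how long to keep data, and where.
POLICIES = {
    "satellite_image": {"hot_days": 30, "archive_days": 3650},  # keep long term
    "ingest_metric":   {"hot_days": 1,  "archive_days": 0},     # derive, then delete
    "transaction_log": {"hot_days": 90, "archive_days": 365},
}

@dataclass
class DataObject:
    name: str
    data_type: str
    created: float  # epoch seconds

def placement(obj: DataObject, now: float) -> str:
    """Decide an object's tier from its type and age: hot, archive, or delete."""
    policy = POLICIES[obj.data_type]
    age_days = (now - obj.created) / DAY
    if age_days <= policy["hot_days"]:
        return "hot"      # fast, expensive storage near compute
    if age_days <= policy["hot_days"] + policy["archive_days"]:
        return "archive"  # cheap, slow storage
    return "delete"       # usefulness fully extracted

now = time.time()
objects = [
    DataObject("pass_0421.tif", "satellite_image", now - 200 * DAY),
    DataObject("ingest_rate.csv", "ingest_metric", now - 2 * DAY),
]
for obj in objects:
    print(f"{obj.name}: {placement(obj, now)}")
```

Under these rules, the satellite image moves to cheap archive storage, while the ingest metric, its average long since computed, is simply deleted.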

Extracting gold from ore requires a process called gold cyanidation. It is dangerous work because cyanide is poisonous, requiring expensive measures to handle and dispose of the chemicals involved. The methods used and their costs get measured against the revenue from the gold. Are they slowing things down and affecting the business, or are they simply too expensive, reducing the value to the mining company?

As data gravity increases, so does its negative impact on the business, requiring important changes that can mitigate business-limiting latency and data costs.

The sheer number of abandoned mines in my beautiful home state suggests the answer for those still hunting those veins of gold. Most likely, the juice just wasn’t worth the squeeze anymore.

To learn more, visit Dell.com/Edge

Dr. Curtis Breville

About Dr. Curtis Breville

Dr. Curtis Breville is a Principal Analytics Engineer with the Dell Technologies Data Centric Workload Solutions (DCWS) Pre-Sales organization, Office of the CTO Field Ambassador program, and Executive Briefing Center Leadership Bureau member. With over 30 years in IT, including programming, consulting, and management, Dr. Curtis is a liaison between his customers' executive leaders, analytics, and IT teams.
