Inside Intel's Data Strategy

Terabytes of persistent in-memory data enable widespread AI and real-time transactions, with less help from IT.

If Intel has its way, 36 TB of data will be persistently available in memory on each processor – part of what the company describes as the dawn of a new era of data-centric, business-empowering computing that Intel is pioneering internally.

Long before the average enterprise gets around to deploying the newest Intel Xeon processors, the chipmaker is already taking advantage of second-generation Intel Xeon Scalable processors paired with Intel Optane DC persistent memory.

Pioneering Data-as-a-Service Internally

In fact, that capability is about to accelerate a data-as-a-service initiative that Intel first launched in 2017, says Ron McCutchen, vice president of IT and general manager of the office of the CIO for Intel.

See also: Intel Goes Into AI Chip Battle With Itself

That initiative makes data consistently available to lines of business (LOB) teams in a way that not only enables the company to make better decisions faster, but also requires a minimal amount of effort by internal IT organizations, McCutchen says.

As part of that effort, Intel has eliminated many of the islands of data that previously required a lot of IT intervention to simply create reports that could be consumed by the business, he explains. Now, LOB teams can increasingly interrogate a whopping 315 PB of data without intervention by the IT department.

“IT should always be enabling,” says McCutchen. “IT should never be an impediment.”

In 2017, Intel became one of the first companies to appoint a chief data officer, a position held by Aziz Safa, vice president of IT at Intel, and charged that role with crafting the data-as-a-service strategy. In its most recent Intel IT Performance Report, Intel credits that approach with driving an additional $1.2 billion of business value.

As Intel’s data-as-a-service strategy continues to evolve, the company will increasingly operate in near real time. Once transactions and analytics are handled together on Intel Xeon Scalable processors, more processes than ever will be automated using predictive analytics embedded within applications.
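
To make the idea of predictive analytics embedded within an application concrete, here is a minimal sketch. The order fields, the risk_score heuristic, and the 0.8 threshold are invented for illustration and are not anything Intel has described.

```python
# Hypothetical sketch: a predictive score computed inline with the transaction,
# not in a separate, after-the-fact batch analytics job.
from dataclasses import dataclass


@dataclass
class Order:
    customer_id: int
    amount: float
    items: int


def risk_score(order: Order) -> float:
    """Toy stand-in for an embedded predictive model (e.g., a trained classifier)."""
    # Placeholder heuristic: large, single-item orders score as riskier.
    return min(1.0, (order.amount / 10_000) * (1.5 if order.items == 1 else 1.0))


def process_order(order: Order) -> str:
    """Transactional path that consults the model before committing."""
    if risk_score(order) > 0.8:   # the 0.8 cutoff is an illustrative choice
        return "held for review"  # automated decision, no analyst in the loop
    return "approved"


print(process_order(Order(customer_id=42, amount=9500.0, items=1)))  # held for review
print(process_order(Order(customer_id=7, amount=120.0, items=3)))    # approved
```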

In fact, rather than having to process analytics in a data warehouse in batch mode, 36 TB of persistent data on Intel Xeon Scalable processors will enable entire databases to run in memory.
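
As a rough analogy, not Intel's implementation, the sketch below uses Python's built-in sqlite3 module with an in-memory database: transactional writes and an analytical aggregate run against the same live, in-memory data, with no batch extract into a separate warehouse. The sales table and its columns are invented for illustration.

```python
# Illustrative only: an in-memory database serving both writes and analytics,
# with no batch ETL step into a separate warehouse.
import sqlite3

con = sqlite3.connect(":memory:")           # entire database lives in memory
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")

# Transactional writes (OLTP-style)
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 1200.0), ("APAC", 980.0), ("EMEA", 450.0)],
)
con.commit()

# Analytical query (OLAP-style) over the same live, in-memory data
for region, total in con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
):
    print(region, total)
# APAC 980.0
# EMEA 1650.0
con.close()
```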

Transforming Processing, Development, and Deployment

This shift to processing analytics in real time has been a long time coming. Mainframes have been able to process transactions and analytics simultaneously for decades. But only now are commodity x86 processors gaining this capability. That’s significant because it means transactions and analytics will one day soon be processed in real time, from cloud to network edge.

See also: Inside Adobe’s Big Data Strategy

That capability will also play a critical role in making artificial intelligence (AI) pervasively available. AI models can be trained using graphics processing units (GPUs), but the inference engines that enable those AI models to automate a process typically run on x86 processors. As more data becomes available in real time, machine learning algorithms will be able to adapt dynamically, and the rate at which they learn will only accelerate.
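
One way to picture machine learning algorithms adapting as data arrives is incremental (online) learning. The sketch below uses scikit-learn's SGDClassifier with partial_fit purely to illustrate the pattern; the synthetic data stream and batch size are assumptions, and nothing here describes Intel's actual AI stack.

```python
# Illustrative sketch of a model that keeps learning as new data streams in.
# Requires numpy and scikit-learn; the feature layout and labels are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()                        # CPU-friendly linear model
classes = np.array([0, 1])


def next_batch(n=64):
    """Stand-in for a stream of freshly arriving records."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden rule the model must track
    return X, y


for step in range(5):                          # each pass = one batch of new real-time data
    X, y = next_batch()
    model.partial_fit(X, y, classes=classes)   # incremental update, no full retrain
    print(f"step {step}: accuracy on fresh data = {model.score(*next_batch()):.2f}")
```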

The impact these advances will have on business will be profound.

Historically, business leaders have often viewed IT as a constraint on agility. Going forward, IT will soon be able to introduce updates to applications infused with AI capabilities faster than most businesses can absorb them. In fact, most organizations today perceive only the tip of the digital process transformation iceberg. The ability to process and store data in memory will transform the way almost every application is developed and deployed. Freed from the constraint of limited memory, the day is approaching when processing data in batch mode will be remembered as little more than a quaint limitation from a bygone era of computing.

Of course, no organization is going to be able to take advantage of these advances without first implementing a data-as-a-service approach to managing IT. Making that transition is a multi-year effort that requires organizations to first define and then implement modern DataOps processes. Organizations that have not begun that transition are already in danger of being rendered obsolete by agile rivals that are savvier not just about managing data, but about monetizing it in real time.
