Micro Focus aims to close the functionality gap between managing capacity needs locally and in the cloud by separating storage from compute.
Cloud computing has always offered a lot of potential for driving real-time insights from data, but tapping into that potential has proven challenging. It may be inexpensive to store lots of data in the cloud; given performance requirements, however, many applications still need to run in a local data center.
Micro Focus is trying to close that gap with the general availability of version 9.1 of its Vertica Analytics Platform, which now includes an Eon Mode that enables IT organizations to transparently separate compute and storage resources.
Initially designed to be compatible with the S3 cloud storage service developed by Amazon Web Services (AWS), Eon Mode will be extended by Micro Focus to support multiple cloud platforms. The basic idea is to be able to spin up analytics applications locally that can access data residing in a public cloud as if it were local to the Vertica Analytics Platform, says Jeff Healey, senior director for Vertica at Micro Focus.
The platform also includes a caching capability for applications that access time series, geospatial and pattern-matching functions, says Healey.
In addition, Healey notes, IT organizations can employ the Vertica Analytics Platform to replicate data across what amounts to a federated instance of a columnar database, a capability enabled by advanced machine learning algorithms.
Interest in analyzing massive amounts of data has been on the rise for years, but the cost of storing all that data locally has been prohibitive. Running analytics applications in the cloud, however, often introduces unacceptable levels of latency when application workloads are accessed over a wide area network (WAN). Eon Mode addresses that issue by making the cloud storage service a logical extension of a local instance of the Vertica Analytics Platform.
“It makes it easier to scale,” says Healey. “Organizations now have complete flexibility.”
The degree to which that flexibility influences where data winds up being housed remains to be seen. It’s clear that the cost of moving data into the cloud has been a major inhibitor to adopting public clouds. But if it becomes easier both to replicate data into a public cloud and to access that data without incurring egress fees, then the amount of data that moves from on-premises environments into a public cloud might increase substantially. Today, most IT organizations are content to let the laws of data gravity dictate where data winds up being stored. Data that gets generated in the cloud stays in the cloud. Data that gets generated on-premises stays local, except for copies of that data created for backup and recovery purposes.
Centralizing all that data, however, is likely to become a more pressing issue as organizations look to deploy artificial intelligence (AI) applications based on machine and deep learning algorithms. Those algorithms are only effective when they can be applied to a massive amount of data.
In the meantime, Micro Focus is betting that a more flexible approach to analytics will lead to better insights once organizations no longer have to focus so much on where data happens to reside at any given point in time.