When Data Centers Go Dark: Why File Recovery Software Is Vital To Cloud Users

Everyone using cloud storage and computing now knows the potential for outages. To mitigate the risks, the right recovery tools are essential.

When a lightning strike hit a Microsoft data center in San Antonio, Texas in early September, the resulting power surge and the failure of the facility’s cooling system forced the data center to power down, and Microsoft’s Azure cloud computing platform went dark throughout the region [1].

Microsoft users in the region and around the world experienced issues with Office 365, and for the next few hours couldn’t access many Azure services, including Application Insights, web and mobile apps, and the IoT Hub. Even the Azure service status monitor was knocked offline.

See also: Nokia, MIRIS team up for renewable urban data centers

Aside from causing headaches for users and Microsoft engineers and knocking systems offline for several hours, the incident is a reminder of how important it is for any organization using cloud-based computing and storage to maintain redundancy and recovery capabilities for just such an event.

And occur they will – more often than ever before, as organizations of all sizes rely more and more on cloud systems and storage services. In fact, MicroStrategy’s 2018 Global State of Analytics report found that 71% of enterprises expect to accelerate their investment in data and analytics by 2020, and that 41% are considering moving to entirely cloud-based computing within the next year [2].

Shifting to cloud-based computing certainly makes sense for organizations employing big data analytics, as the cloud’s flexibility and capacity are particularly well suited to such use. But as the shift continues, we’ll see many more incidents like the one at Microsoft in San Antonio. Such incidents underscore the need for data recovery tools if organizations are to be safeguarded against both outages and the problems that cloud storage can create for big data and analytics.

The cloud’s distributed nature makes storage performance difficult on its own, and the demands of big data analysis leave many organizations with systems bogged down by serious performance issues. Many clouds simply do not have the storage performance needed for heavy analytics workloads, which place hefty demands on both the compute layer and system storage.

Many organizations now run big data workloads continuously, and administrators are finding that their systems demand stronger infrastructure and more computing power. Most cloud networks simply don’t have the capacity to move the terabytes or even petabytes that big data analysis requires, at the speed it requires. These overtaxed storage systems pose a constant risk of data errors or system malfunction.
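For administrators who want a quick sanity check, one rough way to tell whether a given storage tier keeps up with an analytics workload is to measure its sequential read throughput against the bandwidth the job is expected to need. The Python sketch below is a minimal illustration of that kind of check; the mount path, test file, and 500 MB/s requirement are hypothetical placeholders, not recommendations.

```python
# Minimal sketch: measure sequential read throughput of a storage mount
# (local disk, SAN, or a mounted cloud volume) and compare it against the
# bandwidth an analytics job is assumed to need. The file path and the
# 500 MB/s target below are illustrative assumptions.
import time

def read_throughput_mb_s(path: str, block_size: int = 8 * 1024 * 1024) -> float:
    """Read a file sequentially and return throughput in MB/s."""
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    sample_file = "/mnt/analytics/sample_dataset.bin"  # hypothetical test file
    required_mb_s = 500                                # assumed workload requirement
    measured = read_throughput_mb_s(sample_file)
    print(f"Measured {measured:.0f} MB/s; required {required_mb_s} MB/s")
    if measured < required_mb_s:
        print("Storage layer may bottleneck this analytics workload.")
```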

Protecting Against Data Loss

Issues like these are simply the risks inherent in doing business in such a highly distributed environment – as are outages like the one that occurred at Microsoft in September. They are also reminders that no data center is completely fail-safe, and that every organization must take precautions to stay online in the event of an outage. Specifically, systems administrators and IT departments must have the right tools at their disposal to prevent data loss.

These tools and precautions necessarily include backups and snapshots of systems, which can quickly restore entire lost data sets. But there are also dedicated software tools designed to rapidly recover individual deleted or overwritten files, whether hosted locally or on the network, and whether deleted deliberately or lost during an outage. Such deletion-recovery tools make it easy for help desk administrators, or even individual users, to quickly restore deleted and lost files without needing backup media or calling in the IT department to dig through backups. In other words, file recovery software allows for much more rapid recovery from outages or data losses.
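To illustrate the general idea behind such deletion-recovery tools (this is a conceptual sketch, not a description of any particular product), the Python snippet below replaces an outright delete with a move into a holding area, from which an administrator or user can restore the file later without touching backup media. The holding-area path and the timestamp naming scheme are assumptions made for the example.

```python
# Conceptual sketch of application-level "soft delete" and restore: files are
# moved to a holding area instead of being unlinked, so they can be recovered
# without backup media. The holding-area location is a hypothetical choice.
import shutil
import time
from pathlib import Path

HOLDING_AREA = Path("/var/recover/holding")  # hypothetical recovery directory

def soft_delete(path: str) -> Path:
    """Move a file to the holding area instead of deleting it outright."""
    src = Path(path)
    HOLDING_AREA.mkdir(parents=True, exist_ok=True)
    # Append a timestamp so repeated deletes of the same name don't collide.
    dest = HOLDING_AREA / f"{src.name}.{int(time.time())}"
    shutil.move(str(src), str(dest))
    return dest

def restore(held: Path, original_dir: str) -> Path:
    """Move a held file back to its original directory, dropping the timestamp."""
    original_name = held.name.rsplit(".", 1)[0]
    target = Path(original_dir) / original_name
    shutil.move(str(held), str(target))
    return target
```

In practice, dedicated recovery products work at the file-system or volume level rather than in application code, but the workflow is the same: keep deleted data reachable long enough that restoring it is a quick, self-service operation.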

Of course, while cloud storage is often the best choice, it’s also important that systems administrators know when their particular big data analysis needs aren’t suited to the cloud. That may require taking a more well-rounded view of system performance and of network and storage needs, and deploying dedicated on-premises storage infrastructure (perhaps in conjunction with cloud storage) when necessary. The improved performance and avoided data errors can make the software and hardware investment worthwhile.
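One simple way to frame that hybrid approach is as a tiering policy: keep recently accessed, analytics-hot data on dedicated on-premises storage and treat colder data as a candidate for cloud storage. The sketch below illustrates such a policy decision; the seven-day cutoff and the tier labels are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch of a hot/cold tiering decision: files accessed within the
# last week stay on on-premises storage, older files are candidates for cloud
# storage. The cutoff and tier names are illustrative assumptions.
import time
from pathlib import Path

HOT_AGE_SECONDS = 7 * 24 * 3600  # assumed cutoff: accessed within the last week

def choose_tier(path: Path) -> str:
    """Return 'on-prem' for recently accessed files, 'cloud' otherwise."""
    age_seconds = time.time() - path.stat().st_atime
    return "on-prem" if age_seconds < HOT_AGE_SECONDS else "cloud"

def plan_placement(data_dir: str) -> dict:
    """Map every file under data_dir to a proposed storage tier."""
    return {str(p): choose_tier(p) for p in Path(data_dir).rglob("*") if p.is_file()}
```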

  1. Targett, Ed, “Azure Outage as Lightning Strike Forces Data Centre Offline,” Computer Business Review, September 5, 2018.
  2. Columbus, Louis, “The global state of enterprise analytics 2018: How cloud, big data and AI are key to the future,” Cloud Computing News, August 23, 2018.

About Jim D'Arezzo

Jim D’Arezzo has had a long and distinguished career in high technology. First serving on the IBM management team that introduced the IBM Personal Computer in the 1980s, he then joined start-up Compaq Computer as an original corporate officer and helped the company grow to over $3 billion as VP Corporate Marketing and later VP International Marketing. Seeing the technology trend toward networking, Jim joined Banyan Systems in the early 1990s as VP Marketing and helped that global networking software leader grow rapidly and eventually go public on NASDAQ. He then moved on to computer-aided design software leader Autodesk as VP Marketing and multiple Division GM for data management, data publishing and geographic information systems. D’Arezzo later served as President and COO for Radiant Logic, Inc., the world leader in virtual directory database solutions. Jim holds a BA from Johns Hopkins University and an MBA from Fordham University, and currently serves as the CEO of Condusiv (www.condusiv.com), the world leader in software-only storage performance and file recovery solutions for virtual and physical server environments.
