COVID-19 Data Tsunami Ushers in Unprecedented Opportunities


The pandemic caused a data tsunami. Businesses that use that data will put distance between themselves and their competitors and position themselves well for the future.

Seemingly overnight, COVID-19 created a data tsunami, the likes of which hadn’t been seen before. As the world went digital in the early parts of the pandemic, internet usage surged nearly 50% in some countries.

For some businesses, this shift initially wreaked havoc. In addition to disrupting workflows and undermining forecasting models, the massive shift to digital created new, volatile data resources. But lost among the chaos was an upside: the pandemic made data bigger and more important than ever before. The companies that capitalized on this fact positioned themselves well for the future and put distance between themselves and their peers. The ones that didn't find themselves, in many ways, left behind.

Certain macro trends point to many firms falling into the "left behind" category. 2020 was the worst year for economic growth since the Second World War, suggesting most companies didn't know how to take advantage of the new, vast data resources at their disposal.

There’s still time for firms to take advantage of the foundational shifts the pandemic has caused, as the data boom is still doing just that – booming. But for businesses to adapt to this data tsunami and take advantage of its unprecedented opportunities, they need to modernize their data lakes, leaving static spreadsheets behind, and attract data and IT talent to identify new business opportunities. The clock is ticking.

See also: Covid Lessons Learned: Focus on Real-Time Supply Chains

BAI was faced with a data tsunami

First, let’s look at some organizations that adapted well and took advantage of the data surge. One example is BAI Communications.

BAI provides state-of-the-art communications infrastructure (for cellular, Wi-Fi, broadcast, radio, and IP networks) in major commuter cities, including New York, Toronto, Tokyo, Hong Kong, Sydney, and London. The company’s technologies keep commuters around the globe connected while they’re in transit – a huge driver in the evolution of smart cities.

The company was challenged to capture, store, and analyze the immense volumes of log data generated across its smart city networks during the pandemic. The Wi-Fi infrastructure in the Toronto subway alone sees upwards of 200,000 unique daily users. Capturing this data across global systems was becoming increasingly critical to ensure riders could stay productive while in transit.

Additionally, because of the various equipment BAI provides—public Wi-Fi and mobile networks—it needed to adhere to government regulations and compliance requirements, which means retaining a tremendous amount of data without compromising it. Feeding all the different systems with the data being generated was becoming too costly with on-site storage.

See also: Covid Pushed Manufacturers even Deeper into the AI Realm

How BAI rode the data tsunami

To modernize this vast data lake, BAI focused on reducing the complexity, time-to-access, and cost of managing its growing log data. By doing so, the company could access and analyze its data faster, more easily, and at lower cost.

The company continues to uncover new value from its data. While they initially focused efforts on optimizing data storage, BAI has since expanded its log use case by building solutions on top of the stored data, like anomaly detection, penetration testing, and more. Additionally, insights generated from BAI’s Wi-Fi network are helping rail operators maintain the flow of traffic; by analyzing the data across the Wi-Fi network, rail operators can optimize routes based on transit capacity.
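Anomaly detection over log volumes of this kind is often built on a simple statistical baseline. As a minimal sketch (not BAI's actual implementation—the function, window size, and threshold here are illustrative assumptions), a rolling z-score can flag hours where traffic deviates sharply from the recent norm:

```python
# Hypothetical sketch: flag anomalous spikes in hourly log volume
# using a rolling z-score over a trailing window. Thresholds and
# names are illustrative, not any vendor's actual implementation.
from statistics import mean, stdev

def detect_spikes(counts, window=6, threshold=3.0):
    """Return indices where a count deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(counts[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady traffic with one sudden surge at index 8
hourly = [100, 102, 98, 101, 99, 103, 100, 97, 450, 101]
print(detect_spikes(hourly))  # → [8]
```

In production, the same idea is typically applied per metric and per site, with the baseline window tuned to the traffic's natural daily rhythm.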

Blackboard rode the data tsunami during online learning

Blackboard is another example of a company that took advantage of a surge of data. When COVID-19 hit, students across the world switched to online learning models to continue their education, and the company saw daily log volumes explode overnight. In the early days of the pandemic, the number of logins for the company's learning management system (LMS) increased four-fold, while the number of daily virtual classroom users increased to a whopping 36 times the normal rate.

Not only did Blackboard need to analyze short-term data for troubleshooting and real-time alerts to deal with this increase, but they also needed access to long-term data for resolving academic disputes and other compliance purposes.

They rose to the challenge. Blackboard was able to quickly allocate more resources to enhance the online experience, recognizing the importance of an online learning tool to students worldwide. They were able to scale up where necessary, and now Blackboard satisfies both its long-term data retention and short-term log analytics needs by pushing data directly into its data lake, where it can be indexed, queried, and analyzed at scale.
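The dual need described above—real-time alerting on the same log stream that also feeds long-term retention—can be sketched in a few lines. This is a hypothetical illustration, not Blackboard's architecture; the window, multiplier, and the in-memory `archive` standing in for a data lake are all assumptions:

```python
# Hypothetical sketch: one log stream serves both a short-term alert
# check and a long-term archive for compliance queries. The constants
# and data structures here are illustrative assumptions.
from collections import deque

RECENT_WINDOW = 5        # minutes of recent metrics kept "hot"
ALERT_MULTIPLIER = 4     # alert when logins jump 4x the recent norm

recent = deque(maxlen=RECENT_WINDOW)   # hot store for real-time alerts
archive = []                           # stand-in for the data lake

def ingest(minute, login_count):
    """Route one minute of login metrics to both stores and
    return True if the short-term alert fires."""
    archive.append((minute, login_count))        # long-term retention
    alert = False
    if recent:
        baseline = sum(recent) / len(recent)
        alert = login_count >= ALERT_MULTIPLIER * baseline
    recent.append(login_count)
    return alert
```

The point of the design is that nothing is thrown away: the hot window answers "is something wrong right now?", while the archive answers compliance and dispute questions months later.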

Attracting top data science talent is more important than ever

According to the Harvard Business Review, data scientists were already in possession of “the sexiest job of the 21st century” prior to the pandemic. Now, demand for them is at an all-time high, as organizations grapple with the data deluge that was created once the world went entirely digital.

A study from consulting firm QuantHub found that there was a shortage of 250,000 data scientists in 2020; meanwhile, job services such as Indeed.com, LinkedIn, and Glassdoor are inundated with listings for data scientists. With the need for data scientists growing, it’s becoming even more critical for companies to determine how best to attract and retain top talent.

And yet, many companies today waste their data science talent on activities like data prep and transformation. To capitalize on the data scientists they do have—while also attracting more talent to their ranks—it's critical that companies modernize their data capabilities to automate the routine tasks that limit employees' capacity to be creative, strategic, and analytical.
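Much of the data prep mentioned above is repetitive and mechanical, which is exactly why it can be automated. As a minimal, hypothetical sketch (the field names and rules are illustrative, not any company's pipeline), a single reusable cleaning step replaces hand-cleaning every batch:

```python
# Illustrative sketch: turning a repetitive, manual data-prep chore
# into a reusable, automated step. Field names are hypothetical.
def clean_record(raw):
    """Normalize one raw record: trim whitespace, lowercase keys,
    coerce numeric strings to ints, and drop empty fields."""
    cleaned = {}
    for key, value in raw.items():
        key = key.strip().lower()
        if isinstance(value, str):
            value = value.strip()
            if value.isdigit():
                value = int(value)
        if value not in ("", None):
            cleaned[key] = value
    return cleaned

record = {" User ": "alice", "Logins": "42", "notes": ""}
print(clean_record(record))  # → {'user': 'alice', 'logins': 42}
```

Once rules like these are codified and run automatically on ingest, data scientists start from clean data instead of spending their hours producing it.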

Data is bigger than ever and will only continue to grow

The massive shift that occurred in all walks of life due to COVID-19 extended to businesses. Specifically, the pandemic caused a huge increase in the amount of data that's produced and the amount of data that companies must deal with. The businesses that capitalize on this boom will put distance between themselves and their competitors and position themselves well for the future. Companies like BAI Communications and Blackboard were able to quickly and deftly adapt to an evolving need and were big winners in the data explosion.

For other businesses to follow in their footsteps, they need to attract top-tier data talent (and retain the talent they already have). They also need to modernize their troves of data and identify new business opportunities. Although the shifts due to the pandemic seemingly occurred in the blink of an eye, the shockwaves from these changes will be felt for years to come. It’s vital for companies to act now and position themselves as beneficiaries in these data boom times.

Data is bigger and more important than ever before. The companies that realize this – and take advantage – will win in the long run. The ones that don’t may find themselves left behind.

About Thomas Hazel

Thomas Hazel is Founder, CTO, and Chief Scientist of ChaosSearch. He is a serial entrepreneur at the forefront of communication, virtualization, and database technology and the inventor of ChaosSearch's patented IP. Thomas has also patented several other technologies in the areas of distributed algorithms, virtualization, and database science. He holds a Bachelor of Science in Computer Science from the University of New Hampshire, where he is a Hall of Fame Alumni Inductee, and he founded both student and professional chapters of the Association for Computing Machinery (ACM).
