Confidential computing ensures that in-memory information is safe not only from cybersecurity threats but also from trusted third parties running mission-critical parts of a company's infrastructure.
When securing data, people tend to think about one of two states, at rest and in transit, and data in either state can be encrypted or tokenized for protection. But what about data in use, data that's actively being analyzed by an algorithm or viewed by a company employee? And what happens when this in-use data resides in edge or IoT environments, which are often uncontrolled?
In certain situations, companies can protect in-use data the same way they protect the rest of their physical and digital infrastructure. They can restrict physical access to their offices and can use countless tools to detect cybersecurity threats to their computers, servers, or cloud deployments. These measures come in handy, for example, when using real-time analytics platforms to monitor logs for anomalies in deployed applications or batch analyzing marketing data to understand customer trends better.
But companies are not only deploying more devices at the edge; they're also asking those devices to do more computing, such as running a machine learning (ML) algorithm on incoming sensor data to make autonomous decisions. For example, a device on a remote machine might automatically take the system offline if it reaches a potentially dangerous state. Edge devices are often housed in remote locations or public environments, which are nearly impossible to monitor and secure as thoroughly as a standard-issue data center.
This raises the question: as more computing happens at the edge, how will companies protect their in-use data from prying eyes?
Protecting in-use data with confidential computing
One viable solution is confidential computing, which uses special hardware to isolate some or all of the data, specific functions, or even entire applications from the rest of the system. This hardware creates a trusted execution environment (TEE, sometimes known as an enclave) for the data, functions, or applications, which can't be viewed by the rest of the operating system, even with a debugger, and even if the operating system itself is compromised. The TEE also refuses to run any code that has been altered, such as by the injection of malware.
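The "refuses to run altered code" guarantee rests on measurement: the hardware hashes the code before launching it and compares the result against an expected value. The following is a minimal conceptual sketch of that idea in Python, not a real enclave API; the function names and the code snippets being measured are illustrative only.

```python
import hashlib

# Hypothetical expected measurement, recorded when the enclave image was built.
TRUSTED_CODE = b"def handle(data): return data.upper()"
EXPECTED_MEASUREMENT = hashlib.sha256(TRUSTED_CODE).hexdigest()

def enclave_will_run(code: bytes) -> bool:
    """Conceptual stand-in for TEE launch verification: hash (measure)
    the code being loaded and refuse anything whose measurement does
    not match the value recorded at build time."""
    measurement = hashlib.sha256(code).hexdigest()
    return measurement == EXPECTED_MEASUREMENT

# Unmodified code passes the check...
print(enclave_will_run(TRUSTED_CODE))                          # True
# ...but tampered code (e.g., injected malware) is refused.
print(enclave_will_run(b"def handle(data): exfiltrate(data)"))  # False
```

Real TEEs (such as SGX enclaves) perform this measurement in hardware and can additionally prove it to a remote party via attestation, but the gatekeeping logic is the same shape.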
Confidential computing ensures that in-memory information is safe from not only cybersecurity threats but also third parties that are trusted with running other mission-critical parts of the company’s infrastructure, like public cloud providers and their employees.
More public cloud providers are offering confidential computing, but adoption is growing slowly because a reliable TEE depends on a tightly controlled relationship between hardware and software and is complex to implement. When it works, though, confidential computing helps companies protect their data and make better use of sensitive workloads wherever they might be collected and stored.
The value of bringing confidential computing to the edge
Confidential computing at the edge is in its relative infancy, but its value in these hard-to-monitor, physically exposed environments is clear.
The security of air-gapped hardware, with flexibility: Companies that operate in highly regulated environments simply can't deploy edge computing without confidential computing; the risk of data loss and cyberattacks is too great. But with a TEE to protect their workloads, they suddenly have new opportunities to collect real-time data, monitor their operating environments, or offer more depth and context to their customers.
Securely share data with partners: Confidential computing can isolate specific parts of a sensitive data set based on who’s observing it, which enables multiple stakeholders, even at different companies, to view relevant portions of shared data. An industrial operation could give the manufacturer who built their machines access to specific sensor information without exposing anything proprietary.
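The per-stakeholder view described above can be thought of as an access policy enforced inside the enclave. Here is a minimal Python sketch of that filtering step; the record fields, roles, and policy table are all hypothetical, and a real deployment would enforce this in the TEE rather than in application code.

```python
# Illustrative shared sensor record from an industrial machine.
SENSOR_RECORD = {
    "vibration_hz": 42.7,      # useful to the machine's manufacturer
    "throughput_units": 1180,  # proprietary to the plant operator
    "recipe_id": "batch-77",   # proprietary to the plant operator
}

# Hypothetical access policy: which fields each stakeholder may view.
VIEWS = {
    "manufacturer": {"vibration_hz"},
    "operator": {"vibration_hz", "throughput_units", "recipe_id"},
}

def view_for(role: str) -> dict:
    """Return only the fields of the shared record the given role
    is entitled to see; unknown roles see nothing."""
    allowed = VIEWS.get(role, set())
    return {k: v for k, v in SENSOR_RECORD.items() if k in allowed}

print(view_for("manufacturer"))  # {'vibration_hz': 42.7}
```

The design point is that the policy and the data sit together inside the enclave, so even the party hosting the hardware can't bypass the filter.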
Protect algorithms or other intellectual property: With confidential computing, a software development company that’s created a sophisticated ML algorithm for use in edge computing can now secure their proprietary code inside of the TEE, where no one—not even the trusted customer they’re helping—can peek inside the “black box” to figure out how it works.
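The "black box" contract amounts to exposing only an inference interface while keeping the model's parameters unreadable. The Python sketch below only mimics that contract in software (the class, weights, and linear model are invented for illustration); a hardware TEE is what makes the hiding actually enforceable.

```python
class SealedModel:
    """Conceptual sketch of a model served from inside a TEE: callers
    can invoke predict() but cannot read the parameters. Python name
    mangling only discourages access; a real enclave enforces it in
    hardware."""

    def __init__(self, weights):
        self.__weights = weights  # hidden coefficients (illustrative)

    def predict(self, x: float) -> float:
        # Trivial linear model standing in for the proprietary algorithm.
        w, b = self.__weights
        return w * x + b

model = SealedModel((2.0, 1.0))
print(model.predict(3.0))  # 7.0 — output is visible, weights are not
```

The customer gets predictions on their own edge data; the vendor's intellectual property never leaves the enclave in readable form.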
There are certain things a company doesn’t want to know, collect, or store about their customers or partners. Confidential computing, whether at the edge or in a data center, offers hardware-level guarantees that everyone sees only what’s been designed for them to see.
What’s holding up edge confidentiality?
If the tech is so powerful, why is it not in every cloud computing and edge environment? Why isn’t it the default?
As mentioned before, developing hardware TEEs is an enormously complex task. IBM Cloud, Azure, and Google Cloud Platform all offer some degree of confidential computing, built on processors such as second-generation AMD EPYC™ CPUs (with Secure Encrypted Virtualization) and Intel Xeon CPUs with Intel SGX (Software Guard Extensions). But these are still specialty VMs, not the standard-issue computing environment.
The Confidential Computing Consortium (CCC) was also formed in 2019 to define industry standards and promote open-source tooling. It has support from big industry players like AMD, Google, IBM and Red Hat, Intel, Microsoft, and VMware, and while it has released a software development kit (SDK) and hosts Enarx, Red Hat's open-source framework for running applications in TEEs, all of the aforementioned public cloud deployments have happened outside the consortium.
All of this means that confidential computing at the edge has a long way to go until it's widely adopted, but now is a fantastic time to familiarize yourself and your team with the new frameworks and software development processes. Try them out, apply them in your public cloud of choice, and be ready for the confidential future of the edge.