Confidential Computing explained

Confidential Computing (CC) at its core is one of the possible approaches to the problem of securing data and code while in use (that is, being executed / computed / processed in the CPU and memory of an IT system). This is a critically important, but often neglected, part of the security trinity, complementary to the security of code/data in transit and at rest.

What is CC, really?

There are multiple reasons why this part of security has so far not been a focal point for developers and architects; the lack of standards and the competing solutions proposed by hardware and software vendors are among them. Another is the fact that, as mentioned above, security for code/data in use can be achieved through multiple, quite different approaches. One thread postulates minimizing the need to expose data in decrypted form even while in use; primary examples of this concept are Fully Homomorphic Encryption, which supports computation on encrypted data (yielding correct, encrypted results), and Secure Multi-Party Computation, where cooperating parties jointly compute results from their data while keeping that data private at all times. These proposals, however, are still in relatively early stages of development and/or limited to specific cases.

A far more universal approach is based on the concept of a Trusted Execution Environment, or TEE: an area defined over CPU / RAM space that is protected from unauthorized access and, as such, supports secure use of decrypted code / data within its boundaries. A TEE can take different forms and offer different capabilities, depending on whether it is hardware- or software-based, and this adds to the confusion around the definition of Confidential Computing.

To counter this, the Confidential Computing Consortium (CCC), part of the Linux Foundation, has worked out a more precise and practical definition of Confidential Computing, which is as follows:

Confidential Computing is the protection of data in use by performing computation in a hardware-based, attested Trusted Execution Environment.

Trusted Execution Environments

The three key ingredients of Confidential Computing are:

  1. Presence of a Trusted Execution Environment (TEE)
  2. The TEE must be hardware-based
  3. The TEE must be attestable

Let's take a look at these requirements.

Hardware support, as reasonable as it sounds (at the end of the day, a hardware-based root of trust gives far stronger guarantees about the trustworthiness of the environment, since hardware is much more difficult to hack or mimic), may also be a bit discouraging, suggesting complicated (and costly) solutions with limited availability. However, we need to remember that even now, quite often unknowingly, we are using similar hardware-based solutions: for example, Hardware Security Modules (HSMs), used for key management in higher tiers / dedicated implementations of Azure Key Vault, or Trusted Platform Modules (TPMs), used by Windows Hello or BitLocker. Actually, as we will see later, hardware support for TEEs was commoditized a couple of years ago and is now widely available (for example, through cloud service providers).

Attestability is an even more reasonable requirement, considering that a given environment, to be fully trusted, needs a way of presenting evidence of its genuineness, security, and intact state. The attestation process and its challenges really deserve a separate post; for now we can just underline that attestation is critically important for cloud environments, where remote attestation (that is, verifying the underlying hardware without physical access to it) is a key factor in deciding whether to use that hardware for highly confidential computation / data processing.
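
To make the idea concrete (this is not any particular vendor's protocol), here is a minimal, purely illustrative sketch of an attestation flow: the TEE signs its measurements together with a verifier-supplied nonce using a hardware-protected key, and the verifier checks the signature against the trusted hardware key and compares the measurements to an expected policy. All names and values are hypothetical.

```python
# Minimal, illustrative attestation flow (not a real vendor protocol).
# Requires the 'cryptography' package: pip install cryptography
import json
import secrets
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hardware side (simplified): a device key rooted in the chip, whose public
# part is known to / certified by the hardware vendor.
device_key = ed25519.Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

def produce_quote(measurements: dict, nonce: bytes) -> dict:
    """The TEE signs its measurements plus the verifier's nonce."""
    payload = json.dumps(
        {"measurements": measurements, "nonce": nonce.hex()}, sort_keys=True
    ).encode()
    return {"payload": payload, "signature": device_key.sign(payload)}

def verify_quote(quote: dict, nonce: bytes, expected: dict) -> bool:
    """Verifier: check the hardware signature, freshness, and measurements."""
    try:
        device_public_key.verify(quote["signature"], quote["payload"])
    except Exception:
        return False  # not signed by the trusted hardware key
    claims = json.loads(quote["payload"])
    return claims["nonce"] == nonce.hex() and claims["measurements"] == expected

nonce = secrets.token_bytes(16)  # freshness: prevents replay of old quotes
expected = {"firmware": "abc123", "kernel": "def456"}  # attestation policy
print(verify_quote(produce_quote(expected, nonce), nonce, expected))  # True
```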

And finally, the Trusted Execution Environment itself. Unsurprisingly, a TEE is defined by the CCC as an environment offering at least three foundational capabilities:

  1. Data confidentiality: Unauthorized entities cannot view data while it is in use within the TEE.
  2. Data integrity: Unauthorized entities cannot add, remove, or alter data while it is in use within the TEE.
  3. Code integrity: Unauthorized entities cannot add, remove, or alter code executing in the TEE.

These are the foundational TEE capabilities, but actual implementations usually offer more, for instance code confidentiality (sometimes the algorithm itself represents our intellectual property) or programmability (so that arbitrary code can be loaded and executed within the TEE).

The threat model for TEEs is also important to consider. According to the CCC:

Confidential Computing aims to reduce the ability for the owner / operator of a platform to access data and code inside TEEs sufficiently such that this path is not an economically or logically viable attack during execution.

This means that, specifically, the model covers software, protocol, cryptographic, and even basic physical and upstream supply-chain attacks, but not sophisticated physical attacks, side-channel attacks, or (D)DoS.

It is also very important to remember that Confidential Computing is just one part of system security, complementary to security for data at rest and in transit, and thus needs to be used in close conjunction with them.

Ok, enough theory, let’s see a few examples.

Shielded and Confidential VMs in GCP

Shielded VMs

Shielded VM offers verifiable integrity of your Compute Engine VM instances, so you can be confident that your instances haven't been compromised by boot-level or kernel-level malware or rootkits, and that your secrets aren't exposed to and used by others.

Using Shielded VMs helps protect workloads from remote attacks, privilege escalation, and malicious insiders:

  • Secure boot prevents loading of malicious code during bootup. Shielded VM instances accomplish this with UEFI firmware.
  • Measured boot checks for modified components during bootup. Measured boot uses a virtualized Trusted Platform Module (vTPM).

Each time your VM starts up, Secure Boot makes certain that the software it loads is authentic and unmodified, by verifying that the firmware has been digitally signed using Google's Certificate Authority Service (CAS).

Shielded VM instances use Unified Extensible Firmware Interface (UEFI) firmware, which securely manages the certificates that contain the keys used by the software manufacturers to sign the system firmware, the system boot loader, and any binaries loaded. UEFI firmware verifies the digital signature of each boot component in turn against its secure store of approved keys, and if a component isn't properly signed (or isn't signed at all), it isn't allowed to run. This verification ensures that the instance's firmware is unmodified and establishes the "root of trust" for Secure Boot.

Measured Boot creates a hash of each component as it loads, concatenates that hash with the hashes of the components already loaded, and then rehashes the result. This allows Measured Boot to record which components were loaded on boot-up and in what sequence.

The first time your Shielded VM is booted, this initial hash is securely stored and used as the baseline for verification of that VM during subsequent boots. This is called “integrity monitoring,” and it helps ensure that your VM’s boot components and boot sequence have not been altered.
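
Conceptually, this measurement chain works like a TPM PCR "extend" operation: each new component's hash is folded into the running value, so the final measurement depends on both the content and the order of everything loaded. Here is a minimal sketch of the idea (illustrative only, not the actual vTPM implementation):

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """PCR-style extend: fold the new component's hash into the running value."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

# Measure each boot component in order; the result depends on content and sequence.
measurement = b"\x00" * 32  # measurement register starts zeroed
for component in (b"firmware", b"bootloader", b"kernel"):
    measurement = extend(measurement, component)

# On first boot this value would be stored as the integrity policy baseline;
# on later boots the freshly computed value is compared against it.
baseline = measurement
print(measurement == baseline)  # True as long as the boot chain is unchanged
```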

Shielded VMs use a virtual Trusted Platform Module, which is the “virtualized” version of a specialized computer chip you can use to protect objects, like keys and certificates, that are used to provide authenticated access to your system. This vTPM allows Measured Boot to perform the measurements needed to create a known good boot baseline, called the integrity policy baseline, upon the first bootup of your Shielded VM.
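
On the API side, these Shielded VM features map to a few flags on the instance resource. Below is a minimal sketch using the google-cloud-compute Python client; the project, zone, image, and machine type are placeholders, and the field names reflect my reading of the Compute Engine API (shieldedInstanceConfig and friends), so treat it as a starting point rather than a definitive recipe.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

def create_shielded_vm(project: str, zone: str, name: str) -> None:
    """Create a Compute Engine instance with Shielded VM features enabled."""
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/e2-standard-2",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    # Shielded VM needs a UEFI-capable image.
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
        # The three Shielded VM knobs discussed above.
        shielded_instance_config=compute_v1.ShieldedInstanceConfig(
            enable_secure_boot=True,
            enable_vtpm=True,
            enable_integrity_monitoring=True,
        ),
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the create operation finishes

# create_shielded_vm("my-project", "europe-west1-b", "shielded-demo")
```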

Confidential Computing VMs

Confidential VM is a type of Compute Engine VM that ensures that your data and applications stay private and encrypted even while in use. You can use a Confidential VM as part of your security strategy so that you do not expose sensitive data or workloads during processing.

Confidential VM runs on hosts with AMD EPYC processors, which feature AMD Secure Encrypted Virtualization (SEV). Incorporating SEV into Confidential VM provides the benefits and features described below.

You can enable Confidential Computing whenever you create a new VM. Creating a Confidential VM only requires an extra checkbox or 1-2 more lines of code than creating a standard VM. You can continue using the other tools and workflows you’re already familiar with. Adding Confidential Computing requires no changes to your existing applications.
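
To illustrate the "1-2 more lines of code", here is the delta relative to the Shielded VM sketch above, again using the google-cloud-compute client; the N2D machine type and the confidentialInstanceConfig field names are my assumptions based on the public Compute Engine API, so verify them against the current documentation:

```python
from google.cloud import compute_v1

# Same instance definition as in the Shielded VM sketch, with the confidential bits added:
instance = compute_v1.Instance(
    name="confidential-demo",
    # AMD SEV requires an AMD EPYC based machine family, e.g. N2D.
    machine_type="zones/europe-west1-b/machineTypes/n2d-standard-2",
    # The "extra 1-2 lines": ask Compute Engine to enable Confidential Computing.
    confidential_instance_config=compute_v1.ConfidentialInstanceConfig(
        enable_confidential_compute=True,
    ),
    # SEV-protected instances are set to terminate (not live-migrate) on host maintenance.
    scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
    # ... disks and network_interfaces as in the previous example ...
)
```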

Confidential VMs provide end-to-end encryption. End-to-end encryption comprises three states:

  • Encryption-at-rest protects your data while it is being stored.
  • Encryption-in-transit protects your data when it is moving between two points.
  • Encryption-in-use protects your data while it is being processed.

Confidential Computing VMs give you the last piece of end-to-end encryption: encryption-in-use. Confidential Computing VMs provide:

  • Isolation: Encryption keys are generated by the AMD Secure Processor (SP) during VM creation and reside solely within the AMD System-On-Chip (SOC). These keys are not even accessible by Google, offering improved isolation.
  • Attestation: Confidential VM uses Virtual Trusted Platform Module (vTPM) attestation. Every time an AMD SEV-based Confidential VM boots, a launch attestation report event is generated.
  • High performance: AMD SEV offers high performance for demanding computational tasks. Enabling Confidential VM has little or no impact on most workloads, with only a 0-6% degradation in performance.
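
If you want to look at those launch attestation / integrity events, they surface in Cloud Logging. A minimal sketch with the google-cloud-logging client might look like the following; the filter, and in particular the log name, is my assumption and should be checked against the integrity monitoring documentation:

```python
# pip install google-cloud-logging
from google.cloud import logging

client = logging.Client(project="my-project")  # placeholder project ID

# Assumed filter: integrity / launch attestation events for a single instance.
log_filter = (
    'resource.type="gce_instance" '
    'AND logName:"compute.googleapis.com%2Fshielded_vm_integrity" '
    'AND resource.labels.instance_id="1234567890"'
)

for entry in client.list_entries(filter_=log_filter, order_by=logging.DESCENDING):
    # Each entry's payload describes a boot measurement / attestation event.
    print(entry.timestamp, entry.payload)
```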

AKS confidential compute nodes

Azure Kubernetes Service (AKS) supports adding Intel SGX confidential computing VM nodes as agent pools in a cluster. These nodes allow you to run sensitive workloads within a hardware-based TEE.

TEEs allow user-level code from containers to allocate private regions of memory and execute the code directly with the CPU. These private memory regions that execute directly with the CPU are called enclaves. Enclaves help protect data confidentiality, data integrity, and code integrity from other processes running on the same nodes, as well as from the Azure operator. The Intel SGX execution model also removes the intermediate layers of the guest OS, host OS, and hypervisor, thus reducing the attack surface. The hardware-based, per-container isolated execution model allows applications to execute directly with the CPU while keeping a special block of memory encrypted per container. Confidential computing nodes with confidential containers are a great addition to your zero-trust security planning and defense-in-depth container strategy.

Intel SGX confidential computing nodes feature:

  • Hardware based, process level container isolation through Intel SGX trusted execution environment (TEE)
  • Heterogeneous node pool clusters (mix confidential and non-confidential node pools)
  • Encrypted Page Cache (EPC) memory-based pod scheduling through “confcom” AKS addon
  • Intel SGX DCAP driver pre-installed and kernel dependency installed
  • CPU consumption based horizontal pod autoscaling and cluster autoscaling
  • Linux Containers support through Ubuntu 18.04 Gen 2 VM worker nodes
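
As a rough sketch of what adding such a node pool looks like programmatically (the same thing is typically done with az aks nodepool add), here is an example using the azure-mgmt-containerservice SDK; the resource names, the DCsv3 VM size, and the parameter fields are assumptions on my part, so check them against the AKS documentation:

```python
# pip install azure-identity azure-mgmt-containerservice
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = ContainerServiceClient(DefaultAzureCredential(), subscription_id)

# Add an Intel SGX (DCsv-series) node pool to an existing AKS cluster.
# The cluster is assumed to already have the "confcom" (SGX device plugin) addon enabled.
poller = client.agent_pools.begin_create_or_update(
    resource_group_name="my-rg",        # placeholder resource group
    resource_name="my-aks-cluster",     # placeholder cluster name
    agent_pool_name="confcompool1",
    parameters={
        "count": 2,
        "vm_size": "Standard_DC4s_v3",  # SGX-capable confidential VM size (assumed)
        "os_type": "Linux",
        "mode": "User",                 # user node pool for running workloads
    },
)
agent_pool = poller.result()            # wait for provisioning to complete
print(agent_pool.provisioning_state)
```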

Learn more about confidential computing AKS nodes in the Azure documentation.

I hope the information above sheds some light on Confidential Computing in general and helps you understand how common cloud providers have already implemented it.

Please visit our #CyberTechTalk WIKI pages for much more information about designing reliable systems, monitoring, and information security.

If you still experience problems with system reliability, are not sure exactly how your system reacts to increased load, or are concerned about your disaster recovery or incident management process, we at BiLinkSoft provide and support fully managed and automated solutions to achieve operational excellence in:
Reliability-as-a-Service
Monitoring and Observability
Cloud adoption
Business Continuity and Disaster Recovery
Incident Management
Release Management
Security

Contact Us for FREE evaluation.

Be ethical, protect your privacy!
