AWS Finally Launches Nested Virtualisation on EC2: Better Late Than Never


If you’ve ever needed to run a hypervisor inside an EC2 instance, you know the pain. For years, the answer from AWS was simple: buy a bare metal instance. That meant paying for an i3.metal or m5.metal just to get access to hardware virtualisation extensions. Need to test a Firecracker microVM setup? Bare metal. Want to run KVM for a security sandbox? Bare metal. Running nested Hyper-V for a Windows lab? You guessed it.

That just changed.

What AWS Shipped

On 12 February 2026, AWS quietly updated eight EC2 API actions to introduce a new NestedVirtualization parameter. No blog post. No press release. The feature surfaced through SDK commits and API documentation changes. The description is straightforward: “Launching nested virtualization. This feature allows you to run nested VMs inside virtual (non-bare metal) EC2 instances.”

The affected APIs tell the story of how deep this goes. RunInstances, CreateLaunchTemplate, CreateLaunchTemplateVersion, ModifyInstanceCpuOptions, DescribeInstances, DescribeInstanceTypes, DescribeLaunchTemplateVersions, and GetLaunchTemplateData all received updates. You can enable nested virtualisation through CpuOptions at launch, bake it into launch templates, or query whether an instance type supports it.
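What does that look like in practice? A minimal sketch in boto3, with the caveat that the exact CpuOptions key and accepted value below ("NestedVirtualization", "enabled") are assumptions inferred from the API parameter name; check the current EC2 API reference before relying on them:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Launch a c8i instance with nested virtualisation requested via CpuOptions.
# The "NestedVirtualization" key and "enabled" value are assumptions drawn
# from the documented parameter name; verify against the live API.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="c8i.2xlarge",
    MinCount=1,
    MaxCount=1,
    CpuOptions={"NestedVirtualization": "enabled"},
)

print(response["Instances"][0]["InstanceId"])
```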

Initial availability appears to be us-west-2, with broader region expansion likely to follow.

One detail worth noting: when you enable nested virtualisation, Virtual Secure Mode (VSM) is automatically disabled. That’s a trade-off you’ll want to understand before flipping the switch in production.

Why This Took So Long

Here’s the uncomfortable part. Azure has supported nested virtualisation since July 2017. Google Cloud introduced it the same year. That’s nearly nine years of AWS customers either paying the bare metal premium or looking enviously at what the other clouds could do.

The Hacker News discussion captured the community sentiment well: “Only took them 9 years.”

Why the delay? AWS has historically taken a more conservative approach to virtualisation features than its competitors. The Nitro hypervisor architecture is purpose-built and tightly controlled. Exposing VT-x/AMD-V extensions through a software hypervisor layer introduces complexity around performance isolation, security boundaries, and noisy-neighbour effects. AWS appears to have waited until specific hardware generations could deliver the feature without compromising their reliability standards.

The current rollout is restricted to 8th-generation Intel instances: the c8i, m8i, and r8i families. Community analysis suggests AWS is leveraging microarchitectural improvements in these chips, specifically VMCS shadowing, rather than enabling the feature broadly across all instance types.

That’s a very AWS move. Ship it when it’s right, not when the slide deck says you should.
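In the meantime, rather than hard-coding the family list, you can ask the API which instance types support the feature. A sketch, assuming the DescribeInstanceTypes update surfaces a nested-virtualisation attribute (the field name below is a guess; inspect a raw response for the real one):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Page through instance types and print those that advertise nested
# virtualisation. "NestedVirtualizationSupport" is a guessed field name
# based on the DescribeInstanceTypes change; adjust once confirmed.
paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate():
    for itype in page["InstanceTypes"]:
        if itype.get("NestedVirtualizationSupport") == "supported":
            print(itype["InstanceType"])
```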

The Competitive Landscape

Let’s be clear about where each provider stands:

Azure launched nested virtualisation with Dv3 and Ev3 VM sizes in July 2017. It supports Hyper-V natively, making it the natural choice for organisations running Windows-centric virtualisation stacks. Today, all v3 and newer series support the feature.

Google Cloud introduced nested virtualisation in 2017 as well, supporting KVM-based hypervisors on Haswell or later processors. Google’s documentation is refreshingly honest about the performance impact: expect 10% or greater degradation for CPU-bound workloads, potentially more for I/O-bound ones.

AWS is now in the game, starting with 8th-gen Intel instances. The feature is enabled per-instance through CpuOptions, which gives you granular control but means it’s not a fleet-wide default.

The parity gap has closed. Not completely, but meaningfully.

What This Actually Unlocks

This is where it gets interesting. Nested virtualisation on standard EC2 instances opens up use cases that were previously cost-prohibitive or architecturally awkward.

Firecracker and MicroVMs Without Bare Metal

This is the big one. Firecracker, AWS’s own microVM technology that powers Lambda and Fargate, requires KVM. We’re talking 125ms boot times, less than 5 MiB memory overhead per microVM, and the ability to create up to 150 microVMs per second per host. Until now, running Firecracker outside of AWS’s managed services meant provisioning bare metal instances. Companies building sandbox environments, like E2B for AI code execution, and projects like Kata Containers for Kubernetes workload isolation, can now run their infrastructure on significantly cheaper standard instances.
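Once the instance is up, the first thing to verify is that KVM is actually exposed inside it, since Firecracker won't start without it. A small preflight sketch:

```python
import os
import stat

# Firecracker requires KVM. Check that /dev/kvm exists inside the
# nested-virtualisation-enabled instance, is a character device, and
# is readable/writable before trying to boot microVMs.
def kvm_available(dev: str = "/dev/kvm") -> bool:
    try:
        mode = os.stat(dev).st_mode
    except FileNotFoundError:
        return False
    return stat.S_ISCHR(mode) and os.access(dev, os.R_OK | os.W_OK)

if __name__ == "__main__":
    if kvm_available():
        print("KVM exposed: Firecracker microVMs should boot here")
    else:
        print("No usable /dev/kvm: check nested virtualisation is enabled")
```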

The economics change dramatically. A c8i.2xlarge costs a fraction of a c5.metal. For startups building microVM-based platforms, this removes a major cost barrier.

CI/CD with VM-Level Isolation

If you’re running CI/CD pipelines that need to build, test, or validate VM images, you’ve been stuck with containers (insufficient isolation for some use cases) or bare metal (expensive). Nested virtualisation means you can spin up actual VMs inside your build agents. Think Packer builds, system image validation, or integration tests that need a full OS environment.
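Baking the option into a launch template keeps a fleet of build agents consistent. A sketch, reusing the same assumed CpuOptions key as the launch example above:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Launch template for CI build agents that need VM-level isolation.
# The CpuOptions "NestedVirtualization" key is the same assumption as
# in the earlier example; confirm against the EC2 API reference.
ec2.create_launch_template(
    LaunchTemplateName="ci-nested-virt-agent",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # placeholder build-agent AMI
        "InstanceType": "m8i.xlarge",
        "CpuOptions": {"NestedVirtualization": "enabled"},
    },
)
```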

Security Research and Sandboxing

Security teams that need to detonate malware samples, run forensic analysis tools, or test exploit chains in isolated environments have a much cheaper path forward. A nested VM provides a stronger isolation boundary than a container, and you no longer need dedicated bare metal to get it.
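As a concrete example, a throwaway analysis guest can be booted in snapshot mode so nothing the sample does survives the session. A sketch, assuming QEMU is installed and an analysis image exists at the (hypothetical) path shown:

```python
import subprocess

# Boot a disposable KVM guest for detonation work. -snapshot discards
# all disk writes on exit, and -enable-kvm only works if /dev/kvm is
# exposed, i.e. the instance was launched with nested virtualisation.
subprocess.run(
    [
        "qemu-system-x86_64",
        "-enable-kvm",
        "-m", "4096",
        "-smp", "2",
        "-snapshot",  # throw away disk writes when the guest exits
        "-drive", "file=/opt/images/analysis.qcow2,format=qcow2",
        "-nic", "none",  # no network: keep the sample contained
        "-nographic",
    ],
    check=True,
)
```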

Training and Lab Environments

Running Hyper-V or KVM labs for training purposes no longer requires either on-premises hardware or premium cloud instances. Organisations delivering infrastructure training can provision standard instances with nested virtualisation enabled, dramatically reducing the per-student cost of lab environments.

VMware Migration Without Bare Metal

This is a subtle but significant one, and the timing is not accidental. Organisations migrating from on-premises VMware environments have historically needed either VMware Cloud on AWS (expensive, vendor lock-in) or bare metal instances to maintain hypervisor compatibility. Nested virtualisation creates a middle path for lift-and-shift scenarios, particularly for development and testing environments that mirror production VMware stacks.

For the full enterprise migration path, AWS already launched Amazon Elastic VMware Service (EVS) in August 2025, which runs VMware Cloud Foundation inside your VPC. That’s the big-ticket option. Nested virtualisation on standard instances is the lighter-weight alternative for dev, test, and validation environments.

The Broadcom acquisition of VMware has sent shockwaves through enterprise IT. Gartner predicts 35% of VMware workloads will migrate to alternative platforms by 2028, with cost pressures driving 70% of enterprise VMware customers to migrate at least half their virtual workloads in the same timeframe. AWS launching nested virtualisation right now gives those migrating organisations a lower-cost landing zone. That’s not a coincidence. That’s a land-and-expand play.

The Trade-Offs

I’d be doing you a disservice if I didn’t cover what you’re giving up.

Performance overhead. Community consensus puts it at 5-15% for CPU-bound workloads, with potentially higher impact for I/O-intensive operations. Google Cloud’s documentation confirms similar numbers on their platform. This is physics, not a bug. An extra layer of virtualisation adds latency to every privileged instruction.
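If you want a number for your own workload rather than the community's, run the same CPU-bound loop on the instance and inside a nested guest and compare wall-clock times. A crude sketch (I/O-heavy workloads trigger far more VM exits and need their own test):

```python
import time

# Crude CPU-bound micro-benchmark: run identically on the EC2 instance
# and inside a nested VM, then compare elapsed times to estimate the
# overhead for compute-heavy work.
def burn(iterations: int = 20_000_000) -> float:
    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        acc += (i * i) % 7
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"elapsed: {burn():.2f}s")
```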

VSM is disabled. When you enable nested virtualisation on AWS, Virtual Secure Mode is automatically turned off. If you’re relying on VSM for Credential Guard or other Windows security features inside the instance, that’s a direct conflict.

Limited instance types. Right now, you’re restricted to c8i, m8i, and r8i families. No AMD instances. No Graviton. If your workloads are optimised for ARM or AMD, you’ll need to wait for AWS to expand support.

Not a bare metal replacement for all workloads. High-performance virtualisation workloads that need direct hardware access, PCIe passthrough, or custom device emulation still need bare metal. Nested virtualisation is a layer of abstraction, not a removal of one.

The Bigger Picture

AWS closing this gap matters more than it might seem. Nested virtualisation isn’t just a niche feature for hypervisor enthusiasts. It’s a foundational capability that enables an entire class of workloads: microVM platforms, security sandboxes, training environments, migration pathways, and CI/CD patterns that need VM-level isolation.

For years, AWS customers who needed this capability had two choices: pay the bare metal premium or move that workload to Azure or GCP. Neither was great. The bare metal tax was real, and multi-cloud complexity for a single capability is never worth it.

Now there’s a third option. It’s limited to Intel 8th-gen instances and us-west-2 for now, and the performance overhead is real. But the direction is clear: AWS is bringing nested virtualisation to the standard compute tier, and the instance type and region coverage will almost certainly expand.

Think about the second-order effects. Organisations that land their VMware workloads on AWS nested VMs during migration will modernise in place: converting to containers, then to microVMs, deepening their AWS commitment at every step. AWS isn’t just shipping a feature. They’re building an on-ramp.

If you’re running Firecracker workloads, building VM-based CI/CD, or managing security sandboxes on bare metal today, it’s worth testing the c8i/m8i/r8i families with nested virtualisation enabled. The cost savings alone could be significant.

I hope someone else finds this useful.

Cheers.

