Nested virtualization experiences?


Jun 8, 2004
So I've recently come across an HP DL380 G7, and it's amazing to have some real server horsepower in the house.

I've been wanting to play with clustering in Hyper-V for work, and especially to experiment with RDS VDI. Nested virtualization seemed like the perfect way to get a Hyper-V cluster up and running. I'm curious if anyone else has done this and what your experiences were.

So the specs:
Physical Server:
-2x Xeon X5660 (6 cores @ 2.8GHz each)
-4x 300GB 10K SAS HDD (single RAID5 datastore)
-6x 1TB SATA WD Red (6 separate 1TB datastores)
-VMware ESXi 6.0U2
-Any VMs at this level are logically treated as physical servers in a simulated office design

On ESX, I have 3 VMs simulating the physical design:
-2012R2 domain controller (AD02)
-2x 2012R2 Core Hyper-V Hosts (HYPV01 and HYPV02)
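
For anyone wanting to reproduce this: out of the box, ESXi won't expose the VT-x extensions Hyper-V needs, so nested hardware virtualization has to be enabled per VM (the "Expose hardware assisted virtualization to the guest OS" checkbox, or the equivalent .vmx settings). A minimal sketch of the lines involved, assuming a HYPV host VM's .vmx file:

```
# Expose Intel VT-x/EPT to the guest so the nested Hyper-V role can start
vhv.enable = "TRUE"

# Hiding the hypervisor CPUID bit is also commonly suggested for
# nested Hyper-V on older ESXi builds (verify against your version)
hypervisor.cpuid.v0 = "FALSE"
```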

The HYPV servers each have a 20GB disk on the 10K datastore, plus the six 1TB disks presented to both hosts on a shared SCSI controller for clustered Storage Spaces. Clustered Storage Spaces combines all the disks into a single parity (RAID5-style) virtual disk, all running in a standard failover cluster.
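
For reference, a pool like this can be built with the standard Storage Spaces cmdlets from one of the cluster nodes. A rough sketch, assuming the six shared disks are the only poolable disks and that names like "NestedPool" are placeholders (not the exact commands from this lab):

```
# Gather the shared disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create a clustered storage pool from those disks
New-StoragePool -FriendlyName "NestedPool" `
    -StorageSubSystemFriendlyName "Clustered Storage Spaces*" `
    -PhysicalDisks $disks

# Carve one parity (RAID5-style) virtual disk out of the whole pool
New-VirtualDisk -StoragePoolFriendlyName "NestedPool" `
    -FriendlyName "NestedVD01" `
    -ResiliencySettingName Parity `
    -UseMaximumSize
```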

All other VMs I build live within the Hyper-V cluster. A few observations so far: VM performance is actually quite good. I expected a large performance penalty for nesting the hypervisors, but things are still very snappy.

VM stability is great for general-purpose workloads, but some CPU extensions don't seem to work well. Specifically, I had a Plex VM that would constantly either crash just Plex or BSOD the whole guest. I've since moved the Plex server up to the ESX layer, and it has been plenty stable since. So I'm wondering if some media-type CPU extensions are an issue with nesting.

I'm curious if anyone else has run labs nested like this and whether you've hit any other caveats with nested virtualization. I want to know I'm not the only one doing completely crackpot configurations for the sake of playing.