Just for funsies, and to see what this vSAN thing is all about, decided to build a 2-node VMware vSAN cluster.
Hosts are running on 8th-gen NUC i7s with 32GB RAM, 1x 250GB NVMe, and 1x 2TB SSD each, booting off old USB thumb drives.
Took a bit to figure out how to get it all working, but I managed.
Only one snag, which I don't think is directly related to the 2-node setup (with witness appliance).
The VMware vSAN witness appliance has 3 disks.
Currently, 1 of those 3 disks is out of compliance with the storage policy.
That one disk is reported as RAID0, while everything else is RAID1, and the policy specifies RAID1.
All of the disks in the VCSA are properly reported as RAID1.
I've tried creating a new storage policy and applying it, but that doesn't change anything.
I also tried doing the "Repair Objects Immediately" option from the vSAN Health menu, under the vSAN cluster/monitor.
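For anyone who wants more detail, this is roughly what I've been poking at from an SSH session on one of the hosts (a sketch from the `esxcli vsan debug` namespace; the UUID is just a placeholder you'd take from the list output, so adjust for your environment):

```shell
# Summarize the health/compliance state of all vSAN objects
# (run on an ESXi host in the cluster via SSH)
esxcli vsan debug object health summary get

# Dump detail for one object to compare its actual RAID layout
# against what the assigned storage policy says it should be.
# Replace <object-uuid> with the noncompliant object's UUID.
esxcli vsan debug object list --uuid=<object-uuid>

# From RVC on the VCSA, a state check with refresh can sometimes
# kick off a repair that the UI button doesn't:
#   vsan.check_state -r /localhost/<datacenter>/computers/<cluster>
```

The object list output shows the component layout (RAID_0 vs RAID_1) per object, which at least confirms whether the UI is reporting the real on-disk layout or just a stale compliance status.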
Nothing seems to want to correct it.
I've done a whole lot of searching, but everything I find relates to all or multiple disks being in a noncompliant state. In those cases, creating and applying a new policy was usually what got things working.
Any tips? What might I be doing wrong? Could this be a bug in something else?
I have already updated everything with the latest patches (2 physical hosts, witness host, and VCSA).