All-In-One (ESXi Server with virtualized high-speed ZFS-SAN solution in a box)
How I have done it
Modern IT services are mostly based on virtualization and SAN storage servers.
Usually you have one or more ESXi servers for virtualization and one or more SAN storage or backup servers,
connected by a fast FC- or IP-based network with a lot of expensive switches, adapters and software.
I work in the education sector. In my case, we have a lot of different services, systems and applications,
and usually little money, so we prefer open-source and low-cost/free solutions whenever possible. We do have critical services,
but we have defined that a 30-minute window from any kind of failure until service is restored is always acceptable.
Beside this, we have maintenance windows in the morning and evening, so we do not need 24/7 service with
no interruptions allowed.
Until the beginning of last year we ran separate VMware and SAN storage servers. With the arrival of
real hardware virtualization/pass-through of real hardware to guests (Intel VT-d and AMD IOMMU), instead of the former
virtualization based only on emulated old standard hardware, sometimes improved a little with optimized ESXi drivers,
I developed an All-In-One concept, based on free ESXi and a free virtualized ZFS storage SAN in the same box.
See my mini-HowTo at:
http://www.napp-it.org/doc/downloads/all-in-one.pdf
If you are thinking about a similar config, you may be interested in my thoughts on:
1. VMware ESXi
If you think about high-speed virtualization, you very quickly reach the point where the best way
is a type-1 hypervisor (runs directly on bare-metal hardware) like Xen or ESXi.
I use ESXi because it is the market leader and because XenServer currently lacks pass-through, which is needed for
performance and needed to have ZFS-formatted disks with all of ZFS's fault-handling features.
The main problem with ESXi: if you ask about the storage features of free ESXi, the best answer is
that there are none. Even the licensed versions offer no more than some HA and backup features,
nothing comparable to the features of a modern SAN, so any serious ESXi deployment involves a SAN.
I use free ESXi 4.1 for my solution.
2. SAN Storage
In former times we used Windows. We then moved to Unix systems, mainly because of uptime:
it is no fun to deal with that many security patches and the reboots they require...
The immense ZFS feature list in particular, and the availability of free versions like OpenSolaris or
NexentaCore, convinced us to use it as a base for storage (ESXi datastores, SMB filers and backup systems).
Currently I mostly use OpenIndiana 148 or NexentaCore (both are free and open source).
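As a rough sketch of how such a storage base is set up (pool layout and disk names here are illustrative, not the author's actual config), a mirrored ZFS pool can be created and a filesystem shared to ESXi over NFS with just a few commands on a Solaris-family OS:

```shell
# Hypothetical example: create a mirrored pool "tank" from two disks
# (disk device names vary per system).
zpool create tank mirror c4t0d0 c4t1d0

# Create a filesystem for VM storage and export it via NFS,
# so ESXi can mount it as an NFS datastore.
zfs create tank/nfs
zfs set sharenfs=on tank/nfs
```

The same pool can of course also serve SMB shares and backup datasets alongside the ESXi datastore.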
3. Separate ESXi and SAN servers
This is the most common setup. Main problem: you have a lot of machines, expensive switches, energy consumption,
cables and parts that can fail. Especially if you want high speed like 10 Gb between the SAN and VMware,
it gets very expensive.
Pro: if one fails, the other remains intact. You have no dependencies between them and you are free in your choice of SAN.
If you really need 24/7, you should use this setup and take care of an HA SAN, not only high availability of ESXi.
4. All-In-One
With modern hardware and, especially, RAM comparable to a separate solution, you can virtualize
the SAN itself on ESXi, like any other guest. You have the same logical configuration as
with separate machines. From the outside, there is no difference.
The best feature is software-based high-speed internal transfer (for example 10 Gb with vnics based on the ESXi vmxnet3 driver)
between the SAN storage and the ESXi guests, or the use of the ESXi virtual switches with VLANs to your hardware switch.
I use, for example, one physical 10 Gb VLAN connection from ESXi to my HP switch for all of my VMs and LANs such as manage, san, lan1, lan2, lan3, internet and dmz.
The rest is internal 10 Gb transfer over virtual nics.
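For reference, the vnic type of a guest is defined in its .vmx file; a minimal fragment selecting the vmxnet3 driver might look like this (the port group name "san" is illustrative):

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "san"
```

Traffic between two guests on the same virtual switch then never leaves the box, which is what makes the internal 10 Gb transfer possible.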
Cons:
You have to handle ESXi updates and SAN updates (-> complete outage).
You have nearly the same SAN performance as on real hardware, but separate machines can deliver more.
With my preferred ZFS OS, you can only use a maximum of 2 virtual CPUs, or booting becomes unstable.
Reduced choice of hardware and disk controllers (ESXi and Solaris are stable and good only on certain hardware).
If you want to build your own system, I use and can recommend the following sample config:
Use a mainboard with an Intel server chipset 5520, 3420 or C202/C204.
I always prefer Supermicro mainboards from the X8 or X9 series ending in -F for remote management.
Always use Xeons and a minimum of 12 GB RAM; with Supermicro X9 you may need an additional Intel nic.
If you use/add a SAS controller based on the LSI 1068 or LSI 2008 chipset (e.g. LSI 9211-8i), you have it running without pain.
Conclusions
You should have two All-In-Ones. The second is the backup and failover system for the ESXi and SAN services.
It is optimal if you have the possibility to physically move a disk pool to the second machine, either
via enough free disk slots or an external SAS storage box (plug the SAS cable into either machine).
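Moving a pool between machines comes down to a ZFS export/import cycle; a sketch (pool name "tank" is illustrative):

```shell
# On the old machine, if it is still reachable: cleanly export the pool.
zpool export tank

# After moving the disks (or re-plugging the external SAS box) to the
# second machine: import the pool there.
zpool import tank

# If the first machine died without a clean export, the import
# must be forced:
# zpool import -f tank
```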
You should also keep both systems up to date. I use ZFS replication between them to have the same data on the
second system with a delay of a few minutes. (Always do additional backups; this is only for availability.)
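The replication described above can be sketched with zfs send/receive; host name "backup", dataset names and snapshot names here are illustrative assumptions, not the author's actual setup:

```shell
# Initial full replication of the dataset to the second box over ssh.
zfs snapshot tank/nfs@repl-1
zfs send tank/nfs@repl-1 | ssh backup zfs receive tank/nfs

# Afterwards, every few minutes (e.g. from cron): snapshot and send
# only the changes since the previous snapshot (incremental send).
zfs snapshot tank/nfs@repl-2
zfs send -i tank/nfs@repl-1 tank/nfs@repl-2 | ssh backup zfs receive tank/nfs
```

The incremental sends are small, which is why the second system can stay only a few minutes behind.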
My experience:
In my use case, it works very well and with fewer overall problems than our former separate solution. I now have three pairs of All-In-Ones plus
four additional dedicated SAN boxes, used as SMB filers in our domain and for backups, in use for a year now. On hardware or software problems we
could restart any VMware guest, usually within 30 minutes, and in all cases without the need to use any backup, using the original pool data.