I'm starting fresh. How would you build this out?

serbiaNem

2[H]4U
Joined
Mar 6, 2005
Messages
2,167
Going to rebuild my home cluster. Goals are balanced redundancy, Plex storage, personal cloud, pen test lab, couple light websites.
Refresh Hardware:
Server 1: 5900X, 64GB ECC, 1TB NVMe, 280GB Optane, 8x 10TB, 2x 10GbE
Server 2: 2400GE, 64GB ECC, 2x 1TB NVMe, 8x 10TB, 10GbE
Server 3/4: 2x NUC 10700u, 64GB, 512GB SATA, 1TB NVMe, 10GbE
Server 5: 2x E5-2450v2 (8 cores ea.), 96GB ECC, SATA SSD: 3x 500GB, 2x 2TB, 4x 4TB; rust: 8x 6TB, 2x 10GbE
Server 6: Pentium N5105, 16GB, 128GB NVMe, spare SATA slot, 4x 2.5GbE
Server 7: i3-10100u, 16GB, 256GB NVMe, spare SATA slot, 4x 1GbE
Server 8: 4-core Intel, 64GB, 1TB NVMe, 12x 3.5" usable slots, 4x 1GbE
NAS: 6x 14TB, 2x 2TB cache, 2x 500GB root, 10GbE
WS: 3970X, 128GB, 2TB NVMe, 10GbE

I have a whole slew of smaller 3TB/2TB/1TB drives, along with 2x spares each for the 6TB and 10TB arrays.
I'm going to use Server 6 as the new OPNsense box, replacing Server 7. No RAID controllers; all HBAs.

The question: how would you set up the storage on these servers, starting from scratch? I was looking into Ceph, but I'm not sure how I feel about eating my network bandwidth to distribute writes. Current thinking: mirrored ZFS root disks where possible, RAIDZ3 on the large spinning arrays, and Server 8 populated with the old junk drives running SnapRAID + mergerfs as a final backup. I would swap the 5900X and the 3970X for WS duties, but I don't feel like redoing the water cooling loop. I could probably get a nice replication schedule going across the 1TB NVMe drives for containers/VMs, and use the bulk ZFS storage for secondary NAS and backup.
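A rough sketch of what that layout could look like on one of the 8x 10TB boxes; every device path and hostname below is a placeholder, not the real hardware:

```shell
# Mirrored ZFS root on the two 1TB NVMe drives (device IDs are examples).
zpool create -o ashift=12 rpool mirror \
  /dev/disk/by-id/nvme-example-a /dev/disk/by-id/nvme-example-b

# RAIDZ3 across the 8x 10TB spinners: 5 data + 3 parity, ~50TB usable.
zpool create -o ashift=12 tank raidz3 \
  /dev/disk/by-id/ata-10tb-{1,2,3,4,5,6,7,8}

# Split datasets by role: bulk NAS storage vs. backup targets.
zfs create -o compression=lz4 tank/media
zfs create -o compression=lz4 tank/backup

# Replicate the container/VM dataset to the other box's bulk pool
# (hostname "server2" is an assumption).
zfs snapshot rpool/containers@nightly
zfs send -R rpool/containers@nightly | ssh server2 zfs recv -u tank/backup/containers
```

For recurring replication you'd want incremental sends (`zfs send -i`) driven by a tool like sanoid/syncoid rather than hand-rolled snapshots.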

Am I missing anything glaring? Anything interesting I could try?
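For reference, the SnapRAID + mergerfs idea on Server 8 would look roughly like this; all paths are illustrative, and the parity disk has to be at least as large as the biggest data disk:

```shell
# /etc/snapraid.conf — mismatched junk drives as data, largest drive as parity.
# parity /mnt/parity1/snapraid.parity
# content /var/snapraid/snapraid.content
# content /mnt/disk1/snapraid.content
# data d1 /mnt/disk1
# data d2 /mnt/disk2
# data d3 /mnt/disk3

# Pool the data disks into one tree with mergerfs (fstab-style line):
# /mnt/disk* /mnt/storage fuse.mergerfs cache.files=off,category.create=mfs,moveonenospc=true 0 0

# Nightly: sync parity, then scrub a 5% slice of the array.
snapraid sync && snapraid scrub -p 5
```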
 

danswartz

2[H]4U
Joined
Feb 25, 2011
Messages
3,711
If you're willing to pay roughly $200/yr for a VMUG membership, vSAN 7.0 has been rock-solid for me.
 

Eulogy

2[H]4U
Joined
Nov 9, 2005
Messages
2,667
I'd scrap most of the idea of VMs and just go with containers. If something doesn't play well with containers, then use QEMU VMs for those, but I'd highly discourage it.

Since you didn't really give specifics on needs, it's difficult to give much further input. You've got quite a bit of oddball, mismatched storage going on, that's for sure.
 

serbiaNem

2[H]4U
Joined
Mar 6, 2005
Messages
2,167
I'd scrap most of the idea of VMs and just go with containers. If something doesn't play well with containers, then use QEMU VMs for those, but I'd highly discourage it.

Since you didn't really give specifics on needs, it's difficult to give much further input. You've got quite a bit of oddball, mismatched storage going on, that's for sure.
Containers all the way down don't play nice with the OPNsense firewall and NIC passthrough. That's one big one.
 

Eulogy

2[H]4U
Joined
Nov 9, 2005
Messages
2,667
I mean, you could do it pretty easily with MACVLAN, but I'd keep edge network devices far away from compute/storage, just as good hygiene and practice.
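A minimal sketch of the MACVLAN approach with Docker, assuming the LAN is 192.168.1.0/24 and the host NIC is eth0 (both are placeholders):

```shell
# Create a macvlan network bridged onto the physical NIC, so containers
# get first-class addresses on the LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 lan

# Run a container with its own LAN-visible IP.
docker run -d --name web --network lan --ip 192.168.1.50 nginx

# Caveat: the host itself can't reach macvlan containers directly;
# a macvlan shim interface on the host is the usual workaround.
```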
Looking at your list, I really don't know what I'd do with that setup. I'd personally probably toss most of it and end up with:
"Server" 1 - edge network (router/firewall). Though I wouldn't run a server for this; I'd probably run a MikroTik RB4011 instead. Less power draw, more performance, and at a glance, better features.
"Server" 2 - compute and storage. Again, I'd move basically everything to containers. For storage I'd just use ZFS, but with the odd arrangement of disks you have, your vdev layout is going to be really off-kilter, so something like Unraid may work better. I have a single NVMe for L2ARC and ZIL, but honestly basically never need it. Even at 10Gbps my limitation is the network, not disk I/O.

To connect it all, I'd probably use something like an ICX 6610-xxP, either 24P or 48P depending on how many 1G ports I wanted. PoE, mostly for cameras. 16x 10Gbps and 2x 40Gbps (I use the 40Gbps ports as a backhaul between switches on each floor of my house). Killer switches, and cheap.
Still, there's no visibility into your workload or what your intent really is. Even if you do something "simple" like I prescribe above, with that hodgepodge of disks you're going to run into headaches over time.
 

D-EJ915

[H]ard|Gawd
Joined
Jan 31, 2003
Messages
1,664
I don't think I'd bother with a distributed storage system across that variety of systems; you'd need beefier hardware to do it properly. Since this isn't anything really important like a business, as long as you can restore it reasonably well, a single box should be fine as your "SAN" imo.
 