Zarathustra[H]
Hey all,
I have a dual-socket server board (Supermicro X9DRI-F with two Xeon E5-2650 v2s). As I was installing a few SAS HBAs, a 10Gb Ethernet card, and preparing for future NVMe drives, it struck me:
Each of the PCIe slots is labeled with which CPU its lanes are attached to: two slots (x16 and x8) go to CPU0, three slots (x16, x8, and x8) go to CPU1, and one x8 slot goes to the chipset.
To maximize the efficiency/performance of the system, should I care about which slot things go into? Should I group all storage onto one CPU, or distribute it across both? Does it matter at all?
I've been running a couple of dual-socket servers for a few years now and never even thought about this before, but googling doesn't give me much in the way of answers.
Appreciate any input.
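In case it helps anyone answering: here's a quick sketch of how I've been checking which NUMA node (i.e., which CPU socket) each installed card actually lands on. This assumes Linux with sysfs mounted; a `numa_node` of -1 means the kernel reports no affinity (typical for chipset-attached slots or single-socket firmware tables).

```shell
# For each PCIe device, print its address and the NUMA node its lanes
# are attached to, as reported by the kernel via sysfs.
for dev in /sys/bus/pci/devices/*; do
  # Skip devices (or systems) that don't expose a numa_node entry
  [ -e "$dev/numa_node" ] || continue
  printf '%s numa_node=%s\n' "${dev##*/}" "$(cat "$dev/numa_node")"
done
```

Cross-referencing that output against `lspci` descriptions shows whether, say, an HBA really ended up on CPU0 or CPU1.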