We have recently had some issues with the current server room we are in and decided to spend the money to move into a newer room of the building and get it set up properly as a server room. This is all currently in progress and taking some time...but hopefully in the next month or so everything will be done and I can start moving into the new room.
This is a picture of the current server room. You can't see it in this picture, but we have a couple of ceiling tiles pulled out so that we could pull more A/C ducting into the room to keep it cool. We currently have two racks of servers, 20 or so in total, and one network rack.
The servers run email, ERP, database, file server, etc... Our current internet connection is 6Mb/s, brought in over four T1 lines that are bonded together (this is being upgraded to 30Mb/30Mb fiber as you read).
New Room Construction
We've ordered one mini-split system for now; another will be installed in three months or so.
Hot air returns at the back of the new server rack.
New lights in the room. The ceiling is torn out because we are putting in a drywall ceiling.
The metal duct in the wall is for the new cabling coming into the room. It will be sealed with a rubber surround and flexible rubber in the middle to guard against airflow through that duct.
Our old servers are a variety of Dell models; some are as old as six years, others only a few months.
We are moving to a virtual server environment with Citrix XenServer. We purchased four new servers and a new SAN for them.
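In case it helps anyone planning something similar, joining the four hosts into a single XenServer resource pool is only a couple of xe commands. The master address and password below are placeholders for whatever your own pool master uses.

# Run on each of the three non-master hosts to join them to the pool
# (master address and credentials are placeholders):
xe pool-join master-address=10.0.0.11 master-username=root master-password='secret'

# Back on the master, check that all four hosts show up:
xe host-list params=name-label,address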
Server specs:
Dual quad-core Xeons at 2.93GHz
48GB of RAM
150GB for the hypervisor on four 10K drives in RAID 10
2Gb bonded NIC links to redundant switches (rough xe commands for the bond and the SAN hookup are just below this list)
The SAN is a 7.2TB HP iSCSI SAN.
16-port Avocent IP KVM
Dual HP ProCurve 2910al 48-port L2/3 gigabit switches.
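For anyone curious, this is roughly what the XenServer side of the bonded NICs and the iSCSI SAN looks like from the xe CLI. Every UUID, the target IP, and the IQN below are placeholders; the real values come out of xe pif-list / xe network-list and the SAN's own iSCSI configuration.

# Bond two NICs into one 2Gb link on a host (UUIDs are placeholders
# taken from 'xe network-list' and 'xe pif-list'):
xe network-create name-label="Bond0"
xe bond-create network-uuid=<network-uuid> pif-uuids=<eth0-pif-uuid>,<eth1-pif-uuid>

# Attach the HP iSCSI SAN as a shared storage repository for the whole pool
# (target IP, IQN, and SCSI ID are placeholders; the SCSI ID comes from 'xe sr-probe'):
xe sr-create host-uuid=<master-uuid> name-label="HP iSCSI SAN" shared=true \
    type=lvmoiscsi content-type=user \
    device-config:target=10.0.0.50 \
    device-config:targetIQN=iqn.1992-01.com.example:storage.lun0 \
    device-config:SCSIid=<scsi-id>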
Servers and the SAN
Back of the servers and the SAN
Some cabling started
Network cabling going in for Management VLAN
Red cables are the network management VLAN. Purple cables are the bonded ports between the switches, which also provide fail-over.
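If the hosts' XenServer management interfaces end up living on that management VLAN too (that part is my assumption, not something you can tell from the pictures), pointing them at a dedicated NIC is roughly this; the PIF UUID and the addresses are placeholders.

# Give the management PIF a static address on the management network
# (UUID and addresses are placeholders for your own environment):
xe pif-reconfigure-ip uuid=<mgmt-pif-uuid> mode=static \
    IP=192.168.10.21 netmask=255.255.255.0 gateway=192.168.10.1

# Point the host's management interface at that PIF:
xe host-management-reconfigure pif-uuid=<mgmt-pif-uuid>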
I'm using 1ft power cords to connect to the PDU. I'm using short ones for airflow, and because if these machines come out of the rack, they're going to be fully powered off anyway. We used to use the cable management arms on our Dell servers, and with long power cords they just got in the way. Since we're using VMs, I can just push all of the running VMs on a particular server over to one of the others.
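For what it's worth, draining a host before pulling it out of the rack is just a couple of commands once everything sits on the shared SAN (the host UUID is a placeholder from 'xe host-list'):

# Keep new VMs off the host, then live-migrate everything it's running
# to the other pool members:
xe host-disable uuid=<host-uuid>
xe host-evacuate uuid=<host-uuid>

# Once the server is back in the rack and booted:
xe host-enable uuid=<host-uuid>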
More pictures to come...
Also, if anyone has suggestions on something, please feel free to post.
