Dashcat2 Build


Feb 8, 2004
New 30U rack still on pallet. Practice house FTW.


The Next Morning

It puts the Dashcat in the workshop.

Oh look... someone left a driveway empty. This shall be put to use.
Dandelions FTW. (I killed a lawnmower blade on a spike driven into the ground, so they're still standing.)
I'm on borrowed time with those clouds looking like that.
Messy Driveway

Cleaner workshop

Glorified garbage bag removed from pallet

It's a start

Even my daughter is excited

...excited enough to turn into a dinosaur.

The weather turned to shit just after I finished putting the stuff back in the shop. The next step will have to wait.
Cold, shitty weather called for a hot meal.

Gyudon = epic
And here's how you make it: http://www.youtube.com/watch?v=F1mvYnRJX70

5U shelf insert removed, added to top of rack and two controllers added to that.

Getting the controllers positioned such that they would swing out for maintenance was rough. While the shelf isn't meant to have stuff attached to it in front, these controllers are so lightweight it's really quite trivial. The controllers were originally meant to go on the back of a full-depth rack as a zero-"U" device. I'm pressed for space and want to show off the machine so it was natural that I have everything face-forward.
Shelves and vented blanks removed. (I'm considering offering those for sale, actually.)

The 5U shelf is bolted to the rack. The 2U shelf on the original Dashcat system was zip-tied.

How does one avoid special plugs? By having more than one. Even if I had twenty nodes, I could power the whole thing from a pair of 115VAC 20A rails.

But why stop there? I have to be able to cool it too, so I've got a 30A circuit set aside. This will be divided out into a total of four circuits at a main-lug panel in the workshop: two 115VAC 20A circuits and two 115VAC 15A circuits. While that adds up to 35A per leg, it's unlikely I'll ever trip the 30A breaker, and the cable between the main panel and sub-panel is rated for 35A anyway. Now imagine this: we sell the trailer and move out, the new owners go to reset a breaker, and they see "Supercomputer". You don't see that very often at any house, let alone a trailer.
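The sub-panel arithmetic can be sketched out. This is a back-of-envelope check, assuming the 30A feed is a 230VAC circuit with the four branch circuits balanced across its two legs (one 20A and one 15A circuit per leg):

```python
# Sub-panel load sketch (assumption: four 115 VAC branch circuits split
# evenly across the two legs of a 230 VAC, 30 A feeder).

feeder_amps = 30          # breaker rating on the 230 VAC feed to the workshop
legs = {
    "A": [20, 15],        # branch breaker ratings on leg A (amps)
    "B": [20, 15],        # branch breaker ratings on leg B (amps)
}

for name, breakers in legs.items():
    total = sum(breakers)
    print(f"Leg {name}: {total} A of branch breakers on a {feeder_amps} A feed")

# The branch total (35 A) may exceed the feeder rating because the
# circuits are never all fully loaded at once; the feeder breaker
# still protects the 35 A-rated cable.
```

Branch breakers adding up to more than the feeder is normal panel design; the feeder breaker is what protects the cable.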

Here's my current wiring with the new fatpipe lying next to it. What you see is a pair of 12AWG heavy-duty extension cords (orange and yellow) running parallel to four Cat-5e cables. The current arrangement was strictly temporary, with the yellow one having been set up last fall just before the ground froze (I kept blowing the GFCI when my laser printer kicked on). It's not code-compliant for permanent install and wasn't meant to be. The fat cable is a power cord from a welder. It's 8AWG SOOW-type cable, rated for 35A. While I can't directly bury SOOW cable (the outer jacket isn't rated for submersion) and have any hope it will last, I can run it along the ground under my trailer and feed it through conduit for the fifteen feet between the house skirting and the workshop breaker panel.
I've cut a piece of an article and am giving it an update comparing it to my own render farm.

URL: http://findarticles.com/p/articles/mi_m0EIN/is_1995_Dec_4/ai_17812444/

Pixar's RenderFarm

Sun worked closely with a team from Pixar to create its RenderFarm, which serves as Pixar's central resource of computer processing power. The RenderFarm uses a network computing architecture in which a powerful SPARCserver(TM) 1000 acting as a "texture server" supplies the necessary data to the many rendering client workstations needed to complete the rendering process.

Nobody worked closely with me. I'm using outdated 2005 hardware that was destined for a recycler, and I luckily got some assistance from [H] member kogepathic, through AMD forum member AndersN, which netted me a BIOS that gave my motherboards a shot in the arm for double the performance capability. I don't need an amazing server to handle the load of ten compute nodes, or even twenty if I decide to go that far. (Note: I wrote this when I was planning ten nodes and ended up with eight.)

In 1995, they were using the 10BaseT Ethernet their SPARCstation 20s came with. Okay, maybe 100BaseT if they added expansion cards, but they didn't. Why a texture server? Probably because the nodes were limited to 512MB RAM. Likely so, since the SPARCserver 1000 could take 2GB RAM. Each of my nodes will take 2GB almost for free and 8GB without costing me too much, while the server will take 16GB.

The RenderFarm was assembled by Sun and Pixar engineers in less than a month and drew upon Sun's own experience in setting up "farms" of many systems linked together. Some facts about Pixar's RenderFarm and the computing aspects of "Toy Story":

Theirs required engineers. Mine required... me, power tools, good timing. That, and a lot of determination. I wish I could do it in a month. Hell, it's taking me half a year the way I've planned it out. (This was written when my target completion date was April 29 to coincide with Ubuntu 10.04 LTS being released).

-- The RenderFarm is one of the most powerful rendering engines ever assembled, comprising 87 dual-processor and 30 four-processor SPARCstation 20s and an 8-processor SPARCserver 1000. The RenderFarm has the aggregate performance of 16 billion instructions per second -- its total of 300 processors represents the equivalent of approximately 300 Cray 1 supercomputers.

That's right! One of their processors equals a Cray 1. Yes. In 1979, Popular Science said the Cray 1 "will cruise along at 80 MFLOPS." That's an aggregate speed of 24GFLOPS. Mine, at 281GFLOPS (492GFLOPS in 14-node form), wipes the floor with theirs. Granted, I'm doing this fifteen years after the Toy Story farm was built, with technology from 2005. One AMD 275 Dual core CPU (17.6 GFLOPS) almost matches their entire 1995 farm--and I have sixteen of them. (28 now)
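The aggregate numbers above come straight from multiplying out clock speeds and FLOPs per cycle. A quick worked version (Cray 1 figure from the Popular Science quote; Opteron figure is clock times cores times 4 FLOPs/cycle):

```python
# Back-of-envelope FLOPS comparison between the 1995 Pixar farm and Dashcat2.

cray1_gflops = 0.080                        # 80 MFLOPS per Cray 1-class CPU
pixar_total = 300 * cray1_gflops            # 300 CPUs in the RenderFarm

opteron_275_gflops = 2.2 * 2 * 4            # 2.2 GHz, dual core, 4 FLOPs/cycle
dashcat_8_nodes = 16 * opteron_275_gflops   # 16 CPUs in the 8-node build
dashcat_14_nodes = 28 * opteron_275_gflops  # 28 CPUs in the 14-node build

print(f"Pixar 1995:     {pixar_total:.1f} GFLOPS")
print(f"Dashcat2 (8n):  {dashcat_8_nodes:.1f} GFLOPS")
print(f"Dashcat2 (14n): {dashcat_14_nodes:.1f} GFLOPS")
```

That's 24 GFLOPS for the whole 1995 farm against 17.6 GFLOPS for a single Opteron 275.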

-- Each system is the size of a pizza box, and all 117 systems work in a footprint measuring just 19 inches deep by 14 feet long by 8 feet high.

My farm fits within four square feet and it's counter-top height. (Okay, that was with the short rack. It now fits within eight square feet and that includes the cooling system.)

-- Sun is the price/performance leader, in Pixar's own rankings. The SPARCstation 20 HS14MP earned a rating of $80 per Rendermark (a Pixar measurement for rendering performance), while the comparable SGI Indigo Extreme came in at approximately $150 per Rendermark.

(Note: Sun? Oracle, now.) They're comparing it to an Indigo2 Extreme, not an Indigo "1"; the article has a typo. They compared against an I2 with a 200MHz R4400 CPU, the slow end of the I2 line. I know this because I own an I2 R10K-195 Impact. I don't know a thing about their metric, but since my entire rig is intended to fall within a budget of $3000 (note: still within that range), I think I'm getting some good value here. I saw that the 30 quad-processor machines cost $47,395 each at the time, and the 87 duals were $43,895 each. Their CPUs ran at 100MHz in both the dual and quad machines. I just did the math and found that cluster, minus any discounts they may have gotten, cost $5.24 million. That's almost 1,750 times my own cluster budget.

-- Using one single-processor computer to render "Toy Story" would have taken 43 years of nonstop performance.

So 80MFLOPS means 43 years? Let's do a little math here. I'm going to surmise their farm did it in 1/300th as long with 300CPUs. That's 52 days and eight hours. To render Toy Story on one of my AMD 275s would take 71 days and 8 hours. One dual-CPU node would take 35 days and 16 hours. My whole farm in 16 CPU form would do it in 4.5 days, using 22 dollars worth of electricity. (In 28-CPU form, 2 days 13 hours)
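The render-time figures above follow from one assumption, the same one made in the text: render time scales inversely with aggregate FLOPS. A quick sketch of that math:

```python
# Scaling the "43 years on one CPU" figure, assuming render time is
# inversely proportional to aggregate GFLOPS.

base_years = 43
base_gflops = 0.080                 # one 80 MFLOPS CPU
base_days = base_years * 365.25

def render_days(gflops):
    """Days to render Toy Story on a machine of the given aggregate GFLOPS."""
    return base_days * base_gflops / gflops

print(f"Pixar farm (24 GFLOPS):   {render_days(24.0):.1f} days")
print(f"One Opteron 275 (17.6):   {render_days(17.6):.1f} days")
print(f"8-node farm (281.6):      {render_days(281.6):.1f} days")
print(f"14-node farm (492.8):     {render_days(492.8):.1f} days")
```

Those come out to roughly 52, 71, 4.5, and 2.5 days, matching the figures above.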

-- Each of the movie's more than 1,500 shots and 114,000 frames were rendered on the RenderFarm, a task that took 800,000 computer hours to produce the final cut. Each frame used up 300 megabytes of data -- the capacity of a good-sized PC hard disk -- and required from two to 13 hours for final processing.

300MB? That's it? Okay, granted, the hard disk I was using in 1995 was 420MB. I could fit 300MB in a small corner of my RAM, let alone a modern hard disk. My boot disks are 6GB microdrives, and 6GB was a desktop hard disk size in 1998. (Again, obsolete data. I planned the microdrives when I was going to use the Verari nodes as-is. I have 160GB disks now, which makes this even funnier.)

-- In addition to the high-resolution final rendering, the RenderFarm was also used to generate the test images animators needed to plan and evaluate lighting, texture mapping and animation. Since fast response is key in doing tests, RenderMan could produce test frames in as little as a few seconds.

The film was rendered at 1536x922 pixels. I'm really only going for 1280x720, which is 65% as many pixels. I don't know how big of a deal the number of pixels actually is anymore.
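The pixel arithmetic behind that 65% figure:

```python
# Pixel-count comparison: the film's render resolution vs. my 720p target.
toy_story = 1536 * 922      # pixels per frame in the final film render
dashcat = 1280 * 720        # pixels per frame at my 720p target
ratio = dashcat / toy_story

print(f"{ratio:.0%} as many pixels per frame")
```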

-- Scalability is built-in: the RenderFarm can be upgraded (with more processors and disk storage) to a nearly four-fold performance level, without requiring any additional space. The RenderFarm also integrates seamlessly with Pixar's existing computer network containing different types of machines.

Scalability isn't a luxury these days--it's a requirement. I wouldn't be worth a damn if I couldn't connect a lot of computers and have them cooperate. A 24-port gigabit switch will allow care and feeding of sixteen nodes with eight lines to anything else I want to link to, such as servers, workstations and NAS boxes. Two ports for a NAS box, two ports for a server and four ports for a shotgunned link to the workstation switch. I doubt I'll realistically need more than twenty nodes. With technology moving the way it is, I'll be able to replace the motherboards by the time the cluster is inadequate. Only the power supplies may need changing.

Now Dreamworks:
Shrek render farm 2001

Product - CPU count - OS - CPU config - Description

SGI Origin200 - 406 - IRIX 6.5 - Dual R10000 180MHz - 512MB SGI RAM, 3U + 1 rackmount, 9GB HDD
PC Vendor #1 - 292 - Linux - Dual PIII 450MHz - 1GB PC-100 SDRAM, 2U rackmount, 39GB HDD
SGI 1200 - 324 - Linux - Dual PIII 800MHz - 2GB PC-133 SDRAM, 2U rackmount, 39GB HDD
PC Vendor #2 - 270 - Linux - Dual PIII 800MHz - 2GB PC-133 SDRAM, 1U rackmount 39GB HDD
SGI O2 - 190 - IRIX 6.5 - Single R10000 - 256-512MB SGI RAM, 9GB HDD


1,482 CPUs in 836 boxes: 443 dual-processor Linux boxes, 203 dual-processor Origin200s, and 190 single-processor O2s.

Pentium 3 CPUs manage one FLOP per cycle.

P3-450 boxes: 131.4 GFLOPS
SGI P3-800s: 259.2 GFLOPS
P3-800 boxes: 216 GFLOPS

SGI R10000 CPUs manage 2 FLOPs per cycle.

180MHz Origin200s: 146.16 GFLOPS
200MHz (estimated) SGI O2s: 76 GFLOPS

828.76 GFLOPS total

By the same standard of manufacturer design numbers, my Opterons nuke four FLOPs per clock cycle.

16 2.2GHz dual-core Opterons: 281.6 GFLOPS

My farm is 34% the speed of the 2001 Dreamworks farm. That's a bit of perspective there. I would need 24 nodes of the same config to have that kind of performance. I'll keep what I've got. (note: No, I won't. 14 nodes gives me 60% of the Shrek farm's performance.)
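The whole Shrek-farm comparison can be recomputed from the table above. Note the quantities are CPU counts, not box counts (which is how the totals work out to 443 dual Linux boxes and so on):

```python
# Recomputing the 2001 Shrek farm totals: CPUs * GHz * FLOPs-per-cycle.
# P3s at 1 FLOP/cycle, R10000s at 2.

farm = [
    # (cpus, ghz, flops_per_cycle)
    (406, 0.180, 2),   # Origin200, dual R10000 180MHz
    (292, 0.450, 1),   # PC vendor #1, dual P3-450
    (324, 0.800, 1),   # SGI 1200, dual P3-800
    (270, 0.800, 1),   # PC vendor #2, dual P3-800
    (190, 0.200, 2),   # O2, single R10000 (~200MHz estimated)
]

shrek_total = sum(cpus * ghz * fpc for cpus, ghz, fpc in farm)
dashcat = 16 * 2.2 * 2 * 4     # 16 dual-core 2.2 GHz Opterons, 4 FLOPs/cycle

print(f"Shrek farm: {shrek_total:.2f} GFLOPS")
print(f"Dashcat2:   {dashcat:.1f} GFLOPS ({dashcat / shrek_total:.0%})")
```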
Today was a day for pulling RAM, changing out heatsinks and doing yardwork.

Nodes pulled from the small rack for a downgrade to 2GB RAM, plus a change from a side-discharge heatsink to a straight-flow one for the CPU closest to the PSU since, as some may recall, half the airflow was choked off by the PSU sitting right up against one side of the heatsink. Instead of raising the 1U PSU to the top slot, I did things the right way with shiny new copper. I didn't get a photo of that because I was racing the weather again.

And now they're in the new rack. The master server has 8GB PC2700 RAM now, up from 4GB PC2100. In the process, I pulled eight 2GB sticks from the workstations, replacing them with six 1GB sticks in one and a pair of rogue 2GB sticks plus a pair of 1GB sticks in the other. Let's be honest, 12GB and 8GB were a lot of RAM, even for graphics work.

Remember the Dandelions from the first post? Here's why they were still standing. Yes. That's the blade from my mower.

Oh yeah. That's toast.

I need to buy about sixty 10-32 steel hex standoffs between 1" and 2.5" in length before I can rail-mount the servers. My racks have only 19" between the front and rear mounts, and the 22"-deep rails I got (24" total travel, to allow easy removal of the 24"-deep cases) will only adjust down to 20" depth.

That's where I stand right now.

If anyone is curious, the large rack is 5'9" tall from the floor to the top of the topmost ICEBOX.
I can't wait to see more pics, including after you get the lawnmower repaired and destroy the dandelions. MMMMMM, SGI O2s = tasty. I'd love to get my hands on one of those for moddin'.

I wish every build log had recipes for dinner. What are you going to be rendering/What kind of projects do you animate?

Rendering? Blender animations, mostly. I'm also trying to work out a way to have these machines do builds for UDK via the Swarm approach. I've been working on a video game since 1997 and this is the next step in making it more than a pipe dream.
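For the Blender side, the simplest farm approach is splitting an animation's frame range across the nodes with Blender's command-line renderer. A minimal sketch (the hostnames and the .blend filename are made up; each node would run its command over SSH or a job script):

```python
# Sketch: divide a Blender animation's frames across the compute nodes.
# Node names and shot01.blend are hypothetical placeholders.

nodes = [f"node{i:02d}" for i in range(1, 9)]   # 8 compute nodes
first, last = 1, 1200                            # frame range to render

frames_per_node = (last - first + 1) // len(nodes)

jobs = []
for i, host in enumerate(nodes):
    start = first + i * frames_per_node
    end = last if i == len(nodes) - 1 else start + frames_per_node - 1
    # blender -b: background mode; -s/-e: start/end frame; -a: render animation
    jobs.append((host, f"blender -b shot01.blend -s {start} -e {end} -a"))

for host, cmd in jobs:
    print(f"{host}: {cmd}")
```

Writing each frame to shared storage and letting nodes pull ranges this way keeps the coordination dead simple; no scheduler required for a farm this size.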

There will be more recipes in the future if I can source the ingredients. Daikon radish and Lotus Root shouldn't be this hard to find, seriously.
I'm pleased to announce that I've ordered the standoffs I need for assembly and they are on their way. I need to order 10-32 rack screws and 8-32 rail-to-server screws.

I've had my Dell PowerConnect 5324 switch burning in for the past week. Zero problems.
The standoffs arrived today and they are perfect for this application. The thread depth is just what I needed, with no dead space.

I've hit a snag getting more cases so I'll likely be switching to a 10-node, 40-core build. However! Since 40 cores isn't quite enough and the short rack will be freed up if I have to scale back, it means I can do another build in the 16U rack. And I think I know just the perfect candidate since it's been chilling in my storage bay for the past few months.

Two directions I can take this. Both are massively positive.
Just a quick note on what I've been doing with my downtime while waiting for screws.

Linux NetworX Evolocity blade
Original config:
Dual Xeon 2.4GHz Socket603 400FSB... boring
eATX form factor... hmmmm....

Will it blend? Okay, that's an HDAMA motherboard in there now for a test fit. It's going to require a new power supply because the original is 20+8, not 24+8 like the HDAMA needs. I have plenty.

Heatsink and RAM have enough clearance. It's a go. I might make air channels for the fans just to be thorough about CPU cooling.

Going to need a bit of Dremel work.

Dashcat2 is being downsized to 10 nodes to free up the small rack and second ICEBOX. The LNXI blade fits vertically in a cage that's 8U tall and holds ten blades. I have six. I bet I can fit a system with the same computing power as Dashcat2 in the small rack using these blades. It won't have the thermal range tolerance of Dashcat2, I'm sure. It's not designed to.

I'll be creating another work log thread for that machine. I'm calling it Project Housecat.
I have a feeling I'm going to need more funding for this. Too bad I can't link my eBay profile.

I've repositioned the gear to reflect the new configuration.

As I said, I only need one ICEBox now, but I was still facing a few hard choices so I got creative.

Sitting in the bottom of the shelf I turned into a rack extension, you see a Belkin KVM of a special breed. This one will let you view and control two nodes at once. Why would I want to do that? Remote administration. I can either go hands-on in the workshop with the LCD console I intend to buy for this or I can deal with the machines from inside the house using a heavy VGA cable (I can get a 50' cable pretty cheap) and Cat-5 Mouse/keyboard repeater. This will be golden during Winter when I don't want to go out in the snow just to mess with computers.

I can daisy-chain the KVMs and have left a space for the second one I just bought. I wasn't about to go with a 16-port KVM because the ones that I saw require unique cables that cost the Earth.

The vertical extensions my switch is mounted to served as cord tamers on the rack before I repurposed them as rails for my switch. I'll be cutting those down quite a bit.

My repurposed rack shelf is going to be packed full of cables when I have this all up and running. It's a good thing the switch is the only thing putting off any heat. Even better that it pulls air from the side.
I've received my second shipment of rails and my second batch of dual-core Opterons (One pair for Dashcat2, the rest for Project Housecat).

I also attended a recent surplus sale and bought two scrapheap Sun servers that had four 1GB PC2700 sticks in each one. With that in mind, I'm re-upgrading the RAM to 3.5GB per node for Dashcat2.

This weekend I do a burn test on every node to weed out defective fans and hopefully get drillpress time to customize the rails for my cases.
I get no drillpress time today. Bummer. I do, however, get time to do a shakedown on the various systems.

I have 32 dual-core Opterons in my possession. They are widely varied in terms of manufacture date and such. I have CCBBE and CCBWE steppings. I'm going to try to get twenty of the CCBBE chips grouped together for use in Dashcat2 because they overclock the best and should, just because that's the way things work, undervolt the best.
I have no idea what is going on, but it looks awesome, and the supercomputer circuit breaker cracked me up. Subscribed!
I am happy to say I've ordered the screws I need to put the servers in their places. I should be able to get to that this weekend.

Still to come: Finding a centerpunch in my collection, marking the chassis members of the railsets and drilling the 5mm holes I need to match the cases. I get drillpress time for sure.
Went outside to take some measurements this morning and could hardly believe my eyes.

June 17... Global warming my ass.
Climate change, man, climate change...


If climate change means Utah won't be a desert wasteland with simply evil winters and 110F temperatures in the dead of summer anymore, I say bring it on.


Where are you in UT? I thought those mountains looked familiar.
I'm working on my outdoor workstation today because it turned out my rail screws are the wrong size. I should have checked this before: I needed 10-32 screws to attach the rails to the servers, not 8-32 like I thought.

My desk is no longer useless. You may notice I've changed my displays some. I used to have a 32" WXGA panel; I now have two Dell 1800FP 18" IPS displays. That's 2.5x the pixels and actually worth a damn, though they are of the rare BGR subpixel pattern, so tuning them took a bit of doing. I bought those broken for $1 each and fixed the power supplies.


Seen in the photo:

Two Hakko FP-102 soldering irons. The base units and iron cords came from the trash at work; they read 100F off calibration and nobody knew how to recalibrate them, so they got junked and I rescued them. The iron cords were broken internally and the fix never held for more than an hour under heavy use, but home use isn't heavy. The iron holders were bought used from a surplus liquidation eBay seller, while the tip cleaners came from a stained glass store on eBay. The iron tip is one of my worn-out ones from work: good enough for home use, but piss-poor for commercial work. I only have one tip; haven't needed to double up lately.

The blue anti-static mat was another trash rescue. I had to cut it down today.

My speakers are Optimus Pro 7AV units from Radio Shack, circa 1994.
The stereo is a 1980s Hitachi bookshelf system with 80W output and a crisp tuner.
The keyboard is a 1987 IBM Model M I've yet to clean.
The desk is a Herman Miller unit I picked up at the local second-hand store in 2008 for $15. It's built well and has a pencil drawer on the right.
The printer is a 2004 Dell laser printer that won't work with Windows 7 so that's why it's in my workshop and my Panasonic laser is in the house.
On top of the printer is a Dell C-series monitor stand with my ThinkPad T60 and dock I bought from [H] member Rob Black last year.
Rack to rail screws have arrived. Correct rail-to-server screws are on order and have already shipped out.

You know you're doing a big build when screws have _any_ effect on the budget.