Box fans for cooling my gaming PC case

Originally published at: Box fans for cooling my gaming PC case | Boing Boing

4 Likes

Or if one lives in a hot (relatively dry) zone, they can be used to make rather effective focused swamp coolers. (One example - but they can be scaled up a bit from that. “One for each… foot?”) The summer solstice is upon us!

2 Likes

Is this “article” a joke? This is kind of like saying “oh hey now that I’ve put tires on my car look at how I can drive around now!”

1 Like

Hey, man, water (or oil) cooling isn’t for everyone.

I’ve never heard of case fans being referred to as Box fans before. WTF.

2 Likes

Why didn’t you connect the 4-pin plugs? That fourth pin lets the PC run the fan at intermediate speeds. With 2 or 3 pins it’s always either off or full-on.

1 Like

I have 3 pins on my connector. The 4th is id’d on the board as a ground. I used what fit. The ASUS fan software reports that it is managing the fan at variable speeds.

1 Like

A quick FYI for anyone who has some 120mm fans and needs to really teach a case the meaning of cooling (also handy for producing slimline fans to fit windows that don’t open very far, to provide household ventilation):

Standard case fans will have a screw hole on each corner; and will (with the exception of some of the ultra-thin ones) have frames with relatively flat edges (and typically made of fairly durable plastic; not always fiber-filled; but PC+ABS is reasonably serious).

Simply place two fans edge-to-edge and then loop a zip tie through the two adjacent screw holes and tighten down. If you zip-tie both pairs of screw holes along the shared edge you’ll end up with a pretty solid joint (if you absolutely want to cut down on vibration, you can sandwich a piece of foam tape or elastomeric silicone between the fans being joined; but tight cable ties are usually enough to produce a joint that won’t buzz).

This arrangement can be tiled to produce strips or grids of fans in whatever dimensions you need.

Since you almost certainly won’t have enough fan headers (especially 4-pin headers) you can do a little creative rewiring to just tie +12 and 0 to your PSU (don’t try to double-up on a fan header too aggressively; blowing traces on a motherboard you care about sucks); but retain some (admittedly less granular) speed control by tying the tach from one fan to a 4-pin fan header, and using PWM from that header to control the speeds of all the fans.
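
If you end up driving one of these ganged banks from a Linux box, the software side can look roughly like the sketch below, via the lm-sensors/hwmon sysfs interface. The hwmon index and channel numbers are assumptions — they vary by board and driver — so check your `sensors` output before trusting any of the paths.

```python
# Minimal sketch: drive one motherboard PWM header that fans out to a ganged
# bank of PSU-powered fans, using the single tach wire fed back from one fan
# as a proxy for the whole bank. Linux hwmon sysfs interface; the hwmon index
# and channel numbers below are assumptions -- they vary by board/driver.
from pathlib import Path
import time

HWMON = Path("/sys/class/hwmon/hwmon2")   # assumed index; check `sensors` first
PWM = HWMON / "pwm2"                       # assumed channel wired to the bank
PWM_ENABLE = HWMON / "pwm2_enable"         # 1 = manual control on most drivers
TACH = HWMON / "fan2_input"                # RPM reported by the one tach'd fan

def set_duty(percent: float) -> None:
    """Set the shared PWM duty cycle; hwmon expects a 0-255 value."""
    PWM_ENABLE.write_text("1")
    PWM.write_text(str(round(255 * percent / 100)))

def read_rpm() -> int:
    """RPM of the reference fan; assume the rest of the bank tracks it roughly."""
    return int(TACH.read_text())

if __name__ == "__main__":
    for duty in (100, 60, 30):
        set_duty(duty)
        time.sleep(3)                      # let the bank settle
        print(f"{duty:>3}% duty -> ~{read_rpm()} RPM on the reference fan")
```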

Extreme proponents of this technique have built entire PC cases from nothing but fans and zip ties; I don’t tend to take it that far; but will admit that I’d much rather have 6 slow-n-quiet 120s and look excessive than have a less overt arrangement that ends up sounding excessive.

4 Likes

The fan does not have an arrow showing you the direction of flow. The sticker on the center of the fan indicates air blowing out that side.

If your fan has stickers on both sides: computer fans blow towards the side that the wires are on.

Also, most cases these days have space for 140mm fans. Bigger fans can move more air than smaller ones, so you can run them slower, and therefore quieter, and still get the same cooling effect.
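
As a very rough back-of-the-envelope — treating airflow as proportional to swept area times RPM, and ignoring hub size, blade geometry, and static pressure — the jump from 120mm to 140mm buys you roughly a quarter off the RPM for the same airflow:

```python
# Back-of-the-envelope: if airflow scales roughly with swept area * RPM
# (a crude assumption that ignores hub size, blade pitch, and static pressure),
# how much slower can a 140mm fan spin than a 120mm for the same airflow?
import math

def swept_area(diameter_mm: float) -> float:
    return math.pi * (diameter_mm / 2) ** 2

ratio = swept_area(140) / swept_area(120)          # ~1.36x the area
print(f"Area ratio 140mm/120mm: {ratio:.2f}")
print(f"Same airflow at roughly {100 / ratio:.0f}% of the 120mm fan's RPM")
# -> ~74%, i.e. roughly a quarter slower; and noise falls off faster than
#    linearly with RPM, which is the whole point.
```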

Not true: you can vary the speed by varying the voltage, although you might need to change the mode your motherboard is using in your BIOS. The advantage of PWM is that it can run the fans slower than is possible by changing the voltage.
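
A toy comparison of the usable low end of the two control schemes; the stall and duty-cycle thresholds here are purely illustrative assumptions (real fans vary a lot), but they show why PWM’s floor sits lower than voltage control’s:

```python
# Toy comparison of the usable low end under DC (voltage) control vs PWM.
# All thresholds here are illustrative assumptions -- real fans vary a lot.
FULL_VOLTAGE = 12.0       # standard PC fan rail
MIN_START_VOLTAGE = 7.0   # assumed: many 12V fans won't reliably spin up below this
MIN_PWM_DUTY = 0.20       # assumed: a typical 4-pin fan's reliable floor

# Under DC control, speed tracks voltage roughly linearly, so the slowest
# reliable setting is bounded by the start-up voltage:
dc_floor = MIN_START_VOLTAGE / FULL_VOLTAGE
# Under PWM the coils always see full 12V pulses, so the floor is just the
# lowest duty cycle the fan's own controller tolerates:
pwm_floor = MIN_PWM_DUTY

print(f"DC control floor : ~{dc_floor:.0%} of full speed")
print(f"PWM control floor: ~{pwm_floor:.0%} of full speed")
```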

1 Like

Small fans can really be annoying. If I go and build a rackmount server, it’s going to need to be in a 4U case; if you’ve ever been around an operating 1U server running at full tilt, that’s not something you want in your family room unless you like the constant roar of a jet engine. But even the 4U cases don’t generally use fans larger than 92mm.

You are preaching to the choir on that one… In my first real IT job my ‘office’ (while gloriously air conditioned and private, which is quite a luxury for entry level in these dark days of ‘open plan’ hellscapes) was shared with a rack’s worth of mixed server/switch/network appliance gear. (As well as mixed storage for incoming/outgoing hardware and machines under testing or prep, but those were just bulky rather than noisy.)

They eventually reshuffled things, after HQ decided that the arrangement probably didn’t meet OSHA sound exposure level requirements and they’d prefer to avoid finding out the hard way; but that was a good couple of years in.

It’s a bit of a pity, because you can get some fantastic deals on hardware in 1-3U boxes definitely not designed to coexist with humans (a lot of network appliances, in particular, are pretty much just a normal x86 box, sometimes with handy CF or DoM boot support already built in, and copious onboard NICs; but guess who always ships as a 1U with 4+ awful little 40mm fans, often not even PWM controlled?)

For my home purposes, to tame a few of said fantastic deals, I’ve been futzing with the details of a “piggyback” bracket that could be clipped onto combinations of sub-4U systems (securely but nondestructively) in order to provide shared cooling from a bank of proper fans. Kind of a lightweight, vendor-agnostic take on how blades and 2U/4-node systems are typically cooled.

I’ve definitely sworn off any motherboard with an awful little chipset fan; and (while you can’t really avoid them entirely, or get standard-size fan mounts) always go with the GPUs that have the biggest, laziest-looking fans I can find.

2 Likes

[animated GIF reaction]

1 Like

I suspect that this is inadvisable (and some fans will definitely fail to spin up, though most will keep spinning if given a shove); but the old go-to for an ‘intermediate’ voltage setting when dealing with 2-wire fans or a lack of 3- or 4-pin fan headers was to exploit the 7 volt difference between +5 and +12.

Probably less viable now, as other voltages have shrunk into vestigial shadows of their former selves (or disappeared entirely; shockingly enough, RS-232 can’t make a case for dedicated negative voltage support anymore…); while +12 has just grown and grown as a percentage of total output.
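
For anyone who hasn’t run into the trick before, the arithmetic is just the difference between whichever two ATX rails the fan is hung across:

```python
# The arithmetic behind the old two-wire "voltage step" tricks: the fan sees
# the difference between whatever two ATX rails it's hung across. (The +5/+12
# combination is the "7 volt" option mentioned above, of dubious advisability.)
rails = {"+12V": 12.0, "+5V": 5.0, "GND": 0.0}

for hi_name, hi in rails.items():
    for lo_name, lo in rails.items():
        if hi > lo:
            print(f"{hi_name} to {lo_name}: fan sees {hi - lo:.0f} V")
# -> 12 V, 7 V, and 5 V options from the standard rails
```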

It depends on the fan. I’ve had ones that have stopped at anywhere from 5-15% power, and wouldn’t restart until about 20%. With PWM you can go as low as 5% reliably.
Have you seen Intel’s new idea? 12V only from the PSU, with DC-DC converters on the motherboard to provide everything else.

I have seen that one. Given how much is already generated on the motherboard or expansion cards (it’s been a long time since +5 or +3.3 was actually suitable to directly feed logic chips on the motherboard); and we already have a few different nothing-but-12V connectors to plug in near the CPU’s power regulation bits or feed the GPU; it strikes me as less radical than it first appears, though not wholly without potential for incident.

I’d imagine that it (or a proprietary form factor variant of it) will be pretty popular with OEM prebuilds; since it reduces cabling complexity, apparently the efficiency is better; and they know exactly how much +3.3 and +5 needs to be generated to supply things like hard drives (if they aren’t just all NVMe).

It seems like it has greater potential to be a pity for slightly more exotic custom builds; I’d hate to have to choose from a total of 2 different overpriced motherboards because those are the only ones that provide enough output headers for my dozen HDDs, or enough +5V to drive my definitely-not-redundant-in-the-proximity-of-a-real-computer 5.25in-bay-mounted rPi hive.

It will be interesting to see whether, for this market, we just continue to get either classic PSUs or PSUs that supply the new 12V connector plus a bunch of the old standard connectors; whether standalone, non-motherboard voltage converters designed to cut the 12V into what you need become readily available; and what level of header allotment and power delivery from the motherboard becomes the de facto ‘standard’ on boards not aimed at specialty users.

In some respects it’s honestly a little surprising that it took this long to show up: I can’t speak for all reasonably current servers; but most of the ones I’ve gotten hands on, or had occasion to read up on, already use PSUs that supply a single voltage, either straight to the motherboard (presumably on models where the economies of tighter integration made up for the risk of needing more SKUs) or to a voltage conversion board (where the versatility is presumably worth the extra parts and cabling).

The other slight surprise (I assume because of relatively short cable lengths, plus lots of drive motors and fans where being able to pass +12 through without conversion is handy) is that Intel didn’t take the opportunity to bump the voltage. Lower voltages mean higher resistive losses to deliver the same power; and while high-voltage DC would be a whole different kettle of fish, bumping it to 24V would require minimal changes in wiring insulation and such to remain safe; and thanks to PoE even ~48V is pretty well supported and nonscary even in fairly cheap and lousy devices. I have no doubt that Intel’s electrical engineers had their reasons, and know them far better than I do; but I’d be interested to hear what they are.
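
For a sense of scale, the I²R arithmetic behind that point, with an assumed (purely illustrative) load and cable resistance:

```python
# The I^2*R arithmetic behind "lower voltages mean higher resistive losses":
# for a fixed power draw, current scales as 1/V, so loss in the cabling scales
# as 1/V^2. The load and cable resistance below are illustrative assumptions.
LOAD_WATTS = 300          # assumed system draw fed over the harness
CABLE_OHMS = 0.01         # assumed round-trip resistance of the cable run

for volts in (12, 24, 48):
    current = LOAD_WATTS / volts              # I = P / V
    loss = current ** 2 * CABLE_OHMS          # P_loss = I^2 * R
    print(f"{volts:>2} V: {current:5.1f} A, ~{loss:4.1f} W lost in the cable")
# -> the loss drops to a quarter with each doubling of the supply voltage
```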

The Dell below has gone pure 12V; the little edge connectors are just for status signaling between the PSU and the BMC, and all the power goes over the two huge ones.

SuperMicro’s offering here is a little more conservative (possibly because the server uses a power distribution board between the PSUs and the motherboard, so supplying standby power to the USB ports might be tricky to do efficiently), and has a few amps of +5V to go with the 62-ish amps of +12.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.