How storage ended up tiny

Let’s see if I’ve got this correct:

Rob- Isn’t it weird that these people argue for the superiority of a hardware format that wasn’t widely adopted?

Also Rob- Here’s twelve articles about the Commodore Amiga.


Look, RAM cables just made sense! The options they give you for positioning RAM are endless, dammit!!!

With all the fancy RGB LED options on RAM nowadays, everyone should want to be able to reposition RAM sticks to get the most impact out of the lighting.


I think that is a lifelong process. A journey to the end of the rainbow. :wink:


Nevermind hot-swap, relatively easy cold-swap would be nice.

I’ve got a little home server with a RAID of [hot-swappable!] SATA disks to back my laptop up to, which boots off an M.2 SSD. Initially I got the cheapest SSD I could. This was a mistake, and within a few months it developed numerous bad sectors and the system started acting wonky.

I bought a fancy Samsung stick to replace it… and had to take the entire server apart because the M.2 slot was on the wrong side of the motherboard.

The M.2 format isn’t really the problem there…


Most people don’t use more than one or two drives in their computers.

Shoot, there I go again not being one of those “most people.”


Yeah, me as well. Though I only have 3 right now. 4 is better…

Is there signal processing/acquisition as well as storage going on there?

Compared with my experiences in more banal computing contexts the coax/fiber ratio looks profoundly weird for a storage system.

Honestly seeing coax at all is pretty weird. I’d have expected a mix of fiber for connections to and between the storage nodes and a bunch of copper twisted pair handling the low speed ethernet for management interfaces and stuff.

Please tell me that primeval dire ethernet isn’t secretly lurking in the Antarctic, waiting until the time is right to wreak a terrible vengeance…

Just a Tuesday?

The coax cables are the radio IF signals, reference clocks, timing pulses, and other status signals. There’s a hydrogen maser upstairs for a frequency reference. The disk arrays are fed by 10 Gbit/sec Ethernet cables.

There’s supposed to be a big press release next week, describing this system in detail. It’s part of the Event Horizon Telescope.


and yet to me it’s obvious. Why would anyone prefer a bulky, case-bound SSD with thick double-decker connectors and annoying rubbery cables over one that looks just like a wee stick of RAM?

Because once it’s in the machine, I don’t care. I basically have to handle it twice over its entire life: once at the beginning during installation, and once at the end when it’s being replaced. External drives can use USB 3 or some other sufficiently fast interface. We’re not stuck in the days of USB 2 any more.

The whole heatsink (or lack thereof) issue is usually addressed in newer hardware. High end SSDs come with heatsinks now, and modern motherboards include them too. We’ve been using some of the Intel NUCs at work, and they have a strip of thermal pad on the back plate which is positioned right above the SSD. Mind you, as far as I can tell it doesn’t quite touch it, but the thought was there.

You can get 4 TB M.2 cards now, and probably fit three in the same space as a 3.5" drive :wink:


Flash storage can get pretty preposterously dense (Samsung has a 30 TB 2.5-inch SAS option); and I assume that, so long as someone has produced a vertical mount for the M.2 cards so they don’t have such a footprint, your ability to cram those in is limited more by running out of PCIe lanes, or by bottlenecks elsewhere in the system, than by mechanical concerns.
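To put a rough number on the lane budget: a quick sketch, assuming illustrative figures (PCIe 3.0 link rate, x4 slots, and a 64-lane CPU; none of these come from the thread itself):

```python
# Back-of-envelope: how many x4 M.2 NVMe drives a CPU's PCIe lanes can feed.
# All figures are illustrative assumptions, not specs for any particular board.

PCIE3_LANE_GBPS = 8 * (128 / 130)  # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
LANES_PER_M2 = 4                   # typical M.2 NVMe slot width
CPU_LANES = 64                     # assumed usable lane count on a server CPU

per_drive_gbps = PCIE3_LANE_GBPS * LANES_PER_M2  # bandwidth ceiling per drive
max_drives = CPU_LANES // LANES_PER_M2           # drives before lanes run out

print(f"~{per_drive_gbps:.1f} Gbit/s per drive, {max_drives} drives max")
```

So on those assumptions you run out of lanes at 16 drives, well before you run out of physical room.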

It’s just the small matter of the price. There is probably some negotiating room built into the $12,000-per-drive list price on the Samsung; but your wallet isn’t going to escape unscathed.

And the SAN vendors offering ‘All Flash Arrays’? Cool toys; but in the “no price listed; we’ll send a sales team if we think you are serious” price range.

Absolutely ludicrous I/O, so if you are paying $$$$ per core for your Oracle licenses or the like, it’s not necessarily a bad deal at all (and the mechanical solution that could match that performance, if one even exists, would be just silly; probably rooms full of drive shelves); but it’s definitely not the masses’ mass storage.


Do you happen to know what file system they’re running on that volume? Is it a SAN?

This thread (and the original post) is a little silly.

There are multiple form factors of NVMe drive on the market today. M.2 is just one of them, and it is used for appropriate applications, like laptop boot drives. Why would you need a hot-swappable laptop boot drive? And why would you want a much larger form factor drive, when you don’t need it?

Then there are the form factors available for server-based applications. These are more appropriate for that role. They are physically larger, among other things, thus the storage density per drive can be/is getting pretty nutty.

This article kind of has the tone that M.2 is the only NVMe flash storage form factor that won out.

This might actually be a good solution for the next generation of VLBI recorder, which will need to stream 32 Gbytes/second for an hour. We’d need a lot of them, of course, and it will be fed by several 100G Ethernet cables.

Our working group still doesn’t have any volunteers to design that, so if anyone wants to have fun storing a LOT of data, be my guest.
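For anyone tempted to volunteer, here is a rough sizing of that requirement. The 32 Gbytes/second-for-an-hour figure is from the post above; the per-drive capacity and write rate are hypothetical numbers I’ve plugged in for illustration:

```python
# Rough sizing for a recorder that must stream 32 GB/s for one hour.
# Stream figures come from the post; drive specs are illustrative assumptions.

STREAM_GBYTES_PER_S = 32       # required sustained ingest rate
DURATION_S = 3600              # one hour
DRIVE_CAPACITY_TB = 4          # hypothetical NVMe drive capacity
DRIVE_WRITE_GBYTES_PER_S = 2   # hypothetical sustained sequential write rate

total_tb = STREAM_GBYTES_PER_S * DURATION_S / 1000           # data per hour
drives_for_capacity = -(-total_tb // DRIVE_CAPACITY_TB)      # ceiling division
drives_for_bandwidth = -(-STREAM_GBYTES_PER_S // DRIVE_WRITE_GBYTES_PER_S)
drives_needed = int(max(drives_for_capacity, drives_for_bandwidth))

print(f"{total_tb:.1f} TB per hour; at least {drives_needed} drives")
```

On those assumptions the array is capacity-bound (about 115 TB per hour of observing) rather than bandwidth-bound, so the drive count is set by how much you record, not how fast.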


If you apply this analysis, it seems easy to pick the winner.

The lighter product wins because it takes less material and therefore costs less to fabricate. Heat sinks ain’t free.


It’s a Haystack Observatory Mark 6 VLBI data recorder. Here’s a paper describing it in some detail.


That is a strange paper. If they did this in the 2012-ish timeframe, I’m wondering why they felt they needed to write storage-system software from scratch, and why they say RAID would have been too slow for the ~4 GB/s data-rate requirement (16 Gbit/s peaking to 32 Gbit/s is what the paper says). I was putting in a lot of SANs faster than that in 2012, and the RAID-based storage appliances used were not an impediment to writes in that speed range or much higher, across one or several appliances using a clustered file system (StorNext).

There’s a lot of build-it-yourself mania in the radio astronomy world. Also, they wanted to be sure that there wouldn’t be ‘brief pauses’ while some unknown OS thing happened, since the data are being emitted continuously, and every byte wants to get recorded.

The next generation is likely to be off-the-shelf hardware.