This computer is never obsolete

Define “going bad”? Bad sectors that would or would not go away with a reformat? Thumbdrives with read errors or going completely dead? So many ways things can fail…

They’d be unformattable. First files would go corrupt, then Windows wouldn’t be able to open the drives and floppies, and before you knew it they wouldn’t even take a reformat.

Sounds like an issue with corrupted writes.

A thought: check the facility’s power lines with a datalogger. The power may be dirty, and the cheap power supplies may let some of the EMI through.


I acquired some from a university physics department when they switched to PCs for most of the experiments. As far as I can tell, the switch involved a bit of a downgrade. At a minimum, the PCs have significantly worse interrupt latency than either the BBCs or the Archimedes machines.

I suspect the Raspberry Pi will be filling this niche in the long run now. Similar I/O to the beeb, but dirt cheap.


It’s a consequence of being used in schools. Dirty disks contaminate the heads, which contaminate more disks, which makes them unreadable. Reasonably well looked after disks and drives last fine - I have plenty of disks with perfectly readable content that are 20 years old now, and drives to match.


That’s an excellent point and that’s why I wrote "I had the hobby of… " :grin:

This eMachines box is less capable and less power-efficient than, for example, a Raspberry Pi.

I started to replace my old rebuilds with single-board computers as soon as they became affordable (such as the Texas Instruments BeagleBoards).

But sometimes, and in some places, there won’t be enough money even to buy an RPi (in a third-world country, for example), or to rewrite for an SBC a software package that works very nicely on an old PC, and so on.

That was my thought: people think power surges are the enemy, but so are power drops. And most places don’t have a UPS or Purewave protecting all standard plugs.


The switching power supply should correct an average value that is too high or too low, though if the voltage gets too low, some parts may overheat. Spikes can be pretty damaging too. Short dips that go too low for too long, which the capacitors on the power bus cannot cover, may confuse the electronics. But there is also another problem: conducted EMI. That may couple capacitively through the transformers and other parts, escape the filtering on the power bus, and cause intermittent faulty behavior.

Thinking about a datalogging voltmeter that’d be able to catch such spikes and dips…
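Once you have a sampling front end, the logging half of such a voltmeter is simple software; a minimal sketch, assuming a European 230 V nominal and a ±10% tolerance band (both assumptions, with simulated readings standing in for real samples):

```python
from dataclasses import dataclass

NOMINAL_V = 230.0   # assumed European mains; use 120.0 in the US
TOLERANCE = 0.10    # flag anything more than +/-10% off nominal

@dataclass
class Event:
    timestamp: float
    voltage: float
    kind: str       # "dip" or "spike"

def scan(samples, nominal=NOMINAL_V, tol=TOLERANCE):
    """Return dip/spike events from an iterable of (timestamp, volts) pairs."""
    lo, hi = nominal * (1 - tol), nominal * (1 + tol)
    events = []
    for ts, v in samples:
        if v < lo:
            events.append(Event(ts, v, "dip"))
        elif v > hi:
            events.append(Event(ts, v, "spike"))
    return events

# Simulated capture: steady mains with one dip and one spike
readings = [(0.0, 231.0), (0.1, 229.5), (0.2, 180.0), (0.3, 230.2), (0.4, 265.0)]
for e in scan(readings):
    print(f"{e.kind} at t={e.timestamp}s: {e.voltage} V")
```

A real instrument would need to sample well above mains frequency to catch sub-cycle transients; this only shows the flagging logic.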

And if you don’t have, say, ECC RAM, your buffers may get subtly corrupted. Miss a critical MOV or JNZ because the power was too low for a micro-moment? Write that to the heap, then dump it to disk?
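To illustrate the failure mode: without ECC a single flipped bit goes through silently, and only shows up if something checksums the buffer. A sketch (the payload and bit position are made up):

```python
import hashlib

def flip_bit(buf: bytearray, bit_index: int) -> None:
    """Flip a single bit in-place, as a brown-out or cosmic ray might."""
    buf[bit_index // 8] ^= 1 << (bit_index % 8)

payload = bytearray(b"critical ledger record #42")
checksum_before = hashlib.sha256(payload).hexdigest()

flip_bit(payload, 13)   # one bit out of 208; nothing complains

checksum_after = hashlib.sha256(payload).hexdigest()
# Without the checksum, the corrupt payload would be written to disk as-is.
print(checksum_before == checksum_after)  # → False
```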

It’s exactly why we data-center guys don’t just run racks out of our garages. (I might, though; I had 40 amps of 220 installed a few weeks ago.)


What about taking lead-acid or LiFePO4 batteries, making a power bank of sufficient voltage for the stock power supplies to cope with (they rectify the AC feed anyway), and just charge the batteries from the mains? Then when the mains power goes down, or glitches, the batteries will cover for a fairly long time.
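Back-of-the-envelope sizing for such a bank can be sketched like this; the 50% usable fraction for lead-acid and the 90% conversion efficiency are assumptions, not measured figures:

```python
def runtime_hours(capacity_ah, battery_v, load_w,
                  usable_frac=0.5, conversion_eff=0.9):
    """Rough battery-bank runtime estimate.

    usable_frac: lead-acid shouldn't be discharged much past ~50%;
    LiFePO4 can typically go deeper.
    conversion_eff: losses in whatever sits between the bank and the load.
    """
    usable_wh = capacity_ah * battery_v * usable_frac
    return usable_wh * conversion_eff / load_w

# Example: a 100 Ah, 12 V battery feeding a 60 W retro box
print(round(runtime_hours(100, 12, 60), 1))  # → 9.0 hours
```

Feeding DC straight into the stock supply’s rectified input, as suggested above, would push the conversion efficiency higher than an inverter would.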

The electric car task force I consult for is developing open-source BMS modules usable for these purposes too.

Envy! I have lousy 3x25 amps in my inner city apartment! (Maybe, if I’d couple together both the neighbouring ones, I’d get a bit over 30 kilowatts of power. Rather lousy, for a rather small accelerator…)


And lead-acid batteries really aren’t that expensive. The deep-discharge ones carry a premium, but power here is fairly stable, so it would be for emergencies that last hours, not days.

I wonder how many Pis I could run… Do Pi distros support Docker yet…?


A UPS snafu is, in my experience, more likely than a mains power glitch. I try to use only equipment with two power supplies, attached to separate circuits.

And you can run on worn batteries; a forklift battery that doesn’t hold a shift’s worth of charge anymore can still be good enough for a small datacenter’s clean shutdown. If you’re lucky, you can get one for the cost of scrap metal.

How’s the car doing? It sure is a purty little thing (tho I’m not totally sure about the rear. Kammbacks are always cool, just sayin’ :slight_smile: ).
I saw a thing on H-A-D today about some guy who’s trying to make reasonably priced fuel cells, which seems relevant to your interests, you seen it?


I’ve got an old P6-200 Vectra with Warp Server on it - runs reasonably well with 32 MB of RAM. If I were still running an internal network, it would do a credible job as the domain controller. Warp’s problem was that the Presentation Manager GUI was a bit clunky (as was its API), and you had to know what you were doing when setting up services. If you did know, however, it was a lot leaner than NT for running equivalent workloads.


NT did have a reasonably accessible, understandable registry. I’m not familiar with Warp’s… was it easy to use and manage?

Mostly config.sys for the core O/S. OS/2 used text files to store configuration data - not unlike Linux in that regard. It’s much easier to recover from a corrupt configuration text file than a corrupt registry hive.

From the point of view of managing configuration, there were applications similar to those in Control Panel. They weren’t necessarily easier to use than NT’s; I recall the “feel” of Big Blue software being quite different. I can’t go into detail because I haven’t used that system in several years and I’ve forgotten a lot. It still works, though, and I have it if I ever need it again (or want a museum piece ;)).

Warp Server was ahead of NT in a few things - my own installation uses a journalling file system, which isn’t too shabby for something of that vintage.


Yes, Novell was ahead too. But that was before the wars.

I have the same hobby, except I usually deal with PPC Macs. :grin:


Yeah, but look how long it took MS to add Mahjong to Windows… :wink:
