Latency: why typing on old computers just feels better

Latency on a modern computer can be low when engineers understand that it’s important. If you have a low-latency display, your mouse cursor latency is probably significantly less than the Apple IIe keyboard latency.

Honestly, that’s more surprising (and shameful) than most of the lineup in the linked article. Yes, the ancient teeny machines are the ones that put in impressive results, but at the cost of mostly being microcontrollers whose very lives were tied to spitting out a video signal (it didn’t hurt that color-burst oscillators were cheap at the time); more or less none of the luxuries we associate with an “operating system” were even close to available. Only the SGI really stands out as being both vaguely recognizable as modern and way more efficient than the rest.

There is nothing even remotely close to that level of difference between OS X 10.6 and 10.10 (or even 10.13), each running on a more or less architecturally modern x86 and driving an LCD by way of some reasonably complex pile of GPU.

2 Likes

I don’t think that it was ever open-sourced as such, but both by dint of being old and relatively simple and by virtue of being more of a Woz project than a Jobs one, it is fairly well understood. FPGA implementations are available.

I had this debate when our IT wanted to put our CVS repository in a different state. I told them about the importance of latency when you have a centralised VCS. They promised to use a compression appliance to increase the throughput of the link. I said they are missing the point so they said they would use a second compression appliance to compress it further…
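
To make the latency-vs-throughput point concrete, here is a back-of-the-envelope sketch (the RTT, bandwidth, and round-trip counts are made-up illustrative numbers, not measurements from that link): a chatty protocol like CVS pays the round-trip cost on every request, and compression only shrinks the bandwidth term.

```c
#include <stdio.h>

/* Rough model: total time = round_trips * RTT + bytes / bandwidth.
 * Every number below is an illustrative assumption, not a measurement. */
static double transfer_time_s(int round_trips, double rtt_s,
                              double bytes, double bandwidth_Bps)
{
    return round_trips * rtt_s + bytes / bandwidth_Bps;
}

int main(void)
{
    double rtt     = 0.040; /* 40 ms cross-state round trip (assumed) */
    double bw      = 10e6;  /* 10 MB/s link (assumed)                 */
    double payload = 1e6;   /* 1 MB of diffs (assumed)                */
    int    trips   = 50;    /* a chatty CVS operation (assumed)       */

    /* A compression appliance might halve the payload... */
    printf("uncompressed: %.2f s\n", transfer_time_s(trips, rtt, payload, bw));
    printf("compressed:   %.2f s\n", transfer_time_s(trips, rtt, payload / 2, bw));
    /* ...but the 50 * 40 ms = 2 s of round trips is untouched, and that
     * is exactly the part the appliance cannot fix. */
    return 0;
}
```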

10 Likes

Wow. Were they Best Buy Geek Squad or some shit? Is consulting really that easy of a job? I mean, I would’ve loved to work on your IT team. Sounds to me like nobody actually had to know anything. Must’ve been a breeze.

4 Likes

When one is typing at speed, latency is less of an issue. Most typists “buffer” a series of finger movements and often ignore what appears on the screen. Sometimes what you see is a bit of a surprise. (WYSIABOAS) The problem arises whenever one needs command confirmation. So many devices nowadays are terrible about this. I’ll give credit to Apple with iOS. I was also impressed by what they did with the iPad Pro. Now, if only they had an iPad Pro Mini.

Not available from that URL, FYI. :slight_smile:

I’ve long thought it would be an interesting project to make my own 8-bit computer from scratch and then create an OS for it.

13 Likes

Do you mean that the hardware isn’t available for sale there? The downloads section appears to have all the goods; just BYOFPGA.

1 Like

It’s incredibly easy with a modern microcontroller.

1 Like

Fun fact: one of the key things that you used to have to do in fencing refereeing (for epee, anyway) was to determine whether hits were simultaneous or not – simultaneous hits both count, but if they’re not simultaneous, only the first touch scores.

When electric scoring was introduced, the time window for “simultaneity” was set at between 1/20 and 1/25 of a second – 40 to 50 milliseconds. If this feels the same for a keyboard, then the 35ms latency that the Apple 2 gave you should therefore “feel” like your keypresses and the output are simultaneous. It’s interesting that in these totally different settings, the effective definition of simultaneous is roughly the same.
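
As a toy illustration, the scoring-box rule is basically a one-line comparison; the 45 ms lockout in the sketch below is just a value I picked inside that 40 to 50 ms range, not the official FIE number:

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy model of the epee rule described above: hits landing within the
 * lockout window both score; otherwise only the earlier hit scores.
 * 45 ms is an assumed value inside the 40-50 ms range, not the official
 * FIE lockout time. */
#define LOCKOUT_MS 45L

static const char *judge(long left_ms, long right_ms)
{
    if (labs(left_ms - right_ms) <= LOCKOUT_MS)
        return "both score";                 /* effectively simultaneous */
    return (left_ms < right_ms) ? "left scores" : "right scores";
}

int main(void)
{
    printf("%s\n", judge(100, 130));         /* 30 ms apart: both score  */
    printf("%s\n", judge(100, 160));         /* 60 ms apart: left scores */
    return 0;
}
```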

[insert quote from a human factors textbook here.]

4 Likes

Ooh, here we go:

The [point of subjective simultaneity] … [for] touch and visual feedback [was] 32ms with a 95% confidence interval of 20 to 43 ms.

(Kaaresoja et al. (2014) Towards the Temporally Perfect Virtual Button: Touch-Feedback Simultaneity and Perceived Quality in Mobile Touchscreen Press Interactions.)

Neat that this aligns almost exactly with the linked article in the original post. Science!

2 Likes

IIRC from driving school, as well as various other things, the time between visually noticing something and starting a conscious reaction is in the ballpark of 100 ms. That seems very high, but the thing is, fMRI studies show the brain decides on its reaction and preps the premotor cortex much faster than the person perceives their own reaction.

I don’t know how that factors in, but I’d expect intermodal timing to be a lot sloppier than just determining if two stimuli are simultaneous or not through the same mode.

I think you get the longer time because you have to factor in time to trigger a response. This is different from reflecting on a perception and making a determination (after the fact) about whether things occurred simultaneously, though I don’t know the specifics. But for something like a reaction, there’s the time to simply receive the stimulus, plus the time for the mind to be consciously aware of the stimulus enough to start responding to it.

Disclaimer: I do not have any of my human factors textbooks handy.

2 Likes

I think that some of the blame can also be assigned to object-oriented languages that make it difficult to do build-time linking with function-level granularity (i.e., if you used strlen() from the C library, only that function would be linked in rather than the whole library). When you have to link an entire class and its dependencies, even the unused parts, it snowballs rapidly.

That probably made run-time linking more attractive.

Still, 25 MB seems like padding the bill, even if it was compiled for debug. If that total includes DLLs that all programs require, I’d assign it to the OS’s tab.
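
For what it’s worth, modern toolchains can get close to function-level granularity even for plain C; here’s a minimal sketch of the idea, using GCC/Clang-style flags assumed for illustration rather than whatever toolchain actually produced that 25 MB binary:

```c
/* tiny.c: only used_fn() should survive in the final binary.
 *
 * Build (GCC/Clang-style flags, shown for illustration):
 *   cc -Os -ffunction-sections -fdata-sections -c tiny.c
 *   cc -Wl,--gc-sections tiny.o -o tiny
 *
 * -ffunction-sections puts each function in its own section, and the
 * linker's --gc-sections then discards any section nothing references,
 * so unused_fn() is dropped at link time even though it sits in the
 * same translation unit. */
#include <stdio.h>
#include <string.h>

int unused_fn(const char *s) { return (int)strlen(s) * 2; } /* never called */
int used_fn(const char *s)   { return (int)strlen(s); }

int main(void)
{
    printf("%d\n", used_fn("hello"));
    return 0;
}
```

(MSVC’s rough equivalent is /Gy at compile time plus /OPT:REF at link time.)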

3 Likes

This is one thing that drives me absolutely insane with Office 2013 and newer. In their infinite wisdom, they added animations to text entry, which makes it look glassy smooth but also adds a noticeable delay between pressing a key and the text displaying on screen. Extremely aggravating when you’re a fast touch typist. I don’t know why anybody thought this was a good idea.

At least you can disable this but it requires setting a registry key (because of course it does).
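
For anyone hunting for it, the tweak that usually gets passed around is a single DWORD value. The key path and value name below are the commonly cited ones for Office 2013; treat them as assumptions rather than something I’ve verified against current builds. Sketched here as a tiny Win32 program instead of a .reg file:

```c
/* disable_office_animations.c: sets the registry value commonly cited for
 * turning off the Office 2013 typing animation.  The key path and value
 * name (DisableAnimations under ...\Office\15.0\Common\Graphics) are the
 * ones usually passed around online; treat them as assumptions and adjust
 * 15.0 for your Office version (16.0 = 2016 and later).
 *
 * Build with MSVC:  cl disable_office_animations.c advapi32.lib
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD one = 1;
    LONG rc = RegCreateKeyExA(HKEY_CURRENT_USER,
                              "Software\\Microsoft\\Office\\15.0\\Common\\Graphics",
                              0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegCreateKeyExA failed: %ld\n", rc);
        return 1;
    }
    rc = RegSetValueExA(key, "DisableAnimations", 0, REG_DWORD,
                        (const BYTE *)&one, sizeof(one));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS ? 0 : 1;
}
```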

5 Likes

That’s not really a fair comparison. If you disassemble that “25 MB” file, there’s a lot of stuff going on in there to protect the binary both at runtime and at rest. And anyway, a high-level language like C will never be as compact as pure assembly. You can certainly get the output extremely small if you set the right compiler flags and use the right optimizations. Visual Studio may be a huge bloated mess, but Microsoft’s compilers are actually quite good.
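
In the same spirit, here is roughly what “setting the right flags” might look like with the Microsoft toolchain. The flags are real MSVC compiler and linker options; the size estimate in the comments is a ballpark assumption, not a measurement:

```c
/* hello_small.c: a size-oriented MSVC build.
 *
 *   cl /O1 /GL /MD hello_small.c /link /LTCG /OPT:REF /OPT:ICF
 *
 *   /O1        optimize for size
 *   /GL, /LTCG whole-program / link-time code generation
 *   /MD        use the shared C runtime DLL instead of linking it in
 *   /OPT:REF   drop functions and data nothing references
 *   /OPT:ICF   fold identical functions together
 */
#include <stdio.h>

int main(void)
{
    /* Built this way, a trivial program typically comes out in the tens of
     * kilobytes; most of a debug build's bulk is runtime, debug records,
     * and statically linked library code rather than your own functions. */
    puts("hello, small world");
    return 0;
}
```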

3 Likes

This is true. Disk, memory, and CPU cycles are cheap so nobody cares about optimization unless it makes a huge difference.

1 Like

Well, you know what Knuth says:

“The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.”

3 Likes

It included MFC (the Microsoft Foundation Classes), as I recall, which was a massive monolith of code.

Somewhere in the move from assembly (bare metal) to IDE (massive insulation), it got easier to code, but harder to code efficiently. Out of sight, out of mind. I’ve spent 30 years waiting for that pendulum to swing back the other way, but looking at what happens with older phones/tablets/computers, I think it’s actually being held deliberately at the far end now.

Bloat the code, then sell new hardware that can almost run it as fast as last year’s code, once you install the latest version.

5 Likes