If anyone wants to test a Lear Siegler ADM-20 for latency, I have one on the shelf behind me.
ASCII - Boing Boing is the true way…
Every layer of indirection we slap onto the software stack is yet more latency. And lately we code atop vast tottering software stacks that we stop from collapsing by shoveling more material on them all the time.
Of course things are slow. Screw slow. I’m amazed any of it works. Consider something like Microsoft’s Visual Studio Code, which is actually a JavaScript application pretending to be a native application and which, since it uses the Electron framework, loads a userland driver for the Xbox controller.
Yes.
You could write much faster applications if you really wanted to, but you’d have to do it using The Old Ways, and the thing about The Old Ways is that you need competent coders, and those have always been in short supply. The whole reason for the vast, tottering heaps of the software stack is that you can make it all work slightly faster, a whole lot cheaper, and with really crappy coders.
That said, some indirection is not optional. The Apple II could afford to have preposterously low latency because it could do a really limited number of things. When the screen is displaying its characteristic rainbow text, that’s pretty much all it does. When a modern computer is displaying text, it can go from that to a 3D render to high-definition video and back to text in the time it takes the user to slam the alt-tab combination a few times. If you want that to be possible, you have to put some indirection in. Also if you want things to be secure; that’s a lot more indirection right there. The Apple II had absolutely no security whatsoever and didn’t need it. Every application it ran had full, unrestricted access to all memory, but considering it was the only application running, what was the harm in that?
In summation, it’s not fair to compare current computers to old computers because they do fundamentally different things, but modern software does waste unbelievable amounts of user time.
Needs a black background with green text version. (Maybe one already exists? Paging @beschizza.)
well, unless that would glitch the podcast i’m trying to listen to
Latency, similar to security, is an externalizable cost in software design because the user has insufficient visibility into what’s going on.
If the software doesn’t have a feature it’s supposed to, people will choose different software. If it has a confusing interface, people will give up on it because they feel confused. If it has a higher price than they can afford, they won’t buy it. The user has all the information they need to understand what’s reasonable to put up with in these cases, so the software must be designed to meet their expectations.
But if it has high latency, generally people will not notice. Or if they do notice, they’ll tolerate it without much complaint. After all, lots of computer functions do take a bit of time. How are we, the users, to judge what’s a reasonable and necessary delay, and what’s just lazy design?
This leaves users bearing the cost of poor design simply because we don’t understand enough to know to complain, and we can’t quite put our finger on what’s amiss.
Ultimately, we get slow software because we don’t care. But marketing basically determines our priorities when it comes to consumer technical specifications; it determines what we’re able to care about. I wonder what the user experience would be like if, rather than advertising the GB, DPI, and MP on a phone, they advertised the ms?
It’s the page you get to by mistake if you hit enter before hitting down arrow after typing Boing into Firefox’s address bar.
In general, all these systems support low-latency software: games. Sure, keyboard drivers and the like add a few milliseconds here and there, but whenever you look at some game rendering at 60 or 120 Hz and responding instantly, it’s a proof by example that the issue isn’t the OS or the drivers.
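To put a rough number on the app side of that, here’s a toy sketch of the kind of loop a game runs (assuming you have pygame installed; the names and layout are mine, not from the article). It only times the application’s own work between polling input and presenting a frame, not the keyboard, compositor, or display, but that’s exactly the part a game keeps tiny:

```python
# Toy sketch (requires pygame): a ~120 Hz loop that times how long the
# application itself takes from polling input to presenting the next frame.
# Keyboard, compositor, and display scan-out latency are NOT included.
import time
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
font = pygame.font.SysFont(None, 48)

running = True
last_ms = 0.0
while running:
    frame_start = time.perf_counter()

    pressed = False
    for event in pygame.event.get():          # poll the OS input queue
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            pressed = True

    screen.fill((0, 0, 0))
    label = font.render(f"app-side: {last_ms:.2f} ms", True, (0, 255, 0))
    screen.blit(label, (20, 20))
    pygame.display.flip()                     # hand the frame to the compositor

    if pressed:
        # time spent inside the loop for the frame that reacted to the key
        last_ms = (time.perf_counter() - frame_start) * 1000.0

    clock.tick(120)                           # cap the loop at ~120 Hz

pygame.quit()
```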
The reality is just that most programmers and software shops couldn’t care less about latency and get annoyed when some irritating person questions their competence in that regard.
This is definitely the biggest issue. What’s more, all of that adds complexity, making it harder to understand and optimize any of it. Harder to get it even working.
For some definitions of ‘work’, sometimes, if you don’t mind it being buggy, laggy, somewhat insecure, and unmaintainable due to all the incidental complexity.
They barely touched on web sites/software other than bloat, but there is more to it. Behind that bloated web page with all of its JS frameworks and CSS frameworks lies at least one tottering stack of network and server-side software dynamically building the HTML from data pulled from other applications that live on other systems another network step (or more) away from the actual application building the page. Some of those applications do the same thing, pulling from other systems across a network (which might be doing the same thing). Trends like REST/SOAP microservices etc. add to that. There can be a lot of requests and network latency behind the scenes before anything even gets to your browser.
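To make the fan-out point concrete, here’s a toy sketch (service names and delays are completely made up) of why a chain of sequential backend calls costs the sum of their latencies, while a parallel fan-out costs roughly the slowest one:

```python
# Toy sketch with made-up numbers: how server-side fan-out stacks latency
# before the browser sees a single byte. Each "service" is just a simulated
# delay standing in for an HTTP/RPC round trip.
import asyncio

# Hypothetical per-call latencies in seconds (illustrative only).
BACKENDS = {"auth": 0.030, "profile": 0.045, "recommendations": 0.120, "ads": 0.080}

async def call_service(name: str, delay: float) -> str:
    await asyncio.sleep(delay)               # pretend network round trip
    return f"{name} ok"

async def build_page_sequential() -> None:
    # One call after another: the latencies add up to roughly 275 ms.
    for name, delay in BACKENDS.items():
        await call_service(name, delay)

async def build_page_concurrent() -> None:
    # Fan out in parallel: the total is roughly the slowest call (~120 ms),
    # not the sum. Architecture matters as much as bloat.
    await asyncio.gather(*(call_service(n, d) for n, d in BACKENDS.items()))

async def main() -> None:
    loop = asyncio.get_running_loop()
    for builder in (build_page_sequential, build_page_concurrent):
        start = loop.time()
        await builder()
        print(f"{builder.__name__}: {(loop.time() - start) * 1000:.0f} ms")

asyncio.run(main())
```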
Amber.
As a dead canary warning, the ~monthly update for Visual Studio is 4.48 jiggabytes! That’s just the core stuff plus Xamarin. No Unity or other fancy stuff.
At what point did Microsoft fail their sanity check on that?
(I suppose I could update less frequently, but Android Studio, another monster, got me in the habit of not ignoring updates, or I’d have to do a complete reinstall to get to the latest version.)
Need to find myself an all-metal 3270 keyboard.
As others have pointed out, the main problem is in the software, not the hardware interface this article seems to be mainly focusing on. It’s not just the software people are writing, it’s also the runtime environments programs use and the OS services and kernel too.
Here’s a good video about all of this stuff, with a novel possible solution at the end:
I mean, hardware does make a difference. In the old days everything was much closer to the metal and latency was super low. These days the amount of things that need to happen between pressing a key and having it show up on the screen is pretty mind-boggling. There are just so many layers of abstraction (and security) getting in the way. Combine that with shitty software and you have a recipe for latency across the entire stack.
Well sure, everything makes a difference; there’s a string of different latencies you have to add together, “cycle stacking” as they call it in the article. The majority of the problem, though, and the part most easily fixable in theory, is the software.
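Here’s the back-of-the-envelope version of cycle stacking; every number below is a placeholder I pulled out of the air, not a measurement, but the point is that the end-to-end delay is just the sum of every stage that touches the keypress:

```python
# Back-of-the-envelope sketch of "cycle stacking": end-to-end latency is the
# sum of every stage's contribution. All numbers are illustrative placeholders.
STAGES_MS = {
    "keyboard scan + USB polling": 8,
    "OS input handling / drivers": 2,
    "application + toolkit layers": 20,
    "compositor waiting for vsync": 8,
    "display refresh + panel response": 12,
}

for stage, ms in STAGES_MS.items():
    print(f"{stage:35s} ~{ms:3d} ms")
print(f"{'total, keypress to pixel':35s} ~{sum(STAGES_MS.values()):3d} ms")
```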
I really miss ctrl-C. Fuck you computer, you are going to stop whatever you’re doing and pay attention to ME.
This all reminds me of the switch to OS X. It would seem slower but you had to always reassure yourself that it didn’t matter in the short term because you were doing so much more in the long term.
That’s why the SGI Indy in the ‘Computer latency: 1977-2017’ list is most notable.
It is beaten by the 6502-and-no-extras brigade, though not by much, while still running an OS that, while not contemporary, is at least architecturally more or less modern.
Hahaha. This topic is here because it was Thanksgiving yesterday, and everyone has just finished working on their parents’ computers.
Almost but not quite: our own computers, installing more RAM so my wife will hopefully go into a meltdown over getting the spinning beachball (speaking of latency!) a little less often.
Note that as with many problems, people will tend to blame the wrong things when they do experience latency, so why would the people responsible care at all?
I recall reading a cell phone review where the reviewer gushed about how one phone’s screen was “very responsive” while the other phone had a “slow display.” Clearly if the device isn’t responding, someone needs to go yell at those pesky LCD engineers and not bother the software people… right?