Display PostScript was the final word in WYSIWYG (in theory, anyway). But it did have more overhead than most systems of its time. MS Word, on the other hand, was designed to rack up buzzwords to dazzle an IT manager’s eyes and make sales. Display PostScript was kinda neat. “Neat” is not a word ever used to describe MS Word by someone who actually had to use it.
Not directly comparable, I agree. One was a micro environment: if you wanted “Hello World” in the upper-left corner, you copied those 11 bytes to the start of the video card’s memory. A decade later, on a PC, with the restriction that the application had to look like every other Windows app, you had to include all the libraries that let you do that.
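(For anyone who never did it, a minimal sketch of that style of output, assuming bare-metal x86 in text mode with the VGA buffer identity-mapped at 0xB8000. On the micros being described it was simpler still: just the 11 character bytes, no attribute bytes. A protected-mode OS will, rightly, refuse the pointer access.)

```c
/* Write "Hello World" straight into text-mode video memory.
 * Assumes bare-metal x86: VGA text buffer at 0xB8000, where each
 * cell is a (character, attribute) byte pair. */
int main(void) {
    const char *msg = "Hello World";                 /* the 11 bytes */
    volatile unsigned char *vram = (volatile unsigned char *)0xB8000;

    for (int i = 0; msg[i] != '\0'; i++) {
        vram[2 * i]     = (unsigned char)msg[i];     /* character byte */
        vram[2 * i + 1] = 0x07;                      /* light grey on black */
    }
    return 0;
}
```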
The problem was that the code grew in size a lot faster than the memory/CPU did, so things slowed down, just like the article above is talking about. I’ve worked on both sides of that wall, and customers on the fast side of it were not happy with the other side. The machine they worked on might have been cheaper, and the company they worked for might have spent less money, but the end result might not have been more cost-efficient.
Just ask a teller processing deposits on a Friday afternoon what they think of having to redo a transaction several times when there’s a line of customers getting impatient in front of them.
The Mac SE/30 was the first Mac that felt like it was responding as I did things (moving the mouse, clicking, and typing). Typing on my old Apple //c is still the best writing experience I’ve had, but that might have been the environment as much as the machine.
Carpal tunnel was “solved” more by education than by hardware changes; most keyboards today aren’t that different from the late '80s (and possibly worse).
MFC isn’t really that bad (it’s just old and outdated), but yeah, it’s big and bloated. Statically linking it is a surefire way to get a giant binary. There are so many better frameworks out there these days.
Given the budgets and deadlines on which giant tottering heaps of features are expected (‘small’ and ‘tight’, respectively), and the gruesome security and stability failures that routinely occur, I imagine you might be waiting quite some time for the pendulum to swing.
Even more so in cases where the same program is expected to run on multiple platforms that don’t necessarily share much aside from having the same abstraction layers ported to them.
*drinks tall glass of 10W-40*
“Ah, functional.”
The SE/30 is perhaps one of the finest Macs ever made.
First major OS built with “objects from the ground up”, basically today’s Mac OS running on 1990-era hardware. Not surprising that it lagged.
It drives me crazy that the new Logitech Harmony remotes sacrifice buttons by overloading certain keys with long/short-press behavior. Short-press FF to fast-forward, long-press to skip. It’s just infuriating.
Not to mention the new Apple TV remotes with the touch pads are perhaps the biggest pieces of shit ever. Hey, here’s a great idea, let’s have a remote with zero tactility!
Yup. I have a Harmony remote: no general-purpose tactile DVR-list button, or even short-skip buttons. Those are all on the touch screen.
However, there’s a “Red” button, a “Green” button, a “Blue” button, and a “Yellow” button, as well as an “A” button, a “B” button, a “C” button, and a “D” button. None of them are manually programmable from the remote’s own interface (in fact, none of the tactile buttons are programmable from the remote itself at all). There are also several buttons that appear to have different-sized light bulbs on them. None of them do anything for any of my devices. The least Logitech could do is ship a little app on the remote that lets you map buttons. It’d take, like, an extra 10 hours of programming time to make that. Probably less, since whoever wrote that remote’s OS needed debugging tools that probably do just that but didn’t get included in the final release.
The Harmony software on my PC that I used to initially set up the remote doesn’t have any way of manually assigning functions either.
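(To be fair to that “10 hours” estimate, the core of such a feature really is tiny. Here’s a sketch of what a remap table might look like; every button ID and IR code below is invented for illustration, not anything from Logitech’s actual firmware:)

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical firmware-side remap table. A "map buttons" app would
 * simply let the user edit these entries. */
typedef struct {
    uint8_t  button_id;  /* physical key: "Red", "A", ... */
    uint32_t ir_code;    /* IR command to emit when pressed */
} KeyBinding;

static KeyBinding bindings[] = {
    { 0x11 /* Red   */, 0x20DF10EF /* hypothetical: DVR list   */ },
    { 0x12 /* Green */, 0x20DF32CD /* hypothetical: skip back  */ },
    { 0x13 /* Blue  */, 0x20DFB24D /* hypothetical: skip ahead */ },
};

/* Return the IR code bound to a button, or 0 if unbound. */
static uint32_t lookup(uint8_t button_id) {
    for (size_t i = 0; i < sizeof bindings / sizeof bindings[0]; i++)
        if (bindings[i].button_id == button_id)
            return bindings[i].ir_code;
    return 0;
}

int main(void) {
    printf("Red button emits 0x%08X\n", (unsigned)lookup(0x11));
    return 0;
}
```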
You know how Logitech decided to fix the problem of having to hold the remote vertically while looking at a touch screen to do a bunch of functions? They put a second IR emitter on the rear surface of the remote. Instead of making a functional piece of hardware.
At this point, it’s almost enough to make me build my own effing setup that’s actually comfortable to use and does what I need at the same time.
Wow. The more I think about interfaces I deal with every day, the angrier I get.
“I still fervently believe that the only way to make software secure, reliable, and fast is to make it small. Fight Features.” ~ Andrew Tanenbaum
The interface on Amazon Fire OS is absolute trash. I assume its code is a slapdash kludge built on an obsolete version of Android by developers that barely cared. If more people knew about and tried Kodi, no one would ever buy a Fire Stick or Fire TV.
I haven’t used a Harmony remote in years, but I seem to remember the software used to let you do that. Not surprised, as their Unifying software is also inferior to their old mouse and keyboard manager.
Given that the “click” is an immediate physical indicator that the system has been sent your keystroke for ingestion and processing, I would argue that it is not an illusion at all. But I prefer clears over blues.
VR latency needs to be down in the <=10ms region to avoid massive motion sickness and disorientation: the sensory disconnect between your head movements and the graphics output causes extreme vertigo. That may go a long way toward explaining why my one and only VR experience made me nauseated in 15 seconds flat. And I can attest that 30ms of mouse-to-screen latency over Steam In-Home Streaming is easily enough to disconnect your physical actions from their consequences and render some games entirely unplayable.
My favorite is the loop from voice articulation to hearing. If you wear a pair of sound-isolating headphones that play your own voice back to you (but on a delay of 200ms or so), suddenly you can’t speak except in a very stilted manner. Your body just won’t let you get the next word out sensibly until you’ve heard yourself say the last word.
Apparently it’s also used as a treatment for stuttering.
A YouTube search for “delayed auditory feedback” will turn up a bunch of good (and not-so-good) examples of the effect, though it’s totally different to experience it yourself.
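(If you want to try it on yourself, here’s a minimal sketch using the PortAudio library: a ring buffer that plays the microphone back 200ms late. The delay and buffer sizes are just reasonable defaults, and you’ll want closed headphones so the output doesn’t feed straight back into the mic.)

```c
/* Delayed auditory feedback: play the mic back ~200 ms late.
 * Build: cc daf.c -lportaudio  (http://portaudio.com) */
#include <portaudio.h>
#include <stdio.h>

#define SAMPLE_RATE   44100
#define DELAY_MS      200
#define DELAY_SAMPLES (SAMPLE_RATE * DELAY_MS / 1000)

static float delay_line[DELAY_SAMPLES];
static unsigned long pos = 0;  /* read and write head: reading before
                                  writing makes the slot exactly one
                                  full buffer (200 ms) old */

static int daf_callback(const void *input, void *output,
                        unsigned long frames,
                        const PaStreamCallbackTimeInfo *time_info,
                        PaStreamCallbackFlags status, void *user)
{
    const float *in = (const float *)input;
    float *out = (float *)output;
    (void)time_info; (void)status; (void)user;
    for (unsigned long i = 0; i < frames; i++) {
        out[i] = delay_line[pos];             /* emit the 200 ms old sample */
        delay_line[pos] = in ? in[i] : 0.0f;  /* store the fresh one */
        pos = (pos + 1) % DELAY_SAMPLES;
    }
    return paContinue;
}

int main(void)
{
    PaStream *stream;
    if (Pa_Initialize() != paNoError) return 1;
    /* 1 input channel, 1 output channel, 32-bit float samples */
    if (Pa_OpenDefaultStream(&stream, 1, 1, paFloat32, SAMPLE_RATE,
                             256, daf_callback, NULL) != paNoError)
        return 1;
    Pa_StartStream(stream);
    printf("Talking now should feel very strange. Running for 60s...\n");
    Pa_Sleep(60 * 1000);
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```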
I had an account on my university’s NeXT cluster back when they were new.
30 years later I don’t remember issues with keyboard latency, but I still remember the slow access and write times for the optical drive.
(Favorite NeXT memory: in Interface Builder, which helped you make a GUI and link buttons and features to the code behind them, there was a button that would automatically neaten up the layout (same-size and aligned buttons and inputs). The button was named “Make Pretty”.)
The colored buttons are for controlling Blu-ray players, particularly the Java apps that come on certain discs.
Yes, you could have custom buttons for each, but it’s a bit awkward (unless you can do it graphically).
Of course, now that Blu-ray is, shall we say, “more cumbersome than using a streaming service”, they just take up space on a remote.
" […] so much more horsepower is required draw words on the screen that a contemporary Mac is visibly slower than a decade-old one."
Must be this “progress” thingy I keep hearing so much about.
Part of why modern computers are slower is the set of jobs we’ve given their OSes. Beyond just handling interrupts between processes and the hardware, they take care of all kinds of silly things: antivirus/malware detection, driver detection, cloud storage, etc. I feel that a decently pared-down OS would just handle the interrupts and maybe driver support; beyond that, the UI would be a whole separate affair, like it is with Linux. I really don’t get some OS authors’ desire to reinvent every component as if it were their job. Just do the basics, please, and do them well.
That’s fair enough, but I feel that it raises the stakes, too: a clicky keyswitch provides a very clear starting pistol that the screen output has to keep up with. So I’d hazard that it has a psychological either/or effect. Acceptable latency (say, <100ms?) will gain the illusion of instantaneity, but anything marginal will pick up an uncanny echo sensation between click and show, which makes it even worse.
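(For what it’s worth, you can measure the software slice of that delay yourself. A sketch using SDL2: it only times from the app seeing the key event to the next vsynced present, so it misses the keyboard’s own scan/USB latency and the display’s latency entirely.)

```c
/* Time from receiving a key event to the vsynced present that shows it.
 * Build: cc lat.c $(sdl2-config --cflags --libs) */
#include <SDL.h>
#include <stdio.h>

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("latency", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 320, 240, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_PRESENTVSYNC);
    int running = 1;
    while (running) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev)) {
            if (ev.type == SDL_QUIT) running = 0;
            if (ev.type == SDL_KEYDOWN) {
                Uint64 t0 = SDL_GetPerformanceCounter();
                SDL_SetRenderDrawColor(ren, 255, 255, 255, 255);
                SDL_RenderClear(ren);
                SDL_RenderPresent(ren);  /* typically blocks until vsync */
                Uint64 t1 = SDL_GetPerformanceCounter();
                printf("event -> present: %.1f ms\n",
                       1000.0 * (double)(t1 - t0)
                              / (double)SDL_GetPerformanceFrequency());
                SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
                SDL_RenderClear(ren);
                SDL_RenderPresent(ren);  /* flash back to black */
            }
        }
        SDL_Delay(1);  /* don't spin the CPU */
    }
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```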
I hope this pattern doesn’t continue; I’d probably have a stroke if the typing latency on my computer were 200ms.