Someone set that to music!
Gah. People like this guy are kind of close to understanding the threat but they always seem to completely miss an important, fundamental issue. You can't just blame the humans and call them stupid for doing the things we know humans will do when an imperfect technology is given to them:
This guy sounds like one of those self-driving-Tesla defenders who would argue that a Tesla that can drive itself perfectly 98% of the time is great, and that if it gets into a crash because the human in the driver's seat isn't perfectly focused on supervising it and doesn't immediately intervene during the relatively rare but potentially deadly errors, that's on the human, not the technology. Even though it's no secret that humans are (and always have been) terrible at that kind of task.
Technology needs to be built for the real humans that actually exist, not some fantasy hypothetical humans that these techbros seem to think that we should all be!
These rich assholes thinking that they are smarter than the rest of us really need to go fuck off now.
A large part of their problem is they only consider people like themselves, and everyone else is an afterthought:
Any pushback on that is met with dismissal, empty promises to address fundamental problems at some vague point in the future, or outright hostility. The very name "techbros" in the US shows how bad it is: too many of them can't even acknowledge 51% of the population.
Sounds like Jakob Nielsen has lost the plot:
A good analysis:
That mindset has worked perfectly well for the discipline of Economics for a very long time now. It's really your fault for being poor, for not being a Rational Actor according to the University of Chicago's models.
"have been tried".
Have they? Have they though?
They've been developed, sure. They've been tested and poked and improved, standards written and testbeds created. The standards are ignored, and the testbeds are treated as niche and never go anywhere except for specific apps. But from being in Zoom meetings and watching the conversations with people in Disability Advocacy groups, the tools that most people are forced to use either lack the features that people with this disability or that one need to access them, or are hilariously difficult to use, or aren't turned on by the administrators because they've never heard of any of it. If disability accessibility tools aren't working, I'd argue that it's because the people writing the applications they're embedded in aren't forced to use them.
But still, things are improving. Have you used Zoom with automatic closed captioning? It's not perfect, but it works well enough (once you figure out how to turn it on, once your site admin has realised that they have to buy access to it, then actually find the budget and do that).
But from an IT perspective, "generative AI for individualised accessibility" just means that you as a disabled user are fucked, because when it gets it horribly wrong (and it will), or subtly wrong (which is worse, because it's harder to detect), or just breaks entirely, then nobody will be able to help you because nobody will be able to see the same thing you do to be able to fix it.
And that's all just from reading the onebox. I should probably read the articles now, hey.
… dunno if this ever happened
Can't they just switch to Decepticons instead? That would be more Elon's style.
Slightly OT, but fits nevertheless:
The world isn't on track to meet its climate goals — and it's the public's fault, a leading oil company CEO told journalists.
Exxon Mobil Corp. CEO Darren Woods told editors from Fortune that the world has "waited too long" to begin investing in a broader suite of technologies to slow planetary heating,
In his comments Tuesday, Woods argued the "dirty secret" is that customers weren't willing to pay for the added cost of cleaner fossil fuels
you really have some nerve, motherfucker.
To wit: if your algorithm isn't O(N log N) or better then it's the wrong algorithm. If the growth rate in processing required for N points has N² or N³ or any higher exponent in it: forget about it.
Last time I dared to ask, I was told their AI fitting algorithm was O(N⁶).
Your monkey count is finite, your requirements are exponential. You will always lose. Stop now.
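A toy sketch of why that rule bites (the function and input sizes here are made up purely for illustration): double the input and the N log N work roughly doubles, the N² work quadruples, and the N³ work goes up eightfold. No amount of extra hardware (a constant factor) keeps up with that for long.

```python
import math

def ops(n: int) -> dict[str, float]:
    # Hypothetical operation counts at input size n, purely to compare growth rates.
    return {
        "N log N": n * math.log2(n),
        "N^2": float(n ** 2),
        "N^3": float(n ** 3),
    }

# Doubling the input: N log N work roughly doubles, N^2 quadruples, N^3 octuples.
for n in (1_000, 2_000):
    print(n, {name: f"{count:.3g}" for name, count in ops(n).items()})
```

That widening gap between the top row and the bottom one is exactly the "finite monkeys versus exponential requirements" problem: past O(N log N), throwing more compute at it just delays the wall.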