See also Nature Publishing Group diluting its brand by creating 50+ specialized “Nature (thing)” journals.
I’m shocked that no one is selling NFTs of Nvidia chips.
Yup. Who cares if you destroy the planet, look at last quarter’s numbers! /sarcasm
Used to be slang that “some yahoo” meant someone who was dumber than a sack of hammers and should not be given control of anything.
:: starts singing “Circle of life” ::
LINE GOES UP- that’s all these techbros care about. /sarcasm?
Yeah, but it makes you feel better to yell at an inanimate object, at least, so there is that.
DON’T GIVE THEM IDEAS. /silly
Sam Altman (etc) is out there demanding that we not only build new energy infrastructure, but invent whole new means of generating power, just so the “AI” industry will be able to get what it needs in the future, as power demands increase exponentially… yeah, it’s real sustainable.
It’s really pathetic from a technology standpoint too. All these people have been saying “just think of what the next generation of AI can do”. Meanwhile the actual plan for the next generation is…the exact same thing but with way more brute force? Sure, that’s definitely where real breakthroughs come from.
It’s just possible they understand their limitations and they’re being realistic, though - they know there are diminishing returns on the refinements “AI” is undergoing, that we’re probably close to peak utility (and it still won’t be remotely useful enough), and that long term the big changes will just come from massive increases in computing power.
I think it’s close to peak utility for LLMs trained on scraped data. If they really think this is the future, though, maybe they should try adding some other algorithms to improve results. It might be neat, for instance, to have some actual attempt at intelligence in the AI rather than just hoping it hallucinates only correct stuff. Asking for an extra planet’s worth of power instead of doing R&D on that is greedy and lazy.
I do think the costs of approaches like ChatGPT and Perplexity and others are upfront - you’re paying with a subscription. So there’s less of the tension that degraded Google’s search usefulness. There’s no cash incentive so far for ChatGPT, Perplexity, et al. to prioritize third parties like advertisers over direct users.
It’s also worth noting that Google, as it is, is still more useful than Yahoo search. : ) So it seems at least possible, and I’d argue likely, that AI LLM search methods will stay more useful than Google’s current searches. That’s a likely floor.
And there is a good possibility of people being able to roll their own AI assistants through publicly available LLMs. They may end up not quite as efficient as the bleeding edge that huge corporations can pay for… and they may also end up being better in some ways. All the advantages Linux can have over Windows and Apple’s OS in specific niches are a good example of that possibility.
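For the curious, here’s a minimal sketch of what “rolling your own” can look like, using Hugging Face’s transformers library with a small open model. The model name is just an illustrative stand-in; any open-weight model your hardware can run works the same way.

```python
# Minimal "roll your own assistant" sketch.
# Assumes `pip install transformers torch`; the model name is illustrative,
# swap in whatever open-weight model your hardware can handle.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Q: What are the advantages of running a language model locally?\nA:"
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```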
We’ll see how it all goes of course.
But DDG isn’t logging and selling my search history like google does. Hell, google probably logs every keystroke and mouse movement. In addition to all the tracking cookies, beacons, and fingerprinting it does.
… asking an AI site to draw a picture of something often gets us a car in the picture too as a bonus, even if that’s not what we asked for
True, except… What we suddenly witnessed when ChatGPT appeared comes from a phenomenon called Emergence. As some systems scale, there are points where significantly new behaviors emerge suddenly and nonlinearly. It happens in biology, in cosmology, and in computer systems.
The AI investors right now are high on the cocaine rush of this recent breakthrough. And they want MORE.
There may be some validity to the idea. But this direction, accelerating, points straight toward a Dyson sphere of computronium sucking up all the energy radiated by the sun. And toward The Matrix turning us all into coppertops.
No it doesn’t. The companies might like you to think there is something like that happening, but ChatGPT is a neural net where all the arrows point the same way. It has a training phase to fit a function to the input data and an application phase to see what that function gives out. More neurons mean a better fit, but emergence is a property of systems with feedback, and their model doesn’t have any.
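To make the “fit a function, then just evaluate it” point concrete, here’s a toy sketch using scikit-learn’s feedforward MLP as a stand-in (obviously not ChatGPT’s actual architecture, just an illustration of the two separate phases):

```python
# Toy example: a feedforward net has a training phase (fit a function to data)
# and an application phase (evaluate that function). No feedback loops.
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)  # training inputs
y = np.sin(X).ravel()                               # training targets

# Training phase: adjust the weights so the network approximates the data.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(X, y)

# Application phase: the weights are frozen; inference is just a forward pass,
# and nothing ever feeds back into the weights.
print(net.predict([[np.pi / 2]]))  # should be close to sin(pi/2) = 1
```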
Emergence is all about simple systems showing complex behavior. ChatGPT is a complex function approximating the data points it was trained on. There’s no similarity.
… we know intelligence does not intrinsically require huge amounts of energy, because people are intelligent and people just don’t use that many calories thinking about stuff
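Rough back-of-envelope on that, assuming the commonly cited figure of about 20 watts of average power draw for the human brain:

```python
# Back-of-envelope: food calories for a day of running a human brain.
# Assumes the commonly cited average power draw of about 20 W.
BRAIN_WATTS = 20                # joules per second
SECONDS_PER_DAY = 24 * 60 * 60
JOULES_PER_KCAL = 4184          # one food Calorie (kcal) is 4184 joules

kcal_per_day = BRAIN_WATTS * SECONDS_PER_DAY / JOULES_PER_KCAL
print(f"~{kcal_per_day:.0f} kcal/day")  # roughly 410 kcal/day
```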
Sorry - the name’s taken.
Depends on how hard you’re thinking.
Comparing what ChatGPT writes to, say, people on Reddit, it shouldn’t take more than a pack of Doritos and a Coke.
There are 3,500 calories in a pound of body fat.
Maybe he was just constipated?
If only they could end their exclusive use of Apple Maps over Google Maps, that’d be huge. It’s the only downside I’ve encountered since switching…