You can call me AI

Will there be ice cream?

3 Likes

tl;dr: They’re all wankers.

I’d argue that there aren’t two cults, but two factions of the same cult, that share a common base of garbage ideas and assumptions.

Their debates will be as productive as Christian schisms over the make-up of the Trinity. i.e. Did the Son descend from the Father alone, partake of the Holy Spirit as well, or are the Three really One? Supremely important to Christian sects, causing war and slaughter, but confusing and pointless to outsiders.

Interesting that a whole paragraph was devoted to the Friedrich Hayek story about an obscure footnote of aristo-socialism. Was that his favorite after-dinner story at the Mont Pelerin Society?

3 Likes

“When my graduate students showed me some of the results that were in this paper, I actually thought it must be a mistake,” said Marzyeh Ghassemi, an MIT assistant professor of electrical engineering and computer science, and coauthor of the paper, which was published Wednesday in the medical journal The Lancet Digital Health. “I honestly thought my students were crazy when they told me.”

At a time when AI software is increasingly used to help doctors make diagnostic decisions, the research raises the unsettling prospect that AI-based diagnostic systems could unintentionally generate racially biased results. For example, an AI (with access to X-rays) could automatically recommend a particular course of treatment for all Black patients, whether or not it’s best for a specific person. Meanwhile, the patient’s human physician wouldn’t know that the AI based its diagnosis on racial data.

1 Like

What Grok’s recent OpenAI snafu teaches us about LLM model collapse

2 Likes

Yes, sue the plagiarism machines!

3 Likes

Initially the AI companies tried to give the impression that all the source material was processed into some concentrated holographic quantum AI paste during training, so that there was no copyright infringement. Now it looks like they’re brute-forcing the whole damned sample set!

No F’ing wonder they need so many energy-sucking planet-warming data centers and all the GPU chips to run it!

This reminds me of a gadget from a Vernor Vinge story: an FTL communicator that works in a Slow Zone where that’s normally impossible, but the cheat significantly dims the local sun for only teletype data rates.

Cool that the trick works, but a helluva cost, and it doesn’t scale.

The valley bros have to know this, but want to raise their number to the moon before this catches up with them.

6 Likes

Quite. I believe the ability to brute-force the process is the actual scientific breakthrough behind all this bullshit. The breakthrough came in 2012, and it took a few years for the scaling to kick in. That’s Stephen Wolfram’s timeline, anyway; if you have time for a long read, this is a good one.

The question of whether many of the things people are suggesting it can do next are even computable is moot. Personally, I find the image-making and code-writing from gen AI to be better than the text-writing. This is because I’m shit at programming and art, but I’m actually literate, I think.

I quoted the summary from Gary Marcus in the NYT thread so I’ll copy it here so people don’t need to go to the Nazi funding stack:

“The cat is out of the bag:

  • Generative AI systems like DALL-E and ChatGPT have been trained on copyrighted materials;
  • OpenAI, despite its name, has not been transparent about what it has been trained on;
  • Generative AI systems are fully capable of producing materials that infringe on copyright;
  • They do not inform users when they do so;
  • They do not provide any information about the provenance of any of the images they produce;
  • Users may not know when they produce any given image whether they are infringing.

§

My guess is that none of this can easily be fixed.

Systems like DALL-E and ChatGPT are essentially black boxes. GenAI systems don’t give attribution to source materials because, at least as constituted now, they can’t.”

5 Likes

They’re massively increasing the size of their models to try to fix inherent problems, but that’s going to require a non-linear increase in the computing requirements to handle it. (Even if they throw a Really Big hash at it! :laughing:)

Eventually that’ll eat us all for tiny improvements.
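That non-linear cost can be sketched with the widely cited 6·N·D rule of thumb for training compute (N parameters, D training tokens) — my own illustration, not anything from the thread, and the specific parameter/token counts below are made-up round numbers:

```python
def train_flops(params: float, tokens: float) -> float:
    """Approximate training FLOPs via the common 6 * N * D heuristic.

    This is a rough rule of thumb, not an exact cost model.
    """
    return 6 * params * tokens


# Hypothetical example: 10x the parameters, trained on proportionally
# more data (10x the tokens), costs ~100x the compute.
small = train_flops(1e9, 20e9)     # ~1B params, ~20B tokens
big = train_flops(10e9, 200e9)     # ~10B params, ~200B tokens
ratio = big / small                # -> 100.0
```

Which is the point: scaling both model and data together makes compute grow quadratically, so "just make it bigger" gets expensive fast.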

If there’s no one-way transformation of the source material, then they might be able to. They don’t want to, because it would be death to their corporate/investment model that depends on pillaging for free.

8 Likes

So I just noticed this new feature on the BBS today and I’m not sure that I like it…


It does seem rather useless when few BBS posts go over 150 words or so.

This is the summary I got when I tried it on the “Sweetie, Elon’s not fit for battle!” post:

:confused:

1 Like


https://nitter.net/divisionten/status/1742036933117694254

https://nitter.net/JonLamArt/status/1741545927435784424#m

2 Likes

I think this will interest you @docosc

3 Likes

No surprise. Every attempt to turn peds into a checklist-driven, automated procedure has failed. I will have a job for a while yet. Still, an 83% failure rate? An 83% success rate would be unacceptably low. This is ridiculous.

7 Likes