A major problem is that all the LLM-generated slop ("slop" is the technical term for LLM output) is being fed back into LLMs as new training data. Unable to tell valuable content from slop, the models are rapidly degrading as they eat their own tails.
And while LLMs rendering themselves useless sounds like a good thing, the problem is that the slop remains inextricably mixed in with the extant human content.
