i posted over in a follow-up about how apparently cnet’s bot was plagiarizing existing articles. so currently, at least, humans are still needed for the seed. (eta: link to the cnet article)
what will be really weird, i guess, is when - like with @MononymousSean’s dr seuss - ai is plagiarizing ai
separate thought: i didn’t realize till reading the generated posts above just how repetitious gpt can be. space-filling variations around what would normally just be a sentence or two. like reading a grade school essay going for word count
The Futurism article really gets it wrong as to what the machine learning algorithm is doing and how it is doing it, but it does get it right that the end result amounts to much the same thing.
It is probably especially noticeable because the kind of article they are generating is super generic to begin with, so all of the input to the model is garbage: “I need to write a 300-word article about a boring thing I don’t care about so I can put an affiliate link into it” pieces written by humans doing the absolute minimum amount of work.
The algorithm is definitely not taking what is out there on the internet and changing a few words - it’s building a corpus that includes that kind of article, generating word-frequency and proximity tables from it, and using that data to generate content statistically from the whole corpus. So the output is going to be similar to all of it, but it will also make stuff up, since it doesn’t understand any of it.
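Here’s a toy sketch of what I mean by frequency and proximity tables, using a made-up three-sentence corpus. This is a bigram Markov chain, which is far simpler than whatever GPT actually does internally, but it shows how purely statistical generation produces text that resembles the whole corpus without copying any one source verbatim:

```python
import random
from collections import defaultdict

# Hypothetical mini-corpus of generic affiliate-style sentences.
corpus = [
    "the best credit card offers a low annual fee",
    "the best savings account offers a high interest rate",
    "a low annual fee makes the card a good choice",
]

# Proximity table: for each word, collect the words that follow it
# (duplicates preserved, so sampling reflects frequency).
followers = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the best savings account offers a low annual fee makes the card"
```

The output echoes every source at once without being a copy of any single one, and it happily stitches together claims that were never in the corpus, which is the “making stuff up” part.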
I love that I can tell ChatGPT to generate a negative product review and it will just make up believable stuff about a product not working as it should - that shows how dangerous this stuff is.
That’s fascinating. I actually can’t read the GPT-3 content without drifting off at the anodyne emptiness. It’s not really saying anything, and the language feels like it was chosen not to upset anyone while achieving this nothing. If it’s a fake ChatGPT, well done!
what you described is how gpt works. cnet does not say they use gpt. they lead with a mention of gpt and “other automated technology”, then call theirs variously an “ai assist” and “an ai engine”
the futurism article mentioned that microsoft had a non-gpt algorithm that did spit out large chunks of copied text with no modifications. so it’s possible to design algorithms that way.
it feels entirely possible they licensed an algorithm that works via transformation rather than generation, or that they trained a relatively unsophisticated algorithm on such a small data set that the result was the same
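a toy sketch of what a transformation-style tool might look like - just naive synonym swapping over a hypothetical source sentence, with a made-up word list - which would explain output that tracks its source nearly verbatim:

```python
# Toy "transformation" approach: rewrite an existing article by swapping in
# synonyms instead of generating new text statistically. The synonym table
# and source sentence are invented for illustration; real article spinners
# are more elaborate but share the weakness that most of the source text
# survives unchanged.
SYNONYMS = {
    "best": "top",
    "cheap": "affordable",
    "buy": "purchase",
    "great": "excellent",
}

def spin(text: str) -> str:
    """Replace each word that has a listed synonym; keep everything else as-is."""
    out = []
    for word in text.split():
        key = word.lower().strip(".,")
        out.append(word.replace(key, SYNONYMS[key]) if key in SYNONYMS else word)
    return " ".join(out)

source = "The best cheap laptops are a great buy this year."
print(spin(source))
# -> "The top affordable laptops are a excellent purchase this year."
# The structure and most wording come straight from the source (note the
# crude "a excellent" - naive spinners don't fix grammar), which is why
# this kind of output reads as plagiarism.
```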
Oof, yeah. There’s just layers and layers of terribleness to the coming AI content wave - the more you think about it, the more problems arise. Meanwhile the advantages are… pretty limited.