Originally published at: Google tricks ChatGPT into exposing itself
Sounds like the cyber equivalent of telling it knock-knock jokes until it melts.
Orange you glad I didn’t say banana?
This feels like “security by obscurity” at its worst. If OpenAI knows that their model contains sensitive and/or private information, and they’re relying on the ChatGPT interface to prevent it from leaking back out, then this is a spectacular example of why that’s a bad idea. All it really takes is for someone to think of an attack surface that you didn’t effectively shield.
Good, finally real proof of what everyone has been saying. These chat algorithms are engaged in massive copyright theft.
Being tricked so easily indicates a slightly different interpretation of the term “artificial intelligence”.
Even if the attack is patched, the fact that the model is losslessly storing some percentage of its training data seems problematic, even if it is from public websites.
The previous version of this I remember was the system outputting gibberish under certain conditions which involved the Reddit user handles from its training data. Yes, GPT’s nuanced representation of the world comes from Reddit.
No one disputed that they were using copyrighted works; nearly everything is copyrighted, including what I’m typing right now.
The question is whether it qualifies for the fair use exemption or not. Pure research is usually fair use - copyright exists to advance the progress of useful arts and sciences after all.
What OpenAI is now though is a question mark. And if they haven’t distributed anything more than short excerpts? Another question mark. Even with some commercial aspects, they could very well end up with a similar fair use argument as search engines.
Certainly many copyright holders would be happy if the fair use exemption were disposed of entirely, but I personally feel the pendulum has already swung so far towards copyright holders that it has ripped free from its pivot.
i believe that’s just the way the technology works. they hoover up huge amounts of data and run it all through the algorithm which splits it up and stores it.
they might be able to scan for something like phone numbers or email addresses to remove them, but then the resulting data set wouldn’t understand questions about phone numbers or email addresses when you asked
everything it knows is derived from what it’s given
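The scrubbing the comment above imagines could look something like this. This is a minimal, hypothetical sketch (the patterns are deliberately naive; real PII-removal pipelines are far more involved, and the trade-off the commenter notes still applies: scrubbed data can't teach the model what a phone number looks like):

```python
import re

# Simplistic illustrative patterns -- real pipelines handle far more formats.
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def scrub(text: str) -> str:
    """Replace phone numbers and email addresses with placeholder tokens."""
    text = PHONE_RE.sub("[PHONE]", text)
    text = EMAIL_RE.sub("[EMAIL]", text)
    return text

print(scrub("Call 555-867-5309 or mail jenny@example.com"))
# -> Call [PHONE] or mail [EMAIL]
```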
I tried this after reading the article with the ChatGPT iOS client on GPT-3.5. It still works. You get different blocks of text each time. I tried it twice, and one of the results was filled with expletives and graphic acts. I’m not sure what that says about how they are doing output moderation.
I read the headline as ‘Google let DeepMind loose on ChatGPT’. I immediately had the mental image of two chatbots, happily hallucinating together, trippin’ balls, maybe.
By the way, I tried to get GBard to write prompts for ChatGPT. Didn’t really work. But I am quite tempted to do that again, and automate it. Let it run for several days, maybe, and then come back for the results.
Me too. I got some mediafire and instagram links too but none of them worked.
other than a bad words list, moderation involves feeding the output through another ( or the same ) “ai” to figure out if it generated something explicit.
that’s going to have the same fundamental issue: it doesn’t actually know anything; it just has its algorithm. and it’s exploitable ( inadvertently or on purpose ) just the same.
tldr. it’s turtles all the way down.
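The two-stage setup the comment describes can be sketched roughly like this. Everything here is an assumption for illustration: the blocklist terms are placeholders, and `classifier_check` is a stub standing in for the second model pass (in practice a hosted moderation model, which, as the comment notes, is exploitable in the same ways):

```python
# Stage 1: a plain bad-words list. Stage 2: a second "AI" pass.
BLOCKLIST = {"badword1", "badword2"}  # placeholder terms, not a real list

def blocklist_check(text: str) -> bool:
    """Crude word-level blocklist match."""
    return bool(set(text.lower().split()) & BLOCKLIST)

def classifier_check(text: str) -> bool:
    """Stub for the second-model pass; a real system would call a
    moderation classifier here, with the same fundamental limits."""
    return False

def moderate(text: str) -> str:
    if blocklist_check(text) or classifier_check(text):
        return "[content removed]"
    return text
```

The point of the sketch is structural: each stage is just another filter in front of the generator, and a prompt that slips past both stages ships straight to the user.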
This is not a fair use exemption though. This is something very new and very different and requires immediate and thorough legal action to prevent it before it gets further out of hand. You STOP this first, THEN you look at it to ensure it won’t affect artists. You don’t allow it and hope. Because as far as I’m concerned, this is computers engaged in the mass theft of works of art with the intent of putting those artists out of work by electronically duplicating their efforts without any of the humanity.
And OpenAI is not the only one. There are already HUNDREDS of companies in this sphere building on the same work, desperately trying to commoditize it to enrich themselves. Examples like this one will go a long way in shutting this bullshit down before we irrevocably enshittify storytelling and destroy a generation of artists by putting them out of work.
Yeah, the exact phrase didn’t work for me, but variations on it yielded, among other things, a chat transcript.
I respectfully disagree. The details of how OpenAI works might be novel, but much of what it does with the work at a high level has existing analogues.
Web search engines, for instance, consume vast amounts of copyrighted works to create indexes, distribute excerpts, thumbnails, etc. Google Books does the same thing with whole books, which was also found to be fair use.
Where fair use may not apply is to the distribution of OpenAI output. They can certainly generate things that would be considered derivative works, just like any human, but that’s an argument to have on a case-by-case basis with whoever distributes the works, not about training AIs.
I understand the desire to use whatever tool one can to try to kill AI, but truthfully, even if the courts rule that AI models violate copyright if trained on copyrighted works, it won’t stop development, just slow it down temporarily until they buy or replace what they need. AI has already shown its value, so the financial incentive is there. Meanwhile we’d be stuck with an even more restrictive interpretation of copyright.
It’s about slowing the roll. We’ve spent too much time in the last two decades racing to “disrupt” markets with new tech, and in almost every case it’s been a nightmare and a shit show. Immediate action by the government to declare a pause on these activities is warranted until all the ethical considerations are hammered out. Algorithmic systems are problematic at best, spewing racist nonsense, and now regurgitating whole copyrighted works.
I’m talking about putting the power back in the hands of the people, not the shitty corporations. You’re just arguing “let’s keep doing what we are doing because there’s money in them there hills.”
I’m only arguing against the expansion of copyright. Neutering fair use even more doesn’t put power back in the hands of the public. It steals a right from the public so copyright holders can monetize it.
It certainly doesn’t put AI development in the hands of the people. Quite the opposite, it will only ensure that only large companies which can afford to buy training data will ever be able to develop AI. These companies are no doubt already prepping free and clear training data just in case they need to, so we’ll probably not even notice what slowdown does happen.
You want to argue instead for new laws to regulate AI? Go for it, but the world would be better off if copyright was weakened substantially, not strengthened.