Originally published at: https://boingboing.net/2019/07/24/why-didnt-openai-release-the.html
…
Looking at the installation instructions and realizing it’s so far over my head it may as well be in space. I guess this isn’t going to be my toy to play with, at the moment.
In before Flossy sues for copyright violations.
They aren’t releasing it because it would be fatal to their business model.
OpenAI isn’t all that open. Microsoft recently invested a billion dollars in it.
If by “lucid” you mean grammatically correct fever dreams…
Having fiddled a bit with various trained models lately, there’s very little open anything in this space. You can get easy access to training, but to me it feels more like OpenAI is shilling this whole centrally controlled AI paradigm where only huge corporations with millions of dollars to burn have real access to innovation. Honestly, Facebook seems to be contributing more to open AI than OpenAI, for whatever that is worth. Yes, we should hold people accountable for abusing tech, but you can’t legally use most of this stuff commercially without millions at your disposal. I don’t think the corporations are inherently more responsible.
GPT-2 being withheld to allow social media companies time to prepare a defense against fake users isn’t necessarily a social good. Social networks are already quite hostile to human wellbeing, after all – and constantly in legal trouble for it, with Facebook getting a $5bn fine just today. So helping them get better at identifying humans is not particularly ethical or moral.
In fact, it suggests that there is a hidden agenda: since GPT-2 Turing bots could make it impossible to accurately measure human engagement on social media, they pose a huge threat to advertising. This gives the GPT creators an incentive to shake down Facebook and Twitter with it. And what better way to make the danger clear than by announcing the war without revealing the weapon?
I also think one of the unspoken aspects of this is transfer learning: if there’s an open model that works well, you can start from the pre-trained weights and fine-tune it to your needs. Maybe the starting model isn’t a good fit for what you’re doing, but GPT-2 has huge potential for creative purposes like generative game engines. Maybe it’s the corporations you worry about most when you’ve been making high six figures for too long…
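To illustrate what fine-tuning looks like in practice, here’s a minimal sketch using the HuggingFace transformers library and PyTorch. The corpus file name, batch layout, and hyperparameters are placeholders for illustration, not anything from OpenAI’s release:

```python
# Minimal transfer-learning sketch: fine-tune pre-trained GPT-2 on a
# small domain-specific corpus. Assumes `pip install transformers torch`.
# "my_corpus.txt" is a hypothetical placeholder file.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # pre-trained weights
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Tokenize the corpus and slice it into fixed-size training blocks.
text = open("my_corpus.txt").read()
ids = tokenizer.encode(text)
block = 512
chunks = [ids[i:i + block] for i in range(0, len(ids) - block, block)]

for epoch in range(3):
    for chunk in chunks:
        inputs = torch.tensor([chunk])
        # With labels == inputs, the model returns the standard
        # next-token language-modeling loss internally.
        outputs = model(inputs, labels=inputs)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("gpt2-finetuned")
```

The point is that all the expensive work is already baked into the pre-trained weights; the loop above only nudges them toward the new corpus, which is why withholding a strong base model matters so much more than withholding the training recipe.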
This is a valiant effort to ensure roboethical practices early on. We need more discussions about the implications of our inevitable robot-filled future.
Why release information at all, if the ethics of one’s work are that concerning? In fact, why do the work in the first place, if the ethics of one’s work are that concerning? The work has been handed on a golden platter to every global intelligence agency, every large tech corporation, academia, and every sufficiently motivated mid-sized company. Is it really that much of an ethical danger to release information to ordinary people?
that’s why you’re smartr!