NOW we’ll start to see hairsplitting about what is actually AI and what isn’t from the LLM companies and capital class…
“No no no, this isn’t AI, this is merely a mechanistic algorithmic process for generating a natural-seeming text based on large corpus data sets. Nothing ‘intelligent’ about it at all. I mean, if you’re going to complain about this, what else are you going to complain about? Autocorrect and word processors? Want to go back to writing things in longhand with a dictionary at your elbow? No? Then shut up and run ScriptGPT over this show bible, and send the results down to the writers’ room for cleanup and copy editing.”
It’ll be better with the mouth movements, and maybe a fishier voice.
Mixboard is a testament to the democratization of music production
Ugh, the use of “democratization” to describe AI removing the need for knowledge, skill, time or effort from an artistic endeavor is really grating on me. Democratization of an art form means removing artificial barriers to entry, like unnecessarily expensive equipment, distribution cartels or incumbent gatekeeping. Since when did possessing a unique passion and talent become tyranny?
Corridor Crew described their AI tool for generating anime from live video as “democratizing” anime production because “You used to need money to pay all those animators, now you can do it yourself!” So, owning the entire means of production rather than engaging in a collective effort is “more democratic”?
Sure, studio owners exploit animators, but the answer to democratizing that is to create artist collectives with a flat hierarchy, not “fire everyone and replace them with robots.”
TL;DR firing bosses = democratic. Firing creatives = not democratic.
Oh dear. ChatGPT is that guy.
But “Hidden Treasure Mapping Logic” is gold.
Policymakers should insist manufacturers include controls in artificial intelligence-based weapons systems that allow them to be turned off if they get out of control, experts told the UK Parliament.
Speaking to the House of Lords AI in Weapons Systems Committee, Lord Sedwill, former security advisor and senior civil servant, said the “challenge” politicians should put to the tech industry is whether they can guarantee AI-enhanced weapons systems can be entirely supervised by humans.
Earlier in the inquiry, James Black, assistant director of defense and security research group RAND Europe, warned that non-state actors could lead in the proliferation of AI-enhanced weapons systems.
“A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware: it’s missiles, it’s engines, it’s nuclear materials,” he warned.
“human control” could mean more than one thing
What makes the situation even odder is that the case Riehl was reporting on was actually filed by a coalition of gun rights groups against Washington’s Attorney General’s office (accusing officials of “unconstitutional retaliation”, among other things, while investigating the groups and their members) and had nothing at all to do with financial accounting claims.
AI, guns, lawyers, money - what’s not to like?
It won’t be long before ChatGPT sues ChatGPT for something that ChatGPT said about ChatGPT.
The generative AI boom has reached the US federal government, with Microsoft announcing the launch of its Azure OpenAI Service that allows Azure government customers to access GPT-3 and GPT-4, as well as Embeddings.
Earlier this year it was revealed that a government Azure server had exposed more than a terabyte of sensitive military documents to the public internet – a problem for which the DoD and Microsoft blamed each other.
Microsoft partner and ChatGPT creator OpenAI has also been less than perfect on the security front, with a buggy open source library exposing some user chat records in March. Since then, a number of high-profile companies – including Apple, Amazon, and several banks – have banned internal use of ChatGPT over fears it could expose confidential internal information.
We asked Microsoft for clarification on how it would retain AI prompt and completion data from government users, but a spokesperson only referred us back to the company’s original announcement without any direct answers to our questions.
With private companies concerned that queries alone can be enough to spill secrets, Microsoft has its work cut out for it before the feds start letting employees with access to Azure government – agencies like the Defense Department and NASA – use it to get answers from an AI with a record of lying.