Originally published at: https://boingboing.net/2024/03/19/youtube-requires-realistic-ai-to-be-labeled.html
…
Sort of admire them for wading into this (if only to say that they did), but there’s hardly a term in this CYA paragraph that couldn’t be used to exculpate anything from a potrace’d Hieronymus Bosch to Bernie Sanders swapped in for Catherine, Princess of Wales:
Of course, we recognize that creators use generative AI in a variety of ways throughout the creation process. We won’t require creators to disclose if generative AI was used for productivity, like generating scripts, content ideas, or automatic captions. We also won’t require creators to disclose when synthetic media is unrealistic and/or the changes are inconsequential.
starting with the definition of ‘A.I.’ (“generative” or otherwise). Need there be feedback-trained neural networks involved, or will equivalent Markov chains suffice?
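For scale, a word-level Markov chain is about a dozen lines of Python. This is just a toy sketch (the `order` parameter and the sample text are made up for illustration), not anything from YouTube’s policy; no neural network involved, yet it “processes and responds to data” in a vaguely human-shaped way:

```python
import random
from collections import defaultdict

def train(text, order=1):
    """Map each word (or word tuple) to the words seen after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    """Walk the chain: pick a start, then repeatedly sample a successor."""
    rng = random.Random(seed)
    key = rng.choice(list(model))
    out = list(key)
    for _ in range(length):
        successors = model.get(tuple(out[-len(key):]))
        if not successors:   # dead end: no word ever followed this one
            break
        out.append(rng.choice(successors))
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran")
print(generate(model, length=8, seed=42))
```

Every output word comes straight from the training text; the only “intelligence” is a frequency table and a dice roll. If the industry definition covers this, it covers nearly everything.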
here’s one ‘industry’ definition of ‘A.I.’:
A.I. [is any process which] seeks to process and respond to data much like a human would. That may seem overly broad, but it needs to be.
well… there goes any image run through a contrast filter
That’s good to hear.
eye-swears, if one more gossip site writes something like “Our experts checked the metadata, so it must be the real deal!”… metadata on any image can be changed with a single @#$! command-line call. @#$!
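To make the point concrete: image metadata is just bytes sitting in the file, and anyone can rewrite them. Here’s a standard-library-only Python sketch (using a PNG tEXt chunk rather than EXIF, and an invented “Author” value, purely for illustration) that builds a tiny image and then “forges” its author field:

```python
import struct, zlib

def chunk(ctype, data):
    """Build one PNG chunk: length, type, data, CRC."""
    body = ctype + data
    return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))

def make_png(author):
    """A 1x1 white PNG whose tEXt chunk names an author."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)  # 1x1, 8-bit RGB
    idat = zlib.compress(b"\x00\xff\xff\xff")            # filter byte + one pixel
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"tEXt", b"Author\x00" + author)
            + chunk(b"IDAT", idat)
            + chunk(b"IEND", b""))

def set_author(png_bytes, new_author):
    """'Forge' the metadata: walk the chunks and swap the Author tEXt."""
    out = bytearray(png_bytes[:8])  # keep the PNG signature
    i = 8
    while i < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[i:i + 4])
        ctype = png_bytes[i + 4:i + 8]
        data = png_bytes[i + 8:i + 8 + length]
        if ctype == b"tEXt" and data.startswith(b"Author\x00"):
            out += chunk(b"tEXt", b"Author\x00" + new_author)  # the rewrite
        else:
            out += png_bytes[i:i + 12 + length]  # copy chunk unchanged
        i += 12 + length
    return bytes(out)

original = make_png(b"Alice")
forged = set_author(original, b"Kensington Palace")
```

The forged file is byte-for-byte valid PNG with a fresh CRC, so a metadata checker sees nothing amiss. With a real tool it’s even shorter; something like `exiftool -Artist="Whoever" photo.jpg` does the equivalent for EXIF in one command.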
by the bye, various yahoos say that ain’t Kate because: wrong height relative to Willy (??) (is there some royal academy of royal doppelgangers?)
The cynical part of me wonders if this is intended to be for the benefit of viewers or AI trainers (to prevent feedback loops of AI training on AI content).
I don’t imagine they’ll enforce it any better than any of their other rules, though. (Though if they do, it suggests it’s about AI training more than viewer protection…)
This topic was automatically closed after 5 days. New replies are no longer allowed.