welcome to the control group
https://www.sciencedirect.com/science/article/pii/S0301051122001946
I always end up in the control group.
Related to surveillance and deepfakes:
An interview with David Holz, the founder of Midjourney. Josh is clearly impressed by the tech, but also pushes him somewhat on the risks and dangers. David seems like a nice enough guy, but clearly still has a lot of thinking to do about the potential unintended impacts of his work.
I am of several minds about this. Customized and robust AI image and asset generation lets a lot more people without art skills create works using the skills they do have (writing, game design, whatever) combined with the AI tools. At the same time I value humans who can create art (both Art and art) and want them to continue to have opportunities too (I almost said I want them to thrive too, but I know too many working artists). Then there are the huge dangers posed by this technology as it improves. David talks about the restrictions they put on Midjourney to prevent its use in deepfakes and porn, but once the Djinn is out of the bottle…
(Content warning - some discussion of porn, Bill Cosby, Hitler, and the intersection thereof)
But what type?
Meta's main AI guy (or possibly former AI guy, I don't recall which) has been all over Twitter since it shut down trying to act like nothing bad happened and everyone's just overreacting to this. It's shocking (well, not that shocking) that he can't see how an AI making up demonstrably false information is bad.
Yet that is inherent in what large language model "AI"s do - they take text in, remix it according to statistical patterns learned from their training data, and spit it out without any regard for accuracy. There is no understanding of meaning at all. I can see this as an interesting thing to play with, but it doesn't "know" anything other than those learned patterns and the basically unknowable complexity that arises from the interaction of the model and the training data.
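(To make the "remixing without regard for truth" point concrete, here's a deliberately tiny sketch - a toy bigram text generator in Python. It is nothing like the scale or architecture of a real LLM, and the corpus is made up purely for illustration. All it knows is which words tend to follow which, so it will happily produce fluent-sounding nonsense, which is this failure mode in miniature.)

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns only which word tends to follow
# which, then samples from those counts. Nothing here represents facts or
# checks truth -- it just produces statistically plausible-looking text.
corpus = (
    "the moon is made of rock the moon is made of cheese "
    "the moon orbits the earth the earth orbits the sun"
).split()

# Count word -> list of observed next words
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a chain of 'likely next words' starting from `start`."""
    word = start
    out = [word]
    for _ in range(length):
        choices = transitions.get(word)
        if not choices:
            break
        word = random.choice(choices)  # pick any observed continuation
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Possible output: "the moon is made of cheese the earth orbits the sun"
# Fluent-looking, confidently wrong, and the model has no way to know.
```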
This is how Captain Kirk easily defeated evil computers on a regular basis.
Yeah. I know. It's not the AI's fault. It's the people. It would be far too easy for people to misuse this, especially given the speed that misinformation travels compared to the truth.