How tech rotted out

How can Deep Fake be news? It's been used for years to produce footage of "Reality TV Star" Donald Trump posing as the 45th President of the United States. I wonder who the real President is? He must be really angry.


“Tech” did not rot out. The people who make money from technology rotted. People have always been rotten.


I think there is almost a rule that if something is technologically possible, and advantageous to someone, it is going to be done, even if it’s ethically questionable, morally odious, or criminal. This is why people are scared about AI.


I will say again, “deepfake” is just a trendy word for “lie”, and the fact people are wringing their hands over the existence of lies, as though it were a new thing, has sinister overtones. What I hear is “now that lying has been invented, you’ll never figure out what is real on your own; click to subscribe to our truth!” It’s an epistemological slum-clearance scam.

That’s emblematic of the way tech has gone rancid. Some optimistic experiments fail, and that’s fine – it’s the nature of experiments – but the problem arises because we’re in this paradoxical phase where up is down and failed experiments must be continued instead of stopped. Like, everyone knows Facebook sucks, so the organic conclusion, based on people’s own senses, would be “let’s ditch Facebook”. But what gets clicks is “you’ll never be free of Facebook! Woooo! Satan’s going to get you! We must plan to embed Facebook at the heart of society for a thousand years!”

What has gone wrong with tech is advertising. We’re so riddled with the logic of advertising, where the solution to every problem is more of the problem, that we focus on what is bad and ignore what is good. Unlimited communication, smartphones, desktop manufacturing – we do live in an age of miracles, it’s just that we have this psychic plague compelling us to use these miracles to torture ourselves.


I liked the Internet better before anyone figured out how to make money at it.


The Decade Tech Lost Its Way

Or the incurable cancer that is gnawing at the fabric of what it is to be a human being.

So say I…


Is anyone else holding off on buying a piece of tech you genuinely want just because the privacy implications make you queasy?
I have the money to pay for a smart scale, which I am keen to use to better track my fat loss and (I hope) muscle gain. But at the same time I don't want to build up a database of my health data that can be bought and abused in the future by an insurance company.


For years print journalists have been putting their work on social media for likes and ego stroking because Max Headroom is their idol.

Tech made the space, but it is the humans that have the rot.

I think that's dangerously reductive. It's a new way of lying, and that opens up entirely new issues. Fake celebrity/political nudes and scandals are the obvious consequence, but the reality is going to be weirder and more insidious. (E.g. scams built around impersonations.)

It’s like consumer drones. Nothing really new there, at first glance. Obvious impact is privacy invasions - high-tech peeping toms. Which has happened. What you don’t expect is something like this:




Shoshana Zuboff’s The Age of Surveillance Capitalism is a fine book that provides an excellent framework for thinking about these issues.


I’m definitely avoiding Ring because of the privacy implications, looking at options that don’t have cloud requirements.


[Image: the real President along with the VP and Chief of Staff]


^^ Here’s where the rot started.


Drones make new categories of things possible, but fake images don’t (other than in the trivial sense).

I mean, yes, it may become possible to fake a sex tape without hiring a lookalike. But nothing about “deepfake” sex tapes will mean they can stand alone as proof of something, any more than old-fashioned fakes could. It’s always been the case that some people will believe things based on inadequate evidence; and it remains the case that for everyone else, some kind of corroboration is needed.

Likewise, if I wanted to set up a scam university and make people think Donald Turmp would be teaching classes in Reel Busniess Smartts, I could use “deepfake” images, but if the rubes were gullible enough to fall for that over a sustained period, I could probably achieve the same end with some cardboard Turmp cutouts. Not to mention, if I were doing this over a sustained period (as opposed to releasing a one-off fake video anonymously), nothing about using “deepfakes” would protect me from creating a mile-wide trail of live evidence.

There are minor, debatable edge cases, but if "deepfakes" create new danger at all, it is modest compared to the danger of the agenda that deepfake panic serves. Namely, the campaign to persuade us that we as individuals cannot possibly figure out what's true on our own. We absolutely can, and we should be working on it harder than ever.

Porn is far from the only thing you can use deep fakes for. And yes, it's possible to detect them, for news organizations to debunk them, and with proper media literacy even to recognize them yourself. However, the "pics or it didn't happen" mentality is deeply ingrained in our culture, and there's a sense amongst many, many people that photographs are "truthful" in a way that people are not. In recent years there's been a broadening understanding that images can be (and often are) photoshopped for any number of reasons — vanity, be it personal or national, is a common motivation lots of people can understand — but it's still far from universal. And video of "news" is still seen as more "sacrosanct" in a lot of ways, more truthful even than photos, despite the existence of high-budget special effects that can seamlessly de-age Samuel L. Jackson for the duration of an entire film.

I don't think it's an intractable problem that can't be solved by mere mortals, but I do think that the techniques used to produce believable lies are advancing faster than most people's ability to keep up with them, and it's a perilous time for "all lies all the time" and "never believe anything you see" to start taking deeper root. It may not cause tons of people to fall for any particular scam or propaganda effort, but overall it's just one more tool that can be used to fundamentally undermine people's trust in systems and institutions. Especially when less scrupulous members of those institutions may be motivated to fund the creation of such materials to further their own goals.


Well yeah, that’s my point – we have yet to see anyone get fooled by a deepfake when they couldn’t have been fooled otherwise, yet we’re already using the fear of deepfakes to persuade ourselves that the concept of truth is dead. We are harming ourselves in a way that we don’t need to.

And it’s not like we’d be sleepwalking into anything. The moral continues to be: “always consider the source”. Deepfakes are not the main reason that is good advice.

It’s not just that people can be fooled by deepfakes. It’s that people who don’t want to believe something—say, that their favorite politician really did say the thing he said—can choose to dismiss literally any evidence that it happened.

Sure, people have been doing that for decades (e.g. the "moon hoax" crowd), but it used to be a fringe thing applied to specific situations entailing grand conspiracies that defied all reason and the laws of physics. Now "that video of the President doing the horrible racist thing is just fake news invented by the left" is relatively plausible. We're getting farther and farther away from a world in which people on both sides of the political divide can at least agree on some kind of baseline reality.


A fish rots from the head.