No one's coming - it's up to us: it's past time for technologists to be responsible to society

Originally published at:

Last autumn, I was invited to Foo Camp, an unconference organized by O’Reilly Media.


What a load of bullshit. Try living in 1918 for a day and tell me you were better off without modern technology.


No one’s coming. It’s up to us.

There are a lot of well-meaning people in this world who need to have that tattooed in reverse on their foreheads. But I’ve also come to the conclusion, over my life, that of all the tiny and insignificant powers available to me to effect change, the most important is my vote and my willingness to get over my ego and my self-importance and vote consistently for the quote-unquote “lesser evil”.


The problem with technologists is the same problem with any ruling class: they don’t really see how they’re harming the rest of the people. And those who do see it mostly don’t care, because what are the “little people” but things to move around for their designs? The only way out of this pattern of abuse is to abolish the social order which enshrines it at every level. It’s not an easy task, but I think if we can abolish the idea of state religions then we can probably abolish this social order too.


In what way are technologists a ruling class? What about all those politicians and capitalists?


And yet the majority of new money has come out of Internet-based technology. So that makes the billionaire technologists part of the capitalist class. And if they don’t want to be capitalists anymore, then they’d better start funding the revolution and stop trying to talk their fellow capitalists into being more compassionate, because that does not work. Reformism only helps the capital class; this isn’t exactly in-depth Marxist analysis here (insert Zizek sniff meme here).


I strongly believe that, in the long view, it is insane to consider any path other than setting the technological-progress boiler to “full steam ahead” and breaking off the handle. Because (a) there’s no point hitting the brakes if it’s too late to avoid the canyon, and (b) if we turned our back on the one truly unique thing we have to offer, then it wouldn’t matter if our species continued. It’s not like the world needs humans to live in mud huts and eat locally-sourced paleo diets; we have, like, ants for that (and always will).

I don’t mean we should worship a single, canonical vision of the future like 1950s atomic cheerleaders. But when people give TED talks on how to sadly shake your head at other people’s foolishness, I hate that that is considered “leadership”. It’s smug, witless solipsism. The narrative we need is that progress will save us, but it might come from directions that you don’t like / didn’t think of yourself. Don’t believe the sci-fi hype of corporate marketing departments or demagogues, but do believe that knowledge and open-mindedness and curiosity can continue to make the world better at every scale.


Okay, let’s be clear here: when you refer to technologists, are you referring to Steve Wozniak or Steve Jobs?

Or both?

The main stumbling block is that technologists get so enamoured of the shiny new toys they can build that they don’t think of the larger implications, leading to what Morozov calls solutionism (often in the absence of a problem to be solved).

Questions like “is there a sustainable business here?” and “how will this create a net improvement for the lives of our human customers?” and “is this really solving a problem people have or will have?” and (per the “Silicon Valley” joke) “is this making the world a better place?” Part of my work involves asking these kinds of questions on a regular basis, and most of the time the answers are lacking.

Also, in order to eat and keep a roof over their heads, the majority of career technologists have to operate at some point within the ruleset of the sociopathic slow AIs (AKA for-profit limited-liability corporations). Priority #1 should be finding and implementing alternate business and ownership models, which the FOSS community has made a start on. There are very strong and entrenched memes working against that priority, however.


Sez you. That’s why I donate cycles to SETI.


It’s a side effect of blind privilege and its bubble effect. If one can call oneself a technologist then chances are strong that one has already been given and built up a lot of social capital.

I would disagree that the only way out of this pattern is the complete abolition of the current social order. Often, awareness of one’s own privilege is enough not only to break out of it but also to find (sometimes lucrative) ways of using one’s skills to help improve the lives of people who haven’t been so fortunate.


I find the way the ‘value’ of a corporate entity is measured these days to be laughable. Dollars in and out in a year does not determine whether a company is doing the right thing for the future of the world or not.

The balance sheet needs to have a measure of lives ruined vs. lives improved, of promising futures brought closer to reality versus short-sighted blunders, etc.


I don’t think the author is at all arguing that we’d be better off without the last X years of technological advancement. Their point is more that technology is not an unqualified benefit to humanity. It has costs, and those costs are often ignored or shrugged aside as unimportant in this Brave New World of unbridled advancement.

As a small example, consider the Bodega startup. At face value, using highly granular consumer spending data to provide micro-targeted selections of goods in on-premise vending machines seems like a straight win for convenience and efficiency. But there are human costs involved in further driving down the need for local economies of bodegas and convenience stores. How does putting more pressure on already-struggling small businesses help us do better as a society? What happens to the people whose jobs are eliminated in the name of that efficiency? How might this sort of centralized, computer-driven system amplify existing inequalities and impact the existence of food deserts? What happens when all of that highly-granular consumer spending data inevitably ends up getting stolen? The answers to these questions may end up being trivial, but the fact that nobody seems to be asking them in the first place is troubling.

And that’s just a vending machine. There are oceans of unanswered questions about the long-term impact of technologies like unmoderated centralized social networking, or technological panopticons that might reconnect you with a friend from grade school at the cost of the total collapse of people’s privacy, or “unbiased” algorithms that reinforce racist systems like criminal sentencing.

The argument here isn’t that technology is bad and we should throw our wooden shoes into the gears of the flour mill to stop the march of progress. It’s that technologists often forget that their work impacts real human beings in ways that are poorly captured by the signifiers of “success” and “value” that we as a society have constructed, such as capital, profit, and the size of a platform’s userbase.


@gracchus @alahmnat
great stuff.


Did you read the essay? (If so, for more information please re-read.)
That’s not at all what’s being said. It’s simply being pointed out that for the last 40+ years there’s been this subculture that’s taken it as an article of faith that the developments of technology would, by virtue of merely existing, create a utopia for us (or as near to one as could exist in the real world); that is, they would bring about “fairness, equity and prosperity for everyone.” Which isn’t what happened, because that’s not how technology works. While many people, even those outside this subculture, were expecting those issues to naturally get better, they’ve actually gotten worse, despite that technology (and sometimes even as a direct result of it). It’s because of those expectations, and the complacency they brought about, that those problems persist.


Well, there was that whole lifting more than a billion people out of abject poverty thing.


When Barlow said “the fact remains that there is not much one can do about bad behavior online except to take faith that the vast majority of what goes on there is not bad behavior,” his position was that we should accept the current state of affairs because there is literally no room for improvement. […] It is a little bit like saying on the one hand that the condition underlying human existence is nasty, brutish and short, and on the other, writing off any progress humanity has made to make our lives less nasty, kinder and longer. […]

[A]t a high level, I believe that we need to:

  1. Clearly decide what kind of society we want; and then
  2. Design and deliver the technologies that forever get us closer to achieving that desired society.

This train of thought is kind of left dangling. What exactly are “technologists” supposed to do about bad behavior online? If bad behavior is going to be suppressed, then someone’s going to have to decide on a standard of behavior and some means of enforcing it, and in the end there are going to be lots of people unhappy with that standard. Are technologists supposed to call the shots here?


Technologists only create tools. Tools get used by the powerful to do whatever the fuck they want; more technology means more power for the powerful.

The biggest problem here is the concentration of power. Which is only getting worse. Lots of people have all these great ideas, but they aren’t worth shit because democracy is a myth.

Get back to me when there’s some way to get the powerless on the same damn page and overthrow the entrenched order…


Tools are powerful. They can help or harm. What is required is wisdom and compassion. Technology is not required.


Sorry, I’m not sure if you misreplied to me, or your response is some sort of ironic doubling-down on not reading the essay…