The AI Now Report: social/economic implications of near-future AI

Originally published at: http://boingboing.net/2016/09/24/the-ai-now-report-socialecon.html

1 Like

I think that’s bass-ackwards. The best thing about AI is the possibility of eliminating antiquated systems of economics and government. It’s the first (and perhaps only) shot we have at societies that are not subject to the instinctive biases of human reasoning.

1 Like

Except we have ample evidence that people are simply encoding their biases in AI systems - and when you have a total lack of diversity in those creating the systems, you have a consistent set of biases. Not only that, but those biases may very well be in support of maintaining existing systems of economics and government.

11 Likes

No kidding.

ETA: That sounded dismissive, but I am too tired to articulate this properly now.

1 Like

The thing is, any AI we construct will have human biases. (In fact, the very desire to create AI is a human bias, with deep links to our desire to reduce the effort we expend for a given reward and to achieve some kind of immortality.) Unless we address these biases, we’re just going to bake the existing inequalities and their consequences in even further, because we’ll never get a “wipe the slate clean, start from scratch” civilization.

1 Like

The scarier problem (as I see it) is that the fundamental ownership model in capitalism (whereby the owner of a system is the primary beneficiary) shows no signs of going away.

We could have an age of leisure as AIs and robots do huge swathes of the work, with the results of those labors equitably shared with the displaced.

Instead we are on a collision course with a future where wealth is further consolidated in the hands of those who were able to buy/build the most robots.

Minimum Annual Income, people. It’s the only way out of a nightmare scenario.

Or: what if governments aggressively nationalized any AI scheme that has a whiff of profitability? Imagine Uber getting to the point where 90% of its profits come from autonomous cars, and then the government taking control: the US buys Uber out, and also gets more cash to fund social programs.

5 Likes

I don’t think so. We only have evidence that AI systems in actual use end up acting just like biased humans.

There are more than just the “people who create the systems”; there are also the people who demand a system and the people who use a system.
Once someone says, “we want a system that estimates whether someone will pay their bills, and here is the data we have on people”, the damage is done.

The engineers don’t usually “encode” any biases; they just create an imperfect machine that draws imperfect conclusions from biased data. How “diverse” the engineers are doesn’t matter.
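
As a minimal sketch of that mechanism (Python, with entirely invented repayment data), here is a “model” whose code contains no bias at all, yet which rejects everyone in the historically disadvantaged group, because the data reflects past treatment rather than individual behavior:

```python
# Historical repayment records: (group, repaid). Suppose group "B" was
# historically denied affordable credit, so its recorded default rate is
# higher -- the data encodes past treatment, not innate risk.
history = [("A", True)] * 90 + [("A", False)] * 10 \
        + [("B", True)] * 70 + [("B", False)] * 30

def repay_rate(group):
    """Plain frequency estimate -- correct maths on the given data."""
    records = [repaid for g, repaid in history if g == group]
    return sum(records) / len(records)

def approve(group, threshold=0.8):
    """The 'imperfect machine': approve whoever the data says repays."""
    return repay_rate(group) >= threshold

print(approve("A"))  # True  -- 90% recorded repayment rate
print(approve("B"))  # False -- 70% rate, so every individual in B is
                     # rejected, however reliable they personally are
```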


Consider the European insurance market as an example.
A few decades ago, the men in charge there would have unanimously told you that “women can’t drive”. And yet, when they used maths to determine their insurance premiums, they didn’t just encode their biases. They encoded some maths, and the maths told them that men caused more accidents than women. #notallmen, we cried, but it didn’t help: the most risk-averse of young men had to pay higher premiums than the most reckless of women.
On the other hand, for private health insurance, even single, childless, lesbian women had to pay higher fees because complications surrounding childbirth drive up health costs.
Gender-based differences in insurance premiums were outlawed in the EU in 2012.

This discrimination arose because correct maths was correctly applied to the wrong question. Not because there were too many female mathematicians in car insurance or too many male mathematicians in health insurance.
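
To put toy numbers on the car-insurance half of that (the rates and costs below are invented, purely to show the pricing logic):

```python
# Each group pays its own expected claim cost, so every member of the
# riskier group pays more, however careful they are individually.
accident_rate = {"men": 0.12, "women": 0.08}  # hypothetical annual rates
avg_claim = 5000                              # hypothetical cost per claim

def premium(group, margin=1.2):
    """Expected claims for the group, plus a 20% margin."""
    return accident_rate[group] * avg_claim * margin

print(premium("men"))    # 720.0 -- what the most careful man pays
print(premium("women"))  # 480.0 -- what the most reckless woman pays
```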

5 Likes

But what about AI rights? I know it’s probably a little too early to start that discussion, but we might win some points for when god-like AIs take control if we show that we cared for them from the start.

I, for one, preemptively welcome our future cybernetic overlords.

3 Likes

There was a recent project that used AI to write an original sci-fi screenplay. The result didn’t pass the Bechdel test. Garbage in, garbage out.
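
For reference, the Bechdel test asks whether a work features two named women who talk to each other about something other than a man. A deliberately naive sketch of such a check (the scene format and names here are hypothetical, nothing like a real screenplay parser):

```python
# Scenes are simplified to (speaker, addressee, line) tuples.
FEMALE_CHARACTERS = {"ALICE", "CAROL"}  # hypothetical cast list
MALE_REFERENCES = {"he", "him", "his", "man", "boyfriend", "husband"}

def passes_bechdel(scenes):
    for speaker, addressee, line in scenes:
        words = set(line.lower().split())
        if (speaker in FEMALE_CHARACTERS
                and addressee in FEMALE_CHARACTERS
                and not words & MALE_REFERENCES):
            return True  # two named women, not talking about a man
    return False

print(passes_bechdel([("ALICE", "CAROL", "Did the experiment work?")]))  # True
print(passes_bechdel([("ALICE", "BOB", "Did the experiment work?")]))    # False
```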

3 Likes

This shows the need for an up-to-date Star Trek franchise that can flesh out an egalitarian paradise we can then use to train future AI

3 Likes

In some cases we have more than that - in a few cases we can see the biases that went into the algorithms themselves, but mostly we can see the biases in the material used to train the system.

Yeah, the perfect example of knowing the material used to train the system, and knowing the biases of that material.

And all the while, they’re working the older voters into a froth about the virtues of jobs and how we have to force people into the workforce by doing away with social safety nets, and filling those voters’ heads with visions of bringing back the “good ol’ days”.

I’ve been afraid of the damn near inevitable dystopian nightmare coming right towards us. To make it worse, they think that somehow they’ll still have a customer base after driving us all into destitution.

2 Likes

I’d say that you don’t need a bias to discriminate unfairly.

Sometimes, there is no bias in the material used to train the system, just facts. And sometimes, it is unjust to act upon certain facts to maximize some metric. See the insurance example.

Side note: I reflexively disagree with the idea that this somehow has to do with a lack of diversity among engineers. Why? Because the idea that a company’s workforce has to be diverse has an inherent American/anglocentric bias. Most countries are much more homogeneous in their demographics than most English-speaking countries, so our companies will never be able to keep up in terms of workforce diversity. I resent the idea that this makes our AI solutions inherently more racist. Just as I can’t get myself to care about racial diversity on Star Trek as long as 95% of all human characters are native English speakers.

1 Like

But you can use “just facts” and still have bias - a selection bias. A more diverse workforce is more likely to notice those biases.
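
A tiny numeric sketch of that (with invented numbers): if two neighborhoods offend at the same true rate but one is watched three times as closely, every record in the database is a real “fact”, yet the totals differ threefold:

```python
# Selection bias: every recorded event is real, but what gets recorded
# depends on where we looked, not only on what actually happened.
true_offense_rate = 0.05               # identical in both neighborhoods
observations = {"A": 1000, "B": 3000}  # B is patrolled 3x as heavily

expected_records = {g: n * true_offense_rate for g, n in observations.items()}
print(expected_records)  # {'A': 50.0, 'B': 150.0} -- B looks 3x as "criminal"
```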

I believe that AI robots have already colonized humans’ minds: humans are no longer capable of using the word “affect,” and now only use the utterly stupid “impact.”
