If you think self-driving cars have a Trolley Problem, you're asking the wrong questions

Being willing to do something is not the same as being required to do it.

Selected by whom? I understand the idea, but again, we are starting from assumptions that natural selection is somehow desirable or effective.

You might be arguing in circles here. The idea of dominance rests on the same concept, that it somehow matters who prevails. I doubt it matters whether we call it dominance, advantage, selection, or anything else. It suggests that there is an underlying motivation for preferring one possible outcome over another, with no distinct reason as to why. Once you have programmable systems, you are no longer limited to natural selection. You can design new models as well as new instincts, instead of relying upon iterative accidents.

My mom told me I was an iterative accident.


But seriously, I hear you; regardless of future possibilities, selection and iteration are true, and will remain true until at least (checks calendar) 2016.

At least natural selection isn’t eugenics; as individuals, groups, or races we have proven we are baaad at that.

2 Likes

To hand your traits over to the next generation, you actually have to do it; willingness alone hands nothing over. If you are less likely to manage it than your competitor, there will be fewer of you in the next generation than of them.

By the interaction between the agents and the environment.

Selection is The Factor in any iterative system.

If you don’t prevail, you are less likely to survive into the next generation. You may persist to some degree if your trait gives you an advantage under some edge conditions, keeping a small subset of you in the population, but that’s about it.

The “motivation” is an emergent property. If you have a set of traits that behave as a “motivation” toward more survival/spawning success, they will become dominant, because they give an advantage in the next iteration.

And you will again have a set of selection factors that influence the agent’s rate of survival and (dependent on the survival) rate of handing over traits to a next generation of agents.

Also, nature itself is a sort of programmable system; see genetics and epigenetics as examples, with knowledge (memes) as another set of entities.

1 Like

Clang, clang, clang went the trolley
no, no, no screamed the kids
please don’t kill me begged the fat man
but I’m a conformist so I did!

10 Likes

How do I know that it matters if those traits continue? Again, you are rephrasing the same a priori assumptions without examining them. How do you even know if someone has any “competitors” if you don’t know what their goals are? Everything you are saying here seems to be based upon the presumption: “Assuming that the framework of natural biological birth and death is universally relevant…”. Well, those assumptions, and their relevance, are precisely what I am questioning.

As an agent, I am not assuming anything. Why live as long as possible for its own sake? If my goals require me to live 27 years, wouldn’t that be more efficient? What if people mate at random? If I combine 100,000 sperm with 100,000 ova, are any of them actually “competing” once they are in a different conceptual framework than natural selection? How do I know how prevalent my genes or traits should be? What is the optimum?

We could model it like this. But why? And perhaps more crucially - what is the significance? How do I know how much of “me” there should be in the population? And for what purpose?

This sounds circular also. How can we speak of “success” without any goals? If the goals are determined by traditional reproductive processes, then how much agency can be attributed to the participants? The only way an outcome can be recognized as successful is to devise a system which favors certain outcomes. Advantage is subjective, as it depends entirely upon what one is trying to do.

It could, if you like. But this still evades explaining why the rates of survival or distribution of traits might be significant. There are no doubt countless things which could be measured. What is so special about these?

If you thought so, I would be curious as to why you think people should subordinate themselves to randomly-defined goals. It sounds like you are trying to universalize a game based upon the fact that it already exists.

It does not matter. There is no meaning in it. There is just the survival/continuation.

Do blue-green algae need a meaning of their action to take over the planet and almost freeze it to death by eating all the CO2 and shitting oxygen instead?

There is nothing to examine there. You survive or you don’t. Same for people, same for algae, same for memes. You don’t need a Special Meaning, per se; any such “meaning” (usually present in the memes) is a means to an end, not the reason for its existence.

Unless you want to go outside the paradigm of an iterative system, you will be bound by the paradigm of an iterative system. It does not matter if the framework is the ecosystem on a stupid rock in space, a society, or a genetic-algorithm-based system for turbine blade shape optimization. The underlying rules are the same.
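If it helps, here is a toy version of that last example, to show how little machinery selection needs. (A sketch only: the quadratic “fitness” stands in for scoring a real blade shape in a flow simulation, and all the numbers are arbitrary.)

```python
import random

def fitness(x):
    # Stand-in objective; a real optimizer would score a parameterized
    # blade shape in a flow simulation instead of a toy quadratic.
    return -(x - 3.0) ** 2

def evolve(pop_size=50, generations=30, mutation_sd=0.1):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: only the better-scoring half hands its "traits" on.
        survivors = sorted(population, key=fitness)[pop_size // 2:]
        # Reproduction with variation: two slightly mutated copies each.
        population = [s + random.gauss(0, mutation_sd)
                      for s in survivors for _ in range(2)]
    return max(population, key=fitness)

print(evolve())  # converges near 3.0 with nobody "intending" it to
```

Note that the loop never mentions a goal anywhere; survival of the top half plus copying with variation is the whole mechanism, whether the agents are numbers, blades, or organisms.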

If we constrain ourselves to biological systems, are there long-term stable (or quasi-stable) biological systems that do not involve birth and death of the agents?

Because simple rules are easier to implement than complex ones. “Survive” is such a rule; it provides the most bang for the least complexity.

Nope. The implementation of the timer, and the decision matrix for when the cutoff time is reached or not, is way more complex; the additional advantage, if there’s any, is apparently not worth the cost of the added complexity.

Then you get a pretty fast degradation; in biological systems you have a fairly high degradation rate, as most mutations and changes are for the worse. You have to pick the ones that are at least neutral, at best positive.

Then you get the framework of artificial selection. And the offspring will have to enter some system later; if that system is finite/resource-constrained, they will have to compete for the limited resources, and then we’re back in the survival-of-the-best paradigm.

You don’t. You don’t need to. The outside selection factors will do that job for you. The optimum will be reached in a couple iterations.

There is no purpose. There is just survival. Purpose is what philosophers came up with as an illusion that is apparently a survival-enhancing factor worth its cost, or it would be selected against as a resource sink.

The goal is self-preservation/self-propagation. Yes, it is a circular definition. It evolved that way: the agents that did not have this goal were selected against. Those lacking such traits are at a disadvantage in comparison, so they are few and far between.

Not much, actually.

The whole planet is one such system.

Some sort of survival/spread. Whether of one’s genes or one’s ideas. It can be said that we are just vehicles for these.

The survival itself. This scales down to unicellular organisms and even into the realm of abiotic self-catalyzing systems. If it can propagate itself, there will be more of it than of what cannot propagate itself. If it can propagate itself better, there will be more of it than of what propagates itself worse. As simple as that.
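To put rough numbers on “as simple as that”, the whole effect fits in a few lines (the replication rates are arbitrary; only the difference between them matters):

```python
# Two self-replicators that differ only in copy rate.
a, b = 1.0, 1.0                 # equal starting populations
rate_a, rate_b = 1.10, 1.05     # per-step replication factors (arbitrary)

for _ in range(200):
    a *= rate_a
    b *= rate_b

print(a / (a + b))  # ~0.9999: the faster replicator is nearly all that's left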

We evolved into needing some goals. It doesn’t much matter which goals, as long as striving to reach them activates certain parts of the brain and floods it with certain neurotransmitters as rewards.

It exists, it evolved in the competition with other possible games into the existing system. It does not need any other justification than its own existence. If you think you have a better one, show it off and let it compete.

4 Likes

And, will your insurance cover your car’s screw-up?

Well, if part of your non-self-driving car malfunctions - say a tie rod breaks - and kills someone, who is liable?

You, the car, the manufacturer, the coders/tie-rod machinists?

Why would defective driving software be any different?

4 Likes

Perhaps because lawmakers don’t fully understand the implications.

There is going to be a period of time when these systems are new, and drivers will be expected to take over when the automation can’t make an appropriate decision. Autopilot in aircraft sounds an alarm when human intervention is needed. There is usually an acceptable amount of time for humans to take over, even with the much higher speed of aircraft, because they are typically far from the ground or any other object they might collide with.

Cars on roadways are much closer together. The margin for error is smaller. A normal driver has far less training than a pilot and, if cruising along at 65-70 MPH ~20 feet behind another vehicle, is not going to have time to wake up from whatever their distraction is when the buzzer goes off to say: “I’m failing, you fix this.”

So do you want to leave it up to the free market / laws from the horse and buggy ages to determine liability in such a situation?

1 Like

the assumption would be that the person/people occupying the automated car would be safe in an accident because of their seat belt/steel cage/air bags… but that people/animals outside the car are in need of protection from the automated car/by the automated car.

so: car avoids accident > car risks damage to self to avoid causing damage to living objects, but car doesn’t risk damage to self to avoid damage to/by other non-living objects.

means: car won’t knowingly drive into objects of any kind, but it will prefer driving into an inanimate object (even another car) to avoid harming a living animal (perhaps of a specific type, if possible: human/cow, not squirrel).

if, for example, a car pulled into an intersection the automated car was driving through on a green light, it would attempt to slow down/speed up/or detour to avoid the most dangerous collision (t-bones are worse than side swipes); and if there was a person/horse to the left and an incoming car to the right, it would not swerve into the horse/person to avoid being t-boned, but it might attempt to slow down or speed out of the way to avoid a direct hit; but it would also not attempt to block/protect the horse/person from the incoming car. It would only be responsible for itself and its actions, and its occupants.

basic logic: don’t commit suicide, don’t commit homicide, don’t cause harm, and, when unavoidable, minimize harm.
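in toy form, that priority list might look something like this (the maneuvers and harm estimates are invented for illustration; i’m not claiming any real car works off a table like this):

```python
def choose_maneuver(options):
    # apply the priorities as successive filters: rule out maneuvers
    # expected to kill the occupants, then those expected to kill other
    # people, then minimize overall harm among whatever is left.
    viable = [o for o in options if not o["kills_occupants"]] or options
    safer = [o for o in viable if not o["kills_others"]] or viable
    return min(safer, key=lambda o: o["expected_harm"])

options = [
    {"name": "stay course, get t-boned",   "kills_occupants": True,  "kills_others": False, "expected_harm": 0.9},
    {"name": "swerve into horse/person",   "kills_occupants": False, "kills_others": True,  "expected_harm": 0.8},
    {"name": "brake hard / speed through", "kills_occupants": False, "kills_others": False, "expected_harm": 0.3},
]
print(choose_maneuver(options)["name"])  # -> brake hard / speed through
```

the `or` fallbacks are the “when unavoidable, minimize harm” clause: if every option kills someone, the filters empty out and it just picks the least expected harm.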

1 Like

cars may be closer, but autonomous cars can be designed to maintain safe distances in accordance with human response time, just like autopilot in planes; at least on highways.

luckily, not even an autonomous vehicle would be at fault in an accident caused by the human driver behind them not keeping a safe distance… so if necessary the automated car could just hard brake to stop the vehicle at any time, should the human fail to take over.
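rough numbers on why the ~20 foot gap mentioned upthread can’t be a human-response-time gap (reaction time and deceleration are assumed round figures, not measurements):

```python
v = 67.5 * 0.44704       # 65-70 MPH midpoint in m/s (~30 m/s)
t_react = 1.5            # s: distracted driver noticing the takeover alarm
a = 0.8 * 9.81           # m/s^2: hard braking on dry pavement (assumed)

reaction_dist = v * t_react        # ~45 m travelled before braking begins
braking_dist = v**2 / (2 * a)      # ~58 m more to reach a full stop
total = reaction_dist + braking_dist
print(round(total), round(total * 3.281))  # ~103 m, ~339 ft vs. a 20 ft gap
```

so a gap sized for a human in the loop is more than an order of magnitude bigger than the gap people actually leave.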

1 Like

The problem with maintaining safe distances based upon human response times is that you would have to have less traffic on the roads if the humans are not constantly engaged, which they would not be if they assumed their cars were on autopilot. So unless you want to drastically reduce the amount of traffic on the roads, you aren’t going to be able to transition from the system we have now to one with automated cars without a period of increased risk or reduced traffic flow.

Your assumptions about how the law would interpret an accident caused by an autonomous vehicle are based on what exactly?

Then you lose one of the big benefits of self-driving cars - being able to drive with a very small gap, increasing capacity and reducing losses.

1 Like

It’s actually better than this - with other cars about, they can cooperate to reduce the risk. So an oncoming car can drive into a ditch (because that would likely be safer than a head on collision) to free up space for another to avoid a collision. Or they can make sure the impact happens so the passengers are protected - if all the passengers are on one side, then e.g. try to impact on the other side.

3 Likes

And space the cars on intersections so they can zoom through the gaps between the cars in the perpendicular direction.

3 Likes

I’m not going to get into an argument with you about whether or not evolution is happening - and I honestly don’t know if you’ll even disagree with what I’m saying - but this is a little too simplistic. Turbine blade shapes and blue-green algae are a lot simpler than human beings. When organisms are cooperating to survive, it helps if they aren’t entirely motivated by their own survival but are also invested in the survival of the group. And since we have big brains instead of a handful of binary choices, any trait like that is going to be expressed on a bell curve. In a social species, you ought to expect that some members are going to be very selfless, and even if that makes those individuals less likely to survive, you’ll keep seeing them pop up generation after generation, because other people are passing on that trait at random.

Plus, unlike in simulations, traits are often linked - two sickle cell genes and you are screwed, one and you are protected from malaria. Plus, the nature/nurture thing comes in, it becomes possible to share important traits without reproducing, and we become more idea-based and less body-based in how we identify. Ideas are subject to a selection process as well, but it is hard to imagine they aren’t just as complex, with good and bad things linked together and cooperation messing everything up.

But that brings us to the idea of random breeding. You say it would lead to degeneration, but I think that we are seeing that diversity makes us better off. In a really rough world where we have to struggle to survive, having non-functioning legs would be extremely bad, but in a world where we can easily feed everyone and build wheelchairs and ramps, it’s an advantage to us collectively to just have someone around who has a slightly different perspective - more inputs to the get-smarter system.

It was inevitable that we’d be programmed for survival by the time we arrived at this point, but there is every reason to question the utility of continuing to see ourselves as more important than others now that we have the tools to examine the situation objectively. It is very plausible that we’d all be better off if we didn’t look out for #1 any more than #2. There’s a game where you give each player $10 and each player can decide how much to keep and how much to put into a common pot. The money in the common pot is then multiplied by three and shared evenly between everyone. The Nash equilibrium is to put in $0; in reality people (that is, college students in the US) put in about $5 (except students of Economics, who put in very close to $0). Obviously the best outcome is everyone puts in $10. Since genetic evolution is kind of a settled issue for our species (we’re going to know how to reprogram our genes before any more significant changes are made by natural selection), I think it’s not insane to start asking how we could get closer to the best result.
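If you want to check the arithmetic on that game (the $10 endowment and 3x multiplier are per the description above; four players is my assumption):

```python
def payoff(mine, others, endowment=10, multiplier=3):
    # Keep what you didn't contribute, plus an even share of the tripled pot.
    pot = mine + sum(others)
    return endowment - mine + multiplier * pot / (1 + len(others))

others = [5, 5, 5]  # three other players at the observed ~$5 level
for c in (0, 5, 10):
    print(c, payoff(c, others))
# 0  -> 21.25   free-riding pays the individual most: the Nash logic
# 5  -> 20.0
# 10 -> 18.75   ...yet if *everyone* put in $10, each would get $30
```

Each dollar you contribute returns you only $0.75 but returns the table $3, which is the whole tension.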

5 Likes

Could you not argue that you were not the driver?

I wonder how many of those Econ students had a course which touched on game theory before playing. That is, I wonder how many of them were trained that this was the “rational” choice. “Rational” is such an unfortunate, value-laden choice of terms. It really should be called something like “locally optimizing”.

6 Likes

Well, heck, if you can hack your car to save you from the Trolley Problem, you can also hack it to tell your (or someone else’s) car (or cars) to hit people matching a particular facial recognition profile, or say, someone who appears to be wearing riot gear.

Even worse, you might be able to do this while making the system’s logs claim that the driver manually took control to perpetrate this crime.

You’re definitely asking the wrong question if you’re limiting yourself to questions of saving lives. Because someone will no doubt be asking the question of ‘how can I use this for evil?’

Keeping in mind that the difference between “autonomous vehicle” and “cruise missile” is largely a matter of labelling.

Who needs suicide bombers when you have suicide cars?

3 Likes