Being willing to do something is not the same as requiring it.
Selected by whom? I understand the idea, but again, we are starting from assumptions that natural selection is somehow desirable or effective.
You might be arguing in circles here. The idea of dominance is based upon the same concept that it supposedly matters who prevails. I doubt it matters whether we call it dominance, advantage, selection, or anything else. It suggests that there is an underlying motivation for preferring one possible outcome over another, with no distinct reason as to why. Once you have programmable systems, you are no longer limited to natural selection. You can design new models as well as new instincts, instead of relying upon iterative accidents.
But seriously, I hear you; regardless of future possibilities, selection and iteration are true, and will remain true until at least (checks calendar) 2016.
At least natural selection isn't eugenics; as an individual, group, or race we have proven we are baaad at that.
To hand over your traits to the next generation you have to actually do it; being willing isn't enough. If you are less likely to actually do it than your competitor, there will be fewer of you in the next generation than of your competitors.
By the interaction between the agents and the environment.
Selection is The Factor in any iterative system.
If you won't prevail, you will be less likely to survive into the next generation. You may have some degree of survival if your trait gives you some advantage under some edge conditions, and that keeps a small subset of you in the population, but that's about it.
The "motivation" is an emergent property. If you have a set of traits that behave as a "motivation" for having more survival/spawn success, they will become dominant. The reason why is that they give an advantage for the next iteration.
And you will again have a set of selection factors that influence the agent's rate of survival and (dependent on the survival) rate of handing over traits to a next generation of agents.
Also, nature itself is a sort of programmable system; see genetics and epigenetics as an example. With knowledge (memes) as another set of entities.
How do I know that it matters if those traits continue? Again, you are re-phrasing the same a priori assumptions without examining them. How do you even know if someone even has any "competitors" if you don't know what their goals are? Everything you are saying here seems to be based upon the presumption that "the framework of natural biological birth and death is universally relevant…". Well, those assumptions, and their relevance, are precisely what I am questioning.
As an agent, I am not assuming anything. Why live as long as possible for its own sake? If my goals require me to live 27 years, wouldn't that be more efficient? What if people mate at random? If I combine 100,000 sperm with 100,000 ova, are any of them actually "competing" once they are in a different conceptual framework than natural selection? How do I know how prevalent my genes or traits should be? What is the optimum?
We could model it like this. But why? And perhaps more crucially - what is the significance? How do I know how much of "me" there should be in the population? And for what purpose?
This sounds circular also. How can we speak of "success" without any goals? If the goals are determined by traditional reproductive processes, then how much agency can be attributed to the participants? The only way an outcome can be recognized as successful is to devise a system which favors certain outcomes. Advantage is subjective, as it depends entirely upon what one is trying to do.
It could, if you like. But this still evades explaining why the rates of survival or distribution of traits might be significant. There are no doubt countless things which could be measured. What is so special about these?
If you thought so, I would be curious as to why you think people should subordinate themselves to randomly-defined goals. It sounds like you are trying to universalize a game based upon the fact that it already exists.
It does not matter. There is no meaning in it. There is just the survival/continuation.
Do blue-green algae need a meaning of their action to take over the planet and almost freeze it to death by eating all the CO2 and shitting oxygen instead?
There is nothing to examine there. You survive or you don't. Same for people, same for algae, same for memes. You don't need a Special Meaning, per se; any such "meaning" (usually present in the memes) is a means to an end, not the reason for its existence.
Unless you want to go outside the paradigm of an iterative system, you will be bound by the paradigm of an iterative system. It does not matter whether the framework is the ecosystem on a stupid rock in space, a society, or a genetic-algorithm-based system for turbine blade shape optimization. The underlying rules are the same.
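A minimal sketch of what I mean, in Python. The "blade" here is just a list of numbers and the fitness function is invented for illustration, but the loop is the entire paradigm: vary, select, repeat.

```python
import random

# Toy genetic algorithm: each agent is a list of numbers standing in for a
# blade shape; the fitness function is made up and peaks at all values = 0.5.
def fitness(shape):
    return -sum((x - 0.5) ** 2 for x in shape)

def mutate(shape, rate=0.1):
    return [x + random.gauss(0, rate) for x in shape]

population = [[random.random() for _ in range(8)] for _ in range(50)]

for generation in range(100):
    # Selection: keep the better half, discard the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # Reproduction with variation: mutated copies of the survivors.
    population = survivors + [mutate(s) for s in survivors]

print(fitness(population[0]))  # climbs toward 0 over the generations
```

Swap in a real blade model and a real aerodynamic score and nothing about the loop changes; that is the point.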
If we constrain ourselves to biological systems, are there long-term stable (or quasi-stable) biological systems that do not involve birth and death of the agents?
Because simple rules are easier to implement than complex ones. "Survive" is the kind of rule that gives you the most bang for the least complexity.
Nope. The implementation of the timer, and the decision matrix for when the cutoff time is reached or not, is way more complex; the additional advantage, if there's any, is apparently not worth the cost of the added complexity.
Then you get a pretty fast degradation; in biological systems you have a fairly high degradation rate, and most mutations and changes are for the worse. You have to pick the ones that are at least neutral, at best positive.
Then you get the framework of artificial selection. And they will have to get into some system later, and if the system is finite/resource-constrained, they will have to compete for the limited resources. And then we're back in the survival-of-the-best paradigm.
You don't. You don't need to. The outside selection factors will do that job for you. The optimum will be reached in a couple of iterations.
There is no purpose. There is just survival. Purpose is an illusion philosophers came up with; apparently it is a survival-enhancing factor worth its cost, or it would have been selected against as a resource sink.
The goal is self-preservation/self-propagation. Yes, it is a circular definition. It evolved that way - the agents that did not have this goal were selected against. Those lacking such traits are at a disadvantage in comparison, so they are few and far between.
Not much, actually.
The whole planet is one such system.
Some sort of survival/spread. Whether of one's genes or one's ideas. It can be said that we are just vehicles for these.
The survival itself. This scales down to unicellular organisms and even into the realm of abiotic self-catalyzing systems. If it can propagate itself, there will be more of it than of what can not propagate itself. If it can propagate itself better, there will be more of it than of what can propagate itself worse. As simple as that.
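To put rough numbers on "as simple as that", here is a toy comparison of two replicators that differ only in how well they copy themselves (the copy rates are made up):

```python
# Two replicators, identical except for copy rate; the rates are invented.
a, b = 1.0, 1.0
for step in range(20):
    a *= 1.10  # copies itself 10% better per step
    b *= 1.05
print(a / (a + b))  # A's share of the total after 20 steps: ~0.72, and climbing
```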
We evolved into needing some goals. It doesn't much matter which goals, as long as striving to reach them activates certain parts of the brain and floods it with certain neurotransmitters as rewards.
It exists; it evolved into the existing system in competition with other possible games. It does not need any other justification than its own existence. If you think you have a better one, show it off and let it compete.
Perhaps because lawmakers donât fully understand the implications.
There is going to be a period of time when these systems are new, and drivers will be expected to take over when the automation can't make an appropriate decision. Autopilot in aircraft sounds an alarm when human intervention is needed. There is usually an acceptable amount of time for humans to take over, even with the much higher speed of aircraft, because they are typically far from the ground or any other object they might collide with.
Cars on roadways are much closer together. The margin for error is smaller. A normal driver has far less training than a pilot and, if cruising along at 65-70 MPH about 20 feet behind another vehicle, is not going to have time to wake up from whatever their distraction is when the buzzer goes off to say: "I'm failing, you fix this."
So do you want to leave it up to the free market / laws from the horse and buggy ages to determine liability in such a situation?
the assumption would be that the person/people occupying the automated car would be safe in an accident because of their seat belt/steel cage/air bags… but that people/animals outside the car are in need of protection from the automated car/by the automated car.
so: car avoids accident > car risks damage to self to avoid causing damage to living objects, but car doesn't risk damage to self to avoid damage to/by other non-living objects.
means: car won't knowingly drive into objects of any kind, but it will prefer driving into an inanimate object (even another car) to avoid harming a living animal (perhaps of a specific type, if possible: human/cow, not squirrel).
if, for example, a car pulled into an intersection the automated car was driving through on a green light, it would attempt to slow down/speed up/or detour to avoid the most dangerous collision (t-bones are worse than side swipes); and if there was a person/horse to the left and an incoming car to the right, it would not swerve into the horse/person to avoid being t-boned, but it might attempt to slow down or speed out of the way to avoid a direct hit; but it would also not attempt to block/protect the horse/person from the incoming car. It would only be responsible for itself and its actions, and its occupants.
basic logic: don't commit suicide, don't commit homicide, don't cause harm, and, when unavoidable, minimize harm.
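roughly like the sketch below; the maneuvers, harm scores, and the exact priority order are all invented for illustration (a real planner would be nothing this simple), but it shows how that ranking could work:

```python
# Invented maneuvers and harm estimates, just to illustrate the priority order.
maneuvers = [
    {"name": "brake hard",   "human_harm": 0, "occupant_harm": 1, "animal_harm": 0, "property_harm": 1},
    {"name": "swerve left",  "human_harm": 5, "occupant_harm": 0, "animal_harm": 0, "property_harm": 0},
    {"name": "swerve right", "human_harm": 0, "occupant_harm": 2, "animal_harm": 0, "property_harm": 3},
]

def cost(m):
    # One possible lexicographic reading of the rules above: no homicide,
    # then no suicide/occupant harm, then no harm to animals, then property.
    return (m["human_harm"], m["occupant_harm"], m["animal_harm"], m["property_harm"])

print(min(maneuvers, key=cost)["name"])  # -> "brake hard"
```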
cars may be closer, but autonomous cars can be designed to maintain safe distances in accordance with human response time, just like autopilot in planes; at least on highways.
luckily, not even an autonomous vehicle would be at fault in an accident caused by the human driver behind them not keeping a safe distance… so if necessary the automated car could just hard brake to stop the vehicle at any time, should the human fail to take over.
The problem with maintaining safe distances based upon human response times is that you would have to have less traffic on roads if the humans are not constantly engaged, which they would not be if they assumed their cars were on autopilot. So unless you want to drastically reduce the amount of traffic on roads, you aren't going to be able to transition from the system we have now to one with automated cars without a period of increased risk, or reduced traffic flow.
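Back-of-the-envelope, with takeover times that are only guesses:

```python
# Following distance needed for a human takeover, vs. the ~20 ft spacing
# mentioned above. The takeover times are guesses.
speed_mph = 65
speed_fps = speed_mph * 5280 / 3600  # ~95 ft/s

for takeover_s in (2, 5, 10):
    gap_ft = speed_fps * takeover_s
    print(f"{takeover_s}s takeover at {speed_mph} mph -> {gap_ft:.0f} ft gap")
# 2 s: ~191 ft, 5 s: ~477 ft, 10 s: ~953 ft; far more road per car than today
```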
Your assumptions about how the law would interpret an accident caused by an autonomous vehicle are based on what exactly?
It's actually better than this - with other cars about, they can cooperate to reduce the risk. So an oncoming car can drive into a ditch (because that would likely be safer than a head on collision) to free up space for another to avoid a collision. Or they can make sure the impact happens so the passengers are protected - if all the passengers are on one side, then e.g. try to impact on the other side.
I'm not going to get into an argument with you about whether or not evolution is happening - and I honestly don't know if you'll even disagree with what I'm saying - but this is a little too simplistic. Turbine blade shape and blue-green algae are a lot simpler than human beings. When organisms are cooperating to survive, it helps if they aren't entirely motivated by their own survival, but are also invested in the survival of the group. But since we have big brains instead of a handful of binary choices, any trait like that is going to be expressed on a bell curve. In a social species, you ought to expect that some members are going to be very selfless, and even if that makes those individuals less likely to survive, you'll keep seeing them pop up generation after generation, because other people are passing on that trait at random. Plus, unlike in simulations, traits are often linked - two sickle cell genes and you are screwed, one and you are protected from malaria. Plus, the nature/nurture thing comes in and it becomes possible to share important traits without reproducing, and we become more idea-based and less body-based in how we identify. Ideas are subject to a selection process as well, but it is hard to imagine they aren't just as complex, with good and bad things linked together and cooperation messing everything up.
But that brings us to the idea of random breeding. You say it would lead to degeneration, but I think that we are seeing that diversity makes us better off. In a really rough world where we have to struggle to survive, having non-functioning legs would be extremely bad, but in a world where we can easily feed everyone and build wheelchairs and ramps, it's an advantage to us collectively to just have someone around who has a slightly different perspective - more inputs to the get-smarter system.
It was inevitable that we'd be programmed for survival when we arrived at this point, but there is every reason to question the utility of continuing to see ourselves as more important than others now that we have the tools to examine the situation objectively. It is very plausible that we'd all be better off if we didn't look out for #1 any more than #2. There's a game where you give each player $10 and each player can decide how much to keep and how much to put into a common pot. The money in the common pot is then multiplied by three and shared evenly between everyone. The Nash equilibrium is that you put in $0; in reality people (that is, college students in the US) put in about $5 (except students of Economics, who put in very close to $0). Obviously the best outcome is everyone puts in $10. Since genetic evolution is kind of a settled issue for our species (we're going to know how to reprogram our genes before any more significant changes are made by natural selection), I think it's not insane to start asking how we could get closer to the best result.
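Just to make the arithmetic concrete, here is that game with four players (the group size is my own choice for illustration):

```python
# Public goods game: the common pot is tripled and split evenly among players.
def payoffs(contributions):
    pot = 3 * sum(contributions)
    share = pot / len(contributions)
    return [10 - c + share for c in contributions]

print(payoffs([0, 0, 0, 0]))      # Nash equilibrium: everyone keeps $10
print(payoffs([5, 5, 5, 5]))      # roughly what students do: $20 each
print(payoffs([10, 10, 10, 10]))  # everyone all-in: $30 each
print(payoffs([0, 10, 10, 10]))   # the lone free-rider walks away with $32.50
```

With the pot only tripled and more than three players, each dollar you contribute comes back to you as less than a dollar, which is why contributing nothing is individually "rational" even though everyone contributing everything is collectively best.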
I wonder how many of those Econ students had a course which touched on game theory before playing. That is, I wonder how many of them were trained that this was the "rational" choice. Rational is such an unfortunate, value-laden choice of terms. It really should be called something like "locally optimizing".
Well, heck, if you can hack your car to save you from the Trolley Problem, you can also hack it to tell your (or someone else's) car (or cars) to hit people matching a particular facial recognition profile, or say, someone who appears to be wearing riot gear.
Even worse, you might be able to do this while making the system's logs claim that the driver manually took control to perpetrate this crime.
You're definitely asking the wrong question if you're limiting yourself to questions of saving lives. Because someone will no doubt be asking the question of "how can I use this for evil?"