Questioning the nature of reality with cognitive scientist Donald Hoffman

It’s funny, though, when you put it this way. The ability to shrug off and work with the unfamiliar, or survive in an environment we didn’t evolve for, seems like it might be a defining feature of human “intelligence.” Maybe we are measuring just what we mean to measure.

The only part of this I’d actually come out and challenge is the preference we say we have for adaptability. Rats, pigeons, cockroaches, grey squirrels, flies: they have proven to be just as ornery and adaptable as the human systems they colonize. Yet we talk as though we’d prefer the native species, the more stable native biome, instead. It’s the doling out of social status to animals we like to imagine are smart: no eating dogs or horses, but torturing chimps is fine… and then there’s what we do to the human animal in these cities! It’s as if we set out to deliberately construct massive machines to create anxiety, and then put humans inside these machines to create treasure.

I think I would benefit from living in a place where less adaptability was required of me.

I don’t think we have a preference for adaptability, but I think going to the Moon gets us a gold sticker for most adaptable. Even if that’s cheating, I don’t know if any other animal lives consistently in all climates and at such a wide range of temperatures (“flies” don’t count, that’s like a zillion different species).

But the whole idea of intelligence as a measure of whether we ought to torture or eat an animal is a facade anyway. No one could credibly say that dogs are smarter than chimps. We just do what we like and make up a reason after.

What I missed saying in my first reply, though, was that I adore your characterization of how we measure animal intelligence. It’s just that I think we do exactly the same thing to many humans. IQ tests are a very stressful and alien environment.


This is a difficult discussion because we’re limited to Hoffman’s representations of his own work, much of which we (or at least I) wouldn’t be able to read in its pure form. He seems to be making two bold statements that don’t get unpacked in the episode. The first is that evolutionary fitness in no way depends upon having an accurate perception of reality, which, if true, would be really interesting because it’s so counter-intuitive. Here, I took him to mean that he built a model wherein agents with inaccurate perceptions don’t die* at a greater or lesser rate than those with accurate perceptions, but I wish we knew more about the design and assumptions that went into that model.

*(it’s interesting to consider whether death itself could be a false perception)
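To make that first claim concrete for myself, here is a minimal toy version of the “fitness beats truth” idea as I understand it. Everything in it is my own invention, not his actual model: I assume a resource whose payoff is nonmonotonic (some middling amount is best), a “truth” agent that perceives the true quantity, and a “fitness” agent that perceives only the payoff.

```python
import math
import random

def payoff(quantity):
    # Nonmonotonic fitness: a middling amount of the resource is best.
    # The Gaussian shape and its parameters are arbitrary assumptions.
    return math.exp(-((quantity - 50) ** 2) / (2 * 15 ** 2))

def trial():
    # Two territories with random resource levels; each agent picks one.
    a, b = random.uniform(0, 100), random.uniform(0, 100)
    truth_pick = max(a, b)                # sees the true quantity, takes more
    fitness_pick = max(a, b, key=payoff)  # sees only the payoff, takes better
    return payoff(truth_pick), payoff(fitness_pick)

n = 100_000
results = [trial() for _ in range(n)]
print(f"truth-seeing agent:   mean payoff {sum(t for t, _ in results) / n:.3f}")
print(f"fitness-seeing agent: mean payoff {sum(f for _, f in results) / n:.3f}")
```

With a nonmonotonic payoff like this, the fitness-seeing agent reliably outscores the truth-seeing one, which at least makes the counter-intuitive claim plausible. Whether his real model amounts to more than this, I can’t say.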

The conclusion he seems to be drawing from that is that even if evolution is an accurate model of reality, it still wouldn’t produce minds which are tuned to perceive ultimate reality. Therefore, evolution is insufficient to support the physicalist thesis that our perceptions closely mirror reality, and we should accordingly assume that most of what we’re experiencing is just user interface. However, it is possible to find cracks in the facade, which he says physicists are doing as they start to question spacetime.

How, then, do we know what is user interface and what is truth? That’s where he makes his second bold statement, which, as I understand it, is that he has some kind of math showing that you can build science back up again by, in some way, substituting units of consciousness for units of matter and energy. That’s the concept I had the most difficulty with. I simply can’t picture what that kind of science would look like: how you start with consciousness and get to gravity without any danger of false perception.

I liked this interview a lot because Hoffman is rigorously confronting the hard problem of consciousness without trying to reduce consciousness to the mechanics of the brain. While I’m inclined to agree that pretty much everything we perceive is just user interface, I’m skeptical that ultimate reality is in any sense knowable. Not because of any evolutionary constraints, mind you, but because of the Chinese Room. With only our perceptions as data we cannot infer what lies outside our perceptions, any more than we can understand Chinese without access to a single definition, even if we have a perfect understanding of how each and every symbol relates to every other. I think that the best we can do is understand the surface phenomena—what we perceive as the physical world—while recognizing that there is probably much more happening that it will always be impossible for us to perceive.

Thank you. Though, if there’s a special brownie point to be scored by going to the Moon, humans should be disqualified from being the judges for it. Sleeping over for a couple of nights and then blasting off is a far cry from living there. If there exists such an award, I doubt we’ve gotten very close to earning it quite yet.

It’s that model that seems the most far-fetched part to me. I don’t understand how evolutionary theorists build these models, but I know that not very long ago they were building models that gave results that didn’t reflect the real world. I have a suspicion that while they have refined their models they haven’t come remotely close to perfecting them. So to go from a model to a probability of something being true in the real world with enough confidence to say “precisely” seems strange to me.

Beyond that immediate quibble, a few things he said seemed really concerning to me. He talked about beings of equal complexity. First of all, my short reading on the matter suggests that biologists don’t have a clear definition of what “complexity” is, which makes me wonder how you could model it. He knows more than I do on this front, though. I also understand that the current belief is that evolution doesn’t particularly select for complexity or simplicity.

But it seems like he’s talking entirely about DNA-based progression. What if you take one of the species in his model and give it the ability to make tools that can be handed down from one member of the species to its children? Then survival isn’t just dependent on your genetics but also on your technological inheritance. I think it’s pretty easy to look at the world and see that technological inheritance is a more powerful fitness tool than any possible sequence of DNA, especially since sufficient technological inheritance would allow a species to simply rewrite its DNA in whatever way was desirable.

Since we are talking about human beings and our ability to perceive reality and whether it has been useful to us, I don’t see how we can leave this out of the discussion.

Another thing is that I really want to know how he could have modeled the value of the contents of consciousness when he admits to not knowing what consciousness is. He’s saying that we can have a rich internal symbolic world (the user interface) that lets us distinguish things to eat from things not to eat without having any accurate perception of underlying reality, which sounds like it may or may not be true. But I wonder if he is also saying that in his model he assumed the two things were independent, so that having an accurate understanding of reality was an additional cost on top of that symbolic inner world. In reality, it could be that they are highly dependent on one another, and developing the symbolic world without a good understanding of reality would be more costly. To use the user interface analogy again, writing really, really efficient code requires an understanding of the hardware it is running on. That analogy is super weak and isn’t meant to prove anything; it’s just meant to point out that there is a question I don’t think he could possibly answer without knowing what consciousness is to begin with.


I’m actually a fan of this type of modeling. I don’t know if you’re familiar with Robert Axelrod’s work on the iterated prisoner’s dilemma (IPD), but it’s a good example of how agent-based simulations can explain complex, real-world behaviors.

The actual simulation used genetic algorithms: random bit arrays encoding strategies for playing IPD, which were made to evolve by natural selection until the best ones emerged. When Hoffman talks about “complexity,” I think he’s referring to the length of (i.e., the number of bits encoded by) the genetic strings in his simulations (which corresponds to the number of nodes in a neural net).
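For anyone who hasn’t seen this kind of thing, here is a bare-bones sketch of the mechanism. It is my own toy, not Axelrod’s actual setup: memory-one strategies encoded as five bits, scored in a round-robin tournament, with the top half surviving each generation and spawning mutated copies.

```python
import random

# Memory-one IPD genome: [first move, reply to DD, DC, CD, CC], 1 = cooperate.
# Payoffs are the classic reward/temptation/sucker/punishment values.
PAYOFF = {(1, 1): (3, 3), (1, 0): (0, 5), (0, 1): (5, 0), (0, 0): (1, 1)}

def play(g1, g2, rounds=50):
    m1, m2, s1, s2 = g1[0], g2[0], 0, 0
    for _ in range(rounds):
        p1, p2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + p1, s2 + p2
        # Next move is looked up by (own last move, opponent's last move).
        m1, m2 = g1[1 + 2 * m1 + m2], g2[1 + 2 * m2 + m1]
    return s1, s2

def fitness(pop):
    # Round-robin tournament: every genome plays every other genome once.
    scores = [0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            a, b = play(pop[i], pop[j])
            scores[i] += a
            scores[j] += b
    return scores

def evolve(pop_size=20, generations=200, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(zip(fitness(pop), pop), reverse=True)
        survivors = [g for _, g in ranked[: pop_size // 2]]
        children = [
            [bit ^ (random.random() < mutation_rate) for bit in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
        pop = survivors + children
    return pop

print(evolve()[:3])  # tit-for-tat would be [1, 0, 1, 0, 1] in this encoding
```

On that reading of “complexity,” adding perceptual categories just means a longer genome, and a longer genome costs more to maintain and search. Whether anything interesting emerges in a toy this small depends heavily on the payoff values and mutation rate, which is part of why I share your caution about reading precise probabilities off these models.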

It seems to me that culture and technological inheritance are just faster-paced expressions of what the gene is already doing (e.g., learning and remembering), so that doesn’t bother me too much, but I agree that the equal complexity assumption might be the weak spot because that’s definitely not what’s happening in the real world—species of varying complexity are competing with one another. When he models agents of varying complexity, is he finding that there is selection pressure that favors accurate perceptions?

I’m reading through one of his papers, and he is making the argument that veridical perceptions are more expensive. If you’re a male beetle, he’s saying, it’s more effective to be attracted to anything that vaguely matches a female beetle’s markings than to pay for the extra complexity required to avoid occasionally mating with a beer bottle.
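The trade-off he’s describing is easy to caricature in code. Every number below is invented purely for illustration: a “loose” detector accepts anything shiny and brown, while a hypothetical “veridical” detector discriminates perfectly but pays a fixed complexity cost.

```python
import random

def season(detector_cost, false_positive_rate, encounters=200, trials=1000):
    # Average reproductive payoff over one mating season; all numbers made up.
    total = 0.0
    for _ in range(trials):
        fitness = -detector_cost  # upkeep cost of the perceptual machinery
        for _ in range(encounters):
            if random.random() < 0.1:            # a real female beetle
                fitness += 1.0                   # successful mating
            elif random.random() < false_positive_rate:
                fitness -= 0.05                  # wasted effort on a beer bottle
        total += fitness
    return total / trials

print(f"loose detector:     {season(detector_cost=0.0, false_positive_rate=1.0):.1f}")
print(f"veridical detector: {season(detector_cost=15.0, false_positive_rate=0.0):.1f}")
```

With these made-up numbers the sloppy detector wins, but of course I chose them so that it would; his argument stands or falls on whether real perceptual costs actually scale that way.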


Based on his discussion of the issue, it seemed impossible that he wasn’t making that assumption. But given that we don’t know what consciousness is, it is just an assumption. The beetle example you give (and I’m not sure if it’s his example or not) makes a big mistake. The male beetle will have a system for determining whether to be attracted to something that will produce false positives and false negatives, like every other system. Saying that you are better off generating tons of false positives to avoid false negatives is completely unrelated to the question of whether you need information about underlying reality in order to make the system run.

I agree that technological inheritance isn’t fundamentally different from genetic inheritance and that evolution still works on it. My question is whether we might need some kind of perception of underlying reality to generate a technological inheritance to begin with. Could it be that a species that lucks into perceiving reality is more likely to start generating technology that would then moot the DNA fitness issue? Humans are a remarkably “unfit” species in a lot of obvious ways without our technology. I’d like to know, for instance, what would happen if he and his students modeled infants having heads so large that they barely fit out of their mothers, resulting in high death-during-childbirth rates. Because if you hold other things equal, it’s pretty obvious that the chance of any species evolving into that would be “precisely zero.” But we know we can’t hold other things equal. Since we don’t know what consciousness is, it is extremely unreasonable to hold those same other things equal.

It’s the kind of experiment shown in your video that I was talking about when I said we’ve refined but not perfected our processes. It wasn’t very long ago (nothing in the study of evolution happened all that long ago) that people were designing experiments to show competition was better, because the models they built weren’t complex enough and didn’t take enough real-world factors into account.

These are still models, and they aren’t validated until we check the results of the model against reality. If we are incapable of doing that then we can’t trust the model.
