And HPMoR is, of course, written by one of the CFAR curriculum developers.
You can also find a lot of the underlying information in blog/wiki form at http://lesswrong.com/, especially http://wiki.lesswrong.com/wiki/Sequences
That's not really where I was going with that. One can't become more rational by applying the inherently irrational. At the very best, people can try a variety of workarounds ("brain hacks" or whatever term you want to use) to accomplish their goals. However, that does not mean they are acting rationally. It is, in fact, a form of quasi-magical thinking.
In fact, the main problem I have with the checklist is that it takes the trappings of science (as in, the way people actually approach scientific problems) and applies them to something incredibly unscientific. The result is, as one poster already mentioned, a rationalized life, rather than a rational one. If that helps you be happier/more successful, then fine. However, don't confuse the two.
To illustrate what I mean:
Look at number 2.3 on the list. It's one thing to search for easy counterexamples in mathematics (or science in general), but much more difficult to do so in real life. Abstract arguments are much harder to support or refute because, even from a statistical perspective, there are multiple conflicting points of view.
Try applying 3.1 to a person with a strong phobia. They may consciously be able to acknowledge their fear is irrational, but that is meaningless because it doesn't stop the fear.
Ah, this is where we disagree. Just because two processes are both flawed doesn't mean they are equally flawed, and not all the processes in our brains are equally flawed. Some work quite well, and in many cases we can bypass the ones that don't.
You can extract truly random bits from biased coin flips; you can make reasonably and consistently correct predictions from useful but fundamentally false assumptions (Newton's laws); you can build communications systems that let you talk loudly about secrets in public; and you can write programs that behave reliably on (somewhat) corrupted hardware (like the brain), as long as it isn't too badly damaged. It is hard and error-prone, which is why CFAR is not claiming they have all the bugs worked out, not even close. But they're trying, and there is absolutely no reason to think what they are trying to do is impossible.
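The first claim, extracting fair bits from an unfair coin, is a concrete classical result (von Neumann's trick: read flips in pairs, keep the first bit of each unequal pair, discard equal pairs). A minimal sketch in Python; the function name and the 70% bias are illustrative choices, not anything from CFAR's materials:

```python
import random

def von_neumann_extract(flips):
    """Von Neumann's trick: scan flips in non-overlapping pairs.
    (1, 0) emits 1, (0, 1) emits 0, (1, 1) and (0, 0) are discarded.
    Because P(1,0) == P(0,1) for independent flips of any fixed bias,
    the emitted bits are unbiased."""
    out = []
    for a, b in zip(flips[::2], flips[1::2]):
        if a != b:
            out.append(a)  # keep the first bit of each unequal pair
    return out

# Simulate a heavily biased coin (70% heads) and extract fair bits.
random.seed(0)
biased = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
unbiased = von_neumann_extract(biased)
print(len(unbiased), sum(unbiased) / len(unbiased))
```

The cost of the correction is throughput: with a 70/30 coin only about 42% of pairs survive, so most of the raw flips are thrown away, which mirrors the broader point that working around a biased mechanism is possible but expensive.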
Again, I am not saying the advice in that FAQ is useless, I'm just saying it doesn't lead to rational behavior. Pretending it does is cargo cult science, more akin to neuro-linguistic programming than to physics.
By itself, yes, but CFAR works by conducting week-long immersive training programs with lots of practice, problem solving, learning the science, and discussion of how to integrate it into your daily life, plus weekly follow-up sessions to track whether people are using the new methods they're teaching (and to assist them in doing so). It's messy social science, but not pseudoscience.
I really like this line of reasoning. Thanks for the share, Cory. I will look into this further.
If I try really hard, I can probably convince myself that I have.