What Intelligence Tests Miss: The Psychology of Rational Thought

WK Hellstal

By rationality, I normally mean what the Less Wrong people mean, the notion of having a picture in our minds, a map, a model, that matches up fairly well with reality. Having “a map that matches the territory” is what this is all about. This is referred to by psychologists as “epistemic rationality”. I want a good map. I want my mind to be able to update itself when faced with new information. (You can also go with the related definition of “instrumental rationality”, the rationality of getting what you want. They mostly work together.)

The problem with rationality is knowing when you’ve succeeded. This relates to the Dunning-Kruger effect: people have trouble figuring out what they get wrong, because knowing what you are, and are not, good at is itself a meta-skill of problem-solving. If I knew precisely what I was wrong about, I would immediately no longer be wrong about it. I don’t know how to improve myself, because if I knew how to improve myself, that knowledge would already be the improvement I wanted to have. I would then simply face the next question: what to do for further improvement.

It’s a pickle. It’d be nice to have a test, a Rationality Quotient, to measure myself against, but there is not yet such a thing, and despite Stanovich’s belief that one could be made (What Intelligence Tests Miss: The Psychology of Rational Thought, Keith E. Stanovich), I’m not sure how useful it would be for people like me. I’m not even sure exactly how useful it would be more generally, though I think it would be a big help. The problem is that any test is a proxy, and using a proxy runs the risk of the test questions losing their correlation with what you actually care about. When the Soviets set quotas by the number of nails produced, their workers made many tiny, useless nails to fill them quickly. When the quotas were changed to the weight of nails produced, the workers made a few huge, heavy, useless nails instead. This is Goodhart’s Law. The Soviets wanted an objective measure of productivity, but once the proxy was chosen, it stopped being useful. Workers labored only to meet the proxy, and steadfastly avoided the further effort that would’ve been needed to meet the underlying goal. Quite the pickle, yes.

Stanovich is aware of these differences, as he makes clear in the book. He talks about measures that aren’t based on questionnaires. But even deeper questions run the risk of people modifying their answers for the test, and only the test, and then continuing to live the rest of their lives as if nothing has happened. The skills don’t necessarily transfer to real life. And even if the test is well-designed to avoid that problem, there are still limitations.

I read a bit about cognitive psychology, rationality, heuristics, and human biases. I try to internalize the lessons. That doesn’t mean I’m able to consistently apply them in contexts beyond the scope of a possible test. In the real world, people tend to blunder along until they make a mistake so big that the ego pain of admitting their previous failure is finally less than the damage of sticking with the painfully wrong answer. That’s not the best way of operating. That’s not the way to keep your beliefs in sync with the world, even after you’ve read all the psychological summaries.

An RQ test would likely be a big benefit to humanity. It’s still not enough. What I need, personally, is a proxy for RQ. I need some other measurable variable that should correlate strongly with general sensibleness even after the test-taking is over.

A naive guess could be money, but it’s easy to see the problems with that. It’s a limited sort of game, not necessarily generally applicable. You can be good at money and bad at everything else. There’s also a substantial luck component. The same limitations apply to fame and popularity: too much luck, not enough skill. Something like chess is pure skill, but an extremely narrow range of skill. Poker, if played over enough games, is a lot like chess. With a large sample size, the luck gets squeezed out and underlying skill shines through, but again, it’s too narrow a proxy. There’s no reason to believe being good at such games has broader applicability to being right about the state of the world.

Better could be an approach that involved personal happiness, but that’s got its own problems. We hyuu-mons don’t have the best memory for how much we’re enjoying our lives as we live them. The satisfaction, or dissatisfaction, we feel looking back at the end of the day is a different breed of happiness than what we feel as we live moment to moment. We could, I suppose, get out a notebook and pencil and record how we’re feeling every hour, but that sort of record-keeping is distracting in and of itself. The ideal of maximizing happiness hits closer to the mark than something like money, but it still doesn’t quite hit home.

My current idea is productivity toward a specific goal. The proxy for RQ is how much progress I’m making. To ensure that I’m not producing empty output, Soviet nails of zero quality, the productivity can have quality controls, like writing code. It’s not just writing a large quantity that matters; I have to make sure that it works as I want it to. Stanovich doesn’t discuss this kind of thing directly — it’s mostly general ideas also discussed by Kahneman — but reading the book helped me solidify the usefulness of this proxy against a Goodhart-style criticism.
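The quality-controlled proxy above can be sketched in a few lines. This is my own illustrative toy, not anything from Stanovich: each unit of work only counts toward the progress score if its quality check passes, so sheer volume (Soviet nails) scores zero.

```python
# Hypothetical sketch: count output toward the progress proxy only if it
# passes a quality check, so raw volume alone can't game the measure.

def quality_checked_progress(artifacts):
    """Sum the size of only those artifacts whose checks pass."""
    return sum(a["size"] for a in artifacts if a["check"]())

# Two "units of work": one that actually works, one that's just volume.
artifacts = [
    {"size": 120, "check": lambda: sorted([3, 1, 2]) == [1, 2, 3]},  # works
    {"size": 900, "check": lambda: False},  # large but broken: scores zero
]

print(quality_checked_progress(artifacts))  # prints 120
```

The design point is simply that the proxy combines quantity with a pass/fail gate, which is what makes it harder to Goodhart than counting nails.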

There’s a lot of other good stuff, too (just as in Kahneman’s book), but the one big thing that hooked me here that I don’t remember from Thinking, Fast and Slow was his discussion of time-inconsistent preferences.

If you offer people a choice of $100 today or $115 next week, a certain percentage will have such a skewed discount function that they will choose the $100 today. If you’ve studied a bit of finance, you know that this is technically not a rationality violation by itself. However, you can then ask those same people whether they’d prefer $100 a year from now, or $115 a year and a week from now. Those same people will, often, choose the $115. Weirdity. In one year’s time, the second choice becomes identical to the first choice, and their preferences thus automatically “change”. When they’re a year out, they prefer waiting, but when the wait is only a week, they take the money now. (This makes sense if they have a starving child at home at this very moment, but not in most any other situation.)
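The reversal above falls out of the shape of the discount curve. A minimal sketch, with illustrative (not empirical) parameters: a hyperbolic discounter, valuing a delayed amount as A / (1 + k·days), flips preferences when the whole choice is pushed a year out, while an exponential discounter, valuing it as A·d^days, ranks both pairs the same way.

```python
# Illustrative toy, not from Stanovich. k and d are made-up example values.

def hyperbolic(amount, delay_days, k=0.1):
    """Hyperbolic discounting: value falls off as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay_days)

def exponential(amount, delay_days, d=0.99):
    """Exponential discounting: value falls off as d ** delay."""
    return amount * d ** delay_days

# Hyperbolic discounter: takes $100 now over $115 in a week...
takes_100_now = hyperbolic(100, 0) > hyperbolic(115, 7)        # True
# ...yet prefers $115 in a year and a week over $100 in a year.
waits_for_115 = hyperbolic(115, 372) > hyperbolic(100, 365)    # True: reversal

# The exponential discounter ranks both pairs identically: no reversal.
exp_soon = exponential(115, 7) > exponential(100, 0)
exp_later = exponential(115, 372) > exponential(100, 365)
print(takes_100_now, waits_for_115, exp_soon == exp_later)  # True True True
```

The reason is that multiplying by d^days preserves the ratio between the two options at any delay, whereas the hyperbolic curve is steep near the present and flat far away.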

What Stanovich points out, which I hadn’t considered before, is that practically every other issue of human “willpower” involves the same preference reversal. A dieter who tells themselves they’ll eat a cookie today and start the diet tomorrow is going to face the exact same choice tomorrow. Choosing the cookie today means, by the same reasoning, choosing the cookie tomorrow too. The diet is always a day away, which means the diet never comes.

With money, I’ve always known how to avoid the problem. The time value of money, along with time-consistent preferences, is instilled into me. (I’m fine with my weight as well.) But I know for a fact that I’ve fallen into that exact same sort of trap myself with respect to other, non-money things. With money, I know the rules and I use my knowledge correctly. With other subjects, I have in the past allowed myself to continually, indefinitely, put things off till tomorrow, and that tomorrow never comes. I make mistakes that I’d never make with money. The knowledge is there, but it doesn’t transfer from one context to another.

Irrationality. We’re smart enough. We have the knowledge. We just don’t always use the knowledge when it’s appropriate.

An RQ test is, I’m now convinced, one of the most important things social scientists (especially psychologists, since it’s their field) should be building. And in my own life, a proxy for RQ that has practical relevance for my choices is likewise one of the most important things I should be doing. Good to know.

Time to get started.


Courtesy: Hellstal Live Journal