Rationality does not guarantee agreement, but we might get along better

I think we would all get along a lot better if we were rational, but not because we would all be in complete agreement if we were rational.

Rationality is a tool for getting what we want. Rationality tells us how the world is, and what the results of our actions are likely to be. Debates over how the world is and which strategies will be more effective might be settled more quickly if we were all rational. However, rationality doesn’t determine which results are most desirable to us. Even if we were all ideally rational, we probably wouldn’t all converge on the same set of behaviors and policies.

Given that we have individual tastes and desires, why do I suspect that we would get along better if we were more rational?

The short answer is that I think we adopt a different attitude toward opponents whom we regard as rational rather than irrational.

Manufacturing our own facts

Our naive trust in the facts is informed by our political sensibilities. This much isn’t mere opinion. There have been hundreds of studies of cognitive dissonance, and the results are unambiguous.

We all suffer from a degree of wishful thinking. When our opponent cites a study to support his political agenda, we immediately doubt the cited study. If we’re opposed to an economic policy, we’ll impulsively be less likely to trust the evidence in favor of the policy. And if we support an environmental policy, we’ll be much more credulous about factual claims that look like rationales for the favored policy. This is an unfortunate fact about the way we humans look at the world. We distort the facts to fit our wishes. The same cognitive dissonance that alerts us to contradictions can lead us to discard inconvenient facts rather than revise our opinions.

If we were more rational, we would be able to separate evaluations of fact from evaluations of policy. We would be less fearful of the consequences of accepting true claims, or rejecting false ones.

At this point, you might ask, if a study shows that our favored policy doesn’t work, isn’t that a genuine reason to fear the study?

Well, if the study is misleading, and shows that a policy is effective when it isn’t (or vice versa), that’s a genuine reason for concern. However, our knee-jerk reaction to factual claims often causes us to miss the real motivation for our political inclinations.

Are facts even relevant to ideology?

Try this experiment. Think of a political issue about which you feel strongly. Perhaps, global warming legislation, abstinence-only sex education, flat taxation, mandatory sentencing, or something like this.

Now, think about the scientific evidence that relates to the policy, and why you think the evidence backs up your position.

Finally, imagine what you would do if the evidence didn’t support your ideology. You will probably find that your ideology is primarily motivated by something other than the evidence in question. For example, studies that tell us about the effects of abortion on society are virtually irrelevant to proponents on either side of the debate. Proponents of abortion rights won’t be swayed by evidence that abortion leads to statistically negative outcomes, and detractors won’t be swayed by evidence that abortion leads to positive outcomes. Yet, this doesn’t stop either side from promoting and defending studies that talk about the statistical social or psychological consequences of abortion.

We all reserve the right to vote for a given policy on the basis of ideology alone. I’m not saying we ought to do this, but rather that it is okay to acknowledge this fact if it reduces our fear of looking honestly at the evidence. Sometimes, we don’t have to give up an ideology if the facts don’t go our way. Once we come to terms with our true motivations, we can be more honest about the facts. And an ability to look fairly at the facts makes us more rational.

Irrationality and coercion

A clear separation of facts and ideology can have secondary effects. To begin with, ideology isn’t only a matter of taste. Once we have a clear evaluation of the facts, our ideology may well change through reflective equilibrium. This can happen if our ideology was propped up by denials or irrational fears that pass in light of new evidence. Perhaps, we’ll retain our ideological stance, but be less committed to it.

After spending time isolating my ideology from the facts, I have noted a change in the way I react to the positions of people with different ideologies. It becomes easier to see where my political opponents are coming from. It has become possible to imagine a political opponent who is rational!

In my experience, the way I interact with opponents who I believe to be rational is different from the way I interact with opponents who I believe to be acting irrationally. When an opponent is irrational, I feel an impulse to coerce the opponent into correct thinking. The opponent cannot be reasoned with, so force (or force of law) is the most suitable option. However, when I can see my opponent as rational by his or her own values, then I feel much more amenable to a negotiated compromise.

What do you think? If your opponent is rational by his or her own values, does compromise sound more appealing?


  1. Starting with the title: whenever I see weasel words like “might”, I immediately replace them with words like “might or might not” (too many years of gimmick road rallying :-)). Lots of things “might” be true, but it doesn’t give any indication of the likelihood of it being true.

    Now on to the substance: how is this article not just rationalization and wishful thinking of something you are hoping is true? One could, for instance, say that tolerance of other people’s beliefs leads to getting along better, without any need to analyze the rationality of those beliefs.

    Take religion. Some people (call them group G) believe in a “higher power”, while others (call them group A) feel that particular belief is irrational. How do you envision group A using rationality to get along better with group G?

  2. I do like the “might or might not” technique. 🙂

    If this were a rationalization, I assume it would work something like the following. I want people to agree with me, and I (falsely) attribute agreement to rational faculties, so I cherry pick a reason why a more rational society would get along better. Implicitly, I would be hoping that the appeal of getting along better makes people more keen on improving their rationality.

    There are a few reasons why I don’t think this is what is going on here.

    First, I don’t think people are particularly motivated by a desire to get along. As I suggested in an earlier post, there are hundreds of proverbs that have been around for centuries, but they’re not very motivational, in my opinion. They might be quoted after one has learned the lesson, but they’re not very effective beforehand. However, I do think that people are motivated by a desire for self-mastery or by a desire to understand other people (especially their competitors). Perhaps, awareness of cognitive bias (self-knowledge) will be more effective than proverbs.

    Second, I was motivated to write this in light of my own experiences. I’m naturally argumentative, and keen to correct other people. Yet, understanding how people actually reason has made me somewhat less argumentative, and more willing to attribute differences in belief to differences in circumstance, and differences in political outlook to differences in value.

    Of course, a sample of one is not very compelling, but that is why I ended the post with a question.

    As to your last question, I think it comes down to recognition of fundamental attribution error. The instinctive approach of group A to group G (or G to A, for that matter) is to think of its members as people of faulty character. If only they were not so wicked, they would believe as we do! But if we can overcome the impulse long enough to look at situational factors, that changes the dynamic. Or, at least, it does for me. Does it for you?

  3. “First, I don’t think people are particularly motivated by a desire to get along.” I disagree. For the most part, people do get along when they meet and talk with each other face to face. We don’t like direct confrontation.

    Unfortunately, with the proliferation of so-called “social media”, we have many outlets for indirect confrontation. People say things in electronic forums that they would never say in person.

    Even US politics is set up so that members of government don’t talk to each other.

    It is easy to tell the world that someone is wrong (or worse) if they aren’t present to directly refute the charge.

    As for A vs. G or G vs. A, for the most part it really doesn’t matter to me what the opposing (for lack of a better word) belief is, at least until it encroaches on my world (which only tends to happen from the extremists). That being said, I do find myself asking if it is self-consistent and consistent with observations made in the world at large.

    1. There’s a distinction between being motivated to get along, and being coerced into civil behavior. You’re right, people are less likely to act anti-socially when they are publicly identifiable.

      However, there’s also a distinction between being civil, and being willing to compromise. Debates about policy or corporate governance are not consciously motivated by anti-social or mischievous impulses. If you look at comments on political news stories, you’ll find plenty of examples of fundamental attribution error from people who think they’re trying to fix the system. People of opposing political views are not trying to get along per se, but trying to provide reasons to support their own policies. They are trying to be constructive (i.e., build a better future), but failing to reach accord. Meeting in person might make exchanges more civil, but it doesn’t really address the problem of FAE.

      In my original post, I was asking whether you would feel differently about compromise if you felt the other party was being rational in their assessment according to their own values. You responded:

      One could, for instance, say that tolerance of other people’s beliefs leads to getting along better, without any need to analyze the rationality of those beliefs.

      I wonder if such a statement is tautological. If tolerance is a willingness to get along with people who have different beliefs, then the thesis would be rather empty.

      I’m suggesting that understanding ideal rationality and the ways that people deviate from ideal rationality will instill more tolerance.

      If someone disagrees with me on a matter of policy, there are several ways to explain the disagreement.

      1. The other party might be rational, but be deciding from a different set of values.
      2. They might be irrational, and share my values, but be making bad inferences.
      3. They might be irrational, and have different values, but end up at the rational conclusion (for themselves) by accident.

I think our instinct is to assume possibility #2. We think “No sane/good person would disagree with our policy unless they were confused about evidence or logic. Therefore, imposing the policy in an authoritarian manner is appropriate.”

If we knew that possibility #1 were the case (or even possibility #3), then we would tend to think “That’s what I would do if I were them, so let’s just work something out.”
