
The Philosophy of Rationality

Philosophy uses arguments to answer the deepest of questions. Since arguments are just inferences, the case can be made that the ultimate questions of philosophy are really questions about reasoning.

Why These Principles?

The principles of rational inference cannot be justified by the principles of rational inference. This means that no positive case for the principles of rational thought can be made without circular reasoning. However, there are other criteria for selecting axioms.

The first criterion is generality. We're seeking general rules for thinking, not rules that give us particular conclusions in particular cases. For example, we're not looking for a rule that says "No matter what, the moon is made of cheese," or "No matter what, the planet Saturn has rings." Declaring that the planet Saturn has rings won't help us reason about a patient's diagnosis.

However, if we consider the principle of non-contradiction (PNC), which commands us to reject conclusions that would lead to a contradiction, we find that the PNC is very general: it applies to every inference.

The second criterion is that, if we try to deny the principle, we find ourselves unable to reason at all. Very few principles would have this property. To deny that the moon is made of cheese, for example, doesn't stop us from reasoning about problems in mathematics or biology.

Here, again, the PNC fulfills the criterion. If the PNC is not true, then it isn't false either because there is no notion of truth or falsehood if contradictions are allowed. If contradictions are allowed, every answer is allowed, and so there's no sense of correctness about anything.

We can also look at how the principle of induction fares under these criteria. David Hume (1711-1776) famously showed that induction cannot be justified. This "problem of induction" has stood unresolved to this day. Hume showed that we cannot cite the past success of induction as evidence that induction works, because that argument would itself require the principle of induction.

The principle of induction is also very general. Without induction, it would be impossible to learn anything from experience. We would be unable to rationally know about general rules or patterns. We would even be unable to recognize things, because recognition can be seen as the application of a general rule: we could not recognize an elephant as an elephant, because the general properties of elephants are something we infer from seeing specific elephants. If induction is discarded, we wouldn't even be able to say what induction is (or deduction, for that matter!). Hence, induction is the kind of principle that cannot be denied without giving up reasoning altogether.

As far as we know, these two principles are the only principles of inference, the only principles that lead us to infer new knowledge from existing knowledge. The PNC commands us to reject pictures of the world that are inconsistent. Induction shows us how to use our experience to select among the remaining consistent pictures of the world.

However, we're still left with a question: what counts as evidence in the general sense? The intuitive answer is that experience of any kind counts as evidence. This may seem trivial, but if we can't accept experience as evidence, then induction doesn't work. Indeed, it can be argued that we could not even perform deduction if our experiences could not be used as evidence (e.g., "What problem am I trying to solve?" or "In my experience, A and B are contradictory.").

The solution to this problem is to introduce a third axiom of rationality: self-knowledge.

Self-knowledge: Experiences qua experiences are also axioms of rationality.

This simply means that experiences are true as far as experiences go, and that experiences count as evidence for inference.

Criticisms

There are several criticisms that might be made against our choice of rational axioms. Some criticisms are sound, but ultimately irrelevant to our project. Other criticisms are simply "special pleading."

One kind of criticism that we shall reject takes the following form:

  1. I accept your rational axioms in most cases.
  2. I intuitively think I am rational when I believe conclusion X.
  3. But, according to your rational axioms, belief in X is not rational.
  4. Therefore, I reject your rational axioms in the case of X.

Clearly, this sort of criticism is special pleading for X. The critic is not proposing an alternate axiom, but simply declaring without reason that his intuition cannot be wrong in the case of X.

Alternative Logics

Some critics will point out that mathematicians have experimented with alternative logics. There are, for example, paraconsistent logics in which some contradictions are allowed. There are many-valued logics which allow propositions to be true, false or something in between. We shall argue that this doesn't significantly affect our rational axioms.

Mathematicians create mathematical systems by laying down a set of axioms. The system consists of all the statements that can be asserted without contradicting the axioms. Thus, the creation of any mathematical system relies on the principle of non-contradiction. This applies to paraconsistent logics as well: although a paraconsistent logic accepts certain kinds of contradictions, there are still rules that the reasoner cannot break without violating the principle of non-contradiction.

If the world around us has any consistent structure, then that structure can be modeled with a mathematical system or logic that is based on the principle of non-contradiction.

Bayesian Priors

Another criticism that can be leveled at our axioms is that they are not quite as precise as they look. There is still ongoing study into the question of how to set one's prior probabilities.

We have suggested that prior probabilities be distributed equally across all possible outcomes (that we can conceive of). If we can conceive of N outcomes, then the prior probability of finding any one outcome is 1/N. This is known as the Principle of Indifference.
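
As a minimal sketch of this rule (the outcomes listed are our own illustration, not drawn from the text above), the Principle of Indifference can be written in a couple of lines of Python:

    # With N conceivable outcomes and no other evidence, each outcome gets prior 1/N.
    def indifference_prior(n_outcomes):
        return [1.0 / n_outcomes] * n_outcomes

    outcomes = ["rock", "ice", "cheese"]        # three conceivable compositions
    priors = indifference_prior(len(outcomes))
    print(dict(zip(outcomes, priors)))          # each outcome starts at 1/3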

Some scholars suggest that there might be alternative ways to set prior probabilities. While this is a complex subject, we argue that no reasonable scheme for setting priors significantly affects our choice of axioms. Most people fail to be good Bayesian inductive reasoners because they ignore false positives or because they fail to consider conditional probabilities.
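
To illustrate that failure mode, here is a hedged sketch of Bayes' theorem applied to a screening test; the rates are hypothetical numbers chosen only to make the arithmetic visible:

    # Illustrative numbers: a condition with a 1-in-1,000 base rate, and a test that
    # detects it 99% of the time but also gives false positives 5% of the time.
    prior = 0.001              # P(condition)
    p_pos_given_cond = 0.99    # sensitivity (true-positive rate)
    p_pos_given_none = 0.05    # false-positive rate

    # Total probability of a positive result, then Bayes' theorem.
    p_pos = p_pos_given_cond * prior + p_pos_given_none * (1 - prior)
    p_cond_given_pos = p_pos_given_cond * prior / p_pos

    print(round(p_cond_given_pos, 3))  # ~0.019
    # Ignoring the false-positive rate would suggest near-certainty; taking it
    # into account leaves only about a 2% chance the condition is present.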

Intuitions are not Bayesian Priors

Some readers may mistakenly treat an intuitive judgment of probability as a prior probability. This is incorrect, and any substantially non-uniform prior used in a specific case must be justified by the rules of inference.

The following scenario illustrates how intuitions can be misused when treated as priors.

Suppose that Fred has a strong intuition that the moon is made of cheese, and this intuition makes Fred feel almost certain that the moon is actually made of cheese. Wanting to be diligent, Fred decides to investigate the actual composition of the moon. The first thing Fred does prior to collecting new evidence is to quantify the strength of his intuition. As a result of his intuition, Fred feels so sure that the moon is cheese that he claims he is 99.9999999% confident in his hunch (1 in a billion chance of his intuition being wrong).

If this degree of confidence is used as a Bayesian prior probability, then the evidence for the moon being made of something other than cheese would need to be of comparable strength in order to get Fred to change his mind.

Fred gets his telescope and takes a spectrum of the light from the moon, and finds that the result is inconsistent with the moon being made of cheese. The light from the moon looks like light from millions of rock samples on Earth, and nothing like light from cheese samples on Earth. However, given his near-certain prior probability, Fred feels he must reject the evidence of the spectrum and stick to his intuitions. After all, although the evidence says a rocky moon is millions of times more likely than a cheesy one, Fred's prior allows only a 1-in-a-billion chance that his intuition is wrong, so his posterior still overwhelmingly favors cheese.
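
The arithmetic behind Fred's reasoning can be laid out in odds form; the exact figures below are ours, chosen only to match the story's orders of magnitude:

    # Fred's update in odds form (illustrative numbers).
    prior_odds_cheese = 1e9        # 99.9999999% confidence: a billion to one for cheese
    likelihood_ratio_rock = 1e6    # spectrum favors rock over cheese a million to one

    posterior_odds_cheese = prior_odds_cheese / likelihood_ratio_rock
    print(posterior_odds_cheese)        # 1000.0: Fred still favors cheese 1000 to 1

    # With an indifference prior (even odds), the same evidence is decisive.
    print(1.0 / likelihood_ratio_rock)  # 1e-06: rock is favored a million to one

On Fred's inflated prior, the spectrum barely dents his confidence; on a defensible prior, the same evidence settles the question.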

What went wrong in this story?

The mistake occurred when Fred failed to properly convert his intuition into a Bayesian prior. It is true that an intuition is not a complete lack of evidence; even a hunch counts as evidence of a sort. But the felt strength of a hunch is not necessarily an indicator of its reliability. Fred must ask himself what evidence he actually has for thinking his hunches are wrong only 1 time in a billion. If Fred has no formal evidence about the inferential strength of his hunches, then he has no justification for thinking his hunches are better than guesses.

Of course, nothing in Bayesian reasoning rules out the possibility that Fred really does have ultra-reliable hunches. Indeed, with Bayesian reasoning, it would conceivably be possible for Fred to justify the inferential strength of his intuitions. If Fred has enough experiences of the right kind with his hunches, he can make the case that he's different from all other humans. Without this sort of evidence, however, Fred is irrational if he thinks his beliefs about the moon are justified.
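
One hedged way to picture what such evidence might look like (our construction, not part of the argument above) is to treat each hunch that was later checked as a trial and update a uniform prior on hunch reliability:

    # Illustrative Beta-Binomial sketch of calibrating one's hunches.
    # Start from a uniform Beta(1, 1) prior on the probability that a hunch is right.
    checked = 50                   # hunches later tested against the evidence
    correct = 27                   # of those, how many turned out right

    alpha = 1 + correct            # posterior Beta parameters
    beta = 1 + (checked - correct)

    mean_reliability = alpha / (alpha + beta)
    print(round(mean_reliability, 2))  # ~0.54, nothing like the 0.999999999 Fred claims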

A similar argument can be made for cultural "background" knowledge. Suppose Fred has a brother, Carl, who believes that Fred's intuition is infallible. Carl's belief doesn't contribute much to the Bayesian analysis unless Carl has collected the formal evidence needed to establish Fred as a reliable prophet of otherwise unknowable facts. If Fred also has three sisters who declare him to be a prophet without formal evidence, this doesn't do much to back up Carl's claim. Even if Fred has a million fans, his priors are still bogus unless formal evidence can be presented to back up his claim that his intuitions are wrong only once in a billion trials.

Knowledge and Certainty

A common definition of knowledge is "justified, true belief." However, since the only way to know that something is true is by justification, we prefer to define knowledge as rationally justified belief.

The word knowledge may make some people uncomfortable because knowledge is sometimes used to refer to knowledge with certainty. However, in the principles above, we do not refer to certainty when we use the word knowledge. In everyday conversation, we comfortably say that we know things without certainty. For example, we might say that we know we are wearing shoes even though it is not logically impossible that we are dreaming or hallucinating our shoes, or that we forgot we are wearing objects that feel like shoes but which are not shoes. As long as our confidence in a statement is greater than 50%, we are probably comfortable saying we know that the statement is true. Thus, in the above, when we refer to knowledge, we mean rationally justified confidence, not certainty.

Rational belief is not infallible. A perfectly rational being might still be misled if the evidence is misleading. With diligence and good fortune, a rational being will eventually find his or her error. However, by definition, no non-rational strategy for reaching conclusions is more likely to yield a correct answer.



Copyright 2011 Rational Future Institute NFP