[This post is roughly 8,000 words long, or about 30 printed pages. Just so you know up front. Don’t say I didn’t warn you.]
BRUTUS’ home, outside. ACHILLES and BRUTUS are lounging in Adirondack chairs. There is a whiteboard set up nearby.
BRUTUS: I have to ask: do you really believe in the many-worlds interpretation? Do you really believe that multiple universes exist, or is it just a mathematical fiction, designed to make things “more elegant”?
ACHILLES: It depends on what you mean by “believe”.
BRUTUS: I sense we are about to have a semantic argument.
ACHILLES: Semantics are important. In fact I would say that if you are arguing philosophy, then all you are doing is semantics.
BRUTUS: Touché.
ACHILLES: So yes; we need to discuss semantics. And “belief” is a notorious word that deserves some special scrutiny.
BRUTUS: All right.
ACHILLES: So the word “belief” doesn’t really have an official definition in epistemology, despite the fact that some have tried to define “knowledge” as “justified true belief”. Basic questions remain unsettled: does belief require a self-aware believer, and if so, what level of self-awareness is needed? Does belief imply particular mental states inside a believer’s mind?
BRUTUS: I have heard talk of those before. They are called qualia.
ACHILLES: [Nodding] For that matter, does belief require a “mind” at all? Is belief even real? Even if belief is not real, might it not be a convenient fiction, useful in certain limited situations? [D. Dennett, “Real patterns,” Journal of Philosophy 87, 27-51 (1991)]
BRUTUS: I do not know.
ACHILLES: These are all undoubtedly interesting questions, but they can often dominate simpler discussions about belief and thereby obscure the matter at hand. If we want to ask, for example, whether a person—or a computer—might entertain contradictory beliefs, then we had better define “belief” very carefully; otherwise, a tangential discussion of self-awareness, for example, will bring us too far afield.
BRUTUS: Surely it might be possible, though, to pin down the definition of belief, at least from a practical standpoint?
ACHILLES: I agree. Such a definition would not only eliminate confusion, but might lead to further insight. A similar problem with definitions arose in astronomy. The discovery of the dwarf planet Eris in 2005 posed a conundrum: why should Pluto be considered a planet when Eris is not, given that Eris is actually more massive? Astronomers concluded that the trouble was with the word “planet” itself; the definitions in use were vague at best, and had more to do with culture and history than with science. Thus in 2006 the International Astronomical Union adopted an official definition of “planet”, one based on precise physical criteria rather than cultural ones, and Pluto was consequently reclassified as a dwarf planet. Some people were outraged—
BRUTUS: I was outraged!
ACHILLES: —but the benefit to astronomy has been unquestionable. Now, when a scientist uses the term “planet” it means a very specific thing. Previously, it just arbitrarily meant Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, or Pluto. Now, in the same vein, how might we define belief?
BRUTUS: I will attempt it. “A person P believes a statement S, if and only if P holds S to be true.”
ACHILLES: That’s a good first try. Such a definition (superficially) matches our intuitions about belief. However, upon further reflection there are numerous problems with this definition.
BRUTUS: Such as?
ACHILLES: First, the word “person” implies a human believer. Might not intelligent aliens be capable of belief?
BRUTUS: Perhaps we can change the word “person” to “intelligent being”?
ACHILLES: Well, that also leads to trouble: what do we mean by “intelligent”? Intelligence is usually taken to lie on a continuous spectrum, which forces us to draw an arbitrary boundary somewhere. Can Forrest Gump believe something? Can a chimpanzee believe something? Can a cat? Maybe the word “entity” is preferable, to get around this problem, but then…what is the definition of the word “entity”? Is a computer program an entity? What about a rock?
BRUTUS: OK, so “person” was not the best choice of words.
ACHILLES: Secondly, what do you mean by the phrase “P holds S to be true”? Maybe it means that the person P—
BRUTUS: —or intelligent being, or entity—
ACHILLES: —has an abstract representation of the statement “S is true” within his “mind”. [This is the representational viewpoint: J. Fodor, The Language of Thought. (Harvard University Press, Cambridge, MA, 1975).] Of course, this raises the question: what is a mind? Alternatively, maybe this statement means that if we “asked” P, then P would “assert” that S was true. But P might lie to us, or we might not speak P’s language, or P might not even possess language at all. So maybe the phrase “P holds S to be true” simply means that P is inclined to behave in a way that implies that S is true. This inclination may or may not lead to observable behavior. [These are the interpretational (D. Davidson, “Belief and the basis of meaning,” (1974) in D. Davidson, Inquiries into Truth and Interpretation, (Clarendon Press, Oxford, 1984)) and dispositional (R. Marcus, “Some revisionary proposals about belief and believing,” Philosophy and Phenomenological Research 50, 132–153 (1990)) viewpoints, respectively.]
BRUTUS: Who knew that “belief” was so complicated?
ACHILLES: There are yet more interpretations. A hard instrumentalist [W. V. O. Quine, Word and object. (MIT press, Cambridge, MA., 1960)] will say that beliefs do not really exist, but there still might be utility in positing the fiction that “P holds S to be true” in some limited sense. A soft instrumentalist [Dennett, 1991] will not go quite that far; he will say that belief (as a concept) is not only useful, but is actually real—albeit in a limited, less “robust” sense than physical objects. Finally, an eliminative materialist [see W. Lycan and G. Pappas, “What is eliminative materialism?” Australasian Journal of Philosophy 50 (2), 149-159 (1972)] will conclude that beliefs are entirely fictitious, except in the most informal contexts, and that advances in neuroscience will eventually eliminate any need to speak of beliefs at all.
BRUTUS: That is a lot of “ists”.
ACHILLES: The problem with this jumble of viewpoints is that they all differ in what the word “belief” entails. Thus, how can two people (with different viewpoints) have any meaningful discussion about belief at all?
BRUTUS: Surely, to argue, we must agree on our definitions beforehand.
ACHILLES: Exactly. Suppose we are discussing some belief paradox. We argue for a while, and then we both realize that our understanding of the word “belief” is fundamentally different: you happen to think that belief by definition requires a sentient believer, whereas I disagree. Since our definitions differ, our discussion about the paradox has little hope of advancing any further.
BRUTUS: I see now. The same trouble could arise in astronomy.
ACHILLES: Go on.
BRUTUS: Suppose you classify Pluto as a planet, whereas I classify Pluto as a dwarf planet. Now suppose that a new object X is found in our solar system. We can hardly be expected to agree on whether or not X is a planet, given our confusion about the term “planet” itself.
ACHILLES: [Nodding] It would be preferable to have a definition of belief that everyone finds acceptable. Suppose Alice thinks that belief entails qualia and Bob does not; the mere fact that they are arguing about it implies that the word “belief” embodies core concepts that Alice and Bob should be able to agree upon. Therefore the definition of the word “belief” should embody those core concepts and those core concepts only, leaving out controversial aspects such as qualia.
BRUTUS: I understand.
ACHILLES: An analogy is appropriate here. Imagine a mythical being called the rhinotaur, which means slightly different things to you and me: we both think a rhinotaur is half human and half rhinoceros, but you think a rhinotaur horn is made of ivory, whereas I think it is made of keratin. If your definition (or mine) mentions in any way the composition of a rhinotaur horn, then debates about rhinotaur horns will be difficult for us. You will say that—by definition—rhinotaur horns are made of ivory; I will say that your definition is invalid, and we will get nowhere. We have unfortunately ignored the common ground that we share: namely, that rhinotaurs are half rhinoceros, half human, regardless of the composition of their horns. The solution, of course, is for us to agree that rhinotaurs are defined to be half human, half rhinoceros; we can then debate ad infinitum about what rhinotaur horns are made of without being caught on the (rhinotaur) horns of a dilemma.
BRUTUS: I see.
ACHILLES: So what definition of belief could everyone agree upon? I suggest that the answer lies in an operational definition of belief: a definition that makes no statement about, and remains completely neutral regarding, the presence or absence of abstractions such as qualia. Perhaps such a definition could thereby satisfy even the hardcore “eliminative materialists”, who deny that belief exists. Surely, if we were to ask an eliminative materialist, “Do you believe that the moon is made of cheese?” then the answer would be “No”, despite our use of the word “believe”.
BRUTUS: Well, in their defense, I would say that when someone denies that “belief” exists, they are not thereby saying that they do not use the word in their everyday lives; they are asserting rather that “belief”—as a concept word in epistemology—does not exist as they interpret its definition.
ACHILLES: What definition? Show me a giant handbook of agreed-upon philosophy terms. [BRUTUS shrugs] But anyway, that is a problem with definitions, not with belief. Can anyone deny that “belief” is a useful word in English, even if it has no rigorous definition yet in the fields of psychology or philosophy? In any case such a definition would go a long way toward helping all the various “ists” communicate.
BRUTUS: So your goal, as I understand it, is to build toward an operational definition of belief.
ACHILLES: I think belief is like Pluto: a cold dark object, orbiting on the fringes of epistemology, that is not really what everyone thinks it is.
BRUTUS: All right then.
ACHILLES: Now, before attempting to construct an operational definition of belief, I want to discuss two belief paradoxes that emphasize that our intuitions about beliefs are confused (at best) and nonsensical (at worst). The first is called the “paradox of the preface” [due to D. Makinson, “Paradox of the preface”, Analysis 25, 205-207 (1965)]. An author writes a book, and makes many statements within that book. In the process of proofreading, the author checks the factuality of these statements, so that the book can be as accurate as possible. Suppose that the author makes 100 statements, and she judges the probability of any given statement being true to be 99%. On an individual basis, the author has every reason to believe that every one of the statements in her book is true. However, on balance she will also predict that at least one statement is wrong, with a probability of 1 – (0.99)^100 ≈ 0.634, or about 63%. Does the author therefore believe contradictory things? Does she believe that the book contains no errors, while at the same time believing that it does contain errors? Clearly a more rigorous definition of “belief” could help sort this out.
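Incidentally, the arithmetic is easy to check for yourself. Here is a small Python sketch of the calculation, treating the 100 judgments as independent, as the standard telling of the paradox does; the figures are the ones we just stipulated:

```python
# Preface paradox arithmetic: 100 statements, each judged 99% likely to be true.
p_each_true = 0.99
n_statements = 100

p_error_free = p_each_true ** n_statements   # probability the whole book is correct, ~0.366
p_at_least_one_error = 1 - p_error_free      # ~0.634, i.e. about 63%

print(f"P(no errors)          = {p_error_free:.3f}")
print(f"P(at least one error) = {p_at_least_one_error:.3f}")
```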
BRUTUS: I agree.
ACHILLES: The second paradox is the so-called “paradox of latent belief”. Consider the statement “An ostrich can run faster than a physics professor.” I daresay you have never entertained this thought before, yet I also predict that you believe the statement. I would even go so far as to say that you believed this statement yesterday. This is a paradox if you are convinced that beliefs are mental structures that “reside” somehow within your mind. For, if you never thought about it, how could you have believed it?
BRUTUS: Well, it might be logically possible that within my mind there is a vast matrix, in which every possible statement I could construct with my vocabulary is listed and has been categorized (ahead of time) as believed or disbelieved.
ACHILLES: True, but in reality, the number of “possible statements you could construct with your vocabulary” is actually infinite, because of sentences such as “23 ostriches weigh more than 5 physicists” in which the numbers can be increased without bound. And in any case, if such a vast matrix did exist, I doubt you have thought about it directly. You must therefore conclude that there are things you believe that have never entered your consciousness.
BRUTUS: So I agree; maybe a new definition of belief could help resolve this paradox and clarify our thinking.
ACHILLES: Right. [Stands up from his chair and approaches the whiteboard] So, can we posit an operational definition of belief that resolves some of these problems? I think so, but we first need to clarify some terms and notation; we can then define belief as concretely as possible. [Writes on the whiteboard]
Axiom 1. Assume that every statement is either true (S), not true (~S), or nonsensical (S?) in a particular context.
In this fanciful notation S? means “S is nonsense”. For future reference, note that any question about how to categorize a statement as being either true, false, or nonsense, is to be called a question about the “truth of S”.
BRUTUS: This is tangential, but have not some people [e.g. G. Priest, “What is so bad about contradictions?” Journal of Philosophy 95, 410-426 (1998)] tried to introduce bizarre logical categories, such as “true and not true” or “neither true nor not true”? I think such attempts are called dialetheism.
ACHILLES: For the purposes of our discussion, I would lump all such statements into the “nonsense” category. By nonsense I simply mean neither exclusively true nor exclusively false; I do not mean to dismiss dialetheism out of hand.
BRUTUS: If you want to dismiss such garbage, feel free.
ACHILLES: Nonsensical statements can take many forms: they can superficially appear logical,
All mimsy were the borogoves
or appear utterly baffling
Tpdsbuft jt npsubm
BRUTUS: Where is the Rosetta stone when you need it?
ACHILLES: The second is nonsense because, lacking any context, there is no way to say that it is true or false. The first is nonsense because “mimsy” and “borogoves” are not defined in English. It is interesting to note, however, that the second statement might become true to you, if I were to tell you that it is written with a simple substitution cipher, and you were to then decipher it. Context has changed; nonsense has become truth.
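In fact the deciphering step is mechanical enough to write down. Here is a tiny Python sketch of it; the particular cipher I used happens to shift every letter forward by one place, so undoing it is just a matter of shifting back:

```python
# Undo a simple substitution cipher in which each letter was shifted forward by one.
def shift_back(text: str, k: int = 1) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base - k) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

# Supplied with the right context (the key), the baffling statement becomes an
# ordinary English sentence whose truth can be judged like any other.
print(shift_back("Tpdsbuft jt npsubm"))
```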
BRUTUS: [Silently translating in his head] Hemlock, right?
ACHILLES: I want to stress that the phrase “in a particular context” in Axiom 1 is important, for the truth of any statement is context-dependent. By context I mean the full gamut of language, culture, time, place, speaker, listener, and so forth. So again, to use my favorite joke, “pain” means one thing in English and quite another thing in French. If I have
F = the “set of all foods”
then the statement
pain ⊂ F
is false in English, true in French, and nonsensical in Klingon. [Of course a speaker of French would have to know the elements of the set F, and also know the meaning of the symbol “⊂”, which means “is a subset of”.] Or take the statement “Sauron, ruler of Mordor, is a dragon.” What is the truth of this statement?
BRUTUS: I would say that in the context of Tolkien’s Lord of the Rings, the statement is false—Sauron was one of the Maiar, not a dragon—but as a statement about our world, it appears to be nonsense because neither “Sauron” nor “Mordor” nor dragons exist. [BRUTUS thinks for a moment] Although, on further reflection, the statement could even be true if we were to find a zoo in which the Komodo dragon enclosure is whimsically called “Mordor” and in which the alpha male has been named Sauron…
ACHILLES: Much trickier is the statement “Sauron does not exist”.
BRUTUS: I would be inclined to say that this is nonsense, since the word “Sauron” has no meaning in our world.
ACHILLES: My response is that the word “Sauron” does have meaning to us. Specifically, our definition of the word includes a proviso that statements about “Sauron” are to be understood only within the context of Tolkien’s novels. Thus the statement “Sauron does not exist” really means something like “Sauron, by definition, is a fictional being in the Lord of the Rings, and so does not literally exist in our world.”
BRUTUS: I see. So in retrospect, then, the previous statement “Sauron, ruler of Mordor, is a dragon” is actually false, not nonsense, because mentioning Sauron and Mordor fixes the context of the statement unambiguously as being within Tolkien’s oeuvre. [Here is an even trickier one for your amusement: is the statement “Batman is one of the X-men” true, false, or nonsense? Good luck.]
ACHILLES: Correct. Suppose instead that my friend Will says that “the Sun is a planet”. If I then say that this is a false belief, I am implying that Will’s definition of a “planet” agrees with mine, but that he is wrong in placing the Sun in this category. If his definition of “planet” is far enough away from mine, though, then it is not in any sense the same word, so in reality Will is saying something like “the Sun is a klanet”. In that case I cannot say he is wrong or right unless I can sort out what “klanet” means; if I cannot, then the statement is nonsense. Basically if we disagree on what “planet” means, then we are speaking different languages, and we have to assume the exact same language in order to make statements about each other’s beliefs.
BRUTUS: Are you saying that language is a requirement for belief?
ACHILLES: I would say, rather, that language—either explicit or implicit—is required to make generalizations about other people’s beliefs. But I may be wrong.
BRUTUS: It seems we have moved tangentially away from our main discussion.
ACHILLES: It always happens. But the main idea is to be able to classify statements as being in particular categories
S, ~S, S?
and of course these categories themselves must be rigorously defined.
BRUTUS: I am guessing that the motivation for all this is the idea that a believer is simply something that sorts statements into categories.
ACHILLES: Yes. This leads us to an “operational definition” of belief:
Axiom 2. A believer B is anything capable of consistently sorting out arbitrary statements S (in a particular context) into the three categories S, ~S, and S?
The word “consistently” is present because I have in mind a mathematical analogy: a believer is like a function that maps any statement onto a single element in the set {true, false, nonsense}.
BRUTUS: That seems a little restrictive. I mean, a person can change their beliefs, right? May I not believe six contradictory things before breakfast?
ACHILLES: My point is that you may, or you may not, depending upon how you define belief. To me, and in light of Axiom 2, a person who changes their beliefs is thereby not the same believer as before: as an agent—as a truth-deciding algorithm—a person is not the same at time t1 and then later at time t2.
BRUTUS: Wow.
ACHILLES: Why is it so strange? People with differing beliefs over time are analogous to computers whose programming has changed with additional data.
BRUTUS: I feel like the same person today, as I was yesterday.
ACHILLES: Same person, different believer. And I have no wish to argue your point: it is a question of mind, of identity; of consciousness. I will grant that “you” might still be “you” tomorrow, if you could ever define “you” precisely enough—
BRUTUS: Good heavens!
ACHILLES: —and that you are the same person as before, but by my definition you are not the same believer.
BRUTUS: Sheesh. [Absent-mindedly rubs his head] For the sake of moving this forward, I will accept your definition of a “believer” for now.
ACHILLES: Note that a believer, by this definition, can literally be anything. I make no mention of persons, entities, agents—such loaded terms are distractions that confuse the issue. In effect I am advocating a semantic shift from “belief as an action that may or may not also involve internal abstract mental states and intentions” to simply “belief as an action”.
BRUTUS: That seems sensible.
ACHILLES: Thus when I hear a robot R saying that it “thinks” S is true, I can assert that R believes S, and you will agree with me, regardless of our positions on souls or consciousness or philosophical zombies. We can then advance our discussion (whatever it concerns) in total agreement about what action (“belief”) has taken place.
BRUTUS: Belief, then, is an action.
ACHILLES: Yes, and it should be operationally defined based on what the verb “to believe” entails:
Axiom 3. The statement “B believes S” means that B considers S and consistently classifies (for whatever reason) S as true; conversely, the statement “B does not believe S” means that B considers S and consistently classifies (for whatever reason) S as false.
By “consider” I do not intend to imply that B necessarily has any conscious thought, whatever that means. Rather I mean that B accepts S as input, and produces “S is true” as output. A believer is then a “black-box” function that maps statements onto a particular range. If that range has three elements—that is, if we allow statements that are neither exclusively true nor exclusively false—then we must also admit one final axiom:
Axiom 4. The statement “B dismisses S” means that B considers S and consistently classifies (for whatever reason) S as nonsense.
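If it helps to see the “black box” made concrete, here is a toy sketch in Python. The sorting rules inside it are invented placeholders, not a theory of anything; only the overall shape matters, namely a deterministic map from statements to the three categories:

```python
from enum import Enum

class Verdict(Enum):
    TRUE = "S"        # believed
    FALSE = "~S"      # disbelieved
    NONSENSE = "S?"   # dismissed

# A "believer" in the operational sense: any consistent (deterministic)
# function from statements to {TRUE, FALSE, NONSENSE}.
def toy_believer(statement: str) -> Verdict:
    known_true = {"The Earth travels around the sun"}    # placeholder rules only
    known_false = {"Cairo is the capital of France"}
    if statement in known_true:
        return Verdict.TRUE
    if statement in known_false:
        return Verdict.FALSE
    return Verdict.NONSENSE

def believes(b, s):    return b(s) is Verdict.TRUE       # Axiom 3, first half
def disbelieves(b, s): return b(s) is Verdict.FALSE      # Axiom 3, second half
def dismisses(b, s):   return b(s) is Verdict.NONSENSE   # Axiom 4

print(believes(toy_believer, "The Earth travels around the sun"))   # True
print(dismisses(toy_believer, "All mimsy were the borogoves"))      # True
```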
BRUTUS: OK, so you are developing a nice little calculus of belief. What is the point of all this?
ACHILLES: [Erases half of the whiteboard and begins writing again] Armed with this operational definition of belief, we can now classify right or wrong beliefs into nine possibilities:
Belief | What is actually the case | Name |
S | S | Correct belief |
S | ~S | Incorrect belief |
S | S? | Belief in nonsense |
~S | ~S | Correct disbelief |
~S | S | Incorrect disbelief |
~S | S? | Disbelief in nonsense |
S? | S | Dismissal of truth |
S? | ~S | Dismissal of falsehood |
S? | S? | Dismissal of nonsense |
Regarding this table, if you hold either a correct belief, a correct disbelief, or a dismissal of nonsense, then you “got the truth of S right”; otherwise you “got the truth of S wrong”.
BRUTUS: Fine.
ACHILLES: Let’s try some examples. “I believe that ‘The Earth travels around the sun’.”
BRUTUS: A correct belief, surely. If we are heliocentrists.
ACHILLES: “I believe that ‘Cairo is the capital of France’.”
BRUTUS: Incorrect belief.
ACHILLES: “I do not believe that ‘the mome raths outgrabe’.”
BRUTUS: Hmm. A disbelief in nonsense, I suppose.
ACHILLES: “I do not believe that ‘Abraham Lincoln was a leprechaun’.”
BRUTUS: Actually, I think he was; but I suppose you want me to say a correct disbelief.
ACHILLES: You see how this goes.
BRUTUS: Let me try one. What about, “It is nonsense to say that ‘Napoleon was a ballerina’.” That is a dismissal of falsehood, right?
ACHILLES: Good, good.
BRUTUS: This is an amusing game. But how does it help us resolve the paradoxes you mentioned earlier?
ACHILLES: To show you, let us return to the preface paradox.
BRUTUS: The author who thinks that every individual statement in her book is correct, although she also thinks that there is probably one statement which is nevertheless wrong?
ACHILLES: To solve this paradox, let’s focus on the word “consistently” from the operational definition. If I ask the author, “Does your book contain errors?” she must answer either yes or no, or dismiss the question entirely. She can vacillate all she likes, but to be a believer by my definition she must be able to consistently answer the question somehow. To believe that the book literally contains no errors, she must conclude that the “individual” argument (looking at each fact in the book individually) carries more weight than the statistical argument (looking at the book holistically). If she concludes that the statistical argument is better, then she will believe the book does have errors. She cannot believe both things and still meet our definition of a believer.
BRUTUS: I am confused. If she believes there are no errors—consistently, as you say—when she looks at each question individually, then she believes there are no errors, right? Is that not Axiom #3?
ACHILLES: Be careful. There are two kinds of statement here, and you can believe one without believing the other. The first kind of statement is one such as, “Fact #23 is not in error.” Let’s say the author believes this, since she consistently answers in the affirmative. On the other hand, there are statements such as “The entire book contains no errors,” which she might not believe. The “individual argument” and the “holistic argument” are the names I give to two separate decision algorithms for evaluations of statements of this second kind. The individual argument runs like this: “Individually, I think each statement is true. Therefore all the statements are true, and the book has no errors.” The holistic argument is more nuanced: “Individually, I think each statement is true on its own. However, I know enough statistics to conclude that, on balance, the book should contain at least one error.” The “consistency” mentioned in Axiom #3 has nothing to do with whether there’s some sort of cognitive dissonance present in this holistic approach—it has to do with whether she can apply her internal algorithms time and again, and make the same decision (about the book as a whole!) each time.
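If it helps, the two algorithms can even be written side by side. This is only a schematic sketch; the 100 statements and the 99% figure are carried over from the paradox as stated, and everything else is scaffolding:

```python
# Two decision algorithms for the SAME second-kind statement:
# "The entire book contains no errors."
N, P_EACH = 100, 0.99

def individual_argument() -> bool:
    # "I believe each statement on its own; therefore I believe the conjunction."
    return all(P_EACH > 0.5 for _ in range(N))   # True: the book has no errors

def holistic_argument() -> bool:
    # "Each statement is probably true, but the conjunction of 100 of them is not."
    p_error_free = P_EACH ** N                   # ~0.366
    return p_error_free > 0.5                    # False: the book probably has errors

# Each algorithm is perfectly consistent with itself (same input, same output,
# every time); the author just has to settle which one she actually runs.
print(individual_argument(), holistic_argument())
```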
BRUTUS: I am still bothered.
ACHILLES: Let me ask you this. Do you believe that you will die in a tornado?
BRUTUS: No.
ACHILLES: And if I point to any arbitrarily chosen person [other than Dr. Jonas Miller], you would say that they probably wouldn’t die in a tornado either.
BRUTUS: Of course not.
ACHILLES: And yet someone will die in a tornado this year, right? Many people will.
BRUTUS: I see.
ACHILLES: There seems to be an inconsistency here—that’s why we called this a paradox, originally—but it’s not an Axiom #3 inconsistency. I will consistently say that a tornado won’t get me, and consistently say a tornado will get somebody—so I believe both things.
BRUTUS: Hence, the author believes that each fact is correct, but that there is some fact or facts in the book that are incorrect.
ACHILLES: Yes. If she’s applying the “holistic” algorithm.
BRUTUS: But what if she cannot make up her mind?
ACHILLES: Internally, she should be able to find some rationale (an algorithm) for choosing yes or no. Otherwise, I would not classify her as a believer. To deny that she can choose is to assert that she is literally incapable of reaching a decision in certain situations.
BRUTUS: Well, maybe it is possible to have no opinion on a subject. That is, maybe it is possible for a believer B to be entirely neutral, and neither believe nor disbelieve some statement S, while at the same time denying that S is nonsense.
ACHILLES: Maybe. If this is truly the case, then we have two options. We could say that B is not a believer with regard to S; B’s views about S are not beliefs. Or we could add another logical category to our definition: “B does not dismiss S”. The choice hardly matters, for the question is one of the semantics of dismissal, not of belief. I am inclined to choose the former for reasons of simplicity—being unable to choose between two categories strikes me as a form of indecision, not belief—but there is obviously room for further investigation.
BRUTUS: I do not dismiss either option.
ACHILLES: In any case, this situation—not dismissing, but also not believing—is logically possible, but extremely rare. Suppose someone asks you if you believe Napoleon was a Scorpio. If you have no knowledge of when Napoleon was born, then you might say you have no way to determine the truth of the claim. You might hesitate and say that you have no opinion, but this is a cop-out: without any other data, you can still say that Napoleon only has a 1/12 chance of being a Scorpio, and if you were forced to make a decision (perhaps in order to make a wager) then you would logically say “I believe he was not a Scorpio.”
BRUTUS: Still, I do think you can have no opinion. Let us take Finnish, for example. I speak Finnish, but you do not.
ACHILLES: Very true.
BRUTUS: So consider the word hölynpöly. Surely you will not say you believe anything regarding this word?
ACHILLES: [Laughing] I might say that the word is nonsense to me. [BRUTUS glares at ACHILLES] But OK, I see your point. You’re asking if I believe that “hölynpöly means cow” or if I believe that “hölynpöly means hexagon” or something like that.
BRUTUS: Yes. Do you believe that hölynpöly means X, where X is any word you like?
ACHILLES: Well, no. There are so many possible words it could mean that I can’t really believe it means any of them. Of course, it does mean something. So it’s the preface paradox all over again.
BRUTUS: But suppose I said it means either cow, or hexagon.
ACHILLES: In that case, in the light of new information, I definitely would say that it meant hexagon. I am applying everything I know about language and making an educated guess. The principle in this case is that the longer the word, the more modern the concept; I doubt medieval Finns would have used a four-syllable word for cow.
BRUTUS: You were right in any case. The Finnish word for cow is lehmä, whereas the word for hexagon is kuusikulmio. So the shorter word is the cow, as you said.
ACHILLES: And hölynpöly?
BRUTUS: Again, you were right. Just nonsense.
ACHILLES: Given any statement S which is not nonsense, there will be ways (algorithms) for any believer B to sort S into the true or false categories. [Writes on whiteboard] Additionally, a believer’s self-assessment will place a “confidence” between
50% and (100 – ε)%
upon the belief itself. The reason the confidence will not be less than 50% is that 50% represents the confidence of a belief for which one has no information at all. That is, if someone asks me a true/false question I have at worst a 50/50 chance of getting it right.
BRUTUS: Now hold on. If I ask “Do you believe that usko means carrot?” then surely your confidence level is less than 50%?
ACHILLES: Looking at it another way, if one believes S with a confidence of less than 50%, then one really believes ~S with a confidence greater than 50%. So my response would be, “I do not believe that usko means carrot”, and in that case my confidence level is much higher than 50%.
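Put differently, any confidence report can be normalized so that it lands in that 50-to-just-under-100 range. A one-line bookkeeping rule, really; here is a sketch using your own example statement:

```python
# If confidence in S drops below 0.5, re-express it as confidence in ~S.
# Pure bookkeeping: nothing about the believer changes.
def normalize(statement: str, confidence: float):
    if confidence < 0.5:
        return f"not ({statement})", 1.0 - confidence
    return statement, confidence

print(normalize("usko means carrot", 0.05))
# ('not (usko means carrot)', 0.95) -- "I do not believe that usko means carrot."
```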
BRUTUS: What of that strange epsilon notation?
ACHILLES: “(100 – ε)%” is meant to be reminiscent of calculus, in which ε traditionally indicates a vanishingly small but non-zero amount. Thus I claim that a rational and honest believer will always admit the possibility, however remote, of being wrong on any question. This amounts to asserting that all beliefs are fallible beliefs [J. Roorda, “Fallibilism, ambivalence, and belief,” The Journal of Philosophy 94 (3), 126-155 (1997)]. After all, we could just be characters in a Socratic dialogue, forced to say and believe whatever the author has in mind.
BRUTUS: [Looks up, as if expecting the roof to collapse] Well what if your confidence in a belief about the truth of S is literally 50/50, exactly? Then you could honestly deny that you had a belief about S, could you not?
ACHILLES: Maybe. But only in the realm of quantum physics could such a situation arise. If I perform a Stern-Gerlach experiment on an electron, I will get either the result “+1/2” or the result “–1/2”. The probability of either event is 0.5 exactly, so I logically cannot believe either event will happen. On the other hand, macroscopic events are never 50/50; even a coin flip will be (slightly) biased because “perfect” coins do not exist.
BRUTUS: I have heard you say before that you do not believe in God. What say you now, in light of your “belief” definition?
ACHILLES: One consequence of this discussion is that “agnosticism” is not a belief per se but instead a self-analysis of one’s own beliefs. If I think that God exists, but I am only 51% sure (and I therefore think that God does not exist with a 49% confidence) then I am in truth a theist, albeit a weak one, because I believe in God, but am very unsure about this belief. Conversely I am an atheist if I am 51% sure that God does not exist. In both of these examples I would typically be called in our culture an agnostic, but such a description says nothing about what I believe. By the operational definition every rational believer is a theist or an atheist to some degree of confidence [cf. R. Dawkins, The God Delusion, (Houghton Mifflin, Boston, 2006) p. 50-51] and I would call agnostic someone who declines to answer a question about the existence of God, for whatever reason. Usually, there is some social, cultural, or psychological reason for declining to answer, but there is certainly no logical requirement to do so.
BRUTUS: So back to wonderland. Can I believe contradictory things?
ACHILLES: No, according to the operational definition.
BRUTUS: Example?
ACHILLES: Suppose I memorize the elements of the periodic table sequentially, so that I can say “Hydrogen, Helium, Lithium, …” all the way through element 100, Fermium. Suppose I also “learn” in a book that potassium is element number 20. I might go years without ever realizing that my “beliefs” about potassium are inconsistent, but one day I recite the periodic table all the way through to potassium, and find that potassium’s atomic number Z is 19, not 20. What did I believe, and what do I now believe?
BRUTUS: I think I can work this out. By your operational definition, you did not believe both Z = 19 and Z = 20. Rather, you believed one or the other, depending upon which number you would have chosen as “more correct” if the discrepancy had ever been brought to your attention. That is, when you learn that there is an inconsistency, you evaluate all the evidence and decide perhaps that Z = 19 has a higher probability of being true. In retrospect, you believed that Z = 19 all along, even before you realized that there was any contradiction.
ACHILLES: Good! Here is another example, which was essentially used as the main plot device in the films The Shop Around the Corner and You’ve Got Mail. Suppose you have a pen pal that you like, and a neighbor that you dislike. You then find out that your pen pal is your neighbor. Do you now like this person X, or not like them? Upon reflection—or, to put it more bluntly, upon applying certain algorithms—you will no doubt be able to decide one way or the other. Perhaps you will decide that the pen pal you like is the “real” version of X, and that the neighbor persona was a façade. In retrospect, then, you liked X all along.
BRUTUS: In retrospect, I disliked both movies all along.
ACHILLES: One implication of this discussion is that I can believe that “I believe S”, and be wrong. This is because I might not be fully aware of belief-sorting algorithms that I possess—I am not a “fully self-aware believer”. Maybe I hear on the radio that Andrew Jackson was the 6th American president. That sounds plausible, so I can say truthfully that “I believe that if I were to think about it, I would conclude that Jackson was our 6th president.” That is, I believe that I believe that he was #6. This, despite the fact that if I were to carefully think about it, I would conclude that he was actually #7. (I would do this by counting through the first few presidents in my head, “Washington, Adams, Jefferson, Madison, Monroe, John Quincy Adams, Jackson” and get that he was 7th, not 6th.) There is no contradiction here; the statement “I believe that Jackson was our 7th president” is a belief about history, whereas the statement “I believe that if I were to think about it, I would conclude that Jackson was our 6th president” is a statement about my own beliefs and data-retrieval algorithms.
BRUTUS: I believe you.
ACHILLES: Keep in mind that if a believer has more than one “data retrieval” algorithm operating, then to be fair the believer must apply all algorithms, and use a meta-algorithm to choose among them.
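Schematically, the potassium example looks something like this. The two retrieval routines come straight from the story; the reliability weights are numbers I have invented purely for illustration:

```python
# Two data-retrieval algorithms that disagree about potassium's atomic number Z,
# plus a meta-algorithm that arbitrates between them.
def recite_periodic_table() -> int:
    elements = ["H", "He", "Li", "Be", "B", "C", "N", "O", "F", "Ne",
                "Na", "Mg", "Al", "Si", "P", "S", "Cl", "Ar", "K"]
    return elements.index("K") + 1   # 19

def recall_from_that_book() -> int:
    return 20                        # the misremembered "fact"

def meta_algorithm() -> int:
    # Invented weights: counting through the table is harder to get wrong
    # than a single half-remembered source.
    candidates = [(recite_periodic_table(), 0.9),
                  (recall_from_that_book(), 0.4)]
    return max(candidates, key=lambda c: c[1])[0]

print(meta_algorithm())   # 19 -- what, in retrospect, I believed all along
```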
BRUTUS: [Arises and approaches the whiteboard himself, borrowing a marker from ACHILLES] I hate to send us along another tangent, but is there not some consilience here between our discussion and Frege’s puzzle [G. Frege, “Über Sinn und Bedeutung,” Zeitschrift für Philosophie und philosophische Kritik C, 25-50 (1892)], in which a statement such as
S1 = “Mark Twain was Samuel Clemens”
is thought to convey something subtly different from
S2 = “Mark Twain was Mark Twain”
even though Mark Twain and Samuel Clemens refer to the same person? The puzzle is that a person might believe S2 while disbelieving S1—an apparent contradiction.
ACHILLES: Hmm. With regard to the operational definition of belief, there is no doubt that a rational believer will always evaluate S2 as true, regardless of how “Mark Twain” is defined. However, the evaluation of S1 will depend upon both the definitions of “Mark Twain” and “Samuel Clemens”.
BRUTUS: [Sitting back down] Of course.
ACHILLES: If one recognizes the two names as being synonymous, then one cannot rationally accept S2 while denying S1.
BRUTUS: But therein lies a dilemma for some: S1 and S2 feel different in some intangible sense…
ACHILLES: My decidedly mathematical view is that S2 is a tautology like “X = X”, true no matter what X is. It therefore conveys no information whatsoever. S1, however, is like “X = Y”, and conveys information: it may or may not be true, depending upon what X and Y are.
BRUTUS: Interesting. You are saying that a statement such as “X=X”, which is always true, conveys less information than a statement such as “X=Y”, which might even be wrong!
ACHILLES: Makes sense to me. A tautology is always true, so it really says nothing at all. It is a subtle form of nonsense. In the words of physicist Wolfgang Pauli, such nonsense is “not even wrong.” [R. Peierls, “Wolfgang Ernst Pauli, 1900-1958,” Biographical Memoirs of Fellows of the Royal Society 5, 186 (1960)]
BRUTUS: But what about the paradox of latent belief? “An ostrich can run faster than a physics professor”?
ACHILLES: In light of the operational definition, the “paradox” of latent belief now ceases to be a paradox at all. If a believer possesses a truth-sorting algorithm regarding S, then that believer can be said to believe something about S even before the algorithm is applied. The consistency restriction guarantees that this “black-box” processing will give the same result time and again. It hardly matters whether the believer is himself cognizant of the final outcome.
BRUTUS: Seems reasonable.
ACHILLES: Latent beliefs are much more common than people realize. The truth is that much of what we “believe” cannot be recalled instantly, or even quickly; the beliefs must be “brought to the fore” by applying some algorithm. For example, if someone asks me “Which letter of the alphabet is ‘J’?” I can consistently answer, but to figure it out I have to run through the alphabet “A = 1, B = 2, C = 3,…” to the tune of “Twinkle Twinkle Little Star” and eventually get that J = 10. There is a time delay in the application of my alphabet algorithm, around 4 seconds, but it is clear that even beforehand I “believed” that J = 10. Other beliefs might take much longer to process. I believe that 16,184 ÷ 17 = 952, but only because I can consistently find this quotient through long division. I believe that Franklin Pierce was the 14th American president, because I have memorized the presidents in order and can list them sequentially. On the other hand, if I am given a hopelessly difficult problem, and cannot consistently reproduce the same answer, then I cannot be said to be a believer with respect to that problem.
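Those bring-to-the-fore algorithms are exactly the sort of thing one could write down. For instance, here is a trivial sketch of the two I just mentioned; nothing is hidden here, the alphabet trick and the long division are the same ones described above:

```python
import string

# Latent beliefs as computations that simply have not been run yet.
def letter_position(letter: str) -> int:
    # The "A = 1, B = 2, C = 3, ..." algorithm, minus the Twinkle Twinkle soundtrack.
    return string.ascii_uppercase.index(letter.upper()) + 1

def long_division(dividend: int, divisor: int) -> int:
    return dividend // divisor

print(letter_position("J"))        # 10
print(long_division(16_184, 17))   # 952
```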
BRUTUS: I still have a strong objection to your operational definition: it forces us to admit that inanimate objects are capable of “belief”. If belief (as you say) is nothing more than a sorting action, a classification-into-categories, then surely a suitably chosen machine could perform such a task? Therefore such a machine could, according to you, believe. The idea is troubling.
ACHILLES: Sure, but remember that I am proposing a cognitive shift to “belief as an action” and trying to strip away all qualia. Even if this makes you uncomfortable, the shift is justified if it proves useful.
BRUTUS: Useful, how?
ACHILLES: We asked earlier if an alien could believe something, or a chimpanzee, or a cat. By my definition the answer is yes, for each of these beings can certainly sort “statements” into categories either explicitly (through language) or implicitly (through actions). When a cat sniffs a rock and then a piece of salmon, and then starts to eat the salmon, the cat has “asserted” without language that a rock is not edible but that salmon is edible. The cat “believes” that rocks are inedible. It does not matter that the cat may or may not have the notion of “edibleness” in its mind. I think of this as a qualified interpretational viewpoint: from a practical standpoint, we cannot know that a cat believes anything without observing the cat (since the cat lacks a common language with us). At the same time, we can observe that the cat makes the same decision consistently time and again, which means that the cat still believes something. The cat is a consistent sorting algorithm.
BRUTUS: Well, surely a computer cannot “believe” anything, since it is not self-aware, whereas maybe a cat at least has some limited self-awareness.
ACHILLES: That’s an invalid point if you accept the operational definition, which doesn’t mention self-awareness in any way.
BRUTUS: But the distinction remains important to me, somehow.
ACHILLES: Fine: then you could easily differentiate between “automated” and “sentient” beliefs if you like. I feel that this is a distraction, and would prefer to designate both types of believers, together, as just “believers”.
BRUTUS: What of free will?
ACHILLES: Belief is not something anyone has control over; free will, whether it exists or not, is of no consequence. This follows from the operational definition. You cannot exercise free will to consciously choose one belief over another, because that would violate the “consistently” constraint. For, suppose you always have the freedom to choose to believe S. Tomorrow you may very well use that same free will to “believe” ~S. But this is inconsistent—to be a consistent believer, you cannot have any free will as far as your beliefs are concerned.
BRUTUS: I still maintain that I have free will.
ACHILLES: Great. But it doesn’t affect your beliefs. You can’t choose to believe that 2+2=5, can you? I am sure you disbelieve it, and no amount of free will can change that.
BRUTUS: I choose to believe in God.
ACHILLES: No, you believe in God, because that’s the way your brain is wired. You don’t really have a choice.
BRUTUS: Somehow, a discussion of belief has turned cold, hard, calculating. You say that I have no choice about what I believe. But maybe I can use free will to change my beliefs: I want to believe in God, so I look for evidence of His majesty—
ACHILLES: And I want to believe in life after death. But wanting doesn’t make it so.
BRUTUS: But beliefs can be changed, right?
ACHILLES: Yes, I suppose. Maybe I can immerse myself in a religion to try and “change” my beliefs. This amounts to “reprogramming” the belief algorithms by which I decide the truth of propositions.
BRUTUS: Yes.
ACHILLES: I guess there are situations where we would expect this to work. I allow myself to be brainwashed; I now believe things I did not believe before. [BRUTUS winces] A less drastic example involves education. A student may not believe in the truth of Einstein’s special relativity; it seems intuitively wrong. But then the student is initiated into the culture/knowledge base of physics, and gets his PhD. [R. Peters, “Education as initiation,” in Archambault, R. (ed.) Philosophical Analysis and Education (pp. 87-111). (Routledge and Kegan Paul, London, 1965)] In so doing his “beliefs” have changed: he now “believes” in special relativity.
BRUTUS: So your point is that you cannot choose to believe something at this moment (and still be the same believer). However, you can choose to re-wire your own brain in the hopes that it will lead you to believe something else. What you have chosen, then, is to become a different believer.
ACHILLES: That seems a cogent summary of my views.
BRUTUS: [Sighs] So what, then, is your final point?
ACHILLES: I have argued that belief is a concept that should be operationally defined, and have attempted such a definition. The key is to strip away abstractions, and focus instead on belief as an action.
BRUTUS: Quine [W. V. O. Quine and J. S. Ullian, The Web of Belief. (Random House, New York, 1970)] says that we hold many beliefs, that these beliefs reinforce one another, and that they exist in an inter-connected network. Thus new data/observations that affect one belief can in fact affect them all.
ACHILLES: The same is clearly true for science and philosophy in general. With the operational definition of belief, the whole body of human knowledge K is seen to itself be a believer, because the collective K can be used to sort facts into true/false categories. In turn, epistemology is seen to be the search for functions or algorithms that tap into K to reach conclusions. Seen in this light, a discussion such as ours is just one strand in the web, and may or may not pull believers (such as you or me) in unexpected ways.
BRUTUS: But as to my original question…?
ACHILLES: [Sitting back down] Which was?
BRUTUS: Well, I asked if you really do believe in the many-worlds interpretation of quantum mechanics.
ACHILLES: Oh, that. Yes, I do believe it. With a confidence level of about 51%.
BRUTUS: Not very sure of yourself, are you?
ACHILLES: [Winking] Hey, with usko there’s always epäilen.
BRUTUS: [Muttering] Hölynpöly.
[Note: this dialogue is an outtake from my book Why Is There Anything? which is now available for download on the Kindle. It had to be removed from the book because it was only tangentially relevant to that book’s main point.]