Friday 10 June 2011

Epistemic logic, temporal epistemic logic, and the blue-eyed islander puzzle lower bound

I recently reposted my favourite logic puzzle, namely the blue-eyed islander puzzle. I am fond of this puzzle because in order to properly understand the correct solution (and to properly understand why the alternative solution is incorrect), one has to think very clearly (but unintuitively) about the nature of knowledge.

There is however an additional subtlety to the puzzle that was pointed out in comments, in that the correct solution to the puzzle has two components, a (necessary) upper bound and a (possible) lower bound (I’ll explain this further below the fold, in order to avoid blatantly spoiling the puzzle here). Only the upper bound is correctly explained in the puzzle (and even then, there are some slight inaccuracies, as will be discussed below). The lower bound, however, is substantially more difficult to establish, in part because the bound is merely possible and not necessary. Ultimately, this is because to demonstrate the upper bound, one merely has to show that a certain statement is logically deducible from an islander’s state of knowledge, which can be done by presenting an appropriate chain of logical deductions. But to demonstrate the lower bound, one needs to show that certain statements are not logically deducible from an islander’s state of knowledge, which is much harder, as one has to rule out every possible chain of deductive reasoning that might arrive at this particular conclusion. In fact, to rigorously establish such impossibility statements, one ends up having to leave the “syntactic” side of logic (deductive reasoning), and move instead to the dual “semantic” side of logic (creation of models). As we shall see, semantics requires substantially more mathematical setup than syntax, and the demonstration of the lower bound will therefore be much lengthier than that of the upper bound.

To complicate things further, the particular logic that is used in the blue-eyed islander puzzle is not the same as the logics that are commonly used in mathematics, namely propositional logic and first-order logic. Because the logical reasoning here depends so crucially on the concept of knowledge, one must work instead with an epistemic logic (or more precisely, an epistemic modal logic) which can properly work with, and model, the knowledge of various agents. To add even more complication, the role of time is also important (an islander may not know a certain fact on one day, but learn it on the next day), so one also needs to incorporate the language of temporal logic in order to fully model the situation. This makes both the syntax and semantics of the logic quite intricate; to see this, one only needs to contemplate the task of programming a computer with enough epistemic and temporal deductive reasoning powers that it would be able to solve the islander puzzle (or even a smaller version thereof, say with just three or four islanders) without being deliberately “fed” the solution. (The fact that humans can grasp the correct solution without any formal logical training is therefore quite remarkable.)

As difficult as the syntax of temporal epistemic modal logic is, though, the semantics is more intricate still. For instance, it turns out that in order to completely model the epistemic state of a finite number of agents (such as 1000 islanders), one requires an infinite model, due to the existence of arbitrarily long nested chains of knowledge (e.g. “{A} knows that {B} knows that {C} knows that {D} has blue eyes”), which cannot be automatically reduced to shorter chains of knowledge. Furthermore, because each agent has only an incomplete knowledge of the world, one must take into account multiple hypothetical worlds, which differ from the real world but which are considered to be possible worlds by one or more agents, thus introducing modality into the logic. More subtly, one must also consider worlds which each agent knows to be impossible, but are not commonly known to be impossible, so that (for instance) one agent is willing to admit the possibility that another agent considers that world to be possible; it is the consideration of such worlds which is crucial to the resolution of the blue-eyed islander puzzle. And this is even before one adds the temporal aspect (e.g. “On Tuesday, {A} knows that on Monday, {B} knew that by Wednesday, {C} will know that {D} has blue eyes”).

Despite all this fearsome complexity, it is still possible to set up both the syntax and semantics of temporal epistemic modal logic in such a way that one can formulate the blue-eyed islander problem rigorously, and in such a way that one has both an upper and a lower bound in the solution. The purpose of this post is to construct such a setup and to explain the lower bound in particular. The same logic is also useful for analysing another well-known paradox, the unexpected hanging paradox, and I will do so at the end of the post. Note though that there is more than one way to set up epistemic logics, and they are not all equivalent to each other.

(On the other hand, for puzzles such as the islander puzzle in which there are only a finite number of atomic propositions and no free variables, one at least can avoid the need to admit predicate logic, in which one has to discuss quantifiers such as {\forall} and {\exists}. A fully formed predicate temporal epistemic modal logic would indeed be of terrifying complexity.)

Our approach here will be a little different from the approach commonly found in the epistemic logic literature, in which one jumps straight to “arbitrary-order epistemic logic” in which arbitrarily long nested chains of knowledge (“{A} knows that {B} knows that {C} knows that \ldots”) are allowed. Instead, we will adopt a hierarchical approach, recursively defining for {k=0,1,2,\ldots} a “{k^{th}}-order epistemic logic” in which knowledge chains of depth up to {k}, but no greater, are permitted. The arbitrary-order epistemic logic is then obtained as a limit (a direct limit on the syntactic side, and an inverse limit on the semantic side, which is dual to the syntactic side) of the finite order epistemic logics.

I should warn that this is going to be a rather formal and mathematical post. Readers who simply want to know the answer to the islander puzzle would probably be better off reading the discussion at the puzzle’s own blog post instead.

— 1. Zeroth-order logic —

Before we plunge into the full complexity of epistemic logic (or temporal epistemic logic), let us first discuss formal logic in general, and then focus on a particularly simple example of a logic, namely zeroth order logic (better known as propositional logic). This logic will end up forming the foundation for a hierarchy of epistemic logics, which will be needed to model such logic puzzles as the blue-eyed islander puzzle.

Informally, a logic consists of three inter-related components:

1. A language. This describes the type of sentences the logic is able to discuss.
2. A syntax. This describes the rules by which the logic can deduce conclusions (from given hypotheses).
3. A semantics. This describes the sentences which the logic interprets to be true (in given models).

A little more formally:

1. A language is a set {L} of sentences, which are certain strings of symbols from a fixed alphabet, that are generated by some rules of grammar.
2. A syntax is a collection of inference rules for generating deductions of the form {T \vdash S} (which we read as “From {T}, we can deduce {S}” or “{S} is a consequence of {T}“), where {T} and {S} are sentences in {L} (or sets of sentences in {L}).
3. A semantics describes what a model (or interpretation, or structure, or world) {M} of the logic is, and defines what it means for a sentence {S} in {L} (or a collection of sentences) to be true in such a model {M} (which we write as {M \models S}, and we read as “{M} models {S}“, “{M} obeys {S}“, or “{S} is true in {M}“).

We will abuse notation a little bit and use the language {L} as a metonym for the entire logic; strictly speaking, the logic should be a tuple {(L, \vdash_L, \models_L)} consisting of the language, syntax, and semantics, but this leads to very unwieldy notation.

The syntax and semantics are dual to each other in many ways; for instance, the syntax can be used to show that certain statements can be proved, while the semantics can be used to show that certain statements cannot be proved. This distinction will be particularly important in the blue-eyed islander puzzle; in order to show that all blue-eyed islanders commit suicide by the 100th day, one can argue purely on syntactical grounds; but to show that it is possible for the blue-eyed islanders to not commit suicide on the 99th day or any preceding day, one must instead use semantic methods.

To illustrate the interplay between language, syntax, and semantics, we begin with the simple example of propositional logic. To describe this logic, one must first begin with some collection of atomic propositions. For instance, on an island with three islanders {I_1, I_2, I_3}, one could consider the propositional logic generated by three atomic propositions {A_1, A_2, A_3}, where each {A_i} is intended to model the statement that {I_i} has blue eyes.

One can have either a finite or an infinite set of atomic propositions. In this discussion, it will suffice to consider the situation in which there are only finitely many atomic propositions, but one can certainly also study logics with infinitely many such propositions.

The language {L} would then consist of all the sentences that can be formed from the atomic propositions using the usual logical connectives ({\wedge}, {\vee}, {\neg}, {\implies}, {\top}, {\bot}, etc.) and parentheses, according to the usual rules of logical grammar (which consists of rules such as “If {S} and {T} are sentences in {L}, then {(S \vee T)} is also a sentence in {L}“). For instance, if {A_1,A_2,A_3} are atomic propositions, then

\displaystyle ((A_1 \wedge A_2) \vee (A_3 \wedge \neg A_1))

would be an example of a sentence in {L}. On the other hand, \displaystyle \wedge A_1 \neg A_3 A_1 ) \vee \implies A_2 (

is not a sentence in {L}, despite being a juxtaposition of atomic propositions, connectives, and parentheses, because it is not built up from rules of grammar.

One could certainly write down a finite list of all the rules of grammar for propositional calculus (as is done for instance at this Wikipedia page), but we will not do so here in order not to disrupt the flow of discussion.

It is customary to abuse notation slightly and omit parentheses when they are redundant (or when there is enough associativity present that the precise placement of parentheses is not relevant). For instance, {((A_1 \wedge A_2) \wedge A_3)} could be abbreviated as {A_1 \wedge A_2 \wedge A_3}. We will adopt this type of convention in order to keep the exposition as uncluttered as possible.
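As a concrete (and entirely optional) illustration of the rules of grammar, here is a small Python sketch, not part of the original argument, in which sentences are values that can only be produced by constructor functions mirroring the grammar; the nested-tuple representation and the names (Atom, Not, And, Or, Implies) are ad hoc choices for this post.

```python
# A minimal sketch of propositional syntax: sentences are nested tuples
# built only by the constructors below, mirroring the rules of grammar.

def Atom(name):            # atomic proposition, e.g. Atom("A1")
    return ("atom", name)

def Not(s):                # negation
    return ("not", s)

def And(s, t):             # conjunction
    return ("and", s, t)

def Or(s, t):              # disjunction
    return ("or", s, t)

def Implies(s, t):         # implication
    return ("implies", s, t)

A1, A2, A3 = Atom("A1"), Atom("A2"), Atom("A3")

# The well-formed sentence ((A1 and A2) or (A3 and not A1)):
example = Or(And(A1, A2), And(A3, Not(A1)))

# By contrast, the ill-formed string of symbols displayed earlier corresponds
# to no value constructible from these functions, which is what "not generated
# by the rules of grammar" means.
```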

Now we turn to the syntax of propositional logic. This syntax is generated by basic rules of deductive logic, such as modus ponens

\displaystyle A, (A \implies B) \vdash B

or the law of the excluded middle \displaystyle \vdash (A \vee \neg A)

and completed by transitivity (if {S \vdash T} and {T \vdash U}, then {S \vdash U}), monotonicity ({S,T \vdash S}), and concatenation (if {S \vdash T} and {S \vdash U} then {S \vdash T,U}). (Here we adopt the usual convention of representing a set of sentences without using the usual curly braces, instead relying purely on the comma separator.) Another convenient inference rule to place in this logic is the deduction theorem: if {S \vdash T}, then one can infer {\vdash (S \implies T)}. (In propositional logic (or predicate logic), this rule is redundant (hence the designation of this rule as a theorem), but for the epistemic logics below, it will be convenient to make deduction an explicit inference rule, as it simplifies the other inference rules one will have to add to the system.)

A typical deduction that comes from this syntax is

\displaystyle (A_1 \vee A_2 \vee A_3), \neg A_2, \neg A_3 \vdash A_1

which using the blue-eyed islander interpretation, is the formalisation of the assertion that given that at least one of the islanders {I_1,I_2,I_3} has blue eyes, and that {I_2, I_3} do not have blue eyes, one can deduce that {I_1} has blue eyes.

As with the laws of grammar, one can certainly write down a finite list of inference rules in propositional calculus; one such list for instance is given at this Wikipedia page. Note though that, much as a given vector space has more than one set of generators, there is more than one possible list of inference rules for propositional calculus, due to some rules being equivalent to, or at least deducible from, other rules; the precise choice of basic inference rules is to some extent a matter of personal taste and will not be terribly relevant for the current discussion.

Finally, we discuss the semantics of propositional logic. For this particular logic, the models {M} are described by truth assignments, that assign a truth value {(M \models A_i) \in \{ \hbox{true}, \hbox{false} \}} to each atomic statement {A_i}. Once a truth value {(M \models A_i)} to each atomic statement {A_i} is assigned, the truth value {(M \models S)} of any other sentence {S} in the propositional logic generated by these atomic statements can then be interpreted using the usual truth tables. For instance, returning to the islander example, consider a model {M} in which {M \models A_1} is true, but {M \models A_2} and {M \models A_3} are false; informally, {M} describes a hypothetical world in which {I_1} has blue eyes but {I_2} and {I_3} do not have blue eyes. Then the sentence {A_1 \vee A_2 \vee A_3} is true in {M},

\displaystyle M \models (A_1 \vee A_2 \vee A_3),

but the statement {A_1 \implies A_2} is false in {M}, \displaystyle M \not \models (A_1 \implies A_2).

If {S} is a set of sentences, we say that {M} models {S} if {M} models each sentence in {S}. Thus for instance, if we continue the preceding example, then \displaystyle M \models (A_1 \vee A_2 \vee A_3), (A_2 \implies A_3)

but \displaystyle M \not \models (A_1 \vee A_2 \vee A_3), (A_1 \implies A_2).

Note that if there are only finitely many atomic statements {A_1,\ldots,A_n}, then there are only finitely many distinct models {M} of the resulting propositional logic; in fact, there are exactly {2^n} such models, one for each truth assignment. We will denote the space of all possible models of a language {L} as {\hbox{Mod}(L)}.
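To make the notion of a truth assignment concrete, the following Python sketch (again an illustration of my own, using the same nested-tuple representation as the earlier sketch; the helper names are not standard) evaluates sentences in a model via the usual truth tables and enumerates the {2^n} models of the language.

```python
from itertools import product

# Sentences as nested tuples, as in the earlier sketch.
Atom    = lambda name: ("atom", name)
Not     = lambda s: ("not", s)
And     = lambda s, t: ("and", s, t)
Or      = lambda s, t: ("or", s, t)
Implies = lambda s, t: ("implies", s, t)

def evaluate(sentence, model):
    """Truth value of a sentence in a model (a dict from atom names to bools)."""
    tag = sentence[0]
    if tag == "atom":
        return model[sentence[1]]
    if tag == "not":
        return not evaluate(sentence[1], model)
    if tag == "and":
        return evaluate(sentence[1], model) and evaluate(sentence[2], model)
    if tag == "or":
        return evaluate(sentence[1], model) or evaluate(sentence[2], model)
    if tag == "implies":
        return (not evaluate(sentence[1], model)) or evaluate(sentence[2], model)
    raise ValueError(tag)

def all_models(atom_names):
    """Mod(L): every truth assignment to the given atoms (2^n of them)."""
    for values in product([False, True], repeat=len(atom_names)):
        yield dict(zip(atom_names, values))

A1, A2, A3 = Atom("A1"), Atom("A2"), Atom("A3")
M = {"A1": True, "A2": False, "A3": False}       # the hypothetical world above

assert evaluate(Or(Or(A1, A2), A3), M)           # M |= A1 v A2 v A3
assert not evaluate(Implies(A1, A2), M)          # M does not model A1 => A2
assert len(list(all_models(["A1", "A2", "A3"]))) == 8   # 2^3 models
```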

If one likes, one can designate one of these models to be the “real” world {\hbox{Real}} so that all the other models become purely hypothetical worlds. In the setting of propositional logic, the hypothetical worlds then have no direct bearing on the real world; the fact that a sentence {S} is true or false in a hypothetical world {M} does not say anything about what sentences are true or false in {\hbox{Real}}. However, when we turn to epistemic logics later in this post, we will see that hypothetical worlds will play an important role in the real world, because such worlds may be considered to be possible worlds by one or more agents (or, an agent may consider it possible that another agent considers the world to be possible, and so forth.).

The syntactical and semantic sides of propositional logic are tied together by two fundamental facts:

Theorem 1 (Soundness and completeness) Let {L} be a propositional logic, and let {S} be a set of sentences in {L}, and let {T} be another sentence in {L}.

1. (Soundness) If {S \vdash T}, then every model {M} which obeys {S}, also obeys {T} (i.e. {M \models S} implies {M \models T}).
2. (Completeness) If every model {M} that obeys {S}, also obeys {T}, then {S \vdash T}.

Soundness is easy to prove; one merely needs to verify that each of the inference rules {S \vdash T} in one’s syntax is valid, in that models that obey {S}, automatically obey {T}. This boils down to some tedious inspection of truth tables. (The soundness of the deduction theorem is a little trickier to prove, but one can achieve this by an induction on the number of times this theorem is invoked in a given deduction.) Completeness is a bit more difficult to establish; this claim is in fact a special case of the Gödel completeness theorem, and is discussed in this previous blog post; we also sketch a proof of completeness below.

By taking the contrapositive of soundness, we have the following important corollary: if we can find a model {M} which obeys {S} but does not obey {T}, then it must not be possible to deduce {T} as a logical consequence of {S}: {S \not \vdash T}. Thus, we can use semantics to demonstrate limitations in syntax.

For instance, consider a truth assignment {M} in which {A_2} is true but {A_1} is false. Then {M \models (A_1 \implies A_2)}, but {M \not \models (A_2 \implies A_1)}. This demonstrates that

\displaystyle (A_1 \implies A_2) \not \vdash (A_2 \implies A_1),

thus an implication such as {A_1 \implies A_2} does not entail its converse {A_2 \implies A_1}.
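This contrapositive-of-soundness argument can be mechanised by brute force over the finitely many models. The sketch below (illustrative only; sentences are represented simply as Boolean functions of a model, and the helper names are my own) searches for a model obeying the hypothesis but not the conclusion, which certifies the non-deducibility.

```python
from itertools import product

def models(atoms):
    """All truth assignments (models) for the given atomic propositions."""
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def refutes(hypothesis, conclusion, atoms):
    """Return a model obeying the hypothesis but not the conclusion, if one exists.
    By soundness, such a counter-model shows that hypothesis does not entail conclusion."""
    for m in models(atoms):
        if hypothesis(m) and not conclusion(m):
            return m
    return None

# Sentences represented as truth functions of a model.
A1_implies_A2 = lambda m: (not m["A1"]) or m["A2"]
A2_implies_A1 = lambda m: (not m["A2"]) or m["A1"]

print(refutes(A1_implies_A2, A2_implies_A1, ["A1", "A2"]))
# {'A1': False, 'A2': True}: obeys A1 => A2 but not A2 => A1
```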

A theory (or more precisely, a deductive theory) in a logic {L}, is a set of sentences {{\mathcal T}} in {L} which is closed under deductive consequence, thus if {{\mathcal T} \vdash S} for some sentence {S} in {L}, then {S \in {\mathcal T}}. Given a theory {{\mathcal T}}, one can associate the set

\displaystyle \hbox{Mod}_L({\mathcal T}) = \hbox{Mod}({\mathcal T}) := \{ M \in \hbox{Mod}(L): M \models {\mathcal T} \}

of all possible worlds (or models) in which that theory is true; conversely, given a set {{\mathcal M} \subset \hbox{Mod}(L)} of such models, one can form the theory \displaystyle \hbox{Th}_L({\mathcal M}) = \hbox{Th}({\mathcal M}) := \{ S \in L: M \models S \hbox{ for all } M \in {\mathcal M} \}

of sentences which are true in all models in {{\mathcal M}}. If the logic {L} is both sound and complete, these operations invert each other: given any theory {{\mathcal T}}, we have {\hbox{Th}(\hbox{Mod}({\mathcal T})) = {\mathcal T}}, and given any set {{\mathcal M} \subset \hbox{Mod}(L)} of models, {\hbox{Th}({\mathcal M})} is a theory and {\hbox{Mod}(\hbox{Th}({\mathcal M})) = {\mathcal M}}. Thus there is a one-to-one correspondence between theories and sets of possible worlds in a sound complete language {L}.

For instance, in our running example, if {{\mathcal T}} is the theory generated by the three statements {A_1 \vee A_2 \vee A_3}, {A_2}, and {\neg A_3}, then {\hbox{Mod}({\mathcal T})} consists of precisely two worlds; one in which {A_1, A_2} are true and {A_3} is false, and one in which {A_2} is true and {A_1,A_3} are false. Since neither of {A_1} or {\neg A_1} are true in both worlds in {\hbox{Mod}({\mathcal T})}, neither of {A_1} or {\neg A_1} lie in {{\mathcal T} = \hbox{Th}(\hbox{Mod}({\mathcal T}))}. Thus, it is not possible to deduce either of {A_1} or {\neg A_1} from the hypotheses {A_1 \vee A_2 \vee A_3}, {A_2}, and {\neg A_3}. More informally, if one knows that there is at least one blue-eyed islander, that {I_2} has blue eyes, and {I_3} does not have blue eyes, this is not enough information to determine whether {I_1} has blue eyes or not.
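One can check this running example mechanically. The following sketch (illustrative only, reusing the model-as-dictionary convention of the earlier sketches) computes {\hbox{Mod}({\mathcal T})} for the three hypotheses and confirms that neither {A_1} nor {\neg A_1} belongs to {\hbox{Th}(\hbox{Mod}({\mathcal T}))}.

```python
from itertools import product

def models(atoms):
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

# The hypotheses of the running example, as truth functions of a model.
hypotheses = [
    lambda m: m["A1"] or m["A2"] or m["A3"],   # A1 v A2 v A3
    lambda m: m["A2"],                         # A2
    lambda m: not m["A3"],                     # not A3
]

# Mod(T): the models in which every hypothesis holds.
mod_T = [m for m in models(["A1", "A2", "A3"]) if all(h(m) for h in hypotheses)]
print(mod_T)
# [{'A1': False, 'A2': True, 'A3': False}, {'A1': True, 'A2': True, 'A3': False}]

# Th(Mod(T)) contains a sentence iff it holds in every model of Mod(T);
# neither A1 nor its negation qualifies, so neither is deducible from T.
print(all(m["A1"] for m in mod_T))        # False: A1 is not in Th(Mod(T))
print(all(not m["A1"] for m in mod_T))    # False: neg A1 is not in Th(Mod(T))
```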

One can use theories to prove the completeness theorem. Roughly speaking, one can argue by taking the contrapositive. Suppose that {S \not \vdash T}, then we can find a theory which contains all sentences in {S}, but does not contain {T}. In this finite setting, we can easily pass to a maximal such theory (with respect to set inclusion); one then easily verifies that this theory is complete in the sense that for any given sentence {U}, exactly one of {U} and {\neg U} lies in the theory. From this complete theory one can then directly build a model {M} which obeys {S} but does not obey {T}, giving the desired claim.

— 2. First-order epistemic logic —

Having reviewed propositional logic (which we will view as the zeroth-order iteration of epistemic logic), we now turn to the first non-trivial example of epistemic logic, which we shall call first-order epistemic logic (which should not be confused with the more familiar first-order predicate logic). Roughly speaking, first-order epistemic logic is like zeroth-order logic, except that there are now also some knowledge agents that are able to know certain facts in zeroth-order logic (e.g. an islander {I_1} may know that the islander {I_2} has blue eyes). However, in this logic one cannot yet express higher-order facts (e.g. we will not yet be able to formulate a sentence to the effect that {I_1} knows that {I_2} knows that {I_3} has blue eyes). This will require a second-order or higher epistemic logic, which we will discuss later in this post.

Let us now formally construct this logic. As with zeroth-order logic, we will need a certain set of atomic propositions, which for simplicity we will assume to be a finite set {A_1,\ldots,A_n}. This already gives the zeroth order language {L_0} of sentences that one can form from the {A_1,\ldots,A_n} by the rules of propositional grammar. For instance,

\displaystyle (A_1 \implies A_2) \wedge (A_2 \implies A_3)

is a sentence in {L_0}. The zeroth-order logic {L_0} also comes with a notion of inference {\vdash_{L_0}} and a notion of modeling {\models_{L_0}}, which we now subscript by {L_0} in order to distinguish it from the first-order notions of inference {\vdash_{L_1}} and modeling {\models_{L_1}} which we will define shortly. Thus, for instance \displaystyle (A_1 \implies A_2) \wedge (A_2 \implies A_3) \vdash_{L_0} (A_1 \implies A_3),

and if {M_0} is a truth assignment for {L_0} for which {A_1, A_2, A_3} are all true, then \displaystyle M_0 \models_{L_0} (A_1 \implies A_2) \wedge (A_2 \implies A_3).

We will also assume the existence of a finite number of knowledge agents {K_1,\ldots,K_m}, each of which is capable of knowing sentences in the zeroth order language {L_0}. (In the case of the islander puzzle, and ignoring for now the time aspect of the puzzle, each islander {I_i} generates one knowledge agent {K_i}, representing the state of knowledge of {I_i} at a fixed point in time. Later on, when we add in the temporal aspect to the puzzle, we will need different knowledge agents for a single islander at different points in time, but let us ignore this issue for now.) To formalise this, we define the first-order language {L_1} to be the language generated from {L_0} and the rules of propositional grammar by imposing one additional rule:

If {S} is a sentence in {L_0}, and {K} is one of the knowledge agents {K_1,\ldots,K_m}, then {K(S)} is also a sentence in {L_1}. (Informally, {K(S)} asserts that {K} knows, or at least believes, {S} to be true.)

Thus, for instance,

\displaystyle K_2( A_1 ) \wedge K_1( A_1 \vee A_2 \vee A_3 ) \wedge \neg A_3

is a sentence in {L_1}; in the islander interpretation, this sentence denotes the assertion that {I_2} knows {I_1} to have blue eyes, and {I_1} knows that at least one islander has blue eyes, but {I_3} does not have blue eyes. On the other hand, \displaystyle K_1( K_2 ( A_3 ) )

is not a sentence in {L_1}, because {K_2(A_3)} is not a sentence in {L_0}. (However, we will be able to interpret {K_1(K_2(A_3))} in the second-order epistemic language {L_2} that we will define later.)

We give {L_1} all the rules of syntax that {L_0} presently enjoys. For instance, thanks to modus ponens, we have \displaystyle K_1(A_1) \wedge (K_1(A_1) \implies K_1(A_2)) \vdash_{L_1} K_1(A_2). \ \ \ \ \ (1)

Similarly, if {S, T} are sentences in {L_0} such that {S \vdash_{L_0} T}, then one automatically has {S \vdash_{L_1} T}.

However, we would like to add some additional inference rules to reflect our understanding of what “knowledge” means. One has some choice in deciding what rules to lay down here, but we will only add one rule, which informally reflects the assertion that “all knowledge agents are highly logical”:

First-order epistemic inference rule: If {S_1,\ldots,S_i,T\in L_0} are sentences such that \displaystyle S_1,\ldots,S_i \vdash_{L_0} T

and {K} is a knowledge agent, then \displaystyle K(S_1), \ldots, K(S_i) \vdash_{L_1} K(T).

(We will introduce higher order epistemic inference rules when we turn to higher order epistemic logics.)

Informally speaking, the epistemic inference rule asserts that if {T} can be deduced from {S_1,\ldots,S_i}, and {K} knows {S_1,\ldots S_i} to be true, then {K} must also know {T} to be true. For instance, since modus ponens gives us the inference

\displaystyle A_1, (A_1 \implies A_2) \vdash_{L_0} A_2

we therefore have, by the first-order epistemic inference rule, \displaystyle K_1(A_1), K_1(A_1 \implies A_2) \vdash_{L_1} K_1(A_2)

(note how this is different from (1) – why?).

As another example, of more relevance to the islander puzzle, we have

\displaystyle (A_1 \vee A_2 \vee A_3), \neg A_2, \neg A_3 \vdash_{L_0} A_1

and thus, by the first-order epistemic inference rule, \displaystyle K_1(A_1 \vee A_2 \vee A_3), K_1(\neg A_2), K_1(\neg A_3) \vdash_{L_1} K_1(A_1).

In the islander interpretation, this asserts that if {I_1} knows that one of the three islanders {I_1,I_2,I_3} has blue eyes, but also knows that {I_2} and {I_3} do not have blue eyes, then {I_1} must also know that he himself (or she herself) has blue eyes.

One particular consequence of the first-order epistemic inference rule is that if a sentence {T \in L_0} is a tautology in {L_0} – true in every model of {L_0}, or equivalently (by completeness) deducible from the inference rules of {L_0} – and {K} is a knowledge agent, then {K(T)} is a tautology in {L_1}: {\vdash_{L_0} T} implies {\vdash_{L_1} K(T)}. Thus, for instance, we have {\vdash_{L_1} K_1(A_1 \implies A_1)}, because {A_1 \implies A_1} is a tautology in {L_0} (thus {\vdash_{L_0} A_1 \implies A_1}).

It is important to note, however, that if a statement {T} is not a tautology, but merely true in the “real” world {\hbox{Real}}, this does not imply that {K(T)} is also true in the real world: as we shall see later, {\hbox{Real} \models_{L_0} T} does not imply {\hbox{Real} \models_{L_1} K(T)}. (We will define what {\models_{L_1}} means presently.) Intuitively, this reflects the obvious fact that knowledge agents need not be omniscient; it is possible for a sentence {T} to be true without a given agent {K} being aware of this truth.

In the converse direction, we also allow for the possibility that {K(T)} is true in the real world, without {T} being true in the real world, thus it is conceivable that {\hbox{Real} \models_{L_1} K(T)} is true but {\hbox{Real} \models_{L_0} T} is false. This reflects the fact that a knowledge agent may in fact have incorrect knowledge of the real world. (This turns out not to be an important issue in the islander puzzle, but is of relevance for the unexpected hanging puzzle.)

In a related spirit, we also allow for the possibility that {K(T)} and {K(\neg T)} may both be true in the real world; an agent may conceivably be able to know inconsistent facts. However, from the inference {T, \neg T \vdash_{L_0} S} of ex falso quodlibet and the first-order epistemic inference rule, this would mean that {K(S)} is true in this world for every {S} in {L_0}, thus this knowledge agent believes absolutely every statement to be true. Again, such inconsistencies are not of major relevance to the islander puzzle, but as we shall see, their analysis is important for resolving the unexpected hanging puzzle correctly.

Remark 1 It is perhaps worth re-emphasising the previous points. In some interpretations of knowledge, {K(S)} means that {S} has somehow been “justified” to be true, and in particular {K(S)} should entail {S} in such interpretations. However, we are taking a more general (and abstract) point of view, in which we are agnostic as to whether {K} represents necessary or justified knowledge. In particular, our analysis also applies to “generalised knowledge” operators, such as “belief”. One can of course specialise this general framework to a more specific knowledge concept by adding more axioms, in which case one can obtain sharper conclusions regarding the resolution of various paradoxes, but we will work here primarily in the general setting.

Having discussed the language and syntax of the first-order epistemic logic {L_1}, we now turn to the semantics, in which we describe the possible models {M_1} of {L_1}. As {L_1} is an extension of {L_0}, any model {M_1} of {L_1} must contain as a component a model {M_0} of {L_0}, which describes the truth assignment of each of the atomic propositions {A_i} of {L_0}; but it must also describe the state of knowledge of each of the agents {K_i} in this logic. One can describe this state in two equivalent ways; either as a theory {\{ S \in L_0: M_1 \models_{L_1} K_i(S) \}} (in {L_0}) of all the sentences {S} in {L_0} that {K_i} knows to be true (which, by the first-order epistemic inference rule, is closed under {\vdash_{L_0}} and is thus indeed a theory in {L_0}); or equivalently (by the soundness and completeness of {L_0}), as a set

\displaystyle \{ M_{0,i} \in \hbox{Mod}(L_0): M_{0,i} \models_{L_0} S \hbox{ whenever } M_1 \models_{L_1} K_i(S) \}

of all the possible models of {L_0} in which all the statements that {K_i} knows to be true, are in fact true. We will adopt the latter perspective; thus a model {M_1} of {L_1} consists of a tuple \displaystyle M_1 = (M_0, {\mathcal M}_0^{(1)}, \ldots, {\mathcal M}_0^{(m)})

where {M_0 \in \hbox{Mod}(L_0)} is a model of {L_0}, and for each {i=1,\ldots,m}, {{\mathcal M}_0^{(i)} \subset \hbox{Mod}(L_0)} is a set of models of {L_0}. To interpret sentences {S \in L_1} in {M_1}, we then declare {M_1 \models A_i} iff {M_0 \models A_i} for each atomic sentence {A_i}, and declare {M_1 \models K_i(S)} iff {S} is true in every model in {{\mathcal M}_0^{(i)}}, for each {i=1,\ldots,m} and {S \in L_0}. All other sentences in {L_1} are then interpreted by applying the usual truth tables.

As an example of such a model, consider a world with three islanders {I_1, I_2, I_3}, each of whom has blue eyes, can see that the other two have blue eyes, but is unaware of his or her own eye colour. In this model, {M_0} assigns a true value to each of {A_1, A_2, A_3}. As for {{\mathcal M}_0^{(1)}}, which describes the knowledge state of {I_1}, this set consists of two possible {L_0}-worlds. One is the “true” {L_0}-world {M_0}, in which {A_1,A_2,A_3} are all true; but there is also an additional hypothetical {L_0}-world {M_{0,1}}, in which {A_2, A_3} are true but {A_1} is false. With {I_1}‘s current state of knowledge, neither of these two possibilities can be ruled out. Similarly, {{\mathcal M}_0^{(2)}} and {{\mathcal M}_0^{(3)}} will also consist of two {L_0}-worlds, one of which is the “true” {L_0}-world {M_0}, and the other is not.
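The following Python sketch (my own illustration; the functions cloud and knows are ad hoc names) encodes this three-islander {L_1}-model directly: the model is the real {L_0}-world together with, for each agent, the set of {L_0}-worlds that agent considers possible, and {K_i(S)} is declared true precisely when {S} holds in every world of {K_i}’s cloud.

```python
from itertools import product

def models(atoms):
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

# The "real" L_0-world: all three islanders have blue eyes.
real = {"A1": True, "A2": True, "A3": True}

def cloud(i):
    """The L_0-worlds that agent K_i considers possible: K_i sees the other
    islanders' eyes but not his or her own, so the cloud consists of the worlds
    agreeing with `real` on every atom except possibly A_i."""
    return [m for m in models(["A1", "A2", "A3"])
            if all(m[a] == real[a] for a in real if a != f"A{i}")]

def knows(i, sentence):
    """M_1 models K_i(S): S holds in every world of K_i's cloud."""
    return all(sentence(m) for m in cloud(i))

print(knows(1, lambda m: m["A2"]))                        # True:  K_1 knows A_2
print(knows(1, lambda m: m["A1"]))                        # False: K_1 does not know A_1
print(knows(1, lambda m: m["A1"] or m["A2"] or m["A3"]))  # True:  K_1 knows at least one has blue eyes
```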

In this particular case, the true {L_0}-world {M_0} is included as a possible world in each of the knowledge agents’ sets of possible worlds {{\mathcal M}_0^{(i)}}, but in situations in which the agents’ knowledge is incorrect or inconsistent, it can be possible for {M_0} to not be an element of one or more of the {{\mathcal M}^{(i)}_0}.

Remark 2 One can view an {L_1} model {M_1} as consisting of the “real world” – the {L_0}-model {M_0} – together with {m} clouds {{\mathcal M}_0^{(i)}}, {i=1,\ldots,m} of “hypothetical worlds”, one for each knowledge agent {K_i}. If one chooses, one can “enter the head” of any one of these knowledge agents {K_i} to see what he or she is thinking. One can then select any one of the {L_0}-worlds {M_{0,i}} in {{\mathcal M}_0^{(i)}} as a “possible world” in {K_i}‘s worldview, and explore that world further. Later on we will iterate this process, giving a tree-like structure to the higher order epistemic models.

Let {\hbox{Mod}(L_1)} be the set of all models of {L_1}. This is quite a large set; if there are {n} atomic statements {A_1,\ldots,A_n} and {m} knowledge agents {K_1,\ldots,K_m}, then there are {2^n} possibilities for the {L_0}-world {M_0}, and each knowledge agent {K_i} has its own independent set {{\mathcal M}_0^{(i)}} of possible worlds, of which there are {2^{2^n}} different possibilities, leading to {2^{n + m 2^n}} distinct models {M_1} for {L_1} in all. For instance, with three islanders wondering about eye colours, this leads to {2^{27}} possibilities (although, once everyone learns each other’s eye colour, the number of possible models goes down quite significantly).

It can be shown (but is somewhat tedious to do so) that the syntax and semantics of the first-order epistemic logic {L_1} is still sound and complete, basically by mimicking (and using) the proof of the soundness and completeness of {L_0}; we sketch a proof of this below when we discuss higher order logics.

— 3. Higher-order epistemic logic —

We can iterate the above procedure and construct a language, syntax, and semantics for {k^{th}} order epistemic logic {L_k} generated by some atomic propositions {A_1,\ldots,A_n} and knowledge agents {K_1,\ldots,K_m}, recursively in terms of the preceding epistemic logic {L_{k-1}}. More precisely, let {k \geq 1} be a natural number, and suppose that the logic {L_{k-1}} has already been defined. We then define the language of {L_k} as the extension of {L_{k-1}} generated by the laws of propositional grammar and the following rule:

If {S} is a sentence in {L_{k-1}}, and {K} is one of the knowledge agents {K_1,\ldots,K_m}, then {K(S)} is also a sentence in {L_k}.

Thus, for instance, in the running example of three propositions {A_1,A_2,A_3} and three knowledge agents {K_1,K_2,K_3},

\displaystyle K_1(A_3) \wedge K_1(K_2(A_3))

is a sentence in {L_2} (and hence in {L_3}, {L_4}, etc.) but not in {L_1}.

As for the syntax, we adopt all the inference rules of ordinary propositional logic, together with one new rule:

{k^{th}}-order epistemic inference rule: If {S_1,\ldots,S_i,T \in L_{k-1}} are sentences such that {S_1,\ldots,S_i \vdash_{L_{k-1}} T}, and {K} is a knowledge agent, then {K(S_1), \ldots, K(S_i) \vdash_{L_k} K(T)}.

Thus, for instance, starting with

\displaystyle A_1, (A_1 \implies A_2) \vdash_{L_0} A_2

one has \displaystyle K_1(A_1), K_1(A_1 \implies A_2) \vdash_{L_1} K_1(A_2),

and then \displaystyle K_2(K_1(A_1)), K_2(K_1(A_1 \implies A_2)) \vdash_{L_2} K_2(K_1(A_2)),

and so forth. Informally, this rule asserts that all agents are highly logical, that they know that all agents are highly logical, and so forth. A typical deduction from these inference rules, which is again of relevance to the islander puzzle, is \displaystyle K_1( K_2( A_1 \vee A_2 \vee A_3 ) ), K_1( K_2( \neg A_3 ) ) \vdash_{L_2}

\displaystyle K_1( (\neg K_2(A_2)) \implies (\neg K_2( \neg A_1 )) ).

Remark 3 This is a very minimal epistemic syntax, and is weaker than some epistemic logics considered in the literature. For instance, we do not have any version of the positive introspection rule \displaystyle K(S) \vdash K(K(S));

thus we allow the possibility that an agent knows {S} “subconsciously”, in that the agent knows {S} but does not know that he or she knows {S}. Similarly, we do not have any version of the negative introspection rule \displaystyle \neg K(S) \vdash K(\neg K(S)),

so we allow the possibility that an agent is “unaware of his or her own ignorance”. One can of course add these additional rules ex post facto and see how this strengthens the syntax and limits the semantics, but we will not need to do so here.

There is also no reason to expect the knowledge operators to commute:

\displaystyle K(K'(S)) \not \vdash K'(K(S)).

Now we turn to the semantics. A model {M_k} of the language {L_k} consists of a {L_0}-model {M_0 \in \hbox{Mod}(L_0)}, together with sets of possible {L_{k-1}}-models {{\mathcal M}_{k-1}^{(1)},\ldots,{\mathcal M}_{k-1}^{(m)} \subset \hbox{Mod}(L_{k-1})} associated to their respective knowledge agents {K_1,\ldots,K_m}. To describe how {M_k} models sentences, we declare {M_k \models_{L_k} A_i} iff {M_0 \models_{L_0} A_i}, and for any sentence {S} in {L_{k-1}} and {i = 1,\ldots,m}, we declare {M_k \models_{L_k} K_i(S)} iff one has {M_{k-1} \models_{L_{k-1}} S} for every {M_{k-1} \in {\mathcal M}_{k-1}^{(i)}}.

Example 1 We consider an islander model with {n} atomic propositions {A_1,\ldots,A_n} (with each {A_i} representing the claim that {I_i} has blue eyes) and {n} knowledge agents {K_1,\ldots,K_n} (with {K_i} representing the knowledge state of {I_i} at a fixed point in time). There are {2^n} {L_0}-models {M_0}, determined by the truth values they assign to the {n} atomic propositions {A_1,\ldots,A_n}. For each {k \geq 0}, we can then recursively associate a {L_k}-model {M_k(M_0)} to each {L_0}-model {M_0}, by setting {M_0(M_0) := M_0}, and then for {k \geq 1}, setting {M_k(M_0)} to be the {L_k}-model with {L_0}-model {M_0}, and with {{\mathcal M}_{k-1}^{(i)}} consisting of the pair {\{ M_{k-1}(M_{0,i}^-), M_{k-1}(M_{0,i}^+) \}}, where {M_{0,i}^-} (resp. {M_{0,i}^+}) is the {L_0}-model which is identical to {M_0} except that the truth value of {A_i} is set to false (resp. true). Informally, {M_k(M_0)} models the {k^{th}}-order epistemology of the {L_0}-world {M_0}, in which each islander sees each other’s eye colour (and knows that each other islander can see all the other islanders’ eye colours, and so forth for {k} iterations), but is unsure as to his or her own eye colour (which is why the set {{\mathcal M}_{k-1}^{(i)}} of {K_i}‘s possible {L_{k-1}}-worlds branches into two possibilities).

As one recursively explores the clouds of hypothetical worlds in these models, one can move further and further away from the “real” world. Consider for instance the situation when {n=3} and {M_0 \models A_1,A_2,A_3} (thus in the “real” world, all three islanders have blue eyes), and {k=3}. From the perspective of {K_1}, it is possible that one is in the world {M_2(M_{0,1}^-)}, in which {I_1} does not have blue eyes: {M_{0,1}^- \models \neg A_1, A_2, A_3}. In that world, we can then pass to the perspective of {K_2}, and then one could be in the world {M_1( M_{0,1,2}^{--} )}, in which neither {I_1} nor {I_2} have blue eyes: {M_{0,1,2}^{--} \models \neg A_1, \neg A_2, A_3}. Finally, inside this doubly nested hypothetical world, one can consider the perspective of {K_3}, in which one could be in the world {M_{0,1,2,3}^{---}}, in which none of {I_1,I_2,I_3} have blue eyes: {M_{0,1,2,3}^{---} \models \neg A_1, \neg A_2, \neg A_3}. This is the total opposite of the “real” model {M_0}, but cannot be ruled out at this triply nested level. In particular, we have \displaystyle M_3(M_0) \models \neg K_1(K_2(K_3(A_1 \vee A_2 \vee A_3)))

despite the fact that \displaystyle M_3(M_0) \models A_1 \vee A_2 \vee A_3

and \displaystyle M_3(M_0) \models K_i(A_1 \vee A_2 \vee A_3)

and \displaystyle M_3(M_0) \models K_i(K_j(A_1 \vee A_2 \vee A_3))

for all {i,j \in \{1,2,3\}}. (In particular, the statement {A_1 \vee A_2 \vee A_3}, which asserts “at least one islander has blue eyes”, is not common knowledge in {M_3(M_0)}.)
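The recursion in Example 1 can be traced mechanically without building the (large) models explicitly. The sketch below (illustrative only; holds_nested is an ad hoc name) evaluates a nested knowledge chain about the statement “at least one islander has blue eyes” in {M_k(M_0)}, by recursively branching each agent’s possible worlds over that agent’s own eye colour; with all three islanders blue-eyed, the chain survives two levels of nesting but fails at the third, as claimed.

```python
def holds_nested(world, chain):
    """Does M_k(world) model K_{i_1}( ... K_{i_j}( "at least one blue" ) ... ),
    in the construction of Example 1, for k at least the chain length?
    `world` maps islander index to a bool (blue eyes or not); `chain` lists the
    agents i_1, ..., i_j from outermost to innermost."""
    if not chain:
        return any(world.values())          # B_{>=1}: at least one blue-eyed islander
    i, rest = chain[0], chain[1:]
    # Agent i considers two possible worlds: one with A_i false, one with A_i true.
    return all(holds_nested({**world, i: value}, rest) for value in (False, True))

real = {1: True, 2: True, 3: True}          # all three islanders have blue eyes

print(holds_nested(real, []))               # True:  B_{>=1} holds
print(holds_nested(real, [1]))              # True:  K_1(B_{>=1})
print(holds_nested(real, [1, 2]))           # True:  K_1(K_2(B_{>=1}))
print(holds_nested(real, [1, 2, 3]))        # False: K_1(K_2(K_3(B_{>=1}))) fails
```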

We have the basic soundness and completeness properties:

Proposition 2 For each {k \geq 0}, {L_k} is both sound and complete.

Proof: (Sketch) This is done by induction on {k}. For {k=0}, this is just the soundness and completeness of propositional logic. Now suppose inductively that {k \geq 1} and the claim has already been proven for {k-1}. Soundness can be verified as in the propositional logic case (with the validity of the {k^{th}} epistemic inference rule being justified by induction). For completeness, one again uses the trick of passing to a maximal {L_k}-theory {{\mathcal T}} that contains one set {S} of sentences in {L_k}, but not another sentence {T}. This maximal {L_k}-theory {{\mathcal T}} uniquely determines an {L_0}-model {M_0} by inspecting whether each {A_i} or its negation lies in the theory, and also determines {L_{k-1}}-theories {\{ S \in L_{k-1}: K_i(S) \in {\mathcal T} \}} for each {i=1,\ldots,m}. By induction hypothesis, each of these theories can be identified with a collection {{\mathcal M}_{k-1}^{(i)}} of {L_{k-1}}-models, thus creating an {L_k}-model {M_k} that obeys {S} but not {T}, giving (the contrapositive of) completeness. \Box

— 4. Arbitrary order epistemic logic —

An easy induction shows that the {k^{th}} order logic {L_k} extends the previous logic {L_{k-1}}, in the sense that every sentence in {L_{k-1}} is a sentence in {L_k}, every deduction in {L_{k-1}} is also a deduction in {L_k}, and every model of {L_k} projects down (by “forgetting” some aspects of the model) to a model of {L_{k-1}}. We can then form a limiting logic {L_\infty}, whose language is the union of all the {L_k} (thus, {S} is a sentence in {L_\infty} iff {S} lies in {L_k} for some {k}), whose deductive implications are the union of all the {L_k} deductive implications (thus, {S \vdash_{L_\infty} T} if we have {(S \cap L_k) \vdash_{L_k} T} for some {k}), and whose models are the inverse limits of the {L_k} models (thus, a model {M_\infty} of {L_\infty} is an infinite sequence of models {M_k} of {L_k} for each {k}, such that each {M_k} projects down to {M_{k-1}} for {k \geq 1}). It is not difficult to see that the soundness and completeness of each of the {L_k} implies the soundness and completeness of the limit {L_\infty} (assuming the axiom of choice, of course, in our metamathematics).

The logic {L_\infty} now allows one to talk about arbitrarily deeply nested strings of knowledge: if {S} is a sentence in {L_\infty}, and {K} is a knowledge agent, then {K(S)} is also a sentence in {L_\infty}. This allows for the following definition:

Definition 3 (Common knowledge) If {S} is a sentence in {L_\infty}, then {C(S)} is the set of all sentences of the form \displaystyle K_{i_1}( K_{i_2} ( \ldots ( K_{i_k}( S) ) \ldots ) )

where {k \geq 0} and {K_{i_1},\ldots,K_{i_k}} are knowledge agents (possibly with repetition).

Thus, for instance, using the epistemic inference rules, every tautology in {L_\infty} is commonly known as such: if {\vdash_{L_\infty} S}, then {\vdash_{L_\infty} C(S)}.

Let us now work in the islander model in which there are {n} atomic propositions {A_1,\ldots,A_n} and {n} knowledge agents {K_1,\ldots,K_n}. To model the statement that “it is commonly known that each islander knows each other islander’s eye colour”, one can use the sets of sentences \displaystyle C( A_i \implies K_j(A_i) ) \ \ \ \ \ (2)

and \displaystyle C( \neg A_i \implies K_j(\neg A_i) ) \ \ \ \ \ (3)

for all distinct {i,j \in \{1,\ldots,n\}}.

For any {0 \leq l \leq n}, let {B_{\geq l}} denote the sentence that there are at least {l} blue-eyed islanders; this can be encoded as a suitable finite combination of the {A_1,\ldots,A_n}. For instance, {B_{\geq 0}} can be expressed by any tautology, {B_{\geq 1}} can be expressed by {A_1 \vee \ldots \vee A_n}, {B_{\geq n}} can be expressed by {A_1 \wedge \ldots \wedge A_n}, and intermediate {B_{\geq l}} can be expressed by more complicated formulae. Let {B_l} denote the statement that there are exactly {l} blue-eyed islanders; for instance, if {n=3}, then {B_1} can be expressed as

\displaystyle (A_1 \wedge \neg A_2 \wedge \neg A_3) \vee (\neg A_1 \wedge A_2 \wedge \neg A_3) \vee (\neg A_1 \wedge \neg A_2 \wedge A_3).
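For concreteness, here is one mechanical way to write down such encodings (a sketch of my own; the output is just a string in an informal notation, with & for conjunction, | for disjunction and ~ for negation): {B_{\geq l}} as a disjunction over all {l}-element subsets of islanders, and {B_l} as a disjunction over exact blue-eyed patterns.

```python
from itertools import combinations

def b_at_least(l, n):
    """A propositional formula (as a string) expressing "at least l of A_1..A_n":
    a disjunction, over all l-element subsets, of the conjunction of those atoms."""
    if l == 0:
        return "T"                                   # any tautology will do
    clauses = [" & ".join(f"A{i}" for i in subset)
               for subset in combinations(range(1, n + 1), l)]
    return " | ".join(f"({c})" for c in clauses)

def b_exactly(l, n):
    """A formula expressing "exactly l of A_1..A_n have blue eyes"."""
    clauses = []
    for subset in combinations(range(1, n + 1), l):
        pos = [f"A{i}" for i in subset]
        neg = [f"~A{i}" for i in range(1, n + 1) if i not in subset]
        clauses.append(" & ".join(pos + neg))
    return " | ".join(f"({c})" for c in clauses)

print(b_at_least(1, 3))   # (A1) | (A2) | (A3)
print(b_exactly(1, 3))    # (A1 & ~A2 & ~A3) | (~A1 & A2 & ~A3) | (~A1 & ~A2 & A3)
```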

The following theorem asserts, roughly speaking, that if there are {m} blue-eyed islanders, and it is commonly known that there are at least {l} blue-eyed islanders, then all blue-eyed islanders can deduce their own eye colour if {m \leq l}, but not otherwise.

Theorem 4 Let {{\mathcal T}} be the set of sentences consisting of the union of (2) and (3) for all distinct {i,j \in \{1,\ldots,n\}}. Let {0 \leq m, l \leq n}. Let {S} denote the sentence \displaystyle S = \bigwedge_{i=1}^n (A_i \implies K_i(A_i))

(informally, {S} asserts that all blue-eyed islanders know their own eye colour).

If {m \leq l}, then \displaystyle {\mathcal T}, B_m, C(B_{\geq l}) \vdash_{L_\infty} S.

If {m > l}, then \displaystyle {\mathcal T}, B_m, C(B_{\geq l}) \not \vdash_{L_\infty} S.

Proof: The first part of the theorem can be established informally as follows: if {B_m} holds, then each blue-eyed islander sees {m-1} other blue-eyed islanders, but also knows that there are at least {l} blue-eyed islanders. If {m \leq l}, this forces each blue-eyed islander to conclude that his or her own eyes are blue (and in fact, if {m < l}, the hypotheses {B_m} and {C(B_{\geq l})} are already inconsistent); this informal argument can be formalised in {L_\infty} without difficulty. To establish the second part, it suffices to exhibit an {L_\infty}-model {M_\infty} which satisfies {{\mathcal T}}, {B_m}, and {C(B_{\geq l})} but not {S}. By definition of an {L_\infty}-model, it thus suffices to construct, for all sufficiently large natural numbers {k}, an {L_k}-model {M_k} which satisfies {{\mathcal T} \cap L_k}, {B_m}, and {C(B_{\geq l}) \cap L_k}, but not {S}, with these models consistent with each other in the sense that each {M_k} is the restriction of {M_{k+1}} to {L_k}.

We can do this by a modification of the construction in Example 1. For any {L_0}-model {M_0}, we can recursively define an {L_k}-model {M_{k, \geq l}(M_0)} for any {k \geq 0} by setting {M_{0,\geq l}(M_0) := M_0}, and then for each {k \geq 1}, setting {M_{k,\geq l}(M_0)} to be the {L_k}-model with {L_0}-model {M_0}, and with possible worlds {{\mathcal M}_{k-1}^{(i)}} given by

\displaystyle {\mathcal M}_{k-1}^{(i)} := \{ M_{k-1, \geq l}( M_{0,i} ): M_{0,i} \in \{M_{0,i}^+, M_{0,i}^-\}; M_{0,i} \models_{L_0} B_{\geq l} \};

this is the same construction as in Example 1, except that at all levels of the recursive construction, we restrict attention to worlds that obey {B_{\geq l}}. A routine induction shows that the {M_{k,\geq l}(M_0)} determine a limit {M_{\infty,\geq l}(M_0)}, which is an {L_\infty} model that obeys {{\mathcal T}} and {C(B_{\geq l})}. If {M_0 \models_{L_0} B_m}, then clearly {M_{\infty, \geq l}(M_0) \models_{L_\infty} B_m} as well. But if {m > l}, then we see that {M_{\infty, \geq l}(M_0) \not \models_{L_\infty} S}, because for any index {i} with {M_0 \models_{L_0} A_i}, the world {M_{0,i}^-} still obeys {B_{\geq l}}, so that for each {k \geq 1} the set {{\mathcal M}_{k-1}^{(i)}} contains worlds in which {A_i} is false, and hence {M_{k,\geq l}(M_0) \not \models_{L_k} K_i(A_i)} for any {k \geq 1}. \Box
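Since {A_i} is an atomic sentence, whether {K_i(A_i)} holds in the restricted model {M_{k,\geq l}(M_0)} only depends on the first level of the recursion. The toy check below (my own illustration; knows_own_eyes is an ad hoc name) confirms the dichotomy: a blue-eyed islander can rule out the world in which his or her own eyes are not blue exactly when removing that islander would leave fewer than {l} blue-eyed islanders.

```python
def knows_own_eyes(world, i, l):
    """In the restricted construction M_{k,>=l}(world) of the proof sketch:
    does islander i (assumed blue-eyed in `world`) know A_i, i.e. does A_i hold
    in every possible world of i's cloud? A world with A_i flipped is only
    admitted if it still contains at least l blue-eyed islanders."""
    candidates = []
    for value in (False, True):
        candidate = {**world, i: value}
        if sum(candidate.values()) >= l:          # restrict to worlds obeying B_{>=l}
            candidates.append(candidate)
    return all(candidate[i] for candidate in candidates)

world = {1: True, 2: True, 3: True, 4: False}     # m = 3 blue-eyed islanders

print(knows_own_eyes(world, 1, l=3))   # True:  with C(B_{>=3}) and m = 3, I_1 deduces A_1
print(knows_own_eyes(world, 1, l=2))   # False: with only C(B_{>=2}), the A_1-false world survives
```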

— 5. Temporal epistemic logic —

The epistemic logic discussed above is sufficiently powerful to model the knowledge environment of the islanders in the blue-eyed islander puzzle at a single instant in time, but in order to fully model the islander puzzle, we must now incorporate the role of time. To avoid confusion, I feel that this is best accomplished by adopting a “spacetime” perspective, in which time is treated as another coordinate rather than having any particularly privileged role in the theory, and the model incorporates all time-slices of the system at once. In particular, if we allow the time parameter {t} to vary along some set {T} of times, then each actor {I_i} in the model should now generate not just a single knowledge agent {K_i}, but instead a family {(K_{i,t})_{t \in T}} of knowledge agents, one for each time {t \in T}. Informally, {K_{i,t}(S)} should then model the assertion that “{I_i} knows {S} at time {t}“. This of course leads to many more knowledge agents than before; if for instance one considers an islander puzzle with {n} islanders over {M} distinct points in time, this would lead to {nM} distinct knowledge agents {K_{i,t}}. And if the set of times {T} is countably or uncountably infinite, then the number of knowledge agents would similarly be countably or uncountably infinite. Nevertheless, there is no difficulty extending the previous epistemic logics {L_k} and {L_\infty} to cover this situation. In particular we still have a complete and sound logical framework to work in.

Note that if we do so, we allow for the ability to nest knowledge operators at different times in the past or future. For instance, if we have three times {t_1 < t_2 < t_3}, then one can form sentences such as \displaystyle K_{1,t_2}( K_{2,t_1}( S ) ),

which informally asserts that at time {t_2}, {I_1} knows that {I_2} already knew {S} to be true by time {t_1}, or \displaystyle K_{1,t_2}( K_{2,t_3}( S ) ),

which informally asserts that at time {t_2}, {I_1} knows that {I_2} will know {S} to be true by time {t_3}. The ability to know certain statements about the future is not too relevant for the blue-eyed islander puzzle, but is a crucial point in the unexpected hanging paradox.

Of course, with so many knowledge agents present, the models become more complicated; a model {M_k} of {L_k} now must contain inside it clouds {{\mathcal M}_{k-1}^{(i,t)}} of possible worlds for each actor {I_i} and each time {t \in T}.

One reasonable axiom to add to a temporal epistemological system is the ability of agents to remember what they know. More precisely, we can impose the “memory axiom” \displaystyle C( K_{i,t}(S) \implies K_{i,t'}(S) ) \ \ \ \ \ (4)

for any {S \in L_\infty}, any {i=1,\ldots,m}, and any times {t \leq t'} in {T}. It is also convenient to have a notion of common knowledge at a single time {t}: given a sentence {S \in L_\infty}, we let {C_t(S)} denote the set of sentences of the form

\displaystyle K_{i_1,t}( K_{i_2,t} ( \ldots ( K_{i_k,t}(S) ) \ldots ) )

where {k \geq 0} and {i_1,\ldots,i_k \in \{1,\ldots,n\}}. This is a subset of {C(S)}, which is the set of all sentences of the form \displaystyle K_{i_1,t_1}( K_{i_2,t_2} ( \ldots ( K_{i_k,t_k}(S) ) \ldots ) )

where {t_1,\ldots,t_k \in T} can vary arbitrarily in {T}.
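To illustrate the difference between {C(S)} and {C_t(S)}, the following sketch (illustrative only; both sets are infinite, so we truncate at a fixed nesting depth and render the sentences as strings) generates the nested knowledge operators allowed in each case.

```python
from itertools import product

def common_knowledge_strings(agents, times, depth, inner="S"):
    """All sentences K_{i_1,t_1}( ... K_{i_j,t_j}(S) ... ) with nesting depth at
    most `depth`, rendered as strings. Restricting `times` to a single time t
    gives the fixed-time variant C_t(S) (truncated to that depth)."""
    sentences = [inner]                               # depth 0: S itself
    for j in range(1, depth + 1):
        for operators in product([(i, t) for i in agents for t in times], repeat=j):
            s = inner
            for i, t in reversed(operators):          # innermost operator applied first
                s = f"K_{{{i},{t}}}({s})"
            sentences.append(s)
    return sentences

# C(S) truncated to depth 2, for two islanders and two times: 1 + 4 + 16 sentences.
print(len(common_knowledge_strings(agents=[1, 2], times=[0, 1], depth=2)))   # 21
# C_0(S) truncated to depth 2: only time 0 may appear inside the knowledge operators.
print(common_knowledge_strings(agents=[1, 2], times=[0], depth=2)[:3])
# ['S', 'K_{1,0}(S)', 'K_{2,0}(S)']
```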

— 6. The blue-eyed islander puzzle —

Now we can model the blue-eyed islander puzzle. To simplify things a bit, we will work with a discrete set of times {T = {\bf Z}} indexed by the integers, with {0} being the day in which the foreigner speaks, and any other time {t} being the time {t} days after (or before, if {t} is negative) the foreigner speaks. (One can also work with a continuous time with only minor changes.) Note the presence of negative time; this is to help resolve the question (which often comes up in discussion of this puzzle) as to whether the islanders would already have committed suicide even before the foreigner speaks.

Also, the way the problem is set up, we have the somewhat notationally annoying difficulty that once an islander commits suicide, it becomes meaningless to ask whether that islander continues to know anything or not. To resolve this problem, we will take the liberty of modifying the problem by replacing “suicide” with a non-lethal public ritual. (This means (thanks to (4)) that once an islander learns his or her own eye colour, he or she will be condemned to repeating this ritual “suicide” every day from that point.) It is possible to create a logic which tracks when different agents are alive or dead and to thus model the concept of suicide, but this is something of a distraction from the key point of the puzzle, so we will simply redefine away this issue.

For similar reasons, we will not concern ourselves with eye colours other than blue, and only consider suicides stemming from blue eyes, rather than from any non-blue colour. (It is intuitively obvious, and can eventually be proven, that the foreigner’s statement about the existence of blue-eyed islanders is insufficient information to allow any islander to distinguish between, say, green eyes and brown eyes, and so this statement cannot trigger the suicide of any non-blue-eyed person.)

As in previous sections, our logic will have the atomic propositions {A_1,\ldots,A_n}, with each {A_i} expressing the statement that {I_i} has blue eyes, as well as knowledge agents {K_{i,t}} for each {i=1,\ldots,n} and {t \in {\bf Z}}. However, we will also need further atomic propositions {S_{i,t}} for {i = 1,\ldots,n} and {t \in {\bf Z}}, which denote the proposition that {I_i} commits suicide (or a ritual equivalent) at time {t}. Thus we now have a countably infinite number of atomic propositions and a countably infinite number of knowledge agents, but there is little difficulty extending the logics {L_k} and {L_\infty} to cover this setting.

We can now set up the various axioms for the puzzle. The “highly logical” axiom has already been subsumed in the epistemological inference rule. We also impose the memory axiom (4). Now we formalise the other assumptions of the puzzle:

Let {{\mathcal T}} denote the union of all the axioms (4), (5), (6), (7), (8), (9), (10). The “solution” to the islander puzzle can then be summarised as follows:

Theorem 5 Let {1 \leq m \leq n}.

(At least one blue-eyed islander commits suicide by day {m}) \displaystyle {\mathcal T}, B_m \vdash_{L_\infty} \bigvee_{i=1}^n \bigvee_{t=1}^m (A_i \wedge S_{i,t}).

(Nobody needs to commit suicide before day {m}) For any {1 \leq i \leq n} and {t < m}, \displaystyle {\mathcal T}, B_m \not \vdash_{L_\infty} S_{i,t}.

The first conclusion is weaker than the “Solution 2” of the puzzle, which would assert in fact that all {m} blue-eyed islanders will commit suicide on day {m}. While this is indeed the “default” outcome of the hypotheses {{\mathcal T}, B_m}, it turns out that this is not the only possible outcome; for instance, if one blue-eyed person happens to commit suicide on day {0} or day {1} (perhaps for a reason unrelated to learning his or her own eye colour), then it turns out that this “cancels” the effect of the foreigner’s announcement, and prevents further suicides. (So, strictly speaking, Solution 2 is not quite accurate, though it is much closer to the truth than Solution 1 is.)

In fact there is a strengthening of the first conclusion: given the hypotheses {{\mathcal T}, B_m}, there must exist a time {1 \leq t \leq m} and {t} distinct islanders {I_{i_1},\ldots,I_{i_t}} such that {A_{i_j} \wedge S_{i_j,t}} holds for all {j=1,\ldots,t}.

Note that the second conclusion does not prohibit the existence of some models of {{\mathcal T}, B_m} in which suicides occur before day {m} (consider for instance a situation in which a second foreigner made a similar announcement a few days before the first one, causing the chain of events to start at an earlier point and leading to earlier suicides).

Proof: (Sketch) To illustrate the first part of the theorem, we focus on the simple case {m=n=2}; the general case is similar but requires more notation (and an inductive argument). It suffices to establish that

\displaystyle {\mathcal T}, B_2, \neg S_{1,1}, \neg S_{2,1} \vdash_{L_\infty} S_{1,2} \wedge S_{2,2}

(i.e. if nobody suicides by day {1}, then both islanders will suicide on day {2}.)

Assume {{\mathcal T}, B_2, \neg S_{1,1}, \neg S_{2,1}}. From (10) we have

\displaystyle K_{1,0}( K_{2,0}( A_1 \vee A_2 ) )

and hence by (4) \displaystyle K_{1,1}( K_{2,0}( A_1 \vee A_2 ) ).

By (6) we also have \displaystyle K_{1,1}( \neg A_1 \implies K_{2,0}( \neg A_1 ) )

whereas from the epistemic inference axioms we have \displaystyle K_{1,1}( (K_{2,0}( A_1 \vee A_2 ) \wedge K_{2,0}( \neg A_1 )) \implies K_{2,0}(A_2) ).

From the epistemic inference axioms again, we conclude that \displaystyle K_{1,1}( \neg A_1 \implies K_{2,0}(A_2) )

and hence by (7) (and epistemic inference) \displaystyle K_{1,1}( \neg A_1 \implies S_{2,1} ).

On the other hand, from {\neg S_{2,1}} and (9) we have \displaystyle K_{1,1}( \neg S_{2,1} )

and hence by epistemic inference \displaystyle K_{1,1}( A_1 )

and thus by (7) \displaystyle S_{1,2}.

A similar argument gives {S_{2,2}}, and the claim follows.

To prove the second part, one has to construct, for each {k}, an {L_k}-model in which {{\mathcal T}, B_m} is true and {S_{i,t}} is false for any {1 \leq i \leq n} and {t < m}. We first build models of worlds in which the foreigner never speaks. Given an {L_0}-model {M_0}, we recursively define a {L_k}-model {M_k(M_0)} for {k=0,1,2,\ldots} as follows. Firstly, {M_0(M_0) := M_0}. Next, if {k \geq 1} and {M_{k-1}()} has already been defined, we define {M_k(M_0)} to be the {L_k}-model with {L_0}-model {M_0}, and for any {i=1,\ldots,n} and {t \in {\bf Z}}, setting {{\mathcal M}_{k-1}^{(i,t)}(M_0)} to be the set of all {L_{k-1}}-models of the form {M_{k-1}(M'_0)}, where {M'_0} is an {L_0}-model obeying the following properties:

({I_i} sees other islanders’ eyes) For each {j \neq i}, {M'_0 \models_{L_0} A_j} if and only if {M_0 \models_{L_0} A_j}.

({I_i} remembers suicides) For each {j = 1,\ldots,n} and each {t' \leq t}, {M'_0 \models_{L_0} S_{j,t'}} if and only if {M_0 \models_{L_0} S_{j,t'}}.

Now we model worlds in which there is a foreigner announcement. Define an admissible {L_0} model to be an {L_0}-model {M_0} such that there exist {1 \leq t \leq m} for which the following hold:

We call {m} the blue-eyed count of {M_0}.

(Such models, incidentally, can already be used to show that no suicides necessarily occur in the absence of the foreigner’s announcement, because the limit {M_\infty(M_0)} of such models always obeys all the axioms of {{\mathcal T}} except for (10).)

Given an admissible {L_0}-model {M_0} of some blue-eyed count {m}, we recursively define an {L_k} model {\tilde M_k(M_0)} for {k=0,1,2,\ldots} by setting {\tilde M_0(M_0) := M_0}, then if {k \geq 1} and {\tilde M_{k-1}()} has already been defined, we define {\tilde M_k(M_0)} to be the {L_k}-model with {L_0}-model {M_0}, and with {{\mathcal M}_{k-1}^{(i,t)}(M_0)} for {i=1,\ldots,n} and {t \in {\bf Z}} defined by the following rules:

Case 1. If {t < 0} (i.e. before the foreigner’s announcement), we set {{\mathcal M}_{k-1}^{(i,t)}(M_0)} to be the set of all {L_{k-1}}-models of the form {M_{k-1}(M'_0)}, where {M'_0} obeys the two properties “{I_i} sees other islanders’ eyes” and “{I_i} remembers suicides” from the preceding construction. ({M'_0} does not need to be admissible in this case.)

Case 2. If {t=m-1}, {M_0 \models A_i}, and there does not exist {1 \leq t' \leq t} and distinct {i_1,\ldots,i_{t'}\in \{1,\ldots,n\}} such that {M_0 \models_{L_0} A_{i_j} \wedge S_{i_j,t'}} for all {j=1,\ldots,t'}, then we set {{\mathcal M}_{k-1}^{(i,t)}(M_0)} to be the set of all {L_{k-1}}-models of the form {\tilde M_{k-1}(M'_0)}, where {M'_0} is admissible, obeys the two properties “{I_i} sees other islanders’ eyes” and “{I_i} remembers suicides” from the preceding construction, and also obeys the additional property {M'_0 \models A_i}. (Informally, this is the case in which {I_i} “must learn” {A_i}.)

Case 3. In all other cases, we set {{\mathcal M}_{k-1}^{(i,t)}(M_0)} to be the set of all {L_{k-1}}-models of the form {\tilde M_{k-1}(M'_0)}, where {M'_0} is admissible and obeys the two properties “{I_i} sees other islanders’ eyes” and “{I_i} remembers suicides” from the preceding construction.

We let {\tilde M_\infty(M_0)} be the limit of the {\tilde M_k(M_0)} (which can easily be verified to exist by induction). A quite tedious verification reveals that for any admissible {L_0}-model {M_0} of blue-eyed count {m}, the limit {\tilde M_\infty(M_0)} obeys both {{\mathcal T}} and {B_m}; but one can choose {M_0} to not admit any suicides before time {m}, which will give the second claim of the theorem. \Box
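To see the theorem “in action” for small numbers of islanders, the following brute-force Python simulation may be helpful. It does not use the {L_k}-model machinery at all; instead it tracks, in the standard possible-worlds fashion, which eye-colour assignments remain compatible with what everybody has publicly observed, under the simplifying assumptions that days are synchronous, that the foreigner’s announcement is made on day {0}, and that an islander commits suicide on the first day on which he or she can deduce that he or she has blue eyes. (The function name and day-numbering convention are illustrative choices, not part of the formal set-up.) Running it with {m} blue-eyed islanders illustrates both halves of the theorem: all {m} of them die on day {m}, and nobody dies earlier.

from itertools import product

def blue_eyed_simulation(actual, max_days=20):
    # `actual` is a tuple of booleans (True = blue eyes) describing the real
    # world.  Returns (day, set of islanders) for the first suicides, or None
    # if no suicide occurs within max_days.
    n = len(actual)

    # Worlds compatible with the foreigner's announcement: at least one
    # islander has blue eyes.
    W = {w for w in product((False, True), repeat=n) if any(w)}

    def indistinguishable(w, v, i):
        # Islander i cannot tell apart worlds agreeing on everyone else's eyes.
        return all(w[j] == v[j] for j in range(n) if j != i)

    def dies_in(w, i):
        # In world w, islander i deduces blue eyes exactly when every
        # surviving world he or she cannot distinguish from w gives him or
        # her blue eyes.
        return all(v[i] for v in W if indistinguishable(w, v, i))

    for day in range(1, max_days + 1):
        dead_today = {i for i in range(n) if dies_in(actual, i)}
        if dead_today:
            return day, dead_today
        # Public information at the end of the day: nobody died.  Eliminate
        # every world in which somebody would have died today.
        W = {w for w in W if not any(dies_in(w, i) for i in range(n))}
    return None

# Ten islanders, of whom four have blue eyes: the four blue-eyed islanders
# all die on day 4, and nobody dies before then.
print(blue_eyed_simulation((True,) * 4 + (False,) * 6))   # (4, {0, 1, 2, 3})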

Remark 4 Under the assumptions used in our analysis, we have shown that it is inevitable that the foreigner’s comment will cause at least one death. However, it is possible to avert all deaths by breaking one or more of the assumptions used. For instance, if it is possible to sow enough doubt in the islanders’ minds about the logical and devout nature of the other islanders, then one can cause a breakdown of the epistemic inference rule or of (7), and this can prevent the chain of deductions from reaching its otherwise deadly conclusion.

— 7. The unexpected hanging paradox —

We now turn to the unexpected hanging paradox, and try to model it using (temporal) epistemic logic. (For a formulation of this paradox, I refer the reader to the Wikipedia entry.)

It turns out that there are several, not quite equivalent, ways to model this “paradox” epistemologically, with the differences hinging on how one interprets what “unexpected” or “surprise” means. In particular, if {S} is a sentence and {K} is a knowledge agent, how would one model the sentence “{K} does not expect {S}” or “{K} is surprised by {S}“?

One possibility is to model this sentence as \displaystyle \neg K(S), \ \ \ \ \ (11)

i.e. as the assertion that {K} does not know {S} to be true. However, this leads to the following situation: if {K} has inconsistent knowledge (in particular, one has {K(\bot)}, where {\bot} represents falsity (the negation of a tautology)), then by ex falso quodlibet, {K(S)} would be true for every {S}, and hence {K} would expect everything and be surprised by nothing. An alternative interpretation, then, is to adopt the convention that an agent with inconsistent knowledge is so confused as to not be capable of expecting anything (and thus be surprised by everything). In this case, “{K} does not expect {S}” should instead be modeled as \displaystyle (\neg K(S)) \vee K(\bot), \ \ \ \ \ (12)

i.e. that {K} either does not know {S} to be true, or is inconsistent.

Both interpretations (11), (12) should be compared with the sentence \displaystyle K(\neg S), \ \ \ \ \ (13)

i.e. that {K} knows that {S} is false. If {K} is consistent, (13) implies (11), but if {K} is inconsistent then (13) is true and (11) is false. In either case, though, we see that (13) implies (12).
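A quick way to keep the three readings straight is to evaluate them on a toy agent whose knowledge is represented semantically by the set of worlds the agent considers possible, with the empty set standing for inconsistent knowledge. The following few lines of Python (the function names are of course just labels for (11), (12) and (13)) reproduce the relationships just described: with inconsistent knowledge, (11) fails while (12) and (13) hold.

def knows(possible_worlds, prop):
    # K(prop): prop holds in every world the agent considers possible.  If no
    # world is considered possible (an inconsistent agent), this is vacuously
    # true for every prop, including falsity.
    return all(prop(w) for w in possible_worlds)

def does_not_expect_11(possible_worlds, prop):
    # (11):  not K(S)
    return not knows(possible_worlds, prop)

def does_not_expect_12(possible_worlds, prop):
    # (12):  (not K(S)) or K(false)
    return (not knows(possible_worlds, prop)) or knows(possible_worlds, lambda w: False)

def knows_S_false_13(possible_worlds, prop):
    # (13):  K(not S)
    return knows(possible_worlds, lambda w: not prop(w))

S = lambda w: w                     # in this toy example a world is just the truth value of S
unsure = [True, False]              # consistent agent that does not know whether S holds
inconsistent = []                   # agent with contradictory knowledge

print(does_not_expect_11(unsure, S), does_not_expect_11(inconsistent, S))   # True False
print(does_not_expect_12(unsure, S), does_not_expect_12(inconsistent, S))   # True True
print(knows_S_false_13(unsure, S), knows_S_false_13(inconsistent, S))       # False True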

Let us now analyse the unexpected hanging paradox using the former interpretation (11) of surprise. We begin with the simplest (and somewhat degenerate) situation, in which there is only one time (say Monday at noon) at which the hanging is to take place. In this case, there is just one knowledge agent {K} (the knowledge of the prisoner after the judge speaks, but before the execution date of Monday at noon). We introduce an atomic sentence {E}, representing the assertion that the prisoner will be hanged on Monday at noon. In this case (and using the former interpretation (11) of surprise), the judge’s remarks can be modeled by the sentence

\displaystyle S := E \wedge (E \implies \neg K(E)).

The “paradox” in this case stems from the following curious fact:

Theorem 6

There exist {L_\infty}-models in which {S} is true. There exist {L_\infty}-models in which {K(S)} is true. However, there does not exist any {L_\infty}-model in which both {S} and {K(S)} are true.

Thus, the judge’s statement can be true, but if so, it is not possible for the prisoner to know this! (In this regard, the sentence {S} is analogous to a Gödel sentence, which can be true in models of a formal system, but not provable in that system.) More informally: knowing a surprise ruins that surprise.

Proof: The third statement is easy enough to establish: if {S} is true in some model, then clearly {\neg K(E)} is true in that model; but if {K(S)} is true in the same model, then (by epistemic inference) {K(E)} will be true as well, which is a contradiction.

The first statement is also fairly easy to establish. We have two {L_0}-models: a model {M_0^+} in which {E} is true, and a model {M_0^-} in which {E} is false. We can recursively define the {L_k}-model {M_k(M_0)} for any {k \geq 0} and any {M_0 \in \{M_0^+,M_0^-\}} by setting {M_0(M_0) := M_0}, and for {k \geq 1}, setting {M_k(M_0)} to be the {L_k}-model with {L_0}-model {M_0}, and with {{\mathcal M}_{k-1} := \{ M_{k-1}(M_0^+), M_{k-1}(M_0^-) \}}. One then easily verifies that the {M_k(M_0)} have a limit {M_\infty(M_0)}, and that {M_\infty(M_0^+)} models {S} (but not {K(S)}, of course).

A trivial way to establish the second statement is to make a model in which {K} is inconsistent (thus {{\mathcal M}_{k-1}} is empty). One can also take {{\mathcal M}_{k-1}} to be {\{ M_{k-1}(M_0^+) \}}, and this will also work. (Of course, in such models, {S} must be false.) \Box
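The third statement can also be confirmed mechanically at finite depth. The sketch below (in Python; the nested-pair encoding of finite-depth models and the cutoff at depth {2} are choices made only for this illustration) enumerates all models of depth at most {2} for a single agent and a single atom {E}, and checks that some of them satisfy {S}, some satisfy {K(S)}, but none satisfy both.

from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# A depth-0 model is just the truth value of E; a depth-k model (k >= 1) is a
# pair (e, worlds), where worlds is a frozenset of depth-(k-1) models that the
# agent considers possible (the empty set making the agent inconsistent).
L0 = (False, True)
L1 = [(e, frozenset(ws)) for e in (False, True) for ws in powerset(L0)]
L2 = [(e, frozenset(ws)) for e in (False, True) for ws in powerset(L1)]

def E(model):
    return model[0] if isinstance(model, tuple) else model

def K(prop, model):
    # model |= K(prop): prop holds in every world the agent considers possible.
    _, worlds = model
    return all(prop(w) for w in worlds)

def S(model):
    # S := E and (E implies not K(E));  requires depth at least 1.
    return E(model) and (not E(model) or not K(E, model))

print(any(S(m) for m in L2))                    # True:  S can hold
print(any(K(S, m) for m in L2))                 # True:  K(S) can hold
print(not any(S(m) and K(S, m) for m in L2))    # True:  but never both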

Another peculiarity of the sentence {S} is that

\displaystyle K(S), K(K(S)) \models_{L_\infty} K(\bot)

as can be easily verified (by modifying the proof of the second statement of the above theorem). Thus, the sentence {S} has the property that if the prisoner believes {S}, and also knows that he or she believes {S}, then the prisoner’s beliefs automatically become inconsistent – despite the fact that {S} is not actually a self-contradictory statement (unless also combined with {K(S)}).

Now we move to the case when the execution could take place at two possible times, say Monday at noon and Tuesday at noon. We then have two atomic statements: {E_1}, the assertion that the execution takes place on Monday at noon, and {E_2}, the assertion that the execution takes place on Tuesday at noon. There are two knowledge agents: {K_1}, the state of knowledge just before Monday at noon, and {K_2}, the state of knowledge just before Tuesday at noon. (There is again the annoying notational issue that if {E_1} occurs, then presumably the prisoner will have no sensible state of knowledge by Tuesday, and so {K_2} might not be well defined in that case; to avoid this irrelevant technicality, we replace the execution by some non-lethal punishment (or use an alternate formulation of the puzzle, such as the surprise exam formulation).)

We will need one axiom beyond the basic axioms of epistemic logic, namely \displaystyle C( \neg E_1 \implies K_2( \neg E_1 ) ). \ \ \ \ \ (14)

Thus, it is common knowledge that if the execution does not happen on Monday, then by Tuesday, the prisoner will be aware of this fact. This axiom should of course be completely non-controversial.

The judge’s sentence, in this case, is given by

\displaystyle S := (E_1 \vee E_2) \wedge (E_1 \implies \neg K_1(E_1)) \wedge (E_2 \implies \neg K_2(E_2)).

Analogously to Theorem 6, we can find {L_\infty}-models obeying (14) in which {S} is true, but one cannot find models obeying (14) in which {S}, {K_1(S)}, {K_2(S)}, and {K_1(K_2(S))} are all true, as one can soon see that this leads to a contradiction. Indeed, from {S} one has

\displaystyle \neg E_1 \implies \neg K_2(E_2)

while from (14) one has \displaystyle \neg E_1 \implies K_2(\neg E_1)

and from {K_2(S)} one has \displaystyle K_2(E_1 \vee E_2)

which shows that {\neg E_1} leads to a contradiction, which implies {E_1} and hence {\neg K_1(E_1)} by {S}. On the other hand, from {K_1(S)} one has \displaystyle K_1( \neg E_1 \implies \neg K_2(E_2) )

while from (14) one has \displaystyle K_1( \neg E_1 \implies K_2(\neg E_1) )

and from {K_1(K_2(S))} one has \displaystyle K_1( K_2(E_1 \vee E_2) )

which shows that {K_1( \neg E_1 \implies \bot )}, and thus {K_1(E_1)}, a contradiction. So, as before, {S} is a secret which can only be true as long as it is not too widely known.

A slight variant of the above argument shows that if {K_1(S)}, {K_2(S)}, and {K_1(K_2(S))} hold, then {K_1(\neg E_1)} and {\neg E_1 \implies K_2(\neg E_2)} hold – or informally, the prisoner can deduce using knowledge of {S} (and knowledge of knowledge of {S}) that there will be no execution on either date. This may appear at first glance to be consistent with {S} (which asserts that the prisoner will be surprised when the execution does happen), but this is a confusion between (11) and (13). Indeed, one can show under the assumptions {K_1(S), K_2(S), K_1(K_2(S))} that {K_1} is inconsistent, and (if {\neg E_1} holds) then {K_2} is also inconsistent, and so {K_1(\neg E_1)} and {\neg E_1 \implies K_2(\neg E_2)} do not, in fact, imply {S}.

Now suppose that we interpret surprise using (12) instead of (11). Let us begin first with the one-day setting. Now the judge’s sentence becomes

\displaystyle S = E \wedge (E \implies (\neg K(E) \vee K(\bot))).

In this case it is possible for {S} and {K(S)} to be true, and in fact for {S} to be common knowledge, basically by making {K} inconsistent. (A little more precisely: we use the {L_k}-model {M_k} where {M_0 = M_0^+} and {{\mathcal M}_{k-1} = \emptyset}.) Informally: the judge has kept the execution a surprise by driving the prisoner insane with contradictions.
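In the same toy encoding of finite-depth models as nested pairs (again, an illustrative choice rather than part of the formal development), the “insane prisoner” model is simply the one whose set of accessible worlds is empty; a few lines of Python confirm that the modified sentence {S} and {K(S)} then hold simultaneously, and every further iterate of {K} holds vacuously as well.

def E(model):
    return model[0] if isinstance(model, tuple) else model

def K(prop, model):
    # With an empty set of accessible worlds the agent is inconsistent, and
    # K(prop) is vacuously true for every prop, including falsity.
    _, worlds = model
    return all(prop(w) for w in worlds)

def S(model):
    # S := E and (E implies (not K(E) or K(false))), i.e. interpretation (12).
    return E(model) and (not E(model) or (not K(E, model) or K(lambda w: False, model)))

M = (True, frozenset())          # E holds, and no world is considered possible
print(S(M), K(S, M))             # True True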

The situation is more interesting in the two-day setting (as first pointed out by Kritchman and Raz), where {S} is now

\displaystyle S := (E_1 \vee E_2) \wedge (E_1 \implies (\neg K_1(E_1) \vee K_1(\bot)))

\displaystyle \wedge (E_2 \implies (\neg K_2(E_2) \vee K_2(\bot))).

Here it is possible for {S} to in fact be common knowledge in some {L_\infty}-model, but in order for this to happen, at least one of the following three statements must be true in this model:

\displaystyle K_1(\bot)

\displaystyle K_2(\bot)

\displaystyle \neg K_1( \neg K_2(\bot) ).

(We leave this as an exercise for the interested reader.) In other words, in order for the judge’s sentence to be common knowledge, either the prisoner’s knowledge on Monday or Tuesday needs to be inconsistent, or else the prisoner’s knowledge is consistent, but the prisoner is unable (on Monday) to determine that his or her own knowledge (on Tuesday) is consistent. Notice that the third conclusion here {\neg K_1(\neg K_2(\bot))} is very reminiscent of Gödel’s second incompleteness theorem, and indeed in the above-mentioned paper of Kritchman and Raz, the surprise examination argument is modified to give a rigorous proof of that theorem.

Remark 5 Here is an explicit example of an {L_\infty}-world in which {S} is common knowledge, and {K_1} and {K_2} are both consistent (but {K_1} does not know that {K_2} is consistent). We first define {L_k}-models {M_k} for each {k=0,1,\ldots} recursively by setting {M_0} to be the world in which {M_0 \models_{L_0} E_2} and {M_0 \models_{L_0} \neg E_1}, and then define {M_k} for {k \geq 1} to be the {L_k}-model with {L_0}-model {M_0}, with {{\mathcal M}_{k-1}^{(1)} := \{ M_{k-1}\}}, and {{\mathcal M}_{k-1}^{(2)} := \emptyset}. (Informally: the execution is on Tuesday, and the prisoner knows this on Monday, but has become insane by Tuesday.) We then define the models {\tilde M_k} for {k=0,1,\ldots} recursively by setting {\tilde M_0} to be the world in which {\tilde M_0 \models_{L_0} E_1} and {\tilde M_0 \models_{L_0} \neg E_2}, then define {\tilde M_k} for {k \geq 1} to be the {L_k}-model with {L_0}-model {\tilde M_0}, {{\mathcal M}_{k-1}^{(1)} := \{ M_{k-1}, \tilde M_{k-1} \}}, and {{\mathcal M}_{k-1}^{(2)} := \{ \tilde M_{k-1} \}}. (Informally: the execution is on Monday, but the prisoner only finds this out after the fact.) The limit {\tilde M_\infty} of the {\tilde M_k} then has {S} as common knowledge, with {K_1(\bot)} and {K_2(\bot)} both false, but with {K_1(\neg K_2(\bot))} also false.
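For readers who would like to double-check Remark 5 mechanically, here is a short Python sketch. (As before, finite-depth models are encoded as nested tuples; the two agents {K_1}, {K_2} correspond to the two sets of accessible worlds, and the depth cutoff of {3} is arbitrary.) It builds the models {M_k} and {\tilde M_k} described above and verifies the claimed properties of {\tilde M_k}: the judge’s sentence and its iterated knowledge hold, {K_1(\bot)} and {K_2(\bot)} fail, but so does {K_1(\neg K_2(\bot))}.

def build_models(depth):
    # Depth-0 models record only the truth values of (E_1, E_2); a depth-k
    # model adds, for each of the two days, the set of depth-(k-1) models the
    # prisoner considers possible just before noon of that day.
    M = (False, True, None, None)       # L_0 part: execution on Tuesday
    Mt = (True, False, None, None)      # L_0 part: execution on Monday
    for _ in range(depth):
        # "Execution on Tuesday": the prisoner knows this on Monday (only M is
        # possible) but is insane by Tuesday (no world is possible).
        M_next = (False, True, frozenset({M}), frozenset())
        # "Execution on Monday": the prisoner only finds out after the fact.
        Mt_next = (True, False, frozenset({M, Mt}), frozenset({Mt}))
        M, Mt = M_next, Mt_next
    return M, Mt

def E1(m): return m[0]
def E2(m): return m[1]

def K(i, prop, m):
    # m |= K_i(prop): prop holds at every world the prisoner considers
    # possible just before day i (i = 1 for Monday, i = 2 for Tuesday).
    return all(prop(w) for w in m[1 + i])

BOT = lambda m: False

def S(m):
    # S := (E_1 or E_2) and (E_1 => (not K_1(E_1) or K_1(bot)))
    #                   and (E_2 => (not K_2(E_2) or K_2(bot)))
    return ((E1(m) or E2(m))
            and ((not E1(m)) or (not K(1, E1, m)) or K(1, BOT, m))
            and ((not E2(m)) or (not K(2, E2, m)) or K(2, BOT, m)))

_, Mt = build_models(3)

# S holds, and is common knowledge as far as depth 3 can see:
print(S(Mt), K(1, S, Mt), K(2, S, Mt), K(1, lambda w: K(2, S, w), Mt))   # True True True True
# The prisoner's knowledge is consistent on both days ...
print(K(1, BOT, Mt), K(2, BOT, Mt))                                      # False False
# ... but on Monday the prisoner cannot rule out being inconsistent on Tuesday:
print(K(1, lambda w: not K(2, BOT, w), Mt))                              # False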
