# What is Modal Logic?

Modal logic is a framework that extends ordinary logic to let us reason about multiple ways in which something can be true. Many modal logics exist, discussing notions of knowledge, time, provability, or computability. Modal logic is a handy tool for formalizing mathematical notions and reasoning about novel concepts.

Like most extensions to ordinary logic, modal logic aims to produce a logic custom-tailored to the task at hand. Generally, anything expressed with a modal logic can instead be expressed in ordinary logic, but only at the cost of more complicated rules of inference. This makes modal logic especially handy for computer applications: proof assistants, automated verification, and synthesis.

## A logic of time

Suppose we want to reason about the execution of a program over time. For example, we might want to make statements like, “If the button is pressed, the computer will halt.” However, it's tricky to see how we can turn this into a rigorous logical statement. Our first attempt might be something like

pressed(button) → halted

However, after some contemplation, we see this is false: there is a moment when the button is pressed but the computer has not yet halted.

To better handle this gap between the button being pressed and the computer halting, we figure we need to add a notion of time. We might extend the `pressed` and `halted` predicates to take a time argument, and render the rule like this:

∀ t, pressed(button, t) → ∃ s, s ≥ t ∧ halted(s)

Time might be identified with clock cycles; then this rule is believable. We can imagine proving this constructively by arguing that, well, suppose the button is pushed, then in one cycle we'll be in the interrupt handler, and in ten cycles we'll halt the machine, so let `s = t + 11`, and then we are done.

This new way of rendering the button-halt property is at least accurate, but it requires our proofs to do meticulous cycle-counting. Our specification of the computer now relies on reasoning about order, a tricky thing; proofs require checking arithmetic. All this means that tasks like theorem checking, proof search, or program generation are harder and require more computational resources.

There is another way to reason about time that hides all of these details. Instead of introducing clocks and comparisons, we introduce a *modal operator*: a unary operator on propositions. Call the operator `eventually`, expressing the notion that at a future time the proposition will be true. Then our statement is rendered as:

pressed(button) → eventually halted

This axiom removes the need for times, ordering, arithmetic, or even first-order reasoning. Using such a specification is easier; checking a proof is simpler; synthesizing programs to meet this requirement might be doable with complete automation. That is the promise of modal logic.
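To make the semantics concrete, here is a minimal sketch (with a hypothetical three-state trace) of checking `eventually` over a finite execution, where each state records which atomic propositions hold at that moment:

```python
# A trace is a list of states; each state is the set of atomic
# propositions true at that moment. This trace is hypothetical.
trace = [
    {"pressed"},              # button pressed, not yet halted
    {"pressed"},              # interrupt handler running
    {"pressed", "halted"},    # machine halts
]

def holds(prop, trace, t):
    """Does the atomic proposition `prop` hold at time t?"""
    return prop in trace[t]

def eventually(prop, trace, t):
    """Does `prop` hold at time t or at any later time in the trace?"""
    return any(holds(prop, trace, s) for s in range(t, len(trace)))

# The specification `pressed(button) → eventually halted`,
# checked at every moment of the trace:
spec_ok = all(
    (not holds("pressed", trace, t)) or eventually("halted", trace, t)
    for t in range(len(trace))
)
print(spec_ok)  # → True
```

No clocks or arithmetic appear in the specification itself; the time-indexing lives entirely inside the semantics of `eventually`.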

## Axioms for a modal logic

A modal logic is ordinary logic plus at least one modal operator: an operator that takes a proposition and returns a proposition. These operators express the *way* in which a proposition is true. Above, `halted` is true in a certain way: it is true at some later time.

Modal operators have also been used to express the idea that a proposition is true after a certain program is run; that its truth is known by some agent; that the truth is morally required; that a truth is provable in a certain system; or that a value of a certain type is constructible. In each of these cases, a modal operator is introduced, and then certain axioms are assumed that govern that operator's behavior.

In the example above, we introduced `eventually` to keep track of times and their ordering. If we imagine the context to have an associated time, `eventually` moves us from a context with time `t` to a context with some greater time `s`. Here are some axioms we might add to describe the `eventually` modality above:

`¬eventually p → ¬ p`

`eventually eventually p → eventually p`

Translate each of these to statements about times, and you get:

`∀ t, ¬ (∃ s, s ≥ t ∧ p(s)) → ¬ p(t)`

`∀ t, (∃ s, s ≥ t ∧ (∃ r, r ≥ s ∧ p(r))) → (∃ r, r ≥ t ∧ p(r))`

In other words, these axioms correspond to the reflexivity and transitivity of time order.
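As a sanity check, here is a small sketch that verifies both axioms exhaustively on a hypothetical Kripke frame where the worlds are times `0..3` and time `s` is accessible from `t` when `s ≥ t`, reading `eventually p` as "`p` holds at some accessible time":

```python
from itertools import product

# Worlds are times 0..3; time s is accessible from t when s >= t.
# This relation is reflexive and transitive.
worlds = range(4)
accessible = lambda t, s: s >= t

def eventually(p, t):
    """p holds at some time accessible from t."""
    return any(p(s) for s in worlds if accessible(t, s))

# Check both axioms at every world, for every valuation of p.
for bits in product([False, True], repeat=len(worlds)):
    p = lambda w, bits=bits: bits[w]
    for t in worlds:
        # ¬eventually p → ¬p      (needs reflexivity of ≥)
        assert not (not eventually(p, t)) or not p(t)
        # eventually eventually p → eventually p   (needs transitivity of ≥)
        assert (not eventually(lambda s: eventually(p, s), t)
                or eventually(p, t))
print("both axioms hold on this frame")
```

If accessibility were not reflexive (say, `s > t`), the first assertion would fail for some valuation; that is the sense in which the axioms "correspond to" properties of the time order.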

In other applications, other axioms might be necessary. The field of modal logic has developed a long list of axioms, so in using modal logic you are often picking from a slate of axioms already known to provide good behavior. Other times, you design your own modal axioms as necessary.

## A logic of knowledge

Sometimes we are reasoning about multiple “agents”: different people, different computers, or different data structures. We can use modal logic to capture the idea that an agent knows some property. To do so, we introduce a `knows(a)` modal operator, where `a` is an agent. The statement `knows(a) x = 3` translates as, “Agent `a` knows that `x` is equal to 3”.

We'd add the following axioms to this logic of knowledge; keep in mind that I'm not formalizing whatever it means for *humans* to know something, but rather what it means for information to be implicit in a computer system or data structure:

- If `p` is a tautology, `knows(a) p`, reflecting the fact that our agents can recognize obvious truths
- `knows(a) p → knows(a) (p → q) → knows(a) q`, reflecting the fact that this knowledge isn't necessarily explicitly represented
- `knows(a) p → p`, reflecting the fact that you can't know false things
- `knows(a) p → knows(a) knows(a) p` and `¬knows(a) p → knows(a) ¬knows(a) p`, reflecting the fact that our store of data is manipulated by known rules, so it is possible to find out whether something is known or not

These rules sound self-evident, but they also suffice to describe the general notion of “knowledge implicit in a computer system”. Properties of the system or its environment could then be described by statements invoking this idea of knowledge.

Note that in this system, `p → knows(a) p` would be a bad axiom to add. Our computer systems are not omniscient! Modal axioms require some care to choose, the same care as when we describe the behavior of a computer system's environment. Luckily, the common modal axioms are well-studied and well-understood, so you have good guides for choosing them.
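One standard way to make `knows(a)` concrete is possible-worlds semantics: `knows(a) p` holds when `p` is true at every world the agent cannot distinguish from the actual one. Here is a minimal sketch, with a hypothetical agent that has observed only the parity of `x`:

```python
# Possible-worlds sketch: worlds are the candidate values of x, and
# agent a cannot distinguish worlds within the same cell of its
# partition. (Worlds and partition here are hypothetical.)
worlds = [1, 2, 3, 4]
# Agent a observed x's parity, so 1 and 3 look alike, as do 2 and 4.
indistinguishable = {1: {1, 3}, 3: {1, 3}, 2: {2, 4}, 4: {2, 4}}

def knows(a_cells, p, w):
    """knows(a) p holds at world w if p holds at every world that
    agent a considers possible at w."""
    return all(p(v) for v in a_cells[w])

# At world 3, agent a knows x is odd...
print(knows(indistinguishable, lambda x: x % 2 == 1, 3))  # → True
# ...but does not know x = 3, since x = 1 looks the same to it.
print(knows(indistinguishable, lambda x: x == 3, 3))      # → False
```

Note how the bad axiom fails here: `x = 3` is true at world 3, yet the agent does not know it. Because the partition is an equivalence relation, the introspection axioms from the list above hold automatically.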

## Necessary axioms

Modalities come in two flavors: negative, like `knows(a)`, and positive, like `eventually`. If `M` is a positive modality, then `¬ M ¬` is a negative one, and vice versa.

Every modal logic with a negative modality `M` must hold to these two axioms:

- If `p` is a tautology, `M p` is true.
- `M p → M (p → q) → M q`

Logics with a positive modality `W` must hold to the flip side of these axioms:

- If `p` is a tautological falsehood, then `W p` must be false.
- `W p → W q ∨ W ¬ (p → q)`

These two rules are really the same as the ones for the negative modality `¬ W ¬`.

The deep reason for these two axioms becomes clearest when considering the model theory of modal logics, but by themselves the rules are rarely controversial. The first indicates that logical tautologies are true in every possible way; for example, everyone knows all tautologies, they are true at all points in time, and so on. The second indicates that all ways of a thing being true respect hypothetical reasoning. Given the importance of hypothetical reasoning, this isn't a surprise. Occasionally someone has a problem with one of these two axioms, but it is usually when we are trying to formalize human thought, with its fallibility and limitations. Don't use modal logic for that. Like logic itself, modal logic is good for modeling idealized, black-and-white worlds.
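In model-theoretic terms, the positive axiom is valid on any frame at all, with no assumptions about the accessibility relation. Here is a small sketch checking it exhaustively on a hypothetical three-world frame, reading `W p` as "`p` holds at some accessible world":

```python
from itertools import product

# A hypothetical accessibility relation on 3 worlds; note it is
# neither reflexive nor transitive. No frame conditions are assumed.
worlds = range(3)
edges = {(0, 1), (1, 2), (2, 0), (0, 2)}

def W(p, w):
    """Positive modality: p holds at some world accessible from w."""
    return any(p(v) for v in worlds if (w, v) in edges)

# Check  W p → W q ∨ W ¬(p → q)  at every world, for every valuation
# of p and q. Note ¬(p → q) is the same as p ∧ ¬q.
for pbits, qbits in product(product([False, True], repeat=3), repeat=2):
    p = lambda w, b=pbits: b[w]
    q = lambda w, b=qbits: b[w]
    for w in worlds:
        assert (not W(p, w)) or W(q, w) or W(lambda v: p(v) and not q(v), w)
print("axiom valid on this frame")
```

The informal argument matches the code: if `p` holds at some accessible world, then at that world either `q` holds (giving `W q`) or `¬q` holds alongside `p` (giving `W ¬(p → q)`).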

If you have a favorite application of modal logic, let me know!