 ## By Pavel Panchekha

### 24 October 2016

Shared under CC-BY-SA.

# Bimodal logic

I am working on a side project that uses modal logic to understand game theory where the utility functions can change. In the course of doing this, I came up against a question: is there any use for a modality that takes two arguments?

Modal logic is a logic with an operator of type `Prop → Prop`. There are, of course, many modalities that take an additional argument; for example, the epistemic modalities describe when various agents know various things, and are usually given the type `Agent → Prop → Prop`. But is there a use for modalities that take two propositions as arguments? After all, `and` and `or` have this type; are there other useful operators of this type?
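In Lean-style notation (a sketch of my own; none of these names come from any library), the operator types in question look like:

```lean
axiom Agent : Type

axiom box     : Prop → Prop                  -- an ordinary modality, e.g. □
axiom knows   : Agent → Prop → Prop          -- an epistemic modality K_a
axiom prefers : Agent → Prop → Prop → Prop   -- the two-proposition kind asked about
```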

One of my tasks in formalizing game theory in modal logic is to do away with utility functions. Most treatments of game theory give each agent a function from strategy sets to the reals, describing how much that agent likes various outcomes. There are good historical reasons for this formulation,¹ but I need a way to talk about preference directly, without using real numbers.

Game theory involves agents reasoning about the behavior of other agents, so I have a proposition "agent `a` will do `s`" written `a ⇝ s`. A strategy set is just a big proposition: `(a ⇝ s) ∧ (b ⇝ s')`. Since agents prefer some outcomes to others, they can have preferences between strategy sets. So I can have a “prefers” proposition of type `Agent → Prop → Prop → Prop`: two propositional arguments.

This proposition, written `P <ₐ Q`, means that every world where `Q` is true is preferable to every world where `P` is true, according to `a`'s ordering on worlds. So, for example, in the Prisoner's dilemma,

|   | D    | C    |
|---|------|------|
| D | 1, 1 | 6, 0 |
| C | 0, 6 | 5, 5 |

(rows are `a`'s strategy, columns are `b`'s; each cell lists `a`'s payoff, then `b`'s)

we can write `(a ⇝ C ∧ b ⇝ C) <ₐ (a ⇝ D ∧ b ⇝ C)`, since `a` prefers to defect on a cooperator; but we cannot write `a ⇝ C <ₐ a ⇝ D` (read: `a` prefers defection to cooperation), since the `(C, C)` outcome is preferred to `(D, D)`.
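This definition can be checked mechanically on a finite model. Below is a small Python sketch (the code and names are my own, not from any library); I use `a`'s numeric payoffs from the table as a stand-in for `a`'s ordering on worlds, with `a` as the row player.

```python
from itertools import product

# Worlds: strategy profiles (a's move, b's move).
WORLDS = list(product("DC", repeat=2))

# a's payoff from the table above, with a as the row player.
PAYOFF_A = {("D", "D"): 1, ("D", "C"): 6, ("C", "D"): 0, ("C", "C"): 5}

def prefers_a(p, q, worlds=WORLDS):
    """P <_a Q: every Q-world is strictly better for a than every P-world.

    p and q are predicates on worlds, i.e. propositions."""
    p_worlds = [w for w in worlds if p(w)]
    q_worlds = [w for w in worlds if q(w)]
    return all(PAYOFF_A[wq] > PAYOFF_A[wp]
               for wp in p_worlds for wq in q_worlds)

# (a ⇝ C ∧ b ⇝ C) <_a (a ⇝ D ∧ b ⇝ C): defecting on a cooperator, 6 > 5.
print(prefers_a(lambda w: w == ("C", "C"), lambda w: w == ("D", "C")))  # True

# a ⇝ C <_a a ⇝ D fails: the (D, D) world pays a only 1, worse than (C, C)'s 5.
print(prefers_a(lambda w: w[0] == "C", lambda w: w[0] == "D"))  # False
```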

The semantics are, however, complex. How do we interpret `(m⇝X <ₘ m⇝Y) <ₙ (m⇝Y <ₘ m⇝X)`? It should roughly mean that `n` prefers that `m` prefer `X` to `Y`. One answer could be that `<ₐ` propositions always have the same meaning in every world. Then the statement above is equivalent to one of `⊥ <ₐ ⊤`, `⊤ <ₐ ⊥`, `⊥ <ₐ ⊥`, or `⊤ <ₐ ⊤` (which evaluate to ⊤, ⊤, ⊤, and ⊥, respectively: the first three hold vacuously, because one side is true in no worlds, while the last fails because no world is strictly preferred to itself). But that feels like modal logic before the addition of the accessibility relation, and it makes it impossible to reason about agents that change their utility function.
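The four degenerate cases can be confirmed by vacuous quantification over a throwaway model (my own sketch; any strict order on at least one world gives the same answers):

```python
# A throwaway model: two worlds, ordered by their number.
worlds = [0, 1]
order = lambda w: w

def prefers(p, q):
    # P < Q: every Q-world strictly above every P-world in the order.
    return all(order(wq) > order(wp)
               for wp in worlds if p(wp)
               for wq in worlds if q(wq))

top = lambda w: True   # ⊤: true in every world
bot = lambda w: False  # ⊥: true in no world

# ⊥ < ⊤, ⊤ < ⊥, and ⊥ < ⊥ hold vacuously; ⊤ < ⊤ fails since no world
# is strictly above itself.
results = [prefers(bot, top), prefers(top, bot),
           prefers(bot, bot), prefers(top, top)]
print(results)  # [True, True, True, False]
```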

Presumably some relation must be added. But should it relate triples of worlds (at world C, compare worlds A and B), or a world and two sets of worlds (at C, compare all worlds in set A to all worlds in set B), or something else? For the game theory case, I think the most natural meaning is that `P <ₐ Q`, evaluated at C, compares all worlds accessible from C: those where `P` holds against those where `Q` holds. But perhaps for other bimodal logics a different meaning would be more natural.
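The "compare all accessible worlds" reading can be sketched the same way; here the accessibility relation `R` is a hypothetical stand-in, and the truth of `P <ₐ Q` now genuinely depends on the world of evaluation:

```python
def prefers_at(w, R, order, worlds, p, q):
    """Evaluate P <_a Q at world w, quantifying only over worlds
    accessible from w via R (a hypothetical accessibility relation)."""
    reachable = [v for v in worlds if R(w, v)]
    return all(order[vq] > order[vp]
               for vp in reachable if p(vp)
               for vq in reachable if q(vq))

WORLDS = [1, 2, 3, 4]
ORDER = {w: w for w in WORLDS}   # the agent prefers higher-numbered worlds
P = lambda w: w in (2, 4)
Q = lambda w: w == 3

# Two stand-in accessibility relations: the first cannot reach world 4,
# the second can.
R_narrow = lambda w, v: v in (2, 3)
R_wide = lambda w, v: v in (2, 3, 4)

print(prefers_at(1, R_narrow, ORDER, WORLDS, P, Q))  # True:  3 > 2
print(prefers_at(1, R_wide, ORDER, WORLDS, P, Q))    # False: 3 < 4
```

Because the quantifiers range only over accessible worlds, nested statements like the one above acquire nontrivial semantics: different worlds can disagree about a preference.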

## Footnotes:

¹ The Nash Existence Theorem is central. It already involves using the reals in all but its most abstract forms, and requires (I think?) a linear order of preferences, so you might as well make up a utility function.