Pavel Panchekha


Shared under CC-BY-SA.

Defining our Logic


This is post 2 of the Models series.

I’ve recently become interested in model theory, and as I’ve started reading about it, one question in particular has been eating at me: why do we need to do model theory at all? This series of posts will answer that question, hopefully saving others the trouble of figuring it out for themselves. This post defines the logic that we want to model.

Defining a logic: propositions

As discussed in the last post, a logic is just a set of strings of symbols, and a set of rules for transforming strings of symbols into one another. Usually, both of these are defined inductively: the strings of symbols are defined in terms of abstract syntax trees, and the transformation rules are defined in terms of an inductive relation between these syntax trees.

The logic I’m going to be talking about is intuitionistic predicate logic. I’ll be talking about only a fragment, because my logic won’t include a quantifier (I’ll discuss why in part 4).

Let’s start by writing down the syntax—the strings of symbols that this logic describes—and then we can talk about the semantics. Our fragment of intuitionistic logic describes strings made up of the connectives ⊤, ⊥, →, ∨, and ∧ (they’re called connectives because they connect other things) and variables. To define this formally, we need to give an inductive definition for this syntax. For now, we’ll make do with:

Definition: a proposition in our logic is one of:

  • ⊤, ⊥, or a variable;
  • e₁ → e₂, e₁ ∨ e₂, or e₁ ∧ e₂, where e₁ and e₂ are propositions;

To state this in Coq, we’ll need to refine how variables work, but this definition is more than enough when we’re working in text. Note that we have not defined the meaning of ⊤ or ⊥; in fact, so far they are indistinguishable.
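Before we get to Coq (that’s the next post), a quick sketch can make the inductive definition concrete. This is only an illustration in Python, with hypothetical names of my own choosing; propositions are represented as nested tuples.

```python
# A sketch of the proposition syntax. Propositions are nested tuples:
#   "top" and "bot" stand for the constants ⊤ and ⊥;
#   ("var", name) is a variable;
#   ("imp", e1, e2), ("or", e1, e2), ("and", e1, e2) are e1 → e2, e1 ∨ e2, e1 ∧ e2.

TOP, BOT = "top", "bot"

def var(name): return ("var", name)
def imp(e1, e2): return ("imp", e1, e2)
def disj(e1, e2): return ("or", e1, e2)
def conj(e1, e2): return ("and", e1, e2)

def is_prop(e):
    """Check that a value is a well-formed proposition, following the
    inductive definition above."""
    if e in (TOP, BOT):
        return True
    if isinstance(e, tuple) and len(e) == 2 and e[0] == "var":
        return isinstance(e[1], str)
    if isinstance(e, tuple) and len(e) == 3 and e[0] in ("imp", "or", "and"):
        return is_prop(e[1]) and is_prop(e[2])
    return False

# The proposition x ∧ (x → ⊥), which we'll prove something about later:
example = conj(var("x"), imp(var("x"), BOT))
assert is_prop(example)
```

The Coq version will want a more careful treatment of variables, as noted above; this sketch just treats them as strings.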

Defining a logic: transformation rules

To define the meaning of the various expressions, we will define transformation rules on symbols. These rules will, of course, distinguish between ⊤ and ⊥. The intuition for these rules is that they should transform “true” propositions only into “true” propositions. (Of course, we aren’t really talking about truth—just syntax—but that’s the intuition.)

As it happens, almost all of these rules will be phrased as propositions that transform into ⊤. This isn’t necessary, though it is very neat. It’s due to the fact that I got these rules by munging a natural deduction presentation of intuitionistic logic. In all these rules, the variables e₁, e₂, and so on stand for arbitrary propositions. A rule is written e₁ = e₂, meaning that either of e₁ and e₂ can be transformed into the other. (In principle, one could have one-way rules; we’ll see one when we get to “for all”.)

The first two rules describe “implies”; namely, they demonstrate weakening and substitution:

  • e₁ → (e₂ → e₁) = ⊤
  • (e₁ → e₂ → e₃) → ((e₁ → e₂) → (e₁ → e₃)) = ⊤, with implication associating to the right

The next three rules describe “and”; they are two elimination rules and one introduction rule:

  • e₁ ∧ e₂ → e₂ = ⊤ and e₁ ∧ e₂ → e₁ = ⊤
  • e₁ → e₂ → e₁ ∧ e₂ = ⊤

The next three describe “or”; they are one elimination rule and two introduction rules:

  • (e₁ → e₃) → (e₂ → e₃) → (e₁ ∨ e₂ → e₃) = ⊤
  • e₁ → e₁ ∨ e₂ = ⊤ and e₂ → e₁ ∨ e₂ = ⊤

Finally, we have one rule each for “true” and “false”:

  • e₁ → ⊤ = ⊤
  • ⊥ → e₁ = ⊤
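The rules so far are all schemas of the form “… = ⊤”, so they are easy to tabulate. Here is a hypothetical Python sketch (nested-tuple propositions, not the eventual Coq development) that, given any three propositions, produces every instance the rules above declare equal to ⊤:

```python
# Propositions as nested tuples: "top", "bot", ("var", name),
# and ("imp"/"or"/"and", lhs, rhs).
def imp(a, b): return ("imp", a, b)
def conj(a, b): return ("and", a, b)
def disj(a, b): return ("or", a, b)
TOP, BOT = "top", "bot"

def axiom_instances(e1, e2, e3):
    """Left-hand sides of every '... = ⊤' rule, instantiated at e1, e2, e3."""
    return [
        imp(e1, imp(e2, e1)),                              # weakening
        imp(imp(e1, imp(e2, e3)),
            imp(imp(e1, e2), imp(e1, e3))),                # substitution
        imp(conj(e1, e2), e2),                             # and-elimination (right)
        imp(conj(e1, e2), e1),                             # and-elimination (left)
        imp(e1, imp(e2, conj(e1, e2))),                    # and-introduction
        imp(imp(e1, e3),
            imp(imp(e2, e3), imp(disj(e1, e2), e3))),      # or-elimination
        imp(e1, disj(e1, e2)),                             # or-introduction (left)
        imp(e2, disj(e1, e2)),                             # or-introduction (right)
        imp(e1, TOP),                                      # "true"
        imp(BOT, e1),                                      # "false"
    ]

x, y = ("var", "x"), ("var", "y")
print(len(axiom_instances(x, y, BOT)))  # 10 rule instances
```

Listing the rules as a schema like this makes it plain that each one is closed under substituting arbitrary propositions for e₁, e₂, e₃.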

We’re missing only “for all”; describing that will require using substitution, which I’ll write e₁[x ↦ e₂] (meaning, substitute e₂ for each occurrence of x in e₁). This rule is one-directional; this logic has a for-all connective only in a very limited way.

  • ((∀ x, e₁) → e₁[x ↦ e₂]) = ⊤
  • (∀ x, e₁) = e₁, if e₁ does not mention x.
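Substitution e₁[x ↦ e₂] is easy to write down recursively. Here is a hypothetical Python sketch, again representing propositions as nested tuples like ("imp", lhs, rhs) with variables as ("var", name):

```python
def subst(e, x, e2):
    """Compute e[x ↦ e2]: replace every occurrence of the variable x in the
    proposition e with the proposition e2."""
    if e == ("var", x):
        return e2
    if isinstance(e, tuple) and e[0] in ("imp", "or", "and"):
        return (e[0], subst(e[1], x, e2), subst(e[2], x, e2))
    return e  # "top", "bot", or a different variable

# (x → ⊥)[x ↦ ⊤] is ⊤ → ⊥:
assert subst(("imp", ("var", "x"), "bot"), "x", "top") == ("imp", "top", "bot")
```

Since our fragment has no binders inside propositions, there is no variable capture to worry about; that is one of the things that makes this limited “for all” easy to handle.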

When we do proofs in this logic, we usually do it by substituting inter-convertible expressions for each other. There’s a sort of meta-rule: if e₁ = e₂, and P[e₁] = Q, where P is some large expression with a hole filled by e₁, then P[e₂] = Q. To make this meta-rule more powerful, we’re going to say that propositions that imply one another are equal:

  • If e₁ → e₂ = ⊤ and e₂ → e₁ = ⊤, then e₁ = e₂.
  • If ⊤ → y = ⊤, then y = ⊤.
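The meta-rule is essentially rewriting: once we know e₁ = e₂, we may replace e₁ by e₂ inside a larger proposition. A hypothetical Python sketch (nested-tuple propositions again; note that this replaces every occurrence, whereas the meta-rule as stated fills a single hole):

```python
def rewrite(p, e1, e2):
    """Replace each occurrence of the subterm e1 inside the proposition p
    with e2 -- the engine behind 'substituting inter-convertible
    expressions for each other'."""
    if p == e1:
        return e2
    if isinstance(p, tuple) and p[0] in ("imp", "or", "and"):
        return (p[0], rewrite(p[1], e1, e2), rewrite(p[2], e1, e2))
    return p

# Replacing every x inside x ∨ x with y:
x, y = ("var", "x"), ("var", "y")
assert rewrite(("or", x, x), x, y) == ("or", y, y)
```

Replacing all occurrences at once is harmless here, because the meta-rule can be applied once per occurrence to reach the same result.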

Now that we’ve defined a logic, we should be able to prove some facts by pure symbol-manipulation.

Internal characterizations

We’ve defined some rules by which expressions can be transformed; but how does that lead to proving properties? We only want to prove “true” properties, but we haven’t yet defined truth. So instead, we’ll take an internal notion of truth.

This is incredibly important, so I want to take a moment to dwell on it. The notion of “truth” is complex and slippery (we’ll get to it in the next post). Because it’s hard to describe which things are “true”, we’ll take a conservative approach: instead of defining which things are true and which are not, we’ll define a way to describe only true properties, though not necessarily all true properties. We do this by describing a set of building blocks which can only combine into true properties.

This is called an internal characterization of truth, because we don’t have to actually define truth; we just define a small set of building blocks and proclaim them to only produce true things. But it’s weaker than external notions of truth, because there may be true properties that are not captured by our building blocks. One of the incredible things about first-order logic is that this internal notion ends up being complete—it ends up being equivalent to an external notion of truth (this is a rough and informal view of Gödel’s Completeness Theorem).

Proving things

So what’s our internal characterization? What’s our conservative estimate of truth? It will be this: ⊤ is true, and truth is maintained by the transformations we’ve defined.

A proof is just a checkable record of the internal notion of truth that we’ve just defined; so, it must be a sequence of transformations that transform a given statement into just ⊤.

For example, let’s try to prove that ∀ x, (x ∧ (x → ⊥)) → ⊥. If we define ¬ x to be x → ⊥, this is ∀ x, ¬ (x ∧ ¬ x). To make things easier to read, I’ll write e₁ for the string x ∧ (x → ⊥). This is not a variable in the logic itself—it is a meta-variable, to be interpreted by us humans. The proof is:

  • ⊤ is true, so
  • ∀ x, ⊤ is true, so
  • ∀ x, (e₁ → x → ⊥) → (e₁ → x) → (e₁ → ⊥) is true by the substitution rule, so
  • ∀ x, ⊤ → (e₁ → x) → (e₁ → ⊥) is true by and-elimination, so
  • ∀ x, ⊤ → ⊤ → (e₁ → ⊥) is true by and-elimination, so
  • ∀ x, ⊤ → (e₁ → ⊥) is true by the second equality rule, so
  • ∀ x, e₁ → ⊥ is true by the second equality rule, which is
  • ∀ x, (x ∧ (x → ⊥)) → ⊥ by expanding e₁.
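As a sanity check on the proof above (not a proof in the logic itself), the statement we proved is at least classically valid: every intuitionistic theorem is also a classical one, so evaluating with ordinary truth tables gives a necessary, though not sufficient, condition for provability in our fragment. A quick hypothetical Python check, with propositions as nested tuples:

```python
def evaluate(e, env):
    """Classical truth-table evaluation of a proposition, given True/False
    for each variable in env. Classical validity is necessary (but not
    sufficient) for provability in the intuitionistic fragment."""
    if e == "top": return True
    if e == "bot": return False
    if e[0] == "var": return env[e[1]]
    a, b = evaluate(e[1], env), evaluate(e[2], env)
    return {"imp": (not a) or b, "or": a or b, "and": a and b}[e[0]]

x = ("var", "x")
e1 = ("and", x, ("imp", x, "bot"))   # x ∧ (x → ⊥)
stmt = ("imp", e1, "bot")            # (x ∧ (x → ⊥)) → ⊥
assert all(evaluate(stmt, {"x": v}) for v in (True, False))
```

This check would also pass for classically-but-not-intuitionistically valid statements like x ∨ ¬x, which is exactly why it is only a sanity check and not a substitute for the symbol-pushing proof.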

This proof was ugly and more than a bit tedious, but it should give you reason to think that the logic we’ve defined is at least a little bit useful. This trick of “substituting ⊤ for things” is the main activity of these proofs. Sometimes (though not here), the proofs branch by proving a lemma of the form x = y by showing x → y = ⊤ and y → x = ⊤.


We’ve now defined a logic, as a collection of strings of symbols and a collection of valid transformations on those symbols. We’ve also defined an internal notion of truth, as things provable by transforming them into ⊤ (or vice versa). The next post will formalize these definitions in Coq.