Equal functions in ITT
Many logics have a notion of functions and a notion of equality, so nearly every logic has to decide: when are two functions equal?
How equality works in ITT
Intensional Type Theory (ITT) is the basis of Coq and Agda. It describes functions (including dependent functions); products and sums; equality; and a whole host of other neat constructions, all definable through a few common mechanisms.
In intensional type theory, equality is defined in a rather curious way: the family of propositions a = ? has a single proof, refl a, which proves that a = a. Now that's certainly true, but we'd like to prove more things. For example, the theorem ∀n, n + 0 = 0 + n ought to be true, but it doesn't look like any instance of a = a.
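For concreteness, here is a minimal sketch of that definition as it appears in Agda (the standard library's version is universe-polymorphic, but otherwise the same):

```agda
-- The identity type: for a fixed a, the only constructor is refl,
-- and it only produces proofs of a ≡ a.
data _≡_ {A : Set} (a : A) : A → Set where
  refl : a ≡ a
```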
It turns out we can prove it. A proof of ∀n, n + 0 = 0 + n is a function that takes a particular n, like 3, and then proves its version of the theorem, like 3 + 0 = 0 + 3. Now, ITT can compute that 3 + 0 ⇝ 3 and that 0 + 3 ⇝ 3, and every construction respects computation, so the proof refl 3, which proves 3 = 3, is proving literally the same thing as 3 + 0 = 0 + 3. Returning to general n, there is a proof of ∀n, n + 0 = 0 + n because there is a function which, for any n, returns the same value as refl n, though it does this returning in a peculiar way that satisfies the type checker.
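To make this concrete, here is a sketch of that peculiar function in Agda, using the standard library; the name plus-zero is mine:

```agda
open import Data.Nat using (ℕ; zero; suc; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl; cong)

-- By induction on n. At zero both sides compute to 0, so refl is accepted;
-- at suc n both sides compute to suc of the corresponding side of the
-- inductive hypothesis, so cong suc finishes the proof.
plus-zero : ∀ n → n + 0 ≡ 0 + n
plus-zero zero    = refl
plus-zero (suc n) = cong suc (plus-zero n)
```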
Before I go on, I'd like to point out that by “the same value as” and similar phrases, I mean semantically the same value. It's common to present dependent type theories syntactically (these are the introduction rules, the elimination rules, the computation rules), but I find it helpful to talk about things semantically. Because in ITT, all sorts of things can have a type like a = b. But if we imagine running our code, we imagine that the machine must eventually create a value of type a = b, and per our definition, this value can only be tagged refl.
So in ITT, when things are equal, they compute to the same thing: they are the same as values. And further, a “forall” equality is true when the two sides are equal as values for every value of the quantified variable.1 [1 I'm explicitly trying to work outside ITT here, and discuss things meta-theoretically. I find within-the-theory thinking helpful when I'm trying to understand the behavior of the type-checker. But equally often I have meta-theoretic questions: what things can be proven equal? Or even philosophical ones: how should I think about this equality, or that one? Then, stepping outside the system and pretending to be a mathematician, not a computer, is helpful.]
Computing on functions
So now, what functions are equal in ITT? We can see that this must to some extent depend on what sort of computation ITT allows us to do to a function. For example, does ITT compute the function x ⇒ 3 + 0 down to x ⇒ 3? Yes: in ITT we are allowed to compute inside a function, so refl (x ⇒ 3), which proves (x ⇒ 3) = (x ⇒ 3), is proving literally the same thing as (x ⇒ 3 + 0) = (x ⇒ 3).
On the other hand, the function n ⇒ n + 0 might not compute to n ⇒ n if + has been defined by checking whether the left argument is zero or not. After all, the left argument is n, and who knows if that's zero? Then refl (n ⇒ n + 0) proves (n ⇒ n + 0) = (n ⇒ n + 0), but this isn't literally the same thing as (n ⇒ n + 0) = (n ⇒ n), so no simple proof works.
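The same contrast as a small Agda sketch (the names are mine; the commented-out attempt is exactly the proof the type checker rejects):

```agda
open import Data.Nat using (ℕ; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- Computation goes under the binder: 3 + 0 reduces to 3, so the two
-- lambdas are the same value and refl is accepted.
const-eq : (λ (x : ℕ) → 3 + 0) ≡ (λ (x : ℕ) → 3)
const-eq = refl

-- This is rejected: n + 0 is stuck, because + matches on its left
-- argument and n is a variable.
-- plus-zero-fun : (λ (n : ℕ) → n + 0) ≡ (λ (n : ℕ) → n)
-- plus-zero-fun = refl
```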
Using equalities
What's the point of proving an equality? An equality is useful because all equality stems from computation, and in ITT, no matter how you compute a term, you'll get the same value,2 [2 This is part of strong normalization, though I don't know which part. In the rewriting literature it's often called confluence, but that doesn't seem like a popular term in PL.] so it's always safe to replace one value by an equal one.
In ITT, you can find two versions of this. The first is the value-level property: if a = b, then for any function f, you have f a = f b. So any function you compute has the same value on equal terms, because again, all your equalities are really about computing to the same value, and functions only care what value you are, not how you compute to it.
There's also a type-level version of this, that is, a version of this for predicates. This tells you that if a = b and some property P is true of a, then it is also true of b; or, in symbols,

\[ \forall P,\ a = b \to P\,a \to P\,b \]
You can use this type-level version of equality to prove the value-level version. Just consider the property f a = f x of x. Clearly this property is true at a, by reflexivity. But then it must also be true at b if a and b are equal. So then f a = f b.
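In Agda, the two eliminators are the standard library's subst and cong; here is the derivation just described, sketched with primed names of my own to avoid clashing with the library:

```agda
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- The type-level eliminator: transport a proof of P a along a ≡ b.
subst′ : ∀ {A : Set} (P : A → Set) {a b : A} → a ≡ b → P a → P b
subst′ P refl pa = pa

-- The value-level eliminator, derived by transporting along the
-- predicate "f a ≡ f x", which holds at a by reflexivity.
cong′ : ∀ {A B : Set} (f : A → B) {a b : A} → a ≡ b → f a ≡ f b
cong′ f {a = a} eq = subst′ (λ x → f a ≡ f x) eq refl
```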
You may wonder: if predicates in ITT are functions to Type, can you prove the type-level eliminator from the value-level one? It turns out, you cannot. You can prove that P a = P b, but this isn't enough to get you from x : P a to x' : P b, because being of a type isn't a function. You can think of decomposing the type-level eliminator for equality into the value-level eliminator, plus an eliminator for type-level equality: A = B → A → B. But this is uglier: it looks like equality means different things on Type and on normal types. You do sometimes find this idea used for type theories without dependent types. If there are no functions from a value to a type, there's no reason to describe equality on types identically to how you describe equality on values, and you can't use predicates there anyway.
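That eliminator for type-level equality is just a coercion; as an Agda sketch:

```agda
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- Coercion along an equality between types: the A = B → A → B eliminator.
coerce : ∀ {A B : Set} → A ≡ B → A → B
coerce refl a = a
```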
Rewriting under binders
You may wonder: if you can prove that n + 0 = 0 + n, why can't you use that equality to prove (n ⇒ n + 0) = (n ⇒ 0 + n)? At first it looks like you just need to rewrite n + 0 to 0 + n, which you can do. But this doesn't work.
To help your intuition, don't think of having proved n + 0 = 0 + n. Instead, think of yourself as having proven, simultaneously, 0 + 0 = 0, 1 + 0 = 1, and so on, by giving the proofs refl 0, refl 1, and so on, though again you must do it in a circuitous way so that the type-checker is happy.3 [3 It's not so easy to get this proof past the type-checker. But of course, the proof must have the value refl n, since it's the only constructor of the relevant type. The power of meta-theoretic thinking!] You can rewrite by any of those equalities, but not by n + 0 = 0 + n in general, since that is actually a function from n to the appropriate equality. To rewrite n + 0 in n ⇒ n + 0, none of the equalities 0 + 0 = 0 + 0, 1 + 0 = 0 + 1, … is sufficient, not even a few of them together, because you have no idea which one you'll need. n could be anything!4 [4 Are there type theories that allow direct rewriting by quantified equalities? This sounds fascinating. I don't know how exactly you would set it up, but it might be worth looking into.]
So you cannot rewrite n + 0 to n in (n ⇒ n + 0), because really you only have the infinite collection of proofs 0 + 0 = 0, 1 + 0 = 1, and so on, none of which talks about the bound variable n.
Proving things not equal
In ITT, you prove two things unequal by assuming them equal and then proving ⊥. Let's prove 1 not equal to 0. So, assume that 1 is equal to 0, and prove ⊥.
To do so, construct a family of types X where X 0 is ⊥ and X 1 is some nice friendly type like ℕ. We can provide a member of X 1 easily. But if 1 = 0, then X 1 → X 0, so since there is a member of X 1, there must be a member of X 0 as well.5 [5 This member is called the “transport” of our member of X 1, the idea being that you can think of equality as a process, not just as a fact, and this process can turn things of type X 1 into things of type X 0. My mathematician friends tell me that this perspective goes back to the incredibly inventive mind of Grothendieck.] Since X 0 is just ⊥, we've constructed a proof of ⊥ as required. So, 0 ≠ 1. The core insight of this proof is that two values can be proven different if I can write a function that has different outputs for those inputs, one output being a provable proposition and the other being the false proposition.
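Here is that argument as an Agda sketch (the names are mine; subst plays the role of the transport from the footnote):

```agda
open import Data.Nat using (ℕ; zero; suc)
open import Data.Empty using (⊥)
open import Relation.Binary.PropositionalEquality using (_≡_; subst)

-- The family from the text: X 0 is ⊥, and X 1 (indeed X of any successor)
-- is the nice friendly type ℕ.
X : ℕ → Set
X zero    = ⊥
X (suc _) = ℕ

-- Assuming 1 ≡ 0, transport an easy member of X 1 (here, 42) to a member
-- of X 0, which is ⊥.
one≢zero : 1 ≡ 0 → ⊥
one≢zero eq = subst X eq 42
```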
So how would we prove two functions are unequal in ITT? We'd need a function that distinguishes them, one that returns different outputs when given the two functions as inputs. But how would you write such a function, one that takes a function and returns some value, where different function inputs produce different outputs? Well, pretty much the only thing you can do in ITT to a function is to apply it. If the two functions do different things upon being applied to the same argument, it's easy to see that the functions must be unequal. Say f and g give different outputs for the input a. Then x ⇒ x a takes a function, and returns different values when given f and g. A little bit of work and you've proven that f ≠ g.
Extensionality
On the other hand, there are functions, like n ⇒ n and n ⇒ n + 0, which provably return the same value for each argument (so we cannot prove them different) but which we also can't prove the same.
The usual way out of this bind is to just add an axiom that any two functions that are equal on all values are also equal. This axiom is usually called functional extensionality, and in its non-dependent form it is written
\[ \forall f\, g,\ (\forall x,\ f\,x = g\,x) \to f = g \]
Now again, this is introduced as an axiom, and axioms are a hack. For example, I claimed that the equality type family only had one member, but it turns out it also has this functional extensionality member. Moreover, since you can't compute through axioms, there are actually a lot of members. For example, imagine you want to prove that (x ⇒ x) = (x ⇒ x). One way is refl (x ⇒ x). But another way is to use function extensionality and then prove ∀ x, x = x with x ⇒ refl x.6 [6 Note the symmetry here. You can think of function extensionality as allowing you to distribute refl over ⇒. You can also see why this is surprising: no other constructors in ITT have built-in distribution rules like this.]
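In Agda, the axiom and those two different-looking proofs can be sketched like so (funext is my own postulate here, not a library definition; because it's an axiom, the second proof does not compute down to refl):

```agda
open import Data.Nat using (ℕ)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- Non-dependent functional extensionality, added as an axiom.
postulate
  funext : ∀ {A B : Set} {f g : A → B} → (∀ x → f x ≡ g x) → f ≡ g

-- One proof by plain reflexivity...
id-eq₁ : (λ (x : ℕ) → x) ≡ (λ (x : ℕ) → x)
id-eq₁ = refl

-- ...and one that goes through the axiom, "distributing" refl over the binder.
id-eq₂ : (λ (x : ℕ) → x) ≡ (λ (x : ℕ) → x)
id-eq₂ = funext (λ x → refl)
```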
It's not even clear that our original rationale for how equality works (that two things are only equal when they compute to the same thing) still holds. It turns out that it's safe to add functional extensionality as an axiom, and with functional extensionality assumed, you can actually prove anything that isn't a type or a function that returns types (or a function that returns functions that return a type, or …) either equal or unequal. That is, any Type₀ that you can construct is decidable.7 [7 Which is different from the claim that ∀ T : Type₀, decidable T, because ITT universes are open.]
So we can characterize the functional extensionality approach to the problem of function equality as resting on the assumption that ITT doesn't need more capabilities (as in, new things you could do that might tell more functions apart) but instead needs more logical power (as in, more ways to prove that things are true).
Alternative: HoTT
Since just adding a functional extensionality axiom is such a hack, people have looked for ways of changing how equality is defined to make function extensionality happen automatically.
One approach which is really hot right now is Homotopy Type Theory. One way to think about HoTT is that equality turns out to be really resilient to us “making up” new members. This is kinda surprising. If I told you I'd discovered a hitherto-unknown number x, you'd be right to doubt me. In fact, you can prove by induction that I hadn't: let P be the property of being 0 or 1 or any “normal” number. P is true of 0, and it is preserved by successor, so by induction it holds of all numbers. So it must hold of x, so x can't be some weird hitherto-unknown number.8 [8 If this feels hand-wavy, think of it like this: though I cannot write down, in ITT, an infinite disjunction, I can imagine what it would mean. ITT has an “open” universe: we don't need to be able to write down our types for them to be types. Any property of all types is just as true for “made up” types as for types you can write down.]
But with equality things are different. If I make up a bunch of equalities, often nothing bad happens, and you can't prove that my equalities aren't true. For example, HoTT has an axiom that adds equalities between any “equivalent” types. So for example, consider the two types

\[ \{n : \mathbb{N} \mid n = 0 \lor n = 1\} \hspace{1in} \mathsf{bool} \]

These two types don't normalize to the same value; for example, true is an element of the right type but not of the left type. But these types are “equivalent” in HoTT, so they are equal. This equality cannot be reflexivity, but HoTT adds it anyway.
Why don't weird things happen if you stick in fake equalities, like they would if you stuck in fake integers? This is beyond my pay grade, though I do remember Bob Harper explaining it very well. All I've got from that lecture is a memory of him saying that even though this equality isn't reflexivity, we've added enough equalities to ensure that it's equal to reflexivity, so no one notices.9 [9 How do you prove that no problems crop up? A model for HoTT has been constructed using simplicial sets, which are also beyond my pay grade.]
Anyway, some of the many equalities HoTT sticks in include every instance of function extensionality, in complicated or not-so-complicated ways according to which variant of HoTT you're using. In particular, there's a really slick proof of functional extensionality if you assume the existence of an interval type: a type with two members L and R which are different but equal. The proof goes like so:
Pick f and g from X to Y and let's suppose ∀ x, f x = g x. For x : X we have a function from the interval to Y: map L to f x, R to g x, and the equality between them to f x = g x. Now comes the trick. We have a function X → (interval → Y). Flip the order of the arguments and we get interval → (X → Y). What does this function do? It maps L to f, and R to g. So the equality between them must map to a proof of f = g. Slick!
You can also prove function extensionality using just the paths between types (I think “higher inductive” types like the interval and the “univalent” equalities between types are largely independent parts of HoTT). That proof is more complicated, hinging on the fact that two things can be equal in two different, unequal ways; in the book, the type of booleans is equal to itself both in the normal way, and by swapping true and false.
Alternative: OTT
Conor McBride has another exciting proposal, which he calls Observational Type Theory. The idea is to totally replace the definition of equality with a variant of his heterogeneous equality. Normal equality is always between two things of the same type, but heterogeneous equality can go between things of different types. In OTT, there are two ways to make a (heterogeneous) equality. One is the normal way: refl a proves a = a, or, to be a little more clear about the heterogeneity, we might say that if a : A, then refl a proves a : A = a : A. But there's a second way. Equality can be used to “coerce”, like in the special interpretation of equality on types up above: if e : A = B and a : A, then a[e] : B. And the second kind of heterogeneous equality is the “coherence” property: a : A = a[e] : B.
In OTT, you can prove function extensionality, and in general you can prove a lot of “natural” equalities (like the fact that all proofs of a = a are equal). I don't really understand what's going on here too well, but heterogeneous equality gives a lot of looseness to start with. Here's one way I like to think about it: one way to get a “surprising” heterogeneous equality is to start with something simple like a : A = b : A, and take a dependent function f with type (a : A) → T a. Then f a : T a = f b : T b, and note that the left and right here have different types. If you think about the type ∃ T (that is, the type (a : A) × T a of dependent pairs), then with normal equality we can say that (a, f a) = (b, f b), but we can't say that f a = f b, since it might not even type-check.10 [10 We can't say that f a ≠ f b either!]
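Agda's standard library ships a heterogeneous (“John Major”) equality in this spirit, and the observation above can be sketched with it (hcong is my own name; I rename the heterogeneous refl to hrefl to keep the two equalities visually apart):

```agda
open import Relation.Binary.PropositionalEquality using (_≡_; refl)
open import Relation.Binary.HeterogeneousEquality using (_≅_)
  renaming (refl to hrefl)

-- From a homogeneous equality a ≡ b and a dependent function f, we get a
-- heterogeneous equality between f a : T a and f b : T b, even though the
-- two sides live in different types.
hcong : ∀ {A : Set} {T : A → Set} (f : (a : A) → T a) {a b : A} →
        a ≡ b → f a ≅ f b
hcong f refl = hrefl
```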
One way I've been thinking about heterogeneous equality (and this may be wrong!) is that a : A = b : B means the same thing as there existing some f : (x : X) → T x and some x, y : X such that a = f x, b = f y, and (x, f x) = (y, f y) in ∃ T. The f is hidden; I don't mean that there exists such an f in the sense that you can get it out of a heterogeneous equality.11 [11 I think of this as “squashing” in HoTT.] Since the function is hidden, you can't tell two proofs of the same heterogeneous equality apart, but since one must exist, you get a lot of flexibility without losing soundness.12 [12 This intuition for heterogeneous equality, and especially OTT's variations on the theme, really deserves a proper formal proof, which is unfortunately beyond my pay grade.]
I've heard it said (by Dan Licata, I think) that you can regard OTT as an approximation to HoTT13 [13 The joke is that HoTT, standing not only for the theory of homotopy types and type theory of a homotopic sort, also stands for higher observational type theory. What an acronym!] which describes analogs of the first two levels in HoTT's universe: h-props and h-sets. On the other hand, OTT also manages to make a lot more equalities definitional, not propositional, and I think Conor McBride might not like that characterization of OTT. So I'll leave off my musings before I say anything that gets me in trouble.
Conclusion
In ITT, there are many pairs of functions that are not provably equal or unequal, and this is upsetting. The usual philosophical position is that all such pairs of functions, which must return equal outputs on every input, ought to be equal. That is, no new mechanisms should be added to ITT to allow such pairs to be told apart; instead, new mechanisms should be added to prove them equal. Several such mechanisms have been proposed, ranging from the simple hack of adding an axiom declaring such pairs equal to the complex machinery of HoTT or OTT. Neither HoTT nor OTT currently offers strong normalization (HoTT has no operational semantics yet; OTT does not represent types and proofs as a first-class concept). As always, more research is necessary.