Pavel Panchekha


Share under CC-BY-SA.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

Exact Geometric Operations

Some of my friends at the University of Washington have been working with computational geometry and running into quite a few headaches because of floating-point error. Of course, they were expecting my work on Herbie to have taught me how to help, so I've been fielding a few floating-point questions. When they got desperate enough to start using MPFR and talking about computable reals, I had an idea.

My friends are working on a programming language for describing shapes in two and three dimensions. I don't know much about it, but I do know that it has only a few simple operations, like translation, rotation, and union. I want to implement all of these operations in a totally error-free way, so my friends never have to worry about rounding error ever again.

Looking over that list, translation shouldn't cause too much error—rational arithmetic is exact and easy to implement. But rotation is more interesting: sine and cosine are transcendental functions, and they satisfy complicated identities that a user may actually care about, like rotating a shape by 60 degrees six times leaving it unchanged. With rounding error, that would be tricky to ensure.
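To see the problem concretely, here is that example in floating point (a Python sketch; the point (1, 0) stands in for a shape):

```python
import math

# Rotate the point (1, 0) by 60 degrees, six times, using floating-point
# sine and cosine.  Done exactly, six such rotations are the identity.
theta = math.pi / 3
c, s = math.cos(theta), math.sin(theta)
x, y = 1.0, 0.0
for _ in range(6):
    x, y = c * x - s * y, s * x + c * y
# x and y are now merely *close* to 1 and 0, not equal to them.
```

In double precision the result typically drifts by a few ulps per rotation—exactly the kind of error an exact representation rules out.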

I needed exact arithmetic with support for the field operations, cosine and sine of rational multiples of π, and the ordering operations. This post uses a mathematical perspective; I hope to later write a paper that describes my implementation from a programming perspective.

Exact sines and cosines are possible

In most geometric shapes people actually care about, all of the angles are going to be rational multiples of π; I'm going to call these rational angles in the rest of this post, though of course I mean rational multiples of π. One nice thing about rational angles is that:

Theorem: Sines and cosines of rational angles are algebraic.

Proof: You can come up with a polynomial formula for \(\cos nx\) in terms of \(\cos x\) by applying the angle-addition formulas \(n\) times; this polynomial is the Chebyshev polynomial \(T_n\), with \(\cos nx = T_n(\cos x)\).

Now suppose the angle we are interested in is \(a \pi / 2 b\), for integers a and b. First, since we might want the sine or the cosine, note that \(\sin \frac{a \pi}{2 b} = \cos \left( \frac{\pi}2 - \frac{a\pi}{2b}\right) = \cos \frac{(b - a)\pi}{2b}\). Now, for any angle \(x \pi / 2 b\) (where x is a or b - a), we have: \[ \cos \frac{x\pi}{2} = T_{b}\left(\cos \frac{x \pi}{2 b}\right). \] The value on the left is an integer (0, 1, or −1) no matter the value of x, so both the sine and cosine of the angle in question are roots of the integer-coefficient polynomial \(T_b(y) - \cos \frac{x\pi}{2}\) and are therefore algebraic. ∎
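One can check the identity \(T_b(\cos\frac{a\pi}{2b}) = \cos\frac{a\pi}2\) numerically with a short Python sketch, evaluating \(T_n\) by its three-term recurrence:

```python
import math

def cheb_T(n, x):
    """Evaluate the Chebyshev polynomial T_n at x via the recurrence
    T_0 = 1, T_1 = x, T_{k+1} = 2 x T_k - T_{k-1}."""
    t_prev, t_cur = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

# T_b(cos(a*pi/(2b))) = cos(a*pi/2), which is always 0, 1, or -1.
for a in range(1, 8):
    for b in range(1, 8):
        lhs = cheb_T(b, math.cos(a * math.pi / (2 * b)))
        rhs = math.cos(a * math.pi / 2)
        assert abs(lhs - rhs) < 1e-9, (a, b)
```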

Representation and equality

The cosine of \(a \pi / 2 b\) is equal to \(T_a(\cos \frac{\pi}{2 b})\), thanks to the wonderful properties of the Chebyshev polynomials. So, we only need to find a way to do exact arithmetic with \(\cos \frac{\pi}{2b}\). Luckily, we have \[ \cos \frac{\pi}{2} = 0 = T_{b}(\cos \frac{\pi}{2b}), \] so we need to do exact arithmetic with a root of \(T_b\), in other words we need something like \(\mathbb{Q}[x] / T_b\).

I say something like, because \(T_b\) is not necessarily irreducible (it is irreducible only when \(b = 2^k\)). Instead, if \(k \mid b\) and \(k\) is odd, then \(T_{b/k} \mid T_b\) (see Rayes, Trevisan, and Wang, Factorization properties of Chebyshev polynomials, available online). The root of \(T_b\) that we care about, \(\cos \frac{\pi}{2b}\), is not a root of any smaller \(T_{b/k}\). So its minimal polynomial \(P\) is characterized by \(P \mid T_b\) but \(P \nmid T_{b / k}\) for any odd \(k \mid b\) with \(k > 1\). Call that polynomial \(D_b\) (for more, and a simpler construction, see Watkins and Zeitlin, The Minimal Polynomial of cos(2π/n), available online).

My library implements exact sines and cosines, for angles of the form \(a \pi / 2 b\), by implementing arithmetic in \(\mathbb{Q} / D_b\).

I adopt a peculiar representation of elements of this field. First, \(D_b\) is complicated to compute, so I will represent numbers as elements of \(\mathbb{Q} / T_b\), using the fact that: \[x = y \in \mathbb{Q} / D_b \iff \exists^{1 < k \le b}_{k \mid b,\ k\:\mathrm{odd}}\ T_{b / k}\, x = T_{b / k}\, y \in \mathbb{Q}/T_b.\] For example, \(\cos \frac\pi3 = \frac12\). Both sides are representable in \(\mathbb{Q} / T_3\), as \(T_2\) and \(\frac12\) respectively. They are equal because, for \(k = 3\), we have \(T_1 \cdot T_2 = \frac12 (T_3 + T_1) = \frac12 T_1 \in \mathbb{Q} / T_3\), which matches \(T_1 \cdot \frac12 = \frac12 T_1\).

Furthermore, I represent the elements of \(\mathbb{Q} / T_b\) as linear combinations of Chebyshev polynomials. This representation makes addition, multiplication, and the sine and cosine operations not only easy to implement but also friendly to a sparse representation. Note that multiplication can produce terms \(T_{i + j}\) where \(i + j \ge n\); luckily, \(T_{n} = 0\) and \(T_{n + d} = -T_{n - d}\) in \(\mathbb{Q}/T_n\), so these terms are easy to handle.
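To make this concrete, here is a minimal Python sketch of multiplication in this basis (the actual library is in OCaml; the names are mine). It uses the product identity \(T_i T_j = \frac12(T_{i+j} + T_{|i-j|})\) together with the reductions \(T_n = 0\) and \(T_{n+d} = -T_{n-d}\), and stores elements as sparse maps from Chebyshev index to rational coefficient:

```python
from fractions import Fraction

def reduce_index(i, n):
    """Reduce the basis element T_i modulo T_n, using T_n = 0 and
    T_{n+d} = -T_{n-d}.  Returns (index, sign); sign 0 means the
    term vanishes."""
    if i == n:
        return 0, 0
    if i > n:
        return 2 * n - i, -1
    return i, 1

def mul(p, q, n):
    """Multiply two elements of Q/T_n, stored as sparse dicts {i: a_i}
    meaning sum a_i T_i, via T_i T_j = (T_{i+j} + T_{|i-j|}) / 2."""
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            for k in (i + j, abs(i - j)):
                idx, sign = reduce_index(k, n)
                if sign:
                    out[idx] = out.get(idx, Fraction(0)) + a * b * sign / 2
    return {i: c for i, c in out.items() if c}

# In Q/T_3: T_1 * T_2 = (T_3 + T_1)/2 = T_1/2, since T_3 = 0.
print(mul({1: Fraction(1)}, {2: Fraction(1)}, 3))   # {1: Fraction(1, 2)}
```

Addition is then just a merge of the two coefficient maps, which is where the sparse representation pays off.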

As a final tweak, I put all coefficients on a common denominator.

Overall, my representation makes most operations much simpler than they would be in a traditional implementation of \(\mathbb{Q} / D_b\).

Changing field

Just implementing arithmetic in \(\mathbb{Q} / D_b\) is not enough, because we do not know up front what angles \(a \pi / 2 b\) the user will want to take sines and cosines of, and thus we do not know \(b\) up front.

Luckily, a number \[ \frac1d \sum_i a_i T_i(x) \in \mathbb{Q} / T_b \] in my representation of \(\mathbb{Q} / D_b\) is easy to translate into \[ \frac1d \sum_i a_{i} T_{ci}(x) \in \mathbb{Q} / T_{cb}, \] which represents the same real value because \(T_{ic}(x) = T_i(T_c(x))\) and because \(T_c(\cos \frac{\pi}{2bc}) = \cos \frac{\pi}{2b}\).

This operation, which I call rebasing, maps \(\mathbb{Q} / D_b\) to \(\mathbb{Q} / D_{cb}\). It thus makes sense to think of a category whose objects are the fields \(\mathbb{Q} / D_b\) and whose morphisms are rebasing maps, which preserve the real value that each element represents. Given two fields, \(\mathbb{Q} / D_b\) and \(\mathbb{Q} / D_a\), both map into \(\mathbb{Q} / D_{ab}\), so this category has coproducts.

As a result, if we need to do an operation, like addition, to two values that are represented in different fields in this category, we can first map both to the coproduct of those two fields, and then do the operation there. For example, to add \(x = \cos \frac\pi3\), represented by \(T_2(x) \in \mathbb{Q} / D_3\), to \(y = \cos \frac\pi4\), represented by \(T_1(x) \in \mathbb{Q} / D_2\), we first move both to \(\mathbb{Q} / D_6\) and then sum them to \(T_3(x) + T_4(x) \in \mathbb{Q} / D_6\).
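Here is a Python sketch of rebasing and of addition via the coproduct, reusing the sparse-map representation (the library itself is in OCaml; the function names are mine):

```python
import math
from fractions import Fraction

def rebase(p, c):
    """Map {i: a_i}, meaning sum a_i T_i(x) in Q/D_b, into Q/D_{cb}:
    T_i becomes T_{ci}, because T_{ci} = T_i composed with T_c."""
    return {c * i: a for i, a in p.items()}

def add_in_coproduct(p, b1, q, b2):
    """Add a value in Q/D_{b1} to a value in Q/D_{b2} by rebasing
    both into the common field Q/D_{b1*b2}."""
    out = dict(rebase(p, b2))
    for i, a in rebase(q, b1).items():
        out[i] = out.get(i, Fraction(0)) + a
    return out, b1 * b2

# cos(pi/3) is T_2 in Q/D_3; cos(pi/4) is T_1 in Q/D_2.
s, b = add_in_coproduct({2: Fraction(1)}, 3, {1: Fraction(1)}, 2)
# The sum is T_4 + T_3 in Q/D_6; check numerically at x = cos(pi/12),
# where T_i(x) = cos(i * pi/12).
val = sum(float(a) * math.cos(i * math.pi / 12) for i, a in s.items())
assert abs(val - (math.cos(math.pi / 3) + math.cos(math.pi / 4))) < 1e-9
```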

Field operations

The field operations are easy to implement in my representation. Addition and subtraction can be implemented directly. Multiplication makes use of the identity \(T_i T_j = \frac12 ( T_{i+j} + T_{i - j})\), where \(T_{-n} = T_{n}\), as described by Rutta (Basic algorithms and resultant for polynomials in Chebyshev basis, by Olivier Rutta, available online).

Division in polynomial quotient rings is usually implemented using the extended GCD algorithm, which, given two polynomials \(p\) and \(q\), finds polynomials \(A\) and \(B\) such that \[\mathrm{gcd}(p, q) = A p + B q. \] You can use this for division in \(R / p R\): to compute \(a / b\), first compute \(1 / b\) by writing \(1 = A p + B b\)—assuming that \(1\) is the GCD of \(p\) and \(b\), which it will be if \(p\) is irreducible. Then \(1 = B b\) modulo \(p\), or in other words \(B = 1 / b\), so \(a B = a / b\).
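As an illustration of the textbook version of this technique, here is extended-GCD inversion in \(\mathbb{Q}[x]/(p)\) in the ordinary monomial basis (a Python sketch; the library works in the Chebyshev basis, and these helpers are mine):

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (lists are lowest degree first)."""
    while p and p[-1] == 0:
        p.pop()
    return p

def pmul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1) if p and q else []
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return trim(out)

def psub(p, q):
    out = [Fraction(0)] * max(len(p), len(q))
    for i, a in enumerate(p):
        out[i] += a
    for i, b in enumerate(q):
        out[i] -= b
    return trim(out)

def pdivmod(a, b):
    """Polynomial division with remainder over the rationals."""
    a, q = a[:], [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b):
        c = a[-1] / b[-1]
        d = len(a) - len(b)
        q[d] = c
        a = psub(a, pmul([Fraction(0)] * d + [c], b))
    return trim(q), a

def xgcd(a, b):
    """Extended GCD: returns (g, A, B) with g = A a + B b."""
    r0, r1 = a, b
    s0, s1 = [Fraction(1)], []
    t0, t1 = [], [Fraction(1)]
    while r1:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, psub(s0, pmul(q, s1))
        t0, t1 = t1, psub(t0, pmul(q, t1))
    return r0, s0, t0

def invert(b, p):
    """1/b in Q[x]/(p), assuming gcd(b, p) = 1: from 1 = A p + B b,
    the inverse is B (after scaling the constant gcd to 1)."""
    g, A, B = xgcd(p, b)
    assert len(g) == 1          # gcd must be a nonzero constant
    return [c / g[0] for c in B]

# Example: invert x in Q[x]/(x^2 - 2); since x^2 = 2, 1/x = x/2.
print(invert([Fraction(0), Fraction(1)],
             [Fraction(-2), Fraction(0), Fraction(1)]))
```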

Rutta describes an extended GCD algorithm for sums of Chebyshev polynomials; it's basically identical to the normal extended GCD algorithm over polynomial rings. However, implementing it in my context is a little tricky, for two reasons: first, I am working modulo \(T_n\), which is reducible; and second, because I am always working modulo \(T_n\), I can't represent \(T_n\) itself directly (it is zero in my representation).

For the first issue, the solution is simple. Suppose, in the course of computing \(1 / p\), I find that \(r = A T_n + B p\), where \(r\) is the GCD of \(T_n\) and \(p\), and suppose further that \(r\) is not one. Then I know that \(r\) divides \(T_n\). Since \(D_n \not\mid p\) (otherwise I would be dividing by zero), I know that \(D_n \not\mid r\). So, since I am really working modulo \(D_n\), I can use the extended GCD algorithm again to find \(1 = A' (T_n / r) + B' p\), and now \(B'\) is the inverse of \(p\).

The second issue is also simple: I add a special case to the extended GCD algorithm for \(T_n\). Since \(T_n\) has a higher degree than any representable polynomial \(p\), and since the difference in their degrees is never greater than \(n\), this special case can only occur on the first iteration of the loop inside the extended GCD algorithm, so it can be handled up-front.


Ordering operations

Comparing two of our numbers for equality is easy, but how should less-than and greater-than work?

My solution is to compute interval bounds for the real value of each number in some fixed-point representation (\(n\) bits). If the intervals don't overlap, the comparison is easy. If they do overlap, try again with a larger \(n\). Here's the trick: this can't go on forever, because the intervals shrink by a factor of 2 every time \(n\) increases by one, and if the two numbers were actually equal, the exact equality test would already have told us so.
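A sketch of that refinement loop in Python, using \(\sqrt2\) (bounded via integer square roots) as a stand-in for an exact number whose bounds we can compute to any precision; the function names are mine, not the library's:

```python
from fractions import Fraction
from math import isqrt

def sqrt_bounds(k, n):
    """Rational bounds on sqrt(k) of width 2^-n: the floor of
    2^n sqrt(k) and its successor, both divided by 2^n."""
    lo = isqrt(k << (2 * n))
    return Fraction(lo, 1 << n), Fraction(lo + 1, 1 << n)

def compare(bounds_x, bounds_y, equal):
    """Order two exact numbers by refining interval bounds; `equal`
    is the exact equality test, consulted before looping forever."""
    if equal:
        return 0
    n = 8
    while True:
        xlo, xhi = bounds_x(n)
        ylo, yhi = bounds_y(n)
        if xhi < ylo:
            return -1
        if yhi < xlo:
            return 1
        n *= 2   # intervals overlap: retry at higher precision

# Compare sqrt(2) with 3/2; they are unequal, so this terminates.
c = compare(lambda n: sqrt_bounds(2, n),
            lambda n: (Fraction(3, 2), Fraction(3, 2)),
            equal=False)
print(c)   # -1: sqrt(2) < 3/2
```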

I also use this trick to round a number to its nearest integer, which I use to implement remainder (interpreted as remainder in ℝ, not in \(\mathbb{Q} / D_b\)).

In practice, this interval trick works fairly well but is slow; I augment it with a cached floating-point computation that records not only the floating-point value of an exact number, but also information about it necessary to compute sound, conservative error bounds on that floating-point value. These error bounds can be used to decide the equality and ordering of the vast majority of values, since most values compared are unequal and in fact quite far apart.

Additional hacks

I represent some additional operations symbolically, mainly the constant π and square roots.

Square roots are interesting because it would in theory be possible to represent them explicitly, since square roots are algebraic. However, square roots are not used too often, and in a geometry library most of their uses are unnecessary (for example, norms and distances can often be represented in squared form).

I also have symbolic support for the arctangent and arcsine functions, in a limited way: I use separate types for lengths and angles, where the arc-trigonometric functions take lengths to angles and the trigonometric functions take angles to lengths. Since practical use in geometry treats angles and lengths separately, this two-sorted arrangement works fine. Angles come in two forms: rational multiples of \(\pi\), and arc-trigonometric functions of a length. Taking, say, the sine of the arctangent of \(\frac13\) can then be done exactly, though it involves square roots (it is \(\frac1{\sqrt{10}}\)), which are represented symbolically.
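A quick floating-point sanity check of that value, using the identity \(\sin(\arctan t) = t / \sqrt{1 + t^2}\):

```python
import math

# sin(arctan(t)) = t / sqrt(1 + t^2); for t = 1/3 this is 1/sqrt(10).
t = 1 / 3
assert abs(math.sin(math.atan(t)) - t / math.sqrt(1 + t * t)) < 1e-12
assert abs(math.sin(math.atan(t)) - 1 / math.sqrt(10)) < 1e-12
```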


My implementation, which is in OCaml and uses the ZArith big-integer library, is fast enough to be usable and supports almost all operations the geometry application uses; it also passes almost all of the tests, and the remaining failures are due to an ongoing effort to replace all decimal constants with exact values (which we're doing by putting the decimal constants into Wolfram|Alpha and hoping for a nice symbolic representation). After some optimization effort, it is roughly 20× slower than floating-point computation, making it faster than arbitrary-precision code. (I'm working on further speed-ups.) It's unclear if the benefit of exactness is really worth the slow-down, especially since the application is being written to work in both floating-point and exact modes, but at least the price to pay for certainty is small.
