# Hyperbolic sines in math.js

In between PLDI submission and response, I’ve been taking another look at benchmark results for Herbie, the tool my colleagues and I built to automatically fix rounding errors in floating-point code. A few months ago, Herbie found a bug in the complex square-root routine in math.js, an open-source mathematics library. A lot of work has gone into Herbie since then, and when I ran it again on math.js, I realized that it had found another bug. I submitted a patch, which has now been merged, so I want to again pull up the bug, talk about the fix, and mention some of the challenges of writing correct floating-point code.

## Math.js

I don’t want to rehash my description from last time, but the short of it is that math.js is a mathematics library for JavaScript. It’s pretty extensive, including support for complex number arithmetic, matrices, unit conversions, and so on. I found it on Hacker News a few months ago, and converted parts of the code to Herbie test cases. I had found a bug in math.js before, and so was already familiar with the process for contributing (in short: the authors give very good feedback on pull requests and insist on good, maintainable code).

## Hyperbolic sines

The hyperbolic sine is defined by \(\operatorname{sinh}(x) = (e^x - e^{-x}) / 2\); hyperbolic cosine is the corresponding function \(\operatorname{cosh}(x) = (e^x + e^{-x}) / 2\). I think the name comes from the fact that \((\operatorname{cosh}(x), \operatorname{sinh}(x))\) traces out a hyperbola, and the fact that they satisfy many of the same equations as the regular trigonometric sine and cosine, but sometimes with a sign flip. Evaluating the hyperbolic sine and cosine is also important to evaluating the sine and cosine of complex numbers, which is how I first tracked this bug down.^{1}

However, evaluating these functions on a computer is a bit more challenging than defining them mathematically. As I discussed in Floating Point Guarantees, every floating point operation must round the ideal real number result back to a floating-point number. This can be problematic when the “important” part of the number is in the less-significant, not more-significant parts. A great example of this comes up in the hyperbolic sine.

To evaluate the hyperbolic sine of a number \(x\) near zero, you might compute \(e^x\) and \(e^{-x}\) first. But this would be unfortunate, since for \(x\) near zero, \(e^x\) is \(1 + x + O(x^2)\). In other words, the important part of the output is not the most-significant bits (which come from the 1) but less-significant bits (from the \(O(x)\) terms). When \(e^x\) is rounded, the bits that are lost are the important ones. Then, when we subtract \(e^x\) and \(e^{-x}\), the two \(1\) values cancel, and the rounding error is just as big as the actual answer. Instead of a reasonable value, you compute 0.
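A quick Node.js sketch makes the cancellation concrete (`naiveSinh` is my name for the textbook definition here, not a math.js function):

```javascript
// Naive hyperbolic sine, straight from the mathematical definition.
function naiveSinh(x) {
  return (Math.exp(x) - Math.exp(-x)) / 2;
}

const x = 1e-30;
// For x this small, Math.exp(x) and Math.exp(-x) both round to exactly 1,
// so their difference is exactly 0: the entire answer is lost.
console.log(naiveSinh(x));  // 0
console.log(Math.sinh(x));  // 1e-30, the correct answer
```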

## Series expansion

The solution in this case does not come from clever reassociations and algebraic transformations (as in the square root case) but through a trick from analysis: Taylor expansion. We know that \[e^x = 1 + x + \frac12x^2 + \frac16x^3 + \dotsb,\] and for small values of \(x\), the later terms in this series are not important. So (ignoring the subtle but in this case irrelevant issues of convergence) \[e^x - e^{-x} = 2 \left(x + \frac16x^3 + \frac1{120}x^5 + \dotsb\right).\] We can truncate the series after the \(x^5\) term (we only care about small \(x\) here), to find that \[\operatorname{sinh}(x) \approx x + \frac16 x^3 + \frac1{120}x^5.\]
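As a sanity check (a sketch of my own, not Herbie’s output), the truncated series agrees with the built-in `Math.sinh` to within an ulp or so for small inputs:

```javascript
// Truncated Taylor series for sinh, valid for small |x|.
function sinhSeries(x) {
  return x + (x * x * x) / 6 + (x * x * x * x * x) / 120;
}

// The first omitted term is x^7/5040; at x = 0.01 that is about 2e-18,
// smaller than one ulp of the result, so the truncation is invisible
// at double precision.
console.log(sinhSeries(0.01));
console.log(Math.sinh(0.01));
```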

How do we evaluate the hyperbolic sine for general values of \(x\)? Well, we can test if \(x\) is sufficiently close to \(0\) (in this case, we test whether it is between \(-1\) and \(1\)), and if so use the series expansion; otherwise use the original function, which is accurate in those cases.

Herbie approaches the problem just as described above. It first tries a variety of reassociations and algebraic transformations to the formula, but ultimately these don’t improve its accuracy. Then, it also tries to compute a series expansion, just like we did above (using an algorithm I’ve described elsewhere), and then computes a good branch point.^{2} Series expansion is a recent addition to Herbie, and I’m glad I have a visible way to demonstrate its importance. It’s also come up as a way of avoiding floating-point overflow in a few cases.

## The patch

Using Herbie’s results, I put together a patch, which computes the hyperbolic sine with:

```javascript
if (Math.abs(x) < 1) {
  return x + (x * x * x) / 6 + (x * x * x * x * x) / 120;
} else {
  return (Math.exp(x) - Math.exp(-x)) / 2;
}
```

I also went into the code for complex cosine and sine, and changed them to call the hyperbolic sine and cosine functions^{3}. Then I put together some tests, made sure the old version failed and the new version passed, and sent the patch to the math.js developers.

The next day, I was in for a surprise. My patch was rejected, and embarrassingly it was because my own test failed! How could that be?

The test in question was testing that \(\cos(10^{-50}+10^{-50}i)\) was exactly \(1 - 10^{-100}i\). Of course, it is not exactly equal mathematically, but my assumption was that, seeing as the actual answer was off from this one by \(10^{-150}i\), the difference would not come up. However, when others ran my test (including the automatic build-bot), they found that instead of `1e-100`, they were getting `-1.0000000000000001e-100`. This seems like a difference in rounding mode, but I’m not sure which distribution or build of Node.js they were running. I fixed the bug by just testing that the imaginary value was nonzero. This isn’t very precise, but it gets the job done.^{4}

And this brings up one of the difficult things about writing floating-point code. Even when you know a lot about floating-point numbers, seemingly more than you’d need to, it’s still hard to predict what exactly will happen, especially when you factor in the variations due to rounding modes, different implementations of built-in functions (which might round differently), and of course the whole mess with 80-bit floating point on x86, which means the flags you pass to your compiler can affect the result, too. Given that difficulty, you simply must use approximate equality routines. But then the problem, of course, is how much equality is enough equality? The generic routine used by most of math.js treats numbers less than `0.0001` as close enough to zero, but in many cases that’s a bad call^{5}. Writing correct floating-point code requires thinking very carefully about tolerances, error, and so on, and handling that in a generic way is near-impossible.
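One common shape for such a routine combines a relative tolerance with an absolute fallback near zero. Here is a hypothetical helper in that style (my own sketch, not the routine math.js actually uses, and the tolerance values are illustrative):

```javascript
// Hypothetical helper: values are "equal" if within relTol relative error
// of each other, with absTol as a fallback for comparisons near zero.
function approxEqual(a, b, relTol = 1e-12, absTol = 1e-300) {
  const diff = Math.abs(a - b);
  if (diff <= absTol) return true;
  return diff <= relTol * Math.max(Math.abs(a), Math.abs(b));
}

// A one-ulp difference passes the relative test even at tiny magnitudes...
console.log(approxEqual(1e-100, 1.0000000000000001e-100)); // true
// ...but a sign flip of the same magnitude does not.
console.log(approxEqual(1e-100, -1e-100)); // false
```

Of course, picking `relTol` and `absTol` is precisely the hard part the paragraph above describes; there is no single choice that is right for every test.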

I’ve still got a few more patches to submit to math.js, but I don’t always understand Herbie’s fix or the problem it’s fixing. I don’t want to submit a patch whose explanation is “Herbie said so”, so I’ll be investigating these over the next few days.

^{1} When I first converted math.js to Scheme (as required to run Herbie), math.js didn’t implement a standalone hyperbolic sine function. Now it does, so my patch rerouted hyperbolic sine and cosine to use it, then fixed that function.

^{2} It actually computes a branch point a bit off from 1, but I decided to stick with code that doesn’t include magic numbers, since the difference in accuracy was slight.

^{3} Note that the hyperbolic cosine function, which adds \(e^x\) and \(e^{-x}\), does not suffer from accuracy problems. The answer will be approximately \(1\), so the most-significant terms are important, and the rounding in computing \(e^x\) and \(e^{-x}\) is irrelevant.
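A quick check of this claim (my own sketch, not math.js code):

```javascript
// Naive hyperbolic cosine, straight from the mathematical definition.
function naiveCosh(x) {
  return (Math.exp(x) + Math.exp(-x)) / 2;
}

// Near zero, both exponentials round to 1, and since we add rather than
// subtract, the leading bits carry the answer: nothing important is lost.
console.log(naiveCosh(1e-30));  // 1
console.log(Math.cosh(1e-30)); // 1
```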

^{4} Math.js’s test harness has a function that tests that two values are “close enough”, but when one of the values is zero it just tests absolute error, which can lead to huge inaccuracies.

^{5} There are as many double-precision floating-point numbers between `1e-100` and `0` as between `1e-100` and `8.8e+107`. Or think of dividing by the hyperbolic sine of a number. You can’t have that number be accidentally zero.