Pavel Panchekha


Share under CC-BY-SA.

Engineering Taste

"Hello Earthlings!" Up here in academia, unwashed masses of graduate students (along with smaller, washed masses of professors and researchers) are busy inventing the future of software engineering. Test generation, software verification. SMT solvers that find bugs automatically, and synthesis tools that fix them. And the theory needed to fuel the next few centuries of advances in hardware, software, in the problems we want to solve and the ways we solve them. For most programmers, this will be useful only decades after academia works out the details.

Unfortunately, using these tools requires a measure of taste, a trait we are doing a poor job teaching. Taste is humility: knowing your limits and your tools' limits. Taste is also a habit of mind: a good programmer treats bugs as facts to investigate, to analyze, categorize, and understand. I'll start with a familiar example to demonstrate the need for taste: unit tests.

Unit tests are a simple concept: a good way to make sure your code is correct is to check it on some manually-verified input-output pairs. But taking this concept and implementing it requires answering some very complex questions: Which functions should be tested? On which inputs? How many tests are enough, and how much time are they worth?

Unfortunately, these are difficult questions to answer without some experience. The best option, of course, is to test every function, on all possible inputs, using input-output pairs drawn from well-studied, peer-reviewed literature. But this ideal is impossible to reach. So what should the tradeoff be between functions tested and time spent writing tests? Between adding more tests to one function and testing more functions? Between fighting for line coverage and for branch coverage on already-covered lines of code?

The answers to these questions require experience to even understand. For example, which functions should be tested? The functions most likely to have bugs, or where bugs are most likely to be insidious. Likewise, the input-output pairs we should check are those most likely to fail the test, and these test cases should be ones where there is no question whether an answer is right or wrong, and where the answer is unlikely to change with time. But knowing which functions are buggy, or which input-output pairs will trigger unknown bugs, is difficult. The long battle of experience teaches you to beware functions with many branches and complex state (though some state looks complex and is not, and some branches are benign while others hide bugs), to beware edge cases and error conditions, and to test each domain of definition of a function. But a novice would have a hard time even telling us which inputs are edge cases.
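To make "domains of definition" concrete, here is a minimal sketch using a hypothetical clamp function (the function and its tests are mine, not the author's): the input-output pairs below target each branch of the function and each boundary between its domains, which is where off-by-one bugs tend to hide.

```python
def clamp(x, lo, hi):
    """Restrict x to the closed interval [lo, hi]."""
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# One test per domain of definition: below, inside, and above the
# interval, plus the two boundaries themselves (the edge cases).
assert clamp(-5, 0, 10) == 0    # below the interval
assert clamp(3, 0, 10) == 3     # strictly inside
assert clamp(15, 0, 10) == 10   # above the interval
assert clamp(0, 0, 10) == 0     # lower boundary
assert clamp(10, 0, 10) == 10   # upper boundary
```

Five tests cover every branch and every boundary; a novice might instead write five tests that all land strictly inside the interval and exercise only one branch.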

Likewise, the tools of future programmers require some application of taste. Software verification requires many decisions—what code to verify, and which properties to prove—that tasteless programmers will find difficult. Automated bug-finding tools require changing your programming style to trigger fewer false positives. Test generation means writing more-testable functions to avoid an exponential blow-up of test cases. All this demands knowing how and where bugs occur, which bugs matter and which do not, and how to balance adding new features against solidifying old ones.
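The taste that test generation demands can be sketched with plain randomized property testing (a simpler stand-in for the real tools; the toy insertion sort below is my example, not the author's). Instead of hand-picked input-output pairs, you state properties that must hold for any input, and the tool searches for a counterexample:

```python
import random

def insertion_sort(xs):
    """A small function under test (stand-in for real code)."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

# Generate random inputs and check properties any correct sort must
# satisfy: the output is ordered, and it is a permutation of the input.
random.seed(0)
for _ in range(200):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    ys = insertion_sort(xs)
    assert all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    assert sorted(xs) == ys
```

Note the taste this requires: the function had to be written as a pure input-to-output transformation, and you had to know which properties are worth stating. A function tangled up with global state or I/O would be far harder to test this way.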

Right now this taste comes from experience. I know my bugs, because I have made thousands. I have written complex code and simple code; I know how to test parsers, algorithms, UIs. But could we do a better job teaching novices this taste? Can we systematize this knowledge, and then teach it?

This isn't a blog post with answers. But I know that nearly every university CS department has a software engineering course. They teach TDD and writing specifications and static checks and design patterns and little languages—all tools to avoid this or that class of bugs. Why do so few software engineering classes teach you about the bugs themselves: where and when and why they appear? We can't prevent our students from making bugs, but perhaps we can move bugs from unknown unknowns to known unknowns.