I’m all for attempting to make computing as secure as possible, so don’t take the following as “let’s give up on security.” No, we need to understand the causes of our insecurity and their implications before coming up with solutions. Naive solutions, like “secure” languages, only address part of the problem and give us a false sense of security. Perhaps languages that enable more formal verification could help, but they can’t exist in isolation, running on top of an insecure software stack on insecure hardware.

The impetus for this post was a random internet user who began an open conversation about safe programming languages, which quickly devolved into the usual “C is a bad language and symbolic computing is better” discussion. Worse, the discourse quickly sank to accusing all “C” programmers of being part of a “cowboy” culture (where are the “cowgirl” cultures?). The claim, in general, was that this culture has led to multiple hacking attacks such as this one: link. Part of me wants to believe that this idea stems from not understanding how computers actually work. I want to believe that these people think assembly code is black magic, and that your processor decodes JavaScript directly. Unfortunately, I’m not sure that’s the case. To discover where our insecurities lie, let’s pull back the curtain on the wizard and see what really makes our modern world tick: abstraction.

The nice interface that you interact with when you pull out your iPhone, laptop, or really any type of full computer system might feel like one monolithic, well-honed system. In reality it is something closer to the proverbial stack of turtles (picture a river terrapin balanced on another’s back). It is actually much worse than that. Perhaps add a few dozen more turtles, then throw them on a pogo stick. There are a lot of layers. Computers are built on abstraction. Millions of lines of code sit between you and the actual hardware. Each of these layers has been written over hundreds of thousands of hours, and each file of source code may have many authors, each with a different style (and a different interpretation of what the code should do).

Modern hardware starts with an HDL description of the wires and logic that make up the processor itself. Some structures can be formally verified; others must be verified exhaustively to ensure that they operate within the expected specifications. This is where the first source of error comes in. (Note: I’m greatly simplifying here.) Modern hardware is extremely complex: hundreds of independently verified pieces come together to make something that is virtually impossible to formally verify as a whole. The interactions between components make even exhaustive validation difficult (i.e., months and potentially years if all cases are to be covered). This process is still largely an art, reliant on engineers to specify the cases where they think errors could exist. Were the correct assertions entered by designers so that these tests hit all the cases they were supposed to? Were all the corner cases hit? One notable instance of an error getting past the verification process (broadcast far and wide by the media) was Intel’s transactional memory bug (link). This is only the start of the problem. The second comes with the documentation of the instructions (the ISA) and hardware.
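To make the scale problem concrete, here is a toy sketch in C (not real HDL verification; the saturating-add “design” and its reference model are hypothetical stand-ins). Exhaustively checking an 8-bit adder against a golden model takes only 65,536 cases; widen the datapath or add internal state and the same brute-force approach stops being feasible, which is exactly why engineers fall back on hand-picked assertions and directed tests.

```c
#include <stdint.h>
#include <stdio.h>

/* "Design under test": the implementation being verified. */
static uint8_t sat_add_dut(uint8_t a, uint8_t b) {
    uint16_t sum = (uint16_t)a + (uint16_t)b;
    return sum > 0xFF ? 0xFF : (uint8_t)sum;
}

/* Golden reference model that the spec writers agree on. */
static uint8_t sat_add_ref(uint8_t a, uint8_t b) {
    unsigned sum = (unsigned)a + (unsigned)b;
    return sum > 255u ? 255u : (uint8_t)sum;
}

int main(void) {
    unsigned long mismatches = 0;
    /* Exhaustive over 8-bit inputs: 2^16 cases, trivial here.
     * A 64-bit datapath with internal state would not be. */
    for (unsigned a = 0; a <= 0xFF; a++)
        for (unsigned b = 0; b <= 0xFF; b++)
            if (sat_add_dut((uint8_t)a, (uint8_t)b) !=
                sat_add_ref((uint8_t)a, (uint8_t)b))
                mismatches++;
    printf("mismatches: %lu (out of 65536 cases)\n", mismatches);
    return 0;
}
```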

The first step in writing the bits that load the operating system is to look at the documentation that comes with the hardware. The parts that determine core functionality are typically correct (they’re easy to verify, e.g., memory regions). Harder to find are errors in the complex features that pervade modern processors, such as this one: link. The documents for modern processors are typically generated by a combination of manual and automated methods because they are huge, often thousands of pages. The subtle errors in documentation (wording, ambiguities, and sometimes outright mistakes) are eventually worked out, but it takes time and often many bug reports. So when the hardware works exactly as specified, where is the next place that errors can creep into the process? Enter the compiler, the master of modern software. Without it, there would be no way to build the systems we all rely upon (yes, you can have interpreters… but what are these but continuous compilers?). The compiler takes a programmer’s instructions to the computer and translates them into something the hardware understands (hopefully).

Compilers are something that many computer scientists, programmers, and engineers don’t fully understand. It’s a subject that’s often skipped by undergrads and grad students alike. The people who know how to write a compiler must understand language theory, the abstract math behind optimizations, and the hardware itself. When translating the words of a programmer into the hardware’s language, there can be many different ways to say exactly the same thing (just like in spoken language). The characters the programmer writes in their code map to one or more machine instructions (not directly, but it’s easier to think about it that way). For example, I could say “ciao” or “goodbye”; but what happens if either of these is misinterpreted by the receiver? Imagine a programmer making an input error while writing that mapping. What if the programmer misinterprets the documentation? What if the ISA documentation had an error? Even more complex are the transformations of code. The compiler can transform what the programmer specifies into something entirely different (perhaps a more efficient form that does the same thing, like bit shifting instead of a multiply, or an eigenvalue-based matrix power instead of a brute-force one). What if the author of the transformation didn’t get something quite right? The usual solution for finding these errors is to write unit tests covering (again) what the programmer thinks are the cases to be covered. What happens if a case was missed? An alternative is formal or exhaustive verification, as with hardware; however, the same problem exists. Verification via automated means takes time.
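As a small, hedged illustration of the bit-shift example above, consider strength reduction: a compiler may replace a multiply by a power of two with a shift. The sketch below shows what the programmer wrote and the hand-written equivalent of what a correct transformation produces (the function names are made up). A transformation author who got the shift amount or the signedness rules wrong would silently change the program’s meaning, and only a test that happens to hit the divergent inputs would notice.

```c
#include <assert.h>
#include <stdint.h>

/* What the programmer wrote. */
static uint32_t scale_by_8(uint32_t x) {
    return x * 8u;
}

/* The equivalent of what a correct strength-reduction pass emits:
 * multiply by 2^3 becomes a left shift by 3. A buggy pass that
 * shifted by 2, or applied the rewrite to a signed value near
 * overflow, would produce a different function entirely. */
static uint32_t scale_by_8_reduced(uint32_t x) {
    return x << 3;
}

int main(void) {
    /* Spot checks only -- exhaustively checking every transformation
     * on every input is exactly the verification cost discussed above. */
    for (uint32_t x = 0; x < 1000u; x++)
        assert(scale_by_8(x) == scale_by_8_reduced(x));
    return 0;
}
```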

There’s also the application binary interface (ABI), which is assumed to be followed whenever functions are called (i.e., something the compiler should produce correctly). This interface describes which registers must be saved before calling a function. Even more importantly, it specifies things like stack layouts. What happens (especially in multi-threaded code) when a compiler author or programmer fails to follow these specifications exactly? Strange, often hard-to-find errors result. Compound this with all the other sources of error that I’ve mentioned, and there are quite a few possibilities for things to go awry.
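Here is a minimal sketch of how an ABI assumption can be broken from plain C, shown as two hypothetical translation units in one listing and assuming a calling convention (such as x86-64 System V) where integer and floating-point arguments travel in different registers. The linker matches symbols by name only, so the stale prototype links without complaint while the callee reads a register the caller never wrote.

```c
/* lib.c -- hypothetical shared library translation unit */
double scale(double x) {
    return x * 2.0;
}

/* app.c -- caller with a stale, hand-written prototype.
 * On an ABI where floating-point arguments are passed in vector
 * registers and integers in general-purpose registers, the caller
 * places 21 in an integer register while scale() reads an
 * uninitialized floating-point register: a silent ABI violation
 * that a correct shared header would have prevented. */
long scale(long x);   /* WRONG: does not match the definition */

int main(void) {
    return (int)scale(21);
}
```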

Now we come to the source code. We all know that humans make errors. People get complacent. Worse yet, when faced with hundreds of thousands of lines of code it’s hard, even for experienced programmers, to understand the secondary and tertiary interactions of what they are changing (ever wonder why programmers are reluctant to remove seemingly unused/dead code?). The whole programming model relies on the interplay of dozens of essentially moving, living parts. I say living because many binaries are “dynamically” linked, meaning that portions of the program can change without the entire program having to be updated. As long as the function signature doesn’t change, the larger program can still use these “library” functions. Dynamic libraries are wonderful for efficiency, but they can also be a source of new errors in once-verified software.
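A small sketch of that living-parts property using POSIX dynamic loading (the library name libwidget.so and the function widget_init are hypothetical): as long as the symbol and its signature stay the same, the program happily calls whatever the installed library currently contains, including a newer version whose behavior or preconditions have quietly changed since the caller was tested.

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load whichever libwidget.so is installed *right now*. */
    void *handle = dlopen("libwidget.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* We assume the symbol still means what it meant when this
     * caller was written. The loader only checks the name; a new
     * library version with the same signature but different
     * behavior is accepted without complaint. */
    int (*widget_init)(int) = (int (*)(int))dlsym(handle, "widget_init");
    if (!widget_init) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    int status = widget_init(42);
    printf("widget_init returned %d\n", status);
    dlclose(handle);
    return 0;
}
```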

I’ve largely focused on sequential code (there’s plenty to talk about), but how about parallel code? Everything is multi-core these days, right? With classic parallelization methods, it is entirely up to the programmer to write “safe” code, and it is extremely easy to write code with very non-deterministic behavior. More recent constructs (well, as of the 1970s) bring the idea of promises/futures/tasks that enable almost “port”-style interfaces between communicating threads. Unfortunately these constructs (even the very well conceived “safe” ones) sit upon layers of heavily interacting pieces, any of which could set off a chain reaction. Given the frailty of software highlighted in popular media, preaching about the insecurity of any one language would merely be joining the chorus. Security (even within so-called “secure” languages) relies upon the constant vigilance of programmers, engineers, and architects at all levels.
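To see how easy that non-determinism is to introduce, here is the classic data race in C with POSIX threads (a deliberately broken sketch, not anyone’s production code): two threads increment a shared counter without synchronization, so the final value varies from run to run and almost never reaches the expected total.

```c
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long counter = 0;   /* shared, unprotected state */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        counter++;         /* read-modify-write race: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000; the actual value changes from run to run.
     * A mutex or a C11 atomic would restore determinism. */
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERATIONS);
    return 0;
}
```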

As a biologist, I have to say that the complexity of computing is beginning to rival the complexity of a living cell. The compilation process itself is almost as complex as the transcription process. The interaction between hardware and software layers is beginning to resemble intracellular signalling pathways. There are layers upon layers of interacting functions, and any small failure could go undetected for quite a while, leading to exploitation by a malicious party or even system failure. Perhaps future security solutions will resemble something more biological as well, like cell-surface proteins for identification (now that I think about it, public/private keys do resemble this process). How about something like an immune system?

So do I have an ultimate security solution? No. There are many potential ones; unfortunately, most pile on top of legacy systems in order to make them more secure. Which is the best solution? I am not the one to judge, nor do I think any one person is. Solutions must reach a tipping point (in the Gladwell sense) before they’ll be long-lived. Until that time, perhaps it is best that we have as many good solutions as possible. Perhaps one of them will stick. I hope this post points out the frailty of modern systems, and the folly of band-aid solutions such as “secure languages” (not that a more secure, easier-to-code-in language can’t be part of the overall solution). I should also make clear that I’ve not even addressed another huge source of insecurity: the network stack. Addressing even the surface issues within our networking interfaces would take a much longer post.