Note: I wrote part of this in late 2013 and the rest in 2015, but I'm only getting around to publishing it now. So take its prognostications in the context of 2013–2015.
A Distinguished Lecture by Martin Rinard, MIT
When I first came to UBC, I attended a lecture in our Distinguished Lecture Series that made quite an impression on me. Rinard's talk presented an intriguing approach to software development, which I explore in this post. I can't hope to summarize the whole talk; I only aim to convey the overall philosophy and its implications for software development.
Overview of the Talk
If you weren't able to attend the talk, I'll first give you a quick overview:
Rinard's objective is to improve the implementation of computer programs through somewhat ... unorthodox means. Here's the problem-solution matrix:
| Problem | Solution |
| --- | --- |
| Slow Code | Loop Perforation |
| Infinite Loops | Loop Escaping |
| Memory Leaks | Cyclic Allocation |
| Memory Addressing Errors | Failure Oblivious Computing |
| Bad Inputs | Rectification |
"But Neil," you say, "those aren't even things. What are you talking about?" Oh yes they are things! These are the names of various techniques that Rinard and his colleagues have developed to deal with the problems that exist in almost every piece of software. A quick glossary:
- Loop Perforation: Take iterations out of a for-loop so that `for (i = 0; i < x; i++)` becomes `for (i = 0; i < x; i += 8)`. Your program now runs faster!
- Loop Escaping: When your program freezes (i.e. hits an infinite loop), simply attach a tool that will find the boundaries of the loop and jump to the instruction directly after the loop, allowing the program to proceed. Voilà, no more program freezes!
- Cyclic Allocation: Sometimes a programmer allocates a new object and forgets to delete it when they're done with it. For each allocation site, provide a finite array of memory locations where these objects can be stored. When the array fills up, just go back to the beginning and overwrite the first entry. It's like having infinite RAM!
- Failure Oblivious Computing: Sometimes programmers make errors in their pointer math. They address an area of memory where they're not supposed to touch. Well, fine. The next time they do that, let's just ignore them, give them a random value, and let them continue on their way—oblivious. Goodbye segfaults!
- Rectification: A user loads a file. Your program parses it. But there's a bug in your program. The strange file causes an unexpected error. Well, what if we could figure out the piece of the file that caused the error and, just, you know ... change it? We'd never see a bad input again!
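To make loop perforation concrete, here's a minimal Python sketch. Rinard's tool transforms compiled loops automatically; this just illustrates the idea by hand on a simple reduction:

```python
# Loop perforation: visit only every `stride`-th iteration of a
# reduction, trading a bounded amount of accuracy for speed.

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, stride=8):
    # Only every `stride`-th element is visited, so the loop does
    # roughly 1/stride the work and returns an approximate mean.
    sample = xs[::stride]
    return sum(sample) / len(sample)

values = list(range(1000))
exact = mean_exact(values)        # 499.5
approx = mean_perforated(values)  # 496.0 -- within about 1% of exact
```

On this input the perforated loop does an eighth of the work and lands within about 0.7% of the true mean, which is the kind of accuracy-for-speed trade-off the technique banks on.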
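Cyclic allocation can be sketched the same way. The `CyclicPool` class below is my own hypothetical illustration, not Rinard's implementation: each allocation site gets a fixed ring of slots, and allocating past the end wraps around to reuse the oldest slot:

```python
# Cyclic allocation: a fixed ring of slots per allocation site.
# A leaky site can never exhaust memory; the trade-off is that an
# old object may be silently overwritten while still "live".

class CyclicPool:
    def __init__(self, size):
        self.slots = [None] * size
        self.next = 0

    def allocate(self, obj):
        index = self.next
        self.slots[index] = obj  # may clobber whatever was here before
        self.next = (index + 1) % len(self.slots)
        return index

pool = CyclicPool(size=4)
indices = [pool.allocate(f"object-{i}") for i in range(6)]
# The pool never grew: objects 4 and 5 overwrote slots 0 and 1.
```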
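And a sketch of failure-oblivious semantics: invalid writes are discarded, invalid reads return a manufactured value. The `ObliviousMemory` class is my own toy model of the idea, not the actual compiler instrumentation:

```python
# Failure-oblivious computing, as a toy memory model: out-of-bounds
# accesses do not crash the program. Writes outside the valid range
# are dropped; reads outside it return a made-up value.

class ObliviousMemory:
    def __init__(self, size, manufactured=0):
        self.cells = [0] * size
        self.manufactured = manufactured

    def read(self, addr):
        if 0 <= addr < len(self.cells):
            return self.cells[addr]
        return self.manufactured  # invalid read: manufacture a value

    def write(self, addr, value):
        if 0 <= addr < len(self.cells):
            self.cells[addr] = value
        # invalid write: silently discarded

mem = ObliviousMemory(size=4)
mem.write(2, 42)
mem.write(99, 7)               # would corrupt memory in C; ignored here
in_bounds = mem.read(2)        # 42
out_of_bounds = mem.read(99)   # manufactured 0
```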
Cue the objections. "Whoa, whoa, whoa, whoa!" you say. "You can't do that! You'll break the program! You'll skip important parts of the code, overwrite values, produce wrong answers, alter user input in unanticipated ways, introduce non-determinism and release chaos upon the world! This, sir, is ridiculous!"
In actual fact, many of these changes, if properly tuned, result in programs that are faster, more stable, and whose output stays within about a 10% error bound of the original program. If you want more information on the details of the research, check out Rinard's publications.[1] What I'd like to talk about now is the implications for the development of software systems going forward.
Programs as Approximations
Rinard's talk brought a classic question to mind: Is software engineering really engineering?
Now before you get all defensive, I'm not implying that software engineering is somehow lesser than mechanical engineering or electrical engineering. What I mean is that it's so fundamentally different that even granting it the term "engineering" is potentially harmful. Applying the best practices of engineering disciplines may not be a productive approach to software development.
How is software different from "traditional" engineering? The answer lies in its complex and intangible nature.
Complexity and Intangibility
Any modern piece of software is a highly complex system. Computational systems (computing hardware and the software that runs on top of it) are more complex than any other system ever engineered by man.[2] The only thing in the purview of science that even compares (in terms of complexity) is biology. More on this later.
Software is also unique in the history of human invention in being intangible. It is often compared to architecture, though the comparison can be misleading: the legendary "Gang of Four" Design Patterns book was inspired by the design patterns of architect Christopher Alexander, and we have since adopted the term architecture to refer to the large-scale structure of a program. Today you can find thousands of job postings for "Software Architect" online.
But in actual fact, software development is not like building development, is it? When you lay the foundation for a building, it is literally set in stone. You can't change it. You can't wipe everything and start over. You've permanently altered the landscape. If you catch the problem early enough, you might be able to resolve it and only lose a couple million bucks. But if you suddenly discover in the middle of development that the building you intended to be a library can't support the weight of 10,000 books, you're in serious trouble.
Conversely, in software you can change just about anything. You can even go in years after the "building" has been put into use and completely replace the foundation with the latest techniques. That malleability simply does not exist in any other product on the planet. Such a fundamentally different product calls for a fundamentally different engineering approach.
As I mentioned earlier, there is one class of systems that surpasses today's software in complexity: biological systems. Doctors are forced to deal with this complexity every day. They work with systems far too complex for them to fully understand, and yet they still manage somehow. They don't throw out a treatment if it isn't 100% effective; they make do with what they have and use their professional discretion to determine the best course of action on a case-by-case basis. They don't follow an algorithm.
Maybe we programmers should be more flexible, too. Maybe we should strive to be less perfectionist. What would software look like then?
So what if it's non-deterministic? After all, given the sheer number of components that can fail, software is already non-deterministic as it is. As Danny Hillis explains when talking about the coming "Entanglement", many software systems are so complex that a programmer cannot know the results of their actions when making changes to the software. There is always a chance of something we could never have predicted. So why not embrace the uncertainty? Could it help us to do so?
Rinard's experiments show that, in many cases, it can help us. It can help our programs to be more robust to failure if they are designed to accept a certain amount of failure in the first place. If we change our expectations, software can become more useful than ever before.
This may not be a new idea at all, but perhaps just the first time such an idea has received academic treatment. In industry, many clever methods of error recovery are employed all the time in fault-tolerant software. For an introduction to some of these fault tolerance patterns, you can listen to the two-part episode of SE Radio on Fault Tolerance. I listened recently, and it was very intriguing to hear Bob Hanmer talk about ways of flagging bad data, ignoring bad data, correcting bad data, or even just allowing bad data to be bad. Depending on your application, you might be able to make trade-offs that enhance the robustness of your software, even while introducing small inaccuracies.
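To make those options concrete, here is a toy sketch of my own (not from any pattern catalog) showing three responses to bad data in a stream of sensor readings with a known valid range: flag it, correct it by clamping, or simply drop it:

```python
# Three responses to bad data, illustrated on sensor readings with a
# known valid range [lo, hi]. A toy example, not production code.

def flag(readings, lo, hi):
    # Keep everything, but mark each reading as valid or not.
    return [(r, lo <= r <= hi) for r in readings]

def correct(readings, lo, hi):
    # Clamp out-of-range values back into the valid range.
    return [min(max(r, lo), hi) for r in readings]

def drop(readings, lo, hi):
    # Ignore bad data entirely.
    return [r for r in readings if lo <= r <= hi]

data = [12, -999, 15, 4096, 14]
flagged = flag(data, 0, 100)
corrected = correct(data, 0, 100)   # [12, 0, 15, 100, 14]
kept = drop(data, 0, 100)           # [12, 15, 14]
```

Which response is right depends entirely on the application: clamping introduces small inaccuracies but keeps the stream flowing, while dropping preserves accuracy at the cost of completeness.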
This practice isn't without its problems, to be sure. One of the professors who attended Rinard's talk asked an important question: Suppressing little errors can allow them to pile up, leading to big, catastrophic failure later on. How do you propose to address that? The answer is, more or less: exercise prudence when you implement these techniques. Consciously make the trade-off between accuracy and stability, and be aware of the implications of doing so. Consider enhancements on a case-by-case basis to decide the best treatment for the symptoms at hand.
Just like a doctor does.
After the talk, I had a moment to chat with professor Ron Garcia, who gave me a very interesting perspective. He drew an analogy with floating-point number systems. In a computer, every number is ultimately represented in discrete increments. The gap between 1.0 and the next number a machine can represent is known as machine epsilon; if two values differ by less than that gap, the computer cannot tell them apart. 64-bit floats have a smaller epsilon than 32-bit floats, so we can continue to push this limit, but the limit will always be there. The numbers in a computer cannot be represented exactly; everything that happens inside a computer is already an approximation.
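Ron's point is easy to see in code. A quick check of machine epsilon for standard 64-bit (IEEE 754 double-precision) floats in Python:

```python
import sys

# Machine epsilon for double precision: the gap between 1.0 and the
# next representable number, which for a 52-bit mantissa is 2**-52.
eps = sys.float_info.epsilon

# A perturbation smaller than half that gap rounds away entirely:
indistinguishable = (1.0 + eps / 2 == 1.0)   # True
distinguishable = (1.0 + eps == 1.0)         # False
```

Adding half an epsilon to 1.0 produces a result the machine literally cannot distinguish from 1.0, which is the sense in which every computation is already approximate.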
The idea, Ron said, is to extend this approximation into other parts of the software development process. If the entire program is already an approximation, why not treat it as such?[3]
Last year, at an IEEE workshop on Rebooting Computing, the topic was the end of Moore's law and what to do about it. One of the four major themes discussed was Randomness and Approximation. The question posed was: can we make processors faster by allowing errors to creep in? Dick Lipton of the Gödel's Lost Letter blog phrased it this way:
Today we operate like this: Design approximate algorithms that run on exact processors. Can we switch this around: Design algorithms that operate on approximate processors?
This is a bit different, since in this case we're talking about running on approximate hardware, but it raises the same kinds of questions. For example, the problem I mentioned earlier—small errors adding up to big failures—is also a primary concern with approximate numbers. In one infamous example, a Patriot missile battery failed to intercept an incoming Scud, which struck a barracks and killed 28 U.S. soldiers; the cause was numerical error that had accumulated over roughly 100 hours of continuous operation. Presumably the problem was missed because the system had never been run that long in testing.
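The arithmetic behind that failure is simple enough to reconstruct. The sketch below uses the publicly reported details: the Patriot's clock counted tenths of a second and converted ticks to seconds by multiplying by 1/10 stored in a 24-bit fixed-point register, effectively 1/10 truncated to 23 fractional bits. The Scud speed is an approximate figure I've plugged in for illustration.

```python
# Reconstructing the Patriot clock drift. 1/10 has no finite binary
# expansion, so truncating it to 23 fractional bits loses a tiny
# amount on every tick -- and ticks arrive ten times a second.

FRACTIONAL_BITS = 23
chopped_tenth = int(0.1 * 2**FRACTIONAL_BITS) / 2**FRACTIONAL_BITS
error_per_tick = 0.1 - chopped_tenth   # about 9.5e-8 seconds per tick

ticks = 100 * 3600 * 10                # ticks in 100 hours
drift = error_per_tick * ticks         # about 0.34 seconds of clock skew

SCUD_SPEED = 1676                      # m/s, an approximate figure
tracking_error = drift * SCUD_SPEED    # hundreds of metres off target
```

A third of a second sounds harmless until it is multiplied by the speed of an incoming missile: the tracking gate ends up hundreds of metres from where the target actually is.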
The Future Programmer
One major theme here is what I view as the future of how software will be built. "Programmers" as such will cease to exist, and in their place will be "architects" of complex systems; computational systems wranglers. Today's simple, imperative, if-this-then-that programmer will go the way of the dodo as they are made obsolete by new paradigms like Stephen Wolfram's sentient computation. Things I think will happen:
- The inputs to our programs and databases will become much more complex, making exact software much more difficult. Software engineers will have to deal with fuzziness and unstructured data on a regular basis. As Neil Lawrence says, our datasets "will be 'uncurated' in the sense that the size of the data sets means that no single individual will have been through all the data consistently removing outliers or dealing with missing or corrupted values."
- Machine learning will be a component of every single piece of software. If you do not understand learning algorithms, or at least know how to use them, you will be out of a job. This is because:
- Computer programs will evolve to find the appropriate solution to a problem. They will adapt to their user, they will adapt to their input, and they will not compute an exact answer, but rather they will intuit an approximate answer. This is exactly what Rinard demonstrates in his new programming paradigm.
- Thus, I think the programmer of the future will be as much an artist as an engineer. They will artfully meld together very high-level tools, nudging and sculpting the output into what they want to see. The programs they produce will be much more human, exhibiting our fuzzy logic and our capacity for adaptation, and as a result they may lack the precise calculation of traditional computing.
- The code-compile-run loop will become a lot tighter. In this future, we can't necessarily predict what will be the result of changes we make to the program. If our programs are going to be adapting fluidly to so many nudges and tweaks of parameters, then we'll need to be able to see the outcome of these adjustments quickly and easily. Running unit tests and experimenting with code snippets in a REPL are a good start in this direction; new editors like Light Table will take these ideas even further and make it easier for a programmer to mold a piece of software into a shape they like. Programming will become increasingly interactive.
Okay, so rote programming may not disappear entirely, but as Neil Lawrence points out, it may be farmed out to less developed economies, since it no longer requires a degree in computer science to write a computer program. In developed nations, various topics in machine learning "must form the basis of a modern computer science course if we are to provide added value over what will be achievable by farming out software engineering."
Maybe, in addition to machine learning, we can add fault tolerance and approximation to the list of topics that will be essential to the Programmer of the Future.
1. Here are the slides and video from a similar talk he gave at AOSD 2012.
2. Except for perhaps—perhaps—the vehicles that took men to the moon. Ah, rocket science, you win again.
3. Of course, it's one thing to have imperceptibly small discretizations of your answers; it's quite another to have the kinds of fudging that Rinard is talking about.