Two Performance Walls Approaching


I just watched a panel from PDC called “Microsoft Perspectives on the Future of Programming” with loads of “big name” programming gurus. At the end of the session Herb Sutter points out that eventually Moore’s law will end, and then optimization will become extremely sexy again. This is a very good point, but I think there’s another law that will cause optimization to become sexy again far sooner than that: Amdahl’s law.

Amdahl’s law basically states that once your performance increases come from parallelism rather than rising clock speeds, the bottleneck will rapidly become the serial bits of your program: the bits that can’t be parallelized. Once that happens (if it hasn’t already!) it will become extremely important to write those serial bits extremely efficiently.
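To make that concrete, here is a minimal sketch of the law itself: with a serial fraction s of the work and N processors, the achievable speedup is 1 / (s + (1 − s)/N), which can never exceed 1/s no matter how many cores you add. (The 5% serial fraction below is just an illustrative assumption.)

```python
def amdahl_speedup(s: float, n: int) -> float:
    """Speedup with n processors when a fraction s of the work is serial."""
    return 1.0 / (s + (1.0 - s) / n)

# With even a modest 5% serial fraction, speedup is capped at 1/0.05 = 20x,
# regardless of core count.
for n in [2, 8, 64, 1_000_000]:
    print(f"{n:>9} cores: {amdahl_speedup(0.05, n):.2f}x speedup")
```

Note how quickly the returns diminish: going from 64 cores to a million barely moves the needle, because the serial 5% already dominates.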

Note that this scenario is exactly the opposite of the situation capitalized on by lots of current dynamically typed languages like Python or Ruby. These languages work because they assume that most of the time will be spent on “bulk” code that can be written in C libraries, and that the “glue code” which plugs all these bulky computations together will account for only a tiny fraction of the runtime and therefore doesn’t need to be that fast. This is true today, but in the future the wall-clock time will be dominated by that glue code because it’s full of sequential dependencies and can’t be parallelized.

In short: if your “glue code” is written in a scripting language that’s 20x slower than an efficient compiled language, then Amdahl’s law says the whole application will be 20x slower (in the limit). It’s true that the majority of clock cycles will still be spent in the “bulky” computations (which will hopefully be parallelized to hell), but the critical path of the program will be dominated by the glue. We need faster glue!
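You can check this limit numerically. The sketch below models a hypothetical workload (the split of 1 unit of glue work to 99 units of bulk work is an assumption, as is the 20x glue slowdown): the bulk work divides across cores, the glue does not, and as core counts grow the slow-glue version converges to being 20x slower overall.

```python
def wall_clock(glue_slowdown: float, n_cores: int,
               glue_work: float = 1.0, bulk_work: float = 99.0) -> float:
    """Total runtime: serial glue (cannot parallelize) + bulk work split across cores."""
    return glue_work * glue_slowdown + bulk_work / n_cores

for n in [1, 10, 100, 10_000]:
    fast = wall_clock(1, n)   # glue in a compiled language
    slow = wall_clock(20, n)  # glue in a 20x-slower scripting language
    print(f"{n:>6} cores: slow glue makes the app {slow / fast:.1f}x slower")
```

On one core the slow glue barely matters (the bulk work dominates), but by 10,000 cores the ratio is essentially the full 20x.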

So what’s the solution? Go back to C? I sure hope not! Productivity and safety will still be extremely important, but merely sweeping performance concerns under the rug, as many high-level languages do today, will very rapidly become untenable as we rush towards the performance wall of Amdahl’s law. I think a high-level language can be designed that compiles to efficient code. Such a language would need to take conscious steps to ensure that abstractions don’t cost too much performance-wise (occasionally sacrificing productivity in favour of performance), but at the end of the day I see no reason why it couldn’t be type safe, null safe, side-effect safe and most of the other goodness we expect from modern languages like Haskell, while still producing native code on par with good C compilers.


2 thoughts on “Two Performance Walls Approaching”

  1. GoLang and Rust are compiled, high performance and also support easy concurrency. I’m interested to see which languages emerge the winners.
    It still irks me when a program takes seconds to perform something – when you think that the CPUs can execute literally billions of instructions in that time!
