See http://seancorfield.github.io for newer blog posts.

An Architect's View

CFML, Clojure, Software Design, Frameworks and more...


The Last Programming Language

July 16, 2011 · 11 Comments

On July 13th, 2011 Robert C. Martin (aka "Uncle Bob") gave a talk at Skills Matter in London called The Last Programming Language. He was scheduled to give a version of it as the keynote for ACCU 2011, a conference I remember with fondness from my days back in England as a member of the Association of C and C++ Users! You can read Martin's blog post about the talk here but note there's a $2 charge to watch the version linked from that blog post - the Skills Matter version linked above is free.

TL;DR: He asks whether we've exhausted all possible programming paradigms and languages and whether we should now consider a single standardized programming language (and offers a suggestion of what that might be). Preposterous?

Read his blog post. I'll wait.

His basic premise is that our craft hasn't really changed in the last 40 years. You might be shocked by that bold claim but he actually makes a compelling argument. Nearly every modern computer language is based on an earlier computer language (or, more often, a melting pot of several). The ground-breaking languages all appeared a long time ago and everything since has been a refinement of that - a remix, if you like. More to the point, the major paradigms in computing that have shaped our thinking also all trace their lineage back several decades. In his talk he covers how modular programming was one of the first shifts, removing "unlimited size" and replacing it with the constraint of reasonably sized modules. The next major shift was structured programming, which essentially removed "goto" - or at least constrained us not to use it. The next mainstream shift was object-oriented programming, and he makes a very interesting point about what was core about that: not encapsulation or inheritance per se (since we'd been able to implement those easily enough before "OO languages" - just look at the X11 Window System source code in C), but polymorphism in a controlled way (again, X11 had that via pointers to functions, but they were unconstrained).

He makes the point that with OO, the runtime flow runs in the opposite direction to the source code dependencies: you call a method on a base class and it dispatches to a derived class - defined in a source file that in turn depends on the base class's source file. I found that a very interesting observation. The shift to OO was also a removal - of unconstrained indirect calls via pointers to functions - and introduced the constraint of invoking those indirect calls only thru well-defined (class) interfaces under the control of the language itself.
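
Since this post winds up at Clojure anyway, here's a minimal sketch of that inversion using Clojure protocols instead of classes (the Notifier protocol and alert-user function are invented purely for illustration, not anything from the talk):

    ;; "Base" side: the abstraction plus the high-level code that only
    ;; depends on that abstraction.
    (defprotocol Notifier
      (notify [this msg]))

    (defn alert-user
      "High-level code: knows only about the Notifier abstraction."
      [notifier msg]
      (notify notifier msg))

    ;; "Derived" side: this code depends on Notifier (it has to name it
    ;; in order to implement it), yet at runtime alert-user's call is
    ;; dispatched down into it - flow and source dependency point in
    ;; opposite directions.
    (defrecord EmailNotifier [address]
      Notifier
      (notify [_ msg]
        (println "emailing" address "-" msg)))

    (alert-user (->EmailNotifier "ops@example.com") "disk is full")
    ;; prints: emailing ops@example.com - disk is full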

At this point in the talk I was a bit surprised he hadn't mentioned functional programming, since that pre-dates OO. He was getting to it tho'... As clock speeds hit their limits, Moore's Law keeps delivering more transistors, so chip makers have turned that rising density into more cores instead. I'm writing this on a quad core desktop machine. My previous machine, a laptop, was a dual core. Expect eight cores on the desktop to become commonplace - Apple's "Westmere"-based Mac Pro towers already offer six cores (kind of a strange number but...) - and then 16, 32... who knows what we'll have in a few years? The catch with this switch from single core to multiple cores is that our applications need a dramatic increase in concurrency - at least in order to take advantage of those extra cores - and as many of us have discovered, concurrency can be hard. One of the things that makes concurrency hard is mutable shared state - something that traditional OO relies on fairly heavily - and we've all been bitten by shared variables changing across multiple concurrent requests and causing nasty bugs (whether it's shared state in an object or shared global state in an application). Which brings us back to functional programming! Why has it taken so long for functional programming to get enough attention to move it into the mainstream? Martin said we just didn't learn our lessons quickly enough, but we are finally learning them: we need the functional paradigm.
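
Here's a small Clojure sketch (my example, not from the talk) of the kind of shared-state bug described above, and how controlled mutation via an atom avoids it:

    ;; Shared state: a single counter that 1000 concurrent tasks increment.
    (def counter (atom 0))

    ;; Broken: read-then-write is not atomic, so concurrent updates get lost.
    (let [tasks (doall (for [_ (range 1000)]
                         (future (reset! counter (inc @counter)))))]
      (doseq [t tasks] @t)   ; wait for all the futures to finish
      @counter)
    ;; => usually less than 1000

    ;; Controlled mutation: swap! applies the pure function atomically
    ;; (retrying on contention), so no updates are lost.
    (reset! counter 0)
    (let [tasks (doall (for [_ (range 1000)]
                         (future (swap! counter inc))))]
      (doseq [t tasks] @t)
      @counter)
    ;; => 1000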

To be clear, functional programming doesn't mean that OO is dead: we can still have encapsulation, polymorphism and a notion of type extension (inheritance). It just means we have to accept another constraint: the removal of "assignment" (or at least only allowing extremely controlled data mutation). Thus all four major paradigm shifts he covered are subtractive, taking away an unconstrained programming feature (unconstrained size, unconstrained direct jumps - gotos, unconstrained indirect jumps, unconstrained mutation) in exchange for a better, safer, more maintainable way of life.
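
To make "removal of assignment" a little more concrete, this is what everyday data looks like in Clojure, where the core collections are immutable (the order map is just an invented example):

    ;; "Changing" an immutable map returns a new map; the original is untouched.
    (def order {:id 42 :status :pending})

    (def shipped (assoc order :status :shipped))

    order    ;; => {:id 42, :status :pending}  (unchanged)
    shipped  ;; => {:id 42, :status :shipped}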

At this point he talked about all the other disciplines that have standardized on terminology - on the language they speak - and asked whether we're ready to start doing the same: to reach a world where you hire "programmers" rather than "language X programmers"; a world where all magazines, books and training materials use a common language for examples that we all understand; a world where all code is reusable because it is all in the same language. That would certainly make us a lot more productive (and a lot more like all the other engineering and scientific disciplines!). Given the preceding discussion, it was clear where he was going with this: a modern functional language was the only sane option. And then his argument stumbled, for me. Don't get me wrong, I wholeheartedly agree with his suggestion for the last programming language, but I think he glossed over a step. After making all the right arguments to get us to a functional language, he introduced homoiconicity.

After (many of) you have recovered from that WTF moment, take a few minutes to read that linked definition on Wikipedia. "...the primary representation of programs is also a data structure in a primitive type of the language itself..." Raw machine code is homoiconic: it's all just raw bytes - so a program can manipulate itself and construct new programs on the fly (most assembler programmers I know, myself included, have done this quite a bit). Lisp is the classic "modern" homoiconic language: programs are lists, and the list is also a core data type of Lisp (modern is relative: the original Lisp dates back to the 1950s!). Whilst I agree with Martin that homoiconicity is extremely powerful and allows for incredible expressiveness, I think it's too much of a stretch for most of the world's programmers for quite a while yet - and I think it was a weak point in his argument.
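
For the curious, here's about the smallest demonstration of homoiconicity I can manage, in Clojure (my example, not Martin's): the program is a list, so the ordinary list functions work on it:

    (def expr '(+ 10 20))           ; a piece of code, held as a plain list
    (first expr)                    ;; => +
    (eval expr)                     ;; => 30
    (eval (cons '* (rest expr)))    ;; => 200 - we rewrote + into * at runtime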

His suggestion for the last programming language? Clojure - or at least something evolved, or derived, from it. He has a point: Clojure is a very simple, consistent language (syntactically and semantically) and has implementations on both the JVM and the CLR; it has the functional stuff down pat; it has some slick polymorphism features (and supports encapsulation and type extension, although none of this looks much like you would expect if you're used to OO languages); and it has awesome interop with its primary platform, making all the world's libraries available to you and allowing your code to be called easily from those libraries.
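
As a small taste of that polymorphism and interop (Describable is a protocol I've invented for this example; it is not part of Clojure):

    (defprotocol Describable
      (describe [this]))

    ;; Extend the protocol to existing Java classes - no inheritance,
    ;; no wrappers, and we didn't need the classes' source code.
    (extend-protocol Describable
      String
      (describe [s] (str "a string of length " (.length s)))
      java.util.Date
      (describe [d] (str "a date: " d)))

    (describe "hello")            ;; => "a string of length 5"
    (describe (java.util.Date.))  ;; => "a date: ..." (current date/time)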

The homoiconic issue is a deal-breaker for many right now - but maybe Uncle Bob is right and that will be our final paradigm "lesson" and we'll see a world where we all speak a modern Lisp, maybe even in our lifetime?

Tags: clojure · programming

11 responses so far ↓

  • 1 Peter Boughton // Jul 17, 2011 at 8:03 AM

    Do you really think there's a chance of all the companies, committees, and communities behind the dozen or so major programming languages (never mind all the lesser known ones), actually getting together and saying "let's all merge together under language X"?

    Even if you take the stubborn companies out of the picture, there's no way you're going to get fans of Python or Ruby saying "YAY, Clojure!"

    Hell, we can't even get just half a dozen groups to agree on something as simple as CSS! :/
  • 2 Dave // Jul 17, 2011 at 8:34 AM

    Great summary of this excellent talk. Thanks. He also gave this talk at NDC 2011 and the video seems to be a little higher quality. It's available at http://ndc2011.no/agenda.aspx?cat=1071&id=-1&day=3726. It's in the top row toward the right.
  • 3 Mike Kelp // Jul 17, 2011 at 9:24 AM

    Nice post Sean!

    This is a topic I've been very interested in for a long time.

    The real flaw, I think, is in assuming that because our craft has not really changed, we all use the same terms and the same core functionality the same way. To me, languages in Computer Science are not just about features, but purpose. While in life we speak our language simply to be understood by the world around us, which has many completely different languages, in development we are using many different languages that are similar in nature (sub-languages maybe?) and oftentimes completely understandable to someone who has never even "spoken" that language. Consider how the many different sciences all have very different vocabularies, concepts and constructs, but they usually meet at a common thread where they can solve a problem together. The difference is that each specialty science has developed an efficient mechanism for communicating purpose, instructions, and common concepts.

    I believe this is the same thing we face in the different areas of Computer Science, though we definitely have a lot of maturing to do. One problem is that we create many languages that don't really offer anything over another instead of improving an existing language that could support it. Another, and the biggest one in my opinion, is that those who "design" languages are not considering readability or ease of understanding, favoring instead nifty syntax tricks for good but rarely-needed concepts that get way over-used, all to fit a philosophy that doesn't put elegance (being the combination of simplicity and completeness) at the forefront. Personally, instead of seeing one "pure" language, I would like to see a similar feature base shared between a couple of slightly different syntaxes that focus very clearly on their purpose. Think of a perfect logical / analytical language with a truly broad base of libraries, with a version that does UI in an easy to understand way, one that does templating in an easy to understand way, etc.

    That is what I think we should really be looking for, our ideal set of dialects, DSLs, etc. (pick your term haha) targeted for our understanding in different lines of development. Anyway, hope this comes across understandably though I'm kind of spouting off the cuff hehe.

    Cheers for bringing up a great subject and making some awesome points as usual.

    Mike.
  • 4 Sean Corfield // Jul 17, 2011 at 2:35 PM

    @Peter, well that is part of the problem with our industry - too much fragmentation and incompatibility. Having worked on the ANSI & ISO Standards Committees for C++ for eight years, I know how painful standardization can be so I think we're quite a ways off Uncle Bob's vision! But I'm hopeful that we will eventually learn our lessons and standardize :)

    @Dave, thanx for the link - it doesn't seem to stream for me, and it's a 530MB download so I guess I'll watch it later!

    @Mike, I think there is always going to be room for specialist languages but most of the languages we create are general purpose and can be used to solve "any" problem - so I don't think we _need_ that many languages. I do agree on the DSLs tho' and that's a point Martin makes in his talk: we need the ability to create DSLs in our One True Language so that we have a good "fit" for our differing domains. Lisps are actually very good for that since you can extend the language using itself to create any DSLs you want, natively.

    One of the things I looked at during my PhD research in the mid-80's was how far you can go in terms of expressive syntax when the underlying model of your code is Lisp (in other words, creating a language without homoiconicity that is otherwise "identical" to the Lisp implementation). After that experience, I'm honestly not convinced that syntax should matter (i.e., there's no need for multiple languages if they have the same expressive power anyway - and homoiconicity is useful).
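
    To make the DSL point concrete, here's about the smallest illustration of extending the language from within that I can think of: a made-up "unless" control form written as an ordinary Clojure macro (Clojure already ships when-not, so this is purely illustrative):

        (defmacro unless
          "Evaluate body only when test is falsey."
          [test & body]
          `(if ~test nil (do ~@body)))

        (unless (zero? 1)
          (println "1 is not zero"))
        ;; prints: 1 is not zero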
  • 5 Sean Corfield // Jul 17, 2011 at 3:30 PM

    @Dave, one oddity about that NDC preso - it's silent for the first five and a half minutes while Martin waits for the time to start his talk. That's a bit disconcerting!
  • 6 Dave // Jul 17, 2011 at 3:58 PM

    @Sean Ha. Yeah I forgot about that. All the videos from that conference are like that. It's weird. It looks like they started the recording like 5 minutes before all the talks started.
  • 7 Marko Simic // Jul 18, 2011 at 3:48 AM

    Thanks for this review. I tried to stick to a number of characters suitable for a comment, but inspiration took me too far and I blogged it instead :)
    http://itreminder.blogspot.com/2011/07/last-programming-language-my-5-cents.html
  • 8 Sam Jones // Jul 19, 2011 at 5:31 AM

    I know another thing that hasn't changed for 40 years: Programmers who keep on telling the world that functional programming is the only true way.
  • 9 Sean Corfield // Jul 19, 2011 at 1:16 PM

    @Sam, and finally the world is listening :)

    That's why we have F#, Scala and Clojure...
  • 10 Robert // Jul 30, 2011 at 7:52 AM

    By asking everyone to use the same language you are asking everyone to think about and express all problems in the same way, which can't happen. Innovation stops.
  • 11 Sean Corfield // Aug 2, 2011 at 11:03 AM

    @Robert, innovation hasn't stopped in any industry that has adopted a single, standard "lingua franca". It won't stop in computing either. In fact, I suspect innovation would _accelerate_ when we don't have to deal with a Tower of Babel in terms of competing programming languages.
