On July 13th, 2011, Robert C. Martin (aka "Uncle Bob") gave a talk at Skills Matter in London called The Last Programming Language. He was scheduled to give a version of it as the keynote for ACCU 2011, a conference I remember with fondness from my days back in England as a member of the Association of C and C++ Users! You can read Martin's blog post about the talk here, but note that there's a $2 charge to watch the version linked from that blog post - the Skills Matter version linked above is free.
TL;DR: He asks whether we've exhausted all possible programming paradigms and languages and whether we should now consider a single standardized programming language (and offers a suggestion of what that might be). Preposterous?
Read his blog post. I'll wait.
His basic premise is that our craft hasn't really changed in the last 40 years. You might be shocked by that bold claim, but he actually makes a compelling argument. Nearly every modern computer language is based on an earlier computer language (or, more often, a melting pot of several). The ground-breaking languages all appeared a long time ago, and everything since has been a refinement of them - a remix, if you like. More to the point, the major paradigms in computing that have shaped our thinking all trace their lineage back several decades too. In his talk he covers how modular programming was one of the first shifts, removing "unlimited size" and replacing it with the constraint of reasonably sized modules. The next major shift was structured programming, which essentially removed "goto" - or at least constrained us not to use it. The next mainstream shift was object-oriented programming, and he makes a very interesting point about what was core to that: not encapsulation or inheritance per se (since we'd been able to implement those easily enough before "OO languages" - just look at the X11 window system source code in C), but polymorphism in a controlled way (again, X11 had that via pointers to functions, but they were unconstrained).
He makes the point that with OO, the runtime flow goes in the opposite direction to the source code dependencies: you call a method on a base class and it dispatches to a derived class - defined in a source file that in turn depends on the base class's source file. I found that a very interesting observation. The shift to OO was also a removal - of unconstrained indirect calls via pointers to functions - introducing the constraint of invoking those indirect calls only through well-defined (class) interfaces under the control of the language itself.
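A minimal Java sketch of that observation (my own example, not Martin's): `totalArea` depends only on `Shape`, yet at runtime control flows into `Circle.area()` - against the direction of the source-level dependency, and through an interface the language itself polices.

```java
// The base type: this code knows nothing about Circle.
interface Shape {
    double area();
}

// Circle depends on Shape at the source level...
class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

public class Dispatch {
    // ...but at runtime, control flows from here *into* Circle.area(),
    // even though this method only names Shape. The indirect call is
    // constrained to the Shape interface - no raw function pointers.
    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area();  // dispatches to the derived class
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalArea(new Shape[] { new Circle(1.0) }));
    }
}
```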
At this point in the talk I was a bit surprised he hadn't mentioned functional programming, since that pre-dates OO. He was getting to it though... As clock speeds have hit their limit, Moore's Law has continued to deliver rising chip density, and that density has gone into multiple cores. I'm writing this on a quad core desktop machine. My previous machine, a laptop, was a dual core. Expect eight cores on the desktop to become commonplace - Apple's "Westmere"-based Mac Pro towers already offer six cores (kind of a strange number but...) - and then 16, 32... who knows what we'll have in a few years? The catch with this switch from single core to multiple cores is that our applications need dramatically more concurrency - at least if they are to take advantage of those extra cores - and as many of us have discovered, concurrency can be hard. One of the things that makes concurrency hard is mutable shared state - something that traditional OO relies on fairly heavily - and we've all been bitten by shared variables changing across multiple concurrent requests and causing nasty bugs (whether it's shared state in an object or shared global state in an application). Which brings us back to functional programming! Why has it taken so long for functional programming to get enough attention to move into the mainstream? Martin said we just didn't learn our lessons quickly enough, but we are finally learning them: we need the functional paradigm.
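Here's a tiny Java sketch (my own illustration, not from the talk) of exactly the kind of bug shared mutable state invites: two threads incrementing an ordinary field lose updates, because `counter++` is a read-modify-write sequence, not an atomic operation.

```java
public class SharedCounter {
    // Shared mutable state: both threads read-modify-write this field.
    static long counter = 0;

    static long run() throws InterruptedException {
        counter = 0;
        Runnable bump = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++;  // read, add, write - another thread can interleave
            }
        };
        Thread t1 = new Thread(bump);
        Thread t2 = new Thread(bump);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        // We "expect" 2,000,000, but lost updates typically leave it short.
        System.out.println(run());
    }
}
```

The result varies from run to run, which is what makes these bugs so nasty to reproduce and diagnose.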
To be clear, functional programming doesn't mean that OO is dead: we can still have encapsulation, polymorphism and a notion of type extension (inheritance). It just means we have to accept another constraint: the removal of "assignment" (or at least allowing only extremely controlled mutation of data). Thus all four major paradigm shifts he covered are subtractive: each takes away an unconstrained programming feature (unconstrained size, unconstrained direct jumps - gotos, unconstrained indirect jumps, unconstrained mutation) in exchange for a better, safer, more maintainable way of life.
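To make that constraint concrete, here's a sketch in Java rather than a functional language (again, my own illustration): in the immutable style, "changing" data produces a new value and leaves the original untouched, so values can be shared between threads without locks.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ImmutableValues {
    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4);  // an immutable list
        // No assignment into xs: "mutation" builds a new value instead.
        // xs itself never changes, so it is safe to share across threads.
        List<Integer> doubled = xs.stream()
                                  .map(x -> x * 2)
                                  .collect(Collectors.toList());
        System.out.println(xs);       // still [1, 2, 3, 4]
        System.out.println(doubled);  // [2, 4, 6, 8]
    }
}
```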
At this point he talked about all the other disciplines that have standardized on terminology - on the language they speak - and asked whether we're ready to start doing that too: to reach a world where you hire "programmers" rather than "language X programmers"; a world where all magazines, books and training materials use a common language for examples that we all understand; a world where all code is reusable because it is all in the same language. That would certainly make us a lot more productive (and a lot more like all the other engineering and scientific disciplines!). Given the preceding discussion, it was clear where he was going with this: a modern functional language was the only sane option. And then his argument stumbled, for me. Don't get me wrong, I wholeheartedly agree with his suggestion for the last programming language, but I think he glossed over a step. After making all the right arguments to get us to a functional language, he introduced homoiconicity.
After (many of) you have recovered from that WTF moment, take a few minutes to read that linked definition on Wikipedia. "...the primary representation of programs is also a data structure in a primitive type of the language itself..." Raw machine code is homoiconic: it's all just raw bytes, so a program can manipulate itself and construct new programs on the fly (most assembler programmers I know, myself included, have done this quite a bit). Lisp is the classic "modern" homoiconic language: programs are lists, and the list is also a core data type of Lisp ("modern" is relative: the original Lisp dates back to the 1950s!). Whilst I agree with Martin that homoiconicity is extremely powerful and allows for incredible expressiveness, I think it's too much of a stretch for most of the world's programmers for quite a while yet - and I think it was a weak point in his argument.
His suggestion for the last programming language? Clojure - or at least something evolved, or derived, from it. He has a point: Clojure is a very simple, consistent language (syntactically and semantically) and has implementations on both the JVM and the CLR; it has the functional stuff down pat; it has some slick polymorphism features (and supports encapsulation and type extension, although none of this looks much like what you'd expect if you're used to OO languages); it has awesome interop with its primary platform, making all the world's libraries available to you and allowing your code to be called easily from those libraries.
The homoiconic issue is a deal-breaker for many right now - but maybe Uncle Bob is right and that will be our final paradigm "lesson" and we'll see a world where we all speak a modern Lisp, maybe even in our lifetime?