The Making of Epoch, Part 1 - Why?

Published February 01, 2012
The Making of a Pragmatic Programming Language for the Future

Part 1: Why do this in the first place?



Welcome to the first of what I hope to make a semi-regular series, chronicling the experiences I've had thus far with the Epoch programming language. Across the span of the series, I plan to delve into the motivation behind the language and its design decisions, the technical approach used in implementing both the compiler and the virtual machine upon which Epoch runs, and even a glimpse into the nascent IDE and developer toolkit for the language.

In this first installment, I'd like to lay the groundwork for understanding exactly why I undertook this venture, almost five and a half years ago. I feel that a solid grasp of Epoch's idealistic roots is essential to fully appreciating the scope of the project and the reasoning behind some of its more esoteric decisions.

I intend to keep the first part of this series fairly light reading, but I promise that later on we'll get into some nice heavy programming, as I tour through the guts of several iterations of compiler and VM architectures as well as the underlying design theory behind Epoch itself. For the moment, though, I'll do my best not to make you think too much.


Necessity is not the mother of invention
Contrary to the old adage, I don't think that "necessity" is really what breeds innovation. I think it's good, old-fashioned, righteous annoyance. Nothing motivates you to fix a problem like being routinely and abjectly pissed off by it. There is a threshold of frustration; below this line, nothing gets done, because the pain of the problem isn't "that bad" and we can just sort of suffer through it. Above this line, though, we're just mad as hell and we're not going to take it anymore.

This is precisely what led to Epoch's creation. I am, by vocation, primarily a C++ programmer. Over the past several years, I've witnessed a shift in programming challenges. It used to be, back in the before times, that the hard part of programming - especially in fields like games, which have always held my interest - was fitting a lot of really cool stuff into relatively limited processing space. We didn't have much memory, or much in the way of processor cycles.

For historical background, I learned to program on a machine with 16KB (yes, kilobytes) of memory. Its processor ran at roughly 1.3MHz (yes, megahertz). This was an exercise in extreme parsimony. Making convincing games on such a platform required time, skill, and a lot of clever thrift. (If you're interested in more of my personal story, check out my past entry "Waxing Nostalgic.")

In the intervening years, the problem has ironically inverted itself. Now, we find ourselves with precisely the opposite dilemma: we have too much processing capability, and no effective way to harness all of it.

I speak, of course, of the multicore era.

Enter the core. And then three other cores, just because.
It is now pedestrian to own a computer with dual processing cores, each running at over 2GHz. Four cores is commonplace, and eight is fast becoming a mainstream consumer reality. Pile on top of that a handful of gigabytes of memory. This is a radically different world than that in which I learned to hack. The challenge of contemporary software development isn't "how do I fit all this cool stuff onto this wimpy little machine," it's "how do I take advantage of all of this latent power sitting around?"

Writing concurrent, multithreaded software is a nightmare at the best of times. Even the best programmers I know are leery of concurrency. Race conditions, deadlock, livelock, resource starvation, reentrancy concerns, priority inversion... the list of common concurrency issues seems staggering. And yet, concurrent multithreading is the go-to, de facto standard tool for writing software that can fully leverage multiple processor cores. (I'll leave out my snarky comments on how most "programmers" need little help wasting the billions of bytes of RAM they have at their disposal. Oops, I was snarky anyways.)

The problem isn't just with the paradigm of concurrent multithreading, however; the problem is compounded by the tool most often chosen for the job, at least in the games industry: C++. Until the C++11 standard arrived (finalized only months before this writing), C++ didn't even have a standard memory model, let alone a standard threading model; this meant that compiler, operating system, and hardware vendors all had subtly but perniciously different ideas about how memory operations and concurrency should behave.


Wherein I begin to whine about languages that suck
C was a massively successful language because it largely allowed programmers to abstract away the details of dozens of competing hardware architectures and operating systems. C++ is, in my own opinion, a massive failure in that it largely forces you to worry about the details of every hardware architecture and operating system you must target. Anyone who has ever written code for multiple gaming platforms knows of what I speak.

Go ahead: write a nontrivial game that'll run equally well on an iPhone, a Droid, a four-core gaming PC, an Xbox 360, and a PlayStation 3. Now do it without lots of libraries that hide the platform details for you, and lots of #ifdef hackery. Go on. I dare you.

Even without concurrency to worry about, though, C++ is a language that has aged very, very badly. Compared with the abstraction capabilities of most modern functional languages (I think specifically of languages like OCaml, F#, Haskell, and their ilk here) and the dynamic, highly mutable and malleable nature of languages like Python, Ruby, and even JavaScript, C++ is just... gross. I have this recurring thought that itches at the back of my brain, suggesting that anyone who willingly writes C++ in today's ecosystem of programming languages must be a complete masochist.

Sadly, though, C++ has its strengths, and its reasons for remaining the undisputed king of deploying high-performance realtime game software. Chief among them: C++ is a double-edged sword when it comes to hardware abstraction. Although it is true that you might be forced to think in painstaking detail about your memory usage patterns, interactions between various threads of execution, and so on, it is precisely this fact which gives C++ its power. Because a C++ program has such intricate control over those details, it can truly wring every last available drop of performance out of a modern computing platform.


Looking for better alternatives
All this left me, about six years ago, feeling rather annoyed. While I appreciated (and still do appreciate) having the kind of control that C++ permits me, in about 90% of what I do, that kind of control is utter overkill. The simple fact is that the majority of the code in the world doesn't have to be hideously efficient in order to do its job. (Witness the continued existence of idiomatic Java "enterprise" programs as strong evidence of this.)

What I really wanted was some kind of hybrid. I wanted to be able to think about bits and bytes and alignments and processor cycles and register budgeting and SIMD instructions, but only when absolutely necessary. Moreover, I wanted to be able to think about highly abstract concepts as often as I could get away with it: higher order functions, discriminated unions decomposed via pattern matching, partial function application, lexical closures, lambdas, control over lazy-versus-eager evaluation semantics, process space isolation (instead of shared memory concurrency), and all these great features that other languages had.

I wanted to put C++, macro assembly, Erlang, Haskell, and Lisp in a blender and then bathe forever in the beautiful concoction that came out.
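
In case that wish list reads like buzzword soup, here's a tiny Haskell sketch of a few of those ideas in one place (purely illustrative; every type and name here is invented for the example): a discriminated union decomposed via pattern matching, a higher-order function with partial application, and lazy evaluation taming an infinite list.

```haskell
-- Purely illustrative: a few of the abstractions from the wish list
-- above, expressed in Haskell. Every name here is invented.

-- A discriminated union (algebraic data type) describing shapes.
data Shape
  = Circle Double        -- radius
  | Rect Double Double   -- width, height

-- The union is decomposed via pattern matching.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- A higher-order function built by partial application:
-- map expects a function, and (* 2.0) is a partially applied operator.
doubleAll :: [Double] -> [Double]
doubleAll = map (* 2.0)

-- Lazy evaluation: the list of shapes is infinite, but that's fine,
-- because we only ever demand the first three areas.
firstAreas :: [Double]
firstAreas = take 3 (map area shapes)
  where shapes = Circle 1 : Rect 2 3 : repeat (Rect 1 1)

main :: IO ()
main = print (doubleAll firstAreas)  -- [6.28..., 12.0, 2.0]
```

The point isn't Haskell specifically; it's that abstractions like these exist, cost almost nothing to use, and are mostly absent from day-to-day C++.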


It took some time for the thought process to distill; at first, I played with ideas like embedded domain specific languages, specially crafted architectures that worked on the principle of nested abstraction layers, and so on and so forth. But the more I stared at it, the more the situation seemed to be begging for something new. I couldn't accomplish the kind of programming I wanted to do in any language I knew of, and I'd scoured the internet in search of languages that could help me realize those goals.


Birth of a cyborg. I mean, programming language.
We needed a new creation. More than that, though, we needed a pragmatic creation - something that was all about getting stuff done, and to hell with programming language theorists and blind idealism. Academia churns out plenty of interesting languages that are utter failures at getting things done in the real world. And this, I thought, was a serious issue. There seemed to be no shortage of people wanting to make new languages; and that remains true to this day.

What was lacking, I thought, was a unique combination of two factors that I deemed crucial and fundamentally inseparable in the development of a killer language. I thought about every great, successful language I knew of, and they all seemed to share these two traits.

First, the language must be productive for some real applications. If you can't do interesting stuff with it, it's a failure.

Second, the language must enable certain ways of thinking which are more effective than the alternatives. This is largely tied in with the first point, but subtly different. Lots of competing languages are productive, but essentially offer minimal advantages over each other. A new language had to be not only productive, but more productive than anything else available in its domain. A new language had to be better.


So it was that I posted a thread which became a historical artifact in the lore of the GDNet Software Engineering Forum (which has since been folded in with the General Programming forum). The thread proposed the creation of nothing less ambitious than "A Pragmatic Language for the Future."

There were a lot of differing reactions. Most people seemed to like the idea in principle, but almost immediately, it became obvious that everybody's wishlist was a little bit different. Clearly, this was not something that could be done by committee; it would take a single, benevolent dictator to come in, decide on a good compromise that hit the sweet spot between "boring" and "perfect" for a suitably large subset of programmers, and then go make the whole thing happen.


I donned my royal robes and grabbed the nearest gilded scepter. It was time to make a programming language.


Epilogue
In the intervening years, I've learned a hell of a lot - not just about programming languages, but compilers, virtual machines, hardware, human nature, and my own limitations. The ideas behind the Epoch project have crystallized substantially. It now exists in a tangible, more or less usable form - usable enough that I've written and shipped a simple syntax-highlighting editor for the language, written (partially) in Epoch itself. The remainder of the dream lives on in my mind - a beautiful world of many noble goals.

To give you a quick taste of what I want Epoch to bring to the table, imagine the following programming scenario.

Joe must write a program which processes images of cats. This program needs three major features: an image recognition system that can categorize cats into "cute" and "not quite as cute"; a powerful and efficient engine for batch processing thousands of images culled from the internet; and an elegant UI that can be easily ported to desktops, mobile devices, and so on.

Joe looks in his toolbox and finds only C++. Joe promptly jumps off a bridge in despair, realizing how painfully icky the project is going to be if he has to do it in C++.

Bob is then awarded the contract after Joe's untimely retirement. Bob looks in his toolbox and finds a nice language; let's call it Foo.

Foo has a few features that make it ideal for this project. First, it allows (but does not mandate) direct hardware-level description of algorithms, data structures, and their respective implementations. This means that writing the image recognition system will be straightforward. The math can be fast, the code optimized for various platforms, and all the nifty SIMD instructions and such can be harnessed.

Secondly, Foo has a rich vocabulary for describing how to parallelize processes. You can use (if you must) traditional shared-state concurrency, but there's also support for isolated parallel processes akin to Erlang's, or cooperative multitasking via coroutines, or automatic vectorization to SIMD or even SPUs. If you write your code nicely, you can even automatically compile it into OpenCL and run it on a GPU. This means that the cat-recognition app can leverage any possible tool that makes parallel processing effective, and no matter what platform it runs on, it'll be close to maximally efficient. (For a rough taste of the isolated-process style in an existing language, see the sketch at the end of this list.)

Third, Foo does all that automatically with some markup in the code. You don't even have to think too hard, just describe how to decompose your algorithm and data, and it will take care of the rest. Install it once on a PC, and it'll use your GPU and eight cores of processor to do the heavy lifting. Install it on your phone, and it will carefully schedule the code to sip battery power while still running as fast as it can. Run it on your PS3, and it'll convert to SPU kernels. Run it on your toaster, and it'll automatically burn the toast twice as fast - one slice in each slot.

You get the idea.

Fourth, Foo has a nifty ability to interface with existing languages. Bob happens to be a whiz at writing UIs in C# and XAML, so he whips up a slick interface and plugs it into all the powerful Foo back-end code in an afternoon.

Fifth, Foo allows you to do incredibly powerful things with abstractions, like "code access control lists." Gone are the days of public, private, protected, friend, blah blah blah. Instead, Bob says that the UI is allowed to access these specific methods of the CatRecognizerApp object, and the ImageProcessing library is allowed to access those other methods. CatRecognizerApp is a single, well defined, concise piece of code that ties the interface layer in with the low-level guts, but doesn't need any weird code boilerplate to prevent the image processing code from turning all the buttons black, or some similar silliness.

Sixth, Foo provides automatic selection between garbage collection and manual memory management via arena allocation models. This means that Bob can write his entire UI in garbage-collected mode, and then write highly efficient allocation patterns for the image processing logic.

Seventh, Foo has a rich editor and tool ecosystem. When Bob moves the Frobnicate function from the Bletch module into the Quux module, he doesn't lose his eight days of revision history in that function. Instead, the revision history magically follows the code into the new module, as if it had lived in Quux all along. When Bob does a diff to check his work, he can go all the way back to the original implementation of Frobnicate, all without leaving the comfort of Quux. No need to root around through dozens of files trying to figure out where a function came from, or what branch or merge or fork moved it from one file to another.

Lastly, Foo supports a wide variety of execution models. While developing, Bob can write Foo code in a REPL interpreter right in the IDE. Once it's ready, he can deploy it to a virtual machine that allows hot-swapping of code as he tweaks it, without restarting the cat recognition app. Finally, when he's ready to ship, he can compile directly to native binaries for any number of platforms.
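
As promised back in the second point, here's a rough sketch of the isolated-process style in an existing language. This is Haskell, not Foo or Epoch syntax, and it's only an approximation: share-nothing workers that communicate purely by passing messages over channels.

```haskell
-- Purely illustrative: Erlang-style isolated workers approximated
-- with Haskell threads and channels. This is not Foo/Epoch syntax.
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever, forM_, replicateM)

-- A worker shares no state with anyone else; it pulls jobs from an
-- inbox and reports results to an outbox, message-passing style.
worker :: Chan Int -> Chan Int -> IO ()
worker inbox outbox = forever $ do
  job <- readChan inbox
  writeChan outbox (job * job)  -- stand-in for "rate this cat image"

main :: IO ()
main = do
  inbox  <- newChan
  outbox <- newChan
  -- Spin up four isolated workers, one per imagined core.
  forM_ [1 .. 4 :: Int] (const (forkIO (worker inbox outbox)))
  -- Feed in eight jobs, then collect the eight results.
  forM_ [1 .. 8] (writeChan inbox)
  results <- replicateM 8 (readChan outbox)
  print results
```

Because the workers run concurrently, the results may arrive out of order; the important part is that no application state is shared between workers, so the classic shared-memory race conditions I griped about earlier have nowhere to live.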


All this may seem like a distant, unreachable fairy tale; a wonderful but inaccessible land of paradise and programming magic, where people actually get stuff done instead of debugging segfaults all day, where hair is not torn out because of weird compiler messages, where fairies and elves dance together under the stars and sing songs of IPOs and stock options.

Consider, though, that this may not be so far off as it sounds.

"Foo" was the name I originally used to describe Epoch, and Epoch is the slowly-but-surely evolving embodiment of the "Foo vision" that I just described.


I believe it can happen. I believe it will happen, someday at least.

And I intend to be a part of making that fairy tale come to life.

Comments

Washu
Enjoy the issue reports. Try not to cry too hard as I break this over my pinky :D
February 01, 2012 11:12 AM
coderx75
This is pretty ambitious. I saw your last post and thought, "Five years to write a compiler?" Now, I see.

This made me think of an idea I had a while back for a compiler or VM (or both) that relied completely on metaprogramming. I don't know if it would be of any use here but I thought it might be worth sharing. The idea was inspired by metatables in Lua and I thought "why stop there?" What if the behaviors *and* features of the language could be externally implemented and defined through metaprogramming? I never really fleshed out the idea but, if it could be done, it would make the development of a complex language more manageable. So, the syntax would be designed around metafeatures and the features would be defined through its metaprogramming, even going so far as to allow code in separate modules of the same application to use separate memory management schemes or even compilers (bytecode or binary). Sounds cool, though I'm not sure if this would lead to more bad than good... and the performance... eesh. =/
February 01, 2012 03:12 PM
Telastyn
The problem with that sort of metaprogramming utility is that there needs to be code somewhere. If you make it fully featured, it's just another programming language that doesn't really offer anything in its domain (of making other programming languages). If you don't then it's far less useful because people can't do what they want with it. Picking those sweet spots where people can mostly do what they want, but you take care of what they don't care about is a lot of the trick.

And honestly, DSL development is a common goal of programming language developers these days.
February 01, 2012 03:57 PM
nolongerhere
Yay for article #1!
February 01, 2012 10:42 PM
ApochPiQ
Sometimes, when I'm eating cookies, I think to myself: "What would it be like to log in as ApochPiQ?" Then I do it.

Love,
Washu
February 02, 2012 12:13 AM
Dario Oliveri
Programming is a masochistic hobby. If a programmer is a masochist, why would he move to Foo instead of C++? :) Just joking.
February 03, 2012 03:30 PM