
Domain Specific Languages: Yea or Nay?

Published September 18, 2006
Floyd Marinescu has posted a great summary of the debate over domain-specific languages. I find the points particularly interesting in light of some of my own rantings here - and more so in light of personal experience.

I'm going to make some lofty promises: herein I will both espouse and bash Lisp, prove that guns and Unix have a lot in common, and make a few morbid Star Wars jokes. I will also present a solution to the DSL debate - and no, I'm not talking about cable modems.


Programming is a tricky and complex task. Developing software can, at certain professional levels, involve juggling literally hundreds of different concerns, often mutually exclusive. Satisfy the customer, satisfy the boss, satisfy investors, satisfy the shareholders, get it done on time, get it done on budget, get it done in the first place... make sure you don't write something so horrid that it ends up in the hall of shame, or the maintenance programmer ten years from now hunts you down and axe-murders you...

You know, the everyday, routine, this-would-be-mundane-if-it-wasn't-making-me-psychotic worries of the software world.


We have some powerful tools for mitigating these challenges: development methodologies, testing practices, design applications. They're all useful and have their benefits. However, there is one weapon which remains the single most important and effective tool in our toolbox: abstraction.

Abstraction is the lightsabre of programming. It is elegant, civilised, and glows in the dark. It offers precision, control, and reliability. It can sever our bonds and lend us freedom, strike down the evil foe, or even deflect the haphazard blaster fire of the enemy to protect the innocent. And, just when we think we can't stretch the analogy any further, we accidentally sneeze while twirling the blade around, and decapitate ourselves.


Unfortunately, abstraction shares a deadly trait with rm -rf and Glock handguns: like all the best tools, abstraction is dangerous if not used correctly. There are some safety mechanisms and simple practices we can employ to make sure we sever a minimum of precious limbs, but there's absolutely no way we can stop a determined moron from screwing something up. Even an innocent newbie can point the handle the wrong way and turn himself into a Padawan Shish-Kebab. That flowing sensation you're feeling? Sorry, Luke, it ain't the Force.


Once we get past the imminent danger of parting with our digits, abstraction looks like it may have some upsides. It may even be worth the risk. So what does it actually look like? Abstraction takes three primary forms:
  1. Structural or algorithmic abstraction

  2. Semantic abstraction

  3. Linguistic abstraction


I'll cover each of these in turn. (Note that this is not necessarily a chronological or evolutionary progression; indeed the relationship is much more complex, as we will see later.)


Structural/Algorithmic Abstraction
This was one of the first great revolutions in programming. For modern programmers (myself included) it's still a bit of a brain-bender that this stuff was ever considered revolutionary - let alone a mere generation ago. But revolutionary it was, and it made a huge difference.

The difference came in the guise of structured programming. This was an early but vital form of abstraction. By grouping programs into common, similar blobs (conditionals, loops, routines) we can achieve an important result: elimination of duplication. This is a fundamental principle that will crop up in all forms of abstraction; they are deeply intertwingled, to borrow Nelson's charming adjective.

To us in the twenty-first century, the benefits are obvious: the common points (incrementing a loop counter, checking it, conditionally repeating; jumping into and returning from a subroutine; and so on) are hidden from us in a way that gives us plenty of control over the loop/routine/etc., without requiring us to duplicate (and potentially screw up) the common code. Boilerplate prolog and epilog instructions that once were a part of writing any function are now generated magically behind the scenes for us by the compiler; ditto for loops, exceptions, and a host of other things we now take for granted.

A natural corollary to this structured approach is algorithmic abstraction; in many ways, they are one and the same. Common routines like searching, sorting, and so on can now be hidden inside functions and subroutines. It's all still GOTO under the surface, sure, but the layer of abstraction allows us to safely rely on the mechanism working without caring about how it is implemented.
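To make that concrete, here's a minimal C++ sketch (the function and its names are my own illustration, not from any particular codebase): the caller relies on the contract of find_index and never sees the compare-and-jump machinery the compiler generates for the loop.

    #include <cstddef>

    // A linear search hidden behind a subroutine. Callers get a simple
    // contract ("the index of value, or -1") and never touch the loop
    // counter, the comparison, or the jumps that implement it.
    int find_index(const int* data, std::size_t count, int value)
    {
        for (std::size_t i = 0; i < count; ++i)
            if (data[i] == value)
                return static_cast<int>(i);
        return -1;
    }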


Semantic Abstraction
The separation of interface (how we talk to a chunk of code) and implementation (how the code gets stuff done) is a vital principle which also recurs quite heavily in discussions about abstraction. Once we have begun to divide our programs up into common, similar building blocks, we run into an interesting category of challenges.

Consider, for example, the idea of "sorting." Once we have a sorting routine written, an important (but deceptively boring) question arises: what, precisely, can this sort? In the beginning it may simply accept an array, and sort the array. Maybe it's an in-place sort that ensures we don't need to allocate additional memory.

However, this quickly becomes inadequate. Suppose we also have another chunk of data which is stored in a binary tree, which we also want to sort. Or perhaps it's a linked list, or whatever. Intuitively, the concept of "sorting" still can be applied - but our code itself can't work on non-arrays without being rewritten.

This leads us to an extremely important step: semantic abstraction. This is where we begin to separate ourselves from the concern of how the computer does things, and start focusing more on what can be done. This level of abstraction gives us notions like "containers" and "iterators".

We cannot ignore the breakthrough here: whereas structural abstraction lets us easily vary the routines that act on our data, semantic abstraction lets us take the next step and fully decouple the operations from the data representation. We can now speak of algorithms that operate without knowing anything about the representation (on a machine level) of the data they handle. Generic programming becomes possible.

A whole host of improvements appears on the horizon: we can write code once, maybe with one or two special cases, and not worry about replicating simple search/sort/parse/traverse algorithms each time we change the storage details of our data.
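As an illustration of that decoupling (using the C++ standard library, since it comes up again below): std::find is written once against the iterator abstraction, so the very same algorithm traverses a raw array, a vector, and a linked list without knowing how any of them stores its elements.

    #include <algorithm>
    #include <cassert>
    #include <list>
    #include <vector>

    int main()
    {
        int raw[] = { 3, 1, 4, 1, 5 };
        std::vector<int> vec(raw, raw + 5);
        std::list<int>   lst(raw, raw + 5);

        // One algorithm, three representations: std::find only needs
        // iterators, not any knowledge of the underlying storage.
        assert(std::find(raw, raw + 5, 4)           != raw + 5);
        assert(std::find(vec.begin(), vec.end(), 4) != vec.end());
        assert(std::find(lst.begin(), lst.end(), 4) != lst.end());
        return 0;
    }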

But there is a dark side: implementing this decoupling is not always easy. In fact, in many languages, it can be downright daunting. Don't believe me? Take a look at an implementation of the C++ standard library sometime; have a look at how generic iterators work. It's enough to melt your brain.


Linguistic Abstraction
Thankfully, that's not the end of the story. We have one final layer of abstraction available to us, which handily solves the difficulties of implementing semantic abstraction. Instead of building the abstraction in the language, and then using it from within the same language, we build a new language that automatically incorporates the abstraction.

Such higher-level languages have well-proven benefits. In much the same way that hiding stuff inside a "magic" for loop helped eliminate points of failure, linguistic abstraction lets us ensure that our programs all benefit uniformly from single abstractions. For a noteworthy case, see C++'s std::list and the reasons why you cannot use the std::sort algorithm on it. In a language where the notions of "container" and "sorting" were built into the language itself, that distinction could trivially be masked and programmers would never need to worry about it.
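For the curious, here's that distinction in code: std::sort demands random-access iterators, which std::list cannot provide, so the library papers over the gap with a separate member function.

    #include <algorithm>
    #include <list>
    #include <vector>

    int main()
    {
        std::vector<int> vec(3, 7);
        std::list<int>   lst(3, 7);

        std::sort(vec.begin(), vec.end());    // fine: random-access iterators
        // std::sort(lst.begin(), lst.end()); // will not compile: list
        //   iterators are only bidirectional, so the abstraction leaks
        lst.sort();                           // the workaround: a member function
        return 0;
    }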

But linguistic abstraction is more than just cramming a lot of libraries into a box and calling them "keywords" instead of "library functions" - it affects the very design and whole being of the language itself. Ruby is a beautifully elegant example of the potential available here.


The Section Where I Wish I Was Douglas Hofstadter
Once linguistic abstraction becomes available, we see a fascinating phenomenon: in a sort of fractally recursive way, programs still tend to exhibit patterns and idioms. Common sequences and arrangements of code continue to crop up. In this way, linguistic abstraction gives way to a new layer of structural abstraction.

It is not entirely clear how far this recursion may extend; while it is tempting to dream up worlds where single strings of nuanced prose could invoke immensely complex computer programs behind the scenes - sort of HAL 9000 meets SQL - it is difficult to imagine practical benefits which could be generic enough to apply widely throughout the software field. While certain domains may achieve incredible feats of concision and expressivity in this manner, I personally feel it is not likely that we will see more than one or at most two additional iterations of this pattern in the foreseeable future.


Where DSLs Come In, Finally
When considering DSLs, we must keep an important truth in mind: these are, by very definition, domain specific solutions. Therefore, by nature, they will tend to involve many subtleties and specific concerns of their host domain which may not be relevant in other domains. As a result, making blanket generalized statements about the practice of employing DSLs is difficult.

We need to consider three important points: the level of abstraction which we will have in the DSL itself, the level of abstraction at which the DSL is implemented, and the level of abstraction of any code which interacts with logic implemented in the DSL but is itself implemented elsewhere.

For a concrete example, consider a DSL that is particularly fresh in my mind: KC. At Egosoft, we use a custom bytecode-compiled language which is interpreted on the fly in a VM which runs alongside the game engine itself. High-level game logic is implemented almost exclusively in KC, or in yet another set of DSLs which are themselves implemented on top of KC. Performance-critical code is handled by C or C++ in the engine itself.

Aside from the fact that it's a purely homebrew solution, this is a very common tactic in the games world. It strikes a useful balance: we can use the benefits of abstraction to help get work done and ensure code reliability, while at the same time dropping down into lower-level languages when we absolutely have to worry about CPU cycles.


However, all is not perfect. In fact, this separation is a cause of daily annoyance and complication. There have been a few cases where our DSLs, in the valiant attempt to save us work by providing abstraction, have in fact cost tremendous amounts of time, effort, and pain. When wielded by someone who makes too many incorrect assumptions, the KC language is perilous - it is a powered-up lightsabre juggled by a clown with bad reflexes.

Worse, wielding it correctly isn't perfectly safe, either - even the most careful and conscientious users occasionally hack off a body part they'd rather have kept attached.


Where did things go wrong? How did we manage to create such a predicament?

The problem, as Obele effectively captured, is choice. More precisely, it's change.

In the years since KC was first deployed (about 7 now, if my probing in the source repository is accurate), things have changed radically. We once targeted machines that had paltry amounts of RAM, like 128MB. (I can't even find a cheap digital camera with so little storage anymore.) Today, we're moving towards a requirement of 1GB with a recommendation of 2GB. Then, we had a relatively simple set of tasks that needed to be done at a high level of abstraction (running an economic simulation divided into about 100 "sectors").

Today, KC handles several orders of magnitude more responsibility. It takes care of mapping keys to in-game actions. It handles menus, buttons, widgets, radar screens. It juggles a few 3D engine resources. In several places, the engine actually calls up to KC to do certain tasks (more on why that is evil in a second). KC was once designed to abstract away a fairly narrow set of problems - it was a domain-specific language for a limited domain. In the years since, it has exploded into a general-purpose language, and it has outgrown itself. The result has been costly.


This is precisely the risk Obele warns about. While he rightly points out that changing understanding of requirements can lead to mutilation of a DSL, there is an even worse danger, which particularly plagues us in the gaming business: when the requirements themselves change. If the domain shifts, the DSL must shift with it, or fail.

Problem is, shifting a DSL is a painful job. You have to respecify the language, maybe even redesign it entirely. Compilers, runtimes, and libraries must be modified. Worst of all, you run the very real risk of invalidating a potentially huge body of existing code. This is exactly what we've run into with the KC language.


Signs of DSL Decay
Ideally speaking, a DSL should accomplish one thing: linguistic abstraction. It should give us a way to talk about a problem in terms specifically suited to the problem domain. I want to say "ship A attack ship B until B blows up" - I don't want to have to write some loop that polls B's state and moves A around in space while it shoots at B and... blah. Hell, I don't even want to explain the icky alternative to abstraction - it's that freakin' ugly.
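OK, fine - for the morbidly curious, here's a sketch of the contrast. The one-liner in the comment is hypothetical syntax (not real KC), and the C++ below it, with invented types and helper functions (none of this is an actual engine API), is roughly the polling loop that one line would hide.

    // Hypothetical DSL statement (illustrative, not real KC syntax):
    //
    //     ship A attack ship B until B destroyed
    //
    // Roughly the loop that one line hides. Ship and all the helpers
    // here are invented for illustration.
    struct Ship { int hull; int pos; };

    bool destroyed(const Ship& s)               { return s.hull <= 0; }
    bool in_range(const Ship& a, const Ship& b) { return a.pos == b.pos; }
    void move_toward(Ship& a, const Ship& b)
    {
        if (a.pos < b.pos) ++a.pos; else if (a.pos > b.pos) --a.pos;
    }
    void fire_at(Ship&, Ship& b)                { b.hull -= 10; }

    void attack_until_destroyed(Ship& a, Ship& b)
    {
        while (!destroyed(b))       // poll B's state...
        {
            move_toward(a, b);      // ...move A around in space...
            if (in_range(a, b))
                fire_at(a, b);      // ...and shoot, tick after tick
        }
    }

    int main()
    {
        Ship a = { 100, 0 };
        Ship b = {  50, 5 };
        attack_until_destroyed(a, b);
        return 0;
    }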

When this works and is done well, it is beautiful and effective. Most of the time, though, things decay.

There are some tell-tale signs that the linguistic abstraction has leaked, and that the DSL is in danger of becoming a liability - indeed, in all likelihood, it became a liability a long time ago:

  1. Transparency - when logic implemented in the DSL is plainly affected by the implementation of the language itself

  2. Inverted Responsibility - when logic implemented outside the DSL plainly relies on the implementation of logic in the DSL itself

  3. Martian Syndrome - when concepts from one language have absolutely no equivalent in another; usually, this occurs in conjunction with transparency, where the DSL relies on implementation-specific quirks in the lower levels, but can't actually understand how the implementation works


All of these plague KC. We see transparency all the time: KC logic directly allocates and manipulates assets in the 3D engine itself, rather than dealing with them as abstractions. This sort of "parallel towers" situation is a common problem with abstractions, even ones that don't involve DSLs. Abstraction should be built in horizontal, onion-peel layers in order to be effective.

We get inverted responsibility on a regular basis as well; low-level DirectInput logic calls up into the KC code for handling key mapping, menus, and a host of other functionality in the game. The result is extremely tight coupling between layers that should have been decoupled via abstraction.

Lastly, we suffer extensively from the Martian Syndrome. KC is a dynamic, loosely typed language. Cram something into an array, and you have no way to know what it is when you want to get it back out: string, integer, object... we have to resort to some hacks that actually talk directly to the VM in some cases to work around this. It's ugly.
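To model the problem (this is an illustrative C++ analogue, not KC's actual VM representation): every value carries a runtime tag, every consumer repeats the same tag-checking boilerplate, and nothing at compile time stops a caller from guessing wrong.

    #include <string>

    // An illustrative model of a loosely typed slot. Once a value has
    // been crammed into one, the only way to learn what comes back out
    // is to interrogate the tag at runtime.
    enum ValueType { VT_INT, VT_STRING, VT_OBJECT };

    struct Value
    {
        ValueType   type;  // the tag the script must interrogate
        int         i;
        std::string s;
        void*       obj;   // "object" degenerates to an opaque pointer
    };

    int as_int_or_zero(const Value& v)
    {
        // Tag-check boilerplate, repeated by every consumer; a caller
        // who guesses the tag wrong finds out only at runtime, if ever.
        return v.type == VT_INT ? v.i : 0;
    }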

Worse, KC runs in a VM that is deeply tied to the game engine. This precludes the use of debuggers and other common tools to help analyze runtime behavior and find bugs. Hunting down flaws in KC code is reminiscent of being dumped into a jungle with a dull machete and told to escape the hungry tigers: gets the heart rate going, but ultimately is too stressful to be much fun.

For the final blow, KC is half-object-oriented: you can define classes and even simple inheritance hierarchies. This is tremendously useful and has saved us a lot of time and work. However, KC has no way to ensure that you're getting the class you expect at compile time: everything is just object. As a result, the code itself often lacks vital expressivity; the code cannot reliably indicate the type of object expected, so we have to use extensive documentation. In a well-crafted abstraction, the code itself would describe clearly what the intended object types were - for instance, via static type annotations.


The Answer
Believe it or not, there's a way to solve all of this. (Get ready; smug-Lisp-weenie-isms ahead.)

It's very simple. Earlier I alluded to using onion-layer models of abstraction; instead of having entire "parallel towers" of abstraction that each describe part of the software solution, you build up layers on top of each other. Each layer is more abstract than the last, and is implemented entirely in terms of the layer below it.

This really saves our bacon. Transparency is no longer a concern, because you can set up the physical file architecture (depending on the language) to preclude layers talking to layers they shouldn't be talking to. In some cases, if layer 3 needs to go all the way down to layer 0, it's OK; since every layer between 0 and 3 understands 0, chances are you can't do much damage by probing in such a manner. Contrast this with the towers approach, where sticking a pipe from one tower into the core of another causes all manner of hideous problems.

Secondly, it cures inverted responsibility. Layer 2 can't talk to layer 3 because layer 2 has no possible grammar or comprehension with which to muck around in layer 3. The only way they will interact is if 3 uses 2 in some way. This eliminates the possibility of foul play provided the layers each adhere to their interface contracts correctly.

Finally, it totally blows away Martian Syndrome. If layer 0 can access Data Representation Foo, so can every layer 1..n.
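Here's a minimal sketch of the onion model (the layer names and functions are invented for illustration): each layer is implemented entirely in terms of the one below it, and in a real project the file layout and build system would reject any #include that reaches upward or skips a level.

    #include <cstdio>

    namespace layer0 {   // bottom layer: talks to the hardware-ish bits
        void upload_mesh(const char* name) { std::printf("upload %s\n", name); }
    }

    namespace layer1 {   // implemented entirely in terms of layer0
        void load_ship_model(const char* ship) { layer0::upload_mesh(ship); }
    }

    namespace layer2 {   // implemented entirely in terms of layer1; it
                         // has no grammar for reaching "up", and never
                         // names a layer0 symbol directly
        void spawn_ship(const char* ship) { layer1::load_ship_model(ship); }
    }

    int main()
    {
        layer2::spawn_ship("ship_model_01");
        return 0;
    }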


Now, we still haven't actually answered the DSL question: do we craft this digital onion from a single language, or do we build each layer out of a different DSL?

As I noted earlier, it is dangerous to make blanket assertions. However, there are some general guidelines which should apply safely to most situations. Be sure to think over them critically before running off to apply them to your current project - your case may well be one I haven't considered fully.

Here's what we need:
  • A guarantee that higher layers won't muck around too much in lower layers. In languages like C++, this is essentially impossible. It is remarkably easy to do in Lisp, but Lisp kind of sucks for practical projects, and most people find all the parentheses scary. Point: DSLs, or Lisp if you're blessed.

  • A guarantee that lower layers won't muck around at all in higher layers. This is also impossible in C++, Java, and so on. It's also not even all that easy in Lisp, which so far has been looking like a perfect candidate for solving all these problems. Sorry, Paul, but you lose this one. Point: DSLs.

  • A guarantee that concepts available in lower layers can be made available to higher layers with minimal cost. This means that vital details and semantics existing in low layers must be transparent and easily understood by high layers when needed. Lisp totally blows here because it stops at a layer of abstraction too high above the hardware, and in many domains like games, that simply isn't acceptable. Worse, DSLs also suck here, because making semantics from, say, C++ available in your DSL requires a lot of work. Point: single language design, and not in Lisp.




This may look deceptively like we've failed to reach a conclusion. However, I've wasted all your time with this long-winded grunting based on a simple hook: DSL-vs-not-DSL is a false dilemma.

Or at least, it should be, in a theoretically ideal world.


Let us suppose the existence of some language, Foo. Foo lets us build onion-layer abstractions trivially, like Lisp. But Foo also lets us talk to hardware and operating systems directly and trivially, like C/C++. Foo fixes the Three Signs of Decay, as we've seen.

What about the problem of change? What if our requirements shift, as they are virtually guaranteed to do sooner or later? As it turns out, this isn't a problem either. DSLs are fragile because a radical requirements shift can require a total reimplementation of the DSL itself. However, Foo makes implementing our abstraction layers trivial - as trivial as writing subroutines or constructing classes.

Even better, our nested-onion model has a cool property: at each new onion layer, the power of code in that layer increases exponentially. Stated another way, the amount of code required to accomplish a task decreases, on average, logarithmically as we reach higher layers. This means that the net amount of code affected is going to be smaller. Even better, the cost of converting that code is minimal, because the cost of reimplementing the abstraction is in turn minimal. This cures the change issue. I don't have conclusive proof, but my gut says that this actually handles change more effectively than a monolithic single-language approach could.


As we've seen, Foo cures all the problems. We get the advantages of DSLs (namely, linguistic abstraction) without the costs and pitfalls. We avoid the change problem, which is (to my mind) the single most serious issue with DSLs in the real world. Best of all, we only have to learn one language, rather than one language per domain.



There's only one downside to all this: Foo doesn't exist.


Yet.

Comments

Daerax
Arrr, that be a lot of words matey. I be fer ye.

ehm im going to eat some noodles, since i like to read while i eat i guess that will be a good read. Cant wait to see what you have to say. Eat & read , write my entry. Reply to this post. ah yes.
September 19, 2006 05:11 AM
Daerax
Brilliant. I really like this entry. I also find it ironic (grammar shield on) that I was going to post on something covering similar ground (although different subject) 2 entries forward in mine.

Btw I dont get bullet two about lower layers not mucking with higher ones. How is that possible? nm I get it.
September 19, 2006 05:38 AM
justinhj
Regarding lisp there are bigger problems than the parentheses.

Firstly it's trivial to expand lisp to interpret another DSL. If your scripters or game programmers don't like ()'s then expand lisp to be a python or ruby syntax language. You still get all the good things about lisp under the covers (apart from it being hard or impossible to write macros in the new syntax, but that's probably not an issue).

So the bigger problems...

Lack of lisp programmers, and it takes a long time to become competent in lisp. I learned C for example in a few weeks. C++ took a few months. lisp took me about 2 years of part time hacking before I felt confident to write production quality code.

Lack of tools and compilers. You can't go to the store and buy a lisp development system that will run on ps3 and xbox 360. You'd have to write your own native code generating compilers, and only one company has been crazy and smart enough to do that.


Justin

September 19, 2006 10:22 AM