D Bits


About this blog

Game development with the D Programming Language.

Entries in this blog

Aldacron

Since the D1 days, D has had a package protection attribute akin to Java's (though unlike Java, it is not the default). Applying package protection to any symbol in a module makes it accessible only to modules in the same package.

module mylib.mypackage.mymodule;

package struct MyStruct { ... }

class MyClass
{
    package void doSomething() { ... }
}
Given the above, MyStruct is only usable and the member function doSomething in MyClass only callable directly inside the mylib.mypackage package. However, neither is accessible in any subpackages of mypackage (such as in a module mylib.mypackage.subpack.somemodule).


There was some demand for expanding the scope of package protection, so it was eventually added to the language. The above code still works as it always has, restricting access exclusively to the package in which a symbol is declared. Now, it's possible to do something like this:

module mylib.mypackage.mymodule;

package(mylib) struct MyStruct { ... }

class MyClass
{
    package(mylib.mypackage) void doSomething() { ... }
}
In this snippet, MyStruct is accessible in the mylib package and all of its subpackages. The doSomething member function of MyClass is callable in mylib.mypackage and all of its subpackages. In effect, specifying a package name with the attribute tells the compiler the topmost package in which a symbol should be accessible, making it accessible also to all subpackages of that package.
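To make that concrete, here is a small sketch (the subpackage module name is my own invention, extending the example above) showing the access from a subpackage:

module mylib.mypackage.subpack.somemodule;

import mylib.mypackage.mymodule;

void useIt()
{
    // OK: package(mylib) makes MyStruct visible anywhere under mylib.
    MyStruct s;

    // OK: package(mylib.mypackage) covers mylib.mypackage and its subpackages.
    auto c = new MyClass;
    c.doSomething();
}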


This isn't an earth shattering feature, but it allows much more freedom for code organization. Consider a renderer package for a 2D or 3D game engine. One possible approach to supporting multiple renderers is to have a base engine.gfx package which contains the interfaces and an engine.gfx.impl package for all of the implementations (e.g. engine.gfx.impl.ogl). The extended package protection comes in handy here to make common internal declarations visible to the implementations while hiding them from the outside world:

module engine.gfx.common;

package(engine.gfx):

enum internalConstant = 123;
struct InternalStruct { ... }
void internalFunction() { ... }
Previously, you might put this module in the engine.gfx.impl package, make everything public, and tell users not to import anything in the impl package as a form of voluntary protection. Now, you don't even need a 'common' module. You can put each declaration where it makes sense to put it and the compiler will enforce your protection scheme for you.
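As a rough sketch of how an implementation module could then use those internals (the module and function names here are my assumptions, extending the example above):

module engine.gfx.impl.ogl;

import engine.gfx.common;

void initRenderer()
{
    // Visible here because engine.gfx.impl.ogl lives under engine.gfx.
    internalFunction();
    auto limit = internalConstant;
}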

Aldacron

My New Book: Learning D

From late February until about two weeks ago, I had my head down over my keyboard working on a book about the D programming language for PACKT called 'Learning D'. The electronic version is currently available from the publisher's site for roughly half price. Both Kindle and print versions are available at Amazon, though not at the sale price.

If you have experience with a C family language (you don't have to be an expert) and are interested in learning D, this book will guide you through the language at a reasonable pace. Of course I cover all of the language fundamentals, but I've done so in a way that puts the most focus on where D differs from its cousins. There are a lot of similarities with C, C++, Java and C#, but sometimes the similarities can be deceptive. My goal was to point out many of the common issues people run into when they think in C++ or Java while programming in D.

Once past the fundamentals, the book goes into D's compile-time features, including templates, followed by two chapters on ranges. The pace slows down in these chapters, which go into more detail, as D's approach here is quite different from C++'s (though newer versions of C++ are gaining similar features). This is especially true for templates.

The last few chapters go through the D ecosystem, using D and C together in the same program, a peek at web development with D, and a final chapter that gives pointers on where to go for more info.

If you have little experience with C-family languages, you might be better served by Ali Cehreli's 'Programming in D', which is freely available online as HTML, but is also available for purchase in both electronic and print forms. PACKT has another D book coming in January called 'D Web Development' by Kai Nacke.

Aldacron
One D construct I often use is the scope guard statement. This allows you to write code that executes when a scope exits under one of three circumstances: an exception is thrown, no exception is thrown, or always. Example:

void main()
{
    import std.stdio;

    scope( failure )
    {
        writeln( "I only execute when the scope exits due to an exception being thrown." );
    }
    scope( success )
    {
        writeln( "I only execute when the scope exits normally." );
    }
    scope( exit )
    {
        writeln( "I always execute when the scope exits, exceptions or no." );
    }
}
In this example, as written, two lines will be printed to the console when the function exits: the ones for the exit and success conditions, since no exception is thrown. The exit condition will be written first, since scope statements are executed in the reverse of the order in which they are declared. If you add a return statement before the success condition or throw an exception before the failure condition, nothing will be printed -- the scope blocks won't be executed because the code never reaches the point where they are declared. Also, you can have more than one statement in each scope block. Here's another example.

void main()
{
    import std.stdio;

    scope( exit )
    {
        writeln( "I'm always going to execute." );
        writeln( "Because I'm the first scope guard in this scope." );
        writeln( "And the scope doesn't exit before I am declared." );
        writeln( "But I will be the last scope guard to run." );
    }

    int x = 5;
    int y = 6;

    scope( success ) writefln( "I won't execute because of the Error thrown below. BTW, x+y = %s", x + y );

    scope( exit )
    {
        writeln( "Yes, multiple scope guards of the same type can be declared." );
        writeln( "And this scope( exit ) will execute before the one above." );
    }

    if( y - x != 0 )
    {
        throw new Error( "No scopes declared after me will execute." );
    }

    scope( failure ) writeln( "I won't run, since an exception was thrown before this point." );
}
You can run this code online, where you will see this output:[quote]
Yes, multiple scope guards of the same type can be declared.
And this scope( exit ) will execute before the one above.
I'm always going to execute.
Because I'm the first scope guard in this scope.
And the scope doesn't exit before I am declared.
But I will be the last scope guard to run.
object.Error: No scopes declared after me will execute.[/quote]

D also has the try...catch...finally construct, which is what scope guard statements are lowered to by the compiler. While the scope guard is extremely useful and eliminates the need for try...catch...finally in many cases, it is not a complete replacement. For one thing, if an exception is thrown, scope( failure ) does not give you access to the exception object. In practice, I've found that I use the exit condition the most, with success a distant second. I find that I rarely need the failure condition, but it's handy when I do.
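Here's a minimal sketch of my own (not from the examples above) showing the kind of case where a scope guard can't stand in for try...catch, because you need the exception object itself:

import std.stdio;

void riskyOperation()
{
    throw new Exception( "something went wrong" );
}

void withScopeGuard()
{
    // The guard knows *that* the scope failed, but has no handle on the exception.
    scope( failure ) writeln( "withScopeGuard is exiting because of an exception." );
    riskyOperation();
}

void withTryCatch()
{
    // Only a catch block gives you the exception object itself.
    try
    {
        riskyOperation();
    }
    catch( Exception e )
    {
        writeln( "Caught: ", e.msg );
    }
}

void main()
{
    withTryCatch();

    try
    {
        withScopeGuard();
    }
    catch( Exception )
    {
        // Swallow it so main exits cleanly.
    }
}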

This feature may not seem like much on the surface, but as soon as you start to use it you'll fall in love with it. I actually feel disappointed when I find I need to replace a scope guard with a manual try...catch block, as it clutters things up. But what you can't do is add them blindly to your code. The above examples only use the scope guards at function level, but you can put them in any scope. Conditional blocks, loops, anonymous scopes, wherever you want. The statements in the scope guard will execute if they meet the conditions when the scope in which they are declared exits. Like this:

void main()
{
    import std.stdio;

    if( 1 > 0 )
    {
        scope( exit ) writeln( "I'll execute as soon as the if exits." );
    }

    for( size_t i=0; i<3; ++i )
    {
        scope( exit ) writeln( "How many times will I execute?" );
    }

    scope( exit ) writeln( "Will the above scopes be run before I am? Yes, they will." );
}
Output:[quote]
I'll execute as soon as the if exits.
How many times will I execute?
How many times will I execute?
How many times will I execute?
Will the above scopes be run before I am? Yes, they will.[/quote]

The scope guards in the if block and the for loop are declared before the last scope guard, but in this case all three are in separate scopes, so they aren't executed in reverse order. The first one is executed as soon as the if block exits and the second one is executed each time the for loop scope exits (three times in all). Those scopes exit before the final scope guard is ever declared, so it executes last, only when the main function exits.

That brings me to the point of this post. You should always be aware of the scope in which your scope guard is declared, what is visible in that scope, and what happens in any scopes that follow it. I had never had any issues with scope guards until today. Can you spot the error?

Config loadConfig( string fileName )
{
    auto path = Paths.findSettingsFilePath( fileName );
    scope( exit ) path.destroy();

    if( path !is null )
    {
        auto config = al_load_config_file( path.toCString() );
        return new Config( config, fileName );
    }
    else
    {
        return createConfig( fileName );
    }
}
I'm so used to putting a scope guard right after allocation of a resource that, in this case, I didn't pay attention to how that particular resource is used in the function. In this particular case, it is possible for the path instance to be null. Because I declared the scope guard at function scope, it will always execute, regardless of whether the path instance is null or not. If the path object is null, the result is an access violation.

One way to fix this is to add a null check inside the scope guard:

scope( exit ) if( path !is null ) path.destroy();
But that's rather silly, since there's already an if block that checks for null. Furthermore, the existing if block already creates a new scope, entered only when the path object is non-null, so the better solution is to move the scope guard into the if block like so:

auto path = Paths.findSettingsFilePath( fileName );
if( path !is null )
{
    scope( exit ) path.destroy();
    ...
}
D's scope guards are quite useful, but always be aware of how you're using them. Like anything else in programming, if you aren't paying attention, you can create bugs for yourself that may not manifest at all during development and could be hard to track down if they show up on a user machine. As I used to hear all the time in the Army, "More attention to detail, soldier!"
Aldacron

Functional Me

I recall very clearly the first time I ever saw a video game. It must have been in the summer of either '78 or '79, just before my 7th or 8th birthday. I walked into a local 7-11, just a short distance from my house, and was puzzled to see this big box surrounded by a bunch of older kids. Space Invaders. The first time I saw the screen, it blew my mind. I totally forgot the reason I had come to the store in the first place and ran home to beg my mother for money to play. In Christmas of that year, I awoke to find an Atari 2600 under the Christmas tree. That cemented it. I knew I wanted to make games when I grew up.

As it would turn out, my parents would never buy a computer. I did manage to get a little exposure to some programming manuals and a chance to try some things out now and again. Eventually, I gave it up and moved on to baseball. That was something my parents could afford. The programming bug was still there, just stashed away. I pulled it out again when I finally got my first computer at the age of 26. I slogged my way through books, online tutorials (including resources from a handful of sites that would eventually merge to form GameDev.net), and anything I could get my hands on. As a result, I never had any formal education or training in computer science. For years, I thought it didn't matter. But lately, it's been bugging me. And I blame D.

I got into the D community pretty much near the ground floor. There have always been some lively discussions in the newsgroups about which features to add or change. Over the years, especially after D2 came along, I've realized just how many gaps there are in my knowledge base. There are a number of conversations I've tried to follow, but in which I became completely lost. And forget about contributing! Then, there's the functional bits that have made their way into the standard library, particularly with regards to the range interfaces. That stuff is just completely alien to me. Recently, I decided to rectify that.

In the past few weeks, I've signed up for three free online CS courses. Two at edX (one starts next month, the other in March), and one at Coursera (which started last week). The latter is a Programming Languages course in which, for starters, we're learning functional programming with SML. Something I never thought I'd do, but, thanks to D, now have the motivation for. I'm actually quite enjoying it.

Once these courses are finished, I plan to look for more that can be useful to me. Hopefully, I'll be a better programmer as a result.
Aldacron

Binding D to C Part Five

This is the fifth and final part of a series on creating bindings to C libraries for the D Programming Language.

In part one, I introduced the difference between dynamic and static bindings and some of the things to consider when choosing which kind to implement. In part two, I talked about the different linkage attributes to be aware of when declaring external C functions in D. In part three, I showed how to translate C types to D. In part four, I wrapped up the discussion of type translation with a look at structs. Here in part five, I'm going to revisit the static vs. dynamic binding issue, this time looking at implementation differences.

Terminology



Back in part one, I gave this definition of static vs. dynamic bindings.[quote]By static, I mean a binding that allows you to link with C libraries or object files directly. By dynamic, I mean a binding that does not allow linking, but instead loads a shared library (DLL/so/dylib/framework) at runtime.[/quote]
Those two unfortunate sentences did not clearly express my meaning. Only one person demonstrated a misunderstanding of the above in a comment on the post, but in the intervening 12 months I've found a few more people using the two terms in the wrong way. Before explaining more clearly what it is I actually mean, let me make emphatically clear what I don't.

When I talk of a static binding, I am not referring to static linking. While the two terms are loosely related, they are not the same at all. A static binding can certainly be used to link with static libraries, but, and this is what I failed to express clearly in part one, they can also be linked with dynamic libraries at compile time. In the C or C++ world, it is quite common when using shared libraries to link them at compile time. On Windows, this is done via an import library. Given an application "Foo" that makes use of the DLL "Bar", when "Foo" is compiled it will be linked with an import library named Bar.lib. This will cause the DLL to be loaded automatically by the operating system when the application is executed. The same thing can be accomplished on Posix systems by linking directly with the shared object file (extending the example, that would be libBar.so in this case). So with a static binding in D, a program can be linked at compile time with the static library Bar.lib (Windows) or libBar.a (Posix) for static linkage, or the import library Bar.lib (Windows) or libBar.so (Posix) for dynamic linkage.

A dynamic binding can not be linked to anything at compile time. No static libraries, no import libraries, no shared objects. It is designed explicitly for loading a shared library manually at run time. In the C and C++ world, this technique is often used to implement plugin systems, or to implement hot swapping of different application subsystems (for example, switching between an OpenGL and D3D renderer) among other things. The approach used here is to declare exported shared library symbols as pointers, call into the OS API for loading shared libraries, then manually extract the exported symbols and assign them to the pointers. This is exactly what a dynamic binding does. It sacrifices the convenience of letting the OS load the shared library for more control over when and what is loaded.

So to reiterate, a static binding can be used with either static libraries or shared libraries that are linked at compile time. Both cases are subject to the issue with object file formats that I outlined in part one (though the very-soon-to-be-released DMD 2.061 alleviates this a good deal, as the 64-bit version knows how to work with the Visual Studio linker). A dynamic binding cannot be linked to the bound library at compile time, but must provide a mechanism to manually load the library at run time.

Now that I've hopefully gotten that point across, it's time to examine the only difference in implementing the two types of bindings. In part two, I foreshadowed this discussion with the following.[quote]Although I'm not going to specifically talk about static bindings in this post, the following examples use function declarations as you would in a static binding. For dynamic bindings, you'll use function pointers instead.[/quote]

Static Bindings



In D, we generally do not have to declare a function before using it. The implementation is the declaration. And it doesn't matter if it's declared before or after the point at which it's called. As long as it is in the currently visible namespace, it's callable. However, when linking with a C library, we don't have access to any function implementations (nor, actually, to the declarations -- hence the binding). They are external to the application. In order to call into that library, the D compiler needs to be made aware of the existence of the functions that need to be called so that, at link time, it can match up the proper address offsets to make the call. This is the only case I can think of in D where a function declaration isn't just useful, but required.

I explained linkage attributes in part two. The examples I gave there, coupled with the details in part three regarding type translation, are all you need to know to implement a function declaration for a static D binding to a C library. But I'll give an example anyway.
// In C, foo.h
extern int foo(float f);
extern void bar(void);

// In D
extern( C )
{
    int foo(float);
    void bar();
}
Please be sure to read parts 2 - 4 completely. With all of that, and the info in this post up to this point, you have almost everything you need to know to implement a static binding in D, barring any corner cases that I've failed to consider or am yet to encounter myself. Oh, and function pointers. But that's coming in the next section.

Interlude -- Function Pointers



I could have covered function pointers in part three. After all, it isn't uncommon to encounter C libraries that use function pointers as typedefed callbacks (or declared inline in a function parameter list), or as struct members. And there's no difference in declaring function pointers for callbacks or for loading function symbols from a shared library. The syntax is identical. But, there are some issues that need to be considered in the different situations. And it's important to understand this before we get into dynamic bindings.

Let's look first at the syntax for declaring a function pointer in D.

int function() MyFuncPtr;
Very simple: return type->function keyword->parameter list->function pointer name. Though it's possible to use MyFuncPtr directly, it's often convenient to declare an alias:


alias int function() da_MyFuncPtr;
da_MyFuncPtr MyFuncPtr;
What's the difference? Let's see.


int foo(int i)
{
    return i;
}

void main()
{
    int function(int) fooPtr;
    fooPtr = &foo;

    alias int function(int) da_fooPtr;
    da_fooPtr fooPtr2 = &foo;

    import std.stdio;
    writeln(fooPtr(1));
    writeln(fooPtr2(2));
}
There is none! At least, not on the surface. I'll get into that later. Let's look at another example. Translating a C callback into D.

// In C, foo.h
typedef int (*MyCallback)(void);

// In D
extern( C ) alias int function() MyCallback;
Notice that I used the alias form here. Anytime you declare a typedefed C function pointer in D, it should be aliased so that it can be used the same way. Finally, there's the case of function pointers declared inline in a parameter list.

// In C, foo.h
extern void foo(int (*BarPtr)(int));

// In D.
// Option 1
extern( C ) void foo(int function(int) BarPtr);

// Option 2
extern( C ) alias int function(int) BarPtr;
extern( C ) void foo(BarPtr);
Personally, I prefer option 2. Also, I generally prefer to use extern blocks to include multiple declarations so that I don't have to type extern( C ) or extern(System) all the time (as I did in the previous example).
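For example, the Option 2 declarations above could be grouped in a single block, which is purely a notational convenience:

extern( C )
{
    alias int function(int) BarPtr;

    void foo(BarPtr);
}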

Now that the function pointer intro is out of the way, it's time to look at dynamic bindings.

Dynamic Bindings



At this point, I would very much like to say that you know everything you need to know about dynamic bindings. But that would be untrue. As it turns out, simply declaring function pointers is not enough. There are two issues to take into consideration. The first is function pointer initialization.

In one of the examples above (fooPtr), I showed how a function pointer can be declared and initialized. But in that example, it is obvious to the compiler that the function foo and the pointer fooPtr have the same basic signature (return type and parameter list). Now consider this example.

// This is all D.
int foo() { return 1; }
void* getPtr() { return cast(void*)&foo; }

void main()
{
    int function() fooPtr;
    fooPtr = getPtr();
}
Try to compile this and you'll see something like:

[quote]fptr.d(10): Error: cannot implicitly convert expression (getPtr()) of type void* to int function()[/quote]
Now, obviously this is a contrived example. But I'm mimicking what a dynamic binding has to go through. OS API calls (like GetProcAddress or dlsym) return function pointers of void* type. So this is exactly the sort of error you will encounter if you try to directly assign the return value to a function pointer declared in this manner.

The first solution that might come to mind is to go ahead and insert an explicit cast. So, let's see what that gets us.

fooPtr = cast(fooPtr)getPtr();
The error here might be obvious to an experienced coder, but certainly not to most. I'll let the compiler explain.

[quote]fptr.d(10): Error: fooPtr is used as a type[/quote]
Exactly. fooPtr is not a type, it's a variable. This is akin to declaring int i = cast(i)x; You can't do that. So the next obvious solution might be to use an aliased function pointer declaration. Then it can be used as a type. And that is, indeed, one possible solution (and, for reasons I'll explain below, the best one).

alias int function() da_fooPtr;
da_fooPtr fooPtr = cast(da_fooPtr)getPtr();
And this compiles. For the record, the 'da_' prefix is something I always use with function pointer aliases. It means 'D alias'. You can do as you please.

I implied above that there was more than one possible solution. Here's the second one.

int foo() { return 1; }
void* getPtr() { return cast(void*)&foo; }
void bindFunc(void** func) { *func = getPtr(); }

void main()
{
    int function() fooPtr;
    bindFunc(cast(void**)&fooPtr);
}
Here, the address of fooPtr is being taken (giving us, essentially, a foo**) and cast to void**. Then bindFunc is able to dereference the pointer and assign it the void* value without a cast. When I first implemented Derelict, I used the alias approach. In Derelict 2, Tomasz Stachowiak implemented a new loader using the void** technique. That worked well. And, as a bonus, it eliminated a great many alias declarations from the codebase. Until something happened that, while a good thing for many users of D on Linux, turned out to be a big headache for me.

For several years, DMD did not provide a stack trace when exceptions were thrown. Then, some time ago, a release was made that implemented stack traces on Linux. The downside was that it was done in a way that broke Derelict 2 completely on that platform. To make a long story short, the DMD configuration files were preconfigured to export all symbols when compiling any binaries, be they shared objects or executables. Without this, the stack trace implementation wouldn't work. This caused every function pointer in Derelict to clash with every function exported by the bound libraries. In other words, the function pointer glClear in Derelict 2 suddenly started to conflict with the actual glClear function in the shared library, even though the library was loaded manually (which, given my Windows background, makes absolutely no sense to me whatsoever). So, I had to go back to the aliased function pointers. Aliased function pointers and variables declared of their type aren't exported. If you are going to make a publicly available dynamic binding, this is something you definitely need to keep in mind.

I still use the void** style to load function pointers, despite having switched back to aliases. It was less work than converting everything to a direct load. And when I implemented Derelict 3, I kept it that way. So if you look at the Derelict loaders...

// Instead of seeing this
foo = cast(da_Foo)getSymbol("foo");

// You'll see this
foo = bindFunc(cast(void**)&foo, "foo");
I don't particularly advocate one over the other when implementing a binding with the aid of a script. But if you're doing it by hand, the latter is much more amenable to quick copy-pasting.

There's one more important issue to discuss. Given that a dynamic binding uses function pointers, the pointers are subject to D's rules for variable storage. And by default, all variables in D are stashed in thread-local storage. What that means is that, by default, each thread gets its own copy of the variable. So if a binding just blindly declares function pointers, and they are then loaded in one thread and called in another... boom! Thankfully, D's function pointers are default initialized to null, so all you get is an access violation and not a call into random memory somewhere. The solution here is to let D know that the function pointers need to be shared across all threads. We can do that using one of two keywords: shared or __gshared.

One of the goals of D is to make concurrency easier than it traditionally has been in C-like languages. The shared type qualifier is intended to work toward that goal. When using it, you are telling the compiler that a particular variable is intended to be used across threads. The compiler can then complain if you try to access it in a way that isn't thread-safe. But like D's immutable and const, shared is transitive. That means if you follow any references from a shared object, they must also be shared. There are a number of issues that have yet to be worked out, so it hasn't seen a lot of practical usage that I'm aware of. And that's where __gshared comes in.

When you tell the compiler that a piece of data is __gshared, you are saying, "Hey, Mr. Compiler, I want to share this data across threads, but I don't want you to pay any attention to how I use it, mmkay?" Essentially, it's no different from a normal variable in C or C++. If you want to share a __gshared variable across threads, it's your responsibility to make sure it's properly synchronized. The compiler isn't going to help you.

So when implementing a dynamic binding, a decision has to be made: thread-local (the default), shared, or __gshared. My answer is __gshared. If we pretend that our function pointers are actual functions, which are accessible across threads anyway, then there isn't too much to worry about. Care still needs to be taken to ensure that the functions are loaded before any other threads try to access them and that no threads try to access them after the bound library is unloaded. In Derelict, I do this with static module constructors and destructors (which can still lead to some issues during program shutdown, but I'll cover that in a separate post). Here's an example.

extern( C )
{
    alias void function(int) da_foo;
    alias int function() da_bar;
}

__gshared
{
    da_foo foo;
    da_bar bar;
}
Finally, there's the question of how to load the library. That, I'm afraid, is an exercise for the reader. In Derelict, I implemented a utility package (DerelictUtil) that abstracts the platform APIs for loading shared libraries and fetching their symbols. The abstraction is behind a set of free functions that can be used directly or via a convenient object interface. In Derelict itself, I use the latter since it makes managing loading an entire library easier. But in external projects, I often use the free-function interface for loading one or two functions at a time (such as certain Win32 functions that aren't available in the ancient libs shipped with DMD). It also supports selective loading, which is a term I use for being able to load a library if specific functions are missing (the default behavior is to throw an exception when an expected symbol fails to load).
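For anyone who wants a starting point, here's a rough, Posix-only sketch of the manual loading step, reusing the da_foo/da_bar declarations above. This is my own illustration, not DerelictUtil's API; the loadMyLib and bindFunc names are invented for the example.

version(Posix)
{
    import core.sys.posix.dlfcn;
    import std.exception : enforce;
    import std.string : toStringz;

    private void* lib;

    private void bindFunc(void** ptr, string name)
    {
        // dlsym returns void*, which is assigned through the void** without a cast.
        *ptr = dlsym(lib, name.toStringz());
        enforce(*ptr !is null, "Failed to load symbol " ~ name);
    }

    void loadMyLib(string libName)
    {
        lib = dlopen(libName.toStringz(), RTLD_NOW);
        enforce(lib !is null, "Failed to load " ~ libName);

        bindFunc(cast(void**)&foo, "foo");
        bindFunc(cast(void**)&bar, "bar");
    }
}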

Conclusion



Overall, there's a good deal of work involved in implementing any sort of binding in D. But I think it's obvious that dynamic bindings require quite some extra effort. This is especially true given that the automated tools I've seen so far are all geared toward generating static bindings. I've only recently begun to use custom scripts myself, but they still require a bit of manual preparation because I don't want to deal with a full-on C parser. That said, I prefer dynamic bindings myself. I like having the ability to load and unload at will and to have the opportunity to present my own error message to the user when a library is missing. Others disagree with me and prefer to use static bindings. That's perfectly fine.

At this point, static and dynamic bindings exist for several popular libraries already. Deimos is a collection of the former and Derelict 3 the latter. You'll find some bindings for the same library in both and several that are in one project but not the other. Use what you need and are comfortable with. And I hope that, if the need arises, you can use the advice I've laid out in this series of posts to help fill in the holes and develop static or dynamic bindings yourself.

Given that I'm just under 4 hours from 2013 as I write this in Seoul, Korea, I want to wish you a Happy New Year. May you start and finish multiple projects in the coming year!
Aldacron
This story has nothing really to do with D except peripherally, but it's a tale worth telling as a warning to others. This is the best place for me to tell it.

I had a collection of C code that I'd built up over the years. I suppose I still have it, but it's sitting on the hard drive of a closeted dormant computer that I don't want to bother setting up. Besides, I'm not really happy with that bit of code anymore. My style has evolved over the years and I don't write C in quite the same way as I used to. Plus, I've learned a thing or two since I started putting all of that together. Using it again without rewriting it would just bug me too much. So recently, I set out to start over with it.

In a nutshell, I'm putting together a package of C libraries that I can use for different sorts of apps. Granted, most of my hobby coding is in D nowadays, but I don't want to lose my skill at C. It took me too many years to get to where I am to just give it up completely (I did toy with doing my rewrite in C++, but gave up on that rather quickly -- D has me too spoiled to touch C++ anymore). It's all organized rather neatly under a self-contained directory tree. These days, I prefer to avoid IDEs and work from the command line and a text editor (a licensed version of SublimeText 2 is my editor of choice), so I need a good tool to manage my build process. I went with premake4.

I've been using premake for quite a while without any difficulty, so it was only natural to use it again for the rewrite. There are different ways to use premake, one master config file for every project in the "workspace" or a separate config file for each project (my preference). Both have their pros and cons, but in the end the way I want to set up my source tree made me realize early on that I'm going to hit the cons no matter which approach I take. Nothing major, mind you, but when working from the command line every bit of convenience counts. I did make an attempt to restructure things and combine my multiple premake files into one master script, but I managed to run afoul of GCC's pickiness with the order in which libraries are specified on the command line (see below). I realized I was either going to have to make the config script even more complex with some custom Lua, deal with the annoyances of the multiple premake script approach, or do something different.

For my D projects these days, I use a custom build script for compilation, written in D. It was simple to put together and I can copy it from project to project with only minor modifications. And it's dead easy to maintain across projects. I've actually got several different versions of it, as it improves with each new project I create. I realized it wouldn't take too much to knock up a version that could compile my C projects. So that's what I did. Less than 10 minutes after I opened the file I had it automatically pulling in C files and compiling libraries just fine. Then it came time for the executables.

If you've ever used gcc before, you've likely encountered a situation where you were getting "undefined references" all over the place even though you were 100% certain you had specified the correct library in the linker settings of your IDE and had configured the library path properly. After consulting Google, you will have learned that gcc resolves libraries in the order they appear on the command line, pulling from each one only the symbols that are still undefined at that point, so a library must come after anything that depends on it. So if library B depends on library A, you have to pass them in this order: "gcc -lB -lA...". Reversing them will cause linker errors. This is a lesson I learned long ago, so I rarely run into that sort of error anymore. But, as it turns out, I didn't understand the issue as well as I'd thought.

In my build script, I have separate functions to compile, archive (create a static lib) and link (create an executable). All source file names are pulled from a directory, appended to an array, and passed to the compile function for individual compilation. If the project currently being processed is a library, then the file name extensions are changed from ".c" to ".o", all of the names globbed together into one string, and passed along to the archive function where the library is created. If the project is an executable, a list of required libraries is handed off to the link function along with the object file names for the executable to be created. It was here that things broke down. More undefined references.

This one really had me stumped. The premake build system had been working fine until I combined the configurations into one, resulting in undefined references when compiling. Using the verbose option showed me that the libraries were not being passed in the correct order. But with my D build script, I was certain they were properly set up. The verbose option confirmed it. So what the hell was going on?

Google, unfortunately, showed me nothing. I was at a total dead end. I kept staring at my script, changing things here and there randomly. Staring at the directory tree, moving things around. Sitting in my chair and doing nothing but thinking and cursing. After over an hour of mounting frustration, a thought suddenly popped into my head. A small adjustment to my build script and compilation was successful.

The problem that bit me was that any object files you pass to gcc during the link step need to be specified on the command line before the libraries on which they depend. Given that the libraries are just collections of objects, that makes perfect sense. I'd just never known it or had to care about it before. The string I was sending to the OS originally looked something like "gcc -lfoo -lbar baz.o buz.o...", whereas it should have been "gcc baz.o buz.o -lfoo -lbar...". It doesn't matter in which order the object files are specified, they just need to appear in the list before the libraries. How many years now have I been using gcc?

So that's how what should have been a less-than-20-minute side project turned into a nearly hour-and-a-half time sink. If you are going to work on the command line, it pays to know your compiler inside and out.
Aldacron
Given that my BorderWatch project has languished on github for months without any updates beyond the first few days of random hacking, it's going to be a while before it can serve as an example of game programming in D. So I had some free time recently and decided to do something different. I put together a simple TicTacToe game, that I call T3, and put the source up on github.

T3 is not a complete game. When it starts, two players can play, one with the mouse and one on the keypad. You can press space after each game to clear the board, and Esc to exit the game at any time. That's all it is or, in the master branch at least, will ever be. It shows some very basic features of D and I hope it can be used as a starting point for people new to D to play around with. Maybe by adding a menu screen and a UI, some AI, networking... whatever.

I've never programmed a TicTacToe game before and didn't consult any source for this one, so don't be surprised if you see something weird. However, there are a couple of architectural points that I want to highlight.

One of the features of D whose usefulness can be overlooked is that of the module. In C++, it is quite common for each class to have its own source file, sometimes even multiple files. And the declaration and implementation are often separated between header and source. C++ classes that are "friends" can have access to each other's internals. It is even more common in Java to see one class per file (and without the declaration/implementation divide), but without the help of the friend keyword. In D, it's much more useful to think in terms not of files or classes, but of modules.

Modules are designed to be used as something analogous to a C++ namespace. You group common functionality into a common module, the difference being that one module equates to one file. It was quite natural for me to start thinking in terms of modules (to an extent... see below), as that's what I have always done when programming with C -- individual C source files are often referred to as modules, and a common C idiom is to group common functionality in a single file. Others may not have such an easy time with it.

So as a result of this focus on modules, you find features that might be surprising. For example, you can have protection attributes (such as public, package and private) at the module level. Modules can have their own (static) constructors. But one module feature that really trips people up is that private class or struct members are visible throughout the entire module in which they are declared.

Take, for example, the tt.game module in T3. The abstract Player class declares a private member, _mark (on line 173 as of this writing). Scroll down a bit to the Game class and you'll see that it accesses this member directly. At first blush, that appears to be a severe violation of the encapsulation principle of OOP. But if you think about it, it really isn't. In this case, the Game class needs to perform gets and sets on a player's mark. I could have used D's @property attribute to provide a convenient getter/setter pair in the Player class, but that would be rather pointless given that both classes are in the same module. One of the oft-cited reasons (and a good one) for encapsulation is that it can hide changes to an implementation. But here, anyone changing the implementation, maybe by adding some sort of calculation every time _mark is accessed, will always have access to the Game class because both classes are in the same module. Make the changes, get compiler errors, search/replace in the same module to fix them, done. If the outside world needed to have access to Player's _mark, then property accessors would be the right thing (especially if it were in a library).
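Here's a stripped-down sketch of that idea (not the actual T3 code, just the shape of it):

module tt.game;

abstract class Player
{
    private char _mark;   // private, but visible throughout this module
}

class Game
{
    private Player[2] _players;

    void assignMarks()
    {
        // Legal: private members are accessible anywhere in the same module.
        _players[0]._mark = 'X';
        _players[1]._mark = 'O';
    }
}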

The converse is true, too. In C++, I rarely, if ever, had variables in a source file that were declared outside of a class, static to the module, but accessed by that class. In D, I do it all the time. An example of that is seen in tt.main, where the module-private _game instance is used by the HumanPlayer class and also by the module-level functions below it. These days, on the rare occasions when I toy around with C++, I often find myself declaring certain variables in an anonymous namespace to be shared by two or more classes in the same file.

Another consequence of thinking in modules is that free functions become less of an issue. In earlier posts on this blog, I had a dilemma over whether or not to use free functions for a library I was wanting to develop. This is the one issue I had with fully embracing the module concept. In C, this problem just doesn't manifest because you're always dealing with free functions. But mixing free functions with objects just feels... dirty. Plus there's the potential for name clashes, and no one likes the this_is_a_function syntax common in C. In C++, there is an easy solution: wrap the free functions in a namespace. There are different ways to handle this in D. The most straightforward is just to not worry about namespace conflicts. When they do arise, the fully-qualified package name can be prefixed to the function name at the call site. I mostly accept this now, but as you can see in tt.gfx and tt.audio, I still add a prefix to free functions whose names I am certain will have a conflict (gfxInit and audioInit, for example). I have no practical reason for avoiding typing tt.gfx.init() at the call site. It's just a personal quirk.
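As a tiny sketch of that call-site disambiguation (hypothetical: it assumes tt.gfx and tt.audio each declared a free function named init, which the actual code avoids by using the gfxInit/audioInit prefixes):

module app;

import tt.gfx;
import tt.audio;

void setup()
{
    // A bare init() would be ambiguous here, so qualify it at the call site.
    tt.gfx.init();
    tt.audio.init();
}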

If you do decide to do something with the code, please make sure to give the README a once-over. I would like to highlight the following paragraph.

[quote]Pull requests with new game features will likely not be accepted. I would like to keep this simple and useful as a toy for new D users. I might very well create a branch with multiplayer and AI for myself to play around with, but I really want to keep the master as-is. However, I'll happily accept pull requests for bug fixes and improvements to the build script (gdc & ldc support, for example).[/quote]

This code is just a playground for anyone looking to experiment with D, so while I would love to see people do cool stuff with it, I'd rather not put any of that cool stuff into the master branch.

As an aside, I just realized that I didn't add a license to the repository. I'll take care of that pronto. For the record, I'm releasing this as public domain, with the exception of the Derelict bindings in the import directory which are released under the Boost Software License.
I'm overdue for part 5 of my Binding D to C series. I'll be sure to get that at some point over the next couple of weeks. Thanks for reading.
Aldacron
This is the fourth part of a series on creating bindings to C libraries for the D Programming Language.

In part one, I introduced the difference between dynamic and static bindings and some of the things to consider when choosing which kind to implement. In part two, I talked about the different linkage attributes to be aware of when declaring external C functions in D. In part three, I showed how to translate C types to D. Here in part four, I'll wrap up the discussion of type translation with a look at structs.

A D Struct is a C Struct


For the large majority of cases, a C struct can be directly translated to D with little or no modification. The only major difference in the declarations is when C's typedef keyword is involved. The following example shows two cases, with and without typedef. Notice that there is no trailing semi-colon at the end of the D structs.


// In C
struct foo_s
{
    int x, y;
};

typedef struct
{
    float x;
    float y;
} bar_t;

// In D
struct foo_s
{
    int x, y;
}

struct bar_t
{
    float x;
    float y;
}


Most cases of struct declarations are covered by those two examples. Sometimes a slight deviation may be encountered, such as a struct with two names: one in the struct namespace and one outside of it (the typedef). In that case, the typedefed name should always be used.


// In C
typedef struct foo_s
{
    int x;
    struct foo_s *next;
} foo_t;

// In D
struct foo_t
{
    int x;
    foo_t *next;
}


Another common case is that of what is often called an opaque struct (in C++, more commonly referred to as a forward declaration). The translation from C to D is similar to that above.


// In C
typedef struct foo_s foo_t;

// In D
struct foo_t;


Member Gotchas


When translating the types of struct members, the same rules as outlined in Part 3 should be followed. But there are a few gotchas to be aware of.

The first gotcha is relatively minor, but annoying. I've previously mentioned in this series that I believe it's best to follow the C library interface as closely as possible when naming types and functions in a binding. This makes translating code that uses the library much simpler. Unfortunately, there are cases where a struct might have a field which happens to use a D keyword for its name. The solution, of course, is to rename it. I've encountered this a few times with Derelict. My solution is to prepend an underscore to the field name. For publicly available bindings, this should be prominently documented.


// In C
typedef struct
{
    // oops! module is a D keyword.
    int module;
} foo_t;

// In D
struct foo_t
{
    int _module;
}


The next struct gotcha is that of versioned struct members. Though rare in my experience, some C libraries wrap the members of some structs in #ifdef blocks. I find this practice rather annoying (libpng, I'm looking at you), because it can cause problems not only with language bindings but also with binary compatibility when using the library from C. Thankfully, translating this idiom to D is simple. Using it, on the other hand, can get a bit hairy.

Here's an example.


// In C
typedef struct
{
    float x;
    float y;
#ifdef MYLIB_GO_3D
    float z;
#endif
} foo_t;

// In D
struct foo_t
{
    float x;
    float y;

    // Using any version identifier you want -- this is one case where I advocate breaking
    // from the C library. I prefer to use an identifier that makes sense in the context of the binding.
    version(Go3D) float z;
}


Then, to make use of the versioned member, the '-version=Go3D' flag is passed on the command line when compiling. And this is where the headache begins.

If the binding is compiled as a library, then any D application linking to that library will also need to be compiled with any version identifiers the library was compiled with, else the versioned members won't be visible. Furthermore, the C library needs to be compiled with the equivalent defines. So to use foo_t.z from the example above, the C library must be compiled with -DMYLIB_GO_3D, the D binding with -version=Go3D, and the D app with -version=Go3D. And when making a binding like Derelict that loads shared libraries dynamically, there's no way to ensure that end users will have a properly compiled copy of the C shared library on their system unless it is shipped with the app. Not a big deal on Windows, but rather uncommon on Linux. Also, if the binding is intended for public consumption, the versioned sections need to be documented.

Read more about D's version conditions in the D Programming Language documentation.

The final struct member gotcha, and a potentially serious one, is bitfields. The first issue here is that D does not have bitfields. In D2, we have a library solution in std.bitmanip, but for a C binding it's not a silver-bullet solution because of the second issue. And the second issue is that the C standard leaves the ordering of bitfields undefined.

Consider the following example from C.


typedef struct
{
    int x : 2;
    int y : 4;
    int z : 8;
} foo_t;


There are no guarantees here about the ordering of the fields or where, or even if, the compiler inserts padding. It can vary from compiler to compiler and platform to platform. This means that any potential solution in D needs to be handcrafted to be compatible with a specific C compiler version in order to guarantee that it works as expected.

Using std.bitmanip.bitfields might be the first approach considered.


// D translation using std.bitmanip.bitfields
import std.bitmanip;

struct foo_t
{
    mixin(bitfields!(
        int, "x", 2,
        int, "y", 4,
        int, "z", 8,
        int, "", 2)); // padding
}


Bitfields implemented this way must total to a multiple of 8 bits. In the example above, the last field, with an empty name, is 2 bits of padding. The fields will be allocated starting from the least significant bit. As long as the C compiler compiles the C version of foo_t starting from the least significant bit and with no padding in between the fields, then this approach might possibly work. I've never tested it.

The only other alternative that I'm aware of is to use a single member, then implement properties that use bit shift operations to pull out the appropriate value.


struct foo_t
{
    int flags;
    int x() @property { ... }
    int y() @property { ... }
    int z() @property { ... }
}


The question is, what to put in place of the ... in each property? That depends upon whether the C compiler started from the least-significant or most-significant bit and whether or not there is any padding in between the fields. In other words, the same difficulty faced with the std.bitmanip.bitfields approach.
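For what it's worth, here's a sketch of what those property bodies might look like under one specific assumption -- the C compiler packed the fields starting from the least significant bit with no padding between them. It illustrates the bit-shifting idea; it is not a portable solution:

struct foo_t
{
    int flags;

    // Assumes x occupies bits 0-1, y bits 2-5, and z bits 6-13.
    int x() @property { return flags & 0x3; }
    int y() @property { return (flags >> 2) & 0xF; }
    int z() @property { return (flags >> 6) & 0xFF; }
}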

In Derelict, I've only encountered bitfields in a C library one time, in SDL 1.2. My solution was to take a pass. I use a single 'flags' field, but provide no properties to access it. Given that Derelict is intended to be used on multiple platforms with C libraries compiled by multiple compilers, no single solution was going to work in all cases. I decided to leave it up to the user. Anyone needing to access those flags could figure out how to do it themselves. I think that's the best policy for any binding that isn't going to be proprietary. Proprietary bindings, on the other hand, can be targeted at specific C compilers on specific platforms.

Conclusion


I believe that's all I wanted to say about structs. In Part 5, which I'm quite certain will be the final installment, I'll talk about how to declare functions for both dynamic and static bindings and some of the issues that need to be considered when doing so. I'll also tie off any loose ends I think of.
Aldacron
Uniform Function Call Syntax (UFCS) is a feature of the D Programming Language that was finally implemented in all its glory in a recent compiler release. It has been available for use with arrays for quite some time, since the early days of D1. But now it is available for every type imaginable.

On the one hand, UFCS is nice syntactic sugar for those who hate free function interfaces (a group to which I do not belong). But it's more than that. It's also an easy way to extend the functionality of existing types, while maintaining the appearance that the new functionality actually belongs to the type.

Here's how it works. Given a free function that accepts at least one parameter, the function can be called on the first argument using dot notation as if it were a method of that type. Some code will make it clear.



import std.stdio;

void print(int i)
{
    writeln(i);
}

void main()
{
    int i = 10;
    i.print();
    8.print();
}


Notice that it works on both variables and literals (see the output over on DPaste, where you can compile and run D code online).

For a long time, I was rather ambivalent about UFCS. I didn't see the need. After all, I have no problem with free functions. Then I found a situation where it's a perfect fit.

I'm using SDL2 in one of the many projects I've managed to overload myself with. The SDL rendering interface has several methods accepting SDL_Rect objects as parameters. While implementing a simple GUI, I wanted to maintain bounds information using a rect object. But I also need functionality SDL_Rect doesn't provide out of the box, like routines to determine the intersection of two rects, or if a rect contains a point. And despite not having a beef with free functions, it really does make a difference in the appearance of code when you have a bunch of free function calls mixed in with object method calls. So I started implementing my own Rect type, giving it an opCast method to easily pass it anywhere an SDL_Rect is expected. Then I realized how silly that is when I've got UFCS.

So I scrapped my Rect struct and reimplemented the methods as free functions taking an SDL_Rect as the first parameter. And now I can do things like this.


SDL_Rect rect = SDL_Rect(0, 0, 100, 100);
if(rect.contains(10, 10)) ...

auto irect = rect.intersect(rect2);


And so on. I also had need of a Point type, which SDL doesn't have. But it was ugly mixing 'Point' and 'SDL_Rect', so I aliased the SDL_ bit away and it's now just 'Rect'. With the combination of aliasing and UFCS, it's possible to hide implementation details without using a full-on wrapper to do so. Of course, it's not entirely hidden as the SDL_Rect is still directly accessible and you can still use the type by name. But it certainly can come in handy.
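For reference, the contains and intersect free functions used above might be implemented along these lines. This is my own sketch (the post doesn't show them), and it assumes an SDL_Rect with the usual x/y/w/h fields from an SDL2 binding:

bool contains(SDL_Rect r, int x, int y)
{
    return x >= r.x && x < r.x + r.w &&
           y >= r.y && y < r.y + r.h;
}

SDL_Rect intersect(SDL_Rect a, SDL_Rect b)
{
    import std.algorithm : max, min;

    auto x1 = max(a.x, b.x);
    auto y1 = max(a.y, b.y);
    auto x2 = min(a.x + a.w, b.x + b.w);
    auto y2 = min(a.y + a.h, b.y + b.h);

    // Returns an empty rect when there is no overlap.
    return SDL_Rect(x1, y1, max(0, x2 - x1), max(0, y2 - y1));
}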
Aldacron
This is the (long overdue) third part of a series on creating bindings to C libraries for the D programming language.

In part one, I introduced the difference between dynamic and static bindings and some of the things to consider when choosing which kind to implement. In part two, I talked about the different linkage attributes to be aware of when declaring external C functions in D. Here in part three, I'm going to begin discussing how to translate C type declarations into D. I'll continue the discussion in part four.

First off, I want to mention a particular page over at dlang.org called Converting C .h Files to D Modules. This is required reading for anyone planning to work on a D binding to a C library. This series should be considered a companion to that page.

Dealing With Types



When translating C types to D, the vast majority of declarations you'll encounter take one of a handful of forms:

* typedefs
* #defines
* struct declarations
* function parameters

The translation guidelines discussed in this post apply to all four situations, but there are special cases regarding function parameters that will be covered in part five when I talk about function declarations. It's also possible to find global variable declarations in a C header. But, in my experience, they aren't generally encountered when creating library bindings.

Typedefs, Aliases, and Native Types



D used to have typedefs. And they were strict in that they actually created a new type. Given an int typedefed to Foo, a new type Foo would actually be created rather than Foo being just another name for int. But D also has alias, which doesn't create a new type but just makes a new name for an existing type. In D2, typedef was deprecated. Now we are left with alias.

alias is what should be used in D when a typedef is encountered in C, excluding struct declarations (more on that in part four). Most C headers have a number of typedefs that create alternative names for native types. For example, you might see something like this in a C header.


typedef int foo_t;
typedef float bar_t;


In a D binding, it's typically a very good idea to preserve the original typenames. The D interface should match the C interface as closely as possible. That way, existing C code from examples or other projects can be easily ported to D. So the first thing to consider is how to translate the native types int and float into D.

Fortunately, on the dlang page I mentioned above, there is a table that lists how all the C native types translate to D. If you look it up, you'll see that an int is an int, a float is a float, and so on. So to port the two declarations above, simply replace typedef with alias and all is well.


alias int foo_t;
alias float bar_t;


One thing I'd like to point out about that table, though. It lists the D int as equivalent to the C long. In most cases, this is true. But there is a possibility that the C long type could actually be 64 bits on some platforms, whereas D's int type is always 32 bits and D's long type is always 64 bits. As a measure of protection against this possible snafu, it's prudent to use a couple of handy aliases on the D side that are declared in core.stdc.config: c_long and c_ulong.


// In the C header
typedef long mylong_t;
typedef unsigned long myulong_t;

// In the D module
import core.stdc.config;

// Although the import above is private to the module, the aliases are public
// and visible outside of the module.
alias c_long mylong_t;
alias c_ulong myulong_t;


One more thing. If you are translating typedefs that use types from C's stdint.h, you have two options for the aliases. You can use native D types, since the sizes are fixed, or you can import core.stdc.stdint, which mirrors the C header, and just replace typedef with alias. For example, here are some types from SDL2 translated into D.


// From SDL_stdinc.h
typedef int8_t Sint8;
typedef uint8_t Uint8;
typedef int16_t Sint16;
typedef uint16_t Uint16;
...

// In D, without core.stdc.stdint
alias byte Sint8;
alias ubyte Uint8;
alias short Sint16;
alias ushort Uint16;
...

// And with the import
import core.stdc.stdint;

alias int8_t Sint8;
alias uint8_t Uint8;
alias int16_t Sint16;
alias uint16_t Uint16;
...


Enums



Translating anonymous enums from C to D requires nothing more than a copy/paste.


// In C
enum
{
ME_FOO,
ME_BAR,
ME_BAZ
};

// In D
enum
{
ME_FOO,
ME_BAR,
ME_BAZ,
}


Note that enums in D do not require a final semicolon. Also, the last member may be followed by a comma.

For named enums, you may want to do just a bit more than a direct copy/paste. Named enums in D require the name be prefixed when accessing members. Example:


// In C
typedef enum
{
ME_FOO,
ME_BAR,
ME_BAZ
} MyEnum;

// In D
enum MyEnum
{
ME_FOO,
ME_BAR,
ME_BAZ
}

// In some function...
MyEnum me = MyEnum.ME_FOO;


There's nothing wrong with this in and of itself. In fact, there is a benefit in that it gives you some type safety. For example, if a function takes a parameter of type MyEnum, you can't just pass any old int in its place. The compiler will complain that int is not implicitly convertible to MyEnum. That may be acceptable for an internal project, but for a publicly available binding it is bound to cause confusion because it breaks compatibility with existing code samples. One workaround that maintains type safety is the following.


alias MyEnum.ME_FOO ME_FOO;
alias MyEnum.ME_BAR ME_BAR;
alias MyEnum.ME_BAZ ME_BAZ;

// Now this works
MyEnum me = ME_FOO;


It's obvious how tedious this could become for large enums. If type safety is not important, there's one more workaround.


alias int MyEnum;
enum
{
ME_FOO,
ME_BAR,
ME_BAZ
}


This will behave exactly as the C version.

#defines



Often in C, #define is used to declare constant values. OpenGL uses this approach to declare values that are intended to be interpreted as the type GLenum. Though these values could be translated to D using the immutable type modifier, there is a better way.

D's enum keyword is used to denote traditional enums and also manifest constants. In D, a manifest constant is an enum that has only one member, in which case you can omit the braces in the declaration. Here's an example:


// This is a manifest constant of type float
enum float Foo = 1.003f;

// We can declare the same thing using auto inference
enum Foo = 1.003f; // float
enum Bar = 1.003; // double
enum Baz = "Baz!"; // string


For single #defined values in C, these manifest constants work like a charm. But often, such values are logically grouped according to function. Given that a manifest constant is essentially the same as a one-member enum, it follows that we can group several #defined C values into a single, anonymous D enum.


// On the C side.
#define FOO_SOME_NUMBER 100
#define FOO_A_RELATED_NUMBER 200
#define FOO_ANOTHER_RELATED_NUMBER 201

// On the D side
enum FOO_SOME_NUMBER = 100;
enum FOO_A_RELATED_NUMBER = 200;
enum FOO_ANOTHER_RELATED_NUMBER = 201;

// Or, alternatively
enum
{
FOO_SOME_NUMBER = 100,
FOO_A_RELATED_NUMBER = 200,
FOO_ANOTHER_RELATED_NUMBER = 201,
}


Personally, I tend to use the latter approach if there are more than two or three related #defines, and the former if it's only one or two values.

But let's get back to the manifest constants I used in the example up above. I had a float, a double and a string. What if there are multiple #defined strings? Do you have to declare a separate manifest constant for each one? No, not if you don't want to. D's enums can be typed to any existing type. Even structs.


// In C
#define LIBNAME "Some Awesome C Library"
#define AUTHOR "John Foo"
#define COMPANY "FooBar Studios"

// In D, collect all the values into one enum declaration of type string
enum : string
{
LIBNAME = "Some Awesome C Library",
AUTHOR = "John Foo",
COMPANY = "FooBar Studios",
}


Neat, eh? Again, note the trailing comma on the last enum field. I tend to always include these in case a later version of the C library adds a new value that I need to tack on at the end. A minor convenience.
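And since I said enums can be typed even to structs, here's a quick illustration (Color is a made-up type, just for the example; note that every member needs an explicit initializer, since there's no obvious 'next value' for a struct):

[source]
struct Color
{
    ubyte r, g, b;
}

enum : Color
{
    Red   = Color(255, 0, 0),
    Green = Color(0, 255, 0),
    Blue  = Color(0, 0, 255),
}
[/source]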

More to Come



I think that's about enough for this session. In part four, we'll take a look at structs. There are a couple of pitfalls to be aware of when porting them over to D and I'll give you some advice to get around them.
Aldacron
When working with D's standard library, it is sometimes necessary to work around missing declarations in the core.sys.windows.windows module. It's a fairly big module as is, but it isn't all-inclusive. If you are doing any heavy-duty Windows development, you'll likely want a third-party Win32 API binding. But if you just need to call a function or two, that's overkill.

A good example of this is the need to lock the timing thread to one core when dealing with the Windows timer on a multi-core machine. This is a well-known solution to the problem of erratic timing results returned by QueryPerformanceCounter. If you are using a library like SDL, which uses QPC under the hood for its timing calls, this is something you ought to consider. Unfortunately, Phobos is currently missing declarations for a couple of function calls and one type that you need to lock the timing thread down.

For many cases, this can be overcome by adding the appropriate declarations where you need them. In other cases, you'll find (particularly when using DMD) that the Win32 import libraries that ship with the compiler are outdated. In that case, you'll either need to generate them yourself or load the function symbols manually via LoadLibrary and friends. Luckily, for setting the thread affinity the solution is easy. Here's a complete example, ripped right out of the module where I use it.

[source]
version(Windows)
{
private
{
import core.sys.windows.windows;
import std.windows.syserror;

// Declarations missing from the windows module.
alias size_t DWORD_PTR;
extern(Windows)
{
DWORD SetThreadAffinityMask(HANDLE,DWORD);
BOOL GetProcessAffinityMask(HANDLE,DWORD_PTR*,DWORD_PTR*);
}

void setThreadAffinity()
{
void doThrow(string msg)
{
auto err = GetLastError();
throw new Exception(msg ~ sysErrorString(err));
}

DWORD_PTR procMask, sysMask;
if(!GetProcessAffinityMask(GetCurrentProcess(), &procMask, &sysMask))
doThrow("GetProcessAffinityMask failure: ");

DWORD_PTR mask = 1;
if(mask & procMask)
{
if(!SetThreadAffinityMask(GetCurrentThread(), mask))
doThrow("SetThreadAffinityMask failure: ");
}
else
{
throw new Exception("Unexpected affinity mask mismatch.");
}
}
}
}
[/source]

Drop this into module or class scope and add the following to an init method somewhere in the same module (or another module if you move setThreadAffinity out of the private block).

[source]version(Windows) setThreadAffinity();[/source]

Notice also that I imported std.windows.syserror, which exposes the sysErrorString function. I'm sure I wasn't the only one who overlooked that module. After years of using D, I only noticed it recently. If you're going to be making Win32 API calls, it will come in handy.
Aldacron
I've been working on BorderWatch a little bit every day. My focus has been on getting the ASCII engine, Arthur, into a state that will let me get a game up and running. With the few modules that I've implemented so far, I believe I'm there. One of the D features that has come into play in this process has been array slices. Read Steven Schveighoffer's excellent article on slices, now hosted at dlang.org, for a good introduction if you don't know what they are.

Take a look at the BorderWatch main method (or view the whole module for context):

[source]
void main()
{
initArthur(AppName, OrgName);
scope(exit) termArthur();

auto console = createConsole(ConsoleData(AppName));
menuScreen(console);
}
[/source]

The Console is the means of displaying the ASCII graphics to the user. Initially, there was only one kind, a "heavyweight" console that represents a window on the OS. But after realizing how annoyingly awkward it was to position things appropriately with my naive implementation of clipping, I came to the conclusion it would be much nicer to have another type, "virtual consoles", that maintain their own coordinate space.

At the heart of both types of Console is an array of Symbols (an ASCII character and RGBA values). A virtual console's buffer is a subrect of its parent's buffer. Every time you print to a console, it is marked as dirty. When you call the render method on a console, it first looks to see if any of its children are dirty. If they are, it copies the children's symbols into the proper region of its own buffer.

In C, this sort of operation would most likely be accomplished with a loop and a memcpy, copying entire rows at a time. My D implementation is done similarly, but instead of memcpy, I use array slices. Here's the (uncommented) code from console.d that does the work:

[source]

void render()
{
foreach(c; _children)
{
if(c._dirty)
{
uint dstStart = c.x + (c.y * columns);
uint dstEnd = dstStart + c.columns;
uint srcStart, srcEnd;
for(uint i = 0; i < c.rows; ++i) {
srcStart = i*c.columns;
srcEnd = srcStart + c.columns;

// Here's the slicing...
// The symbols from one row of the child's buffer are
// copied to one row of the destination buffer.
_symBuffer[dstStart .. dstEnd] = c._symBuffer[srcStart .. srcEnd];

dstStart += columns;
dstEnd = dstStart + c.columns;
}
_dirty = true;
c._dirty = false;
}
}
}
[/source]

If you haven't yet read the article I linked above, a quick explanation. _symBuffer[dstStart .. dstEnd] takes a 'slice' of the _symBuffer array, starting from the index indicated by dstStart (inclusive) and ending at the index indicated by dstEnd (exclusive). That slice is then assigned all of the values contained in the slice of the child's buffer that is taken from srcStart to srcEnd. There's no need for pointer arithmetic, no chance of overwriting memory, no need to worry about allocations or deallocations... it's all safe and convenient.

Another use I had for array slices, in the same file, is in the following method:

[source]
void fill(ubyte c, ubyte r, ubyte g, ubyte b, ubyte a = 255)
{
auto symbol = Symbol(c, r, g, b, a);

// Here's the slice...
_symBuffer[] = symbol;
}
[/source]

Here, I'm taking a single symbol and using the slice syntax to assign it to the entire array.

These are seemingly fairly trivial things, but I can tell you that it makes a big difference. I've been using C for many years and, though I've been frustrated from time to time, I've never actually hated it. But the more I use D, the more I miss the little things like this when I go back to my C codebase. It almost makes me not want to go back at all.
Aldacron

BorderWatch

I'm afraid I'm going to be waving goodbye to Dolce. It's been sitting, bit rotting, for a while now. Some time ago I encountered two major issues with my Allegro binding in Derelict 2. One, a random (and I mean random) crashing bug, I've been unable to solve. Another, regarding Allegro's interaction with Cocoa on Mac, I don't have the means to test (no Mac), though I do have an idea on how to solve it. However, I'm washing my hands of it all for now. I may come back to the Allegro binding after the Allegro 5.1 branch is complete, but Dolce I think is quite dead.

But I still haven't lost my urge to make a game. In fact, I did make one as a way to get back into the groove. I didn't mention it here because I actually coded it in C. Nothing to brag about, just a simple reimplementation of the A5teroids example that ships with Allegro 5. I wanted to get familiar with the API and get my chops back without the distractions my binding was causing. And that brings me to the topic of this post.

I really, really want to make this game that has been in my head forever. So, now I am. But without Allegro. Instead, I'm going with an ASCII-based approach using SDL2. This is an evolution of the text-based idea I was contemplating a while back. The big difference is that this time, I've got some code to show for it. Also, though it is still in the early stages, it's getting to the point where I'm ready to have a git repo to manage my changes. Rather than just working on a local repo, I decided to bite the bullet and just put everything on github as is. And so, I give you BorderWatch.

One of the reasons I wanted to put it up on github is to give myself a reference to talk about features of D on this blog. Dolce was supposed to fill that role, but I never really felt it was ready for the world. BorderWatch is different, though, and I hope it can serve as a starting point for anyone curious about using D for game development. I can seriously say that after working on even a simple little game like an asteroids clone in C (a language I've used and loved/hated for a very long time), putting one together in D is a much different, and better, world entirely.

Please read the project README before you bother with the code. I'm using my recently implemented SDL2 and SDL2_image bindings from Derelict 3, and no other external libraries. If you want to build the project, you'll have to get the SDL2 and SDL2_image source and compile it all yourself. Also, only DMD and Windows are currently supported.

I do hope to work on this over the coming months as often as I can. I have a lot of ideas for it and it is proving to be a lot of fun so far. It's licensed under the zlib license, so do what you want with it. There's not a whole lot there yet, but what is there allows you to open a 'console' window, print ASCII characters and text strings to the whole window area or specific regions, and there's an effect implemented that can display text strings as a slideshow. Nothing is optimized, nothing is stable. But it's a start.
Aldacron
I'm supposed to be posting part three of my series on binding D to C. It's going to be boring to write, so I keep putting it off. But I *will* get to it eventually. In the meantime, I wanted to blog about a neat feature of D that I'd sort of forgotten about until I needed it.

I'm working on an ASCII-based strategy/simulation game in the vein of Dwarf Fortress. One of the things I want to have is an ASCII bitmap that is always available, a default that I can fall back on if any custom bitmaps fail to load. The best way to do this is to have the bitmap data compiled into the executable. A simple, cross-platform way to do that in C or C++ is to convert the bitmap data into a C array in a source file to be compiled and linked. In D, the compiler can do it all for you.

The import keyword in D plays two roles. The most common (and important) role is in import declarations. These are what you use to make the content of one module available to another. D also has a feature called import expressions, which allow you to specify a file that the compiler will build in to the final executable.

[source]
// Import declaration (no parentheses)
import mypackage.mymodule;

// Import expression (with parentheses)
string s = import("some.file");
[/source]

The first thing you might notice is that the file is imported as a string. This isn't going to be very useful for binary data. But that's not a big deal. A simple cast to ubyte[] will do the trick. If other types are needed and casting doesn't fit, a compile-time function can be used to massage the data into the appropriate format. Also, the filename itself can be generated at compile time. Read more about D's compile time function execution.
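For instance, because import() only needs a string known at compile time, the filename can be assembled from manifest constants or the result of a CTFE function. A quick hypothetical sketch (the names here are made up):

[source]
// Hypothetical: build the filename at compile time rather than writing out the literal.
enum FontName = "default_font";
enum FontFile = FontName ~ ".png";

immutable(ubyte)[] fontData = cast(immutable(ubyte)[])import(FontFile);
[/source]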

Now for some real example code, straight from my game. I'm using SDL_image to load the default font image from a ubyte array:

[source]
// The name of the default font file, declared as a manifest constant.
enum DefaultFontName = "default_font.png";

// The image data. Loaded as a string, so casted to an immutable ubyte array.
immutable(ubyte)[] _defaultFont = cast(immutable(ubyte)[])import(DefaultFontName);

SDL_Surface* loadSurface(string filename)
{
if(filename == DefaultFontName)
{
log.writeln("Loading default font image.");
auto rwops = SDL_RWFromConstMem(_defaultFont.ptr, _defaultFont.length);
if(rwops)
{
auto surface = IMG_Load_RW(rwops, 1);
if(surface) return surface;
}
throw new Exception("Failed to load default font image: " ~ to!string(SDL_GetError()));
}
else
{
auto path = format("%s/data/gfx/%s", appDirectory, filename);
log.writefln("Loading font image %s.", path);
auto surface = IMG_Load(path.toStringz());
if(surface) return surface;
throw new Exception(format("Failed to load font image [%s]: %s", path, to!string(SDL_GetError())));
}
}
[/source]

A very important point in using this feature is that it won't compile unless you tell the compiler via a command line switch where it should look for import data. This is a security feature that, IIRC, was implemented to prevent things like remotely accessing a compiler to compile arbitrary data on that system. So you pass the root path with the command line switch -J (i.e. -Jpath). Any file you import via the import expression will be relative to that path.

This is one of those small features of D that mesh with the whole to make it such an enjoyable language to use.
Aldacron

Binding D to C Part Two

This is part two of a series on creating bindings to C libraries for the D programming language.

In part one, I discussed the difference between dynamic and static bindings and some of the considerations to take into account when deciding which way to go. Here in part two, I'm going to talk about an important aspect of function declarations: linkage attributes.

When binding to C, it is critical to know which calling convention is used by the C library you are binding. In my experience, the large majority of C libraries use the cdecl calling convention across each platform. Modern Windows system libraries use the stdcall calling convention (older libraries used the pascal convention). See this page on x86 calling conventions if you want to know the differences.

D provides a storage class, extern, that does two things when used with a function. It tells the compiler that the given function is not stored in the current module and it specifies a calling convention via a linkage attribute. The D documentation lists all of the supported linkage attributes, but for C bindings the three you will be working with most are C, Windows and System.

Although I'm not going to specifically talk about static bindings in this post, the following examples use function declarations as you would in a static binding. For dynamic bindings, you'll use function pointers instead.

The C attribute is used on functions that have the cdecl calling convention. If no calling convention is specified in the C headers, it's safe to assume that the default convention is cdecl. There's a minor caveat in that some compilers allow the default calling convention to be changed via the command line. This isn't an issue in practice, but it's a possibility you should be aware of if you don't have control over how the C library is compiled.


// In C
extern void someCFunction(void);

// In D
extern(C) void someCFunction();


The Windows attribute is used on functions that have the stdcall calling convention. In the C headers, this means the function is prefixed with something like __stdcall, or a variation thereof depending on the compiler. Often, this is hidden behind a define. For example, the Windows headers use WINAPI, APIENTRY, and PASCAL. Some third party libraries will use these same defines or create their own.


// In C
#define WINAPI __stdcall
extern void WINAPI someWin32Function(void);

// In D
extern(Windows) void someWin32Function();


The System attribute (extern(System)) is useful when binding to libraries, like OpenGL, that use the stdcall convention on Windows, but cdecl on other systems. On Windows, the compiler sees it as extern(Windows), but on other systems as extern(C). The difference is always hidden behind a define on the C side.


// In C
#ifdef _WIN32
#include <windows.h>
#define MYAPI WINAPI
#else
#define MYAPI
#endif

extern MYAPI void someFunc(void);

// In D
extern(System) void someFunc();


The examples above are just examples. In practice, there are a variety of techniques used to decorate function declarations with a calling convention. It's important to examine the headers thoroughly and make no assumptions about what a particular define actually translates to.

One more useful detail to note is that when implementing function declarations on the D side, you do not need to prefix each one with an extern attribute. You can use an attribute block like so:


extern(C)
{
void functionOne();
double functionTwo();
}

// Or, if you prefer
extern(C):
void functionOne();
double functionTwo();


In part three, I'll talk a bit about builtin types and how they translate between C and D. After that, we'll be ready to look at complete function declarations and how they differ between static and dynamic bindings.
Aldacron

Binding D to C

This is part one of a series on creating bindings to C libraries for the D programming language.

This is a topic that has become near and dear to my heart. Derelict is the first, and only, open source project I've ever maintained. It's not a complicated thing. There's very little actual original code outside of the Utility package (with the exception of some bookkeeping for the OpenGL binding). The majority of the project is a bunch of function and type declarations. Maintaining it has, overall, been relatively painless. And it has brought me a fair amount of experience in getting D and C to cooperate.

As the D community continues to grow, so does the amount of interest in bindings to C libraries. Recently, a project called Deimos was started over at github to collate a variety of bindings to C libraries. There are several bindings there already, and I'm sure it will grow. People creating D bindings for the first time will, usually, have no trouble. It's a straightforward process. But there are certainly some pitfalls along the way. In this post, I want to highlight some of the basic issues to be aware of. For the sake of clarity, I'm going to ignore D1 (for a straight-up "static" binding, the differences are minor, but they do exist).

The first thing to consider is what sort of binding you want, static or dynamic. By static, I mean a binding that allows you to link with C libraries or object files directly. By dynamic, I mean a binding that does not allow linking, but instead loads a shared library (DLL/so/dylib/framework) at runtime. Derelict is an example of the latter; most (if not all) of the bindings in the Deimos repository are the former. There are tradeoffs to consider.

D understands the C ABI, so it can link with C object files and libraries just fine, as long as the D compiler understands the object format itself. Therein lies the rub. On Posix systems, this isn't going to be an issue. DMD (and of course, GDC and, I assume, LDC) uses the GCC toolchain on Posix systems. So getting C and D to link and play nicely together isn't much of a hassle. On Windows, though, it's a different world entirely.

On Windows, we have a variety of object file formats to contend with: COFF, OMF, ELF. DMD, which uses an ancient linker called Optlink, outputs OMF objects. GDC, which uses the MinGW backend on Windows, outputs ELF objects. I haven't investigated LDC yet, but it uses whichever backend LLVM is configured to use. Meanwhile, the compiler that ships with Visual Studio outputs objects in the COFF format. What a mess!

This situation will improve in the future, but for now it is what it is. And that means when you make a C binding, you have to decide up front whether you want to deal with the mess or ignore it completely. If you want to ignore it, then a dynamic binding is the way to go. Generally, when you manually load DLLs, it doesn't matter what format they were compiled in, since the only interaction between your app and the DLL happens in memory. But if you use a static binding, the object file format determines whether or not the app will link. If the linker can't read the format, you get no executable. That means you have to either compile the C library you are binding with a compiler that outputs a format your D linker understands, use a conversion tool to convert the libraries into the proper format, or use a tool to extract a link library from a DLL. Will you ship the libraries with your binding, in multiple formats for the different compilers? Or will you push it off on the users to obtain the C libraries themselves? I've seen both approaches.

Whichever way you decide to go really doesn't matter. In my mind, the only drawback to dynamic bindings is that you can't choose to have a statically linked program. I've heard people complain about "startup overhead", but if there is any it's negligible and I've never seen it (you can try it with Derelict -- make an app using DerelictGL/SDL/SDLImage/SDLMixer/SDLNet/SDLttf and see what kind of overhead you get at startup). The only drawback to static bindings is the object file mess. But with a little initial work upfront, it can be minimized for users so that it, too, is negligible.

Once you decide between static and dynamic, you aren't quite ready to roll up your sleeves and start implementing the binding. First, you have to decide how to create the binding. Doing it manually is a lot of work. Trust me! That's what I do for all of the bindings in Derelict. Once you develop a systematic method, it goes much more quickly. But it is still drastically more time-consuming than using an automated approach. To that end, I know people have used SWIG and a tool called htod. VisualD now has an integrated C++-to-D converter which could probably do it as well. I've never used any of them (which is really incredible when I think about it, given how precious little time I usually have), so I can't comment on the pros and cons one way or another. But I do know that any automated output is going to require some massaging. There are a number of corner cases that make an automated one-for-one translation extremely difficult to get right. So regardless of your approach, if you don't want your binding to blow up on you down the road, you absolutely need to understand exactly how to translate C to D. And that's where the real fun begins.

That's going to have to wait for another post, though. It's Sunday evening here and I've got things to do. In part two, I'll talk about function declarations. I think they're easier to cover than types, which I'll save for a third post. Until then, Happy New Year!
Aldacron

Dolce and Da5teroids

So I've finally gotten started on some game code. In order to see if Dolce is actually useful or a waste of time, I decided to start by porting the A5teroids demo that ships with the Allegro 5 package. Here's what I've learned so far.

First, the whole idea of Dolce as a framework on top of Allegro to allow an absolute minimum amount of startup code for a game was a good idea on paper. All of the initialization details were tucked away behind a single method call. In practice, it's rather silly. I was happy with the little example I gave some time ago. But, the number of options that need configuring in order to make it useful actually make it more complex than it would be if I didn't hide all of the initialization details. So, now I don't. Now, you initialize the modules you need as you need them. It's cleaner, still takes just a few lines to get to the good stuff, and makes more sense.

Second, I've forgotten how fun it is to make a game. Once I dug into the A5teroids source, I decided it's not something I really want to port. It appears to have been cobbled together with little thought or consistency. So I decided to mostly start from scratch. I'm using the same resources and related data, but that's it. I've not spent a whole lot of time on it so far, but I'm having a blast. Along the way, I've been tweaking Dolce to make it more useful. I've also begun to expand it a bit and start implementing some utility modules.

Finally, as a result of moving forward on all of this I've hit on some aspects of D that would make useful posts here. I've been rather quiet here for a while, so it will be nice to have something to say again.

I anticipate that over the next three or four weeks I'll be able to get a lot more work done in D-Land than I have up until now. Not only do I need to refine Dolce and prepare it for public consumption, I also need to get busy putting the finishing touches on Derelict 2 (which I've also finally decided to move to github, thanks to a bit of encouragement). Fun times ahead.
Aldacron

Dolce Gets Some Love

Once the semester ended, I took a week to do nothing but play Dark Age of Camelot and get some much needed guitar practice in. Then I dove back in to the gym for some long overdue exercise. I also managed to pick up a few more private classes to fill some of the gaps in my schedule. Once I let off all that steam, I fired up VisualD and took a look at Dolce.

This has been an interesting project for me so far. In all of the years I've been programming, I've never experienced anything quite like this. On the surface, it's such a simple thing. To date, it's just a handful of D modules with a very limited interface. It's that latter bit which has been the challenge. I never realized how tough it is to design a minimal interface.

The goal with this project from the beginning has been to enable getting a game off the ground with Allegro and D quickly and painlessly. No wrapping of the Allegro libraries, no complex abstractions. Just a collection of routines to package up a bunch of boilerplate so that you can get to the game code almost as soon as you pull up your IDE for the first time. I thought it would take a few days of dinking around. After all, I've got years of experience under my belt. I could do this in my sleep, right?

From the very beginning, I stumbled over which approach to take with the interface: free functions or static class methods. Ultimately, I settled on the latter approach. Then I started implementing the outline I had sketched out. As I progressed, I realized it was a bit overly complex. Did I really need a templated resource manager for a finite number of known Allegro resource types? So I scrapped the first pass and went back to the drawing board. More than once. It's been a while since my last major rewrite, but I think there were a total of four. Some of that process was documented here on this blog.

Over the past couple of months, I've not done much more than a bit of tinkering here and there due to my busy schedule. But now that I've come back to the project in earnest, I can say it's looking good. I've got it to the point where it's ready to use for a game. And that is, in fact, my next step. I'm going to port over the A5teroids demo that ships with Allegro. And probably implement a few other old skool games just to get an idea of what else I need to add to the core.

The weird part for me is how much refactoring and thinking went into getting such a pitifully small amount of code put together. Of course, it would have been much easier had I been more familiar with Allegro in the first place. I used it a lot back in the late 90s, but that doesn't count given how long ago it was and how much Allegro has changed. Still, I will never again approach a "simple" interface with the idea that it will be simple to implement.

As the project moves forward, I'll be working on some utilities that could be useful for any game project, Allegro-based or otherwise. I'm looking forward to getting into some of the features of D I've not yet had much experience with, like ranges. Before all of that though, once I get a couple of small games knocked out and know for sure that the current interface achieves my goal, I'll do some proper DDoc comments and put all of the source up on Github.
Aldacron
Dolce is still very much alive, just getting little attention while I wrap up the second semester at the university. For anyone interested, I'm teaching in a special program where a local university here in Seoul is partnered with a university in America. The students do two semesters here in Korea before moving off to the States for three years. We started the second semester two months early this year in order to give the students more time between the end of the semester and their departure to America next January. Last year's students, the first to participate in the program, only had two weeks.

In addition to my Debate classes, I'm teaching a special English Fluency course this semester for the handful of students who have achieved their target TOEFL scores. Coupled with all of the private classes I teach six days a week, I don't have enough time to do everything I want to do. When my schedule gets full like this (which happens a couple of times a year), my D projects tend to suffer for it. I find it mentally tough to get any coding done if I can't work for long stretches at a time. I feel so unmotivated when it comes to short bursts at the keyboard.

Still, I've taken the time to do some nipping around the Dolce source here and there. Nothing major to report on that front yet. The semester ends for me in two more weeks, and I'm not giving my students a final exam this time around. So I'll find some nice gaps in my schedule soon that I'll use to give some attention to both Dolce and Derelict.

In the meantime, I'll leave you with this: a D compiler is now very close to being included in the GCC project. You might also be interested in checking out the related Reddit thread. That's great news all around. And I really need to get around to adding it to my D news blog.
Aldacron

Dolce Refinements

In my last post, I showed the minimal amount of code needed to get something up and running with Dolce. And while it's a really small amount of code, something kept bugging me about the implementation. It just wasn't "D" enough.

Over the years, the lion's share of my programming experience has been with C and Java. When working with either language, I naturally, and effortlessly, use appropriate design patterns. In other words, I design "to the language." Maybe "design pattern" is the wrong noun here, but the point is that my code structure differs based on the capabilities and features of each language. And I can switch between them seamlessly, thanks to the years of doodling around I've got under my belt. My C code is "C", and my Java code is "Java". Unfortunately, I haven't quite gotten that line of clarity yet in D.

In D, it's possible to structure things like I do in C. It's also possible to structure things like I do in Java. That's why I've refactored Dolce so many times already. It's mentally distracting having both styles in the same code base. Unfortunately, I've found that sticking to one of the two styles is a source of annoyance, as I can't get over the feeling that if I'm writing Java or C in D, then why am I using D?

The approach demonstrated in my last post, where you subclass a Game class and override certain methods as needed, was obviously very Java-ish. And I didn't like the fact that I was requiring any methods to be implemented at all, even if there were only two of them. Finally, it seemed silly to me that if you want to avoid using the Game class, your only option was to handle all the boilerplate yourself. I knew I could do better than that.

So, I went back to the drawing board and came up with this.


import dolce.game.framework;

class MyGame
{
}

int main(string[] args)
{
Framework.start(new MyGame, "MyGame");
return 0;
}


This will create a window that closes if the escape key or close button are pressed. Pretty useless in and of itself, but the point is that there are no longer any specific methods required and you don't need to subclass anything. In fact, you don't need a class at all. Change MyGame to a struct instead, and it will work fine (and you wouldn't need to 'new' it either, though you could if you wanted to). Even better, you can pick and choose the methods you want to implement.
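To make that concrete, here's a minimal sketch of the struct version, implementing only an update method and nothing else:

[source]
import dolce.game.framework;

struct MyGame
{
    void update()
    {
        // per-frame game logic goes here
    }
}

int main(string[] args)
{
    Framework.start(MyGame(), "MyGame");
    return 0;
}
[/source]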

Framework is a static class with a templated start method that looks like this:


static void start(T)(T game, string appname, string orgname = null)
{
init(appname, orgname);

static if(hasMember!(T, "init")) game.init();

_running = true;
al_start_timer(_frameTimer);

while(_running)
{
// Pump all events.
pumpEvents();

// If it's time to update, do it.
if(_update)
{
static if(hasMember!(T, "update")) game.update();
_update = false;
}

static if(hasMember!(T, "render")) game.render();
}

static if(hasMember!(T, "term")) game.term();
term();
}


The thing I want to focus on is the hasMember template. It's implemented in the standard library module std.traits. It is not a function template. It takes a type list and no parameter list. The template itself contains one field, a bool value called hasMember. This calls for a brief detour to talk about templates in D.

Let's say I want to define a template that acts as a boolean value. Let's call it "isTrue".


template isTrue()
{
enum bool val = true;
}


Here, the template has an empty type list and no parameter list. Internally, there is no function declared, only a single member, val (so it's not a function template). Notice that val is declared as an enum. In D2, single-member, anonymous enums are known as manifest constants. They are not actual variables, so cannot be used as lvalues or have their addresses taken. Essentially, every time the template is instantiated, that call to the template is replaced by the value of the enum.

Notice also that the template and the member have different names. This means that you have to explicitly type the member name when you instantiate the template. Also remember that D's template instantiation syntax is templateName!(). So, given that, this is how you would use this template:


void main()
{
assert(isTrue!().val);
}


Sometimes you may want a template to have multiple members, but the large majority of templates you write are going to have only one. In that case, D lets you take some shortcuts if the member name is the same as the template name. In this case, we can eliminate the .val part when instantiating the template.


template isTrue()
{
enum bool isTrue = true;
}

void main()
{
assert(isTrue!());
}


std.traits.hasMember uses this approach. But it also takes two args to its type list -- one is the Type that it is to operate on, and the other is a string that will be used to look up a member method or variable of that type at compile time (this is accomplished via D's compile time reflection capabilities).
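Used on its own, it looks like this:

[source]
import std.traits;

struct Foo
{
    int x;
    void bar() {}
}

static assert(hasMember!(Foo, "x"));
static assert(hasMember!(Foo, "bar"));
static assert(!hasMember!(Foo, "baz"));
[/source]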

You may be wondering how a string value could be part of the type list, rather than the parameter list. The type list isn't just for types. You can also pass any sort of symbol that can be known at compile time. Let's change our isTrue example now to demonstrate this. The new job of the template is to evaluate to true if and only if a given string is equal to the string "true".


template isTrue(string s)
{
enum bool isTrue = "true" == s;
}

void main()
{
assert(isTrue!("true"));
assert(isTrue!("bugaboo"));
}


Run this and the first assert will pass. The second will blow up. The string 's' is never used at runtime here. It's a compile-time-only symbol. When the template is instantiated, the compiler will run the test ("true" == s) and set the value of isTrue to the result. The asserts are basically being rewritten as:


assert(true);
assert(false);


By using hasMember the way I do in the Framework.start method shown above, no calls will be made to any methods not present in the given type. They simply won't be compiled in to the executable. If the method is implemented, a call to it will be compiled in. This means anyone using Framework.start can implement any combination of init, term, update and render methods.

One more thing to point out. Let's look at the minimal example again.


import dolce.game.framework;

class MyGame
{
}

int main(string[] args)
{
Framework.start(new MyGame, "MyGame");
return 0;
}


According to the rules for instantiating templates, the call to Framework.start should look like this:


Framework.start!(MyGame)(new MyGame, "MyGame");


Again, the compiler is letting me take a shortcut. Because I only have one type in the typelist, I can omit the typelist and the ! when I instantiate the template. The compiler has all the information it needs to infer the type, so there's no need for me to be verbose about it.

With the new implementation, I managed to cut out several lines of code from the framework (previously 'game') module. And though I haven't implemented it yet, the door is now open to use Framework for handling the boilerplate if you want to use free functions instead of a class or struct. This is more what I would call "D style" than the first pass was.

Head on over to d-programming-language.org for more info on templates and the std.traits module.
Aldacron
I'm making very slow progress on Dolce, but progress nonetheless. I managed to grab a couple of hours today to refactor event handling and to simplify the framework interface.

In my initial implementation, it was necessary when making a game with Dolce to make all of the initialization and termination calls yourself. All of these calls are found in the different modules of the core package. They wrap a lot of Derelict and Allegro setup/cleanup in a way that is configurable, providing sensible defaults where required. But, while it's a good deal less boilerplate than you would otherwise need to type, it could be improved.

So, now, to start implementing a game with Dolce, the minimal amount of code you need is this:


module mygame.main;

import dolce.game.game;

class MyGame : Game
{
protected override
{
string name()
{
return "MyGame";
}

void frame()
{

}
}
}

int main(string[] args)
{
Game.start(new MyGame);
return 0;
}


Add your per-frame code in the frame method, and away you go. Both name and frame are abstract methods in the Game class, so they are the only two methods that must be overridden. I may yet rework it to eliminate the frame method and have a pair of render/update methods instead (haven't decided where or how to handle timing yet). But the point is, there's very little to do just to display something on screen. Of course, there are several other methods that can be overridden for game-specific init/cleanup, event handling, and more. But as is, this will call the frame method as fast as possible and has default handling for the escape key and window close button. And now this is starting to look like what I had envisioned in the beginning.

It's also possible to avoid the Game.start method and manage the life cycle yourself:


int main(string[] args)
{
auto game = new MyGame;
scope(exit) game.term();
game.init();
game.run();
return 0;
}


Or, you can ignore the game module entirely and use the core package modules directly. Your choice.

The real point of this post, though, was a chance to talk about how D handles method overriding. Notice the override keyword in the first code block above. This is not mandatory when overriding, but it's a good idea to use it. DMD will spit out a warning if you don't. What it's good for is to protect against the case when the superclass changes. Imagine that you override a method from the Game class, but in the future that method is removed or renamed. By using override, the compiler will let you know the next time you compile that your method isn't actually overriding anything. Otherwise, it would be creating a new method in your derived class that doesn't override anything and you might never be the wiser until you get a funky bug in your app. So when writing D code, using override is a good habit to pick up.
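Here's a quick sketch of the scenario, with a stand-in base class:

[source]
class Base
{
    void frame() {}
}

class MyGame : Base
{
    // Compiles fine today. If Base.frame() is later renamed or removed, this
    // line becomes a compile-time error instead of silently declaring a brand
    // new method that overrides nothing.
    override void frame() {}
}
[/source]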
Aldacron
D shares a lot of similarities with C++ and Java, but a lot of the sameness is just a bit different. One of the first places new users see this is in the handling of structs and classes.

D's classes have more in common with those of Java than C++. For starters, they're reference types. Whenever you need an instance of a class, you new it and it is allocated on the heap (though scoped stack allocation is possible via a standard library template). Structs, on the other hand, are value types. But, you can create pointers to them, allocate them on the heap with new, or pass them by reference to a function that takes ref parameters. Another difference between the two is that classes have a vtable and can be extended. Structs, however, have no vtable and can not be extended. That also applies to the Java-esque interfaces in D -- classes can implement them, structs cannot.
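A tiny example makes the value/reference split plain:

[source]
class C { int x; }
struct S { int x; }

void main()
{
    auto c1 = new C;    // classes are reference types, allocated on the heap
    auto c2 = c1;       // c2 refers to the same object as c1
    c2.x = 5;
    assert(c1.x == 5);

    S s1;               // structs are value types
    auto s2 = s1;       // s2 is an independent copy
    s2.x = 5;
    assert(s1.x == 0);
}
[/source]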

Because of the differences between the two, there are certainly some design implications that need to be considered before implementing an object. But I'll talk about that another day. For now, I just wanted to give a little background before getting to an example illustrating the primary motivation for this particular post.

While working on Dolce, I thought it would be a good idea to wrap Allegro's ALLEGRO_CONFIG objects, simply because working with a raw, string-heavy C API in D can be a bit tedious. You have to convert D strings to C strings and vice versa. In this particular case, you also have to convert from string values to integers, booleans, and so on, since Allegro only returns raw C strings.
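If you haven't done it before, the conversions look something like this (std.string.toStringz going from D to C, std.conv.to coming back):

[source]
import std.conv : to;
import std.string : toStringz;

void example(const(char)* fromC)
{
    // D string -> NUL-terminated C string
    const(char)* cstr = toStringz("Video");

    // C string -> D string (copies up to the NUL)
    string dstr = to!string(fromC);
}
[/source]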

So what I wanted was something that allowed me to create and load configs, then set and fetch values of the standard built-in types. Rather than implementing a separate get/set method for each type, I chose to use templates. And in this case, I didn't want the whole object to be templated, just the methods that get and set values.

Initially, I implemented it as a class, but in the rewrite of Dolce I pulled the load, create, and unload methods out and made them free functions. I also realized that this is a perfect candidate for a struct. The reason is that it contains only one member, a pointer to an ALLEGRO_CONFIG. This means I can pass it around by value without care, as it's only the size of a pointer. Here's the implementation:


struct Config
{
private
{
ALLEGRO_CONFIG* _config;
}

ALLEGRO_CONFIG* allegroConfig() @property
{
return _config;
}

bool loaded() @property
{
return (_config !is null);
}

T get(T)(string section, string key, T defval)
{
if(!loaded)
return defval;

auto str = al_get_config_value(_config, toStringz(section), toStringz(key));
if(str is null)
return defval;

// std.conv doesn't seem to want to convert char* values to numeric values.
string s = to!string(str);
static if( is(T == string) )
return s;
else
return to!T(s);
}

void set(T)(string section, string key, T val)
{
if(!loaded)
return;

static if( is(T == string) )
auto str = val;
else
auto str = to!string(val);

al_set_config_value(_config, toStringz(section), toStringz(key), toStringz(str));
}
}


You'll notice that the _config field is private. I don't normally make struct fields private, as the structs I implement are usually intended to be manipulated directly. But in this case I thought it prudent to hide the pointer away. I still provide access through the allegroConfig property (and I'll discuss properties another day) in case it's really needed.

So you may also be wondering how _config is ever set if it's private and there's nothing in the struct itself that sets the field. The answer lies here:


Config createConfig()
{
return Config(al_create_config());
}


This is something that frequently trips up D newbies coming from other languages. What's going on is that the create function and the Config implementation are in the same module. You can think of modules as another level of encapsulation. And, for those who are steeped in C++ vernacular, you can think of modules as friends of every class and struct implemented within them. In other words, private class/struct members/methods are all visible within the same module. If you ever get to know D at all, you'll likely find this to be a very convenient feature.
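Boiled down to the essentials, it looks like this (a stripped-down sketch, not the actual Dolce code):

[source]
module mylib.config;

struct Config
{
    private void* _handle;
}

// Legal: a free function in the same module can touch Config's private members.
Config wrap(void* handle)
{
    return Config(handle);
}

// In any other module, writing someConfig._handle would be a compile-time error.
[/source]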

Another thing that might jump out at you is the static if. This is one way to conditionally generate code at compile time. You'll frequently see it used in templates, though its use is not restricted to templates. Here, I'm testing if the type T is a string or not. In the get method, the value returned from Allegro is a char*, so it is converted to a D string. If the type of T is string, then there's no need for any further conversion and the D string can be returned. If not, the D string must be converted to the appropriate type (int, bool, long or whatever). Similarly, in the set method, a char* needs to be passed to Allegro, so any nonstring values are first converted to a D string. But if T is string, that step isn't necessary.

And now to the templates. D's template syntax is clean and extremely powerful. This example demonstrates the cleanliness part at least.


T get(T)(string section, string key, T defval)


If you've ever used C++ templates, it should be clear what is going on here. The type to be accepted is declared in the first pair of parentheses, the parameter list in the second pair. And in this case, a value of the specified type is returned.

Now, to use it:


auto config = createConfig();
auto i = config.get!int("Video", "Width", 800);
auto b = config.get!bool("Video", "Fullscreen", false);


Here, I've used the auto keyword for each variable. The compiler will infer the Config, int, and bool types for me. As for the template instantiations, notice the exclamation point used between the method name and the type. That's what you use to instantiate a template. Technically, I should be wrapping the type of the template in parentheses, like this:


auto i = config.get!(int)("Video", "Width", 800);


If you have a template with more than one type in the type list, then you have to use the parens. But if there is only one type, the compiler lets you get away with dropping them. For singly-typed templates, that has become a standard idiom in D.

If you decide to give D a spin and come from a C++ or Java background, I hope this post helps keep you from any initial confusion that might arise when you find that things aren't quite the same as you're used to.

You can learn more about D's structs, classes and templates at d-programming-language.org.
Aldacron
I've made a bit of progress on Dolce, but I realized something while I was doing it. My purpose for starting the project was to work on a game idea I've had for a long, long time. I knew from the get-go that graphics were going to be a problem. The problem is the open-endedness and complexity of the game experience. To pull it off, I either need very detailed graphics, or simplistic graphics with a good deal of descriptive text.

I'm no artist and I can't be bothered to pay anyone for the level of detail I would need for the first option. This iteration is just for fun, not profit. So I have to go for the second option -- minimal graphics plus text. My intention was to go very, very, simplistic. Not quite Dwarf Fortress simplistic, but very nearly. I had it in my head to use a simple tile set, with symbols of some sort for game entities.

My problem, as always, is time. Between work, family, and other projects that have higher priority, I'm wasting too much time on the graphical side of this game idea. I want to get busy with the game itself. Since I'll be using a good deal of text anyway, why not just ditch Allegro and go for a pure text-based game? So that's exactly what I've decided to do. Dolce, however, is a neat little idea that I hope to come back to now and again when I have a lull. And at some point I will very much want to make a graphical version of this game if it turns out to be as fun as I hope. But for now, text it is.

One of my first problems to consider is colored text in the console. I want the game to run on Windows, Mac and Linux, so I want something that's portable and simple. That means ANSI escape codes. But the Windows command console doesn't support ANSI escape codes right out of the box. After a bit of digging around, I found a solution.

ansicon is a little console for Windows that can understand ANSI escape codes. And it's freely distributable so I can bundle it with my app and use it in place of the standard command prompt. Of course, I wouldn't want the user to have to open it and type a command to run my app. Luckily, any arguments passed to ansicon that it doesn't recognize are considered to be a program to execute and its arguments. So a one line batch file would do the trick:


ansicon MyApp


Another option would be to make a simple executable to launch ansicon. In D, it might look something like this:


int main(string[] args)
{
import std.process;
return system("bin\\ansicon.exe MyApp");
}


Notice how I've got the import statement inside the main function. That's a new feature that was added in the latest DMD release. It doesn't seem like a big deal, but I can't count how many times I've knocked up a quick D script to test out some idea only to realize that I forgot to import std.stdio at the top. If I'm only using a symbol in one function, then instead of scrolling back up and adding the import at the top, I can just add it in place and drive on.

There are other functions in std.process that can be used to execute processes in a cross-platform manner. Of course, before someone points it out, I do realize that the path I passed is not cross-platform because of the backslash. Calls to system on Windows will fail if the path contains a forward slash, so you'd have to do the right thing based on platform. But for this launcher, I don't need to be cross platform.
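
If I did want the launcher to be cross-platform, a version block is the usual way to do the right thing per platform. A rough sketch, reusing the same system call as above (the non-Windows command is just a placeholder):


import std.process;

version (Windows)
    enum launchCmd = "bin\\ansicon.exe MyApp";
else
    enum launchCmd = "./MyApp"; // no ansicon needed where ANSI codes already work

int main()
{
    return system(launchCmd);
}
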

For those who don't like escaping special characters, there's one more D feature that can help: WYSIWYG strings.


// Using r""
string s = r"bin\ansicon MyApp";

// Using backticks (not single quotes)
s = `bin\ansicon MyApp`;


D also supports heredoc strings, which also do not require escapes.
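
The heredoc form is a delimited string, with an identifier of your choosing as the delimiter. A quick sketch:


// No escapes are processed between the delimiters.
string cmd = q"EOS
bin\ansicon MyApp
EOS";
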

So now I'm on the road to making a complex text-based game to prototype my idea and you know a little more about D's strings.
Aldacron
I've been away from Dolce for a couple of weeks now. I just came back to it last night and realized I don't like it. I've horribly over-engineered some of the modules. So last night I started stripping stuff out and refactoring. In the process, I realized a silly mistake in my resource management code. I'm throwing it out and rewriting it anyway, but it inspired a topic for this blog.

My resource system was, overall, designed to work with any imaginable resource. So it's template-based and has a flexible interface. But I started from the perspective of Allegro resources, which means working with struct pointers. And I imagined that other potential resources that I might want to use would be class based. That led to an implementation detail that could cause compilation to fail in certain cases.

The problem boils down to something like this. Given a member called _resource of type T, I want to clear it out when I no longer need it in certain circumstances. Since I'm dealing with struct pointers and classes, which are always references, I can just do this:


_resource = null;


That works and does what I want. Then to determine whether a resource is loaded I can just test _resource for null. Until, of course, I decide one day to do something like use a struct resource by value, rather than as a pointer. Not something too farfetched. In that case, I'd get a compiler error. While DMD's template error messages are a good deal better than what most C++ compilers give us, it's annoying to get them. And there's no reason why I shouldn't support non-nullable types.

So here's a contrived example of what happens in this situation.


module nullify;

void nullify(T)(T t)
{
    t = null;
}

void main()
{
    int i = 10;
    nullify(i);
}


So nullify is a templated function with no constraints, meaning it can accept any type at all (I'll talk a bit about D's template syntax another day -- this is a specific case where you can declare the template without the template keyword and call it as you would a normal function). Try to compile this code and you'll get the following output:


nullify.d(8): Error: cannot implicitly convert expression (null) of type void* to int
nullify.d(14): Error: template instance nullify.nullify!(int) error instantiating


Right. To solve this problem, there are two obvious choices. One is to use template constraints to restrict the template only to pointers and classes. There would still be compiler errors of a different nature, but it would be a signal that this template is not intended to work with value types. In some cases, that might be preferable. In this particular case, a better option is to make use of default initializers.
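
For reference, the constrained version of the first option might look something like this (a sketch, not code from the actual resource system):


import std.traits : isPointer;

// Only instantiable for class references and raw pointers.
void nullify(T)(T t)
    if (is(T == class) || isPointer!T)
{
    t = null;
}
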

Every type in D has a default value to which instances are automatically initialized on declaration. For example, ints are initialized to 0, floats to nan, classes and pointers to null. This value is readable as a property, .init, both on the type and on the instance. So we can modify the nullify template above like so:


void nullify(T)(T t)
{
    // You could use t.init or T.init here.
    t = T.init;
}


Now the code will compile. Pointers and classes will be set to null, floats and doubles to nan, characters to 0xff, and so on. What about value structs? Try this:


module nullify;

import std.stdio : writeln;
import std.string : format;

void nullify(T)(T t)
{
    t = T.init;
    writeln(t);
}

void main()
{
    struct Foo
    {
        int x = 10;
        int y;

        // Without a toString method, the name of the type
        // would be output by default in the call to writeln
        // above. In this case, "Foo".
        string toString()
        {
            return format("(%d, %d)", x, y);
        }
    }
    Foo f = Foo(23, 38);
    nullify(f);
}


The output from this is "(10, 0)". Foo.y, as an int, has the default initializer 0. I've changed the default initializer of Foo.x, however, in the definition of Foo. So all instances of Foo will have the value 10 for x on instantiation. This is not the same as assignment. For example, this will not print 10, but 0:


int i = 10;
nullify(i);


Here, we are initializing one particular instance, not changing the type's definition, so the default initializer of int is untouched. Big difference.
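
A couple of asserts make the point:


int i = 10;
assert(i == 10);
assert(int.init == 0); // the type's default initializer is unchanged
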

So using default initializers is a convenient way to clear out a templated class/struct member for any given type. Then, tests like a hypothetical isLoaded method become


bool isLoaded()
{
    return _resource != T.init;
}


You can read more about the .init property in the D documentation at d-programming-language.org.
Aldacron
Recently, in the Derelict forums, someone asked me if I wanted him to update his GLFW binding, based on the old Derelict, for the Derelict 2 branch so that I could add it to the trunk. We had a GLFW binding in Derelict before, but removed it due to issues with building the GLFW shared libraries. Derelict, you see, is designed to load shared libraries manually and cannot link with static libs. That was quite some time ago. In the intervening years, a new maintainer has taken over the GLFW project and made some improvements to it.

So I've had it in the back of my mind to give GLFW another look at some point for possible inclusion into Derelict 2. Today, I did. A new version was released late last year (2.7) and a new branch that streamlines the API (3.0) has been started. I really like the new branch. So, being the spontaneous sort of fellow I am, I decided I wanted a binding to it. I knocked one up in just over 30 minutes. It's now sitting in my local "scratch" copy of Derelict, waiting to be compiled and tested. Given that it's 1:30 am as I type this, I don't think I'm going to get to it just yet. Tomorrow for sure.

I won't be adding this new binding to the Derelict repository just yet. GLFW 3.0 is still in development. So, just as with the binding I've begun for SDL 1.3 (which will become SDL 2 on release), I'll wait until the C library is nearing a stable release before I check it in.

Making D bindings to C libraries is not a difficult thing to do. It's just tedious if you do it manually, like I do. I have a system I've grown used to now that I've done so many of them. It goes reasonably quick for me. Some people have experimented with automating the process, with mixed results. There are always gotchas that need to be manually massaged, and they might not be easily caught if the whole process is automated. One example is bitfields.

D doesn't support bitfields at the language level. There is a library solution, a template mixin, that Andrei Alexandrescu implemented in the std.bitmanip module. I don't know how compatible it is with C. I've only had to deal with the issue once, when binding to SDL 1.2, but that was before the std.bitmanip implementation. Besides, it's a D2 only solution and Derelict has to be compatible with both D1 and D2. So what I did was to declare a single integer value of the appropriate size as a place holder. The bits can be pulled out manually if you know the order they are in on the C side. I could have gone further by adding properties to pull out the appropriate bits, but I never did the research into how different C compilers order the bitfields on different platforms.
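
As a sketch of that placeholder approach (the struct and field names here are made up, not taken from any Derelict binding):


// C side (illustrative):
//     struct Flags { unsigned int fullscreen : 1; unsigned int vsync : 1; unsigned int depth : 6; };
// D side: a single integer of the same total size stands in for the bitfields.
struct Flags
{
    uint _bits; // pull individual fields out manually if you know the C compiler's bit layout
}
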

Another issue that crops up is dealing with C strings. For the most part, it's not a problem, but if you are new to D it's a big gotcha. Like C strings, D strings are arrays of chars (or wchars or dchars as the case may be). But char strings in D are UTF-8 encoded by default. Furthermore, D arrays are more than just a block of memory filled with array values. Each array is conceptually a struct with length and ptr fields. Finally, and this is the big one, D strings are not zero-terminated unless they are literals. Zero-terminated string literals are a convenience for passing strings directly to C functions. Given a C function prototype that takes a char*, you can do this:


someCFunc("This D string literal will be zero-terminated and the compiler will do the right thing and pass the .ptr property");


If you aren't dealing with string literals, you need to zero-terminate the string yourself. But there's a library function that can do that for you:


import std.string;

// the normal way
someCFunc(toStringz(someString));

// or using the Universal Function Call Syntax, which currently only works with D arrays
someCFunc(someString.toStringz());


A lot of D users like the Universal Function Call Syntax and would like to see it work with more types instead of just arrays. Personally, I'm ambivalent. The way it works is that any free function that takes an array as the first argument can be called as if it were a member function of the array.

Going from the C side to the D side, you would use the 'to' template in std.conv:

import std.conv : to;

// with the auto keyword, I don't need to declare a char* variable. The compiler will figure out the type for me.
auto cstr = someCFuncThatReturnsACharPtr();

// convert to a D string
auto dstr = to!(string)(cstr);

// templates with one type parameter can be called with no parentheses. So for to, this form is more common.
dstr = to!string(cstr);


Another gotcha for new users is what to do with C longs. The D equivalent of nearly all the C integral and floating point types can be used without problem. The exceptions are long and unsigned long. D's long and ulong types are always 64-bit, regardless of platform. When I initially implemented Derelict, I didn't account for this. D2 provides the aliases c_long and c_ulong in core.stdc.config to help get around this issue. They will be the right size on each platform. So if you see 'long' in a C header, the D side needs to declare 'c_long'. I still need to go through a few more Derelict packages to make sure they are used.
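
So a C prototype using long maps along these lines (the function name is hypothetical):


import core.stdc.config : c_long, c_ulong;

// C:  long some_c_func(unsigned long flags);
extern(C) c_long some_c_func(c_ulong flags);
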

The issues that crop up when actually implementing the binding aren't so frequent and are easily dealt with. Sometimes, though, you run into problems when compiling or running applications that bind to C.

D applications can link directly to C libraries without problems, as long as the object format is supported by the compiler. On Linux, this is never an issue. Both DMD and GDC can link with elf objects. Problems arise on Windows, however. The linker DMD uses, OPTLINK, is ancient. It only supports OMF object files, while many libraries are compiled as COFF objects. If you have the source code and you can get it to compile with Digital Mars C++, then you're good to go. Otherwise, you have to use the DigitalMars tool coff2omf, which comes as part of the Digital Mars Extended Utilities Package. Cheap, but not free. Then you still might face the problem that the COFF format output by recent versions of Visual Studio causes the tool to choke. There are other options, but it's all nonsense to me. That's one of the reasons when I made Derelict I decided that it would only bind to libraries that come in shared form and they will be loaded manually. Problem solved. But there are other issues.

In a past update to DMD (not sure which), the flag '--export-dynamic' was added to the DMD config file (dmd.conf) on Linux. That means every binary you build on Linux systems with DMD has that flag passed automatically to gcc, which DMD invokes to do the linking on Linux. Normally, not an issue. Until you try to build a Derelict app. The problem is that Derelict's function pointers are all named the same as the functions in the shared library being bound to. This causes conflicts when the app is built with --export-dynamic on Linux, but they don't manifest until run time in the form of a segfault. Removing the flag from dmd.conf solves the problem. One of these days I need to ask on the D newsgroup what the deal with that is.

I know all of this could sound highly negative, giving the impression it's not worth the hassle. But, seriously, that's not the case. I have been maintaining Derelict for seven years now. Many bindings have come and gone. Version 2 currently supports both D1 and D2, as well as the Phobos standard library and the community-driven alternative, Tango. I can say with confidence that D works very well with C the large majority of the time. And for anyone planning to use D to make games, you will need to use C bindings at some level (Derelict is a good place to start!). As for binding with C++... well, that's another story that someone else will have to tell.