
The Bag of Holding

Assorted Crap

Posted 03 January 2006 · 287 views

So I'm drinking another Bawls on Gin at the moment, surfing around a bit, getting ready to crack open the copy of Pulp Fiction that I just got from Netflix. I have three pieces of advice for my readers:

1. If you have not seen Pulp Fiction, rent a copy and see it.
2. If you have not signed up for Netflix, go do it. Now.


No, seriously - stop reading this and go sign up for Netflix. Best. Service. Ever. I love not having to go to my crap podunk Hollywood Video or Blockbuster, discover they are closed, rant in anger, go back over lunch break the next day, and discover they do not have the title I wanted to rent. Netflix is awesome.

3. If you have not tried Bawls on Gin, go do it. Soon. Maybe not now. Honestly, though: get yourself a pack of Bawls, a bottle of Tanqueray or Bombay, and mix one up - 1 bottle Bawls, 1 shot gin (vary the amount of gin to taste; I prefer about 1.1 shots worth). This drink is amazing and you want one. I expect all two of my dutiful regulars to report to me within the week that you have tried Bawls on Gin. I fully expect, at some point in the relatively near future, to be able to actually order this drink in a bar, and have the tender know what I'm talking about. Spread the word, minions... loyal friends... people... possibly semi-sentient (or better) reading audience... someone. I think.



Other than that... I'm drafting my article off and on. I have a slow, perfectionist method to writing formally, so it may be a while, but as soon as I have a completed and more-or-less coherent draft I'll post it for evaluation before I start revisions.

I've also got the beginnings of a very large, very intricate, and very twisted plot concept jotted down in Notepad, sitting on my desktop. I'm not going to say any more about it, because I need more time to really understand what I'm thinking (if I'm thinking at all) before I start discussing it publicly. Suffice it to say I've bounced it off a couple of people and gotten exceedingly good reactions, so I'm hoping to fully pursue this concept at some point in my game-writing future.

I'm also getting very, very close to finishing up my day job and going full-time with Egosoft. That makes me exceedingly happy, because I'm sick and tired of this product and just want to be done with the thing.


Anyways, happy 2006, and such, etc.



Hapi Djus

Posted 28 December 2005 · 371 views

I think I might keep my flatmate. Lately he seems to be starting this trend of actually having some significant value, aside from being an efficient resource sink for my Moon Pies and other valuable edible commodities.

You see, he has concocted a drink. Anyone can invent a drink, but this one is really something. The inspiration came as we drained our reserves of beverages to make way for our upcoming New Year's fare; this involved disposing of a decent amount of gin. Somehow, a couple of bottles of Bawls got mixed up in this ruckus as well, and being generally bored and lacking sleep, we decided to try mixing them.

After some testing, we have arrived at my proposed final mixture, which I present to you now:

Bawls on Gin
Ingredients:
- 1 bottle Bawls
- 1 shot gin

Open the Bawls and drink off the first couple of swallows; this should leave the level of the Bawls just at the bottom of the neck where the bottle begins to flare out. Pour in the gin and mix gently with a circular motion. Consume and enjoy.


Awwwwww

Posted 20 December 2005 · 280 views

Ladies and gentlemen, please welcome my fourth and most recent nephew, Chase Andrew. All seven pounds, one ounce of Chase got permanently stuck with my weird and goofy family at about 7 AM on December 17th.




His big brother Chad is currently in a state of quantum flux, not entirely sure of whether to be jealous or overprotective. It's actually kind of funny to watch.






And with that, I'm going to sneak off and go on Christmas vacation, so I can go bug my other two nephews. To top off my devious nastiness, I'm going to leave everybody hanging about that article that I threatened you with.

Tune in next week for more total inanity! Woot.



Delusions of Writing Ability

Posted 19 December 2005 · 335 views

So I've been reading the forums instead of working on my own time, and I've started to see a lot of stuff cropping up that deals with abstraction and programming languages. I have a feeling that this is really not much different than usual, but now that I'm thinking about it, I'm seeing a lot more things that can be interpreted from that perspective. (Funnily enough, I have that pattern with all kinds of things. Ever learn a new word, and suddenly hear it in, say, twenty places all in the same day? I suspect this is a well-documented psychological phenomenon, but being the unedumacationed type, I have no idea. Anyone know if this has a name?)

Anyways, all of this has got me thinking. Specifically, it's got me thinking about articles. I love reading articles. Articles are nice and short, but still have goodies - well, the good ones do. Articles are perfect. You can sneak them in while you wait for the complete project rebuild to finish, and if you're questioned, a lot of articles can be linked to your work. ("I'm reading up on good design principles. It'll help us deliver this project faster and with fewer bugs. Really.") Since I like articles so much, and since I'm a veritable sack of hot wind about abstraction levels lately, I've come up with an idea: I could write an article.

Actually, I really don't care about writing articles, I just want something I can point people to about abstraction (and programming languages) without having to write all of my thoughts out from scratch. Besides, a long forum post is just mind-bendingly dull. Nobody reads long posts (ask Wavinator). Take the exact same text, slap a byline and caption on it, and shove it off into its own page, and it becomes great literature.


So, as usual, I've expelled a lot of excess gibberish to get to the point I'm after: I'm contemplating writing an article. That sounds distinctly like work, however, so I've decided that I'll only bother with it if there is sufficient demand. I know someone at least glances in here from time to time (either that or I've got a loyal fan who just sits in here pressing F5 all day), so here's my challenge to you: post your demand, and I'll give you an article.


RED ALERT! Brain dump at twelve o'clock!

Posted 14 December 2005 · 258 views

It is time yet again to dribble out a bunch of half-finished thoughts about things I don't really properly understand. For this episode, my topic of choice is code integrity.

First of all, let me buy myself some time by defining the phrase "code integrity" as I intend to use it. It might be worth noting that I just completely made that up, so it's probably a stupid choice of words. That aside, however, I think the point I was originally trying to convey (before I got sidetracked in thinking up all of this self-referential gibberish) is that code can be broken. Deeply profound, I know.

So code breaks. The first important question is when it broke: Was it already broken because the design itself was broken? Was it broken because it wasn't implemented correctly? Did it break during a maintenance change? Did it break due to coupling to another piece of code (or data) which has changed? Did the evil Nazi code gnomes sneak into the production system in the middle of the night and spread havoc?

Once we know when the code broke, we need to know something else: how the breakage was found. An excellent piece of advice (which, as usual, I stole from The Pragmatic Programmer) is to make sure that things fail as quickly as possible. A good example of this is division by zero. On a modern IA-32 processor, a division by zero actually throws an exception at the hardware level. This is a great idea. Suppose the processor didn't do anything, and simply expected the user to be smart enough never to divide by zero. Some random data could get stuck into memory, and cause all manner of bogus results.
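To make the fail-fast idea concrete, here is a minimal C++ sketch; the function and its check are invented for illustration, not taken from any real project. The point is simply that the error surfaces loudly at the call site instead of leaking garbage downstream.

#include <cassert>
#include <stdexcept>

// Fail fast: refuse to produce a bogus result. The check is nearly free,
// and the failure happens right where the mistake was made.
int SafeDivide(int numerator, int denominator)
{
    if (denominator == 0)
        throw std::invalid_argument("SafeDivide: division by zero");
    return numerator / denominator;
}

int main()
{
    int perShare = SafeDivide(100, 4);   // fine
    assert(perShare == 25);

    // Uncommenting this dies immediately with a clear message, rather than
    // letting random data propagate into the rest of the program:
    // SafeDivide(100, 0);
    return 0;
}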

Email is an example of how not to implement failure. In the common POP3/SMTP email structure, I send an email to this_address_is_bogus@bogosityltd.com. Clearly, it never arrives. However, I don't know that this email failed until several hours later when the "I gave up trying to deliver your message" reply comes back from whatever server ended up barfing on my fake mail. If I have a spam filtration system, or if servers simply aren't configured to relay those messages, I may never know that my vitally important secrets about saving the world from Doomsday never reached my good friend Mr. Bogus.

In software, we've got some nifty things like assertions and exceptions that let us make sure our code dies when things go wrong. Wait, wait, wait, though... aren't crashes bad? We don't want our users to see "ASSERTION FAILED IN FOO.CPP" when they run our program, right? Most likely, the answer is that no, we do not in fact want users to see that.

This brings up an important question. We want to fail as soon as possible, so that damage doesn't propagate into other areas of the code. We don't want to put an ugly "illegal operation" or "assertion failed" or "unhandled exception" type message on the screen. However, we also want to make sure that the information about the failure is preserved, so that the failure can be diagnosed and repaired.

There are some tricks we can use for that, like using structured exception handling hooks to capture OS and hardware exceptions (for C++ and Windows apps), logging systems, automated phone-home error reporting mechanisms, etc. However, those are beyond the scope of my rambling, by which I mean to say if I get started talking about that, I'll forget what it was I was pretending to talk about in the first place.
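For the Windows/C++ case mentioned above, one such hook is SetUnhandledExceptionFilter. The sketch below is a generic last-chance logger, assuming a made-up crash.log path; it is not anyone's actual error-reporting code.

#include <windows.h>
#include <cstdio>

// Last-chance handler: preserve the failure information before the process
// dies, instead of showing the user a raw crash dialog.
static LONG WINAPI CrashFilter(EXCEPTION_POINTERS* info)
{
    if (FILE* log = std::fopen("crash.log", "a"))   // hypothetical log location
    {
        std::fprintf(log, "Unhandled exception 0x%08lX at address %p\n",
                     info->ExceptionRecord->ExceptionCode,
                     info->ExceptionRecord->ExceptionAddress);
        std::fclose(log);
    }
    // In a real system this is where the phone-home reporting would go.
    return EXCEPTION_EXECUTE_HANDLER;
}

int main()
{
    SetUnhandledExceptionFilter(CrashFilter);

    // ... run the application; any unhandled OS or hardware exception now
    // gets logged by CrashFilter before the process exits.
    return 0;
}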


So, let's gloss over that issue for a bit. We now have a failure, and information about it. Assuming we have a good programmer handy, we also can fix the failure. We pat our trusty pager that got us out of bed at 3 AM for the fifth time this month, stumble out to the car, and drive home... oh wait, I have to be back at work in an hour... crap. One of those goofy proverb-things that your grandmother used to love to say comes back to mind... something about prevention and cure...

This is where it gets interesting. In fact, some very smart and capable people have already answered the question, a long time ago. You can tell that these people are smart, because they've answered the question, and I haven't even said what the question is yet. Deep Thought would be proud. The question is, can't we do something to keep these failures from happening at 3 AM and getting me out of bed?

The answer, of course, is forty-two... I mean, yes. The smart ones who came before us (well, before me anyways) have done some hard work and hard thinking, and invented good stuff like unit tests. We can use unit testing to find out when things fail, and how exactly they fail. Best of all, unit tests can keep failures from ever making it into production. They're like little magic pills that make bugs disappear, and silence your pager (at least at 3 AM).
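For anyone who hasn't seen one, a unit test doesn't have to be fancy. This is a bare-bones, assert-style sketch in C++; the ClampHealth function is a made-up module under test, and a real framework would handle the bookkeeping for you.

#include <cassert>
#include <cstdio>

// Hypothetical module under test.
int ClampHealth(int value, int maxHealth)
{
    if (value < 0) return 0;
    if (value > maxHealth) return maxHealth;
    return value;
}

// The "test suite": run it on every build, and the breakage is caught at
// your desk instead of by the pager at 3 AM.
int main()
{
    assert(ClampHealth(-5, 100)  == 0);
    assert(ClampHealth(50, 100)  == 50);
    assert(ClampHealth(150, 100) == 100);
    std::puts("ClampHealth: all tests passed");
    return 0;
}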

But unit tests are work, and I'm a lazy bastard. I hate work. Work is... well, work. It takes effort, and all of that stuff, and I just can't be bothered. Actually, I personally do run testing as much as possible, but only in a sort of lazy way. That's really why I'm dumping all of this; I think it should be possible to have unit testing that isn't work. The usual argument goes that the time spent writing and running a testing suite is easily won back in time not spent getting woken up by your pager at 3 AM. However, I think this is silly. I think I should be able to stay in bed at 3 AM, without taking the battery out of my pager, and without doing extra work making unit tests.


Here's the theory. First, we take our level-of-abstraction metacode thingy, and we generate code with it. Then, we use the same system to generate tests. This is based on a very arcane and deeply mysterious principle, which I shall now reveal to you, as Apoch's First Theorem of Testing Software:

Unit tests exist at a level of abstraction that is slightly higher than the module which the tests are designed to test. Therefore, both the unit test and the interface for the module itself can be specified completely at a level of abstraction that is slightly higher than that of the unit test itself.

The upshot of this is that it's getting late, my fingers are cold, and I want a sandwich. Sandwiches are in terribly short supply at the office. (Why I'm here doesn't really matter, although thankfully it does not involve a pager, and it doesn't include being awake at 3 AM. Yet. Sammidge.)

Oh, sorry, that's not the upshot of this. The real upshot of this is that unit tests can be generated with the same knowledge that generates the abstraction code. (This might not actually be an automated process. In fact, at the moment, it rarely is; the knowledge is stored in some people's brains, and the generation is done by typing and thinking.)

What's great about this is that, if we know enough to build a unit test, we also know enough to build a layer of abstraction. This means that the knowledge needed to generate both is highly coincident, if not identical. Let's call the set of knowledge needed to build the unit tests K(u), and the set of knowledge needed to build the abstraction layer K(a). Now, we'll introduce several contrived variables, and spend the rest of the discussion trying to make naughty words with our symbolic names.

I really should stop doing this late at night... I'm having a little trouble with that focus thing. Sammidge.

Now, for the sake of argument, let's say that the total knowledge about a module is K(u) ∪ K(a) ∪ K(o), where K(o) is other incidental knowledge about the module that isn't covered in the creation of the unit tests or the abstraction layer. I think that the K(o) set will always be empty, but I don't know for sure, so I'll leave it in there for now. Let's call all of this knowledge K(m).

Given some module m, and the specs for that module, K(m), we can therefore generate both an abstraction layer and a unit test for m. This should not be surprising. In fact, it's what software design is all about. However, the good stuff comes next.

It is quite likely, although I'm too lazy to logically prove, that there exists some common medium of representing K(m) that allows a nontrivial portion of the code for m and m's unit tests to be generated automatically. I hypothesize that the complete interface for m can be generated, and almost the complete unit test code for m. Actual implementation of m, and any dependencies of m that must play a role in the unit test, will probably have to be handled manually, but we're still ahead of the game.

We already need a medium to represent K(a) so we can build the abstraction layers automatically. Clearly, if we choose our medium wisely, we can use the same medium to generate quite a bit of code, all from a high level of abstraction. Our whole goal here is to work as abstractly as possible; by doing some mental trickery, we can actually work at a level of abstraction that is more abstract than the level of abstraction that we are trying to create.
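As a toy illustration of sharing that medium, here's a C++ sketch that treats a tiny spec table as K(m) and emits both an interface header and an assert-style test stub from it. Everything in it, from the spec format to the Inventory module, is invented for the example.

#include <cstdio>
#include <string>
#include <vector>

// A crude stand-in for K(m): just enough knowledge to name the module's
// operations and their signatures.
struct FunctionSpec
{
    std::string returnType;
    std::string name;
    std::string parameters;
};

int main()
{
    const std::string module = "Inventory";            // hypothetical module m
    const std::vector<FunctionSpec> spec = {
        { "bool", "AddItem",    "int itemId, int count" },
        { "int",  "CountItems", "int itemId" },
    };

    // Generate the abstraction layer's interface for m.
    if (FILE* header = std::fopen((module + ".generated.h").c_str(), "w"))
    {
        std::fprintf(header, "class %s\n{\npublic:\n", module.c_str());
        for (const FunctionSpec& f : spec)
            std::fprintf(header, "    %s %s(%s);\n",
                         f.returnType.c_str(), f.name.c_str(), f.parameters.c_str());
        std::fprintf(header, "};\n");
        std::fclose(header);
    }

    // Generate a unit test skeleton for m from the very same knowledge.
    if (FILE* test = std::fopen((module + "_tests.generated.cpp").c_str(), "w"))
    {
        std::fprintf(test, "#include \"%s.generated.h\"\n#include <cassert>\n\n",
                     module.c_str());
        std::fprintf(test, "int main()\n{\n    %s subject;\n", module.c_str());
        for (const FunctionSpec& f : spec)
            std::fprintf(test, "    // TODO: exercise %s(%s) and assert on the result\n",
                         f.name.c_str(), f.parameters.c_str());
        std::fprintf(test, "    return 0;\n}\n");
        std::fclose(test);
    }

    return 0;
}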


I think there's probably some more that could be said, but that last bit really cooked my brain. Speaking of cooked brain, I'm really hungry. Grarghhh.


3H-GDC m.IV

Posted 10 December 2005 · 265 views

Well, the submissions are pouring in... here's a quick look at my submission, CartRunner, and what went into making it. Source is included, but be warned, it isn't pretty!

Enjoy.


How to kill yourself in five easy steps

Posted 06 December 2005 · 279 views

1. Obtain a large bottle of A1 Tabasco steak sauce.
2. Obtain a bottle of Tabasco Habanero sauce.
3. Obtain a bottle of Jameson Irish whiskey.
4. Combine. (I'd list proportions, but it was done quite blindly, so who knows. Mix to taste I guess.)
5. Consume. Several slices of fried Spam make a good base.


My digestive tract hates me now... but damn, it was worth it. That was some awesome sauce.


Coding and Abstraction

Posted 05 December 2005 · 332 views

I'm a sucker for advice. Which isn't to say that I'm particularly keen on receiving it, even on the rare occasions when I have the sense to solicit it. I'm much more keen on giving it, and usually on occasions when it isn't solicited. I'm sure this says something highly unflattering about my character, but I'm not asking for your advice on my character. You're here to get some advice from me. You just don't know it yet.

So bear with me for a few minutes while I invent some advice to give you. In the meantime, I'm going to stall.


One of the things that I like to tell people is to code at the highest possible level of abstraction. I tell them this even if they ask me what toppings they should get on their pizza. I also stole it from people who are much smarter than me (but then, isn't advice all about stealing things from smarter and wiser people?).

The basic idea behind this is that good programming and good design centers around abstraction. Bits are an abstraction of electrical voltage levels. Bytes are an abstraction of bits. Integers are an abstraction of bytes. Beating the digital crap out of zombies is an abstraction of (among many other things) integers. I think there's something that's an abstraction of beating the digital crap out of zombies, and thus ad infinitum, but I think that access to those planes is regulated by Zen and/or LSD.

In practical terms, coding at high levels of abstraction means using the right tools for the job. There's a whole design aspect to it that I could spend a lot of time discussing, but if I do that I'll forget what I really was talking about and we'll all leave with our minds full of garbage and feeling vaguely drugged. In terms of pure coding, though, the big players are tools and languages. I'll focus on languages, since the point I originally set out to make with all this drivelling had to do with languages.

Languages are beautiful tools of abstraction. They're a way to make the intent of a block of code clear to the programmer. CPUs only understand opcodes; few programmers, however, understand them (I sure as heck don't!). Assembler is a step up, because we've got nice English-looking letters and such instead of those icky hex numbers. But the intent of assembler code is hardly clear at a glance, unless the code is laced with enough comments to make War and Peace look like a History Channel factoid. The progression continues up through all the usual suspects: C, C++, Smalltalk, Java, BASIC, Python, et al. So-called "high-level" languages are high not in terms of being deeply acquainted with reefer, but in terms of being highly abstract. Python knows about strings intrinsically. Assembler doesn't.
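To keep the point inside one language, here's a trivial C++ comparison of the same operation at two levels of abstraction; the example is mine, not from the post, but it's the same idea in miniature: the higher-level version reads like the intent.

#include <cstring>
#include <iostream>
#include <string>

int main()
{
    // Low abstraction: the machinery is visible, the intent is not.
    char buffer[64];
    std::strcpy(buffer, "Hello, ");
    std::strcat(buffer, "zombies");

    // Higher abstraction: the statement is the intent.
    std::string greeting = std::string("Hello, ") + "zombies";

    std::cout << buffer << '\n' << greeting << '\n';
    return 0;
}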

High abstraction (usually) means easier implementation. There are some exceptions, but they are fairly easy to recognize when they occur, and I'm making this up on the fly, so I'll pretend to have lots of examples but leave the thinking up of said examples as an exercise for the reader. Games have been using scripting for years as a way to exploit abstraction. Writing code in a scripting engine is much more abstract - and therefore more efficient in programmer time - than doing it in raw C++, or assembler, or hooking up a battery to the pins on the CPU and tapping in the signals by hand.

Scripting for abstraction has three benefits. First, and most importantly, it makes intent clear. PoliceShip.StartKillEnemiesAndLand(); is a real-life script command from Egosoft. Isn't that a lot more obvious than three pages of threading code, calls into various AI, collision detection, and 3D rendering libraries, and a handful of housekeeping logic? You don't even have to know KC (the scripting language sampled) to know what that does. The second benefit is that it promotes encapsulation. In KC, there's not a magic function StartKillEnemiesAndLand() that ties directly in to the engine; there's actually a complete game logic system built in the language, and the low-level engine calls are quite a bit more basic and atomic than that. But we never have to worry about them, because they're wrapped in nice simple abstract calls. Entire dramatic battles can be laid out and set into blazing, exploding motion with just a few lines of abstract code. The third benefit of abstraction is that logic is localized to a single place. For instance, if there's a bug in the way one ship does StartKillEnemiesAndLand(), we can fix it once, and all ships will benefit from the fix. The logic for that operation is in one spot, not scattered implicitly across thousands of lines of engine code.
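To show what that encapsulation might look like in code, here is a hypothetical C++ rendering of the idea. The command name comes from the KC example above, but the class, the engine calls, and everything inside them are invented for illustration; KC itself looks nothing like this.

#include <iostream>

// Hypothetical low-level engine calls: atomic, fiddly, and nothing the
// script author should ever have to touch directly.
namespace engine
{
    int  FindNearestEnemy(int shipId)            { return 7; }
    void SetAITarget(int shipId, int targetId)   { std::cout << "AI target set\n"; }
    void EngageWeapons(int shipId)               { std::cout << "weapons hot\n"; }
    void RequestDockingClearance(int shipId)     { std::cout << "docking requested\n"; }
}

// The abstract layer: one call expresses the whole intent, and the logic
// lives in exactly one place, so fixing it once fixes it for every ship.
class PoliceShip
{
public:
    explicit PoliceShip(int id) : id_(id) {}

    void StartKillEnemiesAndLand()
    {
        int target = engine::FindNearestEnemy(id_);
        engine::SetAITarget(id_, target);
        engine::EngageWeapons(id_);
        engine::RequestDockingClearance(id_);
    }

private:
    int id_;
};

int main()
{
    PoliceShip patrol(42);
    patrol.StartKillEnemiesAndLand();   // the intent is obvious at the call site
    return 0;
}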

Scripting is good, but it's usually restricted to a simple (and false) dichotomy: engine vs. scripts. One of the things that I've thought about after reading The Pragmatic Programmer is that this should be a continuum, not a set of discrete layers. Of course it's eventually going to resolve into discrete layers (i.e. several different languages), because we haven't invented continuum languages yet.

At Egosoft, we have one of these dichotomies. There's an engine structure, with all of the modules and libraries and such built in, and there's the game logic layer implemented in KC. The KC layer has its own modules, libraries, and structure. It knows quite a lot about the engine, but that knowledge is constrained to wrapper functions and layers. In fact, KC even implements another scripting engine that is highly abstract. The script engine controls things like AI and various goings-on in the universe. However, it's too abstract; it doesn't provide access to things like the menu system, or the 3D engine. It could, but adding that kind of access is neither easy to build nor easy to use.


This is going somewhere... I think. Bear with me while I stall a bit more and pretend to have a purpose. (I'm really just drooling on my keyboard and seeing how long you'll watch before you give up and go play Ninja Loves Pirate.)


Engines, and scripting logic, have implicit layers of abstraction of their own; this is where design comes into play. For those of us who embrace the holy truth of OOP, we've got things like class hierarchies that let us abstract and encapsulate. A typical design has a lot of "basic worker" classes that exist simply to do specific things, and "logic" code that makes use of the workers to actually do something useful, like make the heads on zombies explode. In a scripted design, a lot (but not all) of this logic will be in the form of scripts, perhaps with additional layers of abstraction on top of that.

However, most of these layers of abstraction are split between a very small number of languages. The largest I've seen is four, on X³ (X² also used a similar model): assembler, C++, KC, and the scripting engine. There are implicit layers in each language, even though each layer needs only a specific subset of the language's capability. Specifically, layer of abstraction N needs only the ability to talk to layer (N-1), and the ability to expose functionality to layer (N+1) if needed.

I have a vague feeling that this can be exploited. For instance, instead of writing layers of abstraction in the same language, why not make a simple language framework, and build each layer in a separate "dialect"? Stuff like template metaprogramming in C++ comes close to this, but is still constrained to a single dialect. What I'm thinking is more along the lines of having a "language template" where the basic control structures and syntax are specified, but the available entities are generated dynamically from the lower layer. Basically, you could have an engine layer in C++ (or whatever) that does all of your "do stuff" code, and then a scripting framework engine. We'll call the "do stuff engine" layer 0. Layer 1 can use some kind of info about layer 0 (an automatically generated map of the classes, maybe?) to build a scripting dialect that the script engine can interpret. Then, layer 1 can build up some abstractions and "do stuff" layers of its own, and expose a dialect that can be spoken up in layer 2. Repeat this as much as you need.
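Here is a very rough sketch of the layering idea, under heavy assumptions: a "dialect" is nothing more than a map of named commands, and each layer's vocabulary is composed strictly out of the layer below it. All the command names are made up; a real version would generate these tables from reflection data or class maps rather than writing them by hand.

#include <functional>
#include <iostream>
#include <map>
#include <string>

// A "dialect" is just the vocabulary one layer exposes to the layer above it.
using Dialect = std::map<std::string, std::function<void()>>;

int main()
{
    // Layer 0: the "do stuff" engine primitives (hypothetical).
    Dialect layer0;
    layer0["move_ship"]   = [] { std::cout << "engine: moving ship\n"; };
    layer0["fire_lasers"] = [] { std::cout << "engine: firing lasers\n"; };
    layer0["dock"]        = [] { std::cout << "engine: docking\n"; };

    // Layer 1: built only from layer 0's vocabulary; it physically cannot
    // reach anything the lower dialect doesn't expose.
    Dialect layer1;
    layer1["attack_run"] = [&layer0] {
        layer0.at("move_ship")();
        layer0.at("fire_lasers")();
    };
    layer1["return_to_base"] = [&layer0] { layer0.at("dock")(); };

    // Layer 2: the most abstract dialect, composed from layer 1.
    Dialect layer2;
    layer2["patrol_and_land"] = [&layer1] {
        layer1.at("attack_run")();
        layer1.at("return_to_base")();
    };

    // A "script" at the top speaks only the top dialect.
    layer2.at("patrol_and_land")();
    return 0;
}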

The cost? It'd take a lot of up-front work to build such a system, and it would have to be done from scratch. The benefits? Many. Firstly, you get all of your layers as dialects of the same language. One of the things that bugs the crap out of me with Egosoft's method is that each layer is a different language entirely; I don't know the highest level of the scripting system, but I know the lower three. That seems backwards to me. I should be able to work at the highest possible level of abstraction - always.

The second benefit is localization of knowledge. Having discrete layers promotes encapsulation, and demands a good design. To wit, it ensures that each layer does precisely what it should - no more, no less. If it tries to do more, it will fail, because each layer's dialect doesn't have the vocabulary to do it. If it tries to do less, the system won't run - it may not even compile.

The real bottom line, though, is that each level of abstraction is automatically the right one. Each level is built on the knowledge of the level below it, and the dialect of the scripting language at that level does precisely what it needs to do. Each layer is therefore the optimal layer to do the work of that layer. You don't have to worry about whether or not Language Foo is the Right Tool For The Job; you fabricate the right tool.


I have doubts. I'm not sure if this is really practical in a large-scale project. I have a very clear idea of how I'd do it (down to building the script engine itself and the layer-generation mechanisms) but I'm not really sure how I'd use it in a real-world system. I think it might look different in practical use than in theory; there might be some automatic generation that creates "scripting" that actually is compiled C++ for performance reasons, while non-performance-critical stuff can be done in bytecode compiled languages or even interpreted languages. The cool thing is, if the scripting dialect generator is built right, it should be able to make a dialect that can target any of those endpoints. This means that the same scripting language, syntax, and philosophy can be "compiled" to C++, Foobletch, bytecode, or even straight interpreted. It could even change "compile targets" dynamically; does Layer N not run fast enough interpreted? Drop it down a layer and bytecode compile it. One extra step of preprocessing before your build is done, sure, but if you have a good automated build system that just means you can read one more post on GDNet per build than before. Bytecode not doing the job? Compile it straight into your engine by generating C++ code from the script on the fly.

I think I'll give it a shot with the Habanero engine. I've already sneakily built the basic layers so that they can be transported to other projects trivially. If this multiple-layer scheme pays off, it could usher in a whole new level of reusable code in my own work. That would be cool.


Now I know you've sat through this whole thing, eagerly awaiting the bit of advice that I promised you at the beginning. Well, I don't believe you. I think you just skipped to the end to get the juicy advice, and didn't mess with all that scary-looking nonsense up there. Well I'll show you: no advice! Hah!


The Habanero Chronicles: Phase 2

Posted 28 November 2005 · 271 views

Phase 2: Engine Framework
After the prototyping done in Phase 1, it's time to take the rough, dirty, and basically entirely stolen code and turn it into a framework that can be used for the future. This means it needs to be extensible, flexible, cleanly designed, and capable of handling all of the configuration and power that we need from the engine (stuff like screen resolution changes, et al.).

The core framework is currently split into sections for the Win32-specific logic (message pumps and all that), a generic framework for building dynamic application loops, and some various utility classes. There is also the interesting bit, the rendering system. The renderer accesses a DirectX wrapper which then takes care of putting the pretties on the screen.

The most interesting feature so far is the dynamic application loop system, which is an idea I had on the spur of the moment today and decided to explore. Basically, instead of a giant GameLoop() function that calls all of the other systems (renderer, game logic, AI, sound effects, netcode, etc.) there is a single dispatch system that uses generic "callback" objects to trigger specific pieces of code. This dispatch system is currently built to allow multithreading, although I'm not really planning on making use of it, at least not yet.

The dispatchers are basically the while(I_am_still_running) loops for the program; each time the dispatcher loops, it goes through a list of registered callbacks and triggers each one. My plan is to set this up as a std::map that automatically orders the callbacks by priority. Callbacks can be added or removed at any time.

The upshot of this is that the game logic, and what executes, is fully configurable. For instance, I could set up the Renderer and UserInput callbacks at the beginning of the program (as they are now) and not remove them at all. Then, I can add in a MainMenuLogic callback that runs, then is removed and replaced with a PlayGameLogic callback, or a ShowCredits callback. The entire system is configurable and dynamic, and as a (very hefty) bonus, the actual game loop code doesn't have to know a darn thing about what each callback does. I'm seeing a lot of potential in this system and I'm getting excited to start playing with it and see what can be accomplished.
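For flavor, here's a stripped-down sketch of what such a dispatcher might look like; it uses a std::multimap so several callbacks can share a priority, and the class and callback names are placeholders rather than the actual Habanero code.

#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <utility>

// The while(I_am_still_running) loop. Each pass walks the registered
// callbacks in priority order and triggers them; the loop itself knows
// nothing about what any callback actually does.
class Dispatcher
{
public:
    using Callback = std::function<void()>;

    void Register(int priority, const std::string& name, Callback callback)
    {
        callbacks_.insert({ priority, { name, std::move(callback) } });
    }

    void Remove(const std::string& name)
    {
        for (auto it = callbacks_.begin(); it != callbacks_.end(); )
        {
            if (it->second.first == name)
                it = callbacks_.erase(it);
            else
                ++it;
        }
    }

    void RunOnce() const
    {
        for (const auto& entry : callbacks_)
            entry.second.second();
    }

private:
    std::multimap<int, std::pair<std::string, Callback>> callbacks_;
};

int main()
{
    Dispatcher loop;
    loop.Register(0, "UserInput",     [] { std::cout << "polling input\n"; });
    loop.Register(9, "Renderer",      [] { std::cout << "drawing frame\n"; });
    loop.Register(5, "MainMenuLogic", [] { std::cout << "running main menu\n"; });

    loop.RunOnce();                      // input -> main menu -> renderer

    loop.Remove("MainMenuLogic");        // swap game states at runtime
    loop.Register(5, "PlayGameLogic", [] { std::cout << "running game logic\n"; });
    loop.RunOnce();
    return 0;
}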

At this point, the engine consists of a paltry 11 classes and 2 POD structures. It is a valid and fully functional Win32 application, as well as a more-or-less operational Direct3D client. It also has a basic rendering loop that draws a little habanero pepper moving around in circles on the screen:

Phase 2 has been conquered

I hereby declare Habanero: Phase 2 conquered. Phase 3 will be to build a scene and camera system for the renderer that lets me set up batches of pretties to draw on the screen, handles shaders, and all of that good stuff. The core framework will be enough to consider Phase 3 complete, as I'm absolutely certain that the actual rendering system will be getting tweaked, improved, and revised up until we kick this thing out the door.


The Habanero Chronicles: Phase 1

Posted 28 November 2005 · 220 views

Introduction
This is the story of a hobby game, built by an industry programmer (that'd be me) and co-designed by an avid gamer (that'd be my flatmate). This project is primarily for our own entertainment, although we hope to produce some tools and resources (and maybe even a finished game) that others can enjoy as well. I plan to make most or all of the source and raw assets of the game freely available once we are finished.

During the development and production of the core game engine (currently code-named Habanero) as well as the final game itself, I'll be recording our devious exploits and grand adventures here for everyone to ignore and not comment on... read and enjoy immensely. Most of this will be my own twisted mental dumping ground, so proceed with caution!


The Plan
I generally refuse to plan this one out until it becomes absolutely impossible to avoid. Since this is very much a spare-time project, and my spare time is not exactly copious, a rigorous plan is more or less guaranteed to be abandoned and burned as heresy within a few weeks. Everything will be played by ear here, at least until we get closer to the uh-oh-we-really-have-to-figure-out-what-we're-doing stage.

To this end, every time a noteworthy something happens, I'll log it here as a completed "phase." I expect to have quite a few phases before all is said and done.

The engine will have an isometric, fixed-camera perspective. I'd like to build much of the rendering pipeline to take advantage of this, although I'll make an attempt at a reusable framework that can be adapted to other view styles in the future. We'll see how well that works out in practice. I've chosen to target DirectX 9, primarily because DirectX is where my familiarity lies, and because I have some great reference code to compare to from Egosoft if I need inspiration. Win32 will of course be the target of choice; portability will be of very little, if any, concern.

Aside from that, very little is decided. I'm currently leaning towards using billboarded sprites on a 3D heightmapped landscape system, although we've been kicking around some game concepts that may lend themselves better to other approaches. A lot of those details will start falling into place as we get closer to needing to have them decided.


Phase 1: Initial Prototyping
This is one of the fastest-moving parts of software development, at least in the style that I use. Essentially, the concept is to accomplish a stripped-down, simple version of our final goal: to render isometric landscapes with D3D9. The stripped-down, simple version of this is to render a quad with D3D9. Our first prototype is actually some line-for-line thievery from a handful of DirectX articles and samples.

This thieving worked out fairly poorly in practice; some mix-and-match problems occurred, the inevitable variable name differences caused some headaches, and once the framework was running, the texture was screwed up. Being lazy, I spent a minimal amount of time trying to diagnose the texture problem, and ended up just dropping some sample code into my project wholesale.

At this point, we've got a simple, naive little app that opens a window at 800x600, draws a single textured quad, and then waits for the window to be closed. I'd post a screenshot, but it's really not exactly impressive.


Next up is Phase 2, where the stolen sample is totally rewritten into a more permanent and useful engine framework. This process has actually already started and is moving along nicely, although it's been going slower than normal as I've spent a lot of time explaining the design process to my flatmate. Currently, I've just about finished the window management subsystem, and most of the debug console framework (which I'm working on making generic so that a Windows console window, a networked debug console, and a graphical pull-down console in-game can all be interchanged easily, even at runtime). Once that is in place, the next big project will be to take apart the DirectX code and re-wrap it into the framework that the engine will use in the future. I'll consider Phase 2 complete when the engine is factored and modularized nicely, and is back to a compiling/running state.
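As a sketch of how those interchangeable consoles might hang together (the interface and class names are guesses, not the real Habanero code): one abstract console interface, several backends, and the rest of the engine never caring which one is plugged in.

#include <cstdio>
#include <memory>
#include <string>
#include <utility>

// One interface, several backends; the rest of the engine only ever sees
// IDebugConsole, so the backend can be swapped even at runtime.
class IDebugConsole
{
public:
    virtual ~IDebugConsole() = default;
    virtual void WriteLine(const std::string& text) = 0;
};

// Stands in for the Windows console window backend.
class StdoutConsole : public IDebugConsole
{
public:
    void WriteLine(const std::string& text) override
    {
        std::printf("[console] %s\n", text.c_str());
    }
};

// Stands in for a networked or graphical in-game console; here it just
// appends to a file.
class FileConsole : public IDebugConsole
{
public:
    explicit FileConsole(std::string path) : path_(std::move(path)) {}

    void WriteLine(const std::string& text) override
    {
        if (FILE* f = std::fopen(path_.c_str(), "a"))
        {
            std::fprintf(f, "%s\n", text.c_str());
            std::fclose(f);
        }
    }

private:
    std::string path_;
};

int main()
{
    std::unique_ptr<IDebugConsole> console = std::make_unique<StdoutConsole>();
    console->WriteLine("engine initialized");

    // Swap backends mid-run without touching any calling code.
    console = std::make_unique<FileConsole>("debug.log");
    console->WriteLine("switched to file logging");
    return 0;
}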





