
## Improved Visuals, Physics, and Camera [Screenshot Heavy]

I've made a lot of progress since my last post, mainly in the renderer. I added High Dynamic Range lighting, Cascaded Shadow Mapping, FXAA, and the Cook-Torrance BRDF with roughness maps. The previous screenshots were an eyesore, so it's a relief to have something that looks a little more modern.

I'm still dinking around with the hacked-up level, though. My hope is to write a plugin for Maya, which would allow me to use Maya as my level editor and export geometry/textures/metadata to a custom format. That way, I could designate custom Maya nodes for things like triggers, and use layering for exporting collision geometry. I haven't quite surveyed how difficult this will be, but I've written Maya plugins before and they're not too difficult.

To support these effects, I had to add a post-processing framework on top of my deferred renderer that manages render target allocation and provides standard texture samplers and constants. I shied away from a nice compartmentalized process chain and went for a set of functions within the post-processing object itself. That probably isn't the best way to do it, but it has the benefit of allowing shared/merged shader code where it makes sense.

Here's a short list explaining each of the visual improvements I added:

• The shadows are done by rendering a depth pass into an atlas of cascades. The light matrix for each is computed and stored in a constant buffer. Later, during the lighting pass of the deferred renderer, the shadow map is referenced. The implementation runs a PCF kernel over the shadow evaluation for each fragment, for both of the cascades that it straddles. The results are then linearly interpolated to give a nice smooth border between cascades. Refer to MJP's blog for more info on the algorithm.
• For HDR, I tried a bunch of different tone mapping filters and couldn't get it to look right. When I switched to the Uncharted 2 method, however, it just clicked. I really like how it preserves contrast and avoids the ugly middle gray tone.
• FXAA was easy to drop in. I can see how it's a hack though. I'm still noticing tons of temporal aliasing (obviously, since the algorithm is purely spatial), as well as some aliasing within textures. Given what it has to work with, though, I think it does a decent job. I'm really interested in seeing how TXAA looks/performs.
• My goodness, if you have never used a physically based BRDF like Cook-Torrance, and are still in the dark ages of Blinn-Phong for everything, please give it a try. There is such a difference. Cook-Torrance is a microfacet based BRDF that uses analytical approximations based on probability to compute self-occlusion within a surface, as well as proper Fresnel reflection. There are other physically based models that I'm sure are great too--I'm still learning about them--but Cook-Torrance is impressive.
• I also dropped in a procedural sky that I found in one of MJP's samples. The annoying thing about having a pure black background with HDR is that it gets interpreted as absolute deep space black, causing everything in the frame to overexpose.
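As a rough illustration of the cascade blending step from the first bullet, here's a CPU-side C++ sketch. The split depths, blend band width, and names are all made up for illustration; in the engine this logic lives in the lighting shader, and the per-cascade shadow terms come from PCF kernels over the atlas.

```cpp
#include <algorithm>
#include <cmath>
#include <cassert>

// Which two cascades a fragment straddles, and how far into the blend band it is.
struct CascadeChoice { int near_idx; int far_idx; float t; };

// viewDepth: fragment depth in view space; splits: far plane of each cascade.
CascadeChoice PickCascades(float viewDepth, const float* splits, int numCascades,
                           float blendBand)
{
    int idx = 0;
    while (idx < numCascades - 1 && viewDepth > splits[idx])
        ++idx;

    // Only blend inside a narrow band near the end of the current cascade.
    float bandStart = splits[idx] - blendBand;
    float t = 0.0f;
    int farIdx = idx;
    if (idx < numCascades - 1 && viewDepth > bandStart) {
        t = (viewDepth - bandStart) / blendBand;   // 0 at band start, 1 at the split
        farIdx = idx + 1;
    }
    return { idx, farIdx, std::min(std::max(t, 0.0f), 1.0f) };
}

// Each shadow term would come from a PCF kernel; here they're just lerped.
float BlendShadow(float shadowNear, float shadowFar, float t)
{
    return shadowNear + (shadowFar - shadowNear) * t;
}
```

A fragment sitting in the middle of a cascade gets `t = 0` and only one shadow lookup matters; a fragment near a split blends both, giving the smooth border described above.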
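For reference, the Uncharted 2 filmic curve mentioned in the HDR bullet is small enough to show in full. The constants are the ones John Hable published; the exposure and white point defaults below are arbitrary values of my choosing, not the engine's settings.

```cpp
#include <cmath>
#include <cassert>

// Hable's filmic curve from the Uncharted 2 tone mapping talk.
static float HableCurve(float x)
{
    const float A = 0.15f, B = 0.50f, C = 0.10f,
                D = 0.20f, E = 0.02f, F = 0.30f;
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

// Tone map a linear HDR value, normalizing by the mapped white point so that
// an input of `white` maps to 1.0.
float Uncharted2Tonemap(float hdr, float exposure = 2.0f, float white = 11.2f)
{
    return HableCurve(hdr * exposure) / HableCurve(white);
}
```

The curve has a toe near black and a shoulder near white, which is what preserves contrast and avoids the washed-out middle gray that simpler operators produce.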
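And here's a minimal scalar sketch of the Cook-Torrance specular term as I understand it: a Beckmann normal distribution, the classic geometric attenuation, and Schlick's Fresnel approximation, combined as D*G*F / (4 * NdotL * NdotV). The real shader version works on vectors; the parameter names here are mine.

```cpp
#include <cmath>
#include <algorithm>
#include <cassert>

// Beckmann microfacet distribution; roughness is the RMS slope m.
float BeckmannD(float NdotH, float roughness)
{
    float m2 = roughness * roughness;
    float c2 = NdotH * NdotH;
    return std::exp((c2 - 1.0f) / (m2 * c2)) / (3.14159265f * m2 * c2 * c2);
}

// Classic Cook-Torrance geometric attenuation (masking/shadowing).
float GeometricG(float NdotH, float NdotV, float NdotL, float VdotH)
{
    float g1 = 2.0f * NdotH * NdotV / VdotH;
    float g2 = 2.0f * NdotH * NdotL / VdotH;
    return std::min(1.0f, std::min(g1, g2));
}

// Schlick's approximation to the Fresnel reflectance.
float FresnelSchlick(float F0, float VdotH)
{
    return F0 + (1.0f - F0) * std::pow(1.0f - VdotH, 5.0f);
}

float CookTorranceSpecular(float NdotL, float NdotV, float NdotH, float VdotH,
                           float roughness, float F0)
{
    if (NdotL <= 0.0f || NdotV <= 0.0f) return 0.0f;
    float D = BeckmannD(NdotH, roughness);
    float G = GeometricG(NdotH, NdotV, NdotL, VdotH);
    float F = FresnelSchlick(F0, VdotH);
    return D * G * F / (4.0f * NdotL * NdotV);
}
```

The roughness map simply feeds the `roughness` parameter per texel, which is what makes surfaces go from mirror-tight highlights to broad matte ones.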

In terms of the tank, I'm still having to work with a hacked-up distance joint chain to keep the wheels in sync with each other. Basically, for every pair of neighboring wheels on a given side, I link them at the 0 degree and 90 degree marks, like so:

That way, there's always a (near) tangential torque applied to the wheel, avoiding lockups or instability. I found out today that you can jack up the iteration count for the solver on a constraint-by-constraint basis. Since the distance constraints are pretty cheap, I set a high value and didn't notice a performance hit. This tightened up the simulation quite a bit, which is a relief. I did the same thing for the tank barrel, since the forces rotating it to face the camera were causing it to jumble around loosely. Not exactly tank-like behavior if you ask me.
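To make the wheel-linking scheme concrete, here's a toy 2D sketch of where the two constraint anchors land on a pair of neighboring wheels. The Vec2 type and helper names are hypothetical; in the engine these points would feed the actual distance constraints in the physics library.

```cpp
#include <cmath>
#include <cassert>

struct Vec2 { float x, y; };

// Point on a wheel's rim at a given phase angle.
Vec2 RimAnchor(Vec2 center, float radius, float angleRad)
{
    return { center.x + radius * std::cos(angleRad),
             center.y + radius * std::sin(angleRad) };
}

// Both wheels get anchors at matching phases: one pair at 0 degrees and one at
// 90 degrees. Whichever pair is near-tangential at the moment carries the
// torque, so the rims rotate in lockstep without the chain locking up.
void LinkWheelPair(Vec2 cA, Vec2 cB, float radius,
                   Vec2& a0, Vec2& b0, Vec2& a90, Vec2& b90)
{
    const float kHalfPi = 1.57079633f;
    a0  = RimAnchor(cA, radius, 0.0f);
    b0  = RimAnchor(cB, radius, 0.0f);
    a90 = RimAnchor(cA, radius, kHalfPi);
    b90 = RimAnchor(cB, radius, kHalfPi);
}
```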

I also implemented an over-the-shoulder third person camera. You can orbit around the tank, within reasonably constrained angles, and the turret/barrel will follow you. I got to do lots of fun 3D math calculating the angles for the turret and barrel within their respective local frames. Currently the camera doesn't do any ray casting to detect/collide with objects, but that's on the radar. That's going to be a bit of a tough issue, since camera collision can be a nightmare for the player, especially if it's contorting itself into corners of the level rather than just fading out objects. I'll have to look into how other commercial games are handling it.
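For what it's worth, the orbit part of the camera boils down to a little spherical-coordinate math. This is a simplified sketch with made-up pitch limits, not the engine's actual camera code; the turret/barrel angles are then derived from the camera's look direction transformed into their local frames.

```cpp
#include <cmath>
#include <algorithm>
#include <cassert>

struct Vec3 { float x, y, z; };

// Camera position on a sphere of the given radius around the target, with the
// pitch clamped to an arbitrary constrained range.
Vec3 OrbitCameraPosition(Vec3 target, float yaw, float pitch, float distance)
{
    const float kMinPitch = -0.4f, kMaxPitch = 1.2f;   // illustrative limits
    pitch = std::min(std::max(pitch, kMinPitch), kMaxPitch);
    float cp = std::cos(pitch);
    return { target.x + distance * cp * std::sin(yaw),
             target.y + distance * std::sin(pitch),
             target.z - distance * cp * std::cos(yaw) };
}
```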

I'm finally understanding how to use matrices to transform between spaces, and perform projections via the dot product. I bought a book the other day on how commercial physics engines work, and I'm slated to take Calc III and Differential Equations this coming school year. I'm actually excited to learn all this stuff! It's been frustrating looking into the bowels of Bullet Physics and not having a clue how the solvers work. Hopefully by the end of the year I'll understand it much better.

I believe the next task I will conquer is getting the tank to fire something! I'll probably release another demo sometime around then. Until then, lots of screenshots!

## [DX11] Please Test My First Demo!

I put together a simple demo. Pardon the nasty textures and the level made from scaled cubes; they don't represent the final planned look of the game at all. A friend of mine is working on a tank model, but for now the cube and cylinder placeholders are what I've got.

I've been focusing a lot on trying to get the tank movement to work decently well. I went for a constraint-based tank rather than a ray-casting one. This basically means that the wheels are real rigid bodies held in place by a 6DOF spring constraint. The motor is applied directly to those constraints. The nice thing is that this is more physically correct than a ray-cast vehicle. I did try the ray casting vehicle functionality in Bullet, but I wasn't impressed. Unfortunately, this also means it's a pain to fine-tune things to act realistically (or realistically enough--I want to retain some of the arcade feel). Specifically, the tank is moved by the wheels--not by the treads--requiring a much higher level of friction. And because the cylinders are infinitely rigid bodies, they don't behave like you would expect from a tire; the most you can get in terms of contact is a line. Luckily for me, I'm not going for a tire model with realistic drifting behavior. I think it's working pretty well for the moment--at least on my machine.

At any rate, I'd love to hear what you guys think, and also if it even runs on your machines. I've only tested it on mine at the moment. The engine is DirectX 11 based, but currently I'm ~~using Shader Model 4 with the option to use the DirectX 10 feature level~~ (NOTE: The demo uses a read-only depth stencil view, which requires feature level 11). The controls are outlined in the readme file.

Let me know if you get any weird dialog messages or glitches.

Thanks!

## Tank Physics and Other Progress

So a lot of life-related stuff happened this Spring--most of it good, but all of it quite mentally taxing. Consequently, I got sidetracked and took a break from working on my new project. Recently, however, my interest in it is back with a vengeance. The truth is I can't stay away from game development for long; it's my life addiction, for better or for worse.

At any rate, I have made a satisfying amount of progress just within the past few weeks. Bullet Physics is now integrated into the engine, and I have made significant improvements to the entity/component system. Right now I basically have a class called EntityFactory that lets you register new IEntitySpawner-derived classes, each of which overrides a single method, SpawnEntity. This will allow me to organize prefabs, since an entity is essentially a giant bucket of random components. So far, things have been working quite nicely. I have deviated from the Artemis style by allowing my entities to have lists of components, as opposed to a single component of each type. This helps tremendously when your entity is an articulated figure.

Before I say more, check out my first demo video! I'm not sure why the framerate is choppy; it's a smooth 60fps with vsync on my machine.


And if you look closely, you'll notice that the tank flips over and does some ninja-like action with the turret barrel to pop himself back up. Booya! I plan to render the treads without simulating them, which requires some sort of gear-like rotation constraint between each wheel on a track. Unfortunately, Bullet has no such constraint, so I will need to write one myself eventually (unless one of you guys has one!). For the moment, I'm using a more involved method of two distance constraints per wheel pair along the track (just the immediate neighbors). They are set 90 degrees apart to keep the wheels from accidentally switching directions and locking up the treads. It works quite nicely, but it's probably more computation than is necessary. Right now there are 16 distance constraints just to keep the tank wheels in lockstep. I'll need to buff up on my rigid body dynamics before I attempt a better method. If anyone has any suggestions, I'd love to hear them.
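For the curious, the behavior a 1:1 gear constraint would enforce can be sketched at the velocity level like this. This is emphatically not how Bullet's solver works internally--it's just an illustration of the kind of equal-and-opposite correction a custom constraint would apply each step to keep two wheels spinning together.

```cpp
#include <cassert>

// Nudge two angular velocities toward their average with equal and opposite
// corrections. stiffness = 1 removes the relative spin in one step; smaller
// values spread the correction over several steps.
void MatchAngularVelocity(float& omegaA, float& omegaB, float stiffness)
{
    float diff = omegaB - omegaA;          // relative spin we want to remove
    float correction = 0.5f * stiffness * diff;
    omegaA += correction;                  // equal and opposite impulses
    omegaB -= correction;
}
```

A real constraint would also account for the wheels' inertias and feed the impulse through the solver, but the target--zero relative angular velocity--is the same.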

The tank is basically a bucket of components that are linked together as needed. For instance, one transform is shared between each rigid body/renderable pair, and each physics constraint requires at most two rigid body components. Right now I have three entity systems working behind the scenes: one for submitting renderables, one for simulating physics, and one for tank handling. Each system receives an event whenever an entity is created or a component is added. In turn, they keep a local registry of all the entities that they care about (i.e., they have a component that the system is flagged to watch for). The physics system watches for constraints and rigid bodies. The rendering system watches for Models (currently the only renderable type). Lastly, the player system watches for the PlayerController component.

This last component is interesting because it helps turn a pile of intertwined components into a tank. The player controller contains info necessary to drive the tank, not user input. For instance, the tank needs a direction it should head in, a point to aim at, a request to fire, etc. This component is essentially agnostic of whether the user is driving or an AI, which is the point. Moving on, the player system looks for this component, and based on the type of prefab, it can pluck necessary components from the entity to control it. More specifically, the constraint motors are primarily what gives you control over the tank object. Basically, the player system is like a control console for the tank that operates all the machinery in the background.

I haven't figured out the best way yet to feed player input into this system. Most likely the input controller will operate elsewhere in the game object and fire events to this system. An entity could be tagged as the player, and the input applied to the player's tank. The current input scheme is a quick hack to get the demo up and running.

Finally, the concept art you see on the top left of this post is my first attempt at digital drawing. A coworker was giving away his old professional Wacom tablet, and I claimed it! Although my artistic abilities are somewhat lacking, you can see a rough idea of the feel I'm going for in the game. I don't want to give away too much right now, because it's all still very early in development. I just thought I'd throw that out there as a sort of teaser.

## C++ Port of Artemis Entity Component System Framework (In Progress)

This month is officially "wrap my head around Entity/Component Systems" month. I stepped away from coding for a bit and spent time researching different designs for handling game objects. Most interestingly, my search brought me to the Artemis Entity System Framework. Written in Java, the system is the best overall design I have seen so far (that's not saying much)--even if it is written in Java...just kidding. At any rate, I really liked what I saw so I used it as a reference for my system written in C++. In a way, it's really more like a port, without the reliance on RTTI (or whatever that class identification business is in Java). The architecture is virtually the same, but I took the liberty of writing things a bit differently to reflect the language differences.

Initially I considered using a std::map to relate entities to their component lists, but I really liked that Artemis uses a vector to store all the entities. An O(1) lookup time is much better than O(log n) when you're dealing with several lookups per entity per frame. Like Artemis, I opted to use a separate vector of removed entity indices. Whenever an entity is added, the system checks that vector first to see if there is a hole in the main vector. This reduces fragmentation and covers the case where the user removes several entities before adding one.
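Here's a stripped-down sketch of that storage scheme; the Entity struct is a stand-in for the real class, and the names are mine.

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

struct Entity { int id; bool alive; };

// Entities live in one vector for O(1) lookup by index; removed slots are
// remembered in a free list so the next add fills a hole instead of growing.
class EntityStore {
public:
    std::size_t Add() {
        if (!freeSlots.empty()) {
            std::size_t idx = freeSlots.back();   // reuse a hole first
            freeSlots.pop_back();
            entities[idx].alive = true;
            return idx;
        }
        entities.push_back({ nextId++, true });
        return entities.size() - 1;
    }
    void Remove(std::size_t idx) {
        entities[idx].alive = false;
        freeSlots.push_back(idx);                 // remember the hole
    }
    std::size_t Size() const { return entities.size(); }
private:
    std::vector<Entity> entities;
    std::vector<std::size_t> freeSlots;
    int nextId = 0;
};
```

Removing an entity and then adding one reuses the freed slot, so the main vector stays dense even under churn.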

Unlike Artemis, I used an event system to decouple the entity manager and entity systems. The entity manager fires an event whenever a component is added, allowing the entity systems to check that the entity still has all of the required components. I figured that my game will use an event manager, so it made sense to integrate one now. Also, other subsystems could be notified whenever an entity is created/destroyed or a component is inserted/removed from a specific entity. The entity systems themselves contain a std::set< Entity* > data structure that holds all of the active entities. When it receives one of the previously mentioned events, it can check the entity to see if it should be added to or removed from the set.
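A minimal sketch of that interest check might look like the following. Each component type owns a bit, and a system stores a mask of required bits; the type names are illustrative, not the actual framework classes.

```cpp
#include <set>
#include <cassert>

typedef unsigned int TypeBits;

struct Entity { TypeBits componentBits = 0; };

// A system keeps a set of entities whose component bits cover its mask.
class EntitySystem {
public:
    explicit EntitySystem(TypeBits required) : required(required) {}

    // Called from the component added/removed event handlers: re-evaluate
    // whether this entity still matches and update the active set.
    void OnEntityChanged(Entity* e) {
        bool matches = (e->componentBits & required) == required;
        if (matches) active.insert(e);
        else         active.erase(e);
    }
    bool IsActive(Entity* e) const { return active.count(e) != 0; }
private:
    TypeBits required;
    std::set<Entity*> active;
};
```

The same handler serves both the insert and remove events, since it just re-tests the mask and moves the entity in or out of the set accordingly.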

Here's what I have so far in my main.cpp test program:

```cpp
// Create the event manager and entity manager
evtMgr = new SGF::EventManager();
entMgr = new SGF::EntityManager( evtMgr );
test = new SGF::TestSystem( evtMgr, entMgr );

// Create an entity
SGF::Entity *e = entMgr->CreateEntity();
SGF::Transform *trans = new SGF::Transform();
entMgr->InsertComponent( e, trans );

while( !wnd.HasQuit() )
{
    wnd.PumpMessages();

    ID3D11DeviceContext *dc = graphics->GetImmDeviceContext();
    IDXGISwapChain *sc = wnd.GetSwapChain();

    wnd.SetAsRenderTarget( dc );
    dc->ClearDepthStencilView( wnd.GetDSView(), D3D11_CLEAR_DEPTH, 1.0f, 0 );
    dc->ClearRenderTargetView( wnd.GetRTView(), D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f) );
    sc->Present( 1, 0 );

    // This loops through any entities that match the bit flags for the
    // component type. In this case, all it wants is a Transform component.
    test->Process();
}

entMgr->DestroyEntity( e );
delete test;
delete entMgr;
delete evtMgr;
```

Here's the source for my TestSystem class:

```cpp
// TestSystem.h ///////////////////////////////////////////////////////////////

#ifndef __TEST_SYSTEM_H__
#define __TEST_SYSTEM_H__

#include "EntityProcessingSystem.h"

namespace SGF
{
    class TestSystem : public EntityProcessingSystem
    {
    public:
        TestSystem( EventManager *eventManager, EntityManager *entityManager );
        virtual ~TestSystem();

    protected:
        static unsigned int TypeBits;

        virtual void ProcessEntity( EntityManager *manager, Entity *e );
    };
};

#endif

// TestSystem.cpp /////////////////////////////////////////////////////////////

#include "TestSystem.h"
#include "EntityManager.h"
#include "ComponentMapper.h"
#include "Transform.h"
#include <cassert>

using namespace SGF;

unsigned int TestSystem::TypeBits = (1 << CT_TRANSFORM);

TestSystem::TestSystem( EventManager *eventManager, EntityManager *entityManager )
    : EntityProcessingSystem( eventManager, entityManager, TypeBits )
{
}

TestSystem::~TestSystem() {}

void TestSystem::ProcessEntity( EntityManager *manager, Entity *e )
{
    // Allows you to map the list as a specific component type.
    ComponentMapper tmap = manager->GetComponentList( e, CT_TRANSFORM );

    // We can index a specific component of a specific type.
    assert( 0 && tmap[0]->mat._11 );
}
```

If you look closely at the last line, you'll notice that the component system supports multiple components per type. This is another difference from the Artemis framework, which only allows one per type. The reason I chose to diverge from their design in that respect is because I can think of at least a few key cases where it would be really nice to have multiple components per type. For instance, if I have an articulate vehicle entity, I might have the chassis connected to the wheels. With this system, I could have four joint components--each one connected to a wheel.
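To illustrate the multiple-components-per-type idea, here's a toy version of the per-entity component lists. It only loosely mirrors the real ComponentMapper; the names and the map-based storage are my own simplifications.

```cpp
#include <map>
#include <vector>
#include <cstddef>
#include <cassert>

struct Component { int typeId; };

// Each entity maps a component type id to a list of components of that type,
// so a vehicle can hold, say, four joint components at once and index them.
class ComponentLists {
public:
    void Insert(int typeId, Component* c) { lists[typeId].push_back(c); }
    std::size_t Count(int typeId) const {
        auto it = lists.find(typeId);
        return it == lists.end() ? 0 : it->second.size();
    }
    Component* Get(int typeId, std::size_t i) const {
        return lists.at(typeId)[i];
    }
private:
    std::map<int, std::vector<Component*>> lists;
};
```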

At any rate, if anyone is interested in the code, I'd be happy to provide it. It's essentially just a port of Artemis with a couple of nuanced changes and additions.

EDIT: I got sidetracked and I haven't made much progress since this post. I decided I would just post what I have: http://cse.taylor.edu/~zbethel/ArtemisPortCpp.zip

If you have any questions, feel free to ask.

## Some Thoughts on Software Design

Software Engineering as I understand it involves the concept of software design. Anyone can program, and I would even venture to say that programming is the easy part. Good software design is where things get hairy. I consider myself a seasoned programmer, but a somewhat fledgling Software Engineer--a state that I think is fairly commonplace among academic or hobbyist developers. Bad habits formed out of ignorance can carry over into the industry and wreak havoc on bigger projects. I too struggle when it comes to designing and implementing larger systems (like a game engine), although I am learning a lot. Though I am still inexperienced, I thought I might share some of my realizations so far. If you consider yourself a seasoned software engineer, feel free to comment/correct as you see fit.

# Formulate Requirements

First of all, trying to design a system without knowing the requirements will drive you insane if you are expecting any semblance of quality in your architecture. I know this because I've tried. My previous attempt at creating a game engine went down in flaming glory because the end result wasn't as flexible and usable as I had hoped. Before you write a single line of code, I implore you, write a design document for your project. Describe what the system needs to do in detail. Why is this so important? Without this step, you're essentially flying blind. You will re-factor your code three times as often and bloat your code base with features you don't need; and if you're like me, you'll agonize over finding the Right Way to do it--except you won't, because you don't know what you need.

Case in point: in my first engine, I rewrote my resource manager four times. Not a simple re-factor mind you; a complete rewrite. I included features like loading from a separate thread, which is a great feature, except that I didn't really need it. I just thought it was cool and that every engine should use it. Granted, I learned a lot about what I did need during those four rewrites, but I would have been better off starting with the essentials and refactoring from there. I went into it without fleshing out how the end product would use the system; as a result, my design suffered from code bloat.

# Build a (Good) Foundation

I've heard differing opinions on this, so take it with a grain of salt. I see two specific mistakes that neophyte developers make (including myself) when they are just starting a big project. Either they try to sketch out the entire system (a large one, mind you) on paper, in detail, including every major class and how it will interact with other classes; or they don't attempt to design anything and just start coding. The latter is just pure laziness in my opinion--the chances of designing a good system with that strategy are akin to winning the lottery (unless you're John Carmack). The former, however, is just as destructive. From my experience, trying to build exclusively from the top down is paralyzing. You scrap design after design trying to find the perfect one (hint: it doesn't exist).

As with all things in life, good design comes with moderation. Begin with your requirements and write some hypothetical sample code of how you will use the system. One good way I've found is to write a bogus program (or part of one) in pseudo code, testing out various interfaces and ideas. At the same time, begin building the foundation for your system. If it's a game, start with the basics, like a basic map loader. Just get something working.

# Break Down Work into Manageable Chunks

As you build the components of your system, not only will the next step become clearer, but so will the requirements. You'll realize something you need but couldn't have known early on in the process. This allows you to refine your requirements as well as your code. By starting simple, you will save time refactoring and gain momentum and efficiency. I see many developers who attempt to write hundreds of lines of code without ever compiling it. They proceed to spend hours trying to get it to even compile, and then twice as many working out the bugs. This is not good practice; a better method is to utilize unit testing: writing small chunks of code and rigorously testing them before moving on to another component. This gives you confidence in the components of your system as you build on them, and boosts your morale as you see small parts working independently. Furthermore, it simplifies bug fixing since you can focus on one part of the system exclusively.

# Pick an Achievable Goal and Stick to It

This is probably the most important point. Avoid the pain and suffering of trying to design the perfect system for any product. It won't happen. Always know your requirements and design for that only. This is especially true if you are trying something new. I don't personally know how the guys at Epic designed Unreal 3 or their story, but I would imagine they built it up using the good parts of their games. Not only that, but the guys developing Unreal Engine 4 are probably the same ones who built the previous three engines (at least the lead developers). In short, defer your amazing do-it-all project for when you actually know what you're doing (I say this to myself as well). Most of us started out wanting to develop the best MMO on the planet; we quickly realized this was foolish. However, some of us are repeating that foolishness on a smaller scale. We're trying to build something we have little experience with, and make it better than everything else out there. That's just my opinion.

How does this apply to me? Well, a lot. I'm trying to trim down my grandiose plans to something manageable. For instance, what are my goals? I've spoken about those in recent blog entries. If you go back to my first blog, you'll find that I've trimmed it down a bit since then. I may trim it down even more. As a humble hobbyist with only 10 or so hours a week to devote to doing what I love, this has to be the way it is. It's just reality.

Anyway, I hope this was helpful. Again, it's just my opinion, and I'd love to hear your feedback if you agree or disagree. Thanks!

## Project Progress

# Entity Management

As of late, I have spent hours agonizing over the best way to implement my entity system. I am suffering a bit from neophyte syndrome trying to think through all the different scenarios and needs of my application. One particular mental block I kept running into was how to handle articulated physics entities. I do not plan to have skeletal animation in my game. However, there will be skeletal entities consisting of a hierarchy of physics joints and rigid bodies. Originally, I had considered doing a fancy schmancy component system that seems to be the current rage, but I've run into hitches with that as well, and I think I'm best off slowing down and trying to write something specific to the needs of my application.

First off, the terminology has been messing with me. There are actors, entities, scene objects, renderables, and who knows what else. I think I've decided to call the building blocks of my scene entities. For my game, there are two large subsystems that the entity system will interact with: the rendering and physics subsystems. There could be more, like an AI subsystem for instance, but I'm trying to keep it simple so my brain doesn't explode.

As a result, until I have a better idea of the requirements of my system, I plan to forgo a formal entity system. I'll simply do things the old fashioned way! Once I have something working, I may try to make a game out of it, in which case I may opt for a more formal entity system. What I'm realizing, though, is that these tools are only useful if they make my life easier. If I'm designing a pretty simple single player physics sandbox application, I don't need a fully fledged game engine with a component based entity system to make it work.

# More Project Details

So what exactly is this project I'm working on? Well, I thought it would be cool to experiment with more advanced rendering techniques in DirectX 11. At the same time, I would love to get my hands on a 3D physics engine and have some fun with it. Enter the tank sandbox! Don't ask me why, but I have this vision of an articulated tank physics model, and I really want to implement it. Something like this (except in 3D):

The world will be an assortment of static and dynamic simple objects. I'm planning to use an XML file to describe the world map. Basically, the player will be able to ride around and shoot stuff. Woohoo, sounds fun! This is where my engineering side dominates my creative side. I can't think of a good game idea, so I'm just going to create a fun and pretty sandbox application.

So what are my goals? Well, goal number one is to learn DirectX 11. This is my first project using it, and I wouldn't exactly be a good graphics programmer if I didn't know it through and through, now would I? As I have stated in my previous posts, I really want to play around with deferred rendering, a few different shadow techniques (say, Cascaded Shadow Mapping for example), SSAO, HDR, motion blur, and depth of field. That means most of my time will be spent on the rendering side, which is what I want. Although the physics aspect is fun, I also chose it specifically to experiment with animation and add the challenge of balancing the visual/physical aspects of a game object. So that's goal number two.

# Formulating a Plan of Attack

With what I have so far, I can pop up a window and initialize DirectX in just a few lines of code. Here's what my main function looks like:

```cpp
int WINAPI WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance,
                    PSTR pScmdline, int iCmdshow )
{
    // Create DirectX with the default adapter
    SGF::IDX11Graphics *graphics = SGF::DX11CreateGraphics();
    if( !graphics->Init() || !graphics->CreateDevice(0) )
        return 1;

    // Initialize the window
    SGF::DX11SampleWindow wnd( "Hello", "HelloWorld", 800, 600 );
    wnd.Init( graphics, hInstance );

    while( !wnd.HasQuit() )
    {
        wnd.PumpMessages();
        // Do Logic
    }

    graphics->Release();
    delete graphics;   // single object, so plain delete rather than delete[]
    return 0;
}
```

My next plan of attack is to write a basic resource loader for textures and meshes, followed by a mesh helper class. Deciding on a file format is always hard for me, but I think I'll go with .x, since I'm familiar with it. I will probably use something like Assimp to load it rather than write something myself. Next, I want to create a camera class and get some basic meshes rendered. It would be a good morale booster for me to see something on the screen, so I think that's a good milestone.

After this, I want to create an interface for the physics system and start playing around with rudimentary scenes. Everything will be rendered and updated brute force, but just for testing purposes. It's always good to unit test sections of code to ensure that they work. It allows you the confidence to build on it with later systems. Once I have this up and running, I'll start looking into a world manager to manage scene objects and load/unload maps. At the same time, I want to research the best way to organize my rendering pipeline.

There is so much to do, but I just keep telling myself that if I cut it down into manageable chunks and shoot for modest milestones, things will get done. It's fun being able to do this in a hobby setting, because I can go about it less structured and more experimental. I will continue to update as I make progress. I hope to have some screenshots in the next week or so! Until then, later.

## Software Rasterizer Source Released and other Updates

# A Modern Approach to Software Rasterization

I have some potentially exciting news for any of you who are interested in software rasterization. I released the source code to my research project titled "A Modern Approach to Software Rasterization." You can find the code here:

Here is a short description taken from my IOTD post a few weeks ago:

The source includes the demo executable. To run it, you will need a CPU supporting SSE 4.1. I recently tested it on a Core i5 dual-core processor with Hyper-Threading, and it runs 2.5 times faster on four threads than on one! As far as I can tell, it scales very well to the number of cores on the machine. I haven't had a chance to try it on anything more than quad-core. If you happen to own a beastly machine with multiple processors, give it a try and see what kind of performance you get with more threads. On that note, the demo application takes the number of threads to use as an argument. I created a few Windows batch files that run the demo with different settings--including how many threads to deploy.

# Practical Rendering & Computation with DirectX 11

I just received this book in the mail yesterday, and I'm loving it. The authors did a great job covering material that the Microsoft documentation doesn't cover, while leaving the more mundane implementation details out. I'm only 100 pages in, but the chapters all look very interesting, so I'm excited to get to them! Great job MJP, Jason Z, and jollyjeffers on your hard work! I am hoping to use the material in the book to organize my DirectX wrapper classes so they're as useful as possible. It helps to understand which features are heavily utilized and which ones are useful only for specific algorithms.

# Game Progress

As I've stated in my previous entries, the struggle I have dealt with recently is trying to design a system without fully understanding the final product. I think this is bad practice, as requirement gathering should be the first order of business when starting a project. An idea for a good project has been formulating in my mind as of late. Since my overall goal is to learn how to design a game engine--with an emphasis on rendering--I think the best project is an interactive physics demo. My idea is to create a sandbox where the user controls a tank object composed of physics components. The world itself will be composed of standalone static and dynamic meshes which interact with the player. I want to explore some advanced rendering techniques like deferred shading, HDR, depth of field, SSAO, and motion blur as well. I have chosen Bullet as the physics library--I have heard great things about it.

Rather than design the engine for this demo from the bottom up, I want to start from both sides simultaneously. For instance, I want to start designing the world manager interface immediately as I am developing the rendering framework. This will help clarify what requirements I will need for materials, resource management, and pipeline organization. I will most likely utilize XML for the map file format, allowing the user to place objects and entities within the scene fairly easily. This would also allow something like a Maya plugin to export a scene along with the accompanying meshes fairly easily.
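As a teaser of what such a map file might look like--purely hypothetical element and attribute names, nothing final:

```xml
<map name="sandbox01">
  <staticMesh file="arena.mesh" position="0 0 0" />
  <dynamicMesh file="crate.mesh" position="4 1 -2" mass="10" />
  <entity prefab="tank" position="0 0.5 0" player="true" />
</map>
```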

Zach.

## The Quest for a Great Engine Design

One big thing that I'm struggling with as a developer is wanting to do it right the first time. I'm such a perfectionist at writing code that sometimes I get frustrated and lost trying to design a system that seems perfect. My brain isn't big enough to grasp all of the intimate details and relationships between components of the engine. Questions like: should I split the scene object into a renderable object for the render data and a scene node object for the spatial and transformation data, or just lump them all together? How decoupled should components be? Where do I even start?!

As I think about this problem, I am realizing that I have two major issues with my design philosophy. For one, I don't give myself enough room to make mistakes. Sometimes, in order to find the best way to do something, you have to try several different options. I found this out personally through my year-long school research project where I wrote a multithreaded software rasterizer. In my first attempt, I spent hours upon hours meticulously planning how the pipeline would work. Even worse, I gave in to the dreaded deadly sin of premature optimization. In the end, I had to scrap the architecture entirely. I lacked the expertise to understand that the "optimizations" I was making were really making things slower--way slower. In my second attempt, I learned from my mistakes and designed an architecture that was far superior. In the end, I didn't fail the first time. I just learned how not to make a software rasterizer (to paraphrase that grossly overused Edison quote). Sometimes we need to stumble in order to learn how to do it right.

The second problem with my stubbornly innate design philosophy is that I expect myself to be able to draft out a complete schematic of a fully functional game engine without even understanding its requirements. Every game engine that I know of has limitations--albeit some have more than others. At some point, the developers had to decide what type of games they were going to support and make assumptions based on that fact. Take the Doom 3 engine. From what I can tell from the source code, the guys at id knew they were going to write an FPS, so they designed their engine to support an FPS. There are core engine classes that have special functionality to support certain types of world triggers and characters that are specific to an FPS. Somehow I got it into my head that my engine has to support every game type under the sun and be completely generic. I'm learning that this just isn't realistic. The difficult part for me now, as a hobby developer with virtually no real-world game development experience, is knowing what my engine should support. Does it need to support instancing? How should physics components like joints be represented by the entity system? Should there be one entity per component? Or an entity for a tree of components? The same thing goes for mesh subsets and other things. How do you handle decals?

These are all important design questions that need to be answered before the core engine is built, because sometimes a new feature won't fit nicely into an existing solution. My problem is that I'm fairly ignorant about what these things require from the engine. It's like I have the drive and motivation, but there's just so much information to handle! I think I need to do a couple of things. I need to spend more time playing around with existing solutions to see what's out there functionality-wise and how it is presented to the end user. Also, I need to stop daydreaming about designing the next big next-gen engine all by myself. Maybe I just need to set my sights lower to something achievable and take baby steps. Maybe a good first baby step is really learning DirectX 11 well.

One final observation. I have noticed when I look at engine code that every solution has its problems. There is no perfect engine (and never will be). I guess I shouldn't expect mine to be either.

## Abstraction in a Game Engine/Framework

If there's one thing that's dang hard, it's trying to find a decent balance between flexibility and simplicity when writing a good abstraction layer. I'm relatively new to DirectX 11, but so far it is a much nicer API to work with than previous versions of DirectX. It's so nice, in fact, that I'm having a hard time finding ways to wrap things into nice little bundle classes without losing core functionality. While I don't particularly enjoy having to play around directly with a graphics API (I'd much rather hide it away behind some nice facade), I must say that the designers are making it hard on me.

I ended up creating an IDX11Graphics interface which encapsulates the device context, the device, and the factory object. I have a simple Init() function which enumerates all mode data from all of the adapters and files them away into a data structure that lets me filter modes by adapter, output, and format. Pretty simple stuff. The initialization stuff is the easiest to encapsulate, which is obvious when you look at the DXUT library that Microsoft provides.
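Here's a plain-C++ sketch of the mode-filtering idea, with the DXGI calls left out so it stands on its own. In the real code the records would be filled in from IDXGIFactory1::EnumAdapters1, IDXGIAdapter1::EnumOutputs, and IDXGIOutput::GetDisplayModeList; the struct fields and class names below are my own placeholders, not the actual interface:

```cpp
#include <vector>

// Stand-in for one enumerated display mode. In practice these values come
// from DXGI_MODE_DESC entries returned by IDXGIOutput::GetDisplayModeList.
struct DisplayMode
{
    unsigned adapter;  // adapter ordinal from EnumAdapters1
    unsigned output;   // output (monitor) ordinal from EnumOutputs
    unsigned format;   // DXGI_FORMAT value, stored as an integer
    unsigned width;
    unsigned height;
};

class ModeList
{
public:
    void Add(const DisplayMode& mode) { m_modes.push_back(mode); }

    // Return every mode matching the given adapter, output, and format.
    std::vector<DisplayMode> Filter(unsigned adapter, unsigned output,
                                    unsigned format) const
    {
        std::vector<DisplayMode> result;
        for (const DisplayMode& m : m_modes)
            if (m.adapter == adapter && m.output == output &&
                m.format == format)
                result.push_back(m);
        return result;
    }

private:
    std::vector<DisplayMode> m_modes;
};
```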

I'm starting to realize that there is little you can do to improve the API without moving up a software abstraction layer. The farther up you go, however, the more flexibility you lose. For instance, I was trying to create a render target wrapper class, but I realized that the components of a render target really weren't meant to be coupled, which is undoubtedly why Microsoft kept them separate in the first place. D'oh! There is a nice relationship between how DirectX handles resources and resource views; it seems clunky to just plop them both together into one helper class.

For instance, how do we handle render targets in a flexible way that actually simplifies the process? We could build a class that creates a texture, render target view, and shader resource view. But then how do we handle swap chains, which have a different creation process? The class could take a pointer to the texture as input to allow the user to create it however they want; or we could remove the shader resource view and put it in another supporting class. As another example, I considered writing one IDX11Texture class to encapsulate all types of textures, but one immediate problem I ran into is the high potential for bloat. In order to accommodate all the different types of textures (1D, 2D, 3D, etc.), the class would have to include ALL of the data. Ultimately, I think these options detract from the elegance of the SDK and make things more confusing and less flexible. This is one aspect of object oriented programming that drives me crazy. There just doesn't seem to be one Right Way to do it.
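To make the "take a pointer to the texture" option concrete, here's a design sketch with the D3D11 types stubbed out as empty structs so the ownership shape is visible. None of this is real DirectX code; in the real class the members would be ID3D11Texture2D*, ID3D11RenderTargetView*, and so on, and the constructor would call ID3D11Device::CreateRenderTargetView:

```cpp
#include <memory>
#include <utility>

// Stubs standing in for the D3D11 interfaces, purely for illustration.
struct Texture2D {};         // stand-in for ID3D11Texture2D
struct RenderTargetView {};  // stand-in for ID3D11RenderTargetView

class RenderTarget
{
public:
    // The caller creates the texture however it likes (its own texture
    // creation call, or a swap-chain back buffer) and hands it in; the
    // wrapper only builds and owns the render target view.
    explicit RenderTarget(std::shared_ptr<Texture2D> texture)
        : m_texture(std::move(texture)),
          m_view(std::make_unique<RenderTargetView>())
    {
    }

    bool HasTexture() const { return m_texture != nullptr; }

private:
    // Shared ownership: a swap chain or another view may also reference
    // the same underlying texture.
    std::shared_ptr<Texture2D> m_texture;
    std::unique_ptr<RenderTargetView> m_view;
};
```

The nice property of this split is that the wrapper no longer cares whether the texture came from a swap chain or a plain texture creation call.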

On the other hand, in order to simplify things, we have to sacrifice flexibility at some point. It's just going to happen. I guess where I struggle is finding the happy medium where an abstraction can make the programmer's life lots easier, but also retain the features that they need for their graphics algorithms. It's definitely hard to know what features are needed when you're just learning the SDK--a fact I know first hand.

I think this is the challenge that appeals to me the most about game engine development. There is so much software engineering involved. If you don't carefully plan out the intricate relationships between components and subsystems, the entire thing quickly degrades into chaos. I have high respect for people who can grasp it all.

In any case, I have a graphics interface in place that takes care of initialization and device enumeration. I am still deciding what other tasks I should give it--whether it should just be a glorified ID3D11Device, or more of an abstract renderer. Because I am still going to retain DirectX 11 access within my engine, I am hoping that it can become both. What really bugs me is when I create a helper interface that ends up just having 5 lines per function call that parrot things back to the Direct3D device. That doesn't seem constructive to me at all.

In any case, that's all I've got for now.

## Designing a Game Engine is Hard

There is something about designing a game engine that is both alluring and terrifying. My ten years of programming experience feels like nothing when I am faced with that challenge. Many times I've wondered why I feel so compelled to build one of these systems, considering that it is effectively reinventing the wheel for the hundredth time. Why not just use an existing engine out there? I have seriously considered this, but in the end, I find myself more intrigued with game engine technology than games themselves. That is where my passion lies.

I have made a few attempts at building a game engine, and those have all failed for various reasons--the paramount reason being poor design decisions due to inexperience. My most recent attempt was a 2D game engine intended to power casual games similar to World of Goo. Unfortunately, components were so tightly coupled in all the wrong ways that functionality quickly devolved into glorified hacks. For example, my entity system was confusing, inflexible, and difficult to use, as were the physics and scene managers. It came to a point where the design was unsalvageable--so I abandoned the project.

Like Thomas Edison, I successfully found a way not to build a game engine. It was hard for me to accept that my final result was essentially worthless, but the point was to learn. With that experience behind me, I now plan to build a simple 3D engine from the ground up. Hopefully this time I will avoid some of the pitfalls that I encountered in my first attempt.

One thing that I plan to do differently this time is put a lot more effort into the design stage of development. Last time, I ended up rewriting several parts of my engine and wasting valuable time because I realized my requirements weren't adequate. I decided on a name for this new engine: Tiny. The reasoning behind this name is fairly obvious. I want to minimize the amount of bloat, yet still provide a strong toolset that allows developers to have more control in the game building process. This engine will be code-based, so I don't expect to build any point-and-click tools for development.

One particular aspect of the design phase that I want to stress is the use of interfaces. In my previous engine, I added interfaces after parts of the engine were already built--which basically defeats the purpose. I think building interfaces can be a huge aid in the design process, because it forces developers to think about how components interact with each other. With a good set of interfaces, the intricate relationships between components are already fleshed out, lessening the chance of one of those "Aha" moments of forgetting to include some vital piece of data or functionality. This may seem obvious to some people, but the learning process has been slow for me.
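As a trivial illustration of the interfaces-first idea, here is the kind of thing I mean. The names below are made up for this example, not actual Tiny classes: the subsystem contract is written before any real implementation exists, and a do-nothing implementation lets the rest of the engine be wired up and tested early.

```cpp
#include <string>

// Hypothetical subsystem interface, agreed on before any audio code is
// written. Other components code against this contract, not a concrete class.
class ISoundSystem
{
public:
    virtual ~ISoundSystem() = default;
    virtual bool Init() = 0;
    virtual void PlaySound(const std::string& name) = 0;
    virtual void Shutdown() = 0;
};

// A null implementation that satisfies the contract without doing anything,
// so the engine backbone can be built and exercised before real audio exists.
class NullSoundSystem : public ISoundSystem
{
public:
    bool Init() override { return true; }
    void PlaySound(const std::string&) override { ++m_calls; }
    void Shutdown() override {}

    int CallCount() const { return m_calls; }  // for testing only

private:
    int m_calls = 0;
};
```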

I am in the process of building a list of requirements, designing a high-level schematic for the engine, and gathering the list of third-party APIs that I plan to use. Here are the requirements so far:

• An engine backbone class that facilitates initialization of core subsystems like Input, Sound, Physics, Graphics, and Networking, as well as game state, resource, and window management.
• An event system which oversees message passing between engine subsystems.
• A DirectX 11 based renderer which natively supports HDR lighting, SSAO, motion blur, and depth of field.
• Template based resource managers for various resources such as textures, meshes, materials, shaders, and scripts.
• A text based map format that supports a set of static world meshes and entities of various types.
• A component-based entity which localizes all parts of the entity into one object--such as rendering, physics, AI, input controller, etc. Components will communicate through the event system.
• A robust error logging system.
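To make the event-system requirement a bit more concrete, here is a minimal sketch of the kind of bus I have in mind for passing messages between subsystems and entity components. All names are my own placeholders, and a real version would likely use typed events and queued delivery rather than string keys and immediate dispatch:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal event bus sketch: subsystems subscribe to named events,
// and Publish() forwards the payload to every registered handler.
class EventBus
{
public:
    using Handler = std::function<void(const std::string& payload)>;

    void Subscribe(const std::string& eventName, Handler handler)
    {
        m_handlers[eventName].push_back(std::move(handler));
    }

    void Publish(const std::string& eventName, const std::string& payload)
    {
        auto it = m_handlers.find(eventName);
        if (it == m_handlers.end())
            return;  // nobody is listening for this event
        for (auto& handler : it->second)
            handler(payload);
    }

private:
    std::map<std::string, std::vector<Handler>> m_handlers;
};
```

The appeal for the entity system is that a physics component can announce "this entity moved" without knowing whether the render component, the AI, or nobody at all is listening.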

The scariest part for me is designing the entity system and world manager. It is such a balancing act to avoid boxing yourself in with either a completely decoupled system or a cluttered, fully coupled one. I hope that I can do better than last time.