
Entries in this blog

42

This was originally going to be a reply to this thread, but I think it kinda took on a life of its own. Usually I keep my journal posts technical, but I guess some philosophical musings here and there are good for the creative process. This was partly inspired by something a friend of mine nonchalantly said when the subject of mortality came up: "I have no intentions of leaving this world when this body dies." Everyone else looked at him like he was nuts, but I was on the same page: "Neither do I."

Regarding the question in the poll in that thread, the whole man-in-the-box vs. man-on-the-stage thing is a problem that needs to be solved before I can vote yes. Here's how I see this potentially playing out. Initially, to avoid this problem, more and more of our biological brains could be replaced with computer chips until eventually the entire brain is replaced. Hopefully this would work, since the rest of my scenario kinda hinges on it. Imagine how fast we could write programs and run simulations without using a keyboard for input, or even having to go through the trouble of translating thought into code. I'd imagine with this ability we could put our collective minds together and make faster and faster computer brains, and soon a technological singularity would emerge. At that point, almost anything is possible: nanobots to keep our biological bodies alive forever, or leaving them behind and building a computer that could house millions of consciousnesses. From this point on, my imagination kicks in pretty hardcore.

Due to the inherently violent nature of human beings, I probably wouldn't want to be around when they eventually destroy themselves. I would probably create some sort of vehicle capable of wandering the galaxy, fashioned after either the Enterprise or the Tardis; I'm still undecided on that point. One could build some sort of transporter/replicator type device to synthesize a biological body, and given that we'd now have brains that can think at infinite speed, we could probably find a way to transfer my consciousness back and forth between biological bodies and computers. Pretty appealing: wandering the universe, living lifetime after lifetime in biological bodies, simulating even more inside a computer. Time travel wouldn't even be out of the question. Boredom and stagnation would be your only threats to survival. I imagine, though, that whatever would be left of humans, and any other intelligence that may emerge in the universe, would provide sufficient entropy to give us a reason to continue to exist. Whatever computer system houses me in between bodies would have to be pretty fail-proof and able to harvest energy from basically the fabric of the universe itself. Sounds pretty far-fetched, but remember, with a technological singularity you could solve much more difficult problems than that. Except maybe the end of the universe; that might require some serious thought. At least to come up with something more creative than just time traveling back to about 13 billion years after the big bang, to when things start to get interesting.

The tricky thing to consider is, who's to say that hasn't already happened, and maybe I've just decided that lives are less boring if you don't remember any of the others.

More Deferred Rendering

It's about time for my bi-yearly post here at gdnet, so here goes. I've gotten a single directional light working, a huge milestone after shelving the project for the better part of this year. The directional light is still being rendered directly onto the main render target; my next task is to break out another render target and texture, and handle combining the light rendering passes onto that. Once that's working, I'll need to write another technique in my shader to combine the lightmap texture with (I'm guessing) the color component of the g-buffer to produce the final image.
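
For the combining step, what I have in mind is basically standard additive blending into a light accumulation target, one draw per light. A quick sketch of that kind of blend state in plain D3D11 (this isn't my actual code):

#include <d3d11.h>

// Additive blend: each light pass adds its contribution on top of what's
// already in the light accumulation target.
ID3D11BlendState* CreateAdditiveBlendState(ID3D11Device* device)
{
    D3D11_BLEND_DESC desc = {};
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_ONE;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ONE;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);
    return state;
}

// Per frame: clear the accumulation target, draw each light with this state
// bound, then run the combine technique sampling the lightmap and the
// g-buffer's color component.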

Attached are some screenshots, because everybody likes screenshots. #1 shows the separate components of the g-buffer, with just my skydome, which I'm very proud of and kinda sad stopped working. #2 is pretty much the same scene, only with the directional light rendered onto the background. #3 shows the directional light after some time has passed and the sun has kind of gone down.

The reason I am so proud of my skydome is that I draw the sun onto it in a pixel shader, move the sun's position across the sky, and use that position to calculate the direction of the single directional light I am rendering. I used to have this exposed to a Lua script, though I can't remember why I stripped that out at some point. Some of you might remember seeing a render of the sun and a terrain as my avatar on this site a long time ago. That was made with a previous version of this project, probably prototyped in XNA back then. Once I have the rest of the deferred rendering in place, I'll go back and see what needs to be done to draw the skydome. I'm guessing I will just have to draw it last, in its own pass, and there will probably be a little refactoring of my renderer class to support that.

I sat down last night and actually came up with a set of requirements, more of a roadmap of things I want to implement for this project. For now it is just a hobby, but the roadmap does lead to it becoming a game in the distant future. Just doing that has given me a lot of motivation: seeing a clear beginning and ending point, and how much more fun some of the later features will be to create once I've gotten some of the groundwork out of the way.

Hopefully there will be another entry soon with screenshots of my completed deferred rendering. Possibly with or without a sky.

Those two concepts don't necessarily have much in common, other than the fact that I've finished one and started another since my last journal entry.

I'm fairly pleased with the model loading. At first I was just loading models in the Milkshape3D format, and since that format is optimized for, well, Milkshape itself, the processing I had to do ended up being a little sluggish. It took almost 3 seconds to process a model with around 12k vertices. That was ok for a while, but it's been on my list of things to rework for a while now. The direction I took to fix it was to make up my own file format that sacrifices a little storage space for extremely quick load times. I wanted something that would go pretty much straight from disk into vertex and index buffers with as little processing as possible. Following is a spec of the format in its current iteration:

UINT version;
UINT nameLen;
char name[];                // nameLen bytes
UINT numMaterials;

// repeated numMaterials times:
Material {
    short index;
    char  name[32];
    float ambient[4];
    float diffuse[4];
    float specular[4];
    float emissive[4];
    float shininess;
    float transparency;
    char  mode;
    char  texture[128];
    char  alphamap[128];
}

UINT numMeshes;

// repeated numMeshes times:
Mesh {
    UINT nameLen;
    char name[];            // nameLen bytes
    UINT materialId;
    UINT vertexLen;
    VertexPositionTextureNormal vertices[];   // vertexLen entries
    UINT indexLen;
    short indices[];                          // indexLen entries
}


It's really simple and basic and doesn't have many of the bells and whistles other formats have, like bones or animations, but adding those in shouldn't be too hard, and doing so is something I think will help me better understand skeletal animation in the long run. I've also written a Milkshape plugin to export models into my custom format, which was a fun learning experience. As for the results: that same model that took my ms3d loader over 3 seconds now loads in less than 200 milliseconds, a huge difference. This will come in handy down the road when I want to load models in on the fly. After all this, I have a complete pipeline for creating models and resources in Milkshape and efficiently loading them into my engine.
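
To give an idea of how little processing is involved, here's a rough loader sketch for the format above (not my actual code; the exact vertex layout and struct packing are assumptions on my part):

#include <cstdio>
#include <cstdint>
#include <string>
#include <vector>

#pragma pack(push, 1)                 // assuming the file is written unpadded
struct Material {
    int16_t index;
    char    name[32];
    float   ambient[4], diffuse[4], specular[4], emissive[4];
    float   shininess, transparency;
    char    mode;
    char    texture[128];
    char    alphamap[128];
};
struct VertexPositionTextureNormal { float pos[3], tex[2], normal[3]; };
#pragma pack(pop)

struct Mesh {
    std::string name;
    uint32_t materialId = 0;
    std::vector<VertexPositionTextureNormal> vertices; // fed straight to a vertex buffer
    std::vector<int16_t> indices;                      // fed straight to an index buffer
};

static std::string ReadString(FILE* f)
{
    uint32_t len = 0;
    fread(&len, sizeof(len), 1, f);
    std::string s(len, '\0');
    if (len) fread(&s[0], 1, len, f);
    return s;
}

bool LoadModel(const char* path, std::vector<Material>& mats, std::vector<Mesh>& meshes)
{
    FILE* f = fopen(path, "rb");
    if (!f) return false;

    uint32_t version = 0;
    fread(&version, sizeof(version), 1, f);
    std::string modelName = ReadString(f);

    uint32_t numMaterials = 0;
    fread(&numMaterials, sizeof(numMaterials), 1, f);
    mats.resize(numMaterials);
    fread(mats.data(), sizeof(Material), numMaterials, f);  // fixed-size: one read

    uint32_t numMeshes = 0;
    fread(&numMeshes, sizeof(numMeshes), 1, f);
    meshes.resize(numMeshes);
    for (auto& m : meshes)
    {
        m.name = ReadString(f);
        fread(&m.materialId, sizeof(m.materialId), 1, f);

        uint32_t vertexLen = 0, indexLen = 0;
        fread(&vertexLen, sizeof(vertexLen), 1, f);
        m.vertices.resize(vertexLen);
        fread(m.vertices.data(), sizeof(VertexPositionTextureNormal), vertexLen, f);

        fread(&indexLen, sizeof(indexLen), 1, f);
        m.indices.resize(indexLen);
        fread(m.indices.data(), sizeof(int16_t), indexLen, f);
    }
    fclose(f);
    return true;
}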

On to deferred rendering. I'm not as pleased with my progress here as with the model loading, but I think I have a pretty good start. I won't even try to explain the details of deferred rendering; other people have already done a much better job of that than I probably could. As for where I am with it, I'm calling it 1/3 of the way done, since I've gotten the first of three steps finished and working correctly: creating what's called the g-buffer, or geometry buffer. I'm not on my dev pc at home, or else I'd post some psychedelic looking screenshots of the separated components of the g-buffer. Overall, my existing forward renderer was easily adapted to deferred rendering without too many major structural changes. Hopefully I will find some motivation this weekend to get this project 2/3 of the way finished.
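
For anyone curious what "creating the g-buffer" boils down to, it's roughly this kind of D3D11 setup: one texture per component, each bound as both a render target and a shader resource. A sketch, with formats that are plausible picks rather than necessarily what I settled on:

#include <d3d11.h>

struct GBuffer
{
    ID3D11Texture2D*          tex[3] = {};
    ID3D11RenderTargetView*   rtv[3] = {};
    ID3D11ShaderResourceView* srv[3] = {};
};

bool CreateGBuffer(ID3D11Device* dev, UINT width, UINT height, GBuffer& gb)
{
    const DXGI_FORMAT formats[3] = {
        DXGI_FORMAT_R8G8B8A8_UNORM,      // color
        DXGI_FORMAT_R10G10B10A2_UNORM,   // normals
        DXGI_FORMAT_R32_FLOAT            // depth
    };

    for (int i = 0; i < 3; ++i)
    {
        D3D11_TEXTURE2D_DESC td = {};
        td.Width            = width;
        td.Height           = height;
        td.MipLevels        = 1;
        td.ArraySize        = 1;
        td.Format           = formats[i];
        td.SampleDesc.Count = 1;
        td.Usage            = D3D11_USAGE_DEFAULT;
        td.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

        if (FAILED(dev->CreateTexture2D(&td, nullptr, &gb.tex[i]))) return false;
        if (FAILED(dev->CreateRenderTargetView(gb.tex[i], nullptr, &gb.rtv[i]))) return false;
        if (FAILED(dev->CreateShaderResourceView(gb.tex[i], nullptr, &gb.srv[i]))) return false;
    }
    return true;
    // later, in the geometry pass: context->OMSetRenderTargets(3, gb.rtv, depthStencilView);
}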

Short Update

Keeping in line with rule #2 from last night's post, I was able to get my logger to send its log messages to my WinForms app. However, the means are a little questionable.

Calling native code from managed code wasn't too hard to figure out. I created a C++/CLI dll which interfaces with my native library. The native library is compiled as a static library, so I'm not quite 100% sure exactly how that works, but it just sorta does. To have my (native) logger send its log messages up to the WinForms gui, I would have had to do the reverse and have native code call managed code. An afternoon reading various articles on the subject didn't really talk me into trying to implement something like that. It would have taken seemingly forever, with not much payoff, as this would probably be the only thing I'd use it for.
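
For reference, the managed-to-native direction really is as simple as a thin C++/CLI ref class holding a native pointer. Something along these lines (hypothetical names, not my actual classes):

// EngineBridge.h, compiled with /clr
#include "NativeEngine.h"   // header from the native static lib (assumed name)

public ref class EngineBridge
{
public:
    EngineBridge()  { m_native = new NativeEngine(); }
    ~EngineBridge() { this->!EngineBridge(); }                 // dispose
    !EngineBridge() { delete m_native; m_native = nullptr; }   // finalize

    void Update(float dt) { m_native->Update(dt); }            // managed -> native: trivial

private:
    NativeEngine* m_native;  // a ref class can hold a plain native pointer
};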

All that justification presented, I decided to have my gui request log messages to display. I created another log target for my logger that queues up the log messages sent to it, and once every frame the gui drains the queue and displays the messages. It's kinda kludgy and hacky, but it works for now.
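
The queueing log target is just a locked queue that the gui drains. A minimal sketch of the idea (names made up, and using std::mutex here for brevity):

#include <mutex>
#include <queue>
#include <string>
#include <vector>

class QueuedLogTarget /* : public ILogTarget, hypothetically */
{
public:
    void Write(const std::string& msg)          // called by the native logger
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_pending.push(msg);
    }

    std::vector<std::string> Drain()            // called by the gui once per frame
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        std::vector<std::string> out;
        while (!m_pending.empty())
        {
            out.push_back(m_pending.front());
            m_pending.pop();
        }
        return out;
    }

private:
    std::mutex m_mutex;
    std::queue<std::string> m_pending;
};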

Now I just need to figure out why the VS debugger won't hit breakpoints set in my native code.

Integrating Lua scripting into my game didn't exactly turn out to be as exciting as I thought it would be. It is pretty sweet to type cube:SetPosition(x, y, z) in the console and actually have the cube move around on the screen, though. I don't feel like detailing much of the implementation because I don't think it's really anything special at this point. A decent starting point and something to refactor a bit later on, that's all. And that's good enough for now. I'll come back to this thought at the end.

Working with and thinking about the interfaces between Lua and C++ got me thinking again about my .Net WinForms gui. I had originally started that project with the intention of it becoming a generic game editor using my C++ engine, but quickly got frustrated trying to abstract and generify everything.

In case you haven't noticed by now, I only program at home out of boredom, and it's usually fueled solely by the randomness of my creativity. I have a hard time sticking with one project long enough to get as much work done on it as I would like. Contributing to that even more lately: work is pretty demanding of my creativity, and in the evenings, on a creative level, I feel much more like consuming (playing Fable 2) than producing.

When creativity does come, it's hard to keep it going for the weeks on end I would need to completely implement a certain feature. I'm thinking about bringing back the .Net WinForms gui I had started, but making a few changes and gearing it more towards being just a tool to help develop my specific game, gradually making it more generic over time. Maybe if I have some different options of things to work on, I'll be more likely to actually work on one of them instead of thinking, 'I'm tired of that, maybe I'll start a new project...'

I've also made a few rules for myself.
Rule #1: No more deleting everything and starting from scratch!
Rule #2: To facilitate not starting from scratch, don't try to make everything perfect on the first attempt. It's much easier to refactor than to start from scratch.

I'd be interested to hear if any of you guys have any rules you follow to keep yourselves on track with hobby projects.

Lua Part 1

My last post did say something about once a week, so here I am. I was away for most of the weekend, but did manage to work on Avalanche on and off for most of the day Sunday. I set up a project to test out the Lua api and got a few things hastily working, the main ones being linking to the Lua libraries, registering functions, and being able to call a Lua function from C++ and vice versa. I realized I will need some sort of console window to enter Lua commands in realtime, and kinda got lost down the rabbit hole setting up such a console in plain Win32. It felt way more painful than gui programming should be, but I powered through it and am pretty happy with the results. As with the Lua test project, the code is pretty ugly and relies on global variables and free functions floating all around, so my next goal will be to clean that stuff up and make each of those pieces a little more self contained.
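
For anyone who hasn't touched the Lua C API, the register-and-call dance I'm describing is roughly this (a stripped-down sketch, not my actual test project):

#include <iostream>

extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}

// A C++ function exposed to Lua.
static int l_host_print(lua_State* L)
{
    std::cout << luaL_checkstring(L, 1) << "\n";
    return 0;                       // number of values returned to Lua
}

int main()
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    // Lua -> C++: register the function, then let a script call it.
    lua_register(L, "host_print", l_host_print);
    if (luaL_dostring(L, "host_print('hello from Lua')\n"
                         "function update(dt) return dt * 2 end") != 0)
        std::cerr << lua_tostring(L, -1) << "\n";

    // C++ -> Lua: look up a Lua function and call it.
    lua_getglobal(L, "update");
    lua_pushnumber(L, 0.016);
    if (lua_pcall(L, 1, 1, 0) == 0)
    {
        std::cout << "update returned " << lua_tonumber(L, -1) << "\n";
        lua_pop(L, 1);
    }

    lua_close(L);
    return 0;
}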

I'm feeling like scripting in a game can be a slippery slope. If you don't draw a line somewhere between what you want to script and what you want to code, you might end up with a game completely written in Lua. I'm going to err on the side of caution and initially only expose things to the scripting system that will help speed up development. My plan is to write some reusable code, mainly to allow setting shader variables through my console, and to use that to pick up where I left off with the sky rendering I started in XNA. Once that is all established, it shouldn't be too hard to write a script to control a day/night sequence.

Possibly tomorrow, look for some more details on how I implemented the console window, and maybe some ideas about encapsulating the interfacing with Lua.

Project Progress

Reviewing my infrequent journal posts, I realize how much trouble I have sticking with one project and having said project really get anywhere. To try to keep myself on track, I'm going to post some kind of progress here at least once a week; even if I've only changed a handful of files, my goal is to record my progress here.

This week, I've revisited my C++/DirectX 11 project and heavily refactored much of what was there, with more of a focus on getting stuff done and not so much on obsessing over every last little detail and trying to get it perfect the first time. With that focus, I've decided to make the scope of the project a bit smaller and work towards a simple game at first, gradually adding features and quests and so on. Currently the game I have in mind is a medieval fantasy type RPG. I'd like to combine elements I've really liked from other games into one, for instance the vast open-ended world of Oblivion, but with a little more direction, and the engaging combat elements found in Twilight Princess.

I will have more detailed progress updates here in the future, along with a few other programming articles (and other random junk) at my Wordpress blog located here.

Entity Animation

With a little implementation and more thought, I've realized a separate entity for explosions might not be the best way to go. Instead of destroying a bug entity and putting a short-lived explosion entity in its place for a brief time, I think the bug should 'know how to blow itself up'.

In a more general sense, an entity should always support one or more sprite states, or animation states. I'm going to forgo hardcoding entities with set animation states, and I've sketched up an entity format, defined in XML. Basically, an entity can have one or more animation states; each state can have a set duration, and either a single sprite sheet or multiple sprite sheets to select from at random. Selecting at random from a pool will support my idea of multiple explosion animations.
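
To make that concrete, here's a rough sketch of what I have in mind (the element and attribute names are made up and will probably change):

<entity name="bug">
  <animationState name="idle">
    <spriteSheet file="bug_idle.png" />
  </animationState>
  <animationState name="exploding" duration="0.5">
    <!-- multiple sheets in a pool: one is picked at random when the state starts -->
    <spriteSheet file="explosion_00.png" />
    <spriteSheet file="explosion_01.png" />
    <spriteSheet file="explosion_02.png" />
  </animationState>
</entity>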

I normally like to be able to quickly test out the unit I'm working on, and I tend to work and test in pretty small units, so I've decided to fork my development path into the main game and a kind of debug mode, in which a sidebar is present alongside the original game screen, allowing me to add buttons to test certain functions at will. I imagine what I've created there is the beginnings of a game editor.

A pattern I arrived at with Avalanche (my attempt at a 3d game editor with D3D11) has shown its merit here, as I've arrived at it once again with InJders. I have a GameEngine class, which contains, among other things, a Canvas and an entity store, as I've called it. I had started by just hardcoding things in the engine's init method to add the player's 'ship', the invisible bounds of the screen, and the enemy bugs, but I pulled that logic out into a GameManager class, which now sets all that up and handles mapping input to the player ship, updating the bug entities, etc. For the purposes of the editor, I pushed any common logic up into an abstract base class, and extended GameManager to create one that doesn't hardcode things in its init method, but rather interfaces with the gui controls in my debug mode.
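
The shape of it, sketched in C++ for brevity even though InJders is java (names simplified):

// Shared logic lives in the base; subclasses decide what goes in the world.
class AbstractGameManager
{
public:
    virtual ~AbstractGameManager() = default;
    virtual void Init() = 0;
    void Update(float dt) { /* common entity/collision/update logic */ }
};

class GameManager : public AbstractGameManager        // the actual game
{
public:
    void Init() override { /* add player ship, screen bounds, enemy bugs */ }
};

class EditorGameManager : public AbstractGameManager  // debug-mode sidebar drives this one
{
public:
    void Init() override { /* nothing hardcoded; gui buttons add entities */ }
};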

This allows me to reuse all the existing rendering and entity management code I've already written, as well as add new functionality to it and have it be interactively testable. Eventually I want to have my debug mode component create a file that can be passed to the actual game's GameManager to instruct it how to run and manage the game.

This is all to the end of developing some code that can parse my xml entity format and enumerate and switch between animation states, all to eventually get a bug to blow up with a spiffy looking explosion when it's shot, instead of just disappearing. I think the next milestone after that will be to give the enemies some form of scripted movement so they're a little harder to hit. At that point I plan on getting into an area I have very little experience in: going from programming the technical details to actually making shooting bugs that explode fun. I may either post the jar here for others to try out, or maybe even embed it into an applet.

Anyway, work tomorrow involves cranking out much less fun java code, so that's all for tonight.

Back to Basics

Let me reiterate something from my last post. Designing an engine is hard. Probably more so if you don't really have a grasp on what it needs to do, only speculation on what it would be cool if it did.

Feeling a little lost, and like not much progress was being made, I decided to shelve the 3D project for awhile and work on actually making a game. I've decided on a simple Space Invaders clone, which I call InJders, pronounced in-jay-ders. (It seems par for the course to randomly insert the letter 'J' into the name of anything written in java, so I figured why not.) To avoid getting hung up on overly complicated rendering design, I decided to just use Java and Java2D for graphics. Combining that with more of a 'git r done' attitude instead of cautiously designing some overly complicated system, after only a few nights of coding I have what could almost be construed as a game already.

Yes, those are just stock icons from some free set I found. I'm really trying not to get caught up in making it look like much this early, not until it gets a little more substantial anyway. I have basic collision detection set up, and keyboard controls to move the 'ship' back and forth and shoot. The bugs you shoot, however, don't have much in the way of intelligence right now; they just stay in one place, get shot, and disappear.

The next thing I'm looking into is how to animate sprites. I managed to stumble onto a 2D explosion generator someone was kind enough to write and give out for free. It does some fancy stuff to procedurally generate 16 64x64 pixel images of the progression of an explosion, which it then writes out to a 256x256 pixel file with all 16 'frames' in a 4x4 grid. I was able to toss together some code to resize the images and convert them to png form. From there my AnimatedSprite class loads them up, and splits them into 16 BufferedImages. Tonight I finished up the animated sprite debugger tool I was working on to loop over the images, and it actually renders a pretty spiffy looking explosion.
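
Splitting the sheet is just grid arithmetic: frame i lives at column i % 4, row i / 4. In C++ here for illustration (the real code is java, using BufferedImage.getSubimage, but the math is the same):

struct Rect { int x, y, w, h; };

// Source rectangle of frame i (0..15) in a 256x256 sheet of 64x64 frames.
Rect FrameRect(int i)
{
    const int frameSize = 64, columns = 4;
    return { (i % columns) * frameSize,    // column -> x
             (i / columns) * frameSize,    // row    -> y
             frameSize, frameSize };
}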

I had the tool generate about 30 explosion sequences, and I plan on having the game select one of them at random each time a bug is destroyed; that way it doesn't just look like the same animation playing over and over again.

The next step will be to create an ExplosionEntity class that uses an AnimatedSprite, so that whenever a bug collides with a bullet, it is replaced with an ExplosionEntity which plays its sprite's explosion sequence, then destroys (or at a minimum, hides) itself.

Hopefully, with this being a smallerish project, and my velocity being much higher in java than in C++, I'll be more inclined to journal my progress here on a regular basis (since there may actually be progress to journal). Ultimately I hope to use the experience from this project to start up a more realistic 3D project at some point. I'm already thinking about writing a 3D version of this project once the 2D one is finished; however, I'm thinking the line between 2D and 3D should probably also be the line between java, with all its productiveness (for me anyway), and C++.

Oh noes! It looks like my GDNet+ subscription expires soon. I will probably get around to renewing it soon. Even though I don't post so much, I do like to support the site that's given help, and more importantly a ton of inspiration, to my adventures in graphics programming.

To ensure uninterrupted access to my bi-annual urge to blog, you can check out my wordpress blog here.

Whilst I'm here, though, I might as well say what's up and write about what I've been working on lately.

Designing an engine is hard. There are way too many components needed and no good place to start, so I've decided to read through a few books on the subject. I'm currently working through 3D Game Engine Programming, which, although a bit dated, has a lot of good ideas. Heck, for a settings mechanism I'm still using a slightly refactored version of the one from Enginuity.

I'm about halfway through the book now, and I've been updating the rendering code for D3D11 and refactoring things where I see fit. My hope is that at the end of the book I'll have a decent starting point for developing my own framework and be able to start on the editor component I previously mentioned.

Edit: Looks like GDNet has my back and renewed my subscription automagically.

Avalanche

Thanks to some comments on my last post, I've been considering using WTL for the GUI of my editor component. After experimenting with Qt, and minimally with MFC, it very much feels like getting back to basics, which at this point seems very attractive to me. Qt and MFC both felt a little bloated, with tons of utility classes that you kind of get talked into using, as if by some used car salesman. Because of this, I felt like I should be maintaining a more generic interface between the actual GUI part of things and the core editor code, in case I wanted to bail out on whichever GUI toolkit I was using without heavily refactoring the existing editor. I'm thinking now that there were more productive things to worry about than writing perfectly designed, loosely coupled, highly componentized code right out of the gate. I put quite a bit of exploration into trying to make Qt work for me, but the only thing I have to show for it is a lesson learned: get it to work first, then get it to work well, then finally make it pretty. Past lessons are also telling me not to stick to this too closely and just slop stuff together in an unmaintainable mess, but rather to gradually assimilate new concepts, refine, and slightly refocus.

For the next milestone, I plan to wire a demo component into my existing rendering framework for use when integrating it into the skeleton WTL application I currently have set up. This will just be a spinning cube and a frame time readout, to make sure everything renders and updates correctly when integrated with the editor. I hope to quickly get to where there's something interesting to show, as people (myself included) generally get more excited over pictures than a bunch of words.

As for the title of this post, I've also decided to officially name this project Avalanche. I've always just liked the sound of it from Final Fantasy 7.

I should first clear things up by saying I've read all the cautionary tales about 'make games, not engines' and all that, and that would be well and good if my intention were to make a game. I suppose my intentions are totally the opposite of those I have at work. At work I code, but I use the language and tools they decide we should use, we code what they decide we should code, and we have to be done coding by the date and time they say we should be done.

When I'm coding for fun, I seek to balance all that out, to make it fun again. I replace the rigid guidelines and schedules with just letting my creativity sort of meander. Lately I've been trying to guide these meanderings into eventually producing something, by focusing my efforts on a game framework and accompanying editor. I won't go so far as to call it an engine yet, because I don't feel it's grown into that level of organization; it's more of a bunch of helper classes that I continually refine.

Over the past however long it's been, I've been slowly trying to incorporate these meanderings into programming a gui for this project. Where I last left off in my journal, I was looking at, I think, WinForms with XNA. I've since decided to abandon managed languages for my coding-for-fun, since they always seem to remind me too closely of programming in java at work. I really only have the attention span to master one VM's intricacies, and as a result of my job, that VM is java's.

Where I left off in my journal, Xna's content pipeline was annoying me. I was having a hard time getting my graphics framework to take advantage of it while still having the editor able to load and compile resources at runtime in a sane manner. All of the problems I was dealing with reinforced the realization that this wasn't what Xna was designed for. It felt to me, anyway, that its strength lay in putting together small prepackaged games that could easily be run on the Xbox, and not so much in being a graphics API that happened to be implemented in .net.

I decided to abandon Xna in favor of pure DirectX, and was soon overcome by the difficulties involved in managed/native interoperability. I experimented quite a bit with Qt, but ultimately decided I didn't really need a cross platform gui library when I was only targeting Windows.

I have been recently looking into MFC. I may end up liking it, or it may not be at all what I'm looking for, but at the very least I'll have a little experience with another gui toolkit.

Hopefully more regular updates soon.

I've decided that designing an input system at this stage of the game would be a little premature, given that I really only *need* 2 mouse axes to move a camera and 2 keyboard keys to move forward and backward.

For the mouse, I override OnMouseUp and OnMouseDown. When the right button is down, I freeze the cursor at the center of the control, hide it, and from then on calculate the delta x and delta y from center. I multiply this by a factor of the game time and rotate the camera by it. That way, normally the cursor is free to move about the control and be displayed, but when you right click, the cursor disappears and you're in camera move mode. Later I'll still need to implement some sort of picking so the left click can be sensitive to objects in the scene. I had a ScreenProjection class when I was using OpenGL that handled translating screen coordinates into world coordinates based on gluUnProject(), but I'll cross that bridge when I come to it.
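
Stripped of the WinForms plumbing, the per-frame math is just this (a generic C++ sketch, not my actual handler):

struct Camera { float yaw = 0.0f, pitch = 0.0f; };

// Right-drag mouse look: rotate by the cursor's offset from the control's
// center, scaled by a sensitivity factor and the frame time.
void MouseLook(Camera& cam, int mouseX, int mouseY,
               int centerX, int centerY, float dt, float sensitivity)
{
    float dx = float(mouseX - centerX);
    float dy = float(mouseY - centerY);

    cam.yaw   += dx * sensitivity * dt;   // rotate around the up axis
    cam.pitch += dy * sensitivity * dt;   // look up/down

    // ...then warp the cursor back to (centerX, centerY) and hide it.
}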

The keyboard was a little more tricksy, until I figured out that when you subclass System.Windows.Forms.Control, you have to manually call Focus() in your MouseDown event handler to give the control focus, and therefore make it fire KeyDown and KeyUp events. After spending an hour learning all about this, I realized that KeyDown and KeyUp events are not fired in a realtime enough fashion for a simulation type application to use; the events respect the system's key repeat rate. Well, that's no good. Also, this was an event driven setup, so where would my game time for constant animation timing come from? Hmm, maybe I could track a velocity variable with KeyUp and KeyDown events and scale it by the game time in my Update() method... no, that won't do: it uglies up the control class, and how does that work for multiple keys down, etc.? I figured, to heck with it, I'm using Xna, I might as well use Xna, so in the update method I grab a KeyboardState and check for my two keys there with the handy IsKeyDown() method. Problem solved. Could I probably do the mouse this way? Maybe, but it's working the way it is right now, and I'm not a big fan of fixing stuff that's not broken, especially when I have more features on the 'unimplemented' list than the 'implemented' one.

So there you have it. The changeset I committed was only like 15 lines of code or so, and best of all, it confirmed that the suspicion in my last entry was correct: I click File->New, select a terrain size, pan the camera around, and there's my terrain. Granted, it's only a bunch of triangles on the x,z plane, there's no texture, and it's all flat, but at least what IS being rendered is being rendered properly.

The next area I want to work on is getting a texture on said terrain. Looking into this, I realize that in the before time, with OpenGL, some of the render engine code spilled into the model code in the form of the texture manager. I had the level store (model) set up so that anytime it encountered a storable that needed textures to render itself, it did a lookup in the renderer's texture manager. That worked for the time being, but Xna has exposed it for the bad idea that it was. If I port that design straight across, I'll end up with Texture2Ds in my level store, which I have no idea how to write out to a file, nor should my level editor concern itself with reading them in (that would involve the content pipeline, which I'm trying to keep out of my editor). So my level store should store just the raw image data, and maybe the size might be useful as well. Then somehow I'll have to find a way to transform some raw image bytes into a Texture2D. I'm not sure how that's going to happen yet.

I'd really like to document the system I have set up for loading and saving the level store here, but I typed it up, and reading it over, either it's not as elegant as I thought it was, or my technical writing skills are not as elegant as I thought they were. Either way, that'll be an entry for some other time.

The outcome of my last entry seems blatantly obvious now: I should continue on and save the level in a specific format that's easy to read and write from the level editor, and write a utility later that will 'Content Pipeline-ize' all the assets based on the level file. Oh well, if an hour of typing it all up and reading it back a few times helped my brain work it out, then it wasn't a waste of an hour.

I spent some time this weekend setting up a rendering framework using Xna. I've decided on having a single interface, IGameObject, that all objects in the game will inherit from. I also have various other interfaces that help group the game objects and provide a few methods that all objects of a certain type should have. For example, ISceneObject has no methods, but an object inheriting from it tells the framework that it must be part of the scene graph. Similarly there are IRenderable, IUpdateable, IInitializeable, and so on.

I've set up a kind of game object index in which the various manager classes store their objects. The index basically contains multiple hash maps, each one storing references to objects implementing a certain interface: one for updateables, one for renderables, and so on. I figure that will be more efficient during rendering and updating; for example, when update is called, the index iterates over the entire updateable map, calling update on everything in there, as opposed to iterating over one master hash map and checking the type of each element as it goes. Indexing with multiple hash maps displaces this type checking from happening every frame to once per object, when it's first added to the index.
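
Sketched in C++ for illustration (the project itself is C#), the point is that the type check happens once, at add time, not every frame:

#include <unordered_map>

struct IGameObject  { virtual ~IGameObject() = default; };
struct IUpdateable : virtual IGameObject { virtual void Update(float dt) = 0; };
struct IRenderable : virtual IGameObject { virtual void Render() = 0; };

class GameObjectIndex
{
public:
    void Add(int id, IGameObject* obj)
    {
        // one type check per object, at insertion time
        if (auto* u = dynamic_cast<IUpdateable*>(obj)) m_updateables[id] = u;
        if (auto* r = dynamic_cast<IRenderable*>(obj)) m_renderables[id] = r;
    }

    void UpdateAll(float dt)
    {
        for (auto& kv : m_updateables) kv.second->Update(dt);
    }

    void RenderAll()
    {
        for (auto& kv : m_renderables) kv.second->Render();
    }

private:
    std::unordered_map<int, IUpdateable*> m_updateables;  // one map per interface
    std::unordered_map<int, IRenderable*> m_renderables;
};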

This is probably not the final solution I'll stick with for scene management, as it's little more than hash maps of game objects. Eventually the editor will get to a point where I can create pretty complex scenes, and this brute force rendering won't perform very well. At that point I'll need to research and refactor some things. The current solution just needs to get me to that point, which I think it can do without taking months to code up.

This evening I finally got around to starting to uncomment the large portions of the editor's code I had to comment out while replacing the OpenGL based system with the Xna one. I'm fairly certain I'm able to add a terrain to the scene, and its rendering code is being called, but I can't be sure it looks right, because the keyboard and mouse input code has been pretty much deleted and needs to be rewritten. That being the case, the next area I'll be designing is the input system. Hopefully in my next entry I'll post some of the details of that design, since until it happens I'm kind of at an impasse; manipulating all the camera rotations and translations via the debugger is not a very efficient way to verify your rendering code.

I haven't posted in quite awhile, partly due to real life work, and partly due to a bit of design block, which I think writing it out in an entry here has helped me get through.

First, a bit of status on my implementation so far:
I've set up a basic rendering framework implemented in Xna as a game library project, which my Winforms app and a regular Windows Game project each utilize. I did this by modeling the Winforms app on the sample from the Creators Club site. It was pretty straightforward, so I won't go too deep into detail. Basically, my framework is an empty shell with the standard init, update, and draw methods exposed, plus access to a scene graph and various other content managers. It is up to the clients of this framework to create and set up the GraphicsDevice and supply the framework with a reference to it. In the Winforms app, I've created an XnaControl which subclasses System.Windows.Forms.Control, sets up the GraphicsDevice, and supplies it to the framework. In the Game project, the Game class initializes and supplies the GraphicsDevice.

It is also up to clients of the framework to maintain a 'heartbeat' and call update and draw regularly at their discretion. I hook the Application.Idle event in Winforms and use a Stopwatch to keep time there, and just let the Game class do its thing in the Game project. So I've got three projects set up in a single solution: the framework core, the Editor, and a TestGame project. I fire up the Editor and see my wonderful hardcoded spinning triangle, and when I fire up the TestGame, I see the same thing. So far, things are pretty solid.

For the past few days I've been researching the Content Pipeline (which I'll probably be typing a lot, and will refer to as the cp) and trying to figure out how it best fits into my level editor design, or any Xna based level editor design, for that matter. The cp probably has no place in a standalone level editor, as there's way too much on the fly loading and editing of content going on to gain any benefit from it. In my case, however, my level editor's sole purpose is to produce what I'm calling a 'game pack': a bunch of content that my Xna engine can load in and run as a game. Also, to add another variable to the equation, my editor and engine both share the same core rendering engine, so the in-memory format of loaded resources has to be consistent.

Hopefully the problem is starting to become apparent. With the editor and engine being driven by a common game library, there needs to be a common in-memory format for the things they use: models, textures, terrains, maybe even cool stuff I haven't thought up yet. Loading content from disk is fine; it's in the little xnb format, and I can use the cp to load it up and spit out my in-memory objects to send to the scene graph. Content generated out of nothingness, or edited on the fly by the editor, I'm not so sure about. What do you input to the cp to generate the xnb files?

Using a terrain as an example, suppose I create a custom content processor and whatnot to handle turning a heightmap and a few textures into a nicely packaged xnb file, and loading said xnb file into an in-memory object. Now, using the editor's tools, I adjust the positions of some of the vertices, paint some roads and stuff, and generally change the in-memory object all around. How do I store those changes back to disk? Further, what about content that has no source files, that's just created out of nothingness using the editor? There seem to be a few paths forward.

I could store all these things on disk in some format I come up with, to be able to save my level editing progress. This format would be usable by nothing other than the editor. Then I would write a utility that uses the cp to read in that format and produce a game pack: a set of xnb's and descriptor type resources that can all be read into the game by the cp. This provides a nice disconnect between the editor and the cp. Maybe down the road I could write a different exporter, export to a usable, well documented format, and be able to share my editor with the rest of the world.

Another route is to follow a blog post I found on invoking only the write half of the cp to write out in-memory objects to xnb format (found here: http://nickgravelyn.com/2008/10/creating-your-own-xnb-files). I could go this route and have the editor's default save format be my mythical 'game pack'. A few drawbacks with this include the pretty shady way you have to invoke pieces of the cp, using reflection to access internal and protected methods and constructors, and that doing so would make the editor dependent on the full Game Studio install rather than just the Xna redistributable.

Maybe the solution is a hybrid of these two. The default save operation writes out to some format I come up with, and I write a separate plugin exporter (or even a standalone converter utility) that is able to export to a 'game pack'. That way only the plugin depends on the Game Studio install, and if you don't want that, the editor requires just the Xna redistributable.

The more I consider it, and consider what the cp was intended for, the more it seems some variation of the external utility to process content is less square-peg-round-holey than trying to have an editor whose save format is a compiled binary format optimized for loading on a console.

I'd be interested to know if anyone else is attempting the same kind of thing I am, or if there are much easier ways to go about this. Also, I should point out I'm very hesitant to totally strip out the cp, because I do intend to one day run games created with the editor on the Xbox, and the consensus seems to be that loading content on the Xbox is much too involved without the cp; something about only having access to file io functions and reinventing the wheel several times over.

Tonight's entry will be a short one, mainly because I was playing around with the file manager and wanted to put up a screenshot of my work in progress level editor:

[Screenshot: the level editor so far]

As I mentioned before, it's coded in C#/Winforms. My philosophy is that if you're going to code something for Windows using something like Winforms, make it look like it was written for Windows. UIs are a real pet peeve of mine. I can always pick out the areas where two components don't quite line up like they should, where the programmer went, 'Meh, it's close enough for me, who's gonna notice?' Well, I am. Things like that instantly make a product reek of poor quality to me and adversely affect my perception of it from that point on. (And before you call me a hypocrite, I am well aware the separation between my Terrain Tool and my Terrain Layers controls is a sloppy mess and will soon be dealt with.) Keeping as close as possible to the standard look and feel also removes the temptation to get carried away with highly customized aesthetics, and helps keep me focused on what the project is all about.

Keeping with the standard Windows look and feel, the first thing I set up was a framework where the window restores its size, position, and state, even the position of the splitpanel's separator and which tab you had selected when you last exited. Most of the form is covered by a splitpanel, with the rendered area as the left component and the tab panel as the right. I want to keep this layout pretty consistent, and I plan in the future to split the vertical space of the tab panel in two, so that the PropertyGrid is always visible no matter which tab is selected, containing context sensitive properties about whatever happens to be selected within the current tab.

Like I said, short entry tonight, hopefully I'll get back into things this weekend a bit.

If you're reading this, it means I was able to transcribe the design in my head into a human readable form. I'd have probably scrapped this entry if I found it not coming together very well. With the level editor, I started out not thinking too much about design, just cowboy coding it up and shooting from the hip as I went, but soon my MainForm.cs collapsed under its own massive bodice. So, on to refactoring; that's a healthy activity, right? I'll check that out. I discovered C# has the idea of partial classes: you can split the implementation of a class over multiple files. I'm not sure what the proper use case for that is, but I'm guessing it was not so you can cram all your logic into one massive class and still not lose yourself inside a single file. I finally broke down and realized that even though I was using a language that yielded fairly quick results, quick results are very different from genuine progress, and some design thinking would be required up front.

What exactly is a level editor? At a pretty high level of abstraction, it is a tool that creates and maintains a file of certain data, written out in a format that can be read in by another application. In short: data manipulation. Thinking about data being at the core of the application, one component emerges: a data store. The data store is the program's internal representation of the data needed to render a 3d scene. This data can be an array of height data to create a terrain, a map of bits to use as an image to texture a tree; all of the plain old raw data that makes up objects in a 3d world. Data from the store is also all we want to persist to disk, since it is so non-specific and therefore most flexible. Using data from the store, a render system should be able to build and render these 3d objects. So there is another component: the render system. The renderer will need to convert the data from its store format into a format that can be easily passed off to the graphics API for rendering.

Something's missing, however. With just these two components, there's kind of a chicken-or-the-egg thing going on. There needs to be a component that manages and controls actions on the data in the data store. At a high level, it has to support the basic add, remove, and edit functions carried out on pieces of data in the store.

We still need a place for the user to interact with the application. So the final component, related to the renderer in that it presents state to the user, is the GUI. The GUI's main function is to translate user input into actions the action component can understand and carry out on the store. The GUI and the renderer blur together a bit, in that both present information from the store back to the user: obviously the scene itself for the renderer, and maybe a tree to visually represent the hierarchy of objects in the level for the GUI. They can also both accept inputs, the obvious case being clicking buttons in the GUI, but maybe not so obvious is interacting directly with the scene: selecting and moving objects in realtime, painting terrain, you get the idea.

Hopefully, if you've stuck this entry out this long, you recognize this as a slightly modified model view controller pattern. The store is the model, the action component is similar to a controller, and the combination of the renderer and the GUI makes up the view.
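
Boiled down to skeleton classes (a C++ sketch for illustration, though the editor is C#; the names here are made up), the separation looks something like:

#include <vector>

struct TerrainData { int width = 0, height = 0; std::vector<float> heights; };

// The model: raw, API-agnostic data, and the only thing persisted to disk.
class DataStore
{
public:
    TerrainData terrain;
};

// The controller-ish piece: all add/remove/edit actions go through here.
class Actions
{
public:
    explicit Actions(DataStore& store) : m_store(store) {}
    void ResizeTerrain(int w, int h)
    {
        m_store.terrain = { w, h, std::vector<float>(size_t(w) * h, 0.0f) };
    }
private:
    DataStore& m_store;
};

// Half of the view: turns store data into API-friendly objects and draws them.
class Renderer
{
public:
    void Render(const DataStore& store)
    {
        // build/refresh vertex buffers from store.terrain.heights, then draw
    }
};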

My goal with this project is to implement skeleton components solidly and abstractly enough that adding new features only requires minimal augmenting of existing classes, mostly just extending and overriding them.

I have partially succeeded in at least the data store area, which I'll have to save for my next entry.

First Entry

I have decided, after being a long time lurker and short time poster, to break down and start one of these journals to document my various meanderings in programming for fun. People who aren't programmers don't understand how we can sit and work all day programming, then come home and 'relax' by doing more of it. My response is always to point out that I never code at home in the same language I code in at work; that would just be weird.

Anyway, my current project is putting together a basic 3d level editor, which doesn't have a name yet. I'm using C#/Winforms (WPF would have been nice, but the issues I ran into there might be another post sometime). The renderer, being very basic, is set up using OpenGL, which I'm not entirely certain will stay that way, and I need to make up my mind before I get too far into its implementation. Being that it's C# and Winforms, I'm accepting the fact that I'll be tied to the Windows platform. (I recently bought a Macbook, and with that, thought maybe a cross platform, API independent project might be fun. After struggling for a few months, I likened it to mowing your lawn with scissors: it could be done, but only realistically if you were really passionate about cutting blades of grass.) So if I'm tied to the Windows platform, and ok with that, I might as well go with DirectX for graphics. I've used both in the past and generally prefer the DirectX API to OpenGL's. C# being a managed language, then, the choice seems clear to use XNA, which sounds better and better the more I think about it.

To back up a little, I decided to undertake the level editor project when I was designing the part of my game framework that loads levels. Rather than cobble together various importers, or hatch a chicken before an egg, I decided to take a break from C++ and start the level editor project. I call it a game framework because I can't even conceive of its design resembling that of a game engine just yet. Many future posts will probably be devoted to talking about that, but for now, I am using C++ and am considering DirectX as opposed to OpenGL. That brings me back to my reasoning for using XNA with the level editor project: in at least some aspects, it can function as a prototype for the framework.

Hopefully next time it won't get so late so quickly, and I'll document some of the design of the level editor and the current issues I'm trying to work through.
