I have to admit, looking back at the original Moose images I posted up, they look a little, well, crap.
I've kicked off another iPhone adventure, but am slowly skinning, rigging, UV mapping, and texturing the Moose in the background. I've only got the antlers and goatee left to do before I can start on his "Matrix" style clothes or some animation.
Here's a quick screenshot of him in engine with shadows and SSAO - you lose some of the detail, but get some moodiness to make up for it:
EDIT: the above image is crazy dark on my work machine - much more so than my laptop. Trying to work out where the gamma problem is, so I'd be interested to know how this looks to you - is it almost unreadably dark?
I've started to do some preliminary prototyping for my next project, and one of the things I need is (very basic) ragdolls. So over the next few entries we'll be following the buildup to get there.
My first thought was that, given ragdolls are pretty complex animals, I'd have some Ragdoll controller which just did everything, and have the physics code recognise it and do ragdoll stuff. This should make it super easy to switch them on/off, as you'd just enable/disable that controller. But obviously, this is a pretty specialised solution, and doesn't get me any closer to doing similar things like rope bridges, swinging signs, bobbing pony tails, wobbly trees, etc. So I parked that for a bit, and decided I'd start off by adding support for building generic physics systems in objects - and see how this handled ragdolls before adding anything ragdoll specific.
Based on this, I've set about adding basic support for some simple constraints, starting with a ball joint. There's been a truly staggering amount of work behind the scenes to get them in (starting with re-writing the Constraint management system used for contacts to handle permanent constraints, and ending up having to re-write some nasty corners of portals and serialisation to handle serialisation of object references within other objects). But it all seems to be working now. Here's a little video of them in action. You can see that the switch is hooked up to drive the "strength" of the top constraint that attaches the block chain to the floating block, so that turning the switch on and off is like turning on a magnet.
I added an "Elasticity" parameter to the constraint, which is why the block chain is a little bit springy. You can turn this off, but it doesn't feel quite as "fun" when they're totally instant and infinitely strong.
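To make the strength/elasticity idea concrete, here's a minimal sketch of how a strength-scaled ball joint correction might work. All the names here are my own, not the engine's: the real solver is certainly more involved, but the idea is that Strength scales the positional correction (0 = magnet off, 1 = rigid), and Elasticity deliberately leaves some error uncorrected each step to give the springy look.

```cpp
// Hypothetical sketch of a strength-scaled ball joint (names are mine,
// not the engine's). Each solver iteration pulls the two anchor points
// together; strength scales the correction, and elasticity softens it.
struct Vec3 {
    double x, y, z;
};

Vec3 ballJointCorrection(const Vec3& anchorA, const Vec3& anchorB,
                         double strength, double elasticity)
{
    // Raw positional error between the two anchor points.
    Vec3 error = { anchorB.x - anchorA.x,
                   anchorB.y - anchorA.y,
                   anchorB.z - anchorA.z };
    // Elasticity leaves a fraction of the error uncorrected each step,
    // so the chain catches up over several frames instead of instantly.
    double k = strength * (1.0 - elasticity);
    return { error.x * k, error.y * k, error.z * k };
}
```

Turning the switch off is then just driving strength to zero, which makes the correction vanish without removing the constraint itself.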
The next step is to extend Skeletons so you can use constraints within the bones of a skeleton ... makes me feel sick just thinking about it really.
It's with great pleasure that I introduce to you the newest member of the Milkshake family: Moose!
I haven't decided on a name for him. I'm writing games under the moniker Chocolate Moose Games - so perhaps he'll be called Chocolate. But I quite like a few other foods too, so we'll have to see how that turns out.
As you can see, at this point, there's no texture on him ... and indeed, there's no UV mapping, no skeleton, no skinning, and no animation either - but as an initial mesh, I'm pretty happy with the volume and amount of character he has. Just for reference, here's the cow before and after texture so you can see how much (or rather how little) detail is actually in the underlying mesh.
One of the things I've tried to do is make him in a more modular way than the cow so that I can use the base mesh to make other characters in the Moose family. I haven't decided how far I'll want to (read: be able to) push the variations on top of the one base mesh, but just with a few rough props you can get some pretty varied characters.
You can read more here: www.melongolf.com
The worst bug so far is that I managed to spell Birdie as "Birdy" - but I'm not sure that justifies another few weeks in the iPhone review queue just yet, so I'll wait and see if anyone hits a bigger problem.
It's only 99c at the minute, so if you've got an iPhone/iPod touch, check out Melon Golf on the app store and support a fellow game-dever.
There - shameless self-promotion done.
If anyone's got any comments, suggestions or questions, I'd love to hear them.
Starting to add some dialog support this week - it's still a work-in-progress at this point. I've got the basic "Get this character to deliver this line of dialog" working, but now need to tackle the harder cases of working out how to present a conversation, and handle player input.
When I first started, I'd always planned to present dialog/narration to the player using comic-book style speech bubbles. I had a lot of this working (ported from an older incarnation of the game), but when I stepped back and looked at it, the honest truth was that it looks cool in a screenshot, but is totally unplayable (hard to read and confusing) in a game itself. It also suggests to the player that the game should still be playable while dialog is on-screen (as there might just be a small, one-word dialog off in one corner of the screen that barely grabs your attention), and it feels really awkward when the whole input system is locked out waiting for you to acknowledge the dialog. I've backed off for a more subtitle kind of approach that isn't as cool, but should be a lot easier to read and clearer as far as "your game is frozen until we go through this dialog".
While I was playing with it, I also scratched a long-standing itch to see what things might look like with a toon shader. Still trying to work out whether I really like it, but it does remind me of the old monochromatic spectrum isometric games, which makes me all warm and fuzzy. I still need to sort out some way to monochromatically layer a detail texture over this and see how that changes things, but in the meantime, I'm very interested in any opinions on the look one way or the other.
I still need to writeup the last few bits of the rendering work (for fullscreen effects), but Milkshake (the cow, not me) has been off shopping for spaceships this week, so I thought I'd take the opportunity to show you the two he's currently test driving:
I've started messing about with how these things actually work in the scene, but still haven't decided the best way to get the cow into and out of the ship. I suspect the "proper" solution is to add some multi-object animation (so a single animation defines the joint animation of the ship and the cow that puts him into and out of the driver's seat) - but this is something I don't support right now, and is potentially quite a bit of work to do well. Which leaves me wondering whether I should just "hack" some cheap and nasty way of getting him in and out (that is presumably not so well animated). I do have a little ship-only animation working: at the start of the level, the rocket ship descends into the level, lands, and opens its doors (if you look closely at the ship in the background, you can see the doors are left open) - but until I can move the character into and out of the ship, it's really just a stand-in.
My eldest son has decided he wants the cow to fly around in the saucer at the front, so in lieu of a better way to pick which spaceship to go with, that'll be the winner.
I've been on a massive Lighting bender this last week. Not a huge amount of this is immediately obvious in a screenshot, but it's been a serious hack for a long long time now.
The Light objects themselves used to be created as renderer specific subclasses (GLLight, DXLight). The problem with this is that those subclasses were implicitly really GL*FixedFunction*Light and DX*FixedFunction*Light. So there were these Enable/Disable methods on the light which simply turned the light on in the underlying fixed function pipeline. This was obviously useless for hardware shaders (which essentially ignore fixed function lights - unless of course you want to roll the performance dice with GLSL). To address this, the Light class has now become just a logical description of the light, and each class of material (GL fixed function, Cg, HLSL, GLSL) has its own internal LightManager. Whenever a shader of that type is asked to render anything, the first thing it does is pass the current render context (which includes the active lights, view matrix, base scene transform, etc) to the manager to setup the lighting that type of shader needs. This gets cached across different shaders, so we only setup any one configuration of lights/view once per frame. This also means that all the fixed function state (lights, view matrix, world matrix) isn't being set at all when using Cg, which makes me happy.
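The caching part of that scheme can be sketched in a few lines. This is my own guess at the shape of the design (all names hypothetical): each material class owns a manager, and before a shader of that class draws, it hands over the render context; the manager only re-uploads light state when the context's lighting configuration actually changed since the last call.

```cpp
#include <cstdint>
#include <vector>

// Hedged sketch of a per-material-class LightManager (names are mine).
struct RenderContext {
    uint64_t lightConfigId;   // id/hash of active lights + view matrix
    std::vector<int> lights;  // stand-in for real light descriptions
};

class LightManager {
public:
    // Returns true if it had to (re)apply the lighting state.
    bool setup(const RenderContext& ctx) {
        if (ctx.lightConfigId == m_lastConfig)
            return false;           // same config as last time: skip
        applyLights(ctx.lights);    // expensive: push state to the API
        m_lastConfig = ctx.lightConfigId;
        return true;
    }
    int applyCount = 0;             // exposed here just for testing
private:
    void applyLights(const std::vector<int>&) { ++applyCount; }
    uint64_t m_lastConfig = ~0ull;
};
```

The payoff is that ten Cg shaders rendering under the same lights in the same frame cost one light setup, not ten.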
The Cg shader lighting is implemented using Cg interfaces with unsized arrays. It's super cool ... though I still need to add a Cgprogram cache so I can support multiple light configurations in a given scene without suffering shader compiles in the draw loop. So now, by simply including "Lights.cg" at the top of your shader and dropping a simple light loop into your pixel shader, you can get nice per-pixel lighting driven straight off the lights in the scene.
Next I added Cg shader and GL fixed function support for ranged point lights and spot lights, and refactored the stencil buffer shadow code so it could support point/spot lights. Seeing nice long shadows dancing around is pretty hypnotic - even if it may not be the best gameplay setup (Doom3, I'm looking at you).
Finally, I fixed a lot of hitherto mysterious crosstalk bugs in the rendering code (e.g. if you tumbled the camera in the editor, the lighting on all the objects in the palette would tumble around, and when you walked through a door, all the lights on the UI would flip around based on the coordinate change through the portal). It turned out a lot of rendering code went straight to "The Camera" to find out how the scene was being viewed, while other code just assumed the current GL view matrix is the one it should be using (lights were an example of this). All of this madness was fixed by making sure every object (light, shader) goes through the render context object that it has been told to render with to ensure it's always using the correct matrices. The UI (which used to inherit render state from the scene render) now has a formal "identity" render context that makes sure it renders the same way irrespective of what's happening in the scene.
At the end of last entry (which was now a long time ago ... my journal entries are lagging months behind the code unfortunately), we'd decomposed the render system into a set of reconfigurable building blocks, but those building blocks were still using the default fixed function render pipeline. Today, we take a giant leap into the late 90s and add hardware shader support.
My goal for hardware shader support was simple: from the game's/designer's point of view, any Cg or HLSL effect should look like a native object/material (i.e. totally indistinguishable from a built in C++ class). I didn't want the designer to have to create a CgShader object and then assign the shader to it. I wanted the game to automagically create dynamic classes on the fly that represented each of the Cg/HLSL effects it finds, and those classes should expose properties for each of the Cg/HLSL uniform parameters.
This does two things: it hides "how" a material is implemented (fixed function vs shader, Cg vs HLSL, C++ vs effect, etc) so that I can change how a material is implemented without breaking files that use it, and secondly, it allows you to connect or animate shader parameters just like any other object in the game.
Here's a shot of the current integration:
All the Shaders you see are just sucked right out of the Shader directory at startup (hence names like "shadow_PCSS" and "OldDepthOfField"). If you look at the Plastic object, you can see it has Specular and Diffuse powers, controls for the Fresnel reflection, etc., all driven straight off Plastic.cgfx. It's actually pretty robust - you can drop most of the Cg effects off the web in and play with them.
Under the hood, this is built on top of a new "MetaObject" object that lets you dynamically build and then instantiate new types for the Object system on the fly. In this case, the CgShader class (indirectly) derives from MetaObject and uses it to dynamically register a new class for each .cgfx file it finds, creating properties for each Cg parameter in each effect. It then overrides all the Property methods to pipe them down into the Cg runtime (so that setting a game property will actually change the Cg program state if that instance is actually bound to the Cg program).
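As a rough illustration of the MetaObject idea, here's a minimal sketch (all names are my own, not the engine's): a dynamically registered class discovers its properties at runtime (here from explicit calls, in the engine from the .cgfx file), and its property setters pipe values through to an external runtime.

```cpp
#include <functional>
#include <map>
#include <string>

// Hedged sketch of dynamic, runtime-discovered properties.
class MetaObject {
public:
    using Setter = std::function<void(float)>;

    // Called once per discovered parameter (e.g. per Cg uniform).
    void addProperty(const std::string& name, Setter s) {
        m_setters[name] = std::move(s);
    }

    // Setting a property forwards the value to its registered setter
    // (in the engine, that setter would poke the Cg runtime).
    bool setProperty(const std::string& name, float value) {
        auto it = m_setters.find(name);
        if (it == m_setters.end()) return false;
        it->second(value);
        return true;
    }

private:
    std::map<std::string, Setter> m_setters;
};
```

A CgShader-style subclass would call addProperty once per Cg parameter it finds in the effect, so animating the game-side property drives the shader state directly.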
I also knocked up a quick DDS file reader so I could support some of the sample textures ... but there are still lots of limitations in texture support in general (no 1D, 3D or cubemaps, no hardware support for texture compression, etc).
Next time, we'll take a look at lighting. Keen eyed readers may already have found a clue in the screenshot (while the Metal shader has the typical hard coded light parameters, the Plastic one is strangely devoid of any light inputs).
Following on from the "teaser" depth-of-field shot in the last entry, today we'll start to look at how Milkshake evolved from a monolithic single-pass fixed-function render loop, to a modular, multi-language-programmable-shader based architecture. I must admit, hardware shader support is something I've put off for a long time for two very different reasons: one is I wanted to try and make some progress on gameplay first, and secondly (and most importantly) because I didn't really have a clue how to do it. This entry covers phase 1 of the rendering overhaul ... and I'll fully admit this is the boring bit.
Before we get into it - I'll just warn you I was forced to use flickr for the pictures this time as I haven't been able to upload to gamedev for 4 days and counting ... hopefully it works ok.
The best advice I ever heard when tackling a large complicated bit of work is: refactor until it is trivial. The idea being that, instead of fundamentally breaking your code with some wholesale change and then spending months trying to make it work, you make a series of small cleanups/improvements/extensions to the code that eventually makes what you're trying to do both very safe and very simple.
In the spirit of that advice, the first step in adding hardware shaders doesn't involve hardware shaders at all! All we're going to do is take the current fixed-function rendering code (which supports material sorting, transparent object sorting, and stencil-buffer shadow volumes) and decompose it (i.e. refactor it) into a series of rendering building blocks which we can then use to build more complicated rendering loops. If we do our job right, the game should look exactly the same before and after ... but obviously under the hood, it should be a little cooler.
The first step was to turn the old "Renderer" object (which did all the work) into a virtual base class for any object that wants to render objects in the scene. This obviously means we're going to pay a virtual function call overhead that we didn't before (and traverse the scene multiple times for each render pass), but this is just the price of admission for a more flexible renderer. I then moved all the basic scene rendering code into a new RenderScene class, and put all the old UI rendering code into a RenderUI class. Finally, I pulled the buffer clearing out into another object, and gave all the objects In and Out properties that allowed you to assemble them into a sequence.
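The decomposition above can be sketched as follows (class and method names are my own stand-ins, and the real passes obviously do real rendering rather than logging): each block is a Renderer, and a frame is just a walk over the assembled sequence.

```cpp
#include <string>
#include <vector>

// Sketch of the monolithic renderer decomposed into chained passes.
struct Renderer {
    virtual ~Renderer() = default;
    virtual void render(std::vector<std::string>& log) = 0;
};

struct ClearPass : Renderer {       // was: buffer clearing
    void render(std::vector<std::string>& log) override { log.push_back("clear"); }
};
struct RenderScene : Renderer {     // was: basic scene rendering
    void render(std::vector<std::string>& log) override { log.push_back("scene"); }
};
struct RenderUI : Renderer {        // was: UI rendering
    void render(std::vector<std::string>& log) override { log.push_back("ui"); }
};

// Walk the In/Out sequence; one virtual call per pass is the price
// of admission for being able to reorder and extend the loop.
std::vector<std::string> runFrame(const std::vector<Renderer*>& passes) {
    std::vector<std::string> log;
    for (Renderer* p : passes)
        p->render(log);
    return log;
}
```

New passes (shadows, diagnostics, fullscreen effects) then just slot into the sequence without touching the existing blocks.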
With the basic rendering cleaned up, I then ported the old shadow code onto the new Renderer interface (so it runs as "just another scene rendering pass"), and hooked it into the render loop.
And then I made all the old physics diagnostics part of a new RenderDiagnostics pass. The cool thing here is that the blocks in the render loop could be any object in the game: you could run a script in the middle of the render loop, hide something, move things, change material properties ... anything you want really.
The final step in the fixed-function cleanup was to extend the RenderScene block to let you select the Camera you wanted to use, filter which passes you wanted to draw (opaque/transparent), and override the shader if you wanted. This last one is particularly powerful once we start looking at fullscreen effects (as it allows us to do things like depth passes, normal passes, etc) - but for now, here's a trivial example where we render the whole scene using a white Lambert shader:
And that brings us to the end of the fixed-function cleanup. There are a couple of tricks we've skipped over for now (the render Target object being the main one), but on the whole, we've now turned our monolithic renderer into a set of rendering modules we can flexibly assemble into a render loop. And while we haven't got any real eye-candy working just yet, we've got the building blocks we need. All we need now are hardware shaders ...
It's been a long time coming, but hardware shaders have finally reached the distant world of Milkshake! I'm going to try and write it all up over the next few entries - but for now, here's a quick depth-of-field shot (in true "unplayable-effect-overkill-but-I-think-it-looks-cool" style) to tempt you to come back and read the how-does-it-work entries to follow ...
In the spirit of the mighty Zzap64, welcome to the Milkshake Christmas Special!
Despite the overwhelming evidence to the contrary (i.e. it's been over a month since my last post), things have been very busy in the little world of Milkshake. It's time to give the Elephants tusks, so to speak.
We'll start the ball rolling with a problem that's been plaguing me for years: how do you synchronise sounds and other game engine events with the animation on the characters? It's pretty easy to know when an animation starts or ends - but everything in the middle is a bit of a mystery. I always try to avoid magic numbers in my code, but I have to admit, when the cow fires his little gun, there's a hard coded "wait 10 ms after playing the animation" hack in there to synchronise the shot with the kick-back in the animation. The same problem arises when trying to animate a punch, or play a sound when the character's foot hits the ground, or play a little "hup" sound when a character jumps, etc, etc.
Now I could start peppering my code with loads of hard-coded timing values to match the game events to the animation, but this approach never *really* synchronises the game with the animation (particularly as the animation loops and the playback speed varies); it doesn't handle long, irregular animations; it can get out of sync when the animation speed is tweaked; and any changes in the animation (or new characters) require changes to the C++. Very early on, I decided these timed events were really part of the animation itself. That way, an artist making an animation can just embed sounds, particle effects, combat events, or anything else directly into the character's performance. And the game code never knows or cares which sounds (or other events) are part of an animation; it just plays the animation back, and the animation itself injects the sounds/combat events as needed. Well, this all sounded well and good on paper - but I never got around to adding it, nor did I really know what it would look like ...
Enter the EventStream. The EventStream is a delightful little object you can attach to any animation in the game (and possibly use standalone too), that allows you to define a stream of objects (events) that get triggered at defined times along the stream. The event objects could be sounds, combat triggers, messages to send, AI tasks to execute, or really anything else in the engine. The EventStream is built on top of the base AnimationCurve class - so the event evaluation is perfectly synced up to the animation (i.e. better than millisecond precision with no chance of getting out of sync no matter how many times you loop the animation, even as the playback speed is varied). And it's also totally integrated into the Maya plugin (this was almost free too - I just had to implement a few data types I hadn't needed up until now) - so you can define the events right in the Maya animation and have it go straight through to the game.
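The core evaluation trick can be sketched in a few lines (this is my own reconstruction, with made-up names): events are (time, payload) pairs along the stream, and each update fires everything in the half-open interval (lastTime, nowTime]. Because it's driven off the animation's own local time rather than wall-clock deltas, it can't drift no matter how the playback speed varies.

```cpp
#include <string>
#include <utility>
#include <vector>

// Hedged sketch of EventStream evaluation (names are mine).
struct EventStream {
    // Events sorted by local animation time.
    std::vector<std::pair<double, std::string>> events;

    // Fire every event whose time falls in (lastTime, nowTime].
    std::vector<std::string> advance(double lastTime, double nowTime) const {
        std::vector<std::string> fired;
        for (const auto& e : events)
            if (e.first > lastTime && e.first <= nowTime)
                fired.push_back(e.second);
        return fired;
    }
};
```

When the animation loops, the caller would presumably split the update at the wrap point - advance(lastTime, length) then advance(0, nowTime) - so nothing is skipped or double-fired.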
This brought me abruptly to a small limitation of my engine: there's no sound support at all. I spent a few hours on a train trip starting to implement a nice wrapped OpenAL based sound system ... but after sleeping on it, I realised this isn't stopping you playing the game, so I put it on ice, and got back to my real goal of letting the Elephants attack the cow.
I eventually want to give the Elephants some nice tusk charge attacks, but for the time being, I knocked out a simple back-hand attack.
And then I attached a "Concussion" event to the animation's event stream at the point of impact (using a spherical collision volume to describe where the concussion should be applied).
I now needed to let the Elephant take a swing at the cow. I knocked out a quick "Play" Task, that allows an AI to play an arbitrary animation, and instructed the Elephants to play the Backhand animation whenever they touched the cow. To my dismay though, when I let the Elephants loose, they ran out, took one lusty swing at the cow, and then stood there ignoring our hero, even though he was standing right under their trunks so to speak. You see, while I'd exposed a bunch of AI connections for monitoring the objects entering and leaving the character's senses, I really hadn't put too much thought into any real AI use-cases, and as a result it was really hard to write continuous reaction code (e.g. while I can see enemies, attack them). The problem was that I'd just exposed the underlying C++ interface, assuming that if the C++ code could do anything with that level of interface, then the AI script could too ... a pretty poor assumption in retrospect. I thought about this over a weekend and decided on a new design that took a very different approach to sensors and reaction processing, resulting in something that is both easier to use, and more powerful.
Firstly, it eschews a reflection of the internal C++ implementation in favour of the simplest interface possible: sensors and filters now have a single input and a single output. Under the covers, there are actually several C++ methods that handle different sensor events - but at the AI design level, you always just connect the output of the previous node to the input of the next one: simple.
Secondly, it cleanly separates how you want to sense the environment (the sensors), from what you're interested in (the filters), to how you want to react to the objects you're interested in (the reaction schemes). I've talked about the sensors and filters before - but the reaction schemes are new. In the old design, every sensor and filter node exposed add, remove, count and object outputs in the rather desperate hope that you could attach reaction tasks at any point you wanted them. In truth though, this just made all the nodes more complicated, and (based on my test-case trying to have the elephant attack the cow), didn't allow you to assemble useful AI at all. In the new design, the new reaction scheme node takes care of accumulating the list of valid objects, and processing them until nothing is left. The first reaction scheme I've implemented is the "ClosestFilter". This node continually processes the closest remaining target. So now, after the Elephant finishes its attack, the reaction scheme sees that the cow is still a valid target, and immediately launches another attack.
And finally, the new design allows me to separate the sensor array from the code that processes it. I'll hopefully come back to the significance of this in a later journal entry.
With the sensor system re-written (and a few new AI Tasks like Guard and PlayerFilter), we can finally start to write some more interesting enemy logic like this:
Now, as a little challenge to the reader, see if you can guess what that little AI program would do BEFORE you click on the little movie below. One hint you might need is the "Dependent Block" construct: the GoTo block has a zero tolerance (which means it will never finish), but there's an output Task attached to it. In these cases, the original block runs UNTIL the next one exits. So, the Elephant will GoTo the player until it touches the player.
Once you've made your guess, you can see him in action in the moovie below:
I've got lots of other test AI going now. My favourite so far is a bunch of enemies that play tag with you.
Last week, we equipped our little Elephants with some sensors, so they could tell when the cow was sneaking up on them. This week, we're going to start giving them some more interesting things they can do about it.
But before we go there, there's some plumbing in our way. As I hinted in a few previous entries, the fact that the reaction handlers worked at all was much more good luck than good management. All the task scheduler was doing was running any activated task until it completed. So while most of the event handlers (like the turn-when-you-bump-into-something Moove task) seemed to be working, what was actually happening was that the Agent was being told to both move forward (the default behaviour) and turn (the event handler) at the same time, and the turn won because it was last in the list. And while we got away with this for really simple behaviours, things start to break once we start adding more complicated behaviours (as all the tasks running on top of each other start to clobber and fight with each other, dead-locking the tasks and making the agent start thrashing around on the screen).
So, my first job this week was to implement the default task scheduler. This little puppy works out which tasks should be running at any point based on the relative priorities assigned to tasks and the order they get run in. The default scheduler fell out a little easier than I'd been expecting - it's basically just a prioritised queue of tasks waiting to run, giving some special handling to sensors and parallel tasks.
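The selection rule at the heart of that scheduler can be sketched like so (my own structure, not the engine's): higher priority wins, and within a priority the earlier-started task keeps running, which is why a bump event at the same priority can't interrupt an in-flight GoTo.

```cpp
#include <algorithm>
#include <vector>

// Hedged sketch of the prioritised task pick (names are mine).
struct Task { int id; int priority; int startOrder; };

// Returns the id of the task that should run, or -1 if none are ready.
int pickRunningTask(const std::vector<Task>& ready) {
    if (ready.empty()) return -1;
    auto best = std::min_element(ready.begin(), ready.end(),
        [](const Task& a, const Task& b) {
            if (a.priority != b.priority)
                return a.priority > b.priority;   // higher priority first
            return a.startOrder < b.startOrder;   // then first-started wins
        });
    return best->id;
}
```

The real scheduler also has to special-case sensors and parallel tasks (as mentioned above), but the prioritised queue is the backbone.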
Anyway, with this in place, the Agents now diligently stop whatever they were doing when an event interrupts them, go off and execute the tasks attached to that event, and then go back to what they were doing.
In the Agent program below, the little elephant now stops his bump'n'turn path when he notices the cow and runs after him using the new GoTo task:
You can see that I've cranked up the speed on the GoTo block so he really takes off after you. Here's a quick movie of the program in action (noting that only one of the elephants has the chase behaviour on him):
The task scheduler ensures that any bump events don't interrupt the GoTo block (as they're at the same priority and the GoTo started first) - but you could easily change this by messing about with the Task priorities.
There are still a lot of big-ticket items to add to the task framework - like explicit multi-threading which will allow the designer to have a lot more control over how tasks are scheduled. But the default behaviour above should let me do quite a bit.
So, last time we had some Elephants walking around and changing direction when they hit something: pretty much the simplest sensor a game can have. This week, we build on that a bit looking at a more extensible scheme for sensing who and what is around you in the environment.
For any long-time readers, you might remember the automatic door in the "moonbase" test scene. This door opened when anything came near it using a ProximitySensor behaviour. Well, that's only partially true, as I decided I didn't want bullets to open the door, so I put a size threshold on the ProximitySensor so it ignored things smaller than a certain size.
Now, let me admit that, even if you remember that door and how it was implemented, I had totally forgotten. The compiler wasn't about to let me off that easily though, and was happy to spit out a duplicate symbol link error when I tried to add my new AI version of a ProximitySensor. So my first task was to bring the old ProximitySensor class over to the new AI Task framework in a way that would allow it to still work on non-AI things (like the door). This turned out to be pretty nasty in itself, but I won't bore you with the details of it.
Anyway with the refactoring done, here's a little ProximitySensor running as a Task in the Elephant's AI:
The observant amongst you may notice there's no longer a "Threshold" parameter on the ProximitySensor object. So going back to our door example, how would we stop the door opening for bullets? And while we're at it, how would I stop my Elephant attacking other Elephants? Or make this pressure pad only work when you drop one of those 3 crates on it? Or what about setting up a trigger volume that only triggers for the player, not any NPCs?
Where I'd originally started to build some of these criteria into the sensor itself (using the size threshold), it was starting to look like it would be quite cool to be able to plug in lots of different criteria here - and even combine them together to form more complex criteria (e.g. find the closest enemy, or the heaviest object, or the most valuable treasure). Based on this, I decided to split my sensor system into two halves: the sensor itself (which is totally unfiltered) and a network of filters that describes the types of things this program is interested in. The system is designed to be pretty extensible - you can add game specific sensors and/or filters (or perhaps even write scripted filters which just evaluate some script code to decide what to do) to add game specific AI (e.g. filter based on hit points, or mana, or faction, etc).
Currently, I have filters for object type, object name, object dynamics (size/weight/movability), and a group filter that checks to see if it's one of N specific objects. Any filter can be assembled into a tree (so you can branch on one condition, and then another, and then another - so you can attach behaviours to as specific a condition as you want). And I'm planning on adding higher-level filters that pick the best of the current inputs based on some criteria (like Weakest, Strongest, Closest, Furthest, Newest, Oldest) - which will allow your AI to use a Type filter followed by a Closest filter to target the closest enemy say.
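A composable filter tree along those lines might look like this (a sketch under my own naming, not the engine's actual classes): each filter accepts or rejects an object, and filters combine so a behaviour can be attached to an arbitrarily specific condition.

```cpp
#include <string>
#include <utility>

// Hedged sketch of composable AI/trigger filters (names are mine).
struct GameObject { std::string type; std::string name; };

struct Filter {
    virtual ~Filter() = default;
    virtual bool accept(const GameObject& o) const = 0;
};

struct TypeFilter : Filter {
    std::string type;
    explicit TypeFilter(std::string t) : type(std::move(t)) {}
    bool accept(const GameObject& o) const override { return o.type == type; }
};

struct NameFilter : Filter {
    std::string name;
    explicit NameFilter(std::string n) : name(std::move(n)) {}
    bool accept(const GameObject& o) const override { return o.name == name; }
};

// A branch of the filter tree: both children must accept.
struct AndFilter : Filter {
    const Filter* a;
    const Filter* b;
    AndFilter(const Filter* x, const Filter* y) : a(x), b(y) {}
    bool accept(const GameObject& o) const override {
        return a->accept(o) && b->accept(o);
    }
};
```

The door-for-the-Space-Prawn setup mentioned below would then just be a name (or type) filter wired between the door's ProximitySensor and its open action.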
Here's a cow-phobic Elephant that turns around whenever any game object called "Milkshake" gets too close to him:
With this many "events" defined, I can "crash" the AI when both sensors detect an event at the same time, making two turning "Moove" blocks fight with each other. To go much further with this one I really need to make some progress on the Task scheduler ... but more on that next time I think.
And as an added bonus (not really something I thought about when designing it), all these filters work on non-AI elements too - so you can now build a filter that specifies who/what the door will open for. Here's a little test scene where the door is hooked up to only open for the Space Prawn (as my son calls him).
And here it is in action:
Welcome to another week of Milkshake development.
At the end of last week's entry, I'd laid out my basic idea for handling AI in a way that would support lots of different AI paradigms. In theory, it was very appealing - but the more examples I tried out, the more questions that popped up around how the AI "program" controls how its tasks are run. A complex bit of AI can have some main background logic, several event handlers, any of which can fire off a set of sub-tasks, which might in turn run event handlers and background tasks. Not only is working out how to prioritise/mediate between all these tasks complicated in itself, but providing a simple, intuitive and deterministic interface for the program author makes it that much harder again.
This was definitely a problem that wasn't going to solve itself in a day - so I set out to write a few basic Tasks to play with while I thought about it.
The first two blocks I've created are Moove and PhysicsSensor. The Moove Task allows you to directly drive an Agent around the game world, by propelling him forward and/or turning him. And the PhysicsSensor allows you to react to bumping into things.
Even with just two blocks, I've been able to build lots of different rule-based motion reminiscent of the mighty Head Over Heels. I've got enemies that turn left or right whenever they hit something, enemies which patrol back and forth, enemies which spin for a bit and head off in a new direction ... and I guess that's about as far as I've got. But it's still encouraging to see how easy it is to re-use even just two very simple blocks in lots of different ways - and then to push blocks in front of the enemies and watch them react.
Here's a simple "walk forward until you hit something, and then spin for 0.7 of a second" program:
And here are the little Elephants running about using those instructions:
The hardest bit to get right was the super-dooper accurate angle motion (turn exactly 90 degrees then walk forward) because even a slight error sends you into the wall and ruins the predictability (and hence the game play). In the end, I realised I'd never get it 100% accurate due to the numerical error in the arctan calculation used to determine the current direction - but I got it accurate to within 1/256th of a degree and then made the collision detection code ever-so-slightly-forgiving.
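The quantisation trick described above could be sketched like this (assuming atan2-based heading recovery; the constant and function names are mine, not the engine's):

```cpp
#include <cmath>

// Headings recovered from a direction vector via atan2 carry small numerical
// error, so quantise them to 1/256th of a degree before comparing or acting.
const double kPi = 3.14159265358979323846;
const double kDegreesPerStep = 1.0 / 256.0;

// Recover the current heading (in degrees, [0, 360)) from a direction vector.
double headingDegrees(double dirX, double dirY) {
    double deg = std::atan2(dirY, dirX) * 180.0 / kPi;
    if (deg < 0.0) deg += 360.0;   // normalise to [0, 360)
    return deg;
}

// Snap a heading to the nearest 1/256th of a degree, absorbing the
// numerical error so that "exactly 90 degrees" is actually achievable.
double snapHeading(double degrees) {
    return std::round(degrees / kDegreesPerStep) * kDegreesPerStep;
}
```

With the snapped heading, a "turn exactly 90 degrees" task can terminate on an exact comparison, with the slightly-forgiving collision detection mopping up any residual drift.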
Anyway, with my little toy in place, I got back to thinking about the bigger problems of Task management. And I'm glad to say, I'm now the proud owner of an 8 page design document that lays it all out! It's a lot of relatively dry work (months of work I suspect), so don't expect to see the finished product for quite a while, but I'm optimistic it will be pretty powerful once it's done.
So, after much reading and thinking, I've finally settled on a plan for my AI. Rather than picking a specific AI implementation (like STRIPS planning, or hierarchical state machines), I'm going with a task-based AI framework which will (in theory) allow me to plug any combination of them together and to easily mix autonomous and scripted behaviour together. Before I get into the boring stuff, here's an AI "Black Triangle" moovie:
So now let's dive under the hood ...
My design basically has two classes: an Agent, and a Task. An Agent represents any entity in the gameworld that can run AI, and the Tasks are the things you can get an Agent to do. So a task might be low level ("Moove", "Shoot"), high-level ("Fight", "PathFinding"), a state in a state machine ("Flee", "Attack"), a goal-based planner (e.g. a STRIPS system generating new subtasks), a sensor ("DamageSensor", "LineOfSight", "AudioSensor"), etc, etc. Any Agent can run zero or more Tasks (i.e. it can run Tasks in parallel) and new Tasks assigned to the Agent can either augment, interrupt or replace existing Tasks.
As well as managing and running the Tasks, the Agent provides an abstract interface to the game characters (or to use AI-speak, the Agent is the actuator of the AI). So rather than having the AI know how to attach a LookAt constraint to a game skeleton, the AI just asks the Agent to "LookAt" something, and it's the Agent implementation which knows how to make this type of game character look (e.g. using a Skeleton constraint, vs playing a blended animation, vs rotating the character, etc). This allows me to keep the AI code focused on AI, without knowing too much about the way the game world and characters are put together.
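Pulling those two ideas together, a minimal sketch of the Agent/Task split might look like this (everything beyond the Agent/Task/Moove names is my own invention, and the real interfaces are surely richer):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

class Agent;

// A Task is anything the Agent can be asked to do, from low-level motion
// to a whole planner or sensor.
class Task {
public:
    virtual ~Task() = default;
    // Return false when the task has finished and can be removed.
    virtual bool update(Agent& agent, float dt) = 0;
};

// The Agent runs its tasks and acts as the AI's actuator: tasks ask for
// abstract actions and a concrete Agent subclass maps them onto whatever
// this character is (skeleton, vehicle, ...).
class Agent {
public:
    virtual ~Agent() = default;

    void addTask(std::unique_ptr<Task> t) { tasks.push_back(std::move(t)); }

    void update(float dt) {
        // Run tasks in parallel, dropping any that report completion.
        for (auto it = tasks.begin(); it != tasks.end();) {
            if ((*it)->update(*this, dt)) ++it;
            else it = tasks.erase(it);
        }
    }

    std::size_t taskCount() const { return tasks.size(); }

    // Actuator interface - a real Agent subclass overrides these to drive
    // a skeleton or vehicle model. The defaults just accumulate totals so
    // the sketch is self-contained.
    virtual void moveForward(float distance) { distanceMoved += distance; }
    virtual void turn(float degrees) { degreesTurned += degrees; }

    float distanceMoved = 0.f;
    float degreesTurned = 0.f;

private:
    std::vector<std::unique_ptr<Task>> tasks;
};

// Example task: drive the agent forward for a fixed duration.
class MooveTask : public Task {
public:
    MooveTask(float speed, float duration) : speed(speed), remaining(duration) {}
    bool update(Agent& agent, float dt) override {
        agent.moveForward(speed * dt);
        remaining -= dt;
        return remaining > 0.f;
    }
private:
    float speed, remaining;
};
```

The key design property is that nothing in `Task` knows anything about the game world - it only ever talks through the `Agent` actuator interface.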
Now the nice thing about building the AI as Tasks on top of an Agent, is it allows me to totally mix and match control schemes. So I can have an enemy using some fancy-pants AI planning engine, but then have a game script which assigns him some simple scripted Tasks (e.g. go here, press this button, then say this) - and the Task framework will handle putting the high-level fancy AI to sleep while it carries out the scripted sequence. And this can work the other way too: I can give a game character some simple scripted motion, and then "call" some higher level combat AI task when it sees the player.
And most satisfyingly of all, the player input fits right in. There's a "ControlScheme" task which implements the game's control scheme (i.e. reading the joystick/keyboard input and turning it into Agent calls). Not only does this allow me to re-use all the same Agent functions for controlling the main character - but it means you can script the player's character by simply assigning him some new tasks. Then when they end, the ControlScheme task takes over again and you get control back. But you could also do cool things like attach the ControlScheme task to other AI Agents in the game to let the player temporarily control other game characters or vehicles (as Vehicles are just another type of Agent that implements the Agent interface as a vehicle model rather than a character skeleton).
So that's my AI plan for world domination.
So far, I've refactored all my existing character and control classes to sit on top of the new Agent/Task architecture. I can control my cow by having him run the ControlScheme task, and I can assign a simple "Moove" task to the Elephants to make them run around in a circle. Strangely, just seeing the little Elephants run around crazily in a circle is very satisfying. If nothing else, just having a really clean decoupling between the control code and the character/agent makes me very happy (as there used to be a big mess of auto-push code mixed through the generic character code which always upset me).
The next goal is to add a basic PhysicsSensor so I can have the Elephants change direction when they bump into something. Unfortunately, this requires me to start working out how Task management is specified in the AI graph - something I don't have a nice solution to yet.
And just to wrap up, here's an in-development outtake where I forgot to lock some of the physics axes ...
A super frustrating few days for me trying to track down the skinning corruption on the little elephants. If you look closely at the screenshot from last week, you can see the vertices on the elephant's left shoulders and the top of the right leg are corrupted. Here's a more obvious shot with the shader removed:
Given I'd had a lot of grief in the past trying to correctly support Maya's "jointOrient" (which basically allows each bone in a skeleton to define an arbitrary "local" rotation space), I assumed there must still have been some matrix or attribute I'd overlooked. I pulled the whole skeleton/skin export code apart and rebuilt all the matrix maths from scratch. After checking I'd exported the Maya joint matrices correctly, I did the same thing to my runtime skeletal animation and skinning code. After poring through MBs of log files ... everything looked like it should have been fine.
So I started doing what I should have done right at the start. I removed parts of the model (bones, attached meshes, shaders), trying to build the simplest scene I could that still showed the problem. In my experience, this is the fastest way to fix these kinds of issues. In this case, the very first thing I did (removing all the bones inside the head ... which should have nothing to do with the arms or legs) made the problem go away. It turns out that despite all the different models I've exported, this is the first model with skeleton bones that aren't actually used by the skin bindings. The runtime code didn't know how to deal with this - so a quick re-write of the exporter to optimise these out, and my little elephants are whole again.
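The exporter fix might look roughly like this sketch (the data layout is a guess; note that a real exporter would also have to keep any unreferenced ancestor bones the hierarchy still needs, or bake their transforms down first):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// One vertex's weighting against a bone, by index into the bone list.
struct SkinVertexInfluence {
    int boneIndex;
    float weight;
};

// Drop any skeleton bone that no skin vertex references, and remap the
// vertex bone indices onto the compacted bone list.
std::vector<std::string> pruneUnusedBones(
    const std::vector<std::string>& bones,
    std::vector<SkinVertexInfluence>& influences) {
    // Collect the set of bones the skin actually uses.
    std::set<int> used;
    for (const auto& inf : influences) used.insert(inf.boneIndex);

    // Build the compacted bone list and an old-index -> new-index map.
    std::vector<std::string> kept;
    std::map<int, int> remap;
    for (int i = 0; i < (int)bones.size(); ++i) {
        if (used.count(i)) {
            remap[i] = (int)kept.size();
            kept.push_back(bones[i]);
        }
    }

    // Patch the influences to point at the compacted list.
    for (auto& inf : influences) inf.boneIndex = remap[inf.boneIndex];
    return kept;
}
```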
Here's the fixed result with the "good enough" idle animation (click for movie):
Custom exporters can be really nice when they work ... but a huge amount of work when they don't. I'm still glad I have my own Maya integration (as it allows me to attach, configure and wire-up the run time behaviours on the game objects - meaning the behaviour of a game object and the object itself are all saved in a single file, rather than having a second level of "game bindings"), but if you have the option of using a "standard" exporter, it certainly saves a lot of work.
So after EasilyConfused quite rightly called me on my glacial development process, I've knuckled down over the past week to try and get my Elephant enemy agents into the game engine. I would have had them in much sooner - but after mapping and texturing the first version, I realised it didn't fit into the game at all, so I had to throw it away and start on a new version from scratch.
To avoid making this mistake again, I exported the second version to the game after each little body part was done to make sure the scale and the look fit what I was going for. The game is going to be a spy game, with an evil organization of enemy agents. The little elephant is the first enemy agent type - so I wanted him to look obviously sinister, but in a cartoony kind of way.
There's no UV map or texture on him yet - but I have setup a basic skeleton, skinning and collision volume so he can be posed and animated. Next step is to lay down some basic animations (idle, run, etc) and get him patrolling a bit of the level.
The [it has to be said, pretty boring] journey from 3d environment to game continues this week with more energy-sapping but unavoidable user interface work. Still, I can now hurl the little cow onto the spikes until his lives run out and see this:
Lots of things under the covers to get here.
Firstly, I split all the game management UI out into a separate XML file. This file contains a list of named objects (like "MainMenu", "Respawn", and "GameOver") which define all the UI that pops up when you're not in control of the character. Breaking things out like this has a few benefits: it removes all the ugly UI code from the C++; it lets me throw in some basic placeholder text UI today, knowing it can easily be replaced with nice graphics and icons later; and on the off chance I ever need to localise anything, it means there's just a single xml file to change.
Next there were a few C++ objects lingering around across game restarts and player death. I'd have dialogs from the last game session showing up over the main menu, and sometimes 8 cows would popup, somehow resurrected from defunct players. A few extensions to the Game::New and Game::Respawn quickly saw these off.
I added a rudimentary health bar (I really dislike it though - I can't see it lasting long in this form), and support for outlined fonts. The outlined text looks miles better than the plain stuff I was using before - but after an hour messing around with Photoshop expressions, I can still only get a semi-transparent text outline for some reason. I can't quite work out whether it's a problem in the Photoshop channel manipulation or if that's just what BMFont outputs when you tell it to use some smoothing. So if anyone knows a quick way to reformat BMFont output into simple white text on a black background with an alpha mask that includes a solid-yet-smooth outline, I'm interested.
Anyway, now that I'm starting to get closer to end of the game-cycle/UI tunnel, I've started work on the first enemy agent. Here's a sneak peek at him:
Still needs UV layout, texturing, rigging/skinning, and animating ... so it will be a good while yet before he pops up in the game. Still, I'm quite happy with him: my goal was that when you squint your eyes, you could almost imagine him falling onto all fours and running off like the real animal.
Well, I've finally done it: I've started working on letting my little cow lose a life.
This is something I've been putting off (for literally years) for quite a few reasons: I didn't know how to portray the "death", I didn't know whether I wanted lives or health or both, I didn't know how to handle puzzle and item state across a character death. And it's not that I've necessarily resolved all those problems - but I figured I just needed to make a start on it.
The first thing I did was punt on the death animation/ragdoll bit - for now, I just catapult the little cow into space when he hits something deadly. In the test moovie below, the "Spikes" model has a "Toxic" behaviour attached to it which sends a "Damage" message to the cow, and for now at least, the cow loses a life rather than some hitpoints. I might look at adding health-style damage later - but my immediate goal is just getting the spawn-play-die-restart cycle working.
Next, I had to handle resetting the current area/map/room. This turned out to be a really interesting problem - and a lot harder to solve than just saving/reloading the current game state (which is already solved via the built-in object serialisation). You see, the plan is for the cow to get a lot of little puzzles to solve: How do you reach the key up on that platform? How do you open that door? Etc. So what happens when the player goes into a room and destroys some block that they actually need to finish the puzzle? Or they push the key off the cliff? I need a way of resetting all the elements of a given puzzle back to a solvable state.
Before I waffle on about the boring behind the scenes work, here's a little moovie of it working in a test scene: there's a switch which is setup to remove a block from the room, and there's a gun that rests on the block (so flipping the switch lets the gun fall to the ground where you could pick it up). In the vid, you can see the cow flips the switch, and then suicides on the spikes. When the character respawns, not only is the cow put back as he was, but the switch flips back, the block re-appears, and the gun falls back down from its starting point.
To implement this, I've added a Snapshot object. Whenever you enter a new area, the game populates the Snapshot with all the puzzle elements for that area and takes a snapshot of their current state. Then, whenever you die or re-enter the area, it restores the snapshot. The cool thing about the snapshot is that it's built on top of the existing serialisation mechanism, but modified so that it restores values on the existing objects, rather than replacing them with new objects. This allows it to "undo" literally any change (e.g. not only can you move things around and have them put back, but you can remove the physics from an object and it will put it back, you can destroy an object completely and it will resurrect it for you, you can change the state of a state machine and it will put it back, you can turn off gravity and collisions on an object and it will turn them back on, you can change the colour of the shader ... you get the picture). And because it restores the current objects, it won't break any relationships to other objects in the gameworld (for bigger quests, etc). As a bonus, because it's a layer on top of the serialisation code, I can choose to use XML (for dev/debugging) or binary (for speed/size) to implement the snapshot - which has already saved me hours in debugging the feature to start with.
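As a toy illustration of the restore-in-place idea (using a key/value map to stand in for the engine's real serialised state - the names here are mine):

```cpp
#include <map>
#include <string>

// Stand-in for a serialisable game object: its "state" would really be
// produced by the engine's serialisation layer.
struct GameObject {
    std::map<std::string, std::string> state;
};

class Snapshot {
public:
    // Record the object's current state when the player enters the area.
    void capture(GameObject& obj) { saved[&obj] = obj.state; }

    // On death or re-entry, write the saved values back INTO the existing
    // objects, rather than replacing them - so any references other systems
    // hold to these objects stay valid.
    void restore() {
        for (auto& entry : saved) entry.first->state = entry.second;
    }

private:
    std::map<GameObject*, std::map<std::string, std::string>> saved;
};
```

The restore-into-existing-objects detail is what makes the approach compose with everything else: pointers, constraints, and cross-room relationships all survive the reset.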
Next I need to think more about how many Snapshots the game will actually use to properly handle independently resetting the room vs specific parts of the player's character (e.g. right now, the snapshot restores the character's life count ... which kind of defeats the purpose of letting him lose a life in the first place). Still, I can now start a new game (from my spiffy menus of last entry), walk around until I lose a life, then restart the current room ... which is getting much closer to actually having a very simple end-to-end game cycle working.
Welcome back to my semi-annual series on Milkshake development. Last night I was reading the hackenslash diary ... that whole game was made in a week! I think I can safely say I haven't achieved anything close to that in the last year ... really makes you wonder if you're going about this the right way.
Anyway, I have at least made a little progress toward my goal of a first playable this year - but if I was half as honest with myself as the guy was in hackenslash, I'd probably admit that I was a tiny bit behind schedule. I'm currently trying a new strategy for making it playable, where I sit down with my project and load it up - and the first thing that doesn't work like an actual game goes to the top of my todo list (note I very deliberately didn't say "look like an actual game" because I'm trying to avoid polishing things ... well that and I'm a bit of a crap artist).
Suffice it to say I didn't get very far without finding something ... so this entry will look at the startup menu =) Nothing fancy mind you - just a basic front end to the game - I'm even going to skip the game options for now. We'll start by looking at the result, and then I'll show you all the grief underneath it:
So how does this all work? Well, I had some very crude UI elements (menus, buttons, text labels, etc) kicking around from the game's editing tool (covered in a previous entry), but they not only looked pretty utilitarian, they also worked a lot like the MFC UI elements - with a bunch of callbacks into the code with user-defined event Ids. Essentially, it meant every time you wanted new UI, you had to write a bunch of C++ to do whatever it was that menu was supposed to do.
In typically bad style, I tackled the ugliness problem first by extracting the UI "style" out into a separate object that "skins" a set of generic UI definition objects. This means a Column menu can look one way in the Editor, and a totally different way in the game. It also means when I get around to writing my World War II POW game, I just have to write a skin and away I go.
Next I looked at the harder problem of how I could build a menu without writing a whole bunch of C++ code every time I did it. But before I get too far into the solution, I should perhaps explain something else about my engine - you can do everything from objects and/or script. Well, not everything, but when I find something I need that I can't do, I add it. This includes creating instances of game characters, loading/saving maps, setting the screen resolution, etc, etc. So the plan for the UI was to turn the UI widgets themselves into part of the engine's "Object" system. This not only allows them to be serialised (so you can build some UI, save it to disk, and load it back in) - but it means they can use (and can be used) by anything else in the game. So you can have UI widgets that open doors in the game, or a door that pops up a menu.
Here are the Objects behind the main menu, including what to do when each item is selected:
[Note that the editor boxes don't show up too well on that black background ... whoops]
With a shiny new startup menu allowing me to start a "New Game", I quickly hit the next thing that was pretty un-game like: the game just drops you straight into the world with no idea what you should be doing or why you're there. Now I wanted to go for a bit of a comic/graphic novel style to my game, so I figured the next thing to add would be narration dialog. Just like the menu UI, there's a generic "Dialog" node, and then the "skin" renders it into the game. Here's a shot showing a narration box with an in game menu just because I can:
The one thing that occurred to me while I was putting these test scenes together was that I'm very quickly going to need programs with loops in them. And in the current code, a connection "loop" would create a memory leak (because upstream connections hold references to the downstream targets - and a full loop means they all keep each other reference counted even if everything else in the game has forgotten about them). After a few minutes of panic thinking I'd need to embark on some performance-glitching garbage collection scheme, I think I've come up with a way to detect loops when the connections are made, and then just store a description of each loop so that I can clean it up when the next map gets loaded in. Quite a relief, that.
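The detect-at-connection-time approach could be sketched as a simple downstream reachability check (the graph representation and names here are hypothetical):

```cpp
#include <map>
#include <set>
#include <vector>

using NodeId = int;

// Before adding an edge source -> target, walk downstream from target;
// if we can reach source, the new edge would close a reference-counted
// cycle - so record it for cleanup at the next map load instead.
class ConnectionGraph {
public:
    bool wouldCreateLoop(NodeId source, NodeId target) const {
        std::set<NodeId> visited;
        return reaches(target, source, visited);
    }

    void connect(NodeId source, NodeId target) {
        downstream[source].push_back(target);
    }

private:
    bool reaches(NodeId from, NodeId goal, std::set<NodeId>& visited) const {
        if (from == goal) return true;
        if (!visited.insert(from).second) return false;  // already explored
        auto it = downstream.find(from);
        if (it == downstream.end()) return false;
        for (NodeId next : it->second)
            if (reaches(next, goal, visited)) return true;
        return false;
    }

    std::map<NodeId, std::vector<NodeId>> downstream;
};
```

Since connections are only made at authoring/load time, an O(edges) walk per new connection is cheap - much cheaper than running a garbage collector over the live game.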
I now need to work out where to next. I'm a little overwhelmed with the whole damage/death/load/save bit ... so maybe a bit of plot/story work first?
Still basking in the newfound freedom of my game/engine refactoring of a few weeks back, I'm happily forging ahead turning my old scene graph into a real game. Before I get into the boring technical stuff, here's this week's screenshot ... behold - a real working character status panel!
(Click on the thumbnail for the full image)
As your character picks up an item, the item is automatically added to the status panel, including a real-time updating ammo count (or whatever status is meaningful for that item type). Ironically, while people have said for ages that the most immersive interfaces are invisible, I've found just having this little status panel makes the game world feel more alive for some reason. Just watching the ammo tick down as you fire the gun feels really solid.
The current graphics for the panel itself aren't the best - but as part of my "get to game" push, I'm trying really hard not to tinker and polish every little thing to death (as I spent much of last year doing this and didn't really get any closer to having a game).
In less visible progress, a lot of the work this week has been working out the difference between a map and a save game. Up until this week, the two have been pretty much the same thing: a simple serialisation of the root node of the world. This approach has been fine for knocking up the little test arenas that I've posted up here in the past, but now that I'm starting to think about a real game, it falls apart in two pretty serious ways. Firstly, you really don't want your game characters in your level data (as you really want the player to be able to take his current character into any level). Secondly, there's a whole bunch of non-game-world data that a given game might want to be able to save (like player data, the screen layout, the current camera position, game preferences, the total play time, etc).
To address this, I've spent the bulk of the week centralising all the game state (camera, players, characters, world, etc) inside the Game object, and then cleaning up the tangled web of object initialisation dependencies to allow you to just drop in a new Game object and have all the moving parts (camera, camera controllers, input handlers, world graph, etc) hit the ground in a working state. It turned into a huge amount of work - especially when I tripped over a long standing reference counting bug that meant the game was never actually deleting the game world, even when you loaded a new one in.
Once that was working, I set about separating the player characters from the world, and adding support for logical game nodes that can be used to add things like spawn points to maps. The game will now happily persist your character(s) across new maps, starting them at the first spawn point it finds.
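The spawn-point placement could be as simple as this sketch (all the types here are illustrative stand-ins for the engine's logical game nodes):

```cpp
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// A logical (non-visual) node stored in the map alongside the geometry.
struct LogicalNode {
    std::string type;   // e.g. "SpawnPoint"
    Vec3 position;
};

struct Character { Vec3 position{0.f, 0.f, 0.f}; };

// On map load, place the characters persisted from the previous map at the
// first spawn point found. Returns false if the map has no spawn point.
bool placeAtFirstSpawn(const std::vector<LogicalNode>& mapNodes,
                       std::vector<Character>& persistedCharacters) {
    for (const auto& node : mapNodes) {
        if (node.type == "SpawnPoint") {
            for (auto& c : persistedCharacters) c.position = node.position;
            return true;
        }
    }
    return false;
}
```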
With all this plumbing done, I'm almost ready to add damage to the game.
So, after many late nights of poring over options, photoshopping my little Cow onto all manner of screenshots, and reflecting on the type of game I wanted to make - the die is cast! I've decided to go with a nice fun stylised graphics style, as not only is it much easier to make, but I finally decided it just matched the whole fantasy animal game I wanted to make anyway. Rather than trying to polish everything to the best of my (programmer) abilities as I go, my plan is to try and get a "good enough to play" look for things, and then come back at the end and revise and polish as much or little as I want to (or have time to).
Here's an unfinished test room, with the first fir trees:
There's still a bit I need to do, even to get to workable graphics (cliff tiles are just stand-ins, there are a few missing grass border pieces, and I want to add some more varieties of fir - different trunks, longer trunks, shorter bodies, etc) - but this is one of the first screenshots where I can see my game - as opposed to a prototype - which is pretty cool.
Another thing I'm keen to do is make the world itself a lot less "inert" - I want trees to wobble and sway when something hits them, I want flowers to wave around wildly as you march through them, etc. To this end, I've started to think about how I'm going to implement "composite" physics bodies so I can build an armature inside these trees and then wire it up with some spring constraints. My goal is to use the same system to do ragdolls and secondary animation (like pony tails, etc). I'm not expecting to have this up and running for a bit though.
Well, after the enormous outpouring of support for the Cow following last week's entry, he makes an earnest return this week.
I sat down with him this week and asked him what he most wanted to see in game next, and he was keen on getting some grass to run about on. Now, unfortunately, for such a common surface, good grass is surprisingly difficult to do well (my personal favourite to date is Warcraft 3's grass - a bit cartoony, non-repetitive, well bordered, and lush).
To let me experiment with some options, I knocked up some placeholder 3d tiles and slapped them into an existing test room I had lying about. I then ran a series of texturing options through them to see how they looked with the cow, close up, zoomed out, etc.
The first candidate was a photo of some nice lush grass I took on Toronto Island run through Photoshop's Pattern Maker. It looks nice up close:
But zoomed out, it's a repetitive filtered mess:
Some of this can be blamed on Pattern Maker (which doesn't do the best job of tiling for mine), but some of it is just inherent in the small tile size at a large zoom. Clearly, if I want to go for real looking grass textures, I'm going to need some world-space UV mapping solution. Given my hardware shading code is still bubbling along on the back burner, I tried out some more stylised/cartoony alternatives.
Here's a simple flat green - a bit Chaos Engine-y, but it's not as bad as I thought it would be (especially when you consider my game isn't going to have massive fields of grass - it'll more be smaller borders, paths, surrounds):
Here's the same grass with a simple flower decal stamped on to it:
The flower stamps seem to break up the repetition a bit, but obviously in the test above there are far too many of them. Perhaps with a few other stamp types, used a little less frequently/more randomly, it might start looking ok. Most of the other texturing ideas I wanted to play with rely on UV mapping that goes outside a single tile - so they'll have to wait a while. But if you've got any opinions on the above, or other styles/approaches to try, I'd love to hear them.
I think I'll play around with some other alternatives for a day or so, and then just pick one to go forward with. The next thing I want to add is height field support so I can build some little hills and valleys for the cow to frolic on.
So a really profound change for my code this week - I think I've finally got a solution to the game world vs scene graph problem that's been haunting me for an awfully long time.
Over the past few months, I've been slowly populating my framework with nicely re-usable game objects: doors and switches, lifts and guns, items and characters. At first, it was going along really nicely. I had a few different game prototypes building on top of the framework, and they were all nicely sharing these objects even though some were 3d and some were 2d, one starred a cow, and one had a spaceship. But as time went on, things started to get more complicated. The games had subtle differences in how they needed to use these things - the cow game (being a bit RPG-ish when it's done) wanted a serious inventory with equippable objects, but the space shooter wanted a lightweight gun-swapping inventory. The Gun in the cow game was an item you equipped, but the guns in the spaceship game needed to be part of a ShipComponent hierarchy.
Very quickly, I found myself spending more and more time trying to reconcile all the differences into a set of super-objects that could be configured to be all things to all people. After all, these were objects in the framework darn it - so they'd better be re-usable!
So my "ah-ha!" moment came in two parts:
The first realisation was that I needed to formally separate game objects from scene objects. Previously, game objects were just another type of generic Behaviour object you could attach to a part of the scene - just like animation, a particle emitter, or a joystick controller. The problem with this is that you (i.e. the game, the scripts, the AI) couldn't unambiguously determine what something in the 3d world represents in the game world. Now, there's a new base class purely for game objects - allowing any game code to go up to a 3d object and know that it's a Door, or an Item, or a ShipComponent, or an Asteroid, or an EvilSorceror ... or any other game object your game wants to use.
The second realisation is that trying to commonise these things is a waste of time. Even if 3 games all have inventories, they're always going to need different gameplay rules, different slots, they'll need to know about different categories of items, they'll need to know how to map themselves onto different character models, etc. And the same is true for Guns, Items, and most everything else too. If I find myself re-using a set of objects across games in the future, I'll wrap them up into a re-usable game object library - but for now, a little code duplication is a glorious thing.
With these two changes under my belt, it's like a huge weight has been lifted off my shoulders! I honestly didn't realise how much this has been holding me back. Instead of agonising over every little addition to game logic (worrying about whether it's general enough, how other types of games might want to use it, what options they should have, where it should fit into the class hierarchy, etc), I'm now totally free to just experiment with things - as each game project is in its own little sandbox, and the worst it can do is create a little mess of the small number of game objects it defines, while the framework code stays nice and clean.
Unfortunately, none of that translates particularly well into screenshots. But given I don't like to post without one, here's a cool ship prototype a friend of mine put together in Maya using the modular particle system I talked about last time:
A lot has happened since my last post - alas not much of it has anything to do with game development. I did take a bit of time over the New Year to reflect on how little playable progress I made over the last year - and (as I'm sure many of us do) resolved to do better this year.
I also took some time to stand back and look at the state of my engine, and I've decided my biggest problem is trying to work out how the game world relates to the 3d scene graph. You see, right now, I have a relatively nice interactive scene graph. It allows you to bring in a bunch of objects, attach behaviours to them, play animations, simulate things like physics, define connections between behaviours (e.g. doors, position sensors, switches, etc) ... but it's still just a scene graph with some smarts bolted onto it. And while you can get a certain degree of interactivity by just assigning behaviours to objects - at some point, there needs to be a more semantic model which says "This isn't just a collection of meshes - this is the bakery, and you come here if you want to buy bread" (not a very exciting example, but hopefully you get the idea). So my current slow-burner design task is to work out what the game world objects should look like, and how they relate to the scene graph that represents them. If anyone knows of any good articles or engines that have a good model for this - I'd really appreciate a pointer.
While I slowly mull all this over, I decided to tick a few of the hundreds of todo items off my list. This week, I finally connected the last few missing dots on object network export from Maya. To test it out, I tried assembling a game-engine particle system in Maya and attaching it to an object. Here's a quick movie of a simple particle system attached to the fan. The particles just have some decay, Newtonian physics, and colour fade, but it's enough to give a nice blue plasma effect: