
Fight


In the spirit of the mighty Zzap64, welcome to the Milkshake Christmas Special!

Despite the overwhelming evidence to the contrary (i.e. it's been over a month since my last post), things have been very busy in the little world of Milkshake. It's time to give the Elephants tusks, so to speak.

We'll start the ball rolling with a problem that's been plaguing me for years: how do you synchronise sounds and other game engine events with the animation on the characters? It's pretty easy to know when an animation starts or ends - but everything in the middle is a bit of a mystery. I always try to avoid magic numbers in my code, but I have to admit, when the cow fires his little gun, there's a hard coded "wait 10 ms after playing the animation" hack in there to synchronise the shot with the kick-back in the animation. The same problem arises when trying to animate a punch, or play a sound when the character's foot hits the ground, or play a little "hup" sound when a character jumps, etc, etc.
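Just to make that hack concrete, it's roughly this kind of thing (purely illustrative, not the actual Milkshake code - the helper names here are made up):

```cpp
#include <functional>

// Hypothetical engine helpers, standing in for the real calls.
void playAnimation(const char* name);
void scheduleDelayedCall(int milliseconds, std::function<void()> call);
void spawnShot();

void cowFiresGun()
{
    playAnimation("Fire");
    // Magic number: wait 10 ms so the shot roughly lines up with the
    // kick-back frame. Breaks as soon as the animation changes, loops,
    // or plays back at a different speed.
    scheduleDelayedCall(10, [] { spawnShot(); });
}
```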

Now I could start peppering my code with loads of hard-coded timing values to match the game events to the animation, but this approach never *really* synchronises the game with the animation (particularly as the animation loops and the playback speed varies); it doesn't handle long, irregular animations; it can get out of sync when the animation speed is tweaked; and any changes in the animation (or new characters) require changes to the C++. Very early on, I decided these timed events were really part of the animation itself. That way, an artist making an animation can just embed sounds, particle effects, combat events, or anything else directly into the character's performance. And the game code never knows or cares which sounds (or other events) are part of an animation: it just plays the animation back, and the animation itself injects the sounds/combat events as needed. Well, this all sounded well and good on paper - but I never got around to adding it, nor did I really know what it would look like ...

Enter the EventStream. The EventStream is a delightful little object you can attach to any animation in the game (and possibly use standalone too) that lets you define a stream of objects (events) to be triggered at defined times along the stream. The event objects could be sounds, combat triggers, messages to send, AI tasks to execute, or really anything else in the engine. The EventStream is built on top of the base AnimationCurve class - so the event evaluation is perfectly synced up to the animation (i.e. better than millisecond precision, with no chance of getting out of sync no matter how many times you loop the animation, even as the playback speed is varied). And it's also totally integrated into the Maya plugin (this was almost free too - I just had to implement a few data types I hadn't needed up until now) - so you can define the events right in the Maya animation and have them go straight through to the game.
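To make that concrete, here's roughly what an event stream like this could look like; the names (EventStream, Event, evaluate) are my own sketch rather than the engine's actual API:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Illustrative sketch only: an event keyed to a time along an animation,
// evaluated with the animation's own clock so it can never drift.
struct Event
{
    float time;                  // position along the animation, in seconds
    std::function<void()> fire;  // sound, combat trigger, message, ...
};

class EventStream
{
public:
    void add(float time, std::function<void()> action)
    {
        events.push_back({time, std::move(action)});
        std::sort(events.begin(), events.end(),
                  [](const Event& a, const Event& b) { return a.time < b.time; });
    }

    // Called with the previous and current animation times each frame.
    // Fires every event whose time falls in (previous, current]; a looping
    // animation simply calls this twice when it wraps (to the end of the
    // loop, then from the start of the next one).
    void evaluate(float previous, float current) const
    {
        for (const Event& e : events)
            if (e.time > previous && e.time <= current)
                e.fire();
    }

private:
    std::vector<Event> events;
};
```

Because it's evaluated against the animation's own time rather than wall-clock time, looping the animation or varying the playback speed can't make the events slip.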

This brought me abruptly to a small limitation of my engine: there's no sound support at all. I spent a few hours on a train trip starting to implement a nicely wrapped OpenAL-based sound system ... but after sleeping on it, I realised this isn't what's stopping you from playing the game, so I put it on ice and got back to my real goal of letting the Elephants attack the cow.

I eventually want to give the Elephants some nice tusk charge attacks, but for the time being, I knocked out a simple back-hand attack.

[Embedded clip: the Elephant's back-hand attack]

And then I attached a "Concussion" event to the animation's event stream at the point of impact (using a spherical collision volume to describe where the concussion should be applied).

[Embedded image: the Concussion event attached to the animation's event stream]
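In data terms, an event like that doesn't need to carry much more than a trigger time plus a volume - something along these lines (the type and field names here are purely illustrative, not the engine's real ones):

```cpp
// Illustrative only: a combat event embedded in the animation's event stream,
// carrying a sphere (in character space) describing where the hit lands.
struct Sphere
{
    float centre[3];
    float radius;
};

struct ConcussionEvent
{
    float  time;      // when, along the animation, the impact happens
    Sphere volume;    // where the concussion is applied
    float  strength;  // how hard the target gets knocked about
};
```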

I now needed to let the Elephant take a swing at the cow. I knocked out a quick "Play" Task, that allows an AI to play an arbitrary animation, and instructed the Elephants to play the Backhand animation whenever they touched the cow. To my dismay though, when I let the Elephants loose, they ran out, took one lusty swing at the cow, and then stood there ignoring our hero, even though he was standing right under their trunks so to speak. You see, while I'd exposed a bunch of AI connections for monitoring the objects entering and leaving the character's senses, I really hadn't put too much thought into any real AI use-cases, and as a result it was really hard to write continuous reaction code (e.g. while I can see enemies, attack them). The problem was that I'd just exposed the underlying C++ interface, assuming that if the C++ code could do anything with that level of interface, then the AI script could too ... a pretty poor assumption in retrospect. I thought about this over a weekend and decided on a new design that took a very different approach to sensors and reaction processing, resulting in something that is both easier to use, and more powerful.
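For what it's worth, the Play Task itself is conceptually tiny - start the named animation, finish when it does. Something along these lines (a hypothetical interface, not the real task code):

```cpp
#include <string>

// Hypothetical character interface standing in for the real engine calls.
struct Character
{
    int  playAnimation(const std::string& name);
    bool isAnimationFinished(int handle) const;
};

// Illustrative sketch of a "Play" task: it starts a named animation on its
// character and reports itself finished once the clip has played through.
class PlayTask
{
public:
    explicit PlayTask(std::string animationName) : name(std::move(animationName)) {}

    void start(Character& character)  { handle = character.playAnimation(name); }
    bool update(Character& character) { return character.isAnimationFinished(handle); }

private:
    std::string name;
    int handle = -1;
};
```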

Firstly, it eschews a reflection of the internal C++ implementation in favour of the simplest interface possible: sensors and filters now have a single input and a single output. Under the covers, there are actually several C++ methods that handle different sensor events - but at the AI design level, you always just connect the output of the previous node to the input of the next one: simple.
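In sketch form, the node-level interface ends up about this simple (illustrative types, not the real ones): a node forwards whatever it accepts to whatever is plugged into its output.

```cpp
// Illustrative only: every sensor/filter node exposes one input and one
// output, and at the AI-design level you just chain them together.
struct GameObject;

class SensorNode
{
public:
    void connectOutput(SensorNode* next) { output = next; }

    // Internally there may be several C++ entry points (object entered,
    // object left, etc.), but to the next node it all looks like one stream.
    virtual void receive(GameObject& object)
    {
        if (accepts(object) && output)
            output->receive(object);
    }

    virtual ~SensorNode() = default;

protected:
    virtual bool accepts(GameObject&) { return true; }
    SensorNode* output = nullptr;
};
```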

Secondly, it cleanly separates how you want to sense the environment (the sensors) from what you're interested in (the filters) and how you want to react to the objects you're interested in (the reaction schemes). I've talked about the sensors and filters before - but the reaction schemes are new. In the old design, every sensor and filter node exposed add, remove, count and object outputs, in the rather desperate hope that you could attach reaction tasks at any point you wanted them. In truth though, this just made all the nodes more complicated, and (based on my test-case of trying to have the Elephant attack the cow) didn't allow you to assemble useful AI at all. In the new design, the reaction scheme node takes care of accumulating the list of valid objects and processing them until nothing is left. The first reaction scheme I've implemented is the "ClosestFilter": this node continually processes the closest remaining target. So now, after the Elephant finishes its attack, the reaction scheme sees that the cow is still a valid target and immediately launches another attack.
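A reaction scheme along those lines might look something like this sketch (again, my own illustrative code, not the engine's): it accumulates the valid targets coming out of the filter chain, and whenever the current reaction finishes it hands back the closest remaining one.

```cpp
#include <limits>
#include <vector>

// Illustrative sketch of a "closest target" reaction scheme: targets arrive
// from the sensor/filter chain, and while any remain valid the scheme keeps
// picking the nearest one for the reaction task (e.g. an attack) to run on.
struct Target { float x, y, z; bool valid; };

class ClosestReactionScheme
{
public:
    void addTarget(const Target& t) { targets.push_back(t); }

    // Called when the previous reaction task (the attack) finishes.
    const Target* pickNext(float selfX, float selfY, float selfZ) const
    {
        const Target* best = nullptr;
        float bestDistSq = std::numeric_limits<float>::max();
        for (const Target& t : targets)
        {
            if (!t.valid)
                continue;
            const float dx = t.x - selfX, dy = t.y - selfY, dz = t.z - selfZ;
            const float distSq = dx * dx + dy * dy + dz * dz;
            if (distSq < bestDistSq)
            {
                bestDistSq = distSq;
                best = &t;
            }
        }
        return best;   // nullptr => nothing left to react to
    }

private:
    std::vector<Target> targets;
};
```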

And finally, the new design allows me to separate the sensor array from the code that processes it. I'll hopefully come back to the significance of this in a later journal entry.

With the sensor system re-written (and a few new AI Tasks like Guard and PlayerFilter), we can finally start to write some more interesting enemy logic like this:

[Embedded image: the Elephant's AI program]

Now, as a little challenge to the reader, see if you can guess what that little AI program would do BEFORE you click on the little movie below. One hint you might need is the "Dependent Block" construct: the GoTo block has a zero tolerance (which means it will never finish), but there's an output Task attached to it. In these cases, the original block runs UNTIL the next one exits. So, the Elephant will GoTo the player until it touches the player.
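In pseudo-code, the Dependent Block boils down to something like this (a hypothetical task interface, just to show the control flow):

```cpp
// Illustrative only: a "dependent block" keeps ticking its main task
// (e.g. GoTo with zero tolerance, which never finishes on its own) until
// the task attached to its output reports that it has exited.
struct Task
{
    virtual void update() = 0;
    virtual bool finished() const = 0;
    virtual ~Task() = default;
};

void runDependentBlock(Task& main, Task& dependent)
{
    while (!dependent.finished())
    {
        main.update();        // GoTo the player...
        dependent.update();   // ...until the Touch task exits
    }
}
```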

Once you've made your guess, you can see him in action in the moovie below:

[Embedded video: the Elephant AI in action]

I've got lots of other test AI going now. My favourite so far is a bunch of enemies that play tag with you.

Merry Christmas!
Recommended Comments

Thanks rip-off. I still go back and forth on whether I'm better just coding or scripting AI behaviour directly ... but there does seem to be an awful number of tricksy situations that the AI framework handles for you, that you'd otherwise need to handle in script to get robust behaviour. So hopefully it turns out to be useful.
