This week I've been looking at adding the first smatterings of game logic. Like probably every other game engine on the planet, I have scripting support that allows you to drop a script onto any object to handle any events that are passed to that object, and generally tell the object what to do. Now scripts have their place - they're great for orchestrating unique events, cut scenes, dialogs, etc. But for basic game objects (like switches opening doors, or turning off the lights), I wanted something more like a toy box. You drop a switch and a door into the level, then you connect the output of the switch up to the input of the door.
So rather than creating loads of different door types to implement all the different ways a door can be opened, I create a single door node (that purely knows how to open and close a door), and then I can connect any kind of opening mechanism I want to it (like a proximity sensor for automatically opening doors, a lock for a door opened by a key, a switch for remotely opened doors, a pressure pad, etc). Any of these input devices can also be hooked up to any other output device (like a light, a conveyor belt, a lift, etc). I also have the usual set of AND, OR, NOT nodes for building more complicated logic (e.g. the door opens when the switch is ON and there's something on the pressure pad). You can obviously throw a script node in the middle if you need it, but I'm hoping I can keep things graphical (even though right now the connections are made by typing in console commands, as I haven't written the graphical UI yet).
The first input device I decided to set up was the ProximitySensor. This input device gives you the list of (mobile) entities within its sensor shape (usually a sphere, but could be anything). I've thrown a transparent red shader on the sensor shape so you can see it below:
Like most things, implementing this turned out to be the tip of a moderately large iceberg. My physics code used to spam messages every frame (X is touching Y, X is touching Y, X is touching Y, etc). Any controller trying to process these messages was responsible for filtering them to turn this stream of repetition into a more digestible set of changes (X has started touching Y, X has stopped touching Y). To make matters worse, these messages were being sent per contact point, per frame. Not only did this add complexity to all the controllers, but in a large scene, this could be a real performance killer (lots of virtual function calls, and lots of timers being set up to work out when the stream of messages stopped). So my first job was to refactor the physics code to only send a message when something changed. This turned out really well - I'm very happy with it.
The second thing I was missing was the ability to use a shape as a sensor or trigger. Rather than having any logic/AI "search" the scene to work out which entities are nearby (e.g. an orc AI might have a perception radius of 4 units, and it wants to know if anyone is close enough to be noticed), I wanted to leverage the efficiency of the collision system (which already has all the spatial optimisations, and only tests intersections when things move). Not only is this more efficient, but it re-uses all the same collision code I already have to let me use any shape as a sensor volume. Shapes in the game already had a set of flags which tell the engine what each shape is used for (Do I render it? Can I hit it? Does it cast shadows?), so all it took was a new "is this a sensor?" flag and some code to propagate and test the flag in a few places, and bingo!
Now, I'm off to create a switch!