Say we're making a game with strong stealth elements... Splinter Cell, Thief or something. We want the player to be able to sneak around, hide, and so on; we want the AIs looking for him to respond to him being hidden as realistically as possible.
Traditional approaches use a line-of-sight test. A line is traced from the AI agent's eyes to some point on the player. If the line intersects the environment, the AI agent cannot 'see' the player. The problem with this technique is that tracing to a single point can't possibly produce an accurate result (unless the player is a glowing ball of light or something) - the chosen point might be occluded while the rest of the player's body is in plain view, or vice versa. We need some technique that takes the whole of the player's geometry into account from the AI's point of view.
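For reference, the traditional test boils down to something like the sketch below - `Vec3`, `RaycastWorld` and the chest-point choice are hypothetical stand-ins for whatever your engine actually provides:

```cpp
// Hypothetical sketch of the single-point test. Vec3 and RaycastWorld stand
// in for whatever the engine really offers.
struct Vec3 { float x, y, z; };

// Assumed engine call: true if the segment from 'a' to 'b' hits level geometry.
bool RaycastWorld(const Vec3& a, const Vec3& b);

bool CanSeePlayer(const Vec3& aiEye, const Vec3& playerChest)
{
    // One ray to one point: if anything clips it, the player counts as
    // hidden - even if most of his body is in plain view (and vice versa).
    return !RaycastWorld(aiEye, playerChest);
}
```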
Enter differential rendering. Here's how it works:
- The camera is set to the AI agent's eye position.
- The game world is rendered (sans player) into a texture.
- The game world is rendered again (with player) into another texture.
- The two textures are bound to texture stages.
- The depth buffer is cleared to a value of 0.5.
- An occlusion query is issued.
- A quad is rendered. A pixel shader is in place which (a) samples both textures, (b) subtracts one sample from the other, (c) takes the dot product of the difference with itself to collapse it to a single scalar, (d) tests that scalar against some threshold value, (e) writes the result of the test to the oDepth register - see the sketch after this list.
- The occlusion query is ended.
- The results of the occlusion query are retrieved.
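To make that sequence concrete, here's a rough D3D9 sketch of the difference pass. The shader source, function names and the 0.01 threshold are all illustrative rather than definitive; it assumes the two world renders are already sitting in textures, and error handling is omitted:

```cpp
// Rough sketch, not a drop-in implementation. Assumes g_diffShaderSrc has
// been compiled ("main", "ps_2_0") via D3DXCompileShader and turned into
// diffPS with CreatePixelShader, and that a depth buffer is bound.
#include <d3d9.h>

static const char g_diffShaderSrc[] =
    "sampler2D sceneWithout : register(s0);                                 \n"
    "sampler2D sceneWith    : register(s1);                                 \n"
    "struct PSOut { float4 colour : COLOR0; float depth : DEPTH; };         \n"
    "PSOut main(float2 uv : TEXCOORD0)                                      \n"
    "{                                                                      \n"
    "    PSOut o;                                                           \n"
    "    float3 d = tex2D(sceneWith, uv).rgb - tex2D(sceneWithout, uv).rgb; \n"
    "    float diff = dot(d, d);           // scalar measure of the change  \n"
    "    // Depth was cleared to 0.5 and the z-func is LESSEQUAL, so 0.0    \n"
    "    // passes the z-test (counted) and 1.0 fails (not counted).        \n"
    "    o.depth  = (diff > 0.01f) ? 0.0f : 1.0f; // threshold is a guess   \n"
    "    o.colour = float4(d, 1.0f);       // colour output doesn't matter  \n"
    "    return o;                                                          \n"
    "}                                                                      \n";

DWORD CountChangedPixels(IDirect3DDevice9* dev,
                         IDirect3DTexture9* texWithout,
                         IDirect3DTexture9* texWith,
                         IDirect3DPixelShader9* diffPS,
                         float width, float height)
{
    IDirect3DQuery9* query = NULL;
    dev->CreateQuery(D3DQUERYTYPE_OCCLUSION, &query);

    // Clear depth to 0.5 so the shader's 0/1 depth writes pass/fail the test.
    dev->Clear(0, NULL, D3DCLEAR_ZBUFFER, 0, 0.5f, 0);

    dev->SetTexture(0, texWithout);
    dev->SetTexture(1, texWith);
    dev->SetPixelShader(diffPS);

    query->Issue(D3DISSUE_BEGIN);

    // Full-screen quad with pre-transformed vertices (the -0.5 offsets are
    // the usual D3D9 texel-to-pixel alignment fudge).
    struct V { float x, y, z, rhw, u, v; };
    const V quad[4] = {
        { -0.5f,        -0.5f,         0.5f, 1.0f, 0.0f, 0.0f },
        { width - 0.5f, -0.5f,         0.5f, 1.0f, 1.0f, 0.0f },
        { -0.5f,        height - 0.5f, 0.5f, 1.0f, 0.0f, 1.0f },
        { width - 0.5f, height - 0.5f, 0.5f, 1.0f, 1.0f, 1.0f },
    };
    dev->SetFVF(D3DFVF_XYZRHW | D3DFVF_TEX1);
    dev->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(V));

    query->Issue(D3DISSUE_END);

    // Blocking retrieval for clarity; see the asynchrony note further down.
    DWORD pixels = 0;
    while (query->GetData(&pixels, sizeof(DWORD), D3DGETDATA_FLUSH) == S_FALSE)
        ; // spin until the GPU catches up
    query->Release();
    return pixels;
}
```

Only the 'changed' pixels pass the z-test against the 0.5 clear value, so the query's pixel count is exactly the difference measure we're after. (Writing oDepth does kill early-z, but for a single quad that's neither here nor there.)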
The resulting value is the total number of pixels at which the player character's presence makes a significant difference to what the AI agent can see. What's nice about this?
- It takes all world and player geometry into account.
- It handles transparent stuff - as well as stuff with specialised shaders - seamlessly (you just render things normally in the two world-render passes).
- It allows you to set the 'keen-eyed-ness' of your AI agents by varying the resolution of the render targets. The smaller the targets, the fewer pixels any given change covers, so the less sensitive the AI will be to small changes, and vice versa.
- It allows camouflage. If I'm wearing a black ninja suit and I stand in a black area, I make a smaller difference than if I were standing against a white backdrop. And it's handled without any testing/calculation of light levels and what have you.
The big downside is that it requires pixel shader model 2.0 (to write oDepth). There's also the fact that occlusion queries are asynchronous, which can make managing them a bit of a problem... using the results from the previous frame should be fine, though, because chances are you'll want your AI to pause for a split second before 'reacting' anyway.
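A sketch of that frame-lagged approach (again, the structure and names are mine, not gospel) - GetData without the flush flag just polls, returning S_FALSE while the result is still in flight:

```cpp
// Sketch of frame-lagged retrieval: one outstanding query per AI agent,
// polled non-blockingly each frame. Names and structure are illustrative.
#include <d3d9.h>

struct AgentVision
{
    IDirect3DQuery9* query;     // issued during last frame's difference pass
    DWORD            lastCount; // most recent *completed* pixel count
};

void PollVision(AgentVision& av)
{
    DWORD count = 0;
    // No D3DGETDATA_FLUSH: just ask whether the result is ready yet.
    if (av.query->GetData(&count, sizeof(DWORD), 0) == S_OK)
        av.lastCount = count;   // fresh result; otherwise keep the stale one
    // AI logic reads av.lastCount - at worst a frame or two old, which just
    // looks like the agent taking a moment to react.
}
```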
Suggestions / comments? Otherwise I'll see about knocking up a demo of this...