AI Color coding

Started by
13 comments, last by verdad 18 years, 8 months ago
Sounds just like good ol' masked collision detection to me. "if collided with pink react."
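A minimal sketch of what that colour-keyed reaction might look like: the agent samples the colour it collided with and looks up a response. The specific colour keys and reaction names here are illustrative assumptions, not anything defined in the thread.

```python
# Colour-keyed reaction table: sample a colour, look up a response.
# The keys and reaction names below are made-up examples.
REACTIONS = {
    (255, 0, 255): "attack",   # pink  = player
    (255, 0, 0):   "attack",   # red   = enemy
    (0, 255, 0):   "collect",  # green = health pack
}

def react(sampled_colour):
    """Return the agent's reaction for a sampled pixel colour, or None."""
    return REACTIONS.get(sampled_colour)
```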
Domine non secundum peccata nostra facias nobis ("O Lord, deal not with us according to our sins")
Why don't you just do a render pass as usual, rendering every object in a different colour: enemies in red and health packs in green, for example. Then you could use the texture you rendered the agent's vision into as a saliency map, i.e. certain colours in the image are the ones he can react to. Once you have this image you could apply the usual image-detection algorithms, and you could use the projected bounding box to do the grouping. The colours could be painted according to some function of the attributes the agent can perceive, such as whether something will cost him health or give him some. All these features are then mapped into a colour cube; once you have them mapped into a space like this, you could use a SOM, as in the article linked below, to classify the objects by colour.

http://www.ai-depot.com/Tutorial/SomColour.html
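A CPU-side sketch of the idea above: scan the agent's colour-coded "vision buffer", group pixels by colour, and report a pixel count plus a bounding box per colour, which gives you the projected-bounding-box grouping the post describes. The buffer contents and the choice of black as background are illustrative assumptions.

```python
# Scan a colour-coded vision buffer and group pixels by colour.
# Returns, for each non-background colour, how many pixels it covers
# and its projected bounding box (x0, y0, x1, y1).
def find_salient_objects(buffer):
    """buffer: 2D list of (r, g, b) tuples. Returns {colour: (count, bbox)}."""
    objects = {}
    for y, row in enumerate(buffer):
        for x, colour in enumerate(row):
            if colour == (0, 0, 0):  # assume black = background
                continue
            count, (x0, y0, x1, y1) = objects.get(colour, (0, (x, y, x, y)))
            objects[colour] = (
                count + 1,
                (min(x0, x), min(y0, y), max(x1, x), max(y1, y)),
            )
    return objects
```

From here, each colour's pixel count and bounding box could feed whatever classification step you like, such as the SOM approach from the linked article.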
I have a question:

If the AI character reacts to anything pink, knowing full well that it's the player, how close an approximation is that to real life?

In reality, there is a whole array of things that will influence whether someone reacts to something falling inside their visual range. There would be no trigger colours to assist them; they'd have to weigh up other factors, such as whether it was there the last time they looked, whether it looked out of place against the rest of the landscape (consider lighting, etc.), what its movement characteristics were...

I just wonder how realistic using a colour to trigger a reaction actually is...
An interesting approach would be a pixel shader that kills pixels based on certain parameters when rendering the character data. For example, you can look at the final lighting and, if it is below a certain luminance, kill the pixel. Then, when you get your occlusion query results, take the number of visible pixels and compare that to a threshold. By varying these thresholds you get AI vision with different levels of awareness and visual acuity.
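The same two-threshold idea can be sketched on the CPU: compute a luminance per pixel, discard pixels below a luminance cutoff (the pixels the shader would kill), and compare the survivors against an awareness threshold, standing in for the occlusion query count. All constants here are illustrative assumptions.

```python
def luminance(rgb):
    """Approximate perceived brightness using Rec. 601 luma weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def is_seen(character_pixels, lum_cutoff=50.0, visible_threshold=4):
    """True if enough well-lit pixels of the character survive the cutoff.

    character_pixels: iterable of (r, g, b) tuples for the character's
    rasterised pixels. lum_cutoff plays the role of the shader's kill
    test; visible_threshold plays the role of the occlusion-query check.
    """
    visible = sum(1 for p in character_pixels if luminance(p) >= lum_cutoff)
    return visible >= visible_threshold
```

Raising `lum_cutoff` models a guard who only notices well-lit targets; lowering `visible_threshold` models a more alert one.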
I don't understand the question... many games use AI to detect whether something that should be attacked exists within the agent's view range. Are you new to this sort of thing?

