AI Color coding


I was trying to think of a newer, and possibly easier, form of enemy AI. Would it be feasible to draw every element the player uses in one color, maybe the common alpha pink? The character is drawn in pink, and maybe even the flashlight he/she projects is colored pink. All of the player elements are drawn in one color, and it's only visible to the enemies. Then maybe every quarter second the enemy gets a new render (maybe 128x128 or even 64x64) of everything it's looking at. After the render is captured, it's run through a loop looking for the pink color. If it's found, the enemy spots the player and does its thing. I posted this here because I will be using OpenGL to render it. How feasible do you think this would be?
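A minimal sketch of the detection loop described above, assuming the enemy's 64x64 view has already been read back into a flat list of RGB tuples (the buffer layout and the exact marker color are assumptions, not anything from a real engine):

```python
MARKER = (255, 0, 255)  # the "alpha pink" used to tag player elements (assumed value)

def player_visible(pixels, width=64, height=64):
    """Scan a flat [(r, g, b), ...] readback of the enemy's view for the marker color."""
    for y in range(height):
        for x in range(width):
            if pixels[y * width + x] == MARKER:
                return True  # enemy has spotted a player-tagged pixel
    return False

# Usage: an all-black view, then the same view with a single pink pixel.
view = [(0, 0, 0)] * (64 * 64)
assert not player_visible(view)
view[100] = MARKER
assert player_visible(view)
```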

Interesting idea, but probably deadly slow. Reading back from the video card just isn't fast. It could be quite cool, though. But think about 20 or 30 AI bots; I'm not sure, but I kind of doubt your computer would like it. I encourage you to try, though.

Yeah, readbacks would probably be too slow. Perhaps you could manage to have the AI running on the GPU as well; I don't think GPUs have become general enough for that yet, but who knows. Maybe reading back a single 64x64 texture each frame (and switching between units each frame) isn't that bad at all if your API calls are ordered correctly (i.e. the GPU has something else to do while you're reading back). It might even work. But you should probably put more information into that texture than just whether someone is there or not. Alternatively, you could render it in software, which would also make it easier to include extra information (player ID, etc.).

it seems someone else already had a similar idea (although also only the idea apparently) http://www.movesinstitute.org/Publications/AI-on-the-GPU.pdf
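The "switch between units each frame" idea can be sketched as a round-robin scheduler: only one unit's view is refreshed per frame, so there is at most one readback per frame no matter how many bots exist. The `read_view` callback stands in for the actual `glReadPixels` path and is purely a placeholder:

```python
class VisionScheduler:
    """Refresh one unit's rendered view per frame instead of every unit every frame."""

    def __init__(self, units):
        self.units = units
        self.next_index = 0

    def tick(self, read_view):
        """Call once per frame; read_view(unit) stands in for the real readback."""
        unit = self.units[self.next_index]
        unit.last_view = read_view(unit)  # store this frame's (stale-tolerant) view
        self.next_index = (self.next_index + 1) % len(self.units)
        return unit  # the unit whose view was refreshed this frame

# Usage: three bots, four frames; the fourth frame wraps back to the first bot.
class Bot:
    pass

bots = [Bot(), Bot(), Bot()]
sched = VisionScheduler(bots)
refreshed = [sched.tick(lambda u: "64x64 view") for _ in range(4)]
assert refreshed == [bots[0], bots[1], bots[2], bots[0]]
```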

I read that article, but I think they are talking about using two textures to track the movement of objects. That is definitely CPU/GPU intensive. My idea is simply an activation of the AI. Usually AI tracks the player by their position and maybe their bounding box. If you were walking down a hallway, casting shadows and pointing your flashlight, a real person would hide and set up a surprise attack; bots can't see the lights or shadows. It also makes it much harder for the player to hide by peeking around corners.

Then again, this is an idea that's going to take a bit of work to get going, since I haven't even started AI programming yet, lol.

I think they are talking about using a player-ID texture plus one texture rendered normally, and then determining (with computer vision algorithms, which to a large extent can be executed on the GPU) whether the player is visible.

I don't see the value of this. The math to determine the visibility of these objects might be less intensive than actually rendering and then reading back. Locking buffers and reading will slow down all your other rendering as well. Maybe when shaders that can output arbitrary data become available (maybe in Shader Model 4.0!) we could do stuff like this.

Why not use simple occlusion testing? Draw all the occluding geometry from the AI's point of view and then test the flashlight and the player. That would eliminate the texture readback.

Actually, JSoftware, occlusion culling would be perfect for this:

1) Set up a render target (128x128 or whatever).
2) Disable color writes and enable z-buffer read/write (this saves bandwidth; some cards will render at 2x fill rate).
3) Render everything NOT attached to the player into your buffer.
4) Render the things attached to the player wrapped in an occlusion query.
5) Continue on with the game (in its previous state) until the query completes, then use the number of visible pixels to gauge visibility.

This would only have the cost of the render, which will be mostly transform cost since the pixel shading will be very cheap. Framebuffer bandwidth would be a non-issue (due to the simple pixel shaders, the small resolution, and z-only rendering). And because the occlusion query is asynchronous, no stalls or readbacks are needed.
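What the query actually counts can be emulated on the CPU with a tiny software depth buffer. This is only an illustration of steps 3-5 (real code would use `glBeginQuery(GL_SAMPLES_PASSED, ...)` and poll the result later); the scene setup below is made up:

```python
def occlusion_query(depth, player_frags):
    """Count player fragments that pass the depth test, like GL_SAMPLES_PASSED.

    depth:        dict (x, y) -> nearest occluder depth already rendered (step 3)
    player_frags: list of (x, y, z) fragments of player-attached geometry (step 4)
    """
    visible = 0
    for x, y, z in player_frags:
        if z < depth.get((x, y), float("inf")):  # closer than any occluder there
            visible += 1
    return visible

# A wall at depth 5.0 covers column x=0; the player has fragments in front of
# it, behind it, and off to the side where nothing occludes.
depth_buffer = {(0, y): 5.0 for y in range(4)}
frags = [(0, 0, 3.0), (0, 1, 7.0), (1, 0, 7.0)]
assert occlusion_query(depth_buffer, frags) == 2  # step 5: 2 visible pixels
```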

Why don't you just do a render pass as usual, rendering every object in a different color: enemies in red and health packs in green, for example. Then you could use the texture you rendered the agent's vision into as a saliency map; that is, certain colors in the image are what he can react to. Once you have this image, you could apply all the usual image-detection algorithms, and you could use the projected bounding box to do the grouping. The different colors could be painted on by some function that takes into account the attributes the agent can perceive, like whether something will cost him health or give him some. All these features are then mapped into a color cube, and once you have them mapped into a space like this you could use a SOM, like in the article here, to classify the objects by color.

http://www.ai-depot.com/Tutorial/SomColour.html
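A toy version of the color-coded saliency idea above, with a made-up palette mapping render colors to object classes (the palette and class names are assumptions, and a real SOM would replace the exact-match lookup with learned clustering):

```python
# Hypothetical palette: render color -> object class the agent can react to.
PALETTE = {
    (255, 0, 0): "enemy",        # red: costs the agent health
    (0, 255, 0): "health_pack",  # green: gives the agent health
}

def salient_objects(view):
    """Collect the classes of all coded colors present in the agent's rendered view."""
    return {PALETTE[color] for color in view if color in PALETTE}

# Usage: a view containing background noise, one enemy pixel, one health pack pixel.
view = [(0, 0, 0), (255, 0, 0), (10, 10, 10), (0, 255, 0)]
assert salient_objects(view) == {"enemy", "health_pack"}
```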

Guest Anonymous Poster
I have a question:

If the AI character reacts to anything pink, knowing full well that it's the player, how close an approximation is that to real life?

In reality, there are a whole array of things that will influence whether someone would react to something falling inside their visual range. There would be no trigger colours to assist them - they'd have to weigh up other factors, such as whether it was there the last time they looked, whether it looked out of place with the rest of the landscape (consider lighting, etc), what its movement characteristics were...

I just wonder how realistic using a colour to trigger a reaction actually is...

An interesting approach would be a pixel shader that can kill pixels based on certain parameters when rendering the player's geometry. For example, you could look at the final lighting and, if it is below a certain luminance, kill the pixel. Then when you get your occlusion-query results, take the number of visible pixels and compare that to a threshold. By varying these thresholds you get AI vision with different levels of awareness.
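A CPU-side sketch of that kill rule (in a real shader this would be a `discard` on the lit fragment; the luminance weights are the standard Rec. 601 values, while the thresholds and guard names are made up for illustration):

```python
def luminance(r, g, b):
    """Rec. 601 luma of a normalized RGB fragment."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def surviving_pixels(lit_frags, min_luminance):
    """Fragments a 'kill dark pixels' shader would keep, i.e. what the query counts."""
    return [f for f in lit_frags if luminance(*f) >= min_luminance]

# One fragment in deep shadow, one well lit; a guard with sharper night vision
# (lower kill threshold) sees more of the player than a dim-sighted one.
frags = [(0.02, 0.02, 0.02), (0.8, 0.7, 0.6)]
dim_guard = surviving_pixels(frags, min_luminance=0.05)
alert_guard = surviving_pixels(frags, min_luminance=0.01)
assert len(dim_guard) == 1
assert len(alert_guard) == 2
```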

I don't understand the question. Many games use AI that detects whether something that should be attacked exists within its view range. Are you new to this sort of thing?
