Raycasting for AI Detection

(This was written quickly while in transit. It may not make too much sense as a result; I'm not all there when I'm on the bus!)

The idea was to use raycasting (or what I call raycasting; I hope they're one and the same) to determine what an AI-controlled entity would be able to "see". Once per AI cycle, these entities would be provided with a list of all other entities within a certain radius. A ray would be cast at each of these entities to determine whether or not the original entity could see them.

The issue, however, is what happens when there's something in the way: enough to block the ray, but not enough to actually obscure the target entity completely? In that case, the original entity would not be able to see the target, but if we replaced the original entity with a viewpoint, we would still be able to see the target (as long as it gets rendered). So we would have false negatives, with certain entities reported as invisible to others when they shouldn't be.

With OGRE, entities are divided into subentities. One way to lessen the number of false negatives is to cast a ray to each subentity rather than just one to the entity itself. This also allows for a visibility rating, computed as the number of rays that aren't intercepted divided by the total number of rays cast at the entity (or its subentities). There can still be false negatives, but it would certainly be an improvement over the original idea.

I'd rather have something with no false negatives at all, however. Does anyone know of a foolproof way of determining which entities are "visible" to another? Or is this the best?
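Here's a minimal sketch of that rating, assuming a hypothetical rayBlocked() world query (not a real OGRE call) and one sample point per subentity:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical world query: true if the segment from 'from' to 'to'
// is intercepted by something before reaching 'to'.
bool rayBlocked(const Vec3& from, const Vec3& to);

// Visibility rating in [0, 1]: the fraction of sample points on the
// target (e.g. one per subentity) reachable by an unblocked ray.
float visibilityRating(const Vec3& eye, const std::vector<Vec3>& samples)
{
    if (samples.empty())
        return 0.0f;

    int unblocked = 0;
    for (const Vec3& p : samples)
        if (!rayBlocked(eye, p))
            ++unblocked;

    return static_cast<float>(unblocked) / samples.size();
}
```

A rating above zero means at least partially visible; a gameplay threshold can then decide whether the AI actually reacts.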

Chris 'coldacid' Charabaruk – Programmer, game designer, writer | twitter

I don't know the answer, but I've also thought about this problem: how can you determine whether, and how much of, an object is visible from a given point? It could be done with occlusion volumes, but that's more overhead than you would want for a simple line-of-sight test.

Obviously you can point sample (such as with the subentities you mentioned), but the resolution is arbitrary, so there would always be the chance of a false negative.

I've thought about this problem in terms of visibility pre-calculation for arbitrary environments, i.e. is any part of octree cell A visible from any part of octree cell B? My guess is that that particular problem is more or less impossible to solve exactly; otherwise, I assume, people would use PVSs (potentially visible sets) for octrees all the time.

So I'll be interested to see if anyone offers a solution :)
Well, I don't know how good this solution would be, but...

For each entity you are "looking at", you could do line-of-sight tests to each corner of its bounding box (and maybe to its center as well). But that might lead to too many occlusion tests coming back positive when you can't actually see any polygons of that entity, depending on how tight the bounding box is. So perhaps, if each entity is subdivided into an octree, you could do additional LOS tests against the polys closest to the corner that tested positive.
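A rough sketch of the basic corner-plus-center sampling, reusing the hypothetical rayBlocked() query from above:

```cpp
#include <array>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

bool rayBlocked(const Vec3& from, const Vec3& to); // hypothetical, as above

// The nine classic sample points: the eight box corners plus the center.
std::array<Vec3, 9> boxSamplePoints(const AABB& box)
{
    std::array<Vec3, 9> pts;
    int i = 0;
    for (int xi = 0; xi < 2; ++xi)
        for (int yi = 0; yi < 2; ++yi)
            for (int zi = 0; zi < 2; ++zi)
                pts[i++] = { xi ? box.max.x : box.min.x,
                             yi ? box.max.y : box.min.y,
                             zi ? box.max.z : box.min.z };
    pts[8] = { (box.min.x + box.max.x) * 0.5f,
               (box.min.y + box.max.y) * 0.5f,
               (box.min.z + box.max.z) * 0.5f };
    return pts;
}

// True if any of the nine sample points has a clear line of sight.
bool anySamplePointVisible(const Vec3& eye, const AABB& box)
{
    for (const Vec3& p : boxSamplePoints(box))
        if (!rayBlocked(eye, p))
            return true;
    return false;
}
```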

Whatever does work though... it'll have to be highly optimized.

Also, one thing to think about: maybe you don't have to check every entity every frame. Maybe check 25% of them each frame, and then after the fourth frame, use the fifth frame to update the list of entities within a certain radius. That would help some.
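A minimal sketch of that staggering, assuming a hypothetical testVisibility() hook for whatever LOS test you settle on:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Entity; // whatever your engine uses

// Spreads visibility checks over four frames: each update() tests one
// quarter of the candidate list, so every entity is refreshed once
// every four frames (leaving a fifth slot free to rebuild the list).
class StaggeredVisibility
{
public:
    void update(const std::vector<Entity*>& candidates)
    {
        const std::size_t quarter = (candidates.size() + 3) / 4;
        const std::size_t begin = phase_ * quarter;
        const std::size_t end = std::min(begin + quarter, candidates.size());

        for (std::size_t i = begin; i < end; ++i)
            testVisibility(candidates[i]);

        phase_ = (phase_ + 1) % 4;
    }

private:
    void testVisibility(Entity*); // hypothetical hook for your LOS test
    std::size_t phase_ = 0;
};
```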

When it comes to real-time AI, you have to start considering little "optimizations" like that; AI can be VERY taxing on performance.
Whatsoever you do, do it heartily as to the Lord, and not unto men.
Another one in the "I have no idea how good this is, hence it probably isn't" category, but it's something I want to try one day:

Create a view volume of what the AI can see and render it to a buffer: no lights, no textures, only flat color shading for the objects of interest. If you're using LOD, use the lowest detail possible no matter the distance, at a really low resolution and with hardly any bits per pixel.

Finally, instead of checking each pixel for the correct color, sum up every pixel and check the result; split the render into a grid so you also know which direction the object is in.

You can also add the pixels up per color component, which gives you three classes of objects to search for.
I'd have to think through the details, but you can use division and remainders to get a better idea of exactly what you're looking at. For example, the amount of red in a pixel could be determined by enemy strength: grade it properly and a division will show exactly how strong the enemy you're looking at is, while an uneven division will show that you're looking at two enemies of different classes.

Visual processing is a lot easier if you control exactly what light is seen.
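Here's a minimal sketch of just the counting step, assuming the scene has already been rendered flat-shaded into a small off-screen buffer with each color channel reserved for one class of object:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// One pixel of the tiny flat-shaded "AI eye" render.
struct Pixel { std::uint8_t r, g, b; };

// Sum each channel over the whole buffer. A nonzero sum means at least
// one object of that class survived the depth test somewhere in view;
// if each enemy's red value encodes its strength in fixed steps, the
// red sum divided by the step size hints at what is being looked at.
std::array<std::uint32_t, 3> channelSums(const std::vector<Pixel>& buffer)
{
    std::array<std::uint32_t, 3> sums = { 0, 0, 0 };
    for (const Pixel& p : buffer) {
        sums[0] += p.r;
        sums[1] += p.g;
        sums[2] += p.b;
    }
    return sums;
}
```

Summing per grid cell instead of over the whole buffer, as suggested above, also tells you roughly where in the view the object is.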

I'm not exactly sure this is what you're asking for, but you can use vector math, with no costly trig functions at all, to determine whether objects are within a view cone defined by a given point and direction vector. It doesn't account for line-of-sight issues, but you could handle those elsewhere.

You need to store the direction vector of the AI unit and the vectors that define the left and right sides of the view cone (so when the direction rotates, you have to rotate the cone vectors as well). By taking the cross product between each cone side vector and the direction vector, and then crossing that result with the cone side vector, you create a perpendicular, inward-facing vector for each of the cone sides. Then take the position of each entity you are testing, relative to the AI, and dot it with each of the perpendicular vectors. If both signs are positive, it is within the field-of-view cone; otherwise, it's not.

The advantage of this approach is that you can then vary the view cone vectors at any time by any amount, even making them asymmetric, and it still works.
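A self-contained sketch of that test (the Vec3 helpers are just illustrative):

```cpp
struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 sub(const Vec3& a, const Vec3& b)
{
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

// dir is the AI's facing; leftSide/rightSide are the cone edge vectors.
// cross(edge, dir) crossed back with the edge gives an inward-facing
// normal per edge; the target is inside the wedge when it lies on the
// positive side of both normals.
bool inViewCone(const Vec3& aiPos, const Vec3& dir,
                const Vec3& leftSide, const Vec3& rightSide,
                const Vec3& targetPos)
{
    const Vec3 toTarget    = sub(targetPos, aiPos);
    const Vec3 leftNormal  = cross(cross(leftSide, dir), leftSide);
    const Vec3 rightNormal = cross(cross(rightSide, dir), rightSide);
    return dot(toTarget, leftNormal) > 0.0f
        && dot(toTarget, rightNormal) > 0.0f;
}
```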
Testing merely to the centroid of an object might be just fine. Would you want an enemy to notice you when only your hand was showing? I would run some tests and see what works best in practice.

Mark
Sound Effects For Game Developers
http://www.indiesfx.co.uk
A simple way would be to make a software renderer that only handles a z-buffer. You render all your scenery into this z-buffer, and then render the objects that might be of interest, keeping track of how many pixels pass the z test and get drawn. (Overdraw could complicate this, but with some careful sorting it could be handled properly, or at least properly most of the time, without too much computation.)
With a little more work, this method could also account approximately for things like translucent windows partially obscuring a view.
If you use the lowest LOD for everything and a low resolution, this method could work well.
The reason to do it in software is that it can run while the graphics card is drawing the frame; besides, reading back from the card to get information like the number of pixels drawn can be difficult and slow.
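Here's a minimal sketch of such a buffer, assuming you already have code to project and rasterize triangles into it (the rasterizer itself is omitted):

```cpp
#include <vector>

// A tiny software depth buffer: just the z test, no color at all.
// Rasterize the scenery first via write(), then rasterize each object
// of interest through passes() and count how many pixels survive.
class DepthBuffer
{
public:
    DepthBuffer(int w, int h)
        : width_(w), height_(h), depth_(w * h, 1e30f) {}

    // Keep the nearest depth per pixel (called while drawing occluders).
    void write(int x, int y, float z)
    {
        float& d = depth_[y * width_ + x];
        if (z < d)
            d = z;
    }

    // True if a fragment at depth z would survive the z test at (x, y).
    bool passes(int x, int y, float z) const
    {
        return z <= depth_[y * width_ + x];
    }

private:
    int width_, height_;
    std::vector<float> depth_;
};
```

At something like 64x64 with lowest-LOD meshes, filling and testing a buffer like this is cheap enough to run on the CPU alongside the real render.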

A MUCH more complex solution would be to create a view frustum and perform a CSG subtraction between the frustum and the extruded outlines of all view-blocking objects (not sure of the word, but the outlines would be extruded and expanded away from the viewpoint, so a point would become a pyramid, and a square would become a pyramid frustum). Then you could perform an intersection between each object of interest and the remaining (probably extremely complex) portion of the frustum to find out how much of its geometry is visible.
This would probably be rather slow since it involves lots of math, but with optimization I think this solution is also workable.
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk

