marius1930

Pos u obscured to pos v?


I've been looking around a bit and can't seem to find the right answer to this. I'm making a basic AI that will follow the player, but only if the player is within view of the AI unit, i.e. not obscured by terrain in particular. If the player is in view, I update the target position. Only I can't seem to find a detection method. The AI unit may be behind the camera, so occlusion won't work unless it covers a full 360 degrees. I've been looking at rays, but knowing that a ray hits the camera 'at some point' won't help me unless I can make the ray stop when it hits something else. Another way would be to check against a terrain mesh, but that would also mean checking against every other mesh that could potentially block the view, e.g. buildings.

Also, for the unit orientation I'm rotating the object by PI*2 * (target.X-pos.X) % 1 radians, making a 90 degree rotation and just swapping the start location to 180 in the case of a negative z value. But the object faces the camera somewhat skewed, usually slightly to the right of the camera, sometimes to the left. Any ideas? Would quats perform better?

Thanks.
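For the facing problem, a common alternative to the modulo formula above is `atan2`, which handles all four quadrants without a special case for negative z. A minimal sketch (the function name, axis convention, and winding direction are illustrative assumptions, not from any particular engine):

```cpp
#include <cmath>

// Yaw angle (radians) turning a unit at (posX, posZ) to face
// (targetX, targetZ). Assumes +Z is the unit's forward axis at yaw 0
// and yaw increases toward +X.
float YawToward(float posX, float posZ, float targetX, float targetZ)
{
    // atan2 covers all four quadrants, so no sign-based branch on z
    // is needed as in the PI*2 * dx % 1 approach.
    return std::atan2(targetX - posX, targetZ - posZ);
}
```

The consistent skew to one side described above is a typical symptom of mapping only the x difference to an angle; using both the x and z differences in one `atan2` call removes it.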

We need to know more about the world geometry. It is possible that some kind of optimisation could be used (Portal, PVS or such-like) but the most general scenario will require something a bit more brute-force.

If your scene is simple and the targets relatively small, the best option is to ray-test against every potential obstacle. You already expressed disinterest towards this, but be aware that it's probably faster than you think, especially if some kind of spatial partitioning is involved.
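The brute-force approach can be sketched as a segment test from the AI's eye to the player against each obstacle. This sketch tests only bounding spheres; all names (`Vec3`, `Obstacle`, `HasLineOfSight`) are illustrative, and a real implementation would follow a sphere hit with a test against that object's actual geometry:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Obstacle { Vec3 center; float radius; };

// True if the segment from `eye` to `player` misses every obstacle's
// bounding sphere. The sphere pass alone over-rejects (a sphere is
// larger than its mesh), so treat a hit as "possibly blocked".
bool HasLineOfSight(Vec3 eye, Vec3 player, const std::vector<Obstacle>& obstacles)
{
    Vec3 d = Sub(player, eye);
    float len2 = Dot(d, d);
    for (const Obstacle& o : obstacles) {
        Vec3 m = Sub(o.center, eye);
        // Parameter of the closest point on the segment to the sphere centre.
        float t = len2 > 0.0f ? Dot(m, d) / len2 : 0.0f;
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        Vec3 closest = { eye.x + d.x * t, eye.y + d.y * t, eye.z + d.z * t };
        Vec3 sep = Sub(o.center, closest);
        if (Dot(sep, sep) <= o.radius * o.radius)
            return false; // segment passes through this obstacle's sphere
    }
    return true;
}
```

Note the segment is clamped to [eye, player], so geometry behind either endpoint never blocks the view; that addresses the "make the ray stop when it hits something else" concern from the original post.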

The other main option is to let the GPU do the work for you and perform a dedicated render. This can be made very simple or very complicated, but the basic idea is to render the follower's entire field of vision, perhaps onto a low-res surface, with a simple shader set so that nothing comes out but the target. For example, your shader could render the target in white and everything else in black. The visibility test would then be for the presence of any white pixels in the render. Obviously, it's costly to lock the surface and iterate over all the pixels, so you may be able to optimise at the cost of some accuracy. One such optimisation would be to keep the render target small enough so that a full mip-map chain (which could be created in a flash by the GPU) would come out with a not-completely-black bottom level (1x1) - within the bounds of floating-point error.
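The mip-chain idea above reduces to one observation: the 1x1 bottom level of a mip chain is the average of every pixel, and the average of a black/white render is non-zero exactly when at least one target pixel was drawn. A CPU simulation of that reduction, purely to illustrate the test (on the GPU the averaging would be done by the mip-map generation, not a loop):

```cpp
#include <vector>

// Returns true if any pixel in a black(0)/white(1) visibility render is
// lit. The whole-image average stands in for the 1x1 mip level; the
// epsilon guards against floating-point error, as noted above.
bool TargetVisible(const std::vector<float>& pixels)
{
    double sum = 0.0;
    for (float p : pixels) sum += p;
    double average = pixels.empty() ? 0.0 : sum / pixels.size();
    return average > 1e-6;
}
```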

Tell us more.
Admiral

It's just a simple world; I'll be adding spatial partitioning later on.

My worry with checking every object is rooted in "what if there were 10 AIs?", but I suppose it could be narrowed down with a distance check and the 'frustum' of the object itself.

I'll give it a try :)

If you need more speed, one frequently used optimization is to create a collision mesh for each object. This would be a very low-poly mesh used only for collision detection, not for rendering.

You'll also want to store bounding spheres (and / or AABBs) for each object to enable rapid rejection of things nowhere near the ray.
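The AABB rejection mentioned above is usually done with a ray-vs-slab test: intersect the ray's parameter interval with each axis-aligned slab and reject as soon as the intervals stop overlapping. A self-contained sketch (plain float arrays stand in for whatever vector type the engine uses):

```cpp
#include <algorithm>
#include <cmath>

// Ray vs. axis-aligned bounding box "slab" test. Only objects whose AABB
// the ray crosses need the more expensive collision-mesh test.
bool RayHitsAABB(const float origin[3], const float dir[3],
                 const float boxMin[3], const float boxMax[3])
{
    float tNear = 0.0f;   // ray starts at t = 0
    float tFar  = 1e30f;  // effectively an infinite ray
    for (int axis = 0; axis < 3; ++axis) {
        if (std::fabs(dir[axis]) < 1e-8f) {
            // Ray parallel to this slab: reject if it starts outside it.
            if (origin[axis] < boxMin[axis] || origin[axis] > boxMax[axis])
                return false;
            continue;
        }
        float inv = 1.0f / dir[axis];
        float t1 = (boxMin[axis] - origin[axis]) * inv;
        float t2 = (boxMax[axis] - origin[axis]) * inv;
        if (t1 > t2) std::swap(t1, t2);
        tNear = std::max(tNear, t1);
        tFar  = std::min(tFar, t2);
        if (tNear > tFar) return false; // slab intervals no longer overlap
    }
    return true;
}
```

Sphere tests are even cheaper per object (one dot product and a compare), which is why storing both, as suggested above, is common: sphere first, then AABB, then mesh.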

Of course you can use spatial partitioning of some kind as well, but those may be enough on their own for a relatively small number of objects.

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.

Participate in the game development conversation and more when you create an account on GameDev.net!

Sign me up!