Stealth And Detection System


@JonathanSavastano Regarding the radar method, you don't need to calculate the tangent unless the viewing frustum changes in size (like you're doing a sniper scope effect). Just calculate it once when you set the field of view angle.
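For reference, here is a minimal, engine-agnostic sketch of that idea: cache the tangent of the half-angle when the field of view is set and reuse it for every check. The types and names here (ViewCone, Contains, and so on) are purely illustrative, not from any particular engine.

#include <cmath>

struct Vec2 { float x, y; };   // x = forward distance, y = lateral offset, in the viewer's local space

class ViewCone
{
public:
	// Recompute the cached tangent only when the field of view actually changes
	// (e.g. a sniper-scope zoom). Every visibility check just reuses the cached value.
	void SetFieldOfView(float fovRadians)
	{
		tanHalfFov = std::tan(fovRadians * 0.5f);
	}

	// A point is inside the cone if its lateral offset fits within the cone's
	// width at that forward distance: |y| <= x * tan(fov / 2).
	bool Contains(const Vec2& localPos) const
	{
		if (localPos.x <= 0.0f)   // behind the viewer
			return false;
		return std::fabs(localPos.y) <= localPos.x * tanHalfFov;
	}

private:
	float tanHalfFov = 1.0f;      // cached by SetFieldOfView
};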


JonathanSavastano said:
It’s a very odd thing to be caught up on, as I think my question was quite clear. My initial post was easily understandable, so your point is moot.

Your question is largely nonsense, actually; there is just enough in the awkward phrasing for people to more or less understand what you're looking for, but it is far from ‘quite clear’ to me, and I've been doing this for over two decades and have worked with plenty of students and beginners. I could figure it out, but it isn't as clear as you've made it out to be.

JonathanSavastano said:
I'd also like to accomplish this task via code if it makes any sense to do so. Thus, I come to you all with your vast experience and knowledge to seek aid. Any resources or tips would be welcome,

Unreal comes with the source. I strongly recommend you open it up and read it. Understand it. Figure out what they did and why.

Doing it the way you asked about is seldom done. Use what the engine gives you, because it is already there, it works, and it is easily understood from the source. It will also be maintained in the future, with bugs fixed and improvements introduced. Only do more if you have read the source and understand that you have a compelling need to do more.

I think it works better to cast just one ray, to the center of the player mesh, rather than aiming at the vertices of the bounding box. My reasoning: both are approximations, and when it comes to approximations, it's better to err on the side of the player. The player's bounding box extends beyond the player mesh, and you never ever want an enemy to see the player when it should be physically impossible for them to do so. An enemy not noticing the player when they should be able to see them is less of a problem because it actually happens all the time with real people. Also, using the vertices of the bounding box creates a paradoxical situation where the enemy can't see the player through a window directly in front of them because the vertices of the bounding box are outside the window.
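A rough sketch of that single-ray approach, with purely hypothetical names (Vec3, RaycastBlocked, the eye and mesh-center positions are whatever your engine provides):

// Engine-agnostic sketch. RaycastBlocked() stands in for your engine's ray cast.
struct Vec3 { float x, y, z; };

// Placeholder: returns true if a blocking surface lies between the two points.
bool RaycastBlocked(const Vec3& from, const Vec3& to);

bool EnemyCanSeePlayer(const Vec3& enemyEyes, const Vec3& playerMeshCenter)
{
	// One ray, aimed at the center of the player's mesh rather than at the
	// bounding box. If anything blocks it, we err on the side of the player
	// and report "not seen".
	return !RaycastBlocked(enemyEyes, playerMeshCenter);
}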

(Side note: I think at a fundamental level game scripts and audio files and graphic files are all instances of the same general concept, i.e. “files that contain some content for the game”, especially when it comes to edge cases like visual scripting and domain-specific languages for in-game dialogue. The line between data and code is often fluid. I have no problem with people using the general term “asset” for this general concept.)

@a light breeze
Thank you for your input, breeze. I definitely don't want the player to become frustrated because they've been seen while in full cover. I used to think that making a game difficult was the way to go, but whenever I have friends try a game prototype I've been working on, they get frustrated and put it down in under 10 minutes. So I've adopted a new philosophy, which is to ease the player in and reward them early so they keep playing longer, and then the game can become more difficult later on. What you've said ties directly into this: people in real life miss things that are right under their noses all the time, so why should an NPC in a video game be beyond that innately human error?

Again, thank you for your input,

-Jonathan

JonathanSavastano said:
So I’ve adopted a new philosophy which is to ease the player in and reward them early

But it's questionable to implement this on top of an inaccurate system you have little control over.

If overestimation from the bounding box is a problem, which I agree it is, then I would use the capsules that make up the character's ragdoll. That is accurate, which allows better design as well.

For example, say NPCs have a gauge measuring player visibility. Only once it's full are the NPCs sure about the enemy, and they start to attack, like in many games.
If we have an accurate estimate of visibility, we can use it to control how quickly the gauge fills. If the player's visibility is small, it fills slowly, giving the player some advantage.
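A rough sketch of how that could look, assuming some per-capsule visibility test is already available (the types and the CountVisibleCapsules helper are made up for illustration):

struct Player { int capsuleCount = 0; /* ragdoll capsules, position, ... */ };
struct Npc    { float awareness = 0.0f; float awarenessFillRate = 1.0f; };

// Placeholder: trace a ray toward each ragdoll capsule and count how many are unoccluded.
int CountVisibleCapsules(const Npc& npc, const Player& player);

void UpdateAwareness(Npc& npc, const Player& player, float dt)
{
	const float visibility = (player.capsuleCount > 0)
		? float(CountVisibleCapsules(npc, player)) / float(player.capsuleCount)
		: 0.0f;

	// The gauge fills quickly when the player is fully exposed and slowly when
	// only a small part (say, a foot sticking out of cover) is visible.
	npc.awareness += visibility * npc.awarenessFillRate * dt;

	// Once the gauge reaches 1.0, the NPC is sure about the enemy and can attack.
}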

On the other hand, we can also use this to make the NPC smarter. It could aim and shoot at the player even through a small hole in the wall, which is hardly possible otherwise.

The temporal lag of the estimate is no big problem either, I think, since it models a natural response time for the NPCs. Characters in games often react instantly to events, which feels very unbelievable to me and breaks immersion.

JoeJ said:

If overestimation from the bounding box is a problem, which I agree it is, then I would use the capsules that make up the character's ragdoll. That is accurate, which allows better design as well.

In other words, test the visibility of each body part separately. This is a good idea that I like, as it allows more nuanced responses. Especially if combined with a disguise system where guards can see you without noticing that you are an intruder.

a light breeze said:
In other words, test the visibility of each body part separately.

Actually it would be nice if physics engines had a packet traversal option, so we could trace many rays with similar origin and direction for the traversal cost of one.

With increasing complexity of geometry, one ray alone is rarely enough anymore.

JoeJ said:
if he stands behind a narrow opening in a wall

In my honest opinion, the solution to this would be to make your algorithm aware of such cases. If the doors have openings in them, then just make your detection algorithm aware of that. I think it is a lot better than trying to find a general solution to this visibility problem.


AliAbdulKareem said:
the solution to this would be to make your algorithm aware of such cases. If the doors have openings in them, then just make your detection algorithm aware of that.

What are you thinking of? An alternative to ray tracing?

I could imagine a shortest path algorithm. But it would fail if the opening in the wall is not in the graph, it would lack support for dynamic/destructible levels, and it would be very expensive.

And even then, an existing shortest path tells you nothing about visibility, which is what we need.
You need to trace a ray no matter what, so why increase complexity and cost for no reason?

As I think about it, NPCs might start to find paths to the player only after the cheap visibility test makes them aware of the player.
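A small sketch of that ordering, where the cheap ray test gates the expensive pathfinding (all the types and helpers here are placeholders, just to show the structure):

struct Player { /* position, ragdoll, ... */ };
struct Path   { /* waypoints */ };
struct Npc    { bool awareOfPlayer = false; Path currentPath; };

// Placeholder: one or a few cheap ray casts toward the player.
bool IsPlayerVisible(const Npc& npc, const Player& player);
// Placeholder: comparatively expensive pathfinding.
Path FindPathToPlayer(const Npc& npc, const Player& player);

void UpdateNpc(Npc& npc, const Player& player)
{
	if (!npc.awareOfPlayer)
	{
		// Run only the cheap visibility test until the NPC actually notices the player.
		npc.awareOfPlayer = IsPlayerVisible(npc, player);
		return;
	}
	// Pathfinding (and other expensive reasoning) happens only after awareness.
	npc.currentPath = FindPathToPlayer(npc, player);
}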

@JoeJ I probably phrased that extremely poorly. What I meant is that when a ray cast from an enemy NPC intersects a plane (a door), it can just query what type of thing it intersected. Unless I misunderstood the problem, it is something like this:

// Pseudocode: the NPC's line-of-sight ray queries what kind of object it hit.
if (Intersect(NPCRay, Object))
{
	if (Object.Type == DOOR_WITH_HOLE)
	{
		// Determine whether the ray can pass through the opening and handle that case.
	}
}

This is what I meant by an algorithm that is “aware” of the context. A general plane-ray intersection won't have any idea what a hole in a wall/door is, but I don't see why you can't extend it (as opposed to randomizing the rays).


