• ### Similar Content

• By GytisDev
Hello,
A few friends and I are developing a simple city-building game in Unity for a school project, something like Banished but much simpler. I was tasked with creating the path-finding for the game, so I mostly followed this tutorial series up to episode 5. Then we created a simple working system for cutting trees. The problem is that the path-finding works about 90% of the time, but it randomly gets stuck even though there is clearly a way to the objective (a tree). I tried looking for some pattern in when it happens but can't find anything. So basically I need any tips on how I should approach this problem.
Use this image to visualize the problem.
• By aymen
Please, does anyone know how I can calculate the centroid of any number of vertices?
• By owenjr
Hi there!
I am trying to implement a basic AI for a turrets game in SFML and C++, and I have some problems.
This AI follows some waypoints established along a Bézier curve.
At first, this path was followed by only one enemy. For this purpose, the enemy has to calculate the distance between his current position
and the next waypoint he has to pick.
If the distance is less than a specific value we establish, then we move on to the next point. This repeats until the final destination is reached. (In the submitted code, ignore the variable m_go.)

Okay, our problem arises when we spawn several enemies and all of them have to follow the same path, because it produces a bad visual effect (each one ends up on top of another).
In order to solve this visual problem, we have decided to use a repulsion vector. The calculation goes like this:
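Roughly, the combination can be sketched like this in C++ (the names here are illustrative, not the actual code); renormalizing the summed vector before applying the speed is what keeps the repulsion from changing how fast the enemy moves:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Combine the waypoint direction with an inverse-distance repulsion from
// the nearest neighbor, then renormalize so the resulting velocity always
// has magnitude `speed`, no matter how strong the repulsion is.
Vec2 steeringVelocity(Vec2 pos, Vec2 nearestNeighbor, Vec2 waypoint, float speed)
{
    // Unit direction toward the current waypoint.
    float vx = waypoint.x - pos.x;
    float vy = waypoint.y - pos.y;
    float vlen = std::sqrt(vx * vx + vy * vy);
    if (vlen > 0.0f) { vx /= vlen; vy /= vlen; }

    // Repulsion: points away from the neighbor, weighted by 1 / distance.
    float nx = pos.x - nearestNeighbor.x;
    float ny = pos.y - nearestNeighbor.y;
    float d = std::sqrt(nx * nx + ny * ny);
    if (d > 0.0f) {
        vx += nx / (d * d);   // (nx / d) is the unit direction, the extra /d is the 1/d weight
        vy += ny / (d * d);
    }

    // Renormalize the sum so the speed is unaffected by the repulsion magnitude.
    float len = std::sqrt(vx * vx + vy * vy);
    if (len > 0.0f) { vx = vx / len * speed; vy = vy / len * speed; }
    return Vec2{vx, vy};
}
```

If the summed vector is applied without this final renormalization, its length grows with every neighbor that pushes on the enemy, which would make the enemies speed up as more of them spawn.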

As you can see, we calculate the repulsion vector using the inverse of the distance between the enemy and his nearest neighbor.
Then we apply it to the "theoretical" direction by adding the two, and we get a resultant, which is the direction
our enemy has to follow so as not to "collide" with its neighbors. But our issue comes here:

The enemies get separated in the middle of the curve and, as we spawn more enemies, the speed of all of them increases dramatically (including the enemies that don't calculate the repulsion vector).
1 - Is it usual for this separation to occur in the middle of the trajectory?
2 - Is there a way to control this direction without the speed being affected?
3 - Is there any alternative to this approach?

I submit the code below (there is a variable in Spanish [resultante], which means "resultant" in English):

if (!m_pathCompleted) {
    if (m_currentWP == 14 && m_cambio == true) {
        m_currentWP = 0;
        m_path = m_pathA;
        m_cambio = false;
    }

    if (m_neighbors.size() > 1) {
        for (int i = 0; i < m_neighbors.size(); i++) {
            if (m_enemyId != m_neighbors[i]->GetId()) {
                float l_nvx = m_neighbors[i]->GetSprite().getPosition().x - m_enemySprite.getPosition().x;
                float l_nvy = m_neighbors[i]->GetSprite().getPosition().y - m_enemySprite.getPosition().y;
                float distance = std::sqrt(l_nvx * l_nvx + l_nvy * l_nvy);

                if (distance < MINIMUM_NEIGHBOR_DISTANCE) {
                    l_nvx *= -1;
                    l_nvy *= -1;

                    float l_vx = m_path[m_currentWP].x - m_enemySprite.getPosition().x;
                    float l_vy = m_path[m_currentWP].y - m_enemySprite.getPosition().y;
                    float l_resultanteX = l_nvx + l_vx;
                    float l_resultanteY = l_nvy + l_vy;
                    float l_waypointDistance = std::sqrt(l_resultanteX * l_resultanteX + l_resultanteY * l_resultanteY);

                    if (l_waypointDistance < MINIMUM_WAYPOINT_DISTANCE) {
                        if (m_currentWP == m_path.size() - 1) {
                            std::cout << "\n";
                            std::cout << "[GAME OVER]" << std::endl;
                            m_go = false;
                            m_pathCompleted = true;
                        } else {
                            m_currentWP++;
                        }
                    }
                    if (l_waypointDistance > MINIMUM_WAYPOINT_DISTANCE) {
                        l_resultanteX = l_resultanteX / l_waypointDistance;
                        l_resultanteY = l_resultanteY / l_waypointDistance;
                        m_enemySprite.move(ENEMY_SPEED * l_resultanteX * dt, ENEMY_SPEED * l_resultanteY * dt);
                    }
                } else {
                    float vx = m_path[m_currentWP].x - m_enemySprite.getPosition().x;
                    float vy = m_path[m_currentWP].y - m_enemySprite.getPosition().y;
                    float len = std::sqrt(vx * vx + vy * vy);

                    if (len < MINIMUM_WAYPOINT_DISTANCE) {
                        if (m_currentWP == m_path.size() - 1) {
                            std::cout << "\n";
                            std::cout << "[GAME OVER]" << std::endl;
                            m_go = false;
                            m_pathCompleted = true;
                        } else {
                            m_currentWP++;
                        }
                    }
                    if (len > MINIMUM_WAYPOINT_DISTANCE) {
                        vx = vx / len;
                        vy = vy / len;
                        m_enemySprite.move(ENEMY_SPEED * vx * dt, ENEMY_SPEED * vy * dt);
                    }
                }
            }
        }
    } else {
        float vx = m_path[m_currentWP].x - m_enemySprite.getPosition().x;
        float vy = m_path[m_currentWP].y - m_enemySprite.getPosition().y;
        float len = std::sqrt(vx * vx + vy * vy);

        if (len < MINIMUM_WAYPOINT_DISTANCE) {
            if (m_currentWP == m_path.size() - 1) {
                std::cout << "\n";
                std::cout << "[GAME OVER]" << std::endl;
                m_go = false;
                m_pathCompleted = true;
            } else {
                m_currentWP++;
            }
        }
        if (len > MINIMUM_WAYPOINT_DISTANCE) {
            vx = vx / len;
            vy = vy / len;
            m_enemySprite.move(ENEMY_SPEED * vx * dt, ENEMY_SPEED * vy * dt);
        }
    }
}
Thank you very much in advance!
• By SinnedB
Hello,
I am not sure if I phrased the title properly. What I am trying to achieve is the following:
Winning chances:
Red card: 10%
Blue card: 20%
Green card: 15%
Nothing card: 10%
Now a player has the chances above to win those cards, but what would that look like in code?
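One common way to do this is a single roll against cumulative thresholds. A C++ sketch under my own names (note the listed chances sum to 55%, so the remaining 45% is treated here as no win at all):

```cpp
#include <random>
#include <string>
#include <vector>

// Walk the table of cumulative chances; the first threshold the roll falls
// under wins. `roll` must be in [0, 100).
std::string drawCard(double roll)
{
    struct Entry { std::string name; double chance; };
    const std::vector<Entry> table = {
        {"Red",     10.0},
        {"Blue",    20.0},
        {"Green",   15.0},
        {"Nothing", 10.0},
    };
    double cumulative = 0.0;
    for (const Entry& e : table) {
        cumulative += e.chance;
        if (roll < cumulative) return e.name;
    }
    return "No win";  // the remaining 45% of rolls
}
```

Usage would be something like rolling `std::uniform_real_distribution<double>(0.0, 100.0)` on a `std::mt19937` and passing the result to `drawCard`. The standard library's `std::discrete_distribution` does the same weighting for you if you don't need the explicit table.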

I'm stuck trying to make a simple ray-sphere intersection test. I'm using this tutorial as my guide and taking code from there. As of now, I'm pretty sure I have the ray sorted out correctly. The way I'm testing my ray is by using the direction of the ray as the position of a cube, just to make sure it's in front of me.
cube.transform.position.x = Camera.main.ray.origin.x + Camera.main.ray.direction.x * 4;
cube.transform.position.y = Camera.main.ray.origin.y + Camera.main.ray.direction.y * 4;
cube.transform.position.z = Camera.main.ray.origin.z + Camera.main.ray.direction.z * 4;
So if I rotate the camera, the cube follows. So it's looking good.

The problem occurs with the actual intersection algorithm. Here are the steps I'm taking, I'll be very brief:
1) I subtract the ray origin from the sphere center:
L.x = entity.rigidbody.collider.center.x - ray.origin.x;
L.y = entity.rigidbody.collider.center.y - ray.origin.y;
L.z = entity.rigidbody.collider.center.z - ray.origin.z;
L.normalize();
2) I get the dot product of L and the ray direction:
const b = Mathf.dot(L, ray.direction);
3) And also the dot product of L with itself (I'm not sure if I'm doing this step right):
const c = Mathf.dot(L, L);
4) So now I can check whether b is less than 0, which means the sphere is behind the ray. That's working very nicely.
L.x = entity.rigidbody.collider.center.x - ray.origin.x;
L.y = entity.rigidbody.collider.center.y - ray.origin.y;
L.z = entity.rigidbody.collider.center.z - ray.origin.z;
const b = Mathf.dot(L, ray.direction);
const c = Mathf.dot(L, L);
if (b < 0) return false;
Problem starts here
5) I now do this:
let d2 = (c * c) - (b * b);
6) ...and check whether d2 > (entity.radius * entity.radius); if it's greater, I stop there by returning false. But it always passes, unless I don't normalize L, in which case d2 ends up being a larger number and it returns false:
const radius2 = entity.rigidbody.collider.radius * entity.rigidbody.collider.radius;
if (d2 > radius2) return false;
But again, since I'm normalizing, it NEVER stops at that step, which worries me.
7) I then do this:
let t1c = Math.sqrt(radius2 - d2);
...but it always returns a number around 0.98 or 0.97 if I'm standing still. If I strafe left and right, the number gets lower; if I rotate the camera, it makes no difference. Only if I strafe.
So I'm clearly doing something wrong, and I stopped there. Hopefully I made sense.
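For comparison, the standard geometric form of these same steps looks like the sketch below (C++, with my own placeholder names). Two things differ from the steps above: L is left *unnormalized* (normalizing it throws away the distance information, which is why step 6 never rejects anything), and d2 is c - b*b rather than (c*c) - (b*b):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Geometric ray-sphere test. `dir` must be a unit vector.
// Returns the distance t to the nearest hit, or -1 on a miss.
float raySphere(Vec3 origin, Vec3 dir, Vec3 center, float radius)
{
    // L: from ray origin to sphere center. Do NOT normalize it.
    Vec3 L{center.x - origin.x, center.y - origin.y, center.z - origin.z};
    float b = dot(L, dir);        // projection of L onto the ray
    float c = dot(L, L);          // squared distance origin -> center
    if (b < 0.0f) return -1.0f;   // sphere center is behind the ray

    float d2 = c - b * b;         // squared distance from ray to center (Pythagoras)
    float r2 = radius * radius;
    if (d2 > r2) return -1.0f;    // closest approach is outside the sphere: miss

    float thc = std::sqrt(r2 - d2);
    return b - thc;               // distance to the first intersection point
}
```

With an unnormalized L, d2 grows as the ray passes farther from the sphere, so the `d2 > r2` rejection starts working as intended.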

# UDK Volumetric light beam

## Recommended Posts

Hi, I came across this UDK article:

that somewhat teaches you how to make a volumetric light beam using a cone. I'm not using Unreal Engine, so I just wanted to understand how the technique works.

What I'm having problems with is how they calculate the X position of the UV coordinate. They mention the use of a "reflection vector" which, according to the documentation (https://docs.unrealengine.com/latest/INT/Engine/Rendering/Materials/ExpressionReference/Vector/#reflectionvectorws ), just reflects the camera direction across the surface normal in world space (I assume from the WS initials).

So in my pixel shader I tried doing something like this:

float3 reflected_view = reflect(view_dir, vertex_normal);
tex2D(falloff_texture, float2(reflected_view.x * 0.5 + 0.5, uv.y))

view_dir is the direction that points from the camera to the point, in world space; vertex_normal is also in world space. Unfortunately it's not working as expected, probably because the calculations are being made in world space. I moved them to view space, but then there is a problem when you move the camera horizontally that makes the coordinates "move" as well. The problem can be seen below:

Notice the white part in the second image, coming from the left side.

Surprisingly I couldn't find as much information about this technique on the internet as I would have liked to, so I decided to come here for help!

Edited by ramirofages

##### Share on other sites

I am interested in this as well.

I tried to create HLSL from what is going on in their "material node graph" image:

void main_ps(in VERTEX_OUT IN, out PIXEL_OUT OUT)
{
static const float AddR = 8.0f;
static const float MulR = 0.8f;

float3 vvec = camPos.xyz - IN.WorldPos;
float3 v = normalize(vvec);
float3 n = normalize(IN.Normal);
float3 rvec = reflect(v, n);
float rx = rvec.x;
float rz = sqrt((rvec.z + AddR) * MulR);
float xcoord = (rx / rz) + 0.5f;
float2 coord = float2(xcoord, IN.TexCoord0.y);
float3 lightFalloff = tex2D(lightSamp, coord).rgb;

float3 lightCol = float3(1.0f, 1.0f, 1.0f);
OUT.Color = float4(lightCol, lightFalloff.r);
}

but wrong result:

I copied their image to use as the light texture.

If you got it solved please share.

##### Share on other sites

Thanks for sharing your stuff as well. Unfortunately I couldn't make it work yet

##### Share on other sites

I have downloaded Unreal Engine to test it. I can't get it to work even there:

I even created the same material as they have shown in the tutorial:

##### Share on other sites

I kinda made some progress. I found their original material and inspected it; I was missing a transform of the ReflectionVector into tangent space.

In Unreal :

In my test program:

I have a seam error somehow (it might be a problem with my mesh exporter), and I could not get a correct result with CLAMP addressing (the image above uses WRAP).

void main_ps(in VERTEX_OUT IN, out PIXEL_OUT OUT)
{
float3x3 tangentBasis = float3x3(normalize(IN.Tangent), normalize(IN.Binormal), normalize(IN.Normal));

float2 uv = IN.TexCoord0;

static const float AddR = 8.0f;
static const float MulR = 0.6f;

float3 toEye = camPos.xyz - IN.WorldPos.xyz;

float3 v = normalize(toEye);
float3 n = normalize(IN.Normal);

float3 R = reflect(v, n);
float3 rvec = mul(R, tangentBasis);

float rx = rvec.x;
float rz = sqrt( (rvec.z + AddR) * MulR );
float xcoord = (rx / rz) + 0.5f;
float2 coord = float2(xcoord, uv.y);

float3 lightFalloff = tex2D(lightSamp, coord).rgb;

float3 lightCol = float3(1.0f, 1.0f, 1.0f);
OUT.Color = float4(lightCol, lightFalloff.r);
}

Edited by belfegor

##### Share on other sites

Wow thanks a lot, they never said anything about tangent space. Will try it out when I get home.

EDIT: Works great, thanks!

Edited by ramirofages