# Ray - Sphere intersection test not working. What am I doing wrong?

## Recommended Posts

Posted (edited)

I'm stuck trying to make a simple ray–sphere intersection test. I'm using this tutorial as my guide and taking code from there. As of now, I'm pretty sure I have the ray sorted out correctly. The way I'm testing the ray is by placing a cube along its direction, just to make sure it's in front of me.

cube.transform.position.x = Camera.main.ray.origin.x + Camera.main.ray.direction.x * 4;
cube.transform.position.y = Camera.main.ray.origin.y + Camera.main.ray.direction.y * 4;
cube.transform.position.z = Camera.main.ray.origin.z + Camera.main.ray.direction.z * 4;

If I rotate the camera, the cube follows, so the ray looks good.

The problem occurs in the actual intersection algorithm. Here are the steps I'm taking; I'll be brief:

1) I subtract the ray origin from the sphere center:

L.x = entity.rigidbody.collider.center.x - ray.origin.x;
L.y = entity.rigidbody.collider.center.y - ray.origin.y;
L.z = entity.rigidbody.collider.center.z - ray.origin.z;
L.normalize();

2) I get the dot product of L and the ray direction:

const b = Mathf.dot(L, ray.direction);

3) And also the dot product of L with itself (I'm not sure if I'm doing this step right):

const c = Mathf.dot(L, L);

4) So now I can check if b is less than 0, which means the object is behind the ray. That's working very nicely.

L.x = entity.rigidbody.collider.center.x - ray.origin.x;
L.y = entity.rigidbody.collider.center.y - ray.origin.y;
L.z = entity.rigidbody.collider.center.z - ray.origin.z;

const b = Mathf.dot(L, ray.direction);
const c = Mathf.dot(L, L);

if (b < 0) return false;
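For a sanity check, steps 1–4 can be reproduced with plain objects, independent of the engine (the `dot` helper and the function name below are my own, not engine API; note that L is left un-normalized here, matching the combined snippet above, so c is the squared distance from the ray origin to the sphere center):

```javascript
// Engine-free sketch of steps 1-4 (helper names are mine, not engine API).
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

function rayTowardSphere(origin, dir, center) {
  // Step 1: vector from the ray origin to the sphere center (NOT normalized).
  const L = { x: center.x - origin.x, y: center.y - origin.y, z: center.z - origin.z };
  // Step 2: projection of L onto the unit ray direction.
  const b = dot(L, dir);
  // Step 3: squared distance from the origin to the center.
  const c = dot(L, L);
  // Step 4: b < 0 means the center is behind the ray origin.
  return { b, c, behind: b < 0 };
}
```

For a camera at the origin looking down +z at a sphere centered at (0, 0, 5), this gives b = 5 and c = 25; normalizing L instead forces c = 1, which is what throws the later radius comparison off.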

## Problem starts here

5) I now do this:

let d2 = (c * c) - (b * b);

6) ...and check whether d2 > (entity.radius * entity.radius); if it's greater, stop there by returning false. But it always passes, unless I don't normalize L, in which case d2 ends up being a larger number and it returns false:

const radius2 = entity.rigidbody.collider.radius * entity.rigidbody.collider.radius;
if (d2 > radius2) return false;

But again, since I'm normalizing, it NEVER stops at that step, which worries me.

7) I then do this:

let t1c = Math.sqrt(radius2 - d2);

...but it always returns a number around 0.97–0.98 if I'm standing still. If I strafe left and right, the number drops; rotating the camera makes no difference, only strafing does.

So I'm clearly doing something wrong, and I stopped there. Hopefully this makes sense.

Edited by Hashbrown

##### Share on other sites

Have you checked with a second cube that your ray is pointing in the right direction?

##### Share on other sites

Hey, what's up Shaarigan? You mean instantiating another cube and testing the direction of the ray? I honestly haven't, since I already tried with the one cube I show in the gifs above.

cube.transform.position.x = Camera.main.ray.origin.x + Camera.main.ray.direction.x * 4;
cube.transform.position.y = Camera.main.ray.origin.y + Camera.main.ray.direction.y * 4;
cube.transform.position.z = Camera.main.ray.origin.z + Camera.main.ray.direction.z * 4;

...and the cube remains in front of me. Also, before this implementation I had another one, and that one worked perfectly only if I didn't move the camera; I wasn't considering the direction, I guess. So I'm guessing my ray is pointing in the right direction, but you've got me wondering now.

##### Share on other sites

Found one bug: you squared an already-squared distance. c*c - b*b should be c - b*b.

Returning on b < 0 might also be wrong, but that depends on your needs.

This should work, I hope:

L.x = entity.rigidbody.collider.center.x - ray.origin.x;
L.y = entity.rigidbody.collider.center.y - ray.origin.y;
L.z = entity.rigidbody.collider.center.z - ray.origin.z;

const b = Mathf.dot(L, ray.direction);
const c = Mathf.dot(L, L);
const radius2 = entity.rigidbody.collider.radius * entity.rigidbody.collider.radius;

//if (b < 0) return false; bad test: the ray could start inside the sphere, so pointing away but still hitting it
if (b < 0 && c > radius2) return false; // better: reject only if the sphere is behind AND the ray starts outside it

let d2 = c - (b * b); // c is already the squared distance, so don't square it again

if (d2 > radius2) return false;

let t1c = Math.sqrt(radius2 - d2);

// first intersection:	ray.origin + ray.direction * (b - t1c);
// second intersection:	ray.origin + ray.direction * (b + t1c);
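Put together as a self-contained function (plain objects and a local `dot` helper stand in for the engine's `Mathf`; the function name and the null/interval return style are my own choices, not from the thread):

```javascript
// Assumes dir is normalized; L is NOT normalized, so c = |L|^2.
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

function raySphereHit(origin, dir, center, radius) {
  const L = { x: center.x - origin.x, y: center.y - origin.y, z: center.z - origin.z };
  const b = dot(L, dir);            // projection of L onto the ray
  const c = dot(L, L);              // squared distance origin -> center
  const radius2 = radius * radius;
  if (b < 0 && c > radius2) return null; // sphere behind AND origin outside it
  const d2 = c - b * b;             // squared distance from center to the ray line
  if (d2 > radius2) return null;    // ray line misses the sphere
  const t1c = Math.sqrt(radius2 - d2);
  return { tNear: b - t1c, tFar: b + t1c };
}

// e.g. origin (0,0,0), dir (0,0,1), sphere at (0,0,5) with radius 1
// -> { tNear: 4, tFar: 6 }
```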

##### Share on other sites

I see you are confused about whether to normalize L or not. My own practice here is: first write the code with everything normalized so it's easy to understand; second, once it works, try to remove square roots. After that, though, the code often becomes unreadable. The code in my snippet above is optimized and not very intuitive. You could rewrite it as an exercise, but don't forget to store the length of L before you normalize.
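JoeJ's point about storing the length can be illustrated with a small example (the values and the `dot` helper are arbitrary, chosen only for the demonstration): if you do normalize L, keep |L| around, and the un-normalized b and c fall out by scaling.

```javascript
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

const L = { x: 3, y: 0, z: 4 };      // un-normalized offset, length 5
const dir = { x: 0, y: 0, z: 1 };    // unit ray direction
const len = Math.sqrt(dot(L, L));    // store the length BEFORE normalizing
const Ln = { x: L.x / len, y: L.y / len, z: L.z / len };

const b = dot(Ln, dir) * len;        // same value as dot(L, dir)
const c = len * len;                 // same value as dot(L, L)
```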

##### Share on other sites
5 hours ago, Hashbrown said:

You mean instantiating another cube and testing the direction of the ray

In my day-to-day professional practice we draw some test visuals (Unity calls them Gizmos) to see whether our calculations are right. This includes drawing a line from where we assume the ray starts to where it is calculated to go/end. It helps you see whether your ray is logically/optically hitting the sphere before doing optimizations or bugfixing in your hit-test calculation.

I have seen a lot of issues in my career that were caused by wrongly calculated data rather than by the tests using it.
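Applied to this thread, the debug line Shaarigan describes runs from the ray origin to a point some distance along its direction; the endpoints can be computed engine-independently (the function name is my own, and the actual draw call is engine-specific, so it is left as a hypothetical comment):

```javascript
// Compute the two endpoints of a debug line along a ray.
function rayEndpoints(origin, dir, length) {
  return {
    start: { x: origin.x, y: origin.y, z: origin.z },
    end: {
      x: origin.x + dir.x * length,
      y: origin.y + dir.y * length,
      z: origin.z + dir.z * length,
    },
  };
}

// const p = rayEndpoints(ray.origin, ray.direction, 10);
// engine.debugDrawLine(p.start, p.end); // hypothetical API, depends on your engine
```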

##### Share on other sites

Thanks a lot, guys. Shaarigan was right, there's something going on with my ray too. I'll do what Unity does, draw lines, and get that sorted out first. And JoeJ, you're also right, I was confused about when to normalize. I copied your code, and as soon as I fix my ray issue I'll use it.

I'll share what I learn once I do. Thanks again, guys!