
# JorenJoestar

Member Since 28 Sep 2004
Offline Last Active Jul 23 2016 08:19 PM

### In Topic: Using normal mapping with triplanar texture projection

16 July 2012 - 09:46 AM

Maybe you can look at this!

### In Topic: Reconstrucing World Position from Depth?

11 January 2012 - 07:42 AM

Possibly this line is broken:

posWorldSpace = mul(posViewSpace, InverseWorldViewMatrix);

because you are moving from view space to object space. To get back to world space you need to multiply by the inverse of the view matrix, which maps from view space to world space.
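A minimal sketch of the fix, where `InverseViewMatrix` is an assumed constant name (not from the original post) for the view-to-world matrix uploaded from the CPU side:

```hlsl
// View space -> world space: multiply by the inverse of the *view* matrix,
// not the inverse of the world-view matrix (which would land in object space).
// InverseViewMatrix is an assumed name for this sketch.
float4 posWorldSpace = mul(posViewSpace, InverseViewMatrix);
```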

Also, if you need more information, I've put some material together on my blog:

Hope this helps!

### In Topic: Reconstructing view-space position from depth

24 March 2011 - 06:02 AM

Thanks Daniel, you are very kind to post your code!
Actually, last night I got Arkano's implementation working; I had to change the UV calculation a little... it seems that math only works with the view-space texture!

I'm using this method to reconstruct:

```hlsl
float depth = tex2D(gDepthTex, uv).r * gFar;

float4 pos = float4((uv.x - 0.5) * 2, (0.5 - uv.y) * 2, 1, 1);
float4 ray = mul(pos, gProjectionInverse);

return ray.xyz * depth;
```

And it is working quite well!
I will try your method too; the trade-off between the extra multiplication and a texture fetch is interesting.

Thanks again!!!

### In Topic: Reconstructing view-space position from depth

23 March 2011 - 10:24 AM

I have the same problem with reconstruction as you guys... I'm trying to use the depth, with no luck.
I use this reconstruction method (I'm using right-handed coordinates):

```hlsl
float depth = tex2Dlod(g_depth, float4(uv, 0, 0)).r * g_far_clip;

float4 positionCS = float4((uv.x - 0.5) * 2, (0.5 - uv.y) * 2, 1, 1);
float4 ray = mul(positionCS, gProjI);
ray.xyz /= ray.w;
position = ray.xyz * depth / ray.z;
position.z *= -1; // This is for right-handed coordinates.
```

Daniel what method are you using?
Thanks!

### In Topic: NFAA - A Post-Process Anti-Aliasing Filter (Results, Implementation Details).

26 January 2011 - 06:28 AM

> We had great success with this approach. Our implementation is somewhat different than that described above.
>
> We take four strategically-placed samples to establish edge direction, leveraging bilinear filtering to get the most out of each sample. Then we do a four-tap blur (three also works well) with a strong directional bias along the edge. The size of the blur kernel is in proportion to the detected edge contrast, but the sample weighting is constant. This works astonishingly well and is a fraction of a millisecond compared to 3+ ms for MLAA.
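As a rough illustration only, the description above could be sketched like this; every name (`gScene`, `gTexelSize`, `NFAA_PS`) and the exact offsets are my assumptions, not details from the quoted post:

```hlsl
// Hypothetical sketch of an NFAA-style filter, following the description
// above: four bilinear samples estimate the edge direction, then a
// four-tap, constant-weight blur runs along the edge, with the kernel
// size scaling with edge contrast.
sampler2D gScene;      // assumed: the scene color texture
float2    gTexelSize;  // assumed: 1.0 / screen resolution

float Luma(float2 uv)
{
    return dot(tex2D(gScene, uv).rgb, float3(0.299, 0.587, 0.114));
}

float4 NFAA_PS(float2 uv : TEXCOORD0) : COLOR0
{
    // Four diagonal samples; bilinear filtering averages a 2x2 footprint each.
    float tl = Luma(uv + gTexelSize * float2(-1.0, -1.0));
    float tr = Luma(uv + gTexelSize * float2( 1.0, -1.0));
    float bl = Luma(uv + gTexelSize * float2(-1.0,  1.0));
    float br = Luma(uv + gTexelSize * float2( 1.0,  1.0));

    // Luminance gradient; the blur direction is perpendicular to it
    // (i.e. along the edge), and its length grows with edge contrast.
    float2 grad    = float2((tr + br) - (tl + bl), (bl + br) - (tl + tr));
    float2 blurDir = float2(-grad.y, grad.x);

    // Four-tap blur along the edge with constant weights.
    float4 color = tex2D(gScene, uv);
    color += tex2D(gScene, uv + blurDir * gTexelSize * 0.5);
    color += tex2D(gScene, uv - blurDir * gTexelSize * 0.5);
    color += tex2D(gScene, uv + blurDir * gTexelSize);
    return color * 0.25;
}
```

This sketch uses only the scene color; whether the original implementation also reads normals or depth is exactly the question below.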

Could you describe the process you are using in more detail? Are you using both normals and depth?

Thanks!
