
#5169914 Reconstructing Position From Depth Buffer

Posted by REF_Cracker on 28 July 2014 - 07:28 PM

Hi BlueSpud,


It's actually quite simple to retrieve a view-space coordinate if you think of it this way:


- The screen coordinate comes in as 0 to 1 in x and y.
- Remap that to -1 to 1 like so: screencoord.xy * 2 - 1 (you might have to flip the sign of the result depending on the API).
- You now have an xy value you can picture as lying on the near plane of your camera, so the z coordinate of the vector is whatever your near plane value is.
- Next you have to figure out how to scale the -1 to 1 xy values to represent the dimensions of the near-plane "quad" in view space. This is easy:
- Just use some trig to figure out the width and height of the "quad" at the near plane; basically it's tan(FOV * 0.5) * near.
- You'll also probably have to multiply that by the aspect ratio for x.

- After all this you will have calculated a vector from the eye to the near plane.
- Now just scale this vector by the depth of that pixel; you'll have to account for the ratio of how close your near plane is.

That's basically how to think of it; if you draw some pictures you should be able to get it. To recover the position in another space it's essentially the same, but you move along that space's axis directions in x/y/z and then add the eye position.
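The steps above can be sketched as plain C++ (CPU-side for clarity; names are illustrative, and the depth here is assumed to already be linear view-space z, not the raw depth-buffer value):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Reconstruct a view-space position from a 0..1 screen coordinate and a
// linear view-space depth. fovY is the vertical field of view in radians.
Vec3 ReconstructViewPos(float u, float v, float viewZ,
                        float fovY, float aspect, float nearZ)
{
    // Remap 0..1 to -1..1 (flip the sign if your API needs it).
    float ndcX = u * 2.0f - 1.0f;
    float ndcY = v * 2.0f - 1.0f;

    // Half-extents of the near-plane "quad" in view space: tan(FOV/2) * near.
    float halfH = std::tan(fovY * 0.5f) * nearZ;
    float halfW = halfH * aspect;

    // A point on the near plane, i.e. a vector from the eye to the near plane.
    Vec3 onNear = { ndcX * halfW, ndcY * halfH, nearZ };

    // Scale that vector so its z equals the pixel's depth; the viewZ / nearZ
    // ratio accounts for how close the near plane is.
    float s = viewZ / nearZ;
    return { onNear.x * s, onNear.y * s, onNear.z * s };
}
```

The center of the screen (0.5, 0.5) maps straight down the view axis, which is a quick sanity check for the remapping.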

#5157671 Ideas for rendering huge vegetation (foliage)

Posted by REF_Cracker on 02 June 2014 - 05:22 PM

You may find this recent series of devblogs by Casey Muratori interesting. He goes into detail about his grass planting system; it's less about the rendering side of things and more about proper placement of the ground cover itself to get the best coverage with the least amount of geometry. I don't think he goes into anything about LOD, but it does get you thinking about how to provide the best coverage.


It starts in "Working On The Witness, Part 5" and continues through his reasoning into Part 8, where he has some code for optimal placement.



#5155907 Doing local fog (again)

Posted by REF_Cracker on 25 May 2014 - 12:02 PM

Hey there. This came out a week ago and may be of interest to you. There are some good videos on the site, so you can see whether the effect is to your liking before even reading the paper. It's essentially a ray-marching post-process. I believe he's also working on a demo, which will be nice.


#5077653 Image Based Reflections - DX11

Posted by REF_Cracker on 14 July 2013 - 01:02 PM

Thanks for the tips Hodgman!

I'm going to give it a try when I get back to work next week, and I'll report my findings if I'm able to pull it off. I like how you're sampling the different mips à la irradiance map; I wouldn't have thought of that. Do you fade near the edges of the quad, or do some sort of clamping?

#5077453 Image Based Reflections - DX11

Posted by REF_Cracker on 13 July 2013 - 06:38 PM



That is a great article, but it's not the method the Unreal Engine is using.

In the comments of one of the YouTube videos that accompany that article, the author (Sebastien Lagarde) states:


"The algorithm aim to replace dynamic reflection. Goal is performance. So all is static and computed offline (No characters). All the details + code can be found at the links in the description of the video.The algorithm was design for current gen platform DX9/PS3/XBOX360, for modern platform there can be better way. Image-based reflection of Unreal are better quality but at a higher cost. All depends on your targets framerate."

So I'm wondering if anyone out there has implemented the Unreal method!

#5077442 Image Based Reflections - DX11

Posted by REF_Cracker on 13 July 2013 - 05:47 PM

I've been reading a bit about Image Based Reflections as seen in the Epic Samaritan demo. There's a bit of information on how they work over on this UDN page: http://udn.epicgames.com/Three/ImageBasedReflections.html

I was wondering if anyone has attempted supporting this type of reflection in their own engine, or had any links they could point me to to learn more about how this is achieved. Since it seems to be just a quad reflector, you could probably check intersection inside a pixel shader to determine texture coordinates for the quad. I'm wondering how this might be accelerated to robustly support multiple quads in a scene while limiting the intersection checks.
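The per-pixel check speculated about here can be sketched as a ray-quad intersection (plain C++ stand-in for shader code; this is a guess at the approach, not Epic's actual implementation, and all names are made up):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

// A quad reflector: one corner plus the two edge vectors spanning it.
struct Quad { Vec3 corner, edgeU, edgeV, normal; };

// Returns true and writes 0..1 texture coordinates if the reflection ray
// hits the quad; in a shader those UVs would sample the reflection texture.
bool IntersectQuad(const Vec3& rayOrigin, const Vec3& rayDir,
                   const Quad& q, float& u, float& v)
{
    float denom = Dot(rayDir, q.normal);
    if (std::fabs(denom) < 1e-6f) return false;      // ray parallel to quad
    float t = Dot(Sub(q.corner, rayOrigin), q.normal) / denom;
    if (t < 0.0f) return false;                      // quad is behind the ray
    Vec3 hit = { rayOrigin.x + rayDir.x * t,
                 rayOrigin.y + rayDir.y * t,
                 rayOrigin.z + rayDir.z * t };
    Vec3 rel = Sub(hit, q.corner);
    u = Dot(rel, q.edgeU) / Dot(q.edgeU, q.edgeU);   // project onto the edges
    v = Dot(rel, q.edgeV) / Dot(q.edgeV, q.edgeV);
    return u >= 0.0f && u <= 1.0f && v >= 0.0f && v <= 1.0f;
}
```

Supporting multiple quads would then be a loop over this test per pixel, which is exactly why some acceleration (culling quads per tile or per screen region) would matter.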


Thanks in advance!

#5010736 DX9 -> Dx11 Port... Nothing being drawn?

Posted by REF_Cracker on 14 December 2012 - 03:06 PM

Haha! Did you have the same issue, or are you just saluting me for figuring it out? It may seem trivial, but when you're in the middle of a port that took three days to compile and run without errors, it's hard to zero in. Hoping this post helps someone out in the future!

#5010468 DX9 -> Dx11 Port... Nothing being drawn?

Posted by REF_Cracker on 13 December 2012 - 10:34 PM


If anyone else is having this problem: you want to make sure you see the viewport outline in PIX. The vertex values after transform were correct, in that they should have been displayed on screen, but the viewport structure itself (D3D11_VIEWPORT, bound with RSSetViewports) wasn't properly set up. It was set to 0 pixels by 0 pixels, because setting up the backbuffer is somewhat of a special case: you have to grab it from the swap chain.
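A minimal sketch of the setup being described, assuming the usual device/context/swap-chain variables already exist (names are illustrative):

```cpp
// Grab the backbuffer from the swap chain and bind it as the render target.
ID3D11Texture2D* backBuffer = nullptr;
swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D),
                     reinterpret_cast<void**>(&backBuffer));
device->CreateRenderTargetView(backBuffer, nullptr, &rtv);
backBuffer->Release();
context->OMSetRenderTargets(1, &rtv, depthStencilView);

// The easy-to-miss part: a zero-initialized viewport is 0 x 0 pixels, which
// silently draws nothing. It must be filled in and bound explicitly.
D3D11_VIEWPORT vp = {};
vp.Width    = static_cast<float>(backBufferWidth);
vp.Height   = static_cast<float>(backBufferHeight);
vp.MinDepth = 0.0f;
vp.MaxDepth = 1.0f;
context->RSSetViewports(1, &vp);
```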


#5010445 DX9 -> Dx11 Port... Nothing being drawn?

Posted by REF_Cracker on 13 December 2012 - 08:43 PM

Wait a minute, isn't there supposed to be a rectangle in PIX representing the viewport?
I guess that means I'm way off!

#5010421 DX9 -> Dx11 Port... Nothing being drawn?

Posted by REF_Cracker on 13 December 2012 - 07:30 PM

- I'm clearing the backbuffer to a different color every frame, and that is properly flickering.
- PIX shows that there is output to the viewport, as you can see in the image.
- Nonetheless, it appears no pixels are being written to the backbuffer.

Any ideas? Thanks!

Posted Image

#4903554 Cascade Stability

Posted by REF_Cracker on 17 January 2012 - 02:58 AM


There is a way to calculate the proper scale and offset directly from your cascade shadow view-projection matrices. It's right in the article, but it wasn't working due to a bug in my matrix inversion code.

The way to get the scale remains

float scale[n] = splitRadius[0] / splitRadius[n]

The offset is calculated as

float4 offset[n] = float4(0.0f, 0.0f, 0.0f, 1.0f) * Inverse(ShadowMat[n]) *