
Member Since 02 Mar 2010
Offline Last Active Today, 02:08 AM

Posts I've Made

In Topic: Custom mipmap generation (Read mip0, write to mip1,...)

Yesterday, 03:10 PM

That was exactly what I was looking for! :D

It's just too bad something like this isn't common knowledge and presented somewhere on MSDN.
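For anyone who finds this thread later: the "read mip N, write mip N+1" loop can be sketched CPU-side like this (a minimal Python/NumPy sketch of a 2x2 box-filter mip chain; on the GPU each `next_mip` call would be one compute or pixel-shader pass, and the function names here are my own):

```python
import numpy as np

def next_mip(mip):
    """2x2 box filter: average each 2x2 block of the parent mip."""
    h, w = mip.shape[:2]
    return 0.25 * (mip[0:h:2, 0:w:2] + mip[1:h:2, 0:w:2] +
                   mip[0:h:2, 1:w:2] + mip[1:h:2, 1:w:2])

def build_mip_chain(mip0):
    """Read mip N, write mip N+1, until the 1x1 level."""
    chain = [mip0]
    while chain[-1].shape[0] > 1:
        chain.append(next_mip(chain[-1]))
    return chain

chain = build_mip_chain(np.ones((8, 8)))  # levels: 8x8, 4x4, 2x2, 1x1
```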

In Topic: Parallax-corrected cubemap blending help

30 April 2015 - 05:41 AM

You're right, I could do that in a fullscreen-quad post-effect pass. However, since I'm working on a Forward+ renderer right now, this isn't as straightforward (performance-wise) as with a regular deferred one. I was just curious about the result of his technique, and it seemed like a very performant approach.


However, after reading your first post, I'm still not quite sure how this reflectedCamera is obtained. I've tried reflecting my camera position across a mirror plane (0, 1, 0) in world space, but the result doesn't seem to work. Does this have to be the actual camera that I'm moving around, or the cubemap face's? Could you explain a bit more what you mean by "adjust all possible view vectors, so that you get the corrected vectors for the cubemap sampling"?
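For reference, this is what I mean by reflecting the camera position across a world-space mirror plane (a small Python/NumPy sketch of the standard point-across-plane reflection; `plane_n` is the unit plane normal and `plane_d` its distance from the origin, so if this is the wrong construction for reflectedCamera, that would explain my broken result):

```python
import numpy as np

def reflect_point(p, plane_n, plane_d=0.0):
    """Mirror point p across the plane dot(n, x) = d (n must be unit length)."""
    dist = np.dot(plane_n, p) - plane_d  # signed distance from p to the plane
    return p - 2.0 * dist * plane_n

cam = np.array([1.0, 3.0, -2.0])
reflected_cam = reflect_point(cam, np.array([0.0, 1.0, 0.0]))  # plane y = 0
```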

In Topic: Localizing image based reflections (issues / questions)

19 April 2015 - 02:36 PM

Alright, thanks for clearing that up ;)

I feel a little disappointed in IBL now :(


By the way, do you guys know of any good way to do the local cubemap blending in a forward renderer? I've been reading up on some articles but have only come to the conclusion that it's really painful if you're not doing your lighting deferred...

The only thing I've found was Lagarde's blog (https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/), where he introduces his method of having a point of interest (e.g. the camera) and performing a pre-pass to blend the 4 closest cubemaps together, but this approach fails when it comes to the parallax correction. He then proposes a hacky way (which I don't quite understand) to do this, but it limits everything to planar surfaces, which I don't really like.
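For context, the parallax correction itself (the part that breaks when you pre-blend) is usually done by intersecting the reflection ray with the probe's proxy AABB and re-aiming the lookup from the probe center, as I understand Lagarde's post. A Python/NumPy sketch of that box-intersection trick (variable names are my own, and it assumes the reflection direction has no zero components):

```python
import numpy as np

def parallax_correct(pos_ws, refl_dir, box_min, box_max, probe_pos):
    """Intersect the reflection ray with the probe's proxy AABB and
    return the direction from the probe center to the hit point."""
    inv = 1.0 / refl_dir                # assumes no zero components in refl_dir
    t1 = (box_max - pos_ws) * inv
    t2 = (box_min - pos_ws) * inv
    t = np.min(np.maximum(t1, t2))      # distance to the exit face of the box
    hit = pos_ws + t * refl_dir
    return hit - probe_pos              # corrected cubemap lookup direction
```

Blending the cubemaps first and correcting afterwards fails because each probe has its own proxy box, so each one needs its own corrected lookup direction.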


I'm also having trouble deciding what method to use to calculate the blend weights between the probes. Most of the methods I've found seem really complicated (Unity's Delaunay triangulation, ...)
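The simplest fallback I've come across (much simpler than the Delaunay approach, at the cost of quality) is plain normalized inverse-distance weighting over the nearby probes; a hypothetical Python sketch:

```python
import numpy as np

def probe_blend_weights(pos, probe_positions, eps=1e-4):
    """Normalized inverse-distance blend weights for a set of probes."""
    d = np.linalg.norm(probe_positions - pos, axis=1)  # distance to each probe
    w = 1.0 / (d + eps)                                # nearer probe -> larger weight
    return w / w.sum()                                 # weights sum to 1

probes = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
w = probe_blend_weights(np.array([2.0, 0.0, 0.0]), probes)  # equidistant point
```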

In Topic: Volumetric Scattering (screen-space) done efficiently ?

25 January 2015 - 06:35 AM

I'm not combining them separately in a final pass; I'm using the blurred output of the first pass as the input to the second, and the second's output as the input to the third.

By the way, the above screens were taken using your code. I don't get why we have such different results.
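To be explicit about the chaining: each pass is an 8-tap radial blur toward the light, fed with the previous pass's output, so three passes give 8^3 effective taps. A 1D toy version in Python (the real thing is a 2D pixel shader; the per-pass densities match my screenshots above, everything else is my own naming):

```python
import numpy as np

def radial_pass(img, light_x, num_samples=8, density=1.0):
    """One 1D radial-blur pass: for each pixel, average num_samples taps
    spaced along the direction toward the light at light_x."""
    n = len(img)
    out = np.zeros(n)
    for x in range(n):
        step = (light_x - x) * density / num_samples
        acc = 0.0
        for s in range(num_samples):
            tap = int(round(x + s * step))
            acc += img[np.clip(tap, 0, n - 1)]
        out[x] = acc / num_samples
    return out

mask = np.zeros(64)
mask[32] = 1.0                             # a single bright "light" pixel
p1 = radial_pass(mask, 32, density=1.0)    # first pass on the input mask
p2 = radial_pass(p1, 32, density=0.5)      # second pass on p1's output
p3 = radial_pass(p2, 32, density=0.25)     # final pass on p2's output
```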

In Topic: Volumetric Scattering (screen-space) done efficiently ?

25 January 2015 - 05:21 AM

Unfortunately I can't upload a video here because of my crappy internet connection, but I exported screenshots from RenderDoc showcasing the 8x8x8 passes, each with a different density.


Note: it seems like RenderDoc doesn't gamma-correct when exporting; that's why the screenshots look so dark here.


Input Mask: [screenshot]

First 8-sample pass (Density = 1.0): [screenshot]

Second 8-sample pass (Density = 0.5): [screenshot]

Final 8-sample pass (Density = 0.25): [screenshot]


If I multiply the final color by 0.2 or so it doesn't look as overblown, but the sky completely fades to black, which is not what I want...


The best quality/performance result I've gotten so far is: downsample to half resolution with a wide Gaussian blur, scatter using ~96 samples, then upsample again with a wide Gaussian. Runs at about 1.8 ms on my AMD Radeon 5750M. However, the result is obviously a little blurry and not as tight as a pure 128-sample pass at full resolution.
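The "wide Gaussian" in that pipeline is just a separable blur; a 1D Python/NumPy helper for it (apply along x, then along y, for the 2D blur; the radius and sigma here are my own guesses, not tuned values):

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """Normalized 1D Gaussian weights over [-radius, radius]."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_1d(row, radius=6, sigma=3.0):
    """One 1D pass of the wide Gaussian; run over rows, then columns."""
    return np.convolve(row, gaussian_kernel(radius, sigma), mode='same')
```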


@kalle_h I'm not very familiar with the jittering you describe. How is it done? Multiply the density by some random 2D vector?

Or could I use a different step size every frame and combine the results?
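In case anyone else wonders: the usual trick (as I understand it, which may not be exactly what kalle_h meant) is to offset each pixel's ray start by a per-pixel fraction of one step, e.g. using a small ordered-dither pattern, rather than scaling the density randomly. A Python sketch with a 4x4 Bayer matrix:

```python
import numpy as np

# 4x4 ordered-dither (Bayer) matrix, values in [0, 1)
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def jittered_offsets(x, y, num_samples):
    """Sample offsets along the ray for pixel (x, y): each pixel's taps are
    shifted by a fraction of one step, turning banding into fine noise."""
    jitter = BAYER4[y % 4, x % 4]
    return (np.arange(num_samples) + jitter) / num_samples
```

Cycling the pattern (or the jitter seed) per frame and letting temporal filtering average it out would be the frame-to-frame variant asked about above.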