Screen Space Reflection: trouble transforming reflection vectors to screen space.


I'm trying to slap screen-space reflections into my engine. In theory it should work fine, but I'm having trouble figuring out how to properly generate the ray vectors.

I have the fragment surface normals in world space, and I also have the camera-to-fragment vectors in world space. I can therefore generate the reflection vector itself using:

reflect(camtofrag, fragnormal)

However, I'm trying to transform that into screen space properly for output to a framebuffer texture, which is then fed into a post-processing shader that performs the actual ray tracing.
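To make it concrete, the write pass looks roughly like this (a simplified sketch; the uniform and varying names are placeholders rather than my exact code, and it assumes a float render target so negative components survive):

#version 330 core
// Geometry-pass fragment shader (sketch). v_worldPos / v_worldNormal are the
// interpolated world-space position and normal, u_cameraPos is the camera
// position in world space, and v_linearDepth is the linearized depth passed
// down from the vertex shader.
uniform vec3 u_cameraPos;

in vec3  v_worldPos;
in vec3  v_worldNormal;
in float v_linearDepth;

out vec4 o_reflection;

void main()
{
    vec3 camToFrag = normalize(v_worldPos - u_cameraPos);
    vec3 reflWorld = reflect(camToFrag, normalize(v_worldNormal));

    // This is the part in question: reflWorld is still in world space and
    // needs to be brought into screen space before it goes into the texture.
    o_reflection = vec4(reflWorld, v_linearDepth);
}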

My inclination was to do this:

output.xyz = inverse(transpose(modelview)) * reflect(camtofrag, fragnormal);

But the problem is that the reflection vectors shift around a lot when the camera rotates. Surfaces only appear to reflect properly when the camera is at a 45-degree angle to them. A shallower angle results in squished reflections at the edge of the surface where the reflected geometry connects; conversely, looking straight at the surface (so the shader reflects whatever is at the outside edges, like a mirror), the reflection stretches deep 'into' the reflecting surface.

Here's a YouTube video of the problem (it may still be uploading at the moment):

Here's a set of images showing the reflection vector buffer. It's clear that there's just too much gradation across surfaces, and it moves with the camera's rotation. It looks like some kind of inverse projection needs to be applied so that it's "flatter" and doesn't produce a fisheye sort of reflection: http://imgur.com/gallery/h9w3X

I have a linearized depth buffer, so I figured I could calculate the direction of the screen-space reflection ray while rasterizing the reflective geometry, and then do the screen-space reflection on top of everything in the post-process pass using those reflection vectors and the linear depth buffer, without any further matrix transforms: just trace lines in XYZ, check against the depth buffer, and behave accordingly.

In the post-process I use the UV coordinate of the fullscreen-quad fragment to sample the reflection vector texture, whose alpha channel contains the linearized depths, and then trace a line along the reflection vector, checking the depths in the alpha channel along the way. As far as I can tell this would work just fine if my reflection vectors were correct, which they clearly aren't, judging by the three screenshots showing how drastically the normals change just from rotating the camera.
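For reference, the march in the post-process shader boils down to something like this (a minimal, naive sketch with made-up step counts; it assumes the rgb channels really do hold a screen-space direction whose xy components are UV deltas and whose z is a linear-depth delta):

#version 330 core
// Post-process fragment shader (sketch).
// u_reflectionTex: rgb = screen-space reflection direction, a = linearized depth.
uniform sampler2D u_sceneColor;
uniform sampler2D u_reflectionTex;

in vec2 v_uv;
out vec4 o_color;

void main()
{
    vec4  data     = texture(u_reflectionTex, v_uv);
    vec3  rayDir   = normalize(data.rgb);   // xy = UV offset, z = depth offset
    float rayDepth = data.a;                // linear depth of the reflecting fragment

    vec3 rayPos = vec3(v_uv, rayDepth);
    vec4 hit    = vec4(0.0);

    const int   kSteps    = 64;             // made-up constants, tune to taste
    const float kStepSize = 0.01;

    for (int i = 0; i < kSteps; ++i)
    {
        rayPos += rayDir * kStepSize;

        // Stop once the ray leaves the screen.
        if (rayPos.x < 0.0 || rayPos.x > 1.0 || rayPos.y < 0.0 || rayPos.y > 1.0)
            break;

        float sceneDepth = textureLod(u_reflectionTex, rayPos.xy, 0.0).a;

        // Naive hit test: the first time the ray falls behind the stored depth,
        // take the scene color at that pixel (no refinement or thickness check).
        if (rayPos.z >= sceneDepth)
        {
            hit = textureLod(u_sceneColor, rayPos.xy, 0.0);
            break;
        }
    }

    o_color = hit;
}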

Any ideas off the top of your heads? My goal is to keep this ultra simple and minimal (with all the edge fading and artifact mitigation stuff, of course) without storing a bunch of extra textures. It seems simple enough to just store the reflection vector generated by the fragment shader of the reflective geometry itself, if only I could transform it properly. As I said, I have exactly the right reflection vectors in world space; I'm just having trouble transforming them into screen space.

Thanks!

Maybe I'm missing something, or maybe it's just a typo, but doesn't the modelview matrix transform from local space to view space? Since the reflection vector is in world space, shouldn't you use just the view matrix, and then the projection + viewport?

Indeed, you only need the view matrix (without the model), and then apply your projection.
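Concretely, something along these lines should do it (just a sketch; u_view, u_projection, and viewPos are placeholder names for the camera's view matrix, its projection matrix, and the fragment's view-space position):

// World-space reflection direction -> screen-space direction (sketch).
vec3 worldReflectionToScreen(vec3 reflWorld, vec3 viewPos, mat4 u_view, mat4 u_projection)
{
    // A direction only needs the rotational part of the view matrix,
    // so drop the translation by casting to mat3.
    vec3 reflView = mat3(u_view) * reflWorld;

    // You can't just multiply a direction by the projection matrix; instead,
    // project two points along the ray and take the difference of their
    // post-divide (NDC) positions to get a usable screen-space direction.
    vec4 p0 = u_projection * vec4(viewPos, 1.0);
    vec4 p1 = u_projection * vec4(viewPos + reflView, 1.0);
    return normalize(p1.xyz / p1.w - p0.xyz / p0.w);
}

The xy components of the result just need a 0.5 scale to go from NDC units to UV units (no 0.5 offset, since it's a direction rather than a position) before being stored in the reflection texture.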
