
Screen Space Reflections


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

17 replies to this topic

#1 Quat   Members   -  Reputation: 403


Posted 13 June 2013 - 11:56 AM

I have many small water surfaces that can be visible at once in my scene, lying at different plane heights.  Currently I am doing a full reflection pass for each one (rendering the scene from a mirrored camera).  However, this increases my total per-frame draw count a lot and is hurting my performance.  I am looking at two solutions: static cube maps for reflections (these won't handle dynamic objects, but should be good enough since the water surfaces are small), and the screen space reflections technique that Crytek uses. The two main issues I see with screen space reflections are:

 

1. The reflection vector shoots off screen, so we have no information.

2. False reflections, because we only have depth information for what is visible on screen.

 

Does anyone have any experience with screen space reflections?  How do they work for planar surfaces (like water, where there will be some wave distortion)?  Are there any issues other than the two I mention above?

 


-----Quat


#2 phil_t   Crossbones+   -  Reputation: 3256


Posted 13 June 2013 - 12:12 PM


1. The reflection vector shoots off screen so we have no info.

 

Maybe you can use a "clamp" texture addressing mode so you just get the value at the edge? Or do it manually so that the transition is a little smoother (not sure how to explain it properly, but have your texture coordinates "slow down" as they approach the edge of your screen texture).
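That "slow down near the edge" idea can also be expressed as a fade factor applied to the reflection sample. A minimal sketch, assuming a normalized UV in [0, 1] and an arbitrary `fade_start` threshold (my choice, not from the post):

```python
def edge_fade(uv, fade_start=0.9):
    """Attenuation that ramps the reflection down to zero as the sampled
    UV approaches the screen border, hiding the hard cutoff."""
    # Remap so 0 = screen centre, 1 = screen edge, per axis.
    dx = abs(uv[0] * 2.0 - 1.0)
    dy = abs(uv[1] * 2.0 - 1.0)
    d = max(dx, dy)
    if d <= fade_start:
        return 1.0
    # Linear falloff between fade_start and the edge.
    return max(0.0, 1.0 - (d - fade_start) / (1.0 - fade_start))
```

Multiplying the reflection colour by this factor gives a smooth transition instead of a visible seam where rays leave the screen.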

 

I'm not sure it would suit your needs, but I use a technique that stores the elevation angle of the surrounding terrain (and possibly objects) for each water vertex. It's very limited: no dynamic objects, and the reflection has no detail in it (it just results in the water being a darker shade), but it comes at very little performance cost:

http://mtnphil.wordpress.com/2012/09/12/faking-water-reflections-with-fourier-coefficients/

http://mtnphil.wordpress.com/2012/09/15/water-shader-follow-up/



#3 kalle_h   Members   -  Reputation: 1338


Posted 13 June 2013 - 01:03 PM

http://www.guerrilla-games.com/presentations/Valient_Killzone_Shadow_Fall_Demo_Postmortem.pdf

http://www.guerrilla-games.com/presentations/Drobot_Lighting_of_Killzone_Shadow_Fall.pdf

 

The new Killzone has three different methods for reflections. First, screen space reflection is used; if that fails, it checks whether the point lies inside a static local cube map's AABB and uses that instead. If both of these fail, the global sky cube is used. Add some smooth tweening instead of a binary pass/fail and the result should be good enough in most cases.
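That fallback chain with smooth tweening might look like the following sketch. The weights are hypothetical: `ssr_confidence` would come from the ray-march result and `local_weight` from the cube map AABB test, neither of which is specified in the slides.

```python
def blend_reflection(ssr_color, ssr_confidence, local_cube, local_weight, sky_color):
    """Blend three reflection sources: SSR where the ray found a hit,
    a local cube map where the point lies inside its AABB, and the
    global sky cube as the final fallback."""
    # Local cube map blended over the sky where the local probe applies.
    fallback = [l * local_weight + s * (1.0 - local_weight)
                for l, s in zip(local_cube, sky_color)]
    # SSR layered on top, weighted by hit confidence rather than a
    # binary pass/fail, so the transition is smooth.
    return [c * ssr_confidence + f * (1.0 - ssr_confidence)
            for c, f in zip(ssr_color, fallback)]
```

With confidence 1.0 the SSR colour wins outright; as confidence drops, the cube map and then the sky take over gradually.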



#4 Matias Goldberg   Crossbones+   -  Reputation: 3149


Posted 13 June 2013 - 05:55 PM

SS reflections work really, really well for water (lakes, rivers, puddles) because of the Fresnel effect: water is most reflective when the surface is perpendicular to the view vector (which means there's plenty of depth information available on screen) and translucent when you look straight into it (which is when you have the least, or even no, relevant depth information on screen).
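The Fresnel behaviour described here is commonly modelled with Schlick's approximation. A minimal sketch; the value `f0 = 0.02` is the usual normal-incidence reflectance for water, an assumption of mine rather than something stated in the post:

```python
def fresnel_schlick(cos_theta, f0=0.02):
    """Schlick's approximation of Fresnel reflectance.
    cos_theta is the cosine of the angle between the view
    vector and the surface normal."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

Looking straight down (`cos_theta = 1`) gives only 2% reflectance, exactly the case where SSR has no data; at grazing angles (`cos_theta` near 0) reflectance approaches 1, where plenty of screen depth is available.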

 

For other surfaces, "it depends". Many factors come into play, which boil down to how much of the screen the surfaces occupy (blocking the view of their surroundings) and their normal relative to the view direction.

 

Another alternative is to use a hybrid:

  1. Pick between static map and SS reflections for far objects.
  2. Use real time updated env. map (or planar map reflection) for closest objects.


#5 Frenetic Pony   Members   -  Reputation: 1270


Posted 15 June 2013 - 01:31 AM

I've never seen screenspace reflections work "very well". By its very nature you are missing any and all information not directly onscreen, and anything but purely reflective surfaces (zero roughness in whatever BRDF you're using) will also be problematic, as you vastly increase the chance that your sample isn't onscreen.

 

SS reflections can work OK in the case of rain, as a kind of planar reflection hack on the ground: it's not too noticeable that you're missing a ton of reflectivity, you've got a layer of water whose reflectivity you don't have to take roughness into account for, and you've got a strong Fresnel effect hiding everything as well. But playing around with it, it's not really a "solution" for reflections, just a nice piece of eye candy you have to control very tightly (or compensate for at every turn, a la Killzone).

 

Even on a lake or other body of water you've got strong reflections a lot of the time, and if your player can control the camera they will, say, look down at the water; as they do, far reflections will just clip out entirely as the reflected areas leave screen space before the water reflecting them does, and you'll have a blank area where reflections should be. And that's just one scenario that will happen all the time. It's a rather ugly hack all in all.


Edited by Frenetic Pony, 15 June 2013 - 01:45 AM.


#6 kalle_h   Members   -  Reputation: 1338


Posted 15 June 2013 - 02:05 AM

I see screen space reflection more as an occlusion technique for ambient speculars than as a reflection technique.

A basic stripped-down version would just check the reflection ray against the depth buffer to determine whether the ambient specular cube should be used or not.
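That stripped-down occlusion test might look like the following sketch, assuming depths already sampled along the ray; the `thickness` tolerance is a made-up parameter for illustration, not part of the post:

```python
def ambient_specular_occlusion(ray_depths, scene_depths, thickness=0.1):
    """Walk the reflection ray's samples; if any sample lands just behind
    the depth buffer (within a thickness tolerance) the point is treated
    as occluded and the ambient specular cube is suppressed there."""
    for ray_z, scene_z in zip(ray_depths, scene_depths):
        if scene_z < ray_z < scene_z + thickness:
            return 0.0  # occluded: suppress the ambient specular term
    return 1.0          # unoccluded: use the specular cube map
```

The return value would simply multiply the cube map specular contribution, so no reflected colour is ever fetched from the screen, only a visibility decision.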



#7 Quat   Members   -  Reputation: 403


Posted 17 June 2013 - 10:36 AM

Thanks for the feedback.  It seems simple enough to implement and try.  I'm going to try a hybrid of screen space and local cube map reflections.  If it works, it would be a huge savings not to redraw the scene for water surfaces on different planes. 

 

I do have one implementation question.  When I compute the reflected ray in view space (by reconstructing the view space position from the depth map and sampling the normal map), I need to march along that ray and test against the depth buffer.  My question is: what should the ray march step size in view space be?  Would it basically be FrustumLength / MaxNumSamples?  That seems like a pretty big step unless there are a lot of samples.
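A uniform view-space march where the step is the ray length divided by the sample budget can be sketched as below. This is only an illustration of the question's setup: `depth_at` is a hypothetical stand-in for projecting the marched point and sampling the scene depth buffer there.

```python
def view_space_ray_march(origin, direction, frustum_length, max_samples, depth_at):
    """Uniform march along a view-space ray. The step size is the ray
    length divided by the sample count, so more samples give a finer
    step. direction is assumed normalized; depth_at(p) returns the
    scene depth at p's projected screen position (hypothetical)."""
    step = frustum_length / max_samples
    for i in range(1, max_samples + 1):
        p = [o + d * step * i for o, d in zip(origin, direction)]
        if p[2] >= depth_at(p):  # ray went behind the depth buffer: a hit
            return p
    return None                  # marched the whole length without a hit
```

In practice implementations refine the hit with a binary search between the last two samples, or march in screen space instead, since a fixed view-space step covers wildly different pixel counts depending on the ray's orientation.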


-----Quat

#8 Chris_F   Members   -  Reputation: 2226


Posted 17 June 2013 - 12:04 PM

I've never seen screenspace reflections work "very well". By its very nature you are missing any and all information not directly onscreen (...)

It's a rather ugly hack all in all.

 

You've made your dislike of SS reflections pretty well known at this point, but I really can't agree with you. Yes, they aren't perfect. Neither is SSAO, for the exact same reasons, and yet SSAO is infinitely better than no AO, which is why just about every game now uses it. The same goes for reflections.

 

The reflections in Killzone 4 look pretty amazing, thanks in part to their use of SS reflections.



#9 Frenetic Pony   Members   -  Reputation: 1270


Posted 17 June 2013 - 12:14 PM

 

I've never seen screenspace reflections work "very well". (...) It's a rather ugly hack all in all.

You've made your dislike of SS reflections pretty well known at this point, but I really can't agree with you. (...) The reflections in Killzone 4 look pretty amazing, thanks in part to their use of SS reflections.

 

 

Oh certainly, with a lot of help they're OK. Just using it by itself is the problem; as I said, that lake scenario would cause any player to just shake their head sadly. And if you go with screen space reflections plus cubemaps you'll do a lot better, but then your scene is mostly static by default.

 

I guess I'd just like a really good engineering solution, something where you can say "OK, this works for everything, everywhere, all the time" without needing X to back it up. But until then, if you're doing something like Killzone, it can look solid enough.


Edited by Frenetic Pony, 17 June 2013 - 12:15 PM.


#10 Chris_F   Members   -  Reputation: 2226


Posted 17 June 2013 - 12:37 PM


I guess I'd just like a really good engineering solution, something that you can say "ok, this works for everything everywhere all the time" and you don't need X to back it up.

 

I think path tracing meets that criterion. ;)

 

It is sort of like Heisenberg's uncertainty principle, in a way: you can't have a general solution that works well for all scenarios and is computationally cheap at the same time.



#11 Frenetic Pony   Members   -  Reputation: 1270


Posted 17 June 2013 - 02:42 PM

 


I guess I'd just like a really good engineering solution, something that you can say "ok, this works for everything everywhere all the time" and you don't need X to back it up. (...)

 

 

Virtualized textures are pretty much all anyone could ever ask for in terms of textures. :) But yes, reflections are a lot harder. I doubt it's even mathematically possible to do reflections, or, even worse, actual (light-bounce) GI in general, in a "computationally cheap" way without a lot of precomputation or screen-space hacks. Oh well, I can dream.


Edited by Frenetic Pony, 17 June 2013 - 02:46 PM.


#12 cowsarenotevil   Crossbones+   -  Reputation: 2008


Posted 17 June 2013 - 03:38 PM

I don't really know how screen space reflections typically work, but I'm curious what they offer that couldn't also be achieved with a camera-centered cubemap. It seems that any of the information needed for screen space reflections could also be included in a cubemap (depth buffer, etc.), and using a cubemap could potentially help avoid the risk of rays escaping the screen.

 

Am I missing something?


-~-The Cow of Darkness-~-

#13 tool_2046   Members   -  Reputation: 1026


Posted 17 June 2013 - 03:52 PM

Using cube maps like that would require either rendering several extra passes of the scene for cube faces, or not including dynamic objects. 



#14 cowsarenotevil   Crossbones+   -  Reputation: 2008


Posted 17 June 2013 - 09:31 PM

Using cube maps like that would require either rendering several extra passes of the scene for cube faces, or not including dynamic objects. 

 

Well, yeah. If it's a camera-centered cubemap, you'd have to re-render it every time the camera (or anything else visible) moves. But in my experience rendering a single cube map every frame, especially with slightly lower quality (resolution etc.) is not really prohibitively expensive at all.


-~-The Cow of Darkness-~-

#15 ATEFred   Members   -  Reputation: 1052


Posted 18 June 2013 - 02:37 AM

 

Using cube maps like that would require either rendering several extra passes of the scene for cube faces, or not including dynamic objects. 

 

Well, yeah. If it's a camera-centered cubemap, you'd have to re-render it every time the camera (or anything else visible) moves. But in my experience rendering a single cube map every frame, especially with slightly lower quality (resolution etc.) is not really prohibitively expensive at all.

 

 

In the case of a deferred engine with loads of lights, it takes a lot more to generate a correctly lit cubemap than just rendering geometry. You would need some way to get lighting information roughly consistent with the main scene.

After that, the issue with having one single cubemap for the camera is that the reflections can end up wildly off for some objects. For example, say you have a bunch of moving objects which are reflective. It's very hard to get them to reflect each other plausibly without generating a cubemap for each entity (at least for the ones close to the camera). It seems like SSR would work much better in those cases, though admittedly I have never implemented it myself. From seeing a colleague implement it, it takes quite a lot of tuning to make it really usable once the base technique is working, but at this point I can't think of achievable alternatives that don't involve generating loads of cubemaps. I guess if you have voxel cone tracing up and running for lighting you can use that, but it's not exactly cheap to set up and run.



#16 cowsarenotevil   Crossbones+   -  Reputation: 2008


Posted 18 June 2013 - 03:26 AM


In the case of a deferred engine with loads of lights, it takes a lot more to generate a lit correct cubemap than just render geo. You would need some way to get lighting information roughly consistent with the main scene.

 

I don't see exactly why this matters; if we assume that each pixel takes roughly the same amount of time to render whether it's being rendered as part of our cubemap or not (which is not a terrible assumption, because, as you said, for the cubemap, we're rendering roughly the same scene, just with a 360 degree angle of view), if we are rendering at 1920x1080, we're rendering 2,073,600 pixels to render the main scene. A 256*256*6 cube map only requires 393,216 pixels, which is substantially smaller by a constant factor. And we can probably save even more time if we lower the quality of the cube map rendering in other ways.
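The pixel-count comparison can be checked directly:

```python
# Main render target vs. a low-resolution cube map, in raw pixel counts.
main_pixels = 1920 * 1080          # 2,073,600 pixels for the main scene
cube_pixels = 256 * 256 * 6        # 393,216 pixels across six cube faces
ratio = main_pixels / cube_pixels  # the cube map is ~5.3x fewer pixels
```

Of course this assumes equal per-pixel cost, which is exactly the assumption ATEFred challenges below for a deferred renderer; the raw pixel counts themselves are correct.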
 

 


After that, the issue with having one single cubemap for the camera is that the reflections can end up being wildly off for some objects. For example, you have a bunch of moving objects, which are reflective. It's very hard to get them to reflect each other plausibly without generating a cubemap for each entity (at least the ones close to the camera). It seems like ssr would kind of work much better in those cases, though admittedly I have never implemented them myself. From seeing a collegue implement them, it takes quite a lot of tuning to make them really usable after getting the base technique working, but at this point I can't think of achievable alternatives without generating loads of cubemaps. I guess if you have voxel cone tracing up and running for lighting, then you can use that, but it's not exactly cheap to setup and run.


This is the part that doesn't actually make sense. I certainly wouldn't dispute that the cube-mapped reflection makes all of these assumptions, (not that there aren't ways to mitigate this); the problem is that screen space reflections don't actually give any additional information. On a basic level, consider this: we could always replace the screen buffer with the section of the cubemap that is visible from the screen (projected into 2D of course) and do our "screen space" reflections this way instead (admittedly with a hit to resolution). This was what I meant originally when I said "it seems that any of the information needed for screen space reflections could also be included in a cubemap."

I am arguing that, in a sense, we can actually treat screen space reflections as a "special case" of cube-mapped reflections, since the data that's present on screen is also present in our cube map. And so, I'm simply saying that any sense in which screen-space reflections actually improve on the assumptions that are problematic to cubemap reflections could, at least in theory, be incorporated into the cubemap reflection algorithm itself.

 

EDIT: One more thing. There are actually perfectly valid reasons that, if we only use a single camera-centered cubemap, we don't actually have to worry about interreflection (multiple reflective objects reflecting each other):

 

A) if we calculate the cube map for the subsequent frame after we do our main rendering (using, say, a black cubemap for the very first frame), each successive "bounce" of our reflections will lag behind by one additional frame, but otherwise not have a problem

 

B) because our cubemap is camera-centered, we basically never even have to worry about objects reflecting themselves (except when they should,) because, on our cubemap, objects will always be "behind themselves." Unfortunately, this isn't the case with refraction (because objects do refract light from behind themselves), but for normal, shiny reflections it's always the case.


Edited by cowsarenotevil, 18 June 2013 - 03:38 AM.

-~-The Cow of Darkness-~-

#17 ATEFred   Members   -  Reputation: 1052


Posted 18 June 2013 - 12:14 PM

 


In the case of a deferred engine with loads of lights, it takes a lot more to generate a lit correct cubemap than just render geo. You would need some way to get lighting information roughly consistent with the main scene.

 

I don't see exactly why this matters; if we assume that each pixel takes roughly the same amount of time to render whether it's being rendered as part of our cubemap or not (which is not a terrible assumption, because, as you said, for the cubemap, we're rendering roughly the same scene, just with a 360 degree angle of view), if we are rendering at 1920x1080, we're rendering 2,073,600 pixels to render the main scene. A 256*256*6 cube map only requires 393,216 pixels, which is substantially smaller by a constant factor. And we can probably save even more time if we lower the quality of the cube map rendering in other ways.
 

 


After that, the issue with having one single cubemap for the camera is that the reflections can end up being wildly off for some objects. For example, you have a bunch of moving objects, which are reflective. It's very hard to get them to reflect each other plausibly without generating a cubemap for each entity (at least the ones close to the camera). It seems like ssr would kind of work much better in those cases, though admittedly I have never implemented them myself. From seeing a collegue implement them, it takes quite a lot of tuning to make them really usable after getting the base technique working, but at this point I can't think of achievable alternatives without generating loads of cubemaps. I guess if you have voxel cone tracing up and running for lighting, then you can use that, but it's not exactly cheap to setup and run.


This is the part that doesn't actually make sense. I certainly wouldn't dispute that the cube-mapped reflection makes all of these assumptions, (not that there aren't ways to mitigate this); the problem is that screen space reflections don't actually give any additional information. On a basic level, consider this: we could always replace the screen buffer with the section of the cubemap that is visible from the screen (projected into 2D of course) and do our "screen space" reflections this way instead (admittedly with a hit to resolution). This was what I meant originally when I said "it seems that any of the information needed for screen space reflections could also be included in a cubemap."

I am arguing that, in a sense, we can actually treat screen space reflections as a "special case" of cube-mapped reflections, since the data that's present on screen is also present in our cube map. And so, I'm simply saying that any sense in which screen-space reflections actually improve on the assumptions that are problematic to cubemap reflections could, at least in theory, be incorporated into the cubemap reflection algorithm itself.

 

EDIT: One more thing. There are actually perfectly valid reasons that, if we only use a single camera-centered cubemap, we don't actually have to worry about interreflection (multiple reflective objects reflecting each other):

 

A) if we calculate the cube map for the subsequent frame after we do our main rendering (using, say, a black cubemap for the very first frame), each successive "bounce" of our reflections will lag behind by one additional frame, but otherwise not have a problem

 

B) because our cubemap is camera-centered, we basically never even have to worry about objects reflecting themselves (except when they should,) because, on our cubemap, objects will always be "behind themselves." Unfortunately, this isn't the case with refraction (because objects do refract light from behind themselves), but for normal, shiny reflections it's always the case.

 

My post was actually about the earlier comment stating that it was quite possibly not prohibitively expensive to render the cubemap every frame. I would still argue that in a deferred setup it is pretty expensive, given the number of passes you need to do to get the final result. (You could of course just forward render with limited lights / sunlight only, but that's tricky to get looking right.)


"(...) we could always replace the screen buffer with the section of the cubemap that is visible from the screen (projected into 2D of course) and do our "screen space" reflections this way instead (admittedly with a hit to resolution). (...)"

This sounds like a pretty smart idea, if you already have the generated cubemap. If/when I need to work on reflections, I'll definitely give it a test.
I certainly don't deny that SSR lacks a large amount of information, but for the performance cost it can be a pretty neat approximation, depending on the type of shiny materials you are working with. Certainly cheaper than rendering and lighting the scene 6 times.

 



#18 cowsarenotevil   Crossbones+   -  Reputation: 2008


Posted 18 June 2013 - 01:30 PM

my post was actually about the comment made before stating that it was quite possibly not prohibitively expensive to render the cubemap every frame. I would still argue that in the deferred setup, it is pretty expensive, given the number of passes you need to do to get the final results. (you could of course just forward render with limited lights / sunlight only, but it's tricky to get looking right).

 

(...)

Certainly cheaper than rendering the scene and lighting the scene 6 times.

Like I said, rendering a cube map every frame obviously has some cost versus not doing it at all, but you can pretty much ensure that rendering the whole cubemap happens within some (small) factor of the time it takes to do the main render pass by rendering at a reduced resolution, so I don't think it's really prohibitive at all.


-~-The Cow of Darkness-~-



