Screen Space Reflections



I guess I'd just like a really good engineering solution, something that you can say "ok, this works for everything everywhere all the time" and you don't need X to back it up.

I think path tracing meets that criterion. ;)

It's a bit like Heisenberg's uncertainty principle: you can't have a general solution that works well for all scenarios and is computationally cheap at the same time.

Virtualized textures are pretty much all anyone could ever ask for in terms of textures :) But yes, reflections are a lot harder. I doubt it's even mathematically possible to do reflections "computationally cheaply", or even worse, actual (light-bounce) GI in general, without a lot of precomputation and screen-space hacks. Oh well, I can dream.


I don't really know how screen space reflections typically work, but I'm curious as to what they offer that couldn't also be achieved with a camera-centered cubemap. It seems that any of the information needed for screen space reflections could also be included in a cubemap (depth buffer, etc.) and that using a cubemap could potentially help avoid the risk of rays escaping the screen.
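For what it's worth, the lookup itself would just be the standard environment-map one; here's a toy, untested CPU-side sketch of the direction math (in a real renderer this is a one-line reflect() in the fragment shader, and everything here is hypothetical):

#include <cmath>
#include <cstdio>

// Minimal 3-vector; in a real shader this is GLSL/HLSL's built-in reflect().
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror the view direction I about the (normalized) surface normal N.
// The resulting direction indexes a camera-centered cubemap directly.
static Vec3 reflect(Vec3 I, Vec3 N) { return sub(I, scale(N, 2.0f * dot(N, I))); }

int main() {
    Vec3 view   = {0.0f, -0.7071f, -0.7071f};  // looking down at a floor
    Vec3 normal = {0.0f,  1.0f,     0.0f};     // the floor's upward normal
    Vec3 r = reflect(view, normal);
    // Prints (0, 0.7071, -0.7071): the ray bounces up and away, as expected.
    std::printf("cubemap lookup dir: (%g, %g, %g)\n", r.x, r.y, r.z);
}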

Am I missing something?

-~-The Cow of Darkness-~-

Using cube maps like that would require either rendering several extra passes of the scene for cube faces, or not including dynamic objects.


Well, yeah. If it's a camera-centered cubemap, you'd have to re-render it every time the camera (or anything else visible) moves. But in my experience, rendering a single cube map every frame, especially at slightly lower quality (resolution, etc.), is not really prohibitively expensive at all.
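The per-frame update is just six small render-to-texture passes; an untested, OpenGL-flavored sketch (the two helper functions are hypothetical stand-ins for whatever the engine does):

#include <GL/gl.h>  // plus an extension loader (GLEW/GLAD) in a real app

// Hypothetical engine hooks: stubs here, app-specific in practice.
static void setFaceCamera(int face) { /* 90-degree FOV down this face's axis */ }
static void renderSceneLowDetail()  { /* reduced-quality scene pass */ }

// Re-render a camera-centered cubemap once per frame. Six 256x256 faces
// are far fewer pixels than a 1080p main pass, so the cost stays a
// modest, fixed fraction of the frame.
static void updateCameraCubemap(GLuint fbo, GLuint cubeTex) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, 256, 256);
    for (int face = 0; face < 6; ++face) {
        // The six cube-face attachment points are consecutive GL enums.
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                               cubeTex, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        setFaceCamera(face);
        renderSceneLowDetail();
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}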

-~-The Cow of Darkness-~-

Well, yeah. If it's a camera-centered cubemap, you'd have to re-render it every time the camera (or anything else visible) moves. But in my experience, rendering a single cube map every frame, especially at slightly lower quality (resolution, etc.), is not really prohibitively expensive at all.

In the case of a deferred engine with loads of lights, it takes a lot more to generate a correctly lit cubemap than just rendering the geometry. You would need some way to get lighting information roughly consistent with the main scene.

After that, the issue with having one single cubemap for the camera is that the reflections can end up being wildly off for some objects. For example, say you have a bunch of moving objects which are reflective. It's very hard to get them to reflect each other plausibly without generating a cubemap for each entity (at least the ones close to the camera). It seems like SSR would work much better in those cases, though admittedly I have never implemented it myself. From seeing a colleague implement it, it takes quite a lot of tuning to make it really usable after getting the base technique working, but at this point I can't think of achievable alternatives without generating loads of cubemaps. I guess if you have voxel cone tracing up and running for lighting, then you can use that, but it's not exactly cheap to set up and run.
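As I understand the base technique, it amounts to marching the reflected ray through the depth buffer and reusing the color of the pixel it lands on; a rough, untested CPU-flavored sketch (the projection helper and all constants are made up, and STEP/THICKNESS are exactly the knobs that take all the tuning):

#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Toy pinhole projection: camera at the origin, looking down -Z.
static bool projectToScreen(Vec3 p, int w, int h,
                            int* px, int* py, float* depth) {
    if (p.z >= -0.01f) return false;           // point is behind the camera
    float f = float(h);                        // arbitrary focal length
    *px = int(w * 0.5f + f * p.x / -p.z);
    *py = int(h * 0.5f + f * p.y / -p.z);
    *depth = -p.z;                             // linear view-space depth
    return *px >= 0 && *px < w && *py >= 0 && *py < h;
}

// March the reflected ray in view space; at each step, compare the ray's
// depth against the depth buffer. On a hit, reuse that pixel's lit color.
static bool marchReflection(Vec3 origin, Vec3 dir,
                            const float* depthBuf, int w, int h,
                            int* hitX, int* hitY) {
    const int   MAX_STEPS = 64;    // tuning knob
    const float STEP      = 0.1f;  // tuning knob
    const float THICKNESS = 0.05f; // tuning knob: assumed surface thickness
    Vec3 p = origin;
    for (int i = 0; i < MAX_STEPS; ++i) {
        p = add(p, scale(dir, STEP));
        int px, py; float rayDepth;
        if (!projectToScreen(p, w, h, &px, &py, &rayDepth))
            return false;                      // ray left the screen: no data
        float sceneDepth = depthBuf[py * w + px];
        if (rayDepth > sceneDepth && rayDepth - sceneDepth < THICKNESS) {
            *hitX = px; *hitY = py;
            return true;
        }
    }
    return false;                              // no intersection found
}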


In the case of a deferred engine with loads of lights, it takes a lot more to generate a correctly lit cubemap than just rendering the geometry. You would need some way to get lighting information roughly consistent with the main scene.

I don't see exactly why this matters. If we assume that each pixel takes roughly the same amount of time to render whether or not it's part of the cubemap (not a terrible assumption, since, as you said, we're rendering roughly the same scene, just with full 360-degree coverage), then at 1920x1080 the main scene is 2,073,600 pixels, while a 256x256x6 cube map is only 393,216 pixels, roughly a fifth as many. And we can probably save even more time by lowering the quality of the cubemap rendering in other ways.


After that, the issue with having one single cubemap for the camera is that the reflections can end up being wildly off for some objects. (...) It's very hard to get them to reflect each other plausibly without generating a cubemap for each entity (at least the ones close to the camera). It seems like SSR would work much better in those cases (...)


This is the part that doesn't actually make sense to me. I certainly wouldn't dispute that cube-mapped reflections have all of these problems (not that there aren't ways to mitigate them); my point is that screen space reflections don't actually give you any additional information. On a basic level, consider this: we could always replace the screen buffer with the section of the cubemap that is visible on screen (projected into 2D, of course) and do our "screen space" reflections that way instead (admittedly with a hit to resolution). This is what I meant originally when I said "it seems that any of the information needed for screen space reflections could also be included in a cubemap."

I am arguing that, in a sense, we can treat screen space reflections as a special case of cube-mapped reflections, since the data that's present on screen is also present in our cubemap. So any way in which screen-space reflections improve on the assumptions that are problematic for cubemap reflections could, at least in theory, be incorporated into the cubemap reflection algorithm itself.
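To make the "special case" point concrete: the same kind of march works against a cubemap that stores distance-from-camera, and since every direction lands on some cube face, the ray can never fall off the edge of the buffer the way it can in screen space. Another untested sketch (sampleCubeDistance is a hypothetical stand-in for the cubemap fetch):

#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static float length(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Stand-in for sampling a cubemap texel that stores distance from the camera.
static float sampleCubeDistance(Vec3 dir) { (void)dir; return 1e9f; /* stub */ }

// March a reflected ray against the camera-centered distance cubemap.
// On a hit, the same direction indexes the color cubemap for the result.
static bool marchAgainstCubemap(Vec3 camPos, Vec3 origin, Vec3 rayDir,
                                Vec3* hitDir) {
    const int   MAX_STEPS = 64;
    const float STEP = 0.1f, THICKNESS = 0.05f;
    Vec3 p = origin;
    for (int i = 0; i < MAX_STEPS; ++i) {
        p = add(p, scale(rayDir, STEP));
        Vec3  toP       = sub(p, camPos);
        float rayDist   = length(toP);
        Vec3  lookupDir = scale(toP, 1.0f / rayDist);    // always hits a face:
        float sceneDist = sampleCubeDistance(lookupDir); // no "off screen" case
        if (rayDist > sceneDist && rayDist - sceneDist < THICKNESS) {
            *hitDir = lookupDir;
            return true;
        }
    }
    return false;
}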

EDIT: One more thing. There are actually good reasons why, if we only use a single camera-centered cubemap, we don't have to worry much about interreflection (multiple reflective objects reflecting each other):

A) If we render the cube map for the next frame after we finish the main rendering pass (using, say, a black cubemap for the very first frame), each successive "bounce" of our reflections will lag behind by one additional frame, but otherwise works fine (see the sketch after this list).

B) Because our cubemap is camera-centered, we basically never have to worry about objects reflecting themselves (except when they should), because, in our cubemap, objects will always be "behind themselves." Unfortunately this isn't the case with refraction (objects do refract light coming from behind themselves), but for ordinary mirror-like reflections it always holds.
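Point A is really just double-buffering; a hypothetical sketch of the bookkeeping (all names made up):

// Two cubemaps, ping-ponged each frame: reflective surfaces in frame N
// sample the map rendered during frame N-1, so each extra reflection
// "bounce" lags one more frame behind, exactly as described in (A).
struct CubemapPair {
    unsigned tex[2] = {0, 0}; // texture handles, both cleared to black once
    int read  = 0;            // sampled by this frame's reflective shaders
    int write = 1;            // re-rendered this frame, read next frame
    void swapAfterFrame() { int t = read; read = write; write = t; }
};

// Per-frame order: render the main scene sampling tex[read], then
// re-render the cubemap into tex[write], then call swapAfterFrame().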

-~-The Cow of Darkness-~-


I don't see exactly why this matters. If we assume that each pixel takes roughly the same amount of time to render whether or not it's part of the cubemap (...) at 1920x1080 the main scene is 2,073,600 pixels, while a 256x256x6 cube map is only 393,216 pixels, roughly a fifth as many. (...)

My post was actually about the earlier comment stating that it was quite possibly not prohibitively expensive to render the cubemap every frame. I would still argue that in a deferred setup it is pretty expensive, given the number of passes you need to do to get the final result (you could of course just forward-render with limited lights / sunlight only, but it's tricky to get that looking right).


"This is the part that doesn't actually make sense. I certainly wouldn't dispute that the cube-mapped reflection makes all of these assumptions, (not that there aren't ways to mitigate this); the problem is that screen space reflections don't actually give any additional information. On a basic level, consider this: we could always replace the screen buffer with the section of the cubemap that is visible from the screen (projected into 2D of course) and do our "screen space" reflections this way instead (admittedly with a hit to resolution). This was what I meant originally when I said "it seems that any of the information needed for screen space reflections could also be included in a cubemap.""

This sounds like a pretty smart idea, if you already have the generated cubemap. If/when I need to work on reflections, I'll definitely give it a test.
I certainly don't deny that SSR lacks a large amount of information, but in terms of performance cost it can be a pretty neat approximation, depending on the type of shiny materials you are working with. Certainly cheaper than rendering and lighting the scene six times.

My post was actually about the earlier comment stating that it was quite possibly not prohibitively expensive to render the cubemap every frame. (...)

Certainly cheaper than rendering and lighting the scene six times.

Like I said, rendering a cube map every frame obviously has some cost versus not doing it at all, but by rendering at a reduced resolution you can pretty much ensure that the whole cubemap finishes within some (small) factor of the time the main render pass takes, so I don't think it's really prohibitive at all.

-~-The Cow of Darkness-~-

