Temporal Reprojection for Volumetric Cloud Rendering


Hi all,

Recently I've been working (as a hobbyist) on volumetric cloud rendering (something we discussed in the other topic). I've reached a visually satisfying result, but performance is quite slow (on a GTX 960 @ 900p): about 10-15 fps, so I've been trying to improve it. At the moment the only optimization is the early exit on low transmittance (high opacity). I wanted to implement an early exit on high transmittance as well (for example, when cloud coverage is low, performance drops a lot because of the missing early exit): this requires keeping the last frame in memory and checking whether, in the previous frame (by finding the correct UV coords), its transmittance was high.

I'm having trouble finding the correct UV coords. Currently I'm doing it this way:


// Build this fragment's ray in NDC; z = -1.0 places it on the near plane.
vec3 computeClipSpaceCoord(){
    vec2 ray_nds = 2.0*gl_FragCoord.xy/iResolution.xy - 1.0;
    return vec3(ray_nds, -1.0);
}

// Map NDC xy from [-1,1] to screen/UV space [0,1].
vec2 computeScreenPos(vec2 ndc){
    return ndc*0.5 + 0.5;
}

// For picking the previous frame's color:
vec4 ray_clip = vec4(computeClipSpaceCoord(), 1.0);

// Unproject to world space with the current inverse view-projection...
vec4 camToWorldPos = invViewProj*ray_clip;
camToWorldPos /= camToWorldPos.w;
// ...then project with the previous frame's view-projection.
vec4 pPrime = oldFrameVP*camToWorldPos;
pPrime /= pPrime.w;
vec2 prevFrameScreenPos = computeScreenPos(pPrime.xy);

And then I use prevFrameScreenPos to sample the last frame's texture. Now, this (obviously) doesn't work. My feeling is that the issue is that I'm setting the z coord in computeClipSpaceCoord() to -1.0, ignoring the depth of that fragment. But how can I determine the depth of a fragment in volume rendering, since it's all done by raymarching in the fragment shader?

Anyway, this seems to be the key to temporal reprojection. I wasn't able to find anything about it; do you have any resources/advice for implementing this?

Thank you all for your help.

 


When I did that, I just assumed the clouds are at max depth. If they move rather slowly it still gives a sharp image. Another idea could be to save the distance/position of the first in-cloud sample, so you get the front-most sample position of the clouds. Otherwise, yeah, that's a problem with volume rendering, as there is no hard surface.
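In shader terms it could look something like this; just a rough sketch where cloudDensity() and the march parameters (tStart, stepSize, absorption, MAX_STEPS) are placeholder names, not from the posted code:

float transmittance = 1.0;
float firstHitDist  = -1.0;                  // -1.0 = no cloud hit yet
float t = tStart;
for (int i = 0; i < MAX_STEPS; ++i){
    vec3 p = rayOrigin + rayDir*t;
    float density = cloudDensity(p);
    if (density > 0.0 && firstHitDist < 0.0)
        firstHitDist = t;                    // front-most in-cloud sample
    transmittance *= exp(-density*stepSize*absorption);
    if (transmittance < 0.01) break;         // low-transmittance early exit
    t += stepSize;
}
// rayOrigin + rayDir*firstHitDist is then the world position to reproject.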


Another word on optimization: I guess you've read the articles from the Horizon: Zero Dawn team, so early out is one way to gain performance; skipping empty space and lowering the quality when fully inside a cloud or at a distance are others. The empty-space skipping is also quite useful at low coverages, as you greatly reduce the amount of samples taken.
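Roughly, the empty-space skipping could look like this (same placeholder cloudDensity() as above, with made-up coarseStep/fineStep values): march with coarse steps while outside the cloud, then back up and refine once density is found.

float t = tStart;
float stepLen = coarseStep;
while (t < tEnd){
    float density = cloudDensity(rayOrigin + rayDir*t);
    if (density > 0.0 && stepLen == coarseStep){
        t -= coarseStep;       // back up one coarse step so nothing is missed
        stepLen = fineStep;    // and re-march it with fine steps
        continue;
    }
    // ...accumulate lighting/transmittance here while density > 0.0...
    t += stepLen;
}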


But the major thing, which also requires reprojection, is that you don't do it at full resolution. I do it at 1/16th resolution, essentially creating a full image over 16 frames, then apply a small 1-pixel blur to hide the sampling noise. Sure, if the camera is rotating or moving fast you get somewhat blurrier clouds, but that's not very visible and looks more like a very soft motion blur. And even at that low update resolution it can take up to 1.8 ms on a GTX 970.

Yes, I also have to implement the optimizations from the HZD articles.

So, for the temporal reprojection you're speaking about: is it correct to use a reprojection similar to the one I wrote, or do I simply write to different coordinates each frame while keeping the other pixels unchanged?

Let me explain what I mean: for example, if I spread the image over two frames, I first write the pixels with odd X coordinates, then the ones with even X coordinates. Is that how it works?

I have another question: does using mipmaps increase performance? If not, why would I want to use them?

Yep, odd frames could write to the odd pixel numbers while using the full image from the last frame to fill in the even pixels, and the other way around for even frames. And I don't quite know if it's correct to use the reprojection you posted (as I basically use the same), but it looked fine. So you should give it a try.

Actually I use a 4x4 dither matrix (https://en.wikipedia.org/wiki/Ordered_dithering) as a threshold for which pixel should be written, along with a matching offset, and keep a frame counter internally. So for the first frame I would compute the upper-left pixel; then, to create the full image, I update only that pixel in each 4x4 block while reusing the previous full-screen image for the other pixels. In case I don't have an old full-screen image, I just use the newly computed one with some linear filtering.
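As a rough sketch of the selection (uFrameIndex is an assumed frame-counter uniform, not from my actual code):

uniform int uFrameIndex; // assumed: incremented every frame, wrapped to 0..15

const int bayer4x4[16] = int[16](
     0,  8,  2, 10,
    12,  4, 14,  6,
     3, 11,  1,  9,
    15,  7, 13,  5);

// True for exactly one pixel in each 4x4 block per frame, so the whole
// image gets refreshed once every 16 frames.
bool shouldUpdateThisFrame(ivec2 pixel){
    return bayer4x4[(pixel.y & 3)*4 + (pixel.x & 3)] == (uFrameIndex & 15);
}

Pixels where this returns false just take the reprojected previous-frame color.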

18 minutes ago, Ryokeen said:

Yep, odd frames could write to the odd pixel numbers while using the full image from the last frame to fill in the even pixels, and the other way around for even frames. [...]

I'll definitely give it a try soon. Are you sure the reprojection I posted is correct? It gives me strange artifacts: when I don't move the camera position (just rotate it) the reprojection is fine, but if I do, it gets messed up. I uploaded images of what I mean: the first fits well, the second doesn't.

[Images: reprojection_ok.png, reprojection_not_ok.png]

Yeah, it's the same one I use, so either one of the matrices is incorrect, or it's because I use 1.0 for z in computeClipSpaceCoord.

Are you sure you invert the current view-projection including both rotation and translation, and do the same for the last frame's matrices?

27 minutes ago, Ryokeen said:

Yeah, it's the same one I use, so either one of the matrices is incorrect, or it's because I use 1.0 for z in computeClipSpaceCoord. [...]

It seems to work with 1.0 in z! Thank you so much! Now I can move forward! :D
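For anyone reading later, the working version is just the snippet from my first post with only the z changed (the ray is taken at the far plane instead of the near plane):

vec3 computeClipSpaceCoord(){
    vec2 ray_nds = 2.0*gl_FragCoord.xy/iResolution.xy - 1.0;
    return vec3(ray_nds, 1.0); // far plane instead of -1.0 (near plane)
}

// the rest is unchanged:
vec4 ray_clip = vec4(computeClipSpaceCoord(), 1.0);
vec4 camToWorldPos = invViewProj*ray_clip;
camToWorldPos /= camToWorldPos.w;
vec4 pPrime = oldFrameVP*camToWorldPos;
pPrime /= pPrime.w;
vec2 prevFrameScreenPos = computeScreenPos(pPrime.xy);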

Hey there, I (think I) have implemented the temporal reprojection: I achieved 6-8 ms on the GTX 960, rendering 1/16th of the image per frame. It's of course not impressive compared to the 1.8-2 ms Andrew Schneider talked about, but for me and my purposes (just a homebrew graphics engine) it's quite enough. It's impressive how much faster it became; it's also funny how the "blurry" nature of temporal reprojection works well with the equally blurry nature of the clouds!

Anyway, before implementing it, I tested on Shadertoy the mechanism that decides when a pixel must be written (hoping it's correct): Bayer Pixel Writing.

PS: I also discovered how heavy a post-processing effect can be, in this case god rays...

 

Hi people,

I'm trying to achieve this kind of reprojection to render my clouds over a few frames to speed it up, and so far I've had good results.

I'm only using a 4x4 pattern because I have a camera moving and rotating quite fast and don't want too many artifacts. As I was already rendering clouds at half resolution it's now super fast.

BUT.

With this reprojection method, translations are now almost perfect, and yaw/pitch camera rotations also behave quite well. Both at the same time can produce artifacts, but nothing I can't hide with some motion blur.

However, I have a very strange issue when the camera rolls (rotation around the z axis, where z is the view direction): at some places, the reprojection fails totally.

Here the camera rolls at 30°/sec:

https://tof.cx/images/2019/11/28/e765d0431c7b6d9272f44f03e07e42e7.png

We can see there are some horizontal bands where it isn't working at all, here with a red overlay:

https://tof.cx/images/2019/11/28/09d4ac24b026a99911475c26cb4874d5.jpg

At this point it even looks better if I don't use reprojection and just use the original texture coords:

https://tof.cx/images/2019/11/28/b644072029e083be7bc5368418b0b8c6.jpg

I've also tried not using the projection matrix and rebuilding an NDC position from the view matrix and an arbitrary depth, without success.

Moreover, I also tried using the "real" depth of the cloud (I grabbed the depth of the first non-transparent pixel hit while raymarching) and the result was the same: some places were wrongly reprojected.

One thing I've noticed is that the bands sit at the top/bottom when the camera rolls slowly, and they move towards the center as the roll speed increases.

Thanks for any hint/help :)




Alright, I've found it: it's a precision issue.

If I send dmat4 to the shader (my matrices are computed in double anyway) and do all the reprojection in double in the shader, the artifacts are gone, which is great: it means the computations are mathematically OK.

//EDIT: Hmm, false celebration: it isn't working better at all; it was just an error updating the uniforms that cancelled the reprojection. This, plus the higher framerate in release mode, made the image appear OK. I guess I'll just skip reprojection and use a smart camera motion blur to hide the errors :(

//EDIT2: Alright, sampling the previous-frame texture using bilinear filtering instead of nearest solved the issue.
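For anyone hitting the same banding, the difference is just the history fetch (uPrevFrame is a placeholder name for the previous-frame texture):

uniform sampler2D uPrevFrame; // assumed name for the history texture

// nearest-neighbour fetch - this caused the banding during camera roll:
vec4 prevNearest = texelFetch(uPrevFrame, ivec2(prevFrameScreenPos*iResolution.xy), 0);
// bilinear sample - this solved it (the texture must use GL_LINEAR filtering):
vec4 prevLinear = texture(uPrevFrame, prevFrameScreenPos);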


