
Horizon Zero Dawn Cloud System


#1   Members   


Posted 27 July 2016 - 12:39 PM

I recently read the GPU Pro 7 article "Real-Time Volumetric Cloudscapes" and want to implement a volumetric cloud system myself.

I created the 3D noise texture according to the article, wrote the shader, and tested it, but I found it hard to model the clouds.

I used the sample cloud density code from the article and a weather texture taken from the video embedded in the SIGGRAPH 2015 slides, and the result is not good.

The article doesn't mention how the weather texture is mapped onto the sky dome.

I use a planar projection, just using position.xy to calculate the weather texture UV, but the UVs mostly bunch up towards the horizon and the distortion is very obvious.

So how should the weather texture be mapped onto the sky dome?

And how is the weather generated?

I suspect the weather texture shown in the bottom left of the video is not the original weather texture; it looks like it is just there to show some info.

Has anyone done the same thing?

Any suggestions would be much appreciated.

Thanks.


Edited by ChenA, 27 July 2016 - 12:52 PM.


#2   Members   


Posted 27 July 2016 - 03:30 PM

I kind of coded clouds similar to theirs (well, based on their noise texture ideas). The mapping is done by raymarching through the 3D noise texture and doing the density, coverage, refinement and lighting at each raymarch sample.

You can start the raymarch by intersecting a lower cloud "plane" with the eye vector at each pixel, then you march in steps and do your calculations.

Later on you can switch to an intersection with a sphere around your map so you get a nice curvature at the horizon.

So reading up on raymarching and volume rendering is a good idea.
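To make that a bit more concrete, here is a very rough sketch of the marching loop (not my actual code; IntersectCloudLayer, SampleCloudDensity, ComputeCloudLighting and the step/absorption values are placeholders):

// very rough sketch of the raymarch, placeholder functions and made-up values
float4 MarchClouds(float3 camPos, float3 viewDir)
{
    float t0, t1;
    // distances along viewDir where the ray enters and leaves the cloud layer
    if (!IntersectCloudLayer(camPos, viewDir, t0, t1))
        return float4(0.0, 0.0, 0.0, 0.0);

    const int   numSteps   = 64;     // made-up value
    const float absorption = 0.05;   // made-up value
    float stepSize = (t1 - t0) / numSteps;

    float3 pos           = camPos + viewDir * t0;
    float  transmittance = 1.0;
    float3 scattered     = float3(0.0, 0.0, 0.0);

    for (int i = 0; i < numSteps; ++i)
    {
        float density = SampleCloudDensity(pos);   // noise + coverage + height gradient
        if (density > 0.0)
        {
            float3 lighting = ComputeCloudLighting(pos, density);   // sun + ambient, placeholder
            scattered     += lighting * density * stepSize * transmittance;
            transmittance *= exp(-absorption * density * stepSize);
            if (transmittance < 0.01)
                break;                              // early out once the cloud is opaque
        }
        pos += viewDir * stepSize;
    }
    return float4(scattered, 1.0 - transmittance);  // rgb = in-scattered light, a = opacity
}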

 

As for how one could generate the weather texture, that's something I need ideas for as well. You could precompute/pre-paint some weather textures for a single cloud.

Then, based on the weather you want, pack some of the premade textures into an FBO, add some magic, and use that in the final raymarch shader.

The cloud shape depends a lot on the baked 3D noise texture and how you combine it later on. After all, you have four channels with different octaves and only need a single float density value. For me it was a lot of experimenting.



#3   Members   


Posted 27 July 2016 - 08:43 PM


 

Thanks for your reply.

I've already implemented the raymarching and volume rendering.

The problem is how to get the cloud density. I think I need the weather texture to get the cloud base shape and the empty sky regions, so at every raymarch step I need weather data to calculate the cloud density, which means I need to turn the sample position into a UV to sample the weather texture.

My problem is exactly that: how do I calculate the weather texture UV? How do I map the weather texture onto the sky dome?

 

 

 

For me the key points are the noise texture, the weather texture, the mapping method, and the density compositing algorithm.

I've already generated the noise texture.

Below is the GPU Pro 7 compositing code:

// Fractional value for sample position in the cloud layer.
float GetHeightFractionForPoint(float3 inPosition, float2 inCloudMinMax)
{
    // Get global fractional position in cloud zone.
    float height_fraction = (inPosition.z - inCloudMinMax.x) / (inCloudMinMax.y - inCloudMinMax.x);
    return saturate(height_fraction);
}

// Utility function that maps a value from one range to another.
float Remap(float original_value, float original_min, float original_max, float new_min, float new_max)
{
    return new_min + (((original_value - original_min) / (original_max - original_min)) * (new_max - new_min));
}

float SampleCloudDensity(float3 p, float3 weather_data)
{
    // Read the low-frequency Perlin-Worley and Worley noises.
    float4 low_frequency_noises = tex3Dlod(Cloud3DNoiseTextureA, Cloud3DNoiseSamplerA, float4(p, mip_level)).rgba;

    // Build an FBM out of the low-frequency Worley noises
    // that can be used to add detail to the low-frequency
    // Perlin-Worley noise.
    float low_freq_FBM = (low_frequency_noises.g * 0.625)
                       + (low_frequency_noises.b * 0.25)
                       + (low_frequency_noises.a * 0.125);

    // Define the base cloud shape by dilating it with the
    // low-frequency FBM made of Worley noise.
    float base_cloud = Remap(low_frequency_noises.r, -(1.0 - low_freq_FBM), 1.0, 0.0, 1.0);

    // Get the density-height gradient using the density height
    // function explained in Section 4.3.2.
    float density_height_gradient = GetDensityHeightGradientForPoint(p, weather_data);

    // Apply the height function to the base cloud shape.
    base_cloud *= density_height_gradient;

    // Cloud coverage is stored in the weather data's red channel.
    float cloud_coverage = weather_data.r;

    // Use remap to apply the cloud coverage attribute.
    float base_cloud_with_coverage = Remap(base_cloud, cloud_coverage, 1.0, 0.0, 1.0);

    // Multiply the result by the cloud coverage attribute so
    // that smaller clouds are lighter and more aesthetically
    // pleasing.
    base_cloud_with_coverage *= cloud_coverage;

    // Get the height fraction, used for the curl offset below and
    // for blending noise types over height.
    float height_fraction = GetHeightFractionForPoint(p, inCloudMinMax);

    // Add some turbulence to the bottoms of clouds.
    p.xy += curl_noise.xy * (1.0 - height_fraction);

    // Sample high-frequency noises.
    float3 high_frequency_noises = tex3Dlod(Cloud3DNoiseTextureB, Cloud3DNoiseSamplerB, float4(p * 0.1, mip_level)).rgb;

    // Build the high-frequency Worley noise FBM.
    float high_freq_FBM = (high_frequency_noises.r * 0.625)
                        + (high_frequency_noises.g * 0.25)
                        + (high_frequency_noises.b * 0.125);

    // Transition from wispy shapes to billowy shapes over height.
    float high_freq_noise_modifier = mix(high_freq_FBM, 1.0 - high_freq_FBM, saturate(height_fraction * 10.0));

    // Erode the base cloud shape with the distorted
    // high-frequency Worley noises.
    float final_cloud = Remap(base_cloud_with_coverage, high_freq_noise_modifier * 0.2, 1.0, 0.0, 1.0);

    return final_cloud;
}
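The listing references GetDensityHeightGradientForPoint but doesn't show it. Here is my guess at it, just a sketch based on the chapter's description of three height gradients selected by the cloud type; the gradient numbers are made up and it may not match the real implementation:

// My guess at GetDensityHeightGradientForPoint, NOT the article's real code.
// weather_data.b is assumed to hold the cloud type (0 = stratus, 0.5 = cumulus, 1 = cumulonimbus).
float GetDensityHeightGradientForPoint(float3 p, float3 weather_data)
{
    float height_fraction = GetHeightFractionForPoint(p, inCloudMinMax);

    // (fade-in start, fade-in end, fade-out start, fade-out end) per cloud type, made-up numbers.
    float4 stratus      = float4(0.0, 0.05, 0.1, 0.2);
    float4 cumulus      = float4(0.0, 0.1,  0.4, 0.6);
    float4 cumulonimbus = float4(0.0, 0.1,  0.7, 1.0);

    float  cloud_type = weather_data.b;
    float4 gradient   = lerp(lerp(stratus, cumulus, saturate(cloud_type * 2.0)),
                             cumulonimbus, saturate((cloud_type - 0.5) * 2.0));

    // Fade the density in near the bottom of the gradient and out near its top.
    return smoothstep(gradient.x, gradient.y, height_fraction)
         * (1.0 - smoothstep(gradient.z, gradient.w, height_fraction));
}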

So the remaining problems are the mapping method and how to create the weather texture,

especially the coverage; the cloud type channel is easy to create.


Edited by ChenA, 27 July 2016 - 09:12 PM.


#4   Members   


Posted 28 July 2016 - 05:41 AM

For mapping the weather texture I use a simple planar projection, or rather the xy position of the sample position with some scaling and a wind offset. That works quite well as long as the curvature of your cloud sphere is small.
As I said, I use a ray-sphere intersection as the starting point, but with a quite big sphere which is centered at the camera's xy position and placed at a height so that the clouds above the camera start at the height you want.
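In shader terms that mapping is just something like this (a small sketch; WeatherTexture, the scale and the wind values are placeholders):

// Planar projection of the weather texture; samplePos is the current raymarch
// position in world space, the scale and wind values are made up.
float3 SampleWeather(float3 samplePos, float time)
{
    const float  weatherScale = 1.0 / 40000.0;        // ~40 km tile, placeholder
    const float2 windOffset   = float2(0.01, 0.0);    // uv units per second, placeholder
    float2 weatherUV = samplePos.xy * weatherScale + windOffset * time;
    return tex2Dlod(WeatherTexture, float4(weatherUV, 0.0, 0.0)).rgb;
}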
 
For the weather texture, what you can try is using two premade tiling Perlin noise textures (different noise in r, g, b, a), scrolling them, combining them, and in my case applying a global coverage value to it (remapping with some exp magic).
That gives changing patterns; you get a global coverage value and, as a result, local coverage values as well.
 

float t = 255.0 - min(255.0, inCoverage * 255.0);   // inCoverage is a global coverage value
float c = max(0.0, cloud_sampled * 255.0 - t);      // some remapping I found somewhere years ago
float cld_cov = 1.0 - saturate(pow(0.99325, c));    // the exponent controls the sharpness of the result (higher = softer)

That's what I use to remap the combined Perlin noises into a cheap but animated weather texture.
 
[attached image: 4f630f497179338.jpg]
 

The top right image shows the weather texture. As you can see it's quite simple, but it works.
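For reference, the whole thing boils down to something like this (a sketch only; PerlinTexA/PerlinTexB, the scroll speeds and the channel weights are placeholders):

// Rough sketch of building an animated coverage value from two scrolling,
// tiling perlin textures plus the remap from above; names and values are made up.
float SampleWeatherCoverage(float2 uv, float time, float inCoverage)
{
    float4 n0 = tex2Dlod(PerlinTexA, float4(uv + float2(0.010, 0.003) * time, 0.0, 0.0));
    float4 n1 = tex2Dlod(PerlinTexB, float4(uv * 1.7 - float2(0.006, 0.011) * time, 0.0, 0.0));

    // Combine a few channels into one value in [0, 1].
    float cloud_sampled = saturate(dot(n0, float4(0.4, 0.3, 0.2, 0.1)) * 0.7
                                 + dot(n1, float4(0.4, 0.3, 0.2, 0.1)) * 0.3);

    // The remap from above.
    float t = 255.0 - min(255.0, inCoverage * 255.0);
    float c = max(0.0, cloud_sampled * 255.0 - t);
    return 1.0 - saturate(pow(0.99325, c));
}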

 

For a more realistic result it might also be possible to use a texture just as a "heat and vapor" source and let a cellular automaton evolve some nice shapes out of it.

But as a start, don't worry too much about the weather texture; simple Perlin noise textures work well enough in the beginning.



#5   Members   


Posted 29 July 2016 - 12:58 AM


 

Thanks for sharing.

If I understand correctly, you use an octave Perlin noise as the source coverage and remap it to get a new coverage.

Right?

 

I implemented this remap function in Mathematica:

[image: coverage1]

Applied to octave Perlin noise:

[image: coverage2]

The Mathematica file is in the attached files.

Attached Files



#6   Members   


Posted 29 July 2016 - 06:36 AM

Exactly, along with some code to make it look as needed.

Also notice I only use coverage and cloud type (r, g), since I don't need that fine a level of control but still want a "random" look all over the map.



#7   Members   


Posted 29 July 2016 - 07:23 AM

This is my result.

[image: cloud]

 

But it looks very fuzzy and lacks detail, even though the detail noise texture repeat count is very high.

 

Your cloud tops have very rich detail; how do you achieve that?

 

My cloud bottom and top heights are 1500 and 4000, the planet radius is 640000, and the horizon starts fading at a distance of 30000.

The base frequency is 1e-5, the base noise repeats 5 times, and the detail noise repeats 60 times.

The base noise position UV scale is 1e-5 * 5, so the base noise repeats 3-4 times over the entire sky dome.

Is something wrong with these values?

 

Thanks.


Edited by ChenA, 29 July 2016 - 08:15 AM.


#8   Members   


Posted 29 July 2016 - 08:06 AM

From my experiments, the way you combine the base noise octaves has the biggest impact on how detailed the clouds are. But then, I'm not using Perlin-Worley, just Worley noise with 3 octaves per channel at a size of 123^3.

 

So the way I combine them is totally different from what they do: some kind of FBM plus in-between remapping with coverage.

And of course the powder effect adds a lot.

 

For the cloud tops, I think the main difference is that I don't modify the density of the clouds with the height signal, I modify the coverage.

Then, sure, the smaller high-frequency noise adds the very fine detail, and it is applied like this:

base_cloud = base_cloud * high_freq_noise *(1.0-base_cloud);

You see, I simply assume that an edge is wherever the cloud has a density below 1.
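As a sketch, the height/coverage idea is roughly this (placeholder name and a made-up falloff; the reduced coverage then feeds the usual coverage remap instead of scaling the density):

// Sketch only: reduce the coverage towards the top of the layer instead of
// scaling the density; the 0.6 falloff is a made-up value.
float GetCoverageForHeight(float coverage, float height_fraction)
{
    return coverage * saturate(1.0 - 0.6 * height_fraction);
}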



#9   Members   


Posted 29 July 2016 - 08:31 AM


 

 

Thanks.

To get a more detailed base cloud noise, do I need more octaves, or something else?

This is my base noise texture:

[image: base noise]

I found the r channel does look fuzzy indeed; I will change the noise algorithm and test again, thanks.

Is the noise texture you use RGBA FP16 or RGBA8?

"some kind of fbm + in-between remapping with coverage"
Something like Remap(low_freq_FBM, coverage, 1, 0, 1)?

You modify the coverage according to the height, so the tops get less coverage?

With that formula, when base_cloud is 1.0 it means we are inside the cloud, but then

base_cloud = base_cloud * high_freq_noise * (1.0 - base_cloud) = 0.0?

Attached Files


Edited by ChenA, 31 July 2016 - 10:08 PM.


#10   GDNet+   


Posted 29 July 2016 - 10:55 AM

Really great results, guys! I wish I could achieve at least something relatively close to this for my game.

 

That GPU Pro 7 article mostly describes ways to create realistic clouds in terms of density functions and overall weather simulation, but it somehow assumes that the reader is already familiar with raymarching techniques. While I can understand the algorithm behind a single ray cast from a given point to sample various density functions and so on, I can't get my head around the more basic stuff, like: in what render pass does all of this happen? Is it during rendering of the skybox? Or do you render some special shape and then do all of this in its shader? What does it mean to do raycasts in this case? Is it per pixel? Per point in world space? They say in the article that they assume a spherical shell of some thickness around the camera, but how do you pick points on that sphere for the actual raycast? And how is the result of such a raycast used to actually shade the sky?

 

Are there papers on basics like this? If you could provide some pseudocode used to actually "draw" and execute the shader that's supposed to create the clouds, including what you are rendering and what the input to the raymarching algorithm is, it would greatly help me understand the concept.


Edited by noizex, 29 July 2016 - 10:59 AM.


#11   Members   


Posted 29 July 2016 - 11:32 AM



 

It's a post-process render pass, so you just render a full-screen quad.

 

First you build a ray from the screen pixel's coordinates, then calculate the intersection point of that camera ray with the bottom cloud sphere.

The intersection point is the start point; from there you step along the ray.
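For example, something like this to build the per-pixel ray and find the start point (a rough sketch; uv is the full-screen quad texcoord, and invViewProj, cameraPos, planetCenter and cloudBottomRadius are placeholder uniforms):

// Rough sketch of getting the per-pixel ray and the raymarch start point.
float3 GetViewRay(float2 uv)
{
    // Unproject a point on the far plane and build a direction from the camera to it.
    // (You may need to flip uv.y depending on the API.)
    float4 clip  = float4(uv * 2.0 - 1.0, 1.0, 1.0);
    float4 world = mul(invViewProj, clip);
    return normalize(world.xyz / world.w - cameraPos);
}

// Distance along dir to the cloud-bottom sphere, or -1.0 if it is missed.
float IntersectCloudSphere(float3 origin, float3 dir)
{
    float3 oc = origin - planetCenter;
    float  b  = dot(oc, dir);
    float  c  = dot(oc, oc) - cloudBottomRadius * cloudBottomRadius;
    float  h  = b * b - c;
    if (h < 0.0)
        return -1.0;
    float t = -b + sqrt(h);    // the camera sits inside the sphere, so take the far root
    return (t > 0.0) ? t : -1.0;
}

// In the pixel shader:
//     float3 dir      = GetViewRay(uv);
//     float  t        = IntersectCloudSphere(cameraPos, dir);
//     float3 startPos = cameraPos + dir * t;   // then march from here

The marching along the ray then looks like this: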

float Raymarcher::integrate(const V3f& pos, const V3f& dir, const float absorption) const
{
    // Determine the intersection of the ray with the buffer
    float t0, t1;
    if (!m_buffer->intersect(pos, dir, t0, t1))
        return 1.0f;

    // Calculate the number of integration steps
    const int numsteps = int(ceil((t1 - t0) / m_stepsize));

    // Calculate the step size
    const float ds = (t1 - t0) / numsteps;
    V3d stepdir(dir);
    stepdir *= ds;
    V3d raypos(pos);
    raypos += stepdir;
    const float rhomult = -absorption * ds;

    // Transmittance
    float T = 1.0f;
    for (int step = 0; step < numsteps; ++step) {
        float rho = m_buffer->trilinearInterpolation(raypos);
        T *= std::exp(rhomult * rho);
        if (T < 1e-8)
            break;
        raypos += stepdir;
    }
    return T;
}

This code is from the SIGGRAPH 2011 course "Production Volume Rendering"; you should read it.



#12   GDNet+   


Posted 29 July 2016 - 12:49 PM

Hmm, where does the buffer come from? This looks like C++ code; do you run it on the CPU? And do you run it for every pixel of the rendered image, or only for those that are actually "sky"?

Thanks for the paper, I will read it. I need to find some good explanations of how raymarching works, as I see a lot of weird equations about the actual volume sampling but nothing about how it is actually rendered :)



#13   Members   


Posted 01 August 2016 - 08:28 AM


 

Just render it on the sky pixels.
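For example, in the cloud pass you can read the depth buffer and skip every pixel that has opaque geometry in front of it (a rough sketch; DepthTexture and uv are placeholders, and the comparison flips if you use reversed Z):

// Inside the cloud pixel shader: only march clouds where nothing opaque was rendered.
float depth = tex2Dlod(DepthTexture, float4(uv, 0.0, 0.0)).r;
if (depth < 1.0)                         // something nearer than the far plane, so not sky
    return float4(0.0, 0.0, 0.0, 0.0);   // leave this pixel alone
// otherwise build the ray and raymarch as above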



#14   Members   


Posted 01 August 2016 - 11:32 AM

You may find this useful!

http://bitsquid.blogspot.co.uk/2016/07/volumetric-clouds.html



#15   Members   


Posted 02 August 2016 - 01:58 AM

 

Thanks.



#16   Members   


Posted 02 August 2016 - 04:14 AM

RGBA8 should be enough precision; after all, it's just a simple density value. For the base detail you can play around with the scaling. My clouds start at around 1.8 km in height and have a scaling of 68 m per texel. The detail noise gets a 16 times higher scaling.
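Just to put those numbers into a lookup (placeholder names, assuming the 123^3 texture from before):

// 68 m per texel and a 123^3 base texture -> one tile covers roughly 8.4 km.
float3 baseUVW   = worldPos / (68.0 * 123.0);
float3 detailUVW = baseUVW * 16.0;                        // detail noise at ~16x the frequency
float4 noise     = tex3Dlod(WorleyCloud, float4(baseUVW, 0.0));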

For remapping I use a weird calculation:

float4 noise = tex3Dlod(WorleyCloud, float4(pos, 0.0));                  // 4 worley octaves, one per channel
float mask  = (noise.r*0.6 + noise.g*0.8 + noise.b + noise.a*0.6) / 3.0; // weighted average of the octaves
float denom = 1.0 / (coverage*0.25 + 0.0001);
float lcov  = 1.0 - saturate((mask - coverage) * denom);                 // local coverage cutoff
float4 n    = saturate((noise - lcov) / (1.0001 - lcov));                // remap each octave against it

float cloud = saturate(max(n.x*1.1, max(n.y*1.14, max(n.z*1.13, n.w*1.12))));

That's what I use to sample the volume texture.

 

Btw @ChenA, where did you find a tiling 3D Perlin noise?


Edited by Ryokeen, 02 August 2016 - 04:17 AM.


#17   Members   


Posted 02 August 2016 - 04:33 AM


 

I generated it myself. If you need it, I can send you the source code.
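The only real trick to make it tile is wrapping the integer lattice coordinates with the tile period before hashing; roughly like this (a sketch, HashToGradient is a placeholder for whatever gradient hash you use):

// Core idea of a tiling 3D perlin noise: wrap the lattice cell with the period
// before hashing, so the gradients repeat every 'period' cells.
int3 WrapCell(int3 cell, int period)
{
    return ((cell % period) + period) % period;
}

float3 CornerGradient(int3 cell, int period)
{
    return HashToGradient(WrapCell(cell, period));   // placeholder hash -> unit gradient
}

// The rest is standard 3D perlin noise: dot the eight corner gradients with the
// local offsets and interpolate the results with a quintic fade curve.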



#18   Members   


Posted 02 August 2016 - 04:43 AM

Thank you, that would be nice.



#19   Members   


Posted 02 August 2016 - 04:52 AM

I modified it from a GitHub project, znoise.

I use VS2013; the project files are in build/vs2013.

I save the result to a DDS file; you can use the NVIDIA DDS tools to view it.

Attached Files


Edited by ChenA, 02 August 2016 - 05:04 AM.


#20   Members   


Posted 02 August 2016 - 04:59 AM

I'm quite confused about how you come up with these formulas.

Is it just a lot of trial and error?

