Clouds Perlin & Scattering


Ok, this is YATC: Yet Another Thread on the Clouds! I know there are several threads on this subject, but I can't find really precise explanations (still waiting for Yann L's article!). If what I'm looking for is in a book, please let me know.
Quote:
fboivin: JF Dube has a good article in Game Programming Gems 5 about cloud rendering using the latest hardware (ps3.0)
How to implement realtime (60 Hz) changing clouds? Using:
- a sky dome
- a procedural texture (Perlin noise)
- some nice lighting/scattering/shading
- shaders and modern hardware

See Yann L's example. Other implementations (i.e. impostors...) are off topic.

References are:
- Mr. Ken Perlin himself! Improved Noise reference implementation in Java (the C++ translation is obvious).
- Hugo Elias' Perlin noise implementation and Clouds, and the gamedev article from Francis "AK 47" Huang.
- THE thread on clouds (Yann L).
- Other threads (okonomiyaki: "Cloud 2.5d raytracing ... need more eyes", HellRaiZer: "Cloud Shading", ...).

So, my questions are:
- could someone explain the whole process in detail (especially scattering)?
- and could you also explain what is done on the CPU and what on the GPU?

What I understand so far:

0. The process:
0.1 Compute Perlin noise in N octaves.
0.2 Compute the scattering of the sun using 2.5D raytracing (Bresenham line drawing, for example) on a small-resolution noise map with fewer octaves.
0.3 In the fragment shader, compute the final sum of octaves and the exponentiation (using a lookup texture?), and compute the blending with the sky colour and the sun glow.

1. Perlin noise:
1.0 Noise generation. The aim is to generate N octaves of noise. They are basically 2D arrays. The basic implementation uses Ken Perlin's noise generator, which does the noise generation + interpolation + smoothing. Several other implementations use tricks to minimize the number of calculations.
1.1 Do I need to have seamless noise?
1.2 If yes, how to generate seamless noise efficiently?
1.3 Does anyone know if hardware vendors will implement the GLSL noise() functions?

Trick 1 (cf. Yann L's thread): Precompute sums of octaves in an RGB texture:
R0 = (1*O1 + 1/2*O2 + 1/4*O3)
G0 = (1*O4 + 1/2*O5 + 1/4*O6)
B0 = (1*O7 + 1/2*O8)
Then the final sum is: final = R0 + 1/8*G0 + 1/64*B0

Trick 2: Remember that the low-frequency octaves give the general shape and the high-frequency octaves give the small variations. One simple trick is to compute only the low-frequency octaves when needed, always reusing the same high-frequency octaves.

Trick 3 (cf. coelurus): The lookup texture to do the exponentiation could be:
lookup_texture[u, v] = 255 * (1 - a^(u + v))

2. Sky dome
2.1 Polygons. Can I use a dome generated by a program? For example: http://www.spheregames.com/files/SkyDomesPDF.zip
2.2 Tessellation. I understand that the dome needs to be more subdivided at the horizon than at the zenith; how to do that programmatically?

3. Scattering
3.1 I understand that I have to consider the noise value as the thickness of the cloud (cf. Yann L's thread).
3.2 How to get a voxel from the noise map? Do I need a render-to-texture phase?
Trick 1: The sun is a point light. See Yann L's thread.
Trick 2: Mark Harris does the scattering using a two-phase rendering; see his paper, Algorithm 1, p. 118. Place the observer at the sun position, render the clouds and get the opacity (from the colour buffer or the z-buffer?), then render the scene using that opacity for scattering. Could we do that with our 2D texture?

4. Colour of the sky
The reference document seems to be Dobashi: http://nis-lab.is.s.u-tokyo.ac.jp/~nis/cdrom/BasisSky.ps. How to implement it?

J.

[Edited by - jmaupay on September 8, 2005 4:09:53 AM]
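A minimal C++ sketch of Trick 3's lookup table, reading the two-argument `exp(a, u + v)` as a^(u+v); the function name, table size and sharpness base are illustrative assumptions, not values from the thread:

```cpp
#include <cmath>
#include <vector>

// Build the Trick 3 lookup table: lookup[u][v] = 255 * (1 - a^(u + v)),
// with sharpness base a in (0, 1) and u, v the byte indices.
// A larger (u + v), i.e. a denser cloud, maps to a higher opacity.
std::vector<unsigned char> buildExpLookup(int size, float a) {
    std::vector<unsigned char> table(size * size);
    for (int v = 0; v < size; ++v)
        for (int u = 0; u < size; ++u) {
            float value = 255.0f * (1.0f - std::pow(a, float(u + v)));
            table[u + v * size] = (unsigned char)(value + 0.5f);  // round
        }
    return table;
}
```

The fragment shader then only needs one dependent texture read instead of a per-pixel pow.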

May I ask what is seamless noise? Perlin's noise interpolation seems to me reasonably seamless; you will notice repeating patterns if you view a texture over a large enough domain, but nothing I'd consider seams?

Tim

It would be nice if you managed this thread and put answers to your questions in your first post. A sort of reference for cloud... stuff.

0.1-0.3) Using the original Perlin noise approach, you can add up octaves immediately per texel: loop through the octaves you want, scale and offset the coords for the noise function, scale the result and sum it up. Scaling buffers of noise isn't very good; you can get much nicer and smoother results using real Perlin noise. You shouldn't really use 2048x2048 textures either: you can tile high-freq noise textures and use 64x64 textures with 3 octaves per texture.
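The per-texel octave loop described above can be sketched like this in C++; the noise callback is a placeholder for a real Perlin noise function, not an implementation of one:

```cpp
#include <functional>

// Sum N octaves directly per texel: each octave doubles the frequency
// and halves the amplitude. 'noise' stands in for a real 2D Perlin
// noise function returning values in [-1, 1].
float octaveSum(const std::function<float(float, float)>& noise,
                float x, float y, int octaves) {
    float total = 0.0f, amplitude = 1.0f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        total += noise(x * frequency, y * frequency) * amplitude;
        frequency *= 2.0f;  // next octave: finer detail...
        amplitude *= 0.5f;  // ...with a smaller contribution
    }
    return total;
}
```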

1.1) If you want to get rid of 2048x2048 textures, yes.
1.2) Try the original approach; it's quick enough, and with a proper RNG (search for Mersenne Twister on Google) it looks great.

2.1) A skydome is just a collection of tris spanning a capped sphere, get it in any way you want.
2.2) Doesn't matter if it's efficient or not, the skydome should not be regenerated or reloaded every frame anyway.

3.2) As you mentioned in 3.1, the cloud array is a collection of noise values that specify cloud thickness. A cloud voxel exists at position (x, y, z) in the cloud array if "0 <= y < cloud[x + z * width]" (y being up). You can trick this out a bit and try:
c = cloud[x + z * width]
-c/4 <= y < c
to get some "substance" under the clouds during scattering, it really makes a difference.
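That pillar test, as a small self-contained C++ sketch (the names are illustrative):

```cpp
#include <vector>

// Pillar test from above: a voxel at (x, y, z) is inside the cloud if
// -c/4 <= y < c, where c is the thickness stored in the 2D noise map
// and y is up. The -c/4 lower bound gives some substance under the
// cloud base during scattering.
bool insideCloud(const std::vector<float>& cloud, int width,
                 int x, int z, float y) {
    float c = cloud[x + z * width];
    return y >= -c / 4.0f && y < c;
}
```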

That's what I got to say atm [smile]

Quote:
Original post by timw
may I ask what is seamless noise? perlins noise interpolation seems to me reasonably seamless, you will notice repeating patterns if you vew a texture over a large enough domain, but nothing I'd consider seams?
Tim


Seamless for me means it can repeat without showing the tile, and when I generate noise it is not "repeatable" at all. The first octave gives the general pattern; something has to be done to the octaves to make them repeatable. So my question is: how to get repeatable noise? For example, Matt Zucker's
The Perlin noise math FAQ
explains how to do that, but I feel that it is not an efficient way of doing it?

If you take the 3DLabs 3D noise texture generation in the GLSL demos from the "Orange book", it's tileable, but they are doing it another way (I don't understand). So if you can explain the "how to"...


Quote:
Original post by coelurus
It would be nice if you managed this thread and put answers to your questions in your first post. A sort of reference for cloud... stuff.
I'll try to do that
Quote:

0.1-0.3) Using the original Perlin noise-approach, you can add up octaves immediately per texel. Loop through the octaves you want, scale and offset coords for the noise function, scale result and sum up. Scaling buffers with noise isn't very good and you can get much nicer and smoother results using real Perlin noise. You shouldn't really use 2048x2048 textures either, you can tile high freq noise textures and use 64x64 textures for 3 octaves per texture.


OK. No problem with that. By the way, the "original Perlin noise" is from Mr. Ken Perlin himself?! Improved Noise reference implementation in Java
(the C++ translation is obvious).

Why don't we use that one? Is there any copyright problem? Or non-random results due to the precomputed permutation table? Any limitation in dimension (256)? Or poor interpolation?

Then generate octaves and sum them:
(pseudocode from Hugo Elias:)

function PerlinNoise_2D(float x, float y)
    total = 0
    p = persistence
    n = Number_Of_Octaves - 1
    loop i from 0 to n
        frequency = 2^i
        amplitude = p^i
        total = total + InterpolatedNoise_i(x * frequency, y * frequency) * amplitude
    end of i loop
    return total
end function



But I thought that Yann L's approach is not this one. He is doing the sum on the GPU. I feel that he could provide smaller textures and then let the GPU do the magnification interpolation?

Quote:

2.1) A skydome is just a collection of tris spanning a capped sphere, get it in any way you want.
2.2) Doesn't matter if it's efficient or not, the skydome should not be regenerated or reloaded every frame anyway.

Ok, my point 2) is stupid. I'll remove it, perhaps just providing some links on how to generate a sky dome, for example here:
Sky dome tutorial from Spheregames

Any other good url for that?

Quote:

3.2) As you mentioned in 3.1, the cloud array is a collection of noise values that specify cloud thickness. A cloud voxel exists at position (x, y, z) in the cloud array if "0 <= y < cloud[x + z * width]" (y being up). You can trick this out a bit and try:
c = cloud[x + z * width]
-c/4 <= y < c
to get some "substance" under the clouds during scattering, it really makes a difference.


Sorry, I don't understand. What is cloud[]? Is it my noise texture? For the moment I have a texture with several octaves (the sum and exponentiation are done in the fragment shader). That texture is mapped onto the dome (the dome is quite large; that's why I wanted the texture to be 2048x2048 and not repeat it) using spherical texture coordinates (glBindTexture/glTexCoord). Where do I have a voxel? Probably in the fragment program, but I feel that you are not talking about that?

First of all, here's a shot of a demo of mine that I took 1.5 years ago or so, just to make sure you know what I know [smile]
Dark clouds [Apoch: link redacted 2008-11-29 - no longer links to pleasant things]

I'll rearrange my answers here a bit:

The Perlin-noise link you found is _the_ one, afaik you should be able to simply copy and paste that code (and make it 2D for your clouds). It's way better than what Hugo Elias did. Make sure you get a proper RNG (Random Number Generator), rand() is usually not good enough.

Here's how I stored and added noise octaves (iirc):
I had 4 octaves for R and 4 for G, making 8 octaves: two 128x128 textures with 4 octaves each, using octave coefficients around 1/2, 1/4, 1/8 and 1/16. The textures are identical up to this point; the only thing that differs is the noise values (the texture sizes and octave setups are the same). Then I rendered the first texture onto the skydome into the R channel without any scaling, and the next into G scaled by 2^-4, and repeated. I did another pass with a 2D exponentiation-and-sum texture ( lookup_texture[u, v] = 255 * (1 - a^(u + v)) or whatnot ). I tried the 3D texture approach, but it slowed down considerably on the Ti4400 I had at the time.
There's no need to sum everything on the GPU; otherwise you'd have to send a whole bunch of ultra-large textures to the GPU, which is not practical. Repeating the high-frequency cloud noise isn't bad either; it's practically impossible to see any repeating patterns. Low-freq octaves should not be scaled at all, but that doesn't mean the texture has to be large.
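The channel packing described above might look like this in C++; a sketch under the stated coefficients, with the function name and the [0, 1] octave range as assumptions:

```cpp
// Pack four octaves (each assumed to be in [0, 1]) into one 8-bit
// channel with coefficients 1/2, 1/4, 1/8, 1/16; a second channel
// packed the same way is later combined on the GPU scaled by 2^-4.
unsigned char packFourOctaves(float o1, float o2, float o3, float o4) {
    float sum = o1 / 2.0f + o2 / 4.0f + o3 / 8.0f + o4 / 16.0f;
    // sum is at most 15/16, so it fits in [0, 1) before quantization.
    return (unsigned char)(sum * 255.0f + 0.5f);
}
```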

I never looked for refs on generating skydomes; it's simply the surface that's left when you cut a spherical shell with a cone. Play with restricted spherical coordinates and you should get a dome rather quickly. You don't have to worry about tessellation: flat domes with a uniform distribution of vertices will compress geometry in image space at the dome edges anyway, since those vertices are far away from the camera viewpoint.

What kind of raycasting are you gonna do: a preprocess on the CPU, or the newest fragment program model in realtime? I have no experience at all with "ps2.0" and up, so I can't comment on realtime solutions. The CPU preprocess is just a straightforward scan through the lowest-freq octaves of your clouds. Loop through each entry in your generated cloud texture (you do generate it in some memory, I hope?) and cast rays from cloud particles to the light source and camera views. There are no distinct voxels in the clouds, only pillars of voxels, so you have to check for light rays that cut cloud pillars.
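A minimal CPU-side sketch of that scan, marching from each texel toward the sun's projected position over the 2D thickness map; the Beer-Lambert attenuation and the `absorb` parameter are illustrative assumptions, not details from the thread:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Step from texel (x, z) toward the sun's projected map position,
// attenuating light by the cloud thickness crossed at each sample
// (Beer-Lambert style). Returns the fraction of sunlight reaching the
// texel; 'absorb' is the extinction per unit of thickness.
float lightReaching(const std::vector<float>& cloud, int width, int height,
                    int x, int z, float sunX, float sunZ, float absorb) {
    float dx = sunX - x, dz = sunZ - z;
    int steps = (int)std::ceil(std::max(std::fabs(dx), std::fabs(dz)));
    float light = 1.0f;
    for (int i = 1; i <= steps; ++i) {
        int sx = (int)std::lround(x + dx * i / steps);
        int sz = (int)std::lround(z + dz * i / steps);
        if (sx < 0 || sx >= width || sz < 0 || sz >= height) break;
        light *= std::exp(-absorb * cloud[sx + sz * width]);
    }
    return light;
}
```

A Bresenham line walk would visit the same texels with integer arithmetic only; the uniform stepping above is just simpler to read.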

It's pretty tricky to explain the entire process in a post...

[Edited by - ApochPiQ on November 28, 2008 11:21:20 PM]

Quote:
Original post by coelurus
Dark clouds [Apoch: link redacted]

Very nice shot !

Quote:

I'll rearrange my answers here a bit:

The Perlin-noise link you found is _the_ one, afaik you should be able to simply copy and paste that code (and make it 2D for your clouds).


Well, I use Z as time and I would like the clouds to change over time, so just taking x,y values at a fixed z (time) is straightforward. New textures will be computed over several frames, then swapped in when completed (= time slicing). z will change only when a new cloud texture is completed, and it will change very slowly. So it's necessary to keep the 3D noise function.

Quote:

Make sure you get a proper RNG (Random Number Generator), rand() is usually not good enough.


Where do I need to use an RNG? Ken Perlin uses a permutation table; I don't see any rand. If I generate white noise and then do the smoothing and interpolation myself, then an RNG is needed to generate the white noise. But with Ken Perlin's function, I don't see where rand() would be used?

Quote:

What kind of raycasting are you gonna do, preprocess on the CPU or by using the newest fragment program model in realtime? I have no experience at all from "ps2.0" and up so I can't comment at all on realtime solutions. The CPU preprocess is just a straightforward scan through the lowest freq octaves of your clouds. Loop through each entry in your generated cloud texture (you do generate it in some memory I hope?) and cast rays from cloud particles to light source and camera views. There are no distinct voxels in the clouds, only pillars of voxels so you have to check for lightrays that cut cloud pillars.

It's pretty tricky to explain the entire process in a post...


I don't know if I'll use preprocessed or GPU raycasting. For the moment I have to understand how to do it on the CPU (preprocess). Afterwards I'll see if shaders could help in some way. (Yes, I have a texture in memory.)

What you mean is (my texture is 128x128):

for (int i = 0; i < 128; i++)
    for (int j = 0; j < 128; j++)
        RayCast(LowFreqNoise[i][j], SunPosition, CameraPosition);

I have a problem with the transformation. The texture will be mapped onto a dome, so how do I transform the i,j coordinates into x,y,z coordinates on the dome? I mean the sun and camera are in world coordinates; how do I transform i,j into that coordinate system? (I assume that the texture is not repeated and is entirely mapped onto the dome, i.e. the u,v coordinates vary from 0,0 to 1,1.)

[Edited by - ApochPiQ on November 28, 2008 11:47:02 PM]

I wonder where I got the idea of using a custom RNG from; as you said, it's not needed unless you plan on generating your own permutation tables.

The dome should be very flat and always right above the player. You can approximate the skydome using a virtual plane and maybe offset the Y coord slightly by the distance from the zenith to adjust for the approximation. It depends a little on how you generate or get your model, try using only a virtual plane first and extend if the shading looks funny.
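The virtual-plane approximation might be sketched like this; `planeSize`, `planeHeight` and `droop` are made-up illustrative parameters, not values from the thread:

```cpp
// Map a cloud-texture texel (i, j) to an approximate world position,
// treating the skydome as a virtual plane at planeHeight above the
// camera and lowering it slightly away from the zenith to fake the
// dome's curvature.
struct Vec3 { float x, y, z; };

Vec3 texelToWorld(int i, int j, int texSize, const Vec3& camera,
                  float planeSize, float planeHeight, float droop) {
    // Texture coords recentered to [-0.5, 0.5], scaled to the plane.
    float u = float(i) / (texSize - 1) - 0.5f;
    float v = float(j) / (texSize - 1) - 0.5f;
    float dist2 = u * u + v * v;  // squared distance from the zenith
    return { camera.x + u * planeSize,
             camera.y + planeHeight - droop * dist2,
             camera.z + v * planeSize };
}
```

With world positions for sun, camera and each texel, the RayCast loop from the previous post has everything it needs.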

You should also think about interpolating cloud textures on the GPU, even small changes can look jerky.

Quote:
Well, I use Z as the time and I would like clouds to change over the time. So just taking x,y values at a fixed z (time) is straightforward. New textures will be computed over several frames then replaced when completed (= time slicing). z will change only when a new cloud texture is completed and will change very slowly. So it's necessary to keep the 3D noise function.


I implemented a time-slicing method as well, but didn't use the 3rd dimension explicitly like you are. I just kept a second cloud array and populated it with values over several frames. Memory-wise this costs about the same, but it would allow you to remove the 3rd dimension from your cloud array. However, the problem I found with the time-slicing method is that there weren't enough updates being performed, so each time the new cloud texture was combined with the old one there was a visible jerk. I'm in the process of porting what I've done to a shader though, so that should help. Recalculating everything on the CPU just wasn't fast enough.

Guest Anonymous Poster
Well, I don't have a 3D noise array; I just have a 2D noise array populated using the 3D function. To make this clearer (my English is so bad):

NoiseArray[i][j] = PerlinNoise(x, y, z)

where:
- x,y are calculated from i,j (offset, scale, frequency), and
- z depends on time.
Then the change from one noise array to the next should be smooth (they are coherent noise in all 3 dimensions).

As coelurus said, you have to interpolate your 2 sets of cloud values. This could be done on the GPU using 2 texture units, for example, and it's probably better to do it on the original noise, not on the exponentiated values.
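A CPU sketch of that blend, assuming both arrays hold raw (pre-exponentiation) noise values; on the GPU the same lerp would use two texture units:

```cpp
#include <cstddef>
#include <vector>

// Linearly blend the old and new cloud noise arrays by t in [0, 1],
// working on the raw noise values before the exponentiation pass.
std::vector<float> blendClouds(const std::vector<float>& oldNoise,
                               const std::vector<float>& newNoise,
                               float t) {
    std::vector<float> out(oldNoise.size());
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = oldNoise[i] * (1.0f - t) + newNoise[i] * t;
    return out;
}
```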

I'm currently trying to implement the scattering, but I get strange pixelized lines crossing the clouds. Does anyone know if this is a normal artifact of Bresenham raycasting? (The white square is supposed to be the sun!)
