
#### Archived

This topic is now archived and is closed to further replies.

# Sky-rendering techniques

## 138 posts in this topic

quote:
Original post by DeltaVee
This thread needs to go into the Resources section of GameDev. I have learnt more in this thread about sky rendering than I have in just about all the other threads combined.

Yann, you should really, really write an article (or articles) about this. It'd be perfect for the Hardcore column. I know you're probably a bit time-limited, but considering how much you've written here, and the scarcity of articles covering doing these things in real time, you should definitely think about it.

##### Share on other sites
Could someone explain the term "exponentiation" to me, and why it's used (I know exponentials - just not how it's used in this context...).

Cheers

Death of one is a tragedy, death of a million is just a statistic.

##### Share on other sites
quote:

Could someone explain the term "exponentiation" to me, and why it's used ( I know exponentials - just not how it's used in this context... ).

Well, 'exponentiation' simply means raising a quantity to a power, but I guess you already know that. It is a vital part of realistic cloud generation. The trick is to raise a sharpness factor to the power of your original perlin noise. If the numeric ranges are well chosen, then this will apply a range mapping to the cloud noise. This mapping makes the clouds look more detailed and volumetric.

Here are some pics to show the effect.

This is the noise you normally get from a perlin noise generator:

Fig.1

Its distribution is very uniform, not really clouds yet. Or perhaps a very overcast day...

We need some clear parts in our sky, so we can just subtract an offset from the perlin noise, and clamp the result to a positive range:

Fig.2

The equation used is simply result = clamp_to_0( perlin_noise - cover_offset ). It's already better: we have some cloudy and some clear parts. But it's still very fuzzy, more a kind of 'spiderweb' than actual clouds. Many sky engines stop at this step, and use this noise directly. This leads to very fuzzy and undefined clouds.

The problem is that in reality, clouds are 3D volumes. They have different thickness, and from a certain thickness on, a cloud becomes totally solid to the eye, no more transparency at all. This is not the case with our noise: it is always transparent to a certain degree.

We can change that by applying a range mapping. We are looking for a range mapping function that fully preserves our precision, since we only have 8 bit, and don't want to waste dynamic range.

The perfect candidate function is an exponential. Consider f(x) = n^x. This will raise the constant factor n to the power x. This exponential function has an interesting property: if we can guarantee that n is always between 0 and 1, then the result itself will also be between 0 and 1, for *all* exponents between 0 and infinity. Our exponent will be the clamped noise from Fig.2. It is between 0 and 255. Let's assume a sharpness factor n of 0.96.

The two extremes will be:

f(0) = 0.96^0 = 1
f(255) = 0.96^255 ≈ 0.00003 (almost 0)

Great, we are between 0 and 1, as predicted. Now, let's map it back to the 0 to 255 range (and re-invert it, since the exponentiation inverted it in the first place):

final_clouds = 255 - ( 255 * 0.96^x )

We haven't lost a single bit of precision, and we get a much nicer result, the clouds look more 3D now:
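The whole mapping, from raw Perlin noise through cover offset and exponentiation, can be sketched in a few lines of Python (the function name and the cover offset of 80 are illustrative example values of mine, not values from Yann's engine):

```python
def cloud_density(perlin_noise, cover_offset=80, sharpness=0.96):
    """Map an 8-bit Perlin noise value (0..255) to an 8-bit cloud density."""
    # Step 1 (Fig.2): subtract the cover offset and clamp to a positive range.
    c = max(perlin_noise - cover_offset, 0)
    # Step 2: exponentiate. Since 0 < sharpness < 1, sharpness**c stays in
    # (0, 1] for any non-negative exponent, so no dynamic range is wasted.
    # Re-invert and map back to 0..255.
    return 255 - 255 * sharpness ** c
```

Raising cover_offset gives a clearer sky; lowering sharpness makes the cloud edges harder and more solid.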

The hard part with exponentiation (as you noticed in the previous posts) is to have the 3D hardware do it on a per-pixel level.

quote:

Yann, you should really, really write an article (or articles) about this. It'd be perfect for the Hardcore column. I know you're probably a bit time-limited, but considering how much you've written here, and the scarcity of articles covering doing these things in real time, you should definitely think about it.

Heh, yeah, I already wrote half of an article in this thread...

But you are right. It's very hard to find good public resources on those things. I already considered the idea of writing an article about it, perhaps a small series about photorealistic realtime rendering of various natural effects: terrain, sky, water, etc.

Besides the sky, I would love to do one about water, there is so much you can get out of a modern 3D card, people wouldn't believe it. Since most of the effects I wrote are used in our game, I'll have to check some IP/NDA issues with our company first. But besides that: yes I would be interested. I'll get back to you by mail for some more details and timing.

/ Yann

[edited by - Yann L on June 7, 2002 8:10:15 PM]

##### Share on other sites
Yann, wait 'til I'm done with clouds! I don't particularly like the terrain topic, especially considering that a lot has been said about it in "Texturing and Modelling: The Procedural Approach", but we'll definitely have another thread discussing water.

##### Share on other sites
Thanks Yann - you really cleared that up for me

As for water - look at the deep ocean rendering article on gamasutra.

Death of one is a tragedy, death of a million is just a statistic.

##### Share on other sites
quote:

Thanks Yann - you really cleared that up for me

I'm warming myself up for the article Dave wants me to write

quote:

As for water - look at the deep ocean rendering article on gamasutra.

I know this one, I implemented it a few months ago, and found it to be not realistic enough... I 'slightly' modified it by adding tons of pixelshader effects (bumpmapped chromatic aberration, Blinn speculars, etc). Now it looks good

Enough stuff for 2 articles about water, that's for sure...

BTW: my HW cloud implementation idea (see above) seems to work (more or less)! I'm happy. It still behaves a bit strangely, but that's just a matter of smaller adjustments. I'll hopefully post some shots tomorrow.

/ Yann

[edited by - Yann L on May 24, 2002 7:44:15 PM]

##### Share on other sites
Yann, I'm curious what resources you use as a reference for the numerous extensions and features that can be used on the newer hardware. I was relatively happy with the clouds I made for my balloon demo in the physics competition (more because I made it without the aid of any tutorials and by trying various different methods, than because of any *real* aesthetic appeal), but I would like to have the additional option of optimization through accessing the graphics card after it's all coded for sole use of the CPU. Do you have a text source from nVidia, or a general reference? Are there any pertinent links (other than the online reference on nVidia's site) that I should know?

##### Share on other sites
quote:

I'm warming myself up for the article Dave wants me to write

I'm waiting for it

I'd also be interested in the resources you use. When you say "Blinn speculars" do you basically mean specular bump mapping based on the Blinn rather than the Phong model?

Death of one is a tragedy, death of a million is just a statistic.

[edited by - python_regious on May 25, 2002 5:45:43 AM]

##### Share on other sites
About the resources I use:

I primarily use OpenGL on nVidia, so I don't know about good Direct3D resources, but I guess MSDN would be a good point to start, if you want to do it in D3D.

For OpenGL, my main reference is the OpenGL extension registry. Most extensions are well documented, and if you don't like a huge textfile collection, there are pdf compilations available, containing all extensions in one document.

For ideas about how to use those extensions, the nVidia and ATi developer pages are very good. They are somewhat chaotic (esp. the nVidia one), but definitely contain very valuable information. The GDC presentations available on those sites are also very interesting.

I would highly recommend getting the reference papers to register combiners, vertex programs and texture shaders from nVidia's page.

They give a quick overview of the pipeline structure along with the constants and gl commands. I actually printed them out, so that I have a handy reference for the whole programmable pipeline.

But the hardest part isn't understanding the 3D pipeline, the structure is actually rather simple. Once you have an algorithm you want to implement on the GPU, then the real challenge is to figure out how to make it work on the limited resources of that programmable pipe. This is more or less a question of the experience you have.

Here are some suggestions on how to start using those features:

* Start small. Do one thing at a time: first start with register combiners, they are the easiest to understand. Write some effects using them, perhaps a bumpmapper. Get a feeling for how an algorithm can be broken up to fit the combiner limitations. Learn to modify your equations, so that e.g. textures or cubemaps are used as lookup tables for operations the pipeline can't do on its own.

* Then go on with vertex programs. It helps tremendously if you know ASM here, the basic idea behind both is very similar. Keep in mind that you can't loop or do conditional jumps, so you have to adapt your algorithms. But you can use table lookups, which can be very valuable. Play around with VPs, try to combine them with regcoms. A good start is a full diffuse/specular bumpmapper using VPs to calculate the tangents and binormals on the GPU.

* Last but not least, go on to texture shaders. They are only available on GF3+, so make sure to have such a card before attempting to use them. Texture shaders are a very powerful tool, but are very hard to use in the right way and *totally* unintuitive. Sometimes trial and error is the only way to get an effect running...

* Also have a look into render-to-texture features (pbuffers). Sometimes a complex problem can't be resolved in a single pass, even on powerful hardware. In that case, pbuffers can be a fast and convenient way to combine multiple passes.

* At this point, you'll see that the programmable vertex and fragment pipeline is a very powerful and versatile system, but with inherent and sharply defined limitations and rules. The remaining challenge is to find algorithms and techniques that produce the desired effect, but still fit the limitations imposed by the pipeline.

To start with vertex/pixelshaders, you can try Nutty.org or Delphi3d.net. Both have some nice examples, well suited for beginners.

Back to clouds (and some shameless self-advertising):
OK, the full hardware implementation of my sky engine runs rather well now. All the clouds on the pics below are fully calculated by the GPU, and can morph and billow in realtime! I still have a problem with lighting though, it's not as good as it was in the CPU-only version.

Some shots:

Another sunset. Note that the cloud lighting is screwed. Don't know why yet. The clouds use 8 octaves Perlin noise, and I added a slight additive glow around the sun, makes it look nicer.

Clouds at daytime. These ones use 12 octaves of Perlin noise, you can see that the fractal detail is a lot finer than in previous images. Lighting is still bugged.

I had some fun tweaking some parameters. Note the very low resolution of the cloud noise: I cut the thing down to 6 octaves, to see how it would perform. It isn't worth it (5% performance gain), 12 octaves is a lot nicer!

/ Yann

[edited by - Yann L on June 7, 2002 8:11:25 PM]

##### Share on other sites
Looks like photos...

That's awesome Yann! I didn't believe one could do such skies yet...

##### Share on other sites
I like your cloud box idea, Yann. But I am most interested in how you are shading and rendering the clouds. How closely are you following Harris and Dobashi? (For example, I don't think you are using texture splats.) How are you generating voxel data from the 2D texture and how much of it are you using? How then do you shade and render? This seems to be the key to getting good, adjustable looks.

I have thought, for example, of "layering" the texture some number of times above itself. I've also thought that maybe the layers are only important for shading, and not for rendering -- i.e., only use the bottom voxel for rendering after its incident light has been determined by multiple forward scattering.

I haven't implemented any of this, but I just hacked together a function to approximate multiple forward scattering. Basically, clouds get darker the thicker they are, but the amount of attenuation depends on the angle between the camera ray and the sun ray. The exact dependence is e^(-7.5 * (1 - .9 * dot^100) * value), where value \in [0,1] is the cloud density and dot is the dot product between the two rays.
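Restated as a quick Python sketch (the function name is mine; the constants are just the ones from the formula above):

```python
import math

def scatter_attenuation(value, dot):
    """Single-sample forward-scattering hack: value is cloud density in
    [0, 1], dot is the dot product between the normalized camera and sun
    rays. dot**100 confines the bright response to a narrow lobe around
    the sun direction."""
    return math.exp(-7.5 * (1 - 0.9 * dot ** 100) * value)
```

Looking straight at the sun (dot = 1) a fully dense cloud is only attenuated by exp(-0.75) ≈ 0.47, while the same cloud away from the sun drops to exp(-7.5) ≈ 0.0006.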

This makes the clouds look very flat, especially near the horizon, when you should be looking over the top of the clouds. To improve this, I added another hack where it decreases cloud density if the cloud density is decreasing when the camera ray moves up a little.

Here's what that looks like, with and without the sun so you can better see the shading nearby. The clouds in the distance still look flat, and even the clouds further up don't look quite right. Presumably with some parameter tweaking I could improve things, but I'd rather use a systematic, efficient method. (eight octaves of noise, halving the amplitude at each octave, base frequency 1, cloud plane height 1)

Here's the exact code (minus the last hack, it's too ugly!), if anyone needs some help getting started
```
vector lt = normalize((0, -.4, -1));

/* Project the view ray onto the cloud plane */
I = vtransform("shader", "world", I);
setycomp(I, ycomp(I) - horizonlevel);
PP = transform("world", P);
setycomp(PP, ycomp(PP) - horizonlevel);
x = xcomp(PP);
y = ycomp(PP);
z = zcomp(PP);
setycomp(PP, 1);
setzcomp(PP, z + zcomp(I) * (1 - y) / y);
setxcomp(PP, x + (zcomp(PP) - z) * xcomp(I) / zcomp(I));

/* Angle-dependent attenuation */
dot = pow(abs(lt . normalize(I)), 100);
attenuate = exp(-7.5 * (1 - 0.9 * dot));

/* Use fractional Brownian motion to compute a value for this point */
/* value = fBm (PP, omega, lambda, octaves); */
value = 0;
l = 1;
o = 1;
a = 0;
for (i = 0; i < octaves; i += 1) {
    a += o * snoise(PP * l + label);
    l *= 2;
    o *= omega;
}
value = clamp(a - threshold, 0, 1);
value = 1 - pow(sharpness, 255 * value);
dot = pow(attenuate, value);

/* Sky gradient and final mix */
skycolor = mix(midcolor, skycolor, smoothstep(.2, .6, v));
skycolor = mix(horizoncolor, skycolor, smoothstep(-.05, .2, v));
/* skycolor = mix(skycolor, white, pow(abs(lt . normalize(I)), 1024)); */
/* skycolor = mix(skycolor, white, .5 * pow(abs(lt . normalize(I)), 64)); */
Ct = value * dot * cloudcolor + (1 - value) * skycolor;
```

[edited by - greeneggs on May 26, 2002 5:25:57 PM]

##### Share on other sites
quote:

I like your cloud box idea, Yann. But I am most interested in how you are shading and rendering the clouds. How closely are you following Harris and Dobashi? (For example, I don't think you are using texture splats.) How are you generating voxel data from the 2D texture and how much of it are you using? How then do you shade and render? This seems to be the key to getting good, adjustable looks.

Right, shading is very important. Basically, I use the algorithm outlined in Harris2001, chapter 2. I modified the implementation so that it works with a 2.5D digital differential analyzer to compute the integral over a low resolution voxel field. That's more or less all I use from Harris/Dobashi, since the remaining parts of their papers concentrate on their respective rendering techniques, which are substantially different from mine (esp. Dobashi's metaball splatting approach).

The original clamped cloud noise (before exponentiation, the Fig.2 in one of my posts above) can be seen as a heightfield (in fact, it would just look like a terrain if you turned it upside down). Now, for each voxel in the cloud heightfield, I trace a ray from the voxel to the sun, through the voxel field, approximating the multiple scattering integral along the way (discretized over the resolution of the voxel grid). I do that on the CPU, and on a lowres version of the noise, usually only the first 3 or 4 octaves. Lighting doesn't need to be as precise as opacity. Also note that you need to do it before exponentiation, since the pow() will destroy any valid heightfield information!

Now, this is essentially a 2D raytracing operation (it's a 2D noise grid). But you have to take the 'fake' thickness of the clouds into account. That's why the DDA I used is a 2.5D version: it traces through a 2D field (a simple Bresenham tracer is fine), but takes the extent traveled through a 3D voxel into account. That way, the result is as if I calculated the multiple scattering integral over a real 3D cloud volume (when in fact it was a fake 2D heightfield).
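A minimal Python sketch of that idea - this is not Yann's code (his version uses a Bresenham-style DDA and the Harris2001 scattering terms); it just marches each texel toward the sun with naive uniform steps and accumulates optical depth, with an invented extinction constant k:

```python
import math

def shade_heightfield(height, sun_dir, k=1.0, step=1.0):
    """height: 2D list of cloud 'thickness' values in [0, 1] (the clamped,
    pre-exponentiation noise). sun_dir: normalized 2D direction toward the
    sun, projected onto the cloud plane. Returns per-texel transmitted
    sunlight in (0, 1]."""
    h, w = len(height), len(height[0])
    dx, dy = sun_dir
    shade = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            depth = 0.0
            px, py = x + 0.5, y + 0.5
            # March from this texel toward the sun, accumulating the
            # thickness traveled through each column (the 2.5D part).
            while 0 <= px < w and 0 <= py < h:
                depth += height[int(py)][int(px)] * step
                px += dx * step
                py += dy * step
            shade[y][x] = math.exp(-k * depth)
    return shade
```

On a lowres grid (only the first few noise octaves, as described above) this kind of trace is cheap enough to run on the CPU.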

quote:

I have thought, for example, of "layering" the texture some number of times above itself. I've also thought that maybe the layers are only important for shading, and not for rendering -- i.e., only use the bottom voxel for rendering after its incident light has been determined by multiple forward scattering.

Well, the rendering is simply done by drawing a single layer 2D plane textured with the full cloud noise.

[*Yann is speculating again, if you're not into theoretical pixelshader stuff, feel free to skip *]
There is an interesting way one could try: it could actually be possible to calculate the full multiple scattering equation entirely on the GPU as well. Using a similar approach to Lastra/Harris, chapter 2.2: by splitting up the procedural cloud layer into multiple layers, say 12, divided by a threshold value, and rendering them from the viewpoint of the sun. Blending would be done as described by Harris. That way, the 3D hardware would be able to approximate the scattering integral itself. The only problem would be the finite angle readback (gamma in their paper), this would probably require some kind of convolution kernel. Could be interesting to investigate further here.
[*end of speculation*]

quote:

I haven't implemented any of this, but I just hacked together a function to approximate multiple forward scattering. Basically, clouds get darker the thicker they are, but the amount of attenuation depends on the angle between the camera ray and the sun ray. The exact dependence is e^(-7.5 * (1 - .9 * dot^100) * value), where value \in [0,1] is the cloud density and dot is the dot product between the two rays.

That's exactly your problem. You are essentially approximating the integral over a different lightpath than the light incidence dotproduct. Your method would be correct if the sun was exactly above the camera, and all clouds as well. Then the integration over the cloud volume would exactly equal the cloud density at that point. But the further the sun actually is from this ideal position, the more error you will introduce. For faraway clouds, you are correctly computing the incident light (through the dotproduct), but your scattering approximation still behaves as if the sun was exactly above the cloud. This is why you get those thin looking clouds with black interiors.

quote:

Here's what that looks like, with and without the sun so you can better see the shading nearby. The clouds in the distance still look flat, and even the clouds further up don't look quite right. Presumably with some parameter tweaking I could improve things, but I'd rather use a systematic, efficient method. (eight octaves of noise, halving the amplitude at each octave, base frequency 1, cloud plane height 1)

Not bad, you're on the right track. If you fix your multiple scattering integration, you'll get approximately the same results as I have. The key is to take into account the full cloud density distribution along the ray between the cloud texel and the sun. This will also take care of self-shadowing. But to do that, you'll need to trace through the grid in one way or another (or use multiple layer projections). As mentioned, that's not too bad, since you can do it at far lower resolution than the actual cloud noise itself.

/ Yann

[edited by - Yann L on May 27, 2002 11:16:32 AM]

##### Share on other sites
I just came back from Lake George (I went there for the Memorial Day weekend). The scenery there is beautiful, however I couldn't fully enjoy it because I kept looking at the clouds, landscape, vegetation and water and kept trying to point out all the different effects I should implement. Ignorance is truly bliss. My non-programming friends enjoyed everything so much more than I did.

Anyway, here are the shots that I promised. I didn't implement the sun yet, but I am very tempted to post the shots nevertheless.

Kill, Geocities won't allow direct links to images. Use the proxy trick I also use for Brinkster - edit your post to see how it works! /Yann

[edited by - Yann L on May 28, 2002 9:40:23 AM]

##### Share on other sites
Yann: Could you please explain the skybox thing again? In a little more detail if you can. I really don't get how this should work. And about adding rays of light as quads with a 1D texture - any more info about this would be welcome.

You should never let your fears become the boundaries of your dreams.

##### Share on other sites
How should the sun be blended with the clouds? I can't come up with a way of doing it that's really pleasing to the eye.

##### Share on other sites
To render shafts of light, use a trapezoid with the top, narrow edge across the disk of the sun, and the wide bottom edge on the ground. Use a 1-dimensional texture going across the top edge; this texture stretched across the trapezoid will give stripes of brightness and darkness that start narrow (at the sun) and widen realistically as they approach the ground (the base of the trapezoid).
```
  --      <-- Sun at top of trapezoid
 /  \
/    \
------    <-- Ground at bottom of trapezoid
```

When you have clouds, you should base the 1-dimensional texture on the profile of clouds crossing the sun. So, if a cloud is covering the right half of the sun, then the right half of the texture should be black (or have alpha = 0), the left half should be bright, and so a shaft of light leaves the left half of the sun.

This is the basic simple idea. I imagine getting the shafts and the colors just right requires playing around a little. (For example, if there are no clouds crossing the sun, then you probably don't want any visible shafts, so set the 1-d texture to have alpha = 0 everywhere.)
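A sketch of how one might build that 1D texture (purely illustrative; the function name and the min_cover threshold are mine, not from the thread):

```python
def shaft_texture(sun_profile, min_cover=0.05):
    """sun_profile: cloud opacities in [0, 1] sampled across the sun disc.
    Returns shaft intensities: bright where the disc is clear, dark where
    a cloud covers it. If (almost) nothing crosses the sun, return all
    zeros so no shafts appear at all."""
    if max(sun_profile) < min_cover:
        return [0.0] * len(sun_profile)
    return [1.0 - a for a in sun_profile]
```

The resulting list would be uploaded as the 1D texture stretched across the top edge of the trapezoid.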

The cloudbox idea is an optimization for reducing the size of textures that need to be sent to the graphics card (to save bandwidth). It isn't really necessary until you get the other parts working. Also, if you are generating the noise on the graphics card itself (as Yann seems to be working towards), I'm not sure if it helps you at all. My understanding is that basically, instead of a very large rectangle across the sky with a high-resolution (e.g., 2048x2048) cloud texture on it, you draw a cube surrounding the camera (of course you can cull faces which aren't visible) which has the cloud texture projected onto it from the large rectangle. This is exactly the same as if you are trying to make a cube environment map. The code I posted above does this projection for the front face of the cube.

##### Share on other sites
quote:

greeneggs wrote:
This is the basic simple idea. I imagine getting the shafts and the colors just right requires playing around a little. (For example, if there are no clouds crossing the sun, then you probably don't want any visible shafts, so set the 1-d texture to have alpha = 0 everywhere.)

Yeah, that's kind of tricky. There is no real physical counterpart to the trapezoid idea (the physically real shafts of light would be impossible to render in realtime), so I just improvised. I have coupled the intensity of the lightrays (the amount blended) with the visibility of the sun (averaged over a rectangle), the colour, and the spherical angle over the horizon (at dusk or dawn you have stronger rays than at full daytime). For the 1D raytexture I basically did what greeneggs explained, just with an additional gaussian blur over it (to make the rays less sharp).

I also blend an additional circular glow over the sun, using the 'smooth-add' blending operator (see below). This is only done when the spherical angle to the horizon is lower than a given threshold; I then slowly fade that glow in (again modulated by the visibility of the sun, a bit like one of those widely used flares/halos around lightsources). This gives additional atmospheric 'thickness' to sunsets.

Both glow and lightrays are additively blended into the scene. But to avoid those ugly oversaturation effects you get with additive blending, I used the add-smooth blend mode:

Instead of c = col1 + col2 (standard additive blend), I use: c = col1 + col2 - (col1 * col2). This has a very similar effect to normal additive blending, but doesn't saturate as quickly. Gives a very nice effect. You can implement that using pixelshaders, or directly as a blending function: SRCCOLOR * ONE + DESTCOLOR * INVSRCCOLOR.
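In scalar form (per colour channel in [0, 1]) the operator behaves like this - a trivial sketch, just to show the saturation behaviour:

```python
def add_smooth(col1, col2):
    """Saturation-friendly additive blend: c = col1 + col2 - col1 * col2.
    Same result as blending with SRCCOLOR * ONE + DESTCOLOR * INVSRCCOLOR.
    Unlike plain addition, the result can never exceed 1.0."""
    return col1 + col2 - col1 * col2
```

For example, add_smooth(0.5, 0.5) gives 0.75 instead of the fully saturated 1.0 that plain addition would produce.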

quote:

kill wrote:
How should the sun be blended with the clouds? I can't come up with a way of doing it that's really pleasing to the eye.

The standard way of doing that is as follows:

You have your gradient skydome. Now blend your primary sunglow onto that skydome, using additive blending (the real additive one, we actually want the oversaturation here!). You should do that in a single pass: c = skygradient + suntexture, this saves fillrate. This should also give you that typical 'hyperbolic' colour distribution around the sun when it approaches the horizon. Make sure you choose nice colours here, this is crucial for a visually nice result.

Now it's time for the clouds: render them with whatever method you use (plane, cloudbox, etc) using standard alpha blending. Just as you already did on your screenshots, that looked fine to me.

The effect that makes the clouds 'glow' around the sun is actually the multiple scattering cloud-shading. The results you will get at this point highly depend on your specific shading system.

If you want, you can now add an additional sunglow over the clouds, using additive or smooth-add blending. This one should be subtle, not too strong, or you might lose image detail around the sun due to saturation. Modulate the intensity of that glow by the current visibility of the sun-disc. The colour should more or less match the sun colour, but you can vary it a bit to get interesting effects (e.g. atmospheric diffraction).

The final touch would then be the lightrays. Or a rainbow

quote:

kill wrote:
I just came back from Lake George (I went there for the Memorial Day weekend). The scenery there is beautiful, however I couldn't fully enjoy it because I kept looking at the clouds, landscape, vegetation and water and kept trying to point out all the different effects I should implement. Ignorance is truly bliss. My non-programming friends enjoyed everything so much more than I did.

Heh, I know that... So many beautiful natural effects, just waiting to be implemented in realtime. Oh well, if we really do live in the Matrix, then I would like to meet whoever wrote its 3D engine.

/ Yann

##### Share on other sites
quote:
Original post by kill
I just came back from Lake George (I went there for the Memorial Day weekend). The scenery there is beautiful, however I couldn't fully enjoy it because I kept looking at the clouds, landscape, vegetation and water and kept trying to point out all the different effects I should implement. Ignorance is truly bliss. My non-programming friends enjoyed everything so much more than I did.

ROFL, I feel that pain... I now look at the world as if it's one big rendered scene from a massive computer in Heaven, wondering how nature does such cool effects.

I guess this is more of a question about a skybox, but how do you make the curvature of the sky, and to what degree? Is it something you just play around with until it looks right?
I should probably go read some basic stuff on this before going into the advanced aspects (which are being discussed right now...). You guys make me feel so stupid -_-;

Anyway, I like the setting of your skies Kill. I think a sun would really make it look great.

##### Share on other sites
okonomiyaki: I had the curvature problem before. If you read the posts in this thread you'll see the part where I asked Yann for some template values to start off with because my sky looked flat. Depending on what you're using, things might be easier/harder. With a skyplane there are a lot of parameters to play around with. It's very hard to get it to look just right. A little off to one side, and your sky will look like a picture put in front of you. A little off to the other side and it will look like a picture right overhead. If you make it too curved it will look like there's absolutely no perspective. The only way to get it to look right is to play around with the values.

One suggestion I can give you: make as many values as possible configurable at runtime. Recompiling your code a thousand times is very time consuming and frustrating. It's much easier to press a couple of keys during runtime. It will make the difference between taking a week to get it to look right and an hour.

On a different note, I just found out that point sprites have a very small maximum size limit on many systems. On a GeForce2 MX it's 64 pixels in screenspace. I have to reimplement the functionality myself.

##### Share on other sites
Yann, thanks for being so helpful. I implemented the 2.5D ray-tracer for tracing through cloud "heightfields." This certainly should improve cloud shading when the sun is at a low elevation.

A very realistic rendering would need to trace through the heightfield for shading and then also for rendering. I think clouds are flatter on the bottom, so the heightfield would have some low fraction beneath the cloud plane and a high fraction above the cloud plane. This should give very good results, and the ray-tracing for rendering would only need to be updated occasionally as the clouds move (not every frame) and should not be too slow, as it could be done at low resolution (as long as the cloud shading ray-tracing was done at high resolution). There are various optimizations one could make, including tracing only for low elevation clouds, and also taking advantage of the fact that most of the cloud height is above the cloud plane to trace in only one direction.

More precisely, take the clouds stretched across the cloud plane. To render a ray that hits the cloud texture (cloud heightfield), trace the ray backwards until it leaves the cloud and use the computed shade color for the point at which the ray exits the cloud (shades stored in a 2D texture, indexed by horizontal position, just like the cloud heightfield). To render a ray that misses the cloud texture, trace the ray backwards to verify that it doesn't intersect any clouds before reaching the cloud plane (or skip this step since clouds lie mostly above the cloud plane so it is unlikely that you'll hit anything); then trace the ray forwards. If the ray hits the top of a cloud, then use the computed shade color for the point at which the ray enters the cloud (this point is on *top* of the cloud and so is shaded differently from the bottom of the cloud -- this is a second shading texture). The ray-tracing is done on the CPU, but at low resolution.

Without any ray-tracing for rendering, though, I don't see how one can expect good results. For example, consider noontime when (assume) the sun is directly overhead. The shading ray-tracer is tracing through parallel vertical lines, and so is trivial (it just takes the value from the heightfield). A typical cloud will be dark in the middle, with a lighter stripe around the outside, since this is what the height field looks like. However, even clouds near the horizon will look like this, foreshortened in perspective. And this is incorrect. Clouds near the horizon should have puffy white tops and narrow black bottoms -- because you are looking over the clouds. The tops of the clouds are shaded differently from the bottoms; light coming off the top has just been scattered once, while light off the bottom has gone all the way through the cloud. The basic problem is knowing whether you are looking at the top or the bottom of the cloud.

This is especially important for an elevated perspective, but seems to be important even for a ground-level viewer (as I see out my window). I believe I can see this effect in your screenshot "clouds at daytime."

This looks right! I think that you are not actually shading at high resolution, but are using the heightfield, exponentiated, to give the additional detail? This isn't really physically accurate but if it works who am I to argue? Then all ray-tracing can be done at low resolution.

I've thought that the perspective information can be precomputed for a few viewing directions (three or four) and then linearly combined, but it might look odd. Also when the sun is near the horizon there are other hacks to give the correct effect without ray-tracing to the viewer. But I'm curious how you've done it so efficiently.

On another note, I'm curious if anyone has gotten the Hoffman-Preetham air scattering results working. I implemented it and it seems to work well. It does not give good sky gradients, however. There are exposure problems. Plus, sky gradients would really benefit from full-spectrum calculations, I think. I find I need to hack the radius of the earth (or equivalently, the depth of the atmosphere) depending on the sun height, which isn't so cool.

[Edit: one last remark!]

In "A method for modeling clouds based on atmospheric fluid dynamics" (Miyazaki, Yoshida, Dobashi & Nishita), they show some results on making 2-dimensional Benard convection cells for cirrocumulus clouds. Has anyone gotten this running in real-time? (I imagine the three dimensional computations are too slow, although clouds change slowly so maybe it's doable.) These are very impressive results, much better than their cloud automata models.

[Edit: removed copyrighted image]

[edited by - greeneggs on May 31, 2002 8:06:09 PM]
0

##### Share on other sites
Here is some CML source code, and a screenshot of an unsuccessful attempt to grow a cumulus cloud. (Bottom left is the vapor source distribution, bottom right is a velocity-field cross-section, top right shows vapor and droplet levels, and top left is a poor rendering of the droplets (clouds). In the upper left-hand corner is the number of seconds simulated per computer second.)

/*
 * convection.c
 */

#include "convection.h"
#include <stdlib.h>
#include <math.h>   // for exp
#include <stdio.h>

typedef int bool;
#define true 1
#define false 0

static float *vx, *vy, *vz, *vxnew, *vynew, *vznew;
static float *E, *Enew;
static float *wv, *wl, *wvnew, *wlnew;
static float deltaT = 0;
static float vaporSource = 0;

void convectionVelocity(float **vxh, float **vyh, float **vzh) { *vxh = vx;  *vyh = vy;  *vzh = vz; }
float *convectionTemperature() { return E; }
float convectionDeltaT() { return deltaT; }
float *convectionVapor() { return wv; }
float *convectionDroplets() { return wl; }
float convectionVaporSource() { return vaporSource; }

#define array(a,x,y,z) a[((z)+1)*(wy+2)*(wx+2) + ((y)+1)*(wx+2) + (x)+1]

#define cleararray(a) \
  for (z = -1; z < wz+1; z++) \
    for (y = -1; y < wy+1; y++) \
      for (x = -1; x < wx+1; x++) \
        a(x,y,z) = 0;

#define swaparray(a,b) temp = a; a = b; b = temp;

#define addperiodic(a) \
  for (z = -1; z < wz+1; z++) { \
    for (x = -1; x < wx+1; x++) { \
      a(x,wy-1,z) += a(x,-1,z);  a(x,-1,z) = a(x,wy-1,z); \
      a(x,   0,z) += a(x,wy,z);  a(x,wy,z) = a(x,   0,z); \
    } \
    for (y = -1; y < wy+1; y++) { \
      a(wx-1,y,z) += a(-1,y,z);  a(-1,y,z) = a(wx-1,y,z); \
      a(   0,y,z) += a(wx,y,z);  a(wx,y,z) = a(   0,y,z); \
    } \
  }

#define copyperiodic(a) \
  for (z = -1; z < wz+1; z++) { \
    for (x = -1; x < wx+1; x++) { a(x,-1,z) = a(x,wy-1,z);  a(x,wy,z) = a(x,0,z); } \
    for (y = -1; y < wy+1; y++) { a(-1,y,z) = a(wx-1,y,z);  a(wx,y,z) = a(0,y,z); } \
  }

#define addreflections(a) \
  for (x = -1; x < wx+1; x++) \
    for (y = -1; y < wy+1; y++) { a(x,y,1) += a(x,y,-1);  a(x,y,wz-2) += a(x,y,wz); }

#define swapreflections(a) \
  for (x = -1; x < wx+1; x++) \
    for (y = -1; y < wy+1; y++) { a(x,y,1) -= a(x,y,-1);  a(x,y,wz-2) -= a(x,y,wz); }

#define clearreflections(a) \
  for (x = -1; x < wx+1; x++) \
    for (y = -1; y < wy+1; y++) { a(x,y,0) = 0;  a(x,y,wz-1) = 0; }

#define    vx(x,y,z) array(   vx,x,y,z)
#define    vy(x,y,z) array(   vy,x,y,z)
#define    vz(x,y,z) array(   vz,x,y,z)
#define vxnew(x,y,z) array(vxnew,x,y,z)
#define vynew(x,y,z) array(vynew,x,y,z)
#define vznew(x,y,z) array(vznew,x,y,z)

static bool velocityInitialized = false;

static void initializeVelocity(float vamp) {
  int x, y, z;
  if (velocityInitialized) return;
  vx = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  vy = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  vz = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  cleararray(vx)
  cleararray(vy)
  cleararray(vz)

  // small amplitude random initial velocities
  for (z = 0; z < wz; z++) {
    for (y = 0; y < wy; y++) {
      for (x = 0; x < wx; x++) {
        vx(x,y,z) = vamp * (rand() / (float) RAND_MAX - 0.5f);
        vy(x,y,z) = vamp * (rand() / (float) RAND_MAX - 0.5f);
        vz(x,y,z) = vamp * (rand() / (float) RAND_MAX - 0.5f);
      }
    }
  }
  copyperiodic(vx)
  copyperiodic(vy)
  copyperiodic(vz)
  clearreflections(vz)

  vxnew = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  vynew = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  vznew = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  cleararray(vxnew)
  cleararray(vynew)
  cleararray(vznew)
  clearreflections(vz)
  velocityInitialized = true;
}

#define E(x,y,z)    array(E,x,y,z)
#define Enew(x,y,z) array(Enew,x,y,z)

static bool temperatureInitialized = false;

static void initializeTemperature(float dT) {
  int x, y, z;
  if (temperatureInitialized) return;
  deltaT = dT;
  E = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  cleararray(E)

  // small amplitude random initial temperatures
  for (z = 1; z < wz-1; z++) {
    for (y = 0; y < wy; y++) {
      for (x = 0; x < wx; x++) {
        E(x,y,z) = deltaT * (rand() / (float) RAND_MAX - 0.5f);
      }
    }
  }

  // top and bottom plates are separated by fixed temperature differential 2 deltaT
  for (y = 0; y < wy; y++) {
    for (x = 0; x < wx; x++) {
      E(x,y,   0) =  deltaT;
      E(x,y,wz-1) = -deltaT;
    }
  }
  copyperiodic(E)

  Enew = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  cleararray(Enew)
  for (y = 0; y < wy; y++) {
    for (x = 0; x < wx; x++) {
      Enew(x,y,   0) =  deltaT;
      Enew(x,y,wz-1) = -deltaT;
    }
  }
  copyperiodic(Enew)
  temperatureInitialized = true;
}

#define    wv(x,y,z) array(   wv,x,y,z)
#define    wl(x,y,z) array(   wl,x,y,z)
#define wvnew(x,y,z) array(wvnew,x,y,z)
#define wlnew(x,y,z) array(wlnew,x,y,z)

static bool vaporInitialized = false;

static void initializeVapor(float vs) {
  int x, y, z;
  if (vaporInitialized) return;
  vaporSource = vs;
  wv = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  wl = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  cleararray(wv)
  cleararray(wl)

  // top and bottom plates are separated by fixed vapor differential vaporSource
  for (y = 0; y < wy; y++) {
    for (x = 0; x < wx; x++) {
      wv(x,y,   0) = vaporSource;
      wv(x,y,wz-1) = 0;
    }
  }
  copyperiodic(wv)
  copyperiodic(wl)

  wvnew = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  wlnew = (float *) malloc((wx+2) * (wy+2) * (wz+2) * sizeof(float));
  cleararray(wvnew)
  cleararray(wlnew)
  copyperiodic(wvnew)
  copyperiodic(wlnew)
  vaporInitialized = true;
}

static bool convectionInitialized = false;

void initializeConvection() {
  if (convectionInitialized) return;
  initializeVelocity(1.0f);
  initializeTemperature(.04f);  // .01f for stable pattern // 1.8 * lambda + .3 for onset of oscillation?
  initializeVapor(5.75f);
  convectionInitialized = true;
}

#define delta(arr,x,y,z) \
  (.1666666666667f * (arr(x-1,y,z)+arr(x+1,y,z)+arr(x,y-1,z)+arr(x,y+1,z)+arr(x,y,z-1)+arr(x,y,z+1)) - arr(x,y,z))

// should the second constant be .125f? check
#define graddivvx(x,y,z) \
  (.5f * (vx(x+1,y,z)+vx(x-1,y,z)) - vx(x,y,z) + .25f * \
   (vy(x+1,y+1,z)-vy(x+1,y-1,z)-vy(x-1,y+1,z)+vy(x-1,y-1,z)+vz(x+1,y,z+1)-vz(x+1,y,z-1)-vz(x-1,y,z+1)+vz(x-1,y,z-1)))
#define graddivvy(x,y,z) \
  (.5f * (vy(x,y+1,z)+vy(x,y-1,z)) - vy(x,y,z) + .25f * \
   (vx(x+1,y+1,z)-vx(x-1,y+1,z)-vx(x+1,y-1,z)+vx(x-1,y-1,z)+vz(x,y+1,z+1)-vz(x,y+1,z-1)-vz(x,y-1,z+1)+vz(x,y-1,z-1)))
#define graddivvz(x,y,z) \
  (.5f * (vz(x,y,z+1)+vz(x,y,z-1)) - vz(x,y,z) + .25f * \
   (vx(x+1,y,z+1)-vx(x-1,y,z+1)-vx(x+1,y,z-1)+vx(x-1,y,z-1)+vy(x,y+1,z+1)-vy(x,y-1,z+1)-vy(x,y+1,z-1)+vy(x,y-1,z-1)))

void convectionTimestep(float dt) {
  int x, y, z;
  int l, m, n;
  float dx, dy, dz;
  float *temp;
  if (!convectionInitialized) initializeConvection();

  // Eulerian part
  {  // (a) buoyancy procedure
    float kb = 3E0f;  // not specified in [Yanagita & Kaneko 95], 5 or 6 seems to be what they use, though
    for (z = 0; z < wz; z++) {  // 0, wz for fixed; 1, wz-1 for reflective boundary
      for (y = 0; y < wy; y++) {
        for (x = 0; x < wx; x++) {
          vznew(x,y,z) = vz(x,y,z) + kb * ( E(x,y,z) - .25f*(E(x+1,y,z)+E(x-1,y,z)+E(x,y+1,z)+E(x,y-1,z)) );
        }
      }
    }
    swaparray(vz,vznew)
    copyperiodic(vz)
  }
  {  // (b) heat diffusion
    float kdE = .2f;  // .4f, .02f
    for (z = 1; z < wz-1; z++) {  // 1, wz-1 always
      for (y = 0; y < wy; y++) {
        for (x = 0; x < wx; x++) {
          Enew(x,y,z) = E(x,y,z) + kdE * delta(E,x,y,z);
        }
      }
    }
    swaparray(E,Enew)
    copyperiodic(E)
  }
  {  // vapor diffusion -- do water droplets diffuse?
    float kdw = 1E-1f;  // .4f, .02f
    for (z = 1; z < wz-1; z++) {  // 1, wz-1 always
      for (y = 0; y < wy; y++) {
        for (x = 0; x < wx; x++) {
          wvnew(x,y,z) = wv(x,y,z) + kdw * delta(wv,x,y,z);
          wlnew(x,y,z) = wl(x,y,z) + kdw * delta(wl,x,y,z);
        }
      }
    }
    swaparray(wv,wvnew)
    //swaparray(wl,wlnew)
    copyperiodic(wv)
    //copyperiodic(wl)
  }
  {  // (c) viscosity and pressure effect
    float kv = .2f;
    float kp = .2f;
    for (z = 0; z < wz; z++) {  // 1, wz-1 for reflection
      for (y = 0; y < wy; y++) {
        for (x = 0; x < wx; x++) {
          vxnew(x,y,z) = vx(x,y,z) + kv * delta(vx,x,y,z) + kp * graddivvx(x,y,z);
          vynew(x,y,z) = vy(x,y,z) + kv * delta(vy,x,y,z) + kp * graddivvy(x,y,z);
          vznew(x,y,z) = vz(x,y,z) + kv * delta(vz,x,y,z) + kp * graddivvz(x,y,z);
        }
      }
    }
    swaparray(vx,vxnew)
    swaparray(vy,vynew)
    swaparray(vz,vznew)
    copyperiodic(vx)
    copyperiodic(vy)
    copyperiodic(vz)
  }

  // Lagrangian part
#define advect(a,b) \
  b(  l,  m,  n) += (1-dx)*(1-dy)*(1-dz) * a(x,y,z); \
  b(l+1,  m,  n) +=    dx *(1-dy)*(1-dz) * a(x,y,z); \
  b(  l,m+1,  n) += (1-dx)*   dy *(1-dz) * a(x,y,z); \
  b(l+1,m+1,  n) +=    dx *   dy *(1-dz) * a(x,y,z); \
  b(  l,  m,n+1) += (1-dx)*(1-dy)*   dz  * a(x,y,z); \
  b(l+1,  m,n+1) +=    dx *(1-dy)*   dz  * a(x,y,z); \
  b(  l,m+1,n+1) += (1-dx)*   dy *   dz  * a(x,y,z); \
  b(l+1,m+1,n+1) +=    dx *   dy *   dz  * a(x,y,z);

  {
    cleararray(vxnew)
    cleararray(vynew)
    cleararray(vznew)
    cleararray(Enew)
    cleararray(wlnew)
    cleararray(wvnew)
    for (z = 1; z < wz-1; z++) {  // 0, wz
      for (y = 0; y < wy; y++) {
        for (x = 0; x < wx; x++) {
          dz = z + vz(x,y,z);
          if (dz < 0)     dz = -dz;                   // continue;  // for fixed boundary, just continue
          if (dz >= wz-1) dz = (float) (2*(wz-1)-z);  // continue;
          dy = y + vy(x,y,z);
          if (dy < 0)     dy += wy;
          if (dy >= wy)   dy -= wy;
          dx = x + vx(x,y,z);
          if (dx < 0)     dx += wx;
          if (dx >= wx)   dx -= wx;
          l = (int) dx;  dx -= l;
          m = (int) dy;  dy -= m;
          n = (int) dz;  dz -= n;
          advect(vx,vxnew)
          advect(vy,vynew)
          advect(vz,vznew)
          advect(E,Enew)
          advect(wl,wlnew)
          advect(wv,wvnew)
        }
      }
    }
    swaparray(vx,vxnew)
    swaparray(vy,vynew)
    swaparray(vz,vznew)
    swaparray(E,Enew)
    swaparray(wl,wlnew)
    swaparray(wv,wvnew)
    addperiodic(vx)
    addperiodic(vy)
    addperiodic(vz)
    addperiodic(E)
    addperiodic(wl)
    addperiodic(wv)
    addreflections(vx)
    addreflections(vy)
    swapreflections(vz)
    addreflections(E)
    //clearreflections(vx)
    //clearreflections(vy)
    clearreflections(vz)
    addreflections(wv)
    addreflections(wl)
  }

  for (y = -1; y < wy+1; y++) {
    for (x = -1; x < wx+1; x++) {
       E(x,y,   0) =  deltaT;
       E(x,y,wz-1) = -deltaT;
      wv(x,y,   0) = vaporSource;
      //wv(x,y,wz-1) = 0;
      wl(x,y,0)    = 0;
      //wl(x,y,wz-1) = 0;
    }
  }

  {  // phase transition
    float alpha = 1E-2f;
    float Q = 7E-4f;  // cal / g??
#define altitude 6000.0f
#define wmax(T) (217 * (float) exp(19.482f - 4303.4f / ((T) - 29.5f)) / (T))
    float wmax = .5f;  // 300 - 0.6f * altitude / 100.0f;
    float delta;
    //float temperature;
    for (z = 0; z < wz; z++) {
      //temperature = 300 - 0.6f * (altitude + 20 * z) / 100.0f;  // for cumulus clouds only
      //wmax = wmax(temperature);
      for (y = 0; y < wy; y++) {
        for (x = 0; x < wx; x++) {
          wmax = wmax(264+E(x,y,z));
          delta = wv(x,y,z) - wmax;
          if (delta > 0) {
            wvnew(x,y,z) = wv(x,y,z) - alpha * delta;
            wlnew(x,y,z) = wl(x,y,z) + alpha * delta;
          } else {
            if (wl(x,y,z) + alpha * delta < 0) delta = -wl(x,y,z);
            wvnew(x,y,z) = wv(x,y,z) - alpha * delta;
            wlnew(x,y,z) = wl(x,y,z) + alpha * delta;
          }
          Enew(x,y,z) = E(x,y,z) - Q * delta;
        }
      }
    }
    swaparray(wv,wvnew)
    swaparray(wl,wlnew)
    swaparray(E,Enew)
    copyperiodic(wv)
    copyperiodic(wl)
    copyperiodic(E)
  }

  {  // restore boundary conditions
    for (y = -1; y < wy+1; y++) {
      for (x = -1; x < wx+1; x++) {
         E(x,y,   0) =  deltaT;
         E(x,y,wz-1) = -deltaT;
        wv(x,y,   0) = vaporSource;
        //wv(x,y,wz-1) = 0;
        wl(x,y,0)    = 0;
        //wl(x,y,wz-1) = 0;
      }
    }
    {  // try setting a horizontal flow as a boundary condition? (as in [Miyazaki, Yoshida, Dobashi & Nishita])
      for (y = -1; y < wy+1; y++) {
        for (x = -1; x < wx+1; x++) {
#define tempscale 1.25f
          vx(x,y,   0) =  .125f * tempscale;
          vx(x,y,wz-1) = -.01625f * tempscale;
          vy(x,y,   0) = .01625f * tempscale;
          vy(x,y,wz-1) = .125f * tempscale;
          //vz(x,y,   0) = .25f;
          //vz(x,y,wz-1) = -.125f;
        }
      }
    }
  }
}

[Edit: Added images of Benard convection cells suitable for cirrocumulus clouds (?). Phase transition doesn't work yet.]
[Edit: Updated code parameters and fixed phase transition]

[edited by - greeneggs on June 20, 2002 11:42:48 AM]
0

##### Share on other sites
This thread is getting interesting

I prepared a longer reply, but I have to draw some images to make it complete. I'll post it later tonight.

/ Yann
0

##### Share on other sites
quote:

Without any ray-tracing for rendering, though, I don't see how one can expect good results. For example, consider noontime when (assume) the sun is directly overhead. The shading ray-tracer is tracing through parallel vertical lines, and so is trivial (it just takes the value from the heightfield). A typical cloud will be dark in the middle, with a lighter stripe around the outside, since this is what the height field looks like. However, even clouds near the horizon will look like this, foreshortened in perspective. And this is incorrect.

Exactly. What you are mentioning is the second term in Dobashi's equations, the scattering towards the eye. I take that into account by simply tracing a ray from each voxel bottom to the eye position, which is assumed fixed at (0,0,0).

So basically:

1) trace a ray from the sun to the voxel, approximate the scattering integral for the light that reaches the voxel.
2) trace a ray from the voxel to (0,0,0), approximate the light scattered from the voxel towards the eye.

But this alone won't solve the problem. The problem is not so much the scattering towards the eye (actually, it doesn't make that much difference if I take it out; the clouds get a bit darker, not much more). The problem is that we are using a 2D cloud layer.

In the real world, when you see a cloud far away at the horizon, you are not looking at its bottom, but at its side. Keep in mind that real clouds are 3D objects. If the sun is high above (e.g. at noon), and sunlight is approximated by a directional lightsource, then you get this situation:

(OK, I know my drawing skills suck)

V is the viewpoint, the yellow arrows represent the sunlight direction, and the blue line is our 2D cloud plane. Assume that clouds A and B are identical. If only the sunlight direction is taken into account, then both will be illuminated the same way, regardless of their position. Consider the red voxel in both clouds: it receives very little light, since the ray has to travel through the whole cloud to reach it. It will thus be very dark. This behaviour yields correct lighting for cloud A, but not for cloud B.

The reason why it works in reality is simple: consider the view frustum from the viewpoint to cloud B. If the cloud were a real 3D object, you would actually see the side of the first voxel, which is highly illuminated. The red voxel would be almost invisible, since it would be hidden by the first two voxels.

But our cloud system is 2D. You always see the bottom of the clouds. That is incorrect, and leads to artifacts when the clouds are viewed from an inadequate perspective. It's a bit like fixed billboards: they look nice and realistic if you are right in front of them, but if your camera rises up, the perspective gets lost.

So what can we do? A solution would be to use full 3D clouds, but that would be rather expensive...

Solution: we fake the perspective! We can't do anything about the 3D geometry (it's a 2D plane), so let's do something about the lighting. The perspective needs to be reintroduced by a special form of lighting, so that clouds at the horizon appear 3D.

Normally, we use a directional light to approximate the sun, but this results in a uniform shading of all clouds. Also, directional lightsources do not contain perspective information; they are orthographic projections. The best way to introduce perspective into the lighting is to use a perspective lightsource: a pointlight.

Consider this situation:

As you can see, I replaced the directional sunlight by a pointlight source (the yellow 'blob' is supposed to be the sun). This is physically highly incorrect, but it gives the desired result. For cloud A, the lighting is still correct, since the incident light is more or less directional. For cloud B, however, we now have a fully perspective form of lighting: the formerly red voxel is now fully lit (green arrow). The cloud will appear as if it were seen from its side.

The distance between the sunlight and our cloud plane (distance L) needs to be adjusted by trial and error; changing it influences the strength of the perspective fake. Play around with various values for L and the multiple-scattering extinction coefficients, until you get a visually pleasing result.

Of course, being a fake, this method has its drawbacks. It will introduce errors if the sun is near the horizon and a cloud is just in front of it. With our method, the light rays would now come from behind the cloud, while in reality they would come from above. They would need to travel through the whole cloud, and this results in an over-attenuation during the scattering integration: we get a dark spot in the cloud.

You can see this effect on the first screenshot:

However, I think this error isn't too bad, considering that the method is very fast and can even be hardware accelerated. But there would be ways to overcome this problem, perhaps by adjusting the perspective 'fake factor' based on the cloud's distance from the viewer.

quote:

I think that you are not actually shading at high resolution, but are using the heightfield, exponentiated, to give the additional detail? This isn't really physically accurate but if it works who am I to argue?

That's what I'm doing. The shading is done on 3-4 octaves of noise, while the opacity detail uses 8 to 12 octaves. Sure, it's not physically accurate, but the whole 2D-plane Perlin noise idea isn't in the first place. And as long as it looks good and is fast...

quote:

In "A method for modeling clouds based on atmospheric fluid dynamics" (Miyazaki, Yoshida, Dobashi & Nishita), they show some results on making 2-dimensional Benard convection cells for cirrocumulus clouds. Has anyone gotten this running in real-time? (I imagine the three dimensional computations are too slow, although clouds change slowly so maybe it's doable.) These are very impressive results, much better than their cloud automata models.

Hmmm, I downloaded the paper, it looks very interesting. I currently don't have the time to investigate their algorithms, but I think I might add this to my ToDo list. It would be very nice to have a somewhat more controllable system than Perlin noise, something where you can specify atmospheric and climatic conditions, and the system will automatically synthesize the appropriate clouds. Although you'd then have to calculate the clouds entirely on the CPU, and I don't know if that can be done in realtime at high cloud-texture resolutions.

/ Yann

[edited by - Yann L on June 7, 2002 8:12:57 PM]
0

##### Share on other sites
Bit off topic - but I think there should be a lot more threads like this; this thread is just so damn interesting.

Death of one is a tragedy, death of a million is just a statistic.
0