
bishop_pass

Atmospherics


Drawing inspiration from this sky thread, it might be interesting to develop those ideas further with more refinement and variety.

Regarding clouds, they can be classified into several categories; notable among them are cirrus, cumulus, cumulonimbus, and lenticular. Cirrus are high-altitude clouds, likely best rendered as a skyplane: high-frequency repeating splotches, often parallel, and generally broken up into approximately equal-sized pieces. Cumulus are fair-weather clouds, puffy and stereotypical of a person's perception of what a cloud is. Gardner suggested rendering such clouds with ellipsoids textured with a noise function, with the alpha transparency becoming greater toward the ellipsoid's silhouette. Cumulonimbus are the high, often vertically billowing clouds, and are harbingers of bad weather. Lenticulars often form over high mountain crests and peaks, are usually very horizontal and smooth in shape, and have often been said to look like flying saucers. In the High Sierra, the lenticulars have a name: the Sierra Wave. Here is a beautiful picture of a stunning Sierra Wave. Here is further information on clouds, including pictures. And here is more information on clouds and their formation.

One thing which I am not sure has been correctly addressed with regard to the illumination of clouds (or mountain peaks, for that matter) is the phenomenon of alpenglow. As we know, as the sun gets lower, its light passes through more atmosphere, scattering the blue wavelengths and causing the light to become orange and then pink. With regard to altitude, this means: the higher the surface receiving the light, the redder it can be, because it can receive light that has passed through a greater portion of the atmosphere.

Long after the sun has set (or long before it has risen) for someone standing on a low plain, sunlight can still illuminate mountain peaks and high-altitude clouds, and with a redder light than can reach the viewer on the plain; yet the viewer can still see the light on those high surfaces. Two things are obvious here: predawn and post-dusk light should cast a very pinkish glow on high-altitude clouds and peaks.

With regard to the altitude of the viewer: the higher the viewer goes, the darker blue the sky becomes. The extreme of this is entering space, but the effect is very noticeable at modest altitudes, such as above 10,000 feet, where the sky is a notably deeper blue. Another effect of being at altitude is (I believe) a smaller halo around the noontime sun, because of the lower density of the atmosphere.

There are cases where the Sun's (and Moon's) disk becomes very hard-edged, with essentially no halo. These occur when the proper layer of haze or cloud density sits in front of the disk. During midday, this might be due to a certain density of cloud cover, resulting in a pure white yet hard-edged disk defining the sun. During sunset, when the sun might be yellow or orange, the disk is again sometimes very hard-edged with no halo, this time often due to haze.
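The "peaks lit long after sunset" effect has simple geometry behind it, and can be sketched numerically. Under a straight-ray approximation (ignoring refraction), a point at altitude h still sees the sun when the sun is depressed below the sea-level horizon by an angle d satisfying cos d = R / (R + h), where R is the Earth's radius. Solving for h gives a rough minimum sunlit altitude; the formula and constants below are a back-of-the-envelope illustration, not something from the thread:

```python
import math

EARTH_RADIUS_KM = 6371.0

def min_sunlit_altitude_km(sun_depression_deg):
    """Lowest altitude (km) still catching direct sunlight when the sun
    sits sun_depression_deg below the sea-level horizon.
    Straight-ray geometry; ignores atmospheric refraction."""
    d = math.radians(sun_depression_deg)
    return EARTH_RADIUS_KM * (1.0 / math.cos(d) - 1.0)

# With the sun 2 degrees below the horizon, only peaks and clouds above
# roughly 3.9 km (~13,000 ft) are still catching direct, reddened light.
for deg in (1.0, 2.0, 3.0):
    print(deg, round(min_sunlit_altitude_km(deg), 2))
```

This matches the observation above: the deeper the sun sinks, the higher the band of pink light climbs, until only the highest cirrus still glows.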

I would also be interested in a follow-up sky thread. I have significantly extended the sky model since that thread, but with more emphasis on the lighting and atmospherics part (newer 3D cards allow much better volumetric approximations of the actual light scattering integral). I have not really touched the cloud generation, but if I'm not mistaken, Greeneggs was working on a convection cell model, which could be used to simulate different meteorological conditions. It would be interesting to know what results he got.

I was thinking about clouds, textured skyplanes, and projecting these textures onto cubes to get more bang for your buck with regard to resolution, as described in the above-referenced thread, and it occurred to me that you could do it like this instead:

Compute several separate skyplane textures, each a fraction of the size of one larger one. Forget about connecting these separate textured planes together; there is no reason to. Render each of them at a different altitude. The ones near the viewer would have a higher resolution; the ones farther from the viewer would have a lower resolution.

As these cloud planes move (due to wind forces), they can be upgraded to higher resolutions, or downgraded to lower resolutions, based on their distance.

So, instead of having one giant texture at 2048x2048, one could have two large textures at 512x512, several at 256x256, quite a few at 128x128, and so on. With alpha blending, each can be rendered at a slightly different altitude. If they overlap a little, it doesn't matter; if they are separated by gaps, it doesn't really matter either. Each plane represents its own group of clouds. The appearance would be basically the same as one large texture, but you gain resolution where you need it, the ability to illuminate layers based on altitude, a more three-dimensional layering of clouds, and no need to project onto a cube.
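The resolution schedule above can be sketched as a simple distance-based selection. The function and all its thresholds here are made-up illustration values, not part of the original proposal; the only idea taken from the thread is "halve the texture resolution as planes get farther away":

```python
def plane_resolution(distance, near=1000.0, max_res=512, min_res=32):
    """Pick a power-of-two texture size for a cloud plane: start at
    max_res for planes closer than 'near' and halve the resolution
    each time the distance doubles. All thresholds are illustrative."""
    res, d = max_res, distance
    while d > near and res > min_res:
        res //= 2
        d /= 2.0
    return res
```

As planes drift with the wind, re-running this per plane gives the upgrade/downgrade behavior described above.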

[edited by - bishop_pass on January 24, 2003 10:12:06 PM]

Taking the above methodology a step further, you could also incorporate into the above system a set of billboarded cumulus and cumulonimbus cloud planes, which seem more conducive to a vertically aligned plane than a horizontally aligned one. The most important part would be to make sure these vertically aligned billboards don't intersect with the horizontally aligned cloud planes. Some of these billboards could be above some horizontal planes and below others. Because of the flexibility in adjusting the heights of the horizontal cloud planes, there should be little difficulty in interleaving the vertically aligned cloud planes among them.

The layer idea is interesting, but you'll end up with the same problem you have in one-plane cloud rendering: getting enough resolution for the clouds near the viewer. For a good-looking overhead, low-altitude texture, you'd need at least 1024². You can make the plane smaller (in its xy dimensions, thus increasing perceptual resolution), but you will probably see cracks at the edges then. Even though the layered concept will mostly hide distant overlaps and gaps, this illusion will not work near the camera, unless you use pretty small distinct clouds, each one in its own texture (with transparency fading out near the texture edge to hide artifacts).

Lighting will be another problem. Moving the cloud layer pieces around is easy, but at every motion step you'll have to recompute the lighting, as it will gradually change. Computing convincing 3D-ish lighting on separate patches (over different altitudes) will require heavy processing on the CPU side. The new solution will have to be transferred to the 3D card every frame (-> AGP saturation). You might be able to get away with computing the lighting solution every, say, 5 frames or so, and interpolating in between.
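The keyframed-lighting idea can be sketched in a few lines. Here `solve` is a hypothetical stand-in for whatever expensive per-vertex lighting solver is used; in a real renderer you would cache the two keyframe solutions rather than re-solving each frame:

```python
def interpolated_lighting(frame, interval, solve):
    """Run the expensive lighting solver only at keyframes (every
    'interval' frames) and linearly blend the two nearest solutions.
    'solve(frame)' is a stand-in that returns a list of per-vertex
    intensities for that keyframe."""
    k0 = (frame // interval) * interval      # previous keyframe
    t = (frame - k0) / interval              # blend factor in [0, 1)
    a = solve(k0)
    b = solve(k0 + interval)
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]
```

The blend itself is cheap enough to run per frame on the CPU, or as a vertex-level interpolation on the card.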

If I had to redesign the cloud generation system, I would still use a multi-layer approach, but only with two or three fixed infinite layers. Use some meteorological algorithm to compute the basic cloudscape (convection cells, or whatever else), and use DX9-level pixel shaders to add fractal detail on the GPU. That would save transfer bandwidth, memory, and fillrate. But I agree that multiple layers will allow for a better 3D effect, and more realistic cloud movement due to differing wind speeds at different altitudes.

quote:

Taking the above methodology a step further, you could also incorporate into the above system a set of billboarded cumulus and cumulonimbus cloud planes, which would seem to be more conducive to a vertically aligned plane, rather than a horizontally aligned plane


Yes, that's a good idea. But here again, realistic lighting might become a serious problem to do in realtime.

Regarding layer edges, I was assuming that no actual cloud imagery would extend to the boundary, so that no edges would be visible.

I was pondering the idea of faking the cloud shading. As you described in the other thread, the sun is treated as a point light source, presumably positioned so that it lies along the direction of its (in truth effectively parallel) rays relative to the viewer, and raised to a height that best simulates the perspective shift when calculating the lighting.

Given that position, vectors are computed from the light's position to each vertex of the cloud plane, and used to compute a texture shift in a second pass, much like emboss bumpmapping.

By doing this, it seems that the clouds would accumulate the brightest lighting in just the right places. Actually, some redirecting and manipulation of the vertex to light vectors might be necessary, but a convincing result might be achievable.
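A minimal sketch of this emboss-style shift, with hypothetical names and an arbitrary tuning constant (the actual shift equation the post leaves open to manipulation): project the vertex-to-light vector onto the horizontal cloud plane and nudge the second pass's texture coordinates along it.

```python
import math

def embossed_uv_shift(vertex, light_pos, base_uv, scale=0.01):
    """Emboss-style second-pass shift: project the vertex-to-light
    vector (x, z components) onto the horizontal cloud plane and
    nudge the texture coordinates toward the light.
    'scale' is an arbitrary tuning knob, not a derived value."""
    lx = light_pos[0] - vertex[0]
    lz = light_pos[2] - vertex[2]
    length = math.hypot(lx, lz) or 1.0   # avoid division by zero
    u, v = base_uv
    return (u + scale * lx / length, v + scale * lz / length)
```

Subtractively blending the shifted texture over the unshifted one then brightens cloud edges facing the light, as in emboss bumpmapping.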

The point to realize is that the result only needs to look like a realistic lighting of some particular set of cloud geometry, not exactly like the realistic lighting of the cloud geometry you think you are rendering. In other words, the audience doesn't know the exact geometry of the clouds you are rendering, and in fact, neither do you, to any exact standard. If the audience can look at the rendered clouds and reconstruct in their mind where those clouds are, it doesn't matter whether that mind's-eye reconstruction parallels what the algorithm actually intended. It is only necessary that the image appear reconstructable to some physical geometry.

[edited by - bishop_pass on January 25, 2003 6:43:19 AM]

Per-vertex lighting is not enough for a volumetric effect, be it clouds, smoke, fog, etc. Why? Because it will reveal the underlying geometry. A volumetric phenomenon cannot be represented by geometry; it is inherently 3D. Rendering volumetrics through 2D layers is always a fake approximation. If you want to make it look good, and really 3D, then the most important point is the lighting.

Now, with your idea, you will simply light the individual 2D layers. It will look very flat. You need at least per-pixel lighting; there is no way around that. Each cloud voxel must influence the lighting solution at its position. You could probably do that using DOT3 bumpmapping, or something similar.

It will still look fake: shadows are missing. And that's the second most important point in realistic volume rendering. If you have a real cloudscape, then almost 90% of the lighting effect comes from mutual shadowing. Even if a cloud side is directly exposed to the sun, that doesn't mean it is lit. Clouds in front, above, or even below (at dusk/dawn) can shadow it to various degrees. Even parts of the same cloud will achieve that effect; different cloud thicknesses will give totally different (non-linear) results.

Light scattering is also very important: a cloud side that does not face the sun need not be unlit. If a large, lit cloud is directly behind it, part of that light will scatter (reflect) onto the unlit cloud.

To get a convincing result, you'd need at least some form of volumetric shadow tracing. There are various possibilities to do that, some better approximations than others. For instance, have a look at this paper.
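The core of such shadow tracing is marching from each sample point toward the sun and accumulating optical depth through the cloud density field. A minimal sketch, with `density_at` as a hypothetical stand-in for whatever volumetric cloud representation is used and all constants purely illustrative:

```python
import math

def sun_transmittance(density_at, point, sun_dir, steps=16,
                      step_len=50.0, extinction=0.01):
    """March from 'point' toward the sun, accumulating optical depth
    through the cloud density field, and return the fraction of
    sunlight surviving the march (Beer-Lambert attenuation).
    'density_at((x, y, z))' returns a non-negative cloud density."""
    tau = 0.0
    x, y, z = point
    dx, dy, dz = sun_dir
    for i in range(1, steps + 1):
        sample = (x + dx * i * step_len,
                  y + dy * i * step_len,
                  z + dz * i * step_len)
        tau += density_at(sample) * step_len
    return math.exp(-extinction * tau)
```

Multiplying each sample's direct lighting by this transmittance gives the mutual-shadowing term; scattering from neighboring lit clouds would still need a separate approximation.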

You are right that you don't always need to simulate the physically exact real thing to get a good result. The Perlin noise I used in the other sky thread is a good example: it looks reasonably realistic, but has absolutely no physical relation to real clouds. But lighting is different. The human brain is highly trained to use light and shadows as the main cue to identify 3D structures at large distances (the stereoscopic effect breaks down beyond a certain distance). Even a physically untrained viewer will directly notice if something is wrong with the lighting. He will probably not be able to identify the problem; he will simply state that 'something is wrong, and looks fake'. We often had experiences like this with our beta-testing team and customers viewing new effects in our photorealistic renderers. And also keep in mind that clouds are an everyday phenomenon for everybody. People are highly used to the 'real look' of real clouds.

/ Yann

[edited by - Yann L on January 25, 2003 10:43:20 AM]

Can you provide some links on the light scattering and on the integral?
I'd like to know how the actual scattering works and how it is evaluated.

Light scattering is an important part of optics, so an advanced physics book should basically cover it. But that's mostly dry theory. For more practical examples, applied to computer graphics (esp. volume rendering and sky/atmosphere), you should take a look at these papers:

Same as in my reply above. Pretty simple and easy to understand.

Tomoyuki Nishita's publication list. 5 pages of highly interesting papers. Many of them treat realistic lighting. Nishita (along with Dobashi) is one of the Gods of realistic natural effect rendering. Download as much papers as you can, they are truely worth it


Vetrrain's atmosphere page, and their cloud rendering section.

The good old Preetham, Shirley & Smits paper, A Practical Analytic Model for Daylight.

The best resource is definitely the Nishita and Dobashi papers, see above. They have at least 20 papers about particle scattering, special effects, and lighting. I have some more very interesting papers here, but you need an ACM account to access them online.

/ Yann

[edited by - Yann L on January 25, 2003 11:04:45 AM]

Yann, I didn't suggest per-vertex lighting. What I suggested was per-pixel lighting using shifted vertex coordinates in a blended subtractive pass, akin to emboss bumpmapping, but using a different equation to shift the texture coordinates in the second pass. Whether this would produce convincing results or not remains to be seen, but as you say, self-shadowing, self-illumination, and neighbor illumination are necessary too. No doubt you have more hands-on experience with coding cloud light scattering than I, as I've only thought about it.

Regarding cirrus clouds, they do in fact have a rather simple lighting model, because they are generally the top layer of clouds and all reside in the same plane. To best produce them with noise, I would scale one texture dimension (let's say t) to be perhaps two to four times the other, and rotate the texture so that it aligns with the predominant wind direction at that altitude. I would then work on modulating the octaves until you achieve what is a typical scale for these multiple parallel broken-up clouds, which all appear to be about the same size. For their illumination, I've noticed that they scatter the light well, and usually appear mostly white with feathery edges. But after sunset, they are still receiving orange light from the sun, because of their altitude. And well after sunset, they receive light that has been scattered more than any land-based viewer could ever receive, meaning more blue light is scattered out than at ground level. The result is a pink glow that is not achievable at lower altitudes.
