
# Sky-rendering techniques


138 replies to this topic

### #21Yann L  Members

Posted 22 May 2002 - 01:34 AM

1. And here we come again to the beginning of the discussion: that's exactly what I do, I keep the texture in sysmem and transfer it to the card. But that way, you can't use the hardware to generate the Perlin noise, as you suggested...

quote:
A 1024x1024x1 texture is one meg. One byte per texel should give enough precision for the clouds

That's only for opacity. Don't forget the shading.

2. It all depends on your *texture* resolution, perspective projection, and scale. If I take out only one octave, I lose considerable detail. Look: let's start with a 16*16 texture as primary octave. 8 octaves would give you a 2048² texture, where every pixel has been 'fractalized'. That's more or less the resolution I use. If you cut down to 5 octaves, the maximum you will get out of your fractal function is a 256² texture. That is clearly not enough.
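The octave arithmetic above can be sketched quickly (a hypothetical helper, not code from the thread: each additional octave doubles the highest frequency, so n octaves of a 16² base reach an effective 16 * 2^(n-1) resolution):

```python
def max_fractal_resolution(base_size: int, octaves: int) -> int:
    """Resolution at which every pixel still carries fractal detail,
    assuming each octave doubles the frequency of the previous one."""
    return base_size * 2 ** (octaves - 1)

print(max_fractal_resolution(16, 8))  # 2048
print(max_fractal_resolution(16, 5))  # 256
```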

3. You can't combine one octave per colour channel. But you can preprocess 3 to 4 octaves into the alpha channel, by pre-adding them. The more octaves you preprocess, the larger the texture will have to be. 3 octaves would be a 64² texture, which is acceptable to update every frame. It could even be created in realtime by using render-to-texture feedback.

quote:

Anyway, we're getting into very complex tricks here. Getting this to work and to look good will probably take a LONG time

I implemented it yesterday evening. It works, with the exception of a slight flaw in the exponentiation pass that required an additional texture shader (you can't address a texture at the per-fragment level using the RGB result from a regcom; I didn't think of that).

quote:

"Hugo writes: 'The secret behind Terragen's beauty: rather than going for all out brute force and mathematical accuracy, Matt just writes algorithms that produce good results, rather than trying to exactly model the physics. The clouds are essentially a beautiful bodge.'" Do you have any idea which algorithms they're talking about? I tried to find more details, but couldn't

Well, hmm, yes and no. Some time ago, a friend and I had a discussion with Matt about a realtime version of TerraGen, and he went briefly over his algorithms. But I guess the details are his private 'professional secret'. In a nutshell, he means that the clouds look like 3D, but are really just a 2D plane. The trick is to make them look like 3D through the shading.

/ Yann

[edited by - Yann L on May 22, 2002 10:47:36 AM]

### #22kill  Members

Posted 23 May 2002 - 04:06 AM

Instead of implementing the shading equation, I simply used Perlin noise again to shade the clouds. I specified two colors and interpolated them for every texel based on the value of The Great Noise Function. Although when I shoot for the realistic look the clouds look ugly, the World of Warcraft type clouds are done very easily. I'll play around with the values some more to get the realistic feel.

Hopefully in a couple of days I'll post the screenshots... I want to get the sun to work well and find really good color/gradient/frequency values before I let the whole wide world see my creation

Yann, can you give a little bit more detail about your skybox trick to reduce the texture resolution? I'm not sure how projecting the plane on a skybox will require a smaller texture to look good...

### #23DeltaVee  Members

Posted 23 May 2002 - 04:47 AM

A little off topic.

This thread needs to go into the Resources section of GameDev. I have learnt more in this thread about sky rendering than I have in just about all the other threads combined.

Great links Yann, impressive sky! Keep posting dude, I'm listening.

D.V.

Carpe Diem

### #24Yann L  Members

Posted 23 May 2002 - 04:59 AM

quote:

Hopefully in a couple of days I'll post the screenshots... I want to get the sun to work well and find really good color/gradient/frequency values before I let the whole wide world see my creation

Cool. It's always interesting to see shots from advanced sky engine implementations.

quote:

Yann, can you give a little bit more detail about your skybox trick to reduce the texture resolution? I'm not sure how projecting the plane on a skybox will require a smaller texture to look good...

Hmm, I was actually thinking of getting rid of it. It just takes too much processing power. And now that I have the clouds running in 100% HW on the GPU, there isn't really much need for it anymore (we're targeting GF3+ for the game).

OK, this was the idea behind it: if you texture your skyplane with a cloud texture (without tiling, since you don't want clouds repeating themselves), then this texture will have a highly degenerate texture space, due to the extreme perspective range it covers. It will be strongly magnified in the middle of the skyplane, just above your head. That's not good, since it will render the clouds just above the player very blurry. Towards the horizon, on the other hand, the cloud texture will be highly compressed. It will look OK if you use mipmapping and anisotropic filtering, but you'll actually waste over 80% of your highres cloud texture, since the highres parts will never be displayed.

It would be possible to encode a kind of adaptive resolution texture, and adjust texcoords to reduce the effect. But it takes considerable performance to encode such a texture in realtime, and you will get weird looking artifacts when clouds are moving. You'd also need to tessellate your skyplane fairly well, to get a good texcoord approximation of the distortion function used.

Now take a skycube: the sampling distribution is almost constant over its faces. That way, texture resolution is used to the maximum possible extent, and you don't waste texture space. But the CPU has to project the procedural cloud plane (which has 'infinite' resolution, due to its procedural nature) onto the cloudbox using supersampling. I use hand-optimized ASM for that, but it still swallows a lot of performance. As usual, it's a quality/performance tradeoff.
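As an illustration of the projection step, here is a sketch for the top cube face only (the function name, the unit-cube convention, and the pinhole-at-origin setup are assumptions for illustration, not Yann's actual code):

```python
def top_face_texel_to_plane(u: float, v: float, plane_height: float = 1.0):
    """Map a texel on the top (+Y) face of a unit skycube, with u, v in
    [-1, 1], to the point where the view ray through that texel intersects
    the cloud plane at y = plane_height. The ray direction through the
    texel is (u, 1, v); it reaches the plane at parameter t = plane_height,
    giving the plane coordinates (u * t, v * t). A real implementation
    would also supersample and handle the four side faces."""
    return (u * plane_height, v * plane_height)

# A texel halfway toward the face edge, cloud plane at height 2:
print(top_face_texel_to_plane(0.5, -0.5, 2.0))  # (1.0, -1.0)
```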

DeltaVee: thanks. Photorealistic rendering of natural effects is actually one of my favorite subjects, so expect some more threads of this type in the future

Made a few things clearer. Damnit, my spelling sucks...

/ Yann

[edited by - Yann L on May 23, 2002 4:15:01 PM]

### #25Yann L  Members

Posted 23 May 2002 - 10:22 AM

OK, for all pixelshader freaks out there: I just came up with an interesting idea on how to overcome the slight exponentiation flaw I mentioned yesterday. That way, you could get photorealistic, full quality, 100% dynamic realtime clouds, that are fully computed by the 3D card. There would be almost 0% CPU involvement.

I haven't tested the stuff yet, I'll do that tonight. For those who are interested in implementing something similar, I'll just describe my idea. If someone wants to try it out, it would be nice if we could discuss and compare the results of our implementations! The implementation idea is given in OpenGL, but can be easily adapted to D3D.

So, the idea is a two-pass approach (works on GF3+ only): the first pass creates the Perlin noise (up to 16 octaves are possible, without performance penalty!). The second pass performs the exponentiation, shading and final cloud plane rendering.

First pass: set up a pbuffer at the resolution of your screen. Now load 2 to 4 pre-added noise textures into the texture units, as described earlier. Each noise texture should be signed and contain 3 or 4 pre-added octaves of Perlin noise. Set up a vertex shader that adjusts the s,t coordinates of each used texture unit, so that the noise textures get added at the right frequency, by adjusting the tiling.

E.g.: you have 2 pre-added textures of 64² each, containing 3 noise octaves each (a total of 6 octaves). Then st0 = 1*base_frequency and st1 = 8*base_frequency.

Create a regcom setup that multiplies each texture with an amplitude factor and adds everything together. For our 6-octave example, that would be: c = tex0_alpha * constant0_alpha + tex1_alpha * constant1_alpha. Very simple; it gets a bit more complex if more octaves are used. Constant0_alpha and constant1_alpha give the amplitude of each noise texture.

In an additional regcom stage (or in the final combiner), subtract the cloud_cover value. We do that at this point, because we want to use the maximum precision available. The clamping after the subtraction will reduce dynamic range, so it's good to do that before writing the result to the 8 bit framebuffer.
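The two combiner stages just described amount to a weighted sum followed by a cover subtraction and a clamp. A minimal per-texel sketch in Python (all names are illustrative; values are floats in [0, 1]):

```python
def combine_noise(tex0_alpha: float, tex1_alpha: float,
                  amp0: float, amp1: float, cloud_cover: float) -> float:
    """Weighted sum of two pre-added noise textures (the regcom stage),
    then subtraction of the cover value and clamping, mirroring the
    combiner setup described above."""
    c = tex0_alpha * amp0 + tex1_alpha * amp1   # c = t0*a0 + t1*a1
    return max(0.0, min(1.0, c - cloud_cover))  # subtract cover, clamp

print(combine_noise(0.5, 0.5, 1.0, 0.125, 0.2))  # 0.3625
print(combine_noise(0.1, 0.0, 1.0, 1.0, 0.5))    # 0.0 (fully clear texel)
```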

Finally, render your skyplane to the pbuffer, with the vertex/pixelshader setup as described. The result should be a full Perlin noise layer that already has clear and covered cloud parts. But it is not yet exponentiated.

Second pass: bind the pbuffer as texture in unit 0. Since we now have the result of the Perlin noise generator as a texture, we can directly use it in a texture shader stage (unlike my first idea, where it was impossible to use a texture addressing operation with a regcom result). Load a 256x1 intensity texture map into texture unit 1, containing a pow-lookup table for all values from 0 to 255. Note that you can't use a 1D texture because of the texture shader we'll use, but a 256x1 will do just fine.

Set up a texture shader in stage 0: GL_DEPENDENT_AR_TEXTURE_2D_NV. This will convert the alpha and red value of our Perlin noise texture to an (s,t) pair. t will be ignored. s will contain a value between 0 and 255. Use it as an index into our lookup table texture: cfinal = Pow_table[s]. Now our Perlin noise is exponentiated in realtime by the GPU on a per-fragment level!

You can use texture unit 2 to apply a shading pass. This one will have to be precalc'ed on the CPU (since we can't do multiple scattering computations on the GPU yet), but it can happen at a much lower resolution than the real cloud opacity. Combine alpha and shading values into an RGBA value using a regcom. Render your cloud layer into the framebuffer using standard alpha blending. Done: perfect quality, fully dynamic clouds, calculated 100% on the GPU!

You can use a similar approach to HW accelerate the shafts of light coming from the sun.

OK, I'll implement that now. If everything works, I'll try to post some screenshots tomorrow, if anyone is interested.

/ Yann

### #26Dave Astle  Distinguished Rhino

Posted 23 May 2002 - 10:47 AM

quote:
Original post by DeltaVee
This thread needs to go into the Resources section of GameDev. I have learnt more in this thread about sky rendering than I have in just about all the other threads combined.

Yann, you should really, really write an article (or articles) about this. It'd be perfect for the Hardcore column. I know you're probably a bit time-limited, but considering how much you've written here, and the scarcity of articles covering doing these things in real time, you should definitely think about it.

### #27python_regious  Members

Posted 23 May 2002 - 10:54 PM

Could someone explain the term "exponentiation" to me, and why it's used? (I know exponentials - just not how it's used in this context...)

Cheers

Death of one is a tragedy, death of a million is just a statistic.

### #28Yann L  Members

Posted 24 May 2002 - 10:08 AM

quote:

Could someone explain the term "exponentiation" to me, and why it's used ( I know exponentials - just not how it's used in this context... ).

Well, 'exponentiation' simply means raising a quantity to a power, but I guess you already know that. It is a vital part of realistic cloud generation. The trick is to raise a sharpness factor to the power of your original Perlin noise. If the numeric ranges are well chosen, then this will apply a range mapping to the cloud noise. This mapping makes the clouds look more detailed and volumetric.

Here are some pics to show the effect.

This is the noise you normally get from a perlin noise generator:

Fig.1

Its distribution is very uniform, not really clouds yet. Or perhaps a very overcast day...

We need some clear parts in our sky, so we can just subtract an offset from the perlin noise, and clamp the result to a positive range:

Fig.2

The equation used is simply result = clamp_to_0( perlin_noise - cover_offset ) . It's already better, we have some cloudy and some clear parts. But it's still very fuzzy, more a kind of 'spiderweb' than actual clouds. Many sky engines stop at this step, and use this noise directly. This leads to very fuzzy and undefined clouds.

The problem is that in reality, clouds are 3D volumes. They have different thickness, and from a certain thickness on, a cloud becomes totally solid to the eye, no more transparency at all. This is not the case with our noise: it is always transparent to a certain degree.

We can change that by applying a range mapping. We are looking for a range mapping function that fully preserves our precision, since we only have 8 bit, and don't want to waste dynamic range.

The perfect candidate function is an exponential. Consider f(x) = n^x. This will raise the constant factor n to the power x. This exponential function has an interesting property: if we can guarantee that n is always between 0 and 1, then the result itself will also be between 0 and 1, for *all* exponents between 0 and infinity. Our exponent will be the clamped noise from Fig.2. It is between 0 and 255. Let's assume a sharpness factor n of 0.96.

The two extremes will be:

f(0) = 0.96^0 = 1
f(255) = 0.96^255 = 0.00003 (almost 0)

Great, we are between 0 and 1, as predicted. Now, let's map it back to the 0 to 255 range (and re-invert it, since the exponentiation inverted it in the first place):

final_clouds = 255 - ( 255 * 0.96^x )

We haven't lost a single bit of precision, and we get a much nicer result, the clouds look more 3D now:

The hard part with exponentiation (as you noticed in the previous posts) is to have the 3D hardware do it on a perpixel level.
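The range mapping above can be checked numerically. A minimal sketch, using the sharpness value 0.96 from the example:

```python
def exponentiate(noise: int, sharpness: float = 0.96) -> int:
    """Range-map an 8-bit clamped noise value to an 8-bit cloud density
    via the mapping described above: final = 255 - 255 * sharpness**noise.
    Fully clear texels stay clear, thick texels saturate to opaque."""
    return round(255 - 255 * sharpness ** noise)

print(exponentiate(0))    # 0   (clear sky stays fully transparent)
print(exponentiate(255))  # 255 (thick cloud becomes fully opaque)
```

In a shader this whole function collapses into the 256-entry pow lookup texture mentioned earlier in the thread, indexed by the noise value.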

quote:

Yann, you should really, really write an article (or articles) about this. It'd be perfect for the Hardcore column. I know you're probably a bit time-limited, but considering how much you've written here , and the scarcity of articles covering doing these things in real time, you should definitely think about it.

Heh, yeah, I already wrote half of an article in this thread...

But you are right. It's very hard to find good public resources on those things. I already considered the idea of writing an article about it, perhaps a small series about photorealistic realtime rendering of various natural effects: terrain, sky, water, etc.

Besides the sky, I would love to do one about water, there is so much you can get out of a modern 3D card, people wouldn't believe it. Since most of the effects I wrote are used in our game, I'll have to check some IP/NDA issues with our company first. But besides that: yes I would be interested. I'll get back to you by mail for some more details and timing.

/ Yann

[edited by - Yann L on June 7, 2002 8:10:15 PM]

### #29kill  Members

Posted 24 May 2002 - 10:33 AM

Yann, wait 'til I'm done with clouds. I don't particularly like the terrain topic, especially considering that a lot has been said about it in "Texturing & Modeling: A Procedural Approach", but we'll definitely have another thread discussing water

### #30python_regious  Members

Posted 24 May 2002 - 11:55 AM

Thanks Yann - you really cleared that up for me

As for water - look at the deep ocean rendering article on gamasutra.

Death of one is a tragedy, death of a million is just a statistic.

### #31Yann L  Members

Posted 24 May 2002 - 12:29 PM

quote:

Thanks Yann - you really cleared that up for me

I'm warming myself up for the article Dave wants me to write

quote:

As for water - look at the deep ocean rendering article on gamasutra.

I know this one, I implemented it a few months ago, and found it to be not realistic enough... I 'slightly' modified it by adding tons of pixelshader effects (bumpmapped chromatic aberration, Blinn speculars, etc). Now it looks good

Enough stuff for 2 articles about water, that's for sure...

BTW: my HW cloud implementation idea (see above) seems to work (more or less)! I'm happy. It still behaves a bit strangely, but that's just smaller adjustments. I'll hopefully post some shots tomorrow.

/ Yann

[edited by - Yann L on May 24, 2002 7:44:15 PM]

### #32Mordoch Bob  Members

Posted 24 May 2002 - 05:26 PM

Yann, I'm curious what resources you use as a reference for the numerous extensions and features that can be used on the newer hardware. I was relatively happy with the clouds I made for my balloon demo in the physics competition (more because I made it without the aid of any tutorials and by trying various different methods, than because of any *real* aesthetic appeal), but I would like to have the additional option of optimization through accessing the graphics card after it's all coded for sole use of the CPU. Do you have a text source from nVidia, or a general reference? Are there any pertinent links (other than the online reference on nVidia's site) that I should know about?

### #33python_regious  Members

Posted 24 May 2002 - 10:44 PM

quote:

I'm warming myself up for the article Dave wants me to write

I'm waiting for it

I'd also be interested in the resources you use. When you say "Blinn speculars", do you basically mean specular bump mapping based on the Blinn rather than the Phong model?

Death of one is a tragedy, death of a million is just a statistic.

[edited by - python_regious on May 25, 2002 5:45:43 AM]

### #34Yann L  Members

Posted 25 May 2002 - 01:56 PM

I primarily use OpenGL on nVidia, so I don't know about good Direct3D resources, but I guess MSDN would be a good point to start, if you want to do it in D3D.

For OpenGL, my main reference is the OpenGL extension registry. Most extensions are well documented, and if you don't like a huge textfile collection, there are pdf compilations available, containing all extensions in one document.

For ideas about how to use those extensions, the nVidia and ATi developer pages are very good. They are somewhat chaotic (esp. the nVidia one), but definitely contain very valuable information. The GDC presentations available on those sites are also very interesting.

I would highly recommend getting the reference papers to register combiners, vertex programs and texture shaders from nVidia's page.

They give a quick overview of the pipeline structure along with constants and gl commands. I actually printed them out, so that I have a handy reference over the whole programmable pipeline.

But the hardest part isn't understanding the 3D pipeline, the structure is actually rather simple. Once you have an algorithm you want to implement on the GPU, then the real challenge is to figure out how to make it work on the limited resources of that programmable pipe. This is more or less a question of the experience you have.

Here are some suggestions, on how to start using those features:

* Start small. Do one thing at a time: first start with register combiners, they are the easiest to understand. Write some effects using them, perhaps a bumpmapper. Get a feeling for how an algorithm can be broken up to fit the combiner limitations. Learn to modify your equations, so that e.g. textures or cubemaps are used as lookup tables for operations the pipeline can't do on its own.

* Then go on with vertex programs. It helps tremendously if you know ASM here, the basic idea behind both is very similar. Keep in mind that you can't loop or do conditional jumps, so you have to adapt your algorithms. But you can use table lookups, which can be very valuable. Play around with VPs, try to combine them with regcoms. A good start is a full diffuse/specular bumpmapper using VPs to calculate the tangents and binormals on the GPU.

* Last but not least, go on to texture shaders. They are only available on GF3+, so make sure to have such a card before attempting to use them. Texture shaders are a very powerful tool, but are very hard to use in the right way and *totally* unintuitive. Sometimes trial and error is the only way to get an effect running...

* Also have a look into render-to-texture features (pbuffers). Sometimes a complex problem can't be resolved in a single pass, even on powerful hardware. In that case, pbuffers can be a fast and convenient way to combine multiple passes.

* At this point, you'll see that the programmable vertex and fragment pipeline is a very powerful and versatile system, but with inherent and sharply defined limitations and rules. The remaining challenge is to find algorithms and techniques that produce the desired effect, but still fit the limitations imposed by the pipeline.

To start with vertex/pixelshaders, you can try Nutty.org or Delphi3d.net. Both have some nice examples, well suited for beginners.

Back to clouds (and some shameless self-advertising):
OK, the full hardware implementation of my sky engine runs rather well now. All the clouds in the pics below are fully calculated by the GPU, and can morph and billow in realtime! I still have a problem with lighting though; it's not as good as it was in the CPU-only version.

Some shots:

Another sunset. Note that the cloud lighting is screwed. Don't know why yet. The clouds use 8 octaves Perlin noise, and I added a slight additive glow around the sun, makes it look nicer.

Clouds at daytime. These ones use 12 octaves of Perlin noise, you can see that the fractal detail is a lot finer than in previous images. Lighting is still bugged.

I had some fun tweaking some parameters. Note the very low resolution of the cloud noise: I cut the thing down to 6 octaves, to see how it would perform. Isn't worth it (5% performance gain), 12 octaves is a lot nicer!

/ Yann

[edited by - Yann L on June 7, 2002 8:11:25 PM]

### #35zealouselixir  GDNet+

Posted 25 May 2002 - 02:06 PM

Those clouds give me a geekgasm.

Peace,
ZE.

//email me.//zealouselixir software.//msdn.//n00biez.//

[if you have a link proposal, email me.]

### #36hanstt  Members

Posted 25 May 2002 - 11:59 PM

Looks like photos...

That's awesome Yann! I didn't believe one could do such skies yet...

### #37greeneggs  Members

Posted 26 May 2002 - 10:16 AM

I like your cloud box idea, Yann. But I am most interested in how you are shading and rendering the clouds. How closely are you following Harris and Dobashi? (For example, I don't think you are using texture splats.) How are you generating voxel data from the 2D texture and how much of it are you using? How then do you shade and render? This seems to be the key to getting good, adjustable looks.

I have thought, for example, of "layering" the texture some number of times above itself. I've also thought that maybe the layers are only important for shading, and not for rendering -- i.e., only use the bottom voxel for rendering after its incident light has been determined by multiple forward scattering.

I haven't implemented any of this, but I just hacked together a function to approximate multiple forward scattering. Basically, clouds get darker the thicker they are, but the amount of attenuation depends on the angle between the camera ray and the sun ray. The exact dependence is e^(-7.5 * (1 - 0.9 * dot^100) * value), where value ∈ [0,1] is the cloud density and dot is the dot product between the two rays.
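For reference, evaluating this attenuation term at the two extremes of the dot product (a direct transcription of the formula above, not new math):

```python
import math

def attenuation(value: float, dot: float) -> float:
    """greeneggs' hacked forward-scattering approximation:
    e^(-7.5 * (1 - 0.9 * dot^100) * value), where value in [0, 1] is the
    cloud density and dot is the camera-ray . sun-ray dot product."""
    return math.exp(-7.5 * (1.0 - 0.9 * dot ** 100) * value)

# Looking straight toward the sun (dot = 1): attenuation is weak, a thick
# cloud still transmits exp(-0.75), i.e. roughly 47% of the light.
print(attenuation(1.0, 1.0))
# Looking away from the sun (dot = 0): full attenuation, exp(-7.5),
# so thick clouds go nearly black.
print(attenuation(1.0, 0.0))
```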

This makes the clouds look very flat, especially near the horizon, when you should be looking over the top of the clouds. To improve this, I added another hack where it decreases cloud density if the cloud density is decreasing when the camera ray moves up a little.

Here's what that looks like, with and without the sun so you can better see the shading nearby. The clouds in the distance still look flat, and even the clouds further up don't look quite right. Presumably with some parameter tweaking I could improve things, but I'd rather use a systematic, efficient method. (eight octaves of noise, halving the amplitude at each octave, base frequency 1, cloud plane height 1)

Here's the exact code (minus the last hack, it's too ugly!), if anyone needs some help getting started:

```
vector lt = normalize((0, -.4, -1));
I = vtransform("shader", "world", I);
setycomp(I, ycomp(I) - horizonlevel);
PP = transform("world", P);
setycomp(PP, ycomp(PP) - horizonlevel);
x = xcomp(PP); y = ycomp(PP); z = zcomp(PP);
setycomp(PP, 1);
setzcomp(PP, z + zcomp(I) * (1 - y) / y);
setxcomp(PP, x + (zcomp(PP) - z) * xcomp(I) / zcomp(I));
dot = pow(abs(lt . normalize(I)), 100);
attenuate = exp(-7.5 * (1 - 0.9 * dot));

/* Use fractional Brownian motion to compute a value for this point */
/* value = fBm(PP, omega, lambda, octaves); */
value = 0; l = 1; o = 1; a = 0;
for (i = 0; i < octaves; i += 1) {
    a += o * snoise(PP * l + label);
    l *= 2;
    o *= omega;
}
value = clamp(a - threshold, 0, 1);
value = 1 - pow(sharpness, 255 * value);
dot = pow(attenuate, value);

skycolor = mix(midcolor, skycolor, smoothstep(.2, .6, v));
skycolor = mix(horizoncolor, skycolor, smoothstep(-.05, .2, v));
/* skycolor = mix(skycolor, white, pow(abs(lt . normalize(I)), 1024)); */
/* skycolor = mix(skycolor, white, .5 * pow(abs(lt . normalize(I)), 64)); */
Ct = value * dot * cloudcolor + (1 - value) * skycolor;
```

[edited by - greeneggs on May 26, 2002 5:25:57 PM]

### #38Yann L  Members

Posted 27 May 2002 - 04:14 AM

quote:

I like your cloud box idea, Yann. But I am most interested in how you are shading and rendering the clouds. How closely are you following Harris and Dobashi? (For example, I don't think you are using texture splats.) How are you generating voxel data from the 2D texture and how much of it are you using? How then do you shade and render? This seems to be the key to getting good, adjustable looks.

Right, shading is very important. Basically, I use the algorithm outlined in Harris2001, chapter 2. I modified the implementation so that it works with a 2.5D digital differential analyzer to compute the integral over a low resolution voxel field. That's more or less all I use from Harris/Dobashi, since the remaining parts of their papers concentrate on their respective rendering techniques, which are substantially different from mine (esp. Dobashi's metaball splatting approach).

The original clamped cloud noise (before exponentiation, the Fig.2 in one of my posts above) can be seen as a heightfield (in fact, it would just look like a terrain, if you turned it upside down). Now, for each voxel in the cloud heightfield, I trace a ray from the voxel to the sun, through the voxel field, approximating the multiple scattering integral along the way (discretized over the resolution of the voxel grid). I do that on the CPU, and on a lowres version of the noise, usually only the first 3 or 4 octaves. Lighting doesn't need to be as precise as opacity. Also note that you need to do it before exponentiation, since the pow() will destroy any valid heightfield information!

Now, this is essentially a 2D raytracing operation (it's a 2D noise grid). But you have to take the 'fake' thickness of the clouds into account. That's why the DDA I used is a 2.5D version: it traces through a 2D field (a simple Bresenham tracer is fine), but takes the extent traveled through a 3D voxel into account. That way, the result is as if I calculated the multiple scattering integral over a real 3D cloud volume (when in fact it was a fake 2D heightfield).
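One way the core idea of integrating density along the light path could be sketched (a plain fixed-step trace in Python; this is an illustration of the concept only, not Yann's hand-optimized 2.5D DDA, and all names and constants are assumptions):

```python
import math

def light_at_texel(density, x, y, sun_dx, sun_dy, step=1.0, extinction=0.5):
    """Illustrative trace through a 2D density grid interpreted as cloud
    thickness: walk from texel (x, y) toward the sun in the 2D plane and
    accumulate optical depth, then return the transmitted light fraction
    exp(-extinction * depth). A real 2.5D DDA would use Bresenham steps
    and weight each cell by the exact segment length traveled through it."""
    h, w = len(density), len(density[0])
    depth = 0.0
    fx, fy = float(x), float(y)
    while 0 <= int(fx) < w and 0 <= int(fy) < h:
        depth += density[int(fy)][int(fx)] * step  # density * path length
        fx += sun_dx * step
        fy += sun_dy * step
    return math.exp(-extinction * depth)

# A texel under a clear strip receives full light:
print(light_at_texel([[0.0, 0.0]], 0, 0, 1, 0))  # 1.0
```

The self-shadowing Yann mentions falls out automatically: a texel deep inside a dense region accumulates more depth along the ray and therefore receives less light.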

quote:

I have thought, for example, of "layering" the texture some number of times above itself. I've also thought that maybe the layers are only important for shading, and not for rendering -- i.e., only use the bottom voxel for rendering after its incident light has been determined by multiple forward scattering.

Well, the rendering is simply done by drawing a single layer 2D plane textured with the full cloud noise.

[*Yann is speculating again, if you're not into theoretical pixelshader stuff, feel free to skip*]
There is an interesting way one could try: it could actually be possible to calculate the full multiple scattering equation entirely on the GPU as well. Using a similar approach to Lastra/Harris, chapter 2.2: by splitting up the procedural cloud layer into multiple layers, say 12, divided by a threshold value, and rendering them from the viewpoint of the sun. Blending would be done as described by Harris. That way, the 3D hardware would be able to approximate the scattering integral itself. The only problem would be the finite angle readback (gamma in their paper); this would probably require some kind of convolution kernel. Could be interesting to investigate further here.
[*end of speculation*]

quote:

I haven't implemented any of this, but I just hacked together a function to approximate multiple forward scattering. Basically, clouds get darker the thicker they are, but the amount of attenuation depends on the angle between the camera ray and the sun ray. The exact dependence is e^(-7.5 * (1 - .9 * dot^100) * value), where value \in [0,1] is the cloud density and dot is the dot product between the two rays.

That's exactly your problem. You are essentially approximating the integral over a different lightpath than the light incidence dot product. Your method would be correct if the sun was exactly above the camera, and all clouds as well. Then the integration over the cloud volume would exactly equal the cloud density at that point. But the further the sun is from that ideal position, the more error you will introduce. At far away clouds, you are correctly computing the incident light (through the dot product), but your scattering approximation still behaves as if the sun was exactly above the cloud. This is why you get those thin looking clouds with black interiors.

quote:

Here's what that looks like, with and without the sun so you can better see the shading nearby. The clouds in the distance still look flat, and even the clouds further up don't look quite right. Presumably with some parameter tweaking I could improve things, but I'd rather use a systematic, efficient method. (eight octaves of noise, halving the amplitude at each octave, base frequency 1, cloud plane height 1)

Not bad, you're on the right track. If you fix your multiple scattering integration, you'll get approximately the same results as I have. The key is to take into account the full cloud density distribution along the ray between the cloud texel and the sun. This will also take care of self-shadowing. But to do that, you'll need to trace through the grid in one way or another (or use multiple layer projections). As mentioned, that's not too bad, since you can do it at a far lower resolution than the actual cloud noise itself.

/ Yann

[edited by - Yann L on May 27, 2002 11:16:32 AM]

### #39kill  Members

Posted 27 May 2002 - 04:24 PM

I just came back from Lake George (I went there for the Memorial Day weekend). The scenery there is beautiful, however I couldn't fully enjoy it because I kept looking at the clouds, landscape, vegetation and water, and kept trying to point out all the different effects I should implement. Ignorance is truly bliss. My non-programming friends enjoyed everything so much more than I did

Anyway, here are the shots that I promised. I didn't implement the sun yet, but I am very tempted to post the shots nevertheless.

Kill, Geocities won't allow direct links to images. Use the proxy trick I also use for Brinkster - edit your post to see how it works! /Yann

[edited by - Yann L on May 28, 2002 9:40:23 AM]

### #40_DarkWIng_  Members

Posted 28 May 2002 - 07:01 AM