Sky-rendering techniques

quote:
Thanks Yann - you really cleared that up for me

I'm warming myself up for the article Dave wants me to write

quote:
As for water - look at the deep ocean rendering article on gamasutra.

I know this one, I implemented it a few months ago, and found it not realistic enough... I 'slightly' modified it by adding tons of pixelshader effects (bumpmapped chromatic aberration, Blinn speculars, etc.). Now it looks good

Enough stuff for 2 articles about water, that's for sure...

BTW: my HW cloud implementation idea (see above) seems to work (more or less)! I'm happy. It still behaves a bit strangely, but that's a matter of smaller adjustments. I'll hopefully post some shots tomorrow.

/ Yann

[edited by - Yann L on May 24, 2002 7:44:15 PM]
Yann, I'm curious what resources you use as a reference for the numerous extensions and features that can be used on the newer hardware. I was relatively happy with the clouds I made for my balloon demo in the physics competition (more because I made it without the aid of any tutorials and by trying various different methods, than because of any *real* aesthetic appeal), but I would like to have the additional option of optimization through accessing the graphics card after it's all coded for sole use of the CPU. Do you have a text source from nVidia, or a general reference? Are there any pertinent links (other than the online reference on nVidia's site) that I should know about?
_________________________________________________________________________________
The wind shear alone from a pink golfball can take the head off a 90-pound midget from 300 yards. -Six String Samurai
quote:
I'm warming myself up for the article Dave wants me to write


I'm waiting for it

I'd also be interested in the resources you use. When you say "Blinn speculars", do you basically mean specular bump mapping based on the Blinn model rather than the Phong model?

Death of one is a tragedy, death of a million is just a statistic.

[edited by - python_regious on May 25, 2002 5:45:43 AM]
If at first you don't succeed, redefine success.
About the resources I use:

I primarily use OpenGL on nVidia, so I don't know about good Direct3D resources, but I guess MSDN would be a good place to start if you want to do it in D3D.

For OpenGL, my main reference is the OpenGL extension registry. Most extensions are well documented, and if you don't like a huge collection of text files, there are PDF compilations available containing all extensions in one document.

For ideas about how to use those extensions, the nVidia and ATi developer pages are very good. They are somewhat chaotic (esp. the nVidia one), but definitely contain very valuable information. The GDC presentations available on those sites are also very interesting.

I would highly recommend getting the reference papers on register combiners, vertex programs and texture shaders from nVidia's page.

They give a quick overview of the pipeline structure along with the constants and GL commands. I actually printed them out, so that I have a handy reference for the whole programmable pipeline.

But the hardest part isn't understanding the 3D pipeline; the structure is actually rather simple. Once you have an algorithm you want to implement on the GPU, the real challenge is to figure out how to make it work with the limited resources of that programmable pipe. This is more or less a question of experience.

Here are some suggestions on how to start using those features:

* Start small, and do one thing at a time. First start with register combiners; they are the easiest to understand. Write some effects using them, perhaps a bumpmapper (see the sketch after this list). Get a feeling for how an algorithm can be broken up to fit the combiner limitations. Learn to modify your equations, so that e.g. textures or cubemaps are used as lookup tables for operations the pipeline can't do on its own.

* Then go on with vertex programs. It helps tremendously if you know ASM here; the basic idea behind both is very similar. Keep in mind that you can't loop or do conditional jumps, so you have to adapt your algorithms. But you can use table lookups, which can be very valuable. Play around with VPs, and try to combine them with regcoms. A good start is a full diffuse/specular bumpmapper using VPs to calculate the tangents and binormals on the GPU.

* Last but not least, go on to texture shaders. They are only available on GF3+, so make sure to have such a card before attempting to use them. Texture shaders are a very powerful tool, but they are very hard to use in the right way and *totally* unintuitive. Sometimes trial and error is the only way to get an effect running...

* Also have a look into render-to-texture features (pbuffers). Sometimes a complex problem can't be resolved in a single pass, even on powerful hardware. In that case, pbuffers can be a fast and convenient way to combine multiple passes.

* At this point, you'll see that the programmable vertex and fragment pipeline is a very powerful and versatile system, but one with inherent and sharply defined limitations and rules. The remaining challenge is to find algorithms and techniques that produce the desired effect, but still fit the limitations imposed by the pipeline.
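To make the first point concrete, here is a minimal register combiner sketch for the kind of dot3 bumpmapper mentioned above. This is only an illustration of the GL_NV_register_combiners API, not code from anyone in this thread; it assumes texture unit 0 holds a tangent-space normal map and texture unit 1 a normalization cubemap delivering the light vector, both stored in [0,1] and expanded to [-1,1] by the combiners (extension entry points fetched via wglGetProcAddress are omitted):

    glEnable(GL_REGISTER_COMBINERS_NV);
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    // general combiner 0, RGB portion: spare0 = expand(tex0) . expand(tex1)
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                       GL_NONE, GL_NONE,
                       GL_TRUE,      // treat A*B as a dot product -> N.L in spare0
                       GL_FALSE, GL_FALSE);

    // final combiner computes A*B + (1-A)*C + D  ->  here (N.L) * primary color
    glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);

Putting the dot product in final-combiner input A and the base color in B gives a plain diffuse bumpmap in a single pass, which is exactly the kind of "break the equation up to fit the combiners" exercise the bullet describes.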

To start with vertex/pixelshaders, you can try Nutty.org or Delphi3d.net. Both have some nice examples, well suited for beginners.
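If you want a first vertex program to type in before digging through those sites, the smallest useful NV_vertex_program (transform the position, pass the color through) looks like this. This is standard usage straight from the extension spec, not code from this thread:

    // GL headers assumed; extension entry points via wglGetProcAddress, omitted
    const char* vp =
        "!!VP1.0\n"
        "DP4 o[HPOS].x, c[0], v[OPOS];\n"   // clip-space position = MVP * vertex
        "DP4 o[HPOS].y, c[1], v[OPOS];\n"
        "DP4 o[HPOS].z, c[2], v[OPOS];\n"
        "DP4 o[HPOS].w, c[3], v[OPOS];\n"
        "MOV o[COL0], v[COL0];\n"           // pass the vertex color through
        "END\n";

    GLuint id;
    glGenProgramsNV(1, &id);
    glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);
    glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id,
                    (GLsizei)strlen(vp), (const GLubyte*)vp);
    // let the driver keep c[0..3] in sync with the modelview-projection matrix
    glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0, GL_MODELVIEW_PROJECTION_NV, GL_IDENTITY_NV);
    glEnable(GL_VERTEX_PROGRAM_NV);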

Back to clouds (and some shameless self-advertising):
OK, the full hardware implementation of my sky engine runs rather well now. All the clouds in the pics below are fully calculated by the GPU, and can morph and billow in realtime! I still have a problem with lighting though, it's not as good as it was in the CPU-only version.

Some shots:

Another sunset. Note that the cloud lighting is screwed; I don't know why yet. The clouds use 8 octaves of Perlin noise, and I added a slight additive glow around the sun, which makes it look nicer.


Clouds at daytime. These use 12 octaves of Perlin noise; you can see that the fractal detail is a lot finer than in the previous images. Lighting is still bugged.


I had some fun tweaking some parameters. Note the very low resolution of the cloud noise: I cut the thing down to 6 octaves to see how it would perform. It isn't worth it (5% performance gain), 12 octaves is a lot nicer!


/ Yann


[edited by - Yann L on June 7, 2002 8:11:25 PM]
Those clouds give me a geekgasm.

Peace,
ZE.

//email me.//zealouselixir software.//msdn.//n00biez.//
miscellaneous links

[if you have a link proposal, email me.]


Looks like photos...

That's awesome Yann! I didn't believe one could do such skies yet...
I like your cloud box idea, Yann. But I am most interested in how you are shading and rendering the clouds. How closely are you following Harris and Dobashi? (For example, I don't think you are using texture splats.) How are you generating voxel data from the 2D texture and how much of it are you using? How then do you shade and render? This seems to be the key to getting good, adjustable looks.

I have thought, for example, of "layering" the texture some number of times above itself. I've also thought that maybe the layers are only important for shading, and not for rendering -- i.e., only use the bottom voxel for rendering after its incident light has been determined by multiple forward scattering.

I haven't implemented any of this, but I just hacked together a function to approximate multiple forward scattering. Basically, clouds get darker the thicker they are, but the amount of attenuation depends on the angle between the camera ray and the sun ray. The exact dependence is e^(-7.5 * (1 - .9 * dot^100) * value), where value (in [0,1]) is the cloud density and dot is the dot product between the two rays.

This makes the clouds look very flat, especially near the horizon, when you should be looking over the top of the clouds. To improve this, I added another hack: decrease the cloud density if the density falls off as the camera ray is nudged upward a little.

Here's what that looks like, with and without the sun so you can better see the shading nearby. The clouds in the distance still look flat, and even the clouds further up don't look quite right. Presumably with some parameter tweaking I could improve things, but I'd rather use a systematic, efficient method. (eight octaves of noise, halving the amplitude at each octave, base frequency 1, cloud plane height 1)




Here's the exact code (minus the last hack, it's too ugly!), if anyone needs some help getting started:

    vector lt = normalize((0, -.4, -1));    /* direction toward the sun */

    /* project the view ray onto the cloud plane (y = 1) */
    I = vtransform("shader", "world", I);
    setycomp(I, ycomp(I) - horizonlevel);
    PP = transform("world", P);
    setycomp(PP, ycomp(PP) - horizonlevel);
    x = xcomp(PP);  y = ycomp(PP);  z = zcomp(PP);
    setycomp(PP, 1);
    setzcomp(PP, z + zcomp(I) * (1 - y) / y);
    setxcomp(PP, x + (zcomp(PP) - z) * xcomp(I) / zcomp(I));

    /* attenuation from the angle between the view ray and the sun ray */
    dot = pow(abs(lt . normalize(I)), 100);
    attenuate = exp(-7.5 * (1 - 0.9 * dot));

    /* Use fractional Brownian motion to compute a value for this point */
    /* value = fBm (PP, omega, lambda, octaves); */
    value = 0;  l = 1;  o = 1;  a = 0;
    for (i = 0;  i < octaves;  i += 1) {
        a += o * snoise(PP * l + label);
        l *= 2;
        o *= omega;
    }

    /* clamp and sharpen the noise into a cloud opacity */
    value = clamp(a - threshold, 0, 1);
    value = 1 - pow(sharpness, 255 * value);
    dot = pow(attenuate, value);

    /* sky gradient, then composite the cloud on top */
    skycolor = mix(midcolor, skycolor, smoothstep(.2, .6, v));
    skycolor = mix(horizoncolor, skycolor, smoothstep(-.05, .2, v));
    //skycolor = mix(skycolor, white, pow(abs(lt . normalize(I)), 1024));
    //skycolor = mix(skycolor, white, .5 * pow(abs(lt . normalize(I)), 64));
    Ct = value * dot * cloudcolor + (1 - value) * skycolor;


[edited by - greeneggs on May 26, 2002 5:25:57 PM]
quote:
I like your cloud box idea, Yann. But I am most interested in how you are shading and rendering the clouds. How closely are you following Harris and Dobashi? (For example, I don't think you are using texture splats.) How are you generating voxel data from the 2D texture and how much of it are you using? How then do you shade and render? This seems to be the key to getting good, adjustable looks.

Right, shading is very important. Basically, I use the algorithm outlined in Harris 2001, chapter 2. I modified the implementation so that it works with a 2.5D digital differential analyzer (DDA) to compute the integral over a low resolution voxel field. That's more or less all I use from Harris/Dobashi, since the remaining parts of their papers concentrate on their respective rendering techniques, which are substantially different from mine (esp. Dobashi's metaball splatting approach).

The original clamped cloud noise (before exponentiation; Fig. 2 in one of my posts above) can be seen as a heightfield (in fact, it would just look like a terrain if you turned it upside down). Now, for each voxel in the cloud heightfield, I trace a ray from the voxel to the sun, through the voxel field, approximating the multiple scattering integral along the way (discretized over the resolution of the voxel grid). I do that on the CPU, and on a lowres version of the noise, usually only the first 3 or 4 octaves; lighting doesn't need to be as precise as opacity. Also note that you need to do it before exponentiation, since the pow() will destroy any valid heightfield information!

Now, this is essentially a 2D raytracing operation (it's a 2D noise grid). But you have to take the 'fake' thickness of the clouds into account. That's why the DDA I used is a 2.5D version: it traces through a 2D field (a simple Bresenham tracer is fine), but takes the extent traveled through a 3D voxel into account. That way, the result is as if I had calculated the multiple scattering integral over a real 3D cloud volume (when in fact it was a fake 2D heightfield).
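To illustrate the idea, here is a rough C++ sketch of such a 2.5D trace. It is not my actual implementation: a plain cell-stepping march stands in for the Bresenham DDA, and all names and the extinction constant are illustrative.

    #include <cmath>
    #include <vector>

    struct CloudGrid {
        int w, h;
        std::vector<float> density;        // clamped noise, first 3-4 octaves,
        float at(int x, int z) const {     // sampled BEFORE the pow() sharpening
            return density[z * w + x];
        }
    };

    // March from cell (x0, z0) toward the sun. sunDir* is the normalized sun
    // direction (y up, y > 0, not exactly at the zenith). maxThickness is the
    // fake cloud slab height, expressed in grid-cell units.
    float traceToSun(const CloudGrid& g, int x0, int z0,
                     float sunDirX, float sunDirY, float sunDirZ,
                     float maxThickness, float kExtinction)
    {
        float horiz = std::sqrt(sunDirX * sunDirX + sunDirZ * sunDirZ);
        float dx = sunDirX / horiz;                       // 2D step across the grid
        float dz = sunDirZ / horiz;
        float climb = sunDirY / horiz;                    // altitude gained per 2D step
        float stepLen = std::sqrt(1.0f + climb * climb);  // true 3D length per step

        float x = x0 + 0.5f, z = z0 + 0.5f;
        float altitude = 0.0f, opticalDepth = 0.0f;
        while (x >= 0.0f && z >= 0.0f && x < g.w && z < g.h &&
               altitude < maxThickness) {
            float d = g.at((int)x, (int)z);
            if (altitude < d * maxThickness)              // still inside the fake slab?
                opticalDepth += d * stepLen;
            x += dx;  z += dz;  altitude += climb;
        }
        return std::exp(-kExtinction * opticalDepth);     // sunlight reaching this texel
    }

Since the grid is low resolution, this whole lighting pass stays cheap, and the result can be baked into the lighting values used at render time.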

quote:
I have thought, for example, of "layering" the texture some number of times above itself. I've also thought that maybe the layers are only important for shading, and not for rendering -- i.e., only use the bottom voxel for rendering after its incident light has been determined by multiple forward scattering.

Well, the rendering is simply done by drawing a single-layer 2D plane textured with the full cloud noise.
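In 2002-era OpenGL, such a renderer can literally be one blended quad at cloud height. A rough sketch, with the texture id, sizes and tiling made up for illustration (the alpha channel is assumed to hold the exponentiated cloud opacity):

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, cloudTex);  // hypothetical: RGB = lit cloud color, A = opacity
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(-planeSize, cloudHeight, -planeSize);
        glTexCoord2f(8, 0); glVertex3f( planeSize, cloudHeight, -planeSize);
        glTexCoord2f(8, 8); glVertex3f( planeSize, cloudHeight,  planeSize);
        glTexCoord2f(0, 8); glVertex3f(-planeSize, cloudHeight,  planeSize);
    glEnd();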

[*Yann is speculating again, if you're not into theoretical pixelshader stuff, feel free to skip *]
There is an interesting avenue one could try: it could actually be possible to calculate the full multiple scattering equation entirely on the GPU as well, using a similar approach to Lastra/Harris, chapter 2.2: split up the procedural cloud layer into multiple layers, say 12, divided by a threshold value, and render them from the viewpoint of the sun. Blending would be done as described by Harris. That way, the 3D hardware would be able to approximate the scattering integral itself. The only problem would be the finite angle readback (gamma in their paper); this would probably require some kind of convolution kernel. Could be interesting to investigate further here.
[*end of speculation*]
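Just to make the structure of that speculation concrete (purely illustrative, nothing here is implemented; the two helpers are hypothetical and the exact blend setup would have to follow Harris' paper):

    const int kLayers = 12;                     // layer count suggested above
    glEnable(GL_BLEND);                         // blend mode as per Harris
    for (int i = 0; i < kLayers; ++i) {
        float threshold = (i + 1) / float(kLayers);
        bindThresholdLayer(threshold);  // hypothetical: cloud noise clamped at this threshold
        drawLayerFromSun(i);            // hypothetical: slice drawn from the sun's viewpoint
    }
    // a convolution over the result would then stand in for the finite readback angle (gamma)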

quote:
I haven't implemented any of this, but I just hacked together a function to approximate multiple forward scattering. Basically, clouds get darker the thicker they are, but the amount of attenuation depends on the angle between the camera ray and the sun ray. The exact dependence is e^(-7.5 * (1 - .9 * dot^100) * value), where value (in [0,1]) is the cloud density and dot is the dot product between the two rays.

That's exactly your problem. You are essentially approximating the integral over a different light path than the light incidence dot product. Your method would be correct if the sun were exactly above the camera, and all clouds as well; then the integration over the cloud volume would exactly equal the cloud density at that point. But the further the sun actually is from this ideal position, the more error you introduce. For faraway clouds, you are correctly computing the incident light (through the dot product), but your scattering approximation still behaves as if the sun were exactly above the cloud. This is why you get those thin-looking clouds with black interiors.

quote:
Here's what that looks like, with and without the sun so you can better see the shading nearby. The clouds in the distance still look flat, and even the clouds further up don't look quite right. Presumably with some parameter tweaking I could improve things, but I'd rather use a systematic, efficient method. (eight octaves of noise, halving the amplitude at each octave, base frequency 1, cloud plane height 1)

Not bad, you're on the right track. If you fix your multiple scattering integration, you'll get approximately the same results as I have. The key is to take into account the full cloud density distribution along the ray between the cloud texel and the sun; this will also take care of self-shadowing. But to do that, you'll need to trace through the grid in one way or another (or use multiple layer projections). As mentioned, that's not too bad, since you can do it at a far lower resolution than the actual cloud noise itself.

/ Yann

[edited by - Yann L on May 27, 2002 11:16:32 AM]
I just came back from Lake George (I went there for the Memorial Day weekend). The scenery there is beautiful, however I couldn't fully enjoy it because I kept looking at the clouds, landscape, vegetation and water, trying to pick out all the different effects I should implement. Ignorance is truly bliss. My non-programming friends enjoyed everything so much more than I did

Anyway, here are the shots that I promised. I didn't implement the sun yet, but I am very tempted to post the shots nevertheless.



[Edit] Kill, Geocities won't allow you to link directly to images. Use the proxy trick I also use for Brinkster - edit your post to see how it works! /Yann

[edited by - Yann L on May 28, 2002 9:40:23 AM]
Yann: Could you please explain the skybox thing again, in a little more detail if you can? I really don't get how this should work. And about adding rays of light as quads with a 1D texture - any more info about this would be welcome.

You should never let your fears become the boundaries of your dreams.

This topic is closed to new replies.
