Rendering clouds from space
Hi all.
I'm working on rendering virtual worlds with dynamic LOD, and so far the one thing that completely kills me is clouds (on a global scale). You can find some very old screenshots at http://home.comcast.net/~s-p-oneil/screenshots.htm. I have newer algorithms and code, but I haven't gotten around to publishing any updates in a while.
In any case, all of the publications I've found on rendering clouds refer to rendering them up close. I have 3 different sample implementations written for volumetric cloud rendering. Each has its own wrinkles to be ironed out, and they're all a bit performance-heavy, but they all look good up close.
Unfortunately, ALL of them look horrible when the camera is in space. And despite looking horrible, they still manage to cripple the system. From space, the view angle brings in too many clouds and the LOD level required is too great (even just to bring the visual quality up to merely piss-poor). If you do a web search for pictures of cloud systems from space, the detail level is usually incredible. From oblique camera angles, the clouds usually look very detailed, very 3D, and show some nice shadows on the ground.
So my main question is, has anyone here ever seen anything related to modeling/rendering clouds from space? If not, does anyone have some good ideas? I'm not aiming for super-realism, but it has to look good or there's no point in doing it. The clouds are NOT static and are NOT generated artistically. Everything is generated dynamically at runtime using various mathematical algorithms, and I plan to use the cloud implementation for everything from mostly dry planets to gas giants like Jupiter. I plan to add some cheap climate/weather modeling after I get the rendering working.
I've given some thought to generating high-res texture and bump/displacement maps for space, and switching to 3D volumetric rendering when the camera gets close enough, but there would be a lot of problems to work out with that. One would be keeping them from looking flat. Another would be that when working with dynamic LOD schemes, transitions between two different rendering methods are often difficult to make seamless. I never have much time to work on this, so I try to avoid going down a path that I know is going to be exceedingly messy. ;-)
Thanks,
Sean
hmmm.... I render clouds that look more or less okay even from space, but I do it slowly... very slowly. Non-realtime, actually; far from realtime.
You could probably still use some kind of impostors, maybe together with a dome partly around the camera, and use slow-but-good rendering algorithms to update the impostors and that pre-rendered dome. Or use other methods that exploit frame-to-frame coherency.
I also had the idea of making a game with a whole galaxy and procedural planets and everything, BTW, and I made the galaxy... but not the planets yet.
I've thought about this stuff before... I think there was an article in Real-Time Rendering (I think it was from there, not 100% sure though) about generating clouds from space procedurally. IIRC, the patterns looked realistic. The 2D->3D transition is pretty tough though... so good luck with that.
-Navreet
Quote:Original post by Dmytry
hmmm.... I render clouds that look more or less okay even from space, but I do it slowly... very slowly. Non-realtime, actually; far from realtime.
You could probably still use some kind of impostors, maybe together with a dome partly around the camera, and use slow-but-good rendering algorithms to update the impostors and that pre-rendered dome. Or use other methods that exploit frame-to-frame coherency.
I also had the idea of making a game with a whole galaxy and procedural planets and everything, BTW, and I made the galaxy... but not the planets yet.
Thanks. I've got some impostoring test implementations, but one of the problems with it is how to partition the cloud particles so that they will be properly sorted for the alpha blending operation. My current ground LOD scheme uses 6 quad-trees (one for each face of a cube mapped onto a sphere), so everything is broken up into squares. If I try to impostor two neighboring squares, it becomes impossible to render the cloud particles that intersect the boundary in the proper interleaved order.
I could use a different partitioning scheme for the cloud particles, but then I would need more CPU time to sort them. The quad-tree will sort its own cells cheaply (plus I'm already doing it), and if I keep the cloud particles in each cell in a fixed grid, the rest of the sorting is cheap as well. I've tried using an alpha test dissolve to avoid sorting. It looks ok in SpeedTree's grass implementation, but it doesn't look good when applied to clouds.
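For what it's worth, the per-particle sort that the quad-tree scheme amortizes is just a painter's-algorithm sort by camera distance; here is a minimal sketch (the particle positions and camera position are made up for illustration):

```python
def sort_back_to_front(particles, camera):
    """Painter's-algorithm sort: alpha-blended particles must be drawn
    farthest-first so nearer particles composite correctly over them."""
    def dist_sq(p):
        return sum((pc - cc) ** 2 for pc, cc in zip(p, camera))
    return sorted(particles, key=dist_sq, reverse=True)

# Hypothetical particle positions (x, y, z) with the camera at the origin.
particles = [(0, 0, 1), (0, 0, 5), (0, 0, 3)]
ordered = sort_back_to_front(particles, (0, 0, 0))
# ordered is farthest-first: (0, 0, 5), (0, 0, 3), (0, 0, 1)
```

The quad-tree version sidesteps most of this cost by sorting cells (cheap, and already done for the terrain) and keeping particles in a fixed grid within each cell, which is exactly what breaks when two neighboring cells are collapsed into separate impostors.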
I would be interested in hearing how you're rendering clouds, even if it's not real-time. If nothing else, it may help me think of a different angle to attack the problem from.
Thanks,
Sean
One approach could be to use real clouds from space. For instance, you could stitch together a world view from the following sources:
http://www.ssec.wisc.edu/data/geo/west/
http://rapidfire.sci.gsfc.nasa.gov/realtime/2005034/
and others...
Depending on the view and source, you could have a global update every 6 hours or so and portions updated more quickly. A collection of a few weeks worth of data could be reused.
As an alternative, you could look into what's available from THREDDS sources. You might be able to get somewhat real-time cloud data that could be used to render decent cloud cover.
BTW, very cool planet rendering on your site (I had trouble connecting to it yesterday).
I'm rendering clouds by stepping a ray through them, and I use deep shadow maps for the shadows (click on "some images" in my sig for examples). Speaking of alpha blending with impostors... what if each impostor held the contribution of the clouds in a certain defined volume? For example, you could have several belt-like cylinders around the camera for the clouds closer to the horizon; the cylinders carry textures and act as impostors. Clouds are rendered into them using some slow algorithm, and a cylinder sitting between two other cylinders holds the contribution of the cloud space that is closer to it than to the others (that is, a thick cylindrical volume). The cylinders are initially centered on the camera, so if the camera moves, you'll see far-away clouds "move slower" than close ones.
At any moment there are two sets of cylindrical impostors, old and new; the old render is blended out while the new one is blended in.
And for the clouds under the camera, several spheres or planes could be used.
I must say I've never actually built an impostor system.
Looks very good so far. I don't know much about cloud rendering, but here are some general ideas nonetheless.
I assume you generate your clouds by evaluating some density function, right?
First, I'd make sure this density function can be evaluated at lots of different LODs, which, if you're using Perlin noise, shouldn't be a problem.
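A minimal 1D sketch of that idea, using a hash-based value noise as a stand-in for Perlin noise (the hash constants are arbitrary): truncating the octave loop gives a cheaper, lower-detail evaluation of the same underlying function.

```python
import math

def value_noise_1d(x, seed=0):
    """Cheap stand-in for Perlin noise: smoothly interpolated hash of the
    integer lattice. Any band-limited noise works; the LOD idea only needs
    the octave sum below."""
    def hash01(i):
        n = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
        n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
        return (n & 0xFFFF) / 0xFFFF
    i, f = int(math.floor(x)), x - math.floor(x)
    t = f * f * (3 - 2 * f)  # smoothstep interpolation between lattice points
    return hash01(i) * (1 - t) + hash01(i + 1) * t

def cloud_density(x, octaves):
    """fBm density: summing fewer octaves is a cheaper, lower-LOD
    evaluation of the same function (coarse shape is preserved)."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * value_noise_1d(x * frequency)
        norm += amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return total / norm  # normalized to [0, 1]
```

A distant planet texture could be filled with `cloud_density(x, 2)` while nearby volumetric particles sample `cloud_density(x, 8)`, and both agree on the large-scale cloud shapes.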
For far-away rendering, keep a spherically mapped texture for each planet, updating it only where needed, possibly with multiple layers LODed in depending on distance.
For closer-up clouds, I would suggest something like a dynamically LODed particle system. Particle size could simply be varied with distance: close by, the density function gets evaluated at quite high resolution with small particles; far away, at low resolution with big particles. Just the basic LOD idea: keep the size in screen space constant.
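That "constant screen-space size" rule falls straight out of perspective projection: projected size scales as world size divided by distance, so world size should grow linearly with distance. A sketch (`target_screen_size` and `fov_scale` are hypothetical parameters, not from the post):

```python
def particle_world_size(distance, target_screen_size, fov_scale=1.0):
    """Keep the projected size roughly constant: under perspective
    projection, screen_size ~ world_size / distance, so scale the
    world-space radius linearly with distance from the camera."""
    return target_screen_size * distance * fov_scale

# A particle twice as far away gets twice the world-space radius,
# so both cover about the same number of pixels on screen.
near = particle_world_size(100.0, 0.05)
far = particle_world_size(200.0, 0.05)
```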
Maybe then some impostor stuff. I remember reading a good article on cloud impostors from the MS Flight Simulator team, but that wasn't aimed at procedural clouds, I believe. I'm not sure it's really needed, though; I think you could already get decent results with a manageable number of particles.
Crossfading between the two might indeed be tricky, certainly since the usability of the 2D cloud layer is hard to estimate. From the surface it might look good, at least the outer layers, but at the horizon, which is further away, the 2D-ness might be obvious.
I have worked on clouds rendering too (remember the emails we exchanged, Sean?), but only as seen from the ground. I will also have to implement it from space, so here are a few thoughts on the subject.
In my opinion it's possible to get "relatively easily" some decent results for rendering clouds from the ground (with particle/impostor techniques, à la Flight Simulator), or for rendering clouds from space (with procedural textures, as seen in "Texturing and Modeling: A Procedural Approach"). The real problem is the transition. Clouds from space are usually applied on a plane (or, at the whole-planet level, on a sphere), so they have no thickness. You can slowly switch between the two techniques (volumetric clouds with particles/impostors, and a texture-mapped plane), but I think the quality might not be good enough. One problem is making the volumetric clouds appear only where there are clouds as seen from space. In some way, the texture from space must be used as a "probability" function for the density/presence of clouds at ground level.
Another idea that popped into my mind would be to use layers of clouds from space, like the fur rendering technique. So instead of generating one 2D texture for the clouds, you generate N 2D textures from a 3D noise function and blend all the layers together.
If you want to go to the next step, you might directly generate the cloud densities in a 3D texture (as opposed to a set of 2D textures) and cast a ray from the viewer in a pixel shader. This will probably require PS 3.0 with loops.
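A CPU sketch of that ray cast, assuming a scalar extinction density sampled along the ray (the uniform-slab density field is made up for illustration):

```python
import math

def raymarch_opacity(density_fn, origin, direction, steps=64, step_len=0.1):
    """Front-to-back accumulation: at each sample, the remaining
    transmittance is attenuated by the local extinction. This is the CPU
    analogue of a looping pixel-shader pass over a 3D density texture."""
    transmittance = 1.0
    for i in range(steps):
        p = tuple(o + d * step_len * i for o, d in zip(origin, direction))
        sigma = density_fn(p)                      # extinction sample
        transmittance *= math.exp(-sigma * step_len)
    return 1.0 - transmittance                     # accumulated cloud opacity

# Hypothetical density field: a uniform slab of density 1 between z=1 and z=2.
slab = lambda p: 1.0 if 1.0 <= p[2] <= 2.0 else 0.0
opacity = raymarch_opacity(slab, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

On the GPU the loop body becomes a 3D texture fetch, with the step count acting as another LOD knob.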
Shadowing from space might be as easy as modulating the cloud texture onto the ground. From the ground, you can still use a shadow-mapping-like technique: generate a viewpoint in the sun direction, looking at the viewer, and generate a shadow frustum over an area of a few kilometers around the viewer. Then the two shadow solutions can be combined in some way.
Just some ideas..
Y.
Ysayena - Yes I remember, and I still don't think it will be as easy as you claim. I've tried a few different implementations, and you haven't tried any yet. I'm not trying to be negative, and I would be very happy to have you prove me wrong. I'm just saying that your claims don't match up with what I've experienced. In this case I stepped away from the problem for a while, and I decided to see if anyone had any fresh ideas or had seen a recent publication I hadn't seen. One problem with applying a cloud texture map to a sphere is that they go completely flat at the horizon of that sphere (they completely disappear), and that's where they should look the most 3-dimensional. Near the horizon, you see the tops and bottoms of clouds where you should be seeing the edges of them. You may be able to alleviate that with displacement or parallax mapping. I haven't looked closely at those yet. Some of the algorithms won't run on my video card, and some require heavy pre-processing steps to generate 3D texture maps as lookup tables.
Eelco - That's a bit like how one of my implementations works now, but it's a bit slow and it has some of the issues I mentioned. Also, cloud particles at the horizon end up being several hundred miles in diameter. I can't make them smaller because the machine is already running pretty slow at this detail level. ;-) Because cloud particles can't be that tall without intersecting both the ground and space, they end up being very long flat ellipsoids, which is pretty noticeable (sometimes the edges poke out into space because they're flat and the atmosphere is not).
AP - I'm looking for something that will work for dynamically-generated worlds. I want to build my own virtual universe, and you can't reuse clouds from one planet to cover them all.
Dmytry - I've given some thought to the cylinder idea (or something like it). MS Flight Sim 2004 used 8 impostors that formed an octagonal ring around the camera. An octagon works very well when the camera is close to the ground. It is very easy to avoid cracks and overlapping areas. (Keeping overlapping areas from being visible seams is not easy.) It is also easy to keep the impostored billboards from intersecting the ground or the sky dome in a bad way, and the space in the impostor textures is used pretty efficiently (i.e. you don't have most of the cloud pixels crammed into one small part of the texture). You can also keep the impostor faces far enough away from the camera that other planes don't seem to magically "appear" up close after flying through a cloud that's supposed to be very far away.
Think about a camera viewpoint in orbit looking at the horizon. The clouds should look very 3D from this angle, and they are in a very thin circular strip surrounding a very large planet. This is not easy to impostor efficiently. (If I'm missing something, then someone please let me know what it is. ;-) Maybe some sort of transformation could be applied to straighten out the curve for the impostor (i.e. map the x axis to an angle and the y axis to altitude). You could transform it back applying the inverse of the transformation to the texture coordinates.
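The angle/altitude transformation suggested above might look like this in 2D (a planet cross-section with the planet center at the origin; the function names are hypothetical):

```python
import math

def unroll(px, py, planet_radius):
    """Map a point on the curved horizon strip to impostor texture space:
    x = angle around the planet, y = altitude above the surface."""
    angle = math.atan2(py, px)
    altitude = math.hypot(px, py) - planet_radius
    return angle, altitude

def roll_back(angle, altitude, planet_radius):
    """Inverse mapping, applied to the texture coordinates when the
    impostor is rendered back onto the curved strip."""
    r = planet_radius + altitude
    return r * math.cos(angle), r * math.sin(angle)

# Round trip: a cloud 5 km above an Earth-radius sphere (units in km).
a, h = unroll(6376.0, 0.0, 6371.0)
x, y = roll_back(a, h, 6371.0)
```

Rendering into the unrolled (angle, altitude) rectangle keeps the texels evenly distributed along the strip instead of cramming most of the cloud pixels into a thin curved band.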
Another idea would be to only track individual cloud bunches and impostor them separately. So from space each storm system would have its own impostor. You wouldn't have to worry about cracks and/or overlap. When the camera gets close enough to one cloud bunch, you could transition to a different rendering mode just for that bunch. This would simplify the impostor rendering somewhat, but it introduces a question of how many cloud bunches you would need to track, and how to handle overlapping bunches, perhaps splitting and merging them as the wind pushes them together or tears them apart. This could make a gas giant particularly challenging. Though it varies, even the Earth seems to have more cloud cover than clear skies in a lot of pictures from space.
Thanks everyone,
Sean
My idea actually was not to make real impostors; they don't turn towards the camera.
Just a textured ring/belt, so the faces of the belt do not turn towards the camera. If you wish, a deformable belt to make it look nicer. Imagine a large sphere with the belt placed on its surface, the belt radius being less than the sphere radius.
Actually, that belt should be conical and orthogonal to the sphere. When the camera is far away, the belt becomes a ring around the planet and is also used for atmospheric scattering. When the camera looks from low orbit, it sees the textured cloud sphere, which is blended out at a certain distance where several belts are blended in. It should look 3D enough, and at sane camera speeds (around orbital speed) it won't require redrawing too often. The belt radius and height should be calculated for the optimal view. Actually, it's not so hard.
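One possible way to derive the belt placement, if "optimal view" means reaching out to the visible horizon: for a camera at altitude h above a sphere of radius R, the horizon lies at straight-line distance sqrt(h(2R + h)). This helper is an assumption on my part, not something from the post:

```python
import math

def horizon_distance(altitude, planet_radius):
    """Straight-line distance from a camera at the given altitude to the
    horizon of a sphere: d = sqrt(h * (2R + h)), from the tangent line in
    the right triangle with sides R, d and hypotenuse R + h."""
    return math.sqrt(altitude * (2 * planet_radius + altitude))

# Rough sanity check with Earth-like numbers (all lengths in km):
d = horizon_distance(400.0, 6371.0)  # camera in low orbit
```

The outermost belt could be placed near that distance so it frames the horizon, with the inner belts filling the gap back toward the camera.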
It's also possible to make it octagonal, though that requires some more complicated transparency equations to make it blend with the plane seamlessly...
(And of course, clouds that are close to the camera must be rendered using some other method.)
As for impostor-per-cloud, the problem is that small, nicely spaced clouds are not very typical on Earth... usually you have big areas of nearly overcast sky...