s_p_oneil

Rendering clouds from space


Hi all. I'm working on rendering virtual worlds with dynamic LOD, and so far the one thing that completely kills me is clouds (on a global scale). You can find some very old screenshots at http://home.comcast.net/~s-p-oneil/screenshots.htm. I have newer algorithms and code, but I haven't gotten around to publishing any updates in a while.

In any case, all of the publications I've found on rendering clouds deal with rendering them up close. I have three different sample implementations of volumetric cloud rendering. Each has its own wrinkles to be ironed out, and they're all a bit performance-heavy, but they all look good up close. Unfortunately, ALL of them look horrible when the camera is in space, and despite looking horrible, they still manage to cripple the system. From space, the view angle brings too many clouds into view, and the LOD level required is too high (even just to bring the visual quality up to merely piss-poor). If you do a web search for pictures of cloud systems from space, the detail level is usually incredible. From oblique camera angles, the clouds usually look very detailed, very 3D, and cast some nice shadows on the ground.

So my main question is: has anyone here ever seen anything related to modeling/rendering clouds from space? If not, does anyone have some good ideas? I'm not aiming for super-realism, but it has to look good or there's no point in doing it. The clouds are NOT static and are NOT generated artistically. Everything is generated dynamically at runtime using various mathematical algorithms, and I plan to use the cloud implementation for everything from mostly dry planets to gas giants like Jupiter. I plan to add some cheap climate/weather modeling after I get the rendering working.

I've given some thought to generating high-res texture and bump/displacement maps for space and switching to 3D volumetric rendering when the camera gets close enough, but there would be a lot of problems to work out with that. One would be keeping the clouds from looking flat. Another is that with dynamic LOD schemes, transitions between two different rendering methods are often difficult to make seamless. I never have much time to work on this, so I try to avoid going down a path that I know is going to be exceedingly messy. ;-)

Thanks,
Sean
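P.S. To give an idea of what I mean by "various mathematical algorithms": the density functions are basically fractal noise. Here's a stripped-down C++ sketch (illustrative only, not my actual code; the hash-based value noise is just a stand-in for whatever basis function you prefer):

// Sketch of a fractal-Brownian-motion cloud density built on a simple
// hash-based value noise.  Illustrative only; a real engine would use a
// better basis function (Perlin, simplex, ...) and a spherical mapping.
#include <cmath>

static float hash3(int x, int y, int z)
{
    unsigned n = (unsigned)x * 73856093u ^ (unsigned)y * 19349663u ^ (unsigned)z * 83492791u;
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return (n & 0x7fffffffu) / float(0x7fffffff);   // 0..1
}

static float mixf(float a, float b, float t) { return a + (b - a) * t; }

// Trilinearly interpolated lattice noise in [0,1].
static float noise3(float x, float y, float z)
{
    int   xi = (int)std::floor(x), yi = (int)std::floor(y), zi = (int)std::floor(z);
    float xf = x - xi, yf = y - yi, zf = z - zi;
    float x00 = mixf(hash3(xi, yi,   zi  ), hash3(xi+1, yi,   zi  ), xf);
    float x10 = mixf(hash3(xi, yi+1, zi  ), hash3(xi+1, yi+1, zi  ), xf);
    float x01 = mixf(hash3(xi, yi,   zi+1), hash3(xi+1, yi,   zi+1), xf);
    float x11 = mixf(hash3(xi, yi+1, zi+1), hash3(xi+1, yi+1, zi+1), xf);
    return mixf(mixf(x00, x10, yf), mixf(x01, x11, yf), zf);
}

// Cloud density at a point: sum a few octaves, then apply a coverage
// threshold so large areas stay clear.
float CloudDensity(float x, float y, float z, int octaves = 5, float coverage = 0.55f)
{
    float sum = 0.0f, amp = 0.5f, freq = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum  += amp * noise3(x * freq, y * freq, z * freq);
        amp  *= 0.5f;
        freq *= 2.0f;
    }
    return (sum > coverage) ? (sum - coverage) / (1.0f - coverage) : 0.0f;
}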

Hmmm... I render clouds that look more or less okay even from space, but I do it slowly... very slowly. Non-realtime, actually, far from realtime.

You could probably still use some kind of impostors, maybe together with a dome partly surrounding the camera, and use slow-but-good rendering algorithms to update the impostors and that pre-rendered thing. Or use other methods that exploit frame-to-frame coherency.

BTW, I also had the idea of making a game with a whole galaxy and procedural planets and everything, and I made a galaxy... but no planets yet.

I've thought about this stuff before... I think there was an article in Real Time Rendering (I think it was from there, not 100% sure though) about generating clouds from space -- procedurally. IIRC, the patterns looked realistic. Transition from 2D->3D is pretty tough though... so good luck on that.

-Navreet

Quote:
Original post by Dmytry
Hmmm... I render clouds that look more or less okay even from space, but I do it slowly... very slowly. Non-realtime, actually, far from realtime.

You could probably still use some kind of impostors, maybe together with a dome partly surrounding the camera, and use slow-but-good rendering algorithms to update the impostors and that pre-rendered thing. Or use other methods that exploit frame-to-frame coherency.

BTW, I also had the idea of making a game with a whole galaxy and procedural planets and everything, and I made a galaxy... but no planets yet.


Thanks. I've got some impostoring test implementations, but one of the problems with impostoring is how to partition the cloud particles so that they're properly sorted for the alpha blending operation. My current ground LOD scheme uses six quad-trees (one for each face of a cube mapped onto a sphere), so everything is broken up into squares. If I try to impostor two neighboring squares, it becomes impossible to render the cloud particles that intersect the boundary in the proper interleaved order.

I could use a different partitioning scheme for the cloud particles, but then I would need more CPU time to sort them. The quad-tree will sort its own cells cheaply (plus I'm already doing it), and if I keep the cloud particles in each cell in a fixed grid, the rest of the sorting is cheap as well. I've tried using an alpha test dissolve to avoid sorting. It looks ok in SpeedTree's grass implementation, but it doesn't look good when applied to clouds.
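Roughly, the ordering I need for the blending looks like this (a simplified sketch with made-up Cell/Particle types, not my engine's actual classes):

// Simplified sketch of the back-to-front ordering that alpha-blended cloud
// particles need.  Cell and Particle are hypothetical stand-ins.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

static float DistSq(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

struct Particle { Vec3 pos; float radius, density; };
struct Cell     { Vec3 center; std::vector<Particle> particles; };

// Sort quad-tree cells far-to-near, then the particles inside each cell.
// Cheap, but it silently assumes no particle overlaps a cell boundary, which
// is exactly the assumption that breaks once neighboring cells are rendered
// into separate impostors.
void SortForBlending(std::vector<Cell>& cells, const Vec3& eye)
{
    std::sort(cells.begin(), cells.end(),
        [&](const Cell& a, const Cell& b)
        { return DistSq(a.center, eye) > DistSq(b.center, eye); });

    for (Cell& c : cells)
        std::sort(c.particles.begin(), c.particles.end(),
            [&](const Particle& a, const Particle& b)
            { return DistSq(a.pos, eye) > DistSq(b.pos, eye); });
}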

I would be interested in hearing how you're rendering clouds, even if it's not real-time. If nothing else, it may help me think of a different angle to attack the problem from.

Thanks,
Sean

Guest Anonymous Poster
One approach could be to use real clouds from space. For instance, you could stitch together a world view from the following sources:

http://www.ssec.wisc.edu/data/geo/west/
http://rapidfire.sci.gsfc.nasa.gov/realtime/2005034/

and others...

Depending on the view and source, you could have a global update every 6 hours or so, with portions updated more quickly. A collection of a few weeks' worth of data could be reused.

As an alternative, you could look into what's available from THREDDS sources. You might be able to get somewhat real-time cloud data that could be used to render decent cloud cover.


BTW, very cool planet rendering on your site (I had trouble connecting to it yesterday).

I'm rendering clouds by stepping a ray through them. I also use deep shadow maps for shadows (click on "some images" in my sig for examples).

Speaking of alpha blending with impostors... what if each impostor held the contribution of the clouds in a certain defined volume? For example, you have several belt-like cylinders around the camera for clouds closer to the horizon; the cylinders have textures and act as impostors. Clouds are rendered into them using some slow algorithm, and a cylinder that sits between two other cylinders holds the contribution of the cloud space that is closer to it than to the other ones (that is, a thick cylindrical volume). The cylinders are initially placed centered on the camera. If the camera moves, you'll see far-away clouds "move slower" than close ones.
At any moment there are two sets of cylindrical impostors, old and new; the render of the old set is blended out while the new one is blended in.
And for clouds under the camera, several spheres or planes could be used.
I must say I've never actually built an impostor system.
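In outline, the ray-stepping part is something like this (heavily simplified; the density field here is a dummy spherical puff so the sketch is self-contained, and the lighting/deep-shadow-map part is left out entirely):

// Step a ray through a cloud density field, accumulating opacity.
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 Add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 Scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Placeholder density: a single spherical puff of radius 1 around the origin.
// The real renderer evaluates a noise-based field here.
static float density(const Vec3& p)
{
    float r2 = p.x * p.x + p.y * p.y + p.z * p.z;
    return (r2 < 1.0f) ? (1.0f - r2) : 0.0f;
}

// Returns the opacity seen along the ray segment [tNear, tFar].
float MarchCloud(Vec3 origin, Vec3 dir, float tNear, float tFar, float step)
{
    const float extinction = 4.0f;              // tuning constant
    float transmittance = 1.0f;
    for (float t = tNear; t < tFar && transmittance > 0.01f; t += step)
    {
        Vec3  p = Add(origin, Scale(dir, t));
        float d = density(p);
        if (d > 0.0f)
            transmittance *= std::exp(-extinction * d * step);
    }
    return 1.0f - transmittance;
}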

Looks very good so far. I don't know much about cloud rendering, but here are some general ideas nonetheless.

I assume you generate your clouds by evaluating some density function, right?

First, I'd make sure this density function can be evaluated at lots of different LODs, which, if you're using Perlin noise, shouldn't be a problem.

For far-away rendering, keep a spherically mapped texture for each planet, updating it only where needed, possibly with multiple layers LODed in depending on distance.

For closer-up clouds, I would suggest something like a dynamically LODed particle system. Particle size could simply be varied with distance: close by, the density function gets evaluated at quite a high resolution with small particles; far away, at low resolution with big particles. Just the basic LOD idea: keep the size in screen space constant.
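Something like this, just to give the flavour (a sketch with made-up numbers, not real code):

// "Constant size in screen space": pick the particle radius (and hence the
// resolution at which the density function is sampled) from the distance to
// the camera.
#include <cmath>

// Desired angular size of one cloud particle, in radians (roughly 1 degree).
const float kTargetAngle = 0.017f;

// World-space particle radius to use at a given distance from the camera.
float ParticleRadius(float distance, float minRadius, float maxRadius)
{
    float r = distance * std::tan(kTargetAngle * 0.5f);
    if (r < minRadius) r = minRadius;
    if (r > maxRadius) r = maxRadius;
    return r;
}

// The density field is then evaluated on a grid whose spacing matches the
// particle radius, so distant regions need far fewer samples.
float GridSpacing(float distance, float minRadius, float maxRadius)
{
    return 2.0f * ParticleRadius(distance, minRadius, maxRadius);
}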

Maybe then some impostor stuff; I remember reading a good article on cloud impostors from the MS Flight Sim team, but I believe that wasn't aimed at procedural clouds. I'm not sure it's really needed though; I think you could already get decent results with a manageable number of particles.

Crossfading between the two might indeed be tricky, certainly since the usability of the 2D cloud layer is hard to estimate. From the surface it might look good, at least the outer layers, but near the horizon, which is further away, the 2D-ness might be obvious.

I have worked on cloud rendering too (remember the emails we exchanged, Sean?), but only as seen from the ground. I will also have to implement it from space, so here are a few thoughts on the subject.

In my opinion it's possible to get some decent results "relatively easily" for rendering clouds from the ground (with particle/impostor techniques, a la Flight Simulator), or for rendering clouds from space (with procedural textures, as seen in "Texturing and Modeling: A Procedural Approach"). The real problem is the transition. Clouds from space are usually applied on a plane (or, at the whole-planet level, on a sphere), so they have no thickness. You can slowly switch between the two techniques (volumetric clouds with particles/impostors, and a texture-mapped plane), but I think the quality might not be good enough. One problem is to make the volumetric clouds appear only where there are clouds as seen from space. In some way, the texture from space must be used as a "probability" function for the density/presence of clouds at ground level.

Another idea that popped into my mind would be to use layers of clouds from space, like the fur rendering technique: instead of generating one 2D texture for the clouds, you generate N 2D textures from a 3D noise function and blend all the layers together.
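A rough sketch of what I mean (the noise function and the planar (x, y) mapping are placeholders; in practice you would sample a real 3D noise on the sphere):

// Slice a 3D noise field into N 2D layers at increasing altitudes; the layers
// are then alpha-blended back-to-front over the planet, fur-shell style.
#include <cmath>
#include <vector>

// Placeholder 3D noise in [0,1]; a real implementation would use Perlin/fBm.
static float noise3(float x, float y, float z)
{
    return 0.5f + 0.5f * std::sin(x * 12.9898f + y * 78.233f + z * 37.719f);
}

struct CloudLayer { int width, height; std::vector<float> alpha; };

std::vector<CloudLayer> BuildLayers(int numLayers, int width, int height,
                                    float baseAltitude, float layerSpacing)
{
    std::vector<CloudLayer> layers(numLayers);
    for (int n = 0; n < numLayers; ++n)
    {
        CloudLayer& layer = layers[n];
        layer.width = width; layer.height = height;
        layer.alpha.resize((size_t)width * height);
        float z = baseAltitude + n * layerSpacing;     // altitude of this slice
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                // (x, y) kept planar here; really it would map onto the sphere.
                layer.alpha[(size_t)y * width + x] = noise3(x * 0.05f, y * 0.05f, z);
    }
    return layers;   // render back-to-front with alpha blending
}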

If you want to go to the next step, you might directly generate the cloud densities in a 3D texture (as opposed to a set of 2D textures), and cast a ray from the viewer in a pixel shader. This will probably require ps 3.0 with loops.

Shadowing from space might be as easy as modulating the cloud texture onto the ground. From the ground, you can still use a shadow-mapping-like technique: set up a viewpoint in the sun direction, looking at the viewer, and generate a shadow frustum covering an area of a few kilometers around the viewer. Then the two shadow solutions can be combined in some way.
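For example, the combination could be as simple as this (both lookups are placeholders, not real API calls):

// Combine the two shadow terms: from space, the planet-wide cloud texture
// modulates the ground; near the viewer, a sun-aligned shadow map refines it.
float SampleCloudCover(float longitude, float latitude);   // placeholder: cloud texture, 0..1
float SampleLocalShadowMap(float x, float y, float z);     // placeholder: shadow map, 0..1

// Final lighting factor for a ground point (1 = fully lit, 0 = fully shadowed).
float GroundShadow(float longitude, float latitude, float x, float y, float z)
{
    float fromSpace = 1.0f - SampleCloudCover(longitude, latitude);
    float local     = SampleLocalShadowMap(x, y, z);
    // Take the darker of the two so neither term can brighten the other.
    return (fromSpace < local) ? fromSpace : local;
}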

Just some ideas..

Y.

Ysayena - Yes I remember, and I still don't think it will be as easy as you claim. I've tried a few different implementations, and you haven't tried any yet. I'm not trying to be negative, and I would be very happy to have you prove me wrong; I'm just saying that your claims don't match up with what I've experienced. In this case I stepped away from the problem for a while, and I decided to see if anyone had any fresh ideas or had seen a recent publication I hadn't. One problem with applying a cloud texture map to a sphere is that the clouds go completely flat at the horizon of that sphere (they completely disappear), and that's where they should look the most 3-dimensional. Near the horizon you see the tops and bottoms of clouds where you should be seeing their edges. You may be able to alleviate that with displacement or parallax mapping; I haven't looked closely at those yet. Some of the algorithms won't run on my video card, and some require heavy pre-processing steps to generate 3D texture maps as lookup tables.

Eelco - That's a bit like how one of my implementations works now, but it's a bit slow and it has some of the issues I mentioned. Also, cloud particles at the horizon end up being several hundred miles in diameter. I can't make them smaller because the machine is already running pretty slow at this detail level. ;-) Because cloud particles can't be that tall without intersecting both the ground and space, they end up being very long flat ellipsoids, which is pretty noticeable (sometimes the edges poke out into space because they're flat and the atmosphere is not).

AP - I'm looking for something that will work for dynamically-generated worlds. I want to build my own virtual universe, and you can't reuse clouds from one planet to cover them all.

Dmytry - I've given some thought to the cylinder idea (or something like it). MS Flight Sim 2004 used 8 impostors that formed an octagonal ring around the camera. An octagon works very well when the camera is close to the ground. It is very easy to avoid cracks and overlapping areas. (Keeping overlapping areas from being visible seams is not easy.) It is also easy to keep the impostored billboards from intersecting the ground or the sky dome in a bad way, and the space in the impostor textures is used pretty efficiently (i.e. you don't have most of the cloud pixels crammed into one small part of the texture). You can also keep the impostor faces far enough away from the camera that other planes don't seem to magically "appear" up close after flying through a cloud that's supposed to be very far away.

Think about a camera viewpoint in orbit looking at the horizon. The clouds should look very 3D from this angle, and they sit in a very thin circular strip surrounding a very large planet. This is not easy to impostor efficiently. (If I'm missing something, then someone please let me know what it is.) ;-) Maybe some sort of transformation could be applied to straighten out the curve for the impostor (i.e. map the x axis to an angle and the y axis to altitude). You could transform it back by applying the inverse of the transformation to the texture coordinates.
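Something like this, for example (just a sketch of the forward/inverse mapping; the angle and altitude ranges would come from the camera's horizon geometry):

// "Straighten out" the curved horizon strip so it fits a rectangular impostor
// texture: map the impostor's u axis to an angle around the planet and its v
// axis to altitude, and invert the mapping when the impostor is applied back
// onto curved geometry.
struct StripWarp
{
    float angleMin, angleMax;      // arc of the strip covered by the impostor
    float altMin,   altMax;        // altitude range of the cloud layer
};

// Forward: (angle, altitude) on the strip -> impostor texture coords (u, v).
void ToTexture(const StripWarp& w, float angle, float altitude, float& u, float& v)
{
    u = (angle    - w.angleMin) / (w.angleMax - w.angleMin);
    v = (altitude - w.altMin)   / (w.altMax   - w.altMin);
}

// Inverse: texture coords -> (angle, altitude), from which the 3D position on
// the sphere can be reconstructed when the impostor is mapped onto the strip.
void FromTexture(const StripWarp& w, float u, float v, float& angle, float& altitude)
{
    angle    = w.angleMin + u * (w.angleMax - w.angleMin);
    altitude = w.altMin   + v * (w.altMax   - w.altMin);
}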

Another idea would be to only track individual cloud bunches and impostor them separately. So from space each storm system would have its own impostor. You wouldn't have to worry about cracks and/or overlap. When the camera gets close enough to one cloud bunch, you could transition to a different rendering mode just for that bunch. This would simplify the impostor rendering somewhat, but it introduces a question of how many cloud bunches you would need to track, and how to handle overlapping bunches, perhaps splitting and merging them as the wind pushes them together or tears them apart. This could make a gas giant particularly challenging. Though it varies, even the Earth seems to have more cloud cover than clear skies in a lot of pictures from space.

Thanks everyone,
Sean

My idea actually was to make not-quite-impostors that don't turn towards the camera.

Just a textured ring/belt, so the faces of the belt do not turn towards the camera. If you wish, a deformable belt to make it look nicer. Imagine a large sphere and a belt placed on its surface, with the belt radius less than the sphere radius.

Actually, that belt should be conical and orthogonal to the sphere. When the camera is far away, the belt becomes a ring around the planet and is also used for atmospheric scattering. When the camera looks from low orbit, it sees a textured cloud sphere that is blended out at a certain distance, where several belts are blended in. It should look 3D enough, and with sane camera speeds (around orbital speed) it will not require redrawing too often. The belt radius and height should be calculated for an optimal view. Actually, it's not so hard.

It is also possible to make it octagonal, though that requires some more complicated equations for transparency to make it blend with the plane seamlessly...

(And of course, clouds that are close to the camera must be rendered using some other method.)

As for impostor-per-cloud, the problem is that nice small clouds spaced some distance apart are not very typical on Earth... usually you have big areas of nearly overcast sky...

Where I live, it's rare to have a completely overcast sky for long. We have many more days with thousands of small cumulus clouds drifting by all day long. A lot of views of Earth from space look like these:

http://www.solarviews.com/raw/earth/earthx.jpg
http://www.solarviews.com/browse/earth/earthafr.jpg

There are a lot of systems that blend together, and actually a lot of clumps of smaller clouds spread out across a large area. There's quite a bit of variety.

Here's a set of lower-orbit shots of specific cloud systems:
http://www.solarviews.com/eng/cloud1.htm

I'm not sure I understand what you're saying about the belts. Are you referring to bands circling the globe, like latitude bands?

And impostors don't need to be turned to face the camera. You can set the projection matrix to be anything you need, so you can render onto triangles that are not facing the camera and it will look fine when you render them. However, you do need to have the correct projection matrix for each face and the video card renders into square or rectangular buffers, so it is not well-suited to curved shapes. As I mentioned, you may be able to apply a transformation to warp a curved shape into a straight one (and then map the resulting texture back to a curved shape). Of course, it is possible to use the CPU to update the texture, but it will be too expensive, even spread out across multiple frames.

Also keep in mind that orbital speeds are not "sane" speeds for a game on this scale. If it takes you several hours to get to another spot on the planet, no one will play the game, and even if it only takes 5 minutes to get where you need to go, most players will decide the game is not worth it. Unless, of course, it is an MMORPG. ;-)

Yes, exactly, there are lots of varieties (including overcast skies), and an optimization for one specific variety (many small, spaced-out clouds) won't work... there are also hurricanes and such. My point was that if you only handle small spaced-out clouds it will be boring, because areas of overcast sky that are large relative to the cloud size are quite common (those long lines of clouds you see in these images are, in fact, overcast sky for the people under them). I meant that nice small clouds are not typical in the sense that, if they were typical, it wouldn't be necessary to handle anything else; but that's not the case.

As for the belts, I can draw some "ASCII art":



       ----
     /      \
    /        \
   /    _     \
  |   /   \    |
  |   | O |    |
   \   \_/    /
    \        /
     \      /
       ----

This is a view from above the camera. Say the camera is 300 km from the surface (this is for cases where the camera is at relatively low altitude; from far away, simple texturing will work pretty well), and you look down at it from a point directly above, at an altitude of 10,000 km. You see concentric rings of those belts. It looks like that target with concentric rings that you shoot at.
Like drawing some concentric circles on the sphere, around the camera.

The geometry could be static or dynamic; you can do whatever tricks with projection matrices you like, or change the texture coordinates. When the camera gets too far from the center of some belt, a new belt is added and blended in while the old one is blended out. The maximum speed of movement, indeed, depends on how fast new belts can be traced by the cloud renderer.

Quote:
Original post by Dmytry
Yes, exactly, there are lots of varieties (including overcast skies), and an optimization for one specific variety (many small, spaced-out clouds) won't work... there are also hurricanes and such. My point was that if you only handle small spaced-out clouds it will be boring, because areas of overcast sky that are large relative to the cloud size are quite common (those long lines of clouds you see in these images are, in fact, overcast sky for the people under them). I meant that nice small clouds are not typical in the sense that, if they were typical, it wouldn't be necessary to handle anything else; but that's not the case.


I want to have many different varieties. The impostor-per-cloud-group idea is really well suited to large storm fronts like hurricanes. (Each front would have one large impostor.) It's even OK for clumps of smaller spaced clouds as long as they're clumped within an area the size of a hurricane. It's just not as good for a whole lot of really small localized cloud groups, and it's not good for cloud groups bumping into each other, splitting up, and so on. Thanks for the input. I'll give it some thought.

Dmytry:

I'm not sure what you're saying, but what you said made me come up with something I think is quite good.

Far away and up close are easy: the first can be done with one or multiple texture layers over the entire Earth, and the second should be done with particles if you want nice volumetrics.

However, that doesn't cover the medium range, where the first looks like crap and the second is too expensive (low orbit, for instance). Why? Because when you look tangentially at the sphere, the 2D-ness is very apparent. Now, if you create a conic ring around the camera, with its apex at the centre of the planet and an angle such that everything on the horizon is covered, this problem is solved.

Keep three of these rings in memory: two that are being crossfaded between, and one being rendered from an anticipated location.
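In code, building one such ring might look roughly like this (a quick sketch; the camera is assumed to sit on the +Z axis in planet-centred coordinates, and feeding the strip to the GPU and texturing it are left out):

// One conic cloud ring: apex at the planet centre, half-angle chosen so the
// ring sits at the horizon as seen from the camera, band spanning the cloud
// layer's altitude range.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> BuildConicRing(float planetRadius, float cameraAltitude,
                                 float cloudBottom, float cloudTop,
                                 int segments)
{
    // Angular radius (from the planet centre) of the horizon circle seen by
    // a camera at this altitude.
    float horizonAngle = std::acos(planetRadius / (planetRadius + cameraAltitude));
    float s = std::sin(horizonAngle), c = std::cos(horizonAngle);

    float rBottom = planetRadius + cloudBottom;
    float rTop    = planetRadius + cloudTop;

    // Two vertices per segment: bottom and top edge of the band (a strip).
    std::vector<Vec3> strip;
    strip.reserve(2 * (segments + 1));
    for (int i = 0; i <= segments; ++i)
    {
        float azimuth = 2.0f * 3.14159265f * i / segments;
        Vec3 dir = { s * std::cos(azimuth), s * std::sin(azimuth), c };
        strip.push_back({ dir.x * rBottom, dir.y * rBottom, dir.z * rBottom });
        strip.push_back({ dir.x * rTop,    dir.y * rTop,    dir.z * rTop    });
    }
    return strip;   // texture this band with the pre-rendered cloud impostor
}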

I am doing something similar in my current game, and I can recommend a couple of methods, although our look is a little more stylized than what's on your site. Currently we render our clouds as shells around the planet, in a naive fur-shader style: render several shells, where each uses the same texture *slightly* rotated and with reduced alpha. These look pretty puffy. Our original method, which also worked pretty well, was to generate a normal map from the noise we were making the clouds from, producing some realistic clouds with a single texture at low cost. I hope this helps. I have some early shots of what I'm talking about, but nothing too recent: http://scottlsmith.net/projectsolar/index.html
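The shell pass is basically this (the engine calls here are just stand-ins for whatever your renderer uses; numbers are illustrative):

// Draw the cloud sphere several times: each shell slightly larger, same
// texture rotated a little, and alpha reduced so the outer shells fade out.

// Hypothetical engine hooks, no-ops here:
void SetCloudTextureRotation(float /*radians*/) { /* rotate the cloud texture matrix */ }
void SetShellAlpha(float /*alpha*/)             { /* set constant alpha for blending  */ }
void DrawSphere(float /*radius*/)               { /* draw one alpha-blended shell     */ }

void DrawCloudShells(int numShells, float baseRadius, float shellSpacing,
                     float baseAlpha, float rotatePerShell)
{
    for (int i = 0; i < numShells; ++i)
    {
        float radius = baseRadius + i * shellSpacing;
        float alpha  = baseAlpha * (1.0f - float(i) / numShells);
        float rot    = i * rotatePerShell;            // radians
        SetCloudTextureRotation(rot);
        SetShellAlpha(alpha);
        DrawSphere(radius);
    }
}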

Scott

Quote:
Original post by Eelco
Dmytry:

I'm not sure what you're saying, but what you said made me come up with something I think is quite good.

Far away and up close are easy: the first can be done with one or multiple texture layers over the entire Earth, and the second should be done with particles if you want nice volumetrics.

However, that doesn't cover the medium range, where the first looks like crap and the second is too expensive (low orbit, for instance). Why? Because when you look tangentially at the sphere, the 2D-ness is very apparent. Now, if you create a conic ring around the camera, with its apex at the centre of the planet and an angle such that everything on the horizon is covered, this problem is solved.

Keep three of these rings in memory: two that are being crossfaded between, and one being rendered from an anticipated location.

I actually meant exactly that: conical rings around the camera. I got this idea while thinking about how to make my clouds run in real time for preview.
