## Recommended Posts

I am working on a planet renderer at the moment, which is basically a sphere with various cube maps (texture, height, normal) and, depending on camera distance from the planet, an LOD patch which uses these cube maps along with its own detail textures. I want to add a cloud layer to my planet, and my best idea for doing this is using another sphere with a slightly larger radius than the planet, cube mapped with the clouds. This will probably work fine, but I want to have shadows projected from this cloud sphere onto the planet's surface, using the sun as a directional light source. To do this I would need to apply the cloud texture to the planet's surface, but what is the vector math for the cube-map texture coordinates of the cloud shadow, relative to the rest of the textures? Thanks, Bill.

##### Share on other sites
I think you can just trace a line from each vertex on the planet-sphere to the sun, check where this line intersects the cloud-sphere, and pick the cloud-sphere tex coords at that point. (That's for per-vertex texturing; for per-pixel you would need to do the same for each fragment.)

And remember that you only have to do this for the lit half of the planet. You might get strange results at the day/night edge from rounding errors though.
And you are likely to get heavy texture skewing near the edge as well.
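For the per-vertex case, that line-sphere intersection boils down to solving a quadratic. A minimal sketch in plain C++ (the struct and helper names are mine, not from any engine discussed here; the planet center is assumed to be at the origin and `dirToSun` normalized):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)     { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect the ray  p + t * dirToSun  with the cloud sphere of radius R.
// Since p lies on the planet surface (inside the cloud sphere), the
// discriminant is positive and the far root is the exit point.  The
// returned point, normalized, is the cube-map lookup direction.
Vec3 cloudIntersection(Vec3 p, Vec3 dirToSun, double cloudRadius)
{
    double b = dot(p, dirToSun);                        // projection of p onto the ray
    double c = dot(p, p) - cloudRadius * cloudRadius;   // < 0 because p is inside
    double t = -b + std::sqrt(b * b - c);               // far root = exit point
    return add(p, scale(dirToSun, t));
}
```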

##### Share on other sites
I did it like that. It works well and it's fast.
Computing the cube map tex coords in a vertex shader is absolutely sufficient.
There is also a movie available on my homepage.

Without special handling, it looks *a bit* strange near the day-night border, where the shadows "wrap around", but that's not too bad IMO. You can't even see that effect in the screenies.

##### Share on other sites
Wow, what are the chances. I have already been to your homepage and downloaded two of those demo movies about 5 or 6 months ago. They were a large part of my inspiration for making my own LOD planet (along with way too much Frontier: Elite 2 when I was younger). I spent ages trying to find that web site again, and couldn't, and now it gets handed to me! I would love to pick your brains more about planet rendering; unfortunately I can't understand your website!
I am trying to make my engine completely procedural (except for detail textures, which I intend to take from a standard set). At the moment I have texture, normal, and height all generated from 3D Perlin noise, and all the vertex blending, texturing choices, and material selections are done on the GPU based on the height map, and the camera and sun vectors.

The calculation for the intersection between a line and a sphere needs at least one sqrt, which I definitely couldn't do on a per-pixel basis (I would imagine, though I'm not sure how much faster the GPU would be at it than the CPU), and it would hurt a lot on a per-vertex basis in a full-res LOD mesh.
I think for the clouds I will try simply adding a portion of the sun vector to my cube map coordinates, based on the height of the atmosphere, and see if that works: a vector equivalent to the distance along a tangent from the surface sphere to the cloud sphere. This will mean that at the day-night border and at the center of the day half the shadows will be perfect, and they will gradually become warped and un-warped in between. I don't know how big the effect would be, because I just did a sketch of it, not any actual math. There must be a compensating factor that can be added based on the dot between the texture base coordinate and the light vector.
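That tangent-offset idea could be sketched like this (my own naming and layout, not tested against the poster's engine; assumes `dirToSun` is normalized and the cloud radius `R` is larger than the surface radius `r`):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Approximate cloud-shadow cube-map direction: offset the surface position
// along the sun direction by the tangent distance sqrt(R^2 - r^2) from the
// surface sphere (radius r) to the cloud sphere (radius R), then renormalize.
// At the subsolar point the offset is purely radial, so the direction is
// unchanged and exact; at the terminator the offset equals the true tangent
// ray, so it is exact there too.  In between, the shadow warps slightly.
Vec3 approxCloudDir(Vec3 surfacePos, Vec3 dirToSun, double r, double R)
{
    double k = std::sqrt(R * R - r * r);   // tangent distance surface -> clouds
    Vec3 p = { surfacePos.x + k * dirToSun.x,
               surfacePos.y + k * dirToSun.y,
               surfacePos.z + k * dirToSun.z };
    double len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return { p.x / len, p.y / len, p.z / len };
}
```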

/edit
Screen shots here

[Edited by - bluntman on November 4, 2005 11:07:57 PM]

##### Share on other sites
Quote:
 Original post by bluntman: Wow, what are the chances. I have already been to your homepage and downloaded two of those demo movies about 5 or 6 months ago. They were a large part of my inspiration for making my own LOD planet (along with way too much Frontier: Elite 2 when I was younger). I spent ages trying to find that web site again, and couldn't, and now it gets handed to me! I would love to pick your brains more about planet rendering; unfortunately I can't understand your website!

There is also an English version of my webpage. Just click on the GB flag with the tea cup.
I don't know why I sent you the German version...

Elite is also my inspiration, together with Ysaneya's journal. Right now, I'm developing a new version of planetary clipmaps that gets rid of that annoying global grid without sacrificing too many advantages. Once it's working properly, I'll put it on my HP. It's also using the advanced noise functions of libnoise (though they are quite slow in my eyes).

Quote:
 I am trying to make my engine completely procedural (except for detail textures, which I intend to take from a standard set). At the moment I have texture, normal, and height all generated from 3D Perlin noise, and all the vertex blending, texturing choices, and material selections are done on the GPU based on the height map, and the camera and sun vectors.

I intend to make the clipmaps completely independent of the choice procedural/predetermined.
However, this time I also want to make some procedural planets since everyone seems to
do it.

Quote:
 The calculation for the intersection between a line and a sphere needs at least one sqrt, which I definitely couldn't do on a per-pixel basis (I would imagine, not sure how much faster the GPU would be at it than the CPU), and it would hurt a lot on a per-vertex basis in a full-res LOD mesh.

I don't think a sqrt is that expensive. I think it's comparable to a division but I might be wrong.
Anyway, it's not a bottleneck in my current engine.

Quote:
 I think for the clouds I will try simply adding a portion of the sun vector to my cube map coordinates, based on the height of the atmosphere, and see if that works: a vector equivalent to the distance along a tangent from the surface sphere to the cloud sphere. This will mean that at the day-night border and at the center of the day half the shadows will be perfect, and they will gradually become warped and un-warped in between. I don't know how big the effect would be, because I just did a sketch of it, not any actual math. There must be a compensating factor that can be added based on the dot between the texture base coordinate and the light vector.

Hey, good idea! I think that will work without a noticeable difference (unless your clouds are 1000 km high). You could also try to Taylor-expand the sqrt term for a better approximation...

Quote:
 /edit Screen shots here


Looks good. You could improve the atmosphere by also shading the planet itself.

##### Share on other sites
Turns out, after reading up on it, that I am using clipmaps, although I just sort of made it up as I went along, mostly by imagining from your tech demo how you did it. I came up against a lot of problems I would never have thought of, but now have a stable system that runs at about 30fps at full LOD (33x33 grid, 10 LODs). The blending between LODs is done using vertex shaders, and the drawing uses tri strips (which I found to be faster than drawing elements). I am currently calculating movement for a particular LOD (i.e. how many columns and rows to move the grid by) using the angular distance from the current camera position on the planet sphere to the center of the grid, divided by the angle between grid squares.
What's the global grid you mention? I create a base LOD automatically when my camera is close enough to the surface, but this does not cover the globe.
If your assembly programming is good (unlike mine) you should look into using SSE for generating noise; I have seen 2D Perlin noise implementations that calculate 4 heights for the price of one. It just needs an extra dimension added, but I don't understand Perlin's algorithm well enough to add one to the C version, let alone the assembly! SSE could also be applied to normal recalculation as well, more easily than to the noise, but my tests show that I can calculate 10000000 normalisations in less than a second, while it takes 3550ms to calculate 100000 3D noise values.
I put the shadows on as well, and they seem to work fine, except now I am running into problems with the granularity of the z-buffer. I am currently doing all rendering in one coordinate frame, but intend to split it up into outer-space coordinates, planet coordinates, and close LOD grid coordinates. How do you handle this in your system? I figure that using a non-planetary coordinate system for any of the LOD levels means that one needs to transform all their coordinates when the local origin moves, which sounds a bit expensive to me. Anyway, I am going to try that tonight.
Thanks for the tips. I will check out libnoise and see how it compares with my function.

##### Share on other sites
Quote:
 Original post by bluntman: Turns out, after reading up on it, that I am using clipmaps, although I just sort of made it up as I went along, mostly by imagining from your tech demo how you did it. I came up against a lot of problems I would never have thought of, but now have a stable system that runs at about 30fps at full LOD (33x33 grid, 10 LODs).

That's not very fast. I have 128x128 grids and 11-12 LODs at 80-120 FPS (RadeOn 9800 Pro).
Did you already implement VBOs? They speed things up quite a bit.

I would love to see how you implemented the clipmaps. I can imagine at least three approaches.
Can you post a screenshot of your grid?

Quote:
 What's the global grid you mention? I create a base LOD automatically when my camera is close enough to the surface, but this does not cover the globe.

Yeah, the problem is that the clipmaps never cover the globe - not even half of the globe in my implementation! So what do you do when you are far away from the planet? You need some global grid. The global grid is fixed and has nothing to do with the clipmaps! You can see the global grid at the beginning of the tech movie, before the little square clipmaps pop up. It's based on a subdivided cube (quite simple).
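A subdivided-cube sphere of this kind is usually built by projecting grid points on each cube face onto the unit sphere. One face of such a mapping might look like this (illustrative code, not Lutz's actual implementation; only the +Z face is shown):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Map grid coordinates (i, j) on one face of an n x n subdivided cube
// (the +Z face here) to a point on the unit sphere: build the point on
// the cube surface in [-1,1]^2 x {1}, then normalize it onto the sphere.
// The other five faces work the same way with the axes permuted/negated.
Vec3 cubeFaceToSphere(int i, int j, int n)
{
    double u = 2.0 * i / n - 1.0;   // [-1, 1] across the face
    double v = 2.0 * j / n - 1.0;
    double len = std::sqrt(u * u + v * v + 1.0);
    return { u / len, v / len, 1.0 / len };
}
```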

Quote:
 If your assembly programming is good (unlike mine) you should look into using SSE for generating noise; I have seen 2D Perlin noise implementations that calculate 4 heights for the price of one. It just needs an extra dimension added, but I don't understand Perlin's algorithm well enough to add one to the C version, let alone the assembly! SSE could also be applied to normal recalculation as well, more easily than to the noise, but my tests show that I can calculate 10000000 normalisations in less than a second, while it takes 3550ms to calculate 100000 3D noise values.

Your 3D noise is about as fast (or rather, as slow) as libnoise. I'm not considering SSE or anything similar for now, because at the moment I want to optimise the look, not the speed. In the end, when everything is ready, I'll optimise for speed.
Don't modern compilers use SSE anyway?

For now, I'm quite happy with libnoise. I've attached an image of a procedurally generated asteroid (using the turbulence module and ridgedMulti of libnoise, that's all). Try that with plain Perlin noise! I think it's worth the effort. There's another one on my homepage.

Quote:
 I put the shadows on as well, and they seem to work fine, except now I am running into problems with the granularity of the z-buffer. I am currently doing all rendering in one coordinate frame, but intend to split it up into outer-space coordinates, planet coordinates, and close LOD grid coordinates. How do you handle this in your system? I figure that using a non-planetary coordinate system for any of the LOD levels means that one needs to transform all their coordinates when the local origin moves, which sounds a bit expensive to me. Anyway, I am going to try that tonight.

Floating-point and z-buffer precision are two important problems to solve. You can solve the z-precision problem by adjusting zNear and zFar each frame depending on the distance of the camera to the clipmaps. The floating-point precision problem causes vertices to wobble around randomly on Earth when you are below 1000m altitude. This can be solved by computing the modelview-projection matrix completely in double precision on the CPU and converting it to single precision at the very end. That's not a big overhead and solves (most of) the problem (I currently get some strange quantization effect which I still don't know the cause of, but it's not visible at speeds >50km/h, which is common for a star ship ;-).
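The double-precision trick could be sketched like this (column-major 4x4 matrices as OpenGL uses them; the function name is mine). The point is to multiply in double and narrow only the combined result, not the individual matrices:

```cpp
// Multiply two column-major 4x4 matrices in double precision and only then
// narrow the combined result to float for the GPU.  Narrowing each matrix
// separately and multiplying in float would lose exactly the precision
// this is meant to preserve.
void combineModelViewProjection(const double proj[16], const double mv[16],
                                float out[16])
{
    double tmp[16];
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            double s = 0.0;
            for (int k = 0; k < 4; ++k)
                s += proj[k * 4 + r] * mv[c * 4 + k];
            tmp[c * 4 + r] = s;
        }
    for (int i = 0; i < 16; ++i)
        out[i] = static_cast<float>(tmp[i]);
}
```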

I think having a special coordinate system for the clipmaps will work, but you'll have to adjust the system once in a while by adding some constant vector to shift it back toward zero. You don't have to shift the system every time the origin moves; that would be too expensive. Just check when the coordinates in the clipmaps are too far off the origin and shift them back then. If you intend to stick with a 33x33 grid this is OK, but with a 128x128 grid or bigger you'll probably see a noticeable jump each time you do this shift.
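A sketch of that periodic shift (all names are illustrative; the idea is just to subtract one constant offset from every vertex and add the same offset to a world-space anchor, so world positions are unchanged):

```cpp
struct Vec3d { double x, y, z; };

// Shift the clipmap coordinate system back toward zero once the local
// origin has drifted past a threshold.  World position of each vertex is
// worldAnchor + vertex, so subtracting the offset locally while adding it
// to the anchor leaves every world position untouched.
void rebaseIfNeeded(Vec3d* verts, int count, Vec3d& worldAnchor,
                    Vec3d& localOrigin, double threshold)
{
    double d2 = localOrigin.x * localOrigin.x +
                localOrigin.y * localOrigin.y +
                localOrigin.z * localOrigin.z;
    if (d2 < threshold * threshold)
        return;                          // still close enough to zero
    Vec3d off = localOrigin;
    for (int i = 0; i < count; ++i) {
        verts[i].x -= off.x;  verts[i].y -= off.y;  verts[i].z -= off.z;
    }
    worldAnchor.x += off.x;  worldAnchor.y += off.y;  worldAnchor.z += off.z;
    localOrigin = {0.0, 0.0, 0.0};
}
```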

[Edited by - Lutz on November 10, 2005 7:16:59 AM]

##### Share on other sites
Quote:
 Original post by bluntman: I have seen 2D Perlin noise implementations that calculate 4 heights for the price of one. It just needs an extra dimension added, but I don't understand Perlin's algorithm well enough to add one to the C version, let alone the assembly! SSE could also be applied to normal recalculation as well, more easily than to the noise, but my tests show that I can calculate 10000000 normalisations in less than a second, while it takes 3550ms to calculate 100000 3D noise values.

I think this number is very low.

I just did the same test on my computer. It is a pretty outdated Athlon 2.0 GHz (not 64-bit).

For the first test (in order to make a relevant comparison), I normalized 10 million 3D vectors. For this I generated a huge array of 10 million random 3D vectors, then ran a big loop which parses this array and renormalizes each of them. It takes approximately 400 milliseconds, which suggests (if you did the same test) that my computer is 2.5 times faster than yours. Remember that number to compare the performance of 3D Perlin noise in the next test.

Now, for the second test, I generate 100 000 raw Perlin noise values. The input 3D vector given to the noise function is of the form (i*x, i*y, i*z), where i is the loop counter and x, y, z are constants. The result is 22 milliseconds.

Since your computer is 2.5 times slower than mine, the equivalent test on your computer should take around 55 milliseconds to generate these 100 000 3D noise values. If it's taking 3550 ms, the only conclusion is that my code is 64 times faster than yours. And I got it directly from Perlin's website...

Can you post your noise generation code, and how you tested it?

Y.

##### Share on other sites
Lutz:
That asteroid is awesome! It really looks like some sort of crystalline formation. I will have to get on with looking at that library (I haven't got around to it yet). I am already starting to get bored with the look of my planet.

I got the relative coordinate systems working; now my LOD is displayed at 1000 times the scale of the planet, but the camera is moved 1000 times further away. I also switched to doubles for the internal calculations, and the result is much smoother. I don't think even doubles are enough accuracy for representing a galactic coordinate though, so I will add another coordinate system for interstellar travel, then switch between calculating in this and the other two systems as needed.

Yeah, I know 30fps at 33x33x11 isn't that fast, but I tried glDrawElements and it is much slower than tri strips, so I can't imagine VBOs would be any faster. I mean, the grid changes every frame and has a dynamic hole in it, so if I need to re-create the data arrays every frame, how can that give me a performance boost? Still, when I hide the patch the fps jumps to 60fps (I have vsync on).
I will try it anyway, as I already use VBOs for the planet.
It sounds like my 'global grid' is exactly the same as yours, but what could you replace it with? Or do you want to stitch the LOD patch into it somehow? I have seen ROAM planet approaches which are just one object.

Ysaneya:
Thanks a lot for the comparison numbers! Well, when I said Perlin noise calculations I actually meant fBm calculations, to a depth of 20, so multiply your figure by 20! Also, I know Perlin's original noise is quicker than my version, but it also has a bug in it, which he admitted. I am using his Java version, in which the bug is fixed. Basically, if you zoom in real close and your scale gets real high, the interpolation breaks down and there are discontinuities everywhere.
As for the normalizations: my vector normalize is completely standard, I think (this is from memory):

```cpp
float Vector::Length()
{
    return sqrt(x*x + y*y + z*z);
}

void Vector::Normalize()
{
    float lenmul = 1.0f / Length();
    x *= lenmul;  y *= lenmul;  z *= lenmul;
}
```

I will have a look at my code when I am at home. I didn't think anyone would give me comparison times, or I would have been more careful; it could be that I was testing in debug mode, or some other stupid thing. I just wanted the relative timings between normalizations and fBm calculations, not hard figures. I was running Winamp as well! I will try the test again tonight and get some better figures.

##### Share on other sites
Ah OK, then your numbers are much more in line with what I have. I just remembered a small thing that might help: the code posted on Perlin's website does at one point a series of casts from float to int, which are extremely slow. Reimplementing them in assembler, I got a speedup of almost 100% in the noise function.

Y.

##### Share on other sites
I bet! I wanted to convert mine to SSE, calculating 4 values at a time, but it looks to be a (very) tricky thing to do. I have some assembler experience, but I haven't actually used it for years, and SSE seems more complicated than straight asm, with the cache timing, streaming etc. I am better off spending my time working on my clipmap algorithm and fill-rate optimizations, as that is where my bottleneck is at the moment. Of course it's slightly different when the camera is moving fast and a lot of updates are required, but even then my engine fairly gracefully handles the reduction of LOD to compensate. Still, I will be getting around to low-level optimization at some point.
I did a quick Google search for asm float-to-int, and it turns out it's two lines of asm!! It's a rounding op, but you just subtract 0.5 to get a floorf operation. Now 100,000 fBms to depth 20 take 1340 ms.
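For what it's worth, a plain cast nowadays usually compiles to a single SSE conversion instruction, so the old x87 penalty is mostly history; the branchy floor below is the common portable fix in noise code for the cast's truncation-toward-zero behaviour (a sketch, not the poster's actual asm):

```cpp
// A plain (int)x truncates toward zero, which gives the wrong floor for
// negative inputs (e.g. (int)-1.5 == -1, but floor(-1.5) == -2).  This
// branch corrects that without touching the FPU rounding mode, which is
// what made the old x87 fistp-based casts so slow.
inline int fastFloor(double x)
{
    int i = static_cast<int>(x);   // truncates toward zero
    return (x < i) ? i - 1 : i;    // step down once for negative fractions
}
```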
10,000,000 normalizations take 492 ms. I had a vector add in there before, and it took 850 ms for the normalization and add. Strange that the vector add takes up almost half the time. Still, my computer isn't doing any better than yours. I am using doubles though. Strangely enough, I swapped my maths library to double and it didn't seem to affect the performance of my engine.
....
I just swapped the math lib back to float and ran the normalization test again, and it took 500 ms! Do double ops not take any longer than float ops?

##### Share on other sites
Quote:
 Do double ops not take any longer than float ops?

No! Except for the divide, I think.
But it normally means that you need to read/write twice as much memory, and that CAN slow things down a lot.

##### Share on other sites
Starfire website
There's not much to it yet, just screenies and code. I will put up more info about what I'm doing when I have worked it out.
I was reading your journal, Ysaneya, and it looks like I am about where you were when you started writing it, except my engine doesn't run nearly as well as that demo of yours I downloaded (I got 80 to 120 fps, although the updates were a bit jumpy). I came at this from the point of view of putting a grid on a sphere, and have never written a flat LOD engine (or any LOD engine, in fact!), so I don't have a good grounding in the basics. I just copied what I thought I saw in Lutz's demo, and it seemed to work pretty well. But as Lutz pointed out, 30 fps with such low-res grids is really not that great. Still, I am almost certain it's fill rate and not my engine that is making it chug, as when I hide the LOD the frame rate jumps to 60fps.
I read up some more on clipmapping, and what I have implemented sounds a lot like it, except that clipping, in my case, is purely dependent on distance from the viewer and height of the viewer; there's no clever culling. Also, the maps in my case are vertex-coloured polys with detail textures; between different levels the colour, vertex, height and normal are blended.

##### Share on other sites
What's your graphics hardware, by the way? Maybe we're comparing apples and oranges when we talk about frame rate. I've got a RadeOn 9800 Pro.

Culling gives you a speed increase of about a factor of 2-3. There is a simple and efficient way to do culling: split each clipmap level up into patches of size 8x8. Then compute the radius of a sphere covering each patch and do a simple sphere-frustum check for that patch. If the check passes, draw the patch; otherwise don't.

Since the render region of each clipmap level (i.e. the part you actually draw) looks like a square with a hole in the middle, it might be impossible to split it up into 8x8 patches only. Then simply draw as many 8x8 patches as you can; the rest can be split up into 4x8 patches, 8x4 patches, 4x4 patches and so on. In the end, everything gets drawn. Moreover, each patch can be drawn as a VBO. It's a bit tricky but it's possible.
```
oooooooooooo
oooooooooooo
oooo    oooo
oooo  = oooo
oooo    oooo
oooooooooooo
oooooooooooo
```

In the above example, the clipmap level (each "o" is a vertex; the hole in the middle is covered by the next finer level) is split up into 2 4x8 patches, 2 1x8 patches, 2 8x2 patches, 2 4x2 patches and 2 2x2 patches. That's 10 patches = 10 calls to OpenGL when you're using VBOs. Especially for large clipmaps this is much more efficient and faster.
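The sphere-frustum check described above is just six signed plane distances. A minimal sketch (the plane convention and names are my choice; normals are assumed unit length and pointing into the frustum):

```cpp
struct Plane  { double nx, ny, nz, d; };   // n.p + d >= 0 means "inside"
struct Sphere { double x, y, z, r; };

// Conservative sphere-vs-frustum test for one clipmap patch: the patch's
// bounding sphere is rejected only if it lies entirely behind some plane.
bool sphereInFrustum(const Plane planes[6], const Sphere& s)
{
    for (int i = 0; i < 6; ++i) {
        double dist = planes[i].nx * s.x + planes[i].ny * s.y +
                      planes[i].nz * s.z + planes[i].d;
        if (dist < -s.r)
            return false;   // fully outside this plane -> cull the patch
    }
    return true;            // potentially visible -> draw it
}
```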

One more request: can you post (or put on your HP) an image of the global grid? How is the blending from the global grid to the clipmaps done? Is there any blending, or do you simply draw the global grid when you're far away and the clipmaps when you're close?

##### Share on other sites
For each clipmap level grid I actually have two grids: one with the data taken straight from the parent level's grid (I call it the base grid), and the other with the data as it would be calculated for this level (I call it the target grid). So for my lowest-LOD clipmap level the base grid is taken straight from the planet grid. This interpolates the 3D geometry and the normals, but not the colours, as I can't easily map them from the cube map (I'm not sure how to do it, but I imagine you need to do plane-vector intersections, and they would be costly for a whole grid). Also, even if I did map the colours directly from the cube map for the base level, there would still be discontinuities depending on how closely the interpolations match between the texture and the colour. So I decided to put the whole lot on the fragment and vertex processors.

For each vertex in a particular grid I calculate a blend factor (a simple y = mx + c type thing) based on the camera's offset from the center of the grid. I also factor in height, scaling it from 0 where the grid should first be visible to 1 when the camera is half way between the current grid's min height and the child's min height. This has the effect of fading the grid in from the center of the camera as it descends. The calculated (and clamped) weight is then passed to the vertex shader via the .w component of the base grid vertex, along with all the data for the base grid and target grid. The vertex shader blends the two grids together and calculates the distance from the blended vertex to the (local) camera position (which is also passed to the shader). This distance is scaled based on another couple of params passed to the shader, so it ends up between 0 and 1: 0 is calculated to be at the _furthest_ distance at which the patch should be visible at all, and 1 is about 1/4 as far away. This distance factor is sent to the fragment shader, along with the blended vertex.

The fragment shader calculates the colour as it would be taken straight from the planet grid's cube textures, and the colour as the LOD grid's vertex colours say it should be, and then blends them based on the distance factor. There's some other stuff in there which handles alpha blending at the edges of the grids, and it works pretty much the same way. This gives nearly flawless results, except that I am having some problems with alpha and specular on the sea at the moment.
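A clamped linear blend factor of the "y = mx + c" kind described above might look like this (the parameter names are illustrative; in the engine this is computed per vertex on the CPU and passed in via the .w component):

```cpp
#include <algorithm>

// Blend weight for LOD fading: 0 at fadeStart (where the finer grid should
// first become visible), 1 at fadeEnd (fully opaque), linearly in between.
// fadeStart > fadeEnd since the grid fades in as the camera gets closer.
float blendWeight(float camDist, float fadeStart, float fadeEnd)
{
    float w = (fadeStart - camDist) / (fadeStart - fadeEnd);
    return std::clamp(w, 0.0f, 1.0f);   // requires C++17
}
```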
The full code with shaders is on my site; feel free to check it out if you really want to know all the details. It's not heavily commented at the moment, but if people are interested I will comment it fully.

I was gonna have a go at VBOs tonight. Thanks for the tips. I can imagine it being very tricky! At the moment I use flat arrays for my grids, and a nice pair of deque&lt;int&gt; to map logical grid coordinates to coordinates in the actual array. The deques start off filled with their own indices (0 to n-1). This means that when the camera moves one grid square to the left, I pop the back of the x deque and push it onto the front, and it still maps the vertices correctly. To work with multiple VBOs per strip I would need to keep track of the logical index mapping for each VBO separately, and which vertices within them have changed.
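That deque trick is toroidal addressing; it could be sketched as follows (my own wrapper names, one ring per axis). Scrolling rotates the index map instead of moving any vertex data:

```cpp
#include <deque>
#include <numeric>

// Maps a logical clipmap row/column index to a physical index in the flat
// vertex array.  Scrolling one cell just rotates the deque, so only the
// newly exposed row/column of vertices needs to be recomputed.
struct IndexRing {
    std::deque<int> map;
    explicit IndexRing(int n) : map(n) {
        std::iota(map.begin(), map.end(), 0);   // starts as 0, 1, ..., n-1
    }
    void scrollLeft()  { map.push_front(map.back());  map.pop_back();  }
    void scrollRight() { map.push_back(map.front());  map.pop_front(); }
    int  physical(int logical) const { return map[logical]; }
};
```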

[Edited by - bluntman on November 11, 2005 4:48:45 PM]
