

Member Since 05 Jul 2013
Offline Last Active Oct 10 2016 07:56 AM

Posts I've Made

In Topic: Procedural Planet - GPU normal map artifacts

17 September 2014 - 11:53 PM

Hello raRaRa,
We're in the same boat! I'm working on a procedural planet engine too.
My process is close to yours: I'm using a quadtree and I get exactly the same artifacts starting at LOD 15.
I don't need to post a screenshot because yours shows exactly the same problem.
We could exchange notes on resolving this.
Some differences between our projects: I'm using 6 grids (one for each face of the cube). Coordinates are, for the moment, in single-precision float only.
The grid size depends on parameters, but I often use 256x256.
I don't pass the 4 corners to the shaders, only an offset and a scale based on the node being rendered.
The grids are always the same in memory; they change size and position based on the node.
I use a heightmap and a normal map for each node, computed once per node, between frames, in a "work queue".
LOD is based on the CDLOD algorithm, using each node's parent normal map for morphing.
Like you, I've considered using local noise for high LOD levels, but haven't tried it yet.

My suspicion is jittering (running out of float precision), so I'll try rendering the grids relative to the eye position rather than the planet center, as described in this book: http://www.virtualglobebook.com/ . Did you try that?


Sorry for my poor English, and good luck ;)

In Topic: Quadtree vs multiple resolution grids for terrain rendering

17 September 2014 - 06:51 AM

I recommend this paper:


It doesn't answer every question, but there are some clues in it.

In Topic: Compile shaders in build time with common functions

03 September 2014 - 05:10 AM

So easy... Shame on me ;)


Thank you both !!

In Topic: About fixed points

23 April 2014 - 01:09 AM

Whoa! Thank you all so much for all your help!
I'm starting to understand; there are just a few more things to learn...
For the moment I'll concentrate on a single planet; I'll look at solar systems, galaxies, etc. later.
For very high precision, you say it's better to use int64. I suppose I need to create a class to convert it into numbers that are more readable (for coding)?
If not, with micrometer precision I'll be dealing with very, very big numbers (with no decimal digits, since it's an integer), which isn't very convenient... Any tips?
I understand the concept of using the camera position as the origin, for more precision when converting to float.
But subtracting the camera position from the vertices' x, y, z on the CPU would be huge?! There are a lot of vertices in a node's grid (x 6 faces)...
Is it better to do that in the vertex shader? But if I do, how do I send the coordinates to the shader?
The maximum input format is R32G32B32, and I would need 3 x int64...?
My grids are always the same; they are offset and scaled in the vertex shader based on node position and LOD.
In conclusion: when and where should I convert to float, and in what format should I send the data to the shaders?
Again, thank you for all your help ;)
PS: I know it's a hard concept for a beginner, but I'm here to learn and I'm not afraid to spend hours on it.

In Topic: About fixed points

21 April 2014 - 11:13 PM

Thank you all for your answers!


Frob, you talk about mixing numbers. That's one of my doubts: do I need to convert the fixed point back to float? Float to integer => no loss.

If I don't need to convert back, how will the shaders work, since the numbers will be far too big?

I've read that a lot of engines use fixed point, so there must be a solution...


Stainless, thank you for the links; I'll read them now.


Vortez, I've read that using doubles is a bad solution, and I tried it to see for myself. There was too much performance loss.