#1 Sock5

Posted 30 April 2013 - 06:54 PM

OK, I know this topic has been beaten to death in many threads, but there are a few things I couldn't find anywhere and wanted to ask about, mainly concerning the way the geometry is handled.

In most of the implementations I've seen, the planet is basically a cube whose vertices are transformed to shape a sphere. It isn't rendered as a single cube, though, but as 6 quads (I guess that makes it easier for the programmer to work with the 6 faces as if they were ordinary planar terrain), and when you get close and the level of detail increases, the child faces (which are also separate quads) pop up in place of the quad nearest the camera. These implementations either generate the planet geometry up front, creating the vertex buffers for all quads at all LoD levels at initialization (which I guess only works for small planets with low overall detail, otherwise memory runs out), or they create and release the buffers at run-time (very heavy on run-time performance), and either way these methods cause big pop-ups.

I was wondering about the other way of implementing it, where everything is done on the GPU (or so it's claimed). I was thinking adaptive tessellation with displacement and hull-shader frustum culling? But when I try that I get horrible performance once I tessellate too deeply; in fact I've never been able to get good performance with tessellation when generating a lot of geometry out of nothing. It seems the correct way to use it is to add small details to an already dense mesh?

Can anyone share a tip on GPU planetary generation? Some people say they transform the vertices in compute shaders, but how do they generate them in the first place? Or do they use some hybrid approach between the CPU and GPU versions (which I suppose would still be subject to LoD popping)?
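For reference, the cube-to-sphere step mentioned above is just a per-vertex remap and is cheap wherever you run it. A minimal sketch of the widely used "spherified cube" mapping, which distorts less near the face corners than simply normalizing the cube position:

```hlsl
// Map a point on the surface of the unit cube ([-1,1]^3) onto the unit
// sphere. Multiplying the result by the planet radius (plus a height
// value) gives the final terrain vertex.
float3 CubeToSphere(float3 p)
{
    float3 p2 = p * p;
    return float3(
        p.x * sqrt(1.0 - p2.y / 2.0 - p2.z / 2.0 + p2.y * p2.z / 3.0),
        p.y * sqrt(1.0 - p2.z / 2.0 - p2.x / 2.0 + p2.z * p2.x / 3.0),
        p.z * sqrt(1.0 - p2.x / 2.0 - p2.y / 2.0 + p2.x * p2.y / 3.0));
}
```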


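On the hull-shader frustum culling idea: the usual trick is to have the patch-constant function set every tessellation factor of an off-screen patch to zero, which makes the tessellator drop the patch before it is ever expanded. A sketch for a quad-patch pipeline; `VertexOut`, `gFrustumPlanes`, `gEyePos` and the falloff constants are assumed application-side names and values, not any particular engine's:

```hlsl
struct VertexOut { float3 PosW : POSITION; };

cbuffer PerFrame : register(b0)
{
    float4 gFrustumPlanes[6]; // world-space planes, normals pointing inward
    float3 gEyePos;
};

struct PatchTess
{
    float EdgeTess[4]   : SV_TessFactor;
    float InsideTess[2] : SV_InsideTessFactor;
};

// Conservative sphere-vs-frustum test around the patch.
bool OutsideFrustum(float3 center, float radius)
{
    [unroll]
    for (int i = 0; i < 6; ++i)
        if (dot(float4(center, 1.0), gFrustumPlanes[i]) < -radius)
            return true;
    return false;
}

// Attach via [patchconstantfunc("ConstantHS")] on the main hull shader.
PatchTess ConstantHS(InputPatch<VertexOut, 4> patch)
{
    PatchTess pt;
    float3 center = 0.25 * (patch[0].PosW + patch[1].PosW +
                            patch[2].PosW + patch[3].PosW);
    float radius = distance(center, patch[0].PosW); // rough patch bound

    // Zero factors -> the tessellator culls the patch entirely.
    if (OutsideFrustum(center, radius))
    {
        pt.EdgeTess[0] = pt.EdgeTess[1] = pt.EdgeTess[2] = pt.EdgeTess[3] = 0.0;
        pt.InsideTess[0] = pt.InsideTess[1] = 0.0;
        return pt;
    }

    // Exponential falloff: the factor halves every 1000 world units,
    // clamped to the hardware range. Tune per scene.
    float d = distance(center, gEyePos);
    float t = clamp(64.0 * exp2(-d / 1000.0), 1.0, 64.0);
    pt.EdgeTess[0] = pt.EdgeTess[1] = pt.EdgeTess[2] = pt.EdgeTess[3] = t;
    pt.InsideTess[0] = pt.InsideTess[1] = t;
    return pt;
}
```

Even with culling, a factor of 64 expands every surviving patch into thousands of triangles, which is consistent with the observation above that tessellation works best for adding small detail to an already dense mesh; the macro-scale LoD usually stays in a quadtree.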
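On generating vertices in a compute shader: there is no input mesh at all; each thread derives its grid coordinate from its dispatch ID and writes a finished vertex into a buffer that the renderer later binds as a vertex buffer or reads as a StructuredBuffer. A sketch, with `PatchParams`, `Height` and `GRID` as illustrative assumptions rather than any particular engine's API:

```hlsl
struct Vertex { float3 Pos; float3 Normal; };

cbuffer PatchParams : register(b0)
{
    float3 gPatchOrigin; // corner of this quadtree patch on the cube face
    float3 gPatchRight;  // extent of the patch along the face U axis
    float3 gPatchUp;     // extent of the patch along the face V axis
    float  gRadius;      // planet radius
};

RWStructuredBuffer<Vertex> gVertices : register(u0);

static const uint GRID = 33; // 33x33 vertices = 32x32 quads per patch

// Placeholder height function - substitute your fractal noise of choice.
float Height(float3 unitDir) { return 0.0; }

// Plain normalization here; use the spherified mapping from the earlier
// snippet if corner distortion matters.
float3 CubeToSphere(float3 p) { return normalize(p); }

[numthreads(8, 8, 1)]
void GeneratePatchCS(uint3 id : SV_DispatchThreadID)
{
    if (id.x >= GRID || id.y >= GRID)
        return;

    // The grid coordinate comes straight from the thread ID - no input data.
    float2 uv = float2(id.xy) / (GRID - 1.0);
    float3 cubePos = gPatchOrigin + uv.x * gPatchRight + uv.y * gPatchUp;
    float3 dir = CubeToSphere(cubePos);

    Vertex v;
    v.Pos    = dir * (gRadius + Height(dir));
    v.Normal = dir; // refine later, e.g. central differences on Height()
    gVertices[id.y * GRID + id.x] = v;
}
```

With 8x8 thread groups, a 33x33 patch would be launched with Dispatch(5, 5, 1).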


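As for the LoD popping of the hybrid approach: the common remedy is geomorphing; each vertex carries both its own position and the position it would have in the coarser parent patch, and the vertex shader blends between them with a per-patch morph factor, so detail fades in instead of popping. A sketch with assumed names:

```hlsl
// Per-patch morph factor in [0,1]: 0 = render at the parent's resolution,
// 1 = fully at this patch's resolution. Computed on the CPU from the
// distances at which the patch splits and merges.
cbuffer MorphParams : register(b1)
{
    float gMorph;
};

struct VertexIn
{
    float3 Pos       : POSITION0; // position at this LoD level
    float3 ParentPos : POSITION1; // same vertex interpolated from the parent patch
};

float3 Geomorph(VertexIn vin)
{
    // New detail eases in as the camera approaches instead of popping.
    return lerp(vin.ParentPos, vin.Pos, gMorph);
}
```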