Texture Splat on Large Mesh - Correct Use of Geometry Shader?



#1 PhillipHamlyn   Members   -  Reputation: 486


Posted 25 October 2013 - 05:31 AM

Hi

 

I have a large terrain mesh with an associated heightmap. I want to splat some local detail onto it (small areas of higher definition, high-res height variances, village footprints, etc.).

 

Currently I do a multi-pass render of the mesh and heightmap, passing in my local information as an extra texture along with the world coordinates that the texture applies to. I just clip() in the pixel shader if the current pixel's world coordinates do not fall within the rectangle of interest containing the locally interesting data.
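For illustration, a minimal sketch of what that kind of pixel-shader test could look like in XNA-era HLSL; SplatRect, SplatTexture and the shader layout are assumptions, not the actual code from the project:

// Hypothetical splat pass (shader model 3): names and layout are illustrative.
float4 SplatRect;            // xy = min world XZ of the splat, zw = max world XZ
texture SplatTexture;
sampler SplatSampler = sampler_state { Texture = <SplatTexture>; };

float4 SplatPS(float3 worldPos : TEXCOORD0) : COLOR0
{
    // Discard any pixel whose world position falls outside the rectangle of
    // interest; clip() kills the fragment when any argument component is negative.
    float2 d = min(worldPos.xz - SplatRect.xy, SplatRect.zw - worldPos.xz);
    clip(min(d.x, d.y));

    // Remap the world position into the splat's 0..1 texture space and sample it.
    float2 uv = (worldPos.xz - SplatRect.xy) / (SplatRect.zw - SplatRect.xy);
    return tex2D(SplatSampler, uv);
}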

 

In this way I pass in the (for instance) village footprint (paths, and other local textures specific to the village) and splat it onto the landscape mesh.

 

However, I need to render the landscape mesh once for each texture splat I carry out, after frustum culling the areas of interest. This doesn't cause me any performance problems at present, but I'm wondering if it is a good candidate for introducing a geometry shader stage into my pipeline. It bothers me that I have to render the whole landscape mesh once per splat texture, knowing that the vast majority of vertices don't actually fall within the splat's world rectangle.

 

My understanding is that I could intercept the landscape mesh render in the GS and test each triangle (the GS works on whole primitives rather than individual vertices) for overlap with the particular area I am rendering, adding it to the output stream only if it actually covers my area of interest. That way I would only render the mesh triangles that overlap my area of interest, saving a whole lot of dead triangle renders for the visible areas that are not covered by my texture splat.
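A rough sketch of that kind of geometry shader in SM4 HLSL (Direct3D 10+ only); the structure names and the conservative bounding-box test are assumptions, not code from the project:

// Hypothetical SM4 geometry shader that emits a triangle only when its
// XZ bounding box overlaps the splat rectangle; names are illustrative.
cbuffer SplatConstants
{
    float4 SplatRect;    // xy = min world XZ, zw = max world XZ
};

struct VSOut
{
    float4 pos      : SV_Position;
    float3 worldPos : TEXCOORD0;
};

[maxvertexcount(3)]
void SplatGS(triangle VSOut tri[3], inout TriangleStream<VSOut> stream)
{
    // Conservative overlap test against the splat's world rectangle.
    float2 triMin = min(tri[0].worldPos.xz, min(tri[1].worldPos.xz, tri[2].worldPos.xz));
    float2 triMax = max(tri[0].worldPos.xz, max(tri[1].worldPos.xz, tri[2].worldPos.xz));
    if (any(triMax < SplatRect.xy) || any(triMin > SplatRect.zw))
        return;                      // no overlap: emit nothing, triangle is culled

    // Overlapping triangle: pass it through unchanged.
    [unroll]
    for (int i = 0; i < 3; ++i)
        stream.Append(tri[i]);
    stream.RestartStrip();
}

Whether this saves anything in practice depends on the vertex/triangle stage actually being the bottleneck; the extra geometry-shader stage has a cost of its own.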

 

Is this a reasonable approach, and do you think it would actually save any rendering time?

 

Phillip Hamlyn




#2 PhillipHamlyn   Members   -  Reputation: 486


Posted 25 October 2013 - 08:12 AM

Ignoring the fact that I'm using XNA 4, which doesn't support geometry shaders (I'd have to move to SlimDX to use that feature), the question still stands: is this a good method?



#3 kauna   Crossbones+   -  Reputation: 2894


Posted 25 October 2013 - 08:51 AM

Are you able to implement volume decals?

 

http://www.humus.name/index.php?page=3D&ID=83

 

Cheers!

 

[edit] Otherwise, Frostbite-style virtual texturing?


Edited by kauna, 25 October 2013 - 08:52 AM.


#4 PhillipHamlyn   Members   -  Reputation: 486


Posted 25 October 2013 - 10:29 AM

kauna,

 

Thanks for the link - I can already project my decals onto the landscape, but because my vertex buffer for the geometry is the whole landscape, I was hoping to see if I could use a geometry shader stage to effectively clip the geometry where it doesn't overlap my projected texture, saving potentially many thousands of needless vertex renders.

 

Phillip



#5 kauna   Crossbones+   -  Reputation: 2894


Posted 25 October 2013 - 07:44 PM

Maybe you didn't read the link in detail: volume decals can be drawn on any surface without touching the geometry data used for rendering. However, for that you'll need access to the depth buffer.
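For reference, the technique amounts to something like the following pixel shader. This is a sketch of the general idea only; the matrix names, the depth-texture convention and the projection axis are assumptions, not the linked article's exact code:

// Hypothetical decal-volume pixel shader: render a box around the decal,
// reconstruct the world position of the scene pixel behind it from a depth
// texture, and reject anything outside the decal's local volume.
float4x4 InvViewProj;        // inverse of the camera view * projection
float4x4 WorldToDecal;       // world space -> decal box space (-0.5..0.5 per axis)
float2   ScreenSize;

texture DepthTexture;        // scene depth (z/w) written in an earlier pass
sampler DepthSampler = sampler_state { Texture = <DepthTexture>; };
texture DecalTexture;
sampler DecalSampler = sampler_state { Texture = <DecalTexture>; };

float4 DecalPS(float2 vpos : VPOS) : COLOR0
{
    // Depth of the scene pixel under this part of the box.
    float2 uv    = vpos / ScreenSize;
    float  depth = tex2D(DepthSampler, uv).r;

    // Back-project from clip space to world space.
    float4 clipPos  = float4(uv.x * 2 - 1, 1 - uv.y * 2, depth, 1);
    float4 worldPos = mul(clipPos, InvViewProj);
    worldPos /= worldPos.w;

    // Into decal space; clip() discards pixels outside the unit box.
    float3 decalPos = mul(worldPos, WorldToDecal).xyz;
    clip(0.5 - abs(decalPos));

    // Project the decal down its local Y axis onto whatever surface is there.
    return tex2D(DecalSampler, decalPos.xz + 0.5);
}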

 

Of course you may go the traditional decaling route, but I think it'll be inefficient even with a geometry shader.

 

Cheers!



#6 PhillipHamlyn   Members   -  Reputation: 486


Posted 01 November 2013 - 08:47 AM

Kauna

 

Took me a while to re-read the linked Humus blog (I don't do C++), but I finally begin to understand: he is using a deferred renderer, writing his own depth buffer to a render target, then resampling that depth buffer in the decal shader with some math-magic. Actually he seems to be reusing the depth buffer generated by DirectX and passing it through to his decal shader, but I don't think that option is available in XNA - I don't think we get access to the depth buffer as a texture. I think I can write out my own depth buffer using multiple render targets in my original geometry pass, and then see if I can follow his maths.
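A small sketch of that multiple-render-target idea under XNA 4; the names are illustrative, the second target would be bound with GraphicsDevice.SetRenderTargets, and in practice it would want a high-precision format such as SurfaceFormat.Single rather than the default colour format:

// Hypothetical terrain pixel shader that writes its post-projection depth
// (z/w) to a second render target, since XNA 4 does not expose the hardware
// depth buffer as a texture.
struct TerrainPSIn
{
    float4 ProjPos : TEXCOORD7;   // the vertex shader copies its projected position here
};

struct TerrainPSOut
{
    float4 Color : COLOR0;        // render target 0: normal terrain output
    float4 Depth : COLOR1;        // render target 1: manually written depth
};

TerrainPSOut TerrainPS(TerrainPSIn input)
{
    TerrainPSOut output;
    output.Color = float4(0, 0, 0, 1);   // placeholder: the usual terrain shading goes here
    // z/w is the same value the hardware depth buffer would hold, so the decal
    // shader can back-project it with the inverse view-projection matrix.
    output.Depth = float4(input.ProjPos.z / input.ProjPos.w, 0, 0, 1);
    return output;
}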

 

Thanks for putting me right

 

Phillip





