

Member Since 20 Jul 2012
Offline Last Active Jul 03 2014 08:27 AM

Topics I've Started

Heightmap Tiling - Aligning with multiple geometry fragments.

03 July 2014 - 06:43 AM



I've got a successful implementation of a terrain (using something similar to geometry clipmapping) but am slightly stuck when it comes to assigning the heightmap textures to a particular tile or clipmap region.


The GPU Gems papers on this aren't specific about how you render a "chopped up" heightmap onto the geometry. I have broken my heightmap texture down into lots of smaller tiles (256x256), with each tile contributing to a coarser-level tile, something like how a mipmap works. When I come to render my geometry I can calculate how many tiles, and at what scale, I need to pass to the shader (along with the appropriate matrix to provide the UV coordinate mapping from the geometry to the heightmap).
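To make the tile/scale selection concrete, here is a minimal sketch of that calculation (illustrative only, not the poster's code; `TILE_TEXELS` and `BASE_TEXEL_SIZE` are assumed values, and tile world size doubles per coarser level, mipmap-style):

```python
TILE_TEXELS = 256       # assumed heightmap tile resolution
BASE_TEXEL_SIZE = 1.0   # assumed world size of one finest-level texel

def tiles_for_region(min_x, min_z, max_x, max_z, level):
    """Indices of level-`level` tiles overlapping an axis-aligned world rect."""
    tile_world = TILE_TEXELS * BASE_TEXEL_SIZE * (2 ** level)
    ix0, ix1 = int(min_x // tile_world), int(max_x // tile_world)
    iz0, iz1 = int(min_z // tile_world), int(max_z // tile_world)
    return [(ix, iz) for ix in range(ix0, ix1 + 1)
                     for iz in range(iz0, iz1 + 1)]

def world_to_uv(x, z, tile_ix, tile_iz, level):
    """UV of world point (x, z) inside the given tile, 0..1 across the tile."""
    tile_world = TILE_TEXELS * BASE_TEXEL_SIZE * (2 ** level)
    return ((x - tile_ix * tile_world) / tile_world,
            (z - tile_iz * tile_world) / tile_world)
```

The `world_to_uv` mapping is what the per-chunk matrix mentioned above would encode.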


My problem comes with the number of heightmap textures I would need to pass in. I'm using XNA4, so I have a limited number of texture slots, and besides, using a lot of textures just for my heights seems a waste. I've tried making my geometry chunk boundaries (which don't have any T-junctions) align with the texture sizes (i.e. one texture of one specific size per geometry chunk), but I get sampling discontinuities along the borders between the geometry chunks. That isn't my main area of ignorance, though: I'm thinking that to get accurate heightmap tessellation, the heightmap for any geometry chunk must cover more than that geometry's footprint (i.e. the texture is slightly bigger than the surface area in world terms) to prevent edge-sampling artefacts.
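The edge-sampling problem described above is usually handled with a one-texel "gutter": each tile stores a border of duplicated neighbour texels, and the chunk's UVs are remapped into the inner region so bilinear filtering never straddles a tile edge. A sketch of that remap, under an assumed 256-texel tile layout with a 1-texel gutter on each side:

```python
TILE = 256          # assumed tile resolution, including 1-texel gutter each side
INNER = TILE - 2    # unique texels per tile

def pad_uv(u):
    """Map a chunk-relative coordinate u in [0,1] onto the texel centres of
    the tile's inner region, keeping all bilinear taps inside the tile."""
    return (1.0 + u * (INNER - 1) + 0.5) / TILE
```

With this, `u = 0` samples texel centre 1.5/256 and `u = 1` samples 254.5/256, so no tap ever reads the outermost texel row from the wrong tile.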


I am missing something here. Given that my geometry is static with respect to the camera, and just has its position translated to be camera-centric, aligning my heightmap texture chunks with a nominal geometry grid will still require me to pass multiple heightmap texture chunks into a single geometry pass, and this isn't what the examples indicate I should be doing. If it were, I'd be seeing examples with dynamic texture creation to build geometry-aligned heightmap textures, or examples passing multiple heightmap textures to the shader (a maximum of four would be required to cover a single geometry pass, assuming the geometry aligns exactly with none of the heightmap world locations).


Can anyone give me a clue as to how to properly align my heightmap texture chunks with the geometry pass? Do I need to dynamically create a single texture out of those which overlap my geometry (every pass :-( ), or is there some better way of doing this?





World Space point in Bounding Box, in Shader

30 March 2014 - 10:18 AM



Can someone help unjam my coder's block please? I have a point in world space in my pixel shader and want to check whether it lies within a bounding box (or conceptually any bounding mesh). What's the quickest way of doing this check? In 2D C# I'm used to casting an arbitrary ray from a point and counting the number of times it crosses the edges of the containing polygon; it seems to me there must be an analogy for this in 3D that counts how many times my ray crosses the faces of the containing volume (an odd count means I'm "inside", while an even or zero count means I'm outside). I'm doing this check for a lot of pixels, so it can't be an expensive call.
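For a box specifically, the ray-crossing parity test isn't needed: the common shader-friendly approach is to transform the world point by the box's inverse world matrix, after which containment is three cheap interval tests. A sketch (assuming the transform into box-local space has already been applied, e.g. by a matrix passed as a shader constant):

```python
def point_in_unit_box(p_local):
    """True if a point, already transformed into the box's local space
    (so the box is the axis-aligned cube [-1,1]^3), lies inside the box."""
    x, y, z = p_local
    return abs(x) <= 1.0 and abs(y) <= 1.0 and abs(z) <= 1.0
```

This also handles oriented boxes for free, since the orientation is folded into the inverse world matrix.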


I have searched around the phrases ("point in bounding box", "point in mesh") and that gives me a lot of collision-based solutions, but I'm already inside a shader so they don't seem appropriate. It seems like something I should just "get", but I can't seem to visualise it.


Thanks in advance,




PS Just in case I'm going about this completely the wrong way: the problem occurs where I'm trying to "layer" a landscape feature over a previously rendered landscape mesh. I'm using a bounding cube that intersects the landscape, and back-calculating the world position of each pixel rendered on the surface of the cube, using the inverse projection matrix and the depth buffer to reconstruct the world-space position of the pixel. Once I have the world-space position, I want to check whether it fell within the bounding volume, in which case I should render my texture (river, road, field etc.); otherwise the viewer should see the previously rendered landscape.
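The depth-buffer reconstruction step described in the PS can be sketched as follows (illustrative only; assumes a D3D-style depth buffer in [0,1] that maps directly to NDC z, and an `inv_view_proj` matrix supplied by the caller):

```python
def mul_mat4_vec4(m, v):
    """Row-major 4x4 matrix times a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def depth_to_world(ndc_x, ndc_y, depth, inv_view_proj):
    """Reconstruct a world-space position from NDC xy and a depth-buffer
    value by unprojecting through the inverse view-projection matrix."""
    clip = [ndc_x, ndc_y, depth, 1.0]
    w = mul_mat4_vec4(inv_view_proj, clip)
    return [w[0] / w[3], w[1] / w[3], w[2] / w[3]]  # perspective divide
```

The perspective divide by `w` at the end is the step most easily forgotten; without it the reconstructed position is wrong everywhere except at the near plane.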

Texture Splat on Large Mesh - Correct Use of Geometry Shader?

25 October 2013 - 05:31 AM



I have a large terrain mesh with associated heightmap. I want to splat some local detail on it (small areas of higher definition, high res height variances, village footprints etc).


Currently I do a multi-pass render of the mesh and heightmap, passing in my local information as an extra texture along with the world coordinates the texture applies to. I just clip() in the pixel shader if the current pixel's world coordinates do not fall within my rectangle of interest containing the locally interesting data.
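The clip() test amounts to mapping the pixel's world position into the splat rectangle's UV space and discarding when it falls outside [0,1]. A sketch of that mapping (hypothetical names; the real version lives in the pixel shader):

```python
def splat_uv(world_x, world_z, rect):
    """World XZ -> splat-texture UV for an axis-aligned splat rectangle
    rect = (min_x, min_z, max_x, max_z). Any UV component outside [0,1]
    is the condition under which the pixel would be clip()'d."""
    min_x, min_z, max_x, max_z = rect
    u = (world_x - min_x) / (max_x - min_x)
    v = (world_z - min_z) / (max_z - min_z)
    return u, v
```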


In this way I pass in the (for instance) village footprint (paths, and other local textures specific to the village) and splat it onto the landscape mesh.


However, I need to execute a landscape mesh render once for each texture splat I carry out, after frustum culling the areas of interest. This doesn't cause me any performance problems at present, but I'm wondering if this is a good candidate for introducing a geometry shader step into my pipeline? I am aggrieved that I have to execute the landscape mesh render once for each texture I want to splat, knowing the vast majority of vertices don't actually cover the texture splat's world rectangle.


My understanding is that I could intercept the landscape mesh render in the GS and test each primitive (the GS sees whole triangles, not lone vertices) against the particular area I am rendering, adding it to the output stream only if it actually overlaps my area of interest. In that case I would render only the mesh triangles that overlap my area of interest, saving a whole bunch of dead triangle renders for all the visible areas that are not covered by my texture splat.
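The per-triangle test such a GS would run can be as cheap as a conservative AABB-vs-rectangle overlap on the triangle's XZ footprint. A sketch (illustrative; it may keep a triangle whose bounding box touches the rectangle without the triangle itself doing so, but it never drops an overlapping one):

```python
def tri_overlaps_rect(tri_xz, rect):
    """Conservative overlap test: triangle's XZ bounding box vs an
    axis-aligned splat rectangle (min_x, min_z, max_x, max_z)."""
    min_x, min_z, max_x, max_z = rect
    xs = [p[0] for p in tri_xz]
    zs = [p[1] for p in tri_xz]
    return not (max(xs) < min_x or min(xs) > max_x or
                max(zs) < min_z or min(zs) > max_z)
```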


Is this a reasonable approach, and would it actually save any rendering time do you think ?


Phillip Hamlyn

Finding Points on an Irregular 2D Shape

06 September 2013 - 07:04 AM

Hi, can anyone give me a suggestion ?


For various reasons I have a 3D mesh projected orthogonally into a 2D representation, so that it shows the "front" of my object as if it were going to be rendered. I want to work out which of my mass of vertices represent the hull of the shape. The hull isn't necessarily convex, but I have the triangle indices and all the vertices have been welded, so I know "what connects with what". I need to bear in mind that the vertices that make up the visible hull of the shape aren't necessarily connected to each other (i.e. they are not part of the same triangle), since the triangles are 3D shapes projected onto a 2D surface.


I could just render the image onto a surface and use simple edge finding to locate the hull, but this seems a low-quality approach. I hoped there might be a better solution.
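If a convex outline would do as a first pass, Andrew's monotone chain gives the hull of the projected 2D vertices in O(n log n); for the true, possibly concave silhouette something like alpha shapes or walking the projected boundary edges would be needed instead. A sketch of the convex case:

```python
def cross(o, a, b):
    """2D cross product of OA and OB; positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```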


Does anyone have any suggestions ?





Projecting a Textured Mesh onto a Cube Texture

20 July 2013 - 11:10 AM



I have a technique which I'm trying out, in order to reduce poly count on distant multi-textured objects. The approach is to generate a convex hull of the object and render that instead (when at a distance). I do this reasonably successfully for my shadows; although they are clearly not high-quality shadows, for buildings and trees etc. they suffice for me, and this means I can render only the hull on my shadow-casting pass. However, I want to extend the low-poly goodness of hull drawing by texturing my hull using a cube map.


In order to properly texture the 3D hull I want to use texCUBE to sample a cube map, prepared during conditioning, of the multi-textured mesh fully rendered.


Although I can create a "sky box" map of each of the faces of the original mesh (rotated orthogonally), I don't think that's what is needed here. I want to reverse the process of texCUBE by projecting the mesh onto the texture render target "from within the mesh", i.e. with the camera at the mesh centroid. That would hopefully generate a cube map of the mesh which I can later use with texCUBE to render my hull.
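The standard way to build such a map is to render the mesh six times from its centroid, one 90-degree-FOV perspective view per cube face, using the conventional D3D face look/up vectors; texCUBE then inverts this by picking the face from the sample direction's major axis. A sketch of the face table and the lookup rule (illustrative, not XNA-specific):

```python
# D3D cube-map face conventions: face -> (look direction, up vector).
CUBE_FACES = {
    "+x": ((1, 0, 0),  (0, 1, 0)),
    "-x": ((-1, 0, 0), (0, 1, 0)),
    "+y": ((0, 1, 0),  (0, 0, -1)),
    "-y": ((0, -1, 0), (0, 0, 1)),
    "+z": ((0, 0, 1),  (0, 1, 0)),
    "-z": ((0, 0, -1), (0, 1, 0)),
}

def face_for_direction(d):
    """Which cube face a sample direction falls on (major-axis rule),
    mirroring how texCUBE selects a face."""
    ax = [abs(c) for c in d]
    axis = ax.index(max(ax))
    sign = "+" if d[axis] >= 0 else "-"
    return sign + "xyz"[axis]
```

Each of the six renders is an ordinary perspective pass with a square aspect ratio; the "exploded" feel mentioned below comes only from the camera sitting inside the mesh, not from a different projection type.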


A couple of questions;


1) Is this approach common/reasonable?

2) How do I go about projecting the cube map from my multi-textured mesh? The projection is neither orthographic nor perspective but "exploded", and I've no real idea whether that is feasible.