  1. Hi, I have a procedurally generated tiled landscape and want to apply "regional" information to the tiles at runtime: forests, roads, pretty much anything that could be defined as a "region". Up until now I've done this by creating a mesh defining the region on the CPU and interrogating that mesh during landscape tile generation; I then add regional information to the landscape tile via a series of vertex boolean properties. For each landscape tile vertex I do a ray-mesh intersect into the region mesh and take a value from it. For example, my landscape vertex could be:

        struct Vtx {
            Vector3 Position;
            bool IsForest;
            bool IsRoad;
            bool IsRiver;
        }

    I would then have one region mesh defining a forest, another defining rivers, and so on. When generating my landscape vertexes I do an intersect check against the various region meshes to see what kind of landscape each vertex falls within.

    My ray-mesh intersect code isn't particularly fast, and there may be many region meshes to interrogate, so I want to see if I can move this work onto the GPU: when I create a set of tile vertexes I would call a compute (or other) shader, pass the region mesh to it, and interrogate that mesh inside the shader. The output would be a buffer in which all the landscape vertex boolean values have been filled in. The way I see this being done is to pass two RWStructuredBuffers to a compute shader, one containing the landscape vertexes and the other containing some definition of the region mesh (possibly the region might consist of two buffers containing a set of positions and indexes). The compute shader would do a ray-mesh intersect check for each landscape vertex and set the boolean flags in a corresponding output buffer.

    In theory this is a parallelisable operation (no landscape vertex relies on another for its values), but I've not seen any examples of a ray-mesh intersect being done in a compute shader, so I'm wondering if my approach is wrong and the reason I've not seen any examples is that no one does it that way. If anyone can comment: Is this a really bad idea? If no one does it that way, does everyone use a texture to define this kind of region information? If so, given I've only got a small number of possible region types, what texture format would be appropriate, as 32 bits seems really wasteful? Is there another common approach to adding information to a basic height-mapped tile system that would perform well for runtime-generated tiles? Thanks, Phillip
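    For reference, the per-vertex test described above can be prototyped on the CPU before porting it to a compute shader. Below is a minimal Python sketch (all names are mine, not from any engine) using the Moller-Trumbore ray-triangle intersection, casting a vertical ray from above each landscape vertex against the region mesh's triangles; the same arithmetic translates almost line for line into HLSL.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-7):
    """Moller-Trumbore: True if the ray hits the triangle in front of origin."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return False
    inv_det = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv_det
    if u < 0.0 or u > 1.0:
        return False
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return False
    return dot(e2, q) * inv_det > eps   # hit distance must be positive

def vertex_is_in_region(vertex_pos, region_triangles):
    """Cast a ray straight down (-Y) from high above the vertex; any hit
    against the region mesh means the vertex lies inside that region."""
    origin = (vertex_pos[0], vertex_pos[1] + 1000.0, vertex_pos[2])
    down = (0.0, -1.0, 0.0)
    return any(ray_hits_triangle(origin, down, *tri) for tri in region_triangles)
```

    In a compute shader version, each thread would run the inner loop for one landscape vertex against the region's position/index buffers, which is exactly the embarrassingly parallel shape described above.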
  2. PhillipHamlyn

    Tessellation Factor that preserves vertexes

    JoeJ,   Thanks - that's what I was missing. I control the tess factor like I do a LOD and increase it by doubling.   Thanks for the insight. I've put the calculation in and I have the result I was after.
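    For anyone finding this later, the "increase by doubles" idea can be sketched as follows (a hedged illustration with made-up near/far constants): snap the distance-based factor to a power of two, so each step exactly doubles the previous subdivision and previously generated vertexes survive.

```python
import math

def tess_factor(distance, near=10.0, far=1000.0, max_factor=64):
    """Map viewer distance to a tessellation factor, then snap the result
    to a power of two so successive LOD steps are exact doublings.
    The near/far/max constants here are illustrative placeholders."""
    if distance <= near:
        return max_factor
    if distance >= far:
        return 1
    t = (far - distance) / (far - near)      # 1.0 near .. 0.0 far
    raw = 1 + t * (max_factor - 1)
    snapped = 2 ** round(math.log2(raw))     # nearest power of two
    return max(1, min(max_factor, snapped))
```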
  3. Hi,   I have a working tessellation shader but am not getting the results I hoped for. The shader works OK but the visual effect is problematic. I base my tessellation factor on distance from the viewer and can see the new triangles being created as I move around my model. However, the generated vertexes also get destroyed as I move to higher tessellation factors, and this causes some horrible visual effects. I had hoped to google up a tessellation factor scheme which would add vertexes but not remove previously generated ones, but I can't find anything that describes it.

    In a naive manual tessellation scheme I would subdivide my control points into triangles, then take the centroid and subdivide again (my triangles are all quad-based, so this would give a reasonably even distribution of roughly equilateral triangles). Each further subdivision step would build on the previous one, and no generated vertexes are lost as I make the model more detailed. The hardware tessellator doesn't attempt this scheme: it retriangulates my control point interior in a mathematically correct way to achieve the best number of even triangles, but in doing so it does not attempt to preserve or build on the previous tessellation output.

    Is there a way to achieve a steady tessellation along the naive lines I describe? Is it just a matter of controlling my tessellation factors "manually" (having a set of known values which achieve my expected results) rather than allowing the tessellation factor to be calculated entirely from distance?   Thanks, Phillip.
  4. Hi,   I pass an array of textures into my shader, with a vertex value ("textureIndex") indicating which texture from the array should be used to paint that particular vertex. If my triangle has vertexes with different selection values, the textureIndex is interpolated across the face of the triangle and my PS gets a whole "rainbow" of indexes across the triangle instead of just the two possible values I want to paint.   Is there any way of instructing the PS how to interpolate values?   If my vertexes have textureIndex values of 1, 5, 5 on my triangle, I'd wish my PS to get either 1 or 5, but not 1, 2, 3, 4, 5 (and all floats in between).   If I'm simply doing things a weird way, that would also be a great answer :-)   Phillip
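    In HLSL the direct answer is to mark the vertex-shader output with the `nointerpolation` modifier, which makes the pixel shader receive the value from one vertex rather than a blend. The Python sketch below (illustrative names, nothing engine-specific) shows why the default behaviour produces the "rainbow", and one workaround of picking the index from the dominant barycentric weight when flat interpolation isn't available.

```python
def interpolated_index(indices, bary):
    """What the rasterizer does by default: a barycentric-weighted blend,
    which yields fractional values between the vertex indexes."""
    return sum(i * w for i, w in zip(indices, bary))

def dominant_index(indices, bary):
    """Workaround: take the index from the vertex with the largest
    barycentric weight, so only actual vertex indexes are ever produced."""
    return max(zip(bary, indices))[1]
```

    With vertex indexes (1, 5, 5) and a mid-triangle sample, the default blend yields a value near 3.4 that names no real texture, while the dominant-weight pick returns one of the original indexes.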
  5. PhillipHamlyn

    DirectX Tessellation Function for Triangles

    Thanks TeaTreaTim - that works fine now.   Out of interest, how did you know which of the edge indexes was correct? I couldn't find any docs on this.
  6. Hi - I have the following simple patch constant function in DX11, but I keep getting rips, and when I look at the wireframe it's clear that adjacent edges are not getting the same tessellation factor. The CalcTessFactor() function just takes the distance from the camera to the point passed in, so it should always give the same value for the same edge centre.

        PatchTess patchFunction_Far(InputPatch<VertexToPixel_Far, 3> patch, uint patchID : SV_PrimitiveID)
        {
            PatchTess pt;

            // Compute midpoints of the edges, and the patch centre
            float3 e0 = 0.5f * (patch[0].WorldPosition + patch[1].WorldPosition);
            float3 e1 = 0.5f * (patch[1].WorldPosition + patch[2].WorldPosition);
            float3 e2 = 0.5f * (patch[2].WorldPosition + patch[0].WorldPosition);
            float3 c  = (patch[0].WorldPosition + patch[1].WorldPosition + patch[2].WorldPosition) / 3.0f;

            pt.EdgeTess[0] = CalcTessFactor(e0);
            pt.EdgeTess[1] = CalcTessFactor(e1);
            pt.EdgeTess[2] = CalcTessFactor(e2);
            pt.InsideTess  = CalcTessFactor(c);

            return pt;
        }

    My patches are triangles.   Is there something trivially wrong here, such as assuming that EdgeTess[0] corresponds to edge 0-1 rather than, say, edge 2-0? It's a wild guess.
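    For later readers: the explanation usually given for rips like this is that, in the tri domain, SV_TessFactor[i] is commonly documented as belonging to the edge *opposite* control point i, not the edge starting at vertex i (worth verifying against your own wireframe). A small Python sketch of midpoints computed under that convention, showing that two triangles sharing an edge then feed the same point to the distance function and so get the same factor:

```python
def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def edge_test_points(patch):
    """Midpoints ordered so EdgeTess[i] is the edge OPPOSITE control
    point i - the convention usually cited for the tri domain (treat this
    mapping as an assumption to confirm, not gospel)."""
    p0, p1, p2 = patch
    return [midpoint(p1, p2),   # EdgeTess[0]: edge opposite vertex 0
            midpoint(p2, p0),   # EdgeTess[1]: edge opposite vertex 1
            midpoint(p0, p1)]   # EdgeTess[2]: edge opposite vertex 2
```

    Because both triangles evaluate the shared edge at the identical midpoint, any purely distance-based CalcTessFactor returns matching factors and the seam stays watertight.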
  7. PhillipHamlyn

    GPU Gems 3 - True Imposters

    @wodinoneeye - I take your point; I was using relief mapping to simulate a sea surface but have now converted to geometry. You are quite correct, and I was using it in a context where it wasn't supposed to be used.   @C0lumbo - thanks for your comment. I agree that somewhere I've got myself lost trying to translate my current understanding of imposters over to their concept of ray tracing. I agree the pixel shader of their technique, if it contains relief mapping, would seem to be really expensive, even if I've only really got half an idea what they are on about.
  8. Hi,   I've been reading and re-reading NVIDIA GPU Gems 3, Chapter 21, "True Imposters" (available online), and I'm having some trouble understanding the implementation of their idea. I wonder if someone can help me out.

    I currently use billboards to display imposters by taking "photos" of my models at various stages of rotation in the production pipeline. I then lerp between the best two photos to give a fairly good transition as the viewer rotates around an object. I understand from the chapter that they use the ZW parts of the float4 normally used as the texture coordinate (normally a float2) to store extra information about the pixel at that location, presumably (and here's where it gets hazy) stored at the point where the texture that will be applied to the billboard is rendered during the production pipeline. I can readily see how I could store the depth in one of those floats and then use relief mapping to generate a much richer 3D effect than just a flat texture; or possibly I could use both Z and W to store two dimensions of a normalized normal, and so get better lighting on the flat texture. Those two techniques are the only ones I can conceive of, but the chapter seems to indicate they use all four channels. I presume they are talking about the imposter texture itself? However, my imposter texture contains three RGB channels already, unless they are suggesting a second texture which contains the information - like using a bump map texture, but storing other details rather than bumps. I am at a loss to think what else I could store that would be useful at render time.

    Their figure 21-3 indicates I might be able to store the "front depth" and "back depth" of the model when taking my perpendicular photo at production pipeline time, and therefore be able to reconstruct the hull of the model by ray marching (like a relief map, but not stopping at the first intersect and instead continuing until I've exited the shape). This looks plausible, but what would I do with such a hull shape? Additionally, doesn't relief mapping get really ugly as the angle to the plane gets more extreme? I've observed it's only good for a range of angles of incidence; once the view vector gets below about 30 degrees to the plane the visual artefacts become really blatant, and the hull reconstruction would get obviously distorted as the angle of incidence approaches zero.

    Does anyone know of an implementation, blog or other source I might use to understand this slightly gnomic chapter? Thanks
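    As far as I can tell, the front/back depth idea reduces per texel to an interval test during the ray march: a sample is inside the model's hull when its depth lies between the baked front and back depths. A minimal sketch of that shell test (all names mine; depths assumed normalised to the 0-1 range):

```python
def inside_hull(front_depth, back_depth, sample_depth):
    """A ray-march sample is inside the imposter's hull when its depth lies
    between the baked front and back depths for that texel."""
    return front_depth <= sample_depth <= back_depth

def march_until_exit(front, back, start=0.0, end=1.0, steps=64):
    """Walk the ray through [start, end]; return (entry, exit) depths of
    the hull, or None if the ray misses it entirely."""
    hits = []
    for i in range(steps):
        d = start + (end - start) * i / (steps - 1)
        if inside_hull(front, back, d):
            hits.append(d)
    return (hits[0], hits[-1]) if hits else None
```

    In the chapter's scheme the front/back values would come from texture lookups that vary along the march, but the per-sample membership test is this same comparison.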
  9. Hi born69,   Yes, that was a typo.   Very nice demo; we are definitely enthused by the same things.   I've got my OQ inside my render loop now and it's happily eliminating whole rafts of forest that are occluded by mountains, so I'm happy the frame is no longer spent rendering overdrawn pixels.

    For info, my trees have five LODs: a horizontal texture splat for really distant stuff (baked into the landscape tile drape texture); two billboard levels - a single Y-axis billboard for distant trees, and an 8x8 texture atlas of the tree rotated around the Y-axis (taken during asset creation) for closer imposters, where I lerp between the two atlas subtextures depending on the viewer's angle, which means I can use asymmetric trees and imposters without too many artefacts; and then three LODs of tree model, depending on distance from the viewer.

    Out of interest, where do you source your tree/vegetation models from? I've been using some free stuff from TurboSquid but it's not really optimised for real-time rendering.

    All this works OK but I still try to eliminate geometry wherever possible. I will adopt the "number of seconds plus random" timeout for occluded objects becoming de-occluded; it seems a sensible way of doing it.
  10. Thanks born69 -   So occlusion queries return the number of pixels that passed Z-testing at the point they were tested, which doesn't accurately reflect the number of pixels still visible when the render completes. That makes sense to me: since I submit Begin/End commands either side of my DrawIndexedInstanced call, and the render pipeline always does things in the order it's given, my counter will start before my mesh is rendered and stop immediately after. Subsequent mesh renders might overwrite my pixels and alter the depth buffer against which the Z-test initially passed, but since I've told the pipeline to stop collecting once my mesh render has completed, the measure can only contain my object's information.

    I have sorted my occluders front-to-back anyway to take advantage of early Z-rejection (although I have an additional question as to how that is compatible with batching calls by resource usage, which suggests I should sort by resource set, not world position - but that's another story).

    I didn't quite understand where in your engine you actually reject a mesh because of its previous-frame occlusion - I guess that was in the step "...used occlusion queries from the previous frame..."? When do you then reset the "occluded" flag for a previously occluded mesh? Do you always render all your geometry to the Z-buffer and then use occlusion queries to exclude the failing objects from the main pass? I can see how that would work, but when I tried it I found the performance benefit I hoped to gain by skipping the geometry of occluded objects was outweighed by rendering all of them, all of the time, in the Z-buffer pass, and only saving their main-pass render. My pixel shaders in the main pass aren't complicated enough to outweigh the cost of the double render. Maybe I am expecting too much benefit from this technique if the double-pass method is quite a common solution.

    My specific problem is distant forests: in most cases they are occluded by a landscape tile but fall within the view frustum. I use imposters and pre-rendered landscape coverage for really distant treelines to reduce the hit, but I still want to eliminate them from the frame completely if I can. Maybe I'll just monitor the camera position and do another render/query for each forest in turn when the viewer has moved an appropriate distance. It's a bit of a bodge but I can't see another way.
  11. Hi,   I've experimented with occlusion querying and culling using a pre-render pass over simplified meshes of my landscape, and have hit a few problems getting a good balance between mesh complexity and over-aggressive culling with simple AABB-style meshes. This is especially true when trying to cull my landscape tiles, for which there just doesn't seem to be a good "simplified mesh" that doesn't lead to popping artefacts.

    I read on GameDev that the other approach is to weave the occlusion queries into the main render pipeline (taking care to avoid stalling), check the outputs on the following frame, and skip the render of any objects that passed the frustum cull but subsequently didn't get anything rendered. I have two questions on implementing this, which I think are not framework specific.

    1) I need to pre-sort my objects front-to-back to ensure that occluded objects are rejected based on depth value and not just overdrawn. I believe that overdrawn pixels get added to the occlusion query results even though they are never visible. Is that the correct interpretation?

    2) If I reject occluded objects on the following frame, at what point should I attempt to draw them again to retest their visibility? Is this simply based on knowledge of the world dynamics (i.e. camera and object location changes), or is there a more technical approach?   Thanks in advance.
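    Regarding question 2, one policy discussed later in this thread is to keep skipping an occluded object for a fixed interval plus a random jitter, then draw and re-query it; the jitter stops every occluded object retesting on the same frame. A sketch with made-up class and parameter names:

```python
import random

class OcclusionScheduler:
    """Retest policy sketch: once a mesh's query comes back fully occluded,
    skip drawing it until base_delay plus a random jitter has elapsed.
    All names and timings here are illustrative, not from any engine."""

    def __init__(self, base_delay=0.5, jitter=0.5, rng=random.random):
        self.base_delay = base_delay
        self.jitter = jitter
        self.rng = rng
        self.retest_at = {}   # mesh id -> time of next visibility retest

    def on_query_result(self, mesh_id, now, visible_pixels):
        if visible_pixels == 0:
            self.retest_at[mesh_id] = now + self.base_delay + self.rng() * self.jitter
        else:
            self.retest_at.pop(mesh_id, None)

    def should_draw(self, mesh_id, now):
        """Draw (and re-query) unless the mesh is inside its occluded window."""
        return now >= self.retest_at.get(mesh_id, 0.0)
```

    Combining this with camera-movement triggers (retest early if the viewer has moved far enough) covers the "world dynamics" half of the question.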
  12. I pretty much answered my own question. If I use a set of BC1_UNORM planting maps I can cram three "planting schemes" into each texture, and it costs me only 4 bpp. So for a set of nine possible textures I use three BC1 textures, which costs 12 bpp in total. I can interpolate freely between all three, and this seems quite efficient. If there are any other schemes out there I'd still be interested in hearing about them.
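    The storage arithmetic, for anyone checking: BC1 compresses to 4 bits per pixel and its RGB channels give three usable planting channels, so the cost of the scheme works out as below (helper name is mine).

```python
def planting_map_cost(num_schemes, channels_per_map=3, bpp_per_map=4):
    """How many BC1 maps are needed for a given number of planting schemes,
    and the total bits per pixel: BC1 is 4 bpp and its RGB data gives three
    usable channels per map."""
    maps_needed = -(-num_schemes // channels_per_map)   # ceiling division
    return maps_needed, maps_needed * bpp_per_map
```

    Compare with a single uncompressed R8G8B8A8 map: four channels at 32 bpp, versus nine channels at 12 bpp here.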
  13. PhillipHamlyn

    Should I always use float4 ?

    I recently converted from XNA 4 to SharpDX (D3D11) and had a nasty shock because of the need to align all my constant buffers to 16-byte blocks. It's the only case I know of where you need to pad your input to the shader to be "larger than needed" by your application. Because your question is about vertex buffers specifically, I realise this doesn't directly relate to your query, but it's worth pointing out.
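    The rule in question: D3D11 requires a constant buffer's size (ByteWidth) to be a multiple of 16 bytes, so a CPU-side struct generally needs trailing padding. A trivial sketch of the rounding (function name is mine):

```python
def cbuffer_size(raw_size_bytes, alignment=16):
    """Round a CPU-side struct size up to the next 16-byte boundary, as
    D3D11 requires for constant buffer ByteWidth; the difference is the
    padding you must add to the struct."""
    return (raw_size_bytes + alignment - 1) // alignment * alignment
```

    So a 20-byte struct (e.g. a float4 plus one extra float) must be declared as 32 bytes, with 12 bytes of padding.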
  14. Hi,   On a standard terrain I want to render using textures chosen from a precalculated "environment map", i.e. meadow texture, beach texture, etc. I have seen many examples use a full R8G8B8A8 "environment map" texture to allow linear sampling, then blend their textures based on the weight each channel returns from the sample.

    Is there a more modern way of achieving this? Committing 8 bits to each channel seems wasteful (depending on how high-resolution the environment map is), and the need for multiple environment map textures, since each can only select between four possible textures, also seems wasteful. Is there a common better method?

    I have attempted using an R8_UINT texture with the Load() method; this gives me 256 possible texture selections, and I could then do my own four-tap interpolation and blend based on pixel world distance for each tap. Does this seem a reasonable approach, or is it too computationally expensive?   Philip H.
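    A sketch of the Load()-plus-manual-filtering idea (scalar "colours" and made-up names, just to show the weights): fetch the four neighbouring texels of the index map, look up each selected texture, and blend with ordinary bilinear weights computed from the fractional sample position.

```python
def manual_bilinear_blend(index_map, textures, u, v):
    """Manual four-tap filtering over an R8_UINT-style index map.
    index_map is a 2D list of ints, textures maps index -> colour (a scalar
    here for simplicity), and (u, v) is an unnormalised texel coordinate.
    All names are illustrative."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    taps = [(x0,     y0,     (1 - fx) * (1 - fy)),
            (x0 + 1, y0,     fx       * (1 - fy)),
            (x0,     y0 + 1, (1 - fx) * fy),
            (x0 + 1, y0 + 1, fx       * fy)]
    colour = 0.0
    for x, y, w in taps:
        colour += textures[index_map[y][x]] * w   # look up, then weight
    return colour
```

    The cost per pixel is four Load()s plus up to four texture samples, which is broadly comparable to the four-channel splat-map approach but with 256 selectable textures instead of four.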
  15. PhillipHamlyn

    Imposter Lighting

    Hi,   I am trying to implement an imposter lighting scheme where I record a texture atlas of my model taken at various Y-axis rotation angles (to pre-calculate a set of textures I can render as imposters, lerping between them). I have a system where I write a second texture atlas containing the model normals instead of the texture values, as a kind of deferred rendering process in my pipeline.

    Aside from having some trouble using my low-grade maths skills to rotate the normal stored in the appropriate pixel, I get some reasonable results; i.e. the lighting on the 3D model somewhat smoothly interpolates into the imposter, which lights itself using the normals stored in the normals texture atlas.

    I am having one issue, though, and am looking for help. In my imposter VS I pass in a radian angle of rotation which matches the angle I will use in my model matrix when rendering the full 3D model. I use this to select the appropriate texture and normal pixels from my texture atlas. This works OK. For the lighting to work I need to rotate the pre-baked model-space normal in the PS by the Y-axis rotation value (Y is up in my world); this should then give the same normal value as if I'd read it through the 3D model's VS input structure and rotated it with a model matrix.

    My code fragments are as follows. Imposter VS:

        // Matrix def
        output.ModelRotation = float3x3(
            cos(modelRotation), 0.0f, -sin(modelRotation),
            0.0f,               1.0f,  0.0f,
            sin(modelRotation), 0.0f,  cos(modelRotation));

    Imposter PS:

        // LERP between my two possible pre-baked normal textures.
        // These are stored in model space, not tangent space.
        float3 normalSample = ((tex2D(TextureSampler1, ps_input.TextureCoordinate0.xy) * ps_input.TextureCoordinate0.z)
                             + (tex2D(TextureSampler1, ps_input.TextureCoordinate1.xy) * ps_input.TextureCoordinate1.z)).rgb;
        // Correct into the -1 to 1 range.
        float3 normal = (2 * normalSample) - 1.0f;
        // The normal is in model space. We need to apply the specific model
        // rotation on top of it, for this particular instance of the imposter.
        normal = normalize(mul(normal, ps_input.ModelRotation)); // Rotate

    I then use the standard lighting calculation I use elsewhere to light the pixel using the normal.

    My problem is that I think my Y-axis rotation matrix is incorrect, but I struggle with the row-vs-column ordering concepts in HLSL vs. DirectX, so I cannot easily verify that the matrix has the correct effect on the normal via a C# unit test. If anyone can guide me to the correct method of constructing that matrix, I'd be grateful.   Any other comments on the basic method are also gratefully received.   Phillip
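    The rotation question can be unit-tested outside the shader. Here is that test in Python rather than C#: a direct translation of the float3x3 above, assuming mul(normal, M) treats the normal as a row vector (the usual reading of mul with a vector on the left). At 90 degrees this matrix maps +Z to +X and +X to -Z, which, as far as I recall, matches XNA's Matrix.CreateRotationY layout, so the matrix as written looks consistent under the row-vector convention.

```python
import math

def rotate_y_row_vector(n, angle):
    """Mirror of the HLSL float3x3(cos,0,-sin, 0,1,0, sin,0,cos) applied as
    mul(normal, M), i.e. the normal as a ROW vector: out[j] = sum_i n[i]*m[i][j].
    Function name is mine; the matrix layout is from the post."""
    c, s = math.cos(angle), math.sin(angle)
    m = [[c,   0.0, -s],
         [0.0, 1.0, 0.0],
         [s,   0.0,  c]]
    return tuple(sum(n[i] * m[i][j] for i in range(3)) for j in range(3))
```

    If the shader's results disagree with this, the likely culprit is the column-vector convention (mul(M, normal)), which applies the transpose and therefore the opposite rotation direction.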