PhillipHamlyn

  1. JoeJ, thanks - that's what I was missing. I now control the tessellation factor like I do a LOD and increase it by doubles (a sketch of this is included after this list). Thanks for the insight. I've put the calculation in and I have the result I was after.
  2. Hi, I have a working tessellation shader but am not getting the results I hoped for. The shader works OK but the visual effect is problematic. I base my tessellation factor on distance from the viewer and can see the new triangles being created as I move around my model. However, the generated vertices also get destroyed as I move to higher tessellation factors, and this causes some horrible visual effects. I had hoped to google up a tessellation factor that would add vertices but not remove previously generated ones, but I can't find anything that describes it. In a naive manual tessellation scheme I would subdivide my control points into triangles, then take the centroid and subdivide again (my triangles are all quad based, so this would give a reasonably even distribution of roughly equilateral triangles). Each step of further subdivision would build on the previous one, and no generated vertices are lost as I make the model more detailed. The hardware tessellator doesn't attempt this scheme - it retriangulates my control point interior in a mathematically correct way to achieve the best number of even triangles, but in doing so it does not attempt to preserve or build on the previous tessellation output. Is there a way to achieve a steady tessellation along the naive lines I describe? Is it just a matter of controlling my tessellation factors "manually" (having a set of known values which achieve my expected results) rather than allowing the tessellation factor to be calculated entirely from distance? Thanks, Phillip.
  3. Hi, I pass an array of textures into my shader, with a vertex value ("textureIndex") indicating which texture from the array should be used to paint that particular vertex. If my triangle has vertices with different selection values, the textureIndex is interpolated across the face of the triangle and my PS gets a whole "rainbow" of indexes across the triangle, instead of just the two possible values I want to paint. Is there any way of instructing the PS how to interpolate values? If my vertices have textureIndex values of 1,5,5 on my triangle, I'd wish my PS to get either 1 or 5, but not 1,2,3,4,5 (and all floats in between). If I'm simply doing things a weird way, that would also be a great answer :-) (a sketch of one option is included after this list). Phillip
  4. Thanks TeaTreaTim - that works fine now. Out of interest, how did you know which of the edge indexes was correct? I couldn't find any docs on this.
  5. Hi - I have the following simple patch function in DX11, but I keep getting rips, and when I look at the wireframe it's clear that adjacent edges are not getting the same tessellation factor. The CalcTessFactor() function just uses the distance from the camera to the point passed in, so it should always give the same value for the same edge center that I pass in.

     PatchTess patchFunction_Far(InputPatch<VertexToPixel_Far, 3> patch, uint patchID : SV_PrimitiveID)
     {
         PatchTess pt;

         // Compute midpoint on edges, and patch center
         float3 e0 = 0.5f * (patch[0].WorldPosition + patch[1].WorldPosition);
         float3 e1 = 0.5f * (patch[1].WorldPosition + patch[2].WorldPosition);
         float3 e2 = 0.5f * (patch[2].WorldPosition + patch[0].WorldPosition);
         float3 c  = (patch[0].WorldPosition + patch[1].WorldPosition + patch[2].WorldPosition) / 3.0f;

         pt.EdgeTess[0] = CalcTessFactor(e0);
         pt.EdgeTess[1] = CalcTessFactor(e1);
         pt.EdgeTess[2] = CalcTessFactor(e2);
         pt.InsideTess  = CalcTessFactor(c);

         return pt;
     }

     My patches are triangles. Is there something I'm doing trivially wrong here (like assuming that EdgeTess[0] corresponds to edge 0-1, rather than edge 2->0, for instance)? It's a wild guess.
  6. @wodinoneeye - I take your point; I was using relief mapping to simulate a sea surface but have now converted to geometry. You are quite correct, and I was using it in a context where it wasn't supposed to be used. @C0lumbo - thanks for your comment - I agree that somewhere I've got myself lost trying to translate my current understanding of imposters over to their concept of ray tracing. I agree the pixel shader of their technique, if it contains relief mapping, would seem to be really expensive, even if I've only really got half an idea what they are on about.
  7. Hi, I've been reading and re-reading the NVIDIA GPU Gems 3 Chapter 21 "True Imposters" (available online). I'm having some trouble understanding the implementation of their idea and wonder if someone can help me out.
     I currently use billboards to display imposters by taking "photos" of my models at various stages of rotation in the production pipeline. I then lerp between the best two "photos" to give a fairly good transition as the viewer rotates around an object. I understand from the chapter that they use the ZW parts of the float4 normally used as the texture coordinate (normally a float2) to store extra information about the pixel at that location, presumably (and here's where it gets hazy) stored at the point where the texture that will be applied to the billboard is rendered during the production pipeline. I can readily see how I could store the depth in one of those floats and then use relief mapping to generate a much richer 3D effect than just a flat texture, or possibly I could use both Z and W to store two components of a normalized normal and therefore get better lighting on the flat texture. Those are the only two techniques I can conceive of, but the chapter seems to indicate they use all four channels. I presume they are talking about the imposter texture itself? However, my imposter texture already contains three RGB channels, unless they are suggesting a second texture that contains the information - like a bump map texture, but storing other details instead of bumps. I am at a loss to think what else I could store that would be useful at render time.
     Their Figure 21-3 indicates I might be able to store the "front depth" and "back depth" of the model when taking my perpendicular "photo" at production pipeline time, and therefore be able to reconstruct the hull of the model by ray marching (like a relief map, but not stopping at the first intersection, continuing instead until I've exited the shape) - a rough sketch of my reading of this is included after this list. This looks plausible - but what would I do with such a hull shape? And additionally, doesn't relief mapping degrade badly as the angle to the plane gets more and more extreme? I've observed it's only good for a range of angles of incidence, and once the view vector is less than about 30 degrees to the plane, the visual artefacts become really blatant. The resultant hull would get obviously distorted as the angle of incidence approaches zero.
     Does anyone know an implementation, blog or other source I might use to understand this slightly gnomic chapter? Thanks
  8. Hi born69, Yes - that was a typo. Very nice demo - we are definitely enthused by the same things. I've got my OQ inside my render loop now and it's happily eliminating whole rafts of forest that are occluded by mountains, so I'm happy the framerate is no longer spent on overdrawn pixels. For info, my trees have five LODs: a horizontal texture splat for really distant stuff (baked into the landscape tile drape texture); two billboard levels - a single Y-axis billboard for distant trees, and an 8x8 texture atlas of the tree rotated around the Y axis (taken during asset creation) for closer imposters; and then three LODs of tree model, depending on distance from the viewer. I lerp between the two nearest atlas subtextures depending on the viewer's angle (sketched after this list), which means I can use asymmetric trees and imposters without too many artefacts. Out of interest, where do you source your tree/vegetation models from? I've been using some free stuff from TurboSquid but it's not really optimised for real-time rendering. All this works OK but I still try to eliminate geometry wherever possible. I will add the "number of seconds plus random" as my timeout for objects becoming de-occluded - it seems a sensible way of doing it.
  9. Thanks born49 - so occlusion queries return the number of pixels that pass Z-testing at the point they are tested, which doesn't necessarily match the number of pixels visible when the render completes. That makes sense to me, especially since I submit Begin/End commands on either side of my DrawIndexedInstanced call and the render pipeline always does things in the order it's been given: my counter starts before my mesh is rendered and stops immediately after. Subsequent mesh renders might overwrite my pixels and alter the depth buffer against which the Z-test initially passed, but since I've told the pipeline to stop counting once my mesh render has completed, only my object's information is in the measure.
     I have sorted my occluders front-to-back anyway to take advantage of early Z-rejection (although I have an additional question about how that is compatible with batching calls by resource usage - which suggests I should sort by resource set, not world position - but that's another story).
     I didn't quite understand where in your engine you actually reject a mesh because of its previous-frame occlusion - I guess that was in the step "...used occlusion queries from the previous frame.."? When do you then reset the "occluded" flag for a previously occluded mesh? Do you always render all your geometry to the Z-buffer and then use occlusion queries to exclude the objects that fail from the main pass? I can see how that would work, but when I tried it I found that the performance benefit I hoped to gain by skipping the geometry of occluded objects was outweighed by rendering all of them, all of the time, in the Z-buffer pass, and only saving the main-pass render. My pixel shaders in the main pass aren't complicated enough to outweigh the cost of the double render. Maybe I am expecting too much benefit from this technique if the double-pass method is quite a common solution.
     My specific problem is distant forests - in most cases they are occluded by a landscape tile but fall within the view frustum. I use imposters and pre-rendered landscape coverage for really distant treelines to reduce the hit, but I still want to eliminate them from the frame completely if I can. Maybe I'll just monitor the camera position and do another render/query for each forest in turn when the viewer has moved an appropriate distance. It's a bit of a bodge but I can't see another way.
  10. Hi, I've experimented with occlusion querying and culling using a pre-render pass on simplified meshes of my landscape, and have hit a few problems getting a good balance between mesh complexity and over-aggressive culling with simple AABB-style meshes. This is especially true when trying to cull my landscape tiles, for which there just doesn't seem to be a good "simplified mesh" that doesn't lead to popping artefacts. I read on GameDev that the other approach is to weave the occlusion queries into the main render pipeline (taking care to avoid stalling), check the results on the following frame, and skip the render of any objects that passed the frustum cull but subsequently didn't get anything rendered. I have two questions on implementing this, which I think are not framework specific. 1) I need to pre-sort my objects front-to-back to ensure that occluded objects are rejected based on depth value and not just overdrawn. I believe that overdrawn pixels get added to the occlusion query results even though they are never visible. Is that the correct interpretation? 2) If I reject occluded objects on the following frame, at what point should I attempt to draw them again to retest their visibility? Is this simply based on knowledge of the world dynamics (i.e. camera and object location changes), or is there a more technical approach? Thanks in advance.
  11. I pretty much answered my own question. If I use a set of BC1_UNORM planting maps I can cram three "planting schemes" into each single texture at a cost of only 4bpp. So for a set of 9 possible textures I use three BC1 textures, which costs 12bpp. I can interpolate freely between all three, and this seems quite efficient (a rough sketch of the sampling side is included after this list). If there are any other schemes out there I'd still be interested in hearing about them.
  12. I recently converted from XNA4 to SharpDX11 and had a nasty shock because of the need to byte-block align all my constant buffers. It's the only case I know of where you need to pad your input to the shader to use space that is "larger than needed" by your application (an illustration is included after this list). Because your question is about vertex buffers specifically, I realise this does not directly relate to your query, but it is worth pointing out.
  13. Hi, On a standard terrain I want to choose my rendering texture from a precalculated "environment map" - i.e. meadow texture, beach texture, etc. I have seen many examples use a full R8G8B8A8 "environment map" texture to allow linear sampling, then blend their textures based on the weight each channel returns from the sample. Is there a more modern way of achieving this? Committing 8 bits to each channel seems wasteful (depending on how high resolution the "environment map" is). Also, the need for multiple environment map textures, since each can only select between four possible textures, again seems wasteful. Is there a commonly used better method than this? I have attempted using an R8_UINT texture and the Load() method - this gives me 256 possible texture selections - and I could then do my own 4-tap interpolation and blend based on pixel world distance for each tap (a sketch of this is included after this list). Does this seem a reasonable approach or is it too computationally expensive? Philip H.
  14. Hi, I am trying to implement an imposter lighting scheme where I record a texture atlas of my model taken at various Y-axis rotation angles (to pre-calculate a set of textures I can render as imposters, lerping between them). I have a system where I write a second texture atlas containing the model normals instead of the texture values, as a kind of deferred rendering process in my pipeline. Aside from having some trouble using my low-grade maths skills to rotate the normal stored in the appropriate pixel, I get some reasonable results; i.e. the lighting on the 3D model somewhat smoothly interpolates into the imposter, which lights itself using the normals stored in the normals texture atlas.
      I am having one issue though and am looking for help. In my imposter VS I pass in a radian angle of rotation which matches the angle I will use in my Model matrix when rendering the full 3D model. I use this to select the appropriate texture and normal pixels from my texture atlas. This works OK. For my normal to work I need to rotate it in the PS, applying the Y-axis rotation value to the pre-baked model-space normal (Y is up in my world) - this should then give the same value for the normal as though I'd read it through the 3D model VS input structure and rotated it with a Model matrix.
      My code fragments are:

      imposter VS:

      // Matrix def from http://gamedev.stackexchange.com/questions/103002/how-to-rotate-a-3d-instance-using-an-hlsl-shader
      output.ModelRotation = float3x3(
          cos(modelRotation), 0.0f, -sin(modelRotation),
          0.0f,               1.0f,  0.0f,
          sin(modelRotation), 0.0f,  cos(modelRotation));

      imposter PS:

      // LERP between my two possible pre-baked normal textures. These are stored in model space, not tangent space.
      float3 normalSample = ((tex2D(TextureSampler1, ps_input.TextureCoordinate0.xy) * ps_input.TextureCoordinate0.z)
                           + (tex2D(TextureSampler1, ps_input.TextureCoordinate1.xy) * ps_input.TextureCoordinate1.z)).rgb;
      // Correct them into the -1 to 1 range.
      float3 normal = (2 * normalSample) - 1.0f;
      // The normal is in model space. We need to apply the specific model rotation on top of it, for this particular instance of the imposter.
      normal = normalize(mul(normal, ps_input.ModelRotation)); // Rotate

      I then use the standard lighting calculation I use elsewhere to light the pixel using the normal. My problem is that I think my Y-axis rotation matrix is incorrect, but I struggle with the row-vs-column ordering concepts in HLSL vs. DirectX, so I cannot easily verify that the matrix has the correct effect on the normal via a C# unit test (a quick in-shader sanity check is sketched after this list). If anyone can guide me to the correct method of constructing that matrix, I'd be grateful. Any other comments on the basic method are also gratefully received. Phillip
  15.   Looks great, I'm looking forward to your blog post.
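
Sketch for posts 1-2: a minimal hull-shader helper showing one way to compute a distance-based tessellation factor that "increases by doubles", as described in post 1. The cbuffer names (gCameraPosition, gMinTessDistance, gMaxTessFactor) are assumptions for illustration, not anything from the thread.

    cbuffer TessParams
    {
        float3 gCameraPosition;   // camera position in world space
        float  gMinTessDistance;  // distance at (and beyond) which the factor is 1
        float  gMaxTessFactor;    // e.g. 64
    };

    float CalcTessFactorPow2(float3 worldPos)
    {
        float dist = distance(worldPos, gCameraPosition);

        // Continuous factor: 1 at gMinTessDistance, doubling each time the distance halves.
        float raw = clamp(gMinTessDistance / dist, 1.0f, gMaxTessFactor);

        // Snap down to the nearest power of two (1, 2, 4, 8, ...) so the factor changes
        // in discrete doubling steps rather than continuously as the camera moves.
        return exp2(floor(log2(raw)));
    }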
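
Sketch for post 3: one possible option (not mentioned in the thread) is HLSL's nointerpolation modifier, which stops the rasterizer blending the value and instead passes the leading vertex's value to every pixel of the triangle. Note this gives one index per triangle (a 1,5,5 triangle is painted entirely with one of those values) rather than a per-pixel choice. The struct and resource names below are invented for the example.

    // Hypothetical interpolants: the texture index is passed through unblended.
    struct VertexToPixel
    {
        float4 Position : SV_POSITION;
        float2 TexCoord : TEXCOORD0;
        nointerpolation uint TextureIndex : TEXCOORD1; // no "rainbow" of in-between values
    };

    Texture2DArray gTextures;   // assumed texture array bound by the application
    SamplerState   gSampler;

    float4 PS(VertexToPixel input) : SV_TARGET
    {
        // The whole triangle samples a single array slice.
        return gTextures.Sample(gSampler, float3(input.TexCoord, input.TextureIndex));
    }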
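
Sketch for post 7: my rough reading of the front/back depth idea, purely as an illustration - this is an assumption about the technique, not necessarily what the chapter does. It assumes a baked texture whose two channels hold the model's front and back depth under each texel, and a ray already expressed in the imposter's local texture space (xy = texcoord, z = normalised depth).

    Texture2D    gFrontBackDepth;   // assumed: R = front depth, G = back depth, both in [0,1]
    SamplerState gPointSampler;

    // March along the ray; a point is inside the reconstructed hull when its depth
    // lies between the stored front and back depths at that texel.
    bool MarchImposterHull(float3 rayStart, float3 rayDir, out float3 hit)
    {
        const int STEPS = 32;
        float3 p       = rayStart;
        float3 stepVec = rayDir / STEPS;

        [loop]
        for (int i = 0; i < STEPS; ++i)
        {
            float2 fb = gFrontBackDepth.SampleLevel(gPointSampler, p.xy, 0).rg;
            if (p.z >= fb.x && p.z <= fb.y)
            {
                hit = p;        // first sample inside the front/back interval
                return true;
            }
            p += stepVec;
        }
        hit = p;
        return false;           // the ray passed the imposter without entering the hull
    }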
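
Sketch for post 8: how selecting and blending the two nearest rotation frames of the 8x8 imposter atlas might look. The frame count, the row-major atlas layout, and viewYaw being the camera-to-imposter angle around the Y axis are all assumptions about the setup described in the post.

    static const float FRAME_COUNT = 64.0f;   // 8 x 8 atlas of pre-rotated "photos"
    static const float TWO_PI = 6.2831853f;

    // Pick the two frames that bracket the view angle, plus the lerp weight between them.
    void SelectImposterFrames(float viewYaw, out uint frameA, out uint frameB, out float blend)
    {
        float t = frac(viewYaw / TWO_PI) * FRAME_COUNT;  // continuous frame index in [0, 64)
        frameA  = (uint)t % 64;                          // frame just below the angle
        frameB  = (frameA + 1) % 64;                     // next frame, wrapping past 360 degrees
        blend   = frac(t);                               // weight of frameB in the lerp
    }

    // UV of a point inside a given frame, assuming a row-major 8x8 layout.
    float2 FrameUV(uint frame, float2 localUV)
    {
        float2 cell = float2(frame % 8, frame / 8);
        return (cell + localUV) / 8.0f;
    }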
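
Sketch for post 11: what the sampling side of the three-BC1 scheme could look like - nine weights packed into the RGB channels of three "planting maps". The resource names and the renormalisation step are my assumptions, not part of the original post.

    Texture2D      gPlantingMap0;    // weights for terrain textures 0-2
    Texture2D      gPlantingMap1;    // weights for terrain textures 3-5
    Texture2D      gPlantingMap2;    // weights for terrain textures 6-8
    Texture2DArray gTerrainTextures;
    SamplerState   gLinearSampler;

    float4 SampleTerrain(float2 mapUV, float2 detailUV)
    {
        // Each BC1 sample yields three weights; hardware bilinear filtering interpolates them for free.
        float3 w[3];
        w[0] = gPlantingMap0.Sample(gLinearSampler, mapUV).rgb;
        w[1] = gPlantingMap1.Sample(gLinearSampler, mapUV).rgb;
        w[2] = gPlantingMap2.Sample(gLinearSampler, mapUV).rgb;

        float4 colour = 0;
        float  total  = 0;
        [unroll]
        for (uint i = 0; i < 9; ++i)
        {
            float weight = w[i / 3][i % 3];
            colour += weight * gTerrainTextures.Sample(gLinearSampler, float3(detailUV, i));
            total  += weight;
        }
        return colour / max(total, 0.0001f);  // renormalise in case the weights do not sum to 1
    }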
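
Sketch for post 12: an illustration of the constant-buffer packing rule. HLSL lays constant buffers out in 16-byte registers, and the buffer created on the CPU side must be a multiple of 16 bytes, so the matching C#/SharpDX struct usually carries explicit padding fields the shader never reads. The field names here are invented for the example.

    // 96 bytes in total: the two padding fields exist purely to satisfy the 16-byte packing rules.
    cbuffer PerObject : register(b0)
    {
        float4x4 World;     // 64 bytes
        float3   Tint;      // 12 bytes, shares a register with...
        float    Padding0;  // ...this filler float
        float    Opacity;   // starts a new 16-byte register
        float3   Padding1;  // pads the buffer size up to a multiple of 16 bytes
    };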
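
Sketch for post 13: the manual 4-tap blend of an R8_UINT index map described in the post. The weighting here is plain bilinear (using the fractional texel position) rather than the world-distance weighting mentioned, edge clamping is omitted, and the resource names are assumptions.

    Texture2D<uint> gEnvironmentMap;   // one index per map texel: which terrain texture to use
    Texture2DArray  gTerrainTextures;
    SamplerState    gLinearSampler;

    float4 SampleEnvironment(float2 mapUV, float2 detailUV, float2 mapSize)
    {
        // Integer part picks the 2x2 neighbourhood, fractional part gives the blend weights.
        float2 texel = mapUV * mapSize - 0.5f;
        int2   base  = (int2)floor(texel);
        float2 f     = frac(texel);

        float weights[4] = { (1 - f.x) * (1 - f.y), f.x * (1 - f.y),
                             (1 - f.x) * f.y,       f.x * f.y };
        int2  offsets[4] = { int2(0, 0), int2(1, 0), int2(0, 1), int2(1, 1) };

        float4 colour = 0;
        [unroll]
        for (int i = 0; i < 4; ++i)
        {
            uint index = gEnvironmentMap.Load(int3(base + offsets[i], 0));
            colour += weights[i] * gTerrainTextures.Sample(gLinearSampler, float3(detailUV, index));
        }
        return colour;
    }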
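
Sketch for post 14: in HLSL, mul(vector, matrix) treats the vector as a row vector, so the matrix must be the transpose of the one you would use with column vectors. A quick way to check the convention without a C# unit test is to rotate the +X axis a quarter turn and see which way it goes; whether -Z is the direction you actually want then depends on your world's handedness and rotation convention.

    // Y-axis rotation written for the mul(rowVector, matrix) convention used in the post.
    float3x3 RotationY(float angle)
    {
        float c = cos(angle);
        float s = sin(angle);
        return float3x3( c,    0.0f, -s,
                         0.0f, 1.0f,  0.0f,
                         s,    0.0f,  c);
    }

    // Sanity check: with this matrix and multiplication order,
    //   mul(float3(1, 0, 0), RotationY(1.5707963f))
    // evaluates to roughly float3(0, 0, -1), i.e. +X rotates onto -Z after a +90 degree turn.
    // If the result comes out as +Z, either the matrix or the mul() argument order is transposed.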