wyrzy

Members

  • Content count: 516
  • Joined
  • Last visited

Community Reputation: 430 Neutral

About wyrzy

  • Rank: Advanced Member
  1. Quote (Original post by Zipster): "Instead of rendering each sub-mesh individually, batch all the sub-meshes that have the same texture/shader. That removes the number of buildings from the equation."

     I was thinking of doing that exclusively, but I have to work with user-created content (i.e., content I have no control over), and there is little guarantee that several visible sub-meshes will share the same texture (all objects will generally have the same shader). Also, perhaps my description of the buildings was a bit vague - the user can create a scene of whatever they choose, and we just import it. Some of our current test scenes are composed of buildings with varied textures, which is exactly the case that is not performance-friendly at the moment. I could just impose the constraint that performance is going to drop significantly if you decide to put different textures on meshes that are otherwise the same; I don't know how well customers would like that, though.

     At the moment, I'm leaning towards implementing texture atlases for meshes that wouldn't batch well otherwise, and using Zipster's idea in scenarios where I have a decent number of smaller meshes that all share the same texture (a sketch of that grouping is below). I was holding off on the texture atlas approach since it's likely going to be a pain to get working with proper wrapping, mip-mapping, and such. I have a good idea of how to go about it; it's just likely to take some time to get working correctly.
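     A minimal sketch of that grouping, assuming D3D9-style interfaces; SubMesh, visible, and DrawSubMesh() are hypothetical names, not from any real engine:

         #include <d3d9.h>
         #include <algorithm>
         #include <vector>

         struct SubMesh
         {
             IDirect3DTexture9* texture;
             // ... vertex/index buffers, transform, etc.
         };

         void DrawSubMesh(IDirect3DDevice9* device, const SubMesh* sm); // hypothetical per-sub-mesh draw

         bool ByTexture(const SubMesh* a, const SubMesh* b)
         {
             return a->texture < b->texture; // any stable key works; pointer order is enough
         }

         void RenderBatched(IDirect3DDevice9* device, std::vector<SubMesh*>& visible)
         {
             // Sort visible sub-meshes so those sharing a texture are contiguous.
             std::sort(visible.begin(), visible.end(), ByTexture);

             IDirect3DTexture9* current = NULL;
             for (size_t i = 0; i < visible.size(); ++i)
             {
                 if (visible[i]->texture != current)
                 {
                     current = visible[i]->texture;
                     device->SetTexture(0, current); // state change only on texture boundaries
                 }
                 DrawSubMesh(device, visible[i]);
             }
         }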
  2. Quote (Original post by VladR): "Bind several textures to stages 0-15, use relevant texcoords and inside a shader decide which vertex should be textured with either of the textures."

     I'm not sure I follow. Are you suggesting I decide in the vertex shader which texture to sample in the pixel shader? I didn't think that was possible (short of SM4 texture arrays, which I can't use). How would I go about doing that? If I decided in the pixel shader instead, it would force 16 lookups on SM2 cards, which would likely kill performance.
  3. Hi, I am attempting to improve the batching performance of a simulation I am working on. Currently I'm working with scenes that have relatively simple meshes but often many varying textures. Imagine a scene with lots of simple buildings, each with widely different textures; some of the buildings may have multiple textures as well. At the moment the meshes are broken up into sub-meshes, with one texture per sub-mesh, and I render each sub-mesh individually. With lots of buildings, or even a good number of complex buildings (20+ textures on each), performance gets quite horrible even on recent hardware.

     I'm trying to improve performance by drawing several buildings at a time. However, it's often difficult to find several that all share the same texture. The only real option I could think of is texture atlases. I'm limited to SM2+ hardware, so I can't use those wonderful texture arrays or anything like that, and on SM2/3 hardware I can't think of any other technique that achieves what I'm after. I thought texture atlases were a bit of a dated technique, but I'm not really up to date on batching techniques. Are there any other options for rendering batches of geometry with varying textures, or am I limited to texture atlases? If I do go with atlases, I know about the drawbacks with texture wrapping and such (the load-time side is sketched below). Thanks for any other suggestions, or confirmation that texture atlases would be the best option.

     [Edited by - wyrzy on June 23, 2008 7:40:54 PM]
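     For reference, a minimal sketch of the load-time side of an atlas: remapping a sub-mesh's [0,1] UVs into its tile's sub-rectangle. Vertex and AtlasRegion are hypothetical types, and this alone does not solve the wrapping/mip-bleeding drawbacks mentioned above - tiles still need gutter padding:

         // Tile origin and extent in normalized atlas coordinates.
         struct AtlasRegion { float u0, v0, uScale, vScale; };

         struct Vertex { float x, y, z, u, v; };

         void RemapToAtlas(Vertex* verts, int count, const AtlasRegion& r)
         {
             for (int i = 0; i < count; ++i)
             {
                 verts[i].u = r.u0 + verts[i].u * r.uScale; // [0,1] -> tile's u range
                 verts[i].v = r.v0 + verts[i].v * r.vScale; // [0,1] -> tile's v range
             }
         }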
  4. I'm fairly certain that:

         ID3D10RenderTargetView* renderTargetView[] = { 0 };
         device->OMSetRenderTargets(1, renderTargetView, depthStencilView);

     should work as well. This is unlike D3D9, where you always needed a color buffer set. I haven't gotten around to trying it myself yet, though, since I haven't had a need to.
  5. Quote (Original post by AndyTX): "The second you should really address by using parallel-split shadow maps (aka. cascaded shadow maps) or similar."

     I thought he said he was already using parallel-split shadow maps?

     Quote: "When i use a shadow map of 4096 instead of 1024 (with or without PCF), artifacts are very minimized."

     As AndyTX mentioned, this is more a problem of shadow map resolution than of depth precision in the shadow map. How many splits do you use? What near/far plane values are you using, and what are the near/far planes of the subdivided view frustum during your shadow pass? Have you tried rendering the split frustums? It may be that one split covers a large portion of your scene, negating any improvement you'd get from parallel-split shadow maps. You can often modify the practical split formula a bit so that the splits cover your scene more nicely (a sketch of that formula is below). I've found it to be more of an art than an exact science, but keep in mind what the point of the multiple splits is and that will probably help when tweaking the constants. Also, as others mentioned, there are no 'perfect' constants for any of these values, as they are largely scene-dependent.
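     A sketch of the practical split scheme from the PSSM paper, which blends the logarithmic and uniform split distances with a weight lambda (0.5 is the paper's default; values toward 1 concentrate resolution near the eye). The function name and array layout are my own:

         #include <math.h>

         // Fills splits[0..numSplits] with split-plane distances along the view axis.
         void ComputeSplitDistances(float* splits, int numSplits,
                                    float nearZ, float farZ, float lambda)
         {
             for (int i = 0; i <= numSplits; ++i)
             {
                 float t       = (float)i / numSplits;
                 float logTerm = nearZ * powf(farZ / nearZ, t); // logarithmic scheme
                 float uniTerm = nearZ + (farZ - nearZ) * t;    // uniform scheme
                 splits[i] = lambda * logTerm + (1.0f - lambda) * uniTerm;
             }
         }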
  6. Quote (Original post by texel3d): "Can i simulate polygon offset with D3D9? How?"

     D3DRS_DEPTHBIAS / D3DRS_SLOPESCALEDEPTHBIAS (or DepthBias / SlopeScaleDepthBias if you set them in the FX file) are the equivalent of OpenGL's polygon offset, as far as I can tell. Assuming you are using hardware shadow maps, you'd set those render states when rendering the shadow map (see the snippet below for how to set them from C++). I use DepthBias = 0.00002 and SlopeScaleDepthBias = 6.0, and that looks pretty nice for me; it fixed the self-shadowing problems in my scenes. Good values for DepthBias and SlopeScaleDepthBias depend on your near/far planes - see ftp://download.nvidia.com/developer/presentations/2004/GPU_Jackpot/Shadow_Mapping.pdf for a better description.

     Also, do you do things like pull in the far plane when clipping to the view frustum for PSSM? I have a far plane at 1000 but move it up to 400 when subdividing the view frustum. In a first-person game/demo, shadows in the far distance are hardly noticeable. I also use a split bias of 0.75 instead of the default 0.5. Again, it depends on how much you care about shadows far away from you, but for my purposes it produced nicer results.

     Another thing I added that reduced incorrect self-shadowing artifacts was to clip the light's near plane to the minimum z of the eye frustum (in the light's space). This has the potential to miss shadow casters, so in the vertex shader of the light pass I do something like:

         vsOut.Position.z *= vsOut.Position.z >= 0;

     which sets all potential shadow casters between the eye frustum and the light to a z value of 0. In a shadow compare, that places those objects in front (w.r.t. the light) of anything the eye can see, which is exactly what you want. This can cause problems with very large shadow-casting triangles (the interpolated depth values won't be correct if one vertex of a triangle has z < 0 and another has z > 0), but in practice I've found the tradeoff worth it, and for reasonably tessellated scenes I have not noticed any artifacts. Alternatively, you could write to the DEPTH register, but for performance reasons you probably don't want to; it does give much better depth precision, though, since your z values are confined to a much smaller range (min-z to max-z of the eye's frustum in the light's NDC space), which in turn helps with the incorrect self-shadowing artifacts you are getting.
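     For reference, a minimal sketch of setting those biases in D3D9; both render states take a float reinterpreted as a DWORD, not a plain integer, and the values shown are just the ones from my scenes:

         float depthBias      = 0.00002f;
         float slopeScaleBias = 6.0f;
         device->SetRenderState(D3DRS_DEPTHBIAS,           *(DWORD*)&depthBias);
         device->SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, *(DWORD*)&slopeScaleBias);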
  7. Quote (Original post by skytiger): "My understanding of a perspective transform's treatment of Z values is that it scales them from the range near->far to the range 0->1, so Z' = (Z - near) / (far - near)"

     That assumption is not correct. The mapping of Z values from view space to post-projective space is not linear.

     Also:

     Quote: "if you look at the directx documentation it says:

         w  0  0                                              0
         0  h  0                                              0
         0  0  zfarPlane/(zfarPlane-znearPlane)               1
         0  0  -znearPlane*zfarPlane/(zfarPlane-znearPlane)   0

     which means Z' = ((5 * 10) / (10 - 1)) + ((-1 * 10) / (10 - 1)) = 4.444444"

     Correct, but that's Z before you project back onto w = 1. Dividing by w (5 in your case) gives Z = 0.888888955 (in NDC space), which is what TransformCoordinate() is giving you. So, do you see why the D3DX functions are correct now?

     [Edited by - wyrzy on March 23, 2008 6:31:13 PM]
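     A quick check of those numbers in code (n = 1, f = 10, view-space z = 5; the variable names are mine):

         float n = 1.0f, f = 10.0f, viewZ = 5.0f;
         float zProj = viewZ * f / (f - n) - n * f / (f - n); // 4.444444, before the divide
         float w     = viewZ;                                 // w takes the view-space z
         float zNdc  = zProj / w;                             // 0.888889, after the divide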
  8. I would recommend starting with parallel split / cascaded shadow maps, and see if that works well for your scene. There's a decent demo at http://hax.fi/asko/PSSM.html.
  9. It really depends on your application whether to sort front to back or do a z-prepass. I tried both for something I had been working on; sorting by effect and then by distance, with no z-prepass, worked best for me (a sketch of that sort key is below). Of course, you'd still want to sort by effect even with a z-prepass.

     Quite a few effects may require the depth buffer in texture format, so you might need a z-prepass anyway. However, supporting multisampling is tricky: if you use an R32F texture to store depths, GeForce 7 and below don't support MSAA on it, so the depth buffer for your prepass and forward pass can't be the same. There are hardware-specific FOURCC formats (for DirectX), though I'm not sure what range of video cards the NVIDIA ones are supported on.

     Of course, if you are CPU-limited, then the extra time required to sort front to back can be unnecessary - so it really depends on your application. By today's standards, I think it is generally accepted that sorting on every individual render state would often cost you more than you gain, though I have not done any analysis of this myself.
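     A minimal sketch of that two-level sort key; RenderItem, effectId, and viewDepth are hypothetical names:

         #include <algorithm>
         #include <vector>

         struct RenderItem
         {
             int   effectId;  // effect/shader the item is drawn with
             float viewDepth; // distance from the eye, computed each frame
         };

         bool ByEffectThenDepth(const RenderItem& a, const RenderItem& b)
         {
             if (a.effectId != b.effectId)
                 return a.effectId < b.effectId; // group items by effect first
             return a.viewDepth < b.viewDepth;   // then front to back within a group
         }

         void SortForRendering(std::vector<RenderItem>& items)
         {
             std::sort(items.begin(), items.end(), ByEffectThenDepth);
         }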
  10. Quote (Original post by MJP): "I'm not really aware of very many D3D10 tutorials period, since the API is still relatively new at this point. Humus has a few, but nothing related to HDR. Perhaps someone else knows of some more?"

      I don't know if you can count these as tutorials, but there's a D3D10 HDR sample in NVIDIA's SDK 10. I think there might also be a D3D10 one in the DirectX SDK samples.
  11. I settled on creating quite a few different input layouts (six or so) and making a dummy shader containing all the various vertex shader input structs. You can also put those input structs in a global fx header file that is included from the other fx files, so your shaders only use those formats. That seemed to work pretty well for me. You could also use shader reflection via the ID3D10ShaderReflection interface, but unless you need to handle arbitrary input layouts, I think it might be more work than it's worth.

      I then store a pointer to an input layout on a per-mesh basis, since you need the input layout when rendering; you can choose it at load time based on the vertex layout of the mesh's vertex buffer (a sketch of creating one such layout is below). I don't claim this is the 'best' way to do things, but it seems Hellgate: London uses a similar approach (see pg. 61 of http://developer.download.nvidia.com/presentations/2007/gdc/UsingD3D10Now.pdf). There are probably more elegant ways, but this method is pretty simple and easy to implement.
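      A sketch of building one of those layouts against the dummy effect's vertex shader signature; dummyEffect and layoutPosNormTex are hypothetical names, while the API calls are the standard D3D10 ones:

          D3D10_INPUT_ELEMENT_DESC elems[] =
          {
              { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D10_INPUT_PER_VERTEX_DATA, 0 },
              { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
              { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D10_INPUT_PER_VERTEX_DATA, 0 },
          };

          // The input signature comes from a pass of the dummy effect.
          D3D10_PASS_DESC passDesc;
          dummyEffect->GetTechniqueByIndex(0)->GetPassByIndex(0)->GetDesc(&passDesc);

          ID3D10InputLayout* layoutPosNormTex = NULL;
          device->CreateInputLayout(elems, 3,
                                    passDesc.pIAInputSignature,
                                    passDesc.IAInputSignatureSize,
                                    &layoutPosNormTex);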
  12. Quote (Original post by David Sanders): "The problem is with the quality of the text when zoomed in and out."

      Have you looked at Valve's "Improved Alpha-Tested Magnification for Vector Textures and Special Effects" paper? That could solve the text problem. The implementation doesn't look too difficult, and they provide HLSL code (which could be translated into GLSL quite easily).
  13. I think wolf is talking about "Percentage-Closer Soft Shadows" (PCSS), which is an NVIDIA sample. I could be wrong, but I think adaptive shadow maps (ASM) is a film-oriented technique used in offline renderers (like RenderMan); IIRC it uses lazy evaluation, which doesn't seem like it would be very fast. As far as GPU implementations are concerned, I don't know of any with source, but there's a paper I found with a quick Google search for "adaptive shadow maps gpu" that claims 5-10 fps on a 6800GT for fairly simple scenes in the worst case.
  14. The DirectX Control Panel should let you specify which directories D3D10 debugging is enabled for. Also, pass D3D10_CREATE_DEVICE_DEBUG in the flags when creating the device (snippet below). I also recall having to add D3D10SDKLayers.dll to my system path, but that was a while ago, so the latest installer might do that automatically now.
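      For reference, a minimal sketch of creating the device with the debug layer enabled (standard D3D10 entry point; error handling omitted):

          ID3D10Device* device = NULL;
          HRESULT hr = D3D10CreateDevice(NULL,                       // default adapter
                                         D3D10_DRIVER_TYPE_HARDWARE,
                                         NULL,                       // no software rasterizer
                                         D3D10_CREATE_DEVICE_DEBUG,  // enable the debug layer
                                         D3D10_SDK_VERSION,
                                         &device);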
  15. Are you saying you're trying to create a render target with a format of D3DFMT_A16B16G16R16F and with multisampling? CreateRenderTarget() can create 64-bit FP render targets just fine, but unless you have an 8-series NVIDIA card or an ATI card that supports FP16 MSAA (I think the X1000 series and up do), CreateRenderTarget() will most likely fail. What video card do you have? If it's a GeForce 6/7, creating a multisampled FP16 render target will most likely not work; you can check support up front with CheckDeviceMultiSampleType() (sketch below).

      As far as workarounds go, Valve uses an HDR technique that renders to 8-bit-per-channel textures (see pg. 138 of http://ati.amd.com/developer/techreports/2006/SIGGRAPH2006/Course_26_SIGGRAPH_2006.pdf); however, I implemented it and don't like the look of blooming after tonemapping. For DX9 hardware, I would suggest an 'EdgeAA' method (just blur parts of the image at depth discontinuities). It won't give the same result as hardware MSAA, but it will probably look better than no anti-aliasing at all.
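      A minimal sketch of that support check in D3D9; d3d9 here is the IDirect3D9 interface from Direct3DCreate9():

          DWORD qualityLevels = 0;
          HRESULT hr = d3d9->CheckDeviceMultiSampleType(
              D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
              D3DFMT_A16B16G16R16F,        // the FP16 format in question
              TRUE,                        // windowed
              D3DMULTISAMPLE_4_SAMPLES,
              &qualityLevels);
          if (SUCCEEDED(hr))
          {
              // safe to ask CreateRenderTarget() for a 4x multisampled FP16 target
          }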