
Reitano

Members · Content count: 79 · Community Reputation: 727 (Good) · Rank: Member

  1. Thank you for the link. I am aware of LEAN mapping and the Bruneton paper. In fact, I already use a baked C-LEAN variance texture for computing the wave variance and filtering all functions that depend on the normal. The issue I am having is related to the undersampling of the displacement map and the temporal aliasing implicit in the projective grid technique. Anyway, by biasing the tessellation towards the horizon I managed to reduce these artifacts. I also refactored the pipeline: the water geometry and shading phases are now entirely decoupled, with a noticeable performance improvement on my five-year-old laptop :) I also uploaded a new demo to my website, along with new features. I'd really appreciate feedback from anyone reading this thread! Thanks! Mandatory screenshot:
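Conceptually, the bias towards the horizon is just a remap of the grid's normalized row coordinate so that rows cluster where the sampling rate would otherwise be too low. A minimal sketch in C++, where the remap function and the exponent are illustrative rather than the exact ones used in the engine:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Remap a uniform row coordinate t in [0,1] (0 = near plane, 1 = horizon)
// so that rows cluster towards the horizon. k > 1 increases the bias;
// k = 1 gives the uniform distribution. The exponent is illustrative.
double BiasTowardsHorizon(double t, double k)
{
    return 1.0 - std::pow(1.0 - t, k);
}

// Build the biased row positions for a projected grid with 'rows' rows.
std::vector<double> BuildRowDistribution(int rows, double k)
{
    std::vector<double> y(rows);
    for (int i = 0; i < rows; ++i)
        y[i] = BiasTowardsHorizon(double(i) / (rows - 1), k);
    return y;
}
```

With k = 2 the spacing between consecutive rows shrinks steadily towards the horizon, which is where the projective grid undersamples the displacement map the most.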
  2. My engine has a Position class which represents a 3D position in world space and internally uses doubles (__m128d SIMD registers), with utilities to add/subtract Positions and compute relative vectors in float precision. The Transform component contains one, so all model instances, cameras, sound sources, AI agents, particle effects etc. take advantage of it. At the beginning of each frame a new world origin is chosen, which usually coincides with the position of the main camera. All Transform components are then converted to float precision relative to that origin (Transform.position.Subtract(worldOrigin)). Frustum culling, rendering, water simulation, AI and other simulation tasks then operate in this float-based, camera-relative world space. It's an elegant approach and it works very well. To give an example, I am working on a new demo with an island located very far from the origin (longitude: 354750, latitude: 3703690) and everything works perfectly. For the depth buffer, I now use reversed depth, which fixed all the z-fighting artifacts I was having.
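A minimal sketch of this setup, with plain doubles standing in for the __m128d implementation (the class and function names here are illustrative, not the engine's actual API):

```cpp
#include <cassert>

// Double-precision world-space position, as described above.
struct Position
{
    double x, y, z;

    Position Subtract(const Position& o) const
    {
        return { x - o.x, y - o.y, z - o.z };
    }
};

struct Float3 { float x, y, z; };

// Convert to single precision relative to the per-frame world origin
// (typically the main camera position). The subtraction happens in
// double precision, so the result stays accurate even far from zero.
inline Float3 ToRelative(const Position& p, const Position& worldOrigin)
{
    const Position d = p.Subtract(worldOrigin);
    return { float(d.x), float(d.y), float(d.z) };
}
```

The key point is that floats only ever hold small camera-relative offsets, never absolute world coordinates, so precision near the camera is constant regardless of how far the scene sits from the origin.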
  3. Thank you for the replies. I should have provided more information in my original post. I would use deferred texturing for the rendering of water normals only, not for the whole scene. This pass is particularly expensive due to the complexity of the pixel shader (it combines several wave layers), the many texture fetches with anisotropic filtering, and the aforementioned quad overdraw (a term I borrowed from RenderDoc). I use the projective technique for the geometry and, in order to minimize the temporal aliasing caused by sampling slightly different points on the water plane as the camera moves, the tessellation must be very high, especially near the horizon. I tried a couple of stabilization approaches but sadly none worked to my satisfaction. @MPJ Luckily divergent sampling is not a problem in my case, as the shader uses the same set of textures for the whole pass. I will work on an implementation over the weekend and let you know my findings. Thanks again!
  4. The most expensive pass in my rendering pipeline processes highly tessellated geometry and thus suffers from a high degree of quad overdraw. At the highest quality setting the tessellation generates sub-pixel triangles and the performance loss is quite drastic. An obvious optimization is deferred texturing, which consists of running a pre-pass that rasterizes the geometry and saves to intermediate buffers all the data required by the original rendering pass: texture coordinates, derivatives etc. The original pass is then replaced by either a fullscreen triangle or a compute shader, with optimal quad utilization. My question is: are gradient-based sampling functions nowadays less efficient than the ones not taking the gradients as arguments? That's my main worry and I could not find any information on this. I know that Deus Ex: Mankind Divided uses this technique, but I'd like some confirmation before coding a prototype. Also, apart from the need for intermediate buffers, are there other non-obvious disadvantages? Thanks! Stefano
  5. If I understood it correctly, your problem is related to the seamless filtering of a spherical function, a heightmap encoded as a cubemap in your case. That's analogous to the convolution of radiance maps for IBL. It's easier to work in 3D, with the required 2D->3D and 3D->2D mapping steps where necessary. In pseudocode:

     for each cubemap texel
         . compute the corresponding 3D direction vector D; D along with an aperture (user defined or fixed) defines a disk around the cubemap texel
         . compute an orthogonal tangent frame for D: T, B
         . take N samples inside the disk. For each sample (u,v) (**1)
             . compute the corresponding 3D vector as Di = u * T + v * B + D
             . compute the cubemap texel coordinates and face corresponding to Di
             . fetch the cubemap with bilinear filtering (**2)
         . process the N samples and compute the result. What operator are you using?
         . write back the result to the cubemap

     The image processing happens in 3D space, which is continuous, and the difficulties due to corners and edges are implicitly taken care of by the mapping from/to cubemap space. (**1) I'd suggest uniform sampling, because the tangent frame is going to be discontinuous. (**2) If you're doing this on the GPU, as Hodgman mentioned, modern cards offer seamless bilinear filtering across adjacent cubemap faces. If you're doing this in software, you'll have to emulate bilinear filtering yourself. If something is unclear or you need some code, please let me know. Stefano
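The 3D->2D step in the pseudocode above (mapping a direction Di back to a face index and face coordinates) can be sketched in C++ as follows. The face order and sign conventions here are one common choice; adapt them to your API's cubemap layout:

```cpp
#include <cassert>
#include <cmath>

struct Dir { double x, y, z; };

// Map a 3D direction to a cube face index and face UVs in [0,1].
// Face order is (+X, -X, +Y, -Y, +Z, -Z); the uc/vc sign choices follow
// one common convention and may need flipping for a given API.
void DirectionToCubemap(const Dir& d, int& face, double& u, double& v)
{
    const double ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
    double ma, uc, vc;
    if (ax >= ay && ax >= az)
    {
        ma = ax; face = d.x > 0 ? 0 : 1;
        uc = d.x > 0 ? -d.z : d.z; vc = -d.y;
    }
    else if (ay >= az)
    {
        ma = ay; face = d.y > 0 ? 2 : 3;
        uc = d.x; vc = d.y > 0 ? d.z : -d.z;
    }
    else
    {
        ma = az; face = d.z > 0 ? 4 : 5;
        uc = d.z > 0 ? d.x : -d.x; vc = -d.y;
    }
    u = 0.5 * (uc / ma + 1.0); // project onto the face plane, remap to [0,1]
    v = 0.5 * (vc / ma + 1.0);
}
```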
  6. Hi all, In the past months I have been working on a brand new version of Typhoon. Typhoon is an engine specialized in the simulation and rendering of oceans and underwater environments, targeting AAA games and maritime simulations. It is written in C/C++11/Lua/HLSL and currently runs on the Windows/DirectX 11 platform. I have released a demo on www.typhoon3d.com and I am now looking for testers. The requirements are Windows 7 64-bit or later and a DirectX 11 compliant card. Regarding water, the features are:

     - Projective grid tessellation
     - FFT ambient waves
     - Procedural waves
     - Kelvin waves
     - Ship wakes
     - Whitecaps
     - Physically based shading
     - Specular anti-aliasing (baked LEAN)
     - Reflections
     - Refractions
     - Cascaded caustics
     - Underwater godrays
     - Underwater shadows
     - Underwater defocusing
     - Seamless rendering at the water/air interface
     - Buoyancy simulation
     - Support for bathymetry maps

     After a break, I will focus on these new features:

     - Underwater reflections
     - Support for a round earth
     - Wave particles
     - Convolution waves for water/body interaction
     - More wave and foam primitives (e.g. helicopter rotors, missile trails)
     - Spray effects
     - Anti-aliased caustics
     - Wide-angle cameras
     - A scuba diving pack
     - And many more engine features...

     Next year I will focus on the seemingly impossible problem of simulating breaking waves and wave refraction in shallow water in real time, which I'd say is my programming-related dream. I will also seek potential partnerships and work on integration with other engines/products in order to finance further development. Please let me know your feedback and bug reports here or by email (typhoon3d@gmail.com). Thanks! Stefano Lanza www.typhoon3d.com

     [attachment=36017:gamedev1.jpg][attachment=36018:gamedev2.jpg][attachment=36019:gamedev3.jpg][attachment=36020:gamedev4.jpg][attachment=36021:gamedev6.jpg][attachment=36022:gamedev7.jpg]
  7. Thank you for the information. I will profile the use of SV_Depth and see what overhead and performance savings it brings to my scenes. Manually rejecting pixels in the pixel shader with dynamic branching might be good enough, but we'll see. Related: what is the granularity of depth and stencil rejection on recent GPUs? And would it be more efficient to use both in cases where one suffices? For example, drawing the sky on pixels whose depth == 1 AND stencil == skyRef instead of only depth == 1.
  8. Is it possible in DirectX 11 to downsample a depth-stencil buffer, and then re-bind it as a read-only depth-stencil buffer? I only managed to bind it as a shader resource view, but I would like, if possible, to take advantage of early depth culling and the stencil buffer. The use case is volumetric effects rendered at half resolution, although the question is relevant to lower-resolution rendering in general.
  9. Mystery solved. I was binding the depth map as R16_FLOAT instead of R16_UNORM. All works fine now, thank you again!
  10. Thank you so much! All these years of HLSL programming and I never knew about this. On a related note, according to the debug layer, my card does not support a comparison sampler with a 16-bit depth/shadow map, which forces me to either emulate PCF filtering with GatherRed or use a 32-bit depth format. First, is that true for all cards? I remember that hardware PCF worked fine in DX9 with 16-bit depth maps... My use case is volumetric shadows (8 samples per ray, 1x1 PCF). I will have to profile the two options, but I suppose the reduced bandwidth will win over the additional ALU cost and register pressure. What's your recommendation? Thanks
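For reference, emulating a single bilinear PCF tap from the 2x2 footprint that GatherRed returns amounts to four comparisons followed by a bilinear blend. A CPU-side sketch of the arithmetic (the mapping from GatherRed's xyzw swizzle onto the four corners is defined by the API, so it is left out here):

```cpp
#include <cassert>
#include <cmath>

// Bilinear PCF from a 2x2 depth footprint. d00..d11 are the four depths
// (00 = the texel at the minimum corner); fx, fy are the fractional texel
// coordinates of the sample point; ref is the receiver depth.
// Compare first, then filter the binary results: that is what hardware
// comparison samplers do, and what a GatherRed emulation must reproduce.
float PCF2x2(float d00, float d10, float d01, float d11,
             float fx, float fy, float ref)
{
    const float s00 = d00 > ref ? 1.f : 0.f; // 1 = lit (flip for reversed-Z)
    const float s10 = d10 > ref ? 1.f : 0.f;
    const float s01 = d01 > ref ? 1.f : 0.f;
    const float s11 = d11 > ref ? 1.f : 0.f;
    const float top = s00 + (s10 - s00) * fx;
    const float bot = s01 + (s11 - s01) * fx;
    return top + (bot - top) * fy;
}
```

Note the order of operations: filtering the depths first and comparing once would give a hard edge, not PCF.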
  11. I render shadowmaps the standard way, that is, by binding a depth/stencil buffer and setting a null render target and a null pixel shader. That works fine except for alpha-tested geometry, where I need a pixel shader to discard pixels that fail the alpha test. Of course the DirectX debug layer complains that no render target is bound: #D3D11 WARNING: ID3D11DeviceContext::DrawIndexedInstanced: The Pixel Shader expects a Render Target View bound to slot 0, but none is bound. The results are correct, but I'd like to suppress this annoying warning. Any suggestions? For example, is there anything in DirectX 11 equivalent to the NULL render target hack in DirectX 9? Or should I simply create and bind a temporary render target for alpha-tested geometry? Thanks
  12. Instead of using a shadowmap for shadows, you could bake the visibility between the terrain and the main light into an occlusion map. For each texel, cast a ray from the corresponding terrain sample towards the light, and store 0 or 1 depending on whether the ray hits the terrain. The intersection code can be optimized in several ways, for instance by precaching the terrain geometry instead of evaluating the fractal function on the fly. An occlusion map has many advantages: it filters correctly, takes little memory (an 8-bit format is enough), gives you soft shadows for free and is view independent. Of course you can tweak its resolution depending on your memory and runtime budget. That's the approach I used more than 10 years ago for shadowing my terrains (on the CPU) and it worked very well. You should be able to prototype it on the GPU rather quickly. Just an idea!
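The baking step above can be sketched as a simple ray march over the heightfield. This is a minimal, unoptimized version with a fixed step count; the names and the step scheme are illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Square heightfield with clamped sampling.
struct Heightfield
{
    int size;
    std::vector<float> h; // size * size height samples

    float At(int x, int y) const
    {
        x = std::max(0, std::min(size - 1, x));
        y = std::max(0, std::min(size - 1, y));
        return h[y * size + x];
    }
};

// Bake one occlusion-map texel: march from the terrain sample at (x, y)
// towards the light and return 1 if the ray escapes, 0 if it hits terrain.
// (dx, dy, dz) is the per-step offset towards the light (dz is vertical).
float BakeOcclusion(const Heightfield& hf, int x, int y,
                    float dx, float dy, float dz, int steps)
{
    float px = float(x), py = float(y);
    float pz = hf.At(x, y) + 1e-3f; // small bias to avoid self-intersection
    for (int i = 0; i < steps; ++i)
    {
        px += dx; py += dy; pz += dz;
        if (px < 0 || py < 0 || px >= hf.size || py >= hf.size)
            break;                   // left the terrain: unoccluded
        if (pz < hf.At(int(px), int(py)))
            return 0.f;              // ray went below the terrain: in shadow
    }
    return 1.f;
}
```

Looping this over every texel gives the occlusion map; with an 8-bit target you can also average several jittered light directions per texel to get the soft shadows mentioned above.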
  13. I'm going to upvote you because of your avatar. Bud is a legend!
  14. The projected grid method only relates to water rendering, not the physical simulation of waves. For that you'll need other techniques, like iWave, wave particles, procedural effects, artist-authored textures etc. Most of these techniques use textures to store the wave state, and they often operate in a coordinate space that lies parallel to the water surface. The coordinate space of the projected grid is, by definition, (approximately) screen space. Therefore you'll need a mechanism to move data from one coordinate space to the other, taking aliasing into account of course. For this purpose, in my engine I use two render targets. The first has the size of the projected grid and stores the total displacement at each vertex of the grid. First, the 3D displacement of ambient (FFT) waves is written to this target. Then other wave types are added to it (remember that water waves combine linearly). In my case all wave types are represented by textures. For anti-aliasing, I rely on the mipmapping capability of the GPU in conjunction with tex2Dgrad, to which I pass the analytical derivatives at each vertex. The second render target has the size of the backbuffer and stores the horizontal components of the water normal. As above, it is initialized with ambient waves first; then other wave types are blended onto it, again relying on mipmapping and, this time, on hardware derivatives at each pixel. You can blend procedural effects such as circular or Gerstner waves (trochoids) directly onto these two render targets, without baking them into textures (which would be lossy, in addition to unnecessary). Again, anti-alias these effects based on their frequency and the derivatives available at the pipeline stage where they are drawn.
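As an aside, the reason blending Gerstner waves directly is attractive is that a single trochoid is cheap to evaluate analytically, and linear superposition means a wave field is just a sum of such terms. A small sketch with illustrative parameter names:

```cpp
#include <cassert>
#include <cmath>

struct Displacement { double x, y, z; };

// One Gerstner (trochoid) wave: a vertical sine displacement plus a
// horizontal term that sharpens the crests. Because water waves combine
// linearly, a wave field is the plain sum of such displacements.
Displacement GerstnerWave(double px, double pz, double t,
                          double amplitude, double wavelength, double speed,
                          double dirX, double dirZ, double steepness)
{
    const double kPi = 3.14159265358979323846;
    const double k = 2.0 * kPi / wavelength;                  // wavenumber
    const double phase = k * (dirX * px + dirZ * pz) - k * speed * t;
    const double h = steepness * amplitude * std::cos(phase); // horizontal term
    return { h * dirX, amplitude * std::sin(phase), h * dirZ };
}
```

The analytic form also gives you exact derivatives (differentiate the phase), which is what you would feed to tex2Dgrad-style anti-aliasing when blending the wave into the displacement and normal targets.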
  15. I am well aware of the technical limitations of texture atlases vs. arrays vs. slots. In the case of texture atlases, naturally I would manually wrap the tiling texture coordinates in the pixel shader and use tex2Dgrad (or an equivalent function) to fetch the data. This approach has been used for more than a decade by tons of engines; I am simply looking for an existing bleeding-aware tool to pack the textures. Simple slots are not a solution in my case, as I want to support 64 textures and to dynamically fetch different textures per pixel based on a material mask.
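To make the unwrap concrete, this is the arithmetic in question for an 8x8 atlas of 64 tiles, sketched on the CPU. The names and the padding scheme are illustrative; in the shader, the returned scale would be applied to the analytic UV derivatives before calling tex2Dgrad:

```cpp
#include <cassert>
#include <cmath>

// Atlas coordinates plus the factor by which UV gradients must be scaled.
struct AtlasUV { double u, v, gradScale; };

// Map a per-pixel material index and tiling UVs into an 8x8 texture atlas.
// 'padding' is the fraction of a tile reserved on each side to limit
// bleeding across tile borders when lower mips are sampled.
AtlasUV AtlasLookup(int tile, double u, double v, double padding)
{
    const int tilesPerSide = 8;
    const double tileSize = 1.0 / tilesPerSide;
    const int tx = tile % tilesPerSide;
    const int ty = tile / tilesPerSide;
    const double wu = u - std::floor(u);   // manual wrap of the tiling UVs
    const double wv = v - std::floor(v);
    const double inner = tileSize * (1.0 - 2.0 * padding);
    return { (tx + padding) * tileSize + wu * inner,
             (ty + padding) * tileSize + wv * inner,
             inner };                      // d(atlasUV)/d(uv), away from wraps
}
```

A bleeding-aware packer would additionally duplicate each tile's border texels into the padding region, so that the bilinear footprint near tile edges still reads the right texture.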