

Member Since 19 Jun 2007
Offline Last Active Today, 10:29 AM

Topics I've Started

Texture atlas generator

05 March 2015 - 12:23 PM


I am looking for a tool capable of generating texture atlases. The intended use is terrain texturing, which must not suffer from the well-known bleeding problem. In that regard, I really like the solution described here: https://mtnphil.wordpress.com/2011/09/26/terrain-texture-atlas-construction/
Direct support for various formats (DXT1, DXT5, BC5, etc.) would also be nice, but that could be deferred to another tool. A configurable layout would be a handy feature too.
Now, does such a tool exist, or should I write my own? I have looked everywhere but could not find a suitable one.
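To illustrate the approach from the linked article: each tile gets a border of duplicated edge texels before packing, so bilinear filtering and mipmapping never blend in a neighbouring tile's texels. A rough NumPy sketch (the tile layout and padding width are my own assumptions, not from the article):

```python
import numpy as np

def pad_tile(tile, pad):
    """Surround an (H, W, C) tile with `pad` texels of duplicated edge colour.

    Sampling near a tile boundary then only ever blends that tile's own
    texels, which is the bleeding fix described in the linked article.
    """
    return np.pad(tile, ((pad, pad), (pad, pad), (0, 0)), mode="edge")

def build_atlas(tiles, tiles_per_row, pad):
    """Pack equally sized tiles into one atlas image, each with a padded border."""
    padded = [pad_tile(t, pad) for t in tiles]
    ph, pw, c = padded[0].shape
    rows = -(-len(padded) // tiles_per_row)  # ceiling division
    atlas = np.zeros((rows * ph, tiles_per_row * pw, c), dtype=padded[0].dtype)
    for i, t in enumerate(padded):
        r, col = divmod(i, tiles_per_row)
        atlas[r * ph:(r + 1) * ph, col * pw:(col + 1) * pw] = t
    return atlas
```

In the shader, tile i's UVs would then be remapped into the inner (unpadded) region of its cell.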


Tool or algorithm for mesh tessellation

23 January 2014 - 04:37 AM

Recently I switched to a logarithmic depth buffer in order to fix depth-fighting artifacts in large scenes. Sadly, while this technique works brilliantly on highly tessellated meshes, it performs poorly on low-poly ones. One solution to this issue is to write depth from the pixel shader, but I'd prefer to avoid that for reasons of performance and complications to the shader pipeline. Furthermore, I cannot use a floating-point depth buffer, as suggested in other threads, because I need the stencil and I want to keep bandwidth and memory consumption to a minimum.
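To illustrate the problem with a simplified sketch (using one common formulation, z = log2(C·w + 1) / log2(C·far + 1), with constants I picked for the example, and ignoring perspective-correct interpolation): the rasterizer blends the per-vertex depth linearly, but the logarithm is non-linear in view depth, so on a long edge the interpolated value drifts far from the true logarithmic depth:

```python
import numpy as np

# Assumed constants for the logarithmic mapping (example values only).
C, FAR = 1.0, 100000.0

def log_depth(w):
    """Normalised logarithmic depth for view-space depth w."""
    return np.log2(C * w + 1.0) / np.log2(C * FAR + 1.0)

# A "long" edge spanning very different view depths.
w0, w1 = 1.0, 10000.0

# What the rasterizer produces halfway along the edge: a linear blend of
# the per-vertex values (simplified model of the interpolation)...
interpolated = 0.5 * (log_depth(w0) + log_depth(w1))
# ...versus what evaluating the formula per pixel would give at that point.
exact = log_depth(0.5 * (w0 + w1))

error = abs(interpolated - exact)  # large here; tiny for short edges
```

On a short edge (e.g. w0 = 1.0, w1 = 1.1) the same difference is negligible, which is why only the long, large triangles show artifacts.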

Most of my 3D models have a high poly count and behave well, but some have large, stretched triangles that suffer from obvious depth-testing artifacts. I would like to preprocess them, either with an existing free tool or by writing my own tessellation algorithm. I'm still using DirectX 9, so hardware tessellation is not an option yet. Do you know of a tool or a simple subdivision scheme that would help in this case?
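For reference, the simplest scheme I have in mind is a flat 1-to-4 split at edge midpoints, repeated until no edge exceeds a threshold; it adds interpolation samples without changing the surface shape (unlike e.g. Loop subdivision, which smooths the mesh). A rough sketch, positions only (the threshold-driven recursion is my own assumption about the desired behaviour):

```python
import numpy as np

def subdivide(tri, max_edge):
    """Recursively split a triangle (3x3 array of vertices) into four via
    edge midpoints until every edge is shorter than max_edge.

    Positions only; in practice UVs and normals would be interpolated the
    same way. Returns a list of 3x3 triangle arrays.
    """
    a, b, c = tri
    edges = [np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c)]
    if max(edges) <= max_edge:
        return [tri]
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    out = []
    for child in ([a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]):
        out.extend(subdivide(np.array(child), max_edge))
    return out
```

Since all children lie in the original triangle's plane, total surface area is preserved exactly; only the depth-interpolation density changes.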


Thanks a lot

Shading in Unreal Engine 4

13 January 2014 - 05:58 AM


Over the weekend I read the presentation on physically based shading in Unreal Engine 4 (http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf). I have a question about the integration of environment maps.


As described in the paper, this is accomplished by splitting the integration into two parts: the average of the environment lighting (a mip-mapped cubemap) and a pre-convolved BRDF, parameterized by the dot product (normal · view) and the material roughness.


For the BRDF, we generate many random directions around the normal based on the roughness, compute the corresponding reflected vector, and use it to evaluate the BRDF. My question is: should each sample be weighted by the dot product between the reflected vector and the normal? That makes sense to me, as it is part of the lighting equation, but it gives very dark results at glancing angles and for low roughness values, because in those cases the majority of reflected vectors are almost perpendicular to the normal. The sample code in the paper does not consider this factor, which is a little surprising.
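To make the observation concrete, here is a rough sketch (not the paper's code) of GGX importance sampling that measures the average N·L of the reflected directions; with the constants I assumed, it collapses at glancing angles, which is exactly the darkening I described:

```python
import numpy as np

def importance_sample_ggx(u1, u2, roughness):
    """GGX-distributed half-vector around the +Z normal (tangent space)."""
    a = roughness * roughness
    phi = 2.0 * np.pi * u1
    cos_theta = np.sqrt((1.0 - u2) / (1.0 + (a * a - 1.0) * u2))
    sin_theta = np.sqrt(max(1.0 - cos_theta * cos_theta, 0.0))
    return np.array([sin_theta * np.cos(phi), sin_theta * np.sin(phi), cos_theta])

def mean_reflected_nl(n_dot_v, roughness, num_samples=2048):
    """Average N.L over GGX-sampled reflection directions; samples below the
    horizon count as zero. This is the weighting factor in question: when it
    is small on average, including it darkens the result."""
    rng = np.random.default_rng(0)
    v = np.array([np.sqrt(1.0 - n_dot_v * n_dot_v), 0.0, n_dot_v])  # view vec
    total = 0.0
    for _ in range(num_samples):
        h = importance_sample_ggx(rng.random(), rng.random(), roughness)
        l = 2.0 * np.dot(v, h) * h - v   # reflect the view vector about H
        total += max(l[2], 0.0)          # N is +Z, so N.L is just l.z
    return total / num_samples
```

For a smooth material the sampled half-vectors cluster around N, so the reflected vector inherits the view vector's grazing angle and the average weight approaches N·V itself.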




Initialization of static buffers in DirectX 11

25 November 2013 - 04:22 AM

I am adding support for DirectX 11 to my engine and have a question about the initialization of static vertex and index buffers. The approach I use now, common in DirectX 9 engines, is to create a buffer, map it, and then fill it with data. DirectX 11 apparently requires a different approach: according to the documentation, one should create a static buffer by passing the initialization data at creation time. For me this would require a good amount of refactoring. So my question is: is it still allowed to defer a static buffer's initialization by mapping the buffer after its creation? If so, are there any restrictions or performance-related implications?





Square <-> hemisphere mapping

25 October 2013 - 04:49 AM



I am looking for a mapping (and its inverse) between a square and a hemisphere. I need it to store samples of a hemispherical function (the sky colour) in a 2D texture, which I can then fetch in shaders. The requirements are:


- a shader-efficient inverse mapping from the hemisphere to the square (in other words, conversion of a 3D direction to U,V coordinates)

- the mapping should allow some control over the sample distribution; in my case I need more samples near the horizon, where the sky colour changes quickly


I've done some research over the past few days but could not find anything suitable yet. Most projections use polar coordinates and have inverse mappings that are costly in terms of ALU (due to atan2 and acos). I have code for mapping normal vectors to two coordinates for deferred rendering, but in that case I cannot control the sample distribution.


Perhaps I could use a cylindrical projection with a cheap approximation to atan2, if one exists? Any ideas are much appreciated.
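One candidate I have been considering: a hemi-octahedral mapping covers the hemisphere with the full square using only adds, abs and one divide (no trig), so the direction-to-UV path is cheap in a shader. It does not by itself bias samples toward the horizon, so a warp on one coordinate would still be needed. A rough sketch:

```python
import numpy as np

def hemi_oct_encode(d):
    """Unit direction with d.z >= 0 -> UV in [0, 1]^2. No trig: one divide
    plus adds and abs, so the shader-side path is ALU-cheap."""
    x, y, _ = d / (abs(d[0]) + abs(d[1]) + d[2])  # project onto the octahedron
    # a 45-degree rotation folds the top pyramid onto the whole square
    return np.array([x + y, x - y]) * 0.5 + 0.5

def hemi_oct_decode(uv):
    """UV in [0, 1]^2 -> unit direction on the upper hemisphere."""
    u, v = uv * 2.0 - 1.0
    x, y = (u + v) * 0.5, (u - v) * 0.5
    d = np.array([x, y, 1.0 - abs(x) - abs(y)])
    return d / np.linalg.norm(d)
```

The decode side (square to hemisphere) would be used offline when filling the texture; only the encode side has to run per pixel.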