
Reitano

Member Since 19 Jun 2007
Offline Last Active Yesterday, 08:21 AM

Topics I've Started

Tool or algorithm for mesh tessellation

23 January 2014 - 04:37 AM

Recently I switched to a logarithmic depth buffer in order to fix depth-fighting artifacts in large scenes. Sadly, while this technique works brilliantly on highly tessellated meshes, it performs poorly on low-poly ones. One solution is to write depth from the pixel shader, but I'd prefer to avoid that for performance reasons and to keep the shader pipeline simple. Furthermore, I cannot use a floating-point depth buffer, as suggested in other threads, because I need the stencil and I want to keep bandwidth and memory consumption to a minimum.
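For reference, this is roughly the formulation I use, written here as plain C++ for clarity (a sketch only; the constant C and the exact remap are my own choices, not taken from any particular source):

#include <cmath>

// Logarithmic remap of the post-projection depth, evaluated per vertex.
// clipW : post-projection w (the positive view-space depth)
// farZ  : far plane distance
// C     : constant trading precision near the camera against the far range
// The result is written to clip.z; the hardware divide by w then yields the
// logarithmic depth. Because this runs per vertex, the curve is interpolated
// linearly across each triangle, which is exactly why large triangles misbehave.
float LogDepth(float clipW, float farZ, float C)
{
    const float z = std::log(C * clipW + 1.0f) / std::log(C * farZ + 1.0f);
    return z * clipW;
}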

Most of my 3D models have a high polygon count and behave well, but some contain large, elongated triangles that suffer from obvious depth-testing artifacts. I would like to preprocess them, either with an existing free tool or by writing my own tessellation algorithm. I'm still on DirectX 9, so hardware tessellation is not an option yet. Do you know of any tool or simple subdivision scheme that would help in this case?
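To give an idea of the kind of preprocessing I have in mind, below is a rough, untested sketch of a uniform 1-to-4 split in C++ (my own code, not from any tool). The pass could be repeated, or restricted to triangles whose edges exceed a length threshold; other vertex attributes (normals, UVs) would need the same midpoint interpolation.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Midpoint(const Vec3& a, const Vec3& b)
{
    return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
}

// One pass of uniform 1-to-4 subdivision: each triangle is split at its edge
// midpoints. Midpoints are shared between neighbouring triangles through an
// edge -> vertex-index map so the result stays watertight.
void Subdivide(std::vector<Vec3>& vertices, std::vector<uint32_t>& indices)
{
    std::map<std::pair<uint32_t, uint32_t>, uint32_t> edgeMidpoints;
    auto midpointIndex = [&](uint32_t i0, uint32_t i1) -> uint32_t
    {
        const std::pair<uint32_t, uint32_t> key = std::minmax(i0, i1);
        const auto it = edgeMidpoints.find(key);
        if (it != edgeMidpoints.end())
            return it->second;
        vertices.push_back(Midpoint(vertices[i0], vertices[i1]));
        const uint32_t idx = static_cast<uint32_t>(vertices.size() - 1);
        edgeMidpoints[key] = idx;
        return idx;
    };

    std::vector<uint32_t> out;
    out.reserve(indices.size() * 4);
    for (std::size_t t = 0; t + 2 < indices.size(); t += 3)
    {
        const uint32_t a = indices[t], b = indices[t + 1], c = indices[t + 2];
        const uint32_t ab = midpointIndex(a, b);
        const uint32_t bc = midpointIndex(b, c);
        const uint32_t ca = midpointIndex(c, a);
        // Four child triangles with the same winding as the parent.
        out.insert(out.end(), { a, ab, ca,   ab, b, bc,   ca, bc, c,   ab, bc, ca });
    }
    indices.swap(out);
}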

 

Thanks a lot


Shading in Unreal Engine 4

13 January 2014 - 05:58 AM

Hi,

Over the weekend I read the presentation on physically based shading in Unreal Engine 4 (http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf). I have a question about the integration of environment maps.

 

As described in the paper, this is accomplished by splitting the integration into two parts: the averaged environment lighting (a mip-mapped cubemap) and a pre-integrated BRDF, parametrized by the dot product between the normal and the view direction and by the material roughness.

 

For the BRDF, we generate many random directions around the normal based on the roughness, compute the corresponding reflected vectors and use them to evaluate the BRDF. My question is: should we weight each sample by the dot product between the reflected vector and the normal? That makes sense to me, as it is part of the lighting equation, but it gives very dark results at glancing angles and for low roughness values, because in that case the majority of reflected vectors are almost perpendicular to the normal. The sample code in the paper does not consider this factor, which is a little surprising.
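To make the question concrete, here is the estimator I have in mind, written as C++ rather than shader code (a sketch under my own assumptions: Fresnel is left out, the Schlick-GGX geometry term is simply the one I use, and all names are mine, not code from the paper):

#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// GGX normal distribution function.
static float D_GGX(float NoH, float alpha)
{
    const float a2 = alpha * alpha;
    const float d  = NoH * NoH * (a2 - 1.0f) + 1.0f;
    return a2 / (3.14159265f * d * d);
}

// Smith geometry term, Schlick-GGX form (my choice; the exact G does not
// change the question).
static float G_Smith(float NoV, float NoL, float alpha)
{
    const float k  = alpha * alpha * 0.5f;
    const float gv = NoV / (NoV * (1.0f - k) + k);
    const float gl = NoL / (NoL * (1.0f - k) + k);
    return gv * gl;
}

// Importance-sample a GGX half vector in tangent space, around N = (0, 0, 1).
static Vec3 SampleGGX(float u1, float u2, float alpha)
{
    const float phi      = 2.0f * 3.14159265f * u1;
    const float cosTheta = std::sqrt((1.0f - u2) / (1.0f + (alpha * alpha - 1.0f) * u2));
    const float sinTheta = std::sqrt(1.0f - cosTheta * cosTheta);
    return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
}

// Monte Carlo estimate of the hemispherical specular integral for a constant
// white environment (Fresnel omitted), in tangent space with N = (0, 0, 1).
float IntegrateBRDF(float NoV, float roughness, int numSamples)
{
    const Vec3  V     = { std::sqrt(1.0f - NoV * NoV), 0.0f, NoV };
    const float alpha = roughness * roughness;

    float sum = 0.0f;
    for (int i = 0; i < numSamples; ++i)
    {
        const float u1 = std::rand() / float(RAND_MAX); // a low-discrepancy pair would be better
        const float u2 = std::rand() / float(RAND_MAX);
        const Vec3  H   = SampleGGX(u1, u2, alpha);
        const float VoH = Dot(V, H);
        const Vec3  L   = { 2.0f * VoH * H.x - V.x,     // reflect V about H
                            2.0f * VoH * H.y - V.y,
                            2.0f * VoH * H.z - V.z };
        const float NoL = L.z;
        const float NoH = H.z;
        if (NoL <= 0.0f || VoH <= 0.0f)
            continue;

        const float brdf = D_GGX(NoH, alpha) * G_Smith(NoV, NoL, alpha) / (4.0f * NoL * NoV);
        const float pdf  = D_GGX(NoH, alpha) * NoH / (4.0f * VoH);

        // The factor in question: multiplying by NoL here, on top of
        // dividing by the sampling pdf.
        sum += brdf * NoL / pdf;
    }
    return sum / float(numSamples);
}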

 

Thanks,

Stefano


Initialization of static buffers in DirectX 11

25 November 2013 - 04:22 AM

I am adding support for DirectX 11 to my engine and have a question about the initialization of static vertex and index buffers. The approach I use now, common in DirectX 9 engines, is to create a buffer, map it and then fill it with data. DirectX 11 apparently requires a different approach: according to the documentation, one should create a static buffer by passing the initialization data at creation time. For me this would require a good amount of refactoring. So my question is: is it still allowed to defer the initialization of a static buffer by mapping it after creation? If so, are there any restrictions or performance-related implications?
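For reference, this is the pattern the documentation describes as I understand it, with the contents supplied up front at creation time (a minimal sketch; error handling is omitted and the vertex data is a placeholder):

#include <d3d11.h>

// Create a static (immutable) vertex buffer with its contents supplied
// at creation time, as the DirectX 11 documentation recommends.
ID3D11Buffer* CreateStaticVertexBuffer(ID3D11Device* device,
                                       const void* data, UINT byteWidth)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = byteWidth;
    desc.Usage          = D3D11_USAGE_IMMUTABLE;   // GPU read-only, never mapped
    desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    desc.CPUAccessFlags = 0;

    D3D11_SUBRESOURCE_DATA initData = {};
    initData.pSysMem = data;

    ID3D11Buffer* buffer = nullptr;
    const HRESULT hr = device->CreateBuffer(&desc, &initData, &buffer);
    return SUCCEEDED(hr) ? buffer : nullptr;
}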

 

Cheers,

 

Stefano


Square <-> hemisphere mapping

25 October 2013 - 04:49 AM

Hi

 

I am looking for a mapping (and its inverse) between a square and a hemisphere. I need it to store samples of a hemispherical function (the sky colour) in a 2D texture, which I can then fetch in shaders. The requirements are:

 

- shader-efficient inverse mapping from the hemisphere to the square (in other words, conversion of a 3D direction to U,V coordinates)

- the mapping should allow some control over the sample distribution. In my case I need more samples near the horizon, where the sky colour changes quickly

 

I've done some research over the last few days but have not found anything suitable yet. Most projections use polar coordinates and have costly inverse mappings in terms of ALU (due to the atan2 and acos functions). I have code for encoding normal vectors into two coordinates for deferred rendering, but in that case I cannot control the sample distribution.

 

Perhaps I could use a cylindrical projection with a cheap approximation to atan2, if one exists? Any ideas are much appreciated.
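To show what I mean by a cheap approximation, here is the kind of atan2 replacement I'm considering, sketched in C++ (the polynomial is a commonly quoted approximation of atan on [-1, 1] with error on the order of 0.001 radians; whether its accuracy and cost are acceptable here is exactly what I'd have to verify):

#include <cmath>

// Polynomial approximation of atan(x) on [-1, 1].
static float AtanApprox(float x)
{
    const float pi4 = 0.7853981634f;
    return pi4 * x - x * (std::fabs(x) - 1.0f) * (0.2447f + 0.0663f * std::fabs(x));
}

// Cheap atan2 built on the approximation above, with quadrant handling.
// Returns an angle in (-pi, pi]; the degenerate input (0, 0) is not handled.
static float Atan2Approx(float y, float x)
{
    const float pi = 3.1415926536f;
    const float ax = std::fabs(x);
    const float ay = std::fabs(y);
    // Evaluate on the octant where the ratio stays in [-1, 1].
    const float r = (ax >= ay) ? AtanApprox(ay / ax) : pi * 0.5f - AtanApprox(ax / ay);
    // Restore the quadrant from the signs of x and y.
    const float angle = (x >= 0.0f) ? r : pi - r;
    return (y >= 0.0f) ? angle : -angle;
}

// Inverse mapping for a cylindrical parametrization of the hemisphere:
// direction (x, y, z) with z >= 0  ->  (u, v) in [0, 1]^2.
void DirectionToUV(float x, float y, float z, float& u, float& v)
{
    const float pi = 3.1415926536f;
    u = Atan2Approx(y, x) / (2.0f * pi) + 0.5f;
    v = z; // a power curve on z could bias resolution towards the horizon
}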

 

Thanks,

 

Stefano

 

 


Practical benefits of the restrict keyword

18 October 2013 - 04:34 AM

Recently I've rediscovered the restrict keyword. Though not part of the C++ standard yet, it is supported by most compilers. For anyone not familiar with it, Wikipedia has a good introduction: http://en.wikipedia.org/wiki/Restrict. In theory, this keyword allows the compiler to optimize read and write operations, and it might be particularly useful for loops that process long sequences of data. See http://assemblyrequired.crashworks.org/2008/07/08/load-hit-stores-and-the-__restrict-keyword/.
In my engine I've adopted a functional style at a low level, and I could apply this keyword to most, perhaps all, of my functions. My question is: is it really worth it? Before I apply it blindly to my code, I would like to hear about your experience with it. Examples with actual performance measurements would be great.
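For the sake of discussion, this is the kind of low-level function I would decorate (a minimal sketch using the __restrict spelling accepted by MSVC and GCC; the function itself is just an illustration):

#include <cstddef>

// With both pointers qualified, the compiler may assume dst and src never
// alias, so it can keep values in registers and vectorize the loop instead
// of reloading after every store.
void Scale(float* __restrict dst, const float* __restrict src,
           float factor, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
        dst[i] = src[i] * factor;
}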

Thanks,

Stefano

