Member Since 15 Sep 2003
Offline Last Active Yesterday, 08:12 PM

Posts I've Made

In Topic: When will DX11 become obsolete?

Yesterday, 08:15 PM

I wish they would add a bindless texturing API to d3d11, as well as the "fast geometry shader" feature for cube map rendering.  I think they could have done more in d3d11 to reduce draw call overhead without making developers drop down to the low-level d3d12 API.

In Topic: [D3D12] Descriptor heaps and memory (de)allocation

26 October 2015 - 02:33 PM

 But when designing a "general" engine (if that even exists), you cannot really know the number of descriptors your user will need...


I think you can for the most part.  If you have fixed level sizes (arena map, race track), you pretty much know at load time what resources you have (number of objects, materials, textures, etc.), so you can size your descriptor heap appropriately.  You can then reserve a fixed maximum count to leave room for dynamic objects that are inserted/removed on the fly.  Descriptors don't cost much memory, so it isn't a big deal to over-allocate some extra heap space.  For more advanced scenarios, you can reuse heap space that you aren't using anymore.
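As a minimal sketch of sizing the heap at load time (the function name and the kMaxDynamicObjects budget are hypothetical, not from any particular engine):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: size the heap from counts known at level-load time, plus a fixed
// budget for objects spawned at runtime. kMaxDynamicObjects is made up.
ComPtr<ID3D12DescriptorHeap> CreateLevelHeap(ID3D12Device* device,
                                             UINT staticDescriptorCount)
{
    const UINT kMaxDynamicObjects = 512;

    D3D12_DESCRIPTOR_HEAP_DESC desc = {};
    desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    desc.NumDescriptors = staticDescriptorCount + kMaxDynamicObjects;
    desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

    ComPtr<ID3D12DescriptorHeap> heap;
    device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));
    return heap;
}
```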


For a level editor type application, I'd imagine you could grow heaps sort of the way vectors grow.
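A minimal sketch of that growth, with one caveat: D3D12 won't let you copy descriptors out of a shader-visible heap, so in practice you'd grow a CPU-only staging heap like this and then re-copy into the shader-visible one. The function name is mine:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch of vector-style growth for a CPU-only (staging) descriptor heap:
// create a heap of double capacity and copy the live descriptors over.
ComPtr<ID3D12DescriptorHeap> GrowStagingHeap(ID3D12Device* device,
                                             ID3D12DescriptorHeap* oldHeap,
                                             UINT usedCount, UINT oldCapacity)
{
    D3D12_DESCRIPTOR_HEAP_DESC desc = {};
    desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    desc.NumDescriptors = oldCapacity * 2; // double, like a vector
    desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE; // CPU-only staging heap

    ComPtr<ID3D12DescriptorHeap> newHeap;
    device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&newHeap));

    // Copy the live descriptors into the new, larger heap.
    device->CopyDescriptorsSimple(usedCount,
        newHeap->GetCPUDescriptorHandleForHeapStart(),
        oldHeap->GetCPUDescriptorHandleForHeapStart(),
        D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

    return newHeap;
}
```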


Another problem I have is: when you no longer require a given buffer, you no longer need its associated descriptor. So the best solution would be to reuse its location within the descriptor heap. But this requires me to implement some kind of advanced memory allocation algorithm, which I would like to avoid if possible...


I don't think it needs to be too advanced.  Just keep track of "free" descriptors, and whenever you need a new descriptor, pull the next free one.  A lot of particle systems use a similar "recycling array" approach.
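A minimal sketch of such a recycling allocator (all names are hypothetical):

```cpp
#include <cstdint>
#include <vector>

// Sketch: hands out slot indices into a fixed-capacity descriptor heap,
// preferring previously freed slots before touching never-used ones.
class DescriptorIndexAllocator {
public:
    explicit DescriptorIndexAllocator(uint32_t capacity)
        : mCapacity(capacity), mNextUnused(0) {}

    // Returns a heap slot, or UINT32_MAX if the heap is exhausted.
    uint32_t Allocate() {
        if (!mFreeList.empty()) {
            uint32_t index = mFreeList.back();
            mFreeList.pop_back();
            return index;
        }
        if (mNextUnused < mCapacity)
            return mNextUnused++;
        return UINT32_MAX; // out of space; grow a new heap or assert
    }

    // Marks a slot reusable; the stale descriptor in it is simply
    // overwritten the next time the slot is handed out.
    void Free(uint32_t index) { mFreeList.push_back(index); }

private:
    uint32_t mCapacity;
    uint32_t mNextUnused;
    std::vector<uint32_t> mFreeList;
};
```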


Imagine I have a set of drawable objects, each of them having their own world transform. I would use a constant buffer (one for each object) to pass the associated matrices to the vertex shader. Allocating them is easy, as I only need to use the location right after the last descriptor I allocated on the heap. But when I delete one of these objects, is it my responsibility to make sure that the space that is no longer used will be reused for the next descriptor?


I didn't quite follow your question.  The way I assume you would do it is to allocate the cbuffer memory, then allocate CBVs, living in a heap, that reference subsets of that cbuffer memory.  If an object is deleted, it would be easiest to just flag that cbuffer memory region and its CBV as free so they can be reused the next time an object is created.
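As a sketch, assuming one big upload buffer holds every object's constants in 256-byte slices, and 'index' came from a free-list allocator like the one above (reusing the same index for the heap slot and the buffer slice means freeing an object frees both at once):

```cpp
#include <d3d12.h>

// Sketch: write a CBV into heap slot 'index' that views the matching
// 256-byte slice of the shared per-object constant buffer.
void CreateObjectCBV(ID3D12Device* device,
                     ID3D12DescriptorHeap* cbvHeap,
                     ID3D12Resource* constantBuffer,
                     UINT index)
{
    const UINT cbSize = 256; // per-object constants, padded to 256 bytes

    D3D12_CONSTANT_BUFFER_VIEW_DESC desc = {};
    desc.BufferLocation =
        constantBuffer->GetGPUVirtualAddress() + UINT64(index) * cbSize;
    desc.SizeInBytes = cbSize; // must be a multiple of 256

    D3D12_CPU_DESCRIPTOR_HANDLE handle =
        cbvHeap->GetCPUDescriptorHandleForHeapStart();
    handle.ptr += SIZE_T(index) * device->GetDescriptorHandleIncrementSize(
        D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

    device->CreateConstantBufferView(&desc, handle);
}
```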

In Topic: Ray-triangle intersection on scaled model

19 October 2015 - 05:27 PM

Do the ray/triangle test in the local space of the mesh.
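That is, transform the ray by the inverse of the mesh's world matrix instead of transforming every triangle. A minimal sketch with DirectXMath (the function name is mine):

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Sketch: move a world-space ray into the mesh's local space before the
// ray/triangle test; the triangle data stays untouched.
void RayToLocalSpace(FXMMATRIX world,
                     XMVECTOR rayOriginW, XMVECTOR rayDirW,
                     XMVECTOR& rayOriginL, XMVECTOR& rayDirL)
{
    XMMATRIX invWorld = XMMatrixInverse(nullptr, world);

    // Points pick up the translation; directions do not.
    rayOriginL = XMVector3TransformCoord(rayOriginW, invWorld);
    rayDirL    = XMVector3TransformNormal(rayDirW, invWorld);

    // Non-uniform scale changes the direction's length, so renormalize;
    // any hit distance t is then measured in local units.
    rayDirL = XMVector3Normalize(rayDirL);
}
```

If you need the hit point in world space afterwards, transform the local hit point back by the world matrix.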

In Topic: DirectX12's terrible documentation -- some questions

30 September 2015 - 03:05 PM

Though it's still a bit unclear to me when one would pick a CBV over an SRV, since SRVs can reference buffers, which presumably can be accessed in shaders. And if they can be accessed in shaders, then it must be in a similar manner?


This is a good question.  Yes, you can take data you would typically put in a constant buffer, put it in a structured buffer instead, bind an SRV to the structured buffer, and index it in your vertex shader.  You would have to profile to see which performs better.
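As a sketch of the structured-buffer route on the D3D12 side (ObjectData and the function name are placeholders):

```cpp
#include <d3d12.h>

// Sketch: view a buffer of per-object data as a structured buffer SRV.
struct ObjectData { float world[16]; };

void CreateObjectDataSRV(ID3D12Device* device,
                         ID3D12Resource* objectBuffer,
                         UINT objectCount,
                         D3D12_CPU_DESCRIPTOR_HANDLE destHandle)
{
    D3D12_SHADER_RESOURCE_VIEW_DESC desc = {};
    desc.Format = DXGI_FORMAT_UNKNOWN; // required for structured buffers
    desc.ViewDimension = D3D12_SRV_DIMENSION_BUFFER;
    desc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
    desc.Buffer.FirstElement = 0;
    desc.Buffer.NumElements = objectCount;
    desc.Buffer.StructureByteStride = sizeof(ObjectData);

    device->CreateShaderResourceView(objectBuffer, &desc, destHandle);
}
```

The vertex shader would then declare a StructuredBuffer<ObjectData> and index it by an object ID, rather than binding a per-object CBV.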


In the d3d11 days, I assumed constant buffers were distinguished in that they were designed for changing a small number of constants often (per draw call), and so had special optimizations for that usage, whereas a structured buffer would not be changed by the CPU very often and would be accessed more like a texture.


I'm not sure if future hardware will continue to make a distinction or if it is all the same.

In Topic: ndotL environment mapping?

28 September 2015 - 04:12 PM

1) There is a tradeoff between diffuse and specular reflected energy.  More diffuse reflection means less energy available for specular, and vice versa.


Right.  The incoming light is scaled by ndotL because the light gets spread over a different area depending on the angle between n and L.  Some of that light gets specularly reflected based on Fresnel, and the remainder undergoes diffuse reflection.
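In equation form, that split looks roughly like this (a sketch, assuming a single Fresnel factor F dividing the energy and a Lambertian diffuse term):

$$ L_o = \left( F\, f_{\text{spec}} + (1 - F)\,\frac{c_{\text{diff}}}{\pi} \right) L_i \,(n \cdot l) $$

Here F decides the specularly reflected share, (1 - F) the diffusely reflected remainder, and (n · l) is the spreading factor described above.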


For specular reflections, there is a light ray coming from the reflected direction r (the color sampled from the cube map).  Shouldn't this also be scaled by ndotL as the light gets spread over a different area based on the angle between n and L?