Husbjörn

Member Since 27 Jan 2014
-----

Topics I've Started

Getting around non-connected vertex gaps in hardware tessellation displacement mapping

26 June 2014 - 10:10 AM

Sorry for the long title; I couldn't figure out how to express it more briefly without being overly ambiguous as to what this post is about.

 

Anyway, I've been poking around with displacement mapping for the last few days, using the hardware tessellation features of DX11 to get some more vertices to actually displace. This is mainly just to try the technique out, so I'm not really looking for other ways to solve some specific problem.

Displacing a sphere or some other surface with completely connected faces works out as intended, but issues obviously occur where there are multiple vertices with the same position but different normals: these vertices get displaced in different directions and thus become disconnected, so gaps appear in the geometry.

I tried to mock up a simple solution by finding out which vertices share positions in my meshes and setting a flag telling my domain shader not to displace those vertices at all. It wouldn't be overly pretty, but at least the mesh would be gapless, and I reasoned it hopefully wouldn't be too noticeable. Of course this didn't work out very well: the whole subdivision patches generated from such overlapping vertices had their displacement factors set to 0, creating quite obvious, large frames around right angles and the like.

What I'm wondering is basically whether this is a reasonable approach to refine further, or whether there are better ways to go about it. The only article on the topic I've managed to find mostly went on about the exquisiteness of Bézier curves but didn't really seem to come to any conclusions (although maybe those would have been obvious to anyone with the required math skills).
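
One refinement I've been considering, rather than zeroing the displacement, is to give every set of position-sharing vertices a common, averaged displacement normal as a preprocessing step, so that coincident vertices displace together instead of not at all. A rough, untested sketch of that preprocessing (the Vertex struct and the extra seamNormal field are just illustrative):

#include <DirectXMath.h>
#include <map>
#include <vector>

using namespace DirectX;

struct Vertex {
    XMFLOAT3 position;
    XMFLOAT3 normal;     // shading normal, left untouched
    XMFLOAT3 seamNormal; // averaged normal used only for displacement
};

// Strict weak ordering so XMFLOAT3 can be used as a std::map key.
// Exact float comparison is fine here since split vertices are
// duplicated from the same source values.
struct PositionLess {
    bool operator()(const XMFLOAT3 &a, const XMFLOAT3 &b) const {
        if (a.x != b.x) return a.x < b.x;
        if (a.y != b.y) return a.y < b.y;
        return a.z < b.z;
    }
};

void BuildSeamNormals(std::vector<Vertex> &vertices) {
    // Accumulate the normals of all vertices sharing a position...
    std::map<XMFLOAT3, XMFLOAT3, PositionLess> sums;
    for (const Vertex &v : vertices) {
        auto it = sums.find(v.position);
        if (it == sums.end()) {
            sums.emplace(v.position, v.normal);
        } else {
            it->second.x += v.normal.x;
            it->second.y += v.normal.y;
            it->second.z += v.normal.z;
        }
    }
    // ...then write back the normalized average as the displacement
    // direction; coincident vertices then move identically, so no gaps.
    for (Vertex &v : vertices) {
        const XMFLOAT3 &sum = sums.find(v.position)->second;
        XMStoreFloat3(&v.seamNormal, XMVector3Normalize(XMLoadFloat3(&sum)));
    }
}

The domain shader would then read seamNormal instead of the regular normal when computing the displacement offset.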

Thankful for any pointers on this; the more I try to force it, the more it feels like I'm probably missing something.

 

As for my implementation of the tessellation, I've mostly based it on what is described in chapters 18.7 and 18.8 of Introduction to 3D Game Programming with DirectX 11 (http://www.amazon.com/Introduction-3D-Game-Programming-DirectX/dp/1936420228).


Per-instance data in (non-vertex) shaders

22 June 2014 - 04:18 PM

So I've been using per-instance input slots to provide world and world-view-projection matrices to my vertex shaders for instanced meshes. This works just fine, and for some small tests I've been passing the matrices along through the vertex shader output when later stages needed them. However, as shaders get more complex, copying such per-instance data for every vertex just to make it available to other shader stages can't be the best way. I know I could set up a constant buffer containing arrays indexed by the instance ID, which would be less overhead to pass along through the shader pipeline, but is there any other way to achieve this? I'd rather not use the cbuffer array approach since the number of instances in any given frame may vary.
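
For reference, the alternative I've been looking at is a structured buffer with a shader resource view, since SRVs (unlike per-instance input slots) can be bound to any stage; the vertex shader then only has to pass its SV_InstanceID down as a single uint. A sketch of the setup under those assumptions (InstanceData and the function name are made up for illustration):

#include <d3d11.h>
#include <DirectXMath.h>

struct InstanceData {
    DirectX::XMFLOAT4X4 world;
    DirectX::XMFLOAT4X4 worldViewProj;
};

// Creates a dynamic structured buffer holding one InstanceData per instance,
// plus an SRV over it. In HLSL this is declared as
// StructuredBuffer<InstanceData> and indexed by the instance id.
HRESULT CreateInstanceBuffer(ID3D11Device *pDevice, UINT maxInstances,
                             ID3D11Buffer **ppBuffer,
                             ID3D11ShaderResourceView **ppSRV)
{
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth           = sizeof(InstanceData) * maxInstances;
    bd.Usage               = D3D11_USAGE_DYNAMIC; // rewritten via Map each frame
    bd.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
    bd.CPUAccessFlags      = D3D11_CPU_ACCESS_WRITE;
    bd.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bd.StructureByteStride = sizeof(InstanceData);
    HRESULT hr = pDevice->CreateBuffer(&bd, nullptr, ppBuffer);
    if (FAILED(hr))
        return hr;

    D3D11_SHADER_RESOURCE_VIEW_DESC srvd = {};
    srvd.Format              = DXGI_FORMAT_UNKNOWN; // mandated for structured buffers
    srvd.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
    srvd.Buffer.FirstElement = 0;
    srvd.Buffer.NumElements  = maxInstances;
    return pDevice->CreateShaderResourceView(*ppBuffer, &srvd, ppSRV);
}

Since the buffer is sized for a maximum and only the first N elements are written each frame, a varying instance count shouldn't be the problem it is with a fixed cbuffer array.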


Standard approach to shadow mapping multiple light sources?

14 June 2014 - 10:04 AM

So I've been contemplating this lately: is there any standard approach to (efficiently) handling dynamic shadow mapping of multiple light sources?

As I understand it, the common advice is to render a separate depth map for each visible light in the scene and then let the scene shader(s) iterate over all of them. However, this sounds like it would get extremely wasteful even with relatively few lights.

Assume for example that I have a moderately complex scene lit by three point lights; with a cube map of six faces per light, this translates into rendering the scene 18 times just to generate the depth maps, and those maps then have to be stored in memory as well (assuming 2048x2048 maps at 16 bits per texel, that alone is 144 MB of VRAM; I suppose that isn't overly much, but it still adds up with further lights).

Another big issue is that this approach quickly eats up texture slots in the actual scene shader (I suppose you could pack multiple shadow maps into a texture atlas, but that has its own problems).
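
One option I've been considering for the slot problem is a depth texture array: each slice gets its own depth-stencil view for the render pass, while a single SRV over the whole array exposes all the maps through one slot. A sketch of what I mean (untested, 16-bit depth as in the example above):

#include <d3d11.h>
#include <vector>

// One R16_TYPELESS Texture2DArray with a slice per shadow map: a separate
// depth-stencil view per slice for rendering, and a single SRV over the
// whole array so the scene shader only spends one texture slot.
HRESULT CreateShadowMapArray(ID3D11Device *pDev, UINT size, UINT numMaps,
                             ID3D11Texture2D **ppTex,
                             std::vector<ID3D11DepthStencilView *> &dsvs,
                             ID3D11ShaderResourceView **ppSRV)
{
    D3D11_TEXTURE2D_DESC td = {};
    td.Width            = size;
    td.Height           = size;
    td.MipLevels        = 1;
    td.ArraySize        = numMaps;
    td.Format           = DXGI_FORMAT_R16_TYPELESS; // typeless so DSV/SRV formats can differ
    td.SampleDesc.Count = 1;
    td.Usage            = D3D11_USAGE_DEFAULT;
    td.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
    HRESULT hr = pDev->CreateTexture2D(&td, nullptr, ppTex);
    if (FAILED(hr))
        return hr;

    for (UINT i = 0; i < numMaps; ++i) {
        D3D11_DEPTH_STENCIL_VIEW_DESC dd = {};
        dd.Format                         = DXGI_FORMAT_D16_UNORM;
        dd.ViewDimension                  = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
        dd.Texture2DArray.FirstArraySlice = i;
        dd.Texture2DArray.ArraySize       = 1;
        ID3D11DepthStencilView *pDSV = nullptr;
        hr = pDev->CreateDepthStencilView(*ppTex, &dd, &pDSV);
        if (FAILED(hr))
            return hr;
        dsvs.push_back(pDSV);
    }

    D3D11_SHADER_RESOURCE_VIEW_DESC sd = {};
    sd.Format                   = DXGI_FORMAT_R16_UNORM;
    sd.ViewDimension            = D3D11_SRV_DIMENSION_TEXTURE2DARRAY;
    sd.Texture2DArray.MipLevels = 1;
    sd.Texture2DArray.ArraySize = numMaps;
    return pDev->CreateShaderResourceView(*ppTex, &sd, ppSRV);
}

For point lights a cube map array might fit even better (TextureCubeArray requires feature level 10.1+), but even a plain Texture2DArray keeps the slot count constant regardless of the number of lights.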

I'd imagine there should be a way to somehow combine shadow calculations, or is it really all about the art of cheating (i.e. only letting the x most significant lights in the current frame actually cast shadows)?

 

If anybody would like to share some information, thoughts, or links to papers or similar on this subject, it would be greatly appreciated. :)


Is it possible to step-debug D3D11 functions?

25 April 2014 - 06:10 PM

This feels extremely off, but I just can't seem to track the issue down so I thought I'd ask in case someone else has experienced something similar.

 

So I'm writing a game engine in C++ using DX11 for rendering. It's going rather well for the moment, and everything works just as intended when I build it as an executable. However, as a side project I'm also wrapping the engine functionality in a dynamic-link library so that it can be used from other languages as well. Now, this is also working... for the most part. For some reason that has me dumbfounded, the host application using the DLL ends up in a deadlock when DllMain is called with the DLL_PROCESS_DETACH reason. I've tracked the issue down to my final release call, the one that brings my ID3D11Device's reference count to 0, so it most likely happens as the device is destroyed. Unfortunately there is no debugging information available that lets me see exactly where it all goes sour in there, except that control goes into that final Release() call and never comes back out.

I have logged the refcounts for the device, device context, swap chain, etc., and they are released in exactly the same manner in the DLL and executable builds alike, yet the DLL version gets stuck while the executable happily terminates without so much as a sniffle from the DX debug layer or anything else that would hint at hidden errors. I've gone through my code twice searching for lingering buffers that may be dependent on the device (though were that really the case, they should hold their own reference to it and thus prevent it from being destroyed before the buffer or whatever resource is).
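
For completeness, the logging itself is nothing fancier than a wrapper around Release(), whose return value is the updated reference count (documented as intended for test purposes only, which is exactly what it's used for here); roughly:

#include <windows.h>
#include <cstdio>

// Release a COM interface, log the reference count Release() reports
// and null the pointer.
template <typename T>
void ReleaseAndLog(T *&ptr, const char *name)
{
    if (!ptr)
        return;
    ULONG refs = ptr->Release();
    std::printf("%s released, %lu reference(s) remaining\n", name, refs);
    ptr = nullptr;
}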

 

Code and reference counts:

// pSwapChain has 1 reference, pDevCon 1 and pDev 3 here
if (pSwapChain)
    pSwapChain->Release(); // pSwapChain: 0 references, pDevCon: 1, pDev: 2 after this line
if (pDevCon)
    pDevCon->Release();    // pDevCon hits 0 references, pDev has 1 remaining reference
if (pDev)
    pDev->Release();       // 0 references for all three in the executable build; control never returns from this call in the DLL build

So, for my questions: is there any way to debug inside the ID3D11Device::Release() / destructor bodies to see what's really happening? Is this code even open source or otherwise available for reading? I'm guessing not. Secondly, is this a known issue that can arise and, if so, what causes it? I've searched a good deal but have so far been unable to find anything relevant.
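
One thing I've also been meaning to try, in case someone finds it useful, is having the debug layer dump everything still alive on the device right before that final Release(); it requires the device to have been created with the D3D11_CREATE_DEVICE_DEBUG flag. Sketch:

#include <d3d11.h>
#include <d3d11sdklayers.h>

// Ask the debug layer to list every object still alive on the device.
// The report ends up in the debugger's output window.
void ReportLiveObjects(ID3D11Device *pDev)
{
    ID3D11Debug *pDebug = nullptr;
    if (SUCCEEDED(pDev->QueryInterface(__uuidof(ID3D11Debug),
                                       reinterpret_cast<void **>(&pDebug))))
    {
        pDebug->ReportLiveDeviceObjects(D3D11_RLDO_DETAIL);
        pDebug->Release();
    }
}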

 

 

Thanks for any ideas, 

Husbjörn


Looking for a decent Assimp animation tutorial

21 April 2014 - 01:48 PM

I've been trying to get my head around how to properly set up a skinning animation system using the bone transform data that can be imported with the Open Asset Import Library (http://assimp.sourceforge.net), but I always seem to fail, which I admit is starting to get on my nerves.

I've made a good 5+ attempts inspired by this article: http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html as well as the limited hints given in the Assimp documentation. One thing to note is that I'm using DirectX rather than OpenGL, so things in the above tutorial will obviously not match 100%, which may be where I'm going wrong. I'm adhering to the standard matrix multiplication order of scale * rotation * translation, and I'm transposing all of Assimp's imported matrices, which the documentation suggests are in a right-handed format. Comparing my animation frame matrices to those exposed by the assimp_view application they do seem to match, so the problem most likely lies in the way I append all these local matrices to construct the final transform of each bone. Pseudo-code of my approach to that:

XMMATRIX transform = bone.localTransform;
BoneData *pParent = bone.pParent;
while (pParent) {
    transform = pParent->localTransform * transform;
    pParent = pParent->pParent;
}

// The actual bone matrix applied to each vertex, scaled by its weight, is calculated like so:
XMMATRIX mat = globalMeshInverseTransform * transform * bone.offsetMatrix;

The other thing I'm doing differently from all the information I've been able to find is that I'm doing the vertex transformation on the CPU, by updating the vertex buffer, rather than in a vertex shader. I do see the upside of the shader approach; I just want to verify that I can do it this presumably easier way before moving on to that.
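
To be concrete, the CPU-side transformation amounts to something like the following (struct layout simplified, four weights per vertex assumed to sum to 1; boneMatrices holds the per-bone matrix from the pseudo-code above):

#include <DirectXMath.h>
#include <cstdint>
#include <vector>

using namespace DirectX;

struct SkinnedVertex {
    XMFLOAT3 bindPosition;   // position in bind pose
    uint32_t boneIndices[4]; // up to four influencing bones
    float    boneWeights[4]; // assumed pre-normalized
};

// boneMatrices[i] holds globalMeshInverseTransform * transform * offsetMatrix
// for bone i; each vertex is blended by its weighted bone matrices and the
// result written to the array that gets copied into the vertex buffer.
void SkinVertices(const std::vector<SkinnedVertex> &src,
                  const std::vector<XMFLOAT4X4> &boneMatrices,
                  std::vector<XMFLOAT3> &dst)
{
    for (size_t i = 0; i < src.size(); ++i) {
        XMVECTOR p      = XMLoadFloat3(&src[i].bindPosition);
        XMVECTOR result = XMVectorZero();
        for (int b = 0; b < 4; ++b) {
            float w = src[i].boneWeights[b];
            if (w <= 0.0f)
                continue;
            XMMATRIX m = XMLoadFloat4x4(&boneMatrices[src[i].boneIndices[b]]);
            result = XMVectorAdd(result,
                                 XMVectorScale(XMVector3TransformCoord(p, m), w));
        }
        XMStoreFloat3(&dst[i], result);
    }
}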

 

 

So my question is basically: is there anything obvious one needs to do when using DX rather than OGL, or when transforming vertex data on the CPU rather than in the vertex shader (not likely), that I've missed? Or is there some more to-the-point tutorial on how to go about this somewhere, preferably using DirectX? (I've searched but haven't been able to find one.)

 

 

Thanks for any pointers, 

Husbjörn

