
Husbjörn

Member Since 27 Jan 2014

Topics I've Started

Unordered access view woes with non-structured buffers

11 August 2014 - 04:59 AM

I've been trying to get this to work all morning, and as far as I can tell I'm not doing anything obviously wrong. I cannot find any information about it, but all the examples of unordered access views I can find use structured buffers; could it be that they simply won't work with "normal" (primitive data type) buffers?

 

Here's what I have:

// Buffer description
D3D11_BUFFER_DESC desc;
ZeroMemory(&desc, sizeof(D3D11_BUFFER_DESC));
desc.Usage		= D3D11_USAGE_DEFAULT;
desc.ByteWidth		= (UINT)cbByteSize;
desc.BindFlags		= D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;

// The above works for successfully creating a buffer without any warnings or errors, 
// some initial data is provided and a normal shader resource view can be created, 
// bound to and used by shaders without issues.
// However when I try to create another (unordered access) view for the same buffer 
// like so...

D3D11_UNORDERED_ACCESS_VIEW_DESC udesc;
ZeroMemory(&udesc, sizeof(D3D11_UNORDERED_ACCESS_VIEW_DESC));
udesc.Format			= srvDesc.Format; // This seems to be what causes trouble
udesc.ViewDimension		= D3D11_UAV_DIMENSION_BUFFER;
udesc.Buffer.FirstElement	= srvDesc.Buffer.FirstElement;
udesc.Buffer.NumElements	= srvDesc.Buffer.NumElements;

// ...the following call is what fails (pDevice / pBuffer stand in for my
// actual device and buffer pointers):
ID3D11UnorderedAccessView* pUAV = nullptr;
HRESULT hr = pDevice->CreateUnorderedAccessView(pBuffer, &udesc, &pUAV);

The debug layer tells me that "the format cannot be used with a typed unordered access view". It doesn't matter whether I specify the same format that works with the shader resource view, the corresponding TYPELESS format or DXGI_FORMAT_UNKNOWN; I get the same error every time.

What puzzles me is that it says "typed unordered access view". What does this mean? It sounds like it might be suggesting that a structured buffer is expected after all? A minimal sketch of the kind of view I'd expect to be able to create follows below.
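For what it's worth, this is the sort of typed UAV I would expect to work, under the assumption that DXGI_FORMAT_R32_UINT is one of the formats D3D11 guarantees typed UAV support for; pDevice and the element count are placeholders:

#include <d3d11.h>

// Minimal sketch (assumptions: pDevice is a valid ID3D11Device*, and
// DXGI_FORMAT_R32_UINT is a format with mandatory typed-UAV support).
const UINT numElements = 1024;                              // made-up size

D3D11_BUFFER_DESC bufDesc;
ZeroMemory(&bufDesc, sizeof(bufDesc));
bufDesc.Usage     = D3D11_USAGE_DEFAULT;
bufDesc.ByteWidth = numElements * sizeof(UINT);
bufDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
// Note: no D3D11_RESOURCE_MISC_BUFFER_STRUCTURED and no StructureByteStride;
// this is meant to be a plain typed buffer, not a structured one.

ID3D11Buffer* pBuffer = nullptr;
HRESULT hr = pDevice->CreateBuffer(&bufDesc, nullptr, &pBuffer);

D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
ZeroMemory(&uavDesc, sizeof(uavDesc));
uavDesc.Format              = DXGI_FORMAT_R32_UINT;         // a concrete typed format
uavDesc.ViewDimension       = D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.NumElements  = numElements;

ID3D11UnorderedAccessView* pUAV = nullptr;
hr = pDevice->CreateUnorderedAccessView(pBuffer, &uavDesc, &pUAV);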

 

I'm posting this here in the hope that someone with more experience will be able to shed some more light on this.


Geometry shader patchlist input?

08 August 2014 - 05:30 AM

I just ran into somewhat of a snag trying to send a D3D11_PRIMITIVE_TOPOLOGY_8_CONTROL_POINT_PATCHLIST indexed mesh through a geometry shader stage. Judging by the error message produced, the GS stage needs a primitive-type specifier preceding its input array. Looking those up, it would seem the only valid inputs to the GS are point (1 vertex), line (2 vertices), triangle (3 vertices), lineadj (2 base and 2 adjacent vertices) and triangleadj (3 base and 3 adjacent vertices).

Does this mean that the geometry shader can in fact only be used with point, line or triangle topologies? My intent was to produce the final geometry based on sets of control-point patches. My current guess at a workaround is sketched below.
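If I understand the pipeline correctly, patch topologies are consumed by the tessellation stages, and the geometry shader then receives whatever primitives the domain shader emits; a rough sketch of the setup I have in mind (all shader objects here are placeholder names) would be:

#include <d3d11.h>

// Hypothetical setup: the 8-control-point patches go through the tessellation
// stages first, so the GS sees tessellated triangles rather than patches.
// context, pVS, pHS, pDS, pGS and pPS are all placeholders.
void BindPatchPipeline(ID3D11DeviceContext* context,
                       ID3D11VertexShader* pVS, ID3D11HullShader* pHS,
                       ID3D11DomainShader* pDS, ID3D11GeometryShader* pGS,
                       ID3D11PixelShader* pPS)
{
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_8_CONTROL_POINT_PATCHLIST);
    context->VSSetShader(pVS, nullptr, 0);
    context->HSSetShader(pHS, nullptr, 0); // hull shader consumes the 8-point patches
    context->DSSetShader(pDS, nullptr, 0); // domain shader emits e.g. triangle-domain vertices
    context->GSSetShader(pGS, nullptr, 0); // the GS would then declare a 'triangle' input
    context->PSSetShader(pPS, nullptr, 0);
}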


Getting around non-connected vertex gaps in hardware tessellation displacement mapping

26 June 2014 - 10:10 AM

Sorry for the long title; I couldn't figure out how to express it more briefly without being overly ambiguous as to what this post is about.

 

Anyway, I've been poking around with displacement mapping using the hardware tessellation features of DX11 these last few days, mainly to get some more vertices to actually displace. I'm doing this for no particular reason other than to try it out, so I'm not really looking for a way around one specific problem.

Displacing a sphere or some other surface with completely connected faces works out as intended, but issues obviously occur where there are multiple vertices with the same position but different normals: these vertices get displaced in different directions and thus become disconnected, so gaps appear in the geometry.

I tried to mock up a simple solution by finding out which vertices share positions in my meshes and setting a flag telling my domain shader not to displace those vertices at all; it wouldn't be overly pretty, but at least the mesh should be gapless and hopefully not too noticeable, I reasoned. Of course this didn't work out very well (the whole subdivision patches generated from such overlapping vertices had their displacement factors set to 0, creating quite obvious, large frames around right angles and such). A refinement of that idea is sketched below.

What I'm wondering is basically whether this is a reasonable approach to refine further, or whether there are other, better ways to go about it. The only article on the topic I've managed to find mostly went on about the exquisiteness of Bézier curves but didn't really seem to come to any conclusions (although maybe those would have been obvious to anyone with the required math skills).
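Instead of zeroing the displacement, the refinement I have in mind is to give every set of co-located vertices a single shared displacement direction (the average of their normals), so the duplicates stay welded when displaced. A rough CPU-side sketch, assuming a simplified vertex layout:

#include <cmath>
#include <unordered_map>
#include <vector>

// Simplified, hypothetical vertex layout; the real one has more attributes.
struct Float3 { float x = 0, y = 0, z = 0; };
struct Vertex {
    Float3 position;
    Float3 normal;          // shading normal, left untouched
    Float3 displacementDir; // direction used only for displacement
};

// Quantize positions so vertices at the same location hash to the same key.
struct PosKey {
    long x, y, z;
    bool operator==(const PosKey& o) const { return x == o.x && y == o.y && z == o.z; }
};
struct PosKeyHash {
    size_t operator()(const PosKey& k) const {
        return (size_t)k.x * 73856093u ^ (size_t)k.y * 19349663u ^ (size_t)k.z * 83492791u;
    }
};
static PosKey MakeKey(const Float3& p, float scale = 1000.0f) {
    return { std::lround(p.x * scale), std::lround(p.y * scale), std::lround(p.z * scale) };
}

// Give all vertices sharing a position the same averaged displacement direction,
// so displaced duplicates stay connected instead of tearing apart.
void ComputeDisplacementDirections(std::vector<Vertex>& vertices) {
    std::unordered_map<PosKey, Float3, PosKeyHash> sums;
    for (const Vertex& v : vertices) {
        Float3& s = sums[MakeKey(v.position)];
        s.x += v.normal.x; s.y += v.normal.y; s.z += v.normal.z;
    }
    for (Vertex& v : vertices) {
        Float3 s = sums[MakeKey(v.position)];
        float len = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
        if (len > 0.0f) { s.x /= len; s.y /= len; s.z /= len; }
        v.displacementDir = s;
    }
}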

Thankful for any pointers on this; the more I try to force it, the more it feels like I'm probably missing something.

 

As for my implementation of the tessellation, I've mostly based it around what is described in chapter 18.7 and 18.8 of Introduction to 3D Game Programming With DirectX 11 (http://www.amazon.com/Introduction-3D-Game-Programming-DirectX/dp/1936420228).


Per-instance data in (non-vertex) shaders

22 June 2014 - 04:18 PM

So I've been using per-instance input slots to provide world and world-view-projection matrices to my vertex shaders for instanced meshes. This works just fine, and I've been passing the matrices along through the output of the vertex shader for some small tests when needed. However, as shaders get more complex, copying such per-instance data for every vertex just to make it available to other shader stages can't be the best approach. I know I could set up a constant buffer containing arrays indexed by the instance id, which would be less overhead to pass along through the shader pipeline, but is there any other way to achieve this? I'd rather not use the cbuffer array approach since the number of instances in any given frame may vary. One alternative I've been considering is sketched below.
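The alternative I've been toying with (not sure whether it's idiomatic) is to put the per-instance matrices in a structured buffer bound as a shader resource view; an SRV can be bound to any stage, and each stage can index it with an instance id, so only a single uint has to travel through the pipeline. A sketch, with pDevice and the capacity as placeholders:

#include <d3d11.h>
#include <DirectXMath.h>

// Hypothetical per-instance record; field layout is just for illustration.
struct InstanceData {
    DirectX::XMFLOAT4X4 world;
    DirectX::XMFLOAT4X4 worldViewProj;
};

const UINT maxInstances = 1024;   // made-up capacity

D3D11_BUFFER_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Usage               = D3D11_USAGE_DYNAMIC;           // rewritten each frame via Map
desc.ByteWidth           = maxInstances * sizeof(InstanceData);
desc.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags      = D3D11_CPU_ACCESS_WRITE;
desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
desc.StructureByteStride = sizeof(InstanceData);

ID3D11Buffer* pInstanceBuffer = nullptr;
HRESULT hr = pDevice->CreateBuffer(&desc, nullptr, &pInstanceBuffer);

D3D11_SHADER_RESOURCE_VIEW_DESC srv;
ZeroMemory(&srv, sizeof(srv));
srv.Format              = DXGI_FORMAT_UNKNOWN;            // required for structured buffers
srv.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
srv.Buffer.FirstElement = 0;
srv.Buffer.NumElements  = maxInstances;

ID3D11ShaderResourceView* pInstanceSRV = nullptr;
hr = pDevice->CreateShaderResourceView(pInstanceBuffer, &srv, &pInstanceSRV);
// The same SRV could then be bound to any stage that needs it, e.g.:
// context->VSSetShaderResources(0, 1, &pInstanceSRV);
// context->PSSetShaderResources(0, 1, &pInstanceSRV);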


Standard approach to shadow mapping multiple light sources?

14 June 2014 - 10:04 AM

So I've been contemplating this lately, is there any standard approach to how to (efficiently) handle dynamic shadow mapping of multiple light sources?

As I've understood it, the common advice is to just render separate depth maps for each visible light in the scene and then have the scene shader(s) iterate over all of them. However, this sounds like it would get extremely wasteful with even relatively few lights.

Assume for example that I have a moderately complex scene lit by three point lights; since each point light needs a full cube of depth maps, that translates into rendering the scene 18 times (3 lights × 6 cube faces) just to generate them, and those maps then have to be stored in memory as well. Assuming 2048×2048 16-bit maps, that alone will use 144 MB of VRAM (8 MB per face × 18 faces). I suppose that isn't overly much, but it still adds up with further lights.

Another big issue is that this approach would quickly eat up texture slots in the actual scene shader. I suppose you could put multiple shadow maps into a texture atlas, but that has its own problems; a texture array, as sketched below, might be another option.
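What I've been meaning to try (not sure it's the standard answer) is to keep all the shadow maps as slices of a single Texture2DArray, so the scene shader only spends one texture slot on them. Something along these lines, where pDevice is a placeholder:

#include <d3d11.h>

// Hypothetical sketch: all point-light shadow maps in one Texture2DArray.
const UINT mapSize   = 2048;
const UINT numLights = 3;
const UINT numSlices = numLights * 6;    // six cube faces per point light

D3D11_TEXTURE2D_DESC texDesc;
ZeroMemory(&texDesc, sizeof(texDesc));
texDesc.Width            = mapSize;
texDesc.Height           = mapSize;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = numSlices;
texDesc.Format           = DXGI_FORMAT_R16_TYPELESS;  // typeless so it can be both DSV and SRV
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
texDesc.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* pShadowArray = nullptr;
HRESULT hr = pDevice->CreateTexture2D(&texDesc, nullptr, &pShadowArray);

// One DSV per slice for rendering the individual depth maps...
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
ZeroMemory(&dsvDesc, sizeof(dsvDesc));
dsvDesc.Format                         = DXGI_FORMAT_D16_UNORM;
dsvDesc.ViewDimension                  = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
dsvDesc.Texture2DArray.ArraySize       = 1;
dsvDesc.Texture2DArray.FirstArraySlice = 0;            // set per light/face
ID3D11DepthStencilView* pSliceDSV = nullptr;
hr = pDevice->CreateDepthStencilView(pShadowArray, &dsvDesc, &pSliceDSV);

// ...and a single SRV over the whole array for the scene shader.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc, sizeof(srvDesc));
srvDesc.Format                   = DXGI_FORMAT_R16_UNORM;
srvDesc.ViewDimension            = D3D11_SRV_DIMENSION_TEXTURE2DARRAY;
srvDesc.Texture2DArray.MipLevels = 1;
srvDesc.Texture2DArray.ArraySize = numSlices;
ID3D11ShaderResourceView* pShadowSRV = nullptr;
hr = pDevice->CreateShaderResourceView(pShadowArray, &srvDesc, &pShadowSRV);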

I'd just imagine there should be a way to somehow combine shadow calculations, or is it really all about the art of cheating (i.e. only making the x most significant lights in the current frame actually cast shadows)?

 

If anybody would like to share some information, thoughts or links to papers or similar on this subject, it would be greatly appreciated :)

