
#5256930 How to create a depth-stencil-only pixel shader?

Posted by Husbjörn on 12 October 2015 - 04:22 PM

Right, it turns out I had accidentally assigned a normal rendering shader to a billboard instead of this depth-only one, so that was indeed trying to write to SV_Target. How embarrassing. On the upside, no more warnings now that I have fixed that oversight!

Thanks a lot for pointing it out, ajmiles and Matias Goldberg :)

#5227275 Inconsistent falloff for light sources

Posted by Husbjörn on 05 May 2015 - 04:30 AM

I've been trying to implement lighting in my project, essentially following the method that is described here: http://www.3dgep.com/texturing-lighting-directx-11


While this does seemingly work, I've noticed some discrepancies: certain meshes appear to receive more light than they should given their distance from the light source(s).

Here's an image showcasing this issue with a spot light:


The light source is situated a small distance above the floor and is aimed straight along the Z axis. As can be seen, the light falls off such that it has essentially no influence on the floor by the time it reaches the wall, yet the wall itself receives far more light than I feel it should at that distance.


One might think this is because the imaginary light rays hit the wall straight on, as opposed to grazing the floor, and as such would light it up more. However, the same problem is also evident with omnidirectional light sources, which should have the same effect on every surface they hit (including the floor). As can be seen in the following image, this is not the case.

By the way, the dark circle is a semi-transparent sphere indicating the position of the light source, not a shadow of the sphere above it; no shadow mapping has been implemented yet.





I am wondering what might be the cause of this.

All walls, the floor and the ceiling are just instances of the same quad mesh, repositioned and rotated using world matrices, so I don't believe the issue relates to one (read: all) of them having incorrect normal data or such.

Also, when rendering only the calculated attenuation (see the link above), it does indeed fall off as it "should" from the light source, and opposite walls do not receive a greater influence, so the problem shouldn't be related to the attenuation factor either.


From what I can gather, the problem seems to relate to this function, since returning only C from it also gives proper falloff:

/**
 * Calculates the diffuse contribution of a light source at the given pixel.
 * @param L - Light direction in world space
 * @param N - Normal direction of the pixel being rendered in world space
 * @param C - Light colour
 */
float4 CalculateDiffuseContrib(float3 L, float3 N, float4 C) {
	float NL = max(0, dot(N, L));
	return C * NL;
}

Since the only real factor here is the fragment normal (the light direction should be correct and is calculated as normalize(lightPos - fragmentPos), where both are in world space), I imagine the problem would have to be related to that somehow.
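For reference, the full per-light term boils down to something like this (a simplified sketch; the attenuation coefficients here are placeholder values following the 3dgep tutorial, not my exact ones):

// Simplified per-light diffuse computation (placeholder attenuation constants).
float4 CalculatePointLight(float3 lightPos, float4 lightColour, float3 fragmentPos, float3 N) {
	float3 toLight = lightPos - fragmentPos;
	float dist = length(toLight);
	float3 L = toLight / dist;	// normalize(lightPos - fragmentPos)

	// Constant / linear / quadratic attenuation as in the 3dgep tutorial;
	// the actual coefficients are set per light.
	float attenuation = 1.0f / (1.0f + 0.1f * dist + 0.01f * dist * dist);

	return CalculateDiffuseContrib(L, N, lightColour) * attenuation;
}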

I am using normal mapping in the above screenshots, but even when using interpolated vertex normals the results are the same.


Is this a common occurrence, or may it even be that this is how it is supposed to look?

Thanks in advance for any illuminating ;) replies!

#5191574 Getting around non-connected vertex gaps in hardware tessellation displacemen...

Posted by Husbjörn on 06 November 2014 - 03:43 PM

Off the top of my head: each individual face of your cube is tessellated and then displaced independently.

You need to ensure that the edge vertices are shared in each (subdivided) side-face or else these seams will occur since all vertices on the top face are displaced only along the up axis and all vertices of the front face are displaced only along the depth axis.

A simple solution is to displace along the vertex normals and ensure that wherever you have overlapping vertices (such as at the corners of a cube), you set the normal of all such vertices to the average of all "actual" vertex normals at that position. This will make the edges a bit more bulky but keeps the faces connected; a sketch of the domain shader side of this follows below.
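In the domain shader that amounts to something like this (a rough sketch with made-up names, not my exact code):

// Rough sketch: displacement along the pre-averaged vertex normal.
// heightMap, linearSampler, displacementScale etc. are placeholder names.
Texture2D heightMap : register(t0);
SamplerState linearSampler : register(s0);

cbuffer PerObject : register(b0) {
	float4x4 viewProjection;
	float displacementScale;
};

struct ControlPoint {
	float3 position : POSITION;
	float3 normal : NORMAL;	// averaged where vertices coincide
	float2 uv : TEXCOORD0;
};

struct PatchConstants {
	float edges[3] : SV_TessFactor;
	float inside : SV_InsideTessFactor;
};

[domain("tri")]
float4 DS(PatchConstants pc, float3 bary : SV_DomainLocation, const OutputPatch<ControlPoint, 3> patch) : SV_Position {
	float3 pos = bary.x * patch[0].position + bary.y * patch[1].position + bary.z * patch[2].position;
	float3 normal = normalize(bary.x * patch[0].normal + bary.y * patch[1].normal + bary.z * patch[2].normal);
	float2 uv = bary.x * patch[0].uv + bary.y * patch[1].uv + bary.z * patch[2].uv;

	// Coincident vertices share the same averaged normal, so they are
	// displaced to the same point and the faces stay connected.
	float height = heightMap.SampleLevel(linearSampler, uv, 0).r;
	pos += normal * height * displacementScale;

	return mul(float4(pos, 1.0f), viewProjection);
}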


My previous post in this thread (just above yours) describes how I solved this in a relatively simple way in more detail.

#5178123 Seemingly incorrect buffer data used with indirect draw calls

Posted by Husbjörn on 04 September 2014 - 01:13 PM

Like any other GPU-executed command, CopyStructureCount has implicit synchronization with any commands issued afterwards. So there shouldn't be any kind of manual waiting or synchronization required, the driver is supposed to handle it.

That's what I thought.


After a third rewrite (and a full rewrite of the rendering shaders as well), it turned out I had built my quads the wrong way in the geometry shader so that they weren't visible; the appropriate vertex count does indeed seem to be passed to the DrawInstancedIndirect call. However, RenderDoc still reports the call as having a zero argument for the vertex count, so I guess there's a quite sneaky bug in there too, which threw me off (naturally I expected it to give the correct output).

Thanks for your suggestions though :)



Edit: Didn't see your ninja post, baldurk.




To clarify - the number that you see in the DrawInstancedIndirect(<X, Y>) in the event browser in RenderDoc is just retrieved the same way as you described by copying the given buffer to a staging buffer, and mapping that.

That is indeed weird, because I now do get the proper count read back if I copy it to a staging buffer and map it myself, and the draw results are correct, yet RenderDoc claims this function is called with the arguments <0, 1>. I guess it clips away the last two offset integers, because in reality the buffer should contain four values (mine would be x, 1, 0, 0), right?

My byte offset is zero, and there is nothing more in the indirect buffer than the 16 bytes representing the argument list.
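For reference, those 16 bytes are laid out to match the argument order DrawInstancedIndirect expects (the struct name is just for illustration):

// Argument layout consumed by DrawInstancedIndirect; the first field is the
// one CopyStructureCount fills in from the buffer's hidden counter.
struct DrawInstancedArgs {
	uint vertexCountPerInstance;	// x, written by CopyStructureCount
	uint instanceCount;	// 1
	uint startVertexLocation;	// 0
	uint startInstanceLocation;	// 0
};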


I'll try building on my currently working minimal program to see if it still renders correctly and whether RenderDoc keeps showing that 0 (or something else unreasonable), and I'll get back. Maybe the problems will resurface in a different way once I add some complexity back in, though I hope not.

#5172779 Unordered access view woes with non-structured buffers

Posted by Husbjörn on 11 August 2014 - 08:05 AM

RWBuffer<float3> RWBuf : register(u0);

But it fails at the call to ID3D11Device::CreateUnorderedAccessView, so I don't think the shader declaration is of any relevance, since the two haven't been bound together at that point.
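One thing I suspect (not confirmed yet) is the element format itself: a typed RWBuffer<float3> would correspond to DXGI_FORMAT_R32G32B32_FLOAT, and as far as I can tell most hardware does not support three-component formats for UAVs. Declarations along these lines should map to formats that can actually be created as UAVs (a sketch, names for illustration only):

// Alternatives that map to UAV-creatable resources:
RWBuffer<float4> typedBuf : register(u0);	// DXGI_FORMAT_R32G32B32A32_FLOAT
RWStructuredBuffer<float3> structBuf : register(u0);	// structured buffer, no DXGI format needed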

#5172584 Rendering blended stuff before skybox?

Posted by Husbjörn on 10 August 2014 - 05:50 AM

I would draw the skybox first, since it should always be furthest in the background anyway, and you should sort your transparent objects from back to front.

If you draw the skybox last, your transparent objects will only blend with opaque and previously drawn transparent objects, but not with the skybox. In areas where only the skybox would be behind them, the transparent parts will instead blend with the render target clear colour, leaving an edge of that colour around them. Of course this won't look pretty once the sky gets filled in around those blended areas ;)
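As an aside, a common trick to keep the skybox behind everything regardless of its mesh size is to pin it to the far plane in its vertex shader (a sketch, not from any particular engine; pair it with a LESS_EQUAL depth test if you don't draw it first):

// Sketch: skybox pinned to the far plane. Setting z = w makes z/w == 1.0
// after the perspective divide, i.e. maximum depth.
cbuffer PerFrame : register(b0) {
	float4x4 viewProjection;	// with the view translation removed
};

struct VSOut {
	float4 position : SV_Position;
	float3 texDir : TEXCOORD0;	// cubemap lookup direction
};

VSOut SkyboxVS(float3 position : POSITION) {
	VSOut output;
	output.position = mul(float4(position, 1.0f), viewProjection).xyww;
	output.texDir = position;
	return output;
}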

#5163027 Getting around non-connected vertex gaps in hardware tessellation displacemen...

Posted by Husbjörn on 26 June 2014 - 10:10 AM

Sorry for the long title, I couldn't figure out how to express it more briefly without being overly ambiguous as to what this post is about.


Anyway, for the last few days I've been poking around with displacement mapping, using the hardware tessellation features of DX11 to get some more vertices to actually displace. I'm doing this for no particular reason other than to try it out, so I'm not really looking for other ways to solve some specific problem.

Displacing a sphere or some other surface with completely connected faces works out as intended, but issues obviously occur where there are multiple vertices with the same position but different normals: these vertices get displaced in different directions and thus become disconnected, so gaps appear in the geometry.

I tried to mock up a simple solution by finding out which vertices share positions in my meshes and then setting a flag on these to tell my domain shader not to displace those vertices at all; it wouldn't be overly pretty, but at least the mesh should be gapless and it hopefully wouldn't be too noticeable, I reasoned. Of course this didn't work out very well (whole subdivision patches generated from such overlapping vertices had their displacement factors set to 0, creating quite obvious, large frames around right angles and such).

What I'm wondering is basically whether this is a reasonable approach to refine further, or whether there are better ways to go about it. The only article on the topic I've managed to find mostly went on about the exquisiteness of Bézier curves but didn't really seem to come to any conclusions (although maybe those would have been obvious to anyone with the required math skills).

Thankful for any pointers on this; the more I try to force it, the more it feels like I'm probably missing something.


As for my implementation of the tessellation, I've mostly based it on what is described in chapters 18.7 and 18.8 of Introduction to 3D Game Programming with DirectX 11 (http://www.amazon.com/Introduction-3D-Game-Programming-DirectX/dp/1936420228).

#5160510 Standard approach to shadow mapping multiple light sources?

Posted by Husbjörn on 14 June 2014 - 10:04 AM

So I've been contemplating this lately: is there any standard approach to (efficiently) handling dynamic shadow mapping of multiple light sources?

As I've understood it, the common advice is to just render separate depth maps for each visible light in the scene and then let the scene shader(s) iterate over all of them. However, this sounds like it would get extremely wasteful even with relatively few lights.

Assume for example that I have a moderately complex scene lit by three point lights; since each point light needs a full cube map, this translates into rendering the scene 18 times (3 lights × 6 cube faces) just to generate the depth maps, and those maps then have to be stored in memory as well (at 2048×2048 with 16 bits per texel, that alone uses 144 MB of VRAM; I suppose that isn't overly much, but it still adds up with further lights).

Another big issue is that this approach would quickly eat up texture slots in the actual scene shader (I suppose you could put multiple shadow maps into a texture atlas, but that has its own problems).

I'd just imagine there should be a way to somehow combine shadow calculations. Or is it really all about the art of cheating (i.e. only making the x most significant lights in the current frame actually cast shadows)?
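For the texture-slot issue at least, I imagine a Texture2DArray could work, since any number of same-sized shadow maps would then share a single slot (a sketch of what I have in mind; registers and names are made up):

// Sketch: all 2D shadow maps in one Texture2DArray, one slice per light.
Texture2DArray<float> shadowMaps : register(t4);
SamplerComparisonState shadowSampler : register(s1);

float SampleShadow(uint lightIndex, float3 shadowUVDepth) {
	// The third texture coordinate selects the array slice (the light).
	return shadowMaps.SampleCmpLevelZero(shadowSampler, float3(shadowUVDepth.xy, lightIndex), shadowUVDepth.z);
}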


If anybody would like to share some information, thoughts or links to papers or similar on this subject it would be greatly appreciated :)

#5126839 Best approach to scaling multi-node mesh?

Posted by Husbjörn on 27 January 2014 - 06:28 PM

When constructing any matrix, you need to consider how you perform each task; the order does matter.


True, I knew that, but I just instinctively assumed you would multiply the child transform by the parent's. Now that I think about it, it makes more sense to do it the other way around, and indeed that solved the issue.
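For anyone else hitting this: which order is right depends on your vector convention. A sketch, assuming HLSL-style row-vector math (the function name is made up):

// With row-vector math (v' = mul(v, M)) the left-hand matrix is applied
// first, so the child's local transform goes on the left; with
// column-vector math the order is reversed.
float4x4 ComposeWorld(float4x4 childLocal, float4x4 parentWorld) {
	return mul(childLocal, parentWorld);	// row-vector convention
}

// A vertex is then transformed as:
//	worldPos = mul(float4(localPos, 1.0f), ComposeWorld(childLocal, parentWorld));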

Thanks for your assistance :)