

Husbjörn

Member Since 27 Jan 2014
Offline Last Active Nov 26 2014 05:59 AM

#5191574 Getting around non-connected vertex gaps in hardware tessellation displacemen...

Posted by Husbjörn on 06 November 2014 - 03:43 PM

Off the top of my head, each individual face of your cube is tessellated and then displaced independently.

You need to ensure that the edge vertices are shared between adjacent (subdivided) faces, or else these seams will occur: all vertices on the top face are displaced only along the up axis, while all vertices of the front face are displaced only along the depth axis.

A simple solution is to displace along the vertex normals and, wherever you have overlapping vertices (such as at the corners of a cube), set the normal of all such vertices to the average of all "actual" vertex normals at that position. This will make the edges a bit more bulky but keeps the faces connected.
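That averaging step can be sketched on the CPU like this (a minimal illustration; all names here are made up, not from the original code):

```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x, y, z; };

// Average the normals of all vertices that share a position, so that
// displacing along the normal moves coincident vertices identically
// and no gaps open up between faces.
void AverageSharedNormals(const std::vector<Vec3>& positions,
                          std::vector<Vec3>& normals)
{
    // Group vertex indices by (quantized) position to find overlaps.
    auto key = [](const Vec3& p) {
        return std::make_tuple(std::lround(p.x * 1000.f),
                               std::lround(p.y * 1000.f),
                               std::lround(p.z * 1000.f));
    };
    std::map<std::tuple<long, long, long>, std::vector<size_t>> groups;
    for (size_t i = 0; i < positions.size(); ++i)
        groups[key(positions[i])].push_back(i);

    // Replace each group's normals with their normalized average.
    for (const auto& group : groups) {
        Vec3 sum{0, 0, 0};
        for (size_t i : group.second) {
            sum.x += normals[i].x;
            sum.y += normals[i].y;
            sum.z += normals[i].z;
        }
        float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
        if (len > 1e-6f)
            for (size_t i : group.second)
                normals[i] = {sum.x / len, sum.y / len, sum.z / len};
    }
}
```

The position quantization here is just one way to match "overlapping" vertices; an exact-match or epsilon-based comparison works too.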

 

My previous post in this thread (just above yours) describes how I solved this in a relatively simple way in more detail.




#5178123 Seemingly incorrect buffer data used with indirect draw calls

Posted by Husbjörn on 04 September 2014 - 01:13 PM


Like any other GPU-executed command, CopyStructureCount has implicit synchronization with any commands issued afterwards. So there shouldn't be any kind of manual waiting or synchronization required, the driver is supposed to handle it.

That's what I thought.

 

After a third rewrite (and a full rewrite of the rendering shaders as well), it turned out I had managed to build my quads the wrong way in the geometry shader, so they weren't visible; the appropriate vertex count does indeed seem to be passed to the DrawInstancedIndirect call. However, RenderDoc is still reporting the call as having a zero argument for the vertex count, so I guess there's a quite sneaky bug in there too, which threw me off (naturally I expected it to give the correct output).

Thanks for your suggestions though :)

 

 

Edit: Didn't see your ninja post, baldurk.

 

 

 

To clarify - the number that you see in the DrawInstancedIndirect(<X, Y>) in the event browser in RenderDoc is just retrieved the same way as you described by copying the given buffer to a staging buffer, and mapping that.

That is indeed weird, because now I do get the proper count read back if I map it to a staging buffer myself, as well as the correct draw results, yet RenderDoc claims this function is called with the arguments <0, 1>. I guess it clips away the last two offset integers, because in reality the buffer should contain four values (mine would be x, 1, 0, 0), right?

My byte offset is zero, there is nothing more in the indirect buffer than the 16 bytes representing the argument list.

 

I'll try adding to my currently working minimalistic program to see whether it still renders correctly and whether RenderDoc keeps showing that 0 (or something else unreasonable), and will report back. Maybe the problems will resurface in a different way once I add some complexity back in, though I hope not.




#5172779 Unordered access view woes with non-structured buffers

Posted by Husbjörn on 11 August 2014 - 08:05 AM

RWBuffer<float3> RWBuf : register(u0);

But it fails at the call to ID3D11Device::CreateUnorderedAccessView, so I don't think the shader declaration is of any relevance, since the two haven't been bound together at that point.
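For what it's worth, if the view description uses DXGI_FORMAT_R32G32B32_FLOAT to match the float3 declaration, that alone can make CreateUnorderedAccessView fail: three-component formats are generally not supported for typed UAVs. Two common workarounds, sketched in HLSL (this is an assumption about the failing code, since the view description isn't shown):

```hlsl
// 1) A scalar typed UAV (DXGI_FORMAT_R32_FLOAT, element count * 3),
//    indexing the three components manually:
RWBuffer<float> RWBuf : register(u0);

// 2) A structured buffer, which carries no DXGI format at all and
//    therefore sidesteps the typed-UAV format restrictions:
struct Element { float3 value; };
RWStructuredBuffer<Element> RWStructBuf : register(u0);
```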




#5172584 Rendering blended stuff before skybox?

Posted by Husbjörn on 10 August 2014 - 05:50 AM

I would draw the skybox first, since it should always be furthest in the background anyway, and you should sort your transparent objects from back to front.

If you draw the skybox last, your transparent objects will only blend with opaque and other, previously drawn transparent ones, but not with the skybox. This means they will get an edge around their transparent parts in the colour they were blended with, which will be the render target clear colour in areas where only the skybox would be behind these objects. Of course this won't look pretty once the sky gets filled in around the non-transparent pixels surrounding those blended areas ;)
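A minimal sketch of that ordering (the types and field names are hypothetical, just to show the sort):

```cpp
#include <algorithm>
#include <vector>

struct Renderable {
    float distanceToCamera; // e.g. distance from the camera to the object's centre
    bool  transparent;
};

// Produces the order in which to draw after the skybox: opaque objects
// first, then transparent objects sorted back to front so each one
// blends against everything already drawn behind it.
std::vector<Renderable> SortForDrawing(std::vector<Renderable> objects)
{
    std::stable_sort(objects.begin(), objects.end(),
        [](const Renderable& a, const Renderable& b) {
            if (a.transparent != b.transparent)
                return !a.transparent;                           // opaque first
            if (a.transparent)
                return a.distanceToCamera > b.distanceToCamera;  // far to near
            return false;                                        // keep opaque order
        });
    return objects;
}
```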




#5163027 Getting around non-connected vertex gaps in hardware tessellation displacemen...

Posted by Husbjörn on 26 June 2014 - 10:10 AM

Sorry for the long title; I couldn't figure out how to express it more concisely without being overly ambiguous as to what this post is about.

 

Anyway, I've been poking around with displacement mapping these last few days, using the hardware tessellation features of DX11 to get some more vertices to actually displace. This is for no particular reason other than to try it out, so I'm not really looking for ways to solve some other specific problem.

Displacing a sphere or some other surface with completely connected faces works out as intended, but issues obviously occur where there are multiple vertices with the same position but different normals (these vertices then get displaced in different directions and thus become disconnected => gaps appear in the geometry).

I tried to mock up a simple solution by finding out which vertices share positions in my meshes and then setting a flag telling my domain shader not to displace those vertices at all; it wouldn't be overly pretty, but at least the mesh should be gapless and it hopefully wouldn't be too noticeable, I reasoned. Of course this didn't work out very well (whole subdivision patches generated from such overlapping vertices had their displacement factors set to 0, creating quite obvious, large frames around right angles and such).

What I'm wondering is basically whether this is a reasonable approach to refine further, or whether there are other, better ways to go about it. The only article on the topic I've managed to find mostly went on about the exquisiteness of Bézier curves but didn't really seem to come to any conclusions (although maybe those would have been obvious to anyone with the required math skills).

Thankful for any pointers on this; the more I try to force this, the more it feels like I'm probably missing something.

 

As for my implementation of the tessellation, I've mostly based it on what is described in chapters 18.7 and 18.8 of Introduction to 3D Game Programming with DirectX 11 (http://www.amazon.com/Introduction-3D-Game-Programming-DirectX/dp/1936420228).




#5126839 Best approach to scaling multi-node mesh?

Posted by Husbjörn on 27 January 2014 - 06:28 PM


When constructing any matrix, you need to consider how you perform each task; the order does matter.

 

True, I knew that, but I just instinctively assumed you would multiply the child transform by the parent's. Now that I think about it, it makes more sense to do it the other way around, and indeed that solved the issue.
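In the row-vector convention D3DX uses, that works out to childWorld = childLocal * parentWorld, so the parent's scale also affects the child's local translation. A tiny 1D sketch of why the order matters (illustrative only, not code from the thread):

```cpp
// A minimal 1D affine transform (scale, then translate) to show why
// composition order matters when combining child and parent transforms.
struct Transform {
    float scale;
    float translate;
    float Apply(float x) const { return x * scale + translate; }
};

// Combined transform that applies `first`, then `second`:
// second(first(x)) = (x*s1 + t1)*s2 + t2 = x*(s1*s2) + (t1*s2 + t2)
Transform Combine(const Transform& first, const Transform& second)
{
    return { first.scale * second.scale,
             first.translate * second.scale + second.translate };
}
```

With a child that translates by 3 and a parent that scales by 2, Combine(child, parent) places the child's origin at 6 (the offset is scaled by the parent, as you'd expect in a node hierarchy), while the reverse order leaves it at 3.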

Thanks for your assistance :)



