DX11 Getting around non-connected vertex gaps in hardware tessellation displacement mapping


Sorry for the long title; I couldn't figure out how to express it more briefly without being overly ambiguous about what this post is about.

 

Anyway, for the last few days I've been poking around with displacement mapping, using the hardware tessellation features of DX11 to get some more vertices to actually displace, for no particular reason other than to try it out, so I'm not really looking for alternative ways to solve some specific problem.

Displacing a sphere or some other surface with completely connected faces works out as intended, but issues obviously occur where there are multiple vertices with the same position but different normals (these vertices get displaced in different directions and thus become disconnected => gaps appear in the geometry). I tried to mock up a simple solution by finding out which vertices share positions in my meshes and then setting a flag for these to tell my domain shader not to displace those vertices at all; it wouldn't be overly pretty, but at least the mesh should be gapless, and I reasoned it hopefully wouldn't be too noticeable. Of course this didn't work out very well (the whole subdivision patches generated from such overlapping vertices had their displacement factors set to 0, creating quite obvious, large frames around right angles and such). What I'm wondering is basically whether this is a reasonable approach to refine further, or whether there are better ways to go about it. The only article on the topic I've managed to find mostly went on about the exquisiteness of Bezier curves but didn't really seem to come to any conclusions (although maybe those would have been obvious to anyone with the required math skills).
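For illustration, the flagging pass amounts to something like the following sketch (the VERTEX layout, the samePos epsilon test and the displacementScale field are illustrative placeholders, not actual code from this thread; a float3 with x/y/z members is assumed, as elsewhere below):

#include <cmath>
#include <vector>

// Epsilon comparison of two positions.
static bool samePos(const float3& a, const float3& b, float eps = 1e-5f)
{
    return std::fabs(a.x - b.x) < eps &&
           std::fabs(a.y - b.y) < eps &&
           std::fabs(a.z - b.z) < eps;
}

// Give every vertex a displacement scale that the domain shader multiplies
// into the displacement amount; any two distinct vertices sharing a position
// get their displacement disabled so they cannot drift apart. O(n^2) for clarity.
void flagDisjointVertices(std::vector<VERTEX>& verts)
{
    for (std::size_t i = 0; i < verts.size(); ++i)
        verts[i].displacementScale = 1.0f;

    for (std::size_t i = 0; i < verts.size(); ++i)
        for (std::size_t j = i + 1; j < verts.size(); ++j)
            if (samePos(verts[i].pos, verts[j].pos))
                verts[i].displacementScale = verts[j].displacementScale = 0.0f;
}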

I'd be thankful for any pointers on this; the more I try to force it, the more it feels like I'm probably missing something.

 

As for my implementation of the tessellation, I've mostly based it on what is described in chapters 18.7 and 18.8 of Introduction to 3D Game Programming with DirectX 11 (http://www.amazon.com/Introduction-3D-Game-Programming-DirectX/dp/1936420228).


Thanks MJP.

I don't suppose there's any video / audio recording of the presentation using those slides available somewhere, possibly for a nominal fee?

 

On a more off-topic note, I recognize that avatar of yours but have been unable to remember the name of the show (or was it possibly a book?) it featured in, care to enlighten me? 


I think I have implemented the index buffer for PN-AEN as described in that PDF document, but once I finished I saw that the patch constant function HS_Constant is missing from the shader they provided in the appendix. :(

Copy-paste error. :D

 

 


D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: Mismatched topology. Current Hull Shader expects input Control Point count of 3, but Input Assembler topology defines a patch list with 9 Control Points per patch. [ EXECUTION ERROR #2097222: DEVICE_DRAW_HULL_SHADER_INPUT_TOPOLOGY_MISMATCH]

 

Damn. Why did they provide code that doesn't work?
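For anyone hitting the same error: it means the input assembler topology doesn't declare the same control point count as the hull shader's InputPatch. For the 9-index PN-AEN patches, the two have to agree roughly like this (a sketch; context is assumed to be the ID3D11DeviceContext*):

// The IA patch-list topology must match the hull shader's declared
// InputPatch size. A PN-AEN hull shader consumes 9 control points
// (and typically outputs 3):
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_9_CONTROL_POINT_PATCHLIST);

with the hull shader correspondingly taking InputPatch<HS_INPUT, 9> and declaring [outputcontrolpoints(3)].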

 

Edit:

I got the shader to work this time, but I am not sure if I have the indices right, as I see no displacement.

[Image: normal.jpg]

 

If I add displacement along the normal:

... // domain shader
float3 n = mul(f3Normal, (float3x3)g_f4x4WorldView);
n = normalize(n);
f3EyePosition += n * 0.02f;

[Image: disp.jpg]

CRACKS!

// First attempt: the comparison step below is wrong (see the corrected
// version further down). It walks consecutive index pairs across the whole
// 9-entry patch instead of only the edge slots (3,4), (5,6) and (7,8), and
// it only compares against the three edges of the same triangle, so a
// neighboring triangle's edge is never found.
void Test::calcPNAENIndices(const std::vector<USHORT>& ind, const std::vector<VERTEX>& verts, std::vector<USHORT>& out)
{
    out.resize(ind.size() * 3);
    for (std::size_t i = 0; i < ind.size(); i += 3)
    {
        out[3 * i + 0] = ind[i + 0];
        out[3 * i + 1] = ind[i + 1];
        out[3 * i + 2] = ind[i + 2];

        out[3 * i + 3] = ind[i + 0];
        out[3 * i + 4] = ind[i + 1];
        out[3 * i + 5] = ind[i + 1];

        out[3 * i + 6] = ind[i + 2];
        out[3 * i + 7] = ind[i + 2];
        out[3 * i + 8] = ind[i + 0];
    }

    struct Edge
    {
        float3 p[2];
        USHORT inx[2];

        bool operator == (const Edge& o) const
        {
            if (inx[0] == o.inx[0] && inx[1] == o.inx[1])
                return true;

            if (Equal(p[0].x, o.p[0].x) && Equal(p[0].y, o.p[0].y) && Equal(p[0].z, o.p[0].z))
            {
                if (Equal(p[1].x, o.p[1].x) && Equal(p[1].y, o.p[1].y) && Equal(p[1].z, o.p[1].z))
                {
                    return true;
                }
            }

            return false;
        }
    };
 
    // reverse edges
    std::vector<Edge> edges;
    edges.resize(ind.size());
    for (std::size_t i = 0; i < ind.size(); i += 3)
    {
        edges[i + 0].p[1]   = verts[ind[i + 0]].pos;
        edges[i + 0].p[0]   = verts[ind[i + 1]].pos;
        edges[i + 0].inx[1] = ind[i + 0];
        edges[i + 0].inx[0] = ind[i + 1];

        edges[i + 1].p[1]   = verts[ind[i + 1]].pos;
        edges[i + 1].p[0]   = verts[ind[i + 2]].pos;
        edges[i + 1].inx[1] = ind[i + 1];
        edges[i + 1].inx[0] = ind[i + 2];

        edges[i + 2].p[1]   = verts[ind[i + 2]].pos;
        edges[i + 2].p[0]   = verts[ind[i + 0]].pos;
        edges[i + 2].inx[1] = ind[i + 2];
        edges[i + 2].inx[0] = ind[i + 0];
    }
 
    // compare
    for (std::size_t i = 0, j = 0; i < out.size(); i += 9, j += 3)
    {
        Edge e;

        // edge 0
        e.p[0]   = verts[out[i + 0]].pos;
        e.p[1]   = verts[out[i + 1]].pos;
        e.inx[0] = out[i + 0];
        e.inx[1] = out[i + 1];

        if (e == edges[j + 0])
        {
            out[i + 0] = edges[j + 0].inx[0];
            out[i + 1] = edges[j + 0].inx[1];
        }

        // edge 1
        e.p[0]   = verts[out[i + 1]].pos;
        e.p[1]   = verts[out[i + 2]].pos;
        e.inx[0] = out[i + 1];
        e.inx[1] = out[i + 2];

        if (e == edges[j + 0])
        {
            out[i + 1] = edges[j + 0].inx[0];
            out[i + 2] = edges[j + 0].inx[1];
        }

        // edge 2
        e.p[0]   = verts[out[i + 2]].pos;
        e.p[1]   = verts[out[i + 3]].pos;
        e.inx[0] = out[i + 2];
        e.inx[1] = out[i + 3];

        if (e == edges[j + 0])
        {
            out[i + 2] = edges[j + 0].inx[0];
            out[i + 3] = edges[j + 0].inx[1];
        }

        // edge 3
        e.p[0] = verts[out[i + 3]].pos;
        e.p[1] = verts[out[i + 4]].pos;
        e.inx[0] = out[i + 3];
        e.inx[1] = out[i + 4];

        if (e == edges[j + 1])
        {
            out[i + 3] = edges[j + 1].inx[0];
            out[i + 4] = edges[j + 1].inx[1];
        }

        // edge 4
        e.p[0] = verts[out[i + 4]].pos;
        e.p[1] = verts[out[i + 5]].pos;
        e.inx[0] = out[i + 4];
        e.inx[1] = out[i + 5];

        if (e == edges[j + 1])
        {
            out[i + 4] = edges[j + 1].inx[0];
            out[i + 5] = edges[j + 1].inx[1];
        }

        // edge 5
        e.p[0] = verts[out[i + 5]].pos;
        e.p[1] = verts[out[i + 6]].pos;
        e.inx[0] = out[i + 5];
        e.inx[1] = out[i + 6];

        if (e == edges[j + 1])
        {
            out[i + 5] = edges[j + 1].inx[0];
            out[i + 6] = edges[j + 1].inx[1];
        }

        // edge 6
        e.p[0] = verts[out[i + 6]].pos;
        e.p[1] = verts[out[i + 7]].pos;
        e.inx[0] = out[i + 6];
        e.inx[1] = out[i + 7];

        if (e == edges[j + 2])
        {
            out[i + 6] = edges[j + 2].inx[0];
            out[i + 7] = edges[j + 2].inx[1];
        }

        // edge 7
        e.p[0]   = verts[out[i + 7]].pos;
        e.p[1]   = verts[out[i + 8]].pos;
        e.inx[0] = out[i + 7];
        e.inx[1] = out[i + 8];

        if (e == edges[j + 2])
        {
            out[i + 7] = edges[j + 2].inx[0];
            out[i + 8] = edges[j + 2].inx[1];
        }

        // edge 8
        e.p[0] = verts[out[i + 8]].pos;
        e.p[1] = verts[out[i + 0]].pos;
        e.inx[0] = out[i + 8];
        e.inx[1] = out[i + 0];

        if (e == edges[j + 2])
        {
            out[i + 8] = edges[j + 2].inx[0];
            out[i + 0] = edges[j + 2].inx[1];
        }
    }
}
Edited by belfegor



I don't suppose there's any video / audio recording of the presentation using those slides available somewhere, possibly for a nominal fee?

 

Not that I know of, sorry.

 


On a more off-topic note, I recognize that avatar of yours but have been unable to remember the name of the show (or was it possibly a book?) it featured in, care to enlighten me?

 

It's Rocko!


I think I had some mistakes in the code; I'm having a hard time understanding what exactly I need to do.

Here is the corrected version, but it still gives wrong results. :(

void Test::calcPNAENIndices(const std::vector<USHORT>& ind, const std::vector<VERTEX>& verts, std::vector<USHORT>& out)
{
    struct Edge
    {
        float3 p[2];
        USHORT inx[2];

        bool operator == (const Edge& o) const
        {
            if (inx[0] == o.inx[0] && inx[1] == o.inx[1])
                return true;

            if (Equal(p[0].x, o.p[0].x) && Equal(p[0].y, o.p[0].y) && Equal(p[0].z, o.p[0].z))
            {
                if (Equal(p[1].x, o.p[1].x) && Equal(p[1].y, o.p[1].y) && Equal(p[1].z, o.p[1].z))
                {
                    return true;
                }
            }

            return false;
        }
    };

    std::vector<Edge> edges(ind.size());
    out.resize(ind.size() * 3);

    for (std::size_t i = 0; i < ind.size(); i += 3)
    {
        // initial values
        out[3 * i + 0] = ind[i + 0];
        out[3 * i + 1] = ind[i + 1];
        out[3 * i + 2] = ind[i + 2];

        out[3 * i + 3] = ind[i + 0];
        out[3 * i + 4] = ind[i + 1];
        out[3 * i + 5] = ind[i + 1];

        out[3 * i + 6] = ind[i + 2];
        out[3 * i + 7] = ind[i + 2];
        out[3 * i + 8] = ind[i + 0];

        // store reversed
        edges[i + 0].p[1]   = verts[ind[i + 0]].pos;
        edges[i + 0].p[0]   = verts[ind[i + 1]].pos;
        edges[i + 0].inx[1] = ind[i + 0];
        edges[i + 0].inx[0] = ind[i + 1];

        edges[i + 1].p[1]   = verts[ind[i + 1]].pos;
        edges[i + 1].p[0]   = verts[ind[i + 2]].pos;
        edges[i + 1].inx[1] = ind[i + 1];
        edges[i + 1].inx[0] = ind[i + 2];

        edges[i + 2].p[1]   = verts[ind[i + 2]].pos;
        edges[i + 2].p[0]   = verts[ind[i + 0]].pos;
        edges[i + 2].inx[1] = ind[i + 2];
        edges[i + 2].inx[0] = ind[i + 0];
    }

    for (std::size_t i = 0; i < out.size(); i += 9)
    {
        // Skip the first 3 indices (the triangle corners) and check the edge pairs.
        // (Note: this walks every consecutive pair from slot 3 to 8, including the
        // wrap-around (8,3); the actual AEN edge slots are only (3,4), (5,6) and (7,8).)
        for (std::size_t j = 3; j < 9; ++j)
        {
            std::size_t first  = j;
            std::size_t second = j + 1;
            if (second == 9)
                second = 3;

            Edge e;
            e.p[0]   = verts[out[i + first]].pos;
            e.p[1]   = verts[out[i + second]].pos;
            e.inx[0] = out[i + first];
            e.inx[1] = out[i + second];

            for (std::size_t k = 0; k < edges.size(); ++k)
            {
                if (e == edges[k])
                {
                    out[i + first]  = edges[k].inx[0];
                    out[i + second] = edges[k].inx[1];
                }
            }
        }
    }
}

Can someone please take a look and decipher the instructions given for this to work:

 

 

1. Create an output IB that is 3 times the size of the input IB.

2. For each input Triangle in the IB, with indices i0, i1 and i2:
    a. Write out an initial output entry of: i0, i1, i2, i0, i1, i1, i2, i2, i0, which sets edges to initially be neighbors of themselves. This would produce identical results to PN Triangles.
    b. Look up the positions p0, p1 and p2, using i0, i1 and i2 to find the position of the associated vertex in the VB.
    c. Define 3 Edges, which consist of the two indices and two positions that make up the corresponding Edge. An Edge should consist of the origin index, the destination index, the origin position and the destination position.
    d. For each edge, store the reverse of that edge in an easily searchable data structure for the next step. The reference implementation uses an stdext::hash_map<Edge, Edge> for this purpose. Reverse simply flips the sense of the edge (originating at the destination position and index and heading to the origin position and index).

3. Walk the output index buffer (OB) constructed in step 2. For each patch of 9 indices:
    a. For each Edge in the current Patch, perform a lookup into the Edge->Edge mapping created in step 2d.
    b. If found, replace the current indices with the indices found in the map. Note that two edges should be considered matching if their "from" and "to" indices match, OR if their "from" and "to" positions match.
    c. If not, continue to use the existing indices.

Upon completion of this algorithm, a buffer suitable for use with PN-AEN will be available.
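Read literally, steps 2d and 3 map onto a reverse-edge hash map. A minimal sketch of that shape (not the reference implementation; the Edge struct with inx/reverse(), the makeEdge helper and the EdgePosHasher are assumed names):

#include <unordered_map>
#include <vector>

// Step 2d: for every triangle edge, store its reverse in a map, so that a
// neighboring triangle's shared edge (which runs with opposite winding in a
// consistently wound mesh) finds it on lookup.
std::unordered_map<Edge, Edge, EdgePosHasher> reversed;
for (std::size_t t = 0; t < ind.size(); t += 3)
    for (int k = 0; k < 3; ++k)
    {
        Edge e = makeEdge(ind[t + k], ind[t + (k + 1) % 3], verts);
        reversed.emplace(e.reverse(), e); // key: reversed edge, value: original
    }

// Step 3: walk each 9-index patch. Only the three edge slots (3,4), (5,6)
// and (7,8) are candidates; the first three indices stay the corners.
for (std::size_t i = 0; i < out.size(); i += 9)
    for (std::size_t k = 3; k < 9; k += 2)
    {
        Edge e = makeEdge(out[i + k], out[i + k + 1], verts);
        auto it = reversed.find(e);
        if (it != reversed.end()) // step 3b: splice in the neighbor's indices
        {
            out[i + k]     = it->second.inx[1]; // neighbor's index at the origin position
            out[i + k + 1] = it->second.inx[0]; // neighbor's index at the destination position
        }
        // step 3c: otherwise keep the self-referencing indices
    }

Note that the hasher must hash the two positions only, not the indices; otherwise edges that should match by position land in different buckets (this comes up again further down in the thread).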

 

Edited by belfegor


That's that. Thank you very much. :)

 

With "initial" IB (notice cracks):

[Image: crack.jpg]

 

PN-AEN:

[Image: good.jpg]

 

Now I would like to displace the position along the normal using a heightmap, but I don't know how to get an average normal to pass to the domain shader. Any pointers?
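One way to get such an average normal, in the spirit of the suggestions further down in the thread, is a preprocessing pass that averages the normals of all vertices sharing a position and stores the result in a separate vertex attribute for the domain shader to displace along. A rough sketch (VERTEX with pos/normal fields, a displacementNormal attribute, the epsilon samePos test from the sketch earlier in the thread, and float3 arithmetic are all assumptions):

#include <vector>

// Average the normals of all vertices that share a position so that
// overlapping vertices displace in the same direction. The result goes into
// a separate attribute to avoid disturbing the shading normals.
void averageSharedNormals(std::vector<VERTEX>& verts)
{
    std::vector<float3> averaged(verts.size());
    for (std::size_t i = 0; i < verts.size(); ++i)
    {
        float3 sum = verts[i].normal;
        for (std::size_t j = 0; j < verts.size(); ++j)         // O(n^2) for clarity;
            if (j != i && samePos(verts[i].pos, verts[j].pos)) // a position-keyed map scales better
                sum += verts[j].normal;
        averaged[i] = normalize(sum);
    }
    for (std::size_t i = 0; i < verts.size(); ++i)
        verts[i].displacementNormal = averaged[i];
}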

Now the real challenge starts. Here the sample goes the easy way and just passes the normals from the inner triangle and interpolates linearly. "Correct" PN triangles use a quadratic Bezier patch for the normals, examples of which you can find in the (June 2010 SDK) samples or in the Hieroglyph 3 engine (there's also a full chapter in the Practical Rendering book).
There's also a displacement tessellation sample using decals in the SDK which might be worth a peek.

Either way, it sounds complex, as the presentation MJP linked to shows. Can't help you any further for now; I haven't done displacement mapping (not counting terrain). For a simple start, maybe you can get away with averaging normals at corners (not using a Bezier quad patch, and taking averaged normals from all adjacent triangles). But that's really just an idea.

Anyway, it looks like it also needs special care on the content creation side (citing the above presentation).

Speaking of which (and out of curiosity): why does that pig/boar generate cracks? I wouldn't expect them on organic surfaces (the initial index buffer results in the usual PN triangles). Do you have a link to that mesh?



It's Rocko!

Ah yes, haha, I remember watching that in the mid-to-late '90s. Maybe worth a rewatch now that I'm old enough to actually understand it better than back then...

 

As for the displacement mapping, I eventually settled on a rather simple approach for use with pre-existing triangle list meshes. It is by no means perfect, but it seems the "real" solution is simply having the 3D / texturing artist(s) be aware of your intent: have them author appropriately mapped bump maps and ensure corners are actually rounded, albeit with extremely short edges between corner vertices, which seemed to be the main point of the chapter in Zink, Pettineo & Hoxley (2011) that unbird referred to above. Basically, I average all disjoint vertex normals and create a 6-control-point patch index buffer that holds the initial triangle in the first 3 indices and any "dominant" vertices sharing the positions of these original vertices in the last 3 indices. The dominant vertices are arbitrarily chosen as the first found in the vertex data for a given position, and are set to the same vertex as in the first three indices if there are no shared vertices at a given position. The dominant vertex ensures that all overlapping vertices sample the same value from the displacement map, while still allowing them to sample other textures by their own UV coords.

Not perfect but good enough for some general-purpose examples.

As for actually employing this kind of displacement mapping in a more professional game / visual demo / what-have-you: the artist should ensure that there are *no* disjoint vertices (as said, such vertices can be moved slightly apart and be connected by a short edge, allowing the corners to appear mostly sharp), and a secondary set of texture coordinates should be provided for the displacement map; alternatively, the displacement map should be handcrafted such that when displaced (with a reasonable strength / height factor), faces won't extrude through each other.

Of course there exist more elaborate ways, as described in the previous posts; I just thought I would share this in case anyone finds it interesting. After tracking down a copy of that Practical Rendering book just for the tessellation chapter, it didn't say much more about this particular problem, so I might as well save others the trouble.
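For the curious, the 6-control-point patch construction described above boils down to something like this (a sketch under the same assumptions as earlier in the thread; samePos is an epsilon position test):

#include <vector>

// Build a 6-control-point patch index buffer: indices 0-2 are the original
// triangle, indices 3-5 the "dominant" vertex for each corner position (the
// first vertex in the buffer with that position, falling back to the corner
// itself). The domain shader samples the displacement map with the dominant
// vertex's UVs so all overlapping vertices read the same height value.
void buildDominantPatchIndices(const std::vector<USHORT>& ind,
                               const std::vector<VERTEX>& verts,
                               std::vector<USHORT>& out)
{
    out.resize(ind.size() * 2); // 6 indices per input triangle
    for (std::size_t i = 0; i < ind.size(); i += 3)
        for (int k = 0; k < 3; ++k)
        {
            const USHORT v = ind[i + k];
            out[2 * i + k] = v; // original corner

            USHORT dom = v;
            for (USHORT j = 0; j < v; ++j)               // first vertex at this position wins
                if (samePos(verts[j].pos, verts[v].pos)) { dom = j; break; }
            out[2 * i + 3 + k] = dom;
        }
}

Such a buffer would then be drawn as a 6-control-point patch list (D3D11_PRIMITIVE_TOPOLOGY_6_CONTROL_POINT_PATCHLIST) with a matching InputPatch<..., 6> in the hull shader.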


Hi,

I've been struggling with PN-AEN for two weeks now and I'm not sure where my mistake could be... I'm pretty sure the indices are correct, since I tried out unbird's example and get the same indices. However, my result looks like this:

[Image: screenshot.png]

 

I get the same result as with PN Triangles, no difference.

 

My TCS is:

#version 420
layout (vertices = 9) out;

in vec3 vp[];
in vec2 UV[];
in vec3 vn[];

out vec3 vpos[];
out vec2 outuv[];
out vec3 vnor[];

uniform vec3  tessLevelOuter;
uniform float tessLevelInner;

patch out Patch
{
    vec3 b210;
    vec3 b120;
    vec3 b021;
    vec3 b012;
    vec3 b102;
    vec3 b201;
    vec3 b111;
    vec3 n110;
    vec3 n011;
    vec3 n101;
    vec2 t110;
    vec2 t011;
    vec2 t101;
} OutPatch;

#define b300 vp[0]
#define b030 vp[1]
#define b003 vp[2]

#define n200 vn[0]
#define n020 vn[1]
#define n002 vn[2]

#define t200 UV[0]
#define t020 UV[1]
#define t002 UV[2]

void main()
{
    if (gl_InvocationID == 0)
    {
        gl_TessLevelOuter[0] = tessLevelOuter[0];
        gl_TessLevelOuter[1] = tessLevelOuter[1];
        gl_TessLevelOuter[2] = tessLevelOuter[2];
        gl_TessLevelInner[0] = tessLevelInner;
    }

    vpos[gl_InvocationID]  = vp[gl_InvocationID];
    vnor[gl_InvocationID]  = vn[gl_InvocationID];
    outuv[gl_InvocationID] = UV[gl_InvocationID];

    if (gl_InvocationID == 0)
    {
        // Edge control points: average of the PN projection built from this
        // triangle's own corners (vp[0..2] / vn[0..2]) and the projection
        // built from the AEN neighbor data (control points 3..8).
        OutPatch.b210 = (2.0 * b300 + b030 - dot(b030 - b300, n200) * n200 + 2.0 * vp[3] + vp[4] - dot(vp[4] - vp[3], vn[3]) * vn[3]) / 6.0;
        OutPatch.b120 = (2.0 * b030 + b300 - dot(b300 - b030, n020) * n020 + 2.0 * vp[4] + vp[3] - dot(vp[3] - vp[4], vn[4]) * vn[4]) / 6.0;
        OutPatch.b021 = (2.0 * b030 + b003 - dot(b003 - b030, n020) * n020 + 2.0 * vp[5] + vp[6] - dot(vp[6] - vp[5], vn[5]) * vn[5]) / 6.0;
        OutPatch.b012 = (2.0 * b003 + b030 - dot(b030 - b003, n002) * n002 + 2.0 * vp[6] + vp[5] - dot(vp[5] - vp[6], vn[6]) * vn[6]) / 6.0;
        OutPatch.b102 = (2.0 * b003 + b300 - dot(b300 - b003, n002) * n002 + 2.0 * vp[7] + vp[8] - dot(vp[8] - vp[7], vn[7]) * vn[7]) / 6.0;
        OutPatch.b201 = (2.0 * b300 + b003 - dot(b003 - b300, n200) * n200 + 2.0 * vp[8] + vp[7] - dot(vp[7] - vp[8], vn[8]) * vn[8]) / 6.0;

        // Center control point
        OutPatch.b111 = (OutPatch.b210 + OutPatch.b120 + OutPatch.b021 +
                         OutPatch.b012 + OutPatch.b102 + OutPatch.b201) / 4.0 - (b300 + b030 + b003) / 6.0;

        // Quadratic normal patch coefficients
        const vec3 d0 = b030 - b300;
        const vec3 d1 = b003 - b030;
        const vec3 d2 = b300 - b003;
        const vec3 n0 = n020 + n200;
        const vec3 n1 = n002 + n020;
        const vec3 n2 = n200 + n002;
        const vec3 v0 = (2.0 * dot(d0, n0) / dot(d0, d0)) * d0;
        const vec3 v1 = (2.0 * dot(d1, n1) / dot(d1, d1)) * d1;
        const vec3 v2 = (2.0 * dot(d2, n2) / dot(d2, d2)) * d2;
        OutPatch.n110 = normalize(n0 - v0);
        OutPatch.n011 = normalize(n1 - v1);
        OutPatch.n101 = normalize(n2 - v2);
    }
}

Any ideas what I'm doing wrong?


Off the top of my head: each individual face of your cube is tessellated and then displaced.

You need to ensure that the edge vertices are shared between the (subdivided) side faces, or else these seams will occur, since all vertices on the top face are displaced only along the up axis while all vertices of the front face are displaced only along the depth axis.

A simple solution is to displace along the vertex normals and ensure that wherever you have overlapping vertices (such as at the corners of a cube), you set the normal of all such vertices to the average of all "actual" vertex normals at that position (a sketch of such an averaging pass appears earlier in the thread). This will make the edges a bit more bulky but keeps the faces connected.

 

My previous post in this thread (just above yours) describes in more detail how I solved this in a relatively simple way.


I'm not doing any displacement mapping so far.

My TES looks like this:

vec3 vp = vpos[0] * w * w * w +
          b030 * u * u * u +
          b003 * v * v * v +
          InPatch.b210 * 3.0 * w * w * u +
          InPatch.b120 * 3.0 * w * u * u +
          InPatch.b201 * 3.0 * w * w * v +
          InPatch.b021 * 3.0 * u * u * v +
          InPatch.b102 * 3.0 * w * v * v +
          InPatch.b012 * 3.0 * u * v * v +
          InPatch.b111 * 6.0 * w * u * v;

vec3 vn = normalize( n200 * w * w + n020 * u * u + n002 * v * v +
                     InPatch.n110 * w * u + InPatch.n011 * u * v + InPatch.n101 * w * v );

About the screenshot above: I'm not applying any displacement yet; the cube simply cracks from smoothing the surface. As far as I understood, this is a known issue with PN, but PN-AEN shouldn't have this problem.

I know that you meant dominant UVs to solve displacement-map cracking, but I still have PN cracking with PN-AEN.


I just fed a standard cube to my shader and it stays a cube with flat sides, no matter the tessellation factors (both PN and PN-AEN). Makes sense if you use the axis-aligned normals and not averaged ones, as suggested several times (the Bezier surfaces are indeed flat if all normals are equal). Maybe you stumbled across the same bug I did (vertex shader: normals not normalized, therefore wrong control point calculation).

 

Edit: How does your PN behave for a nice model like a sphere?

Edited by unbird


I just fed a standard cube to my shader and it stays a cube with flat sides, no matter the tessellation factors (both PN and PN-AEN). Makes sense if you use the axis-aligned normals and not averaged ones, as suggested several times (the Bezier surfaces are indeed flat if all normals are equal).

At first I used a standard cube generated by Maya and the cube stayed flat exactly as you said. Then I changed the normals of the cube like this:

[Image: cubes.png]

PN got the cracks as expected, but PN-AEN doesn't look smooth like in the graphic above; it looks 100% like PN.

Maybe you stumbled across the same bug I did (vertex shader: normals not normalized, therefore wrong control point calculation).

I tried that, adding a normalize in the vertex shader, but no difference; still cracks.

 

Edit: How does your PN behave for a nice model like a sphere?

Looks smooth, no cracks. No difference to flat tessellation or PN-AEN.

OK, now I made you move in circles, sorry about that. I can only guess: either your re-indexing goes wrong or the shader has a bug (well, not very helpful either, I know).
 
You might show your re-indexing code; maybe someone spots something. Also, my test case is probably too small. Can you put that cube up for download (preferably in a common exchange format like Wavefront)? I'll run it through my reindexer and give you the results.
 
As for the shader: can't help you there, I'm really just a D3D guy ;). You might have more luck in the OpenGL forum. Though if you carefully translated it from the paper, it should work (apart from the vertex shader bug).


You might show your re-indexing code; maybe someone spots something. Also, my test case is probably too small.

 

Here is my indexing code:

void indexPNAEN(std::vector<unsigned short> indices, std::vector<glm::vec3> & in_vertices, std::vector<unsigned short> & out_indicesaen)
{
	out_indicesaen.resize(indices.size()*3); //step 1 Create an output IB that is 3 times the size of input IB.
	//step 2 c define edge
	struct Edge
	{
		glm::vec3 p[2];
		short ind[2];

		bool operator == (const Edge& o) const
		{
			if (ind[0] == o.ind[0] && ind[1] == o.ind[1])
				return true;

			if (p[0] == o.p[0] && p[1] == o.p[1])
			{

					return true;
			}

			return false;
		}

	/*	bool operator != (const Edge& o) const{
			return !(*this == o);
		}*/


		Edge reverse()
		{
			Edge returnedge;
			returnedge.p[0] = p[1];
			returnedge.p[1] = p[0];
			returnedge.ind[0] = ind[1];
			returnedge.ind[1] = ind[0];

			return returnedge;	
		}
	};

	struct KeyHasher
	{
		std::size_t operator()(const Edge& k) const
		{
			using boost::hash_value;
			using boost::hash_combine;

			// Start with a hash value of 0.
			std::size_t seed = 0;

			// Combine each member of the key into the seed:
			for (int hashi = 0; hashi < 2; hashi++)
			{
				hash_combine(seed, hash_value(k.p[hashi].x));
				hash_combine(seed, hash_value(k.p[hashi].y));
				hash_combine(seed, hash_value(k.p[hashi].z));
				// BUG (identified later in the thread): hashing the index as
				// well makes edges that compare equal by position only hash
				// differently, so the map lookup misses them.
				hash_combine(seed, hash_value(k.ind[hashi]));
			}

			// Return the result.
			return seed;
		}
	};



	std::unordered_map<Edge, Edge,KeyHasher> edges(indices.size()); 

	//step 2
	for (int i = 0; i < indices.size(); i += 3) //For each input Triangle in IB,
	{
		out_indicesaen[3 * i] = indices[i]; //i0
		out_indicesaen[3 * i + 1] = indices[i+1]; //i1
		out_indicesaen[3 * i + 2] = indices[i + 2]; //i2

		out_indicesaen[3 * i + 3] = indices[i]; //i0
		out_indicesaen[3 * i + 4] = indices[i + 1]; //i1
		out_indicesaen[3 * i + 5] = indices[i + 1]; //i1

		out_indicesaen[3 * i + 6] = indices[i+2]; //i2
		out_indicesaen[3 * i + 7] = indices[i + 2]; //i2
		out_indicesaen[3 * i + 8] = indices[i]; //i0

		//2b and 2d

		// BUG (identified later in the thread): the double lookups
		// indices[indices[...]] below should simply be indices[...].
		Edge edge0;
		edge0.p[0] = in_vertices[indices[i + 1]];
		edge0.p[1] = in_vertices[indices[i]];
		edge0.ind[1] = indices[indices[i]];     // should be indices[i]
		edge0.ind[0] = indices[indices[i + 1]]; // should be indices[i + 1]
		edges.emplace(edge0.reverse(), edge0);

		Edge edge1;
		edge1.p[0] = in_vertices[indices[i + 2]];
		edge1.p[1] = in_vertices[indices[i + 1]];
		edge1.ind[1] = indices[indices[i + 1]]; // should be indices[i + 1]
		edge1.ind[0] = indices[indices[i + 2]]; // should be indices[i + 2]
		edges.emplace(edge1.reverse(), edge1);

		Edge edge2;
		edge2.p[0] = in_vertices[indices[i]];
		edge2.p[1] = in_vertices[indices[i + 2]];
		edge2.ind[1] = indices[indices[i + 2]]; // should be indices[i + 2]
		edge2.ind[0] = indices[indices[i]];     // should be indices[i]
		edges.emplace(edge2.reverse(), edge2);
	}


	//step 3 Walk the output index buffer (OB) constructed in step 2. For each patch of 9 indices:
	for (int i = 3; i < out_indicesaen.size(); i += 9)
	{

		//3a For each Edge in the current Patch, perform a lookup into Edge->Edge mapping created in step 2d.
		for (int k = 0; k < 6; k += 2)
		{
			int i0 = out_indicesaen[i + k];
			int i1 = out_indicesaen[i + k + 1];
			Edge temp;
			temp.ind[0] = i1;
			temp.ind[1] = i0;
			temp.p[0] = in_vertices[i1];
			temp.p[1] = in_vertices[i0];

			auto foundIt = edges.find(temp);
			if (foundIt!=edges.end()) //look up in edge vector
			{
				const Edge& second = foundIt->second;
				out_indicesaen[i + k] = second.ind[1];
				out_indicesaen[i + k + 1] = second.ind[0];
				
			}

		}

	}

}

Can you put that cube up for download (preferably in a common exchange format like Wavefront)? I'll run it through my reindexer and give you the results.

Oh, thank you! The cube can be downloaded here. (I'm not permitted to attach an .obj file.) I'm starting to get frustrated with PN-AEN... I haven't been able to find the bug for two weeks now. It's part of my master's thesis.


Ok. I had to adjust the import (Assimp, join identical vertices), so only 24 instead of 36 distinct vertices. In any case, I included a dump of the vertices so you can reconstruct the mesh programmatically. It's a bloated HTML log (the indices are dumped both as tables and "raw" for easier copying).

 

From a quick glance, I think your reindexer is fine. 

 

Edit: Wait, try something first. Hash only the positions (x, y, z); the map will fail otherwise (at least it did for my C# dictionary).
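In code terms, that means dropping the index from the hasher posted above; two edges that compare equal by position must land in the same bucket, so the indices may not contribute to the hash:

	struct KeyHasher
	{
		std::size_t operator()(const Edge& k) const
		{
			using boost::hash_value;
			using boost::hash_combine;

			std::size_t seed = 0;
			for (int hashi = 0; hashi < 2; hashi++)
			{
				hash_combine(seed, hash_value(k.p[hashi].x));
				hash_combine(seed, hash_value(k.p[hashi].y));
				hash_combine(seed, hash_value(k.p[hashi].z));
				// no hash of k.ind[hashi]: edges that compare equal by
				// position only would otherwise hash differently and the
				// lookup would miss them
			}
			return seed;
		}
	};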


My reindexer is indeed wrong; I get totally different indices. I copied your indices manually into the vector and the cracks are gone! The shader works!

I'll try to hash only the positions. I hope that's the bug! Thank you very much for your help; the HTML log is awesome!

 

Edit: I found out that my edge map is already wrong: I have only 32 entries while your map has 36 entries.

Edited by windschuetze


I solved my problem! I had several bugs:

  • Filling the edge map used indices[indices[...]] instead of indices[...]

  • I removed the hashing of the index and it worked!

Thank you very much, I couldn't have done it without your help!

 

Now my cube looks smooth and crack-free:

[Image: screenshot16111.png]

Edited by windschuetze


 


As for the displacement mapping I eventually settled on a rather simple approach [...] The dominant vertex ensures that all overlapping vertices sample the same value from the displacement map, while still allowing them to sample other textures by their own UV coords.

 

Hi,

I'm trying to implement the dominant data, but I'm struggling again... I'm not sure how to understand the GDC 2012 slides when generating the index buffer:

  • Can use AEN edge data as well

vs:

  • All shared vertices must have the same dominant data
  • Both edge vertices must be from the same primitive

 

I tried with the PN-AEN index buffer, adding 3 arbitrarily chosen vertices, but it didn't work.

Then I found on the Maya homepage a description of a PNAEN18 implementation using dominant data with an 18-entry index buffer and tried to adapt it. But the result isn't good either; the UVs don't look smoother:

 

[Image: screenshot0312.png]

 

My index buffer now looks like this for unbird's example:

0,1,2,0,1,4,3,2,0,0,1,4,3,2,0,0,4,2,3,4,5,2,1,4,5,5,3,4,3,4,5,5,3,4,4,5

(See also the .xls attachment for details.)

I'm pretty sure that the shader interpolates right:

// Barycentric classification: corners (one coordinate == 1), edges (one
// coordinate == 0, the other two non-zero), interior (all non-zero).
float uCorner  = (u == 1 ? 1 : 0),
      vCorner  = (v == 1 ? 1 : 0),
      wCorner  = (w == 1 ? 1 : 0),
      uEdge    = (u == 0 && (v * w) != 0 ? 1 : 0),
      vEdge    = (v == 0 && (u * w) != 0 ? 1 : 0),
      wEdge    = (w == 0 && (u * v) != 0 ? 1 : 0),
      interior = ((u * v * w) != 0 ? 1 : 0);

// Corners take the dominant vertex UV, edges interpolate the dominant edge
// UVs, interior points keep the regular interpolated UV.
vec2 displaceCoord = uCorner * InPatch.domVert[0]
                   + vCorner * InPatch.domVert[1]
                   + wCorner * InPatch.domVert[2]
                   + uEdge * mix(InPatch.domEdge0[1], InPatch.domEdge1[1], v)
                   + vEdge * mix(InPatch.domEdge0[2], InPatch.domEdge1[2], w)
                   + wEdge * mix(InPatch.domEdge0[0], InPatch.domEdge1[0], u)
                   + interior * UV;

Any clue?

Share this post


Link to post
Share on other sites
Sign in to follow this  

  • Advertisement
  • Advertisement
  • Popular Tags

  • Advertisement
  • Popular Now

  • Similar Content

    • By mister345
      Hi, I'm on Rastertek series 42, soft shadows, which uses a blur shader and runs extremely slow.
      http://www.rastertek.com/dx11tut42.html
      He obnoxiously states that there are many ways to optimize his blur shader, but gives you no idea how to do it.
      The way he does it is :
      1. Project the objects in the scene to a render target using the depth shader.
      2. Draw black and white shadows on another render target using those depth textures.
      3. Blur the black/white shadow texture produced in step 2 by 
      a) rendering it to a smaller texture
      b) vertical / horizontal blurring that texture
      c) rendering it back to a bigger texture again.
      4. Send the blurred shadow texture into the final shader, which samples its black/white values to determine light intensity.
       
      So this uses a ton of render textures, and I just added more than one light, which multiplies the render textures required.
       
      Is there any easy way I can optimize the super expensive blur shader that wouldnt require a whole new complicated system?
      Like combining any of these render textures into one for example?
       
      If you know of any easy way not requiring too many changes, please let me know, as I already had a really hard time
      understanding the way this works, so a super complicated change would be beyond my capacity. Thanks.
       
      *For reference, here is my repo, in which I have simplified his tutorial and added an additional light.
       
      https://github.com/mister51213/DX11Port_SoftShadows/tree/MultiShadows
       
    • By evelyn4you
      hi,
      after implementing skinning with compute shader i want to implement skinning with VertexShader Streamout method to compare performance.
      The following Thread is a discussion about it.
      Here's the recommended setup:
      Use a pass-through geometry shader (point->point), setup the streamout and set topology to point list. Draw the whole buffer with context->Draw(). This gives a 1:1 mapping of the vertices. Later bind the stream out buffer as vertex buffer. Bind the index buffer of the original mesh. draw with DrawIndexed like you would with the original mesh (or whatever draw call you had). I know the reason why a point list as input is used, because when using the normal vertex topology as input the output would be a stream of "each of his own" primitives that would blow up the vertexbuffer. I assume a indexbuffer then would be needless ?
      But how can you transform position and normal in one step when feeding the pseudo Vertex/Geometry Shader with a point list ?
      In my VertexShader i first calculate the resulting transform matrix from bone indexes(4) und weights (4) and transform position and normal with the same resulting transform Matrix.
      Do i have to run 2 passes ? One for transforming position and one for transforming normal ?
      I think it could be done better ?
      thanks for any help
       
    • By derui
      i am new to directx. i just followed some tutorials online and started to program. It had been well till i faced this problem of loading my own 3d models from 3ds max exported as .x which is supported by directx. I am using c++ on visual studio 2010 and directX9. i really tried to find help on the net but i couldn't find which can solve my problem. i don't know where exactly the problem is. i run most of samples and examples all worked well. can anyone give me the hint or solution for my problem ?
      thanks in advance!
    • By DiligentDev
      This article uses material originally posted on Diligent Graphics web site.
      Introduction
      Graphics APIs have come a long way from small set of basic commands allowing limited control of configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. Next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered industry standard. New APIs can provide substantial performance and functional improvements, but may not be supported by older hardware. An application targeting wide range of platforms needs to support Direct3D11 and OpenGL. New APIs will not give any advantage when used with old paradigms. It is totally possible to add Direct3D12 support to an existing renderer by implementing Direct3D11 interface through Direct3D12, but this will give zero benefits. Instead, new approaches and rendering architectures that leverage flexibility provided by the next-generation APIs are expected to be developed.
      There are at least four APIs (Direct3D11, Direct3D12, OpenGL/GLES, Vulkan, plus Apple's Metal for iOS and osX platforms) that a cross-platform 3D application may need to support. Writing separate code paths for all APIs is clearly not an option for any real-world application and the need for a cross-platform graphics abstraction layer is evident. The following is the list of requirements that I believe such layer needs to satisfy:
      Lightweight abstractions: the API should be as close to the underlying native APIs as possible to allow an application leverage all available low-level functionality. In many cases this requirement is difficult to achieve because specific features exposed by different APIs may vary considerably. Low performance overhead: the abstraction layer needs to be efficient from performance point of view. If it introduces considerable amount of overhead, there is no point in using it. Convenience: the API needs to be convenient to use. It needs to assist developers in achieving their goals not limiting their control of the graphics hardware. Multithreading: ability to efficiently parallelize work is in the core of Direct3D12 and Vulkan and one of the main selling points of the new APIs. Support for multithreading in a cross-platform layer is a must. Extensibility: no matter how well the API is designed, it still introduces some level of abstraction. In some cases the most efficient way to implement certain functionality is to directly use native API. The abstraction layer needs to provide seamless interoperability with the underlying native APIs to provide a way for the app to add features that may be missing. Diligent Engine is designed to solve these problems. Its main goal is to take advantages of the next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes common C++ front-end for all supported platforms and provides interoperability with underlying native APIs. It also supports integration with Unity and is designed to be used as graphics subsystem in a standalone game engine, Unity native plugin or any other 3D application. Full source code is available for download at GitHub and is free to use.
      Overview
      Diligent Engine API takes some features from Direct3D11 and Direct3D12 as well as introduces new concepts to hide certain platform-specific details and make the system easy to use. It contains the following main components:
      Render device (IRenderDevice  interface) is responsible for creating all other objects (textures, buffers, shaders, pipeline states, etc.).
      Device context (IDeviceContext interface) is the main interface for recording rendering commands. Similar to Direct3D11, there are immediate context and deferred contexts (which in Direct3D11 implementation map directly to the corresponding context types). Immediate context combines command queue and command list recording functionality. It records commands and submits the command list for execution when it contains sufficient number of commands. Deferred contexts are designed to only record command lists that can be submitted for execution through the immediate context.
      An alternative way to design the API would be to expose command queue and command lists directly. This approach however does not map well to Direct3D11 and OpenGL. Besides, some functionality (such as dynamic descriptor allocation) can be much more efficiently implemented when it is known that a command list is recorded by a certain deferred context from some thread.
      The approach taken in the engine does not limit scalability as the application is expected to create one deferred context per thread, and internally every deferred context records a command list in lock-free fashion. At the same time this approach maps well to older APIs.
      In current implementation, only one immediate context that uses default graphics command queue is created. To support multiple GPUs or multiple command queue types (compute, copy, etc.), it is natural to have one immediate contexts per queue. Cross-context synchronization utilities will be necessary.
      Swap Chain (ISwapChain interface). Swap chain interface represents a chain of back buffers and is responsible for showing the final rendered image on the screen.
      Render device, device contexts and swap chain are created during the engine initialization.
      Resources (ITexture and IBuffer interfaces). There are two types of resources - textures and buffers. There are many different texture types (2D textures, 3D textures, texture array, cubmepas, etc.) that can all be represented by ITexture interface.
      Resources Views (ITextureView and IBufferView interfaces). While textures and buffers are mere data containers, texture views and buffer views describe how the data should be interpreted. For instance, a 2D texture can be used as a render target for rendering commands or as a shader resource.
      Pipeline State (IPipelineState interface). GPU pipeline contains many configurable stages (depth-stencil, rasterizer and blend states, different shader stage, etc.). Direct3D11 uses coarse-grain objects to set all stage parameters at once (for instance, a rasterizer object encompasses all rasterizer attributes), while OpenGL contains myriad functions to fine-grain control every individual attribute of every stage. Both methods do not map very well to modern graphics hardware that combines all states into one monolithic state under the hood. Direct3D12 directly exposes pipeline state object in the API, and Diligent Engine uses the same approach.
      Shader Resource Binding (IShaderResourceBinding interface). Shaders are programs that run on the GPU. Shaders may access various resources (textures and buffers), and setting correspondence between shader variables and actual resources is called resource binding. Resource binding implementation varies considerably between different API. Diligent Engine introduces a new object called shader resource binding that encompasses all resources needed by all shaders in a certain pipeline state.
      API Basics
      Creating Resources
      Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. Graphics APIs usually have a native object that represents linear buffer. Diligent Engine uses IBuffer interface as an abstraction for a native buffer. To create a buffer, one needs to populate BufferDesc structure and call IRenderDevice::CreateBuffer() method as in the following example:
      BufferDesc BuffDesc; BufferDesc.Name = "Uniform buffer"; BuffDesc.BindFlags = BIND_UNIFORM_BUFFER; BuffDesc.Usage = USAGE_DYNAMIC; BuffDesc.uiSizeInBytes = sizeof(ShaderConstants); BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE; m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer ); While there is usually just one buffer object, different APIs use very different approaches to represent textures. For instance, in Direct3D11, there are ID3D11Texture1D, ID3D11Texture2D, and ID3D11Texture3D objects. In OpenGL, there is individual object for every texture dimension (1D, 2D, 3D, Cube), which may be a texture array, which may also be multisampled (i.e. GL_TEXTURE_2D_MULTISAMPLE_ARRAY). As a result there are nine different GL texture types that Diligent Engine may create under the hood. In Direct3D12, there is only one resource interface. Diligent Engine hides all these details in ITexture interface. There is only one  IRenderDevice::CreateTexture() method that is capable of creating all texture types. Dimension, format, array size and all other parameters are specified by the members of the TextureDesc structure:
      TextureDesc TexDesc; TexDesc.Name = "My texture 2D"; TexDesc.Type = TEXTURE_TYPE_2D; TexDesc.Width = 1024; TexDesc.Height = 1024; TexDesc.Format = TEX_FORMAT_RGBA8_UNORM; TexDesc.Usage = USAGE_DEFAULT; TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS; TexDesc.Name = "Sample 2D Texture"; m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex ); If native API supports multithreaded resource creation, textures and buffers can be created by multiple threads simultaneously.
      Interoperability with native API provides access to the native buffer/texture objects and also allows creating Diligent Engine objects from native handles. It allows applications seamlessly integrate native API-specific code with Diligent Engine.
      Next-generation APIs allow fine level-control over how resources are allocated. Diligent Engine does not currently expose this functionality, but it can be added by implementing IResourceAllocator interface that encapsulates specifics of resource allocation and providing this interface to CreateBuffer() or CreateTexture() methods. If null is provided, default allocator should be used.
      Initializing the Pipeline State
      As it was mentioned earlier, Diligent Engine follows next-gen APIs to configure the graphics/compute pipeline. One big Pipelines State Object (PSO) encompasses all required states (all shader stages, input layout description, depth stencil, rasterizer and blend state descriptions etc.). This approach maps directly to Direct3D12/Vulkan, but is also beneficial for older APIs as it eliminates pipeline misconfiguration errors. With many individual calls tweaking various GPU pipeline settings it is very easy to forget to set one of the states or assume the stage is already properly configured when in fact it is not. Using pipeline state object helps avoid these problems as all stages are configured at once.
      Creating Shaders
      While in earlier APIs shaders were bound separately, in the next-generation APIs as well as in Diligent Engine shaders are part of the pipeline state object. The biggest challenge when authoring shaders is that Direct3D and OpenGL/Vulkan use different shader languages (while Apple uses yet another language in their Metal API). Maintaining two versions of every shader is not an option for real applications and Diligent Engine implements shader source code converter that allows shaders authored in HLSL to be translated to GLSL. To create a shader, one needs to populate ShaderCreationAttribs structure. SourceLanguage member of this structure tells the system which language the shader is authored in:
      SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source language matches the underlying graphics API: HLSL for Direct3D11/Direct3D12 mode, and GLSL for OpenGL and OpenGLES modes. SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter, so this value should only be used for OpenGL and OpenGLES modes. There are two ways to provide the shader source code. The first way is to use Source member. The second way is to provide a file path in FilePath member. Since the engine is entirely decoupled from the platform and the host file system is platform-dependent, the structure exposes pShaderSourceStreamFactory member that is intended to provide the engine access to the file system. If FilePath is provided, shader source factory must also be provided. If the shader source contains any #include directives, the source stream factory will also be used to load these files. The engine provides default implementation for every supported platform that should be sufficient in most cases. Custom implementation can be provided when needed.
      When sampling a texture in a shader, the texture sampler was traditionally specified as separate object that was bound to the pipeline at run time or set as part of the texture object itself. However, in most cases it is known beforehand what kind of sampler will be used in the shader. Next-generation APIs expose new type of sampler called static sampler that can be initialized directly in the pipeline state. Diligent Engine exposes this functionality: when creating a shader, textures can be assigned static samplers. If static sampler is assigned, it will always be used instead of the one initialized in the texture shader resource view. To initialize static samplers, prepare an array of StaticSamplerDesc structures and initialize StaticSamplers and NumStaticSamplers members. Static samplers are more efficient and it is highly recommended to use them whenever possible. On older APIs, static samplers are emulated via generic sampler objects.
      The following is an example of shader initialization:
      ShaderCreationAttribs Attrs; Attrs.Desc.Name = "MyPixelShader"; Attrs.FilePath = "MyShaderFile.fx"; Attrs.SearchDirectories = "shaders;shaders\\inc;"; Attrs.EntryPoint = "MyPixelShader"; Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL; Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL; BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories); Attrs.pShaderSourceStreamFactory = &BasicSSSFactory; ShaderVariableDesc ShaderVars[] = {     {"g_StaticTexture", SHADER_VARIABLE_TYPE_STATIC},     {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},     {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC} }; Attrs.Desc.VariableDesc = ShaderVars; Attrs.Desc.NumVariables = _countof(ShaderVars); Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC; StaticSamplerDesc StaticSampler; StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR; StaticSampler.TextureName = "g_MutableTexture"; Attrs.Desc.NumStaticSamplers = 1; Attrs.Desc.StaticSamplers = &StaticSampler; ShaderMacroHelper Macros; Macros.AddShaderMacro("USE_SHADOWS", 1); Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4); Macros.Finalize(); Attrs.Macros = Macros; RefCntAutoPtr<IShader> pShader; m_pDevice->CreateShader( Attrs, &pShader );
      Creating the Pipeline State Object
      After all required shaders are created, the rest of the fields of the PipelineStateDesc structure provide depth-stencil, rasterizer, and blend state descriptions, the number and format of render targets, input layout format, etc. For instance, rasterizer state can be described as follows:
      PipelineStateDesc PSODesc; RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc; RasterizerDesc.FillMode = FILL_MODE_SOLID; RasterizerDesc.CullMode = CULL_MODE_NONE; RasterizerDesc.FrontCounterClockwise = True; RasterizerDesc.ScissorEnable = True; RasterizerDesc.AntialiasedLineEnable = False; Depth-stencil and blend states are defined in a similar fashion.
      Another important thing that pipeline state object encompasses is the input layout description that defines how inputs to the vertex shader, which is the very first shader stage, should be read from the memory. Input layout may define several vertex streams that contain values of different formats and sizes:
      // Define input layout InputLayoutDesc &Layout = PSODesc.GraphicsPipeline.InputLayout; LayoutElement TextLayoutElems[] = {     LayoutElement( 0, 0, 3, VT_FLOAT32, False ),     LayoutElement( 1, 0, 4, VT_UINT8, True ),     LayoutElement( 2, 0, 2, VT_FLOAT32, False ), }; Layout.LayoutElements = TextLayoutElems; Layout.NumElements = _countof( TextLayoutElems ); Finally, pipeline state defines primitive topology type. When all required members are initialized, a pipeline state object can be created by IRenderDevice::CreatePipelineState() method:
      // Define shader and primitive topology PSODesc.GraphicsPipeline.PrimitiveTopologyType = PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE; PSODesc.GraphicsPipeline.pVS = pVertexShader; PSODesc.GraphicsPipeline.pPS = pPixelShader; PSODesc.Name = "My pipeline state"; m_pDev->CreatePipelineState(PSODesc, &m_pPSO); When PSO object is bound to the pipeline, the engine invokes all API-specific commands to set all states specified by the object. In case of Direct3D12 this maps directly to setting the D3D12 PSO object. In case of Direct3D11, this involves setting individual state objects (such as rasterizer and blend states), shaders, input layout etc. In case of OpenGL, this requires a number of fine-grain state tweaking calls. Diligent Engine keeps track of currently bound states and only calls functions to update these states that have actually changed.
      Binding Shader Resources
      Direct3D11 and OpenGL utilize fine-grain resource binding models, where an application binds individual buffers and textures to certain shader or program resource binding slots. Direct3D12 uses a very different approach, where resource descriptors are grouped into tables, and an application can bind all resources in the table at once by setting the table in the command list. Resource binding model in Diligent Engine is designed to leverage this new method. It introduces a new object called shader resource binding that encapsulates all resource bindings required for all shaders in a certain pipeline state. It also introduces the classification of shader variables based on the frequency of expected change that helps the engine group them into tables under the hood:
      Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers. Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps etc. Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly. Shader variable type must be specified during shader creation by populating an array of ShaderVariableDesc structures and initializing ShaderCreationAttribs::Desc::VariableDesc and ShaderCreationAttribs::Desc::NumVariables members (see example of shader creation above).
      Static variables cannot be changed once a resource is bound to the variable. They are bound directly to the shader object. For instance, a shadow map texture is not expected to change after it is created, so it can be bound directly to the shader:
PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV );

Mutable and dynamic variables are bound via a new shader resource binding object (SRB) that is created by the pipeline state (IPipelineState::CreateShaderResourceBinding()):
m_pPSO->CreateShaderResourceBinding(&m_pSRB);

Note that an SRB is only compatible with the pipeline state it was created from. The SRB object inherits all static bindings from the shaders in the pipeline, but is not allowed to change them.
      Mutable resources can only be set once for every instance of a shader resource binding. Such resources are intended to define specific material properties. For instance, a diffuse texture for a specific material is not expected to change once the material is defined and can be set right after the SRB object has been created:
m_pSRB->GetVariable(SHADER_TYPE_PIXEL, "tex2DDiffuse")->Set(pDiffuseTexSRV);

In some cases it is necessary to bind a new resource to a variable every time a draw command is invoked. Such variables should be labeled as dynamic, which allows setting them multiple times through the same SRB object:
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

Under the hood, the engine pre-allocates descriptor tables for static and mutable resources when an SRB object is created. Space for dynamic resources is allocated at run time. Static and mutable resources are thus more efficient and should be used whenever possible.
As you can see, Diligent Engine does not expose the low-level details of how resources are bound to shader variables. One reason is that these details differ greatly between APIs. The other is that low-level binding methods are extremely error-prone: it is very easy to forget to bind a resource, or to bind an incorrect one, such as binding a buffer to a variable that is in fact a texture, especially during shader development when everything changes fast. Diligent Engine instead relies on its shader reflection system to automatically query the list of all shader variables. Grouping variables into the three types mentioned above allows the engine to create an optimized layout and do the heavy lifting of matching each resource to its API-specific location, register or descriptor table entry.
      This post gives more details about the resource binding model in Diligent Engine.
      Setting the Pipeline State and Committing Shader Resources
      Before any draw or compute command can be invoked, the pipeline state needs to be bound to the context:
m_pContext->SetPipelineState(m_pPSO);

Under the hood, the engine sets the internal PSO object in the command list or calls all the required native API functions to properly configure all pipeline stages.
The next step is to bind all required shader resources to the GPU pipeline, which is accomplished by the IDeviceContext::CommitShaderResources() method:
m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

The method takes a pointer to the shader resource binding object and makes all resources the object holds available to the shaders. In the case of D3D12, this only requires setting the appropriate descriptor tables in the command list. For older APIs, it typically requires setting all resources individually.
Next-generation APIs require the application to track the state of every resource and explicitly inform the system about all state transitions. For instance, if a texture was previously used as a render target and the next draw command is going to use it as a shader resource, a transition barrier needs to be executed. Diligent Engine does the heavy lifting of state tracking. When the CommitShaderResources() method is called with the COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES flag, the engine commits resources and transitions them to the correct states at the same time. Note that transitioning resources does introduce some overhead: the engine tracks the state of every resource and will not issue a barrier if the state is already correct, but even checking resource states costs something and can sometimes be avoided. For this purpose, the engine provides the IDeviceContext::TransitionShaderResources() method, which only transitions resources:
m_pContext->TransitionShaderResources(m_pPSO, m_pSRB);

In some scenarios it is more efficient to transition resources once and then only commit them, as sketched below.
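A minimal sketch of this pattern, assuming the resources bound through m_pSRB do not change state between draws; the batch loop is purely illustrative:

// Transition all resources to the correct states once up front
m_pContext->TransitionShaderResources(m_pPSO, m_pSRB);
for (const auto &Batch : Batches) // hypothetical per-frame batch list
{
    // Commit without the transition flag, skipping per-call state checks
    m_pContext->CommitShaderResources(m_pSRB, 0);
    // ... set buffers and issue draw calls for this batch ...
}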
      Invoking Draw Command
The final step is to set the states that are not part of the PSO, such as render targets and vertex and index buffers. Diligent Engine uses a Direct3D11-style API that is translated to the native API calls under the hood:
ITextureView *pRTVs[] = {m_pRTV};
m_pContext->SetRenderTargets(_countof( pRTVs ), pRTVs, m_pDSV);

// Clear render target and depth buffer
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);
m_pContext->ClearDepthStencil(nullptr, CLEAR_DEPTH_FLAG, 1.f);

// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);

Different native APIs use different sets of functions to execute draw commands depending on the command details (whether the command is indexed, instanced or both, what offsets into the source buffers are used, etc.). For instance, there are 5 draw commands in Direct3D11 and more than 9 in OpenGL, with names like glDrawElementsInstancedBaseVertexBaseInstance not uncommon. Diligent Engine hides all these details behind a single IDeviceContext::Draw() method that takes a DrawAttribs structure as its argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced and/or indirect, etc.). For example:
DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);

For compute commands, there is the IDeviceContext::DispatchCompute() method, which takes a DispatchComputeAttribs structure that defines the compute grid dimensions.
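For symmetry with the draw example, a minimal sketch of a compute dispatch; the member names for the grid dimensions are assumptions, not confirmed by the text above:

DispatchComputeAttribs DispatchAttrs;
DispatchAttrs.ThreadGroupCountX = 16; // assumed member names for the grid dimensions
DispatchAttrs.ThreadGroupCountY = 16;
DispatchAttrs.ThreadGroupCountZ = 1;
m_pContext->DispatchCompute(DispatchAttrs);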
      Source Code
The full engine source code is available on GitHub and is free to use. The repository contains two samples, an asteroids performance benchmark, and an example Unity project that uses Diligent Engine in a native plugin.
The AntTweakBar sample is Diligent Engine’s “Hello World” example.

The atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to multiple render targets, using compute shaders and unordered access views, etc.

The asteroids performance benchmark is based on this demo developed by Intel. It renders 50,000 unique textured asteroids and allows comparing the performance of the Direct3D11 and Direct3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

      Finally, there is an example project that shows how Diligent Engine can be integrated with Unity.

      Future Work
The engine is under active development. It currently supports Windows desktop, Universal Windows and Android platforms. The Direct3D11, Direct3D12 and OpenGL/GLES backends are now feature complete. A Vulkan backend is coming next, and support for more platforms is planned.
    • By kan123
      Hello,
DX9Ex. I have a problem with driver stability during long series of renderings, which I use for image processing in memory with fragment shaders. For big bitmaps the video driver sometimes becomes unstable ("Display driver stopped responding and has recovered"), and if, for instance, a media player is running video in the background, it sometimes freezes and shows distortion. I tried the following IDirect3DDevice9Ex methods:
      SetGPUThreadPriority(-7);
      WaitForVBlank(0);
      EvictManagedResources();
with the purpose of giving the GPU some time between scenes, but this seems to have no notable effect in this case. I don't want to reinitialize the subsystem for every step, to avoid the performance loss.
So, my question is: is there a common practice to avoid overloading the GPU with submitted tasks? Many thanks in advance.
       