
Member Since 26 Mar 2013

Topics I've Started

Bad Performance On Intel HD

07 July 2014 - 04:45 PM

Hello all,

I am coding a game in C/C++ using the Direct3D 11 API. Everything has been going along with only minor issues here and there, except for one big issue. I have a laptop that I decided to dub my dev system. Since its GPU is an Intel HD 2000, I figured that if I get my game to perform well on that, it will run well on anything.

Performance is TERRIBLE!

If I'm lucky I get 2-3 FPS! I'm only sending 4,000-6,000 triangles per update, and the shader I'm using handles 5 point lights plus one directional light. The shader handles normal/shadow mapping as well (shadow mapping only for the directional light), and about 80% of my geometry is sent down that pipeline.

I have some ideas on where my performance is going down the tube. I have already ordered my data so that like geometry (with identical buffers, textures, etc.) is fed sequentially to D3D to minimize context switches on the GPU. But here it goes:

1. I do have a lot of draw calls, so maybe instancing could help me (but my fear is that the Intel HD "says" it supports D3D11 and all of its features, yet only supports features like instancing and tessellation at a minimal level).

2. I should probably look into vertex buffer batching, since I create a lot of separate buffers for separate objects and resubmit them to the pipeline each update.

3. Maybe the shader I am using or the geometry I'm sending is too much. (Though even when I substituted shaders that did only basic texture mapping, I still had a problem with speed.)

If I missed something, let me know; or if one (or all) of the above items is the optimization technique I should look into, let me know as well.
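The state-sorting idea from item 1 and the ordering I described above can be sketched as a plain sort on a packed state key. This is a hypothetical `DrawCall` record with made-up ID fields, not my actual engine code; the point is only the technique:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical per-draw record: each ID names a shader/texture/vertex-buffer
// state object. Packing them into one 64-bit key makes draws that share state
// sort next to each other, so the renderer changes state as rarely as possible.
struct DrawCall {
    uint16_t shaderId;
    uint16_t textureId;
    uint16_t bufferId;

    uint64_t key() const {
        return (uint64_t(shaderId)  << 32) |
               (uint64_t(textureId) << 16) |
                uint64_t(bufferId);
    }
};

// Sort once per frame; consecutive calls that end up with the same key are
// also candidates for merging into a single instanced draw.
void sortByState(std::vector<DrawCall>& calls) {
    std::sort(calls.begin(), calls.end(),
              [](const DrawCall& a, const DrawCall& b) { return a.key() < b.key(); });
}
```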

HP laptop specs:

Core i3 mobile, 2.3 GHz (2nd gen)
8 GB of RAM
Intel HD 2000

Assimp and Flipping Normals

25 June 2014 - 10:11 PM

Hello all,


I am using Assimp for the models in my game. I load OBJ models exported from Blender into my classes through a model loader that utilizes Assimp.


I've run into an issue. Right off the bat I know my shaders are correct, and the transformation of lights (in my vertex shader) is as well, since the only meshes that have the soon-to-be-mentioned issue are models exported and imported through Assimp.


It seems as if Assimp is flipping my models' normal vectors! Check the two images below; the normals on the backside of my model appear to be pointing inward!




vsOut.pos = mul(vertex.pos, worldMat);
vsOut.pos = mul(vsOut.pos, viewMat);
vsOut.pos = mul(vsOut.pos, projMat);

vsOut.color = vertex.color;
vsOut.wPos  = mul(vertex.pos, worldMat);

// camPos is the camera position from a constant buffer
vsOut.viewDir = camPos.xyz - vsOut.wPos.xyz;
vsOut.viewDir = normalize(vsOut.viewDir);

vsOut.norm = mul(vertex.norm, worldMat);
vsOut.norm = normalize(vsOut.norm);




These are my post-processing flags for Assimp, and this is how I load the normals in code:




unsigned int processFlags =
    aiProcess_CalcTangentSpace         |
    aiProcess_JoinIdenticalVertices    |
    aiProcess_ConvertToLeftHanded      | // convert everything to D3D left-handed space (default is right-handed, for OpenGL)
    aiProcess_SortByPType              |
    aiProcess_ImproveCacheLocality     |
    aiProcess_RemoveRedundantMaterials |
    aiProcess_FindDegenerates          |
    aiProcess_FindInvalidData          |
    aiProcess_TransformUVCoords        |
    aiProcess_FindInstances            |
    aiProcess_LimitBoneWeights         |
    aiProcess_SplitByBoneCount         |
    aiProcess_FixInfacingNormals;







vertexVectorL.at(i).norm               = XMFLOAT3(mesh->mNormals[i].x, mesh->mNormals[i].y, mesh->mNormals[i].z);




Has anyone else heard about Assimp doing this? It's been throwing me for a loop for a while now.

If something looks off in my code give me a hint or point me in the right direction!
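For what it's worth, a rough CPU-side check for "inward" normals can be done by comparing each normal against the direction from the mesh centroid to the vertex. This is a simplified version of the kind of heuristic a flag like aiProcess_FixInfacingNormals applies, not Assimp's actual algorithm, and the `Vec3` type is a stand-in:

```cpp
// Stand-in vector type for the sketch.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// For a roughly convex mesh, a correct outward normal should not point back
// toward the centroid. A negative dot product with the centroid-to-vertex
// direction flags the normal as inward-facing (a candidate for flipping).
bool normalPointsInward(const Vec3& vertex, const Vec3& normal, const Vec3& centroid) {
    Vec3 outward = { vertex.x - centroid.x,
                     vertex.y - centroid.y,
                     vertex.z - centroid.z };
    return dot(outward, normal) < 0.0f;
}
```

Running this over a loaded mesh at least tells you whether the flipping happens at import time or somewhere later in the pipeline.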



P.S. I've included screenshots of the issue.
Thanks in advance for any reply!




Attached Files: asd.png, sd.png



Tweening Questions

26 May 2014 - 12:29 PM

Hello all!


I started to look into tweening since it looked like a fun and sort of quick way to get some basic animations going on my meshes.


Well, for the sake of simplicity I decided to implement it on the CPU side first with dynamic buffers. While performance is pitiful, that's to be expected when pushing data back and forth between the CPU and the GPU.


However, I've run into an issue that makes me ask:


The OBJ format. I've heard that the Blender exporter (correct me if I'm wrong) will rearrange the vertices from export to export, thus negating any tweening attempts. Since I'm most comfortable with the OBJ format, is there a way around this (I see a "Keep Vertex Order" option in Blender's mesh export settings), or is that just wishful thinking? If you think OBJ is a no-go, what format do you recommend?
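If vertex order does change between exports, the mismatch is easy to detect before tweening: the two morph targets must have the same vertex count and identical triangle indices. A minimal sanity check could look like this (hypothetical `Mesh` type, not my actual classes):

```cpp
#include <cstdint>
#include <vector>

// Minimal stand-in for a loaded mesh: flat positions plus an index buffer.
struct Mesh {
    std::vector<float>    positions; // 3 floats per vertex
    std::vector<uint32_t> indices;   // triangle list
};

// Two meshes can only be tweened vertex-for-vertex if they agree on vertex
// count and on how those vertices are wired into triangles. Differing index
// buffers mean the exporter reordered something.
bool canTween(const Mesh& a, const Mesh& b) {
    return a.positions.size() == b.positions.size() &&
           a.indices == b.indices;
}
```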


Also, my implementation is based on Advanced Animation with DirectX; I just stuffed all the values into XNA math types.

I'm including my tweening implementation in case I missed something silly that someone could hint at (and also as validation that this isn't completely off XD). My meshes wig out when tweening and start rendering crazy polygonal shapes and colors, as if I'm referencing out-of-bounds garbage memory and passing it to D3D.

float time   = 0.0f; // elapsed tween time (persists across frames)
float length = 5.0f; // tween duration

if (isMoving)
{
    time += 5.0f * dt;
    if (time > length) time = 0.0f;
}

float scalar = time / length; // interpolation factor in [0, 1]

// Note: vertices[i].at(j) holds the info of each subset of the overall mesh
for (int i = 0; i < mCount; i++) // mCount = subsetMeshCount
{
    for (int j = 0; j < verticeCount[i]; j++)
    {
        // Interpolate positions: (1 - scalar) * source + scalar * target
        XMVECTOR vecSource = XMLoadFloat3(&vertices[i].at(j).pos);
        vecSource *= (1.0f - scalar);

        XMVECTOR vecTarget = XMLoadFloat3(&vertices2[i].at(j).pos);
        vecTarget *= scalar;

        XMStoreFloat3(&verticesResult.at(j).pos, vecSource + vecTarget);

        // Interpolate normals the same way
        XMVECTOR vecSourceNorm = XMLoadFloat3(&vertices[i].at(j).norm);
        vecSourceNorm *= (1.0f - scalar);

        XMVECTOR vecTargetNorm = XMLoadFloat3(&vertices2[i].at(j).norm);
        vecTargetNorm *= scalar;

        XMStoreFloat3(&verticesResult.at(j).norm, vecSourceNorm + vecTargetNorm);
        // (Do NOT overwrite the interpolated normal with the source normal here.)

        // Pass along materials
        verticesResult.at(j).color = vertices[i].at(j).color;
    }

    // Lock the dynamic buffer (blocks the GPU)
    bro->devcon->Map(vertexBuffer[i], 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);

    // Copy ALL interpolated vertices, not just the first one
    memcpy(ms.pData, verticesResult.data(), sizeof(VertexM) * verticeCount[i]);
    bro->devcon->Unmap(vertexBuffer[i], 0);
}

Any replies are appreciated! :)



How Do The Screenshots Look?

13 April 2014 - 04:06 PM

Hello all,


I would just like to thank everyone at this website and on the forums for helping me grow as a graphics programmer. It's been a long time coming, but now I think I want to post a picture and get some feedback regarding the scene.


Some things I will note


     The shadows are a little bit off (Still working on 'em)

     The HUD can use improvement (anybody have ideas ;) )

      The briefcase holding Flash's items still needs some work


Anyhow, I love HONEST opinions! So I'll gobble up any feedback!




Shadow Map Bias issue

05 April 2014 - 11:24 AM

Recently I managed to get shadow mapping working within my application. However, a lot of the occluders are shadowing themselves, that is, there is a LOT of self-shadowing.


Now, in my HLSL code I DO have a bias; however, it is simply NOT working. What I mean is that no matter how much I increase or decrease the bias, the shadows stay the same.


Only bias values of 0 or 1 seem to make any difference.
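For reference, the depth comparison the bias is meant to influence can be modeled on the CPU with plain floats. This is a sketch of the standard shadow-map test (not my shader), plus the slope-scaled bias that is the usual next step when a constant bias isn't enough; the baseBias/maxBias values are tuning assumptions:

```cpp
#include <algorithm>
#include <cmath>

// Standard shadow-map test: the receiver is in shadow when its light-space
// depth, minus the bias, still lies behind the depth stored in the map.
bool inShadow(float receiverDepth, float storedDepth, float bias) {
    return (receiverDepth - bias) > storedDepth;
}

// Slope-scaled bias: surfaces nearly parallel to the light (small N.L) need a
// larger offset to avoid acne, while surfaces facing the light need almost none.
float slopeScaledBias(float baseBias, float maxBias, float nDotL) {
    float bias = baseBias * std::tan(std::acos(nDotL));
    return std::clamp(bias, 0.0f, maxBias);
}
```

If changing the bias constant has no visible effect at all, the first thing to verify is that the compare actually executes, i.e. that the biased depth really reaches the comparison instead of being bypassed by control flow.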


    float2 projectTexCoord = float2(0.0f, 0.0f);
    float  bias            = 0.006f;
    float4 textureMap      = colorMap[0].Sample(colorSampler, texel.tex);
    float  lightIntensity  = 0.0f;
    float  depthValue      = 0.0f;
    float  lightDepthValue = 0.0f;
    float4 color           = float4(0.0f, 0.0f, 0.0f, 0.0f);
    float4 lightObject     = light[5].ambient;
    float4 specular        = float4(0.0f, 0.0f, 0.0f, 0.0f);
    bool   lightHit        = false;

    //Normal mapping
    float4 bumpMap    = colorMap[1].Sample(colorSampler, texel.tex);
    bumpMap           = (bumpMap * 2.0f) - 1.0f;
    float3 bumpNormal = normalize(texel.norm + bumpMap.x * texel.tang + bumpMap.y * texel.binorm);
    float  lightIntensity2 = saturate(dot(bumpNormal, -light[5].dir.xyz)); // dot with the light direction, not a scalar

    //Specular mapping
    float3 reflection = normalize(2 * lightIntensity2 * bumpNormal - light[5].dir.xyz);
    specular  = pow(saturate(dot(reflection, texel.viewDir)), 2.5f);
    specular += float4(0.02f, 0.02f, 0.06f, 0.0f);
    float4 specTex = colorMap[3].Sample(colorSampler, texel.tex);
    specular *= specTex;

    //Direction from the pixel toward the light
    texel.lightPos = light[5].dir.xyz - texel.wPos.xyz;
    texel.lightPos = normalize(texel.lightPos);

    //Create the texture coordinates for projecting the shadow map
    projectTexCoord.x = texel.lightViewP.x / texel.lightViewP.w /  2.0f + 0.5f;
    projectTexCoord.y = texel.lightViewP.y / texel.lightViewP.w / -2.0f + 0.5f;

    // If the projected coordinates are in the 0 to 1 range, this pixel is in view of the light.
    // NOTE: braces are required; without them only the first statement is conditional.
    if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
    {
        // Sample the shadow map depth value from the depth texture at the projected coordinate
        depthValue = colorMap[2].Sample(sampleTypeClamp, projectTexCoord).r;

        // Calculate the depth of this pixel in light space
        lightDepthValue = texel.lightViewP.z / texel.lightViewP.w;

        // Subtract the bias from the lightDepthValue
        lightDepthValue = lightDepthValue - bias;

        // If the pixel's light-space depth is in front of the stored occluder depth, light the pixel;
        // otherwise an occluder is casting a shadow on it.
        if (lightDepthValue < depthValue)
        {
            lightIntensity = saturate(dot(texel.norm, texel.lightPos));

            // Calculate the amount of light on this pixel
            if (lightIntensity > 0.0f)
            {
                lightObject = light[5].diffuse;
                color      += float4(0.0f, 2.4f, 0.0f, 0.0f);
                lightHit    = true;
            }
        }
    }

    float4 color2 = lightIntensity2 * lightObject;
    color = saturate(color2 + lightObject) * textureMap;
    color = saturate(color + specular);

    return color;

I just can't seem to figure this out. I tried working with the sampler, and I checked that my depth shader is correct (it is), but I constantly get the same self-shadowing results.


If someone could just point me in the direction I should be looking to solve this problem, I would be so grateful. I'm just so lost right now.


And if anybody needs any code not shown here let me know


Any response will be appreciated


-Marcus Hansen