markypooch

OpenGL Assimp and Flipping Normals


Hello all,

 

I am using Assimp for my models in my game. I am loading .obj models exported from Blender into my classes through a model loader that utilizes Assimp.

 

I've run into an issue. Right off the bat, I know my shaders are correct, and the transformation of lights is as well (in my vertex shader), since the only meshes that have the soon-to-be-mentioned issue are models exported and imported through Assimp.

 

It seems as if Assimp is flipping my models' normal vectors! Check the two images below: the normals on the backside of my model appear to be pointing inward!

 

[code]

 

    vsOut.pos = mul(vertex.pos, worldMat);
    vsOut.pos = mul(vsOut.pos , viewMat);
    vsOut.pos = mul(vsOut.pos , projMat);

    vsOut.color = vertex.color;    
    vsOut.wPos   = mul(vertex.pos, worldMat);

    cameraPos = mul(vsOut.pos, worldMat);
    cameraPos = mul(cameraPos, viewMat);

    vsOut.viewDir = camPos.xyz - vsOut.wPos.xyz;
    vsOut.viewDir = normalize(vsOut.viewDir);

    vsOut.norm  = mul(vertex.norm, worldMat);
    vsOut.norm  = normalize(vsOut.norm);

 

[/code]

 

These are my preprocess flags for Assimp, and this is how I load the normals in code:

 

[code]

 

unsigned int processFlags =
    aiProcess_CalcTangentSpace         |
    aiProcess_JoinIdenticalVertices    |
    aiProcess_ConvertToLeftHanded      | // convert everything to D3D left handed space (by default right-handed, for OpenGL)
    aiProcess_SortByPType              |
    aiProcess_ImproveCacheLocality     |
    aiProcess_RemoveRedundantMaterials |
    aiProcess_FindDegenerates          |
    aiProcess_FindInvalidData          |
    aiProcess_TransformUVCoords        |
    aiProcess_FindInstances            |
    aiProcess_LimitBoneWeights         |
    aiProcess_SplitByBoneCount         |
    aiProcess_FixInfacingNormals       |
    0;

 

 

 

...........................................

 

 

vertexVectorL.at(i).norm               = XMFLOAT3(mesh->mNormals[i].x, mesh->mNormals[i].y, mesh->mNormals[i].z);

 

[/code]
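If it helps to confirm at load time whether the normals really are flipped, here is a minimal, self-contained C++ sketch (the `Vec3` type and `normalMatchesWinding` helper are hypothetical stand-ins, not the actual loader code) that compares a stored vertex normal against the face normal implied by the triangle's winding:

```cpp
#include <cassert>

// Minimal 3D vector math so the sketch is self-contained
// (in the real loader these would be XMFLOAT3 / XMVECTOR operations).
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y,
                                              a.z*b.x - a.x*b.z,
                                              a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true when the stored vertex normal agrees with the geometric
// (winding-derived) face normal of the triangle v0-v1-v2.
// A negative dot product means the stored normal points "into" the face,
// i.e. it was flipped somewhere between export and load.
bool normalMatchesWinding(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 storedNormal)
{
    Vec3 faceNormal = cross(sub(v1, v0), sub(v2, v0));
    return dot(faceNormal, storedNormal) > 0.0f;
}
```

Running this over the Assimp-loaded triangles and seeing mostly `false` would point at the import/conversion step rather than the shaders.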

 

Has anyone else heard of Assimp doing this? It's been throwing me for a loop for a while now.

If something looks off in my code, give me a hint or point me in the right direction!

 

 

P.S. I've included screenshots of the issue.
      Thanks in advance for any replies!

 

-Marcus

 

[attachment=22319:asd.png]

[attachment=22320:sd.png]

 

 


The walls look OK to me. Are they going through the same pipeline?

Are you sure the normals are OK in your DCC tool? Sometimes they don't quite get the right "outside".


In Blender, when I'm creating the meshes, their normals are oriented correctly (no iron-maidening).

Krohm did bring up a point. I am using two separate shaders: one for meshes without textures and one for those WITH textures (diffuse, normal, spec, etc.).

 

So for my untextured models I am using a separate shader (so a separate pipeline). BUT for all intents and purposes the code should be identical, minus the Texture2D type and the multiplication of that with the final fragment color.

 

I'll check my material shader and I'll post it here. I'll be gone for most of the day (work and all that fun stuff), so thanks for the replies!

 

[code]

 

///////////////////////////////////////////////////////
//
//                Material Shader
//
////////////////////////////////////////////////////////


//Load Unused Texture Sampler into GPU Register(s0)
//Load clamp Sampler for Shadow Map into GPU Register(s1)
//////////////////////////////////////////////////////////////////
Texture2D               colorMap            : register(t0); // shadow-map texture sampled in the pixel shader
SamplerState            colorSampler        : register(s0);
SamplerState            sampleTypeClamp     : register(s1);

//CONSTANT BUFFERS=========================
//CB's for Matrices, allows C++ Code to send constants
//to the shader and bind them into Registers (b0, b1, b2 respectively)
//===========================================================
cbuffer world : register(b0)
{
    matrix worldMat;
}

cbuffer view  : register(b1)
{
    matrix viewMat;
}

cbuffer proj  : register(b2)
{
    matrix projMat;
}

cbuffer lView : register(b3)
{
    matrix lightViewMat;
}

cbuffer lProj : register(b4)
{
    matrix lightProjMat;
}

cbuffer camBuffer : register(b5)
{
    float3 camPos;
    float  pad;
};
//==================================================

//Structures for Shader Constants
//===================================
struct Light
{
    float3 dir;
    float3 pos;
    float  range;
    float3 att;
    float4 diffuse;
    float4 ambient;
};
//===================================

//Load the main light into the shader and bind it to register b0
cbuffer LightCB  : register(b0)
{
    Light light[6];
}


//Structures for Vertex and Pixel
/////////////////////////////////////////
struct VS_Input
{
    float4 pos       : POSITION;
    float4 color     : COLOR;
    float3 norm      : NORMAL;
};

struct PS_Input
{
    float4 pos          : SV_POSITION;
    float4 wPos         : POSITION;
    float4 color        : COLOR;
    float3 norm         : NORMAL;

    float4 lightViewP   : TEXCOORD1;
    float3 lightPos     : TEXCOORD2;
    float3 viewDir      : TEXCOORD3;
};
///////////////////////////////////////////

//Vertex Shader
///////////////////////////////////
PS_Input VS(VS_Input vertex)
{
    //Init Output Struct
    /////////////////////////////////////
    PS_Input vsOut     = (PS_Input)0;
    vertex.pos.w = 1.0f;
    float4   cameraPos = 0.0f;

    //Transform Vertices into worldSpace
    ///////////////////////////////////////////
    vsOut.pos = mul(vertex.pos, worldMat);
    vsOut.pos = mul(vsOut.pos , viewMat);
    vsOut.pos = mul(vsOut.pos , projMat);

    //Transform the vertex into light clip space - shadow mapping
    ///////////////////////////////////////////
    vsOut.lightViewP = mul(vertex.pos, worldMat);
    vsOut.lightViewP = mul(vsOut.lightViewP, lightViewMat);
    vsOut.lightViewP = mul(vsOut.lightViewP, lightProjMat);

    //Transform VertexNormal into WorldSpace, normalize it
    //////////////////////////////////////////
    vsOut.norm = mul(vertex.norm, worldMat);
    vsOut.norm = normalize(vsOut.norm);
    
    //Grab the world-space position of the vertex (useful for lighting calculations)
    /////////////////////////////////////////
    vsOut.wPos   = mul(vertex.pos, worldMat);

    //Compute the normalized view direction from the vertex to the camera - specular mapping
    ////////////////////////////////////////
    vsOut.viewDir = camPos.xyz - vsOut.wPos.xyz;
    vsOut.viewDir = normalize(vsOut.viewDir);

    vsOut.color   = vertex.color;

    //Return output structure as Input into PixelShader
    ////////////////////////////////////////
    return vsOut;
}
//////////////////////////////////
//

//
float4 PS(PS_Input texel) : SV_TARGET
{
    //texel.lightViewP.xyz /= texel.lightViewP.w;
    
    //Variable Initialization
    /////////////////////////////////////////////////
    float2 projectTexCoord = float2(0.0f, 0.0f);
    float  bias            = 0.0000003f;
    float  lightIntensity  = 0.0f;
    float  depthValue      = 0.0f;
    float  lightDepthValue = 0.0f;
    float4 color           = float4(0.0f, 0.0f, 0.0f, 0.0f);
    float4 lightObject     = light[5].diffuse;
    float4 specular        = float4(0.0f, 0.0f, 0.0f, 0.0f);
    bool   lightHit        = false;
    /////////////////////////////////////////////////

    //Grab the lightPosition at the vertex
    /////////////////////////////////////////
    texel.lightPos = light[5].pos.xyz - texel.wPos.xyz;
    texel.lightPos = normalize(texel.lightPos);
    /////////////////////////////////////////

    //return lightObject;
    lightIntensity = saturate(dot(texel.norm, light[5].pos));

    texel.lightViewP.xyz /= texel.lightViewP.w;
 
    //if the position is not visible to the light, don't illuminate it
    //(results in a hard light frustum)
    if( texel.lightViewP.x < -1.0f || texel.lightViewP.x > 1.0f ||
        texel.lightViewP.y < -1.0f || texel.lightViewP.y > 1.0f ||
        texel.lightViewP.z < 0.0f  || texel.lightViewP.z > 1.0f )
    {
        
        lightObject = light[5].ambient;
    }
 
    //transform clip space coords to texture space coords (-1:1 to 0:1)
    texel.lightViewP.x = texel.lightViewP.x/2 + 0.5;
    texel.lightViewP.y = texel.lightViewP.y/-2 + 0.5;
 
    texel.lightViewP.z -= bias;

    //sample shadow map - point sampler
    float shadowMapDepth = colorMap.Sample(sampleTypeClamp, texel.lightViewP.xy).r;
 
    //if clip space z value greater than shadow map value then pixel is in shadow
    if ( shadowMapDepth < texel.lightViewP.z)
    {
        lightObject = light[5].ambient;
    }

    float4 color2 = lightObject;
    color = saturate(color2 + lightObject) * texel.color;

    float4 finalColor = float4(0.0f, 0.0f, 0.0f, 0.0f);

        
    [unroll]
    for (int i = 0; i < 5; i++)
    {
        float3 lightToPixelVec = light[i].dir - texel.wPos;
        //Find the distance between the light pos and pixel pos
        float d = length(lightToPixelVec);

        //If the pixel is too far from this light, skip it
        if( d > 100.0f)
            continue;
        
        //Turn lightToPixelVec into a unit length vector describing
        //the pixels direction from the lights position
        lightToPixelVec /= d;
    
        //Calculate how much light the pixel gets by the angle
        //in which the light strikes the pixels surface
        float howMuchLight = dot(lightToPixelVec, texel.norm);

        //If light is striking the front side of the pixel
        if( howMuchLight > 0.0f )
        {    
            //Add light to the finalColor of the pixel
            finalColor += howMuchLight * texel.color * light[i].diffuse;
        
            //Calculate Light's Falloff factor
            finalColor /= light[i].att[0] + (light[i].att[1] * d) + (light[i].att[2] * (d*d));
        }    
        
        //clamp the values between 0 and 1 and accumulate
        color = saturate(color + finalColor);
    }
    
    //color += float4(0.5f, 0.0f, 0.0f, 1.0f);
    return color;
}

 

[/code]

 

P.S. It seems I have forgotten how to do the code brackets :p

 

Marcus

Edited by markypooch


If you want to be absolutely sure that the normals are the problem, and not your lighting implementation, you could add a geometry shader with a LineStream to render the normals for each vertex...

Or you could output the normal as color from the pixel shader to get an idea if the normals are ok.
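That normals-as-color check is a one-line change at the top of the pixel shader — a temporary debug output, sketched here assuming the `texel.norm` input from the vertex shader above:

```
//Temporary debug: visualize world-space normals as RGB.
//Remap [-1, 1] to [0, 1]; flipped faces show up as the
//opposite color of their correctly-oriented neighbours.
return float4(texel.norm * 0.5f + 0.5f, 1.0f);
```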

 

Or use another program that can display normals, to inspect the normals from the obj file (just in case Blender is messing them up).

 

Anyway, it sounds to me like you're already certain that the normals from Assimp are the problem... In that case, maybe Assimp simply doesn't handle them properly with the aiProcess_ConvertToLeftHanded flag. You could find a way to export your models as left-handed; I think MeshLab can do that. Or you could remove the aiProcess_ConvertToLeftHanded flag and transform the vertices to left-handed yourself.
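For that last option, a minimal C++ sketch of doing the right-handed-to-left-handed conversion yourself might look like this (the `Vertex` layout and `convertToLeftHanded` name are hypothetical, not the actual loader's types): negate z on positions and normals, flip the v texture coordinate, and reverse the triangle winding so faces still point outward.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical minimal vertex type standing in for the loader's own.
struct Vertex
{
    float px, py, pz;   // position
    float nx, ny, nz;   // normal
    float u, v;         // texture coordinates
};

// Convert right-handed (OpenGL-style) data to left-handed (D3D-style).
void convertToLeftHanded(std::vector<Vertex>& verts,
                         std::vector<unsigned int>& indices)
{
    for (Vertex& vert : verts)
    {
        vert.pz = -vert.pz;          // mirror positions across z
        vert.nz = -vert.nz;          // mirror normals the same way
        vert.v  = 1.0f - vert.v;     // flip the v texture coordinate
    }

    // Swap the last two indices of each triangle to reverse winding,
    // so front faces stay front faces after the mirror.
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        unsigned int tmp = indices[i + 1];
        indices[i + 1] = indices[i + 2];
        indices[i + 2] = tmp;
    }
}
```

This is only a sketch of the idea; the real loader would apply it while copying out of the aiMesh arrays.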

 

You should also try removing aiProcess_FixInfacingNormals. Their documentation says: "Generally it is recommended to enable this step, although the result is not always correct."

 

Plenty of things you could do... :)

Edited by tonemgub

