break_r

simple HLSL Diffuse shader question


The following is a simple diffuse shader in HLSL:
matrix ViewMatrix;
matrix ViewProjMatrix;

vector AmbientMtrl;
vector DiffuseMtrl;
vector LightDirection;

vector DiffuseLightIntensity = {0.8f, 0.0f, 0.0f, 1.0f};
vector AmbientLightIntensity = {0.2f, 0.0f, 0.0f, 1.0f};

struct VS_INPUT
{
    vector position : POSITION;
    vector normal   : NORMAL;
};

struct VS_OUTPUT
{
    vector position : POSITION;
    vector diffuse  : COLOR;
};

VS_OUTPUT Main(VS_INPUT input)
{
    VS_OUTPUT output = (VS_OUTPUT)0;

    output.position = mul(input.position, ViewProjMatrix);

    input.normal.w   = 0.0f;
    input.normal     = mul(input.normal,ViewMatrix);

    float s = dot(LightDirection, input.normal);

    output.diffuse = (AmbientMtrl * AmbientLightIntensity) +
                     (s * (DiffuseLightIntensity * DiffuseMtrl));
    
    return output;
}

The only thing I don't understand is the normal transformation. Why are normals transformed differently from position vectors, and why must I set the w component to zero? Regards.

The position is transformed into projection space so that it can be drawn.

In order to compute the diffuse reflection, the light vector and the normal must be in the same space. I assume the normal is in model space, so it must be transformed into world space or view space (an alternative is to transform the light into model space). One reason for choosing view space might be that OpenGL doesn't have a separate model-to-world transformation, only a combined model-to-world-to-view transformation.

Setting normal.w to 0 prevents translation. You don't want to translate the normal, just rotate it.
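To make that concrete: a 4×4 affine transform packs a rotation R and a translation t into one matrix, and the w component decides whether the translation is applied (shown here in column-vector form; the shader's mul(v, M) uses the row-vector convention, but the principle is the same):

```latex
M \begin{pmatrix} v \\ w \end{pmatrix}
= \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix}
  \begin{pmatrix} v \\ w \end{pmatrix}
= \begin{pmatrix} Rv + wt \\ w \end{pmatrix}
```

With w = 1 (a point) you get Rv + t; with w = 0 (a direction or normal) the translation term drops out and you get only Rv.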

The position is transformed into projection (clip) space in the vertex shader for rendering. The vectors for the lighting calculations don't need to be projected, only transformed into view or world space (in your shader it's view space). When you implement point and spot lights you will need the world- or view-space position to compute the light and view vectors. The w component of the normal is not used, so you could use a 3D vector and get the same results. Hope this helps.
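As a sketch of that last point (assuming the same globals as the original shader), the normal can be declared as a float3 and multiplied by the upper-left 3×3 of the view matrix, which drops translation automatically instead of zeroing w:

```hlsl
// Sketch: same diffuse shader with a float3 normal.
// Casting ViewMatrix to float3x3 keeps only the rotation/scale part,
// which has the same effect as setting normal.w to 0 before a 4x4 multiply.
float4x4 ViewMatrix;
float4x4 ViewProjMatrix;

float4 AmbientMtrl;
float4 DiffuseMtrl;
float3 LightDirection;

float4 DiffuseLightIntensity = {0.8f, 0.0f, 0.0f, 1.0f};
float4 AmbientLightIntensity = {0.2f, 0.0f, 0.0f, 1.0f};

struct VS_OUTPUT
{
    float4 position : POSITION;
    float4 diffuse  : COLOR;
};

VS_OUTPUT Main(float4 position : POSITION, float3 normal : NORMAL)
{
    VS_OUTPUT output = (VS_OUTPUT)0;

    output.position = mul(position, ViewProjMatrix);

    // Upper-left 3x3 only: rotation (and scale), no translation.
    float3 n = mul(normal, (float3x3)ViewMatrix);

    float s = dot(LightDirection, n);

    output.diffuse = (AmbientMtrl * AmbientLightIntensity) +
                     (s * (DiffuseLightIntensity * DiffuseMtrl));
    return output;
}
```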

Great, thanks guys. It makes more sense now: normals don't need to be transformed into projection space, and they should only pick up rotation (to reflect orientation changes), not translation.

Also, I really need to build a software rasterizer so these fundamentals are entrenched. However, who has the time...

Beyond the mammoth book 'Tricks of the 3D Game Programming Gurus', does anyone have a solid tutorial series on building something like a small wireframe engine?

Or perhaps a limited reimplementation of something like D3D?

Regards.
