Fragment Shader HLSL vs GLSL

3 comments, last by BBeck 7 years, 11 months ago

sampler s0;             // Texture sampler bound by the calling code (assumed; declaration not shown in the original snippet).
float4 ColorTexture;    // Tint color set from the Game class (assumed; declaration not shown in the original snippet).

float4 PixelShaderFunction2(float2 coords : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(s0, coords);
    color = ColorTexture * color;

    return color;
}

Compared to GLSL, where do the texture coordinates (TEXCOORD0) get their values? I don't remember passing any coordinates for my texture anywhere in my Game class. How did XNA know my texture coordinates if I didn't pass anything?

In GLSL:


#version 330 core
layout (location = 0) in vec4 vertex;

out vec2 TexCoords;
out vec4 ParticleColor;

uniform mat4 projection;
uniform vec2 offset;
uniform vec4 color;

void main()
{
    float scale = 10.0f;
    TexCoords = vertex.zw;
    ParticleColor = color;
    gl_Position = projection * vec4((vertex.xy * scale) + offset, 0.0, 1.0);
}

The layout keyword is where the texture coordinate comes in; it was passed from the Game class:


GLuint VBO;
GLfloat vertices[] = {
    // Pos      // Tex
    0.0f, 1.0f, 0.0f, 1.0f,
    1.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 0.0f,

    0.0f, 1.0f, 0.0f, 1.0f,
    1.0f, 1.0f, 1.0f, 1.0f,
    1.0f, 0.0f, 1.0f, 0.0f
};

glGenVertexArrays(1, &this->quadVAO);
glGenBuffers(1, &VBO);

glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glBindVertexArray(this->quadVAO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
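The layout above can be sketched on the CPU side: each vertex is four tightly packed floats, position in .xy and texture coordinates in .zw, which is exactly what the shader's single vec4 attribute (and its vertex.zw read) sees. A minimal C++ sketch (the struct and function names are mine, for illustration only):

```cpp
#include <cassert>

// Mirrors what the vertex shader receives through attribute location 0:
// one vec4 per vertex carrying both position (.xy) and UVs (.zw).
struct Unpacked { float posX, posY, texU, texV; };

Unpacked unpackVertex(const float* vertices, int index) {
    const float* v = vertices + index * 4;  // stride: 4 floats = 16 bytes
    return { v[0], v[1], v[2], v[3] };
}
```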

So I was wondering: how did XNA know about the texture coordinates when I didn't do anything like creating a buffer and binding it to a vertex array?


The input assembler stage delivers Vertex Buffer data to the vertex shader, which is configured using a Vertex Declaration in XNA.
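As a rough C++ sketch of what such a declaration describes (assuming XNA's built-in VertexPositionColorTexture layout: a Vector3 position, a packed 4-byte color, and a Vector2 texture coordinate, for a 24-byte stride):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical mirror of the memory layout XNA's VertexPositionColorTexture
// declaration describes to the input assembler. The semantics in the comments
// are what the vertex shader matches against.
struct VertexPositionColorTexture {
    float    position[3];  // POSITION0  - offset 0,  12 bytes
    uint32_t color;        // COLOR0     - offset 12,  4 bytes (packed R8G8B8A8)
    float    texCoord[2];  // TEXCOORD0  - offset 16,  8 bytes
};
```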

You're comparing a (GLSL) vertex shader to a (HLSL) pixel shader here. You can't make a meaningful comparison.


You're comparing a (GLSL) vertex shader to a (HLSL) pixel shader here. You can't make a meaningful comparison.

Oh no, I'm not comparing the two. I'm just curious how XNA can know my texture coordinates without my passing anything explicitly.

XNA has some built-in things, such as default vertex types and a model class. Behind the scenes you still have to pass vertices, and in the shader you still need to receive them.

GLSL and HLSL are practically the same thing; they just have different ways of describing things, from what I've seen, and I'm fairly new to GLSL.

I was looking for a project where I have an example. I lost my computer this last year and lost a tremendous amount of code (shame on me for not having backups, although most of the code is on my website and I just haven't taken the time to download it and convert it all to MonoGame). I have a terrain project that I downloaded and converted, and it uses VertexPositionColorTexture.VertexDeclaration, which is a built-in vertex definition. XNA also lets you define your own vertex definitions, although I don't have an example of that handy. Usually there's no need to, because the built-in types handle most beginner projects.


GridVertexBuffer = new VertexBuffer(Game.GraphicsDevice, VertexPositionColorTexture.VertexDeclaration, GridVertices.Length, BufferUsage.WriteOnly);
GridVertexBuffer.SetData<VertexPositionColorTexture>(GridVertices);

As for HLSL, I'll switch to my DX11 code, partly because it's handier and partly because it's basically the same thing as XNA, although XNA before MonoGame was DX9-based and used a lower shader model because of it. Still, it's about 98% the same for most of what you'll do starting out.

This is a basic Phong shader, which is what you need for most game programming. This one shader can be the only shader you need for 3D games if you don't care about bump mapping and other similar effects.

XNA HLSL has a few glaring differences from DX11 HLSL, and one of them is constant buffers. Notice that they're basically the same thing as GLSL's uniforms. XNA HLSL did not require you to explicitly define them as constant buffers; it handles that in the background for you.
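One detail worth knowing once you do write explicit constant buffers: HLSL packs cbuffer members into 16-byte registers, and a vector member won't straddle a register boundary. A C++ sketch of that rule (the function is mine, for illustration; inputs are each member's float component count):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of HLSL's cbuffer packing: members pack into 16-byte registers,
// and a vector is pushed to the next register if it would cross a boundary.
// Input: float component count per member (float=1, float2=2, float3=3, float4=4).
std::vector<size_t> cbufferOffsets(const std::vector<int>& componentCounts) {
    std::vector<size_t> offsets;
    size_t offset = 0;
    for (int n : componentCounts) {
        size_t bytes = n * 4;
        // Would this member cross a 16-byte boundary? Start a new register.
        if (offset % 16 != 0 && (offset % 16) + bytes > 16)
            offset += 16 - (offset % 16);
        offsets.push_back(offset);
        offset += bytes;
    }
    return offsets;
}
```

For example, the float4/float3/float/float4/float4 sequence in the ParametersBuffer below packs to byte offsets 0, 16, 28, 32, and 48: the lone float fits in the leftover lane after the float3, so no padding register is needed there.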

VertexShaderMain is where this code starts. It takes a VertexShaderInput structure as a parameter and returns PixelShaderInput.

PixelShaderInput VertexShaderMain(VertexShaderInput Input)

Here is the definition of that structure:


struct VertexShaderInput
{
    float4 InputPosition : POSITION;
    float2 InputUV : TEXCOORD0;
    float3 InputNormal : NORMAL;
    float4 InputColor : COLOR;
};

So, there it's defining a vertex as Position, UV Texture Coordinates, a Vertex Normal, and a Color.

In DX11, the calling code defines the vertex like this:


                    D3D11_INPUT_ELEMENT_DESC InputElementDescription[] =
                    {
                        {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},        //Begins at offset 0 bytes.
                        {"TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0},        //Begins at offset 12 bytes.
                        {"NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0},        //Begins at offset 20 bytes.
                        {"COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0},    //Begins at offset 32 bytes.
                    };
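Those byte offsets are just the running total of the preceding formats' sizes (POSITION is 12 bytes, TEXCOORD 8, NORMAL 12, COLOR 16). A throwaway C++ sketch of that arithmetic (the function is mine, for illustration):

```cpp
#include <cassert>
#include <vector>

// Computes the AlignedByteOffset for each element in a tightly packed
// per-vertex layout: each element starts where the previous one ended.
std::vector<int> elementOffsets(const std::vector<int>& formatBytes) {
    std::vector<int> offsets;
    int offset = 0;
    for (int bytes : formatBytes) {
        offsets.push_back(offset);
        offset += bytes;
    }
    return offsets;
}
```

In practice you can also pass D3D11_APPEND_ALIGNED_ELEMENT as the offset and let D3D11 do this accumulation for you.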

I don't have any XNA code handy to compare it to, but I think XNA HLSL used a structure in the exact same way.

In my YouTube HLSL tutorial, I actually used XNA to call the HLSL code. So it shows how it's done in XNA including the differences with DX11. Although in that case, I think I was using XNA's built in model class and because of that did not have to define vertices from the XNA side. Still, you can see the structure on the HLSL side in those XNA examples.


Texture2D ColorMap;
SamplerState SamplerSettings;


cbuffer MatrixConstantBuffer : register(b0)    //The main program assigns this to the Vertex Shader.
{
    matrix WorldMatrix;
    matrix ViewMatrix;
    matrix ProjectionMatrix;
}


cbuffer ParametersBuffer : register(b0)    //The main program assigns this to the Pixel Shader.
{
    float4 AmbientLightColor;
    float3 DiffuseLightDirection;    //Must be normalized before being passed in, so that it can be used to calculate angles.
    float ControlCode;
    float4 DiffuseLightColor;
    float4 CameraPosition;
}


struct VertexShaderInput
{
    float4 InputPosition : POSITION;
    float2 InputUV : TEXCOORD0;
    float3 InputNormal : NORMAL;
    float4 InputColor : COLOR;
};

struct PixelShaderInput
{
    float4 Position : SV_POSITION;
    float2 UV : TEXCOORD0;
    float3 WorldSpacePosition : TEXCOORD1;
    float4 Color : COLOR;
    //nointerpolation float3 Normal : NORMAL;    //Nointerpolation turns off smooth shading (flat shading).
    float3 Normal : NORMAL;
};


float4 BlinnSpecular(float3 LightDirection, float4 LightColor, float3 PixelNormal, float3 CameraDirection, float SpecularPower)
{
    float3 HalfwayNormal;
    float4 SpecularLight;
    float SpecularHighlightAmount;


    HalfwayNormal = normalize(LightDirection + CameraDirection);
    SpecularHighlightAmount = pow(saturate(dot(PixelNormal, HalfwayNormal)), SpecularPower);
    SpecularLight = SpecularHighlightAmount * LightColor;

    return SpecularLight;
}


float4 PhongSpecular(float3 LightDirection, float4 LightColor, float3 PixelNormal, float3 CameraDirection, float SpecularPower)
{
    float3 ReflectedLightDirection;    
    float4 SpecularLight;
    float SpecularHighlightAmount;


    ReflectedLightDirection = 2.0f * PixelNormal * saturate(dot(PixelNormal, LightDirection)) - LightDirection;
    SpecularHighlightAmount = pow(saturate(dot(ReflectedLightDirection, CameraDirection)), SpecularPower);
    SpecularLight = SpecularHighlightAmount * LightColor;

    return SpecularLight;
}
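If it helps to see the math outside the shader, here's a CPU port of the scalar part of PhongSpecular above (the shader then multiplies this amount by the light color). The Vec3 type and helper names are mine; inputs are assumed normalized, as in the shader:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float saturate(float v)    { return std::min(std::max(v, 0.0f), 1.0f); }

// Reflect the light direction about the normal, then raise the clamped
// cosine between the reflection and the camera direction to SpecularPower.
float phongSpecularAmount(Vec3 lightDir, Vec3 normal, Vec3 cameraDir, float specularPower) {
    float nDotL = saturate(dot3(normal, lightDir));
    Vec3 reflected = { 2.0f * normal.x * nDotL - lightDir.x,
                       2.0f * normal.y * nDotL - lightDir.y,
                       2.0f * normal.z * nDotL - lightDir.z };
    return std::pow(saturate(dot3(reflected, cameraDir)), specularPower);
}
```

Looking head-on along the light and the normal gives the full highlight; viewing from 90 degrees off the reflection gives none, and the high exponent makes the falloff between those extremes very sharp.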


PixelShaderInput VertexShaderMain(VertexShaderInput Input)
{
    PixelShaderInput Output;

    Input.InputPosition.w = 1.0f;    //This is actually brought in as 3D instead of 4D and so we have to correct it for matrix calculations.
    Output.Position = mul(Input.InputPosition, WorldMatrix);
    Output.WorldSpacePosition = Output.Position.xyz;    //Explicitly drop w; only the 3D world position is needed for lighting.
    Output.Position = mul(Output.Position, ViewMatrix);
    Output.Position = mul(Output.Position, ProjectionMatrix);


    Output.Normal = mul(Input.InputNormal, (float3x3)WorldMatrix);    //Only the Object's world matrix need be applied, not the 2 camera matrices. Float3x3 conversion is because it's a float3 instead of a float4.
    Output.Normal = normalize(Output.Normal);    //Normalize the normal in case the matrix math de-normalized it.

    Output.UV = Input.InputUV;

    Output.Color = Input.InputColor;

    return Output;
}


float4 PixelShaderMain(PixelShaderInput Input) : SV_TARGET
{
    float3 LightDirection;
    float DiffuseLightPercentage;
    float4 OutputColor;
    float4 SpecularColor;
    float3 CameraDirection;    //Float3 because the w component really doesn't belong in a 3D vector normal.
    float4 AmbientLight;
    float4 DiffuseLight;
    float4 InputColor;

    if (ControlCode == 0.0f)
    {
        InputColor = Input.Color;
    }
    else
    {
        InputColor = ColorMap.Sample(SamplerSettings, Input.UV);
    }

    LightDirection = -DiffuseLightDirection;    //Normal must face into the light, rather than WITH the light to be lit up.
    DiffuseLightPercentage = saturate(dot(Input.Normal, LightDirection));    //Percentage is based on angle between the direction of light and the vertex's normal.
    DiffuseLight = saturate((DiffuseLightColor * InputColor) * DiffuseLightPercentage);    //Apply only the percentage of the diffuse color. Saturate clamps output between 0.0 and 1.0.

    CameraDirection = normalize(CameraPosition - Input.WorldSpacePosition);    //Create a normal that points in the direction from the pixel to the camera.

    if (DiffuseLightPercentage == 0.0f)
    {
        SpecularColor  = float4(0.0f, 0.0f, 0.0f, 1.0f);
    }
    else
    {
        //SpecularColor = BlinnSpecular(LightDirection, DiffuseLightColor, Input.Normal, CameraDirection, 45.0f);
        //SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor+Input.Color, Input.Normal, CameraDirection, 45.0f);
        SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor, Input.Normal, CameraDirection, 45.0f);
    }
    //OutputColor = saturate(AmbientLightColor + DiffuseLight * DiffuseLightPercentage + SpecularColor);

    
    if (ControlCode == 1.0f)
    {
        OutputColor = saturate((AmbientLightColor * InputColor) + DiffuseLight);
    }
    else
    {
        OutputColor = saturate((AmbientLightColor * InputColor) + DiffuseLight + SpecularColor);
    }

    return OutputColor;
}
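The diffuse term in PixelShaderMain above can also be checked on the CPU. This sketch (names are mine) shows why the light direction is negated: you want the direction *toward* the light, and the lit fraction is the clamped cosine of the angle between that and the surface normal:

```cpp
#include <algorithm>
#include <cassert>

struct Vec3 { float x, y, z; };

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float saturate(float v)    { return std::min(std::max(v, 0.0f), 1.0f); }

// Negate the direction the light travels to get the direction toward the
// light, then take the clamped cosine against the (normalized) surface normal.
float diffusePercentage(Vec3 normal, Vec3 diffuseLightDirection) {
    Vec3 towardLight = { -diffuseLightDirection.x,
                         -diffuseLightDirection.y,
                         -diffuseLightDirection.z };
    return saturate(dot3(normal, towardLight));
}
```

A surface facing straight into a downward light is fully lit; a surface facing away clamps to zero rather than going negative, which is what saturate is doing in the shader.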



This topic is closed to new replies.
