fighting_falcon93

DX11 Help - DirectX11 - Color Interpolation Along Quad Diagonal


Imagine that we have a vertex structure that looks like this:

struct Vertex
{
    XMFLOAT3 position;
    XMFLOAT4 color;
};

The vertex shader looks like this:

cbuffer MatrixBuffer
{
    matrix world;
    matrix view;
    matrix projection;
};

struct VertexInput
{
    float4 position : POSITION;
    float4 color    : COLOR;
};

struct PixelInput
{
    float4 position : SV_POSITION;
    float4 color    : COLOR;
};

PixelInput main(VertexInput input)
{
    PixelInput output;

    input.position.w = 1.0f;

    output.position = mul(input.position,  world);
    output.position = mul(output.position, view);
    output.position = mul(output.position, projection);

    output.color = input.color;

    return output;
}

And the pixel shader looks like this:

struct PixelInput
{
    float4 position : SV_POSITION;
    float4 color    : COLOR;
};

float4 main(PixelInput input) : SV_TARGET
{
    return input.color;
}

Now let's create a quad consisting of 2 triangles and the vertices A, B, C and D:

// Vertex A.
vertices[0].position = XMFLOAT3(-1.0f,  1.0f,  0.0f);
vertices[0].color    = XMFLOAT4( 0.5f,  0.5f,  0.5f,  1.0f);

// Vertex B.
vertices[1].position = XMFLOAT3( 1.0f,  1.0f,  0.0f);
vertices[1].color    = XMFLOAT4( 0.5f,  0.5f,  0.5f,  1.0f);

// Vertex C.
vertices[2].position = XMFLOAT3(-1.0f, -1.0f,  0.0f);
vertices[2].color    = XMFLOAT4( 0.5f,  0.5f,  0.5f,  1.0f);

// Vertex D.
vertices[3].position = XMFLOAT3( 1.0f, -1.0f,  0.0f);
vertices[3].color    = XMFLOAT4( 0.5f,  0.5f,  0.5f,  1.0f);

// 1st triangle.
indices[0] = 0; // Vertex A.
indices[1] = 3; // Vertex D.
indices[2] = 2; // Vertex C.

// 2nd triangle.
indices[3] = 0; // Vertex A.
indices[4] = 1; // Vertex B.
indices[5] = 3; // Vertex D.

This will result in a grey quad, as shown in the image below. I've outlined the edges in red to better illustrate the two triangles:

[Image: grey quad with the two triangle edges outlined in red]

Now imagine that we’d want our quad to have a different color in vertex A:

// Vertex A.
vertices[0].position = XMFLOAT3(-1.0f, 1.0f, 0.0f);
vertices[0].color    = XMFLOAT4( 0.0f, 0.0f, 0.0f, 1.0f);

[Image: quad with a black-to-grey gradient spreading out from vertex A]

That works as expected, since there's now an interpolation between the black color in vertex A and the grey color in vertices B, C and D. Let's revert the previous change and instead change the color of vertex C:

// Vertex C.
vertices[2].position = XMFLOAT3(-1.0f, -1.0f, 0.0f);
vertices[2].color    = XMFLOAT4( 0.0f,  0.0f, 0.0f, 1.0f);

[Image: the gradient from vertex C only spreads across the first triangle, stopping at the diagonal]

As you can see, the interpolation only covers the first triangle (half of the quad) instead of the entire quad. This is because there's no edge between vertex C and vertex B.
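To make this concrete, here's the per-triangle (barycentric) interpolation that the rasterizer performs, sketched on the CPU. This is a hypothetical plain-C++ helper for illustration, not part of any D3D API:

```cpp
#include <cassert>

// Barycentric interpolation of one scalar attribute inside a single
// triangle; this is the same scheme the rasterizer applies per triangle.
// (px, py) is the query point, (ax, ay) / (bx, by) / (cx, cy) are the
// triangle corners, and ca / cb / cc are the attribute values there.
float baryInterp(float px, float py,
                 float ax, float ay, float bx, float by, float cx, float cy,
                 float ca, float cb, float cc)
{
    float den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy);
    float wa  = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den;
    float wb  = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den;
    float wc  = 1.0f - wa - wb;
    return wa * ca + wb * cb + wc * cc;
}
```

With C black (0.0) and A, D grey (0.5), the quad center (0, 0) lies on the A-D diagonal of triangle ACD, so C's weight there is exactly zero and the center stays grey: baryInterp(0, 0, -1, 1, -1, -1, 1, -1, 0.5f, 0.0f, 0.5f) returns 0.5.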

Which brings us to my question:

I want the interpolation to go across the entire quad and not only across the triangle. So regardless of which vertex we decide to change the color of, the color interpolation should always go across the entire quad. Is there any efficient way of achieving this without adding more vertices and triangles?

An illustration of what I'm trying to achieve is shown in the image below:

[Image: desired result, with the gradient from vertex C spreading smoothly across the entire quad]
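For reference, the behaviour I'm after is what bilinear interpolation of the four corner values would give. A quick CPU sketch (hypothetical helper, for illustration only):

```cpp
#include <cassert>

// Bilinear interpolation of the four corner values of a quad.
// u runs from the left edge (A/C) to the right edge (B/D),
// v from the top edge (A/B) to the bottom edge (C/D), both in [0, 1].
float bilerp(float a, float b, float c, float d, float u, float v)
{
    float top    = a + (b - a) * u;   // along the top edge (A -> B)
    float bottom = c + (d - c) * u;   // along the bottom edge (C -> D)
    return top + (bottom - top) * v;  // blend the two edges
}
```

With vertex C black, the quad center would become bilerp(0.5, 0.5, 0.0, 0.5, 0.5, 0.5) = 0.375 instead of staying 0.5, and the gradient would look the same no matter which corner is darkened.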

 

Background

This is just a very brief explanation of the problem's background, in case it makes it easier to understand where the problem comes from and perhaps helps you find a better solution.

I'm trying to texture a terrain mesh in DirectX11. It's working, but I'm a bit unsatisfied with the result. When changing the terrain texture of a single vertex, the interpolation with the neighbouring vertices results in a hexagonal shape instead of a square one:

[Image: terrain where a single vertex's texture blends out into a hexagonal shape]

As the red arrows illustrate, I'd like the texture to be interpolated all the way into the corners of the quads.


I see a few ways to fix that.

 

1. Use 5 vertices, with a point at the center that you can fill with the properly interpolated value, but you now have 4 triangles instead of 2.

2. Store the vertex color in a texture and read it directly in the pixel shader. It will give you not the quad, but a lozenge shape, which is more logical in a sense :)

 

Something worth mentioning: if you stay with 2 triangles per quad, you will have issues with normal generation and lighting as soon as the quads are not planar. In that case solution 1 is again the simplest, as the height can be derived from the quad and you do not have to deal with swapping edges.
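A tiny CPU sketch of that non-planar problem (hypothetical helper types, nothing D3D specific): split the same four corners along either diagonal and the triangle covering a given corner ends up with a different normal:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Unnormalized face normal of triangle (p0, p1, p2), CCW winding seen
// from above (+Y) for the terrain-style layout used below.
Vec3 faceNormal(Vec3 p0, Vec3 p1, Vec3 p2)
{
    Vec3 e1 { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    Vec3 e2 { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    return cross(e1, e2);
}
```

With corner A raised to height 1 (A = (0,1,0), B = (1,0,0), C = (0,0,1), D = (1,0,1)), the A-D split gives faceNormal(A, C, D) = (0, 1, 1), while the B-C split gives faceNormal(A, C, B) = (1, 1, 1) for the triangle at the same corner, so the lighting around A depends on which diagonal was picked.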

 

And another thing: you do not seem to be sRGB compliant, and your gradients are messed up. Do not forget that the display buffer is sRGB content and textures are sRGB content, but lighting and color interpolation should all happen in linear space inside a shader. You have to use the "_SRGB" variants of the DXGI_FORMAT for the shader resource and render target views to let the hardware do the conversion for you. The colors in vertices or constant buffers have to be converted either manually in code (which needs at least 16 bits per value) or in the vertex shader (before interpolation).



Try using a different triangulation method:

[Image: terrain patch triangulated with alternating diagonal directions]

It may not be what you're after, but it's an improvement.

For what it's worth, your current triangle-strip-style layout is the worst for terrain (the above is better, and equilateral triangles are best, but a pain to work with).



Thank you all very much for your replies.

 

15 hours ago, galop1n said:

1. Use 5 vertices, with a point at the center that you can fill with the properly interpolated value, but you now have 4 triangles instead of 2.

Yeah, I've thought about that solution as well, but since it results in twice as many triangles, I'd prefer to avoid it if possible for performance reasons.

15 hours ago, galop1n said:

2. Store the vertex color in a texture and read it directly in the pixel shader. It will give you not the quad, but a lozenge shape, which is more logical in a sense :)

I'm sorry, but I don't think that I understand what you mean :(

Why would the interpolation be different if I read the color from a texture rather than storing it in the vertex structure? When rendering the terrain mesh I've swapped the vertex color for a texture coordinate, but isn't it the edges (the triangle layout) that control the interpolation, rather than the way the color is read, or am I mistaken?

15 hours ago, galop1n said:

Something worth mentioning: if you stay with 2 triangles per quad, you will have issues with normal generation and lighting as soon as the quads are not planar. In that case solution 1 is again the simplest, as the height can be derived from the quad and you do not have to deal with swapping edges.

Would you like to explain why there will be issues with the normal generation? I've been following the tutorials on rastertek.com, and according to those you simply calculate the normal vector of each triangle face, add that face normal to the normal of every vertex the face touches, and when all face normals have been added you normalize each vertex's vector to get the vertex normal. Isn't that the correct way of doing it?

15 hours ago, galop1n said:

And another thing: you do not seem to be sRGB compliant, and your gradients are messed up. Do not forget that the display buffer is sRGB content and textures are sRGB content, but lighting and color interpolation should all happen in linear space inside a shader. You have to use the "_SRGB" variants of the DXGI_FORMAT for the shader resource and render target views to let the hardware do the conversion for you. The colors in vertices or constant buffers have to be converted either manually in code (which needs at least 16 bits per value) or in the vertex shader (before interpolation).

Sorry if I'm not following you; I'm still learning and haven't grasped all the details yet, so please have patience with me :P

What do you mean by the gradients being messed up?

This is how I initialize my DXGI_SWAP_CHAIN_DESC in the project with the terrain rendering:

DXGI_SWAP_CHAIN_DESC swapChainDesc;
ZeroMemory(&swapChainDesc, sizeof(DXGI_SWAP_CHAIN_DESC));
swapChainDesc.BufferCount = 1;
swapChainDesc.BufferDesc.Width = GRAPHICS_SCREEN_WIDTH;
swapChainDesc.BufferDesc.Height = GRAPHICS_SCREEN_HEIGHT;
swapChainDesc.BufferDesc.RefreshRate.Numerator = 60;
swapChainDesc.BufferDesc.RefreshRate.Denominator = 1;
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.OutputWindow = hwnd;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.Windowed = true;

D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0, NULL, 0, D3D11_SDK_VERSION, &swapChainDesc, &swapChain, &device, NULL, &deviceContext);

And this is how I initialize the D3D11_INPUT_ELEMENT_DESC for my terrain shader:

D3D11_INPUT_ELEMENT_DESC polygonLayout[6];

ZeroMemory(&polygonLayout[0], sizeof(D3D11_INPUT_ELEMENT_DESC));
polygonLayout[0].SemanticName = "POSITION";
polygonLayout[0].Format = DXGI_FORMAT_R32G32B32_FLOAT;
polygonLayout[0].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;

ZeroMemory(&polygonLayout[1], sizeof(D3D11_INPUT_ELEMENT_DESC));
polygonLayout[1].SemanticName = "TEXCOORD";
polygonLayout[1].Format = DXGI_FORMAT_R32G32_FLOAT;
polygonLayout[1].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;

ZeroMemory(&polygonLayout[2], sizeof(D3D11_INPUT_ELEMENT_DESC));
polygonLayout[2].SemanticName = "NORMAL";
polygonLayout[2].Format = DXGI_FORMAT_R32G32B32_FLOAT;
polygonLayout[2].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;

ZeroMemory(&polygonLayout[3], sizeof(D3D11_INPUT_ELEMENT_DESC));
polygonLayout[3].SemanticName = "TERRAIN";
polygonLayout[3].Format = DXGI_FORMAT_R32_FLOAT;
polygonLayout[3].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;

ZeroMemory(&polygonLayout[4], sizeof(D3D11_INPUT_ELEMENT_DESC));
polygonLayout[4].SemanticName = "TERRAIN";
polygonLayout[4].SemanticIndex = 1;
polygonLayout[4].Format = DXGI_FORMAT_R32_FLOAT;
polygonLayout[4].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;

ZeroMemory(&polygonLayout[5], sizeof(D3D11_INPUT_ELEMENT_DESC));
polygonLayout[5].SemanticName = "TERRAIN";
polygonLayout[5].SemanticIndex = 2;
polygonLayout[5].Format = DXGI_FORMAT_R32_FLOAT;
polygonLayout[5].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;

Currently the terrain is limited to a maximum of 3 different textures. The terrain shaders (vertex and pixel) look like this:

cbuffer MatrixBuffer
{
	matrix world;
	matrix view;
	matrix projection;
};

struct VS_INPUT
{
	float4 position : POSITION;
	float2 texcoord : TEXCOORD0;
	float3 normal   : NORMAL;
	float  terrain0 : TERRAIN0;
	float  terrain1 : TERRAIN1;
	float  terrain2 : TERRAIN2;
};

struct VS_OUTPUT
{
	float4 position : SV_POSITION;
	float2 texcoord : TEXCOORD0;
	float3 normal   : NORMAL;
	float  terrain0 : TERRAIN0;
	float  terrain1 : TERRAIN1;
	float  terrain2 : TERRAIN2;
};

VS_OUTPUT main(VS_INPUT input)
{
	input.position.w = 1.0f;

	VS_OUTPUT output;
	output.position = mul(input.position,  world);
	output.position = mul(output.position, view);
	output.position = mul(output.position, projection);
	output.texcoord = input.texcoord;
	output.normal   = mul(input.normal, (float3x3)world);
	output.normal   = normalize(output.normal);
	output.terrain0 = input.terrain0;
	output.terrain1 = input.terrain1;
	output.terrain2 = input.terrain2;

	return output;
}

And the pixel shader:

Texture2D texture0 : register(t0);
Texture2D texture1 : register(t1);
Texture2D texture2 : register(t2);

SamplerState SampleType : register(s0);

cbuffer LightBuffer
{
	float4 ambient;
	float4 diffuse;
	float3 direction;
	float  padding;
}

struct PS_INPUT
{
	float4 position : SV_POSITION;
	float2 texcoord : TEXCOORD0;
	float3 normal   : NORMAL;
	float  terrain0 : TERRAIN0;
	float  terrain1 : TERRAIN1;
	float  terrain2 : TERRAIN2;
};

float4 main(PS_INPUT input) : SV_TARGET
{
	float4 color0;
	color0 = texture0.Sample(SampleType, input.texcoord);
	color0 = color0 * input.terrain0;

	float4 color1;
	color1 = texture1.Sample(SampleType, input.texcoord);
	color1 = color1 * input.terrain1;

	float4 color2;
	color2 = texture2.Sample(SampleType, input.texcoord);
	color2 = color2 * input.terrain2;

	float4 color;
	color = float4(0.0f, 0.0f, 0.0f, 1.0f);
	color = color + color0;
	color = color + color1;
	color = color + color2;
	color = saturate(color);

	return color;
}

Is there something wrong with the shader or should I just change the DXGI_FORMAT?

 

10 hours ago, vinterberg said:

Maybe this article would help?

https://mtnphil.wordpress.com/2011/09/22/terrain-engine/

I'm thinking specifically of the last part, "Additional optimizations" ...?

Thank you very much for the article, that was exactly what I was looking for. I've been searching Google like crazy but it never showed up, sadly.

 

8 hours ago, Mk_ said:

Try using a different triangulation method:

[Image: terrain patch triangulated with alternating diagonal directions]

It may not be what you're after, but it's an improvement.

For what it's worth, your current triangle-strip-style layout is the worst for terrain (the above is better, and equilateral triangles are best, but a pain to work with).

Thank you very much for your reply and the very illustrative picture.

Let's see if I've understood this correctly. So currently we're talking about 3 different triangulation methods:

[Image: the three triangulation methods side by side: uniform diagonals, alternating diagonals, equilateral triangles]

Am I right so far? :P

I'll try to implement triangulation method 2 and come back with a picture of the visual result.

In the meantime: I understand why triangulation method 3 is a pain to work with (because the vertices are not aligned like pixels in a texture), but it does result in perfect interpolation for every single vertex. Is there any decent way of translating, for example, a heightmap to this format? Or how do you store the height and texture data when you use triangulation method 3?


The article you were given shows the issue with normals and edge flipping nicely, so I won't go over it again; just accept that there is no single answer to the question "what is my vertex normal?".

About using a texture instead of vertex colors: with vertices, the interpolation uses barycentric coordinates to blend between 3 values, so your triangle is not aware that it is part of a quad and loses that precious information. If you use a texture, and derive the texture coordinates in your pixel shader from the interpolated position so that texel centers align with vertex centers, you get bi-linear interpolation instead, and the interpolation now considers all 4 vertex values.
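As a sketch of that texel alignment (hypothetical names; vertexAlignedUV is made up for illustration): for an N x N vertex grid stored in an N x N texture, shifting by half a texel puts each texel center exactly on a vertex, so the hardware's bilinear filter blends the 4 surrounding per-vertex values:

```cpp
#include <cassert>

// Map a world-space XZ position on the terrain to a UV that puts texel
// centers exactly on vertex positions.
// gridW, gridH: number of vertices (= texels) along each axis.
// originX, originZ: world position of vertex (0, 0).
// cellSize: spacing between neighbouring vertices.
void vertexAlignedUV(float worldX, float worldZ,
                     float originX, float originZ, float cellSize,
                     int gridW, int gridH,
                     float& u, float& v)
{
    // Continuous vertex-grid coordinates of the point.
    float gx = (worldX - originX) / cellSize;
    float gz = (worldZ - originZ) / cellSize;
    // Shift by half a texel so texel centers land on vertices.
    u = (gx + 0.5f) / gridW;
    v = (gz + 0.5f) / gridH;
}
```

For a 4 x 4 grid with unit spacing, the vertex at the origin maps to UV (0.125, 0.125), the center of texel (0, 0). Sampling the vertex-value texture with a bilinear sampler in the pixel shader then gives the quad-wide interpolation.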

 

Now it is time to blow a fuse with sRGB. Luminance and brightness are two different things. Luminance is the physical property: twice the amount of light is twice the value; it is linear. Brightness is not: twice as bright is not twice the value. That is your gamma space.

The swap chain in RGBA8 expects sRGB values, i.e. brightness. If you are not careful and treat your values as luminance where it matters, you will screw up light accumulation, blending and gradients.

 

Let's take an example: you have a pure grey-128 surface that reflects all light back to you. Say that one part of the screen is lit by a powerful light of intensity 1, and another part by an almighty light of intensity 2. The result is pixels of either 0.5 * 1 or 0.5 * 2, giving you 0.5 and 1.

 

But in sRGB, a brightness of 1 is a luminance of 1, while a brightness of 0.5 is only about 0.21 luminance, so your second light was not twice as bright but almost five times as bright! And this only applies to the surface you lit; because of the mistake, a different color would have gotten a different amount of boost or dampening! This is why it is important to work in linear space, and the same problems exist, for the same reasons, with gradients and alpha blending.

 

What you need to be sRGB compliant:

1. Albedo textures use the FORMAT_SRGB variants; they are sRGB content (brightness values) that needs to be converted to luminance when read in your shader.

2. Colors edited in a color picker (e.g. in Photoshop) are brightness too; when given as constants or vertex colors, they need to be converted to luminance manually with the proper formula.

3. Light intensities are physical values; they don't change (but their colors need to be converted, remember).

4. The render target view also uses a _SRGB format, so the GPU can do the opposite conversion on write and your swap chain receives the proper content.

The only reason we need to switch between these two representations is that we are more sensitive to low luminance, so we need more bits in the darks to store values without banding. That is exactly what sRGB provides.
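For reference, the "proper formula" is the piecewise sRGB transfer function. A small C++ sketch you could use on the CPU for vertex colors and constants (the hardware _SRGB formats do the same math for textures and render targets):

```cpp
#include <cassert>
#include <cmath>

// Exact sRGB -> linear (brightness -> luminance) conversion and back.
float srgbToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

float linearToSrgb(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}
```

srgbToLinear(0.5f) is about 0.214: a mid-grey brightness carries only roughly a fifth of full luminance, which is exactly the surprise factor between the two lights in the example above.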

 


