How to get pixel coordinates in HLSL (quick question)

My vertex declaration has a D3DXVECTOR2 tex0; (texture coordinates). Is this what I need to get a pixel's location in the shader? Right now I'm using a sample .fx file from a tutorial that blends 3 textures together depending on a blend map's RGB values. I want to make it blend 7 textures and, instead of RGB, use some values from a struct I made. The point is, I'm not exactly sure what tells you the pixel's location. Is it the texcoords?

This is the sample file. I hope it's not a problem that I'm posting it a second time in this forum; the first time it was for a different issue.


//=============================================================================
// Terrain.fx by Frank Luna (C) 2004 All Rights Reserved.
//
// Blends three textures together with a blend map.
//=============================================================================

uniform extern float4x4 gViewProj;
uniform extern float3 gDirToSunW;
uniform extern texture gTex0;
uniform extern texture gTex1;
uniform extern texture gTex2;
uniform extern texture gBlendMap;
static float gTexScale = 16.0f;
sampler Tex0S = sampler_state
{
    Texture = <gTex0>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = POINT;
    AddressU = WRAP;
    AddressV = WRAP;
};

sampler Tex1S = sampler_state
{
    Texture = <gTex1>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = POINT;
    AddressU = WRAP;
    AddressV = WRAP;
};

sampler Tex2S = sampler_state
{
    Texture = <gTex2>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = POINT;
    AddressU = WRAP;
    AddressV = WRAP;
};

sampler BlendMapS = sampler_state
{
    Texture = <gBlendMap>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = POINT;
    AddressU = WRAP;
    AddressV = WRAP;
};

struct OutputVS
{
    float4 posH         : POSITION0;
    float2 tiledTexC    : TEXCOORD0;
    float2 nonTiledTexC : TEXCOORD1;
    float  shade        : TEXCOORD2;
};

OutputVS TerrainVS(float3 posW    : POSITION0, // We assume terrain geometry is specified
                   float3 normalW : NORMAL0,   // directly in world space.
                   float2 tex0    : TEXCOORD0)
{
    // Zero out our output.
    OutputVS outVS = (OutputVS)0;

    // Just compute a grayscale diffuse and ambient lighting
    // term--terrain has no specular reflectance. The color
    // comes from the texture.
    outVS.shade = saturate(max(0.0f, dot(normalW, gDirToSunW)) + 0.3f);

    // Transform to homogeneous clip space.
    outVS.posH = mul(float4(posW, 1.0f), gViewProj);

    // Pass on texture coordinates to be interpolated in rasterization.
    outVS.tiledTexC    = tex0 * gTexScale; // Scale tex-coord to tile.
    outVS.nonTiledTexC = tex0;             // Blend map not tiled.

    // Done--return the output.
    return outVS;
}

float4 TerrainPS(float2 tiledTexC    : TEXCOORD0,
                 float2 nonTiledTexC : TEXCOORD1,
                 float  shade        : TEXCOORD2) : COLOR
{
    // Layer maps are tiled.
    float3 c0 = tex2D(Tex0S, tiledTexC).rgb;
    float3 c1 = tex2D(Tex1S, tiledTexC).rgb;
    float3 c2 = tex2D(Tex2S, tiledTexC).rgb;

    // Blendmap is not tiled.
    float3 B = tex2D(BlendMapS, nonTiledTexC).rgb;

    // Find the inverse of all the blend weights so that we can
    // scale the total color to the range [0, 1].
    float totalInverse = 1.0f / (B.r + B.g + B.b);

    // Scale the colors in each layer by its corresponding weight
    // stored in the blendmap.
    c0 *= B.r * totalInverse;
    c1 *= B.g * totalInverse;
    c2 *= B.b * totalInverse;

    // Sum the colors and modulate with the shade to brighten/darken.
    float3 final = (c0 + c1 + c2) * shade;

    return float4(final, 1.0f);
}

technique TerrainTech
{
    pass P0
    {
        vertexShader = compile vs_2_0 TerrainVS();
        pixelShader  = compile ps_2_0 TerrainPS();
    }
}


Basically, the things I don't understand are the shade variable, and why AddressU/AddressV only appear in the sampler states and aren't mentioned anywhere in the shader functions.
It looks like shade is doing a simple shading effect based on the direction of the sun and the vertex normal. The only AddressU and AddressV I see are in the samplers, telling the shader to wrap (tile / repeat) the texture.

Seven textures is a lot for one shader, but anyway: I usually store a value in my vertex format struct (calculated once on the CPU) that I pass to the shader, which tells me how much of a certain texture to sample for each vertex. Then I sample the textures (3 in my case) using the texcoords like in any other rendering, and for each one I multiply by the custom amount I passed in (0 - 1). I then add all the colors together. This allows me to blend (splat) textures.
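For illustration, here is a rough sketch of that idea in HLSL. The function name SplatPS and the choice of TEXCOORD1 for the per-vertex weights are made up for this example (they are not from Luna's file); the Tex0S/Tex1S/Tex2S samplers are the ones declared in the .fx file above.

// Hypothetical pixel shader for the per-vertex weight approach.
// "weights" would be computed once on the CPU, stored in the vertex,
// and interpolated like any other attribute.
float4 SplatPS(float2 texC    : TEXCOORD0,
               float3 weights : TEXCOORD1) : COLOR
{
    // Sample each layer with the same (tiled) coordinate.
    float3 c0 = tex2D(Tex0S, texC).rgb;
    float3 c1 = tex2D(Tex1S, texC).rgb;
    float3 c2 = tex2D(Tex2S, texC).rgb;

    // Scale each layer by its weight (each in [0, 1]) and sum.
    float3 final = c0 * weights.x + c1 * weights.y + c2 * weights.z;
    return float4(final, 1.0f);
}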

Hope this helps,
Dj

So in a 512 x 512 texture, a UV of (0.5, 0.5) will be the pixel at location (256, 256) on the texture, right?


Yes. Top left is (0,0), bottom right is (1,1).
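If you ever need that texel position inside the shader, keep in mind that ps_2_0 has no way to query the texture size, so you'd pass it in yourself. A minimal sketch (gTexSize and TexelFromUV are made-up names; the constant would be set from the application):

uniform extern float2 gTexSize; // e.g. (512.0f, 512.0f), set from the CPU

float2 TexelFromUV(float2 uv)
{
    // (0.5, 0.5) on a 512x512 texture -> texel (256, 256)
    return uv * gTexSize;
}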
OK, one last thing, about WRAP. If AddressU and AddressV are set to WRAP and the texture is exactly 1/4 the size of the whole area I'm working with, will that tile it exactly 4 times? Like, if texture1 is 100x100 in size and the terrain mesh is 400x400 in size, that tiles texture1 perfectly 4 times, right? So then I can just use UVs from 0.0 to 4.0 for that texture?
Hi Bogomil,


Yeah, kind of.
One of the ideas behind texture coordinates is to make your code independent of the resolution of the actual textures, which is why those coordinates are relative (in [0,0] to [1,1]).
For a shader it doesn't matter whether texture1 has 100x100 pixels and texture2 has 400x400. If both textures are accessed with the same texture coordinate range, e.g. [0,0] to [1,1], they will be tiled equally. In order to repeat texture1 four times, you'd have to scale the texture coordinate only for texture1, i.e. [0,0] to [4,4].
The texture filters and mip mapping will take care of varying scales, i.e. if you used [0,0] to [8,8] the texture would simply be down-filtered more. Even if you pick something odd like [0,0] to [7,7], the filtering will interpolate the colors accordingly.
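As a rough example, you could scale the coordinate only where you sample texture1. This sketch reuses the samplers from the posted file; the function name TilePS and the 4.0f tiling factor are just placeholders for whatever you want:

// Sketch: tile only the second texture by scaling its coordinate.
float4 TilePS(float2 texC : TEXCOORD0) : COLOR
{
    float3 base   = tex2D(Tex0S, texC).rgb;        // covers the area once
    float3 detail = tex2D(Tex1S, texC * 4.0f).rgb; // WRAP repeats it 4 times in U and V
    return float4(0.5f * (base + detail), 1.0f);
}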

Currently, you use a simple bilinear filter. The GPU can do better. You may want to experiment with an anisotropic filter (the thing you usually turn on in video games). It adapts the filtering to varying distances and viewing angles. Also make sure you build mipmaps for your textures.

SamplerState g_samAnisotropic
{
    Filter = ANISOTROPIC;
    MaxAnisotropy = 8;
    AddressU = Wrap;
    AddressV = Wrap;
};
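Note that this SamplerState block uses the Direct3D 10/11 effect syntax. In the D3D9-style .fx file from the first post, the equivalent would look roughly like this (a sketch; Tex0AnisoS is a made-up name, and the highest usable MaxAnisotropy depends on the device caps):

sampler Tex0AnisoS = sampler_state
{
    Texture = <gTex0>;
    MinFilter = ANISOTROPIC; // anisotropic minification
    MagFilter = LINEAR;
    MipFilter = LINEAR;      // requires mipmaps to be built for the texture
    MaxAnisotropy = 8;
    AddressU = WRAP;
    AddressV = WRAP;
};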

MaxAnisotropy steers the quality; it's hardware-dependent and usually somewhere between 1 and 16, if I recall correctly.

Cheers!

