• ### Similar Content

• By isu diss
I'm following Rastertek tutorial 14 (http://rastertek.com/tertut14.html). The problem is, slope-based texturing doesn't work in my application. There are plenty of slopes in my terrain, but none of them get the slope color.
float4 PSMAIN(DS_OUTPUT Input) : SV_Target
{
    float4 grassColor;
    float4 slopeColor;
    float4 rockColor;
    float slope;
    float blendAmount;
    float4 textureColor;

    grassColor = txTerGrassy.Sample(SSTerrain, Input.TextureCoords);
    slopeColor = txTerMossRocky.Sample(SSTerrain, Input.TextureCoords);
    rockColor = txTerRocky.Sample(SSTerrain, Input.TextureCoords);

    // Calculate the slope of this point.
    slope = (1.0f - Input.LSNormal.y);

    if(slope < 0.2)
    {
        blendAmount = slope / 0.2f;
        textureColor = lerp(grassColor, slopeColor, blendAmount);
    }

    if((slope < 0.7) && (slope >= 0.2f))
    {
        blendAmount = (slope - 0.2f) * (1.0f / (0.7f - 0.2f));
        textureColor = lerp(slopeColor, rockColor, blendAmount);
    }

    if(slope >= 0.7)
    {
        textureColor = rockColor;
    }

    return float4(textureColor.rgb, 1);
}

Can anyone help me? Thanks.

• By cozzie
Hi all,
As part of the debug drawing system in my engine, I want to add support for rendering simple text on screen (HUD-style). From what I've read there are a few options, in short:
1. Write your own font sprite renderer
2. Use Direct2D/DirectWrite, combined with the DX11 render target/backbuffer
3. Use an external library, like the DirectX Toolkit, etc.
I want to go for number 2, but the articles/documentation confused me a bit. Some say you need to create a DX10 device to be able to do this, because it doesn't work directly with the DX11 device. But other articles say this was 'patched' later on and should work now.
Can someone shed some light on this and ideally provide an example or article on how to set this up?
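From the articles, the route that supposedly works directly with a D3D11 device (no extra DX10 device) is: create the device with the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag, use a B8G8R8A8 backbuffer, and create a Direct2D render target on the swap chain's DXGI surface. Roughly what I have in mind (untested sketch of my understanding, names are just placeholders):

// Untested sketch of option 2: D2D 1.0-style text on the D3D11 backbuffer.
// Requires D3D11_CREATE_DEVICE_BGRA_SUPPORT and a B8G8R8A8 backbuffer;
// link with d2d1.lib and dwrite.lib.
#include <d2d1.h>
#include <dwrite.h>
#include <dxgi.h>
#include <wchar.h>

ID2D1Factory*      d2dFactory      = nullptr;
IDWriteFactory*    dwriteFactory   = nullptr;
ID2D1RenderTarget* d2dRenderTarget = nullptr;
IDWriteTextFormat* textFormat      = nullptr;

bool CreateTextResources(IDXGISwapChain* swapChain)
{
    // D2D factory + DirectWrite factory.
    if (FAILED(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &d2dFactory)))
        return false;
    if (FAILED(DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                                   reinterpret_cast<IUnknown**>(&dwriteFactory))))
        return false;

    // Wrap the backbuffer's DXGI surface in a D2D render target.
    IDXGISurface* backBuffer = nullptr;
    if (FAILED(swapChain->GetBuffer(0, __uuidof(IDXGISurface), (void**)&backBuffer)))
        return false;

    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_DEFAULT,
        D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED));
    HRESULT hr = d2dFactory->CreateDxgiSurfaceRenderTarget(backBuffer, &props, &d2dRenderTarget);
    backBuffer->Release();
    if (FAILED(hr))
        return false;

    // A simple text format for the HUD.
    return SUCCEEDED(dwriteFactory->CreateTextFormat(L"Consolas", nullptr,
        DWRITE_FONT_WEIGHT_NORMAL, DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL,
        16.0f, L"en-us", &textFormat));
}

void DrawHudText(const WCHAR* text, float x, float y)
{
    ID2D1SolidColorBrush* brush = nullptr;
    d2dRenderTarget->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::White), &brush);

    // Draw after the 3D pass, before Present().
    d2dRenderTarget->BeginDraw();
    d2dRenderTarget->DrawText(text, (UINT32)wcslen(text), textFormat,
                              D2D1::RectF(x, y, x + 512.0f, y + 32.0f), brush);
    d2dRenderTarget->EndDraw();

    brush->Release();
}

From what I read there's also a newer D2D 1.1 route on Windows 8+ using ID2D1Device/ID2D1DeviceContext created from the DXGI device, but the surface render target above seems like the simplest option for a HUD. Is that right?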
All input is appreciated.
• By stale
I've just started learning about tessellation from Frank Luna's DX11 book. I'm getting some very weird behavior when I try to render a tessellated quad patch if I also render a mesh in the same frame. The tessellated quad patch renders just fine if it's the only thing I'm rendering. This is pictured below:
However, when I attempt to render the same tessellated quad patch along with the other entities in the scene (which are simple triangle-lists), I get the following error:

I have no idea why this is happening, and google searches have given me no leads at all. I use the following code to render the tessellated quad patch:
ID3D11DeviceContext* dc = GetGFXDeviceContext();

dc->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_4_CONTROL_POINT_PATCHLIST);
dc->IASetInputLayout(ShaderManager::GetInstance()->m_JQuadTess->m_InputLayout);

float blendFactors[] = { 0.0f, 0.0f, 0.0f, 0.0f }; // only used with D3D11_BLEND_BLEND_FACTOR
dc->RSSetState(m_rasterizerStates[RSWIREFRAME]);
dc->OMSetBlendState(m_blendStates[BSNOBLEND], blendFactors, 0xffffffff);
dc->OMSetDepthStencilState(m_depthStencilStates[DSDEFAULT], 0);

ID3DX11EffectTechnique* activeTech = ShaderManager::GetInstance()->m_JQuadTess->Tech;
D3DX11_TECHNIQUE_DESC techDesc;
activeTech->GetDesc(&techDesc);

for (unsigned int p = 0; p < techDesc.Passes; p++)
{
    TerrainVisual* terrainVisual = (TerrainVisual*)entity->m_VisualComponent;

    UINT stride = sizeof(TerrainVertex);
    UINT offset = 0;
    GetGFXDeviceContext()->IASetVertexBuffers(0, 1, &terrainVisual->m_VB, &stride, &offset);

    Vector3 eyePos = Vector3(cam->m_position);
    Matrix rotation = Matrix::CreateFromYawPitchRoll(entity->m_rotationEuler.x, entity->m_rotationEuler.y, entity->m_rotationEuler.z);
    Matrix model = rotation * Matrix::CreateTranslation(entity->m_position);
    Matrix view = cam->GetLookAtMatrix();
    Matrix MVP = model * view * m_ProjectionMatrix;

    ShaderManager::GetInstance()->m_JQuadTess->SetEyePosW(eyePos);
    ShaderManager::GetInstance()->m_JQuadTess->SetWorld(model);
    ShaderManager::GetInstance()->m_JQuadTess->SetWorldViewProj(MVP);

    activeTech->GetPassByIndex(p)->Apply(0, GetGFXDeviceContext());
    GetGFXDeviceContext()->Draw(4, 0);
}

dc->RSSetState(0);
dc->OMSetBlendState(0, blendFactors, 0xffffffff);
dc->OMSetDepthStencilState(0, 0);

I draw my scene by looping through the list of entities and calling the associated draw method depending on the entity's "visual type":
for (unsigned int i = 0; i < scene->GetEntityList()->size(); i++)
{
    Entity* entity = scene->GetEntityList()->at(i);

    if (entity->m_VisualComponent->m_visualType == VisualType::MESH)
        DrawMeshEntity(entity, cam, sun, point);
    else if (entity->m_VisualComponent->m_visualType == VisualType::BILLBOARD)
        DrawBillboardEntity(entity, cam, sun, point);
    else if (entity->m_VisualComponent->m_visualType == VisualType::TERRAIN)
        DrawTerrainEntity(entity, cam);
}
HR(m_swapChain->Present(0, 0));

Any help/advice would be much appreciated!

• I'm trying a barebones tessellation shader and getting unexpected results when increasing the tessellation factor. I'm rendering a group of quads and trying to apply tessellation to them.
OutsideTess = (1,1,1,1), InsideTess= (1,1)

OutsideTess = (1,1,1,1), InsideTess= (2,1)

I expected 4 triangles in the quad, not two. Any idea what's wrong?
Structs:
struct PatchTess
{
    float mEdgeTess[4]   : SV_TessFactor;
    float mInsideTess[2] : SV_InsideTessFactor;
};

struct VertexOut
{
    float4 mWorldPosition : POSITION;
    float  mTessFactor    : TESS;
};

struct DomainOut
{
    float4 mWorldPosition : SV_POSITION;
};

struct HullOut
{
    float4 mWorldPosition : POSITION;
};

Hull shader:
PatchTess PatchHS(InputPatch<VertexOut, 3> inputVertices)
{
    PatchTess patch;

    patch.mEdgeTess[0] = 1;
    patch.mEdgeTess[1] = 1;
    patch.mEdgeTess[2] = 1;
    patch.mEdgeTess[3] = 1;
    patch.mInsideTess[0] = 2;
    patch.mInsideTess[1] = 1;

    return patch;
}

[domain("quad")]
[partitioning("fractional_odd")]
[outputtopology("triangle_ccw")]
[outputcontrolpoints(4)]
[patchconstantfunc("PatchHS")]
[maxtessfactor(64.0)]
HullOut hull_main(InputPatch<VertexOut, 3> verticeData, uint index : SV_OutputControlPointID)
{
    HullOut ret;
    ret.mWorldPosition = verticeData[index].mWorldPosition;
    return ret;
}
[domain("quad")] DomainOut domain_main(PatchTess patchTess, float2 uv : SV_DomainLocation, const OutputPatch<HullOut, 4> quad) { DomainOut ret; const float MipInterval = 20.0f; ret.mWorldPosition.xz = quad[ 0 ].mWorldPosition.xz * ( 1.0f - uv.x ) * ( 1.0f - uv.y ) + quad[ 1 ].mWorldPosition.xz * uv.x * ( 1.0f - uv.y ) + quad[ 2 ].mWorldPosition.xz * ( 1.0f - uv.x ) * uv.y + quad[ 3 ].mWorldPosition.xz * uv.x * uv.y ; ret.mWorldPosition.y = quad[ 0 ].mWorldPosition.y; ret.mWorldPosition.w = 1; ret.mWorldPosition = mul( gFrameViewProj, ret.mWorldPosition ); return ret; }
Any ideas what could be wrong with these shaders?
• By simco50
Hello,
I've stumbled upon the Urho3D engine and found that it has a really nice and easy-to-read code structure.
I think the graphics abstraction looks really interesting, and I like the idea of how it defers pipeline state changes until just before the draw call to eliminate redundant state changes.
This is done by saving the state changes (blend state / SRV changes / RTV changes) in member variables and, just before the draw, applying the actual state changes through the graphics context.
It looks something like this (pseudo):
void PrepareDraw()
{
    if (renderTargetsDirty)
    {
        pD3D11DeviceContext->OMSetRenderTarget(mCurrentRenderTargets);
        renderTargetsDirty = false;
    }
    if (texturesDirty)
    {
        pD3D11DeviceContext->PSSetShaderResourceView(..., mCurrentSRVs);
        texturesDirty = false;
    }
    .... // Some more state changes
}

This all looked like a great design at first, but I've found one big issue with it, and I don't really understand how it is solved in their case or how I would tackle it.
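For completeness, the setter side wouldn't touch the device context at all; it would just record the value and raise a dirty flag, something like this (pseudo again, member names as above):

void SetRenderTarget(ID3D11RenderTargetView* rtv)
{
    if (mCurrentRenderTargets[0] != rtv)
    {
        mCurrentRenderTargets[0] = rtv;
        renderTargetsDirty = true;
    }
}

void SetTexture(unsigned slot, ID3D11ShaderResourceView* srv)
{
    if (mCurrentSRVs[slot] != srv)
    {
        mCurrentSRVs[slot] = srv;
        texturesDirty = true;
    }
}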
I'll explain it by example: imagine I have two render targets, my backbuffer RT and an offscreen RT.
Say I want to render my backbuffer to the offscreen RT and then back to the backbuffer (just for the sake of the example).
You would do something like this:
//Render to the offscreen RT
pGraphics->SetRenderTarget(pOffscreenRT->GetRTV());
pGraphics->SetTexture(diffuseSlot, pDefaultRT->GetSRV());
pGraphics->DrawQuad();
pGraphics->SetTexture(diffuseSlot, nullptr); //Remove the default RT from input

//Render to the default (screen) RT
pGraphics->SetRenderTarget(nullptr); //Default RT
pGraphics->SetTexture(diffuseSlot, pOffscreenRT->GetSRV());
pGraphics->DrawQuad();

The problem is that the second time the application loop comes around, the offscreen render target is still bound as an input ShaderResourceView when it gets set as a RenderTargetView, because in Urho3D the RenderTargetView state is always applied before the ShaderResourceViews (see the top code snippet), even when I set the SRV to nullptr before using it as a RTV like above. This causes errors, because a resource can't be bound as both an input and a render target.
What is usually the solution to this?
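The only workaround I can think of is to resolve the conflict inside PrepareDraw itself: right before applying the render targets, scan the tracked SRVs and unbind any whose resource is about to become a render target. A rough sketch of the idea (not what Urho3D actually does; MAX_TEXTURE_SLOTS, MAX_RENDERTARGETS and mCurrentDepthStencilView are just placeholders):

void PrepareDraw()
{
    if (renderTargetsDirty)
    {
        // Unbind any tracked SRV that points at the same resource as one of the new RTVs.
        for (unsigned slot = 0; slot < MAX_TEXTURE_SLOTS; ++slot)
        {
            if (!mCurrentSRVs[slot])
                continue;

            ID3D11Resource* srvResource = nullptr;
            mCurrentSRVs[slot]->GetResource(&srvResource);

            for (unsigned rt = 0; rt < MAX_RENDERTARGETS; ++rt)
            {
                if (!mCurrentRenderTargets[rt])
                    continue;

                ID3D11Resource* rtvResource = nullptr;
                mCurrentRenderTargets[rt]->GetResource(&rtvResource);

                if (srvResource == rtvResource)
                {
                    mCurrentSRVs[slot] = nullptr; // drop the conflicting SRV
                    texturesDirty = true;         // make sure the null actually gets flushed
                }
                rtvResource->Release();
            }
            srvResource->Release();
        }

        pD3D11DeviceContext->OMSetRenderTargets(MAX_RENDERTARGETS, mCurrentRenderTargets, mCurrentDepthStencilView);
        renderTargetsDirty = false;
    }

    if (texturesDirty)
    {
        pD3D11DeviceContext->PSSetShaderResources(0, MAX_TEXTURE_SLOTS, mCurrentSRVs);
        texturesDirty = false;
    }
}

Is that how it's usually handled, or is there a cleaner pattern?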

Thanks!

# DX11 Phong shader problem, cannot find out the reason


## Recommended Posts

Hello everyone,

This is my first post here, and this is a (very) little "something" about me:
After finishing my game in Unity, I would like to port it to a custom framework in DX11 to get better performance and graphics, no matter how much time it takes!

I'm currently following some DX11 tutorials here: http://www.rastertek.com/.

The problem I have right now is this:

As you can see, it's a simple plane subdivided into 4!

The model was imported in the simplest way through an FBX, so I've got 24 vertices, and each vertex has a normal and UV coords!
The normal for each vertex is (0, 0, -1)!

I can't figure out what the problem is here!

The only thing I can think of is that I should use 24 indices and 9 vertices/normals/UVs, but using control points would make me lose hard/soft edges in complex models!

Could you help me to find out the problem?

Thank you very much!

EDIT:

I've tried without duplicated vertices, so 9 vertices and 24 indices!
The problem should be somewhere else!
Edited by politoleo

##### Share on other sites

My guess is a normal transformation issue; it looks like the center vertex normal is (0, 0, -1) and the rest are not. The specular highlight is also not distributed in a symmetrical way, which is strange.
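If it does turn out to be a transform problem, one thing worth checking is whether the world matrix contains non-uniform scale; in that case normals should be transformed by the inverse-transpose of the world matrix rather than the world matrix itself. A rough DirectXMath sketch (purely illustrative, your math library may differ):

#include <DirectXMath.h>
using namespace DirectX;

// Build the matrix for transforming normals: inverse-transpose of the world matrix.
// With only rotation/translation (or uniform scale) this equals the rotation part,
// so it mostly matters when non-uniform scaling is involved.
XMMATRIX MakeNormalMatrix(const XMMATRIX& world)
{
    // Drop the translation row so it doesn't affect the inverse.
    XMMATRIX rotScale = world;
    rotScale.r[3] = XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f);

    XMVECTOR det;
    return XMMatrixTranspose(XMMatrixInverse(&det, rotScale));
}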

##### Share on other sites

The buffer I'm passing is this one:

Here are the shaders. (I've just added the smoothstep function; the rest is exactly like the tutorial, and I can observe the same behavior with the tutorial shader as well.)

VERTEX:

////////////////////////////////////////////////////////////////////////////////
// Filename: light.vs
////////////////////////////////////////////////////////////////////////////////

/////////////
// GLOBALS //
/////////////
cbuffer MatrixBuffer
{
matrix worldMatrix;
matrix viewMatrix;
matrix projectionMatrix;
};

cbuffer CameraBuffer
{
float3 cameraPosition;
};

//////////////
// TYPEDEFS //
//////////////
struct VertexInputType
{
float4 position : POSITION;
float2 tex : TEXCOORD0;
float3 normal : NORMAL;
};

struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float3 normal : NORMAL;
float3 viewDirection : TEXCOORD1;
};

////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
PixelInputType LightVertexShader(VertexInputType input)
{
PixelInputType output;
float4 worldPosition;

// Change the position vector to be 4 units for proper matrix calculations.
input.position.w = 1.0f;

// Calculate the position of the vertex against the world, view, and projection matrices.
output.position = mul(input.position, worldMatrix);
output.position = mul(output.position, viewMatrix);
output.position = mul(output.position, projectionMatrix);

// Store the texture coordinates for the pixel shader.
output.tex = input.tex;

// Calculate the normal vector against the world matrix only.
output.normal = mul(input.normal, (float3x3)worldMatrix);

// Normalize the normal vector.
output.normal = normalize(output.normal);

// Calculate the position of the vertex in the world.
worldPosition = mul(input.position, worldMatrix);

// Determine the viewing direction based on the position of the camera and the position of the vertex in the world.
output.viewDirection = cameraPosition.xyz - worldPosition.xyz;

// Normalize the viewing direction vector.
output.viewDirection = normalize(output.viewDirection);

return output;
}

PIXEL:

Texture2D shaderTexture;
SamplerState SampleType;

cbuffer LightBuffer
{
float4 ambientColor;
float4 diffuseColor;
float3 lightDirection;
float  specularPower;
float4 specularColor;
};

//////////////
// TYPEDEFS //
//////////////
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float3 normal : NORMAL;
float3 viewDirection : TEXCOORD1;
};

////////////////////////////////////////////////////////////////////////////////
// Pixel Shader
////////////////////////////////////////////////////////////////////////////////
float4 LightPixelShader(PixelInputType input) : SV_TARGET
{
float4 textureColor;
float3 lightDir;
float lightIntensity;
float4 color;
float3 reflection;
float4 specular;

// Sample the pixel color from the texture using the sampler at this texture coordinate location.
textureColor = shaderTexture.Sample(SampleType, input.tex);

// Set the default output color to the ambient light value for all pixels.
color = ambientColor;

// Invert the light direction for calculations.
lightDir = -lightDirection;

// Calculate the amount of light on this pixel.
lightIntensity = saturate(dot(input.normal, lightDir));

// Determine the final diffuse color based on the diffuse color and the amount of light intensity.

color += (diffuseColor * lightIntensity);

// Saturate the ambient and diffuse color.
color = saturate(color);

// Calculate the reflection vector based on the light intensity, normal vector, and light direction.
reflection = normalize(2 * lightIntensity * input.normal - lightDir);

// Determine the amount of specular light based on the reflection vector, viewing direction, and specular power.
specular = smoothstep(0.0,0.05,lightIntensity) * pow(saturate(dot(reflection, input.viewDirection)), specularPower);

specular = specular * specularColor;

// Multiply the texture pixel and the input color to get the textured result.
color = color * textureColor;

// Add the specular component last to the output color.
color = saturate(color + specular);

return color;
}


Thank you again!

Edited by politoleo

##### Share on other sites

PS.

It's not symmetrical because I was moving the camera!

##### Share on other sites

What's your input layout (CreateInputLayout, IASetInputLayout)?

##### Share on other sites

This is the part of the code that initializes the input layout for that shader:

	// Create the vertex input layout description.
// This setup needs to match the VertexType structure in the Model and in the shader.
polygonLayout[0].SemanticName = "POSITION";
polygonLayout[0].SemanticIndex = 0;
polygonLayout[0].Format = DXGI_FORMAT_R32G32B32_FLOAT;
polygonLayout[0].InputSlot = 0;
polygonLayout[0].AlignedByteOffset = 0;
polygonLayout[0].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[0].InstanceDataStepRate = 0;

polygonLayout[1].SemanticName = "TEXCOORD";
polygonLayout[1].SemanticIndex = 0;
polygonLayout[1].Format = DXGI_FORMAT_R32G32_FLOAT;
polygonLayout[1].InputSlot = 0;
polygonLayout[1].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
polygonLayout[1].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[1].InstanceDataStepRate = 0;

polygonLayout[2].SemanticName = "NORMAL";
polygonLayout[2].SemanticIndex = 0;
polygonLayout[2].Format = DXGI_FORMAT_R32G32B32_FLOAT;
polygonLayout[2].InputSlot = 0;
polygonLayout[2].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
polygonLayout[2].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[2].InstanceDataStepRate = 0;

// Get a count of the elements in the layout.
numElements = sizeof(polygonLayout) / sizeof(polygonLayout[0]);

// Create the vertex input layout.
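
The actual create call is the standard one from the tutorial, roughly like this (variable names as in Rastertek, mine may differ slightly):

// Create the vertex input layout (Rastertek-style names, shown for completeness).
result = device->CreateInputLayout(polygonLayout, numElements,
                                   vertexShaderBuffer->GetBufferPointer(),
                                   vertexShaderBuffer->GetBufferSize(),
                                   &m_layout);
if (FAILED(result))
{
    return false;
}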


Thanks!

##### Share on other sites

Not sure if that is the problem, but you need to normalize the viewDirection and the normal inside the pixel shader, as they are not guaranteed to be normalized after interpolation. Also make sure that lightDir is normalized.

##### Share on other sites

Thank you satanir, it really helped a lot!

Now everything looks OK from a distance, but when you move the camera the reflection follows the polygon edges:

This is the pixel shader now:

Texture2D shaderTexture;
SamplerState SampleType;

cbuffer LightBuffer
{
float4 ambientColor;
float4 diffuseColor;
float3 lightDirection;
float  specularPower;
float4 specularColor;
};

//////////////
// TYPEDEFS //
//////////////
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float3 normal : NORMAL;
float3 viewDirection : TEXCOORD1;
};

////////////////////////////////////////////////////////////////////////////////
// Pixel Shader
////////////////////////////////////////////////////////////////////////////////
float4 LightPixelShader(PixelInputType input) : SV_TARGET
{
float4 textureColor;
float3 lightDir;
float lightIntensity;
float4 color;
float3 reflection;
float4 specular;

// Sample the pixel color from the texture using the sampler at this texture coordinate location.
textureColor = shaderTexture.Sample(SampleType, input.tex);

// Set the default output color to the ambient light value for all pixels.
color = ambientColor;

// Invert the light direction for calculations.
lightDir = -lightDirection;

// Normalize view direction
input.viewDirection = normalize(input.viewDirection);

// Calculate the amount of light on this pixel.
lightIntensity = saturate(dot(input.normal, lightDir));

// Determine the final diffuse color based on the diffuse color and the amount of light intensity.

color += (diffuseColor * lightIntensity);

// Saturate the ambient and diffuse color.
color = saturate(color);

// Calculate the reflection vector based on the light intensity, normal vector, and light direction.
reflection = normalize(2 * lightIntensity * input.normal - lightDir);

// Determine the amount of specular light based on the reflection vector, viewing direction, and specular power.
specular = smoothstep(0.0,0.05,lightIntensity) * pow(saturate(dot(reflection, input.viewDirection)), specularPower);

specular = specular * specularColor;

// Multiply the texture pixel and the input color to get the textured result.
color = color * textureColor;

// Add the specular component last to the output color.
color = saturate(color + specular);

return color;
}



FAR:

NEAR:

Thank you!

##### Share on other sites

Fixed it by removing the viewDirection normalization in the vertex shader (so it's only normalized in the pixel shader, after interpolation)!

Thank you very much for the help guys!