
# DX11 Specular Lighting


Hi guys,

Right now I'm trying to implement specular lighting in HLSL, but I've run into an issue. First, here's a code snippet:

```hlsl
float3 reflection;
float4 specular;

// Initialize the specular color.
specular = float4(0.0f, 0.0f, 0.0f, 0.0f);

// Calculate the amount of light on this pixel.
float lightIntensity = saturate(dot(input.normal, -lightvec));

if (lightIntensity > 0.0f)
{
    //color += (ambientcol * lightIntensity);

    // Saturate the ambient and diffuse color.
    color = saturate(color);

    reflection = normalize(2 * lightIntensity * input.normal - (-lightvec));

    specular = pow(saturate(dot(reflection, input.viewDirection)), specularPower);
}

// Add the specular component last to the output color.
color = saturate(color + specular);
```


All the external variables are set correctly (I've verified them in the debugger), but somehow the specular variable is never calculated correctly.

I know this seems a bit vague, but it's all I have for now. Do you know why this happens?

Thank you

---

How did you debug?

What do you mean by "never correctly calculated"? Is the value of specular NaN or some other weird value after it is calculated?

Also, do this:

1. Put a breakpoint on the line `specular = pow(...);` using PIX, Nsight, etc.
2. Check the value of each variable used to calculate specular, then step to the next line and check the value specular gets.
3. Post the values here.
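
If you can't attach a graphics debugger, a common fallback is to return an intermediate value from the pixel shader as a color and inspect it on screen. A minimal sketch of the idea, reusing the names from the snippet above (my illustration, not code from the thread):

```hlsl
// Early-out of the pixel shader to visualize an intermediate value.
// Direction vectors lie in [-1, 1], so remap them to [0, 1] for display.
return float4(reflection * 0.5f + 0.5f, 1.0f);

// Or show the raw specular factor as grayscale:
// float s = pow(saturate(dot(reflection, input.viewDirection)), specularPower);
// return float4(s, s, s, 1.0f);
```

A completely black output would mean the factor is zero everywhere, which usually points at the dot product (wrong vector direction) rather than at pow().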

---

It may have something to do with the fact that your declaration of specular is a float4, while your other variables appear to be float3 (at least reflection is; I am assuming the others are too). Do you get any warnings when you compile the shader?

In any case, it is good practice to make sure your types all match and that you aren't depending on any default behavior.
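
For illustration, one way to keep the types explicit (a sketch using the same variable names as the snippet above, not the original code):

```hlsl
// pow() of two scalars yields a scalar; store it as one...
float specularFactor = pow(saturate(dot(reflection, input.viewDirection)), specularPower);

// ...and broadcast it into the float4 yourself instead of relying on
// HLSL's implicit scalar-to-vector promotion.
specular = float4(specularFactor, specularFactor, specularFactor, 0.0f);
```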

---

Are your normals normalized? Interpolation can easily change their length and cause problems.

By the way, if you post code that can be compiled, it's easier for other people to debug it.

Also, note that

```hlsl
reflection = normalize(2 * lightIntensity * input.normal - (-lightvec));
```

is the standard reflection formula R = 2(N·L)N - L with L = -lightvec, but it only yields the correct direction when input.normal is unit length, because lightIntensity already bakes the normal's length into the dot product.
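
Putting the suggestions in this thread together, here is a minimal sketch of how the specular path could look. This is my own illustration, not the original poster's shader; it assumes lightvec is a unit vector pointing from the light toward the surface, and that color, input, and specularPower exist as in the first post:

```hlsl
// Renormalize after interpolation; rasterizer-interpolated normals
// are generally not unit length.
float3 normal  = normalize(input.normal);
float3 viewDir = normalize(input.viewDirection);
float3 toLight = -lightvec; // from the surface toward the light

float4 specular = float4(0.0f, 0.0f, 0.0f, 0.0f);

float lightIntensity = saturate(dot(normal, toLight));
if (lightIntensity > 0.0f)
{
    // reflect() expects the incident direction (light toward surface)
    // and computes R = I - 2(N·I)N, i.e. R = 2(N·L)N - L for L = -I.
    float3 reflection = reflect(-toLight, normal);

    // Scalar factor first, then an explicit broadcast into a float4.
    float specularFactor = pow(saturate(dot(reflection, viewDir)), specularPower);
    specular = float4(specularFactor, specularFactor, specularFactor, 0.0f);
}

// Add the specular component last to the output color.
color = saturate(color + specular);
```

If the output is still wrong with this version, the problem is more likely in the input data (the normals or lightvec) than in the shader logic itself.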


