
# Top of the line vs Old crap

## 32 posts in this topic

So we are running into a bit of an issue. Our point lights are still not working. Here is the skinny on what we have:

1) We have deferred rendering.

2) We are using SharpDX and C#.

3) DX10 and DX11 are the only things we are attempting to use. We are not using any DX9 methods.

So, the issue:

We are getting some odd errors, and none of the tools we are using show a reason why. We cannot tell whether the issue is the code, the methods we are using, or the math. We lack the understanding of the pieces in place to really grasp what is going on. Perhaps one of you more skilled people here can assist?

We are attempting to avoid a complete breakdown of every system... as that seems to be the last resort.

What our goal is:

We are attempting to keep the deferred rendering while using point lights (we would like to have many of them in a single location). We are under the impression that to do this we must use DX11 methods. If we are unable to use these methods, it could hamper some of our other systems, since we are sending some of our UI elements through DX11.1 features.

Other issues:

PIX has apparently been abandoned by MS, and DX11.1 support from NVIDIA will not exist until Nsight 3.0 (which could take some serious time to arrive). This means we would need to revert back to DX11, and if we do that we lose the methods for controlling our UI that give us special effects, and we take a big FPS hit when UI elements are open (probably another issue with how we are doing it).

Final Thoughts:

So, all in all, we have some compounded issues that need to be resolved. Right now the whole thing seems to be a mess. If we can get the point lights to work with the current setup, we will be good. Any and all help would be very appreciated.


##### Share on other sites
So, the real issue is that your logarithmic depth buffer isn't working (or your code to reconstruct position from depth isn't working), but somehow switching to DX 11.1 will solve this, which brings more issues? What's the theory behind the 11.1 fix?

Eh, our move to 11.1 was for the UI issues we were having. We pushed the UI elements to a new rendering technique with 11.1, and it only creates an 8 FPS drop now instead of 100+; again, probably because of our methods, but that's for another topic. I think I was trying to express that we may need to revisit how we are doing the UI in case we need to revert back to 11 or 10, but yeah, that should be in another topic.

The issue is that the only ways we know to fix it would be to move to something not DX-related or get rid of the deferred rendering. So your suggestions are very helpful. We will try them out today, and I will let you know what results we get. Thank you for your continued help.


##### Share on other sites
It mostly works without the logarithmic depth. I say mostly because I can only get it working by disabling the specular calculation (which is producing an output value of 1#QNAN0 or something like that). With the logarithmic depth on, it doesn't work at all. The logarithmic value is calculated in the vertex shader and then written to a standard depth buffer. I think the usual DX9 way is to write it out to a separate render target, since you can't read from a depth buffer in DX9. Since we're using DX11, we're using an actual depth buffer, so we don't write to it ourselves. I'm not sure if that counts as calculating it per-vertex or per-pixel (I would assume per-pixel).

I might give the reversed floating point buffer a try, I was under the impression that log depth was the best but I guess not.

##### Share on other sites

Hey there,

Unfortunately, I cannot say much about reconstruction of logarithmic depth because I've never used it before. When it comes to point lights and reconstruction, I'm using linear depth in a separate render target, since I'm still on DX9, but if I have some time in the next few days I may change some shader code in my engine to see if it works for me.

For your issues with PIX and DX11: maybe you can have a look at the Intel Graphics Performance Analyzer. It claims full Windows 8 support, so I suppose it also supports DX11.1. I use it myself from time to time, but on my rather old machine there are some problems with it, so I normally stick with PIX.


##### Share on other sites
Sadly, Intel GPA doesn't seem to work. It starts up fine, unlike all the other tools, but it doesn't display its HUD, and Ctrl+F1 doesn't bring it up.

##### Share on other sites

So I got some time in the morning to do some tests. I implemented the log depth and reconstruction in my engine, and it showed up completely without lighting, as I expected. Then I found some information in this post:

http://outerra.blogspot.de/2012/11/maximizing-depth-buffer-range-and.html

In the comments, Kemen said that the reconstructed depth value is already in view space, so I split the construction of the world position into two parts: first multiplying the screen-space position by the depth value, and then by the inverse projection. But when I then try to get the world position by multiplying by the inverse view matrix, according to PIX the world position is still a bit off. Maybe I have an error somewhere else; I will look into it if I have more time, but for now I wanted to share my results. Maybe you can get it working with that.
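For what it's worth, the two-step reconstruction described above can be sketched outside the shader. Here is an illustrative Python model (the function names and fov/aspect values are mine, not from either engine) that round-trips a view-space point through a D3D-style projection and back:

```python
import math

def perspective_scales(fov_y, aspect):
    """The m00/m11 scale terms of a D3D-style perspective matrix."""
    h = 1.0 / math.tan(fov_y / 2.0)
    return h / aspect, h

def project_to_screen(view_pos, fov_y, aspect):
    """Project a view-space point to screen space. After the divide by
    clip.w (which equals view-space z for a perspective projection),
    only the scaled x/y remain."""
    sx, sy = perspective_scales(fov_y, aspect)
    x, y, z = view_pos
    return (x * sx / z, y * sy / z)

def reconstruct_view_pos(screen_xy, view_z, fov_y, aspect):
    """The reconstruction from the post: multiply the screen-space
    position by the view-space depth, then undo the projection scales."""
    sx, sy = perspective_scales(fov_y, aspect)
    return (screen_xy[0] * view_z / sx, screen_xy[1] * view_z / sy, view_z)
```

If this part round-trips but the world position is still off after the inverse view multiply, the remaining error is likely in the view matrix or a sign convention rather than in the unprojection itself.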

Obviously, Intel GPA has different problems on different systems. For me, I cannot view depth and stencil buffers in the Frame Analyzer. If you don't need the HUD, you can try pressing Ctrl+Shift+C to take a snapshot for analysis.


##### Share on other sites

Wow, nice find. That is more or less what we were aiming for so this might just be what we needed.


##### Share on other sites
I couldn't find any actual tutorials on how to do reversed z-depth, so I just tried switching the near and far values on the viewport and changed the depth comparison to GreaterEqual. DirectX promptly yelled at me for having a viewport with MinDepth > MaxDepth. It renders, but fog stopped working, and if I look at the depth buffer in the debugger, it's just pure black (though who knows if it's actually showing me the right thing; the debugger is so buggy I wouldn't be surprised if it isn't). Reversing the fog distance values (from .999-1 to 0.1-0) results in a 100% fog-covered scene.

I also tried the suggestion from that blog for the log depth, and I couldn't get that to work either. I'm not really sure when/if I'm still supposed to do the position/w part. I tried it with and without, in different places, among other things. Pretty much all I get is a light that is invisible until you're actually inside its radius, and then you see some not-very-correct shading.

Pressing Ctrl+Shift+C in Intel GPA doesn't do anything either. It's pretty much as if the program isn't running at all.

##### Share on other sites

If I'm not wrong, the reversed-z method just means that you write (1 - z/w) into the buffer instead of the standard z/w. Floating-point values have more precision close to 0 than close to 1, so you get a better distribution when the depth is reversed.
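That precision claim can be checked directly: IEEE float32 spacing grows with magnitude, so representable values are packed much more densely near 0 than near 1. A small illustrative Python check:

```python
import struct

def float32_ulp(x):
    """Gap between x and the next representable float32 above it."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return nxt - x

gap_near_one = float32_ulp(0.999)    # about 6e-8
gap_near_zero = float32_ulp(0.001)   # about 1.2e-10, hundreds of times finer
```

Note that this finer spacing only pays off if the depth buffer actually stores floating-point values; a 24-bit integer (UNORM) depth buffer has uniform spacing, so simply reversing z there would not gain precision.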

You could also go with linear z and a 32-bit buffer, which should have the best precision, but I cannot recall a comparison between log depth and linear, so I could be wrong.

That's exactly what I experienced. I was thinking about the pos/w part too. My conclusion was that after the multiply with the inverse view, w should be 1, so if it's necessary, it would be after the first multiply with the inverse projection.

There is also AMD's GPU PerfStudio, but I cannot say what it supports or measures or what else it can do, because it only supports DX10 and up, and as I said, I'm still using DX9, so I never took a look at it.


##### Share on other sites

Wouldn't both of those require manually writing the depth from the pixel shader and losing all the early-z culling? I feel like there should be a way to do it without that...

AMD PerfStudio, Nvidia Nsight, and PIX all crash on load for DX11.1 applications.

This is what the depth is coming out as after reversing it:

[attachment=13207:RuinValor 2013-01-12 22-18-01-63.png]

Mind you, it is being run through the HDR/bloom pipeline, but that pretty much matches what I'm seeing with the fog settings (requiring me to set it to go from 0.001 to 0 in order to get anything reasonable). It certainly did reverse it, but without any precision gain whatsoever.


##### Share on other sites

[quote name='Telanor' timestamp='1358047976' post='5020943']
AMD perf studio, Nvidia NSight, and PIX all crash on load for DX11.1 applications
[/quote]

Have you tried the graphics debugger that comes with Visual Studio 2012? According to the MSDN, it supports D3D 11.1.


##### Share on other sites

Yep, it's the only tool I can currently use. Unfortunately it doesn't support texture arrays or mipmaps, doesn't seem to have performance metrics, randomly tries to open some render targets in Photoshop (where they appear pure black), and fudges number values during shader debugging (it's shown me 0.000 for a number that was actually a QNAN). Overall it's about the worst of the available debugging tools, but the only one that will even run.


##### Share on other sites

So, back on topic: what would cause the inside of the point light to render the lighting, but not the outside? If we reverse the buffer as you suggested, we get an issue with some other elements. Any ideas?


##### Share on other sites

Sorry for the late reply. I got screwed by my Nvidia mobile chip when I started testing the new revision of my engine on low-end hardware and needed to fix that first. So I finally got time to look at your problem again, and I think I got it... hopefully. I cannot say why it was not working in the first place, though I'm pretty sure I had it that way before. Anyway, here is my code:

// Read the logarithmic depth value from the depth texture
float4 mapValue = tex2D(DepthMapSampler, PSIn.texCoord + dimensionOffset);
float depth = mapValue.x;
// Undo the logarithmic encoding to recover the view-space depth
depth = ((pow((0.001 * xFarClip) + 1, depth) - 1) / 0.001);
// Scale the screen-space position by the negated depth and undo the projection
float2 invProjPos = mul(PSIn.ProjPos * -depth, xInvProjection);
// Finally, undo the view transform to get the world-space position
float4 worldPos = mul(float4(invProjPos, depth, 1), xInvView);

That should do it. Just get the depth from your depth buffer (here it's my separate render target), reconstruct the view-space depth value, and multiply the screen-space position by the negated depth and the inverse projection. Then there is just the normal multiplication with the inverse view left.

I have one question though: I tried using log depth in my depth buffer because my Nvidia chip seems to have terrible depth precision even close to the camera (while all my Radeons work fine), but I got the interpolation issue, so at some angles and distances close to the near plane the geometry disappears. I know it's because of how the GPU interpolates the values between vertices, but I'm curious whether you experienced this in your game, or is the vertex density in voxel-based worlds high enough that it never occurs? Maybe the main factor is the depth buffer itself; using a 32-bit buffer instead of a 24-bit one could help, I don't know...


##### Share on other sites
Doesn't seem to be working for me. What is PSIn.ProjPos? Is it the screen position? I have this on the first line of my pixel shader:

input.ScreenPosition.xy /= input.ScreenPosition.w;

Is that still correct to do? Also, this is how my depth is written:

output.Position.z = log(0.001 * output.Position.z + 1) / log(0.001 * FarPlane + 1) * output.Position.w;

That's in the vertex shader for the GBuffer rendering. Is that the same as what you're using?

And to answer your question, I think the vertex density for our world is high enough that we don't really get that issue. If you stick your face into a block, then you see it, but anyone doing that deserves some visual glitches. We're not even using a 32-bit buffer, just the standard 24-bit one.
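As a sanity check on the math itself: the encode written in the vertex shader and the reconstruction from the earlier post are exact inverses, so any remaining error lies in where the value is interpolated or stored, not in the formulas. A quick Python check (the far-plane value is an arbitrary stand-in):

```python
import math

C = 0.001        # the constant used in both shaders in the thread
FAR = 1000.0     # stand-in far-plane distance

def encode_log_depth(view_z):
    """What the depth buffer holds after the hardware divide by w:
    log(C*z + 1) / log(C*FAR + 1)."""
    return math.log(C * view_z + 1) / math.log(C * FAR + 1)

def decode_log_depth(d):
    """The reconstruction from the lighting pass:
    (pow(C*FAR + 1, d) - 1) / C."""
    return ((C * FAR + 1) ** d - 1) / C

# Round trip: decode_log_depth(encode_log_depth(z)) recovers z
# for any z in (0, FAR], up to floating-point error.
```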

##### Share on other sites

Yes, PSIn.ProjPos is the screen position. I suppose you are using the standard deferred approach, where you render a light volume for each light, while I'm doing a fullscreen pass; therefore I don't need the division by the w-component, but you will still need it.

And yes, that is the exact same function. Has anything changed? At first I had the multiplication without the "-", so the light moved over the terrain when the camera rotated, but there was no longer the problem with the light being visible only from inside the light radius.

And thanks for clearing it up


##### Share on other sites
The light seems to behave exactly the same with or without the negative. I have to be inside the radius to see it and it moves around weirdly when I look around.

##### Share on other sites

Weird, I cannot think of any difference between our methods. I can give you more of my code, but I don't think it's relevant; there seems to be another problem somewhere else.

Edit: Damn it... I'm afraid I was too tired last night when I wrote my post, and today I could not remember the most important change I made right when I started experimenting yesterday. The thing is, I'm not using the screen-space z value in the depth calculation anymore. Instead I go with viewSpacePosition.z, so you get something like this:

output.Position.z = log(0.001 * viewSpacePosition.z + 1) / log(0.001 * FarPlane + 1) * output.Position.w;

That made more sense to me because it was said that z is already in view space after reconstruction, and I forgot to mention it... Stupid me...


##### Share on other sites
Hmm, that works for you? I tried that, and now the terrain doesn't show up at all. Is your output position actually used for rendering, or is it just the value you write to the render target?

##### Share on other sites

I'm not using the values in an actual depth buffer; I'm just outputting depth in a pre-pass, so I cannot say how it behaves when used as a depth buffer value. Did you try changing the ZFunc state in the geometry pass? That's pretty unlikely, but I will also have a look at my depth texture with PIX; maybe the values are reversed.


##### Share on other sites

You don't need to keep the view-space position for the logarithmic depth. You can use the value of w, since it contains the view-space depth after projection:

output.Position.z = log(0.001 * output.Position.w + 1) / log(0.001 * FarPlane + 1) * output.Position.w;

That's because the projection matrix (D3DXMatrixPerspectiveFovLH) is:

w       0       0               0
0       h       0               0
0       0       zf/(zf-zn)      1
0       0       -zn*zf/(zf-zn)  0

And thus, because the last column is (0, 0, 1, 0), w = z: w ends up with the view-space depth.
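A small Python check of this, using the matrix layout quoted above (the fov/near/far numbers are arbitrary):

```python
import math

def perspective_fov_lh(fov_y, aspect, zn, zf):
    """The D3DXMatrixPerspectiveFovLH layout quoted above (row-vector
    convention: clip = v * M)."""
    h = 1.0 / math.tan(fov_y / 2.0)
    w = h / aspect
    return [
        [w, 0.0, 0.0, 0.0],
        [0.0, h, 0.0, 0.0],
        [0.0, 0.0, zf / (zf - zn), 1.0],
        [0.0, 0.0, -zn * zf / (zf - zn), 0.0],
    ]

def transform(v, m):
    """Row vector times matrix, D3D style."""
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

m = perspective_fov_lh(math.pi / 3, 16.0 / 9.0, 0.1, 1000.0)
clip = transform([3.0, -2.0, 50.0, 1.0], m)
# clip[3] (i.e. w) comes out as exactly the view-space z, 50.0
```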


##### Share on other sites
Hmm. Well, if I change it to output.Position.w, I can see the terrain again, but it's back to rendering the light weirdly. The strange thing I notice when I debug the pixel shader is that the reconstructed position always seems to be behind me. For example, the Y value of the light (which is in front of me) is 570, and the pixel I'm debugging is past that when I'm looking down, so you'd expect Y < 570, but instead it comes out as 608. This happens regardless of whether I use -depth or depth in the invProjPos calculation.

##### Share on other sites

I don't get it. It's always the same: using output.Position.z or output.Position.w makes no difference, and the light is only visible from inside the volume. I'm pretty sure I'm just missing an additional math operation, but I don't see it and I need a fresh view. I will look into it later this evening (it's about 1pm here), so hopefully I'll have more for you tonight or tomorrow.


##### Share on other sites

Any luck my friend?

