GinieDP

[XNA] Prelighting, Depth, Surfaceformat, Curious Artifacts


I always read the forum and have always found solutions to my problems, but now I've been stuck for some time on a problem I cannot solve. I implemented a prelight renderer in my engine, but I have a problem with outputting depth values. The math is clear to me; this is what I do for depth: -PositionInViewspace.Z / DistanceNearToFar, so the resulting value is linear and in the [0;1] range. At the same time I output the normal vectors. The render target setup is simple:

rt1: Normal.x, Normal.y, Normal.z, specularPower
rt2: linearDepth

In the first pass, where normals and depth are gathered into the render targets, rt1 is filled with normals, but there are holes in rt2. I spent a lot of time checking all my render state changes, but everything looks fine. I tried clearing the targets to different colors:

Cleared both targets to transparent black (0, 0, 0, 0): black holes in rt2.
Cleared both targets to white (1, 1, 1, 1): white holes in rt2.

It gets even stranger when I try different surface formats for rt2:

Format: Color => no holes, but ugly banding when point lights are rendered
Format: HalfVector2 => lots of holes
Format: Single => fewer holes, but still there
Format: HalfVector4 => no holes, everything fine (but wasted bandwidth)
Format: Vector2 => no holes, everything fine (but wasted bandwidth)

rt1 is always filled correctly. I really don't know what it is. Any help is appreciated.

http://img188.imageshack.us/img188/6004/finalck.jpg

P.S. My GPU is a GeForce 7900GT.
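For reference, the pass described above looks roughly like this in HLSL (a sketch only; the struct and variable names are hypothetical, and `FarMinusNear` stands in for DistanceNearToFar):

```hlsl
// Hypothetical pixel shader for the prelight geometry pass:
// normals + specular power go to COLOR0 (rt1), linear depth to COLOR1 (rt2).
float FarMinusNear; // distance from near plane to far plane

struct PSInput
{
    float3 NormalVS   : TEXCOORD0; // view-space normal
    float3 PositionVS : TEXCOORD1; // view-space position
};

struct PSOutput
{
    float4 NormalSpec : COLOR0; // rt1
    float4 Depth      : COLOR1; // rt2
};

float SpecularPower;

PSOutput PrelightPS(PSInput input)
{
    PSOutput output;
    float3 n = normalize(input.NormalVS);
    output.NormalSpec = float4(n, SpecularPower);
    // View space looks down -Z, so negate to get a positive
    // distance; result is linear in [0, 1].
    output.Depth = float4(-input.PositionVS.z / FarMinusNear, 0, 0, 0);
    return output;
}
```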

Make sure you don't have any render states set that could affect whether or not a particular pixel gets drawn. This includes depth buffering, stencil culling, and alpha-testing. If you're not alpha-testing, turn it off. If you're not using stencil culling, turn it off. If you are, make sure the stencil buffer is cleared to an appropriate value and isn't getting cleared to garbage by a render target change.
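In XNA 3.x terms, those states can be reset explicitly before the geometry pass; a minimal sketch, assuming a `GraphicsDevice` called `device`:

```csharp
// Disable anything that can silently discard pixels before the
// depth/normal pass (XNA 3.x RenderState API).
device.RenderState.AlphaTestEnable = false;       // no alpha test
device.RenderState.AlphaBlendEnable = false;      // no blending
device.RenderState.StencilEnable = false;         // no stencil culling
device.RenderState.DepthBufferEnable = true;      // normal depth test
device.RenderState.DepthBufferWriteEnable = true; // write depth

// Clear color, depth, and stencil together so no stale stencil
// values survive a render target change.
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer | ClearOptions.Stencil,
             Color.TransparentBlack, 1.0f, 0);
```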

PIX can be very handy for these kinds of problems. You can capture a frame and look at all D3D calls made, and check for renderstates getting set that you didn't intend. You can also look at your depth buffer or render target data. For render targets you can also get a pixel history that shows all draw operations that affected that pixel, including pixels that were culled due to depth, stencil, or alpha test. I wrote a tutorial for using PIX with XNA that's not actually published yet, but you can look at a preview version here if you want.

Thanks, I would like to read the preview version, but I get a "directory listing denied" error.
Stencil and alpha test are turned off. Actually, I don't understand how pixels can be written to one render target but not to some areas of the second, since in shader model 2.0 you have to write to all active render targets.
Seems like I have to learn how to use PIX in the next few days.

[EDIT]: Aargh, an experiment does show the depth holes in the running application, but the holes are gone when PIX re-renders the scene.

Tried the application on other hardware with a GeForce 7600; everything is fine there. The debug runtime doesn't show anything wrong, just lots of ignored redundant render state changes. Thanks for your article, it's very informative. Still no luck, but I'll keep playing with PIX.

Just started the application with the reference rasterizer; everything looks fine. So now I'll grab the newest driver for my card and see what happens. Thank you very much for your useful tips.

[EDIT] No luck with this.

[Edited by - GinieDP on September 12, 2009 4:54:51 AM]

Now I am able to see the holes in the PIX renderer:

The render target with linear depth contains pixels that are black (depth = 0), since the render target was cleared to black.
But when I debug such a pixel, the pixel history shows a depth value above 0 (for example something like 0.188, which is about 96 units into the scene). Stepping through the shader debugger shows the correct depth calculation and resulting value.

Damn, this is getting more and more nasty.

The issue is potentially solved now. I took a deeper look into the libraries I wrote for my engine. It seems a thing I always wanted to fix, but always forgot about, caused unexpected GPU behaviour. Here is a little explanation:

My instancing library also manages different vertex streams on a model. The bad thing is that I allowed some vertex streams to be null (disabled) but did not change the vertex declaration. So the vertex declaration declared streams that were never actually set on the GPU (for models like terrain). The weird thing is that I never got an error or warning; rendering was always fine until I tried deferred lighting with two render targets, where I got holes in the second one. Using a proper vertex declaration seems to solve the issue: no holes anymore :D. I can't explain the behaviour with the different surface formats, though. Why did HalfVector4 work but HalfVector2 not?
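The fix amounts to making the declaration describe only the streams that are really bound; a sketch in XNA 3.x terms, with hypothetical element layouts:

```csharp
// The vertex declaration must only describe streams that were
// actually set with SetSource/Vertices. Stream 0 here carries
// position + normal; the commented-out element shows where a
// stream-1 (instancing) element would go -- only include it when
// that stream is really bound.
VertexElement[] elements = new VertexElement[]
{
    new VertexElement(0, 0,  VertexElementFormat.Vector3,
        VertexElementMethod.Default, VertexElementUsage.Position, 0),
    new VertexElement(0, 12, VertexElementFormat.Vector3,
        VertexElementMethod.Default, VertexElementUsage.Normal, 0),
    // new VertexElement(1, 0, VertexElementFormat.Vector4, ...)
};
device.VertexDeclaration = new VertexDeclaration(device, elements);
```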

So I hope the vertex declaration really was the problem and this is not a short-lived phenomenon.

P.S.: Using PIX led me to try a different vertex declaration after I saw lots of "SetStreamSource(1, NULL, ...)" calls. Thank you very much MJP, you helped me a lot.


Unfortunately, the problem is back again. Actually, it was back already a day after I "solved" it. I had to go on with coding, so I decided to use a single A16R16G16B16 render target for the prelighting properties and see if it works:

R = depth
GB = viewspace normal (X and Y)
A = specular power

It does work, and I can actually live with that solution.
+ more precision in the normals
+ potentially able to store specular intensity and specular power in a single component (have not tried it)
+ bandwidth is the same as before
+ no artifacts on my GPU
- speed trade-off for reconstructing the normals
- caching may be a problem on some GPUs
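A sketch of how that single packed target can be written and read back (hypothetical HLSL; the normal's Z is reconstructed from X/Y, which assumes visible surfaces have view-space normals pointing toward the camera):

```hlsl
// Pack into one A16R16G16B16 target, matching the layout above:
// R = linear depth, GB = view-space normal XY, A = specular power.
float4 PackGBuffer(float linearDepth, float3 normalVS, float specPower)
{
    return float4(linearDepth, normalVS.xy, specPower);
}

// Unpack in the light pass; reconstruct normal Z assuming it is
// non-negative in right-handed view space (facing the viewer).
void UnpackGBuffer(float4 g, out float depth,
                   out float3 normalVS, out float specPower)
{
    depth = g.r;
    normalVS.xy = g.gb;
    normalVS.z = sqrt(saturate(1.0 - dot(g.gb, g.gb)));
    specPower = g.a;
}
```

This is the usual cost the "speed trade-off" bullet refers to: the sqrt per pixel in every light pass buys the bandwidth back.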

It took me the last week to implement the whole post-processing pipeline (SSAO, HDR, bloom, DOF). All effects are optimized, but the pre-rendering is still a thorn in my side, so I switched the pre-rendering back to two R16G16 targets. Now my problems are back.
I'm going to test the application on different GPUs to be sure it really is a driver bug (I hope my university has some workstations with different GPUs).
Any idea or advice on what else I could check is appreciated.
Maybe there is a volunteer who could run the application on a similar (or any) GPU.
Mine is an XFX GeForce 7900GT.

