Missing pixels in mesh seams

13 comments, last by Hodgman 5 years, 4 months ago

Hi all,

After seeing some missing pixels along the edges of adjacent/connected meshes, I first thought it was mesh-specific/related.
But after testing the following, I still see it occurring:

- create a flat grid/plane, without textures
- output only black, white background (buffer cleared)

When I draw this plane twice, e.g. at 0,0 (XZ) and 4,0 (XZ), with the plane being 4x4, I still see those strange 'see-through' pixels (I'm aiming for properly connected planes).
After quite a bit of research and testing (disabling the skybox, changing the buffer clear color, disabling the depth buffer, etc.), I still keep getting this.

I've been reading up on 't-junction' issues, but I don't think that's the case here (the verts all line up nicely).

As a workaround/solution for now, I've made the black foundation planes (below the scene) into boxes with minimal height and the bottom triangles removed. That way the 'holes' aren't visible, because the boxes have sides. Here's a screenshot of what I'm getting.

I wonder if anyone has thoughts on this: is this 'normal', do studios work around it, etc.?
There might be some rounding issue, but I wouldn't expect that with these whole numbers (all .0).

2018-12-02-streetseams.jpg

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me


Could be related to the vertex ordering of your triangles, or to too many operations on the vertices (it's normally best practice to multiply all needed matrices together before sending them to the shaders), perhaps?

 

.:vinterberg:.

There are some vertex shader qualifiers that force the math to follow your code exactly, so the compiler can't, for example, reorder multiplies and adds (in HLSL I believe this is the `precise` keyword). I don't remember the details, but if you're using different vertex shaders you could try this.

As you alluded to, numerical error could lead to vertex positions under different transforms not being coincident where mathematically they should be. But as you mention, this seems less likely when integer coordinates and simple translation transforms are being used.

I haven't used DirectX in a while, and maybe there's an obvious solution that someone else will be able to point out. Meanwhile, though, here are a few diagnostic ideas:

- Double check the transform matrices you're submitting to make sure there isn't any unexpected numerical noise (for whatever reason). Based on what you're describing, I'd expect them to be identity except for some integer translation elements.

- Take your transform matrices and your vertex positions and perform the multiplications in software in your own code, just to see what you get. Obviously that only tells you so much because the results won't necessarily be the same as those of the shader code, but it still might be informative.

- Temporarily change the shader code to only translate the vertex positions via manual addition, and see what results you get.

- If it's possible and not too inconvenient, use transform feedback to see what the actual transformed vertex positions are and whether they're coincident where you'd expect them to be.
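For the second idea above, a minimal CPU-side check could look like the following Python sketch. It assumes the HLSL row-vector convention, `mul(float4(v, 1), M)` with row-major matrices, and hypothetical integer-translation matrices like the ones described in the thread:

```python
# Row-vector * 4x4 matrix, mirroring HLSL's mul(float4(...), M) convention.
# (Assumes row-major matrices; adjust if your engine stores them column-major.)
def transform(v, m):
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# A world matrix that should be identity plus an integer translation (4, 0, 0):
world = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [4.0, 0.0, 0.0, 1.0],
]

# The shared-edge vertex of the plane at the origin (local x = 4) and of the
# translated plane (local x = 0) should land on exactly the same position:
a = transform([4.0, 0.0, 0.0, 1.0], identity)
b = transform([0.0, 0.0, 0.0, 1.0], world)
print(a == b)  # True: both are [4.0, 0.0, 0.0, 1.0]
```

If the two results already differ here, the problem is in the matrices; if they agree, the discrepancy is introduced later in the pipeline.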

Thanks all.

@Zakwayda I just did some debugging in renderdoc, checking the world matrix that's used in the VS. This looks as expected, with round numbers for the XYZ position of the plane (X -56, Y 10, Z -94).

WorldViewProjection is not that easy to verify :)


I also tried to statically position the camera in front of a plane, with 90 deg rotation etc., to better inspect WVP, but then I don't see the issue )-;

Will try to debug further and see what I can find.


Quote

I also tried to statically position the camera in front of a plane, with 90 deg rotation etc., to better inspect WVP, but then I don't see the issue )-;

Although I could be wrong, I suspect that given the non-trivial composite transforms involved, perfectly coincident vertex positions may be too much to ask for. (That you don't see the artifacts with a trivial camera transform seems to be evidence for this.) Transform feedback could maybe tell you what's really happening, but I've never done that in DirectX, so I don't know if that analysis would be possible or how practical it would be.

In any case, my own inclination would be not to rely on mathematical equivalency here, and to look for alternatives to using a separate transform for each tile or grid element or what have you. This is probably a fairly obvious suggestion, but the first alternative that comes to mind is to construct meshes for the various elements or groups of elements as needed, allowing you to ensure that all vertex positions match exactly and that there are no artifacts. (Maybe someone else can offer a better suggestion though.)
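As a sketch of that suggestion (with a hypothetical, position-only vertex layout), merging the grid elements into one mesh and welding coincident vertices guarantees that adjacent elements index the exact same vertex along shared edges:

```python
# Merge several meshes into one vertex/index buffer, welding vertices that
# share a position so adjacent elements reference the exact same vertex.
# (Hypothetical layout: each mesh is a flat list of (x, y, z) triangle verts.)
def merge_and_weld(meshes):
    positions, index_of, indices = [], {}, []
    for mesh in meshes:
        for p in mesh:
            if p not in index_of:  # exact match is fine for integer coords
                index_of[p] = len(positions)
                positions.append(p)
            indices.append(index_of[p])
    return positions, indices

# Two 4x4 quads meeting at x = 4, two triangles each:
quad_a = [(0,0,0), (4,0,0), (4,0,4), (0,0,0), (4,0,4), (0,0,4)]
quad_b = [(4,0,0), (8,0,0), (8,0,4), (4,0,0), (8,0,4), (4,0,4)]
verts, idx = merge_and_weld([quad_a, quad_b])
print(len(verts))  # 6 unique positions instead of 12; the shared edge is welded
```

For non-integer coordinates you'd weld with a small tolerance (e.g. by quantising positions before using them as keys) rather than exact equality.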

If the triangles have the same vertex positions (32-bit floating point) and you aren't doing anything extra other than just rendering, I'd look more at aliasing and multisampling than at vertex position accuracy.

@TeaTreeTim @Zakwayda I found it :) after a chat with @Hodgman

The solution was a combination of 2 things:

- I have a fixed timestep for updating, so the state of the camera's view-projection matrix was not correct as used when rendering
- the transformation of the vertex in the VS used 2 separate matrices (potentially an accuracy/rounding issue). Old:


	// Transform to world space.
	vout.PosW = mul(float4(vin.PosL, 1.0f), gPerObject.MatWorld).xyz;

	// Transform to homogeneous clip space.
	vout.PosH = mul(float4(vin.PosL, 1.0f), gPerObject.MatWorldViewProj);

New:


	// Transform to world space.
	vout.PosW = mul(float4(vin.PosL, 1.0f), gPerObject.MatWorld).xyz;

	// Transform to homogeneous clip space.
	vout.PosH = mul(float4(vout.PosW, 1.0f), gPerFrame.MatViewProj);

Now the issue is gone.
I'm currently figuring out how exactly to approach the view/projection matrix updating; for testing I just moved the per-frame constant buffer update into the render function :)


20 hours ago, cozzie said:

potentially, looking at possible accuracy/rounding issue

Glad it worked :)

Going from nice small, round model coordinates to nice small, round world coordinates is unlikely to encounter floating point precision issues. So, say you've got two instances, A and B, of a mesh with 2 vertices at local coords 0 and 1, and their world matrices place them at positions 10 and 11. Instance A ends up with verts at 10 and 11. Instance B ends up with verts at 11 and 12. Seeing as both instances agree that the edge is at 11, there's no crack. Both of those two different vertices at position 11 are transformed to the screen using the exact same projection matrix, so they'll end up with the bitwise-same screen position and still have no crack.

In your original code where you go directly from local vertex coordinates to screen coordinates (by using a world-view-projection matrix), the two instances are being transformed to the screen (which involves a lot of very precise, long, fractional numbers) using a completely different set of numbers. If you're unlucky (which it seems you were), the left vertex of one instance won't be bitwise exact with the right vertex of the other instance. Doh. 
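The "unlucky" case comes down to floating-point addition and multiplication not being associative, so two composite matrices that are mathematically equivalent can round differently. A tiny 1-D Python analogy (hypothetical numbers, not the actual matrices from the thread):

```python
# Floating-point ops are not associative: two mathematically equivalent
# transform chains, grouped differently per instance, can round to
# different bits and leave a one-pixel crack.
lhs = (0.1 + 0.2) + 0.3   # instance A's composite grouping
rhs = 0.1 + (0.2 + 0.3)   # instance B's composite grouping
print(lhs == rhs)         # False: 0.6000000000000001 vs 0.6

# The fix from the thread: small integer local/world coords are exact, so
# both instances agree bitwise on the shared world position, and feeding the
# same bits through the same view-projection gives the same screen position.
a_world = 1.0 + 10.0      # instance A: local 1, world offset 10
b_world = 0.0 + 11.0      # instance B: local 0, world offset 11
view_proj = 0.1           # stand-in for the shared view-projection transform
print(a_world == b_world and a_world * view_proj == b_world * view_proj)  # True
```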

This is kind of a funny problem though, because in general, the "fixed" version (local to world, followed by world to screen) has some bad precision implications -- because you're using world space as an intermediary, any vertices that end up being a long way from the origin can suffer quantisation problems. In a planetary scale renderer, this would absolutely destroy most of your data quality... Going directly from local coord to screen coord works well even for solar system sized scenes (assuming your code that constructed the 32bit float matrices was using 64bit double input data on the CPU).

So, use local to world, world to screen if you want edges of different models to perfectly match up. Use local to screen if you want best precision within a single model. 

There's also one other solution that's popular: use "camera-relative world space" as an intermediate coordinate system, where it's world space, but [0,0,0] is relocated to wherever your main camera is. This can give a blend of both of the other solutions' strengths and weaknesses.
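A minimal Python sketch of why that helps (hypothetical numbers; `struct` is used here to emulate rounding a CPU-side double to the 32-bit float a shader would receive). Keep positions in doubles on the CPU and subtract the camera position before converting, so values near the camera stay small and precise:

```python
import struct

def to_f32(x):
    # Round a Python double to the nearest 32-bit float, as the GPU stores it.
    return struct.unpack('f', struct.pack('f', x))[0]

camera = 1_000_000.0   # main camera far from the world origin (CPU doubles)
obj    = 1_000_000.3   # an object right next to the camera

# Plain world space: the large coordinate quantises badly in float32...
err_world = abs(to_f32(obj) - obj)

# Camera-relative world space: subtract the camera first (still in doubles),
# then convert the small offset to float32.
err_relative = abs(to_f32(obj - camera) - (obj - camera))

print(err_world > err_relative)  # True: small offsets keep far more precision
```

The same subtraction is typically folded into the world matrices on the CPU (in doubles) each frame, so the shader never sees the large absolute coordinates.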

@Hodgman thanks, the explanation is pretty clear.

I'm not planning for that large scaled scenes so far, but it's good to know.
Not sure, though, how the 3rd 'in the middle' option would work; would it be something like this:

- mul world with view on CPU
- mul model space vtx with worldview
- mul the result with projection?

Btw, I did notice some clipping within a mesh with the new/fixed solution, but I'm not 100% sure it was caused by the change (or just coincidence). It only occurred when 'very far' away from the mesh (could also be Z-fighting/precision). Update: I just did a test changing it back; the clipping issue (faces showing 'through each other') occurs at a closer Z with the new approach than with the old one (where it only occurs further away). But in this case I think I just need a better mesh :)


This topic is closed to new replies.
