# DirectX 9 MSAA single-pixel thick line rendering problems

## Recommended Posts

Shenjoku    161

I'm having some strange issues rendering single-pixel-thick lines with DirectX 9 and MSAA, but only on AMD cards. I'm already adjusting the coordinates to convert them from texels to pixels by subtracting 0.5, which works fine and fixes rendering bugs on NVIDIA cards. But on a system with an AMD card, the same line ends up rendering partially transparent because it's being anti-aliased when it shouldn't be.

Here's some sample code to show you how the coordinates are calculated and how the line is rendered:

```cpp
// v is the locked vertex data, which contains two points and uses an FVF of
// D3DFVF_XYZ | D3DFVF_DIFFUSE.
//
// x, y, w, h are the position and size of the line. For one-pixel-thick lines,
// either w or h is zero depending on whether the line is horizontal or vertical.
//
// kVertexModifier converts from texel to pixel coordinates, based on the
// documentation found here:
// http://msdn.microsoft.com/en-us/library/windows/desktop/bb219690%28v=vs.85%29.aspx
const float kVertexModifier = -0.5f;

if (h == 0.0f)
{
    x += kVertexModifier;
    w -= kVertexModifier;
}

if (w == 0.0f)
{
    y += kVertexModifier;
    h -= kVertexModifier;
}

v[0].mSet(fColor, x, y);
v[1].mSet(fColor, x + w, y + h);

// Rendering is done like the following:
device->SetFVF(fFVF);
device->SetStreamSource(0, fVertexBuffer, 0, fVertexSize);
device->DrawPrimitive(D3DPT_LINELIST, 0, 1);
```


I took some screenshots from a test project to show what the problem looks like. Pay attention to the border around the blue rectangle (you'll most likely have to view the full-size image and zoom in to be able to see it):

This is how it looks on a machine with an NVIDIA card.

[attachment=14874:one_thickness_correct.png]

This is how it looks on a machine with an AMD card. Notice the border is very transparent compared to the other one.

[attachment=14873:one_thickness_broken.png]

I'm banging my head against a brick wall trying to figure out this problem, mainly because any fix I find that works for AMD cards doesn't work for NVIDIA cards, and vice versa. If anyone has any information or leads on how to fix this, it would be greatly appreciated.

Edited by Shenjoku

##### Share on other sites
Matias Goldberg    9577

IIRC, line thickness for line lists is not very well defined in D3D9.

It's likely the line width is calculated differently (starting at the center vs. starting at one of the edges), which could result in the line covering only a subpixel when placed at the bottom edge, explaining AMD's behavior, while in the NVIDIA case the full pixel is still covered.

Also, use a shader. Recent devices just replace the fixed-function pipeline with an automatically generated shader that emulates its behavior; by using your own shader you ensure the math is the same on both vendors (although you could still have precision differences).

##### Share on other sites
Shenjoku    161

I was hoping for a solution that didn't involve shaders since the engine doesn't have any shader support at all, and I would have to figure out how to do it for OpenGL as well.

I guess it's worth a shot though. I'll try to throw a simple test together and see how it goes.

##### Share on other sites
Hodgman    51234
It's inefficient, but you could also try replacing every line with a quad (made of 2 triangles) placed/shaped exactly as you want.
The triangle-covers-pixel/sample tests are very well defined, so it should work the same on every device/driver.
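The quad expansion can be sketched roughly like this. `Vec2` and `mLineToQuad` are hypothetical helpers (named in the engine's style, not taken from it), and the sketch assumes the same x/y/w/h convention as the original code, where either w or h is zero for a one-pixel line:

```cpp
#include <array>

struct Vec2 { float x, y; };

// Expand a one-pixel-thick line into a screen-space quad exactly one pixel
// wide. For a horizontal line (h == 0) the quad spans [y, y + 1); for a
// vertical line (w == 0) it spans [x, x + 1). No half-pixel offset is applied
// here; with D3D9 you would still subtract 0.5 from the final positions to
// map texels to pixels.
std::array<Vec2, 4> mLineToQuad(float x, float y, float w, float h)
{
    float x2 = x + w;
    float y2 = y + h;
    if (h == 0.0f) { y2 = y + 1.0f; }   // horizontal: give it 1px of height
    if (w == 0.0f) { x2 = x + 1.0f; }   // vertical: give it 1px of width

    // Triangle-strip order: top-left, top-right, bottom-left, bottom-right.
    return { Vec2{x, y}, Vec2{x2, y}, Vec2{x, y2}, Vec2{x2, y2} };
}
```

The four corners would then be drawn as a two-triangle strip (D3DPT_TRIANGLESTRIP with a primitive count of 2) instead of a line list.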

##### Share on other sites
Shenjoku    161

> It's inefficient, but you could also try replacing every line with a quad (made of 2 triangles) placed/shaped exactly as you want. The triangle-covers-pixel/sample tests are very well defined, so it should work the same on every device/driver.

Interesting, I didn't think you could draw a single-pixel-thick quad. I tried it and it seems to be working perfectly.

I had an idea last night that I'd like to try first before committing to that option: Is there an easy way to detect what kind of card is being used? That way I can just modify the code to do one thing for NVIDIA cards and another for AMD or whatever else that is different.
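For reference, one common way to tell vendors apart in D3D9 is the adapter's PCI vendor ID, which `IDirect3D9::GetAdapterIdentifier` exposes through the `VendorId` member of `D3DADAPTER_IDENTIFIER9`. The enum and function below are a made-up sketch of the classification step, not existing engine code; the vendor ID constants are the well-known PCI IDs:

```cpp
// Classify an adapter by its PCI vendor ID. In D3D9 the ID comes from
// IDirect3D9::GetAdapterIdentifier, which fills in a D3DADAPTER_IDENTIFIER9
// whose VendorId member holds this value.
enum class GpuVendor { NVIDIA, AMD, Intel, Other };

GpuVendor mClassifyVendor(unsigned vendorId)
{
    switch (vendorId)
    {
    case 0x10DE: return GpuVendor::NVIDIA;
    case 0x1002: return GpuVendor::AMD;    // ATI/AMD
    case 0x8086: return GpuVendor::Intel;
    default:     return GpuVendor::Other;
    }
}
```

The actual query would look something like `d3d9->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &ident)` followed by `mClassifyVendor(ident.VendorId)`.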

EDIT: Never mind that last part. I just tried on a computer with an integrated Intel chip, and the only thing that looks correct is the quad approach, so I think I'm going to go with that solution. Thanks a lot, Hodgman :)

Edited by Shenjoku

##### Share on other sites
Hodgman    51234
You could draw a test pattern with lines, then read it back to the CPU. If it's 'incorrect', you could then turn on your lines-as-quads code ;)

##### Share on other sites
MJP    19755

Line rendering is a mixed bag on consumer hardware, in terms of both consistency and performance. I've seen inconsistent line behavior from the same video card on two different driver versions, so I don't think there's any hope of relying on a particular behavior in your code. It's probably better to just render lines as quads.

Edited by MJP