
DirectX 9 MSAA single-pixel thick line rendering problems



#1 Shenjoku   Members   -  Reputation: 160

Posted 15 April 2013 - 04:36 PM

I'm having some strange issues rendering single-pixel-thick lines with DirectX 9 and MSAA, but only on AMD cards. I'm already adjusting the coordinates to convert them from texels to pixels by subtracting 0.5, which works perfectly fine and fixes rendering bugs on NVIDIA cards. When I do the same on a system with an AMD card, however, the line ends up rendering partially transparent because it's being anti-aliased when it shouldn't be.

 

Here's some sample code to show you how the coordinates are calculated and how the line is being rendered:

 

// v is the locked vertex data, which contains two points and uses an FVF of D3DFVF_XYZ | D3DFVF_DIFFUSE.
//
// x, y, w, h are the position and size of the line. For one-pixel-thick lines, either w or h is zero depending on whether the line is horizontal or vertical.
//
// kVertexModifier converts from texel to pixel coordinates, based on the documentation found here: http://msdn.microsoft.com/en-us/library/windows/desktop/bb219690%28v=vs.85%29.aspx
const float kVertexModifier = -0.5f;

// Shift the start point back by half a pixel; subtracting the (negative)
// modifier from the length grows it by half a pixel, so the original end
// point stays put.
if (h == 0.0f)
{
	x += kVertexModifier;
	w -= kVertexModifier;
}

if (w == 0.0f)
{
	y += kVertexModifier;
	h -= kVertexModifier;
}

v[0].mSet(fColor, x, y);
v[1].mSet(fColor, x + w, y + h);

// Rendering is done like the following:
device->SetFVF(fFVF);
device->SetStreamSource(0, fVertexBuffer, 0, fVertexSize);
device->DrawPrimitive(D3DPT_LINELIST, 0, 1);

 

I took some screenshots from a test project to show what the problem looks like. Pay attention to the border around the blue rectangle (you'll most likely have to view the full-size image and zoom in to be able to see it):

 

This is how it looks on a machine with an NVIDIA card:

[Image: one_thickness_correct.png]

 

This is how it looks on a machine with an AMD card. Notice the border is very transparent compared to the other one:

[Image: one_thickness_broken.png]

 

I'm banging my head against a brick wall trying to figure out this problem, mainly because any fix I come up with for AMD cards breaks NVIDIA cards, and vice versa. If anyone has any information or leads on how to fix this, it would be greatly appreciated.


Edited by Shenjoku, 15 April 2013 - 07:36 PM.



#2 Matias Goldberg   Crossbones+   -  Reputation: 3059

Posted 15 April 2013 - 08:43 PM

IIRC, line-list thickness is not very well defined in D3D9.

 

It's likely the line width is calculated differently (starting at the center vs. starting at one of the edges), which could result in the line covering only a subpixel when placed at the bottom edge; that would explain AMD's behavior. In the NVIDIA case, the full pixel is still covered.

 

Also, use a shader. The latest devices just replace the fixed function pipeline with an automatically generated shader that emulates its behavior; by using a shader yourself you ensure the math is the same on both vendors (although you could still have precision differences).
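
For reference, a minimal sketch of what that could look like with D3DX. This is an assumption-laden illustration, not code from the thread: it assumes d3dx9.h is available and linked, and the names kShaderSrc, vs, ps, wvp, world, view, and projection are all illustrative; the matrix is meant to be the same world*view*projection the app already feeds to SetTransform.

// Hypothetical sketch: a trivial shader pair replicating the fixed-function
// transform-and-color path. Error checking omitted for brevity.
const char* kShaderSrc =
	"row_major float4x4 gWVP : register(c0);\n"
	"struct VsOut { float4 pos : POSITION; float4 col : COLOR0; };\n"
	"VsOut vs_main(float4 pos : POSITION, float4 col : COLOR0) {\n"
	"    VsOut o;\n"
	"    o.pos = mul(pos, gWVP);\n"
	"    o.col = col;\n"
	"    return o;\n"
	"}\n"
	"float4 ps_main(float4 col : COLOR0) : COLOR0 { return col; }\n";

ID3DXBuffer* code = NULL;
IDirect3DVertexShader9* vs = NULL;
IDirect3DPixelShader9* ps = NULL;

D3DXCompileShader(kShaderSrc, (UINT)strlen(kShaderSrc), NULL, NULL,
                  "vs_main", "vs_2_0", 0, &code, NULL, NULL);
device->CreateVertexShader((const DWORD*)code->GetBufferPointer(), &vs);
code->Release();

D3DXCompileShader(kShaderSrc, (UINT)strlen(kShaderSrc), NULL, NULL,
                  "ps_main", "ps_2_0", 0, &code, NULL, NULL);
device->CreatePixelShader((const DWORD*)code->GetBufferPointer(), &ps);
code->Release();

// At draw time: upload the combined matrix (4 registers, row-major to
// match the row_major declaration above) and bind the shaders. The
// FVF/stream setup from the first post stays the same.
D3DXMATRIX wvp = world * view * projection; // assumed app matrices
device->SetVertexShaderConstantF(0, (const float*)&wvp, 4);
device->SetVertexShader(vs);
device->SetPixelShader(ps);
device->DrawPrimitive(D3DPT_LINELIST, 0, 1);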



#3 Shenjoku   Members   -  Reputation: 160

Posted 15 April 2013 - 09:18 PM

I was hoping for a solution that didn't involve shaders, since the engine doesn't have any shader support at all and I would have to figure out how to do it for OpenGL as well.

 

I guess it's worth a shot though. I'll try to throw a simple test together and see how it goes.



#4 Hodgman   Moderators   -  Reputation: 28653

Posted 15 April 2013 - 10:00 PM

It's inefficient, but you could also try replacing every line with a quad (made of 2 triangles) placed/shaped exactly as you want.
The triangle-covers-pixel/sample tests are very well defined, so it should work the same on every device/driver.
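
As an illustration, here is one way the lines-as-quads idea could look, reusing the vertex layout and names (v, fColor, x, y, w, h) from the first post; the one-pixel expansion below is an assumption based on that snippet, and it also assumes the vertex buffer was created large enough for four vertices. Whether the half-pixel texel-to-pixel offset is still needed on the quad path is worth verifying per card; this sketch leaves it out.

// Expand the degenerate axis to one pixel so the quad exactly covers the
// row/column of pixels the line was meant to fill.
float x2 = x + w;
float y2 = y + h;
if (h == 0.0f) y2 = y + 1.0f;
if (w == 0.0f) x2 = x + 1.0f;

v[0].mSet(fColor, x,  y);
v[1].mSet(fColor, x2, y);
v[2].mSet(fColor, x,  y2);
v[3].mSet(fColor, x2, y2);

// Two triangles drawn as a strip instead of a line list.
device->SetFVF(fFVF);
device->SetStreamSource(0, fVertexBuffer, 0, fVertexSize);
device->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);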

#5 Shenjoku   Members   -  Reputation: 160

Posted 16 April 2013 - 12:29 PM

> It's inefficient, but you could also try replacing every line with a quad (made of 2 triangles) placed/shaped exactly as you want.
> The triangle-covers-pixel/sample tests are very well defined, so it should work the same on every device/driver.


Interesting, I didn't think you could draw a single-pixel-thick quad. I tried it and it seems to be working perfectly.

I had an idea last night that I'd like to try first before committing to that option: is there an easy way to detect what kind of card is being used? That way I can just modify the code to do one thing for NVIDIA cards and another for AMD or whatever else is different.

EDIT: Never mind that last part. I just tried on a computer with integrated Intel graphics, and the only thing that looks correct is creating a quad, so I think I'm going to go with that solution. Thanks a lot, Hodgman :)

Edited by Shenjoku, 16 April 2013 - 01:49 PM.


#6 Hodgman   Moderators   -  Reputation: 28653

Posted 16 April 2013 - 05:29 PM

You could draw a test pattern with lines, then read it back to the CPU. If it's 'incorrect', you could then turn on your lines-as-quads code ;)
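
One possible implementation of this probe, sketched under the assumption that a white one-pixel test line has already been drawn at y = 10 into an X8R8G8B8 render target; the names resolved, sysmem, and linesLookCorrect are illustrative. Note that GetRenderTargetData cannot read a multisampled surface directly, so with MSAA enabled the surface is first resolved via StretchRect.

IDirect3DSurface9* rt = NULL;
IDirect3DSurface9* resolved = NULL;
IDirect3DSurface9* sysmem = NULL;

device->GetRenderTarget(0, &rt);
D3DSURFACE_DESC desc;
rt->GetDesc(&desc);

// Resolve the (possibly multisampled) target to a plain render target.
device->CreateRenderTarget(desc.Width, desc.Height, desc.Format,
                           D3DMULTISAMPLE_NONE, 0, FALSE, &resolved, NULL);
device->StretchRect(rt, NULL, resolved, NULL, D3DTEXF_NONE);

// Copy GPU -> CPU into a lockable system-memory surface.
device->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format,
                                    D3DPOOL_SYSTEMMEM, &sysmem, NULL);
device->GetRenderTargetData(resolved, sysmem);

D3DLOCKED_RECT lr;
sysmem->LockRect(&lr, NULL, D3DLOCK_READONLY);
const DWORD* row = (const DWORD*)((const BYTE*)lr.pBits + 10 * lr.Pitch);
// If the pixel came back dimmed, the driver anti-aliased the line and
// the lines-as-quads fallback should be switched on.
bool linesLookCorrect = (row[10] & 0x00FFFFFF) == 0x00FFFFFF;
sysmem->UnlockRect();

sysmem->Release();
resolved->Release();
rt->Release();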

#7 MJP   Moderators   -  Reputation: 10659

Posted 16 April 2013 - 05:59 PM

Line rendering is a mixed bag on consumer hardware, in terms of both consistency and performance. I've seen inconsistent line behavior from the same video card on two different driver versions, so I don't think there's any hope of relying on a particular behavior in your code. It's probably better to just render lines as quads.


Edited by MJP, 16 April 2013 - 05:59 PM.




