# Alpha-to-coverage doubt


## Recommended Posts

Hey guys,

I am wondering about two things regarding alpha-to-coverage that I am failing to understand:

- How can the samples that should be covered be determined from the alpha value alone? You know how many of them should be covered, but not which ones. I suppose they are taken in a fixed pattern (first top left, then bottom left, and so on), but that would mean that with overlapping geometry the results will be wrong (though maybe it's just unnoticeable).

- Where does the dithering pattern I see in demos come from?

Maybe someone can clarify these issues?

Thanks!

[Edited by - IrYoKu1 on July 17, 2010 8:40:31 AM]

##### Share on other sites
Two patterns get AND'ed together. There's the regular MSAA pattern that comes from determining which MSAA samples are actually covered by the triangle, and then there's the alpha-to-coverage pattern, which is a stipple pattern based on the alpha value.

e.g. for 4x MSAA, the alpha-to-coverage patterns might look like (each 2x2 block is one pixel's four samples, X = covered, 0 = not covered):

```
100% alpha   75% alpha   50% alpha   25% alpha   0% alpha
    XX           0X          0X          00         00
    XX           XX          X0          0X         00
```
For an MSAA sample to be written to, first it has to be covered by the polygon (the regular behavior) AND it has to be covered by the alpha-to-coverage pattern, which is chosen from a set (like the example above) based on the alpha value.

As you can see, this causes some banding to occur, as there are 256 possible alpha levels, which are mapped to much fewer coverage patterns (5 patterns in my example - actual implementations will differ).

##### Share on other sites
The two issues are related. Firstly, the selection of samples for a given alpha value is part of the render state (on every machine / API I've used). Typically, to give more apparent levels of alpha, there is a higher level grid, such as 2x2 (pixels, not samples). Each pixel then has a defined ordering for the samples within it. This does have the problem you describe: if you draw two objects to the same pixel, an object closer to the camera with a higher alpha value will completely hide a farther object with a lower alpha. This is often desired, though, so it isn't always bad.

Edit: Just to clarify, the dithering is because of the repetition of the pattern created by the coverage masks.

If the mask is programmable, you can shuffle it per draw call to fix that issue. On newer cards / APIs, the mask can actually be changed per pixel by the shader. This lets you randomize the mask, and essentially get perfect alpha-blending without any sorting. Google for "Stochastic Transparency" to see what I mean.

##### Share on other sites
Thanks! I got it.

I am not sure about a related point regarding alpha testing and alpha blending: when you render an alpha-blended quad, depth is written to the depth buffer even in the zones where alpha is zero. But when you render it using alpha testing, the zones where alpha is zero will not be written to the depth buffer. Is this correct?

Thanks!

[Edited by - IrYoKu1 on July 17, 2010 10:55:16 AM]

##### Share on other sites
Correct. Normal alpha testing is performed after the shader is run, and causes color and depth writes to be skipped. Alpha-to-coverage basically changes alpha blending into per-sample alpha testing.

I see, thanks!
