cqulyx

How to calculate the color of fragments at the edge shared by two adjacent polygons?


How do you calculate the color of fragments at the edge shared by two adjacent polygons?

For example, given two adjacent triangles whose normals differ from each other (say the two normals are denoted N1 and N2), my questions are:

1) How do you calculate the normals of fragments right at the shared edge? Say a fragment X is located on the shared edge, and its normal is denoted N3. Can I calculate N3 as the average of N1 and N2? If so, where and when should it be calculated?

2) If not, then before the Z-buffer test there would be two fragments whose positions in screen space are identical but whose colors and/or normals may differ. How would the Z-buffer test decide to keep one fragment and discard the other?

These questions have puzzled me for several days; I hope someone can answer them. Thanks in advance.

Quote:
Original post by cqulyx
How do you calculate the color of fragments at the edge shared by two adjacent polygons?

For example, given two adjacent triangles whose normals differ from each other (say the two normals are denoted N1 and N2), my questions are:

1) How do you calculate the normals of fragments right at the shared edge? Say a fragment X is located on the shared edge, and its normal is denoted N3. Can I calculate N3 as the average of N1 and N2? If so, where and when should it be calculated?

2) If not, then before the Z-buffer test there would be two fragments whose positions in screen space are identical but whose colors and/or normals may differ. How would the Z-buffer test decide to keep one fragment and discard the other?

These questions have puzzled me for several days; I hope someone can answer them. Thanks in advance.


The "edge" between two poly's in screenspace is infinitely thin. When poly's overlap, which have different normals and hence colours, but have the same depth it basically all depends on which one gets to the zbuffer first, and what the z fail test is.

Quote:
Original post by python_regious
it basically all depends on which one is rasterized to the Z-buffer first and what the depth comparison function is.

Rasterisation rules for poly edges are well defined in both D3D and OpenGL. If two polys share exactly the same vertices (i.e. you're using indexed tris or tri strips) then they'll be scan-converted so that each pixel along the shared edge is covered exactly once, with no overlap and no gaps.

However, I'm not sure why the OP even cares about this. If you want to keep the colours continuous over two adjacent polys, all you have to do is make sure they use identical colours/normals/etc. at the shared vertices and the interpolation will do the rest. For vertex lighting this means that the normal for a vertex is the average of the face normals of all faces which use that vertex, as in the sketch below.
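Not from the thread itself, just to make that last point concrete: here is a minimal C++ sketch of averaging face normals into per-vertex normals for an indexed triangle list. The Vec3 type and the function names are my own and purely illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(const Vec3& a, const Vec3& b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return (len > 0.0f) ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// positions: one entry per vertex; indices: three vertex indices per triangle.
// Returns one smoothed normal per vertex: the normalized sum of the face
// normals of every triangle that uses that vertex.
std::vector<Vec3> smoothVertexNormals(const std::vector<Vec3>& positions,
                                      const std::vector<unsigned>& indices)
{
    std::vector<Vec3> normals(positions.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const unsigned i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        // Unnormalized face normal; its length weights the sum by triangle area.
        const Vec3 faceN = cross(sub(positions[i1], positions[i0]),
                                 sub(positions[i2], positions[i0]));
        normals[i0] = add(normals[i0], faceN);
        normals[i1] = add(normals[i1], faceN);
        normals[i2] = add(normals[i2], faceN);
    }
    for (std::size_t i = 0; i < normals.size(); ++i)
        normals[i] = normalize(normals[i]);
    return normals;
}
```

Because two triangles that share a vertex then receive identical normals at that vertex, the interpolated shading matches along the shared edge and no seam appears.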

Quote:
Original post by OrangyTang
Rasterisation rules for poly edges are well defined in both D3D and OpenGL. If two polys share exactly the same vertices (i.e. you're using indexed tris or tri strips) then they'll be scan-converted so that each pixel along the shared edge is covered exactly once, with no overlap and no gaps.

However, I'm not sure why the OP even cares about this. If you want to keep the colours continuous over two adjacent polys, all you have to do is make sure they use identical colours/normals/etc. at the shared vertices and the interpolation will do the rest. For vertex lighting this means that the normal for a vertex is the average of the face normals of all faces which use that vertex.


Thank you for your reply.

I don't mean shared vertices but shared fragments, which are produced by rasterization. The motivation for asking is at the end of the OpenGL red book, where the authors say that the normals of fragments shared by two polys should be the average of the respective normals of the two polys. As far as I know, neither Phong shading nor Gouraud shading uses this method, so I figure it may be used in flat shading. However, when rasterization starts to break a triangle into fragments, it doesn't know which other triangle is adjacent to the current one, so it cannot obtain the adjacent triangle's normal (say, the normal N2). The whole question lies here: without the normal N2, how does rasterization calculate the average of N1 and N2?

If we calculate the average of N1 and N2 and save it somewhere in object space prior to rasterization, how is the pre-calculated normal passed to the next stage in the pipeline? In that case, when rasterization processes a triangle it must know three normals for the three edges; where should those three normals be stored?

PS: I think this method could improve the appearance of flat shading by giving smoother normal transitions at the shared edges.

Quote:
Original post by cqulyx

If we calculate the average of N1 and N2 and save it somewhere in object space prior to rasterization, how is the pre-calculated normal passed to the next stage in the pipeline? In that case, when rasterization processes a triangle it must know three normals for the three edges; where should those three normals be stored?

PS: I think this method could improve the appearance of flat shading by giving smoother normal transitions at the shared edges.


In the fixed-function pipeline, Phong and Gouraud lighting are evaluated per vertex; that's why normals don't get interpolated across fragments, only colors do. If you want the normals to be interpolated, look into per-pixel lighting using shaders; I don't see any way around it. The fixed-function pipeline doesn't support per-edge normals, only per-vertex normals. With a programmable pipeline you can do anything you want with those normals.

A very nice document from NVidia.
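To make the per-pixel idea concrete, here is a small C++ sketch (not an actual shader, and not code from this thread; the names are mine) of what a fragment-level lighting computation typically does with the normal the rasterizer has interpolated across the triangle:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return (len > 0.0f) ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// interpolatedNormal: the per-vertex normal after the rasterizer has
// interpolated it across the triangle (it is generally no longer unit length).
// lightDir: unit-length direction from the surface toward the light.
// Returns a simple Lambert diffuse factor evaluated per fragment.
float perFragmentDiffuse(const Vec3& interpolatedNormal, const Vec3& lightDir)
{
    Vec3 n = normalize(interpolatedNormal);   // renormalize per fragment
    return std::max(0.0f, dot(n, lightDir));  // N dot L, clamped to zero
}
```

In a real pixel shader the same math runs per fragment on the GPU; the key point is that the interpolated normal is renormalized and used for lighting at every fragment rather than only at the vertices.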

That said, I have no idea how to pass per-edge normals like you described to the shaders. And the worst part is that you can't calculate them in shaders either, because shaders don't have context (they are massively parallel). You could try making a normal map and passing that in, but it's probably overkill because it would pass in the normals for the inside of the polygon as well. Unless you use a 1-dimensional texture... never mind, I'm just throwing out random ideas :-)

EDIT: I probably gave an answer to the wrong question, but whatever, I hope it's useful.

[Edited by - deathkrush on July 8, 2006 10:16:24 AM]

Quote:
Original post by OrangyTang
Quote:
Original post by python_regious
it basically all depends on which one is rasterized to the Z-buffer first and what the depth comparison function is.

Rasterisation rules for poly edges are well defined in both D3D and OpenGL. If two polys share exactly the same vertices (i.e. you're using indexed tris or tri strips) then they'll be scan-converted so that each pixel along the shared edge is covered exactly once, with no overlap and no gaps.


Of course, I was referring to two independent polys that overlap.

Quote:
Original post by cqulyx
I don't mean shared vertices but shared fragments, which are produced by rasterization.

There's no such thing as a "shared fragment". A fragment comes from one, and only one, triangle. It's up to you to make sure the edges between polys transition the way you want them to.

Quote:
The motivation for asking is at the end of the OpenGL red book, where the authors say that the normals of fragments shared by two polys should be the average of the respective normals of the two polys. As far as I know, neither Phong shading nor Gouraud shading uses this method, so I figure it may be used in flat shading. However, when rasterization starts to break a triangle into fragments, it doesn't know which other triangle is adjacent to the current one, so it cannot obtain the adjacent triangle's normal (say, the normal N2). The whole question lies here: without the normal N2, how does rasterization calculate the average of N1 and N2?

Sounds like regular Phong lighting with per-vertex normals rather than per-face normals. However, I can't find the part of the red book you're referring to; what's the actual page/section it's in?

Quote:
PS: I think this method could improve the appearance of flat shading by giving smoother normal transitions at the shared edges.

If that's your overall goal, then why not just use regular flat shading and enable anti-aliasing? You'll get better-quality results, faster, and with less implementation effort.

I can't help but feel that somewhere you've started from a flawed assumption, because most of your questions don't really make much sense.
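As a side note on the anti-aliasing suggestion above, here is a minimal sketch of turning on multisampling in OpenGL. It assumes the context was created with a multisample-capable pixel format (how you request that depends on your windowing library) and that the GL_MULTISAMPLE token is available (OpenGL 1.3+; on older Windows headers it typically comes from glext.h):

```cpp
#include <GL/gl.h>
// #include <GL/glext.h>  // may be needed for GL_MULTISAMPLE on older headers

void enableMultisampling()
{
    // Only has an effect if the framebuffer was created with sample buffers.
    glEnable(GL_MULTISAMPLE);
}
```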
