ZaiPpA

OpenGL: Possible to make triangles viewed from the side appear as lines?


Recommended Posts

Hi! I have a small question. When viewing a triangle directly from the side (i.e. when the normal of the triangle is orthogonal to the view vector), it disappears: nothing is drawn on the screen. In OpenGL, is it possible to make triangles that are viewed directly from the side appear as a line on the screen instead of disappearing? (In other words, instead of triangles having a depth of zero, is it possible to increase their depth to one pixel?) Thanx! [smile]

Well, you can change the render modes of front- and back-facing triangles: for instance, you could set front-facing triangles to be drawn filled and back-facing triangles to be drawn in wireframe. Or you could simply draw the triangles twice: once filled, then once in wireframe. Or you could use a geometry shader to extrude triangles when their normals are exactly perpendicular to the screen.
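A minimal fixed-function sketch of the two-pass idea (here `draw_scene()` is a hypothetical function that issues the triangle geometry; this assumes a legacy GL context where `glPolygonMode` is available):

```c
#include <GL/gl.h>

/* Hypothetical: issues the scene's triangles (glBegin/glEnd or vertex arrays). */
extern void draw_scene(void);

/* Two-pass render: filled first, then wireframe on top, so triangles seen
   exactly edge-on (which rasterize to nothing when filled) still leave
   their outline on screen. */
void draw_filled_then_wireframe(void)
{
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);   /* pass 1: solid triangles */
    draw_scene();

    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);   /* pass 2: outlines        */
    draw_scene();

    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);   /* restore default state   */
}
```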

I don't think you want the depth to change; a polygon drawn at any depth will still be invisible from the side.

For any triangle, simply determine its surface normal with a cross product, then compare it against the camera's view direction with a dot product. If the dot product is zero, the triangle is perpendicular to the viewing camera; in that case draw the outline, otherwise draw the polygon.

You could also optimize this a little by determining the min and max positions of a perpendicular triangle and drawing only one line from top to bottom (instead of the entire outline, which won't even be entirely visible anyway, because it is viewed from the side).

Quote:
Original post by Kincaid
I don't think you want the depth to change; a polygon drawn at any depth will still be invisible from the side.


I'm pretty sure that by depth he meant the thickness of the triangle, not its position relative to the camera.

Yeah, I wasn't quite sure what was meant there (triangle "thickness" would maybe be the right word), but I believe I addressed the question right (or am I still missing something?), and he shouldn't be tampering with the (actual) depths.

Thanks for the good answers!!

Yeah, I meant depth as in (height, width, depth), not as in depth-buffer depth. I guess thickness would have been a better word, yes :)

My card does not support geometry shaders, so the extrude suggestion and the cross-product suggestion unfortunately don't work for me. (And doing it on the CPU would be way too slow.)

But those are really good ideas about rendering back or front faces as wireframe, or rendering the object twice, with and without wireframe!

I must admit I have never tried wireframe rendering, or tried rendering front and back faces differently. (But I'm sure it's all in the Red Book.)


I am also not totally sure it will work, so I will describe the algorithm I'm making a bit more closely:
In my algorithm I'm doing additive blending on the rendered triangles, to get the total sum of light (color) of all fragments rendered at each pixel. The reason I want the edges to be rendered when triangles are viewed from the side is to also get the light (colors) from the edges of these triangles. (When additive blending is enabled, the depth test is disabled.)


So if I render the triangles in wireframe after having rendered them solid, I guess the light at the edge of a triangle (when the triangle is not viewed from the side) will be too bright, since the edge pixels are rendered both in the wireframe pass and in the non-wireframe pass and are added together due to additive blending. I.e., it will cause a double contribution from the pixels at the triangle edges. Right? Is it possible, when drawing the solid triangles, to not draw the edge pixels?

Alternatively, I could use the method of rendering front and back faces differently. The problem is that I also need the light from the triangles facing away from the camera in the additive blend (I have disabled backface culling in order to do this).
So I guess that if I rendered the front and back faces differently, I would not be able to get the light from the triangles facing away from the camera, right?
I mean, if the back faces are rendered in wireframe, only the light at the edges of the triangles facing away from the camera would be rendered, right? Or is there perhaps a way to solve this?

Thanx.
(hmm... sorry for long post :/)

Since you're working with lighting, you probably already have the face (or vertex) normals for each triangle, which spares you the cross product (which isn't that costly anyway) and leaves only a dot-product check... which is almost certainly free in any lighting calculation, so don't overestimate the cost of a dot product. I find it surprising that, on the other hand, you are willing to render everything twice...

Rendering in wireframe behaves differently from rendering polygons. You could instead make 3D triangles (with actual thickness), which can be viewed from any side, no exceptions; then you're only dealing with polygons, with no separate functions for wireframe.

ZaiPpA:
Well, you can use polygon offset to stop double rendering of pixels (assuming you are rendering with z-testing). Just offset the wireframe render back by a "minimum distinguishable machine unit" (I find glPolygonOffset tricky to use; it doesn't seem to work consistently, but that might just be me), and render filled, then wireframe.
Also, if you already need to be able to see both sides of any polygon, then drawing with different modes for front and back faces will not give you the desired result; only drawing the polygons twice, filled then wireframe, will.
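A sketch of that polygon-offset approach in fixed-function GL (`draw_scene()` is a hypothetical function that issues the geometry, and the offset values are guesses that typically need tuning per application, as noted above):

```c
#include <GL/gl.h>

/* Hypothetical: issues the scene's triangles. */
extern void draw_scene(void);

/* The wireframe pass is pushed slightly away from the camera with
   glPolygonOffset, so it fails the depth test wherever the filled pass
   already wrote a pixel: edges of visible filled triangles are not drawn
   (and blended) a second time.  Edge-on triangles, which produced no
   filled pixels at all, still get their line drawn. */
void draw_without_double_edges(void)
{
    glEnable(GL_DEPTH_TEST);

    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    draw_scene();                        /* pass 1: filled               */

    glEnable(GL_POLYGON_OFFSET_LINE);
    glPolygonOffset(1.0f, 1.0f);         /* push lines slightly back     */
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    draw_scene();                        /* pass 2: wireframe            */

    glDisable(GL_POLYGON_OFFSET_LINE);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
}
```

Note this relies on the depth test being enabled during both passes, which conflicts with the additive-blending setup described earlier (where the depth test is disabled), so it may need adapting.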

Kincaid:
Even though he gets free access to a dot product in the lighting calculations (assuming he is doing some camera-dependent type of lighting, specular or whatever), how can that be turned into actually drawing pixels that otherwise would not be there? The pipeline decides how many pixels to draw based on the screen-space vertex positions (determined by the vertex shader, the homogeneous divide, and the viewport transform); there is no way to draw extra pixels once the vertices have been sent to the card, without using geometry shaders (AFAIK).
