# Are Polygons in OpenGL Divided into Triangles?

## Recommended Posts

During a recent argument with my friend, I could not prove to him that OpenGL divides polygons into triangles. We argued about this because I believed that if he broke his sprites into triangles rather than quads before passing them, he'd get a bit of a speed boost. He didn't buy it, because I couldn't prove it. So, are polygons divided into triangles during rendering in OpenGL? What I'm mainly looking for is official sources, so I can show this guy how it works.

##### Share on other sites
All GeForce hardware and the latest generation of Radeon hardware can accept quads and polygons natively. That is, the driver does not decompose them into triangles. They do get chopped up into triangles by the hardware setup engine, but passing quads or polygons to the hardware can save you some vertex processing time. For example, rendering a bunch of particles as individual quads instead of pairs of triangles only requires the vertex program to be run 4 times as opposed to 6. (And the post-transform cache can't be used because we're not talking about indexed primitives.)

Why thank you :)

##### Share on other sites
Hmmm, alrighty. Good to know. Would they be the same as, say, triangle strips? I mean, handled the same way.

My point of view came from reading a couple articles on how software renderers work.

##### Share on other sites
Hey Eric, I didn't know you were still around these parts. Kick ass.

I have a question though. What do you get as results for wireframe mode with polygons and quads? I think the NV hardware can actually do wireframe natively, whereas ATI emulates it with slim triangles, so I'm curious what the results are on each.

##### Share on other sites
At the base level, the rasterizer works with triangles. You should be able to verify this: if you submit a polygon through the matrix stacks, the transformed vertices will no longer be exactly coplanar (due to precision error), yet it still renders, which it shouldn't if the hardware required a true planar polygon. For the same reason it should also accept non-convex polygons, but I haven't ever tried it to verify.

##### Share on other sites
I think you could probably show that a quad is divided into triangles by rendering a quad with coordinates like (0,0,0), (1,0,0), (1,1,4), (0,1,0). Basically, make it so the quad doesn't lie in a plane, and you should see how it distorts.

There are two ways to make a rectangle from triangles:
```
--------
|     /|
|    / |
|   /  |
|  /   |
| /    |
|/     |
--------
```

If the quad is split like in the picture above, then using the coordinates above will lift up the lower-right corner. This will result in the upper-left triangle being flat and the lower-right triangle stretched upwards. You may have to play with which corner has a different z value, because the triangles could be divided across the other diagonal. But I think that would work.

##### Share on other sites
Quote:
 Original post by kanato: I think you could probably show that a quad is divided into triangles by rendering a quad with coordinates like (0,0,0), (1,0,0), (1,1,4), (0,1,0). Basically, make it so the quad doesn't lie in a plane, and you should see how it distorts. [...]

That doesn't really prove anything. The specification doesn't guarantee correct rendering unless the quad is planar (the same goes for polygons). So if you get the result you described, it could simply fall into the category of "incorrect result" rather than being evidence that the quad is split into two triangles.

##### Share on other sites
Quote:
 So, are polygons divided into triangles during rendering in OpenGL?

Before someone misunderstands the above answers: the OpenGL documentation states that the implementation is free to choose whether or not to decompose quads and larger polygons into triangles. Most older hardware (NVIDIA GeForce FX 5xxx and older, I think) does this in the driver, while the latest cards do it in hardware. You will practically never find a card capable of truly rasterizing native quads or general polygons, although I think one console, or some of the early expensive 3dfx chips, could do it. It isn't much use really, and everyone prefers to decompose them into triangles at the software or hardware level, to save room on the chip for other, more important and modern features.

So you should send quads as quads and polygons as polygons, and not decompose them into triangles in your program yourself; the driver or hardware will do it much faster than you ever could. And if cards appear in the future that can natively render quads without decomposing them, your game or program will take advantage of that automatically.

##### Share on other sites
Quote:
 Original post by Promit: Hey Eric, I didn't know you were still around these parts. Kick ass.

I show up from time to time. :)

Quote:
 I have a question though. What do you get as results for wireframe mode with polygons and quads? I think the NV hardware can actually do wireframe natively, whereas ATI emulates it with slim triangles, so I'm curious what the results are on each.

I'm not exactly sure on this. It's true that Nvidia hardware handles wireframe rendering natively, but I don't know the details for ATI hardware. I seem to recall that the driver has to do some work and doesn't always get it right as far as the OpenGL spec is concerned.
