OpenGL Misunderstanding?

Recommended Posts

Hi all, I've got a little question referring to OpenGL. I've heard from many people that they use OpenGL for motion and rotation of 3D objects, but I'm wondering about the related functions in OpenGL. I know how and when to use matrices and transformations, but I can't imagine that a whole 3D application (maybe with a good engine) uses OpenGL for motion and rotation, since the API doesn't know anything about the format of my coordinates and vectors. A call to glVertex after a glTranslate on the modelview matrix won't actually modify the coordinates inside my code. So is it right that I also need to transform coordinates on my own? If I want to move an object and I use glTranslate, I only modify the way it is displayed by OpenGL, right? So if I also use collision detection in my engine, I need to move that object with my own methods, since a call to glTranslate won't modify the real coordinates. Am I right? Thanks!

Yes. Generally, for each object you store its position as a vector from the world origin. For rendering, you pass that same vector to OpenGL through glTranslate; for everything else (physics, collision, game logic), you use the vector directly. The same goes for rotation and scaling.

Greetz,

Illco
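To make that concrete, here is a minimal sketch in C of what Illco describes. The `Object` struct, `object_move`, and the field names are hypothetical, not from any library; the point is that moving the object only updates a CPU-side vector, and the same vector is later handed to glTranslatef at render time.

```c
/* Hypothetical object: its vertices live in a local frame and never
 * change; only this position vector is updated when the object moves. */
typedef struct {
    float pos[3];   /* world-space position of the object's local origin */
} Object;

/* Moving the object just updates the position vector on the CPU side. */
void object_move(Object *o, float dx, float dy, float dz)
{
    o->pos[0] += dx;
    o->pos[1] += dy;
    o->pos[2] += dz;
}

/* At render time the very same vector would be passed to OpenGL:
 *
 *     glPushMatrix();
 *     glTranslatef(o->pos[0], o->pos[1], o->pos[2]);
 *     draw_mesh(o);            // hypothetical mesh-drawing call
 *     glPopMatrix();
 *
 * For collision detection and other logic, you read o->pos directly. */
```

OpenGL never sees or modifies `pos` except as a value copied into the modelview matrix for that frame.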

OpenGL's MODELVIEW matrix is used to transform geometry (vertices, in general) on the fly from the coordinate frame it is given in to view coordinates. That happens entirely inside OpenGL; your local data isn't touched by it.

The question is what you mean by "So is it right that i also need to transform coordinates on my own?" In most cases it would be disastrous if OpenGL (or any other API) altered your main copy of the geometric data: numerical precision issues would introduce instability over time, and your models would begin to deform more or less arbitrarily.

To avoid such things, a model's vertices are usually defined in a _local_ coordinate frame. Within that frame the vertices are fixed (well, not if bones or morphing are active, but in principle). glTranslate, glRotate, and so on are then used to tell OpenGL where that coordinate frame currently sits.

As another consequence, animating the model (e.g. moving it) usually does not mean translating each individual vertex, but translating the local coordinate frame. That is much more efficient.

So yes, in the case of collision detection you have to do some transformations yourself (in fact there are many more situations where you need to). However, several optimizations exist to avoid transforming on a large scale. Colliding two models vertex by vertex is a real performance killer, so normally simpler shapes such as bounding spheres, cylinders, or boxes (oriented "OBB" or axis-aligned "AABB") are used. Then only the bounding volumes, not all the vertices, have to be transformed.
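As an illustration of the bounding-volume shortcut, here is a minimal bounding-sphere test in C. The names (`Sphere`, `spheres_collide`) are made up for this sketch; the point is that only one point per model, the sphere centre, has to be moved by the object's translation before the overlap test.

```c
/* A bounding sphere, defined in the model's local coordinate frame. */
typedef struct {
    float center[3];  /* centre in local space */
    float radius;
} Sphere;

/* Translate each sphere's centre into world space by its object's
 * position vector, then compare squared centre distance against the
 * squared sum of radii (avoids a sqrt). Returns 1 on overlap. */
int spheres_collide(const Sphere *a, const float pos_a[3],
                    const Sphere *b, const float pos_b[3])
{
    float d[3], r = a->radius + b->radius;
    int i;
    for (i = 0; i < 3; ++i)
        d[i] = (a->center[i] + pos_a[i]) - (b->center[i] + pos_b[i]);
    return d[0]*d[0] + d[1]*d[1] + d[2]*d[2] <= r * r;
}
```

Note that no vertex of either mesh is ever transformed here; only two centre points are, which is exactly why these broad-phase tests are cheap.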


Perhaps you're confused about the nature of OpenGL: it actually has no features for animation whatsoever. It does not track 'objects' and draw them for you from frame to frame; it's just an API for drawing triangles. You hold all the information about those triangles and alter it as you see fit (and have them transformed before drawing with the likes of glTranslatef), and then you tell OpenGL to draw them every so often to show the user what's going on.

Quote:
 Original post by S0n0 (full question quoted above)
As the other posters have suggested, your intuition is correct. Although there are ways to keep this to a minimum, there are times when you need the transformed geometry for purposes other than rendering (notably collision detection), in which case you may have to perform the transformation yourself.

Quote:
 Original post by ShmeeBegek (quoted above)

I think that one is FOR THE FAQ: really, all OpenGL does is render polygons with whatever textures, shaders, and blend states are applied.

Although, to be accurate, it can render lines and points as well.
