
OpenGL Misunderstanding?


S0n0    138
Hi all, I've got a little question referring to OpenGL. I've heard from many people that they use OpenGL for motion and rotation of 3D objects, but I'm wondering about the related functions in OpenGL. I know how and when to use matrices and transformations, but I can't imagine that a whole 3D application (maybe with a good engine) uses OpenGL for motion and rotation, since the API doesn't know anything about the format of my coordinates and vectors. A call to glVertex after a glTranslate on the modelview matrix won't actually modify the coordinates inside my code.

So is it right that I also need to transform coordinates on my own? If I want to move an object and I use glTranslate, I only modify the way it is displayed by OpenGL, right? So if I also use collision detection in my engine, I need to move that object with my own methods, since a call to glTranslate won't modify the real coordinates. Am I right?

Thanks!

Illco    928
Yes. Generally, for each object you keep its position as a vector from the origin to its world position. For rendering, you pass that same vector to OpenGL through glTranslate; for everything else (collision, game logic), you just use the vector as-is. The same goes for rotation and scaling.
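A minimal sketch of that idea (all names here are made up, fixed-function style): the object owns a single position vector; rendering hands it to glTranslatef, while collision and other logic read the very same vector.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

struct Object {
    Vec3 position{0.0f, 0.0f, 0.0f};  // single source of truth for where the object is

    void render() const {
        // Fixed-function rendering would hand the same vector to OpenGL:
        //   glPushMatrix();
        //   glTranslatef(position.x, position.y, position.z);
        //   ...submit vertices in local coordinates...
        //   glPopMatrix();
    }

    // Everything else (collision, AI, picking) uses the vector directly.
    Vec3 toWorld(const Vec3& local) const {
        return { local.x + position.x, local.y + position.y, local.z + position.z };
    }
};
```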

Greetz,

Illco

haegarr    7372
The matrix known as the MODELVIEW matrix in OpenGL is used to transform any geometry (vertices, in general) on the fly from the coordinate frame they are given in to view coordinates. That happens entirely inside OpenGL; your local data isn't touched by it.

The question is what you mean by "So is it right that i also need to transform coordinates on my own?" In most cases it would be disastrous if OpenGL (or any other API) altered your main copy of the geometric data: numerical precision issues would introduce instability over time, and your models would begin to metamorphose more or less arbitrarily.

To avoid such things, the vertices of a model are usually defined in a _local_ coordinate frame. Inside that frame the model's vertices are fixed (well, not if bones or morphing are active, but in principle). glTranslate, glRotate, and so on are then used to tell OpenGL where that coordinate frame currently is.

As another consequence, animating the model (e.g. moving it) usually does not mean translating each single vertex but translating the local coordinate frame. That is much more efficient.
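A hedged illustration of that point (names are hypothetical): moving the frame's origin is one update no matter how many vertices the model has, and the local vertex data itself never changes.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Model {
    std::vector<Vec3> localVerts;   // fixed, defined once in the local frame
    Vec3 origin{0.0f, 0.0f, 0.0f};  // the local frame's position in the world

    // Moving the whole model is O(1): only the frame moves.
    void move(const Vec3& d) { origin.x += d.x; origin.y += d.y; origin.z += d.z; }

    // World-space position of one vertex, computed on demand.
    Vec3 worldVert(std::size_t i) const {
        const Vec3& v = localVerts[i];
        return { v.x + origin.x, v.y + origin.y, v.z + origin.z };
    }
};
```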

So yes, in the case of collision detection you have to do some transformations yourself (in fact there are many more situations where you need to do so). However, several optimizations exist to avoid doing transformations on a large scale. Colliding two models on a per-vertex basis is a performance killer, so normally simpler shapes such as bounding spheres, cylinders, or boxes (object-oriented "OBB", axis-aligned "AABB", or whatever) are used. Then not all vertices but only the bounding volumes have to be transformed.
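For instance (a sketch under simplifying assumptions, rotation omitted): with bounding spheres, only the two centers need transforming into world space, and the per-vertex data stays untouched unless the cheap sphere test reports an overlap.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

struct Sphere { Vec3 center; float radius; };  // center given in the model's local frame

// Translate the local-frame center into world space (rotation omitted for brevity).
inline Vec3 toWorld(const Vec3& c, const Vec3& objectPos) {
    return { c.x + objectPos.x, c.y + objectPos.y, c.z + objectPos.z };
}

// Two spheres collide when the distance between centers is at most the radius sum.
inline bool collide(const Sphere& a, const Vec3& posA,
                    const Sphere& b, const Vec3& posB) {
    Vec3 ca = toWorld(a.center, posA);
    Vec3 cb = toWorld(b.center, posB);
    float dx = ca.x - cb.x, dy = ca.y - cb.y, dz = ca.z - cb.z;
    float r = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;  // squared form avoids a sqrt
}
```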

ShmeeBegek    196

Perhaps you're confused as to the nature of OpenGL: OpenGL actually has no features for animation whatsoever. It does not track 'objects' and draw them for you from frame to frame; it's just an API for drawing triangles. You hold all the information about those triangles, you alter it as you see fit (and have it transformed before drawing with the likes of glTranslatef), and then you tell OpenGL to draw it every so often to show the user what's going on.
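A rough sketch of that division of labor (everything here is made-up scaffolding): the application owns and updates its own state each frame; the draw step merely re-describes the current state to OpenGL.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Application-side state: OpenGL knows nothing about it.
struct Ball {
    Vec3 pos{0.0f, 0.0f, 0.0f};
    Vec3 vel{1.0f, 0.0f, 0.0f};

    void update(float dt) {          // the app animates its own data
        pos.x += vel.x * dt;
        pos.y += vel.y * dt;
        pos.z += vel.z * dt;
    }

    void draw() const {
        // Each frame, you simply hand the current state to OpenGL again:
        //   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        //   glLoadIdentity();
        //   glTranslatef(pos.x, pos.y, pos.z);
        //   ...glBegin(GL_TRIANGLES) / glVertex3f(...) / glEnd()...
        //   ...swap buffers...
    }
};
```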



jyk    2094
Quote:
Original post by S0n0
So is it right that i also need to transform coordinates on my own? If I wan't to move an object and I'll use glTranslate i only modify the way it is displayed by OpenGL right? So if I would use collision detection too in my engine i need to move that object with my own methods, since a call to glTranslate won't modify the real coordinates. Am I right?
As the other posters have suggested, your intuition is correct. Although there are ways to keep this to a minimum, there are times when you need the transformed geometry for purposes other than rendering (notably collision detection), in which case you may have to perform the transformation yourself.
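As a minimal illustration of performing such a transformation yourself (a sketch only; a real engine would use a full 4x4 matrix class): the same rotate-then-translate that a glTranslatef/glRotatef pair sets up, done on the CPU so collision code can see the result.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate 'v' about the Z axis by 'angle' radians, then translate by 't' --
// the CPU-side equivalent of calling glTranslatef(t) followed by
// glRotatef(angle, 0, 0, 1) before submitting 'v'.
inline Vec3 transformZ(const Vec3& v, float angle, const Vec3& t) {
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x - s * v.y + t.x,
             s * v.x + c * v.y + t.y,
             v.z + t.z };
}
```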

dawidjoubert    161
Quote:
Original post by ShmeeBegek
OpenGL actually has no features for animation whatsoever. It does not track 'objects' and draw them for you from frame to frame, it's just an API for drawing triangles.
I think that one is for the OpenGL FAQ: really, all OpenGL does is render polygons, with their textures/shaders/blend states applied.

Though, to be fair, it can render lines and points as well.

