# OpenGL: Why does OpenGL apply transformations in reverse?

## Recommended Posts

Is it because, when I have a point P in space and I rotate it and then scale it, OpenGL reads it like this:

P' = Rotate() * P
P'' = Scale() * P'
P'' = Scale() * Rotate() * P

So, reading from the left, does it scale and then rotate point P? Can anyone here enlighten me? Thank you very much.

##### Share on other sites
OpenGL matrices are column major. When you multiply two column-major matrices A and B, you get A*B, in that order. When you multiply a vertex by A*B you get A*B*v, which means that matrix B affects the vertex v first, and then matrix A contributes to v, in this order.

When you apply a transformation like glRotatef, the current stack matrix is multiplied by the rotation matrix, so for example you obtain this:

current stack matrix = I (identity)

glRotatef(90, 0, 1, 0);  →  I * Rotate
glScalef(1, 1, 0.5);     →  I * Rotate * Scale

And that's why the matrices are applied in reverse order relative to the order in which you write the transformations.
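A minimal sketch of the same thing in classic fixed-function OpenGL (my illustration of the point above, reusing the 90° rotation and 0.5 scale from the example):

```c
#include <GL/gl.h>

/* The modelview stack starts as the identity I.  Each call
 * post-multiplies the current matrix, so after these two calls the
 * stack holds I * Rotate * Scale, and every vertex v is transformed
 * as I * Rotate * Scale * v: the scale hits v first and the rotation
 * second, even though glRotatef was written first. */
void draw_rotated_then_scaled_point(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                    /* current = I                  */
    glRotatef(90.0f, 0.0f, 1.0f, 0.0f);  /* current = I * Rotate         */
    glScalef(1.0f, 1.0f, 0.5f);          /* current = I * Rotate * Scale */

    glBegin(GL_POINTS);
    glVertex3f(1.0f, 0.0f, 0.0f);        /* sent through I * Rotate * Scale */
    glEnd();
}
```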


##### Share on other sites
The order of transformation is just an interpretation of the intermediate steps of a sequence of transformations; it has nothing to do with OpenGL specifically.

If you look at the transformations as acting on the object within a fixed global coordinate system, they behave as if applied individually in the reverse of the order they appear in the code. You can also look at them as successively transforming the object's local coordinate system, and then they apply in the forward order they appear in the code.

So it is only about how you interpret the code and the transformations. As far as OpenGL is concerned, there is a single matrix, and coordinates are multiplied by that matrix (considering only the object and viewpoint transforms, i.e. the modelview matrix).

There are only two things that concern OpenGL: the initial coordinate and the final transformed coordinate. If you need to interpret each individual step, you are concerning yourself with intermediate steps, but OpenGL doesn't.
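To make the two readings concrete, here is a small fixed-function sketch (my own example; the translation and rotation values are hypothetical):

```c
#include <GL/gl.h>

/* One sequence of calls, two equally valid readings.  OpenGL itself
 * only ever sees the single product T * R. */
void place_object(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(2.0f, 0.0f, 0.0f);      /* written first  (T) */
    glRotatef(90.0f, 0.0f, 0.0f, 1.0f);  /* written second (R) */

    /* Global-frame reading (reverse of code order): the object is
     * first rotated 90 degrees about the world Z axis, and the
     * rotated object is then translated 2 units along the world X.
     *
     * Local-frame reading (code order): the coordinate system is
     * first translated along X, then rotated about its own Z axis,
     * and the object is drawn in the resulting frame.
     *
     * Both readings describe the same final picture. */

    /* draw the object here, e.g. glBegin(...)/glEnd() */
}
```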

##### Share on other sites
Mathematicians transform points the same way OpenGL does, so it's DirectX that does it in reverse. Even if we usually work with affine transformations, which can be represented by 4x4 matrices, general transformations aren't matrices: they are functions from one space to another. (A simple example of a non-affine transformation is the mapping f(x, y) = (x², y).)

So the composition of transformations should follow the convention for composition of functions: if you have two functions f and g, their composition is the function (f∘g)(x) = f(g(x)). In your case, if S is the scaling and R the rotation, then (S∘R)(P) = S(R(P)), which is the rotation followed by the scaling. Hope it makes sense to you.
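As a concrete numeric sketch (my example, not from the post): take P = (1, 0, 0), R a 90° rotation about the Y axis, and S a scaling of Z by 0.5.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double p[3] = {1.0, 0.0, 0.0};
    const double a = acos(-1.0) / 2.0;   /* 90 degrees in radians */

    /* R: rotate P by 90 degrees about the Y axis (right-handed) */
    const double r[3] = {  cos(a) * p[0] + sin(a) * p[2],
                           p[1],
                          -sin(a) * p[0] + cos(a) * p[2] };

    /* S: scale the Z component by 0.5 -- applied AFTER the rotation,
     * exactly as in (S . R)(P) = S(R(P)) */
    const double s[3] = { r[0], r[1], 0.5 * r[2] };

    printf("S(R(P)) = (%g, %g, %g)\n", s[0], s[1], s[2]);
    return 0;
}
```

Up to floating-point rounding this prints (0, 0, -0.5): the rotation sends (1, 0, 0) to (0, 0, -1) first, and the scaling halves the Z afterwards.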

##### Share on other sites
That way you can build hierarchical renderings:

For example you have a car with four wheels.

In real life (I mean the non-reversed order):

transform_wheel (its rotation and steering)
transform_wheel_position (put the 4 wheels at the 4 corners)
transformation_of_car_in_world
camera_transformation

So if you want to draw the wheels, you have to traverse through all the transformations to get the final transformation.
If you want to draw the body of the car, you traverse through the last 3 transforms, and so on.

In OpenGL:

camera_transformation
draw_world
transformation_of_car_in_world
draw_car_body
push
    transform_wheel_1_position (put the wheel at its corner)
    transform_wheel_1 (its rotation and steering)
    draw_first_wheel
pop
push
    transform_wheel_2_position
    transform_wheel_2 (its rotation and steering)
    draw_second_wheel
pop

...

So if you build the hierarchy well, you don't have to traverse through all the transformations for every single object; you only have to multiply by the local transformation of the current object, and use the matrix stack (push/pop).
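A minimal fixed-function sketch of that hierarchy (my illustration; draw_car_body, draw_wheel, and all transform values are hypothetical placeholders):

```c
#include <GL/gl.h>

/* Hypothetical drawing helpers, assumed to exist elsewhere. */
extern void draw_car_body(void);
extern void draw_wheel(void);

/* Render the car body and two of its wheels using the matrix stack,
 * so each wheel only adds its own local transforms on top of the
 * car's world transform. */
void draw_car(float car_x, float car_z, float steer, float spin)
{
    glMatrixMode(GL_MODELVIEW);

    /* transformation_of_car_in_world */
    glTranslatef(car_x, 0.0f, car_z);
    draw_car_body();

    /* first wheel: position, then its own steering and rotation */
    glPushMatrix();
        glTranslatef(-1.0f, 0.0f, 1.5f);      /* wheel position */
        glRotatef(steer, 0.0f, 1.0f, 0.0f);   /* steering       */
        glRotatef(spin, 1.0f, 0.0f, 0.0f);    /* wheel rotation */
        draw_wheel();
    glPopMatrix();

    /* second wheel: same pattern with its own position */
    glPushMatrix();
        glTranslatef(1.0f, 0.0f, 1.5f);
        glRotatef(steer, 0.0f, 1.0f, 0.0f);
        glRotatef(spin, 1.0f, 0.0f, 0.0f);
        draw_wheel();
    glPopMatrix();
}
```

Each glPushMatrix/glPopMatrix pair restores the car's matrix, so one wheel's local transforms never leak into its siblings.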
