Alessandro

OpenGL world to local rotations


In OpenGL I rotate objects in world space:

glRotatef(myObject[i].zRot, 0, 0, 1);
glRotatef(myObject[i].xRot, 1, 0, 0);
glRotatef(myObject[i].yRot, 0, 1, 0);


Is there a way to find the object's local angles of rotation?

[quote name='Alessandro' timestamp='1318032168' post='4870317']
In OpenGL I rotate objects in world space:

glRotatef(myObject[i].zRot, 0, 0, 1);
glRotatef(myObject[i].xRot, 1, 0, 0);
glRotatef(myObject[i].yRot, 0, 1, 0);


Is there a way to find the object's local angles of rotation?
[/quote]

Look into quaternions. You can use that rotation matrix to extract a quaternion and then get Euler angles (XYZ).
There is a quaternion lib in my library.
There is also a function for extracting angles directly from a rotation matrix.
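Not that library's actual code, but a minimal sketch of the matrix-to-quaternion step, assuming a column-major 4x4 pure-rotation matrix laid out the way OpenGL stores it (m[col*4 + row]) and containing no scale:

[code]
#include <math.h>

/* Extract a unit quaternion (w, x, y, z) from a pure-rotation matrix m.
   m is column-major like OpenGL's modelview, i.e. m[col*4 + row]. */
void matrixToQuat(const float *m, float *w, float *x, float *y, float *z)
{
    float trace = m[0] + m[5] + m[10];
    if (trace > 0.0f) {
        float s = 2.0f * sqrtf(trace + 1.0f);                /* s = 4*w */
        *w = 0.25f * s;
        *x = (m[6] - m[9]) / s;
        *y = (m[8] - m[2]) / s;
        *z = (m[1] - m[4]) / s;
    } else if (m[0] > m[5] && m[0] > m[10]) {
        float s = 2.0f * sqrtf(1.0f + m[0] - m[5] - m[10]);  /* s = 4*x */
        *w = (m[6] - m[9]) / s;
        *x = 0.25f * s;
        *y = (m[4] + m[1]) / s;
        *z = (m[8] + m[2]) / s;
    } else if (m[5] > m[10]) {
        float s = 2.0f * sqrtf(1.0f + m[5] - m[0] - m[10]);  /* s = 4*y */
        *w = (m[8] - m[2]) / s;
        *x = (m[4] + m[1]) / s;
        *y = 0.25f * s;
        *z = (m[9] + m[6]) / s;
    } else {
        float s = 2.0f * sqrtf(1.0f + m[10] - m[0] - m[5]);  /* s = 4*z */
        *w = (m[1] - m[4]) / s;
        *x = (m[8] + m[2]) / s;
        *y = (m[9] + m[6]) / s;
        *z = 0.25f * s;
    }
}
[/code]

From the quaternion (or directly from the matrix, as in the later posts) you can then derive Euler angles for whatever rotation order you need.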

[quote name='Alessandro' timestamp='1318032168' post='4870317']
In OpenGL I rotate objects in world space:

glRotatef(myObject[i].zRot, 0, 0, 1);
glRotatef(myObject[i].xRot, 1, 0, 0);
glRotatef(myObject[i].yRot, 0, 1, 0);
[/quote]

OpenGL does not work in world space. If your modelview is still identity, the very first rotation will happen in world space, but only because local and global are still the same at that point. Every rotation after that uses the local coordinate system resulting from your previous transformations. If you want to use global coordinates, you have to change the order of the matrix multiplication (i.e. do it manually, because OpenGL will always do modelview = modelview * rotationMatrix, when what you want is modelview = rotationMatrix * modelview).
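A minimal sketch of the manual, pre-multiplied (world-axis) version under the fixed-function pipeline; mat4Mul is a placeholder for any column-major 4x4 multiply routine (out = a * b), not an existing GL call:

[code]
#include <GL/gl.h>
#include <math.h>

/* Rotate the current object about the *world* Y axis. glRotatef would instead
   post-multiply (modelview * rotation), i.e. rotate about the local Y axis. */
void rotateAboutWorldY(float angleDegrees)
{
    float r = angleDegrees * 3.14159265f / 180.0f;

    /* column-major rotation about Y */
    float rotY[16] = { cosf(r), 0.0f, -sinf(r), 0.0f,
                       0.0f,    1.0f,  0.0f,    0.0f,
                       sinf(r), 0.0f,  cosf(r), 0.0f,
                       0.0f,    0.0f,  0.0f,    1.0f };

    float modelview[16], result[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
    mat4Mul(result, rotY, modelview);   /* result = rotation * modelview */
    glLoadMatrixf(result);
}
[/code]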

[quote name='Trienco' timestamp='1318480681' post='4872086']
[quote name='Alessandro' timestamp='1318032168' post='4870317']
In OpenGL I rotate objects in world space:

glRotatef(myObject[i].zRot, 0, 0, 1);
glRotatef(myObject[i].xRot, 1, 0, 0);
glRotatef(myObject[i].yRot, 0, 1, 0);
[/quote]

OpenGL does not work in world space. If your modelview is still identity, the very first rotation will happen in world space, but only because local and global are still the same at that point. Every rotation after that uses the local coordinate system resulting from your previous transformations. If you want to use global coordinates, you have to change the order of the matrix multiplication (i.e. do it manually, because OpenGL will always do modelview = modelview * rotationMatrix, when what you want is modelview = rotationMatrix * modelview).
[/quote]

Sorry for the confusion. Yes, OpenGL works in local space. I do have a quaternion library, but I don't know the math for going from local angles to world space using matrices. Any chance of some pseudo-code?

Here is the code I came up with. I'm still not sure I'm extracting the angles correctly in the final step.


[code]
float xRot = 45.0f * DEG_TO_RAD;
float yRot = 30.0f * DEG_TO_RAD;
float zRot = 22.0f * DEG_TO_RAD;

// build rotateX matrix (column-major, the layout OpenGL and matrixMultiply use)
float *matX = new float[16];
matrixIdentity(matX);
matX[5]  =  cosf(xRot);
matX[6]  =  sinf(xRot);
matX[9]  = -matX[6];
matX[10] =  matX[5];

// build rotateY matrix
float *matY = new float[16];
matrixIdentity(matY);
matY[0]  =  cosf(yRot);
matY[2]  = -sinf(yRot);
matY[8]  = -matY[2];
matY[10] =  matY[0];

// build rotateZ matrix
float *matZ = new float[16];
matrixIdentity(matZ);
matZ[0] =  cosf(zRot);
matZ[1] =  sinf(zRot);
matZ[4] = -matZ[1];
matZ[5] =  matZ[0];

// calculate the combined rotation matrix matXYZ = matX * matY * matZ
float *matXY  = matrixMultiply(matX, matY);
float *matXYZ = matrixMultiply(matXY, matZ);

// Get the current MODELVIEW matrix from OpenGL
float *modelview = new float[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);

// this should be the final rotation matrix, in world space
float *worldRotMatrix = matrixMultiply(modelview, matXYZ);

float angle_x, angle_y, angle_z, tr_x, tr_y;

angle_y = -asinf(worldRotMatrix[2]);        /* Calculate Y-axis angle */
float C = cosf(angle_y);
if (fabsf(C) > 0.005f)                      /* Gimbal lock? */
{
    tr_x =  worldRotMatrix[10] / C;         /* No, so get X-axis angle */
    tr_y = -worldRotMatrix[6]  / C;
    angle_x = atan2f(tr_y, tr_x);
    tr_x =  worldRotMatrix[0] / C;          /* Get Z-axis angle */
    tr_y = -worldRotMatrix[1] / C;
    angle_z = atan2f(tr_y, tr_x);
}
else                                        /* Gimbal lock has occurred */
{
    angle_x = 0;                            /* Set X-axis angle to zero */
    tr_x = worldRotMatrix[5];               /* And calculate Z-axis angle */
    tr_y = worldRotMatrix[4];
    angle_z = atan2f(tr_y, tr_x);
}
[/code]

Forgot to show the matrixMultiply function:

[code]
// Returns m1 * m2 (column-major); the caller owns the returned array.
float* matrixMultiply(float* m1, float* m2)
{
    float *finalMat = new float[16];

    // First column
    finalMat[0] = m1[0]*m2[0] + m1[4]*m2[1] + m1[8]*m2[2]  + m1[12]*m2[3];
    finalMat[1] = m1[1]*m2[0] + m1[5]*m2[1] + m1[9]*m2[2]  + m1[13]*m2[3];
    finalMat[2] = m1[2]*m2[0] + m1[6]*m2[1] + m1[10]*m2[2] + m1[14]*m2[3];
    finalMat[3] = m1[3]*m2[0] + m1[7]*m2[1] + m1[11]*m2[2] + m1[15]*m2[3];

    // Second column
    finalMat[4] = m1[0]*m2[4] + m1[4]*m2[5] + m1[8]*m2[6]  + m1[12]*m2[7];
    finalMat[5] = m1[1]*m2[4] + m1[5]*m2[5] + m1[9]*m2[6]  + m1[13]*m2[7];
    finalMat[6] = m1[2]*m2[4] + m1[6]*m2[5] + m1[10]*m2[6] + m1[14]*m2[7];
    finalMat[7] = m1[3]*m2[4] + m1[7]*m2[5] + m1[11]*m2[6] + m1[15]*m2[7];

    // Third column
    finalMat[8]  = m1[0]*m2[8] + m1[4]*m2[9] + m1[8]*m2[10]  + m1[12]*m2[11];
    finalMat[9]  = m1[1]*m2[8] + m1[5]*m2[9] + m1[9]*m2[10]  + m1[13]*m2[11];
    finalMat[10] = m1[2]*m2[8] + m1[6]*m2[9] + m1[10]*m2[10] + m1[14]*m2[11];
    finalMat[11] = m1[3]*m2[8] + m1[7]*m2[9] + m1[11]*m2[10] + m1[15]*m2[11];

    // Fourth column
    finalMat[12] = m1[0]*m2[12] + m1[4]*m2[13] + m1[8]*m2[14]  + m1[12]*m2[15];
    finalMat[13] = m1[1]*m2[12] + m1[5]*m2[13] + m1[9]*m2[14]  + m1[13]*m2[15];
    finalMat[14] = m1[2]*m2[12] + m1[6]*m2[13] + m1[10]*m2[14] + m1[14]*m2[15];
    finalMat[15] = m1[3]*m2[12] + m1[7]*m2[13] + m1[11]*m2[14] + m1[15]*m2[15];

    return finalMat;
}
[/code]

If you understand the dot product and know how to use it to find angles between vectors:

After you do those 3 global rotations you get a matrix that holds the new x, y, z axes of your object. If you draw your object with the identity matrix, it is presumably aligned with the global axes (right = x, top = y, forward = z or negative z). Those 3 vectors now correspond to the first 3 columns of your matrix after doing the global rotations. In other words, the identity matrix gives you the x, y, z axes, and what they map to now is columns 0, 1, 2 of your matrix. So take the dot product between the old x and new x, old y and new y, old z and new z, and you can figure out the angles from that.

I assume that is what you want, but you should already know the local rotations (yaw, pitch, roll). I do what you do: I align all models on the z axis, so that I can do the z rotation first, and then the x, y rotations are the ones that determine the heading direction of the object. I think I just don't understand your problem, though.

Sorry, instead of writing code perhaps I should have explained in more detail what I'm trying to do.
I have a simple OpenGL application where I load and display .obj files. The application also allows positioning, scaling and rotating those models with the mouse. All those transformations happen in local space.
The application can export the above data to a .txt file with the following scheme:

obj file | xRot | yRot | zRot | xPos | yPos | zPos

Later on, in a [u]separate, commercial application[/u], I read this .txt file and recreate the scene, loading the obj files sequentially and applying the transformation values read from the .txt file.
The problem is that the commercial application performs rotations in world space, while the angles that I "exported" are in local space.

So, what I'm trying to do is to convert those angles from local to world space so that the objects are rotated correctly in the commercial application.

I hope that explains it better...

You should be able to use the local xyz rotations to compute what the object would look like relative to its own identity/local space, then apply the transform that positions the object in the world. You should be able to just multiply both matrices. That gives you the final matrix used to draw the object, given vertex data in local space. With that matrix, you can take the dot product of each of its axis vectors (columns 0, 1, 2) with the corresponding world space axes (1,0,0), (0,1,0), (0,0,1), and that gives you the angle at which the object is drawn relative to each world axis.
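A small sketch of that dot-product idea (not the poster's actual code); m is assumed to be the column-major 4x4 matrix discussed above, with columns 0, 1, 2 holding the object's rotated x, y, z axes:

[code]
#include <math.h>

/* Angle (in radians) between the world X axis (1,0,0) and the object's rotated
   X axis (column 0 of m). The dot product with (1,0,0) is just m[0]. The same
   pattern with m[5] and m[10] gives the Y and Z axis angles. */
float angleToWorldX(const float *m)
{
    float d = m[0];
    if (d >  1.0f) d =  1.0f;   /* clamp against rounding error before acos */
    if (d < -1.0f) d = -1.0f;
    return acosf(d);
}
[/code]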

Thanks dpadam450, so the pseudo-code would look like this:

1) build the object local space matrix
2) get the current MODELVIEW matrix from OpenGL
3) build the final matrix by multiplying the MODELVIEW by the local space matrix (I think I got this far in the code I posted earlier)
4) take the dot product of the vectors in columns 0, 1, 2 of the final matrix with the corresponding world space vectors (1,0,0), (0,1,0), (0,0,1)
5) extract Euler angles from the resulting matrix.

Would that work? Any comments also from others?
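A rough sketch of steps 1-5, reusing the matrixMultiply routine posted earlier; buildLocalRotation is a placeholder for the matX/matY/matZ code from the previous post, and the angle extraction at the end assumes the angles are wanted for world-axis rotations applied in x, then y, then z order (i.e. the matrix is decomposed as Rz * Ry * Rx):

[code]
float *localRot = buildLocalRotation(xRot, yRot, zRot);   /* step 1 (placeholder) */

float modelview[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);              /* step 2 */

float *finalMatrix = matrixMultiply(modelview, localRot); /* step 3 */

/* steps 4-5: the dot products against (1,0,0), (0,1,0), (0,0,1) simply pick
   out columns 0, 1, 2 of finalMatrix, so the angles can be read from it
   directly (column-major indices, ignoring the gimbal-lock case where
   cos(yAngle) is near zero) */
float yAngle = -asinf(finalMatrix[2]);
float xAngle = atan2f(finalMatrix[6], finalMatrix[10]);
float zAngle = atan2f(finalMatrix[1], finalMatrix[0]);
[/code]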

Again, for mine, I make sure that any game object has NO transform, meaning it is drawn with the identity matrix, and when it is drawn with the identity matrix I make sure the local xyz axes are the global xyz axes. My models in Blender have up = y, right = x and forward = negative z, for all models. If I rotate an object 30 degrees on the y-axis in my engine, I know that its local forward vector from export and in my game engine will have the exact same x, y, z components. So local = global.

But again, if your model already has a transform when exported, you should remove it if you can, and rotate the vertices rather than the object's matrix. What exactly are rotX/Y/Z local to, anyway? I can spin my body in any direction and my right, relative to myself, is still just "my right". I think what you have is a global transform of your model in your modeling application.

To put it another way: what does rotX mean relative to local space? Stand up right now, close your eyes and spin in circles, then stop. What is your x rotation relative to yourself? ... Nothing. You could say it is 100 degrees relative to my old orientation, or 20 degrees about the world x axis. So what you are exporting is probably rotX relative to the world or something; the exported model itself shouldn't have any. If this is an instance that you have rotated, then if you make its original local system the identity matrix, local = global.

[quote name='Alessandro' timestamp='1318535004' post='4872277']
Later on, in a [u]separate, commercial application[/u], I read this .txt file and recreate the scene, loading the obj files sequentially and applying the transformation values read from the .txt file.
The problem is that the commercial application performs rotations in world space, while the angles that I "exported" are in local space.
[/quote]

Are you sure it's not simply a matter of one application applying the rotations in a different order than the other? The only difference between rotating around a local or a global axis is the order in which you multiply (using OpenGL's column-vector convention):
local: matrix = matrix * rotation
global: matrix = rotation * matrix

I also really dislike storing orientations as Euler angles. They're useless without knowing the order in which they need to be applied.
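To illustrate the two rules, a sketch rather than any specific library: mat4Mul(out, a, b) is a placeholder for any column-major 4x4 multiply with out = a * b, and orientation is the object's accumulated rotation matrix.

[code]
#include <string.h>

/* rotate about one of the object's own (local) axes */
void rotateLocal(float orientation[16], const float step[16])
{
    float tmp[16];
    mat4Mul(tmp, orientation, step);      /* orientation = orientation * step */
    memcpy(orientation, tmp, sizeof(tmp));
}

/* rotate about a fixed world axis */
void rotateGlobal(float orientation[16], const float step[16])
{
    float tmp[16];
    mat4Mul(tmp, step, orientation);      /* orientation = step * orientation */
    memcpy(orientation, tmp, sizeof(tmp));
}
[/code]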

[quote name='Trienco' timestamp='1318568713' post='4872415']


Are you sure it's not simply a matter of one application applying the rotations in a different order than the other? The only difference between rotating around a local or a global axis is the order in which you multiply (using OpenGL's column-vector convention):
local: matrix = matrix * rotation
global: matrix = rotation * matrix
[/quote]

Yes, it's exactly like that. All I need to do is convert the local rotation values from Application1 into world rotation values, so that Application2 can use them and display the objects with the same orientation.

So, say in Application1 I have an object rotated in local space; what I'm doing to convert the local angles to world angles is:

1) build the rotation matrix from the object's rotation angles
2) multiply the modelview by the object rotation matrix built at step 1
3) get the Euler angles from that matrix

The angles computed this way, when applied in Application2, are not working (i.e. the object orientation doesn't match the one in Application1).
This problem is driving me nuts.

And what drives me crazy is that I can't figure out what seems like such an easy problem. I mean, let's forget OpenGL for a moment.
I have an object rotated in [b]local[/b] space as follows:

pitch=45, yaw=30, roll=15

Given that the world axes are [1,0,0], [0,1,0], [0,0,1],

now let's calculate the [b]world[/b] angles from that... what is the math behind it?

[quote name='Alessandro_FL' timestamp='1318584787' post='4872462']
And what drives me crazy is that I can't figure out what seems like such an easy problem. I mean, let's forget OpenGL for a moment.
I have an object rotated in [b]local[/b] space as follows:

pitch=45, yaw=30, roll=15

Given that the world axes are [1,0,0], [0,1,0], [0,0,1],

now let's calculate the [b]world[/b] angles from that... what is the math behind it?
[/quote]

The math is that your problem has several potential solutions with only one of them being correct, and without knowing the ORDER in which pitch/yaw/roll are supposed to be applied you can only try until it fits. Three angles alone do NOT describe a unique orientation, but six possible ones. That has nothing to do with local or global space (or rather: one of those potential orders already IS the equivalent of using world instead of local axes).

Also, an angle is an angle and doesn't care about local or any other space. If Wavefront does pitch-yaw-roll using the object's local axes and you insist on using world axes, you do roll-yaw-pitch and are done with it. Why? Because one does
totalRotation = rotation(roll, 0,0,1) * rotation(yaw, 0,1,0) * rotation(pitch, 1,0,0)
and the other does
totalRotation = rotation(pitch, 1,0,0) * rotation(yaw, 0,1,0) * rotation(roll, 0,0,1)

At no point does it make sense to actually calculate an object's local x, y, z axes and use them for a rotation, since this happens "automatically" if you multiply in the right order.

Assuming your application is exporting angles that are based on your own order (localX, localZ, localY = globalY, globalZ, globalX), what order is the OTHER application applying the rotations in?
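To make the order issue concrete, a small sketch using the matrixMultiply routine posted earlier; rotX, rotY, rotZ stand for single-axis rotation matrices built like matX/matY/matZ above (placeholders, not existing variables):

[code]
/* Same three angles, two different orders. With column vectors the rightmost
   factor of a product is applied first. */
float *yawPitch     = matrixMultiply(rotY, rotX);
float *pitchYawRoll = matrixMultiply(rotZ, yawPitch);   /* roll * yaw * pitch */

float *yawRoll      = matrixMultiply(rotY, rotZ);
float *rollYawPitch = matrixMultiply(rotX, yawRoll);    /* pitch * yaw * roll */

/* pitchYawRoll and rollYawPitch are generally NOT the same matrix, which is
   why the order has to match before the angles themselves can match. */
[/code]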

To supplement Trienco's writings:

This code from the OP
[code]
glRotatef(myObject[i].zRot, 0, 0, 1);
glRotatef(myObject[i].xRot, 1, 0, 0);
glRotatef(myObject[i].yRot, 0, 1, 0);
[/code]
means a combined rotation (written in the following using column vector notation, as is usual when dealing with OpenGL)
[b]R[/b][sub]z[/sub] * [b]R[/b][sub]x[/sub] * [b]R[/b][sub]y[/sub]
Probing a rotation [b]R[/b] * [b]v[/b] of a vector [b]v[/b], one notices that any point that is part of the axis of rotation is mapped onto itself ([b]0[/b] is always such a point), regardless of [b]R[/b]. A combined rotation like
[b]v[/b]' = [b]R[/b][sub]z[/sub] * [b]R[/b][sub]x[/sub] * [b]R[/b][sub]y[/sub] * [b]v[/b]
can be seen as
[b]v[/b][sub]1[/sub] = [b]R[/b][sub]y[/sub] * [b]v[/b]
[b]v[/b][sub]2[/sub] = [b]R[/b][sub]x[/sub] * [b]v[/b][sub]1[/sub]
[b]v[/b]' = [b]R[/b][sub]z[/sub] * [b]v[/b][sub]2[/sub]
I like to say that each particular rotation is done in a space that is coincident with the world space at the moment the transformation is applied.

Now, you don't want to do that. You want to rotate around the axes that are coincident with the model space axes. So you have to ensure that the desired axis is coincident with the corresponding "world axis" at the moment you apply the rotation. This is always true for the first rotation (y in this case):
[b]R[/b][sub]y[/sub] * [b]v[/b]
But after that rotation the model space x axis is no longer coincident with the world x axis. Hence you need to "undo" the first rotation, apply the desired rotation, and then "redo" the first rotation:
[b]R[/b][sub]y[/sub] * [b]R[/b][sub]x[/sub] * [b]R[/b][sub]y[/sub][sup]-1[/sup] * ( [b]R[/b][sub]y[/sub] * [b]v[/b] ) = [b]R[/b][sub]y[/sub] * [b]R[/b][sub]x[/sub] * [b]v[/b]
Doing this a second time for the third rotation yields
[b]R[/b][sub]y[/sub] * [b]R[/b][sub]x[/sub] * [b]R[/b][sub]z[/sub] * [b]v[/b]
which is obviously a combined rotation similar to the original one, but with a reversed order of matrices. Hence this rotation still uses the same "world axes", but due to the chosen order the effect is like rotating around the model axes. Notice that the angles in use are not changed at all.


You [i]can[/i] compute something like "world angles" from "local angles" if you want. You would do it by choosing the order for the local rotation, computing the result with the known "local angles", e.g.
[b]R[/b] := [b]R[/b][sub]y[/sub] * [b]R[/b][sub]x[/sub] * [b]R[/b][sub]z[/sub]
and then computing the angles needed to get the same result with the reverse order, i.e.
[b]R[/b][sub]z[/sub] * [b]R[/b][sub]x[/sub] * [b]R[/b][sub]y[/sub] = [b]R[/b]
in this case. However, as Trienco already said, this is meaningless for the given problem.
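In case those "world angles" are needed after all, here is a rough sketch of that last computation, reusing the matrixMultiply routine from earlier in the thread. It assumes the exporting application composes Rz * Rx * Ry as in the opening post's glRotatef calls, and that the target application applies world-axis rotations in x, then y, then z order (so its matrix factors as Rz' * Ry' * Rx'); buildRotX/Y/Z are placeholders for the single-axis matrix code posted above:

[code]
#include <math.h>

/* Compose the exporter's rotation: glRotatef z, then x, then y  =>  Rz * Rx * Ry */
float *Rxy = matrixMultiply(buildRotX(xRot), buildRotY(yRot));
float *R   = matrixMultiply(buildRotZ(zRot), Rxy);

/* Decompose R as Rz(z') * Ry(y') * Rx(x'), i.e. world-axis rotations applied
   in x, y, z order. Indices are column-major: R[col*4 + row]. */
float yWorld = -asinf(R[2]);          /* row 2, col 0 = -sin(y') */
float xWorld = atan2f(R[6], R[10]);   /* row 2, cols 1 and 2     */
float zWorld = atan2f(R[1], R[0]);    /* rows 1 and 0 of col 0   */

/* Degenerate when cos(yWorld) is near zero (gimbal lock); handle that case
   separately, as in the extraction code posted earlier in the thread. */
[/code]

If the target application uses a different world-axis order, both the composition and the decomposition formulas change accordingly.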



