OpenGL Transform matrix calculation

This is a very basic question about 3D game engines and the role of 3D hardware in transformation calculations. If you use a library like OpenGL to draw transformed polygons to the screen, you might push and pop matrices onto and off of the library's modelview matrix stack to achieve the correct transformation for each object being drawn. Using this method, you might be able to avoid performing expensive matrix multiplications in software (right? -- I'm a 3D newbie, so correct me if my assumptions are incorrect).

However, if you're writing a game engine, you'll probably need access to these transform matrices for many different reasons -- collision detection, object picking, game-specific logic, etc. -- so you'll be performing the multiplications in software and storing the results somewhere accessible by your engine (right?). So it seems like a game engine ends up having to perform model transforms twice for each object in the game -- once in hardware (inexpensive) while drawing transformed polys to the screen, and once in software (expensive) for engine operations unrelated to drawing.

Is my understanding of the situation correct? Is there any way to have the computer's 3D hardware perform -- and return the results of -- various linear algebra operations for purposes unrelated to rendering? Since 3D hardware is inherently good at these sorts of operations, it seems silly to have the main CPU do them at all.
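For context, the "compute once, use everywhere" alternative the question is circling around can be sketched as follows -- a minimal column-major 4x4 multiply done in software (the function name is illustrative, not from any particular library). Because OpenGL's fixed-function pipeline uses column-major storage, the same array the engine keeps for collision and picking can be handed straight to glLoadMatrixf, so the multiplication only happens once per object:

```cpp
#include <cstring>

// Multiply two column-major 4x4 matrices: out = a * b.
// Column-major matches OpenGL's fixed-function convention, so the
// result can be passed directly to glLoadMatrixf().
void mat4Mul(const float a[16], const float b[16], float out[16])
{
    float tmp[16];
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
        {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[k * 4 + r] * b[c * 4 + k];
            tmp[c * 4 + r] = sum;
        }
    std::memcpy(out, tmp, sizeof tmp); // lets out alias a or b safely
}
```

The engine stores the result for its own queries and the renderer uploads the very same array, so nothing is computed twice.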

It is true that 3D hardware is very good at those sorts of operations, but unless you are doing GPGPU work, the graphics pipeline is very much optimized in one direction, so reading those calculations back would be expensive. This is changing, though: projects like HavokFX do aim to use graphics hardware for exactly what you propose.

As for needing to perform the calculations twice: this is one of the reasons spatial partitioning and bounding volumes are used as early-outs. Static worlds/levels are kept in model space for that reason, as they will in general be involved in the most collision tests. For movable objects, bounding volumes are used to quickly reject most of the polygons, so only a few need to be transformed and tested for collisions (if you even need per-triangle collisions at all).
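As a concrete example of such an early-out, a bounding-sphere rejection test costs a handful of multiplies no matter how many triangles the models contain (a minimal sketch; the struct layout is illustrative):

```cpp
struct Sphere { float x, y, z, r; };  // centre and radius in world space

// Broad-phase early-out: if the bounding spheres don't overlap, no
// triangle of one model can possibly touch the other, so the expensive
// per-triangle narrow phase can be skipped entirely.
bool spheresOverlap(const Sphere& a, const Sphere& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float rsum = a.r + b.r;
    // Compare squared distances to avoid a square root.
    return dx * dx + dy * dy + dz * dz <= rsum * rsum;
}
```

Only when this test passes do any of the model's vertices need to be transformed and tested individually.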


Original post by tconkling
So it seems like a game engine ends up having to perform model transforms twice for each object in the game -- once in hardware (inexpensive) while drawing transformed polys to the screen, and once in software (expensive) for engine operations unrelated to drawing.

You'll probably stall the pipeline trying to read matrices back from the GPU.

And your CPU-calculated matrices are probably not as expensive as you think. A profiler is your friend.

If you do find that your matrix/matrix or vector/matrix multiplications are costly, then use your library's functions (or write your own) to do these ops with the CPU's vector capabilities (e.g. SSE, VMX, whatever).
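A sketch of what such a vectorised routine might look like with SSE intrinsics (x86 only; column-major storage assumed, and the function name is illustrative). The matrix-vector product is computed as a sum of matrix columns scaled by the vector's components:

```cpp
#include <xmmintrin.h>

// Transform a 4-component vector by a column-major 4x4 matrix with SSE:
// out = m * v, built as v[0]*col0 + v[1]*col1 + v[2]*col2 + v[3]*col3.
void mat4MulVec4SSE(const float m[16], const float v[4], float out[4])
{
    __m128 r = _mm_mul_ps(_mm_loadu_ps(m), _mm_set1_ps(v[0]));
    r = _mm_add_ps(r, _mm_mul_ps(_mm_loadu_ps(m + 4),  _mm_set1_ps(v[1])));
    r = _mm_add_ps(r, _mm_mul_ps(_mm_loadu_ps(m + 8),  _mm_set1_ps(v[2])));
    r = _mm_add_ps(r, _mm_mul_ps(_mm_loadu_ps(m + 12), _mm_set1_ps(v[3])));
    _mm_storeu_ps(out, r);
}
```

All four components are produced in four multiply-adds instead of sixteen scalar multiplies and twelve adds.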

While it's true that performing matrix transformations is more efficient in hardware than in software, the difference isn't that big. In fact, I'd be willing to go out on a limb and say that your CPU can transform more vertices per second than your GPU (when dedicated to the task), simply because the GPU is designed to do so many other tasks at the same time, whereas the CPU is far more flexible. But I digress.

Transforming a vertex twice is undeniably more demanding than transforming it once, which is exactly why we go to such lengths to design engines that don't need to do so. An engine that requires all of its vertices to be transformed in software for physics purposes is one in need of optimisation. Usually, generalisations and approximations are made so that the physical working set is smaller than the graphical one: it is (generally) better to use generous axis-aligned bounding volumes than accurately transformed ones -- sacrificing some culling efficiency to save on transformation overhead. If a model has 3000 vertices but is represented only by its centroid and a bounding radius, then you'd be a fool to worry about that one extra transform in a thousand.

As for picking: note that it is considerably more productive to untransform the picking ray into object space than it is to transform the object into screen space, so the accuracy tradeoff doesn't really apply.
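A sketch of that untransform for the common rigid (rotation plus translation) case -- the object's world matrix can be inverted cheaply and the ray pulled into object space, so none of the model's vertices are ever transformed (column-major layout; function names are illustrative):

```cpp
// Invert a rigid transform (rotation + translation only), column-major.
// For rigid matrices the inverse is just the transposed rotation and a
// rotated, negated translation -- no general 4x4 inversion is needed.
// Note: inv must not alias m.
void invertRigid(const float m[16], float inv[16])
{
    for (int c = 0; c < 3; ++c)          // transpose the 3x3 rotation block
        for (int r = 0; r < 3; ++r)
            inv[c * 4 + r] = m[r * 4 + c];
    for (int r = 0; r < 3; ++r)          // new translation = -R^T * t
        inv[12 + r] = -(inv[r] * m[12] + inv[4 + r] * m[13] + inv[8 + r] * m[14]);
    inv[3] = inv[7] = inv[11] = 0.0f;
    inv[15] = 1.0f;
}

// Transform the ray origin as a point (translation applies)...
void xformPoint(const float m[16], const float p[3], float out[3])
{
    for (int r = 0; r < 3; ++r)
        out[r] = m[r] * p[0] + m[4 + r] * p[1] + m[8 + r] * p[2] + m[12 + r];
}

// ...and the ray direction as a vector (translation does not).
void xformDir(const float m[16], const float d[3], float out[3])
{
    for (int r = 0; r < 3; ++r)
        out[r] = m[r] * d[0] + m[4 + r] * d[1] + m[8 + r] * d[2];
}
```

With origin and direction in object space, the ray test runs against the model's untransformed geometry: one matrix inversion and two small transforms replace transforming every vertex.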


