OpenGL speed: glRotatef vs. glMultMatrix

Halma    146
Hi, I'm writing a particle engine, and I'm going to be rendering a lot of billboarded sprites. To get the sprites oriented correctly, I have to rotate them to face the camera.

Here's my question: would it be faster to compute and save the rotation angles and then call glRotatef once or twice per billboard, or to compute the matrix once and call glMultMatrix every time? (I intend to use the same rotation for all sprites, because I don't think it would make a noticeable difference if I computed the billboarding separately for each little sprite.)

Would it work if I put the rotations in a display list, and would that be faster than making the gl* calls directly? Or maybe I should put the rotation plus the render in a display list?

I'm a bit new to OpenGL, so please help me figure out a fast way to do this. Thanks, everyone!

timw    598
glRotatef:

1) creates a rotation matrix
2) multiplies that matrix with the currently active matrix.

So if you're calling it over and over with the same arguments (i.e. building the same rotation matrix every time), you're wasting time. Just remember what it's doing in 1 and 2 and use that to determine whether what you're planning to do is faster. I'd say if you're constantly using the same matrix, build it once and use glMultMatrix; it has to be faster than glRotatef. But if you're building different matrices all the time, glRotatef takes basically the same time for the reasons above and might even be more efficient.
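
Roughly, the two call patterns look like this (untested sketch; it assumes fixed-function GL and that the shared billboard matrix comes from your own camera code):

#include <GL/gl.h>

/* Option A: have GL rebuild the rotation for every sprite. */
void draw_sprite_rotatef(float x, float y, float z, float yaw, float pitch)
{
    glPushMatrix();
    glTranslatef(x, y, z);
    glRotatef(yaw,   0.0f, 1.0f, 0.0f);  /* builds a matrix, then multiplies it in */
    glRotatef(pitch, 1.0f, 0.0f, 0.0f);  /* ...and again for the second axis */
    /* emit the textured quad here */
    glPopMatrix();
}

/* Option B: build the combined rotation once per frame, reuse it for every sprite. */
void draw_sprite_multmatrix(float x, float y, float z, const GLfloat billboard[16])
{
    glPushMatrix();
    glTranslatef(x, y, z);
    glMultMatrixf(billboard);            /* one multiply, no trig per sprite */
    /* emit the textured quad here */
    glPopMatrix();
}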

Tim

Halma    146

Thanks for the replies!

I had a suspicion glRotatef was doing something like that, but I thought maybe the graphics card had a way of computing and multiplying the matrix really quickly, and that sending the matrix from the CPU to the graphics card would be the slow part.

I'm going to be using the same rotation for every sprite: basically, each frame I want to compute the rotation once and then draw all of the sprites with it. Would it make sense to rebuild a display list each frame, call it for the hundreds of sprites, and then destroy the list?
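
Something like this is what I have in mind (untested sketch; Particle, numParticles, and billboardMatrix are just placeholders for my own data):

#include <GL/gl.h>

typedef struct { float x, y, z; } Particle;

/* Rebuild a small display list each frame and reuse it for every particle. */
void draw_particles(const Particle *particles, int numParticles,
                    const GLfloat billboardMatrix[16])
{
    GLuint list = glGenLists(1);

    glNewList(list, GL_COMPILE);
    glMultMatrixf(billboardMatrix);   /* the rotation shared by all sprites this frame */
    /* ...emit the textured quad here... */
    glEndList();

    for (int i = 0; i < numParticles; ++i) {
        glPushMatrix();
        glTranslatef(particles[i].x, particles[i].y, particles[i].z);
        glCallList(list);
        glPopMatrix();
    }

    glDeleteLists(list, 1);
}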

Daaark    3553
Something I wonder about:

If your particles are meant to always face the screen, and they're just little quads with a texture on them, why not simply draw them as quads? Translate to their position and draw them. Why rotate?

Daaark    3553
Quote:
Original post by Code-R
the "facing the screen" part ;)
If the quads are always meant to be facing the screen, simply translating to their location and drawing a quad is enough. There is nothing to rotate. They already face the screen.

python_regious    929
Quote:
Original post by Vampyre_Dark
Quote:
Original post by Code-R
the "facing the screen" part ;)
If the quads are always meant to be facing the screen, simply translating to their location and drawing a quad is enough. There is nothing to rotate. They already face the screen.


That wouldn't quite work. Particles at the edges of the screen would look incorrect (due to perspective skew), as they'd be facing along the negative normal of the near plane rather than towards the camera as they should.

Daaark    3553
Quote:
Original post by python_regious
That wouldn't quite work. Particles at the edges of the screen would look incorrect (due to perspective skew), as they'd be facing along the negative normal of the near plane rather than towards the camera as they should.


I draw all my particles using the method I mentioned above and I've never run into a problem, or even seen the effect you're describing, even when they're right up close to the camera. Got a pic so I can see it? [grin]

python_regious    929
No pic, sorry; I'm at work at the moment and my engine has been out of action for a while. I'll try to find one. Essentially, you end up seeing the particle from a more edge-on angle rather than looking directly at it.

python_regious    929
Actually, here's some ASCII art to demonstrate what I'm talking about.


\                    /
 \     ___          /
  \     |          /
   \              /
    \            /
     \__________/
      \        /
       \      /
        \    /
         \  /
          \/


Here's a particle (with normal) facing the near plane as your method would produce. However, it isn't facing the camera, so it would become skewed from the perspective projection.

Daaark    3553
But that's the same effect you get on everything else in the scene, including straight walls and the like. The effect is almost nonexistent... unless you're using some kind of odd FOV/aspect ratio?

python_regious    929
Quote:
Original post by Vampyre_Dark
But that's the same effect you get on everything else in the scene, including straight walls and the like. The effect is almost nonexistent... unless you're using some kind of odd FOV/aspect ratio?


Granted, but walls and the like are part of the normal scene (that is, they aren't supposed to look the same from all directions), and thus must be transformed accordingly. Billboarded particles need to always face the camera, so that they look the same from every angle. That isn't the case if you just make the particle point along the negative normal of the near plane. Still, your method of billboarding, while less accurate, would be faster; by what margin is hard to say.
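
For what it's worth, the cheaper screen-aligned variant looks roughly like this (sketch only, assuming fixed-function GL and immediate mode; a true camera-facing billboard would instead build its axes from cameraPos - particlePos for each particle):

#include <GL/gl.h>

/* Grab the camera's right/up axes once per frame from the modelview matrix. */
void get_camera_axes(GLfloat right[3], GLfloat up[3])
{
    GLfloat m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    right[0] = m[0]; right[1] = m[4]; right[2] = m[8];
    up[0]    = m[1]; up[1]    = m[5]; up[2]    = m[9];
}

/* Screen-aligned quad: every particle shares the same orientation
   (the fast "face the near plane" approach; texture coords omitted). */
void draw_particle(const GLfloat p[3], const GLfloat r[3], const GLfloat u[3], GLfloat s)
{
    glBegin(GL_QUADS);
    glVertex3f(p[0] + (-r[0] - u[0]) * s, p[1] + (-r[1] - u[1]) * s, p[2] + (-r[2] - u[2]) * s);
    glVertex3f(p[0] + ( r[0] - u[0]) * s, p[1] + ( r[1] - u[1]) * s, p[2] + ( r[2] - u[2]) * s);
    glVertex3f(p[0] + ( r[0] + u[0]) * s, p[1] + ( r[1] + u[1]) * s, p[2] + ( r[2] + u[2]) * s);
    glVertex3f(p[0] + (-r[0] + u[0]) * s, p[1] + (-r[1] + u[1]) * s, p[2] + (-r[2] + u[2]) * s);
    glEnd();
}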

Halma    146
Quote:
Original post by Vampyre_Dark
Quote:
Original post by Code-R
the "facing the screen" part ;)
If the quads are always meant to be facing the screen, simply translating to their location and drawing a quad is enough. There is nothing to rotate. They already face the screen.


Am I missing something? These are in a 3D view, not a 2D orthographic projection. Even if I want to do it the easy way and have everything parallel to the near clipping plane, I'd still have to draw them so that they're perpendicular to the look direction.


