dragonmagi

OpenGL rotations about viewpoint


This is a rotation performance question.

Consider a game where the viewpoint moves with the avatar, say it's tethered somewhere behind the avatar. The avatar (and viewpoint) are turning around on the spot.

I've been assuming that the normal way this is done is to apply a rotation to the view vector, say every frame, and render as you go. That is, you would not rotate the scene objects in the opposite direction, because that would involve a lot more operations.

Now, someone was saying that effectively (on a modern graphics engine and hardware) the same number of operations would be performed (on the vertices in the scene) anyway, and therefore there would be no performance difference.

Has anyone looked at the performance of these two options, e.g. using Ogre or OpenGL on modern graphics hardware? Any comments at all? Thanks.


Quote:

This is a rotation performance question.

Consider a game where the viewpoint moves with the avatar, say it's tethered somewhere behind the avatar. The avatar (and viewpoint) are turning around on the spot.

I've been assuming that the normal way this is done is to apply a rotation to the view vector, say every frame, and render as you go. That is, you would not rotate the scene objects in the opposite direction, because that would involve a lot more operations.

Now, someone was saying that effectively (on a modern graphics engine and hardware) the same number of operations would be performed (on the vertices in the scene) anyway, and therefore there would be no performance difference.


There would be no performance difference in hardware or software since the additional transform would be included in the modelview matrix.

Quote:

Has anyone looked at the performance of these two options, e.g. using Ogre or OpenGL on modern graphics hardware? Any comments at all?


The only performance hit is that a third-person camera changes position when the avatar turns, while a first-person camera only rotates. So any visibility culling, LOD selection, texture loading, etc. that depends on camera position would have to be re-evaluated.


"There would be no performance difference in hardware or software since the additional transform would be included in the modelview matrix."

Is that because the modelview matrix is only applied to the view vectors and not to all the scene objects? That's basically what I was trying to get at.

Quote:
Original post by dragonmagi
Is that because the modelview matrix is only applied to the view vectors and not to all the scene objects? That's basically what I was trying to get at.
In OpenGL at least, the modelview matrix is applied to all rendered geometry. AFAIK, the whole 'transform the world or transform the camera' issue is completely irrelevant; there's nothing you could do to cause any 'more' or 'fewer' matrix operations to be performed on the rendered geometry.

Any difference in how you apply transformations is purely conceptual, e.g. are you moving the object forward 10 units, or the camera back 10 units? For the most part, OpenGL doesn't care what's in the modelview matrix; it just blindly applies it to the geometry. (I say 'for the most part' because, I think, some implementations take shortcuts depending on what named transform functions you call.)

Now this is as far as interfacing with an API goes. For CPU-side processes such as lighting and collision detection, it can matter what you transform and what you don't. But if this is for a camera implementation, I feel pretty safe in saying that how you implement or conceptualize it is unlikely to have any effect on performance (if I'm mistaken about that, I'm sure someone will point it out).

"In OpenGL at least, the modelview matrix is applied to all rendered geometry. AFAIK, the whole 'transform the world or transform the camera' issue is completely irrelevant; there's nothing you could do to cause any 'more' or 'fewer' matrix operations to be performed on the rendered geometry.

Any difference in how you apply transformations is purely conceptual, e.g. are you moving the object forward 10 units, or the camera back 10 units? For the most part, OpenGL doesn't care what's in the modelview matrix; it just blindly applies it to the geometry. (I say 'for the most part' because, I think, some implementations take shortcuts depending on what named transform functions you call.)"

Ok, that concurs with what others are saying. I just wanted to be clear about it.

"
Now this is as far as interfacing with an API goes. For CPU-side processes such as lighting and collision detection, it can matter what you transform and what you don't. But if this is for a camera implementation, I feel pretty safe in saying that how you implement or conceptualize it is unlikely to have any effect on performance (if I'm mistaken about that, I'm sure someone will point it out)."

understood :)
