I am at the point where I am adding a first-person camera to my game with mouse look. However, I am also going to have an isometric view where you use the mouse to rotate the camera around the player.
Before I code out all of the movement and everything, I want to know,
Which is faster:
gluLookAt(player1->x,player1->y,player1->z+1.5,player1->x+cos(player1->zRot*3.14/180),player1->y-sin(player1->zRot*3.14/180),player1->z+1.5,0,0,1);
or
glRotatef(xRot,1.0,0.0,0.0);
glRotatef(yRot,0.0,1.0,0.0);
glScale....
It seems that one command would be faster; however, it contains cos() and sin(), which are software operations that take a lot of work, while the rotates are hardware processes.
I have seen both online in tutorials.
Which one is better and faster?
When programming I like to stick to one concept, and this has bugged me since I started OpenGL.
Thanks in advance!!
[Edited by - coderWalker on December 22, 2010 12:09:26 PM]
gluLookAt() or glRotatef()
I don't think you should care about speed in this case. This is hardly going to take any processing power, from either the CPU or the GPU, since it's driven by user input, and as we all know - humans are damn slow.
As in, this is never going to be, say, per-frame - and even if it is, it's hardly any work.
And just to annoy you, glRotate* isn't supported anymore in 3.1 (or was it 3.2?) and higher contexts.
All the transformation functions are deprecated and should be done on the CPU :P
While we're on the subject of deprecation, gluLookAt is also deprecated, if that matters to you.
Then how should I go about having a first-person camera, making it move and rotate?
I need to stop using deprecated functions, and stay up to date. I'm sure the new equivalent is better and most likely faster.
You're supposed to implement your own matrices on the CPU side, and then upload them as uniforms to your shaders.
If you want to use a handy library, I like GLM for my matrix library. You can look at the functions on the gluLookAt doc page if you want to create an equivalent function on the cpu.
Neither of them are faster; glRotate is actually not a hardware process - it's only the final multiplication of position by MVP that's hardware accelerated, the actual matrix computations are all done in software, even on a hardware T&L card. This is a common misconception.
Besides, this is something that you should only be doing once per frame. If you're worried about the performance of this then you're micro-optimizing something that isn't a bottleneck; you would be seriously better off forgetting about it, concentrating on getting it working first, and then focusing on areas such as vertex submission, fragment processing, fillrate and hidden-surface removal as optimization candidates.
So my answer is to use whichever one you're most comfortable with (and don't forget the point that gluLookAt is deprecated...)
I hate to stop using what I know, but I want to do it correctly.
I have mostly used these tutorials to learn OpenGL:
http://www.videotutorialsrock.com/
http://www.lazyfoo.net/SDL_tutorials/
http://nehe.gamedev.net/
They all use deprecated functions.
Anyone know of any OpenGL tutorials that teach correct practices without any deprecated functions?
Quote: Original post by coderWalker
Anyone know of any OpenGL tutorials that teach correct practices without any deprecated functions?

There may be some, but since the type of math you're talking about (building transforms and so on) isn't directly related to any specific API, I'd expect it might not be covered in tutorials that focus on OpenGL itself.
There are lots of references on 3-d math, although sometimes it can be difficult to separate the wheat from the chaff and find what you're looking for. If you need help with something in particular though, you can always ask here on the forums (I'd recommend Math & Physics for math questions that aren't specifically related to OpenGL).
I don't understand - I thought OpenGL was supposed to do all that for you?
Create 3d Shapes
Rotate and Translate the shapes
Make a 2d image to draw to the screen
all using the GPU.
Are you saying some of this stuff is no longer supported in the library and must be slowly done using my own code on the CPU?
I thought the point of OpenGL was to do everything related to 3d all the way down to matrix multiplication on the GPU?
I was asking if anyone knew of any non-deprecated OpenGL tutorials. They don't have to be specifically related to gluLookAt().
Quote:Original post by coderWalker
I don't understand - I thought OpenGL was supposed to do all that for you?
Create 3d Shapes
Rotate and Translate the shapes
Make a 2d image to draw to the screen
all using the GPU.
Are you saying some of this stuff is no longer supported in the library and must be slowly done using my own code on the CPU?
I thought the point of OpenGL was to do everything related to 3d all the way down to matrix multiplication on the GPU?
I was asking if anyone knew of any non-deprecated OpenGL tutorials. They don't have to be specifically related to gluLookAt().
You've got it mostly right, but you're blurring some concepts together. When you say OpenGL "rotates and translates the shapes", that's true, but I don't think it means what you think it does.
The GPU is built to feed vertices through transformation matrices to get their transformed positions. When you call glRotate it's not doing anything on the GPU, but rather just building the matrix through which your vertices will eventually be fed. Whether you use OpenGL or not, I'm sure this isn't actually happening on the GPU, because there's no need for it.
The CPU is not "slow", it's just "different". The GPU is built to feed thousands of vertices through simple mathematical paths in parallel. Note that this has nothing to do with building a rotation matrix, which is not a parallel operation and is quite trivial computationally. There's no reason why you would need to do this on the GPU when the CPU can do it in a nanosecond.
In the old API the GPU driver took care of some of this stuff for you because it was very simple, and it fit exactly with what the GPU was doing, i.e. a vertex always gets processed by the model, view, and projection matrices and is then spit out on the screen. Now with shaders it is much more flexible, and you might want to use all kinds of different matrices. Because the old fixed matrix functions don't really support this kind of arbitrary operation, they just decided to clean out the API and streamline everything, allowing you to upload exactly what you want into the shader. It's more complex for the developer, but provides much more power and flexibility.
That's why you specify the matrices yourself now.
Unfortunately, to really answer your question, I don't know of tutorials that only cover the new OpenGL; personally I just picked up bits and pieces of it as I went along and phased out all my deprecated code.
Basically, if you just learn shaders (without the built-in special variables) and VBOs you'll be 95% of the way there. You can search for GLSL tutorials and probably find lots of them, or ask questions here about things you don't understand.
This topic is closed to new replies.