my second step with OpenGL: improving perf.

cignox1    735
Finally I got my 3DS importer working (more or less), and now I can render some more interesting models. But the framerate drops to 10 fps with a 100-150 poly model (I own a GeForce 6600). The model I use is covered by a 512x512 texture, using UV coords. I use two lights, linear filtering (no mipmapping), perspective correction set to nicest, and so on. Here is my rendering code:
bool Render()
{
    glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Reset the modelview matrix
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Position and rotate the model
    glTranslatef(0.0f, -1.0f, -25.0f);
    glRotatef(angle, 0.0f, 1.0f, 0.0f);
    angle += 0.2f;

    glBegin(GL_TRIANGLES); // drawing using triangles
    int c = 0;

    for(int i = 0; i < numfaces; ++i)
    {
        // One normal per face
        glNormal3f(normals[i*3], normals[i*3+1], normals[i*3+2]);

        // Per-face diffuse colour (recomputed every iteration)
        float v[4];
        v[0] = matt.diffuse_rgb[0] / 256.0f;
        v[1] = matt.diffuse_rgb[1] / 256.0f;
        v[2] = matt.diffuse_rgb[2] / 256.0f;
        v[3] = 1.0f;

        // Emit the three vertices of the face
        for(int j = 0; j < 3; ++j, ++c)
        {
            glTexCoord2f(mapcoords[faceslist[c]*2], mapcoords[faceslist[c]*2+1]);
            glVertex3f(geom[faceslist[c]*3], geom[faceslist[c]*3+1], geom[faceslist[c]*3+2]);
        }
    }
    glEnd(); // finished drawing the triangles

    SDL_GL_SwapBuffers();
    frames++;
    return true;
}


I understand that these are my first steps in OpenGL (I did a few things some years ago, but just to experiment, nothing more), and I wonder how these few lines can be a problem for a card that runs Quake 4 at 40 fps :-) EDIT: I use SDL, if that matters. And with a simple cube model (without textures and so on) I reach 150 fps.

bluntman    255
I think a 150-poly model should be drawable in immediate mode (i.e. calls to glVertex, as opposed to glDrawElements) at hundreds of fps. IMO there must be something wrong in another part of your code.
I am using a GeForce FX 5600 and am drawing a 3000+ polygon model in immediate mode at 50+ fps, using exactly the same technique as you (except with no SDL).

Simian Man    1022
One optimisation I see is this line:

float v[4]; v[0] = matt.diffuse_rgb[0]/256.0; v[1] = matt.diffuse_rgb[1]/256.0; v[2] = matt.diffuse_rgb[2]/256.0; v[3] = 1.0;

Can you divide by 256 in the initialization code instead? That will make it a little faster, although it shouldn't be responsible for the huge performance loss you are seeing.
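For example, you could do the conversion once when the material is loaded. This is only a sketch: the Material struct below is hypothetical, standing in for whatever type matt has in your importer (note that dividing by 255 instead of 256 maps a full byte exactly to 1.0):

```cpp
#include <cassert>

// Hypothetical material type standing in for 'matt' in the posted code.
struct Material {
    unsigned char diffuse_rgb[3]; // 0-255, as read from the .3ds file
    float diffuse[4];             // normalized colour, filled in once at load time
};

// Convert the byte colour to floats once, instead of once per face per frame.
void normalizeDiffuse(Material& m) {
    for (int i = 0; i < 3; ++i)
        m.diffuse[i] = m.diffuse_rgb[i] / 255.0f; // 255 so that 0xFF maps to exactly 1.0
    m.diffuse[3] = 1.0f; // fully opaque
}
```

The render loop then just reads m.diffuse instead of recomputing v[4] on every iteration.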

Also, you don't need to call glClearColor each frame, and you can probably drop the call to glMatrixMode as well, since the matrix mode should still be GL_MODELVIEW from the previous frame. Just make sure both are set once somewhere in your initialization code.

cignox1    735
Thank you for the help. Of course, the code I posted is not a real project, only a small app made to test the 3DS importer. If I turn off the texture, I get 90 fps; with the texture on, I get 10. The texture is a TGA file, so BGRA (because it contains an alpha channel). The main problem is then something related to textures, but I don't know what.

RuneLancer    253
Have you updated your drivers recently? I know that with ATI cards, not having Catalyst installed makes OpenGL crawl at abnormally low speeds, particularly in immediate mode. I figure GeForce cards might behave the same way.

If you still have your default drivers, upgrade; it's worth a shot. My game runs well over 1500 FPS in immediate mode and features ~1000 textured polys, badly-optimized billboarding (read: multiple unnecessary world matrix manipulations), multiple full-screen background layers (old-school style), no poly sorting, and a few other things. If a 100-poly model crawls, something is wrong, and it almost certainly isn't your code...

Guest Anonymous Poster
Quote:
...The texture is a tga file, so BGRA...


BGRA? I don't know much about OpenGL, but that raises alarm bells for me. Try swapping the red and blue channels, then use GL_RGBA for glTexImage2D, and see if that improves your performance.
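The swap itself is cheap if you do it once at load time. A minimal sketch, assuming 32-bit BGRA pixels (4 bytes each):

```cpp
#include <cstddef>

// Swap the red and blue channels in place, so BGRA data loaded from a
// TGA file can be uploaded to glTexImage2D with GL_RGBA instead of GL_BGRA.
void swapRedBlue(unsigned char* pixels, std::size_t pixelCount) {
    for (std::size_t i = 0; i < pixelCount; ++i) {
        unsigned char* p = pixels + i * 4; // 4 bytes per BGRA pixel
        unsigned char tmp = p[0];
        p[0] = p[2];
        p[2] = tmp;
    }
}
```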

cignox1    735
Quote:
Original post by Anonymous Poster
Quote:
...The texture is a tga file, so BGRA...


BGRA? I don't know much about OpenGL but that raises alarm bells for me. Try swapping the red and the blue channels and then use GL_RGBA for glTexImage2D and see if that improves your performance.


Drivers are updated to the latest version (anyway, I installed them a month ago). I tried using RGBA, but I get exactly the same results...

wyled    127
What are your computer specs and what OS are you compiling and running this on?

BGR vs. RGB won't make any performance difference. TGA stores its data as BGR, so you should either use that format or swap the color channels in your texture data yourself.

The problem here is most likely your computer. Download some of the NeHe tutorials and tell us what FPS you get on those. If they are still slow, the issue is with your computer and not the code. If they are fast, the issue is with your code and not the computer.

cignox1    735
Quote:
Original post by wyled
What are your computer specs and what OS are you compiling and running this on?

BGR vs. RGB won't make any performance difference. TGA stores its data as BGR, so you should either use that format or swap the color channels in your texture data yourself.

The problem here is most likely your computer. Download some of the NeHe tutorials and tell us what FPS you get on those. If they are still slow, the issue is with your computer and not the code. If they are fast, the issue is with your code and not the computer.


Well, an Athlon XP 2600+, 512 MB RAM, and a GeForce 6600. I run Quake 4 and Far Cry without problems... As soon as I can, I will try to compile a NeHe example...

anist    100
Everything you've described should run much, much faster.
Make sure you are only loading the texture once, creating the texture once, etc. Other than that, it's anyone's guess without seeing the code.

Specchum    242
Okay, here are a couple of things I would try:

1) Remove the glClearColor call from the render loop. You don't need to do that every frame; besides, it's an expensive operation.

2) Try using vertex arrays or buffers to draw the polygons. They are faster than immediate-mode rendering, especially when complex geometry is involved.
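One detail if you try vertex arrays with the posted loop: it sets one normal per face with glNormal3f, while glNormalPointer needs one normal per vertex. A hypothetical helper for that expansion, assuming normals are stored three floats per face as in the posted code:

```cpp
#include <cstddef>
#include <vector>

// Expand per-face normals (3 floats per triangle) into per-vertex
// normals (9 floats per triangle) suitable for glNormalPointer().
std::vector<float> expandFaceNormals(const std::vector<float>& faceNormals) {
    std::vector<float> perVertex;
    perVertex.reserve(faceNormals.size() * 3);
    for (std::size_t f = 0; f + 2 < faceNormals.size(); f += 3) {
        for (int corner = 0; corner < 3; ++corner) { // same normal for all 3 corners
            perVertex.push_back(faceNormals[f]);
            perVertex.push_back(faceNormals[f + 1]);
            perVertex.push_back(faceNormals[f + 2]);
        }
    }
    return perVertex;
}
```

The positions and texcoords in the posted code are already indexed by faceslist, so they can go straight into glVertexPointer/glTexCoordPointer with a glDrawElements call.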

cignox1    735
Quote:
Original post by Specchum
okay, here are a couple of things I would try:

1) Remove the glClearColor call from the render loop. You don't need to do that every frame, besides it's an expensive operation.

2) Try using vertex arrays or buffers to draw the polygons. They are faster than immediate mode rendering esp. when complex geometry is involved.


I know there are many changes I could make to improve performance, but I think I should be able to play HL2 and still get a few dozen fps from my program running in the background :-)
I think OpenGL is running in software mode: glGetString(GL_RENDERER) returns "GDI Generic".

Now my question is: how can I turn hardware mode on? (I'm using SDL, by the way)

deavik    570
Quote:
Original post by cignox1
I think OpenGL is running in software mode: glGetString(GL_RENDERER) returns "GDI Generic".

Now my question is: how can I turn hardware mode on? (I'm using SDL, by the way)


You're right, that is the problem. How are you setting up the OpenGL context (I think the call is SDL_SetVideoMode or something similar)?
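As a quick sanity check after context creation, you can test the renderer string for Microsoft's software fallback. A small hypothetical helper (the glGetString(GL_RENDERER) call itself needs a live context, so only the string test is shown here):

```cpp
#include <cstring>

// Returns true when a GL_RENDERER string looks like the Windows
// software fallback ("GDI Generic"), i.e. no hardware-accelerated
// pixel format was selected for the context.
bool isSoftwareRenderer(const char* renderer) {
    return renderer != nullptr && std::strstr(renderer, "GDI Generic") != nullptr;
}
```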

cignox1    735
This is my initialization code.

//Initialize SDL and the video subsystem
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER) < 0) exit(0);

SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);      //Use at least 8 bits of red
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);    //Use at least 8 bits of green
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);     //Use at least 8 bits of blue
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);   //Use at least 16 bits for the depth buffer
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);  //Enable double buffering

screen = SDL_SetVideoMode(width, height, 32, SDL_OPENGL | SDL_HWSURFACE/* | SDL_FULLSCREEN*/);

EDIT: In addition, I used GL_BGRA when building the texture, but I've just read somewhere that this flag was only added in OpenGL 1.2. Since OpenGL 1.1 is the latest version provided on Windows, why doesn't the compiler complain about it? Am I using a newer header? But then I would have a header and library of different versions; could this be the reason?

[Edited by - cignox1 on March 29, 2006 4:46:18 AM]

Simian Man    1022
Quote:
Original post by cignox1
EDIT: In addition, I used GL_BGRA when building the texture, but I've just read somewhere that this flag was only added in OpenGL 1.2. Since OpenGL 1.1 is the latest version provided on Windows, why doesn't the compiler complain about it? Am I using a newer header? But then I would have a header and library of different versions; could this be the reason?


If you include SDL/SDL_opengl.h, it will define these extension constants for you. And since your graphics card supports the corresponding extensions, it all works.

In your initialization code, you don't need the SDL_HWSURFACE flag. It should not cause a problem, but it's unnecessary.

Also, if you run this in fullscreen, do you still get the same problems?

cignox1    735
OK, solved the problem (as usual, it was my fault): now I render a 24,000-poly model with a 1024x1024 texture at 35-40 fps without changing a line of code. (I had an opengl32.dll in the same folder as the exe, and the app was using it instead of the one provided with the drivers.)

EDIT: 60 fps in full screen :-)

[Edited by - cignox1 on March 29, 2006 9:35:14 AM]


