cignox1

My second step with OpenGL: improving performance


Finally I was able to make my 3ds importer work (more or less), and now I can render some more interesting models. But the framerate drops to 10 with a 100-150 poly model (I own a GeForce 6600). The model I use is covered by a 512x512 texture, using UV coords. I use two lights, linear filtering (no mipmapping), perspective correction set to nicest, and so on. This is my rendering code:
bool Render()
{
    // Clear the screen and the depth buffer
    glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Reset the modelview matrix
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Position and rotate the model
    glTranslatef(0.0f, -1.0f, -25.0f);
    glRotatef(angle, 0.0f, 1.0f, 0.0f);
    angle += 0.2f;

    glBegin(GL_TRIANGLES); // Drawing using triangles
    int c = 0;

    for (int i = 0; i < numfaces; ++i)
    {
        // One normal per face
        glNormal3f(normals[i*3], normals[i*3+1], normals[i*3+2]);

        // Per-face diffuse colour, converted from 0-255 on every face, every frame
        float v[4];
        v[0] = matt.diffuse_rgb[0] / 256.0f;
        v[1] = matt.diffuse_rgb[1] / 256.0f;
        v[2] = matt.diffuse_rgb[2] / 256.0f;
        v[3] = 1.0f;

        // Emit the three indexed vertices of the face
        for (int j = 0; j < 3; ++j, ++c)
        {
            glTexCoord2f(mapcoords[faceslist[c]*2], mapcoords[faceslist[c]*2+1]);
            glVertex3f(geom[faceslist[c]*3], geom[faceslist[c]*3+1], geom[faceslist[c]*3+2]);
        }
    }
    glEnd(); // Finished drawing the triangles

    SDL_GL_SwapBuffers();
    frames++;
    return true;
}


I understand that these are my first steps in OpenGL (I did a few things some years ago, but just to experiment, nothing more), and I wonder how these lines can be a problem for a card that runs Quake 4 at 40 fps :-) EDIT: I use SDL, if that matters. And with the 'cube' model (without textures and so on) I reach 150 fps.

I think a 150-poly model should be drawable in immediate mode (i.e. calls to glVertex, as opposed to glDrawElements) at hundreds of fps. IMO there must be something wrong in another part of your code.
I am using a GeForce FX 5600 and am drawing a 3000+ polygon model in immediate mode at 50+ fps, using exactly the same technique you are (except for no SDL).

One optimisation I see is this line:

float v[4]; v[0] = matt.diffuse_rgb[0]/256.0; v[1] = matt.diffuse_rgb[1]/256.0; v[2] = matt.diffuse_rgb[2]/256.0; v[3] = 1.0;

Can you do the division by 256 in your initialization code instead? That will make the loop a little faster, though it should not be responsible for the huge performance loss you are seeing.

Also, you don't need to call glClearColor every frame, and you can probably drop the call to glMatrixMode too, since the matrix mode should still be GL_MODELVIEW from the previous frame. Just make sure you do those two things once, somewhere in your initialization code.
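For example (just a sketch, assuming matt is filled in once by your importer; note that 255 is the exact divisor for 8-bit components):

// Done once after loading the material, not per face per frame.
float diffuse[4];
diffuse[0] = matt.diffuse_rgb[0] / 255.0f;
diffuse[1] = matt.diffuse_rgb[1] / 255.0f;
diffuse[2] = matt.diffuse_rgb[2] / 255.0f;
diffuse[3] = 1.0f;

// Then Render() can use the cached values directly,
// e.g. glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);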

Thank you for the help. Of course, the code I posted is not a real project, only a small app made to test the 3ds importer. If I turn the texture off, I get 90 fps. With the texture on, I get 10. The texture is a TGA file, so BGRA (because it contains an alpha channel). The main problem is then something related to textures, but I don't know what.

Have you updated your drivers recently? I know that with ATI cards, not having Catalyst installed makes OpenGL crawl, particularly in immediate mode. I figure GeForce might behave the same way.

If you still have the default drivers, upgrade. It's worth a shot: my game runs well over 1500 fps in immediate mode and features ~1000 textured polys, badly-optimized billboarding (read: multiple unnecessary world-matrix manipulations), multiple old-school background layers covering the entire screen, no poly sorting, and a few other things. If a 100-poly model crawls, something is wrong, and it almost certainly isn't your code...

Guest Anonymous Poster
Quote:
...The texture is a TGA file, so BGRA...

BGRA? I don't know much about OpenGL, but that raises alarm bells for me. Try swapping the red and blue channels and then using GL_RGBA with glTexImage2D, and see if that improves your performance.
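Something like this after loading the TGA (a sketch; pixels, width and height are whatever names your loader uses):

// Swap B and R in place so the data becomes RGBA instead of BGRA.
for (int i = 0; i < width * height; ++i)
{
    unsigned char tmp = pixels[i*4 + 0];
    pixels[i*4 + 0] = pixels[i*4 + 2];
    pixels[i*4 + 2] = tmp;
}

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);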

Quote:
Original post by Anonymous Poster
Quote:
...The texture is a TGA file, so BGRA...

BGRA? I don't know much about OpenGL, but that raises alarm bells for me. Try swapping the red and blue channels and then using GL_RGBA with glTexImage2D, and see if that improves your performance.

The drivers are updated to the latest version (in any case, I installed them a month ago). I tried using RGBA, but I get exactly the same results...

What are your computer specs, and what OS are you compiling and running this on?

BGR or RGB won't make any difference in speed. TGA is stored BGR, so you should either upload it as that or invert all the colour data in the texture.

The problem here is most likely your computer. Download some of the NeHe tutorials and tell us what fps you get on those. If they are also slow, the issue is with your computer and not the code; if they are fast, the issue is with your code and not the computer.

Quote:
Original post by wyled
What are your computer specs, and what OS are you compiling and running this on?

BGR or RGB won't make any difference in speed. TGA is stored BGR, so you should either upload it as that or invert all the colour data in the texture.

The problem here is most likely your computer. Download some of the NeHe tutorials and tell us what fps you get on those. If they are also slow, the issue is with your computer and not the code; if they are fast, the issue is with your code and not the computer.

Well, an Athlon XP 2600+, 512 MB RAM and a GeForce 6600. I run Quake 4 and Far Cry without problems... As soon as I can, I will try to compile a NeHe example...

Everything you've described should run much, much faster.
Make sure you are only loading the texture once, creating the texture once, etc. Other than that, it's anyone's guess without seeing the rest of the code.
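In other words, something like this (a sketch with made-up names): glTexImage2D belongs in the initialization, and the render loop should only bind:

GLuint texid; // created once at startup

void InitTexture(const unsigned char* pixels, int w, int h)
{
    glGenTextures(1, &texid);
    glBindTexture(GL_TEXTURE_2D, texid);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels); // expensive: do it once
}

// In Render(), only the cheap bind:
glBindTexture(GL_TEXTURE_2D, texid);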

Okay, here are a couple of things I would try:

1) Remove the glClearColor call from the render loop. You don't need to set it every frame; it's a redundant state change.

2) Try using vertex arrays or vertex buffers to draw the polygons. They are faster than immediate-mode rendering, especially when complex geometry is involved.
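For 2), a rough sketch using the arrays from the first post (assuming faceslist holds unsigned ints; note that vertex arrays want per-vertex normals, so the per-face normals would need converting):

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glVertexPointer(3, GL_FLOAT, 0, geom);
glTexCoordPointer(2, GL_FLOAT, 0, mapcoords);

// One call replaces the whole glBegin/glEnd loop.
glDrawElements(GL_TRIANGLES, numfaces * 3, GL_UNSIGNED_INT, faceslist);

glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);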

Quote:
Original post by Specchum
Okay, here are a couple of things I would try:

1) Remove the glClearColor call from the render loop. You don't need to set it every frame; it's a redundant state change.

2) Try using vertex arrays or vertex buffers to draw the polygons. They are faster than immediate-mode rendering, especially when complex geometry is involved.

I know that there are many changes I could make to improve performance, but I think I should be able to play HL2 and still get a few dozen fps from my program in the background :-)
I think OpenGL is running in software mode: glGetString(GL_RENDERER) returns "GDI Generic".

Now my question is: how can I turn hardware mode on? (I'm using SDL, by the way)
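For reference, the check itself is just this, right after creating the context:

// "GDI Generic" means Windows' software renderer is active
// instead of the card's ICD. Vendor and version are useful too.
printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));
printf("GL_VENDOR:   %s\n", glGetString(GL_VENDOR));
printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));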

Quote:
Original post by cignox1
I think OpenGL is running in software mode: glGetString(GL_RENDERER) returns "GDI Generic".

Now my question is: how can I turn hardware mode on? (I'm using SDL, by the way)

You're right, that is the problem. How are you setting up the OpenGL context (I think the call is SDL_SetVideoMode or something similar)?

This is my initialization code.

// Initialize SDL and the video and timer subsystems
if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER) < 0) exit(0);

SDL_GL_SetAttribute(SDL_GL_RED_SIZE,     8); // Use at least 8 bits of red
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE,   8); // Use at least 8 bits of green
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,    8); // Use at least 8 bits of blue
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE,  16); // Use at least 16 bits for the depth buffer
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1); // Enable double buffering

screen = SDL_SetVideoMode(width, height, 32, SDL_OPENGL | SDL_HWSURFACE /* | SDL_FULLSCREEN */);

EDIT: In addition, I used GL_BGRA when building the texture, but I've just read somewhere that this flag was only added in OpenGL 1.2. Since OpenGL 1.1 is the latest version provided on Windows, why doesn't the compiler complain about it? Am I using a newer header? But then I would have a header and a library of different versions; could this be the reason?

[Edited by - cignox1 on March 29, 2006 4:46:18 AM]

Quote:
Original post by cignox1
EDIT: In addition, I used GL_BGRA when building the texture, but I've just read somewhere that this flag was only added in OpenGL 1.2. Since OpenGL 1.1 is the latest version provided on Windows, why doesn't the compiler complain about it? Am I using a newer header? But then I would have a header and a library of different versions; could this be the reason?

If you include SDL/SDL_opengl.h, it will define these extension constants for you. And since your graphics card's driver supports the extension, it all works.

In your initialization code, you don't need the SDL_HWSURFACE flag. It shouldn't cause a problem, but it's unnecessary.

Also, if you run this in fullscreen, do you still get the same problems?
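In other words, the header only gives you the constant; whether the driver actually accepts it is a runtime question. A sketch of the usual check (the extension that introduced the format is GL_EXT_bgra; needs <string.h> for strstr):

const char* ext = (const char*)glGetString(GL_EXTENSIONS);
bool hasBGRA = ext && strstr(ext, "GL_EXT_bgra") != NULL;

// Fall back to swapping the channels on the CPU if the extension is missing.
GLenum format = hasBGRA ? GL_BGRA : GL_RGBA;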

OK, solved the problem (as usual, it was my fault): now I render a 24000-poly model with a 1024x1024 texture at 35-40 fps without changing a line of code. I had an opengl32.dll in the same folder as the exe, and the app was using it instead of the one provided with the drivers.

EDIT: 60 fps in fullscreen :-)

[Edited by - cignox1 on March 29, 2006 9:35:14 AM]
