Vanshi

OpenGL Need help for better performance

Recommended Posts

Vanshi    122
Hi guys, I am working on a project that involves importing textured 3ds Max objects in the .3ds format into OpenGL. I have been able to do that, but I have observed that it takes a lot of time to render the scene, and loading just 4 buildings (.3ds format) takes up to 250 MB. The framerate is very low too. Can anyone give me tips/tricks on how to improve performance and reduce memory usage? Would appreciate your help a lot. Thanks.

comedypedro    134
Hi. You'll need to give much more information than that before anyone can help. Firstly, can you tell us how many vertices/triangles are in your models? Secondly, what are your hardware specs and which drivers have you got? 250 MB is way too much; there's something very wrong there. For improving performance, have a look at vertex buffer objects, or even display lists if you're not already using them. Post more details or some code and you'll get help.

irreversible    2860
You're probably trying to load four huge buildings into RAM, which implies they contain a lot of polygons. At least tell us what size the .3ds files are (or better yet, as comedypedro said, how many polygons each building consists of). You're probably trying to render several million polys (judging by your RAM usage and framerate), which - unless you're doing things very inefficiently - would clog up even the newest hardware.

Vanshi    122
I am sorry for that really low amount of information, but being a newbie I didn't know what to ask. I did find out the polygon count for my models:
1: 32,668 polygons -- file is 830 KB
2: 59,610 polygons -- 1.54 MB
3: 4,058 polygons -- 116 KB
4: 224 polygons -- 11.7 KB

So I guess from the earlier replies that I should try to reduce the polygon count of the .3ds objects. Any other suggestions are welcome. Thanks for your time.

irreversible    2860
Those polygon counts seem pretty reasonable for current hardware and shouldn't, under any circumstances, take up 250 megabytes of RAM. Check your code for memory leaks and be sure to free all temporarily allocated objects. Can you also post your system specs? For instance, I wouldn't expect much over 4-5 FPS on a GeForce 2 with close to 100,000 polygons on screen.

comedypedro    134
Agreed, that amount of polys shouldn't be a problem for a decently specced machine. How do you know that 250 MB of memory is being used? Or did you maybe mean 2.5 MB? 830 KB + 1.54 MB + 116 KB + 11.7 KB comes to about 2.5 MB, so I'm assuming that's what you meant. If that's giving your machine trouble, then like irreversible says, you're doing something wrong. Let us know your specs and the relevant code and we'll see what the problem is.

deavik    570
Tip for better performance: use a display list. Material changes get pre-compiled into the list, and that translates to much higher frame rates (my framerate roughly quadrupled when I switched to a display list!). This has nothing to do with memory, though. After you compile the display list, delete all the mesh info you have loaded (vertices, materials, etc.).

There are also other options like VBOs or vertex arrays which you could look into.

Vanshi    122
Thank you for your replies, everyone.
My machine specs are as follows:
AMD 3000+
MSI RS-480 M2 IL motherboard
Onboard video (ATI Xpress chipset)
1 GB RAM

The code can be found in the Apron Tutorials for loading 3ds objects into OpenGL:
http://www.morrowland.com/apron/tut_gl.php
It's the 3DS loader, with just a few changes in the code for placing multiple objects into the scene and some camera movement with the keyboard.
I found the memory usage with the Task Manager.

comedypedro    134
Well, I'm not really an expert on hardware, but I reckon the problem, or part of the problem, lies with your hardware. The CPU is fine, but for graphics you're probably better off with a dedicated video card - even something cheap like an FX5200 would do the job. Like I said, I'm not really up on hardware, so maybe someone else can confirm this.

I was able to optimise my 3ds code a bit by using temporary pointers to avoid excessive dereferencing, i.e. if you have something like

for (iCurrentVertex = 0; iCurrentVertex < NumVerts; iCurrentVertex++)
{
    pModel->pObject[iCurrentObject]->pVertexList[iCurrentVertex] = some_value;
}

you can speed things up with

float* pVertexList = pModel->pObject[iCurrentObject]->pVertexList;
for (iCurrentVertex = 0; iCurrentVertex < NumVerts; iCurrentVertex++)
{
    pVertexList[iCurrentVertex] = some_value;
}

Hope this helps. I'd also consider using MS3D over 3DS; it's a much more user-friendly format - check the other tutorial on that link you gave.

Vanshi    122
Hi guys,
thanks for all the suggestions. I will try them out and let you know what the results are, but I can't use the MS3D format for my project; it has to be .3ds. Once again, thank you for all your concern and time.

Vanshi    122
Hi guys,
I need help on another issue. Somebody suggested going for objects (buildings) in multiple resolutions: as I move away from a building, its level of detail should decrease. This may help my frame rate go up. Any suggestions on how to accomplish this would be really welcome. Thanks for all the help earlier; I'm working on the display list implementation right now.

comedypedro    134
That technique is used quite often in games/graphics; it's usually called dynamic level of detail (LOD) or something similar. One way you could do this is to simply have different .3ds models for the various levels of detail you require. For a simple example: keep the current model for close-up viewing, have a reduced-detail model for medium range, and maybe just draw a box at long range. You can experiment to find out what works best; the idea is that the user doesn't realise what is happening. In 3ds Max there's an option somewhere to reduce the number of faces/vertices in a model, which might help you create the simpler versions. Then you simply calculate the distance from the camera to each model and decide which version to draw. Like I said, experimentation is needed. You could also look into occlusion or frustum culling, which basically means you only send geometry to be rendered if it's actually in front of the camera. That can involve a bit of maths, but most games/graphics programs use the technique too.
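The distance check that picks a detail level can be sketched as a small helper. This is just an illustration; the function name and the range thresholds are made up and would need tuning per scene:

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical LOD thresholds - tune these for your scene.
const float kMediumRange = 50.0f;  // beyond this, use the medium-detail model
const float kFarRange    = 200.0f; // beyond this, just draw a box

// Returns which version to draw: 0 = full detail, 1 = medium detail, 2 = box.
std::size_t PickLOD(float camX, float camY, float camZ,
                    float objX, float objY, float objZ)
{
    float dx = objX - camX, dy = objY - camY, dz = objZ - camZ;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (dist < kMediumRange) return 0;
    if (dist < kFarRange)    return 1;
    return 2;
}
```

Each frame you'd call something like this once per building and then draw the display list for whichever model index comes back.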

Vanshi    122
Thank you for the suggestion, but loading two or more versions of each object would aggravate my memory usage problem. I have already been trying that pointer replacement and it has given good results: the frame rate does increase a little (by 8-9 frames), but then the memory usage increases. So I was hoping you guys could help me with more advanced ways of implementing this technique. Thank you for your time.

comedypedro    134
Well, you have 1 GB of RAM, so I don't see why you're worried about memory!
The only other thing I can think of is to keep the one model but only draw the sides that are facing the camera. This would involve implementing some form of geometry management such as a quad-tree or BSP tree. You might get away with splitting each building into four separate models (one for each wall, assuming they're basic four-sided buildings) and only drawing what will actually be seen. But like I said, you've more than enough RAM, so why worry about the extra memory? My PC has only half a gig and it runs Doom 3 and Quake 4!
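The "only draw walls facing the camera" idea boils down to a dot-product test between a wall's outward normal and the direction from the wall to the camera. A minimal sketch, with invented names, assuming you know each wall's position and outward normal:

```cpp
// Returns true if a wall with outward normal (nx, ny, nz), positioned at
// (px, py, pz), faces a camera at (cx, cy, cz).
bool WallFacesCamera(float nx, float ny, float nz,
                     float px, float py, float pz,
                     float cx, float cy, float cz)
{
    // Vector from the wall to the camera.
    float vx = cx - px, vy = cy - py, vz = cz - pz;
    // The wall faces the camera when its normal points towards the camera,
    // i.e. the dot product is positive.
    return nx * vx + ny * vy + nz * vz > 0.0f;
}
```

For a closed box-shaped building this skips roughly half the walls; back-face culling in the driver does something similar per triangle, but skipping a whole wall model avoids submitting its geometry at all.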

kburkhart84    3182
Some people will disagree with me, but you don't really need to be as careful with memory as we used to be. These days the cost usually falls on the rendering, i.e. the amount you're drawing. You're very unlikely to run out of memory unless you have a low-end video card and use high-resolution textures in an uncompressed format.
That is not to say you should waste memory just because you can, but on modern computers, trading memory for speed is almost always worthwhile, unless it's a lot of memory for next to no speed gain.

Vanshi    122
Thank you for the suggestion. Just like you said, it is not the RAM I am worried about. The problem is that my project is academic (it counts towards grades) and I have to do the rendering in real time, on machines with less memory than mine. I will try the quad-tree or BSP tree techniques you suggested. The FPS is really low: just about 15 FPS with nothing but a basic flat textured surface and two billboarded trees, and it goes down to 10 when I add just 4 buildings.
Also, can you guys suggest good online tutorials for billboarding? Thank you once again for your time.
And about the quad-tree method: can it be used for .3ds objects too?

CRACK123    235
Well, I may have missed a few things, so if this has already been said, feel free to ignore it. Firstly, make sure you are rendering in hardware (I assume part of the problem is that you're running in software). Even in software, when I was writing my rendering engine I was able to get a decent 30 FPS with a lot of polygons, though of course I used a few occlusion culling tricks, so it's definitely feasible. I think the cost of an FX 5200 in India at the moment is around Rs 5000-6000, depending on where you buy it. If you can afford it, buy it; otherwise go with a few tricks like quad-tree/octree culling to reduce the polygons thrown at the card.

comedypedro    134
Here you go: NeHe Billboarding Tutorial

To answer your other question: yes, you can store any geometry in a quad-tree.

To be honest, I don't think it's very fair of your lecturers/university/college to expect you to do 3D graphics on low-spec machines. If you want to post your rendering code, I'll have a look and see if I can suggest how to speed it up.
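For what it's worth, the cheap way to billboard a tree is cylindrical billboarding: rotate the quad around the Y axis so it faces the camera, which reduces to one atan2 of the horizontal offset. A sketch with made-up names, assuming a Y-up coordinate system and an angle fed to something like glRotatef:

```cpp
#include <cmath>

// Yaw angle in degrees that turns a quad at (objX, objZ) to face a camera
// at (camX, camZ). Height (Y) is ignored - cylindrical billboarding.
float BillboardYawDegrees(float camX, float camZ, float objX, float objZ)
{
    float angleRad = std::atan2(camX - objX, camZ - objZ);
    return angleRad * 180.0f / 3.14159265358979f;
}
```

With the camera straight down the +Z axis from the tree, the angle is 0; with the camera off to +X, it is 90 degrees. Full spherical billboarding (also tilting towards the camera) is what the NeHe tutorial above covers.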

Vanshi    122
Thank you for your offer, comedypedro, but how do I post the code online? I don't have my own website. I could send it to you if you give me your email ID or something. I really appreciate your help. Thank you.

Vanshi    122
Hi guys,
thanks for the help. I have managed to solve the memory problem I was having: I was not deleting the textures when redrawing the window, and without that the memory usage kept increasing. Now that the problem has been solved, the FPS has increased to 24-26. I am looking for more ways to increase performance. Thank you all for the help.
