deffer

Good graphics vs. crappy hardware


Recommended Posts

Hi. I'm developing a medium-size engine with a couple of friends. One of them is writing the AI, and to do that he has to be able to test it in an already-working environment (terrain, models). The problem is, he's got a very crappy graphics card; I believe everything is done in software there (even after we installed the newest drivers). :( From what I've noticed, fill rate is the problem, and he gets about 1-2 FPS at 320x200.

So, what can I do to speed up rendering? Reducing the geometry count didn't make much of a difference. I'm using vertex/index buffers at the moment. Would switching to immediate mode do any better? Or anything else? Any help appreciated.

/def

If you're using vertex buffers, don't switch back to immediate mode. As for the rendering speed, there could be a few things affecting it. Here are some I can think of:

- It could be using software rendering: check glGetString(GL_RENDERER) and glGetString(GL_VENDOR). Microsoft's software implementation reports "GDI Generic" as the renderer and "Microsoft Corporation" as the vendor.
- It could be the use of 3D textures (which GeForce2s can use, but *very* slowly (I know 'cause I use one)).
- It could be that you're rendering too many polys (you did say you reduced the geometry count, but even a GeForce2 starts to choke when it's fed 60k polys).
- It could be that the video card doesn't have enough RAM to store everything, in which case it'd be dumping data into system RAM and fetching it as needed (which in turn would be slowed down further if the system has to page the memory).
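The renderer-string check from the first point can be sketched like this. The strings shown for the software path are what Microsoft's generic OpenGL 1.1 implementation actually reports; note that glGetString only returns valid results once a GL context is current, so for illustration the strings are passed in as parameters:

```c
#include <string.h>

/* Returns 1 if the vendor/renderer strings indicate Microsoft's
 * software ("GDI Generic") OpenGL implementation, 0 otherwise.
 * In a real program you'd call this with
 *   vendor   = (const char *)glGetString(GL_VENDOR)
 *   renderer = (const char *)glGetString(GL_RENDERER)
 * after making a context current. */
int is_software_renderer(const char *vendor, const char *renderer)
{
    if (vendor && strstr(vendor, "Microsoft"))
        return 1;
    if (renderer && strstr(renderer, "GDI Generic"))
        return 1;
    return 0;
}
```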

There's probably a few more things to add, but considering how much time I've wasted writing this long reply, people have probably beaten me to them.

[EDIT]
Or not... :D

Do you absolutely HAVE to test it with the terrain and models for the AI? You might be able to exercise the AI routines some other way (like having a sphere search for the exit of a maze, or some other test) so he isn't stuck with the crappy rendering.

Thanks for the replies:

@Gorax:
I cannot test it on the target machine as much as I'd like to, but if he was getting 1 FPS while rendering about 20[!] triangles (with some texture blending, but still), there's not much left to interpretation.

And by all means: _I_ have a GeForce2 and it runs smoothly; my friend has a laptop with a built-in graphics chip.
Your RAM-dependent explanation is interesting, and I'll try to reduce texture usage, but on the other hand all the models render smoothly. It's the same amount of geometry and a similar technique (only the terrain uses 2-pass rendering, and it covers most of the screen).

@SumDude:
In this case it's not only the AI; it's also related to real-time response and the interface.
Still, that's the way of thinking I need: how can I render the environment as simply as possible, without much restructuring of the code?

Any chance you could find out what graphics chipset your friend is using? The model of the laptop should be good enough. That would help us determine where the problem is.

Might also be worth looking into getting him another machine to test on. It shouldn't be too hard to find someone getting rid of a T-bird machine with a GeForce 2 for around a hundred bucks.

Quote:
I cannot test it on the target machine as much as I'd like to, but if he was getting 1 FPS while rendering about 20[!] triangles (with some texture blending, but still), there's not much left to interpretation.
Either it's a software renderer, or a really old graphics chip. I'm thinking ATI RAGE II-era.

To be honest, I don't think the laptop's owner is willing to buy anything just to get his work done properly. And all I can tell about the hardware at the moment is that it's Intel _something_. I knew what it was, but it was just a string of numbers and letters, so it's easy to forget.

In my opinion, the problem can only be solved by reducing the overall graphics workload.

Here's one practical problem: I want to turn off all texturing, leaving only static per-vertex colors. How can I achieve this?
I know how to set up texturing with something like:

glActiveTextureARB( GL_TEXTURE0_ARB );
// Use the combiner instead of the default GL_MODULATE:
glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB );
// Source the RGB result from the untextured vertex color...
glTexEnvi( GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PRIMARY_COLOR_ARB );
glTexEnvi( GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR );
// ...and pass it through unchanged, ignoring the bound texture.
glTexEnvi( GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_REPLACE );

but that still involves having a texture bound. How can I do it directly?

Are you rendering any text? I have had some weird problems before with text rendering methods that run really fast on some systems but insanely slowly on others. Sometimes I thought a graphics program was slow because of its content, but it turned out I could get an acceptable frame rate simply by switching the text rendering method. I would turn off all text rendering, find an alternative way to log the FPS, and see what that does to the frame rate before you work too hard on optimising the rest of the program.

Quote:
Original post by mumpo
Are you rendering any text?
[...]
I would turn off all text rendering and find an alternative way to log the FPS and see what that does to the frame rate before you work too hard on optimising the rest of the program.


Nope, I'm not.
And I already know it's the terrain rendering that causes the slowdown.
I render it in two passes, each time blending two textures (so four in the end). That's why I'm trying to cut it down.

How do I draw simple colored vertices instead?

Generally, Intel's onboard graphics chipsets use shared SDRAM (reserving a chunk of the machine's system memory for video). In my experience this has always been notoriously slow because of the low memory bus speeds (he probably has PC100) and the usually small amount of allocated memory (I think Intel's chipsets reserve only about 8-12MB on some older models). I would try using some low-res, low-color-depth textures and keep texture swapping to a minimum.
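To put rough numbers on the "low res, low color depth" advice, here's a back-of-the-envelope helper (the function and its one-third mipmap estimate are my own sketch, not from any library):

```c
/* Rough memory footprint of a 2D texture, in bytes.
 * bytes_per_texel: 2 for 16-bit formats (e.g. RGB565), 4 for RGBA8888.
 * A full mipmap chain adds roughly one third on top of the base level. */
unsigned long texture_bytes(unsigned w, unsigned h,
                            unsigned bytes_per_texel, int mipmapped)
{
    unsigned long base = (unsigned long)w * h * bytes_per_texel;
    return mipmapped ? base + base / 3 : base;
}
```

For example, a 512x512 RGBA8888 texture is 1 MB, while a 256x256 16-bit texture is 128 KB — a big difference when the chipset only has 8-12 MB of shared memory to work with.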

Perhaps for the terrain you could create new vertex buffers with a format that contains only position and color data. Fill in the colors manually and copy the old positions out of your textured VBs. I'm not sure what the exact function calls would be, as I'm a DirectX guy. You could then throw away the textured VBs to free up the memory once the new ones are built. There might be an easier way to do it, but I don't know a whole lot about OpenGL.
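In OpenGL terms, the position-plus-color idea above might look like this (the struct layout, function name, and grey-shade-from-height mapping are illustrative assumptions; the filled array would then be fed to glVertexPointer/glColorPointer with GL_TEXTURE_2D disabled):

```c
typedef struct {
    float x, y, z;            /* position, for glVertexPointer */
    unsigned char r, g, b, a; /* color, for glColorPointer */
} ColorVertex;

/* Copy positions out of the old (x,y,z) vertex data and bake a simple
 * grey shade from the height, so the terrain stays readable without
 * textures. min_h/max_h are the terrain's height range; positions holds
 * count packed xyz triples. */
void bake_colors(ColorVertex *dst, const float *positions,
                 unsigned count, float min_h, float max_h)
{
    float range = (max_h > min_h) ? (max_h - min_h) : 1.0f;
    for (unsigned i = 0; i < count; ++i) {
        dst[i].x = positions[3*i + 0];
        dst[i].y = positions[3*i + 1];
        dst[i].z = positions[3*i + 2];
        /* Map height to a shade between 55 (low) and 255 (high). */
        float t = (dst[i].y - min_h) / range;
        unsigned char shade = (unsigned char)(55.0f + t * 200.0f);
        dst[i].r = dst[i].g = dst[i].b = shade;
        dst[i].a = 255;
    }
}
```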

Thanks, I managed to render colored vertices without texturing.

Whether it actually helps, I'll find out tomorrow, if I catch the guy.
