dramatic fps drop when using transparent texture

Ahh, that's it!! I got "Microsoft Corporation" for GL_VENDOR and "GDI Generic" for the GL_RENDERER query!

Now, I'm running the app on a Dell XPS 17 laptop with an integrated graphics card and an NVIDIA GT 555M card, so maybe the problem is that I'm not choosing the right graphics card to run the app.

Now, how the heck do I do that?
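The check in question is just a couple of glGetString calls once a GL context is current - a minimal sketch (the function name here is a placeholder, not from the code above):

[source lang="cpp"]
#include <stdio.h>
#include <SDL_opengl.h> //pulls in the GL headers; must be called with a current GL context

//Print which OpenGL implementation the context actually ended up on.
//"Microsoft Corporation" / "GDI Generic" is the unaccelerated software fallback.
void printGLInfo()
{
    printf( "GL_VENDOR:   %s\n", (const char*)glGetString( GL_VENDOR ) );
    printf( "GL_RENDERER: %s\n", (const char*)glGetString( GL_RENDERER ) );
    printf( "GL_VERSION:  %s\n", (const char*)glGetString( GL_VERSION ) );
}
[/source]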
It's also worthwhile downloading updated drivers for your machine.

^ This.

I encountered the same problem when I moved to Windows 8. Apparently Windows 8 thinks its half-baked display driver is better than the drivers provided by my graphics card's manufacturer...
Hi, I have the NVIDIA Control Panel, where I can select the .exe of whichever game I want to play and choose which of the two video cards runs it.

For games and other applications like Adobe Creative Suite this works, and it's the NVIDIA graphics card that takes charge.

But I've tried adding my application's *.exe to the list, and it still won't run on the NVIDIA GT 555M! My NVIDIA notifier says there are no programs running on the NVIDIA GPU, the console spits out "GDI Generic" again, and there's no change in the app's fps, so this definitely isn't working for me.

I've searched the internet, but all I've found is people not knowing how to add a game's *.exe to the NVIDIA Control Panel list, nothing about a programmer trying to run his own exe on the NVIDIA GPU...

Any ideas?
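One thing that might be worth trying on Optimus laptops, offered only as a hedged suggestion since nobody in this thread confirmed it for this case: NVIDIA's Optimus documentation describes exporting a well-known global from the .exe itself so the driver prefers the discrete GPU, independent of any control panel profile.

[source lang="cpp"]
//Exporting this symbol asks the NVIDIA Optimus driver to run the process
//on the discrete GPU (per NVIDIA's "Optimus Rendering Policies" document).
//It has no effect on systems without Optimus.
extern "C"
{
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
}
[/source]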

Graphics card makers only optimize their hardware for VBOs these days, so it's not unreasonable to assume legacy functions can cause all sorts of performance problems.


Quake still runs well on a modern GPU (which is likely to be emulating immediate mode in the driver by filling a dynamic VBO behind the scenes) - certainly much faster than the single-digit framerates that the OP is reporting, and with much more complex geometry, translucency, etc. Yes, you're right that it's not unreasonable to make that assumption, but when actual hard data invalidates the assumption (for this class of simple scene) then you do need to reconsider.
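For what it's worth, the VBO path being talked about looks roughly like this - a sketch only, sticking to the fixed-function vertex array API so it stays close to the code in this thread (on Windows the buffer functions need an extension loader such as GLEW):

[source lang="cpp"]
//One-time setup: upload the quad into a vertex buffer object.
GLuint vbo;
GLfloat quad[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.5f, 0.5f,   -0.5f, 0.5f };
glGenBuffers( 1, &vbo );
glBindBuffer( GL_ARRAY_BUFFER, vbo );
glBufferData( GL_ARRAY_BUFFER, sizeof( quad ), quad, GL_STATIC_DRAW );

//Per frame: no glBegin/glEnd, just draw straight from the buffer.
glEnableClientState( GL_VERTEX_ARRAY );
glVertexPointer( 2, GL_FLOAT, 0, (void*)0 );
glDrawArrays( GL_QUADS, 0, 4 );
glDisableClientState( GL_VERTEX_ARRAY );
[/source]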

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I've downloaded lesson01 from the nehe.gamedev.net OpenGL tutorials, the one that simply creates a window with a rendering context, and I've noticed that its exe is being run by the NVIDIA GPU!!!

So I guess it has to do either with the way GLUT manages the window or with some initialization parameter I'm not setting properly...

What do you guys think?

So I guess it has to do either with the way GLUT manages the window or with some initialization parameter I'm not setting properly...


Definitely possible. One unfortunate thing that many GLUT examples I've seen do is create a single-buffered context, and it may be the case that your NVIDIA GPU/driver is unable to give you a hardware accelerated one. If that sounds like something you may have in your code then switching it to double-buffered (using GLUT_DOUBLE during startup and adding a call to glutSwapBuffers at the end of each frame) may be all that you need. Also worth checking the bit depth you request for your colour and depth buffers (not sure if you can control this via GLUT though).
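In GLUT terms that advice boils down to roughly the following - a minimal sketch, with placeholder window size and title:

[source lang="cpp"]
#include <GL/glut.h>

void display()
{
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    //...draw the scene here...
    glutSwapBuffers(); //only meaningful with GLUT_DOUBLE; replaces glFlush()
}

int main( int argc, char** argv )
{
    glutInit( &argc, argv );
    //Request a double-buffered RGBA framebuffer with a depth buffer.
    glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH );
    glutInitWindowSize( 640, 480 );
    glutCreateWindow( "GL test" );
    glutDisplayFunc( display );
    glutMainLoop();
    return 0;
}
[/source]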

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I have double buffering specified in GLUT.

I've spent some time porting my application to SDL + OpenGL for window management and input, and I still don't know why my GPU isn't showing up - GL_RENDERER is "GDI Generic" again!

I haven't yet tried just creating a bare window in SDL to see if it renders on the GPU; I'll try that tomorrow.

Maybe I'm using some OpenGL command that causes software rendering?
Is it possible to post your window creation code - pretty much all the code up to where you check the vendor - and hopefully we can point out the problem?
Check out https://www.facebook.com/LiquidGames for some great games made by me on the Playstation Mobile market.
Hi,
I've tested this simple SDL example and it doesn't run on my GPU either... although it initializes the video mode with SDL_OPENGL.

Example from http://lazyfoo.net/SDL_tutorials/lesson36/index.php

This is driving me nuts...

I'll post my GLUT + OpenGL window creation & initialization code soon; right now I have to leave.

[source lang="cpp"]/*This source code copyrighted by Lazy Foo' Productions (2004-2012)
and may not be redistributed without written permission.*/

//The headers
#include "SDL.h"
#include "SDL_opengl.h"

//Screen attributes
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
const int SCREEN_BPP = 32;

//The frame rate
const int FRAMES_PER_SECOND = 60;

//Event handler
SDL_Event event;

//Rendering flag
bool renderQuad = true;

//The timer
class Timer
{
private:
//The clock time when the timer started
int startTicks;

//The ticks stored when the timer was paused
int pausedTicks;

//The timer status
bool paused;
bool started;

public:
//Initializes variables
Timer();

//The various clock actions
void start();
void stop();
void pause();
void unpause();

//Gets the timer's time
int get_ticks();

//Checks the status of the timer
bool is_started();
bool is_paused();
};

bool initGL()
{
//Initialize Projection Matrix
glMatrixMode( GL_PROJECTION );
glLoadIdentity();

//Initialize Modelview Matrix
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();

//Initialize clear color
glClearColor( 0.f, 0.f, 0.f, 1.f );

//Check for error
GLenum error = glGetError();
if( error != GL_NO_ERROR )
{
printf( "Error initializing OpenGL! %s\n", gluErrorString( error ) );
return false;
}

return true;
}

bool init()
{
//Initialize SDL
if( SDL_Init( SDL_INIT_EVERYTHING ) < 0 )
{
return false;
}
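//Hedged addition (not part of the original Lazy Foo example): explicitly
//request a double-buffered pixel format with sane colour/depth sizes
//before SDL_SetVideoMode, as suggested earlier in the thread.
SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 8 );
SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 8 );
SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 8 );
SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );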

//Create Window
if( SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, SCREEN_BPP, SDL_OPENGL ) == NULL )
{
return false;
}

//Enable unicode
SDL_EnableUNICODE( SDL_TRUE );

//Initialize OpenGL
if( initGL() == false )
{
return false;
}

//Set caption
SDL_WM_SetCaption( "OpenGL Test", NULL );

return true;
}

void handleKeys( unsigned char key, int x, int y )
{
//Toggle quad
if( key == 'q' )
{
renderQuad = !renderQuad;
}
}

void update()
{

}

void render()
{
//Clear color buffer
glClear( GL_COLOR_BUFFER_BIT );

//Render quad
if( renderQuad == true )
{
glBegin( GL_QUADS );
glVertex2f( -0.5f, -0.5f );
glVertex2f( 0.5f, -0.5f );
glVertex2f( 0.5f, 0.5f );
glVertex2f( -0.5f, 0.5f );
glEnd();
}

//Update screen
SDL_GL_SwapBuffers();
}

void clean_up()
{
//Quit SDL
SDL_Quit();
}

Timer::Timer()
{
//Initialize the variables
startTicks = 0;
pausedTicks = 0;
paused = false;
started = false;
}

void Timer::start()
{
//Start the timer
started = true;

//Unpause the timer
paused = false;

//Get the current clock time
startTicks = SDL_GetTicks();
}

void Timer::stop()
{
//Stop the timer
started = false;

//Unpause the timer
paused = false;
}

void Timer::pause()
{
//If the timer is running and isn't already paused
if( ( started == true ) && ( paused == false ) )
{
//Pause the timer
paused = true;

//Calculate the paused ticks
pausedTicks = SDL_GetTicks() - startTicks;
}
}

void Timer::unpause()
{
//If the timer is paused
if( paused == true )
{
//Unpause the timer
paused = false;

//Reset the starting ticks
startTicks = SDL_GetTicks() - pausedTicks;

//Reset the paused ticks
pausedTicks = 0;
}
}

int Timer::get_ticks()
{
//If the timer is running
if( started == true )
{
//If the timer is paused
if( paused == true )
{
//Return the number of ticks when the timer was paused
return pausedTicks;
}
else
{
//Return the current time minus the start time
return SDL_GetTicks() - startTicks;
}
}

//If the timer isn't running
return 0;
}

bool Timer::is_started()
{
return started;
}

bool Timer::is_paused()
{
return paused;
}

int main( int argc, char *argv[] )
{
//Quit flag
bool quit = false;

//Initialize
if( init() == false )
{
return 1;
}

//The frame rate regulator
Timer fps;

//Wait for user exit
while( quit == false )
{
//Start the frame timer
fps.start();

//While there are events to handle
while( SDL_PollEvent( &event ) )
{
if( event.type == SDL_QUIT )
{
quit = true;
}
else if( event.type == SDL_KEYDOWN )
{
//Handle keypress with current mouse position
int x = 0, y = 0;
SDL_GetMouseState( &x, &y );
handleKeys( event.key.keysym.unicode, x, y );
}
}

//Run frame update
update();

//Render frame
render();

//Cap the frame rate
if( fps.get_ticks() < 1000 / FRAMES_PER_SECOND )
{
SDL_Delay( ( 1000 / FRAMES_PER_SECOND ) - fps.get_ticks() );
}
}

//Clean up
clean_up();

return 0;
}
[/source]
OK, one problem here is that you're using SDL_Delay to control framerate. That's internally implemented as a Sleep call and Sleep is not a good way of controlling framerate (it only specifies a minimum time to sleep for; actual time may be longer, and this will be true even if using a high resolution timer).

That's not going to fix your specific problem with not getting a hardware accelerated GL context, but it is some cleaning up you need to do.

For your GL context problem, I'll have a look over your code later on today and see if I can spot anything (assuming someone else doesn't come up with the solution before then).
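If it helps, one rough way around the Sleep granularity problem is to sleep only while there's comfortably more than a millisecond or two left in the frame, then poll the timer for the remainder - a sketch only, using the same SDL 1.2 calls as the code above:

[source lang="cpp"]
//Frame cap sketch: coarse SDL_Delay while plenty of time remains,
//then poll SDL_GetTicks for the last couple of milliseconds.
const Uint32 frameTime = 1000 / FRAMES_PER_SECOND;
Uint32 frameStart = SDL_GetTicks();

//...update(); render(); ...

Uint32 elapsed = SDL_GetTicks() - frameStart;
while( elapsed < frameTime )
{
    if( frameTime - elapsed > 2 )
    {
        SDL_Delay( 1 ); //may oversleep, but we re-check afterwards
    }
    elapsed = SDL_GetTicks() - frameStart;
}
[/source]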

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

Of course there is a driver problem there, because with a scene that light you should easily be in the 300 fps ballpark.
Having said that, blending operations will always have a big impact on the scene because they simply do more work: they have to read the current pixel colour, apply the blending function, and then write the colour back.
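For reference, that read-modify-write happens for every fragment drawn with standard alpha blending enabled:

[source lang="cpp"]
//Standard "transparent texture" blending. For each covered pixel the GPU
//reads the destination colour, combines it with the incoming fragment and
//writes it back:  dst = src * srcAlpha + dst * (1 - srcAlpha)
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );

//...draw the transparent geometry, ideally sorted back to front...

glDisable( GL_BLEND );
[/source]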

Stefano Casillo
TWITTER: [twitter]KunosStefano[/twitter]
AssettoCorsa - netKar PRO - Kunos Simulazioni

