SwapBuffer trouble

Started by Greg K
24 comments, last by Greg K 20 years, 8 months ago
In my program, SwapBuffers() takes 100 milliseconds. I know that OpenGL queues up commands and executes them all in one go, but I am only rendering at most 140 quads per frame! And not only that, they are 2D quads! Does anyone have an explanation/tip on how to speed this up?
-Greg
Reverie Entertainment
OpenGL doesn't queue up commands. SwapBuffers just waits until all the commands are complete. Post your code.

PS: What do you mean 2D quads?!

"C lets you shoot yourself in the foot rather easily. C++ allows you to reuse the bullet!"
Somebody told me something about glFlush() and glFinish(), and that they should be called before SwapBuffers().

If that doesn't help, check whether you really have TWO buffers, not one (can't swap -> copying -> slow) or three (needs special accelerated functions -> copying -> slow).
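A sketch of how to check that at runtime (not from the original post; the function name is mine, and it assumes a valid OpenGL context is already current):

```c
/* Sketch: ask OpenGL whether the current pixel format is actually
   double-buffered. Assumes a GL context is current on this thread. */
#include <GL/gl.h>
#include <stdio.h>

void check_double_buffer(void)
{
    GLboolean db = GL_FALSE;
    glGetBooleanv(GL_DOUBLEBUFFER, &db);

    if (db)
        printf("double-buffered: SwapBuffers can flip\n");
    else
        printf("single-buffered: SwapBuffers will copy, which is slow\n");
}
```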
When I was younger, I used to solve problems with my AK-47. Times have changed. I must use something much more effective, killing, percise, pernicious, efficient, lethal and operative. C++ is my choice.
static PIXELFORMATDESCRIPTOR pfd =      // pfd Tells Windows How We Want Things To Be
{
    sizeof(PIXELFORMATDESCRIPTOR),      // Size Of This Pixel Format Descriptor
    1,                                  // Version Number
    PFD_DRAW_TO_WINDOW |                // Format Must Support Window
    PFD_SUPPORT_OPENGL |                // Format Must Support OpenGL
    PFD_DOUBLEBUFFER,                   // Must Support Double Buffering
    PFD_TYPE_RGBA,                      // Request An RGBA Format
    16,                                 // Select Our Color Depth
    0, 0, 0, 0, 0, 0,                   // Color Bits Ignored
    0,                                  // No Alpha Buffer
    0,                                  // Shift Bit Ignored
    0,                                  // No Accumulation Buffer
    0, 0, 0, 0,                         // Accumulation Bits Ignored
    16,                                 // 16Bit Z-Buffer (Depth Buffer)
    0,                                  // No Stencil Buffer
    0,                                  // No Auxiliary Buffer
    PFD_MAIN_PLANE,                     // Main Drawing Layer
    0,                                  // Reserved
    0, 0, 0                             // Layer Masks Ignored
};

ShowWindow(hWnd, SW_SHOW);
SetForegroundWindow(hWnd);

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);

glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 800, 600, 0, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBindTexture(GL_TEXTURE_2D, nTexID);
glBegin(GL_QUADS);
    glTexCoord2s(0, 0); glVertex2s(nX,          nY);
    glTexCoord2s(1, 0); glVertex2s(nX + nWidth, nY);
    glTexCoord2s(1, 1); glVertex2s(nX + nWidth, nY - nHeight);
    glTexCoord2s(0, 1); glVertex2s(nX,          nY - nHeight);
glEnd();

SwapBuffers(hDC);


Here are some code tidbits. Hope this helps...

[edited by - Greg K on August 12, 2003 9:25:04 AM]
huh no idea...

My brain is almost dead now, but try not to pass _short_ arguments; OpenGL computes internally with floats.


Yes, I understand. This hasn't any chance of surviving...



--edit

AND TRY RUNNING WITH MORE COLORS, 16-bit is nearly palettized.
nothing? again?

--edit2

Have you sent your pfd to Windows? I don't see that in your snippets.
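For reference, a minimal sketch of sending the pfd to Windows and checking what comes back (the function name is mine; it assumes the hDC and pfd from the snippets above):

```c
/* Sketch: apply the pixel format descriptor and check whether Windows
   handed back a generic (software-only) format. A format flagged
   PFD_GENERIC_FORMAT without PFD_GENERIC_ACCELERATED is the Microsoft
   software implementation, which makes SwapBuffers slow. */
#include <windows.h>

BOOL set_and_check_pixel_format(HDC hDC, PIXELFORMATDESCRIPTOR *pfd)
{
    int pf = ChoosePixelFormat(hDC, pfd);
    if (pf == 0 || !SetPixelFormat(hDC, pf, pfd))
        return FALSE;                   /* no usable format at all */

    PIXELFORMATDESCRIPTOR actual;
    DescribePixelFormat(hDC, pf, sizeof(actual), &actual);

    if ((actual.dwFlags & PFD_GENERIC_FORMAT) &&
        !(actual.dwFlags & PFD_GENERIC_ACCELERATED))
        return FALSE;                   /* unaccelerated: expect slow swaps */

    return TRUE;
}
```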

[edited by - exa_einstein on August 12, 2003 9:35:43 AM]
Of course I sent it to Windows; it wouldn't run otherwise, would it? Those are just snippets. I don't see why I shouldn't use shorts: there is an OpenGL function that accepts them, and maybe the driver can do some compression when sending them to video hardware. Also, how would 32 bits speed anything up? I would think it would slow things down.
-Greg

[edited by - Greg K on August 12, 2003 9:44:22 AM]
...it was only an idea. You never know what Windows wants...
yeah, I am at the point where I will try anything.
-Greg
glFlush();        // kicks off any queued OpenGL commands
SwapBuffers(hDC); // note: the Win32 call is SwapBuffers, not glSwapBuffers
// or
glFinish();       // waits until all OpenGL commands have finished
SwapBuffers(hDC);
// ...or:
Try checking your video card (if you are using some tweaker, check it too), and look for threads/processes/viruses that could slow it down. And then... RESTART WINDOWS.


...I hate smilies.
But my favourite smiley is...

<:-=| SADDAM
Check your profiler again (or do it from memory). Exactly WHAT is taking 100 milliseconds? Just the SwapBuffers() call, or the entire render loop?
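One way to separate the two (a sketch, not from the original post; assumes a Win32 hDC and a current GL context):

```c
/* Sketch: time just the SwapBuffers call with the high-resolution
   counter, so it isn't lumped in with the rest of the render loop.
   The glFinish() first drains pending GL work, so the measurement
   covers only the swap itself. */
#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

void timed_swap(HDC hDC)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    glFinish();                         /* finish queued GL work first */
    QueryPerformanceCounter(&t0);
    SwapBuffers(hDC);
    QueryPerformanceCounter(&t1);

    printf("SwapBuffers alone: %.2f ms\n",
           (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart);
}
```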

My best guess anyway is that you are running in software mode. Query for GL_VENDOR. If it says your card's vendor, then something fishy is going on somewhere. If it says Microsoft, then you are in software mode and that's why you are slow.
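A sketch of acting on that check. The helper name is mine; the "Microsoft" / "GDI Generic" strings are what the Windows software implementation reports for GL_VENDOR and GL_RENDERER:

```c
#include <string.h>

/* Returns 1 if the vendor/renderer strings look like the Windows
   software OpenGL implementation ("Microsoft" vendor or the
   "GDI Generic" renderer), 0 otherwise. */
int is_software_renderer(const char *vendor, const char *renderer)
{
    if (vendor && strstr(vendor, "Microsoft"))
        return 1;
    if (renderer && strstr(renderer, "GDI Generic"))
        return 1;
    return 0;
}

/* With a live GL context you would call it as:
   is_software_renderer((const char *)glGetString(GL_VENDOR),
                        (const char *)glGetString(GL_RENDERER)); */
```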

Sander Maréchal
[Lone Wolves Game Development][RoboBlast][Articles][GD Emporium][Webdesign][E-mail]


GSACP: GameDev Society Against Crap Posting
To join: Put these lines in your signature and don't post crap!


This topic is closed to new replies.
