OpenGL as a blitter


How do I use OpenGL as a blitter? I know OpenGL, but not very well. Should I just show a textured full-screen quad: upload the texture, then draw, then flush?

Which command exactly uploads a RAM pixel array into a VRAM texture? glTexImage2D? And does the quad drawing method not really matter? (For example, could I use glBegin, four vertices, glEnd? Do I then need glFlush to immediately blit it to the screen?)

Can such a blit be faster than the WinAPI blitter? (The WinAPI blit is somewhat slow on my old machine here.)

And what about the DirectX blitter? Is it like the OpenGL blitter, or is there a difference? Can a DX blit be faster than a WinAPI or OpenGL blit?

The modern way of drawing things with the GPU is simply to draw a set of textured quads. There is no reason to flush after each quad (it would be extremely slow), and it is usually easier to just redraw everything each frame. There are several ways to draw a quad in OpenGL: as two triangles, with the GL_QUADS primitive (in legacy versions of OpenGL), with point sprites, with a geometry shader, with instancing, etc. In a simple 2D game, the method used is probably not that important; a minimal legacy-style quad is sketched below.
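For illustration only, a minimal fixed-function sketch of such a full-screen quad might look like this; 'tex' is assumed to be a texture object created and filled elsewhere, and the projection/modelview matrices are assumed to be identity so the -1..+1 vertices cover the whole viewport:

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);           /* texture created and uploaded earlier */

glBegin(GL_QUADS);                           /* legacy immediate mode, as mentioned above */
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();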

P.S. If you really want to learn OpenGL, I suggest starting with the modern API. The programmable pipeline is now common in the mobile and Web world too.
Apatriarca is right about modern OpenGL use practices, but to answer your actual question:

Yes, you've got the gist of it. Upload your software-generated buffer into a texture with the correct format specifiers, then render that texture as a full-screen quad. You generally don't need glFlush; just use your platform API's present call, which will implicitly flush before swapping buffers (GL doesn't do this itself, so you need SDL/GLUT/GLFW/SFML/GLX/AGL/WGL/whatever).
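As a rough sketch of that per-frame path, assuming a Win32/WGL setup ('tex', 'pixels', 'width', 'height' and 'hdc' here are placeholders for your own objects):

glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_BGRA, GL_UNSIGNED_BYTE, pixels);   /* upload the new software frame;
                                                         use GL_RGBA if that matches your pixel layout */

/* ... draw the full-screen textured quad here ... */

SwapBuffers(hdc);   /* presents the frame and implicitly flushes; no explicit glFlush needed */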

You can relatively easily out-perform ancient Windows APIs if you know what you're doing. You'll naturally do significantly better with a real GPU rendering engine, of course.

Sean Middleditch – Game Systems Engineer – Join my team!

Moving to OpenGL forum


Alright, thanks a lot for that.

Can you maybe answer (I am eager to know): which API call moves the RAM pixel array into VRAM? Is it glTexImage2D? If I want to blit, do I just need to call it each frame?

Also, this flushing business is still not clear to me. I previously used pure WinAPI + OpenGL (an old version), so do I need to flush, do nothing, or do something else?

This is also a question about DirectX. I've never used it, so I have only a very weak idea of it. If I wanted to do this in DirectX, would I do it like here in OpenGL, by uploading the texture onto a quad, or is there another way? (I heard something about pointers to surfaces in DirectX, as if it were something different, but I am not sure.) Can such a DirectX blit outperform an OpenGL blit?

Yes, glTexImage2D copies a pixel array into the video RAM or other memory controlled by the video driver. You can also use glTexSubImage2D to update just the pixel data without reallocating the texture. Since you're dealing with 2D, I doubt it really matters which approach you use.
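For example (a hedged sketch; 'tex', 'width', 'height' and 'pixels' are placeholders), the usual pattern is to allocate the texture once and then only update its contents each frame:

/* once, at startup: create the texture and allocate its storage */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  /* without this the texture is
                                                                       incomplete (default expects mipmaps) */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);                      /* NULL = allocate only */

/* every frame: replace the pixel data without reallocating */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);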

I think the point being made about flushing is you don't have to worry about it.

Flipping your texture onto the screen will not be your bottleneck whether you choose DX or OpenGL. If you do want to know more about DX, you're best off starting a new thread over in the DX forum, since this thread is in the OpenGL forum now. The approaches explained in this thread are the current best practice, though: if you choose something like an old DirectDraw surface to do 2D, it will almost certainly perform worse, since that API is deprecated and only maintained for backwards compatibility.

Yes, glTexImage2D copies a pixel array into the video RAM or other memory controlled by the video driver.

Actually, that is only partially true.

The driver does copy the data into its own memory, but that is still system RAM. Only when the data is actually needed is it copied into GPU memory.

In short, glTexImage2D prepares the data to be sent to the GPU and tells GL that it should be sent, but the actual transfer happens immediately before the texture is used.

This is a kind of driver optimization. Neither glFlush nor glFinish will force the upload if the texture data is never used.

This topic is closed to new replies.
