fsforall

OpenGL in Video Card's Memory

This topic is 4056 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

Hello, I am having a problem setting up OpenGL. I need to draw with OpenGL to a memory bitmap. To do that, I set the flag PFD_DRAW_TO_BITMAP. While this works, it is not the solution that I need, because everything is rendered in software and is very slow. I need a faster method, so I am thinking of rendering on the video card. After some research I found that pbuffers and framebuffer objects can do this. My problem is setting them up.

With pbuffers, when I make the bitmap's context current (because I want to draw the pbuffer onto it), this call fails:

a = wglBindTexImageARB(pbuffer.hBuffer, WGL_FRONT_LEFT_ARB);

a is always 0. The code itself works: if I use a window's HDC directly, it does not return 0. However, I really need it working on the bitmap, as that will probably be my only context.

With framebuffer objects the same problem occurs. Before making the window current, I use this:

MakeCurrentBMP();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

Again, the bitmap comes back white, even though all I want is to render the cube onto the bitmap at the end, in software. I can provide the source code if needed. Can anybody help me? Are there any examples of what I am trying to do? Thank you very much in advance.

If I understand correctly, you are trying to:
- render to a texture;
- get the texture into an array in RAM (your "memory bitmap").

Render to a texture:

The right method in OpenGL on modern graphics cards is the Frame Buffer Object (FBO): create a new FBO, then attach a texture to it. There are plenty of demos; have a look at the FBO article here at GameDev. Note that you need neither a depth texture nor MRT; one FBO with a single texture attached is enough.
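A minimal creation sequence could look roughly like this. This is only a sketch: it assumes a GL context is already current and that the GL_EXT_framebuffer_object entry points have been loaded (e.g. via wglGetProcAddress); the 512x512 size is just an example.

```c
/* Sketch: create a texture and attach it to an FBO (GL_EXT_framebuffer_object).
   Assumes a current GL context and loaded EXT entry points. */
GLuint tex, fbo;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* Default min filter uses mipmaps, which we don't generate, so set LINEAR. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 512, 512, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* handle error: the format combination is not supported */
}

/* ... render here; bind FBO 0 afterwards to return to the window ... */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
```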


Then you need to read the texture values back (FBO contents are stored in GPU memory):

- the inefficient method is to fetch the values with glGetTexImage;
- the efficient method uses the Pixel Buffer Object (PBO) extension: glReadPixels() into a PBO. Examples can be found at gpgpu.org, for instance (see "fast transfers").
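The PBO path can be outlined as below. Again just a sketch, using the GL 2.1 core names (older drivers expose the same calls with _ARB suffixes), and assuming the entry points are loaded:

```c
/* Sketch: asynchronous readback through a pixel buffer object. */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, 512 * 512 * 3, NULL, GL_STREAM_READ);

/* With a PACK buffer bound, glReadPixels writes into the PBO instead of
   client memory; the call returns without stalling the CPU. */
glReadPixels(0, 0, 512, 512, GL_RGB, GL_UNSIGNED_BYTE, 0);

/* Later (ideally a frame later), map the buffer to access the data. */
void *data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (data) {
    /* copy / convert pixels here */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```

The win comes from the asynchrony: if you read back the previous frame's pixels while the current frame renders, the transfer overlaps with GPU work instead of stalling it.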

Hello,

Thank you very much for your help. Here is exactly what I am trying to do:

I already have the HDC of a bitmap in RAM and I need to draw on it with OpenGL. I can use the PFD_DRAW_TO_BITMAP flag, but then OpenGL runs with the software renderer, which makes it slow. To get around this, I need to create a pbuffer or an FBO.

To create either of them, I need an OpenGL window initialized with the PFD_DRAW_TO_WINDOW flag. I can't do that, since I only have a bitmap. I tried initializing a pbuffer and an FBO with the PFD_DRAW_TO_BITMAP flag, but it doesn't work: no extensions can be found, even though I have already created the pixel format and the context for that bitmap. I can draw on it with the OpenGL software renderer just fine.

To sum this up: I cannot use glBindFramebufferEXT in my situation.

Maybe I don't need to go as far as drawing on that HDC directly; I think I can bypass that by just reading the RGB values back from the bitmap in the video card's memory.

Here is what I need, just in case I was not clear enough:

I already have an HDC for a bitmap created in system memory.
I need to draw on it with OpenGL.
I need the hardware renderer, because I really need performance; I can draw on the bitmap with the software renderer, but it is too slow.
I cannot create or use another window.

Is there anything that I can do ? Can anybody help me ? Thank you very much in advance.

Your only choice is to create a window and make it invisible.
Initialize GL for this window, then create an FBO and render into it. That will give you hardware acceleration. Don't read from the backbuffer: it will likely contain junk while the window is not visible. Use an FBO!
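The hidden-window trick looks roughly like this (a sketch only: window-class registration and all error handling omitted, and "MyGLClass" is a made-up class name):

```c
/* Sketch: create an invisible window just to obtain a hardware GL context. */
HINSTANCE hInstance = GetModuleHandle(NULL);
HWND hwnd = CreateWindow("MyGLClass", "", WS_POPUP,   /* note: no WS_VISIBLE */
                         0, 0, 512, 512, NULL, NULL, hInstance, NULL);
HDC hdc = GetDC(hwnd);

PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

HGLRC hglrc = wglCreateContext(hdc);
wglMakeCurrent(hdc, hglrc);
/* Never call ShowWindow(hwnd, SW_SHOW); render into an FBO instead. */
```

Because the pixel format is PFD_DRAW_TO_WINDOW, the driver gives you a hardware context, and all extensions (including the FBO ones) become available.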

Thank you very much for your help. I was able to get it working with an FBO, reading the data back with glGetTexImage.

I need to blit the data onto a 16-bit bitmap (I already have the function to convert RGB to 16 bits).

Now my problem is the blitting itself. An FBO can only be created at 512x512 (or smaller, but it must be a power of two), while my bitmap can have any size, like 300x250. I can draw it bigger and then scale it down.

Where should I look for a function or class to do the blitting? What is this called; is there a special term for it? Is there any information available about what I am trying to do?

I do not want to use the GDI blitter: it does not work in this case and it is too slow.
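For the software side, a nearest-neighbour blit that scales an RGB888 source into an arbitrarily sized 16-bit (RGB565) destination is only a few lines. A sketch of that CPU fallback (function names are made up; both buffers are assumed tightly packed):

```c
#include <stdint.h>
#include <stddef.h>

/* Pack 8-bit R, G, B components into one 16-bit RGB565 pixel. */
uint16_t pack565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Nearest-neighbour scale + convert: src is RGB888 (sw x sh),
   dst is RGB565 (dw x dh); sizes may differ, e.g. 512x512 -> 300x250. */
void blit_scale_565(const uint8_t *src, int sw, int sh,
                    uint16_t *dst, int dw, int dh)
{
    for (int y = 0; y < dh; ++y) {
        int sy = y * sh / dh;               /* source row for this dest row */
        for (int x = 0; x < dw; ++x) {
            int sx = x * sw / dw;           /* source column */
            const uint8_t *p = src + (size_t)(sy * sw + sx) * 3;
            dst[y * dw + x] = pack565(p[0], p[1], p[2]);
        }
    }
}
```

With optimization enabled this is usually fast enough for a 300x250 target every frame; if profiling says otherwise, the 565 packing can be accelerated with lookup tables or SIMD.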

Thank you.

If I understand this correctly and you want to downscale a texture, then you could just set up an orthographic projection (glOrtho or gluOrtho2D) and render a textured quad:

glMatrixMode( GL_PROJECTION );
glLoadIdentity();
gluOrtho2D( 0.0, fWidth, 0.0, fHeight );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();

glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, tex );

glBegin( GL_QUADS );
glTexCoord2f( 0.0f, 0.0f ); glVertex2f( 0.0f, 0.0f );
glTexCoord2f( 1.0f, 0.0f ); glVertex2f( fWidth, 0.0f );
glTexCoord2f( 1.0f, 1.0f ); glVertex2f( fWidth, fHeight );
glTexCoord2f( 0.0f, 1.0f ); glVertex2f( 0.0f, fHeight );
glEnd();

Thanks.

No, I don't want to do that.

I have a bitmap in RAM and I need to draw on it. If I initialize OpenGL on it, it only runs in software mode. That is why I create a hidden window and draw into an FBO.

I get the data from the FBO and need to draw it onto that bitmap. That is easy once I have the RGB colours (and I do have them). What I need is a fast blitter, one that also takes care of the FBO being 512x512 while the bitmap can be any size. I do this every frame, which is why it needs to be fast.

I hope that you understand what I am trying to do.

Well, that's what my previous post should do for you :-). That short snippet is probably the simplest and fastest blitter you can get your hands on in OpenGL. It does two things:

1) It can scale any texture to any size;
2) It can blit any texture onto another texture (the latter being bound to an FBO).

So, if your bitmap is used as input ("blit source"), load it as a texture. It shouldn't matter that you can only use power-of-two (POT) sized textures, since you can downscale them to any non-POT size, and quickly. Meaning: if your input is non-POT, resize it offline or at load time.

Conversely, if your bitmap is used as output ("blit destination"), just allocate a texture that is large enough to contain the source (it can be larger!) and crop only the part you need afterwards. So, if you need a 640x480 canvas, allocate the FBO texture/renderbuffer at 1024x512 and ignore everything outside the (0,0)-(640,480) viewport.
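The padding math is trivial; a small helper (hypothetical name) rounds a dimension up to the next power of two:

```c
/* Round v up to the next power of two, e.g. 640 -> 1024, 480 -> 512. */
unsigned next_pot(unsigned v)
{
    unsigned p = 1;
    while (p < v)
        p <<= 1;
    return p;
}
```

Allocate the FBO texture at next_pot(width) x next_pot(height), call glViewport(0, 0, width, height) before rendering, and read back only that sub-rectangle.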

Let me explain more about the bitmap.

It is created in RAM by a game; I just wrote a .dll that can access its data. The data is stored in an array at 16 bits per pixel. I cannot initialize hardware OpenGL on that bitmap, as it lives in RAM, so rendering to it directly is very slow.

OK, so I downscale the texture after drawing it into the FBO. Then I need to get the data out somehow and copy every pixel to its location in RAM.

Just getting the data out with glReadPixels takes far too long.

glGetTexImage does not work at all:

glBindTexture(GL_TEXTURE_2D, tex);
glGetTexImage(GL_TEXTURE_2D,0,GL_RGB,GL_UNSIGNED_BYTE,text);

Nothing is written to text.

Please let me know if there is a way to make this faster; the blitter is not the problem, just the two functions I mentioned.
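One thing worth trying, assuming the target really is RGB565 and the driver supports the GL 1.2 packed pixel types (core since OpenGL 1.2, from GL_EXT_packed_pixels): read the pixels back already packed as 16-bit values, so the CPU-side conversion disappears entirely. A sketch:

```c
/* Sketch: read the FBO contents back directly as 16-bit RGB565 pixels. */
unsigned short pixels[512 * 512];

glPixelStorei(GL_PACK_ALIGNMENT, 2);   /* rows of 16-bit pixels */
glReadPixels(0, 0, 512, 512, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels);
```

And if glGetTexImage appears to write nothing, it is worth checking glGetError() immediately after the call; an error there usually points at wrong texture or FBO binding state at the time of the call.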

Thanks

With current-generation hardware and APIs, there is no fast solution as far as I'm aware. There is a significant cost to breaking the optimal flow between CPU and GPU, for example by retrieving data from the GPU with glReadPixels. As you have already discovered, it is considered bad practice to access GPU resources from the CPU (and vice versa). Even more so, the hardware cannot draw directly into a software framebuffer.
