
OpenGL Offscreen Render for all graphics cards


Recommended Posts

I've been really struggling to find a way to use hardware rendering while rendering off screen, and then have it work on any machine, regardless of the graphics card manufacturer, on both Vista and XP. I realize that pbuffers are generally the way to go, but they don't work on some ATI cards and they aren't available on older versions of OpenGL (older than 1.4, I believe).

My current method is this:
1. Register a new window class
2. Create a new window with CreateWindowEx and keep the window invisible
3. wglCreateContext on the window
4. Render
5. glReadPixels the pixels back

This works on most implementations that I have tried, but as always, there are exceptions. While debugging on an NVIDIA card installed on an x64 version of XP, for some reason calling glReadPixels just gets a screenshot of whatever is at the window's location. Also, if I made the window visible but it was covered or partially covered by another window, then glReadPixels would show the other window, so once again, it's like a screenshot. Anyone have any solutions or suggestions?

Thanks!
Dave
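For reference, a minimal sketch of the five steps above (Win32/WGL, ANSI entry points, error handling omitted; the class name "OffscreenClass" and function name are placeholders, not code from the actual project):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Sketch of the hidden-window approach: register a class, create an
// invisible window, attach a GL context, render, read the pixels back.
void RenderOffscreen(int w, int h, unsigned char* pixels)
{
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = DefWindowProcA;
    wc.hInstance     = GetModuleHandleA(NULL);
    wc.lpszClassName = "OffscreenClass";
    RegisterClassA(&wc);

    // No WS_VISIBLE flag, so the window is never shown.
    HWND wnd = CreateWindowExA(0, "OffscreenClass", "", WS_POPUP,
                               0, 0, w, h, NULL, NULL, wc.hInstance, NULL);

    HDC dc = GetDC(wnd);
    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA, 32 };
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc);
    wglMakeCurrent(dc, rc);

    // ... render here ...

    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```

As the replies below point out, reading back a hidden window's framebuffer like this is subject to the pixel ownership test, so the result is not guaranteed.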

If your requirement of "hardware rendering" only means you don't want to fall back to the years-old default implementation from Microsoft, then I can strongly recommend Mesa3D's off-screen renderer. It's almost the ultimate off-screen rendering solution and works great.

Quote:
Original post by DJHoltkamp
While debugging on an NVIDIA card installed on an x64 version of XP, for some reason calling glReadPixels just gets a screenshot of whatever is at the window's location. Also, if I made the window visible but it was covered or partially covered by another window, then glReadPixels would show the other window, so once again, it's like a screenshot. Anyone have any solutions or suggestions?
Thanks!
Dave


There are two screen buffers (the front buffer and the back buffer). If you are reading the front buffer, the contents can depend on the other windows on screen. However, if you are reading the back buffer, other windows shouldn't get in its way, so there should be no such problem.

For a pbuffer (an off-screen drawable), there is no way for other windows to get in its way either, so no such problem there.

Quote:
Original post by DJHoltkamp
I realize that pbuffers are generally the way to go, but this doesn't work on some ATI cards and it doesn't work on older versions of OpenGL (Older than 1.4 I believe).

My Current Method is this:
1. Create a new window class
2. Create a new window with CreateWindowEx and make window invisible
3. wglCreateContext the window
4. Render
5. glReadPixels the pixels back

This has nothing to do with pbuffers. You just create a second rendering window, and read back its framebuffer. Doing this is undefined, as you have noticed, due to the pixel ownership test. It's pretty much guaranteed to fail if the window is hidden. This has nothing to do with back versus front buffer, BTW. Even reading back the backbuffer can fail if the window is partially invisible.

So you can in fact use real pbuffers, and you won't get this problem. However, pbuffers are really, really obsolete. Just use FBOs, like everybody else. They work on pretty much every card out there, except for maybe some Intel stuff with crappy drivers. But there's not much you can do in the latter case anyway.
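For what it's worth, FBO setup takes only a handful of calls. A sketch using the GL_EXT_framebuffer_object entry points (the most widely supported variant at the time), assuming a current GL context and the extension's function pointers already loaded, e.g. via GLEW; the dimensions and function name are placeholders:

```cpp
// Create an FBO with a color texture and a depth renderbuffer,
// render into it, and read the result back with glReadPixels.
// Because nothing is on screen, the pixel ownership test never applies.
void CreateOffscreenFBO(int width, int height)
{
    GLuint fbo, colorTex, depthRb;

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenRenderbuffersEXT(1, &depthRb);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                             width, height);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depthRb);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
            != GL_FRAMEBUFFER_COMPLETE_EXT) {
        /* handle incomplete framebuffer */
    }

    // ... render to the FBO, then:
    // glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```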

Thanks for the replies!

Brother Bob:
I may look into the Mesa3D thing, but I doubt this is the way I'm going to go, since I don't want to include other libraries and I need this to work on almost all Windows/Mac machines.

ma_hty:
I may look into the front/back buffer deal and see if there is a way to swap and then read the back... I had never thought of that, but it's an interesting idea. glReadPixels does not seem to have a way to select the back buffer. How would you go about doing this?

Yann L:
Unfortunately, I do need this to work on all the crappy Intel integrated cards. Surprisingly, these cards seem to be working better than the NVIDIA and ATI cards with this method; it works flawlessly with the window hidden. In fact, one of the engineers at Adobe said this method of creating a window off screen is how some of their OpenGL offscreen rendering was accomplished, but perhaps they did not know what they were talking about...

Any other ideas or suggestions?

Quote:
Original post by DJHoltkamp
Brother Bob:
I may look into the Mesa3D thing, but I doubt this is the way I'm going to go, since I don't want to include other libraries and I need this to work on almost all Windows/Mac machines.

If you don't want to use Mesa, you have to use some other OpenGL interface instead, for example WGL on Windows. Mesa doesn't add anything; it replaces, so if you go with Mesa, you don't need WGL. And platform support is not an issue for Mesa either. So neither of those can really be a reason not to use it.

Quote:
Original post by DJHoltkamp
I may look into the front/back buffer deal and see if there is a way to swap and then read the back... I had never thought of that, but it's an interesting idea. glReadPixels does not seem to have a way to select the back buffer. How would you go about doing this?

glReadBuffer. But the operation is undefined on overlapped or hidden windows.
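In code, that's a two-line sketch (with the same caveat: the result is undefined if the window is occluded or hidden):

```cpp
// Explicitly select the back buffer as the read source, then read it.
// For a double-buffered context GL_BACK is the initial read buffer anyway,
// but being explicit makes the intent clear.
unsigned char* pixels = new unsigned char[width * height * 4];
glReadBuffer(GL_BACK);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```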

Quote:

Unfortunately, I do need this to work on all the crappy Intel integrated cards. Surprisingly, these cards seem to be working better than the NVIDIA and ATI cards with this method; it works flawlessly with the window hidden. In fact, one of the engineers at Adobe said this method of creating a window off screen is how some of their OpenGL offscreen rendering was accomplished, but perhaps they did not know what they were talking about...

Either they meant something different (like a real pbuffer), or they don't know what they're talking about. The method you are using is undefined. This means that it might work - or it might not, depending on manufacturer, driver revision, or even from one PC to another. This is almost random, and completely out of your control. The fact that it works on your Intel chip doesn't mean it will work on another. Look here. Note that with 'nonvisible window' in the last paragraph, they don't mean a hidden window, but an abstract surface. This is not accelerated under Windows.

Quote:

Any other ideas or suggestions?

Have you tried FBOs before dismissing them? And if they don't work, why don't you use actual pbuffers? They were meant for cases such as this one...

Brother Bob:
You can see from my first post that I am using WGL, because of wglCreateContext.

Yann L:
One of my contacts from adobe said the following:
"However, your approach does match some internal code we have to bind an offscreen
window to OpenGL, with the window being a member of a static class that
is initialized on PF_Cmd_GLOBAL_SETUP, and kept in memory"

I guess this offscreen window could be an FBO?

One reason I was staying away from pbuffers is that I know some crappy OpenGL implementations on ATI cards do not include this extension. Do you know in what version of OpenGL FBOs came into existence? Why do you say that it won't work on Intels?


Quote:
Original post by DJHoltkamp
...
One reason I was staying away from pbuffers is that I know some crappy OpenGL implementations on ATI cards do not include this extension. Do you know in what version of OpenGL FBOs came into existence? Why do you say that it won't work on Intels?


Pbuffer support comes from your GPU driver, not from the core OpenGL headers. The entry point functions are not even declared in your opengl header file; they are only there if your GPU supports the extension. You can use the GLEW library to test whether your GPU supports it, and if so, access the entry points through GLEW. (You can download the GLEW library from http://glew.sourceforge.net )

Therefore, you should not worry about which version of OpenGL you have, but rather about the model of your GPU and its driver. A well-written GPU driver would provide software emulation in case hardware support is not present (hopefully...).

By the way, pbuffers are difficult to use because their initialization requires platform-dependent API calls. Please use FBOs instead.
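Under the hood, these runtime queries boil down to searching the space-separated extension string returned by glGetString(GL_EXTENSIONS). A minimal standalone checker (the helper name is made up for illustration; GLEW does this work for you):

```cpp
#include <cstring>

// Returns true if `name` appears as a complete token in the
// space-separated extension list. A plain substring match is not enough:
// "GL_EXT_framebuffer_object" must not match inside
// "GL_EXT_framebuffer_object_blit".
bool hasExtension(const char* extList, const char* name)
{
    const size_t len = std::strlen(name);
    const char* p = extList;
    while ((p = std::strstr(p, name)) != NULL) {
        const bool startOk = (p == extList) || (p[-1] == ' ');
        const bool endOk   = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk)
            return true;
        p += len;   // keep scanning past this partial match
    }
    return false;
}
```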

Thanks for all the replies guys,

After doing some research on what all you guys are saying, I decided to go with the following approach for offscreen rendering on all graphics cards.

if (FBOs exist on this card)
    Initialize FBOs
else if (pbuffers exist)
    Initialize pbuffers
else
    Go get a nicer graphics card
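That fallback chain can be expressed as a small selection function (a sketch; the two capability flags would come from runtime extension queries, e.g. via GLEW):

```cpp
enum OffscreenPath { PATH_FBO, PATH_PBUFFER, PATH_UNSUPPORTED };

// Pick the best available offscreen rendering path, preferring FBOs
// over pbuffers, and giving up if neither extension is present.
OffscreenPath chooseOffscreenPath(bool hasFBO, bool hasPBuffer)
{
    if (hasFBO)
        return PATH_FBO;
    if (hasPBuffer)
        return PATH_PBUFFER;
    return PATH_UNSUPPORTED;   // "go get a nicer graphics card"
}
```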

How would you create an FBO, though? To initialize an FBO, there has to be a GL context, and as far as I know, you can only create one with a pbuffer or a window.

I plan on still using an offscreen window for this part. I can still attach a WGL context to it (although I know it will not render correctly on many cards). Once I have a GL context, I will then create the FBO for my offscreen render.

Anyone know of any problems with this?

Is there a better way?


David

Quote:
Original post by DJHoltkamp
I plan on still using an offscreen window for this part. I can still attach a WGL context to it (although I know it will not render correctly on many cards). Once I have a GL context, I will then create the FBO for my offscreen render.

Anyone know of any problems with this?

Is there a better way?


There is no better way. FBOs exist for offscreen rendering, and you must create a GL context before creating an FBO or making any GL function calls.
