OpenGL: Requesting a list of useful extensions

Recommended Posts

n00body
I'm still learning my way around OpenGL, and trying to distill the big list of extensions. Right now I'm working with a 6800GT, which gives me about 71 potential extensions (counting only ARB and EXT) in OpenGL 2.0.3. Could everyone make a list of extensions you find useful, or use a lot of the time? Also, could you give a brief description of what each does? On a side note, does anyone know which ones have been deprecated? For example, I'd heard GL_ARB_texture_non_power_of_two is no longer needed since it's part of the core, but I'm not sure. Thanks for any help you can provide. ;)

zedz
I'm only using one extension, VBO (at most, the number of extensions you'd typically need, you could count on one hand). You'll find that if you use OpenGL 2.0 (with GLSL), a lot of the old extensions aren't needed.

b2b3
Some useful extensions:

Vertex Buffer Objects
Extension: GL_ARB_vertex_buffer_object
This was promoted to core in OpenGL 1.5.
This extension allows you to store your data (e.g. vertices, normals, texture coordinates) in a fast memory that is directly accessible to the GPU. This basically means that you can render much faster than you would be able to if the data was stored in system memory.
This is the preferred method of storing data for rendering nowadays. I haven't seen code that doesn't use VBOs (except in very simple examples) in quite a while.
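A minimal sketch of the pattern described above, using the GL 1.5-era fixed-function pipeline (assumes GLEW, or something equivalent, has loaded the entry points and that a context exists):

```c
#include <GL/glew.h>

/* Upload three vertices (interleaved x,y,z) into GPU-accessible memory. */
GLuint upload_triangle(void)
{
    static const GLfloat verts[] = {
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    };
    GLuint vbo;
    glGenBuffers(1, &vbo);                        /* create the buffer object  */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);           /* make it current           */
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts),  /* copy data into the buffer */
                 verts, GL_STATIC_DRAW);
    return vbo;
}

/* At draw time, source vertex data from the VBO instead of client memory. */
void draw_triangle(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);  /* offset into the VBO */
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

With a VBO bound, the last argument of glVertexPointer becomes a byte offset into the buffer rather than a client-memory pointer.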

GLSL Shading Language
Extension: GL_ARB_shading_language_100
Promoted to core in GL 2.0.
The presence of this extension tells you whether your hardware supports GLSL.

Extension: GL_ARB_shader_objects
Promoted to core in OpenGL 2.0.
This extension defines all entry points you need when working with GLSL shaders. It allows you to create, manage and use GLSL shaders.
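The create/manage/use cycle looks roughly like this with the GL 2.0 core entry points (the ARB_shader_objects names differ slightly, e.g. glCreateShaderObjectARB; GLEW and a context are assumed):

```c
#include <GL/glew.h>

/* Compile a vertex and fragment shader and link them into a program. */
GLuint build_program(const char *vs_src, const char *fs_src)
{
    GLint ok;

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);
    glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);
    if (!ok) { /* fetch details with glGetShaderInfoLog */ }

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (!ok) { /* fetch details with glGetProgramInfoLog */ }

    return prog;  /* activate with glUseProgram(prog) */
}
```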

Floating-Point Textures
Extension: GL_ARB_texture_float
As the name suggests, this allows you to create and use textures with a floating-point data format (colour components can be 16- or 32-bit floats). This is needed for a lot of modern techniques like HDR, deferred rendering, etc.
When using fp textures you will probably also need GL_ARB_color_buffer_float, which adds various pixel format options that control clamping of the colour components.
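Allocating such a texture and disabling fragment clamping might look like this (GL_RGBA16F_ARB comes from ARB_texture_float, glClampColorARB from ARB_color_buffer_float; GLEW and a context are assumed):

```c
#include <GL/glew.h>

/* Allocate a half-float RGBA texture suitable for HDR render targets. */
void make_hdr_texture(GLuint *tex, int w, int h)
{
    glGenTextures(1, tex);
    glBindTexture(GL_TEXTURE_2D, *tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* internal format GL_RGBA16F_ARB: 16-bit float per component */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, w, h, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    /* keep fragment output unclamped so values above 1.0 survive */
    glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);
}
```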

Rectangular and non-power-of-two textures
Extensions:
GL_ARB_texture_rectangle
GL_ARB_texture_non_power_of_two
These extensions allow you to use non-power-of-two textures, which also do not have to be square. The NPOT extension was promoted to core in GL 2.0.

Multiple render targets
Extension: GL_ARB_draw_buffers
This was promoted to core in GL 2.0.
This extension allows you to write output from your shaders to multiple buffers at once. For example, you can write the colour of a pixel to one output and its normal to another. Very useful for many advanced techniques.
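Selecting two targets might be sketched like this (assumes an FBO with textures already attached at the two colour attachment points, in EXT_framebuffer_object terms):

```c
#include <GL/glew.h>

/* Route gl_FragData[0] and gl_FragData[1] to two colour attachments. */
void enable_two_targets(void)
{
    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0_EXT,
                             GL_COLOR_ATTACHMENT1_EXT };
    glDrawBuffers(2, bufs);
}

/* In the fragment shader (GLSL 1.10-era), the outputs then look like:
 *   gl_FragData[0] = vec4(colour, 1.0);
 *   gl_FragData[1] = vec4(normal * 0.5 + 0.5, 1.0);
 */
```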

Multitexturing
Extension: GL_ARB_multitexture
Promoted to core in OpenGL 1.3 (old :). This allows you to use multiple textures simultaneously (d'oh).
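Binding two textures to separate units is as simple as (GLEW and a context assumed; the names `base` and `lightmap` are just illustrative):

```c
#include <GL/glew.h>

/* Bind a base texture and a lightmap to texture units 0 and 1. */
void bind_base_and_lightmap(GLuint base, GLuint lightmap)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, base);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, lightmap);
    /* a GLSL shader then samples both through two sampler2D uniforms */
}
```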

WGL Extensions List
Extension: WGL_ARB_extensions_string
Windows only. This extension allows you to query the list of WGL extensions supported by your drivers and hardware. If you are using some sort of extension loader (e.g. GLEW), you will not need it.

Pixel Buffer Objects
Extension: WGL_ARB_pbuffer
Windows extension only. This allows you to create offscreen buffer to which you can output your rendering calls. Pixel buffer is basically an "invisible" equivalent of the window.

Render to texture
Extension: WGL_ARB_render_texture
Windows extension only. As the name suggests, this allows you to "redirect" your output to a texture. This texture can later be used just as any normal texture.

===========

Of course, this list is not exhaustive. All ARB extensions can be found in official extension registry. Newer extensions defined by NVIDIA can be found on a special page here. Similar page for ATI extensions can be found here.

Extensions that have been promoted to the core can be found at the end of the OpenGL specification. Newest specs can be found on this page.

n00body
Thanks for the suggestions so far. I plan to use GL_EXT_framebuffer_object because of all the good things I'd heard about it, and all the bad things I'd heard about pbuffers.

One little question. If you have NPOT (particularly in GL2.0) then why do you need texture rectangle?

_the_phantom_
Quote:
Original post by b2b3
Pixel Buffer Objects
Extension: WGL_ARB_pbuffer
Windows extension only. This allows you to create offscreen buffer to which you can output your rendering calls. Pixel buffer is basically an "invisible" equivalent of the window.


No, wrong.

WGL_ARB_pbuffer is for a pbuffer; this is NOT the same as a PBO or pixel buffer object.

PBOs are a method of performing async transfers to and from the GPU, and buffer-to-buffer copies on the GPU (such as textures to vertex buffers).

And frankly, unless you need a separate context, you won't want to be using pbuffers; instead you want Frame Buffer Objects (FBOs), which are the newer and much improved method of rendering to a texture.

There is also a short series of articles on FBOs written by some really top guy [grin]
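The FBO route to render-to-texture can be sketched like this with the EXT_framebuffer_object entry points (GLEW and a context assumed; the colour texture is created beforehand):

```c
#include <GL/glew.h>

/* Create an FBO that renders into an existing 2D colour texture. */
GLuint make_fbo(GLuint colour_tex)
{
    GLuint fbo;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, colour_tex, 0);
    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
            != GL_FRAMEBUFFER_COMPLETE_EXT) {
        /* incomplete: fall back or report the error */
    }
    /* draw calls issued now land in colour_tex */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  /* back to the window */
    return fbo;
}
```

No context switch is involved, which is the main practical win over pbuffers.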

n00body
Okay, a couple more questions:
1.) What is sRGB, and why is it significant to OpenGL?

2.) PBOs. What is the most common use of them? Is there a place with a decent explanation? Are they to textures what VBOs are to geometry?

3.) How well supported is GL_EXT_texture_compression_s3tc across GPUs?

_the_phantom_
I should probably look at sRGB at some point; however, to answer the others:

2) PBOs are used for async copies of data to and from the graphics card (and from one buffer type to another). Textures already live in video RAM; PBOs just help with the uploading and downloading (so that the various transfer functions don't block CPU execution).

3) The texture compression extensions have been supported for many generations now; I think even the original GeForce cards had support for it.
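The async transfer pattern from (2) can be sketched like this for a readback, using ARB_pixel_buffer_object tokens (GLEW and a context assumed). The key point is that glReadPixels into a bound PACK buffer returns immediately; the copy has completed by the time the buffer is mapped:

```c
#include <GL/glew.h>

/* Read the framebuffer back through a PBO without stalling the CPU. */
void async_readback(int w, int h)
{
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER_ARB, w * h * 4, NULL, GL_STREAM_READ);

    /* last argument is an offset into the PBO, not a client pointer */
    glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, (void *)0);

    /* ... do other CPU work here, then: */
    const void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
    if (pixels) {
        /* use the pixel data */
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER_ARB);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0);
    glDeleteBuffers(1, &pbo);
}
```

Uploads work the same way in reverse, through the UNPACK binding point.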

Yann L
Quote:

What is sRGB, and why is it significant to OpenGL?

sRGB is a standardized non-linear colour space (IEC 61966-2-1). In a normal RGB8 image, the colour distribution is completely linear: each of the 255 possible increments is exactly the same size over the entire dark-to-white range, 1/255.

However, the colour response of a display device (often the final target for all OpenGL-rendered content) is not linear. This is why gamma correction is usually applied to the final rendering.

A linear colour space such as RGB8 thus wastes a lot of values for brightness ranges the human eye cannot even distinguish due to the non-linearity of the screen, while lacking precision in other ranges. The solution would be to go to RGB16 or RGB16F, but this comes with a significant memory overhead.

The new sRGB formats try to find a middle ground. They distribute the dynamic ranges of the image in a way that matches the response curve of a screen more closely than plain RGB8. The net result is a higher quality of smooth gradients (ie. less banding) while the memory consumption stays the same.


[Edited by - Yann L on March 21, 2007 5:52:02 PM]

b2b3
Quote:
Original post by phantom
Quote:
Original post by b2b3
Pixel Buffer Objects
Extension: WGL_ARB_pbuffer
Windows extension only. This allows you to create offscreen buffer to which you can output your rendering calls. Pixel buffer is basically an "invisible" equivalent of the window.


No, wrong.

WGL_ARB_pbuffer is for a pbuffer; this is NOT the same as a PBO or pixel buffer object.

PBOs are a method of performing async transfers to and from the GPU, and buffer-to-buffer copies on the GPU (such as textures to vertex buffers).

And frankly, unless you need a separate context, you won't want to be using pbuffers; instead you want Frame Buffer Objects (FBOs), which are the newer and much improved method of rendering to a texture.

There is also a short series of articles on FBOs written by some really top guy [grin]


You are right, I got them mixed up [embarrass].
