kRogue

OpenGL API wish list



All right, the purpose of this thread is for folks to share what API features they wish were in OpenGL, and for members of the community to point out whether those features already exist (be it core or extension) or how to get something functionally equivalent.

With that in mind, here is my wish for today: a shader for the stencil buffer. As of now we have a shader for vertex processing and a shader for fragment processing (and a geometry shader in the works for core, available as an extension on nVidia GeForce 8). Right now you can only do very specific tests against the stencil buffer, and only very specific things can happen when the stencil test fails/passes and the depth test fails/passes. What I would *love* is a stencil shader with a few built-ins:

  • gl_StencilPass (out bool) — the shader sets it to true if the stencil test passes, false otherwise
  • gl_StencilValue (inout int) — automatically set to the value the stencil buffer holds coming into the shader, but writable (like gl_FragDepth in fragment shaders)
  • gl_DepthPassed (in bool) — a read-only value indicating whether the depth test passed or failed

As for what such a shader takes as inputs, at first I was thinking nothing besides uniforms, but later I found myself thinking maybe the exact same parameters that the fragment shader gets... not too sure on that one.
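To make the idea concrete, here is a rough sketch of what such a stencil shader might look like, in GLSL-style pseudocode. None of this is valid GLSL on any real implementation: gl_StencilPass, gl_StencilValue, and gl_DepthPassed are the hypothetical built-ins proposed above, and the uniform names are made up for illustration.

```glsl
// hypothetical stencil shader -- illustrative pseudocode only
uniform int refValue;  // reference value, analogous to the ref arg of glStencilFunc
uniform int mask;      // made-up mask uniform, analogous to glStencilFunc's mask

void main()
{
    // programmable stencil test: pass only if the masked buffer
    // value matches the masked reference value
    gl_StencilPass = ((gl_StencilValue & mask) == (refValue & mask));

    // programmable stencil update, driven by the depth test result
    if (gl_StencilPass && gl_DepthPassed)
        gl_StencilValue = gl_StencilValue + 1;  // like GL_INCR, but under shader control
}
```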

my thoughts: gl_DepthPassed can't be implemented in vertex/geometry shaders because fragment operations take place after rasterization; therefore the output of that stage isn't defined yet at those points.

i agree with the other points. on opengl.org there is a dedicated forum to discuss and suggest new opengl features.

[sorry if i doubleposted here but i got a strange asp error thingy and the first post doesn't seem to appear]

I was thinking that right now the hardware already has a "stencil stage": it gets the depth value (if the fragment shader does not set it, then directly from the interpolator during rasterization; otherwise it has to run the fragment shader anyway). So gl_DepthPassed is to some extent already implemented in current hardware, but its use is hardwired for limited functionality (though it is not clear whether the hardwiring is in the driver or the hardware)... and that is why I *wish* for this: whether the depth test passed is not exposed in any of the shaders.

Defining a separate shader stage for stencil operations doesn't make much sense. Stencil operations are fragment operations, and ultimately belong to the fragment shader. If more complex stencil or depth buffer manipulation would be made programmable, then the fragment shader would be the place to add them.

That said, I don't think the stencil buffer has a lot of future. The whole concept is quite obsolete, as it was developed for fixed function pipelines.

The best approach, and the one that is probably going to be implemented in the near future, is to get rid of the last remaining fixed parts of the fragment pipeline altogether: the depth test, the stencil buffer, and the blending stage. All of these can be efficiently simulated by a fragment shader, if full read/write access to the framebuffer (or the bound FBO target) is granted, and MRT is generalized to allow generic mixed format targets (e.g. surface0 is RGBA, surface1 is FP32 for depth, and surface2 is I16 for a stencil buffer).

The removal of the blending stage is the most important part. It will finally allow single pass rendering of transparent surfaces with fully saturated specular highlights, and similar. The RW access to the FB is required for this, and will also open new possibilities for fast, order independent transparency.
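As a sketch of what that could buy you: with framebuffer read access in the fragment shader, the depth test and the blend both become ordinary shader code, and the specular term can be added after the alpha blend in the same pass. This is GLSL-style pseudocode; lastFragColor and lastFragDepth are stand-ins for the hypothetical framebuffer read, which no standard GLSL provides.

```glsl
// hypothetical fragment shader with framebuffer read access -- not standard GLSL
uniform sampler2D diffuseMap;
varying vec2 texCoord;
varying vec3 specular;  // specular term, computed per-vertex for brevity

void main()
{
    // programmable depth test, replacing the fixed-function one
    if (gl_FragCoord.z >= lastFragDepth)
        discard;

    vec4 src = texture2D(diffuseMap, texCoord);

    // programmable blend: alpha-blend the diffuse term against the
    // framebuffer, then add the specular highlight at full saturation,
    // all in a single pass
    vec3 blended = mix(lastFragColor.rgb, src.rgb, src.a) + specular;
    gl_FragColor = vec4(blended, 1.0);
}
```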

Note that this feature is in principle already available on *some* current hardware, but it's not really exposed and is highly non-standard. See the ZT-buffer algorithm presented in 2.8 of ShaderX5.

when I was reading through the GLSL spec, there was a fair amount of Q and A on getting read access to what the framebuffer held before, but at the end of the day they said no, because it would either make the hardware too gnarly to do well, or make it all run slow (though they did not state the exact reasons why that was the case)...

though I have to admit, if one had direct read and write access to all the buffers, and they did not all have to be the same format, that would make life *so* *much* easier. Lots of drool for what could be done, so much easier and so much more cleanly (at the end of the day it would greatly reduce the state one has to keep track of, and make the usage much cleaner, just as a fragment shader is a heck of a lot easier than multitexture combiner functions)...

my thought in making it a separate shader was that it would be easier for the hardware fellas to do... and also that, as it currently looks from the API user's side, the stencil test is done before the fragment shader (which is the case for performance reasons)...

but maybe 4-6 or so years from now, we might see that on mainstream PC hardware.... when do you think such hardware would become consumer price friendly enough? (read: less than 300 or so USD)
