
Disable pixel shader



Hi guys,

For some reason, I want to disable the pixel shader (fragment shader). That is, the hardware should throw fragments away right after they come out of the rasterization stage, so they never reach the next stage (the pixel shader).

I know we can disable the rasterization stage by enabling RASTERIZER_DISCARD; is there anything similar for the pixel stage, or any trick to achieve it?

Thanks in advance,
-D

You can call glCullFace(GL_FRONT_AND_BACK). That will cull all faces of triangles, sending none to the fragment shader.
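A minimal sketch of that state setup (assuming a current GL context and that face culling is not otherwise needed):

```c
/* Cull both front and back faces: no triangles survive culling,
 * so no fragments are generated and the fragment shader never runs. */
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT_AND_BACK);
```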

Edited by larspensjo

glCullFace prevents the data from reaching the rasterization stage, but in my case I still want primitives to be processed at the rasterization stage.

The general approach is to use glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE), but the documentation I could find suggests that this only prevents the output colour from being written to the framebuffer - the shader will still run. Your driver might make an intelligent optimization here, or you could write a simple passthrough fragment shader for this purpose.
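A sketch of that approach (note the caveat above: the driver is still free to run the shader):

```c
/* Disable all colour writes; the fragment shader's outputs are
 * discarded, though the shader itself may still execute. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
/* Optionally disable depth writes too, so the pass has no side effects. */
glDepthMask(GL_FALSE);
```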


What are you trying to achieve? Occlusion culling?

Nope. I just want to measure the processing time when the pixel processing stage is disabled.

I doubt you're going to get a meaningful measurement here. OpenGL API calls are asynchronous and operate in parallel with work on your CPU, so the calls you make with the PS disabled may not have even started executing on the GPU yet when you take your measurement.
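To illustrate the pitfall, here is a rough CPU-side timing sketch; `now_seconds()` and `draw_scene()` are hypothetical stand-ins, and the `glFinish` calls force a synchronization that a real renderer would normally avoid:

```c
glFinish();                     /* drain any previously queued GPU work */
double t0 = now_seconds();      /* hypothetical high-resolution timer   */
draw_scene();                   /* hypothetical draw calls              */
glFinish();                     /* block until the GPU has finished     */
double elapsed = now_seconds() - t0;
/* Without the second glFinish, the measurement would cover only the
 * cost of queuing the commands, not executing them. */
```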

You can use GL_TIME_ELAPSED queries, though. But I would also say that trying to measure the vertex stage alone is not that useful, since the measurement is totally synthetic and not relevant for real use cases. If anything, just measure with a trivial fragment shader.
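A sketch of such a timer query (requires GL 3.3 or ARB_timer_query; `draw_scene()` is a hypothetical stand-in):

```c
GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED, query);
draw_scene();                      /* hypothetical draw calls */
glEndQuery(GL_TIME_ELAPSED);

/* Note: this call blocks until the GPU has produced the result. */
GLuint64 elapsed_ns = 0;
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);
```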

Any kind of query is going to need to stall the pipeline during readback though, which will skew the results. The best way is probably to set up a test case without the pixel shader (or with it disabled) and let it run for a few minutes - because transient conditions on your PC may also skew the results when measured over a short timescale. Compare that against the same case run with the full pixel shader and you should get a meaningful enough relative measurement.

Even that won't be 100% perfect though as in a real program there will be all kinds of other stuff going on - CPU/GPU concurrency and concurrency of pipeline stages may even mean that an apparently complex pixel shader can be had for free.


Any kind of query is going to need to stall the pipeline during readback though.

Only if you read back the result immediately; you can instead wait until it becomes available and read it back then.
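For example, polling instead of blocking, assuming `query` is a GL_TIME_ELAPSED query issued on an earlier frame:

```c
/* Poll: only fetch the result once the GPU reports it is ready,
 * e.g. a frame or two after the query was issued. */
GLint available = 0;
glGetQueryObjectiv(query, GL_QUERY_RESULT_AVAILABLE, &available);
if (available) {
    GLuint64 elapsed_ns = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);
}
```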

Nope. I just want to measure the processing time when the pixel processing stage is disabled.

gDEBugger does this; you can check the GL_GREMEDY_frame_terminator, GL_GREMEDY_string_marker and GL_ARB_debug_output extensions. You may also want to check out the gDEBugger docs. Edited by Yours3!f


Nope. I just want to measure the processing time when the pixel processing stage is disabled.


It sounds like you want to check whether your program is fragment-shader bound.
In that case, just write an ultra-simple fragment shader and measure execution time.
Otherwise, glEnable(GL_RASTERIZER_DISCARD) is the way to disable fragment shader execution.
You could also try to discard fragments in the fragment shader. That is, of course, meaningless in itself, but maybe it will trigger an optimization in the driver and eliminate the FS entirely.
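A minimal sketch of the GL_RASTERIZER_DISCARD suggestion (`draw_scene()` is a hypothetical stand-in):

```c
/* Vertex processing still runs, but primitives are thrown away
 * before rasterization, so no fragments are ever generated. */
glEnable(GL_RASTERIZER_DISCARD);
draw_scene();                      /* hypothetical draw calls */
glDisable(GL_RASTERIZER_DISCARD);
```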


Nope. I just want to measure the processing time when the pixel processing stage is disabled.

It sounds like you want to check whether your program is fragment-shader bound.
In that case, just write an ultra-simple fragment shader and measure execution time.
Otherwise, glEnable(GL_RASTERIZER_DISCARD) is the way to disable fragment shader execution.
You could also try to discard fragments in the fragment shader.

I did use that approach, but it happens before the rasterization stage. I am now testing with discarding fragments. That is of course not perfectly correct, but giving the pixel shader as little work as possible doesn't seem like a bad idea.

I'm probably the worst GLSL programmer in the world, but why don't you try something like this?


uniform int enabled;
void main()
{
    if (enabled != 0)  // GLSL requires a bool condition, so compare the int explicitly
    {
        RunMyPixelShaderTask();
    }
}



uniform int enabled;
void main()
{
    if (enabled != 0)
    {
        RunMyPixelShaderTask();
    }
}


It is possible to pass varying and attribute variables down to the pixel shader, but I don't think that works for functions. Anyway, I still appreciate your idea.
What I am doing now is as follows:

void main()
{
    discard;
}

However, this of course can't give me accurate results. Edited by donguow

As with your last thread on the subject, the results you are going to get are not going to be accurate; as I pointed out before ( http://www.gamedev.net/topic/626047-processing-time-at-vertex-shader/page__view__findpost__p__4947729 ), the only way you are going to get decent results is to use the vendor-supplied libraries or tools to time things.

More importantly: WHAT are you trying to do with this information?
Currently you are learning nothing useful whatsoever...
