

Shader Functions


7 replies to this topic

#1 korvax   Members   -  Reputation: 306


Posted 21 November 2012 - 02:44 PM

Hi,
First, let me state that I'm quite new to shaders and DirectX 11, so this might be an easy or even "stupid" question. Is it possible to control which shader functions in the shader file will be executed, and possibly with what input variables? Let me show you an example.

float4 Texture(PS_INPUT input) : SV_Target
{
    return tex.Sample(linearSampler, input.Tex);
}

float4 noTexture(PS_INPUT input) : SV_Target
{
    return input.Color;
}

How can I tell my program which of the two functions it should execute? Can I have the same PixelShader for this, or do I need to compile two different PixelShaders with different entry points, "Texture" and "noTexture"? Or can I simply do something like m_pContext->PSSetShader(m_pPixelShader, blbblal, "Texture")?

And my second question: is it possible to send values to the different shader functions, so that I can pass a bool for example?
float4 Texture(PS_INPUT input, bool bTex) : SV_Target
{
    if (bTex)
        return tex.Sample(linearSampler, input.Tex);
    else
        return input.Color;
}
I have done some searching on the subject, but I have mainly found some effect solutions for DX10 and some angry posts about the lacking effects system in DirectX 11 (even though the source for it exists in the SDK). I hope someone can help, or even has some examples (I'm quite new to this, as I stated).

Edited by korvax, 21 November 2012 - 02:46 PM.



#2 MJP   Moderators   -  Reputation: 11736


Posted 21 November 2012 - 04:04 PM

In your first example, you would compile two different shaders. When you compile a shader you specify the entry point function, so you would compile one with "Texture" as the entry point and then the other with "noTexture" as the entry point.
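For illustration, here is a minimal sketch of what that might look like using the D3DX11 compile call that comes up later in this thread (the file name, m_pDevice, m_pContext and useTexture are placeholders, and error handling is omitted):

ID3D10Blob* texBlob = NULL;
ID3D10Blob* noTexBlob = NULL;

// Same .hlsl file compiled twice, once per entry point (Unicode path literal assumed).
D3DX11CompileFromFile(L"shader.hlsl", NULL, NULL, "Texture",   "ps_5_0", 0, 0, NULL, &texBlob,   NULL, NULL);
D3DX11CompileFromFile(L"shader.hlsl", NULL, NULL, "noTexture", "ps_5_0", 0, 0, NULL, &noTexBlob, NULL, NULL);

// One pixel shader object per compiled entry point.
ID3D11PixelShader* psTexture   = NULL;
ID3D11PixelShader* psNoTexture = NULL;
m_pDevice->CreatePixelShader(texBlob->GetBufferPointer(),   texBlob->GetBufferSize(),   NULL, &psTexture);
m_pDevice->CreatePixelShader(noTexBlob->GetBufferPointer(), noTexBlob->GetBufferSize(), NULL, &psNoTexture);

// At draw time, bind whichever shader the object needs.
m_pContext->PSSetShader(useTexture ? psTexture : psNoTexture, NULL, 0);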

To pass values to a shader, you use a constant buffer. The syntax looks like this:

cbuffer Constants
{
    bool bTex;
}
float4 Texture(PS_INPUT input) : SV_Target
{
    if (bTex)
	    return tex.Sample(linearSampler, input.Tex );
    else
	    return input.Color;
}

Then you also need to handle creating a constant buffer, filling it with data, and binding it in your C++ app code. If you're not familiar with constant buffers, I would simply recommend consulting some of the simple tutorials and samples that come with the SDK.
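As a rough sketch of that C++ side, assuming a dynamic, CPU-writable constant buffer (buffer and variable names are illustrative, error checks omitted):

// The struct must match the cbuffer layout and be padded to a multiple of 16 bytes.
struct PSConstants
{
    BOOL  bTex;          // an HLSL bool occupies one 4-byte slot
    float padding[3];
};

ID3D11Buffer* pConstantBuffer = NULL;

D3D11_BUFFER_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.ByteWidth      = sizeof(PSConstants);
desc.Usage          = D3D11_USAGE_DYNAMIC;
desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
m_pDevice->CreateBuffer(&desc, NULL, &pConstantBuffer);

// Whenever the value changes: map, write, unmap, then bind to the pixel shader stage.
D3D11_MAPPED_SUBRESOURCE mapped;
m_pContext->Map(pConstantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
((PSConstants*)mapped.pData)->bTex = TRUE;
m_pContext->Unmap(pConstantBuffer, 0);
m_pContext->PSSetConstantBuffers(0, 1, &pConstantBuffer);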

#3 lipsryme   Members   -  Reputation: 1042


Posted 21 November 2012 - 04:44 PM

Not sure how you compile your shader code, but if you use this function you can specify, for example, the name of the pixel shader function to use.
Like this:
D3DX11CompileFromFile(PATH, 0, 0, NAME, MODEL, NULL, 0, 0, &this->PS_Buffer, NULL, 0);

Where PATH is the path where your shader file is located, NAME is the name (a string) of your shader function (like "Texture" in your example), and MODEL is the specific shader model you want to use.
For a more detailed description on how this function works:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476261(v=vs.85).aspx

I don't think you can set specific shader functions during run-time if that's what you were asking.
The compiler has to know which function should be the entry point.

I believe you could also do something like (haven't tried it)
#ifdef USETEXTURES
float4 PS(PS_input input) : SV_TARGET
{
   return texture.Sample(...);
}
#else
float4 PS(PS_input input) : SV_TARGET
{
   return 1.0f;
}
#endif

But that would still need to be done at compile time.
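For reference, a minimal sketch of how the define could be fed into the compile (assuming the same D3DX11CompileFromFile call as above; each combination of defines is still a separate compile):

// The macro array goes into the pDefines parameter (second argument) and must be NULL-terminated.
D3D10_SHADER_MACRO withTextures[] =
{
    { "USETEXTURES", "1" },
    { NULL, NULL }
};

ID3D10Blob* psTexturedBlob = NULL;
ID3D10Blob* psPlainBlob    = NULL;
D3DX11CompileFromFile(L"shader.hlsl", withTextures, NULL, "PS", "ps_5_0", 0, 0, NULL, &psTexturedBlob, NULL, NULL);
D3DX11CompileFromFile(L"shader.hlsl", NULL,         NULL, "PS", "ps_5_0", 0, 0, NULL, &psPlainBlob,    NULL, NULL);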

Edited by lipsryme, 21 November 2012 - 05:07 PM.


#4 korvax   Members   -  Reputation: 306


Posted 22 November 2012 - 01:55 PM

MJP, on 21 November 2012 - 04:04 PM, said:
In your first example, you would compile two different shaders. When you compile a shader you specify the entry point function, so you would compile one with "Texture" as the entry point and then the other with "noTexture" as the entry point.


I can imagine that with a big rendering application with a lot of effects, like different lighting techniques or different surface materials, you would end up with something like 2^x different shaders to cover all the cases if you have one compile per entry point. That would take a lot of resources to handle. Not a very scalable solution, am I right? And wouldn't it be a similar problem with the constant buffer approach, with a lot of variables in the constant buffer? So I guess what I'm asking now is how you would solve this in a more realistic program. Let's say you have one object that should be rendered with light shader 1 and material 2, another with light shader 1 and material 1, and the last one with light shader 2 and material 2. I know this wasn't my original question, but your answer led me here :).



lipsryme, on 21 November 2012 - 04:44 PM, said:
I believe you could also do something like (haven't tried it)

#ifdef USETEXTURES
float4 PS(PS_input input) : SV_TARGET
{
   return texture.Sample(...);
}
#else
float4 PS(PS_input input) : SV_TARGET
{
   return 1.0f;
}
#endif


Cool, I will try this out!

#5 MJP   Moderators   -  Reputation: 11736


Posted 23 November 2012 - 01:48 AM

Ahh, now that's a much harder question to answer.

It's true that compiling many different "permutations" of a shader can lead to the "explosion" of shaders that you're alluding to. The thing you have to keep in mind is that even if you support 2^16 possible shader variations, your game would never actually use that many. So the trick is to limit your set of shaders to the ones that you actually need. For instance, some games have used an on-demand shader system during development that didn't compile and load a particular permutation until a mesh was loaded that needed it. They would then save those shaders to a cache, and once the game was QA'ed they would ship the game with the pre-compiled cache stored on disc. Some games will simply pre-determine the shaders needed as part of a level build step, and pre-compile them. In practice it depends a lot on the game, and how the game's shader/material system is set up.

Putting variables in constant buffers is a different problem. With that approach you don't have to deal with compiling or loading a lot of shaders, and you can stick lots of variables in the constant buffer without worrying about doubling the number of shaders. Instead you have to deal with the runtime performance of having a branch (which is pretty cheap on modern hardware, especially since all threads will take the same branch) and suboptimal code from the compiler not being able to optimize things out. However, it's possible to mix this approach with shader permutations, in order to minimize the downsides of either. In general I would suggest putting expensive, complex functionality in a permutation rather than a branch. This will ensure that you don't suffer a higher register count for materials that don't use the expensive feature.
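As a very rough sketch of the on-demand idea (not any particular engine's system; the key layout, feature flags and names are made up for illustration):

#include <map>
#include <vector>

typedef unsigned int PermutationKey;   // e.g. bit 0 = USETEXTURES, bit 1 = NORMALMAP, ...

std::map<PermutationKey, ID3D11PixelShader*> g_pixelShaderCache;

ID3D11PixelShader* GetPixelShaderPermutation(ID3D11Device* pDevice, PermutationKey key)
{
    std::map<PermutationKey, ID3D11PixelShader*>::iterator it = g_pixelShaderCache.find(key);
    if (it != g_pixelShaderCache.end())
        return it->second;   // this combination was already compiled

    // Translate the key bits into preprocessor defines.
    std::vector<D3D10_SHADER_MACRO> macros;
    D3D10_SHADER_MACRO m;
    if (key & 1) { m.Name = "USETEXTURES"; m.Definition = "1"; macros.push_back(m); }
    if (key & 2) { m.Name = "NORMALMAP";   m.Definition = "1"; macros.push_back(m); }
    D3D10_SHADER_MACRO terminator = { NULL, NULL };
    macros.push_back(terminator);

    ID3D10Blob* pBlob = NULL;
    D3DX11CompileFromFile(L"shader.hlsl", &macros[0], NULL, "PS", "ps_5_0", 0, 0, NULL, &pBlob, NULL, NULL);

    ID3D11PixelShader* pShader = NULL;
    pDevice->CreatePixelShader(pBlob->GetBufferPointer(), pBlob->GetBufferSize(), NULL, &pShader);
    g_pixelShaderCache[key] = pShader;
    return pShader;
}

A shipping build would load the bytecode blobs from the pre-compiled cache instead of calling the compiler.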

Edited by MJP, 24 November 2012 - 11:54 AM.


#6 korvax   Members   -  Reputation: 306


Posted 24 November 2012 - 07:03 AM

Thanks MJP, this helps me a lot.

#7 Tordin   Members   -  Reputation: 604


Posted 26 November 2012 - 02:43 AM

If I remember correctly, #ifdef/#endif is far more efficient than an if-branch in the pixel shader.
You could set your defines in a global header file that you include in all your shaders, and you could also rewrite that file at runtime to set different graphics options.
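A minimal sketch of that idea (the file and option names are made up; the shaders would #include the generated file and be recompiled after it changes):

#include <fstream>

void WriteGraphicsDefines(bool useTextures, int shadowQuality)
{
    std::ofstream out("GraphicsOptions.hlsli");
    if (useTextures)
        out << "#define USETEXTURES 1\n";
    out << "#define SHADOW_QUALITY " << shadowQuality << "\n";
}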
"There will be major features. none to be thought of yet"

#8 Tsus   Members   -  Reputation: 1061


Posted 26 November 2012 - 04:57 AM

To summarize, there are a few ways to “select functions” at render time.
You could create an “ubershader”, which means all functionality is implemented in a single shader. You can either pre-compile the code paths by using different sets of macros, e.g.
#if DO_TEXTURE
color *= tx.Sample(linearSampler, input.tcoord);
#endif
This gives you tons of precompiled shader files. First, it is hard to obtain an overview of all the files and second, the ubershader is kind of hard to read.

The second option for an ubershader is to branch, depending on constant buffer flags:
if (g_doTexture)
  color *= tx.Sample(linearSampler, input.tcoord);
This is also not the best idea, since your shader must always be prepared for the worst-case code path and therefore has to allocate registers for it. Fewer available registers means, first, fewer threads executing in parallel, and second, more memory needed to put thread groups (warps) to rest while they wait on memory accesses. If fewer thread groups can go to sleep, latency hiding doesn't work at its best, so you get slower execution times.

Option three is to write tons of independent shader files, which is really hard to maintain.

The intended solution to this dilemma is called “dynamic shader linkage”, which allows you to dynamically link functions at bind time. In Dx11 this is implemented as inheritance in the shader code. You define an abstract class, write a few implementations and select the implementation to use at bind time of the shader. The function will be inlined and the registers are optimally allocated for your code. If I recall correctly, there was a talk at Gamefest on dynamic shader linkage.
In OpenGL 4 this feature is called subroutine functions. Actually, it's been around since Dx9 in the Cg language.
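For reference, a rough sketch of what the D3D11 binding side might look like (the class name "TexturedSource" is hypothetical, pBlob is the compiled ps_5_0 bytecode, and error handling is omitted):

// The pixel shader must be created against a class-linkage object.
ID3D11ClassLinkage* pClassLinkage = NULL;
m_pDevice->CreateClassLinkage(&pClassLinkage);

ID3D11PixelShader* pPS = NULL;
m_pDevice->CreatePixelShader(pBlob->GetBufferPointer(), pBlob->GetBufferSize(), pClassLinkage, &pPS);

// Pick the implementation at bind time. CreateClassInstance works for classes
// without data members; classes with data members are fetched by their cbuffer
// variable name via GetClassInstance instead.
ID3D11ClassInstance* pTextured = NULL;
pClassLinkage->CreateClassInstance("TexturedSource", 0, 0, 0, 0, &pTextured);

m_pContext->PSSetShader(pPS, &pTextured, 1);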

Best regards!

Edited by Tsus, 26 November 2012 - 04:58 AM.







