HLSL questions

Hi all,

Firstly, I'm new around here, this is my "hello" thread:

http://www.gamedev.net/community/forums/topic.asp?topic_id=583769

So I hope I'm in the right place for this kind of stuff. I've spent the last week or so diving into the world of shaders, which, despite my being a relatively experienced software guy, are making me feel very newbish! In an effort to better understand things I've compiled a short list of questions I have regarding HLSL (I'll move on to GLSL next). I hope you don't mind answering them if you can:

1: Can you run multiple shaders/techniques on one vertex buffer, i.e. a DrawPrimitives() call inside a for loop that iterates over all the shaders/techniques/passes? Or does this have to be done as separate passes in the same technique, meaning I need one global shader? I suppose you could say: can you 'stack' different shaders on a single mesh?

2: What is the difference, and pros/cons, between D3DXCreateEffectFromFile and D3DXCompileShader?

3: Since I am writing a game engine that I intend to use in a production environment eventually, is it worth getting used to compiling the shaders externally now? Or shall I just program for ASCII files now and then abstract ASCII/binary state later?

4: How similar are GLSL and HLSL, not in syntax, but in concept? What architectural differences exist between the two 'worlds'?

I think those are my main questions right now; I'm sure there'll be more.

Any help/steering would be appreciated...

Thanks

Tim

1. You can only have one vertex shader and one pixel shader active at a time for a draw call. No exceptions. If you need to combine two bits of shader code, you essentially have 3 options:

A. Combine the shader code into a single shader, either manually or with an automated tool or framework

B. Combine the results of the two shaders using fixed-function blending states by rendering multiple passes of the same geometry (for instance, using additive blending to sum the contributions from two light sources)

C. Render one pass with one shader to a render target texture, then sample that texture in the second pass using the second shader which combines the results.

A is the best in terms of performance, while B is probably the easiest in terms of implementing it (a sketch of B follows below). C isn't really that great for either.
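To make option B concrete, here is a minimal sketch in D3D9 effect (.fx) syntax. All the names (TwoLights, LightDirA and so on) are made up for illustration, and the application still draws the same geometry once per pass via ID3DXEffect::BeginPass:

// Two passes over the same geometry; the second pass is summed on top of
// the first using additive fixed-function blending.
float4x4 WorldViewProj;
float3   LightDirA, LightColorA;
float3   LightDirB, LightColorB;

struct VSOut { float4 pos : POSITION; float3 nrm : TEXCOORD0; };

VSOut VS(float4 pos : POSITION, float3 nrm : NORMAL)
{
    VSOut o;
    o.pos = mul(pos, WorldViewProj);
    o.nrm = nrm;
    return o;
}

float4 Diffuse(float3 n, float3 dir, float3 col)
{
    return float4(col * saturate(dot(normalize(n), -dir)), 1.0f);
}

float4 PS_LightA(VSOut i) : COLOR { return Diffuse(i.nrm, LightDirA, LightColorA); }
float4 PS_LightB(VSOut i) : COLOR { return Diffuse(i.nrm, LightDirB, LightColorB); }

technique TwoLights
{
    pass P0   // first light, written normally
    {
        VertexShader     = compile vs_2_0 VS();
        PixelShader      = compile ps_2_0 PS_LightA();
        AlphaBlendEnable = false;
    }
    pass P1   // second light, summed on top via additive blending
    {
        VertexShader     = compile vs_2_0 VS();
        PixelShader      = compile ps_2_0 PS_LightB();
        AlphaBlendEnable = true;
        SrcBlend         = One;
        DestBlend        = One;
    }
}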

2. Well one creates an effect, and one just creates a raw shader. The effects system is a framework built on top of the raw D3D9 shader API. It encapsulates multiple shaders and details about them, in order to make it easier to use them and build shaders with more complex functionality. Generally if you don't use effects, then you implement your own framework for doing what it does.
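To make that distinction concrete, here's a tiny hedged example (names like Tint and Tinted are invented). D3DXCompileShader only sees the shader function and the entry point/profile you name; D3DXCreateEffectFromFile understands everything else in the file:

// The function below is all D3DXCompileShader cares about: you hand it the
// source, an entry point ("PS") and a profile ("ps_2_0"), and get back raw
// bytecode plus a constant table.
float4 Tint;

float4 PS(float2 uv : TEXCOORD0) : COLOR
{
    return Tint;
}

// The block below only means something to the effect framework
// (D3DXCreateEffectFromFile): it bundles compiled shaders together with
// render states into named techniques and passes, and manages parameters
// like Tint for you.
technique Tinted
{
    pass P0
    {
        PixelShader = compile ps_2_0 PS();
    }
}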

3. It probably won't matter unless you have many shaders, in which case it may take a while to compile the shaders every time you run the game.

4. I'm not really a GLSL expert, but in my experience the fundamental concepts are the same. One thing to watch out for is that in many cases GLSL can't be compiled offline, and is JIT compiled by the driver when you load it. This is really bad on some mobile platforms where the compiler sucks at optimizing, which can result in some really bad shader performance.

Quote:
Original post by MJP
4. I'm not really a GLSL expert, but in my experience the fundamental concepts are the same. One thing to watch out for is that in many cases GLSL can't be compiled offline, and is JIT compiled by the driver when you load it. This is really bad on some mobile platforms where the compiler sucks at optimizing, which can result in some really bad shader performance.


On mobile platforms, you have OpenGL ES 2.0, which has an offline compiler.
Also, JIT is used on desktops, where OpenGL 2.1 or 3.3 will be available, and JIT gives better results since the compilers are highly tuned.

Hi MJP,

Thanks for the info, it's a fascinating subject.

So I have some follow ups then (sorry!)...

1 and 2. Writing your own effect framework... Is it worth it? I'm thinking I could implement my own framework and abstract it so parts of the engine are agnostic to GLSL/HLSL (obviously not the renderer), as well as provide a solution for combining different parts of shaders...

3. I see. I was also thinking about keeping my shader source private...

4. Is GLSL really used for OpenGL, or should I consider Cg (as its supposed to be so similar to HLSL)?

Cheers!

Tim

Quote:
Original post by tgjones
1 and 2. Writing your own effect framework... Is it worth it? I'm thinking I could implement my own framework and abstract it so parts of the engine are agnostic to GLSL/HLSL (obviously not the renderer), as well as provide a solution for combining different parts of shaders...


Well, if you have to ask, it probably isn't worth it [smile] The effects framework is rather flexible, complete, efficient, easy to work with etc., so unless you have very specific and clear needs for your shader handling framework, you should probably use the default one. As for making parts of the engine agnostic (cross-platform?), from my limited experience I think it's more typical to create completely separate renderers for multiple platforms.

Quote:
3. I see. I was also thinking about keeping my shader source private...


I'm not sure; you could 'pre-compile' all your shaders to try and keep them safe, depending on your target platform of course. In any case, I think this is more pertinent to the general concern of keeping game assets private, so you could try some of those techniques (obfuscation, encoding etc.). On the other hand, most complex shaders worth protecting require specific inputs from the application code, so it might not be that useful for someone to steal them.

In various high-profile commercial games some snooping turned up the shaders in .txt files in their program directory, so if big studios aren't bothered by this, I wouldn't worry too much about it either.

Quote:
4. Is GLSL really used for OpenGL, or should I consider Cg (as its supposed to be so similar to HLSL)?


No clue, sorry [smile]

A large conceptual difference between GLSL and HLSL is one of the "pillars of hatred" I have for OpenGL generally.

In a GLSL shader, you can't assign constants to specific registers manually (i.e. there are no register semantics).

This means you can't write your shader expecting to get its ambient colour from constant register 3, for example, whereas in HLSL you can. To handle constants in GLSL, you declare a global variable in your shader, and then after compiling it you have to ask where the ambient value ended up and remember that location. (You can also do this in HLSL, but you don't need to.)
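For instance, an HLSL register semantic looks like this (AmbientColor is just an illustrative name):

// Pins the variable to pixel shader constant register c3, so the app can
// call SetPixelShaderConstantF(3, ...) without asking the compiler where
// the variable ended up. GLSL has no equivalent; you look the uniform up
// by name after linking (glGetUniformLocation).
float4 AmbientColor : register(c3);

float4 PS(float2 uv : TEXCOORD0) : COLOR
{
    return AmbientColor;
}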

Sounds subtle, but it caused carnage in my engine, which makes a lot of shaders via CPU code (strcat'ing lines of code together to build functions to be compiled). It also means you can't reserve a few critically important constants and just leave 'em alone.


Quote:
Original post by V-man
On mobile platforms, you have OpenGL ES 2.0, which has an offline compiler.


OES_shader_binary is an optional extension.

Quote:
Original post by V-man
Also, JIT is used on desktops, where OpenGL 2.1 or 3.3 will be available, and JIT gives better results since the compilers are highly tuned.


Better results than what? A standardized offline compiler? I know there's no way I would ever prefer having to deal with the quirks and bugs of multiple shader compilers rather than just one.

Hi remegius, Rubicon,

Thanks for the info, just beginning to get my head around this. So I take it from the discussion so far that writing my own framework seems somewhat redundant, for HLSL at least, as the Microsoft one seems to do a perfectly good job.

Additionally, as only one VS and PS can be active during a draw, I'm guessing I can use techniques or passes to combine effects? I'm thinking all vertices need WVP transforming, but not all need to flap like a flag for example, so I could have options associated with a mesh telling the renderer the 'behaviour' and then combine the relevant passes at the renderer level. Does this sound like a reasonable approach?

Additionally, it seems trying too hard to keep shader source private is a bit pointless, and that I should really get my head around HLSL before I start trying to learn GLSL/Cg!!

Thanks again.

Tim

Quote:
Original post by tgjones
Additionally, as only one VS and PS can be active during a draw, I'm guessing I can use techniques or passes to combine effects? I'm thinking all vertices need WVP transforming, but not all need to flap like a flag for example, so I could have options associated with a mesh telling the renderer the 'behaviour' and then combine the relevant passes at the renderer level. Does this sound like a reasonable approach?


It's not unreasonable, but it's probably a can of worms and inefficient to boot. Shader passes are, in my opinion, a bit outdated and not really that useful for much anymore. The main usage was, as MJP pointed out, rendering the scene multiple times and blending the results. Since you cannot read back the results from previous passes (not without switching render targets, which defeats the purpose of having multiple passes) it's only really suited for rendering the scene with multiple lights and additively blending these.

Even so, this multi-passing was only really needed with the limited instruction count allowed in pixel shaders on older hardware. These days, you can calculate numerous lights in one pass or use alternative techniques like deferred rendering, if you really want *a lot* of lights.
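For what it's worth, a rough sketch of the "numerous lights in one pass" idea in HLSL; the array size and names here are invented for illustration:

#define NUM_LIGHTS 4

float3 LightDir[NUM_LIGHTS];     // directional lights, for simplicity
float3 LightColor[NUM_LIGHTS];

float4 PS_MultiLight(float3 normal : TEXCOORD0) : COLOR
{
    float3 n = normalize(normal);
    float3 total = 0;
    // Accumulate all lights in a single pass instead of blending one pass per light.
    for (int i = 0; i < NUM_LIGHTS; i++)
        total += saturate(dot(n, -LightDir[i])) * LightColor[i];
    return float4(total, 1.0f);
}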

As a good rule of thumb, it's typically the best option to create a specific shader for each specific rendering technique. This may lead to some code redundancy, but in general it will give you better performance and the system as a whole will be easier to work with. Finally, it might help to consider that you're probably only going to use one or two 'main' shaders for your game, since you'll want the lighting to look consistent across all objects anyway.

Interesting.

So I'd actually sort of come to that conclusion myself with my experimentation (the duplicate code, that is). A single FX file for each vertex format (declaration) seems a more reasonable (and easier to maintain) approach, #including external shader functions to minimize duplication...? Or is #including in shaders frowned upon?

It's quite a learning curve, although I think (hope) I'm getting over the hump now!

Tim

For rendering 3D stuff, you can get some mileage out of sticking with a one-vertex-format-fits-all approach. I actually have two - one for skinning and one without.

In each there is a position, two normals (extra one for normal mapping), a colour, and two sets of texture coordinates. It's pretty fat and I've never felt a dire need for anything more. (Obviously the skinning version also contains bone influences).
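As a rough sketch, that layout would look something like this as the vertex shader's input struct in HLSL (names invented; the extra "normal" is declared here as a tangent, since that's its usual role in normal mapping):

struct FatVertex
{
    float3 Position : POSITION;
    float3 Normal   : NORMAL;
    float3 Tangent  : TANGENT;     // the extra "normal" used for normal mapping
    float4 Color    : COLOR0;
    float2 UV0      : TEXCOORD0;
    float2 UV1      : TEXCOORD1;
};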

When you get adept, you can shrink this block of vertex components down to fit into 32 bytes by compressing stuff, and this is highly beneficial for speed. (For example, a normal can be stored with a byte per component instead of a float, and you can uncompress the value back to float in the shader).
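A hedged sketch of the byte-per-component idea: if the normal is stored as a 4-byte UBYTE4N/D3DCOLOR-style element, it arrives in the shader as values in the 0..1 range and just needs expanding back to -1..1:

struct PackedVertex
{
    float3 Position     : POSITION;
    float4 PackedNormal : NORMAL;   // byte components, seen as 0..1 in the shader
};

float3 UnpackNormal(float4 packedNormal)
{
    return packedNormal.xyz * 2.0f - 1.0f;   // 0..1 -> -1..1
}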

I'm not suggesting you do this compression shit yet - stick with floats for everything for now, but at least start using just a single format, as changing it in the pipeline can be fairly expensive and ultimately pointless.

+1 on the notion of having bespoke shaders for each effect. By all means #include common components, but not via dynamic linking - build a new shader each time out of the bits.
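A hedged sketch of what that could look like (file names invented); the D3DX effect compiler can resolve #include itself, or through an ID3DXInclude handler if you supply one:

// --- common.fxh : shared bits, #included by every effect -----------------
float4x4 WorldViewProj;

float4 TransformPosition(float4 pos)
{
    return mul(pos, WorldViewProj);
}

// --- flag.fx : one bespoke effect built from the common bits -------------
#include "common.fxh"

float Time;

float4 VS_Flag(float4 pos : POSITION) : POSITION
{
    pos.y += sin(pos.x * 4.0f + Time) * 0.1f;   // the "flap like a flag" part
    return TransformPosition(pos);
}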

Quote:
Original post by Rubicon
For rendering 3D stuff, you can get some mileage out of sticking with a one-vertex-format-fits-all approach. I actually have two - one for skinning and one without.


And a +1 to this notion as well. This gives you a much more maintainable system and generalization options you'll be glad to have somewhere down the line.

Hi guys,

Sorry for the delay, Gmail took it upon itself to deem GameDev.net forum post notifications as spam, so I've only just noticed the recent replies.

Thanks for the information. I was thinking perhaps I could have conceptual materials that each have a main effect file (which #includes common parts) and an associated vertex format, which, if I take Rubicon's advice (I obviously will!), will be the same for most materials.

Interesting point about the compression, and yes, I'll be sticking with floats for now! But worth bearing in mind down the line.

Thanks again.

Tim

Another compression idea:
As normals are always normalized, you can store them as spherical coordinates, which only requires two floats. If you are using two normals, like Rubicon, that goes from six floats to four. Then you can reconstruct the three components in the shader.
If you were then also using bytes for each component, your normals go right down from 24 bytes (six floats) to 4 bytes (two per normal).
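A minimal sketch of the reconstruction in the shader, assuming the two stored values are the polar angle and the azimuth:

float3 NormalFromSpherical(float2 angles)   // x = theta (polar), y = phi (azimuth)
{
    float sinTheta, cosTheta, sinPhi, cosPhi;
    sincos(angles.x, sinTheta, cosTheta);
    sincos(angles.y, sinPhi, cosPhi);
    return float3(sinTheta * cosPhi, sinTheta * sinPhi, cosTheta);
}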

