belfegor

Precompiled shaders


1. I am using VS 2013 Express Edition, and I see that if I add a shader file to my project I can select the HLSL compiler to build it. How should I load this shader afterwards? Do I read the whole file into memory and pass that as the first parameter to the ID3D11Device::Create...Shader() methods?
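In code, the approach being asked about would look roughly like this (a hedged, untested sketch; "shader.cso" is a placeholder for whatever the build produced):

```cpp
#include <d3d11.h>
#include <fstream>
#include <iterator>
#include <vector>

// Sketch of the "read the whole compiled file, pass its bytes to
// Create*Shader" approach. "shader.cso" is a placeholder file name.
HRESULT CreateVertexShaderFromFile(ID3D11Device* device,
                                   ID3D11VertexShader** outShader)
{
    std::ifstream file("shader.cso", std::ios::binary);
    std::vector<char> bytecode{std::istreambuf_iterator<char>(file),
                               std::istreambuf_iterator<char>()};
    // The pointer/size pair is exactly what the Create*Shader calls expect.
    return device->CreateVertexShader(bytecode.data(), bytecode.size(),
                                      nullptr, outShader);
}
```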

 

2. Is there any benefit to compiling shaders on the user's machine (as part of the installation process, for example)?

 

Thanks.


You have to be careful with precompiled shaders: in the general case they're probably a very good thing, but there will always be the specific case when it bites you in the posterior.

 

I personally have much, much more experience in OpenGL than DX, and in OpenGL you should never use precompiled shaders, IMHO.

 

The shader compiler is either a device-specific one (usually supplied by the GPU manufacturer, often buggy and out of date... have a little cry thinking about it) or shipped as part of the device drivers (which you can update... little cheer).

 

Most of these chipsets can have problems with precompiled shaders, and if you are working in the OpenGL ES world they are often not supported at all. I can think of only a couple of devices that support them.

 

Which makes me wonder how Microsoft has got it to work at all. The hardware is no different from the chipsets Microsoft is running on. I sometimes wonder if HLSL compiles to pseudocode which gets 'recompiled' when loaded. I don't know; it would be interesting to find out.


there will always be the specific case when it bites you in the posterior

 

 


Most of these chipsets can have problems with precompiled shaders, ...

 

For everyone's benefit, would you share specifics (rather than a general warning) about what problems you've had with the Microsoft shader compiler, which chipsets are incompatible with such precompiled shaders, examples of HLSL syntax or keywords that cause problems, etc.?

Edited by Buckeye


I personally have much, much more experience in OpenGL than DX, and in OpenGL you should never use precompiled shaders, IMHO.


The OP was referring to the intermediate representation (bytecode, as Alessio called it) of the shader, not the "actual" binary that runs on the GPU. OpenGL has nothing similar whatsoever. There simply is no IR in OpenGL; you have to build one yourself.


If your game isn't hardware specific, my recommendation is to build your application so that it performs all the shader compilation only the first time it is launched and saves each result in a compiled-shaders folder; every subsequent time the application is launched, it should load the precompiled effects.

 

Loading precompiled effects is far faster than compiling them each time, so you will make your users much happier if you don't keep them waiting so long while your game loads :)
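A rough sketch of that first-launch caching scheme, independent of any particular graphics API (CompileShader below is a stand-in for whatever compiler call is actually used, e.g. D3DCompile, and cachePath is a hypothetical location chosen by the application):

```cpp
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Stub: stands in for the real compiler call (e.g. D3DCompile).
std::vector<char> CompileShader(const std::string& source);

// "Compile on first launch, load the cached blob on every later launch."
std::vector<char> LoadOrCompile(const std::string& source,
                                const std::string& cachePath)
{
    // A previous launch may already have written the compiled blob.
    std::ifstream in(cachePath, std::ios::binary);
    if (in) {
        return std::vector<char>{std::istreambuf_iterator<char>(in),
                                 std::istreambuf_iterator<char>()};
    }
    // First launch: compile now and persist the result for next time.
    std::vector<char> blob = CompileShader(source);
    std::ofstream out(cachePath, std::ios::binary);
    out.write(blob.data(), static_cast<std::streamsize>(blob.size()));
    return blob;
}
```

In a real engine the cache entry would also be keyed on a hash of the source (and invalidated on driver or game updates), which this sketch omits.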

 

Hope it helps.


Which makes me wonder how Microsoft has got it to work at all. The hardware is no different from the chipsets Microsoft is running on. I sometimes wonder if HLSL compiles to pseudocode which gets 'recompiled' when loaded. I don't know; it would be interesting to find out.

 

Like Alessio mentioned, the HLSL compiler produces hardware-agnostic shader assembly as output. It's basically an intermediate bytecode format, and it's JIT compiled by the driver at runtime into the native microcode ISA used by the GPU. It's rather similar to the process used by Java and .NET to generate runtime code. Compared to OpenGL it has a few advantages, namely that all of the language parsing and semantic analysis is done through a unified front-end rather than having different implementations per driver. It can also do the majority of optimizations (inlining, constant folding, dead-code stripping, etc.), which is nice since you can do that offline instead of having to do it at runtime. The downside is that the compiler can only target the virtual ISA, and the JIT compiler that produces the actual machine code won't have full knowledge of the original code structure when performing its own optimizations.
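The offline half of that pipeline is a call into the d3dcompiler front-end. A hedged sketch (untested; "shader.hlsl" and "VSMain" are placeholder names, and the blob it returns is the hardware-agnostic bytecode described above):

```cpp
#include <cstdio>
#include <d3dcompiler.h> // link against d3dcompiler.lib

// Sketch: compile HLSL source offline into the intermediate bytecode.
// The resulting blob is what you would save to disk and ship.
ID3DBlob* CompileToBytecode()
{
    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompileFromFile(
        L"shader.hlsl",                  // HLSL source file (placeholder)
        nullptr, nullptr,                // no macros, no include handler
        "VSMain", "vs_5_0",              // entry point and target profile
        D3DCOMPILE_OPTIMIZATION_LEVEL3,  // do the heavy optimization offline
        0, &bytecode, &errors);
    if (FAILED(hr) && errors) {
        std::fprintf(stderr, "%s",
                     static_cast<const char*>(errors->GetBufferPointer()));
        errors->Release();
    }
    return bytecode; // nullptr on failure
}
```

The fxc.exe command-line tool (what Visual Studio invokes for .hlsl files) wraps this same compiler.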

FYI Nvidia also has PTX which serves a similar role for CUDA, and AMD has IL.

Edited by MJP


I personally have much, much more experience in OpenGL than DX, and in OpenGL you should never use precompiled shaders, IMHO.


OpenGL doesn't actually have precompiled shaders in the sense that Direct3D does.

Newer versions have allowed an application-controlled shader cache, though one is typically built into the GPU drivers anyway and it's not necessarily of any real benefit. Some people try to use these for distribution on certain mobile devices and the like, but it's a terrible idea: the GL shader cache blobs are NOT meant to be used on any machine aside from the one that generated them, period. The output of the shader cache depends on the specific hardware, the specific driver, the specific OS, and the specific versions of each of those. You might update a driver and find your cache is invalid, or some mobile phone might switch chip suppliers a couple of months after release and hence have a slightly (or severely) different GPU.

The only real use for OpenGL's shader cache is to compile the shaders on the end-user's machine during installation so that subsequent launches don't need to recompile anything (and then be prepared to recompile them when the game starts in the event the user dares to upgrade a driver or install a different GPU). In this use case and this use case only, OpenGL's shader cache is not only useful but can actually result in better load times than Direct3D.
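A hedged sketch of that cache using glGetProgramBinary/glProgramBinary (GL 4.1 / ARB_get_program_binary). It assumes a current GL context, a function loader such as glad or GLEW (include shown is an assumption), and a program object with its shaders already attached:

```cpp
#include <GL/glew.h> // assumed loader; any GL 4.1+ loader works
#include <vector>

// Save a linked program's driver-specific blob. The blob is only valid on
// this exact GPU + driver + OS combination, per the caveats above.
std::vector<char> SaveProgramBinary(GLuint prog, GLenum* outFormat)
{
    // Ask the driver to keep a retrievable binary around before linking.
    glProgramParameteri(prog, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);
    glLinkProgram(prog);

    GLint length = 0;
    glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &length);
    std::vector<char> blob(length);
    glGetProgramBinary(prog, length, nullptr, outFormat, blob.data());
    return blob;
}

// Try to restore a previously saved blob; returns false if the driver
// rejects it (e.g. after an update), in which case recompile from source.
bool LoadProgramBinary(GLuint prog, GLenum format,
                       const std::vector<char>& blob)
{
    glProgramBinary(prog, format, blob.data(),
                    static_cast<GLsizei>(blob.size()));
    GLint ok = GL_FALSE;
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    return ok == GL_TRUE;
}
```

Note that the format enum must be stored alongside the blob, and a failed LoadProgramBinary is the expected path after any driver change.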

You can do something similar with Direct3D by shipping HLSL, compiling it to bytecode, caching that, and then passing it in on each startup, but the final compilation steps will always be performed every time you call CreateVertexShader or friends. There's no way for your application to read or save the final hardware-specific shader code in Direct3D, much less load it.

However, it's typically a better idea in Direct3D to ship the compiled bytecode, if for no other reason than that it removes a dependency on d3dcompiler_XX.dll from your end product (which is not part of the standard end-user Direct3D redistributable). By not requiring external DLLs you can even make standalone .exe files that launch the game without needing an install process, by building your data into the .exe itself. That's handy for some types of games in some circumstances, since Fewer Steps to Play == More Players (so more in-game sales in a f2p title, more chance that a reviewer or producer tries it out, etc.). There are a few off-the-shelf engines that can build stand-alone game executables like that.
