Discard_One

DX11 compile shader

Recommended Posts

Discard_One    100
hi all,

I have a relatively complex shader that takes a significant amount of time to compile. Can I save the compiled bytecode and simply load it, instead of compiling it each time?

D3DX11CompileFromFile doesn't take a device, so the compiled code should be device-independent!? Is it also system-independent? Could I get a crash with a different DX version, or are there any other problems this could cause?

Another question: I don't understand the documentation on [branch] and [flatten]. Its example, if(x) { x = sqrt(x); }, is a joke!!! What interests me is if(x) { y = sqrt(z); }. The attributes for "switch" are also a mystery to me.

My shader is basically two functions with a switch statement in each, with roughly 20 different cases, numthreads(30, 20, 1) and switch(GTid.y). Between the two functions I need to synchronize with a barrier. The first function creates between 20 and 500 points stored in groupshared memory, plus a few matrices; the second constructs a chamfer box out of this data. What attribute should I use for the switch?

When calculating the matrices, 30 threads will calculate the same matrix. From my point of view it makes no sense to stop this and calculate it with just one thread; I'd probably save a few microwatts of power!? but gain no speed. Even worse, the check to do it with just one thread would cost additional time (whereby I haven't a clue how threads are scheduled on the GPU). I'm just not sure whether I'll get corrupted data if 30 threads write the same data to the same location at once. It doesn't seem to be the case, but it would be better to know for sure.

cheers
rick

MJP    19788
Quote:
Original post by Discard_One
hi all

I have a relatively complex shader that takes a significant amount of time to compile.

Can I save the compiled bytecode and simply load it, instead of compiling it each time?

D3DX11CompileFromFile doesn't take a device, so the compiled code should be device-independent!? Is it also system-independent?

Could I get a crash with a different DX version, or are there any other problems this could cause?



You can absolutely save the compiled bytecode. In fact, most games compile their shaders at build time and save them to a file, then at run time they load the compiled shaders so that startup is quicker. The bytecode is just a stream of shader assembly opcodes, which are totally device-independent. It doesn't matter if you load it with a different version of the SDK.
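A minimal sketch of such a cache, assuming you already have the blob produced by D3DX11CompileFromFile / D3DCompile (via ID3DBlob::GetBufferPointer() and GetBufferSize()). The D3D calls themselves are omitted so the sketch stays platform-independent; only the save/load round trip is shown, and the function names are made up:

```cpp
#include <cstdio>
#include <vector>

// Write a compiled shader blob to disk (e.g. the bytes from an ID3DBlob).
bool SaveBytecode(const char* path, const void* data, size_t size)
{
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    size_t written = std::fwrite(data, 1, size, f);
    std::fclose(f);
    return written == size;
}

// Read a previously saved blob back; returns an empty vector on failure.
std::vector<char> LoadBytecode(const char* path)
{
    std::vector<char> bytes;
    FILE* f = std::fopen(path, "rb");
    if (!f) return bytes;
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    bytes.resize(static_cast<size_t>(size));
    if (size > 0 && std::fread(bytes.data(), 1, bytes.size(), f) != bytes.size())
        bytes.clear();
    std::fclose(f);
    // The loaded buffer can be passed straight to e.g.
    // ID3D11Device::CreateComputeShader(bytes.data(), bytes.size(), ...).
    return bytes;
}
```

Check the compiled file's timestamp against the source .hlsl so you recompile when the shader changes.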

Share this post


Link to post
Share on other sites
DieterVW    724
You shouldn't need to place attributes on any flow control constructs until your algorithm is working and you are specifically trying to optimize in some fashion. The compiler will automatically do what it thinks is best and normally doesn't need the hint.
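For reference, the attributes just go in front of the statement, and the asker's own if(x) { y = sqrt(z); } case looks the same as the documentation's. A sketch with made-up variables:

```hlsl
// [branch]: emit real dynamic branching; only the taken side executes.
// [flatten]: evaluate both sides, then select the result (no divergence).
[branch]
if (x > 0.0f)
{
    y = sqrt(z);    // skipped entirely when the condition is false
}

// switch accepts [branch], [flatten], [forcecase], or [call]:
[branch]
switch (GTid.y)
{
    case 0:  /* ... */ break;
    default: break;
}
```

As noted above, the compiler usually picks well on its own; profile before forcing either form.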

I suggest reading the CUDA (NVIDIA) and Stream (AMD) papers on GPGPU computing. You can find them on the vendors' developer sites. The papers talk about how threads are scheduled, how to use memory barriers, and much more. They're very helpful and should answer your questions.
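On the shared-matrix question from the original post: the conventional pattern is to let a single thread write the groupshared data and have everyone else wait at a barrier, since D3D only guarantees well-defined results when writes don't race (even if 30 threads writing the identical value happens to work in practice). A sketch with hypothetical names (gMatrix, BuildMatrix):

```hlsl
groupshared float4x4 gMatrix;   // hypothetical shared matrix

[numthreads(30, 20, 1)]
void CSMain(uint3 GTid : SV_GroupThreadID)
{
    // One thread fills the shared data; the others skip the work.
    if (GTid.x == 0 && GTid.y == 0)
    {
        gMatrix = BuildMatrix();    // hypothetical helper
    }

    // Everyone waits until the shared write is visible to the group.
    GroupMemoryBarrierWithGroupSync();

    // ... all threads can now read gMatrix safely ...
}
```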


