Issue passing data into a compute shader


I have a chain of compute shaders that needs to process a block of data. The data comes in as 32-bit signed ints; the shaders shuffle it around, combine it, and then compress it a couple of times.

I have debugged most of the shaders and overridden any processing to simplify the passage of data and see what is going on, so really it is just passing the same data between shaders until it gets output to the screen.

The issue is that the data I write into the shader comes out nowhere near what is expected. Right now I'm just running test data to straighten everything out.

It seems like I am using the wrong DXGI format for the textures, and I have tried every combination that made any sense to me.

The first shader (Parse) reads from InputTex; this is the texture I write to from the CPU:

                DXGI.SampleDescription sampleDescription = new DXGI.SampleDescription();

                // multisample count
                sampleDescription.Count = 1;
                sampleDescription.Quality = 0;     

                _InputTextureDescription = new D3D11.Texture2DDescription();
                _InputTextureDescription.Width = _SampleCount*4;
                _InputTextureDescription.Height = _LineCount/4;
                _InputTextureDescription.MipLevels = _InputTextureDescription.ArraySize = 1;
                _InputTextureDescription.Format = DXGI.Format.R32_UInt;
                _InputTextureDescription.SampleDescription = sampleDescription;
                _InputTextureDescription.Usage = D3D11.ResourceUsage.Dynamic;
                _InputTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource;
                _InputTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.Write;


                if (_InputTex != null)
                    _InputTex.Dispose();
                _InputTex = new D3D11.Texture2D(_Device, _InputTextureDescription);
                if (_InputTexView != null)
                    _InputTexView.Dispose();
                _InputTexView = new D3D11.ShaderResourceView(_Device, _InputTex);

Coming out of that shader, it writes to _ParsedLinesOutTex:


                _ParsedTextureDescription = new D3D11.Texture2DDescription(); 
                _ParsedTextureDescription.Width = _SampleCount;  
                _ParsedTextureDescription.Height = _LineCount;
                _ParsedTextureDescription.MipLevels = _ParsedTextureDescription.ArraySize = 1;
                _ParsedTextureDescription.Format = DXGI.Format.R32_UInt;
                _ParsedTextureDescription.SampleDescription = sampleDescription;
                _ParsedTextureDescription.Usage = D3D11.ResourceUsage.Dynamic;
                _ParsedTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource;
                _ParsedTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.Write;


                if (_ParsedLinesTex != null)
                    _ParsedLinesTex.Dispose();
                _ParsedLinesTex = new D3D11.Texture2D(_Device, _ParsedTextureDescription);
                if (_ParsedLinesTexView != null)
                    _ParsedLinesTexView.Dispose();
                _ParsedLinesTexView = new D3D11.ShaderResourceView(_Device, _ParsedLinesTex);


                _ParsedTextureDescription.Usage = D3D11.ResourceUsage.Default;
                _ParsedTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource | D3D11.BindFlags.UnorderedAccess;
                _ParsedTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.None;


                if (_ParsedLinesOutTex != null)
                    _ParsedLinesOutTex.Dispose();
                _ParsedLinesOutTex = new D3D11.Texture2D(_Device, _ParsedTextureDescription);
                if (_ParsedLinesOutTexView != null)
                    _ParsedLinesOutTexView.Dispose();
                _ParsedLinesOutTexView = new D3D11.UnorderedAccessView(_Device, _ParsedLinesOutTex);

which is then copied into ParsedLinesTex to go into the next shader, and that shader writes to FusedLinesOutTex:

   
                _FusedTextureDescription = new D3D11.Texture2DDescription(); 
                _FusedTextureDescription.Width = _SampleCount;  
                _FusedTextureDescription.Height = _LineCount;
                _FusedTextureDescription.MipLevels = _FusedTextureDescription.ArraySize = 1;
                _FusedTextureDescription.Format = DXGI.Format.R32_UInt;
                _FusedTextureDescription.SampleDescription = sampleDescription;
                _FusedTextureDescription.Usage = D3D11.ResourceUsage.Dynamic;
                _FusedTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource;
                _FusedTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.Write;



                if (_FusedLinesTex != null)
                    _FusedLinesTex.Dispose();
                _FusedLinesTex = new D3D11.Texture2D(_Device, _FusedTextureDescription);
                if (_FusedLinesTexView != null)
                    _FusedLinesTexView.Dispose();
                _FusedLinesTexView = new D3D11.ShaderResourceView(_Device, _FusedLinesTex);


                _FusedTextureDescription.Usage = D3D11.ResourceUsage.Default;
                _FusedTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource | D3D11.BindFlags.UnorderedAccess;
                _FusedTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.None;

                if (_FusedLinesOutTex != null)
                    _FusedLinesOutTex.Dispose();
                _FusedLinesOutTex = new D3D11.Texture2D(_Device, _FusedTextureDescription);
                if (_FusedLinesOutTexView != null)
                    _FusedLinesOutTexView.Dispose();
                _FusedLinesOutTexView = new D3D11.UnorderedAccessView(_Device, _FusedLinesOutTex);

which is then copied into FusedLinesTex, which goes into the next shader, where the data is compressed to 16 bits; that shader outputs to:

                // create texture description
                _MagnitudeTextureDescription = new D3D11.Texture2DDescription();
                _MagnitudeTextureDescription.Width = _SampleCount;  
                _MagnitudeTextureDescription.Height = _LineCount;
                _MagnitudeTextureDescription.MipLevels = _MagnitudeTextureDescription.ArraySize = 1;
                _MagnitudeTextureDescription.Format = DXGI.Format.R16_UNorm;
                _MagnitudeTextureDescription.SampleDescription = sampleDescription;
                _MagnitudeTextureDescription.Usage = D3D11.ResourceUsage.Default;
                _MagnitudeTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource | D3D11.BindFlags.UnorderedAccess;
                _MagnitudeTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.None;


                if (_MagnitudeOutTex != null)
                    _MagnitudeOutTex.Dispose();
                _MagnitudeOutTex = new D3D11.Texture2D(_Device, _MagnitudeTextureDescription);
                if (_MagnitudeOutTexView != null)
                    _MagnitudeOutTexView.Dispose();
                _MagnitudeOutTexView = new D3D11.UnorderedAccessView(_Device, _MagnitudeOutTex);

This then gets copied over to another texture and compressed again, down to 8 bits.

The Parse shader seems to work fine; all the data coming out when I override the output is as expected (not sure why I need to divide the value down for uints, but it makes it work). Running without the override, the shader takes each input row and splits it up into 4 lines. The input is basically

l1s1, l2s1, l3s1, l4s1, l1s2, l2s2, l3s2, ... l3sN, l4sN, l5s1, l6s1, l7s1, l8s1, l5s2, ... and it spits out

l1s1, l1s2, l1s3, ... l1sN, l2s1, etc.
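The mapping the shader below implements can be worked out as plain index arithmetic. Here is a minimal sketch of the de-interleave, using small hypothetical dimensions in place of _SampleCount and _LineCount (Python, since it is pure index math):

```python
# Hypothetical small dimensions; the real code uses _SampleCount and _LineCount.
SAMPLES = 4
LINES = 8          # must be a multiple of 4

# Input texture layout: width = SAMPLES*4, height = LINES/4.
# Each input row interleaves 4 lines: l1s1, l2s1, l3s1, l4s1, l1s2, l2s2, ...
def make_input():
    tex = [[None] * (SAMPLES * 4) for _ in range(LINES // 4)]
    for y in range(LINES // 4):
        for x in range(SAMPLES * 4):
            tex[y][x] = (y * 4 + x % 4, x // 4)   # (line, sample) stored at (x, y)
    return tex

# The Parse shader's addressing: output texel (x, y) reads input texel (x*4 + y%4, y/4).
def parse(tex):
    return [[tex[y // 4][x * 4 + y % 4] for x in range(SAMPLES)]
            for y in range(LINES)]

deinterleaved = parse(make_input())
# Row y of the output now holds line y's samples in order: deinterleaved[y][x] == (y, x)
```

Running the mapping on the labeled texels confirms the shader's arithmetic undoes the interleave exactly.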

[numthreads(1, 1, 1)]
void Parse(uint3 threadID : SV_DispatchThreadID)
{
    int2 pos = threadID.xy;

    int sampleNumber = (pos.x * 4)+(pos.y%4);
    int lineNumber = (pos.y / 4);

    pos = int2(sampleNumber, lineNumber);
    
    uint output = Input1[pos];
    

// override output to make sure data is getting passed out of here correctly
    pos = threadID.xy;
    Output1[threadID.xy] =  pos.y / 4294967296.0;
    
}

On the rest of the shaders I basically have this to override any processing, to make sure the data is moving around correctly:

    Output1[threadID.xy] = Input1[threadID.xy];

On the Magnitude shader I have it output

 Output1[threadID.xy] = Input1[threadID.xy]*65536;

to just use the bottom 16 bits as a UNorm and clip everything above to 1.

So here, using 256 lines, the bottom 8 bits of Output1[threadID.xy] = pos.y / 4294967296.0; ramp from 0 to 255 as expected.

So the shaders all work together fine and will pass data from one to the next as expected, except for having to use a fractional output for uints, which is weird...

But when I try to write data into the texture for the first shader to use, it entirely stops making sense to me.

       private void writeDatatoTex(byte[] data)
        {
            try
            {
             // override data input with test data
                byte[] newData = new byte[data.Length]; 
                for (int y = 0; y < _LineCount / 4; y++)
                    for (int x = 0; x < _SampleCount * 4; x++)
                    {
                        newData[(x + y * _SampleCount * 4) * 4] = 0xff;//0;
                        newData[(x + y * _SampleCount * 4) * 4 + 1] = 0xff;//0;
                        newData[(x + y * _SampleCount * 4) * 4 + 2] = 0xff;//0;
                        newData[(x + y * _SampleCount * 4) * 4 + 3] = 0xff;// (byte)(y&0xff);
                    }


                int temprow = 0;
                DataStream mappedTexDataStream;
                // map (lock) the texture so the CPU can write to it
                DataBox mappedTex = _Device.ImmediateContext.MapSubresource(_InputTex, 0, D3D11.MapMode.WriteDiscard, D3D11.MapFlags.None, out mappedTexDataStream);

                texrowsize = mappedTex.RowPitch;

                rowsize = _InputTextureDescription.Width * 4;
                // scratch row sized to the driver's row pitch, which may be padded past rowsize
                if (templine == null || templine.Length != texrowsize)
                {
                    templine = new byte[texrowsize];
                }
                // bail out if the mapped stream cannot be written
                if (!mappedTexDataStream.CanWrite) { throw new InvalidOperationException("Cannot Write to the Texture"); }

                // write the data one row at a time, padding each row out to the row pitch
                for (int y = 0; y < _InputTextureDescription.Height; y++)
                {
                    Array.Copy(newData, temprow, templine, 0, rowsize);
                    mappedTexDataStream.WriteRange<byte>(templine, 0, texrowsize);
                    temprow += rowsize;
                }


                // unlock the resource
                _Device.ImmediateContext.UnmapSubresource(_InputTex, 0);
            }
            catch (Exception e)
            {
                // log instead of silently discarding the exception
                System.Diagnostics.Debug.WriteLine(e);
            }
        }
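One thing worth double-checking in writeDatatoTex is the row-pitch handling: Map returns a RowPitch that can be larger than Width*4, so each tightly packed CPU-side row has to land at a RowPitch-aligned offset in the mapped memory. A minimal sketch of that copy with made-up sizes (Python, since it is just byte bookkeeping):

```python
# Made-up sizes: a 16-texel-wide R32 texture (64 payload bytes per row)
# mapped with a hypothetical padded row pitch of 96 bytes.
width_bytes = 16 * 4
row_pitch = 96
height = 4

# Tightly packed CPU-side rows, one after another (like newData above).
src = bytes(i % 251 for i in range(width_bytes * height))
# Stand-in for the mapped subresource memory.
dst = bytearray(row_pitch * height)

for row in range(height):
    s = row * width_bytes          # source offset: packed rows
    d = row * row_pitch            # destination offset: pitch-aligned rows
    dst[d:d + width_bytes] = src[s:s + width_bytes]
    # bytes d+width_bytes .. d+row_pitch are pitch padding and stay untouched
```

If the pitch is ignored and rows are written back to back, every row after the first lands at the wrong offset and the shader reads garbage (or zeros) from most of the texture.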

Feeding 0xFF into every byte of the texture gets nothing through to the shader.

    uint output = Input1[pos];
    Output1[threadID.xy] = output;

gives me all black, and if I add

    uint output = Input1[pos]; 
    if (output == 0)   output = 1; 
    Output1[threadID.xy] = output;

then the whole image is white, so every value in the texture reads as 0, even though I'm writing 0xFFFFFFFF into every sample. I tried using SInt and writing 0x7FFFFFFF, and still nothing. How do you properly feed data into a 32-bit texture for use in a compute shader? I have no issues doing the exact same thing with 16-bit data into a UNorm.
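One thing worth ruling out is byte order: R32_UInt texels are little-endian, so the first byte of each 4-byte group in the upload buffer is the least significant byte of the uint the shader sees. A quick sketch with hypothetical sample values (Python's struct module):

```python
import struct

# Three hypothetical 32-bit sample values, packed the way the upload
# buffer lays them out: little-endian, 4 bytes per R32_UInt texel.
samples = [0xFFFFFFFF, 0x00000100, 0x7FFFFFFF]
packed = struct.pack("<3I", *samples)

# Round-trip to confirm the layout survives
unpacked = struct.unpack("<3I", packed)
```

For the all-0xFF test pattern endianness cannot be the culprit (every byte is the same), but for ramped data it changes which bits of the uint the ramp lands in.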

 

Getting some further weirdness: if I pass in ramped data going from 0 to 256 left to right, with the textures all set up as R32_UInt, but then cast the value to a float in the first shader, it will show input data. It is not correct, since converting a uint value packed into 32 bits to a float will very much change the values, but it does show data at that point and is no longer all 0s.

    float output = Input1[pos]; 
    Output1[threadID.xy] =  output ;


Keeping the data cast as an int or uint still gives me all 0s:

    uint output = Input1[pos];
    Output1[threadID.xy] =  output ;

gives me all black, and

    uint output = Input1[pos];
    if (output == 0) output = 1; 
    Output1[threadID.xy] = output ;

gives me all white.


OK, I figured it out: I was declaring the output textures incorrectly. I was using

Texture2D Input1 : register(t0); 
RWTexture2D<float4> Output1 : register(u0); 

and moved to

Texture2D<int> Input : register(t0);
Texture2D<int> ParsedData : register(t1);
Texture2D<int> FusedData : register(t2);
   
RWTexture2D<int> ParsedOut : register(u0);
RWTexture2D<int> FusedOut : register(u1);
RWTexture2D<float> MagOut : register(u2);

and I am getting a proper ramp from the input data. I'm still not sure why the last one has to be a float when I am using a UNorm, but it works now.
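For what it's worth, the reason the last one has to be a float: UNorm formats are exposed to shaders as normalized floats in [0, 1], and the hardware does the integer conversion on store. The R16_UNorm quantization amounts to this (sketched in Python):

```python
# R16_UNorm: a shader-side float in [0, 1] is stored as round(f * 65535).
def unorm16_encode(f):
    f = min(max(f, 0.0), 1.0)      # out-of-range values clamp first
    return int(round(f * 65535))

def unorm16_decode(u):
    return u / 65535.0             # back to the float the shader reads
```

This is also why the earlier *65536 trick works: values above 1.0 simply clamp to the maximum stored code.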

Edited by ucfchuck
