


Community Reputation

218 Neutral

About GoodFun

  1. HLSL Language Service

    I've just recently started to use NShade, and I would definitely love to see HLSL elevated to a first-class citizen... My shaders are getting ever more complex, and the more support I get from my dev environment, the better... Thanks for all the effort you guys are putting in to push this technology toward the mainstream. Marcel
  3. [SlimDX] DirectX 11 Query

    At this point I'm trying to time the execution of the ComputeShader. Eventually I will have another Compute or Pixel shader running after this initial one that will use the resource created by the first ComputeShader.
  4. [SlimDX] DirectX 11 Query

    Adam, I saw that possibility, but that means moving the data back over the PCIe bus... I don't want to do that; I just need to know when the shader has completed...
  5. [SlimDX] DirectX 11 Query

    Still no one with an answer to this???
  6. [SlimDX] DirectX 11 Query

    Spent some more time looking for an answer, but information around this is very slim at this point... Has anyone on the SlimDX team worked with DirectCompute yet?
  7. DirectX 11 Shader Debugging

    If you're using nVidia cards, you can use Parallel Nsight, which lets you single-step through shaders, set breakpoints, inspect values, and so on... Pretty nifty tool; it hasn't been around long yet, though, so it can have some teething issues... http://developer.nvidia.com/object/nsight-downloads.html Hope that helps Marcel
  8. Hi there, I'm working on a compute shader in DirectX 11. I want to dispatch the compute shader and then wait for it to complete. Under DirectX 10 I could use IsDataAvailable to check whether the query had completed; I haven't been able to find the corresponding method in DirectX 11. This is the code I'm currently using, reduced to just the necessary parts:

     SlimDX.Direct3D11.QueryDescription queryDescription = new QueryDescription(QueryType.Event, QueryFlags.None);
     SlimDX.Direct3D11.Query query = new Query(device, queryDescription);
     device.ImmediateContext.ComputeShader.Set(compute);
     device.ImmediateContext.ComputeShader.SetUnorderedAccessView(computeResult, 0);
     device.ImmediateContext.ComputeShader.SetShaderResource(pointerView, 0);
     device.ImmediateContext.ComputeShader.SetShaderResource(spanView, 1);
     device.ImmediateContext.Dispatch(8, 4096, 1);
     device.ImmediateContext.End(query);
     // want to check here on when the compute shader has finished
     string outFileName = Path.Combine(outFilePath, Path.ChangeExtension(Path.GetFileName(fileName), ".dds"));
     Texture2D.ToFile(device.ImmediateContext, outputTexture, ImageFileFormat.Dds, outFileName);

     Any help is greatly appreciated. Thanks Marcel
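For reference, in the native Direct3D 11 API the D3D10 IsDataAvailable pattern is replaced by polling GetData on an event query until it stops returning S_FALSE. A minimal C++ sketch of that pattern (the `device` and `context` variables are assumed, and this needs a real D3D11 device, so treat it as an illustration rather than runnable code):

```
// Assumes: ID3D11Device* device, ID3D11DeviceContext* context.
// End() inserts the event into the command stream after the Dispatch;
// GetData() returns S_FALSE until the GPU has actually reached it.
D3D11_QUERY_DESC desc = {};
desc.Query = D3D11_QUERY_EVENT;
ID3D11Query* query = nullptr;
device->CreateQuery(&desc, &query);

context->Dispatch(8, 4096, 1);
context->End(query);

BOOL done = FALSE;
while (context->GetData(query, &done, sizeof(done), 0) == S_FALSE)
{
    // Busy-wait; a real application would yield or do CPU work here.
}
query->Release();
```

SlimDX wraps the same entry point as a GetData overload on DeviceContext; the exact managed signature is worth checking against the SlimDX documentation.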
  9. Thanks, that was what I thought and was expecting... but for Microsoft to rule this out, I had to get it answered by someone with more C++ knowledge than myself. Thanks Marcel
  10. Hi there, I think I have asked this before, but since I have a ticket open with Microsoft and they want me to verify this once more, I'll ask again. I have found limitations on how many textures I can create, and they don't make sense to me... I am running the following code on a 48 GB machine with a GTX 480; see the table below for results.

     SlimDX.Direct3D11.Device device = new SlimDX.Direct3D11.Device(DriverType.Hardware, DeviceCreationFlags.None);
     List<Texture2D> textureList = new List<Texture2D>();
     int width = 8192;
     int height = 4096;
     for (int i = 0; i < 100000; i++)
     {
         try
         {
             Texture2DDescription texDesc = new Texture2DDescription();
             texDesc.ArraySize = 1;
             texDesc.BindFlags = BindFlags.ShaderResource;
             texDesc.CpuAccessFlags = CpuAccessFlags.None;
             texDesc.Format = SlimDX.DXGI.Format.R16_Float;
             texDesc.Height = height;
             texDesc.MipLevels = 1;
             texDesc.OptionFlags = ResourceOptionFlags.None;
             texDesc.SampleDescription = new SampleDescription(1, 0);
             texDesc.Usage = ResourceUsage.Default;
             texDesc.Width = width;
             Texture2D texture = new Texture2D(device, texDesc);
             textureList.Add(texture);
         }
         catch (Exception ex)
         {
             MessageBox.Show("Error at texture " + i + " : " + ex.ToString());
             break;
         }
     }
     foreach (Texture2D texture in textureList)
     {
         texture.Dispose();
     }

     Now these are the results I'm getting for different texture sizes...
     Texture Dimension   Format      Size MB   Max # Textures   Memory Used GB
     256x256             R16_Float     0.125           62,256             7.60
     512x256             R16_Float     0.25            31,040             7.58
     512x512             R16_Float     0.5             15,520             7.58
     1024x512            R16_Float     1                7,760             7.58
     2048x1024           R16_Float     4                1,940             7.58
     4096x2048           R16_Float    16                  485             7.58
     8192x4096           R16_Float    64                  485            30.31
     8192x8192           R16_Float   128                  485            60.63
     16384x8192          R16_Float   256                  242            60.50
     16384x16384         R16_Float   512                  121            60.50
     16384x16384         R32_Float  1024                   60            60.00

     As you can see, the first few rows seem to be limited to about 7.6 GB, one row is limited at a bit over 30 GB, and the remaining rows are limited at 60 GB. Even though I only have 48 GB of physical RAM in this computer, I was able to create all those textures up to the limits stated above... If any of the SlimDX devs could verify that this is indeed not a problem stemming from SlimDX, that would be greatly appreciated. Thanks for your help Marcel
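As a sanity check on the "Size MB" column above, the footprint of a single-mip, non-multisampled texture is just width × height × bytes per texel (2 bytes for R16_Float, 4 for R32_Float). A small sketch of that arithmetic:

```cpp
#include <cassert>
#include <cstdint>

// Per-texture footprint for a mip-less, non-MSAA texture:
// bytes = width * height * bytesPerTexel (R16_Float = 2, R32_Float = 4).
static uint64_t TextureBytes(uint64_t width, uint64_t height, uint64_t bytesPerTexel)
{
    return width * height * bytesPerTexel;
}
```

For example, TextureBytes(8192, 4096, 2) gives 64 MiB, matching the 8192x4096 row, and 62,256 textures at 0.125 MB each is roughly the 7.6 GB seen in the first row.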
  11. I am working on a compute shader and I want to do some speed tests... I read somewhere that compute shaders are executed asynchronously, and I wonder if that is the case with the following call:

     device.ImmediateContext.Dispatch(1, height, 1);

     Can I just time how long this call takes, or do I need to use queries or map the result set to get accurate measurements? Thanks Marcel
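Timing the Dispatch call itself measures only command submission, since the work runs asynchronously on the GPU. GPU-side time is usually taken from a pair of timestamp queries bracketed by a timestamp-disjoint query, which reports the tick frequency. The conversion from two raw timestamps to milliseconds is just the following (the function name is mine):

```cpp
#include <cassert>
#include <cstdint>

// Converts two GPU timestamps (in ticks) and the tick frequency reported by
// a timestamp-disjoint query into elapsed milliseconds.
static double GpuElapsedMs(uint64_t startTicks, uint64_t endTicks, uint64_t frequency)
{
    return static_cast<double>(endTicks - startTicks) * 1000.0
         / static_cast<double>(frequency);
}
```

The disjoint query also carries a flag indicating whether the timestamps are reliable; when it is set, the measurement should be discarded.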
  12. I was just looking through the SlimDX source code and saw that only those two fields are being used... I changed the code to use the number of elements for the width, and it looks like I'm getting the correct buffer views now... Thanks for the quick answer... Marcel
  13. Hi there, I think I might have run into a bug in the DirectX 11 version of the CreateShaderResourceView code of SlimDX... I am creating a buffer with compressed data and then create two ShaderResourceViews on top of it with different offsets and counts. After creating the view, it shows the element count as 0 even though I passed the correct count into the function... When I access the view, the first few entries come out OK; after that I get 0s. I am using Windows 7, a GTX 480, the latest nVidia driver, and SlimDX built for 64-bit off the latest SVN branch. Anyone got some insight on this? This is the code I'm using to create the view:

     public static ShaderResourceView CreateShaderResourceView(Device device, Direct3D11.Resource resource, uint offset, uint recordSize, uint nbrRecords)
     {
         ShaderResourceViewDescription description = new ShaderResourceViewDescription();
         description.ElementOffset = (int)offset;
         description.ElementWidth = (int)recordSize;
         description.ElementCount = (int)nbrRecords;
         description.Dimension = ShaderResourceViewDimension.Buffer;
         description.FirstElement = 0;
         description.Flags = ShaderResourceViewExtendedBufferFlags.None;
         description.Format = Format.R32_UInt;
         ShaderResourceView result = new ShaderResourceView(device, resource, description);
         return result;
     }
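For what it's worth, in the underlying D3D11_BUFFER_SRV structure FirstElement/NumElements share storage with ElementOffset/ElementWidth (they are unions), and both are counted in elements, not bytes: for an R32_UInt view, one element is 4 bytes. A hypothetical helper showing that mapping (the struct and function names here are mine, for illustration only):

```cpp
#include <cassert>
#include <cstdint>

// Element range for a typed buffer SRV over R32_UInt (4 bytes per element).
// Field names mirror D3D11_BUFFER_SRV, where FirstElement/NumElements
// alias ElementOffset/ElementWidth.
struct BufferSrvRange
{
    uint32_t firstElement;  // index of the first 32-bit element
    uint32_t numElements;   // number of 32-bit elements in the view
};

// Maps a byte offset plus a count of 32-bit words into element indices.
static BufferSrvRange MakeRange(uint32_t byteOffset, uint32_t wordCount)
{
    return { byteOffset / 4u, wordCount };
}
```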
  14. Never mind... it helps to include the correct version of the DLL in your project... yeah, it's one of those days...
  15. I'm trying to get at the actual bit representation of the Half data type in SlimDX, with no success so far... Is there a reason I don't see the RawValue property in C#? I see this listed under the public: section:

     /// <summary>
     /// Gets or sets the raw 16 bit value used to back this half-float.
     /// </summary>
     property System::UInt16 RawValue
     {
         System::UInt16 get();
         void set( System::UInt16 value );
     }
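If the property isn't surfaced in the managed API, the 16-bit pattern can also be recovered by converting the float by hand. A minimal float-to-IEEE-754-binary16 conversion (truncating rather than rounding, denormals flushed to signed zero; enough for inspecting ordinary values):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Minimal float -> binary16 bit conversion. Truncates mantissa bits
// (no round-to-nearest), flushes denormals to signed zero, and maps
// overflow to infinity. NaN payloads are not preserved exactly.
static uint16_t FloatToHalfBits(float value)
{
    uint32_t f;
    std::memcpy(&f, &value, sizeof(f));  // reinterpret the float's bits

    uint16_t sign     = static_cast<uint16_t>((f >> 16) & 0x8000u);
    int32_t  exponent = static_cast<int32_t>((f >> 23) & 0xFFu) - 127 + 15;
    uint32_t mantissa = f & 0x7FFFFFu;

    if (exponent <= 0)  return sign;            // underflow -> signed zero
    if (exponent >= 31) return sign | 0x7C00u;  // overflow  -> infinity
    return static_cast<uint16_t>(sign | (exponent << 10) | (mantissa >> 13));
}
```

For example, FloatToHalfBits(1.0f) yields 0x3C00, the canonical binary16 encoding of 1.0.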