rtclawson

Members
  • Content count

    18
  • Joined

  • Last visited

Community Reputation

168 Neutral

About rtclawson

  • Rank
    Member
  1. I am writing to a 3D texture with the format SlimDX.DXGI.Format.R16_UNorm, using the following call:

DataBox b = _device.ImmediateContext.MapSubresource(_volumeTexture, 0, 0, MapMode.WriteDiscard, MapFlags.None);

For a texture that is 512x512x512, I get an E_OUTOFMEMORY error, yet 512 * 512 * 512 * 2 = 268,435,456 bytes. The only other graphics objects that have been created are two small buffers for vertex lists and a 2D texture that is 512x512. My graphics card is an AMD FirePro V5900 (FireGL V), which should have at least 2GB of video memory, and my system has 4GB of RAM. In short, I don't see how it is possible to be out of memory in this case. How should I go about debugging this problem? Are there any common errors I should be looking for? (See the Texture3D sketch after this list.)
  2. The following code is used in our DirectX 9 tests to profile the time it takes to complete a task. My question is: what would be the equivalent in DirectX 11?

SlimDX, DirectX 9:

/// <summary>
/// Flushes the queue of rendering commands. This is implemented as a blocking call.
/// </summary>
public void Flush()
{
    // From the DirectX SDK documentation: "Accurately Profiling Direct3D API Calls (Direct3D 9)"
    // Create an Event query from the current device
    using (Query q = new Query(_device, QueryType.Event))
    {
        // Add an end marker to the command buffer queue.
        q.Issue(Issue.End);

        // Empty the command buffer and wait until the GPU is idle.
        while (q.GetData<bool>(true) == false)
        {
            // CPU spinning
            // NOTE: device could be lost while querying driver
        }
    }
}

Here is my attempt. I have tried different combinations of calls, but nothing seems to work. Specifically, I get the error "DXGI_ERROR_INVALID_CALL" on the GetData call. If I uncomment the Begin, Flush, and End calls, I instead get the error "D3D11 ERROR: ID3D11DeviceContext::Begin: Begin is not supported and cannot be invoked for the D3D11_QUERY".

SlimDX, DirectX 11:

public void Flush()
{
    QueryDescription qd = new QueryDescription()
    {
        Flags = QueryFlags.None,
        Type = QueryType.Event
    };

    using (Query q = new Query(_device, qd))
    {
        //_device.ImmediateContext.Begin(q);
        //_device.ImmediateContext.Flush();
        //_device.ImmediateContext.End(q);

        while (_device.ImmediateContext.GetData<bool>(q, AsynchronousFlags.DoNotFlush) == false)
        {
            // CPU spinning
            // NOTE: device could be lost while querying driver
        }
    }
}

Any help? (See the event-query sketch after this list.)
  3. Someone can correct me, but I don't believe it is possible to compute the high quality gradients without doing more texture samples. I think what is being said in the article is that the same procedure for calculating the pixel value can be used to calculate the gradients, but they are separate processes. Under this model, there are 8 texture samples for the pixel value, and 8 samples each for the 3 cardinal directions. An extra 24 texture samples isn't as great as I would have hoped, but the gradient does look better than when it is computed using central differencing.
  4. Someone can correct me, but I don't believe it is possible to compute the high quality gradients without doing more texture samples. I think what is being said in the article is that the same procedure for calculating the pixel value can be used to calculate the gradients, but they are separate processes. Under this model, there are 8 texture samples for the pixel value, and 8 samples each for the 3 cardinal directions. An extra 24 texture samples isn't as great as I would have hoped, but the gradient does look better than when it is computed using central differencing.
  5. I am using a method described in the GPU Gems book (http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter20.html) for Fast Third-Order Texture Filtering. I have it working well for filtering volume textures. I would now like to calculate 3D gradients for use as surface normals for shading. The article states that this is possible as part of the filtering step and offers an example in one dimension. It hand-waves the extension to multiple dimensions, though, and it isn't at all clear how that leap should be made. Here is the relevant sentence: "To compute the gradient in higher dimensions, we obtain the corresponding filter kernels via the tensor product of a 1D derived cubic B-spline for the axis of derivation, and 1D (nonderived) cubic B-splines for the other axes." Any ideas on how to accomplish this? Obviously I could just sample the texture a few more times and calculate the gradient on my own, but since I already have the samples from the filtering, it would be faster if I could calculate the gradient from them. Thanks for your help! Note: this post also appears in the Graphics Programming and Theory forum; I am not sure in which forum I will find the right person to reply.
  6. I am using a method described in the GPU Gems book (http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter20.html) for Fast Third-Order Texture Filtering. I have it working well for filtering volume textures. I would now like to calculate 3D gradients for use as surface normals for shading. The article states that this is possible as part of the filtering step and offers an example in one dimension. It hand-waves the extension to multiple dimensions, though, and it isn't at all clear how that leap should be made. Here is the relevant sentence: "To compute the gradient in higher dimensions, we obtain the corresponding filter kernels via the tensor product of a 1D derived cubic B-spline for the axis of derivation, and 1D (nonderived) cubic B-splines for the other axes." Any ideas on how to accomplish this? Obviously I could just sample the texture a few more times and calculate the gradient on my own, but since I already have the samples from the filtering, it would be faster if I could calculate the gradient from them. Thanks for your help! (See the gradient-kernel note after this list.)
  7. I am doing tricubic interpolation of a volume texture. The result is much smoother than nearest-neighbor interpolation, but there are artifacts along gradients in the image. I'm in Direct3D 9 and SlimDX, with Shader Model 3. I am pretty sure I am taking into account the half-pixel shift between pixel coordinates and texel coordinates. Any ideas? I've posted the code as well. You can assume the pixel shader calls tricubic_naive and returns the result.

EDIT: I should have mentioned also that the dimensions of the volume are not all the same. The volume is 512x512x87. This is a slice in the y-z plane, and similar artifacts occur in the x-z plane. However, the x-y plane is fine.

float cubic_interpolate_onedim(float4 p, float i)
{
    // Bezier cubic interpolation
    float om = 1.0 - i;
    return om*om*om*p.x + 3*om*om*i*p.y + 3*om*i*i*p.z + i*i*i*p.w;
}

float4 cubic_interpolate(float4x4 pixels, float interpolant)
{
    pixels = transpose(pixels);
    float grayVal = cubic_interpolate_onedim(pixels[0], interpolant);
    return float4(grayVal,  //r
                  grayVal,  //g
                  grayVal,  //b
                  1);       //a
}

float4 xInt(sampler3D tex, float x, float y, float z, float3 incs, float3 interpolants)
{
    float dx = incs.x;
    float4x4 row = float4x4(tex3D(tex, float3(x,          y, z)),
                            tex3D(tex, float3(x+dx,       y, z)),
                            tex3D(tex, float3(x+dx+dx,    y, z)),
                            tex3D(tex, float3(x+dx+dx+dx, y, z)));
    return cubic_interpolate(row, interpolants.x);
}

float4 yInt(sampler3D tex, float x, float y, float z, float3 incs, float3 interpolants)
{
    float dy = incs.y;
    float4 r0 = xInt(tex, x, y,          z, incs, interpolants);
    float4 r1 = xInt(tex, x, y+dy,       z, incs, interpolants);
    float4 r2 = xInt(tex, x, y+dy+dy,    z, incs, interpolants);
    float4 r3 = xInt(tex, x, y+dy+dy+dy, z, incs, interpolants);
    return cubic_interpolate(float4x4(r0, r1, r2, r3), interpolants.y);
}

float4 zInt(sampler3D tex, float x, float y, float z, float3 incs, float3 interpolants)
{
    float dz = incs.z;
    float4 r0 = yInt(tex, x, y, z,          incs, interpolants);
    float4 r1 = yInt(tex, x, y, z+dz,       incs, interpolants);
    float4 r2 = yInt(tex, x, y, z+dz+dz,    incs, interpolants);
    float4 r3 = yInt(tex, x, y, z+dz+dz+dz, incs, interpolants);
    return cubic_interpolate(float4x4(r0, r1, r2, r3), interpolants.z);
}

float4 tricubic_naive(sampler3D tex, float3 coord_grid, float3 size_grid)
{
    float3 index = floor(coord_grid);
    index -= float3(0.5, 0.5, 0.5);
    float3 fraction = frac(coord_grid);
    //interpolants should be [0..1]
    float3 interpolants = (float3(1.0, 1.0, 1.0) + fraction) / 3;
    //bc = bottom corner
    float3 bc = index / size_grid;
    float3 incs = float3(1.0/size_grid.x, 1.0/size_grid.y, 1.0/size_grid.z);
    return zInt(tex, bc.x, bc.y, bc.z, incs, interpolants);
}
  8. I found the error. I was changing the size of the window after I initialized the swap chain/viewport etc. Sometimes I wonder...
  9. So, I am not actually drawing the text. It is part of the Visual Studio debugging framework. My code is as bare-bones as you can get. I am basically doing what you would find in the SlimDX tutorial for drawing a triangle, except that I am not using the RenderForm class, which is why I suspect that difference is the cause of the problem. One other thing to point out is the pixelated triangle. It just seems like something is wrong, but I don't have the background in graphics to tell you what it is.
  10. When I use a SlimDX RenderForm in the Visual Studio 2012 debugger, the output looks fine. However, when I use a Windows Form, the debugging information in the window is huge and pixelated. Any ideas on what I am doing wrong? I've attached an image of what the Windows Form looks like.
  11. I found the issue. The code I posted was pulled from various parts of my framework, and as it turns out, in my framework I was creating a new RenderTargetView every frame. Sheesh...
  12. Unfortunately, culling was not the issue. This is what I used to make sure that culling was not the problem:

RasterizerStateDescription description = new RasterizerStateDescription
{
    CullMode = CullMode.None,
    FillMode = SlimDX.Direct3D11.FillMode.Solid,
};
RasterizerState rs = RasterizerState.FromDescription(_device, description);
_device.ImmediateContext.Rasterizer.State = rs;

Any other ideas?
  13. I have successfully created a window and cleared the screen, but I am having trouble taking the next step and drawing a triangle. When I run in the Visual Studio 2012 graphics debugger and view the rendering pipeline, it appears that the triangle is fine in the Input Assembler and Vertex Shader, but by the Output Merger window it is gone. I am running with native symbols and the DirectX SDK, but I'm not getting any errors or warnings. I flattened out my code into one contiguous segment, posted below. The effect fx file is included as well. Any ideas what may be going wrong?

EDIT: I forgot to mention that I resize the window to make it re-render many times, and triangles never appear. It is not a matter of culling triangles. (A resize-handling sketch appears after this list.)

Device device;
SwapChain swapChain;
RenderTargetView renderTargetView;
Effect effect;

//This is the window's Hwnd
Control control = Control.FromHandle(Hwnd);

//BEGIN: Init
SlimDX.DXGI.SwapChainDescription swapDesc = new SlimDX.DXGI.SwapChainDescription
{
    BufferCount = 1,
    Flags = SlimDX.DXGI.SwapChainFlags.AllowModeSwitch,
    IsWindowed = true,
    ModeDescription = new SlimDX.DXGI.ModeDescription(0, 0, new SlimDX.Rational(60, 1), SlimDX.DXGI.Format.R8G8B8A8_UNorm),
    OutputHandle = Hwnd,
    SampleDescription = new SlimDX.DXGI.SampleDescription(1, 0),
    SwapEffect = SlimDX.DXGI.SwapEffect.Discard,
    Usage = SlimDX.DXGI.Usage.RenderTargetOutput
};
SlimDX.Result res = Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug, swapDesc, out device, out swapChain);
using (var resource = Resource.FromSwapChain<SlimDX.Direct3D11.Texture2D>(swapChain, 0))
    renderTargetView = new RenderTargetView(device, resource);
var viewport = new SlimDX.Direct3D11.Viewport(0, 0, control.ClientRectangle.Width, control.ClientRectangle.Height);
device.ImmediateContext.Rasterizer.SetViewports(viewport);
device.ImmediateContext.OutputMerger.SetTargets(renderTargetView);
//END

//BEGIN: Get effect
var file_stream = System.Reflection.Assembly.GetExecutingAssembly().GetManifestResourceStream(FX_FILE);
byte[] buffer;
buffer = new byte[file_stream.Length];
file_stream.Read(buffer, 0, buffer.Length);
SlimDX.DataStream ds = new SlimDX.DataStream(buffer.Length, true, true);
ds.Write(buffer, 0, buffer.Length);
ds.Position = 0;
ShaderBytecode bc = new ShaderBytecode(ds);
bc = ShaderBytecode.Compile(buffer, "fx_5_0");
effect = new SlimDX.Direct3D11.Effect(device, bc);
//END CREATE EFFECT

//BEGIN: All the following is called on render
//color is set elsewhere to random values. Trippy, but just for debugging
SlimDX.Color4 slimColor = new SlimDX.Color4(color.ToArgb());
device.ImmediateContext.ClearRenderTargetView(renderTargetView, slimColor);

List<Vector3> vert;
//Set to random for debugging purposes. Random values are between 0-1
vert.Add(new Vector3((float)_rand.NextDouble(), (float)_rand.NextDouble(), (float)_rand.NextDouble()));
vert.Add(new Vector3((float)_rand.NextDouble(), (float)_rand.NextDouble(), (float)_rand.NextDouble()));
vert.Add(new Vector3((float)_rand.NextDouble(), (float)_rand.NextDouble(), (float)_rand.NextDouble()));

//simplified for now to Matrix.Identity
effect.GetVariableByName("WorldViewProj").AsMatrix().SetMatrix(Matrix.Identity);
Vector4 foregroundColor; //this is set elsewhere
effect.GetVariableByName("ForeColor").AsVector().Set(foregroundColor);

EffectTechnique tech = effect.GetTechniqueByName("RenderUber");
EffectPass pass = tech.GetPassByIndex(0);
pass.Apply(_renderer.Device.ImmediateContext);
_renderer.Signature = effect.GetTechniqueByName("RenderUber").GetPassByIndex(0).Description.Signature;

DataStream stream = new DataStream(vert.Count * Marshal.SizeOf(Vector3), true, true);
stream.WriteRange(vert.ToArray());
stream.Position = 0;
var vertexBuffer = new SlimDX.Direct3D11.Buffer(device, stream, _vertices.Count * Marshal.SizeOf(Vector3), ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);

var elements = new[] { new InputElement("POSITION", 0, SlimDX.DXGI.Format.R32G32B32_Float, 0) };
var sig = effect.GetTechniqueByIndex(0).GetPassByIndex(0).Description.Signature;
var layout = new InputLayout(device, sig, elements);
context.InputAssembler.InputLayout = layout;
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, Marshal.SizeOf(Vector3), 0));
context.Draw(vertexCount, 0); //vertexCount is 3 here
_swapChain.Present(0, SlimDX.DXGI.PresentFlags.None);

// File: EffectSimple.fx
// save with encoding: Western European (Windows) - Codepage 1252

//Global variables-------------------------------------------------------------
float4x4 WorldViewProj;
float4 ForeColor;

//Input Output Structures------------------------------------------------------
struct VS_INPUT
{
    float3 Position : POSITION;   // vertex position
};
struct VS_OUTPUT
{
    float4 Position : SV_POSITION; // vertex position
};
struct PS_OUTPUT
{
    float4 RGBColor : SV_TARGET0;  // Pixel color
};

//Vertex Shaders---------------------------------------------------------------
VS_OUTPUT RenderSceneVS( VS_INPUT In )
{
    VS_OUTPUT Output;
    Output.Position = mul(float4(In.Position, 1), WorldViewProj);
    return Output;
}

//Pixel Shaders----------------------------------------------------------------
PS_OUTPUT RenderScene2D( VS_OUTPUT In )
{
    PS_OUTPUT Output;
    Output.RGBColor = ForeColor;
    return Output;
}

//Techniques-------------------------------------------------------------------
technique11 RenderUber
{
    pass P
    {
        SetVertexShader( CompileShader( vs_4_0, RenderSceneVS() ) );
        SetPixelShader ( CompileShader( ps_4_0, RenderScene2D() ) );
    }
}
  14. So far, this is the best way I can think of for modifying the current RasterizerStateDescription:

class Renderer
...
public bool Wireframe
{
    set
    {
        RasterizerStateDescription curr = _device.ImmediateContext.Rasterizer.State.Description;
        RasterizerStateDescription description = new RasterizerStateDescription
        {
            CullMode = curr.CullMode,
            DepthBias = curr.DepthBias,
            DepthBiasClamp = curr.DepthBiasClamp,
            FillMode = (value ? FillMode.Wireframe : FillMode.Solid),
            IsAntialiasedLineEnabled = curr.IsAntialiasedLineEnabled,
            IsDepthClipEnabled = curr.IsDepthClipEnabled,
            IsFrontCounterclockwise = curr.IsFrontCounterclockwise,
            IsMultisampleEnabled = curr.IsMultisampleEnabled,
            IsScissorEnabled = curr.IsScissorEnabled,
            SlopeScaledDepthBias = curr.SlopeScaledDepthBias,
        };
        RasterizerState rs = RasterizerState.FromDescription(_device, description);
        _device.ImmediateContext.Rasterizer.State = rs;
    }
}

This seems necessary because the ImmediateContext's Rasterizer.State.Description is read-only. Is there a simpler method for modifying one part of the state at a time, or do I have the right idea? Alternatively, should I keep track of state in my own code and just set the RasterizerStateDescription once? (See the cached-state sketch after this list.) Thanks.
  15. I decided to configure the texture using option 2. The following gives me an INVALIDARGS error in the Texture2D constructor. Could it be that I have not configured the device correctly? (See the texture-description sketch after this list.)

Texture2DDescription description = new Texture2DDescription
{
    ArraySize = 1,
    Format = _format,
    Width = _width,
    Height = _height,
    OptionFlags = ResourceOptionFlags.None,
    Usage = ResourceUsage.Dynamic,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.Write,
    SampleDescription = new SlimDX.DXGI.SampleDescription(1, 0),
    MipLevels = _levelCount,
};
_texture = new SlimDX.Direct3D11.Texture2D(_device, description);

Here is the device creation code:

Device device;
SlimDX.DXGI.SwapChainDescription swapDesc = new SlimDX.DXGI.SwapChainDescription
{
    BufferCount = 1,
    Flags = SlimDX.DXGI.SwapChainFlags.AllowModeSwitch,
    IsWindowed = true,
    ModeDescription = new SlimDX.DXGI.ModeDescription(0, 0, new SlimDX.Rational(60, 1), SlimDX.DXGI.Format.R8G8B8A8_UNorm),
    OutputHandle = handle,
    SampleDescription = new SlimDX.DXGI.SampleDescription(1, 0),
    SwapEffect = SlimDX.DXGI.SwapEffect.Discard,
    Usage = SlimDX.DXGI.Usage.RenderTargetOutput
};
Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug, swapDesc, out device, out _swapChain);

I am new to this stuff. The answer may be obvious.
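Notes and sketches referenced in the posts above:

Regarding the MapSubresource question in post 1: to be mappable with MapMode.WriteDiscard, a resource has to be created with dynamic usage and CPU write access. The original creation code isn't shown, so the following is only a plausible sketch of a Texture3DDescription for a 512x512x512 R16_UNorm volume; _device is assumed to be the same device field used elsewhere in these posts.

// Hypothetical sketch, not the original creation code: a 512x512x512 R16_UNorm
// volume that can be mapped with MapMode.WriteDiscard needs dynamic usage
// and CPU write access.
Texture3DDescription volumeDesc = new Texture3DDescription
{
    Width = 512,
    Height = 512,
    Depth = 512,
    MipLevels = 1,                           // dynamic resources are limited to a single subresource
    Format = SlimDX.DXGI.Format.R16_UNorm,
    Usage = ResourceUsage.Dynamic,           // required for WriteDiscard mapping
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.Write,
    OptionFlags = ResourceOptionFlags.None
};

// 512 * 512 * 512 * 2 bytes (R16_UNorm) = 268,435,456 bytes, matching the figure in the post.
Texture3D _volumeTexture = new Texture3D(_device, volumeDesc);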
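Regarding the profiling question in post 2: for a Direct3D 11 event query, Begin is not supported (which matches the error quoted in the post); only End is issued, and GetData is then polled until the GPU has passed the marker. The following is a minimal sketch under that assumption, using the same SlimDX types as the post. Calling Flush before spinning (rather than passing AsynchronousFlags.DoNotFlush) is my assumption about the intent, since the goal is to empty the command buffer.

// Sketch of a D3D11 equivalent of the D3D9 event-query flush shown in post 2.
public void Flush()
{
    QueryDescription qd = new QueryDescription
    {
        Flags = QueryFlags.None,
        Type = QueryType.Event
    };

    using (Query q = new Query(_device, qd))
    {
        // Event queries have no Begin; End alone inserts the marker.
        _device.ImmediateContext.End(q);

        // Submit any buffered commands so the GPU can actually reach the marker.
        _device.ImmediateContext.Flush();

        // Spin until the GPU has processed everything up to the marker.
        while (_device.ImmediateContext.GetData<bool>(q) == false)
        {
            // CPU spinning
            // NOTE: the device could be lost while querying the driver.
        }
    }
}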
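Regarding the tensor-product sentence quoted in posts 5 and 6: for a separable (tensor-product) reconstruction, the trivariate kernel is a product of 1D kernels, so taking the derivative along one axis simply replaces that axis's cubic B-spline with its derivative while the other two axes keep the ordinary B-spline. Written out for the x-gradient of a volume with coefficients c_{ijk} and cubic B-spline beta (this is the general separable-filtering identity, not code from the article):

f(x,y,z) = \sum_{i,j,k} c_{ijk}\, \beta(x-i)\, \beta(y-j)\, \beta(z-k)

\frac{\partial f}{\partial x}(x,y,z) = \sum_{i,j,k} c_{ijk}\, \beta'(x-i)\, \beta(y-j)\, \beta(z-k)

In other words, "a 1D derived cubic B-spline for the axis of derivation" means substituting \beta' for \beta along that one axis; the other two axes are filtered exactly as for the value, which is consistent with the sample counts discussed in posts 3 and 4.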
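Regarding the window resizing mentioned in posts 8 and 13: here is a sketch of one common way to recreate the size-dependent objects when the window is resized after initialization, reusing the device, swapChain, renderTargetView, and control names from post 13. This illustrates the general pattern rather than the code actually used.

// Sketch: recreate the size-dependent objects when the window is resized.
void OnResize(Control control)
{
    // The old render target view must be released before the buffers can be resized.
    renderTargetView.Dispose();

    swapChain.ResizeBuffers(1,
                            control.ClientRectangle.Width,
                            control.ClientRectangle.Height,
                            SlimDX.DXGI.Format.R8G8B8A8_UNorm,
                            SlimDX.DXGI.SwapChainFlags.AllowModeSwitch);

    // Rebuild the render target view from the resized back buffer.
    using (var resource = Resource.FromSwapChain<SlimDX.Direct3D11.Texture2D>(swapChain, 0))
        renderTargetView = new RenderTargetView(device, resource);

    // Reset the viewport and output merger to the new size.
    var viewport = new SlimDX.Direct3D11.Viewport(0, 0,
        control.ClientRectangle.Width, control.ClientRectangle.Height);
    device.ImmediateContext.Rasterizer.SetViewports(viewport);
    device.ImmediateContext.OutputMerger.SetTargets(renderTargetView);
}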
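Regarding the question at the end of post 14: since Direct3D 11 state objects are immutable, the alternative hinted at in the post, tracking the desired state yourself, usually amounts to building the needed RasterizerState objects once and binding the appropriate one. A minimal sketch of that cached approach, assuming the same _device field as above and default-ish values for the remaining description members:

// Sketch: pre-build the two rasterizer states once and swap between them,
// instead of deriving a new description from the context on every change.
class Renderer
{
    private readonly SlimDX.Direct3D11.Device _device;
    private readonly RasterizerState _solidState;
    private readonly RasterizerState _wireframeState;

    public Renderer(SlimDX.Direct3D11.Device device)
    {
        _device = device;

        RasterizerStateDescription desc = new RasterizerStateDescription
        {
            CullMode = CullMode.Back,
            FillMode = FillMode.Solid,
            IsDepthClipEnabled = true,
        };
        _solidState = RasterizerState.FromDescription(_device, desc);

        desc.FillMode = FillMode.Wireframe;
        _wireframeState = RasterizerState.FromDescription(_device, desc);
    }

    public bool Wireframe
    {
        set
        {
            _device.ImmediateContext.Rasterizer.State = value ? _wireframeState : _solidState;
        }
    }
}

This also avoids constructing a new RasterizerState object every time the property is set.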
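Regarding post 15: to the best of my knowledge, a resource created with ResourceUsage.Dynamic may only have a single subresource, so MipLevels (and ArraySize) must be 1; a _levelCount greater than 1 combined with Dynamic usage would by itself make the Texture2D constructor reject the description. Below is the same description with that one change; whether this is the actual cause of the error is an assumption.

// Sketch: a dynamic, CPU-writable shader resource. Dynamic usage implies a single
// subresource, so MipLevels is fixed at 1 here.
Texture2DDescription description = new Texture2DDescription
{
    ArraySize = 1,
    Format = _format,
    Width = _width,
    Height = _height,
    OptionFlags = ResourceOptionFlags.None,
    Usage = ResourceUsage.Dynamic,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.Write,
    SampleDescription = new SlimDX.DXGI.SampleDescription(1, 0),
    MipLevels = 1, // dynamic resources cannot have a mip chain
};
_texture = new SlimDX.Direct3D11.Texture2D(_device, description);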