About dssannyasi

  1. Vertex Shader Clamps 0-1... Need float!

    Thanks for the responses. I was writing this up and found the issue and have resolved it, though I'm still not certain why it's an issue; maybe it's the clamping you spoke of, Dancin Fool. I've included the shaders I was using in case anyone can give me any more insight. While writing this post I noticed the difference between my shaders: the use of the Depth : TEXCOORD0 member in the output struct, which the poly shader reads in its pixel shader. For the point clouds I figured that since I only need points I shouldn't use a pixel shader, which is why the second shader only has a vertex shader. I've switched the struct to the one from the poly shader and added the same pixel shader, and now I get proper depth. As a test I also tried switching the poly shader to write to Color.w in the vertex shader and left out the Color.w = Depth line in the pixel shader, and sure enough even the polys were then clamped at 1. Does anyone know why this extra step is necessary? Why can't a vertex shader write floating-point values to the screen? And does the VertexFormat used when issuing the draw calls matter once the data is pushed to the graphics card? (I would think not, since my vertex format struct in SlimDX doesn't even have texture coordinates in the first place.) Thanks. Here is the shader with the pixel shader (this works on polys):
```hlsl
float4x4 WVP;
float4 camPos;
float near;
float far;

/////// STRUCTS ////////////////////////////////////////////
struct VS_INPUT
{
    float4 Color    : COLOR0;
    float4 Position : POSITION0;
};

struct VS_OUTPUT
{
    float4 Color    : COLOR0;
    float4 Position : POSITION0;
    float  Depth    : TEXCOORD0;
};

//// SHADER ///////////////////////////////////////////////
VS_OUTPUT VS( VS_INPUT Input )
{
    VS_OUTPUT Output;
    Output.Position = mul( Input.Position, WVP );
    Output.Color = Input.Color;
    Output.Depth = distance(Input.Position, camPos);
    return( Output );
}

float4 PS( VS_OUTPUT input ) : COLOR0
{
    float4 Color = input.Color;
    Color.w = input.Depth;
    return (Color);
}

////// Techniques /////////////////////////////////////////
technique depthRaw
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 VS();
        PixelShader  = compile ps_2_0 PS();
    }
}
```

Here's the vertex-only shader (clamps 0-1):

```hlsl
float4x4 WVP;
float4 camPos;
float near;
float far;

/////// STRUCTS ////////////////////////////////////////////
struct VS_INPUT
{
    float4 Color    : COLOR0;
    float4 Position : POSITION0;
};

struct VS_OUTPUT
{
    float4 Color    : COLOR0;
    float4 Position : POSITION0;
};

//// SHADERS //////////////////////////////////////////////
VS_OUTPUT VS( VS_INPUT Input )
{
    VS_OUTPUT Output;
    Output.Position = mul( Input.Position, WVP );
    Output.Color = Input.Color;
    //Output.Color.w = distance(Input.Position, camPos);              // NEED THIS TO WORK
    Output.Color.w = (distance(Input.Position, camPos) - near) / far; // THIS IS WORKAROUND
    return(Output);
}

////// Techniques /////////////////////////////////////////
technique pointDepth
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 VS();
    }
}
```
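The behavior above is consistent with how shader model 2.0 treats output semantics: values a vertex shader writes to a COLOR-semantic output are typically clamped to [0, 1] by the interpolator (they are treated like fixed-point diffuse color), while TEXCOORD interpolators carry full float range. As a hedged sketch of that distinction, the two hypothetical Python functions below model the two interpolator paths numerically:

```python
# Hypothetical model of the two SM 2.0 interpolator paths: COLOR-semantic
# outputs pass through a saturate (clamp to [0, 1]), while TEXCOORD
# interpolators pass floats through unchanged.

def through_color_interpolator(value):
    """COLOR semantics are clamped like fixed-point diffuse color."""
    return min(max(value, 0.0), 1.0)

def through_texcoord_interpolator(value):
    """TEXCOORD semantics keep full floating-point range."""
    return value

world_depth = 1000.0  # distance from the camera in world units
print(through_color_interpolator(world_depth))     # clamps to 1.0
print(through_texcoord_interpolator(world_depth))  # stays 1000.0
```

This is why routing the distance through Depth : TEXCOORD0 and writing it to Color.w in the pixel shader (after interpolation) preserves the raw value, while writing it to Color.w in the vertex shader does not.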
  2. I have written a point cloud viewer using SlimDX and DirectX 9. We use it to visualize and render images from some content creation software we're building here. Currently my renderer can display points and polys. The main purpose is to render EXR files with color and z-depth (floating-point values > 1) information in the alpha. My render target is 32-bit float, and I dump the framebuffer to an EXR file. When rendering polys it works perfectly; I just calculate the color in the pixel shader like this (HLSL):

```hlsl
color = Input.color;
color.w = distance(Input.Position, cameraPosition);
```

and I get values in real-world units away from the camera. So if an object is 1000 units away, the alpha of the EXR shows 1000 as that pixel's raw value. Now I'm trying to render just the vertices of point clouds using a vertex shader with the same calculation as above, but my depth values clamp themselves to 1. If I change the above code to

```hlsl
color = Input.color;
color.w = (distance(Input.Position, cameraPosition) - near) / far;
```

(near and far are my camera clipping distances), this scales the depth values into the 0-1 range and they output that way, but I need greater precision and real-world units! Is there a limitation on vertex shaders that won't allow these kinds of floating-point operations? Thanks
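The workaround in the post is an invertible linear remap, so as long as near and far are known at read-back time, the real-world distance can be recovered from the normalized value. A minimal sketch, assuming hypothetical clip distances:

```python
near, far = 0.1, 5000.0  # hypothetical camera clip distances

def normalize_depth(d):
    """The workaround from the post: scale world distance into ~[0, 1]."""
    return (d - near) / far

def denormalize_depth(n):
    """Invert the workaround to recover world units at read-back time."""
    return n * far + near

d = 1000.0
n = normalize_depth(d)
assert 0.0 <= n <= 1.0
print(denormalize_depth(n))  # recovers ~1000.0 (up to float rounding)
```

The precision concern is real, though: if the render target stores the normalized value at reduced precision, dividing by a large far distance compresses distant depths into few representable steps, which is exactly why writing raw units (as the poly path does) is preferable.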
  3. SlimDX - No depth sorting on a render target

    Thank you for that. It had something to do with my depth stencil: the viewport must have been creating one for me automatically, but the render target wasn't. Now I'm issuing device.DepthStencilSurface = Surface.CreateDepthStencil(). The format seems to be very picky here; I'm using D32SingleLockable and it's working on my card. The interesting thing now is that I'm writing directly to OpenEXR from my depth stencil and it's working properly, but the viewport is having the same problem the render used to have (unsorted). Do I need to make both targets (GUI window and render target) use the same depth stencil? Right now it's either one or the other. Thanks again.
  4. It has been a while since my first post on this subject (I've been working on some other projects), but it's been 3 months with no response to what I would think is a simple problem :( Here is the original post: Z Depth Error (order vs distance). The subject really says it all: I've got a GUI where the 3D view works perfectly, but as soon as I switch to a floating-point render target my depth sorting stops working. With the exception of the floating-point format of the render target, there is literally no difference code-wise between the viewport and the render target. Please, any assistance.
  5. Z Depth Error (order vs distance)

    MJP, thanks for the reply. I wanted to avoid posting all the code to simplify what others would have to read through (that, and it is pretty ugly code, since I'm just learning :) ).

1. Z-writes are enabled; I just omitted them from my post.
2. This is run just before my draw code, so if the renderFrame bool is triggered the render target changes and the frame draws as it would in the viewport, but to the render surface:

```csharp
if (ui.renderFrame)
{
    device.SetRenderTarget(0, renderSurface);
}
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, globals.bgColor, 1.0f, 0);
device.VertexFormat = VertexFormat.Position | VertexFormat.Diffuse;
device.SetRenderState(RenderState.ZEnable, true);
device.SetRenderState(RenderState.ZWriteEnable, true);
device.BeginScene();
```

3. As for #3, I've tried many combinations of formats, sampling, etc. I've enabled AutoDepthStencil and AutoDepthStencilFormat. Most modes I've found don't work (the device constructor fails); the ones that do work still output the same (wrong) depth. Instead of Surface.CreateRenderTarget() I've tried Surface.CreateDepthStencil(), but calling device.SetRenderTarget() with that surface fails.

I've uploaded a comparison image showing my problem: a print screen from the viewport is on the left, the output render is on the right (the color shift is from tone mapping the image to account for the float values). The ball was made using the sphere primitive from Maya, exporting the point positions as a Vector3[] and generating voxels at those positions. The order in which it renders is the same point order as the primitive in Maya; even the grid, whose draw call happens first, is under all of the voxels with no regard for z-depth.
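The symptom in the comparison image (later draws overwriting nearer geometry) is exactly what rendering without a working depth buffer produces. A small sketch contrasting the two behaviors, with two hypothetical fragments covering the same pixel:

```python
# Two fragments hitting the same pixel: a near one drawn first and a far
# one drawn last. Without a depth test the last draw wins; with a depth
# test the nearer fragment survives regardless of draw order.

fragments = [("near_voxel", 10.0), ("far_voxel", 500.0)]  # (color, depth), in draw order

def draw_order_only(frags):
    color = None
    for c, _ in frags:
        color = c  # each draw simply overwrites the pixel
    return color

def with_depth_test(frags):
    color, zbuf = None, float("inf")
    for c, z in frags:
        if z < zbuf:  # LESS-style depth compare against the z-buffer
            color, zbuf = c, z
    return color

print(draw_order_only(fragments))  # "far_voxel"  - the bug described above
print(with_depth_test(fragments))  # "near_voxel" - correct depth sorting
```

This is why a render target created without a matching depth stencil surface falls back to pure draw-order compositing, as the later posts in this thread confirm.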
  6. Using: DirectX 9 via SlimDX (Feb 2010). I'm creating an application to visualize point cloud data, and I'm trying to render out floating-point images, specifically depth. The problem is: in the viewport everything looks fine, but when I switch the render target to a different surface and save the resulting image to disk, my objects aren't rendered in the correct z order. What appears to be happening is that the surface draws the primitives in the order it receives them, so a cube it receives last gets rendered on top even if it's behind other objects. On a side note, I would prefer to output this data to the EXR format (currently DDS is the only float format that works). Does anyone know of something for C# that I may be able to use, or a way to get an array out of the Surface object? Any help, please. Here are the basics of my code, paraphrased a bit:

```csharp
Main()
{
    UI ui = new UI();
    PresentParameters pp = new PresentParameters();
    Direct3D d3d = new Direct3D();
    Device device = new Device(d3d, 0, DeviceType.Hardware, ui.viewer.Handle,
                               CreateFlags.HardwareVertexProcessing, pp);

    Surface windowSurface = device.GetRenderTarget(0);
    Surface renderSurface = Surface.CreateRenderTarget(device,
        pp.BackBufferWidth, pp.BackBufferHeight,
        Format.A32B32G32R32F, MultisampleType.FourSamples, 4, false);

    MessagePump.Run(ui, () =>
    {
        if (renderFrame)
            device.SetRenderTarget(0, renderSurface); // Switch surface

        device.SetRenderState(RenderState.ZEnable, true); // This doesn't help

        Draw(); // Regardless of surface, the draw code is always the same

        if (renderFrame)
        {
            renderSurface.toFile();                   // Save file (paraphrased)
            device.SetRenderTarget(0, windowSurface); // Set target back to UI
            renderFrame = false;
        }
    });
}
```
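On the "get an array out of the surface" question: an A32B32G32R32F surface is, once locked and read back, just a flat run of 16-byte pixels (four little-endian float32 channels). As a language-neutral sketch of that layout, the following uses Python's struct module on a hypothetical 2x2 buffer where the alpha channel carries world-space depth, as in this thread:

```python
import struct

# Hypothetical 2x2 A32B32G32R32F framebuffer: 4 float32 channels per pixel.
width, height = 2, 2
pixels = [
    (0.2, 0.4, 0.6, 1000.0),  # alpha carries world-space depth
    (0.1, 0.1, 0.1, 250.0),
    (0.9, 0.5, 0.3, 75.5),
    (0.0, 0.0, 0.0, 5000.0),
]

# Pack to the raw little-endian byte layout a locked surface would expose.
raw = b"".join(struct.pack("<4f", *p) for p in pixels)
assert len(raw) == width * height * 4 * 4  # 16 bytes per pixel

# Unpack the depth (alpha) channel back out for export.
depths = [struct.unpack_from("<4f", raw, i * 16)[3] for i in range(width * height)]
print(depths)  # [1000.0, 250.0, 75.5, 5000.0]
```

From a byte layout like this, any EXR writer that accepts raw float channels could be fed directly; the widths, values, and channel order here are assumptions for illustration, not the SlimDX API itself.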