qorthos

Member
  • Content count

    21
  • Joined

  • Last visited

Community Reputation

168 Neutral

About qorthos

  • Rank
    Member

Personal Information

  • Interests
    Programming
  1. Appears to be some PEBKAC.  It's working now, and I swear I changed nothing =\
  2. Okay, so this one is probably fairly simple, and hopefully it's just me not using RawInput for the keyboard correctly (incidentally, it's working just fine for the mouse).  This is my code:

    ```csharp
    public override void Initialize()
    {
        Device.RegisterDevice(UsagePage.Generic, UsageId.Keyboard, DeviceFlags.None);
        Device.RegisterDevice(UsagePage.Generic, UsageId.Mouse, DeviceFlags.None);

        SlimDX.RawInput.Device.KeyboardInput += Device_KeyboardInput;
        SlimDX.RawInput.Device.MouseInput += Device_MouseInput;

        base.Initialize();
    }

    private void Device_KeyboardInput(object sender, KeyboardInputEventArgs e)
    {
        keysThisFrame.Add(e.Key);
        keyStateChanged = true;

        if (e.State != KeyState.Pressed)
        {
            return;
        }
    }
    ```

    I put a breakpoint inside the if statement on the KeyState.  It never gets hit: I can press keys all I like, but I only ever see KeyState.Pressed.  It never tells me when a key is released, and I need that information.  Any ideas?
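For reference, the press/release bookkeeping the handler above is aiming for can be modeled language-agnostically; this Python sketch (the `KeyTracker` name and its methods are hypothetical stand-ins, not SlimDX API) shows why the missing release events matter:

```python
class KeyTracker:
    """Tracks which keys are currently held, given press/release events."""

    def __init__(self):
        self.down = set()

    def handle(self, key, pressed):
        # A press adds the key; a release removes it.  Without release
        # events (the symptom described above), keys would never clear.
        if pressed:
            self.down.add(key)
        else:
            self.down.discard(key)


tracker = KeyTracker()
tracker.handle("W", True)
tracker.handle("A", True)
tracker.handle("W", False)
print(sorted(tracker.down))  # → ['A']
```

Without the release path, `down` only ever grows, which is exactly the "never tells me when a key is released" problem.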
  3. [SlimDX] Blending woes.

    Well, this one should have been a no-brainer.  The output from my GBuffer pass had all zeros in the alpha channel, so when I set the blend state to alpha blending on the edge detection shader, my GBuffer output became completely transparent.  Here's what it looks like all combined: http://i.imgur.com/IoLPDPB.png
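The arithmetic behind that resolution is easy to check by hand: with standard SRC_ALPHA / INV_SRC_ALPHA blending, a source alpha of zero makes the source color vanish entirely.  A minimal Python model of the blend equation (the `alpha_blend` helper is illustrative, not any SlimDX call):

```python
def alpha_blend(src_rgb, src_a, dst_rgb):
    # Standard SRC_ALPHA / INV_SRC_ALPHA blending:
    #   out = src * src_a + dst * (1 - src_a)
    return tuple(s * src_a + d * (1 - src_a) for s, d in zip(src_rgb, dst_rgb))


# With a source alpha of 0 (the all-zero GBuffer alpha described above),
# the source color contributes nothing and the destination shows through:
print(alpha_blend((1.0, 0.5, 0.25), 0.0, (0.1, 0.1, 0.1)))  # → (0.1, 0.1, 0.1)
```

Writing 1.0 into the GBuffer alpha channel restores the expected `src * 1 + dst * 0` pass-through of the source color.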
  4. Hi everyone, I'm having trouble getting an edge detection shader to blend with the previously rendered image.  Quick background:

    I render a GBuffer (color, normal, depth).  I then render an image with just direct and ambient light values (to the …).  Finally, I use the normal and depth maps from the GBuffer to create my edges.

    Everything works fine when I render just the GBuffer and lighting.  I get this: http://imgur.com/iPxkXJc,mBCTA9P,aVi7B6U#0

    When I render just the edge detection shader I get this: http://imgur.com/iPxkXJc,mBCTA9P,aVi7B6U#1

    When I add in edge detection after the lighting render I get this: http://imgur.com/iPxkXJc,mBCTA9P,aVi7B6U#2

    I am stumped.  I don't think it's actually a blending issue.  I set the BlendState on the shader with this:

    ```hlsl
    BlendState SrcAlphaBlend
    {
        BlendEnable[0]           = TRUE;
        SrcBlend                 = SRC_ALPHA;
        DestBlend                = INV_SRC_ALPHA;
        BlendOp                  = ADD;
        SrcBlendAlpha            = ONE;
        DestBlendAlpha           = ONE;
        BlendOpAlpha             = ADD;
        RenderTargetWriteMask[0] = 0x0F;
    };
    ```

    I have a DepthStencilView set when I do the GBuffer render, but not when I do the lighting and edge detection.  Any ideas?
  5.   Doh, I didn't see that in your code.
  6. Try turning off backface culling in the rasterizer in case you're winding the vertices of your quad backwards.

    ```csharp
    var rsd = new RasterizerStateDescription()
    {
        CullMode = CullMode.None,
        DepthBias = 0,
        DepthBiasClamp = 0.0f,
        FillMode = FillMode.Solid,
        IsAntialiasedLineEnabled = false,
        IsDepthClipEnabled = false,
        IsFrontCounterclockwise = false,
        IsMultisampleEnabled = true,
        IsScissorEnabled = false,
        SlopeScaledDepthBias = 0.0f
    };
    RasterizerState rs = RasterizerState.FromDescription(device, rsd);
    context.Rasterizer.State = rs;
    ```
  7. SlimDX and SSAO

    Here's the output from my SSAO shader.  I can't figure out why there's a jagged line in it.  
  8. Hi everyone.  I'm coming up against a wall and need help!  I've been trying to figure out how to implement SSAO for over a week now and keep getting garbage.  A lot of the code I'm using is lifted from other places (MJP, Jose Mendez).  I think I understand it.  Mostly...

    I'm rendering my scene to a diffuse color view, a normal view (output.Normal = input.Normal * 0.5f + 0.5f) and a depth buffer (output.Depth = input.Depth.x / input.Depth.y).

    This is the scene I'm rendering (just showing diffuse colors).  The cubes are 1 unit cubed.

    This is my SSAO shader:

    ```hlsl
    Texture2D DepthTexture  : register(t0);
    Texture2D NormalTexture : register(t1);
    Texture2D RandomTexture : register(t2);

    cbuffer Once : register(cb0)
    {
        float Bias;
        float Intensity;
        float Radius;
        float Scale;
    }

    cbuffer OncePerFrame : register(cb1)
    {
        float4x4 Projection;
        float4x4 InvProjection;
        float FarClip;
    }

    SamplerState sampleType
    {
        Filter = MIN_MAG_MIP_LINEAR;
        AddressU = CLAMP;
        AddressV = CLAMP;
    };

    struct VS_IN
    {
        float4 Position  : POSITION;
        float2 Texcoords : TEXCOORD0;
    };

    struct VS_OUT
    {
        float4 Position  : SV_POSITION;
        float2 Texcoords : TEXCOORD0;
    };

    VS_OUT VS(VS_IN input)
    {
        VS_OUT output = (VS_OUT)0;
        output.Position = input.Position;
        output.Texcoords = input.Texcoords;
        return output;
    }

    float3 GetPosition(float2 vTexCoord)
    {
        // Get the depth value for this pixel
        float z = DepthTexture.Sample(sampleType, vTexCoord).r;

        // Get x/w and y/w from the viewport position
        float x = vTexCoord.x * 2 - 1;
        float y = (1 - vTexCoord.y) * 2 - 1;
        float4 vProjectedPos = float4(x, y, z, 1.0f);

        // Transform by the inverse projection matrix
        float4 vPositionVS = mul(vProjectedPos, InvProjection);

        // Divide by w to get the view-space position
        return vPositionVS.xyz / vPositionVS.w;
    }

    float DoAmbientOcclusion(in float2 tcoord, in float2 uv, in float3 p, in float3 cnorm)
    {
        float3 diff = GetPosition(tcoord + uv) - p;
        const float3 v = normalize(diff);
        const float d = length(diff) * Scale;
        return max(0.0, dot(cnorm, v) - Bias) * (1.0 / (1.0 + d)) * Intensity;
    }

    float4 PS(VS_OUT input) : SV_Target
    {
        const float2 vec[4] =
        {
            float2( 1,  0), float2(-1,  0),
            float2( 0,  1), float2( 0, -1)
        };

        float3 p = GetPosition(input.Texcoords);
        float3 n = NormalTexture.Sample(sampleType, input.Texcoords).xyz * 2 - 1;
        float2 rand = RandomTexture.Sample(sampleType, input.Texcoords).xy;

        float ao = 0.0f;
        float rad = Radius / p.z;

        //** SSAO calculation **//
        int iterations = 4;
        for (int j = 0; j < iterations; ++j)
        {
            float2 coord1 = reflect(vec[j], rand) * rad;
            float2 coord2 = float2(coord1.x * 0.707 - coord1.y * 0.707,
                                   coord1.x * 0.707 + coord1.y * 0.707);

            ao += DoAmbientOcclusion(input.Texcoords, coord1 * 0.25, p, n);
            ao += DoAmbientOcclusion(input.Texcoords, coord2 * 0.50, p, n);
            ao += DoAmbientOcclusion(input.Texcoords, coord1 * 0.75, p, n);
            ao += DoAmbientOcclusion(input.Texcoords, coord2 * 1.00, p, n);
        }
        ao /= (float)iterations * 4.0;
        //** END **//

        return float4(1 - ao, 1 - ao, 1 - ao, 1);
    }
    ```

    I've gone through and debugged the shader; all of my settings are being set.  I'm reasonably sure that I'm getting position from depth correctly and decoding the normal right.

    Is the problem with the SSAO settings?  I'm using these:

    ```csharp
    float bias = 0.01f;
    float intensity = 0.5f;
    float radius = 1.1f;
    float scale = 0.5f;
    ```

    They seem about right...  Aggh.  I'm pulling my hair out here.
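Since many SSAO artifacts trace back to the depth-to-position reconstruction, it can be worth sanity-checking that step in isolation.  This NumPy sketch models what `GetPosition()` does (multiply by the inverse projection, divide by w) and round-trips a point through it; the row-vector, D3D-style left-handed projection here is an assumption, not the poster's actual matrices:

```python
import numpy as np


def perspective_lh(fov_y, aspect, zn, zf):
    # Assumed D3D-style left-handed projection, row-vector convention
    # (clip = v @ P), depth mapped to [0, 1].
    ys = 1.0 / np.tan(fov_y / 2.0)
    xs = ys / aspect
    return np.array([
        [xs,  0.0, 0.0,                   0.0],
        [0.0, ys,  0.0,                   0.0],
        [0.0, 0.0, zf / (zf - zn),        1.0],
        [0.0, 0.0, -zn * zf / (zf - zn),  0.0],
    ])


proj = perspective_lh(np.pi / 3.0, 16.0 / 9.0, 0.1, 100.0)
inv_proj = np.linalg.inv(proj)

# Project a view-space point and keep its post-divide x, y and depth...
p_view = np.array([1.0, 2.0, 10.0, 1.0])
clip = p_view @ proj
ndc = clip[:3] / clip[3]

# ...then reconstruct exactly as GetPosition() does: multiply by the
# inverse projection and divide by w.
r = np.append(ndc, 1.0) @ inv_proj
reconstructed = r[:3] / r[3]

print(np.allclose(reconstructed, p_view[:3]))  # → True
```

If the same round trip fails with the application's real projection matrix and stored depth values, the reconstruction (not the occlusion settings) is the bug.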
  9. DX11 Uber Shading - Lighting

    Yup!  Deferred shading is great for large numbers of light sources of varying magnitudes, sizes and intensities.
  10. Stencil Buffer

    Are you setting your DepthStencilView with device.OutputMerger.SetTargets() ?
  11. Hah, setting DataStream.Position sets the byte offset to write at; I was using the index into the array.  I need to multiply the index by the size of an instance.
  12. Right, and I think I'm okay with doing that.  The vertices that I'm setting in the first method are the only vertices that I'm trying to draw that frame.
  13. Hello everyone, I'm currently instancing a set of primitives (cubes, spheres, etc.) with modulo arithmetic and I'm having trouble with the DataStream.WriteRange<> method.  I'm using this to write to the instance buffer:

    ```csharp
    DataStream dsInstances = instanceBuffer.Map(MapMode.WriteDiscard, SlimDX.Direct3D10.MapFlags.None);

    if (instancesStart + instancesCount < instancesMax)
    {
        dsInstances.Position = instancesStart;
        dsInstances.WriteRange<VertexPrimitiveInstance>(instances, instancesStart, instancesCount);
    }
    else
    {
        dsInstances.Position = instancesStart;
        dsInstances.WriteRange<VertexPrimitiveInstance>(instances, instancesStart, instancesMax - instancesStart);

        dsInstances.Position = 0;
        dsInstances.WriteRange<VertexPrimitiveInstance>(instances, 0, instancesCount - (instancesMax - instancesStart));
    }

    instanceBuffer.Unmap();
    ```

    I see nothing on the first loop around, from 0 to instancesMax.  However, I start seeing my objects once I've written all values to the buffer at least once.  Then I get some very strange flicker and occasionally deformed geometry.

    If I change the above to write the entire array of VertexPrimitiveInstances to the buffer, I get no flicker:

    ```csharp
    DataStream dsInstances = instanceBuffer.Map(MapMode.WriteDiscard, SlimDX.Direct3D10.MapFlags.None);
    dsInstances.Position = 0;
    dsInstances.WriteRange<VertexPrimitiveInstance>(instances);
    instanceBuffer.Unmap();
    ```

    I presume that the second method is far slower than the first (if it worked) and is thus undesirable.  Am I using WriteRange wrong, or am I wrong in thinking I should only set the vertices that are changing (like I did in XNA)?
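The wrap-around write, with the byte-offset fix from reply 11 applied, can be modeled in plain Python.  `STRIDE` and the helper names here are hypothetical stand-ins for the instance struct size and the SlimDX calls; the point is the index-to-byte-offset conversion:

```python
STRIDE = 32  # hypothetical byte size of one VertexPrimitiveInstance


def write_range(buf, byte_pos, items):
    # Model of DataStream.WriteRange: Position is a BYTE offset, so each
    # item lands at byte_pos + i * STRIDE.
    for i, item in enumerate(items):
        off = byte_pos + i * STRIDE
        buf[off:off + STRIDE] = item.to_bytes(STRIDE, "little")


def write_wrapped(buf, start, count, items, capacity):
    # The fix from reply 11: convert the element index to a byte offset
    # (start * STRIDE) before writing; seeking to the bare index instead
    # misaligns every instance after the first.
    if start + count <= capacity:
        write_range(buf, start * STRIDE, items[start:start + count])
    else:
        head = capacity - start
        write_range(buf, start * STRIDE, items[start:start + head])
        write_range(buf, 0, items[:count - head])


capacity = 4
buf = bytearray(capacity * STRIDE)
items = [10, 20, 30, 40]
write_wrapped(buf, 3, 3, items, capacity)  # wraps: slot 3, then slots 0-1
print(int.from_bytes(buf[3 * STRIDE:4 * STRIDE], "little"))  # → 40
```

Note also that with D3D10's WriteDiscard mapping, each Map hands back a buffer whose previous contents are undefined, which is a separate reason partial writes can show garbage; the full-buffer rewrite avoids both issues.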
  14. [SlimDX] Instanced Triangles

    Oh, ffs, I figured it out.  Ugh.
  15. How to draw lots of billboard sprites

    Instancing!  Try reading this: http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/