AdeptStrain

Member
  • Content count

    55
  • Joined

  • Last visited

Community Reputation

406 Neutral

About AdeptStrain

  • Rank
    Member
  1. Compute/DrawIndirect question

    One other question, MJP. If you have a buffer bound as an AppendStructuredBuffer in one shader, can it be just a simple read-only StructuredBuffer in another shader, or do you have to bind it as a RWStructuredBuffer even if you have no intention of writing to it?
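    For reference, the two declarations in question might look like this (a sketch; the struct and register assignments are made up, but a resource created with both D3D11_BIND_UNORDERED_ACCESS and D3D11_BIND_SHADER_RESOURCE can be bound as a UAV in one pass and a plain SRV in another):

    ```hlsl
    struct GrassPoint { float3 position; float seed; };

    // Compute shader side: bound as a UAV so the shader can append.
    AppendStructuredBuffer<GrassPoint> OutputPoints : register(u0);

    // Rendering side: the same resource bound as a read-only SRV,
    // since this shader never writes to it.
    StructuredBuffer<GrassPoint> InputPoints : register(t0);
    ```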
  2. Compute/DrawIndirect question

    Ah, I see. So you use the SV_VertexID (which I assume comes from the draw indirect params) as an index into the structured buffer, grab the data, and pass it along the pipeline from there. That makes sense. Thanks MJP.
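    The fetch described above might look something like this (a sketch; the struct and buffer names are hypothetical):

    ```hlsl
    struct GrassPoint { float3 position; float seed; };
    StructuredBuffer<GrassPoint> Points : register(t0);

    VSInput PointFetchVS(uint vertexId : SV_VertexID)
    {
        // No vertex buffer is bound; each invocation pulls its own
        // element out of the compute shader's output buffer.
        GrassPoint p = Points[vertexId];

        VSInput outVS = (VSInput)0;
        outVS.Position = p.position;
        return outVS;
    }
    ```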
  3. Compute/DrawIndirect question

     Hi all,

     I have a compute shader that generates a number of points which I then want to render via DrawIndirect. I understand how to generate and copy over the number of points (AppendStructuredBuffer -> CopyStructureCount -> DrawIndirect args buffer), but I don't understand how to bind the output of the compute shader (i.e. the AppendStructuredBuffer) to the input assembler so it can be drawn once I call DrawIndirect.

     Do I have to create a separate vertex buffer, copy the output into that, and then bind and render that? Or is there a way to tell the input assembler "here's a buffer, use a point topology" and then just send things down the pipeline like normal?
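    For what it's worth, the args buffer that CopyStructureCount writes into just needs to match the four 32-bit values DrawInstancedIndirect reads; a minimal C++ sketch of that layout (the struct name is mine - D3D11 itself takes the values as raw UINTs in a buffer):

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <cstdint>

    // Mirrors the four UINTs DrawInstancedIndirect reads from the args buffer.
    // CopyStructureCount(argsBuffer, /*DstAlignedByteOffset=*/0, appendUAV)
    // drops the append buffer's hidden counter into the first field.
    struct DrawInstancedIndirectArgs {
        uint32_t VertexCountPerInstance; // filled from the append buffer's counter
        uint32_t InstanceCount;          // typically 1
        uint32_t StartVertexLocation;    // typically 0
        uint32_t StartInstanceLocation;  // typically 0
    };

    int main() {
        // The count must land at byte offset 0 and the struct must be exactly
        // four 32-bit values, or the GPU reads garbage arguments.
        static_assert(offsetof(DrawInstancedIndirectArgs, VertexCountPerInstance) == 0,
                      "count at offset 0");
        static_assert(sizeof(DrawInstancedIndirectArgs) == 16, "four UINTs");

        DrawInstancedIndirectArgs args = {0, 1, 0, 0};
        assert(args.InstanceCount == 1);
        return 0;
    }
    ```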
  4. Terrain tesselation and collision

    As Kuana said, keep a fully detailed mesh (heightmap, whatever) as your physics mesh, separate from your visual mesh, and keep it loaded as long as anything can interact with it.

    As for your lighting and shadows: as long as they use the depth buffer/lighting buffer or what-have-you, they should be fine.
  5. Yeah, I've been trying to be stricter about using mul(matrix, vector). I agree jumping between them is bad form. I should point out I haven't done any cleanup in this code, so I'm sure some things are a bit rough.

     Here's the image showing the behavior in question:

     [attachment=24413:gLighting.png]

     You can see there are some very dark fins parallel to lit ones. I agree this is most likely a normal problem. I've attached all the shader code below, but the normal calculation is pretty simple (unless I have my vert indices wrong...). I'd love a simple way to draw normals, but since this geometry is created in the shader, I haven't thought of a great way to do that yet.

     Shader:

     // Un-tweakables
     float4x4 World : World;
     float4x4 WorldView : WorldView;
     float4x4 WorldViewProjection : WorldViewProjection;
     float Time : Time;

     // Textures
     Texture2D<float4> HeightmapTexture;
     Texture2D<float4> FoliageMapTexture;
     Texture2D<float4> DiffuseMapTexture;
     Texture2D<float4> NormalMapTexture;
     Texture2D<float4> NoiseTexture;

     // Tweakables
     float MaxHeight;
     uint HeightmapTextureWidth;
     uint HeightmapTextureHeight;
     float HeightmapVertexStride;
     float4x4 TerrainWorldMatrix;
     float GrassHeight;
     float GrassWidth;

     // Static Values
     static const float MaxDistance = 512.0f;
     static const uint TotalLODSelections = 10;
     static const uint LODSelection[TotalLODSelections] = { 0, 1, 1, 2, 2, 3, 3, 3, 3, 3 }; // probably should replace this with some inverse log N value.
     static const uint TotalLODs = 4;
     static const int NumSegs[TotalLODs] = { 5, 3, 2, 1 };

     // Structures
     struct VSInput
     {
         float3 Position : POSITION;
         float4 Uv : TEXCOORD0;
     };

     struct VSOutput
     {
         float4 Position : SV_POSITION;
         float2 Uv : TEXCOORD0;
         float4 WorldPositionDepth : TEXCOORD1;
         float3 Normal : TEXCOORD2;
     };

     sampler LinearClampSampler
     {
         Filter = Min_Mag_Mip_Linear;
         AddressU = Clamp;
         AddressV = Clamp;
     };

     sampler LinearWrapSampler
     {
         Filter = Min_Mag_Mip_Linear;
         AddressU = Wrap;
         AddressV = Wrap;
     };

     sampler PointClampSampler
     {
         Filter = Min_Mag_Mip_Point;
         AddressU = Clamp;
         AddressV = Clamp;
     };

     // Helper Methods
     float GetHeightmapValue(float2 uv)
     {
         float4 rawValue = HeightmapTexture.SampleLevel(PointClampSampler, uv, 0);
         return rawValue.r * MaxHeight;
     }

     float3 CalculateNormal(float3 v0, float3 v1, float3 v2)
     {
         float3 edgeOne = normalize(v1 - v0);
         float3 edgeTwo = normalize(v2 - v0);
         return cross(edgeOne, edgeTwo);
     }

     float4 GetCornerHeights(float2 uv)
     {
         float texelWidth = 1.0f / (float)(HeightmapTextureWidth + 1);
         float texelHeight = 1.0f / (float)(HeightmapTextureHeight + 1);

         float2 topRight = uv + float2(texelWidth, 0.0f);
         float2 bottomLeft = uv + float2(0.0f, texelHeight);
         float2 bottomRight = uv + float2(texelWidth, texelHeight);

         float x = GetHeightmapValue(uv);
         float y = GetHeightmapValue(topRight);
         float z = GetHeightmapValue(bottomLeft);
         float w = GetHeightmapValue(bottomRight);

         float4 returnValue = float4(x, y, z, w);
         return returnValue;
     }

     float3 GetGrassCellOffset(VSInput input)
     {
         float4 cornerHeights = GetCornerHeights(input.Uv.xy); // TODO: FIX ME

         // Bilinear interpolation between heightmap cells.
         float yPos = lerp(lerp(cornerHeights.x, cornerHeights.y, input.Uv.z),
                           lerp(cornerHeights.z, cornerHeights.w, input.Uv.z),
                           input.Uv.w);

         float3 returnPos = input.Position.xyz;
         returnPos.y = yPos;
         return returnPos;
     }

     uint CalculateLOD(float3 distanceToCamera)
     {
         uint LODIndex = (uint)floor(distanceToCamera * 10.0f);
         LODIndex = clamp(LODIndex, 0, TotalLODSelections - 1);

         uint totalSegments = NumSegs[LODSelection[LODIndex]];
         return totalSegments;
     }

     float2 CalculateGrassDirection(VSInput IN)
     {
         // Random seed is stored in the Y component.
         float rotRadians = IN.Position.y;

         float2 buildDirection = { 1.0f, 0.0f };
         float cosV = cos(rotRadians);
         float sinV = sin(rotRadians);
         float2x2 rotMatrix = { cosV, -sinV,
                                sinV,  cosV };
         buildDirection = mul(rotMatrix, buildDirection);
         return buildDirection;
     }

     VSInput DeferredRenderVS(VSInput IN)
     {
         // TODO: Add an early out if our foliage map returns 0.0f.
         VSInput outVS = (VSInput)0;
         outVS.Position = IN.Position;
         outVS.Uv = IN.Uv;
         return outVS;
     }

     // Num segments at lowest LOD * num of verts per segment
     [maxvertexcount((NumSegs[0] + 1) * 2)]
     void GrassGS(point VSInput input[1], inout TriangleStream<VSOutput> TriStream)
     {
         const uint MaxSegments = NumSegs[0];
         const uint MaxVerts = (MaxSegments + 1) * 2;

         float2 grassDir = normalize(CalculateGrassDirection(input[0]));

         const float3 bladePositionInCellSpace = GetGrassCellOffset(input[0]);
         const float4 posInObjectSpace = float4(bladePositionInCellSpace, 1.0f);
         const float terrainHeight = bladePositionInCellSpace.y;
         const float3 bladePosInWorldSpace = mul(World, bladePositionInCellSpace).xyz;

         float eyeDist = length(EyePosition.xz - bladePosInWorldSpace.xz) / MaxDistance;
         eyeDist = saturate(eyeDist); // Clamp to 0 - 1

         uint totalSegments = CalculateLOD(eyeDist);

         float scaledGrassHeight = GrassHeight * (1.0f - eyeDist);
         scaledGrassHeight = clamp(scaledGrassHeight, 0.1f, 1.0f);
         float segmentHeightStep = scaledGrassHeight / (float)totalSegments;

         float scaledGrassWidth = lerp(GrassWidth * 2.0f, GrassWidth, (1.0f - eyeDist));
         float grassHalfWidth = scaledGrassWidth * 0.5f;

         float vStep = 1.0f / (float)(totalSegments + 1);
         const uint totalVerts = (totalSegments + 1) * 2;

         VSOutput createdVerts[MaxVerts];

         // Two verts per segment
         for (uint i = 0; i < totalVerts; i += 2)
         {
             createdVerts[i].Position = posInObjectSpace;
             createdVerts[i].Position.xz -= grassDir * grassHalfWidth;
             createdVerts[i].Position.y = terrainHeight + ((float)i * segmentHeightStep);
             createdVerts[i].Uv = float2(0.0f, 1.0f - ((float)i * vStep));
             createdVerts[i].WorldPositionDepth = float4(mul(World, createdVerts[i].Position).xyz,
                                                         -mul(WorldView, createdVerts[i].Position).z);
             createdVerts[i].Position = mul(WorldViewProjection, createdVerts[i].Position);

             createdVerts[i + 1].Position = posInObjectSpace;
             createdVerts[i + 1].Position.xz += grassDir * grassHalfWidth;
             createdVerts[i + 1].Position.y = terrainHeight + ((float)i * segmentHeightStep);
             createdVerts[i + 1].Uv = float2(1.0f, 1.0f - ((float)(i + 1) * vStep));
             createdVerts[i + 1].WorldPositionDepth = float4(mul(World, createdVerts[i + 1].Position).xyz,
                                                             -mul(WorldView, createdVerts[i + 1].Position).z);
             createdVerts[i + 1].Position = mul(WorldViewProjection, createdVerts[i + 1].Position);
         }

         float3 segmentNormals[MaxSegments];
         for (uint j = 0; j < totalSegments; ++j)
         {
             int vIndex0 = j * 2;
             int vIndex1 = (j + 1) * 2;
             int vIndex2 = vIndex1 - 1;
             segmentNormals[j] = CalculateNormal(createdVerts[vIndex0].WorldPositionDepth.xyz,
                                                 createdVerts[vIndex1].WorldPositionDepth.xyz,
                                                 createdVerts[vIndex2].WorldPositionDepth.xyz);
         }

         for (uint k = 0; k < totalVerts; ++k)
         {
             // Use the normal of the segment above it.
             //uint normalIndex = clamp(k / 2, 0, totalSegments - 1);
             createdVerts[k].Normal = segmentNormals[0]; // HACK HACK HACK HACK
             TriStream.Append(createdVerts[k]);
         }
     }

     PSDeferredOutput DeferredRenderFP(VSOutput In, bool isFrontFace : SV_IsFrontFace)
     {
         // Flip normals on backfaces.
         float3 normal = normalize(lerp(-In.Normal, In.Normal, float(isFrontFace)));

         // No gloss or specular.
         float specPower = 0;
         float gloss = 0;

         float4 colour = float4(0.0f, 1.0f, 0.0f, 1.0f); // Remove - this is just for debugging.
         //float4 colour = DiffuseMapTexture.SampleLevel(LinearClampSampler, In.Uv, 0);
         colour.w *= gloss;

         float3 viewSpaceNormal = normalize(mul(WorldView, float4(normal.xyz, 0)).xyz);

         PSDeferredOutput Out;
         // Albedo Colour.xyz, Emissiveness
         Out.Colour = float4(colour.xyz, 0.0f);
         // Normal.xyz, Gloss
         Out.NormalDepth = float4(viewSpaceNormal.xyz * 0.5f + 0.5f, colour.w);
         return Out;
     }
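    Since the dark-fin discussion centers on CalculateNormal, here's a small standalone C++ port of that helper (my own vector type, same math) showing that swapping two vertices negates the resulting normal - which is why wrong vert indices would produce fins lit on the wrong side:

    ```cpp
    #include <array>
    #include <cassert>
    #include <cmath>

    using Vec3 = std::array<float, 3>;

    static Vec3 sub(const Vec3& a, const Vec3& b) {
        return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }

    static Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return {v[0]/len, v[1]/len, v[2]/len};
    }

    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]};
    }

    // Same shape as the HLSL CalculateNormal helper above.
    static Vec3 calculateNormal(const Vec3& v0, const Vec3& v1, const Vec3& v2) {
        Vec3 edgeOne = normalize(sub(v1, v0));
        Vec3 edgeTwo = normalize(sub(v2, v0));
        return cross(edgeOne, edgeTwo);
    }

    int main() {
        Vec3 v0{0, 0, 0}, v1{1, 0, 0}, v2{0, 1, 0};
        Vec3 n = calculateNormal(v0, v1, v2);       // +Z for this winding
        Vec3 flipped = calculateNormal(v0, v2, v1); // swapped verts: -Z
        assert(n[2] > 0.99f);
        assert(flipped[2] < -0.99f);
        return 0;
    }
    ```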
  6. "Can you describe how you came to your conclusion? That is, what are the symptoms for the 'problem?' Hopefully something more specific than 'doesn't seem..'"

     Apologies for the nondescript language in the original post. The fins are all lit on one side but completely black on the other. This is easiest to see when they are randomly rotated, as a black fin will occasionally sit next to a properly lit one. I'll try to attach a screenshot later tonight when I can get a few moments at my home PC, since I don't have my code synced at my current location.

     @Unbird:

     The reference device is a good idea as well; I can try that. I've been using RenderDoc to step through some of the improperly lit fins and it never seemed to flip the normal. The lerp is just me trying to avoid a branch in the pixel shader - glad to know it works. I'll try stepping through with the reference device and see what I find.

     I'm using a deferred renderer with view-space normals, but I wouldn't think that would matter as long as the normal is properly flipped before I move it into view space.
  7. You are never too late, unbird!

     Shader code in question. The fin of geometry isn't built as a billboarded quad (as evident from the random rotations being applied).

     VSInput DeferredRenderVS(VSInput IN)
     {
         // TODO: Add an early out if our foliage map returns 0.0f.
         VSInput outVS = (VSInput)0;
         outVS.Position = IN.Position;
         outVS.Uv = IN.Uv;
         return outVS;
     }

     ////////////////////////////////////////////////////////////////////////////////
     // Geometry shaders

     // Num segments at lowest LOD * num of verts per segment
     [maxvertexcount((NumSegs[0] + 1) * 2)]
     void GrassGS(point VSInput input[1], inout TriangleStream<VSOutput> TriStream)
     {
         const uint MaxSegments = NumSegs[0]; // Array of ints, LODs basically.
         const uint MaxVerts = (MaxSegments + 1) * 2;

         // This simply returns the correct height based on the point's position in a heightmap cell.
         const float3 bladePositionInCellSpace = GetGrassCellOffset(input[0]);
         const float4 posInObjectSpace = float4(bladePositionInCellSpace, 1.0f);
         const float terrainHeight = bladePositionInCellSpace.y;
         const float3 bladePosInWorldSpace = mul(World, bladePositionInCellSpace).xyz;

         float eyeDist = length(EyePosition.xz - bladePosInWorldSpace.xz) / MaxDistance;
         eyeDist = saturate(eyeDist); // Clamp to 0 - 1

         uint totalSegments = CalculateLOD(eyeDist);

         float scaledGrassHeight = GrassHeight * (1.0f - eyeDist);
         scaledGrassHeight = clamp(scaledGrassHeight, 0.1f, 1.0f);
         float segmentHeightStep = scaledGrassHeight / (float)totalSegments;

         float scaledGrassWidth = lerp(GrassWidth * 2.0f, GrassWidth, (1.0f - eyeDist));
         float grassHalfWidth = scaledGrassWidth * 0.5f;

         float vStep = 1.0f / (float)(totalSegments + 1);
         const uint totalVerts = (totalSegments + 1) * 2;

         VSOutput createdVerts[MaxVerts]; // Two verts per segment

         float2 grassDir = normalize(CalculateGrassDirection(input[0])); // Returns a random 2D rotation vector

         // Given a root position, build up a fin of geometry with a total
         // of X segments (# of segments is based on LOD).
         //                      |\|
         //                      |\|
         //                      |\|
         // Input -> .  Output-> |\|
         for (uint i = 0; i < totalVerts; i += 2)
         {
             createdVerts[i].Position = posInObjectSpace;
             createdVerts[i].Position.xz -= grassDir * grassHalfWidth;
             createdVerts[i].Position.y = terrainHeight + ((float)i * segmentHeightStep);
             createdVerts[i].Uv = float2(0.0f, 1.0f - ((float)i * vStep));
             createdVerts[i].WorldPositionDepth = float4(mul(World, createdVerts[i].Position).xyz,
                                                         -mul(WorldView, createdVerts[i].Position).z);
             createdVerts[i].Position = mul(createdVerts[i].Position, WorldViewProjection);

             createdVerts[i + 1].Position = posInObjectSpace;
             createdVerts[i + 1].Position.xz += grassDir * grassHalfWidth;
             createdVerts[i + 1].Position.y = terrainHeight + ((float)i * segmentHeightStep);
             createdVerts[i + 1].Uv = float2(1.0f, 1.0f - ((float)(i + 1) * vStep));
             createdVerts[i + 1].WorldPositionDepth = float4(mul(World, createdVerts[i + 1].Position).xyz,
                                                             -mul(WorldView, createdVerts[i + 1].Position).z);
             createdVerts[i + 1].Position = mul(createdVerts[i + 1].Position, WorldViewProjection);
         }

         float3 segmentNormals[MaxSegments];
         for (uint j = 0; j < totalSegments; ++j)
         {
             int vIndex0 = j * 2;
             int vIndex1 = (j + 1) * 2;
             int vIndex2 = vIndex1 - 1;
             segmentNormals[j] = CalculateNormal(createdVerts[vIndex0].WorldPositionDepth.xyz,
                                                 createdVerts[vIndex1].WorldPositionDepth.xyz,
                                                 createdVerts[vIndex2].WorldPositionDepth.xyz);
         }

         for (uint k = 0; k < totalVerts; ++k)
         {
             // Use the normal of the segment above it.
             //uint normalIndex = clamp(k / 2, 0, totalSegments - 1);
             createdVerts[k].Normal = segmentNormals[0]; // HACK: Just using the normal of the first quad for all segments.
             TriStream.Append(createdVerts[k]);
         }
     }

     PSDeferredOutput DeferredRenderFP(VSOutput In, bool isFrontFace : SV_IsFrontFace)
     {
         // Flip normal if this is a back face.
         float3 normal = normalize(lerp(-In.Normal, In.Normal, float(isFrontFace)));

         // No gloss or specular.
         float specPower = 0;
         float gloss = 0;

         float4 colour = float4(0.0f, 1.0f, 0.0f, 1.0f);
         colour.w *= gloss;

         float3 viewSpaceNormal = normalize(mul(float4(normal.xyz, 0), View).xyz);

         PSDeferredOutput Out;
         // Albedo Colour.xyz, Emissiveness
         Out.Colour = float4(colour.xyz, 0.0f);
         // Normal.xyz, Gloss
         Out.NormalDepth = float4(viewSpaceNormal.xyz * 0.5f + 0.5f, colour.w);
         return Out;
     }
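    The lerp in DeferredRenderFP is just a branchless select between the normal and its negation; a hypothetical scalar C++ model of the same trick:

    ```cpp
    #include <cassert>

    // lerp(-n, n, f) with f in {0, 1} selects -n (back face) or n (front face)
    // without a branch, matching the pixel shader's normal flip.
    static float flipNormal(float n, bool isFrontFace) {
        float f = isFrontFace ? 1.0f : 0.0f;
        return -n + f * (n - (-n)); // expanded lerp(-n, n, f)
    }

    int main() {
        assert(flipNormal(1.0f, true) == 1.0f);   // front face: unchanged
        assert(flipNormal(1.0f, false) == -1.0f); // back face: negated
        return 0;
    }
    ```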
  8. I've been trying to debug an issue where SV_IsFrontFace doesn't seem to return anything but "true" for some geometry I create from points. MSDN points out that SV_IsFrontFace always returns true for points and lines, but I was hoping that wouldn't apply if triangle strips were generated from those points via a geometry shader. Currently it seems that no matter what the geometry looks like at the end of the pipeline, if you start out with lines/points then SV_IsFrontFace returns true.

     Has anyone else run into this? Or have any suggestions for ways around this problem?
  9. 3D UI transformation matrix

    Sounds like you currently have your widgets in screen space. If you want them in 3D, you probably want them in world space (again, just out in front of your camera); rotate them there, and then multiply them by your ViewProjection matrix. Your view matrix doesn't make much sense to me - you're treating it like you're still in screen space when you're not.

    If this is your first foray into 3D graphics, you may want to read up on some linear algebra/matrix transformation tutorials (I don't know any great ones off the top of my head; maybe someone has a good suggestion and can chime in). Or am I misunderstanding your issue?
  10. 3D UI transformation matrix

    Off the top of my head, there are probably a couple of ways you could do this:

    Set your widget's transform to be your camera's position, then offset it a bit and rotate it to get it how you want. You'll probably want to turn off depth testing/writing as well.

    Or: render all your widgets to a separate texture (rotated and such, just using normal matrix math), then draw that UI texture as a full-screen quad over your scene before you present the frame.

    The only real difference is that one uses your camera, while the other has no camera - all your widgets are rendered around the origin and then drawn a final time over the scene. There might be an even simpler method, but these are just some ideas off the top of my head.
  11. How to get rid of camera z-axis rotation

    You only need to update the view matrix before rendering, as that is where it is used (obviously, if you needed the camera or the view matrix for some other math - culling, say - you could grab it sooner). Using the code above, you can call your "UpdateViewMatrix2" as much as you want, but it's just a waste if you aren't actually about to use that matrix.
  12. Terrain patch border issue

    The general way I've seen people deal with this issue is adding a 1-pixel padding so your patches overlap at the borders. If you look into World Machine's tiling functionality, that's exactly what it does to solve this problem.
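    As a concrete sketch of the overlap idea (patch size and helper names are my own): with a patch that is patchSize vertices wide, stepping each patch by patchSize - 1 heightmap samples makes adjacent patches share their border row/column exactly:

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Patch i starts at i * (patchSize - 1) in heightmap samples, so its last
    // column/row is the same sample as patch i+1's first column/row.
    static uint32_t patchFirstSample(uint32_t patchIndex, uint32_t patchSize) {
        return patchIndex * (patchSize - 1);
    }

    static uint32_t patchLastSample(uint32_t patchIndex, uint32_t patchSize) {
        return patchFirstSample(patchIndex, patchSize) + (patchSize - 1);
    }

    int main() {
        const uint32_t patchSize = 33; // 33x33-vertex patch (32 quads), an arbitrary example

        // The shared border: each patch ends exactly where the next begins,
        // so the border vertices (and therefore normals) can match.
        assert(patchLastSample(0, patchSize) == patchFirstSample(1, patchSize));
        assert(patchLastSample(3, patchSize) == patchFirstSample(4, patchSize));
        return 0;
    }
    ```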
  13. How to get rid of camera z-axis rotation

    It's probably the simplest way to prevent Z rotation - not to mention it gives you a very easy way to see the exact rotation parameters of your camera (which is going to be difficult if you try to deconstruct a full rotation matrix). You can also limit either axis very simply by changing its min/max values, which comes in handy for gameplay and the like - also for blending values smoothly between cameras, etc.

    What lock are you referring to? The yaw isn't locked at all; that's a full 360-degree circle. The pitch is locked to +/-90 degrees - any more and you end up "upside down". Change the clamp values to -Pi and Pi and you'll see what I'm saying.

    This setup should work fine for both a free camera and a first-person camera (as well as third-person, if you decide to go down that route).
  14. How to get rid of camera z-axis rotation

    CameraMatrix isn't needed (I edited my post to reflect this). Multiplying ViewMatrix * CameraMatrix would result in an identity matrix (A * A' = I), so you should just need the ViewMatrix.

    If your camera isn't rotating at all, I'd debug and make sure you see valid accumx/accumy values: accumx should be between 0 and 6.28(ish), while accumy should be between -1.57 and 1.57. Also make sure Mod2Pi isn't doing anything funky.
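    A quick C++ model of those two ranges (my own mod2Pi/clampPitch, standing in for SlimDX's MathUtil helpers):

    ```cpp
    #include <cassert>
    #include <cmath>

    static const float TWO_PI = 6.28318530718f;
    static const float PI_OVER_TWO = 1.57079632679f;

    // Wrap yaw into [0, 2*pi) - what Mod2Pi should be doing.
    static float mod2Pi(float x) {
        float r = std::fmod(x, TWO_PI);
        return r < 0.0f ? r + TWO_PI : r;
    }

    // Clamp pitch to +/-90 degrees so the camera never goes "upside down".
    static float clampPitch(float x) {
        return std::fmax(-PI_OVER_TWO, std::fmin(PI_OVER_TWO, x));
    }

    int main() {
        // Yaw wraps: a bit past a full turn comes back near zero.
        assert(std::fabs(mod2Pi(TWO_PI + 0.1f) - 0.1f) < 1e-5f);
        // Negative yaw wraps back into [0, 2*pi).
        assert(mod2Pi(-0.1f) > 0.0f && mod2Pi(-0.1f) < TWO_PI);
        // Pitch is limited to [-pi/2, pi/2].
        assert(clampPitch(3.0f) == PI_OVER_TWO);
        assert(clampPitch(-3.0f) == -PI_OVER_TWO);
        return 0;
    }
    ```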
  15. How to get rid of camera z-axis rotation

      That's what we're trying to help you with. This should work (I don't have SlimDX installed to test it - which is what I'm assuming you're using).

      public void RotateX(float _degree_angle)
      {
          accumx += MathUtil.DegreesToRadians(_degree_angle);
          accumx = MathUtil.Mod2Pi(accumx); // Same as the previous wrap-around; it just does value % TWO_PI.
      }

      public void RotateY(float _degree_angle)
      {
          accumy += MathUtil.DegreesToRadians(_degree_angle);
          accumy = MathUtil.Clamp(accumy, -MathUtil.PiOverTwo, MathUtil.PiOverTwo);
      }

      // This only needs to be called ONCE per frame, generally right before you set your rendering matrices.
      private void UpdateViewMatrix()
      {
          // This is the camera's position and rotation. It's not the same as the
          // view matrix (in fact, it's the inverse).
          Matrix cameraMatrix;
          Quaternion cameraRotation = Quaternion.RotationYawPitchRoll(accumx, accumy, 0.0f);
          cameraMatrix = Matrix.RotationQuaternion(cameraRotation) * Matrix.Translation(position);

          // The conjugate just gives us the inverse rotation.
          ViewMatrix = Matrix.Translation(-position) * Matrix.RotationQuaternion(Quaternion.Conjugate(cameraRotation));
      }

      EDIT: You can skip building the "cameraMatrix" if you want. Originally I was going to use the transpose of that matrix as your view matrix, which would work as long as your transform is orthonormal, but decided I'd keep things simple.
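    The "conjugate gives the inverse rotation" claim above can be checked directly; a minimal quaternion sketch (my own type, not SlimDX):

    ```cpp
    #include <cassert>
    #include <cmath>

    struct Quat { float w, x, y, z; };

    // Hamilton product of two quaternions.
    static Quat mul(const Quat& a, const Quat& b) {
        return {a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
                a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w};
    }

    static Quat conjugate(const Quat& q) { return {q.w, -q.x, -q.y, -q.z}; }

    int main() {
        // An arbitrary unit rotation: 90 degrees about Y.
        float h = std::sqrt(0.5f);
        Quat q{h, 0.0f, h, 0.0f};

        // q * conj(q) is the identity rotation (1, 0, 0, 0) for a unit
        // quaternion, which is why the view matrix can use the conjugate
        // as the inverse of the camera's rotation.
        Quat id = mul(q, conjugate(q));
        assert(std::fabs(id.w - 1.0f) < 1e-5f);
        assert(std::fabs(id.x) < 1e-5f && std::fabs(id.y) < 1e-5f && std::fabs(id.z) < 1e-5f);
        return 0;
    }
    ```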