
NyquistVelocity

Member
  • Content Count

    5

Community Reputation

2 Neutral

About NyquistVelocity

  • Rank
    Newbie

Personal Information

  • Role
    Artificial Intelligence
    Programmer
    UI/UX Designer
  • Interests
    Education
    Programming

Social

  • Twitter
    @wx_trevor

  1. NyquistVelocity

    DX11 Bad Texture3D Sampling in Volume Renderer

    I figured it out. It had absolutely nothing to do with the Texture3D, the sampler, or anything else I wanted to blame. Upon zooming in really far, I noticed that it probably wasn't the 3D texture's fault - the ray directions looked like they were being miscalculated somehow. The pixelated areas are looking in slightly different directions from each other, which breaks up tight gradients. It turns out the issue was the textures I was drawing the front/rear faces of the geometry to: they were initialized as B8G8R8A8_UNorm. As soon as I changed them to R32G32B32A32_Float, it worked. It was just insufficient precision all along. Now it renders properly.
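    For reference, here's a minimal sketch of the change that fixed it. Only the Format line is the actual fix; the rest of the description is an assumption about how the front/rear position targets are created (SlimDX / D3D11):

        // Render targets that hold ray entry/exit positions need float precision;
        // B8G8R8A8_UNorm quantizes each channel to 8 bits and breaks the ray directions.
        Texture2DDescription posTargetDesc = new Texture2DDescription()
        {
            Width = WindowWidth,
            Height = WindowHeight,
            MipLevels = 1,
            ArraySize = 1,
            Format = SlimDX.DXGI.Format.R32G32B32A32_Float, // was B8G8R8A8_UNorm
            SampleDescription = new SlimDX.DXGI.SampleDescription(1, 0),
            Usage = ResourceUsage.Default,
            BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
            CpuAccessFlags = CpuAccessFlags.None,
            OptionFlags = ResourceOptionFlags.None
        };
        FrontTexture = new Texture2D(Renderer.device, posTargetDesc);
        RearTexture = new Texture2D(Renderer.device, posTargetDesc);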
  2. I'm having trouble wrapping my brain around what the actual issue is here, but the sampler I'm using in my volume renderer is only interpolating the 3D texture along the Y axis. I roughly followed (and borrowed a lot of code from) this tutorial, but I'm using SlimDX and WPF: http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html

    Here's an example showing voxel-ish artifacts on the X and Z axes, which are evidently not being interpolated, whereas on the Y axis it appears to be interpolating correctly. If I disable any kind of interpolation in the sampler, the whole volume ends up looking voxel-ish / bad.

    Thinking maybe my hardware didn't support 3D textures (even though it's modern), I wrote a little trilinear interpolation function and got the same results. In the trilinear code, I calculate the position of the ray in grid coordinates and use the fractional portion to do the lerps. So I experimented by just painting the fractional part of the grid coordinate where a ray starts onto my geometry, cast to a float4. As expected, the Y axis looks good, since my input dataset has 30 layers: I see a white => black fade 30 times. However, my X and Z fractional values are strange. What I should be seeing is the same white => black fade 144 and 145 times, respectively, but what I get is definitely not right: the values are A) discretized and uniform per grid cell, and B) exhibit a pattern that repeats every handful of grid rows, instead of a smooth fade within each cell.

    My suspicion is that I'm initializing my texture badly, so here's a look at the whole pipeline from initialization to rendering.

    1) Loading data from a file, then constructing all my rendering-related objects:

        Data = new GURUGridFile(@"E:\GURU2 Test Data\GoshenDual\Finished\30_DOW7_(X)_20090605_220006.ggf");
        double DataX = Data.CellSize[0] * Data.Dimensions[0];
        double DataY = Data.CellSize[1] * Data.Dimensions[1];
        double DataZ = Data.CellSize[2] * Data.Dimensions[2];
        double MaxSize = Math.Max(DataX, Math.Max(DataY, DataZ));
        DataX /= MaxSize;
        DataY /= MaxSize;
        DataZ /= MaxSize;
        Renderer.XSize = (float)DataX;
        Renderer.YSize = (float)DataY;
        Renderer.ZSize = (float)DataZ;

        int ProductCode = Data.LayerProducts[0].ToList().IndexOf("A_DZ");
        float[,,] RadarData = new float[Data.Dimensions[0], Data.Dimensions[1], Data.Dimensions[2]];
        for (int x = 0; x < Data.Dimensions[0]; x++)
            for (int y = 0; y < Data.Dimensions[1]; y++)
                for (int z = 0; z < Data.Dimensions[2]; z++)
                    RadarData[x, y, z] = Data.Data[z][ProductCode][x, y];

        int DataSize = Math.Max(RadarData.GetLength(0), Math.Max(RadarData.GetLength(1), RadarData.GetLength(2)));
        int mWidth = RadarData.GetLength(0);
        int mHeight = RadarData.GetLength(2);
        int mDepth = RadarData.GetLength(1);
        float mStepScale = 1.0F;
        float maxSize = (float)Math.Max(mWidth, Math.Max(mHeight, mDepth));
        SlimDX.Vector3 stepSize = new SlimDX.Vector3(
            1.0f / (mWidth * (maxSize / mWidth)),
            1.0f / (mHeight * (maxSize / mHeight)),
            1.0f / (mDepth * (maxSize / mDepth)));

        VolumeRenderer = new VolumeRenderEngine(false, Renderer.device);
        VolumeRenderer.Data = VolumeRenderTest.Rendering.TextureObject3D.FromData(RadarData);
        VolumeRenderer.StepSize = stepSize * mStepScale;
        VolumeRenderer.Iterations = (int)(maxSize * (1.0f / mStepScale) * 2.0F);

        Renderer.Initialize();
        SetupSlimDX();
        this.VolumeRenderer.DataWidth = Data.Dimensions[0];
        this.VolumeRenderer.DataHeight = Data.Dimensions[2];
        this.VolumeRenderer.DataDepth = Data.Dimensions[1];

    It's worth noting here that I flip the Z and Y axes when passing data to the volume renderer so as to comply with DirectX coordinates.

    Next is my construction of the Texture3D and related fields. This is the step I think I'm messing up, both in terms of correctness and in general violation of best practices.

        public static TextureObject3D FromData(float[,,] Data)
        {
            Texture3DDescription texDesc = new Texture3DDescription()
            {
                BindFlags = SlimDX.Direct3D11.BindFlags.ShaderResource,
                CpuAccessFlags = SlimDX.Direct3D11.CpuAccessFlags.None,
                Format = SlimDX.DXGI.Format.R32_Float,
                MipLevels = 1,
                OptionFlags = SlimDX.Direct3D11.ResourceOptionFlags.None,
                Usage = SlimDX.Direct3D11.ResourceUsage.Default,
                Width = Data.GetLength(0),
                Height = Data.GetLength(2),
                Depth = Data.GetLength(1)
            };

            int i = 0;
            float[] FlatData = new float[Data.GetLength(0) * Data.GetLength(1) * Data.GetLength(2)];
            for (int y = 0; y < Data.GetLength(1); y++)
                for (int z = 0; z < Data.GetLength(2); z++)
                    for (int x = 0; x < Data.GetLength(0); x++)
                        FlatData[i++] = Data[x, y, z];

            DataStream TextureStream = new DataStream(FlatData, true, true);
            DataBox TextureBox = new DataBox(texDesc.Width * 4, texDesc.Width * texDesc.Height * 4, TextureStream);
            Texture3D valTex = new Texture3D(Renderer.device, texDesc, TextureBox);

            var viewDesc = new SlimDX.Direct3D11.ShaderResourceViewDescription()
            {
                Format = texDesc.Format,
                Dimension = SlimDX.Direct3D11.ShaderResourceViewDimension.Texture3D,
                MipLevels = texDesc.MipLevels,
                MostDetailedMip = 0,
                ArraySize = 1,
                CubeCount = 1,
                ElementCount = 1
            };
            ShaderResourceView valTexSRV = new ShaderResourceView(Renderer.device, valTex, viewDesc);

            TextureObject3D tex = new TextureObject3D();
            tex.Device = Renderer.device;
            tex.Size = TextureStream.Length;
            tex.TextureStream = TextureStream;
            tex.TextureBox = TextureBox;
            tex.Texture = valTex;
            tex.TextureSRV = valTexSRV;
            return tex;
        }

    The TextureObject3D class is just a helper class that I wrap around a Texture3D to make things a little simpler to work with. At the rendering phase, I draw the back and front faces of my geometry (colored according to the vertex coordinates) to textures so that ray starting and ending positions can be calculated, then pass all that nonsense to the effect.
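    (Aside on the FromData upload above, before the rendering code: a minimal sketch of the subresource layout D3D11 expects for Texture3D initial data. The pitch math is standard D3D11; the variable names are the ones used above.)

        // For R32_Float each texel is sizeof(float) = 4 bytes.
        // D3D11 expects the flattened data to run x fastest, then rows (Height of them)
        // per slice, then Depth slices: flat index = x + y*Width + z*Width*Height.
        // In the loop above, the array's y axis feeds the texture's Depth and its z axis
        // feeds the texture's Height, which matches the Y/Z flip described earlier.
        int rowPitch = texDesc.Width * sizeof(float);   // bytes per row
        int slicePitch = rowPitch * texDesc.Height;     // bytes per depth slice
        DataBox box = new DataBox(rowPitch, slicePitch, TextureStream);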
private void RenderVolume() { // Rasterizer states RasterizerStateDescription RSD_Front = new RasterizerStateDescription(); RSD_Front.FillMode = SlimDX.Direct3D11.FillMode.Solid; RSD_Front.CullMode = CullMode.Back; RSD_Front.IsFrontCounterclockwise = false; RasterizerStateDescription RSD_Rear = new RasterizerStateDescription(); RSD_Rear.FillMode = SlimDX.Direct3D11.FillMode.Solid; RSD_Rear.CullMode = CullMode.Front; RSD_Rear.IsFrontCounterclockwise = false; RasterizerState RS_OLD = Device.ImmediateContext.Rasterizer.State; RasterizerState RS_FRONT = RasterizerState.FromDescription(Renderer.device, RSD_Front); RasterizerState RS_REAR = RasterizerState.FromDescription(Renderer.device, RSD_Rear); // Calculate world view matrix Matrix wvp = _world * _view * _proj; RenderTargetView NullRTV = null; // First we need to render to the rear texture SetupBlend(false); PrepareRTV(RearTextureView); SetBuffers(); Device.ImmediateContext.Rasterizer.State = RS_REAR; Renderer.RayCasting101FX_WVP.SetMatrix(wvp); Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor); ExecuteTechnique(Renderer.RayCasting101FX_RenderPosition); Device.ImmediateContext.Flush(); Device.ImmediateContext.OutputMerger.SetTargets(NullRTV); // Now we draw to the front texture SetupBlend(false); PrepareRTV(FrontTextureView); SetBuffers(); Device.ImmediateContext.Rasterizer.State = RS_FRONT; Renderer.RayCasting101FX_WVP.SetMatrix(wvp); Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor); ExecuteTechnique(Renderer.RayCasting101FX_RenderPosition); Device.ImmediateContext.Flush(); Device.ImmediateContext.OutputMerger.SetTargets(NullRTV); SetupBlend(false); //Set Render Target View Device.ImmediateContext.OutputMerger.SetTargets(SampleRenderView); // Set Viewport Device.ImmediateContext.Rasterizer.SetViewports(new Viewport(0, 0, WindowWidth, WindowHeight, 0.0f, 1.0f)); // Clear screen Device.ImmediateContext.ClearRenderTargetView(SampleRenderView, new Color4(1.0F, 0.0F, 0.0F, 0.0F)); if (Wireframe) { RenderWireframeBack(); Device.ImmediateContext.Rasterizer.State = RS_FRONT; } SetBuffers(); // Render Position Renderer.RayCasting101FX_WVP.SetMatrix(wvp); Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor); Renderer.RayCasting101FX_Back.SetResource(new ShaderResourceView(Renderer.device, RearTexture));// RearTextureSRV); Renderer.RayCasting101FX_Front.SetResource(new ShaderResourceView(Renderer.device, FrontTexture));//FrontTextureSRV); Renderer.RayCasting101FX_Volume.SetResource(new ShaderResourceView(Renderer.device, Data.Texture)); Renderer.RayCasting101FX_StepSize.Set(StepSize); Renderer.RayCasting101FX_Iterations.Set(Iterations); Renderer.RayCasting101FX_Width.Set(DataWidth); Renderer.RayCasting101FX_Height.Set(DataHeight); Renderer.RayCasting101FX_Depth.Set(DataDepth); ExecuteTechnique(Renderer.RayCasting101FX_RayCastSimple); if (Wireframe) { RenderWireframeFront(); Device.ImmediateContext.Rasterizer.State = RS_FRONT; } int sourceSubresource; sourceSubresource = SlimDX.Direct3D11.Resource.CalculateSubresourceIndex(0, 1, 1);// MSAATexture.CalculateSubResourceIndex(0, 0, out sourceMipLevels); int destinationSubresource; destinationSubresource = SlimDX.Direct3D11.Resource.CalculateSubresourceIndex(0, 1, 1); //m_renderTarget.CalculateSubResourceIndex(0, 0, out destinationMipLevels); Device.ImmediateContext.ResolveSubresource(MSAATexture, 0, SharedTexture, 0, Format.B8G8R8A8_UNorm); Device.ImmediateContext.Flush(); CanvasInvalid = false; sw.Stop(); this.LastFrame = sw.ElapsedTicks / 10000.0; } private void PrepareRTV(RenderTargetView 
rtv) { //Set Depth Stencil and Render Target View Device.ImmediateContext.OutputMerger.SetTargets(rtv); // Set Viewport Device.ImmediateContext.Rasterizer.SetViewports(new Viewport(0, 0, WindowWidth, WindowHeight, 0.0f, 1.0f)); // Clear render target Device.ImmediateContext.ClearRenderTargetView(rtv, new Color4(1.0F, 0.0F, 0.0F, 0.0F)); } private void SetBuffers() { // Setup buffer info Device.ImmediateContext.InputAssembler.InputLayout = Renderer.RayCastVBLayout; Device.ImmediateContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList; Device.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(Renderer.VertexBuffer, Renderer.VertexPC.Stride, 0)); Device.ImmediateContext.InputAssembler.SetIndexBuffer(Renderer.IndexBuffer, Format.R32_UInt, 0); } private void ExecuteTechnique(EffectTechnique T) { for (int p = 0; p < T.Description.PassCount; p++) { T.GetPassByIndex(p).Apply(Device.ImmediateContext); Device.ImmediateContext.DrawIndexed(36, 0, 0); } } Finally, here's the shader in its entirety. The TrilinearSample function is supposed to compute a good, interpolated sample but is what ended up highlighting what the problem likely is. What it does, or at least attempts to do, is calculate the actual coordinate of the ray in the original grid coordinates, then use the decimal portion to do the interpolation. float4x4 World; float4x4 WorldViewProj; float4x4 WorldInvTrans; float3 StepSize; int Iterations; int Side; float4 ScaleFactor; int Width; int Height; int Depth; Texture2D<float3> Front; Texture2D<float3> Back; Texture3D<float1> Volume; SamplerState FrontSS = sampler_state { Texture = <Front>; Filter = MIN_MAG_MIP_POINT; AddressU = Border; // border sampling in U AddressV = Border; // border sampling in V BorderColor = float4(0, 0, 0, 0); // outside of border should be black }; SamplerState BackSS = sampler_state { Texture = <Back>; Filter = MIN_MAG_MIP_POINT; AddressU = Border; // border sampling in U AddressV = Border; // border sampling in V BorderColor = float4(0, 0, 0, 0); // outside of border should be black }; SamplerState VolumeSS = sampler_state { Texture = <Volume>; Filter = MIN_MAG_MIP_LINEAR; AddressU = Border; // border sampling in U AddressV = Border; // border sampling in V AddressW = Border; // border sampling in W BorderColor = float4(0, 0, 0, 0); // outside of border should be black }; struct VertexShaderInput { float3 Position : POSITION; float4 texC : COLOR; }; struct VertexShaderOutput { float4 Position : SV_POSITION; float3 texC : TEXCOORD0; float4 pos : TEXCOORD1; }; VertexShaderOutput PositionVS(VertexShaderInput input) { VertexShaderOutput output; output.Position = float4(input.Position, 1.0); output.Position = mul(output.Position * ScaleFactor, WorldViewProj); output.texC = input.texC.xyz; output.pos = output.Position; return output; } float4 PositionPS(VertexShaderOutput input) : SV_TARGET // : COLOR0 { return float4(input.texC, 1.0f); } float4 WireFramePS(VertexShaderOutput input) : SV_TARGET // : COLOR0 { return float4(1.0f, .5f, 0.0f, .85f); } //draws the front or back positions, or the ray direction through the volume float4 DirectionPS(VertexShaderOutput input) : SV_TARGET // : COLOR0 { float2 texC = input.pos.xy /= input.pos.w; texC.x = 0.5f * texC.x + 0.5f; texC.y = -0.5f * texC.y + 0.5f; float3 front = Front.Sample(FrontSS, texC).rgb;// tex2D(FrontS, texC).rgb; float3 back = Back.Sample(BackSS, texC).rgb; // tex2D(BackS, texC).rgb; if(Side == 0) { float4 res = float4(front, 1.0f); return res; } if(Side == 1) { float4 
res = float4(back, 1.0f); return res; } return float4(abs(back - front), 1.0f); } float TrilinearSample(float3 pos) { float X = pos.x * Width; float Y = pos.y * Height; float Z = pos.z * Depth; float iX = floor(X); float iY = floor(Y); float iZ = floor(Z); float iXn = iX + 1; float iYn = iY + 1; float iZn = iZ + 1; float XD = X - iX; float YD = Y - iY; float ZD = Z - iZ; float LL = lerp(Volume[float3(iX, iY, iZ)], Volume[float3(iX, iY, iZn)], ZD); float LR = lerp(Volume[float3(iXn, iY, iZ)], Volume[float3(iXn, iY, iZn)], ZD); float UL = lerp(Volume[float3(iX, iYn, iZ)], Volume[float3(iX, iYn, iZn)], ZD); float UR = lerp(Volume[float3(iXn, iYn, iZ)], Volume[float3(iXn, iYn, iZn)], ZD); float L = lerp(LL, UL, YD); float R = lerp(LR, UR, YD); //return ZD; return lerp(L, R, XD); return 0.0F; } float4 RayCastSimplePS(VertexShaderOutput input) : SV_TARGET // : COLOR0 { //calculate projective texture coordinates //used to project the front and back position textures onto the cube float2 texC = input.pos.xy /= input.pos.w; texC.x = 0.5f* texC.x + 0.5f; texC.y = -0.5f* texC.y + 0.5f; float3 front = Front.Sample(FrontSS, texC).rgb; // tex2D(FrontS, texC).xyz; float3 back = Back.Sample(BackSS, texC).rgb; // tex2D(BackS, texC).xyz; float3 dir = normalize(back - front); float4 pos = float4(front, 0); float4 dst = float4(0, 0, 0, 0); float4 src = 0; float value = 0; //Iterations = 1500; float3 Step = dir * StepSize; // / (float)Iterations; float3 TotalStep = float3(0, 0, 0); value = Volume.Sample(VolumeSS, pos.xyz).r; int i = 0; for(i = 0; i < Iterations; i++) { pos.w = 0; //value = Volume.SampleLevel(VolumeSS, pos.xyz, 0); value = TrilinearSample(pos.xyz); // tex3Dlod(VolumeS, pos).r; // Radar reflectivity related threshold values if (value < 40) value = 40; if (value > 60) value = 60; value = (value - 40.0) / 20.0; src = (float4)(value); src.a /= (Iterations / 50.0); //Front to back blending // dst.rgb = dst.rgb + (1 - dst.a) * src.a * src.rgb // dst.a = dst.a + (1 - dst.a) * src.a src.rgb *= src.a; dst = (1.0f - dst.a) * src + dst; //break from the loop when alpha gets high enough if (dst.a >= .95f) break; //advance the current position pos.xyz += Step; TotalStep += Step; //break if the position is greater than <1, 1, 1> if (pos.x > 1.0f || pos.y > 1.0f || pos.z > 1.0f || pos.x < 0.0f || pos.y < 0.0f || pos.z < 0.0f) break; } return dst; } technique11 RenderPosition { pass Pass1 { SetVertexShader(CompileShader(vs_4_0, PositionVS())); SetGeometryShader(NULL); SetPixelShader(CompileShader(ps_4_0, PositionPS())); //VertexShader = compile vs_2_0 PositionVS(); //PixelShader = compile ps_2_0 PositionPS(); } } technique11 RayCastDirection { pass Pass1 { SetVertexShader(CompileShader(vs_4_0, PositionVS())); SetGeometryShader(NULL); SetPixelShader(CompileShader(ps_4_0, DirectionPS())); //VertexShader = compile vs_2_0 PositionVS(); //PixelShader = compile ps_2_0 DirectionPS(); } } technique11 RayCastSimple { pass Pass1 { SetVertexShader(CompileShader(vs_4_0, PositionVS())); SetGeometryShader(NULL); SetPixelShader(CompileShader(ps_4_0, RayCastSimplePS())); //VertexShader = compile vs_3_0 PositionVS(); //PixelShader = compile ps_3_0 RayCastSimplePS(); } } technique11 WireFrame { pass Pass1 { SetVertexShader(CompileShader(vs_4_0, PositionVS())); SetGeometryShader(NULL); SetPixelShader(CompileShader(ps_4_0, WireFramePS())); //VertexShader = compile vs_2_0 PositionVS(); //PixelShader = compile ps_2_0 WireFramePS(); } } Any insight is hugely appreciated, whether on the specific problem or just random things I'm 
doing wrong. With the coordinates in the Texture3D being so messed up, I'm surprised this renders at all, let alone close to correctly. Thank you in advance!
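    One note on the TrilinearSample function above, offered as a sketch rather than a diagnosis (the follow-up at the top of this feed traces the artifacts to the precision of the front/back position textures): hardware linear filtering treats texel centers as sitting at (i + 0.5)/N in normalized coordinates, so a manual trilinear fetch usually subtracts half a texel before splitting into integer and fractional parts. A hypothetical texel-center-aware version, assuming the same Volume, Width, Height, and Depth globals:

        float TrilinearSampleCentered(float3 pos)
        {
            // Map normalized [0,1] coordinates into texel space; texel centers sit at i + 0.5.
            float3 dims = float3(Width, Height, Depth);
            float3 t = pos * dims - 0.5;
            float3 i0 = floor(t);
            float3 f = t - i0;       // fractional part used for the three lerps
            float3 i1 = i0 + 1;

            // Keep the +1 neighbors inside the volume.
            i0 = clamp(i0, 0.0, dims - 1.0);
            i1 = clamp(i1, 0.0, dims - 1.0);

            float c000 = Volume[uint3(i0.x, i0.y, i0.z)];
            float c100 = Volume[uint3(i1.x, i0.y, i0.z)];
            float c010 = Volume[uint3(i0.x, i1.y, i0.z)];
            float c110 = Volume[uint3(i1.x, i1.y, i0.z)];
            float c001 = Volume[uint3(i0.x, i0.y, i1.z)];
            float c101 = Volume[uint3(i1.x, i0.y, i1.z)];
            float c011 = Volume[uint3(i0.x, i1.y, i1.z)];
            float c111 = Volume[uint3(i1.x, i1.y, i1.z)];

            float c00 = lerp(c000, c100, f.x);
            float c10 = lerp(c010, c110, f.x);
            float c01 = lerp(c001, c101, f.x);
            float c11 = lerp(c011, c111, f.x);
            return lerp(lerp(c00, c10, f.y), lerp(c01, c11, f.y), f.z);
        }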
  3. NyquistVelocity

    3D HLSL Geometry Shader not emitting triangles?

    The reason I'm using a geometry shader is to reduce memory usage. For a typical scan from one of the weather radars commonly found in the US, there are 720 rays with 1,832 pixels ("gates") per ray, for 1,319,040 total observations per sweep. I used to split each observation into its two component triangles, pre-calculate the geometry (with double precision) on the CPU, and create a vertex buffer at load time. But with a latitude and longitude (each 4 bytes) plus a ray and gate index (each 2 bytes) in every vertex, that's 12 bytes per vertex, or 72 bytes per observation across its six vertices, using ~95MB of memory for geometry alone. Obviously, this is a terrible way to handle sweep geometry.

    Fortunately, the beam propagates at regular intervals down its length, so I can pass Texture2Ds of those per-ray properties to the geometry shader and use the ray and gate indices to calculate latitudes and longitudes for radar observations (a rough sketch of the idea follows below). This method means I'm consuming only ~5-10MB of memory to define geometry. For a real-world example, some data I have here from Hurricane Harvey takes 47MB with a precalculated vertex buffer vs. just over 2MB using a geometry shader. Unfortunately, the math I'm using in the geometry shader right now has about 1m (~0.00001°) precision, compared to the CPU math, which could geolocate to about 1cm. So my new challenge is to come up with a more precise way of calculating latitudes and longitudes using float math, to retain speed.

    I also need to look into best practices regarding GPU memory management. Right now I just tie up GPU memory whether or not you're looking at a given dataset - there has to be an intelligent way of swapping data in and out of the GPU as needed, otherwise modern games wouldn't look as good as they do.

    This (the 'w' component) turned out to be the problem. I got this working very late last night. I'm not super familiar with what the 'w' component is typically used for, so I'll be doing some Googling...
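    To make the per-ray texture idea above concrete, here's a rough HLSL sketch with hypothetical resource names and a simple linear-stepping approximation; the real geometry shader would still need the more careful (and more precise) lat/lon math discussed above:

        Texture2D<float2> rayOriginLatLon : register(t2); // hypothetical: per-ray starting lat/lon
        Texture2D<float2> rayGateStep     : register(t3); // hypothetical: per-ray lat/lon increment per gate

        float2 GateLatLon(int rayIndex, int gateIndex)
        {
            // The beam samples at regular intervals along its length, so gate N
            // sits N increments from the ray's starting position.
            float2 origin  = rayOriginLatLon[int2(rayIndex, 0)];
            float2 perGate = rayGateStep[int2(rayIndex, 0)];
            return origin + perGate * gateIndex;
        }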
  4. NyquistVelocity

    3D HLSL Geometry Shader not emitting triangles?

    I didn't jump to D3D11 because I was concerned about hardware compatibility, weirdly. Thanks for pointing that out, I'll change my code over now and fire up the graphics debugger! Edit: My code renders to a texture, so there's no present call for the graphics debugger to see. Hmm.
  5. Hello everyone! I'm an atmospheric scientist working on radar data visualization software and have hit a bit of a snag with my move to DX10 and the implementation of a geometry shader. Here's an example of the interface, showing several different products from a mobile radar dataset. In the build shown, I precalculate the positions of all the gates (basically, radar pixels) in the dataset and pass that to the VShader and PShader for transformation to screen coordinates and coloration based on color tables. I recently implemented a GShader to expand my road layers so as to have them be more than one pixel wide, and want to implement a GShader for data so that I can dramatically decrease the memory load of a dataset (long-range radar datasets can consume >1GB of video memory... not great).

    I initially wrote the whole shader implementation, but when it didn't work I backed way off and have just been trying to get the GShader to emit triangles that form a quad in the middle of each frame. In the input assembler stage, I'm passing the VShader two 2-byte integers: a beam index (to know which direction the antenna is pointing) and a gate index (range from radar). Below is my passthrough VShader (since all the actual geographical geometry is going to need to be calculated in the GShader stage). I put the "POSITION" semantic in VOut thinking that vertices without a defined position were getting culled, but that apparently is not the case. There are a few other radar-related fields in there (Value, FilterValue, SpecialValue), but I think we can safely ignore those, unless inclusion of them has pushed my vertex size over some limit and is the cause of my problems.

        struct VOut
        {
            float4 Position : POSITION;
            int2 GateRay : GATE_RAY;
            float Value : VALUE;
            float FilterValue : FILTER_VALUE;
            int SpecialValue : SPECIAL_VALUE;
        };

        Texture2D<float> filterData : register(t0);
        Texture2D<float> valueData : register(t1);

        VOut VShader(int2 GateRay : POSITION)
        {
            VOut output;
            output.Position = float4(0.5, 0.5, 0.5, 0.0);
            output.GateRay = GateRay;
            output.Value = valueData[output.GateRay];
            output.FilterValue = filterData[output.GateRay];
            if (output.Value == -1.#INF)
                output.SpecialValue = 1;
            else if (output.Value == 1.#INF)
                output.SpecialValue = 2;
            else
                output.SpecialValue = 0;
            return output;
        }

    My dummy GShader code is below. I am intentionally winding one triangle the wrong way - I do this during shader development so that if I screw up badly I can see at least half of my triangles. At this point, I'm just trying to get something to show onscreen. I don't see anything glaringly wrong with it, but I suppose if I did I would have fixed it. I adapted this code from the GShader that expands my GIS road lines into rectangles. Unlike this one, that GShader works.

        struct PS_IN
        {
            float4 Position : SV_POSITION;
            int2 GateRay : GATE_RAY;
            float Value : VALUE;
            float FilterValue : FILTER_VALUE;
            int SpecialValue : SPECIAL_VALUE;
        };

        [maxvertexcount(6)]
        void GShader(point VOut gin[1], inout TriangleStream<PS_IN> triStream)
        {
            PS_IN v[4];
            v[0].Position = float4(-0.5, -0.5, 0.5, 0.0);
            v[0].GateRay = int2(1, 1);
            v[0].Value = 50.0;
            v[0].FilterValue = 0.0;
            v[0].SpecialValue = 0;
            v[1].Position = float4(0.5, -0.5, 0.5, 0.0);
            v[1].GateRay = int2(1, 1);
            v[1].Value = 50.0;
            v[1].FilterValue = 0.0;
            v[1].SpecialValue = 0;
            v[2].Position = float4(-0.5, 0.5, 0.5, 0.0);
            v[2].GateRay = int2(1, 1);
            v[2].Value = 50.0;
            v[2].FilterValue = 0.0;
            v[2].SpecialValue = 0;
            v[3].Position = float4(0.5, 0.5, 0.5, 0.0);
            v[3].GateRay = int2(1, 1);
            v[3].Value = 50.0;
            v[3].FilterValue = 0.0;
            v[3].SpecialValue = 0;

            triStream.Append(v[0]);
            triStream.Append(v[3]);
            triStream.Append(v[2]);
            triStream.RestartStrip();

            triStream.Append(v[0]);
            triStream.Append(v[3]);
            triStream.Append(v[1]);
            triStream.RestartStrip();
        }

    Below is the dummy pixel shader I'm using. It should just color my triangles white. Normally I use a pixel shader compiled from HLSL code I generate from a user-defined color table, but in the interest of reducing sophistication while debugging, I'm using this dummy.

        struct PS_IN
        {
            float4 Position : SV_POSITION;
            int2 GateRay : GATE_RAY;
            float Value : VALUE;
            float FilterValue : FILTER_VALUE;
            int SpecialValue : SPECIAL_VALUE;
        };

        float4 PShader(float4 Position : SV_POSITION, int2 GateRay : GATE_RAY, float Value : VALUE, float FilterValue : FILTER_VALUE, int SpecialValue : SPECIAL_VALUE) : SV_TARGET
        {
            return float4(1.0, 1.0, 1.0, 1.0);
        }

    Thanks in advance for the help!
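    For reference, the 'w' issue mentioned in the earlier reply in this feed: these positions go out through SV_POSITION, and the rasterizer divides x, y, and z by w, so emitting w = 0.0 produces degenerate clip-space coordinates and no visible triangles. A minimal sketch of the corrected assignments, assuming everything else in GShader stays the same:

        // Already-projected positions should carry w = 1.0 so the perspective
        // divide leaves x, y, and z unchanged.
        v[0].Position = float4(-0.5, -0.5, 0.5, 1.0);
        v[1].Position = float4( 0.5, -0.5, 0.5, 1.0);
        v[2].Position = float4(-0.5,  0.5, 0.5, 1.0);
        v[3].Position = float4( 0.5,  0.5, 0.5, 1.0);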