    How to Implement Custom UI Meshes in Unity

    Engines and Middleware

    Christopher Mielack

You want to create custom meshes for your Unity3D UI, but have found the documentation lacking?

In this article, I will describe

• How to implement a bare-essentials custom Unity UI mesh
• All the pitfalls that leave you looking at invisible or non-existent meshes

     

    TL;DR

    • To implement your own UI mesh, derive from MaskableGraphic and implement OnPopulateMesh().
• Don’t forget to call SetVerticesDirty()/SetMaterialDirty() upon changes to the texture or other editor-settable properties that influence your UI elements and should trigger a re-render.
• Don’t forget to set the UIVertex’s color; otherwise you won’t see anything due to alpha=0, i.e. full transparency.
• You can look at the full, minimal code example here.
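
Condensed into code, the TL;DR pattern looks roughly like this - a minimal sketch (the class name BareUiMesh is mine, purely for illustration; the real example is developed step by step below):

    using UnityEngine;
    using UnityEngine.UI;

    public class BareUiMesh : MaskableGraphic
    {
        protected override void OnPopulateMesh(VertexHelper vh)
        {
            vh.Clear();

            UIVertex vert = new UIVertex();
            vert.color = this.color; // default UIVertex color is (0,0,0,0), i.e. invisible!

            vert.position = new Vector2(0f, 0f);
            vh.AddVert(vert);
            vert.position = new Vector2(0f, 50f);
            vh.AddVert(vert);
            vert.position = new Vector2(50f, 50f);
            vh.AddVert(vert);

            vh.AddTriangle(0, 1, 2);
        }
    }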

     

    Of Rendering Mini Maps Inside Unity UI

My use case was simple: I wanted to create level previews for my current puzzle game project Puzzle Pelago, and I wanted to try making a simple tiling system based on a custom UI mesh. The requirements I was eyeing were that it should behave like all the other UI elements in Unity, i.e. it should fit inside its RectTransform, it should work inside a masked ScrollView, and it should respond to disabled-state tinting, since it would be living inside of a button.

    What I ended up with looks something like this:

[Screenshot: the finished tile-grid level preview inside the UI]

     

The path there was not that bad, but it was still frustrating at times, since all I found online were forum posts and Unity's own source code to go off of. So here I want to build a simplified example in which we will render a grid of textured quads inside a UI element, using one script. This should clear all the hurdles for building any kind of (flat, 2D) UI geometry you might want to build.

     

    Unity Scene Setup

    Alright, let’s set up the scene as follows: 

1. Open the Unity project and scene you want to work in. If there is no Canvas in the scene yet, create one! For this tutorial, I left all its properties at default.
2. Inside the Canvas, create a ScrollView - we will want to check that our new UI component works inside of that!
3. Inside the ScrollView > Viewport > Content, create a new empty game object - let’s call it MyUiElement.
4. Add a CanvasRenderer component to the new game object, and then add a new script: MyUiElement. (If you prefer to do these two steps from code, see the sketch after this list.)
5. Open the new script in your favourite C# editor (I love Rider, btw), and go back to Unity’s scene.
6. To make our lives easier, we will want to set the Scene view’s render mode to “Shaded Wireframe” so we can see our UI mesh geometry in detail. It is also useful to switch to the 2D view perspective, select our “MyUiElement” object, and press F so Unity zooms in just right.
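
For completeness, steps 3 and 4 can also be done from code. Here is a small, hypothetical editor-scripting sketch (MyUiElementSetup is my own helper name, not part of the tutorial):

    using UnityEngine;

    public static class MyUiElementSetup
    {
        // Creates "MyUiElement" as a child of the ScrollView's Content,
        // mirroring steps 3 and 4 above.
        public static MyUiElement Create(Transform scrollViewContent)
        {
            var go = new GameObject("MyUiElement", typeof(RectTransform));
            go.transform.SetParent(scrollViewContent, false);

            // Graphic-derived components need a CanvasRenderer. Adding it
            // explicitly mirrors the manual step; AddComponent<MyUiElement>()
            // would also pull one in via [RequireComponent].
            go.AddComponent<CanvasRenderer>();
            return go.AddComponent<MyUiElement>();
        }
    }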

     

[Screenshots: Scene view set to Shaded Wireframe, and the 2D view focused on MyUiElement]

     

    Implementing the Custom Unity UI Mesh Script in C#

    Now we can go ahead and implement our new C# script!

First off, our new script needs to derive at least from Graphic. But if masking (inside of ScrollViews, for example) needs to work, we had better derive from MaskableGraphic; otherwise, our graphics will render outside the mask, too.

    Also, we want to be able to set the size of the grid cells in the editor, so we should add a public field for that.

    using UnityEngine;
    using UnityEngine.UI;

    public class MyUiElement : MaskableGraphic
    {
        public float GridCellSize = 40f;

     

    Next, we want to be able to use a texture for our UI elements. Looking at Unity’s own implementation, e.g. that of the Graphic (source code) base class or the default Image (source code) UI element, we can see that a common pattern is to …

    • … define Texture/Material slots as properties, such that when the texture is changed in the inspector, we can trigger Unity UI to re-render even while in edit mode. This is done by calling SetMaterialDirty() and SetVerticesDirty().
    • … implement mainTexture as a default overridden property such that if no texture is provided, we return the default white texture.
        [SerializeField]
        Texture m_Texture;
        
    // Make it such that Unity will trigger our UI element to redraw whenever we change the texture in the inspector
        public Texture texture
        {
            get
            {
                return m_Texture;
            }
            set
            {
                if (m_Texture == value)
                    return;
     
                m_Texture = value;
                SetVerticesDirty();
                SetMaterialDirty();
            }
        }
        public override Texture mainTexture
        {
            get
            {
                return m_Texture == null ? s_WhiteTexture : m_Texture;
            }
        }

     

Next, we have to override OnPopulateMesh() to do our rendering. It takes a useful little helper object for building meshes, the VertexHelper, as its argument. It tracks the vertex indices for you and lets you add vertices, UVs, and triangles without having to do lots of array arithmetic and index tracking. It must be Clear()’ed before building a new mesh.

    I found it useful (and you may, too) to use a little quad-making helper function, AddQuad():

    // Helper to easily create quads for our UI mesh. You could make any
    // triangle-based geometry other than quads, too!
    void AddQuad(VertexHelper vh, Vector2 corner1, Vector2 corner2, Vector2 uvCorner1, Vector2 uvCorner2)
    {
        var i = vh.currentVertCount;
            
        UIVertex vert = new UIVertex();
        vert.color = this.color;  // Do not forget to set this; otherwise the default color is (0,0,0,0) and the quad is fully transparent!
    
            vert.position = corner1;
            vert.uv0 = uvCorner1;
            vh.AddVert(vert);
    
            vert.position = new Vector2(corner2.x, corner1.y);
            vert.uv0 = new Vector2(uvCorner2.x, uvCorner1.y);
            vh.AddVert(vert);
    
            vert.position = corner2;
            vert.uv0 = uvCorner2;
            vh.AddVert(vert);
    
            vert.position = new Vector2(corner1.x, corner2.y);
            vert.uv0 = new Vector2(uvCorner1.x, uvCorner2.y);
            vh.AddVert(vert);
                
            vh.AddTriangle(i+0,i+2,i+1);
            vh.AddTriangle(i+3,i+2,i+0);
        }
    
        // actually update our mesh
        protected override void OnPopulateMesh(VertexHelper vh)
        {
            // Let's make sure we don't enter infinite loops
            if (GridCellSize <= 0)
            {
                GridCellSize = 1f;
                Debug.LogWarning("GridCellSize must be positive number. Setting to 1 to avoid problems.");            
            }
            
            // Clear vertex helper to reset vertices, indices etc.
            vh.Clear();
            
            // Bottom left corner of the full RectTransform of our UI element
            var bottomLeftCorner = new Vector2(0,0) - rectTransform.pivot;
            bottomLeftCorner.x *= rectTransform.rect.width;
            bottomLeftCorner.y *= rectTransform.rect.height;
    
            // Place as many square grid tiles as fit inside our UI RectTransform, at any given GridCellSize
        for (float x = 0; x + GridCellSize <= rectTransform.rect.width; x += GridCellSize)
        {
            for (float y = 0; y + GridCellSize <= rectTransform.rect.height; y += GridCellSize)
                {
                    AddQuad(vh, 
                        bottomLeftCorner + x*Vector2.right + y*Vector2.up,
                        bottomLeftCorner + (x+GridCellSize)*Vector2.right + (y+GridCellSize)*Vector2.up,
                        Vector2.zero, Vector2.one); // UVs
                }
            }
            
            Debug.Log("Mesh was redrawn!");
        }

     

Note that in the AddQuad() function, we set position, UV, and color! This matters because in the default UI material, the texture is multiplied with the vertex color. Leaving the color at its default, i.e. (r=0, g=0, b=0, a=0), yields a 100% transparent material. So all you see is nothing, and if you are wondering why, this might be it. Here we use the component’s inherited color slot.
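
To illustrate that inherited color slot, here is a tiny, hypothetical script (TintExample is my name, not part of the original example) that tints the whole grid from the outside. The Graphic.color setter marks the vertices dirty, so OnPopulateMesh() runs again and bakes the new tint into every vertex:

    using UnityEngine;

    public class TintExample : MonoBehaviour
    {
        void Start()
        {
            // Setting Graphic.color triggers SetVerticesDirty(), so the
            // mesh is rebuilt with the new vertex color.
            var grid = GetComponent<MyUiElement>();
            grid.color = new Color(1f, 0f, 0f, 0.5f); // half-transparent red
        }
    }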

    Since we want our grid to update whenever the RectTransform is resized, we should also override OnRectTransformDimensionsChange():

        protected override void OnRectTransformDimensionsChange()
        {
            base.OnRectTransformDimensionsChange();
            SetVerticesDirty();
            SetMaterialDirty();
        }

     

This should do it. Now, back in our Unity scene, we should see a grid of white squares inside our RectTransform. To change this, we can select one of Unity’s default textures in our texture slot.

[Screenshot: the grid of quads with a default Unity texture applied]
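
The texture can also be assigned from code. Here is a small sketch (TextureSwapExample is hypothetical), relying on the dirty-flagging texture property we defined earlier:

    using UnityEngine;

    public class TextureSwapExample : MonoBehaviour
    {
        [SerializeField] Texture newTexture; // assign in the inspector

        void Start()
        {
            // Our texture property calls SetVerticesDirty() and
            // SetMaterialDirty(), so the grid re-renders automatically.
            GetComponent<MyUiElement>().texture = newTexture;
        }
    }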

     

    Adjusting the size of the RectTransform or the value of our Grid Cell Size, we can see that the grid updates automatically. Going into play mode, we should also be able to drag around the scroll view’s contents and have the grid be masked correctly. 

[Screenshots: resizing the RectTransform and changing Grid Cell Size updates the grid; in play mode, the ScrollView masks the grid correctly]

     

Conclusion

You can have a look at the full code example here.

Of course, we are not limited to rendering quads, either, since the basic geometry we created here consists of triangles. So any 2D mesh should be possible to draw, and in principle, it could be animated, too!
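
If you want to try animating it, one simple starting point (untested, and MyAnimatedUiElement is a hypothetical variant, not part of the example above) is to mark the vertices dirty every frame and feed a time-dependent value into your mesh-building code:

    using UnityEngine;

    public class MyAnimatedUiElement : MyUiElement
    {
        void Update()
        {
            // Forces OnPopulateMesh() to run again each frame; combine this
            // with e.g. Time.time-based UV offsets inside OnPopulateMesh()
            // to get scrolling or pulsing geometry. Cheap for a small grid.
            SetVerticesDirty();
        }
    }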

    Anyway, if anything in my writeup is unclear, don’t hesitate to ask questions in the comments or via Twitter, @hallgrimgames. 

    Good luck with your project!

     

    Note: This article was originally published on the Hallgrim Games blog, and is republished here with the kind permission of the author Christopher.



