Game Development Lessons - April 14, 2018 Weekly Update - Untouched Earth Game Development Blog

WaterMoon


Hello Readers! I had a great opportunity this past weekend to have some people play test the game. For the most part it went well…that is, until the frame rate dropped to the point of being unplayable, glitches reset the player to random places on the map, and the controller seemed to invert itself through an act of black magic. The experience was very enlightening and served as a gentle reminder that making games is a TON of work and requires loads of dedication to see it through to the end. In the spirit of humility, I thought I would use today's blog entry to discuss the lessons I learned and how I intend to move forward. I welcome any insights and comments, and I would love to hear about some of the lessons you've learned along the way.
 
Lessons
1) Play test early and often with people other than yourself.
2) Profile early and often on the target devices.
3) Don’t leave highly problematic bugs and errors for later.
4) Don’t change the algorithm for essential game mechanics too late in development.
5) Keep lots of backups.
6) Learn from others' mistakes.
 
Play test early and often with people other than yourself
This lesson was a hard one to learn. When the first level of the game was complete, I thought I could simply play test it myself and would get good feedback. This was true, but only to an extent. When I let other people play it, I quickly found out that the controls were confusing, the map didn't provide enough direction, and players would use the main character in ways I never even thought of. It would have been incredibly helpful to test the core mechanics on other people early on, while I was developing them, rather than waiting until multiple levels had been made. As it stands, I've spent around 12 hours changing the controls on the hero's climb animation so the mechanic feels more natural.
 
Profile early and often on the target devices
There were some serious frame rate drops going on that I didn't notice because I was only testing the game on my home PC, which has strong specs. When I put the game on my laptop, the frame rate dropped to the point of losing playability. I learned that profiling isn't something that needs to wait until the end of development; it would have been wise to profile as I built up the core mechanics. Running the profiler after each new level or major change, and responding to its results, is a great way to avoid these types of issues.
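To make this concrete, here's a rough sketch (in C#) of the kind of lightweight frame-time logger that would have caught the drops early. It's illustrative only: the 2,000-frame reporting window is arbitrary, and OnFrame() would be hooked into whatever update loop your engine exposes. A dedicated engine profiler gives far more detail, but even this much flags a bad level immediately.

using System;
using System.Diagnostics;

// Minimal frame-time tracker: call OnFrame() once per game-loop tick.
public sealed class FrameProfiler
{
    private readonly Stopwatch _clock = Stopwatch.StartNew();
    private long _lastTicks;
    private double _worstMs;
    private double _totalMs;
    private int _frames;

    public FrameProfiler()
    {
        _lastTicks = _clock.ElapsedTicks;
    }

    public void OnFrame()
    {
        long now = _clock.ElapsedTicks;
        double ms = (now - _lastTicks) * 1000.0 / Stopwatch.Frequency;
        _lastTicks = now;

        _totalMs += ms;
        _worstMs = Math.Max(_worstMs, ms);

        // Report every 2000 frames so spikes show up during normal play testing.
        if (++_frames == 2000)
        {
            Console.WriteLine($"avg {_totalMs / _frames:F2} ms, worst {_worstMs:F2} ms");
            _frames = 0;
            _totalMs = 0;
            _worstMs = 0;
        }
    }
}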
 
Don’t leave highly problematic bugs and errors for later
It's easy to become so fixated on finishing the game that errors and bugs get pushed off until later. I find, however, that this kind of procrastination produces errors that compound on one another: a problem with the hang animation caused a problem with the physics system, which in turn caused a frame rate drop, and so on. Fixing these errors and bugs as they appear, to keep a strong foundation under the game mechanics, is essential.
 
Don’t change the algorithm for essential game mechanics too late in development
In an attempt to fix the problems with my climb mechanic, I decided that the current algorithm had too many bugs to be salvageable. I spent a good number of hours creating a new algorithm that ended up failing miserably. The worst part is that I mangled the main character's script so badly in the process that I had to revert to an earlier backup. After restoring the backup, I discovered a way to upgrade the mechanic and fix many of the bugs. Nothing is more heartbreaking than losing hours to unnecessary work, which leads me to the final lesson.
 
Keep lots of backups
After every major change, make a backup. After every development session, make a backup. After every optimization, make a backup. The point is, keep lots of backups. I made the mistake of optimizing a lot of code and then trying to redo the algorithm for the climb mechanic. Had I made a backup AFTER the optimization, I wouldn’t have spent so much time redoing work.
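One way to make those backups painless is to script them. Below is a rough C# sketch of the idea: copy the whole project folder to a timestamped directory after each session. The paths here are hypothetical, and proper version control (git with a remote, say) is an even stronger safety net than folder copies.

using System;
using System.IO;

// Rough sketch: snapshot a project folder into a timestamped backup directory.
// Both paths are made up for illustration; point them at your own project and backup drive.
class ProjectBackup
{
    static void Main()
    {
        string source = @"C:\Projects\UntouchedEarth";
        string target = Path.Combine(@"D:\Backups",
            "UntouchedEarth_" + DateTime.Now.ToString("yyyyMMdd_HHmmss"));

        Directory.CreateDirectory(target);

        // Recreate the directory tree, then copy every file into it.
        foreach (string dir in Directory.GetDirectories(source, "*", SearchOption.AllDirectories))
            Directory.CreateDirectory(dir.Replace(source, target));

        foreach (string file in Directory.GetFiles(source, "*", SearchOption.AllDirectories))
            File.Copy(file, file.Replace(source, target));

        Console.WriteLine("Backup written to " + target);
    }
}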
I know there are a lot of lessons left for me to learn.
 
Learn from others' mistakes
All these lessons bring me to my final point: learn from others' mistakes. I do a good bit of reading about the challenges other developers experience, but it is so easy to think "that won't happen to me." Humbling oneself and learning from another person's mistakes is wise, and I plan to listen to the advice of seasoned developers.
 
Which brings me to my final question: what lessons have you learned about development along the way?
_______________________________________________________________________________________
As always, thank you for reading. I would love to hear any comments or ideas you have. Feel free to leave a comment or email me at watermoongames@gmail.com.
_________________________________________________________________________________________


2 Comments


Recommended Comments

Good info, thanks! This type of thing always seems simple but bears repeating, because I can't even count the number of times I've accidentally lost a day's worth of work by neglecting to hit save before the editor crashes on me. Save early, save often!

 


I couldn't agree more! I've definitely had my fair share of lost work from not saving. Thank you for the reminder!


  • Similar Content

    • By seedofpanic
      Hi everyone,
      My friend and I are working on a 2D action shooting game.
      You can check out a tech demo of the core mechanic here: https://spidamoo.itch.io/bunny
      So, as you can see, we really need an artist. (And probably for other projects after.)
       
    • By NyquistVelocity
      I'm having trouble wrapping my brain around what actually is the issue here, but the sampler I'm using in my volume renderer is only interpolating the 3D texture along the Y axis.
      I roughly followed (and borrowed a lot of code from) this tutorial, but I'm using SlimDX and WPF: http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
      Here's an example, showing voxel-ish artifacts on the X and Z axes, which are evidently not being interpolated:

      ...whereas on the Y axis it appears to be interpolating correctly:

      If I disable any kind of interpolation in the sampler, the whole volume ends up looking voxel-ish / bad:

      Thinking maybe my hardware didn't support 3D textures (even though it's modern?), I wrote a little trilinear interpolation function and got the same results.
      In the trilinear code, I calculate the position of the ray in grid coordinates, and use the fractional portion to do the lerps.
      So I experimented by painting the fractional part of the grid coordinate where each ray starts onto my geometry, cast to a float4. As expected, the Y axis looks good: my input dataset has 30 layers, so I see a white => black fade 30 times:

      However, my X and Z fractional values are strange. What I should be seeing is the same white => black fade 144 and 145 times, respectively. But what I get is this:


      ... which is definitely not right. The values are A) discretized and uniform per grid cell, and B) exhibit a pattern that repeats every handful of grid rows, instead of a smooth fade on each cell.
      My suspicion is that I'm initializing my texture badly, but here's a look at the whole pipeline from initialization to rendering:
      1) Loading data from a file, then constructing all my rendering-related objects:
      Data = new GURUGridFile(@"E:\GURU2 Test Data\GoshenDual\Finished\30_DOW7_(X)_20090605_220006.ggf");

      double DataX = Data.CellSize[0] * Data.Dimensions[0];
      double DataY = Data.CellSize[1] * Data.Dimensions[1];
      double DataZ = Data.CellSize[2] * Data.Dimensions[2];
      double MaxSize = Math.Max(DataX, Math.Max(DataY, DataZ));
      DataX /= MaxSize;
      DataY /= MaxSize;
      DataZ /= MaxSize;
      Renderer.XSize = (float)DataX;
      Renderer.YSize = (float)DataY;
      Renderer.ZSize = (float)DataZ;

      int ProductCode = Data.LayerProducts[0].ToList().IndexOf("A_DZ");
      float[,,] RadarData = new float[Data.Dimensions[0], Data.Dimensions[1], Data.Dimensions[2]];
      for (int x = 0; x < Data.Dimensions[0]; x++)
          for (int y = 0; y < Data.Dimensions[1]; y++)
              for (int z = 0; z < Data.Dimensions[2]; z++)
                  RadarData[x, y, z] = Data.Data[z][ProductCode][x, y];

      int DataSize = Math.Max(RadarData.GetLength(0), Math.Max(RadarData.GetLength(1), RadarData.GetLength(2)));
      int mWidth = RadarData.GetLength(0);
      int mHeight = RadarData.GetLength(2);
      int mDepth = RadarData.GetLength(1);
      float mStepScale = 1.0F;
      float maxSize = (float)Math.Max(mWidth, Math.Max(mHeight, mDepth));
      SlimDX.Vector3 stepSize = new SlimDX.Vector3(
          1.0f / (mWidth * (maxSize / mWidth)),
          1.0f / (mHeight * (maxSize / mHeight)),
          1.0f / (mDepth * (maxSize / mDepth)));

      VolumeRenderer = new VolumeRenderEngine(false, Renderer.device);
      VolumeRenderer.Data = VolumeRenderTest.Rendering.TextureObject3D.FromData(RadarData);
      VolumeRenderer.StepSize = stepSize * mStepScale;
      VolumeRenderer.Iterations = (int)(maxSize * (1.0f / mStepScale) * 2.0F);

      Renderer.Initialize();
      SetupSlimDX();

      this.VolumeRenderer.DataWidth = Data.Dimensions[0];
      this.VolumeRenderer.DataHeight = Data.Dimensions[2];
      this.VolumeRenderer.DataDepth = Data.Dimensions[1];

      It's worth noting here that I flip the Z and Y axes when passing data to the volume renderer so as to comply with DirectX coordinates.
      Next is my construction of the Texture3D and related fields. This is the step I think I'm messing up, both in terms of correctness and in general violation of best practices.
      public static TextureObject3D FromData(float[,,] Data)
      {
          Texture3DDescription texDesc = new Texture3DDescription()
          {
              BindFlags = SlimDX.Direct3D11.BindFlags.ShaderResource,
              CpuAccessFlags = SlimDX.Direct3D11.CpuAccessFlags.None,
              Format = SlimDX.DXGI.Format.R32_Float,
              MipLevels = 1,
              OptionFlags = SlimDX.Direct3D11.ResourceOptionFlags.None,
              Usage = SlimDX.Direct3D11.ResourceUsage.Default,
              Width = Data.GetLength(0),
              Height = Data.GetLength(2),
              Depth = Data.GetLength(1)
          };

          int i = 0;
          float[] FlatData = new float[Data.GetLength(0) * Data.GetLength(1) * Data.GetLength(2)];
          for (int y = 0; y < Data.GetLength(1); y++)
              for (int z = 0; z < Data.GetLength(2); z++)
                  for (int x = 0; x < Data.GetLength(0); x++)
                      FlatData[i++] = Data[x, y, z];

          DataStream TextureStream = new DataStream(FlatData, true, true);
          DataBox TextureBox = new DataBox(texDesc.Width * 4, texDesc.Width * texDesc.Height * 4, TextureStream);
          Texture3D valTex = new Texture3D(Renderer.device, texDesc, TextureBox);

          var viewDesc = new SlimDX.Direct3D11.ShaderResourceViewDescription()
          {
              Format = texDesc.Format,
              Dimension = SlimDX.Direct3D11.ShaderResourceViewDimension.Texture3D,
              MipLevels = texDesc.MipLevels,
              MostDetailedMip = 0,
              ArraySize = 1,
              CubeCount = 1,
              ElementCount = 1
          };
          ShaderResourceView valTexSRV = new ShaderResourceView(Renderer.device, valTex, viewDesc);

          TextureObject3D tex = new TextureObject3D();
          tex.Device = Renderer.device;
          tex.Size = TextureStream.Length;
          tex.TextureStream = TextureStream;
          tex.TextureBox = TextureBox;
          tex.Texture = valTex;
          tex.TextureSRV = valTexSRV;

          return tex;
      }

      The TextureObject3D class is just a helper class that I wrap around a Texture3D to make things a little simpler to work with.
      At the rendering phase, I draw the back and front faces of my geometry (that is colored according to the vertex coordinates) to textures so that ray starting and ending positions can be calculated, then pass all that nonsense to the effect.
      private void RenderVolume()
      {
          // Rasterizer states
          RasterizerStateDescription RSD_Front = new RasterizerStateDescription();
          RSD_Front.FillMode = SlimDX.Direct3D11.FillMode.Solid;
          RSD_Front.CullMode = CullMode.Back;
          RSD_Front.IsFrontCounterclockwise = false;

          RasterizerStateDescription RSD_Rear = new RasterizerStateDescription();
          RSD_Rear.FillMode = SlimDX.Direct3D11.FillMode.Solid;
          RSD_Rear.CullMode = CullMode.Front;
          RSD_Rear.IsFrontCounterclockwise = false;

          RasterizerState RS_OLD = Device.ImmediateContext.Rasterizer.State;
          RasterizerState RS_FRONT = RasterizerState.FromDescription(Renderer.device, RSD_Front);
          RasterizerState RS_REAR = RasterizerState.FromDescription(Renderer.device, RSD_Rear);

          // Calculate world view matrix
          Matrix wvp = _world * _view * _proj;
          RenderTargetView NullRTV = null;

          // First we need to render to the rear texture
          SetupBlend(false);
          PrepareRTV(RearTextureView);
          SetBuffers();
          Device.ImmediateContext.Rasterizer.State = RS_REAR;
          Renderer.RayCasting101FX_WVP.SetMatrix(wvp);
          Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor);
          ExecuteTechnique(Renderer.RayCasting101FX_RenderPosition);
          Device.ImmediateContext.Flush();
          Device.ImmediateContext.OutputMerger.SetTargets(NullRTV);

          // Now we draw to the front texture
          SetupBlend(false);
          PrepareRTV(FrontTextureView);
          SetBuffers();
          Device.ImmediateContext.Rasterizer.State = RS_FRONT;
          Renderer.RayCasting101FX_WVP.SetMatrix(wvp);
          Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor);
          ExecuteTechnique(Renderer.RayCasting101FX_RenderPosition);
          Device.ImmediateContext.Flush();
          Device.ImmediateContext.OutputMerger.SetTargets(NullRTV);

          SetupBlend(false);

          // Set Render Target View
          Device.ImmediateContext.OutputMerger.SetTargets(SampleRenderView);
          // Set Viewport
          Device.ImmediateContext.Rasterizer.SetViewports(new Viewport(0, 0, WindowWidth, WindowHeight, 0.0f, 1.0f));
          // Clear screen
          Device.ImmediateContext.ClearRenderTargetView(SampleRenderView, new Color4(1.0F, 0.0F, 0.0F, 0.0F));

          if (Wireframe)
          {
              RenderWireframeBack();
              Device.ImmediateContext.Rasterizer.State = RS_FRONT;
          }

          SetBuffers();

          // Render Position
          Renderer.RayCasting101FX_WVP.SetMatrix(wvp);
          Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor);
          Renderer.RayCasting101FX_Back.SetResource(new ShaderResourceView(Renderer.device, RearTexture));   // RearTextureSRV
          Renderer.RayCasting101FX_Front.SetResource(new ShaderResourceView(Renderer.device, FrontTexture)); // FrontTextureSRV
          Renderer.RayCasting101FX_Volume.SetResource(new ShaderResourceView(Renderer.device, Data.Texture));
          Renderer.RayCasting101FX_StepSize.Set(StepSize);
          Renderer.RayCasting101FX_Iterations.Set(Iterations);
          Renderer.RayCasting101FX_Width.Set(DataWidth);
          Renderer.RayCasting101FX_Height.Set(DataHeight);
          Renderer.RayCasting101FX_Depth.Set(DataDepth);
          ExecuteTechnique(Renderer.RayCasting101FX_RayCastSimple);

          if (Wireframe)
          {
              RenderWireframeFront();
              Device.ImmediateContext.Rasterizer.State = RS_FRONT;
          }

          int sourceSubresource = SlimDX.Direct3D11.Resource.CalculateSubresourceIndex(0, 1, 1);
          int destinationSubresource = SlimDX.Direct3D11.Resource.CalculateSubresourceIndex(0, 1, 1);
          Device.ImmediateContext.ResolveSubresource(MSAATexture, 0, SharedTexture, 0, Format.B8G8R8A8_UNorm);
          Device.ImmediateContext.Flush();

          CanvasInvalid = false;
          sw.Stop();
          this.LastFrame = sw.ElapsedTicks / 10000.0;
      }

      private void PrepareRTV(RenderTargetView rtv)
      {
          // Set Depth Stencil and Render Target View
          Device.ImmediateContext.OutputMerger.SetTargets(rtv);
          // Set Viewport
          Device.ImmediateContext.Rasterizer.SetViewports(new Viewport(0, 0, WindowWidth, WindowHeight, 0.0f, 1.0f));
          // Clear render target
          Device.ImmediateContext.ClearRenderTargetView(rtv, new Color4(1.0F, 0.0F, 0.0F, 0.0F));
      }

      private void SetBuffers()
      {
          // Setup buffer info
          Device.ImmediateContext.InputAssembler.InputLayout = Renderer.RayCastVBLayout;
          Device.ImmediateContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
          Device.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(Renderer.VertexBuffer, Renderer.VertexPC.Stride, 0));
          Device.ImmediateContext.InputAssembler.SetIndexBuffer(Renderer.IndexBuffer, Format.R32_UInt, 0);
      }

      private void ExecuteTechnique(EffectTechnique T)
      {
          for (int p = 0; p < T.Description.PassCount; p++)
          {
              T.GetPassByIndex(p).Apply(Device.ImmediateContext);
              Device.ImmediateContext.DrawIndexed(36, 0, 0);
          }
      }

      Finally, here's the shader in its entirety. The TrilinearSample function is supposed to compute a good, interpolated sample but is what ended up highlighting what the problem likely is. What it does, or at least attempts to do, is calculate the actual coordinate of the ray in the original grid coordinates, then use the decimal portion to do the interpolation.
      float4x4 World;
      float4x4 WorldViewProj;
      float4x4 WorldInvTrans;
      float3 StepSize;
      int Iterations;
      int Side;
      float4 ScaleFactor;
      int Width;
      int Height;
      int Depth;

      Texture2D<float3> Front;
      Texture2D<float3> Back;
      Texture3D<float1> Volume;

      SamplerState FrontSS = sampler_state
      {
          Texture = <Front>;
          Filter = MIN_MAG_MIP_POINT;
          AddressU = Border;                 // border sampling in U
          AddressV = Border;                 // border sampling in V
          BorderColor = float4(0, 0, 0, 0);  // outside of border should be black
      };

      SamplerState BackSS = sampler_state
      {
          Texture = <Back>;
          Filter = MIN_MAG_MIP_POINT;
          AddressU = Border;
          AddressV = Border;
          BorderColor = float4(0, 0, 0, 0);
      };

      SamplerState VolumeSS = sampler_state
      {
          Texture = <Volume>;
          Filter = MIN_MAG_MIP_LINEAR;
          AddressU = Border;
          AddressV = Border;
          AddressW = Border;
          BorderColor = float4(0, 0, 0, 0);
      };

      struct VertexShaderInput
      {
          float3 Position : POSITION;
          float4 texC : COLOR;
      };

      struct VertexShaderOutput
      {
          float4 Position : SV_POSITION;
          float3 texC : TEXCOORD0;
          float4 pos : TEXCOORD1;
      };

      VertexShaderOutput PositionVS(VertexShaderInput input)
      {
          VertexShaderOutput output;
          output.Position = float4(input.Position, 1.0);
          output.Position = mul(output.Position * ScaleFactor, WorldViewProj);
          output.texC = input.texC.xyz;
          output.pos = output.Position;
          return output;
      }

      float4 PositionPS(VertexShaderOutput input) : SV_TARGET
      {
          return float4(input.texC, 1.0f);
      }

      float4 WireFramePS(VertexShaderOutput input) : SV_TARGET
      {
          return float4(1.0f, .5f, 0.0f, .85f);
      }

      // draws the front or back positions, or the ray direction through the volume
      float4 DirectionPS(VertexShaderOutput input) : SV_TARGET
      {
          float2 texC = input.pos.xy /= input.pos.w;
          texC.x = 0.5f * texC.x + 0.5f;
          texC.y = -0.5f * texC.y + 0.5f;

          float3 front = Front.Sample(FrontSS, texC).rgb;
          float3 back = Back.Sample(BackSS, texC).rgb;

          if (Side == 0)
          {
              float4 res = float4(front, 1.0f);
              return res;
          }
          if (Side == 1)
          {
              float4 res = float4(back, 1.0f);
              return res;
          }
          return float4(abs(back - front), 1.0f);
      }

      float TrilinearSample(float3 pos)
      {
          float X = pos.x * Width;
          float Y = pos.y * Height;
          float Z = pos.z * Depth;

          float iX = floor(X);
          float iY = floor(Y);
          float iZ = floor(Z);

          float iXn = iX + 1;
          float iYn = iY + 1;
          float iZn = iZ + 1;

          float XD = X - iX;
          float YD = Y - iY;
          float ZD = Z - iZ;

          float LL = lerp(Volume[float3(iX, iY, iZ)], Volume[float3(iX, iY, iZn)], ZD);
          float LR = lerp(Volume[float3(iXn, iY, iZ)], Volume[float3(iXn, iY, iZn)], ZD);
          float UL = lerp(Volume[float3(iX, iYn, iZ)], Volume[float3(iX, iYn, iZn)], ZD);
          float UR = lerp(Volume[float3(iXn, iYn, iZ)], Volume[float3(iXn, iYn, iZn)], ZD);

          float L = lerp(LL, UL, YD);
          float R = lerp(LR, UR, YD);

          //return ZD;
          return lerp(L, R, XD);
      }

      float4 RayCastSimplePS(VertexShaderOutput input) : SV_TARGET
      {
          // calculate projective texture coordinates
          // used to project the front and back position textures onto the cube
          float2 texC = input.pos.xy /= input.pos.w;
          texC.x = 0.5f * texC.x + 0.5f;
          texC.y = -0.5f * texC.y + 0.5f;

          float3 front = Front.Sample(FrontSS, texC).rgb;
          float3 back = Back.Sample(BackSS, texC).rgb;

          float3 dir = normalize(back - front);
          float4 pos = float4(front, 0);
          float4 dst = float4(0, 0, 0, 0);
          float4 src = 0;
          float value = 0;

          float3 Step = dir * StepSize;
          float3 TotalStep = float3(0, 0, 0);
          value = Volume.Sample(VolumeSS, pos.xyz).r;

          int i = 0;
          for (i = 0; i < Iterations; i++)
          {
              pos.w = 0;
              //value = Volume.SampleLevel(VolumeSS, pos.xyz, 0);
              value = TrilinearSample(pos.xyz);

              // Radar reflectivity related threshold values
              if (value < 40) value = 40;
              if (value > 60) value = 60;
              value = (value - 40.0) / 20.0;

              src = (float4)(value);
              src.a /= (Iterations / 50.0);

              // Front to back blending
              // dst.rgb = dst.rgb + (1 - dst.a) * src.a * src.rgb
              // dst.a   = dst.a   + (1 - dst.a) * src.a
              src.rgb *= src.a;
              dst = (1.0f - dst.a) * src + dst;

              // break from the loop when alpha gets high enough
              if (dst.a >= .95f)
                  break;

              // advance the current position
              pos.xyz += Step;
              TotalStep += Step;

              // break if the position leaves the unit cube
              if (pos.x > 1.0f || pos.y > 1.0f || pos.z > 1.0f ||
                  pos.x < 0.0f || pos.y < 0.0f || pos.z < 0.0f)
                  break;
          }
          return dst;
      }

      technique11 RenderPosition
      {
          pass Pass1
          {
              SetVertexShader(CompileShader(vs_4_0, PositionVS()));
              SetGeometryShader(NULL);
              SetPixelShader(CompileShader(ps_4_0, PositionPS()));
          }
      }

      technique11 RayCastDirection
      {
          pass Pass1
          {
              SetVertexShader(CompileShader(vs_4_0, PositionVS()));
              SetGeometryShader(NULL);
              SetPixelShader(CompileShader(ps_4_0, DirectionPS()));
          }
      }

      technique11 RayCastSimple
      {
          pass Pass1
          {
              SetVertexShader(CompileShader(vs_4_0, PositionVS()));
              SetGeometryShader(NULL);
              SetPixelShader(CompileShader(ps_4_0, RayCastSimplePS()));
          }
      }

      technique11 WireFrame
      {
          pass Pass1
          {
              SetVertexShader(CompileShader(vs_4_0, PositionVS()));
              SetGeometryShader(NULL);
              SetPixelShader(CompileShader(ps_4_0, WireFramePS()));
          }
      }

      Any insight is hugely appreciated, whether on the specific problem or just random things I'm doing wrong.
      With the coordinates in the Texture3D being so messed up, I'm surprised this renders at all, let alone close to correctly.
      Thank you in advance!
    • By ggenije
      Picture 1 shows what I actually get; how do I make it look like picture 2?

    • By Valdiralita
      TLDR: Is there a way to "capture" a constant buffer in a command list (the way the InstanceCount in DrawIndexedInstanced is captured) so I can update it before the command list is executed?
      Hey,
      I want to draw millions of objects, and I use instancing to do so. My current implementation caches the matrix buffers, so I have a constant buffer for each model-material combination. This is done so I don't have to rebuild my buffers each frame, because most of my scene is static but can move at times. To update the constant buffers I have another thread that creates command lists for the updates and executes them on the immediate context. My render thread(s) also create command lists ahead of time to issue to the GPU when a new frame is needed. The matrix buffers are shared between multiple render threads.
      The result is that when an object changes color, and so moves from one model-material buffer to another, it is hidden for one frame and visible the next, or appears for one frame at a location where another object used to be. I speculate this is because the constant buffer for matrices is updated immediately, while the InstanceCount in the draw command list is not, which leads to matrices containing old or uninitialized memory.
      Is there a way to update my matrix constant buffers without stalling every render thread and invalidating all render command lists?
      regards
       
    • By Mercutio604x
      Hi,
      I am using Unity's Mecanim system.
      I have a layer that controls movement, i.e. idle -> run.
      It works fine: when I press W, I run forward with the animation via transform.position += anim.deltaPosition; // anim is on a child object that has the Animator
      I have a second layer for attack, set to override at 100%. On this layer just the head moves, and it is masked for the neck up. It plays when I press the mouse button.
       
      Now my problem is that when I press W and the mouse button together, it plays both animations BUT the character stops moving forward.
      There are no position keyframes on the attack, or any keyframes on the root node for that matter. (A minimal sketch of this setup, with assumed parameter names, appears below.)
       
      Please tell me if I am being unclear.
      Thanks in advance,
       
      Also, I lost my other account because I can't remember my password or the email I used. Is there a way I can get it back? For example, I give the account name and it tells me which email I used?
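      Regarding the Mecanim setup above, here is a minimal sketch of how the question reads. Everything in it is an assumption for illustration (the "Speed" and "Attack" parameter names, the script sitting on the character root); it is not code from the actual project.

      using UnityEngine;

      // Minimal sketch of the described setup. Assumed layout: this script is on
      // the character root, and "anim" is the Animator on the child object.
      // "Speed" and "Attack" are hypothetical animator controller parameters.
      public class HeroMotion : MonoBehaviour
      {
          [SerializeField] private Animator anim; // Animator on the child, as described

          private void Update()
          {
              // Base layer: idle -> run
              anim.SetFloat("Speed", Input.GetKey(KeyCode.W) ? 1f : 0f);

              // Override layer (masked for the neck up): head-only attack
              if (Input.GetMouseButtonDown(0))
                  anim.SetTrigger("Attack");

              // Movement comes from the base layer's root motion
              transform.position += anim.deltaPosition;
          }
      }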