    2. Send me a PM with the details and I can look it up for you.
    3. I'm having trouble wrapping my brain around what actually is the issue here, but the sampler I'm using in my volume renderer is only interpolating the 3D texture along the Y axis. I roughly followed (and borrowed a lot of code from) this tutorial, but I'm using SlimDX and WPF: http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html

Here's an example, showing voxel-ish artifacts on the X and Z axes, which are evidently not being interpolated: ...whereas on the Y axis it appears to be interpolating correctly: If I disable any kind of interpolation in the sampler, the whole volume ends up looking voxel-ish / bad:

Thinking maybe my hardware didn't support 3D textures (even though it's modern?), I wrote a little trilinear interpolation function, and got the same results. In the trilinear code, I calculate the position of the ray in grid coordinates, and use the fractional portion to do the lerps. So I experimented by just painting the fractional part of the grid coordinate where a ray starts onto my geometry, cast to a float4. As expected, the Y axis looks good, as my input dataset has 30 layers, so I see a white => black fade 30 times:

However, my X and Z fractional values are strange. What I should be seeing is the same white => black fade 144 and 145 times, respectively. But what I get is this: ...which is definitely not right. The values are A) discretized and uniform per grid cell, and B) exhibit a pattern that repeats every handful of grid rows, instead of a smooth fade within each cell.
My suspicion is that I'm initializing my texture badly, but here's a look at the whole pipeline from initialization to rendering.

1) Loading data from a file, then constructing all my rendering-related objects:

    Data = new GURUGridFile(@"E:\GURU2 Test Data\GoshenDual\Finished\30_DOW7_(X)_20090605_220006.ggf");
    double DataX = Data.CellSize[0] * Data.Dimensions[0];
    double DataY = Data.CellSize[1] * Data.Dimensions[1];
    double DataZ = Data.CellSize[2] * Data.Dimensions[2];
    double MaxSize = Math.Max(DataX, Math.Max(DataY, DataZ));
    DataX /= MaxSize;
    DataY /= MaxSize;
    DataZ /= MaxSize;
    Renderer.XSize = (float)DataX;
    Renderer.YSize = (float)DataY;
    Renderer.ZSize = (float)DataZ;
    int ProductCode = Data.LayerProducts[0].ToList().IndexOf("A_DZ");
    float[,,] RadarData = new float[Data.Dimensions[0], Data.Dimensions[1], Data.Dimensions[2]];
    for (int x = 0; x < Data.Dimensions[0]; x++)
        for (int y = 0; y < Data.Dimensions[1]; y++)
            for (int z = 0; z < Data.Dimensions[2]; z++)
                RadarData[x, y, z] = Data.Data[z][ProductCode][x, y];
    int DataSize = Math.Max(RadarData.GetLength(0), Math.Max(RadarData.GetLength(1), RadarData.GetLength(2)));
    int mWidth = RadarData.GetLength(0);
    int mHeight = RadarData.GetLength(2);
    int mDepth = RadarData.GetLength(1);
    float mStepScale = 1.0F;
    float maxSize = (float)Math.Max(mWidth, Math.Max(mHeight, mDepth));
    SlimDX.Vector3 stepSize = new SlimDX.Vector3(
        1.0f / (mWidth * (maxSize / mWidth)),
        1.0f / (mHeight * (maxSize / mHeight)),
        1.0f / (mDepth * (maxSize / mDepth)));
    VolumeRenderer = new VolumeRenderEngine(false, Renderer.device);
    VolumeRenderer.Data = VolumeRenderTest.Rendering.TextureObject3D.FromData(RadarData);
    VolumeRenderer.StepSize = stepSize * mStepScale;
    VolumeRenderer.Iterations = (int)(maxSize * (1.0f / mStepScale) * 2.0F);
    Renderer.Initialize();
    SetupSlimDX();
    this.VolumeRenderer.DataWidth = Data.Dimensions[0];
    this.VolumeRenderer.DataHeight = Data.Dimensions[2];
    this.VolumeRenderer.DataDepth = Data.Dimensions[1];

It's worth noting here that I
flip the Z and Y axes when passing data to the volume renderer so as to comply with DirectX coordinates.

Next is my construction of the Texture3D and related fields. This is the step I think I'm messing up, both in terms of correctness as well as general violation of best practices.

    public static TextureObject3D FromData(float[,,] Data)
    {
        Texture3DDescription texDesc = new Texture3DDescription()
        {
            BindFlags = SlimDX.Direct3D11.BindFlags.ShaderResource,
            CpuAccessFlags = SlimDX.Direct3D11.CpuAccessFlags.None,
            Format = SlimDX.DXGI.Format.R32_Float,
            MipLevels = 1,
            OptionFlags = SlimDX.Direct3D11.ResourceOptionFlags.None,
            Usage = SlimDX.Direct3D11.ResourceUsage.Default,
            Width = Data.GetLength(0),
            Height = Data.GetLength(2),
            Depth = Data.GetLength(1)
        };
        int i = 0;
        float[] FlatData = new float[Data.GetLength(0) * Data.GetLength(1) * Data.GetLength(2)];
        for (int y = 0; y < Data.GetLength(1); y++)
            for (int z = 0; z < Data.GetLength(2); z++)
                for (int x = 0; x < Data.GetLength(0); x++)
                    FlatData[i++] = Data[x, y, z];
        DataStream TextureStream = new DataStream(FlatData, true, true);
        DataBox TextureBox = new DataBox(texDesc.Width * 4, texDesc.Width * texDesc.Height * 4, TextureStream);
        Texture3D valTex = new Texture3D(Renderer.device, texDesc, TextureBox);
        var viewDesc = new SlimDX.Direct3D11.ShaderResourceViewDescription()
        {
            Format = texDesc.Format,
            Dimension = SlimDX.Direct3D11.ShaderResourceViewDimension.Texture3D,
            MipLevels = texDesc.MipLevels,
            MostDetailedMip = 0,
            ArraySize = 1,
            CubeCount = 1,
            ElementCount = 1
        };
        ShaderResourceView valTexSRV = new ShaderResourceView(Renderer.device, valTex, viewDesc);
        TextureObject3D tex = new TextureObject3D();
        tex.Device = Renderer.device;
        tex.Size = TextureStream.Length;
        tex.TextureStream = TextureStream;
        tex.TextureBox = TextureBox;
        tex.Texture = valTex;
        tex.TextureSRV = valTexSRV;
        return tex;
    }

The TextureObject3D class is just a helper class that I wrap around a Texture3D to make things a little simpler to work with.
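(The flattening loop above and the DataBox pitches have to agree exactly with the Width/Height/Depth in the texture description. As a language-agnostic sanity check, here is a small Python sketch, with made-up dimensions, of the layout a D3D11 Texture3D upload expects: x varies fastest within a row, rows stack along the height, slices along the depth, with RowPitch = Width * bytesPerTexel and SlicePitch = RowPitch * Height. If the loop nesting does not match, neighbouring texels along the mismatched axes end up far apart in memory, which produces exactly this kind of per-axis "no interpolation" look.)

```python
# Sketch of the x-fastest layout a D3D11 Texture3D upload expects.
# width/height/depth are hypothetical, not the poster's real dimensions.
width, height, depth = 4, 3, 2          # texels
bytes_per_texel = 4                     # R32_Float

row_pitch = width * bytes_per_texel     # bytes between successive rows
slice_pitch = row_pitch * height        # bytes between successive depth slices

def flatten(data):
    """data[x][y][z] -> flat list in the order the GPU reads it."""
    flat = []
    for z in range(depth):              # slices (outermost)
        for y in range(height):         # rows within a slice
            for x in range(width):      # texels within a row (fastest)
                flat.append(data[x][y][z])
    return flat

# Fill each texel with its own coordinate so we can check the layout.
data = [[[(x, y, z) for z in range(depth)] for y in range(height)] for x in range(width)]
flat = flatten(data)

# The texel at (x, y, z) lives at flat index x + y*width + z*width*height.
assert flat[1 + 2 * width + 1 * width * height] == (1, 2, 1)
```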
At the rendering phase, I draw the back and front faces of my geometry (colored according to the vertex coordinates) to textures so that ray starting and ending positions can be calculated, then pass all that nonsense to the effect.

    private void RenderVolume()
    {
        // Rasterizer states
        RasterizerStateDescription RSD_Front = new RasterizerStateDescription();
        RSD_Front.FillMode = SlimDX.Direct3D11.FillMode.Solid;
        RSD_Front.CullMode = CullMode.Back;
        RSD_Front.IsFrontCounterclockwise = false;
        RasterizerStateDescription RSD_Rear = new RasterizerStateDescription();
        RSD_Rear.FillMode = SlimDX.Direct3D11.FillMode.Solid;
        RSD_Rear.CullMode = CullMode.Front;
        RSD_Rear.IsFrontCounterclockwise = false;
        RasterizerState RS_OLD = Device.ImmediateContext.Rasterizer.State;
        RasterizerState RS_FRONT = RasterizerState.FromDescription(Renderer.device, RSD_Front);
        RasterizerState RS_REAR = RasterizerState.FromDescription(Renderer.device, RSD_Rear);
        // Calculate world view matrix
        Matrix wvp = _world * _view * _proj;
        RenderTargetView NullRTV = null;
        // First we need to render to the rear texture
        SetupBlend(false);
        PrepareRTV(RearTextureView);
        SetBuffers();
        Device.ImmediateContext.Rasterizer.State = RS_REAR;
        Renderer.RayCasting101FX_WVP.SetMatrix(wvp);
        Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor);
        ExecuteTechnique(Renderer.RayCasting101FX_RenderPosition);
        Device.ImmediateContext.Flush();
        Device.ImmediateContext.OutputMerger.SetTargets(NullRTV);
        // Now we draw to the front texture
        SetupBlend(false);
        PrepareRTV(FrontTextureView);
        SetBuffers();
        Device.ImmediateContext.Rasterizer.State = RS_FRONT;
        Renderer.RayCasting101FX_WVP.SetMatrix(wvp);
        Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor);
        ExecuteTechnique(Renderer.RayCasting101FX_RenderPosition);
        Device.ImmediateContext.Flush();
        Device.ImmediateContext.OutputMerger.SetTargets(NullRTV);
        SetupBlend(false);
        // Set Render Target View
        Device.ImmediateContext.OutputMerger.SetTargets(SampleRenderView);
        // Set Viewport
        Device.ImmediateContext.Rasterizer.SetViewports(new Viewport(0, 0, WindowWidth, WindowHeight, 0.0f, 1.0f));
        // Clear screen
        Device.ImmediateContext.ClearRenderTargetView(SampleRenderView, new Color4(1.0F, 0.0F, 0.0F, 0.0F));
        if (Wireframe)
        {
            RenderWireframeBack();
            Device.ImmediateContext.Rasterizer.State = RS_FRONT;
        }
        SetBuffers();
        // Render Position
        Renderer.RayCasting101FX_WVP.SetMatrix(wvp);
        Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor);
        Renderer.RayCasting101FX_Back.SetResource(new ShaderResourceView(Renderer.device, RearTexture)); // RearTextureSRV
        Renderer.RayCasting101FX_Front.SetResource(new ShaderResourceView(Renderer.device, FrontTexture)); // FrontTextureSRV
        Renderer.RayCasting101FX_Volume.SetResource(new ShaderResourceView(Renderer.device, Data.Texture));
        Renderer.RayCasting101FX_StepSize.Set(StepSize);
        Renderer.RayCasting101FX_Iterations.Set(Iterations);
        Renderer.RayCasting101FX_Width.Set(DataWidth);
        Renderer.RayCasting101FX_Height.Set(DataHeight);
        Renderer.RayCasting101FX_Depth.Set(DataDepth);
        ExecuteTechnique(Renderer.RayCasting101FX_RayCastSimple);
        if (Wireframe)
        {
            RenderWireframeFront();
            Device.ImmediateContext.Rasterizer.State = RS_FRONT;
        }
        int sourceSubresource;
        sourceSubresource = SlimDX.Direct3D11.Resource.CalculateSubresourceIndex(0, 1, 1); // MSAATexture.CalculateSubResourceIndex(0, 0, out sourceMipLevels);
        int destinationSubresource;
        destinationSubresource = SlimDX.Direct3D11.Resource.CalculateSubresourceIndex(0, 1, 1); // m_renderTarget.CalculateSubResourceIndex(0, 0, out destinationMipLevels);
        Device.ImmediateContext.ResolveSubresource(MSAATexture, 0, SharedTexture, 0, Format.B8G8R8A8_UNorm);
        Device.ImmediateContext.Flush();
        CanvasInvalid = false;
        sw.Stop();
        this.LastFrame = sw.ElapsedTicks / 10000.0;
    }

    private void PrepareRTV(RenderTargetView rtv)
    {
        // Set Depth Stencil and Render Target View
        Device.ImmediateContext.OutputMerger.SetTargets(rtv);
        // Set Viewport
        Device.ImmediateContext.Rasterizer.SetViewports(new Viewport(0, 0, WindowWidth, WindowHeight, 0.0f, 1.0f));
        // Clear render target
        Device.ImmediateContext.ClearRenderTargetView(rtv, new Color4(1.0F, 0.0F, 0.0F, 0.0F));
    }

    private void SetBuffers()
    {
        // Setup buffer info
        Device.ImmediateContext.InputAssembler.InputLayout = Renderer.RayCastVBLayout;
        Device.ImmediateContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
        Device.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(Renderer.VertexBuffer, Renderer.VertexPC.Stride, 0));
        Device.ImmediateContext.InputAssembler.SetIndexBuffer(Renderer.IndexBuffer, Format.R32_UInt, 0);
    }

    private void ExecuteTechnique(EffectTechnique T)
    {
        for (int p = 0; p < T.Description.PassCount; p++)
        {
            T.GetPassByIndex(p).Apply(Device.ImmediateContext);
            Device.ImmediateContext.DrawIndexed(36, 0, 0);
        }
    }

Finally, here's the shader in its entirety. The TrilinearSample function is supposed to compute a good, interpolated sample, but is what ended up highlighting what the problem likely is. What it does, or at least attempts to do, is calculate the actual coordinate of the ray in the original grid coordinates, then use the decimal portion to do the interpolation.
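(For reference, the interpolation that TrilinearSample attempts can be written out on the CPU and checked in isolation. This is a hedged Python sketch of standard trilinear interpolation, not the poster's actual code; the volume function and sizes are hypothetical. The fractional parts of the grid coordinate drive three nested lerps over the 8 surrounding voxels.)

```python
import math

def lerp(a, b, t):
    return a + (b - a) * t

def trilinear_sample(volume, pos, width, height, depth):
    """volume(ix, iy, iz) -> scalar; pos is a normalized [0,1]^3 coordinate.
    Standard trilinear interpolation: 8 taps, 7 lerps."""
    x, y, z = pos[0] * width, pos[1] * height, pos[2] * depth
    ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
    xd, yd, zd = x - ix, y - iy, z - iz
    # interpolate along z on each of the 4 voxel-column pairs
    ll = lerp(volume(ix,     iy,     iz), volume(ix,     iy,     iz + 1), zd)
    lr = lerp(volume(ix + 1, iy,     iz), volume(ix + 1, iy,     iz + 1), zd)
    ul = lerp(volume(ix,     iy + 1, iz), volume(ix,     iy + 1, iz + 1), zd)
    ur = lerp(volume(ix + 1, iy + 1, iz), volume(ix + 1, iy + 1, iz + 1), zd)
    # then along y, then along x
    l = lerp(ll, ul, yd)
    r = lerp(lr, ur, yd)
    return lerp(l, r, xd)

# Trilinear interpolation reproduces any linear field exactly, which makes a
# handy self-test: f(x,y,z) = x + 2y + 3z sampled at grid point (2, 4, 1).
f = lambda ix, iy, iz: ix + 2 * iy + 3 * iz
val = trilinear_sample(f, (0.25, 0.5, 0.125), 8, 8, 8)
assert val == 13.0
```

A CPU reference like this is useful for ruling the math in or out: if the shader version of the same arithmetic gives different per-axis behaviour, the suspect is the data feeding it (texture layout, Width/Height/Depth constants), not the lerp chain.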
    float4x4 World;
    float4x4 WorldViewProj;
    float4x4 WorldInvTrans;
    float3 StepSize;
    int Iterations;
    int Side;
    float4 ScaleFactor;
    int Width;
    int Height;
    int Depth;

    Texture2D<float3> Front;
    Texture2D<float3> Back;
    Texture3D<float1> Volume;

    SamplerState FrontSS = sampler_state
    {
        Texture = <Front>;
        Filter = MIN_MAG_MIP_POINT;
        AddressU = Border; // border sampling in U
        AddressV = Border; // border sampling in V
        BorderColor = float4(0, 0, 0, 0); // outside of border should be black
    };

    SamplerState BackSS = sampler_state
    {
        Texture = <Back>;
        Filter = MIN_MAG_MIP_POINT;
        AddressU = Border; // border sampling in U
        AddressV = Border; // border sampling in V
        BorderColor = float4(0, 0, 0, 0); // outside of border should be black
    };

    SamplerState VolumeSS = sampler_state
    {
        Texture = <Volume>;
        Filter = MIN_MAG_MIP_LINEAR;
        AddressU = Border; // border sampling in U
        AddressV = Border; // border sampling in V
        AddressW = Border; // border sampling in W
        BorderColor = float4(0, 0, 0, 0); // outside of border should be black
    };

    struct VertexShaderInput
    {
        float3 Position : POSITION;
        float4 texC : COLOR;
    };

    struct VertexShaderOutput
    {
        float4 Position : SV_POSITION;
        float3 texC : TEXCOORD0;
        float4 pos : TEXCOORD1;
    };

    VertexShaderOutput PositionVS(VertexShaderInput input)
    {
        VertexShaderOutput output;
        output.Position = float4(input.Position, 1.0);
        output.Position = mul(output.Position * ScaleFactor, WorldViewProj);
        output.texC = input.texC.xyz;
        output.pos = output.Position;
        return output;
    }

    float4 PositionPS(VertexShaderOutput input) : SV_TARGET // : COLOR0
    {
        return float4(input.texC, 1.0f);
    }

    float4 WireFramePS(VertexShaderOutput input) : SV_TARGET // : COLOR0
    {
        return float4(1.0f, .5f, 0.0f, .85f);
    }

    // draws the front or back positions, or the ray direction through the volume
    float4 DirectionPS(VertexShaderOutput input) : SV_TARGET // : COLOR0
    {
        float2 texC = input.pos.xy /= input.pos.w;
        texC.x = 0.5f * texC.x + 0.5f;
        texC.y = -0.5f * texC.y + 0.5f;
        float3 front = Front.Sample(FrontSS, texC).rgb; // tex2D(FrontS, texC).rgb;
        float3 back = Back.Sample(BackSS, texC).rgb;    // tex2D(BackS, texC).rgb;
        if (Side == 0)
        {
            float4 res = float4(front, 1.0f);
            return res;
        }
        if (Side == 1)
        {
            float4 res = float4(back, 1.0f);
            return res;
        }
        return float4(abs(back - front), 1.0f);
    }

    float TrilinearSample(float3 pos)
    {
        float X = pos.x * Width;
        float Y = pos.y * Height;
        float Z = pos.z * Depth;
        float iX = floor(X);
        float iY = floor(Y);
        float iZ = floor(Z);
        float iXn = iX + 1;
        float iYn = iY + 1;
        float iZn = iZ + 1;
        float XD = X - iX;
        float YD = Y - iY;
        float ZD = Z - iZ;
        float LL = lerp(Volume[float3(iX, iY, iZ)], Volume[float3(iX, iY, iZn)], ZD);
        float LR = lerp(Volume[float3(iXn, iY, iZ)], Volume[float3(iXn, iY, iZn)], ZD);
        float UL = lerp(Volume[float3(iX, iYn, iZ)], Volume[float3(iX, iYn, iZn)], ZD);
        float UR = lerp(Volume[float3(iXn, iYn, iZ)], Volume[float3(iXn, iYn, iZn)], ZD);
        float L = lerp(LL, UL, YD);
        float R = lerp(LR, UR, YD);
        //return ZD;
        return lerp(L, R, XD);
    }

    float4 RayCastSimplePS(VertexShaderOutput input) : SV_TARGET // : COLOR0
    {
        // calculate projective texture coordinates
        // used to project the front and back position textures onto the cube
        float2 texC = input.pos.xy /= input.pos.w;
        texC.x = 0.5f * texC.x + 0.5f;
        texC.y = -0.5f * texC.y + 0.5f;
        float3 front = Front.Sample(FrontSS, texC).rgb; // tex2D(FrontS, texC).xyz;
        float3 back = Back.Sample(BackSS, texC).rgb;    // tex2D(BackS, texC).xyz;
        float3 dir = normalize(back - front);
        float4 pos = float4(front, 0);
        float4 dst = float4(0, 0, 0, 0);
        float4 src = 0;
        float value = 0;
        //Iterations = 1500;
        float3 Step = dir * StepSize; // / (float)Iterations;
        float3 TotalStep = float3(0, 0, 0);
        value = Volume.Sample(VolumeSS, pos.xyz).r;
        int i = 0;
        for (i = 0; i < Iterations; i++)
        {
            pos.w = 0;
            //value = Volume.SampleLevel(VolumeSS, pos.xyz, 0);
            value = TrilinearSample(pos.xyz); // tex3Dlod(VolumeS, pos).r;
            // Radar reflectivity related threshold values
            if (value < 40) value = 40;
            if (value > 60) value = 60;
            value = (value - 40.0) / 20.0;
            src = (float4)(value);
            src.a /= (Iterations / 50.0);
            // Front to back blending
            // dst.rgb = dst.rgb + (1 - dst.a) * src.a * src.rgb
            // dst.a   = dst.a   + (1 - dst.a) * src.a
            src.rgb *= src.a;
            dst = (1.0f - dst.a) * src + dst;
            // break from the loop when alpha gets high enough
            if (dst.a >= .95f)
                break;
            // advance the current position
            pos.xyz += Step;
            TotalStep += Step;
            // break if the position leaves the unit cube
            if (pos.x > 1.0f || pos.y > 1.0f || pos.z > 1.0f ||
                pos.x < 0.0f || pos.y < 0.0f || pos.z < 0.0f)
                break;
        }
        return dst;
    }

    technique11 RenderPosition
    {
        pass Pass1
        {
            SetVertexShader(CompileShader(vs_4_0, PositionVS()));
            SetGeometryShader(NULL);
            SetPixelShader(CompileShader(ps_4_0, PositionPS()));
            //VertexShader = compile vs_2_0 PositionVS();
            //PixelShader = compile ps_2_0 PositionPS();
        }
    }

    technique11 RayCastDirection
    {
        pass Pass1
        {
            SetVertexShader(CompileShader(vs_4_0, PositionVS()));
            SetGeometryShader(NULL);
            SetPixelShader(CompileShader(ps_4_0, DirectionPS()));
            //VertexShader = compile vs_2_0 PositionVS();
            //PixelShader = compile ps_2_0 DirectionPS();
        }
    }

    technique11 RayCastSimple
    {
        pass Pass1
        {
            SetVertexShader(CompileShader(vs_4_0, PositionVS()));
            SetGeometryShader(NULL);
            SetPixelShader(CompileShader(ps_4_0, RayCastSimplePS()));
            //VertexShader = compile vs_3_0 PositionVS();
            //PixelShader = compile ps_3_0 RayCastSimplePS();
        }
    }

    technique11 WireFrame
    {
        pass Pass1
        {
            SetVertexShader(CompileShader(vs_4_0, PositionVS()));
            SetGeometryShader(NULL);
            SetPixelShader(CompileShader(ps_4_0, WireFramePS()));
            //VertexShader = compile vs_2_0 PositionVS();
            //PixelShader = compile ps_2_0 WireFramePS();
        }
    }

Any insight is hugely appreciated, whether on the specific problem or just random things I'm doing wrong. With the coordinates in the Texture3D being so messed up, I'm surprised this renders at all, let alone close to correctly. Thank you in advance!
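(The front-to-back blending in the ray-march loop follows the standard "under" operator. As a sanity check, here is a minimal Python sketch of the same accumulation with made-up sample values, showing that it matches the commented recurrence dst = dst + (1 - dst.a) * src.a * src and saturates once alpha approaches 1.)

```python
def composite_front_to_back(samples):
    """samples: list of (r, g, b, a) tuples, front-most first.
    Implements: dst.rgb += (1 - dst.a) * src.a * src.rgb
                dst.a   += (1 - dst.a) * src.a"""
    dr = dg = db = da = 0.0
    for (r, g, b, a) in samples:
        t = (1.0 - da) * a          # remaining transmittance times sample alpha
        dr += t * r
        dg += t * g
        db += t * b
        da += t
        if da >= 0.95:              # early-out, as in the shader
            break
    return (dr, dg, db, da)

# An opaque first sample hides everything behind it: the second (green)
# sample contributes nothing because transmittance is already zero.
out = composite_front_to_back([(1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 1.0)])
assert out == (1.0, 0.0, 0.0, 1.0)
```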
    5. I test with a USB LTE modem and get ~1-2% both in a big city and 15 km outside it; friends report the same numbers. But of course this is just an observation, not a way to judge all modern wireless connections. It's 2D ships with arcade controls (they can change direction instantly). I don't intend to use prediction, because control feels OK even with high latency; the only annoying problem is occasional stuttering, and on a normal connection everything is fine. But it seems that what I want is not too far from actual prediction, just with a different purpose (not hiding latency). OK, thanks, I now see that it should solve the "initial packet delayed" and "latency was higher but then decreased" problems. Btw, does the reverse problem of "latency increased for the rest of the game session" exist, and can it "safely" be ignored? By "safely" I mean: won't a more complex scheme behave differently and "worse" than the most basic (just interpolate) scheme in that scenario?
    6. I'm interested, I sent you a private message.
    7. It is kinda fun to critique rather than sing praises, though. It sounds like 3 minutes of 'introduction': there are no really significant key changes to make it interesting or memorable as a piece of music. That might be fine, though, if the intention is purely scene setting in a game. The other thing is that for actual usefulness, not having ambient / birdsong on the track might be better, as the game designers can easily layer that kind of stuff on and choose the ambience needed for the game area. You could say the same about the long reverbs: easy to add but not to take away. But then this stuff is easy to change in practice and re-export the track.
    8. New instrumental track out. Thx for your support.
    9. A good HTML5 game developer often proves to be a game changer. It is important to know what a game developer does and what they are capable of with their skills. As HTML5 is a strong technology for web game development, hiring a good HTML5 game developer is an equally important factor. Here are some points to consider when giving your game development project to any developer: the developer should have a passion for gaming; be well experienced in game development; be a team player, ready to handle new challenges; offer realistic pricing; provide full-time service support; and report regularly on game development progress. We are proud to say that Genieee fulfills all these points. Genieee has created many different types of games across various genres. We have an experienced and talented team to make your game more creative and user friendly. You can hire dedicated game developers from Genieee to get quality service. Their game development team helps not only big game production houses worldwide but also individuals. They assist you in transforming creative ideas into a stunning HTML5 game by offering real-time consultation. #hirehtml5gamedeveloper #html5gamedeveloperinindia #html5gamedevelopmentstudio
    10. Perto Koval

      Pre-rendered visual effects?

      So if I want to create a really fancy and beautiful 2D effect of a campfire, I open After Effects, create the effect, and then export the 2D animation as a .xml file, PNG sequence, etc. And it works anywhere - Unity, Unreal Engine... Can I create that campfire effect in 3D, so I could paste it anywhere I want? So it would be an independent 3D visual effect file? I know that you can create particle effects in most game engines, but I am asking specifically about an independent visual effect file format and the software with which I can create it. If I'm asking this on the wrong forum, please tell me where I can find the appropriate one.
    11. Alberth

      Properly stretch texture on quad unity

      I'd say start by showing how you made picture 1. Obviously something is wrong in the settings, but without you telling us what you did, we can't point out the error. In other words, show the core code for making picture 1. For comparison with a simpler case, your question is like "The answer I got is 3. I want the answer to be 7. What did I do wrong?" No way to tell what is wrong without having the steps to obtain 3.
    12. Subtyven

      Orange Instinct - My first Web Game

      For the moment I prefer to keep this game as a little challenge that rewards the best players. A more complete version could be created in the future, but maybe I forgot to mention that it's just a student project. Anyway, trust me, the first 20 levels are not that hard.
    13. Picture 1 is actually what I get, but how do I make it look like pic 2?
    14. lawnjelly

      Orange Instinct - My first Web Game

      Ah now you've said that .. I've worked it out I think, it's like you are batting the orange from the direction you click with the mouse, doh! Okay I got to level 4, and it got impossible lol. It is very difficult, one thing is that because you can only click below the floor a certain amount you have to 'get it going' in the air before you can get it to go very high. Maybe for difficulty settings you could affect how fast the orange moves?
    15. Valdiralita

      Capture constantbuffer in commandlist

      Currently I use one cbuffer for matrices and one cbuffer for material information. Using command lists enables me to render multiple views of the same scene on one device. I create command lists for rendering different views (render windows) concurrently, so one rendering does not block the 2nd window. Preparing command lists also helps with VR performance. Yes, I submit command lists in the wrong order, and I'm looking for a way to update everything as fast as possible while not getting thread interlocks. All calls to the immediate context are interlocked, I get no D3D11 errors, and the RenderDoc API is implemented and shows no errors. Switching to D3D12 is currently not an option.
    16. Subtyven

      Orange Instinct - My first Web Game

      It's your mission to find how to control the orange 😄
    17. Hodgman

      Capture constantbuffer in commandlist

      In D3D11, resource updates do occur in the order that you submit them (as determined by the immediate context), along with draws. So, if you update a resource to contain the value "1", then draw a mesh "A" that uses the resource, then update that resource to contain the value "2" and then draw mesh "B", and then submit these commands to the GPU, mesh A will see the value 1 and mesh B will see the value 2. If you're getting weird results, you're likely submitting your command lists and draws to the immediate context in the wrong order, doing something invalid (make sure you routinely test with the D3D debug layer enabled to check for API usage errors!), or have some kind of threading bug like a race condition, such as two threads using the immediate context simultaneously. This is one of the benefits of D3D11 vs 12 - 11 does a lot of work to make resource updates become visible in command order. In 12 you have to do a lot of work manually to implement those semantics yourself.
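      (The ordering guarantee described above can be pictured as a single command queue. Here is a toy Python sketch — hypothetical names, not the D3D11 API — that replays update and draw commands in submission order, so each draw sees the last value written before it, exactly the mesh-A-sees-1 / mesh-B-sees-2 example.)

```python
# Toy model of D3D11's in-order resource updates: whatever order commands
# reach the immediate context in is the order the GPU observes them.
def replay(commands):
    resource = {}
    results = []
    for op, *args in commands:
        if op == "update":          # analogous to UpdateSubresource / Map-write
            name, value = args
            resource[name] = value
        elif op == "draw":          # a draw that reads the resource
            mesh, name = args
            results.append((mesh, resource[name]))
    return results

queue = [
    ("update", "cbuffer", 1),
    ("draw",   "A", "cbuffer"),
    ("update", "cbuffer", 2),
    ("draw",   "B", "cbuffer"),
]
# mesh A sees 1, mesh B sees 2 -- the guarantee the post describes
assert replay(queue) == [("A", 1), ("B", 2)]
```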
    18. vinterberg

      Capture constantbuffer in commandlist

      How are your cbuffer structures set up? I would probably have one for matrices, one for materials and a material-index sent through as "vertex" data .. Have you tried not using command lists for buffer rebuilding, but simply updating them in a thread and before rendering you do a transfer to GPU? Don't think you actually gain much by using command lists in DX11, other than possible headaches 😥 But all in all PCs today are very fast, so doing things the simple way first is a legit way to go - maybe the overhead of updating buffers etc isn't so bad that it needs threading/command lists? You would only need to update matrices+mat indexes for objects that have changed, and the cbuffer uploading is probably not a problem in regards to performance..
    19. @LorenzoGatti You are suggesting entire game mechanics, but the OP is about a simple minigame, to keep the player occupied for the several seconds it takes for HP bars to go down. @suliman How about throwing grappling hooks? Since an important aspect of boarding is grappling the other ship and keeping it close, you can use "keeping the hooks up" as a way to keep the boarding going. For example, you could have a top-down view of the ship and several grappling hooks alongside it, and have them fall off (get taken down) so you have to re-throw them. Basically a re-skinned whac-a-mole game. It also makes in-universe sense, since if you lose the minigame the ship gets free and the boarding fails. You can have your different crew types affect how many hooks there are in the minigame, or how fast they decay or something like that.
    20. lawnjelly

      Advice on efficient building placement

      Procedural layout of buildings.. and / or combining several buildings into 1 object, so you have less to place, and the rendering should be more efficient (although I seem to remember there is some unity functionality for grouping statics too for rendering efficiency).
    21. lawnjelly

      Orange Instinct - My first Web Game

      It worked, but the orange seemed to just jump all over the place; I couldn't work out how to direct it with the mouse?
    22. Hello, I wanted to share my new project, of which I'm very proud. You can try it directly here: https://orange.stevenclain.fr/ (choose Play Beta -> Offline if you don't want to create an account). As you can see it's a very simple game, just use your mouse 😄 Feel free to say what you want.
    23. lawnjelly

      Yet another interpolation question

      Ah, I hadn't encountered this problem before, I found this interesting on the subject, maybe there will be a solution in the future: https://gafferongames.com/post/why_cant_i_send_udp_packets_from_a_browser/ With this TCP limitation I'd be wondering how statistically often you do get dropped / out of order packets in modern connections (especially wireless as you point out)? Maybe it is hard to predict. One question I'm wondering: what type of game is this? You say your movement is 'somewhat predictable'. Are you using a standard 'dumb client / all the players simulated on the server' setup? Are you using / intending to use client side prediction? How important is how close players are to their positions in the server simulation? Are you e.g. shooting at players and using their position relative to the aim to determine a hit? Note that a convenient way of doing this can simply be to include the server tick in the packet (providing you are using a fixed tick rate). The way I'd think of it, especially as you seem to be suffering from big laggy stutters, is that your client player reps should be independent, but 'chasing' the clients best idea of where the actual server player is. In a UDP scenario it is usually possible to use simple interpolation / extrapolation, however in the case of TCP where you might have frequent long periods of no incoming info, it might be an idea to e.g. consider each player client rep to have a position and velocity, so instead of having a sudden visible jump in course change after a delayed packet, you could more gradually change the velocity to smoothly move towards the destination. This could be more of a client side prediction type 'physics' approach for all players. Of course this might mean spending longer with the client rep further from the server position, but the trade off might be worth it depending on the game type. The other solution is to try and make a game type that 'designs out' the lag issue (e.g. 
strategy games that can get by with a large delay before performing a command). Anyway just some ideas. Showing maybe a captured video might be useful to help with suggestions.
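      (The 'chasing' idea above — give each client rep a position and velocity and steer it toward the latest known server position instead of snapping — can be sketched like this. Python, with made-up gains; a critically damped spring is one common choice because it converges without overshoot.)

```python
def chase(pos, vel, target, dt, stiffness=30.0):
    """Critically damped spring: smoothly steers pos toward target, so a
    late packet changes course gradually instead of causing a visible jump.
    damping = 2*sqrt(stiffness) gives convergence with no overshoot."""
    damping = 2.0 * stiffness ** 0.5
    accel = stiffness * (target - pos) - damping * vel
    vel += accel * dt               # semi-implicit Euler integration
    pos += vel * dt
    return pos, vel

# Server says the player is at x=10; our rep starts at 0 and converges
# smoothly over a few seconds of 60 Hz ticks instead of teleporting.
pos, vel = 0.0, 0.0
for _ in range(200):                # ~3.3 s at 60 Hz
    pos, vel = chase(pos, vel, 10.0, 1.0 / 60.0)
```

The trade-off mentioned above shows up directly in the stiffness value: lower stiffness means smoother motion but a rep that lags further behind the server's idea of the player.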
    24. Hi guys, I'm working on a racing game and want a variety of buildings that make for interesting silhouettes and scenery. At the moment I'm mostly using Unity primitives and very rough building shapes, and I'm struggling to find an efficient approach to working out which placements make the best use of the buildings and create the most interesting silhouettes for the player. I have a master scene file in Blender where the track and complex buildings are slowly being created, and I'm placing cubes in Unity to block out a skyline, but it's a painfully slow process: Place block > Press play > drive a bit > no, looks weird > Press stop > re-position block > Press play > drive a bit > Think it's too small > Press stop > Re-size block > press play... ...you get the idea. Multiply that by 1,000+ and I think the sea will have swallowed us all long before I get all the buildings in my demo level placed. Any suggestions for improving the efficiency of this process? Here's some gameplay for context:
    25. TLDR: is there a way to "capture" a constantbuffer in a command list (like the InstanceCount in DrawIndexedInstanced is captured), so I can update it before the command list is executed? Hey, I want to draw millions of objects, and I use instancing to do so. My current implementation caches the matrix buffers, so I have a constantbuffer for each model-material combination. This is done so I don't have to rebuild my buffers each frame, because most of my scene is static but can move at times. To update the constantbuffers I have another thread, which creates command lists to update the constantbuffers and executes them on the immediate context. My render thread(s) also create command lists ahead of time to issue to the GPU when a new frame is needed. The matrix buffers are shared between multiple render threads. The result is that when an object changes color, so it goes from one model-material buffer to another, it is hidden for one frame and visible the next, or for one frame appears at a location where another object was before. I speculate this is because the constantbuffer for matrices is updated immediately, but the InstanceCount in the draw command list is not. This leads to matrices which contain old or uninitialized memory. Is there a way to update my matrix constant buffers without stalling every render thread and invalidating all render command lists? Regards
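      (One standard way to get the "capture" behaviour described above — so an already-recorded command list keeps seeing consistent data while the update thread writes new matrices — is to version or double-buffer the constant buffer rather than updating it in place. This is a toy Python sketch of the idea only, not the D3D11 API; all names are made up.)

```python
# Toy version of double-buffered per-batch data: the render thread records
# which snapshot (slot) it used, and the update thread only ever writes the
# *other* slot, so recorded work never sees a half-updated buffer.
class VersionedBuffer:
    def __init__(self, initial):
        self.slots = [initial, initial]   # two GPU buffers in the real thing
        self.current = 0                  # slot that newly recorded lists read

    def record_draw(self):
        """Render thread: capture the slot index with the command list."""
        return self.current               # analogous to binding buffer N

    def update(self, new_data):
        """Update thread: write the spare slot, then flip atomically."""
        spare = 1 - self.current
        self.slots[spare] = new_data
        self.current = spare

buf = VersionedBuffer({"instances": 3})
captured = buf.record_draw()              # command list recorded against slot 0
buf.update({"instances": 4})              # update lands in slot 1, then flips
# The already-recorded list still reads its own consistent snapshot,
# while newly recorded lists see the fresh data:
assert buf.slots[captured] == {"instances": 3}
assert buf.slots[buf.record_draw()] == {"instances": 4}
```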
    26. Many thanks 👌 I must say I am a bit surprised that it compiles that much slower. I have never experienced MinGW being slower, quite the contrary in fact; it is typically faster. But OK, it works and that is what counts. Optimisation is a future worry. Time to get my feet wet and start integration. Stoked!
    27. There are other ways to add tactics to a game. I'm just saying I always dislike that feature when it's added to games. Many games don't use it and are better for it.
    28. @k3z4 So the MinGW build seems to be working now (don't forget to update the submodules). To be honest (this was the first time I used MinGW), I did not find anything that compares positively against Visual Studio. Compile times are enormous; it seems to be using only a single core. I was not even able to wait until it built the entire debug configuration: the linker was struggling with the first executable target for about 15 minutes (!) and produced a ~500 MB executable (!??). Release targets were all built successfully. A couple of things: * Only the Vulkan and OpenGL backends are built. The D3D11/D3D12 backends depend on headers from the Windows SDK which are not found; even d3d11.h is not found. I don't know how to fix this. * I did not manage to make dynamic libraries work. I build a static library and then link the static one into a shared one. On Linux, Android and Mac that works fine; MinGW produces an empty dll (~47 kb). When I use -whole-archive, it generates a ~500 MB dll in the debug build. So all targets are linked against static libraries (I don't really think this is a problem). Other than that everything seems to be working fine, so all the functionality is there.
    29. Game Link: https://www.kongregate.com/games/WilliamOlyOlson/slingbot-boarding Okay, all non-racing stats are now updating Kongregate leader-boards: Best Trick (combo trick points), Best Airtime (time in air in ms), Best Flight (longest linear distance in air), Best Speed (fastest recorded speed), and Courses Completed. I'll be adding 3 trophy/medal counters when I get NPCs and racing going (gold, silver, bronze). Unfortunately, the stats system on Kongregate is designed mostly to help Kongregate encourage players to earn Kongregate badges and other achievements by playing games. So, the stats system is only a one-way transaction, which makes it nearly impossible to display accurate leader-boards in-game. They display on the side-bar in non-cinematic mode, and I'm sure you can browse to them somewhere else too, but for now I guess that will have to do. I spent at least 3 hours trying to figure out how to get PlayFab (an actual leader-board back end that's supposedly "partnered" with Kongregate) to work with my WebGL build on Kongregate and no go... I've used PlayFab before on a PC/Android game and it worked flawlessly, so it's not just me, haha. Got frustrated with it for now, so Kongregate stats will be all there are for the moment. haha I need to work on some other things for a while... Like providing some visual feedback on the fact that you've earned points for doing stuff. Oh yeah, and some NPCs. Anyhow, there's something at least sort-of competitively motivational in the game now. lol
    30. G'Day... so I am about a week into trying to learn C# and have been enjoying it so far. I am learning from a book called "The C# Player's Guide", and around Chapter 20 is what the author calls a "mid term project": to make my first actual application, a TicTacToe game. This is obviously something very simple, but I felt it was a good excuse to try out things and see how it all went. The Brief Even though it said to only take 20 minutes, I took a lot longer. I wanted to try out things with my short term goals in mind. One of those short term goals is to make a multi-room adventure game as part of my learning process. So I took a lot longer than 20 mins... in coming up with a FrameBuffer and FramePrinting system that felt modular to me. I had some real issues working out how to control the game flow through the main loop. The original plan was to have a class that checked the game state and spit out a false when the game was over. This led to issues with choosing new games and stuff. I ended up adding a "state" variable for each game state that was checked in the loop. I think this is not the best way, and in my next project I want to try and produce these states as a natural output from the game engine itself. All in all I am pretty happy with it... Source Code + Binary I have started a GitHub account. I was having problems uploading everything to Pastebin... You can see my code here, and I know it will be messy and bad, but I feel I made a good start. You can also use the download link to download the actual binary to run. I compiled this on Windows 10, so I guess it has to work in a Windows environment. gitHub Source Code .exe Binary Download How to Play This is kinda redundant for something so simple, but I want to get a "format" working for this blog. Q / ESC == Quit at any time N == New Game (At Results Screen or During the Game Itself) 1-9 / Num1 - Num9 == Place a Player Token. 1st Player will be randomly set to X or O. See ya next time! --A4L
    31. You need all files from the bin release folder that are .exe and .dll. If you have settings in your app.config file you'll also need the .exe.config file. The .pdb files are debug symbols, which are not required to run the application; they might help with debugging though. Obviously, if you open other external files, those need to be copied as well.
    32. Copy the exe to a new folder somewhere away from the project and run it and see if it complains.
    33. G'Day... so I have finished my 1st application. Nothing fancy, but I think I am doing OK for a guy that started a week ago. I would like to compile this as a "release" version to send to my mates and place as a download link on my blog, but I just wanted to make sure I was doing everything correctly. I changed the drop down menu on the top bar to RELEASE and ANY CPU... I then just hit the start button. I end up with three files: .exe / .config / .pdb. Are all those files needed? Or do I just send people the exe and that is it... or am I doing something wrong. Thanks!
    34. Yes, that is correct. Pointers are non-trivial types and strings contain (or strings are) a pointer to its characters. Your program appears to work because the books containing the strings containing the pointers to the character arrays are still valid. Run only the read from disk code and your program will crash. Serialize instead the contents of each string in the book: Store the length of the string first followed by its character data. And to read the string: Read the length of the string first, then read that many bytes into a temporary array, and then construct a string with the length and array. If you haven't already, consider examining the contents of the file with a hex editor. Serialization can be tricky, but it is not an uncommon problem. Read https://isocpp.org/wiki/faq/serialization for more information.
    35. Shaarigan, thank you very much for the answer, that's all I needed. I found their documentation kind of confusing. Once I figure everything out, I'll post a guide just in case others find it confusing as well. Thanks again.
    36. Vilem Otte

      Effect: Volumetric light scattering

      I noticed that in the video I accidentally also captured Screen2Gif UI - as it doesn't impact the resulting video I will not be re-capturing it. Also note that this was actually my 1st time using Screen2Gif to capture HD video in 1080 format - I'm quite happy it worked well together with YouTube.
    37. I decided to write a small program that writes a vector to a binary file. However, I'm having some issues with the code. Here is what I have so far:

#include <iostream>
#include <map>
#include <string>
#include <vector>
#include <cmath>
#include <math.h>
#include <fstream>

using namespace std;

class Book
{
protected:
    string m_pBookTitle;
    string m_pAuthor;
    string m_pGenre;
    int m_pNumberOfPages;

public:
    Book() : m_pBookTitle(""), m_pAuthor(""), m_pGenre(""), m_pNumberOfPages(0) {}
    Book(string title, string author, string genre, int numPages)
        : m_pBookTitle(title), m_pAuthor(author), m_pGenre(genre), m_pNumberOfPages(numPages) {}
    ~Book() {}

    string getBookTitle() const { return m_pBookTitle; }
    string getBookAuthor() const { return m_pAuthor; }
    string getBookGenre() const { return m_pGenre; }
    int getNumPages() const { return m_pNumberOfPages; }
};

int main(int argc, char** argv)
{
    vector<Book*> Books;
    Book* book1 = new Book("The man with a dog", "Robert White", "scifi", 300);
    Books.push_back(book1);
    Book* book2 = new Book("Just got here", "James Hancock", "Fantasy", 100);
    Books.push_back(book2);
    Book* book3 = new Book("The Girl with the Dragon Tattoo", "Eddinton Carlos", "Fiction", 500);
    Books.push_back(book3);

    cout << "Number of books: " << Books.size() << endl;

    ofstream bookFile;
    bookFile.open("Books.book", ios::binary);
    if (bookFile.is_open())
    {
        for (vector<Book*>::iterator i = Books.begin()+1; i != Books.end(); i++)
        {
            bookFile.write((char*)&(*i), sizeof((*i)));
        }
    }
    bookFile.close();

    ifstream bookFileIn;
    bookFileIn.open("Books.book", ios::binary);
    vector<Book*> LoadedBooks;
    if (bookFileIn.is_open())
    {
        Book* book = nullptr;
        while (!bookFileIn.eof())
        {
            bookFileIn.read((char*)&book, sizeof(book));
            LoadedBooks.push_back(book);
        }
    }
    bookFileIn.close();

    for (vector<Book*>::iterator i = LoadedBooks.begin(); i != LoadedBooks.end(); i++)
    {
        Book* b = (*i);
        cout << "Book title: " << b->getBookTitle() << endl;
        cout << "Book Author: " << b->getBookAuthor() << endl;
        cout << "Genre: " << b->getBookGenre() << endl;
        cout << "Number of pages: " << b->getNumPages() << endl;
    }

    cout << "Number of books: " << LoadedBooks.size() << endl;
    system("PAUSE");
    return EXIT_SUCCESS;
}

This is what happens with the +1: http://puu.sh/ChluR/3c42a42137.png And removing the +1: http://puu.sh/Chlwz/ed52d85cdb.png The problem is that one of the books gets added twice. Someone in the Discord server I'm in said that I can't do bookFile.write((char*)&(*i), sizeof((*i))) because the book has non-trivial types such as strings. What should I do?
    38. Hi, I am using Unity's Mecanim system. I have a layer that controls movement, i.e. idle -> run. It works fine: when I press W I run forward with the animation via transform.position += anim.deltaPosition; // anim is on a child object that has the Animator. I have a second layer for attack set to override 100%; on this layer I have just the head moving, and it is masked just for the neck up. It plays when I press the mouse button. Now my problem is that when I press W and the mouse button, it plays both animations BUT it stops moving forward. There are no location key frames on the attack, or any key frames on the root node for that matter. Please tell me if I am being unclear. Thanks in advance. Also, I lost my other account because I can't remember my password or the email I used; is there a way I can get it back? Like, I say the account name and it tells me which email I used?
    39. Nice! Thanks man! I need to get the hang of these condensed = things.. that make everything so much easier to read.
    40. This time around I've decided to try something different (and thanks @isu diss for the suggestion), which is volumetric light scattering. To make this a bit more challenging for me, I've decided to implement it completely in Unity; all the code snippets will be mostly in pseudo code. Alright, let's start!

Introduction

When we talk about volumetric light scattering, we often talk about simulating lights and shadows in fog or haze - generally speaking, simulating light scattering in participating media. Before we go into the basic math, we have to define some important properties of the participating media we want to simulate (for the sake of computation efficiency):

- Homogeneity
- No light emission properties

Light Scattering

As we want to render this volume, we have to calculate how the radiance reaching the camera changes when viewing light through the volume. To compute this, a ray marching technique will be utilized to accumulate radiance along the view ray from the camera. What we have to solve is the radiative transport equation:

dL(x, ω)/ds = -τ·L(x, ω) + τ·a·∫ P(ω', ω)·L(x, ω') dω'

Where:

- the left side represents the change in radiance along the ray
- τ represents density (probability of collision in unit distance)
- a is the albedo, which equals the probability of scattering (i.e. not absorbing) after a collision
- P represents the phase function, the probability density function of the scattering direction

Note: the similarity of this equation to "The Rendering Equation" is not accidental! This is actually a Fredholm integral equation of the 2nd kind - the kind that results in nested integrals. These equations are mostly solved using Monte Carlo methods, and one way to solve this one is Monte Carlo Path Tracing. However, as I aim to use this effect in real time on average hardware, such an approach is not possible and optimization is required.
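A reconstruction of the single-scattering model that the following paragraphs derive, written in my own notation (chosen to match the shader snippets later in the post: τ = _TauScattering, a = _Albedo, P = Phase, v = visibility, Φ = light power) and based on the cited Toth-Umenhoffer paper:

```latex
% Change of radiance along the view ray with a single in-scattering term:
\frac{\mathrm{d}L(s)}{\mathrm{d}s} = -\tau\,L(s) + \tau\,a\,P(\omega',\omega)\,L_{in}(s)

% Integrating along the view ray and discretizing into N ray-marching steps
% of size \Delta s (s_n = distance from the camera to sample n) gives:
L \;\approx\; \sum_{n=1}^{N} e^{-\tau s_n}\,\tau\,a\,P(\omega'_n,\omega)\,L_{in}(x_n)\,\Delta s

% In-scattering at sample x from a point light of power \Phi at distance d,
% with visibility v(x) \in \{0, 1\}:
L_{in}(x) = e^{-\tau d}\,v(x)\,\frac{\Phi}{4\pi d^2}
```

Each term of the sum corresponds to one iteration of the ray marching loop in the shader code below: the in-scattering radiance at the sample, scaled by the phase function, the medium parameters, the transmittance back to the camera, and the step size.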
As per Real Time Volumetric Lighting in Participating Media by Balazs Toth and Tamas Umenhoffer (available from http://sirkan.iit.bme.hu/~szirmay/lightshaft.pdf), we will further ignore multiple scattering and use only a single in-scattering term L_in in the transport equation. Integrating the result along the view ray and discretizing it into ray-marching steps turns the integral into a sum over sample points. This sum actually describes the whole technique; to be precise, the last term of the equation (the sum) defines the actual radiance received from a given point. The computation is quite straightforward:

- For each ray (per pixel) we determine the position where we enter the participating media
- From there on, we make small steps through the participating media; these are our sample points
- For each sample point, we compute the in-scattering term L_in and the absorption factor, and add the contribution into the radiance that is going to be returned

The last thing is the in-scattering function, which has to be calculated for each light type separately. In short, it will always contain the density τ and albedo a, the phase function, the absorption factor e^(-τ·d), the visibility function v (returns 0 for a point in shadow/invisible, 1 otherwise) and the radiant energy received from the light over the whole hemisphere - for a point light this is:

L_in(x) = e^(-τ·d) · v(x) · Φ / (4π·d²)

where:

- Φ represents the power of the light
- d represents the distance between the sample point and the light

This wraps up the math-heavy part of the article; let's continue with a description of ray marching, as it is required for understanding this technique.

Ray Marching

Ray marching is a technique where, given a ray origin and direction, we pass through the volume not analytically but in small steps. At each step some function is computed, often contributing to the resulting color. In some cases we can early-exit upon satisfying some condition. Ray marching is often performed within specific boundaries - often an axis-aligned box, which I'm going to use in the example ray marching implementation.
For the sake of simplicity let's assume our axis-aligned bounding box is at position _Position with size of _Scale. To perform ray marching we have to find the entry point of the ray, and perform up to N steps through it until we exit out of the box. Before going further - I assume everyone has knowledge of what world, view, etc. coordinates are. To this set let's add one coordinate system and that is volume coordinates. These coordinates are from 0.0 - 1.0 for 3 axes determining where we are inside the volume (actually just like 3D texture coordinates). Let's have a following function determining whether given position is inside of the sphere or not:

// Are we inside of unit sphere bound to volume coordinates
// position - given position on which we decide whether we're inside or
// outside, in volume coordinates
bool sphere(float3 position)
{
    // Transform volume coordinates from [0 to 1] to [-1 to 1] and use sphere
    // equation (r^2=x^2+y^2+z^2) to determine whether we're inside or outside
    if (length(position * 2.0f - 1.0f) < 1.0f)
    {
        // Inside
        return true;
    }
    else
    {
        // Outside
        return false;
    }
}

Now, with the ray marching technique we should be able to render a sphere in this volume - simply by starting at the edge of the specified volume, marching through, and at each step determining whether we're inside or outside of the sphere.
Whenever we're inside, we can return and render white color, otherwise continue (if we miss everything, render black color):

// Origin of our ray
float3 rayOrigin = i.worldPos.xyz;

// Origin in volume coordinates
float3 rayCoord = (rayOrigin - _Position.xyz) / _Scale.xyz + 0.5f;

// Direction along which ray will march through volume
float3 rayDirection = normalize(i.worldPos.xyz - _WorldSpaceCameraPos.xyz);

// Single step, the longest ray that is possible in volume is diagonal,
// therefore we perform steps at size of diagonal length / N, where N
// represents maximum number of steps possible during ray marching
float rayStep = sqrt(_Scale.x * _Scale.x + _Scale.y * _Scale.y + _Scale.z * _Scale.z) / (float)_Samples;

// Did we hit a sphere?
bool hit = false;

// Perform ray marching
[loop]
for (int i = 0; i < _Samples; i++)
{
    // Determine whether we hit sphere or not
    hit = sphere(rayCoord);

    // If so, we can exit computation
    if (hit)
    {
        break;
    }

    // Move ray origin forward along direction by step size
    rayOrigin += rayDirection * rayStep;

    // Update volume coordinates
    rayCoord = (rayOrigin - _Position.xyz) / _Scale.xyz + 0.5f;

    // If we are out of the volume we can also exit
    if (rayCoord.x < 0.0f || rayCoord.x > 1.0f ||
        rayCoord.y < 0.0f || rayCoord.y > 1.0f ||
        rayCoord.z < 0.0f || rayCoord.z > 1.0f)
    {
        break;
    }
}

// Did we hit?
float color = hit ? 1.0f : 0.0f;

// Color output
return float4(color, color, color, 1.0f);

Which will yield a result like this:

Fig. 01 - Rendered sphere using ray marching technique

Now if we look at how many steps we performed to render this (per pixel):

Fig. 02 - Number of steps performed before hitting sphere or exiting out of the volume

While this is one of the less efficient ways to render a sphere, ray marching allows us to process through a volume in small steps, accumulating values over time, and is therefore a highly efficient method of rendering volumetric effects like fire, smoke, etc.
Light Scattering Implementation

Let's jump ahead into the light scattering implementation. Based on the theory this will be quite straightforward - there is just one catch: as we are specifying the volume with an axis-aligned bounding box, it is crucial to note that we need 2 different computations - one where the camera is outside and one where the camera is inside. Let's start with the one where the camera is inside.

// Screen space coordinates allowing for fullscreen projection of camera
// z-buffer
float2 projCoord = i.projection.xy / i.projection.w;
projCoord *= 0.5f;
projCoord += 0.5f;
projCoord.y = 1.0f - projCoord.y;

// Read z-value from camera depth texture, and linearize it to 0.0 - 1.0
float zvalue = LinearizeDepth(tex2D(ViewDepthTexture, projCoord).x);

// Origin of our ray
float3 rayOrigin = WorldSpaceCameraPosition.xyz;

// Origin in volume coordinates
float3 rayCoord = (rayOrigin - _Position.xyz) / _Scale.xyz + 0.5f;

// Direction along which ray will march through volume
float3 rayDirection = normalize(i.worldPos.xyz - WorldSpaceCameraPosition.xyz);

// Push camera origin to near camera plane
rayOrigin += rayDirection * CameraNearPlane;

// Single step, the longest ray that is possible in volume is diagonal,
// therefore we perform steps at size of diagonal length / N, where N
// represents maximum number of steps possible during ray marching
float rayStep = sqrt(_Scale.x * _Scale.x + _Scale.y * _Scale.y + _Scale.z * _Scale.z) / (float)_Samples;

// Steps counter
int steps = 0;

// Resulting value of light scattering
float3 L = float3(0.0f, 0.0f, 0.0f);

// Perform ray marching
[loop]
for (int i = 0; i < _Samples; i++)
{
    // Move ray origin forward along direction by step size
    rayOrigin += rayDirection * rayStep;

    // Update volume coordinates
    rayCoord = (rayOrigin - _Position.xyz) / _Scale.xyz + 0.5f;

    // Calculate linear z value for current position during the ray marching
    float z = -(mul(ViewMatrix, float4(rayOrigin, 1.0f)).z) / (CameraFarPlane - CameraNearPlane);

    // In case we are behind an object, terminate ray marching
    if (z >= zvalue)
    {
        break;
    }

    // Light scattering computation
    // Sample visibility for current position in ray marching, we use standard
    // shadow mapping to obtain whether current position is in shadow, in that
    // case returns 0.0, otherwise 1.0
    float v = SampleShadow(rayOrigin, mul(ViewMatrix, float4(rayOrigin, 1.0f)));

    // Calculate distance from light for in-scattering component of light
    float d = length(LightPosition.xyz - rayOrigin.xyz);

    // Radiance reaching the sample position
    // Depends on volume scattering parameters, light intensity, visibility and
    // attenuation function
    float L_in = exp(-d * _TauScattering) * v * LightIntensity / (4.0f * 3.141592f * d * d);

    // In-scattering term for given sample
    // Applies albedo and phase function
    float3 L_i = L_in * _TauScattering * _Albedo.xyz * Phase(normalize(rayOrigin - WorldSpaceLightPosition), normalize(rayOrigin - WorldSpaceCameraPosition));

    // Multiply by factors and sum into result
    L += L_i * exp(-length(rayOrigin - WorldSpaceCameraPosition) * _TauScattering) * rayStep;

    steps++;

    // If we are out of the volume we can also exit
    if (rayCoord.x < 0.0f || rayCoord.x > 1.0f ||
        rayCoord.y < 0.0f || rayCoord.y > 1.0f ||
        rayCoord.z < 0.0f || rayCoord.z > 1.0f)
    {
        break;
    }
}

// Output light scattering
return float4(L, 1.0f);

This computes light scattering inside the volume, resulting in an image like this:

Fig. 03 - Light scattering inside volume with shadowed objects

For the computation from outside of the volume, one has to start with the origin not being the camera, but the actual point where we enter the volume.
Which isn't anything different than:

// Origin of our ray
float3 rayOrigin = i.worldPos.xyz;

Of course, further on, when computing the total radiance transferred from the step to the camera, the distance passed into the exponent has to be only the distance that has been traveled inside the volume, e.g.:

// Multiply by factors and sum into result
L += L_i * exp(-length(rayOrigin - rayEntry) * _TauScattering) * rayStep;

Where rayEntry is the input world position in the shader program. To get good image quality, post processing effects (like tone mapping) are required; the resulting image can look like:

Fig. 04 - Image with post processing effects.

What wasn't described in the algorithm is the phase function. The phase function determines the probability density of scattering incoming photons into outgoing directions. One of the most common is the Rayleigh phase function (which is commonly used for atmospheric scattering):

float Phase(float3 inDir, float3 outDir)
{
    float cosAngle = dot(inDir, outDir) / (length(inDir) * length(outDir));
    float nom = 3.0f * (1.0f + cosAngle * cosAngle);
    float denom = 16.0f * 3.141592f;
    return nom / denom;
}

Other common phase functions are Henyey-Greenstein or Mie.

On Optimization

There could be a huge chapter on the optimization of light scattering effects. To obtain images like the ones above, one needs to calculate a large number of samples, which requires a lot of performance. Reducing the sample count ends up in slicing, which isn't visually pleasant. In the next figure, a low number of samples is used:

Fig. 05 - Low number of samples for ray marching.

By a simple modification - adding a small random offset to each step size - noise is introduced and the stepping artifacts are removed:

Fig. 06 - With a low amount of samples and randomization, the image quality can be improved.

Further, the shader programs provided in the previous section were reference ones. They can be optimized a bit, reducing the number of computations within the ray marching loop.
One such optimization is to work not with the ray origin but only with the ray coordinates within the loop. Also, rendering light scattering effects is often done at half or quarter resolution, or using interleaved sampling (for example, dividing the whole image into blocks of 2x2 pixels and calculating 1 of them each frame - thus reducing the required computation power). The actual difference is then hardly visible when the user moves around, due to other effects like motion blur. All the optimization tricks are left for the reader to try and implement.

Extensions

I intentionally pushed for implementing this algorithm inside a specified volume. While doing it as a full screen pass seems more straightforward, it is actually somewhat limited. Using a specific volume can take us further, to simulating clouds or smoke and lighting them correctly with this algorithm. Of course that would require a voxel texture representing density and albedo at each voxel in the volume. Applying a noise function as density could result in interesting effects like ground fog, which may be especially interesting in various caves, or even outside.

Results

Instead of talking about the results, I will go ahead and share both a picture and a video:

Fig. 07 - Picture showing the resulting light scattering

Fig. 08 - Video showing the end result.

The video shows really basic usage - it is rendered using the code explained earlier, only with an albedo color for the fog.

Conclusion

This is by far one of the longest articles I've ever written - I didn't expect the blog post to be this long, and I can imagine some people here may be a bit scared by its size. Light scattering isn't easy, and even at this length the article describes just the basics. I hope you have enjoyed the 'Effect' post this time; if I made any mistake in the equations or source code, please let me know and I'll gladly update it.
I'm also considering wrapping an improved version of this as a Unity package and making it available on the Asset Store at some point (although there are numerous similar products available there). Anyway, please let me know what you think!
    41. bool theseThreeCellsTrue = cell[1] && cell[5] && cell[9];
    42. NikitaSadkov

      Game Design from Hell

      So it is the year 2004. I've finally got a copy of Baldur's Gate II, but it isn't working, due to a bug in the botched pirate translation, and it was impossible to get the untranslated original English version in Russia, because Russian pirates hacked and translated all games into Russian, usually including speech, sometimes using an automatic translation tool, and always blindly, without testing how it would look and sound in the game. So yeah, Baldur's Gate segfaulted past the first few areas that the pirates had tested after their botched translation, while I learned a lesson that piracy is bad, and instead had to play Evil Islands (or Allods 3, as it is known in Russia). Evil Islands was originally in Russian, as it is a Russian game; it was somewhat working, albeit very glitchy and unoptimized (framerate jumping from 5 to 20 FPS), and it was an ugly experience, pretty much summarized by the following video: https://www.youtube.com/watch?v=bOCtp94TRSI If you're designing an RPG, you can get a lot of insight into what makes a bad RPG from the above let's play video. In a nutshell, the Russian game designers failed to properly balance monsters and playtest their garbage. It had less polish than modern asset-swapped indie games. I got childhood trauma from that. [REMOVED] BTW, the two previous Allods games (sold as Rage of Mages in the west) weren't much better. Basically they were Warcraft 2 clones with ugly graphics and without base building. Allods 4 (Allods Online) was an outright blatant World of Warcraft clone with worse everything. Now tell me that I have no right to hate Russians. Do you have any stories about game design you hate? Like the absolute worst in its genre?
    43. Is there a way to modify the .All in a way so it only checks certain cells in the table? I know I can do a for loop with a true/false flag getting switched on matching cells, but I was wondering if there was a quicker way, kind of like .All. So... say I have: bool[] array = new bool[10]; ... is there a way to check that cells [1], [5] and [9] are true and ignore all the other cells? Without using a standard for loop, in a readable one-liner - like with that .All thing?
    44. That is why the user interface must expose the queue of the units which will act next. The original Heroes games used even less intuitive scheduling. But if game time is divided into discrete turns between players, where a player moves all his units during his turn, there will be much less tactical depth: fast creatures would stop offering a first-turn advantage, there would be no way to counter those pesky archers with your super fast phoenixes, and slow zombies at close range would attack just as fast as any other unit. I.e. it would stop being a HoMM game and become a more casual knock-off with a different target audience.
    45. Yesterday
    46. Mapet

      Game code architecture

      Thank you for all your answers. If I encounter any problems (and it will definitely happen ^^) I will come back with more questions for sure. At the moment I think you've helped me so much that I can try to design my ECS module correctly enough :).
    47. hagerprof

      What I need to develop a game in c++??

      This is a bit of a question as well as a suggestion. Is Allegro still viable? It used to be a good entry into gaming with C++. I think you would have some backwards compatibility since they keep the old versions available (or at least they used to.)
    48. hplus0603

      Yet another interpolation question

      A simple way to calculate the offset: each packet from the server contains the server time. You calculate the offset as "receivedTime minus yourOwnTime", but you only update your own value if the new offset is greater than the previous one. This means that lag spikes, which temporarily reduce the offset, won't drag the estimate off.
    49. So I need some dynamically adjusted variable which I then add to timestamps received from server and then "compare" this sum to client time. Is my understanding correct? And if yes how should this offset be calculated?