Showing results for tags 'DX11'.



Found 1000 results

  1. Hi all, after seeing some missing pixels along the edges of meshes that sit beside each other / are connected, I first thought it was mesh specific. But after testing the following, I still see it occurring: create a flat grid/plane without textures, and output only black on a white background (buffer cleared). When I draw this plane twice, e.g. at 0,0 (XZ) and 4,0 (XZ), with the plane being 4x4, I still see those strange 'see-through' pixels (I'm aiming for properly connected planes). After quite a bit of research and testing (disabling the skybox, changing the buffer clear color, disabling the depth buffer, etc.), I still keep getting this. I've been reading up on T-junction issues, but I don't think that's the case here (the verts all line up nicely). As a workaround for now, I've made the black foundation planes (below the scene) into boxes with minimal height and the bottom triangles removed; that way the 'holes' are not visible because the boxes have sides. Here is a screenshot of what I'm getting. I wonder if someone has thoughts on this: is this 'normal', do studios work around it, etc.? There might be some rounding issue, but I wouldn't expect that with these whole numbers (all .0).
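     One hedged follow-up, since the verts are reported to line up: "lining up" in the source data is not the same as being bit-identical after each plane goes through its own world transform, and a last-bit mismatch along the shared edge is enough to let single background pixels show through. Below is a tiny CPU-side diagnostic (plain C++, names and values illustrative, not taken from the post) that checks whether the seam vertices of the two planes come out bit-identical when pushed through the same math the vertex shader would do:

        #include <cstdio>
        #include <cstring>

        int main()
        {
            // Plane A is the 4x4 plane at (0,0); plane B is the same plane at (4,0).
            const float planeAEdgeLocalX = 4.0f;   // right edge of plane A in local space
            const float planeBEdgeLocalX = 0.0f;   // left edge of plane B in local space
            const float planeATranslateX = 0.0f;
            const float planeBTranslateX = 4.0f;

            // Same arithmetic a pure-translation world matrix would apply.
            float worldA = planeAEdgeLocalX * 1.0f + planeATranslateX;
            float worldB = planeBEdgeLocalX * 1.0f + planeBTranslateX;

            if (std::memcmp(&worldA, &worldB, sizeof(float)) != 0)
                std::printf("seam mismatch: %.9g vs %.9g -> watertightness problem\n", worldA, worldB);
            else
                std::printf("seam is bit-identical at %.9g\n", worldA);
            return 0;
        }

     With whole-number translations the two sides should come out identical, in which case the seam itself is probably not the culprit and it is worth checking instead that both draws use exactly the same view/projection matrices and vertex shader math, and that the grid interiors do not introduce T-junctions along the shared edge.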
  2. TL;DR: is there a way to "capture" a constant buffer in a command list (the way the InstanceCount in DrawIndexedInstanced is captured) so I can update it before the command list is executed? Hey, I want to draw millions of objects and I use instancing to do so. My current implementation caches the matrix buffers, so I have a constant buffer for each model-material combination. This is done so I don't have to rebuild my buffers each frame, because most of my scene is static but can move at times. To update the constant buffers I have another thread which creates command lists to update them and executes those on the immediate context. My render thread(s) also create command lists ahead of time to issue to the GPU when a new frame is needed. The matrix buffers are shared between multiple render threads. The result is that when an object changes color, and therefore moves from one model-material buffer to another, it is hidden for one frame and visible the next, or appears for one frame at a location where another object used to be. I speculate this is because the constant buffer for matrices is updated immediately while the InstanceCount in the draw command list is not, which leads to matrices that contain old or uninitialized memory. Is there a way to update my matrix constant buffers without stalling every render thread and invalidating all render command lists? Regards
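     A follow-up note: D3D11 command lists only capture bindings, not the contents of a constant buffer, so there is no direct way to snapshot the matrices the way InstanceCount is snapshotted. The usual workaround is to version the buffer: keep a small ring of constant buffers per model-material combination, have the update thread write the slot for the frame being prepared, and have each recorded command list bind the slot belonging to the frame it was recorded for. A rough C++ sketch of the idea; MatrixRing, the slot count and the frame-index handshake between threads are illustrative assumptions, not code from the post:

        #include <d3d11.h>
        #include <cstring>

        static const UINT kSlots = 3;   // >= the number of frames a recorded command list can be in flight

        struct MatrixRing
        {
            ID3D11Buffer* slot[kSlots] = {};  // dynamic constant buffers (USAGE_DYNAMIC, CPU write), same size
            UINT          frameIndex   = 0;   // advanced once per frame by whoever owns the frame loop
        };

        // Update thread: writes only the slot for the frame being prepared, never a slot
        // that an already recorded command list might still reference.
        void WriteMatrices(ID3D11DeviceContext* immediate, MatrixRing& ring,
                           const void* matrices, size_t bytes)
        {
            ID3D11Buffer* target = ring.slot[ring.frameIndex % kSlots];
            D3D11_MAPPED_SUBRESOURCE mapped = {};
            if (SUCCEEDED(immediate->Map(target, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
            {
                std::memcpy(mapped.pData, matrices, bytes);
                immediate->Unmap(target, 0);
            }
        }

        // Render thread: records its deferred command list against the same frame's slot, so a
        // later update (which goes to a different slot) cannot change the data underneath it.
        void RecordDraw(ID3D11DeviceContext* deferred, const MatrixRing& ring,
                        UINT frameIndex, UINT indexCount, UINT instanceCount)
        {
            ID3D11Buffer* cb = ring.slot[frameIndex % kSlots];
            deferred->VSSetConstantBuffers(0, 1, &cb);
            deferred->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);
        }

     This costs a few copies of each matrix buffer, but it keeps the buffer contents and the recorded InstanceCount referring to the same snapshot of the scene, which is exactly what the one-frame color-change glitch needs; no render thread has to stall and no command list has to be invalidated.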
  3. I'm having trouble wrapping my brain around what actually is the issue here, but the sampler I'm using in my volume renderer is only interpolating the 3D texture along the Y axis. I roughly followed (and borrowed a lot of code from) this tutorial, but I'm using SlimDX and WPF: http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html Here's an example, showing voxel-ish artifacts on the X and Z axes, which are evidently not being interpolated: ...whereas on the Y axis it appears to be interpolating correctly: If I disable any kind of interpolation in the sampler, the whole volume ends up looking voxel-ish / bad: Thinking maybe my hardware didn't support 3D textures (even though it's modern?) I wrote a little trilinear interpolation function, and got the same results. In the trilinear code, I calculate the position of the ray in grid coordinates, and use the fractional portion to do the lerps. So I experimented by just painting the fractional part of the grid coordinate where a ray starts, onto my geometry cast to a float4. As expected, the Y axis looks good, as my input dataset has 30 layers. So I see a white => black fade 30 times: However, my X and Z fractional values are strange. What I should be seeing is the same white => black fade 144 and 145 times, respectively. But what I get is this: ... which is definitely not right. The values are A) discretized and uniform per grid cell, and B) exhibit a pattern that repeats every handful of grid rows, instead of a smooth fade on each cell. My suspicion is that I'm initializing my texture badly, but here's a look at the whole pipeline from initialization to rendering 1) Loading data from a file, then constructing all my rendering-related objects: Data = new GURUGridFile(@"E:\GURU2 Test Data\GoshenDual\Finished\30_DOW7_(X)_20090605_220006.ggf"); double DataX = Data.CellSize[0] * Data.Dimensions[0]; double DataY = Data.CellSize[1] * Data.Dimensions[1]; double DataZ = Data.CellSize[2] * Data.Dimensions[2]; double MaxSize = Math.Max(DataX, Math.Max(DataY, DataZ)); DataX /= MaxSize; DataY /= MaxSize; DataZ /= MaxSize; Renderer.XSize = (float)DataX; Renderer.YSize = (float)DataY; Renderer.ZSize = (float)DataZ; int ProductCode = Data.LayerProducts[0].ToList().IndexOf("A_DZ"); float[,,] RadarData = new float[Data.Dimensions[0], Data.Dimensions[1], Data.Dimensions[2]]; for (int x = 0; x < Data.Dimensions[0]; x++) for (int y = 0; y < Data.Dimensions[1]; y++) for (int z = 0; z < Data.Dimensions[2]; z++) RadarData[x, y, z] = Data.Data[z][ProductCode][x, y]; int DataSize = Math.Max(RadarData.GetLength(0), Math.Max(RadarData.GetLength(1), RadarData.GetLength(2))); int mWidth = RadarData.GetLength(0); int mHeight = RadarData.GetLength(2); int mDepth = RadarData.GetLength(1); float mStepScale = 1.0F; float maxSize = (float)Math.Max(mWidth, Math.Max(mHeight, mDepth)); SlimDX.Vector3 stepSize = new SlimDX.Vector3( 1.0f / (mWidth * (maxSize / mWidth)), 1.0f / (mHeight * (maxSize / mHeight)), 1.0f / (mDepth * (maxSize / mDepth))); VolumeRenderer = new VolumeRenderEngine(false, Renderer.device); VolumeRenderer.Data = VolumeRenderTest.Rendering.TextureObject3D.FromData(RadarData); VolumeRenderer.StepSize = stepSize * mStepScale; VolumeRenderer.Iterations = (int)(maxSize * (1.0f / mStepScale) * 2.0F); Renderer.Initialize(); SetupSlimDX(); this.VolumeRenderer.DataWidth = Data.Dimensions[0]; this.VolumeRenderer.DataHeight = Data.Dimensions[2]; this.VolumeRenderer.DataDepth = Data.Dimensions[1]; It's worth noting here that I flip the Z and Y axes when 
passing data to the volume renderer so as to comply with DirectX coordinates. Next is my construction of the Texture3D and related fields. This is the step I think I'm messing up, both in terms of correctness as well as general violation of best practices. public static TextureObject3D FromData(float[,,] Data) { Texture3DDescription texDesc = new Texture3DDescription() { BindFlags = SlimDX.Direct3D11.BindFlags.ShaderResource, CpuAccessFlags = SlimDX.Direct3D11.CpuAccessFlags.None, Format = SlimDX.DXGI.Format.R32_Float, MipLevels = 1, OptionFlags = SlimDX.Direct3D11.ResourceOptionFlags.None, Usage = SlimDX.Direct3D11.ResourceUsage.Default, Width = Data.GetLength(0), Height = Data.GetLength(2), Depth = Data.GetLength(1) }; int i = 0; float[] FlatData = new float[Data.GetLength(0) * Data.GetLength(1) * Data.GetLength(2)]; for (int y = 0; y < Data.GetLength(1); y++) for (int z = 0; z < Data.GetLength(2); z++) for (int x = 0; x < Data.GetLength(0); x++) FlatData[i++] = Data[x, y, z]; DataStream TextureStream = new DataStream(FlatData, true, true); DataBox TextureBox = new DataBox(texDesc.Width * 4, texDesc.Width * texDesc.Height * 4, TextureStream); Texture3D valTex = new Texture3D(Renderer.device, texDesc, TextureBox); var viewDesc = new SlimDX.Direct3D11.ShaderResourceViewDescription() { Format = texDesc.Format, Dimension = SlimDX.Direct3D11.ShaderResourceViewDimension.Texture3D, MipLevels = texDesc.MipLevels, MostDetailedMip = 0, ArraySize = 1, CubeCount = 1, ElementCount = 1 }; ShaderResourceView valTexSRV = new ShaderResourceView(Renderer.device, valTex, viewDesc); TextureObject3D tex = new TextureObject3D(); tex.Device = Renderer.device; tex.Size = TextureStream.Length; tex.TextureStream = TextureStream; tex.TextureBox = TextureBox; tex.Texture = valTex; tex.TextureSRV = valTexSRV; return tex; } The TextureObject3D class is just a helper class that I wrap around a Texture3D to make things a little simpler to work with. At the rendering phase, I draw the back and front faces of my geometry (that is colored according to the vertex coordinates) to textures so that ray starting and ending positions can be calculated, then pass all that nonsense to the effect. 
private void RenderVolume() { // Rasterizer states RasterizerStateDescription RSD_Front = new RasterizerStateDescription(); RSD_Front.FillMode = SlimDX.Direct3D11.FillMode.Solid; RSD_Front.CullMode = CullMode.Back; RSD_Front.IsFrontCounterclockwise = false; RasterizerStateDescription RSD_Rear = new RasterizerStateDescription(); RSD_Rear.FillMode = SlimDX.Direct3D11.FillMode.Solid; RSD_Rear.CullMode = CullMode.Front; RSD_Rear.IsFrontCounterclockwise = false; RasterizerState RS_OLD = Device.ImmediateContext.Rasterizer.State; RasterizerState RS_FRONT = RasterizerState.FromDescription(Renderer.device, RSD_Front); RasterizerState RS_REAR = RasterizerState.FromDescription(Renderer.device, RSD_Rear); // Calculate world view matrix Matrix wvp = _world * _view * _proj; RenderTargetView NullRTV = null; // First we need to render to the rear texture SetupBlend(false); PrepareRTV(RearTextureView); SetBuffers(); Device.ImmediateContext.Rasterizer.State = RS_REAR; Renderer.RayCasting101FX_WVP.SetMatrix(wvp); Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor); ExecuteTechnique(Renderer.RayCasting101FX_RenderPosition); Device.ImmediateContext.Flush(); Device.ImmediateContext.OutputMerger.SetTargets(NullRTV); // Now we draw to the front texture SetupBlend(false); PrepareRTV(FrontTextureView); SetBuffers(); Device.ImmediateContext.Rasterizer.State = RS_FRONT; Renderer.RayCasting101FX_WVP.SetMatrix(wvp); Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor); ExecuteTechnique(Renderer.RayCasting101FX_RenderPosition); Device.ImmediateContext.Flush(); Device.ImmediateContext.OutputMerger.SetTargets(NullRTV); SetupBlend(false); //Set Render Target View Device.ImmediateContext.OutputMerger.SetTargets(SampleRenderView); // Set Viewport Device.ImmediateContext.Rasterizer.SetViewports(new Viewport(0, 0, WindowWidth, WindowHeight, 0.0f, 1.0f)); // Clear screen Device.ImmediateContext.ClearRenderTargetView(SampleRenderView, new Color4(1.0F, 0.0F, 0.0F, 0.0F)); if (Wireframe) { RenderWireframeBack(); Device.ImmediateContext.Rasterizer.State = RS_FRONT; } SetBuffers(); // Render Position Renderer.RayCasting101FX_WVP.SetMatrix(wvp); Renderer.RayCasting101FX_ScaleFactor.Set(ScaleFactor); Renderer.RayCasting101FX_Back.SetResource(new ShaderResourceView(Renderer.device, RearTexture));// RearTextureSRV); Renderer.RayCasting101FX_Front.SetResource(new ShaderResourceView(Renderer.device, FrontTexture));//FrontTextureSRV); Renderer.RayCasting101FX_Volume.SetResource(new ShaderResourceView(Renderer.device, Data.Texture)); Renderer.RayCasting101FX_StepSize.Set(StepSize); Renderer.RayCasting101FX_Iterations.Set(Iterations); Renderer.RayCasting101FX_Width.Set(DataWidth); Renderer.RayCasting101FX_Height.Set(DataHeight); Renderer.RayCasting101FX_Depth.Set(DataDepth); ExecuteTechnique(Renderer.RayCasting101FX_RayCastSimple); if (Wireframe) { RenderWireframeFront(); Device.ImmediateContext.Rasterizer.State = RS_FRONT; } int sourceSubresource; sourceSubresource = SlimDX.Direct3D11.Resource.CalculateSubresourceIndex(0, 1, 1);// MSAATexture.CalculateSubResourceIndex(0, 0, out sourceMipLevels); int destinationSubresource; destinationSubresource = SlimDX.Direct3D11.Resource.CalculateSubresourceIndex(0, 1, 1); //m_renderTarget.CalculateSubResourceIndex(0, 0, out destinationMipLevels); Device.ImmediateContext.ResolveSubresource(MSAATexture, 0, SharedTexture, 0, Format.B8G8R8A8_UNorm); Device.ImmediateContext.Flush(); CanvasInvalid = false; sw.Stop(); this.LastFrame = sw.ElapsedTicks / 10000.0; } private void PrepareRTV(RenderTargetView 
rtv) { //Set Depth Stencil and Render Target View Device.ImmediateContext.OutputMerger.SetTargets(rtv); // Set Viewport Device.ImmediateContext.Rasterizer.SetViewports(new Viewport(0, 0, WindowWidth, WindowHeight, 0.0f, 1.0f)); // Clear render target Device.ImmediateContext.ClearRenderTargetView(rtv, new Color4(1.0F, 0.0F, 0.0F, 0.0F)); } private void SetBuffers() { // Setup buffer info Device.ImmediateContext.InputAssembler.InputLayout = Renderer.RayCastVBLayout; Device.ImmediateContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList; Device.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(Renderer.VertexBuffer, Renderer.VertexPC.Stride, 0)); Device.ImmediateContext.InputAssembler.SetIndexBuffer(Renderer.IndexBuffer, Format.R32_UInt, 0); } private void ExecuteTechnique(EffectTechnique T) { for (int p = 0; p < T.Description.PassCount; p++) { T.GetPassByIndex(p).Apply(Device.ImmediateContext); Device.ImmediateContext.DrawIndexed(36, 0, 0); } } Finally, here's the shader in its entirety. The TrilinearSample function is supposed to compute a good, interpolated sample but is what ended up highlighting what the problem likely is. What it does, or at least attempts to do, is calculate the actual coordinate of the ray in the original grid coordinates, then use the decimal portion to do the interpolation. float4x4 World; float4x4 WorldViewProj; float4x4 WorldInvTrans; float3 StepSize; int Iterations; int Side; float4 ScaleFactor; int Width; int Height; int Depth; Texture2D<float3> Front; Texture2D<float3> Back; Texture3D<float1> Volume; SamplerState FrontSS = sampler_state { Texture = <Front>; Filter = MIN_MAG_MIP_POINT; AddressU = Border; // border sampling in U AddressV = Border; // border sampling in V BorderColor = float4(0, 0, 0, 0); // outside of border should be black }; SamplerState BackSS = sampler_state { Texture = <Back>; Filter = MIN_MAG_MIP_POINT; AddressU = Border; // border sampling in U AddressV = Border; // border sampling in V BorderColor = float4(0, 0, 0, 0); // outside of border should be black }; SamplerState VolumeSS = sampler_state { Texture = <Volume>; Filter = MIN_MAG_MIP_LINEAR; AddressU = Border; // border sampling in U AddressV = Border; // border sampling in V AddressW = Border; // border sampling in W BorderColor = float4(0, 0, 0, 0); // outside of border should be black }; struct VertexShaderInput { float3 Position : POSITION; float4 texC : COLOR; }; struct VertexShaderOutput { float4 Position : SV_POSITION; float3 texC : TEXCOORD0; float4 pos : TEXCOORD1; }; VertexShaderOutput PositionVS(VertexShaderInput input) { VertexShaderOutput output; output.Position = float4(input.Position, 1.0); output.Position = mul(output.Position * ScaleFactor, WorldViewProj); output.texC = input.texC.xyz; output.pos = output.Position; return output; } float4 PositionPS(VertexShaderOutput input) : SV_TARGET // : COLOR0 { return float4(input.texC, 1.0f); } float4 WireFramePS(VertexShaderOutput input) : SV_TARGET // : COLOR0 { return float4(1.0f, .5f, 0.0f, .85f); } //draws the front or back positions, or the ray direction through the volume float4 DirectionPS(VertexShaderOutput input) : SV_TARGET // : COLOR0 { float2 texC = input.pos.xy /= input.pos.w; texC.x = 0.5f * texC.x + 0.5f; texC.y = -0.5f * texC.y + 0.5f; float3 front = Front.Sample(FrontSS, texC).rgb;// tex2D(FrontS, texC).rgb; float3 back = Back.Sample(BackSS, texC).rgb; // tex2D(BackS, texC).rgb; if(Side == 0) { float4 res = float4(front, 1.0f); return res; } if(Side == 1) { float4 
res = float4(back, 1.0f); return res; } return float4(abs(back - front), 1.0f); } float TrilinearSample(float3 pos) { float X = pos.x * Width; float Y = pos.y * Height; float Z = pos.z * Depth; float iX = floor(X); float iY = floor(Y); float iZ = floor(Z); float iXn = iX + 1; float iYn = iY + 1; float iZn = iZ + 1; float XD = X - iX; float YD = Y - iY; float ZD = Z - iZ; float LL = lerp(Volume[float3(iX, iY, iZ)], Volume[float3(iX, iY, iZn)], ZD); float LR = lerp(Volume[float3(iXn, iY, iZ)], Volume[float3(iXn, iY, iZn)], ZD); float UL = lerp(Volume[float3(iX, iYn, iZ)], Volume[float3(iX, iYn, iZn)], ZD); float UR = lerp(Volume[float3(iXn, iYn, iZ)], Volume[float3(iXn, iYn, iZn)], ZD); float L = lerp(LL, UL, YD); float R = lerp(LR, UR, YD); //return ZD; return lerp(L, R, XD); return 0.0F; } float4 RayCastSimplePS(VertexShaderOutput input) : SV_TARGET // : COLOR0 { //calculate projective texture coordinates //used to project the front and back position textures onto the cube float2 texC = input.pos.xy /= input.pos.w; texC.x = 0.5f* texC.x + 0.5f; texC.y = -0.5f* texC.y + 0.5f; float3 front = Front.Sample(FrontSS, texC).rgb; // tex2D(FrontS, texC).xyz; float3 back = Back.Sample(BackSS, texC).rgb; // tex2D(BackS, texC).xyz; float3 dir = normalize(back - front); float4 pos = float4(front, 0); float4 dst = float4(0, 0, 0, 0); float4 src = 0; float value = 0; //Iterations = 1500; float3 Step = dir * StepSize; // / (float)Iterations; float3 TotalStep = float3(0, 0, 0); value = Volume.Sample(VolumeSS, pos.xyz).r; int i = 0; for(i = 0; i < Iterations; i++) { pos.w = 0; //value = Volume.SampleLevel(VolumeSS, pos.xyz, 0); value = TrilinearSample(pos.xyz); // tex3Dlod(VolumeS, pos).r; // Radar reflectivity related threshold values if (value < 40) value = 40; if (value > 60) value = 60; value = (value - 40.0) / 20.0; src = (float4)(value); src.a /= (Iterations / 50.0); //Front to back blending // dst.rgb = dst.rgb + (1 - dst.a) * src.a * src.rgb // dst.a = dst.a + (1 - dst.a) * src.a src.rgb *= src.a; dst = (1.0f - dst.a) * src + dst; //break from the loop when alpha gets high enough if (dst.a >= .95f) break; //advance the current position pos.xyz += Step; TotalStep += Step; //break if the position is greater than <1, 1, 1> if (pos.x > 1.0f || pos.y > 1.0f || pos.z > 1.0f || pos.x < 0.0f || pos.y < 0.0f || pos.z < 0.0f) break; } return dst; } technique11 RenderPosition { pass Pass1 { SetVertexShader(CompileShader(vs_4_0, PositionVS())); SetGeometryShader(NULL); SetPixelShader(CompileShader(ps_4_0, PositionPS())); //VertexShader = compile vs_2_0 PositionVS(); //PixelShader = compile ps_2_0 PositionPS(); } } technique11 RayCastDirection { pass Pass1 { SetVertexShader(CompileShader(vs_4_0, PositionVS())); SetGeometryShader(NULL); SetPixelShader(CompileShader(ps_4_0, DirectionPS())); //VertexShader = compile vs_2_0 PositionVS(); //PixelShader = compile ps_2_0 DirectionPS(); } } technique11 RayCastSimple { pass Pass1 { SetVertexShader(CompileShader(vs_4_0, PositionVS())); SetGeometryShader(NULL); SetPixelShader(CompileShader(ps_4_0, RayCastSimplePS())); //VertexShader = compile vs_3_0 PositionVS(); //PixelShader = compile ps_3_0 RayCastSimplePS(); } } technique11 WireFrame { pass Pass1 { SetVertexShader(CompileShader(vs_4_0, PositionVS())); SetGeometryShader(NULL); SetPixelShader(CompileShader(ps_4_0, WireFramePS())); //VertexShader = compile vs_2_0 PositionVS(); //PixelShader = compile ps_2_0 WireFramePS(); } } Any insight is hugely appreciated, whether on the specific problem or just random things I'm 
doing wrong. With the coordinates in the Texture3D being so messed up, I'm surprised this renders at all, let alone close to correctly. Thank you in advance!
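     Not a diagnosis, just a reference point for the "initializing my texture badly" suspicion: the row pitch and slice pitch passed with the initial data have to describe exactly the nesting order the CPU loop used when flattening the array (x fastest, then the dimension used as Height, then the dimension used as Depth). Here is the equivalent plain D3D11 C++ call with the pitch math spelled out; the SlimDX DataBox takes the same two pitch values in the same order. Names and dimensions are illustrative:

        #include <d3d11.h>

        // For a Texture3D of (width x height x depth) R32_FLOAT voxels, D3D expects the
        // initial data as 'depth' slices, each slice 'height' rows, each row 'width'
        // floats, i.e. index = (z * height + y) * width + x with x varying fastest.
        ID3D11Texture3D* CreateVolumeTexture(ID3D11Device* device, const float* flatData,
                                             UINT width, UINT height, UINT depth)
        {
            D3D11_TEXTURE3D_DESC desc = {};
            desc.Width     = width;
            desc.Height    = height;
            desc.Depth     = depth;
            desc.MipLevels = 1;
            desc.Format    = DXGI_FORMAT_R32_FLOAT;
            desc.Usage     = D3D11_USAGE_DEFAULT;
            desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

            D3D11_SUBRESOURCE_DATA init = {};
            init.pSysMem          = flatData;
            init.SysMemPitch      = width * 4;            // bytes per row (4 bytes per R32_FLOAT texel)
            init.SysMemSlicePitch = width * height * 4;   // bytes per slice

            ID3D11Texture3D* tex = nullptr;
            device->CreateTexture3D(&desc, &init, &tex);
            return tex;
        }

     If the pitches and the flattening loop do agree, a useful next test is to upload a single synthetic volume (for example a gradient along each axis in turn) through the same path; that separates an upload/layout problem from an addressing problem in the ray-march shader.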
  4. Hello! I would like to introduce Diligent Engine, a project that I've been recently working on. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantages of the next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes common front-end for all supported platforms and provides interoperability with underlying native API. Shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, Unity native plugin or any other 3D application. It is distributed under Apache 2.0 license and is free to use. Full source code is available for download on GitHub. Features: True cross-platform Exact same client code for all supported platforms and rendering backends No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ... No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ... Exact same HLSL shaders run on all platforms and all backends Modular design Components are clearly separated logically and physically and can be used as needed Only take what you need for your project (do not want to keep samples and tutorials in your codebase? Simply remove Samples submodule. Only need core functionality? Use only Core submodule) No 15000 lines-of-code files Clear object-based interface No global states Key graphics features: Automatic shader resource binding designed to leverage the next-generation rendering APIs Multithreaded command buffer generation 50,000 draw calls at 300 fps with D3D12 backend Descriptor, memory and resource state management Modern c++ features to make code fast and reliable The following platforms and low-level APIs are currently supported: Windows Desktop: Direct3D11, Direct3D12, OpenGL Universal Windows: Direct3D11, Direct3D12 Linux: OpenGL Android: OpenGLES MacOS: OpenGL iOS: OpenGLES API Basics Initialization The engine can perform initialization of the API or attach to already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode: #include "RenderDeviceFactoryD3D12.h" using namespace Diligent; // ... GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr; // Load the dll and import GetEngineFactoryD3D12() function LoadGraphicsEngineD3D12(GetEngineFactoryD3D12); auto *pFactoryD3D11 = GetEngineFactoryD3D12(); EngineD3D12Attribs EngD3D12Attribs; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16; EngD3D12Attribs.NumCommandsToFlushCmdList = 64; RefCntAutoPtr<IRenderDevice> pRenderDevice; RefCntAutoPtr<IDeviceContext> pImmediateContext; SwapChainDesc SwapChainDesc; RefCntAutoPtr<ISwapChain> pSwapChain; pFactoryD3D11->CreateDeviceAndContextsD3D12( EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0 ); pFactoryD3D11->CreateSwapChainD3D12( pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain ); Creating Resources Device resources are created by the render device. 
The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer: BufferDesc BuffDesc; BufferDesc.Name = "Uniform buffer"; BuffDesc.BindFlags = BIND_UNIFORM_BUFFER; BuffDesc.Usage = USAGE_DYNAMIC; BuffDesc.uiSizeInBytes = sizeof(ShaderConstants); BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE; m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer ); Similar, to create a texture, populate TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example: TextureDesc TexDesc; TexDesc.Name = "My texture 2D"; TexDesc.Type = TEXTURE_TYPE_2D; TexDesc.Width = 1024; TexDesc.Height = 1024; TexDesc.Format = TEX_FORMAT_RGBA8_UNORM; TexDesc.Usage = USAGE_DEFAULT; TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS; TexDesc.Name = "Sample 2D Texture"; m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex ); Initializing Pipeline State Diligent Engine follows Direct3D12 style to configure the graphics/compute pipeline. One big Pipelines State Object (PSO) encompasses all required states (all shader stages, input layout description, depth stencil, rasterizer and blend state descriptions etc.) Creating Shaders To create a shader, populate ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member: SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes. SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See shader converter for details. SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter. To allow grouping of resources based on the frequency of expected change, Diligent Engine introduces classification of shader variables: Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers. Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps etc. Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly. This post describes the resource binding model in Diligent Engine. 
The following is an example of shader initialization: ShaderCreationAttribs Attrs; Attrs.Desc.Name = "MyPixelShader"; Attrs.FilePath = "MyShaderFile.fx"; Attrs.SearchDirectories = "shaders;shaders\\inc;"; Attrs.EntryPoint = "MyPixelShader"; Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL; Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL; BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories); Attrs.pShaderSourceStreamFactory = &BasicSSSFactory; ShaderVariableDesc ShaderVars[] = { {"g_StaticTexture", SHADER_VARIABLE_TYPE_STATIC}, {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE}, {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC} }; Attrs.Desc.VariableDesc = ShaderVars; Attrs.Desc.NumVariables = _countof(ShaderVars); Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC; StaticSamplerDesc StaticSampler; StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR; StaticSampler.TextureName = "g_MutableTexture"; Attrs.Desc.NumStaticSamplers = 1; Attrs.Desc.StaticSamplers = &StaticSampler; ShaderMacroHelper Macros; Macros.AddShaderMacro("USE_SHADOWS", 1); Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4); Macros.Finalize(); Attrs.Macros = Macros; RefCntAutoPtr<IShader> pShader; m_pDevice->CreateShader( Attrs, &pShader ); Creating the Pipeline State Object To create a pipeline state object, define instance of PipelineStateDesc structure. The structure defines the pipeline specifics such as if the pipeline is a compute pipeline, number and format of render targets as well as depth-stencil format: // This is a graphics pipeline PSODesc.IsComputePipeline = false; PSODesc.GraphicsPipeline.NumRenderTargets = 1; PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB; PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT; The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, rasterizer state can be defined as in the code snippet below: // Init rasterizer state RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc; RasterizerDesc.FillMode = FILL_MODE_SOLID; RasterizerDesc.CullMode = CULL_MODE_NONE; RasterizerDesc.FrontCounterClockwise = True; RasterizerDesc.ScissorEnable = True; //RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded) RasterizerDesc.AntialiasedLineEnable = False; When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO: m_pDev->CreatePipelineState(PSODesc, &m_pPSO); Binding Shader Resources Shader resource binding in Diligent Engine is based on grouping variables in 3 different groups (static, mutable and dynamic). Static variables are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers. 
They are bound directly to the shader object: PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV ); Mutable and dynamic variables are bound via a new object called Shader Resource Binding (SRB), which is created by the pipeline state: m_pPSO->CreateShaderResourceBinding(&m_pSRB); Dynamic and mutable resources are then bound through SRB object: m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV); m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB); The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding. Dynamic resources can be set multiple times. It is important to properly set the variable type as this may affect performance. Static variables are generally most efficient, followed by mutable. Dynamic variables are most expensive from performance point of view. This post explains shader resource binding in more details. Setting the Pipeline State and Invoking Draw Command Before any draw command can be invoked, all required vertex and index buffers as well as the pipeline state should be bound to the device context: // Clear render target const float zero[4] = {0, 0, 0, 0}; m_pContext->ClearRenderTarget(nullptr, zero); // Set vertex and index buffers IBuffer *buffer[] = {m_pVertexBuffer}; Uint32 offsets[] = {0}; Uint32 strides[] = {sizeof(MyVertex)}; m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET); m_pContext->SetIndexBuffer(m_pIndexBuffer, 0); m_pContext->SetPipelineState(m_pPSO); Also, all shader resources must be committed to the device context: m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES); When all required states and resources are bound, IDeviceContext::Draw() can be used to execute draw command or IDeviceContext::DispatchCompute() can be used to execute compute command. Note that for a draw command, graphics pipeline must be bound, and for dispatch command, compute pipeline must be bound. Draw() takes DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, if draw call is indexed or not, if draw call is instanced or not, if draw call is indirect or not, etc.). For example: DrawAttribs attrs; attrs.IsIndexed = true; attrs.IndexType = VT_UINT16; attrs.NumIndices = 36; attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST; pContext->Draw(attrs); Tutorials and Samples The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage. Tutorial 01 - Hello Triangle This tutorial shows how to render a simple triangle using Diligent Engine API. Tutorial 02 - Cube This tutorial demonstrates how to render an actual 3D object, a cube. It shows how to load shaders from files, create and use vertex, index and uniform buffers. Tutorial 03 - Texturing This tutorial demonstrates how to apply a texture to a 3D object. It shows how to load a texture from file, create shader resource binding object and how to sample a texture in the shader. Tutorial 04 - Instancing This tutorial demonstrates how to use instancing to render multiple copies of one object using unique transformation matrix for every copy. Tutorial 05 - Texture Array This tutorial demonstrates how to combine instancing with texture arrays to use unique texture for every instance. 
Tutorial 06 - Multithreading This tutorial shows how to generate command lists in parallel from multiple threads. Tutorial 07 - Geometry Shader This tutorial shows how to use a geometry shader to render a smooth wireframe. Tutorial 08 - Tessellation This tutorial shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm. Tutorial 09 - Quads This tutorial shows how to render multiple 2D quads, frequently switching textures and blend modes. The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. The atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. The repository also includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures. Integration with Unity Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  5. Hello everybody! I decided to write a graphics engine, a killer of Unity and Unreal. If anyone is interested and has free time, join in. The high-level renderer is built on low-level OpenGL 4.5 and DirectX 11. Ideally there will be PBR, TAA, SSR, SSAO, some variation of an indirect lighting algorithm, and support for multiple viewports and multiple cameras. The key feature is a COM-based API (binary compatibility is needed). Physics, ray tracing, AI and VR will not be included. I took the basic architecture from the DGLE engine. The editor will be built on Qt (https://github.com/fra-zz-mer/RenderMasterEditor); there is a buildable editor now. The main goals of the engine are maximum transparency of the architecture and high-quality rendering. For shaders there will be no new language; everything will be handled with defines.
  6. Depending on how far back my camera is, there is an artifact from my point light. Here is what I mean before going into details: I have a flat rectangle and a standing rectangle of 20x20 dimensions with a point light of radius 22. My camera currently sits at -76, but if I move the camera to -75 then even the most extreme example from above (point light z = 3.0) no longer shows the artifact. I am not sure what could be causing it, and I have tried several ideas to no avail. The one with the biggest benefit seemed to be scaling the world matrix used to render the point light's sphere by a small modifier like 1.1 in addition to the radius, but that only masked the issue for a little while. Looking at the render targets in RenderDoc going into the lighting pass, they seem correct, so my guess is that it is my shading code. I am not sure what other details would help describe the problem, but if something is mentioned I will post it. Shader Code Texture2D NormalMap : register( t0 ); Texture2D DiffuseAlbedoMap : register( t1 ); Texture2D SpecularAlbedoMap : register( t2 ); Texture2D PositionMap : register(t3); cbuffer WorldViewProjCB : register( b0 ) { matrix WorldViewProjMatrix; matrix WorldViewMatrix; } cbuffer CameraPosition : register ( b2 ) { float3 CameraPosition; } cbuffer LightInfo : register ( b3 ) { float3 LightPosition; float3 LightColor; float3 LightDirection; float2 SpotLightAngles; float4 LightRange; }; struct VertexShaderInput { float4 Position : POSITION; }; struct VertexShaderOutput { float4 PositionCS : SV_Position; float3 ViewRay : VIEWRAY; }; VertexShaderOutput VertexShaderFunction(in VertexShaderInput input) { VertexShaderOutput output; output.PositionCS = mul( input.Position, WorldViewProjMatrix ); float3 positionVS = mul( input.Position, WorldViewMatrix ).xyz; output.ViewRay = positionVS; return output; } void GetGBufferAttributes(in float2 screenPos, out float3 normal, out float3 position, out float3 diffuseAlbedo, out float3 specularAlbedo, out float specularPower) { int3 sampleIndices = int3(screenPos.xy, 0); normal = NormalMap.Load(sampleIndices).xyz; position = PositionMap.Load(sampleIndices).xyz; diffuseAlbedo = DiffuseAlbedoMap.Load(sampleIndices).xyz; float4 spec = SpecularAlbedoMap.Load(sampleIndices); specularAlbedo = spec.xyz; specularPower = spec.w; } float3 CalcLighting(in float3 normal, in float3 position, in float3 diffuseAlbedo, in float3 specularAlbedo, in float specularPower) { float3 L = 0; float attenuation = 1.0f; L = LightPosition - position; float dist = length(L); attenuation = max(0, 1.0f - (dist / LightRange.x)); L /= dist; float nDotL = saturate(dot(normal, L)); float3 diffuse = nDotL * LightColor * diffuseAlbedo; float3 V = CameraPosition - position; float3 H = normalize( L + V); float3 specular = pow(saturate(dot(normal, H)), specularPower) * LightColor * specularAlbedo.xyz * nDotL; return (diffuse + specular) * attenuation; } float4 PixelShaderFunction( in float4 screenPos : SV_Position ) : SV_Target0 { float3 normal; float3 position; float3 diffuseAlbedo; float3 specularAlbedo; float specularPower; GetGBufferAttributes(screenPos.xy, normal, position, diffuseAlbedo, specularAlbedo, specularPower); float3 lighting = CalcLighting(normal, position, diffuseAlbedo, specularAlbedo, specularPower); return float4(lighting, 1.0f); }
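     A hedged observation on the shading code rather than the G-buffer pass: in CalcLighting the view vector V = CameraPosition - position is used un-normalized when the half vector H = normalize(L + V) is built, so the specular lobe changes with the camera's distance, which fits an artifact that appears at -76 but not at -75. The C++ sketch below only illustrates the half-vector math; Vec3 and the helper functions are plain stand-ins, not engine types:

        #include <algorithm>
        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3  operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        static Vec3  operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static float dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
        static Vec3  normalize(Vec3 v)         { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

        // Blinn-Phong specular term at a surface point.
        float SpecularTerm(Vec3 lightPos, Vec3 cameraPos, Vec3 position, Vec3 normal, float specularPower)
        {
            Vec3 L = normalize(lightPos - position);

            // As posted: V keeps its length, so H leans more and more towards V the farther
            // away the camera is, and the highlight drifts with camera distance.
            // Vec3 V = cameraPos - position;

            // Distance-independent form: normalize V before building the half vector.
            Vec3 V = normalize(cameraPos - position);

            Vec3 H = normalize(L + V);
            return std::pow(std::max(dot(normal, H), 0.0f), specularPower);
        }

     If normalizing V changes nothing, the other usual suspect with deferred point lights is the light-volume sphere being culled or clipped as the camera approaches it, but at -75/-76 with a radius of 22 the camera is well outside the volume, so the half-vector term looks like the more likely of the two.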
  7. I'm trying to offset the depth value of all pixels written by an HLSL pixel shader by a constant view-space value (it's used in fighting games like Guilty Gear and Street Fighter V to simulate 2D layering effects; I wish to do something similar). The projection matrix is generated in SharpDX using a standard perspective projection matrix (PerspectiveFovLH, which makes a matrix similar to the one described at the bottom there). My pixel shader looks like this: struct PSoutput { float4 color: SV_TARGET; float depth: SV_DEPTH; }; PSoutput PShaderNormalDepth(VOutColorNormalView input) { PSoutput output; output.color = BlinnPhong(input.color, input.normal, input.viewDirection); output.depth = input.position.z; //input.position's just the standard SV_POSITION return output; } This gives me exactly the same results as before I included the depth output. Given a view-space offset value passed in a constant buffer, how do I compute the correct offset to apply from there? EDIT: I've been stuck on this for weeks, but of course a bit after posting it I figured it out, after reading this. With a standard projection, the clip-space position.z really contains D = a * (1/z) + b, where b and a are elements 33 and 43 of the projection matrix and z is the view-space depth. This means the view-space depth can be computed as z = a/(D-b). So to add a given view-space depth offset in the pixel shader, you do this: float trueZ = projectionMatrix._43 / (input.position.z - projectionMatrix._33); output.depth = projectionMatrix._43 / (trueZ + zOffset) + projectionMatrix._33;
  8. Hi, I wrote my animation importer for Direct3D 11 using assimp and an FBX file exported from Blender and everything is working after I flipped the axes such that Y=Z and Z=-Y. I basically multiplied BoneOffsetMatrix = BoneOffsetMatrix * FlipMatrix and GlobalInverseMatrix = Inverse(FlipMatrix) * GlobalInverseMatrix where FlipMatrix = ( 1,0,0,0, 0,0,1,0, 0,-1,0,0, 0,0,0,1 ) (matrices in row-major format). But why do I have to? There are tutorials (for OpenGL, but still) where this worked fine without this step. Is it a setting in Blender that is wrong? I am applying those transformations to my own vertices, so I'm not using the ones provided in the FBX file. But even if I did, those would be wrong without flipping the axes after the global inverse transformation. Even though it is working, I want to give my editor to modders of my game, so I can't be sure that it works at their ends since I had to add a step that should not be required according to the documentation. Cheers, Magogan
  9. Hey, I'm working on an engine. Whenever we run dispatch calls they are much more expensive than draw calls on the driver side on Nvidia cards (checked with Nsight). Running 5000 draw calls costs roughly the same as a few hundred dispatches. Is this normal?
  10. Hi, sorry for my English. My computer specs are: Win 8.1, DirectX 11.2, GeForce GTX 750 Ti with the latest drivers. In my project I must use the color blend mode max via SDL_ComposeCustomBlendMode, which in SDL 2.0.9 is supported by the direct3d11 renderer only. Changing the defines in SDL_config.h or SDL_config_windows.h (SDL_VIDEO_RENDER_D3D11 to 1 and SDL_VIDEO_RENDER_D3D to 0) doesn't help. SDL says my system supports the direct3d, opengl, opengles2 and software renderers. What should I do to activate the direct3d11 renderer so I can use blend mode max?
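     For what it's worth, which renderer SDL picks at runtime is controlled by a hint, not by the defines in SDL_config.h (those only matter when SDL itself is recompiled). A short sketch of requesting the Direct3D 11 backend with the standard SDL 2 API, error handling trimmed:

        #include <SDL.h>

        int main(int argc, char* argv[])
        {
            SDL_Init(SDL_INIT_VIDEO);

            /* Ask SDL to prefer the Direct3D 11 backend before any renderer is created. */
            SDL_SetHint(SDL_HINT_RENDER_DRIVER, "direct3d11");

            SDL_Window*   window   = SDL_CreateWindow("blend test", SDL_WINDOWPOS_CENTERED,
                                                      SDL_WINDOWPOS_CENTERED, 800, 600, 0);
            SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

            /* Confirm which backend was actually chosen. */
            SDL_RendererInfo info;
            SDL_GetRendererInfo(renderer, &info);
            SDL_Log("renderer backend: %s", info.name);   /* expect "direct3d11" */

            SDL_DestroyRenderer(renderer);
            SDL_DestroyWindow(window);
            SDL_Quit();
            return 0;
        }

     If "direct3d11" never shows up when you enumerate SDL_GetNumRenderDrivers/SDL_GetRenderDriverInfo, the SDL build you are linking against was compiled without that backend, and the library itself needs to be rebuilt with SDL_VIDEO_RENDER_D3D11 enabled; editing SDL_config.h in your application project has no effect on an already built SDL2.dll.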
  11. Hello, I'm doing tessellation, and while I have the positions set up correctly using quads, I am at a loss as to how to generate smooth normals from them. I suppose this should be done in the domain shader, but what would that process look like? I am using a heightmap for tessellation, but I would rather generate the normals from the geometry than use a normal map, if possible. Cheers
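     A sketch of the usual approach: since tessellated vertices have no precomputed normals to interpolate, rebuild the normal in the domain shader from the heightmap itself by sampling at small offsets in U and V around the domain point and taking central differences. The math is written below in plain C++ so it can be ported to HLSL nearly line for line; SampleHeight stands in for the heightmap fetch the domain shader already performs for displacement, and all names are illustrative:

        #include <cmath>

        struct Float3 { float x, y, z; };

        static Float3 Normalize(Float3 v)
        {
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        // Placeholder for the heightmap fetch (Heightmap.SampleLevel(...) in HLSL).
        // Returns 0 here only so the sketch compiles on its own.
        static float SampleHeight(float /*u*/, float /*v*/) { return 0.0f; }

        // Central-difference normal at (u, v).  texelSize is 1 / heightmap resolution,
        // heightScale is the vertical scale applied to the displaced positions, and
        // worldCellSize is the horizontal spacing between adjacent heightmap samples.
        Float3 HeightmapNormal(float u, float v, float texelSize,
                               float heightScale, float worldCellSize)
        {
            float hL = SampleHeight(u - texelSize, v) * heightScale;
            float hR = SampleHeight(u + texelSize, v) * heightScale;
            float hD = SampleHeight(u, v - texelSize) * heightScale;
            float hU = SampleHeight(u, v + texelSize) * heightScale;

            // Cross product of the two surface tangents, ordered so the normal points up.
            return Normalize({ hL - hR, 2.0f * worldCellSize, hD - hU });
        }

     In the domain shader use SampleLevel rather than Sample for these fetches, since implicit derivatives are not available outside the pixel shader; the same formula can also be evaluated per pixel instead, which stays smooth even at low tessellation factors.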
  12. Hello all, I have made a simple shadow map shader with a small problem in my implementation: I know something is missing, because it draws the shadow on polygons not facing the light when it should not. Since my knowledge of the available shader functions is limited, I cannot spot the problem. I put this under the DX11 HLSL tag, though GLSL and any other hints or tips are appreciated and welcome ^_^y IMAGE : CODE: // Shadow color applying //------------------------------------------------------------------------------------------------------------------ //*>> Pixel position in light space // float4 m_LightingPos = mul(IN.WorldPos3D, __SM_LightViewProj); //*>> Shadow texture coordinates // float2 m_ShadowTexCoord = 0.5 * m_LightingPos.xy / m_LightingPos.w + float2( 0.5, 0.5 ); m_ShadowTexCoord.y = 1.0f - m_ShadowTexCoord.y; //*>> Shadow map depth // float m_ShadowDepth = tex2D( ShadowMapSampler, m_ShadowTexCoord ).r; //*>> Pixel depth // float m_PixelDepth = (m_LightingPos.z / m_LightingPos.w) - 0.001f; //*>> Pixel depth in front of the shadow map depth then apply shadow color // if ( m_PixelDepth > m_ShadowDepth ) { m_ColorView *= float4(0.5,0.5,0.5,0); } // Final color //------------------------------------------------------------------------------------------------------------------ return m_ColorView;
  13. Well we're back with a new entry. As usual we have made a lot of bug fixes. The primary new feature though is the separation of graphics, and planet generation in different threads. Here's the somewhat familiar code that was modified from our first entry.... void CDLClient::InitTest2() { this->CreateConsole(); printf("Starting Test2\n"); fflush(stdout); // Create virtual heap m_pHeap = new(MDL_VHEAP_MAX, MDL_VHEAP_INIT, MDL_VHEAP_HASH_MAX) CDLVHeap(); CDLVHeap *pHeap = m_pHeap.Heap(); // Create the universe m_pUniverse = new(pHeap) CDLUniverseObject(pHeap); // Create the graphics interface CDLDXWorldInterface *pInterface = new(pHeap) CDLDXWorldInterface(this); // Camera control double fMinDist = 0.0; double fMaxDist = 3200000.0; double fSrtDist = 1600000.0; // World size double fRad = 400000.0; // Fractal function for world CDLValuatorRidgedMultiFractal *pNV = new(pHeap) CDLValuatorRidgedMultiFractal(pHeap,fRad,fRad/20,2.0,23423098); //CDLValuatorSimplex3D *pNV = new(pHeap) CDLValuatorSimplex3D(fRad,fRad/20,2.0,23423098); // Create world CDLSphereObjectView *pSO = new(pHeap) CDLSphereObjectView( pHeap, fRad, 1.0 , 0.25, 6, pNV ); pSO->SetGraphicsInterface(pInterface); // Create an astral reference from the universe to the world and attach it to the universe CDLReferenceAstral *pRef = new(pHeap) CDLReferenceAstral(m_pUniverse(),pSO); m_pUniverse->PushReference(pRef); // Create the camera m_pCamera = new(pHeap) CDLCameraObject(pHeap, FDL_PI/4.0, this->GetWidth(), this->GetHeight()); m_pCamera->SetGraphicsInterface(pInterface); // Create a world tracking reference from the unverse to the camera m_pBoom = new(pHeap) CDLReferenceFollow(m_pUniverse(),m_pCamera(),pSO,fSrtDist,fMinDist,fMaxDist); m_pUniverse->PushReference(m_pBoom()); // Set zoom speed in the client this->SetZoom(fMinDist,fMaxDist,3.0); // Create the god object (Build point for LOD calculations) m_pGod = new(pHeap) CDLGodObject(pHeap); // Create a reference for the god opbject and attach it to the camera CDLReference *pGodRef = new(pHeap) CDLReference(m_pUniverse(), m_pGod()); m_pCamera->PushReference(pGodRef); // Set the main camera and god object for the universe' m_pUniverse->SetMainCamera(m_pCamera()); m_pUniverse->SetMainGod(m_pGod()); // Load and compile the vertex shader CDLUString clVShaderName = L"VS_DLDX_Test.hlsl"; m_pVertexShader = new(pHeap) CDLDXShaderVertexPC(this,clVShaderName,false,0,1); // Attach the Camera to the vertex shader m_pVertexShader->UseConstantBuffer(0,static_cast<CDLDXConstantBuffer *>(m_pCamera->GetViewData())); // Create the pixel shader CDLUString clPShaderName = L"PS_DLDX_Test.hlsl"; m_pPixelShader = new(pHeap) CDLDXShaderPixelGeneral(this,clPShaderName,false,0,0); // Create a rasterizer state and set to wireframe m_pRasterizeState = new(pHeap) CDLDXRasterizerState(this); m_pRasterizeState->ModifyState().FillMode = D3D11_FILL_WIREFRAME; // Initailze the universe m_pUniverse()->InitFromMainCamera(); // Run the universe! m_pUniverse->Run(); } Right at the end we call "m_pUniverse->Run();". This actually starts the build thread. What it does is continuously look at the position of the god object which we have attached to the camera above in the code, and build the planet with the appropriate LOD based on it's proximity to the various terrain chunks.........Let's not bore you with more text or boring pictures. Instead we will bore you with a boring video: As you can see it generates terrain reasonably fast. But there is still a lot more we can do. 
First off, we should eliminate the backside of the planet. Note that as we descend towards the planet, the backside becomes bigger and bigger as the horizon comes closer and closer to the camera. This is one advantage of a spherical world. Second, we can add a lot more threads. In general we try to cache as much data as possible; what we can still do is pre-generate our octree one level down using a fractal function pipeline. Most of the CPU time is spent in the fractal data generation, so it makes sense to add more threading there. Fortunately this is one of the easier places to use threading. For our next entry we hope to go all the way down to the surface and include some nominal shading.
  14. How do I calculate the angle between two points, as seen from a third point, using the D3DXMath library?
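     Assuming "from a third point" means the angle at the third point C between the directions towards the two points A and B, the dot product of the two normalized direction vectors gives it directly. A small sketch using the standard D3DX math helpers (the same D3DXVec3* calls exist in the D3DX9 and D3DX10 headers):

        #include <d3dx9math.h>   // or D3DX10math.h
        #include <math.h>

        // Angle in radians at point c between the directions c->a and c->b.
        float AngleAtPoint(const D3DXVECTOR3& a, const D3DXVECTOR3& b, const D3DXVECTOR3& c)
        {
            D3DXVECTOR3 toA = a - c;
            D3DXVECTOR3 toB = b - c;

            D3DXVec3Normalize(&toA, &toA);
            D3DXVec3Normalize(&toB, &toB);

            float cosAngle = D3DXVec3Dot(&toA, &toB);

            // Clamp against tiny floating-point overshoot before acos.
            if (cosAngle >  1.0f) cosAngle =  1.0f;
            if (cosAngle < -1.0f) cosAngle = -1.0f;

            return acosf(cosAngle);   // multiply by 180/pi for degrees
        }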
  15. I was wondering if anyone knows of any tools to help design procedural textures. More specifically, I need something that will output the actual procedure rather than just the texture. It could output HLSL or some pseudocode that I can port to HLSL. The important thing is that I need the algorithm, not just the texture, so I can put it into a pixel shader myself. I posted the question on the Allegorithmic forum, but someone answered that while Substance Designer uses procedures internally, it doesn't support code output, so I guess that one is out.
  16. Hi, I am doing a hobby project where I am trying to make a game work with Oculus. I do not have the game's source code, but I believe it supplies the OpenVR API with a DX10 texture (the game itself is DX9; I believe internally they convert the DX9 texture to DX10 for submission to SteamVR/OpenVR). I originally tried to do everything in DX10 on my side, but I don't think Oculus supports DX10. So now I'd like to try converting the DX10 texture to DX11 and using that in Oculus' texture swap chain. I have a couple of questions: * Could someone suggest a way to convert a DX10 texture to DX11? I am going to try the techniques described here https://docs.microsoft.com/en-us/windows/desktop/direct3darticles/surface-sharing-between-windows-graphics-apis but I am not a DX guru, in fact I am not a graphics programmer and the last time I touched D3D was 15 years ago, and if someone could provide me with a simpler approach, that'd be very helpful. Or, at least, could someone clarify whether the article I referenced is indeed what I need? * Given a texture pointer, how can I figure out whether it is indeed a DX10 texture? I was able to get the description, and it seems to be filled reasonably: ID3D10Texture2D *src = (ID3D10Texture2D*)texture->handle; D3D10_TEXTURE2D_DESC srcDesc; src->GetDesc(&srcDesc); but how could I, for example, tell whether it is a DX10 or DX10.1 texture? * It looks like I will have to instantiate a DX11 device myself. Is there any harm in having multiple D3D11 devices instantiated (one per swap chain), or do I need to share a single device? Thanks for your help.
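     On the "is it really a DX10 texture" part, QueryInterface is the usual way to check, and the shared-handle route from the linked article is also how the surface would move across to a D3D11 device. A rough sketch of both steps, assuming the handle the game hands over really is a COM interface pointer and that the texture was created with the shared misc flag (if it was not, GetSharedHandle fails and the owning process would have to create it shared):

        #include <d3d10.h>
        #include <d3d11.h>
        #include <dxgi.h>

        // 1) Find out what the opaque pointer actually is.
        bool IsD3D10Texture(IUnknown* handle)
        {
            ID3D10Texture2D* tex10 = nullptr;
            if (SUCCEEDED(handle->QueryInterface(__uuidof(ID3D10Texture2D), (void**)&tex10)))
            {
                tex10->Release();
                return true;   // D3D10.1 textures expose the same ID3D10Texture2D interface
            }
            return false;
        }

        // 2) Open the same surface on a D3D11 device through its DXGI shared handle.
        ID3D11Texture2D* OpenOnD3D11(ID3D10Texture2D* tex10, ID3D11Device* device11)
        {
            IDXGIResource* dxgiRes = nullptr;
            HANDLE shared = nullptr;
            ID3D11Texture2D* tex11 = nullptr;

            if (SUCCEEDED(tex10->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes)))
            {
                if (SUCCEEDED(dxgiRes->GetSharedHandle(&shared)) && shared != nullptr)
                {
                    device11->OpenSharedResource(shared, __uuidof(ID3D11Texture2D), (void**)&tex11);
                }
                dxgiRes->Release();
            }
            return tex11;   // nullptr if the resource was not created as shareable
        }

     On the last question: having your own D3D11 device next to the game's device is fine; resources simply cannot be used across devices except through this same shared-handle mechanism, so a single device on your side is usually simpler than one per swap chain.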
  17. Hello, I want to upgrade my program from D3D9 to D3D11, since I need the extra power of the newer API. The unfortunate thing is that D3D11 no longer supports the ID3DXMesh interface, and I would need to supply an FVF and a D3D9 device in order to create one; the D3D9 device is the thing I don't have when I start the application as a D3D11 application, of course, and I don't have the FVF handy either. However, I can't live without an ID3DXMesh interface, because my original program is totally driven by it. How do I get it to work again? Thanks, Jack
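     There is no ID3DXMesh in D3D11, so the usual route is a small mesh class of your own that owns a vertex buffer, an index buffer and the counts, with an input layout taking the place of the FVF. A minimal sketch of such a replacement; the Vertex layout is an assumption and should mirror whatever the old FVF contained:

        #include <d3d11.h>
        #include <DirectXMath.h>

        // Stand-in for the old FVF: position + normal + one texcoord.
        struct Vertex
        {
            DirectX::XMFLOAT3 position;
            DirectX::XMFLOAT3 normal;
            DirectX::XMFLOAT2 uv;
        };

        struct Mesh   // the ID3DXMesh replacement
        {
            ID3D11Buffer* vertexBuffer = nullptr;
            ID3D11Buffer* indexBuffer  = nullptr;
            UINT          indexCount   = 0;
        };

        bool CreateMesh(ID3D11Device* device, const Vertex* verts, UINT vertexCount,
                        const UINT* indices, UINT indexCount, Mesh& out)
        {
            D3D11_BUFFER_DESC vbDesc = {};
            vbDesc.Usage     = D3D11_USAGE_IMMUTABLE;
            vbDesc.ByteWidth = sizeof(Vertex) * vertexCount;
            vbDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
            D3D11_SUBRESOURCE_DATA vbData = { verts, 0, 0 };

            D3D11_BUFFER_DESC ibDesc = {};
            ibDesc.Usage     = D3D11_USAGE_IMMUTABLE;
            ibDesc.ByteWidth = sizeof(UINT) * indexCount;
            ibDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
            D3D11_SUBRESOURCE_DATA ibData = { indices, 0, 0 };

            out.indexCount = indexCount;
            return SUCCEEDED(device->CreateBuffer(&vbDesc, &vbData, &out.vertexBuffer)) &&
                   SUCCEEDED(device->CreateBuffer(&ibDesc, &ibData, &out.indexBuffer));
        }

        void DrawMesh(ID3D11DeviceContext* ctx, const Mesh& mesh)
        {
            UINT stride = sizeof(Vertex), offset = 0;
            ctx->IASetVertexBuffers(0, 1, &mesh.vertexBuffer, &stride, &offset);
            ctx->IASetIndexBuffer(mesh.indexBuffer, DXGI_FORMAT_R32_UINT, 0);
            ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
            ctx->DrawIndexed(mesh.indexCount, 0, 0);
        }

     The loading and optimization utilities that ID3DXMesh bundled now live in separate open-source helpers such as DirectXMesh and DirectXTK, which can fill that gap without needing a D3D9 device or an FVF.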
  18. I call CopyResource to get some data from the GPU to the CPU. If I try to Map immediately, I obviously suffer a performance hit. So I have a ring buffer and I call Map delayed by, say, 3-4 frames. This works much better. Now my question is: how do I know after how many frames I can safely Map without incurring an additional performance hit? I have noticed, for instance, that each extra frame of delay gives linearly better performance, but after about 5-6 frames the performance stays the same no matter how many more frames I wait. Is there a way to "query" DX11 to know when it's "safe" to Map a resource?
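     There is a direct way to ask: issue an event query right after the CopyResource and only Map once the query reports completion; Map can additionally be called with the DO_NOT_WAIT flag so it refuses to stall instead of blocking. A sketch with the standard D3D11 calls (the staging/readback names are illustrative):

        #include <d3d11.h>

        // Issue once per readback, immediately after CopyResource(staging, source).
        ID3D11Query* CreateCopyFence(ID3D11Device* device, ID3D11DeviceContext* ctx)
        {
            D3D11_QUERY_DESC qd = {};
            qd.Query = D3D11_QUERY_EVENT;

            ID3D11Query* query = nullptr;
            device->CreateQuery(&qd, &query);
            ctx->End(query);               // signals once all GPU work submitted so far has finished
            return query;
        }

        // Call on later frames: fills 'mapped' and returns true only once the copy has
        // completed, so Map never blocks the CPU waiting for the GPU.
        bool TryMapReadback(ID3D11DeviceContext* ctx, ID3D11Query* query,
                            ID3D11Resource* staging, D3D11_MAPPED_SUBRESOURCE& mapped)
        {
            BOOL done = FALSE;
            if (ctx->GetData(query, &done, sizeof(done), 0) != S_OK || !done)
                return false;              // still in flight -- try again next frame

            // Belt and braces: if the driver would still stall, this returns
            // DXGI_ERROR_WAS_STILL_DRAWING instead of blocking.
            return SUCCEEDED(ctx->Map(staging, 0, D3D11_MAP_READ,
                                      D3D11_MAP_FLAG_DO_NOT_WAIT, &mapped));
        }

     The number of frames a copy takes is not fixed (it depends on how much work the driver has queued), which is why the observed 5-6 frame plateau is better treated as a measurement than a constant; the event query plus a ring of staging buffers adapts automatically.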
  19. Hi, I am trying to initialize a skybox/cubemap in an Oculus app. Oculus's sample shows how to initialize the texture swap chain textures. The difference in my use case is that I do not initialize from .dds bytes read from disk; I already have ID3D11Texture2D objects. I see samples online for getting the texture bytes, but that would involve a CPU copy of the memory, and I wonder if it can be avoided. Here's roughly what I am doing: int numFaces = 6; for (int i = 0; i < numFaces; ++i) { ID3D11Texture2D *faceSrc = textures->handle; ++textures; context->UpdateSubresource(tex, i, nullptr, (const void*)faceSrc, srcDesc.Width * 4, srcDesc.Width * srcDesc.Height * 4); } However, that crashes with an access violation in the Nvidia driver. Any suggestions? Thanks!
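     One thing that stands out in the snippet, and would explain an access violation inside the driver: UpdateSubresource expects a pointer to CPU memory, but faceSrc is an ID3D11Texture2D interface pointer, so the driver ends up reading the COM object as if it were pixel data. For a GPU-to-GPU copy with no CPU round trip, CopySubresourceRegion is the intended call. A sketch, assuming all the textures belong to the same device and the destination really is a 6-slice texture array (as cubemap swap-chain textures are):

        #include <d3d11.h>

        // Copy six existing GPU textures into the faces of a cubemap texture array.
        void CopyCubeFaces(ID3D11DeviceContext* ctx,
                           ID3D11Texture2D* cubeDst,           // ArraySize = 6
                           ID3D11Texture2D* const faceSrc[6],  // one single-face texture per cube face
                           UINT dstMipLevels)
        {
            for (UINT face = 0; face < 6; ++face)
            {
                // Destination subresource = mip 0 of array slice 'face'.
                UINT dstSub = D3D11CalcSubresource(0, face, dstMipLevels);

                // Source subresource = mip 0 of a non-array texture; nullptr box = whole surface.
                ctx->CopySubresourceRegion(cubeDst, dstSub, 0, 0, 0,
                                           faceSrc[face], 0, nullptr);
            }
        }

     Formats and dimensions of the matching subresources have to be identical for CopySubresourceRegion, and both resources must have been created on the same device; if the faces come from another device they first need to cross over via a shared handle.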
  20. Hi guys, should I use MSAA surfaces (and corresponding depth buffers) when drawing data like positions, normals, and depth for use in effects like SSAO? And what about when drawing the SSAO itself? I'm doing forward rendering for the scene and using MSAA for that.
  21. Gnollrunner

    Mountain Ranges

    For this entry we implemented the ubiquitous Ridged Multi-fractal function. It's not so interesting in and of itself, but it does highlight a few features that were included in our voxel engine. First, as we mentioned, being a voxel engine it supports full 3D geometry (caves, overhangs and so forth) and not just height-maps. However, if we look at a typical world these features are the exception rather than the rule. It therefore makes sense to optimize the height-map portion of our terrain functions. This is especially true since our voxels are vertically aligned. This means that there will be many places where the same height calculation is repeated. Even if we look at a single voxel, nearly the same calculation is used for a lower corner and its corresponding upper corner, the only difference being the subtraction from the voxel vertex position. ...... Enter the unit sphere! In our last entry we talked about explicit voxels, with edges and faces and vertexes. However, all edges and faces are not created equal. Horizontal faces (in our case the triangular faces) and horizontal edges contain a special pointer that references their corresponding parts in a unit sphere. The unit sphere can be thought of as residing in the center of each planet. Like our world octree, it is formed from a subdivided icosahedron, only it is not extruded and is organized into a quadtree instead of an octree, being more 2D in nature. Vertexes in our unit sphere can be used to cache height-map function values to avoid repeated calculations. We also use our unit sphere to help the horizontal part of our voxel subdivision operation. By referencing the unit sphere we only have to multiply a unit sphere vertex by a height value to generate voxel vertex coordinates. Finally, our unit sphere is also used to provide coordinates during the ghost-walking process we talked about in our first entry. Without it, our ghost-walking would be more computationally expensive, as it would have to calculate spherical coordinates on each iteration instead of just calculating heights, which are quite simple to calculate as they are all generated by simply averaging two other heights. Ownership of unit sphere faces is a bit complex. Ostensibly they are owned by all voxel faces that reference them (and therefore add to their reference counter). However, this presents a bit of a problem, as they are also used in ghost-walking, which happens every LOD/re-chunking iteration, and in fact they may or may not end up being referenced by voxel faces, depending on whether mesh geometry is found. Even if no geometry is found we may want to keep them for the next ghost-walk search. To solve this problem, we implemented undead objects. Unit sphere faces can become undead, and can even be created that way if they are built by the ghost-walker. When they are undead they are kept in a special list which keeps them pseudo-alive. They also have an undead life value associated with them. When they are touched by the ghost-walker that value is renewed. However, if after a few iterations they are untouched, they become truly dead and are destroyed. Picture time again..... So here is our Ridged Multi-Fractal in wireframe. We'll flip it around to show our level transitions........ Here's a place that needs a bit of work. The chunk level transitions are correct, but they are probably a bit more complex than they need to be. We use a very general voxel tessellation algorithm since we have to handle various combinations of vertical and horizontal transitions.
We will probably optimize this later, especially for the common cases, but for now it serves its purpose. Next up we are going to try to add threads. We plan to use separate thread(s) for the LOD/re-chunk operations, and another one for the graphics.
  22. Hello everyone! I want to remake a Direct3D 11 C++ application for Android. I'm not familiar with any engines or libraries at the moment, only with pure Direct3D and OpenGL (including OpenGL ES), but I'm ready to learn one. Which library/engine should I choose? At the moment I'm considering LibGDX for this purpose, but I've heard that it's not very suitable for 3D. I was also considering OpenGL ES (with Java), but I think it will be tricky to improve the game in that case (I'm planning to use an animated character and particles in the game). Performance is one of the main requirements for the game. I would also like the possibility to compile the code for iOS, or to port the code to that platform easily. Thanks in advance!
  23. I'm getting an odd problem with DX11. I kind of solved it, but I don't understand why it didn't work the first way. What I'm trying to do is create a bunch of meshes (index and vertex buffers) in one thread but render them in a second thread. I don't render the same meshes I'm currently creating; I build a whole set of new meshes and then, when everything is ready, the build thread tells the render thread to swap to the new set. This worked most of the time, except that once in a while one of the meshes would be corrupted. It was definitely the mesh generation or copy, and not the rendering, because a corrupted mesh would stick around until the next mesh update and then disappear. At first I thought it might be in my CPU-side mesh generation code. I build meshes in my own mesh format and then translate and copy straight to DX11 using ID3D11DeviceContext::Map. I am aware that the device context is not thread safe, so I guard it with a mutex to make sure I'm not trying to use it in two threads at the same time. Before I did this the program would simply crash; afterwards I would only get occasional mesh corruption. Finally, just to try something else, I put a mutex around the whole scene render code and then used that same mutex in the other thread around the CPU-to-DX11 mesh copy section. This solved the problem. However, I don't understand why I should be forced to do this since I was protecting the graphics context before. Is there something I'm missing here? Should I even be calling DX11 from more than one thread? Supposedly it's thread safe except for the graphics context.
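     A possible explanation, hedged because the exact code is not shown: the ID3D11Device methods (CreateBuffer and friends) are free-threaded, but everything on the immediate context is a single timeline, and wrapping each individual Map/Unmap/draw call in a mutex still interleaves the two threads' work on that timeline in ways that are easy to get subtly wrong. One way to sidestep the context entirely is to have the build thread create each new mesh's buffers with initial data through the device only, which needs no mutex against the render thread at all:

        #include <d3d11.h>

        // Build-thread side: no device-context access at all.  ID3D11Device methods are
        // thread safe, so this can run concurrently with rendering on another thread.
        ID3D11Buffer* CreateImmutableVertexBuffer(ID3D11Device* device,
                                                  const void* vertices, UINT byteSize)
        {
            D3D11_BUFFER_DESC desc = {};
            desc.Usage     = D3D11_USAGE_IMMUTABLE;    // filled once at creation, never mapped
            desc.ByteWidth = byteSize;
            desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

            D3D11_SUBRESOURCE_DATA initial = {};
            initial.pSysMem = vertices;                // CPU-side mesh data built by this thread

            ID3D11Buffer* buffer = nullptr;
            device->CreateBuffer(&desc, &initial, &buffer);
            return buffer;                             // hand to the render thread at swap time
        }

     With this pattern the render thread stays the only user of the immediate context, and the "swap to the new set" handoff only needs synchronization around the pointer swap itself, which matches the way the rest of the setup already works.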
  24. I have a process that creates a D3D11 shared texture. According to https://docs.microsoft.com/en-us/window ... phics-apis we can open, in Direct3D9Ex, shared textures previously created by non-DX9 APIs. The texture has the DXGI_FORMAT_B8G8R8A8_UNORM format. I'm trying to open it like this: D3DDevice->CreateTexture(width, height, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture, &shared_handle) (usage=D3DUSAGE_RENDERTARGET makes no difference), but it says: "Direct3D9: (ERROR) :Opened and created resources don't match, unable to open the shared resource." Any thoughts?
  25. Here's my dilemma..... I would like to use a physics engine, but I'm not sure if it's practical for my project. What I need currently is fairly simple: mesh collision and response with a pill-shaped object (i.e. a character). The thing is, I build my geometry at run time and it goes straight into an octree. It's actually built after I figure out where the character is going, in a kind of "just in time" fashion. Also, it's my own custom mesh format. I'd rather not take my mesh and put it in some third-party format, because basically everything I need already exists: faces, edges, vertices, face normals and the octree. So I'm wondering if there is an engine that will somehow let me use my own octree and, for instance, let me register callbacks to pass in the mesh data as needed.