Jump to content
  • Advertisement

Search the Community

Showing results for tags 'DX11'.

The search index is currently processing. Current results may not be complete.


More search options

  • Search By Tags

    Type tags separated by commas.
  • Search By Author

Content Type


Categories

  • Audio
    • Music and Sound FX
  • Business
    • Business and Law
    • Career Development
    • Production and Management
  • Game Design
    • Game Design and Theory
    • Writing for Games
    • UX for Games
  • Industry
    • Interviews
    • Event Coverage
  • Programming
    • Artificial Intelligence
    • General and Gameplay Programming
    • Graphics and GPU Programming
    • Engines and Middleware
    • Math and Physics
    • Networking and Multiplayer
  • Visual Arts
  • Archive

Categories

  • Audio
  • Visual Arts
  • Programming
  • Writing

Categories

  • Game Developers Conference
    • GDC 2017
    • GDC 2018
  • Power-Up Digital Games Conference
    • PDGC I: Words of Wisdom
    • PDGC II: The Devs Strike Back
    • PDGC III: Syntax Error

Forums

  • Audio
    • Music and Sound FX
  • Business
    • Games Career Development
    • Production and Management
    • Games Business and Law
  • Game Design
    • Game Design and Theory
    • Writing for Games
  • Programming
    • Artificial Intelligence
    • Engines and Middleware
    • General and Gameplay Programming
    • Graphics and GPU Programming
    • Math and Physics
    • Networking and Multiplayer
  • Visual Arts
    • 2D and 3D Art
    • Critique and Feedback
  • Community
    • GameDev Challenges
    • GDNet+ Member Forum
    • GDNet Lounge
    • GDNet Comments, Suggestions, and Ideas
    • Coding Horrors
    • Your Announcements
    • Hobby Project Classifieds
    • Indie Showcase
    • Article Writing
  • Affiliates
    • NeHe Productions
    • AngelCode
  • Topical
    • Virtual and Augmented Reality
    • News
  • Workshops
    • C# Workshop
    • CPP Workshop
    • Freehand Drawing Workshop
    • Hands-On Interactive Game Development
    • SICP Workshop
    • XNA 4.0 Workshop
  • Archive
    • Topical
    • Affiliates
    • Contests
    • Technical
  • GameDev Challenges's Topics
  • For Beginners's Forum

Calendars

  • Community Calendar
  • Games Industry Events
  • Game Jams
  • GameDev Challenges's Schedule

Blogs

There are no results to display.

There are no results to display.

Product Groups

  • GDNet+
  • Advertisements
  • GameDev Gear

Find results in...

Find results that contain...


Date Created

  • Start

    End


Last Updated

  • Start

    End


Filter by number of...

Joined

  • Start

    End


Group


About Me


Website


Role


Twitter


Github


Twitch


Steam

Found 685 results

  1. Hello everyone, I've been during the past few days trying to fix sunlight on my sphere which have some bugs in it. For starters I'm using this code: https://github.com/Illation/ETEngine/blob/master/source/Engine/Shaders/PlanetPatch.glsl to calculate my normals instead of using a normal map. I'm then using this guide: http://www.thetenthplanet.de/archives/1180 To get my TBN Matrix. I have 2 main issues I'm working to solve when reworking this code. First I get seams in the normal map along the equator and from pole to pole. The normal also seems to move when I move my camera. Here is a video showing what I mean, the color is the normal calculated with the TBN matrix and as the camera moves it moves along with it. Nothing is multiplied by the view matrix or anything. Here is my code Vertex Shader: output.normal = mul(finalPos, worldMatrix); output.viewVector = (mul(cameraPos.xyz, worldMatrix) - mul(finalPos, worldMatrix)); mapCoords = normalize(finalPos); output.mapCoord = float2((0.5f + (atan2(mapCoords.z, mapCoords.x) / (2 * 3.14159265f))), (0.5f - (asin(mapCoords.y) / 3.14159265f))); output.position = mul(float4(finalPos, 1.0f), worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); return output; and also what might be more important, the pixel shader: float3x3 GetTBNMatrix(float3 normalVector, float3 posVector, float2 uv) { float3 dp1, dp2, dp2perp, dp1perp, T, B; float2 duv1, duv2; float invMax; dp1 = ddx(posVector); dp2 = ddy(posVector); duv1 = ddx(uv); duv2 = ddx(uv); dp2perp = cross(dp2, normalVector); dp1perp = cross(normalVector, dp1); // * -1 due to being LH coordinate system T = (dp2perp * duv1.x + dp1perp * duv2.x) * -1; B = (dp2perp * duv1.y + dp1perp * duv2.y) * -1; invMax = rsqrt(max(dot(T, T), dot(B, B))); return float3x3(T * invMax, B * invMax, normalVector); } float GetHeight(float2 uv) { return shaderTexture.SampleLevel(sampleType, uv, 0).r * (21.229f + 8.2f); } float3 CalculateNormal(float3 normalVector, float3 viewVector, float2 uv) { float textureWidth, textureHeight, hL, hR, hD, hU; float3 texOffset, N; float3x3 TBN; shaderTexture.GetDimensions(textureWidth, textureHeight); texOffset = float3((1.0f / textureWidth), (1.0f / textureHeight), 0.0f); hL = GetHeight(uv - texOffset.xz); hR = GetHeight(uv + texOffset.xz); hD = GetHeight(uv + texOffset.zy); hU = GetHeight(uv - texOffset.zy); N = normalize(float3((hL - hR), (hU - hD), 2.0f)); TBN = GetTBNMatrix(normalVector, -viewVector, uv); return mul(TBN, N); } float4 MarsPixelShader(PixelInputType input) : SV_TARGET { float3 normal; float lightIntensity, color; float4 finalColor; normal = normalize(CalculateNormal(normalize(input.normal), normalize(input.viewVector), input.mapCoord)); lightIntensity = saturate(dot(normal, normalize(-lightDirection))); color = saturate(diffuseColor * lightIntensity); return float4(normal.rgb, 1.0f);//float4(color, color, color, 1.0f); } Hope anyone can help shine some light on this problem for me Best Regards and Thanks in advance Toastmastern
  2. Hi folks, I have a problem and I really could use some ideas from other professionals! I am developing my video game Galactic Crew including its own game engine. I am currently working on improved graphics which includes shadows (I use Shadow Mapping for that). I observed that the game lags, when I use shadows, so I started profiling my source code. I used DirectX 11's Queries to measure the time my GPU spends on different tasks to search for bottlenecks. I found several small issues and solved them. As a result, the GPU needs around 10 ms per frame, which is good enough for 60 FPS (1s / 60 frames ~ 16 ms/frame). See attachment Scene1 for the default view. However, when I zoom into my scene, it starts to lag. See attachment Scene2 for the zoomed view. I compared the times spent on the GPU for both cases: default view and zoomed view. I found out that the render passes in which I render the full scene take much longer (~11 ms instead of ~2ms). One of these render stages is the conversion of the depth information to the Shadow Map and the second one is the final draw of the scene. So, I added even more GPU profiling to find the exact problem. After several iteration steps, I found this call to be the bottleneck: if (model.UseInstancing) _deviceContext.DrawIndexedInstanced(modelPart.NumberOfIndices, model.NumberOfInstances, 0, 0, 0); else _deviceContext.DrawIndexed(modelPart.NumberOfIndices, 0, 0); Whenever I render a scene, I iterate through all visible models in the scene, set the proper vertex and pixel shaders for this model and update the constant buffer of the vertex shader (if required). After that, I iterate through all positions of the model (if it does not use instancing) and iterate through all parts of the model. For each model part, I set the used texture maps (diffuse, normal, ...), set the vertex and index buffers and finally draw the model part by calling the code above. In one frame for example, 11.37 ms were spent drawing all models and their parts, when I zoomed it. From these 11.37 ms 11.35ms were spent in the drawing calls I posted above. As a test, I simplified my rather complex pixel shader to a simple function that returns a fixed color to make sure, the pixel shader is not responsible for my performance problem. As it turned out, the GPU time wasn't reduced. Does anyone of you have any idea what causes my lag, i.e. my long GPU time in the drawing calls? I don't use LOD or anything comparable and I also don't use my BSP scene graph in this scene. It is exactly the same content, but with different zooms. Maybe I missed something very basic. I am grateful for any help!!
  3. Hello! I would like to introduce Diligent Engine, a project that I've been recently working on. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantages of the next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes common front-end for all supported platforms and provides interoperability with underlying native API. Shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, Unity native plugin or any other 3D application. It is distributed under Apache 2.0 license and is free to use. Full source code is available for download on GitHub. Features: True cross-platform Exact same client code for all supported platforms and rendering backends No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ... No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ... Exact same HLSL shaders run on all platforms and all backends Modular design Components are clearly separated logically and physically and can be used as needed Only take what you need for your project (do not want to keep samples and tutorials in your codebase? Simply remove Samples submodule. Only need core functionality? Use only Core submodule) No 15000 lines-of-code files Clear object-based interface No global states Key graphics features: Automatic shader resource binding designed to leverage the next-generation rendering APIs Multithreaded command buffer generation 50,000 draw calls at 300 fps with D3D12 backend Descriptor, memory and resource state management Modern c++ features to make code fast and reliable The following platforms and low-level APIs are currently supported: Windows Desktop: Direct3D11, Direct3D12, OpenGL Universal Windows: Direct3D11, Direct3D12 Linux: OpenGL Android: OpenGLES MacOS: OpenGL iOS: OpenGLES API Basics Initialization The engine can perform initialization of the API or attach to already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode: #include "RenderDeviceFactoryD3D12.h" using namespace Diligent; // ... GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr; // Load the dll and import GetEngineFactoryD3D12() function LoadGraphicsEngineD3D12(GetEngineFactoryD3D12); auto *pFactoryD3D11 = GetEngineFactoryD3D12(); EngineD3D12Attribs EngD3D12Attribs; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16; EngD3D12Attribs.NumCommandsToFlushCmdList = 64; RefCntAutoPtr<IRenderDevice> pRenderDevice; RefCntAutoPtr<IDeviceContext> pImmediateContext; SwapChainDesc SwapChainDesc; RefCntAutoPtr<ISwapChain> pSwapChain; pFactoryD3D11->CreateDeviceAndContextsD3D12( EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0 ); pFactoryD3D11->CreateSwapChainD3D12( pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain ); Creating Resources Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer: BufferDesc BuffDesc; BufferDesc.Name = "Uniform buffer"; BuffDesc.BindFlags = BIND_UNIFORM_BUFFER; BuffDesc.Usage = USAGE_DYNAMIC; BuffDesc.uiSizeInBytes = sizeof(ShaderConstants); BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE; m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer ); Similar, to create a texture, populate TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example: TextureDesc TexDesc; TexDesc.Name = "My texture 2D"; TexDesc.Type = TEXTURE_TYPE_2D; TexDesc.Width = 1024; TexDesc.Height = 1024; TexDesc.Format = TEX_FORMAT_RGBA8_UNORM; TexDesc.Usage = USAGE_DEFAULT; TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS; TexDesc.Name = "Sample 2D Texture"; m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex ); Initializing Pipeline State Diligent Engine follows Direct3D12 style to configure the graphics/compute pipeline. One big Pipelines State Object (PSO) encompasses all required states (all shader stages, input layout description, depth stencil, rasterizer and blend state descriptions etc.) Creating Shaders To create a shader, populate ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member: SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes. SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See shader converter for details. SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter. To allow grouping of resources based on the frequency of expected change, Diligent Engine introduces classification of shader variables: Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers. Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps etc. Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly. This post describes the resource binding model in Diligent Engine. The following is an example of shader initialization: ShaderCreationAttribs Attrs; Attrs.Desc.Name = "MyPixelShader"; Attrs.FilePath = "MyShaderFile.fx"; Attrs.SearchDirectories = "shaders;shaders\\inc;"; Attrs.EntryPoint = "MyPixelShader"; Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL; Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL; BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories); Attrs.pShaderSourceStreamFactory = &BasicSSSFactory; ShaderVariableDesc ShaderVars[] = { {"g_StaticTexture", SHADER_VARIABLE_TYPE_STATIC}, {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE}, {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC} }; Attrs.Desc.VariableDesc = ShaderVars; Attrs.Desc.NumVariables = _countof(ShaderVars); Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC; StaticSamplerDesc StaticSampler; StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR; StaticSampler.TextureName = "g_MutableTexture"; Attrs.Desc.NumStaticSamplers = 1; Attrs.Desc.StaticSamplers = &StaticSampler; ShaderMacroHelper Macros; Macros.AddShaderMacro("USE_SHADOWS", 1); Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4); Macros.Finalize(); Attrs.Macros = Macros; RefCntAutoPtr<IShader> pShader; m_pDevice->CreateShader( Attrs, &pShader ); Creating the Pipeline State Object To create a pipeline state object, define instance of PipelineStateDesc structure. The structure defines the pipeline specifics such as if the pipeline is a compute pipeline, number and format of render targets as well as depth-stencil format: // This is a graphics pipeline PSODesc.IsComputePipeline = false; PSODesc.GraphicsPipeline.NumRenderTargets = 1; PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB; PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT; The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, rasterizer state can be defined as in the code snippet below: // Init rasterizer state RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc; RasterizerDesc.FillMode = FILL_MODE_SOLID; RasterizerDesc.CullMode = CULL_MODE_NONE; RasterizerDesc.FrontCounterClockwise = True; RasterizerDesc.ScissorEnable = True; //RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded) RasterizerDesc.AntialiasedLineEnable = False; When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO: m_pDev->CreatePipelineState(PSODesc, &m_pPSO); Binding Shader Resources Shader resource binding in Diligent Engine is based on grouping variables in 3 different groups (static, mutable and dynamic). Static variables are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers. They are bound directly to the shader object: PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV ); Mutable and dynamic variables are bound via a new object called Shader Resource Binding (SRB), which is created by the pipeline state: m_pPSO->CreateShaderResourceBinding(&m_pSRB); Dynamic and mutable resources are then bound through SRB object: m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV); m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB); The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding. Dynamic resources can be set multiple times. It is important to properly set the variable type as this may affect performance. Static variables are generally most efficient, followed by mutable. Dynamic variables are most expensive from performance point of view. This post explains shader resource binding in more details. Setting the Pipeline State and Invoking Draw Command Before any draw command can be invoked, all required vertex and index buffers as well as the pipeline state should be bound to the device context: // Clear render target const float zero[4] = {0, 0, 0, 0}; m_pContext->ClearRenderTarget(nullptr, zero); // Set vertex and index buffers IBuffer *buffer[] = {m_pVertexBuffer}; Uint32 offsets[] = {0}; Uint32 strides[] = {sizeof(MyVertex)}; m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET); m_pContext->SetIndexBuffer(m_pIndexBuffer, 0); m_pContext->SetPipelineState(m_pPSO); Also, all shader resources must be committed to the device context: m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES); When all required states and resources are bound, IDeviceContext::Draw() can be used to execute draw command or IDeviceContext::DispatchCompute() can be used to execute compute command. Note that for a draw command, graphics pipeline must be bound, and for dispatch command, compute pipeline must be bound. Draw() takes DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, if draw call is indexed or not, if draw call is instanced or not, if draw call is indirect or not, etc.). For example: DrawAttribs attrs; attrs.IsIndexed = true; attrs.IndexType = VT_UINT16; attrs.NumIndices = 36; attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST; pContext->Draw(attrs); Tutorials and Samples The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage. Tutorial 01 - Hello Triangle This tutorial shows how to render a simple triangle using Diligent Engine API. Tutorial 02 - Cube This tutorial demonstrates how to render an actual 3D object, a cube. It shows how to load shaders from files, create and use vertex, index and uniform buffers. Tutorial 03 - Texturing This tutorial demonstrates how to apply a texture to a 3D object. It shows how to load a texture from file, create shader resource binding object and how to sample a texture in the shader. Tutorial 04 - Instancing This tutorial demonstrates how to use instancing to render multiple copies of one object using unique transformation matrix for every copy. Tutorial 05 - Texture Array This tutorial demonstrates how to combine instancing with texture arrays to use unique texture for every instance. Tutorial 06 - Multithreading This tutorial shows how to generate command lists in parallel from multiple threads. Tutorial 07 - Geometry Shader This tutorial shows how to use geometry shader to render smooth wireframe. Tutorial 08 - Tessellation This tutorial shows how to use hardware tessellation to implement simple adaptive terrain rendering algorithm. Tutorial_09 - Quads This tutorial shows how to render multiple 2D quads, frequently swithcing textures and blend modes. AntTweakBar sample demonstrates how to use AntTweakBar library to create simple user interface. Atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. The repository includes Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets compare performance of D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures. Integration with Unity Diligent Engine supports integration with Unity through Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After Diligent Engine device and context are created, they can be used us usual to create resources and issue rendering commands. GhostCubePlugin shows an example how Diligent Engine can be used to render a ghost cube only visible as a reflection in a mirror.
  4. The code to create d3d11 structured buffer : D3D11_BUFFER_DESC desc; desc.ByteWidth = _count * _structSize; if (_type == StructType::Struct) { desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED; } else { desc.MiscFlags = 0; } desc.StructureByteStride = _structSize; desc.BindFlags = D3D11_BIND_SHADER_RESOURCE; if (_dynamic) { desc.Usage = D3D11_USAGE_DEFAULT; desc.CPUAccessFlags = 0; } else { desc.Usage = D3D11_USAGE_IMMUTABLE; desc.CPUAccessFlags = 0; } if (FAILED(getDevice()->CreateBuffer(&desc, NULL, &_object))) { return false; } D3D11_SHADER_RESOURCE_VIEW_DESC resourceViewDesc; memset(&resourceViewDesc, 0, sizeof(resourceViewDesc)); if(_type == StructType::Float) resourceViewDesc.Format = DXGI_FORMAT_R32_FLOAT; else if (_type == StructType::Float2) resourceViewDesc.Format = DXGI_FORMAT_R32G32_FLOAT; else if (_type == StructType::Float3) resourceViewDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT; else if (_type == StructType::Float4) resourceViewDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; else resourceViewDesc.Format = DXGI_FORMAT_UNKNOWN; resourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER; resourceViewDesc.Buffer.ElementOffset = 0; resourceViewDesc.Buffer.NumElements = _count; ID3D11Resource* viewObject = _object; auto hr = getDevice()->CreateShaderResourceView(viewObject, &resourceViewDesc, &_shaderResourceView); if (FAILED(hr)) { return false; } I've created a float type structured buffer. The source data is a float array, I update the buffer from array[startIndex] to array[endIndex - 1]. The code to update buffer : bool setData(int startIndex, int endIndex, const void* data) { if (!data) return false; D3D11_BOX destBox; destBox.left = startIndex * _structSize; destBox.right = endIndex * _structSize; destBox.top = 0; destBox.bottom = 1; destBox.front = 0; destBox.back = 1; getContext()->UpdateSubresource(_object, 0, &destBox, data, _count * _structSize, 0); } The final result is that the data is not smooth, then I change the code of setData destBox.left = startIndex ; destBox.right = endIndex ; Then, the result looks smooth, but with some data missed !!!! Don't know why..
  5. Is it reasonable to use Direct2D for some small 2D games? I never did too much of Direct2D stuff, mostly I used it for displaying text/2D GUI for Direct3D engine etc. but I never tried doing game in it. Is it better to use Direct2D and sprites or would you prefer to go with D3D but with 2D shaders // is D2D not meant for games, no matter how big or small, at all?
  6. Hey, This is a very strange problem... I've got a computation shader that's supposed to fill 3d texture (voxels in metavoxel) with color, based on particles that cover given metavoxel. And this is the code: static const int VOXEL_WIDTH_IN_METAVOXEL = 32; static const int VOXEL_SIZE = 1; static const float VOXEL_HALF_DIAGONAL_LENGTH_SQUARED = (VOXEL_SIZE * VOXEL_SIZE + 2.0f * VOXEL_SIZE * VOXEL_SIZE) / 4.0f; static const int MAX_PARTICLES_IN_METAVOXEL = 32; struct Particle { float3 position; float radius; }; cbuffer OccupiedMetavData : register(b6) { float3 occupiedMetavWorldPos; int numberOfParticles; Particle particlesBin[MAX_PARTICLES_IN_METAVOXEL]; }; RWTexture3D<float4> metavoxelTexUav : register(u5); [numthreads(VOXEL_WIDTH_IN_METAVOXEL, VOXEL_WIDTH_IN_METAVOXEL, 1)] void main(uint2 groupThreadId : SV_GroupThreadID) { float4 voxelColumnData[VOXEL_WIDTH_IN_METAVOXEL]; float particleRadiusSquared; float3 distVec; for (int i = 0; i < VOXEL_WIDTH_IN_METAVOXEL; i++) voxelColumnData[i] = float4(0.0f, 0.0f, 1.0f, 0.0f); for (int k = 0; k < numberOfParticles; k++) { particleRadiusSquared = particlesBin[k].radius * particlesBin[k].radius + VOXEL_HALF_DIAGONAL_LENGTH_SQUARED; distVec.xy = (occupiedMetavWorldPos.xy + groupThreadId * VOXEL_SIZE) - particlesBin[k].position.xy; for (int i = 0; i < VOXEL_WIDTH_IN_METAVOXEL; i++) { distVec.z = (occupiedMetavWorldPos.z + i * VOXEL_SIZE) - particlesBin[k].position.z; if (dot(distVec, distVec) < particleRadiusSquared) { //given voxel is covered by particle voxelColumnData[i] += float4(0.0f, 1.0f, 0.0f, 1.0f); } } } for (int i = 0; i < VOXEL_WIDTH_IN_METAVOXEL; i++) metavoxelTexUav[uint3(groupThreadId.x, groupThreadId.y, i)] = clamp(voxelColumnData[i], 0.0, 1.0); } And it works well in debug mode. This is the correct looking result obtained after raymarching one metavoxel from camera: As you can see, the particle only covers the top right corner of the metavoxel. However, in release mode The result obtained looks like this: This looks like the upper half of the metavoxel was not filled at all even with the ambient blue-ish color in the first "for" loop... I nailed it down towards one line of code in the above shader. When I replace "numberOfParticles" in the "for" loop with constant value such as 1 (which is uploaded to GPU anyway) the result finally looks the same as in debug mode. This is the shader compile method from Hieroglyph Rendering Engine (awesome engine) and it looks fine for me but maybe something's wrong? My only modification was adding include functionality ID3DBlob* ShaderFactoryDX11::GenerateShader( ShaderType type, std::wstring& filename, std::wstring& function, std::wstring& model, const D3D_SHADER_MACRO* pDefines, bool enablelogging ) { HRESULT hr = S_OK; std::wstringstream message; ID3DBlob* pCompiledShader = nullptr; ID3DBlob* pErrorMessages = nullptr; char AsciiFunction[1024]; char AsciiModel[1024]; WideCharToMultiByte(CP_ACP, 0, function.c_str(), -1, AsciiFunction, 1024, NULL, NULL); WideCharToMultiByte(CP_ACP, 0, model.c_str(), -1, AsciiModel, 1024, NULL, NULL); // TODO: The compilation of shaders has to skip the warnings as errors // for the moment, since the new FXC.exe compiler in VS2012 is // apparently more strict than before. UINT flags = D3DCOMPILE_PACK_MATRIX_ROW_MAJOR; #ifdef _DEBUG flags |= D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION; // | D3DCOMPILE_WARNINGS_ARE_ERRORS; #endif // Get the current path to the shader folders, and add the filename to it. FileSystem fs; std::wstring filepath = fs.GetShaderFolder() + filename; // Load the file into memory FileLoader SourceFile; if ( !SourceFile.Open( filepath ) ) { message << "Unable to load shader from file: " << filepath; EventManager::Get()->ProcessEvent( EvtErrorMessagePtr( new EvtErrorMessage( message.str() ) ) ); return( nullptr ); } LPCSTR s; if ( FAILED( hr = D3DCompile( SourceFile.GetDataPtr(), SourceFile.GetDataSize(), GlyphString::wstringToString(filepath).c_str(), //!!!! - this must be pointing to a concrete shader file!!! - only directory would work as well but in that case graphics debugger crashes when debugging shaders pDefines, D3D_COMPILE_STANDARD_FILE_INCLUDE, AsciiFunction, AsciiModel, flags, 0, &pCompiledShader, &pErrorMessages ) ) ) //if ( FAILED( hr = D3DX11CompileFromFile( // filename.c_str(), // pDefines, // 0, // AsciiFunction, // AsciiModel, // flags, // 0,//UINT Flags2, // 0, // &pCompiledShader, // &pErrorMessages, // &hr // ) ) ) { message << L"Error compiling shader program: " << filepath << std::endl << std::endl; message << L"The following error was reported:" << std::endl; if ( ( enablelogging ) && ( pErrorMessages != nullptr ) ) { LPVOID pCompileErrors = pErrorMessages->GetBufferPointer(); const char* pMessage = (const char*)pCompileErrors; message << GlyphString::ToUnicode( std::string( pMessage ) ); Log::Get().Write( message.str() ); } EventManager::Get()->ProcessEvent( EvtErrorMessagePtr( new EvtErrorMessage( message.str() ) ) ); SAFE_RELEASE( pCompiledShader ); SAFE_RELEASE( pErrorMessages ); return( nullptr ); } SAFE_RELEASE( pErrorMessages ); return( pCompiledShader ); } Could the shader crash for some reason in mid way through execution? The question also is what could compiler possibly do to the shader code in release mode that suddenly "numberOfParticles" becomes invalid and how to fix this issue? Or maybe it's even sth deeper which results in numberOfParticles being invalid? I checked my constant buffer values with Graphics debugger in debug and release modes and both had correct value for numberOfParticles set to 1...
  7. Hi all, I have been spending so much time trying to replicate a basic effect similar to these: Glowing line or Tron lines or More tron lines I've tried to use blurring using the shrink, horizontal and vertical passes, expand technique but the results of my implementation are crappy. I simply want my custom, non-textured 2d polygons to have a glow around them, in a size and color I can define. For example, I want to draw a blue rectangle using 2 triangles and have a glow around the shape. I am not sure how to best achieve this and what technique to use. I am prototyping an idea so performance is not an issue, I just want to get the pixels properly on the screen and I just can't figure out how to do it! It seems this effect has been done to death by now and should be easy, but I can't wrap my head around it, I'm not good at shaders at all I'm afraid. Are the Rastertek blur or glow tutorials the way to go? I'm using DirectX 11. Any tips or suggestions would be greatly appreciated!
  8. Hi, everybody. I touched on this topic in connection with the recent transition to development on Unreal Engine exclusively in C++ Everyone knows that very little information about the documentation of the engine, I spent a lot of time in finding information about it. Rummaged through GitHub to find worthy examples of implementation, but came to the fact that the best way to learn the Engine is to look for answers in the source code. Want to share with you that I dug up and perhaps someone will help me with my problem. Unreal Engine 4 Rendering, Possible to use my own pure HLSL and GLSL shader code, Jason Zink, Matt Pettineo, Jack Hoxley - Practical renderind with DirectX 11 - 2011.pdf In General I want to understand how to operationalize the concept of context FGlobalShader, UPrimitiveComponent and using FPrimitiveSceneProxy definition FVertexFactory, which implemented the connection of the Shader through the material FMaterialShader and transfer the parameters to it. I have studied the source code of these classes and understand that through the class of materials are transmitted a lot of parameters. But I do not want at least at the first stage to use the parameters that I do not fully understand, but gradually. Create a clean class with the ability to transfer the parameters I need in it, but that it fits into the concept of the pipeline Unreal Engine. Can someone faced it and agree to share a small piece of code for example. Thank you in advance!
  9. Hi everyone, I think my question boils down to "How do i feed shaders?" I was wondering what are the good strategies to store mesh transformation data [World matrices] to then be used in the shader for transforming vertices (performance being the priority ). And i'm talking about a game scenario where there are quite a lot of both moving entities, and static ones, that aren't repeated enough to be worth instanced drawing. So far i've only tried these naive methods : DX11 : - Store transforms of ALL entities in a constant buffer ( and give the entity an index to the buffer for later modification ) - Or store ONE transform in a constant buffer, and change it to the entity's transform before each drawcall. Vulkan : - Use Push Constants to send entity's transform to the shader before each drawcall, and maybe use a separate Device_local uniform buffer for static entities? Same question applies to lights. Any suggestions?
  10. HI, Needing some advice on a feature of the pixel shader that can be leveraged when making a shadow pass. Currently, my shadows work fine, everything is quite happily working....I did though ignore one aspect and I should of fixed it that point in time, for things such as billboard particles I'm no rendering shadows. The reason at the time was that the entire billboard (including what would of been the transparent area) is being written into the depth buffer. I remember seeing on the forum and answer to this problem , I believe it was to attach a pixel shader. And for pixels that weren't rejected on the depth test I believe I need to set the return value to null? @Hodgman - I know you were involved in the thread, you might be able to throw some light on this :) I believe if the texture sample is transparent then I should call discard? I've trawled the web site for the answer (and it's in here, i know it, i've seen it), Just hoping for a quick answer on something that is a little bit obscure. Time for me to go back and fix this little issue. Thanks all
  11. How to unpack the frame buffer when packing by Compact YCoCg Frame Buffer?
  12. Hello everyone, I'm looking for some advice since I have some issues with my textures for my mouse pointer and I'm not sure where to start to look. I have checked everything that I know off and now I'm in need of advice on what to look for in my code when I try to fix it. I have a planet that is rendered, I have a UI that is rendered and I also have a mouse pointer that is rendered. First the planet is rendered, then the UI and then the mouse pointer last. When the planet is done rendering I turn off Z-Buffer and enable Alpha Blending while I render the UI and the Mouse Pointer. In the Mouse Pointers Pixel Shader I look for black color and if that is the case I blend it. But what seems to happen is that it also blends part of the texture that isn't supose to be blended. I'm going to provide some screenshot of the effect. In the first image you can see that the mouse pointer changes color to a more white one when behing infront of the planet. The correct color is the one that is displayed when it's not infron of the planet. The second thing I find weird is that the mouse pointer is behind the ui text even tho it is rendered after. I also tried switching them around and it makes no difference. Also the UI doesn't have the same issues when being above the planet, it's color is displayed as it should. Here comes the Pixel Shader code if that helps anyone get a better grip of the issue: float4 color; color = shaderTexture.Sample(sampleType, input.tex); if(color.b == 0.0f && color.r == 0.0f && color.g == 0.0f) { color.a = 0.0f; } else { color.a = 1.0f; } return color; The UI uses almost the same code, but only checks the r channel of the color but I'm using all 3 channels in the Mouse Pointer due to colors might be abit more off. Should be that if the pixel is black it's should be blended. And it does work, but it's just that somehow it also does something with the parts that shouldn't be blended. Right now I'm leaning towards there being something in the Pixel Shader since I can set all pixels to white and it behaves as it should and creates a white box for me. Any pointers of what kind of issues I'm looking at here and what to search for to find a solution will be appreciated alot Best Regards and Thanks in Advance Toastmastern
  13. Hey, I can't find this information anywhere on the web and I'm wondering about specific optimization... Let's say I have hundreds of 3D textures which I need to process separately in compute shader. Each invocation needs different data in constant buffer BUT many of the 3d textures don't need to update their CB contents every frame. Would it be better to create just one CB resource, bind just once at startup and in loop map the data for each consecutive shader invocation or would it be better to create like hundreds of separate CB resources, map them only when needed and just bind appropriate CB before each shader invocation? This depends on how exacly are those resources managed internally in DirectX and what does binding actually do... I would be very grateful if somebody shared their experience!
  14. Hi, I'm trying to do a comparision with DirectInput GUID e.g GUID_XAxis, GUID_YAxis from a value I get from GetProperty eg DIPROPRANGE propRange; DIJoystick->GetProperty (DIPROP_RANGE, &propRange.diph); // This will crash if (GUID_XAxis == MAKEDIPROP (propRange.diph.dwObj)) ; How should I be comparing the GUID from GetProperty?
  15. Hi guys, I'm trying to learn this stuff but running into some problems 😕 I've compiled my .hlsl into a header file which contains the global variable with the precompiled shader data: //... // Approximately 83 instruction slots used #endif const BYTE g_vs[] = { 68, 88, 66, 67, 143, 82, 13, 236, 152, 133, 219, 113, 173, 135, 18, 87, 122, 208, 124, 76, 1, 0, 0, 0, 16, 76, 0, 0, 6, 0, //.... And now following the "Compiling at build time to header files" example at this msdn link , I've included the header files in my main.cpp and I'm trying to create the vertex shader like this: hr = g_d3dDevice->CreateVertexShader(g_vs, sizeof(g_vs), nullptr, &g_d3dVertexShader); if (FAILED(hr)) { return -1; } and this is failing, entering the if and returing -1. Can someone point out what I'm doing wrong? 😕
  16. I have a problem with SSAO. On left hand black area. Code shader: Texture2D<uint> texGBufferNormal : register(t0); Texture2D<float> texGBufferDepth : register(t1); Texture2D<float4> texSSAONoise : register(t2); float3 GetUV(float3 position) { float4 vp = mul(float4(position, 1.0), ViewProject); vp.xy = float2(0.5, 0.5) + float2(0.5, -0.5) * vp.xy / vp.w; return float3(vp.xy, vp.z / vp.w); } float3 GetNormal(in Texture2D<uint> texNormal, in int3 coord) { return normalize(2.0 * UnpackNormalSphermap(texNormal.Load(coord)) - 1.0); } float3 GetPosition(in Texture2D<float> texDepth, in int3 coord) { float4 position = 1.0; float2 size; texDepth.GetDimensions(size.x, size.y); position.x = 2.0 * (coord.x / size.x) - 1.0; position.y = -(2.0 * (coord.y / size.y) - 1.0); position.z = texDepth.Load(coord); position = mul(position, ViewProjectInverse); position /= position.w; return position.xyz; } float3 GetPosition(in float2 coord, float depth) { float4 position = 1.0; position.x = 2.0 * coord.x - 1.0; position.y = -(2.0 * coord.y - 1.0); position.z = depth; position = mul(position, ViewProjectInverse); position /= position.w; return position.xyz; } float DepthInvSqrt(float nonLinearDepth) { return 1 / sqrt(1.0 - nonLinearDepth); } float GetDepth(in Texture2D<float> texDepth, float2 uv) { return texGBufferDepth.Sample(samplerPoint, uv); } float GetDepth(in Texture2D<float> texDepth, int3 screenPos) { return texGBufferDepth.Load(screenPos); } float CalculateOcclusion(in float3 position, in float3 direction, in float radius, in float pixelDepth) { float3 uv = GetUV(position + radius * direction); float d1 = DepthInvSqrt(GetDepth(texGBufferDepth, uv.xy)); float d2 = DepthInvSqrt(uv.z); return step(d1 - d2, 0) * min(1.0, radius / abs(d2 - pixelDepth)); } float GetRNDTexFactor(float2 texSize) { float width; float height; texGBufferDepth.GetDimensions(width, height); return float2(width, height) / texSize; } float main(FullScreenPSIn input) : SV_TARGET0 { int3 screenPos = int3(input.Position.xy, 0); float depth = DepthInvSqrt(GetDepth(texGBufferDepth, screenPos)); float3 normal = GetNormal(texGBufferNormal, screenPos); float3 position = GetPosition(texGBufferDepth, screenPos) + normal * SSAO_NORMAL_BIAS; float3 random = normalize(2.0 * texSSAONoise.Sample(samplerNoise, input.Texcoord * GetRNDTexFactor(SSAO_RND_TEX_SIZE)).rgb - 1.0); float SSAO = 0; [unroll] for (int index = 0; index < SSAO_KERNEL_SIZE; index++) { float3 dir = reflect(SamplesKernel[index].xyz, random); SSAO += CalculateOcclusion(position, dir * sign(dot(dir, normal)), SSAO_RADIUS, depth); } return 1.0 - SSAO / SSAO_KERNEL_SIZE; }
  17. I've been following this tutorial -> https://www.3dgep.com/introduction-to-directx-11/#The_Main_Function , did all the steps,and I ended up with the main.cpp you can see below. The problem is the call at line 516 g_d3dDeviceContext->UpdateSubresource(g_d3dConstantBuffers[CB_Frame], 0, nullptr, &g_ViewMatrix, 0, 0); which is crashing the program, and the very odd thing is that the first time trough it works fine, it crash the app the second time it is called... Can someone help me understand why? 😕 I have no idea... #include <Direct3D_11PCH.h> //Shaders using namespace DirectX; // Globals //Window const unsigned g_WindowWidth = 1024; const unsigned g_WindowHeight = 768; const char* g_WindowClassName = "DirectXWindowClass"; const char* g_WindowName = "DirectX 11"; HWND g_WinHnd = nullptr; const bool g_EnableVSync = true; //Device and SwapChain ID3D11Device* g_d3dDevice = nullptr; ID3D11DeviceContext* g_d3dDeviceContext = nullptr; IDXGISwapChain* g_d3dSwapChain = nullptr; //RenderTarget view ID3D11RenderTargetView* g_d3dRenderTargerView = nullptr; //DepthStencil view ID3D11DepthStencilView* g_d3dDepthStencilView = nullptr; //Depth Buffer Texture ID3D11Texture2D* g_d3dDepthStencilBuffer = nullptr; // Define the functionality of the depth/stencil stages ID3D11DepthStencilState* g_d3dDepthStencilState = nullptr; // Define the functionality of the rasterizer stage ID3D11RasterizerState* g_d3dRasterizerState = nullptr; D3D11_VIEWPORT g_Viewport{}; //Vertex Buffer data ID3D11InputLayout* g_d3dInputLayout = nullptr; ID3D11Buffer* g_d3dVertexBuffer = nullptr; ID3D11Buffer* g_d3dIndexBuffer = nullptr; //Shader Data ID3D11VertexShader* g_d3dVertexShader = nullptr; ID3D11PixelShader* g_d3dPixelShader = nullptr; //Shader Resources enum ConstantBuffer { CB_Application, CB_Frame, CB_Object, NumConstantBuffers }; ID3D11Buffer* g_d3dConstantBuffers[ConstantBuffer::NumConstantBuffers]; //Demo parameter XMMATRIX g_WorldMatrix; XMMATRIX g_ViewMatrix; XMMATRIX g_ProjectionMatrix; // Vertex data for a colored cube. struct VertexPosColor { XMFLOAT3 Position; XMFLOAT3 Color; }; VertexPosColor g_Vertices[8] = { { XMFLOAT3(-1.0f, -1.0f, -1.0f), XMFLOAT3(0.0f, 0.0f, 0.0f) }, // 0 { XMFLOAT3(-1.0f, 1.0f, -1.0f), XMFLOAT3(0.0f, 1.0f, 0.0f) }, // 1 { XMFLOAT3(1.0f, 1.0f, -1.0f), XMFLOAT3(1.0f, 1.0f, 0.0f) }, // 2 { XMFLOAT3(1.0f, -1.0f, -1.0f), XMFLOAT3(1.0f, 0.0f, 0.0f) }, // 3 { XMFLOAT3(-1.0f, -1.0f, 1.0f), XMFLOAT3(0.0f, 0.0f, 1.0f) }, // 4 { XMFLOAT3(-1.0f, 1.0f, 1.0f), XMFLOAT3(0.0f, 1.0f, 1.0f) }, // 5 { XMFLOAT3(1.0f, 1.0f, 1.0f), XMFLOAT3(1.0f, 1.0f, 1.0f) }, // 6 { XMFLOAT3(1.0f, -1.0f, 1.0f), XMFLOAT3(1.0f, 0.0f, 1.0f) } // 7 }; WORD g_Indicies[36] = { 0, 1, 2, 0, 2, 3, 4, 6, 5, 4, 7, 6, 4, 5, 1, 4, 1, 0, 3, 2, 6, 3, 6, 7, 1, 5, 6, 1, 6, 2, 4, 0, 3, 4, 3, 7 }; //Forward Declaration LRESULT CALLBACK WindowProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam); bool LoadContent(); int Run(); void Update(float deltaTime); void Clear(const FLOAT clearColor[4], FLOAT clearDepth, UINT8 clearStencil); void Present(bool vSync); void Render(); void CleanUp(); int InitApplication(HINSTANCE hInstance, int cmdShow); int InitDirectX(HINSTANCE hInstance, BOOL vsync); int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR cmd, int cmdShow) { UNREFERENCED_PARAMETER(hPrevInstance); UNREFERENCED_PARAMETER(cmd); // Check for DirectX Math library support. if (!XMVerifyCPUSupport()) { MessageBox(nullptr, TEXT("Failed to verify DirectX Math library support."), nullptr, MB_OK); return -1; } if (InitApplication(hInstance, cmdShow) != 0) { MessageBox(nullptr, TEXT("Failed to create applicaiton window."), nullptr, MB_OK); return -1; } if (InitDirectX(hInstance, g_EnableVSync) != 0) { MessageBox(nullptr, TEXT("Failed to initialize DirectX."), nullptr, MB_OK); CleanUp(); return -1; } if (!LoadContent()) { MessageBox(nullptr, TEXT("Failed to load content."), nullptr, MB_OK); CleanUp(); return -1; } int returnCode = Run(); CleanUp(); return returnCode; } int Run() { MSG msg{}; static DWORD previousTime = timeGetTime(); static const float targetFramerate = 30.0f; static const float maxTimeStep = 1.0f / targetFramerate; while (msg.message != WM_QUIT) { if (PeekMessage(&msg, 0, 0, 0, PM_REMOVE)) { TranslateMessage(&msg); DispatchMessage(&msg); } else { DWORD currentTime = timeGetTime(); float deltaTime = (currentTime - previousTime) / 1000.0f; previousTime = currentTime; deltaTime = std::min<float>(deltaTime, maxTimeStep); Update(deltaTime); Render(); } } return static_cast<int>(msg.wParam); } LRESULT CALLBACK WindowProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { PAINTSTRUCT paintstruct; HDC hDC; switch (msg) { case WM_PAINT: { hDC = BeginPaint(hwnd, &paintstruct); EndPaint(hwnd, &paintstruct); }break; case WM_DESTROY: { PostQuitMessage(0); }break; default: return DefWindowProc(hwnd, msg, wParam, lParam); break; } return 0; } int InitApplication(HINSTANCE hInstance, int cmdShow) { //Register Window class WNDCLASSEX mainWindow{}; mainWindow.cbSize = sizeof(WNDCLASSEX); mainWindow.style = CS_HREDRAW | CS_VREDRAW; mainWindow.lpfnWndProc = &WindowProc; mainWindow.hInstance = hInstance; mainWindow.hCursor = LoadCursor(NULL, IDC_ARROW); mainWindow.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1); mainWindow.lpszMenuName = nullptr; mainWindow.lpszClassName = g_WindowClassName; if (!RegisterClassEx(&mainWindow)) { return -1; } RECT client{ 0,0,g_WindowWidth,g_WindowHeight }; AdjustWindowRect(&client, WS_OVERLAPPEDWINDOW, false); // Create Window g_WinHnd = CreateWindowEx(NULL, g_WindowClassName, g_WindowName, WS_OVERLAPPEDWINDOW | WS_VISIBLE, CW_USEDEFAULT, CW_USEDEFAULT, client.right - client.left, client.bottom - client.top, nullptr, nullptr, hInstance, nullptr); if (!g_WinHnd) { return -1; } UpdateWindow(g_WinHnd); return 0; } int InitDirectX(HINSTANCE hInstance, BOOL vsync) { assert(g_WinHnd != nullptr); RECT client{}; GetClientRect(g_WinHnd, &client); unsigned int clientWidth = client.right - client.left; unsigned int clientHeight = client.bottom - client.top; //Direct3D Initialization HRESULT hr{}; //SwapChainDesc DXGI_RATIONAL refreshRate = vsync ? DXGI_RATIONAL{ 1, 60 } : DXGI_RATIONAL{ 0, 1 }; DXGI_SWAP_CHAIN_DESC swapChainDesc{}; swapChainDesc.BufferDesc.Width = clientWidth; swapChainDesc.BufferDesc.Height = clientHeight; swapChainDesc.BufferDesc.RefreshRate = refreshRate; swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; swapChainDesc.BufferDesc.Scaling = DXGI_MODE_SCALING_CENTERED; swapChainDesc.SampleDesc.Count = 1; swapChainDesc.SampleDesc.Quality = 0; swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapChainDesc.BufferCount = 1; swapChainDesc.OutputWindow = g_WinHnd; swapChainDesc.Windowed = true; swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD; UINT createDeviceFlags{}; #if _DEBUG createDeviceFlags = D3D11_CREATE_DEVICE_DEBUG; #endif //Feature levels const D3D_FEATURE_LEVEL features[]{ D3D_FEATURE_LEVEL_11_0 }; D3D_FEATURE_LEVEL featureLevel; hr = D3D11CreateDeviceAndSwapChain( nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, createDeviceFlags, features, _countof(features), D3D11_SDK_VERSION, &swapChainDesc, &g_d3dSwapChain, &g_d3dDevice, &featureLevel, &g_d3dDeviceContext ); if (FAILED(hr)) { return -1; } //Render Target View ID3D11Texture2D* backBuffer; hr = g_d3dSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer)); if (FAILED(hr)) { return -1; } hr = g_d3dDevice->CreateRenderTargetView(backBuffer, nullptr, &g_d3dRenderTargerView); if (FAILED(hr)) { return -1; } SafeRelease(backBuffer); //Depth Stencil View D3D11_TEXTURE2D_DESC depthStencilBufferDesc{}; depthStencilBufferDesc.Width = clientWidth; depthStencilBufferDesc.Height = clientHeight; depthStencilBufferDesc.MipLevels = 1; depthStencilBufferDesc.ArraySize = 1; depthStencilBufferDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; depthStencilBufferDesc.SampleDesc.Count = 1; depthStencilBufferDesc.SampleDesc.Quality = 0; depthStencilBufferDesc.Usage = D3D11_USAGE_DEFAULT; depthStencilBufferDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL; hr = g_d3dDevice->CreateTexture2D(&depthStencilBufferDesc, nullptr, &g_d3dDepthStencilBuffer); if (FAILED(hr)) { return -1; } hr = g_d3dDevice->CreateDepthStencilView(g_d3dDepthStencilBuffer, nullptr, &g_d3dDepthStencilView); if (FAILED(hr)) { return -1; } //Set States D3D11_DEPTH_STENCIL_DESC depthStencilStateDesc{}; depthStencilStateDesc.DepthEnable = true; depthStencilStateDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL; depthStencilStateDesc.DepthFunc = D3D11_COMPARISON_LESS; depthStencilStateDesc.StencilEnable = false; hr = g_d3dDevice->CreateDepthStencilState(&depthStencilStateDesc, &g_d3dDepthStencilState); if (FAILED(hr)) { return -1; } D3D11_RASTERIZER_DESC rasterizerStateDesc{}; rasterizerStateDesc.FillMode = D3D11_FILL_SOLID; rasterizerStateDesc.CullMode = D3D11_CULL_BACK; rasterizerStateDesc.FrontCounterClockwise = FALSE; rasterizerStateDesc.DepthClipEnable = TRUE; rasterizerStateDesc.ScissorEnable = FALSE;; rasterizerStateDesc.MultisampleEnable = FALSE; hr = g_d3dDevice->CreateRasterizerState(&rasterizerStateDesc, &g_d3dRasterizerState); if (FAILED(hr)) { return -1; } //Set Viewport g_Viewport.Width = static_cast<float>(clientWidth); g_Viewport.Height = static_cast<float>(clientHeight); g_Viewport.TopLeftX = 0.0f; g_Viewport.TopLeftY = 0.0f; g_Viewport.MinDepth = 0.0f; g_Viewport.MaxDepth = 1.0f; return 0; } bool LoadContent() { //Load Shaders HRESULT hr; assert(g_d3dDevice); //VS ID3DBlob* vsBlob = nullptr; D3DReadFileToBlob(L"../Shaders/SimpleVertexShader.cso", &vsBlob); assert(vsBlob); hr = g_d3dDevice->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), nullptr, &g_d3dVertexShader); if (FAILED(hr)) { SafeRelease(vsBlob); return false; } //Create VS Input Layout D3D11_INPUT_ELEMENT_DESC vertexLayoutDesc[] = { { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, offsetof(VertexPosColor, Position), D3D11_INPUT_PER_VERTEX_DATA ,0 }, { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, offsetof(VertexPosColor, Color), D3D11_INPUT_PER_VERTEX_DATA ,0 } }; hr = g_d3dDevice->CreateInputLayout(vertexLayoutDesc, _countof(vertexLayoutDesc), vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), &g_d3dInputLayout); if (FAILED(hr)) { SafeRelease(vsBlob); return false; } SafeRelease(vsBlob); //PS ID3DBlob* psBlob = nullptr; D3DReadFileToBlob(L"../Shaders/SimplePixelShader.cso", &psBlob); assert(psBlob); hr = g_d3dDevice->CreatePixelShader(psBlob->GetBufferPointer(), psBlob->GetBufferSize(), nullptr, &g_d3dPixelShader); SafeRelease(psBlob); if (FAILED(hr)) { return false; } //Load Vertex Buffer D3D11_BUFFER_DESC vertexBufferDesc{}; vertexBufferDesc.ByteWidth = sizeof(VertexPosColor) * _countof(g_Vertices); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; D3D11_SUBRESOURCE_DATA resourceData{}; resourceData.pSysMem = g_Vertices; hr = g_d3dDevice->CreateBuffer(&vertexBufferDesc, &resourceData, &g_d3dVertexBuffer); if (FAILED(hr)) { return false; } //Load Index Buffer D3D11_BUFFER_DESC indexBufferDesc{}; indexBufferDesc.ByteWidth = sizeof(WORD) * _countof(g_Indicies); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; resourceData.pSysMem = g_Indicies; hr = g_d3dDevice->CreateBuffer(&indexBufferDesc, &resourceData, &g_d3dIndexBuffer); if (FAILED(hr)) { return false; } //Load Constant Buffers D3D11_BUFFER_DESC cBufferDesc{}; cBufferDesc.ByteWidth = sizeof(XMMATRIX); cBufferDesc.Usage = D3D11_USAGE_DEFAULT; cBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER; for (size_t bufferID = 0; bufferID < NumConstantBuffers; bufferID++) { hr = g_d3dDevice->CreateBuffer(&cBufferDesc, nullptr, &g_d3dConstantBuffers[bufferID]); if (FAILED(hr)) { return false; } } //Setup Projection Matrix RECT client{}; GetClientRect(g_WinHnd, &client); float clientWidth = static_cast<float>(client.right - client.left); float clientHeight = static_cast<float>(client.bottom - client.top); g_ProjectionMatrix = DirectX::XMMatrixPerspectiveFovLH(XMConvertToRadians(45.0f), clientWidth / clientHeight, 0.1f, 100.0f); g_d3dDeviceContext->UpdateSubresource(g_d3dConstantBuffers[CB_Application], 0, nullptr, &g_ProjectionMatrix, 0, 0); return true; } void Update(float deltaTime) { XMVECTOR eyePosition = XMVectorSet(0, 0, -10, 1); XMVECTOR focusPoint = XMVectorSet(0, 0, 0, 1); XMVECTOR upDirection = XMVectorSet(0, 1, 0, 0); g_ViewMatrix = DirectX::XMMatrixLookAtLH(eyePosition, focusPoint, upDirection); g_d3dDeviceContext->UpdateSubresource(g_d3dConstantBuffers[CB_Frame], 0, nullptr, &g_ViewMatrix, 0, 0); static float angle = 0.0f; angle += 90.0f * deltaTime; XMVECTOR rotationAxis = XMVectorSet(0, 1, 1, 0); g_WorldMatrix = DirectX::XMMatrixRotationAxis(rotationAxis, XMConvertToRadians(angle)); g_d3dDeviceContext->UpdateSubresource(g_d3dConstantBuffers[CB_Object], 0, nullptr, &g_WorldMatrix, 0, 0); } void Clear(const FLOAT clearColor[4], FLOAT clearDepth, UINT8 clearStencil) { g_d3dDeviceContext->ClearRenderTargetView(g_d3dRenderTargerView, clearColor); g_d3dDeviceContext->ClearDepthStencilView(g_d3dDepthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, clearDepth, clearStencil); } void Present(bool vSync) { if (vSync) { g_d3dSwapChain->Present(1, 0); } else { g_d3dSwapChain->Present(0, 0); } } void Render() { assert(g_d3dDevice); assert(g_d3dDeviceContext); Clear(Colors::CornflowerBlue, 1.0f, 0); //IA const UINT vertexStride = sizeof(VertexPosColor); const UINT offset = 0; g_d3dDeviceContext->IASetVertexBuffers(0, 1, &g_d3dVertexBuffer, &vertexStride, &offset); g_d3dDeviceContext->IASetInputLayout(g_d3dInputLayout); g_d3dDeviceContext->IASetIndexBuffer(g_d3dIndexBuffer, DXGI_FORMAT_R16_UINT, 0); g_d3dDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); //VS g_d3dDeviceContext->VSSetShader(g_d3dVertexShader, nullptr, 0); g_d3dDeviceContext->VSGetConstantBuffers(0, NumConstantBuffers, g_d3dConstantBuffers); //RS g_d3dDeviceContext->RSSetState(g_d3dRasterizerState); g_d3dDeviceContext->RSSetViewports(1, &g_Viewport); //PS g_d3dDeviceContext->PSSetShader(g_d3dPixelShader, nullptr, 0); //OM g_d3dDeviceContext->OMSetRenderTargets(1, &g_d3dRenderTargerView, g_d3dDepthStencilView); g_d3dDeviceContext->OMSetDepthStencilState(g_d3dDepthStencilState, 1); //draw g_d3dDeviceContext->DrawIndexed(_countof(g_Indicies), 0, 0); Present(g_EnableVSync); } void CleanUp() { SafeRelease(g_d3dVertexShader); SafeRelease(g_d3dPixelShader); SafeRelease(g_d3dVertexBuffer); SafeRelease(g_d3dIndexBuffer); SafeRelease(g_d3dInputLayout); SafeRelease(g_d3dDepthStencilBuffer); for (size_t bufferID = 0; bufferID < NumConstantBuffers; bufferID++) { SafeRelease(g_d3dConstantBuffers[bufferID]); } SafeRelease(g_d3dDepthStencilState); SafeRelease(g_d3dRasterizerState); SafeRelease(g_d3dRenderTargerView); SafeRelease(g_d3dDepthStencilView); SafeRelease(g_d3dSwapChain); SafeRelease(g_d3dDeviceContext); SafeRelease(g_d3dDevice); }
  18. Hello everyone, After a few years of break from coding and my planet render game I'm giving it a go again from a different angle. What I'm struggling with now is that I have created a Frustum that works fine for now atleast, it does what it's supose to do alltho not perfect. But with the frustum came very low FPS, since what I'm doing right now just to see if the Frustum worked is to recreate the vertex buffer every frame that the camera detected movement. This is of course very costly and not the way to do it. Thats why I'm now trying to learn how to create a dynamic vertexbuffer instead and to map and unmap the vertexes, in the end my goal is to update only part of the vertexbuffer that is needed, but one step at a time ^^ So below is my code which I use to create the Dynamic buffer. The issue is that I want the size of the vertex buffer to be big enough to handle bigger vertex buffers then just mPlanetMesh.vertices.size() due to more vertices being added later when I start to do LOD and stuff, the first render isn't the biggest one I will need. vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC; vertexBufferDesc.ByteWidth = mPlanetMesh.vertices.size(); vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; vertexBufferDesc.MiscFlags = 0; vertexBufferDesc.StructureByteStride = 0; vertexData.pSysMem = &mPlanetMesh.vertices[0]; vertexData.SysMemPitch = 0; vertexData.SysMemSlicePitch = 0; result = device->CreateBuffer(&vertexBufferDesc, &vertexData, &mVertexBuffer); if (FAILED(result)) { return false; } What happens is that the result = device->CreateBuffer(&vertexBufferDesc, &vertexData, &mVertexBuffer); Makes it crash due to Access Violation. When I put the vertices.size() in it works without issues, but when I try to set it to like vertices.size() * 2 it crashes. I googled my eyes dry tonight but doesn't seem to find people with the same kind of issue, I've read that the vertex buffer can be bigger if needed. What I'm I doing wrong here? Best Regards and Thanks in advance Toastmastern
  19. Hi, I have a terrain engine where the terrain and water are on different grids. So I'm trying to render planar reflections of the terrain into the water grid. After reading some web pages and docs and also trying to learn from the RasterTek reflections demo and the small water bodies demo as well. What I do is as follows: 1. Create a Reflection view matrix - Technically I ONLY flip the camera position in the Y direction (Positive Y is up) and add to it 2 * waterLevel. Then I update the View matrix and I save that matrix for later. The code: void Camera::UpdateReflectionViewMatrix( float waterLevel ) { mBackupPosition = mPosition; mBackupLook = mLook; mPosition.y = -mPosition.y + 2.0f * waterLevel; //mLook.y = -mLook.y + 2.0f * waterLevel; UpdateViewMatrix(); mReflectionView = View(); } 2. I render the Terrain geometry to a 512x512 sized Render target by using the Reflection view matrix and an opposite culling (My Terrain is using front culling by nature so I'm using back culling for the Reflction render pass). Let me say that I checked with the Graphics debugger and the Reflection Render target looks "OK" at this stage (Picture attached). I don't know if the fact that the terrain is shown only at the top are of the texture is expected or not, but it seems OK. 3. Render the Reflection texture into the water using projective texturing - I hope this step is OK code wise. Basically I'm sending to the shader the WorldReflectionViewProj matrix that was created at step 1 in order to use it for the projective texture coordinates, I then convert the position in the DS (Water and terrain are drawn with Tessellation) to the projective tex coords using that WorldReflectionViewProj matrix, then I sample the reflection texture after setting up the coordinates in the PS. Here is the code: //Send the ReflectionWorldViewProj matrix to the shader: XMStoreFloat4x4(&mPerFrameCB.Data.ReflectionWorldViewProj, XMMatrixTranspose( ( mWorld * pCam->GetReflectedView() ) * mProj )); //Setting up the Projective tex coords in the DS: Output.projTexPosition = mul(float4(worldPos.xyz, 1), g_ReflectionWorldViewProj); //Setting up the coords in the PS and sampling the reflection texture: float2 projTexCoords; projTexCoords.x = input.projTexPosition.x / input.projTexPosition.w / 2.0 + 0.5; projTexCoords.y = -input.projTexPosition.y / input.projTexPosition.w / 2.0 + 0.5; projTexCoords += normal.xz * 0.025; float4 reflectionColor = gReflectionMap.SampleLevel(SamplerClampLinear, projTexCoords, 0); texColor += reflectionColor * 0.25; I'll add that when compiling the PS I'm getting a warning on those dividing by input.projTexPosition.w for a possible float division by 0, I tried to add some offset or some minimum to the dividing term but that still not solved my issue. Here is the problem itself. At relatively flat view angles I'm seeing correct reflections (Or at least so it seems), but as I pitch the camera down, I'm seeing those artifacts which I have no idea where are coming from. I'm culling the terrain in the reflection render pass when it's lower than water height (I have heightmaps for that). Any help will be appreciated because I don't know what is wrong or where else to look.
  20. Hi, I am looking for a usefull commandline based texture compression tool with the rights to be able to ship with my application. It should have following caps: Supports all major image format as source files (jpeg, png, tga, bmp) Export as DDS Compression Formats BC1, BC2, BC3, BC4, BC7 I am actually using the nvdxt tool from Nvidia, but it does not support BC4 (which I need for one-channel 8bit textures). Everything else which I found wasn't really useful. Any suggestions? Thx
  21. I have been trying to create a BlendState for my UI text sprites so that they are both alpha-blended (so you can see them) and invert the pixel they are rendered over (again, so you can see them). In order to get alpha blending you would need: SrcBlend = SRC_ALPHA DestBlend = INV_SRC_ALPHA and in order to have inverted colours you would need something like: SrcBlend = INV_DEST_COLOR DestBlend = INV_SRC_COLOR and you can't have both. So I have come to the conclusion that it's not possible; am I right?
  22. In traditional way, it needs 6 passes for a point light and many passes for cascaded shadow mapping to generate shadow maps. Recently I learnt a method that using a geometry shader to generate all the shadow maps in one pass.I specify a render target and a depth-stencil buffer which are both Texture2dArray in DirectX11.It looks much better than the traditional way I think.But after I implemented it, I found cascaded shadow mapping runs much slower than the traditional way.The fps slow down from 60 to 35.I don't know why.I guess may be I should do some culling or maybe the geometry shader is not efficient. I want to know the reason that I reduced the drawcalls from 8 to 1, but it runs slow down.Should I abandon this method or is there any way to optimize this method to run more efficiently than multi-pass rendering? Here is the gs code: [maxvertexcount(24)] void main( triangle DepthGsIn input[3] : SV_POSITION, inout TriangleStream< DepthPsIn > output ) { for (uint k = 0; k < 8; ++k) { DepthPsIn element; element.RTIndex = k; for (uint i = 0; i < 3; ++i) { float2 shadowSlopeBias = calculateShadowSlopeBias(input.normal, -g_cameras[k].world[1]); float shadowBias = shadowSlopeBias.y * g_cameras[k].shadowMapParameters.x + g_cameras[k].shadowMapParameters.y; element.position = input.position + shadowBias * g_cameras[k].world[1]; element.position = mul(element.position, g_cameras[k].viewProjection); element.depth = element.position.z / element.position.w; output.Append(element); } output.RestartStrip(); } }
  23. Hey, There are a few things which confuse me regarding DirectX 11 and HLSL shaders in general. I would be very grateful for your advice! 1. Let's take for example a scene which invokes 2 totally separate pipeline render passes interchangeably. I understand I need to bind correct shaders for each of the render pass and potentially blend/depth or rasterizer state but what about resources such as Constant Buffers, Shader Resource Views and Unordered Access Views? Assuming that the second render pass uses none of the resources used by the first pass, do I still need to unbind the resources and clean pipeline state after first pass? Or is it ok to leave pipeline with unbound garbage since anything I'd need to bind for second pass would overwrite contents in the appropriate register slots anyway? 2. Is it a good practice to assign register slots manually to all resources in HLSL? 3. I thought about assigning manually register slots for every distinct render pass up to the maximum slot limit if neccessary. For example in 1 render pass I invoke 3 CS's, 2 VS's and 2 PS's and for all resources used by those shaders I try to fill as many register slots as neccessary and potentially reuse many times the same slot in shaders sharing the same resource. I was wondering if there is any performance penalty or gain when I bind all of my needed resources at the start of render pass and never gonna have to do it again until next render pass? - this means potentially binding a lot of registers and having excessive number of bound resources for every shader that is run. 4. Is it a good practice to create a separate include file for every resource that occurs in >= 2 shader files or is it better to duplicate the declarations? In first case, the code is imo easier to maintain and edit but might be harder to read if there's too many includes. I've come up with a compromise between these 2 like this: create a separate include file for every CB that occurs in >= 2 shader files and a separate include file for every sampler I ever need to use. All other resources like srvs and uavs I prefer to duplicate in multiple shaders because they take much less space than CB for example... I'm not sure however if that's a good practice
  24. I want implement Particle system based on stream out structure to my bigger project. I saw few articles about that method and I build one particle. It works almost correctly but in geometry shader with stream out i cant get value of InitVel.z and age because it always is 0. If i change order of age(for example age is before Position) it works fine for age but 6th float of order is still 0. It looks like he push only 5 first positions. I had no idea what i do wrong because i try change almost all(create input layout for vertex, the same like entry SO Declaration, change number of strides for static 28, change it to 32 but in this case he draw chaotic so size of strides is probably good). I think it is problem with limits of NumEntry in declaration Entry but on site msdn i saw the limit for directx is D3D11_SO_STREAM_COUNT(4)*D3D11_SO_OUTPUT_COMPONENT_COUNT(128) not 5. Pls can you look in this code and give me the way or hope of implement it correctly?? Thanks a lot for help. Structure of particle struct Particle{ Particle() {} Particle(float x, float y, float z,float vx, float vy, float vz,float l /*UINT typ*/) :InitPos(x, y, z), InitVel(vx, vy, vz), Age(l) /*, Type(typ)*/{} XMFLOAT3 InitPos; XMFLOAT3 InitVel; float Age; //UINT Type; }; SO Entry D3D11_SO_DECLARATION_ENTRY PartlayoutSO[] = { { 0,"POSITION", 0, 0 , 3, 0 }, // output all components of position { 0,"VELOCITY", 0, 0, 3, 0 }, { 0,"AGE", 0, 0, 1, 0 } //{ 0,"TYPE", 0, 0, 1, 0 } }; Global Variables //streamout shaders ID3D11VertexShader* Part_VSSO; ID3D11GeometryShader* Part_GSSO; ID3DBlob *Part_GSSO_Buffer; ID3DBlob *Part_VSSO_Buffer; //normal shaders ID3D11VertexShader* Part_VS; ID3D11GeometryShader* Part_GS; ID3DBlob *Part_GS_Buffer; ID3D11PixelShader* Part_PS; ID3DBlob *Part_VS_Buffer; ID3DBlob *Part_PS_Buffer; ID3D11Buffer* PartVertBufferInit; //ID3D11Buffer* Popy; ID3D11Buffer* mDrawVB; ID3D11Buffer* mStreamOutVB; ID3D11InputLayout* PartVertLayout;// I try to set input layout too void ParticleSystem::InitParticles() { mFirstRun = true; srand(time(NULL)); hr = D3DCompileFromFile(L"ParticleVertexShaderSO4.hlsl", NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE, "main", "vs_5_0", NULL, NULL, &Part_VSSO_Buffer, NULL); hr = D3DCompileFromFile(L"ParticleGeometryShaderSO4.hlsl", NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE, "main", "gs_5_0", NULL, NULL, &Part_GSSO_Buffer, NULL); UINT StrideArray[1] = { sizeof(Particle) };//I try to set static 28 bits-7*4 per float hr = device->CreateVertexShader(Part_VSSO_Buffer->GetBufferPointer(), Part_VSSO_Buffer->GetBufferSize(), NULL, &Part_VSSO); hr = device->CreateGeometryShaderWithStreamOutput(Part_GSSO_Buffer- >GetBufferPointer(), Part_GSSO_Buffer->GetBufferSize(), PartlayoutSO ,3/* sizeof(PartlayoutSO)*/ , StrideArray, 1,D3D11_SO_NO_RASTERIZED_STREAM, NULL,&Part_GSSO); //Draw Shaders hr = D3DCompileFromFile(L"ParticleVertexShaderDRAW4.hlsl", NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE, "main", "vs_5_0", NULL, NULL, &Part_VS_Buffer, NULL); hr = D3DCompileFromFile(L"ParticleGeometryShaderDRAW4.hlsl", NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE, "main", "gs_5_0", NULL, NULL, &Part_GS_Buffer, NULL); hr = D3DCompileFromFile(L"ParticlePixelShaderDRAW4.hlsl", NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE, "main", "ps_5_0", NULL, NULL, &Part_PS_Buffer, NULL); hr = device->CreateVertexShader(Part_VS_Buffer->GetBufferPointer(), Part_VS_Buffer->GetBufferSize(), NULL, &Part_VS); hr = device->CreateGeometryShader(Part_GS_Buffer->GetBufferPointer(), Part_GS_Buffer->GetBufferSize(), NULL, &Part_GS); hr = device->CreatePixelShader(Part_PS_Buffer->GetBufferPointer(), Part_PS_Buffer->GetBufferSize(), NULL, &Part_PS); BuildVertBuffer(); } void ParticleSystem::BuildVertBuffer() { D3D11_BUFFER_DESC vertexBufferDesc1; ZeroMemory(&vertexBufferDesc1, sizeof(vertexBufferDesc1)); vertexBufferDesc1.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc1.ByteWidth = sizeof(Particle)*1; //*numParticles; vertexBufferDesc1.BindFlags = D3D11_BIND_VERTEX_BUFFER;// | D3D11_BIND_STREAM_OUTPUT; vertexBufferDesc1.CPUAccessFlags = 0; vertexBufferDesc1.MiscFlags = 0; vertexBufferDesc1.StructureByteStride = 0;// I tried to comment this too Particle p; ZeroMemory(&p, sizeof(Particle)); p.InitPos = XMFLOAT3(0.0f, 0.0f, 0.0f); p.InitVel = XMFLOAT3(0.0f, 0.0f, 0.0f); p.Age = 0.0f; //p.Type = 100.0f; D3D11_SUBRESOURCE_DATA vertexBufferData1; ZeroMemory(&vertexBufferData1, sizeof(vertexBufferData1)); vertexBufferData1.pSysMem = &p;//było &p vertexBufferData1.SysMemPitch = 0; vertexBufferData1.SysMemSlicePitch = 0; hr = device->CreateBuffer(&vertexBufferDesc1, &vertexBufferData1, &PartVertBufferInit); ZeroMemory(&vertexBufferDesc1, sizeof(vertexBufferDesc1)); vertexBufferDesc1.ByteWidth = sizeof(Particle) * numParticles; vertexBufferDesc1.BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_STREAM_OUTPUT; hr = device->CreateBuffer(&vertexBufferDesc1, 0, &mDrawVB); hr = device->CreateBuffer(&vertexBufferDesc1, 0, &mStreamOutVB); } void ParticleSystem::LoadDataParticles() { UINT stride = sizeof(Particle); UINT offset = 0; //Create the Input Layout //device->CreateInputLayout(Partlayout, numElementsPart, Part_VSSO_Buffer- //>GetBufferPointer(), // Part_VSSO_Buffer->GetBufferSize(), &PartVertLayout); //Set the Input Layout //context->IASetInputLayout(PartVertLayout); //Set Primitive Topology context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST); if (mFirstRun) { // context->CopyResource(Popy, PartVertBufferInit); context->IASetVertexBuffers(0, 1, &PartVertBufferInit, &stride, &offset); } else { context->IASetVertexBuffers(0, 1, &mDrawVB, &stride, &offset); } context->SOSetTargets(1, &mStreamOutVB, &offset); context->VSSetShader(Part_VSSO, NULL, 0); context->GSSetShader(Part_GSSO, NULL, 0); context->PSSetShader(NULL, NULL, 0); //context->PSSetShader(Part_PS, NULL, 0); ID3D11DepthStencilState* depthState;//disable depth D3D11_DEPTH_STENCIL_DESC depthStateDesc; depthStateDesc.DepthEnable = false; depthStateDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO; device->CreateDepthStencilState(&depthStateDesc, &depthState); context->OMSetDepthStencilState(depthState, 0); if (mFirstRun) { //mFirstRun; context->Draw(1, 0); mFirstRun = false; } else { context->DrawAuto(); } //} // done streaming-out--unbind the vertex buffer ID3D11Buffer* bufferArray[1] = { 0 }; context->SOSetTargets(1, bufferArray, &offset); // ping-pong the vertex buffers std::swap(mStreamOutVB, mDrawVB); // Draw the updated particle system we just streamed-out. //Create the Input Layout //device->CreateInputLayout(Partlayout, numElementsPart, Part_VS_Buffer- //>GetBufferPointer(), // Part_VS_Buffer->GetBufferSize(), &PartVertLayout); //Set the normal Input Layout //context->IASetInputLayout(PartVertLayout); context->IASetVertexBuffers(0, 1, &mDrawVB, &stride, &offset); ZeroMemory(&depthStateDesc, sizeof(depthStateDesc)); depthStateDesc.DepthEnable = true; depthStateDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO; device->CreateDepthStencilState(&depthStateDesc, &depthState); context->OMSetDepthStencilState(depthState, 0); //I tried add normal layout here the same like Entry SO but no changes //Set Primitive Topology //context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST); context->VSSetShader(Part_VS, NULL, 0); context->GSSetShader(Part_GS, NULL, 0); context->PSSetShader(Part_PS, NULL, 0); context->DrawAuto(); //mFirstRun = true; context->GSSetShader(NULL, NULL, 0); } void ParticleSystem::RenderParticles() { //mFirstRun = true; LoadDataParticles(); } And the code of shaders: VertexShader to stream out struct Particle { float3 InitPos : POSITION; float3 InitVel : VELOCITY; float Age : AGE; //uint Type : TYPE; }; Particle main(Particle vin) { return vin;// just push data into geomtrywithso } GeometrywithSo struct Particle { float3 InitPos : POSITION; float3 InitVel : VELOCITY; float Age : AGE; //uint Type : TYPE; }; float RandomPosition(float offset) { float u = Time + offset;// (Time + offset); float v = ObjTexture13.SampleLevel(ObjSamplerState, u, 0).r; return (v); } [maxvertexcount(6)] void main( point Particle gin[1], inout PointStream< Particle > Output ) { //gin[0].Age = Time; if ( StartPart == 1.0f ) { //if (gin[0].Age < 100.0f) //{ for (int i = 0; i < 6; i++) { float3 VelRandom; //= 5.0f * RandomPosition((float)i / 5.0f); VelRandom.y = 10.0f+i; VelRandom.x = 35 * i* RandomPosition((float)i / 5.0f);//+ offse; VelRandom.z = 10.0f;//35*i * RandomPosition((float)i / 5.0f); Particle p; p.InitPos = VelRandom;//float3(0.0f, 5.0f, 0.0f); //+ VelRandom; p.InitVel = float3(10.0f, 10.0f, 10.0f); p.Age = 0.0f;//VelRandom.y; //p.Type = PT_FLARE; Output.Append(p); } Output.Append(gin[0]); } else if (StartPart == 0.0f) { if (gin[0].Age >= 0) { Output.Append(gin[0]); } } } If I change Age in geometry with so: for example Age += Time from const buffer In geometry shader its fine once but in draw shader it is 0 and next time if it is reading in geometry with so it is 0 too. Vertex shader to draw struct VertexOut { float3 Pos : POSITION; float4 Colour : COLOR; //uint Type : TYPE; }; struct Particle { float3 InitPos : POSITION; float3 InitVel : VELOCITY; float Age : AGE; // uint Type : TYPE; }; VertexOut main(Particle vin) { VertexOut vout; float3 gAccelW = float3(0.0f, -0.98f, 0.0f); float t = vin.Age; //float b = Time/10000; // constant Acceleration equation vout.Pos = vin.InitVel+ (0.7f * gAccelW)*Time/100; //vout.Pos.x = t; vout.Colour = float4(1.0f, 0.0f, 0.0f, 1.0f); //vout.Age = vout.Pos.y; //vout.Type = vin.Type; return vout; } Geometry shader to change point into line struct VertexOut { float3 Pos : POSITION; float4 Colour : COLOR; //uint Type : TYPE; }; struct GSOutput { float4 Pos : SV_POSITION; float4 Colour : COLOR; //float2 Tex : TEXCOORD; }; [maxvertexcount(2)] void main( point VertexOut gin[1], inout LineStream< GSOutput > Output ) { float3 gAccelW = float3(0.0f, -0.98f, 0.0f); //if (gin[0].Type != PT_EMITTER) { float4 v[2]; v[0] = float4(gin[0].Pos, 1.0f); v[1] = float4((gin[0].Pos + gAccelW), 1.0f); GSOutput gout; [unroll] for (int i = 0; i < 2; ++i) { gout.Pos = mul(v[i], WVP);// mul(v[i], gViewProj); gout.Colour = gin[0].Colour; Output.Append(gout); } } } And pixel Shader struct GSOutput { float4 Pos : SV_POSITION; float4 Colour : COLOR; }; float4 main(GSOutput pin) : SV_TARGET { return pin.Colour; }
  25. So I've been playing around today with some things in D3D 11.1, specifically the constant buffer offset stuff. And just FYI, I'm doing this in C# with SharpDX (latest version). I got everything set up, I have my constant buffer populating with data during each frame, and calling VSSetConstantBuffers1 and passing in the offset/count as needed. But, unfortunately, I get nothing on my screen. If I go back to using the older D3D11 SetConstantBuffers method (without the offset/count), everything works great. I get nothing from the D3D runtime debug spew, and a look in the graphics debugger stuff tells me that my constant buffer does indeed have data at the offsets that I'm providing. And the data (World * Projection matrix) is correct at each offset. The offsets, according again to the graphics debugger, are correct. I could be using it incorrectly, but what little (and seriously, there's not a lot) info I found seems to indicate that I'm doing it correctly. But here's my workflow (I'd post code, but it's rather massive): Frame #0: Map constant buffer with discard Write matrix at offset 0, count 64 Unmap VSSetConstantBuffers1(0, 1, buffers, new int[] { offset }, new int[] { count }); // Where offset is the offset above, same with count Draw single triangle Frame #1: Map constant buffer with no-overwrite Write matrix at offset 64, count 64. Unmap VSSetConstantBuffers1(0, 1, buffers, new int[] { offset }, new int[] { count }); // Where offset is the offset above, same with count Draw single triangle Etc... it repeats until the end of the buffer, and starts over with a discard when the buffer is full. Has anyone ever used these offset cbuffer functions before? Can you help a brother out? Edit: I've added screenshots of what I'm seeing the VS 2017 graphics debugger. As I said before, if I use the old VSSetConstantBuffers method, it works like a charm and I see my triangle.
  • Advertisement
×

Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.

We are the game development community.

Whether you are an indie, hobbyist, AAA developer, or just trying to learn, GameDev.net is the place for you to learn, share, and connect with the games industry. Learn more About Us or sign up!

Sign me up!