
Search the Community

Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.



Found 1529 results

  1. Hello, I'm doing tessellation, and while I have the positions set up correctly using quads, I am at a loss as to how to generate smooth normals from them. I suppose this should be done in the domain shader, but what would this process look like? I am using a heightmap for tessellation, but I would rather generate the normals from the geometry than use a normal map, if possible. Cheers
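     For reference, a minimal sketch of one common approach (names such as heightMap, linearSampler, texelSize and heightScale are assumptions, not from the post): sample the heightmap around the tessellated point in the domain shader and build the normal from central differences.

        // HLSL sketch: central-difference normal in the domain shader.
        // heightMap/linearSampler/texelSize/heightScale are assumed names.
        Texture2D heightMap : register(t0);
        SamplerState linearSampler : register(s0);

        cbuffer TerrainParams : register(b0)
        {
            float2 texelSize;   // 1 / heightmap dimensions
            float  heightScale; // world-space height range
        };

        float3 NormalFromHeightmap(float2 uv)
        {
            // Domain shaders have no derivatives, so use SampleLevel.
            float hL = heightMap.SampleLevel(linearSampler, uv - float2(texelSize.x, 0.0f), 0).r;
            float hR = heightMap.SampleLevel(linearSampler, uv + float2(texelSize.x, 0.0f), 0).r;
            float hD = heightMap.SampleLevel(linearSampler, uv - float2(0.0f, texelSize.y), 0).r;
            float hU = heightMap.SampleLevel(linearSampler, uv + float2(0.0f, texelSize.y), 0).r;

            // Slopes in x/z scaled by the height range; y spans two texels.
            return normalize(float3((hL - hR) * heightScale, 2.0f * texelSize.x, (hD - hU) * heightScale));
        }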
  2. Has anyone ever tried to draw with one of the D3D11_PRIMITIVE_TOPOLOGY_XX_CONTROL_POINT_PATCHLIST primitive topologies when only a vertex, geometry and pixel shader are active? Practical Rendering and Computation with Direct3D 11, Microsoft's documentation for the InputPatch HLSL type and this (old) blog post seem to suggest it should be possible. But when I try, an error occurs when drawing:

        D3D11 ERROR: ID3D11DeviceContext::Draw: Rasterization Unit is enabled (PixelShader is not NULL or Depth/Stencil test is enabled and RasterizedStream is not D3D11_SO_NO_RASTERIZED_STREAM) but the input topology is Patch Control Points. You need either a Hull Shader and Domain Shader, or a Geometry Shader. [ EXECUTION ERROR #349: DEVICE_DRAW_INPUTLAYOUT_NOT_SET]
        D3D11: **BREAK** enabled for the previous message, which was: [ ERROR EXECUTION #349: DEVICE_DRAW_INPUTLAYOUT_NOT_SET ]

     I'm sure I did bind a geometry shader (and RenderDoc agrees with me 😉). The OpenGL and Vulkan documentation seem to explicitly *not* allow this, though, so maybe it's simply not possible 🙂 If you've ever managed to do this, were there any special details to take into account to get it working? Thanks!

     PS: for completeness, here's my test shader code, although I don't think it does anything special:

        //
        // vertex shader
        //
        struct VertexShaderInputData  { float4 pos : position; };
        struct VertexShaderOutputData { float4 pos : position; };

        VertexShaderOutputData vs_main(VertexShaderInputData inputData)
        {
            VertexShaderOutputData outputData;
            outputData.pos = inputData.pos;
            return outputData;
        }

        //
        // geometry shader
        //
        struct GeometryShaderInputData  { float4 pos : position; };
        struct GeometryShaderOutputData { float4 pos : SV_Position; };

        [maxvertexcount(8)]
        void gs_main(in InputPatch<GeometryShaderInputData, 8> inputData,
                     uint input_patch_id : SV_PrimitiveID,
                     uint gs_instance_id : SV_GSInstanceID,
                     inout TriangleStream<GeometryShaderOutputData> output_stream)
        {
            GeometryShaderOutputData output_vertex;
            output_vertex.pos = inputData[0].pos;
            output_stream.Append(output_vertex);
            ...and so on...
        }

        //
        // pixel shader
        //
        struct PixelShaderOutputData { float4 color : SV_Target0; };

        PixelShaderOutputData main()
        {
            PixelShaderOutputData outputData;
            outputData.color = float4(1.0, 1.0, 1.0, 1.0);
            return outputData;
        }
  3. CONUNDRUM: I'm very new to DirectX C++ programming; I come from a Unity background, but I'm trying to optimize my procedural mesh generation system to run on the graphics card with the new Unity ECS and Jobs system. For that I need native access to the render API, and I've settled on DirectX 11. I spent the last week or so learning DirectX 11 and implementing a compute-shader marching cubes implementation that runs in native code in a multithreaded environment. Everything seems to be working, but because I don't have much experience implementing a multithreaded rendering system, I thought I'd ask here if I was doing it right. I plan on running my own tests, but nothing beats someone with years of experience; if anyone has any advice to offer, that would be amazing :).

     IMPLEMENTATION: For my rendering system I knew I had to minimize the amount of blocking between the processing job threads and the rendering thread if I wanted to get any performance out of this at all. I decided to follow a double-buffer design similar to modern rendering APIs: I have one front queue of draw calls being rendered by the rendering thread, and a back queue being allocated to from the processing threads. At the end of the frame on the main thread I "present" the back queue to the front queue and swap their pointers; I of course do this inside a Windows CRITICAL_SECTION lock. In the render thread I take the same CRITICAL_SECTION and hold it while I access the front queue: I copy the contents of the front queue into a dynamic buffer, release the lock, and then render from this copied version of the front queue. I copy the buffer instead of rendering directly from it because I want to minimize the lock time for the main thread's present task.

     On top of this I also have to guarantee that the resources in the front queue that are being rendered are not destroyed or corrupted while they are being accessed. To do this I implemented my own thread-safe pinning system. It's like a reference-counting system, except it deletes the data whenever I tell it to in the processing thread; it does not delete the object holding the data, so I can tell whatever other thread attempts to acquire the lock that the data is gone. When all pins are released and the object's GPU data has been killed, the holding object is destroyed. I use another CRITICAL_SECTION per renderable object to pin, unpin, and generally access and modify this holder data.

     PRESENT QUEUE / EXECUTE DRAW (code attachments)

     QUESTIONS:
     1.) Is it reasonable to copy the whole front buffer, element by element, into a dynamic array and delete it after rendering? Will this be too much allocation? Would it be better to just lock the whole front queue while I am rendering and render directly from it?
     2.) Is it reasonable to use a CRITICAL_SECTION for every renderable object for pinning and unpinning? Is that too many critical sections? Is there a possible workaround with atomic functions, and would there be a way to do automated pinning and unpinning so I can use std::copy instead of manually going element by element and pinning the data? I feel more secure knowing exactly when the data is pinned and unpinned, as well as when it is alive or dead. (BTW, the pin and unpin methods also unlock the CS; that's why you see a lock with no unlock.)
     3.) Is there a better method for threading that does not require 3 buffers at once, or maybe just a better way of preserving the integrity of GPU data while it's in the render thread being rendered?
     4.) Am I making any noob D3D11 mistakes? This is my first time using it. Everything seems to be working in the Unity Editor, but I want to be sure before I continue and build off of this. THANKS
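     A minimal sketch of the present/swap pattern described above, assuming a simple std::vector of draw calls (all names are illustrative, not from the post):

        // C++ sketch: double-buffered draw-call queues swapped under a
        // CRITICAL_SECTION, with the render thread copying the front queue
        // so the lock is held as briefly as possible.
        #include <windows.h>
        #include <utility>
        #include <vector>

        struct DrawCall { /* shaders, buffers, constants... */ };

        CRITICAL_SECTION g_queueLock; // InitializeCriticalSection() at startup
        std::vector<DrawCall>* g_frontQueue = new std::vector<DrawCall>();
        std::vector<DrawCall>* g_backQueue  = new std::vector<DrawCall>();

        // Main thread, end of frame: publish the back queue.
        void PresentQueues()
        {
            EnterCriticalSection(&g_queueLock);
            std::swap(g_frontQueue, g_backQueue);
            g_backQueue->clear();
            LeaveCriticalSection(&g_queueLock);
        }

        // Render thread: copy the front queue, then render outside the lock.
        void RenderFrame()
        {
            EnterCriticalSection(&g_queueLock);
            std::vector<DrawCall> local = *g_frontQueue; // short critical section
            LeaveCriticalSection(&g_queueLock);

            for (const DrawCall& dc : local)
            {
                // issue the D3D11 draw for dc here
            }
        }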
  4. Hi, The attached rendering result of the human heart looks very realistic. I would like some suggestions on how to achieve this effect. What I have is a surface triangle mesh of the heart (it has vertex normals, but no color and no texture). I think I need the following steps: 1. Use 3D Max to associate the surface model with a texture. 2. Render with lighting (is Phong shading sufficient?). I am not sure whether my thinking is correct. Any suggestions are really appreciated. YL
  5. Hi, I've been trying to implement a skybox for some time now and there's probably only a tiny problem left to solve. When I load a texture using DirectXTK's CreateDDSTextureFromFileEx with a TEXTURECUBE flag, the resulting shader resource has its view dimension set to Texture2D and not TextureCube, which means I can't treat it as a TextureCube in HLSL. Also, the file I'm loading contains all six faces. Here are snippets of my code where I load the texture, set the shader resource view on the pixel shader and then sample it in HLSL:

        // Loading the texture
        HRESULT hr = DirectX::CreateDDSTextureFromFileEx(device, filename, 0,
            D3D11_USAGE_IMMUTABLE, D3D11_BIND_SHADER_RESOURCE, 0,
            D3D11_RESOURCE_MISC_TEXTURECUBE, false, &texture, &mSkyboxSRV);

        // Setting the texture
        DeviceContext->PSSetShaderResources(0, 1, mSkybox->GetSkyboxSRV());

        // HLSL: Sampling the texture
        TextureCube skyboxTexture : register(t0);
        SamplerState sampWrap : register(s0);

        struct PS_IN
        {
            float4 Pos : SV_POSITION;
            float3 lookVector : TEXCOORD0;
        };

        float4 PS_main(PS_IN input) : SV_TARGET
        {
            return skyboxTexture.Sample(sampWrap, input.lookVector);
        }

     This is the error message being output by DirectX:

        D3D11 ERROR: ID3D11DeviceContext::Draw: The Shader Resource View dimension declared in the shader code (TEXTURECUBE) does not match the view type bound to slot 0 of the Pixel Shader unit (TEXTURE2D). This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #354: DEVICE_DRAW_VIEW_DIMENSION_MISMATCH]

     Does anyone have any ideas on what to do? Any help is much appreciated!
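     One way to rule the loader in or out (a sketch, assuming `texture` is the ID3D11Resource* the loader returned and that it really is a six-slice Texture2D created with D3D11_RESOURCE_MISC_TEXTURECUBE): create the cube SRV yourself instead of using the view the loader filled in.

        // C++ sketch: build a TextureCube SRV explicitly from the loaded
        // resource, bypassing the loader-created view.
        ID3D11Texture2D* tex2d = nullptr;
        texture->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&tex2d);

        D3D11_TEXTURE2D_DESC texDesc;
        tex2d->GetDesc(&texDesc);

        D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
        srvDesc.Format = texDesc.Format;
        srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
        srvDesc.TextureCube.MostDetailedMip = 0;
        srvDesc.TextureCube.MipLevels = texDesc.MipLevels;

        ID3D11ShaderResourceView* cubeSRV = nullptr;
        HRESULT hrView = device->CreateShaderResourceView(tex2d, &srvDesc, &cubeSRV);
        tex2d->Release();

     If this CreateShaderResourceView succeeds, the resource itself was created as a cube and the problem lies in the loader-returned view; if it fails, the DDS was probably not loaded as a cube in the first place.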
  6. Hi all. I'm looking for a font to test my 2D text rendering using DirectX 11. I need a TrueType font that has correct kerning and valid values for other properties such as baseline, advance, scale, etc. I tried Arial on Windows, but (perhaps it's the version I have) I cannot get the kerning working. It would be great if there were a font that is guaranteed to have most properties present and valid. Any ideas on the best font to use for testing? Thanks!
  7. Hi guys, I have a problem where a textured quad is being severely interpolated. I am trying to achieve the clean pixelated look on the right-hand side of the example. There is no MSAA enabled on the back buffer:

        DXGI_SWAP_CHAIN_DESC swapChainDesc;
        ZeroMemory(&swapChainDesc, sizeof(swapChainDesc));
        swapChainDesc.BufferCount = 2;
        swapChainDesc.BufferDesc.Width = width;
        swapChainDesc.BufferDesc.Height = height;
        swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        swapChainDesc.BufferDesc.RefreshRate.Numerator = numerator;
        swapChainDesc.BufferDesc.RefreshRate.Denominator = denominator;
        swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        swapChainDesc.OutputWindow = hWnd;
        swapChainDesc.Windowed = true;
        swapChainDesc.SampleDesc.Count = 1;
        swapChainDesc.SampleDesc.Quality = 0;
        swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH;
        swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;

     I have verified in the debugger that my sampler is being applied, without asking for any anti-aliasing:

        D3D11_SAMPLER_DESC samplerDesc;
        samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
        samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
        samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
        samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
        samplerDesc.MipLODBias = 0.0f;
        samplerDesc.MaxAnisotropy = 1;
        samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
        samplerDesc.MinLOD = -FLT_MAX;
        samplerDesc.MaxLOD = FLT_MAX;

        if (FAILED(d3dDevice->CreateSamplerState(&samplerDesc, &d3dSamplerDefault)))
            return E_WINDOW_SAMPLER_DESC;
        d3dContext->PSSetSamplers(0, 1, &d3dSamplerDefault);

     And the default blend state, which as far as I can tell should be OK:

        // Create default blend state
        ID3D11BlendState* d3dBlendState = NULL;
        D3D11_BLEND_DESC omDesc;
        ZeroMemory(&omDesc, sizeof(D3D11_BLEND_DESC));
        omDesc.RenderTarget[0].BlendEnable = true;
        omDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
        omDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
        omDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
        omDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
        omDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
        omDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
        omDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

        if (FAILED(d3dDevice->CreateBlendState(&omDesc, &d3dBlendState)))
            return E_WINDOW_DEVICE_BLEND_STATE;
        d3dContext->OMSetBlendState(d3dBlendState, 0, 0xffffffff);
        if (d3dBlendState)
            d3dBlendState->Release();

     And the pixel shader, which is as basic as can be:

        SamplerState samLinear : register(s0);
        Texture2D squareMap : register(t0);

        struct VS_OUTPUT
        {
            float4 position : SV_POSITION;
            float2 textureCoord : TEXCOORD0;
        };

        float4 ps_main(VS_OUTPUT input) : SV_TARGET
        {
            return squareMap.Sample(samLinear, input.textureCoord);
        }

     I have a ton of error checking, there are no failed calls, and the debugger shows that everything is running great. Any ideas as to why I am still getting interpolation would be hugely appreciated.
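     One detail worth flagging (an observation, not a confirmed diagnosis): D3D11_FILTER_MIN_MAG_MIP_LINEAR in the sampler description above is bilinear filtering, which interpolates between texels by design. For a hard-edged pixelated look, point filtering is the usual choice:

        // C++ sketch: nearest-neighbour sampling for a pixelated look.
        samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;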
  8. Hello, I'm implementing SSAO for my engine, but I noticed the kernel wasn't rotating properly. What happens is that when I sample my random-vector texture I always get the same result, regardless of the input texture coordinates. Here's my shader:

        Texture2D randomTexture : register(t2);

        SamplerState smplr
        {
            Filter = D3D11_FILTER_ANISOTROPIC;
            AddressU = Wrap;
            AddressV = Wrap;
        };

        float4 PS_main(in float4 screenPos : SV_Position) : SV_TARGET0
        {
            // screen size 1280x720, randomTexture size 64x64
            float2 rvecCoord = float2(screenPos.x * 1280.f / 64.f, screenPos.y * 720.f / 64.f);
            float3 rvec = randomTexture.Sample(smplr, rvecCoord).xyz;
            return float4(rvec, 1.0f);
        }

     Off-topic code omitted. I can't for the life of me figure out why this sample would always give me the same result. Changing the line

        float2 rvecCoord = float2(screenPos.x * 1280.f / 64.f, screenPos.y * 720.f / 64.f);

     to

        float2 rvecCoord = float2(screenPos.xy);

     seems to make no difference. Here's a print of the shader state. Any help appreciated! ❤️
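     Two hedged observations that may explain this. First, state assignments inside an HLSL SamplerState block (Filter, AddressU, ...) are only honored by the effects framework; in plain HLSL they are ignored, and whatever sampler the application binds (or the default sampler, which clamps) is used instead. Second, SV_Position already arrives in pixels, so multiplying by 1280/64 pushes the coordinates far outside [0,1]; with clamp addressing, every sample then lands on the same edge texel. A sketch of an application-side wrap sampler (names are mine):

        // C++ sketch: create and bind a wrapping sampler, since state
        // written inside an HLSL SamplerState block is ignored outside
        // the effects framework.
        D3D11_SAMPLER_DESC sd = {};
        sd.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
        sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
        sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
        sd.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
        sd.MaxLOD = D3D11_FLOAT32_MAX;

        ID3D11SamplerState* wrapSampler = nullptr;
        device->CreateSamplerState(&sd, &wrapSampler);
        context->PSSetSamplers(0, 1, &wrapSampler);

     With that in place, the lookup would typically be `randomTexture.Sample(smplr, screenPos.xy / 64.0f)` so the 64x64 noise texture tiles across the screen.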
  9. I'm not sure what the correct terminology is to describe what is happening. In the following video, you will see that the grass quad has one of the tris glitching out towards the end in the far distance. YOUTUBE VIDEO DEMO Anyone know what might be causing this? Here is a link to the github if it helps. The actual drawing is done in the Graphics.cpp RenderFrame function GITHUB LINK (set project to x86 to run or go into the libs folder and extract the x64 archives to run as x64) Edited to Add: To move camera use WASD / Space / Z
  10. Some people reported that they can't play my game while a video recording tool is running at the same time. So I bought a popular one to test, and as it turns out they are right. 😕 I get a System.AccessViolationException with HResult=0x80004003 when calling SwapChain.Present. I am developing with DirectX 11 using C# and SharpDX. This problem only happens when a video recording tool is running; after I close it, the game runs perfectly fine! I searched online for this problem, but I did not find a solution. I also read the MSDN page for SwapChain.Present to look for possible error sources; however, according to that page, SwapChain.Present does not throw System.AccessViolationException, so I assume the problem comes from somewhere else. I also tested this with my second game, currently in development, which is a 2D game that only uses basic DirectX 11 3D features, and I get the same problem. I checked all the parameters of all the render targets, viewports, swap chain, depth stencil views, etc. for something that might conflict with other applications (e.g. forcing my game to use some resource exclusively that might also be used by the video recording tool). To locate the exact problem, I removed all code from the render loop except the SwapChain.Present call, so it was just this single line of code, and it still crashed... Has anyone had a similar problem?
  11. Hey folks, I am developing two video game projects at the moment, and I am developing my own game engine at the same time. The engine grows with my game projects, and more and more features are ready to use. For a very long time I had only a single render pass for my 3D scenes: I rendered all visible models using a PBR or Blinn-Phong shader, depending on the model's material, and it looked nice. To improve Galactic Crew, I wanted to add shadows and improved laser effects. After doing some research, I implemented a Shadow Mapping algorithm, which added two render passes. Now I have this sequence if shadows are activated in the settings:

      1. Render the scene from the point of view of the main light source to create a Depth Map.
      2. Render the scene from the camera's point of view to create a Light Map (using the Depth Map from step 1).
      3. Render the scene and apply shadows using the Light Map from step 2.

     I did some optimizations with your help, and it worked like a charm in all scenarios (space, planets and dungeons). Then I wanted to improve my laser beam effects. I added moving laser clutter to make it look more dynamic in combat situations, and I added a Glow effect to make it look smoother. However, I had to add three render passes for the Glow effect (create a Glow Map, blur the Glow Map, and add the Glow Map to the final scene), resulting in this sequence:

      1. Render the scene from the point of view of the main light source to create a Depth Map.
      2. Render the scene from the camera's point of view to create a Light Map (using the Depth Map from step 1).
      3. Render all laser beams from the camera's point of view into a Glow Map.
      4. Blur the Glow Map using a bi-directional filter and increase the brightness of the laser beams' inner areas.
      5. Render the scene and apply shadows using the Light Map from step 2.
      6. Use an orthogonal plane to add the Glow Map to the rendered scene from step 5.

     So if shadows are activated and the player is in combat, I have six render passes instead of just one. This winter I want to add planetary missions with water worlds and islands, which will add even more render passes for water reflection, etc. How many render passes do you use in your game projects?
  12. Does anybody know whether it is necessary to clear an AppendStructuredBuffer if I don't have a consume buffer? And what's the maximum size of an AppendStructuredBuffer? Right now my code doesn't clear the AppendStructuredBuffer, and the structured buffer I've created is very big. I append instances in a compute shader every frame, then render, and the instances flicker everywhere.
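     For reference: an AppendStructuredBuffer's hidden counter is not cleared automatically; it can be reset each frame when binding the UAV, which would explain the flickering if stale appends accumulate. A sketch (`uavAppend` is an assumed name for the UAV created with D3D11_BUFFER_UAV_FLAG_APPEND):

        // C++ sketch: reset the append buffer's hidden counter to 0 at bind
        // time. Passing -1 instead would preserve the current counter value.
        UINT initialCount = 0;
        context->CSSetUnorderedAccessViews(0, 1, &uavAppend, &initialCount);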
  13. Hello everyone, for the past few days I've been trying to fix the sunlight on my sphere, which has some bugs in it. For starters, I'm using this code: https://github.com/Illation/ETEngine/blob/master/source/Engine/Shaders/PlanetPatch.glsl to calculate my normals instead of using a normal map. I'm then using this guide: http://www.thetenthplanet.de/archives/1180 to get my TBN matrix. I have 2 main issues I'm working to solve when reworking this code. First, I get seams in the normal map along the equator and from pole to pole. Second, the normal also seems to move when I move my camera. Here is a video showing what I mean: the color is the normal calculated with the TBN matrix, and as the camera moves it moves along with it. Nothing is multiplied by the view matrix or anything. Here is my code. Vertex shader:

        output.normal = mul(finalPos, worldMatrix);
        output.viewVector = (mul(cameraPos.xyz, worldMatrix) - mul(finalPos, worldMatrix));

        mapCoords = normalize(finalPos);
        output.mapCoord = float2((0.5f + (atan2(mapCoords.z, mapCoords.x) / (2 * 3.14159265f))),
                                 (0.5f - (asin(mapCoords.y) / 3.14159265f)));

        output.position = mul(float4(finalPos, 1.0f), worldMatrix);
        output.position = mul(output.position, viewMatrix);
        output.position = mul(output.position, projectionMatrix);
        return output;

     And, perhaps more importantly, the pixel shader:

        float3x3 GetTBNMatrix(float3 normalVector, float3 posVector, float2 uv)
        {
            float3 dp1, dp2, dp2perp, dp1perp, T, B;
            float2 duv1, duv2;
            float invMax;

            dp1 = ddx(posVector);
            dp2 = ddy(posVector);
            duv1 = ddx(uv);
            duv2 = ddx(uv);

            dp2perp = cross(dp2, normalVector);
            dp1perp = cross(normalVector, dp1);

            // * -1 due to being LH coordinate system
            T = (dp2perp * duv1.x + dp1perp * duv2.x) * -1;
            B = (dp2perp * duv1.y + dp1perp * duv2.y) * -1;

            invMax = rsqrt(max(dot(T, T), dot(B, B)));

            return float3x3(T * invMax, B * invMax, normalVector);
        }

        float GetHeight(float2 uv)
        {
            return shaderTexture.SampleLevel(sampleType, uv, 0).r * (21.229f + 8.2f);
        }

        float3 CalculateNormal(float3 normalVector, float3 viewVector, float2 uv)
        {
            float textureWidth, textureHeight, hL, hR, hD, hU;
            float3 texOffset, N;
            float3x3 TBN;

            shaderTexture.GetDimensions(textureWidth, textureHeight);
            texOffset = float3((1.0f / textureWidth), (1.0f / textureHeight), 0.0f);

            hL = GetHeight(uv - texOffset.xz);
            hR = GetHeight(uv + texOffset.xz);
            hD = GetHeight(uv + texOffset.zy);
            hU = GetHeight(uv - texOffset.zy);

            N = normalize(float3((hL - hR), (hU - hD), 2.0f));

            TBN = GetTBNMatrix(normalVector, -viewVector, uv);

            return mul(TBN, N);
        }

        float4 MarsPixelShader(PixelInputType input) : SV_TARGET
        {
            float3 normal;
            float lightIntensity, color;
            float4 finalColor;

            normal = normalize(CalculateNormal(normalize(input.normal), normalize(input.viewVector), input.mapCoord));

            lightIntensity = saturate(dot(normal, normalize(-lightDirection)));
            color = saturate(diffuseColor * lightIntensity);

            return float4(normal.rgb, 1.0f); //float4(color, color, color, 1.0f);
        }

     Hope someone can help shine some light on this problem for me. Best regards and thanks in advance, Toastmastern
  14. Hi folks, I have a problem and I could really use some ideas from other professionals! I am developing my video game Galactic Crew, including its own game engine. I am currently working on improved graphics, which includes shadows (I use Shadow Mapping for that). I observed that the game lags when I use shadows, so I started profiling my source code. I used DirectX 11 queries to measure the time my GPU spends on different tasks, to search for bottlenecks. I found several small issues and solved them. As a result, the GPU needs around 10 ms per frame, which is good enough for 60 FPS (1 s / 60 frames ~ 16 ms/frame). See attachment Scene1 for the default view. However, when I zoom into my scene, it starts to lag. See attachment Scene2 for the zoomed view. I compared the time spent on the GPU in both cases: default view and zoomed view. I found that the render passes in which I render the full scene take much longer (~11 ms instead of ~2 ms). One of these render stages is the conversion of the depth information to the Shadow Map, and the second one is the final draw of the scene. So I added even more GPU profiling to find the exact problem. After several iterations, I found this call to be the bottleneck:

        if (model.UseInstancing)
            _deviceContext.DrawIndexedInstanced(modelPart.NumberOfIndices, model.NumberOfInstances, 0, 0, 0);
        else
            _deviceContext.DrawIndexed(modelPart.NumberOfIndices, 0, 0);

     Whenever I render a scene, I iterate through all visible models in the scene, set the proper vertex and pixel shaders for the model, and update the constant buffer of the vertex shader (if required). After that, I iterate through all positions of the model (if it does not use instancing) and through all parts of the model. For each model part, I set the used texture maps (diffuse, normal, ...), set the vertex and index buffers, and finally draw the model part by calling the code above. In one frame, for example, 11.37 ms were spent drawing all models and their parts when zoomed in; of these 11.37 ms, 11.35 ms were spent in the draw calls posted above. As a test, I simplified my rather complex pixel shader to a simple function that returns a fixed color, to make sure the pixel shader was not responsible for my performance problem. As it turned out, the GPU time wasn't reduced. Does anyone have any idea what causes my lag, i.e. the long GPU time in the draw calls? I don't use LOD or anything comparable, and I also don't use my BSP scene graph in this scene. It is exactly the same content, just with a different zoom. Maybe I missed something very basic. I am grateful for any help!
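     For reference, the D3D11 timestamp-query pattern the post describes looks roughly like this (a C++ sketch with illustrative names; the SharpDX equivalents mirror it one-to-one):

        // C++ sketch: GPU timing with timestamp queries. A disjoint query
        // brackets the timestamps and supplies the GPU clock frequency.
        ID3D11Query *disjointQuery = nullptr, *startQuery = nullptr, *endQuery = nullptr;
        D3D11_QUERY_DESC qd = {};
        qd.Query = D3D11_QUERY_TIMESTAMP_DISJOINT;
        device->CreateQuery(&qd, &disjointQuery);
        qd.Query = D3D11_QUERY_TIMESTAMP;
        device->CreateQuery(&qd, &startQuery);
        device->CreateQuery(&qd, &endQuery);

        context->Begin(disjointQuery);
        context->End(startQuery);            // timestamp before the draws
        // ... draw calls being measured ...
        context->End(endQuery);              // timestamp after the draws
        context->End(disjointQuery);

        // Read back later (ideally a frame or two later to avoid stalls).
        D3D11_QUERY_DATA_TIMESTAMP_DISJOINT dj;
        UINT64 t0 = 0, t1 = 0;
        while (context->GetData(disjointQuery, &dj, sizeof(dj), 0) == S_FALSE) {}
        context->GetData(startQuery, &t0, sizeof(t0), 0);
        context->GetData(endQuery, &t1, sizeof(t1), 0);
        if (!dj.Disjoint)
        {
            double ms = double(t1 - t0) * 1000.0 / double(dj.Frequency);
        }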
  15. The code to create the D3D11 structured buffer:

        D3D11_BUFFER_DESC desc;
        desc.ByteWidth = _count * _structSize;
        if (_type == StructType::Struct)
            desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
        else
            desc.MiscFlags = 0;
        desc.StructureByteStride = _structSize;
        desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
        if (_dynamic)
        {
            desc.Usage = D3D11_USAGE_DEFAULT;
            desc.CPUAccessFlags = 0;
        }
        else
        {
            desc.Usage = D3D11_USAGE_IMMUTABLE;
            desc.CPUAccessFlags = 0;
        }
        if (FAILED(getDevice()->CreateBuffer(&desc, NULL, &_object)))
        {
            return false;
        }

        D3D11_SHADER_RESOURCE_VIEW_DESC resourceViewDesc;
        memset(&resourceViewDesc, 0, sizeof(resourceViewDesc));
        if (_type == StructType::Float)
            resourceViewDesc.Format = DXGI_FORMAT_R32_FLOAT;
        else if (_type == StructType::Float2)
            resourceViewDesc.Format = DXGI_FORMAT_R32G32_FLOAT;
        else if (_type == StructType::Float3)
            resourceViewDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT;
        else if (_type == StructType::Float4)
            resourceViewDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        else
            resourceViewDesc.Format = DXGI_FORMAT_UNKNOWN;
        resourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
        resourceViewDesc.Buffer.ElementOffset = 0;
        resourceViewDesc.Buffer.NumElements = _count;

        ID3D11Resource* viewObject = _object;
        auto hr = getDevice()->CreateShaderResourceView(viewObject, &resourceViewDesc, &_shaderResourceView);
        if (FAILED(hr))
        {
            return false;
        }

     I've created a float-type structured buffer. The source data is a float array, and I update the buffer from array[startIndex] to array[endIndex - 1]. The code to update the buffer:

        bool setData(int startIndex, int endIndex, const void* data)
        {
            if (!data)
                return false;

            D3D11_BOX destBox;
            destBox.left = startIndex * _structSize;
            destBox.right = endIndex * _structSize;
            destBox.top = 0;
            destBox.bottom = 1;
            destBox.front = 0;
            destBox.back = 1;

            getContext()->UpdateSubresource(_object, 0, &destBox, data, _count * _structSize, 0);
            return true;
        }

     The final result is that the data is not smooth. If I change the setData code to

        destBox.left = startIndex;
        destBox.right = endIndex;

     then the result looks smooth, but some data is missing! I don't know why.
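     A hedged observation (assuming `data` points at the start of the whole array rather than at array[startIndex]): for buffer resources, D3D11_BOX coordinates are in bytes, so the first version uses the documented units, but pSrcData must point at the data for the box, i.e. the first element of the updated range, not the start of the array:

        // C++ sketch: update elements [startIndex, endIndex) of a buffer of
        // R32_FLOAT elements. Box offsets are in bytes for buffers, and the
        // source pointer must address the first element being written.
        D3D11_BOX box = {};
        box.left = startIndex * sizeof(float);
        box.right = endIndex * sizeof(float);
        box.top = 0; box.bottom = 1;
        box.front = 0; box.back = 1;

        const float* src = static_cast<const float*>(data); // assumed: start of the whole array
        getContext()->UpdateSubresource(_object, 0, &box, src + startIndex, 0, 0);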
  16. Is it reasonable to use Direct2D for some small 2D games? I never did much Direct2D work; mostly I used it for displaying text and 2D GUI for a Direct3D engine, but I never tried making a game with it. Is it better to use Direct2D with sprites, or would you prefer to go with D3D and 2D shaders? Or is D2D not meant for games at all, no matter how big or small?
  17. Hey, this is a very strange problem... I've got a compute shader that's supposed to fill a 3D texture (the voxels in a metavoxel) with color, based on the particles that cover the given metavoxel. This is the code:

        static const int VOXEL_WIDTH_IN_METAVOXEL = 32;
        static const int VOXEL_SIZE = 1;
        static const float VOXEL_HALF_DIAGONAL_LENGTH_SQUARED =
            (VOXEL_SIZE * VOXEL_SIZE + 2.0f * VOXEL_SIZE * VOXEL_SIZE) / 4.0f;
        static const int MAX_PARTICLES_IN_METAVOXEL = 32;

        struct Particle
        {
            float3 position;
            float radius;
        };

        cbuffer OccupiedMetavData : register(b6)
        {
            float3 occupiedMetavWorldPos;
            int numberOfParticles;
            Particle particlesBin[MAX_PARTICLES_IN_METAVOXEL];
        };

        RWTexture3D<float4> metavoxelTexUav : register(u5);

        [numthreads(VOXEL_WIDTH_IN_METAVOXEL, VOXEL_WIDTH_IN_METAVOXEL, 1)]
        void main(uint2 groupThreadId : SV_GroupThreadID)
        {
            float4 voxelColumnData[VOXEL_WIDTH_IN_METAVOXEL];
            float particleRadiusSquared;
            float3 distVec;

            for (int i = 0; i < VOXEL_WIDTH_IN_METAVOXEL; i++)
                voxelColumnData[i] = float4(0.0f, 0.0f, 1.0f, 0.0f);

            for (int k = 0; k < numberOfParticles; k++)
            {
                particleRadiusSquared = particlesBin[k].radius * particlesBin[k].radius
                                      + VOXEL_HALF_DIAGONAL_LENGTH_SQUARED;
                distVec.xy = (occupiedMetavWorldPos.xy + groupThreadId * VOXEL_SIZE) - particlesBin[k].position.xy;

                for (int i = 0; i < VOXEL_WIDTH_IN_METAVOXEL; i++)
                {
                    distVec.z = (occupiedMetavWorldPos.z + i * VOXEL_SIZE) - particlesBin[k].position.z;
                    if (dot(distVec, distVec) < particleRadiusSquared)
                    {
                        // given voxel is covered by particle
                        voxelColumnData[i] += float4(0.0f, 1.0f, 0.0f, 1.0f);
                    }
                }
            }

            for (int i = 0; i < VOXEL_WIDTH_IN_METAVOXEL; i++)
                metavoxelTexUav[uint3(groupThreadId.x, groupThreadId.y, i)] = clamp(voxelColumnData[i], 0.0, 1.0);
        }

     And it works well in debug mode. This is the correct-looking result obtained after raymarching one metavoxel from the camera: as you can see, the particle only covers the top right corner of the metavoxel. However, in release mode the result looks as if the upper half of the metavoxel was not filled at all, not even with the ambient blue-ish color from the first "for" loop... I nailed it down to one line of code in the above shader: when I replace "numberOfParticles" in the "for" loop with a constant value such as 1 (which is uploaded to the GPU anyway), the result finally looks the same as in debug mode. This is the shader compile method from the Hieroglyph rendering engine (awesome engine), and it looks fine to me, but maybe something's wrong? My only modification was adding include functionality:

        ID3DBlob* ShaderFactoryDX11::GenerateShader(ShaderType type, std::wstring& filename,
            std::wstring& function, std::wstring& model, const D3D_SHADER_MACRO* pDefines, bool enablelogging)
        {
            HRESULT hr = S_OK;
            std::wstringstream message;
            ID3DBlob* pCompiledShader = nullptr;
            ID3DBlob* pErrorMessages = nullptr;
            char AsciiFunction[1024];
            char AsciiModel[1024];
            WideCharToMultiByte(CP_ACP, 0, function.c_str(), -1, AsciiFunction, 1024, NULL, NULL);
            WideCharToMultiByte(CP_ACP, 0, model.c_str(), -1, AsciiModel, 1024, NULL, NULL);

            // TODO: The compilation of shaders has to skip the warnings as errors
            //       for the moment, since the new FXC.exe compiler in VS2012 is
            //       apparently more strict than before.
            UINT flags = D3DCOMPILE_PACK_MATRIX_ROW_MAJOR;
        #ifdef _DEBUG
            flags |= D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION; // | D3DCOMPILE_WARNINGS_ARE_ERRORS;
        #endif

            // Get the current path to the shader folders, and add the filename to it.
            FileSystem fs;
            std::wstring filepath = fs.GetShaderFolder() + filename;

            // Load the file into memory
            FileLoader SourceFile;
            if (!SourceFile.Open(filepath))
            {
                message << "Unable to load shader from file: " << filepath;
                EventManager::Get()->ProcessEvent(EvtErrorMessagePtr(new EvtErrorMessage(message.str())));
                return (nullptr);
            }

            LPCSTR s;
            if (FAILED(hr = D3DCompile(SourceFile.GetDataPtr(), SourceFile.GetDataSize(),
                GlyphString::wstringToString(filepath).c_str(), //!!!! - this must point to a concrete shader file!!! - a directory alone would work as well, but then the graphics debugger crashes when debugging shaders
                pDefines, D3D_COMPILE_STANDARD_FILE_INCLUDE, AsciiFunction, AsciiModel,
                flags, 0, &pCompiledShader, &pErrorMessages)))
            //if (FAILED(hr = D3DX11CompileFromFile(
            //    filename.c_str(), pDefines, 0, AsciiFunction, AsciiModel,
            //    flags, 0 /*UINT Flags2*/, 0, &pCompiledShader, &pErrorMessages, &hr)))
            {
                message << L"Error compiling shader program: " << filepath << std::endl << std::endl;
                message << L"The following error was reported:" << std::endl;

                if ((enablelogging) && (pErrorMessages != nullptr))
                {
                    LPVOID pCompileErrors = pErrorMessages->GetBufferPointer();
                    const char* pMessage = (const char*)pCompileErrors;
                    message << GlyphString::ToUnicode(std::string(pMessage));
                    Log::Get().Write(message.str());
                }

                EventManager::Get()->ProcessEvent(EvtErrorMessagePtr(new EvtErrorMessage(message.str())));
                SAFE_RELEASE(pCompiledShader);
                SAFE_RELEASE(pErrorMessages);
                return (nullptr);
            }

            SAFE_RELEASE(pErrorMessages);
            return (pCompiledShader);
        }

     Could the shader crash for some reason midway through execution? The question is also what the compiler could possibly do to the shader code in release mode that suddenly makes "numberOfParticles" invalid, and how to fix this issue. Or maybe it's something deeper that results in numberOfParticles being invalid? I checked my constant buffer values with the graphics debugger in debug and release modes, and both had the correct value of numberOfParticles, set to 1...
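     One hedged suggestion (mine, not from the thread): in release builds FXC optimizes aggressively, and differences against debug builds (which use D3DCOMPILE_SKIP_OPTIMIZATION, as in the factory above) often come down to loop unrolling or restructuring. Forcing a dynamic loop is a cheap way to narrow that down:

        // HLSL sketch: forbid unrolling of the particle loop so the
        // release-mode compiler keeps the dynamic trip count.
        [loop]
        for (int k = 0; k < numberOfParticles; k++)
        {
            // ... same body as above ...
        }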
  18. Hi all, I have been spending so much time trying to replicate a basic effect similar to these: Glowing line, Tron lines, or More tron lines. I've tried blurring using the shrink, horizontal and vertical blur passes, then expand technique, but the results of my implementation are crappy. I simply want my custom, non-textured 2D polygons to have a glow around them, in a size and color I can define. For example, I want to draw a blue rectangle using 2 triangles and have a glow around the shape. I am not sure how best to achieve this and what technique to use. I am prototyping an idea, so performance is not an issue; I just want to get the pixels properly on the screen, and I just can't figure out how to do it! It seems this effect has been done to death by now and should be easy, but I can't wrap my head around it; I'm not good at shaders at all, I'm afraid. Are the Rastertek blur or glow tutorials the way to go? I'm using DirectX 11. Any tips or suggestions would be greatly appreciated!
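     The standard recipe (a sketch under assumed names, not tied to the Rastertek tutorials): render the shape into an offscreen "glow" render target, blur that target with a separable Gaussian (one horizontal pass, one vertical pass), then additively blend the blurred result behind the sharp shape. The horizontal pass might look like:

        // HLSL sketch: horizontal pass of a separable Gaussian blur.
        // The vertical pass is identical with the offset on y.
        Texture2D glowTexture : register(t0);
        SamplerState linearSampler : register(s0);

        cbuffer BlurParams : register(b0)
        {
            float texelWidth; // 1.0 / glow target width
        };

        static const float weights[5] = { 0.227027f, 0.1945946f, 0.1216216f, 0.054054f, 0.016216f };

        float4 BlurH_PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
        {
            float4 sum = glowTexture.Sample(linearSampler, uv) * weights[0];
            [unroll]
            for (int i = 1; i < 5; i++)
            {
                float2 offset = float2(texelWidth * i, 0.0f);
                sum += glowTexture.Sample(linearSampler, uv + offset) * weights[i];
                sum += glowTexture.Sample(linearSampler, uv - offset) * weights[i];
            }
            return sum;
        }

     The glow color falls out of what you render into the glow target, and its size out of the kernel width and the glow target's resolution.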
  19. Hi, everybody. I'm raising this topic in connection with my recent switch to developing on Unreal Engine exclusively in C++. As everyone knows, there is very little documentation for the engine, and I have spent a lot of time hunting for information. I rummaged through GitHub looking for worthy implementation examples, but concluded that the best way to learn the engine is to look for answers in its source code. I want to share what I dug up, and perhaps someone can help me with my problem:
      - Unreal Engine 4 Rendering
      - Possible to use my own pure HLSL and GLSL shader code
      - Jason Zink, Matt Pettineo, Jack Hoxley - Practical rendering with DirectX 11 - 2011.pdf

      In general, I want to understand how to put into practice the concepts of FGlobalShader and UPrimitiveComponent, how to use FPrimitiveSceneProxy to define an FVertexFactory, and how a shader is hooked up through an FMaterialShader material with parameters passed to it. I have studied the source code of these classes and understand that a lot of parameters are passed through the material classes. But, at least at this first stage, I don't want to use parameters I don't fully understand; I'd rather add them gradually. I want to create a clean class with the ability to pass just the parameters I need, but one that still fits into the Unreal Engine pipeline. Has anyone faced this and would you agree to share a small piece of example code? Thank you in advance!
  20. Hi everyone, I think my question boils down to "how do I feed shaders?" I was wondering what good strategies are for storing mesh transformation data (world matrices) to then be used in the shader for transforming vertices, performance being the priority. And I'm talking about a game scenario where there are quite a lot of both moving entities and static ones that aren't repeated enough to be worth instanced drawing. So far I've only tried these naive methods:

      DX11:
      - Store the transforms of ALL entities in one constant buffer (and give each entity an index into the buffer for later modification), or
      - Store ONE transform in a constant buffer, and change it to the entity's transform before each draw call.

      Vulkan:
      - Use push constants to send the entity's transform to the shader before each draw call, and maybe use a separate device-local uniform buffer for static entities?

      The same question applies to lights. Any suggestions?
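     For reference, a minimal sketch of the second DX11 option (one per-object constant buffer re-filled before each draw; names are illustrative, and the buffer is assumed to be created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE):

        // C++ sketch: update a single per-object constant buffer per draw.
        struct PerObjectCB
        {
            DirectX::XMFLOAT4X4 world; // world matrix for this entity
        };

        PerObjectCB cbData;
        DirectX::XMStoreFloat4x4(&cbData.world, DirectX::XMMatrixTranspose(entityWorld));

        D3D11_MAPPED_SUBRESOURCE mapped;
        context->Map(perObjectBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        memcpy(mapped.pData, &cbData, sizeof(cbData));
        context->Unmap(perObjectBuffer, 0);

        context->VSSetConstantBuffers(0, 1, &perObjectBuffer);
        context->DrawIndexed(indexCount, 0, 0);

     D3D11_MAP_WRITE_DISCARD hands the driver a fresh memory region on each Map, so per-draw updates don't stall on the GPU still reading the previous contents; which of the two options wins in practice usually comes down to how many entities actually change per frame.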
  21. Hi, I need some advice on a feature of the pixel shader that can be leveraged when making a shadow pass. Currently my shadows work fine; everything is quite happily working... I did, though, ignore one aspect, and I should have fixed it at that point in time: for things such as billboard particles I'm not rendering shadows. The reason at the time was that the entire billboard (including what would have been the transparent area) was being written into the depth buffer. I remember seeing an answer to this problem on the forum; I believe it was to attach a pixel shader, and for pixels that weren't rejected on the depth test, set the return value to null? @Hodgman - I know you were involved in the thread, you might be able to throw some light on this :) I believe if the texture sample is transparent then I should call discard? I've trawled the web site for the answer (and it's in here, I know it, I've seen it); just hoping for a quick answer on something that is a little bit obscure. Time for me to go back and fix this little issue. Thanks all
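     That is indeed the usual approach: bind a pixel shader during the shadow/depth pass and discard the transparent texels so they never write depth. A minimal sketch (names are assumptions):

        // HLSL sketch: depth-only pixel shader for alpha-tested billboards.
        // Pixels below the alpha threshold are discarded and write no depth.
        Texture2D diffuseMap : register(t0);
        SamplerState linearSampler : register(s0);

        void ShadowPS(float4 pos : SV_Position, float2 uv : TEXCOORD0)
        {
            float alpha = diffuseMap.Sample(linearSampler, uv).a;
            clip(alpha - 0.5f); // same effect as discard for alpha < 0.5
        }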
  22. How do I unpack the frame buffer when it was packed with the Compact YCoCg Frame Buffer technique?
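     For context (a hedged summary): in the Compact YCoCg Frame Buffer technique (Mavridis and Papaioannou), each pixel stores luma plus only one of the two chroma channels in a checkerboard pattern, so unpacking has two steps: reconstruct the missing chroma channel from neighboring pixels (the paper uses an edge-aware average), then convert YCoCg back to RGB. The color-space conversion itself is:

        // HLSL sketch: YCoCg -> RGB once the missing chroma channel has
        // been reconstructed from neighbors. Depending on the packing,
        // Co/Cg may be stored with a 0.5 bias that must be removed first.
        float3 YCoCgToRGB(float3 ycocg)
        {
            float y  = ycocg.x;
            float co = ycocg.y;
            float cg = ycocg.z;
            return float3(y + co - cg,  // R
                          y + cg,       // G
                          y - co - cg); // B
        }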
  23. Hello everyone, I'm looking for some advice, since I have some issues with the texture for my mouse pointer and I'm not sure where to start looking. I have checked everything that I know of, and now I need advice on what to look for in my code when I try to fix it. I have a planet that is rendered, a UI that is rendered, and a mouse pointer that is rendered. First the planet is rendered, then the UI, and then the mouse pointer last. When the planet is done rendering, I turn off the Z-buffer and enable alpha blending while I render the UI and the mouse pointer. In the mouse pointer's pixel shader I look for black color, and if that is the case I blend it. But what seems to happen is that it also blends parts of the texture that aren't supposed to be blended. I'm going to provide some screenshots of the effect. In the first image you can see that the mouse pointer changes color to a whiter one when it is in front of the planet; the correct color is the one displayed when it's not in front of the planet. The second thing I find weird is that the mouse pointer is behind the UI text, even though it is rendered after; I also tried switching them around, and it makes no difference. Also, the UI doesn't have the same issue when it is above the planet; its color is displayed as it should be. Here is the pixel shader code, if that helps anyone get a better grip on the issue:

        float4 color;

        color = shaderTexture.Sample(sampleType, input.tex);

        if (color.b == 0.0f && color.r == 0.0f && color.g == 0.0f)
        {
            color.a = 0.0f;
        }
        else
        {
            color.a = 1.0f;
        }

        return color;

     The UI uses almost the same code, but only checks the r channel of the color; I'm using all 3 channels for the mouse pointer because its colors might be a bit more off. The idea is that if the pixel is black, it should be blended. And it does work, but somehow it also does something to the parts that shouldn't be blended. Right now I'm leaning towards there being something in the pixel shader, since I can set all pixels to white and it behaves as it should and gives me a white box. Any pointers as to what kind of issue I'm looking at here, and what to search for to find a solution, would be appreciated a lot. Best regards and thanks in advance, Toastmastern
  24. Hey, I can't find this information anywhere on the web and I'm wondering about a specific optimization... Let's say I have hundreds of 3D textures which I need to process separately in a compute shader. Each invocation needs different data in its constant buffer, BUT many of the 3D textures don't need their CB contents updated every frame. Would it be better to create just one CB resource, bind it once at startup, and in the loop map the data for each consecutive shader invocation? Or would it be better to create hundreds of separate CB resources, map them only when needed, and just bind the appropriate CB before each shader invocation? This depends on how exactly those resources are managed internally by DirectX and what binding actually does... I would be very grateful if somebody shared their experience!
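     A sketch of the single-buffer option, under assumptions of mine (one dynamic CB, mapped with D3D11_MAP_WRITE_DISCARD before each dispatch; all names illustrative). DISCARD gives the driver a fresh memory region on each Map, so consecutive dispatches don't serialize on the same allocation, which is why this pattern tends to hold up even with many invocations:

        // C++ sketch: one shared dynamic constant buffer, re-filled per dispatch.
        struct VolumeParams { /* per-texture constants */ };

        for (const Volume& v : volumes)
        {
            D3D11_MAPPED_SUBRESOURCE mapped;
            context->Map(sharedCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
            memcpy(mapped.pData, &v.params, sizeof(VolumeParams));
            context->Unmap(sharedCB, 0);

            context->CSSetConstantBuffers(0, 1, &sharedCB);
            context->CSSetUnorderedAccessViews(0, 1, &v.uav, nullptr);
            context->Dispatch(v.groupsX, v.groupsY, v.groupsZ);
        }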
  25. Hi, I'm trying to do a comparison between a DirectInput GUID (e.g. GUID_XAxis, GUID_YAxis) and a value I get from GetProperty, e.g.:

        DIPROPRANGE propRange;
        DIJoystick->GetProperty(DIPROP_RANGE, &propRange.diph); // This will crash

        if (GUID_XAxis == MAKEDIPROP(propRange.diph.dwObj))
            ;

     How should I be comparing the GUID from GetProperty?
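     A hedged sketch of what is likely going wrong: GetProperty requires the DIPROPHEADER to be fully initialized first (an uninitialized dwSize/dwHeaderSize is a common cause of the crash), and dwObj holds an object offset or ID rather than a GUID, so it is normally matched against DIJOFS_* offsets; axis GUIDs like GUID_XAxis are instead found via EnumObjects in DIDEVICEOBJECTINSTANCE::guidType.

        // C++ sketch: query the X-axis range with a properly filled header.
        DIPROPRANGE propRange;
        propRange.diph.dwSize       = sizeof(DIPROPRANGE);
        propRange.diph.dwHeaderSize = sizeof(DIPROPHEADER);
        propRange.diph.dwHow        = DIPH_BYOFFSET;
        propRange.diph.dwObj        = DIJOFS_X; // the X axis, addressed by offset

        HRESULT hr = DIJoystick->GetProperty(DIPROP_RANGE, &propRange.diph);
        if (SUCCEEDED(hr))
        {
            // propRange.lMin / propRange.lMax now hold the X-axis range.
        }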