Showing results for tag 'DX11' in content posted in Graphics and GPU Programming.
Found 1528 results

  1. isu diss

    DX11 Light Shafts

    I decided to implement light shafts following http://sirkan.iit.bme.hu/~szirmay/lightshaft_link.htm. So far I've only managed to implement the shadow map. Can anyone help me implement this in D3D11? (I just need the steps; I can do the rest.) I'm new to shadow maps and the like.
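    A generic sketch of a shadow-map ray march for light shafts (the common technique, not necessarily the linked paper's exact method). Every name here is an assumption: gLightViewProj/gLightColor come from a cbuffer, shadowMap/shadowSampler are bound by host code.

        cbuffer LightShaftCB : register(b1)
        {
            float4x4 gLightViewProj;
            float3   gLightColor;
        };
        Texture2D shadowMap : register(t1);
        SamplerState shadowSampler : register(s1);

        float3 ShadowUVDepth(float3 worldPos)
        {
            float4 c = mul(float4(worldPos, 1.0f), gLightViewProj);
            c.xyz /= c.w;
            return float3(c.xy * float2(0.5f, -0.5f) + 0.5f, c.z);
        }

        // March from the shaded surface toward the eye, counting lit samples.
        float3 LightShafts(float3 worldPos, float3 eyePos)
        {
            const int STEPS = 64;
            float3 stepVec = (eyePos - worldPos) / STEPS;
            float3 p = worldPos;
            float inscatter = 0.0f;
            for (int i = 0; i < STEPS; ++i)
            {
                float3 s = ShadowUVDepth(p);
                if (s.z <= shadowMap.SampleLevel(shadowSampler, s.xy, 0).r)
                    inscatter += 1.0f / STEPS; // this sample sees the light
                p += stepVec;
            }
            return inscatter * gLightColor;    // modulate by phase/attenuation in practice
        }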
  2. Hi all, as part of the debug drawing system in my engine, I want to add support for rendering simple text on screen (HUD-style). From what I've read there are a few options, in short: 1. write your own font sprite renderer; 2. use Direct2D/DirectWrite combined with the DX11 render target/back buffer; 3. use an external library, like the DirectX Toolkit. I want to go for number 2, but the articles/documentation confused me a bit. Some say you need to create a DX10 device to be able to do this, because it doesn't work directly with the DX11 device, while other articles say this was patched later and should work now. Can someone shed some light on this and ideally point me to an example or article on how to set it up? All input is appreciated.
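    For reference, a rough sketch of the Direct2D 1.1 path (Windows 8+, or Windows 7 with the Platform Update), which needs no separate D3D10 device. Error handling is omitted, and it assumes the D3D11 device was created with D3D11_CREATE_DEVICE_BGRA_SUPPORT:

        #include <d2d1_1.h>
        #include <d2d1_1helper.h>
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        void CreateD2DOnD3D11(ID3D11Device* d3dDevice, IDXGISwapChain* swapChain,
                              ComPtr<ID2D1DeviceContext>& d2dContext)
        {
            ComPtr<IDXGIDevice> dxgiDevice;
            d3dDevice->QueryInterface(IID_PPV_ARGS(&dxgiDevice));

            ComPtr<ID2D1Factory1> factory;
            D2D1_FACTORY_OPTIONS options = {};
            D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, __uuidof(ID2D1Factory1),
                              &options, reinterpret_cast<void**>(factory.GetAddressOf()));

            ComPtr<ID2D1Device> d2dDevice;
            factory->CreateDevice(dxgiDevice.Get(), &d2dDevice);
            d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &d2dContext);

            // Wrap the swap chain's back buffer so DirectWrite text lands on it.
            ComPtr<IDXGISurface> backBuffer;
            swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
            D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
                D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
                D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
            ComPtr<ID2D1Bitmap1> target;
            d2dContext->CreateBitmapFromDxgiSurface(backBuffer.Get(), &props, &target);
            d2dContext->SetTarget(target.Get());
        }

    Text drawing then goes between d2dContext->BeginDraw() and EndDraw() each frame, after the D3D11 rendering.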
  3. I'm continuing to learn about terrain rendering, and so far I've managed to load a heightmap and render it as a tessellated wireframe (following Frank Luna's DX11 book). However, I'm getting some really weird behavior where a large section of the wireframe is rendered with a yellow color, even though my pixel shader is hard-coded to output white. Which parts of the mesh are discolored changes as well, as pictured below (the mesh is being clipped by the far plane). Here is my pixel shader; as mentioned, I simply hard-code it to output white:

        float PS(DOUT pin) : SV_Target
        {
            return float4(1.0f, 1.0f, 1.0f, 1.0f);
        }

    I'm completely lost as to what could be causing this, so any push in the right direction would be greatly appreciated. If I can help by providing more information, please let me know.
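    One detail worth flagging in the posted shader (an observation, not a guaranteed fix): the entry point is declared float but returns a float4. HLSL accepts this with only an implicit-truncation warning, leaving just one output channel well defined. The declared return type should match what is written to SV_Target:

        float4 PS(DOUT pin) : SV_Target
        {
            return float4(1.0f, 1.0f, 1.0f, 1.0f);
        }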
  4. Hello, I want to implement my own bone-weight painting tool in my game engine, i.e. a virtual brush for painting weights onto a mesh. I have already implemented my own dual-quaternion skinning animation system with morphs (blend shapes) and bone-driven corrective morphs (a morph that depends on a bending or twisting bone). But now I have no idea what the best method is to implement a brush painting system. Some proposals: (a) I would build an additional adjacency structure that lets me find the neighbouring vertex indices of a given "central vertex" index; (b) the structure should also give the distance from each neighbour vertex to the central vertex; (c) calculate the strength of the color added to the central vertex and its neighbours with a formula using linear or quadratic distance falloff; (d) the central vertex would be detected as the vertex hit by an orthogonal projection from my cursor (the brush) in world space onto the mesh. But my problem is that several vertices can be hit simultaneously: e.g. if I want to paint the inward side of the left leg, the right leg will also be hit. I think this problem is quite typical and there are standard approaches that I don't know. Any help or tutorials are welcome. P.S. I am working with SharpDX, DirectX11.
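    A hypothetical sketch of (a)-(c): flood-fill outward from the hit vertex over a precomputed adjacency list, accumulating edge-path distance (an approximation of geodesic distance) for the falloff. For (d), the usual answer is to cast a ray from the cursor and keep only the nearest triangle hit, so the far leg is never painted. Vec3 and Length are assumed math helpers; this is C++ rather than the poster's C#:

        #include <queue>
        #include <utility>
        #include <vector>

        void PaintWeights(int hitVertex, float radius, float strength,
                          const std::vector<std::vector<int>>& neighbours, // per-vertex adjacency
                          const std::vector<Vec3>& positions,
                          std::vector<float>& weights)
        {
            std::vector<bool> visited(positions.size(), false);
            std::queue<std::pair<int, float>> open; // vertex index, path distance from hit
            open.push({hitVertex, 0.0f});
            visited[hitVertex] = true;
            while (!open.empty())
            {
                auto [v, d] = open.front();
                open.pop();
                float t = 1.0f - d / radius;   // linear falloff; use t*t for quadratic
                weights[v] += strength * t;
                for (int n : neighbours[v])
                {
                    float nd = d + Length(positions[n] - positions[v]);
                    if (!visited[n] && nd <= radius)
                    {
                        visited[n] = true;
                        open.push({n, nd});
                    }
                }
            }
        }

    Because the fill walks mesh edges rather than world space, it never crosses the gap to the other leg even when both are inside the brush radius.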
  5. Back around 2006 I spent a good year or two reading books and articles on this site, gobbling up everything game-dev related I could. I started an engine in DX10 and got through the basics, but I eventually gave up because I couldn't do the harder things. Now my C++ is 12 years stronger, my mind is better trained, and I am thinking of giving it another go. A lot has changed: there is no more standalone SDK, there is evidently a DirectX Tool Kit, XNA died, all the sweet sites I used to go to are 404, and Google searches all point to Unity and Unreal. I plainly don't like Unity or Unreal, but I might learn them for reference. So, what is the current path? Does everyone pretty much use the DirectX Tool Kit? Should I start there? I also read that DX12 is essentially expert-level DX11, so I guess I am going with DX11. Is there a current, up-to-date list of learning resources anywhere? I am about tired of 404s.
  6. Hi guys, I want to draw shadows for a directional light, but the shadows always disappear if I translate my mesh (a cube) too far outside the bounds of my orthographic projection matrix. This is my code (based on an XNA sample I ported to my project):

        // Matrix that will rotate points into the direction of the light
        Matrix lightRotation = Matrix.LookAtLH(Vector3.Zero, lightDir, Vector3.Up);
        BoundingFrustum cameraFrustum = new BoundingFrustum(Matrix.Identity);

        // Get the corners of the frustum
        Vector3[] frustumCorners = cameraFrustum.GetCorners();

        // Transform the positions of the corners into the direction of the light
        for (int i = 0; i < frustumCorners.Length; i++)
            frustumCorners[i] = Vector4F.ToVector3(Vector3.Transform(frustumCorners[i], lightRotation));

        // Find the smallest box around the points
        BoundingBox lightBox = BoundingBox.FromPoints(frustumCorners);
        Vector3 boxSize = lightBox.Maximum - lightBox.Minimum;
        Vector3 halfBoxSize = boxSize * 0.5f;

        // The position of the light should be in the center of the back panel of the box.
        Vector3 lightPosition = lightBox.Minimum + halfBoxSize;
        lightPosition.Z = lightBox.Minimum.Z;

        // We need the position back in world coordinates, so we transform
        // the light position by the inverse of the light's rotation
        lightPosition = Vector4F.ToVector3(Vector3.Transform(lightPosition, Matrix.Invert(lightRotation)));

        // Create the view matrix for the light
        this.view = Matrix.LookAtLH(lightPosition, lightPosition + lightDir, Vector3.Up);

        // Create the projection matrix for the light;
        // the projection is orthographic since we are using a directional light
        int amount = 25;
        this.projection = Matrix.OrthoOffCenterLH(boxSize.X - amount, boxSize.X + amount,
                                                  boxSize.Y + amount, boxSize.Y - amount,
                                                  -boxSize.Z - amount, boxSize.Z + amount);

    I believe the bug is that cameraFrustum is constructed from an identity matrix. I also tried a translation matrix built from my camera position, and also the view matrix of my camera, but without success. Can anyone tell me how to draw shadows for my directional light wherever my camera currently is in my scene? Greets, Benjamin
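    A sketch of the usual fix: build the frustum from the camera's real view and projection combined (in XNA/SharpDX terms, new BoundingFrustum(cameraView * cameraProjection) instead of Matrix.Identity), so the light box follows the camera. Shown here with DirectXMath; names are assumptions:

        #include <DirectXCollision.h>
        using namespace DirectX;

        void GetCameraFrustumCornersWorld(FXMMATRIX cameraView, CXMMATRIX cameraProjection,
                                          XMFLOAT3 corners[BoundingFrustum::CORNER_COUNT])
        {
            BoundingFrustum frustum;
            BoundingFrustum::CreateFromMatrix(frustum, cameraProjection);     // view-space frustum
            frustum.Transform(frustum, XMMatrixInverse(nullptr, cameraView)); // move it to world space
            frustum.GetCorners(corners); // feed these into the light-box fitting code above
        }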
  7. Hi, I'm implementing a simple 3D engine based on DirectX 11. I'm trying to render a skybox with a cubemap on it, and to do so I'm using the DDS texture loader from the DirectXTex library. I use texassemble to generate the cubemap (a texture array of 6 textures) into a DDS file that I load at runtime. I generated a cube "dome" and sample the texture using the position vector of the vertex as the sample coordinates (so far so good), but I always get the same face of the cubemap mapped onto the sky. As I look around I always get the same face (and it wobbles a bit if I move the camera). My code:

        // Texture.cpp:
        Texture::Texture(const wchar_t *textureFilePath, const std::string &textureType) : mType(textureType)
        {
            //CreateDDSTextureFromFile(Game::GetInstance()->GetDevice(), Game::GetInstance()->GetDeviceContext(),
            //                         textureFilePath, &mResource, &mShaderResourceView);
            CreateDDSTextureFromFileEx(Game::GetInstance()->GetDevice(), Game::GetInstance()->GetDeviceContext(),
                                       textureFilePath, 0, D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE, 0,
                                       D3D11_RESOURCE_MISC_TEXTURECUBE, false, &mResource, &mShaderResourceView);
        }

        // SkyBox.cpp:
        void SkyBox::Draw()
        {
            // set cube map
            ID3D11ShaderResourceView *resource = mTexture.GetResource();
            Game::GetInstance()->GetDeviceContext()->PSSetShaderResources(0, 1, &resource);
            // set primitive topology
            Game::GetInstance()->GetDeviceContext()->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
            mMesh.Bind();
            mMesh.Draw();
        }

        // Vertex Shader:
        cbuffer Transform : register(b0)
        {
            float4x4 viewProjectionMatrix;
        };

        float4 main(inout float3 pos : POSITION) : SV_POSITION
        {
            return mul(float4(pos, 1.0f), viewProjectionMatrix);
        }

        // Pixel Shader:
        SamplerState cubeSampler;
        TextureCube cubeMap;

        float4 main(in float3 pos : POSITION) : SV_TARGET
        {
            float4 color = cubeMap.Sample(cubeSampler, pos.xyz);
            return color;
        }

    I tried both functions from the DDS loader but I keep getting the same result. All the results I found on the web are about the old SDK toolkits, but I'm using the new DirectXTex lib.
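    For comparison, a minimal skybox shader pair (a sketch; the explicit register bindings and the depth trick are assumptions, not the poster's code). It's also worth verifying in a graphics debugger that the created SRV really has dimension TEXTURECUBE rather than TEXTURE2DARRAY, since a 2D-array view sampled as a cube is a classic source of "one face everywhere":

        cbuffer Transform : register(b0)
        {
            float4x4 viewProjectionMatrix; // view matrix with translation removed
        };
        TextureCube cubeMap : register(t0);
        SamplerState cubeSampler : register(s0);

        struct VSOut
        {
            float4 pos : SV_POSITION;
            float3 dir : TEXCOORD0; // model-space direction, interpolated per pixel
        };

        VSOut VSMain(float3 pos : POSITION)
        {
            VSOut o;
            o.dir = pos;
            o.pos = mul(float4(pos, 1.0f), viewProjectionMatrix).xyww; // z = w => depth 1.0 (needs LESS_EQUAL depth test)
            return o;
        }

        float4 PSMain(VSOut i) : SV_TARGET
        {
            return cubeMap.Sample(cubeSampler, normalize(i.dir));
        }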
  8. Hello, I used the OMSetRenderTargetsAndUnorderedAccessViews function to bind a UAV in the pixel shader. Everything is OK and there is no warning or error message. However, Visual Studio Graphics Diagnostics hits a fatal error when I try to open the frame I captured. I've tried another PC and another project (an Intel sample that also binds a UAV to the pixel shader), and the same error still occurs. It's tough to debug shaders without such a tool, so if anyone knows what I should do, please tell me. Thanks for your reply! BTW, my Graphics Diagnostics engine version is 15.6.5, my DirectX feature level is 11_0, and my shader model is 5_0. Thanks!
  9. The DirectX team has just published a blog post with a call to action for game developers to change their swap chain usage patterns: https://blogs.msdn.microsoft.com/directx/2018/04/09/dxgi-flip-model/ I wanted to give it some visibility, as well as start a discussion to see if there's any feedback from folks who have gone down this road in the past, or to hear from anybody who's trying it out as a result of this article.
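    For concreteness, a sketch of the flip-model swap chain the article recommends (assumes an existing ID3D11Device* and IDXGIFactory2*; error handling omitted):

        DXGI_SWAP_CHAIN_DESC1 desc = {};
        desc.Width = 0;                                  // take the window's size
        desc.Height = 0;
        desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
        desc.SampleDesc.Count = 1;                       // flip model forbids MSAA back buffers
        desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        desc.BufferCount = 2;                            // flip model needs at least two buffers
        desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD; // or FLIP_SEQUENTIAL before Windows 10

        Microsoft::WRL::ComPtr<IDXGISwapChain1> swapChain;
        factory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &swapChain);
        // MSAA then goes on a separate render target that is resolved into the back buffer.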
  10. Hello guys, I tried to draw shadows in my scene with a directional light, but in the end result I see the shadow plus the view volume of my directional light too (see image). Four years ago an acquaintance, with whom I no longer have contact, wrote me a shadow-map shader and the geometry shader for doing this, but he never fixed this, and since I don't really have much knowledge of shadow math I never fixed it myself. I also tried to filter out the light's view-space background with if (shadow > 0.15) shadow = 1.0f (shadow = lightIntensity), but with that filter the shadows of complex geometry look awful. My bias value is 0.0001f. Can anyone help me and explain what is wrong? Greets, Benjamin. P.S. I uploaded the two HLSL shaders in the attachment: ShadowMap.fx, SimpleShader.fx
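    A sketch of one common remedy: treat anything outside the shadow map as fully lit, so geometry beyond the light's orthographic bounds doesn't pick up the projected "view box". The declarations are assumptions standing in for whatever the attached shaders use:

        Texture2D shadowMap : register(t0);
        SamplerState shadowSampler : register(s0);
        static const float bias = 0.0001f;

        float ShadowFactor(float4 lightSpacePos)
        {
            float3 proj = lightSpacePos.xyz / lightSpacePos.w;
            float2 uv = proj.xy * float2(0.5f, -0.5f) + 0.5f;
            if (any(uv < 0.0f) || any(uv > 1.0f) || proj.z > 1.0f)
                return 1.0f;                                 // outside the map: assume lit
            float mapDepth = shadowMap.Sample(shadowSampler, uv).r;
            return (proj.z - bias > mapDepth) ? 0.0f : 1.0f; // 0 = in shadow
        }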
  11. Hi guys. I've been trying to solve this for a few days now but am getting nowhere. I am having trouble correctly displaying a model when using a diffuse and ambient light combination. In the picture you can see two larger cubes: the top one uses only ambient light, and the bottom one uses the ambient + diffuse combination. The smaller cube just shows where the light is located. Can someone help me understand why the cube is rendered the way it is? I was thinking it's a problem with the normals; I tried several different normal calculations and none of them work. Or maybe it's not a problem with the normals? I didn't want to post any code until someone could maybe give me a clue about what might be happening. Thanks!
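    For reference, a minimal ambient + Lambert sketch; every name here is an assumption, not the poster's code. One frequent cause of oddly shaded cubes is sharing 8 vertices between faces: a cube needs 24 vertices with per-face normals, or its faces shade as if they were rounded.

        float4 ShadePixel(float3 normalW, float3 positionW,
                          float3 lightPosW, float4 materialColor,
                          float ambientIntensity, float diffuseIntensity)
        {
            float3 n = normalize(normalW);               // renormalize after interpolation
            float3 l = normalize(lightPosW - positionW); // direction toward the light
            float ndotl = saturate(dot(n, l));           // clamp back-facing terms to zero
            return materialColor * (ambientIntensity + diffuseIntensity * ndotl);
        }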
  12. Hi there, I am rendering my game to render textures, but I am having difficulty figuring out how to scale the result to fill the window. The window looks like this: and the render texture looks like this (it's the window resolution downscaled by 4): (I implemented a screenshot function for the render texture, which has proved very useful in getting this working so far.) My vertex shader is the classic "draw a fullscreen triangle without binding a vertex or index buffer", as seen many times on this site:

        PS_IN_PosTex main(uint id : SV_VertexID)
        {
            PS_IN_PosTex output;
            output.tex = float2((id << 1) & 2, id & 2);
            output.pos = float4(output.tex * float2(2, -2) + float2(-1, 1), 0, 1);
            return output;
        }

    and the pixel shader is simply:

        Texture2D txDiffuse : register(t0);
        SamplerState samp : register(s0);

        float4 main(PS_IN_PosTex input) : SV_TARGET
        {
            return txDiffuse.Sample(samp, input.tex);
        }

    Can someone please give me a clue as to how to scale this correctly? Many thanks, Andy
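    One common cause (a guess from the description, not a certain diagnosis): the back-buffer pass reusing the render texture's quarter-size viewport. The viewport has to match the window when the fullscreen triangle stretches the smaller texture up to it; the names below are assumptions:

        D3D11_VIEWPORT vp = {};
        vp.Width = static_cast<float>(windowWidth);  // window size, not render-texture size
        vp.Height = static_cast<float>(windowHeight);
        vp.MaxDepth = 1.0f;
        context->RSSetViewports(1, &vp);
        // Then draw the triangle with a linear-filtering sampler so the upscale is smooth.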
  13. Hey everyone, I've used tools like Intel GPA in the past and I would like to continue doing that, but in my current work environment I can't find a tool that can analyze rendering without a swap chain and a call to Present(). In addition, the program I'm working on uses WPF for its UI, which uses a D3D9Ex device, and some tools attach to that device instead. So, are there any tools that allow debugging D3D11 without depending on a swap chain? Thanks for any help!
  14. I've run into a puzzling rendering issue where triangles in the back bleed through triangles in front of them. This screenshot shows the issue: The leftmost picture is drawn with no MSAA and shows the expected result. The middle picture is exactly the same, except with MSAA enabled; notice the red pixels bleeding through at some triangle edges. The right picture shows a slightly rotated view, revealing the red surface in the back. In the rotated view the artefact goes away, apparently because the triangles are no longer directly facing the camera. The issue only occurs on certain GPUs (as far as I am currently aware, the NVIDIA Quadro K600 and Intel integrated chips). When using a WARP device or the D3D11 reference rasterizer, the problem does not occur. The triangle mesh also affects the result: the two surfaces in the screenshot are pieces of a skull, triangulated from a 3D image scan using some form of marching cubes, so the triangles are in a very specific order. You can also see this in the following RenderDoc pixel history: RenderDoc shows the 'broken' pixel covered by two triangles -- I'd actually expect at least three: two adjoining the edge, and at least one from the back surface. The two triangles affecting the final pixel color are consecutive primitives. The front one has a shader depth output of 0.42414; the back one has a depth of 0.73829, but it still ends up affecting the final color. If I change the order of the triangles -- for instance by splitting the surfaces and rendering each surface with its own draw call -- the problem also goes away. I understand that MSAA changes rasterization, but shouldn't adjoining triangles still be completely without gaps? All sample positions within a pixel should be covered by the triangles on both sides of the edge, so no background should bleed through, right? For the record: I did check that there are no actual gaps/cracks in the mesh. Is this a driver/GPU bug? Am I misunderstanding the rasterization rules? Thanks!
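    One speculative lead, given that RenderDoc reports a "shader depth output": if the pixel shader writes SV_Depth, MSAA gets a single per-pixel depth instead of per-sample depths, and that depth is computed from attributes interpolated at the pixel centre, which can lie outside an edge-on triangle and be extrapolated. A sketch of the usual mitigation, the centroid modifier (names are assumptions):

        struct PSIn
        {
            float4 pos : SV_POSITION;
            centroid float linearDepth : DEPTH; // interpolate only inside the covered area
        };
        // Alternatively, let the rasterizer produce depth (no SV_Depth write),
        // which restores true per-sample depth under MSAA.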
  15. I'm trying to modify the D3D11 example to draw a texture onto the triangle rather than the colours, but now I'm not seeing the triangle at all. What am I missing? Code: https://pastebin.com/KVMTq9r6 MiniTri.fx: https://pastebin.com/0w1myEg0
  16. So, I am trying to do something fairly simple: copy the contents of one texture into another, then copy the result texture back into the first. (I'll also do some additional editing on the result texture, which is why I copy it back.) But I am getting some strange results and the photo is losing quality. Here is the compute shader I've used:

        Texture2D ObjTexture : register(t0);
        RWTexture2D<float4> ObjResult : register(u0);
        SamplerState ObjWrapSampler : register(s0);

        [numthreads(32, 32, 1)]
        void main(uint3 DTid : SV_DispatchThreadID)
        {
            float width, height;
            ObjTexture.GetDimensions(width, height);
            width -= 1;   // X = [0 ... width - 1]
            height -= 1;  // Y = [0 ... height - 1]
            float2 uv = float2(DTid.xy) / float2(width, height);
            ObjResult[DTid.xy] = ObjTexture.SampleLevel(ObjWrapSampler, uv, 0);
            return;
        }

    and here is how I copy the result image back into the input one:

        ID3D11Resource *inputTexture;
        inputTextureSRV->GetResource(&inputTexture);
        mContext->CopySubresourceRegion(inputTexture, 0, 0, 0, 0, mTexture.Get(), 0, NULL);

    I also tried copying the first texture into the second texture and using an unordered access view on the first texture, but I got the same result. Is there something I am doing wrong here? This is the original texture. This is the texture after ~60 passes. And this is after ~240 passes; it's 'the final result' because it doesn't change anymore. As you can see, the image has lost its quality.
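    A sketch of the likely fix: for a 1:1 copy, fetch the texel directly instead of filtering. The posted UV mapping divides by (width-1, height-1), which does not land on texel centres ((i + 0.5) / width), so the linear sampler blends neighbouring texels and every pass blurs the image a little more:

        [numthreads(32, 32, 1)]
        void main(uint3 DTid : SV_DispatchThreadID)
        {
            ObjResult[DTid.xy] = ObjTexture[DTid.xy]; // exact texel read, no filtering
        }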
  17. I'm following Rastertek terrain tutorial 14 (http://rastertek.com/tertut14.html). The problem is that slope-based texturing doesn't work in my application: there are plenty of slopes in my terrain, but none of them get the slope color.

        float4 PSMAIN(DS_OUTPUT Input) : SV_Target
        {
            float4 grassColor;
            float4 slopeColor;
            float4 rockColor;
            float slope;
            float blendAmount;
            float4 textureColor;

            grassColor = txTerGrassy.Sample(SSTerrain, Input.TextureCoords);
            slopeColor = txTerMossRocky.Sample(SSTerrain, Input.TextureCoords);
            rockColor = txTerRocky.Sample(SSTerrain, Input.TextureCoords);

            // Calculate the slope of this point.
            slope = (1.0f - Input.LSNormal.y);

            if (slope < 0.2)
            {
                blendAmount = slope / 0.2f;
                textureColor = lerp(grassColor, slopeColor, blendAmount);
            }
            if ((slope < 0.7) && (slope >= 0.2f))
            {
                blendAmount = (slope - 0.2f) * (1.0f / (0.7f - 0.2f));
                textureColor = lerp(slopeColor, rockColor, blendAmount);
            }
            if (slope >= 0.7)
            {
                textureColor = rockColor;
            }
            return float4(textureColor.rgb, 1);
        }
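    Two guesses worth checking (not certain diagnoses): that Input.LSNormal is really a unit-length normal, renormalized after interpolation across the patch, and that it isn't arriving packed into [0,1] like a normal-map color; either would keep the slope value near zero, so only the grass branch ever runs.

        // Renormalize before deriving the slope:
        float slope = 1.0f - normalize(Input.LSNormal).y;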
  18. I am new to DirectX. I just followed some tutorials online and started to program. It had been going well until I hit the problem of loading my own 3D models from 3ds Max, exported as .x, which is supported by DirectX. I am using C++ in Visual Studio 2010 and DirectX 9. I really tried to find help on the net, but I couldn't find anything that solves my problem, and I don't know where exactly the problem is. I ran most of the samples and examples and they all worked well. Can anyone give me a hint or a solution for my problem? Thanks in advance!
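    A sketch of the usual D3DX9 route for .x files (assumes a valid IDirect3DDevice9* named device, a Unicode build, and the June 2010 DirectX SDK installed; error handling trimmed):

        #include <d3dx9.h>

        LPD3DXMESH mesh = nullptr;
        LPD3DXBUFFER materials = nullptr;
        DWORD numMaterials = 0;
        HRESULT hr = D3DXLoadMeshFromX(L"model.x", D3DXMESH_MANAGED, device,
                                       NULL, &materials, NULL, &numMaterials, &mesh);
        if (SUCCEEDED(hr))
            mesh->DrawSubset(0); // in a real renderer: one DrawSubset per material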
  19. I'm trying a bare-bones tessellation shader and getting an unexpected result when increasing the tessellation factor. I'm rendering a group of quads and trying to apply tessellation to them. With OutsideTess = (1,1,1,1), InsideTess = (1,1), and with OutsideTess = (1,1,1,1), InsideTess = (2,1): I expected 4 triangles in the quad, not two. Any idea what's wrong? Structs:

        struct PatchTess
        {
            float mEdgeTess[4] : SV_TessFactor;
            float mInsideTess[2] : SV_InsideTessFactor;
        };

        struct VertexOut
        {
            float4 mWorldPosition : POSITION;
            float mTessFactor : TESS;
        };

        struct DomainOut
        {
            float4 mWorldPosition : SV_POSITION;
        };

        struct HullOut
        {
            float4 mWorldPosition : POSITION;
        };

    Hull shader:

        PatchTess PatchHS(InputPatch<VertexOut, 3> inputVertices)
        {
            PatchTess patch;
            patch.mEdgeTess[0] = 1;
            patch.mEdgeTess[1] = 1;
            patch.mEdgeTess[2] = 1;
            patch.mEdgeTess[3] = 1;
            patch.mInsideTess[0] = 2;
            patch.mInsideTess[1] = 1;
            return patch;
        }

        [domain("quad")]
        [partitioning("fractional_odd")]
        [outputtopology("triangle_ccw")]
        [outputcontrolpoints(4)]
        [patchconstantfunc("PatchHS")]
        [maxtessfactor(64.0)]
        HullOut hull_main(InputPatch<VertexOut, 3> verticeData, uint index : SV_OutputControlPointID)
        {
            HullOut ret;
            ret.mWorldPosition = verticeData[index].mWorldPosition;
            return ret;
        }

    Domain shader:

        [domain("quad")]
        DomainOut domain_main(PatchTess patchTess, float2 uv : SV_DomainLocation, const OutputPatch<HullOut, 4> quad)
        {
            DomainOut ret;
            const float MipInterval = 20.0f;
            ret.mWorldPosition.xz = quad[0].mWorldPosition.xz * (1.0f - uv.x) * (1.0f - uv.y)
                                  + quad[1].mWorldPosition.xz * uv.x * (1.0f - uv.y)
                                  + quad[2].mWorldPosition.xz * (1.0f - uv.x) * uv.y
                                  + quad[3].mWorldPosition.xz * uv.x * uv.y;
            ret.mWorldPosition.y = quad[0].mWorldPosition.y;
            ret.mWorldPosition.w = 1;
            ret.mWorldPosition = mul(gFrameViewProj, ret.mWorldPosition);
            return ret;
        }

    Any ideas what could be wrong with these shaders?
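    One thing that stands out: a quad patch carries 4 control points, but the hull stages declare InputPatch<VertexOut, 3>, so verticeData[3] reads an undefined control point; the host-side IA topology must also be D3D11_PRIMITIVE_TOPOLOGY_4_CONTROL_POINT_PATCHLIST. A sketch of the corrected signatures (bodies as posted), plus a note on partitioning:

        PatchTess PatchHS(InputPatch<VertexOut, 4> inputVertices)
        {
            PatchTess patch;
            patch.mEdgeTess[0] = 1;
            patch.mEdgeTess[1] = 1;
            patch.mEdgeTess[2] = 1;
            patch.mEdgeTess[3] = 1;
            patch.mInsideTess[0] = 2;
            patch.mInsideTess[1] = 1;
            return patch;
        }

        [domain("quad")]
        [partitioning("integer")] // integer partitioning gives predictable triangle counts while validating
        [outputtopology("triangle_ccw")]
        [outputcontrolpoints(4)]
        [patchconstantfunc("PatchHS")]
        [maxtessfactor(64.0)]
        HullOut hull_main(InputPatch<VertexOut, 4> verticeData, uint index : SV_OutputControlPointID)
        {
            HullOut ret;
            ret.mWorldPosition = verticeData[index].mWorldPosition;
            return ret;
        }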
  20. I've just started learning about tessellation from Frank Luna's DX11 book. I'm getting some very weird behavior when I try to render a tessellated quad patch if I also render a mesh in the same frame. The tessellated quad patch renders just fine if it's the only thing I'm rendering; this is pictured below. However, when I attempt to render the same tessellated quad patch along with the other entities in the scene (which are simple triangle lists), I get the following error: I have no idea why this is happening, and Google searches have given me no leads at all. I use the following code to render the tessellated quad patch:

        ID3D11DeviceContext* dc = GetGFXDeviceContext();
        dc->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_4_CONTROL_POINT_PATCHLIST);
        dc->IASetInputLayout(ShaderManager::GetInstance()->m_JQuadTess->m_InputLayout);
        float blendFactors[] = { 0.0f, 0.0f, 0.0f, 0.0f }; // only used with D3D11_BLEND_BLEND_FACTOR
        dc->RSSetState(m_rasterizerStates[RSWIREFRAME]);
        dc->OMSetBlendState(m_blendStates[BSNOBLEND], blendFactors, 0xffffffff);
        dc->OMSetDepthStencilState(m_depthStencilStates[DSDEFAULT], 0);

        ID3DX11EffectTechnique* activeTech = ShaderManager::GetInstance()->m_JQuadTess->Tech;
        D3DX11_TECHNIQUE_DESC techDesc;
        activeTech->GetDesc(&techDesc);
        for (unsigned int p = 0; p < techDesc.Passes; p++)
        {
            TerrainVisual* terrainVisual = (TerrainVisual*)entity->m_VisualComponent;
            UINT stride = sizeof(TerrainVertex);
            UINT offset = 0;
            GetGFXDeviceContext()->IASetVertexBuffers(0, 1, &terrainVisual->m_VB, &stride, &offset);
            Vector3 eyePos = Vector3(cam->m_position);
            Matrix rotation = Matrix::CreateFromYawPitchRoll(entity->m_rotationEuler.x, entity->m_rotationEuler.y, entity->m_rotationEuler.z);
            Matrix model = rotation * Matrix::CreateTranslation(entity->m_position);
            Matrix view = cam->GetLookAtMatrix();
            Matrix MVP = model * view * m_ProjectionMatrix;
            ShaderManager::GetInstance()->m_JQuadTess->SetEyePosW(eyePos);
            ShaderManager::GetInstance()->m_JQuadTess->SetWorld(model);
            ShaderManager::GetInstance()->m_JQuadTess->SetWorldViewProj(MVP);
            activeTech->GetPassByIndex(p)->Apply(0, GetGFXDeviceContext());
            GetGFXDeviceContext()->Draw(4, 0);
        }
        dc->RSSetState(0);
        dc->OMSetBlendState(0, blendFactors, 0xffffffff);
        dc->OMSetDepthStencilState(0, 0);

    I draw my scene by looping through the list of entities and calling the associated draw method depending on the entity's "visual type":

        for (unsigned int i = 0; i < scene->GetEntityList()->size(); i++)
        {
            Entity* entity = scene->GetEntityList()->at(i);
            if (entity->m_VisualComponent->m_visualType == VisualType::MESH)
                DrawMeshEntity(entity, cam, sun, point);
            else if (entity->m_VisualComponent->m_visualType == VisualType::BILLBOARD)
                DrawBillboardEntity(entity, cam, sun, point);
            else if (entity->m_VisualComponent->m_visualType == VisualType::TERRAIN)
                DrawTerrainEntity(entity, cam);
        }
        HR(m_swapChain->Present(0, 0));

    Any help/advice would be much appreciated!
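    A likely culprit, offered as a guess: the effect pass for the patch binds hull and domain shaders, and Apply() leaves them bound, so the next entity drawn as a plain triangle list runs with HS/DS still active (the runtime rejects draws whose topology doesn't match the hull shader). Clearing the extra stages after the patch draw is cheap:

        dc->HSSetShader(nullptr, nullptr, 0);
        dc->DSSetShader(nullptr, nullptr, 0);
        dc->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); // restore for mesh draws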
  21. Hello, I've stumbled upon the Urho3D engine and found that it has a really nice and easy-to-read code structure. I think the graphics abstraction looks really interesting, and I like how it defers pipeline state changes until just before the draw call to eliminate redundant state changes. This is done by saving the state changes (blend enabled / SRV changes / RTV changes) in member variables and, just before the draw, applying the actual state changes through the graphics context. It looks something like this (pseudo):

        void PrepareDraw()
        {
            if (renderTargetsDirty)
            {
                pD3D11DeviceContext->OMSetRenderTarget(mCurrentRenderTargets);
                renderTargetsDirty = false;
            }
            if (texturesDirty)
            {
                pD3D11DeviceContext->PSSetShaderResourceView(..., mCurrentSRVs);
                texturesDirty = false;
            }
            // ... some more state changes
        }

    This all looked like a great design at first, but I've found one big issue with it, and I don't really understand how it is solved in their case or how I would tackle it. I'll explain by example: imagine I have two render targets, my backbuffer RT and an offscreen RT. Say I want to render my backbuffer to the offscreen RT and then back to the backbuffer (just for the sake of the example). You would do something like this:

        // Render to the offscreen RT
        pGraphics->SetRenderTarget(pOffscreenRT->GetRTV());
        pGraphics->SetTexture(diffuseSlot, pDefaultRT->GetSRV());
        pGraphics->DrawQuad();
        pGraphics->SetTexture(diffuseSlot, nullptr); // Remove the default RT from input

        // Render to the default (screen) RT
        pGraphics->SetRenderTarget(nullptr); // Default RT
        pGraphics->SetTexture(diffuseSlot, pOffscreenRT->GetSRV());
        pGraphics->DrawQuad();

    The problem is that the second time the application loop comes around, the offscreen render target is still bound as an input ShaderResourceView when it gets set as a RenderTargetView, because in Urho3D the RenderTargetView state is always applied before the ShaderResourceViews (see the top code snippet), even when I set the SRV to nullptr before using it as a RTV like above. This causes errors, because a resource can't be bound as both an input and a render target. What is usually the solution to this? Thanks!
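    One way to tackle it (a sketch, not Urho3D's actual code): resolve the hazard inside SetRenderTarget itself by evicting any cached SRV that references the same resource, and make PrepareDraw flush SRV changes to the context before OMSetRenderTargets so the unbind reaches the device first. GetParentResource() is an assumed helper on the wrapper types:

        void Graphics::SetRenderTarget(RenderTargetView* rtv)
        {
            for (int slot = 0; slot < MAX_SRV_SLOTS; ++slot)
            {
                if (rtv && mCurrentSRVs[slot] &&
                    mCurrentSRVs[slot]->GetParentResource() == rtv->GetParentResource())
                {
                    mCurrentSRVs[slot] = nullptr; // unbind the conflicting input
                    texturesDirty = true;
                }
            }
            mCurrentRenderTargets[0] = rtv;
            renderTargetsDirty = true;
        }

    With this, callers never need the manual SetTexture(diffuseSlot, nullptr) dance; the cache keeps itself consistent.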
  22. I added terrain rendering to my sky + lens-flare rendering pipeline. I render the terrain to a render target and bind it as a texture in the backbuffer pass; I combine everything in the pixel shader of the fullscreen quad:

        return txTerrain.Sample(samLinear, Input.TextureCoords)
             + txSkyDome.Sample(samLinear, Input.TextureCoords)
             + float4(LensFlareHalo * txDirty.Sample(samLinear, Input.TextureCoords).rgb * Intensity, 1);

    The problem is that the terrain is blended with the sky. How do I fix this?

        pImmediateContext->OMSetDepthStencilState(pDSState_DD, 1); // disable depth buffer

        // render sky dome
        pImmediateContext->OMSetRenderTargets(1, &pSkyDomeRTV, pDepthStencilView);
        pImmediateContext->ClearRenderTargetView(pSkyDomeRTV, Colors::Black);
        pImmediateContext->ClearDepthStencilView(pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);

        // creating the sky texture in the cs shader
        float Theta = .2f; // XM_PI*(float)tElapsed/50;
        float Phi = XM_PIDIV4;
        CSCONSTANTBUF cscb;
        cscb.vSun = XMFLOAT3(cos(Theta)*cos(Phi), sin(Theta), cos(Theta)*sin(Phi));
        cscb.MieCoeffs = XMFLOAT3((float)MieCoefficient(m, AerosolRadius, 680), (float)MieCoefficient(m, AerosolRadius, 530), (float)MieCoefficient(m, AerosolRadius, 470));
        cscb.RayleighCoeffs = XMFLOAT3((float)RayleighCoefficient(680), (float)RayleighCoefficient(530), (float)RayleighCoefficient(470));
        cscb.fHeight = 10;
        cscb.fWeight = 10;
        cscb.fWeight2 = 10;
        pImmediateContext->UpdateSubresource(pCSConstBuffer, 0, NULL, &cscb, 0, 0);
        UINT UAVCounts = 0;
        pImmediateContext->CSSetUnorderedAccessViews(0, 1, &pSkyUAV, &UAVCounts);
        pImmediateContext->CSSetConstantBuffers(0, 1, &pCSConstBuffer);
        pImmediateContext->CSSetShader(pComputeShader, NULL, 0);
        pImmediateContext->Dispatch(8, 8, 1);
        pImmediateContext->CSSetUnorderedAccessViews(0, 1, &NULLUAV, 0);
        pImmediateContext->CSSetShader(NULL, NULL, 0);

        // setting dome-relevant vs and ps stuff
        pImmediateContext->IASetInputLayout(pVtxLayout);
        uiStride = sizeof(VTX);
        pImmediateContext->IASetVertexBuffers(0, 1, &pVtxSkyBuffer, &uiStride, &uiOffset);
        pImmediateContext->IASetIndexBuffer(pIndxSkyBuffer, DXGI_FORMAT_R32_UINT, 0);
        pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        pImmediateContext->VSSetConstantBuffers(0, 1, &pVSSkyConstBuffer);
        pImmediateContext->VSSetShader(pVtxSkyShader, NULL, 0);
        pImmediateContext->PSSetShader(pPixlSkyShader, NULL, 0);
        pImmediateContext->PSSetShaderResources(0, 1, &pSkySRV);
        pImmediateContext->PSSetSamplers(0, 1, &SampState);
        mgWorld = XMMatrixTranslation(MyCAM->GetEye().m128_f32[0], MyCAM->GetEye().m128_f32[1], MyCAM->GetEye().m128_f32[2]);

        // drawing the sky dome
        VSCONSTANTBUF cb;
        cb.mWorld = XMMatrixTranspose(mgWorld);
        cb.mView = XMMatrixTranspose(mgView);
        cb.mProjection = XMMatrixTranspose(mgProjection);
        pImmediateContext->UpdateSubresource(pVSSkyConstBuffer, 0, NULL, &cb, 0, 0);
        pImmediateContext->DrawIndexed((UINT)MyFBX->myInds.size(), 0, 0);
        pImmediateContext->VSSetShader(0, NULL, 0);
        pImmediateContext->PSSetShader(0, NULL, 0);
        pImmediateContext->OMSetDepthStencilState(pDSState_DE, 1); // enable depth buffer

        // terrain rendering
        pImmediateContext->OMSetRenderTargets(1, &pTerRTV, pDepthStencilView);
        pImmediateContext->ClearRenderTargetView(pTerRTV, Colors::Black);
        pImmediateContext->ClearDepthStencilView(pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
        mgWorld = XMMatrixIdentity(); // XMMatrixRotationY((float)tElapsed);
        VSCONSTANTBUF tcb;
        tcb.mWorld = XMMatrixTranspose(mgWorld);
        tcb.mView = XMMatrixTranspose(mgView);
        tcb.mProjection = XMMatrixTranspose(mgProjection);
        pImmediateContext->UpdateSubresource(pVSTerrainConstBuffer, 0, NULL, &tcb, 0, 0);

        // setting terrain-relevant vs and ps stuff
        pImmediateContext->IASetInputLayout(pVtxTerrainLayout);
        uiStride = sizeof(TERVTX);
        pImmediateContext->IASetVertexBuffers(2, 1, &pVtxTerrainBuffer, &uiStride, &uiOffset);
        pImmediateContext->IASetIndexBuffer(pIndxTerrainBuffer, DXGI_FORMAT_R32_UINT, 0);
        pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        pImmediateContext->VSSetConstantBuffers(0, 1, &pVSTerrainConstBuffer);
        pImmediateContext->VSSetShader(pVtxTerrainShader, NULL, 0);
        pImmediateContext->PSSetShader(pPixlTerrainShader, NULL, 0);
        pImmediateContext->PSSetShaderResources(0, 1, &pTerrainSRV);
        pImmediateContext->PSSetSamplers(0, 1, &TerrainSampState);
        pImmediateContext->DrawIndexed((UINT)MyTerrain->myInds.size(), 0, 0);
        pImmediateContext->VSSetShader(0, NULL, 0);
        pImmediateContext->PSSetShader(0, NULL, 0);

        // downsampling stage for lens flare
        pImmediateContext->OMSetRenderTargets(1, &pSkyDomeBlurRTV, pDepthStencilView);
        pImmediateContext->ClearRenderTargetView(pSkyDomeBlurRTV, Colors::Black);
        pImmediateContext->ClearDepthStencilView(pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
        uiStride = sizeof(VTX);
        pImmediateContext->IASetInputLayout(pVtxCamLayout);
        pImmediateContext->IASetVertexBuffers(1, 1, &pVtxCamBuffer, &uiStride, &uiOffset);
        pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
        pImmediateContext->VSSetShader(pVtxSkyBlurShader, NULL, 0);
        pImmediateContext->PSSetShader(pPixlSkyBlurShader, NULL, 0);
        pImmediateContext->PSSetShaderResources(0, 1, &pSkyDomeSRV); // sky+dome texture
        pImmediateContext->PSSetSamplers(0, 1, &CamSampState);
        pImmediateContext->Draw(4, 0);
        pImmediateContext->VSSetShader(0, NULL, 0);
        pImmediateContext->PSSetShader(0, NULL, 0);

        // backbuffer stage where lens flare code and terrain texture are set
        pImmediateContext->OMSetRenderTargets(1, &pRenderTargetView, pDepthStencilView);
        pImmediateContext->ClearRenderTargetView(pRenderTargetView, Colors::Black);
        pImmediateContext->ClearDepthStencilView(pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
        // uiStride = sizeof(VTX);
        // pImmediateContext->IASetInputLayout(pVtxCamLayout);
        // pImmediateContext->IASetVertexBuffers(1, 1, &pVtxCamBuffer, &uiStride, &uiOffset);
        // pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
        pImmediateContext->VSSetShader(pVtxCamShader, NULL, 0);
        pImmediateContext->PSSetShader(pPixlCamShader, NULL, 0);
        pImmediateContext->PSSetShaderResources(0, 1, &pSRV);            // dirty texture
        pImmediateContext->PSSetShaderResources(1, 1, &pSkyDomeBlurSRV); // sky+dome blurred texture
        pImmediateContext->PSSetShaderResources(2, 1, &pSkyDomeSRV);     // sky+dome texture
        pImmediateContext->PSSetShaderResources(3, 1, &pTerSRV);         // terrain texture
        pImmediateContext->PSSetSamplers(0, 1, &CamSampState);
        pImmediateContext->Draw(4, 0);
        pImmediateContext->VSSetShader(0, NULL, 0);
        pImmediateContext->PSSetShader(0, NULL, 0);

        pSwapChain->Present(0, 0);
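    A sketch of one fix: composite the layers instead of adding them. This assumes the terrain render target is cleared with a zero-alpha color (note that Colors::Black clears alpha to 1, so a custom float4(0,0,0,0) clear is needed) and that the terrain shader writes alpha = 1:

        float4 terrain = txTerrain.Sample(samLinear, Input.TextureCoords);
        float4 sky = txSkyDome.Sample(samLinear, Input.TextureCoords)
                   + float4(LensFlareHalo * txDirty.Sample(samLinear, Input.TextureCoords).rgb * Intensity, 1);
        return lerp(sky, terrain, terrain.a); // terrain covers the sky wherever it was drawn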
  23. I'm starting out learning graphics programming with DX11 and Win32, and so far I haven't found many quality resources. At the moment I'm trying to string together a few tutorials (some based on DX10) to try to learn the basics. The tutorial I was reading used a D3DX function to compile shaders, but as far as I understand, that library is deprecated? After some digging, I found the D3DCompileFromFile() function, which I'm trying to use. Here is my C++ code:

        HINSTANCE d3d_compiler_lib = LoadLibrary("D3DCompiler_47.DLL");
        assert(d3d_compiler_lib != NULL);

        typedef HRESULT (WINAPI *d3d_shader_compile_func)(
            LPCWSTR, const D3D_SHADER_MACRO *, ID3DInclude *,
            LPCSTR, LPCSTR, UINT, UINT, ID3DBlob **, ID3DBlob **);

        d3d_shader_compile_func D3DCompileFromFile =
            (d3d_shader_compile_func)GetProcAddress(d3d_compiler_lib, "D3DCompileFromFile");
        assert(D3DCompileFromFile != NULL);

        ID3D10Blob *vs, *ps;
        hresult = D3DCompileFromFile(L"basic.shader", NULL, NULL, "VertexShader", "vs_4_0", 0, 0, &vs, NULL);
        assert(hresult == S_OK); // Fails here
        hresult = D3DCompileFromFile(L"basic.shader", NULL, NULL, "PixelShader", "ps_4_0", 0, 0, &ps, NULL);
        assert(hresult == S_OK);
        FreeLibrary(d3d_compiler_lib);

    In the failing assertion, hresult is E_FAIL, which according to MSDN means: "Attempted to create a device with the debug layer enabled and the layer is not installed." I'm a bit lost at this point. I am also not 100% sure my D3DCompileFromFile signature is correct, though it does match the version on MSDN. Any ideas as to why this might be failing? I tried putting in a wrong file name and got (as expected) hresult == D3D11_ERROR_FILE_NOT_FOUND, so at least that's some indication that I haven't totally screwed up the function call. For reference, here is the shader file; I was able to compile it successfully using an online HLSL compiler:

        struct VOut
        {
            float4 position : SV_POSITION;
            float4 color : COLOR;
        };

        VOut VertexShader(float4 position : POSITION, float4 color : COLOR)
        {
            VOut output;
            output.position = position;
            output.color = color;
            return output;
        }

        float4 PixelShader(float4 position : SV_POSITION, float4 color : COLOR) : SV_TARGET
        {
            return color;
        }

    Thanks for your time.
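    A sketch of the standard debugging step: pass an error blob so a failed compile reports the actual compiler message instead of a bare E_FAIL (the MSDN text quoted above describes device creation, not shader compilation). One likely culprit it would reveal: VertexShader and PixelShader are reserved words in HLSL, so they can't be used as entry-point names.

        ID3DBlob *vs = NULL, *errors = NULL;
        HRESULT hr = D3DCompileFromFile(L"basic.shader", NULL, NULL,
                                        "VertexShader", "vs_4_0", 0, 0, &vs, &errors);
        if (FAILED(hr) && errors != NULL)
            OutputDebugStringA((const char *)errors->GetBufferPointer()); // full compiler diagnostics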
  24. Hello, I wrote a MatCap shader following this idea: given the image representing the texture, we compute the sample point by taking the dot product of the vertex normal and the camera position and remapping this to [0,1]. This seems to work well when I look straight at an object with this shader. However, when the camera points slightly to the side, I can see the texture stretch a lot. Could anyone give me a hint as to how to get a nice matcap shader? Here's what I wrote:

        Shader "Unlit/Matcap"
        {
            Properties
            {
                _MainTex ("Texture", 2D) = "white" {}
            }
            SubShader
            {
                Tags { "RenderType"="Opaque" }
                LOD 100

                Pass
                {
                    CGPROGRAM
                    #pragma vertex vert
                    #pragma fragment frag
                    #include "UnityCG.cginc"

                    struct appdata
                    {
                        float4 vertex : POSITION;
                        float3 normal : NORMAL;
                    };

                    struct v2f
                    {
                        float2 worldNormal : TEXCOORD0;
                        float4 vertex : SV_POSITION;
                    };

                    sampler2D _MainTex;

                    v2f vert (appdata v)
                    {
                        v2f o;
                        o.vertex = UnityObjectToClipPos(v.vertex);
                        o.worldNormal = mul((float3x3)UNITY_MATRIX_V, UnityObjectToWorldNormal(v.normal)).xy*0.3 + 0.5;
                        //UnityObjectToClipPos(v.normal)*0.5 + 0.5;
                        return o;
                    }

                    fixed4 frag (v2f i) : SV_Target
                    {
                        // sample the texture
                        fixed4 col = tex2D(_MainTex, i.worldNormal);
                        return col;
                    }
                    ENDCG
                }
            }
        }

    Thanks!
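    A guess at a fix, not a guaranteed one: normalize the view-space normal before the remap and use the conventional 0.5 scale/bias. An unnormalized normal (and the 0.3 scale) compresses and skews the lookup, which reads as stretching when the camera turns:

        v2f vert (appdata v)
        {
            v2f o;
            o.vertex = UnityObjectToClipPos(v.vertex);
            float3 viewNormal = normalize(mul((float3x3)UNITY_MATRIX_V, UnityObjectToWorldNormal(v.normal)));
            o.worldNormal = viewNormal.xy * 0.5 + 0.5;
            return o;
        }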
  25. Hi, I am trying to brute-force a closest-point-to-closed-triangle-mesh algorithm on the GPU by creating a thread for each point-primitive pair and keeping only the nearest result for each point. This code fails, however, with multiple writes being made by threads holding different distance computations. To keep only the closest value, I attempt to mask using InterlockedMin, plus a conditional that only writes if the current thread holds the same value as the mask after a memory barrier. I have included the function below. As can be seen, I have modified it to write to a different location every time the conditional succeeds, for debugging. It is expected that multiple writes will take place, for example where the closest point is a vertex shared by multiple triangles, but when I read back closestPoints and calculate the distances, they are different, which should not be possible. The differences are large (~0.3+), so I do not think it is a rounding error. The CPU equivalent works fine for a single particle. After the kernel execution, distanceMask does hold the smallest value, suggesting the problem is with the barrier or the conditional. Can anyone say what is wrong with the function?

        RWStructuredBuffer<uint> distanceMask : register(u4);
        RWStructuredBuffer<uint> distanceWriteCounts : register(u0);
        RWStructuredBuffer<float3> closestPoints : register(u5);

        [numthreads(64,1,1)]
        void BruteForceClosestPointOnMesh(uint3 id : SV_DispatchThreadID)
        {
            int particleid = id.x;
            int triangleid = id.y;

            Triangle t = triangles[triangleid];
            float3 v0 = GetVertex1(t.i0);
            float3 v1 = GetVertex1(t.i1);
            float3 v2 = GetVertex1(t.i2);
            float3 q1 = Q1[particleid];

            ClosestPointPointTriangleResult result = ClosestPointPointTriangle(q1, v0, v1, v2);
            float3 p = v0 * result.uvw.x + v1 * result.uvw.y + v2 * result.uvw.z;

            uint distance = asuint(length(p - q1));
            InterlockedMin(distanceMask[particleid], distance);

            AllMemoryBarrierWithGroupSync();

            if (distance == distanceMask[particleid])
            {
                uint bin = 0;
                InterlockedAdd(distanceWriteCounts[particleid], 1, bin);
                closestPoints[particleid * binsize + bin] = p;
            }
        }
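    A sketch of a two-pass fix, assuming the same buffers as the post (ComputeClosestPoint is an assumed helper wrapping the triangle/distance math above). The key point: AllMemoryBarrierWithGroupSync only synchronizes threads within one group, and the triangles competing for a particle's minimum span many groups, so the mask may not be final when the single-dispatch kernel tests it.

        // Pass 1: reduce only; no candidate points are written yet.
        [numthreads(64,1,1)]
        void MinPass(uint3 id : SV_DispatchThreadID)
        {
            float3 p;
            uint d = ComputeClosestPoint(id.x, id.y, p);
            InterlockedMin(distanceMask[id.x], d);
        }

        // Pass 2: run after the first Dispatch completes; the mask is now final,
        // so only true winners (possibly several ties) write their point.
        [numthreads(64,1,1)]
        void WritePass(uint3 id : SV_DispatchThreadID)
        {
            float3 p;
            uint d = ComputeClosestPoint(id.x, id.y, p);
            if (d == distanceMask[id.x])
                closestPoints[id.x] = p;
        }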