
Search the Community

Showing results for tags 'DX11'.



Found 1482 results

  1. I'm continuing to learn more about terrain rendering, and so far I've managed to load in a heightmap and render it as a tessellated wireframe (following Frank Luna's DX11 book). However, I'm getting some really weird behavior where a large section of the wireframe is rendered with a yellow color, even though my pixel shader is hard-coded to output white. The parts of the mesh that are discolored change as well, as pictured below (the mesh is being clipped by the far plane). Here is my pixel shader; as mentioned, I simply hard-code it to output white:

        float PS(DOUT pin) : SV_Target
        {
            return float4(1.0f, 1.0f, 1.0f, 1.0f);
        }

     I'm completely lost on what could be causing this, so any help in the right direction would be greatly appreciated. If I can help by providing more information, please let me know.
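     One detail worth pointing out in the quoted shader: the entry point is declared to return a scalar float but the body returns a float4, so the compiler truncates the value and only the first component of the render-target write is defined by the output signature; the remaining channels are undefined, which could plausibly produce the kind of tint described. A version with a matching return type would look like the sketch below (DOUT is assumed to be the domain-shader output struct from the book's sample):

        // Sketch: same hard-coded white output, but with a float4 return type so all
        // four channels of the render target are actually written by the shader.
        float4 PS(DOUT pin) : SV_Target
        {
            return float4(1.0f, 1.0f, 1.0f, 1.0f);
        }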
  2. Hello, I am trying to implement voxel cone tracing in my game engine. I have read many publications about this, but some crucial portions are still not clear to me. As a first step I am trying to implement the easiest "poor man's" method:
     a. my test scene "Sponza Atrium" is voxelized completely into a static 128^3 voxel grid (a structured buffer contains the albedo)
     b. I don't care about "conservative rasterization" and don't use any sparse voxel access structure
     c. every voxel has the same color for every side (top, bottom, front, ...)
     d. one directional light injects light into the voxels (another structured buffer)
     I will try to state what I think is correct (please correct me). GI lighting for a given vertex, in an ideal method:
     A. we would shoot many (e.g. 1000) rays into the hemisphere oriented along the normal of that vertex
     B. we would take into account every occluder (which is a lot of work) and sample the color at the hit point
     C. according to the angle between the ray and the vertex normal we would weight the color (cosine), sum up all samples, and divide by the number of rays
     Voxel GI lighting: in principle we want to do the same thing with our voxel structure. Even if we knew the correct hit points for the vertex, we would still have the task of computing the weighted sum of many voxels.
     Saving time when summing up voxel colors: to save the time needed for the weighted sum of the colors of each voxel, we build bricks or clusters. Every 8 neighbouring voxels make a "cluster voxel" of level 1 (this is done recursively for many levels). The color of a side of a "cluster voxel" is the average of the colors of the four contained voxel sides with the same orientation. After doing this we can sample the far-away parts just by sampling the corresponding "cluster voxel" at the corresponding level and get the summed-up color. In practice this is done by mip-mapping a texture that contains the voxel colors, which also places the colors of neighbouring voxels close together in the texture.
     Cone tracing, how? Here my understanding is confused. How is the voxel structure traced efficiently? I simply cannot understand how the occlusion problem is solved quickly, so that we know which single voxel or "cluster voxel" of which level we have to sample. Suppose I am in a dark room filled with many boxes of different sizes and I have a pocket lamp, e.g. with a pyramid-shaped light cone: I would see some single voxels near or far, and I would also see many different boxes ("clustered voxels") of different sizes which are partly occluded. How do I compute a weighted sum over this lit area? E.g. if I want to sample a "clustered voxel" at level 4, I have to take into account what percentage of the area of this "clustered voxel" is occluded.
     Please be patient with me, I really try to understand, but maybe I need some more explanation than others. Best regards, evelyn
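     For what it's worth, in the usual cone-tracing formulation occlusion is never resolved exactly: the alpha channel of each voxel (and of each mip/"cluster" level) stores average occupancy, the cone is marched front to back, at each step the mip whose footprint matches the cone diameter at that distance is sampled, and samples are composited with (1 - accumulated alpha), so far clusters behind nearly opaque near samples simply contribute almost nothing. The full GI then traces a handful of wide cones (e.g. 5-9) over the hemisphere, each weighted by the cosine term, instead of 1000 rays. A minimal sketch of a single cone march follows; gVoxelWorldSize, gMaxTraceDistance and WorldToVoxelUVW() are assumptions, not names from the post:

        Texture3D<float4> gVoxelRadiance : register(t0);   // voxel colors + occupancy, full mip chain
        SamplerState gLinearSampler : register(s0);

        // gVoxelWorldSize = world-space size of one voxel, gMaxTraceDistance = cutoff,
        // WorldToVoxelUVW() maps a world position into [0,1]^3 of the grid (all assumed helpers).
        float4 ConeTrace(float3 origin, float3 dir, float halfAngleTan)
        {
            float4 accum = 0.0f;                 // rgb = gathered radiance, a = accumulated occlusion
            float dist = gVoxelWorldSize;        // start one voxel out to avoid self-sampling
            while (dist < gMaxTraceDistance && accum.a < 1.0f)
            {
                // The cone footprint at this distance selects the mip (the "cluster voxel" level).
                float diameter = max(gVoxelWorldSize, 2.0f * halfAngleTan * dist);
                float mip = log2(diameter / gVoxelWorldSize);
                float4 s = gVoxelRadiance.SampleLevel(gLinearSampler, WorldToVoxelUVW(origin + dir * dist), mip);
                // Front-to-back compositing: what has already been accumulated occludes what
                // lies behind it -- this replaces the exact per-sample visibility test.
                accum += (1.0f - accum.a) * s;
                dist += diameter * 0.5f;         // step size grows with the footprint
            }
            return accum;
        }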
  3. Hi, I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer is throwing error messages that this is not supported in the spec, so it might not work on other hardware. There is also the possibility to flip the clip-space position Y coordinate before writing it out from the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I would need to track down every place in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
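     For reference, negative viewport heights were legalized by the VK_KHR_maintenance1 extension (promoted to core in Vulkan 1.1), which is likely why the validation layer complains when that extension isn't enabled. A minimal sketch, assuming the extension (or Vulkan 1.1) is available and cmd/renderWidth/renderHeight are the obvious values:

        // Flipped viewport under VK_KHR_maintenance1: origin at the bottom, negative height.
        VkViewport vp = {};
        vp.x        = 0.0f;
        vp.y        = (float)renderHeight;    // start at the bottom edge...
        vp.width    = (float)renderWidth;
        vp.height   = -(float)renderHeight;   // ...and extend upward
        vp.minDepth = 0.0f;
        vp.maxDepth = 1.0f;
        vkCmdSetViewport(cmd, 0, 1, &vp);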
  4. Hi guys, when I do picking followed by ray-plane intersection the results are all wrong. I am pretty sure my ray-plane intersection is correct, so I'll just show the picking part. Please take a look:

        // get projection_matrix
        DirectX::XMFLOAT4X4 mat;
        DirectX::XMStoreFloat4x4(&mat, projection_matrix);

        float2 v;
        v.x = (((2.0f * (float)mouse_x) / (float)screen_width) - 1.0f) / mat._11;
        v.y = -(((2.0f * (float)mouse_y) / (float)screen_height) - 1.0f) / mat._22;

        // get inverse of view_matrix
        DirectX::XMMATRIX inv_view = DirectX::XMMatrixInverse(nullptr, view_matrix);
        DirectX::XMStoreFloat4x4(&mat, inv_view);

        // create ray origin (camera position)
        float3 ray_origin;
        ray_origin.x = mat._41;
        ray_origin.y = mat._42;
        ray_origin.z = mat._43;

        // create ray direction
        float3 ray_dir;
        ray_dir.x = v.x * mat._11 + v.y * mat._21 + mat._31;
        ray_dir.y = v.x * mat._12 + v.y * mat._22 + mat._32;
        ray_dir.z = v.x * mat._13 + v.y * mat._23 + mat._33;

     That should give me a ray origin and direction in world space, but when I do the ray-plane intersection the results are all wrong. If I click on the bottom half of the screen, ray_dir.z becomes negative (more so the lower I click). I don't understand how that can be; shouldn't it always be pointing down the z-axis? I had this working in the past but I can't find my old code. Please help. Thank you.
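     A handy sanity check is to let DirectXMath do the unprojection and compare the result against the hand-rolled ray above. A minimal sketch using the same view_matrix, projection_matrix, mouse coordinates and screen size as in the post (world matrix assumed identity):

        // Unproject the cursor at the near plane (z=0) and far plane (z=1), then form the ray.
        using namespace DirectX;
        XMVECTOR nearPt = XMVector3Unproject(
            XMVectorSet((float)mouse_x, (float)mouse_y, 0.0f, 1.0f),
            0.0f, 0.0f, (float)screen_width, (float)screen_height, 0.0f, 1.0f,
            projection_matrix, view_matrix, XMMatrixIdentity());
        XMVECTOR farPt = XMVector3Unproject(
            XMVectorSet((float)mouse_x, (float)mouse_y, 1.0f, 1.0f),
            0.0f, 0.0f, (float)screen_width, (float)screen_height, 0.0f, 1.0f,
            projection_matrix, view_matrix, XMMatrixIdentity());
        XMVECTOR rayOrigin = nearPt;
        XMVECTOR rayDir    = XMVector3Normalize(XMVectorSubtract(farPt, nearPt));

     If this ray and the hand-written one diverge, the bug is in the picking math; if they agree, the ray-plane intersection is the place to look after all.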
  5. Hello, in my game engine I want to implement my own bone-weight painting tool, that is, a virtual brush painting tool for a mesh. I have already implemented my own "dual quaternion skinning" animation system with "morphs" (= blend shapes) and "bone driven" "corrective morphs" (= a morph that depends on a bending or twisting bone). But now I have no idea which is the best method to implement a brush painting system. Just some proposals:
     a. I would build a kind of additional "vertex structure" that can help me find the surrounding (neighbouring) vertex indices for a given "central vertex" index
     b. the structure should also give information about the distance from the neighbouring vertices to the given "central vertex"
     c. calculate the strength of the color added to the "central vertex" and the neighbouring vertices with a formula using linear or quadratic distance falloff
     d. the central vertex would be detected as the vertex hit by an orthogonal projection from my cursor (= brush) in world space onto the mesh; but my problem is that several vertices could be hit simultaneously. E.g. if I want to paint the inward side of the left leg, the right leg will also be hit.
     I think the given problem is quite typical and there are standard approaches that I don't know. Any help or tutorials are welcome. P.S. I am working with SharpDX, DirectX11.
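     One standard approach for (a)-(d) is to pick only the nearest ray hit under the cursor as the central vertex, and then grow the brush along mesh edges (an edge-walk / geodesic-style distance) instead of comparing raw world-space distances; that is what keeps the opposite leg from being painted even when it lies inside the brush radius. A rough C# sketch under those assumptions (the adjacency list, positions array and weights array are guesses about your data layout, not SharpDX APIs):

        // Breadth-first brush falloff over an adjacency list; neighbours[v] holds the
        // vertex indices that share an edge with v (built once from the index buffer).
        void PaintWeights(int centralVertex, float radius, float strength,
                          Vector3[] positions, List<int>[] neighbours, float[] weights)
        {
            var distance = new Dictionary<int, float> { [centralVertex] = 0.0f };
            var queue = new Queue<int>();
            queue.Enqueue(centralVertex);
            while (queue.Count > 0)
            {
                int v = queue.Dequeue();
                float dv = distance[v];
                // Linear falloff; a quadratic brush would use (1 - dv/radius)^2 instead.
                // (A production version would defer this write until distances have settled.)
                weights[v] = Math.Min(1.0f, weights[v] + strength * (1.0f - dv / radius));
                foreach (int n in neighbours[v])
                {
                    float dn = dv + Vector3.Distance(positions[v], positions[n]);
                    if (dn < radius && (!distance.ContainsKey(n) || dn < distance[n]))
                    {
                        distance[n] = dn;
                        queue.Enqueue(n);
                    }
                }
            }
        }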
  6. Hi, I'm implementing a simple 3D engine based on DirectX11. I'm trying to render a skybox with a cubemap on it, and to do so I'm using the DDS Texture Loader from the DirectXTex library. I use texassemble to generate the cubemap (a texture array of 6 textures) into a DDS file that I load at runtime. I generated a cube "dome" and sample the texture using the position vector of the vertex as the sample coordinates (so far so good), but I always get the same face of the cubemap mapped onto the sky. As I look around I always get the same face (and it wobbles a bit if I move the camera). My code:

        // Texture.cpp:
        Texture::Texture(const wchar_t *textureFilePath, const std::string &textureType) : mType(textureType)
        {
            //CreateDDSTextureFromFile(Game::GetInstance()->GetDevice(), Game::GetInstance()->GetDeviceContext(), textureFilePath, &mResource, &mShaderResourceView);
            CreateDDSTextureFromFileEx(Game::GetInstance()->GetDevice(), Game::GetInstance()->GetDeviceContext(), textureFilePath, 0,
                D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE, 0, D3D11_RESOURCE_MISC_TEXTURECUBE, false,
                &mResource, &mShaderResourceView);
        }

        // SkyBox.cpp:
        void SkyBox::Draw()
        {
            // set cube map
            ID3D11ShaderResourceView *resource = mTexture.GetResource();
            Game::GetInstance()->GetDeviceContext()->PSSetShaderResources(0, 1, &resource);
            // set primitive topology
            Game::GetInstance()->GetDeviceContext()->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
            mMesh.Bind();
            mMesh.Draw();
        }

        // Vertex Shader:
        cbuffer Transform : register(b0)
        {
            float4x4 viewProjectionMatrix;
        };

        float4 main(inout float3 pos : POSITION) : SV_POSITION
        {
            return mul(float4(pos, 1.0f), viewProjectionMatrix);
        }

        // Pixel Shader:
        SamplerState cubeSampler;
        TextureCube cubeMap;

        float4 main(in float3 pos : POSITION) : SV_TARGET
        {
            float4 color = cubeMap.Sample(cubeSampler, pos.xyz);
            return color;
        }

     I tried both functions from the DDS loader but I keep getting the same result. All the results I found on the web are about the old SDK toolkits, but I'm using the new DirectXTex lib.
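     Not necessarily the fix, but for comparison here is the shape a skybox vertex shader commonly takes: the untransformed local-space cube position is passed through as an explicit lookup direction, the view part of viewProjectionMatrix is built without translation so the dome follows the camera, and z is pinned to w so the sky always lands on the far plane (with a LESS_EQUAL depth test). Only the cbuffer layout is taken from the post; everything else is an assumption:

        cbuffer Transform : register(b0)
        {
            float4x4 viewProjectionMatrix;   // ideally view (translation stripped) * projection
        };

        struct VSOut
        {
            float4 clipPos  : SV_POSITION;
            float3 localPos : POSITION;      // used as the cubemap lookup direction
        };

        VSOut main(float3 pos : POSITION)
        {
            VSOut o;
            o.localPos = pos;
            o.clipPos  = mul(float4(pos, 1.0f), viewProjectionMatrix).xyww;  // z = w -> depth 1.0
            return o;
        }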
  7. Hi guys, I want to draw the shadows of a directional light, but the shadows always disappear if I translate my mesh (cube) in the world too far outside the bounds of my orthographic projection matrix. This is my code (based on an XNA sample I adapted for my project):

        // Matrix that will rotate points into the direction of the light
        Matrix lightRotation = Matrix.LookAtLH(Vector3.Zero, lightDir, Vector3.Up);
        BoundingFrustum cameraFrustum = new BoundingFrustum(Matrix.Identity);

        // Get the corners of the frustum
        Vector3[] frustumCorners = cameraFrustum.GetCorners();

        // Transform the positions of the corners into the direction of the light
        for (int i = 0; i < frustumCorners.Length; i++)
            frustumCorners[i] = Vector4F.ToVector3(Vector3.Transform(frustumCorners[i], lightRotation));

        // Find the smallest box around the points
        BoundingBox lightBox = BoundingBox.FromPoints(frustumCorners);
        Vector3 boxSize = lightBox.Maximum - lightBox.Minimum;
        Vector3 halfBoxSize = boxSize * 0.5f;

        // The position of the light should be in the center of the back panel of the box.
        Vector3 lightPosition = lightBox.Minimum + halfBoxSize;
        lightPosition.Z = lightBox.Minimum.Z;

        // We need the position back in world coordinates so we transform
        // the light position by the inverse of the light's rotation
        lightPosition = Vector4F.ToVector3(Vector3.Transform(lightPosition, Matrix.Invert(lightRotation)));

        // Create the view matrix for the light
        this.view = Matrix.LookAtLH(lightPosition, lightPosition + lightDir, Vector3.Up);

        // Create the projection matrix for the light
        // The projection is orthographic since we are using a directional light
        int amount = 25;
        this.projection = Matrix.OrthoOffCenterLH(boxSize.X - amount, boxSize.X + amount, boxSize.Y + amount, boxSize.Y - amount, -boxSize.Z - amount, boxSize.Z + amount);

     I believe the bug is that cameraFrustum is constructed from Matrix.Identity. I also tried a translation matrix of my camera position and also the view matrix of my camera, but without success. Can anyone tell me how to draw the shadows of my directional light wherever my camera currently is in the scene? Greets, Benjamin
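     In the XNA sample this code comes from, the frustum that gets wrapped by the light's ortho box is the camera's actual view frustum, rebuilt every frame from the combined view-projection matrix; with Matrix.Identity the box never moves with the camera. A minimal sketch of that change (camera.View and camera.Projection are assumed names for your camera's matrices):

        // Fit the light's box around what the camera can currently see.
        BoundingFrustum cameraFrustum = new BoundingFrustum(camera.View * camera.Projection);
        Vector3[] frustumCorners = cameraFrustum.GetCorners();
        // ...the rest of the fitting code above stays the same and now follows the camera.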
  8. Hello, I used the OMSetRenderTargetsAndUnorderedAccessViews function to set a UAV resource in the pixel shader. Everything is OK and there is no warning/error message. However, VS Graphics Diagnostics encounters a fatal error when I try to open the frame I captured. I've tried another PC and another project (the Intel sample, which also binds a UAV resource to the pixel shader), but the same error still occurs. It's very hard to debug the shader without such a tool. If anyone knows what I should do, please tell me. Thanks for your reply! BTW, my Graphics Diagnostics engine version is 15.6.5, the DirectX feature level is 11_0, and the shader model is 5_0. Thx!
  9. The DirectX team has just published a blog post / article with a call to action for game developers, to change swapchain usage patterns: https://blogs.msdn.microsoft.com/directx/2018/04/09/dxgi-flip-model/ I wanted to get some visibility on it, as well as start a discussion to see if there's any feedback from folks who have gone down this road in the past, or to hear from anybody who's trying this out as a result of this article.
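     For anyone who wants to try it out, the core of the change the article asks for is creating the swap chain with one of the flip-model swap effects. A minimal sketch (dxgiFactory is an IDXGIFactory2, device and hwnd already exist; error handling and the waitable-object/latency options the article also discusses are omitted):

        // Flip-model swap chain description; width/height of 0 means "use the window size".
        DXGI_SWAP_CHAIN_DESC1 desc = {};
        desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
        desc.SampleDesc.Count = 1;   // flip model does not allow MSAA back buffers
        desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        desc.BufferCount      = 2;   // flip model needs at least two buffers
        desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_DISCARD;   // or FLIP_SEQUENTIAL on Windows 8
        IDXGISwapChain1* swapChain = nullptr;
        HRESULT hr = dxgiFactory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &swapChain);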
  10. Hi guys. I've been trying to solve this for a few days now but I'm getting nowhere. I am having trouble correctly displaying a model when using a diffuse and ambient light combination. In the picture you can see 2 larger cubes: the top one uses only ambient light, and the bottom one uses the ambient + diffuse combination. The smaller cube just shows where the light is located. Can someone help me understand why the cube is rendered the way it is? I was thinking it might be a problem with the normals. I tried several different normal calculations and none of them work. Or maybe it's not a problem with the normals? I didn't want to post any code until someone gives me a clue about what might be happening. Thanks
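     Without the code it's only a guess, but two cube-specific gotchas cause most pictures like this: each face needs its own four vertices carrying a face normal (24 vertices, not 8 shared ones, otherwise the averaged corner normals smear the lighting), and the normal must be rotated by the world matrix and re-normalized before the dot product. For reference, a minimal ambient + diffuse (Lambert) pixel shader looks like this sketch (all names are assumptions):

        cbuffer LightParams : register(b1)
        {
            float3 lightDirection;   // normalized, pointing from the light toward the scene
            float3 lightColor;
            float3 ambientColor;
            float3 materialColor;
        };

        float4 PS(float4 pos : SV_POSITION, float3 worldNormal : NORMAL) : SV_Target
        {
            float3 n = normalize(worldNormal);                 // re-normalize after interpolation
            float ndotl = saturate(dot(n, -lightDirection));   // diffuse term
            float3 color = materialColor * (ambientColor + ndotl * lightColor);
            return float4(color, 1.0f);
        }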
  11. I used DirectX in projects on Borland C++ Builder 6.0. Microsoft's .libs don't work with Builder, so I took special .lib files from here: http://www.clootie.ru/cbuilder/index.html#DX_CBuilder_SDKs Now I've moved to C++ Builder 10 Berlin and have to find a way to attach DirectX to my project again. I've searched the web but found nothing on how to get access to DirectX in Embarcadero Builders, only old information on Borland Builder and old .libs. The DirectX SDK .libs still can't be used with the new Builder 10 because of the incompatible format. My question is: did anyone use DirectX with Embarcadero Builder, and how did you solve the .libs problem? Can anyone give me a guide or example of how to make DirectX accessible in a Builder 10 project? Why is there no information on this anywhere?
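     One compiler-agnostic workaround, if converting import libraries keeps failing, is to skip the .lib entirely and load the DirectX entry point at runtime; the headers are all that is needed. Sketched here for D3D11 since that's the tag (the same LoadLibrary/GetProcAddress pattern works for Direct3DCreate9 and friends), assuming the Windows SDK headers are on Builder's include path:

        #include <windows.h>
        #include <d3d11.h>   // headers only; no d3d11.lib is linked with this approach

        // PFN_D3D11_CREATE_DEVICE is the function-pointer typedef declared in d3d11.h.
        HMODULE d3d11 = LoadLibraryW(L"d3d11.dll");
        PFN_D3D11_CREATE_DEVICE pCreateDevice =
            (PFN_D3D11_CREATE_DEVICE)GetProcAddress(d3d11, "D3D11CreateDevice");
        // pCreateDevice can now be called exactly like D3D11CreateDevice.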
  12. Hi there, I am rendering my game to render textures but I am having difficulty figuring out how to scale the result to fill the window. The window looks like this: and the render texture looks like this (it's the window resolution downscaled by 4): (I implemented a screenshot function for the render texture, which has proved very useful getting this working so far). My vertex shader is the classic "draw a fullscreen triangle without binding a vertex or index buffer" as seen many times on this site:

        PS_IN_PosTex main(uint id : SV_VertexID)
        {
            PS_IN_PosTex output;
            output.tex = float2((id << 1) & 2, id & 2);
            output.pos = float4(output.tex * float2(2, -2) + float2(-1, 1), 0, 1);
            return output;
        }

     and the pixel shader is simply:

        Texture2D txDiffuse : register(t0);
        SamplerState samp : register(s0);

        float4 main(PS_IN_PosTex input) : SV_TARGET
        {
            return txDiffuse.Sample(samp, input.tex);
        }

     Can someone please give me a clue as to how to scale this correctly? Many thanks, Andy
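     The scaling itself comes for free from the sampler: draw the fullscreen triangle into the back buffer with a viewport sized to the window (not to the render texture), and the low-resolution texture gets stretched to fill it. A minimal sketch of that final pass (names such as backBufferRTV, renderTextureSRV and linearClampSampler are assumptions):

        // Upscale pass: bind the window-sized target, set a window-sized viewport,
        // then draw the 3-vertex SV_VertexID triangle shown above.
        context->OMSetRenderTargets(1, &backBufferRTV, nullptr);
        D3D11_VIEWPORT vp = {};
        vp.Width    = (float)windowWidth;     // full window, not the render-texture size
        vp.Height   = (float)windowHeight;
        vp.MaxDepth = 1.0f;
        context->RSSetViewports(1, &vp);
        context->PSSetShaderResources(0, 1, &renderTextureSRV);
        context->PSSetSamplers(0, 1, &linearClampSampler);    // LINEAR filtering does the scaling
        context->Draw(3, 0);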
  13. Hello guys, I tried to draw shadows in my scene with a directional light, but in the end result I see the shadow plus the light's view volume as well (see image). Four years ago an acquaintance, whom I'm no longer in contact with, wrote me a shadow-map shader and the geometry shader for this, but never fixed the problem, so I don't really have much knowledge of the shadow math and never fixed it myself. I also tried to filter out the light's view-space background with: if (shadow > 0.15) shadow = 1.0f (shadow = lightIntensity). But with this filter the shadows of complex geometry look awful. My bias value is 0.0001f. Can anyone help me and explain what is wrong? Greets, Benjamin. P.S. I uploaded the two HLSL shaders in the attachment: ShadowMap.fx SimpleShader.fx
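     Without seeing the attached shaders it's hard to be definitive, but the symptom (the light's projected rectangle showing up in the scene) often comes from shading pixels that fall outside the shadow map as if they were shadowed. A common guard is to treat anything outside the light's projected [0,1] range as fully lit, as in this sketch (projTexCoord, pixelDepth, bias, shadowMap and shadowSampler are assumed names):

        // Anything the shadow map does not cover is considered lit, so the edge of the
        // light's view volume stops appearing as a darkened rectangle in the scene.
        float shadow = 1.0f;
        if (projTexCoord.x >= 0.0f && projTexCoord.x <= 1.0f &&
            projTexCoord.y >= 0.0f && projTexCoord.y <= 1.0f)
        {
            float shadowMapDepth = shadowMap.Sample(shadowSampler, projTexCoord).r;
            shadow = (pixelDepth - bias > shadowMapDepth) ? 0.0f : 1.0f;
        }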
  14. Hey everyone, I've used tools like Intel GPA in the past and I would like to continue doing that. But in my current work environment I can't find a tool that can analyze rendering without a swap chain and without calling Present(). In addition, the program I'm working on uses WPF for its UI, which uses a D3D9Ex device, and some tools attach to that device instead. So, are there any tools that allow debugging D3D11 without depending on a swap chain? Thanks for any help!
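     One tool worth evaluating for this is RenderDoc: its in-application API lets the program bracket an arbitrary stretch of work as a "frame", so neither a swap chain nor Present() is required, and it won't latch onto WPF's D3D9Ex device. A sketch of the integration (renderdoc_app.h ships with RenderDoc; the module is only present when the app was launched through RenderDoc, so the code degrades gracefully otherwise):

        #include "renderdoc_app.h"

        RENDERDOC_API_1_1_2* rdoc = nullptr;
        if (HMODULE mod = GetModuleHandleA("renderdoc.dll"))
        {
            pRENDERDOC_GetAPI RENDERDOC_GetAPI =
                (pRENDERDOC_GetAPI)GetProcAddress(mod, "RENDERDOC_GetAPI");
            RENDERDOC_GetAPI(eRENDERDOC_API_Version_1_1_2, (void**)&rdoc);
        }

        if (rdoc) rdoc->StartFrameCapture(nullptr, nullptr);
        // ... issue the D3D11 work to inspect ...
        if (rdoc) rdoc->EndFrameCapture(nullptr, nullptr);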
  15. I'm trying to modify the D3D11 example to draw a texture onto the triangle rather than the colours, but now I'm not seeing the triangle at all. What am I missing? Code: https://pastebin.com/KVMTq9r6 MiniTri.fx https://pastebin.com/0w1myEg0
  16. So, I am trying to do something fairly simple: copy the contents of one texture into another and then copy the result texture back into the first. (I'll also do some additional editing on the result texture, and that's why I am copying the result back.) But I am getting some strange results and the photo is losing quality. Here is the compute shader I've used:

        Texture2D ObjTexture : register(t0);
        RWTexture2D<float4> ObjResult : register(u0);
        SamplerState ObjWrapSampler : register(s0);

        [numthreads(32, 32, 1)]
        void main(uint3 DTid : SV_DispatchThreadID)
        {
            float width, height;
            ObjTexture.GetDimensions(width, height);
            width -= 1;  // X = [0 ... width-1]
            height -= 1; // Y = [0 ... height - 1]
            float2 uv = float2(DTid.xy) / float2(width, height);
            ObjResult[DTid.xy] = ObjTexture.SampleLevel(ObjWrapSampler, uv, 0);
            return;
        }

     and here is how I copy the result image back into the input one:

        ID3D11Resource *inputTexture;
        inputTextureSRV->GetResource(&inputTexture);
        mContext->CopySubresourceRegion(inputTexture, 0, 0, 0, 0, mTexture.Get(), 0, NULL);

     I also tried copying the first texture into the second texture and using an unordered access view on the first texture, but I got the same result... Is there something I am doing wrong here? This is the original texture. This is the texture after ~60 passes. And this is after ~240 passes. This is 'the final result' because it doesn't change anymore. As you can see, the image has lost its quality.
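     One thing that stands out in the compute shader (a sketch of the alternatives, not a guaranteed fix): the UV is built as DTid / (dim - 1), but bilinear sampling addresses texel centers at (DTid + 0.5) / dim, so every pass resamples the image slightly off-grid and blends neighbouring texels, blurring it a little more each time. For a straight copy, a texel load avoids filtering altogether:

        // 1:1 copy with no filtering at all:
        ObjResult[DTid.xy] = ObjTexture[DTid.xy];

        // If filtered sampling is really wanted, sample at texel centers so repeated
        // passes don't drift by half a texel (note: width/height NOT reduced by 1 here):
        float2 uv = (float2(DTid.xy) + 0.5f) / float2(width, height);
        ObjResult[DTid.xy] = ObjTexture.SampleLevel(ObjWrapSampler, uv, 0);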
  17. I'm following Rastertek tutorial 14 (http://rastertek.com/tertut14.html). The problem is that slope-based texturing doesn't work in my application. There are plenty of slopes in my terrain, but none of them get the slope color.

        float4 PSMAIN(DS_OUTPUT Input) : SV_Target
        {
            float4 grassColor;
            float4 slopeColor;
            float4 rockColor;
            float slope;
            float blendAmount;
            float4 textureColor;

            grassColor = txTerGrassy.Sample(SSTerrain, Input.TextureCoords);
            slopeColor = txTerMossRocky.Sample(SSTerrain, Input.TextureCoords);
            rockColor = txTerRocky.Sample(SSTerrain, Input.TextureCoords);

            // Calculate the slope of this point.
            slope = (1.0f - Input.LSNormal.y);

            if (slope < 0.2)
            {
                blendAmount = slope / 0.2f;
                textureColor = lerp(grassColor, slopeColor, blendAmount);
            }
            if ((slope < 0.7) && (slope >= 0.2f))
            {
                blendAmount = (slope - 0.2f) * (1.0f / (0.7f - 0.2f));
                textureColor = lerp(slopeColor, rockColor, blendAmount);
            }
            if (slope >= 0.7)
            {
                textureColor = rockColor;
            }

            return float4(textureColor.rgb, 1);
        }

     Can anyone help me? Thanks.
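     A quick way to narrow this down is to visualize the slope value itself instead of the blended textures; if the whole terrain renders black, Input.LSNormal is effectively always (0, 1, 0), meaning the per-vertex normals never make it through the tessellation stages, rather than the blending logic being wrong. A sketch of the temporary debug body (drop it in right after the texture samples):

        // Debug view: slope as grayscale. Normalizing first also rules out
        // unnormalized normals making the slope value smaller than expected.
        float slope = 1.0f - normalize(Input.LSNormal).y;
        return float4(slope, slope, slope, 1.0f);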
  18. Hi all, as part of the debug drawing system in my engine, I want to add support for rendering simple text on screen (HUD style). From what I've read there are a few options, in short: 1. Write your own font sprite renderer 2. Use Direct2D/DirectWrite, combined with the DX11 render target/back buffer 3. Use an external library, like the DirectX Tool Kit, etc. I want to go for number 2, but articles/documentation confused me a bit. Some say you need to create a DX10 device to be able to do this, because it doesn't work directly with the DX11 device. But other articles say that this was 'patched' later on and should work now. Can someone shed some light on this and ideally provide an example or article on how to set it up? All input is appreciated.
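     The "DX10 device" advice dates from Direct2D 1.0, which could only talk to D3D10.1 and therefore needed a second device and a shared surface. Direct2D 1.1 (Windows 8, and Windows 7 with the Platform Update) creates its device straight from the D3D11 device's DXGI interface, so that workaround is no longer needed. A minimal interop sketch (d3d11Device must have been created with D3D11_CREATE_DEVICE_BGRA_SUPPORT; link d2d1.lib and include d2d1_1.h):

        Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
        d3d11Device.As(&dxgiDevice);                       // query the DXGI device from D3D11

        Microsoft::WRL::ComPtr<ID2D1Device> d2dDevice;
        D2D1CreateDevice(dxgiDevice.Get(), nullptr, &d2dDevice);

        Microsoft::WRL::ComPtr<ID2D1DeviceContext> d2dContext;
        d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &d2dContext);

        // Next steps (not shown): wrap the swap chain's back buffer as an ID2D1Bitmap1,
        // SetTarget() on the D2D context, and draw the HUD text with IDWriteFactory + DrawText.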
  19. I've just started learning about tessellation from Frank Luna's DX11 book. I'm getting some very weird behavior when I try to render a tessellated quad patch if I also render a mesh in the same frame. The tessellated quad patch renders just fine if it's the only thing I'm rendering. This is pictured below. However, when I attempt to render the same tessellated quad patch along with the other entities in the scene (which are simple triangle lists), I get the following error: I have no idea why this is happening, and Google searches have given me no leads at all. I use the following code to render the tessellated quad patch:

        ID3D11DeviceContext* dc = GetGFXDeviceContext();
        dc->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_4_CONTROL_POINT_PATCHLIST);
        dc->IASetInputLayout(ShaderManager::GetInstance()->m_JQuadTess->m_InputLayout);
        float blendFactors[] = { 0.0f, 0.0f, 0.0f, 0.0f }; // only used with D3D11_BLEND_BLEND_FACTOR
        dc->RSSetState(m_rasterizerStates[RSWIREFRAME]);
        dc->OMSetBlendState(m_blendStates[BSNOBLEND], blendFactors, 0xffffffff);
        dc->OMSetDepthStencilState(m_depthStencilStates[DSDEFAULT], 0);

        ID3DX11EffectTechnique* activeTech = ShaderManager::GetInstance()->m_JQuadTess->Tech;
        D3DX11_TECHNIQUE_DESC techDesc;
        activeTech->GetDesc(&techDesc);
        for (unsigned int p = 0; p < techDesc.Passes; p++)
        {
            TerrainVisual* terrainVisual = (TerrainVisual*)entity->m_VisualComponent;

            UINT stride = sizeof(TerrainVertex);
            UINT offset = 0;
            GetGFXDeviceContext()->IASetVertexBuffers(0, 1, &terrainVisual->m_VB, &stride, &offset);

            Vector3 eyePos = Vector3(cam->m_position);
            Matrix rotation = Matrix::CreateFromYawPitchRoll(entity->m_rotationEuler.x, entity->m_rotationEuler.y, entity->m_rotationEuler.z);
            Matrix model = rotation * Matrix::CreateTranslation(entity->m_position);
            Matrix view = cam->GetLookAtMatrix();
            Matrix MVP = model * view * m_ProjectionMatrix;

            ShaderManager::GetInstance()->m_JQuadTess->SetEyePosW(eyePos);
            ShaderManager::GetInstance()->m_JQuadTess->SetWorld(model);
            ShaderManager::GetInstance()->m_JQuadTess->SetWorldViewProj(MVP);

            activeTech->GetPassByIndex(p)->Apply(0, GetGFXDeviceContext());
            GetGFXDeviceContext()->Draw(4, 0);
        }

        dc->RSSetState(0);
        dc->OMSetBlendState(0, blendFactors, 0xffffffff);
        dc->OMSetDepthStencilState(0, 0);

     I draw my scene by looping through the list of entities and calling the associated draw method depending on the entity's "visual type":

        for (unsigned int i = 0; i < scene->GetEntityList()->size(); i++)
        {
            Entity* entity = scene->GetEntityList()->at(i);
            if (entity->m_VisualComponent->m_visualType == VisualType::MESH)
                DrawMeshEntity(entity, cam, sun, point);
            else if (entity->m_VisualComponent->m_visualType == VisualType::BILLBOARD)
                DrawBillboardEntity(entity, cam, sun, point);
            else if (entity->m_VisualComponent->m_visualType == VisualType::TERRAIN)
                DrawTerrainEntity(entity, cam);
        }
        HR(m_swapChain->Present(0, 0));

     Any help/advice would be much appreciated!
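     Since the error text isn't visible here this is only something worth ruling out: D3D11 keeps hull and domain shaders bound on the context until they are explicitly cleared, and issuing a TRIANGLELIST draw while an HS/DS pair is still bound (or a patch-list draw without one) is a classic source of device errors when tessellated and non-tessellated draws share a frame. Unbinding them after the patch draw is cheap, as in this sketch:

        // After the tessellated terrain pass, make sure later triangle-list draws
        // don't inherit the hull/domain shaders from the terrain effect.
        dc->HSSetShader(nullptr, nullptr, 0);
        dc->DSSetShader(nullptr, nullptr, 0);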
  20. I'm trying a bare-bones tessellation shader and getting an unexpected result when increasing the tessellation factor. I'm rendering a group of quads and trying to apply tessellation to them. OutsideTess = (1,1,1,1), InsideTess = (1,1); OutsideTess = (1,1,1,1), InsideTess = (2,1). I expected 4 triangles in the quad, not two. Any idea what's wrong? Structs:

        struct PatchTess
        {
            float mEdgeTess[4] : SV_TessFactor;
            float mInsideTess[2] : SV_InsideTessFactor;
        };

        struct VertexOut
        {
            float4 mWorldPosition : POSITION;
            float mTessFactor : TESS;
        };

        struct DomainOut
        {
            float4 mWorldPosition : SV_POSITION;
        };

        struct HullOut
        {
            float4 mWorldPosition : POSITION;
        };

     Hull shader:

        PatchTess PatchHS(InputPatch<VertexOut, 3> inputVertices)
        {
            PatchTess patch;
            patch.mEdgeTess[0] = 1;
            patch.mEdgeTess[1] = 1;
            patch.mEdgeTess[2] = 1;
            patch.mEdgeTess[3] = 1;
            patch.mInsideTess[0] = 2;
            patch.mInsideTess[1] = 1;
            return patch;
        }

        [domain("quad")]
        [partitioning("fractional_odd")]
        [outputtopology("triangle_ccw")]
        [outputcontrolpoints(4)]
        [patchconstantfunc("PatchHS")]
        [maxtessfactor(64.0)]
        HullOut hull_main(InputPatch<VertexOut, 3> verticeData, uint index : SV_OutputControlPointID)
        {
            HullOut ret;
            ret.mWorldPosition = verticeData[index].mWorldPosition;
            return ret;
        }

     Domain shader:

        [domain("quad")]
        DomainOut domain_main(PatchTess patchTess, float2 uv : SV_DomainLocation, const OutputPatch<HullOut, 4> quad)
        {
            DomainOut ret;
            const float MipInterval = 20.0f;
            ret.mWorldPosition.xz =
                quad[0].mWorldPosition.xz * (1.0f - uv.x) * (1.0f - uv.y) +
                quad[1].mWorldPosition.xz * uv.x * (1.0f - uv.y) +
                quad[2].mWorldPosition.xz * (1.0f - uv.x) * uv.y +
                quad[3].mWorldPosition.xz * uv.x * uv.y;
            ret.mWorldPosition.y = quad[0].mWorldPosition.y;
            ret.mWorldPosition.w = 1;
            ret.mWorldPosition = mul(gFrameViewProj, ret.mWorldPosition);
            return ret;
        }

     Any ideas what could be wrong with these shaders?
  21. Hello, I've stumbled upon the Urho3D engine and found that it has a really nice and easy-to-read code structure. I think the graphics abstraction looks really interesting, and I like the idea of how it defers pipeline state changes until just before the draw call to resolve redundant state changes. This is done by saving the state changes (blendEnabled / SRV changes / RTV changes) in member variables and, just before the draw, applying the actual state changes using the graphics context. It looks something like this (pseudo):

        void PrepareDraw()
        {
            if(renderTargetsDirty)
            {
                pD3D11DeviceContext->OMSetRenderTarget(mCurrentRenderTargets);
                renderTargetsDirty = false
            }
            if(texturesDirty)
            {
                pD3D11DeviceContext->PSSetShaderResourceView(..., mCurrentSRVs);
                texturesDirty = false
            }
            .... // Some more state changes
        }

     This all looked like a great design at first, but I've found one big issue with it which I don't really understand how it is solved in their case, or how I would tackle it. I'll explain it by example. Imagine I have two render targets: my backbuffer RT and an offscreen RT. Say I want to render my backbuffer to the offscreen RT and then back to the backbuffer (just for the sake of the example). You would do something like this:

        // Render to the offscreen RT
        pGraphics->SetRenderTarget(pOffscreenRT->GetRTV());
        pGraphics->SetTexture(diffuseSlot, pDefaultRT->GetSRV());
        pGraphics->DrawQuad();
        pGraphics->SetTexture(diffuseSlot, nullptr); // Remove the default RT from input

        // Render to the default (screen) RT
        pGraphics->SetRenderTarget(nullptr); // Default RT
        pGraphics->SetTexture(diffuseSlot, pOffscreenRT->GetSRV());
        pGraphics->DrawQuad();

     The problem here is that the second time the application loop comes around, the offscreen render target is still bound as an input ShaderResourceView when it gets set as a RenderTargetView, because in Urho3D the state of the RenderTargetView will always be changed before the ShaderResourceViews (see the top code snippet), even when I set the SRV to nullptr before using it as a RTV like above. This causes errors because a resource can't be bound as both input and render target. What is usually the solution to this? Thanks!
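     A common way engines handle this (a sketch in the same pseudo style as above, not Urho3D's actual code) is to resolve the hazard at bind time: when a render target is registered, walk the pending SRV bindings and evict any view of the same underlying resource, and make sure those nulled SRV slots are flushed to the device before (or immediately instead of after) OMSetRenderTargets, so the runtime never sees the resource bound on both ends. SameResource() here is a hypothetical helper that compares the ID3D11Resource behind both views:

        void Graphics::SetRenderTarget(RenderTargetView* rtv)
        {
            for (unsigned slot = 0; slot < MAX_TEXTURE_SLOTS; ++slot)
            {
                if (mCurrentSRVs[slot] && SameResource(mCurrentSRVs[slot], rtv))
                {
                    mCurrentSRVs[slot] = nullptr;   // push this null binding before the RTV change
                    texturesDirty = true;
                }
            }
            mCurrentRenderTargets[0] = rtv;
            renderTargetsDirty = true;
        }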
  22. Hello, I wrote a MatCap shader following this idea: given the image representing the texture, we compute the sample point by taking the dot product of the vertex normal and the camera position and remapping this to [0,1]. This seems to work well when I look straight at an object with this shader. However, in cases where the camera points slightly to the side, I can see the texture stretch a lot. Could anyone give me a hint as to how to get a nice matcap shader? Here's what I wrote:

        Shader "Unlit/Matcap"
        {
            Properties
            {
                _MainTex ("Texture", 2D) = "white" {}
            }
            SubShader
            {
                Tags { "RenderType"="Opaque" }
                LOD 100

                Pass
                {
                    CGPROGRAM
                    #pragma vertex vert
                    #pragma fragment frag
                    // make fog work
                    #include "UnityCG.cginc"

                    struct appdata
                    {
                        float4 vertex : POSITION;
                        float3 normal : NORMAL;
                    };

                    struct v2f
                    {
                        float2 worldNormal : TEXCOORD0;
                        float4 vertex : SV_POSITION;
                    };

                    sampler2D _MainTex;

                    v2f vert (appdata v)
                    {
                        v2f o;
                        o.vertex = UnityObjectToClipPos(v.vertex);
                        o.worldNormal = mul((float3x3)UNITY_MATRIX_V, UnityObjectToWorldNormal(v.normal)).xy*0.3 + 0.5; //UnityObjectToClipPos(v.normal)*0.5 + 0.5;
                        return o;
                    }

                    fixed4 frag (v2f i) : SV_Target
                    {
                        // sample the texture
                        fixed4 col = tex2D(_MainTex, i.worldNormal);
                        // apply fog
                        return col;
                    }
                    ENDCG
                }
            }
        }

     Thanks!
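     One variant worth trying (a sketch; the Properties block, pass setup and _MainTex declaration from the shader above stay unchanged): pass the full view-space normal through as a float3, re-normalize it per fragment, and only then remap with the conventional 0.5 scale. Interpolated normals shrink toward silhouettes, which is one source of the stretching when the camera looks at a surface edge-on, and the 0.3 scale also keeps the lookup away from the border of the matcap image:

        struct v2f
        {
            float3 viewNormal : TEXCOORD0;
            float4 vertex     : SV_POSITION;
        };

        v2f vert (appdata v)
        {
            v2f o;
            o.vertex = UnityObjectToClipPos(v.vertex);
            o.viewNormal = mul((float3x3)UNITY_MATRIX_V, UnityObjectToWorldNormal(v.normal));
            return o;
        }

        fixed4 frag (v2f i) : SV_Target
        {
            // Re-normalize after interpolation, then remap xy from [-1,1] to [0,1].
            float2 uv = normalize(i.viewNormal).xy * 0.5 + 0.5;
            return tex2D(_MainTex, uv);
        }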
  23. I want to output an image file from the shader resource view of a render texture, so I have to transfer the shader resource view into an image object, and I don't know how to do this. Could you help me with it? P.S. I am using SharpDX to develop my program, so C# code would be best. Many thanks.
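     A rough outline of the usual route (a sketch only, not tested code): the GPU texture behind the SRV has to be copied into a CPU-readable staging texture first; only then can the pixels be mapped and handed to an image encoder (WIC, System.Drawing, etc.):

        // Get the Texture2D behind the SRV, clone its description as a staging resource.
        using (var sourceTex = shaderResourceView.Resource.QueryInterface<Texture2D>())
        {
            var desc = sourceTex.Description;
            desc.Usage = ResourceUsage.Staging;
            desc.BindFlags = BindFlags.None;
            desc.CpuAccessFlags = CpuAccessFlags.Read;
            desc.OptionFlags = ResourceOptionFlags.None;

            using (var staging = new Texture2D(device, desc))
            {
                // Note: check the argument order of CopyResource in your SharpDX version
                // (SharpDX uses (source, destination), the native API uses (dst, src)).
                context.CopyResource(sourceTex, staging);

                var box = context.MapSubresource(staging, 0, MapMode.Read, MapFlags.None);
                // box.DataPointer points at the pixels; copy row by row (box.RowPitch bytes
                // per row, of which Width * bytes-per-pixel are valid) into a
                // System.Drawing.Bitmap or a WIC encoder, then save to disk.
                context.UnmapSubresource(staging, 0);
            }
        }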
  24. DX11 Light Shafts

    I decided to implement light shafts using http://sirkan.iit.bme.hu/~szirmay/lightshaft_link.htm So far I've only managed to implement the shadow map. Can anyone help me implement this in D3D11? (I mean the steps; I can do the rest.) I'm new to all these shadow maps and related techniques.
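    As a starting point, a common screen-space formulation (this is the GPU Gems 3 style radial sampling, not necessarily the exact method in the linked paper) marches from each pixel toward the light's projected screen position, accumulating a brightness/occlusion texture with exponential decay. A sketch, with all names being assumptions:

        Texture2D occlusionTex : register(t0);   // bright where the light/sky is unoccluded
        SamplerState linearClamp : register(s0);

        cbuffer LightShaftParams : register(b0)
        {
            float2 lightPosScreen;   // light position projected to [0,1] screen UV
            float  density;          // e.g. 0.9
            float  decay;            // e.g. 0.95
            float  exposure;         // e.g. 0.3
            int    numSamples;       // e.g. 64
        };

        float4 PS(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_Target
        {
            float2 delta = (uv - lightPosScreen) * (density / numSamples);
            float2 coord = uv;
            float  illumDecay = 1.0f;
            float3 color = 0.0f;
            for (int i = 0; i < numSamples; ++i)
            {
                coord -= delta;                                               // step toward the light
                color += occlusionTex.SampleLevel(linearClamp, coord, 0).rgb * illumDecay;
                illumDecay *= decay;                                          // fade distant samples
            }
            return float4(color * exposure, 1.0f);                           // additively blend over the scene
        }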
  25. Hi, I am trying to brute-force a closest-point-to-closed-triangle-mesh algorithm on the GPU by creating a thread for each point-primitive pair and keeping only the nearest result for each point. This code fails however, with multiple writes being made by threads with different distance computations. To keep only the closest value, I attempt to mask using InterlockedMin, and a conditional that only writes if the current thread holds the same value as the mask after a memory barrier. I have included the function below. As can be seen I have modified it to write to a different location every time the conditional succeeds for debugging. It is expected that multiple writes will take place, for example where the closest point is a vertex shared by multiple triangles, but when I read back closestPoints and calculate the distances, they are different, which should not be possible. The differences are large (~0.3+) so I do not think it is a rounding error. The CPU equivalent works fine for a single particle. After the kernel execution, distanceMask does hold the smallest value, suggesting the problem is with the barrier or the conditional. Can anyone say what is wrong with the function?

        RWStructuredBuffer<uint> distanceMask : register(u4);
        RWStructuredBuffer<uint> distanceWriteCounts : register(u0);
        RWStructuredBuffer<float3> closestPoints : register(u5);

        [numthreads(64,1,1)]
        void BruteForceClosestPointOnMesh(uint3 id : SV_DispatchThreadID)
        {
            int particleid = id.x;
            int triangleid = id.y;

            Triangle t = triangles[triangleid];
            float3 v0 = GetVertex1(t.i0);
            float3 v1 = GetVertex1(t.i1);
            float3 v2 = GetVertex1(t.i2);

            float3 q1 = Q1[particleid];

            ClosestPointPointTriangleResult result = ClosestPointPointTriangle(q1, v0, v1, v2);
            float3 p = v0 * result.uvw.x + v1 * result.uvw.y + v2 * result.uvw.z;

            uint distance = asuint(length(p - q1));
            InterlockedMin(distanceMask[particleid], distance);

            AllMemoryBarrierWithGroupSync();

            if(distance == distanceMask[particleid])
            {
                uint bin = 0;
                InterlockedAdd(distanceWriteCounts[particleid],1,bin);
                closestPoints[particleid * binsize + bin] = p;
            }
        }
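     One thing that jumps out: AllMemoryBarrierWithGroupSync only synchronizes the threads of a single thread group, but with [numthreads(64,1,1)] the threads that share a particle (same id.x, different id.y / triangle) live in different groups, so the barrier cannot order the InterlockedMin calls from other groups against the subsequent read of distanceMask; some threads compare against a mask that other groups have not finished minimizing yet. A common workaround is to split the work into two dispatches over the same pairs, since the driver guarantees pass 1's UAV writes are visible to pass 2. A sketch, where ComputeClosestPointForPair() is a hypothetical helper wrapping the shared math from the kernel above:

        [numthreads(64, 1, 1)]
        void Pass1_MinDistance(uint3 id : SV_DispatchThreadID)
        {
            float3 p = ComputeClosestPointForPair(id);          // same math as the original kernel
            uint d = asuint(length(p - Q1[id.x]));              // positive floats keep their order as uints
            InterlockedMin(distanceMask[id.x], d);
        }

        [numthreads(64, 1, 1)]
        void Pass2_WriteClosest(uint3 id : SV_DispatchThreadID)
        {
            float3 p = ComputeClosestPointForPair(id);
            uint d = asuint(length(p - Q1[id.x]));
            if (d == distanceMask[id.x])                        // mask is now complete and stable
                closestPoints[id.x] = p;                        // ties: last writer wins, all equidistant
        }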