Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.
Found 1421 results

  1. My shadows (drawn using a depth buffer that I later sample from in the shadow texture) seem to be detaching slightly from their objects. I looked this up and I think it's "peter panning"; the advice I found was that you have to change the depth offset, but I'm not sure how to do that. https://msdn.microsoft.com/en-us/library/windows/desktop/ee416324(v=vs.85).aspx Is there a fast way I can tweak the code to fix this? Should I change the "bias", perhaps, or something else? Thanks. Here is the code for the shadows, for reference:

     // ENTRY POINT
     float4 main(PixelInputType input) : SV_TARGET
     {
         float2 projectTexCoord;
         float depthValue;
         float lightDepthValue;
         //float4 lightColor = float4(0,0,0,0);
         float4 lightColor = float4(0.05, 0.05, 0.05, 1);

         // Set the bias value for fixing the floating point precision issues.
         float bias = 0.001f;

         //////////////// SHADOWING LOOP ////////////////
         for (int i = 0; i < NUM_LIGHTS; ++i)
         {
             // Calculate the projected texture coordinates.
             projectTexCoord.x =  input.vertex_ProjLightSpace[i].x / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
             projectTexCoord.y = -input.vertex_ProjLightSpace[i].y / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;

             if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
             {
                 // Sample the shadow map depth value from the depth texture using the sampler at the projected texture coordinate location.
                 depthValue = depthTextures[i].Sample(SampleTypeClamp, projectTexCoord).r;

                 // Calculate the depth of the light.
                 lightDepthValue = input.vertex_ProjLightSpace[i].z / input.vertex_ProjLightSpace[i].w;

                 // Subtract the bias from the lightDepthValue.
                 lightDepthValue = lightDepthValue - bias;

                 // Compare the depth of the shadow map value and the depth of the light to determine whether to shadow or to light this pixel.
                 // If the light is in front of the object then light the pixel, if not then shadow this pixel since an object (occluder) is casting a shadow on it.
                 if (lightDepthValue < depthValue)
                 {
                     // Calculate the amount of light on this pixel.
                     float lightIntensity = saturate(dot(input.normal, normalize(input.lightPos_LS[i])));

                     if (lightIntensity > 0.0f)
                     {
                         float spotlightIntensity = CalculateSpotLightIntensity(input.lightPos_LS[i], cb_lights[i].lightDirection, input.normal);
                         //lightColor += (float4(1.0f, 1.0f, 1.0f, 1.0f) * lightIntensity) * .3f; // spotlight
                         lightColor += float4(1.0f, 1.0f, 1.0f, 1.0f) /** lightIntensity*/ * spotlightIntensity * .3f; // spotlight
                     }
                 }
             }
         }

         return saturate(lightColor);
     }

     https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/MultiShadows_ps.hlsl
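     A common fix for peter panning is to apply the bias when the shadow map itself is rendered, via the rasterizer state's hardware depth bias, rather than only in the pixel shader. A minimal sketch, assuming a valid ID3D11Device* named device (the bias magnitudes are scene-dependent starting points, not definitive values):

     // Rasterizer state for the shadow-map pass with slope-scaled depth bias.
     D3D11_RASTERIZER_DESC rd = {};
     rd.FillMode             = D3D11_FILL_SOLID;
     rd.CullMode             = D3D11_CULL_BACK;  // or D3D11_CULL_FRONT to render casters' back faces into the map
     rd.DepthClipEnable      = TRUE;
     rd.DepthBias            = 10000;   // constant bias in depth-buffer units (tune per scene)
     rd.DepthBiasClamp       = 0.0f;
     rd.SlopeScaledDepthBias = 1.5f;    // extra bias on steep, grazing-angle surfaces

     ID3D11RasterizerState* shadowRS = nullptr;
     HRESULT hr = device->CreateRasterizerState(&rd, &shadowRS);
     // Bind with context->RSSetState(shadowRS) before drawing the depth pass,
     // then restore the normal rasterizer state for the main pass.

     Biasing in the rasterizer during the depth pass usually lets the shader-side bias shrink (reducing the visible detachment), since slope-scaled bias adapts to surface orientation instead of using one constant for everything.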
  2. Hi! Pretty new to 3D programming, using the SharpDX wrapper to build a 3D world (for testing and learning). I am adding several visible camera objects (very rudimentary models) in order to visualize different views. Let's say I have a "world" floor grid covering vectors {0,0,0 - 1,1,0}. I add a pretend camera "CAM2" object at {0.5, 1.5, -1.0}. I am looking at this world through "CAM1", a world-view projection positioned at pos: {0.0, 1.5, 1.5}, lookAt: {0.5, 0.0, 0.5} (looking down from upper left towards the center of the floor). I would like to draw a line from the pretend camera "CAM2" model origin to the center of the floor, as it is projected through "CAM1"'s view. Obviously a line from "CAM1" to the lookAt point would be invisible, but I can't for my life figure out how to apply the correct conversions to the vector end point for "CAM2". As can be seen in the snapshot, the line (green) from "CAM2" points to... well... Russia?? :-D Can anyone help? BR Per
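     For illustration, here is a sketch in C++ DirectXMath terms (SharpDX exposes equivalent helpers such as Vector3.Project); the viewport size and FOV are assumptions. The key point: if the line's vertex buffer already holds world-space endpoints, it should be drawn with the same view/projection as the rest of the scene and nothing needs to be pre-projected by hand; CPU-side projection is only needed for a 2D overlay.

     #include <DirectXMath.h>
     using namespace DirectX;

     // World-space endpoints of the helper line (values from the post).
     XMVECTOR camera2Pos  = XMVectorSet(0.5f, 1.5f, -1.0f, 1.0f);
     XMVECTOR floorCenter = XMVectorSet(0.5f, 0.0f,  0.5f, 1.0f);

     // CAM1's view/projection - the same matrices used to draw everything else.
     XMMATRIX view = XMMatrixLookAtLH(XMVectorSet(0.0f, 1.5f, 1.5f, 1.0f),
                                      XMVectorSet(0.5f, 0.0f, 0.5f, 1.0f),
                                      XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));
     XMMATRIX proj = XMMatrixPerspectiveFovLH(XM_PIDIV4, 16.0f / 9.0f, 0.1f, 100.0f);

     // To project a world point to window coordinates on the CPU
     // (e.g. to verify where the endpoint should land on screen):
     XMVECTOR screenPt = XMVector3Project(camera2Pos,
                                          0.0f, 0.0f, 1280.0f, 720.0f, // viewport (assumed)
                                          0.0f, 1.0f,                  // min/max depth
                                          proj, view, XMMatrixIdentity());

     A frequent cause of lines "pointing to Russia" is transforming the endpoint by the camera model's world matrix a second time; with world-space endpoints, the world matrix for the line must be identity.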
  3. Hi, so I imported some new models into my engine, and some of them show up with ugly seams or dark patches, while others look perfect (see pictures). I'm using the same shader for all of them, and all of these models have had custom UV-mapped textures created for them, which should wrap fully around them, instead of using tiled textures. I have no idea why the custom UV-mapped textures are mapping correctly on some, but not others. Possible causes:

     1. Am I using the wrong SamplerState to sample the textures? (I'm using SampleTypeClamp.)
     2. The original models had quads and were UV mapped by an artist in that state; then I reimported them into 3ds Max and re-exported them as all triangles (my engine's object loader only accepts triangles).
     3. Could the original model UVs just be wrong?

     Please let me know if somebody can help identify this problem, I'm completely baffled. Thanks. For reference, here's a link to the shader being used to draw the problematic models, with the shader code below. https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/Light_SoftShadows_ps.hlsl

     /////////////
     // DEFINES //
     /////////////
     #define NUM_LIGHTS 3

     /////////////
     // GLOBALS //
     /////////////
     // texture resource that will be used for rendering the texture on the model
     Texture2D shaderTextures[7]; // NOTE - we only use one render target for drawing all the shadows here!
     // allows modifying how pixels are written to the polygon face, for example choosing which to draw.
     SamplerState SampleType;

     ///////////////////
     // SAMPLE STATES //
     ///////////////////
     SamplerState SampleTypeClamp : register(s0);
     SamplerState SampleTypeWrap  : register(s1);

     //////////////
     // TYPEDEFS //
     //////////////
     // This structure is used to describe the lights properties
     struct LightTemplate_PS
     {
         int type;
         float3 padding;
         float4 diffuseColor;
         float3 lightDirection; //(lookat?) //@TODO pass from VS BUFFER?
         float specularPower;
         float4 specularColor;
     };

     //////////////////////
     // CONSTANT BUFFERS //
     //////////////////////
     cbuffer SceneLightBuffer : register(b0)
     {
         float4 cb_ambientColor;
         LightTemplate_PS cb_lights[NUM_LIGHTS];
     }

     // value set here will be between 0 and 1.
     cbuffer TranslationBuffer : register(b1)
     {
         float textureTranslation; //@NOTE = hlsl automatically pads floats for you
     };

     // for alpha blending textures
     cbuffer TransparentBuffer : register(b2)
     {
         float blendAmount;
     };

     struct PixelInputType
     {
         float4 vertex_ModelSpace : SV_POSITION;
         float2 tex : TEXCOORD0;
         float3 normal : NORMAL;
         float3 tangent : TANGENT;
         float3 binormal : BINORMAL;
         float3 viewDirection : TEXCOORD1;
         float3 lightPos_LS[NUM_LIGHTS] : TEXCOORD2;
         float4 vertex_ScrnSpace : TEXCOORD5;
     };

     float4 main(PixelInputType input) : SV_TARGET
     {
         bool bInsideSpotlight = true;
         float2 projectTexCoord;
         float depthValue;
         float lightDepthValue;
         float4 textureColor;
         float gamma = 7.f;

         /////////////////// NORMAL MAPPING //////////////////
         float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex);

         // Sample the shadow value from the shadow texture using the sampler at the projected texture coordinate location.
         projectTexCoord.x =  input.vertex_ScrnSpace.x / input.vertex_ScrnSpace.w / 2.0f + 0.5f;
         projectTexCoord.y = -input.vertex_ScrnSpace.y / input.vertex_ScrnSpace.w / 2.0f + 0.5f;
         float shadowValue = shaderTextures[6].Sample(SampleTypeClamp, projectTexCoord).r;

         // Expand the range of the normal value from (0, +1) to (-1, +1).
         bumpMap = (bumpMap * 2.0f) - 1.0f;

         // Change the COORDINATE BASIS of the normal into the space represented by basis vectors tangent, binormal, and normal!
         float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal));

         //////////////// AMBIENT BASE COLOR ////////////////
         // Set the default output color to the ambient light value for all pixels.
         float4 lightColor = cb_ambientColor * saturate(dot(bumpNormal, input.normal) + .2);

         // Calculate the amount of light on this pixel.
         for (int i = 0; i < NUM_LIGHTS; ++i)
         {
             float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i])));
             if (lightIntensity > 0.0f)
             {
                 lightColor += (cb_lights[i].diffuseColor * lightIntensity) * 0.3;
             }
         }

         // Saturate the final light color.
         lightColor = saturate(lightColor);

         // TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location.
         input.tex.x += textureTranslation;

         // BLENDING
         float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex);
         float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex);
         float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex);
         //textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2));
         textureColor = color1;

         // Combine the light and texture color.
         float4 finalColor = lightColor * textureColor * shadowValue * gamma;

         //if(lightColor.x == 0)
         //{
         //    finalColor = cb_ambientColor * saturate(dot(bumpNormal, input.normal) + .2) * textureColor;
         //}

         return finalColor;
     }
  4. I am doing DXGI adapter and monitor enumeration. The second monitor connected to my computer is a Dell P2715Q, which has a 3840x2160 resolution; however, the program reports it as 2560x1440, the second available resolution. Minimal code to reproduce:

     #include "stdafx.h"
     #include <Windows.h>
     #include <stdio.h>
     #include <tchar.h>
     #include <iostream>
     #include <DXGI.h>
     #pragma comment(lib, "DXGI.lib")
     using namespace std;

     int main()
     {
         IDXGIFactory1* pFactory1;

         HRESULT hr = CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)(&pFactory1));
         if (FAILED(hr))
         {
             wcout << L"CreateDXGIFactory1 failed. " << endl;
             return 0;
         }

         for (UINT i = 0;; i++)
         {
             IDXGIAdapter1* pAdapter1 = nullptr;
             hr = pFactory1->EnumAdapters1(i, &pAdapter1);
             if (hr == DXGI_ERROR_NOT_FOUND)
             {
                 // no more adapters
                 break;
             }
             if (FAILED(hr))
             {
                 wcout << L"EnumAdapters1 failed. " << endl;
                 return 0;
             }

             DXGI_ADAPTER_DESC1 desc;
             hr = pAdapter1->GetDesc1(&desc);
             if (FAILED(hr))
             {
                 wcout << L"GetDesc1 failed. " << endl;
                 return 0;
             }

             wcout << L"Adapter: " << desc.Description << endl;

             for (UINT j = 0;; j++)
             {
                 IDXGIOutput* pOutput = nullptr;
                 HRESULT hr = pAdapter1->EnumOutputs(j, &pOutput);
                 if (hr == DXGI_ERROR_NOT_FOUND)
                 {
                     // no more outputs
                     break;
                 }
                 if (FAILED(hr))
                 {
                     wcout << L"EnumOutputs failed. " << endl;
                     return 0;
                 }

                 DXGI_OUTPUT_DESC desc;
                 hr = pOutput->GetDesc(&desc);
                 if (FAILED(hr))
                 {
                     wcout << L"GetDesc failed. " << endl;
                     return 0;
                 }

                 wcout << L"  Output: " << desc.DeviceName
                       << L" (" << desc.DesktopCoordinates.left << L"," << desc.DesktopCoordinates.top << L")-("
                       << (desc.DesktopCoordinates.right - desc.DesktopCoordinates.left) << L","
                       << (desc.DesktopCoordinates.bottom - desc.DesktopCoordinates.top) << L")" << endl;
             }
         }
         return 0;
     }

     Program output:

     Adapter: Intel(R) Iris(TM) Pro Graphics 6200
       Output: \\.\DISPLAY1 (0,0)-(1920,1200)
       Output: \\.\DISPLAY2 (1920,0)-(2560,1440)

     DISPLAY2 is reported with a low resolution. Environment: Windows 10 x64, Intel(R) Iris(TM) Pro Graphics 6200, Dell P2715Q. What can cause this behavior - DirectX restrictions, video memory, display adapter, driver, monitor? How can I fix this and get the full available resolution?
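     A likely cause is DPI virtualization rather than any hardware limit: 3840/1.5 = 2560 and 2160/1.5 = 1440, which matches a 150% display-scaling factor exactly. When a process is not DPI-aware, Windows reports scaled desktop coordinates to it. A sketch of the opt-in (a manifest entry is the preferred production route; the API call must come before any window or DXGI work):

     #include <ShellScalingApi.h>              // SetProcessDpiAwareness (Windows 8.1+)
     #pragma comment(lib, "Shcore.lib")

     int main()
     {
         // Declare per-monitor DPI awareness so DXGI reports physical pixels
         // instead of DPI-virtualized sizes.
         SetProcessDpiAwareness(PROCESS_PER_MONITOR_DPI_AWARE);

         // ... DXGI enumeration as above ...
         return 0;
     }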
  5. Hello. DX9Ex. I have a problem with driver stability during serial renderings, which I use for image processing in memory with fragment shaders. For big bitmaps the video driver sometimes becomes unstable ("Display driver stopped responding and has recovered") and, for instance, if the media player runs video in the background, it sometimes freezes and distorts. I tried to use the following methods of IDirect3DDevice9Ex: SetGPUThreadPriority(-7); WaitForVBlank(0); EvictManagedResources(); with the purpose of giving the GPU some time between scenes, but they seem to have no notable effect in this case. I don't want to reinitialize the subsystem for every step, to avoid performance loss. So, my question is: is there a common practice to avoid overloading the GPU with long-running tasks? Many thanks in advance.
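     One common practice is to split the work into slices and synchronize with an event query between slices, so no single submission runs long enough to trip the (~2 second) TDR watchdog. A sketch for IDirect3DDevice9Ex, with device and the per-slice draw as assumed names:

     // Throttle long GPU workloads with an event query between batches.
     IDirect3DQuery9* query = nullptr;
     if (SUCCEEDED(device->CreateQuery(D3DQUERYTYPE_EVENT, &query)))
     {
         // ... issue one slice of the shader work (e.g. one tile of the bitmap) ...
         query->Issue(D3DISSUE_END);

         // Block (politely) until the GPU has actually finished this slice.
         while (query->GetData(nullptr, 0, D3DGETDATA_FLUSH) == S_FALSE)
             Sleep(1);

         query->Release();
     }

     This also keeps the GPU responsive for other clients (like the background video player), since their work can interleave between slices.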
  6. DX11 Shadow Map Details

    I think I understand the idea behind shadow mapping, but I'm having problems with implementation details. In the VS I need a light position - but I don't have one! I only have a light direction; what light position should I use? I have a working camera class, with Projection and View matrices and all - how can I reuse this? I could use a camera position, but how do I calculate the "lookAt" parameter? Is this supposed to be an orthographic or a perspective camera? And one more thing - where in the 3D pipeline does the actual write to the depth buffer happen? In the PS or somewhere earlier? Br., BB
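     One common setup for a directional light, sketched with DirectXMath (sceneCenter and radius are illustrative - in practice use the bounding volume of the shadow casters): pick a virtual position by backing up from the shadowed region along the light direction, aim the "lookAt" at that region's center, and use an orthographic projection, since directional light rays are parallel.

     #include <DirectXMath.h>
     using namespace DirectX;

     XMVECTOR lightDir    = XMVector3Normalize(XMVectorSet(-0.5f, -1.0f, 0.3f, 0.0f));
     XMVECTOR sceneCenter = XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f);
     float    radius      = 50.0f;   // radius of the region to shadow

     // Virtual light position: behind the scene along the light direction.
     XMVECTOR lightPos  = XMVectorSubtract(sceneCenter, XMVectorScale(lightDir, radius));
     XMMATRIX lightView = XMMatrixLookAtLH(lightPos, sceneCenter,
                                           XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));

     // Directional light => orthographic projection (parallel rays).
     XMMATRIX lightProj = XMMatrixOrthographicLH(2.0f * radius, 2.0f * radius,
                                                 0.1f, 2.0f * radius);

     Your existing camera class can be reused by feeding it this position/lookAt pair and swapping the perspective projection for an orthographic one.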
  7. So last night I was messing about with some old code on a Direct3D 11.4 interface and trying out some compute stuff. I had set this thing up to send data in, run the compute shader, and then output the result data into a structured buffer. To read this data back on the CPU, I had copied the structured buffer into a staging buffer and retrieved the data from there. This all worked well enough. But I was curious to see if I could remove the intermediate copy to stage and read from the structured buffer directly using Map. To do this, I created the buffer using D3D11_CPU_ACCESS_READ and a usage of default, and to my shock and amazement... it worked (and no warning messages from the D3D debug log). However, this seems to run counter to what I've read in the documentation for D3D11_CPU_ACCESS_FLAG, which says resources with CPU read access must be staging resources and cannot be bound to the pipeline - that part is what threw me off. Here, I had a structured buffer created with default usage, and a UAV (definitely bindable to the pipeline), but I was able to map and read the data. Does this seem wrong? I'm aware that some hardware manufacturers may implement things differently, but if MS says that this flag can't be used outside of a staging resource, then shouldn't the manufacturer (NVIDIA) adhere to that? I can find nothing else in the documentation that says this is allowed or not allowed (beyond the description for D3D11_CPU_ACCESS_READ). And the debug output for D3D doesn't complain in the slightest. So what gives? Is it actually safe to do a map & read from a default-usage resource with CPU read flags?
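     For reference, the documented-portable readback path the post started from looks like this (a sketch; device, context, gpuBuffer and byteSize are illustrative names). Anything beyond this pattern is relying on driver-specific leniency that another vendor or driver version may not share:

     // Copy the default-usage structured buffer into a staging buffer, then Map it.
     D3D11_BUFFER_DESC sd = {};
     sd.ByteWidth      = byteSize;
     sd.Usage          = D3D11_USAGE_STAGING;
     sd.BindFlags      = 0;                       // staging resources are never bound
     sd.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

     ID3D11Buffer* staging = nullptr;
     device->CreateBuffer(&sd, nullptr, &staging);

     context->CopyResource(staging, gpuBuffer);   // ordered after the dispatch

     D3D11_MAPPED_SUBRESOURCE mapped = {};
     if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
     {
         // ... read mapped.pData ...
         context->Unmap(staging, 0);
     }
     staging->Release();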
  8. A new player of my game reported an issue: when he starts the game, it immediately crashes, before he can even see the main menu. He sent me a log file of my game, and it turns out that the game crashes when it creates a 2D render target. Here is the full "Interface not supported" error message:

     HRESULT: [0x80004002], Module: [General], ApiCode: [E_NOINTERFACE/No such interface supported], Message: Schnittstelle nicht unterstützt
        bei SharpDX.Result.CheckError()
        bei SharpDX.Direct2D1.Factory.CreateDxgiSurfaceRenderTarget(Surface dxgiSurface, RenderTargetProperties& renderTargetProperties, RenderTarget renderTarget)
        bei SharpDX.Direct2D1.RenderTarget..ctor(Factory factory, Surface dxgiSurface, RenderTargetProperties properties)
        bei Game.AGame.Initialize()

     Because of the log file's content, I know exactly where the game crashes:

     Factory2D = new SharpDX.Direct2D1.Factory();
     _surface = backBuffer.QueryInterface<SharpDX.DXGI.Surface>();
     // It crashes when calling this line!
     RenderTarget2D = new SharpDX.Direct2D1.RenderTarget(Factory2D, _surface,
         new SharpDX.Direct2D1.RenderTargetProperties(
             new SharpDX.Direct2D1.PixelFormat(_dxgiFormat, SharpDX.Direct2D1.AlphaMode.Premultiplied)));
     RenderTarget2D.AntialiasMode = SharpDX.Direct2D1.AntialiasMode.Aliased;

     I did some research on this error message, and all the similar problems I found were six to seven years old, from when people tried to combine DirectX 11 3D graphics with Direct3D 10.1 2D graphics. However, I am using DirectX 11 for all visual stuff. The game runs very well on the computers of all other 2500 players, so I am trying to figure out why this code crashes on this one player's computer. He uses Windows 7 with all Windows updates, 17179 MB of memory and an NVIDIA GeForce GTX 870M graphics card - more than enough to run my game. Below you can see the code I use for creating the 3D device and the swap chain. I made sure to request BGRA support when creating the device, because it is required when using Direct2D in a DirectX 11 game. The same DXGI format is used for creating 2D and 3D content, and the refresh rate is read from the adapter in use.

     // Set swap chain flags, DXGI format and default refresh rate.
     _swapChainFlags = SharpDX.DXGI.SwapChainFlags.None;
     _dxgiFormat = SharpDX.DXGI.Format.B8G8R8A8_UNorm;
     SharpDX.DXGI.Rational refreshRate = new SharpDX.DXGI.Rational(60, 1);

     // Get proper video adapter and create device and swap chain.
     using (var factory = new SharpDX.DXGI.Factory1())
     {
         SharpDX.DXGI.Adapter adapter = GetAdapter(factory);
         if (adapter != null)
         {
             // Get refresh rate.
             refreshRate = GetRefreshRate(adapter, _dxgiFormat, refreshRate);

             // Create Device and SwapChain
             _device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport,
                 new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 });
             _swapChain = new SharpDX.DXGI.SwapChain(factory, _device, GetSwapChainDescription(clientSize, outputHandle, refreshRate));
             _deviceContext = _device.ImmediateContext;
         }
     }
  9. I've been trying for hours now to find the cause of this problem. My vertex shader is passing the wrong values to the pixel shader, and I think it might be my input/output semantics. *This shader takes in a pre-rendered texture with shadows in it, based on Rastertek tutorial 42, so the light/dark values of the shadows are already encoded in the blurred shadow texture, sampled from Texture2D shaderTextures[7] at index 6 in the pixel shader.

     struct VertexInputType
     {
         float4 vertex_ModelSpace : POSITION;
         float2 tex : TEXCOORD0;
         float3 normal : NORMAL;
         float3 tangent : TANGENT;
         float3 binormal : BINORMAL;
     };

     struct PixelInputType
     {
         float4 vertex_ModelSpace : SV_POSITION;
         float2 tex : TEXCOORD0;
         float3 normal : NORMAL;
         float3 tangent : TANGENT;
         float3 binormal : BINORMAL;
         float3 viewDirection : TEXCOORD1;
         float3 lightPos_LS[NUM_LIGHTS] : TEXCOORD2;
         float4 vertex_ScrnSpace : TEXCOORD5;
     };

     Specifically, PixelInputType is causing a ton of trouble: if I switch the tags "SV_POSITION" for the first variable and "TEXCOORD5" for the last one, it gives completely different values to the pixel shader, even though all the calculations are exactly the same. The main issue is that I have a spotlight effect in the pixel shader that takes the dot product of the light-to-surface vector with the light direction to give it a falloff. It was previously working, but in this upgraded version of the shader it seems to be giving completely wrong values. (See the full vertex shader code below.) Is there some weird thing about pixel shader semantics that I'm missing? Does the order of the variables in the struct matter? I've also attached the full shader files for reference. Any insight would be much appreciated, thanks.

     PixelInputType main(VertexInputType input)
     {
         // The final output for the vertex shader
         PixelInputType output;

         // Pass through tex coordinates untouched
         output.tex = input.tex;

         // Pre-calculate vertex position in world space
         input.vertex_ModelSpace.w = 1.0f;

         // Calculate the position of the vertex against the world, view, and projection matrices.
         output.vertex_ModelSpace = mul(input.vertex_ModelSpace, cb_worldMatrix);
         output.vertex_ModelSpace = mul(output.vertex_ModelSpace, cb_viewMatrix);
         output.vertex_ModelSpace = mul(output.vertex_ModelSpace, cb_projectionMatrix);

         // Store the position of the vertex as viewed by the camera in a separate variable.
         output.vertex_ScrnSpace = output.vertex_ModelSpace;

         // Bring normal, tangent, and binormal into world space
         output.normal   = normalize(mul(input.normal,   (float3x3)cb_worldMatrix));
         output.tangent  = normalize(mul(input.tangent,  (float3x3)cb_worldMatrix));
         output.binormal = normalize(mul(input.binormal, (float3x3)cb_worldMatrix));

         // Store worldspace view direction for specular calculations
         float4 vertex_WS = mul(input.vertex_ModelSpace, cb_worldMatrix);
         output.viewDirection = normalize(cb_camPosition_WS.xyz - vertex_WS.xyz);

         for (int i = 0; i < NUM_LIGHTS; ++i)
         {
             // Calculate light position relative to the vertex in WORLD SPACE
             output.lightPos_LS[i] = cb_lights[i].lightPosition_WS - vertex_WS.xyz;
         }

         return output;
     }

     Repo link: https://github.com/mister51213/DirectX11Engine/tree/master/DirectX11Engine Light_SoftShadows_ps.hlsl Light_SoftShadows_vs.hlsl
  10. Hi, I'm on Rastertek series 42, soft shadows, which uses a blur shader and runs extremely slow. http://www.rastertek.com/dx11tut42.html He obnoxiously states that there are many ways to optimize his blur shader, but gives you no idea how to do it. The way he does it is:

     1. Project the objects in the scene to a render target using the depth shader.
     2. Draw black-and-white shadows on another render target using those depth textures.
     3. Blur the black/white shadow texture produced in step 2 by a) rendering it to a smaller texture, b) vertically/horizontally blurring that texture, and c) rendering it back to a bigger texture again.
     4. Send the blurred shadow texture into the final shader, which samples its black/white values to determine light intensity.

     So this uses a ton of render textures, and I just added more than one light, which multiplies the render textures required. Is there any easy way to optimize the super-expensive blur shader that wouldn't require a whole new complicated system - like combining any of these render textures into one, for example? If you know of any easy way not requiring too many changes, please let me know, as I already had a really hard time understanding the way this works, so a super-complicated change would be beyond my capacity. Thanks. *For reference, here is my repo, in which I have simplified his tutorial and added an additional light: https://github.com/mister51213/DX11Port_SoftShadows/tree/MultiShadows
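     One low-impact change worth trying: drop step (c) entirely and sample the small blurred texture directly in the final shader, letting bilinear filtering do the upscale for free; the upscale render pass adds fill-rate cost without adding information. A sketch of the reduced-size intermediate target, with device, screenWidth and screenHeight as assumed names:

     // Half-resolution target for the blur pass: roughly a quarter of the
     // fill-rate cost, and soft shadows rarely need full-resolution blur.
     D3D11_TEXTURE2D_DESC td = {};
     td.Width            = screenWidth  / 2;
     td.Height           = screenHeight / 2;
     td.MipLevels        = 1;
     td.ArraySize        = 1;
     td.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
     td.SampleDesc.Count = 1;
     td.Usage            = D3D11_USAGE_DEFAULT;
     td.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

     ID3D11Texture2D* halfResTex = nullptr;
     device->CreateTexture2D(&td, nullptr, &halfResTex);
     // Create an RTV + SRV on halfResTex, blur into it, and bind the SRV
     // directly in the final lighting shader - skipping the upscale pass.

     For multiple lights, another option in the same spirit is packing several lights' black/white shadow factors into separate color channels of one target, since each factor is only a single scalar.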
  11. Hi, I want to add a falloff to the shadows in this pixel shader. This should be really straightforward: just get the distance between the light position and the vertex position, and multiply it by the light intensity at the pixel being shadowed, so the light intensity increases and the shadow fades away towards the edges. As you can see, I get the "lightPosition" from the input (it comes from the vertex shader, where it was calculated as worldLightPosition - worldVertexPosition, so taking its length should give the distance between the light and the pixel). I multiplied it by 0.038, an arbitrary number, to scale it down, because it needs to be between 0 and 1 before being multiplied by the shadow color (1,1,1,1) to give a gradient. However, this does absolutely nothing, and I can't tell where it's failing. Please look at the attached files to see the full code of the vertex and pixel shaders. Any advice would be very welcome, thanks!

     // Sample the shadow map depth value from the depth texture using the sampler at the projected texture coordinate location.
     depthValue = shaderTextures[6 + i].Sample(SampleTypeClamp, projectTexCoord).r;

     // Calculate the depth of the light.
     lightDepthValue = input.lightViewPositions[i].z / input.lightViewPositions[i].w;

     // Subtract the bias from the lightDepthValue.
     lightDepthValue = lightDepthValue - bias;

     // Compare the depth of the shadow map value and the depth of the light to determine whether to shadow or to light this pixel.
     // If the light is in front of the object then light the pixel, if not then shadow this pixel since an object (occluder) is casting a shadow on it.
     if (lightDepthValue < depthValue)
     {
         // Calculate the amount of light on this pixel.
         //lightIntensity = saturate(dot(input.normal, input.lightPositions));
         lightIntensity = saturate(dot(input.normal, normalize(input.lightPositions[i])));

         if (lightIntensity > 0.0f)
         {
             // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
             color += (diffuseCols[i] * lightIntensity * 0.25f);
         }
     }
     else // shadow falloff here
     {
         float4 shadowcol = (1, 1, 1, 1);
         float shadowintensity = saturate(length(input.lightPositions[i]) * 0.038);
         color += shadowcol * shadowintensity * shadowintensity * shadowintensity;
     }

     // Saturate the final light color.
     color = saturate(color);

     Light_ps.hlsl Light_vs.hlsl
  12. Hi, after implementing skinning with a compute shader, I want to implement skinning with the vertex shader stream-out method to compare performance. The following thread is a discussion about it; here's the recommended setup: use a pass-through geometry shader (point -> point), set up the stream-out and set the topology to point list. Draw the whole buffer with context->Draw(); this gives a 1:1 mapping of the vertices. Later, bind the stream-out buffer as a vertex buffer, bind the index buffer of the original mesh, and draw with DrawIndexed like you would with the original mesh (or whatever draw call you had). I know why a point list is used as input: with the normal vertex topology as input, the output would be a stream of "each on his own" primitives that would blow up the vertex buffer. I assume an index buffer would then be needless? But how can you transform position and normal in one step when feeding the pseudo vertex/geometry shader with a point list? In my vertex shader I first calculate the resulting transform matrix from the bone indices (4) and weights (4), and then transform position and normal with that same resulting transform matrix. Do I have to run two passes - one for transforming the position and one for transforming the normal? I think it could be done better? Thanks for any help.
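     For illustration, a host-side sketch of the stream-out setup (names like gsBytecode, streamOutVB and vertexCount are assumptions). Note that position and normal stream out together in one pass - they are simply two fields of the same output vertex, so no second pass is needed:

     // Declaration describing the streamed vertex layout (position + normal).
     D3D11_SO_DECLARATION_ENTRY soDecl[] = {
         { 0, "POSITION", 0, 0, 3, 0 },   // 3 float components -> output slot 0
         { 0, "NORMAL",   0, 0, 3, 0 },
     };
     UINT strides[] = { 6 * sizeof(float) };

     ID3D11GeometryShader* soGS = nullptr;
     device->CreateGeometryShaderWithStreamOutput(
         gsBytecode, gsBytecodeSize, soDecl, _countof(soDecl),
         strides, 1, D3D11_SO_NO_RASTERIZED_STREAM, nullptr, &soGS);

     // Skinning pass: point list, no index buffer - 1:1 vertex mapping.
     UINT offset = 0;
     context->SOSetTargets(1, &streamOutVB, &offset);
     context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
     context->Draw(vertexCount, 0);
     context->SOSetTargets(0, nullptr, nullptr);  // unbind before reuse

     // Render pass: bind streamOutVB as the vertex buffer plus the ORIGINAL
     // index buffer, then DrawIndexed as with the original mesh.

     So yes: the index buffer is needless during the skinning pass (pure 1:1 stream), but it comes back unchanged for the render pass.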
  13. I am new to DirectX. I just followed some tutorials online and started to program. All went well until I faced the problem of loading my own 3D models from 3ds Max, exported as .x, which is supported by DirectX. I am using C++ on Visual Studio 2010 and DirectX 9. I really tried to find help on the net, but I couldn't find anything that solves my problem, and I don't know where exactly the problem is. I ran most of the samples and examples and they all worked well. Can anyone give me a hint or a solution for my problem? Thanks in advance!
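     For reference, the usual D3D9-era way to load a .x file is the D3DX helper D3DXLoadMeshFromX. A minimal sketch, assuming a valid IDirect3DDevice9* named device and with error handling trimmed (link against d3dx9.lib):

     #include <d3dx9.h>

     LPD3DXMESH   mesh         = nullptr;
     LPD3DXBUFFER materialBuf  = nullptr;
     DWORD        numMaterials = 0;

     HRESULT hr = D3DXLoadMeshFromX(L"model.x", D3DXMESH_MANAGED, device,
                                    nullptr,       // adjacency (optional)
                                    &materialBuf,  // materials + texture filenames
                                    nullptr,       // effect instances
                                    &numMaterials, &mesh);
     if (FAILED(hr))
     {
         // Most load failures are path problems or exporter settings -
         // check hr and the working directory first.
     }
     // Later: set each material/texture and call mesh->DrawSubset(i).

     If the call itself succeeds but the model renders wrong, the exporter settings (coordinate system, triangulation, embedded textures) are the next thing to check.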
  14. Hi, until now I use the typical vertex shader approach for skinning, with a constant buffer containing the transform matrices for the bones and a vertex buffer containing bone indices and bone weights. Now I have implemented realtime environment-probe cubemapping, so I have to render my scene from many points of view, and the time for skinning takes too long because it is recalculated for every side of the cubemap. For info: I am working on Windows 7 and therefore use Shader Model 5.0, not 5.x, which has more options - or is there a way to use 5.x on Windows 7? My graphics card is a DirectX 12 compatible NVIDIA GTX 960. The member turanszkij has posted a compute shader that is understandable to me (for info: in his engine he uses an optimized version of it): https://turanszkij.wordpress.com/2017/09/09/skinning-in-compute-shader/ Now my questions: is it possible to feed the compute shader with my original vertex buffer, or do I have to copy it into several ByteAddressBuffers as implemented in the following code? The same question applies to the constant buffer of matrices. My more urgent question is how do I feed my normal pipeline with the result of the compute shader, which is two RWByteAddressBuffers that contain position and normal. For example, I could use two vertex buffer bindings: 1. containing only the UV coordinates, 2. containing position and normal. How do I copy from the RWByteAddressBuffers to the vertex buffer? (Code from turanszkij.) Here is my shader implementation for skinning a mesh in a compute shader:

     struct Bone
     {
         float4x4 pose;
     };
     StructuredBuffer<Bone> boneBuffer;

     ByteAddressBuffer vertexBuffer_POS; // T-Pose pos
     ByteAddressBuffer vertexBuffer_NOR; // T-Pose normal
     ByteAddressBuffer vertexBuffer_WEI; // bone weights
     ByteAddressBuffer vertexBuffer_BON; // bone indices

     RWByteAddressBuffer streamoutBuffer_POS; // skinned pos
     RWByteAddressBuffer streamoutBuffer_NOR; // skinned normal
     RWByteAddressBuffer streamoutBuffer_PRE; // previous frame skinned pos

     inline void Skinning(inout float4 pos, inout float4 nor, in float4 inBon, in float4 inWei)
     {
         float4 p = 0, pp = 0;
         float3 n = 0;
         float4x4 m;
         float3x3 m3;
         float weisum = 0;

         // force loop to reduce register pressure
         // though this way we can not interleave TEX - ALU operations
         [loop]
         for (uint i = 0; ((i < 4) && (weisum < 1.0f)); ++i)
         {
             m = boneBuffer[(uint)inBon[i]].pose;
             m3 = (float3x3)m;

             p += mul(float4(pos.xyz, 1), m) * inWei[i];
             n += mul(nor.xyz, m3) * inWei[i];

             weisum += inWei[i];
         }

         bool w = any(inWei);
         pos.xyz = w ? p.xyz : pos.xyz;
         nor.xyz = w ? n : nor.xyz;
     }

     [numthreads(1024, 1, 1)]
     void main(uint3 DTid : SV_DispatchThreadID)
     {
         const uint fetchAddress = DTid.x * 16; // stride is 16 bytes for each vertex buffer now...

         uint4 pos_u = vertexBuffer_POS.Load4(fetchAddress);
         uint4 nor_u = vertexBuffer_NOR.Load4(fetchAddress);
         uint4 wei_u = vertexBuffer_WEI.Load4(fetchAddress);
         uint4 bon_u = vertexBuffer_BON.Load4(fetchAddress);

         float4 pos = asfloat(pos_u);
         float4 nor = asfloat(nor_u);
         float4 wei = asfloat(wei_u);
         float4 bon = asfloat(bon_u);

         Skinning(pos, nor, bon, wei);

         pos_u = asuint(pos);
         nor_u = asuint(nor);

         // copy prev frame current pos to current frame prev pos
         streamoutBuffer_PRE.Store4(fetchAddress, streamoutBuffer_POS.Load4(fetchAddress));

         // write out skinned props:
         streamoutBuffer_POS.Store4(fetchAddress, pos_u);
         streamoutBuffer_NOR.Store4(fetchAddress, nor_u);
     }
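     Regarding feeding the normal pipeline: a copy is not actually required. On feature level 11 hardware, one buffer can carry both D3D11_BIND_UNORDERED_ACCESS (so the compute shader writes it as a raw UAV) and D3D11_BIND_VERTEX_BUFFER (so the render pass reads it directly). A sketch under those assumptions, with illustrative names:

     // One buffer: raw UAV for the skinning CS, vertex buffer for rendering.
     D3D11_BUFFER_DESC bd = {};
     bd.ByteWidth = vertexCount * 16;   // one float4 per vertex, matching the CS stride
     bd.Usage     = D3D11_USAGE_DEFAULT;
     bd.BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_UNORDERED_ACCESS;
     bd.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS;

     ID3D11Buffer* skinnedPosVB = nullptr;
     device->CreateBuffer(&bd, nullptr, &skinnedPosVB);

     // Raw (byte-address) UAV over the same memory for the compute pass.
     D3D11_UNORDERED_ACCESS_VIEW_DESC uavd = {};
     uavd.Format             = DXGI_FORMAT_R32_TYPELESS;
     uavd.ViewDimension      = D3D11_UAV_DIMENSION_BUFFER;
     uavd.Buffer.NumElements = vertexCount * 4;   // 32-bit elements
     uavd.Buffer.Flags       = D3D11_BUFFER_UAV_FLAG_RAW;

     ID3D11UnorderedAccessView* skinnedPosUAV = nullptr;
     device->CreateUnorderedAccessView(skinnedPosVB, &uavd, &skinnedPosUAV);

     // Per frame: CSSetUnorderedAccessViews + Dispatch, unbind the UAV, then
     // IASetVertexBuffers(0, ..., &skinnedPosVB, ...) and DrawIndexed with the
     // original index buffer. A second vertex stream can carry the untouched UVs.

     With this layout the skinning runs once per frame and all six cubemap faces just re-draw the already-skinned buffer.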
  15. I wanted to see how others are currently handling descriptor heap updates and management. I've read a few articles, and there tend to be three major strategies:

     1. Split up descriptor heaps per shader stage (i.e. one for vertex shader, pixel, hull, etc.).
     2. Use one descriptor heap for an entire pipeline.
     3. Split up descriptor heaps by update frequency (i.e. EResourceSet_PerInstance, EResourceSet_PerPass, EResourceSet_PerMaterial, etc.).

     The benefit of the first two approaches is that they make it easier to port current code, and descriptor/resource management and updating tend to be easier, but they seem to be less efficient. The benefit of the third approach seems to be that it's the most efficient, because you only manage and update objects when they change.
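     One practical constraint worth noting: D3D12 only allows one CBV/SRV/UAV heap (plus one sampler heap) to be bound at a time, so in practice all three strategies tend to become range allocations within a single shader-visible heap rather than separate bound heaps. A sketch, with the descriptor budget as an assumption:

     // One shader-visible CBV/SRV/UAV heap; strategies split it into ranges.
     D3D12_DESCRIPTOR_HEAP_DESC hd = {};
     hd.Type           = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
     hd.NumDescriptors = 4096;   // illustrative per-frame budget
     hd.Flags          = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

     ID3D12DescriptorHeap* heap = nullptr;
     device->CreateDescriptorHeap(&hd, IID_PPV_ARGS(&heap));

     UINT inc = device->GetDescriptorHandleIncrementSize(
         D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
     // Hand out sub-ranges by update frequency, e.g. [0..N) per-frame,
     // [N..M) per-pass, the remainder per-material/per-instance, and point
     // root-signature descriptor tables at the range starts.

     Switching bound heaps mid-frame can flush the GPU on some hardware, which is another argument for the single-heap, frequency-partitioned layout.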
  16. Hi, can someone please explain why this is giving an "EyePosition != 0" assertion failure?

     _lightBufferVS->viewMatrix = DirectX::XMMatrixLookAtLH(
         XMLoadFloat3(&_lightBufferVS->position),
         XMLoadFloat3(&_lookAt),
         XMLoadFloat3(&up));

     It looks like DirectX doesn't want the 2nd parameter to be a zero vector in the assertion, but I passed in a zero vector with this exact same code in another program and it ran just fine. Here is the version of the code that worked - note that the XMLoadFloat3(&m_lookAt) parameter value is (0,0,0) at runtime (I debugged it), yet it throws no exceptions:

     m_viewMatrix = DirectX::XMMatrixLookAtLH(
         XMLoadFloat3(&m_position),
         XMLoadFloat3(&m_lookAt),
         XMLoadFloat3(&up));

     Here is the repo for the broken code (see LightClass): https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/LightClass.cpp and here is the repo with the alternative version of the code that works with a value of (0,0,0) for the second parameter: https://github.com/mister51213/DX11Port_SoftShadows/blob/master/Engine/lightclass.cpp
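     Worth knowing here: DirectXMath's XMMatrixLookAtLH asserts on the direction focus - eye, not on either point alone. A (0,0,0) lookAt is fine as long as the eye isn't also at the origin, so a plausible difference between the two programs is an uninitialized/zero light position at the time the matrix is first built. A defensive sketch reusing the post's variable names:

     #include <DirectXMath.h>
     using namespace DirectX;

     XMVECTOR eye   = XMLoadFloat3(&_lightBufferVS->position);
     XMVECTOR focus = XMLoadFloat3(&_lookAt);
     XMVECTOR up    = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);

     // Guard the degenerate case that trips the assert: focus == eye.
     if (XMVector3Equal(XMVectorSubtract(focus, eye), XMVectorZero()))
     {
         // Nudge the focus (or skip the view-matrix update this frame).
         focus = XMVectorAdd(eye, XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f));
     }
     XMMATRIX view = XMMatrixLookAtLH(eye, focus, up);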
  17. Hi, can somebody please tell me, in clear simple steps, how to debug and step through an HLSL shader file? I already did Debug > Start Graphics Debugging, then captured some frames from Visual Studio and double-clicked on a frame to open it, but I have no idea where to go from there. I've been searching for hours and there's no information on this, not even on the Microsoft website! They say "open the Graphics Pixel History window", but there is no such window! Then they say, in the "Pipeline Stages", choose "Start Debugging" - but the Start Debugging option is nowhere to be found in the whole interface. Also, how do I even open the HLSL file that I want to set a breakpoint in from inside the graphics debugger? All I want to do is set a breakpoint in a specific HLSL file, step through it, and see the data, but this is so unbelievably complicated, and Microsoft's instructions are horrible! Somebody please, please help.
  18. I finally ported Rastertek's tutorial #42 on soft shadows and blur shading. This tutorial has a ton of really useful effects, and there's no working version anywhere online. Unfortunately, it just draws a black screen, and I'm not sure what's causing it. I'm guessing the camera or ortho matrix transforms are wrong, or the light directions, or maybe texture resources not being properly initialized. I didn't change any of the variables, though - I only upgraded all types and functions (D3DXVECTOR3 to XMFLOAT3, etc.) and used DirectXTK for texture loading. If anyone is willing to take a look at what might be causing the black screen - maybe something pops out at you - let me know, thanks. https://github.com/mister51213/DX11Port_SoftShadows Also, for reference, here's tutorial #40, which has normal shadows but no blur, which I also ported, and which works perfectly: https://github.com/mister51213/DX11Port_ShadowMapping
  19. Does Direct3D 11 have an API function like glMemoryBarrier in OpenGL? For example: I bind a texture to a compute shader, the compute shader writes some values to the texture, I call Dispatch, and after that I read the texture contents back on the CPU side. In OpenGL, I know we can call glMemoryBarrier before reading to ensure that all of the texture's contents have been updated by the compute shader. How do I handle incoherent memory access in Direct3D 11? Thank you.
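     Short answer for context: D3D11 has no explicit barrier API because the runtime and driver track hazards and insert barriers automatically at bind, copy, and map time (explicit barriers only arrive in D3D12). A compute-to-CPU readback therefore just looks like this sketch (cs, uav, gpuTex, stagingTex are illustrative names):

     context->CSSetShader(cs, nullptr, 0);
     context->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
     context->Dispatch(groupsX, groupsY, 1);

     // Unbind the UAV so the texture can be used as a copy source.
     ID3D11UnorderedAccessView* nullUAV = nullptr;
     context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);

     // CopyResource and Map are automatically ordered after the dispatch.
     context->CopyResource(stagingTex, gpuTex);

     D3D11_MAPPED_SUBRESOURCE m = {};
     context->Map(stagingTex, 0, D3D11_MAP_READ, 0, &m);  // blocks until the GPU is done
     // ... read m.pData ...
     context->Unmap(stagingTex, 0);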
  20. I made a spotlight that:

     1. Projects 3D models onto a render target from each light's POV to simulate shadows.
     2. Cuts a circle out of the square of light that is projected onto the render target by the light frustum, and only lights up the pixels inside that circle (except the shadowed parts, of course), so you don't see the square edges of the projected frustum.

     After an if check to see whether the dot product of the light direction and the light-to-vertex vector is greater than .95 (to get my initial cutoff), I then multiply the light intensity value inside the resulting circle by the same dot product value, which should range between .95 and 1.0. This should give the light inside that circle a falloff from 100% lit to 0% lit toward the edge of the circle. However, there is no falloff - it's just all equally lit inside the circle. Why on earth, I have no idea. If someone could take a gander and let me know, please help, thank you so much.

     float CalculateSpotLightIntensity(
         float3 LightPos_VertexSpace,
         float3 LightDirection_WS,
         float3 SurfaceNormal_WS)
     {
         //float3 lightToVertex = normalize(SurfacePosition - LightPos_VertexSpace);
         float3 lightToVertex_WS = -LightPos_VertexSpace;

         float dotProduct = saturate(dot(normalize(lightToVertex_WS), normalize(LightDirection_WS)));

         // METALLIC EFFECT (deactivate for now)
         float metalEffect = saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));

         if (dotProduct > .95 /*&& metalEffect > .55*/)
         {
             return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));
             //return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))) * dotProduct;
             //return dotProduct;
         }
         else
         {
             return 0;
         }
     }

     float4 LightPixelShader(PixelInputType input) : SV_TARGET
     {
         float2 projectTexCoord;
         float depthValue;
         float lightDepthValue;
         float4 textureColor;

         // Set the bias value for fixing the floating point precision issues.
         float bias = 0.001f;

         // Set the default output color to the ambient light value for all pixels.
         float4 lightColor = cb_ambientColor;

         /////////////////// NORMAL MAPPING //////////////////
         float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex);

         // Expand the range of the normal value from (0, +1) to (-1, +1).
         bumpMap = (bumpMap * 2.0f) - 1.0f;

         // Change the COORDINATE BASIS of the normal into the space represented by basis vectors tangent, binormal, and normal!
         float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal));

         //////////////// LIGHT LOOP ////////////////
         for (int i = 0; i < NUM_LIGHTS; ++i)
         {
             // Calculate the projected texture coordinates.
             projectTexCoord.x =  input.vertex_ProjLightSpace[i].x / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
             projectTexCoord.y = -input.vertex_ProjLightSpace[i].y / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;

             if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
             {
                 // Sample the shadow map depth value from the depth texture using the sampler at the projected texture coordinate location.
                 depthValue = shaderTextures[6 + i].Sample(SampleTypeClamp, projectTexCoord).r;

                 // Calculate the depth of the light.
                 lightDepthValue = input.vertex_ProjLightSpace[i].z / input.vertex_ProjLightSpace[i].w;

                 // Subtract the bias from the lightDepthValue.
                 lightDepthValue = lightDepthValue - bias;

                 float lightVisibility = shaderTextures[6 + i].SampleCmp(SampleTypeComp, projectTexCoord, lightDepthValue);

                 // Compare the depth of the shadow map value and the depth of the light to determine whether to shadow or to light this pixel.
                 // If the light is in front of the object then light the pixel, if not then shadow this pixel since an object (occluder) is casting a shadow on it.
                 if (lightDepthValue < depthValue)
                 {
                     // Calculate the amount of light on this pixel.
                     float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i])));

                     if (lightIntensity > 0.0f)
                     {
                         // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                         float spotLightIntensity = CalculateSpotLightIntensity(
                             input.lightPos_LS[i], // NOTE - this is NOT NORMALIZED!!!
                             cb_lights[i].lightDirection,
                             bumpNormal /*input.normal*/);

                         lightColor += cb_lights[i].diffuseColor * spotLightIntensity * .18f; // spotlight
                         //lightColor += cb_lights[i].diffuseColor * lightIntensity * .2f; // square light
                     }
                 }
             }
         }

         // Saturate the final light color.
         lightColor = saturate(lightColor);
         // lightColor = saturate( CalculateNormalMapIntensity(input, lightColor, cb_lights[0].lightDirection));

         // TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location.
         input.tex.x += textureTranslation;

         // BLENDING
         float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex);
         float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex);
         float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex);
         textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2));

         // Combine the light and texture color.
         float4 finalColor = lightColor * textureColor;

         /////// TRANSPARENCY /////////
         //finalColor.a = 0.2f;

         return finalColor;
     }

     Light_vs.hlsl Light_ps.hlsl
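     A likely reason the falloff is invisible: inside the cone the dot product only spans [0.95, 1.0], so multiplying by it changes the intensity by at most 5%, which the eye cannot see. Remapping that narrow range to [0, 1] gives a real gradient. Illustrative C++ (the same expression drops into the HLSL unchanged, with saturate in place of the clamp):

     #include <algorithm>

     // Remap the spotlight dot product from [cosCutoff, 1] to [0, 1] so the
     // cone actually fades to zero at its edge.
     float SpotFalloff(float dotProduct, float cosCutoff /* e.g. 0.95f */)
     {
         float t = (dotProduct - cosCutoff) / (1.0f - cosCutoff);
         t = std::max(0.0f, std::min(1.0f, t));   // saturate
         return t * t;   // optional: square for a smoother roll-off
     }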
  21. I'm trying to code the Rayleigh part of Nishita's model ("Display Method of the Sky Color Taking into Account Multiple Scattering"). I get a black screen, no colors. Can anyone find the issue for me?

     [numthreads(32, 32, 1)] // dispatch 8, 8, 1 - it's a 256 by 256 image
     void ComputeSky(uint3 DTID : SV_DispatchThreadID)
     {
         float X = ((2 * DTID.x) / 255) - 1;
         float Y = 1 - ((2 * DTID.y) / 255);
         float r = sqrt(((X*X) + (Y*Y)));
         float Theta = r * (PI);
         float Phi = atan2(Y, X);

         static float3 Eye = float3(0, 10, 0);
         float ViewOD = 0, SunOD = 0, tmpDensity = 0;
         float3 Attenuation = 0, tmp = 0, Irgb = 0;

         //if (r <= 1)
         {
             float3 ViewDir = normalize(float3(sin(Theta)*cos(Phi), cos(Theta), sin(Theta)*sin(Phi)));
             float ViewRayLength = RaySphereIntersection(Eye, ViewDir, float3(0, 0, 0), OutterRadius);
             float SampleLength = ViewRayLength / Ksteps;
             //vSunDir = normalize(vSunDir);
             float cosTheta = dot(normalize(vSunDir), ViewDir);
             float3 tmpPos = Eye + 0.5 * SampleLength * ViewDir;

             for (int k = 0; k < Ksteps; k++)
             {
                 float SunRayLength = RaySphereIntersection(tmpPos, vSunDir, float3(0, 0, 0), OutterRadius);
                 float3 TopAtmosphere = tmpPos + SunRayLength*vSunDir;

                 ViewOD = OpticalDepth(Eye, tmpPos);
                 SunOD = OpticalDepth(tmpPos, TopAtmosphere);
                 tmpDensity = Density(length(tmpPos) - InnerRadius);

                 Attenuation = exp(-RayleighCoeffs*(ViewOD + SunOD));
                 tmp += tmpDensity*Attenuation;
                 tmpPos += SampleLength * ViewDir;
             }

             Irgb = RayleighCoeffs*RayleighPhaseFunction(cosTheta)*tmp*SampleLength;
             SkyColors[DTID.xy] = float4(Irgb, 1);
         }
     }
  22. I'm trying to triangulate 3D points using DirectX 11: I triangulate the 3D points and then try to draw the triangles. The outcome of the triangulation is a std::vector<Tri>, where each Tri holds three values a, b, c. I don't see any output; I think I have a problem with the math. Here is my code: https://pastebin.com/SQ8z3WAt
  23. Please look at my new post in this thread, where I supply new information! I'm trying to implement SSAO in my 'engine' (based on this article), but I'm getting odd results. I know I'm doing something wrong, but I can't figure out what's causing the particular issue I'm having at the moment. Here's a video of what it looks like. The rendered output is the SSAO map. As you can see, the result is heavily altered depending on the camera (although it seems to be unaffected by camera translation). The fact that the occlusion itself isn't correct isn't much of a problem at this stage, since I've hardcoded a lot of stuff that shouldn't be. E.g. I don't have a random-vector texture; all I do is use one of the sample vectors in order to construct the TBN matrix. One issue at a time... My shaders are as follows:

     //SSAO VS
     struct VS_IN
     {
         float3 pos : POSITION;
         float3 ray : VIEWRAY;
     };

     struct VS_OUT
     {
         float4 pos : SV_POSITION;
         float4 ray : VIEWRAY;
     };

     VS_OUT VS_main(VS_IN input)
     {
         VS_OUT output;
         output.pos = float4(input.pos, 1.0f); //already in NDC space, pass through
         output.ray = float4(input.ray, 0.0f); //interpolate view ray
         return output;
     }

     //SSAO PS
     Texture2D depthTexture : register(t0);
     Texture2D normalTexture : register(t1);

     struct VS_OUT
     {
         float4 pos : SV_POSITION;
         float4 ray : VIEWRAY;
     };

     cbuffer cbViewProj : register(b0)
     {
         float4x4 view;
         float4x4 projection;
     }

     float4 PS_main(VS_OUT input) : SV_TARGET
     {
         //Generate samples
         float3 kernel[8];
         kernel[0] = float3(1.0f, 1.0f, 1.0f);
         kernel[1] = float3(-1.0f, -1.0f, 0.0f);
         kernel[2] = float3(-1.0f, 1.0f, 1.0f);
         kernel[3] = float3(1.0f, -1.0f, 0.0f);
         kernel[4] = float3(1.0f, 1.0f, 0.0f);
         kernel[5] = float3(-1.0f, -1.0f, 1.0f);
         kernel[6] = float3(-1.0f, 1.0f, .0f);
         kernel[7] = float3(1.0f, -1.0f, 1.0f);

         //Get texcoord using SV_POSITION
         int3 texCoord = int3(input.pos.xy, 0);

         //Fragment viewspace position (non-linear depth)
         float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);

         //world space normal transformed to view space and normalized
         float3 normal = normalize(mul(view, float4(normalTexture.Load(texCoord).xyz, 0.0f)));

         //Grab arbitrary vector for construction of TBN matrix
         float3 rvec = kernel[3];
         float3 tangent = normalize(rvec - normal * dot(rvec, normal));
         float3 bitangent = cross(normal, tangent);
         float3x3 tbn = float3x3(tangent, bitangent, normal);

         float occlusion = 0.0;
         for (int i = 0; i < 8; ++i)
         {
             // get sample position:
             float3 samp = mul(tbn, kernel[i]);
             samp = samp * 1.0f + origin;

             // project sample position:
             float4 offset = float4(samp, 1.0);
             offset = mul(projection, offset);
             offset.xy /= offset.w;
             offset.xy = offset.xy * 0.5 + 0.5;

             // get sample depth. (again, non-linear depth)
             float sampleDepth = depthTexture.Load(int3(offset.xy, 0)).r;

             // range check & accumulate:
             occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);
         }

         //Average occlusion
         occlusion /= 8.0;
         return min(occlusion, 1.0f);
     }

     I'm fairly sure my matrices are correct (view and projection) and that the input rays are correct. I don't think the non-linear depth is the problem here either, but what do I know... I haven't fixed the linear depth, mostly because I don't really understand how it's done... Any ideas are very appreciated!
  24. There are a number of options now in each API for storing data on the GPU. Specifically in DX11, we have buffers, which can be bound as vertex buffers, index buffers, constant buffers, or shader resources (a structured buffer, for example). Constant buffers have the most limitations, and I think that's because they are so optimized for non-random access (an optimization over which we have no control). Vertex buffers and index buffers, however, have few limitations compared to shader resource buffers, to the point that I question their value. For example, the common way of drawing geometry is to provide a vertex buffer (and maybe an instance buffer) through a specific call to SetVertexBuffers. We also provide index buffers with a specific call, and at this point we also have to provide an input layout. That is significantly more management overhead than if we provided the vertex and index buffers through shader resources and indexed them with system values (e.g. SV_VertexID) in the shader. Now, I haven't actually tried doing vertex buffer management this way, but I am actually looking forward to it, if no one points out the faults in my way of thinking.
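     For illustration, a host-side sketch of this "vertex pulling" idea (Vertex, initData and vertexCount are illustrative; the HLSL side is indicated in the trailing comment):

     struct Vertex { float pos[3]; float nrm[3]; float uv[2]; };

     // Bind the vertex data as a structured-buffer SRV instead of a vertex buffer.
     D3D11_BUFFER_DESC bd = {};
     bd.ByteWidth           = vertexCount * sizeof(Vertex);
     bd.Usage               = D3D11_USAGE_DEFAULT;
     bd.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
     bd.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
     bd.StructureByteStride = sizeof(Vertex);

     ID3D11Buffer* vb = nullptr;
     device->CreateBuffer(&bd, &initData, &vb);

     ID3D11ShaderResourceView* vbSRV = nullptr;
     device->CreateShaderResourceView(vb, nullptr, &vbSRV);

     // Draw: no vertex buffer bound, no input layout at all.
     context->IASetInputLayout(nullptr);
     context->VSSetShaderResources(0, 1, &vbSRV);
     context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
     context->Draw(vertexCount, 0);

     // HLSL side (sketch):
     //   StructuredBuffer<Vertex> verts : register(t0);
     //   VSOut main(uint vid : SV_VertexID) { Vertex v = verts[vid]; ... }

     The main trade-offs: input layouts and format conversion (e.g. reading R8G8B8A8_UNORM as float4) come for free with the input assembler, while the SRV path requires manual unpacking, and performance varies by hardware since some GPUs have dedicated vertex-fetch paths.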
  25. Hi all, I'm working on a terrain engine with DX11. The engine is supposed to render, every frame, a surface that represents ~180x180 km, and the texturing should look good from high above but also close to the ground (the engine is part of a game, and the camera view can come down to 0 altitude, basically). More regarding texturing: since this is a replacement engine for an existing one (which uses much lower mesh resolution on DX9), the textures already exist, and there are basically ~3000 of them. On DX9 the engine issues a draw call for every texture, which results in ~1000 draw calls per frame for the terrain (as not all textures are usually used per frame, and of course many are tiled more than once, since the mesh is huge). With DX11 I'm storing all those ~3500 textures in 2 texture arrays and using very few draw calls (thanks to the tessellation technique, the textures can be done with 1 draw call). In order to select which textures are tiled where, I have a blend map - a 1024x1024 16-bit RAW file - that I sample in the pixel shader, choosing by its value which of the 2 arrays to use and which texture to tile at that UV coordinate. Now, since I want to fight the huge number of textures that would require a lot of VRAM (assuming I need at least 3K textures at, say, an acceptable resolution of 1024x1024 DXT1, that's already more than 2 GB that I would have to store in VRAM just for the terrain's art textures), my idea for texturing was to use another set of textures in multi-texture fashion to tile most of the terrain, and leave only special areas that require a special look to be tiled with more unique textures. But now everything got complicated as I learned about Tiled Resources. The thing is, I simply don't get how EXACTLY they work. I have an idea after reading what I could on the web, and I even have the source code of the Mars rendering demo linked at the bottom of this page: https://blogs.windows.com/windowsexperience/2014/01/30/directx-11-2-tiled-resources-enables-optimized-pc-gaming-experiences/ Also I saw this PPT (didn't hear the lecture, though): https://channel9.msdn.com/Events/Build/2013/4-063 So I simply want to ask:

     1. According to what I described, and assuming my engine eventually renders real areas of the world, would Tiled Resources be my best choice, even increasing the number of textures in use? Basically the dream of artists is to use a unique satellite image for the textures...
     2. Currently the engine is designed for D3D 11.0 only; we didn't mean to go to 11.2, which would require users to have at least Windows 8.1. Is it worth it?
     3. From a coding POV, how complicated is it to handle Tiled Resources? As I understand it, the idea is to let the app specify which textures are needed for the current frame, so that only the needed subresources (i.e. the necessary textures and only the necessary mip levels) are uploaded to VRAM. Does that mean I will need some app-side (C++ for me) code that tells the rendering code which textures, and which mip levels of each texture, I need? Or is it something "automatic"?
     4. I do have the Mars example from Microsoft which I linked above, and I'm going to inspect it more deeply (they use a 16K^2, 1 GB texture for the rendering, but say they use only 16 MB of VRAM per frame, and I could verify with Process Explorer that the app uses only ~80 MB of VRAM total). But I don't know if it really represents the same complexity that I have, i.e. many different textures, as there it's only 1 large texture, if I understand it correctly.

     Any help would be welcome.
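     On question 3: residency is not automatic - the application decides which tiles/mips it needs (usually via a feedback or prediction pass), allocates a tile pool, and maps tiles with ID3D11DeviceContext2::UpdateTileMappings before sampling them. And since support varies by hardware, the first step is a capability check on the 11.2 runtime (Windows 8.1+). A sketch:

     D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts = {};
     if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1,
                                               &opts, sizeof(opts))))
     {
         switch (opts.TiledResourcesTier)
         {
         case D3D11_TILED_RESOURCES_NOT_SUPPORTED: /* fall back to atlases/arrays */ break;
         case D3D11_TILED_RESOURCES_TIER_1:        /* buffers + 2D textures       */ break;
         case D3D11_TILED_RESOURCES_TIER_2:        /* + NULL-tile reads, clamps   */ break;
         }
     }

     Tier 2 is what makes sparse terrain texturing comfortable, since reads from unmapped tiles return zero instead of undefined data and per-sample min-mip clamps are available.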