Search the Community

Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.



Found 1349 results

1. Hello, I'm currently implementing http://roar11.com/2015/07/screen-space-glossy-reflections/, but I don't know how to calculate viewRay. Here is my solution; it differs quite a bit from the example, and it works but looks bad at the edges of the screen.

```hlsl
float3 viewRay = normalize(VSPositionFromDepth(input.tex.xy, depth, sslr_invProjMatrix));
float3 rayOriginVS = viewRay * LinearDepth(depth);

/*
 * Since position is reconstructed in view space, just normalize it to get the
 * vector from the eye to the position and then reflect that around the normal to
 * get the ray direction to trace.
 */
float3 toPositionVS = normalize(worldPosition.xyz - camPosition.xyz);
float3 rayDirectionVS = normalize(reflect(toPositionVS, normalVS));
rayDirectionVS = mul(float4(rayDirectionVS, 0), sslr_viewMatrix);

// output rDotV to the alpha channel for use in determining how much to fade the ray
float rDotV = dot(rayDirectionVS, rayOriginVS);
```

Is there a better way to do this? I've been googling for a few days but can't find any working solution. Thanks!
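For reference, a common way to build the ray is to reconstruct the view-space position directly from the depth buffer and then stay in view space throughout; the snippet above mixes a world-space eye vector with a view-space normal. A sketch, assuming the same row-vector mul convention as the post and a [0,1] hardware depth; VSPositionFromDepth here is a hypothetical helper matching the name in the post:

```hlsl
// Hypothetical helper: reconstruct the view-space position of this pixel.
// Assumes tex is in [0,1] with y pointing down and depth is the raw hardware depth.
float3 VSPositionFromDepth(float2 tex, float depth, float4x4 invProj)
{
    // Map UV to clip space: x right, y up, z = hardware depth.
    float4 clip = float4(tex.x * 2.0f - 1.0f, (1.0f - tex.y) * 2.0f - 1.0f, depth, 1.0f);
    float4 view = mul(clip, invProj);   // row-vector convention, as in the post
    return view.xyz / view.w;           // perspective divide gives view-space position
}

// In view space the eye sits at the origin, so the reconstructed position is
// itself the eye-to-pixel ray scaled by distance; no world-space terms needed.
float3 positionVS   = VSPositionFromDepth(input.tex.xy, depth, sslr_invProjMatrix);
float3 toPositionVS = normalize(positionVS);
float3 rayDirVS     = normalize(reflect(toPositionVS, normalVS)); // normalVS must be view space too
```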
2. In our old DX9 renderer I used alpha-to-coverage to render huge amounts of ground cover such as grass, plants, and flowers. It worked really well because depth buffering remained enabled and nothing had to be sorted. I would like to use our new DX11 deferred renderer to draw all of our ground-cover foliage, because that would let all of the foliage be lit in the standard deferred lighting pass. However, I'm not having any success. Here's how I'm setting up the DX11 blend state:

```cpp
D3D11_BLEND_DESC::AlphaToCoverageEnable          = true
D3D11_BLEND_DESC::IndependentBlendEnable         = false
D3D11_BLEND_DESC::RenderTarget[0].BlendEnable    = false
D3D11_BLEND_DESC::RenderTarget[0].SrcBlend       = D3D11_BLEND_ONE
D3D11_BLEND_DESC::RenderTarget[0].DestBlend      = D3D11_BLEND_ZERO
D3D11_BLEND_DESC::RenderTarget[0].BlendOp        = D3D11_BLEND_OP_ADD
D3D11_BLEND_DESC::RenderTarget[0].SrcBlendAlpha  = D3D11_BLEND_ONE
D3D11_BLEND_DESC::RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO
D3D11_BLEND_DESC::RenderTarget[0].BlendOpAlpha   = D3D11_BLEND_OP_ADD
```

The deferred renderer has three render targets:

  • RT0: R8G8B8A8_UNORM: RGB = diffuse, A = alpha-to-coverage output
  • RT1: R16G16B16A16_FLOAT: RGB = emissive, A = unused
  • RT2: R8G8B8A8_UNORM: RG = normal, B = spec power, A = spec intensity

I assume that to use alpha-to-coverage in a deferred renderer I simply need to set AlphaToCoverageEnable to true in the blend state and then output an opacity value to RT0's alpha component. But when I try this, whenever the shader outputs an alpha value <= 0.5 to RT0, nothing at all is written to the RGBA components of RT0; when an alpha value > 0.5 is written, the RGBA components of RT0 are written just fine. Any help appreciated!
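For what it's worth, a hard cutoff at alpha = 0.5 is exactly what alpha-to-coverage produces on a single-sample target: with one coverage sample per pixel, the alpha either covers that sample or it doesn't. Smooth coverage gradients require MSAA render targets. A sketch of what that implies for the G-buffer (names hypothetical):

```cpp
// Alpha-to-coverage dithers alpha across MSAA samples, so the G-buffer
// must be multisampled to get anything other than a binary 0.5 cutoff.
D3D11_TEXTURE2D_DESC rtDesc = {};
rtDesc.Width              = width;
rtDesc.Height             = height;
rtDesc.MipLevels          = 1;
rtDesc.ArraySize          = 1;
rtDesc.Format             = DXGI_FORMAT_R8G8B8A8_UNORM;
rtDesc.SampleDesc.Count   = 4;  // 4x MSAA -> 4 coverage levels
rtDesc.SampleDesc.Quality = 0;
rtDesc.Usage              = D3D11_USAGE_DEFAULT;
rtDesc.BindFlags          = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
device->CreateTexture2D(&rtDesc, nullptr, &gbufferRT0);

// The depth buffer and the other G-buffer targets must share the same
// SampleDesc, and the lighting pass then has to read per-sample
// (Texture2DMS in HLSL) or resolve the targets first.
```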
3. In my render pipeline I have a large 3k-by-3k render target meant for terrain splatting, and another one that uses the splatting output to render the terrain's final texture. As you can guess, I'm hitting ~40 fps on my weaker laptop. What would be the best way to optimize this without lowering the resolution of my render targets? In essence, how do you optimize textures that need to be large? One thing I've thought about (correct me if it's likely wrong): subdividing the render target into four quads and rendering them separately.
4. I need to bind a shader resource as a structured buffer. Textures (t#) and constant buffers (b#) are listed as expected, but not StructuredBuffers? I've looked at https://msdn.microsoft.com/en-us/library/windows/desktop/dd607359(v=vs.85).aspx and it still puzzles me: what register type is used for StructuredBuffers in HLSL?
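For reference: read-only structured buffers are shader resource views, so they use t registers just like textures; the writable variants use u registers. A minimal sketch:

```hlsl
struct Particle { float3 position; float3 velocity; };

// Read-only structured buffer: bound as an SRV, so it lives in a t register.
StructuredBuffer<Particle>   particlesIn  : register(t0);

// Writable variant: bound as a UAV, so it lives in a u register.
RWStructuredBuffer<Particle> particlesOut : register(u0);
```

On the C++ side the buffer itself is created with D3D11_RESOURCE_MISC_BUFFER_STRUCTURED and bound through an SRV via PSSetShaderResources (or CSSetShaderResources), the same way a texture is.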
5. I don't know whether this is correct or wrong. This is how I load the UVs:

```cpp
UV = new XMFLOAT2[NumVertices];
for (int i = 0; i < Mesh->GetPolygonCount(); i++) // polygon (mostly rectangle) count
{
    FbxLayerElementArrayTemplate<FbxVector2>* uvVertices = NULL;
    Mesh->GetTextureUV(&uvVertices);
    for (int j = 0; j < Mesh->GetPolygonSize(i); j++) // number of vertices in this polygon
    {
        FbxVector2 uv = uvVertices->GetAt(Mesh->GetTextureUVIndex(i, j));
        UV[3 * i + j] = XMFLOAT2((float)uv.mData[0], (float)uv.mData[1]);
    }
}
```

How do I correct this? The image below was captured when I was trying to load a texture onto a plane.
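One thing that stands out: UV[3 * i + j] only works if every polygon has exactly three vertices, but the comment says the polygons are mostly rectangles, so indices collide and get skipped. A sketch using a running counter instead (same FBX calls as the post; the V flip is an assumption about the target being D3D's top-left UV origin):

```cpp
// Fetch the UV array once, outside the loops.
FbxLayerElementArrayTemplate<FbxVector2>* uvVertices = NULL;
Mesh->GetTextureUV(&uvVertices);

int v = 0; // running index across all polygon-vertices
for (int i = 0; i < Mesh->GetPolygonCount(); i++)
{
    for (int j = 0; j < Mesh->GetPolygonSize(i); j++)
    {
        FbxVector2 uv = uvVertices->GetAt(Mesh->GetTextureUVIndex(i, j));
        // FBX uses a bottom-left UV origin, D3D a top-left one, hence the V flip.
        UV[v++] = XMFLOAT2((float)uv.mData[0], 1.0f - (float)uv.mData[1]);
    }
}
```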
6. I am currently working on implementing a paper which requires me to use a downsampled texture of type int2, of resolution (width/k, height/k), where k is an unsigned integer in [1, ∞). I need to sample this smaller int2 texture in my full-screen pass, effectively once per window pixel, but I cannot use texture.Sample since it is an int2 type, and because it is smaller than the screen size I have read that I am not able to use texture.Load either. So in short: I need to use a downsampled int2 texture in a full-screen rendering pass, but I don't know how to sample this texture properly.
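For what it's worth, Texture2D.Load is not restricted to screen-sized textures; it takes integer texel coordinates in the target texture's own resolution, so scaling the window pixel coordinate down by k is enough. A sketch (names hypothetical):

```hlsl
// Integer-format textures can't be filtered, but Load works regardless of the
// texture's size relative to the screen; only the coordinate needs scaling
// into the smaller texture's texel grid. k is the downsample factor.
Texture2D<int2> downsampledTex : register(t0);

int2 SampleDownsampled(float4 svPosition, uint k)
{
    // SV_Position.xy is the window pixel center (x + 0.5, y + 0.5);
    // integer-dividing by k maps it to the matching low-res texel.
    uint2 lowResCoord = uint2(svPosition.xy) / k;
    return downsampledTex.Load(int3(lowResCoord, 0)); // mip 0
}
```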
7. For example, in a deferred renderer, in the pixel shader I would like to write out the distance from the camera to the pixel (read from a G-buffer R32F texture) at the current cursor position. What kinds of buffers could I use (if any, except render targets), and how do I set them up so the value is accessible on the CPU side? I couldn't find any example code with Google, and I don't know which keywords to search for. I tried to write some code, but I am stuck because I don't know the correct "semantic" for the shader code side.

C++:

```cpp
struct MyStruct { vec4 myData; };
...
std::memset(&buffDesc, 0, sizeof(buffDesc));
buffDesc.Usage = D3D11_USAGE_DEFAULT;
buffDesc.ByteWidth = sizeof(MyStruct);
buffDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
buffDesc.CPUAccessFlags = 0;
hr = dev->CreateBuffer(&buffDesc, nullptr, &m_writeBuffer);
if (FAILED(hr)) { ... }

std::memset(&buffDesc, 0, sizeof(buffDesc));
buffDesc.Usage = D3D11_USAGE_STAGING;
buffDesc.ByteWidth = sizeof(MyStruct);
buffDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
buffDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
hr = dev->CreateBuffer(&buffDesc, nullptr, &m_readBuffer);
if (FAILED(hr)) { ... }
...
// after drawing
Context->CopyResource(m_readBuffer, m_writeBuffer);
MyStruct s;
D3D11_MAPPED_SUBRESOURCE ms;
Context->Map(m_readBuffer, 0, D3D11_MAP_READ, 0, &ms);
s.myData = *(vec4*)ms.pData;
Context->Unmap(m_readBuffer, 0);
...
// use myData
```

HLSL:

```hlsl
??? MyStruct : register(???)
{
    float4 myData;
};

void PixelShader(...)
{
    ...
    myData = something;
    ...
}
```
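One way to do this without a render target is a one-element RWStructuredBuffer bound to the pixel shader as a UAV, which is then copied into a staging buffer for the CPU. A sketch under the assumption of three G-buffer render targets; names are hypothetical:

```hlsl
// A UAV in a pixel shader: UAV slots are shared with the render-target slots,
// so with 3 G-buffer RTVs the first usable UAV register is u3.
RWStructuredBuffer<float4> debugOut : register(u3);

// Inside the pixel shader, when this pixel is under the cursor:
//     debugOut[0] = float4(linearDepth, 0, 0, 0);
```

And the matching C++ setup; note that a staging buffer must have zero BindFlags, which is one problem with the code above:

```cpp
// GPU-writable buffer: UNORDERED_ACCESS, structured so a UAV can view it.
D3D11_BUFFER_DESC bd = {};
bd.Usage               = D3D11_USAGE_DEFAULT;
bd.ByteWidth           = sizeof(MyStruct);
bd.BindFlags           = D3D11_BIND_UNORDERED_ACCESS;
bd.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
bd.StructureByteStride = sizeof(MyStruct);
dev->CreateBuffer(&bd, nullptr, &m_writeBuffer);
// ... then create a D3D11_UAV_DIMENSION_BUFFER UAV (m_writeUAV) over it ...

// Staging copy: no bind flags at all, just CPU read access.
bd = {};
bd.Usage          = D3D11_USAGE_STAGING;
bd.ByteWidth      = sizeof(MyStruct);
bd.BindFlags      = 0;
bd.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
dev->CreateBuffer(&bd, nullptr, &m_readBuffer);

// Bind the RTVs and the UAV together; UAVStartSlot must equal the RTV count.
Context->OMSetRenderTargetsAndUnorderedAccessViews(
    3, rtvs, dsv, 3, 1, &m_writeUAV, nullptr);
```

Mapping the staging buffer right after CopyResource stalls until the GPU catches up; reading it back with a frame of latency avoids the hitch.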
8. Hi, I am trying to port J. Coluna's XNA LPP sample https://jcoluna.wordpress.com/page/2/ to my DX engine, and I've run into some problems. I am currently using an inversed depth buffer in my engine to increase depth precision, and I don't know all the places where it will affect my renderer, but obviously something is wrong now: the normal and depth render targets show nothing, and my model renders without depth (just ambient). In the Clear GBuffer shader I left Depth as float4(1,1,1,1); since depth is written in view space, I don't think I need to invert it.

```hlsl
struct VertexShaderInput
{
    float4 Position : SV_POSITION;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    input.Position.w = 1;
    output.Position = input.Position;
    return output;
}

struct PixelShaderOutput
{
    float4 Normal : SV_TARGET0;
    float4 Depth  : SV_TARGET1;
};

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    // this value depends on your normal encoding method;
    // in our example it will generate a (0,0,-1) normal
    output.Normal = float4(0.5, 0.5, 0.5, 0);
    // max depth
    output.Depth = float4(1, 1, 1, 1);
    return output;
}

technique Clear
{
    pass ClearPass
    {
        Profile = 11;
        VertexShader = VertexShaderFunction;
        PixelShader = PixelShaderFunction;
    }
}
```

And here is my LPP main shader:

```hlsl
VertexShaderOutput VertexShaderFunction(MeshVertexInput input)
{
    VertexShaderOutput output;
    input.Position.w = 1;
    float3 viewSpacePos = mul(input.Position, WorldView);
    output.Position = mul(input.Position, WorldViewProjection);
    output.TexCoord = input.UV0; // pass the texture coordinates further

    // we output our normals/tangents/binormals in view space
    output.Normal   = normalize(mul(input.Normal, (float3x3)WorldView));
    output.Tangent  = normalize(mul(input.Tangent.xyz, (float3x3)WorldView));
    output.Binormal = normalize(cross(output.Normal, output.Tangent) * input.Tangent.w);

    output.Depth = viewSpacePos.z; // pass depth
    return output;
}

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output = (PixelShaderOutput) 1;

    // read from our normal map
    half4 normalMap = NormalMap.Sample(NormalMapSampler, input.TexCoord);
    half3 normalViewSpace = NormalMapToSpaceNormal(normalMap.xyz, input.Normal, input.Binormal, input.Tangent);

    output.Normal.rg = EncodeNormal(normalize(normalViewSpace)); // our encoder outputs in RG channels
    output.Normal.b  = normalMap.a; // our specular power goes into the B channel
    output.Normal.a  = 1;           // not used

    output.Depth.r = -input.Depth / FarClip; // output depth in linear space, [0..1]
    return output;
}
```

But as a result I get the following image (see attachment). At the top of the window there should be three render targets (normals, depth, and light). Light is working (more or less), but the normal and depth targets display nothing at all; as far as I can tell I only get color for my model. The first screenshot is from my forward renderer with a directional light, the second from my LPP renderer with a directional light. As you can see, the three debug render targets at the top of the window show no output. I can't understand what I am doing wrong here. Maybe someone could point me in the right direction.
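A short reversed-depth checklist that tends to matter in ports like this (general notes, not a diagnosis of this particular renderer):

```cpp
// With an inverted (reversed) depth buffer, 'far' is 0 rather than 1, so
// both the clear value and the depth comparison flip:
context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 0.0f, 0);

D3D11_DEPTH_STENCIL_DESC ds = {};
ds.DepthEnable    = TRUE;
ds.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
ds.DepthFunc      = D3D11_COMPARISON_GREATER; // was LESS with conventional depth

// The linear view-space depth written to the G-buffer (Depth.r above) is
// unaffected by the reversal; only the hardware depth buffer flips.
```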
9. Hi guys, I am presently trying to render a video frame to a quad in DirectX 11 using Microsoft Media Foundation, but I am having a problem with TransferVideoFrame:

```cpp
m_spMediaEngine->TransferVideoFrame(texture, rect, rcTarget, &m_bkgColor);
```

The call keeps returning 'invalid parameter'. Some googling revealed this: https://stackoverflow.com/questions/15711054/how-do-i-grab-frames-from-a-video-stream-on-windows-8-modern-apps which explains that there is (was?) a bug in the NVIDIA drivers back in 2013; other searches say this bug was fixed. The very odd thing is, if I run the application through the Visual Studio graphics debugger, the video renders perfectly to the quad with no errors, which would suggest my code is sound. I can't do as the link suggests and try to create the device as feature level 9_x, as this is an extension I am creating for GameMaker 2 and I don't have access to renderer creation (only the device handle, which is fine most of the time). I am presently downloading VS 2017 to see if it behaves better with more recent SDKs (I am currently using VS 2015). Thanks in advance.
10. Hi guys, I am having a bit of a problem with a dynamic texture. It is created without error, and I am attempting to initialize the first pixel to white to make sure I am mapping correctly. But when I draw the texture onto the quad, the whole quad displays white (instead of just one pixel). This is how I am creating, mapping, and setting the first pixel to white:

```cpp
// Create dynamic texture
D3D11_TEXTURE2D_DESC textureDesc = { 0 };
textureDesc.Width = 2048;
textureDesc.Height = 2048;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
textureDesc.SampleDesc.Count = 1;
textureDesc.Usage = D3D11_USAGE_DYNAMIC;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
textureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
textureDesc.MiscFlags = 0;

HRESULT result = d3dDevice->CreateTexture2D(&textureDesc, NULL, &textureDynamic);
if (FAILED(result))
    return -1;

result = d3dDevice->CreateShaderResourceView(textureDynamic, 0, &textureRV);
if (FAILED(result))
    return -2;

D3D11_MAPPED_SUBRESOURCE resource;
if (FAILED(d3dContext->Map(textureDynamic, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource)))
    return -1;
memset(resource.pData, 255, 4);
d3dContext->Unmap(textureDynamic, 0);
```

Hopefully I have just made an oversight somewhere; any assistance would be greatly appreciated. (If I change the 255 value to 128 the quad turns grey, so the mapping is definitely doing something. I just can't work out why it is colouring the whole quad and not just the first pixel.)
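One thing worth ruling out: after a Map with D3D11_MAP_WRITE_DISCARD, the previous contents of the entire texture are undefined, so every texel that isn't explicitly written may contain anything (including solid white). A sketch that clears the full surface and then sets a single pixel, stepping rows by RowPitch:

```cpp
// After WRITE_DISCARD the whole texture is undefined, not just the bytes you
// touch, so write every texel. Rows may be padded: step by RowPitch, not Width*4.
D3D11_MAPPED_SUBRESOURCE resource;
if (SUCCEEDED(d3dContext->Map(textureDynamic, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource)))
{
    uint8_t* rows = static_cast<uint8_t*>(resource.pData);
    for (UINT y = 0; y < 2048; ++y)
        memset(rows + y * resource.RowPitch, 0, resource.RowPitch); // clear to black

    uint32_t* firstRow = reinterpret_cast<uint32_t*>(rows);
    firstRow[0] = 0xFFFFFFFF; // BGRA white at pixel (0,0)

    d3dContext->Unmap(textureDynamic, 0);
}
```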
11. Hello! I have an issue with my point light shadow implementation. First of all, the pixel shader path:

```hlsl
//....
float3 toLight = plPosW.xyz - input.posW;
float3 fromLight = -toLight;
//...
float depthL = abs(fromLight.x);
if (depthL < abs(fromLight.y))
    depthL = abs(fromLight.y);
if (depthL < abs(fromLight.z))
    depthL = abs(fromLight.z);

float4 pH = mul(float4(0.0f, 0.0f, depthL, 1.0f), lightProj);
pH /= pH.w;

isVisible = lightDepthTex.SampleCmpLevelZero(lightDepthSampler, normalize(fromLight), pH.z).x;
```

lightProj matrix creation:

```cpp
Matrix4x4 projMat = Matrix4x4::PerspectiveFovLH(0.5f * Pi, 0.01f, 1000.0f, 1.0f);
```

This is how I create the depth cube texture:

```cpp
viewport->TopLeftX = 0.0f;
viewport->TopLeftY = 0.0f;
viewport->Width  = static_cast<float>(1024);
viewport->Height = static_cast<float>(1024);
viewport->MinDepth = 0.0f;
viewport->MaxDepth = 1.0f;

D3D11_TEXTURE2D_DESC textureDesc;
textureDesc.Width = 1024;
textureDesc.Height = 1024;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 6;
textureDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;

ID3D11Texture2D* texturePtr;
HR(DeviceKeeper::GetDevice()->CreateTexture2D(&textureDesc, NULL, &texturePtr));

for (int i = 0; i < 6; ++i) {
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
    dsvDesc.Flags = 0;
    dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
    dsvDesc.Texture2DArray = D3D11_TEX2D_ARRAY_DSV{0, i, 1};

    ID3D11DepthStencilView* outDsv;
    HR(DeviceKeeper::GetDevice()->CreateDepthStencilView(texturePtr, &dsvDesc, &outDsv));
    edgeDsv[i] = outDsv;
}

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
srvDesc.TextureCube = D3D11_TEXCUBE_SRV{0, 1};

ID3D11ShaderResourceView* outSRV;
HR(DeviceKeeper::GetDevice()->CreateShaderResourceView(texturePtr, &srvDesc, &outSRV));
```

Then I create six target-oriented cameras and finally draw the scene into the cube depth texture according to each camera. Camera creation code:

```cpp
std::vector<Vector3> camDirs = {
    { 1.0f, 0.0f, 0.0f}, {-1.0f, 0.0f, 0.0f},
    { 0.0f, 1.0f, 0.0f}, { 0.0f, -1.0f, 0.0f},
    { 0.0f, 0.0f, 1.0f}, { 0.0f, 0.0f, -1.0f},
};

std::vector<Vector3> camUps = {
    {0.0f, 1.0f,  0.0f}, // +X
    {0.0f, 1.0f,  0.0f}, // -X
    {0.0f, 0.0f, -1.0f}, // +Y
    {0.0f, 0.0f,  1.0f}, // -Y
    {0.0f, 1.0f,  0.0f}, // +Z
    {0.0f, 1.0f,  0.0f}  // -Z
};

for (size_t b = 0; b < camDirs.size(); b++) {
    edgesCameras[b].SetPos(pl.GetPos());
    edgesCameras[b].SetTarget(pl.GetPos() + camDirs[b]);
    edgesCameras[b].SetUp(camUps[b]);
    edgesCameras[b].SetProjMatrix(projMat);
}
```

I will be very grateful for any help! P.S. Sorry for my poor English. :)
12. Just a really quick question: is there any overhead to using DrawIndexedInstanced, even for geometry you render just once, versus using DrawIndexed? Or are the details hidden by the graphics driver? I would assume not, but you never know.
13. Hi, I tried searching for this, but either I failed or there is nothing to find. I know there's D3D11/D3D12 interop, and there are extensions for GL/D3D11 (though not very efficient). I was wondering: is there any Vulkan/D3D11 or Vulkan/D3D12 interop? Thanks!
14. Hey folks. I'm having a problem where, if my camera is close to a surface, the SSAO pass suddenly spikes to taking around 16 milliseconds. When still looking towards the same surface but from farther away, the framerate resolves itself and becomes regular again. This happens with ANY surface of my model, and I am a bit clueless as to what could cause it. Any ideas? In the attached image the y axis is time in ms and the x axis is the current frame; the dips in SSAO milliseconds are when I moved away from the surface, and the peaks happen when I am very close to it.

Edit: I've done some more in-depth profiling with NVIDIA Nsight. These are the facts from my results:

  • The count of command buffers goes from 4 (far away from the surface) to ~20 (close to the surface).
  • The command buffer duration in % goes from around ~30% to ~99%.
  • Sometimes the CPU duration takes up to 0.03 to 0.016 milliseconds per frame, while it usually takes around 0.002 milliseconds.

I am using a vertex shader which generates my full-screen quad, and afterwards I do my SSAO calculations in my pixel shader. Could this be a GPU driver bug? I'm a bit lost myself. It seems there could be a CPU/GPU resource stall, but why would the number of command buffers depend on the distance from a surface?

Edit 2: Any resolution above 720p starts to have this issue, and I am fairly certain my SSAO is not so performance-heavy that it would crap itself at slightly higher resolutions.
15. I have a gaming framework with a renderer interface. It supports DX8, DX9 and, latest, DX11. Both DX8 and DX9 use the fixed-function pipeline, while DX11 obviously uses shaders. I've got most of the parts working fine, as in I can switch renderers and notice almost no difference. The most advanced feature is two directional lights with a single texture.

My last problem is lighting; albeit there's documentation on the D3D lighting model, I still can't get the behaviour right. My mistake shows most prominently on the dark side opposite the lights. I'm pretty sure the ambient calculation is off, but that one's supposed to be the simplest and should be hard to get wrong. Interestingly, I've been searching high and low and have yet to find a resource that shows how to build an HLSL shader where diffuse, ambient and specular are used together with material properties. I've got various shaders for all the variations I'm supporting. I stepped through the shader with the graphics debugger, and the calculation seems to do what I want; I'm just not sure the formula is correct. This one should suffice though: it's doing two directional lights, texture modulated with vertex color, and a normal. Maybe someone can spot one (or more) mistakes. And yes, this is in the vertex shader and I'm aware lighting will be as "bad" as in fixed function; that's my goal currently.

```hlsl
// A constant buffer that stores the three basic column-major matrices for composing geometry.
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix model;
    matrix view;
    matrix projection;
    matrix ortho2d;
};

struct DirectionLight
{
    float3 Direction;
    float PaddingL1;
    float4 Ambient;
    float4 Diffuse;
    float4 Specular;
};

cbuffer LightsConstantBuffer : register(b1)
{
    float4 Ambient;
    float3 EyePos;
    float PaddingLC1;
    DirectionLight Light[8];
};

struct Material
{
    float4 MaterialEmissive;
    float4 MaterialAmbient;
    float4 MaterialDiffuse;
    float4 MaterialSpecular;
    float MaterialSpecularPower;
    float3 MaterialPadding;
};

cbuffer MaterialConstantBuffer : register(b2)
{
    Material _Material;
};

// Per-vertex data used as input to the vertex shader.
struct VertexShaderInput
{
    float3 pos : POSITION;
    float3 normal : NORMAL;
    float4 color : COLOR0;
    float2 tex : TEXCOORD0;
};

// Per-pixel color data passed through the pixel shader.
struct PixelShaderInput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD0;
    float4 color : COLOR0;
};

// Simple shader to do vertex processing on the GPU.
PixelShaderInput main(VertexShaderInput input)
{
    PixelShaderInput output;
    float4 pos = float4(input.pos, 1.0f);

    // Transform the vertex position into projected space.
    pos = mul(pos, model);
    pos = mul(pos, view);
    pos = mul(pos, projection);
    output.pos = pos;

    // pass texture coords
    output.tex = input.tex;

    // Calculate the normal vector against the world matrix only.
    // set required lighting vectors for interpolation
    float3 normal = mul(input.normal, (float3x3)model);
    normal = normalize(normal);

    float4 ambientEffect = Ambient;
    float4 diffuseEffect = float4(0, 0, 0, 0);
    float4 specularEffect = float4(0, 0, 0, 0);

    for (int i = 0; i < 2; ++i)
    {
        // Invert the light direction for calculations.
        float3 lightDir = -Light[i].Direction;
        float lightFactor = max(dot(lightDir, input.normal), 0);

        ambientEffect += Light[i].Ambient * _Material.MaterialAmbient;
        diffuseEffect += saturate(Light[i].Diffuse * dot(normal, lightDir)); // * _Material.MaterialDiffuse;
        //specularEffect += Light[i].Specular * dot(normal, halfangletolight) * _Material.MaterialSpecularPower;
    }

    specularEffect *= _Material.MaterialSpecular;

    //ambientEffect.w = 1.0;
    ambientEffect = normalize(ambientEffect);

    /* Ambient effect: (L1.ambient + L2.ambient) * object ambient color
       Diffuse effect: (L1.diffuse * Dot(VertexNormal, Light1.Direction) +
                        L2.diffuse * Dot(VertexNormal, Light2.Direction)) * object diffuse color
       Specular effect: (L1.specular * Dot(VertexNormal, HalfAngleToLight1) * Object specular reflection power +
                         L2.specular * Dot(VertexNormal, HalfAngleToLight2) * Object specular reflection power
                        ) * object specular color
       Resulting color = Ambient effect + diffuse effect + specular effect */
    float4 totalFactor = ambientEffect + diffuseEffect + specularEffect;
    totalFactor.w = 1.0;

    output.color = input.color * totalFactor;
    return output;
}
```

Edit: This message editor is driving me nuts (Arrrr!) - I don't write code in Word.
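For what it's worth, the fixed-function model sums emissive + ambient + diffuse + specular and saturates the result; it never normalizes the accumulated ambient term, so the normalize(ambientEffect) line above is a likely suspect for the dark-side mismatch. A sketch of that model reusing the post's constant buffers (Blinn half-vector specular, per the documented D3D9 behaviour; this is not the poster's exact code, and posWS is assumed to be the world-space vertex position):

```hlsl
float4 ComputeLighting(float3 normalWS, float3 posWS)
{
    // Global ambient plus per-light ambient, both modulated by the material.
    float4 ambient  = Ambient * _Material.MaterialAmbient;
    float4 diffuse  = float4(0, 0, 0, 0);
    float4 specular = float4(0, 0, 0, 0);

    float3 toEye = normalize(EyePos - posWS);

    for (int i = 0; i < 2; ++i)
    {
        float3 toLight = -Light[i].Direction;          // direction *to* the light
        float nDotL = max(dot(normalWS, toLight), 0);

        ambient += Light[i].Ambient * _Material.MaterialAmbient;
        diffuse += Light[i].Diffuse * nDotL;           // material applied below

        // Blinn half-vector; only add specular where the surface faces the light.
        if (nDotL > 0)
        {
            float3 h = normalize(toLight + toEye);
            specular += Light[i].Specular *
                        pow(max(dot(normalWS, h), 0), _Material.MaterialSpecularPower);
        }
    }

    diffuse  *= _Material.MaterialDiffuse;
    specular *= _Material.MaterialSpecular;

    // Sum and clamp; no normalization anywhere.
    float4 color = saturate(_Material.MaterialEmissive + ambient + diffuse + specular);
    color.w = 1.0;
    return color;
}
```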
16. Hi! I am trying to implement a simple SSAO postprocess. The main source of my knowledge on this topic is that awesome tutorial. But unfortunately something doesn't work, and after a few long hours I need some help. Here is my HLSL shader:

```hlsl
float3 randVec = _noise * 2.0f - 1.0f; // noise vec: {[0;1], [0;1], 0}
float3 tangent = normalize(randVec - normalVS * dot(randVec, normalVS));
float3 bitangent = cross(tangent, normalVS);
float3x3 TBN = float3x3(tangent, bitangent, normalVS);

float occlusion = 0.0;
for (int i = 0; i < kernelSize; ++i)
{
    float3 samplePos = samples[i].xyz; // samples: {[-1;1], [-1;1], [0;1]}
    samplePos = mul(samplePos, TBN);
    samplePos = positionVS.xyz + samplePos * ssaoRadius;

    float4 offset = float4(samplePos, 1.0f);
    offset = mul(offset, projectionMatrix);
    offset.xy /= offset.w;
    offset.y = -offset.y;
    offset.xy = offset.xy * 0.5f + 0.5f;

    float sampleDepth = tex_4.Sample(textureSampler, offset.xy).a;
    sampleDepth = vsPosFromDepth(sampleDepth, offset.xy).z;

    const float threshold = 0.025f;
    float rangeCheck = abs(positionVS.z - sampleDepth) < ssaoRadius ? 1.0 : 0.0;
    occlusion += (sampleDepth <= samplePos.z + threshold ? 1.0 : 0.0) * rangeCheck;
}
occlusion = saturate(1 - (occlusion / kernelSize));
```

And the current result: http://imgur.com/UX2X1fc. I will really appreciate any advice!
17. In DirectX, how would I play an animation, rendering frames for each moving part of the model (the bones)? How would that work? I'm new to writing an animation player for a model parser. I'm asking so I understand how to do this; I'm new to loading a model and playing/moving the bones to affect the mesh.
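In broad strokes, the standard approach is: each frame, sample every bone's keyframes at the current animation time, build each bone's object-space matrix by walking the hierarchy (parent * local), multiply by that bone's inverse bind-pose matrix, and upload the resulting palette to the GPU, where the vertex shader blends the matrices per vertex. A generic sketch of the shader side, not tied to any particular model format:

```hlsl
// Per-frame palette uploaded by the CPU: boneToWorld * inverseBindPose per bone.
cbuffer Bones : register(b1)
{
    float4x4 bonePalette[64];
};

struct SkinnedVertex
{
    float3 pos     : POSITION;
    uint4  bones   : BLENDINDICES; // up to 4 influencing bones per vertex
    float4 weights : BLENDWEIGHT;  // their weights, summing to 1
};

float3 SkinPosition(SkinnedVertex v)
{
    float3 p = 0;
    [unroll]
    for (int i = 0; i < 4; ++i)
        p += v.weights[i] * mul(float4(v.pos, 1), bonePalette[v.bones[i]]).xyz;
    return p; // then transform by view * projection as usual
}
```

Normals get the same blend (using the matrices' rotation parts), and DirectX itself doesn't care where the palette comes from; all the keyframe interpolation happens in your own parser/player code.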
18. I'm trying to code the Rayleigh part of Nishita's model ("Display Method of the Sky Color Taking into Account Multiple Scattering"). I get a black screen, no colors. Can anyone find the issue for me?

```hlsl
[numthreads(32, 32, 1)] // dispatched 8, 8, 1; it's a 256-by-256 image
void ComputeSky(uint3 DTID : SV_DispatchThreadID)
{
    float X = ((2 * DTID.x) / 255) - 1;
    float Y = 1 - ((2 * DTID.y) / 255);
    float r = sqrt((X * X) + (Y * Y));
    float Theta = r * PI;
    float Phi = atan2(Y, X);

    static float3 Eye = float3(0, 10, 0);
    float ViewOD = 0, SunOD = 0, tmpDensity = 0;
    float3 Attenuation = 0, tmp = 0, Irgb = 0;

    //if (r <= 1)
    {
        float3 ViewDir = normalize(float3(sin(Theta) * cos(Phi), cos(Theta), sin(Theta) * sin(Phi)));
        float ViewRayLength = RaySphereIntersection(Eye, ViewDir, float3(0, 0, 0), OutterRadius);
        float SampleLength = ViewRayLength / Ksteps;

        //vSunDir = normalize(vSunDir);
        float cosTheta = dot(normalize(vSunDir), ViewDir);
        float3 tmpPos = Eye + 0.5 * SampleLength * ViewDir;

        for (int k = 0; k < Ksteps; k++)
        {
            float SunRayLength = RaySphereIntersection(tmpPos, vSunDir, float3(0, 0, 0), OutterRadius);
            float3 TopAtmosphere = tmpPos + SunRayLength * vSunDir;

            ViewOD = OpticalDepth(Eye, tmpPos);
            SunOD = OpticalDepth(tmpPos, TopAtmosphere);
            tmpDensity = Density(length(tmpPos) - InnerRadius);

            Attenuation = exp(-RayleighCoeffs * (ViewOD + SunOD));
            tmp += tmpDensity * Attenuation;
            tmpPos += SampleLength * ViewDir;
        }

        Irgb = RayleighCoeffs * RayleighPhaseFunction(cosTheta) * tmp * SampleLength;
        SkyColors[DTID.xy] = float4(Irgb, 1);
    }
}
```
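One thing that jumps out before anything else: DTID.x is a uint and 255 is an int, so (2 * DTID.x) / 255 is integer division; it evaluates to 0 for the left half of the image and 1 for the right, collapsing X to just -1 or 0 (and likewise Y). A float version of the mapping:

```hlsl
// Cast to float before dividing so the coordinates actually span [-1, 1]
// across the 256x256 image instead of truncating to integers.
float X = (2.0f * DTID.x) / 255.0f - 1.0f;
float Y = 1.0f - (2.0f * DTID.y) / 255.0f;
```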
19. I made my OBJ parser, and it also calculates tangent space for the normal map, but the calculation seems to be wrong. Any good suggestions for this? I can't upload my pics, so I'm linking my question: https://gamedev.stackexchange.com/questions/147199/how-to-debug-calculating-tangent-space and I uploaded my code here: ObjLoader.cpp, ObjLoader.h.
20. So the application itself is 2D and layered, but I want to build the background bitmap from a Blender 3D model. When I make an appeal to Saint Google, there seem to be an awful lot of ways of getting to the desired result. The process I have in mind is the following; please comment on the practicality or otherwise of this approach.

Use a background thread to:

  • Load the model
  • Create a 3D buffer for rendering into
  • Render into the buffer
  • Copy the 3D buffer to a 2D buffer somewhere

The main loop will:

  • Blit? copy the 2D background to the back buffer of the swap chain
  • Overlay the background with various sprites
  • Swap buffers

Which leads to the following questions (see the sketch after this list for the render-to-texture part):

  • What is the best option for exporting a Blender model for use by DirectX?
  • Which API should I use to create a 3D buffer for rendering the model into? As it only gets rendered on scene changes, every 10-20 seconds, am I correct in assuming it shouldn't be part of the swap chain?
  • In managing the background buffer, is it best to, or must I, use a "dummy" device of some kind?
  • How do I turn a 3D buffer into a 2D bitmap for loading into the 2D swap chain?
  • What is the best way to set up the 2D buffer so as to minimise the cost of the copy every frame by the main loop?
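On the render-to-texture questions: an off-screen render target is indeed independent of the swap chain, and no dummy device is needed; one texture created with both RENDER_TARGET and SHADER_RESOURCE bind flags can be rendered into occasionally and sampled every frame, which also removes the 3D-to-2D copy entirely. A sketch:

```cpp
// One texture serves both as render target (for the occasional 3D render)
// and as shader resource (sampled every frame as the 2D background).
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = backbufferWidth;
desc.Height           = backbufferHeight;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D*          backgroundTex = nullptr;
ID3D11RenderTargetView*   backgroundRTV = nullptr;
ID3D11ShaderResourceView* backgroundSRV = nullptr;
device->CreateTexture2D(&desc, nullptr, &backgroundTex);
device->CreateRenderTargetView(backgroundTex, nullptr, &backgroundRTV);
device->CreateShaderResourceView(backgroundTex, nullptr, &backgroundSRV);

// On a scene change: bind backgroundRTV and draw the Blender model into it.
// Every frame: draw a full-screen quad sampling backgroundSRV, then the
// sprites, then Present(); no CPU-side pixel copy is needed at all.
```

One caveat: the D3D11 immediate context is not thread-safe, so the background thread would need a deferred context, or the occasional render could simply happen on the main thread, given that it only occurs every 10-20 seconds.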
21. Hi guys, do you think I should render my first-person hand/weapon in world space or view space? Is there a preferred method? I got it to work in view space, but now I have to do all the lighting in view space too. That's not much of a problem, but I've been doing it in world space, and I think I would rather keep doing it in world space unless there is a good reason not to. I haven't been able to get that working, though. Any help would be appreciated, especially with getting it to work in world space. Thanks.
22. I am creating a simple graphics engine and I want to let the user set the time of day for the scene, but I have trouble translating that into the lighting direction. The light direction is currently given as a vector of three floats. For example, if the user sets time = 12 (noon), I want the light to come from directly above the scene. Is there a way to achieve that? Thank you.
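A simple starting point is to treat the sun as rotating 360 degrees per 24 hours in a fixed plane, with noon at the zenith. A sketch (the east-along-+X, up-along-+Y convention is an arbitrary assumption):

```cpp
#include <cmath>

// Map the hour of day [0, 24) to a sun *light* direction, i.e. the direction
// the light travels from the sun toward the scene. Noon yields straight down.
void SunDirectionFromTime(float hours, float dir[3])
{
    const float PI = 3.14159265f;
    // 0h -> -90 deg (below horizon), 6h -> 0 deg (sunrise), 12h -> 90 deg (zenith)
    float angle = (hours - 6.0f) / 24.0f * 2.0f * PI;
    float sunPos[3] = { std::cos(angle), std::sin(angle), 0.0f }; // unit vector toward the sun
    dir[0] = -sunPos[0]; // light direction is the opposite of the to-sun vector
    dir[1] = -sunPos[1];
    dir[2] = -sunPos[2];
}

// SunDirectionFromTime(12.0f, d) yields d = (0, -1, 0): light shining straight down.
```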
23. Is it possible to do column operations (addition, subtraction) directly on an XMMATRIX (i.e., an XMMATRIX alternative to the XMVECTOR swizzle operations) without using _XM_NO_INTRINSICS_? I use these operations for creating the view frustum planes directly from the transformation matrices.
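XMMATRIX only exposes its rows (the r[] members), so the usual intrinsics-friendly trick is to transpose once: the rows of the transpose are the original columns, and the column sums become plain XMVECTOR adds. A sketch of Gribb/Hartmann frustum-plane extraction under DirectXMath's row-vector convention:

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Extract the six frustum planes from a combined view*projection matrix.
// After XMMatrixTranspose, row i of 't' is column i of 'viewProj', so all
// the column arithmetic happens with ordinary vector adds/subtracts.
void ExtractFrustumPlanes(FXMMATRIX viewProj, XMVECTOR planes[6])
{
    XMMATRIX t = XMMatrixTranspose(viewProj);
    planes[0] = XMVectorAdd(t.r[3], t.r[0]);      // left:   col3 + col0
    planes[1] = XMVectorSubtract(t.r[3], t.r[0]); // right:  col3 - col0
    planes[2] = XMVectorAdd(t.r[3], t.r[1]);      // bottom: col3 + col1
    planes[3] = XMVectorSubtract(t.r[3], t.r[1]); // top:    col3 - col1
    planes[4] = t.r[2];                           // near:   col2 (D3D z in [0,1])
    planes[5] = XMVectorSubtract(t.r[3], t.r[2]); // far:    col3 - col2

    for (int i = 0; i < 6; ++i)
        planes[i] = XMPlaneNormalize(planes[i]);
}
```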
24. Is the slot pointed to by a Texture2D/SamplerState in a pixel shader assigned at compile time and immutable? On the HLSL side I'd have:

```hlsl
Texture2D tex0 : register(t0);
Texture2D tex1 : register(t1);
SamplerState smp0 : register(s0);
SamplerState smp1 : register(s1);
```

And then on the host side:

```cpp
context->PSSetShaderResources(0, 1, &textureA);
context->PSSetShaderResources(1, 1, &textureB);
context->PSSetSamplers(0, 1, &samplerA);
context->PSSetSamplers(1, 1, &samplerB);
```

When the program runs, tex0 refers to textureA, tex1 to textureB, and likewise for the samplers. What if I wanted to switch them around on the HLSL side? In OpenGL I could do glUniform1i( tex0, 1 ) and glUniform1i( tex1, 0 ), which would make tex0 refer to textureB and tex1 refer to textureA. Can the same be done with DX11? If so, what is the call to modify tex0/tex1, and smp0/smp1?
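For reference, in D3D11 the register assignments are baked in when the shader is compiled; there is no runtime equivalent of remapping with glUniform1i. The idiomatic approach is to leave the shader untouched and rebind the resources in the order you want:

```cpp
// tex0 always reads slot t0 and tex1 slot t1; to "swap" them, rebind the
// resources in the opposite order instead of touching the shader.
ID3D11ShaderResourceView* swapped[2] = { textureB, textureA };
context->PSSetShaderResources(0, 2, swapped);

ID3D11SamplerState* swappedSmp[2] = { samplerB, samplerA };
context->PSSetSamplers(0, 2, swappedSmp);
```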
25. Hi, I wrote a deferred renderer a few years ago and have now picked the project up again to fix some outstanding bugs and extend some features. The project is written in C++, DirectX 11 and HLSL. While fixing the bugs I stumbled across a strange behavior in one of my shader files which took me some time to track down. At first I thought it had to do with my depth-reconstruction algorithm in the point light shader, but after implementing alternate algorithms based on MJP's code snippets I ruled this out. It appears as if the dot function inside the shader sometimes (but reproducibly) yields wrong results. Also, when switching from D3D_DRIVER_TYPE_HARDWARE to D3D_DRIVER_TYPE_WARP the problem completely disappears, so to me it seems like either some kind of HLSL/DX11 issue or a driver issue. I am using a GTX 980 for rendering and have the latest NVIDIA driver installed; I also tried on an older laptop with an NVIDIA card, which gave the same strange results. Here are some images that show the problem. When debugging the wrong pixels with the Visual Studio Graphics Analyzer I found that the HLSL dot function returns unexpected and wrong values during my point light computations: the dot product of (-0.51, 0.78, 0.36) and (0, 1, 0) obviously should not be 0. I am no expert in HLSL asm output, but the compiled shader code looks like this (the last line is the dot product of lightVec and normal). Does anyone have an idea how to fix this issue, or how to avoid the strange dot product behavior?