Search the Community

Showing results for tags 'DX11'.

Found 45 results

  1. Hi guys, I'm looking for resources on emulating the illumination and afterglow of CRT oscilloscope displays, like on these old radar scopes. So far my Google searches aren't returning anything useful. Any suggestions?
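
     A minimal sketch (not from any particular reference, all names are placeholders) of one common way to fake phosphor persistence: keep an accumulation texture, decay it a little every frame, and add the freshly drawn beam on top, ping-ponging the result back into the accumulation texture.

     Texture2D<float4> g_PreviousFrame : register(t0);   // last frame's accumulation
     Texture2D<float4> g_CurrentBeam   : register(t1);   // this frame's beam/trace, drawn bright
     SamplerState      g_LinearClamp   : register(s0);

     cbuffer PersistenceCB : register(b0)
     {
         float g_DecayPerFrame;   // e.g. 0.85-0.95; lower = shorter afterglow
     };

     float4 main(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_Target
     {
         float4 history = g_PreviousFrame.Sample(g_LinearClamp, uv) * g_DecayPerFrame;
         float4 beam    = g_CurrentBeam.Sample(g_LinearClamp, uv);
         return saturate(history + beam);   // becomes next frame's g_PreviousFrame
     }
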
  2. How is the BSDF used in the Kajiya rendering equation? We know that path tracing provides a numerical (Monte Carlo) solution to it, and the BSDF first appears in the path tracing algorithm. Beyond that, is there a way to use multiple BSDF functions in a full rendering process? If you have links to any books or websites, please share them!
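
     A toy sketch (hypothetical types and names, not from the Kajiya paper) of one way multiple BSDFs show up in practice: a material whose BSDF is a convex blend of two simpler lobes, evaluated wherever the integrator needs f(wo, wi). A full path tracer would also importance-sample each lobe and track PDFs.

     struct Vec3 { float x, y, z; };

     Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
     Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

     // f(wo, wi) for the combined material: a weighted mix of a diffuse and a glossy BSDF.
     Vec3 evalMixedBSDF(Vec3 wo, Vec3 wi, Vec3 n, float glossyWeight,
                        Vec3 (*evalDiffuse)(Vec3, Vec3, Vec3),
                        Vec3 (*evalGlossy)(Vec3, Vec3, Vec3))
     {
         Vec3 d = evalDiffuse(wo, wi, n);
         Vec3 g = evalGlossy(wo, wi, n);
         return add(scale(d, 1.0f - glossyWeight), scale(g, glossyWeight));
     }
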
  3. Hi, I currently have Naive Surface Nets working in a compute shader and I'm using a Dictionary to load chunks around the player. Retrieving meshes from the compute shader back to the CPU and creating Unity collision meshes is causing a lot of stalling. I've decided to tackle this by implementing a LOD system. I've found the 0fps articles on simplifying isosurfaces to be a wealth of information. After studying those and looking at the papers, I've identified what I think are the two main directions for approaching LOD: mesh simplification and geometry clipmaps. I am aiming to get one or both of these working in a compute shader. This brings me to my questions: 1) I've seen a lot of feedback that mesh simplification is going to be too slow for real-time terrain editing; is there a consensus on this? Most algorithms appear to use the half-edge format, but I can't find any information on efficiently converting an indexed mesh into the half-edge format. Can anyone provide an example? 2) Can the geometry clipmap approach work with 3D terrain as opposed to heightmaps? Would I need to store my chunks in octrees to make it work? Are there any GPU implementation examples that don't work on heightmaps? Any advice to help me further focus my research would be much appreciated. Thanks.
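
     On question 1), a rough CPU-side sketch (hypothetical struct layout) of building half-edges from an indexed triangle mesh: one half-edge per (face, corner), with opposite ("twin") edges matched through a hash map keyed by the directed vertex pair. A GPU version would typically sort edge keys instead of hashing.

     #include <cstdint>
     #include <unordered_map>
     #include <vector>

     struct HalfEdge {
         uint32_t vertex;   // vertex this half-edge points to
         uint32_t face;     // owning triangle
         int32_t  next;     // next half-edge around the face
         int32_t  twin;     // opposite half-edge, -1 on boundaries
     };

     std::vector<HalfEdge> BuildHalfEdges(const std::vector<uint32_t>& indices)
     {
         std::vector<HalfEdge> edges(indices.size());
         std::unordered_map<uint64_t, int32_t> edgeMap;   // (from, to) -> half-edge index
         for (uint32_t f = 0; f < indices.size() / 3; ++f) {
             for (uint32_t c = 0; c < 3; ++c) {
                 uint32_t from = indices[f * 3 + c];
                 uint32_t to   = indices[f * 3 + (c + 1) % 3];
                 int32_t  he   = int32_t(f * 3 + c);
                 edges[he] = { to, f, int32_t(f * 3 + (c + 1) % 3), -1 };
                 edgeMap[(uint64_t(from) << 32) | to] = he;
                 auto it = edgeMap.find((uint64_t(to) << 32) | from);   // look for the twin edge
                 if (it != edgeMap.end()) { edges[he].twin = it->second; edges[it->second].twin = he; }
             }
         }
         return edges;
     }
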
  4. I've implemented a basic version of Voxel Cone Tracing that uses a single volume texture (covering a small region around the player). But I want to have large, open environments, so I must use some cascaded (LOD'ed) variant of the algorithm. 1) How do I inject sky light into the voxels, and how do I do it fast? (E.g. imagine a large shadowed area which is lit by the blue sky above.) I think that after voxelizing the scene I will introduce an additional compute shader pass where, from each surface voxel, I trace cones in the direction of the surface normal until they hit the sky (cubemap), but I'm afraid it would be slow with cascaded Voxel Cone Tracing. 2) How do I calculate (rough) reflections from the sky (and distant objects)? If the scene consists of many "reflective" pixels, tracing cones through all cascades would destroy performance. It looks like Voxel Cone Tracing is only suited to smallish indoor scenes (like Doom 3-style cramped spaces).
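
     A rough sketch (all resource names and the voxel layout are assumptions, not a tested implementation) of the kind of post-voxelization pass described in question 1): for each occupied voxel, march one cheap occlusion cone along the stored normal through the opacity mip chain and write the sky colour weighted by how much of the cone escaped. Whether one cone per voxel stays fast across cascades is exactly the open question in the post.

     RWTexture3D<float4> g_Radiance : register(u0);   // output: injected sky radiance
     Texture3D<float4>   g_Normals  : register(t0);   // xyz = voxel normal, a = occupancy (assumed layout)
     Texture3D<float4>   g_Opacity  : register(t1);   // a = occlusion, mipmapped
     TextureCube<float4> g_Sky      : register(t2);
     SamplerState        g_Linear   : register(s0);

     cbuffer SkyInjectCB : register(b0)
     {
         float g_Resolution;   // voxel grid resolution (e.g. 128), stored as float for convenience
         float g_Pad0, g_Pad1, g_Pad2;
     };

     [numthreads(4, 4, 4)]
     void main(uint3 id : SV_DispatchThreadID)
     {
         float4 voxel = g_Normals[id];
         if (voxel.a <= 0.0f) return;                              // empty voxel, nothing to light

         float3 normal = normalize(voxel.xyz * 2.0f - 1.0f);
         float3 pos = (float3(id) + 0.5f) / g_Resolution;          // grid-space [0,1] position

         float transmittance = 1.0f;
         float dist = 1.0f;                                        // in voxels
         [loop]
         for (uint i = 0; i < 16; ++i)
         {
             float  coneDiameter = max(1.0f, dist);                // ~60 degree cone
             float3 samplePos = pos + normal * (dist / g_Resolution);
             float  occlusion = g_Opacity.SampleLevel(g_Linear, samplePos, log2(coneDiameter)).a;
             transmittance *= saturate(1.0f - occlusion);
             dist += coneDiameter * 0.5f;
             if (transmittance < 0.05f) break;
         }
         float3 sky = g_Sky.SampleLevel(g_Linear, normal, 0).rgb;
         g_Radiance[id] = float4(sky * transmittance, 1.0f);       // a real pass would combine with direct light
     }
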
  5. Hello, I have some questions about HLSL registers for which I can't find any good references on the internet. 1. What is the difference between register b and register c? Are these two lines equivalent (in terms of using and binding data)?
     float4 dummy : register(b0)
     and
     float4 dummy : register(c0)
     2. What is the benefit of the cbuffer (constant buffer) keyword? Are these two pieces of code equivalent?
     float4 v1 : register(c0) // or (b0)
     and
     cbuffer dummy : register(b0) // or (c0)?
     {
         float4 v1 : register(c0)
     }
     3. Which register does "SetPixelShaderConstantF" bind data to? b, c, or both? Thanks
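
     A small example of the Shader Model 4/5 cbuffer syntax the question is asking about (PerObject, worldViewProj and tintColor are made-up names). The whole buffer binds to a b# slot and the members are laid out inside it; the per-variable c# registers are the legacy pre-D3D10 float constant registers, which are what SetPixelShaderConstantF writes to.

     cbuffer PerObject : register(b0)
     {
         float4x4 worldViewProj;   // starts at byte offset 0 of the buffer
         float4   tintColor;       // starts at byte offset 64
     };

     // Legacy style: each float4 occupies one c# register, set from the CPU
     // with SetPixelShaderConstantF / SetVertexShaderConstantF.
     // float4 tintColor : register(c0);
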
  6. I'm implementing single-pass surface voxelization (via geometry shader) for Voxel Cone Tracing (VCT), and after some debugging I've discovered that I have to insert out-of-bounds checks into the pixel shader to avoid voxelizing geometry which is outside the voxel grid:
     void main_PS_VoxelTerrain_DLoD( VSOutput pixelInput )
     {
         const float3 posInVoxelGrid = (pixelInput.position_world - g_vxgi_voxel_radiance_grid_min_corner_world) * g_vxgi_inverse_voxel_size_world;
         const uint color_encoded = packR8G8B8A8( float4( posInVoxelGrid, 1 ) );
         const int3 writecoord = (int3) floor( posInVoxelGrid );
         const uint writeIndex1D = flattenIndex3D( (uint3)writecoord, (uint3)g_vxgi_voxel_radiance_grid_resolution_int );
         // HACK:
         bool inBounds = writecoord.x >= 0 && writecoord.x < g_vxgi_voxel_radiance_grid_resolution_int
                      && writecoord.y >= 0 && writecoord.y < g_vxgi_voxel_radiance_grid_resolution_int
                      && writecoord.z >= 0 && writecoord.z < g_vxgi_voxel_radiance_grid_resolution_int;
         if( inBounds ) {
             rwsb_voxelGrid[writeIndex1D] = color_encoded;
         } else {
             rwsb_voxelGrid[writeIndex1D] = 0xFF0000FF; // RED, ALPHA
         }
     }
     Why is this check needed, and how can I avoid it? Shouldn't Direct3D automatically clip the pixels falling outside the viewport? (I tried to ensure that out-of-bounds pixels are clipped in the geometry shader, and I also enabled depthClip in the rasterizer, but it doesn't work.) Here's a picture illustrating the problem (extraneous voxels are highlighted in red): And here is the full HLSL code of the voxelization shader:
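
     One thing worth double-checking (a sketch only, assuming a cubic grid rendered with an orthographic projection per dominant axis; voxelGridResolution and context are placeholder names): the rasterizer only clips against the viewport in x/y, so the viewport should match the grid resolution exactly, and the depth axis is not clipped the same way once depth clip is disabled, which is one reason writes can still land outside the grid along that axis.

     D3D11_VIEWPORT vp = {};
     vp.Width    = (float)voxelGridResolution;   // grid size in voxels, e.g. 128
     vp.Height   = (float)voxelGridResolution;
     vp.MinDepth = 0.0f;
     vp.MaxDepth = 1.0f;
     context->RSSetViewports(1, &vp);
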
  7. Hi, could someone please explain when and why I have to divide my transformed coordinates by w, e.g. in the pixel shader? Here is a typical NVIDIA example; the comments and questions are my own.
     struct VS_OUTPUT
     {
         float4 ScreenP : SV_POSITION;
         float4 P : TEXCOORD0;
         float3 N : NORMAL0;
     };

     // Vertex shader
     VS_OUTPUT main( uint id : SV_VERTEXID )
     {
         VS_OUTPUT output;
         ... some code ...
         float3 N; // normal
         N.x = ((face_idx % 3) == 0) ? 1 : 0;
         N.y = ((face_idx % 3) == 1) ? 1 : 0;
         N.z = ((face_idx % 3) == 2) ? 1 : 0;
         N *= ((face_idx / 3) == 0) ? 1 : -1;
         P += N;
         // World coordinate
         output.P = mul(c_mObject, float4(P, 1));       // transform with the object's world matrix, with w = 1 => float4(P, 1)
         output.ScreenP = mul(c_mViewProj, output.P);   // transform further with ViewProj into clip coordinates
         output.N = mul(c_mObject, float4(N, 0)).xyz;   // transform with the object's world matrix, with w = 0 because only rotations apply
         return output;
     }

     cbuffer CameraCB : register( b0 )
     {
         column_major float4x4 c_mViewProj : packoffset(c0);
         float3 c_vEyePos : packoffset(c4);
         float c_fZNear : packoffset(c5);
         float c_fZFar : packoffset(c5.y);
     };

     // Pixel shader
     float4 main(VS_OUTPUT input) : SV_Target0
     {
         float3 P = input.P.xyz / input.P.w;   // => my question: why do we divide the world coordinate by w???
         float3 N = normalize(input.N);        // normalize because the normal is interpolated in the pixel shader?
         float3 Kd = c_vObjectColor;
         const float SHADOW_BIAS = -0.001f;
         float4 shadow_clip = mul(c_mLightViewProj, float4(P, 1));   // => here P is transformed into a clip-space coordinate with w = 1
         shadow_clip = shadow_clip / shadow_clip.w;                  // => why division by w again??
         uint hemisphereID = (shadow_clip.z > 0) ? 0 : 1;
         float2 shadow_tc = float2(0.5f, -0.5f)*shadow_clip.xy + 0.5f;   // => here xy is used for texture coordinates
         float receiver_depth = shadow_clip.z + SHADOW_BIAS;             // => here z is used as depth
         float total_light = 0;
         const int SHADOW_KERNEL = 2;
         [unroll]
         for (int ox=-SHADOW_KERNEL; ox<=SHADOW_KERNEL; ++ox)
         {
             [unroll]
             for (int oy=-SHADOW_KERNEL; oy<=SHADOW_KERNEL; ++oy)
             {
                 total_light += tShadowmap.SampleCmpLevelZero(sampShadowmap, shadow_tc, receiver_depth, int2(ox, oy)).x;
             }
         }
         float shadow_term = total_light / ((2*SHADOW_KERNEL+1) * (2*SHADOW_KERNEL+1));
         float3 output = float3(0,0,0);
         float3 L = -c_vLightDirection;
         // Spotlight
         {
             float light_to_world = length(P - c_vLightPos);   // P (divided by w) is used as a world position, but LightPos is not divided by w
             float3 W = (c_vLightPos - P)/light_to_world;      // the light direction vector is calculated from P
             float distance_attenuation = 1.0f/(c_vLightAttenuationFactors.x + c_vLightAttenuationFactors.y*light_to_world + c_vLightAttenuationFactors.z*light_to_world*light_to_world) + c_vLightAttenuationFactors.w;
             const float ANGLE_EPSILON = 0.00001f;
             float angle_factor = saturate((dot(N, L)-c_fLightFalloffCosTheta)/(1-c_fLightFalloffCosTheta));
             float spot_attenuation = (angle_factor > ANGLE_EPSILON) ? pow(angle_factor, c_fLightFalloffPower) : 0.0f;
             float3 attenuation = distance_attenuation*spot_attenuation*shadow_term*dot(N, W);
             float3 ambient = 0.00001f*saturate(0.5f*(dot(N, L)+1.0f));
             output += c_vLightColor*max(attenuation, ambient) * exp(-c_vSigmaExtinction*light_to_world);
         }
         return float4(output, 1);
     }
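
     For reference, a small illustration of the usual shadow-mapping pattern (assuming c_mLightViewProj contains a perspective projection and worldPos is a stand-in for a world-space position): the division by w is the perspective divide that turns an interpolated clip-space position into normalized device coordinates, and it has to happen per pixel, after interpolation.

     float4 shadowClip = mul(c_mLightViewProj, float4(worldPos, 1.0f));   // worldPos has w == 1
     float3 shadowNdc  = shadowClip.xyz / shadowClip.w;                   // perspective divide
     float2 shadowUv   = float2(0.5f, -0.5f) * shadowNdc.xy + 0.5f;       // NDC -> texture coordinates
     float  receiverZ  = shadowNdc.z;                                     // depth compared against the shadow map
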
  8. Hey there! A friend of mine and I have been writing a game for a few days now. While I was implementing the graphics part of it, a problem occurred that I have never experienced before. I am issuing a draw call in DirectX and everything is set up: - vertex buffer - input layout - primitive topology - vertex shader and its constant buffer (which contains the render target size / 2) - pixel shader with a sampler state and an SRV of a white texture - rasterizer state & blend state - viewport - and, not to mention, an RTV is bound to the output merger too. Direct3D 11's debug messages don't show anything wrong. The graphics debugger doesn't describe the draw call at all; it doesn't even fetch the input vertices/geometry (although they exist in the vertex buffer), and the pixel shader stage was not run. Upon debugging the shaders, the vertices run through the vertex shader as they are supposed to and produce the expected results. Code for the creation of the rasterizer state, sampler state & blend state: https://pastebin.com/aeFWWkPu Code of the vertex shader: https://pastebin.com/9z9e81Sp Code of the pixel shader: https://pastebin.com/jMvWihV8 Here is the event timeline of the instance: https://pastebin.com/xetJbzNp Here is a picture of the pipeline the draw call is processed through: https://i.imgur.com/czxQEHH.png Here are the vertices: https://i.imgur.com/C5z3oIT.png Does anybody know what the cause of the problem is, or has anyone experienced a similar problem? Thanks!
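
     For hunting a draw that silently produces nothing, one generally useful tool (a generic sketch, not tied to the linked pastebins; names are placeholders) is the D3D11 debug layer's info queue with break-on-warning enabled, which can surface things the output window scrolls past, such as an unbound state, a mismatched input layout, or a zero-sized viewport.

     #include <d3d11.h>

     UINT flags = D3D11_CREATE_DEVICE_DEBUG;   // debug layer; requires the SDK/Graphics Tools installed
     ID3D11Device* device = nullptr;
     ID3D11DeviceContext* context = nullptr;
     D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                       nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

     ID3D11InfoQueue* infoQueue = nullptr;
     if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11InfoQueue), (void**)&infoQueue)))
     {
         infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
         infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_WARNING, TRUE);   // break in the debugger on warnings too
         infoQueue->Release();
     }
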
  9. Hi, I've been working on a terrain engine for some time now, and I would now like to add some roads to my terrain. I'm not sure yet which direction to look in to get the best result. My terrain is for a flight simulator, so it covers a pretty large area of 1024x1024 km. The heightmap size is 16K, so I have 62.5 m mesh resolution at the highest tessellation level (which is good enough). The main problem, though, is that it's pretty hard for me to get down to details: the rendering is done in large chunks (based on the Nvidia DX11 terrain tessellation sample, shown here: https://developer.nvidia.com/dx11-samples and also the PDF here: http://fliphtml5.com/lvgc/xhjd/basic), and all the texture coordinates are generated from the grid in the shaders, so I hardly have any control over "small items". Not sure if this is a good enough explanation of my problem, but it's like I can't just tell the terrain: "Draw a road segment here". The way I see it, I have more or less four options (unless you can suggest more):
     1. Try this sample from the Humus engine that renders roads on terrain using the stencil buffer and box volumes: http://www.humus.name/index.php?page=3D&ID=84 - I think this is my preferred choice, but for some reason I wasn't able to get anything rendered at all on my terrain using the code from that sample (at least with the translation to my own engine; I must be doing something wrong). Do you also think this is the best method to draw roads on terrain? Pros: seems simple and efficient, using the stencil buffer that the hardware gives for free. Cons: I can't get it to work.
     2. Render 3D models of roads on the terrain. Pros: minimal rendering, no interference with the terrain rendering itself. Cons: hard to get correct results with tessellated terrain, potential Z-buffer issues with the terrain. I think that's the worst solution.
     3. Simple texture mapping - holding huge textures to map roads onto the entire terrain. Since I basically need only highways and real roads (sand roads, small streams and the like will be covered by 4 m/pixel photoreal textures), it might not be that much, but it will still probably cost some GBs of textures to cover the entire terrain area with sufficient resolution (roads as textures should have at least 4 m/pixel). Pros: relatively easy to render, no Z-buffer issues. Cons: large textures to hold and manage, and branching in the terrain's pixel shader to check whether we are on a road pixel.
     4. Texture mapping, but with RTT. Instead of holding constant textures on disk and loading them when necessary, render the roads to textures in memory and hold only the necessary data that way. By using small tiles (for example 1x1 km) I think it's possible to map only tiles with roads and render them; all other tiles will be set with 0-mapping, so the shader will fetch a "no roads" texture and won't map anything. Pros: texture mapping without holding any real textures on disk. Cons: requires RTT, probably including render-into-mipmap stages, may be a bit complicated to manage, and areas with many roads may use a lot of memory.
     I would still like to at least get the Humus sample to work in my engine, unless you think it has other issues that make it less suitable for broad use on a large rendered terrain. Thanks!
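
     A tiny illustration of option 3 (plain texture mapping, with made-up resource names): sample a road atlas/mask alongside the terrain albedo and blend with the mask's alpha, which avoids the branch the post worries about because the blend weight simply ends up being zero away from roads.

     Texture2D    g_TerrainAlbedo : register(t0);
     Texture2D    g_RoadAtlas     : register(t1);   // rgb = road colour, a = road coverage
     SamplerState g_Aniso         : register(s0);

     float3 ShadeTerrainPixel(float2 terrainUv, float2 roadUv)
     {
         float3 terrain = g_TerrainAlbedo.Sample(g_Aniso, terrainUv).rgb;
         float4 road    = g_RoadAtlas.Sample(g_Aniso, roadUv);
         return lerp(terrain, road.rgb, road.a);   // no branching needed in the pixel shader
     }
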
  10. Tape_Worm

    Gorgon Update #8

     This is a small update that contains a few more fixes and additions. To get the latest version, go to the GitHub Releases page and download version 3.0.86.259. As always, there is a commit log on the release stating what changes were made. View the full article
  11. Hey guys, I have a problem over here with vertex shaders and I'm going to need some professional help, so I thought I'd create a thread here. I only decided to work this feature out for myself two days ago, so please don't expect me to have any experience in programming D3D. I am trying to work this out on my own as well as I can, but I would really appreciate it if someone could help me a bit or give me a push in the right direction. I am paying for the solution, so you won't be wasting your time if you decide to join me on this case. But OK, let's start: first of all, you have to know that I am editing a very old game, which uses DirectX 8. The water in this game is basically a simple plane with a texture projected onto it, and an alpha state is set up for the texture so you can see through the water in-game. No shadows, no reflections, no animations yet. My first goal is to get the inverse view matrix working, to make at least the shadows of the world appear in the water. So, here's the situation with the actual render state of the water: I noticed there's already something started for inversing the view ('&InverseTheView'). In another function, for setting the inverse view and shadow matrices, it's declared:
     InverseTheView = pCamera->GetInverseViewMatrix();
     D3DXVECTOR3 v3Target = pCamera->GetTarget();
     D3DXVECTOR3 v3LightEye(v3Target.x - 1.732f * 1250.0f, v3Target.y - 1250.0f, v3Target.z + 2.0f * 1.732f * 1250.0f);
     D3DXMatrixLookAtRH(&m_matLightView, &v3LightEye, &v3Target, &D3DXVECTOR3(0.0f, 0.0f, 1.0f));
     DynamicShadow = InverseTheView * LightView * DynamicShadowScale;
     Okay, so far so good. I tried to add the inversed view to my water rendering state:
     void LoadWaterMaterial()
     {
         char buf[256];
         for (int i = 0; i < 30; ++i)
         {
             sprintf(buf, "C:/test/water/%water_1d.dds", i+1);
             WaterInstances[i].SetImagePointer((CGraphicImage *) CResourceManager::Instance().GetResourcePointer(buf));
         }
     }

     void RenderWater()
     {
         // Saving render state
         D3DXMATRIX oldView = LightView;   // old light view
         D3DXMATRIX oldLight;              // old light
         D3DXPLANE plane(0.0f, 1.0f, 0.0f, 0.0f);
         D3DXMATRIX invertMatrix;
         D3DXMatrixReflect(&invertMatrix, &plane);
         oldView = ms_matWorldView;
         D3DXMATRIX vue = oldView;
         D3DXMatrixMultiply(&vue, &vue, &invertMatrix);               // inverse view
         D3DXMatrixMultiply(&LightView, &LightView, &invertMatrix);   // inverse light
         STATEMANAGER.SaveRenderState(D3DRS_ZWRITEENABLE, FALSE);
         STATEMANAGER.SaveRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
         STATEMANAGER.SaveRenderState(D3DRS_STENCILENABLE, true);
         STATEMANAGER.SaveRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
         STATEMANAGER.SaveRenderState(D3DRS_STENCILREF, 0x1);
         STATEMANAGER.SaveRenderState(D3DRS_STENCILMASK, 0xffffffff);
         STATEMANAGER.SaveRenderState(D3DRS_STENCILWRITEMASK, 0xffffffff);
         STATEMANAGER.SaveRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_KEEP);
         STATEMANAGER.SaveRenderState(D3DRS_STENCILFAIL, D3DSTENCILOP_KEEP);
         STATEMANAGER.SaveRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_REPLACE);
         STATEMANAGER.SaveRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
         STATEMANAGER.SaveRenderState(D3DRS_DIFFUSEMATERIALSOURCE, D3DMCS_COLOR1);
         STATEMANAGER.SaveRenderState(D3DRS_COLORVERTEX, TRUE);
         STATEMANAGER.SaveRenderState(D3DRS_SPECULARENABLE, TRUE);
         STATEMANAGER.SaveRenderState(D3DRS_SPECULARMATERIALSOURCE, D3DMCS_COLOR2);
         /// Reflection
         D3DXMATRIX reflection = vue;
         D3DXMatrixScaling(&reflection, m_fWaterTexCoordBase, -m_fWaterTexCoordBase, 0.0f);
         STATEMANAGER.SaveTransform(D3DTS_TEXTURE0, &reflection);
         LPDIRECT3DTEXTURE9 textureEau = m_WaterInstances[((ELTimer_GetMSec() / 30) % 250)].GetTexturePointer()->GetD3DTexture();
         STATEMANAGER.SetFVF(D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_SPECULAR | D3DFVF_TEX0 | D3DFVF_TEX1);
         STATEMANAGER.SetTexture(0, textureEau);
         STATEMANAGER.SaveTextureStageState(0, D3DTSS_TEXCOORDINDEX, D3DTSS_TCI_CAMERASPACEREFLECTIONVECTOR);
         STATEMANAGER.SaveTextureStageState(0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_PROJECTED);
         STATEMANAGER.SaveSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
         STATEMANAGER.SaveSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
         STATEMANAGER.SaveSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
         STATEMANAGER.SaveSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_MIRROR);
         STATEMANAGER.SaveSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);
         STATEMANAGER.SaveSamplerState(0, D3DSAMP_ADDRESSW, D3DTADDRESS_CLAMP);
         STATEMANAGER.SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
         STATEMANAGER.SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
         STATEMANAGER.SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_DIFFUSE);
         STATEMANAGER.SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);
         // Start of water
         STATEMANAGER.SetTexture(0, WaterInstances[((ELTimer_GetMSec() / 70) % 30)].GetTexturePointer()->GetD3DTexture());
         D3DXMatrixScaling(&TransformTextureToWater, WaterTextureCoordBase, -WaterTextureCoordBase, 0.0f);
         D3DXMatrixMultiply(&TransformTextureToWater, &InverseTheView, &TransformTextureToWater);
         STATEMANAGER.SaveTransform(D3DTS_TEXTURE0, &TransformTextureToWater);
         STATEMANAGER.SaveVertexShader(D3DFVF_XYZ|D3DFVF_DIFFUSE);
         STATEMANAGER.SaveTextureStageState(0, D3DTSS_TEXCOORDINDEX, D3DTSS_TCI_CAMERASPACEPOSITION);
         STATEMANAGER.SaveTextureStageState(0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_COUNT2);
         STATEMANAGER.SaveTextureStageState(0, D3DTSS_MINFILTER, D3DTEXF_LINEAR);
         STATEMANAGER.SaveTextureStageState(0, D3DTSS_MAGFILTER, D3DTEXF_LINEAR);
         STATEMANAGER.SaveTextureStageState(0, D3DTSS_MIPFILTER, D3DTEXF_LINEAR);
         STATEMANAGER.SaveTextureStageState(0, D3DTSS_ADDRESSU, D3DTADDRESS_WRAP);
         STATEMANAGER.SaveTextureStageState(0, D3DTSS_ADDRESSV, D3DTADDRESS_WRAP);
         STATEMANAGER.SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
         STATEMANAGER.SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
         STATEMANAGER.SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_DIFFUSE);
         STATEMANAGER.SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);
         STATEMANAGER.SetTexture(1, NULL);
         STATEMANAGER.SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_DISABLE);
         STATEMANAGER.SetTextureStageState(1, D3DTSS_ALPHAOP, D3DTOP_DISABLE);
         // Restoring render state
         STATEMANAGER.RestoreVertexShader();
         STATEMANAGER.RestoreTransform(D3DTS_TEXTURE0);
         STATEMANAGER.RestoreTextureStageState(0, D3DTSS_MINFILTER);
         STATEMANAGER.RestoreTextureStageState(0, D3DTSS_MAGFILTER);
         STATEMANAGER.RestoreTextureStageState(0, D3DTSS_MIPFILTER);
         STATEMANAGER.RestoreTextureStageState(0, D3DTSS_ADDRESSU);
         STATEMANAGER.RestoreTextureStageState(0, D3DTSS_ADDRESSV);
         STATEMANAGER.RestoreTextureStageState(0, D3DTSS_TEXCOORDINDEX);
         STATEMANAGER.RestoreTextureStageState(0, D3DTSS_TEXTURETRANSFORMFLAGS);
         STATEMANAGER.RestoreRenderState(D3DRS_DIFFUSEMATERIALSOURCE);
         STATEMANAGER.RestoreRenderState(D3DRS_COLORVERTEX);
         STATEMANAGER.RestoreRenderState(D3DRS_ZWRITEENABLE);
         STATEMANAGER.RestoreRenderState(D3DRS_ALPHABLENDENABLE);
         STATEMANAGER.RestoreRenderState(D3DRS_CULLMODE);
     }
     The output is, I would say, strange: it's flickering between black and gray, but it's not inversing the view.
Does it have something to do with clipping the view frustum? I really don't have any clue how to work with this properly. I would really appreciate it if someone here gave me a push in the right direction, as mentioned; you'll get rewarded, too! Thanks for reading, and best regards.
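
     A minimal sketch (z-up assumed, matching the light's up vector in the code above; waterHeight and viewMatrix are placeholder names) of the usual planar-reflection setup with D3DX: mirror the view across the water plane, render the reflected scene into a texture, and project that texture onto the water, rather than flipping the main view in place.

     const float waterHeight = 0.0f;                        // hypothetical water plane height
     D3DXPLANE waterPlane(0.0f, 0.0f, 1.0f, -waterHeight);  // plane z = waterHeight
     D3DXMATRIX reflect;
     D3DXMatrixReflect(&reflect, &waterPlane);

     D3DXMATRIX reflectedView = reflect * viewMatrix;       // mirror the world, then apply the camera view
     // Render the scene with reflectedView (plus a clip plane at the water surface) into a texture,
     // then sample that texture from the water's texture stage with projected coordinates.
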
  12. Hey, I had already implemented AABB ray picking using slab intersection, but I couldn't get it working with rotation to create OBB ray picking.
     bool TestAABBIntersection(XMFLOAT3 lb, XMFLOAT3 rt, XMFLOAT3 origin, XMFLOAT3 dirfrac, float& distance)
     {
         assert(lb.x <= rt.x);
         assert(lb.y <= rt.y);
         assert(lb.z <= rt.z);
         const float t1 = (lb.x - origin.x)*dirfrac.x;
         const float t2 = (rt.x - origin.x)*dirfrac.x;
         const float t3 = (lb.y - origin.y)*dirfrac.y;
         const float t4 = (rt.y - origin.y)*dirfrac.y;
         const float t5 = (lb.z - origin.z)*dirfrac.z;
         const float t6 = (rt.z - origin.z)*dirfrac.z;
         const float tmin = max(max(min(t1, t2), min(t3, t4)), min(t5, t6));
         const float tmax = min(min(max(t1, t2), max(t3, t4)), max(t5, t6));
         // if tmax < 0, the ray (line) is intersecting the AABB, but the whole AABB is behind us
         if (tmax < 0) { return false; }
         // if tmin > tmax, the ray doesn't intersect the AABB
         if (tmin > tmax) { return false; }
         distance = tmin;
         return true;
     }

     bool TestOBBIntersection(ModelClass* model, XMFLOAT3 origin, XMFLOAT3 dir, XMFLOAT3 lb, XMFLOAT3 rt, float & dist)
     {
         XMMATRIX worldMatrix = XMMatrixIdentity();
         worldMatrix = DirectX::XMMatrixMultiply(worldMatrix, DirectX::XMMatrixRotationX(model->GetRotation().x * 0.0174532925f));
         worldMatrix = DirectX::XMMatrixMultiply(worldMatrix, DirectX::XMMatrixRotationY(model->GetRotation().y * 0.0174532925f));
         worldMatrix = DirectX::XMMatrixMultiply(worldMatrix, DirectX::XMMatrixRotationZ(model->GetRotation().z * 0.0174532925f));
         worldMatrix = XMMatrixInverse(NULL, worldMatrix);
         const XMVECTOR originTransformed = XMVector3Transform({ origin.x, origin.y, origin.z }, worldMatrix);
         const XMVECTOR dirTransformed = XMVector3Transform({ dir.x, dir.y, dir.z }, worldMatrix);
         origin = { originTransformed.m128_f32[0], originTransformed.m128_f32[1], originTransformed.m128_f32[2] };
         dir = { dirTransformed.m128_f32[0], dirTransformed.m128_f32[1], dirTransformed.m128_f32[2] };
         return TestAABBIntersection(lb, rt, origin, dir, dist);
     }
     What I am doing is multiplying the ray origin and ray direction by the inverse rotation matrix and then performing the ray-AABB test in OBB space. It works only for 0, 90 and 180 degree rotations. Where might the problem be?
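
     A sketch of the ray transform under two assumptions (not a verified fix): the origin and direction are transformed differently when the box's world transform may also contain translation, and the slab test above expects the reciprocal of the direction (its dirfrac parameter). XMVector3TransformCoord / XMVector3TransformNormal are the DirectXMath helpers that make that point/vector split explicit.

     XMMATRIX invWorld = XMMatrixInverse(nullptr, worldMatrix);          // full world transform, not rotation only
     XMVECTOR localOrigin = XMVector3TransformCoord(XMLoadFloat3(&origin), invWorld);   // point: translation applies
     XMVECTOR localDir    = XMVector3Normalize(
                                XMVector3TransformNormal(XMLoadFloat3(&dir), invWorld)); // vector: no translation

     XMFLOAT3 o, d;
     XMStoreFloat3(&o, localOrigin);
     XMStoreFloat3(&d, localDir);
     XMFLOAT3 dirfrac(1.0f / d.x, 1.0f / d.y, 1.0f / d.z);               // what TestAABBIntersection expects
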
  13. Hello. What I would like to do is calculate the world-space position of a pixel in a pixel shader used on a screen quad. I understand that this is a "common" topic and I have read several posts about it, but they never show the whole picture, and most of them are based on OpenGL while I'm using DirectX 11. I'm clearly missing something, because it's just not working for me. The first pass in my render loop renders a simple GBuffer.
     struct PixelInputType
     {
         float4 position : SV_POSITION;
         float depth : TEXCOORD0;
         float2 uv : TEXCOORD1;
         float3 normal : TEXCOORD2;
     };

     PixelInputType vs_main( VertexInputType p_Input )
     {
         PixelInputType output;
         output.position = mul( float4(p_Input.position, 1.0f), PM_MODEL_VIEW_PROJECTION );
         output.depth = output.position.z;
         // code removed for clarity
         return output;
     };

     struct PixelOutputType
     {
         float4 diffuse;
         float4 normal;
         float depth;
     };

     PixelOutputType ps_main( PixelInputType p_Input ) : SV_TARGET
     {
         PixelOutputType output;
         // code removed for clarity
         output.depth = p_Input.depth / PM_FAR_CLIP;
         return output;
     }
     Then I draw the screen quad, where I try to recreate the world-space position for each pixel.
     Texture2D<float> g_Depth : register( t2 );

     float3 world_position_from_depth( float2 uv, float depth )
     {
         float4 ndc = float4( uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f );
         float4 clip = mul( ndc, PM_INVERSE_PROJECTION );
         float4 view = mul( clip / clip.w, PM_INVERSE_VIEW );
         return view.xyz;
     }

     float4 ps_main( PixelInputType p_Input ) : SV_TARGET
     {
         float depth = g_Depth.Sample( g_ClampSampler, p_Input.uv );
         float3 wp = world_position_from_depth( p_Input.uv, depth );
         return visualize_position( wp );
     }
     I then visualize the position based on this thread: https://discourse.threejs.org/t/reconstruct-world-position-in-screen-space-from-depth-buffer/5532 From the result I can clearly tell that something is wrong. My lines are not "stationary"; they move around and bend instead of sticking to their world-space position when I move my camera through the world. The shader constants are applied like this:
     PM_FAR_CLIP = p_Camera->getFarClip(); // 1000.0f
     PM_INVERSE_PROJECTION = p_Camera->getProjection().inverse().transpose();
     PM_INVERSE_VIEW = p_Camera->getView().inverse().transpose();
     PM_MODEL_VIEW_PROJECTION = ( p_Transform * p_Camera->getView() * p_Camera->getProjection() ).transpose();
     And finally, the depth target is created using DXGI_FORMAT_R16_FLOAT. It became quite a long post, but I wanted to ensure that I gave as much detail as I could, since I can't figure out what I'm doing wrong. If you have any suggestions or ideas, they are more than welcome!
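
     For comparison, a sketch of the reconstruction under two assumptions that differ from the code above (so it is not a drop-in fix): the depth read back is the hardware depth value (z/w in [0,1], the D3D convention, so no *2-1 remap on z), and a combined PM_INVERSE_VIEW_PROJECTION constant (a hypothetical name) holds inverse(view * projection), transposed the same way as the other matrices.

     float3 world_position_from_depth( float2 uv, float depth )
     {
         // D3D NDC: x,y in [-1,1] with y pointing up (so v is flipped), z already in [0,1]
         float4 ndc   = float4( uv.x * 2.0f - 1.0f, 1.0f - uv.y * 2.0f, depth, 1.0f );
         float4 world = mul( ndc, PM_INVERSE_VIEW_PROJECTION );
         return world.xyz / world.w;   // single perspective divide after the inverse transform
     }
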
  14. My application has an internal graphics settings panel that lets the user change the graphics settings, including all three options mentioned in the title as well as V-sync and windowed/fullscreen switching. My approach (given that it goes from full-screen to full-screen) to implementing such a system is as follows: release all back buffer references; go to windowed mode from full-screen using swapchain->SetFullscreenState(FALSE, NULL); perform swapchain->Release(); prepare to recreate the swapchain; create a new DXGI_SWAP_CHAIN_DESC from parameters collected from the user settings panel; perform pFactory->CreateSwapChain(device, &swap_desc, &swapchain); recreate the back buffer and views on success; and switch to full-screen using swapchain->SetFullscreenState(TRUE, NULL). Now, is it possible to just use the IDXGISwapChain::ResizeBuffers and IDXGISwapChain::ResizeTarget methods, since those are dedicated to such tasks? Since they take Width, Height and a DXGI_MODE_DESC, it seems possible to set a new resolution and refresh rate using those two methods. Is there any risk here? And most importantly, I cannot find any way to update the swapchain's multisampling count and quality level without doing a fresh swapchain reset.
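
     A minimal sketch of the ResizeTarget + ResizeBuffers flow (error handling omitted; swapchain, context, rtv and the new mode values are placeholder names). Note that ResizeBuffers cannot change the sample count, so a multisampling change still needs the swap chain to be recreated.

     DXGI_MODE_DESC mode = {};
     mode.Width  = newWidth;
     mode.Height = newHeight;
     mode.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
     mode.RefreshRate = { newNumerator, newDenominator };

     rtv->Release();                                   // release all back-buffer references first
     context->OMSetRenderTargets(0, nullptr, nullptr);

     swapchain->ResizeTarget(&mode);                   // change the target window / display mode
     swapchain->ResizeBuffers(0, newWidth, newHeight, DXGI_FORMAT_UNKNOWN, 0);   // 0/UNKNOWN keep count and format
     // ... recreate the render target view from the new back buffer here ...
     swapchain->SetFullscreenState(fullscreen, nullptr);
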
  15. Hi everyone! So, I'm trying to create a depth cubemap, but I get an error when trying to create the DepthStencilView with a FirstArraySlice >= 1. When it is set to FirstArraySlice = 0, everything works as expected (all depths get rendered to just one face of the cubemap), but any value bigger than that returns an E_INVALIDARG error. Here's the code snippet with the settings I'm using:
     D3D11_TEXTURE2D_DESC textDesc;
     ZeroMemory(&textDesc, sizeof(D3D11_TEXTURE2D_DESC));
     textDesc.Width = 1024;
     textDesc.Height = 1024;
     textDesc.MipLevels = 1;
     textDesc.ArraySize = 6;
     textDesc.Format = DXGI_FORMAT_R32_TYPELESS;
     textDesc.SampleDesc.Count = 1;
     textDesc.Usage = D3D11_USAGE_DEFAULT;
     textDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_DEPTH_STENCIL;
     textDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
     m_pd3dDevice->CreateTexture2D(&textDesc, NULL, &m_pTexture);
     // Everything OK so far...

     D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
     dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
     dsvDesc.Flags = 0;
     dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
     dsvDesc.Texture2DArray.ArraySize = textDesc.ArraySize;
     dsvDesc.Texture2DArray.MipSlice = 0;
     for( UINT i = 0; i < textDesc.ArraySize; i++ )
     {
         dsvDesc.Texture2DArray.FirstArraySlice = i;
         m_pd3dDevice->CreateDepthStencilView(m_pTexture, &dsvDesc, &m_pDepthStencilView[i]);
     }
     // OK    FirstArraySlice = 0
     // ERROR FirstArraySlice >= 1
     Am I missing something here? Thanks!
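
     A sketch of per-face DSV creation under the assumption that the E_INVALIDARG comes from the view reaching past the end of the array: with FirstArraySlice = i, ArraySize has to be small enough that FirstArraySlice + ArraySize does not exceed the texture's 6 slices, so a one-face view uses ArraySize = 1.

     D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
     dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
     dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
     dsvDesc.Texture2DArray.MipSlice = 0;
     dsvDesc.Texture2DArray.ArraySize = 1;                 // one cube face per view
     for (UINT i = 0; i < 6; ++i)
     {
         dsvDesc.Texture2DArray.FirstArraySlice = i;       // i + 1 <= 6, so this stays in range
         m_pd3dDevice->CreateDepthStencilView(m_pTexture, &dsvDesc, &m_pDepthStencilView[i]);
     }
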