Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.

Found 1017 results

  1. I added terrain rendering to my sky+lensflare rendering pipeline. I render the terrain onto a render target and bind that as a texture in the backbuffer pass, in the pixel shader of the fullscreen quad:

return txTerrain.Sample(samLinear, Input.TextureCoords)
     + txSkyDome.Sample(samLinear, Input.TextureCoords)
     + float4(LensFlareHalo * txDirty.Sample(samLinear, Input.TextureCoords).rgb * Intensity, 1);

The problem is, the terrain is blended with the sky. How do I fix this?

pImmediateContext->OMSetDepthStencilState(pDSState_DD, 1); // disable depth buffer

// render sky dome
pImmediateContext->OMSetRenderTargets(1, &pSkyDomeRTV, pDepthStencilView);
pImmediateContext->ClearRenderTargetView(pSkyDomeRTV, Colors::Black);
pImmediateContext->ClearDepthStencilView(pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);

// creating the sky texture in the CS shader
float Theta = .2f; // XM_PI*(float)tElapsed/50;
float Phi = XM_PIDIV4;
CSCONSTANTBUF cscb;
cscb.vSun = XMFLOAT3(cos(Theta)*cos(Phi), sin(Theta), cos(Theta)*sin(Phi));
cscb.MieCoeffs = XMFLOAT3((float)MieCoefficient(m, AerosolRadius, 680), (float)MieCoefficient(m, AerosolRadius, 530), (float)MieCoefficient(m, AerosolRadius, 470));
cscb.RayleighCoeffs = XMFLOAT3((float)RayleighCoefficient(680), (float)RayleighCoefficient(530), (float)RayleighCoefficient(470));
cscb.fHeight = 10;
cscb.fWeight = 10;
cscb.fWeight2 = 10;
pImmediateContext->UpdateSubresource(pCSConstBuffer, 0, NULL, &cscb, 0, 0);
UINT UAVCounts = 0;
pImmediateContext->CSSetUnorderedAccessViews(0, 1, &pSkyUAV, &UAVCounts);
pImmediateContext->CSSetConstantBuffers(0, 1, &pCSConstBuffer);
pImmediateContext->CSSetShader(pComputeShader, NULL, 0);
pImmediateContext->Dispatch(8, 8, 1);
pImmediateContext->CSSetUnorderedAccessViews(0, 1, &NULLUAV, 0);
pImmediateContext->CSSetShader(NULL, NULL, 0);

// setting dome-relevant VS and PS state
pImmediateContext->IASetInputLayout(pVtxLayout);
uiStride = sizeof(VTX);
pImmediateContext->IASetVertexBuffers(0, 1, &pVtxSkyBuffer, &uiStride, &uiOffset);
pImmediateContext->IASetIndexBuffer(pIndxSkyBuffer, DXGI_FORMAT_R32_UINT, 0);
pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
pImmediateContext->VSSetConstantBuffers(0, 1, &pVSSkyConstBuffer);
pImmediateContext->VSSetShader(pVtxSkyShader, NULL, 0);
pImmediateContext->PSSetShader(pPixlSkyShader, NULL, 0);
pImmediateContext->PSSetShaderResources(0, 1, &pSkySRV);
pImmediateContext->PSSetSamplers(0, 1, &SampState);
mgWorld = XMMatrixTranslation(MyCAM->GetEye().m128_f32[0], MyCAM->GetEye().m128_f32[1], MyCAM->GetEye().m128_f32[2]);

// drawing the sky dome
VSCONSTANTBUF cb;
cb.mWorld = XMMatrixTranspose(mgWorld);
cb.mView = XMMatrixTranspose(mgView);
cb.mProjection = XMMatrixTranspose(mgProjection);
pImmediateContext->UpdateSubresource(pVSSkyConstBuffer, 0, NULL, &cb, 0, 0);
pImmediateContext->DrawIndexed((UINT)MyFBX->myInds.size(), 0, 0);
pImmediateContext->VSSetShader(0, NULL, 0);
pImmediateContext->PSSetShader(0, NULL, 0);
pImmediateContext->OMSetDepthStencilState(pDSState_DE, 1); // enable depth buffer

// terrain rendering
pImmediateContext->OMSetRenderTargets(1, &pTerRTV, pDepthStencilView);
pImmediateContext->ClearRenderTargetView(pTerRTV, Colors::Black);
pImmediateContext->ClearDepthStencilView(pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
mgWorld = XMMatrixIdentity(); // XMMatrixRotationY((float)tElapsed);
VSCONSTANTBUF tcb;
tcb.mWorld = XMMatrixTranspose(mgWorld);
tcb.mView = XMMatrixTranspose(mgView);
tcb.mProjection = XMMatrixTranspose(mgProjection);
pImmediateContext->UpdateSubresource(pVSTerrainConstBuffer, 0, NULL, &tcb, 0, 0);

// setting terrain-relevant VS and PS state
pImmediateContext->IASetInputLayout(pVtxTerrainLayout);
uiStride = sizeof(TERVTX);
pImmediateContext->IASetVertexBuffers(2, 1, &pVtxTerrainBuffer, &uiStride, &uiOffset);
pImmediateContext->IASetIndexBuffer(pIndxTerrainBuffer, DXGI_FORMAT_R32_UINT, 0);
pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
pImmediateContext->VSSetConstantBuffers(0, 1, &pVSTerrainConstBuffer);
pImmediateContext->VSSetShader(pVtxTerrainShader, NULL, 0);
pImmediateContext->PSSetShader(pPixlTerrainShader, NULL, 0);
pImmediateContext->PSSetShaderResources(0, 1, &pTerrainSRV);
pImmediateContext->PSSetSamplers(0, 1, &TerrainSampState);
pImmediateContext->DrawIndexed((UINT)MyTerrain->myInds.size(), 0, 0);
pImmediateContext->VSSetShader(0, NULL, 0);
pImmediateContext->PSSetShader(0, NULL, 0);

// downsampling stage for lens flare
pImmediateContext->OMSetRenderTargets(1, &pSkyDomeBlurRTV, pDepthStencilView);
pImmediateContext->ClearRenderTargetView(pSkyDomeBlurRTV, Colors::Black);
pImmediateContext->ClearDepthStencilView(pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
uiStride = sizeof(VTX);
pImmediateContext->IASetInputLayout(pVtxCamLayout);
pImmediateContext->IASetVertexBuffers(1, 1, &pVtxCamBuffer, &uiStride, &uiOffset);
pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
pImmediateContext->VSSetShader(pVtxSkyBlurShader, NULL, 0);
pImmediateContext->PSSetShader(pPixlSkyBlurShader, NULL, 0);
pImmediateContext->PSSetShaderResources(0, 1, &pSkyDomeSRV); // sky+dome texture
pImmediateContext->PSSetSamplers(0, 1, &CamSampState);
pImmediateContext->Draw(4, 0);
pImmediateContext->VSSetShader(0, NULL, 0);
pImmediateContext->PSSetShader(0, NULL, 0);

// backbuffer stage where the lens flare and terrain textures are set
pImmediateContext->OMSetRenderTargets(1, &pRenderTargetView, pDepthStencilView);
pImmediateContext->ClearRenderTargetView(pRenderTargetView, Colors::Black);
pImmediateContext->ClearDepthStencilView(pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
// uiStride = sizeof(VTX);
// pImmediateContext->IASetInputLayout(pVtxCamLayout);
// pImmediateContext->IASetVertexBuffers(1, 1, &pVtxCamBuffer, &uiStride, &uiOffset);
// pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
pImmediateContext->VSSetShader(pVtxCamShader, NULL, 0);
pImmediateContext->PSSetShader(pPixlCamShader, NULL, 0);
pImmediateContext->PSSetShaderResources(0, 1, &pSRV); // dirty texture
pImmediateContext->PSSetShaderResources(1, 1, &pSkyDomeBlurSRV); // sky+dome blurred texture
pImmediateContext->PSSetShaderResources(2, 1, &pSkyDomeSRV); // sky+dome texture
pImmediateContext->PSSetShaderResources(3, 1, &pTerSRV); // terrain texture
pImmediateContext->PSSetSamplers(0, 1, &CamSampState);
pImmediateContext->Draw(4, 0);
pImmediateContext->VSSetShader(0, NULL, 0);
pImmediateContext->PSSetShader(0, NULL, 0);
pSwapChain->Present(0, 0);
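One possible direction (a sketch under assumptions, since the shaders themselves aren't shown): the additive composite txTerrain + txSkyDome can only ever mix the two, because by the time the fullscreen quad runs, the depth information saying "terrain occludes sky here" is gone. Rendering both into one shared target lets the depth buffer resolve the occlusion while the geometry is drawn; pSceneRTV below is a hypothetical render target view used by both passes.

// Sky first: depth writes disabled, the sky is behind everything.
pImmediateContext->OMSetDepthStencilState(pDSState_DD, 1);
pImmediateContext->OMSetRenderTargets(1, &pSceneRTV, pDepthStencilView);
pImmediateContext->ClearRenderTargetView(pSceneRTV, Colors::Black);
pImmediateContext->ClearDepthStencilView(pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
// ... draw the sky dome exactly as above ...

// Terrain second, into the SAME target, with depth testing enabled:
// terrain pixels overwrite sky pixels instead of being added to them.
pImmediateContext->OMSetDepthStencilState(pDSState_DE, 1);
// note: no ClearRenderTargetView here
// ... draw the terrain exactly as above ...

The fullscreen pass then samples a single scene texture and only adds the lens flare term on top.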
  2. I'm starting out learning graphics programming with DX11 and Win32, and so far I haven't found many quality resources. At the moment I'm trying to string together a few tutorials (some based on DX10) to learn the basics. The tutorial I was reading used a D3DX function to compile shaders, but as far as I understand, that library is deprecated? After some digging, I found the D3DCompileFromFile() function, which I'm trying to use. Here is my C++ code:

HINSTANCE d3d_compiler_lib = LoadLibrary("D3DCompiler_47.DLL");
assert(d3d_compiler_lib != NULL);

typedef HRESULT (WINAPI *d3d_shader_compile_func)(
    LPCWSTR, const D3D_SHADER_MACRO *, ID3DInclude *, LPCSTR, LPCSTR,
    UINT, UINT, ID3DBlob **, ID3DBlob **);

d3d_shader_compile_func D3DCompileFromFile =
    (d3d_shader_compile_func)GetProcAddress(d3d_compiler_lib, "D3DCompileFromFile");
assert(D3DCompileFromFile != NULL);

ID3D10Blob *vs, *ps;

hresult = D3DCompileFromFile(L"basic.shader", NULL, NULL, "VertexShader", "vs_4_0", 0, 0, &vs, NULL);
assert(hresult == S_OK); // Fails here

hresult = D3DCompileFromFile(L"basic.shader", NULL, NULL, "PixelShader", "ps_4_0", 0, 0, &ps, NULL);
assert(hresult == S_OK);

FreeLibrary(d3d_compiler_lib);

In the failing assertion, hresult is E_FAIL, which according to MSDN means: "Attempted to create a device with the debug layer enabled and the layer is not installed." I'm a bit lost at this point. I am also not 100% sure my D3DCompileFromFile signature is correct... It does match the version on MSDN, though. Any ideas as to why this might be failing? I tried putting in the wrong file name and got (as expected) hresult == D3D11_ERROR_FILE_NOT_FOUND, so at least that's some indication that I haven't totally screwed up the function call. For reference, here is the shader file. I was able to compile it successfully using an online HLSL compiler.

struct VOut
{
    float4 position : SV_POSITION;
    float4 color : COLOR;
};

VOut VertexShader(float4 position : POSITION, float4 color : COLOR)
{
    VOut output;
    output.position = position;
    output.color = color;
    return output;
}

float4 PixelShader(float4 position : SV_POSITION, float4 color : COLOR) : SV_TARGET
{
    return color;
}

Thanks for your time
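Two hedged suggestions rather than a certain diagnosis. First, the last parameter of D3DCompileFromFile receives the compiler's diagnostics; passing a blob there turns a bare E_FAIL into a readable error message:

ID3DBlob *vs = NULL, *errors = NULL;
hresult = D3DCompileFromFile(L"basic.shader", NULL, NULL, "VSMain", "vs_4_0", 0, 0, &vs, &errors);
if (FAILED(hresult) && errors != NULL)
{
    // The blob holds a null-terminated ANSI string with the compile errors.
    OutputDebugStringA((const char*)errors->GetBufferPointer());
    errors->Release();
}

Second, VertexShader and PixelShader are reserved keywords in HLSL, so entry points with those names can make fxc-based compilation fail even when an online compiler accepts them; VSMain and PSMain above are hypothetical replacement names to try.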
  3. Hello, I wrote a MatCap shader following this idea: given the image representing the texture, we compute the sample point by taking the dot product of the vertex normal and the camera position, and remapping this to [0,1]. This seems to work well when I look straight at an object with this shader. However, when the camera points slightly to the side, I can see the texture stretch a lot. Could anyone give me a hint as to how to get a nice matcap shader? Here's what I wrote:

Shader "Unlit/Matcap"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
            };

            struct v2f
            {
                float2 worldNormal : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.worldNormal = mul((float3x3)UNITY_MATRIX_V, UnityObjectToWorldNormal(v.normal)).xy*0.3 + 0.5; // UnityObjectToClipPos(v.normal)*0.5 + 0.5;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // sample the texture
                fixed4 col = tex2D(_MainTex, i.worldNormal);
                return col;
            }
            ENDCG
        }
    }
}

Thanks!
  4. Hi, I am trying to brute-force a closest-point-to-closed-triangle-mesh algorithm on the GPU by creating a thread for each point-primitive pair and keeping only the nearest result for each point. This code fails, however, with multiple writes being made by threads with different distance computations. To keep only the closest value, I attempt to mask using InterlockedMin, and a conditional that only writes if the current thread holds the same value as the mask after a memory barrier. I have included the function below. As can be seen, I have modified it to write to a different location every time the conditional succeeds, for debugging. It is expected that multiple writes will take place, for example where the closest point is a vertex shared by multiple triangles, but when I read back closestPoints and calculate the distances, they are different, which should not be possible. The differences are large (~0.3+), so I do not think it is a rounding error. The CPU equivalent works fine for a single particle. After the kernel execution, distanceMask does hold the smallest value, suggesting the problem is with the barrier or the conditional. Can anyone say what is wrong with the function?

RWStructuredBuffer<uint> distanceMask : register(u4);
RWStructuredBuffer<uint> distanceWriteCounts : register(u0);
RWStructuredBuffer<float3> closestPoints : register(u5);

[numthreads(64,1,1)]
void BruteForceClosestPointOnMesh(uint3 id : SV_DispatchThreadID)
{
    int particleid = id.x;
    int triangleid = id.y;

    Triangle t = triangles[triangleid];
    float3 v0 = GetVertex1(t.i0);
    float3 v1 = GetVertex1(t.i1);
    float3 v2 = GetVertex1(t.i2);
    float3 q1 = Q1[particleid];

    ClosestPointPointTriangleResult result = ClosestPointPointTriangle(q1, v0, v1, v2);
    float3 p = v0 * result.uvw.x + v1 * result.uvw.y + v2 * result.uvw.z;
    uint distance = asuint(length(p - q1));

    InterlockedMin(distanceMask[particleid], distance);

    AllMemoryBarrierWithGroupSync();

    if (distance == distanceMask[particleid])
    {
        uint bin = 0;
        InterlockedAdd(distanceWriteCounts[particleid], 1, bin);
        closestPoints[particleid * binsize + bin] = p;
    }
}
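A detail that may explain the readbacks (hedged, since the dispatch dimensions aren't shown): AllMemoryBarrierWithGroupSync only synchronizes threads within one thread group. With [numthreads(64,1,1)], the pairs for a given particle that differ in id.y live in different groups, so one group can pass the barrier and run the conditional before another group's InterlockedMin has lowered the mask. Nothing inside a single dispatch can synchronize across groups; the usual fix is to split the kernel into two dispatches and let the dispatch boundary act as the global barrier. A host-side sketch (kernel names are hypothetical):

// Pass 1: every pair runs only the InterlockedMin into distanceMask.
context->CSSetShader(minDistanceCS, nullptr, 0);
context->Dispatch(numParticleGroups, numTriangles, 1);

// D3D11 guarantees UAV writes from one Dispatch are visible to the next.

// Pass 2: every pair recomputes its distance and writes the closest
// point only if it matches the now-final mask value.
context->CSSetShader(writeWinnersCS, nullptr, 0);
context->Dispatch(numParticleGroups, numTriangles, 1);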
  5. Following issue: I have a model, plus data which tells me where on the model surface there are points. When the user taps on such a point, he sees some information. You can rotate around the model, and there are easily 100+ points per model, so I need to distinguish whether a point is actually visible or not. The points are visualized by circles. Is there any fast algorithm which can check, based on the camera position, whether a point is covered by the model or not? Or (which would likely be more performant) can I somehow realize this in a shader? Right now I'm using strip lists to render the circles. I see no possible way to use the depth buffer so I can completely clip some circles if they're on the opposite side of the model. Any ideas? I guess creating a ray and testing against all vertices (400k) of the model would be way too intense for the CPU. In the end I need to have some feedback CPU-side, so I can distinguish at input detection whether the tapped point is visible or not. Otherwise I'd end up having ghost points which can be tapped (or maybe even overlay visible ones) but aren't visible...
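Since CPU-side feedback is the end goal, D3D11 occlusion queries may fit: draw the model first, then draw each circle's proxy geometry with depth testing on while a query counts the pixels that pass. A sketch (one query per point; reading results a frame late avoids stalling the pipeline):

D3D11_QUERY_DESC queryDesc = {};
queryDesc.Query = D3D11_QUERY_OCCLUSION;
ID3D11Query* query = nullptr;
device->CreateQuery(&queryDesc, &query);

context->Begin(query);
// ... draw a small proxy (the circle, or a tiny quad at the point) with
// depth testing ON and color writes OFF, after the model has been drawn ...
context->End(query);

// Later (e.g. next frame): the number of pixels that passed the depth test.
UINT64 passedPixels = 0;
if (context->GetData(query, &passedPixels, sizeof(passedPixels), 0) == S_OK)
{
    bool pointIsVisible = (passedPixels > 0);
}

That keeps the visibility test entirely on the GPU at the cost of a small readback, instead of ray-testing 400k vertices on the CPU.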
  6. I want to output an image file from the shader resource view of a render texture, so I have to transfer the shader resource view into an image object, and I don't know how to do this. Could you help me with it? PS: I am using SharpDX to develop my program, so C# code would be best. Many thanks.
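Since SharpDX mirrors the native API almost one-to-one (Texture2D, DeviceContext.CopyResource, DeviceContext.MapSubresource), here is the usual readback sequence in C++ terms, a sketch assuming renderTexture is the ID3D11Texture2D behind the SRV:

// Create a CPU-readable staging copy of the render texture.
D3D11_TEXTURE2D_DESC desc;
renderTexture->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

ID3D11Texture2D* staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);
context->CopyResource(staging, renderTexture);

// Map it and copy the pixels out row by row.
D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// mapped.pData holds the pixels, mapped.RowPitch bytes per row;
// feed them to any bitmap/image writer.
context->Unmap(staging, 0);

On the C++ side, DirectXTK wraps this whole sequence in SaveWICTextureToFile (the ScreenGrab module); it may be worth checking whether your SharpDX version ships a similar helper before writing the copy by hand.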
  7. Hello, I have a problem with GPU skinning. From a COLLADA file I load my object, with the vertices plus the weights and bone indices for them, and the bones with their matrices. For every vertex I choose 4 weights and 4 bone indices. For every non-skinned vertex I use the weights 1 0 0 0 and bone indices 0 0 0 0 (index 0 in the bone matrices array holds an identity matrix), and I check that all weight values sum to 1, or normalize them so they do. So far so good; my shader looks like this:

bool HasBones;
matrix BoneMatrices[256];

struct Vertex
{
    float3 Position : POSITION;
    float3 Normal : NORMAL;
    float2 UV : TEXCOORD0;
    float3 Tangent : TANGENT;
    float4 Weights : WEIGHTS;
    int4 BoneIndices : BONEINDICES;
};

float4 ApplyBoneTransform(Vertex input, float4 value)
{
    if (HasBones)
    {
        float4x4 skinTransform = (float4x4)0;
        skinTransform += BoneMatrices[input.BoneIndices.x] * input.Weights.x;
        skinTransform += BoneMatrices[input.BoneIndices.y] * input.Weights.y;
        skinTransform += BoneMatrices[input.BoneIndices.z] * input.Weights.z;
        skinTransform += BoneMatrices[input.BoneIndices.w] * input.Weights.w;
        float4 position = mul(value, skinTransform);
        return position;
    }
    else
        return value;
}

Pixel vertexShader(Vertex input)
{
    Pixel result = (Pixel)0;
    float4 posWorld = mul(ApplyBoneTransform(input, float4(input.Position.xyz, 1.0f)), World);
    result.Position = mul(mul(posWorld, View), Projection);
    result.Normal = normalize(mul(ApplyBoneTransform(input, float4(input.Normal.xyz, 1.0f)), WorldIT));
    result.UV = input.UV;
    result.View = ViewInverse[3] - mul(float4(input.Position.xyz, 1.0f), World);
    result.Tangent = normalize(mul(ApplyBoneTransform(input, float4(input.Tangent.xyz, 1.0f)), WorldIT).xyz);
    result.Binormal = normalize(cross(input.Normal, input.Tangent));
    return result;
}

If I set HasBones to true, my object is no longer drawn correctly; I only see two dark triangles. I believe it depends on the bone matrices I load from the controller_lib of the COLLADA file and send to BoneMatrices in the shader, plus the identity matrix at index 0. Does anyone have an idea what I'm doing wrong, and could you help me and explain it? I'm attaching the COLLADA file and images of the object drawn with HasBones = false and HasBones = true. Greets, Benjamin. Model.dae
  8. Hi there, I have an issue with the SpriteRenderer I have developed: I seem to be getting different alpha blending from one run to the next. I am using DirectXTK, have followed the advice in the wiki, and have converted all my sprite images to pre-multiplied alpha using texconv.exe:

texconv -pow2 -pmalpha -m 1 -f BC3_UNORM -dx10 -y -o Tests\Resources\Images Tests\ResourceInput\Images\*.png

and here is the code that applies the blend states (using the DirectXTK CommonStates class):

ID3D11BlendState* blendState = premultAlphas ? m_commonStates->AlphaBlend() : m_commonStates->NonPremultiplied();
m_deviceContext->OMSetBlendState(blendState, nullptr, 0xffffffff);

Can anyone think of what I am doing wrong?
  9. Hello, I wrote a geometry shader in order to render some planes (in the future, grass). I get a strange error that I do not understand. It says: The given primitive topology does not match with the topology expected by the geometry shader. I do not get the magenta error color, but nothing appears in the scene. Could anyone point out my error?

struct v2g
{
    float4 pos : POSITION;
    // float3 norm : NORMAL;
};

struct g2f
{
    float4 pos : POSITION;
    // float3 norm : NORMAL;
};

v2g vert(appdata_full v)
{
    // float3 v0 = v.vertex.xyz;
    v2g o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // o.norm = v.normal;
    return o;
}

[maxvertexcount(3)]
void geom(point v2g v[1], inout TriangleStream<g2f> triStream)
{
    float3 lightPosition = _WorldSpaceLightPos0;
    float3 perpAngle = float3(1, 0, 0);
    float3 up_normal = float3(0, 1, 0);
    float3 faceNormal = cross(perpAngle, up_normal);

    float4 v0 = v[0].pos;
    float4 v1 = v0;
    v1.xyz += _GrassWidth * perpAngle;
    float4 v2 = v1;
    v2.xyz += (0, 1, 0) * _GrassHeight;

    g2f o;
    o.pos = UnityObjectToClipPos(v0);
    triStream.Append(o);
    o.pos = UnityObjectToClipPos(v1);
    triStream.Append(o);
    o.pos = UnityObjectToClipPos(v2);
    triStream.Append(o);
}

Thanks a lot!
  10. I managed to convert the OpenGL code at http://john-chapman-graphics.blogspot.co.uk/2013/02/pseudo-lens-flare.html to HLSL, but unfortunately I don't know how to add it to my atmospheric scattering code (sky, first image). Can anyone help me? I tried to bind the sky texture as an SRV and implement the lens flare code in the pixel shader, but I don't know how to separate them (second image).
  11. I have a multi-tab application. Each tab is a form which has a single control window for SharpDX. I am using RenderLoop.Run(control, someFunc) to update the graphics on each tab. Each tab works correctly when created, but the previously created tab no longer updates its graphics window, because it appears RenderLoop.Run on that tab is no longer being called. Each new tab, when created, appears to replace the previous RenderLoop. The tabs can also be undocked, so more than one may be visible at once, and any tab can be closed, so I have no guarantee that any one tab will be there. How should I solve this problem? Should I use a Timer instead?
  12. I'm using fxc.exe from the Win10 SDK (10.0.10586.0) to build my HLSL code. I've got some code in different pixel shaders that uses interlocked instructions on UAVs, such as:

RWBuffer<uint> FragmentCount : register(u2);
RWTexture2D<uint> HeadIndex : register(u3);
...
uint newNode = 0;
/*!!*/InterlockedAdd(FragmentCount[0], 1U, newNode);/*!!*/
...
uint previousTail = 0;
/*!!*/InterlockedExchange(HeadIndex[xy], newNode+1, previousTail);/*!!*/
...
uint previousHead = 0;
/*!!*/InterlockedAdd(HeadIndex[xy], 0x01000000, previousHead);/*!!*/

This compiles fine with the ps_5_0, cs_5_0 and cs_5_1 targets, but with ps_5_1 the compiler gives error X4532: cannot map expression to ps_5_1 instruction set, on the lines indicated with /*!!*/. Anyone else experienced this? What the hell, right?
  13. HRESULT FBXLoader::Open(HWND hWnd, char* Filename)
{
    HRESULT hr = S_OK;
    if (FBXM)
    {
        FBXIOS = FbxIOSettings::Create(FBXM, IOSROOT);
        FBXM->SetIOSettings(FBXIOS);
        FBXI = FbxImporter::Create(FBXM, "");

        if (!(FBXI->Initialize(Filename, -1, FBXIOS)))
            MessageBox(hWnd, (wchar_t*)FBXI->GetStatus().GetErrorString(), TEXT("ALM"), MB_OK);

        FBXS = FbxScene::Create(FBXM, "MCS");
        if (!FBXS)
            MessageBox(hWnd, TEXT("Failed to create the scene"), TEXT("ALM"), MB_OK);

        if (!(FBXI->Import(FBXS)))
            MessageBox(hWnd, TEXT("Failed to import fbx file content into the scene"), TEXT("ALM"), MB_OK);

        if (FBXI)
            FBXI->Destroy();

        FbxNode* MainNode = FBXS->GetRootNode();
        int NumKids = MainNode->GetChildCount();
        FbxNode* ChildNode = NULL;

        for (int i = 0; i < NumKids; i++)
        {
            ChildNode = MainNode->GetChild(i);
            FbxNodeAttribute* NodeAttribute = ChildNode->GetNodeAttribute();
            if (NodeAttribute->GetAttributeType() == FbxNodeAttribute::eMesh)
            {
                FbxMesh* Mesh = ChildNode->GetMesh();

                NumVertices = Mesh->GetControlPointsCount(); // number of vertices
                MyV = new FBXVTX[NumVertices];
                for (DWORD j = 0; j < NumVertices; j++)
                {
                    FbxVector4 Vertex = Mesh->GetControlPointAt(j); // gets the control point at the specified index
                    MyV[j].Position = XMFLOAT3((float)Vertex.mData[0], (float)Vertex.mData[1], (float)Vertex.mData[2]);
                }

                NumIndices = Mesh->GetPolygonVertexCount(); // number of indices; for cube 20
                MyI = new DWORD[NumIndices];
                MyI = (DWORD*)Mesh->GetPolygonVertices(); // index array

                NumFaces = Mesh->GetPolygonCount();
                MyF = new FBXFACEX[NumFaces];
                for (int l = 0; l < NumFaces; l++)
                {
                    MyF[l].Vertices[0] = MyI[4*l];
                    MyF[l].Vertices[1] = MyI[4*l+1];
                    MyF[l].Vertices[2] = MyI[4*l+2];
                    MyF[l].Vertices[3] = MyI[4*l+3];
                }

                UV = new XMFLOAT2[NumIndices];
                for (int i = 0; i < Mesh->GetPolygonCount(); i++) // polygon (= mostly rectangle) count
                {
                    FbxLayerElementArrayTemplate<FbxVector2>* uvVertices = NULL;
                    Mesh->GetTextureUV(&uvVertices);
                    for (int j = 0; j < Mesh->GetPolygonSize(i); j++) // number of vertices in this polygon
                    {
                        FbxVector2 uv = uvVertices->GetAt(Mesh->GetTextureUVIndex(i, j));
                        UV[4*i+j] = XMFLOAT2((float)uv.mData[0], (float)uv.mData[1]);
                    }
                }
            }
        }
    }
    else
        MessageBox(hWnd, TEXT("Failed to create the FBX Manager"), TEXT("ALM"), MB_OK);

    return hr;
}

I've been trying to load FBX files (cube.fbx) into my program, but I get this. Can someone please help me?
  14. I'm trying to duplicate vertices using std::map, to be used in a vertex buffer. I don't get the correct index buffer (myInds) or vertex buffer (myVerts). I can get the index array from FBX, but it differs from what I get in the following std::map code. Any help is much appreciated.

struct FBXVTX
{
    XMFLOAT3 Position;
    XMFLOAT2 TextureCoord;
    XMFLOAT3 Normal;
};

std::map<FBXVTX, int> myVertsMap;
std::vector<FBXVTX> myVerts;
std::vector<int> myInds;

HRESULT FBXLoader::Open(HWND hWnd, char* Filename, bool UsePositionOnly)
{
    HRESULT hr = S_OK;
    if (FBXM)
    {
        FBXIOS = FbxIOSettings::Create(FBXM, IOSROOT);
        FBXM->SetIOSettings(FBXIOS);
        FBXI = FbxImporter::Create(FBXM, "");

        if (!(FBXI->Initialize(Filename, -1, FBXIOS)))
        {
            hr = E_FAIL;
            MessageBox(hWnd, (wchar_t*)FBXI->GetStatus().GetErrorString(), TEXT("ALM"), MB_OK);
        }

        FBXS = FbxScene::Create(FBXM, "REALMS");
        if (!FBXS)
        {
            hr = E_FAIL;
            MessageBox(hWnd, TEXT("Failed to create the scene"), TEXT("ALM"), MB_OK);
        }

        if (!(FBXI->Import(FBXS)))
        {
            hr = E_FAIL;
            MessageBox(hWnd, TEXT("Failed to import fbx file content into the scene"), TEXT("ALM"), MB_OK);
        }

        FbxAxisSystem OurAxisSystem = FbxAxisSystem::DirectX;
        FbxAxisSystem SceneAxisSystem = FBXS->GetGlobalSettings().GetAxisSystem();
        if (SceneAxisSystem != OurAxisSystem)
        {
            FbxAxisSystem::DirectX.ConvertScene(FBXS);
        }

        FbxSystemUnit SceneSystemUnit = FBXS->GetGlobalSettings().GetSystemUnit();
        if (SceneSystemUnit.GetScaleFactor() != 1.0)
        {
            FbxSystemUnit::cm.ConvertScene(FBXS);
        }

        if (FBXI)
            FBXI->Destroy();

        FbxNode* MainNode = FBXS->GetRootNode();
        int NumKids = MainNode->GetChildCount();
        FbxNode* ChildNode = NULL;

        for (int i = 0; i < NumKids; i++)
        {
            ChildNode = MainNode->GetChild(i);
            FbxNodeAttribute* NodeAttribute = ChildNode->GetNodeAttribute();
            if (NodeAttribute->GetAttributeType() == FbxNodeAttribute::eMesh)
            {
                FbxMesh* Mesh = ChildNode->GetMesh();
                if (UsePositionOnly)
                {
                    NumVertices = Mesh->GetControlPointsCount(); // number of vertices
                    MyV = new XMFLOAT3[NumVertices];
                    for (DWORD j = 0; j < NumVertices; j++)
                    {
                        FbxVector4 Vertex = Mesh->GetControlPointAt(j); // gets the control point at the specified index
                        MyV[j] = XMFLOAT3((float)Vertex.mData[0], (float)Vertex.mData[1], (float)Vertex.mData[2]);
                    }
                    NumIndices = Mesh->GetPolygonVertexCount(); // number of indices
                    MyI = (DWORD*)Mesh->GetPolygonVertices(); // index array
                }
                else
                {
                    FbxLayerElementArrayTemplate<FbxVector2>* uvVertices = NULL;
                    Mesh->GetTextureUV(&uvVertices);
                    int idx = 0;
                    for (int i = 0; i < Mesh->GetPolygonCount(); i++) // polygon (= mostly triangle) count
                    {
                        for (int j = 0; j < Mesh->GetPolygonSize(i); j++) // number of vertices in this polygon
                        {
                            FBXVTX myVert;
                            int p_index = 3*i+j;
                            int t_index = Mesh->GetTextureUVIndex(i, j);

                            FbxVector4 Vertex = Mesh->GetControlPointAt(p_index); // gets the control point at the specified index
                            myVert.Position = XMFLOAT3((float)Vertex.mData[0], (float)Vertex.mData[1], (float)Vertex.mData[2]);

                            FbxVector4 Normal;
                            Mesh->GetPolygonVertexNormal(i, j, Normal);
                            myVert.Normal = XMFLOAT3((float)Normal.mData[0], (float)Normal.mData[1], (float)Normal.mData[2]);

                            FbxVector2 uv = uvVertices->GetAt(t_index);
                            myVert.TextureCoord = XMFLOAT2((float)uv.mData[0], (float)uv.mData[1]);

                            if (myVertsMap.find(myVert) != myVertsMap.end())
                                myInds.push_back(myVertsMap[myVert]);
                            else
                            {
                                myVertsMap.insert(std::pair<FBXVTX, int>(myVert, idx));
                                myVerts.push_back(myVert);
                                myInds.push_back(idx);
                                idx++;
                            }
                        }
                    }
                }
            }
        }
    }
    else
    {
        hr = E_FAIL;
        MessageBox(hWnd, TEXT("Failed to create the FBX Manager"), TEXT("ALM"), MB_OK);
    }
    return hr;
}

bool operator < (const FBXVTX &lValue, const FBXVTX &rValue)
{
    if (lValue.Position.x != rValue.Position.x) return (lValue.Position.x < rValue.Position.x);
    if (lValue.Position.y != rValue.Position.y) return (lValue.Position.y < rValue.Position.y);
    if (lValue.Position.z != rValue.Position.z) return (lValue.Position.z < rValue.Position.z);
    if (lValue.TextureCoord.x != rValue.TextureCoord.x) return (lValue.TextureCoord.x < rValue.TextureCoord.x);
    if (lValue.TextureCoord.y != rValue.TextureCoord.y) return (lValue.TextureCoord.y < rValue.TextureCoord.y);
    if (lValue.Normal.x != rValue.Normal.x) return (lValue.Normal.x < rValue.Normal.x);
    if (lValue.Normal.y != rValue.Normal.y) return (lValue.Normal.y < rValue.Normal.y);
    return (lValue.Normal.z < rValue.Normal.z);
}
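One detail worth checking (an observation on the code above, not a certain diagnosis): p_index = 3*i+j is a polygon-vertex index, but GetControlPointAt() expects a control-point index, and the two only coincide for meshes whose vertices are fully unshared. The FBX SDK exposes the mapping explicitly:

// For polygon i, corner j: look up the control-point (position) index
// instead of assuming polygon-vertex order matches control-point order.
int cpIndex = Mesh->GetPolygonVertex(i, j);
FbxVector4 Vertex = Mesh->GetControlPointAt(cpIndex);

If positions come out right with this change, the map-based deduplication itself is likely fine, since the ordering operator already covers all three attributes.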
  15. Hello, I'm currently working on a visualisation program which should render a low-poly car model in real time and highlight some elements based on data I get from a database. So far no difficulty here. I managed to set up my UWP project so that I have a working DirectX 11 device and real-time swapchain-based output. When I debug, my Input Assembler stage shows a valid model: the whole thing is correctly rendered in the VS Graphics Analyzer. The Vertex Shader output seems okay too, but it's likely that the camera view matrix is a bit wrong (too close, maybe the world matrix is rotated wrong as well); the VS-GA does show some output there. The Pixel Shader does not run, though. My assumption is that the coordinates I get from the Vertex Shader are bad (high values which I guess should actually be between -1.0f and 1.0f), so the Rasterizer clips all vertices and nothing ends up rendered. I've been struggling with this for days now, and since I really need to get this working soon, I hoped someone here has the knowledge to help me fix this. Here's a screenshot of the debugger. I'm currently just using a simple ambient light shader (see here). And here's my code for rendering. The model class I'm using simply loads the vertices from an STL file and creates the vertex buffer for it. It sets the buffer and renders the model, indexed or not (right now I don't have an index buffer, since I haven't figured out how to calculate the indices for the model... but that'll do for now).

public override void Render()
{
    Device.Clear(Colors.CornflowerBlue, DepthStencilClearFlags.Depth, 1, 0);
    _camera.Update();

    ViewportF[] viewports = Device.Device.ImmediateContext.Rasterizer.GetViewports<ViewportF>();
    _projection = Matrix.PerspectiveFovRH((float)Math.PI / 4.0f, viewports[0].Width / viewports[0].Height, 0.1f, 5000f);

    _worldMatrix.SetMatrix(Matrix.RotationX(MathUtil.DegreesToRadians(-90f)));
    _viewMatrix.SetMatrix(_camera.View);
    _projectionMatrix.SetMatrix(_projection);

    Device.Context.InputAssembler.InputLayout = _inputLayout;
    Device.Context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;

    EffectTechnique tech = _shadingEffect.GetTechniqueByName("Ambient");
    for (int i = 0; i < tech.Description.PassCount; i++)
    {
        EffectPass pass = tech.GetPassByIndex(i);
        pass.Apply(Device.Context);
        _model.Render();
    }
}

The world is rotated around the X axis, since the model coordinate system is actually a right-handed CS where X+ is depth, Y the horizontal and Z the vertical coordinate. Couldn't figure out if it was right that way, but it should be, theoretically.

public void Update()
{
    _rotation = Quaternion.RotationYawPitchRoll(Yaw, Pitch, Roll);
    Vector3.Transform(ref _target, ref _rotation, out _target);
    Vector3 up = _up;
    Vector3.Transform(ref up, ref _rotation, out up);
    _view = Matrix.LookAtRH(Position, _target, up);
}

The whole camera setup (static, since no input has been implemented) is for now:

_camera = new Camera(Vector3.UnitZ);
_camera.SetView(new Vector3(0, 0, 5000f), new Vector3(0, 0, 0), MathUtil.DegreesToRadians(-90f));

So I tried to place the camera above the origin, looking down (thus the UnitZ up-vector). Can anybody explain why the vertex shader output is so wrong and obviously all vertices get clipped? Thank you very much!
  16. Hi guys, I want to visualize textures having the DXGI_FORMAT_D16_UNORM format (like depth, shadow maps, etc.). I'm copying to the clipboard but can't figure out how to get the values.

// resource is mapped to subresource
for (int i = 0; i < texture_height; i++)
{
    BYTE* src = (BYTE*)(subresource.pData) + (i * subresource.RowPitch); // start of row
    for (int j = 0; j < texture_width; j++)
    {
        grey = *((unsigned short*)(src)); // this isn't right
        // fill in my rgb values here
        src += 2; // advance 2 bytes = 16 bits
    }
}

Please help. Thanks.
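For reference, a D16_UNORM texel is an unsigned 16-bit value where 0 maps to 0.0 and 65535 maps to 1.0. A sketch of the inner loop under that assumption:

unsigned short raw = *reinterpret_cast<const unsigned short*>(src);
float normalized = raw / 65535.0f;               // UNORM -> [0.0, 1.0]
BYTE grey = (BYTE)(normalized * 255.0f + 0.5f);  // round to 8-bit grey
// r = g = b = grey, a = 255
src += 2; // advance 16 bits

Note that with a perspective projection most depth values cluster near 1.0, so the image may look almost uniformly white until you apply a contrast stretch or linearize the depth first.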
  17. I'm getting the error "WINCODEC_ERR_UNSUPPORTEDPIXELFORMAT", so clearly I'm doing something wrong, I just don't understand what :\ Do you guys see anything wrong in my code?

// Create D3D device, context, swap chain
ID3D11Device* d3dDevice;
ID3D11DeviceContext* d3dContext;
IDXGISwapChain* swapChain;

DXGI_SWAP_CHAIN_DESC swapChainDesc{};
swapChainDesc.BufferDesc.Width = appWidth;
swapChainDesc.BufferDesc.Height = appHeight;
swapChainDesc.BufferDesc.RefreshRate = { 60, 1 };
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
swapChainDesc.BufferDesc.Scaling = DXGI_MODE_SCALING_CENTERED;
swapChainDesc.SampleDesc.Count = 1;   // for AntiAliasing
swapChainDesc.SampleDesc.Quality;     // for AntiAliasing
swapChainDesc.BufferCount = 2;
swapChainDesc.OutputWindow = appWindow;
swapChainDesc.Windowed = true;

if (HRESULT hr = D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
        D3D11_CREATE_DEVICE_BGRA_SUPPORT, nullptr, 0, D3D11_SDK_VERSION,
        &swapChainDesc, &swapChain, &d3dDevice, nullptr, &d3dContext);
    hr != S_OK)
{
    printError("Error during D3d initialization", hr);
    return 1;
}

// Create D2D factory, render target
ID2D1Factory* d2dFactory;
ID2D1RenderTarget* renderTarget;
if (HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, &d2dFactory); hr != S_OK)
{
    printError("Error during D2d factory creation", hr);
    return 1;
}

// Get backbuffer
IDXGISurface* dxgiSurface;
if (HRESULT hr = swapChain->GetBuffer(0, __uuidof(dxgiSurface), reinterpret_cast<void**>(&dxgiSurface)); hr != S_OK)
{
    printError("Error retrieving dxgiSurface", hr);
    return 1;
}

float dpiX, dpiY;
d2dFactory->GetDesktopDpi(&dpiX, &dpiY);

D2D1_RENDER_TARGET_PROPERTIES props{};
props.type = D2D1_RENDER_TARGET_TYPE_DEFAULT;
props.pixelFormat = D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED);
props.dpiX = dpiX;
props.dpiY = dpiY;

if (HRESULT hr = d2dFactory->CreateDxgiSurfaceRenderTarget(dxgiSurface, props, &renderTarget); hr != S_OK)
{
    printError("CreateDxgiSurfaceRenderTarget() function failed", hr); // THIS FAILS
    system("pause");
    return 1;
}
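One thing worth trying (a guess from the code shown, not a confirmed diagnosis): spell out the swap chain's actual format instead of DXGI_FORMAT_UNKNOWN, keeping an alpha mode Direct2D supports for that format:

D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
    dpiX, dpiY);

If that still fails, D2D1_ALPHA_MODE_IGNORE is the other combination commonly accepted for an opaque swap chain buffer.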
  18. I've been away for a VERY long time, so if this topic has already been discussed, I couldn't find it. I started using VS2017 recently and I keep getting warnings like this:

1>c:\program files (x86)\microsoft directx sdk (june 2010)\include\d3d10.h(609): warning C4005: 'D3D10_ERROR_FILE_NOT_FOUND': macro redefinition (compiling source file test.cpp)
1>C:\Program Files (x86)\Windows Kits\10\Include\10.0.16299.0\shared\winerror.h(54103): note: see previous definition of 'D3D10_ERROR_FILE_NOT_FOUND' (compiling source file test.cpp)

It pops up for various things, but the reasons are all the same: something is already defined. I have the DXSDK (June 2010) and I'm referencing its .lib and .h files correctly (otherwise I wouldn't get warnings, I'd get errors). Is there a way to correct this issue, or do I just have to live with it? Also (a little off-topic), the compiler doesn't like to compile my code if I make very small changes... What's up with that? Can I change it? Google is no help.
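For context, this C4005 storm is the classic symptom of mixing the legacy DirectX SDK (June 2010) headers with the Windows 8/10 SDK, which absorbed the D3D headers and now defines those macros itself (see the MSDN article "Where is the DirectX SDK?"); dropping the June 2010 include/lib paths, unless you specifically need D3DX or other legacy-only libraries, is the documented long-term fix. If the legacy SDK has to stay, a hedged workaround is to silence the redefinitions around the legacy includes only:

#pragma warning(push)
#pragma warning(disable : 4005) // macro redefinition
#include <d3d10.h>              // legacy DXSDK header re-defining Win10 SDK macros
#pragma warning(pop)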
  19. nullptr SRV (matt77hias): Is it ok to bind nullptr shader resource views and sample them in some shader? I.e. is the resulting behavior deterministic and consistent across GPU drivers? Or should one rather bind an SRV to a texture having just a single black texel?
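For what it's worth, the D3D11 runtime defines reads from an unbound (nullptr) SRV slot to return zero in all components, and the debug layer flags it with a warning rather than an error; a fallback texture avoids the warning and any lingering doubts about driver conformance. A minimal sketch of a 1x1 black-texel SRV (device is the usual ID3D11Device):

UINT32 blackTexel = 0; // R8G8B8A8_UNORM: (0, 0, 0, 0)

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = &blackTexel;
init.SysMemPitch = sizeof(UINT32);

D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 1;
desc.Height = 1;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_IMMUTABLE;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* blackTexture = nullptr;
ID3D11ShaderResourceView* blackSRV = nullptr;
device->CreateTexture2D(&desc, &init, &blackTexture);
device->CreateShaderResourceView(blackTexture, nullptr, &blackSRV);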
  20. Is it common to have more than one ID3D11Device and/or associated immediate ID3D11DeviceContext? If I am correct, a single display subsystem (GPU, video memory, etc.) is completely determined (from a 3D rendering perspective) by an IDXGIAdapter (meta-functionality facade), an ID3D11Device (resource-creation facade) and an ID3D11DeviceContext (pipeline facade). So if you want to use multiple display subsystems, you have to handle multiple of these interfaces. A concrete example would be a graphics card dedicated to rendering and a separate graphics card dedicated to computation, or combining an integrated and a dedicated graphics card. All such cases seem to me too far-fetched to justify support in the majority of games. So, moving one abstraction level further downstream: should a game engine even consider multiple display subsystems (i.e. is there just one ID3D11Device and one immediate ID3D11DeviceContext)?
  21. Hi, right now building my engine in Visual Studio involves a shader-compiling step to build HLSL 5.0 shaders. I have a separate project which only includes shader sources, and the compiler is the Visual Studio integrated fxc compiler. I like this method because on any PC that has Visual Studio installed, I can just download the solution from GitHub and everything builds without additional dependencies, using the latest version of the compiler. I also like it because the shaders are included in the solution explorer and easy to browse and double-click to open (opening files can be a real pain in Visual Studio run in admin mode). It's also nice that VS displays the build output/errors in the output window. But now I have the HLSL 6 compiler and want to build HLSL 6 shaders as well (and, as I understand it, I can also compile Vulkan-compatible shaders with it later). Any idea how to do this nicely? I want only a single project containing shader sources, like it is now, but to build them for different targets. I guess adding different build projects that reference the shader source project would be the way to go? But how would they tell apart the shader type of the sources (e.g. pixel shader, compute shader, etc.)? Right now the shader-building project records the shader type for each shader; how can other build projects reference that? Anyone with some experience in this?
  22. Hi there, this is my first post in what looks to be a very interesting forum. I am using DirectXTK to put together my 2D game engine, but I would like to use the GPU depth buffer in order to avoid sorting back-to-front on the CPU, and I think I also want to use GPU instancing. Can I do that with SpriteBatch, or am I looking at implementing my own sprite rendering? Thanks in advance!
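SpriteBatch can be pointed at the depth buffer: Begin accepts explicit state overrides, and the layerDepth value passed to Draw goes out with the sprite's vertices, so with a depth-enabled state the GPU rejects hidden pixels for you. A sketch assuming DirectXTK's CommonStates helper (and that the bound render target has a depth buffer attached):

DirectX::CommonStates states(device);

spriteBatch->Begin(DirectX::SpriteSortMode_Deferred,
                   nullptr,                 // default (premultiplied) blending
                   nullptr,                 // default sampler
                   states.DepthDefault(),   // depth test + write enabled
                   nullptr);                // default rasterizer
spriteBatch->Draw(textureSRV, position, nullptr, DirectX::Colors::White,
                  0.f, DirectX::XMFLOAT2(0, 0), 1.f,
                  DirectX::SpriteEffects_None, layerDepth); // z in [0, 1]
spriteBatch->End();

Two caveats: depth testing only cleanly replaces CPU sorting for opaque sprites (translucent edges still need back-to-front ordering somewhere), and SpriteBatch batches quads into a dynamic vertex buffer rather than using hardware instancing, so true GPU instancing would mean a custom sprite renderer.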
  23. Hi all, I have another "niche" architecture error. On our build servers we're using head-less machines running DX11 WARP in a console session, that is D3D_DRIVER_TYPE_WARP plus D3D_FEATURE_LEVEL_11_0, on Windows 7 or Windows Server 2008 R2 with the "Platform Update for Windows 7". Everything's been fine: it runs all kinds of complex rendering, compute shaders, UAVs, everything, and even fast. The problem: writes to a specific slice and specific mipmap of a cubemap array using PS+UAV seem to be dropped. Do note that with D3D_DRIVER_TYPE_HARDWARE it works correctly; I can reproduce the bug on any normal workstation (also Windows 7 x64) with D3D_DRIVER_TYPE_WARP. The shader in question is a simple average 4->1 mipmapping PS, which samples a source SRV texture and writes into a UAV like this:

RWTexture2DArray<float4> array2d;
array2d[int3(xy, arrayIdx)] = avg_float4_value;

The output merger is set to do no RT writes; the only output is via that one UAV. Note again that with a normal HW driver (GeForce) it works right, but with WARP it doesn't. Any ideas how I could debug this, to be sure it's really WARP causing it? Do you think RenderDoc will capture a WARP application (using their StartFrameCapture/EndFrameCapture API of course, since there's no window nor swap chain)? EDIT: RenderDoc does make a capture even with WARP, wow. Thanks!
  24. Hi, I've been trying to implement a Gaussian blur recently; it would seem the best way to achieve this is by running a blur on one axis, then another blur on the other axis. I think I have successfully implemented the blur part per axis, but now I have to blend both calls with a proper BlendState, at least I think this is where my problem is. Here are my passes:

RasterizerState DisableCulling
{
    CullMode = BACK;
};

BlendState AdditiveBlend
{
    BlendEnable[0] = TRUE;
    BlendEnable[1] = TRUE;
    SrcBlend[0] = SRC_COLOR;
    BlendOp[0] = ADD;
    BlendOp[1] = ADD;
    SrcBlend[1] = SRC_COLOR;
};

technique11 BlockTech
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_5_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_5_0, PS_BlurV()));
        SetRasterizerState(DisableCulling);
        SetBlendState(AdditiveBlend, float4(0.0, 0.0, 0.0, 0.0), 0xffffffff);
    }
    pass P1
    {
        SetVertexShader(CompileShader(vs_5_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_5_0, PS_BlurH()));
        SetRasterizerState(DisableCulling);
    }
}

D3DX11_TECHNIQUE_DESC techDesc;
mBlockEffect->mTech->GetDesc(&techDesc);
for (UINT p = 0; p < techDesc.Passes; ++p)
{
    deviceContext->IASetVertexBuffers(0, 2, bufferPointers, stride, offset);
    deviceContext->IASetIndexBuffer(mIB, DXGI_FORMAT_R32_UINT, 0);
    mBlockEffect->mTech->GetPassByIndex(p)->Apply(0, deviceContext);
    deviceContext->DrawIndexedInstanced(36, mNumberOfActiveCubes, 0, 0, 0);
}

(Attached images: no blur / PS_BlurV / PS_BlurH / P0 + P1.) As you can see, it does not work at all. I think the issue is in my BlendState, but I am not sure. I've seen many articles going with the render-to-texture approach, but I've also seen articles where both shaders were called in succession and it worked just fine; I'd like to go with that second approach. Unfortunately, the code was in OpenGL, where the syntax for running multiple passes is quite different (http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/). So I need some help doing the same in HLSL :-) Thanks!
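For what it's worth, blending can't express a separable blur: additively blending the two passes computes blurV(image) + blurH(image), whereas the separable Gaussian needs blurH(blurV(image)), i.e. the second pass must read the first pass's output. That is why the articles render to a texture in between. A host-side sketch (tempRTV/tempSRV are hypothetical views of an intermediate texture the size of the backbuffer):

// Pass 0: render the scene with the vertical-blur shader into the temp target.
deviceContext->OMSetRenderTargets(1, &tempRTV, depthStencilView);
mBlockEffect->mTech->GetPassByIndex(0)->Apply(0, deviceContext);
deviceContext->DrawIndexedInstanced(36, mNumberOfActiveCubes, 0, 0, 0);

// Pass 1: run the horizontal blur over pass 0's result via a fullscreen quad.
deviceContext->OMSetRenderTargets(1, &backbufferRTV, nullptr);
deviceContext->PSSetShaderResources(0, 1, &tempSRV); // read the temp texture
mBlockEffect->mTech->GetPassByIndex(1)->Apply(0, deviceContext);
deviceContext->Draw(4, 0); // fullscreen quad; no blend state needed

// Unbind, so the temp texture can be a render target again next frame.
ID3D11ShaderResourceView* nullSRV = nullptr;
deviceContext->PSSetShaderResources(0, 1, &nullSRV);

The OpenGL article linked above does exactly this with FBOs; two passes in a single technique without an intermediate target cannot see each other's output.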
  25. I hope this is the right place to ask questions about DirectXTK which aren't really about graphics; if not, please let me know a better place. Can anyone tell me why I cannot do this:

DirectX::SimpleMath::Rectangle rectangle = {...};
RECT rect = rectangle;

or

RECT rect = static_cast<RECT>(rectangle);

or

const RECT rect(m_textureRect);

despite Rectangle having the following operator RECT:

operator RECT()
{
    RECT rct;
    rct.left = x;
    rct.top = y;
    rct.right = (x + width);
    rct.bottom = (y + height);
    return rct;
}

VS2017 tells me: error C2440: 'initializing': cannot convert from 'const DirectX::SimpleMath::Rectangle' to 'const RECT'. Thanks in advance
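A likely explanation (inferred from the quoted operator, worth verifying against your SimpleMath.h): operator RECT() is not const-qualified, so it cannot be invoked on a const Rectangle, and the 'const DirectX::SimpleMath::Rectangle' in the error suggests the source object is const in the failing lines. A minimal sketch of the distinction:

DirectX::SimpleMath::Rectangle r(0, 0, 64, 64);
RECT a = r;   // OK: non-const object, non-const conversion operator

const DirectX::SimpleMath::Rectangle cr(0, 0, 64, 64);
// RECT b = cr;                               // C2440: operator RECT() isn't const
RECT c = DirectX::SimpleMath::Rectangle(cr);  // OK: a non-const temporary converts

Declaring the conversion operator as operator RECT() const would remove the restriction; whether your DirectXTK version already does so is worth checking.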