
Search the Community

Showing results for tags 'DX11'.

Found 1444 results

  1. nullptr SRV

    Is it OK to bind null shader resource views and sample them in a shader? That is, is the resulting behavior deterministic and consistent across GPU drivers? Or should one instead bind an SRV to a texture containing just a single black texel?
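
    For reference, the single-black-texel fallback mentioned above is only a little code; a sketch (the device pointer and names are illustrative):

    // A 1x1 opaque-black texture whose SRV can be bound instead of a null
    // SRV, so shaders always sample a defined value.
    const UINT blackPixel = 0xFF000000; // bytes 00 00 00 FF = black, full alpha

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = 1;
    desc.Height = 1;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = &blackPixel;
    init.SysMemPitch = sizeof(blackPixel);

    ID3D11Texture2D* blackTex = nullptr;
    ID3D11ShaderResourceView* blackSRV = nullptr;
    device->CreateTexture2D(&desc, &init, &blackTex);
    device->CreateShaderResourceView(blackTex, nullptr, &blackSRV);
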
  2. DirectX9: getting shader bytecode. I hook DrawIndexedPrimitive:

    HookCode(PPointer(g_DeviceBaseAddr + $148)^, @NewDrawIndexedPrimitive, @OldDrawIndexedPrimitive);

    function NewDrawIndexedPrimitive(const Device: IDirect3DDevice9; _Type: TD3DPrimitiveType;
      BaseVertexIndex: Integer; MinVertexIndex, NumVertices, startIndex, primCount: LongWord): HResult; stdcall;
    var
      ppShader: IDirect3DVertexShader9;
      _Code: Pointer;
      _CodeLen: Cardinal;
    begin
      Device.GetVertexShader(ppShader);      // 1. Get the shader object (ppShader)
      ppShader.GetFunction(nil, _CodeLen);   //    query the required buffer size
      GetMem(_Code, _CodeLen);
      ppShader.GetFunction(_Code, _CodeLen); // 2. Get the bytecode from the shader object
      Result := OldDrawIndexedPrimitive(Self, _Type, BaseVertexIndex, MinVertexIndex,
        NumVertices, startIndex, primCount);
    end;

    How can I get the vertex shader bytecode in DirectX11? I hook DrawIndexed:

    pDrawIndexed := PPointer(PUINT_PTR(UINT_PTR(g_ImmContext) + 0)^ + 12 * SizeOf(Pointer))^;
    HookCode(pDrawIndexed, @NewDrawIndexed, @OldDrawIndexed);

    procedure NewDrawIndexed(g_Real_ImmContext: ID3D11DeviceContext; IndexCount: UINT;
      StartIndexLocation: UINT; BaseVertexLocation: Integer); stdcall;
    var
      game_pVertexShader: ID3D11VertexShader;
      ppClassInstances: ID3D11ClassInstance;
      NumClassInstances: UINT;
    begin
      g_Real_ImmContext.VSGetShader(game_pVertexShader, ppClassInstances, NumClassInstances); // 1. Get the shader object
      // ... 2. How do I get the bytecode from the shader object (game_pVertexShader) here?
      OldDrawIndexed(ImmContext, IndexCount, StartIndexLocation, BaseVertexLocation);
    end;

    Another way would be to hook CreateVertexShader(), but that hook has to be installed before the game calls CreateVertexShader; it never sees the bytecode if it is installed while the game is already running. I need to be able to get the bytecode at any time, like in DirectX9.
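
    As far as I know, D3D11 has no counterpart to IDirect3DVertexShader9::GetFunction; a created ID3D11VertexShader does not expose its bytecode, so the usual workaround is exactly the CreateVertexShader hook, caching the bytecode by shader pointer at creation time so it can be looked up later from any draw hook. A C++ sketch of that idea (the hook-installation plumbing is assumed to exist):

    #include <d3d11.h>
    #include <map>
    #include <vector>

    // Illustrative cache: shader object -> copy of its bytecode, filled by a
    // vtable hook on ID3D11Device::CreateVertexShader.
    static std::map<ID3D11VertexShader*, std::vector<unsigned char>> g_bytecodeCache;

    typedef HRESULT (STDMETHODCALLTYPE* PFN_CreateVertexShader)(
        ID3D11Device*, const void*, SIZE_T, ID3D11ClassLinkage*, ID3D11VertexShader**);
    static PFN_CreateVertexShader g_origCreateVertexShader = nullptr; // set by the hook installer

    static HRESULT STDMETHODCALLTYPE HookedCreateVertexShader(
        ID3D11Device* device, const void* bytecode, SIZE_T length,
        ID3D11ClassLinkage* linkage, ID3D11VertexShader** outShader)
    {
        HRESULT hr = g_origCreateVertexShader(device, bytecode, length, linkage, outShader);
        if (SUCCEEDED(hr) && outShader && *outShader)
            g_bytecodeCache[*outShader].assign(
                static_cast<const unsigned char*>(bytecode),
                static_cast<const unsigned char*>(bytecode) + length);
        return hr;
    }

    Shaders created before the hook is installed stay out of reach this way; I am not aware of any supported way to read bytecode back from the D3D11 runtime after the fact.
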
  3. Is it common to have more than one ID3D11Device and/or associated immediate ID3D11DeviceContext? If I understand correctly, a single display subsystem (GPU, video memory, etc.) is completely determined (from a 3D rendering perspective) by an IDXGIAdapter (meta-functionality facade), an ID3D11Device (resource-creation facade), and an ID3D11DeviceContext (pipeline facade). So if you want to use multiple display subsystems, you have to handle multiple instances of these interfaces. A concrete example would be a graphics card dedicated to rendering plus a separate card dedicated to computation, or an integrated card combined with a dedicated one. All such cases seem far-fetched enough that supporting them in the majority of games is hard to justify. So, moving one abstraction level further downstream: should a game engine even consider multiple display subsystems, or should it assume there is just one ID3D11Device and one immediate ID3D11DeviceContext?
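
    For reference, supporting multiple display subsystems would presumably look like the sketch below: one device and one immediate context per enumerated adapter (illustrative, minimal error handling):

    #include <d3d11.h>
    #include <dxgi.h>

    IDXGIFactory1* factory = nullptr;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* immediateContext = nullptr;
        // The driver type must be UNKNOWN when an explicit adapter is passed.
        D3D11CreateDevice(adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                          nullptr, 0, D3D11_SDK_VERSION,
                          &device, nullptr, &immediateContext);
        // Resources created on `device` are only usable with this adapter.
        adapter->Release();
    }

    Resources cannot cross devices without extra work (shared handles), which is presumably part of why single-device engines are the norm.
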
  4. Hi all, I have another "niche" architecture issue. On our build servers we're using headless machines running DX11 WARP in a console session, that is, D3D_DRIVER_TYPE_WARP plus D3D_FEATURE_LEVEL_11_0, on Windows 7 or Windows Server 2008 R2 with the "Platform Update for Windows 7". Everything has been fine: it runs all kinds of complex rendering, compute shaders, UAVs, everything works and is even fast. The problem: writes to a specific slice and specific mipmap of a cubemap array using PS+UAV seem to be dropped. Note that with D3D_DRIVER_TYPE_HARDWARE it works correctly; I can reproduce the bug on any normal workstation (also Windows 7 x64) with D3D_DRIVER_TYPE_WARP. The shader in question is a simple average 4->1 mipmapping PS, which samples a source SRV texture and writes into a UAV like this:

    RWTexture2DArray<float4> array2d;
    array2d[int3(xy, arrayIdx)] = avg_float4_value;

    The output merger is set to do no RT writes; the only output is via that one UAV. Again, with a normal HW driver (GeForce) it works, but with WARP it doesn't. Any ideas how I could debug this, to be sure it's really WARP causing it? Do you think RenderDoc will capture a WARP application (using their StartFrameCapture/EndFrameCapture API, of course, since there's no window or swap chain)?

    EDIT: RenderDoc does make a capture even with WARP, wow. Thanks!
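
    For reference, binding one mip of one slice of a cubemap array as a PS-stage UAV typically looks like this sketch (variable names are assumptions; the format is whatever the real texture uses):

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
    uavDesc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
    uavDesc.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2DARRAY;
    uavDesc.Texture2DArray.MipSlice = mipLevel;         // the mip being written
    uavDesc.Texture2DArray.FirstArraySlice = faceIndex; // cube face within the array
    uavDesc.Texture2DArray.ArraySize = 1;               // just that one slice

    ID3D11UnorderedAccessView* uav = nullptr;
    device->CreateUnorderedAccessView(cubemapArrayTexture, &uavDesc, &uav);

    // Bound with no RTVs, so the UAV is the only pixel-shader output:
    context->OMSetRenderTargetsAndUnorderedAccessViews(
        0, nullptr, nullptr, /*UAVStartSlot*/ 0, 1, &uav, nullptr);
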
  5. DX11 LensDirt Effect

    I've updated the VBexEditor interface. Cubemap data can now be arranged for use as a skybox, and the SkyCubeShader has been updated to generate a nice lens-dirt effect. Have fun...
  6. I hope this is the right place to ask questions about DirectXTK which aren't really about graphics; if not, please let me know a better place. Can anyone tell me why I cannot do this:

    DirectX::SimpleMath::Rectangle rectangle = {...};
    RECT rect = rectangle;

    or

    RECT rect = static_cast<RECT>(rectangle);

    or

    const RECT rect(m_textureRect);

    despite Rectangle having the following operator RECT:

    operator RECT()
    {
        RECT rct;
        rct.left = x;
        rct.top = y;
        rct.right = (x + width);
        rct.bottom = (y + height);
        return rct;
    }

    VS2017 tells me:

    error C2440: 'initializing': cannot convert from 'const DirectX::SimpleMath::Rectangle' to 'const RECT'

    Thanks in advance
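
    A guess based on the error text: the conversion is attempted on a const Rectangle, and the operator shown is not const-qualified, so it cannot be called on const objects. A minimal standalone sketch of the distinction (MyRectangle is hypothetical, not DirectXTK's actual header):

    #include <windows.h>

    struct MyRectangle
    {
        long x, y, width, height;

        operator RECT() const // without `const` here, conversion from a const object fails
        {
            RECT rct;
            rct.left   = x;
            rct.top    = y;
            rct.right  = x + width;
            rct.bottom = y + height;
            return rct;
        }
    };

    // const MyRectangle r{ 0, 0, 16, 16 };
    // RECT winRect = r; // compiles only because operator RECT() is const-qualified
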
  7. I'm trying to de-duplicate (weld) vertices using std::map so they can be used in a vertex buffer. I don't get the correct index buffer (myInds) or vertex buffer (myVerts). I can get the index array from FBX, but it differs from what I get from the following std::map code. Any help is much appreciated.

    struct FBXVTX
    {
        XMFLOAT3 Position;
        XMFLOAT2 TextureCoord;
        XMFLOAT3 Normal;
    };

    std::map<FBXVTX, int> myVertsMap;
    std::vector<FBXVTX> myVerts;
    std::vector<int> myInds;

    HRESULT FBXLoader::Open(HWND hWnd, char* Filename, bool UsePositionOnly)
    {
        HRESULT hr = S_OK;
        if (FBXM)
        {
            FBXIOS = FbxIOSettings::Create(FBXM, IOSROOT);
            FBXM->SetIOSettings(FBXIOS);
            FBXI = FbxImporter::Create(FBXM, "");
            if (!(FBXI->Initialize(Filename, -1, FBXIOS)))
            {
                hr = E_FAIL;
                MessageBox(hWnd, (wchar_t*)FBXI->GetStatus().GetErrorString(), TEXT("ALM"), MB_OK);
            }
            FBXS = FbxScene::Create(FBXM, "REALMS");
            if (!FBXS)
            {
                hr = E_FAIL;
                MessageBox(hWnd, TEXT("Failed to create the scene"), TEXT("ALM"), MB_OK);
            }
            if (!(FBXI->Import(FBXS)))
            {
                hr = E_FAIL;
                MessageBox(hWnd, TEXT("Failed to import fbx file content into the scene"), TEXT("ALM"), MB_OK);
            }
            FbxAxisSystem OurAxisSystem = FbxAxisSystem::DirectX;
            FbxAxisSystem SceneAxisSystem = FBXS->GetGlobalSettings().GetAxisSystem();
            if (SceneAxisSystem != OurAxisSystem)
            {
                FbxAxisSystem::DirectX.ConvertScene(FBXS);
            }
            FbxSystemUnit SceneSystemUnit = FBXS->GetGlobalSettings().GetSystemUnit();
            if (SceneSystemUnit.GetScaleFactor() != 1.0)
            {
                FbxSystemUnit::cm.ConvertScene(FBXS);
            }
            if (FBXI)
                FBXI->Destroy();

            FbxNode* MainNode = FBXS->GetRootNode();
            int NumKids = MainNode->GetChildCount();
            FbxNode* ChildNode = NULL;
            for (int i = 0; i < NumKids; i++)
            {
                ChildNode = MainNode->GetChild(i);
                FbxNodeAttribute* NodeAttribute = ChildNode->GetNodeAttribute();
                if (NodeAttribute->GetAttributeType() == FbxNodeAttribute::eMesh)
                {
                    FbxMesh* Mesh = ChildNode->GetMesh();
                    if (UsePositionOnly)
                    {
                        NumVertices = Mesh->GetControlPointsCount(); // number of vertices
                        MyV = new XMFLOAT3[NumVertices];
                        for (DWORD j = 0; j < NumVertices; j++)
                        {
                            FbxVector4 Vertex = Mesh->GetControlPointAt(j); // control point at index j
                            MyV[j] = XMFLOAT3((float)Vertex.mData[0], (float)Vertex.mData[1], (float)Vertex.mData[2]);
                        }
                        NumIndices = Mesh->GetPolygonVertexCount(); // number of indices
                        MyI = (DWORD*)Mesh->GetPolygonVertices();   // index array
                    }
                    else
                    {
                        FbxLayerElementArrayTemplate<FbxVector2>* uvVertices = NULL;
                        Mesh->GetTextureUV(&uvVertices);
                        int idx = 0;
                        for (int i = 0; i < Mesh->GetPolygonCount(); i++) // polygon (mostly triangle) count
                        {
                            for (int j = 0; j < Mesh->GetPolygonSize(i); j++) // number of vertices in this polygon
                            {
                                FBXVTX myVert;
                                int p_index = 3 * i + j;
                                int t_index = Mesh->GetTextureUVIndex(i, j);
                                FbxVector4 Vertex = Mesh->GetControlPointAt(p_index); // control point at the specified index
                                myVert.Position = XMFLOAT3((float)Vertex.mData[0], (float)Vertex.mData[1], (float)Vertex.mData[2]);
                                FbxVector4 Normal;
                                Mesh->GetPolygonVertexNormal(i, j, Normal);
                                myVert.Normal = XMFLOAT3((float)Normal.mData[0], (float)Normal.mData[1], (float)Normal.mData[2]);
                                FbxVector2 uv = uvVertices->GetAt(t_index);
                                myVert.TextureCoord = XMFLOAT2((float)uv.mData[0], (float)uv.mData[1]);
                                if (myVertsMap.find(myVert) != myVertsMap.end())
                                {
                                    myInds.push_back(myVertsMap[myVert]);
                                }
                                else
                                {
                                    myVertsMap.insert(std::pair<FBXVTX, int>(myVert, idx));
                                    myVerts.push_back(myVert);
                                    myInds.push_back(idx);
                                    idx++;
                                }
                            }
                        }
                    }
                }
            }
        }
        else
        {
            hr = E_FAIL;
            MessageBox(hWnd, TEXT("Failed to create the FBX Manager"), TEXT("ALM"), MB_OK);
        }
        return hr;
    }

    bool operator<(const FBXVTX& lValue, const FBXVTX& rValue)
    {
        if (lValue.Position.x != rValue.Position.x) return lValue.Position.x < rValue.Position.x;
        if (lValue.Position.y != rValue.Position.y) return lValue.Position.y < rValue.Position.y;
        if (lValue.Position.z != rValue.Position.z) return lValue.Position.z < rValue.Position.z;
        if (lValue.TextureCoord.x != rValue.TextureCoord.x) return lValue.TextureCoord.x < rValue.TextureCoord.x;
        if (lValue.TextureCoord.y != rValue.TextureCoord.y) return lValue.TextureCoord.y < rValue.TextureCoord.y;
        if (lValue.Normal.x != rValue.Normal.x) return lValue.Normal.x < rValue.Normal.x;
        if (lValue.Normal.y != rValue.Normal.y) return lValue.Normal.y < rValue.Normal.y;
        return lValue.Normal.z < rValue.Normal.z;
    }
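
    One detail that stands out (an observation, not a verified fix): p_index = 3 * i + j is a polygon-vertex index, while GetControlPointAt expects a control-point index; the FBX SDK maps between the two with GetPolygonVertex. The suspected correction:

    // Look up the control-point index for vertex j of polygon i instead of
    // assuming it equals the running polygon-vertex index.
    int cp_index = Mesh->GetPolygonVertex(i, j);           // control-point index
    FbxVector4 Vertex = Mesh->GetControlPointAt(cp_index); // its position

    With the wrong index, positions (and therefore the map keys) come from the wrong vertices, which would produce exactly the kind of mismatched index/vertex buffers described.
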
  8. Hi, I am working on a project where I'm trying to use Forward Plus rendering on point lights. I have a simple reflective scene with many point lights moving around it. I am using an effects file (.fx) to keep my shaders in one place. I am having a problem with my compute shader code: I cannot get it to calculate the tiles and the lighting properly. Is there anyone willing to help me set up my compute shader? Thank you in advance for any replies and interest!
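
    For anyone picking this up, the CPU side of a tiled (Forward+) light-culling pass usually amounts to the sketch below: one thread group per screen tile, lights and depth in, a per-tile light list out (a minimal sketch assuming a 16x16 tile size; all names are illustrative):

    // Dispatch one thread group per 16x16 pixel tile.
    const UINT TILE_SIZE = 16;
    UINT numTilesX = (screenWidth  + TILE_SIZE - 1) / TILE_SIZE;
    UINT numTilesY = (screenHeight + TILE_SIZE - 1) / TILE_SIZE;

    context->CSSetShader(lightCullingCS, nullptr, 0);
    context->CSSetShaderResources(0, 1, &pointLightBufferSRV); // light positions/radii
    context->CSSetShaderResources(1, 1, &depthBufferSRV);      // for per-tile min/max depth
    context->CSSetUnorderedAccessViews(0, 1, &perTileLightListUAV, nullptr);
    context->Dispatch(numTilesX, numTilesY, 1);
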
  9. Hi, right now building my engine in Visual Studio involves a shader-compiling step to build HLSL 5.0 shaders. I have a separate project which only includes shader sources, and the compiler is the Visual Studio integrated fxc compiler. I like this method because on any PC that has Visual Studio installed, I can just download the solution from GitHub and everything builds without additional dependencies, using the latest version of the compiler. I also like that the shaders are included in Solution Explorer and are easy to browse and double-click to open (opening files can be a real pain in Visual Studio run in admin mode), and that VS displays the build output/errors in the output window. But now I have the HLSL 6 compiler and want to build HLSL 6 shaders as well (and, as I understand it, I can also compile Vulkan-compatible shaders with it later). Any idea how to do this nicely? I want only a single project containing shader sources, like now, but built for different targets. I guess adding different build projects that reference the shader source project would be the way to go? But how would they determine the shader type of the sources (e.g. pixel shader, compute shader, etc.)? Right now the shader-building project stores the shader type for each shader; how can other build projects reference that? Anyone with experience in this?
  10. Hello! I have a problem with a reflection shader for D3D11:

    1>engine_render_d3d11_system.obj : error LNK2001: unresolved external symbol IID_ID3D11ShaderReflection

    I tried to add this, as MSDN tells me, but still no luck:

    #include <D3D11Shader.h>
    #include <D3Dcompiler.h>
    #include <D3DCompiler.inl>
    #pragma comment(lib, "D3DCompiler.lib")
    //#pragma comment(lib, "D3DCompiler_47.lib")

    I think a lot of people have done this already; what am I missing? I also found this article, http://mattfife.com/?p=470, which recommends putting the DirectX SDK headers and libs before the Windows SDK ones, but I am not using the DirectX SDK for this project at all; should I?
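
    For what it's worth, unresolved IID_* symbols are normally supplied by dxguid.lib rather than D3DCompiler.lib; a sketch of that variant (the D3DReflect call shows where the IID typically gets used; bytecode/bytecodeSize are illustrative):

    #include <d3d11shader.h>
    #include <d3dcompiler.h>

    #pragma comment(lib, "d3dcompiler.lib")
    #pragma comment(lib, "dxguid.lib") // defines IID_ID3D11ShaderReflection and friends

    ID3D11ShaderReflection* reflector = nullptr;
    HRESULT hr = D3DReflect(bytecode, bytecodeSize,
                            IID_ID3D11ShaderReflection, (void**)&reflector);
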
  11. Hi there, this is my first post in what looks to be a very interesting forum. I am using DirectXTK to put together my 2D game engine, but I would like to use the GPU depth buffer in order to avoid sorting back-to-front on the CPU, and I think I also want to use GPU instancing. Can I do that with SpriteBatch, or am I looking at implementing my own sprite rendering? Thanks in advance!
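
    Regarding the depth-buffer half of the question, a hedged sketch: SpriteBatch can be given a depth-enabled depth-stencil state plus a per-sprite layerDepth, which may already remove the need for CPU sorting (whether it covers the instancing plans is a separate matter; the state objects and names here are assumptions):

    #include <SpriteBatch.h>
    #include <CommonStates.h>
    #include <DirectXColors.h>
    using namespace DirectX;

    CommonStates states(device);
    SpriteBatch spriteBatch(context);

    // Depth test/write enabled; a depth buffer must be bound on the OM stage.
    spriteBatch.Begin(SpriteSortMode_FrontToBack, nullptr, nullptr,
                      states.DepthDefault());
    spriteBatch.Draw(textureSRV, XMFLOAT2(100.f, 100.f), nullptr, Colors::White,
                     0.f, XMFLOAT2(0.f, 0.f), 1.f, SpriteEffects_None,
                     0.5f); // layerDepth in [0,1]
    spriteBatch.End();
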
  12. I am trying to draw a screen-aligned quad with arbitrary sizes. Currently I just send 4 vertices to the vertex shader like so:

    pDevCon->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    pDevCon->Draw(4, 0);

    Then in the vertex shader I do this:

    float4 main(uint vI : SV_VERTEXID) : SV_POSITION
    {
        float2 texcoord = float2(vI & 1, vI >> 1);
        return float4((texcoord.x - 0.5f) * 2, -(texcoord.y - 0.5f) * 2, 0, 1);
    }

    That gets me a screen-sized quad. OK, so what's the correct way to get arbitrary sizes? I have messed around with various numbers, but I think I don't quite get something in these relationships. One thing I tried is:

    float4 quad = float4((texcoord.x - (xpos / screensizex)) * (width / screensizex),
                         -(texcoord.y - (ypos / screensizey)) * (height / screensizey), 0, 1);

    where xpos and ypos are the number of pixels from the upper right corner, and width and height are the desired size of the quad in pixels. This gets me somewhat close, but not right; it's a bit too small, so I'm missing something. Any ideas?
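
    One way to untangle the relationships (a sketch, assuming xpos/ypos measure the quad's top-left corner from the screen's top-left; adjust signs for other corners): convert the pixel rectangle to NDC corners on the CPU, then have the shader interpolate between the corners with the same texcoord trick.

    // Hypothetical helper: pixel-space rectangle -> NDC corners of a
    // screen-aligned quad. NDC x runs -1..1 left to right and y runs
    // -1..1 bottom to top, hence the flip on y.
    struct QuadNDC { float left, top, right, bottom; };

    QuadNDC PixelRectToNDC(float xpos, float ypos, float width, float height,
                           float screenW, float screenH)
    {
        QuadNDC q;
        q.left   =  (xpos / screenW) * 2.0f - 1.0f;
        q.right  = ((xpos + width) / screenW) * 2.0f - 1.0f;
        q.top    = 1.0f - (ypos / screenH) * 2.0f;
        q.bottom = 1.0f - ((ypos + height) / screenH) * 2.0f;
        return q;
    }

    // In the vertex shader the position then becomes
    //   x = lerp(q.left, q.right,  texcoord.x)
    //   y = lerp(q.top,  q.bottom, texcoord.y)
    // with q passed in via a constant buffer.
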
  13. Hi, I've been trying to implement a gaussian blur recently; it would seem the best way to achieve this is by running a blur on one axis, then another blur on the other axis. I think I have successfully implemented the blur part per axis, but now I have to blend both calls with a proper BlendState; at least, I think this is where my problem is. Here are my passes:

    RasterizerState DisableCulling
    {
        CullMode = BACK;
    };

    BlendState AdditiveBlend
    {
        BlendEnable[0] = TRUE;
        BlendEnable[1] = TRUE;
        SrcBlend[0] = SRC_COLOR;
        BlendOp[0] = ADD;
        BlendOp[1] = ADD;
        SrcBlend[1] = SRC_COLOR;
    };

    technique11 BlockTech
    {
        pass P0
        {
            SetVertexShader(CompileShader(vs_5_0, VS()));
            SetGeometryShader(NULL);
            SetPixelShader(CompileShader(ps_5_0, PS_BlurV()));
            SetRasterizerState(DisableCulling);
            SetBlendState(AdditiveBlend, float4(0.0, 0.0, 0.0, 0.0), 0xffffffff);
        }
        pass P1
        {
            SetVertexShader(CompileShader(vs_5_0, VS()));
            SetGeometryShader(NULL);
            SetPixelShader(CompileShader(ps_5_0, PS_BlurH()));
            SetRasterizerState(DisableCulling);
        }
    }

    D3DX11_TECHNIQUE_DESC techDesc;
    mBlockEffect->mTech->GetDesc(&techDesc);
    for (UINT p = 0; p < techDesc.Passes; ++p)
    {
        deviceContext->IASetVertexBuffers(0, 2, bufferPointers, stride, offset);
        deviceContext->IASetIndexBuffer(mIB, DXGI_FORMAT_R32_UINT, 0);
        mBlockEffect->mTech->GetPassByIndex(p)->Apply(0, deviceContext);
        deviceContext->DrawIndexedInstanced(36, mNumberOfActiveCubes, 0, 0, 0);
    }

    (Screenshots: no blur; PS_BlurV only; PS_BlurH only; P0 + P1.)

    As you can see, it does not work at all. I think the issue is in my BlendState, but I am not sure. I've seen many articles going with the render-to-texture approach, but I've also seen articles where both shaders were called in succession and it worked just fine; I'd like to go with that second approach. Unfortunately, the code was in OpenGL, where the syntax for running multiple passes is quite different (http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/). So I need some help doing the same in HLSL :-) Thanks!
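
    For reference, the render-to-texture variant those articles use tends to look like the sketch below: the vertical pass writes into an intermediate target, the horizontal pass samples it, and no blend state is needed at all (names are illustrative):

    // Pass 1: vertical blur into an intermediate render target.
    deviceContext->OMSetRenderTargets(1, &intermediateRTV, nullptr);
    deviceContext->PSSetShaderResources(0, 1, &sceneSRV);
    // ... apply PS_BlurV and draw ...

    // Unbind first: a texture cannot be an RTV and an SRV at the same time.
    ID3D11RenderTargetView* nullRTV = nullptr;
    deviceContext->OMSetRenderTargets(1, &nullRTV, nullptr);

    // Pass 2: horizontal blur of the intermediate texture into the back buffer.
    deviceContext->OMSetRenderTargets(1, &backBufferRTV, nullptr);
    deviceContext->PSSetShaderResources(0, 1, &intermediateSRV);
    // ... apply PS_BlurH and draw; no blending between the passes ...

    Blending two passes into one target adds a horizontally-blurred and a vertically-blurred image together, which is not the same as blurring in sequence, so the intermediate target is hard to avoid.
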
  14. Back around 2006 I spent a good year or two reading books and articles on this site, and gobbling up everything game-dev related I could. I started an engine in DX10 and got through the basics. I eventually gave up because I couldn't do the harder things. Now my C++ is 12 years stronger, my mind is trained better, and I am thinking of giving it another go. A lot has changed: there is no more standalone SDK, there is evidently a DirectX Tool Kit, XNA died, all the sweet sites I used to go to are 404, and Google searches all point to Unity and Unreal. I plainly don't like Unity or Unreal, but I might learn them for reference. So, what is the current path? Does everyone pretty much use the DirectX Tool Kit? Should I start there? I also read that DX12 is just expert-level DX11, so I guess I am going DX11. Is there a current and up-to-date list of learning resources anywhere? I am tired of 404s.
  15. Hi, I've been trying to implement a basic gaussian blur using the gaussian formula, and here is what it looks like so far:

    float gaussian(float x, float sigma)
    {
        float pi = 3.14159;
        float sigma_square = sigma * sigma;
        float a = 1 / sqrt(2 * pi * sigma_square);
        float b = exp(-((x * x) / (2 * sigma_square)));
        return a * b;
    }

    My problem is that I don't quite know what sigma should be. It seems that if I provide a random value for sigma, the weights in my kernel won't add up to 1. So I ended up calling my gaussian function with sigma == 1, which gives me weights adding up to 1, but also a very subtle blur. Here is what my kernel looks like with sigma == 1:

    [0] 0.0033238872995488885
    [1] 0.023804742479357766
    [2] 0.09713820127276819
    [3] 0.22585307043511713
    [4] 0.29920669915475656
    [5] 0.22585307043511713
    [6] 0.09713820127276819
    [7] 0.023804742479357766
    [8] 0.0033238872995488885

    I would have liked it to be more "rounded" at the top, or to have a better spread, instead of wasting [0], [1], [2] on values below 0.1. Based on my experiments, the key to this is to provide a different sigma, but if I do, my kernel values no longer add up to 1, which results in a darker blur. I've found this post ... which helped me a bit, but I am really confused by the part where he divides sigma by 3. Can someone please explain how sigma works? How is it related to my kernel size? How can I balance my weights with different sigmas, etc.? Thanks :-)
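
    A common rule of thumb (presumably what the divide-by-3 in that post refers to): choose the kernel radius first, set sigma = radius / 3 so the kernel spans about three standard deviations, then renormalize the sampled weights so they sum to exactly 1 regardless of sigma; the normalization absorbs the truncated tails and keeps the blur from darkening. A sketch:

    #include <cmath>
    #include <vector>

    // Build a normalized 1D Gaussian kernel of (2 * radius + 1) taps.
    std::vector<float> MakeGaussianKernel(int radius)
    {
        const float sigma = radius / 3.0f; // ~99.7% of the curve inside the kernel
        const float twoSigmaSq = 2.0f * sigma * sigma;

        std::vector<float> weights(2 * radius + 1);
        float sum = 0.0f;
        for (int x = -radius; x <= radius; ++x)
        {
            // The 1/sqrt(2*pi*sigma^2) factor cancels in the normalization below.
            float w = std::exp(-(x * x) / twoSigmaSq);
            weights[x + radius] = w;
            sum += w;
        }
        for (float& w : weights)
            w /= sum; // weights now sum to exactly 1 for any sigma
        return weights;
    }

    With a fixed 9-tap kernel, sigma = 1 concentrates most of the weight in the middle taps; a larger sigma combined with renormalization gives the broader, "rounder" shape described above.
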
  16. Is it possible to asynchronously create a Texture2D using DirectX 11? I have a native Unity plugin that downloads 8K textures from a server and displays them to the user for a VR application. This works well, but there's a large frame drop when calling CreateTexture2D. To remedy this, I've tried creating a separate thread that creates the texture, but the frame drop is still present. Is there anything else I could do to prevent that frame drop from occurring?
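
    One pattern commonly suggested for this (a sketch, not a guaranteed fix): create the texture once without initial data, then spread the upload over several frames with UpdateSubresource on sub-regions, so no single frame pays for the whole 8K copy. Names and sizes below are assumptions:

    // Hypothetical incremental upload: call once per frame until done.
    // Assumes an 8192x8192 R8G8B8A8 texture created without initial data.
    void UploadNextChunk(ID3D11DeviceContext* context, ID3D11Texture2D* tex,
                         const unsigned char* pixels, UINT& firstRow)
    {
        const UINT width = 8192, height = 8192, bytesPerPixel = 4;
        const UINT rowsPerFrame = 512; // tune to the frame budget
        if (firstRow >= height) return;

        UINT rows = (firstRow + rowsPerFrame <= height) ? rowsPerFrame
                                                        : height - firstRow;
        D3D11_BOX box = { 0, firstRow, 0, width, firstRow + rows, 1 };

        const UINT rowPitch = width * bytesPerPixel;
        context->UpdateSubresource(tex, 0, &box,
                                   pixels + firstRow * rowPitch, rowPitch, 0);
        firstRow += rows;
    }

    Note that ID3D11Device creation calls are free-threaded, but immediate-context calls (like the upload above) must stay on one thread unless deferred contexts are used.
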
  17. I'm trying to draw a circle using math:

    class coordenates
    {
    public:
        coordenates(float x = 0, float y = 0)
        {
            X = x;
            Y = y;
        }
        float X;
        float Y;
    };

    coordenates RotationPoints(coordenates ActualPosition, double angle)
    {
        coordenates NewPosition;
        NewPosition.X = ActualPosition.X * sin(angle) - ActualPosition.Y * sin(angle);
        NewPosition.Y = ActualPosition.Y * cos(angle) + ActualPosition.X * cos(angle);
        return NewPosition;
    }

    But now I know this has a problem, because I don't use the origin. Even so, I'm having trouble rotating the point. These coordinates are floating-point values between -1 and 1. Can anyone advise me further so I can create the circle?
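
    For reference, the standard 2D rotation mixes sin and cos in each component (the snippet above uses sin twice in X and cos twice in Y), and rotating about an origin means subtracting it first and adding it back afterwards. A sketch using the same types:

    #include <cmath>

    // x' = ox + (x - ox) * cos(a) - (y - oy) * sin(a)
    // y' = oy + (x - ox) * sin(a) + (y - oy) * cos(a)
    coordenates RotateAround(coordenates p, coordenates origin, double angle)
    {
        float s = (float)sin(angle);
        float c = (float)cos(angle);
        float dx = p.X - origin.X;
        float dy = p.Y - origin.Y;
        return coordenates(origin.X + dx * c - dy * s,
                           origin.Y + dx * s + dy * c);
    }

    // A circle is then N rotations of a point at distance `radius`:
    // for (int i = 0; i < N; ++i)
    //     pts[i] = RotateAround(coordenates(cx + radius, cy),
    //                           coordenates(cx, cy), i * 2.0 * 3.14159265 / N);
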
  18. I have some code (not written by me) that creates a window to draw into using these:

    • CreateDXGIFactory1 to create an IDXGIFactory1
    • dxgi_factory->CreateSwapChain to create an IDXGISwapChain
    • D3D11CreateDevice to create an ID3D11Device and an ID3D11DeviceContext
    • Other code (that I don't quite understand) that creates various IDXGIAdapter1 and IDXGIOutput instances
    • Still other code (that I don't quite understand) that creates some ID3D11RenderTargetView and ID3D11DepthStencilView instances and does something with those as well (possibly loading them into the graphics context somewhere, although I can't quite see where)

    What I want to do is create a second window and draw to it as well as to the main window (all drawing would happen on the one thread, with all the drawing to the sub-window happening in one block, outside of any rendering to the main window). Do I need to create a second IDXGISwapChain for my new window? Do I need a second ID3D11Device, or different IDXGIAdapter1 and IDXGIOutput interfaces? How do I tell Direct3D which window I want to render to? Are there particular D3D11 functions I should be looking for? I am good with Direct3D9, but this is the first time I am working with Direct3D11 (and the guy who wrote the code has left our team, so I can't ask him for help).
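
    For what it's worth, the usual pattern is one device and one swap chain per window; a hedged sketch (hwnd2 and all names are assumptions):

    // One ID3D11Device can drive several windows; each window gets its own
    // swap chain and render target view.
    DXGI_SWAP_CHAIN_DESC scd = {};
    scd.BufferCount = 1;
    scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    scd.OutputWindow = hwnd2; // the second window
    scd.SampleDesc.Count = 1;
    scd.Windowed = TRUE;

    IDXGISwapChain* swapChain2 = nullptr;
    dxgi_factory->CreateSwapChain(device, &scd, &swapChain2);

    ID3D11Texture2D* backBuffer2 = nullptr;
    ID3D11RenderTargetView* rtv2 = nullptr;
    swapChain2->GetBuffer(0, IID_PPV_ARGS(&backBuffer2));
    device->CreateRenderTargetView(backBuffer2, nullptr, &rtv2);

    // "Which window am I rendering to" is decided by which RTV is bound:
    context->OMSetRenderTargets(1, &rtv2, nullptr);
    // ... draw the sub-window's content ...
    swapChain2->Present(1, 0);

    No second device, adapter, or output is needed for this; those only come into play when multiple GPUs or monitors are handled explicitly.
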
  19. I managed to convert the OpenGL code at http://john-chapman-graphics.blogspot.co.uk/2013/02/pseudo-lens-flare.html to HLSL, but unfortunately I don't know how to add it to my atmospheric scattering code (sky - first image). Can anyone help me? I tried to bind the sky texture as an SRV and implement the lens flare code in the pixel shader, but I don't know how to separate them (second image).
  20. I found an example here: http://xboxforums.create.msdn.com/forums/t/66208.aspx about sharing surfaces/textures created by DX11 with an ID2D1DCRenderTarget; however, that example is based on a swap chain bound to a window (HWND). What I need is to draw a shared texture into GDI, as that is the only plugin interface in an old piece of software I have. Any ideas? Here is what I do:

    SharedSurface::SharedSurface()
    {
        // Initialize the ID2D1Factory object.
        D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &pD2DFactory);
        //D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, &pD2DFactory);

        // Initialize the ID2D1DCRenderTarget.
        D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
            D2D1_RENDER_TARGET_TYPE_HARDWARE, // D2D1_RENDER_TARGET_TYPE_DEFAULT
            D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
            0, 0,
            D2D1_RENDER_TARGET_USAGE_GDI_COMPATIBLE,
            D2D1_FEATURE_LEVEL_DEFAULT);
        HRESULT hr = pD2DFactory->CreateDCRenderTarget(&props, &pRT);

        DWORD createDeviceFlags = 0;
        createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
        ID3D11DeviceContext* context;
        D3D_FEATURE_LEVEL fl;
        DXGI_SWAP_CHAIN_DESC sd;
        ZeroMemory(&sd, sizeof(sd));
        sd.BufferCount = 1;
        sd.BufferDesc.Width = width;
        sd.BufferDesc.Height = height;
        sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        sd.BufferDesc.RefreshRate.Numerator = 60;
        sd.BufferDesc.RefreshRate.Denominator = 1;
        sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT | DXGI_USAGE_SHARED;
        sd.OutputWindow = 0; // g_hWnd
        sd.SampleDesc.Count = 1;
        sd.SampleDesc.Quality = 0;
        sd.Windowed = FALSE; // TRUE
        hr = D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                           createDeviceFlags, nullptr, 0, D3D11_SDK_VERSION,
                                           &sd, &pSwapChain, &mDevice, &fl, &context);
        hr = m_pSwapChain->GetBuffer(0, IID_PPV_ARGS(&pBackBuffer));
    }

    bool SharedSurface::CreateTexture(ID3D11Device* device, UINT width, UINT height)
    {
        HRESULT hr;
        D3D11_TEXTURE2D_DESC desc;
        ZeroMemory(&desc, sizeof(desc));
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX; // or D3D11_RESOURCE_MISC_SHARED
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
        desc.CPUAccessFlags = 0; // or D3D11_CPU_ACCESS_READ

        if (device != nullptr)
            mDevice = device;
        hr = mDevice->CreateTexture2D(&desc, NULL, &pTexture);

        IDXGIResource* pDXGIResource = NULL;
        hr = pTexture->QueryInterface(__uuidof(IDXGIResource), (void**)&pDXGIResource);
        if SUCCEEDED(hr)
        {
            hr = pDXGIResource->GetSharedHandle(&sharedHandle);
            pDXGIResource->Release();
            if SUCCEEDED(hr)
            {
                hr = pTexture->QueryInterface(__uuidof(IDXGIKeyedMutex), (LPVOID*)&pMutex);
            }
        }
        hr = pTexture->QueryInterface(__uuidof(IDXGISurface), (void**)&pSurface);

        FLOAT dpiX;
        FLOAT dpiY;
        pD2DFactory->GetDesktopDpi(&dpiX, &dpiY);
        D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
            D2D1_RENDER_TARGET_TYPE_HARDWARE, /*D2D1_RENDER_TARGET_TYPE_DEFAULT*/
            D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
            dpiX, dpiY);
        hr = pD2DFactory->CreateDxgiSurfaceRenderTarget(pBackBuffer, &props, &pBackBufferRT);

        DXGI_SURFACE_DESC sdesc;
        D2D1_BITMAP_PROPERTIES bp;
        ZeroMemory(&bp, sizeof(bp));
        pSurface->GetDesc(&sdesc);
        bp.pixelFormat = D2D1::PixelFormat(sdesc.Format, D2D1_ALPHA_MODE_PREMULTIPLIED);
        hr = pBackBufferRT->CreateSharedBitmap(__uuidof(ID3D11Texture2D), pSurface, &bp, &pBitmap);
        return SUCCEEDED(hr);
    }

    void SharedSurface::Draw()
    {
        pBackBufferRT->BeginDraw();
        pBackBufferRT->DrawBitmap(pBitmap);
        pBackBufferRT->EndDraw();
        pBackBufferRT->Present();
    }

    void SharedSurface::BindDC(HDC hdc, int width, int height)
    {
        RECT rct;
        rct.top = 0;
        rct.left = 0;
        rct.right = width;
        rct.bottom = height;
        pRT->BindDC(hdc, &rct);
    }

    // HOW TO EXCHANGE between pBackBufferRT and pRT ?
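
    One avenue that may be worth exploring (hedged; I have not verified it against this exact setup): a Direct2D render target created with D2D1_RENDER_TARGET_USAGE_GDI_COMPATIBLE can expose ID2D1GdiInteropRenderTarget, which hands out an HDC between BeginDraw and EndDraw that GDI can blit from, skipping the DC render target entirely:

    // Sketch: pull a GDI DC out of a GDI-compatible D2D render target and
    // BitBlt it into the plugin's HDC (targetHdc, width, height assumed).
    ID2D1GdiInteropRenderTarget* gdiRT = nullptr;
    HRESULT hr = pBackBufferRT->QueryInterface(__uuidof(ID2D1GdiInteropRenderTarget),
                                               (void**)&gdiRT);
    if (SUCCEEDED(hr))
    {
        pBackBufferRT->BeginDraw();
        pBackBufferRT->DrawBitmap(pBitmap);

        HDC dc = nullptr;
        gdiRT->GetDC(D2D1_DC_INITIALIZE_MODE_COPY, &dc); // DC with the current contents
        BitBlt(targetHdc, 0, 0, width, height, dc, 0, 0, SRCCOPY);
        gdiRT->ReleaseDC(nullptr);

        pBackBufferRT->EndDraw();
        gdiRT->Release();
    }

    This requires the render target to be created with the GDI-compatible usage flag; in the code above, only the DC render target's properties currently request it.
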
  21. While running my DirectX 11 engine in windowed mode, I can use the Snipping Tool to get a screen capture of it. However, if I run it in full screen and hit the Print Screen button, then try to paste into Paint or Photoshop, there is nothing; Print Screen isn't capturing the image for some reason, and I can't use the Snipping Tool to grab the screen while it's in full screen. Is there some setting that's not allowing Windows to detect the game while it's in full screen? It's very important to get these captures for my portfolio, and if I capture in windowed mode it looks really bad and low-res. Here's a link to the D3D class in my engine for reference: https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/D3DClass.cpp Any help would be much appreciated, thanks!
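
    As a fallback that works regardless of display mode, the back buffer can be copied into a CPU-readable staging texture and written to disk by the engine itself. A sketch (error handling omitted; assumes a non-multisampled swap chain):

    // Grab the current back buffer into a staging texture and map it on the CPU.
    ID3D11Texture2D* backBuffer = nullptr;
    swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));

    D3D11_TEXTURE2D_DESC desc;
    backBuffer->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags = 0;

    ID3D11Texture2D* staging = nullptr;
    device->CreateTexture2D(&desc, nullptr, &staging);
    context->CopyResource(staging, backBuffer);

    D3D11_MAPPED_SUBRESOURCE mapped;
    context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
    // mapped.pData / mapped.RowPitch now hold the pixels; encode them to
    // PNG/BMP here (DirectX Tool Kit's ScreenGrab does this in one call).
    context->Unmap(staging, 0);
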
  22. Hello, I have a problem with GPU skinning. From a COLLADA file I load my object with its vertices, the weights and bone indices for them, and the bones with their matrices. For every vertex I choose 4 weights and 4 bone indices. For every non-skinned vertex I use the weights 1 0 0 0 and bone indices 0 0 0 0 (index 0 in the bone matrices array is an identity matrix), and I check that all weight values sum to 1, or normalize them so they do. So far so good; my shader looks like this:

    bool HasBones;
    matrix BoneMatrices[256];

    struct Vertex
    {
        float3 Position : POSITION;
        float3 Normal : NORMAL;
        float2 UV : TEXCOORD0;
        float3 Tangent : TANGENT;
        float4 Weights : WEIGHTS;
        int4 BoneIndices : BONEINDICES;
    };

    float4 ApplyBoneTransform(Vertex input, float4 value)
    {
        if (HasBones)
        {
            float4x4 skinTransform = (float4x4)0;
            skinTransform += BoneMatrices[input.BoneIndices.x] * input.Weights.x;
            skinTransform += BoneMatrices[input.BoneIndices.y] * input.Weights.y;
            skinTransform += BoneMatrices[input.BoneIndices.z] * input.Weights.z;
            skinTransform += BoneMatrices[input.BoneIndices.w] * input.Weights.w;
            float4 position = mul(value, skinTransform);
            return position;
        }
        else
            return value;
    }

    Pixel vertexShader(Vertex input)
    {
        Pixel result = (Pixel)0;
        float4 posWorld = mul(ApplyBoneTransform(input, float4(input.Position.xyz, 1.0f)), World);
        result.Position = mul(mul(posWorld, View), Projection);
        result.Normal = normalize(mul(ApplyBoneTransform(input, float4(input.Normal.xyz, 1.0f)), WorldIT));
        result.UV = input.UV;
        result.View = ViewInverse[3] - mul(float4(input.Position.xyz, 1.0f), World);
        result.Tangent = normalize(mul(ApplyBoneTransform(input, float4(input.Tangent.xyz, 1.0f)), WorldIT).xyz);
        result.Binormal = normalize(cross(input.Normal, input.Tangent));
        return result;
    }

    If I set HasBones to true, my object no longer draws correctly; I only see two dark triangles. I believe it depends on the bone matrices I load from the controller_lib of the COLLADA file and send to BoneMatrices in the shader, plus the identity matrix at index 0. Does anyone have an idea what I am doing wrong, and could you explain it to me? I have attached the COLLADA file and images of the object drawn with HasBones = false and HasBones = true. Greets, Benajmin

    Model.dae
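
    One frequent culprit with symptoms like this (an assumption, not a diagnosis of this particular file): HLSL packs constant-buffer matrices column-major by default, while CPU-side math libraries are often row-major, so skinning matrices need a transpose on upload (or a row_major qualifier in the shader). A sketch of the upload side:

    #include <DirectXMath.h>
    using namespace DirectX;

    // Hypothetical upload: transpose each bone matrix before writing it into
    // the mapped constant buffer, matching HLSL's default column-major packing.
    void WriteBoneMatrices(XMFLOAT4X4* cbDest, const XMFLOAT4X4* bones, size_t count)
    {
        for (size_t i = 0; i < count; ++i)
        {
            XMMATRIX m = XMLoadFloat4x4(&bones[i]);
            XMStoreFloat4x4(&cbDest[i], XMMatrixTranspose(m));
        }
    }

    The other usual suspect is the multiplication order when combining the inverse-bind-pose and node transforms from the COLLADA controller; flipping the mul order there is worth a try.
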
  23. Would a 1024x16384 texture generally perform as well as a 4096x4096 texture? Memory-wise they should be the same, but perhaps having such a large dimension as 16384 would be worse anyway? I'm considering making a texture atlas like a long strip, so that it would be tileable in one direction.
  24. I am having trouble with the graphics device disconnecting with the reason DXGI_ERROR_DRIVER_INTERNAL_ERROR on some of our devices after our game terminal has been running for a few hours; specifically the Intel HD 530, though a few other Intel-based GPUs will crash as well. The odd thing is that the same code will run indefinitely on the same computers if we put in a discrete card. It also runs without any problem on other Intel chipsets, even ones that are quite a bit older. We have tried the basics like Windows updates and getting the latest drivers from Intel. We have also verified that this is not a computer-specific problem, as it persists across different computers with the same build. I have tried adding ID3D11DeviceContext::Flush calls around resource creation, as suggested by https://software.intel.com/en-us/forums/graphics-driver-bug-reporting/topic/610376, with no apparent help. I have also verified that no graphics handles are being held for a very long time, and our GPU memory usage never gets much above 400 MB, which should be well within the ability of an integrated card to handle. We actually wrote a watchdog application to monitor that, and usually the device is removed while memory usage is lower than normal. I'm having a hard time finding resources that would help us find the root cause, as DXGI_ERROR_DRIVER_INTERNAL_ERROR is not a very helpful error message. We are using the DirectX 11 API and run on a variety of Windows-based computers, including both all-in-ones and desktops. I would appreciate any help or ideas, as we haven't been able to make much forward progress even after a few weeks of intensive debugging and engine changes.
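
    In case it helps with the digging (a sketch; it requires creating the device with the debug layer): the D3D11 info queue can be set to break in the debugger at the first error or corruption, which sometimes surfaces the call that upsets the driver, and the removal reason can be queried directly after the fact.

    // Create the device with D3D11_CREATE_DEVICE_DEBUG, then:
    ID3D11InfoQueue* infoQueue = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11InfoQueue),
                                         (void**)&infoQueue)))
    {
        infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_CORRUPTION, TRUE);
        infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
        infoQueue->Release();
    }

    // After a removal, the reason is queryable (here it is already known):
    HRESULT reason = device->GetDeviceRemovedReason(); // DXGI_ERROR_DRIVER_INTERNAL_ERROR
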
  25. Hi, so I imported some new models into my engine, and some of them show up with ugly seams or dark patches, while others look perfect (see pictures). I'm using the same shader for all of them, and all of these models have custom UV-mapped textures created for them, which should wrap fully around them instead of using tiled textures. I have no idea why the custom UV-mapped textures map correctly on some but not others. Possible causes:

    1. Am I using the wrong SamplerState to sample the textures? (I'm using SampleTypeClamp.)
    2. The original models had quads and were UV mapped by an artist in that state; I then reimported them into 3DS Max and re-exported them as all triangles (my engine's object loader only accepts triangles).
    3. Could the original model UVs just be wrong?

    Please let me know if somebody can help identify this problem; I'm completely baffled. Thanks. For reference, here's a link to the shader used to draw the problematic models, with the code below: https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/Light_SoftShadows_ps.hlsl

    /////////////
    // DEFINES //
    /////////////
    #define NUM_LIGHTS 3

    /////////////
    // GLOBALS //
    /////////////
    // Texture resource that will be used for rendering the texture on the model.
    Texture2D shaderTextures[7]; // NOTE - we only use one render target for drawing all the shadows here!
    // Allows modifying how pixels are written to the polygon face, for example choosing which to draw.
    SamplerState SampleType;

    ///////////////////
    // SAMPLE STATES //
    ///////////////////
    SamplerState SampleTypeClamp : register(s0);
    SamplerState SampleTypeWrap : register(s1);

    ///////////////////
    // TYPEDEFS //
    ///////////////////
    // This structure is used to describe the lights properties.
    struct LightTemplate_PS
    {
        int type;
        float3 padding;
        float4 diffuseColor;
        float3 lightDirection; //(lookat?) //@TODO pass from VS BUFFER?
        float specularPower;
        float4 specularColor;
    };

    //////////////////////
    // CONSTANT BUFFERS //
    //////////////////////
    cbuffer SceneLightBuffer : register(b0)
    {
        float4 cb_ambientColor;
        LightTemplate_PS cb_lights[NUM_LIGHTS];
    }

    // Value set here will be between 0 and 1.
    cbuffer TranslationBuffer : register(b1)
    {
        float textureTranslation; //@NOTE = hlsl automatically pads floats for you
    };

    // For alpha blending textures.
    cbuffer TransparentBuffer : register(b2)
    {
        float blendAmount;
    };

    struct PixelInputType
    {
        float4 vertex_ModelSpace : SV_POSITION;
        float2 tex : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        float3 binormal : BINORMAL;
        float3 viewDirection : TEXCOORD1;
        float3 lightPos_LS[NUM_LIGHTS] : TEXCOORD2;
        float4 vertex_ScrnSpace : TEXCOORD5;
    };

    float4 main(PixelInputType input) : SV_TARGET
    {
        bool bInsideSpotlight = true;
        float2 projectTexCoord;
        float depthValue;
        float lightDepthValue;
        float4 textureColor;
        float gamma = 7.f;

        /////////////////// NORMAL MAPPING //////////////////
        float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex);

        // Sample the shadow value from the shadow texture using the sampler at the projected texture coordinate location.
        projectTexCoord.x = input.vertex_ScrnSpace.x / input.vertex_ScrnSpace.w / 2.0f + 0.5f;
        projectTexCoord.y = -input.vertex_ScrnSpace.y / input.vertex_ScrnSpace.w / 2.0f + 0.5f;
        float shadowValue = shaderTextures[6].Sample(SampleTypeClamp, projectTexCoord).r;

        // Expand the range of the normal value from (0, +1) to (-1, +1).
        bumpMap = (bumpMap * 2.0f) - 1.0f;

        // Change the COORDINATE BASIS of the normal into the space represented by basis vectors tangent, binormal, and normal!
        float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal));

        //////////////// AMBIENT BASE COLOR ////////////////
        // Set the default output color to the ambient light value for all pixels.
        float4 lightColor = cb_ambientColor * saturate(dot(bumpNormal, input.normal) + .2);

        // Calculate the amount of light on this pixel.
        for (int i = 0; i < NUM_LIGHTS; ++i)
        {
            float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i])));
            if (lightIntensity > 0.0f)
            {
                lightColor += (cb_lights[i].diffuseColor * lightIntensity) * 0.3;
            }
        }

        // Saturate the final light color.
        lightColor = saturate(lightColor);

        // TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location.
        input.tex.x += textureTranslation;

        // BLENDING
        float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex);
        float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex);
        float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex);
        //textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2));
        textureColor = color1;

        // Combine the light and texture color.
        float4 finalColor = lightColor * textureColor * shadowValue * gamma;

        //if(lightColor.x == 0)
        //{
        //    finalColor = cb_ambientColor * saturate(dot(bumpNormal, input.normal) + .2) * textureColor;
        //}

        return finalColor;
    }