Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.



More search options

  • Search By Tags

    Type tags separated by commas.
  • Search By Author

Content Type


Categories

  • Audio
    • Music and Sound FX
  • Business
    • Business and Law
    • Career Development
    • Production and Management
  • Game Design
    • Game Design and Theory
    • Writing for Games
    • UX for Games
  • Industry
    • Interviews
    • Event Coverage
  • Programming
    • Artificial Intelligence
    • General and Gameplay Programming
    • Graphics and GPU Programming
    • Engines and Middleware
    • Math and Physics
    • Networking and Multiplayer
  • Visual Arts
  • Archive

Categories

  • News

Categories

  • Audio
  • Visual Arts
  • Programming
  • Writing

Categories

  • GameDev Unboxed

Categories

  • Game Dev Loadout

Categories

  • Game Developers Conference
    • GDC 2017

Forums

  • Audio
    • Music and Sound FX
  • Business
    • Games Career Development
    • Production and Management
    • Games Business and Law
  • Game Design
    • Game Design and Theory
    • Writing for Games
  • Programming
    • Artificial Intelligence
    • Engines and Middleware
    • General and Gameplay Programming
    • Graphics and GPU Programming
    • Math and Physics
    • Networking and Multiplayer
  • Visual Arts
    • 2D and 3D Art
    • Critique and Feedback
  • Topical
    • Virtual and Augmented Reality
    • News
  • Community
    • GameDev Challenges
    • For Beginners
    • GDNet+ Member Forum
    • GDNet Lounge
    • GDNet Comments, Suggestions, and Ideas
    • Coding Horrors
    • Your Announcements
    • Hobby Project Classifieds
    • Indie Showcase
    • Article Writing
  • Affiliates
    • NeHe Productions
    • AngelCode
  • Workshops
    • C# Workshop
    • CPP Workshop
    • Freehand Drawing Workshop
    • Hands-On Interactive Game Development
    • SICP Workshop
    • XNA 4.0 Workshop
  • Archive
    • Topical
    • Affiliates
    • Contests
    • Technical

Calendars

  • Community Calendar
  • Games Industry Events
  • Game Jams

Blogs

There are no results to display.

There are no results to display.

Marker Groups

  • Members

Developers

Developers


Group


About Me


Website


Industry Role


Twitter


Github


Twitch


Steam

Found 1418 results

  1. Hi, new here. I need some help. My fiancée and I like to play a mobile game online that runs in real time. She and I are always working, but when we have free time we like to play this game. We don't always have time throughout the day to queue buildings, troops, upgrades, etc. I was told to look into DLL injection and OpenGL/DirectX hooking. Is this true? Is this what I need to learn? How do I read the Android files, modify the files, or get the in-game tags/variables for the game I want? Any assistance on this would be most appreciated. I've looked everywhere and it seems no one knows or is too lazy to help me out. It would be nice to have assistance for once. I don't know what I need to learn, so links to the topics I need to learn, posted in the comment section, would be SO helpful. Anything to just get me started. Thanks, Dejay Hextrix
  2. In some situations, my game starts to "lag" on older computers. I wanted to search for bottlenecks and optimize my game by looking for flaws in the shaders and in the layer between CPU and GPU. My first step was to measure the time my render function needs to complete its tasks. Every second I wrote the accumulated times of each task into my console window. Each second it takes around:
     • 170 ms to call the render functions for all models (including setting shader resources, updating constant buffers, drawing all indexed and non-indexed vertices, etc.)
     • 40 ms to render the UI
     • 790 ms to call SwapChain.Present
     • <1 ms to do the rest (updating structures, etc.)
     In my swap chain description I set a frame rate of 60 Hz, if it's supported by the computer. It made sense to me that the Present function waits some time until it starts the next frame. However, I wanted to check whether this might be a problem for me. After a web search I found articles like this one. My drivers are up to date, so that's no issue. I installed Microsoft's PIX, but I was unable to use it: I could configure my game for x64, but PIX was not able to process DirectX 11. After getting only error messages, I installed NVIDIA's Nsight. After adjusting my game and installing all components, I couldn't get a proper result because my game freezes after a few frames, and I haven't figured out why. There is no exception or error message, and other debug mechanisms like log messages and breakpoints tell me the game freezes at the end of the render function after a few frames. So I looked for another profiling tool and found Jeremy's GPUProfiler; however, the information returned by this tool is too basic to get in-depth knowledge about my performance issues. Can anyone recommend a GPU profiler or any other tool that might help me find bottlenecks in my game and/or that is able to indicate performance problems in my shaders? My custom graphics engine can handle subjects like multi-texturing, instancing, soft shadowing, animation, etc. However, I am pretty sure there are things I can optimize! I am using SharpDX to develop a game (engine) based on DirectX 11 with .NET Framework 4.5. My graphics card is from NVIDIA and my processor is made by Intel.
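     One tool-independent way to see where GPU time really goes (and to separate it from CPU-side waiting in Present, which on a 60 Hz swap chain often just reflects VSync or the GPU running behind) is D3D11 timestamp queries. Below is a minimal sketch in plain C++ against the D3D11 API; the SharpDX methods have the same names, and g_device/g_context and the bracketed draw calls are placeholders.

        // Minimal GPU timing sketch with D3D11 timestamp queries.
        D3D11_QUERY_DESC qd = {};
        qd.Query = D3D11_QUERY_TIMESTAMP_DISJOINT;
        ID3D11Query *disjoint = nullptr, *tsBegin = nullptr, *tsEnd = nullptr;
        g_device->CreateQuery(&qd, &disjoint);
        qd.Query = D3D11_QUERY_TIMESTAMP;
        g_device->CreateQuery(&qd, &tsBegin);
        g_device->CreateQuery(&qd, &tsEnd);

        g_context->Begin(disjoint);
        g_context->End(tsBegin);            // timestamp before the work
        // ... issue the draw calls you want to measure ...
        g_context->End(tsEnd);              // timestamp after the work
        g_context->End(disjoint);

        // Read the results back; spin-waiting here for brevity - in practice
        // read them a frame or two later to avoid stalling the pipeline.
        D3D11_QUERY_DATA_TIMESTAMP_DISJOINT dj;
        UINT64 t0 = 0, t1 = 0;
        while (g_context->GetData(disjoint, &dj, sizeof(dj), 0) != S_OK) {}
        g_context->GetData(tsBegin, &t0, sizeof(t0), 0);
        g_context->GetData(tsEnd, &t1, sizeof(t1), 0);
        if (!dj.Disjoint)
        {
            double ms = double(t1 - t0) / double(dj.Frequency) * 1000.0;
            // ms is GPU time spent on the bracketed work, not CPU time spent in the API calls.
        }

     Large CPU-side times measured around Present usually just mean the CPU is waiting for VSync or for the GPU to catch up, which is why GPU-side timing like this tells a different and more useful story.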
  3. I was wondering if someone could explain this to me. I'm working on using the Windows WIC APIs to load in textures for DirectX 11. I see that sometimes the WIC pixel formats do not directly match a DXGI format used in DirectX; in cases like this the original WIC pixel format is converted into a WIC pixel format that does directly match a DXGI format. Doing this conversion is easy, but I do not understand the reasoning behind two of the WIC pixel format conversions in Microsoft's guide. I was wondering if someone could tell me why Microsoft's guide on this topic says that GUID_WICPixelFormat40bppCMYKAlpha should be converted into GUID_WICPixelFormat64bppRGBA, and why GUID_WICPixelFormat80bppCMYKAlpha should also be converted into GUID_WICPixelFormat64bppRGBA. In one case I would think that GUID_WICPixelFormat40bppCMYKAlpha would convert to GUID_WICPixelFormat32bppRGBA and GUID_WICPixelFormat80bppCMYKAlpha would convert to GUID_WICPixelFormat64bppRGBA, because the black channel (K) values would get folded / "swallowed" into the CMY channels. In the second case I would think that GUID_WICPixelFormat40bppCMYKAlpha would convert to GUID_WICPixelFormat64bppRGBA and GUID_WICPixelFormat80bppCMYKAlpha would convert to GUID_WICPixelFormat128bppRGBA, because the black channel (K) bits would get redistributed amongst the remaining four channels (C, M, Y, A), and those "new bits" added to those channels would fit in the GUID_WICPixelFormat64bppRGBA and GUID_WICPixelFormat128bppRGBA formats. But seeing as there is no GUID_WICPixelFormat128bppRGBA format, this case is kind of null and void anyway. I basically do not understand why Microsoft says GUID_WICPixelFormat40bppCMYKAlpha and GUID_WICPixelFormat80bppCMYKAlpha should both convert to GUID_WICPixelFormat64bppRGBA in the end.
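     For what it's worth, the conversion step itself is the same regardless of which target GUID the guide recommends. A minimal sketch with IWICFormatConverter, assuming an IWICImagingFactory ('factory') and a decoded IWICBitmapFrameDecode ('frame') already exist:

        // Sketch: convert a decoded CMYK+alpha frame to 64bpp RGBA.
        IWICFormatConverter* converter = nullptr;
        HRESULT hr = factory->CreateFormatConverter(&converter);
        if (SUCCEEDED(hr))
        {
            hr = converter->Initialize(
                frame,                          // source frame (e.g. 40bpp or 80bpp CMYK + alpha)
                GUID_WICPixelFormat64bppRGBA,   // target format recommended by the guide
                WICBitmapDitherTypeNone,
                nullptr, 0.0,
                WICBitmapPaletteTypeCustom);
        }
        // GUID_WICPixelFormat64bppRGBA corresponds to DXGI_FORMAT_R16G16B16A16_UNORM
        // when the converted pixels are copied into an ID3D11Texture2D.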
  4. Does the buffer slot number matter in ID3D11DeviceContext::PSSetConstantBuffers()? I added five or six constant buffers to my framework, and later realized I had set the start-slot parameter to either 0 or 1 in all of them - but they still all worked! I'm curious why that is, and whether the slots should be set up to correspond to the number of constant buffers. Similarly, inside the buffer structs used to pass info into the HLSL shader, I added padding inside the C++ struct so that a struct containing a float3 would be 16 bytes, but the declaration of the same struct inside the HLSL shader file was missing the padding value - and it still worked! Do they need to be consistent or not? Thanks.

        struct CameraBufferType
        {
            XMFLOAT3 cameraPosition;
            float padding;
        };
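     For reference, the first parameter of PSSetConstantBuffers is the start slot, and it has to match the register the cbuffer is declared in on the HLSL side. A sketch with hypothetical buffer names:

        // The slot passed here must match the register in the shader.
        // HLSL side:  cbuffer MatrixBuffer : register(b0) { ... };
        //             cbuffer CameraBuffer : register(b1) { ... };
        deviceContext->PSSetConstantBuffers(0, 1, &matrixBuffer);  // goes to b0
        deviceContext->PSSetConstantBuffers(1, 1, &cameraBuffer);  // goes to b1

        // If a shader only declares one cbuffer, the compiler usually assigns it b0,
        // which is why binding everything at slot 0 or 1 can appear to work anyway.
        // HLSL rounds cbuffer sizes up to 16-byte boundaries, so an explicit trailing
        // float of padding on the C++ side still matches even when the HLSL struct omits it.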
  5. Hi guys, could anyone experienced with DX11 look at my graphics.cpp class? I've got fonts rendering correctly with the painter's algorithm - painting over the other 3D stuff each frame. However, whenever I turn the camera left or right, the fonts get squashed narrower and narrower, then disappear completely. It seems like the fix must be a very small change, untying their rendering from the camera direction, but I just can't figure out how to do it under all this rendering complexity. Any tips would be helpful, thanks. https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/Graphics.cpp
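     A common way to untie UI/text rendering from the camera is to draw it with an orthographic projection and an identity (or fixed) view matrix instead of the scene's view matrix. A minimal sketch with DirectXMath; the matrix names are placeholders, not the ones in the linked repository:

        // Build matrices for 2D/UI rendering only; the 3D scene keeps its own view/projection.
        using namespace DirectX;

        XMMATRIX uiWorld = XMMatrixIdentity();
        XMMATRIX uiView  = XMMatrixIdentity();                        // no camera rotation applied to text
        XMMATRIX uiProj  = XMMatrixOrthographicLH(
            (float)screenWidth, (float)screenHeight, 0.1f, 100.0f);   // screen-space projection

        // Per-frame flow:
        //   1. Render the 3D scene with the camera view + perspective projection.
        //   2. Turn off the Z buffer (or clear depth).
        //   3. Render the text quads with uiWorld / uiView / uiProj.
        //   4. Turn the Z buffer back on.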
  6. SOLVED: I had written Dispatch(32, 24, 0) instead of Dispatch(32, 24, 1). I'm attempting to implement some basic post-processing in my "engine". I think I've understood the HLSL side of the compute shader, but I'm at a loss as to how to actually get/use its output for rendering to the screen. Assume I'm doing something to a UAV in my CS:

        RWTexture2D<float4> InputOutputMap : register(u0);

     I want that texture to essentially "be" the back buffer. I'm pretty certain I'm doing something wrong when I create the views (what I think I'm doing is having the back buffer bound as a render target as well as a UAV, and then using it in my CS):

        DXGI_SWAP_CHAIN_DESC scd;
        ZeroMemory(&scd, sizeof(DXGI_SWAP_CHAIN_DESC));
        scd.BufferCount = 1;
        scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT | DXGI_USAGE_SHADER_INPUT | DXGI_USAGE_UNORDERED_ACCESS;
        scd.OutputWindow = wndHandle;
        scd.SampleDesc.Count = 1;
        scd.Windowed = TRUE;

        HRESULT hr = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, NULL, NULL, NULL,
            D3D11_SDK_VERSION, &scd, &gSwapChain, &gDevice, NULL, &gDeviceContext);

        // get the address of the back buffer
        ID3D11Texture2D* pBackBuffer = nullptr;
        gSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);

        // use the back buffer address to create the render target
        gDevice->CreateRenderTargetView(pBackBuffer, NULL, &gBackbufferRTV);

        // set the render target as the back buffer
        CreateDepthStencilBuffer();
        gDeviceContext->OMSetRenderTargets(1, &gBackbufferRTV, depthStencilView);

        // UAV for compute shader
        D3D11_UNORDERED_ACCESS_VIEW_DESC uavd;
        ZeroMemory(&uavd, sizeof(uavd));
        uavd.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        uavd.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
        uavd.Texture2D.MipSlice = 1;
        gDevice->CreateUnorderedAccessView(pBackBuffer, &uavd, &gUAV);

        pBackBuffer->Release();

     After I render the scene, I dispatch like this:

        gDeviceContext->OMSetRenderTargets(0, NULL, NULL);
        m_vShaders["cs1"]->Bind();
        gDeviceContext->CSSetUnorderedAccessViews(0, 1, &gUAV, 0);
        gDeviceContext->Dispatch(32, 24, 0); // hard coded
        ID3D11UnorderedAccessView* nullview = { nullptr };
        gDeviceContext->CSSetUnorderedAccessViews(0, 1, &nullview, 0);
        gDeviceContext->OMSetRenderTargets(1, &gBackbufferRTV, depthStencilView);
        gSwapChain->Present(0, 0);

     Worth noting is that the scene is rendered as usual, but I don't get any results from the CS (a simple Gaussian blur). I'm sure it's something fairly basic I'm doing wrong; perhaps my understanding of render targets / views / what have you is just completely wrong and my approach makes no sense. If someone with more experience could point me in the right direction I would really appreciate it! On a side note, I'd really like to learn more about this kind of stuff. I can really see the potential of the CS, as well as rendering to textures and using them for whatever in the engine, so I would love it if you know some good resources I can read about this! Thank you <3 P.S. I excluded the .hlsl since I can't imagine that being the issue, but if you think you need it to help me, just ask. P.P.S. As you can see this is my first post; I do have another account, but I can't log in with it because gamedev.net just keeps asking me to accept terms and then logs me out when I do, over and over.
  7. I was wondering if anyone could explain the depth buffer and the depth stencil state comparison function to me, as I'm a little confused. I have set up a depth stencil state where DepthFunc is set to D3D11_COMPARISON_LESS, but what am I actually comparing here? What is actually written to the buffer - the pixel that should show up in front? I have these two quad faces, a red face and a blue face. The blue face is further away from the viewer, with a Z value of -100.0f, while the red face is close to the viewer, with a Z value of 0.0f. When DepthFunc is set to D3D11_COMPARISON_LESS, the red face shows up in front of the blue face, as it should based on the Z values. BUT if I change DepthFunc to D3D11_COMPARISON_LESS_EQUAL, the blue face shows in front of the red face, which does not make sense to me. I would think that with D3D11_COMPARISON_LESS_EQUAL the red face would still show up in front of the blue face, as the red face's Z is still closer to the viewer. Am I thinking of this comparison function all wrong? Vertex data just in case:

        // Vertex data that makes up the 2 faces
        Vertex verts[] =
        {
            // Red face
            Vertex(Vector4(0.0f, 0.0f, 0.0f),     Color(1.0f, 0.0f, 0.0f)),
            Vertex(Vector4(100.0f, 100.0f, 0.0f), Color(1.0f, 0.0f, 0.0f)),
            Vertex(Vector4(100.0f, 0.0f, 0.0f),   Color(1.0f, 0.0f, 0.0f)),
            Vertex(Vector4(0.0f, 0.0f, 0.0f),     Color(1.0f, 0.0f, 0.0f)),
            Vertex(Vector4(0.0f, 100.0f, 0.0f),   Color(1.0f, 0.0f, 0.0f)),
            Vertex(Vector4(100.0f, 100.0f, 0.0f), Color(1.0f, 0.0f, 0.0f)),
            // Blue face
            Vertex(Vector4(0.0f, 0.0f, -100.0f),     Color(0.0f, 0.0f, 1.0f)),
            Vertex(Vector4(100.0f, 100.0f, -100.0f), Color(0.0f, 0.0f, 1.0f)),
            Vertex(Vector4(100.0f, 0.0f, -100.0f),   Color(0.0f, 0.0f, 1.0f)),
            Vertex(Vector4(0.0f, 0.0f, -100.0f),     Color(0.0f, 0.0f, 1.0f)),
            Vertex(Vector4(0.0f, 100.0f, -100.0f),   Color(0.0f, 0.0f, 1.0f)),
            Vertex(Vector4(100.0f, 100.0f, -100.0f), Color(0.0f, 0.0f, 1.0f)),
        };
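     For context, the comparison happens per pixel between the depth of the fragment currently being rasterized and the value already stored in the depth buffer at that pixel - both after the projection transform, so in 0..1 depth-buffer space rather than raw world-space Z. If the test passes, the new depth is written (when depth writes are enabled). A minimal state setup, as a sketch:

        // Minimal depth-stencil state sketch; the test runs as:
        //   incoming fragment depth  (DepthFunc)  stored depth-buffer value
        D3D11_DEPTH_STENCIL_DESC dsDesc = {};
        dsDesc.DepthEnable    = TRUE;
        dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;   // passing fragments overwrite the stored depth
        dsDesc.DepthFunc      = D3D11_COMPARISON_LESS;        // pass when the new fragment is nearer

        ID3D11DepthStencilState* dsState = nullptr;
        device->CreateDepthStencilState(&dsDesc, &dsState);
        context->OMSetDepthStencilState(dsState, 0);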
  8. Hi all, first-time poster here, although I've been reading posts here for quite a while. This place has been invaluable for learning graphics programming - thanks for a great resource! Right now I'm working on a graphics abstraction layer for .NET which supports D3D11, Vulkan, and OpenGL at the moment. I have implemented most of my planned features already, and things are working well. Some remaining features that I am planning are compute shaders and some flavor of read-write shader resources. At the moment, my shaders can just get simple read-only access to a uniform (or constant) buffer, a texture, or a sampler. Unfortunately, I'm having a tough time grasping the distinctions between all of the different kinds of read-write resources that are available. In D3D alone, there seem to be five or six different kinds of resources with similar but different characteristics. On top of that, I get the impression that some of them are more or less "obsoleted" by the newer kinds and don't have much of a place in modern code. There seem to be a few pivots:
     • the data source/destination (buffer or texture)
     • read-write or read-only
     • structured or unstructured (?)
     • ordered vs. unordered (?)
     These are just my observations based on a lot of MSDN and OpenGL doc reading. For my library, I'm not interested in exposing every possibility to the user - just trying to find a good "middle ground" that can be represented cleanly across APIs and is good enough for common scenarios. Can anyone give a sort of overview of the different options, and perhaps compare/contrast the concepts between Direct3D, OpenGL, and Vulkan? I'd also be very interested in hearing how other folks have abstracted these concepts in their libraries.
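     For reference, one combination that covers a lot of common compute scenarios in D3D11 is a structured buffer created once and exposed both as an SRV (read-only, StructuredBuffer<T>) and a UAV (read-write, RWStructuredBuffer<T>). A sketch, with the element type and counts made up for illustration:

        // Sketch: one buffer, readable as an SRV and writable as a UAV.
        struct Particle { float pos[3]; float vel[3]; };
        const UINT elementCount = 1024;

        D3D11_BUFFER_DESC bd = {};
        bd.ByteWidth           = sizeof(Particle) * elementCount;
        bd.Usage               = D3D11_USAGE_DEFAULT;
        bd.BindFlags           = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
        bd.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
        bd.StructureByteStride = sizeof(Particle);

        ID3D11Buffer* buffer = nullptr;
        device->CreateBuffer(&bd, nullptr, &buffer);

        D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
        srvDesc.Format             = DXGI_FORMAT_UNKNOWN;        // structured buffers use UNKNOWN
        srvDesc.ViewDimension      = D3D11_SRV_DIMENSION_BUFFER;
        srvDesc.Buffer.NumElements = elementCount;
        ID3D11ShaderResourceView* srv = nullptr;
        device->CreateShaderResourceView(buffer, &srvDesc, &srv);

        D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
        uavDesc.Format             = DXGI_FORMAT_UNKNOWN;
        uavDesc.ViewDimension      = D3D11_UAV_DIMENSION_BUFFER;
        uavDesc.Buffer.NumElements = elementCount;
        ID3D11UnorderedAccessView* uav = nullptr;
        device->CreateUnorderedAccessView(buffer, &uavDesc, &uav);

     This maps reasonably directly onto OpenGL shader storage buffers and Vulkan storage buffers, which is one reason it is a popular cross-API middle ground.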
  9. If I do a buffer update with MAP_NO_OVERWRITE or MAP_DISCARD, can I just write to the buffer after I have called Unmap() on it? It seems to work fine for me (NVIDIA driver), but is it actually legal to do so? I have a graphics device wrapper and I don't want to expose Map/Unmap, but instead have a function like

        void* AllocateFromRingBuffer(GPUBuffer* buffer, uint size, uint& offset);

     This function would just call Map on the buffer, then Unmap immediately, and then return the address of the buffer. It usually does a MAP_NO_OVERWRITE, but sometimes it is a WRITE_DISCARD (when the buffer wraps around). Previously I had been using it so that the function expected the data up front and copied it to the buffer between Map and Unmap, but now I want to extend its functionality so that it just returns an address to write to.
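     For comparison, here is a version of the same ring-buffer allocator that keeps all writes strictly between Map and Unmap, which is the pattern the API contract actually covers. GPUBuffer and its fields (d3dBuffer, byteSize, writeOffset) are assumptions based on the description above, not the poster's real types:

        // Sketch: the caller writes through the returned pointer, then calls
        // ctx->Unmap(buffer->d3dBuffer, 0) before any draw uses the buffer.
        void* MapFromRingBuffer(ID3D11DeviceContext* ctx, GPUBuffer* buffer, UINT size, UINT& offset)
        {
            D3D11_MAP mapType = D3D11_MAP_WRITE_NO_OVERWRITE;
            if (buffer->writeOffset + size > buffer->byteSize)
            {
                buffer->writeOffset = 0;            // wrap around
                mapType = D3D11_MAP_WRITE_DISCARD;  // discard so the GPU can keep using the old contents
            }

            D3D11_MAPPED_SUBRESOURCE mapped = {};
            ctx->Map(buffer->d3dBuffer, 0, mapType, 0, &mapped);

            offset = buffer->writeOffset;
            buffer->writeOffset += size;
            return static_cast<uint8_t*>(mapped.pData) + offset;
        }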
  10. Trying to write a multitexturing shader in DirectX 11 - three textures work fine, but the fourth gets sampled as black! Could you please look at textureClass.cpp, line 79? I'm guessing its D3D11_TEXTURE2D_DESC settings are wrong, but I have no idea how to set it up right. I tried changing ArraySize from 1 to 4, but that does nothing. If that's not the issue, please look at LightShader_ps - maybe I'm doing something wrong there? Otherwise, no idea.

        // Setup the description of the texture.
        textureDesc.Height = height;
        textureDesc.Width = width;
        textureDesc.MipLevels = 0;
        textureDesc.ArraySize = 1;
        textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        textureDesc.SampleDesc.Count = 1;
        textureDesc.SampleDesc.Quality = 0;
        textureDesc.Usage = D3D11_USAGE_DEFAULT;
        textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
        textureDesc.CPUAccessFlags = 0;
        textureDesc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;

     Please help, thanks. https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/Texture.cpp
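     One thing worth ruling out before touching the texture description: make sure all four shader resource views are actually bound to the slots that LightShader_ps reads from. This is an assumption about the setup, not something visible in the snippet above; a sketch:

        // Sketch: bind four SRVs in one call so slots t0..t3 are all populated.
        // HLSL side would declare: Texture2D tex0 : register(t0); ... Texture2D tex3 : register(t3);
        ID3D11ShaderResourceView* views[4] = { srv0, srv1, srv2, srv3 };  // hypothetical SRV names
        deviceContext->PSSetShaderResources(0, 4, views);

        // Sampling a slot that has no SRV bound (or a null SRV) returns zeros in D3D11,
        // which shows up as black - exactly the symptom described for the fourth texture.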
  11. Can someone help out with this? The code builds, but I get "Exception thrown: Read Access Violation" and it says my index buffer was a nullptr. I'm attaching my code and a screenshot of the error below or above. Any help is greatly appreciated.

        //-------------------------------
        // Header files
        //-------------------------------
        #include "manager.h"
        #include "renderer.h"
        #include "dome.h"
        #include "camera.h"

        //-------------------------------
        // Constructor
        //-------------------------------
        CDome::CDome()
        {
            m_pIndxBuff = nullptr;
            m_pVtxBuff = nullptr;
            m_HorizontalGrid = NULL;
            m_VerticalGrid = NULL;
            // Set the world position, scale, and rotation
            m_Scale = D3DXVECTOR3(1.0f, 1.0f, 1.0f);
            m_Pos = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
            //m_Rotate = 0.0f;
        }

        CDome::CDome(int HorizontalGrid, int VerticalGrid, float Length)
        {
            m_pIndxBuff = nullptr;
            m_pVtxBuff = nullptr;
            m_HorizontalGrid = HorizontalGrid;
            m_VerticalGrid = VerticalGrid;
            // Set the world position, scale, and rotation
            m_Scale = D3DXVECTOR3(1.0f, 1.0f, 1.0f);
            m_Pos = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
            m_Length = Length;
        }

        CDome::CDome(int HorizontalGrid, int VerticalGrid, float Length, D3DXVECTOR3 Pos)
        {
            m_pIndxBuff = nullptr;
            m_pVtxBuff = nullptr;
            m_HorizontalGrid = HorizontalGrid;
            m_VerticalGrid = VerticalGrid;
            // Set the world position, scale, and rotation
            m_Scale = D3DXVECTOR3(1.0f, 1.0f, 1.0f);
            m_Pos = Pos;
            m_Length = Length;
        }

        //-------------------------------
        // Destructor
        //-------------------------------
        CDome::~CDome()
        {
        }

        //-------------------------------
        // Initialization
        //-------------------------------
        void CDome::Init(void)
        {
            LPDIRECT3DDEVICE9 pDevice;
            pDevice = CManager::GetRenderer()->GetDevice();

            m_VtxNum = (m_HorizontalGrid + 1) * (m_VerticalGrid + 1);
            m_IndxNum = (m_HorizontalGrid * 2 + 2) * m_VerticalGrid + (m_VerticalGrid - 1) * 2;

            // Create the texture
            if (FAILED(D3DXCreateTextureFromFile(pDevice, "data/TEXTURE/dome.jpg", &m_pTexture)))
            {
                MessageBox(NULL, "Couldn't read Texture file destination", "Error Loading Texture", MB_OK | MB_ICONHAND);
            }

            // Create the vertex buffer (sized to the vertex count created above)
            if (FAILED(pDevice->CreateVertexBuffer(sizeof(VERTEX_3D) * m_VtxNum, D3DUSAGE_WRITEONLY, FVF_VERTEX_3D,
                D3DPOOL_MANAGED, &m_pVtxBuff, NULL)))
            {
                MessageBox(NULL, "Error making VertexBuffer", "Error", MB_OK);
            }

            // Create the index buffer
            if (FAILED(pDevice->CreateIndexBuffer(sizeof(VERTEX_3D) * m_IndxNum, D3DUSAGE_WRITEONLY, D3DFMT_INDEX16,
                D3DPOOL_MANAGED, &m_pIndxBuff, NULL)))
            {
                MessageBox(NULL, "Error making IndexBuffer", "Error", MB_OK);
            }

            VERTEX_3D *pVtx;   // pointer to the locked vertex data
            WORD *pIndx;       // pointer to the locked index data

            // Lock the vertex buffer to get a pointer to its memory.
            m_pVtxBuff->Lock(0, 0, (void**)&pVtx, 0);
            // Lock the index buffer to get a pointer to its memory.
            m_pIndxBuff->Lock(0, 0, (void**)&pIndx, 0);

            for (int i = 0; i < (m_VerticalGrid + 1); i++)
            {
                for (int j = 0; j < (m_HorizontalGrid + 1); j++)
                {
                    pVtx[0].pos = D3DXVECTOR3(
                        m_Length * sinf(i * (D3DX_PI * 0.5f / ((int)m_VerticalGrid - 1))) * sinf(j * (D3DX_PI * 2 / ((int)m_HorizontalGrid - 1))),
                        m_Length * cosf(i * (D3DX_PI * 0.5f / ((int)m_VerticalGrid - 1))),
                        m_Length * sinf(i * (D3DX_PI * 0.5f / ((int)m_VerticalGrid - 1))) * cosf(j * (D3DX_PI * 2 / ((int)m_HorizontalGrid - 1))));
                    D3DXVECTOR3 tempNormalize;
                    D3DXVec3Normalize(&tempNormalize, &pVtx[0].pos);
                    pVtx[0].normal = -tempNormalize;
                    pVtx[0].color = D3DXCOLOR(255, 255, 255, 255);
                    pVtx[0].tex = D3DXVECTOR2((float)j / (m_HorizontalGrid - 1), (float)i / (m_VerticalGrid - 1));
                    pVtx++;
                }
            }

            for (int i = 0; i < m_VerticalGrid; i++)
            {
                if (i != 0)
                {
                    pIndx[0] = ((m_HorizontalGrid + 1) * (i + 1));
                    pIndx++;
                }
                for (int j = 0; j < (m_HorizontalGrid + 1); j++)
                {
                    pIndx[0] = ((m_HorizontalGrid + 1) * (i + 1)) + j;
                    pIndx[1] = ((m_HorizontalGrid + 1) * i) + j;
                    pIndx += 2;
                }
                if (i + 1 != m_VerticalGrid)
                {
                    pIndx[0] = pIndx[-1];
                    pIndx++;
                }
            }

            // Unlock the index buffer
            m_pIndxBuff->Unlock();
            // Unlock the vertex buffer
            m_pVtxBuff->Unlock();
        }

        //-------------------------------
        // Cleanup
        //-------------------------------
        void CDome::Uninit(void)
        {
            // Release the vertex buffer
            SAFE_RELEASE(m_pVtxBuff);
            // Release the index buffer
            SAFE_RELEASE(m_pIndxBuff);
            Release();
        }

        //-------------------------------
        // Update
        //-------------------------------
        void CDome::Update(void)
        {
            m_Pos = CManager::GetCamera()->GetCameraPosEye();
        }

        //-------------------------------
        // Draw
        //-------------------------------
        void CDome::Draw(void)
        {
            LPDIRECT3DDEVICE9 pDevice;
            pDevice = CManager::GetRenderer()->GetDevice();

            D3DXMATRIX mtxWorld;
            D3DXMATRIX mtxPos;
            D3DXMATRIX mtxScale;
            D3DXMATRIX mtxRotation;

            // World identity matrix
            D3DXMatrixIdentity(&mtxWorld);

            // 3D scaling matrix
            D3DXMatrixScaling(&mtxScale, m_Scale.x, m_Scale.y, m_Scale.z);
            D3DXMatrixMultiply(&mtxWorld, &mtxWorld, &mtxScale);

            // 3D translation matrix
            D3DXMatrixTranslation(&mtxPos, m_Pos.x, m_Pos.y + 70.0f, m_Pos.z);
            D3DXMatrixMultiply(&mtxWorld, &mtxWorld, &mtxPos);

            // Apply the world transform
            pDevice->SetTransform(D3DTS_WORLD, &mtxWorld);

            // Set the vertex buffer as the data stream
            pDevice->SetStreamSource(0, m_pVtxBuff, 0, sizeof(VERTEX_3D));

            // Set the vertex format
            pDevice->SetFVF(FVF_VERTEX_3D);

            // Set the texture
            pDevice->SetTexture(0, m_pTexture);

            // Set the index buffer
            pDevice->SetIndices(m_pIndxBuff);

            // Turn lighting off so the vertex colors are visible
            pDevice->SetRenderState(D3DRS_LIGHTING, FALSE);

            // Draw the polygons
            pDevice->DrawIndexedPrimitive(D3DPT_TRIANGLESTRIP, 0, 0, m_VtxNum, 0, m_IndxNum - 2);

            // Turn lighting back on
            pDevice->SetRenderState(D3DRS_LIGHTING, TRUE);
        }

        //-------------------------------
        // Create MeshDome
        //-------------------------------
        CDome *CDome::Create(int HorizontalGrid, int VerticalGrid, float Length)
        {
            CDome *pMeshDome;
            pMeshDome = new CDome(HorizontalGrid, VerticalGrid, Length);
            pMeshDome->Init();
            return pMeshDome;
        }

        CDome *CDome::Create(int HorizontalGrid, int VerticalGrid, float Length, D3DXVECTOR3 Pos)
        {
            CDome *pMeshDome;
            pMeshDome = new CDome(HorizontalGrid, VerticalGrid, Length, Pos);
            pMeshDome->Init();
            return pMeshDome;
        }
  12. I have to learn DirectX for a course I am studying. This book, https://www.amazon.co.uk/Introduction-3D-Game-Programming-Directx/dp/1936420228, I felt would be great for me to learn from. The trouble is the examples, which are all offered here: http://www.d3dcoder.net/d3d11.htm. They do not work for me. This is a known issue, as there is a link on the examples page explaining how to fix it, but I'm having difficulty doing so. This is the page with the solution: http://www.d3dcoder.net/Data/Book4/d3d11Win10.htm. The reason this problem happens is that the book was released before Windows 10, so when the examples are run now they need slight fixes in order to even work, and I just can't get them working at all. Would anyone be able to help me get the examples working, please? I am running Windows 10, just to make this clear, which is why the examples show this undesired behaviour. I just wish they would work straight away, but there seem to be issues with the examples from this book, mainly because of running them on a Windows 10 OS. On top of this, if anyone has any suggestions for how I can learn DirectX 11, I would be most grateful. Thanks very much. I really would like to get the examples from the book I mentioned working, too. I look forward to reading any replies this thread receives. GameDevCoder. PS - If anyone has noticed, I asked about this about a year ago as well, but that was when I was just dabbling in it. Now I actually need to produce some stuff with DirectX, so I have to get my head around this now. I felt at the time that I sort of understood what was being written to me in response to my thread back then, but I had always been a little unsure of exactly what was happening with these troublesome examples, so I am really just trying to get to the bottom of it now. If anyone can help me work these examples out so I can see them running, then hopefully I can learn DirectX 11 from them. *SOLUTION* - I was able to get the examples running thanks to the gamedev.net community. Great work, guys. I'm so pleased that I can now learn from this book with the examples running. https://www.gamedev.net/forums/topic/693437-i-need-to-learn-directx-the-examples-for-introduction-to-3d-programming-with-directx-11-by-frank-d-luna-does-not-work-can-anyone-help-me/?do=findComment&comment=5363013
  13. Hello, until now I have been using structured buffers in my vertex shader to calculate the morph offsets of my animated characters, and it works fine. But so far I only read from this kind of buffer (I use four of them). Now I have other things in mind, where I have to use a read-write buffer that I can write to. But I can't get my head around how to synchronize write accesses: when I read a value from the buffer at an address that corresponds to, e.g., a pixel coordinate and want to add a value to it, another thread could have read the same value in the meantime and would overwrite the value I had written. How is this typically done?
  14. Hey, does anyone know a good plugin for debugging HLSL shaders? I have an HLSL tool installed, and it makes the text change color, but it won't show me any of the errors, so I have no way of spotting mistakes in the shaders I write other than my own two eyes. It doesn't do the red squiggly underline for some reason. Is this a settings issue, or do I need another plugin? Thanks.
  15. Hi, there's a great tutorial on frustum culling, but it's impossible to compile because it uses old DirectX 11 types (D3DXPLANE instead of XMVECTOR, etc.). Can someone please help me update this one class - FrustumClass - to the new DirectX 11 types (XMMATRIX, XMVECTOR, etc.)? http://www.rastertek.com/dx11tut16.html Furthermore, can anyone please explain how he gets the minimum Z distance from the projection matrix by dividing one element by another? He leaves no explanation for this math and it's extremely frustrating.

        // Calculate the minimum Z distance in the frustum.
        zMinimum = -projectionMatrix._43 / projectionMatrix._33;
        r = screenDepth / (screenDepth - zMinimum);
        projectionMatrix._33 = r;
        projectionMatrix._43 = -r * zMinimum;

     I'm also not sure how to use an XMVECTOR instead of the old plane class that he uses, and I'm confused as to how the m12, m13, etc. correspond to the elements of an XMVECTOR. I thought you're not even supposed to access the elements of an XMVECTOR directly! So incredibly frustrating. Please help, thanks.
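     As a starting point, here is a sketch of the same frustum construction using DirectXMath: it extracts the six planes from the combined view*projection matrix (the usual Gribb/Hartmann approach) and stores each as an XMVECTOR, with XMPlaneNormalize doing what D3DXPlaneNormalize used to do. XMFLOAT4X4 provides the named _11.._44 element access that XMMATRIX itself doesn't expose. Variable names are mine, not the tutorial's:

        #include <DirectXMath.h>
        using namespace DirectX;

        // Sketch: build six frustum planes (each an XMVECTOR holding a, b, c, d) from view * projection.
        void ConstructFrustum(XMVECTOR planes[6], XMMATRIX view, XMMATRIX projection)
        {
            XMMATRIX vp = XMMatrixMultiply(view, projection);

            XMFLOAT4X4 m;
            XMStoreFloat4x4(&m, vp);   // gives named access to the individual elements

            planes[0] = XMVectorSet(m._13, m._23, m._33, m._43);                                   // near
            planes[1] = XMVectorSet(m._14 - m._13, m._24 - m._23, m._34 - m._33, m._44 - m._43);   // far
            planes[2] = XMVectorSet(m._14 + m._11, m._24 + m._21, m._34 + m._31, m._44 + m._41);   // left
            planes[3] = XMVectorSet(m._14 - m._11, m._24 - m._21, m._34 - m._31, m._44 - m._41);   // right
            planes[4] = XMVectorSet(m._14 - m._12, m._24 - m._22, m._34 - m._32, m._44 - m._42);   // top
            planes[5] = XMVectorSet(m._14 + m._12, m._24 + m._22, m._34 + m._32, m._44 + m._42);   // bottom

            for (int i = 0; i < 6; ++i)
                planes[i] = XMPlaneNormalize(planes[i]);
        }

     As for the division: for a standard left-handed perspective projection (D3DXMatrixPerspectiveFovLH and its XMMatrixPerspectiveFovLH replacement), _33 = zf / (zf - zn) and _43 = -zn * zf / (zf - zn), so -_43 / _33 recovers the near-plane distance zn. The following two lines then rebuild _33 and _43 as if the projection ran from that near plane out to screenDepth, which is what the rest of the tutorial's plane extraction relies on.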
  16. Hi, as part of my terrain project I'm trying to render ocean water. I have a nice FFT compute shader implementation which outputs a nice 512x512 heightmap (it can also output a gradient map, but I disabled that as it has issues). The FFT code is from the NVIDIA FFT ocean sample for DX11. Now, here is the weird thing: I have two different methods that render the water grid, both using the same FFT heightmap SRV (the SRVs are members of a dedicated resource manager class), and both render the FFT heightmap in exactly the same way. Although the grids are different, I eventually made the FFT map tile in a way where the scales are almost 1:1. The rendering itself is pretty much straightforward (using the DX11 tessellation pipeline): 1. In the domain shader - sample the heightmap to displace the vertices. 2. In the pixel shader - finite differences to get the normals: sample the heightmap four times and calculate the normals as usual. Now here is the weird part. Method 1 - the normals look good after the finite-difference operation; unfortunately I can't use this method as it has some other issues. Method 2 - the normals come out distorted in a way I can't explain. More than that, if in the domain shader I give up the displacement on the horizontal axes (XZ) and leave only the vertical displacement on the Y axis, the normals are fine. With full displacement (XZ included) it feels like the normals aren't compensating for the XZ movement of the displacement. I tried to play with anything I could think of, but the normals look bad no matter what, and I really don't want to give up the XZ displacement, because with vertical displacement only the FFT looks kind of crippled. I also tried ddx_fine and ddy_fine, and the normals seemed more accurate (i.e. they took the XZ movement into account), but the quality was very low, so not usable. Still, the fact that the hardware derivative functions handled the XZ movement more accurately gives me hope that there is a better way to do it (?). So, is there a better way to calculate the normals more accurately?
     Here is the difference: Method 1 normals - nice and crisp; Method 2 normals - distorted. Also attached is the Method 2 displacement in wireframe, and it looks good, as can be seen. Below is the relevant DS and PS code that does the displacement and normals in method 2 (the method 1 code is the same, it just has some more stuff like Perlin noise blended in the distance; the FFT-related parts are identical).

     DS displacement code:

        // bilerp the position
        float3 worldPos = Bilerp(terrainQuad[0].vPosition, terrainQuad[1].vPosition,
                                 terrainQuad[2].vPosition, terrainQuad[3].vPosition, UV);
        float3 displacement = 0;
        displacement = SampleHeightForVS(gFFTHeightMap, Sampler16Aniso, worldPos.xz);
        displacement.z *= -1; // Flip Z back because the tex coordinates use a flipped Z; without this the FFT looks kind of upside down
        worldPos += displacement * FFT_DS_SCALE_FACTOR;
        return worldPos;

     PS finite difference:

        float3 CalcNormalForOceanHeightMap(float2 uv)
        {
            float2 one_texel = float2(1.0f / 512.0f, 1.0f / 512.0f);

            float2 leftTex;
            float2 rightTex;
            float2 bottomTex;
            float2 topTex;

            float leftY;
            float rightY;
            float bottomY;
            float topY;

            float normFactor = 1.0 / 512.0;

            leftTex = uv + float2(-one_texel.x, 0.0f);
            rightTex = uv + float2(one_texel.x, 0.0f);
            bottomTex = uv + float2(0.0f, one_texel.y);
            topTex = uv + float2(0.0f, -one_texel.y);

            leftY = gFFTHeightMap.SampleLevel(Sampler16Aniso, leftTex, 0).z * normFactor;
            rightY = gFFTHeightMap.SampleLevel(Sampler16Aniso, rightTex, 0).z * normFactor;
            bottomY = gFFTHeightMap.SampleLevel(Sampler16Aniso, bottomTex, 0).z * normFactor;
            topY = gFFTHeightMap.SampleLevel(Sampler16Aniso, topTex, 0).z * normFactor;

            float3 normal;
            normal.x = (leftY - rightY);
            normal.z = (bottomY - topY);
            normal.y = 1.0f / 64.0;

            return normalize(normal);
        }

     Any help would be welcome, thanks!
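     One hedged idea that matches the symptom (normals ignoring the horizontal part of the displacement): instead of differencing only the vertical component of the heightmap, sample the full xyz displacement of the neighbouring texels, form the two tangent vectors of the displaced surface, and cross them. The math is illustrated below in C++ with DirectXMath purely for clarity; in practice it would live in the pixel shader, with the sampled displacements standing in for the dispL/dispR/dispB/dispT arguments (hypothetical names):

        #include <DirectXMath.h>
        using namespace DirectX;

        // Illustration of the math only: tangents across the displaced surface include the
        // displacement's X and Z components, so horizontal motion is accounted for.
        XMVECTOR NormalFromDisplacedNeighbours(
            XMVECTOR basePosL, XMVECTOR dispL,   // undisplaced grid position + displacement, left neighbour
            XMVECTOR basePosR, XMVECTOR dispR,   // right neighbour
            XMVECTOR basePosB, XMVECTOR dispB,   // bottom neighbour
            XMVECTOR basePosT, XMVECTOR dispT)   // top neighbour
        {
            XMVECTOR pL = XMVectorAdd(basePosL, dispL);
            XMVECTOR pR = XMVectorAdd(basePosR, dispR);
            XMVECTOR pB = XMVectorAdd(basePosB, dispB);
            XMVECTOR pT = XMVectorAdd(basePosT, dispT);

            XMVECTOR du = XMVectorSubtract(pR, pL);
            XMVECTOR dv = XMVectorSubtract(pT, pB);

            // Flip the cross order if the resulting normal points downward in your convention.
            return XMVector3Normalize(XMVector3Cross(dv, du));
        }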
  17. Hello, I want to use: the fxc effect compiler to compile my .fx files (with techniques, passes, and shaders) to bytecode - no problem; and SharpDX ShaderReflection to get the attributes of every used shader, to generate constant buffer structures and other things. To use ShaderReflection I have to feed it the shader bytecode of every single shader I want to operate on. I was successful in iterating through the effect for the techniques, and iterating through the techniques for the passes and their descriptions. I struggled a lot but cannot find out where the connection is to get the shader bytecode of the shaders that are used by an EffectPass. Here is the answer of AlexandreMutel: "And even then, you can still use the FX file format - the difference is you'd have an offline compiler tool that parses the format for techniques/passes in order to acquire the necessary shader profiles + entry points. So with the profile/entry point in hand, you can run the FX file through the D3DCompiler to get a ShaderByteCode object for each shader, then use that to create a reflection object to query all your meta data. Then write out the reflected meta data to a file that gets consumed by your application at runtime - which would be your own implementation of Effects11 (or something completely different; either way, you use the meta data to automatically set up your constant buffers, bind resources, and manage the shader pipeline by directly using the Direct3D11 shader interfaces)." How do I get the profile/entry point at hand? Please give me some help.
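     Once an entry point and profile are known (in an .fx file they come from the pass declarations, e.g. SetVertexShader(CompileShader(vs_5_0, MyVS())) names both), compiling and reflecting a single shader looks roughly like this in native C++; the SharpDX calls mirror these one-to-one. The entry point "MyVS" and the file name "effect.fx" are placeholders:

        #include <d3dcompiler.h>
        #pragma comment(lib, "d3dcompiler.lib")

        // Sketch: compile one shader from the .fx source with an explicit entry point + profile, then reflect it.
        ID3DBlob* bytecode = nullptr;
        ID3DBlob* errors = nullptr;
        HRESULT hr = D3DCompileFromFile(L"effect.fx", nullptr, D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                        "MyVS",      // entry point taken from the pass declaration
                                        "vs_5_0",    // profile taken from the pass declaration
                                        0, 0, &bytecode, &errors);
        if (SUCCEEDED(hr))
        {
            ID3D11ShaderReflection* reflection = nullptr;
            D3DReflect(bytecode->GetBufferPointer(), bytecode->GetBufferSize(),
                       IID_ID3D11ShaderReflection, reinterpret_cast<void**>(&reflection));

            D3D11_SHADER_DESC desc = {};
            reflection->GetDesc(&desc);   // desc.ConstantBuffers, desc.BoundResources, ...
        }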
  18. In countless sources - CUDA-related sources, that is - I've found that, when operating within a warp, one can skip syncthreads because all instructions are synchronous within a single warp. I followed that advice and applied it in DirectCompute (I use an NVIDIA GPU). I wrote this code that does nothing but a good old prefix sum over 64 elements (64 is the size of my thread group):

        groupshared float errs1_shared[64];
        groupshared float errs2_shared[64];
        groupshared float errs4_shared[64];
        groupshared float errs8_shared[64];
        groupshared float errs16_shared[64];
        groupshared float errs32_shared[64];
        groupshared float errs64_shared[64];

        void CalculateErrs(uint threadIdx)
        {
            if (threadIdx < 32) errs2_shared[threadIdx] = errs1_shared[2*threadIdx] + errs1_shared[2*threadIdx + 1];
            if (threadIdx < 16) errs4_shared[threadIdx] = errs2_shared[2*threadIdx] + errs2_shared[2*threadIdx + 1];
            if (threadIdx < 8)  errs8_shared[threadIdx] = errs4_shared[2*threadIdx] + errs4_shared[2*threadIdx + 1];
            if (threadIdx < 4)  errs16_shared[threadIdx] = errs8_shared[2*threadIdx] + errs8_shared[2*threadIdx + 1];
            if (threadIdx < 2)  errs32_shared[threadIdx] = errs16_shared[2*threadIdx] + errs16_shared[2*threadIdx + 1];
            if (threadIdx < 1)  errs64_shared[threadIdx] = errs32_shared[2*threadIdx] + errs32_shared[2*threadIdx + 1];
        }

     This works flawlessly. I noticed that I have bank conflicts here, so I changed the code to this:

        void CalculateErrs(uint threadIdx)
        {
            if (threadIdx < 32) errs2_shared[threadIdx] = errs1_shared[threadIdx] + errs1_shared[threadIdx + 32];
            if (threadIdx < 16) errs4_shared[threadIdx] = errs2_shared[threadIdx] + errs2_shared[threadIdx + 16];
            if (threadIdx < 8)  errs8_shared[threadIdx] = errs4_shared[threadIdx] + errs4_shared[threadIdx + 8];
            if (threadIdx < 4)  errs16_shared[threadIdx] = errs8_shared[threadIdx] + errs8_shared[threadIdx + 4];
            if (threadIdx < 2)  errs32_shared[threadIdx] = errs16_shared[threadIdx] + errs16_shared[threadIdx + 2];
            if (threadIdx < 1)  errs64_shared[threadIdx] = errs32_shared[threadIdx] + errs32_shared[threadIdx + 1];
        }

     To my surprise, this one causes race conditions. Is it because I should not rely on that behaviour (automatic synchronization within a warp) when working with DirectCompute instead of CUDA? Because that hurts my performance by a measurable margin: with bank conflicts (the first version) I am still around 15-20% faster than with the second version, which is conflict-free but requires a GroupMemoryBarrierWithGroupSync between each assignment.
  19. So I have what is potentially a fairly simple question: when rendering various things, whether 2D or 3D, when do they actually get placed or blitted onto the currently set render target? Is it at the moment you call Present, or at the moment of the Draw call itself?
  20. As part of a video project I'm working on, I have to pass an ID3D11Texture2D decoded by CUDA from one D3D11 device to another, which handles rendering. I managed to achieve the goal, but it looks like I'm leaking textures. The workflow looks as follows.

     Sending side (decoder):

        ID3D11Texture2D* pD3D11OutTexture;
        if (!createOutputTexture(pD3D11OutTexture))
            return false;

        IDXGIResource1* pRsrc = nullptr;
        pD3D11OutTexture->QueryInterface(__uuidof(IDXGIResource1), reinterpret_cast<void**>(&pRsrc));
        auto hr = pRsrc->CreateSharedHandle(
            nullptr,
            DXGI_SHARED_RESOURCE_READ | DXGI_SHARED_RESOURCE_WRITE,
            nullptr,
            &frameData->shared_handle);
        pRsrc->Release();

     Receiving side (renderer):

        ID3D11Texture2D* pTex = nullptr;
        hres = m_pD3D11Device->OpenSharedResource1(
            frameData->shared_handle,
            __uuidof(ID3D11Texture2D),
            reinterpret_cast<void**>(&pTex));
        DrawFrame(pTex);
        pTex->Release();
        CloseHandle(frameData->shared_handle);

     I'm somewhat puzzled by the inner workings of this workflow, namely: What happens when I create a shared handle? Does this allow me to release the texture? What happens when I call OpenSharedResource? Does it create a separate texture - that is, do I have to release both textures after rendering? Appreciate your help!
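     One possibility worth checking, assuming a new output texture is created per frame: with NT shared handles, the handle and every opened interface each keep the underlying resource alive, so the sending side also has to release its own COM reference once it no longer needs to touch the texture; otherwise each frame leaks one texture even though the receiver cleans up. This is an assumption about code not shown above, not a diagnosis. A sketch of the sender with that extra release:

        // Sending side, sketched: the shared NT handle keeps the resource alive on its own,
        // so the decoder can drop its reference once the handle exists and the decode write is done.
        IDXGIResource1* pRsrc = nullptr;
        pD3D11OutTexture->QueryInterface(__uuidof(IDXGIResource1), reinterpret_cast<void**>(&pRsrc));
        HRESULT hr = pRsrc->CreateSharedHandle(
            nullptr,
            DXGI_SHARED_RESOURCE_READ | DXGI_SHARED_RESOURCE_WRITE,
            nullptr,
            &frameData->shared_handle);
        pRsrc->Release();
        pD3D11OutTexture->Release();   // sender's reference; handle + opened interfaces still keep it alive

        // The texture is finally destroyed once the receiver has Released the interface it got
        // from OpenSharedResource1 and the handle has been closed with CloseHandle.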
  21. I converted some code to use tiled resources and noticed a few issues that I was wondering if anyone else knows anything about. First of all, the tiled resource is working and visually everything is correct, but: 1. It takes a long time to create the tiled resource. The call to CreateTexture2D that creates the tiles (D3D11_RESOURCE_MISC_TILED) often doesn't return for a few seconds, and there aren't any warnings or messages from the debug layer either, so this seems very strange. My resource is a Texture2D array with 453 entries, and each 2D slice is 2048x2048. I tried reducing the size to 256x256 instead, and it still takes at least 2 seconds to complete the call. 2. I set the lowest mip level (128x128) in each slice to a unique mapping, and all of the rest of the tiles to a single default tile. I'd expect performance to be fairly good, since aside from the lowest mip there is only 64 KB of texture memory being accessed, but it destroys my FPS - going from 80 to 20. Visually it is correct; the tiles show as expected. This is on a tier 1 device, and apparently mip levels smaller than the tile size (128x128 in my case) do not work with texture arrays, so in the shader I check the distance and only use the tiled resource near the camera. This helps, but performance is still rubbish. The GPU is an AMD 285X.
  22. Hello, can someone tell me the easiest way to draw a string with D3D11? I can't use either FW1FontWrapper or ImGui, so I need an alternative. The only goal is to draw text at a screen location, using the Tahoma font (if possible, with an outline style). Thanks in advance.
  23. Hello, today I started to read Frank Luna's DirectX 11 programming book and I ran into an error in the "BoxDemo" sample (which I compiled successfully): D3D11CreateDevice failed. Could somebody help me? I didn't find any answer; I found similar problems occurring for some people in other programs, but not quite like this. It seems to be GPU-related, but my GPU and Windows 7 support DX11, and I even tried reinstalling the GPU driver, but it didn't solve my problem.
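     When D3D11CreateDevice fails, it helps to capture the HRESULT and retry with an explicit feature-level list and a WARP (software) fallback, which narrows down whether the hardware/driver or the debug-layer flag is the problem. A sketch, not the book's code:

        // Sketch: try hardware first with an explicit feature-level list, then fall back to WARP.
        D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0 };
        D3D_FEATURE_LEVEL obtained;
        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* context = nullptr;

        HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                       D3D11_CREATE_DEVICE_DEBUG,   // remove if the SDK debug layers aren't installed
                                       levels, ARRAYSIZE(levels), D3D11_SDK_VERSION,
                                       &device, &obtained, &context);
        if (FAILED(hr))
        {
            // Software rasterizer: if this succeeds, the earlier failure points at the GPU/driver;
            // if dropping the debug flag alone fixes it, the missing SDK layers were the culprit.
            hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
                                   levels, ARRAYSIZE(levels), D3D11_SDK_VERSION,
                                   &device, &obtained, &context);
        }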
  24. Hello, I started reading Frank Luna's DirectX 11 programming book, but I've encountered a weird problem I'm not able to solve, because I can't even find people with exactly the same problem. As you can see, I'm using VS2010 and DirectX 11, i.e. my setup is exactly the same as in the book. The error occurs in the first sample of the book (BoxDemo.cpp). I think the error is in these lines (see the screenshots below):

        HR(D3DX11CreateEffectFromMemory(compiledShader->GetBufferPointer(), compiledShader->GetBufferSize(),
            0, md3dDevice, &mFX));

     Could anyone help me? Maybe there are people who had the same error or who solved it. Thanks in advance.
  25. Heads up: this question is more theoretical than practical. My (minute) knowledge of D3D11 is self-taught, so please take any premise I make with additional care; I invite everyone to correct anything I say. Now to the actual post. I have a question about the lifetime, in GPU memory, of a D3D11_USAGE_DEFAULT buffer used through an ID3D11ShaderResourceView as a StructuredBuffer. First I need to make sure I am understanding the difference between DEFAULT and DYNAMIC buffers correctly; the way I understand it comes from here.

     D3D11_USAGE_DEFAULT tells the API to store my buffer in memory that is fast for the GPU to access. This does not guarantee (?) it is located in VRAM, but it is more likely to be located there. I can update the buffer (partially) by using UpdateSubresource. Here is some info from the previously mentioned thread.

     D3D11_USAGE_DYNAMIC tells the API to store my buffer in memory that is fast for the CPU to access. This guarantees (?) it will be located in system RAM and not VRAM. Whenever the GPU needs to access the data, it uploads the data to VRAM. Assuming the hardware can handle buffers larger than 128 MB (see footnote 1 here), this theoretically means the size of the buffer is limited by the amount of data that can be transferred from CPU memory to GPU memory in the desired frame time. An estimate for the upper bound, ignoring the time needed to actually process the data, would be the PCIe bandwidth available to the GPU divided by the desired framerate (can we estimate a more precise upper bound?). I can update the buffer using Map/Unmap with one of the following flags:
     • D3D11_MAP_WRITE
     • D3D11_MAP_READ_WRITE
     • D3D11_MAP_WRITE_DISCARD
     • D3D11_MAP_WRITE_NO_OVERWRITE
     • (D3D11_MAP_READ - this would not be for updating, but simply for reading)
     NVIDIA suggests using D3D11_MAP_WRITE_DISCARD (for constant buffers). The reason for this (as I understand from here) is that buffers may still be in use when you try to update them, and MAP_WRITE_DISCARD lets you write to a different region of memory, so that the GPU can discard the old buffer when it is done with it and grab the new one when it needs it. All of this is still under my personal, possibly wrong, premise that a USAGE_DYNAMIC buffer is stored in system RAM and grabbed by the GPU over the PCIe lanes when it needs it. If I were to use MAP_WRITE_NO_OVERWRITE, I could write to the buffer while it is in use, but I would have to guarantee that my implementation does not overwrite anything the GPU is currently using; I assume something undefined happens otherwise. Here I would really need to understand the intricacies of how D3D11 manages CPU/GPU memory, so if you happen to know about these intricacies in relation to the map flags, please share your knowledge. Back to my initial question: a structured buffer is nothing but an ID3D11Buffer wrapped by an ID3D11ShaderResourceView. As I understand it, this means the memory management by D3D11 should be no different. Of course that assumption could be fatally flawed, but that is why I am posting here asking for help. Nonetheless, I have to bind and unbind shader resources, for example for the vertex shader via VSSetShaderResources. How is binding/unbinding (either implicitly, by binding a new resource over the slot, or explicitly, by binding a nullptr) related to the memory management of my ID3D11Buffer by the D3D11 API?
     Assuming I have used a USAGE_DEFAULT buffer, I would hope my structured buffer stays in VRAM until I Release() the resource explicitly, meaning I can bind and unbind without the cost of having to move the buffer from RAM to VRAM. I guess this question can be generalized to the following: do I ever get a guarantee from D3D11 that something is stored in VRAM until I decide to remove/release it? Of course I still need clarification/answers for the rest of the questions in my post, but my difficulties with D3D11 are summarized by a lack of understanding of the lifetime of objects in VRAM, and how I can influence those lifetimes. Thanks for reading this far; I hope someone can help me.
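     For concreteness, the two update paths being compared look like this in code; a sketch using hypothetical buffer names, and with the caveat discussed above that which path the driver backs with VRAM versus system RAM is not something the API promises:

        // Path 1: DEFAULT usage, updated from the CPU with UpdateSubresource (driver schedules the copy).
        context->UpdateSubresource(defaultBuffer, 0, nullptr, cpuData, 0, 0);

        // Path 2: DYNAMIC usage, updated with Map + WRITE_DISCARD (driver hands back a fresh region,
        // so the GPU can keep reading the previous contents until it is done with them).
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        if (SUCCEEDED(context->Map(dynamicBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        {
            memcpy(mapped.pData, cpuData, dataSize);
            context->Unmap(dynamicBuffer, 0);
        }

        // In both cases the SRV created over the buffer holds a reference to it, so merely unbinding
        // the view (e.g. VSSetShaderResources(slot, 1, &nullSRV)) does not free anything; from the
        // API's point of view the resource lives until every view and the buffer itself are Released.
        // Where it physically resides between binds is left to the driver's memory manager.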