
w00key
Member (12 posts, 100 Neutral reputation)
  1. As MartinsM says, every time you use GetDevice or GetBuffer it increments a reference counter. This means if you don't call Release after these calls, the debug layer will warn that some objects (and anything linked to them) are still alive. Calling Release at these points decrements the ref counter while the object stays alive. For example:

```
ID3D11Texture2D* pBackBuffer = nullptr;
// GetBuffer increments the ref counter
if (FAILED(swapChain_->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&pBackBuffer)))
{
    MessageBox(NULL, "Failed to get the back buffer.", "Error", MB_OK);
    return false;
}
if (FAILED(device_->CreateRenderTargetView(pBackBuffer, NULL, renderTarget)))
{
    pBackBuffer->Release(); // release on the failure path too, or this leaks
    MessageBox(NULL, "Failed to create a render target from the back buffer.", "Error creating render target", MB_OK);
    return false;
}
pBackBuffer->Release(); // decrement the ref counter; the render target view holds its own reference
```

Aside from this, it's mostly a case of looking for sampler/blend/rasterizer states that haven't been released.
  2. I'm not sure of the precise root cause; I've now found it reappearing in my other rendering loop, which uses deferred rendering. Again it's affected by the release/recreate pattern in my texture/bitmap-style font class: the textured quad for the scene draws, then afterwards I create vertices for all the letters as quads and create one buffer holding all of them. This is how I create my vertex buffers:

```
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(FontVertex) * vertexCount_;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;

D3D11_SUBRESOURCE_DATA InitData;
ZeroMemory(&InitData, sizeof(InitData));
InitData.pSysMem = vertices;

if (FAILED(device->CreateBuffer(&bd, &InitData, &vertexBuffer_)))
    return false;
```

However, I'm unable to pinpoint whether any of this is related, other than that removing the option to render the font on screen seems to fix it. (It may just be reducing the chance of the bug happening, the same as with using dynamic vertex buffers.)
  3. Quick update, since there have been quite a few views of this topic. I've discovered what was triggering the issue after managing to snag a screenshot while the application was flickering. It revealed that it wasn't a texture issue but a vertex buffer issue: I could see two sprites, but sprite 1 would occasionally be using sprite 2's vertex buffer (obvious to see, as the two sprites were different sizes). I'm fairly sure my vertex buffer code was correct, since it works on ATI cards, but on NVIDIA cards rapidly releasing and recreating vertex buffers caused issues. I have since changed to a dynamic vertex buffer filled per frame instead (I was planning this change at some point anyway), and the problem no longer appears.
  4. I've noticed that my program will randomly hang, mostly in the release build (the debug build does too, but very rarely). I've been able to reproduce the hang fairly reliably, and it gets stuck in nvwgf2um.dll, which from what I gather is the NVIDIA driver. To reproduce the problem I just have to draw multiple textured 2D quads, setting the texture per frame, which triggers the bug more frequently. So I'm wondering if my approach to setting textures and using them is slightly flawed in some respect:

```
UINT stride = sizeof(Sprite2DVertex);
UINT offset = 0;
context->IASetInputLayout(inputLayout_);
context->IASetVertexBuffers(0, 1, &vertexBuffer_, &stride, &offset);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->PSSetShaderResources(0, 1, &srv);
context->PSSetShader(pixelShader_, nullptr, 0);
context->VSSetShader(vertexShader_, nullptr, 0);

// Grab previous states
ID3D11BlendState* prevBlendState;
ID3D11DepthStencilState* prevDepthStencilState;
ID3D11RasterizerState* prevRasterizerState;
float prevBlendFactor[4];
unsigned int prevBlendSampleMask;
unsigned int prevStencilRef;
context->OMGetBlendState(&prevBlendState, prevBlendFactor, &prevBlendSampleMask);
context->OMGetDepthStencilState(&prevDepthStencilState, &prevStencilRef);
context->RSGetState(&prevRasterizerState);

// Set new states
context->PSSetSamplers(0, 1, &samplerState_);
context->RSSetState(rasterizerState_);
const float factor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
context->OMSetBlendState(blendState_, factor, 0xFFFFFFFF);
context->OMSetDepthStencilState(depthStencilState_, 0);

context->Draw(6, 0);

// Reset to the previous states
context->OMSetBlendState(prevBlendState, prevBlendFactor, prevBlendSampleMask);
context->OMSetDepthStencilState(prevDepthStencilState, prevStencilRef);
context->RSSetState(prevRasterizerState);

// Note: the OMGet*/RSGetState calls above AddRef the returned states,
// so they must be Released here or they leak every frame.
if (prevBlendState) prevBlendState->Release();
if (prevDepthStencilState) prevDepthStencilState->Release();
if (prevRasterizerState) prevRasterizerState->Release();
```

I think I can avoid the get/reset by setting the states to null afterwards?
Lastly, the relevant shader code (ps/vs 4.0):

```
Texture2D textureDiffuse : register(t0);
SamplerState sampleLinear : register(s0);

struct VS_Sprite
{
    float4 colour : COLOR;
    float3 position : POSITION;
    float2 uv : TEXCOORD;
};

struct PS_Sprite
{
    float4 colour : COLOR;
    float4 position : SV_POSITION;
    float2 uv : TEXCOORD;
};

PS_Sprite vs_sprite(VS_Sprite input)
{
    PS_Sprite output;
    output.colour = input.colour;
    output.uv = input.uv;
    output.position.xyz = input.position;
    output.position.w = 1;
    return output;
}

float4 ps_sprite(PS_Sprite input) : SV_Target
{
    float4 output = textureDiffuse.Sample(sampleLinear, input.uv);
    if (output.w > 0)
        output.xyz += input.colour.xyz;
    return output;
}
```

I've tested this issue on three separate computers using GTX 460s, and my driver is the latest, 275.33; a friend also tested on an ATI card with no problems at all. I've been having trouble actually debugging the issue because the high error rate is usually on the release build, and my Parallel Nsight seems to have been broken by the last week's builds. I tried PIX and there are no device warnings, and if the issue happens while in PIX it crashes PIX too. Any questions, or if you need more information, I'll be happy to provide it.
  5. Confirmed that was the cause, thanks for the help.
  6. OK, I will do; it may take a day or so to confirm, as I'm giving him test versions via email. If that is the case, is it just breaking on that function because it's the first DirectX function call? That would explain the weird error.
  7. I've created a sample app to do some testing, and when I passed it to a friend it failed to create the device/swap chain. From the documentation I presumed the HRESULT error code would be one of these: MSDN:D3D11ErrorCodes, from MSDN:D3D11CreateDeviceAndSwapchain. I output the HRESULT and found it was -2005270524, i.e. DXGI_ERROR_UNSUPPORTED. However, my friend is running it on an NVIDIA GTX 570 (latest stable drivers, checked for this app) under Vista x64 SP2 (which I thought was supposed to add DX11 support; he gave me a screenshot of dxdiag confirming DX11). Any ideas on why it's failing? In theory it should fall back across versions, since the code is taken from the samples Microsoft provided.

```
const int createDeviceFlags = D3D11_CREATE_DEVICE_DEBUG; //| D3D10_CREATE_DEVICE_SINGLETHREADED;
//const int createDeviceFlags = 0;
const int nDriverTypes = 3;
const int nFeatureLvls = 6;
HRESULT hr = S_OK;
D3D11_VIEWPORT vp;

// Features and drivers to use/fall back to
D3D_DRIVER_TYPE driverTypes[] =
{
    D3D_DRIVER_TYPE_HARDWARE,
    D3D_DRIVER_TYPE_WARP,
    D3D_DRIVER_TYPE_REFERENCE
};
D3D_FEATURE_LEVEL featureLvls[] =
{
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
    D3D_FEATURE_LEVEL_9_3,
    D3D_FEATURE_LEVEL_9_2,
    D3D_FEATURE_LEVEL_9_1
};

// Window size
RECT rc;
GetClientRect(hWnd, &rc);
const int width = rc.right - rc.left;
const int height = rc.bottom - rc.top;
height_ = height;
width_ = width;

// Swap chain
DXGI_SWAP_CHAIN_DESC swapChainDesc;
ZeroMemory(&swapChainDesc, sizeof(swapChainDesc));
swapChainDesc.BufferCount = 1;
swapChainDesc.BufferDesc.Width = width;
swapChainDesc.BufferDesc.Height = height;
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChainDesc.BufferDesc.RefreshRate.Numerator = 0; // users = 0, originally 60
swapChainDesc.BufferDesc.RefreshRate.Denominator = 0;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.OutputWindow = hWnd;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.SampleDesc.Quality = 0;
swapChainDesc.Windowed = TRUE;

for (int i = 0; i < nDriverTypes; i++)
{
    driverType_ = driverTypes[i]; // was "driverTypes", which never advances the fallback
    hr = D3D11CreateDeviceAndSwapChain(NULL, driverType_, NULL, createDeviceFlags,
                                       featureLvls, nFeatureLvls, D3D11_SDK_VERSION,
                                       &swapChainDesc, &swapChain_, &device_,
                                       &featureLvl_, &context_);
    if (SUCCEEDED(hr))
        break;
}
if (FAILED(hr))
{
    // Failed: quit function / do something
}
```
  8. On the June 2010 SDK I can get the same weirdness using Alt-Enter on Tutorial 7 (the tutorial doesn't resize buffers and will trigger the warning). However, my friend using a Radeon HD 5750 has no problems, so I got the latest driver package for my NVIDIA GTX 460, but the problem remains. OK, I quickly changed my shader to use one set of constant buffers, set just before render like Tutorial 6, and the problem disappears. No idea why multiple constant buffers are causing weird issues.
  9. I've been reading the MSDN docs regarding DXGI fullscreen swapping and the best practices (MSDN link). Could my issue be due to DXGI hanging while waiting for a message to go through? I'm not using any of my own threading code within the app, but the app is still responsive (it just isn't drawn on the window). I know it's responsive because swapping back to fullscreen shows the update to the mesh position.

```
int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE hPrevInst, LPWSTR lpCmdLine, int nCmdShow)
{
    Game game;
    HWND hWnd;
    LRESULT CALLBACK wndProc(HWND, UINT, WPARAM, LPARAM);
    bool setup(HINSTANCE hInst, HWND &hWnd, RECT rc, Game* game);

    RECT rc = { 0, 0, 1024, 768 };
    if (!setup(hInst, hWnd, rc, &game))
        PostQuitMessage(0);
    if (!game.setup(hWnd))
    {
        MessageBox(NULL, "Failed to setup stage", "Rob's Error", MB_OK);
        PostQuitMessage(0);
    }
    ShowWindow(hWnd, nCmdShow);

    MSG msg = {0};
    while (WM_QUIT != msg.message)
    {
        if (PeekMessage(&msg, NULL, 0U, 0U, PM_REMOVE) != 0)
        {
            game.getInput()->handle(&msg);
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        else
        {
            game.run();
        }
    }
    return (int)msg.wParam;
}
```

I've added my message pump loop in case that helps. getInput()->handle just grabs all the keys pressed and mouse movement using VK_ keys; it's also responsive even though the window is blank. game.run() updates the camera from the handled input and also calls render().
  10. I tried your suggestion: I quickly made a function that simply calls Present after EndPaint. This keeps the windowed app drawing while dragging to resize, but the main weird issue remains.
  11. Yeah, it's been changing a lot the past few days with various debugging ideas. I've swapped it back to your suggestion, but the overall problem remains. (Thanks for the fast reply!)
  12. Hello all, I'm new here but a long-time lurker, used to trawling the forum search for snippets to solve past problems. I'm getting a weird issue when exiting from fullscreen (Alt-Enter) or from triggered key presses: the window area is blank, or just not updating the frame image (it's temperamental). If I return to fullscreen it starts rendering again, with any of my keyboard inputs having updated the view. It's a simple app which renders a mesh from a vertex buffer using a simple shader. The issue only happens when I don't have the D3D10_CREATE_DEVICE_SINGLETHREADED flag or when I don't render my mesh at all.

Render:

```
void Game::render()
{
    //if(!paused_)
    //{
    ID3D11DeviceContext* context = dx_.getContext();
    float ClearColor[4] = { 0.0f, 0.125f, 0.3f, 1.0f }; // red, green, blue, alpha

    BOOL fs;
    dx_.getSwapChain()->GetFullscreenState(&fs, nullptr);
    if (fs)
        ClearColor[1] = 1.0f;
    if (hit_)
        ClearColor[2] = 1.0f;

    context->ClearRenderTargetView(dx_.getRenderTarget(), ClearColor);
    context->ClearDepthStencilView(dx_.getDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);

    CB_FrameChanges cb;
    cb.view = XMMatrixTranspose(camera_.getView());
    XMVECTOR det;
    cb.worldInverse = XMMatrixTranspose(XMMatrixInverse(&det, camera_.getWorld()));
    dx_.getContext()->UpdateSubresource(cb_FrameChanges_, 0, nullptr, &cb, 0, 0);

    mesh_.setToContext(context);

    context->VSSetConstantBuffers(0, 1, &cb_NeverChanges_);
    context->VSSetConstantBuffers(1, 1, &cb_ResizeChanges_);
    context->VSSetConstantBuffers(2, 1, &cb_FrameChanges_);
    context->PSSetConstantBuffers(0, 1, &cb_NeverChanges_);
    context->PSSetConstantBuffers(1, 1, &cb_ResizeChanges_);
    context->PSSetConstantBuffers(2, 1, &cb_FrameChanges_);
    context->VSSetShader(vertexShader_, nullptr, 0);
    context->PSSetShader(pixelShader_, nullptr, 0);
    context->Draw(mesh_.getVertexTotal(), 0);

    HRESULT hr = dx_.getSwapChain()->Present(0, 0);
    if (FAILED(hr))
        OutputDebugStringA("Failed to present.\n");
    //}
}
```

WndProc:

```
LRESULT CALLBACK wndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_PAINT)
    {
        PAINTSTRUCT ps;
        BeginPaint(hWnd, &ps);
        EndPaint(hWnd, &ps);
    }
    else if (msg == WM_DESTROY)
    {
        PostQuitMessage(0);
    }
    else if (msg == WM_SIZE)
    {
        Game* game = (Game*)GetWindowLongPtr(hWnd, GWL_USERDATA);
        RECT rc;
        GetClientRect(hWnd, &rc);
        const int width = rc.right - rc.left;
        const int height = rc.bottom - rc.top;
        game->resize(width, height);
    }
    else if (msg == WM_GETMINMAXINFO)
    {
        ((MINMAXINFO*)lParam)->ptMinTrackSize.x = 50;
        ((MINMAXINFO*)lParam)->ptMinTrackSize.y = 50;
    }
    else if (msg == WM_NCCREATE)
    {
        LPCREATESTRUCT cs = (LPCREATESTRUCT)lParam;
        SetWindowLongPtr(hWnd, GWL_USERDATA, (long)cs->lpCreateParams);
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }
    else
        return DefWindowProc(hWnd, msg, wParam, lParam);
    return 0;
}
```

WM_SIZE calls my resize function:

```
bool prevFullscreen = fullscreen_;
BOOL fullscreen = 0;
if (FAILED(swapChain_->GetFullscreenState(&fullscreen, nullptr)))
    OutputDebugStringA("Failed to get fullscreen state.\n");
fullscreen_ = (fullscreen > 0);

// Only resize if the new height/width differ from current and neither is 0
if ((height != height_ || width != width_) && (width != 0 && height != 0) && (fullscreen_ != prevFullscreen))
{
    OutputDebugStringA("Going to resize buffers.\n");
    DXGI_SWAP_CHAIN_DESC swapChainDesc;
    width_ = width;
    height_ = height;

    context_->OMSetRenderTargets(0, 0, 0);
    swapChain_->GetDesc(&swapChainDesc);
    swapChainDesc.Windowed = !fullscreen;
    swapChainDesc.Flags = 0;
    if (fullscreen_)
        swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH;

    // Release render target, depth stencil view, etc.
    depthStencilView_->Release();
    renderTarget_->Release();

    if (FAILED(swapChain_->ResizeBuffers(swapChainDesc.BufferCount, width, height,
                                         swapChainDesc.BufferDesc.Format, swapChainDesc.Flags)))
    {
        MessageBox(NULL, "Failed to resize buffers.", "DirectX Error", MB_OK);
        return false;
    }

    // Recreate everything that was released
    if (!createRenderTarget())
        return false;
    if (!createDepthStencils(width, height))
        return false;

    context_->OMSetRenderTargets(1, &renderTarget_, depthStencilView_);

    D3D11_VIEWPORT vp; // Should be a member of dx!
    vp.Width = (float)width;
    vp.Height = (float)height;
    vp.MinDepth = 0.0f;
    vp.MaxDepth = 1.0f;
    vp.TopLeftX = 0;
    vp.TopLeftY = 0;
    context_->RSSetViewports(1, &vp);
}
return true;
}
```

I've added a fair bit of debug output since this problem has been bugging me. I essentially release the render target and depth stencil view, call ResizeBuffers, and recreate the released objects. I'm running with the debug flag and get no warnings. I've run it in PIX, grabbed a frame while the window was blank, and it showed all the device/swap chain objects as alive and showed me the rendered frame; it's just not appearing on the window. If anyone can shed some light on this weird behaviour or offer insights, I'd appreciate it. If you need any more information, I'll be happy to provide it.

Extra info: Win7 64-bit, the app is Win32, DX SDK June 2010.