About knatterton

  1. Hi,

    We have encountered a strange problem with our DirectX 9 application when it runs on Windows 8.1 x64. The application works correctly in windowed mode, but when we switch to fullscreen mode the GPU stops working entirely (according to GPU-Z): it renders the first frame and then nothing. The application keeps running - you can hear sounds and click on GUI items - but the image on screen is completely frozen. The only thing that helps is to Alt-Tab out of the application and back in; after that the GPU wakes up and renders again. Switching to windowed mode and back to fullscreen freezes the image again, and Alt-Tabbing to the desktop and back is the only workaround we've found.

    Any ideas what could be causing this? We have only encountered this problem on Windows 8.
  2. Terrain tessellation

    [quote name='Martins Mozeiko' timestamp='1355870135' post='5012228'] Source for that paper is available here: [url=""]https://developer.nv...sdk-11-direct3d[/url] "TerrainTessellation". [/quote]
    Thanks! I never seem to find anything on Nvidia's website.

    [quote name='GeniusPooh' timestamp='1355882077' post='5012276'] What you must think first is Graphical hardware is very so similiar with water.. You never put unnecessary data into that stream.. you must cull out all data you will not draw It's so basic and so important Actually on current superfast GPU can render very small portion with good realistic rendering qualty.. [/quote]
    I implemented basic culling of patches in the hull shader, but it didn't have any noticeable effect on our application's frame rate. I think that's mainly because our terrain isn't very large and we are mostly bottlenecked by the pixel shader stage anyway.

    The next optimization I've been considering is using stream output to save the tessellated terrain into a buffer, which could then be reused whenever the terrain is rendered again during the same frame. Currently the terrain is tessellated three times per frame for different purposes, which doesn't seem to make much sense to me. Of course, saving the tessellation results would increase GPU memory usage and would probably involve some other overhead as well. And since we are already mostly bottlenecked by the pixel shader stage, it might not change the frame rate at all. Does anyone have any idea how this kind of situation is usually handled in games?
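    To get a feel for the memory cost of caching the stream-output results, a rough back-of-envelope estimate helps. This is only a sketch with hypothetical numbers - the patch count, tessellation factor, and vertex stride all depend on the actual terrain:

```cpp
#include <cassert>
#include <cstdint>

// Rough upper bound on the buffer size needed to cache tessellated
// terrain via stream output. All numbers fed into this are hypothetical
// examples, not measurements from any real engine.
std::uint64_t stream_out_bytes(std::uint64_t num_patches,
                               std::uint64_t tess_factor,   // uniform edge factor
                               std::uint64_t vertex_stride) // bytes per output vertex
{
    // A quad patch with uniform factor N tessellates into roughly 2*N*N triangles.
    const std::uint64_t tris_per_patch = 2 * tess_factor * tess_factor;
    // Stream output writes non-indexed triangle lists: 3 vertices per triangle.
    const std::uint64_t verts_per_patch = 3 * tris_per_patch;
    return num_patches * verts_per_patch * vertex_stride;
}
```

    For example, 1024 patches at factor 16 with 32-byte vertices come to roughly 48 MiB - a real cost, but one that might still be cheaper than tessellating the terrain three times per frame.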
  3. Hi, I'm in the process of implementing tessellated terrain for our engine. Everything sort of works already, except there's no frustum culling and the LOD could be improved. While looking for ways to improve the tessellation, I found a paper called "[url=""]DirectX 11 Terrain Tessellation[/url]" by Iain Cantlay. It has lots of interesting ideas about tessellation, but unfortunately there doesn't seem to be any source code available for the implementation. Does anyone know if the source code can be downloaded somewhere?

    In the paper, frustum culling is implemented by dividing the terrain into multiple vertex buffers, which are individually culled and then rendered using instancing. Wouldn't it also be possible to use the hull shader to cull individual patches? Would multiple vertex buffers be the better choice of the two, given that our application is GPU bound and is likely to remain that way?
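    For comparison, per-patch culling on the CPU side boils down to an AABB-against-frustum-planes test. A minimal sketch, with simplified placeholder types rather than engine code (the same test can be done per patch in the constant hull shader by setting the tessellation factors to zero for rejected patches):

```cpp
#include <array>
#include <cassert>

struct Plane { float nx, ny, nz, d; };  // nx*x + ny*y + nz*z + d >= 0 means "inside"
struct AABB  { float min[3], max[3]; };

// Returns false only when the box lies entirely outside one frustum plane.
// Conservative: boxes straddling a plane are kept.
bool patch_visible(const AABB& box, const std::array<Plane, 6>& frustum)
{
    for (const Plane& p : frustum) {
        // Pick the box corner farthest along the plane normal (the "p-vertex").
        const float x = (p.nx >= 0.0f) ? box.max[0] : box.min[0];
        const float y = (p.ny >= 0.0f) ? box.max[1] : box.min[1];
        const float z = (p.nz >= 0.0f) ? box.max[2] : box.min[2];
        if (p.nx * x + p.ny * y + p.nz * z + p.d < 0.0f)
            return false; // completely outside this plane
    }
    return true;
}
```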
  4. PIX error message

    Thanks for the replies. It seems that capturing a replayable stream does indeed solve this problem. Capturing a single frame would be better though, because it would be much faster and would allow debugging individual pixels. So I guess I'll keep looking for the actual source of this problem.
  5. PIX error message

    Hey,

    We've been using PIX successfully in the past, while we were still using DXUT. However, now that we've gotten rid of DXUT, PIX gives the following error message whenever I try to view draw call results:

    A call that previously succeeded failed during playback:
    EID: 128
    Call: IDXGIFactory::CreateSwapChain()
    HRESULT: DXGI_ERROR_INVALID_CALL

    My operating system is Windows 7 x64, PIX is 64-bit, and the debugged application is 32-bit. The application works fine outside of PIX, and there are no Direct3D-related errors or warnings. I tried looking through the PIX documentation for information about this error, but most of it seems to cover Direct3D 9.

    So... what could make the CreateSwapChain() call fail in PIX when it works fine otherwise?
  6. Problems going fullscreen

    Problem solved! It was really something I should have figured out sooner, but I was blinded by the examples in books and tutorials. I had never used the IDXGIOutput::GetDisplayModeList() function to check which display modes are actually available on my monitor; I just assumed that 1920x1200 would be there and that's it. However, my monitor apparently supports only one display mode with that resolution and DXGI_FORMAT_R8G8B8A8_UNORM, and to use it I have to set scanline ordering to DXGI_MODE_SCANLINE_ORDER_PROGRESSIVE and scaling to DXGI_MODE_SCALING_UNSPECIFIED. I hadn't realized the importance of these two parameters and just used whatever values I had seen in books and tutorials.

    Does anyone know whether these values are what most monitors support? At least they work with both my primary monitor and my TV. I guess the best practice would be to find the highest resolution the primary monitor supports, get the matching scanline ordering and scaling values by calling GetDisplayModeList(), and use those when creating the swap chain.
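    That "best practice" can be sketched as a small matching helper. The struct below is a simplified stand-in for DXGI_MODE_DESC so the logic runs anywhere, and the selection rule (largest area, ties broken by refresh rate) is just one reasonable assumption:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified stand-in for DXGI_MODE_DESC: resolution, refresh rate as a
// rational, and the scanline-ordering / scaling enums as plain ints.
struct Mode {
    std::uint32_t width, height;
    std::uint32_t refresh_num, refresh_den;
    int scanline_ordering; // e.g. DXGI_MODE_SCANLINE_ORDER_PROGRESSIVE
    int scaling;           // e.g. DXGI_MODE_SCALING_UNSPECIFIED
};

// Picks the mode with the largest area, breaking ties by refresh rate,
// from a (non-empty) list such as IDXGIOutput::GetDisplayModeList() returns.
// The chosen mode's scanline ordering and scaling are then exactly the
// values the swap chain description should use.
Mode pick_best_mode(const std::vector<Mode>& modes)
{
    Mode best = modes.front();
    for (const Mode& m : modes) {
        const std::uint64_t area      = std::uint64_t(m.width) * m.height;
        const std::uint64_t best_area = std::uint64_t(best.width) * best.height;
        const double hz      = double(m.refresh_num) / m.refresh_den;
        const double best_hz = double(best.refresh_num) / best.refresh_den;
        if (area > best_area || (area == best_area && hz > best_hz))
            best = m;
    }
    return best;
}
```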
  7. Problems going fullscreen

    I actually added that flag to see if it would solve this problem. I'm not sure what it's supposed to do, but at least it doesn't seem to change the fullscreen behavior in any way.
  8. Problems going fullscreen

    I don't seem to be getting anywhere with this problem, so I'll post some code snippets and hope that someone can help me.

    Here's the function I use to create the device and the swap chain:

[source lang="cpp"]
bool D3D::initialize(HWND window_handle, bool vsync_enabled)
{
    assert(m_device == nullptr);

    m_window_handle = window_handle;
    m_vsync_enabled = vsync_enabled;

    // Creates device
    UINT device_flags = 0;
#if defined(DEBUG) || defined(_DEBUG)
    device_flags = D3D11_CREATE_DEVICE_DEBUG;
#endif

    D3D_FEATURE_LEVEL features[] = { D3D_FEATURE_LEVEL_11_0 };
    int num_features = 1;
    D3D_FEATURE_LEVEL feature_level;

    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                   device_flags, features, num_features,
                                   D3D11_SDK_VERSION, &m_device,
                                   &feature_level, &m_immediate_context);
    if (FAILED(hr)) {
        LOG(kLogError) << "Failed to create Direct3D device.";
        return false;
    }
    if (feature_level != D3D_FEATURE_LEVEL_11_0) {
        LOG(kLogError) << "Direct3D 11 not supported.";
        return false;
    }

    // Creates swap chain description
    DXGI_SWAP_CHAIN_DESC swap_chain_desc;
    memset(&swap_chain_desc, 0, sizeof(swap_chain_desc));
    swap_chain_desc.BufferDesc.Width = 0;
    swap_chain_desc.BufferDesc.Height = 0;
    swap_chain_desc.BufferDesc.RefreshRate.Numerator = 0;
    swap_chain_desc.BufferDesc.RefreshRate.Denominator = 0;
    swap_chain_desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    swap_chain_desc.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    swap_chain_desc.BufferDesc.Scaling = DXGI_MODE_SCALING_CENTERED;
    swap_chain_desc.SampleDesc.Count = 1;
    swap_chain_desc.SampleDesc.Quality = 0;
    swap_chain_desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swap_chain_desc.BufferCount = 1;
    swap_chain_desc.OutputWindow = m_window_handle;
    swap_chain_desc.Windowed = true;
    swap_chain_desc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
    swap_chain_desc.Flags = 0;

    IDXGIDevice1* pDXGIDevice;
    m_device->QueryInterface(__uuidof(IDXGIDevice1), (void**)&pDXGIDevice);
    IDXGIAdapter1* pDXGIAdapter;
    pDXGIDevice->GetParent(__uuidof(IDXGIAdapter1), (void**)&pDXGIAdapter);
    IDXGIFactory1* pIDXGIFactory;
    pDXGIAdapter->GetParent(__uuidof(IDXGIFactory1), (void**)&pIDXGIFactory);

    // Creates swap chain
    hr = pIDXGIFactory->CreateSwapChain(m_device, &swap_chain_desc, &m_swap_chain);

    // Disables Alt-Enter
    if (FAILED(pIDXGIFactory->MakeWindowAssociation(m_window_handle,
            DXGI_MWA_NO_ALT_ENTER | DXGI_MWA_NO_WINDOW_CHANGES | DXGI_MWA_NO_PRINT_SCREEN))) {
        LOG(kLogError) << "MakeWindowAssociation failed.";
    }

    SAFE_RELEASE(pIDXGIFactory);
    SAFE_RELEASE(pDXGIAdapter);
    SAFE_RELEASE(pDXGIDevice);

    if (FAILED(hr)) {
        LOG(kLogError) << "Failed to create swap chain.";
        return false;
    }

    m_initialized = true;
    return true;
}
[/source]

    Here's the function that's called after the application receives a WM_SIZE message. Well, actually it's not called until the whole event queue has been processed, because I see no point in resizing the buffers while the window is still being resized.

[source lang="cpp"]
bool D3D::resize_window(int width, int height)
{
    assert(m_initialized == true);
    LOG(kLogInfo) << "Changing window size to: " << width << " " << height;

    m_immediate_context->ClearState();
    SAFE_RELEASE(m_back_buffer_view);
    SAFE_RELEASE(m_depth_stencil_view);

    if (FAILED(m_swap_chain->ResizeBuffers(0, width, height, DXGI_FORMAT_UNKNOWN, 0))) {
        LOG(kLogError) << "Failed to resize swap chain's back buffer.";
        return false;
    }

    ID3D11Texture2D* back_buffer_texture;
    if (FAILED(m_swap_chain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&back_buffer_texture))) {
        LOG(kLogError) << "Failed to get swap chain buffer.";
        return false;
    }
    if (FAILED(m_device->CreateRenderTargetView(back_buffer_texture, 0, &m_back_buffer_view))) {
        LOG(kLogError) << "Failed to create render target view.";
        return false;
    }
    // Not needed anymore
    SAFE_RELEASE(back_buffer_texture);

    D3D11_TEXTURE2D_DESC desc;
    memset(&desc, 0, sizeof(desc));
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_D32_FLOAT;
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
    desc.CPUAccessFlags = 0;
    desc.MiscFlags = 0;

    ID3D11Texture2D* depth_stencil_texture;
    if (FAILED(m_device->CreateTexture2D(&desc, 0, &depth_stencil_texture))) {
        LOG(kLogError) << "Failed to create depth stencil buffer.";
        return false;
    }
    if (FAILED(m_device->CreateDepthStencilView(depth_stencil_texture, 0, &m_depth_stencil_view))) {
        LOG(kLogError) << "Failed to create depth stencil view.";
        return false;
    }
    // Not needed anymore
    SAFE_RELEASE(depth_stencil_texture);

    // Initializes viewport
    memset(&m_viewport, 0, sizeof(m_viewport));
    m_viewport.Width = (float)width;
    m_viewport.Height = (float)height;
    m_viewport.MinDepth = 0.0f;
    m_viewport.MaxDepth = 1.0f;

    return true;
}
[/source]

    And here's the function that's called when switching between windowed and fullscreen modes. I'm hoping that it will someday switch to fullscreen mode without changing the display mode, but so far I've been unsuccessful.

[source lang="cpp"]
void D3D::set_fullscreen(bool fullscreen)
{
    assert(m_initialized == true);

    if (fullscreen != m_full_screen) {
        m_full_screen = fullscreen;
        LOG(kLogInfo) << "Setting fullscreen to " << (m_full_screen ? "TRUE" : "FALSE");
        HRESULT result = m_swap_chain->SetFullscreenState(m_full_screen, nullptr);
        if (FAILED(result)) {
            LOG(kLogError) << "Failed to set fullscreen state.";
        }
    }
}
[/source]

    So... Any ideas? I've read the page about DXGI best practices ([url=""]http://msdn.microsof...5(v=vs.85).aspx[/url]) and I feel that I've more or less followed the instructions it provides.

    I'll include my main loop as well, so you can see how the functions are actually called. Note that I'm not actually drawing anything at the moment - just creating an empty window and trying to go fullscreen.

[source lang="cpp"]
int Game::run(HINSTANCE hInstance)
{
    MSG msg;

    // Game loop
    while (true) {
        // Windows messages
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE) != 0) {
            // Quitting
            if (msg.message == WM_QUIT) {
                return (int)msg.wParam;
            }
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        if (m_keyboard.key_state(kKeyEsc) != kKeyNotPressed) {
            PostQuitMessage(0);
        } else if (m_keyboard.key_state(kKeyEnter) != kKeyNotPressed) {
            m_fullscreen = !m_fullscreen;
            m_keyboard.reset_key_state(kKeyEnter);
            // Sets fullscreen state
            m_d3d.set_fullscreen(m_fullscreen);
        }

        // If the window has been resized
        if (m_window_resized) {
            m_d3d.resize_window(m_client_width, m_client_height);
            m_window_resized = false;
        }
    }
}
[/source]

    Any other pointers are welcome as well, if you notice something that seems to make no sense.
  9. Hey,

    My application starts up successfully in windowed mode, and now I'm trying to switch to fullscreen mode without any change in display mode. In other words, I want to keep the desktop resolution and refresh rate. I expected this to be fairly easy to accomplish, but I've run into a problem I can't seem to solve.

    I've set up my application so that the Enter key switches between windowed and fullscreen modes. I've also disabled Alt-Enter from doing the same, but I'm fairly certain that has nothing to do with this particular problem. This is what happens after I press Enter:

    1. The key press is correctly detected.
    2. IDXGISwapChain::SetFullscreenState(true, nullptr) is called.
    3. Immediately after SetFullscreenState() the application goes fullscreen and a WM_SIZE message is received. This is when the problems start. The application does enter fullscreen mode successfully, but the display mode is also changed, and the width and height received with the WM_SIZE message seem to be incorrect. When running the application on my TV, the display mode changes from 1920x1080@60Hz to 1920x1080i@60Hz and the size received with WM_SIZE is 1768x992. When running it on my monitor, the display mode changes from 1920x1200@60Hz to 1600x1200@60Hz and WM_SIZE delivers 1008x730.
    4. After the WM_SIZE message, IDXGISwapChain::ResizeBuffers() is called with the new width and height, but that hardly matters, because something has already gone wrong before this step.

    This behavior seems strange to me, because I created the swap chain without the DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH flag, and I never call IDXGISwapChain::ResizeTarget(). So what's changing the display mode anyway? I thought calling SetFullscreenState() was supposed to keep the current desktop resolution, but I must have misunderstood something.

    Any obvious things I might have missed? Could my dual monitor setup have something to do with this? I can also post some code samples if that would be helpful. Any help would be appreciated!
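    For reference, the size delivered with WM_SIZE is the client area packed into lParam (width in the low word, height in the high word, as LOWORD/HIWORD extract). A minimal sketch of the unpacking, using a plain integer instead of the Windows LPARAM type so it runs anywhere:

```cpp
#include <cassert>
#include <cstdint>

struct ClientSize { int width, height; };

// WM_SIZE packs the client width in the low 16 bits of lParam and the
// client height in the high 16 bits.
ClientSize unpack_wm_size(std::uint32_t lparam)
{
    return { static_cast<int>(lparam & 0xFFFFu),
             static_cast<int>((lparam >> 16) & 0xFFFFu) };
}
```

    So a reported size of 1768x992 really is the client area the window manager chose, which is why comparing it against the expected desktop resolution is a quick way to detect that a mode change has happened.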