
knatterton

Member Since 12 Jul 2012
Offline Last Active Jun 24 2014 01:44 AM

Posts I've Made

In Topic: Terrain tessellation

20 December 2012 - 03:25 PM

Source for that paper is available here: https://developer.nv...sdk-11-direct3d "TerrainTessellation".


Thanks! I never seem to be able to find anything on Nvidia's website.

The first thing to understand is that graphics hardware is a lot like a stream of water:

you never put unnecessary data into that stream;

you must cull out all the data you aren't going to draw.

It's basic, but it's very important.

Even today's very fast GPUs can only render a small portion of a scene with good, realistic quality.


I implemented basic culling of patches in the hull shader, but it didn't have any noticeable effect on the frame rate of our application. I think it's mainly because our terrain isn't very large and we are mostly bottlenecked by the pixel shader stage anyway.
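For reference, the patch culling described above boils down to testing each patch's bounding box against the six view-frustum planes and zeroing the tessellation factors for patches that fail. Here is a minimal CPU-side sketch of that test in C++ (the `Plane`/`AABB` types and function names are hypothetical, not from our code; in the hull shader the same math runs in the patch-constant function):

```cpp
#include <array>

// A plane in the form ax + by + cz + d = 0, with (a, b, c) pointing
// into the frustum interior.
struct Plane { float a, b, c, d; };

// Axis-aligned bounding box of one terrain patch.
struct AABB { float min[3]; float max[3]; };

// Returns true if the box lies entirely outside at least one frustum
// plane and can therefore be culled (e.g. by setting the patch's
// tessellation factors to 0 in the hull shader).
bool patch_outside_frustum(const AABB& box, const std::array<Plane, 6>& frustum)
{
    for (const Plane& p : frustum)
    {
        // Pick the box corner farthest along the plane normal (the
        // "positive vertex"); if even that corner is behind the plane,
        // the whole box is.
        float x = p.a >= 0.0f ? box.max[0] : box.min[0];
        float y = p.b >= 0.0f ? box.max[1] : box.min[1];
        float z = p.c >= 0.0f ? box.max[2] : box.min[2];
        if (p.a * x + p.b * y + p.c * z + p.d < 0.0f)
            return true;  // completely outside this plane
    }
    return false;  // inside or intersecting the frustum
}
```

This is conservative: patches intersecting the frustum are kept, so nothing visible is ever dropped.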

The next optimization I've been considering is using the stream-output stage to save the tessellated terrain into a buffer, which could then be reused whenever the terrain is rendered again during the same frame. Currently the terrain is tessellated three times per frame for different purposes, which seems wasteful. Of course, caching the tessellation results would increase GPU memory usage and probably add some other overhead as well, and since we are already mostly bottlenecked by the pixel shader stage, it might not change the frame rate at all. Does anyone know how this kind of situation is usually handled in games?
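To weigh the memory cost mentioned above, a back-of-the-envelope estimate helps. This sketch assumes a quad patch domain, a uniform integer tessellation factor, and non-indexed triangle lists (what the stream-output stage emits); all numbers and names are illustrative assumptions, not measurements from our application:

```cpp
#include <cstdint>

// Rough size of a stream-output buffer holding one frame of
// tessellated terrain.
std::uint64_t so_buffer_bytes(std::uint32_t patches,
                              std::uint32_t tess_factor,
                              std::uint32_t vertex_stride)
{
    // A quad domain with factor f yields f*f quads = 2*f*f triangles.
    std::uint64_t tris_per_patch = 2ull * tess_factor * tess_factor;
    // Non-indexed triangle list: 3 vertices per triangle.
    std::uint64_t verts = std::uint64_t(patches) * tris_per_patch * 3ull;
    return verts * vertex_stride;
}
```

For example, 1024 patches at factor 16 with a 32-byte vertex comes to 50,331,648 bytes (48 MiB), paid once per frame instead of tessellating three times; whether that trade is worth it depends on how much the duplicate tessellation actually costs, which profiling would have to show.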

In Topic: PIX error message

27 September 2012 - 12:49 AM

Thanks for the replies. It seems that capturing a replayable stream does indeed solve this problem. Capturing a single frame would be better though, because it would be much faster and would allow debugging individual pixels. So I guess I'll keep looking for the actual source of this problem.

In Topic: Problems going fullscreen

10 September 2012 - 12:16 PM

Problem solved!

It was really something I should have figured out sooner, but I was blinded by the examples in books and tutorials. I hadn't previously used IDXGIOutput::GetDisplayModeList() to check which display modes are actually available on my monitor; I just assumed that 1920x1200 would be there and that's it. Apparently my monitor supports only one display mode with that resolution and DXGI_FORMAT_R8G8B8A8_UNORM, and to use it I have to set the scanline ordering to DXGI_MODE_SCANLINE_ORDER_PROGRESSIVE and the scaling to DXGI_MODE_SCALING_UNSPECIFIED. I hadn't realized the importance of these two parameters and had just used whatever values I'd seen in books and tutorials. Does anyone know whether these values are what most monitors support? At least they work with my primary monitor and my TV. I guess the best practice is to call GetDisplayModeList(), find the highest resolution the primary monitor supports, and use the scanline ordering and scaling values it reports when creating the swap chain.
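The mode-selection idea sketched above can be reduced to a small pure function. The `ModeDesc` struct here is a hypothetical stand-in mirroring the DXGI_MODE_DESC fields that matter; in real code you would operate on the array returned by IDXGIOutput::GetDisplayModeList():

```cpp
#include <cstdint>
#include <vector>

// Stand-in for the relevant DXGI_MODE_DESC fields.
struct ModeDesc
{
    std::uint32_t width;
    std::uint32_t height;
    int scanline_ordering;  // a DXGI_MODE_SCANLINE_ORDER_* value
    int scaling;            // a DXGI_MODE_SCALING_* value
};

// Picks the mode with the largest pixel count from the list the output
// actually reports, so the swap chain is created with scanline ordering
// and scaling values the monitor really supports.
const ModeDesc* pick_largest_mode(const std::vector<ModeDesc>& modes)
{
    const ModeDesc* best = nullptr;
    for (const ModeDesc& m : modes)
    {
        if (!best ||
            std::uint64_t(m.width) * m.height >
            std::uint64_t(best->width) * best->height)
        {
            best = &m;
        }
    }
    return best;  // nullptr if the list is empty
}
```

The chosen mode's width, height, scanline ordering, and scaling would then be copied into the BufferDesc of the swap-chain description instead of hard-coded values.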

In Topic: Problems going fullscreen

09 September 2012 - 02:14 PM

I actually added that flag to see if it would solve this problem. I'm not sure what it's supposed to do, but at least it doesn't seem to change the fullscreen behavior in any way.

In Topic: Problems going fullscreen

09 September 2012 - 11:56 AM

I don't seem to be getting anywhere with this problem, so I'll post some code snippets and hope that someone can help me.

Here's the function I use to create the device and the swap chain:
[source lang="cpp"]
bool D3D::initialize(HWND window_handle, bool vsync_enabled)
{
    assert(m_device == nullptr);

    m_window_handle = window_handle;
    m_vsync_enabled = vsync_enabled;

    // Creates device
    UINT device_flags = 0;
#if defined(DEBUG) || defined(_DEBUG)
    device_flags = D3D11_CREATE_DEVICE_DEBUG;
#endif

    D3D_FEATURE_LEVEL features[] = { D3D_FEATURE_LEVEL_11_0 };
    int num_features = 1;
    D3D_FEATURE_LEVEL feature_level;

    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                   device_flags, features, num_features,
                                   D3D11_SDK_VERSION, &m_device, &feature_level,
                                   &m_immediate_context);
    if (FAILED(hr))
    {
        LOG(kLogError) << "Failed to create Direct3D device.";
        return false;
    }
    if (feature_level != D3D_FEATURE_LEVEL_11_0)
    {
        LOG(kLogError) << "Direct3D 11 not supported.";
        return false;
    }

    // Creates swap chain description
    DXGI_SWAP_CHAIN_DESC swap_chain_desc;
    memset(&swap_chain_desc, 0, sizeof(swap_chain_desc));
    swap_chain_desc.BufferDesc.Width = 0;
    swap_chain_desc.BufferDesc.Height = 0;
    swap_chain_desc.BufferDesc.RefreshRate.Numerator = 0;
    swap_chain_desc.BufferDesc.RefreshRate.Denominator = 0;
    swap_chain_desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    swap_chain_desc.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    swap_chain_desc.BufferDesc.Scaling = DXGI_MODE_SCALING_CENTERED;
    swap_chain_desc.SampleDesc.Count = 1;
    swap_chain_desc.SampleDesc.Quality = 0;
    swap_chain_desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swap_chain_desc.BufferCount = 1;
    swap_chain_desc.OutputWindow = m_window_handle;
    swap_chain_desc.Windowed = true;
    swap_chain_desc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
    swap_chain_desc.Flags = 0;

    IDXGIDevice1* pDXGIDevice;
    m_device->QueryInterface(__uuidof(IDXGIDevice1), (void**)&pDXGIDevice);
    IDXGIAdapter1* pDXGIAdapter;
    pDXGIDevice->GetParent(__uuidof(IDXGIAdapter1), (void**)&pDXGIAdapter);
    IDXGIFactory1* pIDXGIFactory;
    pDXGIAdapter->GetParent(__uuidof(IDXGIFactory1), (void**)&pIDXGIFactory);

    // Creates swap chain
    hr = pIDXGIFactory->CreateSwapChain(m_device, &swap_chain_desc, &m_swap_chain);

    // Disables Alt-Enter
    if (FAILED(pIDXGIFactory->MakeWindowAssociation(m_window_handle,
        DXGI_MWA_NO_ALT_ENTER | DXGI_MWA_NO_WINDOW_CHANGES | DXGI_MWA_NO_PRINT_SCREEN)))
    {
        LOG(kLogError) << "MakeWindowAssociation failed.";
    }

    SAFE_RELEASE(pIDXGIFactory);
    SAFE_RELEASE(pDXGIAdapter);
    SAFE_RELEASE(pDXGIDevice);

    if (FAILED(hr))
    {
        LOG(kLogError) << "Failed to create swap chain.";
        return false;
    }

    m_initialized = true;
    return true;
}
[/source]

Here's the function that's called after the application receives a WM_SIZE message. Actually, it's not called until the whole event queue has been processed, because I see no point in resizing the buffers while the window is still being resized.
[source lang="cpp"]
bool D3D::resize_window(int width, int height)
{
    assert(m_initialized == true);

    LOG(kLogInfo) << "Changing window size to: " << width << " " << height;

    m_immediate_context->ClearState();
    SAFE_RELEASE(m_back_buffer_view);
    SAFE_RELEASE(m_depth_stencil_view);

    if (FAILED(m_swap_chain->ResizeBuffers(0, width, height, DXGI_FORMAT_UNKNOWN, 0)))
    {
        LOG(kLogError) << "Failed to resize swap chain's back buffer.";
        return false;
    }

    ID3D11Texture2D* back_buffer_texture;
    if (FAILED(m_swap_chain->GetBuffer(0, __uuidof(ID3D11Texture2D),
                                       (void**)&back_buffer_texture)))
    {
        LOG(kLogError) << "Failed to get swap chain buffer.";
        return false;
    }
    if (FAILED(m_device->CreateRenderTargetView(back_buffer_texture, 0, &m_back_buffer_view)))
    {
        LOG(kLogError) << "Failed to create render target view.";
        return false;
    }
    // Not needed anymore
    SAFE_RELEASE(back_buffer_texture);

    D3D11_TEXTURE2D_DESC desc;
    memset(&desc, 0, sizeof(desc));
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_D32_FLOAT;
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
    desc.CPUAccessFlags = 0;
    desc.MiscFlags = 0;

    ID3D11Texture2D* depth_stencil_texture;
    if (FAILED(m_device->CreateTexture2D(&desc, 0, &depth_stencil_texture)))
    {
        LOG(kLogError) << "Failed to create depth stencil buffer.";
        return false;
    }
    if (FAILED(m_device->CreateDepthStencilView(depth_stencil_texture, 0, &m_depth_stencil_view)))
    {
        LOG(kLogError) << "Failed to create depth stencil view.";
        return false;
    }
    // Not needed anymore
    SAFE_RELEASE(depth_stencil_texture);

    // Initializes viewport
    memset(&m_viewport, 0, sizeof(m_viewport));
    m_viewport.Width = (float)width;
    m_viewport.Height = (float)height;
    m_viewport.MinDepth = 0.0f;
    m_viewport.MaxDepth = 1.0f;

    return true;
}
[/source]

And here's the function that's called when switching between windowed and fullscreen modes. I'm hoping that it would someday switch to fullscreen mode without changing the display mode, but so far I've been unsuccessful.
[source lang="cpp"]
void D3D::set_fullscreen(bool fullscreen)
{
    assert(m_initialized == true);

    if (fullscreen != m_full_screen)
    {
        m_full_screen = fullscreen;
        LOG(kLogInfo) << "Setting fullscreen to " << (m_full_screen ? "TRUE" : "FALSE");

        HRESULT result = m_swap_chain->SetFullscreenState(m_full_screen, nullptr);
        if (FAILED(result))
        {
            LOG(kLogError) << "Failed to set fullscreen state.";
        }
    }
}
[/source]

So... Any ideas? I've read the page about DXGI best practices (http://msdn.microsof...5(v=vs.85).aspx) and I feel that I've more or less followed the instructions that it provides.

I'll include my main loop as well, so you can see how the functions are actually called. Note that I'm not actually drawing anything at the moment. Just creating an empty window and trying to go fullscreen.
[source lang="cpp"]
int Game::run(HINSTANCE hInstance)
{
    MSG msg;

    // Game loop
    while (true)
    {
        // Windows messages
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE) != 0)
        {
            // Quitting
            if (msg.message == WM_QUIT)
            {
                return (int)msg.wParam;
            }
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        if (m_keyboard.key_state(kKeyEsc) != kKeyNotPressed)
        {
            PostQuitMessage(0);
        }
        else if (m_keyboard.key_state(kKeyEnter) != kKeyNotPressed)
        {
            m_fullscreen = !m_fullscreen;
            m_keyboard.reset_key_state(kKeyEnter);
            // Sets fullscreen state
            m_d3d.set_fullscreen(m_fullscreen);
        }

        // If the window has been resized
        if (m_window_resized)
        {
            m_d3d.resize_window(m_client_width, m_client_height);
            m_window_resized = false;
        }
    }
}
[/source]

Any other pointers are welcome as well, if you notice something that seems to make no sense.
