Batzer

Members
  • Content count

    28
  • Joined

  • Last visited

Community Reputation

592 Good

About Batzer

  • Rank
    Member

Personal Information

  • Interests
    Education
    Production
    Programming
  1. I haven't used D3D12 yet, but I guess it works the same as in D3D11, meaning you call IUnknown::QueryInterface on your device, command list, and command queue.
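     A minimal sketch of the D3D11 version of this pattern (assuming a valid ID3D11Device*; the same idea should carry over to the D3D12 interfaces):

        #include <d3d11_1.h>

        // Returns the newer ID3D11Device1 interface if the runtime supports it,
        // or nullptr otherwise. The caller must Release() the returned pointer.
        ID3D11Device1* QueryDevice1(ID3D11Device* device)
        {
            ID3D11Device1* device1 = nullptr;
            if (FAILED(device->QueryInterface(IID_PPV_ARGS(&device1))))
                return nullptr;
            return device1;
        }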
  2. Actually, it can make a huge difference. Calling glBufferSubData for each tree every frame makes the GPU wait for the CPU to upload the data, and this can and will kill your performance. The only way to make UBOs actually perform better than glUniformX is to create one huge UBO for all your trees, upload all the matrices at once before rendering, and then use glBindBufferRange to bind the correct transforms for each tree. That is nice and fast; at least that is my experience with UBOs. To avoid synchronization between the GPU and CPU you should use buffer orphaning or manual synchronization. More info here: https://www.opengl.org/wiki/Buffer_Object_Streaming. And here: http://www.gamedev.net/topic/655969-speed-gluniform-vs-uniform-buffer-objects/
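     A rough sketch of what that could look like, assuming an OpenGL 3.1+ context with a loader already set up; Mat4 and DrawTree are placeholders, not from the post above:

        #include <cstring>
        #include <vector>

        struct Mat4 { float m[16]; };

        void DrawTree(size_t treeIndex); // hypothetical: issues the actual draw call

        // One big UBO for all trees: orphan, upload once, then bind sub-ranges.
        void DrawTrees(GLuint ubo, const std::vector<Mat4>& transforms)
        {
            GLint alignInt = 0;
            glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &alignInt);
            const size_t align = static_cast<size_t>(alignInt);
            const size_t stride = ((sizeof(Mat4) + align - 1) / align) * align;

            // Copy all matrices into a padded staging buffer so every slice
            // starts at a legal glBindBufferRange offset.
            std::vector<unsigned char> staging(stride * transforms.size());
            for (size_t i = 0; i < transforms.size(); ++i)
                std::memcpy(&staging[i * stride], &transforms[i], sizeof(Mat4));

            glBindBuffer(GL_UNIFORM_BUFFER, ubo);
            // Orphan the old storage so the driver never stalls on a buffer
            // the GPU may still be reading, then upload everything in one call.
            glBufferData(GL_UNIFORM_BUFFER, static_cast<GLsizeiptr>(staging.size()), nullptr, GL_STREAM_DRAW);
            glBufferSubData(GL_UNIFORM_BUFFER, 0, static_cast<GLsizeiptr>(staging.size()), staging.data());

            for (size_t i = 0; i < transforms.size(); ++i)
            {
                // Expose only this tree's slice to uniform block binding point 0.
                glBindBufferRange(GL_UNIFORM_BUFFER, 0, ubo,
                                  static_cast<GLintptr>(i * stride), sizeof(Mat4));
                DrawTree(i);
            }
        }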
  3. Here are some more OpenGL tutorials:
     http://www.learnopengl.com/
     http://ogldev.atspace.co.uk/
  4. So I actually found the problem. It appears that when a uniform from a uniform block is used as an argument to the built-in function pow, the linker isn't happy and simply never links the shader. I found that the linker is happy when the uniform is first assigned to a local variable. Change

        float specular = pow(max(0.0, NdotH), MaterialSpecular.w);

     to

        float shininess = MaterialSpecular.w;
        float specular = pow(max(0.0, NdotH), shininess);

     This MUST be a driver bug. Another workaround is to use instance names for the uniform blocks, as in my previous post, but that is just annoying. Qualcomm, fix your shit...
  5. Hey guys, I have a very weird problem. When I try to use more than one uniform buffer in a shader, glLinkProgram just goes into oblivion and never returns. If I define the blocks without an instance name, the lock-up occurs:

        layout (std140) uniform LightBlock
        {
            vec3 LightColor;
            vec3 DirToLight;
            vec3 AmbientColor;
        };

        layout (std140) uniform MaterialBlock
        {
            vec4 Diffuse;
            vec4 Specular;
        };

     But when I give the blocks instance names, the program runs just fine:

        layout (std140) uniform LightBlock
        {
            vec3 LightColor;
            vec3 DirToLight;
            vec3 AmbientColor;
        } Light;

        layout (std140) uniform MaterialBlock
        {
            vec4 Diffuse;
            vec4 Specular;
        } Material;

     This is not the behavior described in the specs. To be honest, glLinkProgram should never lock up at all... I tested the code on a Nexus 4 (Adreno 320) and a Nexus 5 (Adreno 330). Am I doing something wrong, or is this a driver bug from Qualcomm?
  6. You could store a collision map in memory and just check whether the picked pixel is transparent. Then you check the images from front to back, and the first one with an opaque pixel at that position is the one you want. The collision map can just be an array of bools or uint8_t values (0 for transparent, 1 for opaque). I don't know if this is the fastest way, but it's definitely faster than reading back from the GPU.
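     A minimal sketch of that idea; Sprite and its fields are hypothetical names for illustration:

        #include <cstdint>
        #include <vector>

        // Hypothetical image with a screen-space rectangle and a collision map
        // built at load time (1 = opaque, 0 = transparent).
        struct Sprite
        {
            int x, y, width, height;
            std::vector<uint8_t> mask; // width * height entries
        };

        // Images are assumed sorted front to back; the first opaque hit wins.
        const Sprite* Pick(const std::vector<Sprite>& frontToBack, int px, int py)
        {
            for (const Sprite& s : frontToBack)
            {
                const int lx = px - s.x;
                const int ly = py - s.y;
                if (lx < 0 || ly < 0 || lx >= s.width || ly >= s.height)
                    continue; // outside this image's rectangle
                if (s.mask[static_cast<size_t>(ly) * s.width + lx])
                    return &s;
            }
            return nullptr; // nothing opaque under the cursor
        }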
  7. Yes, you guys are right; after all my testing it seems that the default DXGI handler for Alt+Enter does not like sizes that are not valid display modes. I looked at some commercial games, and none of them permit resizing the window by dragging, which of course makes sense. I guess the default behavior of DXGI is fine for all game scenarios, because you would only want valid display modes anyway, and resizing the window wouldn't make much sense. And for editor viewports I don't see a reason to go fullscreen.

     But I would still like to see better documentation on this topic. The information is scattered all over MSDN, which makes it very time consuming to find everything, and a lot of things are not mentioned (at least not explicitly)...

     The funny thing is that even when the window's client size is valid, DXGI outputs the warning again. So the default Alt+Enter handler does not handle display modes other than the native one very well. The only fix for that is to check every frame whether the fullscreen state has changed and respond to it (like in post #3). Or just don't allow mode changes; then everything works if you handle WM_SIZE correctly. The last option is of course to do everything on your own.
  8. Yes, you were right: I need to call ResizeBuffers effectively twice when going fullscreen manually. The first time is before calling SetFullscreenState, because DXGI chooses the display mode by looking at the backbuffer size and format. Then, for some reason, we need to call ResizeBuffers again with the same size so DXGI stops printing the warning. I finally got everything to work by checking every frame whether we need to resize or the fullscreen state has changed. It seems like an ugly hack, since MSDN's guidelines don't work for me...

        #ifndef UNICODE
        #define UNICODE
        #endif

        #include <Windows.h>
        #include <d3d11.h>
        #include <sstream>

        #pragma comment(lib, "d3d11.lib")

        namespace
        {
            bool isRunning = true;
            HWND window = nullptr;
            ID3D11Device* device = nullptr;
            ID3D11DeviceContext* context = nullptr;
            IDXGISwapChain* swapChain = nullptr;
            ID3D11RenderTargetView* renderTargetView = nullptr;

            // Fullscreen stuff
            bool needsResize = false;
            BOOL isFullscreen = FALSE;
        }

        void CreateBufferViews()
        {
            ID3D11Texture2D* backBuffer;
            swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
            device->CreateRenderTargetView(backBuffer, nullptr, &renderTargetView);
            backBuffer->Release();
        }

        void ResizeBuffers()
        {
            context->ClearState();
            context->Flush();
            if (renderTargetView)
            {
                renderTargetView->Release();
                renderTargetView = nullptr;
            }
            swapChain->ResizeBuffers(0, 0, 0, DXGI_FORMAT_UNKNOWN, DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH);
            CreateBufferViews();
        }

        void SetFullscreen(UINT width, UINT height, bool fullscreen)
        {
            DXGI_MODE_DESC mode{};
            mode.Width = width;
            mode.Height = height;
            mode.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            if (fullscreen)
            {
                IDXGIOutput* output;
                swapChain->GetContainingOutput(&output);
                DXGI_MODE_DESC closestMatch;
                output->FindClosestMatchingMode(&mode, &closestMatch, device);
                output->Release();
                swapChain->ResizeTarget(&closestMatch);
                // MSDN says DXGI chooses the mode from the backbuffer params, so resize the backbuffer :)
                ResizeBuffers();
                swapChain->SetFullscreenState(true, nullptr);
            }
            else
            {
                swapChain->SetFullscreenState(false, nullptr);
                // We don't want a mode switch, so resize the window after going windowed
                swapChain->ResizeTarget(&mode);
            }
        }

        LRESULT CALLBACK WindowCallback(HWND window, UINT message, WPARAM wParam, LPARAM lParam)
        {
            switch (message)
            {
            case WM_CLOSE:
                DestroyWindow(window);
                break;
            case WM_DESTROY:
                isRunning = false;
                break;
            case WM_SIZE:
                needsResize = true;
                break;
            case WM_KEYDOWN:
            case WM_SYSKEYDOWN:
                if (wParam == VK_ESCAPE)
                {
                    isRunning = false;
                }
                else if (wParam == VK_SPACE) // For testing manual switching
                {
                    if (isFullscreen)
                    {
                        SetFullscreen(1280, 720, false);
                    }
                    else
                    {
                        SetFullscreen(1920, 1080, true);
                    }
                }
                break;
            default:
                return DefWindowProcW(window, message, wParam, lParam);
            }
            return 0;
        }

        int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE, PWSTR, int cmdShow)
        {
            WNDCLASSW windowClass{};
            windowClass.hCursor = LoadCursorW(nullptr, IDC_ARROW);
            windowClass.hIcon = LoadIconW(nullptr, IDI_APPLICATION);
            windowClass.hbrBackground = reinterpret_cast<HBRUSH>(COLOR_WINDOW + 1);
            windowClass.hInstance = hInstance;
            windowClass.lpfnWndProc = &WindowCallback;
            windowClass.lpszClassName = L"SandboxWindowClass";
            windowClass.style = CS_HREDRAW | CS_VREDRAW;
            RegisterClassW(&windowClass);

            RECT windowRect = { 0, 0, 1280, 720 };
            AdjustWindowRect(&windowRect, WS_OVERLAPPEDWINDOW, false);
            auto windowWidth = windowRect.right - windowRect.left;
            auto windowHeight = windowRect.bottom - windowRect.top;
            window = CreateWindowW(L"SandboxWindowClass", L"Sandbox", WS_OVERLAPPEDWINDOW,
                                   CW_USEDEFAULT, CW_USEDEFAULT, windowWidth, windowHeight,
                                   nullptr, nullptr, hInstance, nullptr);
            ShowWindow(window, cmdShow);

            D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, D3D11_CREATE_DEVICE_DEBUG,
                              nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

            IDXGIDevice1* dxgiDevice;
            device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));
            IDXGIAdapter1* adapter;
            dxgiDevice->GetParent(IID_PPV_ARGS(&adapter));
            IDXGIFactory1* factory;
            adapter->GetParent(IID_PPV_ARGS(&factory));

            RECT clientRect;
            GetClientRect(window, &clientRect);
            UINT backBufferWidth = static_cast<UINT>(clientRect.right - clientRect.left);
            UINT backBufferHeight = static_cast<UINT>(clientRect.bottom - clientRect.top);

            DXGI_SWAP_CHAIN_DESC swapChainDesc{};
            swapChainDesc.BufferCount = 1;
            swapChainDesc.BufferDesc.Width = backBufferWidth;
            swapChainDesc.BufferDesc.Height = backBufferHeight;
            swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
            swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH;
            swapChainDesc.OutputWindow = window;
            swapChainDesc.SampleDesc.Count = 1;
            swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
            swapChainDesc.Windowed = true;
            factory->CreateSwapChain(device, &swapChainDesc, &swapChain);
            factory->Release();
            adapter->Release();
            dxgiDevice->Release();

            CreateBufferViews();

            MSG message;
            while (isRunning)
            {
                if (PeekMessageW(&message, nullptr, 0, 0, PM_REMOVE))
                {
                    TranslateMessage(&message);
                    DispatchMessageW(&message);
                }
                else
                {
                    // BEGIN (annoying fullscreen stuff)
                    BOOL fullscreenState;
                    swapChain->GetFullscreenState(&fullscreenState, nullptr);
                    if (needsResize || isFullscreen != fullscreenState)
                    {
                        ResizeBuffers();
                        needsResize = false;
                    }
                    isFullscreen = fullscreenState;
                    // END (annoying fullscreen stuff)

                    static const float color[] = { 1, 0, 1, 1 };
                    context->ClearRenderTargetView(renderTargetView, color);
                    context->OMSetRenderTargets(1, &renderTargetView, nullptr);
                    swapChain->Present(1, 0);
                }
            }

            swapChain->SetFullscreenState(false, nullptr);
            context->ClearState();
            context->Flush();
            renderTargetView->Release();
            swapChain->Release();
            context->Release();
            device->Release();
            UnregisterClassW(L"SandboxWindowClass", hInstance);
            return 0;
        }
  9. Hey guys, I've been trying to get fullscreen working for at least two days now, and the documentation does not mention all the annoying corner cases. For example, when I try to switch to fullscreen manually, I call IDXGISwapChain::ResizeTarget (with a valid mode!) and then IDXGISwapChain::SetFullscreenState. MSDN says that this is the correct way, because WM_SIZE will be sent, which resizes the backbuffer. This works for some modes, but not for all of them: for 1280x720 it works, but with 800x600 I get the DXGI performance warning. Another weird corner case is when the window has the size of a valid mode, like 1280x720, and you then switch to fullscreen via Alt+Enter. This works correctly the first time, but if you repeat it, you get the performance warning again... I attached a minimalistic program which replicates the Alt+Enter issue. Please, somebody tell me what is going wrong here; I'm going crazy.

     PS: I read all relevant threads here on the forum about this topic, but I can't find a solution.

        #ifndef UNICODE
        #define UNICODE
        #endif

        #include <Windows.h>
        #include <d3d11.h>
        #include <sstream>

        #pragma comment(lib, "d3d11.lib")

        namespace
        {
            bool isRunning = true;
            HWND window = nullptr;
            ID3D11Device* device = nullptr;
            ID3D11DeviceContext* context = nullptr;
            IDXGISwapChain* swapChain = nullptr;
            ID3D11RenderTargetView* renderTargetView = nullptr;
        }

        void CreateBufferViews()
        {
            ID3D11Texture2D* backBuffer;
            swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
            device->CreateRenderTargetView(backBuffer, nullptr, &renderTargetView);
            backBuffer->Release();
        }

        void ResizeBuffers()
        {
            context->ClearState();
            context->Flush();
            if (renderTargetView)
            {
                renderTargetView->Release();
                renderTargetView = nullptr;
            }
            swapChain->ResizeBuffers(0, 0, 0, DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH);
            CreateBufferViews();
        }

        LRESULT CALLBACK WindowCallback(HWND window, UINT message, WPARAM wParam, LPARAM lParam)
        {
            switch (message)
            {
            case WM_CLOSE:
                DestroyWindow(window);
                break;
            case WM_DESTROY:
                isRunning = false;
                break;
            case WM_SIZE:
                if (wParam != SIZE_MINIMIZED && swapChain)
                {
                    ResizeBuffers();
                }
                break;
            case WM_KEYDOWN:
            case WM_SYSKEYDOWN:
                if (wParam == VK_ESCAPE)
                {
                    isRunning = false;
                }
                break;
            default:
                return DefWindowProcW(window, message, wParam, lParam);
            }
            return 0;
        }

        int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE, PWSTR, int cmdShow)
        {
            WNDCLASSW windowClass{};
            windowClass.hCursor = LoadCursorW(nullptr, IDC_ARROW);
            windowClass.hIcon = LoadIconW(nullptr, IDI_APPLICATION);
            windowClass.hbrBackground = reinterpret_cast<HBRUSH>(COLOR_WINDOW + 1);
            windowClass.hInstance = hInstance;
            windowClass.lpfnWndProc = &WindowCallback;
            windowClass.lpszClassName = L"SandboxWindowClass";
            windowClass.style = CS_HREDRAW | CS_VREDRAW;
            RegisterClassW(&windowClass);

            window = CreateWindowW(L"SandboxWindowClass", L"Sandbox", WS_OVERLAPPEDWINDOW,
                                   CW_USEDEFAULT, CW_USEDEFAULT, 1280, 720,
                                   nullptr, nullptr, hInstance, nullptr);
            ShowWindow(window, cmdShow);

            D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, D3D11_CREATE_DEVICE_DEBUG,
                              nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

            IDXGIDevice1* dxgiDevice;
            device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));
            IDXGIAdapter1* adapter;
            dxgiDevice->GetParent(IID_PPV_ARGS(&adapter));
            IDXGIFactory1* factory;
            adapter->GetParent(IID_PPV_ARGS(&factory));

            RECT clientRect;
            GetClientRect(window, &clientRect);

            DXGI_SWAP_CHAIN_DESC swapChainDesc{};
            swapChainDesc.BufferCount = 1;
            swapChainDesc.BufferDesc.Width = clientRect.right;
            swapChainDesc.BufferDesc.Height = clientRect.bottom;
            swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
            swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH;
            swapChainDesc.OutputWindow = window;
            swapChainDesc.SampleDesc.Count = 1;
            swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
            swapChainDesc.Windowed = true;
            factory->CreateSwapChain(device, &swapChainDesc, &swapChain);
            factory->MakeWindowAssociation(window, 0);
            factory->Release();
            adapter->Release();
            dxgiDevice->Release();

            CreateBufferViews();

            MSG message;
            while (isRunning)
            {
                if (PeekMessageW(&message, nullptr, 0, 0, PM_REMOVE))
                {
                    TranslateMessage(&message);
                    DispatchMessageW(&message);
                }
                else
                {
                    static const float color[] = { 1, 0, 1, 1 };
                    context->ClearRenderTargetView(renderTargetView, color);
                    context->OMSetRenderTargets(1, &renderTargetView, nullptr);
                    swapChain->Present(1, 0);
                }
            }

            swapChain->SetFullscreenState(false, nullptr);
            context->ClearState();
            context->Flush();
            renderTargetView->Release();
            swapChain->Release();
            context->Release();
            device->Release();
            UnregisterClassW(L"SandboxWindowClass", hInstance);
            return 0;
        }
  10. For 2D-texture-only support there isn't much to it. You only need to create an ID3D11Texture2D with a correctly filled D3D11_TEXTURE2D_DESC. You could then upload the image to the GPU via ID3D11DeviceContext::Map/Unmap or ID3D11DeviceContext::UpdateSubresource, depending on the Usage you specified for your texture. But the easiest way would be to just pass the image data when creating the texture (I think it's the 2nd parameter); this is also the only way to fill an immutable resource. After the texture is created, you can just create an ID3D11ShaderResourceView for it.

     When you want to support the full DDS format you need to be able to create 1D, 2D and 3D textures, as well as cubemaps. A bit more work, but doable. If you want to auto-generate mipmaps for your textures, you need to create them with the render-target bind flag so ID3D11DeviceContext::GenerateMips can do its work.

     Btw: http://msdn.microsoft.com/en-us/library/windows/desktop/hh404569(v=vs.85).aspx
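     A minimal sketch of the easy path described above; device, pixels, width and height are assumed to come from your setup and image/DDS loader (RGBA8 data):

        #include <d3d11.h>

        bool CreateTextureFromPixels(ID3D11Device* device, const void* pixels,
                                     UINT width, UINT height,
                                     ID3D11Texture2D** outTex,
                                     ID3D11ShaderResourceView** outSrv)
        {
            D3D11_TEXTURE2D_DESC desc{};
            desc.Width = width;
            desc.Height = height;
            desc.MipLevels = 1;
            desc.ArraySize = 1;
            desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            desc.SampleDesc.Count = 1;
            desc.Usage = D3D11_USAGE_IMMUTABLE;          // filled once, at creation
            desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

            D3D11_SUBRESOURCE_DATA initData{};
            initData.pSysMem = pixels;
            initData.SysMemPitch = width * 4;            // bytes per row for RGBA8

            // The 2nd parameter of CreateTexture2D is indeed the initial data.
            if (FAILED(device->CreateTexture2D(&desc, &initData, outTex)))
                return false;
            return SUCCEEDED(device->CreateShaderResourceView(*outTex, nullptr, outSrv));
        }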
  11. Thanks for the long and very informative answer, Ravyne!

      He doesn't explicitly recommend it, but it is presented as the solution for making sure that systems are initialized in a specific order. So it's kind of recommended, yeah. It just seemed odd to me that this is the presented solution, because I expected more best-practice code from a seasoned professional.

      My first thought for a solution would have been to just wrap everything in a class and rely on RAII for the correct ordering. The systems would then be passed to the objects that need them.

        class RenderEngine
        {
        public:
            RenderEngine()
            {
                // Init here
            }

            ~RenderEngine()
            {
                // Free here
            }
        };

        class Application
        {
        public:
            void RunGameLoop();

        private:
            // Declare systems in the correct order if needed
            RenderEngine renderEngine;
            AudioEngine audioEngine;
            // ...
        };

        int main()
        {
            Application app;
            app.RunGameLoop();
            return 0;
        }
  12. Hey guys, I have recently been reading Jason Gregory's book, Game Engine Architecture, 2nd Edition. The first thing I noticed is that in the first chapter, about engine systems, he uses a lot of global variables. This seems to be pretty common in production code. His approach is something like this:

        class RenderEngine
        {
        public:
            RenderEngine()
            {
                // Do nothing
            }

            ~RenderEngine()
            {
                // Do nothing
            }

            void startUp()
            {
                // Init here
            }

            void shutDown()
            {
                // Free here
            }
        };

        RenderEngine gRenderEngine;

        // ...

        int main()
        {
            gRenderEngine.startUp();
            runGameLoop();
            gRenderEngine.shutDown();
            return 0;
        }

      I don't really understand why they choose to do it like this instead of making use of dependency injection. Maybe somebody with more knowledge can explain?
  13. Abstract away the API?

     Damn, this seems to be much more work than I initially thought. Thanks for your input, guys. Maybe I will just stick with D3D11 for now. The one problem I see with all these solutions is that the little optimizations you can do with each specific API can't really be applied when you abstract them all into one unified API. My current idea is to implement the high-level classes separately for the different APIs. So I would provide a Mesh class which is implemented in either D3D11 or D3D12; this way, I imagine, the little optimizations can still be used.

        class IRenderer
        {
        public:
            virtual std::unique_ptr<IMesh> CreateMesh(...) = 0;
        };

        class Renderer11 : public IRenderer
        {
        public:
            virtual std::unique_ptr<IMesh> CreateMesh(...) override;
        };

        class Renderer12 : public IRenderer
        {
        public:
            virtual std::unique_ptr<IMesh> CreateMesh(...) override;
        };

        class IMesh {};
        class Mesh11 : public IMesh {};
        class Mesh12 : public IMesh {};

        // and so on

     The only problem here is how to abstract away the ID3DxxDeviceContext, which would be needed for drawing the mesh, without inventing a new API for it.
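     A hedged sketch of one way this could go (DrawMesh is a hypothetical method, not from the post above): let each renderer own its device context internally and accept meshes to draw, so the context type never leaks out of the backend.

        #include <memory>

        class IMesh
        {
        public:
            virtual ~IMesh() = default;
        };

        class Mesh11 : public IMesh { /* D3D11 vertex/index buffers live here */ };

        // The renderer owns its ID3D11DeviceContext internally; callers never see it.
        class IRenderer
        {
        public:
            virtual ~IRenderer() = default;
            virtual void DrawMesh(const IMesh& mesh) = 0; // hypothetical
        };

        class Renderer11 : public IRenderer
        {
        public:
            void DrawMesh(const IMesh& mesh) override
            {
                const auto& mesh11 = static_cast<const Mesh11&>(mesh);
                (void)mesh11;
                // ... set vertex/index buffers and call DrawIndexed on the
                // context that Renderer11 owns internally ...
            }
        };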
  14. Hey guys, with the upcoming Direct3D 12 I want to get ready to integrate the new API into my framework, so I thought of adding a new layer of abstraction on top of the existing code. But is this really the "right" way of doing things? The other thing is that some parts of my code can't really be abstracted that well, for example my Material and Texture classes:

        class Texture
        {
        public:
            Texture(ID3D11ShaderResourceView* theTexture);
            // ...
        };

        class Material
        {
        public:
            void AttachDiffuseMap(const Texture& map);
            // ...
            void Bind(ID3D11DeviceContext* ctx, ...);
        };

      How would I make Texture::Texture and Material::Bind more abstract when they need something very API-specific? Are there any patterns for this kind of design problem?
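      A minimal sketch of one common pattern for this kind of problem (not from the original post; ITexture, Texture11 and BindDiffuseMap11 are illustrative names): hide the API-specific object behind an abstract handle, and downcast only inside the backend that created it.

        #include <d3d11.h>

        // Backend-agnostic handle; only the D3D11 code knows the concrete type.
        class ITexture
        {
        public:
            virtual ~ITexture() = default;
        };

        class Texture11 : public ITexture
        {
        public:
            explicit Texture11(ID3D11ShaderResourceView* srv) : srv_(srv) {}
            ID3D11ShaderResourceView* Srv() const { return srv_; }
        private:
            ID3D11ShaderResourceView* srv_;
        };

        // Hypothetical backend-side bind; Material stores ITexture pointers and
        // the D3D11 renderer downcasts only where the context is actually needed.
        void BindDiffuseMap11(ID3D11DeviceContext* ctx, const ITexture& diffuse)
        {
            ID3D11ShaderResourceView* srv = static_cast<const Texture11&>(diffuse).Srv();
            ctx->PSSetShaderResources(0, 1, &srv);
        }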