Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.
Found 1000 results

  1. Hello everyone, I'm looking for some advice since I have some issues with the textures for my mouse pointer and I'm not sure where to start looking. I have checked everything that I know of, and now I need advice on what to look for in my code when I try to fix it. I have a planet that is rendered, a UI that is rendered, and a mouse pointer that is rendered: first the planet, then the UI, and the mouse pointer last. When the planet is done rendering I turn off the Z-buffer and enable alpha blending while I render the UI and the mouse pointer. In the mouse pointer's pixel shader I look for black pixels and blend those away. But it seems to also blend parts of the texture that aren't supposed to be blended. I'm going to provide some screenshots of the effect. In the first image you can see that the mouse pointer changes to a whiter color when it is in front of the planet; the correct color is the one displayed when it's not in front of the planet. The second thing I find weird is that the mouse pointer is behind the UI text even though it is rendered after it. I also tried switching them around and it makes no difference. The UI doesn't have the same issue when it's above the planet; its color is displayed as it should be. Here is the pixel shader code, if that helps anyone get a better grip on the issue:

     float4 color;
     color = shaderTexture.Sample(sampleType, input.tex);
     if (color.b == 0.0f && color.r == 0.0f && color.g == 0.0f)
     {
         color.a = 0.0f;
     }
     else
     {
         color.a = 1.0f;
     }
     return color;

     The UI uses almost the same code but only checks the r channel of the color; I'm using all 3 channels for the mouse pointer because its colors might be a bit more off. The idea is that if the pixel is black it should be blended, and that does work; it's just that somehow it also does something to the parts that shouldn't be blended. Right now I'm leaning towards there being something in the pixel shader, since I can set all pixels to white and it behaves as it should and gives me a white box. Any pointers on what kind of issue I'm looking at here, and what to search for to find a solution, will be appreciated a lot. Best Regards and Thanks in Advance Toastmastern
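     For reference, the kind of alpha-blend state a UI/cursor pass like this usually relies on; a minimal sketch, assuming a single render target (the device and context names are illustrative):

     // Standard "source over" blending: out = src.rgb * src.a + dst.rgb * (1 - src.a)
     D3D11_BLEND_DESC blendDesc{};
     blendDesc.RenderTarget[0].BlendEnable = TRUE;
     blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
     blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
     blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
     blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
     blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
     blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
     blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

     ID3D11BlendState* blendState = nullptr;
     device->CreateBlendState(&blendDesc, &blendState);
     context->OMSetBlendState(blendState, nullptr, 0xffffffff);

     With this state, only the alpha the pixel shader writes controls the mix; the render target's destination alpha never enters the equation.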
  2. Hey, I can't find this information anywhere on the web and I'm wondering about a specific optimization... Let's say I have hundreds of 3D textures which I need to process separately in a compute shader. Each invocation needs different data in its constant buffer, BUT many of the 3D textures don't need to update their CB contents every frame. Would it be better to create just one CB resource, bind it once at startup, and in the loop map the data for each consecutive shader invocation? Or would it be better to create hundreds of separate CB resources, map them only when needed, and just bind the appropriate CB before each shader invocation? This depends on how exactly those resources are managed internally by DirectX and what binding actually does... I would be very grateful if somebody shared their experience!
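     As a point of reference for the first option, per-dispatch updates of one dynamic constant buffer typically look like this; a minimal sketch, assuming the buffer was created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE (the names are illustrative):

     // One reusable dynamic CB, refilled before each dispatch.
     D3D11_MAPPED_SUBRESOURCE mapped{};
     for (const auto& params : perTextureParams)
     {
         // WRITE_DISCARD hands back a fresh region; the driver renames the
         // buffer so the GPU can still read the previous contents.
         context->Map(constantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
         memcpy(mapped.pData, &params, sizeof(params));
         context->Unmap(constantBuffer, 0);

         context->CSSetConstantBuffers(0, 1, &constantBuffer);
         context->Dispatch(groupsX, groupsY, groupsZ);
     }

     Note that a binding points at the resource, not at a snapshot of its contents, so the buffer only needs to be re-bound if something else was bound to that slot in between.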
  3. Hi, I'm trying to do a comparison with a DirectInput GUID, e.g. GUID_XAxis or GUID_YAxis, against a value I get from GetProperty, e.g.

     DIPROPRANGE propRange;
     DIJoystick->GetProperty(DIPROP_RANGE, &propRange.diph); // This will crash
     if (GUID_XAxis == MAKEDIPROP(propRange.diph.dwObj))
         ;

     How should I be comparing the GUID from GetProperty?
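     For context, dwObj in the property header identifies the object by offset or ID (depending on dwHow); it is not a GUID, so it cannot be compared with GUID_XAxis directly. The axis-type GUIDs come from object enumeration instead; a minimal sketch, assuming an initialized IDirectInputDevice8 (callback and variable names are illustrative):

     // Each enumerated object carries guidType, which can be compared
     // against GUID_XAxis, GUID_YAxis, etc.
     BOOL CALLBACK EnumAxesCallback(LPCDIDEVICEOBJECTINSTANCE obj, LPVOID /*pvRef*/)
     {
         if (obj->guidType == GUID_XAxis)
         {
             // obj->dwOfs / obj->dwType identify this axis for later
             // GetProperty/SetProperty calls on it.
         }
         return DIENUM_CONTINUE;
     }

     // ...
     joystick->EnumObjects(EnumAxesCallback, nullptr, DIDFT_AXIS);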
  4. Hi guys, I'm trying to learn this stuff but running into some problems 😕 I've compiled my .hlsl into a header file which contains a global variable with the precompiled shader data:

     //...
     // Approximately 83 instruction slots used
     #endif
     const BYTE g_vs[] =
     {
         68, 88, 66, 67, 143, 82, 13, 236,
         152, 133, 219, 113, 173, 135, 18, 87,
         122, 208, 124, 76, 1, 0, 0, 0,
         16, 76, 0, 0, 6, 0,
     //....

     Now, following the "Compiling at build time to header files" example at this MSDN link, I've included the header file in my main.cpp and I'm trying to create the vertex shader like this:

     hr = g_d3dDevice->CreateVertexShader(g_vs, sizeof(g_vs), nullptr, &g_d3dVertexShader);
     if (FAILED(hr))
     {
         return -1;
     }

     This is failing, entering the if and returning -1. Can someone point out what I'm doing wrong? 😕
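     When a Create* call fails like this, the D3D11 debug layer usually names the exact cause in the debugger output window (one common cause is a shader compiled for a target, e.g. vs_5_0, above the created device's feature level). A minimal sketch of enabling the debug layer at device creation, guarded to debug builds:

     UINT flags = 0;
     #if defined(_DEBUG)
     flags |= D3D11_CREATE_DEVICE_DEBUG; // debug layer explains why the blob is rejected
     #endif

     HRESULT hr = D3D11CreateDevice(
         nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
         nullptr, 0, D3D11_SDK_VERSION,
         &device, &featureLevel, &context);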
  5. I have a problem with SSAO: there is a black area on the left-hand side. Shader code:

     Texture2D<uint> texGBufferNormal : register(t0);
     Texture2D<float> texGBufferDepth : register(t1);
     Texture2D<float4> texSSAONoise : register(t2);

     float3 GetUV(float3 position)
     {
         float4 vp = mul(float4(position, 1.0), ViewProject);
         vp.xy = float2(0.5, 0.5) + float2(0.5, -0.5) * vp.xy / vp.w;
         return float3(vp.xy, vp.z / vp.w);
     }

     float3 GetNormal(in Texture2D<uint> texNormal, in int3 coord)
     {
         return normalize(2.0 * UnpackNormalSphermap(texNormal.Load(coord)) - 1.0);
     }

     float3 GetPosition(in Texture2D<float> texDepth, in int3 coord)
     {
         float4 position = 1.0;
         float2 size;
         texDepth.GetDimensions(size.x, size.y);
         position.x = 2.0 * (coord.x / size.x) - 1.0;
         position.y = -(2.0 * (coord.y / size.y) - 1.0);
         position.z = texDepth.Load(coord);
         position = mul(position, ViewProjectInverse);
         position /= position.w;
         return position.xyz;
     }

     float3 GetPosition(in float2 coord, float depth)
     {
         float4 position = 1.0;
         position.x = 2.0 * coord.x - 1.0;
         position.y = -(2.0 * coord.y - 1.0);
         position.z = depth;
         position = mul(position, ViewProjectInverse);
         position /= position.w;
         return position.xyz;
     }

     float DepthInvSqrt(float nonLinearDepth)
     {
         return 1 / sqrt(1.0 - nonLinearDepth);
     }

     float GetDepth(in Texture2D<float> texDepth, float2 uv)
     {
         return texGBufferDepth.Sample(samplerPoint, uv);
     }

     float GetDepth(in Texture2D<float> texDepth, int3 screenPos)
     {
         return texGBufferDepth.Load(screenPos);
     }

     float CalculateOcclusion(in float3 position, in float3 direction, in float radius, in float pixelDepth)
     {
         float3 uv = GetUV(position + radius * direction);
         float d1 = DepthInvSqrt(GetDepth(texGBufferDepth, uv.xy));
         float d2 = DepthInvSqrt(uv.z);
         return step(d1 - d2, 0) * min(1.0, radius / abs(d2 - pixelDepth));
     }

     float2 GetRNDTexFactor(float2 texSize) // was declared float, but it returns a float2
     {
         float width;
         float height;
         texGBufferDepth.GetDimensions(width, height);
         return float2(width, height) / texSize;
     }

     float main(FullScreenPSIn input) : SV_TARGET0
     {
         int3 screenPos = int3(input.Position.xy, 0);
         float depth = DepthInvSqrt(GetDepth(texGBufferDepth, screenPos));
         float3 normal = GetNormal(texGBufferNormal, screenPos);
         float3 position = GetPosition(texGBufferDepth, screenPos) + normal * SSAO_NORMAL_BIAS;
         float3 random = normalize(2.0 * texSSAONoise.Sample(samplerNoise, input.Texcoord * GetRNDTexFactor(SSAO_RND_TEX_SIZE)).rgb - 1.0);

         float SSAO = 0;
         [unroll]
         for (int index = 0; index < SSAO_KERNEL_SIZE; index++)
         {
             float3 dir = reflect(SamplesKernel[index].xyz, random);
             SSAO += CalculateOcclusion(position, dir * sign(dot(dir, normal)), SSAO_RADIUS, depth);
         }
         return 1.0 - SSAO / SSAO_KERNEL_SIZE;
     }
  6. I've been following this tutorial -> https://www.3dgep.com/introduction-to-directx-11/#The_Main_Function , did all the steps, and I ended up with the main.cpp you can see below. The problem is the call at line 516, g_d3dDeviceContext->UpdateSubresource(g_d3dConstantBuffers[CB_Frame], 0, nullptr, &g_ViewMatrix, 0, 0); which is crashing the program. The very odd thing is that the first time through it works fine; it crashes the app the second time it is called... Can someone help me understand why? 😕 I have no idea...

     #include <Direct3D_11PCH.h>

     //Shaders
     using namespace DirectX;

     // Globals
     //Window
     const unsigned g_WindowWidth = 1024;
     const unsigned g_WindowHeight = 768;
     const char* g_WindowClassName = "DirectXWindowClass";
     const char* g_WindowName = "DirectX 11";
     HWND g_WinHnd = nullptr;
     const bool g_EnableVSync = true;

     //Device and SwapChain
     ID3D11Device* g_d3dDevice = nullptr;
     ID3D11DeviceContext* g_d3dDeviceContext = nullptr;
     IDXGISwapChain* g_d3dSwapChain = nullptr;

     //RenderTarget view
     ID3D11RenderTargetView* g_d3dRenderTargerView = nullptr;
     //DepthStencil view
     ID3D11DepthStencilView* g_d3dDepthStencilView = nullptr;
     //Depth Buffer Texture
     ID3D11Texture2D* g_d3dDepthStencilBuffer = nullptr;
     // Define the functionality of the depth/stencil stages
     ID3D11DepthStencilState* g_d3dDepthStencilState = nullptr;
     // Define the functionality of the rasterizer stage
     ID3D11RasterizerState* g_d3dRasterizerState = nullptr;
     D3D11_VIEWPORT g_Viewport{};

     //Vertex Buffer data
     ID3D11InputLayout* g_d3dInputLayout = nullptr;
     ID3D11Buffer* g_d3dVertexBuffer = nullptr;
     ID3D11Buffer* g_d3dIndexBuffer = nullptr;

     //Shader Data
     ID3D11VertexShader* g_d3dVertexShader = nullptr;
     ID3D11PixelShader* g_d3dPixelShader = nullptr;

     //Shader Resources
     enum ConstantBuffer
     {
         CB_Application,
         CB_Frame,
         CB_Object,
         NumConstantBuffers
     };
     ID3D11Buffer* g_d3dConstantBuffers[ConstantBuffer::NumConstantBuffers];

     //Demo parameters
     XMMATRIX g_WorldMatrix;
     XMMATRIX g_ViewMatrix;
     XMMATRIX g_ProjectionMatrix;

     // Vertex data for a colored cube.
     struct VertexPosColor
     {
         XMFLOAT3 Position;
         XMFLOAT3 Color;
     };

     VertexPosColor g_Vertices[8] =
     {
         { XMFLOAT3(-1.0f, -1.0f, -1.0f), XMFLOAT3(0.0f, 0.0f, 0.0f) }, // 0
         { XMFLOAT3(-1.0f,  1.0f, -1.0f), XMFLOAT3(0.0f, 1.0f, 0.0f) }, // 1
         { XMFLOAT3( 1.0f,  1.0f, -1.0f), XMFLOAT3(1.0f, 1.0f, 0.0f) }, // 2
         { XMFLOAT3( 1.0f, -1.0f, -1.0f), XMFLOAT3(1.0f, 0.0f, 0.0f) }, // 3
         { XMFLOAT3(-1.0f, -1.0f,  1.0f), XMFLOAT3(0.0f, 0.0f, 1.0f) }, // 4
         { XMFLOAT3(-1.0f,  1.0f,  1.0f), XMFLOAT3(0.0f, 1.0f, 1.0f) }, // 5
         { XMFLOAT3( 1.0f,  1.0f,  1.0f), XMFLOAT3(1.0f, 1.0f, 1.0f) }, // 6
         { XMFLOAT3( 1.0f, -1.0f,  1.0f), XMFLOAT3(1.0f, 0.0f, 1.0f) }  // 7
     };

     WORD g_Indicies[36] =
     {
         0, 1, 2, 0, 2, 3,
         4, 6, 5, 4, 7, 6,
         4, 5, 1, 4, 1, 0,
         3, 2, 6, 3, 6, 7,
         1, 5, 6, 1, 6, 2,
         4, 0, 3, 4, 3, 7
     };

     //Forward declarations
     LRESULT CALLBACK WindowProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam);
     bool LoadContent();
     int Run();
     void Update(float deltaTime);
     void Clear(const FLOAT clearColor[4], FLOAT clearDepth, UINT8 clearStencil);
     void Present(bool vSync);
     void Render();
     void CleanUp();
     int InitApplication(HINSTANCE hInstance, int cmdShow);
     int InitDirectX(HINSTANCE hInstance, BOOL vsync);

     int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR cmd, int cmdShow)
     {
         UNREFERENCED_PARAMETER(hPrevInstance);
         UNREFERENCED_PARAMETER(cmd);

         // Check for DirectX Math library support.
         if (!XMVerifyCPUSupport())
         {
             MessageBox(nullptr, TEXT("Failed to verify DirectX Math library support."), nullptr, MB_OK);
             return -1;
         }
         if (InitApplication(hInstance, cmdShow) != 0)
         {
             MessageBox(nullptr, TEXT("Failed to create application window."), nullptr, MB_OK);
             return -1;
         }
         if (InitDirectX(hInstance, g_EnableVSync) != 0)
         {
             MessageBox(nullptr, TEXT("Failed to initialize DirectX."), nullptr, MB_OK);
             CleanUp();
             return -1;
         }
         if (!LoadContent())
         {
             MessageBox(nullptr, TEXT("Failed to load content."), nullptr, MB_OK);
             CleanUp();
             return -1;
         }

         int returnCode = Run();
         CleanUp();
         return returnCode;
     }

     int Run()
     {
         MSG msg{};
         static DWORD previousTime = timeGetTime();
         static const float targetFramerate = 30.0f;
         static const float maxTimeStep = 1.0f / targetFramerate;

         while (msg.message != WM_QUIT)
         {
             if (PeekMessage(&msg, 0, 0, 0, PM_REMOVE))
             {
                 TranslateMessage(&msg);
                 DispatchMessage(&msg);
             }
             else
             {
                 DWORD currentTime = timeGetTime();
                 float deltaTime = (currentTime - previousTime) / 1000.0f;
                 previousTime = currentTime;
                 deltaTime = std::min<float>(deltaTime, maxTimeStep);

                 Update(deltaTime);
                 Render();
             }
         }
         return static_cast<int>(msg.wParam);
     }

     LRESULT CALLBACK WindowProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
     {
         PAINTSTRUCT paintstruct;
         HDC hDC;

         switch (msg)
         {
         case WM_PAINT:
         {
             hDC = BeginPaint(hwnd, &paintstruct);
             EndPaint(hwnd, &paintstruct);
         } break;
         case WM_DESTROY:
         {
             PostQuitMessage(0);
         } break;
         default:
             return DefWindowProc(hwnd, msg, wParam, lParam);
         }
         return 0;
     }

     int InitApplication(HINSTANCE hInstance, int cmdShow)
     {
         //Register window class
         WNDCLASSEX mainWindow{};
         mainWindow.cbSize = sizeof(WNDCLASSEX);
         mainWindow.style = CS_HREDRAW | CS_VREDRAW;
         mainWindow.lpfnWndProc = &WindowProc;
         mainWindow.hInstance = hInstance;
         mainWindow.hCursor = LoadCursor(NULL, IDC_ARROW);
         mainWindow.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
         mainWindow.lpszMenuName = nullptr;
         mainWindow.lpszClassName = g_WindowClassName;
         if (!RegisterClassEx(&mainWindow))
         {
             return -1;
         }

         RECT client{ 0, 0, g_WindowWidth, g_WindowHeight };
         AdjustWindowRect(&client, WS_OVERLAPPEDWINDOW, false);

         // Create window
         g_WinHnd = CreateWindowEx(NULL, g_WindowClassName, g_WindowName,
             WS_OVERLAPPEDWINDOW | WS_VISIBLE,
             CW_USEDEFAULT, CW_USEDEFAULT,
             client.right - client.left, client.bottom - client.top,
             nullptr, nullptr, hInstance, nullptr);
         if (!g_WinHnd)
         {
             return -1;
         }

         UpdateWindow(g_WinHnd);
         return 0;
     }

     int InitDirectX(HINSTANCE hInstance, BOOL vsync)
     {
         assert(g_WinHnd != nullptr);

         RECT client{};
         GetClientRect(g_WinHnd, &client);
         unsigned int clientWidth = client.right - client.left;
         unsigned int clientHeight = client.bottom - client.top;

         //Direct3D initialization
         HRESULT hr{};

         //SwapChainDesc
         DXGI_RATIONAL refreshRate = vsync ? DXGI_RATIONAL{ 1, 60 } : DXGI_RATIONAL{ 0, 1 };

         DXGI_SWAP_CHAIN_DESC swapChainDesc{};
         swapChainDesc.BufferDesc.Width = clientWidth;
         swapChainDesc.BufferDesc.Height = clientHeight;
         swapChainDesc.BufferDesc.RefreshRate = refreshRate;
         swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
         swapChainDesc.BufferDesc.Scaling = DXGI_MODE_SCALING_CENTERED;
         swapChainDesc.SampleDesc.Count = 1;
         swapChainDesc.SampleDesc.Quality = 0;
         swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
         swapChainDesc.BufferCount = 1;
         swapChainDesc.OutputWindow = g_WinHnd;
         swapChainDesc.Windowed = true;
         swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;

         UINT createDeviceFlags{};
     #if _DEBUG
         createDeviceFlags = D3D11_CREATE_DEVICE_DEBUG;
     #endif

         //Feature levels
         const D3D_FEATURE_LEVEL features[]{ D3D_FEATURE_LEVEL_11_0 };
         D3D_FEATURE_LEVEL featureLevel;

         hr = D3D11CreateDeviceAndSwapChain(
             nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
             createDeviceFlags, features, _countof(features), D3D11_SDK_VERSION,
             &swapChainDesc, &g_d3dSwapChain, &g_d3dDevice, &featureLevel, &g_d3dDeviceContext);
         if (FAILED(hr))
         {
             return -1;
         }

         //Render target view
         ID3D11Texture2D* backBuffer;
         hr = g_d3dSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer));
         if (FAILED(hr))
         {
             return -1;
         }
         hr = g_d3dDevice->CreateRenderTargetView(backBuffer, nullptr, &g_d3dRenderTargerView);
         if (FAILED(hr))
         {
             return -1;
         }
         SafeRelease(backBuffer);

         //Depth stencil view
         D3D11_TEXTURE2D_DESC depthStencilBufferDesc{};
         depthStencilBufferDesc.Width = clientWidth;
         depthStencilBufferDesc.Height = clientHeight;
         depthStencilBufferDesc.MipLevels = 1;
         depthStencilBufferDesc.ArraySize = 1;
         depthStencilBufferDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
         depthStencilBufferDesc.SampleDesc.Count = 1;
         depthStencilBufferDesc.SampleDesc.Quality = 0;
         depthStencilBufferDesc.Usage = D3D11_USAGE_DEFAULT;
         depthStencilBufferDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;

         hr = g_d3dDevice->CreateTexture2D(&depthStencilBufferDesc, nullptr, &g_d3dDepthStencilBuffer);
         if (FAILED(hr))
         {
             return -1;
         }
         hr = g_d3dDevice->CreateDepthStencilView(g_d3dDepthStencilBuffer, nullptr, &g_d3dDepthStencilView);
         if (FAILED(hr))
         {
             return -1;
         }

         //Set states
         D3D11_DEPTH_STENCIL_DESC depthStencilStateDesc{};
         depthStencilStateDesc.DepthEnable = true;
         depthStencilStateDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
         depthStencilStateDesc.DepthFunc = D3D11_COMPARISON_LESS;
         depthStencilStateDesc.StencilEnable = false;
         hr = g_d3dDevice->CreateDepthStencilState(&depthStencilStateDesc, &g_d3dDepthStencilState);
         if (FAILED(hr))
         {
             return -1;
         }

         D3D11_RASTERIZER_DESC rasterizerStateDesc{};
         rasterizerStateDesc.FillMode = D3D11_FILL_SOLID;
         rasterizerStateDesc.CullMode = D3D11_CULL_BACK;
         rasterizerStateDesc.FrontCounterClockwise = FALSE;
         rasterizerStateDesc.DepthClipEnable = TRUE;
         rasterizerStateDesc.ScissorEnable = FALSE;
         rasterizerStateDesc.MultisampleEnable = FALSE;
         hr = g_d3dDevice->CreateRasterizerState(&rasterizerStateDesc, &g_d3dRasterizerState);
         if (FAILED(hr))
         {
             return -1;
         }

         //Set viewport
         g_Viewport.Width = static_cast<float>(clientWidth);
         g_Viewport.Height = static_cast<float>(clientHeight);
         g_Viewport.TopLeftX = 0.0f;
         g_Viewport.TopLeftY = 0.0f;
         g_Viewport.MinDepth = 0.0f;
         g_Viewport.MaxDepth = 1.0f;

         return 0;
     }

     bool LoadContent()
     {
         //Load shaders
         HRESULT hr;
         assert(g_d3dDevice);

         //VS
         ID3DBlob* vsBlob = nullptr;
         D3DReadFileToBlob(L"../Shaders/SimpleVertexShader.cso", &vsBlob);
         assert(vsBlob);

         hr = g_d3dDevice->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), nullptr, &g_d3dVertexShader);
         if (FAILED(hr))
         {
             SafeRelease(vsBlob);
             return false;
         }

         //Create VS input layout
         D3D11_INPUT_ELEMENT_DESC vertexLayoutDesc[] =
         {
             { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, offsetof(VertexPosColor, Position), D3D11_INPUT_PER_VERTEX_DATA, 0 },
             { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, offsetof(VertexPosColor, Color),    D3D11_INPUT_PER_VERTEX_DATA, 0 }
         };

         hr = g_d3dDevice->CreateInputLayout(vertexLayoutDesc, _countof(vertexLayoutDesc), vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), &g_d3dInputLayout);
         if (FAILED(hr))
         {
             SafeRelease(vsBlob);
             return false;
         }
         SafeRelease(vsBlob);

         //PS
         ID3DBlob* psBlob = nullptr;
         D3DReadFileToBlob(L"../Shaders/SimplePixelShader.cso", &psBlob);
         assert(psBlob);

         hr = g_d3dDevice->CreatePixelShader(psBlob->GetBufferPointer(), psBlob->GetBufferSize(), nullptr, &g_d3dPixelShader);
         SafeRelease(psBlob);
         if (FAILED(hr))
         {
             return false;
         }

         //Load vertex buffer
         D3D11_BUFFER_DESC vertexBufferDesc{};
         vertexBufferDesc.ByteWidth = sizeof(VertexPosColor) * _countof(g_Vertices);
         vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
         vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

         D3D11_SUBRESOURCE_DATA resourceData{};
         resourceData.pSysMem = g_Vertices;

         hr = g_d3dDevice->CreateBuffer(&vertexBufferDesc, &resourceData, &g_d3dVertexBuffer);
         if (FAILED(hr))
         {
             return false;
         }

         //Load index buffer
         D3D11_BUFFER_DESC indexBufferDesc{};
         indexBufferDesc.ByteWidth = sizeof(WORD) * _countof(g_Indicies);
         indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
         indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;

         resourceData.pSysMem = g_Indicies;

         hr = g_d3dDevice->CreateBuffer(&indexBufferDesc, &resourceData, &g_d3dIndexBuffer);
         if (FAILED(hr))
         {
             return false;
         }

         //Load constant buffers
         D3D11_BUFFER_DESC cBufferDesc{};
         cBufferDesc.ByteWidth = sizeof(XMMATRIX);
         cBufferDesc.Usage = D3D11_USAGE_DEFAULT;
         cBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;

         for (size_t bufferID = 0; bufferID < NumConstantBuffers; bufferID++)
         {
             hr = g_d3dDevice->CreateBuffer(&cBufferDesc, nullptr, &g_d3dConstantBuffers[bufferID]);
             if (FAILED(hr))
             {
                 return false;
             }
         }

         //Setup projection matrix
         RECT client{};
         GetClientRect(g_WinHnd, &client);
         float clientWidth = static_cast<float>(client.right - client.left);
         float clientHeight = static_cast<float>(client.bottom - client.top);

         g_ProjectionMatrix = DirectX::XMMatrixPerspectiveFovLH(XMConvertToRadians(45.0f), clientWidth / clientHeight, 0.1f, 100.0f);
         g_d3dDeviceContext->UpdateSubresource(g_d3dConstantBuffers[CB_Application], 0, nullptr, &g_ProjectionMatrix, 0, 0);

         return true;
     }

     void Update(float deltaTime)
     {
         XMVECTOR eyePosition = XMVectorSet(0, 0, -10, 1);
         XMVECTOR focusPoint = XMVectorSet(0, 0, 0, 1);
         XMVECTOR upDirection = XMVectorSet(0, 1, 0, 0);
         g_ViewMatrix = DirectX::XMMatrixLookAtLH(eyePosition, focusPoint, upDirection);
         g_d3dDeviceContext->UpdateSubresource(g_d3dConstantBuffers[CB_Frame], 0, nullptr, &g_ViewMatrix, 0, 0);

         static float angle = 0.0f;
         angle += 90.0f * deltaTime;
         XMVECTOR rotationAxis = XMVectorSet(0, 1, 1, 0);
         g_WorldMatrix = DirectX::XMMatrixRotationAxis(rotationAxis, XMConvertToRadians(angle));
         g_d3dDeviceContext->UpdateSubresource(g_d3dConstantBuffers[CB_Object], 0, nullptr, &g_WorldMatrix, 0, 0);
     }

     void Clear(const FLOAT clearColor[4], FLOAT clearDepth, UINT8 clearStencil)
     {
         g_d3dDeviceContext->ClearRenderTargetView(g_d3dRenderTargerView, clearColor);
         g_d3dDeviceContext->ClearDepthStencilView(g_d3dDepthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, clearDepth, clearStencil);
     }

     void Present(bool vSync)
     {
         if (vSync)
         {
             g_d3dSwapChain->Present(1, 0);
         }
         else
         {
             g_d3dSwapChain->Present(0, 0);
         }
     }

     void Render()
     {
         assert(g_d3dDevice);
         assert(g_d3dDeviceContext);

         Clear(Colors::CornflowerBlue, 1.0f, 0);

         //IA
         const UINT vertexStride = sizeof(VertexPosColor);
         const UINT offset = 0;
         g_d3dDeviceContext->IASetVertexBuffers(0, 1, &g_d3dVertexBuffer, &vertexStride, &offset);
         g_d3dDeviceContext->IASetInputLayout(g_d3dInputLayout);
         g_d3dDeviceContext->IASetIndexBuffer(g_d3dIndexBuffer, DXGI_FORMAT_R16_UINT, 0);
         g_d3dDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

         //VS
         g_d3dDeviceContext->VSSetShader(g_d3dVertexShader, nullptr, 0);
         g_d3dDeviceContext->VSGetConstantBuffers(0, NumConstantBuffers, g_d3dConstantBuffers);

         //RS
         g_d3dDeviceContext->RSSetState(g_d3dRasterizerState);
         g_d3dDeviceContext->RSSetViewports(1, &g_Viewport);

         //PS
         g_d3dDeviceContext->PSSetShader(g_d3dPixelShader, nullptr, 0);

         //OM
         g_d3dDeviceContext->OMSetRenderTargets(1, &g_d3dRenderTargerView, g_d3dDepthStencilView);
         g_d3dDeviceContext->OMSetDepthStencilState(g_d3dDepthStencilState, 1);

         //Draw
         g_d3dDeviceContext->DrawIndexed(_countof(g_Indicies), 0, 0);

         Present(g_EnableVSync);
     }

     void CleanUp()
     {
         SafeRelease(g_d3dVertexShader);
         SafeRelease(g_d3dPixelShader);
         SafeRelease(g_d3dVertexBuffer);
         SafeRelease(g_d3dIndexBuffer);
         SafeRelease(g_d3dInputLayout);
         SafeRelease(g_d3dDepthStencilBuffer);
         for (size_t bufferID = 0; bufferID < NumConstantBuffers; bufferID++)
         {
             SafeRelease(g_d3dConstantBuffers[bufferID]);
         }
         SafeRelease(g_d3dDepthStencilState);
         SafeRelease(g_d3dRasterizerState);
         SafeRelease(g_d3dRenderTargerView);
         SafeRelease(g_d3dDepthStencilView);
         SafeRelease(g_d3dSwapChain);
         SafeRelease(g_d3dDeviceContext);
         SafeRelease(g_d3dDevice);
     }
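     For reference, the D3D11 call that binds constant buffers to the vertex-shader stage is VSSetConstantBuffers; the Get variant used in Render() above queries what is currently bound and overwrites the array you pass in. A minimal sketch of the Set form against the globals above:

     // Bind (not query) the three constant buffers to VS slots 0..2.
     g_d3dDeviceContext->VSSetConstantBuffers(0, ConstantBuffer::NumConstantBuffers, g_d3dConstantBuffers);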
  7. Hello everyone, after a few years of break from coding and my planet render game, I'm giving it a go again from a different angle. What I'm struggling with now is that I have created a frustum that works fine, for now at least; it does what it's supposed to do, although not perfectly. But with the frustum came very low FPS, since what I'm doing right now, just to see if the frustum works, is to recreate the vertex buffer every frame that the camera detects movement. This is of course very costly and not the way to do it. That's why I'm now trying to learn how to create a dynamic vertex buffer instead, and to map and unmap the vertices. In the end my goal is to update only the part of the vertex buffer that is needed, but one step at a time ^^ So below is the code I use to create the dynamic buffer. The issue is that I want the size of the vertex buffer to be big enough to handle bigger vertex buffers than just mPlanetMesh.vertices.size(), due to more vertices being added later when I start to do LOD and such; the first render isn't the biggest one I will need.

     vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC;
     vertexBufferDesc.ByteWidth = mPlanetMesh.vertices.size();
     vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
     vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
     vertexBufferDesc.MiscFlags = 0;
     vertexBufferDesc.StructureByteStride = 0;

     vertexData.pSysMem = &mPlanetMesh.vertices[0];
     vertexData.SysMemPitch = 0;
     vertexData.SysMemSlicePitch = 0;

     result = device->CreateBuffer(&vertexBufferDesc, &vertexData, &mVertexBuffer);
     if (FAILED(result))
     {
         return false;
     }

     What happens is that the call result = device->CreateBuffer(&vertexBufferDesc, &vertexData, &mVertexBuffer); crashes with an access violation. When I put vertices.size() in, it works without issues, but when I try to set it to something like vertices.size() * 2, it crashes. I googled my eyes dry tonight but I don't seem to find people with the same kind of issue; I've read that the vertex buffer can be bigger if needed. What am I doing wrong here? Best Regards and Thanks in advance Toastmastern
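     For context, ByteWidth is in bytes, and when CreateBuffer is given initial data it reads ByteWidth bytes from pSysMem, which would explain an access violation once the buffer is declared bigger than the data behind the pointer. A minimal sketch of an over-allocated dynamic vertex buffer, assuming a Vertex struct (the names are illustrative); it is created empty and filled through Map:

     const size_t capacity = mPlanetMesh.vertices.size() * 2; // room to grow

     D3D11_BUFFER_DESC desc{};
     desc.Usage = D3D11_USAGE_DYNAMIC;
     desc.ByteWidth = UINT(capacity * sizeof(Vertex)); // bytes, not element count
     desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
     desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

     // No initial data: nothing for CreateBuffer to read past the end of.
     ID3D11Buffer* vb = nullptr;
     HRESULT hr = device->CreateBuffer(&desc, nullptr, &vb);

     // Upload the current mesh through Map/Unmap.
     D3D11_MAPPED_SUBRESOURCE mapped{};
     context->Map(vb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
     memcpy(mapped.pData, mPlanetMesh.vertices.data(),
            mPlanetMesh.vertices.size() * sizeof(Vertex));
     context->Unmap(vb, 0);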
  8. Hi, I have a terrain engine where the terrain and water are on different grids, so I'm trying to render planar reflections of the terrain into the water grid, after reading some web pages and docs and also trying to learn from the RasterTek reflections demo and the small water bodies demo. What I do is as follows:

     1. Create a reflection view matrix. Technically I ONLY flip the camera position in the Y direction (positive Y is up) and add to it 2 * waterLevel. Then I update the view matrix and save that matrix for later. The code:

     void Camera::UpdateReflectionViewMatrix(float waterLevel)
     {
         mBackupPosition = mPosition;
         mBackupLook = mLook;
         mPosition.y = -mPosition.y + 2.0f * waterLevel;
         //mLook.y = -mLook.y + 2.0f * waterLevel;
         UpdateViewMatrix();
         mReflectionView = View();
     }

     2. Render the terrain geometry to a 512x512 render target using the reflection view matrix and opposite culling (my terrain uses front culling by nature, so I use back culling for the reflection render pass). Let me say that I checked with the graphics debugger and the reflection render target looks "OK" at this stage (picture attached). I don't know if the fact that the terrain only shows in the top area of the texture is expected or not, but it seems OK.

     3. Render the reflection texture into the water using projective texturing. I hope this step is OK code-wise. Basically I send the WorldReflectionViewProj matrix created at step 1 to the shader for the projective texture coordinates, I then convert the position in the DS (water and terrain are drawn with tessellation) to the projective tex coords using that WorldReflectionViewProj matrix, and sample the reflection texture after setting up the coordinates in the PS. Here is the code:

     //Send the ReflectionWorldViewProj matrix to the shader:
     XMStoreFloat4x4(&mPerFrameCB.Data.ReflectionWorldViewProj, XMMatrixTranspose((mWorld * pCam->GetReflectedView()) * mProj));

     //Setting up the projective tex coords in the DS:
     Output.projTexPosition = mul(float4(worldPos.xyz, 1), g_ReflectionWorldViewProj);

     //Setting up the coords in the PS and sampling the reflection texture:
     float2 projTexCoords;
     projTexCoords.x = input.projTexPosition.x / input.projTexPosition.w / 2.0 + 0.5;
     projTexCoords.y = -input.projTexPosition.y / input.projTexPosition.w / 2.0 + 0.5;
     projTexCoords += normal.xz * 0.025;
     float4 reflectionColor = gReflectionMap.SampleLevel(SamplerClampLinear, projTexCoords, 0);
     texColor += reflectionColor * 0.25;

     I'll add that when compiling the PS I get a warning on those divisions by input.projTexPosition.w for a possible float division by 0; I tried adding an offset or a minimum to the dividing term, but that still didn't solve my issue. Here is the problem itself: at relatively flat view angles I'm seeing correct reflections (or at least so it seems), but as I pitch the camera down, I see artifacts which I have no idea where they are coming from. I'm culling the terrain in the reflection render pass when it's lower than the water height (I have heightmaps for that). Any help will be appreciated, because I don't know what is wrong or where else to look.
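     As a cross-check for step 1, DirectXMath can build the reflection transform directly from the water plane, which avoids hand-flipping individual camera fields (the look vector also needs mirroring, not just the position). A minimal sketch, assuming Y-up and a water plane at height waterLevel:

     // Plane y = waterLevel, expressed as 0*x + 1*y + 0*z - waterLevel = 0.
     XMVECTOR waterPlane = XMVectorSet(0.0f, 1.0f, 0.0f, -waterLevel);
     XMMATRIX reflect = XMMatrixReflect(waterPlane);

     // Reflect the world first, then apply the regular view matrix:
     XMMATRIX reflectionView = reflect * view;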
  9. Hi, I am looking for a useful command-line texture compression tool with the rights to ship it with my application. It should have the following capabilities:
     • Supports all major image formats as source files (JPEG, PNG, TGA, BMP)
     • Exports as DDS
     • Compression formats BC1, BC2, BC3, BC4, BC7
     I am currently using the nvdxt tool from Nvidia, but it does not support BC4 (which I need for one-channel 8-bit textures). Everything else I found wasn't really useful. Any suggestions? Thx
  10. I have been trying to create a BlendState for my UI text sprites so that they are both alpha-blended (so you can see them) and invert the pixels they are rendered over (again, so you can see them). In order to get alpha blending you would need:
     SrcBlend = SRC_ALPHA
     DestBlend = INV_SRC_ALPHA
     and in order to have inverted colours you would need something like:
     SrcBlend = INV_DEST_COLOR
     DestBlend = INV_SRC_COLOR
     and you can't have both. So I have come to the conclusion that it's not possible; am I right?
  11. In the traditional way, it takes 6 passes for a point light and many passes for cascaded shadow mapping to generate the shadow maps. Recently I learnt a method that uses a geometry shader to generate all the shadow maps in one pass: I specify a render target and a depth-stencil buffer which are both Texture2DArrays in DirectX 11. It looks much better than the traditional way, I think. But after I implemented it, I found cascaded shadow mapping runs much slower than the traditional way; the fps dropped from 60 to 35 and I don't know why. I guess maybe I should do some culling, or maybe the geometry shader is just not efficient. I want to know why it runs slower even though I reduced the draw calls from 8 to 1. Should I abandon this method, or is there any way to optimize it to run more efficiently than multi-pass rendering? Here is the GS code:

     [maxvertexcount(24)]
     void main(triangle DepthGsIn input[3] : SV_POSITION, inout TriangleStream<DepthPsIn> output)
     {
         for (uint k = 0; k < 8; ++k)
         {
             DepthPsIn element;
             element.RTIndex = k;
             for (uint i = 0; i < 3; ++i)
             {
                 float2 shadowSlopeBias = calculateShadowSlopeBias(input[i].normal, -g_cameras[k].world[1]);
                 float shadowBias = shadowSlopeBias.y * g_cameras[k].shadowMapParameters.x + g_cameras[k].shadowMapParameters.y;
                 element.position = input[i].position + shadowBias * g_cameras[k].world[1];
                 element.position = mul(element.position, g_cameras[k].viewProjection);
                 element.depth = element.position.z / element.position.w;
                 output.Append(element);
             }
             output.RestartStrip();
         }
     }
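     For context, routing a primitive to a slice of a Texture2DArray render target relies on the SV_RenderTargetArrayIndex semantic on the GS output. A minimal sketch of what the DepthPsIn struct above would need to look like (the field names mirror the code above; the struct itself is an assumption):

     struct DepthPsIn
     {
         float4 position : SV_Position;
         float  depth    : TEXCOORD0;
         uint   RTIndex  : SV_RenderTargetArrayIndex; // selects the array slice to rasterize into
     };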
  12. Hey, there are a few things which confuse me regarding DirectX 11 and HLSL shaders in general. I would be very grateful for your advice!
     1. Let's take for example a scene which invokes 2 totally separate pipeline render passes interchangeably. I understand I need to bind the correct shaders for each render pass, and potentially blend/depth or rasterizer state, but what about resources such as constant buffers, shader resource views and unordered access views? Assuming that the second render pass uses none of the resources used by the first pass, do I still need to unbind the resources and clean the pipeline state after the first pass? Or is it OK to leave the pipeline with unbound garbage, since anything I'd need to bind for the second pass would overwrite the contents of the appropriate register slots anyway?
     2. Is it good practice to assign register slots manually to all resources in HLSL?
     3. I thought about manually assigning register slots for every distinct render pass, up to the maximum slot limit if necessary. For example, in 1 render pass I invoke 3 CSs, 2 VSs and 2 PSs, and for all resources used by those shaders I try to fill as many register slots as necessary, potentially reusing the same slot many times across shaders sharing the same resource. I was wondering if there is any performance penalty or gain when I bind all of my needed resources at the start of a render pass and never have to do it again until the next render pass? This means potentially binding a lot of registers and having an excessive number of bound resources for every shader that is run. (See the sketch after this list for what manual slot assignment looks like.)
     4. Is it good practice to create a separate include file for every resource that occurs in >= 2 shader files, or is it better to duplicate the declarations? In the first case the code is, in my opinion, easier to maintain and edit, but it might be harder to read if there are too many includes. I've come up with a compromise between these 2: create a separate include file for every CB that occurs in >= 2 shader files, and a separate include file for every sampler I ever need to use. All other resources like SRVs and UAVs I prefer to duplicate in multiple shaders, because they take much less space than a CB, for example... I'm not sure however if that's good practice.
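     A minimal HLSL sketch of the explicit register assignment discussed in points 2 and 3 (all names are illustrative):

     cbuffer PerFrame  : register(b0) { float4x4 gViewProj; };
     cbuffer PerObject : register(b1) { float4x4 gWorld; };

     Texture2D    gAlbedo  : register(t0);
     SamplerState gSampler : register(s0);
     RWTexture2D<float4> gOutput : register(u0); // UAV slot (compute shader, or PS on FL11+)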
  13. I want to implement a particle system based on the stream-out structure in my bigger project. I saw a few articles about that method and I built one particle. It works almost correctly, but in the geometry shader with stream out I can't get the values of InitVel.z and Age: they are always 0. If I change the order so Age comes earlier (for example Age before Position), it works fine for Age, but the 6th float in the order is still 0. It looks like it pushes only the first 5 floats. I have no idea what I'm doing wrong, because I tried changing almost everything (creating an input layout for the vertex matching the SO declaration entries, changing the number of strides to a static 28, changing it to 32, but in that case it draws chaotically, so the stride size is probably good). I think it is a problem with the limit of NumEntries in the declaration entries, but on MSDN I saw the limit for DirectX is D3D11_SO_STREAM_COUNT (4) * D3D11_SO_OUTPUT_COMPONENT_COUNT (128), not 5. Please can you look at this code and give me a way, or hope, of implementing it correctly? Thanks a lot for the help.

     Structure of the particle:

     struct Particle
     {
         Particle() {}
         Particle(float x, float y, float z, float vx, float vy, float vz, float l /*UINT typ*/)
             : InitPos(x, y, z), InitVel(vx, vy, vz), Age(l) /*, Type(typ)*/ {}

         XMFLOAT3 InitPos;
         XMFLOAT3 InitVel;
         float Age;
         //UINT Type;
     };

     SO entries:

     D3D11_SO_DECLARATION_ENTRY PartlayoutSO[] =
     {
         { 0, "POSITION", 0, 0, 3, 0 }, // output all components of position
         { 0, "VELOCITY", 0, 0, 3, 0 },
         { 0, "AGE",      0, 0, 1, 0 }
         //{ 0, "TYPE",   0, 0, 1, 0 }
     };

     Global variables:

     //streamout shaders
     ID3D11VertexShader* Part_VSSO;
     ID3D11GeometryShader* Part_GSSO;
     ID3DBlob* Part_GSSO_Buffer;
     ID3DBlob* Part_VSSO_Buffer;

     //normal shaders
     ID3D11VertexShader* Part_VS;
     ID3D11GeometryShader* Part_GS;
     ID3DBlob* Part_GS_Buffer;
     ID3D11PixelShader* Part_PS;
     ID3DBlob* Part_VS_Buffer;
     ID3DBlob* Part_PS_Buffer;

     ID3D11Buffer* PartVertBufferInit;
     //ID3D11Buffer* Popy;
     ID3D11Buffer* mDrawVB;
     ID3D11Buffer* mStreamOutVB;
     ID3D11InputLayout* PartVertLayout; // I tried to set an input layout too

     void ParticleSystem::InitParticles()
     {
         mFirstRun = true;
         srand(time(NULL));

         hr = D3DCompileFromFile(L"ParticleVertexShaderSO4.hlsl", NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE, "main", "vs_5_0", NULL, NULL, &Part_VSSO_Buffer, NULL);
         hr = D3DCompileFromFile(L"ParticleGeometryShaderSO4.hlsl", NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE, "main", "gs_5_0", NULL, NULL, &Part_GSSO_Buffer, NULL);

         UINT StrideArray[1] = { sizeof(Particle) }; // I also tried a static 28 bytes (7 floats * 4 bytes)

         hr = device->CreateVertexShader(Part_VSSO_Buffer->GetBufferPointer(), Part_VSSO_Buffer->GetBufferSize(), NULL, &Part_VSSO);
         hr = device->CreateGeometryShaderWithStreamOutput(Part_GSSO_Buffer->GetBufferPointer(), Part_GSSO_Buffer->GetBufferSize(),
             PartlayoutSO, 3 /* sizeof(PartlayoutSO) */, StrideArray, 1,
             D3D11_SO_NO_RASTERIZED_STREAM, NULL, &Part_GSSO);

         //Draw shaders
         hr = D3DCompileFromFile(L"ParticleVertexShaderDRAW4.hlsl", NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE, "main", "vs_5_0", NULL, NULL, &Part_VS_Buffer, NULL);
         hr = D3DCompileFromFile(L"ParticleGeometryShaderDRAW4.hlsl", NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE, "main", "gs_5_0", NULL, NULL, &Part_GS_Buffer, NULL);
         hr = D3DCompileFromFile(L"ParticlePixelShaderDRAW4.hlsl", NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE, "main", "ps_5_0", NULL, NULL, &Part_PS_Buffer, NULL);

         hr = device->CreateVertexShader(Part_VS_Buffer->GetBufferPointer(), Part_VS_Buffer->GetBufferSize(), NULL, &Part_VS);
         hr = device->CreateGeometryShader(Part_GS_Buffer->GetBufferPointer(), Part_GS_Buffer->GetBufferSize(), NULL, &Part_GS);
         hr = device->CreatePixelShader(Part_PS_Buffer->GetBufferPointer(), Part_PS_Buffer->GetBufferSize(), NULL, &Part_PS);

         BuildVertBuffer();
     }

     void ParticleSystem::BuildVertBuffer()
     {
         D3D11_BUFFER_DESC vertexBufferDesc1;
         ZeroMemory(&vertexBufferDesc1, sizeof(vertexBufferDesc1));
         vertexBufferDesc1.Usage = D3D11_USAGE_DEFAULT;
         vertexBufferDesc1.ByteWidth = sizeof(Particle) * 1; //*numParticles;
         vertexBufferDesc1.BindFlags = D3D11_BIND_VERTEX_BUFFER; // | D3D11_BIND_STREAM_OUTPUT;
         vertexBufferDesc1.CPUAccessFlags = 0;
         vertexBufferDesc1.MiscFlags = 0;
         vertexBufferDesc1.StructureByteStride = 0; // I tried commenting this out too

         Particle p;
         ZeroMemory(&p, sizeof(Particle));
         p.InitPos = XMFLOAT3(0.0f, 0.0f, 0.0f);
         p.InitVel = XMFLOAT3(0.0f, 0.0f, 0.0f);
         p.Age = 0.0f;
         //p.Type = 100.0f;

         D3D11_SUBRESOURCE_DATA vertexBufferData1;
         ZeroMemory(&vertexBufferData1, sizeof(vertexBufferData1));
         vertexBufferData1.pSysMem = &p; // was &p
         vertexBufferData1.SysMemPitch = 0;
         vertexBufferData1.SysMemSlicePitch = 0;

         hr = device->CreateBuffer(&vertexBufferDesc1, &vertexBufferData1, &PartVertBufferInit);

         ZeroMemory(&vertexBufferDesc1, sizeof(vertexBufferDesc1));
         vertexBufferDesc1.ByteWidth = sizeof(Particle) * numParticles;
         vertexBufferDesc1.BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_STREAM_OUTPUT;

         hr = device->CreateBuffer(&vertexBufferDesc1, 0, &mDrawVB);
         hr = device->CreateBuffer(&vertexBufferDesc1, 0, &mStreamOutVB);
     }

     void ParticleSystem::LoadDataParticles()
     {
         UINT stride = sizeof(Particle);
         UINT offset = 0;

         //Create the input layout
         //device->CreateInputLayout(Partlayout, numElementsPart, Part_VSSO_Buffer->GetBufferPointer(),
         //    Part_VSSO_Buffer->GetBufferSize(), &PartVertLayout);
         //Set the input layout
         //context->IASetInputLayout(PartVertLayout);

         //Set primitive topology
         context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);

         if (mFirstRun)
         {
             //context->CopyResource(Popy, PartVertBufferInit);
             context->IASetVertexBuffers(0, 1, &PartVertBufferInit, &stride, &offset);
         }
         else
         {
             context->IASetVertexBuffers(0, 1, &mDrawVB, &stride, &offset);
         }

         context->SOSetTargets(1, &mStreamOutVB, &offset);

         context->VSSetShader(Part_VSSO, NULL, 0);
         context->GSSetShader(Part_GSSO, NULL, 0);
         context->PSSetShader(NULL, NULL, 0);
         //context->PSSetShader(Part_PS, NULL, 0);

         ID3D11DepthStencilState* depthState; //disable depth
         D3D11_DEPTH_STENCIL_DESC depthStateDesc;
         depthStateDesc.DepthEnable = false;
         depthStateDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
         device->CreateDepthStencilState(&depthStateDesc, &depthState);
         context->OMSetDepthStencilState(depthState, 0);

         if (mFirstRun)
         {
             context->Draw(1, 0);
             mFirstRun = false;
         }
         else
         {
             context->DrawAuto();
         }

         // Done streaming out -- unbind the stream-output buffer.
         ID3D11Buffer* bufferArray[1] = { 0 };
         context->SOSetTargets(1, bufferArray, &offset);

         // Ping-pong the vertex buffers.
         std::swap(mStreamOutVB, mDrawVB);

         // Draw the updated particle system we just streamed out.
         //Create the input layout
         //device->CreateInputLayout(Partlayout, numElementsPart, Part_VS_Buffer->GetBufferPointer(),
         //    Part_VS_Buffer->GetBufferSize(), &PartVertLayout);
         //Set the normal input layout
         //context->IASetInputLayout(PartVertLayout);

         context->IASetVertexBuffers(0, 1, &mDrawVB, &stride, &offset);

         ZeroMemory(&depthStateDesc, sizeof(depthStateDesc));
         depthStateDesc.DepthEnable = true;
         depthStateDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
         device->CreateDepthStencilState(&depthStateDesc, &depthState);
         context->OMSetDepthStencilState(depthState, 0);

         //I tried adding a normal layout here, the same as the SO entries, but no change
         //Set primitive topology
         //context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);

         context->VSSetShader(Part_VS, NULL, 0);
         context->GSSetShader(Part_GS, NULL, 0);
         context->PSSetShader(Part_PS, NULL, 0);

         context->DrawAuto();
         //mFirstRun = true;
         context->GSSetShader(NULL, NULL, 0);
     }

     void ParticleSystem::RenderParticles()
     {
         //mFirstRun = true;
         LoadDataParticles();
     }

     And the code of the shaders. Vertex shader to stream out:

     struct Particle
     {
         float3 InitPos : POSITION;
         float3 InitVel : VELOCITY;
         float Age : AGE;
         //uint Type : TYPE;
     };

     Particle main(Particle vin)
     {
         return vin; // just push the data into the SO geometry shader
     }

     Geometry shader with SO:

     struct Particle
     {
         float3 InitPos : POSITION;
         float3 InitVel : VELOCITY;
         float Age : AGE;
         //uint Type : TYPE;
     };

     float RandomPosition(float offset)
     {
         float u = Time + offset; // (Time + offset);
         float v = ObjTexture13.SampleLevel(ObjSamplerState, u, 0).r;
         return (v);
     }

     [maxvertexcount(6)]
     void main(point Particle gin[1], inout PointStream<Particle> Output)
     {
         //gin[0].Age = Time;
         if (StartPart == 1.0f)
         {
             //if (gin[0].Age < 100.0f)
             //{
             for (int i = 0; i < 6; i++)
             {
                 float3 VelRandom; //= 5.0f * RandomPosition((float)i / 5.0f);
                 VelRandom.y = 10.0f + i;
                 VelRandom.x = 35 * i * RandomPosition((float)i / 5.0f); //+ offset;
                 VelRandom.z = 10.0f; //35 * i * RandomPosition((float)i / 5.0f);

                 Particle p;
                 p.InitPos = VelRandom; //float3(0.0f, 5.0f, 0.0f); //+ VelRandom;
                 p.InitVel = float3(10.0f, 10.0f, 10.0f);
                 p.Age = 0.0f; //VelRandom.y;
                 //p.Type = PT_FLARE;
                 Output.Append(p);
             }
             Output.Append(gin[0]);
         }
         else if (StartPart == 0.0f)
         {
             if (gin[0].Age >= 0)
             {
                 Output.Append(gin[0]);
             }
         }
     }

     If I change Age in the SO geometry shader (for example Age += Time from the constant buffer), it is fine once in the geometry shader, but in the draw shader it is 0, and the next time it is read in the SO geometry shader it is 0 again.

     Vertex shader to draw:

     struct VertexOut
     {
         float3 Pos : POSITION;
         float4 Colour : COLOR;
         //uint Type : TYPE;
     };

     struct Particle
     {
         float3 InitPos : POSITION;
         float3 InitVel : VELOCITY;
         float Age : AGE;
         //uint Type : TYPE;
     };

     VertexOut main(Particle vin)
     {
         VertexOut vout;
         float3 gAccelW = float3(0.0f, -0.98f, 0.0f);
         float t = vin.Age;
         //float b = Time / 10000;

         // constant acceleration equation
         vout.Pos = vin.InitVel + (0.7f * gAccelW) * Time / 100;
         //vout.Pos.x = t;
         vout.Colour = float4(1.0f, 0.0f, 0.0f, 1.0f);
         //vout.Age = vout.Pos.y;
         //vout.Type = vin.Type;
         return vout;
     }

     Geometry shader to turn a point into a line:

     struct VertexOut
     {
         float3 Pos : POSITION;
         float4 Colour : COLOR;
         //uint Type : TYPE;
     };

     struct GSOutput
     {
         float4 Pos : SV_POSITION;
         float4 Colour : COLOR;
         //float2 Tex : TEXCOORD;
     };

     [maxvertexcount(2)]
     void main(point VertexOut gin[1], inout LineStream<GSOutput> Output)
     {
         float3 gAccelW = float3(0.0f, -0.98f, 0.0f);
         //if (gin[0].Type != PT_EMITTER)
         {
             float4 v[2];
             v[0] = float4(gin[0].Pos, 1.0f);
             v[1] = float4((gin[0].Pos + gAccelW), 1.0f);

             GSOutput gout;
             [unroll]
             for (int i = 0; i < 2; ++i)
             {
                 gout.Pos = mul(v[i], WVP); // mul(v[i], gViewProj);
                 gout.Colour = gin[0].Colour;
                 Output.Append(gout);
             }
         }
     }

     And the pixel shader:

     struct GSOutput
     {
         float4 Pos : SV_POSITION;
         float4 Colour : COLOR;
     };

     float4 main(GSOutput pin) : SV_TARGET
     {
         return pin.Colour;
     }
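     For reference, the meaning of each field of D3D11_SO_DECLARATION_ENTRY, since a mismatch between ComponentCount, the stride, and the GS output struct is the usual suspect when trailing components come back as 0; the same three entries with the fields spelled out:

     // { Stream, SemanticName, SemanticIndex, StartComponent, ComponentCount, OutputSlot }
     D3D11_SO_DECLARATION_ENTRY soDecl[] =
     {
         { 0, "POSITION", 0, 0, 3, 0 }, // 3 components -> 12 bytes
         { 0, "VELOCITY", 0, 0, 3, 0 }, // 3 components -> 12 bytes
         { 0, "AGE",      0, 0, 1, 0 }, // 1 component  ->  4 bytes
     };
     // Total per vertex: 28 bytes, which must equal the stride passed to
     // CreateGeometryShaderWithStreamOutput and to IASetVertexBuffers.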
  14. So I've been playing around today with some things in D3D 11.1, specifically the constant buffer offset stuff. And just FYI, I'm doing this in C# with SharpDX (latest version). I got everything set up, I have my constant buffer populated with data during each frame, and I call VSSetConstantBuffers1 passing in the offset/count as needed. But, unfortunately, I get nothing on my screen. If I go back to using the older D3D11 SetConstantBuffers method (without the offset/count), everything works great. I get nothing from the D3D runtime debug spew, and a look in the graphics debugger tells me that my constant buffer does indeed have data at the offsets that I'm providing. The data (World * Projection matrix) is correct at each offset, and the offsets, again according to the graphics debugger, are correct. I could be using it incorrectly, but what little info I found (and seriously, there's not a lot) seems to indicate that I'm doing it correctly. Here's my workflow (I'd post code, but it's rather massive):
     Frame #0:
     • Map constant buffer with discard
     • Write matrix at offset 0, count 64
     • Unmap
     • VSSetConstantBuffers1(0, 1, buffers, new int[] { offset }, new int[] { count }); // where offset/count are as above
     • Draw single triangle
     Frame #1:
     • Map constant buffer with no-overwrite
     • Write matrix at offset 64, count 64
     • Unmap
     • VSSetConstantBuffers1(0, 1, buffers, new int[] { offset }, new int[] { count }); // where offset/count are as above
     • Draw single triangle
     Etc... it repeats until the end of the buffer, and starts over with a discard when the buffer is full. Has anyone ever used these offset cbuffer functions before? Can you help a brother out? Edit: I've added screenshots of what I'm seeing in the VS 2017 graphics debugger. As I said before, if I use the old VSSetConstantBuffers method, it works like a charm and I see my triangle.
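     One detail worth cross-checking: per the documentation, the offset/count arrays for the *SetConstantBuffers1 family are measured in shader constants (one constant = 16 bytes), and each value must be a multiple of 16 constants, i.e. 256 bytes. A minimal sketch of the arithmetic, in C++ for consistency with the rest of the thread (the SharpDX call shape is analogous):

     // Offsets/counts are in 16-byte shader constants, not bytes, and
     // must be multiples of 16 constants (256 bytes).
     UINT firstConstant = (sliceIndex * 256) / 16; // 0, 16, 32, ... constants
     UINT numConstants  = 256 / 16;                // a 64-byte matrix still reserves
                                                   // a full 16-constant window
     context->VSSetConstantBuffers1(0, 1, &buffer, &firstConstant, &numConstants);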
  15. Hi all, it's been a while since I've worked on my HLSL shaders, and I found out I'm not 100% sure I'm applying gamma correctness correctly. So here's what I do:
     • create the backbuffer in this format: DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
     • source textures (DDS) are always in an SRGB format
     This way the textures should be gamma correct, because DX11 helps me out here. Now my question is about material and light colors. I'm not sure if I need to convert those to linear space. The colors are handpicked on screen, so I guess they are gamma corrected. Below are 2 screenshots; the darker one includes converting those colors (return float4(linearColor.rgb * linearColor.rgb, linearColor.a);), in the lighter shot I didn't do this conversion. These are the properties of the brick material and the light source (there are no other light sources in the scene, and no global ambient):

     Material:
     CR_VECTOR4(0.51f, 0.26f, 0.22f, 1.0f), // ambient
     CR_VECTOR4(0.51f, 0.26f, 0.22f, 1.0f), // diffuse RGB + alpha
     CR_VECTOR4(0.51f, 0.26f, 0.22f, 4.0f)); // specular RGB + power

     Directional light:
     mDirLights[0].Ambient = CR_VECTOR4(0.1f, 0.1f, 0.1f, 1.0f);
     mDirLights[0].Diffuse = CR_VECTOR4(0.75f, 0.75f, 0.75f, 1.0f);
     mDirLights[0].Specular = CR_VECTOR4(1.0f, 1.0f, 1.0f, 16.0f);
     mDirLights[0].Direction = CR_VECTOR3(0.0f, 1.0f, 0.0f);

     So in short, should I or should I not do this conversion in the lighting calculation in the shader? (And/or what else are you seeing? :)) Note that I don't do anything with the texture color after it's fetched in the shader (no conversions), which I believe is correct.
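     For reference, the two common approximations for moving a hand-picked (sRGB-ish) color into linear space before lighting; a minimal HLSL sketch:

     // Cheap approximation: squaring (what the darker screenshot uses)
     float3 linearCheap = srgbColor.rgb * srgbColor.rgb;

     // Closer approximation: gamma 2.2
     float3 linearGamma = pow(srgbColor.rgb, 2.2);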
  16. Introduction: In general my questions pertain to the differences between floating-point and fixed-point data. Additionally I would like to understand when it can be advantageous to prefer fixed-point representation over floating-point representation in the context of vertex data, and how the hardware deals with the different data types. I believe I should be able to reduce the amount of data (bytes) necessary per vertex by choosing the most opportune representations for my vertex attributes. Thanks ahead of time if you, the reader, are considering the effort of reading this and helping me. I found an old topic that shows this is possible in principle, but I am not sure I understand what the pitfalls are when using fixed-point representation and whether there are any hardware-based performance advantages/disadvantages. (TLDR at bottom)

     The actual post: To my understanding, HLSL/D3D11 offers not just the traditional floating-point model in half, single and double precision, but also a fixed-point model in the form of signed/unsigned normalized integers in 8-, 10-, 16-, 24- and 32-bit variants. Both models offer a finite sequence of "grid points". The obvious difference between the two models is that the fixed-point model offers a constant spacing between values in the normalized range of [0,1] or [-1,1], while the floating-point model allows for smaller "deltas" as you get closer to 0, and larger "deltas" the further away from 0 you are. To add some context, let me define a struct as an example:

     struct VertexData
     {
         float[3] position; //3x32 bits
         float[2] texCoord; //2x32 bits
         float[3] normals;  //3x32 bits
     } //Total of 32 bytes

     Every vertex gets a position, a coordinate on my texture, and a normal to do some light calculations. In this case we have 8x32 = 256 bits per vertex. Since the texture coordinates lie in the interval [0,1] and the normal vector components are in the interval [-1,1], it would seem useful to use normalized representation as suggested in the topic linked at the top of the post. The texture coordinates might as well be represented in a fixed-point model, because it seems most useful to be able to sample the texture in a uniform manner, as the pixels don't get any "denser" as we get closer to 0. In other words, the "delta" does not need to become any smaller as the texture coordinates approach (0,0). A similar argument can be made for the normal vector, as a normal vector should be normalized anyway, and we want as many points as possible on the sphere around (0,0,0) with a radius of 1, while we don't care about precision around the origin. Even if we have large textures such as 4k by 4k (or the maximum allowed by D3D11, 16k by 16k), we only need as many grid points on one axis as there are pixels on that axis. An unsigned normalized 14-bit integer would be ideal, but because it is both unsupported and impractical, we will stick to an unsigned normalized 16-bit integer. The same type should take care of the normal vector coordinates, and might even be a bit overkill.

     struct VertexData
     {
         float[3] position;    //3x32 bits
         uint16_t[2] texCoord; //2x16 bits
         uint16_t[3] normals;  //3x16 bits
     } //Total of 22 bytes

     Seems like a good start, and we might even be able to take it further, but before we pursue that path, here is my first question: can the GPU even work with the data in this format, or is all I have accomplished minimizing CPU-side RAM usage? Does the GPU have to convert the texture coordinates back to a floating-point model when I hand them over to the sampler in my pixel shader? I have looked up the data types for HLSL and I am not sure I even comprehend how to declare the vertex input type in HLSL. Would the following work?

     struct VertexInputType
     {
         float3 pos; //this one is obvious
         unorm half2 tex; //half corresponds to a 16-bit float, so I assume this is wrong, but it's the only 16-bit type I found on the linked MSDN site
         snorm half3 normal; //same as above
     }

     I assume this is possible somehow, as I have found input element formats such as DXGI_FORMAT_R16G16B16A16_SNORM and DXGI_FORMAT_R16G16B16A16_UNORM (also available with a different number of components, as well as different component lengths). I might have to avoid 3-component vectors because there is no 3-component 16-bit input element format, but that is the least of my worries. The next question would be: what happens with my normals if I try to do lighting calculations with them in such a normalized fixed-point format? Is there no issue as long as I take care not to mix floating-point and fixed-point data? Or would that work as well? In general this gives rise to the question: how does the GPU handle fixed-point arithmetic? Is it the same as integer arithmetic, and/or is it faster/slower than floating-point arithmetic? Assuming that we still have a valid and useful VertexData format, how far could I take this while remaining on the sensible side of what could be called optimization? Theoretically I could use an input element format such as DXGI_FORMAT_R10G10B10A2_UNORM to pack my normal coordinates into a 10-bit fixed-point format, and my vertices (in object space) might even be representable in a 16-bit unsigned normalized fixed-point format. That way I could end up with something like the following struct:

     struct VertexData
     {
         uint16_t[3] pos;        //3x16 bits
         uint16_t[2] texCoord;   //2x16 bits
         uint32_t packedNormals; //10+10+10+2 bits
     } //Total of 14 bytes

     Could I use a vertex structure like this without too much performance loss on the GPU side? If the GPU has to execute some sort of unpacking algorithm in the background, I might as well let it be. In the end I have a functioning deferred renderer, but I would like to reduce the memory footprint of the huge number of vertices involved in rendering my landscape.

     TLDR: I have a lot of vertices that I need to render and I want to reduce the RAM usage without introducing crazy compression/decompression algorithms on the CPU or GPU. I am hoping to find a solution by involving fixed-point data types, but I am not exactly sure how that would work.
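     For what it's worth, the D3D11 pattern here is that the packing lives in the input-element format, not in the HLSL declaration: the input assembler unpacks UNORM/SNORM attributes to float before the shader sees them, so the HLSL side stays float2/float3/float4. A minimal sketch under that assumption (offsets follow the 22-byte struct above, padded to 4-component normals):

     // C++ side: texcoords and normals stored as 16-bit normalized integers.
     D3D11_INPUT_ELEMENT_DESC layout[] =
     {
         { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0 },
         { "TEXCOORD", 0, DXGI_FORMAT_R16G16_UNORM,       0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
         { "NORMAL",   0, DXGI_FORMAT_R16G16B16A16_SNORM, 0, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 },
     };

     // HLSL side: plain floats; the IA has already done the conversion.
     struct VertexInputType
     {
         float3 pos    : POSITION;
         float2 tex    : TEXCOORD0; // arrives as [0,1] floats
         float4 normal : NORMAL;    // arrives as [-1,1] floats (w unused)
     };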
  17. Hi all, I was wondering if it matters in which order you draw 2D and 3D items, looking at the BeginDraw/EndDraw calls on a D2D render target. The order of the actual draw calls is clear: 3D first, then 2D, which means the 2D (DrawText in this case) is in front of the 3D scene. The question is mainly about when to call BeginDraw and EndDraw. Note that I'm drawing D2D stuff through a DXGI surface linked to the 3D RT.
     Option 1:
     A - Begin frame, clear D3D RT
     B - Draw 3D
     C - BeginDraw D2D RT
     D - Draw 2D
     E - EndDraw D2D RT
     F - Present
     Option 2:
     A - Begin frame, clear D3D RT + BeginDraw D2D RT
     B - Draw 3D
     C - Draw 2D
     D - EndDraw D2D RT
     E - Present
     Would there be a difference (performance/issues?) in using option 2 versus option 1? Any input is appreciated.
  18. Do you know of any papers that cover custom data structures like lists or binary trees implemented in HLSL (without CUDA) that work perfectly fine no matter how many threads try to use them at any given time?
  19. For a 2D game, does using a float2 for position increase performance in any way? I know that in the end the vertex shader will have to return a float4 anyway, but does using a float2 decrease the amount of data that has to be sent from the CPU to the GPU?
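     For context, the per-vertex footprint is decided by the input-element format, so a 2-component position really is half the bytes of a 4-component one. A minimal sketch, assuming the rest of the vertex is just a packed color (illustrative):

     // 8 bytes of position per vertex instead of 16.
     D3D11_INPUT_ELEMENT_DESC layout[] =
     {
         { "POSITION", 0, DXGI_FORMAT_R32G32_FLOAT,   0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
         { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM, 0, 8, D3D11_INPUT_PER_VERTEX_DATA, 0 },
     };
     // The HLSL side can declare float2 pos : POSITION and return
     // float4(pos, 0.0f, 1.0f) from the vertex shader.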
  20. Hi all, last week I noticed that when I run my test application(s) in RenderDoc, they crash when I enable my code that uses D2D/DirectWrite. In Visual Studio no issues occur (debug or release), but when I run the same executable in RenderDoc, it crashes somehow (an assert on the D2D render target, or no information at all). Before I spend hours on debugging/figuring it out, does someone have experience with this symptom and/or know if RenderDoc has known issues with D2D? (If so, that would be bad news for debugging my application in the future.) I can also post some more information on what happens, the code, and which code, when commented out, eliminates the problems (when running in RenderDoc). Any input is appreciated.
  21. Hi guys, I understand how to create input layouts etc... But I am wondering: is it at all possible to derive an input layout from a shader and create the input layout directly from that (rather than manually specifying the input layout format)? Thanks in advance :)
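     This is commonly done with the shader-reflection API; a minimal sketch that builds element descriptors from the vertex shader's input signature. Caveats: only the float component type is handled below (reflection cannot tell you that an attribute was meant to be UNORM-packed), and the SemanticName string lifetime is glossed over:

     #include <d3dcompiler.h> // D3DReflect; link d3dcompiler.lib
     #include <vector>

     ID3D11ShaderReflection* reflector = nullptr;
     D3DReflect(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(),
                IID_ID3D11ShaderReflection, (void**)&reflector);

     D3D11_SHADER_DESC shaderDesc{};
     reflector->GetDesc(&shaderDesc);

     std::vector<D3D11_INPUT_ELEMENT_DESC> elements;
     for (UINT i = 0; i < shaderDesc.InputParameters; ++i)
     {
         D3D11_SIGNATURE_PARAMETER_DESC param{};
         reflector->GetInputParameterDesc(i, &param);

         D3D11_INPUT_ELEMENT_DESC e{};
         e.SemanticName      = param.SemanticName;
         e.SemanticIndex     = param.SemanticIndex;
         e.InputSlotClass    = D3D11_INPUT_PER_VERTEX_DATA;
         e.AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;

         // Mask says how many components the parameter uses.
         if (param.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32)
         {
             if      (param.Mask == 0x1) e.Format = DXGI_FORMAT_R32_FLOAT;
             else if (param.Mask <= 0x3) e.Format = DXGI_FORMAT_R32G32_FLOAT;
             else if (param.Mask <= 0x7) e.Format = DXGI_FORMAT_R32G32B32_FLOAT;
             else                        e.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
         }
         elements.push_back(e);
     }

     device->CreateInputLayout(elements.data(), (UINT)elements.size(),
                               vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(),
                               &inputLayout);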
  22. Hi guys, when I do picking followed by ray-plane intersection, the results are all wrong. I am pretty sure my ray-plane intersection is correct, so I'll just show the picking part. Please take a look:

     // get projection_matrix
     DirectX::XMFLOAT4X4 mat;
     DirectX::XMStoreFloat4x4(&mat, projection_matrix);

     float2 v;
     v.x = (((2.0f * (float)mouse_x) / (float)screen_width) - 1.0f) / mat._11;
     v.y = -(((2.0f * (float)mouse_y) / (float)screen_height) - 1.0f) / mat._22;

     // get inverse of view_matrix
     DirectX::XMMATRIX inv_view = DirectX::XMMatrixInverse(nullptr, view_matrix);
     DirectX::XMStoreFloat4x4(&mat, inv_view);

     // create ray origin (camera position)
     float3 ray_origin;
     ray_origin.x = mat._41;
     ray_origin.y = mat._42;
     ray_origin.z = mat._43;

     // create ray direction
     float3 ray_dir;
     ray_dir.x = v.x * mat._11 + v.y * mat._21 + mat._31;
     ray_dir.y = v.x * mat._12 + v.y * mat._22 + mat._32;
     ray_dir.z = v.x * mat._13 + v.y * mat._23 + mat._33;

     That should give me a ray origin and direction in world space, but when I do the ray-plane intersection, the results are all wrong. If I click on the bottom half of the screen, ray_dir.z becomes negative (more so as I click lower). I don't understand how that can be; shouldn't it always be pointing down the z-axis? I had this working in the past but I can't find my old code. Please help. Thank you.
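     As a cross-check, DirectXMath can unproject screen points directly, which sidesteps the hand-rolled math; a minimal sketch, assuming the same view/projection matrices and a full-window viewport:

     // Unproject the cursor at the near and far planes, then form the ray.
     XMVECTOR screenNear = XMVectorSet((float)mouse_x, (float)mouse_y, 0.0f, 1.0f);
     XMVECTOR screenFar  = XMVectorSet((float)mouse_x, (float)mouse_y, 1.0f, 1.0f);

     XMVECTOR pNear = XMVector3Unproject(screenNear,
         0.0f, 0.0f, (float)screen_width, (float)screen_height, 0.0f, 1.0f,
         projection_matrix, view_matrix, XMMatrixIdentity());
     XMVECTOR pFar = XMVector3Unproject(screenFar,
         0.0f, 0.0f, (float)screen_width, (float)screen_height, 0.0f, 1.0f,
         projection_matrix, view_matrix, XMMatrixIdentity());

     XMVECTOR rayOrigin = pNear;
     XMVECTOR rayDir    = XMVector3Normalize(XMVectorSubtract(pFar, pNear));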
  23. Hello, I am trying to recreate a feature that exists in Unity, called Stretched Billboarding. However, I'm having a hard time figuring out how to use a particle's velocity to rotate and stretch the particle quad accordingly. Here's a screenie of Unity's example: depending on the velocity of the particle, the quad rotates and stretches, but it is still always facing the camera. In my current solution I have normal billboarding, and velocities and particle-local rotations are working fine. I generate my quads in a geometry shader, if that makes any difference. So does anyone have any thoughts on how to achieve this? Best regards Hampus
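     One common construction for this (not necessarily Unity's exact one): align one quad axis with the velocity, and build the other axis perpendicular to both the velocity and the view direction, so the quad stays camera-facing while stretching along its motion. A minimal HLSL sketch inside a geometry shader, where eyePosW, width, height and stretchFactor are assumed constants and p is the incoming particle:

     float3 toEye = normalize(eyePosW - p.posW);

     // Stretch axis follows the velocity; side axis faces the camera.
     // (Degenerates if velocity is parallel to the view direction; fall back
     // to regular billboarding in that case.)
     float3 up    = normalize(p.velocity);
     float3 right = normalize(cross(up, toEye));

     float halfW = 0.5f * width;
     float halfH = 0.5f * height + stretchFactor * length(p.velocity);

     float4 corners[4];
     corners[0] = float4(p.posW + halfW * right - halfH * up, 1.0f);
     corners[1] = float4(p.posW + halfW * right + halfH * up, 1.0f);
     corners[2] = float4(p.posW - halfW * right - halfH * up, 1.0f);
     corners[3] = float4(p.posW - halfW * right + halfH * up, 1.0f);
     // Emit as a triangle strip, transforming each corner by view-projection.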
  24. Hi, I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer is throwing error messages that this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space position's Y coordinate before writing it out in the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that, because then I'd need to track down everywhere in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
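     For reference, negative-height viewports are legal in Vulkan once VK_KHR_maintenance1 is enabled (it was promoted to core in Vulkan 1.1), which is likely what the validation layer is complaining about when the extension isn't requested. A minimal sketch:

     // Requires VK_KHR_maintenance1 (core in Vulkan 1.1): negative height flips Y.
     VkViewport viewport = {0};
     viewport.x        = 0.0f;
     viewport.y        = (float)renderHeight;  // start at the bottom...
     viewport.width    = (float)renderWidth;
     viewport.height   = -(float)renderHeight; // ...and flip upward
     viewport.minDepth = 0.0f;
     viewport.maxDepth = 1.0f;
     vkCmdSetViewport(commandBuffer, 0, 1, &viewport);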
  25. Hi guys, I am having trouble implementing a directional light in HLSL. I have been following this guide https://www.gamasutra.com/view/feature/131275/implementing_lighting_models_with_.php?page=2 but the quad renders pure black no matter where the light vector is pointing. My quad is in the format of position, texcoord and normal. This is the shader so far:

     cbuffer ModelViewProjectionConstantBuffer : register(b0)
     {
         float4x4 worldmat;
         float4x4 worldviewmat;
         float4x4 worldviewprojmat;
         float4 vecLightDir;
     }

     struct VS_OUTPUT
     {
         float4 Pos : SV_POSITION;
         float3 Light : TEXCOORD0;
         float3 Norm : TEXCOORD1;
     };

     VS_OUTPUT vs_main(float4 position : POSITION, float2 texcoord : TEXCOORD, float3 normal : NORMAL)
     {
         float4 newpos;
         newpos = mul(position, worldmat);
         newpos = mul(newpos, worldviewmat);
         newpos = mul(newpos, worldviewprojmat);
         // return newpos;

         VS_OUTPUT Out = (VS_OUTPUT)0;
         Out.Pos = newpos; // mul(newpos, matWorldViewProj); // transform position
         Out.Light = vecLightDir; // output light vector
         Out.Norm = normalize(mul(normal, worldmat)); // transform normal and normalize it
         return Out;
     }

     float4 ps_main(float3 Light : TEXCOORD0, float3 Norm : TEXCOORD1) : SV_TARGET
     {
         float4 diffuse = { 1.0f, 0.0f, 0.0f, 1.0f };
         float4 ambient = { 0.1, 0.0, 0.0, 1.0 };
         //return float4(1.0f, 1.0f, 1.0f, 1.0f);
         return ambient + diffuse * saturate(dot(Light, Norm));
     }

     Any help would be truly appreciated, as this is the only real area of DX11 that I have difficulties with. Thanks in advance.
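     For reference, the standard Lambert term the linked article builds up to; a minimal sketch, under the common convention that vecLightDir points from the light toward the scene (hence the negation):

     float4 ps_lambert(float3 Light : TEXCOORD0, float3 Norm : TEXCOORD1) : SV_TARGET
     {
         float4 diffuse = float4(1.0f, 0.0f, 0.0f, 1.0f);
         float4 ambient = float4(0.1f, 0.0f, 0.0f, 1.0f);

         // N.L with both vectors normalized; flip L if it points light -> surface.
         float ndotl = saturate(dot(normalize(Norm), normalize(-Light)));
         return ambient + diffuse * ndotl;
     }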