About thejelmega

  1. Thanks for the tips, I'll surely look at that tutorial. Now the only thing I still need to figure out is how to make sprites.
  2. Well, it seems that GameMaker is quite a bit easier because of its more simplistic interface, like the creation of sprites. But I can't say that for sure yet, so after putting some thought into it, I'm gonna try Unity. Also, it's a bit annoying to do things right now, because it is like a sauna in my room.   Edit: Well, already discovered my first problem with Unity: if I want to drag an image into my project, it won't work.   Edit 2: I've started to try out Unity, and one feature I really like, which GameMaker also has, is the ability to change values without having to edit the code. Come to think of it, I'll have to use Unity next year for school anyway, so I think I'll pick Unity, unlike what I said in my previous post.
  3. I've looked on the internet some more and I think I'll go with GameMaker: Studio for now; it wouldn't be that 'hard' to port it to Unity later if I wanted to. EDIT: (for now)
  4. Well, I've tried out GameMaker already and it uses its own language, called GML, which they say is based on C. C# wouldn't be that hard for me, because I already know C++, so I'll probably just need to learn the differences between the two.   EDIT: accidentally typed C# instead of C++, if you wonder where the comment below comes from
  5. I would like to make a roguelike game. I've tried programming an engine myself, but I'd prefer to use an existing engine. I've looked at both GameMaker: Studio and Unity 2D and, to be honest, I'm not sure which would be better.   Does anyone have experience with them or know which one would be best?
  6. I've tried using SDL, but then last night I remembered I could also do it with SFML, which I have a bit more experience with, and for me it seems a bit easier.
  7. Thanks for the replies. In the meantime I've also been searching some more on the internet, and it seems that the newer versions of Dwarf Fortress, which can run on Linux, use SDL, so I'll look into SDL. That would also give me a system to handle input more easily, because I was planning to use getch(), but that is a bit annoying when it comes to arrow keys and things like that.
  8. I've been interested in making something in the style of Dwarf Fortress, so I want to find a way to change the foreground and background color of individual characters. I'm on Windows, so I've looked at SetConsoleTextAttribute(), but it is very limited in color and I'd prefer a way of changing the colors using RGB values.   As you can see, I've already tried SetConsoleTextAttribute(), so it's no problem even if it is a platform-specific way.   If you know a way to do it, please let me know.
  9. DX12 [Solved][D3D11] D3D not presenting

    Yes, that was my problem.   I've just used AdjustWindowRect() during the creation of my window and now it works.   Thank you very much.
  10. So after some people told me it would be better to start with DX11 before DX12, I decided to do that. But now I have a problem I hadn't had before: DX11 is not presenting anything to the screen. I've been following this tutorial. I'm already gonna thank you for the help.

Code:

Renderer.h
[spoiler]
#pragma once

#include "Helpers\Util.h"

namespace VEH { class Window; }

namespace VE
{
	struct Vertex
	{
		Vertex() {}
		Vertex(float x, float y, float z) : pos(x, y, z) {}

		DirectX::XMFLOAT3 pos;
	};

	class Renderer
	{
	public:
		Renderer(VEH::Window* window);
		~Renderer();

		bool Init(HINSTANCE hInstance);
		bool InitScene();
		void Update();
		void Render();
		void Shutdown();

	private:
		IDXGISwapChain* m_pSwapChain = nullptr;
		ID3D11Device* m_pDevice = nullptr;
		ID3D11DeviceContext* m_pDevCon = nullptr;
		ID3D11RenderTargetView* m_pRenderTargetView = nullptr;

		VEH::Window* m_pWindow = nullptr;
	};
}
[/spoiler]

Renderer.cpp
[spoiler]
#include "Renderer.h"
#include "Handlers\Window.h"

namespace VE
{
	Renderer::Renderer(VEH::Window* window) : m_pWindow(window)
	{
	}

	Renderer::~Renderer()
	{
	}

	bool Renderer::Init(HINSTANCE hInstance)
	{
		HRESULT hr;

		// describe the back buffer
		DXGI_MODE_DESC bufferDesc;
		ZeroMemory(&bufferDesc, sizeof(DXGI_MODE_DESC));

		bufferDesc.Width = m_pWindow->GetWidth();
		bufferDesc.Height = m_pWindow->GetHeight();
		bufferDesc.RefreshRate.Numerator = 60;
		bufferDesc.RefreshRate.Denominator = 1;
		bufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
		bufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
		bufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;

		// describe the swapchain
		DXGI_SWAP_CHAIN_DESC swapChainDesc;
		ZeroMemory(&swapChainDesc, sizeof(DXGI_SWAP_CHAIN_DESC));

		swapChainDesc.BufferDesc = bufferDesc;
		swapChainDesc.SampleDesc.Count = 1;
		swapChainDesc.SampleDesc.Quality = 0;
		swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
		swapChainDesc.BufferCount = 1;
		swapChainDesc.OutputWindow = m_pWindow->GetHwnd();
		swapChainDesc.Windowed = !m_pWindow->IsFullscreen();
		swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;

		UINT creationFlags = 0;

#ifdef _DEBUG
		creationFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

		D3D_FEATURE_LEVEL featureLevel = D3D_FEATURE_LEVEL_11_0;

		// create the swap chain and device
		hr = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, creationFlags, &featureLevel, 1,
			D3D11_SDK_VERSION, &swapChainDesc, &m_pSwapChain, &m_pDevice, NULL, &m_pDevCon);
		if (FAILED(hr))
		{
			MessageBox(NULL, L"Swapchain and Device creation failed!", L"Error", MB_OK | MB_ICONERROR);
			return false;
		}

		// get the back buffer
		ID3D11Texture2D* backBuffer;
		hr = m_pSwapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
		if (FAILED(hr)) { return false; }

		// create the render target
		hr = m_pDevice->CreateRenderTargetView(backBuffer, NULL, &m_pRenderTargetView);
		backBuffer->Release();
		if (FAILED(hr)) { return false; }

		// set the render target view
		m_pDevCon->OMSetRenderTargets(1, &m_pRenderTargetView, NULL);

		return true;
	}

	bool Renderer::InitScene()
	{
		return true;
	}

	void Renderer::Update()
	{
	}

	void Renderer::Render()
	{
		// clear our back buffer to the updated color
		float clearColor[4] = { 0.0f, 0.1f, 0.2f, 1.0f };
		m_pDevCon->ClearRenderTargetView(m_pRenderTargetView, clearColor);

		// present the back buffer to the screen
		m_pSwapChain->Present(0, 0);
	}

	void Renderer::Shutdown()
	{
		// release COM objects
		SAFE_RELEASE(m_pRenderTargetView);
		SAFE_RELEASE(m_pSwapChain);
		SAFE_RELEASE(m_pDevice);
		SAFE_RELEASE(m_pDevCon);
	}
}
[/spoiler]

Util.h
[spoiler]
#pragma once

#ifndef WIN32_LEAN_AND_MEAN
#define WIN32_LEAN_AND_MEAN // Exclude rarely used stuff from the windows header
#endif

#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3dcompiler.lib")

#include <windows.h>
#include <d3d11.h>
#include <dxgi.h>
#include <d3dcompiler.h>
#include <DirectXMath.h>

#include <string>

#define SAFE_RELEASE(p) { if(p) { (p)->Release(); (p) = nullptr; } }
#define SAFE_DELETE(a) { if((a) != nullptr) { delete (a); (a) = nullptr; } }
[/spoiler]

Main.cpp
[spoiler]
#include "Handlers\Window.h"
#include "Renderer.h"

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR cmdLine, int nCmdShow)
{
	VEH::Window* pWindow = new VEH::Window(hInstance, nCmdShow, 800, 600, L"Voxel Engine DX11 v0.000.00.00001", false);

	if (!pWindow->Init())
	{
		MessageBox(NULL, L"Window creation failed!", L"Error", MB_OK);
		return 0;
	}

	VE::Renderer* pRenderer = new VE::Renderer(pWindow);
	if (!pRenderer->Init(hInstance))
	{
		MessageBox(NULL, L"Renderer creation failed!", L"Error", MB_OK);
		return 0;
	}
	if (!pRenderer->InitScene())
	{
		MessageBox(NULL, L"Scene creation failed!", L"Error", MB_OK);
		return 0;
	}

	while (pWindow->Update())
	{
		pRenderer->Update();
		pRenderer->Render();
	}

	pRenderer->Shutdown();
	pWindow->Close();

	delete pRenderer;
	delete pWindow;

	return 0;
}
[/spoiler]
  11. OK, I'll be sure to do that. I saw on my computer that I once started with DX11 but never really continued, so I'm going to do that, and if I have something nice I can always port it later. I'll start with DX11, and then I'll also have a head start for next school year.   Thanks for the info.
  12. I've actually already programmed with OpenGL; it's just that I didn't think of it when I was typing that, because it has already been a few months since I did. And the reason I chose to learn some D3D12 is that I like to make it challenging for myself. Also, I have a lot of free time now, so I'm taking it slow; school only starts in the last week of September.
  13. OK, I've added that code and I get this:   D3D12 ERROR: ID3D12GraphicsCommandList::*: This API cannot be called on a closed command list. [ EXECUTION ERROR #547: COMMAND_LIST_CLOSED]   Because I'm new when it comes to graphics programming, I'm not sure where my problem is.   EDIT: after looking at the tutorial code once again, I found the problem: in the tutorial source code a Close() call was removed, but they didn't mention that in the tutorial. I'm extremely thankful for the help, MJP. Especially because after the vacation my second year of game development will start, and that's when we'll begin with graphics programming, so I wanted to be a bit ahead.
  14. I've been following this tutorial, but now I've run into an annoying problem: it doesn't render. I've checked the graphics debugger, like the tutorial said, and the vertex data isn't given to the GPU; it says the type is byte2x, but it is a float, and the data it has is 00000000.   If you can help me, thanks.

Code:

Renderer.h
[spoiler]
#pragma once

#include "Helpers\Util.h"

namespace VEH { class Window; }

namespace VE
{
	class Renderer
	{
	public:
		Renderer(VEH::Window* outputWindow);
		~Renderer();

		bool Init();                 // initialize d3d12
		void Tick();                 // update game logic
		void TickPipeline();         // update the d3d pipeline (update the command list)
		void Draw();                 // execute the command list
		void Shutdown();             // release objects and clean up memory
		void WaitForPreviousFrame(); // wait until the gpu is finished with the command list

	private:
		const static int m_FrameBufferCount = 3; // number of buffers we want: 2 for double buffering, 3 for triple buffering

		ID3D12Device* m_pDevice = nullptr;                    // direct3d device
		IDXGISwapChain3* m_pSwapChain = nullptr;              // swapchain used to switch render targets
		ID3D12CommandQueue* m_pCommandQueue = nullptr;        // container for command lists
		ID3D12DescriptorHeap* m_pRtvDescriptorHeap = nullptr; // a descriptor heap to hold resources like render targets
		ID3D12Resource* m_paRenderTargets[m_FrameBufferCount] = {};            // number of render targets equal to the buffer count
		ID3D12CommandAllocator* m_paCommandAllocator[m_FrameBufferCount] = {}; // we want enough allocators for each buffer * number of threads (only 1 thread for now)
		ID3D12GraphicsCommandList* m_pCommandList = nullptr;  // a command list we can record commands into, then execute to render the frame
		ID3D12Fence* m_paFence[m_FrameBufferCount] = {};      // an object that is locked while our command list is being executed by the gpu.
		                                                      // we need as many as we have allocators (more if we want to know when the gpu is finished with an asset)
		ID3D12PipelineState* m_pPipelineStateObject = nullptr; // pso containing a pipeline state
		ID3D12RootSignature* m_pRootSignature = nullptr;       // the root signature defines the data shaders will access
		ID3D12Resource* m_pVertexBuffer = nullptr;             // a default buffer in GPU memory that we will load vertex data for our triangle into

		D3D12_VIEWPORT m_Viewport;                   // area that output from the rasterizer will be stretched to
		D3D12_RECT m_ScissorRect;                    // the area to draw in; pixels outside that area will not be drawn to
		D3D12_VERTEX_BUFFER_VIEW m_VertexBufferView; // a structure containing a pointer to the vertex data in gpu memory,
		                                             // the total size of the buffer, and the size of each element (vertex)

		HANDLE m_FenceEvent = NULL;                   // a handle to an event for when our fence is unlocked by the gpu
		UINT64 m_FenceValue[m_FrameBufferCount] = {}; // this value is incremented each frame; each fence has its own value
		int m_FrameIndex = 0;                         // current rtv we are on
		int m_RtvDescriptorSize = 0;                  // size of the rtv descriptor on the device (all front and back buffers will be the same size)

		VEH::Window* m_pWindow;
	};
}
[/spoiler]

Renderer.cpp
[spoiler]
#include "Renderer.h"
#include "Handlers\Window.h"

namespace VE
{
	Renderer::Renderer(VEH::Window* outputWindow) : m_pWindow(outputWindow)
	{
	}

	Renderer::~Renderer()
	{
	}

	bool Renderer::Init()
	{
		HRESULT hr;

		// -- Create Device -- //

		IDXGIFactory4* dxgiFactory;
		hr = CreateDXGIFactory1(IID_PPV_ARGS(&dxgiFactory));
		if (FAILED(hr)) { return false; }

		IDXGIAdapter1* adapter;    // adapters are the graphics cards (this includes the embedded graphics on the cpu)
		int adapterIndex = 0;      // we'll start looking for directx 12 compatible graphics devices at index 0
		bool adapterFound = false; // set to true when a good one was found

		while (dxgiFactory->EnumAdapters1(adapterIndex, &adapter) != DXGI_ERROR_NOT_FOUND)
		{
			DXGI_ADAPTER_DESC1 desc;
			adapter->GetDesc1(&desc);

			if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
			{
				// we don't want a software device
				++adapterIndex;
				continue;
			}

			// we want a device that is compatible with direct3d 12 (feature level 11 or higher)
			hr = D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0, __uuidof(ID3D12Device), nullptr);
			if (SUCCEEDED(hr))
			{
				adapterFound = true;
				break;
			}

			++adapterIndex;
		}

		if (!adapterFound) { return false; }

		// create the device
		hr = D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&m_pDevice));
		if (FAILED(hr)) { return false; }

		// -- Create the Command Queue -- //

		D3D12_COMMAND_QUEUE_DESC cqDesc = {}; // we'll be using the default values

		hr = m_pDevice->CreateCommandQueue(&cqDesc, IID_PPV_ARGS(&m_pCommandQueue));
		if (FAILED(hr)) { return false; }

		// -- Create the Swap Chain (double/triple buffering) -- //

		uint32_t windowWidth = m_pWindow->GetWidth();
		uint32_t windowHeight = m_pWindow->GetHeight();

		DXGI_MODE_DESC backBufferDesc = {};
		backBufferDesc.Width = windowWidth;                 // buffer width
		backBufferDesc.Height = windowHeight;               // buffer height
		backBufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // format of the buffer (rgba, 32 bits, 8 bits per channel)

		// describe our multi-sampling. we are not multi-sampling, so we set the count to 1 (we need at least 1 sample)
		DXGI_SAMPLE_DESC sampleDesc = {};
		sampleDesc.Count = 1;

		// describe and create the swap chain
		DXGI_SWAP_CHAIN_DESC swapChainDesc = {};
		swapChainDesc.BufferCount = m_FrameBufferCount;              // number of buffers we have
		swapChainDesc.BufferDesc = backBufferDesc;                   // our back buffer description
		swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // the pipeline will render to this swap chain
		swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;    // dxgi will discard the buffer (data) after we call present
		swapChainDesc.OutputWindow = m_pWindow->GetHwnd();           // handle to our window
		swapChainDesc.SampleDesc = sampleDesc;                       // our multi-sampling description
		swapChainDesc.Windowed = !m_pWindow->IsFullscreen();         // if fullscreen, you must also call SetFullscreenState(true) to get uncapped fps

		IDXGISwapChain* tempSwapChain;

		dxgiFactory->CreateSwapChain(
			m_pCommandQueue, // the queue will be flushed once the swap chain is created
			&swapChainDesc,  // the swapchain description we created above
			&tempSwapChain); // store the created swap chain in a temp IDXGISwapChain interface

		m_pSwapChain = static_cast<IDXGISwapChain3*>(tempSwapChain);

		m_FrameIndex = m_pSwapChain->GetCurrentBackBufferIndex();

		// -- Create the Back Buffers (render target views) Descriptor Heap -- //

		// describe an rtv descriptor heap and create it
		D3D12_DESCRIPTOR_HEAP_DESC rtvHeapDesc = {};
		rtvHeapDesc.NumDescriptors = m_FrameBufferCount; // number of descriptors for this heap
		rtvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;

		// this heap will not be directly referenced by the shaders (not shader visible), as it stores the output of the pipeline;
		// otherwise we would set the heap's flag to D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE
		rtvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
		hr = m_pDevice->CreateDescriptorHeap(&rtvHeapDesc, IID_PPV_ARGS(&m_pRtvDescriptorHeap));
		if (FAILED(hr)) { return false; }

		// get the size of a descriptor in this heap (this is an rtv heap, so only rtv descriptors should be stored in it).
		// descriptor sizes may vary from device to device, which is why there is no set size and we must ask the
		// device for it. we will use this size to increment a descriptor handle offset
		m_RtvDescriptorSize = m_pDevice->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_RTV);

		// get a handle to the first descriptor in the descriptor heap. a handle is basically a pointer,
		// but we cannot literally use it like a c++ pointer
		CD3DX12_CPU_DESCRIPTOR_HANDLE rtvHandle(m_pRtvDescriptorHeap->GetCPUDescriptorHandleForHeapStart());

		// create an RTV for each buffer (double buffering is 2 buffers, triple buffering is 3)
		for (int i = 0; i < m_FrameBufferCount; ++i)
		{
			// first we get the n'th buffer in the swap chain and store it in the n'th
			// position of our ID3D12Resource array
			hr = m_pSwapChain->GetBuffer(i, IID_PPV_ARGS(&m_paRenderTargets[i]));
			if (FAILED(hr)) { return false; }

			// then we "create" a render target view which binds the swap chain buffer (ID3D12Resource[n]) to the rtv handle
			m_pDevice->CreateRenderTargetView(m_paRenderTargets[i], nullptr, rtvHandle);

			// we increment the rtv handle by the rtv descriptor size we got above
			rtvHandle.Offset(1, m_RtvDescriptorSize);
		}

		// -- Create the Command Allocators -- //

		for (int i = 0; i < m_FrameBufferCount; ++i)
		{
			hr = m_pDevice->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&m_paCommandAllocator[i]));
			if (FAILED(hr)) { return false; }
		}

		// create the command list with the first allocator
		hr = m_pDevice->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, m_paCommandAllocator[0], NULL, IID_PPV_ARGS(&m_pCommandList));
		if (FAILED(hr)) { return false; }

		// command lists are created in the recording state.
		// our main loop will set it up for recording again, so close it now
		m_pCommandList->Close();

		// -- Create a Fence & Fence Event -- //

		// create the fences
		for (int i = 0; i < m_FrameBufferCount; ++i)
		{
			hr = m_pDevice->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&m_paFence[i]));
			if (FAILED(hr)) { return false; }
			m_FenceValue[i] = 0; // set the initial fence value to 0
		}

		// create a handle to a fence event
		m_FenceEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
		if (m_FenceEvent == nullptr) { return false; }

		// create the root signature
		CD3DX12_ROOT_SIGNATURE_DESC rootSignatureDesc = {};
		rootSignatureDesc.Init(0, nullptr, 0, nullptr, D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);

		ID3DBlob* signature;
		hr = D3D12SerializeRootSignature(&rootSignatureDesc, D3D_ROOT_SIGNATURE_VERSION_1, &signature, nullptr);
		if (FAILED(hr)) { return false; }

		hr = m_pDevice->CreateRootSignature(0, signature->GetBufferPointer(), signature->GetBufferSize(), IID_PPV_ARGS(&m_pRootSignature));
		if (FAILED(hr)) { return false; }

		// -- Create vertex and pixel shader -- //

		// when debugging, we can compile the shader files at runtime,
		// but for release versions we can compile the hlsl shaders
		// with fxc.exe to create .cso files, which contain the shader
		// bytecode. we can load the .cso files at runtime to get the
		// shader bytecode, which of course is faster than compiling
		// them at runtime

		// compile the vertex shader
		ID3DBlob* vertexShader; // d3d blob holding the vertex shader bytecode
		ID3DBlob* errorBuff;    // a buffer holding the error data, if any
		hr = D3DCompileFromFile(L"Resources/Shaders/VertexShader.hlsl", nullptr, nullptr, "main", "vs_5_0",
			D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION, 0, &vertexShader, &errorBuff);
		if (FAILED(hr))
		{
			OutputDebugStringA((char*)errorBuff->GetBufferPointer());
			return false;
		}

		// fill out a shader bytecode structure, which is basically just a pointer
		// to the shader bytecode and the size of the shader bytecode
		D3D12_SHADER_BYTECODE vertexShaderBytecode = {};
		vertexShaderBytecode.BytecodeLength = vertexShader->GetBufferSize();
		vertexShaderBytecode.pShaderBytecode = vertexShader->GetBufferPointer();

		// compile the pixel shader
		ID3DBlob* pixelShader; // d3d blob holding the pixel shader bytecode
		hr = D3DCompileFromFile(L"Resources/Shaders/PixelShader.hlsl", nullptr, nullptr, "main", "ps_5_0",
			D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION, 0, &pixelShader, &errorBuff);
		if (FAILED(hr))
		{
			OutputDebugStringA((char*)errorBuff->GetBufferPointer());
			return false;
		}

		// fill out the bytecode for the pixel shader
		D3D12_SHADER_BYTECODE pixelShaderBytecode = {};
		pixelShaderBytecode.BytecodeLength = pixelShader->GetBufferSize();
		pixelShaderBytecode.pShaderBytecode = pixelShader->GetBufferPointer();

		// -- Create the input layout -- //

		// the input layout is used by the Input Assembler so that it knows
		// how to read the vertex data bound to it
		D3D12_INPUT_ELEMENT_DESC inputLayout[] =
		{
			{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }
		};

		// fill out an input layout description
		D3D12_INPUT_LAYOUT_DESC inputLayoutDesc = {};

		// we can get the number of elements in an array with "sizeof(array) / sizeof(arrayElementType)"
		inputLayoutDesc.NumElements = sizeof(inputLayout) / sizeof(D3D12_INPUT_ELEMENT_DESC);
		inputLayoutDesc.pInputElementDescs = inputLayout;

		// -- Create a Pipeline State Object (PSO) -- //

		// in a real application, you will have many psos: for each different shader
		// or different combination of shaders, different blend states or different rasterizer states,
		// different topology types (point, line, triangle, patch), or a different number
		// of render targets, you will need a pso

		// VS is the only required shader for a pso. you might be wondering when a case would arise where
		// you only set the VS. it's possible that you have a pso that only outputs data with the stream
		// output, and not to a render target, which means you would not need anything after the stream
		// output.

		D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};                        // a structure to define a pso
		psoDesc.InputLayout = inputLayoutDesc;                                  // the structure describing our input layout
		psoDesc.pRootSignature = m_pRootSignature;                              // the root signature that describes the input data this pso needs
		psoDesc.VS = vertexShaderBytecode;                                      // where to find the vertex shader bytecode and how large it is
		psoDesc.PS = pixelShaderBytecode;                                       // same as VS, but for the pixel shader
		psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE; // type of topology we are drawing
		psoDesc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;                     // format of the render target
		psoDesc.SampleDesc = sampleDesc;                                        // must be the same sample description as the swapchain and depth/stencil buffer
		psoDesc.SampleMask = 0xffffffff;                                        // sample mask has to do with multi-sampling; 0xffffffff means point sampling is done
		psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);       // a default rasterizer state
		psoDesc.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT);                 // a default blend state
		psoDesc.NumRenderTargets = 1;                                           // we are only binding one render target

		// create the pso
		hr = m_pDevice->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&m_pPipelineStateObject));
		if (FAILED(hr)) { return false; }

		// -- Create the vertex buffer -- //

		// a triangle
		Vertex vList[] =
		{
			{ {  0.0f,  0.5f, 0.5f } },
			{ {  0.5f, -0.5f, 0.5f } },
			{ { -0.5f, -0.5f, 0.5f } }
		};

		uint32_t vBufferSize = sizeof(vList);

		// create the default heap.
		// a default heap is memory on the gpu; only the gpu has access to this memory.
		// to get data into this heap, we have to upload it using an upload heap
		m_pDevice->CreateCommittedResource(
			&CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT), // a default heap
			D3D12_HEAP_FLAG_NONE,                              // no flags
			&CD3DX12_RESOURCE_DESC::Buffer(vBufferSize),       // resource description for a buffer
			D3D12_RESOURCE_STATE_COPY_DEST,                    // we start the heap in the copy destination state since we will copy data
			                                                   // from the upload heap into it
			nullptr, // optimized clear value must be null for this type of resource; it is used for render targets and depth/stencil buffers
			IID_PPV_ARGS(&m_pVertexBuffer));

		// we can give resource heaps a name so that when we debug with the graphics debugger we know what resource we are looking at
		m_pVertexBuffer->SetName(L"Vertex Buffer Resource Heap");

		// create the upload heap.
		// upload heaps are used to upload data to the gpu: the cpu can write to it, the gpu can read from it.
		// we will upload the vertex buffer through this heap to the default heap
		ID3D12Resource* vBufferUploadHeap;

		m_pDevice->CreateCommittedResource(
			&CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD), // upload heap
			D3D12_HEAP_FLAG_NONE,                             // no flags
			&CD3DX12_RESOURCE_DESC::Buffer(vBufferSize),      // resource description for a buffer
			D3D12_RESOURCE_STATE_GENERIC_READ,                // the gpu will read from this buffer and copy its contents to the default heap
			nullptr,
			IID_PPV_ARGS(&vBufferUploadHeap));
		vBufferUploadHeap->SetName(L"Vertex Buffer Upload Resource Heap");

		// store the vertex buffer in the upload heap
		D3D12_SUBRESOURCE_DATA vertexData = {};
		vertexData.pData = reinterpret_cast<BYTE*>(vList); // pointer to our vertex data
		vertexData.RowPitch = vBufferSize;                 // size of all our triangle vertex data
		vertexData.SlicePitch = vBufferSize;               // also the size of our triangle vertex data

		// we are now creating a command with the command list to copy the data from
		// the upload heap to the default heap
		UpdateSubresources(m_pCommandList, m_pVertexBuffer, vBufferUploadHeap, 0, 0, 1, &vertexData);

		// transition the vertex buffer data from the copy destination state to the vertex buffer state
		m_pCommandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(m_pVertexBuffer, D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER));

		// now we execute the command list to upload the initial assets (triangle data)
		m_pCommandList->Close();
		ID3D12CommandList* ppCommandLists[] = { m_pCommandList };
		m_pCommandQueue->ExecuteCommandLists(_countof(ppCommandLists), ppCommandLists);

		// increment the fence value now, otherwise the buffer might not be uploaded by the time we start drawing
		++m_FenceValue[m_FrameIndex];
		hr = m_pCommandQueue->Signal(m_paFence[m_FrameIndex], m_FenceValue[m_FrameIndex]);
		if (FAILED(hr)) { m_pWindow->Close(); }

		// create a vertex buffer view for the triangle. we get the gpu memory address of the vertex buffer using the GetGPUVirtualAddress() method
		m_VertexBufferView.BufferLocation = m_pVertexBuffer->GetGPUVirtualAddress();
		m_VertexBufferView.StrideInBytes = sizeof(Vertex);
		m_VertexBufferView.SizeInBytes = vBufferSize;

		// fill out the viewport
		m_Viewport.TopLeftX = 0;
		m_Viewport.TopLeftY = 0;
		m_Viewport.Width = windowWidth;
		m_Viewport.Height = windowHeight;
		m_Viewport.MinDepth = 0.0f;
		m_Viewport.MaxDepth = 1.0f;

		// fill out a scissor rect
		m_ScissorRect.left = 0;
		m_ScissorRect.top = 0;
		m_ScissorRect.right = windowWidth;
		m_ScissorRect.bottom = windowHeight;

		return true;
	}

	void Renderer::Tick()
	{
		// update app logic, such as moving the camera or figuring out what objects are in view (for now)
	}

	void Renderer::TickPipeline()
	{
		HRESULT hr;

		// we have to wait for the gpu to finish with the command allocator before we can reset it
		WaitForPreviousFrame();

		// we can only reset the allocator once the gpu is done with it;
		// resetting an allocator frees the memory the command list was stored in
		hr = m_paCommandAllocator[m_FrameIndex]->Reset();
		if (FAILED(hr)) { m_pWindow->Close(); }

		// reset the command list. by resetting the command list we put it into
		// a recording state so we can start recording commands into the command allocator.
		// the command allocator that we reference here may have multiple command lists
		// associated with it, but only one can be in the recording state at any time. make sure
		// that any other command lists associated with this command allocator are in
		// the closed state (not recording).
		// here you pass the initial pipeline state object as the second parameter;
		// if we were only clearing the rtv we would not actually need
		// anything but an initial default pipeline, which is what we would get by setting
		// the second parameter to NULL
		hr = m_pCommandList->Reset(m_paCommandAllocator[m_FrameIndex], m_pPipelineStateObject);
		if (FAILED(hr)) { m_pWindow->Close(); }

		// here we start recording commands into the command list (all the commands will be stored in the command allocator)

		// transition the "frame index" render target from the present state to the render target state so the command list draws to it starting from here
		m_pCommandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(m_paRenderTargets[m_FrameIndex], D3D12_RESOURCE_STATE_PRESENT, D3D12_RESOURCE_STATE_RENDER_TARGET));

		// here we again get the handle to our current render target view so we can set it as the render target in the output merger stage of the pipeline
		CD3DX12_CPU_DESCRIPTOR_HANDLE rtvHandle(m_pRtvDescriptorHeap->GetCPUDescriptorHandleForHeapStart(), m_FrameIndex, m_RtvDescriptorSize);

		// set the render target for the output merger stage (the output of the pipeline)
		m_pCommandList->OMSetRenderTargets(1, &rtvHandle, FALSE, nullptr);

		// clear the render target using the ClearRenderTargetView command
		const float clearColor[] = { 0.0f, 0.2f, 0.4f, 1.0f };
		m_pCommandList->ClearRenderTargetView(rtvHandle, clearColor, 0, nullptr);

		// draw the triangle
		m_pCommandList->SetGraphicsRootSignature(m_pRootSignature);                  // set the root signature
		m_pCommandList->RSSetViewports(1, &m_Viewport);                              // set the viewports
		m_pCommandList->RSSetScissorRects(1, &m_ScissorRect);                        // set the scissor rects
		m_pCommandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST); // set the primitive topology
		m_pCommandList->IASetVertexBuffers(0, 1, &m_VertexBufferView);               // set the vertex buffer (using the vertex buffer view)
		m_pCommandList->DrawInstanced(3, 1, 0, 0);                                   // finally draw 3 vertices (draw the triangle)

		// transition the "frame index" render target from the render target state to the present state.
		// if the debug layer is enabled, you will receive a warning if present is called on a render target that is not in the present state
		m_pCommandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(m_paRenderTargets[m_FrameIndex], D3D12_RESOURCE_STATE_RENDER_TARGET, D3D12_RESOURCE_STATE_PRESENT));

		hr = m_pCommandList->Close();
		if (FAILED(hr)) { m_pWindow->Close(); }
	}

	void Renderer::Draw()
	{
		HRESULT hr;

		TickPipeline(); // update the pipeline by sending commands to the command queue

		// create an array of command lists (only one command list here)
		ID3D12CommandList* ppCommandList[] = { m_pCommandList };

		// execute the array of command lists
		m_pCommandQueue->ExecuteCommandLists(_countof(ppCommandList), ppCommandList);

		// this command goes in at the end of our command queue. we will know our command queue
		// has finished because the fence value will be set to "fenceValue" by the GPU, since the command
		// queue is being executed on the GPU
		hr = m_pCommandQueue->Signal(m_paFence[m_FrameIndex], m_FenceValue[m_FrameIndex]);
		if (FAILED(hr)) { m_pWindow->Close(); }

		// present the current backbuffer
		hr = m_pSwapChain->Present(0, 0);
		if (FAILED(hr)) { m_pWindow->Close(); }
	}

	void Renderer::Shutdown()
	{
		CloseHandle(m_FenceEvent);

		// wait for the gpu to finish all frames
		for (int i = 0; i < m_FrameBufferCount; ++i)
		{
			m_FrameIndex = i;
			WaitForPreviousFrame();
		}

		// get the swapchain out of fullscreen before exiting
		BOOL fs = false;
		if (m_pSwapChain->GetFullscreenState(&fs, NULL))
			m_pSwapChain->SetFullscreenState(false, NULL);

		SAFE_RELEASE(m_pDevice);
		SAFE_RELEASE(m_pSwapChain);
		SAFE_RELEASE(m_pCommandQueue);
		SAFE_RELEASE(m_pRtvDescriptorHeap);
		SAFE_RELEASE(m_pCommandList);

		for (int i = 0; i < m_FrameBufferCount; ++i)
		{
			SAFE_RELEASE(m_paRenderTargets[i]);
			SAFE_RELEASE(m_paCommandAllocator[i]);
			SAFE_RELEASE(m_paFence[i]);
		}

		SAFE_RELEASE(m_pPipelineStateObject);
		SAFE_RELEASE(m_pRootSignature);
		SAFE_RELEASE(m_pVertexBuffer);
	}

	void Renderer::WaitForPreviousFrame()
	{
		HRESULT hr;

		// swap to the current rtv buffer so we draw on the correct buffer
		m_FrameIndex = m_pSwapChain->GetCurrentBackBufferIndex();

		// if the fence's completed value is still less than "fenceValue", the GPU has not finished executing
		// the command queue, since it has not reached the "CommandQueue->Signal(fence, fenceValue)" command
		if (m_paFence[m_FrameIndex]->GetCompletedValue() < m_FenceValue[m_FrameIndex])
		{
			// we have the fence create an event which is signaled once the fence's current value is "fenceValue"
			hr = m_paFence[m_FrameIndex]->SetEventOnCompletion(m_FenceValue[m_FrameIndex], m_FenceEvent);
			if (FAILED(hr)) { m_pWindow->Close(); }

			// wait until the fence has triggered the event signalling that its current value has reached "fenceValue";
			// once it has, we know the command queue has finished executing
			WaitForSingleObject(m_FenceEvent, INFINITE);
		}

		// increment fenceValue for the next frame
		++m_FenceValue[m_FrameIndex];
	}
}
[/spoiler]

Util.h
[spoiler]
#pragma once

#ifndef WIN32_LEAN_AND_MEAN
#define WIN32_LEAN_AND_MEAN // Exclude rarely used stuff from the windows header
#endif

#include <Windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <d3dcompiler.h>
#include <DirectXMath.h>
#include "d3dx12.h"

#include <iostream>
#include <vector>
#include <string>
#include <assert.h>

#define SAFE_RELEASE(p) { if((p)) { (p)->Release(); (p) = 0; } }
#define SAFE_DELETE(a) { if((a) != NULL) { delete (a); (a) = NULL; } }

struct Vertex
{
	DirectX::XMFLOAT3 pos;
};
[/spoiler]

VertexShader.hlsl
[spoiler]
float4 main(float3 pos : POSITION) : SV_POSITION
{
	// just pass the vertex position straight through
	return float4(pos, 1.0f);
}
[/spoiler]

PixelShader.hlsl
[spoiler]
float4 main() : SV_TARGET
{
	// return green
	return float4(0.0f, 1.0f, 0.0f, 1.0f);
}
[/spoiler]
  15. OpenGL texture problem

    Oh, so you can't supply the vertices to draw with elements and the texture coords separately. And if I use separate vertices, isn't it going to be laggy?