Showing results for tags 'DX11'.



Found 1000 results

  1. Gnollrunner

    Mountain Ranges

    For this entry we implemented the ubiquitous ridged multi-fractal function. It's not so interesting in and of itself, but it does highlight a few features that were included in our voxel engine. First, as we mentioned, being a voxel engine, it supports full 3D geometry (caves, overhangs and so forth) and not just height-maps. However, if we look at a typical world these features are the exception rather than the rule. It therefore makes sense to optimize the height-map portion of our terrain functions. This is especially true since our voxels are vertically aligned. This means that there will be many places where the same height calculation is repeated. Even if we look at a single voxel, nearly the same calculation is used for a lower corner and its corresponding upper corner, the only difference being the subtraction from the voxel vertex position. ......

    Enter the unit sphere! In our last entry we talked about explicit voxels, with edges and faces and vertexes. However, all edges and faces are not created equal. Horizontal faces (in our case the triangular faces) and horizontal edges contain a special pointer that references their corresponding parts in a unit sphere. The unit sphere can be thought of as residing in the center of each planet. Like our world octree, it is formed from a subdivided icosahedron, only it is not extruded and is organized into a quadtree instead of an octree, being more 2D in nature. Vertexes in our unit sphere can be used to cache height-map function values to avoid repeated calculations. We also use our unit sphere to help with the horizontal part of our voxel subdivision operation. By referencing the unit sphere we only have to multiply a unit sphere vertex by a height value to generate voxel vertex coordinates. Finally, our unit sphere is also used to provide coordinates during the ghost-walking process we talked about in our first entry. Without it, our ghost-walking would be more computationally expensive, as it would have to calculate spherical coordinates on each iteration instead of just calculating heights, which are quite simple to calculate as they are all generated by simply averaging two other heights.

    Ownership of unit sphere faces is a bit complex. Ostensibly they are owned by all voxel faces that reference them (and therefore add to their reference counter). However, this presents a bit of a problem, as they are also used in ghost-walking, which happens every LOD/re-chunking iteration, and in fact they may or may not end up being referenced by voxel faces, depending on whether mesh geometry is found. Even if no geometry is found we may want to keep them for the next ghost-walk search. To solve this problem, we implemented undead objects. Unit sphere faces can become undead, and can even be created that way if they are built by the ghost-walker. When they are undead they are kept in a special list which keeps them pseudo-alive. They also have an undead life value associated with them. When they are touched by the ghost-walker that value is renewed. However, if after a few iterations they are untouched, they become truly dead and are destroyed.

    Picture time again..... So here is our ridged multi-fractal in wireframe. We'll flip it around to show our level transition........ Here's a place that needs a bit of work. The chunk level transitions are correct, but they are probably a bit more complex than they need to be. We use a very general voxel tessellation algorithm since we have to handle various combinations of vertical and horizontal transitions. We will probably optimize this later, especially for the common cases, but for now it serves its purpose. Next up we are going to try to add threads. We plan to use a separate thread (or threads) for the LOD/re-chunk operations, and another one for the graphics.
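
    A minimal sketch of the height-caching idea described above (all names are illustrative, not from the actual engine): each unit-sphere vertex stores its unit direction and a lazily evaluated height, and a voxel vertex position is produced by scaling the cached direction by that height.

        #include <functional>

        struct Vec3 { float x, y, z; };
        inline Vec3 operator*(const Vec3& v, float s) { return { v.x * s, v.y * s, v.z * s }; }

        // Height as a function of a unit direction (the height-map function).
        using TerrainFunc = std::function<float(const Vec3&)>;

        // Each unit-sphere vertex caches the terrain function's value, so the
        // expensive fractal evaluation runs once per direction, not once per
        // voxel corner that shares it.
        struct UnitSphereVertex {
            Vec3  dir;                 // unit vector from the planet center
            float height = 0.0f;       // cached height-map value
            bool  heightValid = false; // lazily evaluated

            float GetHeight(const TerrainFunc& f) {
                if (!heightValid) { height = f(dir); heightValid = true; }
                return height;
            }
        };

        // A voxel vertex position is just the cached direction scaled by the height.
        Vec3 VoxelVertexPosition(UnitSphereVertex& v, const TerrainFunc& f) {
            return v.dir * v.GetHeight(f);
        }
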
  2. Gnollrunner

    Bumpy World

    After a LOT of bug fixes, and some algorithm changes, our octree marching prisms algorithm is now in a much better state. We added a completely new way of determining chunk level transitions, but before discussing it we will first talk a bit more about our voxel octree.

    Our octree is very explicit. By that we mean it is built up of actual geometric components. First we have voxel vertexes (not to be confused with mesh vertexes) for the corners of voxels. Then we have voxel edges that connect them. Then we have voxel faces, which are made up of a set of voxel edges (either three or four), and finally we have the voxels themselves, which reference our faces. Currently we support prismatic voxels, since they make the best looking world; however, the lower level constructs are designed to also support the more common cubic voxels. In addition to our octree of voxels, voxel faces are kept in quadtrees, while voxel edges are organized into binary trees. Everything is pretty much interconnected, and there is a reference counting system that handles deallocation of unused objects.

    So why go through all this trouble? The answer is that by doing things this way we can avoid traversing the octree when building meshes using our marching prisms algorithms. For instance, if there is a mesh edge on a voxel face, since that face is referenced by the voxels on either side of it, we can easily connect together mesh triangles generated in both voxels. The same goes for voxel edges. A mesh vertex on a voxel edge is shared by all voxels that reference it. So in short, seamless meshes are built in place with little effort. This is important since meshes will be constantly recalculated for LOD as a player moves around.

    This brings us to chunking. As we talked about in our first entry, a chunk is nothing more than a sub-section of the octree. Most importantly, we need to know where there are up and down chunk transitions. Here our face quadtrees and edge binary trees help us out. From the top of any chunk we can quickly traverse the quadtrees and binary trees and tag faces and edges as transition objects. The algorithm is quite simple since we know there will only be one level difference between chunks, and therefore if there is a level difference, one level will be odd and the other even. So we can tag our edges and faces with up to two chunk levels in a 2-element array indexed by the last bit of the chunk level (see the sketch below). After going down the borders of each chunk, border objects will now have one of two states: they will be tagged with a single level, or with two levels, one being one higher than the other. From this we can now generate transition voxels with no more need to look at a voxel's neighboring voxels.

    One more note about our explicit voxels: since they are in fact explicit, there is no requirement that they form a regular grid. As we said before, our world grid is basically wrapped around a sphere, which gives us fairly uniform terrain no matter where you are on the globe. Hopefully in the future we can also use this versatility to build trees.

    Ok, so it's picture time......... We added some 3D simplex noise to get something that isn't a simple sphere. Hopefully in our next entry we will try a multi-fractal.
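
    A minimal sketch of the tagging scheme described above (names are illustrative, not from the actual engine): because neighboring chunks differ by at most one level, one side's level is even and the other's odd, so the low bit of the level picks a slot.

        // Illustrative sketch of tagging border faces/edges with up to two
        // chunk levels, indexed by the low bit of the level.
        struct BorderTag {
            int level[2] = { -1, -1 };              // slot = chunkLevel & 1

            void Tag(int chunkLevel) { level[chunkLevel & 1] = chunkLevel; }

            // Both slots tagged means two chunks of different levels share
            // this face/edge, i.e. it lies on an up/down transition.
            bool IsTransition() const { return level[0] >= 0 && level[1] >= 0; }
        };
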
  3. Hey everyone! I implemented additive lighting, so I basically have a full-screen texture that is the sum of all light colors (per pixel, of course). Now I have the problem that if I place too many camp fires next to each other, they will add up to more than 1.0. If I just clip it at 1.0, it will look weird, because the camp fires shine orange light, which becomes green-ish if you clip the red to 1.0. I also want the camp fire to be much less bright than the sun, but if I make it only 0.1 bright, it will be too dark at night. What can I do to fix this? I looked for high dynamic range and stuff, but I didn't find anything other than the HDR functionality of DirectX 12. In fact, I didn't find any good tutorial or article at all, which is why I'm asking here. Cheers, Magogan
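
    What this post is describing is usually solved with tone mapping: accumulate light in a floating-point render target, then compress the unbounded result into [0, 1] at the end of the frame instead of clamping. A minimal sketch of the classic Reinhard operator, written as plain C++ for clarity (in practice this runs in a pixel shader over the HDR buffer; the exposure parameter is an assumed tuning knob, not from the post):

        struct Color { float r, g, b; };

        // Reinhard tone mapping: maps [0, inf) smoothly into [0, 1), so very
        // bright areas roll off gradually instead of clipping per channel.
        Color Tonemap(Color hdr, float exposure)
        {
            auto map = [&](float c) { c *= exposure; return c / (1.0f + c); };
            return { map(hdr.r), map(hdr.g), map(hdr.b) };
        }
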
  4. Hi everyone! I have a problem drawing multiple objects. If I load and draw only one model, it draws correctly, but if I add more objects, some of them draw partially, and I get the warning ID3D11DeviceContext::DrawIndexed: Index buffer has not enough space! [ EXECUTION WARNING #359: DEVICE_DRAW_INDEX_BUFFER_TOO_SMALL]. If I draw only one model, I don't get this warning. For each model, in the render function I do the following:

        deviceContext->PSSetSamplers(0, 1, &m_samplerState);
        deviceContext->IASetInputLayout(m_layout);
        deviceContext->VSSetShader(m_vertexShader, NULL, 0);
        deviceContext->PSSetShader(m_pixelShader, NULL, 0);
        deviceContext->DrawIndexed(indexDrawAmount, indexStart, 0);

        unsigned int stride;
        unsigned int offset;
        stride = sizeof(VertexType);
        offset = 0;
        deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
        deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, offset);
        deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

        deviceContext->PSSetSamplers(0, 0, NULL);
        deviceContext->IASetInputLayout(NULL);
        deviceContext->VSSetShader(NULL, NULL, 0);
        deviceContext->PSSetShader(NULL, NULL, 0);

Each model has its own vertex and index buffers, which are initialized when I load the .obj file. Each index is an unsigned int. When I try to use the Graphics Debugger in Visual Studio, I catch an unhandled exception on the line deviceContext->DrawIndexed(indexDrawAmount, indexStart, 0); dx11 2018-10-05 18-44-58-83.bmp
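
    For reference, a typical per-model draw in D3D11 binds the input-assembler buffers before issuing the draw call; a hedged sketch using the same member names as the post:

        // Bind this model's buffers first, then draw.
        UINT stride = sizeof(VertexType);
        UINT offset = 0;
        deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
        deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);
        deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        deviceContext->IASetInputLayout(m_layout);
        deviceContext->VSSetShader(m_vertexShader, nullptr, 0);
        deviceContext->PSSetShader(m_pixelShader, nullptr, 0);
        deviceContext->PSSetSamplers(0, 1, &m_samplerState);
        // DEVICE_DRAW_INDEX_BUFFER_TOO_SMALL means indexStart + indexDrawAmount
        // reads past the end of whatever index buffer is bound at this moment,
        // so verify both values against the buffer bound for *this* model.
        deviceContext->DrawIndexed(indexDrawAmount, indexStart, 0);
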
  5. I am learning DirectX 11. I used the FBX SDK to import objects, but the position of part of the object is wrong. When I open the FBX file in Blender, it doesn't have this problem. Did I do something wrong? Thank you.
  6. Hello guys, I'm using SharpDX 4 and wonder why the matrix class uses row_major matrices, when HLSL defaults to column_major. Can anyone tell me if there is a specific reason why the matrix class uses exactly the opposite matrix order?
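
    For context, the two usual ways to bridge the conventions are to transpose on the CPU before uploading, or to declare matrices row_major in HLSL. A sketch of the transpose route in DirectXMath terms (SharpDX's Matrix.Transpose does the equivalent):

        #include <DirectXMath.h>
        using namespace DirectX;

        struct CBPerObject { XMMATRIX wvp; };

        // XMMATRIX (like SharpDX's Matrix) is row-major in memory, while HLSL
        // consumes column_major by default, so transpose just before upload.
        void FillConstants(CBPerObject& cb, FXMMATRIX world, CXMMATRIX view, CXMMATRIX proj)
        {
            cb.wvp = XMMatrixTranspose(world * view * proj);
        }
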
  7. Hello there. First post :). I'm trying to perform cellular automata on the GPU without double buffering. I came across a very weird problem, though, that I could simplify into a few lines of code:

        for (int x = 0; x < max; ++x) {
            for (int y = 1; y < max; ++y) {
                uint down = map[int2(x, y - 1)];
                if (!down) {
                    map[int2(x, y - 1)] = 64;
                    map[int2(x, y)] = 0;
                }
            }
        }

map is `RWTexture2D<uint> map : register(u0);`. It's dispatched with (1,1,1) and numthreads (1,1,1), so it loops over the whole area (in this case 256*256) in a single thread (I know this isn't good, but it's the simplest way to repro the issue I could come up with). It works fine when `max` is passed in a constant buffer; however, as soon as I hardcode that value simply by adding `int max = 256;` (and removing it from the cbuffer), the shader simply crashes. I think it could be some compiler optimization gone wrong, but I'm compiling debug shaders. Do you guys have any idea what could be causing that? The shader compiles fine and there's no error whatsoever; it simply hangs forever when I hardcode that value. I am making sure that the value is exactly the same as the one passed in the constant buffer. I have no idea what kind of nonsense is going on there. Thanks in advance
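
    For reference, the working path the post describes (feeding `max` through a constant buffer) looks roughly like this on the C++ side, assuming the device and context are already created; the buffer slot and names are illustrative:

        #include <d3d11.h>

        // Constant buffers must be a multiple of 16 bytes, hence the padding.
        struct CSConstants { int maxDim; int pad[3]; };

        CSConstants data = { 256, { 0, 0, 0 } };
        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth = sizeof(CSConstants);
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
        D3D11_SUBRESOURCE_DATA init = { &data, 0, 0 };

        ID3D11Buffer* constants = nullptr;
        device->CreateBuffer(&desc, &init, &constants);
        context->CSSetConstantBuffers(0, 1, &constants);   // cbuffer at b0
        context->Dispatch(1, 1, 1);
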
  8. Hi guys, I have just noticed my project is using quite a bit of memory and I have narrowed it down to draw calls. I am drawing quads with 32px by 32px images with R8G8B8A8 formatting. The back buffer is formatted the same also. So in theory the sprite is using 4 KB of RAM. I notice, though, that each individual draw call I make uses 13 MB of RAM. Draw 10 quads and it blows up to 130 MB of RAM (Release mode). I checked using the VS graphics debugger and the render sequence is extremely lean at only a couple of calls. My draw routine is pretty compact also:

        // Draw sprite
        float attributes = 4.0f; // X, Y, U, V
        unsigned int stride = (unsigned int)(sizeof(float) * attributes);
        unsigned int offset = 0;
        d3dContext->IASetInputLayout(d3dInputLayoutDefault);
        d3dContext->IASetVertexBuffers(0, 1, &d3dVertexBufferDynamic, &stride, &offset);
        d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        d3dContext->Draw(6, 0);

The RAM usage is the same whether I use single or double buffering on the back buffer, and I am not using any surfaces. Any ideas why each draw call costs 13 MB? Or is this normal behaviour that I just haven't picked up on before? Thanks in advance
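
    One way to check where the memory is actually going is to query DXGI's video-memory counters (Windows 10 / DXGI 1.4) before and after a batch of draws; a sketch:

        #include <dxgi1_4.h>
        #include <cstdio>

        // Report current local (GPU) memory usage for the first adapter.
        void PrintGpuMemoryUsage(IDXGIFactory1* factory)
        {
            IDXGIAdapter* adapter = nullptr;
            if (FAILED(factory->EnumAdapters(0, &adapter)))
                return;

            IDXGIAdapter3* adapter3 = nullptr;
            if (SUCCEEDED(adapter->QueryInterface(__uuidof(IDXGIAdapter3), (void**)&adapter3)))
            {
                DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
                adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);
                printf("GPU memory in use: %llu bytes\n", (unsigned long long)info.CurrentUsage);
                adapter3->Release();
            }
            adapter->Release();
        }
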
  9. Some computer configurations have multiple GPUs, e.g. a gaming laptop with an Intel HD Graphics chip and a GeForce or Radeon chip. When enumerating the available adapters on a computer, the Intel chip is the first one in most cases. However, I want to use the best adapter for my game, so I wrote the code below to create my swap chain:

        protected void CreateDevice(Size clientSize, IntPtr outputHandle)
        {
            ProcessLogger.Instance.StartFunction(this, "CreateDevice");

            // Set swap chain flags, DXGI format and default refresh rate.
            _swapChainFlags = SharpDX.DXGI.SwapChainFlags.None;
            _dxgiFormat = SharpDX.DXGI.Format.R8G8B8A8_UNorm;
            SharpDX.DXGI.Rational refreshRate = new SharpDX.DXGI.Rational(60, 1);

            // Get proper video adapter and create device and swap chain.
            using (var factory = new SharpDX.DXGI.Factory1())
            {
                SharpDX.DXGI.Adapter adapter = GetAdapter(factory);
                if (adapter != null)
                {
                    ProcessLogger.Instance.Write(String.Format("Selected adapter: {0}", adapter.Description.Description));

                    // Get refresh rate.
                    refreshRate = GetRefreshRate(adapter, _dxgiFormat, refreshRate);
                    ProcessLogger.Instance.Write(String.Format("Selected refresh rate = {0}/{1} ({2})", refreshRate.Numerator, refreshRate.Denominator, refreshRate.Numerator / refreshRate.Denominator));

                    // Create Device and SwapChain
                    ProcessLogger.Instance.Write("Create device.");
                    _device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport, new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 });
                    ProcessLogger.Instance.Write("Create swap chain.");
                    _swapChain = new SharpDX.DXGI.SwapChain(factory, _device, GetSwapChainDescription(clientSize, outputHandle, refreshRate));
                    ProcessLogger.Instance.Write("Store device context.");
                    _deviceContext = _device.ImmediateContext;
                }
            }
            ProcessLogger.Instance.EndFunction(this, "CreateDevice");
        }

For this function to work properly, I have to select the proper adapter in GetAdapter. Here is what it looks like:

        private SharpDX.DXGI.Adapter GetAdapter(SharpDX.DXGI.Factory1 factory)
        {
            List<SharpDX.DXGI.Adapter> adapters = new List<SharpDX.DXGI.Adapter>();
            for (int i = 0; i < factory.GetAdapterCount(); i++)
            {
                SharpDX.DXGI.Adapter adapter = factory.GetAdapter(i);
                if (SharpDX.Direct3D11.Device.IsSupportedFeatureLevel(adapter, SharpDX.Direct3D.FeatureLevel.Level_10_1))
                    adapters.Add(adapter);
            }

            try
            {
                foreach (var adapter in adapters)
                    if (adapter.Description.Description != null &&
                        (adapter.Description.Description.Contains("GeForce") || adapter.Description.Description.Contains("Radeon")))
                    {
                        return adapter;
                    }
            }
            catch { }

            return adapters.First();
        }

So all I am doing is asking my Factory1 for a list of all adapters that support feature level 10.1 and searching for a "GeForce" or "Radeon" one. Very simple, and I use that one. This works like a charm. HOWEVER, there is one big problem: when I use this code in the Release build, the game crashes when going to fullscreen using the code below.

        public void SetFullscreenState(bool isFullscreen)
        {
            if (isFullscreen != _swapChain.IsFullScreen)
                _swapChain.SetFullscreenState(isFullscreen, null);
        }

The error code is DXGI_ERROR_UNSUPPORTED, and after doing some research I found out that this problem only happens for the Release build, not for the Debug build. The Debug build works like a charm. It also only crashes if the selected adapter is not the first one in the list. So, if I use the first adapter (factory.GetAdapter(0)), it works! If I change my computer settings so that my GeForce is used as the primary adapter for my game, it works. It only fails for the Release build when the selected adapter is not the first adapter in the list, and I can't figure out why... The problem is independent of the screen used and of other applications running in a multi-window setup; I already tested that.
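
    A more robust heuristic than matching adapter names is to pick the adapter with the most dedicated video memory. A sketch in raw DXGI/C++ (SharpDX exposes the same DedicatedVideoMemory field on the adapter description):

        #include <dxgi.h>

        // Choose the adapter with the most dedicated video memory instead of
        // string-matching on "GeForce"/"Radeon".
        IDXGIAdapter* PickAdapterByMemory(IDXGIFactory1* factory)
        {
            IDXGIAdapter* best = nullptr;
            SIZE_T bestMemory = 0;
            IDXGIAdapter* adapter = nullptr;
            for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
            {
                DXGI_ADAPTER_DESC desc;
                adapter->GetDesc(&desc);
                if (desc.DedicatedVideoMemory > bestMemory)
                {
                    if (best) best->Release();
                    best = adapter;
                    bestMemory = desc.DedicatedVideoMemory;
                }
                else
                {
                    adapter->Release();
                }
            }
            return best; // caller releases
        }
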
  10. romashka911

    DX11 Texture lagging

    It's me again; I hope I'm not being annoying... I have trouble with textures... I'm asking here because I will never find the answer on Google... diffuseLight.vs diffuseLight.ps
  11. There are two entities in the game and we control one of them. The other entity moves in a particular direction throughout the game, and I want to make AI so that the enemy shoots at my player after some duration of time. I am using the DirectX 10 SDK for this. I think I would need to calculate the distance between the two entities and shoot towards the player, so I would need the distance between the two position vectors and the direction from A towards B. How do I calculate the direction between the two?
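
    The direction from A to B is just the normalized difference of the two position vectors; with the D3DX math that ships with the DirectX SDK, a sketch:

        #include <d3dx10math.h>

        // Distance and unit direction from shooter A toward target B.
        D3DXVECTOR3 DirectionAToB(const D3DXVECTOR3& a, const D3DXVECTOR3& b, float* outDistance)
        {
            D3DXVECTOR3 dir = b - a;               // vector pointing from A to B
            *outDistance = D3DXVec3Length(&dir);   // distance between the entities
            D3DXVec3Normalize(&dir, &dir);         // unit-length direction to shoot along
            return dir;
        }
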
  12.
        D3DXVECTOR3 m_pos;
        D3DXVECTOR3 pos;
        D3DXVECTOR3 lookAt;
        D3DXVECTOR3 up;
        D3DXMATRIX m_cameraMatrix;
        D3DXMATRIX translate, result;

        D3DXMatrixLookAtRH(&m_cameraMatrix, &pos, &lookAt, &up);
        D3DXMatrixIdentity(&translate);
        D3DXMatrixTranslation(&translate, m_pos.x, m_pos.y, m_pos.z);
        result = m_cameraMatrix * translate;
        return result;

I want to move the camera to the m_pos position, as I do with a world matrix, but it seems it doesn't work. Any ideas?
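
    One thing to keep in mind (hedged, since the full code isn't shown): a view matrix is the inverse of the camera's world transform, so multiplying it by a world-style translation moves the world rather than the camera. Moving the eye and look-at points before building the look-at matrix avoids the issue:

        // Offset the camera itself, then rebuild the view matrix.
        D3DXVECTOR3 eye    = pos    + m_pos;   // camera position moved by m_pos
        D3DXVECTOR3 target = lookAt + m_pos;   // keep the viewing direction unchanged
        D3DXMATRIX view;
        D3DXMatrixLookAtRH(&view, &eye, &target, &up);
        return view;
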
  13. Ok, so I had some code using the old DXSDK. I went to Windows 10 and updated my code. I also made some changes to where I stored the vector that holds the vertex positions, but that's pretty much it. Now my model is not showing up. So I ran RenderDoc. RenderDoc can see my vertices and indices and the finished mesh when I click the draw call... see the attached image. When I run from VS2017 I can't see anything except the background color. I thought it was because of where my camera was positioned, so I changed that and I still can't see anything. The camera is sitting slightly back from the origin... shouldn't the model be there? That's where it was before I made the changes to the source to update it for Windows 10. I made the model file and have made no changes to it since it worked before. Here is the source code and shader:

        #include "Source.h"
        #include "DDSTextureLoader.h"

        //Global Declarations - Interfaces//
        IDXGISwapChain* SwapChain;
        ID3D11Device* d3d11Device;
        ID3D11DeviceContext* d3d11DevCon;
        ID3D11RenderTargetView* renderTargetView;
        ID3D11DepthStencilView* depthStencilView;
        ID3D11Texture2D* depthStencilBuffer;
        ID3D11Resource* tex;
        ID3D11RasterizerState* rasterState;
        ID3D11VertexShader* VS;
        ID3D11PixelShader* PS;
        ID3DBlob* VS_Blob;
        ID3DBlob* PS_Blob;
        ID3D11InputLayout* vertLayout;
        ID3D11Buffer* fbx_vertex_buf;
        ID3D11Buffer* fbx_index_buf;
        ID3D11Buffer* cbPerObjectBuffer;
        ID3D11ShaderResourceView* fbx_rc_view;
        ID3D11SamplerState* fbx_sampler_state;
        ID3D11ShaderResourceView* normal_rc_view;
        std::vector<MeshData> meshList;
        myConsole* con;

        //Global Declarations - Others//
        HWND hwnd = NULL;
        HRESULT hr;
        int Width = 300;
        int Height = 300;
        DirectX::XMMATRIX WVP;
        ///////////////**************new**************////////////////////
        DirectX::XMMATRIX mesh_world;
        ///////////////**************new**************////////////////////
        DirectX::XMMATRIX camView;
        DirectX::XMMATRIX camProjection;
        DirectX::XMVECTOR camPosition;
        DirectX::XMVECTOR camTarget;
        DirectX::XMVECTOR camUp;
        ///////////////**************new**************////////////////////
        DirectX::XMMATRIX Rotation;
        DirectX::XMMATRIX Scale;
        DirectX::XMMATRIX Translation;
        float rot = 0.01f;
        ///////////////**************new**************////////////////////

        //Create effects constant buffer's structure//
        struct cbPerObject
        {
            DirectX::XMMATRIX WVP;
        };
        cbPerObject cbPerObj;

        D3D11_INPUT_ELEMENT_DESC layout[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "TANGENT", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "BITANGENT", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 }
        };
        UINT numElements = ARRAYSIZE(layout);

        // Event table for MyFrame
        wxBEGIN_EVENT_TABLE(MyFrame, wxFrame)
            EVT_MENU(wxID_EXIT, MyFrame::OnQuit)
            EVT_CLOSE(MyFrame::OnClose)
        wxEND_EVENT_TABLE()

        // Implements MyApp& GetApp()
        DECLARE_APP(MyApp)

        // Give wxWidgets the means to create a MyApp object
        IMPLEMENT_APP(MyApp)

        bool MyApp::OnInit()
        {
            // Create the main application window
            MyFrame *frame = new MyFrame(wxT("Worgen Engine Version 0"));
            // Show it
            frame->Show(true);
            return true;
        }

        void MyFrame::OnQuit(wxCommandEvent& event)
        {
            // Destroy the frame
            Close();
        }

        void MyFrame::OnClose(wxCloseEvent& event)
        {
            timer->Stop();
            //Release the COM Objects we created
            SwapChain->Release();
            d3d11Device->Release();
            d3d11DevCon->Release();
            renderTargetView->Release();
            event.Skip();
        }

        MyFrame::MyFrame(const wxString& title) : wxFrame(NULL, wxID_ANY, title, wxDefaultPosition)
        {
            // Create a menu bar
            wxMenu *fileMenu = new wxMenu;
            // The "About" item should be in the help menu
            wxMenu *helpMenu = new wxMenu;
            helpMenu->Append(wxID_ABOUT, wxT("&About...\tF1"), wxT("ABout this program."));
            fileMenu->Append(wxID_EXIT, wxT("E&xit\tAlt - X"), wxT("Quit this program"));
            // Now append the freshly created menu to the menu bar...
            wxMenuBar *menuBar = new wxMenuBar();
            menuBar->Append(fileMenu, wxT("&File"));
            menuBar->Append(helpMenu, wxT("&Help"));
            // ... and attach this menu bar to the frame
            SetMenuBar(menuBar);
            // Create a status bar just for fun
            CreateStatusBar(2);
            SetStatusText(wxT("Welcome to Worgen Engine!"));
            nbHierarchy = new wxNotebook(this, wxID_ANY, wxDefaultPosition, wxSize(200, 300));
            nbScene = new wxNotebook(this, wxID_ANY, wxDefaultPosition, wxSize(800, 600));
            nbInspector = new wxNotebook(this, wxID_ANY, wxDefaultPosition, wxSize(200, 300));
            timer = new RenderTimer();
            console = new myConsole(wxSize(800, 300), wxTE_MULTILINE | wxTE_READONLY, this);
            timer->dxPanel = new MyDxPanel((MyFrame*)nbScene);
            wxPanel* hierarchyWindow = new wxPanel(nbHierarchy, wxID_ANY);
            nbHierarchy->AddPage(hierarchyWindow, "Hierarchy", false);
            nbScene->AddPage(timer->dxPanel, "Game", false);
            wxPanel* inspectorWindow = new wxPanel(nbInspector, wxID_ANY);
            nbInspector->AddPage(inspectorWindow, "Inspector", false);
            wxBoxSizer* sizer = new wxBoxSizer(wxHORIZONTAL);
            sizer->Add(nbHierarchy, 0, wxEXPAND, 0);
            sizer->Add(nbScene, 1, wxEXPAND, 0);
            sizer->Add(nbInspector, 0, wxEXPAND, 0);
            wxBoxSizer* console_sizer = new wxBoxSizer(wxVERTICAL);
            console_sizer->Add(sizer, 0, wxEXPAND, 0);
            console_sizer->Add(console, 0, wxEXPAND, 0);
            SetSizerAndFit(console_sizer);
            timer->dxPanel->c = console;
            timer->dxPanel->aLoader = new LoadMesh("C:\\Models\\wally.fbx", console, meshList);
            timer->dxPanel->initDx(timer->dxPanel->GetHWND());
            timer->dxPanel->initScene();
            timer->Start();
        }

        MyFrame::~MyFrame()
        {
            delete timer;
        }

        wxBEGIN_EVENT_TABLE(MyDxPanel, wxPanel)
            EVT_PAINT(MyDxPanel::OnPaint)
            EVT_ERASE_BACKGROUND(MyDxPanel::OnEraseBackground)
        wxEND_EVENT_TABLE()

        MyDxPanel::MyDxPanel(MyFrame* parent) : wxPanel(parent)
        {
            parentFrame = parent;
        }

        MyDxPanel::~MyDxPanel()
        {
        }

        void MyDxPanel::OnEraseBackground(wxEraseEvent &WXUNUSED(event))
        {
            //empty to avoid flashing
        }

        void MyDxPanel::updateScene()
        {
            rot += .05f;
            if (rot > 6.26f)
                rot = 0.0f;
            DirectX::XMVECTOR rotaxis = DirectX::XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
            Rotation = DirectX::XMMatrixRotationAxis(rotaxis, -rot);
            mesh_world = DirectX::XMMatrixIdentity();
        }

        void MyDxPanel::render()
        {
            //Clear our backbuffer
            float bgColor[4] = { 0.0f, 0.6f, 0.4f, 1.0f };
            d3d11DevCon->ClearRenderTargetView(renderTargetView, bgColor);
            //Refresh the Depth/Stencil view
            d3d11DevCon->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
            WVP = mesh_world * camView * camProjection;
            cbPerObj.WVP = DirectX::XMMatrixTranspose(WVP);
            d3d11DevCon->UpdateSubresource(cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0);
            d3d11DevCon->VSSetConstantBuffers(0, 1, &cbPerObjectBuffer);
            d3d11DevCon->PSSetShaderResources(0, 1, &fbx_rc_view);
            d3d11DevCon->PSSetSamplers(0, 1, &fbx_sampler_state);
            d3d11DevCon->DrawIndexed(meshList[0].indices.size(), 0, 0);
            //Present the backbuffer to the screen
            SwapChain->Present(0, 0);
        }

        void MyDxPanel::OnPaint(wxPaintEvent& event)
        {
            wxPaintDC dc(this);
            updateScene();
            render();
        }

        void MyDxPanel::initDx(HWND wnd)
        {
            //Describe our SwapChain Buffer
            DXGI_MODE_DESC bufferDesc;
            ZeroMemory(&bufferDesc, sizeof(DXGI_MODE_DESC));
            bufferDesc.Width = Width;
            bufferDesc.Height = Height;
            bufferDesc.RefreshRate.Numerator = 60;
            bufferDesc.RefreshRate.Denominator = 1;
            bufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            bufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
            bufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
            //Describe our SwapChain
            DXGI_SWAP_CHAIN_DESC swapChainDesc;
            ZeroMemory(&swapChainDesc, sizeof(DXGI_SWAP_CHAIN_DESC));
            swapChainDesc.BufferDesc = bufferDesc;
            swapChainDesc.SampleDesc.Count = 1;
            swapChainDesc.SampleDesc.Quality = 0;
            swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
            swapChainDesc.BufferCount = 1;
            swapChainDesc.OutputWindow = wnd;
            swapChainDesc.Windowed = TRUE;
            swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
            //Create our SwapChain
            hr = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, NULL, NULL, NULL, D3D11_SDK_VERSION, &swapChainDesc, &SwapChain, &d3d11Device, NULL, &d3d11DevCon);
            //Create our BackBuffer
            ID3D11Texture2D* BackBuffer;
            hr = SwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&BackBuffer);
            d3d11Device->CreateRenderTargetView(BackBuffer, NULL, &renderTargetView);
            //Describe our Depth/Stencil Buffer
            D3D11_TEXTURE2D_DESC depthStencilDesc;
            depthStencilDesc.Width = Width;
            depthStencilDesc.Height = Height;
            depthStencilDesc.MipLevels = 1;
            depthStencilDesc.ArraySize = 1;
            depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
            depthStencilDesc.SampleDesc.Count = 1;
            depthStencilDesc.SampleDesc.Quality = 0;
            depthStencilDesc.Usage = D3D11_USAGE_DEFAULT;
            depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
            depthStencilDesc.CPUAccessFlags = 0;
            depthStencilDesc.MiscFlags = 0;
            //Create the Depth/Stencil View
            d3d11Device->CreateTexture2D(&depthStencilDesc, NULL, &depthStencilBuffer);
            d3d11Device->CreateDepthStencilView(depthStencilBuffer, NULL, &depthStencilView);
            //Set our Render Target
            d3d11DevCon->OMSetRenderTargets(1, &renderTargetView, depthStencilView);
        }

        void MyDxPanel::initScene()
        {
            shaders();
            generateBuffers();
            setBuffers(0);
            setInputLayoutAndTopology();
            setViewport();
            setCameraInfo();
            createConstantBuffer();
            loadModelTexture("C:\\Models\\tex.DDS");
            createSamplerState();
            setRSState();
        }

        void MyDxPanel::shaders()
        {
            //Compile Shaders from shader file
            HR(D3DCompileFromFile(L"Effects.fx", 0, 0, "VS", "vs_5_0", 0, 0, &VS_Blob, 0));
            HR(D3DCompileFromFile(L"Effects.fx", 0, 0, "PS", "ps_5_0", 0, 0, &PS_Blob, 0));
            //Create the Shader Objects
            HR(d3d11Device->CreateVertexShader(VS_Blob->GetBufferPointer(), VS_Blob->GetBufferSize(), NULL, &VS));
            HR(d3d11Device->CreatePixelShader(PS_Blob->GetBufferPointer(), PS_Blob->GetBufferSize(), NULL, &PS));
            //Set Vertex and Pixel Shaders
            d3d11DevCon->VSSetShader(VS, 0, 0);
            d3d11DevCon->PSSetShader(PS, 0, 0);
        }

        void MyDxPanel::generateBuffers()
        {
            D3D11_BUFFER_DESC indexBufferDesc;
            ZeroMemory(&indexBufferDesc, sizeof(indexBufferDesc));
            indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
            indexBufferDesc.ByteWidth = meshList[0].indices.size() * sizeof(DWORD);
            indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
            indexBufferDesc.CPUAccessFlags = 0;
            indexBufferDesc.MiscFlags = 0;
            D3D11_SUBRESOURCE_DATA iinitData;
            iinitData.pSysMem = &meshList[0].indices[0];
            HR(d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &meshList[0].indexBuffer));

            D3D11_BUFFER_DESC vertexBufferDesc;
            ZeroMemory(&vertexBufferDesc, sizeof(vertexBufferDesc));
            vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
            vertexBufferDesc.ByteWidth = meshList[0].vertices.size() * sizeof(Vertex);
            vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
            vertexBufferDesc.CPUAccessFlags = 0;
            vertexBufferDesc.MiscFlags = 0;
            D3D11_SUBRESOURCE_DATA vertexBufferData;
            ZeroMemory(&vertexBufferData, sizeof(vertexBufferData));
            vertexBufferData.pSysMem = &meshList[0].vertices[0];
            HR(d3d11Device->CreateBuffer(&vertexBufferDesc, &vertexBufferData, &meshList[0].vertexBuffer));
        }

        void MyDxPanel::setBuffers(int i)
        {
            d3d11DevCon->IASetIndexBuffer(meshList[i].indexBuffer, DXGI_FORMAT_R32_UINT, 0);
            UINT stride = sizeof(Vertex);
            UINT offset = 0;
            d3d11DevCon->IASetVertexBuffers(0, 1, &meshList[i].vertexBuffer, &stride, &offset);
        }

        void MyDxPanel::setInputLayoutAndTopology()
        {
            HR(d3d11Device->CreateInputLayout(layout, numElements, VS_Blob->GetBufferPointer(), VS_Blob->GetBufferSize(), &vertLayout));
            d3d11DevCon->IASetInputLayout(vertLayout);
            d3d11DevCon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        }

        void MyDxPanel::setViewport()
        {
            //Create the Viewport
            D3D11_VIEWPORT viewport;
            ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT));
            viewport.TopLeftX = 0;
            viewport.TopLeftY = 0;
            viewport.Width = Width;
            viewport.Height = Height;
            viewport.MinDepth = 0.0f;
            viewport.MaxDepth = 1.0f;
            //Set the Viewport
            d3d11DevCon->RSSetViewports(1, &viewport);
        }

        void MyDxPanel::createConstantBuffer()
        {
            //Create the buffer to send to the cbuffer in effect file
            D3D11_BUFFER_DESC cbbd;
            ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC));
            cbbd.Usage = D3D11_USAGE_DEFAULT;
            cbbd.ByteWidth = sizeof(cbPerObject);
            cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
            cbbd.CPUAccessFlags = 0;
            cbbd.MiscFlags = 0;
            cbPerObj.WVP = DirectX::XMMatrixTranspose(mesh_world * camView * camProjection);
            D3D11_SUBRESOURCE_DATA initData;
            initData.pSysMem = &cbPerObj;
            HR(d3d11Device->CreateBuffer(&cbbd, &initData, &cbPerObjectBuffer));
        }

        void MyDxPanel::setCameraInfo()
        {
            camPosition = DirectX::XMVectorSet(-0.5f, -0.5f, 4.0f, 0.0f);
            camTarget = DirectX::XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f);
            camUp = DirectX::XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
            //Set the View matrix
            camView = DirectX::XMMatrixLookAtLH(camPosition, camTarget, camUp);
            //Set the Projection matrix
            camProjection = DirectX::XMMatrixPerspectiveFovLH(0.4f*3.14f, (float)Width / Height, 1.0f, 1000.0f);
        }

        void MyDxPanel::loadModelTexture(std::string path)
        {
            HR(DirectX::CreateDDSTextureFromFile(d3d11Device, L"C:\\Models\\tex.DDS", &tex, &fbx_rc_view));
        }

        void MyDxPanel::createSamplerState()
        {
            D3D11_SAMPLER_DESC sampDesc;
            ZeroMemory(&sampDesc, sizeof(sampDesc));
            sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
            sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
            sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
            sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
            sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
            sampDesc.MinLOD = 0;
            sampDesc.MaxLOD = D3D11_FLOAT32_MAX;
            //Create the Sample State
            HR(d3d11Device->CreateSamplerState(&sampDesc, &fbx_sampler_state));
        }

        void MyDxPanel::setRSState()
        {
            D3D11_RASTERIZER_DESC rasterizerState;
            rasterizerState.FillMode = D3D11_FILL_SOLID;
            rasterizerState.CullMode = D3D11_CULL_NONE;
            rasterizerState.FrontCounterClockwise = true;
            rasterizerState.DepthBias = false;
            rasterizerState.DepthBiasClamp = 0;
            rasterizerState.SlopeScaledDepthBias = 0;
            rasterizerState.DepthClipEnable = true;
            rasterizerState.ScissorEnable = true;
            rasterizerState.MultisampleEnable = false;
            rasterizerState.AntialiasedLineEnable = false;
            d3d11Device->CreateRasterizerState(&rasterizerState, &rasterState);
            d3d11DevCon->RSSetState(rasterState);
        }

        RenderTimer::RenderTimer() : wxTimer()
        {
        }

        void RenderTimer::Notify()
        {
            dxPanel->Refresh();
        }

        void RenderTimer::start()
        {
            wxTimer::Start(10);
        }

        cbuffer cbPerObject
        {
            float4x4 WVP;
        };

        Texture2D ObjTexture;
        SamplerState ObjSamplerState;

        struct VS_OUTPUT
        {
            float4 Pos : SV_POSITION;
            float2 TexCoord : TEXCOORD;
        };

        VS_OUTPUT VS(float4 inPos : POSITION, float2 inTexCoord : TEXCOORD)
        {
            VS_OUTPUT output;
            output.Pos = mul(inPos, WVP);
            output.TexCoord = inTexCoord;
            return output;
        }

        float4 PS(VS_OUTPUT input) : SV_TARGET
        {
            return ObjTexture.Sample(ObjSamplerState, input.TexCoord);
        }
  14. Hi! I'm writing my first DX11 engine, and now I'm trying to load an .obj file and draw it. Today I've already tried two classes from the web that should load an .obj into vertices, indices, etc. When I tried the first loader, I saw an issue, see the picture. I thought that this .obj loader was bad, and spent half a day implementing the other one, but the issue still exists. I use D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST and export from 3ds Max with triangulated faces. Maybe somebody can tell me why it draws like this? Thanks. dx11 2018-09-20 18-16-10-30.bmp dx11 2018-09-20 18-28-33-77.bmp
  15. Gnollrunner

    Anyone use Dear ImGui?

    I was looking for a GUI API I could use with DirectX 11 for my project. My search led me to Dear ImGui. I was wondering if anyone here has tried it and has any comments on it. I'm mostly interested in HUD stuff, but there will be some menus for inventory and the like. Also, if you know of some other API that I might look into, I would be interested in hearing about that too.
  16. I have set up normal shadow mapping, but the result is not so good, so I decided to do cascaded shadow mapping. Can anyone point me to a good source or place to start?
  17. Hello! I would like to introduce Diligent Engine, a project that I've been recently working on. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

Features:

  • True cross-platform
    • Exact same client code for all supported platforms and rendering backends
    • No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
    • No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
    • Exact same HLSL shaders run on all platforms and all backends
  • Modular design
    • Components are clearly separated logically and physically and can be used as needed
    • Only take what you need for your project (do not want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule)
    • No 15,000-lines-of-code files
  • Clear object-based interface
  • No global states
  • Key graphics features:
    • Automatic shader resource binding designed to leverage the next-generation rendering APIs
    • Multithreaded command buffer generation (50,000 draw calls at 300 fps with the D3D12 backend)
    • Descriptor, memory and resource state management
  • Modern C++ features to make code fast and reliable

The following platforms and low-level APIs are currently supported:

  • Windows Desktop: Direct3D11, Direct3D12, OpenGL
  • Universal Windows: Direct3D11, Direct3D12
  • Linux: OpenGL
  • Android: OpenGLES
  • MacOS: OpenGL
  • iOS: OpenGLES

API Basics

Initialization

The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

        #include "RenderDeviceFactoryD3D12.h"
        using namespace Diligent;

        // ...
        GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
        // Load the dll and import the GetEngineFactoryD3D12() function
        LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
        auto *pFactoryD3D12 = GetEngineFactoryD3D12();

        EngineD3D12Attribs EngD3D12Attribs;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
        EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

        RefCntAutoPtr<IRenderDevice> pRenderDevice;
        RefCntAutoPtr<IDeviceContext> pImmediateContext;
        SwapChainDesc SwapChainDesc;
        RefCntAutoPtr<ISwapChain> pSwapChain;
        pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
        pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);

Creating Resources

Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

        BufferDesc BuffDesc;
        BuffDesc.Name = "Uniform buffer";
        BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
        BuffDesc.Usage = USAGE_DYNAMIC;
        BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
        BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
        m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example:

        TextureDesc TexDesc;
        TexDesc.Name = "My texture 2D";
        TexDesc.Type = TEXTURE_TYPE_2D;
        TexDesc.Width = 1024;
        TexDesc.Height = 1024;
        TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
        TexDesc.Usage = USAGE_DEFAULT;
        TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
        TexDesc.Name = "Sample 2D Texture";
        m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

Initializing Pipeline State

Diligent Engine follows the Direct3D12 style to configure the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

Creating Shaders

To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:

  • SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
  • SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
  • SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

To allow grouping of resources based on the frequency of expected change, Diligent Engine introduces a classification of shader variables:

  • Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
  • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps, etc.
  • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

This post describes the resource binding model in Diligent Engine. The following is an example of shader initialization:

        ShaderCreationAttribs Attrs;
        Attrs.Desc.Name = "MyPixelShader";
        Attrs.FilePath = "MyShaderFile.fx";
        Attrs.SearchDirectories = "shaders;shaders\\inc;";
        Attrs.EntryPoint = "MyPixelShader";
        Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
        Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;
        BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
        Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

        ShaderVariableDesc ShaderVars[] =
        {
            { "g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC  },
            { "g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE },
            { "g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC }
        };
        Attrs.Desc.VariableDesc = ShaderVars;
        Attrs.Desc.NumVariables = _countof(ShaderVars);
        Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

        StaticSamplerDesc StaticSampler;
        StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
        StaticSampler.TextureName = "g_MutableTexture";
        Attrs.Desc.NumStaticSamplers = 1;
        Attrs.Desc.StaticSamplers = &StaticSampler;

        ShaderMacroHelper Macros;
        Macros.AddShaderMacro("USE_SHADOWS", 1);
        Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
        Macros.Finalize();
        Attrs.Macros = Macros;

        RefCntAutoPtr<IShader> pShader;
        m_pDevice->CreateShader(Attrs, &pShader);

Creating the Pipeline State Object

To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and format of render targets, as well as the depth-stencil format:

        // This is a graphics pipeline
        PSODesc.IsComputePipeline = false;
        PSODesc.GraphicsPipeline.NumRenderTargets = 1;
        PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
        PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

        // Init rasterizer state
        RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
        RasterizerDesc.FillMode = FILL_MODE_SOLID;
        RasterizerDesc.CullMode = CULL_MODE_NONE;
        RasterizerDesc.FrontCounterClockwise = True;
        RasterizerDesc.ScissorEnable = True;
        //RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded)
        RasterizerDesc.AntialiasedLineEnable = False;

When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

        m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

Binding Shader Resources

Shader resource binding in Diligent Engine is based on grouping variables into the three groups described above (static, mutable and dynamic). Static variables are expected to be set only once and may not be changed once a resource is bound; they are intended to hold global constants such as camera attributes or global light attributes constant buffers. They are bound directly to the shader object:

        PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);

Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

        m_pPSO->CreateShaderResourceBinding(&m_pSRB);

Dynamic and mutable resources are then bound through the SRB object:

        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to properly set the variable type, as this may affect performance: static variables are generally the most efficient, followed by mutable, while dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

Setting the Pipeline State and Invoking Draw Commands

Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:

        // Clear render target
        const float zero[4] = { 0, 0, 0, 0 };
        m_pContext->ClearRenderTarget(nullptr, zero);

        // Set vertex and index buffers
        IBuffer *buffer[] = { m_pVertexBuffer };
        Uint32 offsets[] = { 0 };
        Uint32 strides[] = { sizeof(MyVertex) };
        m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
        m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
        m_pContext->SetPipelineState(m_pPSO);

Also, all shader resources must be committed to the device context:

        m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() can be used to execute a compute command. Note that for a draw command the graphics pipeline must be bound, and for a dispatch command the compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:

        DrawAttribs attrs;
        attrs.IsIndexed = true;
        attrs.IndexType = VT_UINT16;
        attrs.NumIndices = 36;
        attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        pContext->Draw(attrs);

Tutorials and Samples

The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage:

  • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
  • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube. It shows how to load shaders from files and create and use vertex, index and uniform buffers.
  • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object. It shows how to load a texture from file, create a shader resource binding object and sample a texture in the shader.
  • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object using a unique transformation matrix for every copy.
  • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
  • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
  • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render smooth wireframe.
  • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
  • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.

The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. The Atmospheric Scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. The repository also includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

Integration with Unity

Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube only visible as a reflection in a mirror.
  18. Hi guys, I have a problem where rendering directly to the back buffer is fine, but if I create a render target and draw to it, then render it to the back buffer, the sprite transparency eats holes through anything on the RT, which then reveals the back buffer itself. On the right side the render target is coloured a dark red to highlight the problem. The blend state is being created and set early on (shortly after context creation) and is working OK, as the back buffer is behaving as expected.

        // Create default blend state
        ID3D11BlendState* d3dBlendState = NULL;
        D3D11_BLEND_DESC omDesc;
        ZeroMemory(&omDesc, sizeof(D3D11_BLEND_DESC));
        omDesc.RenderTarget[0].BlendEnable = true;
        omDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
        omDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
        omDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
        omDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
        omDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
        omDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
        omDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

        if (FAILED(d3dDevice->CreateBlendState(&omDesc, &d3dBlendState)))
            return E_WINDOW_DEVICE_BLEND_STATE;
        d3dContext->OMSetBlendState(d3dBlendState, 0, 0xffffffff);
        if (d3dBlendState)
            d3dBlendState->Release();

Any ideas would be greatly appreciated. Thanks in advance.
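
    For what it's worth, this hole-punching symptom usually comes from the alpha channel settings above: with SrcBlendAlpha = ONE and DestBlendAlpha = ZERO, each sprite overwrites the render target's alpha with its own (possibly low) alpha, which then makes the already-composited color translucent when the RT is drawn onto the back buffer. A hedged sketch of an alpha-channel setup that accumulates coverage instead:

        // Keep the color blending as before, but accumulate alpha so the
        // render target's alpha approaches opaque as sprites layer onto it:
        // resultAlpha = srcAlpha + destAlpha * (1 - srcAlpha)
        omDesc.RenderTarget[0].SrcBlendAlpha  = D3D11_BLEND_ONE;
        omDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
        omDesc.RenderTarget[0].BlendOpAlpha   = D3D11_BLEND_OP_ADD;
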
  19. This is pretty much a question for @SoldierOfLight, probably. I've read a ton of information about the different flip modes and the various ways of configuring the swap chain. I would really like to get down to near 0ms latency at 60fps. My GPU is somewhat old - an NVidia GTX 430 - but my software is up to date: latest NVidia drivers, latest Windows 10 (April 2018, version 1803).

  • PresentMon indicates dwm.exe is "Hardware: Legacy Flip" (not sure if this is important, but thought I'd include it since 'Legacy' sounds bad)
  • If I run windowed, PresentMon indicates "Composed: Flip" with a latency around 48ms
  • If I run fullscreen with SetFullscreenState(true), PresentMon indicates "Hardware Composed: Independent Flip: Plane 0" with a latency around 46ms
  • If I run fullscreen as just a borderless window covering the whole screen, PresentMon indicates "Hardware Composed: Independent Flip: Plane 0" and around 32ms latency

In windowed mode, the DXGI_SWAP_CHAIN_DESC1 setup is:

        swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
        swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;

SetMaximumFrameLatency is 1 frame (DISCARD seems to have the same latency as SEQUENTIAL). In fullscreen mode with SetFullscreenState, I find I have to remove the WAITABLE_OBJECT flag - if I don't, DX gives an error when SetFullscreenState is called. Running in DX debug mode, it logs a message saying that the WAITABLE_OBJECT flag can't be combined with fullscreen (although I've seen other posts claiming that this restriction was lifted at some point?? not on my machine hehe). When I call present, I'm just calling swapChain->Present(1, 0).

Questions:

1) Why can't I combine WAITABLE_OBJECT with SetFullscreenState?
2) Do I need to use SetFullscreenState anyway? Currently the lowest latency is just a borderless window covering the screen, with 32ms latency. But why is it not 16ms?
3) Why is SetFullscreenState slower, at 48ms latency? It's worth mentioning that in this case I am also creating a borderless window that covers the screen... and then calling SetFullscreenState on that window... maybe that's confusing the system (?)
4) Is "Hardware Composed: Independent Flip: Plane 0" the best I can hope for, or is there some other flip mode that is optimal? If so, what changes do I need to make to the code to get there?

-------

More information after further testing. With the borderless fullscreen window (not using SetFullscreenState), the loop looks like this:

1) WaitForSingleObject(WAITABLE_OBJECT)
2) Spin loop for 15ms (almost the entire duration of the frame) <-- added after writing the original post
3) Read controller/user inputs
4) Draw the next frame of the game
5) Present

With the above, PresentMon indicates around 17ms latency with "Hardware Composed: Independent Flip: Plane 0". Is this as good as I can do, or can I somehow get the latency reported by PresentMon even lower? I am measuring controller-to-display latency with a 240Hz camera and a gamepad with an LED wired into the start button. I am seeing as low as 5 240Hz frames (just over 16ms latency) between the LED lighting up on the controller and visible results appearing on screen. But sometimes I see up to 14 240Hz frames. The average is probably around 8-9 frames. Have I minimized the latency from the perspective of the application? For some reason I feel like I should be able to achieve very close to 0ms latency. Conceptually, if I wait until the very end of a vertical refresh cycle, then sample the user input, draw the game, and call Present() *right* before the GPU is ready to display the next frame, then it would get my back buffer and swap it to front only 0-2ms after I call Present. How do I get to a solution like this? If you're curious, I'm using the Dell U2414H monitor, which is reviewed to have 4ms latency, and other tests I've done with dedicated hardware more or less confirm this (http://www.tftcentral.co.uk/reviews/dell_u2414h.htm#lag). Thanks!
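
    For reference, the waitable-object pattern described in the post looks roughly like this (a sketch assuming a flip-model swap chain created with DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT; error handling omitted):

        #include <windows.h>
        #include <dxgi1_3.h>

        // Low-latency frame loop driven by the swap chain's waitable object.
        void FrameLoop(IDXGISwapChain2* swapChain, bool& running)
        {
            HANDLE frameWait = swapChain->GetFrameLatencyWaitableObject();
            swapChain->SetMaximumFrameLatency(1);   // at most one queued frame

            while (running)
            {
                // Block until the compositor can accept a new frame; sampling
                // input after this wait minimizes input-to-display latency.
                WaitForSingleObjectEx(frameWait, 1000, TRUE);
                // ReadInput(); UpdateGame(); RenderFrame();
                swapChain->Present(1, 0);           // vsync-locked present
            }
            CloseHandle(frameWait);
        }
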
  20. Hello, just wanted to share the link to the latest upgrade of Conservative Morphological Anti-Aliasing, in case someone is interested. It is a post-process AA technique in the same class of approaches as FXAA & SMAA, but focused on minimizing the change to the input image - that is, applying as much anti-aliasing as possible while avoiding blurring textures or other sharp features. Details are available at https://software.intel.com/en-us/articles/conservative-morphological-anti-aliasing-20 and full DX11 source code under the MIT license is available at https://github.com/GameTechDev/CMAA2/ (compute shader implementation; DX12 & Vulkan ports are in the works too!)
  21. Hi all, I am attempting to follow the Rastertek tutorial http://www.rastertek.com/dx11tut37.html Right now I am having a problem: it appears that my input layout is not being initialized properly and I'm not sure why. An exception is being thrown when I call CreateInputLayout...

        Exception thrown at 0x00007FFD9B8EA388 in MyGame.exe: Microsoft C++ exception: _com_error at memory location 0x0000000D4D18ED30.

Maybe you all can point out where I'm going wrong here?

        void Renderer::InitPipeline()
        {
            // load and compile the two shaders
            ID3D10Blob *VS, *PS;
            D3DX11CompileFromFile("Shaders.shader", 0, 0, "VShader", "vs_4_0", 0, 0, 0, &VS, 0, 0);
            D3DX11CompileFromFile("Shaders.shader", 0, 0, "PShader", "ps_4_0", 0, 0, 0, &PS, 0, 0);

            // encapsulate both shaders into shader objects
            dev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pVS);
            dev->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pPS);

            // set the shader objects
            devcon->VSSetShader(pVS, 0, 0);
            devcon->PSSetShader(pPS, 0, 0);

            // create the input layout object
            D3D11_INPUT_ELEMENT_DESC ied[] =
            {
                { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
                { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
                // Add another input for the instance buffer
                { "INSTANCE", 0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0, D3D11_INPUT_PER_INSTANCE_DATA, 1 }
            };

            dev->CreateInputLayout(ied, 2, VS->GetBufferPointer(), VS->GetBufferSize(), &pLayout);
            devcon->IASetInputLayout(pLayout);
        }

If I have not provided enough information, please help me understand what is needed so I can provide it.
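
    Two things stand out (hedged, since the shader source isn't shown): the element count passed to CreateInputLayout is still 2 while ied now has three entries, and every semantic in the layout must exist in the vertex shader's input signature or the call fails. A sketch of a checked call:

        // Pass the real element count and inspect the HRESULT instead of
        // letting the failure surface later as a _com_error.
        HRESULT hr = dev->CreateInputLayout(ied, ARRAYSIZE(ied),
                                            VS->GetBufferPointer(), VS->GetBufferSize(),
                                            &pLayout);
        if (FAILED(hr))
        {
            // E_INVALIDARG here typically means the layout does not match the
            // vertex shader's input signature (e.g. the INSTANCE semantic is
            // missing from VShader's parameter list).
        }
        devcon->IASetInputLayout(pLayout);
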
22. Hi all, I'm trying to cut down on some of the spaghetti in my code after running through a few lessons and tutorials. Currently I have everything grouped in a "Renderer" class, and I'm trying to break my larger functions down into more manageable bits. I have three main functions that initialize all my D3D and graphics:

void Renderer::InitD3D(HWND hWnd)
{
    // create a struct to hold information about the swap chain
    DXGI_SWAP_CHAIN_DESC scd;

    // clear out the struct for use
    ZeroMemory(&scd, sizeof(DXGI_SWAP_CHAIN_DESC));

    // fill the swap chain description struct
    scd.BufferCount = 1;                                 // one back buffer
    scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;  // use 32-bit color
    scd.BufferDesc.Width = SCREEN_WIDTH;                 // set the back buffer width
    scd.BufferDesc.Height = SCREEN_HEIGHT;               // set the back buffer height
    scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;   // how swap chain is to be used
    scd.OutputWindow = hWnd;                             // the window to be used
    scd.SampleDesc.Count = 4;                            // how many multisamples
    scd.Windowed = TRUE;                                 // windowed/full-screen mode
    scd.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH;  // allow full-screen switching

    // create a device, device context and swap chain using the information in the scd struct
    D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, NULL, NULL, NULL,
                                  D3D11_SDK_VERSION, &scd, &swapchain, &dev, NULL, &devcon);

    // get the address of the back buffer
    ID3D11Texture2D *pBackBuffer;
    swapchain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);

    // use the back buffer address to create the render target
    dev->CreateRenderTargetView(pBackBuffer, NULL, &backbuffer);
    pBackBuffer->Release();

    // set the render target as the back buffer
    devcon->OMSetRenderTargets(1, &backbuffer, NULL);

    // Set the viewport
    D3D11_VIEWPORT viewport;
    ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT));
    viewport.TopLeftX = 0;
    viewport.TopLeftY = 0;
    viewport.Width = SCREEN_WIDTH;
    viewport.Height = SCREEN_HEIGHT;
    devcon->RSSetViewports(1, &viewport);
}

void Renderer::InitPipeline(ID3D11Device * dev, ID3D11DeviceContext * devcon)
{
    // load and compile the two shaders
    ID3D10Blob *VS, *PS;
    D3DX11CompileFromFile("Shaders.shader", 0, 0, "VShader", "vs_4_0", 0, 0, 0, &VS, 0, 0);
    D3DX11CompileFromFile("Shaders.shader", 0, 0, "PShader", "ps_4_0", 0, 0, 0, &PS, 0, 0);

    // encapsulate both shaders into shader objects
    dev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pVS);
    dev->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pPS);

    // set the shader objects
    devcon->VSSetShader(pVS, 0, 0);
    devcon->PSSetShader(pPS, 0, 0);

    // create the input layout object
    createInputLayout(dev, devcon, VS);
}

void Renderer::InitGraphics(std::vector<Renderable_Object*> Game_Objects)
{
    for (int i = 0; i < Game_Objects.size(); i++)
    {
        for (int j = 0; j < Game_Objects[i]->getVertices().size(); j++)
        {
            OurVertices.push_back(Game_Objects[i]->getVertices()[j]);
        }
    }

    createVertexBuffer(dev);
    createInstanceBuffer(dev);
    createProjectionBuffer(dev);
    createWorldBuffer(dev);
}

I have InitD3D, which creates my device, swap chain, and device context. This makes sense to me - it's all of the "background" work. Then comes InitPipeline; this function, in my mind, should be in charge of preparing the pathway through which we will shove data into our shaders for rendering. Then I have InitGraphics, which really should just set up the data to be shoved down the previously mentioned pipeline - though right now I think it is doing more than that.
Am I right in thinking that creating buffers is part of the pipeline setup, and that once the setup is done, updating those buffers is part of the graphics initialization I am describing? Again, three phases: initialize the D3D window and device and crap -> initialize the pipeline and the way of sending data to it -> push data onto the pipeline for initial rendering.

One more side question: do I need to use the same subresource that I used to initialize a buffer when I go to update it with Map/Unmap, or do I set a new subresource_data? I'm starting to think I need the original subresource_data, or at least a pointer to it, because when I call CreateBuffer I pass it a reference to a subresource_data object... Sorry all, I tend to ask before I investigate; given that this code here works just fine for me, it would suggest that I do not need the original subresource - all I need is the data I'm going to update with:

if (createVertexBuffer(dev))
{
    // copy the vertices into the buffer
    D3D11_MAPPED_SUBRESOURCE ms;
    devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms);             // map the buffer
    memcpy(ms.pData, OurVertices.data(), sizeof(Vertex) * OurVertices.size());   // copy the data
    devcon->Unmap(pVBuffer, NULL);                                               // unmap the buffer
}

So this means I can successfully separate my buffer creation from my buffer initialization: I can create my buffers as the cars that will carry crap down the pipeline in the InitPipeline function, then I can put people in said cars in the InitGraphics function.
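That conclusion matches how D3D11 works: D3D11_SUBRESOURCE_DATA is only read inside CreateBuffer to fill the buffer's initial contents, and Map/Unmap needs nothing from it afterwards. A minimal sketch of a reusable update helper, assuming the buffer was created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE (which WRITE_DISCARD requires):

#include <d3d11.h>
#include <cstring>

// Overwrite the whole contents of a dynamic buffer with fresh data.
template <typename T>
bool UpdateDynamicBuffer(ID3D11DeviceContext* devcon, ID3D11Buffer* buffer,
                         const T* data, size_t count)
{
    D3D11_MAPPED_SUBRESOURCE ms = {};
    // Subresource index and map flags are plain UINTs, so 0 rather than NULL.
    if (FAILED(devcon->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms)))
        return false;
    memcpy(ms.pData, data, sizeof(T) * count); // copy the new data in
    devcon->Unmap(buffer, 0);
    return true;
}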
23. Hi all! I'm trying to implement the technique Light Indexed Deferred Rendering. I modified the original demo:

1) Removed some UI elements.
2) Removed the view-space light calculation.
3) I fill the light indices during startup.
4) Optional: I tried to use a UBO instead of a Texture1D (uncomment //#define USE_UBO).

My implementation details: I use constant buffers instead of a Texture1D for storing the light source information, and Direct3D 11 instead of OpenGL. My implementation is divided into the following parts:

1) Packing of light indices for each light during startup:

void LightManager::LightManagerImpl::FillLightIndices()
{
    int n = static_cast<int>(lights.size());
    for (int lightIndex = n - 1; lightIndex >= 0; --lightIndex)
    {
        Vector4D& OutColor = lightIndices.push_back();

        // Set the light index color
        ubyte convertColor = static_cast<ubyte>(lightIndex + 1);
        ubyte redBit   = (convertColor & (0x3 << 0)) << 6;
        ubyte greenBit = (convertColor & (0x3 << 2)) << 4;
        ubyte blueBit  = (convertColor & (0x3 << 4)) << 2;
        ubyte alphaBit = (convertColor & (0x3 << 6)) << 0;

        OutColor = Vector4D(redBit, greenBit, blueBit, alphaBit);
        const float divisor = 255.0f;
        OutColor /= divisor;
    }
}

2) Optional/test implementation: updating light positions (animation).

3) Rendering the light source geometry into an RGBA render target (the light sources buffer), using two shaders from the demo.

Pixel shader:

uniform float4 LightIndex : register(c0);

struct PS
{
    float4 position : POSITION;
};

float4 psMain(in PS ps) : COLOR
{
    return LightIndex;
};

Vertex shader:

uniform float4x4 ViewProjMatrix : register(c0);
uniform float4 LightData : register(c4);

struct PS
{
    float4 position : POSITION;
};

PS vsMain(in float4 position : POSITION)
{
    PS Out;
    Out.position = mul(float4(LightData.xyz + position.xyz * LightData.w, 1.0f), ViewProjMatrix);
    return Out;
}

These shaders are compiled in the 3D engine into C++ code.

4) Calculating the final lighting, using the prepared texture with light indices. The pixel shaders can be found in the attached project.
The final shaders.

Pixel:

DeclTex2D(tex1, 0);     // terrain first texture
DeclTex2D(tex2, 1);     // terrain second texture
DeclTex2D(BitPlane, 2); // light buffer

struct Light
{
    float4 posRange;       // pos.xyz + w - radius
    float4 colorLightType; // RGB color + light type
};

// The light list
uniform Light lights[NUM_LIGHTS];

struct VS_OUTPUT
{
    float4 Pos: POSITION;
    float2 texCoord: TEXCOORD0;
    float3 Normal: TEXCOORD1;
    float4 lightProjSpaceLokup : TEXCOORD2;
    float3 vVec : TEXCOORD3;
};

// Extract light indices
float4 GetLightIndexImpl(Texture2D BitPlane, SamplerState sBitPlane, float4 projectSpace)
{
    projectSpace.xy /= projectSpace.w;
    projectSpace.y = 1.0f - projectSpace.y;

    float4 packedLight = tex2D(BitPlane, projectSpace.xy);
    float4 unpackConst = float4(4.0, 16.0, 64.0, 256.0) / 256.0;
    float4 floorValues = ceil(packedLight * 254.5);
    float4 lightIndex;

    for (int i = 0; i < 4; i++)
    {
        packedLight = floorValues * 0.25;
        floorValues = floor(packedLight);
        float4 fracParts = packedLight - floorValues;
        lightIndex[i] = dot(fracParts, unpackConst);
    }
    return lightIndex;
}

#define GetLightIndex(tex, pos) GetLightIndexImpl(tex, s##tex, pos)

// Calculate final lighting
float4 CalculateLighting(float4 color, float3 vVec, float3 Normal, float4 lightIndex)
{
    float3 ambient_color = float3(0.2f, 0.2f, 0.2f);
    float3 lighting = float3(0.0f, 0.0f, 0.0f);

    for (int i = 0; i < 4; ++i)
    {
        float lIndex = 255.0f * lightIndex[i];

        // read the light source data from the constant buffer
        Light light = lights[int(lIndex)];

        // Get the vector from the light center to the surface
        float3 lightVec = light.posRange.xyz - vVec;

        // original from demo doesn't work correctly
#if 0
        // Scale based on the light radius
        float3 lVec = lightVec / light.posRange.a;
        float atten = 1.0f - saturate(dot(lVec, lVec));
#else
        float d = length(lightVec) / light.posRange.a;
        const float3 ConstantAtten = float3(0.4f, 0.01f, 0.01f);
        float atten = 1.0f / (ConstantAtten.x + ConstantAtten.y * d + ConstantAtten.z * d * d);
#endif

        lightVec = normalize(lightVec);
        float3 H = normalize(lightVec + vVec);
        float diffuse = saturate(dot(lightVec, Normal));
        float specular = pow(saturate(dot(lightVec, H)), 16.0);

        lighting += atten * (diffuse * light.colorLightType.xyz * color.xyz
                             + color.xyz * ambient_color
                             + light.colorLightType.xyz * specular);
    }
    return float4(lighting.xyz, color.a);
}

float4 psMain(in VS_OUTPUT In) : COLOR
{
    float4 Color1 = tex2D(tex1, In.texCoord);
    float4 Color2 = tex2D(tex2, In.texCoord);
    float4 Color = Color1 * Color2;
    float3 Normal = normalize(In.Normal);

    // get light indices from the light buffer
    float4 lightIndex = GetLightIndex(BitPlane, In.lightProjSpaceLokup);

    // calculate lighting
    float4 Albedo = CalculateLighting(Color, In.vVec, Normal, lightIndex);
    Color.xyz += Albedo.xyz;
    return Color;
}

Vertex:

uniform float4x4 ViewProjMatrix : register(c0);

struct VS_OUTPUT
{
    float4 Pos: POSITION;
    float2 texCoord: TEXCOORD0;
    float3 Normal: TEXCOORD1;
    float4 lightProjSpaceLokup : TEXCOORD2;
    float3 vVec : TEXCOORD3;
};

float4 CalcLightProjSpaceLookup(float4 projectSpace)
{
    projectSpace.xy = (projectSpace.xy + float2(projectSpace.w, projectSpace.w)) * 0.5;
    return projectSpace;
}

VS_OUTPUT VSmain(float4 Pos: POSITION, float3 Normal: NORMAL, float2 texCoord: TEXCOORD0)
{
    VS_OUTPUT Out;
    Out.Pos = mul(float4(Pos.xyz, 1.0f), ViewProjMatrix);
    Out.texCoord = texCoord;
    Out.lightProjSpaceLokup = CalcLightProjSpaceLookup(Out.Pos);
    Out.vVec = Pos.xyz;
    Out.Normal = Normal;
    return Out;
}

The result: we can show the light sources buffer - the texture with light indices (console command: enableshowlightbuffer 1). If we try to show the light geometry, we will see the following result (console command: enabledrawlights 1).

And here is my demo of light indexed deferred rendering: https://www.dropbox.com/s/5t9f5vpg83sspfs/3DMove_multilighting_gd.net.7z?dl=0

1) Try running the demo, moving over the terrain using W, A, S, D.
2) Try showing the light geometry (console command: enabledrawlights 1) and the light buffer (console command: enableshowlightbuffer 1).

What am I doing wrong? How do I fix the lighting calculation?
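One quick way to rule out a whole class of bugs here is to verify that the startup packing and the shader unpack agree. The standalone C++ sketch below (not engine code; it assumes the render target quantizes each channel to round(v*255), as an 8-bit UNORM target does) emulates FillLightIndices and GetLightIndexImpl on the CPU. If it prints no mismatches, the index round trip is consistent and the bug more likely lives in CalculateLighting - for instance, vVec there is the surface position, so H = normalize(lightVec + vVec) uses a position where a normalized view direction would normally go in Blinn-Phong:

#include <cmath>
#include <cstdio>

int main()
{
    for (int lightIndex = 0; lightIndex < 255; ++lightIndex)
    {
        // --- CPU packing (FillLightIndices) ---
        unsigned char c = (unsigned char)(lightIndex + 1);
        float channel[4] = {
            float((c & 0x03) << 6) / 255.0f,  // red:   bits 0-1 -> bits 6-7
            float((c & 0x0C) << 4) / 255.0f,  // green: bits 2-3 -> bits 6-7
            float((c & 0x30) << 2) / 255.0f,  // blue:  bits 4-5 -> bits 6-7
            float((c & 0xC0)     ) / 255.0f,  // alpha: bits 6-7 stay put
        };

        // --- shader unpack (GetLightIndexImpl); iteration i peels bit pair i ---
        const float unpackConst[4] = { 4/256.0f, 16/256.0f, 64/256.0f, 256/256.0f };
        float floorValues[4], unpacked[4];
        for (int ch = 0; ch < 4; ++ch)
            floorValues[ch] = std::ceil(channel[ch] * 254.5f);
        for (int i = 0; i < 4; ++i)
        {
            unpacked[i] = 0.0f;
            for (int ch = 0; ch < 4; ++ch)
            {
                float packed    = floorValues[ch] * 0.25f;
                floorValues[ch] = std::floor(packed);
                float frac      = packed - floorValues[ch];
                unpacked[i]    += frac * unpackConst[ch];
            }
        }

        // With this packing, only the last unpack slot carries data, because
        // each index's bits land in the top two bits of every channel. The
        // lighting shader indexes lights[] with int(255 * lightIndex[i]).
        int recovered = int(255.0f * unpacked[3]);
        if (recovered != lightIndex)
            std::printf("mismatch: packed %d, recovered %d\n", lightIndex, recovered);
    }
    return 0;
}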
24. Hi, when using FW1FontWrapper for text rendering, the text gets aliased if the screen resolution changes. For example: render text with FW1FontWrapper in a window at a quarter of the full-screen resolution, then switch the window to full screen without changing the font size - you can see the text becoming aliased. Is there any way to make FW1FontWrapper independent of resolution? Thanks
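One common workaround is to pick the font size at draw time from the current back-buffer height, so the wrapper rasterizes glyphs at the new size instead of magnifying small ones. A sketch of that idea, assuming the standard IFW1FontWrapper::DrawString signature; the 720.0f reference height, DrawScaledText name, and parameters are illustrative, not part of the library:

#include "FW1FontWrapper.h"

void DrawScaledText(IFW1FontWrapper* fontWrapper, ID3D11DeviceContext* devcon,
                    const WCHAR* text, float designFontSize,
                    float x, float y, UINT backBufferHeight)
{
    const float kReferenceHeight = 720.0f;            // design-time resolution
    float scale = backBufferHeight / kReferenceHeight;

    fontWrapper->DrawString(
        devcon,
        text,
        designFontSize * scale,  // e.g. a 20px font at 720p becomes 40px at 1440p
        x * scale,               // scale the position too, if it is expressed
        y * scale,               // in design-resolution coordinates
        0xFFFFFFFF,              // white
        FW1_RESTORESTATE);       // restore pipeline state after drawing
}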
25. I used DirectX in projects in Borland C++ Builder 6.0. Microsoft's .libs don't work with Builder, so I took special .lib files from here: http://www.clootie.ru/cbuilder/index.html#DX_CBuilder_SDKs. Now I've moved to C++ Builder 10 Berlin and have to find a way to attach DirectX to my project again. I've searched the web but found nothing on how to get access to DirectX in Embarcadero's Builders - only old information on Borland Builder and old .libs. The DirectX SDK .libs still can't be used with the new Builder 10 because of the incompatible format. My question is: did anyone use DirectX with Embarcadero Builder, and how did you solve the .libs problem? Can anyone give me a guide or an example of how to make DirectX accessible in a Builder 10 project? Why is there no information on this anywhere?
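Two routes are usually suggested for this situation: regenerate compatible import libraries from the system DLLs with the implib utility that ships with C++Builder (worth checking the Embarcadero docs for the exact switches), or skip import libraries entirely and resolve the entry points at runtime. A sketch of the runtime-loading route for D3D11, assuming the Windows SDK headers are on the include path; it needs no DirectX .lib at all:

#include <windows.h>
#include <d3d11.h>

ID3D11Device* CreateDeviceWithoutImportLib()
{
    // Load the system DLL and resolve the creation function by name.
    HMODULE d3d11 = LoadLibraryW(L"d3d11.dll");
    if (!d3d11)
        return nullptr;

    auto createDevice = reinterpret_cast<PFN_D3D11_CREATE_DEVICE>(
        GetProcAddress(d3d11, "D3D11CreateDevice"));
    if (!createDevice)
        return nullptr;

    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    createDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                 nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);
    return device; // caller owns device/context; Release() them when done
}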