Search the Community

Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.
Found 1420 results

  1. I am writing a level editor using Visual C++ and the Win32 API. I have the game engine up and working fine with Direct3D 11 (it's not an off-the-shelf engine, it's custom). The plan for the editor is to have something like this: the blue bit is going to be a standard Win32 menu bar, the yellow bit will be a standard Win32 status bar, the red bit will contain things like a list of objects to insert into the level (its contents will change depending on what the user is doing), and the purple bit will be a window that the rendering code renders into. I know how to do Direct3D 11 rendering into a window that is the parent window and is the only thing the app is drawing (the engine runs a loop that lets the Windows message loop run and process its messages before running the engine code and doing the rendering), but I can't find anything out there on how you do Direct3D rendering (11 or otherwise) into a child window, or how you handle things like resizing and painting. It's definitely possible, since so many level editors do it, but I don't know how they pull it off (and Google isn't showing anything useful either). Are there any examples out there of how to create a Win32 custom control/child window, set up an ID3D11Device to draw onto that window, and then have that window play nice with all the other windows (the main parent window and the other child windows) while still triggering a once-per-frame render call that lets me draw my stuff in there?
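A minimal sketch of one common approach to the question above: give the child (view) window its own swap chain, created from the same ID3D11Device the engine already uses, and rebuild its buffers when the child receives WM_SIZE. The function name and the g_* globals below are placeholders, not part of the poster's engine.

#include <d3d11.h>
#include <dxgi.h>

IDXGISwapChain*         g_childSwapChain = nullptr;   // one per D3D-rendered child window
ID3D11RenderTargetView* g_childRTV       = nullptr;

// Create a swap chain whose OutputWindow is the child (view) HWND, not the top-level frame.
bool CreateChildViewSwapChain(ID3D11Device* device, HWND childHwnd)
{
    RECT rc;
    GetClientRect(childHwnd, &rc);

    DXGI_SWAP_CHAIN_DESC sd = {};
    sd.BufferCount       = 1;
    sd.BufferDesc.Width  = rc.right - rc.left;
    sd.BufferDesc.Height = rc.bottom - rc.top;
    sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    sd.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    sd.OutputWindow      = childHwnd;            // the child window itself
    sd.SampleDesc.Count  = 1;
    sd.Windowed          = TRUE;

    // Build the swap chain from the factory that owns the existing device.
    IDXGIDevice* dxgiDevice = nullptr;  IDXGIAdapter* adapter = nullptr;  IDXGIFactory* factory = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    dxgiDevice->GetAdapter(&adapter);
    adapter->GetParent(__uuidof(IDXGIFactory), (void**)&factory);
    HRESULT hr = factory->CreateSwapChain(device, &sd, &g_childSwapChain);
    factory->Release();  adapter->Release();  dxgiDevice->Release();
    if (FAILED(hr)) return false;

    ID3D11Texture2D* backBuffer = nullptr;
    g_childSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
    device->CreateRenderTargetView(backBuffer, nullptr, &g_childRTV);
    backBuffer->Release();
    return true;
}

In the child window's WndProc, WM_SIZE releases the RTV, calls ResizeBuffers on the child swap chain, and recreates the view; WM_PAINT can simply validate the rect, because the engine's once-per-frame loop binds g_childRTV, renders, and calls Present on this swap chain just as it would for a top-level window.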
  2. Hi guys. I'm trying to move my shadow map with the camera. I need the world positions of the camera frustum corners to implement this. How can I calculate the world positions of the 8 camera frustum vertices?
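For the question above, the usual trick is to run the 8 corners of the clip-space cube back through the inverse view-projection matrix. A small DirectXMath sketch; the matrices are assumed to be the camera's existing view and projection.

#include <DirectXMath.h>
using namespace DirectX;

// Unprojects the 8 corners of D3D clip space (x,y in [-1,1], z in [0,1]) into world space.
void GetFrustumCornersWS(FXMMATRIX view, CXMMATRIX proj, XMVECTOR out[8])
{
    XMMATRIX invViewProj = XMMatrixInverse(nullptr, XMMatrixMultiply(view, proj));
    int i = 0;
    for (int zi = 0; zi < 2; ++zi)
        for (int yi = 0; yi < 2; ++yi)
            for (int xi = 0; xi < 2; ++xi)
            {
                XMVECTOR ndc = XMVectorSet(xi ? 1.0f : -1.0f,
                                           yi ? 1.0f : -1.0f,
                                           zi ? 1.0f :  0.0f, 1.0f);
                // TransformCoord divides by w, giving the world-space corner.
                out[i++] = XMVector3TransformCoord(ndc, invViewProj);
            }
}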
  3. Solved: I didn't think it through and realized I can't just compare the cross-product against 0,0,0. Fixed by doing this: float3 originVector = float3(0.0, 0.0, 0.0) - v1.xyz; if (dot(cross(e1, e2).xyz, originVector) > 0.0) { //... } I'm trying to write a geometry shader that does backface culling. (Don't ask me why.) What I'm doing is taking the cross-product of two edges of the triangle (in NDC space) and checking whether it's facing 0,0,0. The problem is that when I compile I get this error: this is, I guess, because if the triangle isn't facing us, I don't append any verts to the stream. I always assumed maxvertexcount implied I can emit as few verts as I like, but apparently not. How do I get around this? Shader below: struct GS_IN_OUT { float4 Pos : SV_POSITION; float4 PosW : POSITION; float4 NorW : NORMAL; float2 UV : TEXCOORD; }; [maxvertexcount(3)] void GS_main( triangle GS_IN_OUT input[3], inout TriangleStream< GS_IN_OUT > output ) { //Check for backface float4 v1, v2, v3; v1 = input[0].Pos; v2 = input[1].Pos; v3 = input[2].Pos; float4 e1, e2; e1 = v1 - v2; e2 = v1 - v3; if (dot(cross(e1, e2).xyz, float3(0.0, 0.0, 0.0)) > 0.0) { //face is facing us, let triangle through for (uint i = 0; i < 3; i++) { GS_IN_OUT element; element = input[i]; output.Append(element); } } }
  4. I am trying to use DirectXTK for an easy way to render 2d font in a 3d engine. I am getting very strange results, however. Here is the code I am using for my render frame void Graphics::RenderFrame() { //Clear our backbuffer to the updated color const float bgColor[4] = { 0, 0, 1.0f, 1.0f }; m_d3d11DevCon->ClearRenderTargetView(m_renderTargetView, bgColor); //Refresh the Depth/Stencil view m_d3d11DevCon->ClearDepthStencilView(m_depthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0); m_d3d11DevCon->PSSetShaderResources(0, 1, &m_testTexture); //set texture to use for pixel shader m_d3d11DevCon->PSSetSamplers(0, 1, &m_samplerState); //set sampler state to use //Draw the square for texture m_d3d11DevCon->DrawIndexed(6, 0, 0); //draw text m_spriteBatch->Begin(); const wchar_t* output = L"Hello World"; XMVECTOR fontPos = XMVectorSet(0, 0, 0, 0); m_font->DrawString(m_spriteBatch.get(), output, fontPos, Colors::White, 0.f, g_XMZero, 2.0f); m_spriteBatch->End(); //Present the backbuffer to the screen m_swapChain->Present(0, 0); } If I comment out all of the code for drawing the text, this is the result I get where my 3d square is being drawn correctly with its texture. If I leave the code for Begin/End'ing the sprite batch and drawing the text, this is the result I get where my square in 3d space is no longer visible, and the H in "Hello" seems to have something gone wrong with it. If I only comment out the code to draw the string, but I leave the SpriteBatch->Begin / SpriteBatch->End calls, I get nothing on my screen, but I would expect to get the square with the texture on it. Is it not possible to combine Direct3D draw calls while using the DirectXTK SpriteBatch? If this will not work does anyone have any good recommendations of routes to go for efficient text rendering in dx11?
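One thing worth knowing about the post above: DirectXTK's SpriteBatch sets its own input layout, shaders, blend/depth/sampler states, vertex buffer, and topology when it draws, so any 3D state set once at startup is gone by the following frame. Below is a hedged sketch of re-applying the 3D pipeline each frame before the textured square; the extra m_* members (m_inputLayout, m_vertexBuffer, m_vertexShader, and so on) are assumptions that mirror the naming in the post, not its actual code.

void Graphics::RenderFrame()
{
    const float bgColor[4] = { 0, 0, 1.0f, 1.0f };
    m_d3d11DevCon->ClearRenderTargetView(m_renderTargetView, bgColor);
    m_d3d11DevCon->ClearDepthStencilView(m_depthStencilView,
        D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    // Re-apply everything the 3D draw depends on; SpriteBatch overwrote it last frame.
    m_d3d11DevCon->IASetInputLayout(m_inputLayout);
    m_d3d11DevCon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    UINT stride = sizeof(Vertex), offset = 0;
    m_d3d11DevCon->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
    m_d3d11DevCon->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);
    m_d3d11DevCon->VSSetShader(m_vertexShader, nullptr, 0);
    m_d3d11DevCon->PSSetShader(m_pixelShader, nullptr, 0);
    m_d3d11DevCon->OMSetDepthStencilState(m_depthStencilState, 0);
    m_d3d11DevCon->OMSetBlendState(nullptr, nullptr, 0xFFFFFFFF);
    m_d3d11DevCon->PSSetShaderResources(0, 1, &m_testTexture);
    m_d3d11DevCon->PSSetSamplers(0, 1, &m_samplerState);
    m_d3d11DevCon->DrawIndexed(6, 0, 0);

    // Text last, so it composites over the 3D scene.
    m_spriteBatch->Begin();
    m_font->DrawString(m_spriteBatch.get(), L"Hello World",
                       XMVectorSet(0, 0, 0, 0), Colors::White, 0.f, g_XMZero, 2.0f);
    m_spriteBatch->End();

    m_swapChain->Present(0, 0);
}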
  5. Hi. I have successfully implemented shadow mapping in DirectX 11 - only directional light shadows so far, though. My problem is that my current shadow view-projection matrix doesn't cover the entire world, and if I make it large the shadow quality drops. I need to move the view-projection matrix with the camera. I tried translating the matrix to my camera position, but it doesn't work. I searched online and found something about calculating the light's view-projection matrix from the current camera projection matrix, but I couldn't find enough resources to fully understand it. Can someone explain it to me? I would really appreciate it. Thank you for your time.
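A sketch of the approach the post above is describing: take the camera frustum corners in world space (see the helper under post 2), transform them into light space, and fit an orthographic box around them every frame. The 100-unit pull-back and the helper names are illustrative assumptions, not a definitive implementation.

#include <DirectXMath.h>
#include <algorithm>
#include <cfloat>
using namespace DirectX;

XMMATRIX BuildDirectionalShadowViewProj(FXMMATRIX camView, CXMMATRIX camProj, const XMFLOAT3& lightDirWS)
{
    XMVECTOR corners[8];
    GetFrustumCornersWS(camView, camProj, corners);           // from the sketch under post 2

    XMVECTOR center = XMVectorZero();                          // frustum centre = corner average
    for (int i = 0; i < 8; ++i) center = XMVectorAdd(center, corners[i]);
    center = XMVectorScale(center, 1.0f / 8.0f);

    XMVECTOR lightDir = XMVector3Normalize(XMLoadFloat3(&lightDirWS));
    XMVECTOR lightPos = XMVectorSubtract(center, XMVectorScale(lightDir, 100.0f)); // arbitrary pull-back
    XMMATRIX lightView = XMMatrixLookAtLH(lightPos, center, XMVectorSet(0, 1, 0, 0));

    // Light-space AABB of the corners gives the orthographic extents.
    XMFLOAT3 mn( FLT_MAX,  FLT_MAX,  FLT_MAX);
    XMFLOAT3 mx(-FLT_MAX, -FLT_MAX, -FLT_MAX);
    for (int i = 0; i < 8; ++i)
    {
        XMFLOAT3 p;
        XMStoreFloat3(&p, XMVector3TransformCoord(corners[i], lightView));
        mn.x = std::min(mn.x, p.x);  mx.x = std::max(mx.x, p.x);
        mn.y = std::min(mn.y, p.y);  mx.y = std::max(mx.y, p.y);
        mn.z = std::min(mn.z, p.z);  mx.z = std::max(mx.z, p.z);
    }
    XMMATRIX lightProj = XMMatrixOrthographicOffCenterLH(mn.x, mx.x, mn.y, mx.y, mn.z, mx.z);
    return XMMatrixMultiply(lightView, lightProj);
}

Snapping the resulting box to shadow-map-texel increments stops the shadows shimmering as the camera moves, but that refinement can be added later.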
  6. I could have sworn that this D3D11 renderer of mine was working before. The OpenGL version of this renderer works fine (mostly because I've been confined to MacOS for too long, causing my D3D11 renderer to fall behind) but even though I've followed the tutorials almost exactly, the code just isn't working. I've tried to use Visual Studio's debugging feature, but I couldn't find any helpful information within it (that or I'm just blind since I've never used it before until now). Now, I really hate to just dump code on you all, but there's more than enough to go through, so I'll try to keep it limited to the relevant parts. This is the main source file, so you can see in order what is being done, what is being called, etc. CKeDemoApplication::CKeDemoApplication() { std::string dxvs = "float4 vs_main( float4 Pos : POSITION ) : SV_POSITION\n" "{\n" " return Pos;\n" "}"; std::string dxps = "float4 ps_main( float4 Pos : SV_POSITION ) : SV_Target\n" "{\n" " return float4( 1.0f, 1.0f, 0.0f, 1.0f );\n" "}"; std::string glvs = "#version 150\n" "in vec3 in_pos;\n" "out vec4 out_colour;\n" "void main( void )\n" "{\n" " gl_Position = vec4( in_pos.xyz, 1.0 );\n" " out_colour = vec4( 1, 1, 1, 1 );\n" "}"; std::string glfs = "#version 150\n" "out vec4 colour;\n" "in vec4 out_colour;\n" "void main(void)\n" "{\n" "colour = out_colour;\n" "}"; /* * Initialize Kunai Engine */ KeInitialize(); /* * Initialize a basic core OpenGL 3.x device */ KeRenderDeviceDesc rddesc; ZeroMemory( &rddesc, sizeof( KeRenderDeviceDesc ) ); rddesc.width = 640; rddesc.height = 480; rddesc.colour_bpp = 32; rddesc.depth_bpp = 24; rddesc.stencil_bpp = 8; rddesc.fullscreen = No; rddesc.buffer_count = 2; rddesc.device_type = KE_RENDERDEVICE_D3D11; bool ret = KeCreateWindowAndDevice( &rddesc, &m_pRenderDevice ); if( !ret ) { DISPDBG( KE_ERROR, "Error initializing render device!" 
); } /* * Initialize GPU program and geometry buffer */ KeVertexAttribute va[] = { { KE_VA_POSITION, 3, KE_FLOAT, No, sizeof(float)*3, 0 }, { -1, 0, 0, 0, 0 } }; nv::vec3f vd[] = { nv::vec3f( -1.0f, -1.0f, 0.0f ), nv::vec3f( 1.0f, -1.0f, 0.0f ), nv::vec3f( 0.0f, 1.0f, 0.0f ), }; if( rddesc.device_type == KE_RENDERDEVICE_D3D11 ) m_pRenderDevice->CreateProgram( dxvs.c_str(), dxps.c_str(), NULL, NULL, va, &m_pProgram ); else m_pRenderDevice->CreateProgram( glvs.c_str(), glfs.c_str(), NULL, NULL, va, &m_pProgram ); m_pRenderDevice->CreateGeometryBuffer( &vd, sizeof(nv::vec3f)*3, NULL, 0, 0, KE_USAGE_STATIC_WRITE, va, &m_pGB ); } CKeDemoApplication::~CKeDemoApplication() { if( m_pGB ) m_pGB->Destroy(); if( m_pProgram ) m_pProgram->Destroy(); KeDestroyWindowAndDevice( m_pRenderDevice ); m_pRenderDevice = NULL; KeUninitialize(); } void CKeDemoApplication::Run() { m_pRenderDevice->SetProgram( m_pProgram ); m_pRenderDevice->SetGeometryBuffer( m_pGB ); m_pRenderDevice->SetTexture( 0, NULL ); while( !KeQuitRequested() ) { KeProcessEvents(); float green[4] = { 0.0f, 0.5f, 0.0f, 1.0 }; m_pRenderDevice->SetClearColourFV( green ); m_pRenderDevice->SetClearDepth( 1.0f ); m_pRenderDevice->SetClearStencil(0); m_pRenderDevice->Clear( KE_COLOUR_BUFFER | KE_DEPTH_BUFFER /*| KE_STENCIL_BUFFER*/ ); m_pRenderDevice->DrawVertices( KE_TRIANGLES, sizeof(nv::vec3f), 0, 3 ); m_pRenderDevice->Swap(); } } For those that want to see my initialization routine: bool IKeDirect3D11RenderDevice::PVT_InitializeDirect3DWin32() { /* Initialize Direct3D11 */ uint32_t flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT; D3D_FEATURE_LEVEL feature_levels[] = { D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3, D3D_FEATURE_LEVEL_9_2, D3D_FEATURE_LEVEL_9_1 }; int feature_level_count = ARRAYSIZE( feature_levels ); #ifdef _DEBUG flags = D3D11_CREATE_DEVICE_DEBUG; #endif ZeroMemory( &swapchain_desc, sizeof( swapchain_desc ) ); swapchain_desc.BufferCount = device_desc->buffer_count; swapchain_desc.BufferDesc.Width = device_desc->width; swapchain_desc.BufferDesc.Height = device_desc->height; swapchain_desc.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; swapchain_desc.BufferDesc.RefreshRate.Numerator = device_desc->refresh_rate; swapchain_desc.BufferDesc.RefreshRate.Denominator = 1; swapchain_desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapchain_desc.OutputWindow = GetActiveWindow(); swapchain_desc.SampleDesc.Count = 1; swapchain_desc.SampleDesc.Quality = 0; swapchain_desc.Windowed = !device_desc->fullscreen; HRESULT hr = D3D11CreateDeviceAndSwapChain( NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags, feature_levels, feature_level_count, D3D11_SDK_VERSION, &swapchain_desc, &dxgi_swap_chain, &d3ddevice, &feature_level, &d3ddevice_context ); #ifdef _DEBUG /* If we are requesting a debug device, and we fail to get it, try again without the debug flag. */ if( hr == DXGI_ERROR_SDK_COMPONENT_MISSING ) { DISPDBG( KE_WARNING, "Attempting to re-create the Direct3D device without debugging capabilities..." 
); flags &= ~D3D11_CREATE_DEVICE_DEBUG; hr = D3D11CreateDeviceAndSwapChain( NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags, feature_levels, feature_level_count, D3D11_SDK_VERSION, &swapchain_desc, &dxgi_swap_chain, &d3ddevice, &feature_level, &d3ddevice_context ); } #endif D3D_DISPDBG_RB( KE_ERROR, "Error creating Direct3D11 device and swapchain!", hr ); /* Create our render target view */ ID3D11Texture2D* back_buffer = NULL; hr = dxgi_swap_chain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), ( LPVOID* )&back_buffer ); D3D_DISPDBG_RB( KE_ERROR, "Error getting back buffer!", hr ); hr = d3ddevice->CreateRenderTargetView( back_buffer, NULL, &d3d_render_target_view ); back_buffer->Release(); D3D_DISPDBG_RB( KE_ERROR, "Error creating render target view!", hr ); /* Create our depth stencil view */ D3D11_TEXTURE2D_DESC depthdesc; depthdesc.Width = device_desc->width; depthdesc.Height = device_desc->height; depthdesc.MipLevels = 1; depthdesc.ArraySize = 1; depthdesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; depthdesc.SampleDesc.Count = 1; depthdesc.SampleDesc.Quality = 0; depthdesc.Usage = D3D11_USAGE_DEFAULT; depthdesc.BindFlags = D3D11_BIND_DEPTH_STENCIL; depthdesc.CPUAccessFlags = 0; depthdesc.MiscFlags = 0; hr = d3ddevice->CreateTexture2D( &depthdesc, NULL, &d3d_depth_stencil_buffer ); D3D_DISPDBG_RB( KE_ERROR, "Error creating depth stencil buffer!", hr ); D3D11_DEPTH_STENCIL_VIEW_DESC dsvdesc = {}; dsvdesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; /* TODO: Do not hardcode this... */ dsvdesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D; dsvdesc.Texture2D.MipSlice = 0; hr = d3ddevice->CreateDepthStencilView( d3d_depth_stencil_buffer, &dsvdesc, &d3d_depth_stencil_view ); D3D_DISPDBG_RB( KE_ERROR, "Error creating depth stencil view!", hr ); /* Set render target and depth stencil */ d3ddevice_context->OMSetRenderTargets( 1, &d3d_render_target_view.GetInterfacePtr(), d3d_depth_stencil_view ); /* Setup the viewport */ D3D11_VIEWPORT vp; vp.Width = (FLOAT) device_desc->width; vp.Height = (FLOAT) device_desc->height; vp.MinDepth = 0.0f; vp.MaxDepth = 1.0f; vp.TopLeftX = 0; vp.TopLeftY = 0; d3ddevice_context->RSSetViewports( 1, &vp ); /* Get DXGI output */ if( FAILED( hr = dxgi_swap_chain->GetContainingOutput( &dxgi_output ) ) ) { DISPDBG( KE_WARNING, "IDXGISwapChain::GetContainingOutput returned (0x" << hr << ")" ); dxgi_output = nullptr; } return S_OK; } Going down the initialization routine, here's the code for creating shaders and geometry buffers: bool IKeDirect3D11RenderDevice::CreateProgram( const char* vertex_shader, const char* fragment_shader, const char* geometry_shader, const char* tesselation_shader, KeVertexAttribute* vertex_attributes, IKeGpuProgram** gpu_program ) { D3D11_INPUT_ELEMENT_DESC* layout = NULL; int layout_size = 0; DXGI_FORMAT fmt; DWORD shader_flags = D3DCOMPILE_ENABLE_STRICTNESS; #ifdef _DEBUG shader_flags |= D3DCOMPILE_DEBUG; #endif /* Allocate new GPU program */ *gpu_program = new IKeDirect3D11GpuProgram; IKeDirect3D11GpuProgram* gp = static_cast<IKeDirect3D11GpuProgram*>( *gpu_program ); /* Create Direct3D compatible vertex layout */ while( vertex_attributes[layout_size].index != -1 ) layout_size++; layout = new D3D11_INPUT_ELEMENT_DESC[layout_size]; if( layout ) { for( int i = 0; i < layout_size; i++ ) { if( vertex_attributes[i].type == KE_FLOAT && vertex_attributes[i].size == 1 ) fmt = DXGI_FORMAT_R32_FLOAT; if( vertex_attributes[i].type == KE_FLOAT && vertex_attributes[i].size == 2 ) fmt = DXGI_FORMAT_R32G32_FLOAT; if( vertex_attributes[i].type == KE_FLOAT && 
vertex_attributes[i].size == 3 ) fmt = DXGI_FORMAT_R32G32B32_FLOAT; if( vertex_attributes[i].type == KE_FLOAT && vertex_attributes[i].size == 4 ) fmt = DXGI_FORMAT_R32G32B32A32_FLOAT; if( !strcmp( "POSITION", semantic_list[vertex_attributes[i].index].name ) ) layout[i].SemanticName = "POSITION"; if( !strcmp( "COLOR", semantic_list[vertex_attributes[i].index].name ) ) layout[i].SemanticName = "COLOR"; layout[i].SemanticIndex = semantic_list[vertex_attributes[i].index].index; layout[i].Format = fmt; layout[i].InputSlot = 0; /* TODO */ layout[i].AlignedByteOffset = vertex_attributes[i].offset; layout[i].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA; layout[i].InstanceDataStepRate = 0; /* TODO */ } /* Initialize vertex shader */ /* TODO: Auto detect highest shader version */ CD3D10Blob* blob_shader = NULL; CD3D10Blob* blob_error = NULL; HRESULT hr = D3DCompile( vertex_shader, strlen( vertex_shader ) + 1, "vs_main", NULL, NULL, "vs_main", "vs_4_0", shader_flags, 0, &blob_shader, &blob_error ); if( FAILED( hr ) ) { if( blob_error != NULL ) { DISPDBG( KE_ERROR, "Error compiling vertex shader source!\n" << (char*)blob_error->GetBufferPointer() << "\n" ); delete[] layout; blob_error = 0; gp->Destroy(); } return false; } hr = d3ddevice->CreateVertexShader( blob_shader->GetBufferPointer(), blob_shader->GetBufferSize(), NULL, &gp->vs ); if( FAILED( hr ) ) { delete[] layout; blob_shader = 0; gp->Destroy(); DISPDBG( KE_ERROR, "Error creating vertex shader!\n" ); } /* Create input layout */ hr = d3ddevice->CreateInputLayout( layout, layout_size, blob_shader->GetBufferPointer(), blob_shader->GetBufferSize(), &gp->il ); blob_shader = 0; delete[] layout; if( FAILED( hr ) ) { gp->Destroy(); DISPDBG( KE_ERROR, "Error creating input layout!\n" ); } /* Create pixel shader */ hr = D3DCompile( fragment_shader, strlen( fragment_shader ) + 1, "ps_main", NULL, NULL, "ps_main", "ps_4_0", shader_flags, 0, &blob_shader, &blob_error ); if( FAILED( hr ) ) { if( blob_error != NULL ) { DISPDBG( KE_ERROR, "Error compiling pixel shader source!\n" << (char*)blob_error->GetBufferPointer() << "\n" ); blob_error = 0; gp->Destroy(); } return false; } hr = d3ddevice->CreatePixelShader( blob_shader->GetBufferPointer(), blob_shader->GetBufferSize(), NULL, &gp->ps ); if( FAILED( hr ) ) { blob_shader = 0; gp->Destroy(); DISPDBG( KE_ERROR, "Error creating pixel shader!\n" ); } blob_shader = 0; /* TODO: Geometry, Hull, Compute and Domain shaders */ gp->hs = NULL; gp->gs = NULL; gp->cs = NULL; gp->ds = NULL; } #if 1 /* Copy vertex attributes */ int va_size = 0; while( vertex_attributes[va_size].index != -1 ) va_size++; gp->va = new KeVertexAttribute[va_size+1]; memmove( gp->va, vertex_attributes, sizeof( KeVertexAttribute ) * (va_size+1) ); #endif return true; } /* * Name: IKeDirect3D11RenderDevice::create_geometry_buffer * Desc: Creates a geometry buffer based on the vertex and index data given. Vertex and index * buffers are encapsulated into one interface for easy management, however, index data * input is completely optional. Interleaved vertex data is also supported. */ bool IKeDirect3D11RenderDevice::CreateGeometryBuffer( void* vertex_data, uint32_t vertex_data_size, void* index_data, uint32_t index_data_size, uint32_t index_data_type, uint32_t flags, KeVertexAttribute* va, IKeGeometryBuffer** geometry_buffer ) { HRESULT hr = S_OK; /* Sanity check(s) */ if( !geometry_buffer ) DISPDBG_RB( KE_ERROR, "Invalid interface pointer!" 
); //if( !vertex_attributes ) // return false; if( !vertex_data_size ) DISPDBG_RB( KE_ERROR, "(vertex_data_size == 0) condition is currently not allowed..." ); /* Temporary? */ *geometry_buffer = new IKeDirect3D11GeometryBuffer; IKeDirect3D11GeometryBuffer* gb = static_cast<IKeDirect3D11GeometryBuffer*>( *geometry_buffer ); gb->stride = 0; /* Create a vertex buffer */ D3D11_BUFFER_DESC bd; ZeroMemory( &bd, sizeof(bd) ); bd.Usage = D3D11_USAGE_DEFAULT; bd.ByteWidth = vertex_data_size; bd.BindFlags = D3D11_BIND_VERTEX_BUFFER; bd.CPUAccessFlags = 0; /* TODO */ D3D11_SUBRESOURCE_DATA id; ZeroMemory( &id, sizeof(id) ); id.pSysMem = vertex_data; hr = d3ddevice->CreateBuffer( &bd, &id, &gb->vb ); if( FAILED( hr ) ) { delete (*geometry_buffer); D3D_DISPDBG_RB( KE_ERROR, "Error creating vertex buffer!", hr ); } /* Create index buffer, if desired. */ gb->ib = NULL; if( index_data_size ) { ZeroMemory( &bd, sizeof(bd) ); bd.Usage = D3D11_USAGE_DEFAULT; bd.ByteWidth = index_data_size; bd.BindFlags = D3D11_BIND_INDEX_BUFFER; ZeroMemory( &id, sizeof(id) ); id.pSysMem = index_data; hr = d3ddevice->CreateBuffer( &bd, &id, &gb->ib ); if( FAILED( hr ) ) { delete (*geometry_buffer); D3D_DISPDBG_RB( KE_ERROR, "Error creating index buffer!", hr ); } gb->index_type = index_data_type; } else { gb->index_type = 0; } return true; } So that's the end of the initialization stuff, let's take a look the relevant stuff that makes it draw. /* * Name: IKeDirect3D11RenderDevice::set_program * Desc: Sets the GPU program. If NULL, the GPU program is set to 0. */ void IKeDirect3D11RenderDevice::SetProgram( IKeGpuProgram* gpu_program ) { IKeDirect3D11GpuProgram* gp = static_cast<IKeDirect3D11GpuProgram*>( gpu_program ); /* Set input layout */ if(gp) d3ddevice_context->IASetInputLayout( gp->il ); else d3ddevice_context->IASetInputLayout( NULL ); /* Set shaders */ if(gp) { d3ddevice_context->VSSetShader( gp->vs, NULL, 0 ); d3ddevice_context->PSSetShader( gp->ps, NULL, 0 ); d3ddevice_context->GSSetShader( gp->gs, NULL, 0 ); d3ddevice_context->HSSetShader( gp->hs, NULL, 0 ); d3ddevice_context->DSSetShader( gp->ds, NULL, 0 ); d3ddevice_context->CSSetShader( gp->cs, NULL, 0 ); } else { d3ddevice_context->VSSetShader( NULL, NULL, 0 ); d3ddevice_context->PSSetShader( NULL, NULL, 0 ); d3ddevice_context->GSSetShader( NULL, NULL, 0 ); d3ddevice_context->HSSetShader( NULL, NULL, 0 ); d3ddevice_context->DSSetShader( NULL, NULL, 0 ); d3ddevice_context->CSSetShader( NULL, NULL, 0 ); } } /* * Name: IKeDirect3D11RenderDevice::set_vertex_buffer * Desc: Sets the current geometry buffer to be used when rendering. Internally, binds the * vertex array object. If NULL, then sets the current vertex array object to 0. 
*/ void IKeDirect3D11RenderDevice::SetGeometryBuffer( IKeGeometryBuffer* geometry_buffer ) { current_geometrybuffer = geometry_buffer; /* We'll come back to this in a minute */ } void IKeDirect3D11RenderDevice::Clear( uint32_t buffers ) { if( buffers & KE_COLOUR_BUFFER ) d3ddevice_context->ClearRenderTargetView( d3d_render_target_view, clear_colour ); D3D11_CLEAR_FLAG flags = 0; if( buffers & KE_DEPTH_BUFFER ) flags |= D3D11_CLEAR_DEPTH; if( buffers & KE_STENCIL_BUFFER ) flags |= D3D11_CLEAR_STENCIL; if( flags && d3d_depth_stencil_view != nullptr ) d3ddevice_context->ClearDepthStencilView( d3d_depth_stencil_view, flags, clear_depth, clear_stencil ); } /* * Name: IKeDirect3D11RenderDevice::draw_vertices * Desc: Draws vertices from the current vertex buffer */ void IKeDirect3D11RenderDevice::DrawVertices( uint32_t primtype, uint32_t stride, int first, int count ) { IKeDirect3D11GeometryBuffer* gb = static_cast<IKeDirect3D11GeometryBuffer*>(current_geometrybuffer); IKeDirect3D11GpuProgram* gp = static_cast<IKeDirect3D11GpuProgram*>(current_gpu_program); uint32_t offset = 0; /* TODO: Allow user to specify this */ d3ddevice_context->IASetVertexBuffers( 0, 1, &gb->vb.GetInterfacePtr(), &stride, &offset ); d3ddevice_context->IASetPrimitiveTopology( primitive_types[primtype] ); d3ddevice_context->Draw( count, first ); } /* * Name: IKeDirect3D11RenderDevice::swap * Desc: Swaps the double buffer. */ void IKeDirect3D11RenderDevice::Swap() { HRESULT hr = dxgi_swap_chain->Present( swap_interval, 0 ); if( FAILED( hr ) ) DISPDBG( KE_ERROR, "IDXGISwapChain::Present(): Error = 0x" << hr << "\n" ); } Okay, so that should be everything in order. I was following the Microsoft tutorials (Lesson 2) from the SDK at the time to help me get started on basics and initialization. I followed it almost to the letter but it's still not rendering anything but a blank screen. The entire thing (including this sample project) is on github if you want/need it: https://github.com/blueshogun96/KunaiEngine/blob/master/source/KeDirect3D11/KeDirect3D11RenderDevice.h https://github.com/blueshogun96/KunaiEngine/blob/master/source/KeDirect3D11/KeDirect3D11RenderDevice.cpp https://github.com/blueshogun96/KunaiEngine/tree/master/templates/win32 <- Template project Just a word of warning, if you try to build the template project, it will take a few minutes, as the entire engine is fairly large (and getting larger). I'm also prepared for any critique on the overall renderer design since there's much room for improvement and a ton of stuff I haven't gotten a chance to touch on the Direct3D side. Any ideas? Thanks. Shogun
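When a D3D11 frame comes out blank like in the post above, the quickest diagnostic is usually the debug layer: the runtime will name the exact binding or stride problem. A small sketch of turning its warnings into breakpoints via the info queue; it assumes the device really was created with D3D11_CREATE_DEVICE_DEBUG (as an aside, in the posted init code the debug flag is assigned rather than OR'ed, which drops the BGRA flag).

#include <d3d11.h>
#include <d3d11sdklayers.h>

void EnableD3DDebugBreaks(ID3D11Device* device)
{
    ID3D11InfoQueue* infoQueue = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11InfoQueue), (void**)&infoQueue)))
    {
        // Break into the debugger the moment the runtime reports a problem
        // (missing vertex buffer, input layout/stride mismatch, unbound RTV, ...).
        infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_CORRUPTION, TRUE);
        infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR,      TRUE);
        infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_WARNING,    TRUE);
        infoQueue->Release();
    }
}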
  7. Hi everyone, I have a terrain grid which is pretty large and I'd like to have some data stored in textures to map stuff for me. For example, I'm going to have water-body data which is made of 64 DDS textures where each texture is 4096x4096. Now, I don't want to hold all 64 textures in memory all the time - basically I could, but VRAM will probably be crowded with other stuff and I want to save room. Also there are some other sets that I'll need to hold which may be even larger. So I decided that at a given moment I'd like to hold a 3x3 block of those 64 textures in VRAM, use the camera position to decide which block I'm in right now, and render accordingly. Now, I have 2 ways that I thought of doing what I need: 1. Use a texture array - I already know how to use this and it would be pretty easy to manage, I think. But my fear is that I won't have a way to decide which array index fits each pixel WITHOUT USING a couple of if/else pairs for dynamic branching in the pixel shader, which AFAIK isn't such a good idea. If you think that using a couple of dynamic branches isn't THAT bad, then I may do just that; it would be easier for me. 2. Use a texture atlas in memory - this solution has the advantage that I can directly translate world position in the pixel shader to texture coordinates and sample, but I'm not sure how to load 3x3 DDS textures into one big atlas that is 3x3 times the size of each of the textures. I'm especially confused about how to order the textures correctly in the atlas, as I'm not sure it'll be ordered the same way as loading into an array. If option #2 is doable, I think going with that would be easier than translating world position to array indices. Thanks for any help.
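On option 1 above: the array slice does not need an if/else chain, because it can be computed arithmetically from the tile coordinate (modulo 3), and streaming is just a matter of copying a newly loaded tile into the slice that has scrolled out of the 3x3 window. A sketch follows; the loader helper and the BC1 format are assumptions, and device/context are the usual D3D11 objects.

// Resident 3x3 window of 4096x4096 tiles as one Texture2DArray.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 4096;
desc.Height           = 4096;
desc.MipLevels        = 1;
desc.ArraySize        = 9;                      // the 3x3 window
desc.Format           = DXGI_FORMAT_BC1_UNORM;  // whatever the DDS tiles really use
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D* tileArray = nullptr;
device->CreateTexture2D(&desc, nullptr, &tileArray);

// When tile (tx, ty) scrolls into view, stream it into the slice it maps to.
// LoadTileToStaging is a hypothetical DDS loader returning a matching-format staging texture.
int slice = (ty % 3) * 3 + (tx % 3);
ID3D11Texture2D* tile = LoadTileToStaging(tx, ty);
UINT dstSub = D3D11CalcSubresource(0, slice, desc.MipLevels);
context->CopySubresourceRegion(tileArray, dstSub, 0, 0, 0, tile, 0, nullptr);

In the pixel shader the same (tx % 3, ty % 3) arithmetic picks the slice from world position, so the lookup stays branch-free.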
  8. Hi, I'm doing a final pass on my water rendering. As such, I would like to be able to reference a copy of the depth buffer to understand how deep the water is at a point, based on the water pixel's Z and the depth buffer's Z value. I'm doing some water-depth-related effects, hence sampling the Z depth. I have a few ideas on how to solve this. The simplest, I believe, is not possible, so that leaves me with a couple of options: 1 - ResolveSubresource from an MSAA depth buffer to a non-MSAA depth buffer (I believe this is not possible, as you can't resolve a depth buffer created with the depth-stencil bind flag set). 2 - Copy the MSAA depth buffer to another MSAA depth buffer, then use Load to sample the depth in the shader. Slower, but should work. 3 - Render the MSAA depth buffer using a full-screen quad to another target which is a non-MSAA SRV. 4 - Considering this is a transparency pass, I could probably leave the depth buffer unbound as a render target, bind it as an SRV, and use Load to sample from it (no need to copy at all). 2 or 4 seem to be the way to go; I believe 4 would be the best of all the options and should be the fastest. Comments? Am I missing something here? Cheers
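For option 4 above, the depth buffer only needs to be created with a typeless format and both bind flags; the water pass can then Load() it directly with no copy, as long as it is not simultaneously bound as a writable DSV. A sketch, where width/height/sampleCount and the output pointers are assumed to exist:

// Typeless depth texture so that a DSV and an SRV can both be created over it.
D3D11_TEXTURE2D_DESC td = {};
td.Width            = width;
td.Height           = height;
td.MipLevels        = 1;
td.ArraySize        = 1;
td.Format           = DXGI_FORMAT_R24G8_TYPELESS;
td.SampleDesc.Count = sampleCount;                 // e.g. 4 for 4x MSAA
td.Usage            = D3D11_USAGE_DEFAULT;
td.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
device->CreateTexture2D(&td, nullptr, &depthTex);

D3D11_DEPTH_STENCIL_VIEW_DESC dsv = {};
dsv.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsv.ViewDimension = (sampleCount > 1) ? D3D11_DSV_DIMENSION_TEXTURE2DMS
                                      : D3D11_DSV_DIMENSION_TEXTURE2D;
device->CreateDepthStencilView(depthTex, &dsv, &depthDSV);

D3D11_SHADER_RESOURCE_VIEW_DESC srv = {};
srv.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;   // depth readable, stencil masked out
srv.ViewDimension       = (sampleCount > 1) ? D3D11_SRV_DIMENSION_TEXTURE2DMS
                                            : D3D11_SRV_DIMENSION_TEXTURE2D;
srv.Texture2D.MipLevels = 1;
device->CreateShaderResourceView(depthTex, &srv, &depthSRV);

// In the water pass: unbind this DSV (or rebind it read-only with D3D11_DSV_READ_ONLY_DEPTH),
// set depthSRV on the pixel shader, and use Texture2DMS<float>.Load / Texture2D.Load there.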
  9. I don't have a lot of experience when it comes to the depth buffer, but I have been messing around with it and some sprites, some of which have transparency, and I have noticed that I get some "weird" results when the depth buffer is enabled. When the depth buffer is on (the orange sprite has a Z-index of -2 and the red star sprite has a Z-index of -1), I get this weird "cut out" where I expect things to be transparent, but really it's just the clear color. When the depth buffer is off (since the depth buffer is off the Z-index does not matter, so I reordered the way sprites are drawn), the sprites are drawn as I expected. So I'm guessing this is where the whole "sort by depth" thing comes in that I have heard about? Which makes me wonder: is there a point to enabling the depth buffer at all when rendering sprites, since wouldn't sorting by depth simulate what the depth buffer does? Is the depth buffer more meant for 3D objects?
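On the question above: yes, for alpha-blended sprites the usual answer is exactly the "sort by depth" idea - draw them back to front so blending composites correctly; the depth buffer mainly earns its keep once opaque 3D geometry and sprites mix in the same scene. A trivial sketch, assuming a hypothetical Sprite type with a z member and a draw() call:

#include <algorithm>
#include <vector>

struct Sprite { float z; void draw(); };   // stand-in for the engine's real sprite type

void DrawSprites(std::vector<Sprite*>& sprites)
{
    // Farthest first; flip the comparison to match your depth convention.
    std::sort(sprites.begin(), sprites.end(),
              [](const Sprite* a, const Sprite* b) { return a->z < b->z; });
    for (Sprite* s : sprites)
        s->draw();
}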
  10. Hi guys, I am setting up instanced drawing (non-indexed) like this: struct d3d_vertex { float x, y, z; float nx, ny, nz; float u, v, w; }; struct d3d_instance { float x, y, z; }; D3D11_INPUT_ELEMENT_DESC layout[4]; layout[0] = { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 }; layout[1] = { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 }; layout[2] = { "TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 }; layout[3] = { "TEXCOORD", 1, DXGI_FORMAT_R32G32B32_FLOAT, 1, 36, D3D11_INPUT_PER_VERTEX_DATA, 0 }; ... UINT buffer_strides[2]; buffer_strides[0] = sizeof(d3d_vertex); buffer_strides[1] = sizeof(d3d_instance); d3d_context->IASetVertexBuffers(0, 2, bufferPointers, buffer_strides, buffer_offsets); ... d3d_context->DrawInstanced(tree_model.buffer_size, 1, 0, 0); I am getting this error: ID3D11DeviceContext::DrawInstanced: Input vertex slot 0 has stride 36 which is less than the minimum stride logically expected from the current Input Layout (48 bytes). What am I missing ? Thanks. **** solved part: layout[0] = { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 }; layout[1] = { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 }; layout[2] = { "TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 }; layout[3] = { "TEXCOORD", 1, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0, D3D11_INPUT_PER_INSTANCE_DATA, 1 };
  11. I just finished up my 1st iteration of my sprite renderer and I'm sort of questioning its performance. Currently, I am trying to render 10K worth of 64x64 textured sprites in a 800x600 window. These sprites all using the same texture, vertex shader, and pixel shader. There is basically no state changes. The sprite renderer itself is dynamic using the D3D11_MAP_WRITE_NO_OVERWRITE then D3D11_MAP_WRITE_DISCARD when the vertex buffer is full. The buffer is large enough to hold all 10K sprites and execute them in a single draw call. Cutting the buffer size down to only being able to fit 1000 sprites before a draw call is executed does not seem to matter / improve performance. When I clock the time it takes to complete the render method for my sprite renderer (the only renderer that is running) I'm getting about 40ms. Aside from trying to adjust the size of the vertex buffer, I have tried using 1x1 texture and making the window smaller (640x480) as quick and dirty check to see if the GPU was the bottleneck, but I still get 40ms with both of those cases. I'm kind of at a loss. What are some of the ways that I could figure out where my bottleneck is? I feel like only being able to render 10K sprites is really low, but I'm not sure. I'm not sure if I coded a poor renderer and there is a bottleneck somewhere or I'm being limited by my hardware Just some other info: Dev PC specs: GPU: Intel HD Graphics 4600 / Nvidia GTX 850M (Nvidia is set to be the preferred GPU in the Nvida control panel. Vsync is set to off) CPU: Intel Core i7-4710HQ @ 2.5GHz Renderer: //The renderer has a working depth buffer //Sprites have matrices that are precomputed. These pretransformed vertices are placed into the buffer Matrix4 model = sprite->getModelMatrix(); verts[0].position = model * verts[0].position; verts[1].position = model * verts[1].position; verts[2].position = model * verts[2].position; verts[3].position = model * verts[3].position; verts[4].position = model * verts[4].position; verts[5].position = model * verts[5].position; //Vertex buffer is flaged for dynamic use vertexBuffer = BufferModule::createVertexBuffer(D3D11_USAGE_DYNAMIC, D3D11_CPU_ACCESS_WRITE, sizeof(SpriteVertex) * MAX_VERTEX_COUNT_FOR_BUFFER); //The vertex buffer is mapped to when adding a sprite to the buffer //vertexBufferMapType could be D3D11_MAP_WRITE_NO_OVERWRITE or D3D11_MAP_WRITE_DISCARD depending on the data already in the vertex buffer D3D11_MAPPED_SUBRESOURCE resource = vertexBuffer->map(vertexBufferMapType); memcpy(((SpriteVertex*)resource.pData) + vertexCountInBuffer, verts, BYTES_PER_SPRITE); vertexBuffer->unmap(); //The constant buffer used for the MVP matrix is updated once per draw call D3D11_MAPPED_SUBRESOURCE resource = mvpConstBuffer->map(D3D11_MAP_WRITE_DISCARD); memcpy(resource.pData, projectionMatrix.getData(), sizeof(Matrix4)); mvpConstBuffer->unmap(); Vertex / Pixel Shader: cbuffer mvpBuffer : register(b0) { matrix mvp; } struct VertexInput { float4 position : POSITION; float2 texCoords : TEXCOORD0; float4 color : COLOR; }; struct PixelInput { float4 position : SV_POSITION; float2 texCoords : TEXCOORD0; float4 color : COLOR; }; PixelInput VSMain(VertexInput input) { input.position.w = 1.0f; PixelInput output; output.position = mul(mvp, input.position); output.texCoords = input.texCoords; output.color = input.color; return output; } Texture2D shaderTexture; SamplerState samplerType; float4 PSMain(PixelInput input) : SV_TARGET { float4 textureColor = shaderTexture.Sample(samplerType, input.texCoords); return 
textureColor; } If anymore info is needed feel free to ask, I would really like to know how I can improve this assuming I'm not hardware limited
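One CPU-side suspect in the renderer described above is that the dynamic vertex buffer is mapped and unmapped once per sprite, i.e. 10,000 Map/Unmap pairs per frame. A common pattern is to map once per frame with DISCARD, write every sprite's pretransformed vertices, unmap, then issue the single draw. A sketch with placeholder names (SpriteRenderer::FlushBatch, buildVertices, spriteList are assumptions, not the poster's API):

#include <d3d11.h>
#include <cstring>
#include <vector>

void SpriteRenderer::FlushBatch(ID3D11DeviceContext* d3dContext,
                                ID3D11Buffer* vertexBuffer,
                                const std::vector<Sprite*>& spriteList)
{
    D3D11_MAPPED_SUBRESOURCE resource;
    d3dContext->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
    SpriteVertex* dst = (SpriteVertex*)resource.pData;

    for (const Sprite* sprite : spriteList)
    {
        SpriteVertex verts[6];
        sprite->buildVertices(verts);      // hypothetical: fills 6 pretransformed vertices
        memcpy(dst, verts, sizeof(verts));
        dst += 6;
    }
    d3dContext->Unmap(vertexBuffer, 0);
    d3dContext->Draw(6 * (UINT)spriteList.size(), 0);
}

If the frame time stays around 40 ms after that, a GPU timestamp query or a frame capture (RenderDoc, Nsight) will show whether the remaining cost is CPU or GPU side.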
  12. Hi guys, I am wondering if there is a simple way to test whether a point is in the view frustum. I have a camera that is always facing the same direction and moves along all axes. No rotations involved; it is always facing x, y, z+1. FOVY is always set to 60. Is there a basic way to test if a point is within the view? I am happy to even ignore the near and far planes for simplicity's sake. Thanks in advance
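A general-purpose answer to the post above that does not depend on the camera never rotating: extract the six planes from the combined view-projection matrix (the usual Gribb/Hartmann method) and test the point against each. A sketch, assuming the DirectXMath row-vector convention (point * view * projection):

#include <DirectXMath.h>
using namespace DirectX;

bool PointInFrustum(const XMFLOAT4X4& m, const XMFLOAT3& p)   // m = view * projection
{
    const XMFLOAT4 planes[6] = {
        { m._14 + m._11, m._24 + m._21, m._34 + m._31, m._44 + m._41 },  // left
        { m._14 - m._11, m._24 - m._21, m._34 - m._31, m._44 - m._41 },  // right
        { m._14 + m._12, m._24 + m._22, m._34 + m._32, m._44 + m._42 },  // bottom
        { m._14 - m._12, m._24 - m._22, m._34 - m._32, m._44 - m._42 },  // top
        { m._13,         m._23,         m._33,         m._43         },  // near (D3D: 0 <= z <= w)
        { m._14 - m._13, m._24 - m._23, m._34 - m._33, m._44 - m._43 },  // far
    };
    for (const XMFLOAT4& pl : planes)
        if (pl.x * p.x + pl.y * p.y + pl.z * p.z + pl.w < 0.0f)
            return false;                                                 // outside this plane
    return true;
}

The caller stores view * projection into an XMFLOAT4X4 with XMStoreFloat4x4; ignoring the near and far planes just means skipping the last two entries.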
  13. Having some issues with a geometry shader in a very basic DX app. We have an assignment where we are supposed to render a rotating textured quad, and in the geometry shader duplicate this quad and offset it by its normal. Very basic stuff, essentially. My issue is that the duplicated quad, when rendered in front of the original quad, seems to fail the Z test and thus the original quad is rendered on top of it. What's even weirder is that this only happens for one of the triangles in the duplicated quad, against one of the original quad's triangles. Here's a video to show you what happens: Video (ignore the stretched textures) Here's my GS (the VS is a simple passthrough shader and the PS is just as basic): struct VS_OUT { float4 Pos : SV_POSITION; float2 UV : TEXCOORD; }; struct VS_IN { float4 Pos : POSITION; float2 UV : TEXCOORD; }; cbuffer cbPerObject : register(b0) { float4x4 WVP; }; [maxvertexcount(6)] void main( triangle VS_IN input[3], inout TriangleStream< VS_OUT > output ) { //Calculate normal float4 faceEdgeA = input[1].Pos - input[0].Pos; float4 faceEdgeB = input[2].Pos - input[0].Pos; float3 faceNormal = normalize(cross(faceEdgeA.xyz, faceEdgeB.xyz)); //Input triangle, transformed for (uint i = 0; i < 3; i++) { VS_OUT element; VS_IN vert = input[i]; element.Pos = mul(vert.Pos, WVP); element.UV = vert.UV; output.Append(element); } output.RestartStrip(); for (uint j = 0; j < 3; j++) { VS_OUT element; VS_IN vert = input[j]; element.Pos = mul(vert.Pos + float4(faceNormal, 0.0f), WVP); element.Pos.xyz; element.UV = vert.UV; output.Append(element); } } I haven't used geometry shaders much so I'm not 100% sure what happens behind the scenes. Any tips appreciated!
  14. I try to draw lines with different thicknesses using the geometry shader approach from here: https://forum.libcinder.org/topic/smooth-thick-lines-using-geometry-shader It seems to work great on my development machine (some Intel HD). However, if I try it on my target (Nvidia NVS 300, yes it's old) I get different results. See the attached images. There seem to be gaps in my sine signal that the NVS 300 device creates, the intel does what I want and expect in the other picture. It's a shame, because I just can't figure out why. I expect it to be the same. I get no Error in the debug output, with enabled native debugging. I disabled culling with CullMode.None. Could it be some z-fighting? I have little clue about it but I tested to play around with the RasterizerStateDescription and DepthBias properties with no success, no change at all. Maybe I miss something there? I develop the application with SharpDX btw. Any clues or help is very welcome
  15. DX11 FFT on GPU

    Hi, I'm currently trying to write a shader which should compute a fast Fourier transform of some data, manipulate the transformed data, do an inverse FFT, and then display the result as a vertex offset and color. I use Unity3D and HLSL as the shader language. One of the main problems is that the data should not be passed from CPU to GPU every frame if possible. My original plan was to use a vertex shader and do the FFT there, but I can't find out how to store changing data between shader calls/passes. I found a technique called ping-ponging, which seems to be based on writing to and exchanging render targets, but I couldn't find an example for HLSL as a vertex shader yet. I found https://social.msdn.microsoft.com/Forums/en-US/c79a3701-d028-41d9-ad74-a2b3b3958383/how-to-render-to-multiple-render-targets-in-hlsl?forum=xnaframework which seems to use COLOR0 and COLOR1 as such render targets. Is it even possible to do such calculations on the GPU only (in this shader stage, since I need the result of the calculation to modify the vertex offsets there)? I also saw the use of compute shaders in similar projects (ocean wave simulation); do they really copy data between CPU and GPU every frame? How does this ping-ponging / render-target switching technique work in HLSL? Have you seen an example of its usage? Any answer would be helpful. Thank you, appswert
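The ping-pong idea in the post above maps naturally onto compute shaders: keep two GPU buffers, bind one as input (SRV) and one as output (UAV), dispatch one FFT pass, then swap the roles; the data never touches the CPU, and the compute-shader ocean-wave projects do not copy it back per frame either. Below is a raw D3D11 host-side sketch of that loop (Unity's own API wraps the same idea differently); fftPassCS, the views and the group counts are placeholders.

#include <d3d11.h>
#include <utility>

// Two views over two buffers holding the same data layout; creation omitted.
ID3D11ShaderResourceView*  srv[2];
ID3D11UnorderedAccessView* uav[2];

void RunFFTPasses(ID3D11DeviceContext* context, ID3D11ComputeShader* fftPassCS,
                  int numPasses, UINT groupsX)
{
    int src = 0, dst = 1;
    context->CSSetShader(fftPassCS, nullptr, 0);
    for (int pass = 0; pass < numPasses; ++pass)
    {
        // Unbind first so the same resource is never simultaneously SRV and UAV.
        ID3D11ShaderResourceView*  nullSRV = nullptr;
        ID3D11UnorderedAccessView* nullUAV = nullptr;
        context->CSSetShaderResources(0, 1, &nullSRV);
        context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);

        context->CSSetShaderResources(0, 1, &srv[src]);
        context->CSSetUnorderedAccessViews(0, 1, &uav[dst], nullptr);
        // Per-pass constants (butterfly stage, stride) would go into a small cbuffer here.
        context->Dispatch(groupsX, 1, 1);
        std::swap(src, dst);
    }
    // srv[src] now views the finished data; bind it to the vertex shader for displacement.
    context->VSSetShaderResources(0, 1, &srv[src]);
}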
  16. Hi, just a simple question about compute shaders (CS 5.0, DX11). Should the atomic operations (InterlockedAdd in my case) work without any issues on a RWByteAddressBuffer and be globally coherent? I've come back from the CUDA world and committed a fairly simple kernel that does some work; the pseudo-code is as follows (both kernels use that same RWByteAddressBuffer): the first kernel does some work and sets Result[0] = 0; (using Result.Store(0, 0)). I've checked with the debugger, and indeed the value stored at dword 0 is 0. Now my second kernel: RWByteAddressBuffer Result; [numthreads(8, 8, 8)] void main() { for (int i = 0; i < 5; i++) { uint4 v0 = DoSomeCalculations1(); uint4 v1 = DoSomeCalculations2(); uint4 v2 = DoSomeCalculations3(); if (v0.w == 0 && v1.w == 0 && v2.w) continue; // increment counter by 3, and get it previous value // this should basically allocate space for 3 uint4 values in buffer uint prev; Result.InterlockedAdd(0, 3, prev); // this fills the buffer with 3 uint4 values (+1 is here as the first 16 bytes is occupied by DrawInstancedIndirect data) Result.Store4((prev+0+1)*16, v0); Result.Store4((prev+1+1)*16, v1); Result.Store4((prev+2+1)*16, v2); } } Now I invoke it with Dispatch(4,4,4) and then use DrawInstancedIndirect to draw the buffer, but occasionally there is a missed triangle here and there for a frame, as if the atomic counter does not work as expected. Do I need any additional synchronization there? I've tried 'AllMemoryBarrierWithGroupSync' at the end of the kernel, but without effect. If I do not use the atomic counter, and instead just output empty vertices (that will turn into degenerate triangles), then all is OK - as if I'm missing some form of synchronization, but I do not see such a thing in DX11. I've tested on both old and new Nvidia hardware (680M and 1080); the behaviour is the same.
  17. Hello, I am, like many others before me, making a displacement-map tessellator. I want to render some terrain using a quad, a texture containing height data, and the geometry shader/tessellator. So far, I've managed to utilize the texture in the pixel shader (I return different colors depending on the height). I have also managed to tessellate my surface, i.e. subdivided my quad into lots of triangles. What doesn't work, however, is the sampling step in the domain shader. I want to offset the vertices using the heightmap. I tried calling the same function "textureMap.Sample(textureSampler, texcoord)" as in the pixel shader but got compile errors. Instead I am now using the "SampleLevel" function to use the mip 0 version of the input texture. But none of this seems to be working; I don't get anything except [0, 0, 0, 0] from my sampler. Below is some code: the working pixel shader, the broken domain shader where I want to sample, and the instantiation of the sampler states on the CPU side. Been stuck on this for a while! Any help would be much appreciated! Texture2D textureMap: register(t0); SamplerState textureSampler : register(s0); //Pixel shader float4 PS(PS_IN input) : SV_TARGET { float4 textureColor = textureMap.Sample(textureSampler, input.texcoord); return textureColor; } GS_IN DS(HS_CONSTANT_DATA input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<DS_IN, 3> patch) { GS_IN output; float2 texcoord = uvwCoord.x * patch[0].texcoord.xy + uvwCoord.y * patch[1].texcoord.xy + uvwCoord.z * patch[2].texcoord.xy; float4 textureColor = textureMap.SampleLevel(textureSampler, texcoord.xy, 0); //fill and return output.... } //Sampler SharpDX.Direct3D11.SamplerStateDescription samplerDescription; samplerDescription = SharpDX.Direct3D11.SamplerStateDescription.Default(); samplerDescription.Filter = SharpDX.Direct3D11.Filter.MinMagMipLinear; samplerDescription.AddressU = SharpDX.Direct3D11.TextureAddressMode.Wrap; samplerDescription.AddressV = SharpDX.Direct3D11.TextureAddressMode.Wrap; this.samplerStateTextures = new SharpDX.Direct3D11.SamplerState(d3dDevice, samplerDescription); d3dDeviceContext.PixelShader.SetSampler(0, samplerStateTextures); d3dDeviceContext.VertexShader.SetSampler(0, samplerStateTextures); d3dDeviceContext.HullShader.SetSampler(0, samplerStateTextures); d3dDeviceContext.DomainShader.SetSampler(0, samplerStateTextures); d3dDeviceContext.GeometryShader.SetSampler(0, samplerStateTextures);
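One thing to check in the setup above: the sampler is bound to every stage, but a sampler alone isn't enough - the texture SRV itself also has to be bound to the domain shader stage, and reading an unbound SRV slot returns exactly the (0, 0, 0, 0) described. In native D3D11 terms the missing bind would look like the sketch below; in SharpDX the same calls live on the DeviceContext's DomainShader stage. heightmapSRV is whatever view the pixel shader already uses.

// Bind the heightmap SRV and sampler to the domain shader as well as the pixel shader.
context->DSSetShaderResources(0, 1, &heightmapSRV);
context->DSSetSamplers(0, 1, &samplerStateTextures);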
  18. Hi everybody! I am currently trying to write my own GPU raytracer using DirectX 11 and a compute shader. Here is what I've tried so far: RayTracer.hlsl, RayTracingHeader.hlsli. But the result is not what I expected. For example, the sphere is located at (0,0,10) with radius 1, but this is the result when CamPos is 4.5, which I think is wrong. Also, for some reason, when I rotate the camera, the sphere expands. Could anyone give me some advice, please?
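Spheres that stretch or swell when the camera turns are very often a primary-ray generation problem (aspect ratio or FOV applied to the wrong axis, or an unnormalised direction). For comparison against the posted shader, here is a minimal CPU-side reference for the ray through a pixel; the camera basis vectors are assumed to be orthonormal and everything here is illustrative rather than the poster's code.

#include <DirectXMath.h>
#include <cmath>
using namespace DirectX;

struct Ray { XMFLOAT3 origin, dir; };

Ray PrimaryRay(int px, int py, int width, int height, float vfovRadians,
               const XMFLOAT3& camPos, const XMFLOAT3& camForward,
               const XMFLOAT3& camRight, const XMFLOAT3& camUp)
{
    float aspect = (float)width / (float)height;
    float halfH  = tanf(vfovRadians * 0.5f);     // vertical half-extent of the image plane
    float halfW  = halfH * aspect;

    // Pixel centre mapped to [-1, 1]; y flipped so +v points up.
    float u = ((px + 0.5f) / width)  * 2.0f - 1.0f;
    float v = 1.0f - ((py + 0.5f) / height) * 2.0f;

    XMVECTOR dir = XMVector3Normalize(
        XMLoadFloat3(&camForward) +
        XMLoadFloat3(&camRight) * (u * halfW) +
        XMLoadFloat3(&camUp)    * (v * halfH));

    Ray r;
    r.origin = camPos;
    XMStoreFloat3(&r.dir, dir);
    return r;
}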
  19. I am currently working on the first iteration of my sprite renderer and I'm trying to draw 2 sprites. They both use the same texture and are placed into the same buffer, but unfortunately only the second sprite is shown on the screen. I assume I messed something up when placing them into the buffer and that I am overwriting the data of the first sprite. So how should I be mapping my buffer with an offset? /* Code that sets up the sprite vertices and etc */ D3D11_MAPPED_SUBRESOURCE resource = vertexBuffer->map(vertexBufferMapType); memcpy(resource.pData, verts, sizeof(SpriteVertex) * VERTEX_PER_QUAD); vertexBuffer->unmap(); vertexCount += VERTEX_PER_QUAD; I feel like I should be doing something like: /* Code that sets up the sprite vertices and etc */ D3D11_MAPPED_SUBRESOURCE resource = vertexBuffer->map(vertexBufferMapType); //Place the sprite vertex data into pData using the current vertex count as the offset //The code resource.pData[vertexCount] is syntactically wrong though :( Not sure how it should look since pData is a void pointer memcpy(resource.pData[vertexCount], verts, sizeof(SpriteVertex) * VERTEX_PER_QUAD); vertexBuffer->unmap(); vertexCount += VERTEX_PER_QUAD; Also, speaking of offsets, can someone give an example of when the pOffsets param for the IASetVertexBuffers call would not be 0?
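Since pData is a void*, the offset is done by casting it to the vertex type first and indexing in whole vertices; with D3D11_MAP_WRITE_NO_OVERWRITE the first sprite's data is left alone. A sketch using the same wrapper calls as the post:

D3D11_MAPPED_SUBRESOURCE resource = vertexBuffer->map(vertexBufferMapType);
SpriteVertex* gpuVerts = (SpriteVertex*)resource.pData;
// Write this sprite's vertices starting at the current end of the buffer.
memcpy(gpuVerts + vertexCount, verts, sizeof(SpriteVertex) * VERTEX_PER_QUAD);
vertexBuffer->unmap();
vertexCount += VERTEX_PER_QUAD;

As for pOffsets in IASetVertexBuffers, it is a byte offset into the bound buffer where the input assembler starts reading; it becomes non-zero when, for example, several meshes share one large vertex buffer and a draw should start partway into it.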
  20. While considering how to optimize my DirectX 11 graphics engine, I noticed that it is mapping and unmapping (locking and unlocking) the D3D11_MAPPED_SUBRESOURCE many times to write to different constant buffers. Some shaders have 10 or more constant buffers, for camera position, light direction, clip plane, texture translation, fog info, and many other things that need to be passed from the CPU to the GPU. I was wondering if all the mapping and unmapping might be the reason why my engine is running horribly slow, and is there any way around this? What is the correct way to do it? (Refer to the LightShaderClass::SetShaderParameters() function, line 401 onward, to see all the mapping/unmapping.) https://github.com/mister51213/DirectX11Engine/blob/WaterShader/DirectX11Engine/LightShaderClass.cpp I feel like I might be doing something obviously wrong and wasteful that could be fixed with a simple reorganization, but I don't know enough about DX11 to know how. Any tips would be much appreciated, thanks.
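A common reorganisation for the situation above is to group constants by update frequency, so each draw touches one or two buffers instead of ten: everything that changes once per frame lives in a single struct uploaded with one Map/Unmap. A sketch with illustrative field names, not the engine's actual layout:

#include <d3d11.h>
#include <DirectXMath.h>
#include <cstring>

struct PerFrameConstants                    // one cbuffer for everything that changes per frame
{
    DirectX::XMFLOAT4X4 view;
    DirectX::XMFLOAT4X4 projection;
    DirectX::XMFLOAT3   cameraPos;  float pad0;
    DirectX::XMFLOAT3   lightDir;   float pad1;
    DirectX::XMFLOAT4   clipPlane;
    DirectX::XMFLOAT4   fogInfo;
};                                          // keep the total size a multiple of 16 bytes

void UploadPerFrame(ID3D11DeviceContext* context, ID3D11Buffer* perFrameCB,
                    const PerFrameConstants& data)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    context->Map(perFrameCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    memcpy(mapped.pData, &data, sizeof(PerFrameConstants));
    context->Unmap(perFrameCB, 0);
    context->VSSetConstantBuffers(0, 1, &perFrameCB);
    context->PSSetConstantBuffers(0, 1, &perFrameCB);
}

Per-object data (world matrix, material) then goes in a second, smaller buffer mapped once per draw, which keeps the per-draw work down to a single update.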
  21. Hello! I can see that when there's a write to UAVs in a CS or PS and I bind a null ID3D11UnorderedAccessView into a used UAV slot, the GPU won't hang and the writes are silently dropped. I hope I'm not dreaming. With DX12, I can't seem to emulate this. I reckon it's impossible. The shader just reads the descriptor of the UAV (from a register/offset based on the root signature layout) and does an "image_store" at some offset from the base address. If it's unmapped, bang, we're dead. I tried zeroing out that GPU-visible UAV's range in the table, same result. Such an all-zero UAV descriptor doesn't seem very legit. That's expected. Am I right? How does DX11 do it so that it survives this? Does it silently patch the shader or what? Thanks, .P
  22. I'm reviewing a tutorial on using textures and I see that the vertex shader has this input declaration where the position is float4 struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; }; But when they go over uploading the data to vertex buffer they only use a float3 (Vector3) value for the position // Load the vertex array with data. vertices[0].position = D3DXVECTOR3(-1.0f, -1.0f, 0.0f); // Bottom left. vertices[0].texture = D3DXVECTOR2(0.0f, 1.0f); vertices[1].position = D3DXVECTOR3(0.0f, 1.0f, 0.0f); // Top middle. vertices[1].texture = D3DXVECTOR2(0.5f, 0.0f); vertices[2].position = D3DXVECTOR3(1.0f, -1.0f, 0.0f); // Bottom right. vertices[2].texture = D3DXVECTOR2(1.0f, 1.0f); The input layout description declared also seems to match to use a float3 value polygonLayout[0].SemanticName = "POSITION"; polygonLayout[0].SemanticIndex = 0; polygonLayout[0].Format = DXGI_FORMAT_R32G32B32_FLOAT; polygonLayout[0].InputSlot = 0; polygonLayout[0].AlignedByteOffset = 0; polygonLayout[0].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA; polygonLayout[0].InstanceDataStepRate = 0; So does this mean that shaders will automatically default "missing" values to 0 or something of the like? If so is this frowned upon?
  23. A player of my game contacted me, because the game crashes during start-up. After taking a look into log file he sent me, calling CreateSwapChain results in an exception as shown below. HRESULT: [0x887A0001], Module: [SharpDX.DXGI], ApiCode: [DXGI_ERROR_INVALID_CALL/InvalidCall], Message: Unknown at SharpDX.Result.CheckError() at SharpDX.DXGI.Factory.CreateSwapChain(ComObject deviceRef, SwapChainDescription& descRef, SwapChain swapChainOut) at SharpDX.Direct3D11.Device.CreateWithSwapChain(Adapter adapter, DriverType driverType, DeviceCreationFlags flags, FeatureLevel[] featureLevels, SwapChainDescription swapChainDescription, Device& device, SwapChain& swapChain) at SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType driverType, DeviceCreationFlags flags, SwapChainDescription swapChainDescription, Device& device, SwapChain& swapChain) In order to investigate this player's problem, I created a test application that looks like this: class Program { static void Main(string[] args) { Helper.UserInfo userInfo = new Helper.UserInfo(true); Console.WriteLine("Checking adapters."); using (var factory = new SharpDX.DXGI.Factory1()) { for (int i = 0; i < factory.GetAdapterCount(); i++) { SharpDX.DXGI.Adapter adapter = factory.GetAdapter(i); Console.WriteLine("\tAdapter {0}: {1}", i, adapter.Description.Description); bool supportsLevel10_1 = SharpDX.Direct3D11.Device.IsSupportedFeatureLevel(adapter, SharpDX.Direct3D.FeatureLevel.Level_10_1); Console.WriteLine("\t\tSupport for Level_10_1? {0}!", supportsLevel10_1); Console.WriteLine("\t\tCreate refresh rate (60)."); var refreshRate = new SharpDX.DXGI.Rational(60, 1); Console.WriteLine("\t\tCreate mode description."); var modeDescription = new SharpDX.DXGI.ModeDescription(0, 0, refreshRate, SharpDX.DXGI.Format.R8G8B8A8_UNorm); Console.WriteLine("\t\tCreate sample description."); var sampleDescription = new SharpDX.DXGI.SampleDescription(1, 0); Console.WriteLine("\t\tCreate swap chain description."); var desc = new SharpDX.DXGI.SwapChainDescription() { // Numbers of back buffers to use on the SwapChain BufferCount = 1, ModeDescription = modeDescription, // Do we want to use a windowed mode? IsWindowed = true, Flags = SharpDX.DXGI.SwapChainFlags.None, OutputHandle = Process.GetCurrentProcess().MainWindowHandle, // Cout in 'SampleDescription' means the level of anti-aliasing (from 1 to usually 4) SampleDescription = sampleDescription, SwapEffect = SharpDX.DXGI.SwapEffect.Discard, // DXGI_USAGE_RENDER_TARGET_OUTPUT: This value is used when you wish to draw graphics into the back buffer. 
Usage = SharpDX.DXGI.Usage.RenderTargetOutput }; try { Console.WriteLine("\t\tCreate device (Run 1)."); SharpDX.Direct3D11.Device device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.None, new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 }); Console.WriteLine("\t\tCreate swap chain (Run 1)."); SharpDX.DXGI.SwapChain swapChain = new SharpDX.DXGI.SwapChain(factory, device, desc); } catch (Exception e) { Console.WriteLine("EXCEPTION: {0}", e.Message); } try { Console.WriteLine("\t\tCreate device (Run 2)."); SharpDX.Direct3D11.Device device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport, new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 }); Console.WriteLine("\t\tCreate swap chain (Run 2)."); SharpDX.DXGI.SwapChain swapChain = new SharpDX.DXGI.SwapChain(factory, device, desc); } catch (Exception e) { Console.WriteLine("EXCEPTION: {0}", e.Message); } try { Console.WriteLine("\t\tCreate device (Run 3)."); SharpDX.Direct3D11.Device device = new SharpDX.Direct3D11.Device(adapter); Console.WriteLine("\t\tCreate swap chain (Run 3)."); SharpDX.DXGI.SwapChain swapChain = new SharpDX.DXGI.SwapChain(factory, device, desc); } catch (Exception e) { Console.WriteLine("EXCEPTION: {0}", e.Message); } } } Console.WriteLine("FIN."); Console.ReadLine(); } } In the beginning, I am collecting information about the computer (processor, GPU, .NET Framework version, etc.). The rest should explain itself. I sent him the application and in all three cases, creating the swap chain fails with the same exception. In this test program, I included all solutions that worked for other users. For example, AlexandreMutel said in this forum thread, that device and swap chain need to share the same factory. I did that in my program. So using different factories is not a problem in my case. Laurent Couvidou said here: The player has Windows 7 with .NET Framework 4.6.1, which is good enough to run my test application or game which use .NET Framework 4.5.2. The graphics cards (Radeon HD 6700 Series) is also good enough to run the application. In my test application, I also checked, if Feature Level 10_1 is supported, which is the minimum requirement for my game. A refresh rate of 60 Hz should also be no problem. Therefore, I think the parameters are fine. The remaining calls are just three different ways to create a device and a swap chain for the adapter. All of them throw an exception when creating the swap chain. There are also Battlefield 3 players who had problems with running BF3. As it turned out, BF3 had some issue with Windows region settings. But I believe that's not a problem in my case. There were also compatibility issues in Fallout 4, but my game runs on many other Windows 7 PCs without any problems. Do you have any idea what's wrong? I already have a lot of players that play my game without any issues, so it's not a general problem. I really want to help this player, but I currently can't find a solution.
  24. Hi guys, I'm following the Rastertek tutorial and making a reflection effect using render targets in DirectX 11. http://www.rastertek.com/dx11tut27.html Got it working, but as you can see, the reflected image seems to be drawing to a really small "texture" - it doesn't correspond to the size of the actual blue floor texture it is being drawn onto. If you go really close to it and look straight down, you can also see many copies of the reflected texture inside the blue floor texture, so it seems it's being tiled somehow. But I have no idea where to change this setting in the code, or what the cause of the sizing problem is. At line 108 of Graphics.cpp, _RenderTexture->Initialize(_D3D->GetDevice(), screenWidth, screenHeight); we give the screenWidth and screenHeight to the render texture that is later used as the target for the reflected image. However, I tried passing in different values for the width and height of this texture and it will stretch or shrink the reflected image, but then its position on the reflective surface is all messed up. I need that reflection target to be the same size as the actual blue floor texture, but the images need to be scaled properly when they get reflected into it. It doesn't seem to be doing the scaling right at the moment. If you could take a look, any help would be much appreciated. Thanks so much. https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/Graphics.cpp
  25. Hey guys, I'm trying to work on adding transparent objects to my deferred-rendered scene. The only issue is the z-buffer. As far as I know, the standard way to handle this is copying the buffer. In OpenGL, I can just blit it. What's the alternative for DirectX? And are there any alternatives to copying the buffer? Thanks in advance!
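The DX11 counterpart of blitting the depth attachment is ID3D11DeviceContext::CopyResource between two depth textures created with identical descriptions (same size, format and sample count); alternatively, a typeless depth texture can be read through an SRV with no copy at all, as long as it isn't bound as a writable DSV at the same time. A minimal sketch with assumed resource names:

// sceneDepthTex and depthCopyTex were created with the same D3D11_TEXTURE2D_DESC
// (e.g. DXGI_FORMAT_R24G8_TYPELESS, D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE).
context->CopyResource(depthCopyTex, sceneDepthTex);   // full GPU-side copy, no CPU round-trip

// Read it in the transparent pass through an SRV over the copy.
D3D11_SHADER_RESOURCE_VIEW_DESC srv = {};
srv.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;  // depth readable, stencil masked
srv.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
srv.Texture2D.MipLevels = 1;
device->CreateShaderResourceView(depthCopyTex, &srv, &sceneDepthSRV);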