blueshogun96

Member
  • Content count: 1094
  • Joined
  • Last visited

Community Reputation: 2267 Excellent

1 Follower

About blueshogun96

  • Rank: Crossbones+

Recent Profile Visitors

23781 profile views
  1. blueshogun96

    My renderer stopped working

    Yeah, but unfortunately it doesn't work on my laptop (only on my old Surface tablet at the moment), so I used RenderDoc per Hodgman's recommendation. The bigger issue, though, was that I was drawing the triangle CCW: it appears that on NV hardware, the drivers change their default render state behaviour depending on the MSAA setting. Strange. Oh well, it works now, and that's all I care about (a sketch of making that state explicit follows below). Shogun
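    For anyone finding this later, the fix generalizes: instead of relying on whatever rasterizer defaults the driver picks, set the state explicitly. A minimal sketch of my own (not KunaiEngine code), assuming the usual ID3D11Device/ID3D11DeviceContext pair already exists:

    #include <d3d11.h>

    /* Make winding/culling explicit so it cannot vary with driver defaults.
       By default D3D11 treats clockwise triangles as front-facing and culls
       back faces, so a CCW triangle disappears unless you flip this. */
    static void SetExplicitRasterizerState( ID3D11Device* device, ID3D11DeviceContext* context )
    {
        D3D11_RASTERIZER_DESC rd = {};
        rd.FillMode = D3D11_FILL_SOLID;
        rd.CullMode = D3D11_CULL_BACK;      /* same as the default */
        rd.FrontCounterClockwise = TRUE;    /* treat CCW triangles as front faces */
        rd.DepthClipEnable = TRUE;

        ID3D11RasterizerState* rs = NULL;
        if( SUCCEEDED( device->CreateRasterizerState( &rd, &rs ) ) )
        {
            context->RSSetState( rs );
            rs->Release();  /* the context keeps its own reference */
        }
    }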
  2. blueshogun96

    Your biggest difficulties with game development?

    I've never been able to knowingly delete my own threads, but a moderator can take the appropriate action. You can either wait for them to respond, or report your own thread asking them to move it to the appropriate place(s). Shogun
  3. blueshogun96

    Your biggest difficulties with game development?

    As much as I appreciate your intentions, I believe the beginner's development forum probably isn't the best place to post surveys. I'd recommend sharing it in the lounge instead, but that's a call for the moderators to make. Welcome btw, Shogun
  4. blueshogun96

    My renderer stopped working

    That last bit is by design; I was copying from the Microsoft tutorial, just trying to get the triangle to work. Anyway, thanks to slicer4ever (yet again, this guy is friggin awesome), I found out what my issue was. It turns out that when I set swapchain_desc.SampleDesc.Count to 1, nothing renders. He had the same issue, and nothing worked until he set it to 4. I commented out the depth stencil code temporarily and it worked. Now, there's one more stupid thing I did: look at IKeD3D11RenderDevice::Clear, where I attempted to clear the depth and stencil buffers in one function call and somehow assumed it worked the same way as in D3D9 and core OpenGL. So I moved those clears to separate lines and it works now (see the sketch below). Sorry if my description was too vague, but I'm just glad it's working. Thanks. Shogun
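    For reference, that split maps onto the two distinct clear calls D3D11 exposes: colour goes through the render target view, while depth and stencil share one view and are selected by flags. A minimal sketch of mine, assuming the context and views already exist:

    #include <d3d11.h>

    /* Colour and depth/stencil are cleared through separate calls in D3D11. */
    static void ClearAll( ID3D11DeviceContext* context,
                          ID3D11RenderTargetView* rtv,
                          ID3D11DepthStencilView* dsv )
    {
        const FLOAT green[4] = { 0.0f, 0.5f, 0.0f, 1.0f };
        context->ClearRenderTargetView( rtv, green );
        context->ClearDepthStencilView( dsv, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0 );
    }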
  5. I could have sworn that this D3D11 renderer of mine was working before. The OpenGL version of this renderer works fine (mostly because I've been confined to macOS for too long, causing my D3D11 renderer to fall behind), but even though I've followed the tutorials almost exactly, the code just isn't working. I've tried to use Visual Studio's debugging feature, but I couldn't find any helpful information in it (or I'm just blind, since I'd never used it before now). Now, I really hate to just dump code on you all, but there's more than enough to go through, so I'll try to keep it limited to the relevant parts. This is the main source file, so you can see in order what is being done, what is being called, etc.

    CKeDemoApplication::CKeDemoApplication()
    {
        std::string dxvs =
            "float4 vs_main( float4 Pos : POSITION ) : SV_POSITION\n"
            "{\n"
            "    return Pos;\n"
            "}";

        std::string dxps =
            "float4 ps_main( float4 Pos : SV_POSITION ) : SV_Target\n"
            "{\n"
            "    return float4( 1.0f, 1.0f, 0.0f, 1.0f );\n"
            "}";

        std::string glvs =
            "#version 150\n"
            "in vec3 in_pos;\n"
            "out vec4 out_colour;\n"
            "void main( void )\n"
            "{\n"
            "    gl_Position = vec4( in_pos.xyz, 1.0 );\n"
            "    out_colour = vec4( 1, 1, 1, 1 );\n"
            "}";

        std::string glfs =
            "#version 150\n"
            "out vec4 colour;\n"
            "in vec4 out_colour;\n"
            "void main(void)\n"
            "{\n"
            "    colour = out_colour;\n"
            "}";

        /* Initialize Kunai Engine */
        KeInitialize();

        /* Initialize a basic core OpenGL 3.x device */
        KeRenderDeviceDesc rddesc;
        ZeroMemory( &rddesc, sizeof( KeRenderDeviceDesc ) );
        rddesc.width = 640;
        rddesc.height = 480;
        rddesc.colour_bpp = 32;
        rddesc.depth_bpp = 24;
        rddesc.stencil_bpp = 8;
        rddesc.fullscreen = No;
        rddesc.buffer_count = 2;
        rddesc.device_type = KE_RENDERDEVICE_D3D11;

        bool ret = KeCreateWindowAndDevice( &rddesc, &m_pRenderDevice );
        if( !ret )
        {
            DISPDBG( KE_ERROR, "Error initializing render device!" );
        }

        /* Initialize GPU program and geometry buffer */
        KeVertexAttribute va[] =
        {
            { KE_VA_POSITION, 3, KE_FLOAT, No, sizeof(float)*3, 0 },
            { -1, 0, 0, 0, 0 }
        };

        nv::vec3f vd[] =
        {
            nv::vec3f( -1.0f, -1.0f, 0.0f ),
            nv::vec3f(  1.0f, -1.0f, 0.0f ),
            nv::vec3f(  0.0f,  1.0f, 0.0f ),
        };

        if( rddesc.device_type == KE_RENDERDEVICE_D3D11 )
            m_pRenderDevice->CreateProgram( dxvs.c_str(), dxps.c_str(), NULL, NULL, va, &m_pProgram );
        else
            m_pRenderDevice->CreateProgram( glvs.c_str(), glfs.c_str(), NULL, NULL, va, &m_pProgram );

        m_pRenderDevice->CreateGeometryBuffer( &vd, sizeof(nv::vec3f)*3, NULL, 0, 0, KE_USAGE_STATIC_WRITE, va, &m_pGB );
    }

    CKeDemoApplication::~CKeDemoApplication()
    {
        if( m_pGB )
            m_pGB->Destroy();
        if( m_pProgram )
            m_pProgram->Destroy();

        KeDestroyWindowAndDevice( m_pRenderDevice );
        m_pRenderDevice = NULL;
        KeUninitialize();
    }

    void CKeDemoApplication::Run()
    {
        m_pRenderDevice->SetProgram( m_pProgram );
        m_pRenderDevice->SetGeometryBuffer( m_pGB );
        m_pRenderDevice->SetTexture( 0, NULL );

        while( !KeQuitRequested() )
        {
            KeProcessEvents();

            float green[4] = { 0.0f, 0.5f, 0.0f, 1.0 };
            m_pRenderDevice->SetClearColourFV( green );
            m_pRenderDevice->SetClearDepth( 1.0f );
            m_pRenderDevice->SetClearStencil( 0 );
            m_pRenderDevice->Clear( KE_COLOUR_BUFFER | KE_DEPTH_BUFFER /*| KE_STENCIL_BUFFER*/ );

            m_pRenderDevice->DrawVertices( KE_TRIANGLES, sizeof(nv::vec3f), 0, 3 );
            m_pRenderDevice->Swap();
        }
    }

    For those that want to see my initialization routine:

    bool IKeDirect3D11RenderDevice::PVT_InitializeDirect3DWin32()
    {
        /* Initialize Direct3D11 */
        uint32_t flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;

        D3D_FEATURE_LEVEL feature_levels[] =
        {
            D3D_FEATURE_LEVEL_12_1,
            D3D_FEATURE_LEVEL_12_0,
            D3D_FEATURE_LEVEL_11_1,
            D3D_FEATURE_LEVEL_11_0,
            D3D_FEATURE_LEVEL_10_1,
            D3D_FEATURE_LEVEL_10_0,
            D3D_FEATURE_LEVEL_9_3,
            D3D_FEATURE_LEVEL_9_2,
            D3D_FEATURE_LEVEL_9_1
        };

        int feature_level_count = ARRAYSIZE( feature_levels );

    #ifdef _DEBUG
        flags = D3D11_CREATE_DEVICE_DEBUG;
    #endif

        ZeroMemory( &swapchain_desc, sizeof( swapchain_desc ) );
        swapchain_desc.BufferCount = device_desc->buffer_count;
        swapchain_desc.BufferDesc.Width = device_desc->width;
        swapchain_desc.BufferDesc.Height = device_desc->height;
        swapchain_desc.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
        swapchain_desc.BufferDesc.RefreshRate.Numerator = device_desc->refresh_rate;
        swapchain_desc.BufferDesc.RefreshRate.Denominator = 1;
        swapchain_desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        swapchain_desc.OutputWindow = GetActiveWindow();
        swapchain_desc.SampleDesc.Count = 1;
        swapchain_desc.SampleDesc.Quality = 0;
        swapchain_desc.Windowed = !device_desc->fullscreen;

        HRESULT hr = D3D11CreateDeviceAndSwapChain( NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags,
            feature_levels, feature_level_count, D3D11_SDK_VERSION, &swapchain_desc,
            &dxgi_swap_chain, &d3ddevice, &feature_level, &d3ddevice_context );

    #ifdef _DEBUG
        /* If we are requesting a debug device, and we fail to get it, try again
           without the debug flag. */
        if( hr == DXGI_ERROR_SDK_COMPONENT_MISSING )
        {
            DISPDBG( KE_WARNING, "Attempting to re-create the Direct3D device without debugging capabilities..." );

            flags &= ~D3D11_CREATE_DEVICE_DEBUG;

            hr = D3D11CreateDeviceAndSwapChain( NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags,
                feature_levels, feature_level_count, D3D11_SDK_VERSION, &swapchain_desc,
                &dxgi_swap_chain, &d3ddevice, &feature_level, &d3ddevice_context );
        }
    #endif
        D3D_DISPDBG_RB( KE_ERROR, "Error creating Direct3D11 device and swapchain!", hr );

        /* Create our render target view */
        ID3D11Texture2D* back_buffer = NULL;
        hr = dxgi_swap_chain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), ( LPVOID* )&back_buffer );
        D3D_DISPDBG_RB( KE_ERROR, "Error getting back buffer!", hr );

        hr = d3ddevice->CreateRenderTargetView( back_buffer, NULL, &d3d_render_target_view );
        back_buffer->Release();
        D3D_DISPDBG_RB( KE_ERROR, "Error creating render target view!", hr );

        /* Create our depth stencil view */
        D3D11_TEXTURE2D_DESC depthdesc;
        depthdesc.Width = device_desc->width;
        depthdesc.Height = device_desc->height;
        depthdesc.MipLevels = 1;
        depthdesc.ArraySize = 1;
        depthdesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
        depthdesc.SampleDesc.Count = 1;
        depthdesc.SampleDesc.Quality = 0;
        depthdesc.Usage = D3D11_USAGE_DEFAULT;
        depthdesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
        depthdesc.CPUAccessFlags = 0;
        depthdesc.MiscFlags = 0;
        hr = d3ddevice->CreateTexture2D( &depthdesc, NULL, &d3d_depth_stencil_buffer );
        D3D_DISPDBG_RB( KE_ERROR, "Error creating depth stencil buffer!", hr );

        D3D11_DEPTH_STENCIL_VIEW_DESC dsvdesc = {};
        dsvdesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;    /* TODO: Do not hardcode this... */
        dsvdesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
        dsvdesc.Texture2D.MipSlice = 0;
        hr = d3ddevice->CreateDepthStencilView( d3d_depth_stencil_buffer, &dsvdesc, &d3d_depth_stencil_view );
        D3D_DISPDBG_RB( KE_ERROR, "Error creating depth stencil view!", hr );

        /* Set render target and depth stencil */
        d3ddevice_context->OMSetRenderTargets( 1, &d3d_render_target_view.GetInterfacePtr(), d3d_depth_stencil_view );

        /* Setup the viewport */
        D3D11_VIEWPORT vp;
        vp.Width = (FLOAT) device_desc->width;
        vp.Height = (FLOAT) device_desc->height;
        vp.MinDepth = 0.0f;
        vp.MaxDepth = 1.0f;
        vp.TopLeftX = 0;
        vp.TopLeftY = 0;
        d3ddevice_context->RSSetViewports( 1, &vp );

        /* Get DXGI output */
        if( FAILED( hr = dxgi_swap_chain->GetContainingOutput( &dxgi_output ) ) )
        {
            DISPDBG( KE_WARNING, "IDXGISwapChain::GetContainingOutput returned (0x" << hr << ")" );
            dxgi_output = nullptr;
        }

        return S_OK;
    }

    Going down the initialization routine, here's the code for creating shaders and geometry buffers:

    bool IKeDirect3D11RenderDevice::CreateProgram( const char* vertex_shader, const char* fragment_shader,
        const char* geometry_shader, const char* tesselation_shader, KeVertexAttribute* vertex_attributes,
        IKeGpuProgram** gpu_program )
    {
        D3D11_INPUT_ELEMENT_DESC* layout = NULL;
        int layout_size = 0;
        DXGI_FORMAT fmt;
        DWORD shader_flags = D3DCOMPILE_ENABLE_STRICTNESS;

    #ifdef _DEBUG
        shader_flags |= D3DCOMPILE_DEBUG;
    #endif

        /* Allocate new GPU program */
        *gpu_program = new IKeDirect3D11GpuProgram;
        IKeDirect3D11GpuProgram* gp = static_cast<IKeDirect3D11GpuProgram*>( *gpu_program );

        /* Create Direct3D compatible vertex layout */
        while( vertex_attributes[layout_size].index != -1 )
            layout_size++;
        layout = new D3D11_INPUT_ELEMENT_DESC[layout_size];

        if( layout )
        {
            for( int i = 0; i < layout_size; i++ )
            {
                if( vertex_attributes[i].type == KE_FLOAT && vertex_attributes[i].size == 1 ) fmt = DXGI_FORMAT_R32_FLOAT;
                if( vertex_attributes[i].type == KE_FLOAT && vertex_attributes[i].size == 2 ) fmt = DXGI_FORMAT_R32G32_FLOAT;
                if( vertex_attributes[i].type == KE_FLOAT && vertex_attributes[i].size == 3 ) fmt = DXGI_FORMAT_R32G32B32_FLOAT;
                if( vertex_attributes[i].type == KE_FLOAT && vertex_attributes[i].size == 4 ) fmt = DXGI_FORMAT_R32G32B32A32_FLOAT;

                if( !strcmp( "POSITION", semantic_list[vertex_attributes[i].index].name ) )
                    layout[i].SemanticName = "POSITION";
                if( !strcmp( "COLOR", semantic_list[vertex_attributes[i].index].name ) )
                    layout[i].SemanticName = "COLOR";

                layout[i].SemanticIndex = semantic_list[vertex_attributes[i].index].index;
                layout[i].Format = fmt;
                layout[i].InputSlot = 0;    /* TODO */
                layout[i].AlignedByteOffset = vertex_attributes[i].offset;
                layout[i].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
                layout[i].InstanceDataStepRate = 0;    /* TODO */
            }

            /* Initialize vertex shader */
            /* TODO: Auto detect highest shader version */
            CD3D10Blob* blob_shader = NULL;
            CD3D10Blob* blob_error = NULL;

            HRESULT hr = D3DCompile( vertex_shader, strlen( vertex_shader ) + 1, "vs_main", NULL, NULL,
                "vs_main", "vs_4_0", shader_flags, 0, &blob_shader, &blob_error );
            if( FAILED( hr ) )
            {
                if( blob_error != NULL )
                {
                    DISPDBG( KE_ERROR, "Error compiling vertex shader source!\n" << (char*)blob_error->GetBufferPointer() << "\n" );
                    delete[] layout;
                    blob_error = 0;
                    gp->Destroy();
                }
                return false;
            }

            hr = d3ddevice->CreateVertexShader( blob_shader->GetBufferPointer(), blob_shader->GetBufferSize(), NULL, &gp->vs );
            if( FAILED( hr ) )
            {
                delete[] layout;
                blob_shader = 0;
                gp->Destroy();
                DISPDBG( KE_ERROR, "Error creating vertex shader!\n" );
            }

            /* Create input layout */
            hr = d3ddevice->CreateInputLayout( layout, layout_size, blob_shader->GetBufferPointer(), blob_shader->GetBufferSize(), &gp->il );
            blob_shader = 0;
            delete[] layout;
            if( FAILED( hr ) )
            {
                gp->Destroy();
                DISPDBG( KE_ERROR, "Error creating input layout!\n" );
            }

            /* Create pixel shader */
            hr = D3DCompile( fragment_shader, strlen( fragment_shader ) + 1, "ps_main", NULL, NULL,
                "ps_main", "ps_4_0", shader_flags, 0, &blob_shader, &blob_error );
            if( FAILED( hr ) )
            {
                if( blob_error != NULL )
                {
                    DISPDBG( KE_ERROR, "Error compiling pixel shader source!\n" << (char*)blob_error->GetBufferPointer() << "\n" );
                    blob_error = 0;
                    gp->Destroy();
                }
                return false;
            }

            hr = d3ddevice->CreatePixelShader( blob_shader->GetBufferPointer(), blob_shader->GetBufferSize(), NULL, &gp->ps );
            if( FAILED( hr ) )
            {
                blob_shader = 0;
                gp->Destroy();
                DISPDBG( KE_ERROR, "Error creating pixel shader!\n" );
            }

            blob_shader = 0;

            /* TODO: Geometry, Hull, Compute and Domain shaders */
            gp->hs = NULL;
            gp->gs = NULL;
            gp->cs = NULL;
            gp->ds = NULL;
        }

    #if 1
        /* Copy vertex attributes */
        int va_size = 0;
        while( vertex_attributes[va_size].index != -1 )
            va_size++;

        gp->va = new KeVertexAttribute[va_size+1];
        memmove( gp->va, vertex_attributes, sizeof( KeVertexAttribute ) * (va_size+1) );
    #endif

        return true;
    }

    /*
     * Name: IKeDirect3D11RenderDevice::create_geometry_buffer
     * Desc: Creates a geometry buffer based on the vertex and index data given.  Vertex and index
     *       buffers are encapsulated into one interface for easy management, however, index data
     *       input is completely optional.  Interleaved vertex data is also supported.
     */
    bool IKeDirect3D11RenderDevice::CreateGeometryBuffer( void* vertex_data, uint32_t vertex_data_size,
        void* index_data, uint32_t index_data_size, uint32_t index_data_type, uint32_t flags,
        KeVertexAttribute* va, IKeGeometryBuffer** geometry_buffer )
    {
        HRESULT hr = S_OK;

        /* Sanity check(s) */
        if( !geometry_buffer )
            DISPDBG_RB( KE_ERROR, "Invalid interface pointer!" );
        //if( !vertex_attributes )
        //    return false;
        if( !vertex_data_size )
            DISPDBG_RB( KE_ERROR, "(vertex_data_size == 0) condition is currently not allowed..." );    /* Temporary? */

        *geometry_buffer = new IKeDirect3D11GeometryBuffer;
        IKeDirect3D11GeometryBuffer* gb = static_cast<IKeDirect3D11GeometryBuffer*>( *geometry_buffer );
        gb->stride = 0;

        /* Create a vertex buffer */
        D3D11_BUFFER_DESC bd;
        ZeroMemory( &bd, sizeof(bd) );
        bd.Usage = D3D11_USAGE_DEFAULT;
        bd.ByteWidth = vertex_data_size;
        bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        bd.CPUAccessFlags = 0;    /* TODO */

        D3D11_SUBRESOURCE_DATA id;
        ZeroMemory( &id, sizeof(id) );
        id.pSysMem = vertex_data;

        hr = d3ddevice->CreateBuffer( &bd, &id, &gb->vb );
        if( FAILED( hr ) )
        {
            delete (*geometry_buffer);
            D3D_DISPDBG_RB( KE_ERROR, "Error creating vertex buffer!", hr );
        }

        /* Create index buffer, if desired. */
        gb->ib = NULL;

        if( index_data_size )
        {
            ZeroMemory( &bd, sizeof(bd) );
            bd.Usage = D3D11_USAGE_DEFAULT;
            bd.ByteWidth = index_data_size;
            bd.BindFlags = D3D11_BIND_INDEX_BUFFER;

            ZeroMemory( &id, sizeof(id) );
            id.pSysMem = index_data;

            hr = d3ddevice->CreateBuffer( &bd, &id, &gb->ib );
            if( FAILED( hr ) )
            {
                delete (*geometry_buffer);
                D3D_DISPDBG_RB( KE_ERROR, "Error creating index buffer!", hr );
            }

            gb->index_type = index_data_type;
        }
        else
        {
            gb->index_type = 0;
        }

        return true;
    }

    So that's the end of the initialization stuff; let's take a look at the relevant parts that make it draw.

    /*
     * Name: IKeDirect3D11RenderDevice::set_program
     * Desc: Sets the GPU program.  If NULL, the GPU program is set to 0.
     */
    void IKeDirect3D11RenderDevice::SetProgram( IKeGpuProgram* gpu_program )
    {
        IKeDirect3D11GpuProgram* gp = static_cast<IKeDirect3D11GpuProgram*>( gpu_program );

        /* Set input layout */
        if(gp)
            d3ddevice_context->IASetInputLayout( gp->il );
        else
            d3ddevice_context->IASetInputLayout( NULL );

        /* Set shaders */
        if(gp)
        {
            d3ddevice_context->VSSetShader( gp->vs, NULL, 0 );
            d3ddevice_context->PSSetShader( gp->ps, NULL, 0 );
            d3ddevice_context->GSSetShader( gp->gs, NULL, 0 );
            d3ddevice_context->HSSetShader( gp->hs, NULL, 0 );
            d3ddevice_context->DSSetShader( gp->ds, NULL, 0 );
            d3ddevice_context->CSSetShader( gp->cs, NULL, 0 );
        }
        else
        {
            d3ddevice_context->VSSetShader( NULL, NULL, 0 );
            d3ddevice_context->PSSetShader( NULL, NULL, 0 );
            d3ddevice_context->GSSetShader( NULL, NULL, 0 );
            d3ddevice_context->HSSetShader( NULL, NULL, 0 );
            d3ddevice_context->DSSetShader( NULL, NULL, 0 );
            d3ddevice_context->CSSetShader( NULL, NULL, 0 );
        }
    }

    /*
     * Name: IKeDirect3D11RenderDevice::set_vertex_buffer
     * Desc: Sets the current geometry buffer to be used when rendering.  Internally, binds the
     *       vertex array object.  If NULL, then sets the current vertex array object to 0.
     */
    void IKeDirect3D11RenderDevice::SetGeometryBuffer( IKeGeometryBuffer* geometry_buffer )
    {
        current_geometrybuffer = geometry_buffer;
        /* We'll come back to this in a minute */
    }

    void IKeDirect3D11RenderDevice::Clear( uint32_t buffers )
    {
        if( buffers & KE_COLOUR_BUFFER )
            d3ddevice_context->ClearRenderTargetView( d3d_render_target_view, clear_colour );

        D3D11_CLEAR_FLAG flags = 0;
        if( buffers & KE_DEPTH_BUFFER )
            flags |= D3D11_CLEAR_DEPTH;
        if( buffers & KE_STENCIL_BUFFER )
            flags |= D3D11_CLEAR_STENCIL;

        if( flags && d3d_depth_stencil_view != nullptr )
            d3ddevice_context->ClearDepthStencilView( d3d_depth_stencil_view, flags, clear_depth, clear_stencil );
    }

    /*
     * Name: IKeDirect3D11RenderDevice::draw_vertices
     * Desc: Draws vertices from the current vertex buffer
     */
    void IKeDirect3D11RenderDevice::DrawVertices( uint32_t primtype, uint32_t stride, int first, int count )
    {
        IKeDirect3D11GeometryBuffer* gb = static_cast<IKeDirect3D11GeometryBuffer*>(current_geometrybuffer);
        IKeDirect3D11GpuProgram* gp = static_cast<IKeDirect3D11GpuProgram*>(current_gpu_program);
        uint32_t offset = 0;    /* TODO: Allow user to specify this */

        d3ddevice_context->IASetVertexBuffers( 0, 1, &gb->vb.GetInterfacePtr(), &stride, &offset );
        d3ddevice_context->IASetPrimitiveTopology( primitive_types[primtype] );
        d3ddevice_context->Draw( count, first );
    }

    /*
     * Name: IKeDirect3D11RenderDevice::swap
     * Desc: Swaps the double buffer.
     */
    void IKeDirect3D11RenderDevice::Swap()
    {
        HRESULT hr = dxgi_swap_chain->Present( swap_interval, 0 );
        if( FAILED( hr ) )
            DISPDBG( KE_ERROR, "IDXGISwapChain::Present(): Error = 0x" << hr << "\n" );
    }

    Okay, so that should be everything in order. I was following the Microsoft tutorials (Lesson 2) from the SDK at the time, to help me get started on the basics and initialization. I followed it almost to the letter, but it's still not rendering anything but a blank screen. The entire thing (including this sample project) is on GitHub if you want/need it:

    https://github.com/blueshogun96/KunaiEngine/blob/master/source/KeDirect3D11/KeDirect3D11RenderDevice.h
    https://github.com/blueshogun96/KunaiEngine/blob/master/source/KeDirect3D11/KeDirect3D11RenderDevice.cpp
    https://github.com/blueshogun96/KunaiEngine/tree/master/templates/win32 <- Template project

    Just a word of warning: if you try to build the template project, it will take a few minutes, as the entire engine is fairly large (and getting larger). I'm also prepared for any critique on the overall renderer design, since there's much room for improvement and a ton of stuff I haven't gotten a chance to touch on the Direct3D side. Any ideas? Thanks. Shogun
  6. /*
      * GPU Details structure
      * NOTE: Subject to change
      */
     [StructLayout(LayoutKind.Sequential, Size = 136), Serializable]
     public struct GPUDETAILS
     {
         [MarshalAsAttribute(UnmanagedType.ByValTStr, SizeConst = 128)]
         public string DeviceDesc;

         [MarshalAsAttribute(UnmanagedType.U4, SizeConst = 1)]
         public UInt32 DeviceID;

         [MarshalAsAttribute(UnmanagedType.U4, SizeConst = 1)]
         public UInt32 VendorID;
     }

     Fixed it, works perfectly (a note on why the sizes line up follows below). Thanks. Shogun
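     For anyone who hit the same crash: the numbers in those attributes can be sanity-checked from the native side. A quick C++ check of mine (not from the original post) showing why Size = 136 lines up with the C++ struct:

     #include <windows.h>

     /* 128 + 4 + 4 = 136 bytes; the DWORDs are already 4-byte aligned, so
        there is no hidden padding for the C# declaration to account for. */
     typedef struct _GPUDETAILS
     {
         CHAR  DeviceDesc[128];  /* offset 0:   ByValTStr, SizeConst = 128 */
         DWORD DeviceID;         /* offset 128: UInt32 */
         DWORD VendorID;         /* offset 132: UInt32 */
     } GPUDETAILS;

     static_assert( sizeof( GPUDETAILS ) == 136, "must match StructLayout Size = 136" );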
  7. I'm writing a GUI-based GPU tool, and I'm using C# and WPF since that will make my life easier bringing this app to the Win10 store. IMHO, it's surprisingly hard to find a good series of tutorials that teach you more than how to add some buttons to a window, input text, etc. My app doesn't need much of that; what I really need is a GUI similar to the Task Manager's. What I mean are tabs to switch between graphs and tables, etc. Also, I can't find any code showing how to add a graph (histogram) to my app, and so on. I've googled quite a bit and found out that some tutorials are behind a friggin pay wall?! Ugh. Not to complain, just hoping to find something so I can get this thing working acceptably before Wednesday. Thanks, Shogun
  8. I'm writing a GPU tool whose UI and user-facing command line .exes are in C#, with the necessary driver code written in C++ (loaded via a .dll). So far, loading an unmanaged .dll written in C++ is trivial, but there's one part that confuses me (mostly because I'm not a C# expert yet): how do you handle structures as parameters? My code crashes when I try to use a structure as a parameter. Should I use an IntPtr instead and just cast it? I'll show you a bit of code to show you what I mean.

    C++:

    typedef struct _GPUDETAILS
    {
        CHAR DeviceDesc[128];
        DWORD DeviceID;
        DWORD VendorID;
    } GPUDETAILS;

    ...

    GPUMONEXDRIVERNVAPI_API int Drv_GetGpuDetails( int AdapterNumber, GPUDETAILS* pGpuDetails )
    {
        _LOG( __FUNCTION__ << "(): TODO: Implement...\n" );

        if( !pGpuDetails )
        {
            _ERROR( "Invalid parameter!" << std::endl );
            return 0;
        }

        _LOG( __FUNCTION__ << "(): Gathering GPU details...\n" );

        strcpy( pGpuDetails->DeviceDesc, "NVIDIA Something..." );
        pGpuDetails->DeviceID = 0xFFFF;    /* TODO */
        pGpuDetails->VendorID = 0x10DE;    /* This is always a given */

        return 1;
    }

    Something simple for now. Let's move on to the C# part...

    namespace GPUMonEx
    {
        /*
         * GPU Details structure
         * NOTE: Subject to change
         */
        public struct GPUDETAILS
        {
            public string DeviceDesc;
            public UInt32 DeviceID;
            public UInt32 VendorID;
        }

        /*
         * Driver importer classes for the following APIs under Windows
         * TODO: Get ahold of Intel's SDK as well as implement AMD's equivalent for their hardware.
         *
         * NVAPI - NVIDIA Driver Specific functionality
         * D3DKMT - Direct3D internal driver functions.  Should work for all GPUs, but currently needed for Intel.
         */
        static class DrvD3DKMT
        {
            [DllImport("GPUMonEx.Driver.D3DKMT.dll")]
            public static extern int Drv_Initialize();
            [DllImport("GPUMonEx.Driver.D3DKMT.dll")]
            public static extern void Drv_Uninitialize();
            [DllImport("GPUMonEx.Driver.D3DKMT.dll")]
            public static extern unsafe int Drv_GetGpuDetails(int Adapter, ref GPUDETAILS pGpuDetails);
            [DllImport("GPUMonEx.Driver.D3DKMT.dll")]
            public static extern int Drv_GetOverallGpuLoad();
            [DllImport("GPUMonEx.Driver.D3DKMT.dll")]
            public static extern int Drv_GetGpuTemperature();
        }

        static class DrvNVAPI
        {
            [DllImport("GPUMonEx.Driver.NVAPI.dll")]
            public static extern int Drv_Initialize();
            [DllImport("GPUMonEx.Driver.NVAPI.dll")]
            public static extern void Drv_Uninitialize();
            [DllImport("GPUMonEx.Driver.NVAPI.dll")]
            public static extern unsafe int Drv_GetGpuDetails(int Adapter, ref GPUDETAILS pGpuDetails);
            [DllImport("GPUMonEx.Driver.NVAPI.dll")]
            public static extern int Drv_GetOverallGpuLoad();
            [DllImport("GPUMonEx.Driver.NVAPI.dll")]
            public static extern int Drv_GetGpuTemperature();
        }

        /*
         * GPU Driver interfacing classes (the ones you actually call in user mode)
         */
        public abstract class GPUDriverBase
        {
            public abstract int Initialize();
            public abstract void Uninitialize();
            public abstract int GetGpuDetails( int Adapter, ref GPUDETAILS pGpuDetails );
            public abstract int GetOverallGpuLoad();
            public abstract int GetGpuTemperature();
        }

        public class GPUDriverD3DKMT : GPUDriverBase
        {
            public override int Initialize() { return DrvD3DKMT.Drv_Initialize(); }
            public override void Uninitialize() { DrvD3DKMT.Drv_Uninitialize(); }
            public override int GetGpuDetails( int Adapter, ref GPUDETAILS pGpuDetails ) { return DrvD3DKMT.Drv_GetGpuDetails( Adapter, ref pGpuDetails ); }
            public override int GetOverallGpuLoad() { return DrvD3DKMT.Drv_GetOverallGpuLoad(); }
            public override int GetGpuTemperature() { return DrvD3DKMT.Drv_GetGpuTemperature(); }
        }

        public class GPUDriverNVAPI : GPUDriverBase
        {
            public override int Initialize() { return DrvNVAPI.Drv_Initialize(); }
            public override void Uninitialize() { DrvNVAPI.Drv_Uninitialize(); }
            public override int GetGpuDetails(int Adapter, ref GPUDETAILS pGpuDetails) { return DrvNVAPI.Drv_GetGpuDetails(Adapter, ref pGpuDetails); }
            public override int GetOverallGpuLoad() { return DrvNVAPI.Drv_GetOverallGpuLoad(); }
            public override int GetGpuTemperature() { return DrvNVAPI.Drv_GetGpuTemperature(); }
        }
    }

    So, focusing on Drv_GetGpuDetails(), how do I actually get a valid structure filled in here? Calling that function just crashes. I'm sure it's a stupid easy fix, but once again, I'm far too C++ oriented and have yet to get used to C# in the same manner. Any advice is welcome (on the question at hand or anything else). Shogun
  9. blueshogun96

    Understanding OpenGL 3+ profiles

    Quite simple. When you create a core OpenGL profile (3.0 and beyond), you are essentially requesting the OpenGL equivalent of D3D10+ functionality (depending on which profile you use). To be more specific, certain core OpenGL extensions were introduced with certain core OpenGL updates, and usually, the higher the profile, the more functionality you have at your disposal. Of course, some things are vendor specific, just as with legacy OpenGL; you still have to query extension support. For example, if you want to use GL_NV_command_list, you will need core OpenGL 4.5, as NV added support for this extension with OpenGL 4.5 and later. If you need a better explanation, take a look at this history chart of each OpenGL update, as well as what extensions each version is supposed to support: https://www.khronos.org/opengl/wiki/History_of_OpenGL#Summary_of_version_changes Usually, the ARB or Khronos will approve an extension before making it an official part of the spec; vendor-specific ones do so at their own discretion.

    Now, as for your question on creating a core OpenGL context, I don't remember offhand how to do it. I personally use SDL 2.0 and its API to select the core OpenGL profile I want, since my engine has to be cross-platform (plus I use GLEW for simplicity with extensions, which you can still use with core OpenGL); a rough SDL sketch follows at the end of this post. A lot of tutorials use SDL or GLFW for simplicity, but that doesn't explain how a core context is created. If you want a Windows-specific example that uses wglCreateContextAttribsARB, take a look at the Khronos example here: https://www.khronos.org/opengl/wiki/Tutorial:_OpenGL_3.1_The_First_Triangle_(C++/Win)

    Specifically, let's focus on this part:

    bool CGLRenderer::CreateGLContext(CDC* pDC)
    {
        PIXELFORMATDESCRIPTOR pfd;
        memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
        pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
        pfd.nVersion = 1;
        pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        pfd.cDepthBits = 32;
        pfd.iLayerType = PFD_MAIN_PLANE;

        int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
        if (nPixelFormat == 0) return false;

        BOOL bResult = SetPixelFormat(pDC->m_hDC, nPixelFormat, &pfd);
        if (!bResult) return false;

        HGLRC tempContext = wglCreateContext(pDC->m_hDC);
        wglMakeCurrent(pDC->m_hDC, tempContext);

        GLenum err = glewInit();
        if (GLEW_OK != err)
        {
            AfxMessageBox(_T("GLEW is not initialized!"));
        }

        int attribs[] =
        {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
            WGL_CONTEXT_MINOR_VERSION_ARB, 1,
            WGL_CONTEXT_FLAGS_ARB, 0,
            0
        };

        if (wglewIsSupported("WGL_ARB_create_context") == 1)
        {
            m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
            wglMakeCurrent(NULL, NULL);
            wglDeleteContext(tempContext);
            wglMakeCurrent(pDC->m_hDC, m_hrc);
        }
        else
        {
            // It's not possible to make a GL 3.x context. Use the old style context (GL 2.1 and before)
            m_hrc = tempContext;
        }

        // Checking GL version
        const GLubyte *GLVersionString = glGetString(GL_VERSION);

        // Or better yet, use the GL3 way to get the version number
        int OpenGLVersion[2];
        glGetIntegerv(GL_MAJOR_VERSION, &OpenGLVersion[0]);
        glGetIntegerv(GL_MINOR_VERSION, &OpenGLVersion[1]);

        if (!m_hrc) return false;
        return true;
    }

    You still create the legacy rendering context first; next you fill out an attribute list that tells the driver what kind of core context you want, pass it into wglCreateContextAttribsARB(), and go from there. That article explains the initialization part well enough, I guess, as well as how to render primitives using the proper methods. Hope that helps. Shogun
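    And since SDL 2.0 came up above, requesting a core profile there is cross-platform and much shorter. A sketch of mine using real SDL2 calls (window title, size, and the 3.2 version numbers are arbitrary choices):

    #include <SDL.h>

    int main( int argc, char** argv )
    {
        SDL_Init( SDL_INIT_VIDEO );

        /* Request a core profile before creating the window and context. */
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 2 );

        SDL_Window* window = SDL_CreateWindow( "GL3 core", SDL_WINDOWPOS_CENTERED,
            SDL_WINDOWPOS_CENTERED, 640, 480, SDL_WINDOW_OPENGL );
        SDL_GLContext context = SDL_GL_CreateContext( window );

        /* ... glewInit() and rendering would go here ... */

        SDL_GL_DeleteContext( context );
        SDL_DestroyWindow( window );
        SDL_Quit();
        return 0;
    }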
  10. blueshogun96

    Grown out of playing games

    Yeah, the first Panzer Dragoon was great and had really high replay value. Sadly, I can't say the same about Crimson Dragon. I was quite disappointed, as the controls were quite frustrating and it doesn't live up to the hype, but of course that's my opinion. Panzer Dragoon Orta definitely played better, if you ask me. Instead of taking my word for it, though, I'd rather you play it for yourself, because you might actually enjoy it. Shogun
  11. blueshogun96

    Grown out of playing games

    If you're still in the dev space, then I still recommend playing a few modern games here and there. That way you can keep up with current standards, get a few ideas of your own, and stay up to date on your competition. As much as I hated doing video game testing, I learned a lot about raising my own standards for writing a good/better game. Even though this era of gaming doesn't interest me nearly as much as the stretch from the launch of the Atari 2600 to the end of the original Xbox, I still find it necessary to see and experience what other companies are up to. Shogun EDIT: Also, like Mr. Hodgman, my tastes have changed since 10+ years ago. My steady decline in FPS games, and my rising interest in story-based games like Syberia and Broken Sword as well as bullet hell and rail shooters such as Panzer Dragoon, account for my recent lack of interest in this era of gaming, since those types of games are made by a select few. I also want my 2D beat 'em up games like Streets of Rage back!
  12. blueshogun96

    OpenGL API and structures

    Why OpenGL 1.x does not contain structures is not a question I have a direct answer for, but my assumption is that it was designed that way so driver developers could keep the most complex stuff internal and private in their driver code. In case you don't know it already, let's take a brief look at why OpenGL came about in the first place. Before OpenGL was even thought of, there was an API called PHIGS back in the 1980s. From what I've read, the main issue with PHIGS was that it ultimately didn't give developers what they needed in many instances. So SGI created Iris GL, which eventually became the basis for OpenGL in January 1992. Unlike PHIGS, OpenGL had a simplified state machine and supported an "immediate mode" rendering component (see the small fragment below for what that looks like). AFAIK, simplicity was the overall goal, along with having a standard that graphics hardware could support in software or hardware across the board, with as few setbacks as possible. Prior to what we have today, programming graphics hardware was quite a task, and all sorts of structs were everywhere. If you look at how NVIDIA's graphics registers were laid out and accessed back in the 1990s (hello NV1 and Riva 128), you'll see that each channel is just a series of structs. OpenGL was meant to simplify graphics programming greatly. Keep in mind that it was not originally designed for games, but for CAD, 3D simulation, and so forth. Not that it really matters though... This is just my two cents. If I'm wrong about any of this, someone feel free to correct me. Shogun
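    To make the "immediate mode" point concrete, this is the kind of code that component enabled: plain function calls feeding a state machine, no structs in sight. A tiny legacy-GL fragment of my own for contrast (assumes a current compatibility context):

    #include <GL/gl.h>

    /* Legacy immediate mode: each call pushes state or a vertex directly. */
    static void draw_triangle_immediate( void )
    {
        glBegin( GL_TRIANGLES );
            glColor3f( 1.0f, 1.0f, 0.0f );
            glVertex3f( -1.0f, -1.0f, 0.0f );
            glVertex3f(  1.0f, -1.0f, 0.0f );
            glVertex3f(  0.0f,  1.0f, 0.0f );
        glEnd();
    }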
  13. blueshogun96

    How to stay motivated?

    Wow, I didn't realize this thread was still active! In case you're all wondering: have things gotten better since then? Actually, no, things have gotten MUCH worse. Do I feel the same? Honestly, no. In fact, I feel more motivated now. I've quit binge drinking too! Right now I'm living with mom and pops for a while, since I can't afford my place anymore and I'm still broke as a joke. But at least I have a part-time job doing game testing for Win10/Xbox game certification. I'll likely be doing this until I can get something better. Since I'm far away from the job now, I have to commute about 4-5 hours a day by bus, but I scraped up enough money to get a used Surface Pro to work on my game's UWP port for Dream.Build.Play. I'm going to submit my game to this contest before the end of the year and hopefully win some money or exposure. You just have to:

    • Stop complaining. There's always someone who has it worse and deals with a greater set of challenges than I do. I mean really, I have Microsoft contacts, id@xbox access, the business card of a Sony publisher, and more. Plus Josh said stop.
    • Keep on keeping on. Leverage your advantages; build smart solutions to overcome your disadvantages.
    • Stop drinking! Killing your brain cells and trashing your liver isn't going to help.

    So even though I've had no breakthroughs and things have gotten far worse, I feel more motivated. Ever hear the saying "I'm sick and tired of being sick and tired"? Well, I'm sick and tired of saying that I'm sick and tired of being sick and tired. That's enough; let's just move forward! Shogun
  14. https://developer.microsoft.com/en-us/windows/projects/campaigns/windows-developer-day?utm_campaign=windevdaycu Not sure how many of you would care about this, but today is Windows Developer Day, and Microsoft has been running a live stream. So far, it's been pretty interesting, for both games and non-game apps. It's mostly about UWP (which everyone seems to hate), but I'm taking advantage of UWP for my game. One bit of good news is that (IIRC) Microsoft will allow UWP games to access the full GPU and other resources. Curious what you all think of this, and whether you share the opinion that Microsoft is practically *begging* developers to support UWP at this point. Shogun
  15. blueshogun96

    How to fix this timestep once and for all?

    Wow, didn't realize I had more responses to the thread... Anyway, I've fixed it "forrealzees" this time. Using the roxlu portable nanosecond timer in place of my millisecond one, and converting the numerator from 1000 milliseconds to the appropriate number (1000*1000*1000 nanoseconds), it appears to work fine this time (rough sketch below). Even without vsync, it ran nicely at 120+ fps. It turned out to be a combination of a low-resolution timer and my own spawning code that was causing some entities to spawn yet rapidly disappear! Since it happens in the blink of an eye, it was a rather hard bug to catch until today. So far, no more spawning issues! Now to try it on my desktop Mac and PC, as well as mobile devices (if only I had one). All of my monitors are 60Hz only. Shogun
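    For the curious, the loop I'm describing looks roughly like this, with std::chrono standing in for the roxlu timer (my sketch, not the game's actual code):

    #include <chrono>
    #include <cstdint>

    int main()
    {
        using clock = std::chrono::steady_clock;

        /* The "numerator" is now 1000*1000*1000 nanoseconds, not 1000 ms. */
        const int64_t step_ns = ( 1000LL * 1000LL * 1000LL ) / 60;  /* 60 updates/sec */

        auto prev = clock::now();
        int64_t accumulator = 0;

        for( int frame = 0; frame < 3; ++frame )  /* a few iterations for the sketch */
        {
            auto now = clock::now();
            accumulator += std::chrono::duration_cast<std::chrono::nanoseconds>( now - prev ).count();
            prev = now;

            while( accumulator >= step_ns )
            {
                /* update( step_ns ): fixed-rate spawn/despawn logic runs here */
                accumulator -= step_ns;
            }
            /* render(); */
        }
        return 0;
    }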