
Shnoutz

Members
  • Content count

    198
  • Joined

  • Last visited

Community Reputation

356 Neutral

About Shnoutz

  • Rank
    Member

Personal Information

  • Location
    Montreal
  1. Thanks, interesting stuff :)
  2. I think the first step would be to define the components of the language as generically as possible. What are the most basic operations performed in a frame? Clear, copy, dispatch, draw (and present, maybe). These operations have inputs and outputs, so figuring out dependencies is quite easy.

     Granted, the shader code is required to figure out if a texture is used as a UAV or as an SRV, but really, adding a pseudo shading language is not too scary; I've done something similar in the past (maybe not the first thing I'll do ;) ). Still, that would not be enough to know if, for example, a buffer was fully or partially written to, but there might be a way to express subresource regions.

     I'm starting to think that the basic operations are like instructions and the shader code is like micro-instructions. With "instructions" and dependencies we can do lots of cool stuff like re-ordering and eliminating duplication.

     I have started to work on a small prototype that takes code as input and spits out a graph with dependencies as output... I think it's the first step. But it's easy to imagine where that could lead... Automatic barriers, operation reordering, automatic async compute (given a description of the hardware queues), automatic volatile resource allocation/aliasing, descriptor management.

     That's a job for a full team of professionals, and for quite a while, but I'll try just for fun.
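     To make the "operations with inputs and outputs" idea concrete, here is a minimal C++ sketch of how such a prototype might represent frame operations and derive dependencies. The FrameOp type and buildDependencies function are hypothetical illustrations, not part of any existing API.

     #include <algorithm>
     #include <cstdint>
     #include <string>
     #include <vector>

     // One "instruction" of the frame: a clear, copy, dispatch or draw,
     // described only by the resources it reads and writes.
     struct FrameOp
     {
         std::string name;                // e.g. "ShadowPass", "Tonemap"
         std::vector<uint32_t> reads;     // resource ids consumed (SRV-like usage)
         std::vector<uint32_t> writes;    // resource ids produced (RTV/UAV-like usage)
     };

     // edges[i] lists the ops that op i depends on: for each input, the most
     // recent earlier op that wrote it.
     std::vector<std::vector<size_t>> buildDependencies(const std::vector<FrameOp>& ops)
     {
         std::vector<std::vector<size_t>> edges(ops.size());
         for(size_t i = 0; i < ops.size(); ++i)
         {
             for(uint32_t resource : ops[i].reads)
             {
                 for(size_t j = i; j-- > 0; ) // walk backwards to the last writer
                 {
                     const auto& w = ops[j].writes;
                     if(std::find(w.begin(), w.end(), resource) != w.end())
                     {
                         edges[i].push_back(j);
                         break;
                     }
                 }
             }
         }
         return edges;
     }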
  3. I was thinking of a programming language because it is essentially a compact way to express a tree of resources and operations. I also like the idea of generating code rather than evaluating a tree-like structure at run-time. (I do convert a lot of my code from a script-like language to C++.) Too bad I won't be attending GDC this year; I would have loved to see that presentation.
  4. Hi,

     I am a professional graphics programmer and I create scripting/programming languages as a hobby. I got this idea that I wanted to share with you.

     I am reading and learning about low level graphics APIs and the reason why they exist. In DirectX 11/OpenGL, a lot of the GPU work, like resource barriers for example, is hidden and executed by the driver. Now, because the driver doesn't know what your frame looks like, it has to assume the worst-case scenario and execute more barriers than may be required. (I think DX11 drivers now are quite clever and do prediction to reduce that problem, but you get my point.)

     DX12/Vulkan somewhat solve this issue by letting the programmer decide where to execute the barriers, exposing them as an API concept. That is a major plus, but it is very error prone and, if not done correctly, can lead to major performance issues.

     Now this got me thinking... What if we created a programming language that allowed you to define explicitly what a full frame looks like: the steps and the resources involved in those steps? We could then look at these steps and figure out exactly where to put the barriers, and re-order the steps for optimal performance. We could also look at the dependencies between steps and probably figure out a way to automatically dispatch the work on different queues (copy/DMA, compute & graphics).

     I have the feeling that with the new low level APIs this door is now open. Static analysis and optimization (of full frames)... Something every compiler does for CPU code. Why not GPU code?

     Any thoughts on that?

     Gab.
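     As a rough illustration of the "figure out exactly where to put the barriers" step, here is a minimal C++/D3D12 sketch that walks a declared list of passes and emits a transition barrier whenever a resource's tracked state differs from the state the next pass declares. The ResourceUsage and PassDesc types are hypothetical; only the D3D12 barrier structures are real.

     #include <d3d12.h>
     #include <unordered_map>
     #include <vector>

     // Hypothetical declaration of how one pass uses one resource.
     struct ResourceUsage
     {
         ID3D12Resource*       resource;
         D3D12_RESOURCE_STATES requiredState; // e.g. D3D12_RESOURCE_STATE_RENDER_TARGET
     };

     struct PassDesc
     {
         std::vector<ResourceUsage> usages;
     };

     // For each pass, compute the barriers that must be recorded before it,
     // based on the state each resource was left in by earlier passes.
     std::vector<std::vector<D3D12_RESOURCE_BARRIER>> computeBarriers(
         const std::vector<PassDesc>& passes,
         std::unordered_map<ID3D12Resource*, D3D12_RESOURCE_STATES> currentState)
     {
         std::vector<std::vector<D3D12_RESOURCE_BARRIER>> result(passes.size());
         for(size_t i = 0; i < passes.size(); ++i)
         {
             for(const ResourceUsage& u : passes[i].usages)
             {
                 D3D12_RESOURCE_STATES& state = currentState[u.resource]; // defaults to COMMON (0)
                 if(state != u.requiredState)
                 {
                     D3D12_RESOURCE_BARRIER barrier = {};
                     barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
                     barrier.Transition.pResource   = u.resource;
                     barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
                     barrier.Transition.StateBefore = state;
                     barrier.Transition.StateAfter  = u.requiredState;
                     result[i].push_back(barrier);
                     state = u.requiredState;
                 }
             }
         }
         return result;
     }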
  5. I have a question about that.

     I used to sort my passes (in a tree-like structure) by root signature, then by pipeline state, then by resources, all in one big command list.

     Since then, I started looking at async compute to do culling of a pass while another is rendering.

     Because fences are not part of the command list interface but rather of the queue, I had to break my big command list into smaller ones (one per pass). Root signatures are undefined at the beginning of a command list, so I have to re-set them for each pass' command list (even if I know they are the same for many consecutive passes). Is there still a point to sorting by root signature in this context? Or should I simply sort by pipeline state + root signature, as Hodgman seems to suggest?
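     For reference, here is a minimal sketch of the kind of ordering being discussed: draws sorted by root signature first and pipeline state second, with each per-pass command list re-setting the root signature once at the start. The Draw struct and the recording helper are illustrative assumptions, not anyone's actual engine code.

     #include <d3d12.h>
     #include <algorithm>
     #include <vector>

     struct Draw
     {
         ID3D12RootSignature* rootSignature;
         ID3D12PipelineState* pipelineState;
         // ... vertex buffers, descriptor tables, etc.
     };

     // Sort by root signature first, then by PSO, to minimize state changes
     // inside a single command list.
     void sortDraws(std::vector<Draw>& draws)
     {
         std::sort(draws.begin(), draws.end(), [](const Draw& a, const Draw& b)
         {
             if(a.rootSignature != b.rootSignature)
                 return a.rootSignature < b.rootSignature;
             return a.pipelineState < b.pipelineState;
         });
     }

     // Each pass records into its own command list, so the root signature has to
     // be set again at the start even when it matches the previous pass.
     void recordPass(ID3D12GraphicsCommandList* cmdList, const std::vector<Draw>& draws)
     {
         ID3D12RootSignature* currentRoot = nullptr;
         ID3D12PipelineState* currentPso  = nullptr;
         for(const Draw& d : draws)
         {
             if(d.rootSignature != currentRoot)
             {
                 cmdList->SetGraphicsRootSignature(d.rootSignature);
                 currentRoot = d.rootSignature;
             }
             if(d.pipelineState != currentPso)
             {
                 cmdList->SetPipelineState(d.pipelineState);
                 currentPso = d.pipelineState;
             }
             // cmdList->DrawIndexedInstanced(...);
         }
     }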
  6. Hi!

     I can kind of guess what D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER is for, but what is the use of D3D12_FENCE_FLAG_SHARED?

     Is it used for fences shared across different queues?

     Thanks.
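     For context, here is a minimal sketch of how a fence created with D3D12_FENCE_FLAG_SHARED is commonly shared through a handle (for example with another device or process). Whether that is the intended distinction from the cross-adapter flag is an assumption here, not a confirmed answer, and the helper function is purely illustrative.

     #include <windows.h>
     #include <d3d12.h>
     #include <wrl/client.h>
     using Microsoft::WRL::ComPtr;

     // Create a fence that can be opened elsewhere through a shared handle.
     HRESULT createSharedFence(ID3D12Device* device, ComPtr<ID3D12Fence>& fence, HANDLE& sharedHandle)
     {
         HRESULT hr = device->CreateFence(0, D3D12_FENCE_FLAG_SHARED, IID_PPV_ARGS(&fence));
         if(FAILED(hr))
             return hr;

         // The resulting handle can be passed to OpenSharedHandle on another device.
         return device->CreateSharedHandle(fence.Get(), nullptr, GENERIC_ALL, nullptr, &sharedHandle);
     }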
  7. Thank you for looking into this :)
  8. :wacko: I am kinda sure the code for the swap chain is fine. It works on AMD & WARP without any validation error/warning, and it's also very straightforward. I was assuming GPU validation would not completely crash the application if there was something wrong. The message I get before the crash seems wrong: it basically complains that a resource is not in the right state, but asks for the resource to be in that exact state (present 0x0). Disabling GPU validation works; the app behaves exactly as expected. I am not excluding the possibility that I messed up, but GPU validation is a relatively new feature and I have the feeling my experience with it was worth reporting.
  9. Yeah, I have been lazy and using the first adapter; I will take some time and enumerate the adapters...

     Back to GPU validation, I get this message on my laptop and I am not sure what it means (at the "Present" call):

     IGIESW ***.exe found in whitelist: NO
     IGIWHW Game ***.exe found in whitelist: NO
     D3D12 ERROR: GPU-BASED VALIDATION: Present, Back Buffer state invalid, Incompatible resource state: Resource: 0x0000025FF8FD0A50:'swapchain buffer', Subresource Index: [0], Resource State: D3D12_RESOURCE_STATE_[COMMON|PRESENT](0x0), Required State Bits: D3D12_RESOURCE_STATE_[COMMON|PRESENT](0x0), Draw Count [0], Dispatch Count [0], Command List: <deleted>, Resources used in COPY command lists must start out in the D3D12_RESOURCE_STATE_COMMON state.  This includes Resources created in a COPY_SOURCE or COPY_DEST state.  [ EXECUTION ERROR #942: GPU_BASED_VALIDATION_INCOMPATIBLE_RESOURCE_STATE]

     If I understand correctly, 'swapchain buffer' is in the state D3D12_RESOURCE_STATE_[COMMON|PRESENT](0x0) but should be in the state D3D12_RESOURCE_STATE_[COMMON|PRESENT](0x0)  o.O ??
  10. Good point. I'll check when I get off work; it tends to revert back to the Intel GPU after a driver update.
  11. Just a quick question: is it possible that a Windows update changes the feature level and resource binding tier reported for a given GPU? I have a GTX 980M, and since the last Windows update it reports feature level 11.1 and resource binding tier 1. I am sure this was at least feature level 12 and resource binding tier 2.

     My app is not working anymore on my laptop :(
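     As a way to double-check what the device actually reports, here is a minimal sketch that queries the maximum supported feature level and the resource binding tier through ID3D12Device::CheckFeatureSupport; the surrounding helper function is just an illustration.

     #include <d3d12.h>
     #include <cstdio>

     void printDeviceCaps(ID3D12Device* device)
     {
         // Query the highest feature level the driver reports for this device.
         const D3D_FEATURE_LEVEL requested[] =
         {
             D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
             D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0
         };
         D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
         levels.NumFeatureLevels        = static_cast<UINT>(sizeof(requested) / sizeof(requested[0]));
         levels.pFeatureLevelsRequested = requested;
         if(SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels))))
             printf("Max feature level: 0x%x\n", (unsigned)levels.MaxSupportedFeatureLevel);

         // Query the resource binding tier (1, 2 or 3).
         D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
         if(SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
             printf("Resource binding tier: %d\n", (int)options.ResourceBindingTier);
     }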
  12. In my case it was a missing Windows update.

     Now, GPU-based validation works like a charm on my desktop with an AMD RX 480 but crashes on my notebook with a GTX 980M. (The crash happens when I try to create a descriptor heap.)
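     For reference, descriptor heap creation is just the standard call sketched below; the heap type, size and flags here are illustrative, not the exact values used in the post above.

     #include <d3d12.h>
     #include <wrl/client.h>
     using Microsoft::WRL::ComPtr;

     // Typical descriptor heap creation; the crash under GPU-based validation
     // was reported around a call like this.
     ComPtr<ID3D12DescriptorHeap> createCbvSrvUavHeap(ID3D12Device* device, UINT numDescriptors)
     {
         D3D12_DESCRIPTOR_HEAP_DESC desc = {};
         desc.Type           = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
         desc.NumDescriptors = numDescriptors;
         desc.Flags          = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

         ComPtr<ID3D12DescriptorHeap> heap;
         if(FAILED(device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap))))
             throw "Failed to create descriptor heap";
         return heap;
     }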
  13. Hi, I just updated to the new SDK (10.0.14393.33), and if I activate the D3D12 debug layer, D3D12CreateDevice simply fails.

     // Enable the D3D12 debug layer
     ComPtr< ID3D12Debug > debugController;
     if(SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController))))
     {
         debugController->EnableDebugLayer();
         //ComPtr< ID3D12Debug1 > debugController1;
         //if(FAILED(debugController->QueryInterface(IID_PPV_ARGS(&debugController1))))
         //    throw "Failed to retrieve debug controller";
         //debugController1->SetEnableGPUBasedValidation(true);
     }

     // Get adapter
     ComPtr< IDXGIAdapter1 > dxgiAdapter;
     if(FAILED(dxgiFactory->EnumAdapters1(0, &dxgiAdapter)))
         throw "Failed to retrieve default adapter";
     if(FAILED(dxgiAdapter.As(&m_adapter)))
         throw "Failed to retrieve adapter";

     // Create DirectX12 device
     if(FAILED(D3D12CreateDevice(m_adapter.Get(), D3D_FEATURE_LEVEL_12_1, IID_PPV_ARGS(&m_device))))
     if(FAILED(D3D12CreateDevice(m_adapter.Get(), D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&m_device))))
     if(FAILED(D3D12CreateDevice(m_adapter.Get(), D3D_FEATURE_LEVEL_11_1, IID_PPV_ARGS(&m_device))))
     if(FAILED(D3D12CreateDevice(m_adapter.Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&m_device))))
         throw "Failed to create DirectX12 device";

     That used to work with 10.0.10586.0 ...
  14. Hello,

     I wonder if what I am doing is dangerous/non-portable...

     In my program, all my texture descriptors are texture array descriptors (most of the time they are arrays of only one slice). In shaders, I do not always use Texture2DArray but simply use Texture2D, assuming that the first slice is used.

     I did not see any warnings, glitches or problems, but I know that usually means nothing and that kind of stuff needs to be verified.

     So... Is it bad? Like crossing the streams?

     Thanks!
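     To make the setup described above concrete, here is a minimal sketch of the C++ side: a Texture2DArray SRV with a single slice, which the shader may then declare as a plain Texture2D. The format, mip count and helper name are illustrative assumptions.

     #include <d3d12.h>

     // Describe a one-slice texture array SRV.
     void createSingleSliceArraySrv(ID3D12Device* device, ID3D12Resource* texture,
                                    D3D12_CPU_DESCRIPTOR_HANDLE destHandle)
     {
         D3D12_SHADER_RESOURCE_VIEW_DESC desc = {};
         desc.Format                  = DXGI_FORMAT_R8G8B8A8_UNORM;
         desc.ViewDimension           = D3D12_SRV_DIMENSION_TEXTURE2DARRAY;
         desc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
         desc.Texture2DArray.MostDetailedMip = 0;
         desc.Texture2DArray.MipLevels       = 1;
         desc.Texture2DArray.FirstArraySlice = 0;
         desc.Texture2DArray.ArraySize       = 1; // single slice, read in HLSL as if it were a Texture2D

         device->CreateShaderResourceView(texture, &desc, destHandle);
     }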
  15. Hello,

     I have trouble with one of my shaders: it works flawlessly when optimized, but if I compile it with skip-optimization it fails. I tracked down the issue; it seems one of my static const variables (stored in r0 in asm) is being overwritten at some point.

     Worth noting that this happens in debug and release, with or without the WARP device, and to the best of my knowledge it is only related to the compilation process. The problem is clearly visible in the disassembled code.

     The shader is quite long, so I will post snippets of the shader with the corresponding asm. (I can post the whole shader or make a small program to illustrate the issue if needed.)

     static const float g_oneOverSqrt2 = 0.7071067811865475f;
     static const float g_twoOverSqrt2 = 1.414213562373095f;

     ...

     float4 decompressQuaternion(uint compressedQuaternion)
     {
         uint4 uc = compressedQuaternion;
         uc >>= uint4(22, 12, 2, 0);
         uc &= uint4(1023, 1023, 1023, 3);

         float4 fc;
         fc.xyz = (float3)uc.xyz / 1023.0f * g_twoOverSqrt2 - g_oneOverSqrt2;
         fc.w = sqrt(1 - dot(fc.xyz, fc.xyz));

         if(uc.w == 0)
             return fc.wxyz;
         else if(uc.w == 1)
             return fc.xwyz;
         else if(uc.w == 2)
             return fc.xywz;
         else
             return fc.xyzw;
     }

     ...

     const uint componentIndex = DTid.x;
     if(componentIndex < b_parameters.count)
     {
         ...
         float4 rotation = decompressQuaternion(p_data.rotation);
         ...
     }

     The same sections in ASM look like this:

     // Init static const variables
     mov r0.x, l(0.707107)   // NOTE: r0.x <- g_oneOverSqrt2
     // At this point r0.x == g_oneOverSqrt2

     // The asm version of the if(componentIndex < b_parameters.count)
     mov r0.w, vThreadID.x   // r0.w <- componentIndex
     ult r1.x, r0.w, CB0[0][0].w
     if_nz r1.x
     mov r0.x, CB0[0][0].w   // r0.x <- b_parameters.count
     // Just above, r0.x is overwritten by CB0[0][0].w

     // The asm version of decompressQuaternion,
     // note the use of the r0.x
     mov r7.xyzw, r7.xyzw
     mov r8.xyz, l(22,12,2,0)
     ushr r7.xyz, r7.xyzx, r8.xyzx
     mov r8.xyzw, l(1023,1023,1023,3)
     and r7.xyzw, r7.xyzw, r8.xyzw
     utof r7.xyz, r7.xyzx
     div r7.xyz, r7.xyzx, l(1023.000000, 1023.000000, 1023.000000, 0.000000)
     mul r7.xyz, r0.yyyy, r7.xyzx
     mov r8.xyz, -r0.xxxx   // NOTE: use of r0.x
     add r7.xyz, r7.xyzx, r8.xyzx
     itof r0.x, l(1)
     dp3 r0.y, r7.xyzx, r7.xyzx
     mov r0.y, -r0.y
     add r0.x, r0.y, r0.x
     sqrt r8.w, r0.x
     if_z r7.w
       mov r8.x, r8.w
       mov r8.yzw, r7.xxyz
     else
       mov r0.x, l(1)
       ieq r0.x, r0.x, r7.w
       if_nz r0.x
         mov r8.y, r8.w
         mov r8.zw, r7.yyyz
       else
         mov r0.x, l(2)
         ieq r0.x, r0.x, r7.w
         if_nz r0.x
           mov r8.z, r8.w
           mov r8.w, r7.z
         else
           mov r8.z, r7.z
           mov r8.w, r8.w
         endif
         mov r8.y, r7.y
       endif
       mov r8.x, r7.x
     endif

     I have avoided the issue by using literals in the shader code for oneOverSqrt2 and twoOverSqrt2, but I tend to use static const variables to share code between C++ and HLSL, and using macros is messing up my design.

     Has anyone else had trouble with static const variables in unoptimized shaders?

     Cheers!