Community Reputation

1886 Excellent

About GuyWithBeard

  1. Thanks guys, this is all really good stuff. Currently I am still working on wrapping the APIs (DX11, DX12 and Vulkan) under a common interface. DX11 and Vulkan are now both rendering my GUI, and the next piece of work is to get DX12 to that point. My plan is to rewrite large parts of the high-level renderer to make better use of the GPU, but leave other parts, e.g. the GUI and debug rendering, as-is for now. It would be nice to go the route of allocating larger buffers and offsetting based on the frame, but for now I am using a pool, à la Ryan_001's suggestion, from which I can acquire temporary buffers and command buffers. The buffers are still as small as they used to be; there are just more of them. This is probably not the most performant way, but it gets the job done.

     Regarding the "full stall", I actually had to implement something like that already for shutdown (i.e. you want to wait until all GPU work is done before destroying resources) and for swap chain recreation. In Vulkan this is easy, you can just do:

     void RenderDeviceVulkan::waitUntilDeviceIdle()
     {
         vkDeviceWaitIdle(mDevice);
     }

     However, I am a little confused about how to do the same on DX12. This is what I have come up with, but it has not been tested yet. What do you think?

     void RenderDevice12::waitUntilDeviceIdle()
     {
         // Signal the fence with a new value, then block until the GPU reaches it.
         mCommandQueue->Signal(mFullStallFence.Get(), ++mFullStallFenceValue);
         if (mFullStallFence->GetCompletedValue() < mFullStallFenceValue)
         {
             HANDLE eventHandle = CreateEventEx(nullptr, nullptr, 0, EVENT_ALL_ACCESS);
             mFullStallFence->SetEventOnCompletion(mFullStallFenceValue, eventHandle);
             WaitForSingleObject(eventHandle, INFINITE);
             CloseHandle(eventHandle);
         }
     }

     That would obviously only stall the one queue, but I think that might be enough for now. Is there an easier way to wait until the GPU has finished all work on DX12? Cheers!
  2. Thanks guys! I have now implemented a resource pooling system where resources are returned to the pool on fence release, as per Ryan_001's suggestion. Works great! The modern low-level APIs are great in the sense that they make you a better programmer whether you want it or not. I ported my old GUI rendering system, which I originally wrote for DX11 four years ago, to Vulkan, and I now realize how many hoops the driver has had to jump through to get my GUI on the screen.
  3. I am not sure about any ready-made wrappers. However, my Vulkan backend, which uses GLFW, contains only about 10-15 GLFW function calls. That should get you up and running with a render window and a Vulkan-compatible surface. It seems to me that it would be fairly easy to make a small C# wrapper yourself for only what you need to get started. Seeing as GLFW is a C library, it should be straightforward to import the C functions using DllImport or something similar. You can then add more functions from the GLFW API as needed. That said, I am not sure how easily usable the Vulkan API itself is from C#.
  4. Hi, in older APIs (OpenGL, DX11 etc.) you have been able to access textures, buffers, render targets and so on pretty much like you access CPU resources. You bind a buffer, draw using it, update the buffer with new data, draw that data, and it has all just worked. In the new low-level APIs such as Vulkan and DX12 you no longer have this luxury; instead you have to take into account the fact that the GPU will be using the buffer long after you have called "draw" followed by "submit to queue". Most Vulkan texts I have read suggest having resources created for three frames in a ring buffer, i.e. you have three sets of command pools, command buffers and frame buffers, plus any semaphores and/or fences you need to sync the graphics and present queues. AFAIK it works the same in DX12. With this system you can continue rendering the next frame immediately, and you only have to wait if the GPU cannot keep up and all three frames are still in flight.

     My question is: since there are obviously many more resources you need to keep around "per-frame", how do you structure your code? Do you simply allocate three of everything that might get written to during a frame and then pass around the current frame index whenever you need to access one of those resources? Is there a more elegant way to handle these resources? Also, where do you draw the line of what you need several of? E.g. a vertex buffer that gets updated every frame obviously needs its own instance per frame. But what about a texture that is read from a file into device-local memory? Sounds to me like you only need one of those, but is there some case where you need several? Is there some grand design I am missing? Thanks!
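The "three of everything, indexed by frame" scheme described above can be sketched as a small ring. This is a minimal illustration, not from either API: `FrameRing`, `FrameResources` and `kFramesInFlight` are invented names, and a real per-frame bundle would hold command pools, command buffers and a fence instead of just a fence value.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Hypothetical per-frame resource bundle: everything the CPU may rewrite
// each frame while the GPU may still be reading an older copy.
struct FrameResources {
    uint64_t fenceValue = 0; // value the GPU signals when this frame's work is done
    // In a real renderer: command pool, command buffers, dynamic buffers, ...
};

constexpr std::size_t kFramesInFlight = 3;

class FrameRing {
public:
    FrameResources& current() { return mFrames[mIndex]; }
    std::size_t index() const { return mIndex; }

    // Call once per frame after submitting work; wraps around the ring,
    // so only kFramesInFlight copies of the transient resources exist.
    void advance() { mIndex = (mIndex + 1) % kFramesInFlight; }

private:
    std::array<FrameResources, kFramesInFlight> mFrames{};
    std::size_t mIndex = 0;
};
```

Before reusing `current()` you would wait on its fence value; single-instance resources such as a static texture in device-local memory live outside the ring entirely.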
  5. Thanks, that sounds a lot more efficient!
  6. Hey, I am wondering what the most efficient way is to create a mipmapped texture for use with a texture sampler in Vulkan. Basically I have a bunch of mip levels that I want to tuck into an image with tiling VK_IMAGE_TILING_OPTIMAL, layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL and memory properties VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT. To be able to map host memory and memcpy the pixel data into the image, I have to create a staging image with VK_IMAGE_TILING_LINEAR, right? However, VK_IMAGE_TILING_LINEAR only supports one mip level. So what I do is create a staging image for every mip level, memcpy into those, and then copy mip level 0 of each staging image into the corresponding mip level of the final texture with vkCmdCopyImage. This seems like an awful lot of temporary images and copying just to get the texture put together. Is this the best way to do it? How do you do it? Do you have a pool of staging images or do you create temporaries every time? Cheers!
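For reference, a commonly recommended alternative to per-mip staging images is a single staging buffer with one vkCmdCopyBufferToImage region per mip level; the per-mip offsets are plain arithmetic. A sketch of that layout computation (pure C++, no Vulkan calls; a tightly packed RGBA8 chain at 4 bytes per texel is assumed, and the names are mine):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Number of mip levels for a w x h image: floor(log2(max(w, h))) + 1.
uint32_t mipLevelCount(uint32_t w, uint32_t h) {
    uint32_t levels = 1;
    while (w > 1 || h > 1) {
        w = std::max(1u, w / 2);
        h = std::max(1u, h / 2);
        ++levels;
    }
    return levels;
}

struct MipRegion {
    uint64_t bufferOffset; // where this mip's pixels start in the staging buffer
    uint32_t width, height;
};

// Offsets and extents of a tightly packed RGBA8 mip chain in one staging buffer.
std::vector<MipRegion> mipChainLayout(uint32_t w, uint32_t h) {
    const uint32_t levels = mipLevelCount(w, h);
    std::vector<MipRegion> regions;
    uint64_t offset = 0;
    for (uint32_t level = 0; level < levels; ++level) {
        regions.push_back({offset, w, h});
        offset += uint64_t(w) * h * 4; // 4 bytes per RGBA8 texel
        w = std::max(1u, w / 2);
        h = std::max(1u, h / 2);
    }
    return regions;
}
```

Each MipRegion would map onto one VkBufferImageCopy (bufferOffset, imageSubresource.mipLevel, imageExtent) in a single vkCmdCopyBufferToImage call, so only one host-visible staging allocation is needed.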
  7. Thanks Hodgman for the detailed response! Incidentally I found an interesting video on Vulkan yesterday at: https://www.youtube.com/watch?v=RkXa4RiERu8 Especially the first and last talks are very interesting. Around the four minute mark there is a slide on what Xenko's renderer exposes. Seems like they went with exposing descriptor sets too, but don't really explain why, just "you can't get around it", and that they went with the Vulkan approach of descriptor sets rather than the DX12 approach (which I cannot comment on at all yet). I also very much like the PSO approach even on older hardware so I think I am going to make them first class objects in the front-end too.
  8. I am looking to replace my current DX11-only renderer with something better, and to make it easier to support many graphics APIs I am writing a common "front-end" API for the graphics APIs I want to support. The renderer is layered and looks a bit like this (I assume this is a fairly common way to organize things): First there is the high-level renderer, i.e. the API that the rest of the game communicates with. It contains concepts such as scene graphs, meshes, materials, cameras, lights etc. Below the high-level renderer sits the front-end API for the actual graphics APIs we want to target. This is an API that contains (or can emulate) all important features of the graphics APIs, such as devices, textures, shaders, buffers (vb, ib, cb), pipeline state etc. Below the front-end API are the actual graphics APIs (DX11, DX12, OpenGL, Vulkan, Metal, libGCM etc.). These are loaded in as plugins and can be switched at runtime.

     I started writing the front-end API with the mindset that I would target DX11 first and perhaps add DX12 and Vulkan support later. However, this seems to be a very bad idea, especially since I have the rare opportunity to rewrite my whole renderer without having to worry about shipping a game right now. Most people seem to agree that it is better to write the front-end to look like the modern APIs and then emulate (or in some cases simply ignore) the modern-only features on the older APIs. My question is this: what features of the modern APIs should I expose through the front-end API? I would like to make somewhat good use of DX12 and Vulkan, so consider those the main back-ends for now. In my current version of the API I have already moved all state into a PSO-like object, which will be the only way to set state, even on older APIs.

     However, after looking into DX12/Vulkan a bit more (note that I still only have a few hours' worth of experience with either), it seems that there are other new object types that ideally should be exposed through the front-end, such as command lists, queues, fences, barriers, semaphores, descriptors and descriptor sets, plus various pools. What about these? Anything else? Does it make sense to try to wrap them up as they are, and can DX12's concepts be mapped to Vulkan's concepts, or do I have to abstract some of these into completely new concepts? Thanks for your time!
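The "all state in one PSO-like object" idea mentioned above can be sketched as a plain description struct. Every name here is invented for illustration; it is only meant to show the shape of the approach, in the spirit of DX12's pipeline state objects and Vulkan's VkGraphicsPipelineCreateInfo.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical front-end pipeline state description: all render state is
// baked into one immutable object instead of individual state-setting calls.
enum class CullMode : uint8_t { None, Front, Back };
enum class BlendMode : uint8_t { Opaque, AlphaBlend, Additive };

struct PipelineStateDesc {
    uint32_t  vertexShaderId   = 0;    // handles into a shader registry (invented)
    uint32_t  pixelShaderId    = 0;
    CullMode  cullMode         = CullMode::Back;
    BlendMode blendMode        = BlendMode::Opaque;
    bool      depthTestEnable  = true;
    bool      depthWriteEnable = true;
};

// On DX12/Vulkan such a description maps roughly 1:1 onto a real PSO or
// pipeline; on DX11/GL a backend would decompose it into the equivalent
// individual state objects and calls, possibly with caching.
```

One practical benefit of this split is that the struct is trivially hashable and comparable, so older backends can cache the decomposed state objects keyed by the description.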
  9. Pays the bills: Generalist programmer at a mid-size AAA development studio, focusing on multiplayer at the moment. Never-ending hobby project: Misc small game prototypes that over the years have merged into a fairly mature game engine. Currently working on rewriting the renderer to be platform and API agnostic. Here is a screenshot of the editor from a while back: The game in the screenshot is a prototype I made over the course of two weeks last December. More info here if you are interested: https://www.gamedev.net/topic/684831-jailbreak-prototype-released/
  10. Unity

    You can use Vector3.Slerp to calculate the intermediate look direction of the camera. Just remember that the vector parameters are directions rather than points in space. When you have the intermediate direction you can use Quaternion.SetLookRotation to calculate the rotation of the camera transform. Alternatively, you can add an empty GameObject to the scene, lerp it between the two world positions, and then simply call Transform.LookAt, passing in the object as the target. Do note that this does not necessarily give you a constant rotation speed for the camera transform.
  11. Thanks for the explanation :) However, after reading your and Sean's posts I have seen several posts on the internet about this particular issue, with many suggestions on "how to solve it", e.g. flip the image in memory before passing it to OpenGL, flip it beforehand as part of the asset pipeline, etc. If the texture coordinates are flipped too, I don't see why people would be upset about it. Is there something I am missing here?
  12. Hmm, I don't quite understand what you mean. What is the 'also' referring to in "Because you also..."? If I have a texture loaded into D3D using the [0,0 == left,up] convention, won't loading that texture into OpenGL make it appear upside down?
  13. Cool, thanks guys! Seems like I am roughly on point then.
  14. Hey, I recently started studying OpenGL with the intention of adding an OpenGL 4.x rendering backend to my currently DX11-only engine. Before starting, I want to get a better picture of what "DX is left-handed, OpenGL is right-handed" means in the modern APIs. To my understanding, modern OpenGL has done away with the LookAt() functions and similar functionality (which heavily depended on a certain handedness) in the same way that DX no longer has these things built in. I have built a very simple vector and matrix library that I use with DX11, and it assumes a left-handed coordinate system. I would like to use the same library with the OpenGL rendering backend. Where do I need to do any conversions? Does the modern OpenGL API still assume I am working with right-handed coordinates, or was that only in the old API?

     Some things I do know (correct me if I am wrong):

     - My matrices are row-major and OpenGL wants them column-major, so that's a transpose call. (I am using the FX framework for DX11 and I am going to get rid of that too. I actually think DX11 wants them column-major as well, but the FX framework has done the transpose for me, so that might be one difference that goes away once I ditch the FX framework?)
     - The NDC z-axis goes from 0 to 1 in DX and from -1 to 1 in OpenGL, so I have to take that into account, but that's just a different projection matrix, right?

     Anything else that needs to be done for this to work in OpenGL? For now, I am going to hand-write all shaders as separate HLSL and GLSL versions, i.e. no automatic translation or higher-level language. Is there something that needs to be reversed in the shader code with regard to the handedness of the coordinate space? Cheers!
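The NDC depth-range point above is indeed just a different projection matrix. A sketch of the remap, assuming the row-major, row-vector (v * M) convention from the post; the function names are mine. The GL variant is obtained from the D3D one by folding z' = 2z - w into the z output column:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Row-major 4x4 matrix, row-vector convention: columns are outputs.
using Mat4 = std::array<std::array<float, 4>, 4>;

// Left-handed, D3D-style perspective: view-space z in [n, f] maps to
// NDC z in [0, 1] after the perspective divide (z_clip / w_clip).
Mat4 perspectiveD3D(float fovY, float aspect, float n, float f) {
    const float yScale = 1.0f / std::tan(fovY * 0.5f);
    Mat4 m{};
    m[0][0] = yScale / aspect;
    m[1][1] = yScale;
    m[2][2] = f / (f - n);
    m[2][3] = 1.0f;               // w_clip = view-space z
    m[3][2] = -n * f / (f - n);
    return m;
}

// Remap the z output for OpenGL's [-1, 1] NDC range: z' = 2*z - w,
// i.e. replace the z column by 2*(z column) - (w column).
Mat4 toGLDepthRange(Mat4 m) {
    for (int row = 0; row < 4; ++row)
        m[row][2] = 2.0f * m[row][2] - m[row][3];
    return m;
}
```

With this, the same left-handed view matrices can feed both backends; only the final projection differs (plus the transpose if the GL side wants column-major storage).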
  15. Wow, 100,000 players, really? That's impressive. How many of them will be playing simultaneously? You might have to implement some sort of sharding or similar load-balancing system, and mirroring databases across all the nodes is an art form in itself.

     Anyway, many online games use several different types of databases for storing persistent data. For example, if you have data that naturally fits in a table with the same columns, then a good ol' SQL database is probably the best choice; e.g. PostgreSQL is known to be reliable and performant. For data whose structure varies more, some sort of NoSQL database might be a better option, e.g. MongoDB.

     Also note that you will most likely have lots of logging and auditing data generated by the game, i.e. data that is not strictly speaking required for the game to run but allows you to more easily debug and develop the game. This kind of data is usually stored in a different database than the actual game state data.