
Tessellator

Members
  • Content count

    280
  • Joined

  • Last visited

Community Reputation

1394 Excellent

About Tessellator

  • Rank
    Member
  1. Where have the DirectX/OpenGL specific forums gone?
  2. An alternative approach (with different positives and negatives) is to use an immediate-mode GUI (IMGUI) library such as this one: https://github.com/ocornut/imgui Here are a couple of posts that cover both approaches: Pro Qt: https://deplinenoise.wordpress.com/2017/03/05/why-qt-and-not-imgui/ Pro IMGUI: https://gist.github.com/bkaradzic/853fd21a15542e0ec96f7268150f1b62 FWIW I really like the IMGUI approach since there's so little friction to creating new bits of UI. However, I use it for my own stuff, so I'd potentially have to re-evaluate if I was making a tool that was to be worked on/in by a larger team of coders. That said, Unity also uses an IMGUI approach. T
  3. If I understand correctly, I'd tackle this in 2 parts: 1) Store colours into a set so that each colour is stored only once. Each vertex gets an index into the set for its colour. 2) Add the set of colours to a texture, converting the vertex's colour index into a texture coordinate. For (1) I'd create a simple hash function that allows you to bin the colours together to reduce compares, or use some other mechanism to create the set. For (2), that's a case of writing the colours as pixels to a texture of a given dimension, and then storing the resulting location as a UV.   Sorry my answer is brief, my lunch break just ran out!   T
  4. Separating out classes based on API, like D3dSkybox and D3dLight, is something I'd avoid. For example, what does D3dLight have that CLight doesn't? It's a structure that has a position/orientation, colour, type etc. In terms of interfacing with D3D, all you'll be doing is setting constants or some other buffer type - that can be abstracted efficiently at a much lower level. Abstracting buffers and entire blobs of data (buffer + state) makes porting to other platforms much easier too (and fits well with modern rendering APIs). T
  5. Most of the time I use reasonably opaque vertex data as Dingleberry suggests. Any time I need to do operations on the mesh, though, I tend to split it up into several streams of data, e.g. positions, UVs, tangents etc. Alternatively, I also have functions that take e.g. a Vec3* to operate on positions and a stride value that defines the byte size between each vertex. So to step between vertices you can do pPositions = (Vec3*)((char*)pPositions + stride).
  6. I use handles for pretty much every resource that needs a direct link - I have different ways of managing them, but most of them are 32-bit and store an index to an array of resources, along with 8 bits or so reserved for a "magic" ID so I can track resource lifetime (i.e. whether the resource is alive or freed). I think this might have been the first thing I read about it years ago: http://scottbilas.com/publications/gem-resmgr/. It's evolved since then but the premise is the same.   I tend to use hashes when serialising if one thing needs to look up another. I've started to use a unique string index that's created at build time along with a dictionary (i.e. list) of all strings used. That can be used for compares + easy string lookup for tools and debugging, and saves having to keep the hash + string pointer around. However, it requires some additional management that I'm not sure is worth the pay-off yet.   T
  7. I'm trying out something similar to this currently: http://casual-effects.blogspot.co.uk/2014/04/fast-terrain-rendering-with-continuous.html It's nice and easy to setup but haven't finished the implementation of materials etc so can't say how good it'll be. Initial perf seems plenty fast enough.   T
  8. If you pass in the debug flag when creating the device, do you get any warning/error messages?   I remember a bug from years ago when debugging a D3D9 game: not closing down correctly would occasionally leave the device in an unhappy state, causing it to fail during subsequent creation calls. I've not seen anything like that since, though.   T
  9. @ingramb - yes you're absolutely right of course. I've been trying to avoid a z-prepass by making use of instancing and bindless/texture arrays so that I can sort front to back as much as possible and avoid overdraw (via lots of small mesh pieces too), although I have yet to test if that's enough to skip it altogether in my case.
  10. One thing that prevents me from jumping entirely into Forward+/clustered is screen space lighting stuff (I originally had decals on that list but MJP nicely showed that clustering those works well too!). While it's perfectly possible to apply screen space AO and reflection after a forward rendering pass, you need to worry about grabbing explicit samples and also splitting ambient/direct light (or storing a ratio perhaps) so that SSAO gets applied correctly. Neither are deal breakers but it's another thing to consider. On the flipside, something that nudges us away from SSAO is maybe no bad thing ;).
  11. OpenGL

    I also do what Hodgman does :)
  11. There are a few things you can implement to handle large scenes:

      1) View frustum culling (as mentioned by vinterberg). With this you avoid rendering anything outside of the view frustum, as a simple way to keep draw calls down. You typically do this by having objects bounded by a simple shape (e.g. an axis-aligned bounding box or a bounding sphere), which is then intersected with the camera frustum. To improve this, a hierarchy or grid is often used, e.g. you first check if the bounding shape of an entire block of buildings is visible, and if so, then check the individual bounding volumes within. Look into grids, nested grids, quadtrees and octrees as structures often used to accelerate this process.

      2) Level-of-detail (LOD) systems: instead of drawing a full-detail version of an object at all distances, you prepare simpler versions of the object which are used when it is far enough from the camera that the extra detail wouldn't really be visible. You can also stop drawing small or thin items at a distance, e.g. if you look all the way down a street, the rubbish bins needn't be rendered after a few hundred metres. For very large scenes, people will sometimes use billboards or impostors (essentially quads that face the camera, very similar to particles) as the lowest level of detail. Generating the LOD models is often a manual process, but it can be automated in some cases.

      3) Occlusion culling: being inside a city, you can make use of the fact that buildings often obscure lots of the structures behind them. There are a number of techniques to do this: i) you can break the level into parts and precompute which parts of a level are visible from each area (search for precomputed visibility sets) - this technique is fairly old-fashioned as it requires quite a bit of precomputation and assumes a mostly static scene, but it can still be handy today in huge scenarios or on lower-spec platforms; ii) GPU occlusion queries - this is where you render the scene (ideally a much simpler version of it) and then use the GPU to determine which parts you actually need to render - googling should provide you with lots of info, including the gotchas with this approach; iii) CPU-based occlusion culling - this can be done by rasterizing a small version of the scene on the CPU into a tiny z-buffer-like array, which is then used to do occlusion queries much like the GPU version, avoiding the latency of the GPU version at the expense of more CPU cost; iv) a mix of both GPU and CPU approaches where you re-use the previous frame's z-buffer. There are other methods, but I think these are the most common. HTH, T
  13. I went looking for a similar thing when the Sponza got a bit boring, and I quite like the Natural History Museum model from here: http://www.3drender.com/challenges/ IIRC it might require a bit of tidy-up with the materials (but it's been a long time since I grabbed it).
  14. This is really great, thanks for taking time to share this.