
Digitalfragment

Members
  • Content count: 688
  • Joined
  • Last visited

Community Reputation

1504 Excellent

About Digitalfragment

  • Rank
    Advanced Member

Personal Information

  • Location
    Australia
  1. If back-face culling is set, any primitives determined to be back-facing are discarded; similarly, front-face culling discards any primitives determined to be front-facing. This determination is made by looking at the interpolator for SV_POSITION and checking whether the winding order in screen space is clockwise or counter-clockwise. The IsFrontCounterClockwise field indicates whether CCW or CW counts as "front facing". Note that normals are not considered at all for this. So, if your polygons are wound counter-clockwise and you don't want to draw polygons facing away from the camera, set IsFrontCounterClockwise to true and CullMode to Back. If you instead want to skip polygons facing towards the camera and only draw the ones facing away, change CullMode to Front.
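The facing test above can be sketched in plain code: the sign of the triangle's screen-space area tells you CW vs CCW, and the CullMode / IsFrontCounterClockwise pair decides who survives. This is an illustrative model of the rasterizer, not the D3D11 API itself (note that with y pointing down in actual screen space the handedness flips).

```cpp
#include <cassert>

struct Float2 { float x, y; };

// Twice the signed area of triangle (a, b, c). With +y up, a positive
// result means the vertices wind counter-clockwise; negative means clockwise.
float SignedArea(Float2 a, Float2 b, Float2 c) {
    return (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
}

enum class CullMode { None, Front, Back };

// Returns true if the triangle survives culling, mirroring the behaviour of
// D3D11_RASTERIZER_DESC's CullMode and FrontCounterClockwise fields.
bool PassesCull(Float2 a, Float2 b, Float2 c,
                CullMode cull, bool frontCounterClockwise) {
    const bool ccw = SignedArea(a, b, c) > 0.0f;
    const bool frontFacing = (ccw == frontCounterClockwise);
    if (cull == CullMode::Back)  return frontFacing;   // discard back faces
    if (cull == CullMode::Front) return !frontFacing;  // discard front faces
    return true;                                       // CullMode::None
}
```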
  2. OpenGL

      https://msdn.microsoft.com/en-us/library/windows/desktop/bb509700(v=vs.85).aspx You declared the TextureCube as a single channel. TextureCube<float4> would be an RGBA float texture (which is also what a plain TextureCube defaults to).
  3. What you described is a typical LRU (Least Recently Used) scheme, and it works pretty well. Make a public GC function that does the cleanup so that you can explicitly run it at known points (e.g. scene/level transitions), and have it automatically call GC when failing to allocate space in an atlas, then allocate from the freed space. If GC fails during an allocation (e.g. the least recently used element was used this frame), allocate a new atlas.
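A minimal sketch of that scheme (the class and names are illustrative, not from any particular engine): entries record the frame they were last touched, GC() evicts anything stale, and Allocate() falls back to GC before reporting that a new atlas page is needed.

```cpp
#include <cassert>
#include <iterator>
#include <string>
#include <unordered_map>

class AtlasCache {
public:
    explicit AtlasCache(int capacity) : capacity_(capacity) {}

    void BeginFrame(unsigned frame) { frame_ = frame; }

    // Touch (or insert) an entry. Returns false only when even GC can't make
    // room - the point at which a real system would allocate a new atlas.
    bool Allocate(const std::string& key) {
        auto it = entries_.find(key);
        if (it != entries_.end()) { it->second = frame_; return true; }
        if ((int)entries_.size() >= capacity_) GC();
        if ((int)entries_.size() >= capacity_) return false;
        entries_[key] = frame_;
        return true;
    }

    // Public GC: evict everything not used this frame. Callable explicitly at
    // level transitions, and implicitly from Allocate() on failure.
    void GC() {
        for (auto it = entries_.begin(); it != entries_.end();)
            it = (it->second < frame_) ? entries_.erase(it) : std::next(it);
    }

    int Size() const { return (int)entries_.size(); }

private:
    int capacity_;
    unsigned frame_ = 0;
    std::unordered_map<std::string, unsigned> entries_; // key -> last-used frame
};
```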
  4. There are a lot of times I'll use either 4 bytes or 4 floats - sometimes bandwidth is more important than precision, sometimes I need HDR values. I've never bothered with an explicit type for colour, and instead use math library types, e.g. float4 / ubyte4n, as there isn't really any colour-specific functionality besides conversions (all of which reside in the DirectXMath library anyway).
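The conversion implied by a ubyte4n-style type is just a per-channel float-to-unorm remap; a sketch (names here are illustrative, not from DirectXMath):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// Convert a [0,1] float channel to an 8-bit unorm value, round-to-nearest,
// clamping out-of-range input first.
uint8_t FloatToUnorm8(float v) {
    v = std::min(1.0f, std::max(0.0f, v));
    return (uint8_t)std::lround(v * 255.0f);
}

// And back: 8-bit unorm to float in [0,1].
float Unorm8ToFloat(uint8_t v) { return v / 255.0f; }
```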
  5. Or go a step smaller and store a quaternion tangent frame instead of a separate normal and tangent. Crytek did a presentation a while ago on this.
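A hedged sketch of the idea: pack an orthonormal tangent/bitangent/normal basis into a single quaternion, then recover the normal by rotating the +Z axis. This assumes a right-handed orthonormal basis and, for brevity, only handles the trace > -1 branch of the matrix-to-quaternion conversion; production code handles all four branches for numerical stability (and typically also encodes handedness in the quaternion's sign).

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };

Vec3 Cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Orthonormal right-handed basis (columns t, b, n) -> quaternion.
Quat BasisToQuat(Vec3 t, Vec3 b, Vec3 n) {
    float w = std::sqrt(1.0f + t.x + b.y + n.z) * 0.5f;
    float s = 0.25f / w;
    return { (b.z - n.y) * s, (n.x - t.z) * s, (t.y - b.x) * s, w };
}

// Rotate v by unit quaternion q: v + 2*u x (u x v + w*v), with u = q.xyz.
// The normal is then Rotate(q, {0,0,1}), the tangent Rotate(q, {1,0,0}).
Vec3 Rotate(Quat q, Vec3 v) {
    Vec3 u  = { q.x, q.y, q.z };
    Vec3 c  = Cross(u, v);
    Vec3 t  = { c.x + q.w * v.x, c.y + q.w * v.y, c.z + q.w * v.z };
    Vec3 c2 = Cross(u, t);
    return { v.x + 2.0f * c2.x, v.y + 2.0f * c2.y, v.z + 2.0f * c2.z };
}
```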
  6. "I'm not sure about that, since you generate from a cubemap which can be sRGB, surely the generated values will be in sRGB if the cubemap is in sRGB, no?" Given that most people do lighting in HDR now, these cubemaps are generally at least 16-bit and typically floating point.
  7. Jittering is taking the raycast vector and modifying it slightly randomly per pixel to help break down any noticeable aliasing in the sampling result. It's a technique also used in SSAO, shadow maps and temporal AA.
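As a sketch, jittering is just a small random perturbation of the direction followed by renormalization, trading structured aliasing for noise (the amount and distribution here are arbitrary illustrative choices):

```cpp
#include <cassert>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

float Length(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Perturb each component of 'dir' by up to +/- amount, then renormalize.
// A real shader would derive the random values from pixel position / frame
// index rather than a CPU-side generator.
Vec3 Jitter(Vec3 dir, float amount, std::mt19937& rng) {
    std::uniform_real_distribution<float> d(-amount, amount);
    Vec3 j = { dir.x + d(rng), dir.y + d(rng), dir.z + d(rng) };
    float len = Length(j);
    return { j.x / len, j.y / len, j.z / len };
}
```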
  8. Generally the SSR would be scaled by AO, which would leave the area under the car dark. edit 1: On the 2nd question, scale the SSR down as the angle between the surface normal and the direction to the camera approaches zero. A lot of modern engines here would blend towards an environment cubemap, the same way they would if the SSR ray never collided with the scene. edit 2: (Just realized hours later what was bugging me about it.) Regarding the initial artifact, you want to fade out the SSR when the sampling point's depth is closer to the camera than the reflection point, rather than comparing the angle between the normal and view vector.
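The depth comparison in edit 2 can be sketched as a simple fade factor (the function and parameter names are illustrative): compare the depth of the current sample along the ray with the depth buffer at that screen position, and fade over a tolerance band instead of cutting off hard.

```cpp
#include <algorithm>
#include <cassert>

// rayDepth:   view-space depth of the current sample along the SSR ray.
// sceneDepth: depth-buffer value at the same screen position.
// fadeBand:   depth range over which the reflection fades out.
// Returns 1 for a solid hit, falling to 0 as scene geometry ends up
// increasingly in front of the ray sample (the occlusion artifact case).
float SsrFade(float rayDepth, float sceneDepth, float fadeBand) {
    float delta = rayDepth - sceneDepth; // > 0: scene is closer than the ray
    return 1.0f - std::min(1.0f, std::max(0.0f, delta / fadeBand));
}
```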
  9. You can always rewrite the consumer of this data so that it can work directly with N visibility lists, then there's no need to merge :D This is it really - if you run a merge operation then you're going to be reading and writing memory that you don't otherwise need to. Unless your arrays are smaller than a cache line, looping over multiple visibility lists isn't going to add any overhead; at least nowhere near the cost of the merge step.
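A minimal sketch of such a consumer: iterate the lists in turn instead of building a merged array, so no intermediate memory is read or written.

```cpp
#include <cassert>
#include <vector>

// Visit every visible object id across all N lists, with no merge pass.
// Each list is streamed through once; the only extra cost over a single
// merged array is the per-list loop setup.
template <typename Fn>
void ForEachVisible(const std::vector<std::vector<int>>& lists, Fn fn) {
    for (const auto& list : lists)
        for (int id : list)
            fn(id);
}
```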
  10. As said above, each node graph can produce a unique shader. It's pretty common to do a pass afterward on all of the final shaders to remove duplicates (i.e. where the only differences in the graph are input constants/textures) and compile only the unique bytecode streams. From there, it isn't any different to the renderer than hand-written shaders. I'd argue slightly against spek's point, as most artists I've worked with use shader graphs specifically with PBR. It's not about having a node graph that modifies the entire shader (lighting/shadow math typically being hand-optimized), but about making tweaks such as how textures are combined or how texture coordinates are generated, without being blocked waiting on a programmer to implement features. Worse yet is when both the programmer's and the artist's time gets wasted as the artist sits with the programmer to constantly tweak and refine hand-written shaders. But YMMV - if you are doing a solo/small project this isn't typically a major concern; when you have 20 artists to 2 graphics engineers it's a different matter of priorities. I've been amazed by some of the creativity non-technical artists have produced with simple graphs. On the flip side, I've also been horrified by the same creativity when it comes down to having to optimize those graphs without breaking the end result. If you do go down the node-system route, it doesn't have to be completely low level - nodes can be pretty high-level functions, with the graph just dictating order and input/output flow. And once you have a good node-graph tool, the whole engine can be abstracted a lot this way - render-target flow and post-processing, particle effects, even gameplay logic.
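The dedup step can be sketched like this (names are illustrative; a real pipeline would hash compiled bytecode rather than compare source strings): map each graph's generated code to a shared shader index, so graphs differing only in bound constants/textures collapse to one compile.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// For each generated shader, return the index of its unique instance in
// 'unique'. Duplicate code streams share one entry and compile once.
std::vector<size_t> DeduplicateShaders(const std::vector<std::string>& generated,
                                       std::vector<std::string>& unique) {
    std::unordered_map<std::string, size_t> seen;
    std::vector<size_t> remap;
    for (const auto& code : generated) {
        auto it = seen.find(code);
        if (it == seen.end()) {
            it = seen.emplace(code, unique.size()).first;
            unique.push_back(code);
        }
        remap.push_back(it->second);
    }
    return remap;
}
```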
  11. Slight side note: you never have to set member pointers to nullptr in a destructor, as after the destructor runs that memory is invalid and should not be accessed by anything!
  12. You don't need all 4 time-of-day probes for blending - the time of day can only ever sit between two of those probes at a time, due to time being 1D... unless you're rendering such a large area that it encompasses multiple timezones ;) Also, consider instead blending the probes into a new cubemap - you can then cache the result of the blend. Depending on resolution, you end up paying the cost of blending on far fewer pixels.
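The "only two probes at a time" point can be sketched as a 1D lookup (an illustrative helper, not any engine's API): given sorted probe times, find the bracketing pair and the blend weight, wrapping around midnight.

```cpp
#include <cassert>
#include <cmath>

// times: probe times in [0, 24), sorted ascending (e.g. night/dawn/noon/dusk).
// Writes the indices of the two probes to blend into *i0 and *i1 and returns
// the weight of the second probe. The last interval wraps past midnight.
float FindProbePair(const float* times, int count, float t, int* i0, int* i1) {
    for (int i = 0; i < count; ++i) {
        int next = (i + 1) % count;
        float a = times[i], b = times[next];
        float span = (b > a) ? b - a : b + 24.0f - a;   // wrap at midnight
        float offs = (t >= a) ? t - a : t + 24.0f - a;
        if (offs <= span) { *i0 = i; *i1 = next; return offs / span; }
    }
    *i0 = *i1 = 0; // unreachable for valid sorted input
    return 0.0f;
}
```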
  13. It's Texture Arrays in DirectX instead. The setup isn't much different from a standard Texture2D - it's an extra parameter, ArraySize, on the D3D11_TEXTURE2D_DESC when creating the ID3D11Texture2D. The limitation is that all textures in the array must be the same resolution.
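One detail worth knowing once ArraySize > 1: each array slice carries its own mip chain, and individual subresources are addressed as mip + slice * mipLevels. A sketch of that indexing (this mirrors what D3D11's D3D11CalcSubresource helper computes):

```cpp
#include <cassert>

// Subresource index for a given mip of a given array slice.
// Slice 0 holds subresources [0, mipLevels), slice 1 the next mipLevels, etc.
unsigned CalcSubresource(unsigned mipSlice, unsigned arraySlice,
                         unsigned mipLevels) {
    return mipSlice + arraySlice * mipLevels;
}
```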
  14. My OpenGL is pretty rusty, but while they look the same, your fix is closer to what the hardware will be doing anyway, which is pretty poor for performance. This is because you aren't supposed to dynamically index across uniforms. The fact that it worked at all surprises me; the artifact you are seeing is probably wavefronts that span two textures bugging out. Your fix will actually sample all 3 textures and then discard the 2 it doesn't need - shaders don't really branch in the traditional CPU sense, but execute all paths. In DX you can force a branch using [branch] before the if; I'm not sure what the GL equivalent is. To take this approach the fast way, you need to instead use Array Textures: https://www.opengl.org/wiki/Array_Texture This basically lets you use a vec3 texture coordinate where z maps to the array element index.
  15. Hah, it's all good. I've fallen back to modifying N dot L countless times; it's just good to understand the why / why not for it. IIRC Valve modified the dot product results for its skin shader in HL2 to fake subsurface scattering and give that soft feel. Good luck with your shaders :)
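For reference, the best-known variant of that N dot L modification is Valve's "half Lambert" from HL2: remap the dot product from [-1,1] to [0,1] and square it, so surfaces facing away from the light never go fully black. A sketch:

```cpp
#include <cassert>
#include <cmath>

// Half-Lambert diffuse term: remap N.L from [-1,1] to [0,1], then square.
// Compare with plain Lambert's max(N.L, 0), which clamps the back side to 0.
float HalfLambert(float nDotL) {
    float w = nDotL * 0.5f + 0.5f;
    return w * w;
}
```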