Digitalfragment

Members
  • Content count

    688
  • Joined

  • Last visited

Community Reputation

1504 Excellent

About Digitalfragment

  • Rank
    Advanced Member
  1. Culling in SharpDX

    If back-face culling is set, any primitives that are determined to be back facing will be discarded; similarly, front-face culling will discard any primitives that are determined to be front facing. This determination is done by looking at the SV_POSITION outputs and checking whether the winding order in screen space is clockwise or counter-clockwise. The IsFrontCounterClockwise field indicates whether CCW or CW counts as "front facing". Note that normals are not considered at all for this. So, if your polygons are wound counter-clockwise and you don't want to draw polygons that face away from the camera, you want IsFrontCounterClockwise to be true and CullMode to be Back. If you instead want to skip polygons facing towards the camera and only draw the ones facing away, change CullMode to Front.
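    For reference, a minimal sketch of the equivalent native D3D11 rasterizer setup (SharpDX's RasterizerStateDescription wraps the same fields); the device/context parameters are assumed to come from your existing setup:

    ```cpp
    #include <d3d11.h>

    // Sketch: cull back-facing primitives, treating CCW screen-space winding as front facing.
    void SetCcwBackfaceCulling(ID3D11Device* device, ID3D11DeviceContext* context)
    {
        D3D11_RASTERIZER_DESC desc = {};
        desc.FillMode              = D3D11_FILL_SOLID;
        desc.CullMode              = D3D11_CULL_BACK;   // discard back-facing primitives
        desc.FrontCounterClockwise = TRUE;              // CCW winding is "front facing"
        desc.DepthClipEnable       = TRUE;

        ID3D11RasterizerState* state = nullptr;
        if (SUCCEEDED(device->CreateRasterizerState(&desc, &state)))
        {
            context->RSSetState(state);
            state->Release(); // the context holds its own reference to the bound state
        }
    }
    ```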
  2. OpenGL DDS issue about ABGR

    https://msdn.microsoft.com/en-us/library/windows/desktop/bb509700(v=vs.85).aspx You declared the TextureCube as a single channel. TextureCube<float4> would be four-channel RGBA float (which is also what a plain TextureCube declaration defaults to).
  3. Need algorithm for purging glyphs from an atlas.

    What you described is a typical LRU (Least Recently Used) scheme, and it works pretty well. Make a public GC function that does the cleanup so that you can explicitly run it at known points (e.g. scene/level transitions), and have the allocator automatically call GC when it fails to allocate space in an atlas, then allocate from the freed space. If GC fails to free anything during an allocation (e.g. the least recently used element was used this frame), allocate a new atlas.
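    A minimal sketch of that scheme, simplified to fixed-size glyph cells per atlas page; all names here (GlyphCache, Acquire, SLOTS_PER_ATLAS, etc.) are illustrative:

    ```cpp
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    constexpr int SLOTS_PER_ATLAS = 256; // assumed fixed-size cells per atlas page

    struct GlyphSlot { int atlas; int slot; };

    class GlyphCache
    {
    public:
        void BeginFrame() { ++frame_; }

        // Mark a glyph as used this frame, allocating a slot for it if needed.
        GlyphSlot Acquire(uint32_t codepoint)
        {
            auto it = glyphs_.find(codepoint);
            if (it == glyphs_.end())
                it = glyphs_.emplace(codepoint, Entry{ Allocate(), 0 }).first;
            it->second.lastUsed = frame_;
            return it->second.slot;
        }

        // Explicit GC: evict everything not referenced during the current frame.
        void CollectGarbage()
        {
            for (auto it = glyphs_.begin(); it != glyphs_.end(); )
            {
                if (it->second.lastUsed < frame_)
                {
                    freeSlots_.push_back(it->second.slot);
                    it = glyphs_.erase(it);
                }
                else
                    ++it;
            }
        }

    private:
        struct Entry { GlyphSlot slot; uint64_t lastUsed; };

        GlyphSlot Allocate()
        {
            if (freeSlots_.empty()) CollectGarbage(); // automatic GC on allocation failure
            if (freeSlots_.empty()) AddAtlasPage();   // GC freed nothing: everything was used this frame
            GlyphSlot s = freeSlots_.back();
            freeSlots_.pop_back();
            return s;
        }

        void AddAtlasPage()
        {
            int atlas = atlasCount_++;
            for (int i = 0; i < SLOTS_PER_ATLAS; ++i) freeSlots_.push_back({ atlas, i });
        }

        std::unordered_map<uint32_t, Entry> glyphs_;
        std::vector<GlyphSlot> freeSlots_;
        uint64_t frame_ = 0;
        int atlasCount_ = 0;
    };
    ```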
  4. Color storage convention

    There are a lot of times I'll use either 4 bytes or 4 floats - sometimes bandwidth is more important than precision, sometimes I need HDR values. I've never bothered with an explicit type for colour, and instead use math library types, e.g. float4 / ubyte4n, as there isn't really any colour-specific functionality besides conversions (all of which reside in the DirectXMath library anyway).
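    For illustration, a minimal sketch of the two representations and the conversion between them (the float4/ubyte4n names mirror typical math-library types rather than any specific library):

    ```cpp
    #include <algorithm>
    #include <cstdint>

    // Unorm byte colour: 4 bytes instead of 16, at the cost of [0,1] range and
    // 8-bit precision per channel. HDR values need the float4 path instead.
    struct ubyte4n { uint8_t x, y, z, w; };
    struct float4  { float x, y, z, w; };

    inline uint8_t PackUnorm(float v)
    {
        return static_cast<uint8_t>(std::clamp(v, 0.0f, 1.0f) * 255.0f + 0.5f);
    }

    inline ubyte4n PackColor(const float4& c)
    {
        return { PackUnorm(c.x), PackUnorm(c.y), PackUnorm(c.z), PackUnorm(c.w) };
    }

    inline float4 UnpackColor(const ubyte4n& c)
    {
        return { c.x / 255.0f, c.y / 255.0f, c.z / 255.0f, c.w / 255.0f };
    }
    ```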
  5. Recommended byte size of a single vertex

    Or go a step smaller and store a quaternion tangent frame instead of a separate normal and tangent. Crytek did a presentation a while ago on this.
  6. "I'm not sure about that, since you generate from a cubemap which can be sRGB, surely the generated values will be in sRGB if the cubemap is in sRGB, no?" Given that most people do lighting in HDR now, these cubemaps are generally at least 16-bit and typically floating point.
  7. DirectX12 - Screen Space Reflection Artefacts

    Jittering is taking the raycast vector and randomly perturbing it slightly per pixel to help break up any noticeable aliasing in the sampling result. It's a technique also used in SSAO, shadow maps and temporal AA.
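    Purely as illustration (the real thing lives in the shader), a small C++ sketch of per-pixel jitter using interleaved gradient noise; the noise constants follow Jimenez's formulation, while the second-sample offsets and jitterStrength are arbitrary tuning assumptions:

    ```cpp
    #include <cmath>

    struct float3 { float x, y, z; };

    // Interleaved gradient noise: cheap per-pixel pseudo-random value in [0,1).
    inline float InterleavedGradientNoise(float px, float py)
    {
        float v = 52.9829189f * std::fmod(0.06711056f * px + 0.00583715f * py, 1.0f);
        return v - std::floor(v);
    }

    // Perturb the reflection ray direction slightly per pixel before marching,
    // so undersampling shows up as noise rather than banding.
    inline float3 JitterRayDirection(float3 dir, float px, float py, float jitterStrength)
    {
        float n0 = InterleavedGradientNoise(px, py);
        float n1 = InterleavedGradientNoise(px + 17.0f, py + 59.0f); // decorrelated second value
        float3 d = { dir.x + (n0 - 0.5f) * jitterStrength,
                     dir.y + (n1 - 0.5f) * jitterStrength,
                     dir.z };
        float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        return { d.x / len, d.y / len, d.z / len };
    }
    ```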
  8. DirectX12 - Screen Space Reflection Artefacts

    Generally the SSR would be scaled by AO, which would leave the area under the car dark.
    Edit 1: On the 2nd question, scale the SSR down as the angle between the surface normal and the direction to the camera approaches zero. A lot of modern engines would blend towards an environment cubemap here, in the same way that they would if the SSR ray never collided with the scene.
    Edit 2: (Just realized hours later what was bugging me about it.) Regarding the initial artifact, you want to fade out the SSR when the sampling point's depth is closer to the camera than the reflection point, rather than comparing the angle between the normal and view vector.
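    A tiny sketch of that depth-based fade from Edit 2; the function name and fadeRange parameter are illustrative assumptions, and depths are linear view-space values:

    ```cpp
    #include <algorithm>

    // Fade SSR out as the sampled point ends up closer to the camera than the
    // reflection point, instead of comparing the normal against the view vector.
    inline float SsrDepthFade(float sampleDepth, float reflectionPointDepth, float fadeRange)
    {
        // 1 while the sample stays at/behind the reflection point, falling to 0 as it
        // moves in front of it (closer to the camera) by more than fadeRange.
        float t = (reflectionPointDepth - sampleDepth) / fadeRange;
        return 1.0f - std::clamp(t, 0.0f, 1.0f);
    }
    ```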
  9. "You can always rewrite the consumer of this data so that it can work directly with N visibility lists, then there's no need to merge :D" This is it really - if you run a merge operation then you're going to be reading and writing memory that you don't otherwise need to. Unless your arrays are smaller than a cache line, looping over multiple visibility lists isn't going to add any overhead; at least nowhere near what the merge step would cost.
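    A minimal sketch of a consumer that walks N lists directly (Renderable and the submit callback are hypothetical stand-ins for whatever actually consumes the lists):

    ```cpp
    #include <vector>

    struct Renderable { /* ... */ };

    template <typename SubmitFn>
    void SubmitVisible(const std::vector<std::vector<const Renderable*>>& visibilityLists,
                       SubmitFn&& submit)
    {
        // No merge pass, so no extra reads/writes of a combined array -
        // just iterate each list in turn.
        for (const auto& list : visibilityLists)
            for (const Renderable* r : list)
                submit(r);
    }
    ```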
  10. Node-based materials and performance

    As said above, each node graph can produce a unique shader. It's pretty common to do a step afterwards on all of the final shaders to remove duplicates (i.e. where the only differences in the graph are input constants/textures) and compile only the unique bytecode streams; a sketch of that follows below. From there, it isn't any different to the renderer than hand-written shaders.

    I'd argue slightly against spek's point, as most artists I've worked with use shader graphs specifically with PBR. It's not about having a node graph that modifies the entire shader (lighting/shadow math typically being hand-optimized), but about making tweaks such as how textures are combined or how texture coordinates are generated, without being blocked waiting on a programmer to implement features. Worse yet is when both the programmer's and the artist's time gets wasted as the artist sits with the programmer to constantly tweak and refine hand-written shaders. But YMMV - if you are doing a solo/small project this isn't typically a major concern. When you have 20 artists to 2 graphics engineers it's a different matter of priorities.

    I've been amazed by some of the creativity non-technical artists have produced with simple graphs. On the flip side, I've also been horrified by the same creativity when it comes down to having to optimize these graphs without breaking the end result. But if you do go down the node-system route, it doesn't have to be completely low level - nodes can be pretty high-level functions, with the graph just dictating order and input/output flow. And once you have an awesome node-graph tool, the whole engine can be abstracted a lot this way - render-target flow & post-processing, particle effects, even gameplay logic.
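    A minimal sketch of that de-duplication step, assuming the graph is lowered to generated shader source with constants/textures bound as external parameters (class and function names here are illustrative, not from any particular engine):

    ```cpp
    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct CompiledShader { std::vector<uint8_t> bytecode; };

    class ShaderCache
    {
    public:
        // 'generatedSource' is the code emitted from a node graph. Graphs that differ only
        // in input constants/textures emit identical source, so they hit the same cache entry
        // and only the unique shaders are compiled.
        const CompiledShader& GetOrCompile(const std::string& generatedSource)
        {
            auto it = cache_.find(generatedSource);
            if (it == cache_.end())
                it = cache_.emplace(generatedSource, Compile(generatedSource)).first;
            return it->second;
        }

    private:
        static CompiledShader Compile(const std::string& source)
        {
            CompiledShader result;
            // Real code would invoke the shader compiler (e.g. D3DCompile) here.
            (void)source;
            return result;
        }

        std::unordered_map<std::string, CompiledShader> cache_;
    };
    ```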
  11. Guideline for CComPtr and D3D11?

    Slight side note: you never have to set member pointers to nullptr in a destructor, because after the destructor runs that memory is invalid and should not be accessed by anything!
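    A tiny illustration of the point (the Mesh class and its member are hypothetical):

    ```cpp
    #include <d3d11.h>

    class Mesh
    {
    public:
        ~Mesh()
        {
            if (vertexBuffer_) vertexBuffer_->Release();
            // No need for 'vertexBuffer_ = nullptr;' here - nothing can legally
            // read the member after the destructor has run.
        }

    private:
        ID3D11Buffer* vertexBuffer_ = nullptr;
    };
    ```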
  12. IBL Cubemap blending and dynamic daytime

    You don't need all 4 time-of-day probes for blending - the time of day can only ever exist between two of those probes at a time, due to time being 1D... unless you're rendering such a large area that it encompasses multiple timezones ;) Also, consider blending the probes into a new cubemap instead - you can then cache the result of the blend. Depending on resolution, you end up paying the cost of blending on far fewer pixels.
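    A small sketch of the probe-selection part, assuming 4 probes captured at evenly spaced times across a 24-hour day (the probe count and SelectProbes name are illustrative); the two selected probes would then be blended once into a cached cubemap:

    ```cpp
    #include <cmath>

    constexpr int kProbeCount = 4;

    // Pick the two neighbouring time-of-day probes and the lerp factor between them.
    void SelectProbes(float timeOfDayHours, int& probeA, int& probeB, float& blend)
    {
        float slot = timeOfDayHours / 24.0f * kProbeCount;   // position in "probe space"
        probeA = static_cast<int>(std::floor(slot)) % kProbeCount;
        probeB = (probeA + 1) % kProbeCount;                  // only the next probe is needed
        blend  = slot - std::floor(slot);                     // lerp factor between the two
    }
    ```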
  13. It's Texture Arrays in DirectX instead. The setup isn't too much different from a standard Texture2D - it's an extra field, "ArraySize", on the D3D11_TEXTURE2D_DESC when creating the ID3D11Texture2D. The limitation with these is that all textures must be the same resolution.
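    A minimal sketch of that setup in native D3D11 (the format, mip count and helper name are placeholder choices):

    ```cpp
    #include <d3d11.h>

    // Creating a texture array is the same as a regular Texture2D apart from ArraySize.
    // All slices share one width/height/format. 'initialData' is assumed to point to one
    // D3D11_SUBRESOURCE_DATA entry per slice (times the mip count), or may be null.
    ID3D11Texture2D* CreateTextureArray(ID3D11Device* device, UINT width, UINT height,
                                        UINT sliceCount, const D3D11_SUBRESOURCE_DATA* initialData)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width            = width;
        desc.Height           = height;
        desc.MipLevels        = 1;
        desc.ArraySize        = sliceCount;              // the only real difference
        desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage            = D3D11_USAGE_DEFAULT;
        desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

        ID3D11Texture2D* textureArray = nullptr;
        device->CreateTexture2D(&desc, initialData, &textureArray);
        return textureArray; // shader side: Texture2DArray<float4>, sampled with a float3 where z = slice
    }
    ```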
  14. My OpenGL is pretty rusty, but while they look the same, the fix is closer to what the hardware will actually be doing, which is pretty poor for performance. This is because you aren't supposed to dynamically index across uniforms. The fact that it worked at all surprises me; the artifact you are seeing is probably wavefronts that span two textures bugging out. Your fix will instead actually sample all 3 textures and then discard the 2 that it doesn't need. Shaders don't really branch in the traditional CPU sense, but execute all paths. In DX you can force a branch using [branch] before the if; I'm not sure what the GL equivalent is. To take this approach the 'fast' way, you need to instead use "Array Textures": https://www.opengl.org/wiki/Array_Texture This basically lets you use a vec3 texture coordinate where z maps to the array element index.
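    On the application side, a minimal sketch of creating such an array texture (the loader header and RGBA8 format are assumptions); the shader then samples it through a sampler2DArray with a vec3 coordinate whose z selects the layer:

    ```cpp
    #include <GL/glew.h> // or any loader providing the GL 4.x entry points

    GLuint CreateArrayTexture(int width, int height, int layers, const void* const* layerPixels)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
        glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, layers);
        // Upload each layer; all layers must share the same resolution.
        for (int layer = 0; layer < layers; ++layer)
            glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                            width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, layerPixels[layer]);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }
    ```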
  15. Examples of PBR shaders

    Hah, it's all good. I've fallen back to modifying N dot L countless times; it's just good to understand the why / why not for it. IIRC Valve modified the dot product results for its skin shader in HL2 to fake subsurface scattering and give that soft feel. Good luck with your shaders :)
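    For reference, a small sketch of that kind of N·L modification - Valve's "half Lambert" remap - next to standard Lambert (shown in C++ just for illustration; in practice this lives in the shader):

    ```cpp
    #include <algorithm>

    // Half Lambert: remap N·L from [-1,1] to [0,1] and square it, so geometry facing
    // away from the light never goes fully black and the falloff feels softer.
    inline float HalfLambert(float nDotL)
    {
        float wrapped = nDotL * 0.5f + 0.5f;
        return wrapped * wrapped;
    }

    // Standard Lambert for comparison: clamp and use directly.
    inline float Lambert(float nDotL)
    {
        return std::max(nDotL, 0.0f);
    }
    ```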