DX12 SkyBox Depth Troubles


I am having an issue where my skybox always passes the depth test and gets written over existing geometry. I draw the skybox last, and it is all you see on screen; turn it off, and my scene renders fine. Here is the setup; I hope someone can tell me where I am going wrong (pulling hair out) trying to set up DirectX 12 depth testing correctly:


I am using the Microsoft MiniEngine core as my renderer.

	// class members
	SamplerDescriptor	m_skySampler;
	RootSignature		m_skyRootSig;
	GraphicsPSO		m_skyPSO;


	// root signature for the sky map
	m_skyRootSig.Reset(2, 2);
	m_skyRootSig.InitStaticSampler(0, SamplerAnisoWrapDesc, D3D12_SHADER_VISIBILITY_PIXEL);

	SamplerDesc samplerSkyDesc;
	samplerSkyDesc.ComparisonFunc = D3D12_COMPARISON_FUNC_LESS_EQUAL;

	m_skyRootSig.InitStaticSampler(1, samplerSkyDesc, D3D12_SHADER_VISIBILITY_PIXEL);
	// parameters
	m_skyRootSig[0].InitAsConstantBuffer(0, D3D12_SHADER_VISIBILITY_VERTEX); // vertex shader float4x4 modelToProjection
	m_skyRootSig[1].InitAsDescriptorRange(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 0, 6, D3D12_SHADER_VISIBILITY_PIXEL);

	/* Sky box PSO */
	D3D12_RASTERIZER_DESC rastDesc = {};
	rastDesc.FillMode = D3D12_FILL_MODE_SOLID;
	rastDesc.CullMode = D3D12_CULL_MODE_NONE;
	rastDesc.DepthClipEnable = TRUE;

	D3D12_DEPTH_STENCILOP_DESC stencilOp = {};
	stencilOp.StencilDepthFailOp = D3D12_STENCIL_OP_KEEP;
	stencilOp.StencilPassOp = D3D12_STENCIL_OP_KEEP;
	stencilOp.StencilFailOp = D3D12_STENCIL_OP_KEEP;
	stencilOp.StencilFunc = D3D12_COMPARISON_FUNC_NEVER;

	// start from the engine's "depth disabled" state and turn the test back on
	D3D12_DEPTH_STENCIL_DESC skyBox = DepthStateDisabled;
	skyBox.DepthEnable = TRUE; // depth testing enabled
	skyBox.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ZERO; // read only
	skyBox.StencilEnable = FALSE;
	skyBox.StencilReadMask = D3D12_DEFAULT_STENCIL_READ_MASK;
	skyBox.StencilWriteMask = D3D12_DEFAULT_STENCIL_WRITE_MASK;
	skyBox.FrontFace = stencilOp;
	skyBox.BackFace = stencilOp;

	// position-only vertex layout, matching the vertex shader input below
	D3D12_INPUT_ELEMENT_DESC vertElemSky[] =
	{
		{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }
	};

	m_skyPSO.SetRasterizerState(rastDesc);
	m_skyPSO.SetDepthStencilState(skyBox);
	m_skyPSO.SetInputLayout(_countof(vertElemSky), vertElemSky);
	m_skyPSO.SetRenderTargetFormats(1, &ColorFormat, DepthFormat);
	m_skyPSO.SetVertexShader(g_pSkyVS, sizeof(g_pSkyVS));
	m_skyPSO.SetPixelShader(g_pSkyPS, sizeof(g_pSkyPS));

Shader-side root signature used for parameter validation:

#define Sky_RootSig \
	"CBV(b0, visibility = SHADER_VISIBILITY_VERTEX), " \
	"DescriptorTable(SRV(t0, numDescriptors = 6), visibility = SHADER_VISIBILITY_PIXEL)," \
	"StaticSampler(s0, maxAnisotropy = 8, visibility = SHADER_VISIBILITY_PIXEL)," \
	"StaticSampler(s1, visibility = SHADER_VISIBILITY_PIXEL," \
		"addressU = TEXTURE_ADDRESS_WRAP," \
		"addressV = TEXTURE_ADDRESS_WRAP," \
		"addressW = TEXTURE_ADDRESS_WRAP," \
		"comparisonFunc = COMPARISON_LESS_EQUAL," \

Vertex Shader:

#include "SkyRS.hlsli"

cbuffer VSConstants : register(b0)
	float4x4 modelToProjection;

struct VSInput
	float3 position : POSITION;

struct VSOutput
	float4 position  : SV_POSITION; // screen space position of the pixel
	float3 texLookUp : POSITION;

VSOutput main(VSInput vsInput)
	VSOutput vsOutput;

	vsOutput.texLookUp = vsInput.position; // use position as index to cube map
	float3 projected = mul(vsInput.position, (float3x3)modelToProjection); // ignore translation 
	vsOutput.position = float4(projected, 1.0f).xyww; // Set z = w which forces the farthest possible z value
	//vsOutput.position = float4(vsInput.position, 1.0f).xyww;

	return vsOutput;

Pixel shader:

#include "SkyRS.hlsli"

struct VSOutput
	float4 position  : SV_POSITION; // screenspace position of the pixel
	float3 texLookUp : POSITION;

TextureCube cubeMap : register(t0);

SamplerState sampler0 : register(s0);
SamplerState sampler1 : register(s1);

float3 main(VSOutput pin) : SV_Target0
	//return float3(1.f,0.f,0.f);
	return cubeMap.Sample(sampler1, pin.texLookUp);



I don't know the correct way to do it in DirectX, but in OpenGL I draw the skybox last with the depth range set to [1.0,1.0] instead of [0.0,1.0], with depth test less-than-or-equal. This way, all output fragments of the skybox are at the furthest depth value (the clear depth), and won't overdraw any foreground objects.
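A minimal sketch of that setup, assuming a conventional depth buffer cleared to 1.0 (drawSkybox() is just a placeholder for whatever submits the skybox geometry):

	glDepthFunc(GL_LEQUAL);     // pass only where the stored depth is still the clear value
	glDepthMask(GL_FALSE);      // read-only test, don't touch the scene's depth values
	glDepthRange(1.0, 1.0);     // force every skybox fragment to depth 1.0 (the far/clear value)
	drawSkybox();               // placeholder draw call
	glDepthRange(0.0, 1.0);     // restore the defaults for the rest of the frame
	glDepthMask(GL_TRUE);
	glDepthFunc(GL_LESS);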


The sky box is behind everything, so you can just force it to the far plane at the end of the vertex shader:


return float4(vsOutput.position.xy, 1, 1);


This way the geometry is pushed to the far plane (effectively to infinity), independently of its scale. Changing the viewport depth bounds is not the best option, as it is a heavier state to change, and in theory you should not set a zero-width (NIL) depth range.

Edited by galop1n


@galop1n Thank you, but my shader is doing the same thing with vsOutput.position = float4(projected, 1.0f).xyww; so that does not appear to be the issue.

@Aressera Thank you, thank you! Solved! This is a great practice I will now adopt: by clamping the depth range to a single value, the skybox can only ever land at that one depth. Doing that alone did not make my problem go away, though, which meant only one thing: the comparison function was wrong.

Solution: Microsoft's open source MiniEngine code optimizes depth precision by reversing the depth buffer (0.0 at the far plane, 1.0 at the near plane), so the comparison tests have to be greater-than-or-equal rather than less-than-or-equal. I changed that and it works perfectly now.
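For anyone who hits the same thing, here is a minimal sketch of the combination that ended up working under MiniEngine's reversed-Z convention (depth cleared to 0.0, near plane at 1.0); the viewport and command list names are illustrative, not copied from my project:

	// Depth test: under reversed-Z, "behind everything" is the smallest depth value,
	// so the skybox must compare GREATER_EQUAL against the 0.0 clear value instead of LESS_EQUAL.
	D3D12_DEPTH_STENCIL_DESC skyDepth = {};
	skyDepth.DepthEnable = TRUE;
	skyDepth.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ZERO;        // read-only, keep the scene's depth intact
	skyDepth.DepthFunc = D3D12_COMPARISON_FUNC_GREATER_EQUAL;     // set on the PSO, not on the sampler

	// Depth range: clamp the skybox to the far value (0.0 in reversed-Z), the DX12
	// equivalent of the glDepthRange(1.0, 1.0) trick with a conventional depth buffer.
	D3D12_VIEWPORT skyViewport = sceneViewport;                   // illustrative copy of the scene viewport
	skyViewport.MinDepth = 0.0f;
	skyViewport.MaxDepth = 0.0f;
	commandList->RSSetViewports(1, &skyViewport);                 // then draw the skybox last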

Thank you both for your help.  First time posting here because I was out of ideas and you led me to water :)



samplerSkyDesc.ComparisonFunc = D3D12_COMPARISON_FUNC_LESS_EQUAL;

Sampler comparisons are used for shadow-mapping, where you want your texture filter to compare the texel values against a reference that you provide and return a boolean result (0.0f or 1.0f).
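The sampler's ComparisonFunc only takes effect through the SampleCmp family of HLSL intrinsics (declared with SamplerComparisonState); it has no influence on the depth test, which comes from the DepthFunc in the PSO's depth-stencil state. A minimal sketch of what a comparison sampler is actually for, with illustrative shadow-map values that are not taken from the code above:

	// Comparison sampler for percentage-closer filtering of a shadow map;
	// paired in HLSL with SamplerComparisonState and Texture2D::SampleCmpLevelZero.
	D3D12_STATIC_SAMPLER_DESC shadowSampler = {};
	shadowSampler.Filter = D3D12_FILTER_COMPARISON_MIN_MAG_LINEAR_MIP_POINT;
	shadowSampler.AddressU = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
	shadowSampler.AddressV = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
	shadowSampler.AddressW = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
	shadowSampler.ComparisonFunc = D3D12_COMPARISON_FUNC_GREATER_EQUAL; // reversed-Z style shadow compare
	shadowSampler.ShaderRegister = 1;
	shadowSampler.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;
	// A plain skybox lookup only needs a regular SamplerState, with no ComparisonFunc at all.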

