WFP

Member Since 23 Mar 2013
Offline Last Active Yesterday, 08:04 PM

Topics I've Started

Temporary Buffer Management in Post-Processing

07 June 2015 - 09:11 AM

How do you all handle avoiding a bunch of temporary buffers sitting around? For example, a lot of my post-processing effects need to write data to an intermediate buffer for a calculation, then read that result, perform another calculation, and apply the final result back to the light buffer. Currently, I've just been giving each of these classes its own temporary buffer resource to use as it sees fit, but obviously this adds up, especially with things like mip-mapped RGBA16F textures. For quickly adding in and testing a new effect it's feasible, but it obviously won't scale well to lower-end hardware.

One solution I was thinking might help with memory consumption would be some type of "scratch pad" that these effects can access - basically, a buffer with the same format as the light buffer that any effect can use as an intermediate buffer whenever necessary. Expanding on that, I might have a single class for managing temporary resources that takes texture dimensions and a format as its arguments and returns a matching buffer if one exists, or creates and returns one if not. I would still need to handle cases where two temporary resources of the same type are needed at once, though (for example, ping-ponging blur buffers).
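To make that concrete, here's a minimal sketch of the kind of manager I have in mind - it hands out scratch render targets keyed by (width, height, format) and reuses them once they're handed back. The class and method names (TempTargetPool, acquire, release) are just placeholders for this example, not existing engine code.

// Minimal sketch of a scratch render-target pool keyed by (width, height, format).
// Names like TempTargetPool, acquire, and release are placeholders, not engine API.
#include <d3d11.h>
#include <wrl/client.h>
#include <map>
#include <tuple>
#include <vector>

using Microsoft::WRL::ComPtr;

class TempTargetPool
{
public:
    explicit TempTargetPool(ID3D11Device* device) : m_device(device) {}

    // Hands back a free texture matching the request, creating one only if none is available.
    ComPtr<ID3D11Texture2D> acquire(UINT width, UINT height, DXGI_FORMAT format)
    {
        Key key{ width, height, format };
        auto& freeList = m_free[key];
        if (!freeList.empty())
        {
            ComPtr<ID3D11Texture2D> texture = freeList.back();
            freeList.pop_back();
            return texture;
        }

        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = format;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

        ComPtr<ID3D11Texture2D> texture;
        m_device->CreateTexture2D(&desc, nullptr, &texture);
        return texture;
    }

    // An effect returns the texture when it's done so the next effect can reuse it.
    void release(UINT width, UINT height, DXGI_FORMAT format, ComPtr<ID3D11Texture2D> texture)
    {
        Key key{ width, height, format };
        m_free[key].push_back(texture);
    }

private:
    using Key = std::tuple<UINT, UINT, DXGI_FORMAT>;

    ID3D11Device* m_device = nullptr;
    std::map<Key, std::vector<ComPtr<ID3D11Texture2D>>> m_free;
};

Ping-pong cases would just call acquire twice before releasing, so the pool ends up holding two textures under that key.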


From what I'm reading, Direct3D 12 (and probably Vulkan) seems like a better overall fit for what I'm trying to accomplish, since (and please correct me if I'm misunderstanding) you can allocate a large block of memory up front, and creating a texture resource essentially just moves an offset ahead by the amount needed - meaning no dynamic allocation per frame, and these effects can allocate whatever temporary buffer they need as their turn comes up to be processed. I know there are a few threads on this site right now with people already tinkering with drawing triangles, etc., in Direct3D 12 on Windows 10 Preview builds, but I'm trying to hold off on that at least until the documentation stabilizes a bit, and will likely just wait until it's officially released since I have plenty to do in my current app without worrying about a graphics port or rewrite.
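If I'm understanding the new API correctly, the idea would look roughly like the sketch below - one heap created up front, with an offset that just gets bumped forward per temporary texture (and reset each frame). This is only my reading of the currently documented placed-resource API; the ScratchHeap/placeTexture names and the sizes are made up for illustration.

// Sketch only: bump-allocate placed textures from one big heap created up front.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

static UINT64 alignUp(UINT64 value, UINT64 alignment)
{
    return (value + alignment - 1) & ~(alignment - 1);
}

struct ScratchHeap
{
    ComPtr<ID3D12Heap> heap;
    UINT64 offset = 0;   // the "pointer" that just moves forward per allocation

    void init(ID3D12Device* device, UINT64 sizeInBytes)
    {
        D3D12_HEAP_DESC desc = {};
        desc.SizeInBytes = sizeInBytes;   // budgeted up front
        desc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
        desc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_RT_DS_TEXTURES;
        device->CreateHeap(&desc, IID_PPV_ARGS(&heap));
    }

    // Each effect asks for a temporary target; no heap allocation happens per frame,
    // the offset just advances (and gets reset to 0 at the start of the next frame).
    ComPtr<ID3D12Resource> placeTexture(ID3D12Device* device, const D3D12_RESOURCE_DESC& texDesc)
    {
        D3D12_RESOURCE_ALLOCATION_INFO info = device->GetResourceAllocationInfo(0, 1, &texDesc);
        offset = alignUp(offset, info.Alignment);

        ComPtr<ID3D12Resource> resource;
        device->CreatePlacedResource(heap.Get(), offset, &texDesc,
                                     D3D12_RESOURCE_STATE_RENDER_TARGET, nullptr,
                                     IID_PPV_ARGS(&resource));
        offset += info.SizeInBytes;
        return resource;
    }
};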


Thanks,


WFP

Propagating data through an engine to constant buffers

29 January 2015 - 03:32 PM

I was hoping to get some advice about engine design, and more specifically, about getting the data that models need into constant buffers at render time.  The below is based on an entity-component framework, in case that helps clarify where data should be coming from.

 

For probably >90% of the content in my scene (basic models with a vertex shader and pixel shader that really only need a WVP transform), I have a reserved (in the context of my engine) constant buffer slot that I fill with data from the entity's corresponding TransformComponent combined with the view and projection matrices from the current camera.  For such a simple case, this is easy and straightforward enough that I haven't really thought to revisit it.
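Concretely, that simple path amounts to something like the following (PerObjectCB and the slot constant are placeholder names for this example, not my actual engine types):

// Rough sketch of the common path: one reserved cbuffer slot fed with the entity's WVP.
// PerObjectCB and kPerObjectSlot are placeholder names.
#include <d3d11.h>
#include <DirectXMath.h>

using namespace DirectX;

struct PerObjectCB
{
    XMFLOAT4X4 worldViewProj; // world (from TransformComponent) * view * proj (from current camera)
};

const UINT kPerObjectSlot = 0; // the slot reserved by the engine

void updatePerObjectCB(ID3D11DeviceContext* context, ID3D11Buffer* cb,
                       const XMMATRIX& world, const XMMATRIX& view, const XMMATRIX& proj)
{
    PerObjectCB data;
    // transposed for HLSL's default column-major matrix packing
    XMStoreFloat4x4(&data.worldViewProj, XMMatrixTranspose(world * view * proj));

    context->UpdateSubresource(cb, 0, nullptr, &data, 0, 0);
    context->VSSetConstantBuffers(kPerObjectSlot, 1, &cb);
}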

 

Recently, I've started adding tessellated heightmap-based terrain to the engine, and unlike the common entities, it also requires an additional constant buffer that houses things like min and max tessellation factors, the camera position, frustum planes (for culling), and a view matrix (used to move generated normals to view space for the G-Buffer).  I haven't done a good job of building enough flexibility into the current pipeline to accommodate anything outside of the standard constant buffer described above, which, again, mostly houses the WVP transformation matrix.
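For context, the extra terrain data is roughly the following on the C++ side (field names and padding here are illustrative, not my exact layout):

// Illustrative C++ mirror of the terrain's extra constant buffer (not verbatim engine code).
// Members are padded so each field group starts on a 16-byte HLSL register boundary.
#include <DirectXMath.h>

using namespace DirectX;

struct TerrainCB
{
    float      minTessFactor;
    float      maxTessFactor;
    XMFLOAT2   padding0;          // pad to a 16-byte boundary

    XMFLOAT3   cameraPositionW;
    float      padding1;

    XMFLOAT4   frustumPlanes[6];  // for culling patches

    XMFLOAT4X4 viewMatrix;        // moves generated normals to view space for the G-Buffer
};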

 

When I started thinking longer term, I realized I was going to run into the same issue with things like ocean rendering, volumetric fog, or really anything "non-standard" in the sense that it's not just a model with a straightforward Vertex Shader -> Pixel Shader -> Done type of setup.  I'll go over below what I have right now to band-aid this situation, but I would really appreciate input on how to better get at the data a specific model's constant buffers require without the model having to know about upstream objects (for example, without the model having to query the scene for the current camera in order to get the view matrix).

 

Current solution:

In my render component, which holds a pointer to a model (which contains a vertex buffer, an index buffer, and a subset table describing offsets and shaders/textures per subset - ideally where I would like the constant buffer data to live, since the models depend on it), I have added a std::function member to allow for "extra work" and a boolean flag to indicate its presence.  The gist is that during setup, if a renderable entity (one with a RenderComponent) needs to perform extra setup work, it can define that work in the std::function member, and the main render loop will check whether its flag is set during each iteration.  So, like below:

// during scene setup - create a render component with the provided model
RenderComponent* pRC = Factory<RenderComponent>::create(pTerrainModel);
pRC->setExtraWork([&](DeviceContext& deviceContext, FrameRenderData& frameRenderData)
{
  // do the additional work here - in the case above, retrieve the extra data needed
  // for the constant buffer from the frameRenderData, then upload and bind it to a
  // shader constant buffer slot
});


/////// later in rendering loop
if(pCurrent->hasExtraWork())
{
  pCurrent->getExtraWork()(deviceContext, frameRenderData);
}


//////// and the way the extra work member is defined in RenderComponent
std::function<void(DeviceContext& deviceContext, FrameRenderData& frameRenderData)> m_extraWork;

The FrameRenderData is just a generated struct of references to the data relevant to any given frame - the current camera, the current entities to be rendered, etc.
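In other words, something along these lines (member names are just for illustration):

// Illustrative sketch of FrameRenderData - non-owning references to per-frame data.
#include <vector>

class Camera;
class Entity;

struct FrameRenderData
{
    const Camera&               currentCamera;    // source of the view/projection matrices
    const std::vector<Entity*>& entitiesToRender; // entities queued for this frame
    // ... plus whatever else a given frame needs
};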

 

The other thought I had was to trigger an event at the start of each frame containing the FrameRenderData and let anything that wants to know about it listen for it, but then I feel like my models or render components would need event listeners attached, which also seems like iffy design at best.
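For what it's worth, the sketch below is the kind of plumbing I mean - a frame-start broadcast that hands the FrameRenderData to listeners. The FrameEventBus type and its methods are hypothetical, and it's exactly these listener hookups on models/render components that feel questionable to me.

// Hypothetical sketch of the event-based alternative: broadcast FrameRenderData at frame
// start and let interested render components cache what they need.
#include <functional>
#include <vector>

struct FrameRenderData; // as sketched earlier

class FrameEventBus
{
public:
    using Listener = std::function<void(const FrameRenderData&)>;

    void subscribe(Listener listener) { m_listeners.push_back(std::move(listener)); }

    void broadcastFrameStart(const FrameRenderData& frameData)
    {
        for (auto& listener : m_listeners)
            listener(frameData);
    }

private:
    std::vector<Listener> m_listeners;
};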

 

While the above technically works, I feel like it's kludgy and was wondering if anyone had thoughts on a better way to get data to dependent constant buffers in a system setup similar to what's above.

 

Thanks for your time and help.


Playing sound in response to collisions

07 June 2014 - 03:55 PM

Hi,

 

I am currently trying to come up with a good solution for the situation where I have several collisions and want to play sounds in response to them.  For most aspects of the game, I can control with a fair degree of certainty when each sound plays, but for something as dynamic as the collision detection system, I'm a little stuck.  For example, say I have a large stack of boxes that I knock over.  I would like them to play small bumping-into-each-other sounds as they fall and collide with one another and with the floor.  The issue is that, especially once they have reached the floor but before the (impulse-based) physics system has put them to sleep, they keep firing collision events and playing the sound as they make minuscule adjustments to their positions.

 

One solution I've come up with is to observe the velocity of the impacts and not play a sound below a certain threshold.  This could be extrapolated to include playing softer sounds at lower impact velocities until a final cutoff (so resting but not sleeping objects would still be silent), but there's a lot of trial-and-error involved in getting the thresholds exactly right.
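As a rough sketch of that extension (the threshold constants are placeholders that would still need the trial-and-error tuning mentioned above):

// Sketch of mapping relative impact speed to a sound volume, with a silence cutoff.
// Threshold constants are placeholder values that would need tuning per sound/material.
#include <algorithm>

const float kSilenceSpeed = 0.5f; // below this relative speed, play nothing (resting contacts)
const float kMaxSpeed     = 8.0f; // at or above this, play at full volume

// Returns 0 for "don't play", otherwise a volume in (0, 1].
float impactVolume(float relativeImpactSpeed)
{
    if (relativeImpactSpeed < kSilenceSpeed)
        return 0.0f;

    float t = (relativeImpactSpeed - kSilenceSpeed) / (kMaxSpeed - kSilenceSpeed);
    return std::min(t, 1.0f);
}

// In the collision callback (pseudocode):
// float volume = impactVolume(relativeSpeedAlongContactNormal);
// if (volume > 0.0f) playSound(bumpSound, volume);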

 

Are there any more direct or obvious approaches to handling this type of situation that I'm overlooking, or is this the right track?


SSAO with Deferred Shading Issues

27 March 2013 - 07:56 PM

Greetings,

 

In my deferred shading pipeline, I am trying to add screen-space ambient occlusion.  I'm (of course) creating it after my G-Buffer is created so I have the normals in view space and can reconstruct the position in view space from the depth buffer.  I originally implemented it in my forward renderer following the example from Frank Luna's Direct3D 11 book, and it worked well, but I am running into some issues trying to adapt that to my deferred shading approach.

 

My G-Buffer normal render target uses the R32G32B32A32_FLOAT format and my depth buffer is D24S8, so precision for the calculations isn't an issue.  As mentioned, I'm reconstructing the view-space position from depth as described here by MJP.

 

The results I'm getting are shown below.  As you can see in the first image, some of the occlusion looks correct, particularly where the box sits over the ground and the corner of the upper-middle box touches the lower-middle box, as well as the occlusion occurring on the sphere behind the left-most box.  You'll notice in the first image that there is some occlusion being generated in the gap between the upper two boxes, but not as much as I would probably expect.  Furthermore, when I move the camera to the right very slightly (second image), the occlusion value between them basically disappears, which makes me think I'm doing something wrong somewhere in view space.  Notice that for those two boxes, too, the faces facing the screen have nothing else in front of them, so I would imagine they would receive no occlusion at all.

 

The third image is the same scene rendered from a different angle, this time showing that the sphere is somehow getting occlusion on faces that have no other geometry in front of them.

 

Also, the images are intentionally unblurred so we can see them for what they are and hopefully get a better idea of what is happening.  I've tried it with bilateral blur passes enabled and the blurring works fine, but it's still just blurring the same incorrect data.  Any ideas that might help in fixing this issue are very welcome and appreciated.

 

Here is the shader code I am using.

 

Vertex Shader:

struct VertexIn
{
	float3 posL : POSITION;
	float2 tex : TEXCOORD;
};

struct VertexOut
{
	float4 posH : SV_POSITION;
	float3 viewRay : VIEWRAY;
	float2 tex : TEXCOORD;
};

cbuffer cbPerFrame : register(cb0)
{
	float4x4 inverseProjectionMatrix;
};

VertexOut main(VertexIn vIn)
{
	VertexOut vOut;

	// already in NDC space
	vOut.posH = float4(vIn.posL, 1.0f);

	float3 positionV = mul(float4(vIn.posL, 1.0f), inverseProjectionMatrix).xyz;
	vOut.viewRay = float3(positionV.xy / positionV.z, 1.0f);

	// pass to pixel shader
	vOut.tex = vIn.tex;

	return vOut;
}

 

 

Pixel Shader:

 

struct VertexOut
{
	float4 posH : SV_POSITION;
	float3 viewRay : VIEWRAY;
	float2 tex : TEXCOORD;
};

cbuffer cbPerFrame : register(cb0)
{
	float4x4 gViewToTexSpace; // proj * texture
	float4 gOffsetVectors[14];

	float gOcclusionRadius;    //0.5f
	float gOcclusionFadeStart; // 0.2f
	float gOcclusionFadeEnd;   // 2.0f
	float gSurfaceEpsilon;     // 0.05f

	// for reconstructing position from depth
	float projectionA;
	float projectionB;

	float2 _padding;
};

Texture2D normalTexture : register(t0);
Texture2D depthStencilTexture : register(t1);
Texture2D randomVecMap : register(t2);

SamplerState samNormalDepth : register(s0);
SamplerState samRandomVec : register(s1);

// determines how much the sample point q occludes the point p as a function of distZ
float occlusionFunction(float distZ)
{
	float occlusion = 0.0f;
	if(distZ > gSurfaceEpsilon)
	{
		float fadeLength = gOcclusionFadeEnd - gOcclusionFadeStart;

		// linearly decrease occlusion from 1 to 0 as distZ goes from fade start to end
		occlusion = saturate((gOcclusionFadeEnd - distZ) / fadeLength);
	}
	return occlusion;
}

float4 main(VertexOut pIn) : SV_TARGET
{
	float3 normal = normalize(normalTexture.SampleLevel(samNormalDepth, pIn.tex, 0.0f).xyz);
	float depth = depthStencilTexture.SampleLevel(samNormalDepth, pIn.tex, 0.0f).r;
	float linearDepth = projectionB / (depth - projectionA);
	float3 position = pIn.viewRay * linearDepth;

	// extract the random vector and map it from [0, 1] to [-1, 1]
	float3 randVec = 2.0f * randomVecMap.SampleLevel(samRandomVec, 4.0f * pIn.tex, 0.0f).rgb - 1.0f;

	float occlusionSum = 0.0f;

	// sample neighboring points about position in the hemisphere oriented by normal
	[unroll]
	for(int i = 0; i < 14; ++i)
	{
		// offset vectors are fixed and uniformly distributed - reflecting them about a random vector gives a random, uniform distribution
		float3 offset = reflect(gOffsetVectors[i].xyz, randVec);

		// flip the offset vector if it is behind the plane defined by (position, normal)
		float flip = sign(dot(offset, normal));

		// sample a point near position within the occlusion radius
		float3 q = position + flip * gOcclusionRadius * offset;

		// project q and generate projective tex-coords
		float4 projQ = mul(float4(q, 1.0f), gViewToTexSpace);
		projQ.xy /= projQ.w;

		// find nearest depth value along ray from eye to q
		float rz = depthStencilTexture.SampleLevel(samNormalDepth, projQ.xy, 0.0f).r;

		// reconstruct full view space position r = (rx, ry, rz)
		linearDepth = projectionB / (rz - projectionA);
		float3 r = pIn.viewRay * linearDepth;

		// test whether r occludes position
		float distZ = position.z - r.z;
		float dp = max(dot(normal, normalize(r - position)), 0.0f);
		float occlusion = dp * occlusionFunction(distZ);

		occlusionSum += occlusion;
	}

	occlusionSum /= 14;

	float access = 1.0f - occlusionSum;

	// sharpen the contrast of the SSAO map to make the effect more dramatic
	return saturate(pow(access, 4.0f));
}

 

 

 

Here is the application code for setting projectionA and projectionB (from Matt's post).

// depth-buffer value = projectionA + projectionB / viewZ, so viewZ = projectionB / (depth - projectionA)
float clipDiff = farClipDistance - nearClipDistance;
float projectionA = farClipDistance / clipDiff;
float projectionB = (-farClipDistance * nearClipDistance) / clipDiff;

 

 

Thanks!


Direct3D 11 Deferred Shading Banding

23 March 2013 - 09:12 AM

Greetings,

 

I am having some issues with my deferred shading implementation that I am spinning my tires on.

 

The back buffer format I am using is:

DXGI_FORMAT_R8G8B8A8_UNORM.

 

My GBuffer render targets are:

 

GBuffer setup    R             G             B             A
SV_Target0       normal.x      normal.y      normal.z      specularPower
SV_Target1       diffuse.r     diffuse.g     diffuse.b     ambient.r
SV_Target2       specular.r    specular.g    specular.b    ambient.g
SV_Target3       position.x    position.y    position.z    ambient.b
 

SV_Target0 is DXGI_FORMAT_R32G32B32A32_FLOAT

SV_Target1 is DXGI_FORMAT_R8G8B8A8_UNORM

SV_Target2 is DXGI_FORMAT_R8G8B8A8_UNORM

SV_Target3 is DXGI_FORMAT_R32G32B32A32_FLOAT

 

The texture used for my light pass (additive blending) is also DXGI_FORMAT_R32G32B32A32_FLOAT.
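
For reference, the setup side just creates the four targets with the formats listed above, roughly like this (abbreviated, with error handling and SRV creation omitted):

// Abbreviated sketch of creating the G-Buffer targets with the formats listed above.
#include <d3d11.h>

void createGBufferTargets(ID3D11Device* device, UINT width, UINT height,
                          ID3D11Texture2D* textures[4], ID3D11RenderTargetView* rtvs[4])
{
    const DXGI_FORMAT formats[4] =
    {
        DXGI_FORMAT_R32G32B32A32_FLOAT, // SV_Target0: normal.xyz + specular power
        DXGI_FORMAT_R8G8B8A8_UNORM,     // SV_Target1: diffuse.rgb + ambient.r
        DXGI_FORMAT_R8G8B8A8_UNORM,     // SV_Target2: specular.rgb + ambient.g
        DXGI_FORMAT_R32G32B32A32_FLOAT  // SV_Target3: position.xyz + ambient.b
    };

    for (int i = 0; i < 4; ++i)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = formats[i];
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

        device->CreateTexture2D(&desc, nullptr, &textures[i]);
        device->CreateRenderTargetView(textures[i], nullptr, &rtvs[i]);
    }
}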

 
I know some of the RGBA32F textures may be overkill, but I'm just trying to get things working before worrying about saving too much on bandwidth and other similar concerns - with such simple scenes (one sphere and a plane for the ground, oh my!) there's not too much performance impact. :)
 
I have Googled around and tried a number of different fixes, such as using a DXGI_FORMAT_R16G16B16A16_FLOAT back buffer format and changing the G-Buffer and light-pass render target formats around, but so far everything still shows the ugly banding.

 

Here is an example of what is going on.

[Attached image: ugly banding r8g8b8a8_unorm_bbuffer.PNG]

 

Any insight into possible fixes I may have missed would be greatly appreciated.  I may be missing something obvious and have just been staring at it for too long.  If there is any other information I can supply to better troubleshoot this issue, just let me know.

 

Thanks!

 

