
# DX11 having problems debugging SSAO

## Recommended Posts

Posted (edited)

Please look at my new post in this thread where I supply new information!

I'm trying to implement SSAO in my 'engine' (based on this article) but I'm getting odd results. I know I'm doing something wrong but I can't figure out what's causing the particular issue I'm having at the moment.

" rel="external">Here's a video of what it looks like . The rendered output is the SSAO map.

As you can see, the result is heavily altered depending on the camera (although it seems to be unaffected by camera translation). The fact that the occlusion itself isn't correct isn't much of a problem at this stage, since I've hardcoded a lot of stuff that shouldn't be. E.g. I don't have a random-vector texture; all I do is use one of the sample vectors in order to construct the TBN matrix.
One issue at a time...

```hlsl
//SSAO VS
struct VS_IN
{
    float3 pos : POSITION;
    float3 ray : VIEWRAY;
};

struct VS_OUT
{
    float4 pos : SV_POSITION;
    float4 ray : VIEWRAY;
};

VS_OUT VS_main( VS_IN input )
{
    VS_OUT output;
    output.pos = float4(input.pos, 1.0f); //already in NDC space, pass through
    output.ray = float4(input.ray, 0.0f); //interpolate view ray
    return output;
}
```

```hlsl
//SSAO PS
Texture2D depthTexture  : register(t0);
Texture2D normalTexture : register(t1);
SamplerState defaultSampler : register(s0); // assumed; the snippet as posted omitted the sampler

struct VS_OUT
{
    float4 pos : SV_POSITION;
    float4 ray : VIEWRAY;
};

cbuffer cbViewProj : register(b0)
{
    float4x4 view;
    float4x4 projection;
}

float4 PS_main(VS_OUT input) : SV_TARGET
{
    //Generate samples
    float3 kernel[8];
    kernel[0] = float3( 1.0f,  1.0f, 1.0f);
    kernel[1] = float3(-1.0f, -1.0f, 0.0f);
    kernel[2] = float3(-1.0f,  1.0f, 1.0f);
    kernel[3] = float3( 1.0f, -1.0f, 0.0f);
    kernel[4] = float3( 1.0f,  1.0f, 0.0f);
    kernel[5] = float3(-1.0f, -1.0f, 1.0f);
    kernel[6] = float3(-1.0f,  1.0f, 0.0f);
    kernel[7] = float3( 1.0f, -1.0f, 1.0f);

    //Get texcoord using SV_POSITION
    int3 texCoord = int3(input.pos.xy, 0);

    //Fragment view-space position (non-linear depth)
    float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);

    //World-space normal transformed to view space and normalized
    float3 normal = normalize(mul(view, float4(normalTexture.Load(texCoord).xyz, 0.0f)).xyz);

    //Grab arbitrary vector for construction of TBN matrix
    float3 rvec = kernel[3];
    float3 tangent = normalize(rvec - normal * dot(rvec, normal));
    float3 bitangent = cross(normal, tangent);
    float3x3 tbn = float3x3(tangent, bitangent, normal);

    float occlusion = 0.0;
    for (int i = 0; i < 8; ++i) {
        // get sample position:
        float3 samp = mul(tbn, kernel[i]);
        samp = samp * 1.0f + origin;

        // project sample position:
        float4 offset = float4(samp, 1.0);
        offset = mul(projection, offset);
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;

        // get sample depth (again, non-linear depth);
        // this fetch was missing from the snippet as posted:
        float sampleDepth = depthTexture.Sample(defaultSampler, offset.xy).r;

        // range check & accumulate:
        occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);
    }

    //Average occlusion
    occlusion /= 8.0;

    return min(occlusion, 1.0f);
}
```

I'm fairly sure my matrices are correct (view and projection) and that the input rays are correct.
I don't think the non-linear depth is the problem here either, but what do I know. I haven't switched to linear depth mostly because I don't really understand how it's done...

Any ideas are very appreciated!

Edited by GreenGodDiary

##### Share on other sites
Posted (edited)

Bumping with new information. I'm getting quite desperate; if someone could help me out I would be forever grateful <3

I have revamped my way of constructing the view-space position. Instead of directly binding my DepthStencil as a shader resource (which, thinking back, made no sense to do), I'm now outputting 'positionVS.z / FarClipDistance' to a texture in the G-buffer pass and using that, and remaking my viewRays in the following way (1000.0f is FarClipDistance):

```cpp
//create corner view rays
float thfov = tan(fov / 2.0);
float verts[24]
{
    -1.0f, 1.0f, 0.0f,                              //Pos TopLeft corner
    -1.0f * thfov * aspect, 1.0f * thfov, 1000.0f,  //Ray

    1.0f, 1.0f, 0.0f,                               //Pos TopRight corner
    1.0f * thfov * aspect, 1.0f * thfov, 1000.0f,   //Ray

    -1.0f, -1.0f, 0.0f,                             //Pos BottomLeft corner
    -1.0f * thfov * aspect, -1.0f * thfov, 1000.0f, //Ray

    1.0f, -1.0f, 0.0f,                              //Pos BottomRight corner
    1.0f * thfov * aspect, -1.0f * thfov, 1000.0f,  //Ray
};
```

In my SSAO PS, I reconstruct view-space position like this:

```hlsl
float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);
origin.x *= 1000;
origin.y *= 1000;
```

Why do I multiply by 1000? Because it works. Why does it work? I don't know, but this gives me the same value that I had in the G-pass vertex shader. If someone knows why this works (or why it shouldn't), do tell me.

Anyway, next I get the world-space normal from the G-buffer and multiply by my view matrix to get view-space normal:

```hlsl
float3 normal = normalTexture.Load(texCoord).xyz;
normal = mul(view, normal);
normal = normalize(normal);
```

I now have a random-vector-texture that I sample.
Next I construct the TBN matrix using this vector and the view-space normal:

```hlsl
float3 rvec = randomTexture.Sample(randomSampler, input.pos.xy).xyz;
rvec.z = 0.0;
rvec = normalize(rvec);
float3 tangent = normalize(rvec - normal * dot(rvec, normal));
float3 bitangent = normalize(cross(normal, tangent));
float3x3 tbn = float3x3(tangent, bitangent, normal);
```

This is where I'm not sure if I'm doing it right. I am doing it exactly like the article in the original post; however, since that article uses OpenGL, maybe something is different here?
The reason this part looks suspicious to me is that when I later use it, I get values that to me don't make sense.

```hlsl
float3 samp = mul(tbn, kernel[i]);
samp = samp + origin;
```

samp here is what looks odd to me. If the values are indeed wrong, I must be constructing my TBN matrix wrong somehow.

Next up, projecting samp in order to get the offset in NDC so that I can then sample the depth of samp:

```hlsl
float4 offset = float4(samp, 1.0);
offset = mul(offset, projection);
offset.xy /= offset.w;
offset.xy = offset.xy * 0.5 + 0.5;

// get sample depth:
float sampleDepth = depthTexture.Sample(defaultSampler, offset.xy).r;

occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);
```

The result is still nowhere near what you'd expect. It looks slightly better than the video linked in the original post, but it's still the same story: huge odd artifacts that change heavily based on the camera's orientation.

What am I doing wrong?

Help, I'm dying.

Edited by GreenGodDiary

Bump. (sorry)

##### Share on other sites

While just briefly reading the code (it's quite hard to say what is going on; your SSAO calculation doesn't look correct to me, though), here are a few notes which might lead you to where the issue is:

• Make sure you know which space you are in (world space, view space, object space, etc.); getting this wrong is one of the usual causes of view-dependent errors.
• Do NOT multiply by random constants that make it "look good"; make sure each constant has a reason to be there, and put that reason in a comment.
• Compare everything: you can write out 'view space normals', 'view space position', etc. when generating the G-Buffer (into another buffer) and compare them against your reconstruction. This way you can prove that your input data is correct.

Now, for the SSAO:

• Make sure you're sampling in the hemisphere ABOVE the point, in the direction of the normal. With your specified vectors you will also attempt to sample in the opposite hemisphere.
• You will need some randomization (otherwise you will need a lot of samples to make the SSAO look like anything resembling SSAO).
• I also recommend checking out other shaders doing SSAO, e.g. on ShaderToy: https://www.shadertoy.com/view/4ltSz2 . I'm intentionally adding it here so you can compare against the actual SSAO calculation, as yours does seem incorrect to me.

##### Share on other sites
8 hours ago, Vilem Otte said:


Thanks a lot for these pointers, I will definitely look into it further using your advice.
One question though:

Quote

From your specified vectors you will also attempt to sample in the opposite hemisphere.

Are you sure this is the case? Because my kernel vectors are in the range ([-1, 1], [-1, 1], [0, 1]), won't it exclusively sample from the "upper" hemisphere? Or am I thinking about it wrong?

Thanks again