gboxentertainment

Alchemy Ambient Occlusion


Hello all,

 

I've recently ported some Alchemy AO code to GLSL in my engine and I'm getting decent results:

 

[attachment=17995:giboxao1.png][attachment=17996:giboxao2.png]

 

The only issue is a thin black edge around each object, exactly 1 pixel thick. Does anyone who has implemented Alchemy AO know what causes it and how I can minimize it?

 

Here's my code:

// 45-degree vertical FOV; note tan() takes radians, so the angle must be converted.
const vec2 focalLength = vec2(1.0/tan(radians(45.0)*0.5)*screenRes.y/screenRes.x,
                              1.0/tan(radians(45.0)*0.5));

// Converts a [0,1] depth-buffer value back to linear view-space depth.
float linearDepth(float d, float near, float far)
{
	d = d*2.0 - 1.0;
	vec2 lin = vec2((near - far)/(2.0*near*far), (near + far)/(2.0*near*far));
	return -1.0/(lin.x*d + lin.y);
}

// Unprojects a screen-space UV plus linear depth into view space.
vec3 UVtoViewSpace(vec2 uv, float z)
{
	vec2 UVtoViewA = vec2(-2.0/focalLength.x, -2.0/focalLength.y);
	vec2 UVtoViewB = vec2(1.0/focalLength.x, 1.0/focalLength.y);
	uv = UVtoViewA*uv + UVtoViewB;
	return vec3(uv*z, z);
}

vec3 GetViewPos(vec2 uv)
{
	float z = linearDepth(texture(depthTex, uv).r, 0.1, 100.0);
	return UVtoViewSpace(uv, z);
}

vec2 RotateDir(vec2 dir, vec2 cosSin)
{
	return vec2(dir.x*cosSin.x - dir.y*cosSin.y,
	            dir.x*cosSin.y + dir.y*cosSin.x);
}

// Returns the i-th sample direction on a spiral; ssR is that sample's
// normalized radius in [0,1].
vec2 tapLocation(int sampleNumber, float spinAngle, out float ssR)
{
	float alpha = (float(sampleNumber) + 0.5) * (1.0 / NUM_SAMPLES);
	float angle = alpha * (NUM_SPIRAL_TURNS * 6.28) + spinAngle;
	ssR = alpha;
	return vec2(cos(angle), sin(angle));
}

void main()
{
	float ao = 0.0;
	float aoRad = 0.5;    // world-space sampling radius
	float aoBias = 0.002;
	vec2 aoTexCoord = gl_FragCoord.xy/screenRes;
	vec3 aoRand = texture(aoRandTex, aoTexCoord*screenRes).xyz; // currently unused; the hash below drives the rotation
	vec3 aoC = GetViewPos(aoTexCoord);
	// Face normal reconstructed from screen-space derivatives of the position.
	vec3 aoN_C = normalize(cross(dFdx(aoC), dFdy(aoC)));
	// Project the world-space radius to a UV-space disk radius at this depth.
	float uvDiskRadius = 0.5*focalLength.y*aoRad/aoC.z;
	vec2 pixelPosC = gl_FragCoord.xy;
	pixelPosC.y = screenRes.y - pixelPosC.y;
	// Per-pixel hash from the Alchemy AO paper, used to rotate the sample spiral.
	float randPatternRotAngle = float(3*int(pixelPosC.x)^int(pixelPosC.y)+int(pixelPosC.x)*int(pixelPosC.y))*10.0;
	for(int i = 0; i < NUM_SAMPLES; ++i)
	{
		float ssR;
		vec2 aoUnitOffset = tapLocation(i, randPatternRotAngle, ssR);
		ssR *= uvDiskRadius;
		vec2 aoTexS = aoTexCoord + ssR*aoUnitOffset;
		vec3 aoQ = GetViewPos(aoTexS);
		vec3 aoV = aoQ - aoC;
		float aoVv = dot(aoV, aoV);
		float aoVn = dot(aoV, aoN_C);
		float aoEp = 0.01;
		// Cubic falloff windowed to the sampling radius.
		float aoF = max(aoRad*aoRad - aoVv, 0.0);
		ao += aoF*aoF*aoF*max((aoVn - aoBias)/(aoEp + aoVv), 0.0);
	}
	// Divide by aoRad^6 to cancel the scale of the falloff term.
	float aoTemp = aoRad*aoRad*aoRad;
	ao /= aoTemp*aoTemp;
	ao = max(0.0, 1.0 - ao*(1.0/NUM_SAMPLES));

	color = vec4(vec3(ao), 1.0);
}
Edited by gboxentertainment
Okay, I've resolved the edge problem.

It turned out that reconstructing the camera-space normals from the camera-space position (itself reconstructed from depth) is quite inaccurate at depth discontinuities.

Sampling from an actual normal texture eliminated the problem:

vec3 aoN_C = camSpaceNorm;//normalize(cross(dFdx(aoC),dFdy(aoC)));

[attachment=17997:giboxao3.png]

 

Now I've just got to add bilateral filtering. I'll provide updates when I've done that.
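Roughly what I have in mind is a separable, depth-weighted blur (one horizontal pass, one vertical). This is only a sketch - `blurDir`, `pixSize` and the weight constants are placeholders I'd still need to tune, and `linearDepth` is the same function as in the AO shader above:

```glsl
uniform sampler2D aoTex;     // noisy AO result from the pass above
uniform sampler2D depthTex;  // same depth buffer the AO pass uses
uniform vec2 blurDir;        // (1,0) for the horizontal pass, (0,1) for the vertical
uniform vec2 pixSize;        // 1.0 / screen resolution

float blurAO(vec2 uv)
{
	float centerD = linearDepth(texture(depthTex, uv).r, 0.1, 100.0);
	float total = 0.0;
	float weightSum = 0.0;
	for(int i = -4; i <= 4; ++i)
	{
		vec2 sampleUV = uv + blurDir*pixSize*float(i);
		float d = linearDepth(texture(depthTex, sampleUV).r, 0.1, 100.0);
		// Gaussian spatial weight times a depth-similarity weight, so
		// samples across a depth discontinuity contribute almost nothing.
		float w = exp(-float(i*i)/8.0) * exp(-abs(d - centerD)*8.0);
		total += texture(aoTex, sampleUV).r*w;
		weightSum += w;
	}
	return total/weightSum;
}
```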


Glad you got it working. If you don't like the extra textures, you can also sample your position texture at offsets and do the derivatives manually, which removes the 2x2 pixel artifacts.


> Glad you got it working. If you don't like the extra textures, you can also sample your position texture at offsets and do the derivatives manually, which removes the 2x2 pixel artifacts.

 

But I thought that was what was causing the problem in the first place?

vec3 aoC = GetViewPos(aoTexCoord);
vec3 aoN_C = normalize(cross(dFdx(aoC),dFdy(aoC)));

Or are you saying that I should offset aoTexCoord when sampling my depth texture? By how much?


Actually I mean you can manually compute the derivatives instead of using dFdx and dFdy, which are the cause of the artifacts (they operate on 2x2 pixel blocks).

 

Essentially you just need to recompute your position at 4 different locations (sides of the pixel) and get the difference between them and the center. It's obviously more expensive than dFdx/dFdy (I'd assume :P) because you're sampling 4 more times, but it's artifact free and doesn't require storing a normal texture (if you're not using one for anything else).

 

Here's some pseudo code (not really tested but should work if I'm remembering this right):

vec3 aoC = GetViewPos( aoTexCoord );

vec3 aoC0 = GetViewPos( aoTexCoord + vec2( pixSize.x, 0.0 ) );
vec3 aoC1 = GetViewPos( aoTexCoord - vec2( pixSize.x, 0.0 ) );
vec3 aoC2 = GetViewPos( aoTexCoord + vec2( 0.0, pixSize.y ) );
vec3 aoC3 = GetViewPos( aoTexCoord - vec2( 0.0, pixSize.y ) );

vec3 dx = GetDerivative( aoC, aoC0, aoC1 );
vec3 dy = GetDerivative( aoC, aoC2, aoC3 );

vec3 aoN_C = normalize( cross( dx, dy ) );

pixSize is the inverse of your resolution (1 / res). The GetDerivative function looks something like this:

vec3 GetDerivative( vec3 p0, vec3 p1, vec3 p2 )
{
    vec3 v1 = p1 - p0;
    vec3 v2 = p0 - p2;
    return ( dot( v1, v1 ) < dot( v2, v2 ) ) ? v1 : v2;
}

Try it out and see if it works for you. If you're not going to use your normal texture elsewhere, the cost of generating it from geometry might be a bit much. :)

Edited by Styves

Thanks for the advice - I haven't tried it yet though because I'm reusing a normal map from a previous pass.

 

Anyways, here are my results of cross-bilateral filtering:

 

[attachment=18004:giboxao4.png]

[attachment=18005:giboxao5.png]

 

It's reasonable, but my main concern is that the blurring does not completely hide the noise. I have to use more samples, which is slower, and even then the random sampling directions of this method seem too uniform - i.e. the screen-space artifact of "hovering" occlusion is still apparent - which is not much better than standard SSAO.

Another disadvantage is the occluder distance falloff - I can only increase it by increasing the sampling radius (there must be some way to make this independent).
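One idea I might try: the radius-dependence comes from the `max(aoRad*aoRad - aoVv, 0.0)` falloff term in the sampling loop, which windows the falloff to the sampling radius. Normalizing the squared occluder distance by a separate distance instead would let the two be tuned independently (just a sketch - `aoFalloff` is a new tweakable that isn't part of the original Alchemy formulation):

```glsl
float aoFalloff = 0.3;  // occluder-distance falloff, now independent of aoRad
// Falloff is 1 at zero occluder distance and reaches 0 at aoFalloff,
// regardless of how large the sampling radius is.
float aoF = max(1.0 - aoVv/(aoFalloff*aoFalloff), 0.0);
ao += aoF*aoF*aoF*max((aoVn - aoBias)/(aoEp + aoVv), 0.0);
// Since aoF is now normalized to [0,1], the rad^6 divide after the loop
// would be dropped and only the 1/NUM_SAMPLES normalization kept.
```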

However, the main advantage of Alchemy AO is that flat surfaces are not incorrectly occluded.

 

I've also implemented the method by Arkano22 from these forums (http://www.gamedev.net/topic/550699-ssao-no-halo-artifacts/)

Here's a comparison below (left - Alchemy AO; right - Arkano AO):

 

[attachment=18007:giboxao6-1.png][attachment=18006:giboxao6.png]

 

Comparing the two: Arkano's method allows independently adjusting the occluder distance falloff, so occlusion can begin when the occluder is some distance away from the occluded surface. It is also faster because it does not require any blur filtering. However, it suffers from slight incorrect occlusion of flat surfaces.

Alchemy AO avoids the incorrect occlusion of flat surfaces but the blurred noise artifacts are still apparent.

 

Here's another comparison (left - Alchemy AO; right - Arkano AO):

 

[attachment=18009:giboxao7-alch.png][attachment=18010:giboxao7-ark.png]

 

This is an emissive object with specularity/reflection turned to its highest setting.

The Alchemy AO in the left picture captures all the subtle details, but the blurring makes it look like a diffuse material.

Arkano's method does not capture all the details, but still retains the specular nature of the material.

 

Here's a picture that emphasizes the only problem which bugs me about Arkano's method:

[attachment=18012:giboxao8-ark.png]

 

Everything else looks really good, except for the per vertex occlusion artifacts.

With Arkano's AO, I am only using a depth buffer and nothing else. So it must be the depth buffer causing this.


Hi,

Good results so far. For the noise/blur problem - on console (where we use fewer samples), we use a regular (4x4) random rotation pattern (with 16 random angles in a texture), rather than completely random at every pixel. That helps tremendously - you get slightly worse (theoretical) total information extracted, but the noise is regular (and limited to a small window), so the blur is able to hide it much better.
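As a sketch of what I mean (names are placeholders): fill a 4x4 texture with the cos/sin of 16 random angles once at startup, give it NEAREST filtering and REPEAT wrapping, then tile it across the screen so every 4x4 block of pixels reuses the same 16 angles:

```glsl
uniform sampler2D rotTex;  // 4x4, RG = cos/sin of a random angle,
                           // remapped to [0,1] when the texture is filled

// gl_FragCoord.xy / 4.0 together with REPEAT wrapping tiles the 4x4
// pattern across the screen, so the noise repeats in a small window
// and a matching blur kernel hides it well.
vec2 cosSin = texture(rotTex, gl_FragCoord.xy/4.0).xy*2.0 - 1.0;
vec2 rotatedDir = RotateDir(sampleDir, cosSin);
```

Here `sampleDir` stands for whatever per-sample direction you're rotating (the spiral tap direction in your shader).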

