

Member Since 18 Jan 2012
Offline Last Active Jan 11 2015 08:01 AM

Topics I've Started

Problem with AntTweakBar and multiple objects

29 December 2013 - 05:24 AM

I've recently gotten AntTweakBar working with my graphics engine.

As a quick test, I used the AntTweakBar GUI to adjust the position of a single object in space.


What I want to do now is extend this to multiple objects, so that I can change which object is active and adjust each object's position individually.


Here's where I initialize the GUI:

bool init(void)
{
	myBar = TwNewBar("GiBOX");
	TwDefine(" GiBOX size='240 320' ");
	TwDefine(" GiBOX valueswidth=140 ");

	// Directly redirect GLUT mouse button events to AntTweakBar
	glutMouseFunc((GLUTmousebuttonfun)TwEventMouseButtonGLUT);
	// Directly redirect GLUT mouse motion events to AntTweakBar
	glutMotionFunc((GLUTmousemotionfun)TwEventMouseMotionGLUT);
	// Directly redirect GLUT mouse "passive" motion events to AntTweakBar (same as MouseMotion)
	glutPassiveMotionFunc((GLUTmousemotionfun)TwEventMouseMotionGLUT);
	// Directly redirect GLUT key events to AntTweakBar
	glutKeyboardFunc((GLUTkeyboardfun)TwEventKeyboardGLUT);
	// Directly redirect GLUT special key events to AntTweakBar
	glutSpecialFunc((GLUTspecialfun)TwEventSpecialGLUT);

	TwAddVarRW(myBar, "Object Id", TW_TYPE_INT32, &objId, "");
	TwAddVarRW(myBar, "Object Position", TW_TYPE_DIR3F, &(models->objPos[objId]), "");

	return true;
}


I use the object id to specify which object is active, so in the GUI I can change the object id.

The problem is that even when I change the object id, the bar only ever adjusts the position of the first object (objId = 0).


It seems that the &(models->objPos[objId]) I pass to TwAddVarRW is evaluated once, using the value objId has at init time, and changing objId later doesn't rebind it.

Has anyone who's used AntTweakBar gotten it to work with multiple objects, updating their attributes individually?

Voxel Cone Tracing Experiment - Part 2 Progress

25 September 2013 - 05:01 PM



I thought it might be time to give you folks an update on the progress of my voxel cone tracing (VCT) test engine, written in OpenGL 4.3. Everything is 100% dynamic, running at approximately 20fps on a GTX 485M @ 1024x768:


[Attached: giboxv3-0.png, giboxv3-1.png, giboxv3-2.png, giboxssr7.png, giboxssr9.png]


The VCT itself (with "unlimited" bounces by temporal coherence) runs at around 50fps @ 1024x768.

The effects that slow this down to 20fps are:

  • Screen space jittered soft shadows (all omnidirectional, with maximum 1 direct light and 3 emissive lights)
  • Screen space reflections
  • Screen space ambient occlusion (Arkano22's method)

The volume is currently limited to this scene (I'm using a single 64x64x64 3D texture), but I intend to try a "camera-centred cascades" approach to see whether I can get consistent interactive framerates in a much larger scene. My main concern with cascades is that they involve dynamically shifting the volume texture, which leads to large-scale flickering; this flickering is already evident when I move emissive objects around in my current implementation.


I also want to try optimizing this engine, firstly by trying to implement bindless graphics.


Here is a video of my latest engine:



You'll have to excuse the banding artifacts - the quality of the video makes them look worse than they actually are!


Link to part one, which details the journey I've been on since January: http://www.gamedev.net/topic/638933-voxel-cone-tracing-experiment-progress/

Alchemy Ambient Occlusion

21 September 2013 - 06:30 AM

Hello all,


I've recently ported some Alchemy AO code to GLSL in my engine and am getting decent results:


[Attached: giboxao1.png, giboxao2.png]


The only issue seems to be the thin black edge around each object, which is exactly 1 pixel thick. Does anyone who has implemented Alchemy AO know what the cause of it is and how I can minimize it?


Here's my code:

const vec2 focalLength = vec2(1.0/tan(45.0*0.5)*screenRes.y/screenRes.x, 1.0/tan(45.0*0.5)); // note: tan() expects radians

float linearDepth(float d, float near, float far)
{
	d = d*2.0 - 1.0; // window depth [0,1] to NDC [-1,1]
	vec2 lin = vec2((near-far)/(2.0*near*far), (near+far)/(2.0*near*far));
	return -1.0/(lin.x*d + lin.y); // eye-space z (negative)
}

vec3 UVtoViewSpace(vec2 uv, float z)
{
	vec2 UVtoViewA = vec2(-2.0/focalLength.x, -2.0/focalLength.y);
	vec2 UVtoViewB = vec2(1.0/focalLength.x, 1.0/focalLength.y);
	uv = UVtoViewA*uv + UVtoViewB;
	return vec3(uv*z, z);
}

vec3 GetViewPos(vec2 uv)
{
	float z = linearDepth(texture(depthTex, uv).r, 0.1, 100.0);
	return UVtoViewSpace(uv, z);
}

vec2 RotateDir(vec2 dir, vec2 cosSin)
{
	return vec2(dir.x*cosSin.x - dir.y*cosSin.y, dir.x*cosSin.y + dir.y*cosSin.x);
}

vec2 tapLocation(int sampleNumber, float spinAngle, out float ssR)
{
	float alpha = float(sampleNumber + 0.5) * (1.0 / NUM_SAMPLES);
	float angle = alpha * (NUM_SPIRAL_TURNS * 6.28) + spinAngle;
	ssR = alpha;
	return vec2(cos(angle), sin(angle));
}

void main()
{
	float ao = 0.0;
	float aoRad = 0.5;
	float aoBias = 0.002;
	vec2 aoTexCoord = gl_FragCoord.xy/vec2(screenRes.x, screenRes.y);
	vec2 aoRes = vec2(screenRes.x, screenRes.y);
	vec3 aoRand = texture(aoRandTex, aoTexCoord*aoRes/1).xyz;
	vec3 aoC = GetViewPos(aoTexCoord);
	vec3 aoN_C = normalize(cross(dFdx(aoC), dFdy(aoC))); // normal reconstructed from depth derivatives
	float uvDiskRadius = 0.5*focalLength.y*aoRad/aoC.z;
	vec2 pixelPosC = gl_FragCoord.xy;
	pixelPosC.y = screenRes.y - pixelPosC.y;
	float randPatternRotAngle = (3*int(pixelPosC.x)^int(pixelPosC.y) + int(pixelPosC.x)*int(pixelPosC.y))*10.0;
	float aoAlpha = 2.0*pi/NUM_SAMPLES + randPatternRotAngle;
	for(int i = 0; i < NUM_SAMPLES; ++i)
	{
		float ssR;
		vec2 aoUnitOffset = tapLocation(i, randPatternRotAngle, ssR);
		ssR *= uvDiskRadius;
		vec2 aoTexS = aoTexCoord + ssR*aoUnitOffset;
		vec3 aoQ = GetViewPos(aoTexS);
		vec3 aoV = aoQ - aoC;
		float aoVv = dot(aoV, aoV);
		float aoVn = dot(aoV, aoN_C);
		float aoEp = 0.01;
		float aoF = max(aoRad*aoRad - aoVv, 0.0);
		ao += aoF*aoF*aoF*max((aoVn - aoBias)/(aoEp + aoVv), 0.0);
	}
	float aoTemp = aoRad*aoRad*aoRad;
	ao /= aoTemp*aoTemp;
	ao = max(0.0, 1.0 - ao*(1.0/NUM_SAMPLES));

	color = vec4(vec3(ao), 1.0);
}

Recursive Screen-Space Reflections

14 September 2013 - 01:07 AM

I've managed to implement screen-space reflections in my forward renderer, and now I'm trying to go a step further by rendering recursive reflections (i.e. reflections of reflections of reflections).

I've been trying to do this in a single pass:

  1. render the camera view of the scene to a 2D texture in the first frame;
  2. feed this texture back into the same shader pass in the next frame, and perform the SSR calculations there using it;
  3. add the SSR results to the original color; this is rendered into the same 2D texture as in step 1, and the feedback loop continues.

Here are my results:


[Attached: giboxssr3.png]


Several issues are immediately noticeable:


The most noticeable one in the screenshot above is that reflections within reflections become distorted.

I suspect this is due to the normals used in the SSR calculations: they are always the original normals of the scene, so the higher-order reflections incorrectly use the normals of the base geometry (e.g. look at the reflections on the floor).

A solution may be to reflect the normals as well and store them in a new texture, which would then feed every subsequent normal lookup in the SSR algorithm.


Another big problem, which you can't see in the screenshot above, is that any movement in the scene shows a lag between the original color and the reflected color. This only happens when I feed the scene texture back into the same render pass; with two passes it doesn't occur, but then I lose the free recursive reflections. I guess the lag arises because, in a single pass, the renderer has to wait until all the other effect calculations (shadows, global illumination, etc.) are complete.


I've also set the recursive attempt aside and used a single reflection, so that I can address some of the other issues.

The main issue is how to mask what the camera cannot see: at the moment these areas are black in the reflections, because there is no information for them in the captured scene; they are either occluded by other objects or outside the view.


[Attached: giboxssr4.png, giboxssr5.png]


Even if I apply a fade, the areas outside the view are still noticeable.

Screen-space reflections camera angle issues

10 September 2013 - 08:20 AM

I've tried to implement screen-space reflections and it seems to work but only at a very tight reflection angle:


[Attached: giboxssr0.jpg]


Here's what happens when I increase the camera angle to the surface:


[Attached: giboxssr1.jpg]


And if you look at the image below, when I am facing the surface, it just renders the entire scene from the camera's point of view, instead of a reflection:


[Attached: giboxssr2.jpg]


I understand that SSR can only render what the camera sees, so at larger angles to a surface the reflection will disappear; however, in my case it only works at very tight angles, and I've seen implementations that still show a correct reflection at larger angles.


Here's my code:

	vec4 bColor = vec4(0.0);
	float reflDist = 0.0;
	vec3 screenSpacePos;
	E = normalize(camPos - gsout.worldPos.xyz);
	reflDir = normalize(reflect(-E, bumpN));
	float currDepth = 0.1;

	for(int i = 0; i < 20; i++)
	{
		vec4 clipSpace = proj*view*vec4(gsout.worldPos.xyz + reflDir*reflDist, 1.0);
		vec3 NDCSpace = clipSpace.xyz/clipSpace.w;
		screenSpacePos = 0.5*NDCSpace + 0.5;

		float sampleDepth = texture(reflTex, screenSpacePos.xy).w;
		currDepth = clipSpace.z; // clip-space z, before the divide by w
		float diff = currDepth - sampleDepth;
		if(diff < 0)
			bColor.xyz = texture(reflTex, screenSpacePos.xy).xyz;
		reflDist += 0.1;
	}
reflTex stores the screen-space rendering of the scene in its xyz components and length(camPos.xyz - worldPos.xyz) in its w component, in a 32-bit floating-point texture.


Would anyone be able to give me some tips on what I may be doing wrong?