bzroom

Member Since 20 Jan 2003
Offline Last Active Jun 16 2013 02:39 AM

Topics I've Started

Linear depth and double fast Z render

09 February 2012 - 03:14 AM

On the 360, I'm trying to accomplish two things at once.

Double fast Z render: disable the pixel shader and color writes; the vertex shader feeds the depth buffer only.
Output linear depth: used for reconstructing world position and depth in later stages of the pipeline.

I've found all sorts of material on linear depth, but a lot of it is contradictory.

You've got this, which seems too good to be true and has been denounced in other threads:
http://www.mvps.org/directx/articles/linear_z/linearz.htm

This, which requires a pixel shader and a render target or a fragment depth change:
http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/

What's the best way to put it all together?
Is it possible to output "double fast" linear-z?
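To make the second goal concrete, here's a minimal CPU-side sketch of the reconstruction I have in mind (names and parameters are my own, and it assumes the stored depth is view-space Z divided by the far plane):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Illustrative only: rebuild a view-space position from a stored linear depth
// (viewZ / farPlane) and the pixel's NDC coordinates. tanHalfFovY and aspect
// describe the projection the scene was rendered with.
Vec3 reconstructViewPos( float linearDepth, float ndcX, float ndcY,
                         float farPlane, float tanHalfFovY, float aspect )
{
    const float viewZ = linearDepth * farPlane;
    // At distance viewZ the frustum half-extents are viewZ * tanHalfFovY
    // vertically and viewZ * tanHalfFovY * aspect horizontally.
    Vec3 p;
    p.x = ndcX * tanHalfFovY * aspect * viewZ;
    p.y = ndcY * tanHalfFovY * viewZ;
    p.z = viewZ;
    return p;
}
```

In a shader this collapses to scaling a per-pixel view ray by the sampled depth, which is why linear depth is so convenient for position reconstruction.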

G Buffer

16 December 2011 - 04:28 AM

Here's my format so far

RGBA8: diffuse RGB, 8 bits each; shininess, 7 bits; emissive flag, 1 bit
RGBA8: normal XYZ, 8 bits each; specular value, 8 bits
D24S8: depth, 24 bits; stencil unused
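For reference, the alpha channel of the first target is packed roughly like this (a stand-alone sketch with my own names; shininess is assumed normalized to [0, 1]):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Low 7 bits: quantized shininess. High bit: "is emissive" flag.
uint8_t packShininessEmissive( float shininess01, bool emissive )
{
    const uint8_t s = static_cast<uint8_t>( shininess01 * 127.f + 0.5f ); // 0..127
    return static_cast<uint8_t>( ( emissive ? 0x80 : 0x00 ) | s );
}

void unpackShininessEmissive( uint8_t packed, float& shininess01, bool& emissive )
{
    emissive    = ( packed & 0x80 ) != 0;
    shininess01 = ( packed & 0x7F ) / 127.f;
}
```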

The problem is that I'm not storing both emission and diffuse, so an object must be one or the other. Unfortunately, the artists will not accept this.

I could:
* Add a render target (seems excessive for such a simple emission effect).
* Somehow use the stencil or existing channels differently (material ID?).
* Render emissive objects in the transparent/blending pass afterwards.
* Somehow modulate diffuse and emission together and store some kind of unpacking ratio. (That's what I'm doing right now, except the ratio is binary.)

What are your thoughts?

Being a good technical director

01 December 2011 - 03:45 AM

I've recently been given the position of technical director at my studio. I've worked there for almost two years. When I started there were 7 employees; now there are nearly 40, 8 of whom are programmers, spread across 5 projects! I've been there the longest and have the most experience with the code base. I also feel that I am the most personally connected to the existing code, and I seek to preserve its essence and ensure that it's extended in the right directions.

As with any promotion, I'm having a bit of stage fright at the moment. Being in charge is a lot of pressure. I need to stay open-minded and listen to the needs and suggestions of my colleagues, but I also need to be firm and direct about my stance on matters.

On my first day as TD we re-interviewed a programming candidate, and I decided we needed a set of real questions, not just the "hey, how ya doin'? do you like programming? what did you do in school?" questions. I feel I put together a very fair set of questions covering a wide spectrum of topics. It was by far our longest interview, and unfortunately the candidate did not pass. For the company, I feel this was a move in the right direction.

I really love designing and writing code, and I don't want to see myself slip into a purely managerial role. I want to lead with experience and prove by example. I do have some possessive issues when it comes to programming, though: I'd rather do everything myself and see that it gets done correctly than give someone else the chance to screw it up.

But now I need to delegate more than ever: leading game/engine programmers, IT, art leads, designers, and test teams. I must really live up to the name "director". And I most certainly need to get more organized, and start documenting and planning more. I need to look at the very long-range picture as well as the near-range, weekly tasks.

I'm posting here because I'm seeking advice.

What kinds of things would make me a better technical director?
Will you please share your experiences, either as, or dealing with, a technical director?
What would have made that experience a better one?

Thank you very much

GJK Raycast

17 November 2011 - 08:32 PM

I read Gino's paper here on general convex raycasting. I tried to get a feel for what he was doing and implement it myself.

The way I see it, one finds the closest feature to a minimum ray bound, clips the ray with the result, and continues searching until the minimum bound:
* goes beyond the ray
* passes the convex shape
* gets within a tolerance to the shape

For the most part my implementation works.

In one test case, I have two lumpy convex hulls (rocks) overlapping each other, and I cast a ray straight down into the intersection region.
Depending on the result, I move the ray up or down to stay a fixed distance away from the intersection point. (Basic raycast character behavior.)

What I'm seeing is that the intersection results differ depending on the starting height of the ray. In one frame it will find rock A and move the ray up.
Then it misses rock A, finds rock B, and moves the ray down. This oscillates back and forth every frame.

I suspect I've got the termination conditions in the wrong order, or have made some other simple mistake. Any suggestions?



void tGJKRaycast::fCompute( )
{
	const f32 cTolerance = 0.001f;
	const f32 cToleranceSqr = cTolerance * cTolerance;

	tVec3f lastNormal = tVec3f::cYAxis;
	b32 hasStepped = false;

	mIntersects = false;
	mT = 0.f;
	mRaySupport->mCenter = mRay.fEvaluate( mT );
	mGJK.fReset( );

	while( 1 )
	{
		mGJK.fResume( );
		mGJK.fCompute( );

		if( !hasStepped && mGJK.fIntersects( ) )
		{
			// origin is contained
			//  miss
			return;
		}

		// points from shape to ray.
		tVec3f diff = mRaySupport->mCenter - mGJK.fClosestPtA( );
		if( diff.fLengthSquared( ) < cToleranceSqr )
		{
			mIntersects = true;
			mNormal = lastNormal;
			mPoint = mGJK.fClosestPtA( );
			mT = fClipRay( mRay, lastNormal, mGJK.fClosestPtA( ) );
			return;
		}

		if( diff.fDot( mRay.mExtent ) >= -cEpsilon )
		{
			//clip plane and ray are either coplanar or facing the same direction.
			// miss.
			return;
		}

		hasStepped = true;
		lastNormal = diff;
		lastNormal.fNormalize( );			

		// plane on convex shape.
		mT = fClipRay( mRay, lastNormal, mGJK.fClosestPtA( ) );

		if( mT > 1.f )
		{
			// beyond ray length.
			//  miss
			return;
		}

		mRaySupport->mCenter = mRay.fEvaluate( mT );
	}
}


My tolerance is 0.001f, but any smaller number exhibits the same behavior.
mRaySupport is just a point support mapping, containing the current minimum bound location.
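For completeness, fClipRay just intersects the ray with the plane through the given point; stripped down to a stand-alone sketch (with my own names, and my ray representation: point(t) = origin + t * extent, t in [0, 1]) it's:

```cpp
#include <cassert>
#include <cmath>

struct Vec3
{
    float x, y, z;
    Vec3 operator-( const Vec3& o ) const { return { x - o.x, y - o.y, z - o.z }; }
    float fDot( const Vec3& o ) const { return x * o.x + y * o.y + z * o.z; }
};

struct Ray { Vec3 origin, extent; }; // point(t) = origin + t * extent

// Parametric t where the ray crosses the plane with normal n through point p.
// Assumes the ray is not parallel to the plane; the loop above has already
// rejected that case via the dot-product test.
float clipRay( const Ray& ray, const Vec3& n, const Vec3& p )
{
    return ( p - ray.origin ).fDot( n ) / ray.extent.fDot( n );
}
```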

Sequential Impulse (Bias velocities)

29 October 2011 - 06:50 PM

Sequential impulses have made the system extremely stable during stacking.

There is a small amount of sliding around, though, which I'm sure is a common problem.

Here's how I apply friction:

* During the warm start, I cache all the data related to the contact point and precompute all the redundant calculations.
- This includes the tangential direction, which is perpendicular to the normal and in the direction of the relative velocity. It can be zero or a normalized direction.
- Apply warm-start impulses: the old normal impulse magnitude is applied in the current normal direction, and the previous friction impulse magnitude is applied in the current tangential direction. (Sounds wrong; that direction has changed.)
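For clarity, that tangential direction is computed roughly like this (stand-alone sketch, my own names):

```cpp
#include <cassert>
#include <cmath>

struct Vec3
{
    float x, y, z;
    Vec3 operator-( const Vec3& o ) const { return { x - o.x, y - o.y, z - o.z }; }
    Vec3 operator*( float s ) const { return { x * s, y * s, z * s }; }
    float fDot( const Vec3& o ) const { return x * o.x + y * o.y + z * o.z; }
    float fLength( ) const { return std::sqrt( fDot( *this ) ); }
};

// Remove the normal component of the relative velocity at the contact; what
// remains is either zero (no sliding) or is normalized into the tangent.
Vec3 tangentFromRelativeVelocity( const Vec3& relVel, const Vec3& normal )
{
    Vec3 t = relVel - normal * relVel.fDot( normal );
    const float len = t.fLength( );
    if( len < 1e-6f )
        return Vec3{ 0.f, 0.f, 0.f };
    return t * ( 1.f / len );
}
```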

For each iteration:
- Apply the normal impulse, keeping the running accumulated impulse positive.
- Apply the friction impulse, clamping the running accumulated impulse to the friction coefficient times the current accumulated normal impulse. (Sounds wrong; that magnitude hasn't settled yet.)
- All of these friction impulses are applied in the tangential direction precomputed during the warm start. (Sounds wrong; the direction is likely changing.)
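The clamp in the friction step looks roughly like this (stand-alone sketch, my own names):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Clamp the running tangential impulse to the friction cone and return only
// the delta actually applied this iteration (the accumulated-impulse trick).
float clampFrictionImpulse( float& accumTangent, float deltaImpulse,
                            float mu, float accumNormal )
{
    const float maxFriction = mu * accumNormal;
    const float oldAccum = accumTangent;
    accumTangent = std::min( maxFriction,
                             std::max( -maxFriction, oldAccum + deltaImpulse ) );
    return accumTangent - oldAccum;
}
```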

The results are extremely pleasing. However, I'm sure that with more effort they could be better.

Options:
* Keep the friction direction around from the previous frame for warm starting.
* Handle friction in a two-dimensional coordinate frame relative to the body.
* Update the tangential direction with each iteration.

I've heard central friction mentioned briefly but haven't found many resources on it.
Maybe there are newer best practices for friction?

Thank you
