cozzie

Renderqueue design and weapons


Hi,
Two days ago I ran into a design issue with my render queue.
My approach up till now works fine:

- renderer class takes "buckets" of renderables and renders them
(blended and opaque, separated)
- renderqueue feeds from my scenemanager class (with 3d scenes)

But I distinguish weapon (FPS) mesh instances from all the others, because they need a different approach for culling, transformation, etc. (relative to the camera).

Now my problem is that the weapon renderables are handled like all the others and are not identifiable by the renderer. That sounds good, because the renderer shouldn't need to know about them. The major disadvantage is that I can't clear the Z buffer before rendering the weapon renderables (since they're just a part of the full buckets).

How would you approach this from a design point of view?
Some thoughts:

- use a distinct render bucket for weapons
- sort the current full buckets with weapons last and place a marker where they start, so the renderer can clear the Z buffer at that moment
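A minimal sketch of the second option, with assumed names (`Renderable`, `isWeapon`, and the sort key are illustrative, not from an actual engine): sort the opaque bucket so weapon renderables land last and return the index where they start, so the renderer can clear the Z buffer at that point without ever knowing what a "weapon" is.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Renderable {
    int  sortKey;   // material/mesh sort key
    bool isWeapon;  // assumed flag marking FPS weapon renderables
};

// Returns the index of the first weapon renderable (bucket.size() if none).
std::size_t SortWeaponsLast(std::vector<Renderable>& bucket) {
    std::stable_sort(bucket.begin(), bucket.end(),
        [](const Renderable& a, const Renderable& b) {
            if (a.isWeapon != b.isWeapon) return !a.isWeapon;  // weapons last
            return a.sortKey < b.sortKey;                      // normal key otherwise
        });
    std::size_t i = 0;
    while (i < bucket.size() && !bucket[i].isWeapon) ++i;
    return i;
}
```

The renderer would iterate the bucket as usual and clear the Z buffer once it reaches the returned index.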


If I understand you correctly, you're talking about the first person weapons that are in "front" of all other objects in the scene. If so, and they're opaque, they should be rendered first (not last), before anything else, as they're the closest to the near plane. If you render them last, you're doing a lot of overdraw. That also solves the depth-buffer clearing problem.

Edited by Buckeye


Quote (Buckeye): "...they should be rendered first (not last), before anything else, as they're the closest to the near plane..."

He is most likely talking about using a different projection for rendering the weapon, which requires rendering it last after clearing the depth buffer.

And if so, just add it to a later pass.
Every pass should carry flags for clearing color and/or depth (any combination).
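A minimal sketch of that per-pass flagging scheme (all names here are assumptions, not from an actual engine):

```cpp
#include <cstdint>

enum ClearFlags : std::uint32_t {
    CLEAR_NONE  = 0,
    CLEAR_COLOR = 1u << 0,
    CLEAR_DEPTH = 1u << 1,
};

struct RenderPass {
    std::uint32_t clearFlags = CLEAR_NONE;  // any combination of the flags above
    // ... projection, render states, bucket of renderables, etc.
};

// The renderer stays generic: before drawing a pass it only inspects the flags.
inline bool PassClearsDepth(const RenderPass& pass) {
    return (pass.clearFlags & CLEAR_DEPTH) != 0;
}
```

The weapon pass would simply be configured with `CLEAR_DEPTH`; the renderer never needs to know why.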


L. Spiro


Thanks both. It is indeed for weapons positioned right in front of the camera.

My goal is to always have them visible ('on top' of everything else).

 

The theory I've learned so far is:

- clear backbuffer & Z buffer
- Z buffer and Z write enabled
- draw scene opaque
- disable Z write
- draw skybox
- keep Z write disabled
- draw scene blended stuff, back to front
(up to here everything works fine)
- what to do with Z write / Z buffer enabled/disabled, and/or clearing the Z buffer?
- draw weapon positioned in front of the camera
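One possible answer, following the suggestion from the replies (clear only the depth buffer, re-enable Z write, then draw the weapon last), mocked as a list of op strings instead of real device calls; everything here is illustrative:

```cpp
#include <string>
#include <vector>

std::vector<std::string> FrameOrder() {
    return {
        "clear color + depth",
        "zwrite on,  draw opaque scene",
        "zwrite off, draw skybox",
        "zwrite off, draw blended back-to-front",
        "clear depth only",            // the step in question
        "zwrite on,  draw weapon",     // weapon now wins every depth test
    };
}
```

With the depth buffer freshly cleared, nothing drawn earlier can occlude the weapon, regardless of its actual depth values.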

 

@L. Spiro: clear on adding a flag per pass for clearing the color/depth buffer yes/no. But I have a bucket of renderables with properties; I don't have a number of passes with defined flags. To illustrate, this is part of my main rendering function:

	/** UPDATE THE RENDERQUEUE **/
	if(!mRenderQueue.Update(pD3dscene)) return false;

	/** SHADERS: UPDATE SCENE SPECIFIC CONSTANTS, I.E. CAMERA, LIGHT, FOG, AMBIENT **/
	if(!ShaderUpdateScene(pD3dscene, pCam)) return false;

	/** OPAQUE: RENDER SCENE USING BASE UBERSHADER **/
	mStateMachine.SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
	mStateMachine.SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
	mStateMachine.SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);
	mStateMachine.SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);

	if(!RenderBucket(pD3dscene, "SingleTexTech", mRenderQueue.GetRefRenderablesOpaque(), mRenderQueue.GetRefBucketOpaqueSorted())) return false;
	
	/** RENDER SKYBOX **/ 
	if(pD3dscene.HasSkybox())
	{
		mStateMachine.SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
		mStateMachine.SetRenderState(D3DRS_ZWRITEENABLE, FALSE);										
		mStateMachine.SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
		mStateMachine.SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_MIRROR);
		mStateMachine.SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_MIRROR);
		if(!pD3dscene.mSkyBox.Render(pCam)) return false;
	}

	/** BLENDED: RENDER SCENE USING UBERSHADER **/
	mStateMachine.SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
	mStateMachine.SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
	mStateMachine.SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);

	if(!RenderBucket(pD3dscene, "SingleTexTech", mRenderQueue.GetRefRenderablesBlended(), mRenderQueue.GetRefBucketBlendedSorted())) return false;


My main concern/problem is that all opaque renderables (including the weapon) are in one big bucket, sorted on a combination of things (material, mesh, material group, mesh instance, etc.). This means I currently can't distinguish the weapon renderables within the bucket.

I'm looking for a good solution, for example a separate bucket, so that in my main render function I can change the Z buffer states and/or clear the Z buffer and then render that specific bucket. The problem with this is that I believe the renderer should not need to know about weapons. Another solution could be to have a 'color/depth buffer clear' flag in the renderables and include it in the sorting procedure. Then in my render bucket function I could check for that flag and clear a buffer or change a state when needed.
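A sketch of the separate-bucket option (all names assumed): the queue sorts renderables into buckets, and the renderer only ever sees a bucket plus its pass settings; it never needs the word "weapon".

```cpp
#include <vector>

struct Renderable {
    bool blended;
    bool weapon;
};

struct RenderQueue {
    std::vector<Renderable> opaque, blendedBucket, weaponBucket;

    void Add(const Renderable& r) {
        if (r.weapon)       weaponBucket.push_back(r);   // camera-relative pass
        else if (r.blended) blendedBucket.push_back(r);
        else                opaque.push_back(r);
    }
};
```

The main render function would then draw `opaque`, the skybox, and `blendedBucket` as it does now, clear depth, and finally draw `weaponBucket`.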

Edited by cozzie


Quote: "if your depth near and far are the same"

The whole point is that they would not be the same: you're supposed to distribute the gun across the whole range of Z values (roughly speaking) for the extra up-close precision you need when the scene in its entirety covers 5 kilometers or so.
This is why, in games ever since GoldenEye 007, the gun does not penetrate the wall no matter how close you get to it.
So when you change the projection matrix, you are specifically changing the near and far planes.
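As a sketch of what "changing the near and far" means in code, here is a perspective builder following the D3DXMatrixPerspectiveFovLH layout (the `Mat4` struct and function names are assumptions); only `zn`/`zf` would differ between the world pass and the weapon pass:

```cpp
#include <cmath>

struct Mat4 { float m[4][4] = {}; };

// Left-handed perspective projection, D3DX-style row-major layout.
Mat4 PerspectiveFovLH(float fovY, float aspect, float zn, float zf) {
    const float ys = 1.0f / std::tan(fovY * 0.5f);  // cot(fovY/2)
    Mat4 p;
    p.m[0][0] = ys / aspect;
    p.m[1][1] = ys;
    p.m[2][2] = zf / (zf - zn);
    p.m[2][3] = 1.0f;
    p.m[3][2] = -zn * zf / (zf - zn);
    return p;
}

// Hypothetical usage: same fov/aspect, different depth ranges.
// worldProj  = PerspectiveFovLH(fov, aspect, 1.0f,   2000.0f);
// weaponProj = PerspectiveFovLH(fov, aspect, 0.001f, 2.0f);   // after a depth clear
```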


L. Spiro


My bucketing approach is kinda messy in its current implementation, but the idea should work fine:

 

The idea is that you have a bunch of render passes that are configurable with depth testing, projection, blending, what buffer(s) to clear if needed, buffers to sample from, buffers to draw to, and all other state you can think of. You have a "pipeline" in which you plug in render passes, to establish the order in which they'll be rendered. For each pass you have a bucket of particular objects you'll render with it.

 

This is all very generic, so you need to add some game-specific logic on top that can grab entities and put them in the right bucket, i.e. something that knows that an entity is a weapon and that it goes into the weapon pass's bucket. Since from the renderer's perspective all passes and buckets are the same, from the outside you'll need something to handle the specifics; there could be many ways of doing this.

 

So, in my renderer, if you need a specific pass for rendering the first person weapon, you'd just add a new pass to whatever slot you need in the pipeline, configure it so it sets up the right state (this is done with D3D-like state descriptors), and provide a bucket for rendering stuff with it. Then put the game-specific logic on top that says "this entity is a weapon and needs to be rendered, this goes into the weapon bucket."
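A stripped-down sketch of such a pipeline (illustrative names, no real state descriptors): the renderer just walks the passes in order, while game code looks a pass up and drops entities into its bucket.

```cpp
#include <string>
#include <vector>

struct Pass {
    std::string name;
    bool clearDepth = false;
    std::vector<int> bucket;  // renderable handles, for illustration
};

struct Pipeline {
    std::vector<Pass> passes;  // rendered in this order

    Pass* Find(const std::string& n) {
        for (auto& p : passes)
            if (p.name == n) return &p;
        return nullptr;
    }
};
```

Game-specific code would do something like `pipeline.Find("weapon")->bucket.push_back(entityId);` while the renderer loop stays completely generic.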

 

In my case I deal with redundant state changes by passing a state context object in between the passes, so the state descriptors can check what the previous pass did and only modify the necessary state. Inside the buckets you could do more involved things like sorting by texture usage, front to back, and so on.
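The redundant-state filtering could be sketched like this (assumed names; a real implementation would issue device calls such as SetRenderState instead of just counting):

```cpp
#include <map>

struct StateContext {
    std::map<int, int> current;  // state id -> last applied value
    int deviceCalls = 0;         // how many real state changes were issued

    void Set(int state, int value) {
        auto it = current.find(state);
        if (it != current.end() && it->second == value) return;  // redundant, skip
        current[state] = value;
        ++deviceCalls;           // a real SetRenderState(...) would go here
    }
};
```

Passing one such context through all passes means each pass only pays for the states it actually changes relative to the previous pass.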

Thanks all.
I've solved it for now by rendering 3D mesh instances for the HUD (where the weapon now belongs too), using my HUD class and its rendering path. This is separated from the rest of the scene, meaning I can simply clear the Z buffer before drawing the HUD.

In this example the small blue ghost is rendered as part of the HUD, and always visible (its matrix is the inverse camera view matrix multiplied by the mesh instance's local matrix/offsets):

http://www.sierracosworth.nl/gamedev/booh_ghostind1.jpg
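The "inverse camera view matrix times local matrix" trick, reduced to pure translations so the sketch stays self-contained (a real engine uses full 4x4 matrices, but the inverse of a pure camera translation is just the camera position):

```cpp
struct Vec3 { float x, y, z; };

// world = inverse(view) * local, translation-only model:
// the mesh ends up at a fixed offset from the camera every frame.
Vec3 HudWorldPosition(const Vec3& cameraPos, const Vec3& localOffset) {
    return { cameraPos.x + localOffset.x,
             cameraPos.y + localOffset.y,
             cameraPos.z + localOffset.z };
}
```

Because the camera's own transform cancels out, the HUD mesh follows the camera exactly no matter how it moves or rotates.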

It probably doesn't look terribly different in many cases, but if the gun is isolated within its own depth testing (so that no other objects can occlude it), then it only makes sense to snap the near and far planes around the gun for that extra precision.

It’s actually not as much about giving the gun more precision as it is about giving the world more precision. By giving the gun its own set of depth values, you can then set the near plane very high (perhaps up to a meter).
The gun can have 0.001f-2.0f while the world could have 1.0f-2000.0f, which would clearly give you a better overall result than using 0.001f-2000.0f.
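Those ranges can be sanity-checked with the depth mapping a D3D-style projection produces, z_ndc = f/(f-n) * (1 - n/z); `NdcDepth` is just an illustrative helper. With n = 0.001 nearly the whole 2000-unit scene maps above 0.999, while n = 1.0 spreads the buffer far more evenly:

```cpp
// Post-projection depth for a D3D-style perspective projection
// with near plane n, far plane f, and view-space depth z.
double NdcDepth(double n, double f, double z) {
    return (f / (f - n)) * (1.0 - n / z);
}

// Illustrative comparison at z = 10:
//   NdcDepth(0.001, 2000.0, 10.0)  -> ~0.9999  (almost no precision left)
//   NdcDepth(1.0,   2000.0, 10.0)  -> ~0.90    (far better distribution)
```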


L. Spiro

