Renderqueue design and weapons

Hi,
Two days ago I ran into a design issue with my render queue :(
My approach so far has worked fine:

- renderer class takes "buckets" of renderables and renders them
(blended and opaque, separated)
- the render queue is fed by my scene manager class (with 3D scenes)

But I distinguish the weapon (FPS) mesh instances from all the others, because they need a different approach for culling, transformation, etc. (relative to the camera).

Now my problem is that the weapon renderables are handled like all the others and are not identifiable by the renderer. That sounds good, because the renderer shouldn't need to know about them. The major disadvantage is that I can't clear the Z buffer before rendering the weapon renderables (since they're just a part of the full buckets).

How would you approach this from a design point of view?
Some thoughts:

- a distinct render bucket for weapons
- sort the current full buckets with the weapons last and place a marker where they start, so my renderer can clear the Z buffer at that point (sketch below)
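
Roughly what I have in mind for the second option, just as a sketch (the bit layout and names are made up):

#include <cstdint>
#include <vector>

// Hypothetical 64-bit sort key: the top bit marks weapon renderables so they
// always sort to the end of the opaque bucket.
const uint64_t SORT_KEY_WEAPON_BIT = 1ull << 63;

struct Renderable { uint64_t sortKey; /* mesh, material, instance data, ... */ };

void RenderSorted(const std::vector<Renderable>& sorted) // ascending by sortKey
{
	bool depthCleared = false;
	for (const Renderable& r : sorted)
	{
		if (!depthCleared && (r.sortKey & SORT_KEY_WEAPON_BIT))
		{
			// First weapon renderable reached: clear only the Z buffer so the
			// weapon can never intersect the world geometry drawn before it.
			// mD3ddev->Clear(0, NULL, D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
			depthCleared = true;
		}
		// DrawRenderable(r);
	}
}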

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me


If I understand you correctly, you're talking about the first person weapons that are in "front" of all other objects in the scene. If so, and they're opaque, they should be rendered first (not last), before anything else, as they're the closest to the near plane. If you render them last, you're doing a lot of overdraw. That solves the depth buffer clearing problem, also.

Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.

You don't forget how to play when you grow old; you grow old when you forget how to play.

If I understand you correctly, you're talking about the first person weapons that are in "front" of all other objects in the scene. If so, and they're opaque, they should be rendered first (not last), before anything else, as they're the closest to the near plane. If you render them last, you're doing a lot of overdraw. That solves the depth buffer clearing problem, also.

He is most likely talking about using a different projection for rendering the weapon, which requires rendering it last after clearing the depth buffer.

And if so, just add it to a later pass.
Every pass should be flagged as either color/depth clearing or not (and any combination).
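
For example, something along these lines (a minimal sketch; the struct and names are made up):

#include <d3d9.h>

// Hypothetical per-pass description: each pass declares which buffers it clears.
struct RenderPass
{
	bool     clearColor;       // clear the color buffer before this pass?
	bool     clearDepth;       // clear the depth buffer before this pass?
	D3DCOLOR clearColorValue;
	// ... render states, shader, and the bucket of renderables for this pass.
};

void ExecutePass(IDirect3DDevice9* device, const RenderPass& pass)
{
	DWORD flags = 0;
	if (pass.clearColor) flags |= D3DCLEAR_TARGET;
	if (pass.clearDepth) flags |= D3DCLEAR_ZBUFFER;
	if (flags)
		device->Clear(0, NULL, flags, pass.clearColorValue, 1.0f, 0);

	// ... set the pass's states and draw its bucket here.
}

The first-person weapon would then simply live in a later pass whose descriptor has clearDepth set and clearColor not set.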


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Thanks both. It is indeed for weapons positioned right in front of the camera.

My goal is to always have them visible ('on top' of everything else).

The theory I've learned so far is:

- clear backbuffer & zbuffer

- zbuffer and zwrite enabled

- draw scene opaque

- disable Z write

- draw skybox

- keep Z write disabled

- draw scene blended stuff, back to front

(till here so far all good and working)

- what to do here with Z write / Z test enabled or disabled, and/or clearing the Z buffer?

- draw weapon positioned in front of camera

@L. Spiro: clear on adding a flag per pass for clearing the color/depth buffer or not. But I have buckets of renderables with properties; I don't have a number of passes with defined flags. To illustrate, this is part of my main rendering function:


	/** UPDATE THE RENDERQUEUE **/
	if(!mRenderQueue.Update(pD3dscene)) return false;

	/** SHADERS: UPDATE SCENE SPECIFIC CONSTANTS, I.E. CAMERA, LIGHT, FOG, AMBIENT **/
	if(!ShaderUpdateScene(pD3dscene, pCam)) return false;

	/** OPAQUE: RENDER SCENE USING BASE UBERSHADER **/
	mStateMachine.SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
	mStateMachine.SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
	mStateMachine.SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);
	mStateMachine.SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);

	if(!RenderBucket(pD3dscene, "SingleTexTech", mRenderQueue.GetRefRenderablesOpaque(), mRenderQueue.GetRefBucketOpaqueSorted())) return false;
	
	/** RENDER SKYBOX **/ 
	if(pD3dscene.HasSkybox())
	{
		mStateMachine.SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
		mStateMachine.SetRenderState(D3DRS_ZWRITEENABLE, FALSE);										
		mStateMachine.SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
		mStateMachine.SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_MIRROR);
		mStateMachine.SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_MIRROR);
		if(!pD3dscene.mSkyBox.Render(pCam)) return false;
	}

	/** BLENDED: RENDER SCENE USING UBERSHADER **/
	mStateMachine.SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
	mStateMachine.SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
	mStateMachine.SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);

	if(!RenderBucket(pD3dscene, "SingleTexTech", mRenderQueue.GetRefRenderablesBlended(), mRenderQueue.GetRefBucketBlendedSorted())) return false;


My main concern/problem is that all opaque renderables (including the weapon) are in one big bucket, sorted on a combination of things (material, mesh, material group, mesh instance, etc.). This means I currently can't distinguish the weapon renderables within the bucket. I'm looking for a good solution, for example a separate bucket, so that in my main render function I can change the Z buffer states and/or clear the Z buffer and then render that specific bucket. The problem with this is that I believe the renderer should not need to know about weapons. Another solution could be to have a 'color/depth buffer clear' flag in the renderables and include it in the sorting procedure. Then in my render bucket function I could check for that flag and clear a buffer or change a state when needed.
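
For instance, with a separate bucket the end of my main render function might look roughly like this (mD3ddev, GetRefRenderablesWeapon and GetRefBucketWeaponSorted are placeholders that don't exist yet, just to illustrate the idea):

	/** WEAPON: RENDER FIRST PERSON WEAPON BUCKET LAST, ON TOP OF EVERYTHING **/
	// Clear only the depth buffer so the weapon cannot intersect the scene.
	mD3ddev->Clear(0, NULL, D3DCLEAR_ZBUFFER, 0, 1.0f, 0);

	mStateMachine.SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
	mStateMachine.SetRenderState(D3DRS_ZWRITEENABLE, TRUE);

	if(!RenderBucket(pD3dscene, "SingleTexTech", mRenderQueue.GetRefRenderablesWeapon(), mRenderQueue.GetRefBucketWeaponSorted())) return false;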


A simple solution might be to put these first person hack objects into a completely different scene, with its own complete set of buckets.

Why would you draw the gun last instead of first? Even if you change the projection matrix, if your depth near and far are the same, you are perfectly fine.

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

if your depth near and far are the same

The whole point is that they would not be the same—you’re supposed to be distributing the gun across the whole range of Z values (roughly speaking) for the extra up-close precision you need when the scene in its entirety covers 5 kilometers or such.
This is why in games even since GoldenEye 007 the gun does not penetrate the wall no matter how close you get to it.
So when you change the projection matrix, you are specifically changing the near and far.
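
In D3DX terms, a sketch of what I mean (all numbers are example values):

#include <d3dx9.h>

// Sketch: build a separate, much tighter projection for the first-person gun.
void BuildProjections(float aspect, D3DXMATRIX& sceneProj, D3DXMATRIX& weaponProj)
{
	// Scene projection: a huge range, so up-close depth precision is poor.
	D3DXMatrixPerspectiveFovLH(&sceneProj, D3DX_PI / 3.0f, aspect, 0.25f, 5000.0f);

	// Weapon projection: the tiny volume the gun occupies gets the whole depth
	// range to itself, which is why it never clips into nearby walls.
	D3DXMatrixPerspectiveFovLH(&weaponProj, D3DX_PI / 3.0f, aspect, 0.01f, 2.0f);
}

// Draw the scene with sceneProj, clear the depth buffer, then draw the gun
// last with weaponProj.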

L. Spiro


My bucketing approach is kinda messy in its current implementation, but the idea should work fine:

The idea is that you have a bunch of render passes that are configurable with depth testing, projection, blending, what buffer(s) to clear if needed, buffers to sample from, buffers to draw to, and all other state you can think of. You have a "pipeline" in which you plug in render passes, to establish the order in which they'll be rendered. For each pass you have a bucket of particular objects you'll render with it.

This is all very generic, so you need to add some game-specific logic on top that can grab entities and put them in the right bucket, i.e. something that knows that an entity is a weapon and that it goes into the weapon pass's bucket. Since from the renderer's perspective all passes and buckets are the same, you need something outside it to handle the specifics; there are many ways of doing that.

So, in my renderer, if you need a specific pass for rendering the first-person weapon, you'd just add a new pass to whatever slot you need in the pipeline, configure it so it sets up the right state (this is done with D3D-like state descriptors), and provide a bucket for rendering stuff with it. Then put the game-specific logic on top that says "this entity is a weapon and needs to be rendered, so it goes into the weapon bucket."

In my case I deal with redundant state changes by passing a state context object in between the passes, so the state descriptors can check what the previous pass did and only modify the necessary state. Inside the buckets you could do more involved things like sorting by texture usage, front to back, and so on.
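
Roughly, the game-specific routing on top looks like this (all names invented, just to illustrate):

#include <vector>

// The renderer only sees passes and buckets; this layer decides which bucket
// an entity's renderable goes into. Passes run in slot order.
enum PassSlot { PASS_OPAQUE = 0, PASS_SKYBOX, PASS_BLENDED, PASS_WEAPON, PASS_COUNT };

struct Entity
{
	int  renderableId;
	bool isFirstPersonWeapon;
	bool isTranslucent;
};

struct Pipeline
{
	std::vector<int> buckets[PASS_COUNT];   // renderable IDs per pass
};

PassSlot ClassifyEntity(const Entity& e)
{
	if (e.isFirstPersonWeapon) return PASS_WEAPON;   // this pass clears depth
	if (e.isTranslucent)       return PASS_BLENDED;
	return PASS_OPAQUE;
}

void FillBuckets(const std::vector<Entity>& visibleEntities, Pipeline& pipeline)
{
	for (const Entity& e : visibleEntities)
		pipeline.buckets[ClassifyEntity(e)].push_back(e.renderableId);
}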

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Thanks all.
I've solved it for now by rendering 3D mesh instances as part of the HUD (which is where the weapon now belongs in my case), using my HUD class and its render path. This is separate from the rest of the full scene, meaning I can simply clear the Z buffer before drawing the HUD.

In this example the small blue ghost is rendered as part of the HUD and is always visible (its matrix is the inverse camera view matrix multiplied by the mesh instance's local matrix/offsets):

http://www.sierracosworth.nl/gamedev/booh_ghostind1.jpg
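
For reference, the transform works roughly like this (names made up; the multiply order depends on your matrix convention, this is the row-vector D3DX one):

#include <d3dx9.h>

// localOffset places the mesh relative to the camera; multiplying by the
// inverse view matrix 'glues' it to the camera so it stays in view.
D3DXMATRIX BuildHudWorldMatrix(const D3DXMATRIX& cameraView, const D3DXMATRIX& localOffset)
{
	D3DXMATRIX invView;
	D3DXMatrixInverse(&invView, NULL, &cameraView);

	D3DXMATRIX world;
	D3DXMatrixMultiply(&world, &localOffset, &invView);   // apply the local offset, then the camera transform
	return world;
}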


I thought the problem we were talking about was changing the projection matrix so the gun had more foreshortening, not depth precision. I've never changed the depth precision of a gun and it looks fine.

