D3D & OGL Renderer startFrame Method

10 comments, last by CrazyCdn 16 years, 9 months ago
I'm still working on my D3D and OGL renderers. I'm using the method from the GD article to define some enums, etc., to set states and other things in the renderers, but I would like to hear others' opinions on this startFrame method. Here are the enums first:

//Buffers
enum eBuffers
{
	COLOR_BUFFER = 1,
	DEPTH_BUFFER = 2,
	STENCIL_BUFFER = 4
};
//States 
enum eState
{
	LIGHTING,
	BLENDING,
	DEPTHTEST,
	SRCBLEND,
	DSTBLEND,
	ALPHATEST,
	FOG,
	FOGMODE,
	FOGSTART,
	FOGEND
};
//Operations to perform on the states
enum eOperation
{
	ONE,
	ZERO,
	SRC,
	SRC_ALPHA,
	INV_SRC,
	INV_SRC_ALPHA,
	DEST,
	DEST_ALPHA,
	INV_DEST,
	INV_DEST_ALPHA,
	ENABLE,
	DISABLE,
	LINEAR,
	EXP,
	EXP2
};

Now I have the startFrame method in the base Renderer class, like this:

...
virtual void	startFrame(unsigned int bits)=0;
...

Here comes the OGL and D3D implementations.

void OGLObject::startFrame(unsigned int bits)
{
	unsigned int glBits = 0;
	if(bits & COLOR_BUFFER)
		glBits |= GL_COLOR_BUFFER_BIT;
	if(bits & DEPTH_BUFFER)
		glBits |= GL_DEPTH_BUFFER_BIT;
	if(bits & STENCIL_BUFFER)
		glBits |= GL_STENCIL_BUFFER_BIT;
	
	glClear(glBits);
	glLoadIdentity();
}

void D3DRenderer::startFrame( unsigned int bits )
{
	unsigned int d3dBits = 0;
	if(bits & COLOR_BUFFER)
		d3dBits |= D3DCLEAR_TARGET;
	if(bits & DEPTH_BUFFER)
		d3dBits |= D3DCLEAR_ZBUFFER;
	if(bits & STENCIL_BUFFER)
		d3dBits |= D3DCLEAR_STENCIL;
	

	m_pD3DDevice->Clear(0, NULL, d3dBits, m_BackColor, 1.0f, 0);
	m_DeviceStatus = m_pD3DDevice->BeginScene();
}

And I can call them from my application like this: m_pRenderer->startFrame(COLOR_BUFFER | DEPTH_BUFFER); Is there anything wrong with this approach?
We do a very similar thing. There is no right way to do anything; otherwise there would be only one engine. Do what works best for you.

What GD article are you talking about btw?

"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin

About the state enums - I would create separate functions that set states (setBlendFactors(), setStencilOp(), etc.) rather than one big setRenderState() function like Direct3D uses. I think this makes the class more maintainable and easier to use. But that's just my opinion (note that if you do make separate state functions, you should also make separate enums for greater type safety).
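As a rough sketch of what those separate, type-safe setters might look like (the enum and class names here are made up for illustration, not taken from the original code; a real OGL/D3D subclass would translate the enums to GL_*/D3D* constants):

```cpp
// Separate, strongly-typed enums: the compiler now rejects a call like
// setBlendFactors(CMP_LESS, BLEND_ONE), which a single generic enum would allow.
enum eBlendFactor { BLEND_ONE, BLEND_ZERO, BLEND_SRC_ALPHA, BLEND_INV_SRC_ALPHA };
enum eCompareFunc { CMP_LESS, CMP_LEQUAL, CMP_ALWAYS };

class Renderer
{
public:
    virtual ~Renderer() {}
    virtual void setBlendFactors(eBlendFactor src, eBlendFactor dst) = 0;
    virtual void setDepthFunc(eCompareFunc func) = 0;
};

// A do-nothing renderer just to show usage; it records the last values set
// instead of touching a real API.
class NullRenderer : public Renderer
{
public:
    eBlendFactor lastSrc, lastDst;
    void setBlendFactors(eBlendFactor src, eBlendFactor dst)
    {
        lastSrc = src;
        lastDst = dst;
    }
    void setDepthFunc(eCompareFunc) {}
};
```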
Your startFrame method looks fine.

For your states I suggest not having individual methods to set each one; it would be expensive, as these are all virtual calls, and it's generally not considered good for performance to abstract a renderer at this low a level.

I would put all states into a structure; you can collect instances of these up at render time, sort them to minimise the expensive state changes and then pass the instances off to the renderer. The renderer can then set all the states at once from one virtual function call, simply by reading out the structure.

Note that this structure can contain all your states, so that's not only blend states but also texture and shader states, and you can even store the vertex-buffer ID (or pointer) and sort by that to reduce switching between vertex buffers.
Quote: Original post by dmatter
Your startFrame method looks fine.

For your states I suggest not having individual methods to set each one; it would be expensive, as these are all virtual calls, and it's generally not considered good for performance to abstract a renderer at this low a level.

I would put all states into a structure; you can collect instances of these up at render time, sort them to minimise the expensive state changes and then pass the instances off to the renderer. The renderer can then set all the states at once from one virtual function call, simply by reading out the structure.

Note that this structure can contain all your states, so that's not only blend states but also texture and shader states, and you can even store the vertex-buffer ID (or pointer) and sort by that to reduce switching between vertex buffers.


How would you put states in a structure? Can you elaborate on that a little more?
Is it something like:

struct tRenderState
{
	bool blending;
	bool lighting;
	bool texturing;
	int vertexbufferID;
	int textureID;
	int shaderID;
	// ...
};

Then each renderable can have a renderState instance and pass it to a queue in the renderer? Or did I totally miss the point?
Quote: Original post by dmatter
For your states I suggest not having individual methods to set each one; it would be expensive, as these are all virtual calls, and it's generally not considered good for performance to abstract a renderer at this low a level.

I would put all states into a structure; you can collect instances of these up at render time, sort them to minimise the expensive state changes and then pass the instances off to the renderer. The renderer can then set all the states at once from one virtual function call, simply by reading out the structure.


You can still do something similar with separate functions that are not virtual and are inline. Instead of actually changing the state, they will simply update an internal structure like the one you described. Then when it's time to render, a virtual flushState() function is called that sets all states that have changed.

The renderer can still have a function that sets all states at once; it's just that sometimes it is convenient to set individual states (maybe for small demos or something).
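A minimal sketch of that caching idea, with hypothetical names like CachedState and flushState() (only flushState() needs to be virtual; the setters just mark the cache dirty):

```cpp
// State cache: the setters write here instead of touching the API directly.
struct CachedState
{
    bool lighting;
    bool blending;
    bool dirty;    // anything changed since the last flush?
    CachedState() : lighting(false), blending(false), dirty(false) {}
};

class Renderer
{
public:
    virtual ~Renderer() {}
    // Cheap, inline, non-virtual: just update the cache.
    void setLighting(bool on) { m_state.lighting = on; m_state.dirty = true; }
    void setBlending(bool on) { m_state.blending = on; m_state.dirty = true; }
    // One virtual call applies everything that changed, right before drawing.
    virtual void flushState() = 0;
protected:
    CachedState m_state;
};

// Test double that counts real flushes; a GL/D3D subclass would issue
// glEnable()/SetRenderState() calls here instead.
class CountingRenderer : public Renderer
{
public:
    int flushes;
    CountingRenderer() : flushes(0) {}
    void flushState()
    {
        if (m_state.dirty) { ++flushes; m_state.dirty = false; }
    }
};
```

Note how two individual state changes still cost only one virtual call at render time.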
Quote: Original post by Black Knight
How would you put states in a structure? Can you elaborate on that a little more?
Is it something like:

struct tRenderState
{
	bool blending;
	bool lighting;
	bool texturing;
	int vertexbufferID;
	int textureID;
	int shaderID;
	// ...
};

Then each renderable can have a renderState instance and pass it to a queue in the renderer? Or did I totally miss the point?

Specifically how you do this depends on many factors; perhaps the biggest is whether you plan on using a shader-based system or a fixed-function one. Most people would recommend a shader-based approach, and you can then (if you wanted to) implement the fixed-function pipeline as a special shader.

You also need to consider the order things happen in. For example, here's a simple overview of a state-sorting rendering pipeline of an engine:
1) iterate over world entities
   i)  request a renderOp from each entity (see later description)
   ii) push visible renderOps onto a renderQueue (or renderScheduler)
2) sort renderQueue by: ShaderID -and- Shader parameters -and- vertexBuffer (if available)
3) for each renderOp in renderQueue
   i)   bind shader (if not current)
   ii)  bind vertexBuffer, or create one and upload data into it (if not current)
   iii) upload shader parameters (if not current)
   iv)  set any other states necessary (e.g. fog, render targets, etc)
   v)   render

A renderOp is a structure that is requested from each renderable entity at render time. It needs to contain (references to) the geometry that will be rendered, the shader to render the geometry with, all the shader parameters and possibly non-shader-related states like fog blending (or, depending on how you abstract a shader, these might be part of the shader parameters anyway).

At stage 2, when sorting takes place, we sort based on these states and/or any others that we might have available to us. We don't necessarily need to sort by every state, as there is likely a break-even point between the cost of sorting by another state and simply running the risk of needlessly setting it.
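A bare-bones version of that stage-2 sort might look like this; the RenderOp fields and the choice of sort keys are simplified for illustration (shader first, since switching shaders is usually the most expensive change):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal renderOp used only to show the sort stage.
struct RenderOp
{
    int shaderID;
    int vertexBufferID;
};

// Order by shader first, then by vertex buffer, so renderOps with equal
// states end up adjacent and redundant binds can be skipped at draw time.
inline bool opLess(const RenderOp& a, const RenderOp& b)
{
    if (a.shaderID != b.shaderID)
        return a.shaderID < b.shaderID;
    return a.vertexBufferID < b.vertexBufferID;
}

inline void sortQueue(std::vector<RenderOp>& queue)
{
    std::sort(queue.begin(), queue.end(), opLess);
}
```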

We don't need a structure with booleans that enable or disable texturing etc., because that is very fixed-function, and it is shaders that decide whether they want to texture or not. If we want a simple textured mesh then the renderable will return a renderOp that contains the mesh to use, a diffuseTexture shader and a texture to use. If we want a non-textured Phong-shaded sphere then the renderOp contains the sphere mesh, a suitable shader and a NULL texture pointer/handle.
So a simple renderOp implementation could look like:
class renderOp
{
public:
    // Required data for each and every renderable
    Geometry* geometry;          // The geometry stored in the system RAM
    Matrix* matrix;              // Transformation
    int shaderID;                // shader handle
    int vertexBufferID;          // VBO handle (or (-1) if not cached in the video-RAM)
    int textures[MAX_TEXTURES];  // Texture handles
    /*
       Any other states you think every renderable should explicitly give
    */

    // Possibly some way to allow states that not every renderable need have.
    // Or just have every conceivable state in this class and give them default values.
    std::map<std::string, OptionalStateType> otherStates;
};


Quote: Original post by Gage64
You can still do something similar with separate functions that are not virtual and are inline. Instead of actually changing the state, they will simply update an internal structure like the one you described. Then when it's time to render, a virtual flushState() function is called that sets all states that have changed.

The renderer can still have a function that sets all states at once; it's just that sometimes it is convenient to set individual states (maybe for small demos or something).

Yes, you might well want to make the actual data members of the renderOp class private and use functions similar to those you describe as its interface.
I wouldn't advocate the use of helper or utility functions as part of the renderer, as it's just not necessary and leads to code bloat; such functions in this case would belong in a utility namespace somewhere, I suspect.

I hope that was of some help [smile]
Thanks for the in-depth explanation. Looks like I need to rewrite my whole engine to use a shader approach; as of now it's heavily based on the FFP and OGL.
I may even kick out OGL and switch completely to D3D, because trying to keep two renderers up to date and working is much more time-consuming.
Anyway, thanks for the help.
If you wanted to stick with the FFP (although it's not recommended as a long-term investment of time [wink]) then you could still have a similar approach; I think it would look more like the following:

1) iterate over world entities
   i)  request a renderOp from each entity
   ii) push visible renderOps onto a renderQueue
2) sort renderQueue by FFP states
3) for each renderOp in renderQueue
   i)   bind vertexBuffer, or create one and upload data into it (if not current)
   ii)  set FFP states (if not current)
   iii) render


Then the renderOp class would perhaps be somewhere between what you first thought of and what's in my post above.
I still wouldn't bother with booleans for enabling and disabling things like texturing; I would probably have a number of different texture handle IDs, and if an ID is set to NULL (or whatever) then texturing is disabled, otherwise it's enabled.
I think I would whack all the possible FFP states into the renderOp class (or put them in another class and give the renderOp class an instance of it); then the constructor could give default values to the lesser-used ones.
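Such a defaults-in-the-constructor state block might look something like this (the field names and the -1 sentinel are invented for the example):

```cpp
// All FFP states in one struct; the constructor gives sensible defaults so a
// renderOp only needs to override the states it actually cares about.
struct FFPStates
{
    bool  fogEnabled;
    float fogStart, fogEnd;
    int   textureID;    // -1 means texturing disabled, per the convention above

    FFPStates()
        : fogEnabled(false)
        , fogStart(0.0f)
        , fogEnd(1.0f)
        , textureID(-1)
    {
    }
};
```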

Ditching an API is probably a good idea if you don't need them both [smile]

Good luck
Yeah, I'll probably leave OGL since I'm always developing on Windows.
Should I also throw away my own vector class, etc., and use the D3DX functionality?
I think it would be easier to port to another platform if I use my own math classes, but then again it seems like overkill and duplication to write all that stuff when you can just use the D3DX classes.

