Designing a Renderer abstraction
I'm going to start writing a portable game engine as a hobby project in the next few weeks. By portable, I mean that I would like it to run on three target platforms (Windows XP, Mac OS X, and Linux) and support multiple 3-D APIs (OpenGL on all platforms and Direct3D on Windows XP). I've written a few small "game engines" (and I use that term very lightly ;)) as class projects this semester and wanted to find out what other indie game engine developers think in terms of renderer abstraction. What do I mean? Well, as I see it, there are three basic models (though there could certainly be more):
Model #1:
Have a RenderContext class to manage creating/maintaining/destroying a rendering context. Some basic abstractions for the actual drawing are provided (e.g. RenderContext::BeginFrame(), RenderContext::EndFrame()). When it comes to actual drawing, you'll have various objects (e.g. Tree) with Draw() methods (e.g. Tree::Draw()) that use the 3-D API directly to draw themselves.
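A minimal sketch of what I mean by Model 1 (all names here are illustrative, not a real engine API, and the actual GL/D3D calls are replaced with a counter so the sketch stands alone):

```cpp
#include <cstddef>

// Hedged sketch of Model 1: the context only manages frame setup/teardown;
// each drawable owns its own API-specific Draw().
struct RenderContext {
    std::size_t calls = 0;          // stands in for a live GL/D3D context
    void BeginFrame() { ++calls; }  // e.g. clear buffers, set viewport
    void EndFrame()   { ++calls; }  // e.g. swap buffers
};

struct Tree {
    // Under Model 1 this body would hold raw glBegin/glVertex/etc. calls,
    // with a second, parallel implementation needed for D3D.
    void Draw(RenderContext& ctx) const { ++ctx.calls; }
};
```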
Model #2:
Have a Renderer class similar to the RenderContext class in model #1, but with Renderer::Draw() methods for each drawable. So, continuing with the Tree example, the Renderer would have a Renderer::Draw(const Tree& tree) method that would use its associated API to draw the tree.
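A sketch of Model 2, with the same caveat (Tree/Renderer are illustrative names; the draw itself is stubbed as a string so the sketch compiles without a real 3-D API):

```cpp
#include <string>

struct Tree { int vertexCount = 0; };

// Hedged sketch of Model 2: one Renderer::Draw overload per drawable type.
class Renderer {
public:
    // Every new drawable type forces a new overload here -- that's the
    // coupling, but it's also the single place all porting work lives.
    std::string Draw(const Tree& tree) {
        return "tree:" + std::to_string(tree.vertexCount); // stand-in for API calls
    }
};
```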
Model #3:
Have a Renderer class, similar to the Renderer in model #2, but this time the renderer only knows how to draw one thing: indexed lists of vertices/normals/texture coordinates/etc. So now, your Tree class would have a Tree::Draw(Renderer* renderer) method that would construct the indexed arrays and then pass them on to the Renderer.
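And a sketch of Model 3 (again illustrative names; a real backend would hand these arrays to glDrawElements or DrawIndexedPrimitive, here it just counts what was submitted):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Hedged sketch of Model 3: the Renderer accepts only indexed geometry.
class Renderer {
public:
    // The one entry point: drawables never touch the underlying API.
    std::size_t Submit(const std::vector<Vec3>& verts,
                       const std::vector<unsigned>& indices) {
        (void)verts;                     // a real backend binds these arrays
        submitted_ += indices.size();
        return indices.size();
    }
    std::size_t submitted() const { return submitted_; }
private:
    std::size_t submitted_ = 0;
};

struct Tree {
    void Draw(Renderer& r) const {
        // Build (or cache) the indexed arrays, then hand them over.
        std::vector<Vec3> verts = {{0,0,0}, {1,0,0}, {0,1,0}};
        std::vector<unsigned> idx = {0, 1, 2};
        r.Submit(verts, idx);
    }
};
```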
The purpose of all of this is to try to separate platform/API dependent code from game specific code.
So, which model do you prefer/use? Why?
I want to start out the discussion by giving my own thoughts.
I have implemented a simple engine based on models 2 and 3 (the one based on model 3 runs on WinXP/OpenGL and Mac OS X/OpenGL currently).
Having done that, I didn't particularly like model 2. It seemed to violate the rules of encapsulation. However, it makes it fairly easy to port to different rendering APIs. All of the changes occur in the Renderer class -- nothing else needs to be touched.
Model 3 has worked out alright, but it just feels restrictive. Porting it to OS X was trivial and porting it to use D3D on XP would also be trivial I expect, since so little of the code actually depends on the underlying 3-D API. During the course of the project though, there are times when it would have been nice to directly utilize the underlying 3-D API rather than just constructing indexed vertex lists to pass on to the Renderer. (Though, since rendering using arrays like that is much more efficient than other forms of rendering, it did force me to avoid hacking in some immediate rendering calls, which is a good thing I suppose).
Model 1 looks promising and is likely what I will base my next engine on (if nothing else, to have it as a comparison to the previous models I've used). By directly having access to the API, things like drawing lists can be used (more easily) when appropriate, and any other features of the 3-D API that would be useful. The only annoyance with model 1 is that porting everything over to D3D when the time comes will be much harder than under model 3. Every single drawable will have to have separate implementations of its Draw() method for OpenGL and D3D. (Though the GL code can remain constant across XP/OS X/Linux).
I use kind of a combination: a rendering context that is passed to the various objects, telling them to render themselves. I mainly use this because I use it in a variety of projects; I think model 2 would need me to specify different types of geometry (levels/models, etc.) beforehand.
I develop mine mainly by trying to work out what the best performance/quality would be, and then trying to generalise it enough to be usable. So vertex/index buffers are the main geometry things for me, although I've found it extremely useful for debugging purposes to have a kind of immediate mode: I can specify an array of Graphics::Vertex and draw it straight away without having to set up buffers.
I'd recommend you start out with a really simple but visible feature you know you need. Vertex/index buffers are a good place to start. Get them working in a couple of APIs, see what's common and what isn't, whittle down the features you need, and then try a wrapper. Mine looks a little like this:
#ifndef _PLATFORMGRAPHICS_H_
#define _PLATFORMGRAPHICS_H_

#include "asset/Colour.h"
#include "IImageFile.h"
#include "IFile.h"
#include "BMath.h"
#include "cg/cg.h"

#define VERTEX_2D        0x0001
#define VERTEX_3D        0x0002
#define VERTEX_COLOUR    0x0004
#define VERTEX_TEXTURE   0x0008
#define VERTEX_POINTLIST 0x0010
#define VERTEX_LINELIST  0x0020
#define VERTEX_LINESTRIP 0x0040
#define VERTEX_TRILIST   0x0080
#define VERTEX_TRISTRIP  0x0100

enum GRAPHICS_STATE
{
    GS_SCISSORTEST, GS_ALPHABLEND, GS_ALPHATEST, GS_LIGHTING,
    GS_MULTISAMPLE, GS_ZWRITE, GS_ZTEST, GS_WIREFRAME, GS_CULL,
    GS_SRC_BLEND, GS_DST_BLEND, GS_TEXTURE_ADDRESS,
    GS_LAST = 0xffffffff,
};

enum GRAPHICS_STATE_VALUE
{
    GSV_BLEND_ZERO, GSV_BLEND_ONE, GSV_BLEND_SRCCOLOR, GSV_BLEND_INVSRCCOLOR,
    GSV_BLEND_SRCALPHA, GSV_BLEND_INVSRCALPHA, GSV_BLEND_DESTALPHA,
    GSV_BLEND_INVDESTALPHA, GSV_BLEND_DESTCOLOR, GSV_BLEND_INVDESTCOLOR,
    GSV_ADDRESS_CLAMP, GSV_ADDRESS_REPEAT,
    GSV_CULL_CW, GSV_CULL_CCW, GSV_CULL_NONE,
    GSV_LAST = 0xffffffff,
};

enum GRAPHICS_STATE_TYPE
{
    GST_COLOROP, GST_COLORARG1, GST_COLORARG2,
    GST_ALPHAOP, GST_ALPHAARG1, GST_ALPHAARG2,
    GST_LAST = 0xffffffff,
};

enum GRAPHICS_STATE_OP
{
    GSO_DISABLE, GSO_SELECTARG1, GSO_SELECTARG2, GSO_MODULATE, GSO_ADD,
    GSO_CURRENT, GSO_TEXTURE, GSO_DIFFUSE, GSO_SUBTRACT,
    GSO_LAST = 0xffffffff,
};

enum GRAPHICS_FILTER
{
    GF_POINT, GF_LINEAR, GF_ANISOTROPIC,
    GF_LAST = 0xffffffff,
};

enum GRAPHICS_VERTEXFORMAT
{
    GV_XYZ      = 0x0001,
    GV_NORMAL   = 0x0002,
    GV_DIFFUSE  = 0x0004,
    GV_SPECULAR = 0x0008,
    GV_TEX0     = 0x0100,
    GV_TEX1     = 0x0200,
    GV_TEX2     = 0x0400,
    GV_TEX3     = 0x0800,
    GV_TEX4     = 0x1000,
    GV_TEX5     = 0x2000,
    GV_TEX6     = 0x4000,
    GV_TEX7     = 0x8000,
    GV_LAST     = 0xffffffff,
};

enum GRAPHICS_MATRIX
{
    GM_PROJECTION, GM_WORLD, GM_VIEW,
    GM_TEXTURE0, GM_TEXTURE1, GM_TEXTURE2, GM_TEXTURE3,
    GM_TEXTURE4, GM_TEXTURE5, GM_TEXTURE6, GM_TEXTURE7,
    GFXM_LAST = 0xffffffff,
};

struct DisplaySettings
{
    unsigned int    iWidth;
    unsigned int    iHeight;
    unsigned int    iBpp;
    bool            bFullscreen;
    unsigned int    iAntiAliasSamples;
    GRAPHICS_FILTER eTextureFilter;
    unsigned int    iFilterQuality;
};

class Material;

class Texture
{
public:
    virtual int  getHeight( void ) = 0;
    virtual int  getWidth( void ) = 0;
    virtual bool lock( unsigned char** bytes, int* pitch ) = 0;
    virtual void unlock( void ) = 0;
};

class Image
{
public:
    virtual int getHeight( void ) = 0;
    virtual int getWidth( void ) = 0;
};

class VertexBuffer
{
public:
    virtual bool lock( void** pVerts ) = 0;
    virtual void unlock( void ) = 0;
};

class IndexBuffer
{
public:
    virtual bool lock( unsigned short** pIndices ) = 0;
    virtual void unlock( void ) = 0;
};

class Graphics;

class Shader
{
public:
    virtual void setUniformParameterWVPMatrix( CGparameter, Graphics* ) = 0;
    virtual void setUniformParameter( CGparameter, float ) = 0;
    virtual void setUniformParameter4( CGparameter, float* ) = 0;
    virtual CGparameter getParameter( const char* ) = 0;
};

class Graphics
{
public:
    struct Rect   { long left, top, right, bottom; };
    struct Vertex { Math::Point3D p; Math::Point3D n; Colour c; Math::Point2D t0; };

    virtual ~Graphics(void) {}

    virtual const wchar_t* getRendererName( void ) = 0;
    virtual void start( const DisplaySettings* ) = 0;
    virtual void stop( void ) = 0;
    virtual int  getHeight( void ) = 0;
    virtual int  getWidth( void ) = 0;

    virtual void getMatrix( GRAPHICS_MATRIX type, Math::Matrix4x4* pMatrix ) = 0;
    virtual void setMatrix( GRAPHICS_MATRIX type, Math::Matrix4x4* pMatrix ) = 0;

    virtual Image*   getImage( LPIMAGEFILE ) = 0;
    virtual Image*   getImage( unsigned char* rawRBGA, unsigned int iWidth, unsigned int iHeight ) = 0;
    virtual Texture* getTexture( LPIMAGEFILE ) = 0;
    virtual Texture* getTexture( unsigned char* rawRBGA, unsigned int iWidth, unsigned int iHeight ) = 0;
    virtual Texture* getTexture( unsigned int iWidth, unsigned int iHeight ) = 0;
    virtual Texture* getRenderTargetTexture( unsigned int iWidth, unsigned int iHeight ) = 0;
    virtual void freeImage( Image* ) = 0;
    virtual void freeTexture( Texture* ) = 0;

    virtual VertexBuffer* getVertexBuffer( unsigned int iVertCount, unsigned int iVertSize, unsigned long dwVertFormat ) = 0;
    virtual void freeVertexBuffer( VertexBuffer* ) = 0;
    virtual IndexBuffer* getIndexBuffer( unsigned int iIndexCount ) = 0;
    virtual void freeIndexBuffer( IndexBuffer* ) = 0;
    virtual void setIndexBuffer( IndexBuffer* ) = 0;
    virtual unsigned long setVertexFormat( unsigned long dwFormat ) = 0;
    virtual void setVertexBuffer( VertexBuffer* ) = 0;

    virtual void enableState( GRAPHICS_STATE ) = 0;
    virtual void disableState( GRAPHICS_STATE ) = 0;
    virtual void setState( GRAPHICS_STATE, GRAPHICS_STATE_VALUE ) = 0;
    virtual void setTextureState( int iState, GRAPHICS_STATE_TYPE, GRAPHICS_STATE_OP ) = 0;

    virtual void clear( void ) = 0;
    virtual void clear( Colour* ) = 0;
    virtual void beginScene( void ) = 0;
    virtual void begin2DScene( void ) = 0;
    virtual void end2DScene( void ) = 0;
    virtual void endScene( void ) = 0;

    virtual void resetView( void ) = 0;
    virtual void translate( float x, float y, float z ) = 0;
    virtual void scale( float sx, float sy, float sz ) = 0;
    virtual void rotateX( float angle ) = 0;
    virtual void rotateY( float angle ) = 0;
    virtual void rotateZ( float angle ) = 0;

    virtual int blit( Image* hImage, int x, int y, Colour* c = 0 ) = 0;
    virtual int blit( Image* pImage, Rect* dest, Colour* c = 0 ) = 0;
    virtual int blit( Image* pImage, Rect* dest, Rect* src, Colour* c = 0 ) = 0;
    virtual int billboard( Texture* hTexture, Math::Point3D& Pos, float fSize = 1.0f, Colour* col = 0 ) = 0;

    virtual void clearClipRect( void ) = 0;
    virtual void getClipRect( int* x, int* y, int* w, int* h ) = 0;
    virtual void setClipRect( int x, int y, int w, int h ) = 0;
    virtual void getViewFrustum( Math::Frustum* pFrustum ) = 0;
    virtual void setCamera( Math::Point3D* Pos, Math::Point3D* LookAt, Math::Point3D* Up ) = 0;

    virtual Shader* getPixelShader( LPFILE ) = 0;
    virtual Shader* getVertexShader( LPFILE ) = 0;
    virtual void freePixelShader( Shader* ) = 0;
    virtual void freeVertexShader( Shader* ) = 0;
    virtual void setShader( Shader* ) = 0;

    virtual void clearTextures( void ) = 0;
    virtual void setTexture( int iStage, Texture* pTexture ) = 0;
    virtual void setMaterial( Material* pMaterial ) = 0;
    virtual void setRenderTarget( Texture* pTexture ) = 0;
    virtual void copyToRenderTarget( Texture* pRender ) = 0;

    virtual void drawPrimitive( Graphics::Vertex* Points, int NumPoints, unsigned long dwBitflag ) = 0;
    virtual void drawBufferedPrimitive( int iFirstIndex, int NumPoints, unsigned long dwBitflag ) = 0;
    virtual void drawIndexedPrimitive( int iFirstIndex, int iCount, unsigned long dwBitflag ) = 0;

    virtual void traceScreenPoint( Math::Point3D&, Math::Ray& ) = 0;
    virtual void traceWorldToScreen( Math::Point3D& ptin, Math::Point3D& ptout ) = 0;
};

#endif // _PLATFORMGRAPHICS_H_
(To establish a context, I am in the middle of implementing something that is somewhat similar to model 3. My experience is therefore rather limited.)
You speak of separating render code from game code, but both model 1 and 2 (but #1 especially) result in a very tight coupling. At worst, with model 1, every object knows how to draw itself. There the separation goes right out the window, and you'll have to hunt down all the api-specific stuff when you want to port it. If you want to isolate the api stuff, you'll want to abstract away the common features, like Bezben described it. Then the various implementations of that interface are api-specific while the users of the interface know exactly nothing about the underlying implementation. This looks nice on paper - I can't comment on how well it works with the differences between OpenGL and DirectX due to lack of experience with DirectX.
I've found that my DirectX implementation is a lot better, but that's mainly because I prefer the DirectX style and therefore know more about it. I'm still not entirely sure I'm using vertex buffers correctly with OpenGL; the documentation is less than abundant.
Have you considered having a side class for just the draw data for objects (geometry + texture data)? This class would be a member of a game object class and the render class could take it as such:
CRender::Draw( &gameObject.objectDrawData );
In this way all the game data is separate from the draw data, and the renderer could draw anything. The limitation is that anything you want to draw must be drawable in the manner your side class is drawn.
This is kind of a simplification of how it would really be. You could even have different types of draw data objects which inherit from a base type.
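A rough sketch of that side-class idea (names like DrawData and CRender are just illustrative, and the draw itself is reduced to counting triangles so it runs anywhere):

```cpp
#include <cstddef>
#include <vector>

// Hedged sketch: game state and draw data live in separate objects,
// so the renderer can draw anything that fits the DrawData shape.
struct DrawData {
    std::vector<float> vertices;   // flat x,y,z geometry
    int textureId = -1;            // texture handle
};

struct GameObject {
    int hitPoints = 100;           // game data the renderer never sees
    DrawData objectDrawData;       // draw data the game logic never touches
};

class CRender {
public:
    // Stand-in for the real submit: returns the triangle-vertex count drawn.
    std::size_t Draw(const DrawData* data) {
        return data->vertices.size() / 3;
    }
};
```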
Some excellent replies so far!
Bezben: That's actually fairly similar to my current codebase. Actually, it's more of a mix between yours and what acw83 describes. I have a struct Renderer::Op that contains pointers to arrays of vertices, normals, tex coords, etc.:
class Renderer {
    ...
    struct Op {
        std::vector<Vec3>* vertices;
        std::vector<Vec3>* normals;
        std::vector<Vec2>* tex_coords;
        std::vector<unsigned int>* indices;
        Material* material;
        Texture* texture;
        Primitive primitive;
    };

    virtual void render(const Op& op) = 0;
    ...
};

class GlRenderer : public Renderer {
    ...
    void render(const Op& op);
    ...
};

void GlRenderer::render(const Renderer::Op& op) {
    // set up vertex arrays and call glDrawElements here
}
So any object that wants to be drawn keeps a Renderer::Op structure containing the geometry and passes it on to the renderer when being drawn.
Quote:You speak of separating render code from game code, but both model 1 and 2 (but #1 especially) result in a very tight coupling. At worst, with model 1, every object knows how to draw itself. There the separation goes right out the window, and you'll have to hunt down all the api-specific stuff when you want to port it.
My thoughts exactly when I wrote my first two engines. I first did model 2 because it seemed like a good compromise. I wasn't limited to just vertex arrays (e.g. I could use gluQuadrics, since for these class projects we sometimes have a very limited amount of time ;)).
Even so, I am still thinking it is possible to maintain some sort of separation between the engine and game code, though. For example, a good majority of things that will be drawn will just be vertex arrays loaded in from a 3-D modelling program (such as Maya, Blender, or Milkshape). So, all I have to do is keep some sort of Model class that knows how to draw itself with each API. Most entities then will just contain a Model, and Entity::Draw can just call Model::Draw to do the actual API calls. Thus, higher level entities (e.g. a Desk or a Chair or even a House) need not know anything about the API. Its underlying model will.
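A sketch of that arrangement (all names are illustrative, and the actual API calls are stubbed as strings so the sketch stands alone): only Model has per-API subclasses; Entity and everything above it stay API-agnostic.

```cpp
#include <memory>
#include <string>

class Model {
public:
    virtual ~Model() = default;
    virtual std::string Draw() const = 0;   // API calls live in the subclasses
};

class GlModel : public Model {
public:
    std::string Draw() const override { return "gl"; }   // glDrawElements etc.
};

class D3dModel : public Model {
public:
    std::string Draw() const override { return "d3d"; }  // DrawIndexedPrimitive etc.
};

// A Desk, Chair, or House is just an Entity with a Model -- no API knowledge.
class Entity {
public:
    explicit Entity(std::unique_ptr<Model> m) : model_(std::move(m)) {}
    std::string Draw() const { return model_->Draw(); }
private:
    std::unique_ptr<Model> model_;
};
```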
Good to see that I was on the right track though. In the end, I think model 3 is probably the best way to write a portable game engine, and most of you seem to agree so far.
Quote:Original post by penwan
Even so, I am still thinking it is possible to maintain some sort of separation between the engine and game code, though. For example, a good majority of things that will be drawn will just be vertex arrays loaded in from a 3-D modelling program (such as Maya, Blender, or Milkshape). So, all I have to do is keep some sort of Model class that knows how to draw itself with each API. Most entities then will just contain a Model, and Entity::Draw can just call Model::Draw to do the actual API calls. Thus, higher level entities (e.g. a Desk or a Chair or even a House) need not know anything about the API. Its underlying model will.
There still is one problem with this (and I don't have a solution for it :S) and that is the minimization of state changes. Given that anything but the most simple model probably consists of multiple meshes, each with a corresponding material and texture (possibly multiple textures if you're doing anything like bumpmapping), the approach of per-model rendering will result in a lot of unnecessary state changes. But maybe I'm on the wrong track here, and having a one-mesh, one-texture model is a better approach? In which case you only need to care about the order in which your models render themselves.
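One common way around this is to queue per-mesh draw calls, sort the queue by a material/texture key, and only change state when the key changes. A hedged sketch (all names illustrative; the API calls are comments):

```cpp
#include <algorithm>
#include <tuple>
#include <vector>

struct MeshOp {
    int materialId;   // sort key: material first...
    int textureId;    // ...then texture
    int meshId;       // which mesh to submit
};

// Sorts the queue and returns how many state changes it would cost.
inline int FlushSorted(std::vector<MeshOp>& queue) {
    std::sort(queue.begin(), queue.end(), [](const MeshOp& a, const MeshOp& b) {
        return std::tie(a.materialId, a.textureId) < std::tie(b.materialId, b.textureId);
    });
    int changes = 0, lastMat = -1, lastTex = -1;
    for (const MeshOp& op : queue) {
        if (op.materialId != lastMat || op.textureId != lastTex) {
            ++changes;                 // setMaterial/setTexture would go here
            lastMat = op.materialId;
            lastTex = op.textureId;
        }
        // submit op.meshId to the API here
    }
    return changes;
}
```

Four meshes alternating between two materials cost four state changes unsorted, but only two once sorted.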
Quote:Original post by acw83
Have you considered having a side class for just the draw data for objects (geometry + texture data)? This class would be a member of a game object class and the render class could take it as such:
CRender::Draw( &gameObject.objectDrawData );
In this way all the game data is separate from the draw data, and the renderer could draw anything. The limitation is that anything you want to draw must be drawable in the manner your side class is drawn.
This is kind of a simplication of how it would really be. You could even have different types of draw data objects which inherit from a base type.
Again, that works if you know what you are drawing in advance. It breaks down a little if you have shaders that need vertices with two sets of texture coordinates or something...
class Renderer {
    ...
    struct Op {
        std::vector<Vec3>* vertices;
        std::vector<Vec3>* normals;
        std::vector<Vec2>* tex_coords;
        std::vector<unsigned int>* indices;
        Material* material;
        Texture* texture;
        Primitive primitive;
    };

    virtual void render(const Op& op) = 0;
    ...
};
Using vectors like that, I would imagine it being a little bad on the performance front. Maybe you could stick a pre-processing stage in your render pipeline to swizzle it to a nice format...
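Something like this pre-processing pass, maybe (Vec3/Vec2 and the interleaved x,y,z,u,v layout are just an assumed example; a real pipeline would match whatever vertex format the API wants):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Flatten separate position/texcoord vectors into one contiguous
// interleaved array (x,y,z,u,v per vertex) the API can consume directly.
inline std::vector<float> Interleave(const std::vector<Vec3>& pos,
                                     const std::vector<Vec2>& tex) {
    std::vector<float> out;
    out.reserve(pos.size() * 5);          // one allocation, no push_back churn
    for (std::size_t i = 0; i < pos.size(); ++i) {
        out.push_back(pos[i].x);
        out.push_back(pos[i].y);
        out.push_back(pos[i].z);
        out.push_back(tex[i].u);
        out.push_back(tex[i].v);
    }
    return out;
}
```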
Yes, I have a feeling that using vectors like that is adversely affecting performance -- though not for statically sized arrays. There is little-to-no overhead in retrieving a pointer to the start of a std::vector in memory; the overhead comes when building the arrays (all of the push_back operations are probably killing performance). This really only affects loading of models while playing (not a concern right now -- though one for the future) and objects that change. For example, I am using ROAM for CLOD terrain, so every time the view frustum changes I pretty much have to nuke the vertex arrays, re-tessellate, and recreate them. I think using std::vector here is probably causing a huge performance hit; it would be much more efficient to just use statically sized arrays. std::vector could still be used for models, though, as they (currently) don't change at run time.
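For what it's worth, much of the push_back cost can be avoided with std::vector::reserve() when the final size is known after tessellation -- a quick sketch (BuildVerts is a made-up helper, not part of any engine above):

```cpp
#include <cstddef>
#include <vector>

// reserve() makes repeated push_back cost a single allocation up front,
// which matters when rebuilding arrays every time the frustum changes.
inline std::vector<float> BuildVerts(std::size_t count) {
    std::vector<float> verts;
    verts.reserve(count);              // single allocation up front
    for (std::size_t i = 0; i < count; ++i)
        verts.push_back(float(i));     // no reallocation inside the loop
    return verts;
}
```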
This topic is closed to new replies.