Graphics engine design. A bit confused..


Hello folks :) First of all, I'm not an engine/framework lover and I'm not out to make a mega engine or anything like that; I just want to wrap some things (initialization, some drawing functions, etc.) into one class that will be my so-called "graphics engine". Wrapping things into a class is not the problem. The problem is this: I want to have some objects, actually only 3. One will be basic shapes like a cube, sphere, parallelepiped, etc. The second will be an MD2 (Quake 2 model format) object, and the last a BSP (Quake/Half-Life map format) object. The trouble is that each of them will have a different Render method. A cube will be rendered as vertices, MD2 will be rendered using OpenGL commands found inside the file itself, and BSP I don't know yet :) I assume it will be vertex arrays.

And finally the actual problem: I can give every object its own Render() method, but then they become isolated from the engine; I mean the engine MUST render the object, not the object itself. The second option is to add Render methods to the Engine, things like RenderObject(Object& o), RenderMD2(MD2Model& md2), RenderBSP(BSPModel& bsp). This fits the "engine must render objects" structure better, but in this case the objects are not managed through the engine (good or bad, I don't know). The last option, which I used in my previous project, is to create an std::list of a base class Object; every model that needs to be rendered is added to this list, and then the Render() method of my engine simply runs over the list and calls each object's Render() method.

These are the options I came up with. Can someone help me decide which design is best? How do you do it? How do others do it? Thanks a lot :)

There is no single design that is better than all the others. Architectures in which objects are responsible for rendering themselves are pretty common.

Another alternative (which I have seen in larger-scale frameworks) is to decouple the rendering part from the object into its own generic, self-contained object, which specifies how the object should be rendered and which resources it uses:

struct RenderAttributes {
    PrimitiveType   primitiveType;
    ShaderInstance* shader;
    VertexBuffer*   vertexBuffer;
    IndexBuffer*    indexBuffer;
};

Every object has RenderAttributes that the renderer can grab and do whatever it needs to with. This also helps with sorting rendering by transparency, shader type, and so on.
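
For illustration, here is a rough sketch (my own, not the poster's code) of how a renderer could consume those attributes, assuming the RenderAttributes struct above plus a hypothetical Renderable interface:

#include <algorithm>
#include <vector>

// Hypothetical interface: anything drawable exposes its RenderAttributes.
class Renderable {
public:
    virtual ~Renderable() {}
    virtual const RenderAttributes& renderAttributes() const = 0;
};

class Renderer {
public:
    void render(std::vector<Renderable*>& objects) {
        // Sort by shader (and, in a real renderer, transparency) to cut state changes.
        std::sort(objects.begin(), objects.end(), byShader);
        for (std::vector<Renderable*>::iterator it = objects.begin();
             it != objects.end(); ++it) {
            draw((*it)->renderAttributes());
        }
    }

private:
    static bool byShader(const Renderable* a, const Renderable* b) {
        return a->renderAttributes().shader < b->renderAttributes().shader;
    }

    void draw(const RenderAttributes& attrs) {
        // Bind attrs.shader, attrs.vertexBuffer and attrs.indexBuffer,
        // then issue a draw call for attrs.primitiveType.
    }
};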

Hi,

I don't have a lot of time at the moment, but I can offer an idea for a nice way to render. It's called the visitor design pattern, and it lets you extend a class's behaviour without modifying the class each time you add a new renderable object. Here's how it works.

We have an object dedicated to rendering these objects from their data. This object is called the visitor. When we get to rendering, we 'send' the visitor to everything that can be rendered. Each renderable then calls a method on the visitor with itself as the parameter, sending itself back to the visitor. Seems a bit pointless, right? Well, the magic is that in languages with overloading and polymorphism this causes a different code path to be executed depending on the class.

Let's have a look at an example:


class MD2;
class BSPModel;

class RenderingVisitor {
public:
    void visit(MD2* md2) {
        // Render code specific to MD2 models goes here.
    }

    void visit(BSPModel* bsp) {
        // Render code specific to BSP models goes here.
    }
};

class MD2 {
public:
    void accept(RenderingVisitor& rv) {
        rv.visit(this);
    }
};

class BSPModel {
public:
    void accept(RenderingVisitor& rv) {
        rv.visit(this);
    }
};





Looks pretty cool so far! Now we have a lot of common functionality grouped together, which hopefully starts making our code more organised. We can go even further by making both BSPModel and MD2 inherit from a base class, maybe Renderable. Then we put all our renderable objects in a list, and it's very easy to render them all. Let's see that:


#include <vector>

// Base class for everything the engine can render.
class Renderable {
public:
    virtual ~Renderable() {}
    virtual void accept(RenderingVisitor& rv) = 0;
};
// MD2 and BSPModel would now derive from Renderable and keep their accept().

class Engine {
private:
    RenderingVisitor rv;
    std::vector<Renderable*> renderableObjects;

public:
    void render() {
        for (std::vector<Renderable*>::iterator it = renderableObjects.begin();
             it != renderableObjects.end(); ++it) {
            (*it)->accept(rv);
        }
    }
};




You can go even further in some languages that allow for nice meta-programming (you might be able to do it in C++ with templates, but my knowledge of C++ is a bit rusty atm), like in Python you could add a class decorator to save you having to write that accept method.
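
For what it's worth, one way to do this with C++ templates (my own sketch, not part of the original reply) is a small CRTP helper that supplies accept() for you:

// CRTP helper: derives from Renderable and provides accept() so each concrete
// class doesn't have to repeat it. MD2/BSPModel are re-declared here purely
// for illustration.
template <typename Derived>
class AcceptHelper : public Renderable {
public:
    virtual void accept(RenderingVisitor& rv) {
        rv.visit(static_cast<Derived*>(this));
    }
};

class MD2 : public AcceptHelper<MD2> { /* MD2 data and methods */ };
class BSPModel : public AcceptHelper<BSPModel> { /* BSP data and methods */ };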

I hope this sheds some light on your problem :)

[Edited by - Cycles on December 4, 2007 6:31:50 PM]

ldeej
Well, I don't quite get your idea. You mean every object has a RenderAttributes object that the renderer grabs and operates on?
If so, that sounds pretty cool, because then I can change how an object is rendered.

Cycles
Nice idea! Looks clean and pretty configurable.

Oh, but there is another problem. If the renderable object is registered in the engine's so-called "render list", how should I operate on that object? I mean, the object's class can have functions to control its animation (like the design I did for MD2). How should I control the animation? Using some method in the engine like getObject(ID).setAnimation(someAnim);? Or operating on an outside pointer?
One sec, I'll write some code.


// -------- main --------
// Engine was initialised already.

MD2* model = new MD2("modelfile.md2" /*, other parameters */);
int id = engine.RegisterObject(model);

// ------ Option 1 ------
engine.getObject(id)->setAnimation(someAnim);
engine.render(); // here we run through the whole render list

// ------ Option 2 ------
model->setAnim(someAnim);
engine.render(); // here we run through the whole render list




Hmm, actually it's quite obvious that it's easier and better to use option 2.

I am open to other solutions, or to support for the ones already given.
It would also be nice to know how the pros do it, I mean how big engines are designed; maybe someone knows and is ready to share.

Thanks again.

Quote:

I want to have some objects, actually only 3. One will be basic shapes like a cube, sphere, parallelepiped, etc. The second will be an MD2 (Quake 2 model format) object, and the last a BSP (Quake/Half-Life map format) object. The trouble is that each of them will have a different Render method. A cube will be rendered as vertices, MD2 will be rendered using OpenGL commands found inside the file itself, and BSP I don't know yet :)

This is actually a bad idea. You should, instead, convert those objects (or generate them) into a single format processed by your "engine." The reason is that your method does not scale well. As you need to add new effects, or slight modifications to the render pipeline (which will happen more and more frequently as your framework is used), you will need to replicate that functionality in a domain-specific fashion in multiple places. This is redundant, and makes for poor maintenance.

Selecting a single render path increases maintainability, as changes to that path affect all types of objects, allowing faster iteration. Furthermore, the code required to add a new type of object is simply the code to transform or generate an internal representation of that object. Domain-specific concepts are layered on top (for example, the BSP tree's culling routines are applied to select a set of internal primitive geometry that is then submitted like any other geometry).

It is slightly more work up front, but only slightly. The benefit is worth it.
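
As a small illustration of that layering (my own sketch; the Camera, Renderer::submit and RenderAttributes names are assumptions, not jpetrie's code), the BSP-specific code only decides what is visible; what it hands the renderer is the same generic geometry every other object type uses:

#include <vector>

class BspWorld {
public:
    // Walk the BSP/PVS data and append the visible chunks of level geometry.
    void collectVisible(const Camera& camera, std::vector<RenderAttributes>& out) const;
};

void renderFrame(Renderer& renderer, const BspWorld& world, const Camera& camera,
                 const std::vector<RenderAttributes>& models) {
    std::vector<RenderAttributes> items(models);   // characters, props, ...
    world.collectVisible(camera, items);           // culled level geometry
    for (size_t i = 0; i < items.size(); ++i)
        renderer.submit(items[i]);                 // one render path for everything
}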

jpetrie
Well, I agree with you, but those files were created to serve their own needs: MD2 was made to hold an animated character format and BSP was made to hold level data.
I can't convert one into the other (well, actually I can).
Of course it's quite possible that in the future, when I work on my own game, I will develop my own file format; for now I'm learning the API (OpenGL).
Thanks again :)

I think he means internally: create your own engine format. Instead of cubes holding vertices directly, MD2s holding the GL commands, BSPs holding something else, and so on, have one unified method of rendering. Instead of separate render classes, you now have separate loader classes, all of which load into a unified render class. This still won't scale well for larger projects (in a full game engine, your world is typically going to require custom code for stuff like culling, collision, etc., so it's not going to be as simple as rendering the whole mesh like you can with models), but it works well for starters.
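
A minimal sketch of that idea (the names here are my own, not an established API): each format gets a loader that produces the same internal mesh structure, and the renderer only ever sees that structure.

#include <string>
#include <vector>

struct Vertex { float position[3]; float normal[3]; float uv[2]; };

struct MeshData {
    std::vector<Vertex>         vertices;
    std::vector<unsigned short> indices;
    std::string                 textureName;
};

// Separate loader per format; all of them emit the unified MeshData.
MeshData loadMD2(const std::string& path);   // parse .md2, emit triangles
MeshData loadBSP(const std::string& path);   // parse .bsp, emit triangles
MeshData makeCube(float size);               // generated procedurally

// The renderer has exactly one path: it draws MeshData, whatever produced it.
class Renderer {
public:
    void draw(const MeshData& mesh);
};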

All those file formats store things as triangles, one way or another. For example, the .md2 file has a list of vertices and a list of triangles; each triangle has 3 vertex references. The same is true of the .bsp file format, which has a series of brushes each containing a list of triangles, and a series of brush references stating where each brush is to be drawn.

The cube is a separate problem. You won't find yourself using cubes once you program your game to load meshes from file, so keep that code separate.

Everything else uses triangles. Start from the bottom up, writing low level functions first.

Firstly, create a triangle class and program the renderer to receive a triangle and render it.

Then create a camera class, consisting of a position and a view vector and some methods to move it around. Program your renderer to receive a camera and set the view frustum accordingly.

Give your renderer class the ability to store different textures inside an std::map, and write some methods to switch between the loaded textures. Write a method to get the OpenGL texture ID when you pass in a string.

Then modify your triangle class so that it has a texture ID. The renderer should switch between the textures it has loaded automatically. Your triangle class should use the get_tex_id method to choose which texture ID it has.

Then, give the renderer the ability to receive an std::vector of triangles and sort-copy them into some kind of optimized storage structure, e.g. an octree. It would do this for all the triangles it receives.

What you then do is pass in an std::vector of triangles and render them in the order they are received.

Then write methods inside the .md2 model class to pass the triangles to the renderer as an std::vector, and then remove all GL code from the MD2 class. Do the same for your other mesh formats.

So, your draw code would look like this:

renderer.apply_camera( character_1.get_camera() );
character_1.send_triangles( renderer );
bsp_level1.send_triangles ( renderer );

renderer.optimize_triangles();
renderer.render_scene();
renderer.flip_buffers();
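
For concreteness, the interfaces described above might look roughly like this (a sketch only; the names are my guesses matching the pseudocode, not a real API):

#include <map>
#include <string>
#include <vector>

class Camera; // position, view vector and movement methods, as described above

struct Triangle {
    float        positions[3][3];  // three vertex positions
    float        uvs[3][2];        // three texture coordinates
    unsigned int textureId;        // OpenGL texture to bind for this triangle
};

class Renderer {
public:
    unsigned int get_tex_id(const std::string& name);        // look up a loaded texture
    void apply_camera(const Camera& camera);                  // set the view frustum
    void add_triangles(const std::vector<Triangle>& tris);    // receive geometry to draw
    void optimize_triangles();                                 // sort/bucket (octree, texture, ...)
    void render_scene();
    void flip_buffers();

private:
    std::map<std::string, unsigned int> textures_;             // texture name -> GL id
    std::vector<Triangle>               triangles_;
};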

Representation as "a list of triangles," as in a triangle class, is relatively useless beyond some initial sanity checking that your render path works and pixels are appearing. Vertex and index buffers (or the appropriate GL concepts) are the way to go.

Additionally, "optimizing" the triangle list -- while a non-issue if you don't bother storing your data in a useless internal format -- should only be done once. Ideally, at asset prep time (during build).

The point of an internal representation of the data is to have a way to quickly and efficiently submit the data to the rendering API in the format the rendering API wants. This internal format typically consists of a vertex and index buffer, a shader, textures, and appropriate state. That's all.
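
As a rough illustration of how small that internal representation can be (my own sketch, with assumed field names and GL object handles):

struct RenderState {
    bool depthTest;
    bool blending;
    // ...whatever other state the draw needs
};

struct RenderItem {
    unsigned int vertexBuffer;   // GL buffer object (glGenBuffers)
    unsigned int indexBuffer;    // GL buffer object
    unsigned int shaderProgram;  // GL program object
    unsigned int textures[4];    // GL texture objects
    unsigned int indexCount;     // number of indices to draw
    RenderState  state;
};

// The renderer's whole job is then: bind everything in the item, draw, repeat.
void drawItem(const RenderItem& item);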

Some information for you. First, I wrote these various articles regarding starting a 3D graphics engine a long time ago:

Simple Scene Graph
Simple Camera (OpenGL)
Advanced Camera
Canonical Game Loop

There are a number of other articles that might help as well on that general page: http://www.mindcontrol.org/~hplus/graphics/


Second, here is an API for a real, live, mini-scene-graph ("graphics engine") that I've implemented in the past. You might get inspired, because implementing this API is fairly simple, and any kind of graphics you want to render is easily described as a renderable (vertex/index buffers plus material). This makes you totally insulated from the underlying graphics API; in fact, it could be Direct3D, OpenGL or ASCII and the users wouldn't know.

Pay special attention to SgRenderable and SgMaterial, as that's how actual data goes into the system.


// Note that there are dependencies like Vectors, Matrices, smart
// pointers etc that aren't included here.

struct SgSceneParameters {
SG_API SgSceneParameters();
SgColor SceneAmbient;
SgColor FogColor;
float FogHalfDistance;
};

class Sg : public DisposeSignal {
public:
virtual SgNode *Root() = 0;

// Resources are owned by whomever calls New to create them.
virtual SgNode *NewNode(char const *name) = 0;
virtual SgRenderable *NewRenderable() = 0;
virtual SgMaterial *NewMaterial() = 0;
virtual SgTexture2D *NewTexture2D() = 0;
virtual SgCamera *NewCamera() = 0;
virtual SgDirectionalLight *NewDirectionalLight() = 0;
virtual SgSpotLight *NewSpotLight() = 0;
virtual SgMaterialSource *NewMaterialSource() = 0;

virtual void AddPass(char const *name, char const *path) = 0;
virtual size_t CountPasses() = 0;
virtual char const *GetPassName(unsigned int pass) = 0;
//! GetPassIndex will not throw on error; instead it will return -1.
virtual int GetPassIndex(char const *name) = 0;
virtual SgMaterialSource *DefaultMaterialSource(unsigned int pass) = 0;

virtual void AddPrepareSignal(Signal *s) = 0;
virtual void RemovePrepareSignal(Signal *s) = 0;
virtual double PreparedTime() = 0;
virtual void Present(SgCamera *camera, SgNode *node, double time) = 0;

virtual void SetUpdateTime(double time) = 0;
virtual double UpdateTime() = 0;
virtual void SetSceneParameters(SgSceneParameters const &params) = 0;
virtual SgSceneParameters const &SceneParameters() = 0;
};

class SgNode : public DisposeSignal {
public:
virtual void Remove() = 0;
virtual SgNode *Parent() = 0;
virtual void AddChild(SgNode *cld) = 0;
virtual Iterator<SgNode *> *Children() = 0;
virtual void GetWorldTransform(Matrix4 *mat, double time) = 0;
virtual void GetLocalTransform(Matrix4 *mat, double time) = 0;
virtual void SetLocalPos(Vector3 const &vec) = 0;
virtual void SetLocalOri(Quaternion const &ori) = 0;
virtual void SetLocalScale(float scale) = 0;
virtual void SetLocalPosOriScale(Vector3 const &vec, Quaternion const &q, float s) = 0;
virtual void AddObject(SgObject *r) = 0;
virtual void RemoveObject(SgObject *r) = 0;
virtual Iterator<SgObject *> *Objects() = 0;
virtual char const *Name() = 0;
virtual void SetName(char const *name) = 0;
//! What time was the node last updated at? That's a good time to ask for information at,
//! if you want a "stable" answer (not interpolated). \see GetLocalTransform().
virtual double UpdateTime() = 0;

protected:
virtual void SetParent(SgNode *par) = 0;
SG_API void AddNodeToObject(SgObject *obj);
SG_API void RemoveNodeFromObject(SgObject *obj);
};

class SgObject : public DisposeSignal {
public:
virtual Sg *SceneGraph() = 0;
virtual Iterator<SgNode *> *Nodes() = 0;

virtual void *As(std::type_info const &type) = 0;
template<typename T> T *As() {
return static_cast<T *>(As(typeid(T)));
}
virtual void SgVisit(SgNode *owner, unsigned int pass) = 0;
protected:
friend class SgNode;
virtual void AddNode(SgNode *node) = 0;
virtual void RemoveNode(SgNode *node) = 0;
};

class SgCamera : public SgObject {
public:
virtual void SetParams(float fov, float width, float height, float n, float f) = 0;
virtual void GetParams(float *fov, float *width, float *height, float *n, float *f) = 0;
};

class SgDirectionalLight : public SgObject {
public:
virtual void SetParams(SgColor const &color) = 0;
virtual void GetParams(SgColor *oColor) = 0;
};

class SgSpotLight : public SgObject {
public:
virtual void SetParams(SgColor const &color, float fov, float distance) = 0;
virtual void GetParams(SgColor *oColor, float *fov, float *distance) = 0;
};

enum SgPrimitiveType {
spNull,
spTriangleList,
spTriangleFan,
spTriangleStrip,
spLineList,
spLineStrip,
};

class SgRenderable : public SgObject {
public:
virtual void SetVertexDecl(SgVertexDecl *vd) = 0;
virtual SgVertexDecl *VertexDecl() = 0;
virtual size_t CountVertexStreams() = 0;
virtual SgVertexStream *VertexStream(size_t index) = 0;
virtual void SetVertexCount(size_t count) = 0;
virtual size_t CountVertices() = 0;

virtual void SetIndexCount(size_t count) = 0;
virtual size_t CountIndices() = 0;
virtual void SetIndices(unsigned short const *ibase, size_t count) = 0;
virtual void LockIndices(size_t offset, size_t count, unsigned short **oPtr) = 0;
virtual void UnlockIndices() = 0;

virtual size_t CountRanges() = 0;
virtual void SetRangeCount(size_t count) = 0;
virtual void GetRange(size_t index, SgPrimitiveType *oType, unsigned int *oFirst, unsigned int *nPrims, SgMaterial **oMaterial) = 0;
virtual void SetRange(size_t index, SgPrimitiveType type, unsigned int first, unsigned int nPrims, SgMaterial *material) = 0;
};

enum SgParameterType {
ptNull,
ptFloat,
ptFloat2,
ptFloat3,
ptFloat4,
ptMatrix4,
ptTexture2D,
};

class SgMaterialSource : public DisposeSignal {
public:
virtual char const *Identifier() = 0;
virtual void SetSource(char const *path) = 0;
};

class SgMaterial : public DisposeSignal {
public:
virtual size_t CountParameters() = 0;
virtual char const *ParameterName(size_t ix) = 0;
virtual SgParameterType ParameterType(size_t ix) = 0;
virtual bool FindParameter(char const *name, size_t *oIx, SgParameterType *oType, size_t *aCount) = 0;
virtual void SetParameter(size_t ix, float f) = 0;
virtual void GetParameter(size_t ix, float *f) = 0;
virtual void SetParameter(size_t ix, float const *f, size_t count) = 0;
virtual void GetParameter(size_t ix, float *f, size_t count) = 0;
virtual void SetParameter(size_t ix, Vector3 const &v3) = 0;
virtual void GetParameter(size_t ix, Vector3 *v3) = 0;
virtual void SetParameter(size_t ix, Quaternion const &q) = 0;
virtual void GetParameter(size_t ix, Quaternion *q) = 0;
virtual void SetParameter(size_t ix, Matrix4 const &m4) = 0;
virtual void GetParameter(size_t ix, Matrix4 *m4) = 0;
virtual void SetParameter(size_t ix, SgTexture2D *tex) = 0;
virtual void GetParameter(size_t ix, SgTexture2D **tex) = 0;

virtual void LoadFromSource(SgMaterialSource *src) = 0;
};

enum SgTextureType {
ttNull,
ttAlpha8,
ttRgb32,
ttRgba32,
ttDxt1,
ttDxt5,
};

class SgTexture2D : public DisposeSignal {
public:
virtual bool GetInfo(SgTextureType *oType, unsigned int *oWidth, unsigned int *oHeight, unsigned int *oMipCount) = 0;
virtual void SetInfo(SgTextureType type, unsigned int width, unsigned int height, unsigned int mipCount = 100) = 0;
virtual void SetData(unsigned int mipLevel, void const *src, unsigned int top, unsigned int left, unsigned int width, unsigned int height) = 0;
};

enum SgCompTypeCode {
ctNull,
ctUnsignedByte,
ctColor,
ctFloat,
};

enum SgCompUsage {
vuNull,
vuPosition,
vuNormal,
vuTexCoord,
vuColor,
vuTangent,
vuBitangent,
vuWeight,
vuIndex,
};


class SgComp {
public:
SG_API SgComp();
SG_API SgComp(int type, int count, int usage, int index, int stream, int offset);
int Type;
int Count;
int Usage;
int Index;
int Stream;
int Offset;
SG_API SgComp &operator+(SgComp &other);
private:
friend class SgVertexDecl;
SgComp *next_;
};

template<typename T> struct SgCompType;
template<> struct SgCompType<float> {
enum { Value = ctFloat, Count = 1 };
};
template<size_t S> struct SgCompType<float[S]> {
enum { Value = ctFloat, Count = S };
};
template<> struct SgCompType<unsigned int> {
enum { Value = ctColor, Count = 1 };
};
template<> struct SgCompType<unsigned char> {
enum { Value = ctUnsignedByte, Count = 1 };
};
template<size_t S> struct SgCompType<unsigned char[S]> {
enum { Value = ctUnsignedByte, Count = S };
};
template<> struct SgCompType<Vector3> {
enum { Value = ctFloat, Count = 3 };
};
template<> struct SgCompType<Quaternion> {
enum { Value = ctFloat, Count = 4 };
};

template<typename T, int I = 0, int S = 0, size_t O = 0> class Position : SgComp {
public:
Position() : SgComp(SgCompType<T>::Value, SgCompType<T>::Count, vuPosition, I, S, O) {}
};

template<typename T, int I = 0, int S = 0, size_t O = 0> class Normal : SgComp {
public:
Normal() : SgComp(SgCompType<T>::Value, SgCompType<T>::Count, vuNormal, I, S, O) {}
};

template<typename T, int I = 0, int S = 0, size_t O = 0> class TexCoord : SgComp {
public:
TexCoord() : SgComp(SgCompType<T>::Value, SgCompType<T>::Count, vuTexCoord, I, S, O) {}
};

template<typename T, int I = 0, int S = 0, size_t O = 0> class Color : SgComp {
public:
Color() : SgComp(SgCompType<T>::Value, SgCompType<T>::Count, vuColor, I, S, O) {}
};

template<int I, int S, size_t O> class Color<unsigned int, I, S, O> : SgComp {
public:
Color() : SgComp(ctColor, 1, vuColor, I, S, O) {}
};

template<typename T, int I = 0, int S = 0, size_t O = 0> class Tangent : SgComp {
public:
Tangent() : SgComp(SgCompType<T>::Value, SgCompType<T>::Count, vuTangent, I, S, O) {}
};

template<typename T, int I = 0, int S = 0, size_t O = 0> class Bitangent : SgComp {
public:
Bitangent() : SgComp(SgCompType<T>::Value, SgCompType<T>::Count, vuBitangent, I, S, O) {}
};

template<typename T, int I = 0, int S = 0, size_t O = 0> class Weight : SgComp {
public:
Weight() : SgComp(SgCompType<T>::Value, SgCompType<T>::Count, vuWeight, I, S, O) {}
};

template<typename T, int I = 0, int S = 0, size_t O = 0> class Index : SgComp {
public:
Index() : SgComp(SgCompType<T>::Value, SgCompType<T>::Count, vuIndex, I, S, O) {}
};



//! SgVertexDecl is easy to define:
//! SgVertexDecl vd(Position<Vector3> + Color<unsigned int>);
//! The vertex declaration will automatically calculate offsets.
class SgVertexDecl {
public:
SG_API SgVertexDecl(SgComp const &sc);
SG_API ~SgVertexDecl();
SG_API size_t CountComponents() const;
SG_API SgComp const &ComponentAt(size_t index) const;
SG_API unsigned int StreamCount() const;
SG_API unsigned int StreamStride(size_t index) const;
private:
size_t count_;
size_t streamCount_;
size_t stride_[4];
void *data_;
};

class SgVertexStream {
public:
virtual size_t Stride() = 0;
virtual size_t Size() = 0;
virtual void CopyDataIn(void const *base, size_t offset, size_t size) = 0;
virtual void Lock(size_t offset, size_t size, void **oPtr) = 0;
virtual void Unlock() = 0;
};


SG_API Sg *NewSg(SgHostWindowRef ref);
SG_API SgTexture2D *SgLoadTexture2D(Sg *sg, char const *path);
SG_API SgNode *SgNodeChildByName(SgNode *node, char const *name);
SG_API SgFrustum SgCalculateFrustum(SgCamera *camera, SgNode *node, double time);
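
To make the data flow concrete, here is a purely hypothetical usage sketch (my own, not from the post; it assumes an implementation behind NewSg(), a valid window reference, and made-up pass and variable names):

Sg *sg = NewSg(hostWindowRef);                 // hostWindowRef: assumed platform window
sg->AddPass("main", "shaders/main");           // pass name and path are made up

SgNode *node = sg->NewNode("player");
sg->Root()->AddChild(node);

SgRenderable *geom = sg->NewRenderable();
// ...set the vertex declaration, fill geom->VertexStream(0) and the indices
// with data converted from the MD2/BSP/cube loaders...

SgMaterial *mat = sg->NewMaterial();
mat->LoadFromSource(sg->DefaultMaterialSource(0));
geom->SetRangeCount(1);
geom->SetRange(0, spTriangleList, 0, triangleCount, mat);  // triangleCount: assumed

node->AddObject(geom);

SgCamera *cam = sg->NewCamera();
cam->SetParams(60.0f, 4.0f, 3.0f, 0.1f, 1000.0f);
node->AddObject(cam);

// Each frame: update node transforms, then hand the camera and root to Present().
sg->Present(cam, sg->Root(), currentTime);     // currentTime: assumed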



