Archived

This topic is now archived and is closed to further replies.

hansi pirman

Objects drawing themself -or- getting drawn ?

Hi there! I have a design-specific question. In most books I've read so far, every 3D object or effect (particles, etc.) draws itself, like this: C3DObject->draw(RenderDevice* rd); or, much worse (IMHO): C3DObject->draw(IDirect3DDevice9* device); The 3D object then sets all the necessary render states on the device and draws itself. I think this solution is very easy to implement, and maybe that's why it's used in so many 3D books. However, if you change the RenderDevice class, or (to take the second approach) if DirectX changes, every 3D object, effect, etc. has to be rewritten.

There's another idea for a solution I had: no 3D object or effect draws itself; instead, it is drawn by the render device. Each object that wants to be drawable by the RenderDevice has to implement an interface, like Renderable. Some code for this solution could look like this: RenderDevice->drawRenderable(meshObj); However, this solution is harder to design and implement, and the design has to be very good to stay compatible with new features.

So my main questions: Which solution should I prefer? Are there other solutions? And which solution do you use in your engine?

Kind regards
hansi pirman

quote:
Original post by hansi pirman
So my main questions: Which solution should I prefer? Are there other solutions? And which solution do you use in your engine?



This is the reason why I don't use OOP at all. In OOP there are ALWAYS multiple ways of doing everything, and each of those ways has multiple advantages and drawbacks. Too much thinking.

For example, in non-OOP programming there is no question about where rendering functions belong: they belong in the rendering module and take the objects to render as arguments. Period. That's what I like.

Guest Anonymous Poster
Personally, I think that your second option, although more complex to implement, offers great advantages:

1) It becomes quite easy to port your engine to another 3D API. To port from OGL to D3D, say, all you have to do is port your "Renderer" class instead of every object. I try to keep all API-dependent stuff in a renderer class and create a C++ abstract interface which all renderers derive from. I managed to port my engine from D3D to OGL in less than a day!

2) It certainly offers more opportunity for optimisation. In my case, I take your Renderer idea further: instead of drawing everything exactly as it is sent, objects are added to a "draw queue" in the renderer class. The Renderer can then sort the draw order of the objects (taking account of objects with alpha blending etc.), so that it renders them in the most efficient way possible (e.g. some APIs might find it better if you render in such a way as to minimise texture changes; others might render quicker if you minimise vertex shader changes etc.).

I haven't done a large amount of testing, but what I have seen is that minimising state changes with my render queue gives quite a large performance increase - and I can also alter the sort order (e.g. try sorting by material or shader etc.), so that I can quickly see which gives me the best performance..

That's what I have found works best anyway.. interested to hear what other people have done..

What I don't like is the tying of game-related objects (entities) to visible objects (render elements).

Note that it isn't always a 1-to-1 relationship - indeed, if you start doing anything remotely complicated, it isn't. For example, one of my player elements has a model for its body and a semi-transparent model for its trail, and then double that because it's reflected in the ground. Four entries at least, which must be called at completely different times.

Because the solution mentioned above is the one I'm going with, it prompted me to describe how mine works (it's an extension of that idea).

Each entity registers a "render entry" (or multiple entries) with the render manager. This gets called as and when the render manager decides; the entity has no idea when that call will be, it is just guaranteed that when it is called, all of the render states are _already_ set as required. All the entity has to do is set its transform matrices and call the appropriate render-mesh function.

The interesting part is this: each render entry has a render-state block with it, so the manager can rearrange the order it renders in, sorting by texture changes, state changes, that sort of thing. It also means that you can go off, create a new entry, and never have to worry whether there are 'hidden' render states that completely screw up your rendering. And because the sorting is automatic, you can move a model from, say, opaque rendering to semi-transparent rendering with one function call - something I've been gagging for for a long time.

Maintenance of render states should be a complicated thing of the past, IMO.

Shrew

[EDIT] For some reason I didn't read the above post - this almost entirely mimics it. In that case, I back him up.

[edited by - Shrew on January 28, 2003 6:27:07 AM]

I would say use the last method, but you could also use a hybrid approach if you prefer the syntax of the first.

Implement CD3DObject::draw(RenderDevice* rd) by calling:

rd->drawRenderable(*this);

This gives you the option of both ways of calling the rendering code, with the same effect. Just make sure that you inline the above call to avoid a performance hit.

James

Thanks a lot for all the great answers and ideas!!
Now I'm sure that I'll go with the second solution!

But instead of storing all required states for each 3D object, each 3D object has an array of StateChange objects (in my engine), e.g.:

StateChange {
DWORD stateName;
DWORD value;
}

because I think there are states whose value the 3D object doesn't care about, for example the shade mode. So the shade mode could be set globally, but each 3D object can override this value by specifying its own shade mode with a StateChange.


Shrew's idea is very good! Reflection is a good example where this solution is useful!
This solution could also be extended to store all objects that have the same vertices and states (for example some houses) only once, and then create multiple "render entries" for them. Hmm.. but then each house isn't really an object itself anymore..

thanks again !

kind regards
hansi pirman





Pretty much, that's it - the hard part is determining a sort value based on the render states. It boils down to working out which states are more important and likely to change, so some states are grouped together in a catch-all "unlikely to change" value. It's the alpha blend modes, textures, z-buffer, and stencil states that change most frequently.

What is nice is that you can overload the sorting function to give yourself more control. In the case I used in my first post, you need to render and process all reflections before the rest of the scene, so you create a custom isReflection flag which gets used with a custom sort to ensure that precedence is respected.

For menus and such I use a second render manager (I mean to rename it, as it's ambiguous with two of them!).

I might go over my code and see what I can put up that might help - it's currently a work in progress (isn't everything?), still on its first implementation pass and a bit messy.

Shrew

Quick write-up of the easy part. It doesn't free the render list, but I can't be arsed to write that - it's purely to get the idea across. Will see about the 2nd part, but I'm still not sure the way I've done it is the best anyway...


#include <iostream>
#include <vector>

typedef int RndrDev; // really should be whatever device is required

//---------------------------------------------------------
class IRndrEntry
{
public:
virtual void Render( RndrDev* const device) const = 0;
};

//---------------------------------------------------------
class RndrList
{
public:
void InsertRenderEntry( IRndrEntry* entry) {_entryColl.push_back( entry);}
void Render( RndrDev* const device) const
{
const int size = _entryColl.size();
for( int idx=0; idx < size; ++idx) _entryColl[ idx]->Render( device);
}

private:
typedef std::vector< IRndrEntry*> EntryColl;
EntryColl _entryColl;
};

//---------------------------------------------------------
template< typename T>
class RndrEntry : public IRndrEntry
{
public:
typedef void (T::*RenderFunc)( RndrDev* const device);

void Init( T* entity, RenderFunc renderFunc) {_entity = entity; _renderFunc = renderFunc;}
virtual void Render( RndrDev* const device) const {(_entity->*_renderFunc)( device);}

protected:
T* _entity;
RenderFunc _renderFunc;
};

//---------------------------------------------------------
class AnEnt
{
public:
AnEnt( RndrList* const list)
{
m_rndrEntryBody.Init( this, &AnEnt::RenderBody);
m_rndrEntryHead.Init( this, &AnEnt::RenderHead);
list->InsertRenderEntry( &m_rndrEntryBody);
list->InsertRenderEntry( &m_rndrEntryHead);
}

private:
void RenderBody( RndrDev* const device) {std::cout << "body" << std::endl;}
void RenderHead( RndrDev* const device) {std::cout << "head" << std::endl;}

RndrEntry< AnEnt> m_rndrEntryBody;
RndrEntry< AnEnt> m_rndrEntryHead;
};

//---------------------------------------------------------
int main(int argc, char* argv[])
{
RndrList renderList;
AnEnt anEnt( &renderList);

RndrDev device = 255; // blah
renderList.Render( &device);

return 0;
}


[edited by - Shrew on January 28, 2003 7:50:26 AM]
