System that casts base class to derived class


So I'm trying to design a system where I have this list of renderables


std::vector<Renderable*> renderList;

Where Renderable is a base class that looks like


class Renderable
{
public:
  Renderable();
  virtual ~Renderable();

  VertexShader* vertShader;
  PixelShader* pixShader;
};

Then other things like Sprites or Meshes can be derived from Renderable, where these derived classes could have different properties:


class Sprite : public Renderable
{
public:
  Sprite();
  ~Sprite();

  Texture* texture;
};

class Mesh : public Renderable
{
public:
  Mesh();
  ~Mesh();

  Texture* texture;
  VertexBuffer* vertices;
};

Now my idea is that when I actually want to render these items, I would pass them to their specific renderer:


for(std::vector<Renderable*>::iterator i = renderList.begin(); i != renderList.end(); ++i)
{
   //Pseudo code
   if(*i == Sprite)
     useSpriteRenderer(*i);
   else if(*i == Mesh)
     useMeshRenderer(*i);
}

I know I would have to cast the Renderable to either a Sprite or Mesh, but is this a terrible idea as a whole?

If I give Renderable a variable to hold the type, using an enum or something, and then cast based on that, would that work?


//Pseudo code
if((*i)->type == SPRITE_RENDERABLE)
   useSpriteRenderer((Sprite*)*i);

I know that there is dynamic_cast, but I have seen people say that it is not very good for performance if you do it a lot.
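
For reference, the dynamic_cast version of that loop might look roughly like this (just a sketch; useSpriteRenderer/useMeshRenderer are the placeholder functions from the pseudocode above):


for (Renderable* r : renderList)
{
    // dynamic_cast works here because Renderable has a virtual destructor;
    // it returns nullptr when the object is not of the requested type.
    if (Sprite* sprite = dynamic_cast<Sprite*>(r))
        useSpriteRenderer(sprite);
    else if (Mesh* mesh = dynamic_cast<Mesh*>(r))
        useMeshRenderer(mesh);
}

Each element pays for one or two RTTI checks per frame, which is the cost people usually warn about.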


Generally speaking this is a bad idea. If you need to render each of these objects in a different way, there's not much point having them all in the same container, which in turn brings into question whether it's worthwhile having them derive from Renderable at all. (Deriving from a base class so that you can avoid typing out those 2 buffers for each derived class is not a good reason to do it.)

Why don't you just make a use_renderer() virtual member function? That way you can simply call (*i)->use_renderer() and the right function will be called.

 

Please notice that this


for(std::vector<Renderable*>::iterator i = renderList.begin(); i != renderList.end(); ++i)
{
   //Pseudo code
   if(*i == Sprite)
     useSpriteRenderer(*i);
   else if(*i == Mesh)
     useMeshRenderer(*i);
}

does not really help you avoid casting the object, because the particular renderers still just receive a Renderable.

What alvaro suggests is something that is called a dispatcher (IIRC). It lets the objects identify their relevant type themselves. I.e., with the renderers' APIs looking like this


class SpriteRenderer {
public:
    void render(Sprite* renderable);
};

class MeshRenderer {
public:
    void render(Mesh* renderable);
};

the dispatching may look like this:


class Renderable {
public:
    virtual void callRenderer() = 0;
};

class Sprite : public Renderable {
public:
    virtual void callRenderer() override {
        someSpriteRenderer->render(this);
    }
};

class Mesh : public Renderable {
public:
    virtual void callRenderer() override {
        someMeshRenderer->render(this);
    }
};

However, now every Renderable needs access to the renderer instance it belongs with in order to work. This can be solved by passing in all defined renderers:


struct Renderers {
    SpriteRenderer* theSpriteRenderer;
    MeshRenderer* theMeshRenderer;
};

class Renderable {
public:
    virtual void callRenderer(const Renderers* renderers) = 0;
};

class Sprite : public Renderable {
public:
    virtual void callRenderer(const Renderers* renderers) override {
        assert(renderers != nullptr && renderers->theSpriteRenderer != nullptr);
        renderers->theSpriteRenderer->render(this);
    }
};
...

This way is IMO definitely preferable to a singleton approach, because it allows you to prepare the renderers appropriately for the upcoming rendering pass.

The dispatch concept can also be used for what Kylotan is suggesting, but only if rendering all sprites and rendering all meshes can be separated. Then, instead of dispatching on each rendering run-through, the Renderable can register right after instantiation with the renderer that is suitable for it.
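
A minimal sketch of that registration idea, assuming the SpriteRenderer from above gains an add() and a renderAll() (both invented names here), and that spriteRenderer/meshRenderer instances exist somewhere:


#include <vector>

class SpriteRenderer {
public:
    // Called once, right after the sprite is instantiated.
    void add(Sprite* sprite) { sprites.push_back(sprite); }

    // Per-frame: walk the homogeneous list; no casts needed.
    void renderAll() {
        for (Sprite* sprite : sprites)
            render(sprite);
    }

    void render(Sprite* renderable);

private:
    std::vector<Sprite*> sprites;
};

// Registration happens at creation time ...
Sprite* player = new Sprite();
spriteRenderer.add(player);

// ... and the frame loop just asks each renderer to draw what it owns.
spriteRenderer.renderAll();
meshRenderer.renderAll();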

47 minutes ago, haegarr said:

What alvaro suggests is something that is called a dispatcher (IIRC).

This is what I wanted to do. I wanted to be able to send each renderable to its appropriate renderer, which would then do all the heavy lifting, but I guess that got muddled by the render list.

So the reason behind having everything in the render list was that I wanted to sort it by various conditions like texture, shader, blend state, etc., and then render everything with those results. Also, I was looking to add the renderables through a core renderer or some kind of object like GameWorld or Scene.

Something like


//Somewhere in my app
Sprite* player = new Sprite();
game.coreRenderer.add(player); //add to a list of renderables

//Render method of the core renderer
for(std::vector<Renderable*>::iterator i = renderables.begin(); i != renderables.end(); ++i)
{
  //Using what alvaro suggested
  (*i)->useRenderer();
}

 

1 hour ago, haegarr said:

Then instead of dispatching on each rendering run through, the Renderable can register right after instantiation with the renderer that is suitable for it.

I'm not entirely sure what you mean. Even if I add the renderable directly on creation, don't I still have to do a run-through and render?

A mesh exists in 3D while a sprite exists in 2D, so it's highly unlikely you'd ever want or need them in the same renderer to begin with.

Also keep in mind that the "appropriate" renderer isn't an intrinsic property of what you're rendering. In other words, meshes and sprites shouldn't be choosing their own renderers, because there could be multiple renderers serving different purposes.

For example, let's say you have a 2D game world and 2D UI. That would mean there are two sprite renderers, and the correct renderer would depend on how the sprite is being used. This isn't information the sprite would/should have, so the calling code would determine whether the sprite belongs in the "world renderer" or the "UI renderer" based on usage and pass that single, correct one in.
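
For instance (a sketch reusing the SpriteRenderer type from above; the two instances and the sprite variables are just assumptions for illustration):


SpriteRenderer worldSpriteRenderer;  // draws sprites that live in the 2D game world
SpriteRenderer uiSpriteRenderer;     // draws sprites that belong to the UI

// The calling code knows how each sprite is used, so it picks the renderer
// and passes the sprite to it; the sprite itself never makes that decision.
worldSpriteRenderer.render(playerSprite);
uiSpriteRenderer.render(healthBarSprite);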

So you're going to sort that list every frame by the criteria you listed (texture, shader, etc.)? I would highly recommend against that; instead, use an integer (uint32/64) and mask the information in based on your sorting preferences. Then you can sort near-instantly, without touching all sorts of memory, compared to inspecting each item every frame. You then only have to update an item's sort mask when its internal information changes.

Here is a better written explanation of what I mean:

http://bitsquid.blogspot.com/2017/02/stingray-renderer-walkthrough-4-sorting.html
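
A rough sketch of that idea; the field widths, their order, and the RenderItem/makeSortKey names are made up purely for illustration:


#include <algorithm>
#include <cstdint>
#include <vector>

struct RenderItem {
    std::uint64_t sortKey;   // packed once, updated only when state changes
    Renderable*   renderable;
};

// Pack the sort criteria into one integer: higher bits = higher priority.
std::uint64_t makeSortKey(std::uint8_t layer, std::uint16_t shaderId,
                          std::uint16_t textureId, std::uint16_t blendState)
{
    return (std::uint64_t(layer)      << 48) |
           (std::uint64_t(shaderId)   << 32) |
           (std::uint64_t(textureId)  << 16) |
            std::uint64_t(blendState);
}

// Sorting now only compares the keys, not the objects they describe.
void sortRenderList(std::vector<RenderItem>& items)
{
    std::sort(items.begin(), items.end(),
              [](const RenderItem& a, const RenderItem& b) {
                  return a.sortKey < b.sortKey;
              });
}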

 

"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin

2 hours ago, Zipster said:

Also keep in mind that the "appropriate" renderer isn't an intrinsic property of what you're rendering. In other words, meshes and sprites shouldn't be choosing their own renderers, because there could be multiple renderers serving different purposes.

For example, let's say you have a 2D game world and 2D UI. That would mean there are two sprite renderers, and the correct renderer would depend on how the sprite is being used.

I agree that there could be multiple renderers and that each one would have its own purpose, but wouldn't the class of the object that I'm trying to render inherently pick its own renderer to use? Example:


//We have the following object classes
GameWorldObject3D
GameWorldObject2D
UIObject2D
  
//We have the following renderer classes
Renderer3D
Renderer2D
RendererUI

GameWorldObject3D, because it is a GameWorldObject3D, inherently uses Renderer3D as its renderer

GameWorldObject2D, because it is a GameWorldObject2D, inherently uses Renderer2D as its renderer

UIObject2D, because it is a UIObject2D, inherently uses RendererUI as its renderer

2 hours ago, Zipster said:

This isn't information the sprite would/should have, so the calling code would determine whether the sprite belongs in the "world renderer" or the "UI renderer" based on usage and pass that single, correct one in.

Assuming we are still keeping in line with what @haegarr and @alvaro mentioned, isn't this information (what renderer to use) something that the render object would have to keep track of? Otherwise we wouldn't know how it should be rendered/what renderer to use, unless you do what @haegarr does in his sample, where he passes in all the renderers and we pick the right one.

I'm thrown off here because when I read this I go back to thinking about how I was originally going to handle this, where the render object was going to be passed to its renderer, but it seems pretty clear that this may not be a good idea.

6 hours ago, Zipster said:

A mesh exists in 3D while a sprite exists in 2D, so it's highly unlikely you'd ever want or need them in the same renderer to begin with.

Unlikely perhaps, but not impossible. There are several examples out there that mix both concepts, e.g. the engine used for Rayman Legends comes to mind - not to mention my own experiments ;)

11 hours ago, noodleBowl said:

I'm not entirely sure what you mean. Even if I add the renderable directly on creation don't I still have a run through and render?

Of course you still have to iterate, but you get to iterate lists of objects whose types you know. Notice that I wrote "only if rendering of all sprites and all meshes can be separated". In that case you can create one list of sprite-like renderables and one list of mesh-like renderables.

------

There are many ways to write a graphics rendering engine. If we move away from "how to solve this particular problem" to "how to design a graphics engine" then - yes - I too would suggest going another way. My personal solution currently supports two high-level rendering APIs that differ in how graphics are described: one uses structures comparable to SVG, the other the usual 3D scene stuff. So at this high level I do something similar to what the OP asks for. However, both high-level renderers generate the same kind of intermediate rendering jobs, which are tagged and enqueued. The sorting then happens on the jobs in the queue, using those tags as criteria. So the scene description (i.e. how the game objects are organized w.r.t. the scene) and render-related sorting are two totally separate things. Finally, the low-level rendering works on the (unified) jobs and translates them into OpenGL / OpenGL ES / D3D / whatever else.
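
To illustrate the shape of that intermediate layer, here is a very rough sketch; all the names (RenderJob, RenderQueue, LowLevelBackend) are invented for this post, and a real job would carry much more state:


#include <algorithm>
#include <cstdint>
#include <vector>

// An API-agnostic job emitted by the high-level renderers.
struct RenderJob {
    std::uint64_t tag;   // packed sort criteria (pass, material, depth, ...)
    // ... buffer handles, shader handle, uniform data, etc.
};

// The low-level side only understands jobs, not scenes.
struct LowLevelBackend {
    void execute(const RenderJob& job);   // translates to OpenGL / D3D / ...
};

class RenderQueue {
public:
    void submit(const RenderJob& job) { jobs.push_back(job); }

    void flush(LowLevelBackend& backend) {
        // Sorting happens here, on the tags, independent of how the scene is organized.
        std::sort(jobs.begin(), jobs.end(),
                  [](const RenderJob& a, const RenderJob& b) { return a.tag < b.tag; });
        for (const RenderJob& job : jobs)
            backend.execute(job);
        jobs.clear();
    }

private:
    std::vector<RenderJob> jobs;
};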

The two high-level rendering APIs mentioned above are separate insofar as they work on different kinds of scenes, so to say. Looking at the 3D scene renderer in more depth, it does not actually implement a full-featured renderer. Instead, it is something that understands how to process a given render pipeline. Mesh rendering is part of that pipeline. When that part is processed, the renderer "just" looks at the components of the game objects, identifies a kind of sub-renderer (i.e. one suitable for the requested rendering effect), and calls it as a graphic job generator. (This is somewhat similar to Unity3D's renderer components.) This way, renderer selection (at this level) is done the same way as all other component look-ups because - well - it is in fact a component look-up. And because I handle components in dedicated sub-systems, that look-up can be done just at instantiation time; after that, it is a question of how the sub-system manages the distinct effect renderers.

I mention all this because it shows a way to decouple several concerns and put them into their own software layers, so that each can be handled in a way suitable for that specific concern. On the other hand ... well, doing it this way is a notable amount of work, for sure.

This is an example of the kind of typecast you would use:


if ( ((AComponent*)components)->type == TCTGLTrackBar )
    ((TGLTrackBar*)components)->Init();

 

