
2d renderer basic questions



So, I've got my 2D renderer using the LPD3DXSPRITE interface working, but since I've just started my new project I want to make sure everything gets designed well from the beginning. My main goals for this 2D rendering "library" are:

- Reusability (at least for me in future projects)
- I might want to step away from LPD3DXSPRITE to plain vertex/index buffers, as well as change the API (D3D11, OpenGL), with as little extra work on the mid/high-level layers as possible
- I want to be able to render 2D-only games, as well as build a 3D renderer around this in the future

So this is what I have right now:

- a wrapper for Direct3D, the device, and the LPD3DXSPRITE object (low-level)
- on top of that, graphical objects like sprite, texture, ...

And here are a couple of questions:

- Should I have some sort of render queue like a 3D renderer would, or is it OK to have my sprite objects issue the draw command directly to the D3DXSprite wrapper?

- Should I rather draw my sprites directly (say I have a player entity composed of a controller and a sprite, stored in a scene graph; I call Player->Draw(), which calls Draw() on its sprite component), or would it be better to have a separate class handle this, with sprites registering/unregistering to that class, which then draws them in a flush?

- Extension to both questions above: Is it better to have sprite->draw() issue a command directly to either a render queue or the LPD3DXSPRITE wrapper, or should sprite->draw() instead append a generic command struct to a vector<>? This vector<> could later be pulled out and its contents issued as draw commands to the LPD3DXSPRITE interface by whatever class is finally responsible for rendering the sprite. A command would consist of the source rect, destination rect, Z, and texture in any case.

- Would it be OK to have my D3D device and/or LPD3DXSPRITE wrappers be singletons/globals? Is there any possible reason to have more than one device and sprite interface? I'd use single-window, single-thread applications only.

- Is there any additional design I might want to apply on top of that? Any classes I might want to add to handle rendering? Edited by The King2
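For reference, the generic command struct described in the third question might look something like this. It is only a sketch: the Rect and Texture types are placeholders standing in for whatever the wrapper actually uses.

```cpp
// Placeholder types -- stand-ins for the wrapper's real rect/texture types.
struct Rect { int left, top, right, bottom; };
struct Texture;  // opaque handle owned by the low-level layer

// One queued sprite draw: source rect, destination rect, Z, and texture --
// exactly the fields listed in the question.
struct SpriteCommand {
    Rect     source;
    Rect     destination;
    float    z;
    Texture* texture;
};
```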

Create a RenderDevice class that exposes all the graphics API functionality you'll need, plus classes for every type of graphics API resource. It will mostly be functions like createTexture(const TextureDesc& desc) that call the D3D CreateTexture function, etc.

Then if you want to add OpenGL support just write a different implementation of RenderDevice but using OpenGL instead of D3D.

This group of classes will be the API abstraction layer.

On top of this layer you can build whatever you want and it'll be API independent.
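A minimal sketch of that abstraction layer. RenderDevice, createTexture, and TextureDesc follow the post; the Texture handle, the struct fields, and the D3D9RenderDevice backend name are my own illustrative assumptions.

```cpp
#include <cstdint>
#include <memory>

// TextureDesc follows the post; the fields here are illustrative only.
struct TextureDesc {
    uint32_t width  = 0;
    uint32_t height = 0;
};

// Opaque resource handle; each backend would derive its own.
struct Texture {
    virtual ~Texture() = default;
};

// The API abstraction layer: a pure-virtual interface the rest of the
// engine codes against, so backends can be swapped out freely.
class RenderDevice {
public:
    virtual ~RenderDevice() = default;
    virtual std::unique_ptr<Texture> createTexture(const TextureDesc& desc) = 0;
};

// Sketch of a D3D backend: the real one would call
// IDirect3DDevice9::CreateTexture. An OpenGL backend would instead
// wrap glGenTextures/glTexImage2D behind the same interface.
class D3D9RenderDevice : public RenderDevice {
public:
    std::unique_ptr<Texture> createTexture(const TextureDesc& desc) override {
        (void)desc;  // real code would pass desc.width/desc.height to D3D
        return std::make_unique<Texture>();
    }
};
```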

-A command queue really adds a lot of flexibility in my opinion; take a look at this topic for more info.

-Create a different class where you register the drawable objects and let that class handle the command dispatching.

-Personally I don't use singletons. I've used them in the past, but I just like the flexibility that not using them gives; a singleton will simply constrain you. Still, do whatever works for you.

-Create some basic classes and define an interface for each kind of them, then extend only the basic classes to implement new functionality:
For example a simple Entity class:

class Entity
{
public:
    virtual ~Entity() {}

    // Returns the array of commands (device state changes and draw calls)
    // used to draw this entity.
    Commands* getCommands() const { return _pCommands; }

protected:
    Commands* _pCommands;
};

Then the renderer simply has to go through the scene graph, finding each entity and dispatching its commands.

Everything that needs to be drawn just has to use Entity as parent class and store its commands (like binding buffers, textures, shaders and draw calls) in the _pCommands array.
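The dispatch loop itself is tiny. A simplified sketch, which stores the commands in a std::vector rather than the raw _pCommands array above (the Command fields and the render name are assumptions):

```cpp
#include <cstddef>
#include <vector>

// Placeholder command; a real one would hold state-change or draw-call data.
struct Command {
    int opcode = 0;
};

struct Entity {
    std::vector<Command> commands;  // bind buffers, bind textures, draw, ...
};

// Walk the visible entities and replay their commands in order.
// Returns how many commands were dispatched (handy for sanity checks).
std::size_t render(const std::vector<Entity*>& entities)
{
    std::size_t dispatched = 0;
    for (const Entity* e : entities)
        for (const Command& cmd : e->commands) {
            (void)cmd;  // here: forward cmd to the RenderDevice / sprite wrapper
            ++dispatched;
        }
    return dispatched;
}
```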

You'll probably also want to use some kind of sort key associated with each entity, so that each entity is rendered at the appropriate time. Edited by TiagoCosta

Thanks a lot,

this sounds indeed just like what I'd expect of a reliable graphics library. I've got just one more question:

"-A command queue really adds a lot of flexibility in my opinion; take a look at this topic for more info."

I had a look, and I think a render queue is a really nice thing for me to have; since I want to expand to 3D sometime, I'll need it anyway. But how does the render queue exactly work? I've read this blog post http://realtimecollisiondetection.net/blog/?p=86, but I'm a bit confused about how exactly to sort. So I'm using a map<u64, Command*> container to store the commands I dispatch to the render queue. I want to sort based on material, depth, and translucency. I would pull that information out of the command when dispatching it, and the key is really only there for sorting, so that after sorting, the draw call is issued without using any of the information in the key directly, right?
Plus, how exactly do I sort? I know I can use std::sort or something like that, but what would the sorting function look like (maybe some pseudocode)? I know how to work with bits, but I'm not too familiar with sorting algorithms.

The sorting function doesn't have to know what the bits inside the sort key mean, so it simply sorts the keys as if it were sorting plain numbers.
So the sorting function can be as simple as this:

struct RenderInstance
{
    u64 sortKey;
    Command* pCommands;
};

bool compare(const RenderInstance& i, const RenderInstance& j)
{
    return i.sortKey < j.sortKey;
}

void sortInstances(std::vector<RenderInstance>& instances)
{
    std::sort(instances.begin(), instances.end(), compare);
}

The order in which you put the bits inside the key determines how it's sorted.


You'll most likely want to sort entities by translucency first (so you render opaque objects before transparent ones), and only then sort the two groups by depth.
So, since translucency is more important, you store it in the most significant bits.
The sort key would look like this:
| translucency (1 bit) | depth (24 bits) |

So whatever the depth of each entity is, you can be sure that opaque entities will be separated from transparent entities, because the most significant bits have more influence on the sort key.

Long story short, think carefully what is more important and build the sort key accordingly.
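A sketch of packing the key laid out above into a u64 (the makeSortKey helper name is mine, not from the post):

```cpp
#include <cstdint>

// Pack | translucency (1 bit) | depth (24 bits) | into one integer key.
// The translucency bit sits above the depth bits, so it dominates the sort.
uint64_t makeSortKey(bool translucent, uint32_t depth24)
{
    uint64_t key = 0;
    key |= uint64_t(translucent ? 1 : 0) << 24;  // most significant field
    key |= uint64_t(depth24 & 0xFFFFFFu);        // low 24 bits
    return key;
}
```

Any opaque key (top bit 0) now compares below any transparent key (top bit 1), regardless of depth.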

P.S. Keep in mind that depth is usually a float value, so you can't store it directly in 24 bits without a conversion, or without a more complex sorting function. Edited by TiagoCosta
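One common way to do that conversion, assuming the depth has already been normalized to [0, 1] (this particular quantization is an assumption on my part, not something prescribed in the thread):

```cpp
#include <algorithm>
#include <cstdint>

// Quantize a normalized depth in [0, 1] into a 24-bit unsigned integer,
// clamping out-of-range values first. 16777215 == 2^24 - 1.
uint32_t depthToBits24(float depth)
{
    depth = std::min(1.0f, std::max(0.0f, depth));
    return uint32_t(depth * 16777215.0f);
}
```

The resulting integer preserves the ordering of the original floats, which is all the sort key needs.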

Thanks, makes perfect sense now. But do you have any source or quick explanation of how to convert the float Z to 24 bits? I might even have more bits, like 32, but I don't know yet.
