
Design: Render System Upgrade



So I'm trying to upgrade the way I render things by creating a RenderSystem, where the RenderSystem has a queue of RenderPackages, other sub-systems pump packages into this queue, and at some point the RenderSystem processes the queue.

 

My dilemma comes down to the relationship between RenderPackages and the BufferObjects such as a Vertex Buffer Object. For example:

 

Let's say I have a SpriteBatcher sub-system and I need to render 4 buffers' worth of data. At first I thought this was easy: I just need 4 RenderPackages and we're done. Then I realized this is wrong. What if next frame I have to render 5, 6, or 7 buffers' worth of data?

 

I don't want to create / destroy buffers on the fly, because that sounds like a bad design choice. And I'm pretty sure that round-robining the RenderPackages can't be a solution, since they have to go into a queue.

 

So would it be a bad design choice to put the responsibility of creating, destroying, and using buffer objects on my RenderSystem? Basically, make vector members for each type of BufferObject and have it act as a resource pool. I'm pretty sure there are some minor things to work out because of static vs. dynamic buffers, but concept-wise is this something to explore? Or am I going down the wrong path?
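To make the resource-pool idea concrete, here is a minimal sketch of what the pool inside the RenderSystem might look like. All names here (BufferPool, PooledBuffer, acquire, releaseAll) are mine, not from any engine; the GL calls are stubbed out with comments so the ownership logic can be shown on its own.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: the RenderSystem owns a fixed pool of vertex buffers.
// A plain unsigned int stands in for GLuint so the pool logic is shown
// without a live GL context; a real create() would call glGenBuffers once.
struct PooledBuffer {
    unsigned int bufferID = 0;
    bool inUse = false;
};

class BufferPool {
public:
    explicit BufferPool(std::size_t count) : buffers_(count) {
        // In real code each buffer is created once, up front:
        //   glGenBuffers(1, &buffers_[i].bufferID);
        for (std::size_t i = 0; i < count; ++i)
            buffers_[i].bufferID = static_cast<unsigned int>(i + 1);
    }

    // Hand out the next free buffer. Returns nullptr when the pool is
    // exhausted, which is the signal to flush the queue rather than
    // allocate more buffers on the fly.
    PooledBuffer* acquire() {
        for (auto& b : buffers_)
            if (!b.inUse) { b.inUse = true; return &b; }
        return nullptr;
    }

    // Called by RenderSystem::ProcessQueue once the frame's draws are issued
    // and the GPU no longer needs this frame's data.
    void releaseAll() {
        for (auto& b : buffers_) b.inUse = false;
    }

private:
    std::vector<PooledBuffer> buffers_;
};
```

The key property is that buffers are created exactly once and only recycled afterwards, so nothing is allocated or destroyed per frame.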

 

 

My current RenderPackage and VertexBufferObject, just for show:


//=========== Render Package ==============
class RenderPackage
{
public:
    VertexBufferObject vbo;
    //Other data members such as Texture, Shader, etc. are here
};

//=========== Vertex Buffer Object ==============
class VertexBufferObject
{
public:
    GLuint bufferID;
    void create();
    void bind();
    //Other class members
};

void VertexBufferObject::create()
{
    glGenBuffers(1, &bufferID);
}

void VertexBufferObject::bind()
{
    glBindBuffer(GL_ARRAY_BUFFER, bufferID);
}

 
 


What's a "buffer's worth"? Why can't you have one buffer that's 4x as big?

I was kind of thinking in terms of my sprite batcher, where one batch is a buffer's worth: not that I actually used all of the buffer or that the buffer is full, but more that I need to tell the GPU to draw.

In the case of the "4 buffers' worth" I was thinking more along the lines of: I need to have the GPU draw 4 times due to a different texture or shader being bound.

What if next frame I have to render 5, 6, 7 buffers worth of data?


Then you should've made it 8x as big to begin with :)
Sure, you can write your code to be adaptive, so that if it needs more memory it will allocate more memory... but IMHO, this is a dangerous engine philosophy.


Here I was thinking along the lines of my sprite batcher again.
In my sprite batcher I had an array of buffers. Let's say this array was 4 buffers, where each buffer could hold 500 sprites before being considered full. I could use these buffers in a round-robin manner: if we needed to change the texture / shader, or the current buffer was full, grab the next buffer and use that. But now I don't think that will work, because I am trying to queue the draw commands.


In the "solutions" I play out in my head:
The 1 RenderPackage to 1 BufferObject approach will end up making me create new buffers on the fly, which is bad.

The multi-RenderPackage round-robin approach will end up making me overwrite and corrupt my draw data, because there is a limited number of packages and the queue is only processed later (after all of the RenderPackages have been pushed into it).


That's why I was thinking about a resource-pool solution, where, off the top of my head, I could have 4 buffers (using the previous sprite-batcher example) and round-robin these buffers, handling everything in my RenderSystem::ProcessQueue method. But I'm kind of unsure, as my experience with making this type of system is zero.


In the case of the "4 buffer's worth" I was thinking more along the lines of: I need to have the GPU draw 4 times due to a different texture or shader being bound.
if we needed to change the texture / shader or the current buffer was full grab the next buffer and use that.

You can issue a draw command that reads vertices from any offset within a buffer. A draw doesn't always have to begin reading from the first byte within the buffer.
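A minimal sketch of that point: several batches can live in one vertex array, with each draw remembering the (first, count) pair it should use. The class and field names here are illustrative, not from any real API; in GL the recorded range maps directly onto glDrawArrays(GL_TRIANGLES, first, count) against a single VBO.

```cpp
#include <cassert>
#include <vector>

// The (first vertex, vertex count) range one draw call should read.
struct DrawRange {
    int first = 0;
    int count = 0;
};

// Packs several batches into one CPU-side vertex array. In real code the
// array would be uploaded to a single VBO (e.g. via glBufferSubData) and
// each DrawRange handed to its own glDrawArrays call.
class BatchBuffer {
public:
    // Append a batch of vertex data; returns where a draw should start.
    DrawRange append(const std::vector<float>& verts, int floatsPerVertex) {
        DrawRange r;
        r.first = static_cast<int>(data_.size()) / floatsPerVertex;
        r.count = static_cast<int>(verts.size()) / floatsPerVertex;
        data_.insert(data_.end(), verts.begin(), verts.end());
        return r;
    }

    const std::vector<float>& data() const { return data_; }

private:
    std::vector<float> data_;
};
```

So a "buffer's worth" stops being a separate buffer and becomes just an offset range within one shared buffer.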


You can issue a draw command that reads vertices from any offset within a buffer. A draw doesn't always have to begin reading from the first byte within the buffer.


True, I just have to change the first param in my glDrawArrays call, right?
//Draw the 3rd square in the buffer: 6 vertices starting at vertex 12
glDrawArrays(GL_TRIANGLES, 12, 6);
But I'm kind of confused about what you are suggesting. Are you just saying I should have one huge mega-buffer that all the sub-systems pump into?
Basically, I create a mega-buffer in the RenderSystem, and then when I create a sub-system it gets passed a reference to this buffer?


I think part of the confusion is that you're not fully isolating ownership and management of the buffers to the render system.

 

User code should communicate to the renderer, at load-time, their intent to render a particular sprite in the future, and the renderer then figures out how and where to place the vertex/index data. There's no circular buffering or any sort of guesswork, since the precise amount of data required is known ahead of time based on how many sprites are loaded, and you allocate that much space. Whether the renderer creates many smaller buffers or fewer larger buffers is an internal implementation detail, and that decision can be based on available GPU memory or other factors. When a sprite is loaded, all you should get back is opaque handles and offsets.
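A sketch of what that load-time registration could look like, assuming one hypothetical interface (SpriteHandle, SpriteRegistry, registerSprite are my names, not from the thread or any engine): the renderer decides where the sprite's vertex data lives and user code gets back only an opaque handle.

```cpp
#include <cassert>

// Opaque handle returned at load time. User code stores it but never
// interprets it; only the renderer knows what the fields mean.
struct SpriteHandle {
    int buffer = 0;   // which internal buffer the renderer chose
    int offset = 0;   // first vertex of this sprite within that buffer
};

// Hypothetical load-time registry inside the renderer. Because every sprite
// is registered up front, the total vertex space needed is known exactly,
// so there is no runtime buffer growth or circular-buffer guesswork.
class SpriteRegistry {
public:
    // 6 vertices per sprite (two triangles). The renderer fills in the
    // actual vertex data elsewhere; user code never touches buffers.
    SpriteHandle registerSprite() {
        SpriteHandle h{0, nextOffset_};
        nextOffset_ += 6;
        return h;
    }

private:
    int nextOffset_ = 0;
};
```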

 

Rendering the sprites is then done in two passes. In the first pass, user code informs the renderer of all the sprites it wants to render that frame, passing back all the handles and information returned at load-time. In the second pass, the renderer sorts all the sprites based on which buffers and textures they use to minimize draw calls and state transfer to the GPU, and then renders them. In this sense, the "queue" of sprites is also an internal implementation detail of the renderer. User code only cares about correctness, and while sorting is typically how you achieve that, it's not something that should be exposed in the interface.
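The two-pass idea above can be sketched as a submit/flush pair. This is only an illustration under my own names (SpriteQueue, SpriteDraw): user code submits handles during the frame, and the renderer's second pass sorts by (buffer, texture) so draws sharing state end up adjacent and can be merged into fewer calls.

```cpp
#include <algorithm>
#include <cassert>
#include <tuple>
#include <vector>

// What user code passes back each frame: the handle data it was given
// at load time. Field names are illustrative.
struct SpriteDraw {
    int buffer;
    int texture;
    int offset;
};

class SpriteQueue {
public:
    // First pass: user code declares everything it wants drawn this frame.
    void submit(const SpriteDraw& d) { queue_.push_back(d); }

    // Second pass: sort so draws sharing a buffer and texture are adjacent,
    // minimizing state changes; then the real renderer would issue draws.
    std::vector<SpriteDraw> flush() {
        std::sort(queue_.begin(), queue_.end(),
                  [](const SpriteDraw& a, const SpriteDraw& b) {
                      return std::tie(a.buffer, a.texture)
                           < std::tie(b.buffer, b.texture);
                  });
        std::vector<SpriteDraw> out = queue_;
        queue_.clear();
        return out;
    }

private:
    std::vector<SpriteDraw> queue_;
};
```

Note the queue lives entirely inside the renderer and is emptied every frame; user code never sees the sort order, only the final image.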

 

At the end of the day, treat the renderer as a black box that somehow knows how to render sprites (among other renderables) correctly, where all you need to do is tell it what you want rendered, and when (which frames) you want it rendered. How it goes about that should be magic as far as outside code is concerned :)
