Yann Lombard's sliding slots - size?

Started by
15 comments, last by FxMazter 18 years, 6 months ago
Hello, I have read a bit about Yann's sliding slot / binary buddy system. But what I wonder is why one would want to keep the slots in fixed sizes. Why not just give out exactly the amount that is needed? Oops ;) [Edited by - FxMazter on September 22, 2005 2:11:22 PM]
A bit OT, but it's Yann Lombard, not Lee [wink].
If at first you don't succeed, redefine success.
Dynamic allocation is never your friend.
Dynamic allocation on the graphics adapter... even worse.

Using fixed sized slots allows you to maintain a great balance between allocation efficiency and memory fragmentation.

Instead of having to dynamically allocate some memory in VRAM, you simply have to find an open slot that's big enough to hold your data. It's a beautiful thing.

It's a geometry cache. A cache is supposed to be worked with efficiently and the most efficient method is simply to allocate once, and re-use.

Does that help clear things up for you? Good luck with everything.
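A minimal sketch of that idea (hypothetical names, not Yann's actual code): pre-carve one big buffer into fixed-size slots once, up front. After that, "allocating" from the cache is just a search for a free slot that's big enough - no driver calls, no VRAM allocation.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A pool of fixed-size slots over one pre-allocated buffer.
struct Slot {
    std::size_t offset;   // byte offset into the (pre-allocated) buffer
    std::size_t size;     // fixed capacity of this slot
    bool        inUse;
};

class SlotPool {
public:
    // Carve the buffer into equal fixed-size slots once, up front.
    SlotPool(std::size_t bufferSize, std::size_t slotSize) {
        for (std::size_t off = 0; off + slotSize <= bufferSize; off += slotSize)
            slots_.push_back({off, slotSize, false});
    }

    // "Allocation" is just finding a free slot big enough -- no VRAM calls.
    Slot* acquire(std::size_t bytesNeeded) {
        for (Slot& s : slots_)
            if (!s.inUse && s.size >= bytesNeeded) { s.inUse = true; return &s; }
        return nullptr; // cache full: evict something and retry
    }

    void release(Slot* s) { s->inUse = false; }

private:
    std::vector<Slot> slots_;
};
```

Because every slot has the same size, a freed slot can always be reused by the next request that fits - nothing ever fragments.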
So you mean that each and every slot is a VertexBuffer ?

Sure, I guess that using static buffers is the best for a VRAM cache.

But what I don't understand is why having the slots in fixed size?

Why would this have worse performance:

static VertexBuffer1, can hold 1000 vertices

So, our first Model requests 100 vertices, and gets it.
Second Model requests 200 vertices, and gets it.
Third Model requests 150 vertices, and gets it ...

So the Layout is like this:
VertexBuffer1:
Offset 0 -> 100, Model_1
Offset 100 -> 300, Model_2
Offset 300 -> 450, Model_3

So, we get Three slots from this:
Slot1 -> Offset 0 -> 100, Model_1, VertexBuffer1
Slot2 -> Offset 100 -> 300, Model_2, VertexBuffer1
Slot3 -> Offset 300 -> 450, Model_3, VertexBuffer1

Then when it's decided that Model_2 isn't needed in VRAM anymore...
I could lock the buffer and upload data at that position, couldn't I?

Or is it just that the VRAM will get fragmented more this way,
rather than with fixed sizes on the slots?

So I still have ONE VertexBuffer but instead of giving the requested size
I try to make the best match possible with the combination of slots I have?
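To see why exact-fit allocation fragments, here is a toy sub-allocator (illustrative, not Yann's code) following the Model_1/2/3 layout above. After freeing the middle range, a larger request fails even though enough total space is free - the free space is just not contiguous:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Exact-size sub-allocation from one buffer, measured in vertices here.
// Each model gets precisely the count it asks for, packed back to back.
struct Range { std::size_t offset, count; bool inUse; };

class ExactFitBuffer {
public:
    explicit ExactFitBuffer(std::size_t capacity) : capacity_(capacity) {}

    // First-fit: reuse a freed hole if it is big enough, else append at the end.
    // Returns the offset, or -1 if no contiguous region fits.
    long alloc(std::size_t count) {
        for (Range& r : ranges_)
            if (!r.inUse && r.count >= count) { r.inUse = true; return (long)r.offset; }
        if (next_ + count > capacity_) return -1;  // no contiguous room left
        ranges_.push_back({next_, count, true});
        next_ += count;
        return (long)(next_ - count);
    }

    void free(std::size_t offset) {
        for (Range& r : ranges_) if (r.offset == offset) r.inUse = false;
    }

private:
    std::size_t capacity_, next_ = 0;
    std::vector<Range> ranges_;
};
```

With a 1000-vertex buffer: after allocating 100, 200, and 150 vertices and freeing the 200-vertex range (Model_2), a 400-vertex request fails even though 500 vertices are free in total - the classic fragmentation the fixed-size slots are designed to avoid.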

Thank you
Quote:Original post by FxMazter
So you mean that each and every slot is a VertexBuffer ?

Yes.

Quote:Original post by FxMazter
But what I don't understand is why having the slots in fixed size?

This avoids memory fragmentation, since all allocations come in fixed sizes.
It also avoids the overhead of dynamically allocating a new buffer.

Quote:Original post by FxMazter
Then when it's decided that Model_2 isn't needed in VRAM anymore...
I could lock the buffer and upload data at that position, couldn't I?

Yes, when geometry data is no longer needed, it is replaced by new data that is needed.
Quote:Original post by Sages
Quote:Original post by FxMazter
So you mean that each and every slot is a VertexBuffer ?

Yes.


To add a little nuance to this, not every slot is a unique vertex buffer (in OpenGL speak). Several slots share the same buffer. I believe Yann has decoupled the actual 'VertexBuffer' mechanism from the slots themselves. This is a much better way to do it, because you could conceivably change the allocation size of each vertex buffer to some optimum, which might even be dependent on the video card.
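That decoupling could look something like this (a hypothetical sketch of the data layout, not Yann's actual structures): a slot only records which buffer it lives in, plus an offset and size, so many slots share one backing buffer.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// The buffer table is separate from the slots that sub-divide it.
struct VertexBufferDesc {
    unsigned    apiHandle;   // e.g. a GL buffer object name (placeholder here)
    std::size_t capacity;    // chosen per card; tune experimentally
};

struct Slot {
    std::size_t bufferIndex; // index into the shared buffer table
    std::size_t offset;      // byte offset within that buffer
    std::size_t size;
};

// Several slots can point at the same bufferIndex, so one buffer backs many
// slots and its size can be tuned independently of the slot scheme.
inline bool shareBuffer(const Slot& a, const Slot& b) {
    return a.bufferIndex == b.bufferIndex;
}
```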
Yeah, exactly, that's what I thought.

It seems to me that it would be a VERY bad idea to have ONE VertexBuffer for
each and every slot - that would mean a hell of a lot of SetStream calls.

So basically it's best to have an optimum size for the vertex buffers: (I don't know the optimum size... but let's assume it's 1024 KB ^^)

So My top level slot actually "owns" the VertexBuffer:

1024_slot1, 1024_slot2, 1024_slot3, 1024_slot4

That would be 4 VertexBuffers, each with a capacity of 1024 KB - approximately 16,000 vertices.

Then I could make Virtual slots which divide one of those into smaller partitions:

1024_slot1 -> 512_slot1 + 512_slot2

Then I could continue to divide 512_slot1 into smaller partitions, and also
merge them again if the need ever arises, right?

But still, my point is that it's not ONE VertexBuffer for each slot - just
for the top-level ones?
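The split/merge scheme described above is the binary buddy part. A minimal sketch (my own toy version, not Yann's code): each top-level slot is the root of a tree; a node splits into two half-size buddies on demand, and two free buddies merge back into their parent. Requests are assumed to be rounded up to a power of two.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>

// One node per (sub-)slot inside a single top-level slot.
struct Node {
    std::size_t offset, size;
    bool inUse = false;
    std::unique_ptr<Node> left, right;   // present only after a split

    Node(std::size_t off, std::size_t sz) : offset(off), size(sz) {}

    bool isSplit() const { return left != nullptr; }
    bool isFree()  const { return !inUse && !isSplit(); }

    // Split downward until a free node of exactly the wanted size is found.
    Node* acquire(std::size_t want) {
        if (inUse || size < want) return nullptr;
        if (!isSplit()) {
            if (size == want) { inUse = true; return this; }
            // Split this slot into two buddies of half the size.
            left  = std::make_unique<Node>(offset, size / 2);
            right = std::make_unique<Node>(offset + size / 2, size / 2);
        }
        if (Node* n = left->acquire(want)) return n;
        return right->acquire(want);
    }

    // After releases (n->inUse = false), merge buddies that are both free.
    void merge() {
        if (!isSplit()) return;
        left->merge(); right->merge();
        if (left->isFree() && right->isFree()) { left.reset(); right.reset(); }
    }
};
```

So a 1024-unit top-level slot can hand out two 256-unit sub-slots and one 512-unit sub-slot; once the two 256-unit ones are freed they merge back, and a later 512-unit request fits at offset 0 again.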

Thx
Correct.

According to Yann the optimum size for a vertex buffer in OpenGL is around 8MB. But I don't know if this is still valid. It might be best to experiment a bit.
Hmm, well sure... let's say I have this kind of cache scheme - won't I end up with a whole lot of render calls?

I mean, preferably you would store all the vertices with the same Effect/params in a row, so that you can send them all together in one draw call?

So if I could batch slots with geometry of the same effect, I wouldn't need to jump between positions in the vertex buffer when rendering with offsets.

It will eventually end up like this:

Slot1: filled with Effect1 vertices
Slot2: filled with Effect2 vertices
Slot3: filled with Effect3 vertices
Slot4: filled with Effect2 vertices
Slot5: filled with Effect1 vertices

Doesn't jumping around like this within the VertexBuffer hurt performance a lot?

Render(offset 10, count);
Render(offset 400, count);
Render(offset 2000, count);
etc...

I know this question was asked in an earlier thread, but never got answered.
So how would I get around this problem?

Or is that actually no performance hit?
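One common way around this (a sketch of sorting by render state, not necessarily how Yann did it): instead of drawing slots in cache order, collect the frame's draw calls and sort them by effect first, then by buffer offset, so all geometry using one effect is submitted back to back with a single state change.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// One pending draw call referencing a slot in the shared vertex buffer.
struct DrawItem {
    int         effectId; // which Effect/params this geometry uses
    std::size_t offset;   // start offset of the slot in the vertex buffer
    std::size_t count;    // vertex count to draw
};

// Sort so each effect's draws are contiguous and offsets ascend per effect.
inline void sortForBatching(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  if (a.effectId != b.effectId) return a.effectId < b.effectId;
                  return a.offset < b.offset;
              });
}
```

The offsets within one effect's run still jump around, but the expensive part - switching effects - now happens once per effect rather than once per slot.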
You still need to batch things, and this system actually helps you batch. That's where your index buffers come into play. I'll explain more later if you need me to - I just got called into a meeting, so I have to go.
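The index-buffer trick hinted at here could work like this (my reading of it, not necessarily Yann's exact scheme): two slots in the same vertex buffer that use the same effect can be drawn in one indexed call, by rebasing each model's local indices by its slot's base vertex and concatenating them into one index list.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Each entry is (baseVertex of the slot, the model's local index list).
// Rebasing the local indices makes them valid for the shared vertex buffer,
// so the concatenated list can be drawn with a single indexed draw call.
inline std::vector<unsigned> buildBatchedIndices(
    const std::vector<std::pair<unsigned, std::vector<unsigned>>>& slots)
{
    std::vector<unsigned> combined;
    for (const auto& [baseVertex, localIndices] : slots)
        for (unsigned i : localIndices)
            combined.push_back(baseVertex + i);  // rebase into the shared buffer
    return combined;
}
```

So even though the two models sit at offsets 0 and 300 in the vertex buffer, their triangles go out in one call instead of two.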

