Archived

This topic is now archived and is closed to further replies.

WebsiteWill

Vertex Buffers Question

There are static and dynamic vertex buffers. What exactly are the capabilities of a static buffer? I mean, if I load a mesh and store its vertex info in a static VB, can I animate that mesh and have the buffer still be static? Or does this require a dynamic vertex buffer? I think that static will work, but I can't be sure from reading the SDK; at least nothing I've found in it says for sure. TIA, Webby

To be safe, use dynamic for animated objects, unless you're going to write some simple code to do the animation with it. (This is just general advice; I'm not an expert on vertex buffers, so you may want to wait for some more replies before moving on.)

Depends on the kind of animation. If you intend to modify the actual vertex values in the vertex buffer with the CPU, then the vertex buffer should be dynamic. Static vertex buffers essentially imply a policy of write once and never write again. (Unfortunately "write infrequently" is not well-supported by the API and driver interface.)
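(For what it's worth, here's a minimal D3D9-style sketch of what that write-once policy looks like at creation time; g_pDevice, numVerts, maxVerts and meshVertices are hypothetical placeholders, not anything from the SDK samples.)

#include <d3d9.h>

struct Vertex { float x, y, z; DWORD color; };
const DWORD VERTEX_FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;

// Static: written once by the CPU, read many times by the GPU.
IDirect3DVertexBuffer9* pStaticVB = NULL;
g_pDevice->CreateVertexBuffer(numVerts * sizeof(Vertex),
                              D3DUSAGE_WRITEONLY,
                              VERTEX_FVF, D3DPOOL_MANAGED,
                              &pStaticVB, NULL);

// Dynamic: rewritten by the CPU, typically every frame.
IDirect3DVertexBuffer9* pDynamicVB = NULL;
g_pDevice->CreateVertexBuffer(maxVerts * sizeof(Vertex),
                              D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                              VERTEX_FVF, D3DPOOL_DEFAULT,   // dynamic buffers can't live in the managed pool
                              &pDynamicVB, NULL);

// the one and only write to the static buffer
void* pData = NULL;
pStaticVB->Lock(0, 0, &pData, 0);
memcpy(pData, meshVertices, numVerts * sizeof(Vertex));
pStaticVB->Unlock();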

If you are planning on doing your animation in software you will most likely need to use dynamic buffers; however, if you want to do animation using a vertex shader then static buffers can be used. I've implemented skeletal animation using static buffers, since the vertices are manipulated by bones but the mesh data itself stays the same.

For keyframing I normally use two dynamic buffers: one contains the current keyframe, while the other contains the next. When you reach the end of a keyframe you set the old buffer to the next animation state and switch the streams, roughly as in the sketch below.
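(A rough D3D9-style sketch of that stream swap, assuming a hypothetical g_pDevice, a Vertex struct, and a made-up FillWithKeyframe helper; it's just to show the idea, not working engine code.)

// two dynamic VBs, one per keyframe, bound to two streams; a vertex shader
// (with a vertex declaration covering both streams) tweens between them
IDirect3DVertexBuffer9* pKeyCurrent; // holds the current keyframe -> stream 0
IDirect3DVertexBuffer9* pKeyNext;    // holds the next keyframe    -> stream 1

g_pDevice->SetStreamSource(0, pKeyCurrent, 0, sizeof(Vertex));
g_pDevice->SetStreamSource(1, pKeyNext,    0, sizeof(Vertex));

if (reachedEndOfKeyframe)
{
    // refill the finished buffer with the keyframe after next...
    FillWithKeyframe(pKeyCurrent, upcomingKeyframeIndex);
    // ...then swap the pointers so "next" becomes "current"
    IDirect3DVertexBuffer9* tmp = pKeyCurrent;
    pKeyCurrent = pKeyNext;
    pKeyNext    = tmp;
    g_pDevice->SetStreamSource(0, pKeyCurrent, 0, sizeof(Vertex));
    g_pDevice->SetStreamSource(1, pKeyNext,    0, sizeof(Vertex));
}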

Hope this helps...

Hmmm. Confusing. How can the vertex values remain the same if the mesh is being animated? This is a bridge I won't officially be crossing for a while on the animation side, so I can go ahead and do the terrain part with a static buffer and then figure out what I need for the animation stuff. That will probably be dynamic buffers, as I am currently planning to use keyframes. I have no idea yet how to do animation with shaders, as shaders are still very foreign to me.

You mention vertices being manipulated by bones but the mesh remaining the same. I'm not understanding how this is possible. If a bone moves an arm, don't the vertices in the arm have to change to reflect the movement? Then doesn't this justify the use of a dynamic buffer?

I'm thinking I'll go with dynamic buffers for characters no matter what type of animation. This way I'm not locked into using keyframes if I want to move into vertex shaders. OTOH, if I used static then, as you noted, moving to software keyframe animation would require switching to dynamic buffers.

On a side topic, using dynamic buffers, does the process of animating go like this?
Load the mesh and animation data.
Put the current animation frame into the buffer and render it.
When I get to the next animation frame, clear the buffer, load the newly moved vertices into it, and render that.
Based on this, I will likely be dumping and reloading the buffer every frame, since it isn't likely that characters will be sitting still. Of course, I can check whether they do sit still and just not dump/reload the buffer in that odd case.

Is this correct?
TIA
Webby

It's not that confusing... If a hardware shader is affecting vertices, it affects the data as it flows through the pipeline; the data in the buffer doesn't change. The same applies when using the hardware T&L fixed-function pipeline.

When using software shaders or software T&L, the data must be modified by the CPU and sent to the card each time it's needed.

Whether the GPU is affecting the data doesn't determine which buffer to use. If the CPU isn't modifying the data, it can be placed in a static buffer. If the CPU is modifying the data, it needs to be dynamic.
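(To make that concrete, here's a rough D3D9-style sketch; g_pDevice, pStaticVB, Vertex, VERTEX_FVF, numTris and objectX/Y/Z are hypothetical placeholders. The buffer contents never change, only the world matrix does.)

// each frame: animate by changing the transform, not the vertex data
// (uses the d3dx9 matrix helper)
D3DXMATRIX world;
D3DXMatrixTranslation(&world, objectX, objectY, objectZ);

g_pDevice->SetTransform(D3DTS_WORLD, &world);               // the "animation" lives here
g_pDevice->SetStreamSource(0, pStaticVB, 0, sizeof(Vertex));
g_pDevice->SetFVF(VERTEX_FVF);
g_pDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, numTris);   // the static VB is never locked again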

quote:
Original post by WebsiteWill

You mention vertices being manipulated by bones but the mesh remaining the same. I'm not understanding how this is possible. If a bone moves an arm, don't the vertices in the arm have to change to reflect the movement? Then doesn't this justify the use of a dynamic buffer?



No, when using bones you don't need to use a dynamic buffer. What you would do is load your mesh data into a static buffer, and then transform those vertices from model space into world space using a number of different matrices.
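(A very rough sketch of that idea using D3D9 fixed-function vertex blending; a vertex shader version is the same in spirit. g_pDevice, boneMatrix, pStaticSkinVB, SkinVertex and numTris are made-up placeholders.)

// the bone matrices are recomputed by the CPU every frame,
// but the skinned mesh itself stays in a static vertex buffer
g_pDevice->SetRenderState(D3DRS_VERTEXBLEND, D3DVBF_1WEIGHTS);  // blend between two world matrices
g_pDevice->SetTransform(D3DTS_WORLDMATRIX(0), &boneMatrix[0]);  // updated per frame by the animation code
g_pDevice->SetTransform(D3DTS_WORLDMATRIX(1), &boneMatrix[1]);

// vertices carry a blend weight (e.g. an FVF with D3DFVF_XYZB1) but are never rewritten
g_pDevice->SetStreamSource(0, pStaticSkinVB, 0, sizeof(SkinVertex));
g_pDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, numTris);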

Interesting.
I just read the first two articles on vertex shaders on this site and all I can say is WOW. I had some ideas on how I was going to handle some lighting schemes and such, but now that's completely blown out of the water. I think that after I get a grip on what kinds of buffers I will use, I will spend a few days getting more acquainted with these shaders. I saw a book the other day at B&N on shaders but I passed it over as not being immediately important. Boy was I wrong.

Thanks all for opening new avenues!
Webby

Ahhh, the occasional beauties of MSDN.
It appears that, performance-wise, there is typically not much reason to use static buffers at all.
Assuming a large terrain where nowhere near all of the polys can be stored in a buffer at once, it appears that something like this could work.

Use a single dynamic buffer.
Each frame I figure out (using a quad/octree structure) exactly which terrain polygons are visible. I lock my buffer, pack these vertices into it (up to 1000 at a time?), unlock the buffer and then call renderPrimitive (actual call different). Then I lock the buffer again using DISCARD and put in the next set. Repeat until all terrain has been rendered.
Next I move on to objects and repeat the above steps, except I'll be calling renderPrimitive after so many objects, like:
Add house polys to the new buffer.
Lock the buffer with the discard flag and append nextItemPolys to the buffer.
Unlock and render primitive again.
Lock again. Rinse and repeat for all objects.

Alternatively I can simply pack on polygons until I have up to 1000 in the buffer then render primitive followed by getting a new buffer pointer where I will pack in the next 1000 polys. Similar to the terrain method above.

Is this pretty standard for how this would be accomplished?
Used like this, I don't see a need for static buffers at all; I would pretty much only need a single dynamic buffer for everything I want to do.

This is pretty exciting!
Smiling Webby

What's it called when you have one of those moments where things all of a sudden start to make sense and you get that feeling of "Oh yeah!"? An epiphany?

No, don't get too enthusiastic about using dynamic VBs and locking with DISCARD. Ideally you want to lock a dynamic VB for discard only once per frame, preferably the first lock of the frame.

The reason is that D3DLOCK_DISCARD gives the driver permission to hand you a whole new chunk of memory, distinct from the one you were working on previously. Some drivers have a fixed limit on the number of times that they will do this buffer "renaming" (where logically it's a single buffer but in fact multiple buffers are being filled and rendered from). And if you're locking with DISCARD before the buffer is full, then you're wasting memory and making the driver do more work than is necessary. In short, the best practice for dynamic VBs is to fill them from start to end. When you reach the end -- when there's not enough room for the vertices you want to write -- lock with DISCARD and start over from the beginning. At all other times, lock with NOOVERWRITE. (Also note that some drivers have bugs when the first lock per frame isn't D3DLOCK_DISCARD.)
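(Something like this minimal D3D9-style sketch of that fill pattern; the names g_pDevice, pDynamicVB, g_vbOffset, VB_SIZE_BYTES, batchVertices and batchVertexCount are placeholders, and error checking is omitted.)

// append each batch with NOOVERWRITE; only DISCARD when the buffer is full
// (many engines also force a DISCARD on the very first lock of each frame, per the note above)
UINT  bytesNeeded = batchVertexCount * sizeof(Vertex);
DWORD lockFlags   = D3DLOCK_NOOVERWRITE;

if (g_vbOffset + bytesNeeded > VB_SIZE_BYTES)   // no room left for this batch
{
    g_vbOffset = 0;                             // wrap back to the start...
    lockFlags  = D3DLOCK_DISCARD;               // ...and let the driver rename the buffer
}

void* pDest = NULL;
pDynamicVB->Lock(g_vbOffset, bytesNeeded, &pDest, lockFlags);
memcpy(pDest, batchVertices, bytesNeeded);
pDynamicVB->Unlock();

g_pDevice->SetStreamSource(0, pDynamicVB, 0, sizeof(Vertex));
g_pDevice->DrawPrimitive(D3DPT_TRIANGLELIST,
                         g_vbOffset / sizeof(Vertex),    // first vertex of this batch
                         batchVertexCount / 3);          // triangle-list primitive count

g_vbOffset += bytesNeeded;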

Whenever possible, static buffers should be preferred over dynamic. This is because static buffers typically live in local video memory -- the RAM on the card -- which is fastest for reading by the GPU but rather slow to write by the CPU. This is obviously ideal for static vertices, where the idea is that they are written once by the CPU and read many many times by the GPU.

Dynamic vertices are written often by the CPU and so are typically placed in AGP memory. This makes them slower to read by the GPU but very fast to write by the CPU (potentially even faster than regular system memory). For vertices that change frequently this makes sense, but otherwise it is sub-optimal. Put 'em in a static VB so that the GPU doesn't have to suck them across the AGP bus every frame.

To summarize: Vertices that never change belong in static vertex buffers. Vertices that change every frame belong in dynamic VBs. Vertices that are somewhere in between should probably be dynamic.

Sigh

Guess I'm not grasping what it means for a vertex to never change or to change.

Is a vertex that is being animated a changing vertex? Or is the vertex FIXED in the buffer when it is created and then moved around with matrices? I mean, I know I will use matrices to handle the changes, but I'm now guessing that this leaves the original vertex in the buffer unchanged, so that if I load all of the vertices of a model then those values in the buffer will never ever change.

So what then would constitute changing vertices? LOD systems where one moment this section might be 10000 verts and the next frame it may only be 9900 because of some optimization? Or other similar circumstances, like particle systems that are only in memory when they are present?

Please clarify
Webby

If the values stored in the vertex buffer change then it's dynamic, so this doesn't apply to matrix transforms or lighting calculations. These involve the GPU reading the source vertex from the VB and modifying the values as they pass through the pipeline. The next frame or the next instance of the mesh will use the same source vertex values but the results of transformation & lighting may be different due to differing render states.

If you're modifying the vertices using the CPU outside of the rendering pipeline -- by locking and writing values to the VB -- then they're dynamic and belong in a dynamic vertex buffer.

Particles are usually dynamic (animated by the CPU). LOD can use static or dynamic vertices, depending on how it's done. Say the mesh has 500 vertices at the lowest LOD. At the middle LOD another 1000 vertices are added, and at the highest LOD another 2000 vertices are added. All of these vertices can be stored together in a static VB; it's just a question of which ones you render. Even if the higher LODs don't use all of the lower LOD vertices, it can still be static. You'd basically just have a pool of vertices, and the index buffer selects which ones are part of the LOD mesh. Or of course each LOD might have its own set of vertices with no sharing.
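(As a hedged D3D9-style illustration of that vertex-pool approach; the LodRange struct and all the names here are invented for the example, not anything from the thread.)

// one static vertex buffer and one static index buffer hold every LOD;
// each LOD is just a different range of indices
struct LodRange { UINT startIndex; UINT primCount; UINT minVertex; UINT numVertices; };
LodRange lods[3];   // filled at load time: low, medium, high detail

g_pDevice->SetStreamSource(0, pStaticMeshVB, 0, sizeof(Vertex));
g_pDevice->SetIndices(pStaticMeshIB);

const LodRange& lod = lods[currentLod];
g_pDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                                0,                // BaseVertexIndex
                                lod.minVertex,    // lowest vertex index this LOD touches
                                lod.numVertices,  // span of vertices this LOD touches
                                lod.startIndex,   // where this LOD's indices start
                                lod.primCount);   // triangles in this LOD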

But if you're using the CPU to morph vertices between LODs, then those vertices belong in a dynamic VB.

Another way to think of it is that static vertices are relatively permanent whereas dynamic vertices are disposable. Like paper plates, you typically use them once and then throw them away.
