
Archived

This topic is now archived and is closed to further replies.

original vesoljc

static / dynamic vertex / index buffers


Recommended Posts

Guest Anonymous Poster
Static means unchanging. You create the buffer with a write-only specification and you never lock the buffer again after loading the data. Locking buffers is slow. Instead, you manipulate the data as it is interpreted: a vertex shader is the fastest and most flexible way, but you can do this with the fixed-function pipeline too. With the fixed function, however, you may have to render part of the buffer, change the transform, render some more of it, change the transform again, and so on, depending on whether you are using static meshes, bones, etc. Changing render states is slow.

Dynamic means it can change. You would be inclined to lock the buffer and change its contents. Locking the buffer is slow, but if you were using the fixed-function pipeline, you may gain speed from batching primitives in large groups. This means you could do the bone rotations on the CPU and pass a large batch to the GPU. Remember that changing render states is slow too. Dynamic vertex buffers are usually created in AGP memory, which helps with the bottleneck of sending the data across the bus to be rendered.

Index buffers usually point to static vertex buffers. You would have one, unchanging vertex buffer with your mesh and your index buffer could render several instances of it or only a portion of it. For the fixed function pipeline, this means you could submit only the part of your static world you could see or only one frame of a mesh sequence. For a vertex shader program, you could actually render several different moving versions of the mesh, at different world translations.

Example: You have a mesh sequence of a person walking. You could render several people, all walking at different positions in the world and at different frame times, using vertex shaders. You would index into the same vertex buffer, only you would index multiple copies of it in the index buffer.
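A minimal sketch of the "multiple copies in the index buffer" idea, assuming the vertex buffer holds the mesh replicated back-to-back so copy i starts at vertex i * vertsPerMesh (the helper name and layout are made up for illustration; a vertex shader would then pick a per-instance transform from constant registers):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical helper: build one index buffer that references several
// back-to-back copies of the same mesh.  Copy i's indices are simply the
// original mesh indices offset by i * vertsPerMesh.
std::vector<uint16_t> BuildInstancedIndices(const std::vector<uint16_t>& meshIndices,
                                            uint16_t vertsPerMesh,
                                            uint16_t instances)
{
    std::vector<uint16_t> out;
    out.reserve(meshIndices.size() * instances);
    for (uint16_t i = 0; i < instances; ++i)
        for (uint16_t idx : meshIndices)
            out.push_back(static_cast<uint16_t>(idx + i * vertsPerMesh));
    return out;
}
```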

Example2: You only have one frame of the mesh, but that frame is divided for use with bones. You could rotate the bones in any fashion and index several copies of the same mesh. Now you will have to specify the rotations of all of the bones and then translate them according to the character, but this is also fast in a vertex shader program.

With DirectX 8.1, vertex shader programs can be emulated fairly fast in software on cards that don't support them.

so, almost everything is slow...

so, static indexed buffers are a good choice if my models are not deformable, or even if they are i could handle this with a second copy of a vertex buffer...?

i must admit all this is still quite strange to me...
what's all this with SYSTEM MEM, AGP?
not to mention TnL. i know that if using vertex shaders you actually have to write your own TnL, right?

would be nice if someone could point me to some reading about how TnL works (preferably online). amazon is really too far for me

Abnormal behavior of abnormal brain makes me normal...

Guest Anonymous Poster
quote:
Original post by original vesoljc
so, almost everything is slow...
Yes, but you are also dealing with tradeoffs: one technique is slower than another, but the slower one may be more widely supported. The technique you need is the one suitable for your data that uses the fewest locks and render-state changes.
quote:
Original post by original vesoljc
so, static indexed buffers are a good choice if my models are not deformable, or even if they are i could handle this with a second copy of a vertex buffer...?
To answer the first question: yes. Let's deal with a non-deformable character who has a single instance in your game world. He has a move set stored frame by frame. You want to render the 47th frame, and you want it at a certain position in the world. So you set the transforms, then render the buffer starting at that frame and going for a vertex count equal to one frame.
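The bookkeeping for "start at that frame" is just an offset into the static buffer. A tiny sketch (the frame size of 1200 vertices in the test is an invented illustrative number):

```cpp
#include <cassert>

// Each animation frame occupies a contiguous run of vertices in the
// static vertex buffer, so frame N begins at vertex N * vertsPerFrame
// and the draw call covers vertsPerFrame vertices.
unsigned FrameStartVertex(unsigned frame, unsigned vertsPerFrame)
{
    return frame * vertsPerFrame;
}
```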

Now let's say you have 2 instances of the same object. You could lock a vertex buffer, write both frames into it, and translate them yourself, but it would be faster to simply render the frame, switch the transforms, and render it again.

Let's bump it up to 50 people on a map. It would be prudent to have one copy of the mesh to save memory. In this case you would take the time to cull away any characters you can't see. Maybe that knocks it down to 10 people. You could change the render state for each of these 10 people and it would still be faster than locking a vertex buffer.

Let's say you can see all 50 people at once. In a Quake-style game, this would lag the game. Changing the render state in hardware 50 times is slower than locking a vertex buffer (for example; I don't know the actual numbers). Locking the vertex buffer and rendering it on the CPU, changing the transforms there, is faster than doing it on the hardware.

For your second question: technically you can, but you are supposed to conserve resources, including memory. It is very inefficient to have 50 copies of the same mesh, even if they are slightly deformed. This is where index buffers and vertex shaders shine: deformation.

Personally, before I start deforming anything, I would want a strong working knowledge of vertex shaders and deformation techniques.
quote:
Original post by original vesoljc
i must admit all this is still quite strange to me...
what's all this with SYSTEM MEM, AGP?
not to mention TnL. i know that if using vertex shaders you actually have to write your own TnL, right?

Think of AGP memory as shared memory between the video card and the CPU. The AGP bus is very fast and has a high bandwidth, allowing larger chunks of data to flow through at a time.

If you implement a vertex shader, yes, you have to do all of your own TnL. That means you have to transform the geometry using a concatenated world/view/projection matrix and perform the lighting calculations yourself.
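The transform half of that is a matrix-vector multiply per vertex, which a vertex shader expresses as a few dot products. A self-contained sketch of the math (the struct names are made up; a real shader would do this in shader assembly, not C++):

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major 4x4 matrix

// "Doing your own TnL": multiply each position by the concatenated
// world*view*projection matrix.  Each output component is one row of
// the matrix dotted with the input vector.
Vec4 Transform(const Mat4& mat, const Vec4& v)
{
    Vec4 r;
    r.x = mat.m[0][0]*v.x + mat.m[0][1]*v.y + mat.m[0][2]*v.z + mat.m[0][3]*v.w;
    r.y = mat.m[1][0]*v.x + mat.m[1][1]*v.y + mat.m[1][2]*v.z + mat.m[1][3]*v.w;
    r.z = mat.m[2][0]*v.x + mat.m[2][1]*v.y + mat.m[2][2]*v.z + mat.m[2][3]*v.w;
    r.w = mat.m[3][0]*v.x + mat.m[3][1]*v.y + mat.m[3][2]*v.z + mat.m[3][3]*v.w;
    return r;
}
```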
quote:
Original post by original vesoljc
would be nice if someone could point me to some reading, bout how the TnL works (preferablly online). amazon is really to far for me
The book on my recommended list is "Real-Time Rendering Tricks and Techniques in DirectX" by Kelly Dempski. A quarter of the book is primer and three quarters are actual techniques that I will go back to over and over again. Otherwise, search Google, read the articles in the articles section on this site (the main page has two links for pixel shaders, which I'm sure can be traced back to a vertex shader tutorial), and read the other posts about vertex buffers in this forum.

Kelly Dempski also has an article in the DirectX articles section about rendering a 2D quad in DirectX. This technique uses vertex buffers, with source to download and learn from.

quote:
so, static indexed buffers are a good choice if my models are not deformable, or even if they are i could handle this with a second copy of a vertex buffer...?
Yes, but transforming (moving, rotating, and scaling) can be done while leaving the model intact.

quote:
i must admit all this is still quite strange to me...
what's all this with SYSTEM MEM, AGP?
not to mention TnL. i know that if using vertex shaders you actually have to write your own TnL, right?
The card has memory, and DX tries to store as much as possible there. If you make a 1000-triangle model, it might be 32 kbytes, and if it's write-only and static, it will be sent to the card's memory once and stored there. Then the card can use it every scene. If it is modified and resent, that's a data transfer 60 times per second, with every frame.
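A rough check of that 32-kbyte figure, under illustrative assumptions: with indexing and good vertex reuse a 1000-triangle mesh approaches roughly 1000 vertices, and a position + normal + texcoord vertex is 32 bytes (3 + 3 + 2 floats). These numbers are examples, not derived from the post:

```cpp
#include <cassert>

// Back-of-envelope vertex buffer size: vertex count times bytes per
// vertex.  1000 vertices at 32 bytes each is 32,000 bytes, i.e. about
// 32 kbytes, matching the estimate above.
unsigned VertexBufferBytes(unsigned vertexCount, unsigned bytesPerVertex)
{
    return vertexCount * bytesPerVertex;
}
```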

Even on hardware T&L cards you have the option of doing your own T&L. If you use the standard DX calls, just Draw(Indexed)Primitive, you can still use the hardware T&L. Writing your own vertex shader is really flexible. However, it means that on a GF3+/Radeon 8500 a vertex shader runs in hardware, otherwise in software. This is a problem with e.g. the GF2, which has hardware T&L but no vertex shaders: you'll be using a software shader for something the card might have done in hardware.

quote:
would be nice if someone could point me to some reading about how TnL works (preferably online). amazon is really too far for me
The SDK has extensive documentation on vertex shaders; just check it out. The real-time effects book also has detailed vertex and pixel shader descriptions.

ok, think i'm starting to see light at the end of da tunnel...
so AGP mem is good for static? what about POOL MANAGED?


a little more on batching...
let's say we have a group of objects which will all be rendered with the same render states. is it worth it to create one large VB and IB, so that when we call myobject->draw we only copy the object's VB & IB into the large one (just send data)?
of course when sending data (VB & IB) to our "batcher", one would have to offset the indices.
what about objects that require different render states?
example: object1 does not need alpha blending, so we just send it to the batcher. object2 requires alpha blending, so we send the object data to the batcher and set a flag to alpha blend. when we call batch->render, it would render object1, and when it came to object2 and "saw" the flag, it would change the render state and render the object. is it worth it?

Abnormal behavior of abnormal brain makes me normal...

Guest Anonymous Poster
quote:
Original post by original vesoljc
ok, think i'm starting to see light at the end of da tunnel...
so AGP mem is good for static? what about POOL MANAGED?
AGP mem is good if you want to operate on the vertex buffers with both the CPU and the GPU. Since you aren't editing a static mesh, you would just create it on the video card (if possible).
quote:
Original post by original vesoljc
a little more on batching...
let's say we have a group of objects which will all be rendered with the same render states. is it worth it to create one large VB and IB, so that when we call myobject->draw we only copy the object's VB & IB into the large one (just send data)?
The idea behind batching is that you normalize all of your data to use the same render states. This means you would translate and rotate all of your models while copying them into the vertex buffer. After translating/rotating everything as you "render" it to the vertex buffer, you set the world matrix to identity, set your view matrix to where the camera is, and render it to the screen. This means there would be no reason to use an index buffer, because you already batched all of the primitives into one VB. Index pointers can point to the same data over and over, omit triangles from the stream, render out of order, etc. Index buffers point to the vertices in a vertex buffer in case you aren't rendering the buffer exactly in the order it is stored.
quote:
Original post by original vesoljc
what about objects that require different render states?
example: object1 does not need alpha blending, so we just send it to the batcher. object2 requires alpha blending, so we send the object data to the batcher and set a flag to alpha blend. when we call batch->render, it would render object1, and when it came to object2 and "saw" the flag, it would change the render state and render the object. is it worth it?
When you batch primitives into one VB, you translate them so they are where they need to be, so you don't have to worry about the transforms. As for alpha, that's a different render state that you can't change through the vertex buffer or through how you render into the buffer. You would HAVE to create a separate VB and render that set with alpha on. Side note: I would render alpha data at the very end.

All of this post assumes you are using the fixed-function pipeline and not vertex shader programs. My other posts contrasted the two, and I think I may have been confusing you.

if not using index buffers, vertices should/have to be organized into triangle lists or strips???

why would i HAVE to use a new VB if, let's say, using alpha ?!?

if using an "intelligent" batcher, we would send data along with the required render states. the batcher would then sort the data so that it uses the minimum render state changes...

Abnormal behavior of abnormal brain makes me normal...

Guest Anonymous Poster
quote:
Original post by original vesoljc
if not using index buffers, vertices should/have to be organized into triangle lists or strips???
Yes.
quote:
Original post by original vesoljc
why would i HAVE to use a new VB if, let's say, using alpha ?!?

No, you wouldn't have to. You could render a character without alpha, then change the render state in the pipeline to enable alpha blending. You can use the same vertex buffer; you would just be using a different render state (look up SetRenderState()).
quote:
Original post by original vesoljc
if using an "intelligent" batcher, we would send data along with the required render states. the batcher would then sort the data so that it uses the minimum render state changes...
Correct.
The thing to realize is that you cannot bake the render states into the vertex buffer. Your database would have to pair the geometry with the render states it needs. Check out the FVF (flexible vertex format) for vertex buffers; that is the info you can store in the buffer.
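A sketch of that sort-by-state idea, with a made-up integer state key (in practice the key would encode the paired render states, e.g. alpha blending sorting last):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Each queued draw carries the render-state key its geometry needs.
struct Draw { int stateKey; int meshId; };

// Sort so draws with identical state are adjacent, then a state change
// is only issued when the key differs from the previous draw.  Returns
// how many state changes the sorted order would cost.
int CountStateChanges(std::vector<Draw>& draws)
{
    std::stable_sort(draws.begin(), draws.end(),
                     [](const Draw& a, const Draw& b) { return a.stateKey < b.stateKey; });
    int changes = 0;
    int current = -1;  // assumes real keys are non-negative
    for (const Draw& d : draws)
        if (d.stateKey != current) { ++changes; current = d.stateKey; }
    return changes;
}
```

Without the sort, the interleaved order {1, 0, 1, 0} below would cost four state changes; sorted, it costs two.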

Also, before I forget again: POOL_MANAGED creation of vertex buffers lets DirectX manage them. I believe DirectX keeps a copy of the buffer (when created) in system memory and restores it to AGP/video memory when the vertex buffer is lost (much like when a surface is lost). You would really have to look into that, though; that is off the top of my head.

