DarkImp

Geometry transfer uncertainties


I'm a little confused as to exactly what happens when sending geometry in vertex and index buffers. Is the entire contents of both buffers loaded onto the graphics card before being used, even if a lot of it is never referenced? If I'm only using a small number of entries from an index buffer, is the entire index buffer copied across, or just the range I've provided? Also, will only the referenced vertices be loaded, or the whole buffer? Basically, where is the identification of what is actually used performed? In the driver or on the GPU? Thanks.

It's up to you what gets copied. When you lock a vertex or index buffer, you're requesting a certain amount of video memory and you're provided with a pointer to fill with your data. You copy everything that you need to the video memory at this time. Then you unlock the buffer, which allows that memory to be accessed by the 3D pipeline.
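For illustration, here's a minimal sketch of that lock/copy/unlock pattern in D3D9 C++; the Vertex struct, the sizes, and the BuildGeometry helper are placeholders, not anything from your code, and device is assumed to be an existing IDirect3DDevice9*:

    // Sketch: create a vertex buffer and fill it via Lock/Unlock (D3D9).
    struct Vertex { float x, y, z; DWORD color; };
    const DWORD kFvf = D3DFVF_XYZ | D3DFVF_DIFFUSE;

    std::vector<Vertex> vertices = BuildGeometry();  // hypothetical helper

    IDirect3DVertexBuffer9* vb = NULL;
    device->CreateVertexBuffer(vertices.size() * sizeof(Vertex),
                               D3DUSAGE_WRITEONLY, kFvf,
                               D3DPOOL_MANAGED, &vb, NULL);

    void* data = NULL;
    vb->Lock(0, 0, &data, 0);                        // 0, 0 = lock the whole buffer
    memcpy(data, &vertices[0], vertices.size() * sizeof(Vertex));
    vb->Unlock();                                    // now usable by the pipeline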

When you call DrawPrimitive or DrawIndexedPrimitive, you specify which pieces of the data already in memory to draw.
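For example (the counts below are made up), the draw call itself names the exact sub-range of the buffers it touches:

    // Sketch: draw a sub-range of buffers already set on the device.
    device->SetStreamSource(0, vb, 0, sizeof(Vertex));
    device->SetFVF(kFvf);
    device->SetIndices(ib);
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                                 0,    // BaseVertexIndex
                                 0,    // MinVertexIndex
                                 24,   // NumVertices the indices may reference
                                 36,   // StartIndex into the index buffer
                                 12);  // PrimitiveCount (triangles)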

I'm not sure I understand your last question, but hopefully the stuff above answers it.

There are lots of articles, tutorials, and books that discuss how to approach filling the video memory, such as how much to put in, limitations, optimizations for speed/memory usage, etc.

Good luck,
Chris

I mean more the case where the buffers are pushed out of video memory (as you've rendered so many other things since then), or where you create them in scratch or somesuch (i.e. they are in system memory). When you then set them as the active streams, does DX copy the ENTIRE stream into video memory even if only a small amount of it is being used, or only the bits referenced (particularly with index buffers, where you can identify the exact bit of the buffer that will be used during the API call)?

The problem is that I have a scene full of tiny objects, and if I put them in separate buffers it runs ridiculously slowly due to the time taken to keep continually swapping things in and out; but if I were to batch them together then (it seems) I wouldn't be able to cull anything in software, and transforming all the vertices causes a performance hit. It appears to be a no-win situation.

Also, any pointers to these articles/tutorials/books you speak of? I've tried searching but they all seem to just state the same sort of things as the API docs without any REALLY useful information.

Thanks for your help.

Quote:
Original post by DarkImp
I mean more the case where the buffers are pushed out of video memory (as you've rendered so many other things since then), or where you create them in scratch or somesuch (i.e. they are in system memory). When you then set them as the active streams, does DX copy the ENTIRE stream into video memory even if only a small amount of it is being used, or only the bits referenced (particularly with index buffers, where you can identify the exact bit of the buffer that will be used during the API call)?

The resource manager works on a per-resource scale as far as I'm aware. It also only works for D3DPOOL_MANAGED (as the name suggests!). If your resource gets kicked back to AGP/SYSMEM then the whole thing will be re-uploaded when it's requested. It's worth noting that D3DPOOL_DEFAULT should remain resident up until a device-lost or device-destroyed state.
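For what it's worth, the pool is picked at creation time, so that's where this behaviour is decided; a sketch (buffer size, usage flags, and variable names are placeholders):

    // Managed: the runtime keeps a system-memory backup and re-uploads the
    // whole resource to video memory whenever it has been evicted.
    device->CreateVertexBuffer(size, D3DUSAGE_WRITEONLY, kFvf,
                               D3DPOOL_MANAGED, &managedVb, NULL);

    // Default: lives in video/AGP memory and isn't paged by the runtime,
    // but you must release and recreate it yourself around a device loss.
    device->CreateVertexBuffer(size, D3DUSAGE_WRITEONLY, kFvf,
                               D3DPOOL_DEFAULT, &defaultVb, NULL);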

Quote:
Original post by DarkImp
The problem is that I have a scene full of tiny objects, and if I put them in separate buffers it runs ridiculously slowly due to the time taken to keep continually swapping things in and out

Are you sure they're being swapped in and out of RAM? Geometric data is tiny compared to textures, so it's unlikely to get kicked out. What sort of performance data have you got?

Quote:
Original post by DarkImp
but if I were to batch them together then (it seems) I wouldn't be able to cull anything in software

Of course you could. Who said you have to use the same data for rendering and culling? In fact, I'd say that idea is fairly questionable anyway...

You have your geometry data in its full glory uploaded to the GPU; you then maintain ONLY the information you need (positions/normals?) in a system-memory copy. If you link the two bits of data together by index/vertex ranges, you can still render as appropriate.
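As a rough sketch of that idea (the struct, the frustum test, and the container are made up for illustration), the per-object record only needs a coarse bound plus the ranges that tie it back to the shared GPU buffers:

    // Per-object record: a bound for software culling plus the
    // index/vertex ranges into the big shared vertex/index buffers.
    struct ObjectRange
    {
        D3DXVECTOR3 center;      // bounding-sphere centre (system memory only)
        float       radius;
        UINT        minVertex;   // first vertex this object uses
        UINT        numVertices; // how many vertices it spans
        UINT        startIndex;  // first index in the shared index buffer
        UINT        primCount;   // triangles to draw
    };

    // Cull in software, then draw only the visible ranges.
    for (size_t i = 0; i < objects.size(); ++i)
    {
        const ObjectRange& o = objects[i];
        if (!SphereInFrustum(o.center, o.radius))  // hypothetical frustum test
            continue;
        device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0,
                                     o.minVertex, o.numVertices,
                                     o.startIndex, o.primCount);
    }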

Quote:
Original post by DarkImp
Also, any pointers to these articles/tutorials/books you speak of? I've tried searching but they all seem to just state the same sort of things as the API docs without any REALLY useful information.

I'm a big fan of "Real-Time Rendering, Second Edition" for these sorts of abstracted problems. It's the only book that permanently lives on my desk [smile]

hth
Jack
