vertex buffers and culling?
Hi,
I have some decisions to make and I can't make up my mind on a few of them. Hopefully I can get some useful insight...
I am going to perform a culling/clipping step (using BSP and portals) before rendering each frame. To do this I have a triangle object that contains pointers back to the original vertex positions (for efficiency).
After I determine which triangles are visible, I'll have to modify the vertex buffer so it only contains the visible triangles (so I assume my only choice here is a dynamic buffer?)
So basically, what I was looking for opinions on:
- Is it acceptable to use only one big vertex buffer for storing mesh data, as opposed to using an index buffer as well? Among other things I'm also using texture coordinates, so the indexed approach seems more trouble than it's worth.
- When clipping meshes, is the only option to use dynamic buffers for rendering? I'm guessing so...
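For context, here is a minimal sketch of the per-frame step being described (the struct names and fields are assumptions, not from the post): the culling pass flags triangles, and the CPU then rebuilds a flat vertex list of only the visible ones, which is the data you would copy into a dynamic vertex buffer each frame (e.g. locking with D3DLOCK_DISCARD in Direct3D 9).

```cpp
#include <cstddef>
#include <vector>

// A vertex with position and texture coordinates.
struct Vertex { float x, y, z, u, v; };

// Each triangle references the shared vertex array by index and carries
// a visibility flag set by the BSP/portal culling pass.
struct Triangle {
    std::size_t v[3]; // indices into the mesh's vertex array
    bool visible;     // result of this frame's culling pass
};

// Rebuild the CPU-side vertex list containing only visible triangles.
// This is what would then be copied into the dynamic vertex buffer.
std::vector<Vertex> BuildVisibleVertices(const std::vector<Vertex>& verts,
                                         const std::vector<Triangle>& tris)
{
    std::vector<Vertex> out;
    out.reserve(tris.size() * 3);
    for (const Triangle& t : tris) {
        if (!t.visible) continue;
        for (int i = 0; i < 3; ++i)
            out.push_back(verts[t.v[i]]);
    }
    return out;
}
```

Note this rebuild (and the upload it implies) happens every frame, which is exactly the cost the replies below warn about.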
Thanks for your help.
I don't think clipping the geometry per frame is a good idea (unless I've misunderstood your idea), as BSPs are not used for rendering anymore. You should keep geometry changes on the CPU to a minimum, as each change requires new data to be uploaded to the GPU.
Just keep your geometry separated into chunks, then determine the visible chunks per frame (with a quadtree, octree, portals, or whatever) and render them.
Also, when using indices your mesh data will generally require less space, which means less time to upload to the GPU and more free GPU RAM for you.
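To put a rough number on the index-buffer savings, here is a back-of-the-envelope comparison for an N x N grid of quads (the sizes are assumptions: a 20-byte vertex with position + UV, and 16-bit indices):

```cpp
// Assumed sizes: 3 floats position + 2 floats UV, 16-bit indices.
const int kVertexSize = 20;
const int kIndexSize  = 2;

// Non-indexed layout: every quad emits 2 triangles = 6 standalone vertices.
int FlatBytes(int n) { return n * n * 6 * kVertexSize; }

// Indexed layout: vertices are shared, so only (n+1)^2 unique vertices,
// plus 6 indices per quad.
int IndexedBytes(int n) {
    return (n + 1) * (n + 1) * kVertexSize + n * n * 6 * kIndexSize;
}
```

For a 64x64 grid this works out to 491,520 bytes non-indexed versus 133,652 bytes indexed, roughly 3.7x smaller, and the gap grows with how much the vertices are shared.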
Quote:as BSPs are not used anymore for rendering.
BSP was never used for rendering itself -- it is an algorithm that splits space into convex parts.
But many developers still use it to split maps in their editors...
Anyway, if you plan to use textures sized at least 2048x2048, you won't get maximum speed out of that scene-reuploading technique, since you shouldn't be changing textures and materials (shaders/constants/d3dmaterials :D) when you work with data that large.
So, in the real world, you won't gain much speed from that technique, and I suggest you use more traditional techniques and add some kind of grouping of the split scene parts if you really need to reduce draw calls.
I have a similar problem right now. Currently I split the mesh into chunks according to their material, to avoid state switching. But rendering a lot of geometry is obviously slow, so I wanted to go for a binary tree (which I roughly know how to implement). However, if I use a binary tree now, I can't sort by texture/shader at all anymore, and I'll switch states like crazy; is that correct?
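The two aren't mutually exclusive: the tree only has to produce a per-frame list of visible chunks, and that (small) list can then be sorted by material before issuing draws. A minimal sketch (the struct and field names are assumptions):

```cpp
#include <algorithm>
#include <vector>

// A chunk of geometry living in the shared vertex/index buffers.
struct Chunk {
    int materialId;  // texture/shader group this chunk belongs to
    int firstIndex;  // where the chunk's geometry starts in the index buffer
    int indexCount;
};

// After the tree walk hands you the visible chunks, sort that per-frame
// list by material so consecutive draw calls share render state.
void SortVisibleByMaterial(std::vector<const Chunk*>& visible) {
    std::sort(visible.begin(), visible.end(),
              [](const Chunk* a, const Chunk* b) {
                  return a->materialId < b->materialId;
              });
}
```

Since only the visible subset is sorted each frame, the cost is small, and you keep both the coarse culling from the tree and the state sorting from the material grouping.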
EDIT: Sorry for necroing, posted in the wrong thread
[Edited by - Eskapade on October 20, 2009 11:48:25 AM]