Which is faster: creating one VB or using many existing VBs?

Here's what I have: a queue that takes all the buffers and textures to be drawn this frame. For opaque areas it simply sorts by texture and material and renders them in that order. Next it will sort and split transparent faces. Now my question: since I'm rendering on a face-by-face basis regardless of how I do it, is it faster to create a new vertex buffer sized to the vertices, fill it as a face list from back to front, SetStream that, and render one face at a time from that single vertex buffer? Or would it be faster to just use the pre-existing vertex buffers, SetStream each of those, render the triangle I want, and then SetStream the next one? Basically the question is: which is faster, creating a new VB and filling it, or using SetStream many, many times?
---- Erzengel des Lichtes (光の大天使, Archangel of Light)
Difficult to say; it'll be very slow either way.
Maybe you should look at a solution that doesn't require the "draw one triangle at a time" approach.

-Morten-
For the opaque geometry, use the existing VBs. You can make very large VBs for static geometry, so you shouldn't have too many SetStream calls there. For your sorted transparent polys you could use a single dynamic VB. Create the dynamic VB at the start and keep it between frames. Make it pretty big; the extra memory shouldn't be a problem in this case. If you hit the end of the buffer, just render the contents of the buffer, relock it, and go back to the beginning. This requires interleaving the writes to the VB with the rendering, though.
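For illustration, here's a minimal sketch of that fill-and-flush pattern in D3D9. The vertex format, buffer size, and the already-sorted vertex list are assumptions, not anything from the thread; the point is the DISCARD/NOOVERWRITE lock flags, which are what make reusing one dynamic VB cheap:

#include <d3d9.h>
#include <vector>
#include <cstring>

// Hypothetical vertex format for the transparent faces.
struct TransparentVertex { float x, y, z; DWORD color; };
const DWORD kFVF      = D3DFVF_XYZ | D3DFVF_DIFFUSE;
const UINT  kMaxVerts = 4096; // "make it pretty big"

// dynVB is created once with D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY in D3DPOOL_DEFAULT
// and kept between frames. sortedVerts holds the faces back-to-front, 3 verts per face.
void DrawSortedTransparent(IDirect3DDevice9* dev,
                           IDirect3DVertexBuffer9* dynVB,
                           const std::vector<TransparentVertex>& sortedVerts)
{
    dev->SetFVF(kFVF);
    dev->SetStreamSource(0, dynVB, 0, sizeof(TransparentVertex));

    UINT writePos = 0; // next free vertex slot in the dynamic VB
    for (size_t i = 0; i + 2 < sortedVerts.size(); i += 3)
    {
        if (writePos + 3 > kMaxVerts)
            writePos = 0; // hit the end of the buffer: go back to the beginning

        void* p = NULL;
        // DISCARD when wrapping to the start, NOOVERWRITE when appending. This
        // tells the driver it can hand back memory without stalling on data the
        // GPU is still reading from earlier draws.
        dynVB->Lock(writePos * sizeof(TransparentVertex),
                    3 * sizeof(TransparentVertex), &p,
                    writePos == 0 ? D3DLOCK_DISCARD : D3DLOCK_NOOVERWRITE);
        std::memcpy(p, &sortedVerts[i], 3 * sizeof(TransparentVertex));
        dynVB->Unlock();

        // Still one face per call here, as in the original post; texture/material
        // changes would go between these draws.
        dev->DrawPrimitive(D3DPT_TRIANGLELIST, writePos, 1);
        writePos += 3;
    }
}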

It could actually be more efficient to transform the transparent polys on the CPU and then render the whole buffer at once. You take the CPU hit of transforming the polys, but you greatly reduce the number of DrawPrimitive calls, which is a Good Thing.
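A batched variant of the same sketch, under the assumption that the polys have already been transformed into a common space on the CPU (so no per-face state changes are needed) and that they all fit in the buffer:

// Copy the whole sorted list in one lock and draw it with a single call.
void DrawSortedTransparentBatched(IDirect3DDevice9* dev,
                                  IDirect3DVertexBuffer9* dynVB,
                                  const std::vector<TransparentVertex>& sortedVerts)
{
    void* p = NULL;
    dynVB->Lock(0, (UINT)(sortedVerts.size() * sizeof(TransparentVertex)),
                &p, D3DLOCK_DISCARD);
    std::memcpy(p, &sortedVerts[0], sortedVerts.size() * sizeof(TransparentVertex));
    dynVB->Unlock();

    dev->SetFVF(kFVF);
    dev->SetStreamSource(0, dynVB, 0, sizeof(TransparentVertex));
    dev->DrawPrimitive(D3DPT_TRIANGLELIST, 0, (UINT)(sortedVerts.size() / 3));
}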

I assume you're only rendering the transparent polys one at a time? If you are doing that for the opaque geometry as well, then don't.

Also, bear in mind that sorting and splitting the faces is only really necessary when you have intersecting transparent meshes. If you can separate the transparent polys into roughly convex, non-intersecting submeshes, then it is sufficient to sort by mesh and then render each mesh in two steps: first draw the back-facing polys, then draw the front-facing polys.

HTH,
Chris



Morten: I'd be happy to look at such a solution. Could you tell me how I could possibly render alpha properly without first painting the furthest triangle, then the next closest, then the next closest after that, and so on? As far as I've been able to tell, it's 100% impossible to do. If I'm wrong in this assertion, please tell me how; I would love to know how alpha blending is done quickly.

Treething: Relocking the VB for large transparent scenes sounds like a good idea. I usually prefer to keep memory allocations only as large as needed, but I suppose for the sake of speed a large dynamic VB would be better.
And as I stated in my original post, my opaque sections are sorted by texture; I do render from the original vertex buffers when doing this.

---- Erzengel des Lichtes (光の大天使, Archangel of Light)
To be 100% accurate in every situation you do indeed need to sort (and split where necessary) the transparent polys. But in 99% of cases this isn't required. Imagine you have two transparent cubes, one in front of the other but not intersecting. You could sort the individual polys and draw them back-to-front, one at a time. But the faster way is to sort first by mesh, then by facing: draw the furthest mesh first, with clockwise face culling so that only the back faces are drawn, then draw the front faces by reversing the culling. Then do the same for the next closest mesh. The end result is exactly the same as if you had sorted the polys individually.
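A minimal sketch of that two-pass trick with D3D9 render states; DrawMesh is a stand-in for however the mesh is actually submitted:

#include <d3d9.h>

// D3D's default winding makes clockwise faces the front faces, so culling CW
// leaves only the back faces, and culling CCW (the default) leaves the front faces.
void DrawTransparentMeshTwoPass(IDirect3DDevice9* dev,
                                void (*DrawMesh)(IDirect3DDevice9*))
{
    // Pass 1: back faces only.
    dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
    DrawMesh(dev);

    // Pass 2: front faces, blended over the back faces already in the frame
    // buffer. For a roughly convex, non-intersecting mesh this matches a
    // per-poly back-to-front sort.
    dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
    DrawMesh(dev);
}

Applied per mesh, furthest mesh first, this replaces the per-triangle sort for the non-intersecting case.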

If the two cubes were intersecting, though, this wouldn't be the case. It should be possible to detect those cases, and then sort the individual polys only for the transparent meshes that intersect.

Another way to cheat is to use an order-independent blending mode. In many older games you see all of the effects done with additive blending. IMO they do this simply because they don't have to worry about sorting with additive blending.
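For reference, the additive mode amounts to a couple of render states in D3D9; src*1 + dest*1 commutes, so the draw order of the transparent polys stops mattering:

#include <d3d9.h>

void SetAdditiveBlending(IDirect3DDevice9* dev)
{
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
    dev->SetRenderState(D3DRS_ZWRITEENABLE, FALSE); // effects usually leave depth writes off
}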
Question about sorting within a mesh: you say draw back-facing first, then front-facing. What about non-convex meshes? What if the mesh has two protrusions and you can see one through the other? With ID3DXMesh I've had such instances work fine. Would Draw(*)Primitive work fine in this case, and if so, why not just Draw(*)Primitive the entire section (the part that doesn't intersect with another)? Why even draw the back faces unless I want two-sided rendering?

---- Erzengel des Lichtes (光の大天使, Archangel of Light)
You may want to look at the "Interactive Order-Independent Transparency" paper on the nVidia developer site.

-Morten-
Ooh, I like that idea given in that paper, Morten. I'll have to look into it further before continuing my coding...

Thanks.

---- Erzengel des Lichtes (光の大天使, Archangel of Light)
I don't think you can fill the buffer with different data per render, e.g. between Begin/EndScene. I tried this; it just draws the data that was last entered into the buffer.
Question about the depth peeling: if I render randomly, there is the possibility that on each render I will render objects behind my transparent objects (for interpenetrating objects that's guaranteed). These won't be counted in the z-buffer as being seen, but they will be seen. Now I could fix this by turning off alpha blending during the render phase, but how do I transfer the transparency of the materials to the alpha channel of the texture?

Edit: Scratch that, render twice. The first pass sets up the z-buffer; the second uses the z-buffer to draw only the topmost faces. New question: how do I save the color data? UpdateSurface requires that the source surface be D3DPOOL_SYSTEMMEM, but D3DUSAGE_RENDERTARGET requires D3DPOOL_DEFAULT. For that matter, so does D3DUSAGE_DEPTHSTENCIL. Do I just lock down both surfaces and memcpy from one to the other (making sure both are the same size)?
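For the copy question, a minimal sketch assuming D3D9's GetRenderTargetData, which copies a D3DPOOL_DEFAULT render target into a matching D3DPOOL_SYSTEMMEM surface (sizes and formats must agree); the names here are placeholders:

#include <d3d9.h>

HRESULT SaveRenderTargetToSysmem(IDirect3DDevice9* dev,
                                 IDirect3DSurface9* renderTarget,  // D3DPOOL_DEFAULT
                                 IDirect3DSurface9** outSysmemCopy)
{
    D3DSURFACE_DESC desc;
    renderTarget->GetDesc(&desc);

    // Create a system-memory surface matching the render target.
    HRESULT hr = dev->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format,
                                                  D3DPOOL_SYSTEMMEM, outSysmemCopy, NULL);
    if (FAILED(hr))
        return hr;

    // GPU -> system memory copy; the sysmem surface can then be locked and read.
    return dev->GetRenderTargetData(renderTarget, *outSysmemCopy);
}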

---- Erzengel des Lichtes (光の大天使, Archangel of Light)

