Optimize particle system

Thanks for the hint, Lord_Evil!
Quote:Original post by ghostd0g
Ok, got that. But how do I avoid that if I need updates in the fragment shader?
It depends on what kind of particle attributes you have. Supposing you have a "life" value (which scales alpha down to 0 as the particle "dies"), you could just set a per-vertex attribute for it (I assume you're doing this with shaders, but in this specific case you could work it out on the FFP as well). The VS would then pass this value on to the PS.
Obviously, if you have a more complex attribute (like "which texture to use for this particle"), it isn't nearly so easy.
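For concreteness, here's a minimal sketch of that idea, with old-style GLSL embedded as C++ string literals; the names (a_life, v_life, u_tex) are placeholders I made up, not anything from your code:

```cpp
// Minimal sketch: a per-vertex "life" attribute, passed VS -> PS and used to fade alpha.
const char* vs_src =
    "attribute float a_life;            // 1.0 = just born, 0.0 = dead\n"
    "varying float v_life;\n"
    "void main() {\n"
    "    v_life = a_life;               // hand the per-vertex value on to the PS\n"
    "    gl_FrontColor = gl_Color;\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";

const char* fs_src =
    "uniform sampler2D u_tex;\n"
    "varying float v_life;\n"
    "void main() {\n"
    "    vec4 c = texture2D(u_tex, gl_TexCoord[0].xy) * gl_Color;\n"
    "    c.a *= clamp(v_life, 0.0, 1.0);   // fade out as the particle dies\n"
    "    gl_FragColor = c;\n"
    "}\n";
```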
Quote:Original post by ghostd0g
Actually, there are lots of pixels to be rendered, because the big quads (20-30% of the viewport size) are all over the screen and they get blended, too.
Ok. Then I agree you are likely ROP bound.
Quote:Original post by ghostd0g
I'll look into updating only at a lower framerate, but I don't understand how inter/extrapolating would save rendering power. Or maybe I don't really understand what you mean by inter/extrapolating.
It wouldn't. At best, you could save some bandwidth by uploading an update every N frames instead of every frame. This does require a bit of extra vertex work, but it's a real gain compared to a full update each frame.
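To make the extrapolation idea concrete, here's a rough sketch: you upload position plus velocity only every N frames and let the VS extrapolate in between. The names (a_velocity, u_timeSinceUpdate) are invented for the example:

```cpp
// Sketch of "update every N frames": positions and velocities are uploaded together,
// and the vertex shader extrapolates between uploads.
const char* vs_src =
    "attribute vec3 a_velocity;        // per-particle velocity, uploaded with the position\n"
    "uniform float u_timeSinceUpdate;  // seconds since the last CPU-side upload\n"
    "void main() {\n"
    "    vec4 pos = gl_Vertex;\n"
    "    pos.xyz += a_velocity * u_timeSinceUpdate;   // extrapolate between uploads\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_FrontColor = gl_Color;\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * pos;\n"
    "}\n";
// On the CPU you refill the VBO only every N frames and just bump
// u_timeSinceUpdate (a cheap uniform change) on the frames in between.
```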
Quote:Original post by ghostd0g
First transform the whole particle system (emitter position and area of validity), then transform the particles relative to the system.
Ok, forget it.
Quote:Original post by ghostd0g
Could you elaborate further on that topic or give me some more hints, please?
The way I'm thinking of is to have a VBO with vertex positions and texture coordinates. Then I could send the rotations as attributes to the VS and calculate the new vertex positions (rotate them) in the shader. Could this work, and is it a good way?
I won't. You seem to have a better idea of what you're doing than I do, and I fear an abstract example would actually muddy your thinking. Yes, it could work the way you describe it now.
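That said, here's a minimal sketch of the kind of thing you describe, assuming screen-aligned billboarded quads; all the attribute names (a_center, a_corner, a_angle) are made up for illustration:

```cpp
// Sketch of "send the rotation as an attribute": each quad corner carries the
// particle centre, a corner offset and a rotation angle, and the VS rotates the offset.
const char* vs_src =
    "attribute vec3 a_center;   // particle centre position\n"
    "attribute vec2 a_corner;   // e.g. (-size,-size) .. (size,size)\n"
    "attribute float a_angle;   // per-particle rotation in radians\n"
    "void main() {\n"
    "    float s = sin(a_angle);\n"
    "    float c = cos(a_angle);\n"
    "    vec2 r = vec2(c * a_corner.x - s * a_corner.y,\n"
    "                  s * a_corner.x + c * a_corner.y);\n"
    "    vec4 pos = gl_ModelViewMatrix * vec4(a_center, 1.0);\n"
    "    pos.xy += r;            // expand the quad in view space (screen-aligned billboard)\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_FrontColor = gl_Color;\n"
    "    gl_Position = gl_ProjectionMatrix * pos;\n"
    "}\n";
```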
Quote:Original post by ghostd0g
I thought the real inefficiency of immediate mode comes from sending the whole data over the bus each frame. So of course it is more effective when the vertex data stays in graphics memory. But when I use a dynamic buffer, I want to update the positions/normals/whatever, and therefore I have to send those updates over the bus again (inefficient again?!?). But this might just come from my lack of knowledge.
IM isn't just bandwidth-inefficient; it also involves more driver work. Furthermore, the fact that a dynamic VBO sends the same data anyway doesn't mean it takes the same time to transfer: having the GPU pull a VBO via DMA may well be faster than immediate mode (although it's likely the driver internally uses a VBO to simulate IM anyway).
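As an aside, a common way to refill a dynamic VBO every frame is to "orphan" the old storage first, so the driver can hand you fresh memory instead of stalling while the GPU is still reading last frame's data. A rough sketch, assuming an invented interleaved Vertex layout and an extension loader (or GL 1.5+) for the VBO entry points:

```cpp
#include <GL/gl.h>   // plus an extension loader for the glBuffer* calls
#include <cstddef>

struct Vertex { float pos[3]; float uv[2]; float color[4]; };  // example layout

void UpdateParticleVBO(GLuint vbo, const Vertex* verts, std::size_t count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Orphan: re-specify the store with no data so last frame's buffer can be discarded.
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(Vertex), NULL, GL_STREAM_DRAW);

    // Upload this frame's particle vertices into the fresh storage.
    glBufferSubData(GL_ARRAY_BUFFER, 0, count * sizeof(Vertex), verts);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```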

Anyway, if you're ROP-bound and you really need all those big particles, you're in real trouble.

Previously "Krohm"

I don't know about the shader part, but if VBOs are stored on the video card and you have a problem dynamically updating them each frame (which I wouldn't expect to be a problem), you could still submit the data with the good old vertex arrays (rough sketch below). That way you keep your own copy in system memory, which you update as positions change, and you can send all of it at once (or in batches, depending on how many verts you have); it will still be much faster than immediate mode. VBOs should be faster still if you can use them, but if they cause you problems, drop down a notch rather than all the way back to immediate mode.

Also, what exactly are you simulating with particles at 20% of the screen? Maybe you could achieve the same effect with fewer, smaller particles using a different texture and/or style of blending. Of course, it depends on what exactly you're doing.
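For reference, the classic client-side vertex array path mentioned above looks roughly like this; the Vertex layout is just an example:

```cpp
// Sketch: the data lives in your own system-memory array and the driver
// pulls it at draw time -- one call per batch instead of one call per vertex.
#include <GL/gl.h>
#include <vector>

struct Vertex { float pos[3]; float uv[2]; };

void DrawParticles(const std::vector<Vertex>& verts)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts[0].pos);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), verts[0].uv);

    // One call submits the whole batch -- far fewer calls than immediate mode.
    glDrawArrays(GL_QUADS, 0, (GLsizei)verts.size());

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```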


One question: you mentioned in the original post that you profiled the app; what steps did you take to do this?

As for the compressed textures:
They reduce the amount of bandwidth required to transfer a texture from VRAM to the cache and then into a register on the GPU. If you are bandwidth-limited in any way, that reduction in data transfer will help.
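A minimal sketch of one way to ask for a compressed texture, assuming the EXT_texture_compression_s3tc extension is available; the function and parameter names are placeholders:

```cpp
// Sketch: request a compressed internal format so the driver stores the texture
// compressed, cutting VRAM-to-GPU bandwidth when it is sampled.
#include <GL/gl.h>
#include <GL/glext.h>   // for GL_COMPRESSED_RGBA_S3TC_DXT5_EXT

void UploadCompressedParticleTexture(GLuint tex, int width, int height,
                                     const unsigned char* rgbaPixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // The driver compresses the uncompressed RGBA source data to DXT5 on upload.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                 width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
}
```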

On uniforms:
Uniforms are designed for per-polygon data at worst; normally they would be used per model or per group of polygons. If you really need to change a uniform, then you should group your draw operations, as best you can, by uniform state to remove redundant changes from the rendering pipeline (see the sketch below).
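A rough sketch of that grouping idea; DrawItem and uniformKey are invented here to stand in for whatever per-batch uniform state you actually change:

```cpp
// Sort the frame's draw calls by "uniform state" so each distinct value is set
// only once, instead of re-setting uniforms for every batch.
#include <GL/gl.h>   // plus an extension loader for glUniform*
#include <algorithm>
#include <vector>

struct DrawItem {
    int     uniformKey;  // id of the uniform set / material this batch uses
    GLint   first;       // first vertex in the shared vertex buffer
    GLsizei count;       // number of vertices to draw
};

void DrawSorted(std::vector<DrawItem>& items, GLint uniformLoc)
{
    // Group items that share the same uniform state.
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.uniformKey < b.uniformKey; });

    int lastKey = -1;
    for (const DrawItem& item : items) {
        if (item.uniformKey != lastKey) {                     // change state only when it differs
            glUniform1f(uniformLoc, (float)item.uniformKey);  // stand-in for "apply this uniform set"
            lastKey = item.uniformKey;
        }
        glDrawArrays(GL_QUADS, item.first, item.count);
    }
}
```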

