particle system design/code issues kind of..

Hi all, I've written a simple particle system, though I have a couple of questions about some of the things I'm doing.

1. I am using a base class Object throughout the system so users of the system can define their own particle types by inheriting from this class, yadda yadda, you know how it goes. Anyway, the Object class has two pure virtual functions: Active, which lets the controller know when to remove the object from the list of objects, and Clone (the prototype design pattern, if you're familiar with it). However, I don't like imposing these responsibilities on the system's users. I was just curious whether anyone else has managed to get the same flexibility without needing virtual methods to determine when to kill a particle, and how to avoid slicing? I realise you may require code, but at this time I am unable to give it to you (writing this on my phone).

2. There is something odd about Visual Studio; first a little explanation of what the oddity is, and then how to reproduce it:
I am slowing down and speeding up the simulation speed of the system for debugging purposes by multiplying dt by a simulation factor (though the actual emission of particles is not currently affected by dt). After running the program within the Visual Studio environment and changing the simulation speed a few times, the program seems to struggle with allocating memory (particles are allocated and deallocated all the time; no effort has been made to optimise this, as it did not show up in my profile at all). However, when I run the program via its standalone exe, the problem ceases to exist. What could explain this? My thought is that Visual Studio creates some kind of virtual memory for each program run from within the environment, and my brute-force technique is fragmenting this memory, causing slower allocations. Also, I'm not sure, but could this have anything to do with cache misses? (I've yet to fully understand them.)

Thanks for your input, and I'm sorry I could not provide code; perhaps I will be able to when I get settled into university accommodation on Sunday night.

Presuming this is C++, Clone can be handled in the base class via the curiously recurring template pattern:
http://en.wikipedia.org/wiki/Curiously_recurring_template_pattern#Polymorphic_copy_construction
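A minimal sketch of how that could look for the Active/Clone interface described above (the class and member names here are illustrative, not taken from the original code):

#include <memory>

// Abstract interface the particle system works with.
struct object
{
    virtual ~object() {}
    virtual bool active() const = 0;
    virtual std::unique_ptr<object> clone() const = 0;
};

// CRTP layer: implements clone() once for every concrete particle type.
template <class Derived>
struct cloneable : object
{
    std::unique_ptr<object> clone() const
    {
        return std::unique_ptr<object>(new Derived(static_cast<const Derived&>(*this)));
    }
};

// A user-defined particle only supplies its data and active().
struct my_particle : cloneable<my_particle>
{
    float life;
    bool active() const { return life > 0.0f; }
};

The user's class no longer has to write Clone itself, and because the copy is made from the concrete type there is no slicing.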

When you run a program in the debugger, you're getting a debug version of the heap from the OS, with extra safety checks and tracking information. The debug heap is substantially slower than the normal heap. Note that this is separate from the heap you get when you build a Debug build of your program - so you can technically pay two separate overlapping debug penalties when it comes to doing dynamic memory allocation.
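If you want to confirm that the Windows debug heap is what you're seeing, you can (on a reasonably recent Windows/Visual Studio setup) disable it for debugger launches by setting an environment variable before the program starts, e.g. in the project's Debugging > Environment settings:

_NO_DEBUG_HEAP=1

If the slowdown disappears with that set, the debug heap was the cause rather than anything in the particle code itself.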

@Slavik81 Yeah, I considered that too, but the system defines a prebuilt particle type with a set of prebuilt modifiers/actions/effects/components to be used in tandem with it. The intention was to give beginners the chance to get set up quickly, while those who wanted to extend the system could do so by inheriting from the particle type. Which means I would have to declare the particle type as a template as well in order for this pattern to work, e.g.

template <class derived> class particle : public object<derived> { /* ... */ };

And then possibly typedef a version of particle<particle> for the system to use... Though now that you've mentioned it, I'm reconsidering. Thanks for your input (and sorry for not mentioning the language; C++ is correct).

EDIT: Ah yes, now I remember why I opted out of this method: it requires that the rest of the system know which class derived from Object is currently being used. There are two ways I can see the system would have to be built in order to make this work: 1. make the rest of the system template classes that take the particle type, effectively rendering the whole prototype thing useless; 2. extend each component in the system to mirror the type of particle in use, which would place a lot of unnecessary work on the end user if they wanted to create an entirely new kind of particle and not use the built-in one, which is unacceptable.

@ApochPiQ Hmm, yeah, I figured this was the case for debug builds, but is the story the same in release mode? Again, sorry I forgot to mention the build configuration.

One other thing that is bothering me: I would like to implement a pool-based allocation system, but I'm not sure how feasible this is (currently particles can be emitted in bursts, but I am allocating within a tight loop, which is the cause of this struggle when the program is run from within the environment), and seeing as the Clone method is the only way to allocate...

Thanks for your input; I look forward to further comments.

I already explained the situation regarding Debug builds in my earlier post.

As for pool allocation - what exactly are you uncertain about? People write pooled allocators all the time...
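As a rough sketch of what such a pool can look like (a fixed-capacity free list; the class name and interface are made up for illustration, not prescriptive):

#include <cassert>
#include <cstddef>
#include <new>
#include <type_traits>
#include <utility>
#include <vector>

// Fixed-capacity pool: all particles live in one contiguous block and
// freed slots are recycled, so no per-particle heap allocation happens.
template <class T, std::size_t Capacity>
class particle_pool
{
public:
    particle_pool()
    {
        free_.reserve(Capacity);
        for (std::size_t i = 0; i < Capacity; ++i)
            free_.push_back(&slots_[i]);
    }

    // Construct a T in a free slot; returns 0 if the pool is exhausted.
    template <class... Args>
    T* create(Args&&... args)
    {
        if (free_.empty()) return 0;
        void* p = free_.back();
        free_.pop_back();
        return new (p) T(std::forward<Args>(args)...);
    }

    // Destroy the particle and hand its slot back to the free list.
    void destroy(T* p)
    {
        assert(p != 0);
        p->~T();
        free_.push_back(reinterpret_cast<slot_t*>(p));
    }

private:
    typedef typename std::aligned_storage<sizeof(T), alignof(T)>::type slot_t;

    slot_t slots_[Capacity];
    std::vector<slot_t*> free_;
};

The bursty emission loop would then call create/destroy instead of new/delete; if Clone has to stay virtual, each concrete particle type could route its allocations through a pool like this via a class-specific operator new/delete.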


template <class derived> class particle : public object<derived> { /* ... */ };


Ugh... I think I just threw up a little in my mouth.

A particle system with a 'particle' class is going to be slow. Dead, dead slow. Particles are nothing more than data; the 'emitter' class is what does all the work of updating and customising the particles, and most, if not all, of this should be doable via data.

At work we have two types of particle effects: 'simple' and 'advanced'. Simple covers things you can do with the simple movement function, whereas 'advanced' has a large chunk of data containing things like movement curves to follow and other 'advanced' information which can be set up in an editor. The 'simple' emitters can be spammed all over the place because they are light; the more advanced ones are much heavier and require more memory/processing time.

In short: your 'particle' should be no more than a struct containing data (at most; if you want to do SSE/AVX processing then you'll need to decompose this further into separate pointers to blocks of memory for each component), and the rest of the work is done in your 'emitter', which allocates, deallocates and updates them.
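In other words, something along these lines (the field names and update logic are just placeholders):

#include <cstddef>
#include <vector>

// Pure data: no virtual functions, no behaviour.
struct Particle
{
    float x, y, z;     // position
    float vx, vy, vz;  // velocity
    float life;        // seconds remaining; <= 0 means dead
};

// The emitter owns the particles and does all the work on them.
class Emitter
{
public:
    void update(float dt)
    {
        for (std::size_t i = 0; i < particles_.size(); ++i)
        {
            Particle& p = particles_[i];
            p.x += p.vx * dt;
            p.y += p.vy * dt;
            p.z += p.vz * dt;
            p.life -= dt;
        }
        // expired particles would be recycled or swapped out here
    }

private:
    std::vector<Particle> particles_;
};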



I was just curious if anyone else had managed to get the same flexibility without the need for virtual methods to determine when to kill a particle and how to avoid slicing?


Polymorphic containers can be "unrolled".

One way is to have all objects share the same members; they differ only in which methods are called. If some members are unused they will waste memory, which may or may not matter. To process, sort the container by update type, then process each subsection in turn, just as each per-type container is processed in the next example (a sketch of this sorted-container approach follows further below):

The alternative is to have one container per type:

std::vector<A> a;
std::vector<B> b;

for (A& x : a) update(x);
for (B& y : b) update(y);

In the second approach there is no need for polymorphism or even inheritance.
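For comparison, here is a sketch of the first approach (the tag enum and update functions are invented for illustration):

#include <algorithm>
#include <cstddef>
#include <vector>

enum UpdateType { Linear, Curved };   // hypothetical update categories

struct Particle
{
    UpdateType type;
    float x, vx, life;
};

bool by_type(const Particle& a, const Particle& b) { return a.type < b.type; }

void update_linear(Particle& p, float dt) { p.x += p.vx * dt; p.life -= dt; }
void update_curved(Particle& p, float dt) { /* e.g. follow a curve */ p.life -= dt; }

void update_all(std::vector<Particle>& ps, float dt)
{
    // Group the particles so each update type forms one contiguous run...
    std::sort(ps.begin(), ps.end(), by_type);

    // ...then walk each run with its own update function.
    std::size_t i = 0;
    while (i < ps.size() && ps[i].type == Linear) update_linear(ps[i++], dt);
    while (i < ps.size())                         update_curved(ps[i++], dt);
}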

Active which lets the controller know when to remove the object from the list of objects and Clone
A boolean flag can suffice. Put a 'bool active' as a member.

There is another way which doesn't require a flag as such:

bool is_alive(const T& x); // predicate: true while the particle is still alive

auto last = std::partition(a.begin(), a.end(), is_alive);
a.resize(std::distance(a.begin(), last));

Here the "alive" property isn't stored explicitly but depends on some other property. Perhaps "lifetime" or "height". Partition will split the entries. Those to the right of 'last' have expired, so the container is resized.


The techniques above are fairly common in multi-processing and lend themselves to various new compiler features, such as auto-vectorization in VS, as well as to implementation via various libraries (TBB, the new VS stuff...).

They are also effectively pooled by default (the containers reuse their own contiguous storage), so there is no need for manual memory management.

The downside is that 'update' needs to be manually added to the main update chain. There is no completely clean solution, but it would be possible to craft some sort of wrapper that would handle that.
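One possible shape for such a wrapper (purely hypothetical, just to illustrate the idea) is a list of type-erased update callbacks that each typed container registers with once:

#include <cstddef>
#include <functional>
#include <vector>

// Central update chain; each particle container registers a single callback.
class UpdateChain
{
public:
    void add(const std::function<void(float)>& step) { steps_.push_back(step); }

    void update(float dt)
    {
        for (std::size_t i = 0; i < steps_.size(); ++i)
            steps_[i](dt);
    }

private:
    std::vector<std::function<void(float)> > steps_;
};

Each vector<A>, vector<B>, and so on would register a lambda that loops over its own elements, so adding a new particle type means one extra registration call rather than touching the main loop.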
