# Improving cohesion of the interpreter of my particle system


## Recommended Posts

I have made a particle system that reads a file to produce an effect. The file looks something like this:

```
effect Ball {
    image = "particle2.tga";
    scale = random(0, 0.05);
    scaleVel = 0.05;
    density = 500;
    colour = rgb(150, 0, 0);
    colourVel = rgb(-50, 0, 0);
    position = sphere(1);
    velocity = towards(vector(0, 0, 0));
    acceleration = vector(0, 0, 0);
    life = random(0, 2);
}
```

There are functions like vector and random, each of which signifies a predetermined functor. The effect is stored as a vector of Functor*, as each functor derives from the class Functor.

At the moment I am having to write a separate function to construct each functor, even though there is obvious repetition across these functions.

For example, there are currently these functions:

```cpp
class EffectManager {
    // stuff
private:
    typedef std::vector<Token>::iterator token_it;

    Functor* handle_func_rgb(token_it& it);
    Functor* handle_func_vector(token_it& it);
    Functor* handle_func_sphere(token_it& it);
    Functor* handle_func_random(token_it& it);
    Functor* handle_func_hemisphere(token_it& it);
    Functor* handle_func_square(token_it& it);
    Functor* handle_func_circle(token_it& it);
    Functor* handle_func_towards(token_it& it);
    Functor* handle_func_away(token_it& it);
};
```

token_it is just the iterator for the tokens from the scanner.

I check if the current token is the name of a function (the scanner flags the token as a function) and then call the respective function.

This obviously isn't maintainable and is just bloating my class but I do not know how to better implement this.

A typical "constructor" function looks like this:

```cpp
Functor* EffectManager::handle_func_sphere(token_it& it)
{
    Token& token = *it;
    if(token.type != Type::LPAREN) {
        error_str_ = "Syntax error: expected LPAREN after token sphere";
        return 0;
    }
    token = next_token(it);
    if(token.type != Type::FLOAT && token.type != Type::FUNCTION) {
        error_str_ = "Error: unacceptable type for sphere call";
        return 0;
    } else {
        if(token.type == Type::FLOAT) {
            return new FuncSphere(token.float_value);
        } else {
            return new FuncSphere(handle_function_call(it));
        }
    }
}
```

This creates the FuncSphere with whatever number was in the parentheses, or, if there was a function call inside the parentheses, it creates a new functor and passes it to FuncSphere.

Now for the actual question:
What would be a better way to construct each functor, considering that I know everything about them at compile time, in a more extendible manner than what I currently have? I know the number of arguments each one takes (the functors have a field declaring whether they're unary, binary, etc.), the types of each argument and the return types of each functor.

The best I have come up with is something along the lines of:

```cpp
template<typename T>
typename boost::enable_if<boost::is_base_of<Functor, T>, Functor*>::type
ctor_unary(token_it& it, FunctorDesc& desc)
{
    // read args using FunctorDesc and the iterator
    return new T(/*arg0*/);
}

template<typename T>
typename boost::enable_if<boost::is_base_of<Functor, T>, Functor*>::type
ctor_binary(token_it& it, FunctorDesc& desc)
{
    // read args using FunctorDesc and the iterator
    return new T(/*arg0*/, /*arg1*/);
}
```

Where FunctorDesc holds an array of integers with each argument's type and another integer for the return type. This allows me to use boost::bind along with a premade FunctorDesc and the above functions to make a function pointer that takes the token iterator, which I can then store in a map keyed by the function's name. That gives me a more easily extendible way of making the "constructor" functions, but it will still result in a lot of functions, and I have to make one for each number of arguments.
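To make the name-to-constructor idea concrete, here is a minimal sketch of the map-based dispatch, using C++11 `std::function` as a stand-in for `boost::function`/`boost::bind`. The `Token`, `Functor` and `handleSphere` definitions below are simplified hypothetical versions of the poster's types, not the real ones:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-ins for the poster's Token and Functor hierarchy.
struct Token { std::string text; };
struct Functor {
    virtual ~Functor() {}
    virtual float eval() const = 0;
};
struct FuncSphere : Functor {
    float radius;
    explicit FuncSphere(float r) : radius(r) {}
    float eval() const override { return radius; }
};

using token_it = std::vector<Token>::iterator;
using Handler  = std::function<Functor*(token_it&)>;

// A single name -> handler registry replaces the family of handle_func_*
// members; adding a functor becomes one map entry instead of a new method.
std::map<std::string, Handler>& registry() {
    static std::map<std::string, Handler> r;
    return r;
}

// Simplified handler: real code would check for '(' and ')' and recurse
// for nested function calls; here we just consume one numeric token.
Functor* handleSphere(token_it& it) {
    return new FuncSphere(std::stof(it->text));
}
```

The scanner then looks the function name up in `registry()` and invokes the handler, so the dispatch switch disappears.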

Sorry for the lengthy post; I've tried to explain everything. This is the most complex thing I've ever written, and I feel my language knowledge is holding me back.

##### Share on other sites
My first question is: why think of the description as being tied to your objects in any way? What you are doing is writing a domain-specific language. Such languages are tied to the internal "descriptions" of systems, but you don't want them as "part" of those systems; i.e. your constructors should have no knowledge of how to parse the data file or its tokens. Using just boost::bind/boost::function you could do the following:

```cpp
typedef boost::function<boost::any (token_it)> ParseToken_t;

boost::any ParseString(token_it it) { /* .... */ }
boost::any ParseVector(token_it it) { /* .... */ }
boost::any ParseRandom(token_it it) { /* .... */ }
// etc.

struct LineType {
    const char* const mpLineType;
    ParseToken_t mParser;
};

LineType sParseEffect[] = {
    { "image", &ParseString },
    { "scale", &ParseRandom }
    // ... etc ...
};

// Code outline for the parser:
//  - parse the file and turn each line into a whitespace-free token stream;
//  - look up the start of the line (prior to '=') in the above array and
//    call the function.
```

The more complicated cases are done very easily with boost::bind compositions. So, for instance the line "velocity = towards( vector( x, y, z ) )" is simply:

"velocity", boost::bind( &Towards, boost::bind( &ParseVector, _1 ) )

It will parse the vector, pass that to "Towards", and then the result is assigned to your "velocity" object. You get to catch all sorts of errors since the boost::any conversions are strict, and you separate the language from the implementation, beyond needing a constructor that accepts the required any input. And, of course, if you need multiple arguments it is just a combination of bind compositions and a descriptive structure which you will pass to the object, instead of requiring a complicated parsing constructor.
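The composition above can be sketched with C++11 `std::bind`, which nests the same way `boost::bind` does. `ParseVector` here is a hypothetical stand-in that reads three floats from a string stream rather than a token iterator, and `Towards` is a placeholder that just forwards its target:

```cpp
#include <functional>
#include <sstream>

struct Vec3 { float x, y, z; };

// Hypothetical parser stand-in: reads three floats from the stream,
// playing the role of the post's ParseVector over a token iterator.
inline Vec3 ParseVector(std::istringstream& in) {
    Vec3 v;
    in >> v.x >> v.y >> v.z;
    return v;
}

// Placeholder for the real Towards functor: here it just forwards the target.
inline Vec3 Towards(Vec3 target) { return target; }

// The std::bind analogue of
//   boost::bind(&Towards, boost::bind(&ParseVector, _1)):
// the inner bind parses the vector, the outer bind feeds the result to Towards.
inline std::function<Vec3(std::istringstream&)> MakeTowardsHandler() {
    using namespace std::placeholders;
    return std::bind(&Towards, std::bind(&ParseVector, _1));
}
```

Calling the returned handler with the input stream runs the inner parse first, then the outer function, exactly the order "towards(vector(...))" implies.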

Anything much more complicated would justify looking at boost::spirit or other solutions (yacc, bison, etc.) to separate the language and parsing from the objects they construct. If you have problems getting your head around the boost function composition as it applies here, let me know. I don't mind going into details (possibly in a different thread), as I've used this pattern many times when writing quick domain-specific languages/bindings such as this.

##### Share on other sites
I agree with AllEightUp.
Having parsing, language and model so close to each other is just ugly.
Designing an ad-hoc language was probably a bad idea as well; JSON, YAML or XML appear to deliver just fine in those cases. JSON appears particularly attractive in my opinion.
So the first thing is surely to get rid of the parsing and work on the pure model only. This involves `struct`s and `union`s at the lowest level; perhaps you'll have to work it a bit.

##### Share on other sites

> I agree with AllEightUp.
> Having parsing, language and model so close to each other is just ugly.
> Designing an ad-hoc language was probably a bad idea as well; JSON, YAML or XML appear to deliver just fine in those cases. JSON appears particularly attractive in my opinion.
> So the first thing is surely to get rid of the parsing and work on the pure model only. This involves `struct`s and `union`s at the lowest level; perhaps you'll have to work it a bit.

I don't disagree with the JSON/XML etc. comment, but sometimes custom domain languages make considerable sense, so long as they remain "just" a little domain language and you don't try to turn them into generic languages. With XML/JSON I can't think of a decently clean way of doing the semi-complicated line from the example:

velocity=towards(vector(0,0,0));

That form of semi-code in the domain language is where such things are sometimes justified, as very simple bits of logic can get you a long way. Of course, if you start adding variables or math or just about anything beyond the simple concepts in the example: stop, don't do it. Instead, at that point I suggest bypassing XML/JSON completely, embedding Lua and going from there. Lua looks a bit like JSON when you are just describing objects and such, but of course it is a full-blown scripting language at the same time. I.e. take the entire example descriptor and convert it to Lua:

```lua
Ball = {
    image = "particle2.tga",
    scale = random(0, 0.05),
    scaleVel = 0.05,
    density = 500,
    colour = rgb(150, 0, 0),
    colourVel = rgb(-50, 0, 0),
    position = sphere(1),
    velocity = towards(vector(0, 0, 0)),
    acceleration = vector(0, 0, 0),
    life = random(0, 2)
}
```
Pretty sure that would be correct Lua code, and of course now if you do start using scripted items in the definitions, no extra work is put on the programmer.

For "pure" data, i.e. not even basic composition such as the velocity line, I'd probably go with JSON myself.

##### Share on other sites

> My first question is why think of the description as being tied to your objects in any way?

If I'm honest I don't know why I decided to use functors. Reading your post has made me realise that I could avoid the problem I'm having by just making them generic functions and using function pointers.

As for the whole XML/JSON/Lua thing: I'm a first-year CS student and my goal is to be programming low-level "stuff" (for lack of a better word). I originally set myself the task of making a particle system that could be fully manipulated using some sort of code. I figured it'd teach me a lot about scanners and interpreters, which it has. I knew that if I had just used a pre-existing language I would end up googling for solutions instead of making myself think.

##### Share on other sites

> > My first question is why think of the description as being tied to your objects in any way?
>
> If I'm honest I don't know why I decided to use functors. Reading your post has made me realise that I could avoid the problem I'm having by just making them generic functions and using function pointers.
>
> As for the whole using xml/json/lua thing. I'm a first year CS student and my goal is to be programming low-level "stuff" (lack of a better word). I originally set myself the task of making a particle system that could be fully manipulated using some sort of code. I figured it'd teach me a lot about scanners and interpreters which it has. I knew that if I had just used a pre-existing language I would just end up googling for solutions instead of making myself think.

Don't worry about the first bit; you get into something and your initial thought is "just hook it here and there will be no problem". Happens to the best of us; just remember it for next time.

The second bit is also fine. But depending on what you want to do (it seems games-related, of course), there is a time to innovate, a time to imitate and a time to just go get an existing library. As a pure learning experience what you are doing sounds very reasonable, so don't take anything I've said as criticism. But keep in mind that your future will include working with others, and as such, writing custom parsing code for something which can be easily encapsulated with a standard format/script language/whatever is probably something you do "not" want to do in the real world. At work, if I see custom-format text files I go hunting for scalps; it's just a waste of time unless you really have a solid reason for something custom. A particle system "could" justify a custom system, but I'd take a lot of convincing.

Other than the above notes, I'm totally willing to keep suggesting ways to clean things up until it trips the "go get a library" flag. I.e. at some point "learning" the libraries you should use in the real world becomes more viable than learning to write a scripting language could ever be. You'll have plenty of other things on which to learn parsing/tokenizing/interpreting skills.

##### Share on other sites
So I've taken another go at it, and I've got another question.

I want the ability to pass the particle to the functions, which is easy when I'm not dealing with embedded functions, but when I am it creates issues.

First of all, my parsing code is kind of horrible when trying to deal with embedded functions:
```cpp
typedef boost::function<boost::any (Particle*)> bound_particle_func;

Vector3 vector(float x, float y, float z, Particle* p = 0);
bound_particle_func parse(token_it& it);

bound_particle_func ParseVector(std::vector<Token>& tokens)
{
    const static int arg_count = 3;
    std::vector<boost::variant<float, bound_particle_func>> args;
    args.reserve(arg_count);
    int type[] = { 0, 0, 0 };

    for(token_it it = tokens.begin(); it != tokens.end(); ++it) {
        Token& t = *it;
        if(t.type == Type::FLOAT) {
            args.push_back(t.float_value);
        } else if(t.type == Type::FUNCTION) {
            args.push_back(parse(it));
            type[args.size() - 1] = 1;
        } else {
            throw std::runtime_error("Type error: expected float");
        }
        if(args.size() > arg_count) {
            throw std::runtime_error("Too many arguments for function vector");
        }
    }

    return boost::bind(&vector,
        (type[0] == 0 ? boost::get<float>(args[0]) : boost::get<bound_particle_func>(args[0])),
        (type[1] == 0 ? boost::get<float>(args[1]) : boost::get<bound_particle_func>(args[1])),
        (type[2] == 0 ? boost::get<float>(args[2]) : boost::get<bound_particle_func>(args[2])),
        _1);
}
```

So vector(x, y, z) should then turn into boost::bind(&vector, x, y, z, _1) and I can pass it the Particle whenever it's called.

But how do I deal with something like:

vector(random(a, b), y, z)?

it will turn into:

boost::bind(&vector, bound random call, y, z, _1)

but I get an error saying it doesn't evaluate to a function taking one parameter, as random needs to be passed the particle somehow.

Do I need to bind it again, like boost::bind(&vector, boost::bind(bound random call, _1), y, z, _1)? That seems very long-winded.

I just get the feeling I'm starting to bodge things together, i.e. hacking at them till they work.

##### Share on other sites

> So I've taken another go and I've got another question
> <snip>
> So vector(x, y, z) should then turn into boost::bind(&vector, x, y, z, _1) and I can pass it the Particle whenever it's called.
>
> But how do I deal with something like:
>
> vector(random(a, b), y, z)?
>
> it will turn into:
>
> boost::bind(&vector, bound random call, y, z, _1)
>
> but I get an error saying it doesn't evaluate to a function taking 1 parameter as random needs to be passed the particle somehow.
>
> Do I need to bind it again like: boost::bind(&vector, boost::bind(bound random call, _1), y, z, _1)? but that seems very long winded
>
> I just get the feeling I'm starting to bodge job things. i.e. hacking them together till it works

That is exactly the bind you want to end up with, but it needs to be procedurally generated, and that means a more complicated parsing structure, which again leads me to suggest a Lex/Yacc combo (or a relative such as ANTLR, Bison etc.) for the parser itself, or an existing language.
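One way to see why the "extra" bind isn't long-winded in practice: if every parsed expression evaluates to a particle-taking callable, the parser produces the nested binding automatically as it recurses. Below is a C++11 lambda sketch of that idea (lambdas standing in for boost::bind); `Particle`, `literal`, `randomIn` and `vectorOf` are hypothetical simplified names, not the poster's actual API:

```cpp
#include <functional>

// Hypothetical particle: pretend the per-particle randomness lives here.
struct Particle { float rand01; };
struct Vec3 { float x, y, z; };

using FloatFunc = std::function<float(Particle*)>;

// Leaf: a literal becomes a callable that ignores the particle.
inline FloatFunc literal(float v) {
    return [v](Particle*) { return v; };
}

// Leaf: "random(a, b)" becomes a callable that *does* use the particle.
inline FloatFunc randomIn(float a, float b) {
    return [a, b](Particle* p) { return a + (b - a) * p->rand01; };
}

// Node: "vector(ex, ey, ez)" composes three already-bound sub-expressions;
// the particle is forwarded to each child when the whole tree is evaluated,
// so the second bind the post asks about is produced here, once, by the parser.
inline std::function<Vec3(Particle*)> vectorOf(FloatFunc ex, FloatFunc ey, FloatFunc ez) {
    return [ex, ey, ez](Particle* p) { return Vec3{ex(p), ey(p), ez(p)}; };
}
```

With this shape, "vector(random(0, 2), 5, 7)" parses to `vectorOf(randomIn(0, 2), literal(5), literal(7))`, and the caller only ever supplies the particle once.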

Now, having said the above, what you are looking to do is best described in BNF, which, even without using a compiler-compiler (the "cc" in Yacc), will still help you understand how you should structure the code (apologies if I goof this up, it's been a while):

```
// Your entire file is either a single assignment or a list of assignments.
statement ::= <assignment> | <statement> <assignment>

// An assignment is a variable = <expression>.
assignment ::= <variable_name> '=' <expression>

// The variable name is a string literal from the tokenizer.
variable_name ::= StringLiteral

// Now things start getting complicated.
expression ::= literal | function | math_operator
literal ::= StringLiteral | FloatLiteral | IntLiteral   // From the tokenizer
math_operator ::= '-' <expression>
                | <expression> '+' <expression>
                | <expression> '*' <expression>
                // ... etc etc ...
function ::= <function_name> '(' <expression_list> ')'
function_name ::= "Vector" | "Random" | etc.            // Or from the tokenizer as VECTOR | RANDOM etc.
expression_list ::= <expression> | <expression_list> ',' <expression>
```

What the BNF outline does is tell you how your parser needs to be broken up into functionality. You start at the bottom and work backwards, building the parser piecemeal. So basically, before you even start parsing the actual structure, you need to make sure your tokenizer can properly classify tokens. What I mean by that is, let's say you are parsing just the "random" function; it can be complex or simple, such as:

Random( a+b*c, d )
Random( 0, 1 )

So, random in the BNF would be described (if you break it out of the generic outline above) as:

random ::= "Random" '(' <expression>, <expression> ')'

So, for the second case your parser can be very simple:
```cpp
Expression_t ParseRandom(TokenArray_t::iterator& begin, TokenArray_t::iterator end)
{
    // Do basic error checking:
    if( *begin == "Random" &&
        *(begin + 1) == "(" &&
        IsFloatLiteral(begin + 2) &&
        *(begin + 3) == "," &&
        IsFloatLiteral(begin + 4) &&
        *(begin + 5) == ")" )
    {
        // Valid input for the simple random function.
    }
}
```

Expand the above to include the ability to parse sub-expressions, which are possibly additional function calls etc. You would basically change "IsFloatLiteral" to "IsFloatExpression" and allow for type coercion from, say, int to float.
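A minimal recursive-descent sketch of one corner of that grammar, to show how sub-expressions fall out of the recursion. This is a hypothetical simplification: it handles only `FloatLiteral` and `Random(expr, expr)` over a plain character stream rather than real tokens, and for testability `Random(a, b)` just evaluates to the midpoint of its range:

```cpp
#include <cctype>
#include <sstream>
#include <stdexcept>
#include <string>

// Grammar subset (hypothetical):
//   expression ::= FloatLiteral | "Random" '(' expression ',' expression ')'
float parseExpression(std::istringstream& in);

// Consume one expected punctuation character or fail loudly.
void expect(std::istringstream& in, char c) {
    char got;
    in >> got;  // operator>> skips whitespace
    if (got != c) throw std::runtime_error(std::string("expected ") + c);
}

float parseExpression(std::istringstream& in) {
    in >> std::ws;
    if (std::isdigit(in.peek()) || in.peek() == '-') {
        float v;
        in >> v;                       // literal
        return v;
    }
    std::string name;
    while (std::isalpha(in.peek())) name += static_cast<char>(in.get());
    if (name == "Random") {            // function ::= name '(' expr ',' expr ')'
        expect(in, '(');
        float a = parseExpression(in); // sub-expressions recurse naturally
        expect(in, ',');
        float b = parseExpression(in);
        expect(in, ')');
        return (a + b) / 2.0f;         // placeholder semantics: midpoint
    }
    throw std::runtime_error("unknown function: " + name);
}
```

Because each grammar rule maps to one function, `Random(0, Random(2, 6))` needs no special handling: the inner call is just another `parseExpression`.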

So at this point I hope you understand how complicated this all gets, very quickly, as you try to make even a fairly simple language once you get beyond very, very simple things. I haven't even gotten into operator precedence and how to deal with it, and I could write several pages of parser code just based on what I've outlined above. Doing this once is probably not a bad idea, but you are already pushing the limits of what I would call a "simple" parser.
