# Are C++ optimizers STL-aware?

## Recommended Posts

Well, for example, when looping through a vector you do something like this: for (std::vector<thing>::iterator iter = v.begin(); iter != v.end(); ++iter) {...} My question is: do optimizers look at the code in the "for" in order to see whether v.end() may change between iterations? If v.end() does not change, we can save a function call per iteration...
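For reference, a minimal sketch of the manual caching being asked about (the `thing` element type and `sum_things` function are invented for the example); hoisting end() into a local guarantees one call no matter what the optimizer can prove:

```cpp
#include <cassert>
#include <vector>

struct thing { int value; };

// Sum the elements, calling v.end() exactly once.
// Safe only because the loop body never inserts into or erases from v.
int sum_things(const std::vector<thing>& v) {
    int total = 0;
    for (std::vector<thing>::const_iterator iter = v.begin(), last = v.end();
         iter != last; ++iter) {
        total += iter->value;
    }
    return total;
}
```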

##### Share on other sites
Considering that end() is a template function, the result of the expansion of end() is usually examined during the optimization of the loop, so it will often be optimized. A stupid compiler, a bad vector implementation or a too deeply nested loop may cause the end() call not to be optimized out.

##### Share on other sites
It depends on what you do with the vector. If you've shared the vector with any non-inlined functions as a non-constant reference prior to the loop, and you call any more such functions from the middle of the inner loop, then the compiler is forced to assume that it will change. And since you're never updating the iterator when the vector actually changes, you may as well cache it if it's a performance concern.

Something like this is virtually guaranteed to check the end each iteration (I know it's a bad example, but still..):

for(std::vector<Actor>::iterator i = gActors.begin(); i != gActors.end(); ++i) {
    i->update();
}

##### Share on other sites
This isn't that different to a lot of C loops, e.g.:

int i;
for (i = 0; i < data->length; ++i) {...}

Since the compiler will expand end() as an inline function, hopefully the result will be pretty much the same, and so it will make the same assumptions as it would about a similar C loop. gcc doesn't seem to do this perfectly, however (according to my timing measurements at least).
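For comparison, the usual manual fix in the C-style case (the `buffer` struct and field names are invented for illustration) is to hoist the length into a local the compiler can trivially prove is loop-invariant:

```cpp
#include <cassert>

struct buffer { const int* data; int length; };

// Without the local, a compiler that can't prove the loop body leaves
// b->length untouched must reload it on every iteration.
int sum_buffer(const buffer* b) {
    int total = 0;
    const int n = b->length;  // cached once, before the loop
    for (int i = 0; i < n; ++i) {
        total += b->data[i];
    }
    return total;
}
```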

##### Share on other sites
Quote:
 Original post by ZQJ
This isn't that different to a lot of C loops, e.g.:

int i;
for (i = 0; i < data->length; ++i) {...}

Since the compiler will expand end() as an inline function hopefully the result will be pretty much the same and so it will make the same assumptions as it would about a similar C loop. gcc doesn't seem to do things like this perfectly however (according to my timing measurements at least).
True, but what's C got to do with anything?
In both languages the "solution" is to simply cache the end-point in a local variable prior to the loop.

Quote:
 Original post by SiCrane
Considering that end() is a template function, the result of the expansion of end() is usually examined during the optimization of the loop, so will often be optimized. A stupid compiler, a bad vector implementation or a too deeply nested loop may cause the end() call not to be optimized out.
Considering that the vast majority of my code isn't inlined and uses virtual functions, and that most loops (although often not the critical inner loops) call such functions, I'd say that the compiler can, in practice, rarely make such assumptions.
It does depend a lot on your coding style however..

##### Share on other sites
Actually some might say the question should perhaps be the other way around - "Are STL writers optimiser-aware?"
And the answer is most certainly yes.

##### Share on other sites
Quote:
 Original post by doynax
Considering that the vast majority of my code isn't inlined and uses virtual functions and that most loops (although often not the critical inner loops) call such functions I'd say that the compiler can, in practice, rarely make such assumptions.
It does depend a lot on your coding style however..

If you're calling virtual functions (or any function really) I'd say one memory load (probably from cache) is the least of your worries performance-wise.

##### Share on other sites
Quote:
Original post by ZQJ
Quote:
 Original post by doynax
Considering that the vast majority of my code isn't inlined and uses virtual functions and that most loops (although often not the critical inner loops) call such functions I'd say that the compiler can, in practice, rarely make such assumptions.
It does depend a lot on your coding style however..

If you're calling virtual functions (or any function really) I'd say one memory load (probably from cache) is the least of your worries performance-wise.
Probably, but it all adds up. And there's a whole lot of iteration going on in most games..
Consider things like small math functions, optimized for multiple instruction sets and called through virtual functions, or a very similar 32-to-16-bit color converter used in most of the image loaders of my current project.

##### Share on other sites
Moral: use for_each whenever feasible. Let the library authors worry about the proper way to bound your loops.

CM
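For the record, a minimal sketch of the idiom being advocated here (the `summer` functor is invented for the example) - the loop bounds live entirely inside the library call, and for_each returns the functor by value, so accumulated state survives the loop:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A trivial stateful functor; std::for_each applies it to every element
// and returns the (possibly modified) functor.
struct summer {
    int total;
    summer() : total(0) {}
    void operator()(int x) { total += x; }
};
```

Typical use: `summer s = std::for_each(v.begin(), v.end(), summer());` - afterwards `s.total` holds the sum, and the question of how often end() was called is the library's problem, not yours.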

##### Share on other sites
Quote:
 Original post by Conner McCloud
Moral: use for_each whenever feasible. Let the library authors worry about the proper way to bound your loops.
There really aren't many ways for_each can be implemented, considering that you have to provide both iterators as arguments. And I really don't like being forced into writing external functions for it..

What I'd like to see is a for_each version for processing entire containers with specializations for fast recursive traversal of trees (map/set), at least that way you'd have a much stronger argument for having to give up convenient iterative loops.
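A sketch of the kind of container-level interface being wished for (the name `for_each_all` is invented here; nothing like it is in the standard library): because it sees the whole container rather than an iterator pair, a library could specialize it per container type - e.g. recursive descent for set/map - while the generic fallback just loops:

```cpp
#include <cassert>
#include <set>

// Generic fallback: iterate the whole container. A library author could
// provide overloads/specializations for tree-based containers that
// traverse the node structure directly instead.
template <typename Container, typename Function>
Function for_each_all(Container& c, Function f) {
    typename Container::iterator first = c.begin(), last = c.end();
    for (; first != last; ++first) {
        f(*first);
    }
    return f;
}

// Small counting functor used to exercise it.
struct count_f {
    int n;
    count_f() : n(0) {}
    void operator()(int) { ++n; }
};
```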

##### Share on other sites
Quote:
Original post by doynax
Quote:
 Original post by Conner McCloud
Moral: use for_each whenever feasible. Let the library authors worry about the proper way to bound your loops.
There really aren't many ways for_each can be implemented considering that you have to provide both iterators as arguments.

Lies, damn lies, and statistics.

The library implementor is entirely free to provide a tag-dispatched for_each which tracks more information than the basic iterator to speed up iteration. In the case of for_each, I'm unaware of an implementation that does this... most people would just write a beefier iterator. That said, the implementation on my HD does more than simple iteration; it uses some other things for more debugging information:

// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library.  This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//....

  template<typename _InputIterator, typename _Function>
    _Function
    for_each(_InputIterator __first, _InputIterator __last, _Function __f)
    {
      // concept requirements
      __glibcxx_function_requires(_InputIteratorConcept<_InputIterator>)
      __glibcxx_requires_valid_range(__first, __last);
      for ( ; __first != __last; ++__first)
        __f(*__first);
      return __f;
    }

Of course, taking advantage of all the possible compiler-specific debugging information for all eleventybillion compilers out there just goes to show that even in the mundane, there are plenty of ways for_each can be implemented :-).

Quote:
 And I really don't like being forced into writing external functions for it..

Boost.Lambda? Using premade ones like std::mem_fun, std::mem_fun_ref, or std::ptr_fun usually does the trick for me.

Quote:
 What I'd like to see is a for_each version for processing entire containers with specializations for fast recursive traversal of trees (map/set), at least that way you'd have a much stronger argument for having to give up convenient iterative loops.

Considering the compiler and standard library I'm using right now are about a year old, I wouldn't be surprised if there is a standard library out there that does tag-dispatch for for_each. After all, even this one does it for std::find(iter,iter,value) - I see one of the tag dispatches right below the above for_each source :-).

##### Share on other sites
Quote:
Original post by MaulingMonkey
Quote:
 What I'd like to see is a for_each version for processing entire containers with specializations for fast recursive traversal of trees (map/set), at least that way you'd have a much stronger argument for having to give up convenient iterative loops.

Considering the compiler and standard library I'm using right now are about a year old, I wouldn't be surprised if there is a standard library out there that does tag-dispatch for for_each. After all, even this one does it for std::find(iter,iter,value) - I see one of the tag dispatches right below the above for_each source :-).
But how would you implement a recursive tree traversal for a range of iterators, and not just the special case of a complete container? I suppose it's possible but I can't really think of any fast way to do it.
Quote:
Original post by MaulingMonkey
Quote:
 And I really don't like being forced into writing external functions for it..

Boost.Lambda? Using premade ones like std::mem_fun, std::mem_fun_ref, or std::ptr_fun usually does the trick for me.
I guess it works for trivial functions, but things tend to get unreadable fast. And I find that reading an idiomatic container traversal is a lot easier than deciphering complex template constructs.
So while it's a neat trick for single-line loop bodies, it's just not an option for the majority of my loops..

##### Share on other sites
Quote:
 But how would you implement a recursive tree traversal for a range of iterators, and not just the special case of a complete container? I suppose it's possible but I can't really think of any fast way to do it.

You simply add extra tag types to your iterator classes indicating that some funkier way of traversal is possible, then specialize for it.

Being a library writer is nice that way, and the standard is flexible enough to allow extra template arguments in many cases (as long as they have defaults) and the like.

I'm not aware of any library taking that approach, and I'd guess that one reason is the multitude of programmers who would simply miss out on it while trying to be efficient (i.e. using the explicit loop instead of relying on the library to do the best thing(tm))... (and yes, I'm in many regards one of them programmers ;)
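A minimal sketch of the tag-dispatch machinery being described, modeled on how real implementations select between strategies for std::advance (`my_advance` is an invented name; the public function forwards to an overload picked at compile time by the iterator's category tag):

```cpp
#include <cassert>
#include <iterator>
#include <list>
#include <vector>

// Fast path: random-access iterators can jump in O(1).
template <typename It>
void my_advance_impl(It& it, int n, std::random_access_iterator_tag) {
    it += n;
}

// Generic fallback: anything modeling an input iterator, O(n).
// Bidirectional tags (e.g. std::list) convert to this base tag.
template <typename It>
void my_advance_impl(It& it, int n, std::input_iterator_tag) {
    while (n-- > 0) ++it;
}

// Public entry point: the tag object exists only to steer overload resolution.
template <typename It>
void my_advance(It& it, int n) {
    typename std::iterator_traits<It>::iterator_category tag;
    my_advance_impl(it, n, tag);
}
```

A specialized container's iterators could advertise an extra tag in exactly the same way, and a tag-dispatched for_each would pick the funkier traversal automatically.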

##### Share on other sites
Quote:
Original post by DigitalDelusion
Quote:
 But how would you implement a recursive tree traversal for a range of iterators, and not just the special case of a complete container? I suppose it's possible but I can't really think of any fast way to do it.

You simply add extra tag types to your iterator classes indicating that some funkier way of traversal is possible then specialize for it.
You're missing my point entirely.. I'm trying to say that it may be hard to gain anything from such specializations (for trees and other 'complex' data structures) given the current interface of for_each.

I guess this is a relatively uninteresting special case, but I've had a certain fascination with the internals of the STL's tree implementation ever since trying to write something similar myself.

##### Share on other sites
Quote:
Original post by doynax
Quote:
Original post by MaulingMonkey
Quote:
 What I'd like to see is a for_each version for processing entire containers with specializations for fast recursive traversal of trees (map/set), at least that way you'd have a much stronger argument for having to give up convenient iterative loops.

Considering the compiler and standard library I'm using right now are about a year old, I wouldn't be surprised if there is a standard library out there that does tag-dispatch for for_each. After all, even this one does it for std::find(iter,iter,value) - I see one of the tag dispatches right below the above for_each source :-).
But how would you implement a recursive tree traversal for a range of iterators, and not just the special case of a complete container? I suppose it's possible but I can't really think of any fast way to do it.

Assuming we're talking about a sorted binary tree, as per what my implementation uses for std::set/map, and assuming our main expense is traversal, we could probably use some recursive processing to keep some nodes on the stack, avoiding the cache misses we could otherwise get heading back up the tree, leaving us just with the ones for actually checking/touching new nodes. Basically we'd get something like (ignoring the template args):

void for_each( iterator begin , iterator end , functor f ) {
    node_t * current = begin._M_node;
    while ( current ) {
        node_t * next = current->parent; //optimization: cache
        if ( ! for_each_self_and_right_child( current , end._M_node , f ) ) { //if this iterates deep
            return; //we found the end
        }
        current = next; //optimization: no hit to current if the subcall iterated deep
    }
}

bool for_each_self_and_right_child( node * current , node * end , functor f ) {
    //returns true if end not reached
    if ( end && current == end ) return false;
    f( current->_M_value );
    return ! current->right || for_each_self_and_child( current->right , end , f );
}

bool for_each_self_and_child( node * current , node * end , functor f ) {
    //returns true if end not reached
    node * right = current->right; //optimization: cache
    if ( current->left && ! for_each_self_and_child( current->left , end , f ) ) return false; //end reached in left
    if ( end && current == end ) return false;
    f( current->_M_value );
    return ! right || for_each_self_and_child( right , end , f ); //optimization: no hit to current if the subcall iterated deep
}

Of course, if this runs slower, I wouldn't be surprised. Lots of arguments to each iteration. I haven't profiled it either, which is a must when implementing any optimization, of course :-).

Quote:
 So while it's a neat trick for single-line loop bodies it's just not an option for the majority of my loops..

Well, you're entitled to your opinion on that :-). Most of the time my processing is fairly easily convertible to a functor. But that's just me.

##### Share on other sites
Quote:
Original post by MaulingMonkey
Quote:
Original post by doynax
Quote:
Original post by MaulingMonkey
Quote:
 What I'd like to see is a for_each version for processing entire containers with specializations for fast recursive traversal of trees (map/set), at least that way you'd have a much stronger argument for having to give up convenient iterative loops.

Considering the compiler and standard library I'm using right now are about a year old, I wouldn't be surprised if there is a standard library out there that does tag-dispatch for for_each. After all, even this one does it for std::find(iter,iter,value) - I see one of the tag dispatches right below the above for_each source :-).
But how would you implement a recursive tree traversal for a range of iterators, and not just the special case of a complete container? I suppose it's possible but I can't really think of any fast way to do it.

Assuming we're talking about a sorted binary tree, as per what my implementation uses for std::set/map, and assuming our main expense is traversal, we could probably use some recursive processing to keep some nodes on the stack, avoiding the cache misses we could otherwise get heading back up the tree, leaving us just with the ones for actually checking/touching new nodes. Basically we'd get something like (ignoring the template args):

*** Source Snippet Removed ***

Of course, if this runs slower, I wouldn't be surprised. Lots of arguments to each iteration. I haven't profiled it either, which is a must when implementing any optimization, of course :-).
I'll admit that your implementation is pretty neat, and you've obviously spent a bit of time on it. Still, I'm afraid that it's a lot slower than the standard implementation. Additionally, I don't think it ever walks back up the tree.

The full-tree version is, of course, straightforward and fast:

void traverse(node *p) {
    if (p) {
        traverse(p->left);
        process(p);
        traverse(p->right);
    }
}

And, yes, it could easily be rewritten for right recursion only.
Quote:
Original post by MaulingMonkey
Quote:
 So while it's a neat trick for single-line loop bodies it's just not an option for the majority of my loops..

Well, you're entitled to your opinion on that :-). Most of the time my processing is fairly easily convertible to a functor. But that's just me.
Hm.. I suppose it does depend a bit on your coding style.

But considering that most of my loops are at least 5 lines or so, I fail to see how they can be converted in any straightforward manner. Feel like giving a reasonably complicated example of what you can do, preferably one split into "lines" with separate commenting? And how do you handle things like temporary calculations and branching, and what's your (if any) indentation style?
Or maybe I should just check out some tutorials for boost.lambda instead?

I suppose that it may be that everything suddenly starts to look like LISP that annoys me ;)

[Edited by - doynax on October 23, 2005 3:37:41 AM]

##### Share on other sites
Quote:
 Original post by doynaxI suppose that it may be that everything suddenly starts to look like LISP that annoys me ;)

If it looked like Lisp I wouldn't mind. Things that are reasonable to do in Lisp often result in ugly monsters of code when ported to C++...

Lisp is often abused as a functional programming language in university courses (where people haven't noticed that it has been spelled Lisp, not LISP, for well over 20 years). That gives a wrong impression of both functional programming and Lisp.

##### Share on other sites
Quote:
Original post by doynax
Quote:
 Original post by MaulingMonkey
...Of course, if this runs slower, I wouldn't be surprised.

I'll admit that your implementation is pretty neat, and you've obviously spent a bit of time on it. Still, I'm afraid that it's a lot slower than the standard implementation. Additionally, I don't think it ever walks back up the tree.

As I said, I wouldn't be surprised :-). How much slower, might I ask? Also, it does walk back up the tree; see the main while loop in the topmost function:

void for_each( iterator begin , iterator end , functor f ) {
    node_t * current = begin._M_node;
    while ( current ) {
        node_t * next = current->parent; //optimization: cache
        if ( ! for_each_self_and_right_child( current , end._M_node , f ) ) { //if this iterates deep
            return; //we found the end
        }
        current = next; //optimization: no hit to current if the subcall iterated deep
    }
}

Basically, we first visit our start node (the starting value) and its right side (if any - these will be "greater than" that node (e.g. next in line)).

If we didn't hit the end, we go up the path. Unfortunately, I just noticed a bug in the case where the node we came from was the right child - we'd navigate all through it again. The line "current = next" needs to be rewritten:

do {
    current = next;
    next = current->parent;
} while ( next && next->right == current );

Or somesuch. Hopefully I haven't miswritten this one too :S. Again, this optimization was targeted at situations where cache misses would be the norm, so it'll definitely depend on the size of your various processor caches, your data set size, and how much memory is accessed to process each node [counting each access to a page no more than once].

But once you start stretching out there, set iteration is pretty trivial. For the low-rank nodes, their parent nodes will still be in L1 cache unless you're doing a lot of processing - at which point the iteration is really trivial; enough so that I definitely wouldn't spend my time optimizing it if I hadn't just drawn myself into this hypothetical meandering :-p.

Quote:
Original post by MaulingMonkey
Quote:
 So while it's a neat trick for single-line loop bodies it's just not an option for the majority of my loops..

Well, you're entitled to your opinion on that :-). Most of the time my processing is fairly easily convertible to a functor. But that's just me.
Hm.. I suppose it does depend a bit on your coding style.

Exactly, which is exactly why you're entitled to that opinion :-p.

Quote:
 But considering that most of my loops are at least 5 lines or so I fail to see how they can be converted in any straight forward manner. Feel like giving a reasonable complicated example of what you can do, preferable one split into "lines" with separate commenting, and how do you handle things like temporary calculations and branching, what's your (if any) indentation style?

In reverse order:

3) Tabs for section indents, spaces to align with the preceding line - e.g.:

    while ( true ) {        //<--- tabs
        int x = 1,
            y = 2;
//      ^^^^---- spaces
//^^^^^^-------- tabs
    }

2) I'm no B.L. expert - the documentation has some examples... but I usually prefer boost::bind and the lambda placeholders (_1, _2) to adapt existing functions to do what I want. E.g. instead of:

std::set< object > values( .. , .. );
size_t number = 0;
for ( std::set< object >::iterator i = values.begin() ; i != values.end() ; ++i , ++number ) {
    i->print_with_column_layout( number , 10 , 13 , i->default_pie( 1 ) , 447 );
}

I'd do:

//not sure if I got it right, but it's at least close:
using boost::_1;
std::for_each( values.begin() , values.end() ,
    boost::bind( &object::print_with_column_layout , _1 ,
        number_generator< size_t >( 0 ) , 10 , 13 ,
        boost::bind( &object::default_pie , _1 , 1 ) , 447
    )
);

//elsewhere:
template < typename index_t >
class number_generator {
    index_t index;
public:
    number_generator( index_t start = 0 ) : index( start ) {}
    index_t operator()() {
        index_t ret = index;
        ++index;
        return ret;
    }
};

Well, that's assuming I was really going this batshit crazy with it. Really, I'd probably stick with the former, or refactor everything into something more concise from a user perspective - e.g.:

class print_indexed_with_column_layout_f {
    number_generator< size_t > index_gen;
    int c1,c2,c3,c4;
    typedef print_indexed_with_column_layout_f self_type; //most of my classes actually have this typedef - saves on typing often enough :-).
public:
    static const int default_pie = -1;
    print_indexed_with_column_layout_f( int c1 , int c2 , int c3 , int c4 )
        : index_gen( 0 ) , c1( c1 ) , c2( c2 ) , c3( c3 ) , c4( c4 )
    {
    }
    void operator()( const object & o ) {
        int loop_c1 = (c1 != default_pie) ? c1 : o.default_pie( 1 );
        ...
        o.print_with_column_layout( index_gen() , loop_c1 , loop_c2 , loop_c3 , loop_c4 );
    }
};

This also opens up opportunities for more formatting options - different colored columns, what have you. For the other situations there's always:

1) Most of my code consists of one-liners. When loops get involved, those are usually one-liners as well, mainly due to my shifting of complicated stuff into concentrated areas. My earlier code would likely have been refactored like so:

    int column_check( const object & o , int value ) {
        return (value == default_pie) ? o.default_pie( 1 ) : value;
    }
    void operator()( const object & o ) {
        int loop_c1 = column_check( o , c1 );
        int loop_c2 = column_check( o , c2 );
        int loop_c3 = column_check( o , c3 );
        int loop_c4 = column_check( o , c4 );
        o.print_with_column_layout( index_gen() , loop_c1 , loop_c2 , loop_c3 , loop_c4 );
    }

Or similar, assuming I had reasons for not wanting to use an array. Of course, I probably would have from the start if they were really numbered like that, changing the assignments to:

int loop_c[4];
std::transform( c , c + 4 , loop_c , boost::bind( &self_type::column_check , boost::cref( o ) , _1 ) );
//note: the boost::cref(___) prevents boost::bind from making a copy of o, instead causing it to take a reference.
//this is "needed" because boost::bind takes all its arguments in the form of:
//template < typename functor_t , typename arg_1_t , ... , typename arg_N_t >
//bind( functor_t functor , arg_1_t arg_1 , ... , arg_N_t arg_N )

Actually, I lied. It's not necessary, because we take the object by reference anyway. I just like being specific in this manner.

(arg 1 = what gets turned into the this pointer if the functor is a member function pointer.)

##### Share on other sites
Quote:
Original post by Trap
Quote:
 Original post by doynaxI suppose that it may be that everything suddenly starts to look like LISP that annoys me ;)

If it looked like Lisp I wouldn't mind. Things that are reasonable to do in Lisp often result in ugly monsters of code when ported to C++...

Lisp is often abused as a functional programming language in university courses (where people haven't noticed that it has been spelled Lisp, not LISP, for well over 20 years). That gives a wrong impression of both functional programming and Lisp.
Hehe, you're reading way too much into my casual comment.

What I meant was that having to learn something very close to a new language, full of rules, conventions and idioms of its own, just to avoid having to write external functions when working with standard algorithms, doesn't really appeal to me.

They do have a certain hackish charm, however, and I enjoy stretching the boundaries of C++ in similar ways myself.

Quote:
 Original post by MaulingMonkey
As I said, I wouldn't be surprised :-). How much slower, might I ask? Also, it does walk back up the tree; see the main while loop in the topmost function:
Oh, sorry.. I should have read it a bit more carefully.

Well, I don't see how any version of the algorithm would affect the cache behaviour. They're all just walking up and down the tree, accessing the same nodes in the same order really; the only thing that changes is the amount of work involved in jumping from one node to the next. If anything, GCC's standard implementation accesses less memory since it doesn't use recursion, but it really shouldn't matter.

Being able to remove the parent pointers would make a real difference. And why not pack the node's color into the LSB of its left or right child pointer? Surely the default allocator can guarantee proper alignment.
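A hedged sketch of that pointer-packing trick (the `node` struct and function names are invented; it relies on the assumption, true for any default allocator, that nodes are at least 2-byte aligned, leaving the low pointer bit free for the red/black flag):

```cpp
#include <cassert>
#include <cstdint>

struct node { node* left; node* right; };  // alignment >= alignof(node*), so LSB is always 0

// Stash the "red" flag in the low bit of a child pointer.
inline node* tag_red(node* p) {
    return reinterpret_cast<node*>(reinterpret_cast<std::uintptr_t>(p) | 1u);
}

// Test the flag without disturbing the pointer.
inline bool is_red(node* p) {
    return (reinterpret_cast<std::uintptr_t>(p) & 1u) != 0;
}

// Mask the flag off before dereferencing.
inline node* strip_tag(node* p) {
    return reinterpret_cast<node*>(reinterpret_cast<std::uintptr_t>(p)
                                   & ~static_cast<std::uintptr_t>(1));
}
```

Every traversal then pays a mask per child dereference, trading a little ALU work for one pointer-sized field less per node.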

All the extra branching and conditionals will certainly make a difference when everything is cached. And code size might start to become a problem, especially since the functor is expanded twice.
Quote:
 Original post by MaulingMonkey
3) Tabs for section indents, spaces to align with the preceding line - e.g.:

    while ( true ) {        //<--- tabs
        int x = 1,
            y = 2;
//      ^^^^---- spaces
//^^^^^^-------- tabs
    }
Ah, well, I was more interested of how to translate the indentation structure of the loop body when converting it. But this is obviously not an issue for one-liners.

Quote:
 Original post by MaulingMonkey

std::set< object > values( .. , .. );
size_t number = 0;
for ( std::set< object >::iterator i = values.begin() ; i != values.end() ; ++i , ++number ) {
    i->print_with_column_layout( number , 10 , 13 , i->default_pie( 1 ) , 447 );
}

I'd do:

//not sure if I got it right, but it's at least close:
using boost::_1;
std::for_each( values.begin() , values.end() ,
    boost::bind( &object::print_with_column_layout , _1 ,
        number_generator< size_t >( 0 ) , 10 , 13 ,
        boost::bind( &object::default_pie , _1 , 1 ) , 447
    )
);
I just can't help but feel that this example forms an argument for rolling your own loops rather than against it. Do you truly consider the second version easier to read and write?
All the binds and argument indices will obviously get more natural with time, in much the same way that the normal loop-iteration idioms become trivial with time, but in any event there are at least more things there to keep track of.

Quote:
 Original post by MaulingMonkey1) Most of my code consists of one-liners. When loops get involved, those are usually one-liners as well. Mainly due to my shifting of complicated stuff into concentrated areas.
I guess that explains a lot. Might I ask if this is the way you naturally write code, or something you, at least partially, chose to do to better benefit from generic algorithms?

Thanks for the examples.

##### Share on other sites
Quote:
 Original post by doynax
Well, I don't see how any version of the algorithm would affect the cache behaviour. They're all just walking up and down the tree, accessing the same nodes in the same order really; the only thing that changes is the amount of work involved in jumping from one node to the next.

It lies within the statement "current->_____". This can cause a cache miss (loading the structure pointed to by current). After each deep iteration, this structure may have been paged out, assuming each node has little proximity to its peers and children. Basically, from a cache perspective, I simply rearranged from:

node * current = ...;
something_that_may_page_out_what_current_points_to( current->left , ... ); //first use of current->___ - 1 possible cache miss
f( current->_M_value ); //if *current paged out, this is a cache miss (2 possible)
something_that_may_page_out_what_current_points_to( current->right , ... );
current = current->parent; //if *current paged out, this is a cache miss (3 possible)

To:

node * current = ...;
node * next = current->right; //1 possible cache miss (*current)
node * parent = current->parent; //*current must still be paged in
...
something_that_wont_page_out_the_stack_since_thats_locality_based( current->left );
...
something_that_wont_page_out_the_stack_since_thats_locality_based( next );
something_that_wont_page_out_the_stack_since_thats_locality_based( parent );

Of course, current->_M_value will still be needed, so there's a limit to how many cache misses can be avoided (since the functor should be applied after the possibly deep iteration on the left). Still, 2 is better than 3.

Quote:
 If anything GCC's standard implementation accesses less memory since it doesn't use recursion, but it really shouldn't matter.

But it accesses it more frequently. A lot of overhead could be removed with tail-recursion techniques as well.

Quote:
 Being able to remove the parent pointers would make a real difference. And why not pack the node's color into the LSB of it's left or right child pointer? Surely the default allocator can guarantee proper alignment.

Now I've lost what we're talking about. Then again RB trees are far from my specialty.

Quote:
 All the extra branching and conditionals and will certainly make a difference when everything is cached.

As you saw :-).

Quote:
 ...
I just can't help but feel that this example forms an argument for rolling your own loops rather than against it.

I was trying to provide an example of the array of options and functionality, rather than actually suggesting its use in that manner. Now, in the refactoring of the functor I give another example using it, which I more fully endorse - and for that case I do consider it easier to read and write.

As I said before, I'd prefer to go functor-fu on such a problem's ass.

Quote:
 Do you truly consider the second version easier to read and write?
All the binds and argument indices will obviously get more natural with time, in much the same way that the normal loop-iteration idioms become trivial with time, but in any event there are at least more things there to keep track of.

There's a learning curve, and there's a payoff. I accept both with open arms :-).

Quote:
 Original post by MaulingMonkey1) Most of my code consists of one-liners. When loops get involved, those are usually one-liners as well. Mainly due to my shifting of complicated stuff into concentrated areas.
I guess that explains a lot. Might I ask if this is the way you naturally write code, or something you, at least partially, chose to do to better benefit from generic algorithms?

Both :-). I am naturally drawn towards taking advantage of and creating generic problems and solutions, probably overly much so.

It's a tradeoff - although genericness (implementing it) comes at a cost (more time to implement once) it also has an advantage (less time to use repeatedly).

It's similar on the functor side. Instead of having to provide multiple overloads with ever-increasing functionality, causing an exponential increase in maintenance costs (overloads x average complexity of said overloads), there's a single point of implementation (the functor) which can be more easily expanded without requiring entire reimplementations (meaning a linear maintenance increase - the complexity alone - which is something I find hard to beat given a project being added to).
