

Why am I only told what not to do


14 replies to this topic

#1 japro   Members   -  Reputation: 887

9 Likes

Posted 05 August 2011 - 04:54 PM

Hi,

So maybe this is some sort of selective perception thing, but I'm getting somewhat annoyed that software design articles/discussions and such seem to be mostly about what not to do. Were I to follow all the "advice" about bad practice I wouldn't write code at all.
So I was reading the "Demise of the game entity" thread and while I agree with the problems described there and have encountered them myself, no one is actually suggesting concrete solutions. Then I went on to read the articles the thread references, and right at the beginning I find this:

I have encountered some architectures that have banned the use of Constructors and Destructors in game entities! Usually because of a misunderstanding of how C++ works and usually backed up by the fact that some engineers just can’t stop themselves from doing memory allocations and other nonsense in the constructor.

When did allocating memory in a constructor become a bad idea?

I always feel stupid when I encounter this kind of statement, since the apparently competent authors are dropping these things like it was completely obvious that this is nonsense, but it isn't to me. Does that mean I'm a horrible programmer? And again, I'm told what not to do, but not what I should do...

(sorry, if this is kinda rantish)


#2 Nypyren   Crossbones+   -  Reputation: 4189

1 Like

Posted 05 August 2011 - 05:59 PM

If you're working by yourself, write code however you want. There's no reason not to. If it works, it works. If you run into any major design problems, you'll gain a better understanding of the problem. You'll also understand your entire codebase so you will be able to fix it more easily than if someone else wrote it.

On your own projects, you should intentionally experiment with "weird" or "creative" program design. Chances are you'll come up with something that's already been made elsewhere, but more importantly you'll understand a wider variety of software architectures.

On a team, it's more important to figure out and follow the guidelines everyone else on the team is using. If everyone on a team codes in totally different ways, there's inevitably a lot of "WTF is going on here?!" moments. Generally you'll be on a team for any paid projects, and paid projects are always under time constraints (which means if you screw up the design the first time, you may not have a chance to fix it).

#3 ApochPiQ   Moderators   -  Reputation: 15090

6 Likes

Posted 05 August 2011 - 06:34 PM

For what it's worth, the idea that you shouldn't allocate memory in a constructor is ludicrous.

#4 Sappharos   Members   -  Reputation: 140

0 Likes

Posted 05 August 2011 - 07:06 PM

I'm assuming you're referring to this article? I also noticed that sentence a few days ago and felt similarly crushed - if one of my most basic assumptions, something I took for granted turns out to be bad practice then where am I really? Good to know it was a false alarm. It may just be poorly worded (or ironic?), though I have no clue how it was meant to read.

If you're interested in component/entity systems specifically, I can point you to post #51 by Lord_Evil, buried in this thread. After weeks of looking in vain at various threads and articles, this was the one approach that I personally understood, liked and was able to implement. It might be of some use to you.

#5 SiCrane   Moderators   -  Reputation: 9565

5 Likes

Posted 05 August 2011 - 07:17 PM

For what it's worth, the idea that you shouldn't allocate memory in a constructor is ludicrous.

Depending on the environment, anyway. For C++, if you're working on a platform with extremely poor exception support, which has historically included many of the console platforms (or really any platform ten or so years ago), then banning exceptions becomes a legitimate tactic. Without exceptions there's no way for constructors to report failure, so constructors are limited to no-fail operations, and the real heavy lifting in initialization, like memory allocation, is handed off to initialization functions that can report errors. For single-platform code bases with good exception handling support, or multi-platform code bases where all the platforms share that property, this practice is much less defensible.
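
A minimal sketch of that pattern (the class name and members are made up for illustration): the constructor does only no-fail work, and a separate init() reports success or failure without exceptions.

#include <cstddef>
#include <new>

class Mesh
{
   float* vertices;
   std::size_t count;
public:
   // The constructor does only work that cannot fail.
   Mesh() : vertices(0), count(0) {}

   // The heavy lifting lives in an init function that can report failure
   // without using exceptions.
   bool init(std::size_t vertexCount)
   {
      vertices = new (std::nothrow) float[vertexCount * 3];
      if (!vertices)
         return false;   // the caller decides how to handle the failure
      count = vertexCount;
      return true;
   }

   ~Mesh() { delete[] vertices; }
};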

One major problem is when legitimate older advice gets parroted without knowledge or re-evaluation of the original reasons. Many conclusions about C++ features were made based on older, inferior compilers and computers, such as the applicability of exceptions or standard library classes. For another example, look-up tables: it used to be that clock cycles were much more expensive relative to memory access. Things have since shifted in the other direction.

#6 Telastyn   Crossbones+   -  Reputation: 3726

0 Likes

Posted 05 August 2011 - 08:01 PM

Because a lot of bad things are bad everywhere. Very few good things are good everywhere. They're good if you're in this scenario, with these requirements, with those developers... and even then there are trade-offs.

Personally though, I dislike non-trivial constructors. They're problematic in design (near impossible to reuse/abstract, they have many touchpoints so they're hard to refactor, and they have limited options for graceful failure), difficult to test effectively, and "sane defaults" (in my experience) let you reason about a program and work with the class much more easily. If you're enforcing some invariant (which is what constructors are meant for and good at), then by all means make a non-trivial constructor, even if that involves allocations (imo).
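
For example (a made-up class, purely to illustrate the invariant case):

#include <stdexcept>
#include <string>

// The constructor's whole job is to enforce an invariant, so doing real
// work (and possibly throwing) in it is exactly what we want here.
class NonEmptyName
{
   std::string value;
public:
   explicit NonEmptyName(const std::string& s) : value(s)
   {
      if (value.empty())
         throw std::invalid_argument("name must not be empty");
   }
   const std::string& str() const { return value; }
};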


And you're likely not a horrible programmer. If you learn from your coding enough to look at others' opinions with a critical eye, that's pretty much the definition of a good programmer.

#7 Antheus   Members   -  Reputation: 2397

3 Likes

Posted 05 August 2011 - 11:56 PM

When did allocating memory in a constructor become a bad idea?


Welcome to engineering.

Over the course of many years and many projects, correlations were established between practices and how well projects turned out. There are no hard rules or laws, but certain practices have been found to be better than others.

One observation was that loosely coupled objects result in code that is easier to modify and test. Projects with such properties allow easier experimentation and shorter turnaround times, providing more end-user value (aka what really matters).

When it comes to resource allocation, there are two sides to the story that make it undesirable. One is the issue of testing, covered by several Google Talks. While they favor a certain development style, the lessons are fairly universal, even if presented in the scope of Java.

The second issue is the non-determinism of resource allocation. "But it's fast enough" doesn't cut it. Resource allocation is almost without exception non-deterministic (it may take an arbitrary amount of time) and has nothing to do with the rest of the code - a renderer obviously needs to load everything before it's useful, an algorithm obviously needs all of its data before it can do useful work.

The effects of non-determinism can be observed when testing. As a project grows, loading times increase even though the algorithms don't change.

These issues are especially important when dealing with real-time systems. An ideal real-time system has resource allocation completely decoupled into an initialization phase, while the operational state cannot allocate anything (be it a file, memory, or even a socket). Memory is frequently considered "fast enough", but in a well-designed OO system it often becomes a bottleneck (30-second Tomcat restart cycles).

Finally, from a design perspective, putting all allocation in the constructor makes a class artificially oblivious to the rest of the design. Assume a Window which always creates its Widgets. Whenever you create a new window, it ignores the programmer's knowledge that other widgets already exist and recreates them. Since most OO systems favor resource-oblivious encapsulation (ignoring external knowledge), they end up encapsulating more than they should. A new resource typically implies a "new identity", even though many resources are shared and "equal". The JVM exploits this fact by reusing string instances (.intern()), and C++ compilers offer string pooling and constant resources, which may technically violate the expected behavior of each allocation being unique.

Resource allocation of any kind should be considered strictly orthogonal to the algorithms that operate on the data. It's fairly common to use resource loaders or caches, as well as different allocators, in the same application, which reinforces this observation. While rarely stated explicitly, many projects independently arrive at the need to decouple the two.

As a C++ detail, operator new tends to be overused by programmers coming from Java or C# which results in unneeded verbosity.

Most of these issues can be solved by parametrizing the "allocator", either at compile time or at run time. The STL has an allocator parameter, a constructor might take a Factory or Allocator interface which can act as a cache, the C idiom is to pass an allocator function or a pointer to a memory block (see zlib), and dynamic languages may simply swap in a different implementation (sqlite instead of MySQL).
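
A minimal sketch of the compile-time flavour, along the lines of the STL allocator parameter (the class and names are made up):

#include <cstddef>
#include <memory>
#include <vector>

// Any allocator with the std::allocator interface can be swapped in,
// so the container logic never cares where memory comes from.
template <typename T, typename Alloc = std::allocator<T> >
class Buffer
{
   std::vector<T, Alloc> storage;
public:
   explicit Buffer(std::size_t n, const Alloc& a = Alloc())
      : storage(n, T(), a) {}
   T& operator[](std::size_t i) { return storage[i]; }
   std::size_t size() const { return storage.size(); }
};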

Some of "best practices" come through as FUD or just something consultants like to repeat to justify their existence. But as with everything, there are both exceptions as well as grains of truth.

Generalizations do not help much and truth be told, most projects simply aren't big or important enough for such details to matter. But it helps to be aware of such issues since they can make or break a certain part of development.

#8 japro   Members   -  Reputation: 887

3 Likes

Posted 06 August 2011 - 08:14 AM

Thanks for the replies.

So what I take away from this at the moment is that these kinds of dogmatic statements usually target an underlying problem and try to avoid it by forbidding all the possible causes beforehand.
I can absolutely see why one would do this, but I find it very confusing when there is no explanation available. Also, without knowledge of the underlying problem, the "bad programmer" you are trying to keep from doing something stupid may work around your rule in an even more stupid way. I once saw an example where someone apparently knew that he shouldn't use goto, so instead he emulated the behavior with huge "do{ ... }while(false)" loops and break statements. I'd also suspect that someone who is only told not to allocate in a constructor will just move the allocation to some even more "dangerous" spot.

So: Please don't propagate a dogma without explanation. :)

#9 frob   Moderators   -  Reputation: 20331

3 Likes

Posted 06 August 2011 - 11:35 PM

So: Please don't propagate a dogma without explanation. :)


That about sums it up.

Continuing on that, consider that the environment of corporate software is radically different from those of hobby development and academic development. They are not even superficially similar.




In an academic environment you are generally taught multiple ways of doing things (e.g. implement 15 different sorting algorithms), or else taught rules for doing a thing but only a single way to do it (e.g. this is a linked list, this is the theory behind it, now implement it a single time). Your job in the academic environment is to learn enough that you can become fluent and hopefully handle an entry-level job in the field.

In a corporate environment you generally have peer code reviews where people can tell you what they dislike and how to fix it. If you can't do it their way you can figure out if that is important, and if so, have them implement it instead and teach the entire team the alternate forms.

In an online environment like these forums many people start saying "should I do such-and-such", or "why is such-and-such bad", which results in a very long debate where the answers are context sensitive.




There are many things that are absolutely forbidden at work that are standard practice on hobby projects and face random levels of scrutiny in academia.

Hard-coded objects and values are the most egregious offense in that bucket. In college I had professors who were strict about it, others who recommended against it and docked a few points, others who recommended against it but didn't grade on it, and still others who didn't care. In a hobby project where you are alone, it makes sense because you run your compiler every time you modify the stuff, and it is faster and easier for one individual to work with it all in one place. But in a corporate environment with multiple designers, producers, level editors, object script writers and many more people involved, such an action would effectively block multiple teams from being able to do their jobs.
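
A tiny sketch of the difference (the file name and one-name-per-line format are made up for illustration):

#include <fstream>
#include <string>
#include <vector>

// Hard-coded: only a programmer (and a full rebuild) can change the list.
//    const char* models[] = { "crate.obj", "barrel.obj" };
//
// Data-driven: designers and level editors change a text file instead.
std::vector<std::string> loadModelList(const std::string& path)
{
   std::vector<std::string> models;
   std::ifstream in(path.c_str());
   std::string line;
   while (std::getline(in, line))
      if (!line.empty())
         models.push_back(line);
   return models;
}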




For a personal project do whatever you need to make the product. It doesn't matter if it is ugly, or spaghetti code covered with goto statements, or every image and model is hard-coded, or strings are composed in a way that makes translation nearly impossible. What matters in that case is just finishing the game.
Check out my personal indie blog at bryanwagstaff.com.

#10 Álvaro   Crossbones+   -  Reputation: 12936

0 Likes

Posted 08 August 2011 - 09:51 AM

For what it's worth, the idea that you shouldn't allocate memory in a constructor is ludicrous.


As evidence of how implausible it is that this is good advice, many objects that are part of the standard library (std::string, std::vector...) do provide constructors that allocate memory.

So here's a positive-style piece of advice: Write classes that behave in ways similar to the classes provided as part of the standard library. And here's an explanation to go with it: If you follow my advice, any developer that is familiar with the language will feel comfortable using your classes.
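
For instance (a deliberately small, made-up class in that spirit):

#include <algorithm>
#include <cstddef>

// Allocate in the constructor, release in the destructor, and define
// copying, just as std::vector does.
class IntArray
{
   int* data;
   std::size_t count;
public:
   explicit IntArray(std::size_t n) : data(new int[n]()), count(n) {}
   ~IntArray() { delete[] data; }

   IntArray(const IntArray& other) : data(new int[other.count]), count(other.count)
   {
      std::copy(other.data, other.data + count, data);
   }
   IntArray& operator=(IntArray other)        // copy-and-swap
   {
      std::swap(data, other.data);
      std::swap(count, other.count);
      return *this;
   }

   int& operator[](std::size_t i) { return data[i]; }
   std::size_t size() const { return count; }
};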

#11 Antheus   Members   -  Reputation: 2397

1 Like

Posted 09 August 2011 - 11:06 AM

As evidence of how implausible it is that this is good advice, many objects that are part of the standard library (std::string, std::vector...) do provide constructors that allocate memory.


The standard library treats memory allocation as orthogonal by using the allocator concept. The entire library handles resource allocations via an external interface, which falls in line with the original advice. While operator new can be overloaded, the default versions do not pass in the context in which they are called, so they suffer from the typical global-namespace issues.

When designing APIs, the biggest mistake is to internalize resource allocation. In C, an allocator function should be provided; in C++, an allocator; in Java or C#, a factory interface. Dynamic languages do not suffer from this as much, since any call can be mixed in or modified at any point to insert cross-cutting functionality.
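
A C++ sketch of handing that decision to the caller at run time (the interface and class names are made up):

#include <cstddef>

// The caller decides where buffers come from (heap, pool, arena, ...);
// the code that uses them does not care.
struct BufferFactory
{
   virtual void* acquire(std::size_t bytes) = 0;
   virtual void release(void* p) = 0;
   virtual ~BufferFactory() {}
};

class Decoder
{
   BufferFactory& factory;
   void* scratch;
public:
   Decoder(BufferFactory& f, std::size_t scratchSize)
      : factory(f), scratch(f.acquire(scratchSize)) {}
   ~Decoder() { factory.release(scratch); }
};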


The lesson here is rarely emphasized enough: resource life cycle is an absurdly complicated problem. Baking it into some library or algorithm creates such a rigid design that it may be rendered useless for further development, whether through reuse or through feature changes. Resources here can range from memory to files, and even to the number of rows in a database, the number of URLs supported, or the lines in a file.

FactoryStrategyFactorySingletonProvider is something of a joke in the Java world. Admittedly, Java is a clumsy and verbose language, but the problem is real. DI/IoC is in many ways the worst of all worlds, but it emerged as the only methodology that scales (in terms of development). The key lesson is to decouple resource creation/allocation from the business logic/rule engine. It really is that important as projects grow.

And as said, dynamic languages do not suffer from this problem as much, even though on the surface they appear to allocate objects left and right. In most languages the problem isn't important enough for memory allocation to be a focus; C and C++ tend to be used precisely when it is, so memory should be treated as one of the resources that needs to be carefully managed.

#12 Shannon Barber   Moderators   -  Reputation: 1362

2 Likes

Posted 25 August 2011 - 09:22 PM

Hi,

So maybe this is some sort of selective perception thing, but I'm getting somewhat annoyed that software design articles/discussions and such seem to be mostly about what not to do. Were I to follow all the "advice" about bad practice I wouldn't write code at all.
So I was reading the "Demise of the game entity" thread and while I agree with the problems described there and have encountered them myself, no one is actually suggesting concrete solutions. Then I went on to read the articles the thread references, and right at the beginning I find this:

I have encountered some architectures that have banned the use of Constructors and Destructors in game entities! Usually because of a misunderstanding of how C++ works and usually backed up by the fact that some engineers just can’t stop themselves from doing memory allocations and other nonsense in the constructor.

When did allocating memory in a constructor become a bad idea?

I always feel stupid when I encounter this kind of statement, since the apparently competent authors are dropping these things like it was completely obvious that this is nonsense, but it isn't to me. Does that mean I'm a horrible programmer? And again, I'm told what not to do, but not what I should do...

(sorry, if this is kinda rantish)


You hit upon two pervasive issues of confusion.
The first is that constructive criticism is hard work - I would hope that most articles say don't do it this way or that way and then present an acceptable, ideal, or optimized method.
e.g. It's easy to say some aspect of a design is undesirable. It is quite difficult to provide a universally better approach. Adding complexity to address rare conditions or marginal concerns is not a good idea.

The more specific issue about memory allocation in constructors is mostly about determinism (which is a lacking quality in software today).
The difficulty of testing it is rooted in that lack of determinism.

Consider this contrived example,
class Exploder
{
   int* all_good;
   int* death_to_all;
public:
   Exploder()
   {
      all_good = new int[1000];             // succeeds
      death_to_all = new int[1ULL << 60];   // absurdly large: throws std::bad_alloc, leaking all_good
   }
};

If the ctor throws... the dtor is not invoked.
I think the only way to handle this is zero all members prior to performing any allocations (inject determinism, we now know they are all zero prior to the possibility of an exception thrown), then catch any exception thrown, perform clean-up on non-null members, then rethrow the exception. (This is a pain-in-the-ass so no one does this.)
If you start new'ing in the ctor you may become tempted to call other functions in the ctor. Perhaps even a virtual function in the ctor which will not invoke the current class's implementation but, rather, the base-class implementation because 'this' class does not exist until the ctor successfully completes (so the vtable cannot be trusted). In C++ I believe this is technically undefined behavior, there might not be a base-class implementation. I would assume in Java and certainly in C#/.Net there are additional keywords that clarify and eliminate this issue (you get a compiler error instead of crashing at run-time.)
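
A sketch of that zero-first, catch-and-rethrow approach applied to the example above (and it is exactly the kind of verbosity being complained about):

class Exploder
{
   int* all_good;
   int* death_to_all;
public:
   Exploder()
      : all_good(0), death_to_all(0)        // zero everything first: known state
   {
      try
      {
         all_good = new int[1000];
         death_to_all = new int[1ULL << 60];   // expected to throw std::bad_alloc
      }
      catch (...)
      {
         delete[] all_good;       // clean up whatever was actually allocated
         delete[] death_to_all;   // delete[] on a null pointer is a no-op
         throw;                   // rethrow so the caller still sees the failure
      }
   }
   ~Exploder()
   {
      delete[] all_good;
      delete[] death_to_all;
   }
};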

The more general problem this touches on is one I call "granularity" (it also touches "Conceptual Integrity" but everything touches CI).

Our Conceptual Integrity goal is to make our entire program adhere to RAII without requiring the programmer to resort to extraneous tactics such as a two-stage atomic object initialization consisting of a zeroing phase followed by an allocation phase, combined with lots of catch/rethrow blocks to clean everything up.

To have good CI it has to be straight-forward.
What we need is a consistent rule that we can follow that will produce correct code.

In C++ there is a concept known as RAII (resource-acquisition-is-initialization).
An example simple rule is you are not allowed to use 'new' in a constructor unless you are creating a primitive object that contains one, and only one, dynamic data member.
Primitive objects must be RAII compliant.
All other objects are now composed of primitive objects which automatically makes them RAII compliant.
e.g. replace new int[] with a vector.resize().
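
Applied to the earlier example, that rule removes the clean-up problem entirely (a minimal sketch):

#include <vector>

class Exploder
{
   std::vector<int> all_good;
   std::vector<int> death_to_all;
public:
   Exploder()
   {
      all_good.resize(1000);
      death_to_all.resize(100000000);
      // If the second resize throws, all_good is destroyed automatically:
      // each member is a primitive RAII object, so no clean-up code is needed.
   }
};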

Another simple rule is object-model classes (classes part of your big-picture design) are not allowed to use RAII.
Only low-level utility classes are allowed to use RAII.
(Why I call it a granularity issue.)

Or even, Only use RAII if there is no other choice.
For tasks such as safely acquiring and holding a mutex (even in the presence of exceptions), this is the "only" choice.
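
For example, using std::lock_guard (boost::mutex::scoped_lock is the same idiom on pre-C++11 compilers):

#include <mutex>

std::mutex m;
int shared_counter = 0;

void increment()
{
   std::lock_guard<std::mutex> lock(m);   // mutex acquired here...
   ++shared_counter;
   // ...and released when 'lock' goes out of scope,
   // even if the code in between throws an exception.
}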
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara

#13 NightCreature83   Crossbones+   -  Reputation: 2745

0 Likes

Posted 30 August 2011 - 10:21 AM

If the ctor throws... the dtor is not invoked.
I think the only way to handle this is zero all members prior to performing any allocations (inject determinism, we now know they are all zero prior to the possibility of an exception thrown), then catch any exception thrown, perform clean-up on non-null members, then rethrow the exception. (This is a pain-in-the-ass so no one does this.)
If you start new'ing in the ctor you may become tempted to call other functions in the ctor. Perhaps even a virtual function in the ctor which will not invoke the current class's implementation but, rather, the base-class implementation because 'this' class does not exist until the ctor successfully completes (so the vtable cannot be trusted). In C++ I believe this is technically undefined behavior, there might not be a base-class implementation. I would assume in Java and certainly in C#/.Net there are additional keywords that clarify and eliminate this issue (you get a compiler error instead of crashing at run-time.)


You got the C# part wrong: there is no keyword for this in C#, and it does allow you to call virtual functions from the constructor. C# constructs from the derived type downwards, so the vtable exists as soon as you hit the constructor body.

And to extend your point about the destructor not being invoked when the constructor throws: this holds for any failing constructor, whether it throws or not; if construction fails, the destructor is never called. You have just created a zombie object that keeps taking up memory and won't ever be released until program termination. One of the easiest ways to produce this is with allocations in the constructor, and that's the reason why people insist on banning allocations in constructors.
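
For comparison, a minimal C++ sketch of the behaviour being contrasted here: inside the base constructor, the call dispatches to the base version, not to the derived override.

#include <iostream>

struct Base
{
   Base() { init(); }                // in C++ this calls Base::init, not an override
   virtual void init() { std::cout << "Base::init\n"; }
   virtual ~Base() {}
};

struct Derived : Base
{
   virtual void init() { std::cout << "Derived::init\n"; }   // never reached from Base's ctor
};

int main()
{
   Derived d;   // prints "Base::init": the Derived part does not exist yet
}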


Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, Mad Max

#14 Lewis_1986   Members   -  Reputation: 102

-1 Likes

Posted 31 August 2011 - 11:54 PM

So basically the article suggests using a bool sometype::init() instead of the constructor for allocating resources, as init can fail gracefully while constructors will just create massive mem leaks. I actually agree 100% with the article on this issue and would like to suggest http://www.scs.stanford.edu/~dm/home/papers/c++-new.html as a good read on the subject of constructors

#15 Codarki   Members   -  Reputation: 462

0 Likes

Posted 02 September 2011 - 06:24 AM

So basically the article suggests using a bool sometype::init() instead of the constructor for allocating resources, as init can fail gracefully while constructors will just create massive mem leaks. I actually agree 100% with the article on this issue and would like to suggest http://www.scs.stanf...rs/c++-new.html as a good read on the subject of constructors

I think multi-step construction has too many disadvantages. If you really need complex building, maybe the builder pattern could do the job: pass the builder object around until the real object can be safely constructed.
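
A rough sketch of what that can look like (all names are made up):

#include <string>
#include <vector>

class Level;   // the real object; constructed only when everything is ready

// The builder collects and validates the pieces, can be passed around
// freely, and produces the real object in one step.
class LevelBuilder
{
   std::string name;
   std::vector<std::string> models;
public:
   LevelBuilder& setName(const std::string& n) { name = n; return *this; }
   LevelBuilder& addModel(const std::string& m) { models.push_back(m); return *this; }
   bool complete() const { return !name.empty() && !models.empty(); }
   Level build() const;   // only called once complete() is true
};

class Level
{
   std::string name;
   std::vector<std::string> models;
public:
   Level(const std::string& n, const std::vector<std::string>& m)
      : name(n), models(m) {}
};

inline Level LevelBuilder::build() const { return Level(name, models); }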

That rant you linked is wrong on so many levels that I'd "unlike" your post if I could. It refers to gcc 2.8.1, so I guess it was written around 1998.



