japro

Why am I only told what not to do


Recommended Posts

If you're working by yourself, write code however you want. There's no reason not to. If it works, it works. If you run into any major design problems, you'll gain a better understanding of the problem. You'll also understand your entire codebase so you will be able to fix it more easily than if someone else wrote it.

On your own projects, you should intentionally experiment with "weird" or "creative" program design. Chances are you'll come up with something that's already been made elsewhere, but more importantly you'll understand a wider variety of software architectures.

On a team, it's more important to figure out and follow the guidelines everyone else on the team is using. If everyone on a team codes in totally different ways, there's inevitably a lot of "WTF is going on here?!" moments. Generally you'll be on a team for any paid projects, and paid projects are always under time constraints (which means if you screw up the design the first time, you may not have a chance to fix it).

I'm assuming you're referring to this article? I also noticed that sentence a few days ago and felt similarly crushed: if one of my most basic assumptions, something I took for granted, turns out to be bad practice, then where am I really? Good to know it was a false alarm. It may just be poorly worded (or ironic?), though I have no idea how it was meant to read.

If you're interested in component/entity systems specifically, I can point you to post #51 by Lord_Evil, buried in this thread. After weeks of looking in vain at various threads and articles, this was the one approach that I personally understood, liked and was able to implement. It might be of some use to you.

Because a lot of bad things are bad everywhere. Very few good things are good everywhere. They're good if you're in this scenario, with these requirements, with those developers... and even then there's trade offs.

Personally, though, I dislike non-trivial constructors. They're problematic in design (nearly impossible to reuse or abstract, hard to refactor because of their many touchpoints, and limited in their options for graceful failure), difficult to test effectively, and in my experience "sane defaults" make it much easier to reason about a program and work with the class. That said, if you're enforcing some invariant (which is what constructors are meant for and good at), then by all means write a non-trivial constructor, even if that involves allocations (imo).
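To make the invariant case concrete, here is a minimal sketch (class name and API invented for illustration): the constructor validates its argument before the member allocation runs, so no instance can ever exist in an invalid state, even though constructing one allocates.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical example of an invariant-enforcing constructor: a buffer
// whose capacity must be positive. The check runs before the allocation,
// so an invalid RingBuffer can never be observed.
class RingBuffer {
public:
    explicit RingBuffer(std::size_t capacity)
        : data_(checked(capacity)) {}  // allocation happens here, after the check

    std::size_t capacity() const { return data_.size(); }

private:
    static std::size_t checked(std::size_t c) {
        if (c == 0)
            throw std::invalid_argument("capacity must be > 0");
        return c;
    }
    std::vector<int> data_;  // the "non-trivial" part: owns heap memory
};
```

Callers either get a fully usable object or an exception; there is no half-constructed state to defend against.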


And you're likely not a horrible programmer. If you learn from your coding enough to look at others' opinions with a critical eye, that's pretty much the definition of a good programmer.


When did allocating memory in a constructor become a bad idea?


Welcome to engineering.

Over the course of years and many projects, correlations were observed between certain practices and whether projects succeeded or not. There are no hard rules or laws, but some practices have been found to work better than others.

One observation was that loosely coupled objects result in code that is easier to modify and test. Projects with these properties allow easier experimentation and shorter turnaround times, delivering more end-user value (which is what really matters).

When it comes to resource allocation, there are two sides to the story that make it undesirable. One is the issue of testing, covered by several Google Tech Talks. While those favor a certain development style, the lessons are fairly universal, even if presented in the context of Java.
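The testing argument can be sketched like this (all names invented for illustration): a class that allocates its own resource in the constructor is hard to fake, while one that receives the resource through an abstract interface is trivial to test with an in-memory double.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: instead of a Logger opening its own file in the
// constructor, it receives an abstract Sink. Tests substitute a fake.
struct Sink {
    virtual ~Sink() = default;
    virtual void write(const std::string& line) = 0;
};

// Test double: records output in memory, no real I/O involved.
struct MemorySink : Sink {
    std::string contents;
    void write(const std::string& line) override { contents += line; }
};

class Logger {
public:
    explicit Logger(Sink& sink) : sink_(sink) {}   // dependency injected, not allocated
    void log(const std::string& msg) { sink_.write(msg + "\n"); }

private:
    Sink& sink_;
};
```

A unit test constructs a `MemorySink`, passes it in, and inspects `contents`; no file system, no setup or teardown of real resources.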

The second issue is the non-determinism of resource allocation. "But it's fast enough" doesn't cut it. Resource allocation is almost without exception non-deterministic (it may take an arbitrary amount of time) and has nothing to do with the rest of the code: a renderer obviously needs to load everything before it's useful, and an algorithm obviously needs its entire input before it can do useful work.

The effects of non-determinism can be observed in testing: as the project grows, loading times increase, even though the algorithms don't change.

The latter issues are especially important when dealing with real-time systems. An ideal real-time system confines all resource allocation to a decoupled initialization phase, while the operational state cannot allocate anything (a file, memory, even a socket). Memory is frequently considered "fast enough", but in a heavily object-oriented system it often becomes a bottleneck (think 30-second Tomcat restart cycles).

Finally, from a design perspective, putting all allocation in the constructor makes the class artificially oblivious to the rest of the design. Assume a Window that always creates its Widgets. Whenever you create a new Window, it ignores the programmer's knowledge that suitable widgets already exist and recreates them. Since most OO systems favor resource-oblivious encapsulation (ignoring external knowledge), they end up encapsulating more than they should. A new resource typically implies a "new identity", even though many resources are shared and "equal". The JVM exploits this fact by reusing string instances (.intern()), and C++ compilers offer string pooling and constant resources, which may technically violate the expectation that each allocation is unique.
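A minimal sketch of the Window/Widget point (types invented for illustration): when the caller owns the widgets and hands them to each window, nothing is recreated, and two windows can share the same widget instance.

```cpp
#include <cassert>
#include <memory>
#include <utility>
#include <vector>

// Hypothetical Window/Widget sketch: rather than constructing its own
// Widgets (resource-oblivious), a Window accepts widgets the caller
// already owns, so identical widgets can be shared between windows.
struct Widget {
    int id = 0;
};

class Window {
public:
    explicit Window(std::vector<std::shared_ptr<Widget>> widgets)
        : widgets_(std::move(widgets)) {}

    std::size_t widget_count() const { return widgets_.size(); }

private:
    std::vector<std::shared_ptr<Widget>> widgets_;
};
```

Creating two windows over the same widget raises its reference count instead of duplicating the resource, which is exactly the sharing a constructor-allocates-everything design forbids.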

Resource allocation of any kind should be considered strictly orthogonal to the algorithms that operate on the data. It's fairly common to use resource loaders, caches, or several different allocators in the same application, which reinforces this observation. While rarely stated explicitly, many projects independently arrive at the need for decoupling the two.

As a C++ detail, operator new tends to be overused by programmers coming from Java or C#, which results in unneeded verbosity.

Most of these issues can be solved by parametrizing the "allocator", either at compile time or at run time. STL containers take an allocator parameter; a constructor might take a Factory or Allocator interface, which can act as a cache; the C idiom is to pass an allocator function or a pointer to a memory block (see zlib); dynamic languages may swap in a different backend entirely (SQLite instead of MySQL).
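The compile-time variant can be sketched with the STL's allocator parameter. The counting allocator below is invented for illustration (it just tallies calls, requires C++17 for the inline static member); the point is that the *caller* decides the allocation policy, not the container.

```cpp
#include <cstddef>
#include <new>
#include <vector>

// Hypothetical counting allocator: satisfies the minimal Allocator
// requirements and records how many allocations the container made.
template <typename T>
struct CountingAllocator {
    using value_type = T;
    static inline std::size_t allocations = 0;

    CountingAllocator() = default;
    template <typename U>
    CountingAllocator(const CountingAllocator<U>&) {}

    T* allocate(std::size_t n) {
        ++allocations;  // policy hook: could just as well pool or cache
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <typename T, typename U>
bool operator==(const CountingAllocator<T>&, const CountingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const CountingAllocator<T>&, const CountingAllocator<U>&) { return false; }
```

Usage is just `std::vector<int, CountingAllocator<int>> v;` — the vector's code is untouched, yet every allocation now flows through the injected policy.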

Some "best practices" come across as FUD, or as something consultants like to repeat to justify their existence. But as with everything, there are both exceptions and grains of truth.

Generalizations don't help much and, truth be told, most projects simply aren't big or important enough for such details to matter. But it helps to be aware of these issues, since they can make or break a particular part of development.

Thanks for the replies.

So what I take away from this at the moment is that these kinds of dogmatic statements usually target an underlying problem and try to avoid it by forbidding all of its possible causes up front.
I can absolutely see why one would do this, but I find it very confusing when no explanation is available. Also, without knowledge of the underlying problem, the "bad programmer" you are trying to keep from doing something stupid may work around your rule in an even more stupid way. I once saw an example where someone apparently knew that he shouldn't use goto, so instead he emulated the behavior with huge "do { ... } while(false)" loops and break statements. I'd also suspect that someone who is only told not to allocate in a constructor will just move the allocation to some even more "dangerous" spot.
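The workaround described above looks roughly like this (function name and logic invented for illustration): `break` jumps to the end of the fake loop exactly like a goto to a cleanup label would, except the intent is now hidden.

```cpp
// Sketch of goto emulated via do { ... } while (false): each break is
// effectively a "goto end" in disguise.
int parse_positive(int raw) {
    int result = -1;             // error sentinel
    do {
        if (raw < 0) break;      // "goto fail" without the keyword
        if (raw == 0) break;
        result = raw;            // happy path
    } while (false);             // loop body runs exactly once
    return result;
}
```

The control flow is identical to the forbidden goto version, which is precisely the poster's point: banning the keyword didn't ban the pattern.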

So: Please don't propagate a dogma without explanation. :)


So: Please don't propagate a dogma without explanation. :)


That about sums it up.

Continuing on that, consider that the environment of corporate software is radically different from those of hobby development and academic development. They are not even superficially similar.




In an academic environment you are generally taught multiple ways of doing things (e.g. implement 15 different sorting algorithms), or else taught rules for doing a thing but only a single way to do it (e.g. this is a linked list, this is the theory behind it, now implement it a single time). Your job in the academic environment is to learn enough that you can become fluent and hopefully handle an entry-level job in the field.

In a corporate environment you generally have peer code reviews where people can tell you what they dislike and how to fix it. If you can't do it their way, you can figure out whether that matters and, if so, have them implement it instead and teach the entire team the alternate form.

In an online environment like these forums, many people ask "should I do such-and-such?" or "why is such-and-such bad?", which results in very long debates because the answers are context sensitive.




There are many things that are absolutely forbidden at work that are standard practice on hobby projects and face random levels of scrutiny in academia.

Hard-coded objects and values are the most egregious offense in that bucket. In college I had professors who were strict about it, others who recommended against it and docked a few points, others who recommended against it but didn't grade on it, and still others who didn't care. In a hobby project where you are alone, it can make sense because you run your compiler every time you modify the data anyway; it is faster and easier for one individual to keep it all in one place. But in a corporate environment, with multiple designers, producers, level editors, object script writers, and many more people involved, such a practice would effectively block multiple teams from doing their jobs.




For a personal project do whatever you need to make the product. It doesn't matter if it is ugly, or spaghetti code covered with goto statements, or every image and model is hard-coded, or strings are composed in a way that makes translation nearly impossible. What matters in that case is just finishing the game.


For what it's worth, the idea that you shouldn't allocate memory in a constructor is ludicrous.


As evidence of how implausible it is that this is good advice, many classes in the standard library (std::string, std::vector, ...) provide constructors that allocate memory.

So here's a positive-style piece of advice: Write classes that behave in ways similar to the classes provided as part of the standard library. And here's an explanation to go with it: If you follow my advice, any developer that is familiar with the language will feel comfortable using your classes.
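In that spirit, here is a minimal RAII buffer (name and API invented for illustration) that behaves the way std::string or std::vector do: it allocates in its constructor, frees in its destructor, and is therefore unsurprising to anyone who knows the standard containers. Copying is deleted only to keep the sketch short; a real class would define copy/move per the rule of five.

```cpp
#include <cstddef>

// Minimal RAII buffer in the spirit of the standard containers:
// acquire in the constructor, release in the destructor.
class Buffer {
public:
    explicit Buffer(std::size_t n) : size_(n), data_(new int[n]()) {}  // allocates, like std::vector
    ~Buffer() { delete[] data_; }

    Buffer(const Buffer&) = delete;             // omitted to keep the sketch small
    Buffer& operator=(const Buffer&) = delete;  // (rule of five applies in real code)

    std::size_t size() const { return size_; }
    int& operator[](std::size_t i) { return data_[i]; }

private:
    std::size_t size_;
    int* data_;
};
```

A user writes `Buffer b(4);` and never thinks about the allocation, exactly as with `std::vector<int> v(4);` — which is the familiarity the advice is after.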
