Good/Bad coding practices

Started by
12 comments, last by Kylotan 17 years, 8 months ago
I have dabbled in a little programming over the years, but have never really put the time into building a solid working foundation. Now I have decided to go at it full-heartedly and not set it down until I can functionally code small games and apps. I decided to start with C++, since it is so versatile and popular, which lends it many online resources (both the people on this forum, and articles/tutorials/etc.).

The major question I have is this: what specifically are people really talking about when they mention good/bad coding practices? I so often see people on these forums, in books, and everywhere saying "Don't do or use <blank>, because it enforces bad coding habits you will have to try very hard to break yourself of." I would really like a good idea of what these practices/habits are, so that I don't one day have to go back and adjust what will by then be second nature, just because it is fundamentally wrong or troublesome! I know the basics, like always commenting your code clearly, but I would think it runs a lot deeper than that.

If anyone would like to shed some light by adding tips, personal policies, thoughts, theories, etc., then maybe this thread could become a good resource for other beginners as well! Thank you everyone!
There are many 'religions' about how to do what and why. It isn't only about commenting, although that does play a part - and too much commenting is not good either.

Other practices are mostly about spacing.

For ex:
if (condition) {
    action;
} else if (condition) {
    action;
} // Less space consuming, but is it less clear or is it cleaner code?

or


if (condition)
{
    action;
}
else if (condition)
{
    action;
} // Does using up lines for braces make it clearer/cleaner to read?

// Basically it's your preference and nothing more to it. There are
// advantages and disadvantages to using both.

Some practices are crucial, and everyone will agree on those, but others invite reasonable arguments on both sides. In such instances I believe both viewpoints are correct and it's just a matter of taste. What's important is that if you start out one way, you maintain that same way throughout your code.

Here are some of these practices:

* Distinguishing between signed/unsigned variables (important)
* Distinguishing between constants (important)
* Spaces or tabs? (I prefer spaces, for my own reasons. Some agree - others don't)
* Braces (as above)
* Variable naming conventions (Hungarian notation and the like)
* Not exceeding column 80
* etc... etc... (lots more)

You can have a look around the net and there will be lots of coding standard documents around.

Duncan

Duncan Camilleri
dnc77: You are mainly talking about code formatting standards - there are many of them, and there is no one perfect standard. I do agree that the most important thing is not to mix standards in your project - use the same formatting throughout all of the code in your project.

OP: As for coding practices, well, it's just something that comes with experience. For instance: three or five years ago, I wrote a logger library for myself. It was pretty simple and had all the features I wanted it to have. It was implemented as a singleton object. After using it in my *big* project for a year or so, I realized that my log was too big - I needed separate logs for every subproject or group of subprojects. And then it struck me: the logger is a singleton, so it limits me to only one instance - I can't make any more logs with it. So then I had to do some dirty hacks (inside the logger) so that it sorts the messages into different log files. I had more problems like this involving singletons and globals.
I have a new logger library now, and guess what - it isn't a singleton anymore. In fact, I try to avoid singletons and globals altogether. And now, in my head, singletons and globals are noted as bad practice.
Note: not everyone will agree with me on this.

So, IMHO, don't try to write *The Perfect Code* - just write *some* code. And from your own, personal experience, you will learn what is good and what is bad.
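To make the logger story concrete, here is a minimal sketch of the non-singleton alternative. The `Logger` class and its members are hypothetical, not the poster's actual library; the point is only that a plain class lets each subsystem own its own instance, so the one-log limitation above never arises.

```cpp
#include <fstream>
#include <string>

// Hypothetical example: a logger written as a plain class instead of a
// singleton. Each subsystem can construct its own instance writing to
// its own file.
class Logger {
public:
    explicit Logger(const std::string& path) : out_(path) {}
    void log(const std::string& msg) { out_ << msg << '\n'; }
private:
    std::ofstream out_;
};
```

Usage is then as many instances as you need, e.g. `Logger renderLog("render.log");` and `Logger netLog("network.log");` side by side - something a singleton forbids by design.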
I'd say good coding practice is much wider than good conventions. There are things you're told not to use, like globals and macros. The reason behind these sorts of guidelines is that things can get out of hand as a project grows. For example, if you use too many globals it becomes impossible to tell what the code actually does. Some little statement like offset = 5 could cause a crash in a seemingly unrelated function. But these guidelines are just that: guidelines. If you actually need something, by all means use it. That's what it's there for.
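A tiny sketch of the global-variable trap described above (all names here are hypothetical, invented for illustration): two functions that look unrelated are silently coupled through a shared mutable global.

```cpp
// Hypothetical illustration: 'offset' is a mutable global shared by two
// seemingly unrelated functions.
int offset = 0;

int readSample(const int* buffer) {
    return buffer[offset];  // behaviour depends on whoever last set 'offset'
}

void configureDisplay() {
    offset = 5;  // looks harmless here, but readSample() may now index
                 // past the end of a small buffer and crash
}
```

Nothing in `readSample`'s signature warns you that calling `configureDisplay` changes what it reads - which is exactly why the crash shows up "in a seemingly unrelated function".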

Here are my fairly broad guidelines:
1. The simpler, more obvious, and less interrelated the code is, the better. If it's really clever, it's probably bad.
2. When you write something, always think "if I was working on a different project and just wanted to rip out and use this little bit, how much of a pain would it be?" (that'll help you write reusable code).
3. If you want "something a bit like an array only..." or "a little utility function to" then it's quite likely in the standard library, so have a look. The standard library one will be better than you can do if you use it well.
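As a small illustration of point 3: a "little utility function to find the largest element" is already in `<algorithm>`, so the hand-rolled loop can be a one-liner (the `largest` wrapper name is just for this example).

```cpp
#include <algorithm>
#include <vector>

// Before reinventing "a little utility function to...", check the
// standard library: <algorithm> already provides std::max_element.
int largest(const std::vector<int>& v) {
    return *std::max_element(v.begin(), v.end());  // assumes v is non-empty
}
```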
___________________________________________________
David Olsen
If I've helped you, please vote for PigeonGrape!
Good programming practice has a single end objective, which is to reduce the cost of software development and maintenance. It derives from several key facts:


  • The human brain is extremely limited in scope and concentration power. The fewer things it has to deal with at a given time, the better it will fare. As a corollary, everything the compiler or libraries lift off the human's shoulders helps tremendously.
  • Time spent on code has a large effect in certain areas and a small effect in others. It is vital to work on the areas where the effect is large, in order to optimize development time.
  • Maintenance and debugging cost much more than the additional work spent at code-writing time to avoid them. It is more efficient to spend time making code maintainable or easily debugged than it is to debug or maintain code that was never meant for that.


Because of this, good programming practices place the emphasis on human-human communication, time efficiency and computer-assisted verification:


  • No undocumented use of exotic features or clever hacks. Always prefer the idiomatic way of doing things. Always be consistent in your code.
  • Choose all identifier names wisely, and adhere to the most popular naming conventions.
  • Whenever your code isn't self-documenting, comment.
  • The usage of a function or object must be understandable without reading its code (either through good names, or through documentation).
  • No mixing two trains of thought (such as doing both memory management and another algorithm in the same code section).
  • Consider warnings from the compiler as errors. Temporarily disable warnings in documented areas if ever necessary.
  • Code defensively. Every error caught by an assert as soon as it occurs saves an hour of debugging that would otherwise be spent tracking that error down.
  • Never trust that the code does what you intend, except maybe from the current function you are working on. By extension, always check arguments and states for validity.
  • Never hide anything from the compiler. The more the compiler can check for correctness, the better it is for you. Type-safety is the prime example of this, and so are compile-time assertions.
  • Always test before you move on. The earlier errors are caught, the easier it is to correct them.
  • Keep everything to a minimum, locally. While your program may be large, every single bit of your code should only depend on a few clearly outlined other elements. Mutable global elements generally do not comply with this.
  • In addition to the above, always keep things local: small functions, small objects (with few functions at any inheritance level), etc.
  • When the compiler can't help you, use known procedures to accomplish your task. For instance, there are documented ways of using mutexes: these ways are proven to work, and designed so the compiler will catch you if you mistype or forget something small.
  • Never reinvent an existing wheel; spend your time on things where it will be more effective. While creating a side-project to experiment (this is called a spike in the industry) is possible, never include your own code for functionality implemented by a library — especially if said library is free (or nearly free) for any use.
  • If using what you wrote gives you trouble, refactor. This results in better code, both in terms of speed and in terms of usage simplicity. Always use the experience gained using a feature to implement it in a better way.
  • Whenever you intend to spend time on something, prove that it is worth said time. This involves profiling before you optimize and experimenting before committing lastingly to a design.
  • Never make any assumptions. Either you can use the compiler (or another tool) to prove you're right, or you're wrong. The code is not for your puny human brain to handle correctly.
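Two of the bullets above ("code defensively" and "never hide anything from the compiler") can be sketched in a few lines. This is a hypothetical example; the function name and the contract are invented for illustration.

```cpp
#include <cassert>
#include <cstdint>

// Compile-time assertion: let the compiler verify a platform assumption
// instead of trusting it silently.
static_assert(sizeof(std::int32_t) == 4, "std::int32_t must be 4 bytes");

// Runtime defensive check: validate the argument at the point of entry,
// so a contract violation fails immediately instead of corrupting state
// somewhere far away.
int divide(int numerator, int denominator) {
    assert(denominator != 0 && "caller violated the contract");
    return numerator / denominator;
}
```

The assert fires the instant a bad argument arrives, which is exactly the "caught as soon as it occurs" property the list argues for.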
Read these links (warning: PDFs, though each is only a couple of pages):

The Open-Closed Principle
The Dependency Inversion Principle
The Interface Segregation Principle
The Liskov Substitution Principle
The Single Responsibility Principle

These basically sum up the major good design practices, specifically with OOP/OOD in mind.
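A minimal sketch of the spirit of two of those principles (Open-Closed and Dependency Inversion), with hypothetical names chosen for this example: high-level code depends on an abstract interface, so new kinds of shape can be added without modifying the code that uses them.

```cpp
#include <memory>
#include <vector>

// Abstract interface: high-level code depends on this, not on concrete types.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Square : Shape {
    explicit Square(double side) : side(side) {}
    double area() const override { return side * side; }
    double side;
};

// Open for extension (add new Shape subclasses), closed for modification
// (this function never needs to change when you do).
double totalArea(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double sum = 0.0;
    for (const auto& s : shapes) sum += s->area();
    return sum;
}
```

Adding a `Circle` later means writing one new subclass; `totalArea` and every other caller stay untouched.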
Wow, thank you everyone! I have learned from everything mentioned so far! For example, I always found myself confused about what my globals were doing, even in small programs - I just thought that using them was considered best practice! Maybe someone else has something to add? I think this is already a great thread for other beginners to read, as I have already learned a lot!
The real fundamental issue is that you want to avoid bugs, and some ways of doing things are inherently more prone to error (i.e. bugs) than others. When somebody says that such-and-such is a bad coding practice, they (usually) mean that you are more likely to make a mistake than if you did it some other way. Closely related to this idea are practices that, while they may work for one person over a short period of time, are hard on people who aren't intimately familiar with the code - i.e. you are doing something that increases the odds that somebody else will make a mistake while working with your code.

Other people have given long lists of things they feel are good or bad practices. Mostly they're common sense. A few seem dumb at first and can only be truly appreciated through hard and bitter experience. In other cases it isn't really clear whether one way is better than the other. People will pick one, then tend to get rigid and dogmatic about it, and you end up with the infamous "religious" flame wars.
-Mike
Read the chapter on "Bad smells in code" in the Refactoring book by Martin Fowler.

Cheers
Sorry to bump this, but it's such a good thread. I vote for sticky somewhere.

I like to think of good practices in software as means of dealing with complexity. I got this mostly from SICP and Bartosz Milewski's writings, such as the chapter "About Software" from his C++ book.
Once I realized this - notably when some app I made fell apart at the end (it violated, in numerous ways, all the principles Ezbez mentioned) - it helped me figure out what the good practices are and why, even if I have yet to learn or master this or that particular practice.

This topic is closed to new replies.
