mattnewport

Is it just me or do these programming techniques suck?

This topic is 4809 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

I'm working on a large multi-platform (including consoles) C++ project which uses a number of programming techniques that I think are bad choices and in some cases fly against all the advice I've seen given by books on good C++ style. Now I know some of the engineers who've developed this thing are not complete idiots, so I'm wondering if I'm missing some compelling arguments for doing things the way they are done - perhaps some alternate schools of thought on what I thought were established best practices. In many cases I think I know the reasons the choices were made (usually performance related) and can sort of understand them, but I just don't think the benefits are worth the costs in terms of extra maintenance, extra development time and difficulty of debugging. Much of it seems to be one giant exercise in premature optimization. If anyone can offer good arguments in support of these techniques, or point me at resources that argue for them, I'd be interested to see if I'm missing something.

1.) Coding standard that requires no headers to include other headers.

I've seen plenty of books say that any header should be able to compile on its own as the only line in a .cpp file. This prevents situations where you must include headers in exactly the right order to get anything to compile. Obviously, in general it's a good idea for headers not to include other headers unless they really have to, but this is the first time I've seen it mandated that headers should not include other headers even when they will not build without them.

This causes numerous headaches: you have to get the include order exactly right in your .cpp file to use a header; you have to figure out all the implicit dependencies in a header file yourself in order to use it; you can't look at a header file and tell at a glance what it is physically dependent on; and you can't know for certain what any given function call or type in a header will resolve to, since it depends on what happens to be included before the header in any given .cpp file. This seems monumental idiocy to me. Am I missing something?

2.) Classes that are not.

It is standard practice throughout the code base to have 'classes' whose size and members are not fixed and are only implicitly defined. The classes provide static functions for determining their real size based on their initialization parameters (the size can and often does vary from instance to instance) and provide a static initialization function. Creating an instance of the class involves allocating the amount of memory returned by the size function, then calling initialize on this chunk of raw memory (making sure you use exactly the same parameters as you used to get the size). The class will typically have a few members declared and then will tack a whole bunch of arbitrary data on to its own tail, the layout and size of which varies and is defined implicitly in the code that manipulates it.

This approach saves the space required for an occasional pointer, but at what a cost... You can no longer use standard C++ new and delete; you can't use sizeof(); you can't create arrays of the object; you have a maintenance nightmare (since the actual layout of the data is not defined explicitly anywhere - it is all implicit in the code that manipulates it); you have no hope of automating serialization by analyzing the declaration of the class... I could go on. This seems to me like a crazy way to go about software engineering. Is it common practice in embedded systems or some other area that I'm maybe not familiar with?

3.) Funky auto-generated headers.

Presumably partly in order to reduce the pain induced by 1.), there is a custom tool that takes in a big bunch of .cpp files and include directories and spits out a concatenated header containing all the necessary headers in one big lump. It doesn't always work correctly. Even if it did, this seems pointless - are there really still compilers that are so bad at handling include guards that this makes a difference to build times? Is it worth the pain?

4.) No run time memory allocation.

I can understand the rationale of this for many systems, but it's not really true in this case. What they really mean is that they don't perform it, not that it is not needed. Instead, clients must query the static size function mentioned above for each class and allocate all the memory required themselves before passing it to the initialization function. It seems to me this doesn't buy you much over simply always using a client-provided memory allocator for all internal allocations.

That will do for now - there are other design decisions which make me want to tear my hair out, but I've ranted enough for one afternoon. So can anyone offer any good justifications for any of these techniques? Are they more widely used than I realise? Or is this really as crazy as I think it is?
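[For readers who haven't run into it, the 'implicit class' pattern described in 2.) and 4.) looks roughly like this minimal sketch; Blob, RequiredSize and Initialize are invented names for illustration, not from the project in question:]

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Hypothetical sketch of the pattern: the class declares a fixed header,
// and the variable-size payload lives past the end of the object.
class Blob {
public:
    // Clients call this to find out how much memory an instance needs.
    static std::size_t RequiredSize(std::size_t elementCount) {
        return sizeof(Blob) + elementCount * sizeof(int);
    }

    // Clients allocate RequiredSize() bytes themselves, then pass the raw
    // memory here - with exactly the same parameters, or the layout breaks.
    static Blob* Initialize(void* memory, std::size_t elementCount) {
        Blob* blob = static_cast<Blob*>(memory);
        blob->count = elementCount;
        // The payload is addressed implicitly, just past the declared members.
        std::memset(blob->Data(), 0, elementCount * sizeof(int));
        return blob;
    }

    int* Data() { return reinterpret_cast<int*>(this + 1); }
    std::size_t Count() const { return count; }

private:
    std::size_t count; // the only explicitly declared member
};
```

Note that sizeof(Blob) no longer tells the truth about an instance's real size, which is exactly where the maintenance problems described above come from.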

Quote:
Original post by mattnewport
I'm working on a large multi platform (including consoles) C++ project which uses a number of programming techniques that I think are bad choices and in some cases which fly against all the advice I've seen given by books on good C++ style. Now I know some of the engineers who've developed this thing are not complete idiots so I'm wondering if I'm missing some compelling arguments for doing things the way they are done - perhaps some alternate schools of thought on what I thought were established best practices. In many cases I think I know the reasons the choices were made (usually performance related) and can sort of understand them but just don't think the benefits are worth the costs in terms of extra maintenance, extra development time and difficulty of debugging. Much of it seems to be one giant exercise in premature optimization. If anyone can offer good arguments in support of these techniques or point me at resources that argue for them I'd be interested to see if I'm missing something.

1.) Coding standard that requires no headers to include other headers.


I think that's monumentally stupid too. It could and will prevent implementing certain class functions inline in the header (when they depend on a class layout we're not allowed to include), which could actively hurt performance by eliminating the ability to inline in certain places.

Quote:
2.) Classes that are not.

It is standard practice throughout the code base to have 'classes' whose size and members are not fixed and are only implicitly defined.


Now that's just plain nasty. I wouldn't be surprised if much of what they're doing there is undefined behavior according to the standard, too.

If you wanted to pack a variable-length array along with the class data, you could do:

#include <new> // required for placement new

char * memory = allocate( ... ); // one block, sized for the object plus the array
object * o = new( memory ) object; // placement new
char * after_memory = memory + sizeof( object ); // start of the packed array

And then supply the pointer after_memory where it's needed via that simple calculation (which shouldn't be at the object's level of implementation).

Quote:
3.) Funky auto-generated headers.

Presumably partly in order to reduce the pain induced by 1.) there is a custom tool that takes in a big bunch of cpp files and include directories and spits out a concatenated header containing all the necessary headers in one big lump.


Yes, it seems like an extremely retarded workaround for #1 - I can see tons of problems involving multiple definitions. Further, it should slow down parsing: instead of a header simply referencing another with include guards (so it's skipped when already #included in a translation unit), the compiler has to chew through the whole concatenated lump.

Quote:
4.) No run time memory allocation.


This is pretty crazy too. It can be a good idea to reduce "run time" allocation for the sake of efficiency, but that's normally done by doing it earlier, e.g. during initialization, which is part of "run time".

Personally? I think you should ask them. It may be they have to work with an extremely retarded compiler, or something else which forces them to make otherwise stupid decisions.

Quote:
Original post by MaulingMonkey
Personally? I think you should ask them. It may be they have to work with an extremely retarded compiler, or something else which forces them to make otherwise stupid decisions.

It's a bit difficult to ask directly as they don't work in the same office (or on the same continent, for that matter). I've raised the header file issue here and tried to get an answer back on why they do things that way, but haven't had any luck. I also have to be careful not to sound like I'm being negative or attacking them when asking about these things - we are supposed to be working on the same product. And it's too late for them to change these decisions now, so even if I convinced them it was a bad idea it wouldn't change much.

#1 is probably an overreaction from people that got burned by the opposite extreme. In large projects dependencies can get out of control really easily causing problems with build times, refactoring, unit testing, precompiled headers, etc.

#2 is caused by two things - people worried about the performance of new/malloc who want to do one big allocation rather than several little ones, and people who don't want to write proper destructors and instead want to de-allocate the entire mess in one go.

#3 I've never seen.

#4 is basically a variant of #2.

I am using C, rather than C++, but I don't really like runtime memory allocation. I usually have static memory and some dynamic memory where needed, but it is allocated at the beginning of the program, rather than in the middle.
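[The "allocate at the beginning, not in the middle" approach can be sketched like this in C++; Simulation and Spawn are invented names for illustration:]

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: reserve worst-case capacity during initialization,
// then the hot path only reuses that storage and never touches the allocator.
struct Simulation {
    std::vector<float> positions;

    explicit Simulation(std::size_t maxEntities) {
        positions.reserve(maxEntities); // the one up-front allocation
    }

    void Spawn(float x) {
        assert(positions.size() < positions.capacity()); // no mid-run growth
        positions.push_back(x); // guaranteed not to reallocate
    }
};
```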

Quote:
Original post by Anon Mike
#1 is probably an overreaction from people that got burned by the opposite extreme. In large projects dependencies can get out of control really easily causing problems with build times, refactoring, unit testing, precompiled headers, etc.

Dependencies are certainly a problem, but this 'solution' completely fails to solve it. The dependencies are still there; it's just that you have to state them explicitly in every C++ file. When you have all the necessary dependencies in the header you can see them, and if you're a conscientious programmer you can refactor things to minimize them. If you're not including headers in other headers you can't even see from looking at the header what its dependencies are, so there are extra steps in trying to minimize them. I think it's very important to minimize physical dependencies in your code, but this is not a good way to do it.
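[For contrast, the conventional way to keep header dependencies both visible and minimal is include guards plus forward declarations. A hypothetical sketch, with invented names:]

```cpp
// widget.h - hypothetical example: the header includes what it genuinely
// needs and forward-declares the rest, so it compiles on its own as the
// only line in a .cpp file, yet pulls in no unnecessary dependencies.
#ifndef WIDGET_H
#define WIDGET_H

#include <string> // needed here: std::string is used by value below

class Renderer; // forward declaration: only used via reference below

class Widget {
public:
    explicit Widget(const std::string& name) : name_(name) {}
    const std::string& Name() const { return name_; }
    void Draw(Renderer& r); // the definition in widget.cpp would include renderer.h
private:
    std::string name_;
};

#endif // WIDGET_H
```

The physical dependencies are all stated in one place, and the cheap ones (Renderer) stay out of every client's build.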
Quote:

#2 is caused by two things - people worried about the performance of new/malloc so they want to do one big allocation rather than several little ones, and for people that don't want to make proper destructors and instead want to de-allocate the entire mess in one go.

These are legitimate concerns, but I can think of ways of solving them that don't throw the type system out of the window. It also seems like premature optimization - focus on clean, correct, maintainable code first and optimize later where it's really needed. With all the development time you've saved by not having to deal with this kind of mess, you'll have plenty of time for optimization.
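[One such alternative, sketched hypothetically: keep ordinary classes with real members, and get the "one big allocation" behaviour from a simple bump-pointer arena. This assumes C++11 for alignof/nullptr and trivially destructible objects:]

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <vector>

// Hypothetical arena: one big allocation up front, then cheap sub-allocations.
// Objects keep normal declarations, so sizeof() and the type system still work.
class Arena {
public:
    explicit Arena(std::size_t bytes) : buffer_(bytes), offset_(0) {}

    // Bump-pointer allocation from the single buffer (align must be a power of two).
    void* Allocate(std::size_t size, std::size_t align) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) return nullptr; // out of arena space
        offset_ = aligned + size;
        return &buffer_[aligned];
    }

    // Construct an ordinary object in arena storage via placement new.
    template <typename T>
    T* Create() {
        void* mem = Allocate(sizeof(T), alignof(T));
        return mem ? new (mem) T() : nullptr;
    }

private:
    std::vector<unsigned char> buffer_;
    std::size_t offset_;
};

// An ordinary struct - its layout is explicit, unlike the implicit classes above.
struct Particle { float x, y, z; };
```

Everything is freed in one go when the arena is destroyed; for non-trivial types you would still need to run destructors, which this sketch deliberately omits.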
Quote:

#3 I've never seen.

I've heard of people concatenating everything (code and headers) together to reduce build times (the idea being that it's much faster for the compiler to open and build a few large files than many small ones) but I'm not even sure that this particular technique helps in that respect.
Quote:

#4 is basically a varient of #2.

And the same questions apply: why make it the default behaviour? Why not use it only when you profile and discover it's really necessary? I can also think of other ways of achieving this without resorting to such unmaintainable coding techniques.

I end up doing the opposite - allocating as little memory as possible, only when it's in scope.

Quote:
Original post by Raduprv
I am using C, rather than C++, but I don't really like runtime memory allocation. I usually have static memory and some dynamic memory where needed, but it is allocated at the beginning of the program, rather than in the middle.

Runtime memory allocation is certainly something to be careful of, and it's not something you want to be doing in the middle of a frame, but I don't see how this approach addresses that. When you need dynamic allocation you have to get the user to do it for you explicitly. You could achieve the same result with less pain for the user (and yourself) by doing all allocation through user-supplied callback functions, an approach that is quite common in runtime libraries.
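[A minimal sketch of that callback style, with invented names (Allocator, MakeCounters): the library never calls malloc directly; every internal allocation goes through hooks the client supplies once, up front.]

```cpp
#include <cstddef>
#include <cstdlib>

// User-supplied allocation hooks. userData lets the client thread through
// a pointer to its own pool or heap.
typedef void* (*AllocFn)(std::size_t size, void* userData);
typedef void (*FreeFn)(void* ptr, void* userData);

struct Allocator {
    AllocFn alloc;
    FreeFn release;
    void* userData;
};

// Default hooks that just forward to the C runtime.
static void* DefaultAlloc(std::size_t size, void*) { return std::malloc(size); }
static void DefaultFree(void* ptr, void*) { std::free(ptr); }

// A hypothetical library routine that allocates internally via the callbacks,
// so the caller never has to pre-compute object sizes themselves.
int* MakeCounters(const Allocator& a, std::size_t n) {
    int* counters = static_cast<int*>(a.alloc(n * sizeof(int), a.userData));
    for (std::size_t i = 0; i < n; ++i) counters[i] = 0;
    return counters;
}
```

The client keeps full control over memory policy, but the messy size/initialize protocol disappears from the public interface.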

Quote:
Original post by mattnewport
I've heard of people concatenating everything (code and headers) together to reduce build times (the idea being that it's much faster for the compiler to open and build a few large files than many small ones) but I'm not even sure that this particular technique helps in that respect.

Wouldn't this in practice lead to exactly the opposite, i.e. a waste of time? A few large files might build faster than lots of small ones, but then each small change affects a large file and requires it to be built anew, instead of recompiling and relinking just the small thing that actually changed..? o.O;

Quote:
Original post by tolaris
Quote:
Original post by mattnewport
I've heard of people concatenating everything (code and headers) together to reduce build times (the idea being that it's much faster for the compiler to open and build a few large files than many small ones) but I'm not even sure that this particular technique helps in that respect.

Wouldn't this in practice lead to exactly the opposite, i.e. a waste of time? A few large files might build faster than lots of small ones, but then each small change affects a large file and requires it to be built anew, instead of recompiling and relinking just the small thing that actually changed..? o.O;

I think the idea is that you concatenate whole libraries / subsystems into single units. Those units will then compile faster and hopefully they will change sufficiently infrequently that overall you get a speedup. Typically any one programmer will only be working on a small section of the code at a time, and if you've done a good job of splitting things up into modules he will only be touching one of them at a time most of the time, so an individual programmer will see faster iteration times on average. I'm not convinced it's worth the effort though - I think the resources could be better spent elsewhere.

This particular project is a library which end users will rarely if ever change so concatenating the headers may help end users' build times. I think it would be better to manage physical dependencies so that they didn't need to include so many files though.
