How do you manage multiple builds sharing the same code?

Recommended Posts

SymLinked    1233
Hello everyone, I have been playing around with different ways of separating my two builds. Both share a lot of code, and both need to have some parts synced or they won't be able to communicate. I tried two things, and I would like to know which one you find best:

1. Keeping both builds in the same project, with two preprocessor defines, one each for Build1 and Build2. This actually works great, but the code looks like sh*t: very cluttered by all the ifdefs.

2. Keeping two different projects and applying any changes to both. This way it looks cleaner (IMO, of course), but on the other hand mistakes are more common when you forget to copy/paste the updates over from the other project.

I prefer number 2 myself, because then I can customize both projects perfectly, deleting the project files/code I don't need instead of having to ifdef them out. Any thoughts?

lightbringer    1070
I would refactor those reasons and still go for number 3 :)

If I really had to pick between the first two, then I'd probably pick number one to avoid the whole cut-and-paste desynchronisation mess. I would postulate that ugly code that always works is better than beautiful code that only works 50% of the time.

Guest Anonymous Poster
Tough question. If your two builds differ only by a particular set of files, go with option 2. If you need to make changes here, there, and everywhere, defines are the better option of the two you've presented.

The "right" way (as mentioned) to do this is to put shared code into libraries. Where possible, non-shared code should share a common set of interface files, but separate implementation files.
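For illustration, the common-interface/separate-implementation idea might look like this in C++ (the file names and functions here are hypothetical); each build's project compiles only its own .cpp against the shared header:

// platform.h - shared interface header, included by code in both builds
#ifndef PLATFORM_H
#define PLATFORM_H
void InitPlatform();        // each build supplies its own definition
const char* BuildName();
#endif

// platform_build1.cpp - added only to Build1's project
#include "platform.h"
void InitPlatform() { /* Build1-specific startup */ }
const char* BuildName() { return "Build1"; }

// platform_build2.cpp - added only to Build2's project
#include "platform.h"
void InitPlatform() { /* Build2-specific startup */ }
const char* BuildName() { return "Build2"; }

No #ifdefs needed: the linker picks up whichever implementation file the project includes.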

jkleinecke    251
You could create an abstraction layer that gives each set of code a common interface, and then put #defines around the portion of code that implements the abstraction layer. Sort of like abstracting out a renderer or file system access.
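A minimal sketch of that idea, assuming hypothetical BUILD1/BUILD2 macros defined in each project's settings; the ifdefs stay confined to the one file that implements the layer:

// audio.h - common interface seen by all shared code
void PlaySound(int soundId);

// audio.cpp - the only file that mentions the build macros
#include "audio.h"
#if defined(BUILD1)
void PlaySound(int soundId) { /* full audio path for Build1 */ }
#elif defined(BUILD2)
void PlaySound(int soundId) { /* headless/no-op path for Build2 */ }
#else
#error No build flavour defined
#endif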

SymLinked    1233
Quote:
Original post by DrEvil
Definitely 1 IMO. Code doesn't have to be pretty to be functional. Managing 2 projects would get real old real fast.


Thanks.

Quote:
Original post by Anonymous Poster
Tough question. If your two builds differ only by a particular set of files, go with option 2. If you need to make changes here, there, and everywhere, defines are the better option of the two you've presented.

The "right" way (as mentioned) to do this is to put shared code into libraries. Where possible, non-shared code should share a common set of interface files, but separate implementation files.


Aye, I would have used a shared library if the code wasn't so dependent on different parts. It would require a lot of recoding, and that's a lot of work.

Quote:
Original post by jkleinecke
You could create an abstraction layer that gives each set of code a common interface, and then put #defines around the portion of code that implements the abstraction layer. Sort of like abstracting out a renderer or file system access.


I made two abstraction layers for the renderer and audio system, but I'm not sure how to handle the rest.

Your opinions are greatly appreciated and thanks again for the suggestions.

SymLinked    1233
After some thinking, I have decided to go with option 2. I just couldn't get my head around all those ifdefs, and I realized the differences are mostly confined to a particular set of files, plus some random ones.

Thanks again for your opinions.

Jaymar    140
#ifdefs are nasty. I strive to avoid them at all costs, or at least to confine them to a single file. If you have build-specific code scattered all over the codebase, then you should probably refactor that functionality into build-specific classes or files.

That said, I'll take a thousand ifdefs over two separate branches. Especially if I'm using VC++ 8, which correctly grays out inactive ifdef blocks.
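One common way to confine them, sketched here with hypothetical macro names, is a single configuration header that translates the build define into feature flags; the rest of the code then tests features, never builds:

// build_config.h - the only file that mentions BUILD1/BUILD2 directly
#ifndef BUILD_CONFIG_H
#define BUILD_CONFIG_H
#if defined(BUILD1)
  #define FEATURE_RENDERER 1
#elif defined(BUILD2)
  #define FEATURE_RENDERER 0
#endif
#endif

// elsewhere in shared code:
// #if FEATURE_RENDERER
//     gRenderer->DrawFrame();   // hypothetical call
// #endif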

Guest Anonymous Poster
Good luck with #2, that's a horrific solution. Such a waste of time copying back and forth. Why is #3 not an option? Your boss won't allow it? It seems it's a personal project. I'd put common code into separate files and project-specific code into separate files so it can be blocked out.

l3mon    128
Ever heard of "post-build events" and "include paths"?
I'd go with number 2 if the projects have nothing to do with each other except sharing some convenient code.

Just make a "common" include directory and have your project update anything you need after you build it. The same goes for "pre-build events" :)

But if it's a huge bunch of code as you described, you should really go for a lib...
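As a concrete example of what such a post-build event might look like in VC++ (the paths here are hypothetical), copying the shared headers into the common include directory that the other projects have on their include path:

xcopy /Y /D "$(ProjectDir)*.h" "$(SolutionDir)common\include\"

The /D switch only copies files newer than the existing ones, so the event stays cheap on incremental builds.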

MaulingMonkey    1728
Quote:
Original post by Jaymar
#ifdefs are nasty. I strive to avoid them at all costs, or at least to confine them to a single file. If you have build-specific code scattered all over the codebase, then you should probably refactor that functionality into build-specific classes or files.

That said, I'll take a thousand ifdefs over two separate branches. Especially if I'm using VC++ 8, which correctly grays out inactive ifdef blocks.


This about sums up my opinion too (with the noteworthy exception of include guards, of course). It's a lesser-of-two-evils issue, and manual synchronization is just repeatedly kicking yourself in the nuts (and a horrible violation of the DRY principle: Don't Repeat Yourself). The only time I use multiple branches is when I'm working with multiple versions for bug hunting (comparing "buggy" and "bug-free" versions), for archival purposes, or on large-scale OSS apps which use multiple branches to separate feature testing/debugging from the trunk for stability purposes (or similar).

Guest Anonymous Poster
If you use anything like a decent IDE for your development, the best way is to make several distinct build targets.

Target 1 contains the sources with the code shared by both versions, building a static library.
Target 2 holds the sources making up flavour one and links with the library produced by Target 1.
Target 3 holds the sources making up flavour two and likewise links with the library produced by Target 1.
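Expressed as a rough GNU Make sketch (target and file names hypothetical), the same three-target setup would be:

# Target 1: shared sources -> static library
libshared.a: shared_code.o more_shared.o
        ar rcs $@ $^

# Targets 2 and 3: each flavour links against the shared library
flavour1: flavour1_main.o libshared.a
        $(CXX) -o $@ $^

flavour2: flavour2_main.o libshared.a
        $(CXX) -o $@ $^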

Kylotan    9853
I can't imagine trying to manage 2 separate projects with cut and paste. Ifdefs aren't that bad. It should be possible to push those ifdefs into a small minority of files, which can then be managed separately. Some files can be excluded from certain builds to make life easier too.

Guest Anonymous Poster
I am not sure I understand your problem, but why can't you use option 2 without the copying and pasting? I mean, why can't you simply have two projects, each of which depends on whatever files it needs?

The only reason I can think of is that there are files which a project "only needs half of", but those situations, beyond being bizarre, could very easily be refactored into two files, and then the target that needs everything could just include both. This wouldn't seem to involve actually writing any code (beyond a few #includes, perhaps).

JasonBlochowiak
Quote:
Original post by SymLinked
Aye, I would have used a shared library if the code wasn't so dependent on different parts. It would require a lot of recoding, and that's a lot of work.


As far as this type of thing goes, there are two kinds of work: the kind you pay for up front, by doing the organizational and planning work that you should, and the kind you pay for constantly down the line, because instead of doing that organizational and planning work, you took the "easy" way out.

The easy way seems cheaper, just because you don't necessarily see the subsequent wasted time in one chunk. However, it's almost always more expensive: instead of one lump of time, you're constantly being pulled off what you intended to work on in order to fix whatever broke this time.

moeron    326
If you ever find yourself thinking of "copying and pasting", then you should probably look into using some sort of source control. At least then you can use branching, which will do the copying and pasting but allow you to more easily merge between the two. I don't know if this is a viable solution for the OP, though. If you are thinking of using source control, I'd check out Subversion. It's pretty good, and you can use TortoiseSVN for a GUI (I think it's Windows-only; check out RapidSVN for a cross-platform GUI).

DerekSaw    243
We manage 100+ different kinds of platform-specific code by having separate platform folders, e.g.:

\projA\src\main - main program
\projA\src\render - renderer subsystem
\projA\src\ai - AI subsystem
\projA\src\platform\linux\main - main program for Linux generic
\projA\src\platform\linux\render - renderer subsystem for Linux generic
\projA\src\platform\linux\amd64\render - renderer subsystem for Linux AMD64
\projA\src\platform\linux\amd64\ai - AI subsystem for Linux AMD64

Our build system picks the correct files from the targeted platform-specific folders when compiling. When no platform-specific file is found, the more generic version of the file is used. That said, this won't work with MSVC's project/workspace builds; we use the command line for our builds.
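For what it's worth, GNU Make can approximate that most-specific-first lookup with vpath, which searches directories in order (paths taken from the example layout above):

# when building the Linux/AMD64 target, search the most specific folder first
vpath %.cpp src/platform/linux/amd64/render src/platform/linux/render src/render

Make resolves each .cpp prerequisite to the first folder that actually contains it, falling back to the generic version automatically.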


As for code modifications: yes, you have to apply them to the correct platform-specific files manually. We found it rather easier to have separate files/folders than to put everything in one file with #ifdefs. Remember, we have 100+ platforms to support.

SymLinked    1233
Quote:
Original post by l3mon
Ever heard of "post-build events" and "include paths"?
I'd go with number 2 if the projects have nothing to do with each other except sharing some convenient code.

Just make a "common" include directory and have your project update anything you need after you build it. The same goes for "pre-build events" :)

But if it's a huge bunch of code as you described, you should really go for a lib...


I never thought about it that way. I still don't see how it solves my "issue" as mentioned before, though.

Say I put the renderer into its own project and include it in the solution. Now, say that main.cpp has calls to the renderer in it, and that both builds share this file.

The build which does not include the renderer will get errors. This was the whole point of the #ifdefs and why I wanted to use them in the first place: I could place them around any calls to the renderer AS WELL as around the actual renderer.cpp/renderer.h files. But that way the source files became very cluttered and I just couldn't stand it.

Thanks for your suggestion!

Quote:
Original post by yaustar
I would use number 3 as suggested above, but as an alternative I would create two project/solution files that only compile/include the files needed for each build.


Not possible, since they share some of the same files, and those files might contain (for example) calls to the renderer, which is not present in both builds. This is why I considered #ifdefs.
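For what it's worth, one way to square that with the abstraction-layer suggestions above, sketched here with hypothetical names, is a stub renderer: the build without rendering links a do-nothing implementation behind the same header, so shared files like main.cpp compile in both builds with no ifdefs at all:

// renderer.h - shared by both builds
void RenderFrame();

// renderer.cpp - added only to the build that actually renders
void RenderFrame() { /* real drawing code */ }

// renderer_null.cpp - added only to the build without a renderer
void RenderFrame() { /* deliberately does nothing */ }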

Quote:
Original post by JasonBlochowiak
As far as this type of thing goes, there are two kinds of work: The kind you pay for up front, by doing the organizational and planning work that you should, and then there's the kind of work you pay for constantly down the line, because instead of doing the organizational and planning work, you took the "easy" way out.

The easy way seems cheaper, just because you don't necessarily see the subsequent wasted time in one chunk. However, it's almost always more expensive, because, instead of one lump of time, you're constantly being pulled off of what you're intending to work on, in order to fix whatever broke this time.


Good point! And in this case I saw that it would mean *a lot* more work (and thus time) to put everything into a lib rather than use two builds, which so far haven't been a problem at all.

Honestly, I think this is a matter of taste more than anything. Perhaps you guys are working with larger projects than I am, and I'm just not seeing the issues with my approach until I scale up :)

l3mon    128
Quote:
Original post by SymLinked
I still don't see how it solves my "issue" as mentioned before, though.

Say I put the renderer into its own project and include it in the solution. Now, say that main.cpp has calls to the renderer in it, and that both builds share this file.


Well, either you put your renderer project into the same solution and make sure the build order is correct, or you keep a separate project for the renderer and use a post-build event that copies the renderer files to a common directory each time you update the code. In your other projects you simply add an include path pointing to that common directory.
That way you make sure every project uses the up-to-date version of your renderer. Of course, you should only make modifications to the renderer in the renderer project, as any changes made elsewhere would simply be overwritten by the post-build event the next time you build the renderer project.

Hope that removes the confusion, or did I get something wrong?

JasonBlochowiak
Quote:
Original post by SymLinked
Quote:
Original post by JasonBlochowiak
As far as this type of thing goes, there are two kinds of work: the kind you pay for up front, by doing the organizational and planning work that you should, and the kind you pay for constantly down the line, because instead of doing that organizational and planning work, you took the "easy" way out.

The easy way seems cheaper, just because you don't necessarily see the subsequent wasted time in one chunk. However, it's almost always more expensive: instead of one lump of time, you're constantly being pulled off what you intended to work on in order to fix whatever broke this time.


Good point! And in this case I saw that it would mean *a lot* more work (and thus time) to put everything into a lib rather than use two builds, which so far haven't been a problem at all.

Honestly, I think this is a matter of taste more than anything. Perhaps you guys are working with larger projects than I am, and I'm just not seeing the issues with my approach until I scale up :)


I wouldn't say it's so much a matter of taste as it is good habits. Bad habits are easy to get into, and tend to follow you around as you move from one project to another. Good habits are a pain up front, but tend to reward you continuously.

To your point, though, yes - I am used to working with full-sized real game projects, with hundreds of modules and tens of libraries. Good organizational habits are more relevant in that case than for situations with tens of modules. I do, however, stick by the notion that it's easier to learn good habits in a smaller scope, before you really need them, because by the time you've realized you're in a situation where you really need them, it's too late, and you have to go back and do cleanup work.

