Cross-platform library linking with CMake

9 comments, last by Alberth 7 years, 3 months ago

I'm trying to set up CMake in my project's Git repository so that anyone working on the project can just double-click a batch file or something to build the game and then run the generated executable. I guess they would have to install a compiler and CMake to be able to build the project, but I would prefer setup to be as simple as possible for every user. I'm currently the only programmer on the team; the rest are artists and designers.

The problem I'm having right now is with linking to the libraries. The game uses SDL2 and its extension libraries SDL_image, SDL_ttf and SDL_mixer. It also uses Boost for now, but I will probably drop it once the filesystem library is introduced into the standard library in C++17.

A thing that confuses me is dynamic versus static linking. I understand that Linux makes dynamic linking easy, but I'm working on Windows, where there is no standard directory for libraries such as SDL, and I don't really know where to put the SDL files. Should I put them in a library folder in the repository to make sure that everyone has access to the libraries? But then: would that mean I'm using static linking instead of dynamic linking, and is that a bad idea? And how do I tell CMake where to look?

Linking to libraries wasn't really a problem when I was just working on my local drive in Visual Studio, but now that I want to share the project with my team, it feels a lot more confusing.


Your external libraries should definitely be added to the repository. This not only ensures everyone has access to the library on every platform, but that they're also all using the same version. Whether or not they're statically or dynamically linked depends on the configuration.

Say that you put all your external libraries in a folder called 'libraries' in the root of the source tree; then your configuration might look something like this:


target_include_directories(myExe PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/libraries/sdl/include)

if (UNIX)
    target_link_libraries(myExe PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/libraries/sdl/lib/linux/libSDL_mixer.so)
elseif (WIN32)
    target_link_libraries(myExe PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/libraries/sdl/lib/win32/SDL_mixer.lib)
endif ()

You can tell that in the Linux configuration SDL mixer is dynamically linked, because you're specifying a .so (as opposed to an archive). As for the Windows configuration, you can't really tell from looking, because a .lib can be either a static library or an import library, so that's something you just have to know as the person configuring the build. If it's a static library, then that's it; nothing else to worry about. Where things get trickier is installation and the handling of dynamic libraries.
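For comparison, a static build on Linux would just point at the archive instead of the shared object. This is a sketch, assuming the same hypothetical 'libraries' layout as above:

```cmake
# Hypothetical static variant of the Linux branch above: linking the .a
# archive bakes the library's code into the executable, so there is no
# shared object to ship alongside it.
if (UNIX)
    target_link_libraries(myExe PRIVATE
        ${CMAKE_CURRENT_SOURCE_DIR}/libraries/sdl/lib/linux/libSDL_mixer.a)
endif ()
```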

On Windows, the installation step is usually pretty simple: for every import library in your source tree you linked against, copy the corresponding DLL to the same folder as the main EXE. See How to copy DLL files into the same folder as the executable using CMake?. If you install the EXE outside the build tree, just copy the DLLs along with it.
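A minimal sketch of that copy step, assuming the same 'libraries' layout and the target name myExe from the example above:

```cmake
# Sketch (paths and target name are assumptions): after every build of
# myExe, copy the DLL into the same folder as the executable so Windows
# can find it at runtime.
if (WIN32)
    add_custom_command(TARGET myExe POST_BUILD
        COMMAND ${CMAKE_COMMAND} -E copy_if_different
            ${CMAKE_CURRENT_SOURCE_DIR}/libraries/sdl/lib/win32/SDL_mixer.dll
            $<TARGET_FILE_DIR:myExe>)
endif ()
```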

On Linux, you have to worry about the rpath. CMake will handle that detail for you within the confines of your build tree, but if you install the EXE elsewhere, you need to make sure the shared libraries can be found. Take a look at CMake RPATH handling for more information. I usually don't like to have any dependence on the build tree once something is installed, so I'll typically copy the .so files along with the executable, then launch the game with a script that sets LD_LIBRARY_PATH appropriately. However, how you want to handle this is completely up to you.
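One possible way to set that up, as a sketch: the "$ORIGIN/lib" layout below is an assumption, and means the installed executable looks for its .so files in a 'lib' folder next to itself rather than in the build tree.

```cmake
# Sketch: give the installed executable an rpath relative to its own
# location ($ORIGIN expands at load time to the executable's directory),
# so it has no dependence on the build tree once installed.
set(CMAKE_INSTALL_RPATH "$ORIGIN/lib")
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)

install(TARGETS myExe DESTINATION .)
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/libraries/sdl/lib/linux/libSDL_mixer.so
        DESTINATION lib)
```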

You can store just the built libraries in git. Git is not great with binaries, but 3rd party code libraries are not that large and they don't change that often, so you shouldn't have a problem.

Regarding standard paths: a common system that I've seen at pretty much every professional gamedev company I've worked at is that they have a dedicated hard drive or virtual drive for development. So it might be that everything under H:\ is stored in version control, and everything in version control can assume its absolute paths will start with H:.

Anywhere that's not practical, the next step is environment variables. Each developer will have environment variables set up containing paths to key tools, and you can read/expand those in your build process, use them from within tools, etc.
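In CMake those environment variables can be read with $ENV{}. A sketch (SDL2_DIR is a made-up variable name here, not a convention the thread established):

```cmake
# Sketch: pick up a per-developer library location from an environment
# variable, falling back to the in-repo copy if it isn't set.
if (DEFINED ENV{SDL2_DIR})
    set(SDL_ROOT $ENV{SDL2_DIR})
else ()
    set(SDL_ROOT ${CMAKE_CURRENT_SOURCE_DIR}/libraries/sdl)
endif ()

target_include_directories(myExe PRIVATE ${SDL_ROOT}/include)
```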

Thanks for the helpful replies! I will put the libraries in the repository then and link to them in the way that Zipster suggested. Not completely sure about where to put the exe yet, will look more into it. We'll see if we will use virtual drives for development. Probably not right now, maybe later if we decide that it would be useful.

I guess I was also a bit worried about how linking would work when doing installation for the end user, but it will be a couple of years before that becomes relevant for us anyway, and installers are a whole issue of their own, so I'll worry about it when that time comes.

I wouldn't store the .exe in Git. Your version control system needs to hold all your 'input files' and your 3rd party libraries are included in this, but output files like *.obj/*.o files or .exe files are (a) redundant as they can be recreated from the input files, and (b) likely to waste a lot of space as they change often and don't compress as well as source files.

You're probably thinking, "then how do the artists get the executable if it's not in Git?", but bear in mind that Git is a version control system - being a content distribution system as well is something of an accident. Some pro devs do throw everything in version control - this has some benefits, but it requires something more robust than Git (e.g. Perforce, Alienbrain; even SVN copes better than Git/Mercurial with such things), and it has downsides (those files change often but will never merge well). Some take a middle ground where they periodically package up approved stable builds and check them in. Others distribute these builds via a completely separate system (file share, Dropbox, whatever), so a build might get a tag in the VCS but doesn't change the repository in any other way; the output files are just copied to a designated place.

(Some of the problems with storing .exes will also apply to storing large art assets, so you'll need to consider all of this fairly soon.)

I wouldn't store the .exe in Git.

Sorry if I was being unclear! I never intended to actually store any of the built files in Git. My plan is to set up a batch file (or two, as I guess Unix needs its own version) that can easily be run to generate the exe file with CMake. The exe file will be generated inside the Git folders, but I will make sure that Git ignores all files in the build directory, so it won't take up any space in the repository. Each user will have to run the batch file if they want an up-to-date build.

Regarding art assets and other binary data, I had come to the conclusion that Git could handle that fairly well with LFS. I was also thinking of just storing the work files (like .psd) in the repository and having a script generate the sprite sheets and such from them into the build directory. Git just seemed like the best option when I was looking at different version control systems, and I would prefer to store assets in the same place as the code. Now the only problem I see with Git is that it doesn't provide file locking, which may or may not become a problem... And of course it can't merge the binary files, but that's just something we have to live with for now.

My plan is to set up a batch file (or two as I guess unix needs its own version) that can easily be run to generate the exe file with CMake.

I usually like to write these sort of cross-platform configuration scripts in Python, so I'm not having to maintain one for each platform. You can then provide platform-specific wrappers (configure[.sh] and configure.bat) that just invoke the Python script.

I'd also keep the configuration script separate from the build script, because CMake configuration should really only be done once, while building happens as many times as necessary. Also, the Visual Studio generators create project and solution files, so a developer might chose to build the project through the IDE as opposed to the command-line.

The exe file will be generated inside the Git folders, but I will make sure that Git ignores all files in the build directory so it won't take up any space on the repository.

Take a look at Out-of-source build trees. It will keep your revision-controlled directories nice and clean!


Thanks for the tips! I know absolutely no Python right now, but I guess it's almost inevitable that I would learn it someday... But I think that will have to wait a little while.

We'll see what I do to actually generate the executables. I just discovered the wonderful support that Visual Studio 2017 has for CMake, although that's more for programming than for building. Also, out-of-source builds were apparently what I had already been doing, so I'm fine on that at least.

Each user will have to run the batch file if they want an up-to-date build.


Okay. In my experience artists and designers get a bit funny about being expected to maintain a full compiler toolchain.

Regarding art assets and other binary data, I had come to the conclusion that Git could handle that fairly well with LFS.


Come back when your artists have to resolve their first merge conflict. :) Anyway, LFS is a help in some ways, but Git's normal method of operation is just not as effective as more traditional systems when it comes to this sort of thing. Artists don't normally care about every change to source code, and coders don't normally care about every change to source assets, but they're all going to get them in their updates anyway.

Now the only problem I see with Git is that it doesn't provide file locking, which may or may not become a problem...


It will. ;) You can mitigate this by ensuring there are few shared files, and that the ones that are shared, are text-based.

Thanks for the tips! I know absolutely no Python right now, but I guess it's almost inevitable that I would learn it someday... But I think that will have to wait a little while.

I believe every developer should learn Python at some point, if only because there is a mind-boggling number of packages and libraries out there for handling almost anything. You can get to the meat and potatoes of your task right away without having to worry about a lot of boilerplate.

At this point I've almost completely stopped using shell scripts :)

