Where To Place External Libraries


Hello forum!

Where do you store external libraries?

My current approach is like this - for SFML, for example:

C:\SOMEWHERE\SFML\2.3.2\

This allows me to quickly change between versions (if needed) and keep an overview.

Followed by a system variable pointing to that SFML folder.
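For illustration, if the project were built with CMake, consuming that variable could look roughly like this (a minimal sketch; SFML_DIR is just whatever the system variable is named, and FindSFML.cmake is the find module bundled with SFML 2.x):

    cmake_minimum_required(VERSION 3.0)
    project(my_game)

    # Resolve SFML from the system variable, e.g. C:\SOMEWHERE\SFML\2.3.2
    set(SFML_ROOT "$ENV{SFML_DIR}")
    # FindSFML.cmake ships with SFML 2.x
    list(APPEND CMAKE_MODULE_PATH "${SFML_ROOT}/cmake/Modules")
    find_package(SFML 2.3 COMPONENTS graphics window system REQUIRED)

    add_executable(my_game main.cpp)
    target_include_directories(my_game PRIVATE ${SFML_INCLUDE_DIR})
    target_link_libraries(my_game ${SFML_LIBRARIES})

Switching versions is then just a matter of repointing the variable.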

However, this does not fully please me. What if I want to create a new project and push that project together with the external library to my Git?

Should I simply include SFML and all the other libraries into my project folder?

The reasoning for having external libraries in my Git is simple: I want to be able to just grab my project and compile it, without downloading any further resources, when I switch computers.

Should I simply do:

C:\SOMEWHERE\Projects\ProjectXYZ\libs\SFML\ ?

So, where should my external libraries reside?

How do you structure your projects?


I think it depends a bit on what environment you are working in...

I am using C#, with sfml.net... which is a bit of an edge case with dlls b/c it has two types of library requirements...

There are the native C/C++ assemblies, which I can't reference from managed C# and which have to be loaded dynamically by the sfml.net libraries,

and there are the .net CLR assemblies which do need to be referenced directly and can be linked statically...

Also, b/c I am using Visual Studio, the base file structure looks like:

<Root>
    Proj.sln
    <Proj>
        Proj.csproj
        ...morefiles...
        <moredirs>
            ...morefiles...

I added a Resources directory just below the root, and within that directory I added a directory for extLibs (the native C/C++ assemblies) and a directory for libs (the CLR .NET assemblies)...
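A rough sketch of how that hooks into the .csproj (the assembly names and paths are illustrative assumptions, based on a typical sfml.net setup): the CLR assemblies get referenced from libs, and the native ones get copied from extLibs next to the output:

    <!-- reference the managed sfml.net assembly directly -->
    <ItemGroup>
      <Reference Include="sfmlnet-graphics-2">
        <HintPath>Resources\libs\sfmlnet-graphics-2.dll</HintPath>
      </Reference>
    </ItemGroup>
    <!-- the native CSFML dll can't be referenced from managed code;
         just copy it beside the exe so it is found at runtime -->
    <ItemGroup>
      <None Include="Resources\extLibs\csfml-graphics-2.dll">
        <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      </None>
    </ItemGroup>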

So... that's where I store them... I figure that for a given project, I select the version of the lib at the start of coding and don't plan on updating that version for the lifespan of the project, b/c in the general case you can't expect to replace xyz_v1.dll w/ xyz_v2.dll and assume backwards compatibility... So changing lib versions is a manual process BY DESIGN: I know exactly which version is being used, and I am reminded to test for backwards compatibility and refactor code when changing versions.

As Paragon says, it depends on your intentions. I personally use two git repositories, which has its own downsides but also some preferable bits. So I have project x in git 'x' and its dependencies in 'x_deps'. I build x_deps and copy the appropriate headers and the built libs (automated via CMake in my setup) into a folder of 'x', such that when working in x I have the minimal needed set of items. Those get committed with the source, so everything is usable without messing around. Anytime I need to update an external, I jump back to x_deps, do the updates, and recopy the includes and libraries to 'x'. The benefit is that my primary work repository stays relatively clean and small, but it can be a hassle to update libs, since I have multiple targets and have to move from machine to machine updating them.
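A rough sketch of what that CMake-automated copy step in x_deps can look like (the 'somelib' target and the paths are placeholders, not the actual setup):

    # in x_deps: install headers and built libs straight into the main project 'x'
    set(X_PROJECT_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../x" CACHE PATH "checkout of the main project")

    install(TARGETS somelib
            ARCHIVE DESTINATION "${X_PROJECT_DIR}/deps/somelib/lib"   # static/import libs
            LIBRARY DESTINATION "${X_PROJECT_DIR}/deps/somelib/lib"   # shared objects (unix)
            RUNTIME DESTINATION "${X_PROJECT_DIR}/deps/somelib/bin")  # dlls (windows)
    install(DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/somelib/include/"
            DESTINATION "${X_PROJECT_DIR}/deps/somelib/include")

After building, running cmake --build . --target install drops the headers and libs into 'x', ready to commit.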

It is one way of doing things.

I select the version of the lib at the start of coding and don't plan on updating that version for the lifespan of the project, b/c in the general case you can't expect to replace xyz_v1.dll w/ xyz_v2.dll and assume backwards compatibility...

Perfect reasoning. Totally forgot about this. Thanks a lot!

I personally use two git repositories, which has its own downsides but also some preferable bits. So I have project x in git 'x' and its dependencies in 'x_deps'.

Really interesting, especially since it keeps the repos clean. I could also think of branching them, but that kind of defeats the real concept of branches, I assume. It just sounds like bad practice.

I will think about getting two repos, might be worth it!

Thanks to you both :) This has already helped me a lot to get a completely different view on this topic!

Typical approach for most games is to check in external deps with the source tree. It's also common to check in the compiled binaries instead of the sources; no reason to spend time compiling dependencies on full rebuilds if you aren't ever changing them. Thankfully git-lfs makes checking in binaries a less insane idea than it was on git previously.
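For reference, setting that up is only a couple of commands (the tracked patterns and paths are just examples):

    git lfs install                  # once per machine/repo
    git lfs track "*.dll" "*.lib"    # store binary deps via LFS
    git add .gitattributes           # the tracking rules live here
    git add deps/
    git commit -m "Check in prebuilt dependencies via LFS"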

An alternative git-friendly setup is to put all your external deps into their own repository and then use git submodules to pull them in. There's not a particularly compelling reason to do this for binary dependencies IMO, and there are plenty of good reasons to stay far away from git submodules. :)


I was about to say: if you're compiling from source, you can just use git submodules within some folder in your repo, which allows mixing submodules and binary libs as needed. I would suggest you fork the repo first, though, so that if you need to make changes to the CMakeLists file (or whatever files its build system uses) you can commit them to your fork.

For example - you may compile freetype from source but use binaries for glfw. Freetype would then be a submodule and glfw would be a version tracked folder in your repo. If this is your project dir:

deps/
    glfw/
        include/   (header files)
        bin/       (DLLs here)
        lib/       (all unix libs and windows import/static libs)
    freetype/      (submodule)
        src/
        include/
        etc...

This has worked pretty nicely for me. When you clone the repo, pass the --recursive flag to automatically init/update the submodules.
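Concretely, the commands involved (the URLs and paths are placeholders):

    # add a source dependency as a submodule
    git submodule add https://github.com/example/freetype.git deps/freetype

    # fresh clone with all submodules initialized in one go
    git clone --recursive https://github.com/example/yourproject.git

    # or, in an already-existing clone
    git submodule update --init --recursive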

Typical approach for most games is to check in external deps with the source tree.

Agreed, and for good reasons.

If we need to re-create an old build for any reason we'll need to have all the parts. That is for code AND data AND tools.

If there is a reason we need to re-create an old build, like an executive comes out and says "I want to show the same thing we showed at E3", or perhaps the project gets mothballed for a year and people need to get started on it again, then you'll need all the old stuff.

If you are pulling from a public source perhaps the library version 2.73 is no longer available; maybe they've moved on to 2.96 and didn't bother to archive the 2.73 release. Now you're forced to hunt down the older version or potentially try to rebuild and migrate and fix anything that broke.

That applies to tools as well. Let's say you've got something that compiled with a specific version of an in-house tool, but that tool has moved on. Maybe you were using an art export tool to extract data from all the Maya files and import that data into your automated process. Now you cannot build any more because the new version of the tool doesn't work with the old data formats.

And finally, it applies to game data as well. If development has gone on for six months, perhaps the asset file format has changed. For the old build to run, I need the assets in the old format. If everything is kept in a database that isn't archived, or kept on another file system that is not part of the identical version control system, then I cannot re-create a matching old build. You'll have the code but not the assets.

Everything goes in version control. If it is an online resource then snapshots of the online resources are made and included as well. This includes localization database snapshots and online service snapshots, where necessary. This is a big reason why systems like git don't work well in these environments; git is great if all you're storing is a few megabytes of text-based source code. Throw in a few gigabytes of assets and git falls down, let alone the terabytes of raw assets common in large modern games.

At work (and at home), I use a combination of Git repositories, a build server (Dropbox), and a project management + VCS/build chain tool I wrote. Before deciding to automate the entire process, however, I used to maintain a "thirdparty" folder in my code directory that contained built copies of the required third-party libraries.

"The code you write when you learn a new language is shit.
You either already know that and you are wise, or you don’t realize it for many years and you are an idiot. Either way, your learning code is objectively shit." - L. Spiro

"This is called programming. The art of typing shit into an editor/IDE is not programming, it's basically data entry. The part that makes a programmer a programmer is their problem solving skills." - Serapth

"The 'friend' relationship in c++ is the tightest coupling you can give two objects. Friends can reach out and touch your privates." - frob

I strictly keep external libraries in source control, alongside the project using them. Checking out the source tree is enough to build the project without any other steps. frob's explanation covers it completely.


At my last job I separated the 3rd-party code from the project by putting each 3rd-party library in a separate repo and committing the binaries in the project repo.

It happens from time to time that you have to apply a patch to a commercial middleware because you have to deliver now, but the middleware developer runs its own release cycles.

On top of that, your coworkers only have to work with one piece of software for file versioning instead of a couple.

In my private projects I use cmake, and I built a little framework on top of it to handle git, svn, and http dependencies.

project:
    build/[cmake_build]/DEPENDENCIES/...
    code/[targetname]/...
    dist/...

If there is an svn, git, or http archive, then the cmake script pulls/checks out/downloads it into the DEPENDENCIES directory of the cmake build.

I commonly specify a revision, which disables git/svn updates, so I can ensure that no issues occur from 3rd-party updates.

The big disadvantage is that you have to be online the first time the cmake build is created.

I put all binaries into a dist directory, and if I use closed source I put all that stuff in a private git repo with the same structure and handle it as a dependency.
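Stock CMake can approximate this kind of setup with the ExternalProject module (a rough sketch, not the actual framework described above; the repository URL and revision are placeholders):

    include(ExternalProject)

    # Pin to an exact revision so 3rd-party updates can't sneak in;
    # the empty UPDATE_COMMAND also skips the update step on re-runs.
    ExternalProject_Add(somelib
        GIT_REPOSITORY https://example.com/somelib.git
        GIT_TAG        1a2b3c4d                # pinned revision
        PREFIX         "${CMAKE_BINARY_DIR}/DEPENDENCIES/somelib"
        UPDATE_COMMAND ""
        INSTALL_DIR    "${CMAKE_BINARY_DIR}/DEPENDENCIES/install"
        CMAKE_ARGS     -DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>)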

