Static or Dynamic Linking?

19 comments, last by SiS-Shadowman 15 years, 3 months ago
Hi, I wanted to know the more standard way of designing a game. Should I keep recompiling all my .cpp files, create a .lib/.a core and link statically, or create a .dll core and link (or load) dynamically? I've read about the differences and benefits, but I don't notice many games using DLLs for their core code, just for libraries (like OpenAL, threading, etc.). Steam uses a lot of DLLs and manages updates well; Torque games are mostly recompiled entirely (as libs). My guesses would be that DLLs have speed issues, aren't necessary for individual games, have less preferable interfaces (you can't instantiate objects directly, you must use DLL create methods, and still need an API), and/or hurt cross-platform compatibility. Thanks in advance.
010001000110000101100101
Depends mostly on whether you use templates and inline functions a lot. If you do, then you shouldn't bother making any separate libraries, but instead compile everything at once. Libraries are mostly useful for APIs, or at extension points you know about in advance when designing the game. For most cases, recompiling everything will be the best option.
Well, I haven't heard of speed issues with using dlls yet, I doubt it has any impact at all.

Well, using interfaces is great. Using an engine (whatever low/high-level mechanism you want to implement) is a lot easier when you separate the implementation from the interface. It's not a big deal to call a separate create method to actually create an object (like a mesh, a texture, whatever).

If your objects depend on some higher-level class (like a mesh depending on the device), you will need to pass a device reference to the constructor or use some sloppy global variable to access the device from within the constructor. So it's no big deal to call device->create instead.
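A minimal sketch of that create-method pattern (Device and Mesh are illustrative names, not from any real engine): the device acts as the factory, so the mesh never needs a global to find it.

```cpp
#include <memory>
#include <string>

// Hypothetical device/mesh pair: the mesh depends on the device, so the
// device hands out meshes through a create method instead of the Mesh
// constructor reaching for a global variable.
class Mesh {
public:
    explicit Mesh(std::string name) : name_(std::move(name)) {}
    const std::string& name() const { return name_; }
private:
    std::string name_;
};

class Device {
public:
    // The device is the factory; callers never construct a Mesh directly.
    std::unique_ptr<Mesh> createMesh(const std::string& name) {
        return std::make_unique<Mesh>(name);
    }
};
```

In use it reads naturally: `auto mesh = device.createMesh("cube");`.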

But do you really think that games don't make extensive use of dlls? You should take a look at the Bin directory of UT3, there are hundreds of dlls in there.

But in the end, it's up to you. If you're only ever creating one application, it doesn't really matter. But if you want to reuse some components in different applications, separating code is a neat thing.
Currently I'm writing a little engine that is compiled into a DLL. I've also written a sample that depends on it, as well as my editor, which needs the DLL too. I can imagine I'll need to create a lot more applications for the game (some more editors for various tasks), so I separate my code as much as possible (and useful).
Thanks guys! Anymore info I need to know?

Still not sure if I should put the entire core into one DLL, or each system of the core into separate DLLs (sound, graphics, etc.).

Quote:Original post by SiS-Shadowman
Well, I haven't heard of speed issues with using dlls yet, I doubt it has any impact at all.


Good to know. Seems to me it would have some sort of impact, but I guess it's minimal. I'm talking about LoadLibraryEx and GetProcAddress now.
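For reference, the runtime-loading pattern you mention looks roughly like this. The sketch uses the POSIX equivalents (dlopen/dlsym/dlclose) so it stays portable; on Windows the corresponding calls are LoadLibraryEx, GetProcAddress, and FreeLibrary, but the shape is identical. The libm.so.6 name assumes a glibc system.

```cpp
#include <dlfcn.h>  // POSIX dynamic loading; Windows would use <windows.h>

// Load the C math library at runtime and look up cos() by name,
// instead of linking against it at build time.
double cos_via_runtime_load(double x) {
    void* lib = dlopen("libm.so.6", RTLD_NOW);  // assumption: glibc system
    if (!lib) return -1.0;  // real code would report dlerror()

    using CosFn = double (*)(double);
    auto fn = reinterpret_cast<CosFn>(dlsym(lib, "cos"));
    double result = fn ? fn(x) : -1.0;

    dlclose(lib);
    return result;
}
```

The cost is one symbol lookup up front; once you hold the function pointer, calling through it is no slower than any other indirect call.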

Quote:Original post by SiS-Shadowman
Well, using interfaces is great. Using an Engine (whatever low/high level mechanism you want to implement) is alot easier when you separate the implementation from the interface. It's not a big deal to call a seperate create method to actually create an object (like a mesh, a texture, whatever).


Right right right, those are some good points. I'm not a big fan of C interfaces since it's not OO and a bit disorganized (dozens of unrelated groups of functions). If I keep the API (.h files) separate, I can at least pass the main object across the DLL boundary and work with that, so I won't need many interface functions (just a lot of create methods in the core).
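That "one create function, then work through the main object" idea can be sketched like this. IEngine and CreateEngine are made-up names; in a real build the interface would live in a shared header and the implementation inside the DLL, with the single extern "C" factory keeping C++ name mangling out of the export table.

```cpp
// Shared header: pure virtual interface, no implementation details leak.
class IEngine {
public:
    virtual ~IEngine() = default;
    virtual void tick() = 0;
    virtual int frameCount() const = 0;
};

// The one C-linkage entry point the application resolves.
extern "C" IEngine* CreateEngine();

// --- implementation side (inside the DLL in practice) ---
namespace {
class Engine final : public IEngine {
public:
    void tick() override { ++frames_; }
    int frameCount() const override { return frames_; }
private:
    int frames_ = 0;
};
}  // namespace

extern "C" IEngine* CreateEngine() { return new Engine(); }
```

After the one factory call, everything else goes through virtual methods on the returned object, so the exported surface stays tiny.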

Quote:Original post by SiS-Shadowman
But do you really think that games don't make extensive use of dlls? You should take a look at the Bin directory of UT3, there are hundreds of dlls in there.


Oh of course, I was speaking generally. I know lots of games use lots of DLLs; I just don't see as many compiling separate DLLs (Engine.dll, Sound.dll, GUI.dll, etc.) for the core versus compiling it into the executable. I'll take a look at UT3 though. :)

Quote:Original post by SiS-Shadowman
But in the end, it's up to you. If you're only creating one application at all, it doesn't really matter. But if you want to reuse some components in different applications, separating code is a neat thing.


I'm going to need the core code for some tools (map editor, etc.). Something like this:

/Synergy/API
/Synergy/SDK
/Synergy/Editor
/Synergy/

Quote:Original post by all_names_taken
Depends mostly on whether you use templates and inline functions a lot. If you do, then you shouldn't bother making any separate libraries


Ah, I was thinking of that as well. If I've got most of my code in the header files, why bother (inline/templates)? Then again, preprocessor directives such as templates severely increase compile time, don't they (if I use a lot)?
010001000110000101100101
I think there is no perfect answer. I've set up the make process so that my app runs on several Linux distributions and on Windows without forcing the user to install 3rd-party components (so I statically link wxWidgets, boost::regex, boost::serialization and LLVM). Real packages will be provided later; they'll be needed, because my app is currently 25 MiB! So for now, the build is just a hack for beta versions.


Some arguments

Static Linking
+ fewer dependencies and assumptions (at deployment time)
- may bake in an outdated API that simply crashes after an OS update
- may carry security holes that the central package manager can't fix
- bigger binary

Dynamic Linking
* the inverse of static linking
- different apps may depend on incompatible versions of a library (though distributors generally give those different names)

Of course, just putting every DLL into an application-specific folder reduces the listed arguments for dynamic linking to absurdity. So what can you do about that last point on an OS without central package management? I can't help there.

All in all, I don't know how Vista and 7 handle it, but on a GNU/Linux system with a central package system, my recommendation is dynamic linking plus a package. On pre-Vista (?) Windows: static linking (or its equivalent, dynamic linking while distributing all the DLLs with your app, in the app folder), unless there is a guarantee that a compatible version can always be linked (also in the future).
Quote:Original post by Dae
Good to know. Seems to me it would have some sort of impact, but I guess it's minimal. I'm talking about LoadLibraryEx and GetProcAddress now.


I wonder why you want to use that method to access the DLLs. Can't you just use implicit (load-time) DLL linking, as supported by MSVC for example? (I'm pretty sure every well-known compiler/linker offers it.)
Fiddling with those functions has to be a pain. It is possible to dynamically load a DLL and acquire a function pointer for a class (a static function of an interface implementation that creates an instance of that implementation, for example), but it's a huge mess to deal with those things.
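The usual machinery behind implicit linking is a single export macro that expands to dllexport when building the DLL and dllimport when consuming its header. MYENGINE_API and MYENGINE_EXPORTS are placeholder names; this sketch keeps declaration and definition in one translation unit, and the attribute becomes a no-op off Windows.

```cpp
// One macro, two meanings: the DLL project defines MYENGINE_EXPORTS,
// consumers do not. Elsewhere the attribute expands to nothing.
#if defined(_WIN32)
  #if defined(MYENGINE_EXPORTS)
    #define MYENGINE_API __declspec(dllexport)
  #else
    #define MYENGINE_API __declspec(dllimport)
  #endif
#else
  #define MYENGINE_API  // no-op on non-Windows platforms
#endif

// Declared in the shared header, defined inside the DLL; the consumer
// just calls it and the linker resolves it via the import library.
MYENGINE_API int add_frames(int a, int b);

MYENGINE_API int add_frames(int a, int b) { return a + b; }
```

With this in place there is no LoadLibrary/GetProcAddress code at all; the loader wires up the imports before main() runs.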
Quote:Original post by Dae
Still not sure if I should put the entire core into 1 DLL, or each system of the core into separate DLL's (sound, graphics, etc.).


1 DLL, then break related classes/funcs out into a separate DLL when you identify that a collection of classes in the DLL can be refactored into a reusable library. Until you've written enough code to make it worthwhile, don't bother planning a DLL structure for your game, because it will almost certainly be wrong.

Quote:Good to know. Seems to me it would have some sort of impact, but I guess it's minimal. I'm talking about LoadLibraryEx and GetProcAddress now.


There is zero performance difference between DLL and static linkage. The only real caveat is that if you're statically linking, the compiler may decide to inline a call to a library function (with the right project settings). Not worth worrying about.

Quote:Right right right, those are some good points. I'm not a big fan of C interfaces since it's not OO


It's a language, and nothing stops you implementing OO systems with it. Softimage XSI implements its entire object model using C function calls; there is not a single base class in sight... It's still an entirely OO architecture.
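The standard way to get OO out of plain C calls is an opaque handle plus free functions. The Counter_* names below are purely illustrative; the point is that callers never see the struct layout, so the "class" stays fully encapsulated without any C++ class in the public API.

```cpp
// OO in C style: an opaque handle plus free functions that act on it.
struct Counter;  // opaque to callers; layout is hidden

extern "C" {
Counter* Counter_Create();
void     Counter_Increment(Counter* c);
int      Counter_Value(const Counter* c);
void     Counter_Destroy(Counter* c);
}

// Implementation side, hidden behind the API (and a DLL boundary, if any).
struct Counter { int value = 0; };

extern "C" {
Counter* Counter_Create()                { return new Counter; }
void     Counter_Increment(Counter* c)   { ++c->value; }
int      Counter_Value(const Counter* c) { return c->value; }
void     Counter_Destroy(Counter* c)     { delete c; }
}
```

State and behavior stay bundled together exactly as with a class; only the call syntax changes from `c->increment()` to `Counter_Increment(c)`.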

Quote:
Oh of course, I was speaking generally. I know lots of games use lots of DLL's, I just don't see as many compiling separate DLL's (Engine.dll, Sound.dll, GUI.dll, etc.) for the core versus compiling it into the executable. I'll take a look at UT3 though. :)


The only real benefit of DLLs for games is that they save you from crippling link times on big projects. In the case of UT3, the majority are related to Unreal Ed.

Quote:Ah, I was thinking of that as well. If I've got most of my code in the header files, why bother (inline/templates)? Then again, preprocessor directives such as templates severely increase compile time, don't they (if I use a lot)?


Templates are not preprocessor directives, though they do increase build times. The problems with templates in DLLs are not trivial to solve, so you'd be better off avoiding templates like the plague in any exported DLL interface (that includes everything from the STL and everything from Boost). It's fine to define and use templates within a DLL's implementation, though...
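One common way to follow that advice: keep the exported functions down to plain C types and let the implementation use the STL freely behind the boundary. SetTitle/GetTitle are made-up names for illustration.

```cpp
#include <string>

// The risky version would export something like
//   void SetTitle(const std::string& title);
// since std::string's layout depends on the compiler and CRT on each side.
// Safer: only C types cross the boundary; convert internally.

namespace {
std::string g_title;  // internal state; never crosses the DLL boundary
}

extern "C" void SetTitle(const char* title) {
    g_title = title ? title : "";
}

extern "C" const char* GetTitle() {
    return g_title.c_str();
}
```

The implementation still gets all the convenience of std::string; only the exported signatures are restricted.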
To add to phresnel's points, for DLLs:

- Exporting classes is not favoured unless you can guarantee that the executable linking to the DLL was built with the same compiler. This is because the DLL's export table contains names mangled by the compiler, and different compilers produce different names.

- You can't export STL types (except vector) from DLLs. This changed the way I adopted DLLs completely.

Tip - You may consider adopting the COM approach. Consider how Direct3D is distributed and what you link to etc.

Tip - Run dumpbin.exe from the visual studio command prompt with the following command line "dumpbin.exe /exports [path to dll]" and it will show you the interface to the DLL.

- Consider a C interface to hide the internals of the DLL. Run the dumpbin command on some well-known DLLs to get an idea of how they are constructed. I did this for the Crysis DLLs and they all had just 2 C interface functions.

- Consider a solid memory management strategy, this falls in line with COM. Investigate this. You need to make sure you have a schema for what allocations are made and cleaned up in which locations.

- Consider dependencies. This is important: make sure you define up front which DLLs depend on which other DLLs.

Tip - Use delay-loaded DLLs. Do an MSDN search for this; it is a linker option in the project settings of the project that consumes the DLL.

- I personally prefer defining DLL interfaces with DEF files.
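A module-definition file for a hypothetical MyEngine DLL might look like this (LIBRARY and EXPORTS are the actual DEF-file keywords; the library and symbol names are placeholders):

```
; MyEngine.def - passed to the linker via /DEF:MyEngine.def
LIBRARY   MyEngine
EXPORTS
    CreateEngine
    DestroyEngine
```

Listing exports this way also keeps the exported names unmangled and under explicit control, rather than scattered across __declspec annotations in headers.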

That's all I can think of off the top of my head.
Quote:Original post by Dave
- You can't export STL types (except vector) from DLLs. This changed the way i adopted DLLs completely.


It is possible, and it has worked like a charm for me for years. You will, however, need to link dynamically to the CRT (don't use the static one) and disable some warnings (C4251 in this case), and you're ready to go.
You need to be careful to always use the same compiler for both the DLL and the executable (or DLL) that links to it. There have been reports on the net that a DLL compiled with VC6 doesn't link well with a program compiled with a later Visual Studio compiler.
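The warning in question, C4251, fires when an exported class has members of STL types; the conventional fix is a narrow pragma around the class. Scene is a placeholder name and the export macro is elided for brevity; silencing the warning is only sound under exactly the conditions above (same compiler, shared dynamic CRT).

```cpp
#include <cstddef>
#include <string>
#include <vector>

// MSVC warns (C4251) that std::vector<std::string> "needs to have
// dll-interface"; we accept that because all modules share one CRT.
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable : 4251)
#endif

class Scene {  // in a real build: class MYENGINE_API Scene
public:
    void addName(std::string name) { names_.push_back(std::move(name)); }
    std::size_t count() const { return names_.size(); }
private:
    std::vector<std::string> names_;  // the member C4251 complains about
};

#ifdef _MSC_VER
#pragma warning(pop)
#endif
```

On non-MSVC compilers the pragmas vanish and the class compiles unchanged.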
Dynamic linking can be useful if you need to target different architectures (such as Intel/AMD-specific instruction sets), or if you plan on using some method of run-time patching, but doing it properly involves a bit of extra work.

If you're not planning to do anything that requires the use of DLLs, then stick with static.


This topic is closed to new replies.
