Strewya

Unity DLL question


Hey community!

I have a question about DLLs, and whether what I'm trying to do is possible at all.

Here's my issue. I would like to have all of my rendering code packed away nicely in a DLL, which I would link into my main project. However, I would also like to keep the rendering-specific API linkage (D3D or OpenGL) packed in that DLL as well, so my main project wouldn't have to worry about what rendering API is lying underneath.

Note that I'm not trying to use runtime API switching or anything, as right now I'm only working my way up to learning D3D9, and if (heavy emphasis) at any point I'd try supporting another API, I would build the project with different files. But I'd like it so that I only need to change the graphics classes in question (e.g. instead of having an LPDIRECT3DTEXTURE9 member it would be a GLint). The main project would never use these API-specific members; only the renderer itself would use them.

I know I could solve this by making an interface for every class that has any API-specific code in its header, but I really don't need runtime polymorphism since I'm only using one API in a build. I was thinking about doing something with typedefs, but I'd still be adding the graphics classes' headers into the main project, which would in turn try to include d3d9.h, which I'm hoping I don't have to link in the main project. Even if I never get to a point where I'd try adding support for OpenGL, it'd still be nice if all d3d9 linkage was contained in the renderer DLL.

So in short, I'm trying to:
1. link my renderer DLL into my main project
2. only link the D3D libs in the renderer DLL (EDIT: oh, and that includes the include/lib/source directories as well)
3. not have to link D3D in the main project
4. not use inheritance for any of the graphics classes, but still be able to use them in the main project (and not use any API-specific calls on those classes, mostly just getters)

I hope I explained myself well enough to get some advice. Thanks in advance! :) Edited by Strewya


Then you've got two options. 

1. Use C functions only. 

2. Use C++, along with some ungodly hacks, and revel in the ensuing maintenance nightmare!

 

The first 3 goals are achievable, but requirement 4 is a problem. The simplest way of doing this is with pure virtual interfaces (because you only need to export a single C function to create an instance of the main device class). To achieve all 4 goals, you'd need to use dbghelp.dll to demangle the names exported from the DLL, and then find a way to invoke those methods via member function pointers (to classes you can't use, because you can't link to them). The (now ancient, and not particularly great for new projects) function binding interface from Scott Bilas (http://scottbilas.com/files/2001/gdc_san_jose/fubi_slides_outline.pdf) might provide some pointers, although expect to spend a lot of time digging through the PE file format, and how that relates to x64 (the code is 32-bit). It's possible, but really, really fugly.

A much simpler (and more efficient) solution is to simply use static libs for the renderer code. Compile D3D and OGL versions of your app, and then use a third 'game-config' app to execute the relevant exe (based on whether the user has selected D3D or GL). Any other libs you are using that aren't dependent on rendering code should be fairly easy to move to DLLs that can be reused by both executables, which kinda gives you the same result, but somewhat in reverse (and you'll retain your sanity).

 

Edit:

I've just re-read your question, and I may have misread the bit about runtime switching. OK, so if runtime switching is not a goal, then yes, it's possible via the pimpl idiom (which should bury the platform-specific stuff deep within the DLL, and leave your exported classes to expose only themselves [rather than their platform-specific members]). Generally though, you'll end up with exactly the same amount of indirection as if you'd used pure virtual interfaces, and the code will be even more annoying to use in practice (pure virtual interfaces are the easiest way to achieve this goal, and will leave you with a runtime-switchable DLL as a bonus). There are other ways to ensure ABI compatibility between DLLs, but in this particular case, your code would need to conform to some extremely rigid coding conventions (as an aside, I started putting together an article for GameDev that explains how to do this, but that's on hold for the next few weeks whilst I move house!). Either don't bother, or go with virtual interfaces, or with pimpl. Those are your best options imho.
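For illustration, a minimal pimpl sketch might look like the following (all names are hypothetical; the commented-out member shows where the real LPDIRECT3DTEXTURE9 or GLint would live, and the placeholder width stands in for data the renderer would actually fill in):

```cpp
#include <memory>
#include <string>

// --- Texture.h: all the main project sees; no d3d9.h / GL headers here ---
class Texture
{
public:
    explicit Texture(const std::string& name);
    ~Texture();                  // must be defined where Impl is complete

    // only API-agnostic getters are exposed
    const std::string& name() const;
    int width() const;

private:
    struct Impl;                 // defined in the DLL's .cpp, next to d3d9.h
    std::unique_ptr<Impl> m_impl;
};

// --- Texture.cpp: compiled into the renderer DLL ---
struct Texture::Impl
{
    // LPDIRECT3DTEXTURE9 d3dTexture;  // or GLint glTexture; never visible to the exe
    std::string name;
    int width = 256;             // placeholder value for this sketch
};

Texture::Texture(const std::string& n) : m_impl(new Impl) { m_impl->name = n; }
Texture::~Texture() = default;   // out of line, where Impl is a complete type
const std::string& Texture::name() const { return m_impl->name; }
int Texture::width() const { return m_impl->width; }
```

Note that the destructor (and any special members) must be defined in the .cpp, after Impl is complete, or std::unique_ptr will refuse to compile against the incomplete type.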

Edited by RobTheBloke


I've done the whole renderer-in-a-runtime-swappable-dll-accessed-via-pure-virtual-interface thing in the past.

I can't be sure why you don't want to use interface inheritance, but I can say that if my engine were making so many calls to the renderer that the virtual function call overhead became a performance concern, I'd think I must be doing something very wrong.

Edited by vreality


So in short, I'm trying to:
1. link my renderer DLL into my main project
2. only link the D3D libs in the renderer DLL (EDIT: oh, and that includes the include/lib/source directories as well)
3. not have to link D3D in the main project
4. not use inheritance for any of the graphics classes, but still be able to use them in the main project (and not use any API-specific calls on those classes, mostly just getters)

I hope I explained myself well enough to get some advice. Thanks in advance! :)

If that is all, then use the D3D/OpenGL headers only in the renderer's definitions (.cpp files) and never expose them in the declarations of the render classes. That's all you need to do. You will have to write a lot more code to achieve that kind of isolation with a usable level of interoperability, though. And when you do, watch out for the performance side (avoid memory copies and other unneeded work).


Thanks for the replies.

It seems like the last point is the troublemaker here, so I guess I'll drop it and go with interfaces. What I wanted to avoid is having to create an interface for each class that I'd put into the graphics DLL, but I haven't made a list of which classes from the DLL would actually be exposed outward. I thought I'd have to expose all the low-level stuff (like vertex/index buffers, textures and such), but if I compose them into higher-level concepts (like sprite, model etc.), then I end up with just a few interfaces. Is this a good approach?

 

While I'm at it, I've got another question. If I use the interfaces to call the underlying concrete objects, does that mean that all of them have to be heap allocated, unless I use globals inside the DLL (and I don't want to if I can help it)? Also, everything created within the DLL's factory method should also be sent back to the DLL to destroy instead of destroying it manually, right? And lastly, the runtime switching: is that possible if I link the .lib and use the DLL for implementation, or do I need to use LoadLibrary and manually do the switching? I'm actually not sure if it's even possible to change the DLL on the fly while the exe is running, as the DLL would be in use, right? And even if that's possible, how would the exe know to reload the DLL?

 

Again, thanks for the advice! :)


I thought I'd have to expose all the low-level stuff (like vertex/index buffers, textures and such), but if I compose them into higher-level concepts (like sprite, model etc.), then I end up with just a few interfaces. Is this a good approach?

Whichever approach is going to be easiest for you. There's no right or wrong way really.
 

While I'm at it, I've got another question. If I use the interfaces to call the underlying concrete objects, does that mean that all of them have to be heap allocated, unless I use globals inside the DLL (and I don't want to if I can help it)?

Correct. Stack allocating a vertex buffer isn't the most intelligent approach ;)
 

Also, everything created within the DLL's factory method should also be sent back to the DLL to destroy instead of destroying it manually, right?

You can call delete on the objects, but you WILL need a virtual destructor. I still prefer a destroy method myself, but that's just me.
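A tiny sketch of why that virtual destructor matters when the exe deletes objects it got from the DLL's factory (names hypothetical; the static counter just stands in for the real resource cleanup a derived destructor would do):

```cpp
// Exported interface: the virtual destructor makes `delete device` in the exe
// dispatch to the derived destructor that lives inside the DLL.
class Device
{
public:
    virtual ~Device() {}
};

// Implementation hidden inside the DLL; the exe never sees this definition.
class D3D9Device : public Device
{
public:
    D3D9Device()  { ++s_live; }   // stands in for acquiring API resources
    ~D3D9Device() { --s_live; }   // stands in for releasing them
    static int s_live;
};
int D3D9Device::s_live = 0;

// The single C-style factory function the DLL would export.
Device* createDevice() { return new D3D9Device; }
```

Without the `virtual` on `~Device`, deleting through the base pointer would skip `~D3D9Device` entirely (undefined behaviour), and the API resources would leak.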

And lastly, the runtime switching: is that possible if I link the .lib and use the DLL for implementation, or do I need to use LoadLibrary and manually do the switching?

Yes, you will need to use LoadLibrary (or dlopen for POSIX). *If* you have a pure virtual base class, then the minimum you'd need to faff around with is:

 

// plugin API

[source]
class Device
{
public:
   virtual ~Device() {}
};

extern "C"
{
typedef Device* (*createDeviceFunc)();
}
[/source]

 

// plugin Implementation

[source]
class MyDevice : public Device
{
public:
  MyDevice() {}
};

extern "C"
{
__declspec(dllexport) Device* createDevice() { return new MyDevice; }
}
[/source]

// in your app

[source]
#include "PluginAPI.h"

createDeviceFunc createDevice = 0;
HMODULE plugin = 0;
Device* device = 0;

void loadPlugin(const char* dllname)
{
  plugin = LoadLibraryA( dllname );
  if(plugin)
  {
    createDevice = (createDeviceFunc)GetProcAddress(plugin, "createDevice");
    if(createDevice)
    {
      device = createDevice();
    }
  }
}

// then somewhere in your startup code
loadPlugin("myGLRenderer.dll");
[/source]

I'm actually not sure if it's even possible to change the DLL on the fly while the exe is running, as the DLL would be in use, right?

You'll need to make sure it's no longer in use, and then it can be unloaded safely.

[source]
void unloadPlugin()
{
  delete device; //< this must also delete all buffers, textures, etc
  device = 0;
  createDevice = 0;
  FreeLibrary(plugin);
}
[/source]

 

And even if that's possible, how would the exe know to reload the DLL?

You'll have to tell it to load the plugin again!

[source]
loadPlugin("myD3D11Renderer.dll");

[/source]
 

What I wanted to avoid is having to create an interface for each class that I'd put into the graphics DLL, but I haven't made a list of which classes from the DLL would actually be exposed outward. I thought I'd have to expose all the low-level stuff (like vertex/index buffers, textures and such), but if I compose them into higher-level concepts (like sprite, model etc.), then I end up with just a few interfaces. Is this a good approach?

Yes.  You should keep the renderer interface used by the rest of the engine as minimal and simple as possible.

Also, everything created within the DLL's factory method should also be sent back to the DLL to destroy instead of destroying it manually, right?

No.  Your dll will generally be compiled against the same memory manager as the rest of the engine (unless you do something to prevent it), and your engine is linked against the interface you expose from the dll, so generally, the engine can delete objects it gets from the dll.

That being said, code defined in the dll, including destructors and anything they call, can only be called while the dll is loaded.  So it's a bad idea for the renderer to be handing out objects implemented in the dll.

In my implementation, the engine only handled a single dll specific object - the renderer itself.  All other render related objects handled by the engine were defined outside of the dll.  They were written in terms of the renderer's interface.  All data resources with API specific formats were held within the renderer internally, and referenced externally by ID (so the resource could swap out when the renderer did, without breaking external references).

On a side note, this means that if you ever do decide to make your renderer hot-swappable, it'll need to do a little fancy footwork for the post-swap renderer to acquire all the API specific data resources corresponding to those which had been held by the pre-swap renderer.
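That resource-by-ID scheme could be sketched roughly like this (all names are hypothetical; the ApiTexture placeholder stands in for the actual LPDIRECT3DTEXTURE9 / GLuint that the renderer would hold privately):

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

using TextureId = std::uint32_t;

// Stand-in for the API-specific object the renderer keeps to itself;
// because it also remembers its source, a post-swap renderer could
// recreate the API object without invalidating the engine's IDs.
struct ApiTexture { std::string source; };

class Renderer
{
public:
    // The engine gets back a plain integer, never a pointer into the DLL.
    TextureId loadTexture(const std::string& file)
    {
        TextureId id = m_nextId++;
        m_textures.emplace(id, ApiTexture{ file });  // real code creates the API object here
        return id;
    }

    bool hasTexture(TextureId id) const { return m_textures.count(id) != 0; }

private:
    TextureId m_nextId = 1;
    std::unordered_map<TextureId, ApiTexture> m_textures;
};
```

Since the engine only ever holds integers, swapping the renderer out from under it never leaves dangling pointers behind.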

I'm actually not sure if it's even possible to change the DLL on the fly while the exe is running, as the DLL would be in use, right? And even if that's possible, how would the exe know to reload the DLL?

Yes, you can unload a dll.  It's up to you to make sure you don't use it before loading or after unloading it.

I'm not sure I understand the question - but the exe "knows" because you write it to know.  You unload one dll and load another when you see fit.

Edited by vreality

I wasn't quite thinking when asking the hot-swap question; in my mind I was referring to hot-swapping the exact same DLL during runtime, which is impossible since you can't overwrite a file that is in use. As for hot-swapping to another DLL, I do know what's needed, but it requires a lot of tech design and preparation to get right (mostly related to managing renderer-specific resources when the swap occurs).

But I did figure that I'd need two sets of resource caches: one within the DLL itself, which would manage the renderer-specific resources, and one outside the DLL, which would manage ID references to the resources inside the DLL. And I like the ID referencing system because it allows for resource hot-loading, at least during the development phase. :)

However, that does mean writing two separate caches for the same resource type. I'm guessing it's possible to template it somehow, but that requires even more design. :D

Btw, on a completely unrelated note to DLLs, but related to Direct3D itself, is there any "standard" (or a majority opinion) on using COM smart pointers? I'm not sure whether I should use CComPtr (or whatever it's called, it's been a while since I read about them), or simply write my own wrapper class which would call AddRef() and Release() in the copy ctor/operator=. It's a bit hard to find information on whether they should be used or not.
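For reference, a hand-rolled wrapper of that sort would look roughly like this (a sketch only; FakeCom is a hypothetical stand-in for a real IUnknown-derived interface, which follows the same AddRef/Release contract):

```cpp
// Hypothetical stand-in for a COM object; real COM interfaces derive from
// IUnknown, but the AddRef/Release semantics are the same.
struct FakeCom
{
    static int s_live;           // counts live objects, for the sketch only
    int refs = 1;                // COM objects are born with one reference
    FakeCom()  { ++s_live; }
    ~FakeCom() { --s_live; }
    void AddRef()  { ++refs; }
    void Release() { if (--refs == 0) delete this; }
};
int FakeCom::s_live = 0;

template <typename T>
class ComPtr
{
public:
    ComPtr() : m_ptr(nullptr) {}
    explicit ComPtr(T* p) : m_ptr(p) {}               // adopts the caller's reference
    ComPtr(const ComPtr& o) : m_ptr(o.m_ptr) { if (m_ptr) m_ptr->AddRef(); }
    ComPtr& operator=(const ComPtr& o)
    {
        if (o.m_ptr) o.m_ptr->AddRef();               // AddRef first: self-assignment safe
        if (m_ptr) m_ptr->Release();
        m_ptr = o.m_ptr;
        return *this;
    }
    ~ComPtr() { if (m_ptr) m_ptr->Release(); }

    T* get() const { return m_ptr; }

private:
    T* m_ptr;
};
```

The ordering in operator= (AddRef before Release) is the classic detail hand-rolled versions get wrong; it's what makes self-assignment safe.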

Thanks!


Btw, on a completely unrelated note to DLLs, but related to Direct3D itself, is there any "standard" (or a majority opinion) for using COM smart pointers?

 

If DLLs are involved, I'd avoid smart pointers completely (personally speaking). If you *really* need that kind of thing, and it works for you, then CComPtr will be better than the alternatives (i.e. boost et al).


...in my mind i was referring to hot swapping the exact same DLL during runtime...

Ah, I get it.

It's been a while, so I don't know for certain, but it seems like the running program shouldn't be holding the dll file open after it loads it.  If not, you should be able to overwrite the file after it's loaded.  Then you could have the engine periodically check its modified date, and initiate swap proceedings when it sees that date change.
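A portable sketch of that polling idea, using C++17's std::filesystem rather than the Win32 file APIs (the class name and polling strategy are my own invention, not anything from the thread):

```cpp
#include <chrono>
#include <filesystem>
#include <fstream>

namespace fs = std::filesystem;

// Remembers a file's last-write time and reports when it changes; the engine
// would poll this once a frame (or on a timer) and start swap proceedings
// when it returns true.
class FileWatcher
{
public:
    explicit FileWatcher(fs::path path)
        : m_path(std::move(path)), m_lastWrite(fs::last_write_time(m_path)) {}

    bool changed()
    {
        const auto t = fs::last_write_time(m_path);
        if (t == m_lastWrite)
            return false;
        m_lastWrite = t;          // remember the new timestamp
        return true;
    }

private:
    fs::path m_path;
    fs::file_time_type m_lastWrite;
};
```

On Windows the equivalent would be GetFileAttributesEx/CompareFileTime, or ReadDirectoryChangesW if you want notifications instead of polling.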

With a little attention to data handling and architecture you should be able to update lots of different systems on the fly without restarting the engine.  You could also choose to load dlls only in a development build, and link to static libraries for a release build.

Edited by vreality
