mdias

Members
  • Content count

    882
Community Reputation

823 Good

About mdias

  • Rank
    Advanced Member
  1. Game engine or programming language?

      I think this is something that people either don't know about or just plain ignore when trying to dismiss engines like Unity/UE4. If something isn't supported, or you just don't like how it's done in their scripting language, there is nothing stopping you from creating a native-level plugin to do it. It's worth mentioning that going this route will make your life harder when targeting other platforms, defeating one of the main features of using "vanilla" Unity.
  2. (Super) Smart Pointer

     I would go with an event approach. Maybe right now your only requirement is to check whether the object exists (which weak_ptr would solve very easily, even though what Ravyne said stands), but in the future you may want your marine to look for another target even if the current one still exists (maybe the enemy has applied a stealth "power up"). For maximum flexibility I would implement an event/observer pattern where your marine registers with the enemy's events and listens for a "destroyed" event, and possibly more events.

     void Marine::SetTarget( ITargettable* target )
     {
         if( _currentTarget )
             _currentTarget->RemoveListener( this );

         _currentTarget = target;

         if( _currentTarget )
             _currentTarget->AddListener( this );
     }

     void Marine::OnTargettableEvent( const ITargettable* target, const TargettableEvent& event )
     {
         if( event.type == TargettableEvent_Destroyed || event.type == TargettableEvent_StealthApplied )
         {
             SetTarget( nullptr );
         }
     }
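     A minimal sketch, assuming the observer interfaces the snippet above relies on; the names ITargettableListener and Notify and the std::vector-based listener list are illustrative, not from the original post:

     #include <algorithm>
     #include <vector>

     enum TargettableEventType { TargettableEvent_Destroyed, TargettableEvent_StealthApplied };
     struct TargettableEvent { TargettableEventType type; };

     class ITargettable;

     class ITargettableListener
     {
     public:
         virtual ~ITargettableListener() = default;
         virtual void OnTargettableEvent( const ITargettable* target, const TargettableEvent& event ) = 0;
     };

     class ITargettable
     {
     public:
         virtual ~ITargettable() = default;

         void AddListener( ITargettableListener* listener ) { _listeners.push_back( listener ); }
         void RemoveListener( ITargettableListener* listener )
         {
             _listeners.erase( std::remove( _listeners.begin(), _listeners.end(), listener ), _listeners.end() );
         }

     protected:
         // Derived targets (e.g. an Enemy class) would call this when destroyed or stealthed.
         void Notify( const TargettableEvent& event )
         {
             for( ITargettableListener* listener : _listeners )
                 listener->OnTargettableEvent( this, event );
         }

     private:
         std::vector<ITargettableListener*> _listeners;
     };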
  3. Those were just examples. Shaders are a better example of a resource that has common functionality (set resources, constant buffers) and that is further derived into specific types representing each stage (example: IDeviceChild -> IResource -> IShader -> IVertexShader; a short sketch of such a hierarchy follows this post).

     You seem overly inflexible regarding OO principles, which I understand, but again, given the narrow scope of the library, I think (and I believe many others do) that sacrificing some principles in exchange for an easier/lighter API has more benefits than not doing so.

     I don't think we can say the D3D API is a bad API because it implements GetType() on its resources, or because it has an interface hierarchy. Every problem you point to ends up only being relevant to the implementation details, which is why I'm being so "stubborn" with my interface design; apparently the design isn't all that bad, we just can't find a way to properly implement it. I'll probably have to abandon it for an apparent lack of tools in C++ to implement it properly. A simple way to implement a virtual method by creating an alias to another base method (effectively hinting the compiler to reuse the vtable pointer) would solve the extra indirection introduced in my previous example...

     I know, I'm doing that right now actually. However, I'm not convinced it's a good way to go. The more I go with it, the more I think I should abandon that idea, have a reference-counting object as the base class, and just provide a helper template function to handle automatic AddRef/Release. Anyway, that's a minor problem I can analyse later.
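     A minimal sketch of an interface hierarchy along the lines described above; the member functions (SetConstantBuffer, SetResource) and the stage-specific types are illustrative placeholders, not an existing API:

     class IConstantBuffer;
     class IShaderResourceView;

     class IDeviceChild
     {
     public:
         virtual ~IDeviceChild() = default;
     };

     class IResource : public IDeviceChild
     {
     };

     class IShader : public IResource
     {
     public:
         // Functionality common to every shader stage, declared once here.
         virtual void SetConstantBuffer( unsigned slot, IConstantBuffer* buffer ) = 0;
         virtual void SetResource( unsigned slot, IShaderResourceView* view ) = 0;
     };

     // Stage-specific refinements; they add little yet, but give the API distinct, type-safe handles.
     class IVertexShader : public IShader {};
     class IPixelShader  : public IShader {};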
  4. Wouldn't it just be easier to add/subtract a random number at load time, so that you already have "randomized" data and can act deterministically from there? If you want to repeat the process several times, keep a copy of the original data and, every time a new truck goes, recalculate (with some randomness) when the next truck should go.
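     A minimal sketch of that load-time jitter idea, assuming the data is a list of departure times; the function name and jitter range are illustrative:

     #include <random>
     #include <vector>

     // Keep the original values untouched and regenerate a randomized working copy
     // whenever a new run starts; everything afterwards stays deterministic.
     std::vector<double> RandomizeSchedule( const std::vector<double>& originalTimes, double maxJitter )
     {
         static std::mt19937 rng{ std::random_device{}() };
         std::uniform_real_distribution<double> jitter( -maxJitter, maxJitter );

         std::vector<double> randomized;
         randomized.reserve( originalTimes.size() );
         for( double t : originalTimes )
             randomized.push_back( t + jitter( rng ) ); // add/subtract a small random offset
         return randomized;
     }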
  5. Your problem occurs because humans don't perceive sound linearly. Try:

     float lin_volume = (float)(veN - 50) / 30.f;
     alListenerf( AL_GAIN, powf( lin_volume, exp ) );

     where for "exp" you can try e (Euler's number, ~2.718) or some other value (> 0.0f) that sounds better to you.
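     A minimal, self-contained sketch of that perceptual mapping, assuming a UI volume value in the 0-100 range and an exponent tuned by ear; the function name and range are illustrative:

     #include <cmath>
     #include <AL/al.h>

     void ApplySliderVolume( int sliderValue /* 0..100 */, float exponent /* > 0, tune by ear */ )
     {
         float linear = sliderValue / 100.0f;         // normalize the slider to [0, 1]
         float gain   = std::pow( linear, exponent ); // compress toward perceived loudness
         alListenerf( AL_GAIN, gain );                // OpenAL expects a non-negative listener gain
     }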
  6. Not right now, I'm still in the early stages of developing this wrapper. However, I can see myself adding reference counting to all objects through the base class, or adding an IResource::GetType method. Each addition of a method, or worse, each change to its behaviour, would be prone to introduce inconsistency if the implementations don't share code.

     You're probably right. But I think there are valid reasons for D3D to be the way it is, such as it being more important to closely mirror the hardware/driver capabilities than to meticulously follow OO-correctness concepts in a relatively small-scope library. The truth is, however ugly their implementation may or may not be, to the user things are abstracted in a way that IMO makes sense. And this is what I'm after.

     Yeah, that works but looks ugly. I've thought of a similar solution, which would be to remove virtual inheritance and:

     class Concrete : public IConcrete, public Base
     {
     public:
         int MyBaseMethod() { return Base::MyBaseMethod(); }
     };

     ...which would have similarities to the PIMPL idiom suggested by Hodgman, requiring an additional level of indirection. And since the base interfaces are pure interfaces with no data, it should be OK, even if ugly...

     However, this really sounds like we're fighting the language's limitations here... It feels really weird to need workarounds like these to solve an OO problem in a "specialized-in-OOP" language.

     How do you handle resource-specific functionality, such as mapping the buffer and so on?

     This is exactly the reason why I'm after pure interfaces; otherwise I'd just typedef everything in the preprocessor and live a simple life with only a few virtual methods and single inheritance :)
  7. The idea behind a multibind-in-one-call entry point is to closely mirror functionality provided by both OpenGL and Direct3D (see the sketch after this post). Your idea of calling MakeActive on several textures and transparently batching it to the backend API, while attractive and certainly more correct in "human reasoning" terms, will introduce new complexity (such as tracking state to know when to actually bind the textures on the backend API), which is a step away from the very thin wrapper I'm looking for. Plus, if both OpenGL and D3D already provide entry points with that exact functionality, why should I create a higher level of management?

     We can continue to discuss this (happy to do so if it will teach me something), but to be honest I came here looking to solve the interface hierarchy + implementation hierarchy problem.

     First of all, let me tell you I completely agree with you. However, I don't think it matters much in this specific case. Users are unlikely to expect an OpenGL resource to successfully bind to a Direct3D one. My oglGraphicsContext implementation doesn't need to know the exact implementation type; it only needs to check for compatibility (for example, whether the final implementation implements GetGLHandle()).

     I'm not fixated on the idea of casting to an implementation pointer, and I actually hate that I see no other option than doing so in order to be able to do things like multi-bind. D3D does this too; there is no ID3DTexture2D::MakeActive method. Surely the device checks for compatibility with the object you throw at it, probably through QueryInterface or similar.

     I was confused myself when I first created this topic. Downcasting is what I mean.

     Because a Resource IS-A GraphicsChild, a Texture IS-A Resource and so on. If you're asking why I need the implementation hierarchy; well, I don't need it, I just want to share implementation code. Say I have 10 types of device resources; it doesn't make much sense to re-implement the IResource methods manually in each of the derived types, does it?

     For sure, getting rid of the implementation hierarchy would solve all my problems, but if I have to do that (and there's no other, better way) I begin to wonder whether C++ isn't missing something...
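     A minimal sketch of such a one-call multi-bind on the OpenGL side; it assumes a GL 4.4 loader is already set up (glBindTextures comes from ARB_multi_bind) and uses a hypothetical ITexture interface exposing its GL handle purely for illustration:

     #include <vector>
     #include <GL/glew.h> // assumed loader; glBindTextures requires OpenGL 4.4 / ARB_multi_bind

     class ITexture
     {
     public:
         virtual ~ITexture() = default;
         virtual GLuint GetGLHandle() const = 0; // hypothetical accessor, only for this example
     };

     void BindTextures( GLuint firstUnit, const std::vector<ITexture*>& textures )
     {
         std::vector<GLuint> handles;
         handles.reserve( textures.size() );
         for( const ITexture* texture : textures )
             handles.push_back( texture->GetGLHandle() );

         // One backend call binds the whole range of texture units, mirroring
         // D3D11's PSSetShaderResources( slot, count, views ).
         glBindTextures( firstUnit, static_cast<GLsizei>( handles.size() ), handles.data() );
     }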
  8. I don't think we're understanding each other at all. Maybe the fact that English is not my native language is somehow preventing me from explaining my problem properly.

     Where exactly should I pass an array/container of ITexture pointers if you're telling me I should use ITexture::MakeActive()? And doesn't that mean, again, that this container could hold ITexture pointers created from another context?

     I'm not really questioning the validity of LSP. I just don't see a way to fully comply with it in this particular problem. I must add constraints to what is accepted. We don't live in a perfect world; there are exceptions I want to deal with appropriately. Say I give it a texture that is resident in another GPU's memory; I want to return an error in that case (instead of invalid rendering or a crash), even if in theory it should work, and I could document that behaviour.

     The *::MakeActive approach would solve this, though, but I would like to be able to MakeActive several textures in one call.

     Again, you misunderstood me. I mean the correctness of the internal state of the objects; I'm not talking about language correctness. In this specific case, by correctness I mean "the texture must belong to the same context you're trying to bind it to for rendering".

     Curious question: how would D3D behave if it were given, say, a shader from another device?

     Well, the OpenGL-or-not problem is not relevant at all to the question I'm asking. They still have interface inheritance to which they must provide implementations somewhere. When I said "how does D3D do it", I meant: do they duplicate code for each object that needs to provide an implementation of ID3D11Resource, or do they solve this problem some other way? They still must cast to an implementation pointer somewhere...

     This makes sense and I had already thought of it. However, then ITexture::MakeActive() would be nothing but a bridge to the oglGraphicsContext's method, which I don't have a problem with unless there's a better solution. Also, again, this would mean a call to MakeActive() for each texture, which is also not the way D3D does it.

     Exactly. So do you agree (excluding personal preference) that oglGraphicsContext could accept an ITexture and down-cast it there?

     Maybe I should try to put my main question another way: how can I implement an interface hierarchy and reuse implementation code (without ugly "hacks" like macros) while avoiding multiple inheritance?
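     One common answer to that last question is a template "mixin": the shared implementation is written once as a class template that derives from whichever interface it is instantiated with, so each concrete class keeps a single-inheritance chain and a plain static_cast still works. A minimal, self-contained sketch under that assumption, with hypothetical names (oglDeviceChildImpl, GetDeviceCtx, ...):

     #include <cassert>

     class IGraphicsContext {};

     class IDeviceChild
     {
     public:
         virtual ~IDeviceChild() = default;
         virtual IGraphicsContext* GetDeviceCtx() const = 0;
     };

     class IResource : public IDeviceChild {};
     class ITexture  : public IResource { public: virtual void MakeActive() = 0; };

     // Shared implementation written once; it derives from whichever final
     // interface it is instantiated with, so there is no multiple implementation inheritance.
     template<typename Interface>
     class oglDeviceChildImpl : public Interface
     {
     public:
         explicit oglDeviceChildImpl( IGraphicsContext* ctx ) : _ctx( ctx ) {}
         IGraphicsContext* GetDeviceCtx() const override { return _ctx; }
     private:
         IGraphicsContext* _ctx;
     };

     class oglTexture final : public oglDeviceChildImpl<ITexture>
     {
     public:
         explicit oglTexture( IGraphicsContext* ctx ) : oglDeviceChildImpl<ITexture>( ctx ) {}
         void MakeActive() override { /* GL-specific binding would go here */ }
     };

     int main()
     {
         IGraphicsContext ctx;
         oglTexture tex( &ctx );
         ITexture* itex = &tex;

         // No virtual inheritance anywhere, so the downcast inside the context
         // implementation stays a plain static_cast.
         oglTexture* impl = static_cast<oglTexture*>( itex );
         assert( impl->GetDeviceCtx() == &ctx );
         return 0;
     }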
  9. This would certainly solve some problems, such as not needing to cast anymore. However, the way I see it, the Texture object would be modifying the owner's (oglGraphicsContext) state, which doesn't make much sense. I guess this is also the reason the D3D API goes this way instead of backwards. [Edit] It would also make it impossible to bind more than one object in a single call.

     I disagree. The fact that SetTexture takes an ITexture doesn't automatically mean it will be OK to pass any ITexture. I can certainly write in the documentation that the ITexture must have been created by the same IGraphicsContext and be in a specific state.

     I think I would like the API, but not the fact that I would be double-checking whether the objects are valid everywhere. I am trying to adopt concepts from future graphics APIs, such as PSOs, which alleviate a lot of this by only checking at PSO creation time. However, some state just doesn't belong in a PSO...

     I am basing my class design on D3D11's, which seems to do things exactly like I'm trying to implement, yet somehow manages not to need virtual inheritance.

     I think I might be worrying too much about correctness and should think more about ease of use (adopt ::MakeActive() and ignore the fact that I'm modifying state that doesn't belong to this child object), but I'm afraid that by doing so I might be shooting myself in the foot in the long run... This still doesn't solve being unable to static_cast from an IResource to an ITexture, for example. How does D3D do it? The only way I see it working is to avoid shared implementations and only implement things in the final class, which can be a maintenance nightmare...
  10. Actually, the current workaround I have found is to have a void* ITexture::GetPrivateDeviceData(), which doesn't feel very clean but works for now. I'm still searching for a better way.

     oglTexture::oglTexture( oglGraphicsContext& ctx, const TextureDescription& desc )
     {
         glGenTextures( 1, &_gl_texture ); // _gl_texture is a private/protected member variable
         ... // bind and create the texture with the contents of "desc"
     }

     GLuint oglTexture::GetGLTexture() const // this is the method I wish oglGraphicsContext to see
     {
         return _gl_texture;
     }

     bool oglGraphicsContext::SetTexture( unsigned slot, ITexture* texture )
     {
         // check that the texture was created by this graphics context
         if( texture->GetDeviceCtx() != this )
             return false;

         auto ogl_texture = static_cast<oglTexture*>( texture ); // this is the cast that's needed and won't work with virtual inheritance
         GLuint native_texture = ogl_texture->GetGLTexture();
         ... // use native_texture for whatever...
     }

     I understand, and I've been thinking myself that maybe I'm not choosing the best approach, but I can't really think of anything as "clean"/"elegant" as this, which is why I posted here.

     If I implement the ITexture::GetPrivateDeviceData() method as I mentioned above, I can do this, but I still feel this shouldn't really be a visible method. Plus, having virtual inheritance on the interfaces will bring more problems to the end client trying to cast, for example, from IResource to ITexture... This could again be worked around by having an IDeviceChild::CastTo( Type ), but then it will have a performance cost and an ugly feel...
  11. I think I probably didn't explain my problem well enough. I agree with what you're saying; however, this is not the problem. The clients using this wrapper will only interact with the interfaces, without ever knowing about the implementation.

     This is the problem. Imagine this piece of code on the client side:

     ITexture* myTexture = ...;
     myAbstractedDevice->SetTexture( 0, myTexture );

     And this is the implementation:

     class ITexture : public IResource { ... };
     class oglTexture : public ITexture, public oglResource { ... };

     void oglDevice::SetTexture( unsigned slot, ITexture* texture )
     {
         assert( texture );
         assert( texture->GetServer() == this->GetServer() );
         oglTexture* ogl_texture = static_cast<oglTexture*>( texture );
         ...
     }

     If oglTexture only inherited from ITexture, this would work perfectly, but then I'd have to re-implement the other interfaces (IResource, IDeviceChild, ...) with mostly duplicated code, which makes me think it's not the right way to solve this.
  12. Hi,

     I'm facing a problem I can't seem to find a good answer for.

     I'm trying to build a wrapper for D3D11 and OpenGL. For this I have some interfaces similar to D3D11's. Let's assume these interfaces:

     class IDeviceChild;
     class IResource : public IDeviceChild;
     class IBuffer : public IResource;

     Now, what I wish to do is for each of those interfaces to have its own implementation class, like this:

     class oglDeviceChild : public IDeviceChild;
     class oglResource : public IResource, public oglDeviceChild;
     class oglBuffer : public IBuffer, public oglResource;

     Now, this obviously won't work as-is because of the diamond problem I have here, and the only way to solve it is to use virtual inheritance in the interface classes themselves. But that leads to another problem: if I have an oglResource, I can't static_cast it to an oglBuffer.

     It surely must be possible to solve this, since D3D does it (and doesn't use virtual inheritance in its interfaces). It also looks like virtual inheritance only marks the class being inherited as virtual, instead of that class plus its parents...

     The only way out I see right now is to avoid multiple inheritance and only inherit the interface, but that doesn't look like a proper solution to me... Can anybody shed some light?

     Thanks in advance.
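     A minimal, self-contained sketch of the diamond described above and the cast limitation it brings, using empty placeholder classes: once the interfaces are virtual bases, static_cast from a base pointer back down to the implementation is ill-formed, so dynamic_cast (with its RTTI cost) is what the language leaves you:

     class IDeviceChild { public: virtual ~IDeviceChild() = default; };
     class IResource : public virtual IDeviceChild {};
     class IBuffer   : public virtual IResource {};

     class oglDeviceChild : public virtual IDeviceChild {};
     class oglResource    : public virtual IResource, public oglDeviceChild {};
     class oglBuffer      : public IBuffer, public oglResource {};

     int main()
     {
         oglBuffer buffer;
         IResource* res = &buffer;

         // oglBuffer* b = static_cast<oglBuffer*>( res ); // error: cannot static_cast from a virtual base
         oglBuffer* b = dynamic_cast<oglBuffer*>( res );   // compiles and works, but costs an RTTI lookup
         return b != nullptr ? 0 : 1;
     }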
  13. No. What you may be wasting is bandwidth. The concept to keep in mind is this: passing information to and from the graphics card is slow, therefore you should do it as few times as possible. You should also know that it only really matters while your game is running interactively -> you should do slow stuff at load time instead of run time, because "slow" only matters for real-time work; it's still fast enough for a loading screen.

     Now, every time you tell your GPU to render something, information passes to and from the GPU*, so if you could manage to render your whole scene with just 1 draw call, it would be awesome, but you're probably going to need many more. However, if you group (as much as you can) static geometry into big buffers, instead of keeping every static object in its own separate buffer, you will be able to render many more objects with just 1 draw call! Example: calling Draw() 1000 times, each time rendering just 1 polygon, is much slower than rendering 10000 polygons with just 1 Draw() call. This is called geometry batching, in case you want to research further (there is a small sketch of the idea after this post).

     That's a tough question engine developers have to fight with every day! Indeed, dividing your geometry into 2 groups (static geometry and non-static geometry) is a good start! Basically, you should find the approach that lets you render more stuff with fewer Draw calls and state changes.

     * Driver details apply here, but ignore these for now.

     P.S.: Geometry instancing also allows you to render the same (small?) vertex buffer multiple times with different properties per "object" with just 1 draw call. This is useful for rendering things like a bunch of trees or rocks without actually duplicating the objects in the vertex buffer (wasting memory).
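     A minimal sketch of the batching idea above, assuming the static meshes are already in world space; the types and names are illustrative:

     #include <cstdint>
     #include <vector>

     struct Vertex { float position[3]; float uv[2]; };

     struct Mesh
     {
         std::vector<Vertex>        vertices;
         std::vector<std::uint32_t> indices;
     };

     struct StaticBatch
     {
         std::vector<Vertex>        vertices; // all static geometry, merged at load time
         std::vector<std::uint32_t> indices;  // indices rebased into the merged vertex buffer
     };

     StaticBatch BuildStaticBatch( const std::vector<Mesh>& staticMeshes )
     {
         StaticBatch batch;
         for( const Mesh& mesh : staticMeshes )
         {
             const std::uint32_t baseVertex = static_cast<std::uint32_t>( batch.vertices.size() );
             batch.vertices.insert( batch.vertices.end(), mesh.vertices.begin(), mesh.vertices.end() );
             for( std::uint32_t index : mesh.indices )
                 batch.indices.push_back( baseVertex + index ); // rebase each index
         }
         return batch; // upload once and render the whole group with a single draw call
     }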
  14. Mono or Lua - Scripting

     I just noticed I posted an example passing the struct by value instead of by reference. I don't have the code at hand, but as far as I remember, to pass by reference you just pass a normal C++ pointer as an argument to the method.

     Like I said, I never worked with Lua, so I'm not familiar with its strengths or weaknesses. However, it looks weird to me that different parts of the engine are scripted with different languages. I don't know if it applies to your project, but I'd abstract away all interactions with scripting so you could script any part of the engine with any one or several implementations (including Mono, Lua and others) with relative ease. Maybe this way you could stick to just one of the scripting engines, as you want, and implement another one at a later stage if the need arises.

     Regarding "Lua vs Mono", I believe you'll have to tell us more about the kind of project you're working on.
  15. Mono or Lua - Scripting

     You mean pointers to structs or to objects?

     Here's an example of me passing a pointer to a C++ on-stack struct:

     bool RayCastResults::call_internalAddResultOrdered( RayCastResult& result )
     {
         MonoObject* excep = nullptr;
         MonoObject* resObject = mono_value_box( _domain, _classResult, &result );
         _method_internalAddResultOrdered( _obj, resObject, &excep );
         if( excep )
         {
             mono_print_unhandled_exception( excep );
             return false;
         }
         return true;
     }

     "_classResult" is:

     _classResult = mono_class_from_name_case( img, "Engine.Physics", "RayCastResult" );

     Which on the C# side is:

     using System;
     using System.Collections.Generic;

     namespace Engine
     {
         namespace Physics
         {
             public class RayCastResults
             {
                 private void internal_AddResultOrdered( RayCastResult result )
                 {
                     ...
                 }
             }

             public struct RayCastResult
             {
                 public override string ToString()
                 {
                     return string.Format( "[RayCastResult] distance: {0}", distance );
                 }

                 public float distance;
                 public CollisionObject body;
                 public Shape shape;
                 public Vector3 normalWorld;
                 public Vector3 pointWorld;
             }
         }
     }

     If passing a pointer to an object, you just need to pass the MonoObject* pointer. Sometimes it's a good idea to have internal methods on the C# side where you do some processing on the data passed from C++ before calling the real final method.