
Juliean

Member Since 29 Jun 2010

Topics I've Started

Which alignment to use?

28 April 2016 - 01:12 PM

Hello,

 

in some places in my code, e.g. in my entity-component system, I do some custom memory management. What this means is that I allocate a contiguous block of memory and construct objects into it, like in this code sample:

#include <new>    // placement new
#include <vector>

template<typename Component>
class ComponentPool
{

public:
    Component* AddComponent(void)
    {
        // Grow the buffer first; resize() may reallocate, so the pointer
        // has to be computed from data() afterwards.
        const size_t offset = m_vMemory.size();
        m_vMemory.resize(offset + sizeof(Component));

        auto pComponent = (Component*)(m_vMemory.data() + offset);
        new(pComponent) Component();

        return pComponent;
    }

private:

    std::vector<char> m_vMemory;
};

That's not the full implementation, but you get the idea.
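
For context, usage then looks something like this (Position being a hypothetical example component, not from my actual codebase):

struct Position { float x, y, z; };

ComponentPool<Position> pool;
Position* pPosition = pool.AddComponent();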

 

So now I'm wondering about alignment. I've heard again and again that you should optimally align your memory for performance, and I know how to implement it, but I've never actually been able to find out:

 

What should you actually choose for alignment? 4 bytes, a power of two, 16 bytes? For simplicity, let's just talk about desktop platforms, 32 bit, and the MSVC compiler for now. I failed to find anything on Google and Stack Overflow, so does anyone have advice on how to choose or find out the optimal alignment for my case? "Component" is a class/struct that can be anything from 8 bytes to a few kilobytes (less likely, but oh well).
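
To make the question concrete, here is a minimal sketch of the kind of thing I would implement, assuming alignof(Component) is the right value to round up to, which is exactly the part I'm unsure about (AlignUp/AddAligned are hypothetical helpers):

#include <cstddef>
#include <new>
#include <vector>

// Round "offset" up to the next multiple of "alignment" (a power of two).
inline size_t AlignUp(size_t offset, size_t alignment)
{
    return (offset + alignment - 1) & ~(alignment - 1);
}

template<typename Component>
Component* AddAligned(std::vector<char>& memory)
{
    // Assumes memory.data() itself is sufficiently aligned; operator new
    // guarantees at least alignof(std::max_align_t).
    const size_t offset = AlignUp(memory.size(), alignof(Component));
    memory.resize(offset + sizeof(Component));
    return new(memory.data() + offset) Component();
}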

 

Thanks!


[VS2015] Natvis: inspect vector data in custom visualiser

15 April 2016 - 04:36 AM

Hello,

 

I'm doing custom memory management using a std::vector<char> for my components in a template class. With how it's used, the vector data is equivalent to this:

#include <vector>

template<typename ComponentType>
class ComponentPool
{
    std::vector<char> m_vComponents;

    void Test()
    {
        auto pComponents = (ComponentType*)m_vComponents.data();

        for(size_t i = 0; i < m_vComponents.size() / sizeof(ComponentType); i++)
        {
            // pComponents[i] is the i-th stored component
        }
    }
};

Just so you get the idea. Now I want to write a visualiser that lets me view the content of the vector as an array of ComponentType, as shown in the Test() function.

 

So since you cannot call functions in natvis, I have to access the vector's data directly. Looking at how vector is visualised in the shipped natvis:

  <Type Name="std::vector&lt;*&gt;" Priority="MediumLow">
      <DisplayString>{{ size={_Mylast - _Myfirst} }}</DisplayString>
      <Expand>
          <Item Name="[capacity]" ExcludeView="simple">_Myend - _Myfirst</Item>
          <ArrayItems>
              <Size>_Mylast - _Myfirst</Size>
              <ValuePointer>_Myfirst</ValuePointer>
          </ArrayItems>
      </Expand>
  </Type>

It seems you have to use _Myfirst. However, when I do it:

    <Type Name="acl::ecs::ComponentPool&lt;*&gt;">
        <Expand>
            <Item Name="Components">m_vComponents._Myfirst</Item>
        </Expand>
    </Type>

I get an error message:

D:\Acclimate\Repo\acclimate.natvis(42,28): Error: A pointer to a bound function may only be used to call the function

(translated from the German original: "Fehler: Ein Zeiger auf eine gebundene Funktion darf nur zum Aufrufen der Funktion verwendet werden.")

That error, by the way, points at the line where I access _Myfirst. Now I see that in the C++ headers _Myfirst is actually a function (_Myfirst()), but I wonder why it works in the STL natvis and not here.
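
For completeness, here is a variant I plan to try, assuming the VS2015 STL stores the raw pointers as plain data members inside m_vComponents._Mypair._Myval2 (an assumption worth checking against this toolset's <vector> header; untested sketch):

    <Type Name="acl::ecs::ComponentPool&lt;*&gt;">
        <Expand>
            <ArrayItems>
                <Size>(m_vComponents._Mypair._Myval2._Mylast - m_vComponents._Mypair._Myval2._Myfirst) / sizeof($T1)</Size>
                <ValuePointer>($T1*)m_vComponents._Mypair._Myval2._Myfirst</ValuePointer>
            </ArrayItems>
        </Expand>
    </Type>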

 

Does anyone have an idea why this doesn't work, or a solution? (I could just ditch std::vector<char> and use a raw char*, but that makes the code more complicated and unsafe.)

 

Thanks!


Wrong template instantiation with SFINAE

13 January 2016 - 08:47 AM

Hello,

 

so I've got this template code for handling optional reference counting in my type system. There is an issue where one specific type calls the wrong version of the template, but first I'll show as much of the code as I can. (BTW, I'm on Visual Studio 2015.)

struct refCountHelper
{
    template<typename ObjectType>
    static auto AddRef(ObjectType* pObject) -> decltype(AddRef(pObject, 0))
    {
        return AddRef(pObject, 0);
    }

private:

    template<typename ObjectType>
    static auto AddRef(ObjectType* pObject, int) -> decltype(pObject->AddRef(), void())
    {
        if(pObject)
            pObject->AddRef();
    }

    template<typename ObjectType>
    static void AddRef(ObjectType* pObject, long)
    {
    }
};

This is the class that handles the reference counting. If the object being passed in has an AddRef method, it calls it; otherwise an empty method is chosen (which hopefully gets compiled out in release builds).
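
For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of the same int/long overload-ranking trick, using hypothetical types (not from my engine):

#include <iostream>

struct Counted   { void AddRef() { std::cout << "AddRef called\n"; } };
struct Uncounted { };

struct refCountHelper
{
    template<typename ObjectType>
    static void AddRef(ObjectType* pObject)
    {
        AddRef(pObject, 0); // 0 is an int: the int overload is preferred
    }

    template<typename ObjectType>
    static auto AddRef(ObjectType* pObject, int) -> decltype(pObject->AddRef(), void())
    {
        if(pObject)
            pObject->AddRef();
    }

    template<typename ObjectType>
    static void AddRef(ObjectType*, long) // fallback when no AddRef() exists
    {
    }
};

int main()
{
    Counted counted;
    Uncounted uncounted;

    refCountHelper::AddRef(&counted);   // calls Counted::AddRef
    refCountHelper::AddRef(&uncounted); // silently picks the empty fallback
}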

 

The class in question, which gets used by this and produces the issue, is my asset class:

class ACCLIMATE_API BaseAsset
{
public:

    DECLARE_REF_COUNTER // see below

};

template<typename Type>
class Asset final :
    public BaseAsset
{
};

#define DECLARE_REF_COUNTER \
        void AddRef(void) \
        { \
            m_referenceCounter++; \
        } \
        void Release(void) \
        { \
            m_referenceCounter--; \
            if(!m_referenceCounter) \
                delete this; \
        } \
        int m_referenceCounter = 0;

So the BaseAsset class has a reference counter, and the Asset<Type> class is then used as the actual representation of an asset of a certain type.

 

Now I can do, e.g.:

refCountHelper::AddRef<asset::Asset<ai::BehaviourTree>>(&asset);

To increase the reference count of an asset. This is obviously used by other template code, like a wrapper for storing an arbitrary object for the script system:

template<typename ObjectType>
class ObjectContainer :
    public Object
{
public:

    // TODO: get rid of const-cast
    ObjectContainer(const ObjectType& object) : Object(Type()),
        m_pObject(&const_cast<ObjectType&>(object))
    {
        refCountHelper::AddRef<ObjectType>(m_pObject);
    }

private:

    ObjectType* m_pObject;
};

I hope you can still follow the code and get what it does.

 

Now we get to the issue. The code works fine except for one specific instance of an asset type. Look at this test code:

using TestAsset = asset::Asset<ai::BehaviourTree>;

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    TestAsset asset(L"Testing");

    core::refCountHelper::AddRef<TestAsset>(&asset); // 1 - works

    core::ObjectContainer<TestAsset> object2(asset); // 2 - doesn't - essentially calls the empty "static void AddRef(ObjectType* pObject, long)" in the refCountHelper class
}

This is what is happening:

In line "1", where I call the reference counting directly, it calls the correct template method and increases the ref counter.

In line "2", where the ObjectContainer<TestAsset> constructor calls the reference counting itself (implementation above), it calls the empty AddRef helper method instead.

 

What the heck? I checked the call stack; the template type is passed correctly. Also, it's really just one more layer of indirection, the actual refCountHelper::AddRef calls are identical, yet when I call it via the ObjectContainer it somehow fails to choose the correct method for calling AddRef on the Asset<> class.

 

Now here's the deal:

- This only happens with this specific asset type. Asset<Texture>, Asset<Script>, ... I pretty much checked all of them, and this very instance is the only one that fails.

- The test code is executed in a separate exe, but it also happens inside of the actual DLL where all the other code is located.

- It also only happens if I actually include the definition of the class in question. If I remove the header and forward-declare it instead:

namespace acl
{
	namespace ai
	{
		class BehaviourTree;
	}
}

It works as expected.

- It also works if I make a similar class inside the test project, where I have a templated class whose constructor calls the refCountHelper.

 

So even though this also happens inside the same (DLL) project, I suspect it has something to do with the goddamn DLL-**** again. Anyway, here is what the header of the class that is not working looks like:

#pragma once
#include "Types.h"
#include "..\Asset\ExportAsset.h"
#include "..\Core\Dll.h"
#include "..\System\Pointer.h"
#include "..\System\ClassHelper.h"

namespace acl
{
    namespace ai
    {
        class BehaviourNode;
        class Blackboard;

        class ACCLIMATE_API BehaviourTree
        {
            using NodeVector = std::vector<sys::Pointer<BehaviourNode>>;
        public:
            NON_COPYABLE(BehaviourTree); // deletes assignment-operator & copy-ctor

            BehaviourTree(const Blackboard* pBlackboard);
            BehaviourTree(const Blackboard* pBlackboard, NodeId uid);
            ~BehaviourTree(void);
        };

        EXPORT_ASSET_OBJECT(BehaviourTree) // exports some important template-declarations outside of the DLL, see below

    }
}

#define EXPORT_ASSET_OBJECT(Type) EXPORT_OBJECT(acl::asset::Asset<Type>) \
                                EXPORT_OBJECT(Type)

#define EXPORT_OBJECT(Type) \
        static_assert(acl::core::isObject<Type>::value, "Type is not an object"); \
        EXPORT_TEMPLATE_CLASS(acl::core::ObjectContainer<Type>); 

// exporting code for template symbols
#ifdef ACCLIMATE_EXPORT
#define ACCLIMATE_API __declspec (dllexport)
#define EXPORT_TEMPLATE template
#else
#define ACCLIMATE_API __declspec (dllimport)
#define EXPORT_TEMPLATE extern template
#endif

#define EXPORT_TEMPLATE_CLASS(...) \
    __pragma(warning(disable : 4251)) \
    EXPORT_TEMPLATE class ACCLIMATE_API __VA_ARGS__; \
    __pragma(warning(default : 4251))
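
For clarity, this is roughly what EXPORT_TEMPLATE_CLASS(acl::core::ObjectContainer<Foo>) boils down to on each side of the DLL boundary, warning pragmas omitted (Foo is a placeholder type):

// inside the DLL (ACCLIMATE_EXPORT defined): explicitly instantiate and export
template class __declspec(dllexport) acl::core::ObjectContainer<Foo>;

// in client code: suppress local instantiation and import the DLL's instantiation
extern template class __declspec(dllimport) acl::core::ObjectContainer<Foo>;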

So I am already exporting some of the symbols for the class in question (core::ObjectContainer<ai::BehaviourTree>), which is needed for the type system to work. I just cannot see how this case is any different, since this is how I declare all my other asset data classes, which do work, too:

#pragma once
#include "ExecutionUnit.h"
#include "Trigger.h"
#include "CustomTrigger.h"
#include "Stack.h"
#include "..\Asset\Asset.h"
#include "..\System\Pointer.h"

namespace acl
{
    namespace event
    {

        class ACCLIMATE_API Instance
        {
            Instance(const std::wstring& stName, core::TypeId type);
            Instance(const std::wstring& stName, core::TypeId type, unsigned int startUid, unsigned int staticUid);
            ~Instance(void);
            Instance(const Instance& instance);

            void operator=(const Instance&) = delete;
        };

        EXPORT_ASSET_OBJECT(Instance);

    }
}

_____________________________________________________________________________________

 

So this is it. I want to apologize for the huge amount of explanation and code; I just cannot produce a minimal working example, since the error is, as you can see, very specific.

I also hope I managed to explain what the code does and what the actual problem is. I'm actually not sure anyone even has an idea what the issue could be, but I'd like to see if someone has at least something I could try. What could the potential problem be? I just cannot see why it would choose the wrong implementation for this specific combination, Asset<ai::BehaviourTree>, only when it can see the actual definition in the header.

 

I'm pretty lost at this point; it's just one of those f****-up issues I don't have any idea about anymore, which probably has either a really complicated or a stupidly simple solution... so, got anything? Thanks!

WMVCore.dll missing under Windows 10

04 January 2016 - 07:35 AM

Hello,

 

I've just installed Windows 10 N (a clean install, no upgrade) and installed Visual Studio 2015 Update 1. Now, when trying to start my application, I get a DLL-missing error message box saying that "WMVCore.DLL" was not found, and the application fails to start.

 

So Google showed that you apparently need to install the "Media Feature Pack for N and KN versions of Windows 10", which I did for both the x86 and x64 versions. Still, even after a restart, it says the DLL cannot be found.

 

Did anyone have a similar issue with Windows 10? How can I solve this, and why do I even need to do this in the first place? The only external library I'm including is the DirectX June 2010 SDK, and yes, I'm obviously using the Windows headers, but what does my application need this DLL for?
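
In case it helps with the diagnosis: one thing I can do is dump the import table to see which module actually references the DLL, e.g. from a VS developer command prompt (MyApplication.exe standing in for my binary):

dumpbin /DEPENDENTS MyApplication.exe

though WMVCore.dll may of course also be loaded indirectly by one of the listed dependencies.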

 

Thanks!


SSD for programming?

29 December 2015 - 11:05 AM

Hello,

 

I've just bought an SSD (a Samsung 850 EVO, 500 GB), and the general idea was to put Windows and most applications (Visual Studio, etc.) on this disc.

 

However, what about my Visual Studio project(s)? Is it worth putting them on this disc as well for increased compilation speed? My main project has about 150k LoC split across ~2000 source files/headers, so I assume that file IO could also be a bottleneck for compilation, or is it mostly CPU-bound (I already have an i7 4790K and 16 GB of RAM)?

 

Also, in case I put my project on the SSD, how would it affect the longevity of the drive? The binaries/symbols and object files produced by Visual Studio take about 2.5 GB for a full compile (1.5 GB of obj files, 1 GB of binaries), and I don't really know how much Visual Studio has to reproduce for each compilation. I'm a little worried, since I sometimes spend 8+ hours coding, and if every compile is going to write somewhere around a gigabyte, I might use up the SSD very quickly... I think I remember Samsung guaranteeing the lifetime of the SSD for 3 years at 40 GB/day written, which doesn't seem like a lot given what VS produces. Is this a valid concern for the drive I bought?
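
For scale, a rough back-of-the-envelope using my own numbers from above (treating the guarantee as a hard write budget, which it isn't exactly):

40 GB/day / 2.5 GB per full rebuild = 16 full rebuilds per day within the guarantee
3 years * 365 days * 40 GB/day = ~44 TB of total guaranteed writes

and incremental builds should write far less than a full rebuild.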

 

Thanks!

