The test works most of the time, but has a 30% chance of producing the following error:
tests/src/test_StressTests.cpp:58: Failure
Value of: entity.getComponent<Position>()
Actual: Position(1994, 1000)
Expected: Position(2000, 1000)
How is this possible? Since x is incremented by 2 and y by 1 on every update, Position(1994, 1000) means position.x missed three increments while position.y missed none. Sometimes it's also Position(1998, 1000) or Position(1996, 1000). The only pattern I can observe is that position.y is never incorrect. This is my test case:
#include <gmock/gmock.h>
#include <ontology/Ontology.hpp>
#include <math.h>
#define NAME Stress
using namespace Ontology;
// ----------------------------------------------------------------------------
// test fixture
// ----------------------------------------------------------------------------
struct Position : public Component
{
    Position(int x, int y) : x(x), y(y) {}
    unsigned int x, y;
};

inline bool operator==(const Position& lhs, const Position& rhs)
{
    return lhs.x == rhs.x && lhs.y == rhs.y;
}
struct Movement : public System
{
    void initialise() override {}
    void processEntity(Entity& e) override
    {
        e.getComponent<Position>().x += 2;
        e.getComponent<Position>().y += 1;
        // burn some CPU so each entity takes a little time to process
        for(volatile int i = 0; i != 10000; ++i)
            sqrt(938.523);
    }
};
// ----------------------------------------------------------------------------
// tests
// ----------------------------------------------------------------------------
TEST(NAME, ThousandEntities)
{
    World world;
    world.getSystemManager()
        .addSystem<Movement>()
        .initialise()
        ;

    for(int i = 0; i != 1000; ++i)
        world.getEntityManager().createEntity("entity")
            .addComponent<Position>(0, 0)
            ;

    // update world 1000 times
    for(int i = 0; i != 1000; ++i)
        world.update();

    // all entities should have moved to 1000*[2,1] = [2000, 1000]
    for(auto& entity : world.getEntityManager().getEntityList())
    {
        ASSERT_EQ(Position(2000, 1000), entity.getComponent<Position>());
    }
}
Could someone take a look at the way I'm dispatching the worker threads and tell me if they spot something blatantly obvious? The relevant sections of code are as follows.
World class creates the thread pool as follows:
World::World() :
    m_IoService(),
    m_ThreadPool(),
    m_Work(m_IoService)
{
    // populate the thread pool with as many threads as there are cores
    for(int i = 0; i != getNumberOfCores(); ++i)
        m_ThreadPool.create_thread(
            boost::bind(&boost::asio::io_service::run, &m_IoService)
        );
}
Where m_IoService, m_ThreadPool and m_Work are declared as:

boost::asio::io_service m_IoService;
boost::thread_group m_ThreadPool;
boost::asio::io_service::work m_Work;
World::update() calls SystemManager::update(). SystemManager holds a vector of System object pointers.

void SystemManager::update()
{
    for(const auto& system : m_ExecutionList)
        system->update();
}
System::update() pushes as many System::processEntity calls into the worker queue as there are entities (in this case 1000). System::processEntity is pure virtual.

this->world->getIoService().post(
    boost::bind(&System::processEntity, this, boost::ref(it))
);

// wait for all entities to be processed
this->world->getIoService().poll();
I have a hunch that the very last line, this->world->getIoService().poll(), only waits until the queue is empty rather than until all worker threads are idle (the latter is actually the behaviour I intend, but I couldn't find any way to do this). If that's the case, the test could finish asserting the expected entity positions before the last few entities have actually been processed. But that still doesn't explain why the X component of Position is corrupted while the Y component is always fine.
[EDIT] In case someone wishes to obtain the project and run it themselves:
https://www.github.com/TheComet93/ontology
git checkout optimise
mkdir build && cd build
cmake -DBUILD_TESTS=ON ..