
AllEightUp

Member Since 24 Apr 2011
Online Last Active Today, 08:37 PM

#5301230 Basic Multithreaded Question

Posted by AllEightUp on 18 July 2016 - 11:11 AM

Coming into this late, but there are three things which should be asked explicitly and were only implied in various replies along the way.  First: is the data in the array mutable?  Second: how variable is the execution time of the work (short on some items, longer on others, 2x, 3x, more)?  Third: how much work are we talking about overall?  These three questions should preface any discussion of multi-threading since each has important implications.

 

Taking the first item: if the data is mutable, then make it immutable.  That solves the issues of alignment and cache-line false sharing on the input side.  So, regarding the first response suggesting making copies: don't do that.  Have all threads read from the initial array but 'store' the results to a second array.  When everything is done, the results are in the second array; if the arrays happen to be held by pointer, just swap the pointers and away you go.  False sharing in the 'output' array is still possible, but with partitioning of any significant amount of work the threads should never be looking at data in close proximity, so it should never show up as a problem.  I.e. thread 1 would have to be nearly done with its work before it got near the work thread 2 is looking at, and unless thread 2 was REALLY slow it should have moved out of the area of contention by then, so there is no overlap.
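A minimal sketch of that read-one-array, write-the-other, swap-the-pointers approach, assuming the two arrays are pre-sized to match and the per-element work is just a stand-in function:

#include <algorithm>
#include <cstddef>
#include <thread>
#include <utility>
#include <vector>

// Hypothetical per-element work; stands in for whatever the real computation is.
static float process(float value) { return value * 2.0f; }

// 'current' and 'next' must be the same size; threadCount must be at least 1.
void parallelStep(std::vector<float>*& current, std::vector<float>*& next, unsigned threadCount)
{
    const std::vector<float>& input = *current;   // read-only, shared by every thread
    std::vector<float>& output = *next;           // each thread writes only its own partition

    std::vector<std::thread> workers;
    const std::size_t chunk = (input.size() + threadCount - 1) / threadCount;
    for (unsigned t = 0; t < threadCount; ++t)
    {
        const std::size_t begin = std::min<std::size_t>(t * chunk, input.size());
        const std::size_t end   = std::min<std::size_t>(begin + chunk, input.size());
        workers.emplace_back([&input, &output, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                output[i] = process(input[i]);
        });
    }
    for (std::thread& worker : workers)
        worker.join();

    std::swap(current, next);  // the results become the input for the next pass
}

Each thread writes only its own contiguous slice of the output, which is what keeps any false sharing confined to the partition edges.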

 

Second item: variable execution time can destroy performance in any partitioning scheme.  If one thread gets unlucky and draws the heavier work items, the other threads will be stuck waiting until that one gets done.  Most partitioning schemes are meant for nearly fixed-cost computations where no single thread gets stuck doing an unexpectedly large amount of work.

 

Third item: as suggested, you should only do this on significantly large amounts of data.  Something I did not see mentioned at all is that threads don't wake up from a mutex/semaphore/etc. immediately, so right off the top you lose a significant amount of performance to slow wakeup on the worker threads.  Unless you keep the threads in a busy wait and the work is significant, you may never see a benefit.

 

Hope I didn't restate too much, but I didn't see much of this asked specifically.




#5300882 Bip File Format

Posted by AllEightUp on 15 July 2016 - 06:17 AM

There is a reason the 3DS Max file format is undocumented: Autodesk claims exclusive rights to the format.  In the past they have sued PolyTrans for writing converters for Max files which were not plugins, and they continue to have various lawsuits over another format (DWG) which they appropriated.  So, generally speaking, this is why there are no general readers of Max files (i.e. where Character Studio lives, assuming I have the correct item).

 

So, if you wish to deal with these files, you will need to either load them into Max and use a plugin to write out the data you want in your own format, or export the data in a neutral format such as FBX or Collada.  Sorry to be a buzzkill, but I've been on the receiving end of a C&D from Autodesk over this stupidity; it is not fun.




#5297512 Game Object Update Order

Posted by AllEightUp on 21 June 2016 - 06:39 PM

True dependencies are actually pretty rare in most game logic, in my experience. If your update code is highly order-dependent you might consider using different methods that aren't so fragile.

 

In this rare case I have to strongly disagree with Apoch, unless I'm misreading the intentions here or reading more into the question than he was.  Everything is about order, and in fact the OP is incorrect about Unity: there is order control built in; you can tell the engine the order of execution you want.  Additionally, when it comes to the multi-core future, without execution order control you simply can't effectively utilize current CPUs (well, not 'effectively' in any way that I know of).  I don't disagree that it *can* be fragile, but it is much like bad patterns in C++: you learn to avoid the parts that cause issues.

 

Having said that, let me rephrase it so the negatives have some context.  Unity has ordering in the sense that you can control which 'components' execute before other components; I think they call this the priority system or something like that.  The purpose in Unity seems to be making sure that an AI update component can finish making decisions before any movement is calculated in that frame, which is fairly trivial common-sense behavior, I would think.  But perhaps what Apoch was suggesting is that this is 'system' level dependency and not individual object-to-object dependency.  What I mean is that if I always update AI before physics, I can apply impulses via the AI and have those acted on when physics updates; I never try to interleave AI and physics.  That is *system* level dependency.  The other, bad, option is that I write a follow behavior that executes at the same time as the AI for the object I'm following; now which one updates first changes the behavior of the follow, since I could be following last frame's or next frame's position.  It's just bad news.
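To make the system-level version concrete, here is a rough sketch; the system names and loop are purely illustrative, not taken from any particular engine:

#include <vector>

struct World { /* shared game state */ };

// Each system updates every object of its kind; the ordering *between* systems is fixed.
struct System { virtual ~System() = default; virtual void update(World& world, float dt) = 0; };

struct AISystem      : System { void update(World&, float) override { /* decide, apply impulses */ } };
struct PhysicsSystem : System { void update(World&, float) override { /* integrate using those impulses */ } };

int main()
{
    World world;
    AISystem ai;
    PhysicsSystem physics;

    // System-level dependency: AI always runs before physics, never interleaved per object.
    std::vector<System*> systems { &ai, &physics };

    const float dt = 1.0f / 60.0f;
    for (System* system : systems)
        system->update(world, dt);
}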

 

So, depending on the OP's intentions, dependency always matters.  Where you apply it determines whether or not it is a problem.




#5296546 Work queue with condition variable - design issue

Posted by AllEightUp on 14 June 2016 - 06:48 PM

Generally speaking, I've fiddled with variations of the shutdown problem and come to the conclusion that the best of the various ugly solutions is to treat shutdown as nothing more than another task.  So, the loop uses the first variation of "while (running)" without any other special code checks.  The owner of the thread simply posts a special "exit" task which changes the value of running.  With this, 'running' does not need to be volatile, since it is modified from within the thread itself, which cleans up that little annoyance.  So, shutdown becomes:

 

post(exitTask);

thread.Join();

 

That's about as clean as you are going to get with threads in this area.
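For context, a minimal sketch of that pattern over a simple condition-variable work queue; the class and member names here are mine, not from the original code:

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

class Worker {
public:
    Worker() : mThread([this] { run(); }) {}

    void post(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mTasks.push_back(std::move(task));
        }
        mCondition.notify_one();
    }

    void shutdown()
    {
        // Shutdown is just another task: it flips 'running' from inside the worker
        // thread, so no volatile (or atomic) flag is needed.
        post([this] { mRunning = false; });
        mThread.join();
    }

private:
    void run()
    {
        while (mRunning)
        {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mMutex);
                mCondition.wait(lock, [this] { return !mTasks.empty(); });
                task = std::move(mTasks.front());
                mTasks.pop_front();
            }
            task();
        }
    }

    bool mRunning = true;  // only ever touched by the worker thread
    std::mutex mMutex;
    std::condition_variable mCondition;
    std::deque<std::function<void()>> mTasks;
    std::thread mThread;   // declared last so the other members exist before it starts
};

Shutdown from the owning side is then just the post(exitTask)/thread.Join() pair above, wrapped in shutdown().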

 

 

As to the priorities item: I will warn you that priority systems are a massive pain to get correct, and you might be better off not doing that and instead reconsidering the issue you are trying to solve.  I gave up on prioritized tasking for a number of reasons, the first being simple: doing it correctly is expensive and I wanted the performance back.  Generally speaking, for game work, I look at priorities as a fix for something that breaks my preferred separation-of-responsibilities approach.  Let me try to explain this a bit better.

 

So, the basic reason I ended up wanting priorities turned out to be that I had some tasks I didn't care whether they finished this frame or a frame or two in the future, but I also had a consistent set of many tasks that had to execute within the frame, and I could not complete the frame until those finished.  So, after considerable trial and error, I ended up with a frame-based work system and a thread pool working in conjunction.  Making the two work together is a bit of a trick but, generally speaking, much easier than getting priorities correct in a single solution.  Sorry I can't suggest a solution to your actual problem and can only suggest a different solution altogether.




#5294881 I'm having trouble with constructors and destructors

Posted by AllEightUp on 03 June 2016 - 08:26 PM

First guess: you copied and pasted your include guards from a file named something like 'extras' and forgot to change them, i.e. EXTRAS_H_INCLUDED.  If that guard were already defined, your correct-looking class declaration won't be included, and as such the compiler doesn't know about Wall, and of course declaring the constructor/destructor won't work.
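For illustration, a guess at what that collision looks like; the file and macro names here are hypothetical:

// extras.h
#ifndef EXTRAS_H_INCLUDED
#define EXTRAS_H_INCLUDED
// ... declarations from the other file ...
#endif

// wall.h -- guard copied from extras.h but never renamed
#ifndef EXTRAS_H_INCLUDED   // already defined once extras.h has been included...
#define EXTRAS_H_INCLUDED
class Wall
{
public:
    Wall();
    ~Wall();
    void drawMe();
};
#endif                      // ...so the Wall declaration silently disappears.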

 

Edit: also note that your drawMe function has no class name in the declaration.




#5294628 Doubts about thread consumption

Posted by AllEightUp on 02 June 2016 - 06:00 AM

The number of threads generally won't matter in such a functional model, so long as you stay within reasonable bounds such as fewer than 100.  My multicore engine currently starts up 19 utility threads beyond the one thread per hardware thread (i.e. on a 6-core HT box, 19+12 threads total) and it works like a charm.  The reason is that the 19 threads are generally all very lightweight and primarily I/O-bound items which run in the small gaps where the multi-core side is not 100% active.  This same sort of interleaving will happen in a functionally threaded engine just as well, and adding more threads is highly unlikely to be a detriment.
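As a rough sketch of that kind of split (the 19 comes from the post above; everything else is purely illustrative):

#include <thread>
#include <vector>

int main()
{
    // One compute thread per hardware thread, e.g. 12 on a 6-core HT box...
    const unsigned computeThreads = std::thread::hardware_concurrency();
    // ...plus a fixed pool of lightweight, mostly I/O-bound utility threads.
    const unsigned utilityThreads = 19;

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < computeThreads; ++i)
        pool.emplace_back([] { /* heavy per-frame work */ });
    for (unsigned i = 0; i < utilityThreads; ++i)
        pool.emplace_back([] { /* streaming, file and network work, mostly blocked */ });

    for (std::thread& thread : pool)
        thread.join();
}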




#5294455 Creating Cross-Platform Code

Posted by AllEightUp on 01 June 2016 - 07:16 AM

Figured I would chime in on a couple of points here; some of this may rehash the above, but given you requested a how-to, I'll walk you through what I generally do.  The first thing to mention is that you started at the correct point: cross-platform starts and ends with the build setup, then the tools you use.  I will obviously be suggesting CMake, given that my experience with Premake was great in limited areas but the bugs and lack of support generally made it unusable in the long run, and there are some newer reasons for this I'll get into shortly.  While it is an old article, you may take a look at: http://www.gamedev.net/page/resources/_/technical/general-programming/cross-platform-test-driven-development-environment-using-cmake-part-2-r2994 which is where my build system started.  Today I support Windows, OS X, iOS, Android, Linux, XBOne, PS4 and a bunch of different AR/VR system variations, and it all started from the build system presented in that article series.

 

Once you have a build solution working, editors and, more importantly, debuggers are the next thing to consider.  Being comfortable on multiple platforms takes a while, and as such I am always on the lookout for the best tools available.  In the process of looking at recent updates to the Android environment I ran across a very useful tool which allows me to use a single IDE for all platforms other than Windows.  (I could use it on Windows too, but I prefer VC on that platform.)  CLion, the basis of Android Studio 2+, is a very interesting IDE for the other platforms.  First, unlike Xcode it doesn't constantly attempt to do things in unusual (and I'd go as far as saying massively annoying) ways; it is more closely aligned with VC and other IDEs in most behavior.  More importantly, if you decide to use CMake, it doesn't use a typical solution file; it works directly against CMake itself as its solution file.  So, without any in-between steps, when I go to Linux/OS X I just open it up and it is ready to go.  This is not the only solution, but it is most definitely worth looking into given how easy it makes hopping to other platforms and fixing the build quickly with little hassle.

 

With editors out of the way, as others have mentioned, you need an effective method of knowing when you have broken things.  The first thing most folks are likely to suggest is continuous integration using Jenkins.  I tend to alter that just a bit and suggest TeamCity (https://www.jetbrains.com/teamcity/) for a couple of reasons.  The primary reason is that setting up Jenkins is quite the chore, and after it blew up one of my Macs and made me reinstall from scratch, I gave up on it.  I had TeamCity up and running in just under 2 hours from initial investigation to CI running on Windows, OS X and Linux.  And, as it happens, the free version has enough allowed agents and configurations to support those platforms as needed, so I have not yet purchased an upgrade.  The second reason I ended up with TeamCity is the built-in support in CLion; it is very sweet in the sense that CLion is a fully integrated solution for managing TC, such that it will pick up logs, allow you to start investigations, trigger builds, etc., all without leaving the IDE.  I'm sure that is all possible with Jenkins, but generally speaking it was a lot of work, whereas here it just existed as a benefit after setup and initialization.

 

Code build/link is of course just one step toward having confidence in what you are doing.  Unit testing, integration testing, etc. is your next step.  As in the linked article, I generally still use Google Test for C++ code as it is low boilerplate and high utility.  Additionally, Jenkins and TC both have direct support, such that after my builds complete my unit tests run and I get feedback on the over 500 tests across all platforms.
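For anyone who hasn't used it, a minimal Google Test case looks roughly like this; the function under test is made up purely for the example:

#include <gtest/gtest.h>

// Hypothetical function under test.
static int clampToByte(int value)
{
    return value < 0 ? 0 : (value > 255 ? 255 : value);
}

TEST(ClampToByte, StaysInsideRange)
{
    EXPECT_EQ(0,   clampToByte(-10));
    EXPECT_EQ(128, clampToByte(128));
    EXPECT_EQ(255, clampToByte(1000));
}

int main(int argc, char** argv)
{
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}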

 

I'd say at this point you are generally ready to do real cross-platform work.  While all these bits and pieces may seem like overkill, I would argue that without them you spend more time jumping around doing repetitive nonsense than you do writing code.  All this setup takes a couple of days before it is working smoothly, even for a simple hello-world test.  But once done, you will thank yourself for having set it up, because you can focus on the code, where you should be, instead of constantly fighting with different platforms and switching from IDE to IDE.

 

Hope this helps.




#5293577 Do you usually prefix your classes with the letter 'C' or something e...

Posted by AllEightUp on 26 May 2016 - 07:50 AM

It is worth mentioning that the history of the C prefix is directly tied to the history of C++.  Back when MFC was written, C++ namespaces were a horrible mess of half-implemented concepts in the compilers and barely worked.  The C prefix was a useful workaround to prevent conflicts in the global namespace.  There were other examples of this, such as an entire OS where all classes were prefixed with B, I believe it was.  Anyway, as others have mentioned, C++ has moved on and prefixes are rarely desirable compared to using namespaces.  Also, as others have said, certain kinds of identifiers still tend to use Hungarian-style notations such as g_ or variations.

 

My preferences are generally:

namespaces instead of class prefixes.

often I will use a 't' in front of template class names to differentiate a utility template from a base class, but it is generally only in the case where the template is a helper to implement boilerplate code over the base class.

generally I use the g_ or other prefix to differentiate special scoping of items.

Unless I'm using a class-style enum, I tend to drop a little 'e' in front of the enumeration values, i.e. enum Blargo {eFail} versus enum class Blargo {Fail}.
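Put together, those preferences look roughly like this; the names are just illustrative:

namespace engine   // namespace instead of a C prefix on every class
{
    class RenderTarget { /* ... */ };

    // 't' prefix: a helper template that implements boilerplate over a base class.
    template <typename Derived>
    class tRefCounted { /* ... */ };

    enum Result { eFail, eOk };          // old-style enum: small 'e' on the values
    enum class Status { Fail, Ok };      // enum class: no prefix needed
}

engine::RenderTarget* g_mainTarget = nullptr;  // g_ marks the special (global) scope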

 

The point of listing this is to reinforce what others have said: while the class names themselves have moved on, there are still reasons for similar practices in other areas of the language.  So long as you come up with a consistent set of rules, you can write clean code.




#5293457 responsiveness of main game loop designs

Posted by AllEightUp on 25 May 2016 - 06:05 PM

While I agree that you are overthinking this, there is also a gross simplification you are making.  Even if you follow your intention of limiting things to input then immediately render, that won't make the game feel responsive.  The key word is 'feel'; it's not about absolute numbers here, it is about perception.  Even the twitchiest of games is not able to apply input immediately; it is often several frames later.  The difference between feeling responsive and the latency issues some games have lies in the application.  Basically, the trick is *not* to act immediately but to buffer just enough that all input is equally latent.  The human mind is a masterful computation device: without knowing it, the player will adapt to slight amounts of latency.  If that latency is bouncing all over the place, the player cannot adapt; if it is consistent, on the other hand, the player will generally not notice.
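One crude way to picture 'equally latent' input is a fixed delay buffer; this is only a sketch of the idea, not a recommendation of any particular frame count:

#include <cstddef>
#include <deque>

struct Input { /* buttons, sticks, ... */ };

// Delay every input sample by the same fixed number of frames so the latency is
// constant rather than jittering between zero and several frames.
class InputDelayBuffer {
public:
    explicit InputDelayBuffer(std::size_t delayFrames) : mDelay(delayFrames) {}

    Input push(const Input& sampledThisFrame)
    {
        mPending.push_back(sampledThisFrame);
        if (mPending.size() <= mDelay)
            return Input{};            // not enough history yet: return neutral input
        Input out = mPending.front();
        mPending.pop_front();
        return out;
    }

private:
    std::size_t mDelay;
    std::deque<Input> mPending;
};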

 

If things were about absolute latency, games would never have survived triple buffering, remote streaming, etc.  Those things seem to work (ok, streaming not so well yet), and the reason is generally that folks put in the time to smooth the latency, not to try to remove it completely.




#5291660 when to use concurrency in video games

Posted by AllEightUp on 14 May 2016 - 11:44 PM

 

*except on some MS compilers, where on x86, volatile reads/writes are generated using the LOCK instruction prefix

 
MS doesn't use the LOCK instruction for volatile reads and writes. LOCK would provide sequential consistency, but MS volatile only guarantees acquire/release. On x86, reads naturally have acquire semantics and writes naturally have release semantics (assuming they're not non-temporal). The MS volatile just ensures that the compiler doesn't re-order or optimize out instructions in a way that would violate the acquire/release semantics.

Yeah, you're right - I've struck that bit out in my above post. From memory I thought that they'd gone as far as basically using their Interlocked* intrinsics silently on volatile integers, but it's a lot weaker than that. I even just gave it a go in my compiler and couldn't get it to emit a LOCK prefix except when calling InterlockedCompareExchange/InterlockedIncrement manually :)

 

This means that even with MS's stricter form of volatile, it would be very hard to use them to write correct inter-thread synchronization (i.e. you should still only see them deep in the guts of synchronization primitives, and not in user code).

 

 

As a general note on volatiles, I also did a test for fun.  I took the scheduler for my distribution system and added a single volatile to the head index of the lazy ring buffer.  I changed nothing else; I'm still using explicit atomic load/store to access it.  It slowed down the loop by about 10%.  That's quite a bit worse than my worst guess.  This was on a dual Xeon, compiled with Clang; I'd be terrified to see what happens with the MS hackery on volatiles.  As a note: I believe there is now an option in VC2015 to disable the MS-specific behavior, so it may not be any worse than Clang with that set.
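For reference, the non-volatile version of such a head index looks roughly like this; the point is that the ordering is stated explicitly on each access rather than being implied by volatile (the ring buffer storage is omitted and the names are mine):

#include <atomic>
#include <cstddef>

constexpr std::size_t kCapacity = 1024;

struct RingIndices {
    std::atomic<std::size_t> head{0};
    std::atomic<std::size_t> tail{0};
};

// Producer: publish the new head with release so the filled slot is visible
// before any consumer observes the index change.
void publish(RingIndices& ring)
{
    const std::size_t head = ring.head.load(std::memory_order_relaxed);
    ring.head.store((head + 1) % kCapacity, std::memory_order_release);
}

// Consumer: the acquire load pairs with the producer's release store.
bool hasWork(const RingIndices& ring)
{
    return ring.head.load(std::memory_order_acquire) !=
           ring.tail.load(std::memory_order_relaxed);
}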

 

As to volatiles and threading in general, I don't believe I use the volatile keyword anywhere in my code, either at home or at work, and that code is fully multi-core from the ground up.  Unlike what I called out above, I'm not using threading just to ship; it is a fundamental design goal of the overall architecture.




#5291441 when to use concurrency in video games

Posted by AllEightUp on 13 May 2016 - 12:25 PM

I think one key point is being glossed over in regards to the 'when' portion of the question... When do you use concurrency? "Only when you need it to ship!"  I have shipped many games where I've added threading engines but only bothered to port bits and pieces of code to the parallel models to hit a solid 60 FPS with a little headroom.  I just wanted to point this out as it seemed to be getting lost among the 'shiny' reasons to do concurrency. :)




#5290634 How do I design this kind of feature?

Posted by AllEightUp on 08 May 2016 - 06:32 AM

 

There is another manner to look at this which may be a variation of haegarr's response.  ...

Yes, that is definitely what I meant except that you're going the refactoring way (which, of course, is totally fine) where I already have been trapped once back in time and hence know of that particular necessity we're talking about here.

 

Interestingly enough, there is many good stuff to learn from TA / IF engines, where the interaction of the player is focused on performing such kind of actions.

 

 

My primary motivation for separating 'usable' from 'use' in this case is that the results can then be exposed to script systems considerably more easily.  I generally dump most of these items into a behavior tree, since the simple 'use a key' example can be extended to include a lot more checks and becomes the basis of a puzzle system as well.  I.e.:

 

sequence
-  haveItem(key, "Some door")
-  haveItem(scroll, "Nasty Green Ritual")
-  isDay("Tuesday")
-  isHour("Noon")
-  actorInVicinity(Player, "Nasty Green Altar", 5)
-  actorHasPerformed(Player, "Sacrifice", "Chicken Feet")
-  makeActionAvailable("Trigger Nasty Green Apocalypse")

 

Now the player can destroy the world by reusing prior work.  With enough generic actions, even the final line can be pushed off to script such that none of this requires custom code.  Maybe the OP is making a game with puzzles, maybe he isn't; either way, fixing the SRP issue allows throwing this stuff into script, where it is easier to reconfigure and experiment.
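A minimal sketch of the sequence node driving such a tree, with the conditions collapsed into plain callables; none of this is from an actual engine:

#include <functional>
#include <vector>

// A sequence node succeeds only if every child succeeds, evaluated in order, so the
// final "make action available" step only runs once every prior condition has passed.
class Sequence {
public:
    void add(std::function<bool()> child) { mChildren.push_back(std::move(child)); }

    bool tick() const
    {
        for (const auto& child : mChildren)
            if (!child())
                return false;   // first failing condition stops the sequence
        return true;
    }

private:
    std::vector<std::function<bool()>> mChildren;
};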




#5290399 How do I design this kind of feature?

Posted by AllEightUp on 06 May 2016 - 06:58 AM

There is another way to look at this which may be a variation of haegarr's response.  At the top level, your problem is generally known as cross-cutting, where a design works for 90% of cases but 10% doesn't fit the same pattern and causes issues such as this.  Usually this is caused by a failure in the design to separate concepts properly, and your current design suffers from this.  Consider your ItemEffect: it only supplies a single function, so how could it be breaking SRP?  Well, in the example provided it conflates the concept of 'usable' with the concept of 'use', i.e. it checks that it is in a usable condition before attempting to trigger a state change.  Separating the 'is it usable' concern from the 'use it' concern would be a first step to solving much of the design issue and, as haegarr suggests, it is similar in behavior to how a component entity model works.  So, reworking your example, you could do something like the following:
 
 
class Action { public: virtual ~Action() = default; virtual void perform(...) = 0; };
class UseKey : public Action {
public:
  virtual void perform(...) override
  {
    // 'target' is whatever object the action was bound to when it was created.
    target->unlock();
  }
};
 
 
It has no condition checks; it just performs the action on the assumption that something else has validated that everything is ready to go.  The thing that makes the checks could be broken down into many conditions (generally a good idea), but I'll be lazy and outline it in a single class, and also assume you have a service-oriented design around things:
 
 
// This assumes it is an inventory item, the same pattern holds for other variations.
class ConditionCheck {
public:
  virtual void addToInventory(...) = 0;
  virtual void removeFromInventory(...) = 0;
};
class KeyConditions : public ConditionCheck {
public:
  virtual void addToInventory(...) override
  {
    // Assume a 'smart world' which supplies various services.
    // The primary service of concern here is an awareness system.
    mQuery = GetWorld().SpatialAwareness().Query()
      .Center(owner->GetTransform())
      .Radius(2.5) // keys pay attention to things within 2.5 meters.
      .Filter(Door::ObjectType)
      .OnChange(std::bind(&KeyConditions::onVisible, this, std::placeholders::_1));
  }

  virtual void removeFromInventory(...) override
  {
    mQuery = nullptr;
  }

private:
  void onVisible(const std::vector<ObjectHandle>& doors)
  {
    for (const ObjectHandle& door : doors)
    {
      if (haveKeyFor(door) && facing(door))
        validCondition(usable(door));
    }
  }

  SpatialAwareness::QueryHandle mQuery;
};
This is not a perfect example but hopefully shows the goals and direction such an architecture would take. It combines a more complete following of SRP with a reactive design to prevent the behavioral lock in you are finding.

Perhaps this is too much of a change; you could still borrow some of the concepts and fix the SRP issue to be in a better position at a minimum. Of course, the issue a lot of folks have with something like this is that it feels (is) pretty abstract and takes a bit to get used to. Additionally, WoopsASword does have a point: unless this is a relatively major portion of your gameplay, a simplified solution may be better. I would tend to use this solution if I were creating a huge RPG-style game, but if it were a fairly simple game, I'd keep it simple.


#5284955 In terms of engine technology, what ground is left to break?

Posted by AllEightUp on 03 April 2016 - 09:30 PM

Among others, look at the recently-released Maxplay engine. We're using it at work -- I spend my days mixed between it, Unity, and custom work -- and many of my former co-workers from past jobs have been using the engine on new projects. (Notably, The Void, hi guys!) While not polished like Unity or Unreal or Source or other longstanding engines, there are many things that once you realize they exist, make it hard to switch back to the older engines.

At GDC there were quite a few groups pushing new game engines, many with excellent innovations. A few of them I mentioned above.

It is probable that Unity and Unreal will be incorporating the functionality over time, but as they are older an established they've got to invest heavily on maintaining the past. As is typical, incumbents are less agile than newcomers but will adopt features as they can.

 

Correction: MaxPlay is not released.  It was used to build a demo for Intel and then shown privately.




#5281561 Building in a OS that you don't have (Cross-platform 2D engine)

Posted by AllEightUp on 16 March 2016 - 05:16 PM

I generally use VMware Fusion running Windows 7 or 10.  It can support running DX 10, I believe, and GL 4.0, so it does pretty well for our needs.





