
AllEightUp

Member Since 24 Apr 2011

#5305690 need an algorithm to update skin mesh global AABB

Posted by on 13 August 2016 - 09:29 PM

Dirk's suggestion is probably the most common solution and JoeJ's would be effective.  But if you need tighter bounds, the approach I've always used is to simply bake the bounds into the animation itself at export time.  It costs the same as computing one extra joint and you get accurate bounds.  If you are blending, you simply take the max of the bounds of all running animations.  It is not quite as cheap as a single unified bounds, and it is not as tight when blending animations, but it is better than the capsule method since there is no traversal involved.

 

At this point, the suggestions should solve whatever you are looking for.  The benefits/detriments are:

 

Unified bounds:  No computation overhead.  Sloppy, so you often end up animating characters even when they are not in view.

Capsules:  Relatively tight bounds.  Requires a hierarchical traversal and can cause cache issues if you have lots of characters.

Animation driven:  Perfect for a single animation; not as tight as capsules when blending animations, but tighter than unified.  Costs the same as an additional bone in the skeleton and requires no traversal.  (A rough sketch of the blending case follows.)
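
For the blending case, here is a minimal sketch of what I mean, assuming each clip stores one baked AABB per keyframe written at export.  The names (AABB, Clip, sampleBounds) are placeholders rather than anything from a specific engine:

#include <algorithm>
#include <cstddef>
#include <vector>

// Axis-aligned bounding box in model space.
struct AABB {
    float min[3];
    float max[3];

    // Grow this box so it also encloses 'other'.
    void merge(const AABB& other) {
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], other.min[i]);
            max[i] = std::max(max[i], other.max[i]);
        }
    }
};

// One baked AABB per keyframe, exported alongside the joint tracks.
struct Clip {
    std::vector<AABB> boundsPerKey;
    float keysPerSecond = 30.0f;

    // Sampling the baked bounds costs about the same as one extra joint track.
    AABB sampleBounds(float time) const {
        std::size_t key = static_cast<std::size_t>(time * keysPerSecond);
        return boundsPerKey[std::min(key, boundsPerKey.size() - 1)];
    }
};

// When blending, take the union of the bounds of every running clip
// (assumes at least one clip is running).
AABB blendedBounds(const std::vector<const Clip*>& running, float time) {
    AABB result = running.front()->sampleBounds(time);
    for (std::size_t i = 1; i < running.size(); ++i)
        result.merge(running[i]->sampleBounds(time));
    return result;
}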




#5303668 Where To Place External Libraries

Posted by on 02 August 2016 - 10:27 AM

As Paragon says, it depends on your intentions.  I personally use two git repositories, which has its own downsides but also some preferable bits.  So I have project 'x' in git repository 'x' and its dependencies in 'x_deps'.  I build x_deps and copy the appropriate headers and built libs (automated via CMake in my setup) into a folder of 'x', such that when working in x I have the minimal set of items needed.  Those get committed with the source so everything is usable without messing around.  Any time I need to update an external I jump back to x_deps, do the updates, and recopy the includes and libraries to 'x'.  The benefit is that my primary work repository stays relatively clean and small, but it can be a hassle to update libs since I have multiple targets and have to move from machine to machine updating them.

 

It is one way of doing things.




#5303248 Multithreading Library Preference

Posted by on 30 July 2016 - 04:58 PM

Another reason folks end up rolling their own is that the existing libraries are not always available on all the desired target platforms.  Additionally, in-house threading can be specialized for games, which can yield very specific gains.  Having written a number of threading solutions for shipped titles, these were the reasons we didn't start with an existing solution.




#5301230 Basic Multithreaded Question

Posted by on 18 July 2016 - 11:11 AM

Coming into this late, but there are three items which should be asked explicitly and were only implied in various replies along the way.  First: is the data in the array mutable?  Second: how variable is the execution time of the work (short on some items, longer on others; 2x, 3x, more variation)?  Third: what amount of work are we talking about?  These three questions should preface any discussion of multi-threading since each has very important implications.

 

Taking the first item: if the data is mutable, make it immutable for the duration of the update.  That solves the issues of alignment and cache-line false sharing on the input side.  So, regarding the first response suggesting making copies: don't do that, just have everyone read from the initial array but 'store' the results to a second array.  When everything is done, the results are in the second array; if the arrays happen to be pointers, just swap the pointers and away you go.  False sharing in the 'output' array is still possible, but with partitioning of any significant amount of work the threads should never be looking at data in close proximity, so it should never show up as a problem.  I.e. thread 1 would have to be nearly done with its work before it got near the work thread 2 is looking at, and unless thread 2 was REALLY slow it should already have moved out of the area of contention, so there is no overlap.  A rough sketch of the pattern is below.
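
A minimal sketch of that read-from-input, write-to-output, swap-at-the-end pattern.  The element type and the transform function are hypothetical, and the partitioning is the simplest possible even split:

#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical per-element work; reads the immutable input only.
float transform(float value) { return value * 2.0f + 1.0f; }

void parallelUpdate(std::vector<float>& front, std::vector<float>& back,
                    unsigned threadCount) {
    const std::vector<float>& input = front;    // read-only this frame
    std::vector<float>& output = back;          // each thread writes its own range

    std::vector<std::thread> workers;
    const std::size_t chunk = input.size() / threadCount;
    for (unsigned t = 0; t < threadCount; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = (t + 1 == threadCount) ? input.size() : begin + chunk;
        workers.emplace_back([&input, &output, begin, end]() {
            // Disjoint output ranges, so no locking is needed and false
            // sharing is only possible right at the range boundaries.
            for (std::size_t i = begin; i < end; ++i)
                output[i] = transform(input[i]);
        });
    }
    for (std::thread& w : workers)
        w.join();

    // Results are now in 'back'; just swap the two buffers.
    front.swap(back);
}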

 

Second item: variable execution time can destroy performance in any partitioning scheme.  If one thread gets unlucky and picks up the heavier work items, the other threads will be stuck waiting until that one finishes.  Most partitioning schemes are meant for nearly fixed-cost computations where no one thread gets stuck doing unexpectedly large amounts of work.

 

Third item: as suggested, you should only do this on significantly large amounts of data.  Something I did not see mentioned at all is that threads don't wake up from a mutex/semaphore/etc. immediately, so right off the top you lose a significant amount of performance due to slow wakeup on the worker threads.  Unless you keep the threads in a busy wait and the work is significant, you may never see a benefit.

 

Hope I didn't restate too much, but I didn't see much of this asked specifically.




#5300882 Bip File Format

Posted by on 15 July 2016 - 06:17 AM

There is a reason that the 3DS Max file format is undocumented: Autodesk claims exclusive rights to the format.  In the past they have sued PolyTrans for writing converters for Max files which were not plugins, and they continue to have various lawsuits over another format (DWG) which they appropriated.  So, generally speaking, this is why there are no general readers of Max files (i.e. where Character Studio lives, assuming I have the correct item).

 

So, if you wish to deal with these files you will need to either load them into Max and use a plugin to write out the data you want in your own format, or export the data in a neutral format such as FBX or Collada.  Sorry to be a buzzkill, but I've been on the receiving end of a C&D from Autodesk over this stupidity, and it is not fun.




#5297512 Game Object Update Order

Posted by on 21 June 2016 - 06:39 PM

True dependencies are actually pretty rare in most game logic, in my experience. If your update code is highly order-dependent you might consider using different methods that aren't so fragile.

 

In this rare case I have to strongly disagree with Apoch, unless I'm misreading the intentions here or reading more into the question than he was.  Everything is about order, and in fact the OP is incorrect about Unity: there is order control built in, you can tell the engine the order of execution you want.  Additionally, when it comes to the multi-core future, without execution order control you simply can't effectively utilize current CPUs.  (Well, not that I know of, 'effectively'.)  I don't disagree that it *can* be fragile, but it is much like bad patterns in C++: you learn to avoid the parts that cause issues.

 

Having said that, let me rephrase it so the negatives have some context.  Unity has ordering in the sense that you can control which 'components' execute before other components; I think they call this the priority system or something like that.  The purpose in Unity seems to be making sure that an AI update component can finish making decisions before any movement is calculated in that frame.  This is fairly trivial common-sense behavior, I would think.  But perhaps what Apoch was suggesting is that this is 'system'-level dependency and not individual object-to-object dependency.  What I mean is: I always update AI before physics, so I can apply impulses from the AI and have them acted on when physics updates; I never try to interleave AI and physics.  That is *system*-level dependency.  The other, bad, option is that I write a follow behavior that executes at the same time as the AI for the object I'm following; now which one updates first changes the behavior of follow, since I could be following last frame's or this frame's position.  It's just bad news.  A small sketch of what I mean by system-level ordering is below.
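
Just to illustrate the system-level version; the system names and the interface are hypothetical, not Unity's:

#include <memory>
#include <vector>

// Hypothetical engine system interface.
struct System {
    virtual ~System() = default;
    virtual void update(float dt) = 0;
};

struct AISystem : System      { void update(float) override { /* decide, apply impulses */ } };
struct PhysicsSystem : System { void update(float) override { /* integrate bodies */ } };
struct RenderSystem : System  { void update(float) override { /* submit draw calls */ } };

int main() {
    // The order of this list *is* the dependency contract: AI always runs
    // before physics, physics always before rendering.  Individual objects
    // never depend on each other's update order within a system.
    std::vector<std::unique_ptr<System>> systems;
    systems.emplace_back(std::make_unique<AISystem>());
    systems.emplace_back(std::make_unique<PhysicsSystem>());
    systems.emplace_back(std::make_unique<RenderSystem>());

    const float dt = 1.0f / 60.0f;
    for (auto& system : systems)
        system->update(dt);
}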

 

So, depending on the OP's intentions, dependency always matters.  Where you apply it determines whether it is a problem or not.




#5296546 Work queue with condition variable - design issue

Posted by on 14 June 2016 - 06:48 PM

Generally speaking, I've fiddled with variations of the shutdown problem and come to the conclusion that the best of the various ugly solutions is to treat shutdown as nothing more than another task.  So, the loop uses the first variation of "while (running)" without any other special code checks.  The owner of the thread simply posts a special "exit" task which changes the value of running.  With this, 'running' does not need to be volatile since it is only ever modified within the thread itself, which cleans up that little annoyance.  So, shutdown becomes:

 

post(exitTask);

thread.Join();

 

That's about as clean as you are going to get with threads in this area.
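
A minimal sketch of the exit-task idea using a condition-variable-protected queue; all the names here are placeholders, not from any particular library:

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

class WorkerThread {
public:
    WorkerThread() : thread_([this] { run(); }) {}

    void post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push_back(std::move(task));
        }
        wake_.notify_one();
    }

    // Call exactly once; the exit task flips 'running_' from inside the
    // worker thread, so no volatile/atomic is needed on the flag itself.
    void shutdown() {
        post([this] { running_ = false; });
        thread_.join();
    }

private:
    void run() {
        while (running_) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                wake_.wait(lock, [this] { return !tasks_.empty(); });
                task = std::move(tasks_.front());
                tasks_.pop_front();
            }
            task();
        }
    }

    std::mutex mutex_;
    std::condition_variable wake_;
    std::deque<std::function<void()>> tasks_;
    bool running_ = true;
    std::thread thread_;   // started last, after the members it uses
};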

 

 

As to the priorities item: I will warn you that priority systems are a massive pain to get correct, and you might be better off not doing that and reconsidering the problem you are trying to solve.  I gave up on prioritized tasking for a number of reasons; the first is simple: doing it correctly is expensive and I wanted the performance back.  Generally speaking, for game work, I look at priorities as a fix for something that breaks my preferred separation-of-responsibilities approach.  Let me try and explain this a bit better.

 

So, the basic reason I ended up wanting priorities turned out to be that I had some tasks I didn't care whether they finished this frame or a frame or two in the future, but I also had a consistent set of tasks that had to execute within the frame, and I could not complete the frame until those finished.  So, after considerable trial and error, I ended up with a frame work system and a thread pool working in conjunction.  Making the two items work together is a bit of a trick but, generally speaking, much easier than getting priorities correct in a single solution.  Sorry I can't suggest a solution to your actual problem and only suggest a different solution altogether.




#5294881 I'm having trouble with constructors and destructors

Posted by on 03 June 2016 - 08:26 PM

First guess: you copied and pasted your include guards from a file named 'extras' and forgot to change them, i.e. EXTRAS_H_INCLUDED.  If that guard were already defined, your correct-looking class declaration would never be included, so the compiler wouldn't know about Wall and of course defining the constructor/destructor won't work.  A sketch of the situation is below.
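
To illustrate the guess (the file layout here is made up for the example):

// extras.h
#ifndef EXTRAS_H_INCLUDED
#define EXTRAS_H_INCLUDED
// ... helpers ...
#endif

// wall.h -- include guard copied from extras.h and never renamed
#ifndef EXTRAS_H_INCLUDED      // should be WALL_H_INCLUDED
#define EXTRAS_H_INCLUDED
class Wall {
public:
    Wall();
    ~Wall();
    void drawMe();
};
#endif

// wall.cpp -- because extras.h gets pulled in first (directly or
// indirectly), the guard is already defined, the entire Wall declaration
// silently disappears, and these definitions fail with "Wall is not a class".
#include "extras.h"
#include "wall.h"

Wall::Wall() {}
Wall::~Wall() {}
void Wall::drawMe() {}   // note: needs the Wall:: qualifier, as mentioned below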

 

Edit: also note that your drawMe function definition has no class name (Wall::) in front of it.




#5294628 Doubts about thread consumption

Posted by on 02 June 2016 - 06:00 AM

The number of threads generally won't matter in such a functional model, so long as you stay within reasonable bounds, say fewer than 100.  My multi-core engine currently starts 19 utility threads beyond the one thread per hardware thread (i.e. on a 6-core HT box, 19+12 threads total) and it works like a charm.  The reason is that the 19 threads are generally all very lightweight, primarily I/O-bound items which run in the small gaps where the multi-core side is not 100% active.  This same sort of interleaving will happen in a functionally threaded engine just as well, and adding more threads is highly unlikely to be a detriment.




#5294455 Creating Cross-Platform Code

Posted by on 01 June 2016 - 07:16 AM

Figured I would chime in on a couple of points here; some of this may rehash the above, but given you requested a how-to I'll walk you through what I generally do.  The first thing to mention is that you started at the correct point: cross-platform starts and ends with the build setup, then the tools you use.  I will obviously be suggesting CMake, given that my experience with Premake was great for limited areas but the bugs and lack of support generally made it unusable in the long run, and there are some new reasons for this I'll get into shortly.  While it is an old article, you may take a look at: http://www.gamedev.net/page/resources/_/technical/general-programming/cross-platform-test-driven-development-environment-using-cmake-part-2-r2994 which is where my build system started.  Today I support Windows, OS X, iOS, Android, Linux, XBOne, PS4 and a bunch of different AR/VR system variations, and it all started from the build system presented in that article series.

 

Once you have a build solution working, editors and, more importantly, debuggers are your next step to consider.  Being comfortable on multiple platforms takes a while, and as such I am always on the lookout for the best tools available.  In the process of looking at recent updates to the Android environment I ran across a very useful tool which lets me use a single IDE for all platforms other than Windows.  (I could use it on Windows too, but I prefer VC on that platform.)  CLion, the basis of Android Studio 2+, is a very interesting IDE for other platforms.  First, unlike Xcode it doesn't constantly attempt to do things in unusual (and I'd go as far as saying massively annoying) manners; it is more closely aligned with VC and other IDEs in most behavior.  More importantly, if you decide to use CMake, it doesn't use a typical solution file: it works directly against CMake itself as its solution file.  So, without in-between steps, when I go to Linux/OS X I just open it up and it is ready to go.  This is not the only solution, but it is most definitely worth looking into given how easy it makes hopping to another platform and fixing the build quickly with low hassle.

 

With editors out of the way, as others have mentioned, you need an effective method of knowing when you have broken things.  The first thing most folks will likely suggest is continuous integration using Jenkins.  I tend to alter that a bit and suggest TeamCity (https://www.jetbrains.com/teamcity/) for a couple of reasons.  The primary reason is that setting up Jenkins is quite a chore, and after it blew up one of my Macs and made me reinstall from scratch I gave up on it.  I had TeamCity up and running in just under two hours, from initial investigation to CI running on Windows, OS X and Linux.  And, as it happens, the free version has enough allowed agents and configurations to support those platforms as needed, so I have not yet purchased an upgrade.  The second reason I ended up with TeamCity is the built-in support in CLion; it is very sweet in that CLion is a fully integrated solution for managing TC, such that it will pick up logs, allow you to start investigations, trigger builds, etc., all without leaving the IDE.  I'm sure that is all possible with Jenkins, but generally speaking it was a lot of work, where this just existed as a benefit after setup and initialization.

 

Code build/link is of course just one step toward having confidence in what you are doing.  Unit testing, integration testing, etc. is your next step.  As with the linked article, I generally still use Google Test for C++ code as it is low boilerplate and high utility; a tiny example is below.  Additionally, Jenkins and TC both have direct support such that after my builds complete my unit tests run and I get feedback on the over 500 tests across all platforms.
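
For anyone who hasn't used it, a minimal Google Test case looks roughly like this; the function under test is made up for the example:

#include <gtest/gtest.h>

// Hypothetical function under test.
static int clampToByte(int value) {
    if (value < 0)   return 0;
    if (value > 255) return 255;
    return value;
}

TEST(ClampToByte, PassesThroughValuesInRange) {
    EXPECT_EQ(42, clampToByte(42));
}

TEST(ClampToByte, ClampsOutOfRangeValues) {
    EXPECT_EQ(0, clampToByte(-10));
    EXPECT_EQ(255, clampToByte(1000));
}

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}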

 

I'd say at this point you are generally ready to do real cross-platform work.  While all these bits and pieces may seem like overkill, I would argue that without them you spend more time jumping around doing repetitive nonsense than you do writing code.  All this setup takes a couple of days before it is working smoothly, even for a simple hello-world test.  But once it is done you will thank yourself, because you can focus on the code, where you should be, instead of constantly fighting with different platforms and switching from IDE to IDE.

 

Hope this helps.




#5293577 Do you usually prefix your classes with the letter 'C' or something e...

Posted by on 26 May 2016 - 07:50 AM

It is worth mentioning that the history of the C prefix is directly tied to the history of C++.  Back when MFC was written, C++ namespaces were a horrible mess of half-implemented concepts in the compilers and barely worked; the C prefix was a useful workaround to prevent conflicts in the global namespace.  There were other examples of this, such as an entire OS (BeOS, I believe) where all classes were prefixed with B.  Anyway, as others have mentioned, C++ has moved on and prefixes are rarely desirable compared to using namespaces.  Also, as others have said, certain kinds of names still tend to use Hungarian-style notations such as g_ or variations.

 

My preferences are generally:

-  namespaces instead of class prefixes.

-  often a 't' in front of template class names to differentiate a utility template from a base class, but generally only where the template is a helper that implements boilerplate over the base class.

-  g_ or another prefix to flag special scoping of items.

-  unless I'm using a class-style enum, a little 'e' in front of enumerator names, i.e. enum Blargo {eFail} versus enum class Blargo {Fail}.

 

The point of listing this is to reinforce what others have said: while class names themselves have moved on, there are still reasons for similar practices in other areas of the language.  So long as you come up with a consistent set of rules, you can write clean code.  Put together, it looks something like the sketch below.
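
Just to show those rules side by side (all the names here are made up):

namespace engine {           // namespace replaces the old 'C' class prefix

// Plain enum gets 'e' on the enumerators; the enum class does not need it.
enum Result { eOk, eFail };
enum class Mode { Windowed, Fullscreen };

class Renderer {             // no prefix on the class itself
public:
    Result initialize(Mode mode);
};

// 't' marks a boilerplate helper template built over a base class.
template <typename T>
class tSingleton {
public:
    static T& instance() { static T s_instance; return s_instance; }
};

} // namespace engine

// g_ marks the unusual scope of a true global.
engine::Renderer* g_renderer = nullptr;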




#5293457 responsiveness of main game loop designs

Posted by on 25 May 2016 - 06:05 PM

While I agree that you are overthinking this, there is also a gross simplification you are making.  Even if you follow your intention of limiting things to input then immediately render, that won't make the game feel responsive.  The key word is 'feel': it's not about absolute numbers here, it is about perception.  Even the most twitchy of games does not apply input immediately; it is often acted on several frames later.  The difference between feeling responsive and the latency issues some games have is in the application.  Basically the trick is *not* to act immediately but to buffer just enough that all input is equally latent.  The human mind is a masterful computation device: without knowing it, the player will adapt to slight amounts of latency.  If that latency is bouncing all over the place the player cannot adapt; if it is consistent, the player will generally not notice.  A tiny sketch of that kind of fixed-delay buffering is below.
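
To make the 'equally latent' idea concrete, here is a minimal sketch of a fixed-delay input buffer.  The two-frame delay in the usage comment and the InputState fields are arbitrary, purely for illustration:

#include <array>
#include <cstddef>

// Whatever your sampled input looks like for one frame.
struct InputState {
    float moveX = 0.0f;
    float moveY = 0.0f;
    bool  fire  = false;
};

// Delays every sample by a constant number of frames so the input latency
// is identical every frame instead of bouncing around.
template <std::size_t DelayFrames>
class FixedDelayInput {
    static_assert(DelayFrames >= 1, "delay must be at least one frame");
public:
    // Push this frame's raw sample and get back the sample from
    // DelayFrames frames ago (default-initialized for the first few frames).
    InputState push(const InputState& fresh) {
        InputState delayed = buffer_[cursor_];
        buffer_[cursor_] = fresh;
        cursor_ = (cursor_ + 1) % DelayFrames;
        return delayed;
    }

private:
    std::array<InputState, DelayFrames> buffer_{};
    std::size_t cursor_ = 0;
};

// Usage each frame, assuming hypothetical sampleHardware()/simulate():
//   FixedDelayInput<2> inputDelay;
//   simulate(inputDelay.push(sampleHardware()));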

 

If it were about absolute latency, games would never have survived triple buffering, remote streaming, etc.  Those things generally work (OK, streaming not so well yet), and the reason is that folks put in the time to smooth the latency out, not to remove it completely.




#5291660 when to use concurrency in video games

Posted by on 14 May 2016 - 11:44 PM

 

*except on some MS compilers, where on x86, volatile reads/writes are generated using the LOCK instruction prefix

 
MS doesn't use the LOCK instruction for volatile reads and writes. LOCK would provide sequential consistency, but MS volatile only guarantees acquire/release. On x86, reads naturally have acquire semantics and writes naturally have release semantics (assuming they're not non-temporal). The MS volatile just ensures that the compiler doesn't re-order or optimize out instructions in a way that would violate the acquire/release semantics.

Yeah, you're right - I've struck that bit out in my above post. From memory I thought that they'd gone as far as basically using their Interlocked* intrinsics silently on volatile integers, but it's a lot weaker than that. I even just gave it a go in my compiler and couldn't get it to emit a LOCK prefix except when calling InterlockedCompareExchange/InterlockedIncrement manually :)

 

This means that even with MS's stricter form of volatile, it would be very hard to use them to write correct inter-thread synchronization (i.e. you should still only see them deep in the guts of synchronization primitives, and not in user code).

 

 

As a general note involving volatiles, I also went and did a test for fun.  I took the scheduler for my distribution system and added a single volatile to the head index of the lazy ring buffer.  I changed nothing else; I'm still using explicit atomic load/store to access it.  It slowed the loop down by about 10%, which is quite a bit worse than my worst guess.  This was on a dual Xeon and compiled by Clang; I'd be terrified to see what happens with the MS hackery on volatiles.  As a note: I believe there is now an option in VC2015 to disable the MS-specific behavior (/volatile:iso), so it may not be any worse than Clang with that set.

 

As to volatiles and threading in general, I don't believe I use the keyword volatile anywhere in my code, either at home or at work, and it is all fully multi-core from the ground up.  Unlike what I called out above, I'm not using threading just to ship; it is a fundamental design goal of the overall architecture.  Where a shared flag really is needed, std::atomic covers it, as in the sketch below.
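
For completeness, a minimal sketch of the kind of thing people reach for volatile to do, written with std::atomic instead.  This is just an illustrative stop flag, not code from my engine:

#include <atomic>
#include <chrono>
#include <thread>

int main() {
    // Acquire/release atomics give well-defined cross-thread visibility;
    // volatile gives no such guarantee in standard C++.
    std::atomic<bool> stop{false};

    std::thread worker([&stop] {
        while (!stop.load(std::memory_order_acquire)) {
            // ... do a slice of work ...
        }
    });

    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    stop.store(true, std::memory_order_release);
    worker.join();
}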




#5291441 when to use concurrency in video games

Posted by on 13 May 2016 - 12:25 PM

I think one key point is being glossed over in regard to the 'when' portion of the question... When do you use concurrency?  "Only when you need it to ship!"  I have shipped many games where I've added threading engines but only bothered to port bits and pieces of code to the parallel models, just enough to hit a solid 60 FPS with a little headroom.  I just wanted to point this out, as it seemed to be getting lost among the 'shiny' reasons to do concurrency. :)




#5290634 How do I design this kind of feature?

Posted by on 08 May 2016 - 06:32 AM

 

There is another manner to look at this which may be a variation of haegarr's response.  ...

Yes, that is definitely what I meant, except that you're going the refactoring way (which, of course, is totally fine), whereas I was already trapped by this once back in time and hence know of the particular necessity we're talking about here.

 

Interestingly enough, there is much good stuff to learn from TA / IF engines, where the player's interaction is focused on performing these kinds of actions.

 

 

My primary motivation for separating 'usable' from 'use' in this case is that the results can then be exposed to script systems considerably more easily.  I generally dump most of these items into a behavior tree, since the simple 'use a key' example can be extended to include a lot more checks and becomes the basis of a puzzle system as well.  I.e.:

 

sequence
-  haveItem(key, "Some door")
-  haveItem(scroll, "Nasty Green Ritual")
-  isDay("Tuesday")
-  isHour("Noon")
-  actorInVicinity(Player, "Nasty Green Altar", 5)
-  actorHasPerformed(Player, "Sacrifice", "Chicken Feet")
-  makeActionAvailable("Trigger Nasty Green Apocalypse")

 

Now the player can destroy the world by reusing prior work.  With enough generic actions, even the final line can be pushed off to script such that none of this requires custom code.  Maybe the OP is making a game with puzzles, maybe not; either way, fixing the SRP issue allows throwing this stuff into script where it is easier to reconfigure and experiment with.  A rough sketch of how such a sequence might evaluate is below.
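
A rough sketch of how a sequence like that could evaluate in code, if it were not data-driven; the condition names and their stub implementations are hypothetical:

#include <functional>
#include <string>
#include <vector>

// A behavior-tree node here is just something that succeeds or fails.
using Condition = std::function<bool()>;

// A sequence succeeds only if every child succeeds, evaluated in order.
bool runSequence(const std::vector<Condition>& children) {
    for (const Condition& child : children)
        if (!child())
            return false;
    return true;
}

// Hypothetical game queries; in practice these would hit inventory,
// the world clock, spatial queries, etc.
bool haveItem(const std::string& item)             { return item == "key"; }
bool isDay(const std::string& day)                 { return day == "Tuesday"; }
bool actorNear(const std::string&, float radius)   { return radius > 0.0f; }

int main() {
    std::vector<Condition> unlockApocalypse = {
        [] { return haveItem("key"); },
        [] { return isDay("Tuesday"); },
        [] { return actorNear("Nasty Green Altar", 5.0f); },
    };

    if (runSequence(unlockApocalypse)) {
        // makeActionAvailable("Trigger Nasty Green Apocalypse");
    }
}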





