Generally speaking I think the bigger concern is the other way around. As a programmer who has spent many a day dealing with artists, the biggest barriers to communication have been when I didn't understand the content creation process well enough and was unable to effectively communicate the necessary constraints or requirements, or fully understand the implications of such from their perspective.
So I got myself a subscription to Digital Tutors and spent a couple of months learning what I could about developing game assets with Max, Maya, ZBrush and MotionBuilder. I now typically spend at least a couple of days a month doing the same, and the communication has become much, much easier.
This is probably not something that every programmer will be able to or will want to do, but as somebody else mentioned, there should be at least one person on the programming team who can act as liaison between the technical and artistic sides, and it would be a good idea for that person to have a reasonable understanding of the content creation process.
Posted by krippy2k8
on 30 September 2012 - 03:10 AM
I could expect it, given that the two objects should both have identical code and data, and one has an interface that is a subset of the other. The fact that it's so difficult to use one in place of the other is just a limitation imposed by poor design choices in the C++ template system.
Actually it is the result of a powerful C++ design choice: template specializations.
If C++ disallowed such atrocities (specializations that are free to change a type's layout and interface entirely), the types would be compatible.
It seems it would be a pretty silly tradeoff to disallow template specializations in order to allow implicit template type conversions, particularly when the solution to your original problem is so simple.
Posted by krippy2k8
on 07 September 2012 - 11:50 PM
For iOS, for example, profiling revealed 2 functions to be high on the time-consuming list (Xcode comes with “Instruments” that show you how long every function takes) and I was able to reduce the time spent inside those functions by half by reversing the order of the loop to count down to 0.
Yeah, I would pay to see that happen. If you actually got that result, then either something in your loop was early-outing sooner when counting down, or your loop condition was performing a calculation on every iteration (e.g. i < something.size()).
Either that or you were using some kind of interpreted or intermediate language that did strange things. Otherwise it is physically impossible to get that result with native machine code. I can't say for Java, but I would really have to see it to believe that it would have anything more than an extremely negligible effect.
The difference between iterating up and comparing against a memory address, and iterating down and comparing against zero, is exactly one compare instruction on all ARM and x86 platforms with any reasonable compiler, plus a tiny memory-latency cost that will be a cache hit on every iteration except possibly the first; and if your loop is doing enough work for that to matter, the iteration method couldn't even be in the equation.
I just did a quick benchmark on 4 different systems with different CPUs (Windows and OSX, AMD and Intel), counting up and counting down over 2 billion iterations, and ran the test 10,000 times; the worst-case difference was 0.3%. This is with the internals of the loop doing nothing more than a simple addition accumulation and rollover.
It is true that it is usually going to be slightly faster to iterate to 0, but the difference is only going to matter in the most extreme cases.
Posted by krippy2k8
on 06 September 2012 - 08:41 PM
Unless I'm wrong, having 1 class with 100 methods, versus 1 base class and 10 subclasses with 10 methods each, will result in 1 object vs 11 objects
What do you mean by sensible design?
1) Unless you're spawning many monsters every second, the overhead of spawning 11 objects per monster instead of just 1 is negligible and typically not worth compromising your design. This is in fact a perfect example of why you shouldn't engage in these types of "optimizations" without analyzing your bottlenecks first. If it has no impact on the overall performance of your application, it is a useless optimization.
2) When you have something like MonsterAttackModule, if it is primarily just a collection of methods and doesn't manage state, you can usually get away with only having a single MonsterAttackModule instance per monster type, thus incurring no additional allocation overhead per monster. Or perhaps just a collection of static methods, and thus incurring no allocation overhead at all.
3) Paying more attention to your higher level design can produce performance benefits many orders of magnitude greater than your micro-optimizations. Compromising such a design in any way in favor of micro-optimizations without profiling will almost always hurt you in the long run.
This doesn't mean that you should never think about low-level performance issues as you write your code (you should), but only where you know it will make a significant and noticeable difference, or where it won't compromise your design in any meaningful way.
E.g. block-copying memory instead of copying it a byte at a time is always a good idea, because it has a significant performance impact and doesn't usually force any design compromises. Using string1.append(string2) instead of string1 = string1 + string2 is also usually a good idea, as it eliminates the creation of a temporary string and has no impact on your design.
Creating a monolithic class with 100 methods because you think it's faster to allocate is definitely not a good choice for a premature optimization.
Posted by krippy2k8
on 03 September 2012 - 02:51 PM
With a virtual server, you usually have something like 50 virtual servers (of which probably 30 are porn websites) running on a 8-core machine under the assumption that nobody will be needing big power anyway, and if someone needs a burst, it's ok for the others to be stalled for half a second or so.
This was true several years ago, but not any more. CPUs now have virtualization support built into the hardware, and operating system support is much better. On a decent VPS you typically get a certain amount of dedicated CPU power/time (often a dedicated CPU core), with the ability to burst beyond that when the other virtual servers are idle. If your server is using a lot of CPU, none of its power is available for the other servers to burst into. So while there is the potential for some short delays, they would typically happen when a game starts, not while a game is running.
The only real issue is disk IO which is not likely to be a big problem for a game like this.
I wouldn't suggest trying to run a Counter-Strike server on one, but it should work okay for an RTS. RTS games are usually fine with an occasional blip anyway.
PeekMessage doesn't clear the MSG structure if no message is read, so processing WM_INPUT outside of that block means that any time you receive a WM_INPUT message, you're going to process that same stale message over and over until another message arrives, which means your Render function is not going to get called very often.
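A sketch of the usual structure (the handler names ProcessRawInput and Render are placeholders for whatever your code does), with WM_INPUT handled inside the PeekMessage block so a stale MSG is never reprocessed:

```cpp
#include <windows.h>

// Assumed to exist elsewhere in your code; shown only for structure.
void ProcessRawInput(const MSG& msg);
void Render();

void RunMessageLoop()
{
    MSG msg;
    bool running = true;
    while (running) {
        // Drain every pending message before rendering. MSG is only
        // valid when PeekMessage actually returned one, so all message
        // handling stays inside this inner loop.
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT) {
                running = false;
            } else if (msg.message == WM_INPUT) {
                ProcessRawInput(msg);
            } else {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
        Render(); // runs once per frame, after the queue is empty
    }
}
```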
Hosting from your cable connection is not a good idea, except maybe for testing. You're probably going to want at least a Virtual Dedicated Server (or Virtual Private Server). You can get a decent VDS/VPS for around $30 per month that should handle 32-players in a simple RTS game with reasonable performance.
this is one of the things that really drives me mad about C++ ... there should be (is there?) a compiler switch to auto initialise every variable to zero and get rid of this '60 cold war age nonsense, I'd happily pay the performance penalty for that.
And make your code completely unsafe for anybody else that uses it.
Automatically initializing variables to zero is not just a trivial performance issue, it can have huge performance implications on some code and algorithms, and the potential benefits of doing so are marginal; it can eliminate some bugs, while making other bugs harder to track down.
Not sure if there are any special considerations for Total Commander plugins, but in general the way you would do what you want is exactly what Endurion says you can't. You open the file, move your file pointer to the beginning of the packed file you want to delete, then move the data following the current file up and truncate the file. There is no need to use temporary files.
What I've read until now always said singletons should be avoided at all costs, so this is a case where a singleton should be used, right?
No, you should not avoid them at all costs. You should avoid them when they would put unnecessary restrictions on your application, or when the cost of avoiding them is higher than the cost of using them.
QCoreApplication is at its core an interface between the application and the operating system. It does not make sense to have more than one such object in your application. It is also an object manager, in that it maintains a collection of all QObjects in the application and supports sending events between them.
A case can be made that it could be useful to have more than one object manager in an application, and that is true. But having a master QCoreApplication that maintains a collection of all objects in the application does not preclude you from having separate object managers that manage distinct collections of objects. In fact, QObject is itself a manager of other QObjects, so it's a trivial matter to create as many object managers or window managers as you want, while still having the one master manager that can handle things like shutdown events from the operating system and then send out messages to everything else in the application, so that it shuts down gracefully where possible.
And perhaps more importantly in this case, having a singleton QApplication as the top level window manager helps make the Qt library more user friendly, and that is at the core of the design principles behind Qt.
It doesn't sound like it should be all that difficult: you just need a mutex, and you lock it before pushing data onto the back of the queue and before removing data from the front. You also need to lock it in the audio thread when determining how much data is in the queue, though you probably don't need to keep it locked while actually pushing the data to the device.
On the other hand, it might make more sense and be easier to manage to just use a queue of audio chunks instead of a single byte array. Then you can just use a basic thread-safe queue for pushing/popping chunks on and off the queue. If you use a pool of pre-allocated chunks it will probably be more efficient too.