Firstly, many companies or teams do not distinguish sharply between gameplay programmers and other programmers. The skillsets overlap too much to make that a useful distinction anyway. It's rather an affinity towards certain kinds of problems.
Secondly, scripting languages are usually only available for very specific domains, like the GUI, AI scripting or cutscenes, and they are intended to be used by non-programmers. The reason you can script everything in engines like UDK and Unity is that their makers do not like giving out the source code or native APIs, making scripting pretty much the only option.
Thirdly, there's nothing wrong with knowing C++. I'd say, it's probably one of the most useful programming languages to know, even if you don't use it, as it gives a lot of insight into how programming languages do things and what the costs of certain features are.
Hello all, been a while. I'm working on this project for a university assignment and I would really like
delegates in C++, haha. So I came up with something in a side test project and I'm pretty sure it's
evil. Could someone explain to me what could happen if it is in fact evil (ignoring the usual errors that could happen)?
Delegates per se are not evil. There are plenty of implementations out there, with various pros and cons.
Your particular implementation has a pretty nasty fault that's also very easy to spot: the C-style cast to "handler_t". Pretty much anything can happen at that point, from compile errors to obvious runtime errors to subtle runtime errors.
As a general rule, when programming C++, stick to static_cast and dynamic_cast. When interfacing with legacy code or low-level code, use reinterpret_cast and const_cast. Never ever use a C-style cast, especially if you only put it in there to shut the compiler up.
When new-ing an array of non-PODs, the compiler needs to store how many elements are in that array, so that it knows how many destructors to call on delete[]. This value is often stored in memory just before the array. By extension, this also means that the array form allocates a bit more memory than N times sizeof(element), so your original placement new might write over the end of your buffer.
This behaviour is compiler specific; the best advice is to stay away from placement new for non-POD arrays!
If you can't do that, you may need to adapt your implementation for each compiler, so it might be easier to use a vector with a custom allocator.
Posted by Rattenhirn
on 28 December 2012 - 04:15 PM
Productivity, not language performance, is the key feature.
Since this seems to be your primary argument now, I've spent some time looking into that.
Productivity is very difficult to quantify objectively, and in my experience the main productivity gain you get from C# is the excellent library that comes with it, especially for GUI programs. Hence its high usefulness for tools.
But I'd like to hear more about those productivity gains!
Posted by Rattenhirn
on 27 December 2012 - 07:55 AM
GC is another topic that always pops up in these kinds of discussions... IMO it's pure nonsense. GC won't trigger if you don't "new" stuff, and will be VERY fast if you don't have objects with medium life expectancy. It's just a matter of taking some time to understand how the system works and how you can make it work for you... it's much easier to learn to deal with .NET's GC than to learn proper memory management in C++, plain or through the 6-7 "smart" pointers available.
Just as you try to avoid new and delete in your game loop in C++, avoid newing class objects in C# and GC won't cause any trouble.
Dynamic memory management by definition incurs a certain amount of performance penalty. No matter what system is used, these penalties can be managed.
However, a language that forces you to use one single tool for dynamic memory management, the garbage collector, limits your flexibility in dealing with issues that come up quite a lot.
This is why languages that allow manual memory management will always have an edge in performance potential. Whether that's used is up to the programmers involved.
I don't think general GCs will ever become so good that manual memory management won't matter any more. After all, GCs also need to be implemented somehow. ;)
So what will happen (and is happening already, if you look close enough), is that manual and automatic memory management will be mixed.
You don't seem to understand how the C# runtime works at all, so your claims are as wrong as it gets.
Every single C# function gets compiled to native code by the JIT the first time it is invoked; from that point on, that function is running native code, period. So the "more work to do for every instruction" claim is just... uninformed and uninformative.
This has been the case for ages, since Java started doing it a loooong time ago.
It's important to know that not all platforms allow emitting native code, because you either can't write to executable pages, can't change the executable flag on pages, or the platform will only execute code signed with a secret key. This is especially true for the platforms we're usually dealing with in gamedev (consoles, smartphones, tablets).
In all of these cases, there's no (allowed) way to avoid using runtime interpretation of byte code.
It is possible to "pre-JIT" byte code in some languages, but at that point you're basically back to a standard compiled language with a worse compiler.
Additionally, thanks to the LLVM project (and others like Cint or TCC), it's possible to JIT or interpret C and C++ source or byte code, closing this particular gap even more.
What remains is that "cafe based" languages (Java, .NET) need to assume a virtual machine to work properly. So runtime performance can only ever be as good as the match between this virtual machine and the real machine used, causing more and more trouble as the virtual machine ages and real machines progress.
Therefore, all other things being equal, one will always pay a performance penalty when using languages targeting virtual machines. The question is how big this gap is. In my opinion, this performance penalty will shrink to almost zero over time, as JIT and regular compilers converge (again, see LLVM).
Posted by Rattenhirn
on 27 December 2012 - 04:11 AM
SAT works by finding all the common axises (sp?) between the two colliders and checking how much the projections onto them overlap. If one of them doesn't overlap, there's no collision. Otherwise the collision normal can be derived from the overlaps.
AABBs, by definition, have the common axises global X, Y and Z. With the min/max tests you find out whether all of them overlap or not. So what's left to do is figure out how much they overlap, and then simply pick the smallest overlap, since the axises are also, by definition, the face normals.
I hope that helps!
And maybe someone can let me know what the correct plural of axis is... ;)