

Member Since 02 Sep 2006
Offline Last Active Jan 19 2013 04:19 PM

Posts I've Made

In Topic: Program Wide Global Vars in C++?

29 December 2012 - 05:16 PM

I'm actually not very fond of singletons; as stated before, they can easily be abused into a different role than the one they were designed for. And, again, I'm not a fan of "the role they were designed for" either. You NEED one instance? I'm gonna trust that you are a big enough boy to instantiate it only once. You can't? Oh, I have news for you... you're doing something wrong :) <insert rage replies here>


Also, more than once I've met naïve singleton implementations that did more wrong than right. For example, in the function I wrote I needed to make sure that said singleton was "destroyed" (the whys of me needing that are a whole different STUPID story; needless to say, I needed to), and simply checking whether the "singleton" instance was != NULL would have the "nice" side effect of instantiating it if it hadn't been instantiated before... WHICH WAS STUPID, error-inducing and simply nerve-wracking. Having to check the implementation of something that advertises itself as a singleton, and modify it, is not my idea of a fun time :)


Usually, with careful architectural design, no singleton would ever need to be implemented... nor would global variables. But sure, as with everything else in C++, if you want it you can usually build/use it; you just pay the appropriate price for it.

In Topic: Miscellaneous OpenGL questions (mostly dealing with shaders)

29 December 2012 - 01:14 AM

Two more cents thrown in:


If you use deferred shading (as described by LordJulian above)

DON'T! Don't use deferred shading as described by me above; research it and use it as described by someone who has actually used it. I only wanted to paint the available pictures.


Also, my advice is: use the forward shading technique first; it's more forgiving. When that fails to provide the results you're looking for, add a post-processing step and milk that into more and more complex filters. In no time you'll have done the steps that prepare you for deferred shading without even knowing it. Don't go fully deferred until you are ready: deferred shading, while offering an elegant model that promises to solve lots of issues, presents lots of problems of its own. You have to be much more disciplined in the way you render your scene, with the different render targets and the way you pack/unpack information, etc. In a word, you'd have to be an obsessed management whore with your rendering pipeline (are we allowed to say whore in here? We should, it's the internets).


And before you reach the "relevant shading data for that pixel" stage, you still have to first RENDER that data into the render targets; this is done, of course, with a custom shader that outputs all the needed values, which will later be interpreted by the super-shader. So shader complexity is not really lower; it's just separated into multiple shaders, which is itself a complex thing to do.


So go gradually, and know that whatever you learn/experiment with in "the classical way of applying shaders to models and basic lighting" will stay with you and remain relevant. Deferred shading is, after all, forward shading a full-screen quad with some real-time generated textures :).

In Topic: Try/catch absurdity and calling destructors...

28 December 2012 - 11:35 PM

Quite a good suggestion, BUT: in the cleanup function, check for NULL and, if not NULL, delete and assign to NULL. I know that delete checks whether the pointer is NULL, but for teaching purposes it is good to suggest that. Also, setting it to NULL after deleting is not mandatory, but it is, again, good practice and, perhaps, would keep the user from double-deleting the same pointer and/or accessing it after deletion.

After reading what Servant of the Lord wrote on this, I really can no longer say it is good practice. Let's say that I do call delete on a pointer twice. If I set it to NULL, nothing happens and the error is never found and corrected. However, if I don't, the program crashes and the error gets fixed. It would seem to be better practice to not give yourself the ability to do things incorrectly in the first place.



Late reply, but better late... you know the rest.


There are two kinds of "best practices".


The first one is the over-zealous, over-religious, fanatic approach: "the program should blow to bits as soon as I do something stupid, so I get a chance to gather all the context I need in order to fix this". This is wonderful, and for a while I was a zealot for it. Again, this is good IN TESTING CONDITIONS, when you have the means to do something about it and another crash won't matter that much.


The second one is the motherly, lovely, caring, "peace to the world" type of thinking, in which you try to recover and give the program as many chances as you can to continue like nothing happened. This is good for release code, when a crash is the worst thing you could do.

Try to have them both and to easily switch between them.


Think of this as a theater play / live show. During rehearsals, the director and actors stop at every mistake, correct it and start over; that's what rehearsals are for. But during a live performance, if they stumble, they do whatever they can to carry on until the end of the show and recover the normal flow as soon as possible. Stopping the event and restarting it at each mistake would be too much for the audience. (Back to the game context:) not to mention that console owners will usually reject your game for any crash :)

In Topic: What kind of optimization makes C++ faster than C#?

28 December 2012 - 11:14 PM

Well, since the original topic went straight to hell and since everyone is throwing their hat in the ring, here I come as well.

For me (a developer of 6.5 years at a huge game company, having worked on a few AAA titles you might just have heard of - Assassin's Creed, anyone? - on all the major consoles out there), the deal is like this:


GAME DEVELOPERS (because this was the original context of the question) are choosing to build their engines in C/C++ because (choose any/some/all "of the below") :


- tradition: engines have been made in C/C++ since forever; whatever came before that couldn't really be called an engine


- 1st/3rd party libraries: there are literally millions of libraries and APIs written in C/C++ out there. Sure, most of them are junk, but you simply cannot build a complete engine without some of them. Also, you can access them from almost any other language, but why should you? Plus, any translation layer is likely to cost you.


- platform support: even though this is basically the previous reason, it deserves repeating: any platform owner (game consoles, mainly, for our purposes) will deliver an SDK that WILL target C/C++. That's it. If you want to use it from a different language, a wrapper is a must.


- the promise of gold at the end of the rainbow: raw memory access, the way you like it. Even though you don't need it at the beginning, when push comes to shove and the framerate just isn't high enough, you WILL want to fiddle with memory layout and all the other tricks in the book that yield better performance. Don't mistake this point: it is for ninja-tier programmers, but if you want it, it is there. I've witnessed some very nice, very low-level trickery done by a colleague of mine on a PS3 project, on a particle implementation that was already heavily optimized on the main platform. The result was mind-blowing: we got the particles almost "for free" on PS3, while they were a major strain on the main platform. To summarize: given a good C# programmer and an average C++ programmer, the C# programmer will probably produce faster code on most tasks; but given excellent programmers in both languages, my money is on the C++ one, every time, anytime. He just has more wiggle room and the promise of total memory-layout freedom.


- rock-solid compilers: C++ compilers usually carry 20+ years of effort spent on delivering very, very fast code. The things a C++ compiler does with your code are just amazing. The other compilers are catching up quite fast, so this is quickly becoming a non-reason, but still, C++ compilers (and linkers, specifically) are geared towards maximum-speed output, given the proper switches. Granted, with a certain compiler I was once able to write a VERY simple program that gave WRONG results in a certain configuration, but that was a solitary issue; they fixed it and we never spoke of it since.


Well, there are a few more reasons, but that's basically it. And now a piece of advice for you: if you want to make a game and you're sure you can do it in C#, GO AHEAD. It is a lovely language with quite a few tricks up the compiler's sleeve. If you want to make an engine... do it, for fun, in C++. You will never finish it on your own in reasonable time with production-ready features, but it's a very nice exercise and you will gain a lot from it.


Have fun!

In Topic: Miscellaneous OpenGL questions (mostly dealing with shaders)

28 December 2012 - 10:20 PM

Brother Bob wanted to stay away from exotic lighting techniques, but I'm feeling adventurous.


Basically, ignoring extremely exotic lighting techniques (and if anyone else feels more adventurous, go ahead), there are two main ways in which you do lighting:


1) forward shading: you render your scene model by model and do the lighting in either the vertex shader or the fragment shader, depending on the effect you want to obtain and the trade-off between visual quality and speed.


2) deferred shading: you first render your scene into special textures that retain different attributes per pixel, such as diffuse color (before lighting), screen position (depth), normal and any other attribute you would need in your lighting calculations. After the first pass is complete (all opaque objects have been rendered into these special textures), you draw a fullscreen quad with a special shader that uses all these textures to perform lighting on each pixel of the screen. This technique is a bit more tricky, but I'm assuming you're not afraid and want to learn new and exciting stuff. Lots of new games use it, but it can be quite heavy on texture memory. Also, handling transparent objects can be quite tricky, but there are tons of books explaining all that (ShaderX 4-7, GPU Gems, etc.).


After all that, you can (and should, because it's nice) do some image post-processing. With deferred shading, since you have more information per pixel than just color and depth, you can get quite creative with your post-processing. With forward shading, more complex post-processing will most likely require rendering the scene multiple times into multiple textures. Come to think of it, deferred shading can be considered one quite heavy post-processing effect :P