
About LordJulian

  1. I actually am not very fond of singletons; as stated before, they can easily be abused into taking a role different from the one they were designed for. And, again, I'm not a fan of "the role they were designed for" either. You NEED one instance? I'm gonna trust that you are a big enough boy to instantiate it only once. You cannot? Oh, I have news for you... you're doing something wrong :) <insert rage replies here>

Also, I've met, more than once, naïve singleton implementations that did more wrong than right. For example, in one function I wrote I needed to make sure that said singleton was "destroyed" (the whys of me needing that are a whole different STUPID story; needless to say, I needed to), and simply checking whether the "singleton" instance was != NULL had the "nice" side effect of instantiating it if it had not been instantiated before... WHICH WAS STUPID, error inducing and simply nerve-wracking. Having to check the implementation of something that advertises itself as a singleton, and then modify it, is not my idea of a fun time :)

Usually, through careful architecture design, no singleton would ever be implemented... nor global variables. But, sure, as with everything else in C++, if you want it, you can usually build/use it; you just pay the appropriate price for it.
  2. 2 more cents thrown in: DON'T! Don't use deferred shading as described by me above; research it and use it as described by someone who has actually used it. I only wanted to paint the available pictures.

Also, my advice is: use the forward shading technique, it's more forgiving. When that fails to provide the results you're looking for, add a post-processing step and milk that into more and more complex filters. In no time you'll have done all the steps that prepare you for deferred shading and you won't even know it. Don't go full deferred until you are ready: deferred shading, while offering an elegant model that promises to solve lots of issues, presents lots of problems in itself. You'd have to be more disciplined in the way you render your scene, with the different render targets and the way you pack/unpack information, etc. In one word, you'd have to be obsessive about managing your rendering pipeline.

And, before you reach the "relevant shading data for that pixel" stage, you still have to first RENDER that data into the render targets; this is done, of course, with a custom shader that outputs all the needed values, which will later be interpreted by the super-shader. So shader complexity is not really lower, it's just separated into multiple shaders, which is, in itself, a complex thing to do.

So, go gradually, and know that whatever you learn and experiment with in "the classical way of applying shaders to models and basic lighting" will stay with you and remain relevant. Deferred shading is, after all, forward shading a full-screen quad with some real-time generated textures :).
  3. "After reading what Servant of the Lord wrote on this, I really can no longer say it is good practice. Let's say that I do call delete on a pointer twice. If I set it to NULL, nothing happens and the error is never found and corrected. However, if I don't, the program crashes and the error gets fixed. It would seem to be better practice to not give yourself the ability to do things incorrectly in the first place."

Late reply, but better late... you know the rest.

There are two kinds of "best practices". The first one is the over-zealous, over-religious, fanatic approach: "the program should blow to bits as soon as I do something stupid, so I get a chance to gather all the context I need in order to fix it". This is wonderful, and for a while I was a zealot for it. But it is good IN TESTING CONDITIONS, when you have the means to do something about the crash and another crash won't matter that much.

The second one is the motherly, lovely, caring, "peace to the world" type of thinking, in which you try to recover and give the program as many chances to continue like nothing happened as you can. This is good for release code, when a crash is the worst thing you could do. Try to have them both and to switch easily between them.

Think of this as a theater play / live show. During rehearsals, the director and actors stop at every mistake, correct it and start over; that's what rehearsals are for. But during a live performance, if they stumble, they do whatever they can to carry on until the end of the show and recover the normal flow as soon as possible. Stopping the event and restarting it at each mistake would be too much for the audience. (Back to game context:) not to mention that console owners will usually reject your game for any crash :)
  4. Well, since the original topic went straight to hell and since everyone is throwing their hat in the ring, here I come as well. For me (a developer of 6.5 years at a huge game company, having worked on a few AAA titles you might just have heard of - Assassin's Creed, anyone? - on all the major consoles out there), the deal is like this: GAME DEVELOPERS (because this was the original context of the question) choose to build their engines in C/C++ because of any/some/all of the below:

- tradition: engines have been made in C/C++ since forever; whatever came before that couldn't really be called an engine.

- 1st/3rd party libraries: there are literally millions of libraries and APIs written in C/C++ out there. Sure, most of them are junk, but you simply cannot build a complete engine without some of them. Also, you can access them from almost any other language, but why should you? Plus, any translation layer is likely to cost you.

- platform support: even though this is basically the previous reason, it deserves repeating: any platform owner (game consoles, mainly, for our purposes) will deliver an SDK that targets C/C++. That's it. If you want to use it from a different language, a wrapper is a must.

- the promise of gold at the end of the rainbow: raw memory access, the way you like it. Even though you don't need it at the beginning, when push comes to shove and the framerate just isn't high enough, you WILL want to fiddle with memory layout and all the other tricks in the book that yield better performance. Don't mistake this point, it is for ninja-tier programmers, but if you want it, it is there. I've witnessed some very nice, very low-level trickery done by a colleague of mine on a PS3 project, on a particle implementation that was already heavily optimized on the main platform. The result was mind-blowing: we got the particles almost "for free" on PS3, while they were a major strain on the main platform.
To summarize: given a good C# programmer and an average C++ programmer, the C# programmer will probably produce faster code on most tasks; but given excellent programmers in both languages, my money is on the C++ one, every time, anytime. He just has more wiggle room and the promise of total memory layout freedom.

- rock-solid compilers: C++ compilers usually carry 20+ years of effort spent on delivering very, very fast code. The things the C++ compiler does with your code are just amazing. The other compilers are catching up quite fast, so this is quickly becoming a non-reason, but still, C++ compilers (and linkers, specifically) are geared towards maximum-speed output, given the proper switches. Granted, with a certain compiler I was once able to write a VERY simple program that gave WRONG results in a certain configuration, but that was a solitary issue; they fixed it and we never spoke of it again.

Well, there are a few more reasons, but basically this is it. And now a piece of advice for you: if you want to make a game and you're sure you can do it in C#, GO AHEAD. It is a lovely language with quite a few tricks up the compiler's sleeve. If you want to make an engine... do it, for fun, in C++. You will never finish it on your own in reasonable time with production-ready features, but it's a very nice exercise and you will gain a lot from it.

Have fun!
  5. Brother Bob wanted to stay away from exotic lighting techniques, but I'm feeling adventurous. Basically, ignoring extremely exotic lighting techniques (and if anyone else feels more adventurous, go ahead), there are two main ways to do lighting:

1) forward shading: you render your scene model by model and do the lighting in either the vertex shader or the fragment shader, depending on the effect you want to obtain and the trade-off between visual quality and speed.

2) deferred shading: you first render your scene into special textures that retain different attributes per pixel, like diffuse color (before lighting), screen position (depth), normal and any other attribute you would need in your lighting calculations. After the first pass is complete (all opaque objects have been rendered into these special textures), you draw a fullscreen quad with a special shader that uses all these textures to perform lighting on each pixel of the screen. This technique is a bit more tricky, but I'm assuming you're not afraid and you want to learn new and exciting stuff. Lots of new games use it, but it can be quite heavy on texture memory. Also, handling transparent objects can be quite tricky, but there are tons of books explaining all that (ShaderX 4-7, GPU Gems, etc.).

After all that, you can (and should, because it's nice) do some image post-processing. In deferred shading, since you have more information per pixel than just color and depth, you can get quite creative with your post-processing. In forward shading, for more complex post-processing you will most likely need to render the scene multiple times into multiple textures. Come to think about it, deferred shading can be considered a quite heavy post-processing effect :p
  6. And, taking the wonderful example of Rod, if you don't feel like including a whole .h(pp) file for a single variable, you can extern-declare said variable in any place you want it, i.e.:

[code]
void BlaBluBli()
{
    int a = 2;
    extern int counter;  // block-scope extern declarations cannot carry an initializer
    counter = a + 7;
}
[/code]

As long as you define it (non-extern) exactly once, every other extern declaration will be resolved by the linker.

But, DON'T! NEVER! Not even once! As an abuser of global variables, especially at work, especially when I need to do things quickly, I can tell you that it usually comes back to byte you (pun intended!). The hassle, when you decide to expand on your work, far outweighs any gains; usually this is a sign that your overall design has a few flaws in it. Don't do it, EVER, I mean it :p (or if you do, don't let anyone else play with your globals :p )
  7. Java: Cannot find symbol error

  8. Wanting to delete memory and not being sure whether you actually should delete it is the root of all evil. Try to impose as many rules as you can on your own code and make it as close to impossible as possible for others using your classes to do something stupid. One such idea would be to enforce that creation and deletion of your objects stay in your own code (private constructors/destructors and a friend manager class?). Anyhow, it's a tricky subject that cannot be taught in a few posts. Learn the basics of allocation/deallocation very well, experiment a lot and grow wiser. At some point, you will feel confident enough to answer your own question (and, from time to time, you will fail, but keep it up). You will know that you have the right implementation the moment you KNOW that you MUST delete some memory and don't wonder anymore.
  9. OpenGL glBindBuffer zero target valid?

    Ask a friend ...
  10. OpenGL glBindBuffer zero target valid?

    Seems like a gDEBugger issue with not correctly identifying this case. If you're concerned about performance, you could #ifdef that zero-buffer-binding line to only run in debug builds. This way you keep the behavior when running in debug mode, which allows you to check for issues, and once you're sure you have none, you won't need to unbind buffers anymore.

As for the actual performance delta, ALWAYS MEASURE. Run a test scene that stresses that particular binding/unbinding part of your code, both with that line and without it, and time the difference. Ask a friend with a different video card to run the same test and compare the data. Repeat the test lots of times, for consistency. After that, make the decision based on those numbers.
  11. OFFTOPIC, straight ahead! AHAHA! I thought my sharp-jitsu senses were tingling from the 1st implementation. In C#, indeed, the behavior is inferred from the type being a value type or a reference type (and from the fact that the garbage collector sweeps all the time behind your back).

Now, in C++ you have to do all the hand-holding yourself for all your class instances (and if you want to share them around via pointers, you're gonna have a ... good... time). The thing is, try to keep it simple and "make up" rules for yourself, like "whoever requests a pointer must also tell me when they don't need it anymore" or "I made you, I'll destroy you", i.e. "whoever calls new will also call delete".

Also, when learning C++, since you have all the time in the world (yeah, right...), my personal advice, and some people will certainly disagree, is to try to do it only with pure C/C++ features, not with all kinds of libraries (Boost, etc.). It helps separate learning the language from everything else and, AFTER you learn to do it with the pure language, you will be better at identifying the pros and cons of each library feature and will choose them wisely. Again, my very own personal opinion (and I am allowed to have one, dear internet :) ) is that using only language features is beneficial for learning, and libraries are for productivity. Have fun!
  12. Ohai! First, some background: the DispatcherTimer does not create another thread to run its stuff on; it is periodically checked from the WPF UI thread. This is why you can access UI elements directly and not get an exception. Now, the UI thread itself (and here is where I start assuming stuff without any real basis) runs once per rendered frame (running multiple times would be wasteful). And the rendered frame, my guess, is vsync... umm... synchronized. So, no matter how small you set the interval of a DispatcherTimer, it won't be checked faster than your refresh rate (which, coincidentally, is almost always 60 hertz).

Now, what I would do: I would start a BackgroundWorker (which does create a different thread) and run an infinite loop in which I do my work and then sleep a bit (5 ms minus the time it took to do the work this iteration). All would be nice and well if I only wanted to check stuff that is "external" to WPF (like streaming from the HDD, checking for a network message, etc.).

And from here, trouble: if you want to access and, heaven forbid, modify UI elements from the secondary thread (our BackgroundWorker), you have to use Dispatcher.Invoke or Dispatcher.BeginInvoke, and this is where things get messy: Dispatcher.Invoke and Dispatcher.BeginInvoke (msdn them to see the difference, or PM me) do their work on, yes, you guessed it, the Dispatcher thread (in our case the WPF UI one). And, as with everything dispatcher-driven, it is the WPF UI thread that checks the Invoke/BeginInvoke work queues, yep, you've guessed it, ONCE PER RENDERED FRAME.

So, if you go the BackgroundWorker route BUT you NEED to access or modify at least one UI element EVERY loop, then your update rate will slow down to 60 times per second (if you use BeginInvoke this won't happen, but the UI changes will still land at the same rate - 60 hertz - in both cases).
My recommendation (and this is a general WPF recommendation): split the UI logic from the application logic and UPDATE THE UI ONLY WHEN YOU REALLY NEED IT. So do your update cycle once every 5 ms, if you have to, but don't touch the UI unless and until you really need to (and never once per update cycle). And the real answer: I am unaware of any way to SPEED UP the WPF update cycle (and I guess it's not recommended either way). Cheers!
  13. Sorry to double post, but, for learning purposes, here's another suggestion: if you want to track "troublesome bugs" with memory allocation (i.e. using pointers after you delete them - happens more often than you think), set freed pointers to an invalid but easily recognizable value, kinda like 0xfefefefe. Then, when the program blows to bits, you look at the pointer in the debugger, and if it matches (or is close to) 0xfefefefe, you know you have this problem. Enjoy!
  14. [quote name='Bregma' timestamp='1350328563' post='4990477'] If you absolutely want to avoid using smart pointers, you could try using a cleanup member function. [code]
class some_class
{
public:
    some_class(int);
    ~some_class();
private:
    void cleanup();
private:
    int *ptr1;
    int *ptr2;
    int *ptr3;
    int *ptr4;
};

some_class::some_class(int some_val)
: ptr1(nullptr), ptr2(nullptr), ptr3(nullptr), ptr4(nullptr)
{
    try
    {
        ptr1 = new int(some_val);
        ptr2 = new int(some_val);
        ptr3 = new int(some_val);
        ptr4 = new int(some_val);
    }
    catch (...)
    {
        cleanup();
        throw;
    }
}

some_class::~some_class()
{
    cleanup();
}

void some_class::cleanup()
{
    delete ptr4;
    delete ptr3;
    delete ptr2;
    delete ptr1;
}
[/code] This takes advantage of the fact that it's OK to use the delete operator on a pointer equal to nullptr. [/quote]

Quite a good suggestion, BUT: in the cleanup function, check for NULL and, if the pointer is not NULL, delete it and assign NULL to it. I know that delete checks whether the pointer is NULL, but for teaching purposes it is good to make that explicit. Also, setting a pointer to NULL after deleting it is not mandatory, but it is, again, good practice and, perhaps, would keep the user from double deleting the same pointer and/or accessing it after deletion.
  15. [quote name='swiftcoder' timestamp='1347630994' post='4980054'] [quote name='GrandMaster789' timestamp='1347617031' post='4980011'] - Code will compile to just about any platform[/quote] I actually missed this one the first time around. It should probably say something more like "Code can be beaten into compiling on just about any platform, given a significant porting effort". [list] [*]Want to compile your 32-bit code with a 64-bit compiler? Sorry, your standard integer type changed size, and nothing works anymore. [*]Want to compile your standard-conforming code on Android? Sorry, we disable RTTI and exceptions, just because. [*]Want to compile your file-system tool on Windows? Sorry, we only kind-of support POSIX over here. [*]Want to compile your file-system tool on Mac? Oh, hey, we do support POSIX, but we moved all the header files. [*]Want to compile your GUI application on a Mac? Sorry, we don't have Win32 or MFC. We do have GTK+ and QT, but they have subtle differences from their Linux counterparts. [/list] Even though a number of those points are library related, when you compare to the portability of a language/ecosystem like Java, C++ just doesn't cut it. [/quote]

And then you start thinking about more game-like issues, such as direct memory access and, let's not forget, execution speed, and then you see the light.