Toji

Members
  • Content count

    410
  • Joined

  • Last visited

Community Reputation

535 Good

About Toji

  • Rank
    Member
  1. OpenGL: OpenGL 4.0

    While this is nice and all, does anyone else get the feeling that Khronos has officially given up any pretense of introducing new features and is now content to just follow DirectX around? Kinda sad, considering it wasn't that long ago that they were the pack leader. Beyond that, they're missing the one big feature that (for me, anyway) is the primary draw of DirectX 11: multithreading. That whole API has been designed around segregating thread-safe and non-thread-safe calls, which is a huge plus even if you don't intend to target DX11-level hardware. As such, just adding a few new features again is a bit of a letdown. [Insert "Should have been called OpenGL 2.x" joke here.] Still, at the very least this exposes the current hardware capabilities to non-Microsoft platforms, so I guess I can't gripe too much...
  2. Simple answers:
     1) Is D3D10 inherently slower than D3D9? No. In fact, it has the potential to be faster.
     2) Is your driver's implementation of 10 slower than 9? Quite possibly. Several years after the fact, the drivers are still improving. Unfortunately there's just not much D3D10 software out there (let's not talk about whose fault that is), so there's not much push to focus on its performance.
     3) Is the way you are using D3D10 optimal? Almost certainly not, especially if you're simply wrapping it in a framework that was built around D3D9 concepts. D3D10 must be used differently if you want to get peak performance out of it, so renderers built to support both will rarely do so optimally for both cases.
  3. SDL Directx 10 on Vista

    Quote: Original post by Twinblad3r
    I'm still pretty new to DirectX and I read there are major changes between DX9 and DX10, but all I found on Google is unified shaders. Can someone briefly explain to me if there are other differences?

    First and foremost, DX10 is Vista-only. That may limit your decision to program with it depending on your target audience. Beyond that, the differences get pretty technical, but the basic idea is that DX10 is a complete rewrite of DirectX to take advantage of newer hardware and driver models. This makes it very different to program in than DX9 or any of the previous versions. Some of the highlights are:
    * No fixed-function rendering. You MUST use shaders in DX10 (not a bad thing, mind you).
    * Addition of geometry shaders.
    * Shader uniforms are built and passed as buffers, letting you use them like textures or vertex buffers (see the sketch below this post).
    * Heavy emphasis on render-to-surface. Even your main window is basically a texture that you render to.
    Overall the changes streamline the process greatly, though you will still see a lot of complaints about the API (primarily because of its ties to Vista). What it boils down to in the end is that you should use DX9 for older hardware or OSes and DX10 for newer hardware on Vista and up. For more info, start here
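    To illustrate the "uniforms as buffers" point above, here is a minimal sketch of creating and binding a D3D10 constant buffer. It is not taken from the post; the struct layout and names are hypothetical, and it assumes an existing ID3D10Device* named device.

        // Hypothetical per-frame constants; total size must be a multiple of 16 bytes.
        struct PerFrameConstants
        {
            float worldMatrix[16];
            float tintColor[4];
        };

        ID3D10Buffer* constantBuffer = NULL;

        D3D10_BUFFER_DESC desc;
        desc.ByteWidth      = sizeof(PerFrameConstants);
        desc.Usage          = D3D10_USAGE_DYNAMIC;          // updated by the CPU each frame
        desc.BindFlags      = D3D10_BIND_CONSTANT_BUFFER;   // bound like any other buffer resource
        desc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
        desc.MiscFlags      = 0;

        if (SUCCEEDED(device->CreateBuffer(&desc, NULL, &constantBuffer)))
        {
            // Bind to slot 0 of the vertex shader stage; the shader's cbuffer
            // declaration maps onto this buffer by register.
            device->VSSetConstantBuffers(0, 1, &constantBuffer);
        }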
  4. Great, thank you! *sigh* Visual Studio has spoiled me so badly... I really need to do more GCC work.
  5. I've been working on a C++ binding library for Squirrel recently, and have had a couple of people ask for GCC support. I'm not terribly familiar with GCC, but I thought I would give it a go. Fortunately, most of the errors were pretty easy to clean up, but I've got one that's got me stumped. With the following code, I get an error:

        typedef int (*SQFUNCTION)(int);

        template <class R>
        class SqGlobal {
        public:
            static int Func0(int vm) {
                // Doesn't really matter what's in here.
            }

            template <class A1, int startIdx>
            static int Func1(int vm) {
                // Doesn't really matter what's in here.
            }
        };

        template <class R>
        SQFUNCTION SqGlobalFunc(R (*method)()) {
            return &SqGlobal<R>::Func0; // This works fine, for the record
        }

        template <class R, class A1>
        SQFUNCTION SqGlobalFunc(R (*method)(A1)) {
            return &SqGlobal<R>::Func1<A1, 2>; // ERROR HERE
        }

     In Visual Studio this all works fine and dandy, but in GCC I get:

        error: expected primary-expression before ',' token
        error: expected primary-expression before ';' token

     Both of which point at the line I've marked ERROR HERE. Wondering if it was me using an int in the template list that was problematic, I removed it and got this instead:

        error: expected primary-expression before '>' token
        error: expected primary-expression before ';' token

     So that's not it, apparently. Now, correct me if I'm wrong, but that seems like perfectly valid (if slightly complex) C++ code... I've been poking around for a while now, and can't figure out why it's complaining, much less how to fix it. Does anyone have anything that could point me in the right direction?
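     For anyone who hits the same errors: GCC is enforcing the rule that a member template of a dependent type must be named with the template disambiguator, which Visual Studio lets you omit. A minimal sketch of the change (the surrounding code is unchanged from the post above):

        template <class R, class A1>
        SQFUNCTION SqGlobalFunc(R (*method)(A1)) {
            // SqGlobal<R> depends on the template parameter R, so standard C++
            // requires 'template' before the member template name Func1.
            return &SqGlobal<R>::template Func1<A1, 2>;
        }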
  6. Quote: Original post by frob
     At the risk of derailing the thread... The reverse of this is actually true.

     I suppose this is in the eye of the beholder (and maybe I'm just still feeling burned from the OpenGL 3 fiasco), but you did a good job of highlighting the pros and cons of the different philosophies there. The point I was trying to bring across (not too well, now that I look at it) is that in modern graphics there are several well-established "right" ways to do things. Sure, you can structure your animation framework a billion different ways, but you should always be submitting your vertices via a VBO (see the sketch below this post), you really ought to be using shaders if you want to keep up with the hardware, that sort of thing. DirectX actively (if somewhat strictly) encourages these "good" behaviors. In OpenGL, however, it is actually most natural (i.e. no extensions required) to do things in what is possibly the slowest and most limited way! Not to mention many of the OpenGL resources available will happily teach you immediate mode till the cows come home, and oftentimes buffers and shaders are some small footnote at the end of the series. You really do have to actively search for information on how to use the most optimal path, which is going to be difficult for a newcomer to the graphics scene. And now I'M derailing the thread. Sorry :) Still, it really is about preference in the end, as both are very capable APIs. Don't pay too much attention to us nitpicking the details!
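     As a concrete illustration of the VBO point above, here is a minimal sketch (not from the original post) of uploading vertex data through a buffer object instead of immediate mode. It assumes an existing OpenGL context plus a hypothetical 'vertices' float array holding 'vertexCount' three-component positions.

        GLuint vbo = 0;
        glGenBuffers(1, &vbo);                       // create the buffer object
        glBindBuffer(GL_ARRAY_BUFFER, vbo);          // bind it as the current vertex source
        glBufferData(GL_ARRAY_BUFFER,                // upload the data once, up front
                     sizeof(float) * 3 * vertexCount, vertices, GL_STATIC_DRAW);

        // At draw time: no glBegin/glEnd, just point the pipeline at the buffer.
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0);          // offset 0 into the bound VBO
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);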
  7. Both have a lot of tutorials behind them, and there are a lot of shared concepts between them too. I would say that your choice should come down to programming style and project scope. The most obvious difference is that DirectX is object-oriented (via COM, although the COM layer is pretty much hidden away) while OpenGL is very functional in nature. Therefore, if you find it more natural to develop with an OO mindset, go DirectX; otherwise, look at OpenGL. Another consideration is whether you are planning on targeting platforms other than Windows, in which case you should look at OpenGL as well (since it's really the only option outside of the Microsoft platforms). If you are just starting out with 3D, though, I would highly recommend setting cross-platform desires aside until you're feeling more secure in some of the concepts. Finally, it should be noted that while DirectX has been moving pretty fast lately (in a good way), OpenGL's API has stagnated, with all new functionality coming in through a somewhat clunky extension mechanism. As a result, OpenGL has a lot of outdated cruft still lingering that can make it a tad more confusing for newbies who are looking to learn how "modern" graphics programming works. Good luck, whichever path you choose!
  8. Quote: Original post by Lovens
     And also, is it wise to switch IDE in the middle of a project? If you have "nightmare" stories to share, please do so.

     I just did, and I'm none the worse for wear. I have a project at work that we were trying to improve load times on. It was originally a .NET 2.0 project done in Visual Studio 2005, but we found that upgrading to .NET 3.0 and 2008 gave us a "free" 19% speed bump! The upgrade literally took all of 3 minutes, and we've been humming along nicely ever since. Of course, you don't have to move to .NET 3.0 with 2008 (your .NET version is just a drop-down in the project settings), but honestly, there's not much reason to migrate if you're planning to stick to 2.0. And yes, 3.0 (and 3.5) are fully backwards compatible with 2.0. I made no code changes whatsoever during my upgrade.
  9. Just wondering if anyone here can point me in the right direction. I'm looking for an algorithm (one that can be implemented in a shader) that will process an image so that it is easier for a colorblind person to make out differences in their problem colors. A couple of nights' worth of Googling hasn't given me much except examples of pictures that have been run through the process. Conversely, I'd also be interested in a process to simulate the various forms of color blindness, which certainly seems to be the more popular topic. Thanks for any tips! (And yes, to answer the obvious question, I am colorblind. Red/green deficient, to be exact.)
  10. IDs versus Pointers

    Quote: Original post by dmatter
    - D3D10 doesn't lose devices so I could safely cast pointers to and from integers which again avoids lookup overhead.

    So I take it you're not developing in a 64-bit environment? Honestly, that's fine if you know what you're doing and you know that you will only be running on a single targeted platform (in this case, 32-bit Vista only), but as a general rule, things like casting pointers to integers should be discouraged due to compatibility issues between OS versions (or even different compilers on the same OS, for that matter). See the sketch below this post.
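    A minimal sketch (not from the thread) of why the plain integer cast is fragile and what a portable alternative looks like. The function names are hypothetical; uintptr_t comes from <cstdint>/<stdint.h>, so older MSVC versions may need an equivalent pointer-sized typedef.

        #include <cstdint>

        // uintptr_t is guaranteed wide enough to hold a pointer value,
        // which a 32-bit unsigned int is not on a 64-bit platform.
        uintptr_t MakeHandle(void* resource)
        {
            // unsigned int id = (unsigned int)resource;  // silently truncates on 64-bit builds
            return reinterpret_cast<uintptr_t>(resource);
        }

        void* ResolveHandle(uintptr_t handle)
        {
            return reinterpret_cast<void*>(handle);       // round-trips the original pointer
        }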
  11. Great! Thank you! I feel a little silly now for not realizing what was going on (obvious in retrospect), but I'd rather feel silly and learn something than remain blissfully ignorant of my own bad practices :) Thanks again to everyone who lent their advice! Rate++
  12. Ask and ye shall receive:

        // For catching memory leaks
        #define _CRTDBG_MAP_ALLOC
        #include <cstdlib>
        #include <crtdbg.h>

        #include <string>
        #include <iostream>
        #include <boost/spirit/core.hpp>

        using namespace std;
        using namespace boost::spirit;

        class MyGrammar : public grammar< MyGrammar >
        {
        public:
            template <typename SCANNER>
            class definition
            {
            public:
                rule< SCANNER > myRule;

                const rule< SCANNER >& start() const { return myRule; }

                definition(MyGrammar const &self)
                {
                    myRule = int_p >> ch_p(',') >> int_p;
                }
            };
        };

        int main( int argc, char* argv[] )
        {
            { // Create some scope to clean up before we hit the leak dump
                string src = "1,2";
                MyGrammar gr;

                if(parse(src.c_str(), gr, space_p).full) {
                    cout << "Success!" << endl;
                } else {
                    cout << "Fail!" << endl;
                }
            }

            _CrtDumpMemoryLeaks();
        }

     That's about as simple as it gets, no? When I run this, it outputs the following:

        Detected memory leaks!
        Dumping objects ->
        c:\program files\microsoft visual studio 8\vc\include\crtdbg.h(1147) : {121} normal block at 0x003660A8, 16 bytes long.
         Data: <\SC H`6 > 5C 53 43 00 00 00 00 00 01 00 00 00 48 60 36 00
        c:\program files\microsoft visual studio 8\vc\include\crtdbg.h(1147) : {119} normal block at 0x00366008, 4 bytes long.
         Data: < > CD CD CD CD
        c:\program files\microsoft visual studio 8\vc\include\crtdbg.h(1147) : {118} normal block at 0x00365FB8, 16 bytes long.
         Data: <42C `_6 > 34 32 43 00 01 00 00 00 01 00 00 00 60 5F 36 00
        c:\program files\microsoft visual studio 8\vc\include\crtdbg.h(1147) : {117} normal block at 0x00365F60, 24 bytes long.
         Data: < `6 > 00 00 00 00 00 00 00 00 CD CD CD CD 08 60 36 00
        Object dump complete.

     So give it a go if you don't mind! If it really is just my machine (well, both of my machines...) then I'll be a happy man! Thanks!
  13. I've been reading up on the component object approach to game entities recently (after being introduced to it by this journal entry) and have been intrigued by some of the concepts presented. The overall concept seems sound and logical, but I do have a couple of lingering questions that weren't quite answered by the articles in the previous link. Foremost of these: how do you handle object properties? Everything about this model seems to revolve around iterating over the game objects and their child components and letting each do its own isolated little chunk of work, isolated being the key word there. Most of the articles I've found mention the necessity for different components to communicate, but very few mention anything about inter-object communication. As an example: an AI component of an enemy object may need to observe the health of the player object. Health is not something that all objects in the game world will need, so it's a prime target for encapsulation in a component. But if this value is wrapped away in a component, how does the enemy AI access it? The components themselves should just be a generic interface that goes into a simple list of other components, yes? So in order to get to that value, it would seem that you would have to iterate over every component in the list, querying to see if it contains the desired property. While not horrible, that kind of overhead for every property lookup would be completely unworkable in even a marginally complex realtime game. So does anyone have an example of how this might be handled? (One common approach is sketched below this post.) I'm sure there are a few ways to do it, so any insights that you care to share would be appreciated.
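      One common answer, sketched here purely for illustration (not from the thread; all names are hypothetical): store each object's components in a map keyed by type, so another object can ask for a specific component, such as health, directly instead of scanning the whole component list.

        #include <map>
        #include <string>

        class Component
        {
        public:
            virtual ~Component() {}
        };

        class HealthComponent : public Component
        {
        public:
            explicit HealthComponent(int hp) : hitPoints(hp) {}
            int hitPoints;
        };

        class GameObject
        {
        public:
            void AddComponent(const std::string& type, Component* c) { components[type] = c; }

            // Typed lookup: returns 0 if the object has no such component, so a
            // caller (e.g. enemy AI asking for the player's health) can degrade
            // gracefully without iterating the entire component list.
            template <typename T>
            T* GetComponent(const std::string& type)
            {
                std::map<std::string, Component*>::iterator it = components.find(type);
                return (it != components.end()) ? dynamic_cast<T*>(it->second) : 0;
            }

        private:
            std::map<std::string, Component*> components;
        };

        // Usage: the enemy AI holds a pointer (or ID) to the player object and asks it directly.
        //   HealthComponent* hp = player->GetComponent<HealthComponent>("health");
        //   if (hp) { /* react to hp->hitPoints */ }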
  14. I've already posted this to the Spirit mailing list, but it's been pretty much ignored there, so I'll try my luck with you guys. Mostly I'm wondering if anyone else has seen this leak (so I know it's not me going crazy/being stupid :) ). I noticed, while using TinyJSON (a library that uses boost::spirit for its parsing), that after performing a single parse my program would leak ~60 bytes on shutdown. I initially suspected the JSON library, but upon further inspection found that memory seems to be leaked any time I instantiate a grammar, no matter how complex or simple the grammar may be. While the leaks are not large, this is a frustration, since it is an obvious goal to make one's applications leak-free. I have not yet seen any other reference to such a problem online, so I'm a little wary that this may be localized to my environment somehow. I have tested this on two machines at this point, both running Windows XP and Visual Studio 2005. Both boost 1.35.0 and 1.36.0 demonstrate the same issue. Is this a known bug, and if so, is there a workaround? Or is it perhaps an issue caused by my development environment? Thank you for any assistance!
  15. It is normal to wear out after a while, especially if you have a single large project that you've been working on for a long time. I program for a living (not games... yet), and to be frank, staying motivated is probably THE hardest part of the job, hands down! Now, as a professional, I don't have the luxury of kicking back and watching anime while waiting for my muse to come back, so I have to find other ways of keeping motivated. My methods? I will usually switch to another portion of the project for a while if it's big enough (i.e. leave the UI alone for a bit and work on networking), or if that's not an option, I'll go help a co-worker with a problem they're having or do research on an upcoming project. The key is to stay in the mindset of development and stay on tasks that help you feel productive, even if it's not the exact thing that you started out on. Of course, I feel lucky that my work trusts me enough to allow me to do that. Other employers may not be so generous. Now, granted, I still go home at night and kick back with some friendly rounds of TF2 :) Everyone needs an actual break! But here's the bottom line: burnout, even if only momentary, is normal. What separates the good programmers from the bad is knowing when to end your break and get back on the horse again!