Zoomulator

Members
  • Content count

    68
Community Reputation

273 Neutral

About Zoomulator

  • Rank
    Member
  1. I've got some trouble making my (C++) OpenGL project work on ATI cards. I've only got NVidia cards myself, so it's rather difficult to track down the exact point of failure. Sorry for the lack of code samples. I'm only using OpenGL 3.3 core features, VBOs and that whole kit of stuff. There are no problems on any NVidia card I've tested so far, which is about 5 different models of varying age, some laptops, some desktops. SDL 1.2 handles input, window handling and context creation. I've tested on two ATI Radeon Mobility cards and both failed to run my project properly. The first one, a Radeon HD 5650, starts my project alright and renders [i]most[/i] things. I'm using an RUI8 texture and attaching it to a framebuffer for direct rendering to it. This seems to be failing on this ATI card, since the feature using it simply fails silently, while all other 'usual' GL rendering works fine. I'm using the RUI8 texture more or less as a bitmask. I am checking for framebuffer completeness and there are no other errors from GL. The second ATI card, a 7670M, won't even pass the test for GL 3.3, which I'm doing with GLEW, even though it's a GL 4.1 card!
     [CODE]
     glewInit();
     if( !GLEW_VERSION_3_3 )
     {
         cout << "OpenGL v3.3 required.";
         exit(EXIT_FAILURE);
     }
     [/CODE]
     It goes without saying that all the GL 4.1 NVidia cards I've tested passed this. I've tried searching for documented differences between the two manufacturers, but I've failed to find anything. Is NVidia more lenient, so that my code perhaps [i]shouldn't[/i] work in its current state according to the GL specs? Or is ATI patchy on less used features such as uint textures, at least on mobility/laptop cards? I do know that ATI is stricter with shader compilation, but my shaders [i]are[/i] being compiled successfully on the ATI cards (I'm doing all the checks). Has anyone encountered anything similar, or any differences between NVidia and ATI regarding standard core OpenGL 3.3?
  2. Then it would indeed make sense to move it. Just a side note about that, though: if you're doing this to make use of move semantics and gain performance, you might not actually get much out of it. The indirection caused by the pointer may cost more time in cache misses than copying a small matrix, like a 4x4, that's already present in the cache. The gain would be obvious with larger matrices, of course. I recommend you run some tests if this is for optimization.
  3. Another thing to keep in mind is [i]why[/i] you'd want to move it. If your matrix class is flat and contains the data itself, there's no point in moving it. Moving for optimization's sake only ever makes sense when a class holds pointers to data that it owns on the heap. In that case the move results in only the pointer being moved, so the heap data stays put and is now owned by the newly located object. In comparison, a proper copy/assignment operation should copy all the pointed-to data as well. But if the class contains all its data directly, it will have to move (copy, really) each value in the class to the new location anyway. It's the same procedure whether you copy it or move it, and you won't get any performance gain. The only reason to use move in this case would be to make absolutely sure that the data is unique to one instance, but since it's a matrix class this wouldn't make sense.
  4. In simple words, a message queue is where messages are stored until they're handled. Call it a buffer if you'd like. If you've ever used SDL, its events are put into a queue, which you pull events from at a good time in your main loop. A message queue is required here because your program can't just stop in its tracks whenever SDL receives new input and asks your program to handle it. Instead, you get the option to look at it later. This is called asynchronous message handling: you fire off an event and it may be handled at whatever time the receiver thinks is good. Much of networking works like this, and things like Node.js make use of it to a great extent. The opposite is synchronous message handling, in which case something fires off an event and won't continue execution until the event is handled. No queue is required there, because the receiver only ever handles one message at a time; the sender can't send another until the first is dealt with. Apart from being essential for networking and multi-threading, a message queue can also be used if you want to send events that won't be handled until the next game turn or "tick". Of course, a message queue can be polled at less frequent intervals too... I don't have much to say about that. But the most common use would be to handle events each iteration of the game loop.
  5. [quote name='littletray26' timestamp='1341531066' post='4956147'] So basically you're all saying that if I can use a global constant rather than a #define, I should? [/quote] You can only benefit by doing so. Const values can be defined in headers as well. If you need a global variable (God forbid), you'll have to declare it with the extern keyword in the header and define it in an implementation file.
  6. I used to put all the #includes for a specific implementation file in the header with its declarations. This is of course problematic if that header has to be used by other headers for other implementation files, causing them to include a whole lot of other headers. My solution, which is a bit in the middle, is simply that all the #includes not mentioned in the header's declarations get moved to the top of the implementation file. For instance, say a class uses std::string in its interface and std::stringstream internally. Put only #include <string> in the header and #include <sstream> in the implementation file, since code referencing this class need not know what's used by the implementation. It helped my compile times a lot... but maybe I'm doing something everyone else has been doing by common sense from the get-go. =P How many use monster headers with all of an implementation's includes put into them? (Of those who aren't using forward-declaring headers.) Might just have been me...
  7. I'm assuming you mean C++. A #define replaces [i]any[/i] instance of "money" in your code, no matter whether it's a variable or a function, in any namespace or scope. You get no namespacing, and name clashes are pretty much guaranteed unless you give it a very long and unique name like "MYPROJECT_MONEY". A const global can be shadowed in a scope that defines another "money", and you can even put it in a specific namespace, avoiding clashes with other files declaring the same name. Defines are uncontrollable and will find a way of leaking into places where you don't want them unless you give them very specific and ugly names. The windows.h header is a great example of this: you'd better define WIN32_LEAN_AND_MEAN before including it and hope all the defines really get undefined. Defines are only "global" in the sense that if you include a file containing one, you include the define as well. But the same goes for globally defined const values, so there's no difference there.
  8. [i]About performance:[/i] It still takes a very large number of events before it bogs down, and as I said, it's not always the event system that needs to be fast. The logic gets a message, does something that's probably pretty heavy relative to the actual event transmission, and then sends its own event. Most of the performance will still be needed in the logic itself, and the observer pattern will most likely remain one of the better options for that, since it's more direct. The best approach is simply to stress test your event system if you're unsure and see how many calls to how many handlers you can pull off per second. If an event system does get saturated, you can lessen the load by batching and by revisiting the need for some events. It's likely you can stretch it quite far. All I'm saying is, don't write it off until you've tested it for the case at hand.

     I'm using events only as glue between input handling, logic, networking, rendering and other systems. You [i]can[/i] use them internally in the logic as well, but that could become a performance issue a lot faster. There's another issue with event systems: they can make the larger picture more difficult to follow, and the order of execution between events may not be explicit, which can cause determinism issues. This is especially true if they're used in the game logic, because that tends to become quite intricate. Using them just to notify other systems keeps things rather simple. An event system can also help if you plan on adding some sort of scripting. The scripting engine can be detached and communicate with the logic via events, rather than littering the logic with scripting. Same with AI, which most likely will be using some scripting itself. Make the higher-level functions in the logic accessible via events, and both GUI and scripting will be able to reach them easily.

     [i]About ØMQ:[/i] It gives you a highly efficient queuing system. I'm mainly using it because it lets me have a thread-safe messaging system. ØMQ's context is completely thread safe, which makes it possible for different threads to pass events between them. If you keep the messaging system as the only means of communication between threads, it practically removes race conditions and many other thread-related problems. It's not a holy grail of multithreading, but I find it fitting for my purposes; you might not. The main issue would be its asynchronous nature, which isn't as efficient in a single-threaded environment. I see the possibility of networking as an added benefit. I lay out the structure of my game so that communicating with a remote client is virtually the same as communicating with the local neighboring thread: it sends the same messages, only via a different pipe. ØMQ will also gladly take any number of channels, meaning an indefinite number of clients (networked, threads or just objects) can be added with relative ease. I've heard claims of some MMOs using ØMQ and finding it quite sufficient, but I can't really back that up... That's enough of me selling ØMQ for now. =P

     I'm not making a real-time game, which wouldn't be as simple as just 'hooking it up remotely' the way I put it. You have to do prediction algorithms and other jazzy stuff... a good reason why network multiplayer is such a pain in most cases. But as I said, it's probably a good idea to wait with ØMQ anyway; it's just good to know that it's there.
  9. [quote name='Reactorcore' timestamp='1341409963' post='4955621'] I really like how the event driven system sounds and is exactly what I'd prefer to build. Can you give any pointers on how you start building such architecture? Again, I'm doing this in C#, if it matters. [/quote] Well, I'm using C++ and unfortunately I don't know what extra tools you get in C#. [url="http://www.gamedev.net/page/reference/index.html/_/technical/game-programming/effective-event-handling-in-c-r2459"]This article[/url] inspired my current event system a lot. What I've done most recently is wrap an interface like this around ØMQ, allowing me to fairly safely send complete objects as messages. Using ØMQ lets it switch between thread-safe in-process messaging, inter-process, or TCP almost completely fluidly. ØMQ is blazing fast too, optimized to take some millions of requests a second on large servers, yet still really easy to use compared to handling classic connection sockets. There's a C# API (and many others), but I can't vouch for it. It's also not all that common in games, from what I understand.

     The basic idea, anyway, is to have listener objects register with a dispatcher object. It's basically the observer pattern, but a bit more centralized: you register for certain 'global' events, rather than for all events from a specific object. Any object wanting to send an event does so via the dispatcher. Fight the temptation of making it a singleton, though. Event systems have one downside, especially if you're using one big central one (or a couple) as I do: they're not very performance-optimized, since there's a big middleman between all the objects communicating through it. The flexibility usually outweighs this problem [i]imo[/i], and the more performance-intensive algorithms are usually isolated within one of the modules. But keep in mind that a dispatcher can be saturated if there's a lot going on in a real-time game.

     I'm using two dispatchers in my current project. One is strictly for local application events, such as window, input and inter-module communication (I've got a server object running on its own thread). The other is for the game events, which the rendering and networking listen to. There are also a few varieties of how and when to handle the messages. You can put messages into a queue and poll them when it's the object's turn to update. This is what I do with my ØMQ version; it means the queue has to be handled at regular intervals. A second, not very thread-safe way is to trigger the handlers instantly. My first version did so: the message handling took place inside the dispatcher's send method, as it looked up and triggered the event handlers of the listeners. It's a matter of taste and need which you choose. Since you're not familiar with the concept yet, I'd recommend not using ØMQ or threads for your first few projects. Make a simple 'instant' message dispatcher and tweak it until you're happy with what it's capable of. When you feel more comfortable with that, move on to the more advanced and flexible ones.
  10. Wrote First Game [SDL/C++] - Feedback

    [quote name='fastcall22' timestamp='1341376826' post='4955515'] People still use RAR files? [/quote] I do... what's wrong with RAR files? =P I'd rather use that than zip. JAR, now that's a legacy format; haven't seen that in years and years. Moving on... [quote name='fastcall22' timestamp='1341376826' post='4955515'] Your vcproj assumes I have SDL installed in C:\SDL-1.2.5 -- I actually have it installed in C:\bin\sdk\SDL\1.2.5. It doesn't really matter as long as your project links to an SDL.lib, and I have properly set up the appropriate directory. [/quote] How [i]do[/i] you set up the appropriate directory? I've never found a good standard way of looking up libraries that works regardless of the user's setup. I always get frustrated with MSDN. It's worked for me so far because I mostly develop alone.
  11. Sorry, I don't have any good example code for it... I worked it out myself. I used [url="http://www.khanacademy.org/"]Khan Academy[/url] to get my matrix inversion right. The ID buffer should be easy enough if you know how to manage framebuffers in OpenGL. I'm not using legacy GL though: v3.3 and shaders, with the fragment shader outputting to both a color buffer and an ID buffer. I guess it's OpenGL ES for Android? It's kind of like 3.3 scaled down? You could also make two rendering passes, one for color and one for ID, binding different buffers. Once you've got the ID buffer, it's very easy to use glReadPixels to get at the information. Sorry that it's a bit thin on code. See it as an exercise ;) Anything specific that you can't figure out? I'll try to help.
  12. The inverse matrix exactly reverses the transformations you did to get the screen coordinates. I forgot to mention that you also need the depth buffer value as the z coordinate in this case, along with the normalization of the xy coordinate which you mentioned. I also realised that it's probably just the view-projection you'd want for getting the coordinate; the model matrices can be left out.* A single matrix multiplication and you're done, as long as you got the inverting right:

     VPmatrix * WorldPos = NormScreenCoord
     InverseVPmatrix * NormScreenCoord = WorldPos

     No need for a raycast search or compensation for scaling, but it still requires a spatial search of some kind to get the object. Regarding the ID buffer, maybe the tapping could be resolved by sampling an area rather than a single pixel. Check the pixels for an ID, and if there's more than one that isn't just background, count which one there's more of. Or something like that... I don't really know how touch screens calculate those things. In my current project I'm using both: the ID buffer to look up objects, since it's a pretty fast mapping, but for moving things I use the screen-to-world transformations. I don't know how optimal it is, but it lets me get away with mapping things without keeping more advanced query structures that can do spatial and ray cast searches.

     * I'll have to look this up though
  13. Visual Studio Library linking problem

    Is there a GL\ folder in dependencies\glew\include where the glew.h file is? The full path being dependencies\glew\include\GL\glew.h
  14. Help me change my life and career.

    kcrik, of course each language has its use, and I merely stated what I didn't like about it. I didn't say they're all awful for every purpose and that you should never use them; I'm not such a C++ bigot that I can't see that. Scripting is great for productivity. Most games don't need blazing performance for their logic or general structure, and put most of their effort into the graphics engine. Those engines are low level and handle those things so the scripting doesn't have to. Tools and pre-processing data are part of a production tool chain, which is naturally part of making a game, but my post was about the core language doing the heavy lifting. I didn't crack down on scripting languages, so ease down. Yes, Java is used largely in server applications, search engines and management software, and it is mainly the standard there. That doesn't mean the people using it actually enjoy working with it, even though it gives reasonable production value. As for learning via Java, I have no say other than that I found learning C++ and its "memory management" less difficult than it's made out to be. However, learning Java is a top-down approach to learning how to program. It shields a programmer, much like you said, from the more low-level operations. There's a lot to unlearn when going to lower-level languages afterwards, and I think it's more beneficial to go bottom-up. Just my opinion. My post was about my experiences, which is what the OP asked for. Make an objective case instead next time.
  15. Improvements on a game

    From what I assume you set out to do, this is quite fine. It's not really a complex design that can have many solutions, so I guess there's not much to comment on. However, if you were to expand the attacks and magic to include more different types, you should probably go with a [url="http://en.wikipedia.org/wiki/Strategy_pattern"]strategy pattern[/url] instead of all those if-else statements. You basically replace the if-else statements with an object structure using virtual calls. There's no point in doing that to the existing code, since it's so small, but any larger switches should use something similar to the strategy pattern.