romer

Members
  • Content count: 211
  • Joined
  • Last visited

Community Reputation: 323 Neutral

About romer
  • Rank: Member
  1. Loading swf files

    Perhaps this will be of use to you: Adobe publishes the spec for the [url="http://www.adobe.com/devnet/swf.html"]SWF file format[/url]. It covers file versions up through v10.
  2. MacBook Pro good for game dev?

    What I've found is that the thing you have to be careful of when using Macs for game development is knowing what is exposed through their OpenGL driver. Only recently, with 10.7, does Apple claim their OpenGL driver has core OpenGL 3.2 support, which you can tell by looking at the OpenGL capabilities table on their developer website. That said, it's been my experience that even when they claim to support a particular version of OpenGL, it's best to double check with something like GLEW's glewinfo utility to see HOW that functionality is exposed. Sometimes you'll have to use an ARB extension function pointer instead of a "core" function pointer. Again, libraries like GLEW or GLee will make setting up the function pointers a cinch; you just have to double check which one to use (see the sketch below).

    That all said, the particular specs for the MacBook Pro you listed ought to be adequate for game development, assuming you aren't looking to really tax the system. I don't know if new MBPs come with 10.7 or 10.6, but either way, the capabilities table [url="http://developer.apple.com/graphicsimaging/opengl/capabilities/GLInfo_1068.html"]here[/url] indicates that enough is exposed that you ought to be able to develop using a modern approach utilizing the programmable shader pipeline (as opposed to being forced to use the older, now deprecated fixed-function pipeline). For what it's worth, I develop on a Mac running 10.6 with a Radeon 5750, and I haven't had any problems yet. Hope that helps.
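    To illustrate the "which function pointer" point, here's a minimal sketch (using framebuffer objects as the example feature) of asking GLEW how something is exposed:

[code]
// Minimal sketch: after a GL context is current, ask GLEW whether framebuffer
// objects are exposed as core GL 3.0 functionality or only via an extension.
#include <GL/glew.h>
#include <cstdio>

void check_fbo_support()
{
    if ( glewInit() != GLEW_OK )   // must be called with a current GL context
    {
        std::fprintf( stderr, "failed to initialize GLEW\n" );
        return;
    }

    if ( GLEW_VERSION_3_0 )
        std::printf( "FBOs available through core GL 3.0 entry points\n" );
    else if ( GLEW_ARB_framebuffer_object )
        std::printf( "FBOs available through GL_ARB_framebuffer_object\n" );
    else if ( GLEW_EXT_framebuffer_object )
        std::printf( "FBOs only available through the older EXT extension\n" );
    else
        std::printf( "no FBO support exposed by this driver\n" );
}
[/code]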
  3. It's been said before, but really, it boils down to 1) who your target end-user is and 2) what your personal preference is. If you're familiar with Windows and will be writing software for people running Windows, then by all means use Windows. Ultimately, in terms of getting stuff done, it shouldn't matter, because you can set up an efficient development tool chain in either. Under Windows, the de facto standard is Visual Studio, which bundles everything you'll care to use for day-to-day work: editor, compiler, linker, debugger.

Under Linux, you're more likely to use (at first) a more decentralized tool chain. You'll pick an editor (vim, emacs, nano, joe, whatever) and do all your coding in that. You'll more likely than not use GCC for compiling and linking, on the command line. It's definitely a different approach than you'll typically see with Visual Studio, but after fumbling a bit with the options, it's not really that hard to use. Eventually, though, hand compiling gets bloody tedious, and you'll look at some sort of build system. The foundation for a lot of build systems under Linux is GNU Make and the Makefile. You may write a couple of these for some small projects, but then realize that writing these by hand tends to suck too. And that's when you start learning to use one of the many open source build systems out there. For the longest time, and even still today, the Autotools package has been the de facto standard for building projects from source under Linux. I damn near shot myself trying to figure out how to use it effectively and instead switched to CMake. If I were to recommend an open source build system for someone to use, it'd definitely be CMake (a minimal example is below). Once things are built and you start running your programs, you'll use GDB (possibly with one of its many front-ends) for most of your debugging.

Now I may make Linux sound like a PITA to use day to day, but honestly, once you get the initial project files set up, it's really not that bad. There are also IDEs under Linux that (I would guess) automate a lot of the setup, for instance Eclipse CDT (note, I've never used it, so I don't know what all it provides), and bundle your tool chain into one place if you like that approach. Personally, I find it useful to at least be aware of what these environments are doing for you behind the scenes, because at the end of the day they're still using a lot of the tools I mentioned above, and if anything screwy happens from within the IDE, it helps to understand exactly which tool failed when diagnosing the problem.

As far as third party library support goes, it's pretty much a toss-up. You can always find examples going both ways, where a library is easy to configure and install under one OS and not the other. That's the "fun" of cross-platform, open source software. Linux package managers may make this easier because they (for the most part) know how to install whatever additional third party dependencies are needed, but for well known and popular libraries I've typically seen developers either 1) provide links to the exact dependencies you need to build their software, 2) provide binary distributions of the dependencies, or 3) bundle the source code for the dependencies with their software.

Regardless of whatever OS you choose, you're going to find some library or tool that requires some finagling to get working right -- it just comes with the software development gig.
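For what it's worth, here's a minimal sketch of a CMakeLists.txt for a small project (the project and file names are made up):

[code]
# Minimal CMakeLists.txt sketch; project and source file names are hypothetical.
cmake_minimum_required( VERSION 2.8 )
project( mygame )

# Build one executable from a handful of source files.
add_executable( mygame main.cpp renderer.cpp input.cpp )

# Linking against a system library would look something like:
# find_package( OpenGL REQUIRED )
# target_link_libraries( mygame ${OPENGL_LIBRARIES} )
[/code]

Run cmake once to generate the Makefiles, and from then on a plain make rebuilds only what changed.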
  4. [quote name='undead' timestamp='1314905529' post='4856428'] I am going to contradict my own practices or the common sense but my experience tells me it's more important to focus on good code and to write documents than documenting the source code via doxygen.[/quote]

For me personally, I don't view it as an either/or sort of thing. In the library I've been building up, I use doxygen for both purposes. In the top-level header that I use to include commonly used components, I use doxygen's @mainpage and @section tags to write the kind of high-level overview you describe (a sketch is below). It hits on the major components of my library and how to use them in the common case, with code snippets and cross-references to the actual API documentation. This pretty much serves as my quick-start "user's manual". Granted, I'm the only one working on the code base at the moment, but it's written in such a way that if anyone else happens to work on it in the future, they can read the generated doxygen main page and get a pretty good overview of how to use the code base.

All the other documentation is what you'd typically expect to see generated from doxygen, that is to say descriptions of classes, methods, free functions, user types, etc. I probably go into more detail than your average developer, going to great lengths to describe what state things need to be in to successfully use a part of the API, the valid range of values for methods/functions, what exceptions are thrown and why, the interactions of various components, etc. I don't claim to be perfect in my coverage, but I do my best to be as comprehensive and accurate as possible, because I don't want my users to have to make assumptions or second-guess how to use the APIs I write. If people are tearing their hair out debugging [i]my[/i] library and stepping through the code all because I was too lazy to give them adequate documentation, then I've failed as a library developer. At least, that's my opinion.

I will agree, though, that the latter kind of documentation can be next-to-useless to someone who has no clue how things are structured at a higher level, just due to the detail it goes into. And that's what I hope to mitigate with the doxygen main pages I write.
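As a rough sketch, the sort of top-level page I'm describing looks something like this (the library name and API are made up for illustration):

[code]
/**
 * @mainpage MyLib User's Manual
 *
 * @section overview Overview
 * MyLib provides resource loading, rendering, and input handling.
 * The major components are described in the sections below.
 *
 * @section quickstart Quick Start
 * A typical use of the library looks like:
 * @code
 * mylib::Context ctx;            // hypothetical API, for illustration only
 * ctx.load_resources( "data/" );
 * ctx.run();
 * @endcode
 *
 * See the individual class pages for the detailed API documentation.
 */
[/code]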
  5. For documenting code, I use Doxygen. I've never used UML, but for some of the state diagrams I wanted to show in my API documentation, Doxygen has built-in support for dot/dia, which can produce some visually appealing directed graphs (a small example is below). Most other images I've needed to embed I create in an image editor and store alongside my code (usually in the project_name/docs/images/ directory in my repository), and I use the appropriate Doxygen tags to include them where needed.

For documenting development in general, nothing has beaten a notebook + pencil for me. I use it for all sorts of things: brainstorming new features, mocking up interfaces, writing down notes on whatever topic I'm researching, concept sketches, working through prototypes, you name it. It's just so damn versatile, and it's generally easier to get ideas out of my head onto something more permanent than if I tried using some sort of software. The only exception is that I might open up a mind mapping tool (like MindNode on OSX) if I'm doing some far-out, non-linear brainstorming session, the kind where you have a single idea in the middle and branch out in all directions with related and tangential ideas based on the initial topic.
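Here's a small sketch of what embedding a dot graph in a Doxygen comment looks like (the states and image name are made up):

[code]
/**
 * Connection lifecycle. The graph below is rendered by Doxygen's
 * built-in dot support; a hand-drawn image can be pulled in with @image.
 *
 * @dot
 * digraph connection_states {
 *   Idle       -> Connecting [label="open()"];
 *   Connecting -> Ready      [label="handshake ok"];
 *   Ready      -> Idle       [label="close()"];
 * }
 * @enddot
 *
 * @image html handshake_flow.png "Handshake message flow"
 */
[/code]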
  6. It's been a while since I've used CUDA, but you need to put all your GPU code, and any host-side code that calls the GPU kernel(s), in a *.cu file (a small sketch is below). You may be able to put pure C++ code that strictly calls host functions running on the CPU into *.cpp files, but I don't know for sure. The important point is that your CPU and GPU code need to be passed to NVIDIA's compiler driver (nvcc), which comes with the CUDA SDK. The compiler driver knows how to take *.cu files and compile them into GPU code, along with the necessary bindings used to call GPU kernels from the host, using NVIDIA's tool chain; all other C++ code ought to be passed on to your system compiler to build the pure CPU code executed by the host. The SDK docs go into more detail on how to use nvcc from your environment.
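For illustration, here's a minimal sketch of how that split looks in a .cu file (names are made up; error checking omitted):

[code]
// example.cu -- both the kernel and the host-side wrapper that launches it
// live in the .cu file so nvcc can compile them.
__global__ void add_one( int *data, int n )
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if ( i < n )
        data[i] += 1;
}

// Host-side wrapper, also in the .cu file, callable from ordinary C++ code
// (declare it in a header and link the object file nvcc produces).
void add_one_on_gpu( int *host_data, int n )
{
    int *dev = 0;
    cudaMalloc( &dev, n * sizeof(int) );
    cudaMemcpy( dev, host_data, n * sizeof(int), cudaMemcpyHostToDevice );

    add_one<<< (n + 255) / 256, 256 >>>( dev, n );

    cudaMemcpy( host_data, dev, n * sizeof(int), cudaMemcpyDeviceToHost );
    cudaFree( dev );
}
[/code]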
  7. another python code issue

    I think your problem is in your [font="Courier New"]copy = game.gameArray[/font] lines. You're not actually creating a copy of the underlying object when you do that; rather, you're making [font="Courier New"]copy[/font] refer to the same list, namely [font="Courier New"]game.gameArray[/font]. Any changes you make through [font="Courier New"]copy[/font] will also show up in [font="Courier New"]game.gameArray[/font]. If you really want a copy, you would do the following:

[code]
import copy  # note: I had to rename the variable, since 'copy' now refers to the imported module

copy_gameArray = copy.copy( game.gameArray )

# if game.gameArray is a list of lists, use copy.deepcopy() instead so the
# nested lists are copied as well:
# copy_gameArray = copy.deepcopy( game.gameArray )
[/code]

Hope that helps.
  8. timing on mac

    You need to be careful, as gettimeofday() isn't guaranteed to be monotonic, especially on multicore systems. This means taking the difference between successive calls to gettimeofday() could produce a negative result (i.e., you appear to move backwards in time). On Mac OS X I think you want to use mach_absolute_time() (a sketch is below), but it's been a while since I've looked into it. On Linux, you'd use something like clock_gettime() with the CLOCK_MONOTONIC clock ID.
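    If it helps, here's a minimal sketch of a monotonic timer on OS X built on mach_absolute_time():

[code]
// Minimal sketch: monotonic timestamps in nanoseconds on Mac OS X.
#include <mach/mach_time.h>
#include <stdint.h>

uint64_t now_ns()
{
    static mach_timebase_info_data_t tb = { 0, 0 };
    if ( tb.denom == 0 )
        mach_timebase_info( &tb );   // fetch the ticks-to-nanoseconds ratio once

    return mach_absolute_time() * tb.numer / tb.denom;
}

// Elapsed time between two points:
//   uint64_t start = now_ns();
//   ...
//   uint64_t elapsed = now_ns() - start;   // never negative
[/code]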
  9. Open-GL help

    Depending on how much a "little bit" of C++ actually is, you may want to brush up on that first. That said, I found this blog to give a pretty good introduction to modern OpenGL development. I also picked up the OpenGL SuperBible, 5th edition, and have found it to be a good way to learn modern OpenGL (version >= 3.2). Other than that, Google plus the wiki and reference pages on http://www.opengl.org/ have been enough to get me through any problems I come across.
  10. Beginning with CUDA

    I believe what you want is to implement a parallel prefix sum (scan) type operation so that you can effectively do your summation in parallel. Take a look at this PDF put out by NVIDIA for more info on doing it within CUDA.
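    A full scan is more involved than I can show here, but this minimal sketch of a block-level tree reduction (the building block the scan-based sum rests on) might help make the idea concrete:

[code]
// Minimal sketch of a block-level tree reduction (sum) in CUDA. Assumes
// blockDim.x is a power of two. Launch with shared memory of
// blockDim.x * sizeof(float) bytes; each block writes one partial sum,
// which you then reduce again (or just add up on the host).
__global__ void block_sum( const float *in, float *out, int n )
{
    extern __shared__ float buf[];

    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + threadIdx.x;

    buf[tid] = ( i < n ) ? in[i] : 0.0f;
    __syncthreads();

    // Halve the number of active threads each step, summing pairs.
    for ( int stride = blockDim.x / 2; stride > 0; stride >>= 1 )
    {
        if ( tid < stride )
            buf[tid] += buf[tid + stride];
        __syncthreads();
    }

    if ( tid == 0 )
        out[blockIdx.x] = buf[0];
}
[/code]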
  11. Killing compiler warnings?

    Quote:Original post by Codeka
I would think the reasoning is obvious: sprintf_s works differently to snprintf. snprintf will not NULL-terminate the destination buffer if it overflows, while sprintf_s does. snprintf_s does the same thing, but it has the added "stop after this many characters" parameter that snprintf has.

This is true for the Microsoft implementation, but an implementation that actually conforms to the C99 spec guarantees that 1) the output is always null-terminated (at most n-1 characters are written, where n is the size of the output buffer you pass in), and 2) if the output is truncated, the return value is the number of characters that would have been written (not including the terminator) had the output buffer been long enough, or -1 if an error occurred. I have seen the Microsoft version of snprintf fail to conform to the second point in particular, which bit me in the ass a while back when I had to write some C code. In fact, I believe I've seen a bug report someone filed on MSDN regarding that issue, where the MS developer basically said they wouldn't be fixing this behavior anytime in the foreseeable future.

So basically, while you should be able to rely on snprintf to always terminate and provide the correct return value, if you're developing on Windows and you rely on this functionality, you have to either provide your own version with the right behavior or use one of their platform-specific versions that does the right thing, if one exists.
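    For what it's worth, here's a quick sketch of the conforming behavior (the buffer size is chosen to force truncation):

[code]
// Sketch of C99-conforming snprintf semantics on truncation.
#include <cstdio>

int main()
{
    char buf[8];
    int needed = std::snprintf( buf, sizeof buf, "%s", "hello, world" );

    // With a conforming snprintf, buf now holds "hello, " plus a '\0'
    // terminator, and needed == 12 -- the length the untruncated output
    // would have had. MSVC's _snprintf instead returns a negative value
    // here and doesn't guarantee termination.
    if ( needed >= (int) sizeof buf )
        std::printf( "truncated; needed %d bytes\n", needed + 1 );

    return 0;
}
[/code]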
  12. Unity The object list problem

    Quote:Original post by ToohrVyk
1. If the lifetime of your objects is not obvious, use smart pointers to keep the objects alive until you don't need them anymore. 2. Remember that if you need some data to die along with an object (such as a sprite registered with the rendering system), you can make that data a member so that the destructor eliminates it. 3. Do you really need an array of all objects?

I read through the article and agree pretty much with everything you wrote in it, but I still wonder how you would facilitate interaction between different types of objects. I'm just going to pull a couple of quotes to help better illustrate my question:

Quote:My advice is pretty simple: strive to keep one list per type of object. In your average video game, you would have a list of projectiles flying around, a list of AI-controlled opponents, and a player-controlled character.

Quote:The only duplicated code is the traversal of the various lists (since instead of traversing a single list, now several lists must be traversed in order to reach all elements), and that code is not only very small, but also fairly easy to factor out (for instance, by implementing an iterator that traverses several containers in order).

Say, for instance, you need to perform collision detection, and that bullets, opponents, and the player character are the only types of "collidable" objects (say, objects inheriting from some hypothetical WithCollision class like in your examples) in your game world. Now simply iterating over just the opponent objects won't perform a full collision detection check, as you miss potential collisions with bullets and the player. Do you just (in the code) make whoever is performing the collision detection know which lists it will have to go through each time it performs its checks, i.e., is this what is meant by the second quote? That seems the most sensible to me, although I haven't really thought through any potential pitfalls.

Also, in your design, who is to maintain these different lists, the engine or the game (assuming the game is a client to an engine and utilizing its services)? If the engine, then you make it aware of the various types of objects that exist in a specific game, which as far as I can tell hampers reuse. Getting around this wouldn't be too hard, I'd imagine -- you could either:

1) provide some interface to the engine that allows the game to establish new object lists keyed off some identifier, and when it comes time for the game to tell the engine to do various tasks (e.g., engine->handle_collisions()), pass it some list of identifiers telling the engine which of its registered lists to iterate over; or

2) make the lists managed by the game and change the interface(s) of the engine so that the various operations take in as argument(s) the lists that they need to operate over (a rough sketch of this is below).

I personally prefer 2 a little bit more, but I'm just interested in hearing your and others' opinions on the design in this case.
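    To make option 2 concrete, here's a rough sketch (all names hypothetical; the engine operation is shown as a free function) of an operation that takes the per-type lists it needs as arguments:

[code]
// Rough sketch of option 2: the game owns one list per object type and
// passes whichever lists an engine operation needs. Names are made up.
#include <vector>

struct WithCollision
{
    virtual ~WithCollision() {}
    // bounding volume, collision callbacks, etc.
};

// Engine side: iterate over whatever groups of collidables it's handed.
void handle_collisions( const std::vector< std::vector<WithCollision*>* >& groups )
{
    for ( size_t g = 0; g < groups.size(); ++g )
        for ( size_t i = 0; i < groups[g]->size(); ++i )
        {
            WithCollision* obj = (*groups[g])[i];
            // ... test obj against the broad-phase structure ...
            (void) obj;
        }
}

// Game side:
//   std::vector<WithCollision*> bullets, opponents, players;
//   std::vector< std::vector<WithCollision*>* > groups;
//   groups.push_back( &bullets );
//   groups.push_back( &opponents );
//   groups.push_back( &players );
//   handle_collisions( groups );
[/code]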
  13. Help, please!!!

    Hard to say without knowing more about how your project is set up, but could it be that one or more dependencies (e.g., third party libraries) were built against a different version of the C runtime library when their release builds were made? Or could the C runtime they expect to use be different from the one your program expects? I can't think of why else, under normal circumstances, VS would be trying to use both the static and dynamic link versions of the C runtime library.
  14. In pthreads, I'd prefer using locks and condition variables to synchronize access to the shared data, but apparently in .NET this is all bundled together in the Monitor class, with lock/SyncLock provided as syntactic sugar in C#/VB.NET respectively. The basic form is something like this:

[code]
// acquire the lock. the object passed to lock() is generally some
// internal *reference* type object. a common idiom is to declare
// private System.Object my_lock = new System.Object();
lock( my_lock )
{
    // we have the lock, so we can safely access the shared data. check to
    // see if we can do some meaningful work, usually some predicate on the
    // shared state. NOTE: always while, never if.
    while ( !work_to_do )
    {
        // we can't do anything now, so we just wait to receive a signal
        // from some other thread sharing this resource. this atomically
        // releases the lock and puts the thread in a waiting state.
        Monitor.Wait( my_lock );
    }

    // at this point we hold the lock on the shared state and we've
    // satisfied whatever conditions are necessary for meaningful work.
    // here you would typically change the state so that it generates
    // meaningful work for other threads that might be waiting.
    do_work_on_shared_state();

    // before we leave the critical section and release the lock, we need
    // to signal other threads that might be waiting so they can try to
    // reacquire the lock and check their conditions. NOTE: Pulse() will
    // awaken one thread on the waiting queue, whereas PulseAll() will
    // awaken all threads on the waiting queue, but either way only one of
    // them at a time will reacquire the lock.
    Monitor.Pulse( my_lock );
}
// lock is released.
[/code]

That basic layout can be used for producer/consumer types of thread synchronization and can be modified to implement more sophisticated patterns like reader/writer. It also makes it more likely that your threading logic is correct, and helps protect against certain types of threading problems. This PDF has some more information on using monitors/locks/condition variables for synchronization between threads.
  15. C++ floating point error

    Quote:Original post by Perost
Quote:Original post by romer
Instead, you should check to see if your computed value is within some error threshold of the desired value. A quick and dirty method is computing the absolute error like so: *** Source Snippet Removed ***

While your logic is correct you shouldn't use fabsf, since it isn't a standard C++ function. In C++ you should instead use fabs from cmath, which is defined for both float and double.

Fair enough, given that the OP was asking about C++. Where I work we actually code strictly in C, and fabsf() *is* a standard C99 function. I don't know about most people, but for me sometimes keeping straight what's standard C++ versus C99 (and sometimes C89, for that matter) gets a bit muddied, especially when talking about the standard math functions.
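    For anyone coming across this later, here's a minimal sketch of the kind of absolute-error check being discussed (the original snippet was removed above; the epsilon is purely illustrative):

[code]
// Minimal sketch of an absolute-error comparison. Pick an epsilon that
// suits the scale of your data; 1e-6 is purely illustrative.
#include <cmath>

bool nearly_equal( float a, float b, float eps = 1e-6f )
{
    return std::fabs( a - b ) <= eps;  // std::fabs has float and double overloads in C++
}
[/code]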