Andrew Kesterson

  1. When to start with C++?

    That's not a debug build, that's just unoptimized. Debug builds include stabs, extra name-table info, etc., none of which the -O0 builds include. -O0 just means that no optimization is performed by the compiler: how your code is written is how it will run.

    Yes, you will make a production build at -O2 or potentially even higher, and when you do that, you'll get near-C levels of performance. But that's not the point. The point is that the containers perform terribly until the compiler gets its hands on them and optimizes them. Why? There's no obvious reason for it, not when C and glib are smoking it by orders of magnitude. RTTI and dynamic dispatch can't account for all of the slowness there - the C++ STL containers just aren't written for performance from the get-go.

    Just realized we're hijacking OP's thread; I can make a separate thread re: STL performance if we want to keep talking about this.
  2. When to start with C++?

    Which containers, and on what platform? Do you have data to support this conclusion available to show us?

    Google "C++ STL performance" and you'll be inundated with people complaining about the performance, with no indication that it is relative to any one compiler or implementation. std::map and std::vector are particularly troublesome offenders. Compared to their counterparts in Boost, or their dynamic contemporaries in languages like Python, their performance is garbage until you start cranking up the compiler optimizations (which arguably make the code more difficult to debug).

    I have some example numbers on a (very narrow) set of use cases in a thing I've been putting together to illustrate these issues, actually. Those two test cases compare C (with glib and hash maps) vs. C++ (std::map and std::unordered_map) vs. Python vs. JavaScript vs. Bash, measuring their relative execution speed when counting the occurrence of an item in a list of items (strings and ints), to illustrate the speed of each implementation's containers in setting, looking up, and checking for the existence of keys.

    You CAN get C++ STL containers to perform admirably (see the -O2 settings in the documentation), but out of the box, the containers perform incredibly badly.
  3. When to start with C++?

    Talking about premature optimisation and then saying to avoid the standard library for performance reasons is a massive self-contradiction.

    Actually, no, it goes to prove the point. Many times, when you decide to optimize early based on minor observations, without really completing the solution, you just muck it up. In this case, OP is asking to switch to C++ because (essentially) "that's what everyone uses"; having a Java background, OP would expect the C++ STL containers to confer that same sort of naive "all native code is fast" performance benefit, when in actuality they're the fastest way to murder C++ code.

    Write first, optimize later. OP's Java will quite likely be faster than their C++, since they already know Java, and the best patterns and weaknesses of the language. If they write their Java and find that it is, in fact, too slow (or that nobody wants to run Java), then they can look to a new language.

    Unless they really, honestly just WANT to learn a new language, just for the sake of learning it, which is an entirely different conversation.
  4. Tiled games

    @Endurion, that solution is cool, the problem with it is that all the bounds checking (e.g. only check tiles X1,Y1-X2,Y2) is done at collision time (meaning it runs every single time). It is arguably more efficient to treat your colliding map tiles (not all map tiles collide with the player) just like enemies/bullets/etc, and have them use the same collision quadtree, which is only updated when something moves.   But +1 for static arrays, an often overlooked performance winner.
  5. When to start with C++?

    @Alpha_ProgDes has it right; use what you know first. Learning another language "because it's probably got better performance", before you have anything published in the languages you're already good with, is premature optimization - and as we all know, premature optimization is bad.

    And for what it's worth, I'd avoid C++ like the plague anyway. If you DO pick up C++, avoid the STL containers like the plague - their performance is absolutely abysmal.
  6. Tiled games

    While the guide is quite nice, a lot of it is theoretical, not practical.

    OP - the most common solution to this is called a quadtree. Basically, as you add objects to the screen (whether colliding map tiles or other actors), you keep dividing the screen up into progressively smaller regions, each containing at most N objects, so that whenever you check collisions, each object only has to check against N other objects rather than everything on the grid. This is dramatically faster than checking every pair. This guy wrote a pretty good quadtree-based space partitioning lib in C++ to support what he was working on, and put it up on GitHub. There is a decent description of the technique, with a link to a much better description, for the technical details. If you just want the code, skip down to the GitHub link for the C++ version without SFML.

    Happy trails.
  7. I second the "tail -f" solution. This works on Windows (just install MinGW, or Cygwin, or GnuWin32, or any of the host of other things that can give you tail), Mac OS X, and all Unix variants. If you're on Windows and use PowerShell, look into Get-Content (with its -Wait parameter); I understand it can help with this.
  8. Create an operating system

    TL;DR - you don't need to program if you want to "mock up" an operating system. A design idea can be done in flash, a workable demo can be done in Java or any other programming language on top of an existing OS (this is what Nintendo did for their Gecko emulation, and AmiOS - the AmigaOS open source project - has been doing for a while). For those who were led here by the (slightly misleading) "Create an operating system" title, here are some links to how others are using qemu and gdb to develop low-level system code for ARM processors on their cheap, readily-available PC hardware: Have fun.
  9. Is memory management a must have?

    [quote name='larspensjo' timestamp='1350296126' post='4990331'] It is a mechanism I am hesitant to. ... Of course, there may be a benefit of speed. But it can also result in everyone losing. Please excuse me for associations in tangent space. [/quote] It's not suitable for every situation, certainly, but there are times when you know you are better off allocating everything up front, rather than piecemeal. YMMV.
  10. On posix systems, SDL does not create stdout.txt and stderr.txt; those are purely Windows conventions. On posix systems (e.g. Ubuntu), if you run your program directly from the terminal, you will see output in the terminal, instead of in those files. For example:

[CODE]
akesterson@localhost:~$ cat printer.c
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    fprintf(stdout, "This is output.\n");
    fprintf(stderr, "This is an error.\n");
    return 0;
}
akesterson@localhost:~$ gcc -o printer printer.c
akesterson@localhost:~$ ./printer
This is output.
This is an error.
akesterson@localhost:~$ ./printer > stdout.txt
This is an error.
akesterson@localhost:~$ ./printer > stdout.txt 2>stderr.txt
akesterson@localhost:~$ cat stdout.txt
This is output.
akesterson@localhost:~$ cat stderr.txt
This is an error.
[/CODE]

If you want stdout.txt and stderr.txt, you'll have to manually redirect them. POSIX default behavior is that both output streams go to the terminal running the program. If you're not seeing this behavior, it's likely a result of the IDE you're using, and how it's launching/consuming the output. If you go to the directory with the compiled binary, and run it by hand, you should see the expected output in your terminal.
  11. Is memory management a must have?

    As the other posters have pointed out, the gem refers to having a pool of memory that you allocate up front, as opposed to allocating on demand. This is the way that the Java JVM works; at startup time, it requests (from the operating system) the maximum amount of memory the program is configured to use (per environment flags), and then does its own allocations out of that memory later. This way it doesn't have to wait on the OS scheduler, kernel, whatever, to do the job for it, and it can optimize its memory arrangement however is optimal for that specific program. The previously mentioned boost::pool does the same thing. There are C libraries that do the same, etc., ad infinitum. See the wikipedia article on Memory Pools for more generalized information: http://en.wikipedia....iki/Memory_pool
  12. Hiding SDL from the rest of the program

    FWIW, I had the same issue when trying to add a controller mapping class to my game engine (which backends into SDL). Upon spending numerous hours trying to wrap SDL's already well-wrapped event pump functionality, I decided that trying to wrap SDL was pretty silly and not very fruitful, so I just exposed it. I figure SDL has already encapsulated things about as well as I could hope to anyway, and putting any kind of wrapper around SDL's functionality, when not 100% necessary, would only serve to muddle my API. Your mileage may vary.