ProfL

Member
  • Content Count: 112
  • Joined
  • Last visited

Community Reputation: 717 Good

About ProfL

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming


  1. ProfL

    Bare bones AAA team

    Didn't know that. I know that developers of the skill level specified in the OP's original question burn bugs down as quickly as they're found, so they don't dog the project and add to the schedule.

    I believe that you didn't know that, because you don't know me, and that's why you can't know my experience. But your reply is orthogonal to what you quoted: speed vs. predictability. One is the slope of a function, the other is the variance. I have no doubt that most devs will burn bugs down as quickly as they can; that's not related to team size, but there are always exceptions, for whatever reason. The simple mathematical fact behind that is: increasing the sample count decreases the noise. That means the more devs, the less each individual counts, but all horses (supposedly) still pull the game in the same direction. If you invest $100M, you might prefer the kind of team that returns $120M in a year, like it works for other companies, over a team that might return anything from $0M to $2000M, in several years. But it really depends on whether you "invest" your savings or "gamble" some pocket money.

    And I replied to the OP's original question. He asked about the minimal AAA team, which (and I repeat that in every post) is not the same as a team that makes an AAA game. You can fund a $115M AAA team that burns money on APB ( https://en.wikipedia.org/wiki/APB:_All_Points_Bulletin ), or you can invest in a small team like Obsidian (afaik 30 devs at that time) who create the best modern Fallout game. (With quite some tech problems, but as far as fans are concerned, still the best Fallout.)
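
    The "less noise" point is just the statistics of averaging: the standard deviation of a team's average output shrinks with the square root of the head count. A minimal sketch of that, where every number and the output model are invented purely for illustration:

        #include <cstdio>
        #include <cstdlib>
        #include <cmath>

        // One dev's "output" as a noisy sample in [0, 2] (made-up model).
        float devOutput( )
        {
            return 2.0f * rand( ) / RAND_MAX;
        }

        int main( )
        {
            const int teams = 10000; // simulated teams per head count
            for ( int n = 1; n <= 100; n *= 10 )
            {
                float sum = 0.0f, sumSq = 0.0f;
                for ( int t = 0; t < teams; t++ )
                {
                    float avg = 0.0f;
                    for ( int d = 0; d < n; d++ )
                        avg += devOutput( );
                    avg /= n;
                    sum   += avg;
                    sumSq += avg * avg;
                }
                const float mean = sum / teams;
                const float var  = sumSq / teams - mean * mean;
                // stddev drops roughly as 1/sqrt(n): ~0.58, ~0.18, ~0.058
                printf( "n=%3d stddev of team average = %f\n", n, sqrtf( var ) );
            }
            return 0;
        }

    Same mean, smaller variance: that's the whole "invest vs. gamble" distinction in one loop.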
  2. ProfL

    Bare bones AAA team

    Every developer creates bugs. One individual dev on the whole project might create fewer bugs than 2, as he knows more about the scope, but from some number of devs onward, the bug count will depend on the scale of the project, less on the dev head count. (And to some degree on the architecture of the software, but that's what every dev is going to tell you to sell his idea of how it should be.)

    From my experience, big teams have a more predictable bug and burn-down chart. If you have 3 programmers, every programmer has his area, and all bugs in that area are mostly his to solve. There might be 2 or 3 critical ones on someone's table, and that might define how stressed out you are. In an AAA team, you get more of a factory feeling. After a week, you see roughly the average bugs/day for the team, you see who's behind, and you can shift the bugs of e.g. one network programmer to another network programmer. When you see the burn-down chart falling behind the schedule, you know it's not stuck on one programmer, it's just the way it is. You can either cut features, mark bugs as won't-fix/minor, or delay the shipping date. (If the burn-down is just 10% behind, you can also ask for overtime or one more day.) All is quite predictable.

    QA and other testing become more important in an AAA production, simply because you can't ask one programmer or artist about the state; you don't even want them to make an assessment. You rather want independent testers who check for bugs, but you ALSO want people from outside the dev team who will rate the game in various ways: "Is it clear what you have to do?", "Is the UI meaningful?", "Is it fun?", "What's the most similar game you know and how does it compare?", etc. That's really an advantage. Again: your team of 10 might think a game is in really good shape, you ship, and there are 1000 times the people playing it, with various hardware configurations etc., and you might get pretty bad feedback from that 1% that runs into some bugs. An independent QA might catch most of these bugs. (Edit: The QA is also like an independent factory. You might want a few core people who know the game, and rotate the other QA people, to get a lot of fresh feedback every week. Nobody biased because he has known the game for 12 months.)

    But that's really all about your original question. If you asked "Can a small team create an AAA game?", it would be a different story.
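
    The arithmetic behind "cut, won't-fix, or delay" is trivial but worth writing down once; a sketch where openBugs, bugsPerDay and daysToShip are all invented numbers, not data from any real project:

        #include <cstdio>

        int main( )
        {
            const float openBugs   = 620.0f; // current open bug count (invented)
            const float bugsPerDay = 38.0f;  // team average after the first weeks (invented)
            const float daysToShip = 14.0f;  // working days left in the schedule (invented)

            const float daysNeeded = openBugs / bugsPerDay;
            printf( "%.1f days of burn-down needed, %.1f days left\n", daysNeeded, daysToShip );
            if ( daysNeeded > daysToShip )
                printf( "behind schedule: cut features, mark bugs won't-fix, or delay\n" );
            return 0;
        }

    With a big team, the average fix rate is stable enough that this projection actually means something; with 3 programmers, it mostly measures whoever has the critical bugs that week.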
  3. ProfL

    Bare bones AAA team

    AAA development just means you have a very huge, expensive team. Your question translates to "how small can a team be and still be called huge and expensive?". The answer is simple and meaningless: get as many as you can for as much as you can afford. That isn't strictly limited by your pockets, but rather by what you want to spend. How much you want to spend, on the other hand, can depend on things like "how much are we going to sell?", which might be far less than your pockets are deep.

    You might expect that in an AAA (senior) team, things will be more structured, but the opposite is the case (just ask anyone here, they will "tell you stories"). Far more things will go wrong, far more things will happen than you can handle and resolve, but that's when the advantage of an AAA team comes into play: on average, you can expect things will turn out fine. If you hire 100 people, you won't be able to avoid bad apples and Wallys, but the average dev will be someone who does his work and does it just fine; their efficiency might be 90%. On average, you will get some 80% game. On average, you will get your money back. In bad cases, it will be an Anthem or a Homefront 2. In good cases, you might get a Bioshock, Crysis or Gears.

    The key is: 1. you hire many, many devs. 2. you don't really care about keeping it minimal, because you want enough people to keep the average at average. 3. You might want to add some experienced leads to the pool, but you'd rather hire 3 random non-senior/non-junior guys than one really skilled one.

    IF you deviate from this idea by trying to keep it minimal, you completely shift your risk. Imagine you have 10 guys and 2 don't get along at all. Your project might be busted. You might get them to cooperate, but then it might just blow up, and it's insanely difficult to replace one key member of a team of 10. (Yes, sure, you could expect professionals to be professional, but many in the industry are ego driven, and chances are high you will end up with one of these in your team.)

    Also: don't expect that a new AAA team means you get an AAA game; it might be just AA. And even AAA games don't lead to AAA sales. On the other hand, Minecraft might not even be an A game.
  4. Getting "fixed point" into an engine, to get 100% determinism on calculations, is actually more feasible than fixing floats. Ages ago I was working on adding multiplayer, post-launch of the game, because some marketing person had promised it. It was tons of work, and we spent a lot of time tracking de-syncs. We tried for a long time to get the FPU code somehow synchronized, we really pulled tricks out of our hat to make it happen, but across AMD/Intel, and even across different Intel CPU generations, results diverged. Hence, one night I just went into rage mode and replaced all "float" with a class wrapping a fixed-point implementation. (Important: make the ctor explicit, and make the extraction of floats explicit too, e.g. to pass to the renderer.) That made nearly all of the game work in sync; we had just a few rand issues that were easy to fix.

     HOWEVER, fixed point turned out to run into tons of underflow and overflow. First we tried to tweak the range, then we went for several ranges, depending on where the code is, but that was still not 100% stable. The 3rd step was to go for 64 bit (back then, before 64-bit CPUs, 2x 32 bit), but that just reduced the issues; you had to play way longer to find new ones. Finally, my colleague suggested implementing a software float, "like Pascal does". That was the silver bullet. Nowadays there are many software float implementations, and I think some compilers (GCC?) even have a flag for it. IF you ever decide to go that route, skip all the pitfalls we ran into.
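
     To make the "explicit ctor + explicit extraction" point concrete, here is a minimal sketch of such a drop-in class (the 16.16 format and the name Fixed are arbitrary choices of mine, not the actual class from that project; it ignores the range problems discussed above):

        #include <cstdint>

        // Minimal 16.16 fixed-point type: all math is integer math,
        // hence bit-identical across CPUs.
        class Fixed
        {
            int32_t raw = 0; // stored value * 65536

            static Fixed FromRaw( int32_t r ) { Fixed f; f.raw = r; return f; }

        public:
            Fixed( ) = default;
            explicit Fixed( float f ) : raw( int32_t( f * 65536.0f ) ) {} // explicit: no silent float creep
            float ToFloat( ) const { return raw / 65536.0f; }             // explicit extraction, e.g. for the renderer

            Fixed operator+( Fixed o ) const { return FromRaw( raw + o.raw ); }
            Fixed operator-( Fixed o ) const { return FromRaw( raw - o.raw ); }
            // widen to 64 bit for the intermediate; this is exactly where the
            // overflow trouble described above starts once values grow
            Fixed operator*( Fixed o ) const { return FromRaw( int32_t( ( int64_t( raw ) * o.raw ) >> 16 ) ); }
            Fixed operator/( Fixed o ) const { return FromRaw( int32_t( ( int64_t( raw ) << 16 ) / o.raw ) ); }
            bool  operator<( Fixed o ) const { return raw < o.raw; }
        };

     With the explicit constructor, every leftover expression that mixes float and Fixed becomes a compile error, which is how you find all the places that still need converting.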
  5. ProfL

    OpenGL sprite collision

    what makes you think it's a scoping problem?
  6. Thanks for your feedback, and I think it's good you've tried it yourself rather than blindly believing the guys on the internet. But like someone said, your GPU results might differ. I suggest you try that too; the simplest way would be https://www.shadertoy.com/ . Just click on "New", modify the color output to run a few more pow calls, and increase your loop count until the fps drops (try full screen for the best slowdown). You might also be surprised how many times you can run pow without having to worry about that instruction.
  7. In my case I get the same timings from both code paths after my change. No, you don't need to add random numbers; the array already obfuscates the access for the compiler, at least with default settings, but my mod uses some rand to add noise. The results are still the same for 4 and 400.
  8. Yes, but you've only proven that a higher exponent overflows (more quickly) and that this causes a slowdown.

        #include <cstdio>   // printf
        #include <cstdlib>  // rand, RAND_MAX
        #include <cmath>    // pow
        #include <ctime>    // clock

        #define NCOUNT 10000000

        float vals[ NCOUNT ];

        int PowTest( )
        {
            using namespace std;

            for ( int i = 0; i < NCOUNT; i++ )
                vals[ i ] = 1.f + rand( ) * 0.00001f / RAND_MAX;

            float vall0 = 0.0f;
            clock_t begin0 = clock( );
            for ( int i = 0; i < NCOUNT; i++ )
                vall0 += pow( vals[ i ], 4.0f );
            clock_t end0 = clock( );

            clock_t begin1 = clock( );
            float vall1 = 0.0f;
            for ( int i = 0; i < NCOUNT; i++ )
                vall1 += pow( vals[ i ], 400.0f );
            clock_t end1 = clock( );

            double elapsed_secs0 = double( end0 - begin0 ) / CLOCKS_PER_SEC;
            double elapsed_secs1 = double( end1 - begin1 ) / CLOCKS_PER_SEC;
            printf( "%f %f %f %f\n", elapsed_secs0, elapsed_secs1, vall0, vall1 );
            return 0;
        }

        int main( ) { return PowTest( ); } // added so the test runs standalone

     Sorry, my lazy mod to get it to compile.
  9. The reason you get different results is that your vall overflows (becomes inf), which is a special number for the CPU that is handled in a fallback mode, more slowly. Instead of initializing vals to random values, initialize them to 1.f or 1.00001f (in case you worry that affects the pow function). After that change, and accumulating into separate vall4 and vall400, I get the same time results.
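
     If you want to check whether that is what happens in your run, a quick sketch (std::isinf is from <cmath>; powf overflowing a float yields inf, since e.g. 2^400 doesn't fit into a float's ~3.4e38 range):

        #include <cstdio>
        #include <cmath>

        int main( )
        {
            float x = powf( 2.0f, 400.0f ); // 2^400 > FLT_MAX, so x is inf
            if ( std::isinf( x ) )
                printf( "overflowed to inf; everything accumulated from here stays inf\n" );
            return 0;
        }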
  10. 99% of the time, an artificial benchmark like this benchmarks the capability of its creator, not of what it is supposed to test. Could you share your test code, your compiler name and your compile settings?
  11. If you use pow to calculate a highlight in a shader, you will most likely pass the exponent in as a shader constant; that's a value the compiler won't see, hence it will use the generic exp+log path. In that case it will run at the same speed, no matter what that light-power constant's value is.
  12. That depends a lot on the hardware and the compiler. In general, both should cost the same if there is hardware support. If the compiler realizes you want x *= x; x *= x;, it will most likely be quicker. The way pow usually works is: exp( exponent * log( base ) ).
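
     That identity is easy to try out in plain C++ (my_pow is a made-up name; real library implementations additionally handle special cases like negative bases, zero and inf, which this sketch ignores):

        #include <cstdio>
        #include <cmath>

        // pow via the exp/log identity; only valid for base > 0.
        float my_pow( float base, float exponent )
        {
            return expf( exponent * logf( base ) );
        }

        int main( )
        {
            // both print ~5.0625
            printf( "%f %f\n", my_pow( 1.5f, 4.0f ), powf( 1.5f, 4.0f ) );
            return 0;
        }

     This is also why a constant exponent only helps if the compiler can see it: pow( x, 4.0f ) can be turned into two multiplies, while the generic path above costs a log, a multiply and an exp regardless of the exponent.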
  13. Well, if the 10-times-faster algorithm gets implemented before profiling and it turns out to make no difference... you should have used a profiler first. Just replying. And yes, roughly, his MapRectangleShapeBuffer function is problematic, but why exactly? With a good profiler, you can get information on whether your L1 or L2 cache runs into misses, whether you are limited by memory writes (e.g. due to PCIe if you write to persistent buffers), whether you might accidentally read from unmapped memory, or whether you get write cache misses. And there are many more possible reasons. It's better to have a tool report the issue and fix it than to stumble blindly through the garden of 100 obvious reasons until you fix the right one.
  14. It's good that you've started your optimization by profiling; there are so many who just guess. But I'd suggest you use a better profiler: try CodeXL or VTune. Both will tell you the cost of every module and every assembly line, and both offer sample-based or hierarchical profiling, hence you can find expensive lines (e.g. some cache misses that might cause all the trouble) or get a global overview.
  15. ProfL

    I can't find an artist, why?

    They look for collaborators; there is a chance to find someone who'd like to work on exactly the same kind of game, and the "idea stage" is the most interesting one for these people. (Most of these collaborations end after time-wasting brainstorming talks.) You look for an unpaid laborer.

    Imagine someone offered you work on their cooking simulator game. All the art and the game design document are settled. It will be done in Unity3D. You only need to program it for half a year, for rev share, of course. Does that sound better to you than working on your own game?

    The decision hierarchy/chain would be:
    1. They would rather work on their own art.
    2. They would work on some more polished/existing game.
    3. They would work on your game.

    Hence, until your game offers really something beyond other games, it will be hard for you to find someone. Maybe you already have something beyond that? Then sell/advertise it like it was Minecraft 2.